Article

Damage Detection and Localization under Variable Environmental Conditions Using Compressed and Reconstructed Bayesian Virtual Sensor Data

Department of Automotive and Mechanical Engineering, Metropolia University of Applied Sciences, Leiritie 1, 01600 Vantaa, Finland
Sensors 2022, 22(1), 306; https://doi.org/10.3390/s22010306
Submission received: 8 November 2021 / Revised: 8 December 2021 / Accepted: 28 December 2021 / Published: 31 December 2021
(This article belongs to the Special Issue Model-Free Structural Health Monitoring Approaches)

Abstract

Structural health monitoring (SHM) with a dense sensor network and repeated vibration measurements produces a large amount of data that must be stored. If the sensor network is redundant, data compression is possible by storing the signals of selected Bayesian virtual sensors only, from which the omitted signals can be reconstructed with higher accuracy than the actual measurements. The selection of the virtual sensors for storage is done individually for each measurement based on the reconstruction accuracy. Data compression and reconstruction for SHM is the main novelty of this paper. The stored and reconstructed signals are used for damage detection and localization in the time domain using spatial or spatiotemporal correlation. Whitening transformation is applied to the training data to take the environmental or operational influences into account. The first principal component of the residuals is used to localize damage and also to design the extreme value statistics control chart for damage detection. The proposed method was studied with a numerical model of a frame structure with a dense accelerometer or strain sensor network. Only five acceleration or three strain signals out of the total of 59 signals were stored. The stored and reconstructed data outperformed the raw measurement data in damage detection and localization.

1. Introduction

Structural health monitoring (SHM) is based on frequent vibration measurements using a sensor network with a large number of sensors. As a result, the amount of data is tremendous. An increasing number of sensors is anticipated in future applications—for example, sensing skins [1]. The advantage of a dense sensor network is more reliable damage identification. A disadvantage is higher hardware and data management costs. Historical data must be stored for several years. These training data are used to capture the dynamic characteristics of the undamaged structure under variable environmental or operational conditions. The future measurements will be compared with the training data in order to detect and localize damage.
Data storage may be costly or the amount of data may exceed the storage capacity. Therefore, data reduction is necessary. One method for data reduction is to extract and store only selected features from the time records. Features are dynamic characteristics of the structure, which are expected to be sensitive to damage. Such features are, for example, natural frequencies and mode shapes, which can be extracted from the measurement data using system identification techniques [2]. Significant data compression is possible, because each measurement yields a single data point. However, the original time histories will be lost and cannot be recovered. Therefore, it may be necessary to save everything, resulting in terabytes of data every day [3].
The most common dimensionality reduction method is principal component analysis (PCA) [4], a linear method that maximizes the variance in the data by projecting the multidimensional data onto the directions, the principal components (PCs), that account for the largest variability. If only a few PCs are retained, some information is lost in the reconstruction. PCA has been applied, e.g., to image compression [5]. A disadvantage of PCA is that the reconstruction error is not available. Therefore, it may be difficult to select the number of PCs for storage. If the chosen number of PCs exceeds the optimum, the reconstruction error increases, because the PCs start to model the noise [6].
It is also possible to select a limited number of sensor signals that are permanently stored so that the omitted signals can be accurately reconstructed [6]. The selection is related to optimal sensor placement (OSP), which has been investigated in many applications [7]. Although the objective in those applications is to place a limited number of physical sensors in optimal positions, the same approach can be used to select a subset of signals for storage. The selection in this paper is based on the maximum accuracy of the reconstructed signals.
Several review papers and comparisons of different OSP algorithms exist [8,9,10,11]. They present the most commonly applied algorithms and criteria. Sensor placement is a discrete optimization problem for which genetic algorithms have been proposed [12,13,14]. Alternatively, a computationally efficient and widely used approach is to start with a large set of candidate sensor locations and remove one sensor in each round based on the selected cost function until the stopping criterion is met. This backward sequential sensor placement (BSSP) algorithm has been used in many studies [15,16,17]. Another iterative method is to add one sensor at a time to the sensor network until the stopping criterion is met. This algorithm is called the forward sequential sensor placement (FSSP) algorithm [12,17]. BSSP is used in this study, because experiments have shown its better performance over FSSP.
Data compression results in a decrease in accuracy of the reconstructed signals. Since the signal-to-noise ratio (SNR) is important for damage detection [18], data compression may also decrease the performance of the SHM system. The main novelty of the present paper is to introduce a data compression technique so that the data compression ratio is large and the reconstructed data are so accurate that they can be applied to SHM. With empirical Bayesian virtual sensing, the measurement error can be decreased before data compression so that the stored and reconstructed signals are more accurate than the original measurements. Proof is given that it is more beneficial to store selected virtual sensor signals than the corresponding physical sensor signals [6].
The objective of the present study is to detect and localize damage using the stored and reconstructed virtual sensor data. Damage detection is based on changes in the dynamic characteristics of the structure. Records of structural motion, for example, acceleration or strain, are measured simultaneously at selected degrees of freedom. First, a training data set is acquired from the undamaged structure under different environmental or operational conditions. These data are used to build a statistical data model of the undamaged structure. Next, the structure is monitored with repeated measurements in order to have an early warning of structural failure. The new data are compared to the training data using novelty detection techniques, and a statistically significant change in the dynamic characteristics is an indication of damage. Particular attention is needed to take different environmental or operational conditions into account, because they can have a considerable influence on the very same dynamic characteristics. Several techniques have been proposed to eliminate the environmental or operational influences on the data, even without measuring the underlying quantities; see, e.g., [19,20] and the references therein. Another novelty of this paper is the application of whitening transformation [21] to the training data in order to consider different environmental or operational influences in the data. Although whitening is a well-known transformation, the author is not aware of its wide application to this end. Damage localization can also be attempted if the changes in the data can be associated with a particular sensor.
In this paper, damage detection and localization are performed in the time domain. Time-domain and feature-domain methods for damage detection were compared [22], and it was found that the selected features were more sensitive to damage than the physical or virtual sensor data. This was probably due to the fact that the features had a higher SNR than the sensor data. However, damage detection in the time domain has certain advantages compared to that in the feature domain. For example, the data dimensionality is often lower and the number of data points larger, which is advantageous in statistical analysis. The algorithm can also be fully automated, as system identification is not necessary. In addition, each time history can be associated with a single sensor to localize damage. Therefore, if measurement error can be decreased, the performance of time-domain methods is expected to improve.
It should be noted that there are several techniques for damage detection and localization using vibration measurements. In addition to time domain or feature domain methods, the techniques can be categorized as physics-based or data-based approaches. The present paper is restricted to data-based methods so that no numerical model, for example, a finite element model, is needed.
It is by no means claimed that the proposed method is optimal; a single algorithm was selected to study whether data compression and reconstruction can be successfully applied to detect and localize damage in the time domain. In fact, several techniques for eliminating environmental or operational influences were compared, resulting in performance very similar to that of the whitening transformation. A number of statistical novelty detection methods were also tested, and the one with the most robust behavior was selected. A comparison of different techniques to detect and localize damage is outside the scope of this study. The user can choose any algorithm once the reconstructed data are available. Nevertheless, it is believed that the results presented in this paper are representative of many time-domain methods. Feature-domain methods are also beyond the scope of this paper, because many feature-extraction techniques already include noise reduction due to averaging and may not benefit from the proposed data compression and reconstruction. In fact, the reconstructed signals become correlated and are no longer independent, which may cause challenges in system identification.
The paper is organized as follows. Virtual sensing using Bayesian estimation is outlined in Section 2. Optimal sensor placement for data storage and reconstruction is also discussed. An algorithm for damage detection and localization follows in Section 3. In Section 4, the proposed method is studied with numerical simulations of ambient vibration measurements under variable environmental conditions. Concluding remarks are given in Section 5.

2. Data Compression and Reconstruction Using Bayesian Virtual Sensing

The objective is to store only a small percentage of the dense sensor network data so that the full data can be accurately reconstructed and used for damage detection. Different environmental conditions between measurements must be considered. It is assumed that the environmental conditions remain nearly constant during a single measurement. This can be justified because the measurement period is typically much shorter than the variations in the environment—for example, outdoor temperature.
The dynamic characteristics of the structure can change between measurements due to environmental or operational variability as well as due to damage. It is important to retain those variations during data compression and reconstruction. Therefore, each measurement is processed independently in this stage. Different measurements are pooled only in the damage-detection stage to distinguish between those two aforementioned influences.
Figure 1 shows the flowchart of the whole process from vibration measurement to damage identification. Each vibration measurement acquired with a sensor network is processed as follows. First, Bayesian virtual sensors [23] are designed to reduce the measurement error. Then, a limited number of virtual sensor signals are selected for storage based on the optimal sensor placement algorithm (Figure 1b). The excluded signals can be reconstructed using the stored signals and a coefficient matrix. The stored and reconstructed signals are finally used for damage detection and localization. To this end, training data from the undamaged structure under different environmental or operational conditions are used.
This section discusses data compression and reconstruction, while the next section covers damage detection and localization. The main theoretical novelty of this paper is included in Section 2.2, Section 2.3 and Section 2.4.

2.1. Empirical Bayesian Virtual Sensing

The objective of empirical virtual sensing is to decrease the measurement error of all sensors. The derivation of Bayesian virtual sensors [20] is briefly repeated for completeness. Consider a sensor network measuring $p$ simultaneously sampled response variables $y = y(t) \in \mathbb{R}^p$ at time $t$. Each measured data point $y$ includes measurement error $w = w(t) \in \mathbb{R}^p$:
$$y = x + w \quad (1)$$
where $x = x(t) \in \mathbb{R}^p$ contains the exact values of the measured degrees of freedom. Equation (1) can be written in the following form at time $t$ [24]:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} I & 0 \\ I & I \end{bmatrix} \begin{bmatrix} x \\ w \end{bmatrix} \quad (2)$$
where I is the identity matrix of size p and 0 is the null matrix of size p × p. For simplicity but without loss of generality, assume zero-mean variables x and y. The partitioned covariance matrix is
$$\begin{bmatrix} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{bmatrix} = E\!\left(\begin{bmatrix} x \\ y \end{bmatrix}\begin{bmatrix} x^T & y^T \end{bmatrix}\right) = \begin{bmatrix} I & 0 \\ I & I \end{bmatrix} E\!\left(\begin{bmatrix} x \\ w \end{bmatrix}\begin{bmatrix} x^T & w^T \end{bmatrix}\right) \begin{bmatrix} I & 0 \\ I & I \end{bmatrix}^T = \begin{bmatrix} I & 0 \\ I & I \end{bmatrix}\begin{bmatrix} \Sigma_{xx} & 0 \\ 0 & \Sigma_{ww} \end{bmatrix}\begin{bmatrix} I & I \\ 0 & I \end{bmatrix} = \begin{bmatrix} \Sigma_{xx} & \Sigma_{xx} \\ \Sigma_{xx} & \Sigma_{xx}+\Sigma_{ww} \end{bmatrix} \quad (3)$$
where E( ) denotes the expectation operator and the measurement error w is assumed to be zero-mean Gaussian, independent of x, with a covariance matrix Σww.
A linear minimum mean-square error (MMSE) estimate for x|y (x given y) is obtained by minimizing the mean-square error (MSE) [24]. The expected value, or the conditional mean, of the predicted variable is
$$\hat{x} = E(x \mid y) = \Sigma_{xx}\left(\Sigma_{xx} + \Sigma_{ww}\right)^{-1} y = \Sigma_{xx}\Sigma_{yy}^{-1} y \quad (4)$$
and the covariance matrix of the estimation error is
$$\Sigma_{\mathrm{post}} = \mathrm{cov}(x \mid y) = \Sigma_{xx} - \Sigma_{xx}\left(\Sigma_{xx} + \Sigma_{ww}\right)^{-1}\Sigma_{xx} = \Sigma_{xx} - \Sigma_{xx}\Sigma_{yy}^{-1}\Sigma_{xx} \quad (5)$$
It was shown that the virtual sensors (Equation (4)) are more accurate than the actual measurements [23].
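As an illustration, the following minimal Python sketch (not part of the original study) evaluates Equations (4) and (5); the variable names and the empirical estimate of the exact-response covariance from the measurement covariance are illustrative assumptions.

```python
# Minimal sketch of Equations (4) and (5); illustrative, not the author's code.
import numpy as np

def bayesian_virtual_sensors(Y, Sigma_ww):
    """Y: (p, N) zero-mean measurements; Sigma_ww: (p, p) measurement noise covariance.
    Returns the virtual sensor signals and the posterior error covariance."""
    p, N = Y.shape
    Sigma_yy = Y @ Y.T / N                    # measurement data covariance
    Sigma_xx = Sigma_yy - Sigma_ww            # empirical estimate of the exact-response covariance
    gain = Sigma_xx @ np.linalg.inv(Sigma_yy)
    X_hat = gain @ Y                          # Equation (4): conditional mean
    Sigma_post = Sigma_xx - gain @ Sigma_xx   # Equation (5): error covariance
    return X_hat, Sigma_post
```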

2.2. Storing Physical Sensor Data

Let us assume that channels $v$ of the physical measurement are stored. The objective is to reconstruct the signals of the whole sensor network. For simplicity but without loss of generality, assume zero-mean variables. The stored signals are the actual measurements $y_v$. Then, $E(x \mid y_v)$ and $\mathrm{cov}(x \mid y_v)$ are estimated using the partitioned mixed-data covariance matrix
$$E\!\left(\begin{bmatrix} x \\ y_v \end{bmatrix}\begin{bmatrix} x^T & y_v^T \end{bmatrix}\right) = \begin{bmatrix} \Sigma_{xx} & \Sigma_{x x_v} \\ \Sigma_{x_v x} & \Sigma_{y_v y_v} \end{bmatrix} \quad (6)$$
because $E(x y_v^T) = E(x x_v^T)$. The conditional mean (the reconstructed data) and the error covariance matrix are respectively obtained using MMSE:
$$E(x \mid y_v) = \Sigma_{x x_v}\Sigma_{y_v y_v}^{-1} y_v = B y_v \quad (7)$$
and
$$\mathrm{cov}(x \mid y_v) = \Sigma_{xx} - \Sigma_{x x_v}\Sigma_{y_v y_v}^{-1}\Sigma_{x_v x} \quad (8)$$
where $B = \Sigma_{x x_v}\Sigma_{y_v y_v}^{-1}$ is the coefficient matrix that has to be stored along with the stored signals $y_v$.
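A corresponding sketch of Equations (7) and (8), reconstructing the full response from a stored subset of raw channels, is given below; the inputs and index list are illustrative assumptions.

```python
# Sketch of Equations (7) and (8): reconstruction from stored raw channels v (illustrative).
import numpy as np

def reconstruct_from_raw(Y_v, Sigma_xx, Sigma_ww, v):
    """Y_v: (len(v), N) stored raw signals; v: list of stored channel indices."""
    Sigma_yy = Sigma_xx + Sigma_ww
    B = Sigma_xx[:, v] @ np.linalg.inv(Sigma_yy[np.ix_(v, v)])  # coefficient matrix B
    X_rec = B @ Y_v                                             # Equation (7): reconstructed data
    cov_err = Sigma_xx - B @ Sigma_xx[v, :]                     # Equation (8): error covariance
    return X_rec, B, cov_err
```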

2.3. Storing Virtual Sensor Data

Once a limited number of virtual sensor signals are stored, the omitted signals must be reconstructed. In the following, formulas for the reconstruction and its estimation error are derived.
Let us assume that the signals of channels $v$ of the virtual sensors are stored, whereas the omitted signals of channels $u$ must be reconstructed. The error variances of the stored signals are the diagonal elements of $\Sigma_{\mathrm{post},vv}$, the submatrix of $\Sigma_{\mathrm{post}}$ (Equation (5)) consisting of rows $v$ and columns $v$ only. When reconstructing the omitted signals using the stored virtual sensor data $\hat{x}_v$, the conditional mean $E(x_u \mid \hat{x}_v)$ and covariance matrix $\mathrm{cov}(x_u \mid \hat{x}_v)$ must be derived. The Bayesian virtual sensors are not exact, but follow the error model
$$x = \hat{x} + e \quad (9)$$
where $e$ is the posterior error having a zero mean and covariance matrix $\Sigma_{\mathrm{post}}$ (Equation (5)). Thus,
$$E(x_u \mid \hat{x}_v) = E(\hat{x}_u \mid \hat{x}_v) = \Sigma_{\hat{x}_u \hat{x}_v}\Sigma_{\hat{x}_v \hat{x}_v}^{-1}\hat{x}_v = A\hat{x}_v \quad (10)$$
where $A = \Sigma_{\hat{x}_u \hat{x}_v}\Sigma_{\hat{x}_v \hat{x}_v}^{-1}$ is the coefficient matrix that has to be stored along with the stored signals $\hat{x}_v$. The data covariance matrix $\Sigma_{\hat{x}\hat{x}}$ is estimated using the full virtual sensor data. According to MMSE, the two terms on the right-hand side of Equation (9) are orthogonal [25]. Therefore, the covariances are related as
$$\mathrm{cov}(x_u \mid \hat{x}_v) = \mathrm{cov}(\hat{x}_u \mid \hat{x}_v) + \Sigma_{\mathrm{post},uu} = \Sigma_{\hat{x}_u \hat{x}_u} - \Sigma_{\hat{x}_u \hat{x}_v}\Sigma_{\hat{x}_v \hat{x}_v}^{-1}\Sigma_{\hat{x}_v \hat{x}_u} + \Sigma_{\mathrm{post},uu} \quad (11)$$
The diagonal elements of this matrix are the variances of the reconstruction errors. Notice that the reconstruction error is higher than the error of the virtual sensor, $\Sigma_{\mathrm{post},uu}$.
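The reconstruction of Equations (10) and (11) can be sketched as follows (illustrative code; Sigma_xhat denotes the virtual sensor data covariance and Sigma_post the posterior error covariance of Equation (5)).

```python
# Sketch of Equations (10) and (11): reconstructing omitted virtual sensors u from stored ones v.
import numpy as np

def reconstruct_from_virtual(Xhat_v, Sigma_xhat, Sigma_post, v, u):
    """Xhat_v: (len(v), N) stored virtual sensor signals; u, v: channel index lists."""
    A = Sigma_xhat[np.ix_(u, v)] @ np.linalg.inv(Sigma_xhat[np.ix_(v, v)])  # coefficient matrix A
    Xhat_u = A @ Xhat_v                                                     # Equation (10)
    cov_err = (Sigma_xhat[np.ix_(u, u)]
               - A @ Sigma_xhat[np.ix_(v, u)]
               + Sigma_post[np.ix_(u, u)])                                  # Equation (11)
    return Xhat_u, A, cov_err
```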

2.4. Comparison of the Two Storage Strategies

The question may arise as to whether the actual measurements $y_v$ or the Bayesian estimates $\hat{x}_v$ (Equation (4)) of the selected channels $v$ should be stored. The choice depends on the accuracy of the reconstruction. Intuitively, the more accurate signals should be used to reconstruct the signals of the sensors $u$. It is now proved that using $\hat{x}_v$ instead of $y_v$ results in a smaller reconstruction error. The reconstructed signals are either $E(x_u \mid y_v)$ or $E(x_u \mid \hat{x}_v)$ in Equations (7) and (10), respectively. The corresponding error covariance matrices are given in Equations (8) and (11), respectively. In order to compare the error variances, some manipulation of Equation (11) is needed. Using Equation (4),
$$\Sigma_{\hat{x}\hat{x}} = E(\hat{x}\hat{x}^T) = \Sigma_{xx}\Sigma_{yy}^{-1}E(yy^T)\Sigma_{yy}^{-1}\Sigma_{xx} = \Sigma_{xx}\Sigma_{yy}^{-1}\Sigma_{xx} \quad (12)$$
Therefore,
$$\Sigma_{\hat{x}_u \hat{x}_u} = \Sigma_{x_u x}\Sigma_{yy}^{-1}\Sigma_{x x_u} \quad (13)$$
Substituting Equations (5) and (13) into Equation (11) results in
$$\mathrm{cov}(x_u \mid \hat{x}_v) = \mathrm{cov}(\hat{x}_u \mid \hat{x}_v) + \Sigma_{\mathrm{post},uu} = \Sigma_{x_u x}\Sigma_{yy}^{-1}\Sigma_{x x_u} - \Sigma_{\hat{x}_u \hat{x}_v}\Sigma_{\hat{x}_v \hat{x}_v}^{-1}\Sigma_{\hat{x}_v \hat{x}_u} + \Sigma_{x_u x_u} - \Sigma_{x_u x}\Sigma_{yy}^{-1}\Sigma_{x x_u} = \Sigma_{x_u x_u} - \Sigma_{\hat{x}_u \hat{x}_v}\Sigma_{\hat{x}_v \hat{x}_v}^{-1}\Sigma_{\hat{x}_v \hat{x}_u} \quad (14)$$
This is compared with the error covariance $\mathrm{cov}(x_u \mid y_v)$. Consider any single sensor in the set $u$; the covariance matrix then reduces to a variance. The difference is
$$\mathrm{cov}(x_u \mid y_v) - \mathrm{cov}(x_u \mid \hat{x}_v) = \Sigma_{x_u x_u} - \Sigma_{x_u x_v}\Sigma_{y_v y_v}^{-1}\Sigma_{x_v x_u} - \Sigma_{x_u x_u} + \Sigma_{\hat{x}_u \hat{x}_v}\Sigma_{\hat{x}_v \hat{x}_v}^{-1}\Sigma_{\hat{x}_v \hat{x}_u} = \Sigma_{x_u x_v}\left(\Sigma_{\hat{x}_v \hat{x}_v}^{-1} - \Sigma_{y_v y_v}^{-1}\right)\Sigma_{x_v x_u} > \Sigma_{x_u x_v}\left(\Sigma_{y_v y_v}^{-1} - \Sigma_{y_v y_v}^{-1}\right)\Sigma_{x_v x_u} = 0 \quad (15)$$
The following facts were used in the derivation: (1) The Bayesian virtual sensors are more accurate than the physical sensors, (2) all the covariance matrices are positive definite, and (3) the cross-correlations are equal:
$$\Sigma_{x_u x_v} = \Sigma_{\hat{x}_u \hat{x}_v} \quad (16)$$
The proof of Equation (16) is derived using Equation (9):
$$\Sigma_{x_u x_v} = E(x_u x_v^T) = E\!\left[(\hat{x}_u + e_u)(\hat{x}_v + e_v)^T\right] = E(\hat{x}_u \hat{x}_v^T) + E(\hat{x}_u e_v^T) + E(e_u \hat{x}_v^T) + E(e_u e_v^T) = \Sigma_{\hat{x}_u \hat{x}_v} \quad (17)$$
because the last three expectation terms are zero.
From Equation (15), it can be concluded that storing the Bayesian virtual sensors $\hat{x}_v$ instead of the corresponding raw measurements $y_v$ results in a smaller reconstruction error; the virtual sensors should therefore be preferred for storage. This will also be shown in the numerical experiments.
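The inequality can also be checked numerically. The following minimal example uses illustrative two-sensor covariances (not taken from the paper) and compares the reconstruction error variances of Equations (8) and (14).

```python
# Numerical check of Equation (15) with illustrative two-sensor covariances.
import numpy as np

Sigma_xx = np.array([[1.0, 0.5], [0.5, 1.0]])   # exact-response covariance (illustrative)
Sigma_ww = 0.5 * np.eye(2)                      # measurement noise covariance (illustrative)
Sigma_yy = Sigma_xx + Sigma_ww
Sigma_xhat = Sigma_xx @ np.linalg.inv(Sigma_yy) @ Sigma_xx   # Equation (12)

# Error variance of sensor 1 reconstructed from the stored raw channel 0, Equation (8)
var_raw = Sigma_xx[1, 1] - Sigma_xx[1, 0] ** 2 / Sigma_yy[0, 0]
# Error variance of sensor 1 reconstructed from the stored virtual channel 0, Equation (14)
var_virtual = Sigma_xx[1, 1] - Sigma_xhat[1, 0] ** 2 / Sigma_xhat[0, 0]

print(var_raw, var_virtual)   # approx. 0.83 vs. 0.72: the virtual sensor gives the smaller error
```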

2.5. Optimal Sensor Placement

The stored signals are selected using an optimal sensor placement algorithm (Figure 1b). It is an iterative procedure starting with an initial large sensor network including all measured degrees of freedom (DOF). Each virtual sensor in turn is temporarily removed, and the error variances of all reconstructed signals are computed; these are the diagonal terms of the error covariance matrix (Equation (11)). The cost function for each of these reduced sensor networks is evaluated. The minimum cost is found, and the reduced sensor network corresponding to this minimum becomes the new candidate set for the next round. In other words, the removed sensor corresponding to this minimum cost is permanently discarded. The process is repeated until the desired number of sensors or the allowed error limit is reached. Finally, the data from the remaining virtual sensors are stored together with matrix A in Equation (10) for reconstruction of the discarded sensors.
The objective function f used in this study is
$$f_{k,i} = \left(\sigma_{cr,i} - \sigma_i\right)\big|_{\text{sensor } k \text{ removed}} \quad (18)$$
where $\sigma_{cr,i}$ is the critical reconstruction error of sensor $i$ defined by the user, and $\sigma_i$ is the current reconstruction error of sensor $i$. Notice that each sensor can be given a different threshold. The cost function $R$ is the negative objective function:
$$R_{k,i} = \left(\sigma_i - \sigma_{cr,i}\right)\big|_{\text{sensor } k \text{ removed}} \quad (19)$$
The decision rule is the minimax criterion—that is, sensor k that minimizes the worst-case loss can be permanently removed:
$$\underset{k}{\operatorname{argmin}}\ \max_i R_{k,i} \quad (20)$$
or using the objective function f,
$$\underset{k}{\operatorname{argmax}}\ \min_i f_{k,i} \quad (21)$$
As an illustrative example, consider a sensor network with 10 sensors on a structure. Assume that the OSP algorithm has proceeded to the point where sensors 2, 4, and 7 are stored, whereas sensors 1, 3, 5, 6, 8, 9, and 10 must be reconstructed. The next round of the BSSP algorithm investigates whether one of sensors 2, 4, or 7 can be removed. To this end, sensor k = 2 is first removed, and all sensors except 4 and 7 are reconstructed. The estimated standard deviations of all sensors are plotted in Figure 2a. The errors of sensors 4 and 7 are posterior variances (Equation (5)), whereas the other errors are reconstruction errors (Equation (11)). The minimum distance from the threshold is found for sensor i = 8. The same procedure is performed by removing sensor k = 4 (Figure 2b) and sensor k = 7 (Figure 2c) and recording the minimum distances shown in the figures, from which the maximum is found (Figure 2a). The decision rule then says that sensor 2 can be permanently removed. By storing only sensors 4 and 7, the remaining signals can be reconstructed with tolerable error. The removal continues until all trials result in a similar case as in Figure 2c, where the error threshold is exceeded. In that case no more sensors can be removed and the procedure is terminated.
Notice that the threshold cannot be given an arbitrarily low value; it must be greater than the posterior variance for all sensors. This is because the reconstruction error is always greater than the posterior error.
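A schematic sketch of the BSSP loop with the minimax rule of Equations (18)–(21) is given below. The function reconstruction_errors() is a hypothetical helper that evaluates Equations (5) and (11) for a candidate set of stored sensors; all names are illustrative.

```python
# Schematic BSSP sketch with the minimax decision rule (illustrative, not the author's code).
import numpy as np

def bssp(reconstruction_errors, n_sensors, sigma_cr):
    """reconstruction_errors(stored): returns the error std of every sensor for a candidate set.
    sigma_cr: allowed reconstruction error (std) per sensor."""
    stored = list(range(n_sensors))
    while len(stored) > 1:
        best_f, best_k = -np.inf, None
        for k in stored:                                 # trial removal of sensor k
            sigma = reconstruction_errors([s for s in stored if s != k])
            f = np.min(sigma_cr - sigma)                 # objective of Equations (18) and (21)
            if f > best_f:
                best_f, best_k = f, k
        if best_f < 0:                                   # every trial violates the threshold
            break
        stored.remove(best_k)                            # permanently remove the best candidate
    return stored
```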

3. Damage Detection and Localization

Processing a single measurement yields both data compression and noise reduction. The next step is to use the full data (stored and reconstructed signals) of all measurements in damage detection. The flowchart for damage detection and localization is plotted in Figure 3. Notice that one possible algorithm is only introduced, but the user is free to apply any other method once the stored and reconstructed data are available.
The stored and reconstructed data are used for damage detection in the time domain. First, the mean vector and the covariance matrix are estimated using training data from the undamaged structure under different environmental or operational conditions. Whitening transformation is applied to the training data [26]. This transformation is then fixed and applied to the test data. The transformed data are subjected to principal component analysis (PCA). By retaining only the first principal component, the data dimensionality is reduced to one. An extreme value statistics (EVS) control chart is then designed for the first PC scores with appropriate control limits and subgroup size [20,27,28,29]. In this paper, a false-alarm probability of 0.001 was used.
Damage location is assumed to correspond to the direction of the first principal component of the residuals. The largest projection of the first PC on the sensor coordinates reveals the sensor closest to damage.
It is essential to model the data of each measurement independently for compression and reconstruction so that the environmental, operational, or damage effects are retained during this first phase. Elimination of the environmental or operational influences is performed in the second phase, in which several measurements are pooled to build a data model of the undamaged structure under different environmental or operational conditions. Novelty detection is then applied to the test data using the data model of the second phase. Each step is discussed in more detail in the following.

3.1. Spatial and Spatiotemporal Correlation

Covariance-based methods in vibration-based structural health monitoring are effective and quite common. The covariance matrix is estimated as follows. Simultaneous time series of the sensor network are formed by combining the stored and reconstructed signals. Training data are formed by pooling several measurements from the undamaged structure. If the process can be assumed stationary with zero mean, temporal correlation can also be utilized [30]. The time-shifted covariance matrix estimate, with a time shift i, is computed by
$$\hat{R}_i = \frac{1}{N-i}\sum_{k=1}^{N-i} x_k x_{k+i}^T \quad (22)$$
where $x_k = x(k\Delta t)$ is the $k$th sample at time instant $k\Delta t$, $\Delta t$ is the sampling period, and $N$ is the number of samples. If $m$ is the model order, the covariance matrix is
$$R = \begin{bmatrix} R_0 & R_1 & \cdots & R_m \\ R_1^T & R_0 & \cdots & R_{m-1} \\ \vdots & \vdots & \ddots & \vdots \\ R_m^T & R_{m-1}^T & \cdots & R_0 \end{bmatrix} \quad (23)$$
If only spatial correlation is studied, the correlation matrix is simply $R_0$. Spatial correlation is related to mode shapes, whereas temporal correlation also takes the frequency information into account. The covariance matrix is estimated using the training data.
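The covariance estimates of Equations (22) and (23) can be sketched as follows (illustrative code; X is a (p, N) matrix of pooled training signals, and the recursive estimator mentioned later in Section 4.3 is not shown).

```python
# Sketch of Equations (22) and (23): spatial and spatiotemporal covariance estimates.
import numpy as np

def shifted_cov(X, i):
    """Time-shifted covariance estimate R_i of Equation (22); X: (p, N) zero-mean data."""
    N = X.shape[1]
    return X[:, :N - i] @ X[:, i:].T / (N - i)

def block_cov(X, m):
    """Block covariance matrix of Equation (23) for model order m (m = 0 gives R_0 only)."""
    R = [shifted_cov(X, i) for i in range(m + 1)]
    rows = [np.hstack([R[b - a] if b >= a else R[a - b].T for b in range(m + 1)])
            for a in range(m + 1)]
    return np.vstack(rows)
```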

3.2. Data Normalization Using Whitening Transformation

Environmental or operational variability often affects the dynamic characteristics of the structure, which can mask the effects of damage. Fortunately, in a multivariate case, the variables are often highly correlated, which affects the covariance structure of the training data (see an example in Figure 4a). Once damage occurs, the covariance structure also changes. Consequently, the new data points are assumed to stand out like the two isolated data points in Figure 4.
Whitening transformation is applied to the training data to consider the environmental or operational influences without measuring the underlying quantities. The objective of whitening or “sphering” is to linearly transform the data vector into another vector such that the elements of the new vector are uncorrelated and have unit variances. The whitening transformation therefore results in a unit data covariance matrix. For example, let us have a two-dimensional variable with a multivariate Gaussian distribution (Figure 4a). After whitening, the distribution looks like a hypersphere (Figure 4b). The transformation is, however, not unique, as the axes in Figure 4b can be rotated, resulting in another possible transformation (Figure 4c). Two isolated data points, not included in the training data set, are highlighted to distinguish between these two different whitening transformations. The choice of transformation is important in damage localization, discussed in Section 3.7.
The symmetric data covariance matrix of the training data is first decomposed using spectral decomposition:
$$R = E(xx^T) = UDU^T \quad (24)$$
where D is a diagonal matrix with eigenvalues, and U is an orthogonal matrix:
$$U^T U = UU^T = I \quad (25)$$
where I is the identity matrix.
Whitening is a linear transformation from the original variables x into z:
$$z = W_1 x \quad (26)$$
in which the whitening matrix W1 is
$$W_1 = D^{-1/2}U^T \quad (27)$$
To show that Equation (26) is a whitening transformation, the covariance matrix of z should be the identity matrix:
$$E(zz^T) = W_1 E(xx^T) W_1^T = D^{-1/2}U^T U D U^T U D^{-1/2} = I \quad (28)$$
Whitening does not result in dimensionality reduction, and further processing is typically needed.
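A minimal sketch of the transformation in Equations (24)–(28) is given below; the training covariance R is assumed positive definite, and the variable names are illustrative.

```python
# Sketch of the whitening transformation, Equations (24)-(28) (illustrative).
import numpy as np

def whitening_matrix(R):
    """W1 = D^(-1/2) U^T of Equation (27), from the spectral decomposition R = U D U^T."""
    d, U = np.linalg.eigh(R)            # Equation (24); R assumed positive definite
    return np.diag(d ** -0.5) @ U.T

# Typical use: estimate R from the training data, then apply the same fixed W1 to the test data.
# R = X_train @ X_train.T / X_train.shape[1]     # X_train: (p, N) zero-mean training signals
# W1 = whitening_matrix(R)
# Z_train, Z_test = W1 @ X_train, W1 @ X_test
```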
Notice how the whitening transformation differs from subspace methods, which project the data onto the signal space and the noise space. As an example, Figure 4a illustrates two points, of which the red square lies in the signal space and the green circle in the noise space. Subspace methods can only detect damage in the noise space, whereas a data point in the signal space remains undetected. The whitening transformation, by contrast, does not divide the data into two subspaces, which can be advantageous in some cases. In other cases, alarms for data points lying in the signal space may not be desirable, e.g., when an extreme temperature causes the data to fall outside the training data. In the present example, the user has to decide whether the red square in Figure 4a is an indication of damage or merely an extreme environmental or operational condition.

3.3. Residual Generation

The variable z in Equation (26) can serve as a residual without further processing. It has a unit covariance matrix in the undamaged case. The correlation structure is expected to change due to damage because of changes in the mode shapes (spatial correlation) or additionally in the natural frequencies (spatiotemporal correlation). If the structure is damaged, the whitening transformation (Equation (26)) does not necessarily transform the new data points inside the hypersphere. Instead, it is expected that the transformed data points are located further away from the center of the hypersphere (see Figure 4). If the distance from the center is statistically significant, it is an indication of damage.

3.4. Principal Component Analysis

Whitening transformation of the training data results in data points that fall inside a hypersphere with a unit covariance matrix. Once damage occurs, it is expected that the new data points are located outside this hypersphere after the same transformation. It is also probable that the new data points are located in a certain direction from the center of the hypersphere. This direction can be found using principal component analysis (PCA) [4]. Therefore, PCA is applied to all of the data, both training and test data. Only the first principal component (PC) is retained for dimensionality reduction, which is expected to avoid the curse of dimensionality. The scores of the first PC are used for damage detection and the first PC itself for damage localization.
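The PCA step can be sketched as follows (illustrative code; Z denotes the pooled whitened residuals).

```python
# Sketch of the PCA step: first principal component of the whitened residuals (illustrative).
import numpy as np

def first_pc(Z):
    """Z: (p, N) whitened residuals. Returns the first PC and the one-dimensional scores."""
    C = Z @ Z.T / Z.shape[1]            # covariance of the residuals
    eigval, eigvec = np.linalg.eigh(C)  # eigenvalues in ascending order
    pc1 = eigvec[:, -1]                 # direction of the largest variance
    scores = pc1 @ Z                    # first PC scores used for damage detection
    return pc1, scores
```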

3.5. Extreme Value Statistics

After principal component analysis, the dimensionality of the residuals is reduced to one. The distribution of this one-dimensional feature is not known, or more specifically, a normal distribution may not be assumed. To this end, an extreme value distribution can be utilized [27]. The maxima or minima of a large set of independent, identically distributed random variables can be modelled with a generalized extreme value (GEV) distribution [28]. The data are divided into subgroups of n data points. The minimum and maximum from each subgroup are recorded, resulting in data called block minima and block maxima. The parameters of the two GEV distributions are identified for the block minima and block maxima of the data from the undamaged structure [20].

3.6. Control Chart

Control charts, designed for the extreme values, are used for novelty detection with the control limits computed according to the in-control data from the undamaged structure [29]. The control limits are computed from the GEV distributions by choosing the probability of exceedance (here 0.001). If the plotted new data points exceed the control limits, it is a possible indication of damage.
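Sections 3.5 and 3.6 can be sketched together as follows, assuming SciPy's generalized extreme value distribution; the subgroup size, the variable names, and the treatment of the block minima via negation are illustrative choices.

```python
# Sketch of the EVS control limits (Sections 3.5 and 3.6); illustrative, not the author's code.
import numpy as np
from scipy.stats import genextreme

def evs_control_limits(train_scores, n=1000, p_false=0.001):
    """Fit GEV distributions to block extremes of the training scores and return (LCL, UCL)."""
    n_blocks = len(train_scores) // n
    blocks = np.reshape(train_scores[:n_blocks * n], (n_blocks, n))
    block_max = blocks.max(axis=1)
    block_min = blocks.min(axis=1)
    # Block minima are handled as block maxima of the negated scores.
    ucl = genextreme(*genextreme.fit(block_max)).ppf(1.0 - p_false)
    lcl = -genextreme(*genextreme.fit(-block_min)).ppf(1.0 - p_false)
    return lcl, ucl
```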

3.7. Damage Localization

As mentioned before, the whitening matrix (Equation (27)) is not unique. Multiplication with an orthogonal matrix also results in a whitening transformation [21]. A special whitening matrix is the inverse square root of R:
$$W_2 = U D^{-1/2} U^T \quad (29)$$
which is advantageous in damage localization, as it preserves the orientation in the original variable space (Figure 4c). The transformed variable with the largest value in the first principal component is assumed to reveal the sensor closest to damage.
Notice that for damage detection, it makes no difference which whitening matrix is used.
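A minimal sketch of the localization step with the symmetric whitening matrix of Equation (29) is given below (illustrative code).

```python
# Sketch of Equation (29) and the localization rule (illustrative).
import numpy as np

def localization_whitening(R):
    """W2 = U D^(-1/2) U^T, the inverse square root of R, Equation (29)."""
    d, U = np.linalg.eigh(R)
    return U @ np.diag(d ** -0.5) @ U.T

def localize(pc1):
    """Index of the sensor with the largest squared projection of the first PC."""
    return int(np.argmax(pc1 ** 2))
```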

4. Numerical Experiment

Damage detection and localization were studied using simulated data. A two-dimensional finite element model of a steel frame with an additional spring element was subjected to three uncorrelated random excitations (Figure 5). The density of the steel was ρ = 7850 kg/m³ and the spring constant was k = 2.0 MN/m. The relationship between temperature and the Young's modulus of steel was stepwise linear, as shown in Figure 6a. The frame was modelled with 176 simple beam elements, with a cross-section shown in Figure 5. The first seven modes were used in the simulation. Modal damping was assumed with damping ratios of ζ1–2 = 0.01, ζ3 = 0.015, and ζ4–7 = 0.02.
The three loads in the simulations were pseudorandom periodic excitations in the frequency range between 0 and 53.33 Hz with random amplitudes and phases [5]. All analyses had different loading functions. Steady-state analyses were performed in the frequency domain using modal superposition. Lateral accelerations at 59 points (every third node) and strains in the middle of 59 beam elements (every third element) were recorded (Figure 5). The measurement period was 32.77 s with a sampling frequency of 250 Hz. Each measurement consisted of 8192 samples per channel. Independent and identically distributed Gaussian random noise was added to each sensor. The average SNR was 30 dB for accelerometers and 10 dB for strain gauges. The standard deviations of the noise in all sensors were assumed to be known. They can also be approximated from the measurement data [23].
A rather complex but also fairly realistic temperature distribution was assumed. The temperature of the upper left (UL) corner, TUL, varied randomly between −25 °C and +40 °C. The temperature at the upper right (UR) corner was TUR = TUL ± 5 °C, at the bottom left (BL) support was TBL = TUL ± 3 °C, and at the bottom right (BR) support was TBR = TUR ± 3 °C. Temperature variation between the aforementioned points was assumed to be linear, except that Gaussian random noise with a standard deviation of 0.2 °C was added to each element. Temperature within a single element was constant. Sample distributions of the Young’s modulus in the elements are plotted in Figure 6b. The distribution did not change during a single measurement.
The variations of the lowest natural frequencies between measurements due to temperature or damage are shown in Figure 7. A measurement with incipient damage is indicated with a vertical line. The natural frequencies were not used for data analysis, and were only plotted to illustrate how damage was masked by the temperature influence.
If modal parameters were used for damage identification, the following considerations should be made. Only three natural frequencies exist in the frequency range of the excitation. Three features are not enough to remove the environmental effects. If mode shapes were also used, the resulting dimensionality of the feature vector would be 3 + 3 × (59 − 1) = 177 for real modes and 3 + 3 × (59 − 1) × 2 = 351 for complex modes (−1 is due to the scaling of the mode shape vectors). These large numbers would probably result in the curse of dimensionality, because one measurement yields only a single data point in the high-dimensional space. Some other issues in system identification were already mentioned in the introduction.
Visually unobservable damage at the support of the left leg was modelled with the removal of material inside the cross-section of the bottom element. The length of the element was 62.5 mm. Five different damage levels were studied with wall thicknesses of 4.5, 4.0, 3.5, 3.0, and 2.5 mm (Table 1). Notice that as material was removed, both the stiffness and mass decreased.
The first 100 measurements were acquired from the undamaged structure and each damage level was monitored with six measurements under random and unknown environmental conditions. Training data were the first 70 measurements. The extreme value statistics (EVS) control charts were designed using the same training data.

4.1. Bayesian Virtual Sensing and Selection of Sensors for Storage

Each vibration measurement was individually processed, as illustrated in Figure 1. Bayesian virtual sensing resulted in noise reduction. A detail of the measured and estimated accelerations of sensor 2 in measurement 1 (undamaged) is plotted in Figure 8. The exact values are also shown. It can be seen that the Bayesian virtual sensor was more accurate than the corresponding physical sensor. The same applied to the other sensors.
The estimation errors of each virtual sensor in all measurements are plotted in Figure 9. The variability between measurements was quite small. Although the measurement errors were equal (blue horizontal line), the virtual sensor errors differed between sensors. All virtual sensors were more accurate than the hardware.
Next, a subset of the Bayesian virtual sensors was selected separately for each measurement by applying the backward sequential sensor placement (BSSP) approach [17]. The requirement was that the standard deviation of the measurement error had to be decreased at least 50% in all sensors.
The negative cost function was the minimum difference between the allowed and current reconstruction errors in any sensor in the network. The reduced network with the maximum distance (minimum cost) was selected for the next round. As a result, a single sensor was permanently removed from the network. Sensor removal continued until the accuracy criterion was violated. The required number of virtual sensors was five for most measurements.
Once a single sensor was permanently removed, the errors of the stored and reconstructed virtual sensors were evaluated. The mean error of the full data as a function of the number of stored sensors is plotted in Figure 10 for measurement 1 (undamaged). It can be seen that storing only five virtual sensors instead of all 59 virtual sensors did not significantly increase the average noise level. When the number of stored sensors was further decreased below five, the reconstruction error increased considerably. On the other hand, when raw data were stored, the mean reconstruction error was larger. This was also theoretically proven (see Equation (15)).
The standard deviations of the measurement error, Bayesian virtual sensor error, and reconstruction error of all sensors in measurement 1 are plotted in Figure 11a. It can be seen that the reconstruction error was only slightly larger than that in the Bayesian virtual sensors. Sensors, for which the two errors were equal, corresponded to the stored signals, which were not reconstructed. They are indicated with black dots. The allowable error is also shown. The reconstruction errors were clearly smaller than requested. When raw data were stored, 22 signals were needed for reconstruction (Figure 11b), and for the most part, the reconstruction errors were larger than when five virtual sensor signals were stored.
The reconstruction errors of sensors near the supports were small. The reconstruction error is, however, not the only important quantity. A high signal-to-noise ratio (SNR) is crucial for detection [18]. The SNR of the stored and reconstructed data in measurement 1 is plotted in Figure 12a when five virtual sensors were stored, as well as the SNR of the reconstruction when 22 physical sensors were stored. It can be seen that the sensors near the supports actually had a very low SNR, which could have an adverse effect on damage detection and localization. Nevertheless, the SNR of each stored and reconstructed virtual sensor was larger than that of the corresponding physical sensor and, for the most part, was also larger than the SNR when 22 physical sensors were stored.
A histogram of the selected virtual sensors for storage in all measurements is shown in Figure 13. The most often selected sensors were located in six different regions of the structure. The placement of the stored virtual and physical sensors in measurement 1 is plotted in Figure 14a,b, respectively. Notice that no sensors were selected close to damage location (sensor 1).
The data compression ratio was computed as follows. If all data were stored, the number of floating-point numbers in each measurement was 59 × 8192 = 483,328 numbers. Storing five virtual sensor signals and the coefficient matrix A (Equation (10)) of size 54 × 5 resulted in 41,230 numbers. Consequently, only 8.5% of the total data had to be stored.
If raw signals were stored with the same requirement for the reconstruction accuracy, a larger number of signals had to be stored and all signals had to be reconstructed. Therefore, storing 22 signals and the coefficient matrix B (Equation (7)) of size 59 × 22 resulted in 181,522 numbers. Consequently, 37.6% of the total data had to be stored, and the accuracy was still lower than when five virtual sensors were stored, as seen in Figure 11.

4.2. Damage Detection and Localization Using Spatial Correlation

Damage detection was studied using four different data sets: raw measurements, all virtual sensors without compression, stored and reconstructed signals, and stored signals only. Spatial correlation was applied. EVS control charts were designed with a subgroup size of 1000 and are plotted in Figure 15. Notice that logarithmic scaling was applied to the vertical axis for clarity. The data points to the left of the blue vertical line correspond to the training data, whereas the black vertical lines indicate the onsets of the five damage scenarios. Only the largest damage level was clearly detected using the actual measurement data (Figure 15a). Classification of occasional out-of-control samples was difficult. All damage cases were detected using all virtual sensors (Figure 15c) or the stored and reconstructed data (Figure 15d). There is a slight difference between the two control charts, showing that the detection performance increased due to compression. This was quite a surprise, because the noise level in the reconstructed data was slightly larger than in the Bayesian virtual sensor data. The reason for this behavior is not known, and it is questionable whether this result can be generalized.
It may be argued that due to redundancy, only the selected virtual sensors would be enough for damage detection. This argument was tested by storing the seven most selected virtual sensors (sensors 10, 11, 22, 30, 38, 45, and 52; see Figure 13) from each measurement and designing an EVS control chart for these data (Figure 15b). Only the largest damage level was detected. Due to different environmental conditions between measurements, more than seven signals would have been needed to remove the environmental influences.
Damage localization was done by plotting the squared projection of the first principal component on each sensor (Figure 16). Using the actual measurement data, damage was localized to sensor 5, and using the stored and reconstructed virtual sensors, damage was localized to sensor 3. Notice that sensor 3 was not included in the stored sensors, but its data were reconstructed. The correct position was closest to sensor 1. Neither analysis pointed to the correct sensor, but in either case, the suggested damage location was in the vicinity of the actual damage. The localization accuracy was slightly higher when the stored and reconstructed virtual sensors were used. The SNR in sensor 1 was very small, which probably resulted in the inaccuracy in damage localization. In many structures, damage may be located close to the fixed support, where the stresses are large but the vibration amplitude is very small, resulting in a small SNR. Therefore, strain measurements at these locations could be considered.

4.3. Damage Detection Using Spatiotemporal Correlation

Damage detection was also investigated with a spatiotemporal correlation model. The data and the algorithm remained the same. A model order equal to 30 was used. The size of the data matrix including the shifted data was very large, and a recursive algorithm was necessary to estimate the covariance matrices [31]. Covariance matrix estimates were needed in two stages: first, the covariance matrix of the training data for whitening, and second, the covariance matrix of all transformed data (residuals) for PCA.
Spatiotemporal correlation models yielded EVS control charts, shown in Figure 17, which can be compared with the corresponding charts in Figure 15. Considerable improvement resulted with the raw measurement data (Figure 17a). Almost all damage levels could be detected. However, occasional false alarms emerged. Drastic improvement occurred with the stored signals of the selected seven virtual sensors (Figure 17b). Nearly all damage cases were detected, with occasional false indications of damage. Spatiotemporal correlation was necessary to make the data redundant. Almost no effect was observed with the virtual sensors (Figure 17c) or the stored and reconstructed virtual sensors (Figure 17d). The detection performance slightly decreased, but was nevertheless almost perfect.

4.4. Strain Measurements

Strain measurements yielded very similar results and conclusions as the acceleration measurements. The main observations are briefly discussed. The most important result was that a strain sensor must be located very close to damage. Otherwise, detection may not be possible. Only three virtual strain sensors were selected for storage from most measurements (Figure 18). The number of physical sensors was considerably larger. The accuracy requirement was satisfied with just three virtual sensors (Figure 19a) or with 24 physical sensors (Figure 19b). The virtual sensors were located at the supports and at the left corner of the frame (Figure 20a), whereas the positions of the stored physical sensors are shown in Figure 20b. The SNRs of the virtual strain sensors were more uniform than those of the accelerometers (Figure 21). Spatial correlation analyses yielded the control charts shown in Figure 22. Notice that seven stored signals were used for Figure 22b, whereas only three signals were stored for Figure 22d. Three damage levels were detected using the raw data (Figure 22a) or seven selected virtual sensors (Figure 22b). All damage levels were detected using all virtual sensors (Figure 22c) or the stored and reconstructed virtual sensors (Figure 22d). The out-of-control samples in Figure 22d, in which the block maximum is negative or the block minimum is positive, can be ignored. Again, quite surprisingly, the stored and reconstructed virtual sensors yielded a slightly better detection performance than when all virtual sensors were stored. Damage was correctly localized to sensor 1. When sensor 1 was removed from the measurements, the detection performance decreased drastically.
Storing three virtual sensor signals and the coefficient matrix A (Equation (10)) of size 56 × 3 resulted in 24,744 numbers. Consequently, only 5.1% of the total data had to be stored.

4.5. Different Damage Locations

Damage detection and localization were also studied with different locations of damage. The same damage type and levels were assumed, as shown in Table 1. Both acceleration and strain measurements were considered. Six damage locations were arbitrarily selected, including the location presented earlier. They are plotted in Figure 23.
The results of damage detection and localization are given in Table 2, Table 3 and Table 4 for raw data, virtual sensor data, and stored and reconstructed virtual sensor data, respectively. Very interesting observations can be made: (1) Virtual sensors outperformed raw data in damage detection. (2) Damage detection performance was better when using stored and reconstructed virtual sensor data than all virtual sensors. (3) Damage localization performance was affected by the damage position and the sensor type. Either accelerometers or strain sensors, but not both, were able to localize damage in a certain position. This was especially true with the stored and reconstructed virtual sensor data (Table 4). (4) Detection of damage at location 2 was difficult with accelerometers. (5) Detection of minor damage at locations 2 and 5 was challenging using strain sensors.

4.6. Different Damage Detection Algorithms

As mentioned in the introduction, a comparison of different damage detection algorithms is outside the scope of this study. However, two alternative data-based time-domain methods were tested with the same data. Damage at location 1 (Figure 23) was assumed, with the damage severities shown in Table 1. The two algorithms were the Mahalanobis distance (MD) [32] and a regression-based minimum mean-square error (MMSE) estimation method [20]. MMSE was also applied to damage localization. Damage detection using the stored and reconstructed virtual accelerations is shown in Figure 24. Both methods were capable of detecting all damage levels. For the other data, the results are shown in Table 5. They can be compared with the results of the whitening algorithm shown in the first rows of Table 2, Table 3 and Table 4. MMSE performed slightly better than MD. Whitening outperformed the two alternative methods, but only slightly. Especially with the raw strain data, whitening could detect smaller damage than the other two algorithms.

5. Conclusions

A data-compression technique for storing and reconstructing simultaneously measured vibration signals in a dense sensor network was proposed. The stored and reconstructed data were used for damage detection and localization. Data compression and reconstruction for SHM is the main novelty of this paper.
The first step was to reduce measurement error by applying Bayesian virtual sensing. The virtual sensors, being more accurate than the physical sensors, replaced the actual measurements in the subsequent steps. The covariance matrix of the measurement errors was assumed to be diagonal and known. The measurement errors can also be different in each sensor, and they can be approximated from the measurement data. However, it is more difficult to estimate cross-correlated noise (full covariance matrix).
Data compression and reconstruction was done individually for each measurement, because the dynamic characteristics of the structure could vary between measurements due to environmental or operational variability, or damage. On the other hand, a full set of training data from several measurements under different environmental or operational conditions was used to build a covariance model of the undamaged structure. This model was applied to novelty detection using whitening transformation and principal component analysis. The first principal component was assumed to reveal the sensor closest to the damage location.
Data analysis for damage detection was performed in the time domain. No mathematical model of the structure nor system identification was needed. Spatial and spatiotemporal correlation models were compared. Spatiotemporal correlation gave no improvement over spatial correlation when all virtual sensors or stored and reconstructed virtual sensors were used. When raw measurement data or only the stored virtual sensor data were used, spatiotemporal correlation considerably increased the sensitivity to damage. A strain sensor had to be located close to damage. Otherwise, damage remained undetected. Accelerometers were also able to detect remote damage.
The main results are as follows: (1) It is more beneficial to store virtual sensor data than physical sensor data. (2) Less than 8.5% of the total amount of virtual sensor data had to be stored in the studied example, whereas 37.6% of the physical sensor data had to be stored for the same accuracy. (3) The stored and reconstructed virtual sensor data were more accurate than the actual measurements. (4) The accuracy of the reconstructed virtual sensors was only slightly lower than that of the Bayesian virtual sensors. (5) The errors of the Bayesian virtual sensors and, consequently, the reconstruction errors were not the same even when the measurement errors were equal. (6) Whitening transformation was able to take the environmental or operational influences into account without measuring the underlying variables. (7) Damage detection and localization were more reliable with the stored and reconstructed virtual sensors than with the actual measurements. (8) Damage localization was successful with either accelerometers or strain sensors, but not both. (9) Damage localization to a reconstructed virtual sensor was possible. (10) Damage detection performance was slightly higher using the stored and reconstructed data than all virtual sensors, but generalization of this result remained questionable and needs further investigation. Different damage types should be studied with a more complex structure. Experimental results are also needed to validate the proposed technique.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This research was supported by Metropolia University of Applied Sciences.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Sadoughi, M.; Downey, A.; Yan, J.; Hu, C.; Laflamme, S. Reconstruction of unidirectional strain maps via iterative signal fusion for mesoscale structures monitored by a sensing skin. Mech. Syst. Signal Process. 2018, 112, 401–416.
2. Brincker, R.; Ventura, C. Introduction to Operational Modal Analysis; Wiley: Chichester, UK, 2015.
3. Cawley, P. Structural health monitoring: Closing the gap between research and industrial deployment. Struct. Health Monit. 2018, 17, 1225–1244.
4. Sharma, S. Applied Multivariate Techniques; Wiley: New York, NY, USA, 1996.
5. Brandt, A. Noise and Vibration Analysis: Signal Analysis and Experimental Procedures; Wiley: Chichester, UK, 2011.
6. Kullaa, J. Optimal sensor placement of Bayesian virtual sensors. In Proceedings of the ISMA2020, International Conference on Noise and Vibration Engineering, KU Leuven, Belgium, 7–9 September 2020; Desmet, W., Pluymers, B., Moens, D., Vandemaele, S., Eds.; KU Leuven-Departement Werktuigkunde: Leuven, Belgium, 2020; pp. 973–985.
7. Mallardo, V.; Aliabadi, M. Optimal sensor placement for structural, damage and impact identification: A review. Struct. Durab. Health Monit. 2013, 9, 287–323.
8. Yi, T.-H.; Li, H.-N. Methodology developments in sensor placement for health monitoring of civil infrastructures. Int. J. Distrib. Sens. Netw. 2012, 8, 612726.
9. Krause, A.; Guestrin, C.; Gupta, A.; Kleinberg, J. Near-optimal sensor placements: Maximizing information while minimizing communication cost. In Proceedings of the 5th International Conference on Information Processing in Sensor Networks (IPSN '06), New York, NY, USA, 19–21 April 2006; pp. 2–10.
10. Meo, M.; Zumpano, G. On the optimal sensor placement techniques for a bridge structure. Eng. Struct. 2005, 27, 1488–1497.
11. Leyder, C.; Chatzi, E.; Frangi, A.; Lombaert, G. Comparison of optimal sensor placement algorithms via implementation on an innovative timber structure. In Life-Cycle of Engineering Systems: Emphasis on Sustainable Civil Infrastructure, Proceedings of the Fifth International Symposium on Life-Cycle Civil Engineering (IALCCE 2016), Delft, The Netherlands, 16–19 October 2016; Bakker, J., Frangopol, D.M., van Breugel, K., Eds.; Taylor & Francis Group: London, UK, 2017; pp. 260–267.
12. Papadimitriou, C. Optimal sensor placement methodology for parametric identification of structural systems. J. Sound Vib. 2004, 278, 923–947.
13. Han, J.-H.; Lee, I. Optimal placement of piezoelectric sensors and actuators for vibration control of a composite plate using genetic algorithms. Smart Mater. Struct. 1999, 8, 257–267.
14. Worden, K.; Burrows, A.P. Optimal sensor placement for fault detection. Eng. Struct. 2001, 23, 885–901.
15. Kammer, D.C. Sensor placement for on-orbit modal identification and correlation of large space structures. J. Guid. Control Dyn. 1991, 14, 251–259.
16. Kammer, D.C. Effects of noise on sensor placement for on-orbit modal identification of large space structures. J. Dyn. Syst. Meas. Control—Trans. ASCE 1992, 114, 436–443.
17. Papadimitriou, C.; Lombaert, G. The effect of prediction error correlation on optimal sensor placement in structural dynamics. Mech. Syst. Signal Process. 2012, 28, 105–127.
18. Kay, S.M. Fundamentals of Statistical Signal Processing: Detection Theory; Prentice-Hall: Upper Saddle River, NJ, USA, 1998.
19. Sohn, H. Effects of environmental and operational variability on structural health monitoring. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2007, 365, 539–560.
20. Kullaa, J. Robust damage detection using Bayesian virtual sensors. Mech. Syst. Signal Process. 2020, 135, 106384.
21. Hyvärinen, A.; Karhunen, J.; Oja, E. Independent Component Analysis; John Wiley & Sons: New York, NY, USA, 2001.
22. Kullaa, J. Comparison of time domain and feature domain damage detection. In Proceedings of the 8th International Operational Modal Analysis Conference (IOMAC 2019), Copenhagen, Denmark, 12–14 May 2019; pp. 115–126.
23. Kullaa, J. Bayesian virtual sensing in structural dynamics. Mech. Syst. Signal Process. 2018, 115, 497–513.
24. Scharf, L.L. Statistical Signal Processing: Detection, Estimation, and Time Series Analysis; Addison-Wesley: Reading, MA, USA, 1991.
25. Stark, H.; Woods, J.W. Probability and Random Processes with Applications to Signal Processing, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2002.
26. Kullaa, J. Whitening transformation in damage detection. In Smart Structures, Proceedings of the 5th European Conference on Structural Control—EACS 2012, Genoa, Italy, 18–20 June 2012; Del Grosso, A.E., Basso, P., Eds.; Erredi Grafiche Editoriali: Genoa, Italy, 2012.
27. Worden, K.; Allen, D.; Sohn, H.; Farrar, C.R. Damage detection in mechanical structures using extreme value statistics. In Proceedings of the SPIE 9th Annual International Symposium on Smart Structures and Materials, San Diego, CA, USA, 17 March 2002; Volume 4693, pp. 289–299.
28. Coles, S. An Introduction to Statistical Modeling of Extreme Values; Springer: Bristol, UK, 2001.
29. Montgomery, D.C. Introduction to Statistical Quality Control, 3rd ed.; Wiley: New York, NY, USA, 1997.
30. Kullaa, J. Eliminating environmental influences in structural health monitoring using spatiotemporal correlation models. In Proceedings of the Fourth European Workshop on Structural Health Monitoring, Krakow, Poland, 2–4 July 2008; Uhl, T., Ostachowicz, W., Holnicki-Szulc, J., Eds.; DEStech Publications: Lancaster, PA, USA, 2008; pp. 1033–1040.
31. Kullaa, J. Distinguishing between sensor fault, structural damage, and environmental or operational effects in structural health monitoring. Mech. Syst. Signal Process. 2011, 25, 2976–2989.
32. Worden, K.; Manson, G.; Fieller, N.R.J. Damage detection using outlier analysis. J. Sound Vib. 2000, 229, 647–667.
Figure 1. Flowcharts of (a) the whole process and (b) the optimal sensor placement (OSP) function.
Figure 2. Reconstruction errors of each sensor when sensors 2, 4, and 7 are removed one at a time. Seven other sensors were removed earlier and must also be reconstructed. (a) Sensor 2 removed; (b) Sensor 4 removed; (c) Sensor 7 removed. The minimum distances from the threshold are also shown. The largest minimum distance from the threshold occurs in plot (a), indicating that sensor 2 can be permanently removed.
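To make the selection logic of Figure 2 concrete, the following is a minimal, hypothetical sketch of a greedy backward elimination: each remaining stored sensor is tentatively removed, the reconstruction errors of all omitted sensors are evaluated, and the sensor whose removal leaves the largest margin to the allowable error is removed permanently. The simple least-squares error model used here is only a stand-in for the Bayesian virtual sensor model of the paper, and the function and variable names are illustrative.

```python
import numpy as np

def reconstruction_error(stored_idx, omitted_idx, X_train):
    """Stand-in error model: least-squares reconstruction of the omitted
    channels from the stored channels, evaluated on the training data.
    The paper uses Bayesian virtual sensors instead of this simple model."""
    A, *_ = np.linalg.lstsq(X_train[:, stored_idx], X_train[:, omitted_idx], rcond=None)
    residual = X_train[:, omitted_idx] - X_train[:, stored_idx] @ A
    return np.sqrt(np.mean(residual**2, axis=0))      # RMS error per omitted channel

def greedy_storage_selection(X_train, allowable_error):
    """Remove sensors one by one as long as the worst reconstruction error
    of all omitted sensors stays below the allowable error."""
    n = X_train.shape[1]
    stored = list(range(n))
    while len(stored) > 1:
        best_margin, best_sensor = -np.inf, None
        for s in stored:                               # tentatively remove each stored sensor
            candidate = [k for k in stored if k != s]
            omitted = [k for k in range(n) if k not in candidate]
            err = reconstruction_error(candidate, omitted, X_train)
            margin = allowable_error - err.max()       # minimum distance to the threshold
            if margin > best_margin:
                best_margin, best_sensor = margin, s
        if best_margin < 0:                            # any further removal violates the threshold
            break
        stored.remove(best_sensor)                     # e.g., sensor 2 in Figure 2a
    return stored
```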
Figure 3. Flowchart of damage detection and localization.
Figure 4. An illustrative example of whitening. (a) Original variables; (b) whitening transformation using W1; (c) whitening transformation using W2. The two isolated points represent outliers in the signal space (red square) and in the noise space (green circle).
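As background to Figure 4, a whitening transformation decorrelates the training channels and scales them to unit variance, so that subsequent outlier statistics are not dominated by a few high-variance directions. The sketch below is a generic PCA-based whitening (not necessarily the exact W1 or W2 of the figure), assuming the training data are arranged as samples × channels.

```python
import numpy as np

def pca_whitening(X_train):
    """Return the mean and a whitening matrix W estimated from training data,
    such that (x - mu) @ W.T has (approximately) identity covariance."""
    mu = X_train.mean(axis=0)
    C = np.cov(X_train - mu, rowvar=False)
    eigval, eigvec = np.linalg.eigh(C)                         # C = E diag(eigval) E^T
    W = np.diag(1.0 / np.sqrt(np.maximum(eigval, 1e-12))) @ eigvec.T
    return mu, W

# The same mu and W estimated from the training data are applied to new
# measurements before computing residuals or outlier statistics:
# z = (x_new - mu) @ W.T
```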
Figure 5. Frame structure with 59 accelerometers. Every other sensor number is displayed. Strain sensors were located at almost the same points. The intact cross-section is also shown.
Figure 6. (a) Young’s modulus versus temperature; (b) sample distributions of the Young’s modulus.
Figure 7. Variation of the seven lowest natural frequencies due to temperature and damage. Frequencies to the right of the vertical line are from the damaged structure. The different damage levels are also indicated.
Figure 8. Detail of time history of accelerometer 2 in measurement 1 (undamaged): measured data, virtual sensor data, and noiseless data.
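The virtual sensor signal shown in Figure 8 can be illustrated with a standard linear minimum mean square error (MMSE) estimator, in which the omitted channels are estimated from the stored channels using the mean and covariance of the training data. The sketch below is a minimal conditional-Gaussian version of that idea; it omits the explicit measurement noise model of the full Bayesian virtual sensor formulation [23].

```python
import numpy as np

def mmse_reconstruct(x_stored, stored_idx, omitted_idx, mu, C):
    """Linear MMSE estimate of the omitted channels from the stored ones.

    x_stored : (n_samples, n_stored) signals of the stored channels
    mu, C    : mean vector and covariance matrix of all channels,
               estimated from the training data
    """
    C_oo = C[np.ix_(stored_idx, stored_idx)]     # stored-stored covariance
    C_uo = C[np.ix_(omitted_idx, stored_idx)]    # omitted-stored cross-covariance
    gain = np.linalg.solve(C_oo, C_uo.T).T       # MMSE gain = C_uo C_oo^{-1}
    return mu[omitted_idx] + (x_stored - mu[stored_idx]) @ gain.T
```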
Figure 9. Measurement errors (blue horizontal line) and Bayesian virtual sensor errors of all sensors in each measurement (all damage levels).
Figure 10. Mean reconstruction error as a function of the number of stored physical or virtual sensor signals in measurement 1. The blue horizontal line is the measurement error.
Figure 11. (a) Measurement error, virtual sensor error, and reconstruction error of all sensors in measurement 1, when five virtual sensor signals were stored. (b) Measurement error and reconstruction error of all sensors in measurement 1, when 22 physical sensor signals were stored. The allowable error is also shown. The stored sensors are indicated with dots.
Figure 12. (a) SNR of physical sensors, Bayesian virtual sensors, and stored and reconstructed virtual sensors in measurement 1 when five virtual sensor signals were stored. (b) SNR of physical sensors and stored and reconstructed sensors in measurement 1 when 22 physical sensor signals were stored. The stored sensors are indicated with dots.
Figure 13. Histogram of selected virtual sensors for storage in all measurements (all damage levels).
Figure 14. Selected sensors for storage in measurement 1: (a) virtual sensors 11, 22, 29, 44, and 52; (b) 22 physical sensors.
Figure 15. Damage detection using EVS control charts with spatial correlation: (a) all physical sensors; (b) seven virtual sensors: 10, 11, 22, 30, 38, 45, and 52; (c) all Bayesian virtual sensors; (d) stored and reconstructed virtual sensors. The vertical lines correspond to the end of training data (blue) and the five damage levels (black).
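For control charts such as those in Figure 15, one common way to set an extreme value statistics (EVS) control limit, in the spirit of [27,28], is to collect block maxima of the damage-sensitive statistic from the training data, fit a generalized extreme value (GEV) distribution, and place the upper control limit at a high quantile. The sketch below is generic and assumes only a vector of training statistics; the exact chart design of the paper may differ.

```python
import numpy as np
from scipy.stats import genextreme

def evs_upper_control_limit(training_stat, block_size=100, quantile=0.999):
    """Fit a GEV distribution to block maxima of a training statistic and
    return a high quantile as the upper control limit."""
    n_blocks = len(training_stat) // block_size
    blocks = np.reshape(training_stat[:n_blocks * block_size], (n_blocks, block_size))
    maxima = blocks.max(axis=1)                  # block maxima from training data
    c, loc, scale = genextreme.fit(maxima)       # GEV shape, location, scale
    return genextreme.ppf(quantile, c, loc=loc, scale=scale)

# Monitoring: compute the same block maxima for each new measurement and
# flag potential damage when a maximum exceeds the control limit.
```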
Figure 16. Damage localization: (a) all physical sensors; (b) stored and reconstructed virtual sensors. The correct damage position was closest to sensor 1.
Figure 17. Damage detection using EVS control charts with a model order equal to 30: (a) all physical sensors; (b) seven virtual sensors: 10, 11, 22, 30, 38, 45, and 52; (c) all Bayesian virtual sensors; (d) stored and reconstructed virtual sensors. The vertical lines correspond to the end of training data (blue) and the five damage levels (black).
Figure 18. Mean reconstruction error as a function of the number of stored physical or virtual sensor signals in measurement 1. The blue horizontal line is the measurement error.
Figure 19. (a) Measurement error, virtual sensor error, and reconstruction error of all sensors in measurement 1 when virtual sensor signals were stored. (b) Measurement error and reconstruction error of all sensors in measurement 1 when physical sensor signals were stored. The allowable error is also shown. The stored sensors are indicated with dots.
Figure 20. (a) Selected virtual sensors for storage in measurement 1: sensors 1, 22, and 59. (b) A total of 24 selected physical sensors for storage in measurement 1.
Figure 21. (a) SNR of physical sensors, Bayesian virtual sensors, and stored and reconstructed virtual sensors in measurement 1 when three virtual sensor signals were stored. (b) SNR of physical sensors and stored and reconstructed virtual sensors in measurement 1 when 24 physical sensor signals were stored. The stored sensors are indicated with dots.
Figure 22. Damage detection using EVS control charts with spatial correlation: (a) all physical sensors; (b) seven virtual sensors: 1, 22, 31, 32, 38, 46, and 59; (c) all Bayesian virtual sensors; (d) stored and reconstructed virtual sensors. The vertical lines correspond to the end of training data (blue) and the five damage levels (black).
Figure 23. Six damage locations.
Figure 24. Damage detection from the stored and reconstructed virtual acceleration signals using two alternative algorithms and EVS control charts. (a) MD; (b) MMSE. The vertical lines correspond to the end of training data (blue) and the five damage levels (black).
Table 1. Damage scenarios in a single beam element along a length of 62.5 mm. The measurement numbers are also given.
Damage Level | Plate Thickness (mm) | Thickness Decrease (mm) | Measurements
D0 | 5.0 | 0 | 1–100
D1 | 4.5 | 0.5 | 101–106
D2 | 4.0 | 1.0 | 107–112
D3 | 3.5 | 1.5 | 113–118
D4 | 3.0 | 2.0 | 119–124
D5 | 2.5 | 2.5 | 125–130
Table 2. Damage detection and localization using raw data. The number in parentheses is the sensor found by the algorithm.
Damage Location | Nearest Sensors | Acceleration Detection | Acceleration Localization | Strain Detection | Strain Localization
Loc 1 | 1 | D5 | Fail (5) | D3–D5 | OK (1)
Loc 2 | 15–16 | – | Fail (22) | – | Fail (23)
Loc 3 | 21–22 | – | Fail (26) | D2–D5 | OK (22)
Loc 4 | 28–29 | – | OK (28) | – | Fail (16)
Loc 5 | 51–52 | – | OK (51) | – | Fail (1)
Loc 6 | 22–23 | – | OK (22) | – | Fail (59)
Table 3. Damage detection and localization using virtual sensor data. The number in parentheses is the sensor found by the algorithm.
Damage Location | Nearest Sensors | Acceleration Detection | Acceleration Localization | Strain Detection | Strain Localization
Loc 1 | 1 | D1–D5 | Fail (3) | D1–D5 | OK (1)
Loc 2 | 15–16 | – | OK (16) | D3–D5 | Fail (21)
Loc 3 | 21–22 | D2–D5 | Fail (25) | D1–D5 | OK (22)
Loc 4 | 28–29 | D3–D5 | OK (28) | D4–D5 | Fail (38)
Loc 5 | 51–52 | D3–D5 | OK (52) | D5 | Fail (58)
Loc 6 | 22–23 | D3–D5 | Fail (26) | D3–D5 | Fail (17)
Table 4. Damage detection and localization using stored and reconstructed virtual sensor data. The number in parentheses is the sensor found by the algorithm.
Damage Location | Nearest Sensors | Acceleration Detection | Acceleration Localization | Strain Detection | Strain Localization
Loc 1 | 1 | D1–D5 | Fail (3) | D1–D5 | OK (1)
Loc 2 | 15–16 | D5 | OK (15) | D3–D5 | Fail (23)
Loc 3 | 21–22 | D2–D5 | Fail (19) | D1–D5 | OK (22)
Loc 4 | 28–29 | D2–D5 | OK (28) | D2–D5 | Fail (21)
Loc 5 | 51–52 | D2–D5 | OK (52) | D3–D5 | Fail (59)
Loc 6 | 22–23 | D1–D5 | Fail (19) | D2–D5 | OK (23)
Table 5. Damage detection and localization using MD and MMSE algorithms (Acc = acceleration; raw = raw data; VS = virtual sensor data; rec = stored and reconstructed data). The number in parentheses is the sensor found by the algorithm.
Data | MD Detection | MMSE Detection | MMSE Localization
Acc raw | D5 | D5 | Fail (4)
Acc VS | D2–D5 | D1–D5 | Fail (3)
Acc rec | D1–D5 | D1–D5 | Fail (3)
Strain raw | D5 | D5 | OK (1)
Strain VS | D2–D5 | D1–D5 | OK (1)
Strain rec | D1–D5 | D1–D5 | OK (1)
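As a rough illustration of the MD detector compared in Table 5, a Mahalanobis distance can be computed for each sample or feature vector relative to the mean and covariance of the training data, following the outlier analysis idea of [32]; unusually large distances indicate potential damage. The sketch below is generic and not the paper's exact implementation.

```python
import numpy as np

def mahalanobis_sq(X, X_train):
    """Squared Mahalanobis distance of each row of X with respect to the
    mean and covariance of the training data."""
    mu = X_train.mean(axis=0)
    Ci = np.linalg.inv(np.cov(X_train, rowvar=False))
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, Ci, D)   # (x - mu)^T C^{-1} (x - mu) per sample
```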
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
