Article

A Score-Guided Regularization Strategy-Based Unsupervised Structural Damage Detection Method

School of Computer and Big Data, Fuzhou University, Fujian 350108, China
*
Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 4887; https://doi.org/10.3390/app12104887
Submission received: 24 April 2022 / Revised: 7 May 2022 / Accepted: 8 May 2022 / Published: 12 May 2022
(This article belongs to the Section Civil Engineering)

Abstract
It is critical to use scientific methods to track the performance degradation of in-service buildings over time and avoid accidents. In recent years, both supervised and unsupervised learning methods have yielded positive results in structural health monitoring (SHM). Supervised learning approaches require data from the entire structure and from various damage scenarios for training. However, it is impractical to obtain adequate training data for the various damage situations of in-service facilities. In addition, most known unsupervised approaches train only on response data from the entire structure. In these situations, contaminated data containing both undamaged and damaged samples, which are typical in real-world applications, prevent the models from fitting the undamaged data, resulting in performance loss. For the reasons stated above, this work provides an unsupervised technique for detecting structural damage. The approach trains on contaminated data, with the anomaly score of the data serving as the model’s output. First, we devise a score-guided regularization approach for damage detection to expand the score difference between undamaged and damaged data. Then, multi-task learning is incorporated to make parameter adjustment easier. Experimental phase II of the SHM benchmark data and data from the Qatar University grandstand simulator are used to validate this strategy. Compared with the classical algorithms, the suggested algorithm achieves the highest mean AUCs of 0.708 and 0.998 on the two datasets.

1. Introduction

Structural health monitoring (SHM) is a practical and essential process for assessing the structural integrity of structures to prevent damage from ageing, material deterioration, environmental behavior, poor construction, and natural disasters. The essential technique of structural health monitoring is structural damage detection. Many approaches for detecting damage have been proposed and implemented in recent years. Vibration-based damage detection is among the most productive approaches to detect structural deterioration in general [1]. Vibration-based damage detection approaches can be divided into physical model-based and data-based methods [2]. The measurement data obtained by sensors positioned on the monitored structure is the sole data source for data-based approaches. Data-based methods are more commonly used as structural monitoring tools because they account for the impacts of uncertainty on structural damage [3] and are computationally affordable. Recent advances in computation technology facilitate the development of more complex models for data-based methods by means of so-called deep learning [4], which is characterized by artificial neural networks with high numbers of layers. This allows learning complex non-linear mappings between inputs and outputs from data. These data-based methods can be trained through supervised or unsupervised learning.
In recent years, supervised learning has been widely used in SHM. For example, Abdeljaber et al. [5] applied a supervised learning method based on one-dimensional convolutional neural networks (1D-CNN) and successfully verified the recognition performance of the 1D-CNN under nine damage conditions. Avci et al. [6] used a trained 1D-CNN to diagnose the damage status of structures. Sergio et al. [7] proposed a new analysis method for building a vulnerability framework based on machine learning and verified the reliability of the established module through the rapid visual scanning support model. Although supervised learning has been found to improve damage detection accuracy, it requires a substantial amount of complete and diverse damage case data for training. Furthermore, manually labelling the enormous amount of data obtained is impractical. Certain researchers have employed finite element models to generate sufficient training data from various decision support systems [8]. However, it is difficult to simulate the uncertainty of the environment, the boundary conditions, and the nonlinear relationship between the damage level and the variation of the model properties. Therefore, unsupervised learning is the preferred method for damage detection [9].
Unsupervised learning has been popular in recent years for damage detection. Sarmadi et al. [10] suggested an unsupervised damage detection model for detecting damage in scenarios with environmental variability. This model uses the Mahalanobis distance to distinguish different damage states, and its performance is validated on laboratory-scale bridges exposed to various damage states. Da Silva et al. [11] adopted an unsupervised clustering method based on the Fuzzy c-means (FCM) and Gustafson-Kessel (GK) clustering algorithms to build a model using only data obtained from the entire structure. Damage to a three-layer bookshelf structure is detected using these well-trained models. The damage detection results reveal that all intact cases are classified better than damage cases, and the GK algorithm outperforms the FCM method in specific damage circumstances. Adeli et al. [12] used Restricted Boltzmann Machines to extract characteristics from pre-processed signals of the whole state and from pre-processed signals of unknown states. The raw acceleration signal is processed using a simultaneous compressive wavelet transform and a Fast Fourier transform. The extracted features are then used to analyze the data states.
The one-class support vector machine (OC-SVM) has been frequently employed in damage detection as an unsupervised machine learning method. Long and Buyukozturk [13] used an automated OC-SVM to locate damage in a three-story, two-bay laboratory steel structure. Wang and Cha [14] integrated an OC-SVM with an auto-encoder to detect structural damage in laboratory steel bridges; only data from the entire structure is used in their research. The auto-encoder’s loss error is used as a damage sensitivity factor in training. The proposed method can detect slight damage and has a high detection rate. In addition, cluster-based unsupervised algorithms are commonly used for structural damage detection. Diez et al. [15] suggested an unsupervised structural damage assessment technique based on the K-nearest neighbor algorithm to detect bridge damage. In the K-nearest neighbor approach, the number of clusters K must be manually pre-defined for data clustering. However, when dealing with high-dimensional and massive training data from various unknown conditions, determining an acceptable value for K can be difficult, and the clustering results are prone to local optima. To address the issues raised by conventional clustering-based methods, Cha et al. [16] suggested an unsupervised learning method that uses a fast-clustering strategy based on density peaks for civil infrastructure damage detection.
In structural damage detection, it is crucial to choose appropriate metrics. Over the years, researchers have used many different damage parameters as indicators for judging structural damage status, such as natural frequencies, damping, frequency response functions, modal strain energy, etc. Different damage parameters suit different application scenarios. The success of structural damage detection algorithms largely depends on the selection of damage parameters, because threshold estimation depends strongly on the damage indicators. Fan and Qiao [17] reviewed extensive literature and concluded that modal curvature-based algorithms are more reliable in detecting damage than frequency- and modal-based algorithms. Cao et al. [18] concluded that damping is a very effective indicator of damage, especially when combined with frequency or mode shape, provided that appropriate damping measurement techniques can be used. Barman et al. [19] used the modal strain energy damage index (DIMSE), the frequency response function strain energy dissipation ratio (FRFSEDR), the flexibility strain energy damage ratio (FSEDR), and the residual force-based damage index (RFBDI) as damage indexes, and then applied a Bayesian data fusion approach to these four damage indexes to find the accurate damage location. In addition to damage parameters, other factors are often used as metrics in unsupervised structural damage methods. Pathirage et al. [20] used the learned feature representations in the hidden layers of deep auto-encoders as damage-sensitive features to indicate the presence of damage in the monitored structures. Wang and Cha [14] used three indexes quantifying the reconstruction losses between the original acceleration response inputs and the reconstructed acceleration response outputs as damage-sensitive features.
According to our findings, the data employed in many unsupervised structural damage detection algorithms tend to be undamaged. On the other hand, building structures are frequently exposed to the natural environment, resulting in differing degrees of damage, so it is not easy to ensure that the obtained data are entirely undamaged. We also noticed that the selection of suitable damage indicators has some limitations: (1) when selecting damage parameters as damage indicators, researchers are required to have relevant background knowledge; (2) damage indicators for most unsupervised structural damage detection methods rely solely on pre-defined distance metrics or reconstruction errors and thereby cannot be flexibly embedded into other structural damage detection models. As a result, a new unsupervised damage detection method based on a score-guided regularization strategy is proposed in this study. Figure 1 depicts the basic flow of the score-guided network. As shown in the diagram, the representation learner, which maps the data to the hypothesis space, is connected to the score-guided network. The representation learner is trained to extract more valuable information using the score-guided regularization technique, widening the gap between undamaged and damaged data. This method’s representation learner employs an auto-encoder to extract damage characteristics, while the score-guided network serves as a damage detector. Finally, multi-task learning is applied to make parameter adjustment easier. The novelty and significant contributions of our proposed method are as follows:
(1)
The proposed method requires contaminated data as training data, which is more realistic than current unsupervised algorithms that solely employ undamaged data.
(2)
The model can learn more representative information using the score-guided regularization technique. The difference between undamaged and damaged data is widened by scoring the input data, allowing undamaged and damaged data to be distinguished. The model takes an anomaly score as its output. The proposed method selects the anomaly score, that is, the score of the input sample, as the damage indicator; this requires neither relevant background knowledge nor a pre-defined metric, as the score is output directly in an end-to-end manner.
(3)
Using multi-task learning to simplify the hyperparameter tuning procedure increases the model’s performance and adaptability.
This paper has four sections. The Introduction introduces and evaluates existing unsupervised structural damage detection algorithms. Section 2 explains the proposed unsupervised structural damage detection algorithm. The algorithm is validated in Section 3 using two benchmark datasets. Section 4 summarizes the paper and suggests future research directions.

2. Proposed Method

2.1. Model

This section describes the suggested unsupervised structural damage detection method in detail. The method extracts features with a deep auto-encoder and then employs a score-guided regularization strategy to widen the anomaly score difference between undamaged and damaged data. It also employs multi-task learning to fine-tune the network’s hyperparameters. The algorithm’s training data is contaminated data. Figure 2 depicts the suggested method’s general schematic diagram, divided into three sections: a reconstructor, a score-guided network, and a multi-task learning module for dynamically adjusting the parameters. Given a data sample $X$, the reconstructor first maps it to a representation $Z$ in the latent space and generates an approximation $\hat{X}$ of the input sample. The score-guided network takes $Z$ as input to learn the anomaly score $S$. The parameters are learned dynamically using multi-task learning during training.

2.1.1. Extract Features Using Auto-Encoder

Like most unsupervised deep learning models, the auto-encoder seeks to achieve abstract learning of input samples by equating the network’s predicted output with the input samples. Rumelhart et al. [21] suggested the auto-encoder concept first, and then Bourlard et al. [22] discussed it. An encoder and a decoder are generally found in an auto-encoder. The encoder translates high-dimensional input samples to low-dimensional abstract representations to minimize sample dimensionality. The decoder translates the abstract representations into predicted outputs to replicate the input samples. It adjusts the network parameters to minimize the reconstruction error to acquire the best abstract representation of the input samples. Figure 3 depicts the auto-encoder structure.
Suppose we are given input samples $X \in \mathbb{R}^{d \times n}$, where $d$ is the dimension of each input sample and $n$ is the number of input samples. The weight matrices and biases of the encoder and decoder are $W_m$, $W_d$ and $b_m$, $b_d$, respectively, and $g(\cdot)$ is the activation function of each layer. The auto-encoder sends samples to the encoder, which uses a linear mapping followed by a nonlinear activation function to finish the encoding:
$$H = g(W_m X + b_m) \tag{1}$$
The decoder then decodes the encoded features to obtain the reconstruction $\hat{X}$ of the input samples. Given the encoding $H$, $\hat{X}$ can be thought of as a prediction of $X$ with the same dimension as $X$. The decoding procedure is similar to the encoding:
$$\hat{X} = g(W_d H + b_d) \tag{2}$$
The L2 loss is commonly employed as the cost function for reconstruction to optimize the weight matrix and bias parameters. It can be stated as follows:
$$L_{RE}(X, \hat{X}) = \frac{1}{n} \sum_{i=1}^{n} \left\| X_i - \hat{X}_i \right\|^2 \tag{3}$$
To learn feature representations in the sample data, auto-encoders commonly use a gradient descent technique to backpropagate the error, altering the network parameters and gradually reducing the reconstruction error function by iterative fine-tuning. Assuming that the learning rate is $\eta$, the auto-encoder’s weight and bias update formulas are as follows:
$$W = W - \eta \frac{\partial L_{RE}(X, \hat{X})}{\partial W} \tag{4}$$
$$b = b - \eta \frac{\partial L_{RE}(X, \hat{X})}{\partial b} \tag{5}$$
Figure 4 shows how this paper extracts characteristics from response data using a deep auto-encoder with five hidden layers. Deep auto-encoders with multiple hidden layers can learn more complex underlying feature representations of input data than auto-encoders with only one hidden layer. In Figure 4, layers $L_1$ to $L_4$ represent the encoder, while layers $L_4$ to $L_7$ represent the decoder. Each layer’s output is used as the next layer’s input. The encoder weight matrices $W_1$, $W_2$, and $W_3$ are used to learn the final compressed feature representation on the $L_4$ layer; $b_1$, $b_2$, and $b_3$ are the encoder biases. The decoder weight matrices $W_5$, $W_6$, $W_7$ and biases $b_5$, $b_6$, $b_7$ are responsible for reconstructing the input data on output layer $L_7$ from the learned feature representation on the $L_4$ layer. The function $g(\cdot)$ aims to strengthen the learning ability of the auto-encoder. In this paper, the auto-encoder adopts Equation (3) as the reconstruction loss function. When undamaged data are input, the reconstruction loss is slight; when damaged data are input, the reconstruction loss is often much larger.
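To make the encode-decode-reconstruct pipeline of Equations (1)-(3) concrete, the following is a minimal NumPy sketch using the layer sizes of Figure 4 (128-80-40-20-40-80-128). The tanh activation and the random, untrained weights are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [128, 80, 40, 20, 40, 80, 128]  # layers L1..L7 from Figure 4

# Randomly initialised weights/biases; a real model would train these
# via the gradient updates of Equations (4) and (5).
Ws = [rng.standard_normal((sizes[k + 1], sizes[k])) * 0.1 for k in range(6)]
bs = [np.zeros((sizes[k + 1], 1)) for k in range(6)]

def g(z):
    return np.tanh(z)  # assumed activation (a real output layer may be linear)

def autoencoder(X):
    """Forward pass: encode to the 20-d bottleneck (L4), then decode."""
    H = X
    for W, b in zip(Ws[:3], bs[:3]):   # encoder, Equation (1) per layer
        H = g(W @ H + b)
    Z = H                              # latent representation at L4
    for W, b in zip(Ws[3:], bs[3:]):   # decoder, Equation (2) per layer
        H = g(W @ H + b)
    return Z, H

def reconstruction_loss(X, X_hat):
    """Equation (3): mean squared L2 error over the n samples (columns)."""
    return np.mean(np.sum((X - X_hat) ** 2, axis=0))

X = rng.standard_normal((128, 5))      # 5 frames of 128 samples each
Z, X_hat = autoencoder(X)
print(Z.shape, X_hat.shape, reconstruction_loss(X, X_hat))
```

In practice the weights would be optimized by backpropagation; a deep learning framework's autograd would normally handle the derivatives in Equations (4) and (5).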

2.1.2. Introducing the Score-Guided Regularization Strategy to Expand Anomaly Score Differences

Specifically, the score-guided regularization strategy [23] extracts valuable information from apparently undamaged and damaged samples, assigning smaller scores to undamaged samples and higher scores to damaged samples. Making full use of apparent data enhances the representation learner’s optimization, driving undamaged and damaged samples in different directions. Furthermore, the score-guided network directly learns a discriminative metric. It is more robust because it provides anomaly scores end-to-end rather than relying on pre-defined distance metrics or reconstruction errors. Figure 5 depicts the score-guided network structure used in this article. The function $F(\cdot)$ is a pre-defined function used to learn the data’s features. In this model, the pre-defined function refers to the auto-encoder, $X$ represents the training data, and $F(X)$ represents the abstract representation of $X$. The score-guided network in this paper contains only two linear layers, $L_1$ and $L_2$, which convert the learned latent features into a score $s$. $W_{s1}$, $W_{s2}$ and $b_{s1}$, $b_{s2}$ are the weight matrices and biases of the two linear layers, respectively.
The loss function of the score-guided network is defined as follows:
$$L_{SE}(\cdot, s) = \begin{cases} \lambda_n \left| s - \mu_0 \right|, & F(\cdot) < \varepsilon \\ \lambda_a \max(0, a - s), & F(\cdot) \ge \varepsilon \end{cases} \tag{6}$$
where $\varepsilon$ is the threshold: after processing by the pre-defined function $F(\cdot)$, if the resulting value is lower than $\varepsilon$, we consider the current sample an apparently undamaged sample; otherwise, we consider it a suspected damaged sample. The parameter $a$ determines the guiding position of the anomaly score; $\lambda_n$ and $\lambda_a$ are weight parameters that adjust the guiding effect of the anomaly score. We set the target anomaly score for apparently undamaged data to a very small positive value $\mu_0$ approaching 0, because a target of exactly zero would force most weights of the score-guided network to be 0.
Since $F(\cdot)$ refers to the auto-encoder in this paper, we can take the reconstruction error of the auto-encoder as a self-supervised signal. Taking the loss function of the auto-encoder (Equation (3)) as $F(\cdot)$, the above loss function can be written as:
$$L_{SE}(X_i, \hat{X}_i, s) = \begin{cases} \lambda_n \left| s - \mu_0 \right|, & \left\| X_i - \hat{X}_i \right\|^2 < \varepsilon \\ \lambda_a \max(0, a - s), & \left\| X_i - \hat{X}_i \right\|^2 \ge \varepsilon \end{cases} \tag{7}$$
From another perspective, the regularization function can be rewritten as follows:
$$L_{SE} = \lambda_n L_{normal} + \lambda_a L_{abnormal} \tag{8}$$
$L_{normal}$ is the loss over the apparently undamaged data, and $L_{abnormal}$ is the loss over the suspected damaged data.
It is worth mentioning that, intuitively, a large $a$ can better enlarge the score gap between undamaged and damaged data. However, an overly large $a$ only stretches the distribution of scores, and the performance improvement plateaus as $a$ increases. Since the score-guided network learns the anomaly score, and $a$ determines the effect together with $\lambda_a$, the actual impact of $a$ is small and does not require tuning; we therefore set $a = 6$ in all experiments.
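As a concrete reading of Equation (7), the per-sample loss can be sketched as follows. The default values for mu0, lam_n, and lam_a are illustrative assumptions (this is not the authors' implementation), with a = 6 as used in the paper:

```python
def score_guided_loss(recon_err, s, eps, lam_n=1.0, lam_a=1.0, mu0=1e-3, a=6.0):
    """Equation (7) for one sample.

    recon_err -- reconstruction error ||x_i - x_hat_i||^2 from the auto-encoder
    s         -- anomaly score produced by the score-guided network
    eps       -- threshold separating apparent undamaged from suspected damaged
    """
    if recon_err < eps:                  # apparently undamaged sample:
        return lam_n * abs(s - mu0)      # pull its score towards mu0 (~0)
    return lam_a * max(0.0, a - s)       # suspected damaged: push score above a
```

A sample with low reconstruction error is penalized for any score away from mu0, while a suspicious sample incurs a hinge penalty until its score exceeds a, which is what widens the score gap.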

2.1.3. Tuning Parameters Using Multi-Task Learning

Multi-task learning is a technique for optimizing multi-objective models employed in many deep learning tasks [24]. Multi-task learning tries to enhance learning efficiency and prediction accuracy by learning numerous targets from a joint representation instead of constructing a separate model for each task [25]. It can be thought of as a way to increase inductive knowledge transfer by allowing complementary tasks to share domain information. The auto-encoder task and the score-guided network task are the two subtasks of the approach presented in this paper. The loss functions of the auto-encoder and the score-guided network are $L_{RE}(X, \hat{X})$ and $L_{SE}(X_i, \hat{X}_i, s)$, respectively, according to Equations (3) and (7).
In multi-task learning, the optimal training effect can only be achieved by weighting the loss function of the subtasks appropriately. The loss function of the algorithm in this paper adopts the typical loss function expression as follows:
$$L_{total} = \lambda_{RE} L_{RE}(X, \hat{X}) + \lambda_{SE} \frac{1}{n} \sum_{i=1}^{n} L_{SE}(X_i, \hat{X}_i, s) \tag{9}$$
where $\lambda_{RE}$ is the weight of the auto-encoder loss and $\lambda_{SE}$ is the weight of the score-guided network loss. The model’s performance depends heavily on the choice of these two parameters. We introduce uncertainty to determine these two parameters automatically [26]. The core idea of this method is to give lower loss weights to tasks with higher uncertainty, reducing the difficulty of training.
This strategy is based on a Bayesian model that includes two types of uncertainty: Epistemic uncertainty and Aleatoric uncertainty. Epistemic uncertainty is a type of model uncertainty that describes what our model does not know owing to a lack of training data. Our uncertainty about information that our data cannot explain is captured by Aleatoric uncertainty. There are two types of Aleatoric uncertainty: data-dependent uncertainty and task-dependent uncertainty. Data-dependent uncertainty is expected as a model output and is dependent on the input data. Task-dependent uncertainty is not dependent on the input data. It is not a model output; instead, it is a quantity that stays constant for all input data and varies between different tasks. Task-dependent uncertainty measures the relative confidence between tasks in a multi-task environment, representing the uncertainty inherent in the regression or classification task. Hence, we use task-dependent uncertainty as the basis for weighted loss.
After introducing noise (uncertainty) parameters $\sigma_1$ and $\sigma_2$ for the auto-encoder and the score-guided network, the final loss function of the proposed model is:
$$L(X, \hat{X}, s) = \frac{1}{2\sigma_1^2} L_{RE}(X, \hat{X}) + \frac{1}{2\sigma_2^2} \frac{1}{n} \sum_{i=1}^{n} L_{SE}(X_i, \hat{X}_i, s) + \log \sigma_1 \sigma_2 \tag{10}$$
We can see that as the noise $\sigma$ increases, the task’s weight decreases. The term $\log \sigma_1 \sigma_2$ is the regularization term of this loss function, which prevents either $\sigma$ from becoming too large and causing a serious imbalance in model training. Parameters $\sigma_1$ and $\sigma_2$ are optimized by the backpropagation algorithm:
$$\sigma_1 = \sigma_1 - \eta \frac{\partial L(X, \hat{X}, s)}{\partial \sigma_1} \tag{11}$$
$$\sigma_2 = \sigma_2 - \eta \frac{\partial L(X, \hat{X}, s)}{\partial \sigma_2} \tag{12}$$
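The uncertainty-weighted loss of Equation (10) and the updates of Equations (11) and (12) can be sketched as follows. The gradient is hand-derived here for illustration; in practice a framework's autograd would compute it:

```python
import numpy as np

def total_loss(l_re, l_se_mean, sigma1, sigma2):
    """Equation (10): uncertainty-weighted sum of the two task losses."""
    return (l_re / (2 * sigma1 ** 2)
            + l_se_mean / (2 * sigma2 ** 2)
            + np.log(sigma1 * sigma2))

def update_sigmas(l_re, l_se_mean, sigma1, sigma2, eta=0.01):
    """Gradient-descent updates of Equations (11)-(12).

    From Equation (10): dL/dsigma1 = -l_re / sigma1**3 + 1/sigma1,
    and likewise for sigma2 with the mean score-guided loss.
    """
    g1 = -l_re / sigma1 ** 3 + 1.0 / sigma1
    g2 = -l_se_mean / sigma2 ** 3 + 1.0 / sigma2
    return sigma1 - eta * g1, sigma2 - eta * g2

# A noisier task (larger sigma) receives a smaller effective weight.
print(total_loss(1.0, 1.0, sigma1=1.0, sigma2=2.0))
```

Note that the gradient vanishes when sigma squared equals the task loss, so each sigma settles at a value reflecting its task's typical loss magnitude.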

2.1.4. Data Generation

Consider a structure equipped with $n$ joints (and accelerometers). A total of $m$ experiments are carried out to construct the training data set. The first experiment ($E = 1$) involves measuring the $n$ acceleration signals across the entire structure. In each of the remaining $m - 1$ experiments, one joint is damaged. The data collected by each accelerometer can be divided into damaged data and undamaged data, as follows:
$$Damaged_i = [D_{E=i+1, J=i}] \tag{13}$$
$$Undamaged_i = [U_{E=1, J=i}, U_{E=2, J=i}, \ldots, U_{E=i, J=i}, U_{E=i+2, J=i}, \ldots, U_{E=m, J=i}] \tag{14}$$
where $D$ and $U$ indicate the measurements acquired for the damaged and undamaged structural cases, respectively, and $(U|D)_{E=m, J=i}$ represents the data obtained by the $i$-th accelerometer in the $m$-th experiment. Note that the undamaged data set for the $i$-th joint includes signals measured when all joints are undamaged ($U_{E=1, J=i}$) and signals measured when one of the other joints is damaged. The purpose of this is to eliminate the mutual influence between joints.
The next step is to normalize the data. To be more realistic, we do not normalize the damaged and undamaged data separately, as most current studies do, but normalize them together. The normalization process is expressed as follows:
$$N_i = Normalize(Damaged_i, Undamaged_i) \tag{15}$$
$$DN_i = N_i^d \tag{16}$$
$$UN_i = N_i^u \tag{17}$$
where $N_i$ is the vector after uniform standardization and $Normalize(\cdot)$ represents the normalization function. $DN_i$ and $UN_i$ represent the damaged and undamaged vectors in $N_i$, respectively. The next step is to partition the undamaged and damaged vectors into many frames, each with a fixed number ($n_s$) of samples. The outcome of this operation for joint $i$ can be written as:
$$DF_i = [DF_{i,1}, DF_{i,2}, DF_{i,3}, \ldots, DF_{i,n_d}] \tag{18}$$
$$UF_i = [UF_{i,1}, UF_{i,2}, UF_{i,3}, \ldots, UF_{i,n_u}] \tag{19}$$
where $DF_i$ and $UF_i$ are vectors containing the damaged and undamaged frames for joint $i$, respectively, and $n_d$ and $n_u$ are the total numbers of damaged and undamaged frames. Given the total number of samples in each acceleration signal, $n_T$, the values $n_d$ and $n_u$ can be computed as:
$$n_d = \frac{n_T}{n_s} \tag{20}$$
$$n_u = \frac{n \times n_T}{n_s} \tag{21}$$
It is clear from Equations (20) and (21) that the number of undamaged frames is much greater than the number of damaged frames for a given joint $i$. Some researchers select equal numbers of damaged and undamaged frames for training to avoid the impact of imbalance on performance, which is impracticable in reality. We therefore skip this balancing step and send the damaged and undamaged frames to training shuffled together. Figure 6 depicts the specific flow chart. After the data has been processed, it is fed into the model for training, and a trained model is generated. Data from an unknown state is processed according to the previous steps and input into the trained model for testing. Finally, the model generates a score $s$ representing the damage score of the current data.
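The data generation steps above, joint normalization (Equation (15)) followed by framing (Equations (18) and (19)), can be sketched as follows. The toy sizes and the z-score normalization are assumptions for illustration:

```python
import numpy as np

def make_frames(signal, n_s):
    """Split one acceleration signal into non-overlapping frames of n_s samples."""
    n_frames = len(signal) // n_s
    return signal[: n_frames * n_s].reshape(n_frames, n_s)

def build_joint_dataset(damaged_signal, undamaged_signals, n_s):
    """Normalize damaged and undamaged data together, then frame them.

    undamaged_signals -- list of n signals (the experiments kept for joint i)
    """
    stacked = np.concatenate([damaged_signal] + list(undamaged_signals))
    # Joint normalization (Equation (15)); z-score is an assumed choice here.
    stacked = (stacked - stacked.mean()) / stacked.std()
    d = stacked[: len(damaged_signal)]
    u = stacked[len(damaged_signal):]
    return make_frames(d, n_s), make_frames(u, n_s)

n_T, n_s, n = 1024, 128, 4                 # toy sizes, not the paper's
dam = np.random.randn(n_T)
und = [np.random.randn(n_T) for _ in range(n)]
DF, UF = build_joint_dataset(dam, und, n_s)
print(DF.shape, UF.shape)                  # (n_T // n_s, n_s), (n * n_T // n_s, n_s)
```

Normalizing both classes together, rather than separately, mirrors deployment, where the class of an incoming signal is unknown.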

2.2. Algorithm

Algorithm 1 illustrates the proposed algorithm’s process. $\xi$, $\psi$, and $\varphi$ represent the encoder, decoder, and score-guided network, respectively; $\sigma_1$ and $\sigma_2$ are the noise parameters of the auto-encoder and score-guided network. These parameters are initialized randomly and optimized in the training iterations (Steps 3–13) to minimize the loss in Equation (10). Specifically, the representation learner learns the data representation $Z$ through $Z = \xi(x)$ in Step 6. The scoring network then learns the anomaly score $S$ through $S = \varphi(Z)$ in Step 8. Step 9 calculates the loss function, and Steps 10 and 11 run the backpropagation algorithm and update the network parameters. The data representation and anomaly score distribution are optimized during training. Given a trained model, anomaly scores can be calculated directly for new data samples.
Algorithm 1. Damage Detection Network
Input: Training set $\{x_i\}^n$, $x_i \in \mathbb{R}^d$
Output: An optimized damage detection network $M$
Training:
1. Randomly initialize the network parameters of the encoder $\xi$, the decoder $\psi$, and the score-guided network $\varphi$
2. Initialize the noise parameters $\sigma_1, \sigma_2$
3. for $e \le$ MaxEpoch do
4.   Divide input data into batches
5.   for $x \in$ each batch do
6.     $Z = \xi(x)$
7.     $\hat{x} = \psi(Z)$
8.     $S = \varphi(Z)$
9.     Calculate loss function $L$ by Equation (10)
10.    Update the parameters of $\xi$, $\psi$ by Equations (4) and (5)
11.    Update $\sigma_1, \sigma_2$ by Equations (11) and (12)
12.  end for
13. end for
14. return optimized model $M$
Testing:
15. Input $x_{test}$
16. $S_{test} = M(x_{test})$
17. return anomaly score $S_{test}$

3. Experiment

3.1. Verification on the Qatar University Grandstand Simulator

This section uses the QU grandstand simulator (QUGS) [27] to verify the proposed damage detection algorithm.

3.1.1. Qatar University Grandstand Simulator

A total of 30 spectators can be accommodated in the QU grandstand simulator. As shown in Figure 7, the steel frame comprises 8 major girders and 25 filler beams supported by 4 columns. The 8 girders are 4.6 m long, while the 5 filler beams in the cantilevered area are around 1 m long and the remaining 20 beams are each 77 cm long. The two long columns are approximately 1.65 m long. The grandstand simulator’s primary steel frame has 30 accelerometers installed on the major girders at the 30 joints. For more detail on the structure, readers are directed to [27].

3.1.2. Experiment Setup

To obtain the training data set, researchers conducted 31 experiments [27]. The first experiment was carried out on the entire (undamaged) structure, whereas the others were carried out under damaged cases caused by loosening one or more of the 30 beam-to-girder joints. Following Section 2.1.4, the frame length $n_s$ was taken as 128 samples after standardization; hence, the vectors $DF_i$ and $UF_i$ together contain 63,488 frames for each joint $i$, where $n_d = 2048$ and $n_u = 61,440$. Of these frames, 72% were used for training, 8% for validation, and 20% for testing; the lengths of the training, validation, and test sets are therefore 45,711, 5,079, and 12,698, respectively. The model is trained using forward and backpropagation after the processed data has been shuffled.
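As a quick sanity check of the frame counts and split sizes quoted above (assuming the 72/8/20 split is a simple proportion rounded to whole frames):

```python
# Frame counts per joint from Section 2.1.4 applied to the QUGS data.
n_d, n_u = 2048, 61440
total = n_d + n_u                   # 63,488 frames per joint

# 72% / 8% / 20% split; the test set takes the remainder.
train = round(0.72 * total)
val = round(0.08 * total)
test = total - train - val
print(total, train, val, test)
```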
As illustrated in Figure 4, a deep auto-encoder with seven layers was used. In each test, the size of the deep auto-encoder’s input layer was equal to the length of the collected acceleration frames. Layers $L_1$ through $L_7$ in Figure 4 have sizes of 128, 80, 40, 20, 40, 80, and 128, respectively. The score-guided network comprises two linear layers with sizes of 20 and 10 units. The batch size in each experiment is 1024, the number of training epochs is 100, and the Adam optimizer’s learning rate is 0.0001. Table 1 shows the anomaly scores for the 30 accelerometers. All experiments were performed 5 times.

3.1.3. Discussions

The scores of the trained models on the test sets are shown in Table 1. The ID column is the accelerometer’s code; the Undamaged and Damaged columns give the average anomaly scores of the undamaged and damaged data. As indicated in Table 1, the anomaly scores for the undamaged and damaged data can be separated for most accelerometers. Damaged data can be distinguished from undamaged data if an appropriate strategy for selecting an anomaly score threshold is used. The first accelerometer, for example, has an average anomaly score of 0.384 for undamaged data and 5.784 for damaged data, indicating good discrimination. To examine the score distributions quickly, we draw the score distribution density maps of the first five accelerometers in Figure 8. The x-axis represents the score and the y-axis represents its density; (a) to (e) are the score distribution density maps of accelerometers 1 to 5. The undamaged and damaged data scores are greatly staggered in the distributions displayed in Figure 8, indicating that our model successfully enlarges the gap between the undamaged and damaged data.
The ROC-AUC is used as a measurement indicator to evaluate the performance of our algorithm (denoted MTL+SG−AE below). The ROC curve depicts the relative trade-off between true positive (TP) and false positive (FP) probabilities, where TP refers to the probability of judging undamaged samples as undamaged, and FP refers to the probability of judging damaged samples as undamaged [28]. When choosing comparison algorithms, we found that cluster-based methods account for a large proportion of unsupervised structural damage detection, so we chose two classic cluster-based unsupervised methods as comparison algorithms: the Gaussian Mixture Model (GMM) and Support Vector Clustering (SVC). At the same time, we noticed that combining auto-encoders with traditional algorithms has also become a trend in unsupervised structural damage detection, so we also use the algorithm combining an auto-encoder and a one-class support vector machine (AE+OC−SVM) as one of the comparison algorithms. Finally, to demonstrate the effectiveness of multi-task learning, we also include the algorithm without multi-task learning (SG−AE) as a comparative experiment.
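For reference, the ROC-AUC can be computed equivalently as a rank statistic over the anomaly scores; a minimal dependency-free sketch (treating damaged samples as the positive, higher-scoring class; this is not the authors' implementation):

```python
def roc_auc(undamaged_scores, damaged_scores):
    """Rank-based AUC: the probability that a randomly chosen damaged
    sample receives a higher anomaly score than a randomly chosen
    undamaged sample (ties count as half)."""
    pairs = len(undamaged_scores) * len(damaged_scores)
    wins = sum(d > u for d in damaged_scores for u in undamaged_scores)
    ties = sum(d == u for d in damaged_scores for u in undamaged_scores)
    return (wins + 0.5 * ties) / pairs
```

An AUC of 1.0 means every damaged frame outscores every undamaged frame, while 0.5 corresponds to chance-level separation.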
The best ROC curves of the five methods on this dataset are plotted in Figure 9; the graph shows that MTL+SG-AE has the maximum area under the curve (AUC). Among the five algorithms, only AE+OC-SVM, SG-AE, and MTL+SG-AE are deep algorithms. Figure 10a shows how the AUC values of these three algorithms change with the training epochs: the performance of SG-AE and MTL+SG-AE is significantly better than that of AE+OC-SVM, and the multi-task learning we introduce improves MTL+SG-AE over SG-AE. The mean AUC of the five algorithms is shown in Figure 10b. As can be seen, the average AUC of every algorithm exceeds 0.5, implying that all of them have some discriminative effect on this dataset; MTL+SG-AE, however, achieves the highest average AUC of 0.708.

3.2. Verification on Experimental Phase II of the SHM Benchmark Data

The study presented in this section is based on experimental phase II of the SHM benchmark problem data [29].

3.2.1. Experimental Phase II of the SHM Benchmark Data

The International Association for Structural Control (IASC)–American Society of Civil Engineers (ASCE) Structural Health Monitoring Task Group published the data as a benchmark in 2003. The benchmark frame is a four-story steel structure built at the University of British Columbia and fitted with fifteen accelerometers; for more information on the laboratory structure, readers are directed to [29]. Nine structural damage cases were simulated on the benchmark frame, and the acceleration output of the 15 accelerometers was measured for each case under ambient excitation. As illustrated in Table 2, the structural damage progressed from undamaged (Case 1) to severely damaged (Case 9).
It should be noted that the vibration data from accelerometers 1 to 3, mounted at the base of the structure, were ignored due to insufficient power and information [30]; therefore, this paper analyzes the vibration signals of accelerometers 4 to 15. Because the nature of the vibration signals matters, some signals are visualized for inspection. For example, Figure 11 shows the acceleration measurement history of the 10th accelerometer from Case 1 to Case 4, with (a) to (d) representing Cases 1 to 4. As can be seen from Figure 11, the vibration response of the 10th accelerometer under ambient excitation appears to be non-stationary. More precisely, Entezami et al. [31] showed that most of the vibration signals of the ASCE structure exhibit non-stationary behavior.

3.2.2. Experiment Setup

As previously stated, 12 accelerometers were employed in training. According to [29], the acceleration measurements were captured at 200 Hz in all cases. The vibration response was measured for 300 s in Cases 1 through 5, 227.84 s in Case 6, and 900 s in the remaining cases.
The data collected from the undamaged structure (Case 1) and the data collected while the structure was severely damaged (Case 9) were used for training in this experiment. Following Section 2.1.4, with n_s = 128 samples as the frame length, each of Case 1's 12 undamaged signals was divided into n_u = 468 frames, while each of Case 9's 12 damaged signals was divided into n_d = 1404 frames. With 70% of the data used for training and 30% for testing, 328 undamaged samples and 983 damaged samples were used for training for each accelerometer. The model configuration of this experiment is the same as that in Section 3.1.2.
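The framing step can be sketched as follows; the function below is our illustrative reimplementation of the Section 2.1.4 decomposition (non-overlapping frames, incomplete tail dropped), not the authors' exact code. A 300 s record at 200 Hz then yields 468 frames of 128 samples, matching the frame count above.

```python
import numpy as np

def split_into_frames(signal, frame_len=128):
    """Cut a 1-D acceleration record into non-overlapping frames of
    frame_len samples, discarding the incomplete tail."""
    signal = np.asarray(signal)
    n_frames = signal.size // frame_len
    return signal[: n_frames * frame_len].reshape(n_frames, frame_len)

fs = 200                      # sampling rate in Hz
case1 = np.zeros(300 * fs)    # placeholder for one 300 s Case 1 record
frames = split_into_frames(case1)
print(frames.shape)  # (468, 128)
```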
To test the trained models and reflect the health status of the structure more intuitively, the data collected in the 9 cases are decomposed into samples according to Section 2.1.4 and then fed to the corresponding trained model. The model outputs an anomaly score for each sample, reflecting the likelihood of structural damage. Anomaly scores for all cases are shown in Table 3. All experiments were repeated 5 times.

3.2.3. Discussions

The experimental results are shown in Table 3. For the structure as a whole, the mean anomaly score rises steadily as the overall damage degree increases. In the undamaged case (Case 1), the average anomaly score is 0.824, the lowest of all cases, indicating the lowest probability of structural damage. In Case 2, removing a single diagonal brace increased the value to 1.150. As additional braces were removed in Cases 3–6, the average anomaly score climbed gradually from 1.298 in Case 3 to 1.792 in Case 6. In Case 7, when all of the remaining diagonal braces were removed, the anomaly score jumped to 2.118. Finally, with the loosened bolts of Cases 8 and 9, the anomaly score grew to 3.655, a significant gap from the undamaged case. For individual sensors, the proposed algorithm exhibits remarkable results on most accelerometers: as the damage degree increases, the anomaly score also increases. In Case 9, most anomaly scores are greater than 4, and some accelerometers even exceed 5, approaching 6. Figure 12 plots the score distribution of the 1st accelerometer in the 9 cases; it shows that the anomaly score grows with the degree of damage, with clear boundaries between different damage degrees.
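The monotone upward trend of the mean anomaly score described above can be checked directly against the last row of Table 3:

```python
import numpy as np

# Mean anomaly scores per case, copied from the last row of Table 3.
mean_scores = np.array([0.824, 1.150, 1.298, 1.456, 1.654,
                        1.792, 2.118, 2.622, 3.655])

# Every case-to-case increment is positive, so the mean score is
# strictly increasing with the damage degree from Case 1 to Case 9.
print(bool(np.all(np.diff(mean_scores) > 0)))  # True
```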
Interestingly, the trained model can also detect the degree of structural damage in the remaining cases, despite being trained on only a portion of the data from Cases 1 and 9. In other words, even though the acceleration signals of Cases 2–8 and the held-out frames of Cases 1 and 9 were unfamiliar to the algorithm, it assigned anomaly scores that accurately reflect the actual degree of damage in all cases.
We also compare our proposed method with the four algorithms mentioned in Section 3.1.3. Figure 13 shows the best ROC curves of the five algorithms; in this figure, the ROC curves of SG-AE and MTL+SG-AE overlap, and both AUC values are 1. According to [28], an AUC of 1 corresponds to a perfect model, so both algorithms are perfect on this dataset. Figure 14a shows how the AUC values of the three deep algorithms vary during training. Although the final AUC values are all 1, the initial AUC of MTL+SG-AE is 0.8 while that of the other two algorithms is 0; MTL+SG-AE thus converges to 1 faster, reflecting its better training behavior. Figure 14b depicts the average AUC of the five algorithms: the performance of MTL+SG-AE is much better than that of GMM, SVC, and AE+OC-SVM. Moreover, compared with SG-AE, the average AUC improves from 0.961 to 0.998, which shows the effectiveness of multi-task learning.
To further substantiate these claims, we also compared with another method using the SHM benchmark data. The authors of [5] used the SHM benchmark data to verify the performance of their unsupervised method, which utilizes the distribution of PoD values throughout the structure to identify the location of structural damage. In their study, Case 1 has the smallest PoD value and Case 9 the largest, similar to the distribution of anomaly scores on this dataset. However, they only use undamaged data for training, without accounting for the difficulty of ensuring that acquired data are completely undamaged in practice. Such methods can support decision makers, but our algorithm is more suitable for practical scenarios and can be more flexibly integrated into other damage detection frameworks.

4. Conclusions

This study proposes a new unsupervised machine learning-based structural damage detection method. Its biggest difference from other unsupervised methods is the use of contaminated data for training, which makes the model more suitable for real structural monitoring scenarios. In particular, the proposed method is composed of: (1) an auto-encoder, which extracts the features of each sample; (2) a score-guided regularization network, which widens the anomaly score gap between healthy and damaged samples so that the model learns more representative information; specifically, it makes full use of clearly anomalous data to enhance the optimization of the representation learner, so that samples with slight damage are also assigned larger anomaly scores; and (3) multi-task learning, which simplifies the parameter tuning process by introducing uncertainty parameters. After a detailed description of each phase, we validated the performance on the QUGS and ASCE (II) datasets. Experimental results show that the proposed method achieves good results on both datasets, providing reliable estimates for structural damage detection.
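The multi-task weighting with uncertainty parameters follows the homoscedastic-uncertainty scheme of [26]. The sketch below is a minimal numpy rendering of that loss, with made-up task losses; the exact loss composition in the proposed model may differ.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_sigmas):
    """Uncertainty-weighted multi-task loss in the spirit of [26]:
    total = sum_i ( L_i / (2 * sigma_i^2) + log(sigma_i) ),
    where each log(sigma_i) is a learnable task-weight parameter."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_sigmas = np.asarray(log_sigmas, dtype=float)
    precision = np.exp(-2.0 * log_sigmas)  # 1 / sigma_i^2
    return float(np.sum(0.5 * precision * task_losses + log_sigmas))

# e.g., a reconstruction loss and a score-regularization loss,
# both with sigma_i = 1 (log_sigma = 0):
print(uncertainty_weighted_loss([0.8, 0.3], [0.0, 0.0]))  # 0.55
```

The log(sigma_i) term keeps the learned weights from collapsing to zero, so the trade-off between tasks is tuned during training rather than by hand.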
To conclude, we highlight that the proposed method can be applied in many structural damage monitoring scenarios to develop risk mitigation strategies:
(1)
The proposed method generates a separate model for each monitoring point, so it can be applied well even when few monitoring points are available in real-world structural monitoring.
(2)
The proposed method outputs anomaly scores in an end-to-end manner without requiring prior knowledge to define various indicators in advance, making the model more convenient to apply to different scenarios.
(3)
The proposed method uses contaminated data for training, which is more suitable for actual structural monitoring scenarios and provides higher applicability.
Although the proposed unsupervised method performs well in detecting structural damage, defining a score threshold from the output anomaly scores remains an open challenge, as it does for unsupervised learning in general. Variant algorithms based on extreme value theory are worth considering for threshold estimation.
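As a pragmatic baseline before adopting extreme-value-theory estimators, a threshold can be set at a high empirical quantile of the anomaly scores seen during training. The sketch below, using synthetic scores, is one such hypothetical scheme, not a method from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic anomaly scores standing in for a model's training-set output.
train_scores = rng.normal(1.0, 0.5, size=2000)

# Flag as damaged anything above the 99th percentile of training scores;
# by construction roughly 1% of training samples exceed it.
threshold = float(np.quantile(train_scores, 0.99))
alarm_rate = float((train_scores > threshold).mean())
print(round(alarm_rate, 3))
```

Such a quantile rule fixes the false alarm rate on the training distribution but says nothing about tail behavior beyond it, which is exactly where extreme value theory would help.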

Author Contributions

Conceptualization, Y.Q.; methodology, Y.Q.; software, Y.Q.; validation, Y.Q.; formal analysis, Y.Q. and S.Z.; investigation, Y.Q.; resources, Y.Q. and K.C.; data curation, Y.Q.; writing—original draft preparation, Y.Q.; writing—review and editing, Y.Q. and S.Z.; visualization, Y.Q.; supervision, S.Z.; project administration, Y.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Farrar, C.R.; Worden, K. Structural Health Monitoring: A Machine Learning Perspective, 1st ed.; John Wiley & Sons: Chichester, UK, 2012; pp. 16–239.
  2. Barthorpe, R.J. On Model- and Data-Based Approaches to Structural Health Monitoring. Ph.D. Thesis, University of Sheffield, Sheffield, UK, 2014.
  3. Alcover, I.F. Data-Based Models for Assessment and Life Prediction of Monitored Civil Infrastructure Assets. Ph.D. Thesis, University of Surrey, Guildford, UK, 2014.
  4. Mohsen, A.; Armin, D.E.; Gokhan, P. Data-driven structural health monitoring and damage detection through deep learning: State-of-the-art review. Sensors 2020, 20, 2778.
  5. Abdeljaber, O.; Avci, O.; Kiranyaz, M.S.; Boashash, B.; Sodano, H.; Inman, D.J. 1-D CNNs for structural damage detection: Verification on a structural health monitoring benchmark data. Neurocomputing 2018, 275, 1308–1317.
  6. Avci, O.; Abdeljaber, O.; Kiranyaz, M.S.; Inman, D. Structural Damage Detection in Real Time: Implementation of 1D Convolutional Neural Networks for SHM Applications. In Structural Health Monitoring & Damage Detection, 7th ed.; Springer: Berlin, Germany, 2017; pp. 49–54.
  7. Sergio, R.; Angelo, C.; Valeria, L.; Uva, G. Machine-learning based vulnerability analysis of existing buildings. Autom. Constr. 2021, 132, 103936.
  8. Sanayei, M.; Khaloo, A.; Gul, M.; Catbas, F.N. Automated finite element model updating of a scale bridge model using measured static and modal test data. Eng. Struct. 2015, 102, 66–79.
  9. Ding, X.M.; Li, Y.H.; Belatreche, A.; Maguire, L. An experimental evaluation of novelty detection methods. Neurocomputing 2014, 135, 313–327.
  10. Sarmadi, H.; Karamodin, A. A novel anomaly detection method based on adaptive Mahalanobis-squared distance and one-class kNN rule for structural health monitoring under environmental effects. Mech. Syst. Signal Process. 2019, 140, 106495.
  11. Da Silva, S.; Júnior, M.D.; Junior, V.L.; Brennan, M.J. Structural damage detection by fuzzy clustering. Mech. Syst. Signal Process. 2008, 22, 1636–1649.
  12. Rafiei, M.H.; Adeli, H. A novel unsupervised deep learning model for global and local health condition assessment of structures. Eng. Struct. 2018, 156, 598–607.
  13. Long, J.; Buyukozturk, O. Automated structural damage detection using one-class machine learning. In Dynamics of Civil Structures, 4th ed.; Springer: Cham, Germany, 2014; pp. 117–128.
  14. Wang, Z.; Cha, Y.J. Unsupervised deep learning approach using a deep auto-encoder with a one-class support vector machine to detect structural damage. Struct. Health Monit. 2020, 20, 406–425.
  15. Diez, A.; Khoa, N.L.D.; Alamdari, M.M.; Wang, Y.; Chen, F.; Runcie, P. A clustering approach for structural health monitoring on bridges. J. Civ. Struct. Health Monit. 2016, 6, 429–445.
  16. Cha, Y.-J.; Wang, Z.L. Unsupervised novelty detection–based structural damage localization using a density peaks-based fast clustering algorithm. Struct. Health Monit. 2017, 17, 313–324.
  17. Fan, W.; Qiao, P. Vibration-based Damage Identification Methods: A Review and Comparative Study. Struct. Health Monit. 2011, 10, 83–111.
  18. Cao, M.S.; Sha, G.G.; Gao, Y.F.; Ostachowicz, W. Structural damage identification using damping: A compendium of uses and features. Smart Mater. Struct. 2017, 26, 043001.
  19. Barman, S.; Mishra, M.; Maiti, D.; Maity, D. Vibration-based damage detection of structures employing Bayesian data fusion coupled with TLBO optimization algorithm. Struct. Multidiscip. Optim. 2021, 64, 2243–2266.
  20. Pathirage, C.S.N.; Li, J.; Li, L.; Hao, H.; Liu, W.; Ni, P. Structural damage identification based on autoencoder neural networks and deep learning. Eng. Struct. 2018, 172, 13–28.
  21. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
  22. Bourlard, H.; Kamp, Y. Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern. 1988, 59, 291–294.
  23. Huang, Z.Y.; Zhang, B.H.; Hu, G.Q.; Li, L.; Xu, Y.; Jin, Y. Enhancing Unsupervised Anomaly Detection with Score-Guided Network. arXiv 2021, arXiv:2109.04684.
  24. Caruana, R. Multitask learning. Mach. Learn. 1997, 28, 41–75.
  25. Baxter, J. A model of inductive bias learning. J. Artif. Intell. Res. 2000, 12, 149–198.
  26. Cipolla, R.; Gal, Y.; Kendall, A. Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7482–7491.
  27. Abdeljaber, O.; Avci, O.; Kiranyaz, S.; Gabbouj, M.; Inman, D.J. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. J. Sound Vib. 2017, 388, 154–170.
  28. Fawcett, T. ROC Graphs: Notes and Practical Considerations for Researchers; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003; pp. 1–38.
  29. Dyke, S.J.; Bernal, D.; Beck, J.; Ventura, C. Experimental Phase II of the Structural Health Monitoring Benchmark Problem. In Proceedings of the 16th ASCE Engineering Mechanics Conference, Seattle, WA, USA, 16–18 July 2003.
  30. Sarmadi, H.; Entezami, A.; Khorram, M.D. Energy-based damage localization under ambient vibration and non-stationary signals by ensemble empirical mode decomposition and Mahalanobis-squared distance. J. Vib. Control 2019, 26, 1012–1027.
  31. Entezami, A.; Shariatmadar, H. Damage localization under ambient excitations and non-stationary vibration signals by a new hybrid algorithm for feature extraction and multivariate distance correlation methods. Struct. Health Monit. 2018, 18, 347–375.
Figure 1. The basic flow of an unsupervised score-guided network. In this model, the representation learner is simultaneously trained with the scoring network. The scoring network guides the representation learner to learn better discriminative information from the input data.
Figure 2. The unsupervised structural damage detection model’s overall schematic diagram.
Figure 3. Auto-encoder structure.
Figure 4. The structure of the auto-encoder in this paper.
Figure 5. The score-guided network structure of this paper.
Figure 6. The brief flow of the model from data processing to training model to testing.
Figure 7. The main steel frame of the QU grandstand simulator [27].
Figure 8. Score distribution of the first five accelerometers; the x-axis represents the score, and the y-axis represents the density of the score distribution. (a–e) represent accelerometers 1 to 5.
Figure 9. The best ROC curves of five algorithms.
Figure 10. (a) Graph of the AUC values of the three deep algorithms changing with the training epochs. (b) Average AUC of the five algorithms.
Figure 11. The acceleration response of the 10th accelerometer from Case 1 to 4. (a–d) represent Cases 1 to 4.
Figure 12. Score density plot of the 1st accelerometer in 9 cases.
Figure 13. The best ROC curves of the five algorithms, in which the curve of SG-AE coincides with the curve of MTL+SG-AE.
Figure 14. (a) Graph of the AUC values of the three deep algorithms changing with the training epochs; (b) Average AUC of the five algorithms.
Table 1. Mean anomaly scores of undamaged and damaged data. The ID column represents the code of the accelerometer; the Undamaged column represents the average anomaly score of the undamaged data; and the Damaged column represents the average anomaly score of the damaged data.

ID  Undamaged  Damaged    ID  Undamaged  Damaged    ID  Undamaged  Damaged
 1    0.384     5.785     11    0.802     4.108     21    0.893     4.361
 2    0.786     3.752     12    0.819     3.887     22    0.590     6.008
 3    0.882     7.248     13    0.668     3.779     23    0.802     3.818
 4    0.563     4.521     14    0.756     5.063     24    0.817     1.674
 5    0.219     7.104     15    0.999     3.854     25    1.059     3.015
 6    0.905     4.124     16    0.860     4.486     26    0.507     2.322
 7    0.969     1.955     17    0.986     0.081     27    0.490     5.482
 8    0.330     4.820     18    1.066     2.937     28    0.320     4.509
 9    0.858     4.126     19    0.789     3.314     29    0.480     1.041
10    0.766     3.177     20    0.874     3.786     30    1.300     0.097
Table 2. Description of the structural cases in the benchmark problem.

Structural Case  Description
1                Undamaged.
2                On the southeast corner, a brace on the first floor is removed.
3                Braces on the first and fourth levels are removed on the southeast corner.
4                On the southeast corner, braces on all floors are removed.
5                All braces on the east are unfastened.
6                Case 5 + braces of the second floor on the north face are removed.
7                All braces are separated.
8                Case 7 + loosened bolts at ends of the beam on the east face, north side, on the first and second levels.
9                Case 7 + loosened bolts at ends of the beam on the east face, north side, on all levels.
Table 3. Anomaly scores of 12 accelerometers in 9 cases.

ID  Location   Orientation  Case1   Case2   Case3   Case4   Case5   Case6   Case7   Case8   Case9
 1  1F/West    N/S          1.080   1.533   1.823   2.684   2.733   2.822   2.722   3.464   5.179
 2  1F/Center  E/W          0.108   0.106   0.104   0.096   0.970   0.980   0.111   0.113   0.113
 3  1F/East    N/S          1.070   1.151   1.392   1.695   1.878   1.967   2.168   3.313   4.601
 4  2F/West    N/S          1.080   1.217   1.727   1.789   1.842   1.878   2.558   3.288   5.438
 5  2F/Center  E/W         −0.299  −0.289  −0.296  −0.306  −0.303  −0.302  −0.291  −0.295  −0.291
 6  2F/East    N/S          1.132   1.820   1.847   1.858   1.892   1.911   2.984   3.782   5.475
 7  3F/West    N/S          1.135   1.411   1.572   1.882   2.008   2.414   2.469   3.147   4.407
 8  3F/Center  E/W          1.120   2.189   2.226   2.289   2.324   2.345   4.068   4.364   4.509
 9  3F/East    N/S          1.142   1.534   1.668   1.830   1.853   2.332   3.155   3.765   4.037
10  4F/West    N/S          1.090   1.406   1.756   1.779   2.094   2.253   2.242   2.579   4.422
11  4F/Center  E/W          0.185   0.174   0.167   0.175   0.175   0.176   0.182   0.180   0.179
12  4F/East    N/S          1.042   1.548   1.594   1.705   2.383   2.733   3.048   3.767   5.790
Mean score                  0.824   1.150   1.298   1.456   1.654   1.792   2.118   2.622   3.655
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Citation: Que, Y.; Zhong, S.; Chen, K. A Score-Guided Regularization Strategy-Based Unsupervised Structural Damage Detection Method. Appl. Sci. 2022, 12, 4887. https://doi.org/10.3390/app12104887