Article

Damage Detection in Glass Fibre Composites Using Cointegrated Hyperspectral Images

Department of Robotics and Mechatronics, Faculty of Mechanical Engineering and Robotics, AGH University of Krakow, 30-059 Krakow, Poland
*
Authors to whom correspondence should be addressed.
Sensors 2024, 24(6), 1980; https://doi.org/10.3390/s24061980
Submission received: 31 January 2024 / Revised: 13 March 2024 / Accepted: 15 March 2024 / Published: 20 March 2024

Abstract

Hyperspectral imaging (HSI) is a remote sensing technique that has been successfully applied for the task of damage detection in glass fibre-reinforced plastic (GFRP) materials. Similarly to other vision-based detection methods, one of the drawbacks of HSI is its susceptibility to the lighting conditions during the imaging, which is a serious issue for gathering hyperspectral data in real-life scenarios. In this study, a data conditioning procedure is proposed for improving the results of damage detection with various classifiers. The developed procedure is based on the concept of signal stationarity and cointegration analysis, and achieves its goal by performing the detection and removal of the non-stationary trends in hyperspectral images caused by imperfect lighting. To evaluate the effectiveness of the proposed method, two damage detection tests have been performed on a damaged GFRP specimen: one using the proposed method, and one using an established damage detection workflow, based on the works of other authors. Application of the proposed procedure in the processing of a hyperspectral image of a damaged GFRP specimen resulted in significantly improved accuracy, sensitivity, and F-score, independently of the type of classifier used.

1. Introduction

Glass fibre-reinforced polymers (GFRPs) are a type of composite consisting of glass fibres embedded in a polymer matrix. Using GFRPs in place of more traditional engineering materials, such as polymers, metals, or ceramics, has become increasingly widespread due to a number of advantageous factors [1,2]. Among them, the most important are their exceptional strength-to-weight ratio, high resistance to harsh environmental conditions, high durability, and ability to be manufactured into complex shapes using a range of processes. Due to these advantages, GFRPs are among the most widely utilized composite materials across many fields of engineering, e.g., the manufacturing of aircraft fuselage and wing parts, ship hulls, auto body parts, printed circuit boards, pressure tanks, concrete reinforcement bars, and blades of wind turbines. Despite their numerous advantages, successfully using GFRPs also poses a challenge, due to the increased number of failure modes of composite materials [3].
The most common and serious failure modes in GFRPs are the development of cracks and delaminations. Typical causes of the damage include manufacturing-induced defects, cyclic and dynamic loading, and exposure to environmental factors [4,5], which result from varying and difficult operating conditions. The presence of manufacturing-induced defects leads to failure in the early stages of service by significantly reducing the mechanical properties of the material. However, these kinds of defects can be avoided by employing extensive post-manufacturing quality control. The loading and environmental issues are much harder to circumvent. Some of these problems can be avoided in the early stages of designing GFRP structures, especially by supporting the design with data gathered from already deployed structures and provided by simulations, which can achieve strikingly accurate results thanks to improved simulation techniques [6,7]. While this process is vital to preventing damage, the operational conditions of GFRP structures can be unpredictable. Because of this, structural health monitoring (SHM) and damage detection techniques are employed to improve safety and power generation efficiency, and to minimize maintenance costs.
Most damage detection techniques are based on strain measurement, acoustic emission, ultrasound, vibration, thermography, or vision [8]. Strain measurement-based damage detection utilizes strain sensors, either installed on the surface or integrated into the blade itself, and infers the presence of damage by detecting anomalies in the strain signal [9]. The drawback of this approach is that the accuracy of damage detection depends on the distance between the damage and the sensor. Acoustic emission events are detected by piezoelectric sensors installed on the blade of the turbine that register the elastic waves caused by the release of energy during the appearance of the damage [10]. The main challenges these systems face are distinguishing between acoustic emission events caused by the damage and the noise, and the high internal damping of GFRPs. Ultrasound-based methods have been proven to reliably detect damage using the interaction between the damage and the ultrasonic waves [11]. However, ultrasonic tests are limited by the accessibility of the tested object, which is especially unfavourable in offshore turbines. Vibration-based methods aim to detect changes in mechanical properties with the use of modal characteristics [12]. It has been observed that these kinds of methods fail to detect damage in its early stages. The thermography-based approach aims to detect damage using remotely registered thermal gradients on the surface of the wind turbine blade; however, similarly to the vibration-based approach, it is unable to detect damage in its early stages, and is prone to being heavily influenced by environmental factors such as temperature and humidity [13].
One of the most promising advances in SHM and damage detection was made in the field of vision-based methods [14]. Some of the techniques used in vision-based methods include stereoscopic vision, digital image correlation, computational intelligence-based algorithms and hyperspectral imaging (HSI) [15,16]. Overall, the quality of results obtained using vision-based methods is dependent on the quality of lighting [17], which is understandably hard to control in real-life applications due to the environmental influences. Hyperspectral imaging also suffers from that drawback. However, due to the increased amount of data gathered in HSI, in comparison to the other vision-based methods, the filtering of such influence yields better results.
The current methods developed for overcoming the lighting issue in vision-based methods, and especially in hyperspectral imaging, are advanced classification methods, such as classifiers operating on spectral–spatial data [18,19,20]. Instead of classifying each pixel alone, these classifiers also include data about the neighbouring pixels and the location of the pixel in question in the learning data [21], which lowers their sensitivity to imperfect lighting. This approach, however, performs best in scenarios where the different endmembers (classes) present in the hyperspectral image differ in chemical composition from one another, resulting in a high separation of those classes. In cases where physical properties, such as the presence of cracks or delaminations, define the endmembers, the spectral–spatial approach fails to distinguish whether a higher-intensity pixel is an endmember or a result of uneven lighting.
To resolve this issue, we propose a technique which utilizes cointegration analysis as a data conditioning tool and which aims to minimize the influence of imperfect lighting on the performance of HSI-based damage detectors. To the best of the authors’ knowledge, cointegration analysis has not previously been investigated in the literature as a data conditioning process for the classification of hyperspectral images used for damage detection in GFRPs, while its proven usefulness in trend removal applications makes it an appealing option to explore for HSI processing.

2. Materials and Methods

2.1. Damage Detection Using Hyperspectral Imaging

In hyperspectral imaging, a series of monochromatic images is registered over a wide range of the electromagnetic spectrum. While a traditional digital photograph contains three channels, each corresponding to one of the red, green, or blue colours, hyperspectral images can contain hundreds of channels, each containing information about a single wavelength of electromagnetic waves. Most commonly, hyperspectral images cover the visible part of the spectrum, around 380 to 700 nm, and a part of the infrared spectrum, in our case near infrared (NiR) and short-wave infrared (SWiR) at 700 to 2500 nm. Hyperspectral images are stored in data structures called hypercubes. Their dimensions are x × y × λ, where x and y are spatial dimensions, and λ is the spectral dimension.
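As an illustration of this data layout, the following minimal NumPy sketch (ours, not part of the original workflow) shows how a hypercube can be stored and sliced; the array sizes and contents are placeholders.
```python
# A minimal NumPy sketch (ours, not from the paper) of the hypercube layout
# described above; the array sizes are illustrative, smaller than the real cubes.
import numpy as np

x_size, y_size, n_bands = 160, 250, 375            # spatial x, spatial y, spectral bands
hypercube = np.zeros((x_size, y_size, n_bands))    # placeholder hypercube

# The spectrum of a single pixel is a 1-D slice along the spectral dimension.
pixel_spectrum = hypercube[100, 200, :]            # shape: (n_bands,)

# A single monochromatic image (one wavelength) is a 2-D slice.
band_image = hypercube[:, :, 50]                   # shape: (x_size, y_size)
print(pixel_spectrum.shape, band_image.shape)
```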
In the process of hyperspectral imaging, a specimen is lit with a white light by the illuminator. After the interaction of electromagnetic radiation with the material, the reflected light is registered with a hyperspectral camera and saved into a hypercube, as shown by a schematic in Figure 1, and in the photograph of the hyperspectral camera in Figure 2.
Hyperspectral imaging is based on the interaction of light with matter, primarily through the mechanisms of absorption and reflection. Absorption is mainly influenced by the chemical composition of the matter. Every element has a characteristic spectral signature that describes the intensity at which every wavelength in a given spectrum is absorbed. Thanks to this phenomenon, hyperspectral images contain chemical information about the scanned object, similar to the information that could be obtained using a spectrometric test. On the other hand, the phenomenon of reflection is primarily influenced by the physical features of the scanned object. Unlike chemical differences, which affect only a select few wavelengths, physical differences in the scanned object affect the whole spectrum with nearly the same intensity at every wavelength of the hyperspectral image.
Thanks to its ability to detect chemical and physical changes, HSI has found uses in the fields of medicine, agriculture, remote sensing, and many more [22,23]. In our study, we are focusing on the application of hyperspectral imaging in damage detection systems for GFRP composites.
The great amount of information and high dimensionality of hyperspectral images limit the use of conventional digital image processing techniques. One way of circumventing this issue is the deployment of classification algorithms, either conventional ones, such as Adaptive Cosine Estimators (ACEs), or machine learning-based ones, such as support vector machines (SVMs) or artificial neural networks (ANNs).
While it has been proven that such systems can perform the task of damage detection well in laboratory conditions, in real-life applications, the issue of imperfect lighting becomes crucial. The change in the radiant flux of lighting results in the shift of the spectrum in a way that is similar to the presence of a physical feature, such as formation of a crack or delamination, which leads to an increase in false positive damage detections.

2.2. Cointegration Analysis

Hyperspectral images can be considered as multidimensional, stochastic, discrete signals. One of the properties that apply to such signals is stationarity. A signal is considered to be strictly stationary when the probability density function (PDF) of the stochastic variable is constant over the signal’s duration. This definition finds only limited use in signal processing, especially in the case of processes realized in the real world, full of noise and non-ideal systems. For this reason, a range of weaker criteria is utilized: N-th order stationarity and weak-sense (wide-sense) stationarity. The process X_t is classified as N-th order stationary if it satisfies Equation (1):
F_X(x_{t_1+\tau}, \ldots, x_{t_n+\tau}) = F_X(x_{t_1}, \ldots, x_{t_n}) \quad \forall \tau, t_1, \ldots, t_n \in \mathbb{R}, \; \forall n \in \{1, \ldots, N\} \quad (1)
essentially limiting the global requirement to a requirement for n up to an order N.
A weak-sense stationary signal is only required to have a constant mean and an autocovariance that does not change over its duration.
Other types of stationarity are trend-stationary signals and difference-stationary signals. A signal is called trend-stationary if, after the removal of the deterministic trend, it becomes stationary [24]. A signal is called difference-stationary if it becomes stationary after one or more differencing operations. Another, more precise, way of describing such a property is to state the order of integration of the signal. A signal of d-th order of integration is typically denoted as I(d). The order of integration can be thought of as the number of times differencing must be applied to transform the signal into a stationary signal. It is also a way of describing the presence of a unit root.
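For intuition, the short NumPy sketch below (an illustration of ours, not taken from the paper) shows a simulated random walk, which is I(1), becoming stationary after a single differencing.
```python
# A short NumPy illustration (ours, not from the paper) of the order of integration:
# a random walk is I(1), and a single differencing makes it stationary, i.e. I(0).
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(size=1000)        # white noise: stationary, I(0)
walk = np.cumsum(eps)              # random walk: non-stationary, I(1)
diff = np.diff(walk)               # first difference of the walk

# Differencing the I(1) walk recovers the underlying I(0) noise.
print(np.allclose(diff, eps[1:]))  # True
```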
To confirm or deny the presence of a unit root, a range of statistical tests has been developed. Two of the most widely used unit root tests are the Dickey–Fuller test (DF test) and the Augmented Dickey–Fuller test (ADF test) [25].
In the DF test, a data series is modelled as a first-order autoregressive (AR(1)) process:
X_t = \alpha + \rho X_{t-1} + \varepsilon_t
where X_t is the signal at time t, α is a drift (trend) term, ρ is the parameter of the lagged term, and ε_t is the error of the AR model. A value of α = 0 signifies that the process is a pure random walk with no deterministic trend, while α ≠ 0 means that the process is a random walk with drift.
The DF test is used to determine whether the signal is integrated of order I(0) or I(1) [26], and decides between two hypotheses:
H_0: \rho = 1, \qquad H_1: \rho < 1
The rejection of H_0 means that the signal is stationary in the wide sense, i.e., integrated of order I(0). The hypothesis can be rejected or retained after performing a t-test on the estimated parameter.
In practice, however, such an evaluation is improper, as both X_t and X_{t-1} could be non-stationary, in which case the test statistic does not follow the standard t-distribution. To bypass that issue, the AR model is constructed for the first differences of the signal, such that:
X_t - X_{t-1} = \alpha + (\rho - 1) X_{t-1} + \varepsilon_t
which can also be represented as:
\Delta X_t = \alpha + \delta X_{t-1} + \varepsilon_t \quad (5)
Because ΔX_t is in essence the first difference of X_t, it is integrated of order I(0). If H_0 holds true, then in the differenced model the parameter δ = 0, which removes the potentially non-stationary term X_{t-1} from the right-hand side of (5) and thus allows for the calculation of the t-statistic. The calculated t-statistic is then evaluated against the critical value of the Dickey–Fuller distribution for a chosen significance level, which is the criterion for rejecting or retaining H_0.
The ADF test expands the DF test by allowing the use of higher-order AR models [26]. This is achieved by including additional lagged difference terms in the AR model. For the ADF test, the model with one lagged term (AR(2)) takes the form of:
\Delta X_t = \alpha + \delta_0 X_{t-1} + \delta_1 \Delta X_{t-1} + \varepsilon_t
or, in general, for n lagged terms in an AR(n+1) model:
\Delta X_t = \alpha + \delta_0 X_{t-1} + \sum_{i=1}^{n} \delta_i \Delta X_{t-i} + \varepsilon_t
The hypotheses for ADF are formulated as:
H_0: \delta_0 = 0, \qquad H_1: \delta_0 < 0
Similarly to the DF test, rejection of H_0, on the grounds of the t-statistic of δ_0 being smaller (more negative) than the critical value of the Dickey–Fuller distribution for a given significance level, indicates that the order of integration of the X series is less than I(1).
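In practice, the ADF test does not need to be implemented from scratch; a hedged sketch using the adfuller function from statsmodels is shown below. The input series is a synthetic stand-in for a spectral signature and the lag setting is illustrative, not the one used in the study.
```python
# A hedged sketch of the ADF unit-root test using statsmodels' adfuller;
# the input series and lag choice are illustrative only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
signature = np.cumsum(rng.normal(size=500))   # stand-in I(1) series

# regression="c" includes a constant (drift) term; maxlag bounds the lagged differences.
result = adfuller(signature, maxlag=5, regression="c")
t_stat, p_value, crit_values = result[0], result[1], result[4]

print(f"t-statistic = {t_stat:.4f}, 5% critical value = {crit_values['5%']:.4f}")
# H0 (unit root present) is rejected only if the t-statistic is smaller
# (more negative) than the critical value.
```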
Cointegration analysis is based on the concept of stationarity [27]. A cointegrating relationship between two or more data series exists if
X_c = \beta X
is integrated of order I(d−1), where β is the cointegrating vector, and X is a collection (X_1, X_2, …, X_k) of time series integrated of order I(d). In essence, a cointegrating relationship exists if a linear combination of two or more non-stationary data series is itself stationary. In this context, cointegration analysis is used as a detrending tool to remove the influence of the environment [28,29].
After ensuring that all data series considered are integrated of the same order, and that the order is at least I(1), the next step is to test for the existence of cointegration relationships among them. For this purpose, a Johansen test is utilized [30].
The Johansen test for cointegration is a two-part procedure. In the first step, the signals are modelled using Vector Error Correction Models (VECMs) [31]. There are a few variants of VECMs; the differences between them are the presence or absence of the terms representing the trend and the constant, and the number of cointegrating vectors.
The process of estimating the VECM itself consists of four steps. The first step is to estimate a Vector Autoregressive (VAR) model for the collection of signals X_t. The VAR model of order p takes the form of [32]:
X_t = \Phi D_t + \Pi_1 X_{t-1} + \cdots + \Pi_p X_{t-p} + \varepsilon_t
where D_t are the deterministic terms, consisting of a constant (u_0) and a trend (u_1 t) term, which can be expanded as:
D_t = u_0 + u_1 t
where Φ is the parameter of deterministic terms, and Π i are the parameters of lagged terms.
The VAR model is then transformed into a VECM. VECMs are the extension of Error Correction Models to the multivariate case, and are formulated as:
\Delta X_t = \Phi D_t + \Pi X_{t-1} + \Gamma_1 \Delta X_{t-1} + \cdots + \Gamma_{p-1} \Delta X_{t-p+1} + \varepsilon_t
where D_t are the deterministic terms, Φ is the parameter of the deterministic terms, Π is the long-run impact matrix, constructed as Π = Π_1 + ⋯ + Π_p − I_n, and Γ_k are the short-run impact matrices, taking the form Γ_k = −∑_{j=k+1}^{p} Π_j for k = 1, …, p − 1.
By transforming the VAR model into a VECM, it is ensured that the variables integrated of order I(1) are encapsulated in the ΠX_{t−1} term; therefore, if a cointegrating relationship exists, it will be included in this term.
The rank of matrix Π :
r = \mathrm{rank}(\Pi)
represents the number of potential cointegrating vectors found. It can take a value between 0 and n.
The matrix Π is then factorized into two matrices:
\Pi = A B^T
where A and B are (n × r) matrices of rank r. Matrix B is a collection of potential cointegrating vectors.
The second part of the Johansen test is to test the found potential cointegrating vectors, using either the trace or the maximum eigenvalue statistic. This test consists of stages m = 0, 1, …, r; in every stage, it challenges the null hypothesis that the number of cointegrating vectors is H_0: r = m against the alternative H_1: r > m. If the null hypothesis is rejected, the next stage is performed; if the null hypothesis is accepted, the number of confirmed cointegrating relationships is m.
Next, the found cointegrating vectors are normalized, such that:
\beta = [1, \beta_2, \ldots, \beta_n]^T
and the residuals of projecting the data series onto the cointegrating vectors are tested for the existence of a unit root. If that test confirms that the residuals are stationary, the cointegration relationship is confirmed.
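A possible implementation of this procedure uses the coint_johansen routine from statsmodels, as in the sketch below; the two input series are synthetic stand-ins for the healthy and damaged signatures, and the parameter choices are illustrative rather than those used in the study.
```python
# A hedged sketch of the Johansen procedure using statsmodels' coint_johansen;
# the series and parameters are illustrative only.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(2)
common_trend = np.cumsum(rng.normal(size=500))
healthy = common_trend + rng.normal(scale=0.1, size=500)
damaged = 0.8 * common_trend + rng.normal(scale=0.1, size=500)
X = np.column_stack([healthy, damaged])          # shape: (observations, series)

# det_order: -1 = no deterministic terms, 0 = constant, 1 = constant and trend;
# k_ar_diff = number of lagged differences in the VECM.
res = coint_johansen(X, det_order=0, k_ar_diff=5)

print("trace statistics:   ", res.lr1)           # one value per tested rank m = 0, 1, ...
print("5% critical values: ", res.cvt[:, 1])     # columns correspond to 90%, 95%, 99%
beta = res.evec[:, 0]                            # strongest potential cointegrating vector
print("normalized beta:", beta / beta[0])        # normalized so the first element is 1
```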

2.3. The Specimens

The specimens used for this study were made of a glass fibre-reinforced plastic (GFRP) composite material. The matrix consisted of an epoxy resin, the glass fibres had a unidirectional layup, and there were five layers of the glass fibre mat. The material data for such composites, for the purpose of simulating the damage event, can be found in the literature [33,34]. To introduce the damage, a destructive strength test was performed. The specimen was subjected to a quasi-static tensile test, in accordance with ASTM E1922, Standard Test Method for Translaminar Fracture Toughness of Laminated and Pultruded Polymer Matrix Composite Materials [35]. The test consisted of loading the specimen in tension, along the direction of the fibre layup, as presented in Figure 3. The load was applied by loading pins inserted into the loading holes of the specimen. The test was displacement driven, and the loading rate was set at 3 mm/min (5 × 10⁻⁵ m/s). The test was stopped when material failure was confirmed by a rapid drop of force and visual confirmation of the presence of a crack. The peak force registered was 9639.6 N. As a result of the test, a crack of 120 mm length and 4 mm width, reaching the full depth of the specimen and directed along the fibre layup direction, was introduced into the specimen, as shown in Figure 4.

2.4. Hyperspectral Imaging Equipment

After the damage was introduced, a hyperspectral image of the specimen was captured. Imaging was completed using a Headwall Photonics Inc. (Bolton, MA, USA) Desktop Scanning Kit. The hyperspectral camera was equipped with two imaging sensors: a visible and near infrared (VNiR) H-type sensor, and a short-wave infrared (SWiR) M-type sensor. The VNiR sensor has a wavelength range of 400 nm to 1000 nm, a spectral resolution of 1.6 nm, and a spatial resolution of 1600 px; the SWiR sensor has a wavelength range of 890 nm to 2500 nm, with a spectral resolution of 9.8 nm and a spatial resolution of 384 px.
The Desktop Scanning Kit utilizes a pushbroom scanning method, i.e., a single row of an image is registered at regular intervals, while the scanned object is moved at a preset rate by the motion platform integrated into the device.
To improve the accuracy of the upcoming image registration step, fiducial markers were placed in the background of the scanned specimen.
As a result, two hyperspectral data cubes were obtained, one from each sensor, along with white and black references for image calibration purposes. The VNiR hypercube dimensions were 1600 × 2500 × 375 , and the SWiR hypercube dimensions were 384 × 600 × 165 .

2.5. Hyperspectral Image Preprocessing

The first step of preprocessing consists of converting the digital numbers (DN) registered by the sensors into the physical quantity of reflectance. This step is called the calibration of a hyperspectral image [36]. Reflectance (R) is a dimensionless measure of the effectiveness of a surface in reflecting radiant energy. It is defined as:
R = \frac{\Phi_e^r}{\Phi_e^i}
where \Phi_e^r is the radiant flux reflected by the surface, and \Phi_e^i is the radiant flux incident onto that surface.
Black and white references obtained during the imaging are themselves hyperspectral images of size 1 × 1 × λ, and are used in the calibration procedure to obtain reflectance values of the pixels in a hyperspectral image that are absolute, i.e., independent of the radiant flux of the illuminator used during the imaging step or of the exposure time of the image. The white reference is obtained by capturing a hyperspectral image of a calibration standard made of Spectralon material, which is highly reflective at every wavelength used by the hyperspectral sensors. The black reference is obtained by capturing a hyperspectral image with the sensors covered by a shield impermeable to electromagnetic radiation at the wavelengths used by the hyperspectral sensors.
With those two references, an image is calibrated by applying:
R(x, y, \lambda) = \frac{DN(x, y, \lambda) - DN_{black}(\lambda)}{DN_{white}(\lambda) - DN_{black}(\lambda)}
where R(x, y, λ) is the reflectance of a hyperspectral image pixel at coordinates x and y at wavelength λ, DN(x, y, λ) is the digital number of that pixel, DN_black(λ) is the digital number of the black reference at wavelength λ, and DN_white(λ) is the digital number of the white reference at wavelength λ. The calibration is performed on the VNiR and SWiR hypercubes independently.
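A minimal NumPy sketch of this calibration step is given below; the array names and sizes are our own, and the references are treated as per-wavelength vectors broadcast over all pixels.
```python
# A minimal sketch of the calibration equation above; array names and sizes are ours.
import numpy as np

def calibrate(dn_cube: np.ndarray, dn_white: np.ndarray, dn_black: np.ndarray) -> np.ndarray:
    """Convert raw digital numbers to reflectance, band by band."""
    return (dn_cube - dn_black) / (dn_white - dn_black)

rng = np.random.default_rng(3)
dn_cube = rng.integers(100, 4000, size=(64, 64, 375)).astype(float)  # stand-in raw hypercube
dn_black = np.full(375, 90.0)       # stand-in black reference spectrum
dn_white = np.full(375, 4095.0)     # stand-in white reference spectrum

reflectance = calibrate(dn_cube, dn_white, dn_black)
print(reflectance.min(), reflectance.max())       # values roughly within [0, 1]
```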
The next step in the preprocessing procedure is image registration [37]. In this step, both hypercubes are prepared for fusion by aligning the images and upscaling the SWiR hypercube to match the spatial resolution of the VNiR hypercube. Three corresponding pairs of points are chosen from both hypercubes by selecting the corners of the fiducial markers. Their coordinates are stored in vectors x_v and y_v for the VNiR hypercube and x_s and y_s for the SWiR hypercube. Using these coordinates, an affine transformation is calculated, such that
\begin{bmatrix} x_v^i \\ y_v^i \end{bmatrix} = A \begin{bmatrix} x_s^i \\ y_s^i \end{bmatrix} + B
where A is the transformation matrix, B is the translation vector, and i indexes the transformed pixels. That transformation is then applied to the SWiR hypercube.
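One possible way to estimate A and B from the three fiducial-marker correspondences is a linear least-squares fit, as in the sketch below; the point coordinates are invented for illustration, and this is not necessarily the exact routine used by the authors.
```python
# A hedged sketch of estimating the affine registration transform from three
# corresponding point pairs; the coordinates are made up for illustration.
import numpy as np

swir_pts = np.array([[10.0, 12.0], [55.0, 14.0], [12.0, 60.0]])    # (x_s, y_s) in the SWiR cube
vnir_pts = np.array([[42.0, 50.0], [230.0, 58.0], [50.0, 250.0]])  # matching (x_v, y_v) in the VNiR cube

# Each correspondence satisfies [x_v, y_v] = A @ [x_s, y_s] + B, which is linear
# in the six unknowns of A and B.
M = np.hstack([swir_pts, np.ones((3, 1))])               # rows: [x_s, y_s, 1]
params, *_ = np.linalg.lstsq(M, vnir_pts, rcond=None)    # shape: (3, 2)

A = params[:2, :].T     # 2x2 transformation matrix
B = params[2, :]        # translation vector

def map_to_vnir(pt_swir: np.ndarray) -> np.ndarray:
    """Map a SWiR coordinate into the VNiR frame."""
    return A @ pt_swir + B

print(map_to_vnir(swir_pts[0]), "should match", vnir_pts[0])
```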
After the registration, a splice correction is performed [38], which shifts the reflectance values in the SWiR hypercube to satisfy:
R_{VNiR}(x, y, \lambda_n) = R_{SWiR}(x, y, \lambda_n)
where λ_n is the wavelength in the middle of the wavelength range covered by both the VNiR and SWiR sensors, in our case λ_n = 945 nm. The correction is performed by applying:
R_{SWiR\_corrected}(x, y, \lambda) = R_{SWiR}(x, y, \lambda) - f(\lambda)
where f ( λ ) is a correction factor given by:
f(\lambda) = R_{SWiR}(x, y, \lambda_n) - \left( 2 \cdot R_{VNiR}(x, y, \lambda_n) - R_{VNiR}(x, y, \lambda_{n-1}) \right)
After applying the splice correction, the VNiR and SWiR hypercubes are fused by concatenation along the spectral dimension.
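The sketch below illustrates this splice correction and fusion, assuming the additive form of the correction reconstructed above; the band indices of the splice wavelength and the cube contents are placeholders, and both cubes are assumed to be already registered to the same spatial grid.
```python
# A sketch of the splice correction and fusion under the stated assumptions.
import numpy as np

def splice_correct_and_fuse(r_vnir: np.ndarray, r_swir: np.ndarray,
                            idx_vnir_n: int, idx_swir_n: int) -> np.ndarray:
    """Shift the SWiR reflectance so both cubes agree at the splice wavelength,
    then concatenate along the spectral dimension."""
    extrapolated = 2.0 * r_vnir[:, :, idx_vnir_n] - r_vnir[:, :, idx_vnir_n - 1]
    f = r_swir[:, :, idx_swir_n] - extrapolated          # per-pixel correction factor
    r_swir_corrected = r_swir - f[:, :, None]            # broadcast over all SWiR bands
    return np.concatenate([r_vnir, r_swir_corrected], axis=2)

rng = np.random.default_rng(4)
fused = splice_correct_and_fuse(rng.random((32, 32, 375)), rng.random((32, 32, 165)),
                                idx_vnir_n=340, idx_swir_n=6)
print(fused.shape)    # (32, 32, 540)
```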
The next step of preprocessing is the application of a Savitzky–Golay filter to each pixel along the spectral dimension [39]. The window size used was 7, and the order of the filter was 2. The filtration was achieved by applying:
R_{filtered}(x, y, \lambda) = \frac{1}{21} \left( -2R(x, y, \lambda-3) + 3R(x, y, \lambda-2) + 6R(x, y, \lambda-1) + 7R(x, y, \lambda) + 6R(x, y, \lambda+1) + 3R(x, y, \lambda+2) - 2R(x, y, \lambda+3) \right)
After the filtration, 10 points were selected from the damaged region and 10 from the undamaged region of the specimen. By calculating the means of these points, spectral signatures of the damage to the composite and of the healthy material were constructed.
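An equivalent smoothing can be obtained with scipy.signal.savgol_filter, as in the hedged sketch below; the pixel coordinates used to build the example signatures are illustrative, not the points selected in the study.
```python
# A hedged sketch of the smoothing and signature-building step using
# scipy.signal.savgol_filter (7-point window, 2nd-order polynomial).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(5)
fused_cube = rng.random((32, 32, 540))                    # stand-in fused hypercube

# Smooth every pixel spectrum along the spectral axis.
filtered_cube = savgol_filter(fused_cube, window_length=7, polyorder=2, axis=2)

damaged_px = [(5, 5), (5, 6), (6, 5)]                     # hand-picked damaged pixels (made up)
healthy_px = [(20, 20), (20, 21), (21, 20)]               # hand-picked healthy pixels (made up)
sig_damaged = np.mean([filtered_cube[x, y, :] for x, y in damaged_px], axis=0)
sig_healthy = np.mean([filtered_cube[x, y, :] for x, y in healthy_px], axis=0)
print(sig_damaged.shape, sig_healthy.shape)               # (540,) (540,)
```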

2.6. Data Conditioning Using Cointegration Analysis

Cointegration analysis is used in the data conditioning step as a trend removal tool [40]. The Johansen procedure was applied to find the cointegrating vector β for the spectral signatures of the healthy and damaged material. The lag order used in the AR models of the signatures in the Johansen test was N = 7. The t-statistic of 3.46 exceeded, in absolute value, the critical value of the Dickey–Fuller distribution at the 95% confidence level (−3.42), which allowed us to conclude that a cointegration relationship between these spectral signatures exists.
The whole hyperspectral image was then projected onto the found cointegrating vector β by applying:
R_{coint}(x, y, \lambda) = \beta(\lambda) \times R(x, y, \lambda)
The resulting data cube was then subtracted from the fused and filtered hypercube, resulting in a data cube of cointegration residuals:
R_{residuals}(x, y, \lambda) = R(x, y, \lambda) - R_{coint}(x, y, \lambda)
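Read literally, the two equations above amount to an element-wise scaling of every pixel spectrum by β(λ) followed by a subtraction; the sketch below shows this reading of the procedure with placeholder data.
```python
# A minimal sketch of the projection and residual step, under our reading of the equations.
import numpy as np

def cointegration_residuals(r_cube: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """Project the hypercube onto the cointegrating vector and return the residuals."""
    r_coint = beta[None, None, :] * r_cube    # R_coint(x, y, lambda) = beta(lambda) * R(x, y, lambda)
    return r_cube - r_coint                   # R_residuals = R - R_coint

rng = np.random.default_rng(6)
cube = rng.random((32, 32, 540))              # stand-in fused, filtered hypercube
beta = rng.normal(size=540)                   # stand-in cointegrating vector
residuals = cointegration_residuals(cube, beta)
print(residuals.shape)                        # (32, 32, 540)
```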
To allow for a comparison of the results of applying the data conditioning technique, two independent workflows for processing hyperspectral images for the purpose of damage detection are examined: first, a method based on traditional hyperspectral image processing, where the learning data are simply the spectra of the pixels of the hyperspectral image; and second, a method where cointegration analysis is used to transform the spectra into cointegration residuals. In the second workflow, the trend removal and filtering subroutines of the traditional workflow are skipped, so that the ADF test, which must indicate the presence of a unit root in the raw data, can hint at the existence of cointegrating vectors. After this step, the hypercube is projected onto these vectors, which allows for the calculation of the cointegration residuals. These residuals are used as the learning data in our proposed method.

2.7. Classification Algorithms

Three kinds of classification algorithms were used, and with each of them, the learning and classification were performed once on the filtered hypercube and once on the data cube of cointegration residuals. The first algorithm was a multilayer perceptron neural network. It consisted of 10 input neurons, 2 fully connected layers of 10 neurons, and two output neurons. The second algorithm was a support vector machine with a Gaussian kernel. The last algorithm was a support vector machine with a cubic kernel. Before the classification, a principal component analysis was performed, and the first 10 principal components were used as the training data.
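A hedged scikit-learn sketch of this setup is shown below; the layer sizes, kernels, and 10-component PCA follow the description above, while the scaling, solver settings, and stand-in training data are our own assumptions.
```python
# A hedged sketch of the three classifiers described above, built with scikit-learn.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.random((1000, 540))                   # stand-in spectra (or cointegration residuals)
y = rng.integers(0, 2, size=1000)             # 0 = healthy, 1 = damaged (stand-in labels)

classifiers = {
    "MLP (2 x 10 neurons)": MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=2000),
    "SVM, Gaussian kernel": SVC(kernel="rbf"),
    "SVM, cubic kernel": SVC(kernel="poly", degree=3),
}

for name, clf in classifiers.items():
    # PCA to 10 principal components precedes every classifier, as in the text.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))
```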

3. Results

3.1. Data Conditioning Results

The ADF tests were performed on the spectral signatures obtained from the preprocessing of the hyperspectral images. The null hypothesis in the ADF test was H_0: δ_0 = 0, and the alternative was H_1: δ_0 < 0. The results of the ADF tests, along with the critical values of the DF distribution, are presented in Table 1.
The models used in the ADF tests were AR(5) with drift. In both cases, H_0 was not rejected, indicating that both spectral signatures are integrated of order I(1).
After the ADF tests, the Johansen test for cointegration was performed. Similarly, a VECM model with a lag order of 5 was used. The null hypothesis in the Johansen test was H_0: r < k, where r is the number of cointegrating vectors and k is the number of tested data series. The alternative hypothesis was H_1: r = k. The results of the tests are presented in Table 2.
For VECM models with either a constant, or a trend, the tests indicate that there is one cointegrating vector. The models with no deterministic terms, and one with both a constant and a trend, indicate that there are two valid cointegrating vectors.
The cointegrating vector was calculated using the model with the deterministic trend. The original spectral signatures, and the residuals of projecting them onto the cointegrating vectors, are shown in Figure 5.

3.2. Classification without the Data Conditioning

The classification was performed using three kinds of algorithms. The learning and learning validation data consisted of 500 spectral signatures of healthy and damaged material, which defined the two classes. A five-fold cross-validation was used to prevent overfitting. After the learning was completed, the classification algorithms were applied to the preprocessed hyperspectral images. The results of classification are presented in Figure 6, and the metrics of classifier performance in Table 3.
The accuracies achieved by the classification algorithms in the validation step were 81.4% for the bilayered artificial neural network, 76.4% for the SVM with Gaussian kernel, and 83.6% for the SVM with cubic kernel.

3.3. Classification with the Data Conditioning

The setup for the classification algorithms was similar to that described in a previous section. The key difference is that instead of using spectral signatures as the algorithms’ input, the residuals of the projection onto the cointegrating vectors were used. The results of classification are presented in Figure 7, and the metrics of the classifiers are shown in Table 4.
The accuracies achieved by the classification algorithms in the validation step were 94.5% for the bilayered artificial neural network, 94.2% for the SVM with Gaussian kernel, and 94.5% for the SVM with cubic kernel.

4. Discussion

The ADF tests confirmed that both of the spectral signatures are non-stationary and are integrated of order I(1). Because the variant of the ADF test used was based on the AR(5) model with drift, both of these signals can be classified as trend-stationary. Because of the proven ability of cointegration analysis to detrend data series, it is reasonable to apply the Johansen test in an effort to remove those trends.
The noteworthy difference in such an application is that cointegration, and the concept of stationarity itself, is applied in our research not to time series data, but to the spectra of particular areas of the hyperspectral image, or to the models of their spectral signatures; in other words, the temporal dimension is replaced with the spectral dimension.
While it is not always obvious which features of the data used in machine learning, or which of their combinations, are the key factors in determining the algorithm’s output, increasing the distance between the different classes generally improves performance. This is supported by the success of PCA in a wide range of machine learning tasks, and of other methods that achieve a similar result, for example, the application of kernel tricks in SVM algorithms.
The proposed usage of cointegration analysis achieves that goal, which is visible in the difference between the input data in Figure 5. Another confirmation of the proposed algorithm’s validity comes from the classification results themselves. By comparing the results of classification with and without the data conditioning step in Figure 6 and Figure 7, a conclusion can be drawn that the algorithms that used cointegration residuals perform better than the ones that used the spectral signatures. In the former case (with data conditioning), the areas classified as damaged are consistent with the reference image, and the false positives that are present appear as small groups of pixels and can be distinguished easily from the actual damage. In the latter case (without data conditioning), the damaged areas have been correctly classified as damaged; however, the bounds of the damaged area are hard to determine exactly because of the difference in its character: on the left side of the crack, the transition from healthy to damaged is clear and abrupt, but on its right side, the transition seems gradual and soft. This difference can be attributed to the existence of trends in the data, which are abundant in hyperspectral imaging due to imperfect lighting conditions.
Overall, the improvement in the performance of the classifiers when using the data conditioning step is most visible in the increase in the precision metric. Across all the tested classifiers, the precision more than doubled, which is the result of significantly fewer occurrences of false positive classifications. The consistency of this increase across each of the three tested classification methods (the artificial neural network, and the Gaussian and cubic kernel SVMs) suggests that the improved quality of classification obtained using the proposed method of data conditioning is not specific to a single classifier and is beneficial regardless of the type of classifier used.
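For reference, the reported metrics follow directly from the binary confusion matrix, with damaged pixels as the positive class; the sketch below uses made-up counts purely to show the formulas.
```python
# Illustrative computation of the reported metrics from a binary confusion matrix.
tp, fp, tn, fn = 120, 55, 4000, 50            # made-up pixel counts

sensitivity = tp / (tp + fn)                  # recall for the damaged class
specificity = tn / (tn + fp)
precision = tp / (tp + fp)                    # fewer false positives -> higher precision
accuracy = (tp + tn) / (tp + fp + tn + fn)
f1_score = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, "
      f"precision={precision:.3f}, accuracy={accuracy:.3f}, F1={f1_score:.3f}")
```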

5. Conclusions

In our study, cointegration analysis was used to develop an unconventional data conditioning process for the classification of hyperspectral images. The process takes advantage of cointegration analysis’ ability to remove trends in the data, which were suspected to decrease the performance of classifiers. Application of the developed technique and comparison of the results obtained with and without its use allowed us to conclude the following:
  • The application of cointegration analysis improves the results of classification, especially in regard to decreasing the number and spatial distribution of false positives;
  • In hyperspectral imaging, the benefits of data conditioning apply universally to classification algorithms, regardless of the specific algorithm used;
  • In hyperspectral imaging, the concept of signal stationarity can be considered in the spectral domain, instead of the traditional temporal domain, while still keeping its properties.
This said, we believe that further work is required on the topic of the use of cointegration analysis in HSI. Among many other areas, the aspects of the performance against different data conditioning procedures, other approaches to damage detection using HSI (such as using segmentation algorithms for deciding if the pixel in question bears marks of damage or not), the performance on different kinds of samples (different kinds of damage, different kinds of material), and the applicability to the detection of dynamic crack propagation can be explored further.

Author Contributions

Conceptualization, J.D., P.B.D., W.J.S. and T.U.; Methodology, J.D., P.B.D., W.J.S. and T.U.; Software, J.D. and P.B.D.; Validation, J.D.; Formal analysis, J.D.; Investigation, J.D., P.B.D. and W.J.S.; Resources, P.B.D.; Data curation, J.D.; Writing—original draft, J.D.; Writing—review and editing, J.D., P.B.D., W.J.S. and T.U.; Visualization, J.D.; Supervision, W.J.S. and T.U.; Project administration, T.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACE: Adaptive Cosine Estimator
ADF: Augmented Dickey–Fuller test
ANN: Artificial neural network
AR: Autoregressive
DF: Dickey–Fuller test
DN: Digital Number
GFRP: Glass Fibre-Reinforced Plastic
HSI: Hyperspectral Imaging
NDT: Non-destructive Testing
NiR: Near Infrared
PDF: Probability Density Function
SHM: Structural Health Monitoring
SVM: Support vector machine
SWiR: Short-Wave Infrared
VECM: Vector Error Correction Model
VNiR: Visible and Near Infrared

References

  1. Qureshi, J. A Review of Fibre Reinforced Polymer Structures. Fibers 2022, 10, 27. [Google Scholar] [CrossRef]
  2. Feng, P.; Wang, J.; Wang, Y.; Loughery, D.; Niu, D. Effects of corrosive environments on properties of pultruded GFRP plates. Compos. Part B Eng. 2014, 67, 427–433. [Google Scholar] [CrossRef]
  3. Beura, S.; Chakraverty, A.P.; Thatoi, D.N.; Mohanty, U.K.; Mohapatra, M. Failure modes in GFRP composites assessed with the aid of SEM fractographs. Mater. Today Proc. 2021, 41, 172–179. [Google Scholar] [CrossRef]
  4. Lee, Y.J.; Jhan, Y.T.; Chung, C.H. Fluid–structure interaction of FRP wind turbine blades under aerodynamic effect. Compos. Part B Eng. 2012, 43, 2180–2191. [Google Scholar] [CrossRef]
  5. Dai, J.; Li, M.; Chen, H.; He, T.; Zhang, F. Progress and challenges on blade load research of large-scale wind turbines. Renew. Energy 2022, 196, 482–496. [Google Scholar] [CrossRef]
  6. Laudani, A.A.M.; Vryonis, O.; Lewin, P.L.; Golosnoy, I.O.; Kremer, J.; Klein, H.; Thomsen, O.T. Numerical simulation of lightning strike damage to wind turbine blades and validation against conducted current test data. Compos. Part A Appl. Sci. Manuf. 2022, 152, 106708. [Google Scholar] [CrossRef]
  7. Mellouli, H.; Kalleli, S.; Mallek, H.; Ben Said, L.; Ayadi, B.; Dammak, F. Electromechanical behavior of piezolaminated shell structures with imperfect functionally graded porous materials using an improved solid-shell element. Comput. Math. Appl. 2024, 155, 1–13. [Google Scholar] [CrossRef]
  8. Wang, W.; Xue, Y.; He, C.; Zhao, Y. Review of the Typical Damage and Damage-Detection Methods of Large Wind Turbine Blades. Energies 2022, 15, 5672. [Google Scholar] [CrossRef]
  9. Arsenault, T.J.; Achuthan, A.; Marzocca, P.; Grappasonni, C.; Coppotelli, G. Development of a FBG based distributed strain sensor system for wind turbine structural health monitoring. Smart Mater. Struct. 2013, 22, 075027. [Google Scholar] [CrossRef]
  10. Xu, D.; Liu, P.F.; Chen, Z.P.; Leng, J.X.; Jiao, L. Achieving robust damage mode identification of adhesive composite joints for wind turbine blade using acoustic emission and machine learning. Compos. Struct. 2020, 236, 111840. [Google Scholar] [CrossRef]
  11. Chakrapani, S.K.; Barnard, D.J.; Dayal, V. Review of ultrasonic testing for NDE of composite wind turbine blades. AIP Conf. Proc. 2019, 2102, 100003. [Google Scholar] [CrossRef]
  12. Kim, W.; Yi, J.H.; Kim, J.T.; Park, J.H. Vibration-based Structural Health Assessment of a Wind Turbine Tower Using a Wind Turbine Model. Procedia Eng. 2017, 188, 333–339. [Google Scholar] [CrossRef]
  13. Sanati, H.; Wood, D.; Sun, Q. Condition Monitoring of Wind Turbine Blades Using Active and Passive Thermography. Appl. Sci. 2018, 8, 2004. [Google Scholar] [CrossRef]
  14. Dong, C.Z.; Catbas, F.N. A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 2021, 20, 692–743. [Google Scholar] [CrossRef]
  15. Rizk, P.; Al Saleh, N.; Younes, R.; Ilinca, A.; Khoder, J. Hyperspectral imaging applied for the detection of wind turbine blade damage and icing. Remote Sens. Appl. Soc. Environ. 2020, 18, 100291. [Google Scholar] [CrossRef]
  16. Długosz, J.; Dao, P.B.; Staszewski, W.J.; Uhl, T. Damage Detection in Composite Materials Using Hyperspectral Imaging. In European Workshop on Structural Health Monitoring; Springer: Cham, Switzerland, 2023; pp. 463–473. [Google Scholar] [CrossRef]
  17. Wawerski, A.; Siemiątkowska, B.; Józwik, M.; Fajdek, B.; Partyka, M. Machine Learning Method and Hyperspectral Imaging for Precise Determination of Glucose and Silicon Levels. Sensors 2024, 24, 1306. [Google Scholar] [CrossRef]
  18. Wang, Y.; Ou, X.; He, H.J.; Kamruzzaman, M. Advancements, limitations and challenges in hyperspectral imaging for comprehensive assessment of wheat quality: An up-to-date review. Food Chem. X 2024, 21, 101235. [Google Scholar] [CrossRef] [PubMed]
  19. Zhao, Y.; Bao, W.; Xu, X.; Zhou, Y. Hyperspectral image classification based on local feature decoupling and hybrid attention SpectralFormer network. Int. J. Remote Sens. 2024, 45, 1727–1754. [Google Scholar] [CrossRef]
  20. Ghous, U.; Sarfraz, M.; Ahmad, M.; Li, C.; Hong, D. (2+1)D Extreme Xception Net for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5159–5172. [Google Scholar] [CrossRef]
  21. He, L.; Li, J.; Liu, C.; Li, S. Recent Advances on Spectral–Spatial Hyperspectral Image Classification: An Overview and New Guidelines. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1579–1597. [Google Scholar] [CrossRef]
  22. Carrasco, O.; Gomez, R.B.; Chainani, A.; Roper, W.E. Hyperspectral imaging applied to medical diagnoses and food safety. In Proceedings of the AeroSense 2003, Orlando, FL, USA, 21–25 April 2003; p. 215. [Google Scholar] [CrossRef]
  23. Clark, R.N.; Swayze, G.A. Mapping Minerals, Amorphous Materials, Environmental Materials, Vegetation, Water, Ice and Snow, and Other Materials: The USGS Tricorder Algorithm; Technical Report; Environmental Science, Materials Science; Lunar and Planetary Institute: Houston, TX, USA, 1995. [Google Scholar]
  24. DeJong, D.N.; Nankervis, J.C.; Savin, N.E.; Whiteman, C.H. Integration Versus Trend Stationary in Time Series. Econometrica 1992, 60, 423. [Google Scholar] [CrossRef]
  25. Dao, P.B. Cointegration method for temperature effect removal in damage detection based on lamb waves. Diagnostyka 2013, 14, 61–67. [Google Scholar]
  26. Dickey, D.A.; Fuller, W.A. Distribution of the Estimators for Autoregressive Time Series with a Unit Root. J. Am. Stat. Assoc. 1979, 74, 427. [Google Scholar] [CrossRef]
  27. Granger, C.W. Some properties of time series data and their use in econometric model specification. J. Econom. 1981, 16, 121–130. [Google Scholar] [CrossRef]
  28. Chen, Q.; Kruger, U.; Leung, A.Y.T. Cointegration Testing Method for Monitoring Nonstationary Processes. Ind. Eng. Chem. Res. 2009, 48, 3533–3543. [Google Scholar] [CrossRef]
  29. Cross, E.J.; Worden, K.; Chen, Q. Cointegration: A novel approach for the removal of environmental trends in structural health monitoring data. Proc. R. Soc. A Math. Phys. Eng. Sci. 2011, 467, 2712–2732. [Google Scholar] [CrossRef]
  30. Johansen, S. Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models. Econometrica 1991, 59, 1551. [Google Scholar] [CrossRef]
  31. Sargan, J.D. Wages and prices in the United Kingdom: A study in econometric methodology. Econom. Anal. Natl. Econ. Plan. 1964, 16, 25–54. [Google Scholar]
  32. Holtz-Eakin, D.; Newey, W.; Rosen, H.S. Estimating Vector Autoregressions with Panel Data. Econometrica 1988, 56, 1371. [Google Scholar] [CrossRef]
  33. Shakir Abbood, I.; aldeen Odaa, S.; Hasan, K.F.; Jasim, M.A. Properties evaluation of fiber reinforced polymers and their constituent materials used in structures—A review. Mater. Today Proc. 2021, 43, 1003–1008. [Google Scholar] [CrossRef]
  34. Bent, F.; Sørensen, W.J.; Holmes, P.B.; Branner, K. Blade materials, testing methods and structural design. In Wind Power Generation and Wind Turbine Design; WIT Press: Billerica, MA, USA, 2010; Volume 44. [Google Scholar] [CrossRef]
  35. ASTM E1922; Standard Test Method for Translaminar Fracture Toughness of Laminated and Pultruded Polymer Matrix Composite Materials. ASTM International: Conshohocken, PA USA, 2015. [CrossRef]
  36. Geladi, P.L.M. Calibration. In Techniques and Applications of Hyperspectral Image Analysis; John Wiley & Sons, Ltd.: Chichester, UK, 2007; pp. 203–220. [Google Scholar] [CrossRef]
  37. Fox, T.; Elder, E.; Crocker, I. Image Registration and Fusion Techniques. In PET-CT in Radiotherapy Treatment Planning; Elsevier: Amsterdam, The Netherlands, 2008; pp. 35–51. [Google Scholar] [CrossRef]
  38. Sara, D.; Mandava, A.K.; Kumar, A.; Duela, S.; Jude, A. Hyperspectral and multispectral image fusion techniques for high resolution applications: A review. Earth Sci. Inform. 2021, 14, 1685–1705. [Google Scholar] [CrossRef]
  39. Ruffin, C.; King, R.L.; Younan, N.H. A Combined Derivative Spectroscopy and Savitzky-Golay Filtering Method for the Analysis of Hyperspectral Data. GIScience Remote Sens. 2008, 45, 1–15. [Google Scholar] [CrossRef]
  40. Dao, P.B.; Staszewski, W.J. Cointegration and how it works for structural health monitoring. Measurement 2023, 209, 112503. [Google Scholar] [CrossRef]
Figure 1. The process of hyperspectral imaging.
Figure 2. The hyperspectral camera setup with the trajectory of light in yellow: 1—The hyperspectral sensor, 2—the reflector, 3—the specimen, 4—the background with fiducial markers, 5—the illuminator, 6—the linear stage, 7—the preview monitor.
Figure 3. Specimen dimensions and tensile test setup schematics.
Figure 4. Photograph of the specimen after the introduction of the crack, with different regions of the specimen highlighted: 1—Markings inked on the surface of the specimen, for purpose of specimen identification, and as a size reference. 2—Holes cut into the specimen before the tensile test, for purpose of accepting loading pins. 3—Notch cut into the specimen before the tensile test, directed perpendicularly to the glass fibre layup. 4—Area damaged in tensile test—a crack going through the whole thickness of the specimen, starting at the bottom of the notch, and ending at the edge of the specimen, following the layup direction of glass fibres.
Figure 5. Model spectra before and after the data conditioning step.
Figure 6. Classification results without the cointegration analysis data conditioning. (a): A reference RGB image of the damaged specimen. (b): Results of classification using 2-layered neural network. (c): Results of classification using SVM with Gaussian kernel. (d): Results of classification using SVM with cubic kernel.
Figure 7. Classification results with the cointegration analysis data conditioning. (a): A reference RGB image of the damaged specimen. (b): Results of classification using 2-layered neural network. (c): Results of classification using SVM with Gaussian kernel. (d): Results of classification using SVM with cubic kernel.
Table 1. Results of ADF test at significance level of 5%.

Data Series | T-Statistic | Critical Value | Null Rejected
Damaged spectral signature | −0.5655 | −2.8681 | False
Healthy spectral signature | −2.2881 | −2.8681 | False
Table 2. Johansen test results by type of VECM model, at significance level of 5%, at r = 2.

VECM Model | T-Statistic | Critical Value | Null Rejected
No deterministic terms | 17.9372 | 12.3206 | True
With deterministic constant | 18.2173 | 20.2619 | False
With deterministic trend | 14.1937 | 15.4948 | False
With deterministic trend and constant | 37.8892 | 25.8723 | True
Table 3. Classifier metrics without data conditioning.

Classifier | Sensitivity | Specificity | Precision | Accuracy | F1 Score
Neural network | 0.695 | 0.824 | 0.263 | 0.814 | 0.381
Gaussian kernel SVM | 0.618 | 0.777 | 0.199 | 0.764 | 0.301
Cubic kernel SVM | 0.713 | 0.847 | 0.296 | 0.836 | 0.418
Table 4. Classifier metrics with data conditioning, and the trend (↑/↓) in comparison to the metrics without data conditioning.

Classifier | Sensitivity | Specificity | Precision | Accuracy | F1 Score
Neural network | 0.670 (↓) | 0.970 (↑) | 0.672 (↑) | 0.945 (↑) | 0.674 (↑)
Gaussian kernel SVM | 0.695 (↑) | 0.968 (↑) | 0.649 (↑) | 0.942 (↑) | 0.654 (↑)
Cubic kernel SVM | 0.587 (↓) | 0.977 (↑) | 0.698 (↑) | 0.945 (↑) | 0.638 (↑)
