Article

Bridge Displacements Monitoring Method Based on Pixel Sequence

1 College of Civil Engineering, Chongqing Jiaotong University, Chongqing 400074, China
2 College of Civil and Transportation Engineering, Shenzhen University, Shenzhen 518060, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(24), 11901; https://doi.org/10.3390/app142411901
Submission received: 28 October 2024 / Revised: 6 December 2024 / Accepted: 9 December 2024 / Published: 19 December 2024
(This article belongs to the Section Civil Engineering)

Abstract: In light of the challenges posed by intricate algorithms, subpar recognition accuracy, and prolonged recognition time in current machine vision for bridge structure monitoring, this paper presents an innovative method for recognizing and extracting structural edges based on the Difference of Gaussians. Initially, grayscale processing simplifies the image data. Subsequently, a Region of Interest (ROI) is identified to streamline further processing steps. The image is then convolved with Gaussian kernels at different scales, exploiting the fact that edge pixels respond differently to Gaussian smoothing at each scale, and the structure's edges are derived using the difference algorithm. Lastly, employing the scale factor, the algorithm translates the detected edge displacement in the image into the physical displacement of the structure. This method enables continuous monitoring of the structure and facilitates the assessment of its safety status. The experimental results confirm that the proposed algorithm identifies and extracts the structural edge's geometric characteristics with precision. Furthermore, the displacement information derived from the scale factor closely aligns with the actual displacement, validating the algorithm's effectiveness.

1. Introduction

The Structural Health Monitoring (SHM) system, as a crucial part of the bridge’s lifecycle, plays an essential role in ensuring the safety, reliability, and maintainability of bridges [1]. During a bridge’s service life, it is subjected to various loads, such as seismic forces from the natural environment and common wind loads. These loads cause mechanical responses in the structure, which the SHM system monitors to evaluate the structural mechanical performance. The most direct expression of the structural mechanical response is displacement, which varies with different input loads. Displacement, as a fundamental quantity, can reflect the bridge’s vibration, stiffness, and other mechanical signals. Therefore, accurately extracting the bridge’s structural displacement is key to achieving a highly accurate bridge monitoring system.
Traditional displacement monitoring technologies primarily rely on sensors, such as strain sensors, linear displacement sensors, dial gauge sensors, laser displacement sensors, and accelerometers [2,3]. Although sensors can provide accurate monitoring results, data transmission from sensors heavily relies on data cables and signal wires. During long-distance signal transmission [4], the installation and maintenance of these cables is a significant challenge that cannot be ignored [5]. Furthermore, sensors can only monitor displacements at certain points in the structure, and transmitting data quickly and accurately is also a key issue. Hence, there is an urgent need for more advanced methods to upgrade SHM systems for bridges. SHM systems utilizing Global Navigation Satellite System (GNSS) receivers, laser Doppler vibrometers, and radar have been proven reliable and useful [6], but these systems depend heavily on power and communication systems.
With the rapid development of computer technology, image-processing technology has also emerged. By analyzing optical signals in images using computers, machine vision-based bridge structure displacement monitoring technology has gradually matured. Xudong Jian et al. [7] used deep learning and computer vision methods, employing deep multilayer perceptrons to identify the influence surface of bridge structures and study the traffic load and state evaluation of the structure. Experiments demonstrated the framework’s high accuracy and robustness. Xu et al. [8] used machine vision technology to identify vehicle trajectories, combined with millimeter-wave radar to monitor the structure’s displacement response, and calculated the vehicle’s axle load using the structural displacement influence line, achieving vehicle–bridge coupling analysis. Ye et al. [9] used UAV technology to monitor the structural health of bridges and extract structural displacement. The use of UAV monitoring has the advantages of low cost, flexibility, and high accuracy in extracting bridge structural displacement. Shengfei Zhang et al. [10] researched vision-based displacement monitoring (VDM) and developed a 2D VDM method to achieve VDM in three-dimensional space using two cameras.
Khuc T, Nguyen T A, Dao H et al. [11] proposed a UAV-based structural monitoring method to measure sway displacement, addressing the challenges of traditional SHM systems by bypassing fixed camera positions and improving measurement accuracy through advanced algorithms. However, the method faces high computational demands, limited UAV flight time, and sensitivity to wind. Similarly, Tian Y et al. introduced a UAV-based non-contact cable force estimation method using line segment matching to address displacement calculation issues. Although it provides a cost-effective and robust solution for dynamic displacement, it is suitable only for cables with large vibrations under low-wind conditions, and UAV motion can affect the results. Han Y et al. [12] developed a vision-based method for measuring structural displacement with UAVs and lasers, using black-and-white markers and laser projection. This system is non-contact, adaptable to various environments, and capable of working in low-light conditions, but it is limited to in-plane displacement and constrained by UAV camera frame rates. The method also faces challenges with setup and measuring out-of-plane displacements.
Jana D et al. [13] focused on cable tension estimation for the Dubrovnik cable-stayed bridge using handheld camera video and image analysis techniques. The approach provides stable displacement measurements and tension estimates comparable to design values but struggles with small-amplitude or high-frequency vibrations. Michel Guzman-Acevedo G et al. presented a reliable UAV-based method for estimating cable tension using template matching and optical flow algorithms. This method has shown high accuracy in both lab and field tests, with minimal deviation from reference values. However, it is limited by UAV battery life, high computational costs, and environmental instability. Future work for both methods could explore integration with IMUs for real-time monitoring and improve small cable vibration detection, enhancing their applicability for SHM in cable-stayed bridges. Chen et al. discussed displacement monitoring technology based on machine vision, highlighting its advantages and exploring methods for synchronous multi-point displacement detection. They also analyzed the current limitations of machine vision technology and suggested future research directions. Duan et al. proposed a method utilizing Scale-Invariant Feature Transform (SIFT) for extracting structural feature points, facilitating the study of Full-Field Displacement Vectors (FFDVs) to achieve multi-point displacement monitoring across an entire structure. Meanwhile, Xing et al. introduced a multi-point displacement measurement method for bridges using a low-cost camera, with the accuracy of the method validated through experimental tests.
It can be seen that compared to traditional contact sensor displacement monitoring, non-contact displacement monitoring is easier to install and does not require contact with the structure. Typically, bridge deformation monitoring systems rely on sensors and other equipment. For small and medium-sized bridge monitoring projects that use sensors, the costs, including sensors, labor, installation, and others, can easily reach millions of CNY. In contrast, machine vision-based monitoring methods only require a visual acquisition system, targets, and data storage and processing equipment. This results in a cost reduction of 50–70% compared to sensor-based systems [14,15]. It can achieve wide-area monitoring coverage and, by avoiding the limitations of high-precision sensors, can be applied in various environments. Additionally, it does not require frequent replacement or maintenance of sensors, is cost-effective, and does not interfere with the normal operation of bridges. This paper proposes a machine vision system for monitoring displacement in standard bridge structures. This system mainly consists of a high-definition camera and a data processing unit, with the latter comprising only a regular PC and MATLAB R2023b software. Compared to other methods, this system offers advantages such as simple usage conditions, low cost (requiring only a single camera setup), and low learning curve (due to simple algorithms). It is referred to as the DoG Structural Displacement Recognition System (DoG stands for Difference of Gaussians). During the validation process, a theoretical and practical comparison method was employed, using the DoG algorithm to monitor a simulated structure and comparing the results with displacement data to verify the algorithm’s accuracy and applicability.

2. The DoG Algorithm Applied for Edge Detection

Image data, once processed algorithmically, can be subjected to deeper signal processing by the computer. Common preprocessing methods include grayscale conversion and binarization, which convert each pixel's information into a form that computers can process efficiently. However, interference from noisy pixels can distort the desired results. Therefore, during preprocessing, it is crucial to apply denoising techniques to the images to enhance their quality. Common denoising methods include median filtering, mean filtering, bilateral filtering, and Gaussian filtering.
To mitigate Gaussian noise, the utilization of a Gaussian filter based on the noise distribution function is advisable. The expression for the Gaussian low-pass filter function in the image is as follows:
G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
where (x, y) represents the pixel coordinates relative to the center of the Gaussian kernel, σ is the standard deviation (scale) of the kernel, and G(x, y) denotes the value of the two-dimensional Gaussian function at coordinates (x, y).
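As a minimal sketch (using NumPy; not code from the paper), a discrete Gaussian kernel can be built by sampling G(x, y) on a centered grid and normalizing, which makes the 1/(2πσ²) constant cancel:

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 2D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0           # coordinates centered on the kernel
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))   # unnormalized G(x, y)
    return k / k.sum()                                # weights sum to 1

k = gaussian_kernel(5, sigma=1.0)   # 5x5 kernel peaking at its center
```

Normalizing by the sum (rather than the analytic 1/(2πσ²)) keeps the filter brightness-preserving on a finite grid.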
During the processing stage, the Gaussian filter smooths the image by replacing each pixel's gray value with a weighted average of its neighbors. At image edges, where the gray value changes sharply, distant pixels receive comparatively low weights, which helps preserve edge details during noise reduction processing.
Due to this property, feature detection at a certain scale can be performed by subtracting two adjacent Gaussian scale–space images to achieve the desired outcome. This results in the DoG response image.
g1(x, y) = Gσ1(x, y) ∗ f(x, y)
g2(x, y) = Gσ2(x, y) ∗ f(x, y)
where g1 and g2 represent the blurred results of the image at the two Gaussian scales σ1 and σ2, f(x, y) represents the input image, and ∗ denotes two-dimensional convolution.
Subtracting the two images g1 and g2 yields the expression for the DoG (Difference of Gaussians) function:
DoG = g1 − g2 = [ (1 / (2πσ1²)) exp(−(x² + y²) / (2σ1²)) − (1 / (2πσ2²)) exp(−(x² + y²) / (2σ2²)) ] ∗ f(x, y)
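The two-scale blur-and-subtract pipeline can be sketched as follows; the naive zero-padded convolution, the kernel sizes, and the synthetic step-edge image are illustrative choices, not the paper's implementation:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(img, k):
    """Naive 'same' convolution with zero padding (fine for a small demo;
    symmetric kernels make convolution and correlation identical)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

# Synthetic image with a vertical step edge at column 8.
img = np.zeros((16, 16))
img[:, 8:] = 255.0

# DoG: fine-scale blur minus coarse-scale blur.
dog = convolve2d(img, gaussian_kernel(7, 1.0)) - convolve2d(img, gaussian_kernel(7, 2.0))
# The response is largest near the step and ~0 in flat interior regions.
```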
The main purpose of applying differential processing to images is to enhance edges and details while suppressing noise. Through multi-scale analysis, features at different scales can be extracted. The precision of edge extraction is higher after performing the Gaussian difference operation, and it does not introduce large-scale noise. The main steps of edge detection include gradient calculation, calculation of gradient magnitude and direction, and non-maximum suppression.
The main purpose of gradient calculation is to compute the gradient of the difference image to obtain the edge strength and direction. The specific calculation formulas are as follows:
∇D(x, y) = (Dx, Dy)
where ∇D(x, y) represents the gradient of the Gaussian-difference image D, and Dx and Dy denote its partial derivatives along the horizontal and vertical pixel axes, respectively.
The formulas for calculating gradient magnitude and direction are as follows:
M(x, y) = √((Dx)² + (Dy)²)
θ(x, y) = arctan(Dy / Dx)
where M(x, y) represents the gradient magnitude of the pixel, and θ(x, y) represents the gradient direction of the pixel.
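A small worked example of these formulas, using NumPy's finite-difference gradient as a stand-in for the derivative operator (an illustrative choice, not the paper's implementation):

```python
import numpy as np

# A toy "difference image" D with a vertical edge between columns 1 and 2.
D = np.array([[0., 0., 1., 1.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.],
              [0., 0., 1., 1.]])

Dy, Dx = np.gradient(D)       # np.gradient returns per-axis derivatives, rows first
M = np.hypot(Dx, Dy)          # gradient magnitude: sqrt(Dx^2 + Dy^2)
theta = np.arctan2(Dy, Dx)    # gradient direction (arctan2 keeps the quadrant)
```

Using arctan2 instead of a bare arctan avoids losing the sign information of Dx and Dy; at the edge columns the magnitude is 0.5 and the direction is 0 (pointing along +x).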
Multiple convolutions with the DoG (Difference of Gaussians) function can be applied to an image, as depicted in Figure 1. This technique aids in amplifying the detection of relatively subtle structures, which might otherwise be overlooked or challenging to identify in subsequent processes. It mitigates biases in structural performance status information and enables more precise localization for feature point and edge extraction, as shown in Figure 2.
In the operation of the Difference of Gaussians (DoG) method, image processing operations, such as dilation and erosion, need to be applied to the images. The dilation algorithm enlarges the target area, amalgamating background points that touch the target area into the target object, consequently extending the boundary of the target outward. Its purpose is to fill in some holes in the target area and eliminate small grain noise contained in the target area. Conversely, the erosion algorithm diminishes the extent of the target area, leading to a reduction in the image’s boundary and the elimination of small and insignificant target objects. The formulas for dilation and erosion are as follows:
A ⊕ B = { (x, y) | (B)xy ∩ A ≠ ∅ }  (dilation)
A ⊖ B = { (x, y) | (B)xy ⊆ A }  (erosion)
where (B)xy denotes the structuring element B translated so that its origin lies at (x, y).
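A pure-NumPy sketch of binary dilation and erosion on a toy image (the 3x3 structuring element and the square test image are illustrative choices):

```python
import numpy as np

def dilate(a, b):
    """Binary dilation: 1 where b, centered at (i, j), overlaps any 1-pixel of a."""
    bh, bw = b.shape
    ph, pw = bh // 2, bw // 2
    p = np.pad(a, ((ph, ph), (pw, pw)))
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.any(p[i:i + bh, j:j + bw] & b)
    return out

def erode(a, b):
    """Binary erosion: 1 where b, centered at (i, j), fits entirely inside a."""
    bh, bw = b.shape
    ph, pw = bh // 2, bw // 2
    p = np.pad(a, ((ph, ph), (pw, pw)))
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.all(p[i:i + bh, j:j + bw][b == 1] == 1)
    return out

a = np.zeros((7, 7), dtype=int)
a[2:5, 2:5] = 1                    # 3x3 square of ones
b = np.ones((3, 3), dtype=int)     # structuring element

d = dilate(a, b)   # grows the square outward to 5x5
e = erode(a, b)    # shrinks the square to its single center pixel
```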
Figure 3 illustrates the application of the DoG system for structural displacement extraction. The DoG system proposed in this paper is divided into four main steps.
In the validation experiments described in this paper, data collection is mainly divided into algorithm data collection and validation data collection. As shown in Step 1 of Figure 3, this section focuses on algorithm data collection. By adjusting the field of view (FOV) of the calibrated camera, we ensure that the optical signals of the entire bridge structure can be captured and preserved. This is essential for the subsequent comparison of algorithm displacement with LVDT displacement measurements to determine the accuracy of the DoG algorithm.
As shown in Step 2 of Figure 3, the DoG algorithm proposed in this paper is mainly applied in the edge extraction phase of structural optical signal processing. By processing the structure under fixed multi-scale DoG filtering, the edge results can reflect the position of the structure in space. Under simulated load conditions, the structure will respond to different loads, resulting in deformation. The DoG edge processing can capture these deformation signals, leading to pixel-level deformation in the collected images. Therefore, when processing the structure, since the camera’s attitude angle and field of view are fixed under the same working conditions, it is only necessary to compare the structural displacement response results at different load times and points to determine the degree of deformation.
In Step 2, the collected video is segmented to obtain the optical signal of the structure at a specific moment. The DoG algorithm is then used to extract the edge information of the structure and track feature points. Since the edge signals are continuous [16,17], feature points can be arbitrarily selected. Compared to traditional LVDT displacement meters, the DoG algorithm can extract an infinite number of feature points, effectively solving the problem of insufficient data completeness in traditional displacement extraction. Moreover, the feature points are derived from the natural edge contours of the structure itself, eliminating the need for preprocessing results during system operation and avoiding any impact on the bridge’s traffic capacity in actual conditions [18].
By tracking the feature points, the displacement of the feature points and the edges near the feature points can be obtained, thereby determining the mechanical response of the structure. The mechanical response of the structure [19] is reflected by the pixel displacement, and through the above steps, the pixel-level displacement of the structure in the image can be obtained [20,21].
In a normal working state, the structure responds to changes in load size or shifts in the point of application with corresponding mechanical signal feedback. This feedback is reflected in optical information as subtle displacements in the structure of the image [22], typically at the pixel level, occupying only a few or dozens of pixels in the image. In the images captured by the camera, this is specifically manifested as pixel displacement at observation points/feature points.
In the Difference of Gaussians (DoG) algorithm, the Gaussian-blurred image difference can highlight the high-gradient areas of the image. Since the pixel values at the edges change drastically, and the pixel value differences at the edges of the Gaussian-blurred image are significant, calculating the difference can enhance edge information.
Since displacement is a relative value, we can use the DoG algorithm to extract edges in the initial state and the loaded state. The relative deformation between the two states can be considered the structural mechanical response. By locating the feature points, the pixel displacement of the structure in the image can be determined. Because the feature points and the extracted edges proposed in this paper are obtained through Gaussian differencing, it is necessary to keep the parameters of the DoG algorithm consistent under the same working conditions. Otherwise, different Gaussian filter bases may lead to overall deviations in the extracted results, significantly affecting the accuracy of subsequent structural mechanical performance evaluations.
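As an illustrative sketch (not the paper's implementation), comparing edge positions between the initial and loaded states can be reduced to locating the strongest intensity step in a column profile of each frame; the 0.4 mm/pixel scale factor below is an assumed value:

```python
import numpy as np

def edge_row(profile):
    """Row index of the strongest intensity step along one image column
    (a simplified stand-in for DoG-based edge localization)."""
    return int(np.argmax(np.abs(np.diff(profile))))

# Synthetic column profiles: the dark/bright boundary sits at row 40 in the
# unloaded state and at row 43 under load (a 3-pixel deflection).
initial = np.where(np.arange(100) < 40, 0.0, 255.0)
loaded = np.where(np.arange(100) < 43, 0.0, 255.0)

dpx = edge_row(loaded) - edge_row(initial)   # 3 pixels of image-space motion
d_mm = dpx * 0.4                             # assumed scale factor, mm/pixel
```

The key point the text makes holds here too: both frames must be processed with identical parameters, or the two edge positions are not comparable.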
In the system proposed in this paper, the spatial state of the structure and the camera remains unchanged under the same time–space conditions. The structure, in reality, will be recorded on the image according to a certain scale, known as the pixel scale factor, which can be calculated based on the camera’s posture and the spatial distance between the camera and the structure.
The pixel scale factor (Z) is a parameter used to convert the image signal into the displacement signal in this system. The calculation formula is as follows:
Z = L s L p
where L s represents the actual length of the object. L p represents the pixel length of the object in the image. The unit of the pixel scale factor is mm/pixel.
Using the pixel scale factor, we can convert the deformation of the structure’s edge in the image into the actual deformation of the structure, thereby assessing the mechanical performance of the bridge under normal working conditions and monitoring the displacement of the bridge.
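A minimal numeric sketch of this conversion, with hypothetical values (a 500 mm girder segment spanning 1250 pixels in the image):

```python
# Hypothetical numbers for illustration only.
L_s = 500.0     # actual length of the object, mm
L_p = 1250.0    # the same length measured in the image, pixels
Z = L_s / L_p   # pixel scale factor, mm/pixel -> 0.4

# An edge feature that moves 12.5 pixels between two frames then
# corresponds to a physical displacement of:
d_mm = 12.5 * Z   # 5.0 mm
```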
After determining the edge of the structure using the Gaussian difference method, the displacement of the structure is calculated and detected. In this study, a comparison between the calculated displacement obtained using our proposed method and the actual displacement allows for the evaluation and monitoring of the overall structural performance during regular operation [23].

3. Experimental Structure Composition

The experimental model employed in this investigation is a novel assembled aluminum I-beam bridge, depicted in Figure 4. The bridge configuration, akin to a simply supported beam structure, is illustrated in Figure 5, which also shows the dimensions and specific construction of the test piece. The main beam consists of two prefabricated aluminum alloy I-beams spliced together. The I-beams are standardized with a height of 200 mm, a width of 50 mm, a flange thickness of 5 mm, and a web thickness of 5 mm; the ratio of flange thickness to web thickness is likewise standardized. The aluminum alloy I-beams on both sides are connected by three hollow rectangular steel tubes of the same material as the main beams, welded at the beam segments and at the mid-span of the aluminum alloy beams. The bridge deck is made of organic glass, with a rectangular cross-section measuring 12 mm in height and 500 mm in width, and is bonded to the longitudinal beams using acrylic adhesive. The model bridge is hinge-supported at both ends, and its calculated span length is 5.64 m. The completed experimental beam and the restraint conditions are shown in the following figure:
In this experiment, it is essential to investigate the degree of deformation of the main beam structure of the bridge under various loads simulated to represent normal working conditions. The experimental design includes the following key points:
(1)
Camera arrangement
The theoretical framework proposed in this paper relies on non-contact computer vision detection technology for analyzing structural displacement measurements. Hence, the utilization of structural imagery data as initial data is paramount for this purpose. Using different acquisition devices can lead to variations in the collected data. Variations in factors like sampling frequency, focal length, resolution, and other device parameters can influence the accuracy of subsequent measurement analyses. Additionally, the outcomes from the same device can vary due to factors like shooting environments, lighting conditions, shooting distances, and camera angles.
(2)
Displacement sensor (LVDT) for deflection measurement
The data acquisition system (DH5902N, DHDAS, Jiangsu Donghua Testing Technology Co., Ltd., Jingjiang, Jiangsu, China) is used for data collection. The sensors are positioned at the lower edge of the simply supported beam, one at every 1/8 span point, totaling seven sensors.
(3)
Loading method
To assess the suitability of the algorithm introduced in this paper for deflections under various load conditions, we utilize a manual loading approach for incremental loading. This approach allows for the generation of time-history displacement curves for the supported beam. Through the comparison of time-history displacement curves with data retrieved from displacement sensors, one can ascertain the precision of the acquired displacement values.
As illustrated in Figure 6, the entirety of the system’s components is seamlessly interconnected to work together, thus finalizing the monitoring system for bridge structure displacement. The front-end camera serves as the primary image acquisition system, transmitting the image signal to the computer for processing. Subsequently, the computer outputs the structure monitoring results, facilitating the evaluation of structural health.
In the laboratory, a model car equipped with weights is used to simulate the state of a bridge under the action of loads during normal operation. The feasibility of the proposed algorithm for structural detection is initially assessed through static load monitoring. Following this, its practical application capability is evaluated under dynamic loads.
(1)
In the image acquisition system, a Canon 5DS R camera is utilized, maintaining consistent camera parameters under different load conditions (Figure 7).
The camera outputs images with a resolution of 8688 × 5792 pixels for photography and 1920 × 1080 pixels for videography, capturing static and dynamic loading, respectively. The camera is a full-frame camera with a sensor size of 36 × 24 mm. The required focal length for the shooting process can be calculated using the following formula:
f = (d × s) / w
where f represents the focal length of the camera, d represents the distance from the camera to the object being photographed, s represents the width of the sensor, and w represents the width of the object being photographed.
In the experiment, the distance from the test beam to the camera is 5 m, the sensor width is 36 mm, and the width of the test beam is 6 m. Therefore, it can be determined as follows:
f = (5 m × 0.036 m) / 6 m = 0.03 m
Therefore, in the validation experiments for the algorithm presented in this paper, fixing the camera’s focal length at 30 mm ensures that the proportion of the structure in the image is maximized while also guaranteeing that the mechanical characteristics of the structure can be effectively captured. To simulate the bridge’s response to loadings under real working conditions, loading is applied continuously over a short period, starting from the unloaded state to extract bridge displacement. Subsequently, the captured images and videos are transmitted to the data processing terminal via the camera data transmission cable for further processing.
(2)
To validate the accuracy of the algorithm proposed in this paper, LVDT displacement sensors are used to obtain precise displacement data for comparison. The displacement data from the seven LVDT sensors, which are in contact with the main beam, are collected using a Donghua tester (Table 1, Figure 8). Subsequently, the displacement extracted using the DoG algorithm proposed in this paper is compared with the accurate data obtained from the LVDT sensors to assess the accuracy and reliability of the algorithm’s displacement measurements [24].
(3)
The loading is manually applied using weights of 10 kg and 5 kg during the experiment. A model car is used to simulate the force applied to the bridge under working conditions.
(4)
In this study, the algorithm will be used for deformation recognition based on image signals, where the smallest unit in the image is the pixel. Therefore, the minimum resolution in the algorithm system corresponds to the pixel factor in the experiment. The specific calculation method and value of the pixel factor will be provided in Section 4.3.1.
(5)
The sampling rates of the LVDT and the camera offer multiple options. To select the most appropriate rates, the Nyquist–Shannon theory was referenced, which states that for band-limited signals, the sampling frequency must be at least twice the highest frequency to avoid aliasing. Before the validation experiments, finite element software Midas Civil 2022 was used to analyze the simply supported beam in the experiment, and the first five modal frequencies of the structure were calculated as 0.511 Hz, 2.035 Hz, 4.544 Hz, 7.995 Hz, and 12.317 Hz. Based on this, the camera sampling rate was set to 25 Hz, and the LVDT sampling rate was set to 500 Hz, ensuring accurate extraction of the structural response.
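The rate check described above can be written out directly; the modal frequencies are taken from the text, and the comparison is a straightforward application of the Nyquist criterion:

```python
# Modal frequencies reported from the Midas Civil analysis (Hz).
modes = [0.511, 2.035, 4.544, 7.995, 12.317]
nyquist_min = 2 * max(modes)          # 24.634 Hz: minimum rate to avoid aliasing

camera_rate, lvdt_rate = 25.0, 500.0  # rates chosen in the experiment
ok = camera_rate > nyquist_min and lvdt_rate > nyquist_min
```

Note that the 25 Hz camera rate clears the Nyquist bound for the fifth mode only narrowly, which is consistent with the text's emphasis on checking the modal frequencies beforehand.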
In the laboratory setting, the experiment is arranged according to the specific layout depicted in Figure 9:
In the experiments described in this paper, simulating the structural stress response of a bridge under working conditions requires applying loads to the structure. To verify the basic feasibility and practicality of the proposed algorithm, the load tests are divided into static load tests and dynamic load tests. The static load test serves as a fundamental test to evaluate the edge extraction effect of the DoG algorithm. By applying image-to-space scale transformation to obtain the algorithm’s displacement and comparing it with LVDT data, the accuracy of the DoG algorithm can be analyzed. The dynamic load test aims to simulate the structural response of the bridge under working conditions [25], thereby assessing the applicability of the DoG algorithm proposed in this paper.
To avoid any forward or backward movement of the vehicle caused by the applied load during the static load experiment, the vehicle is firmly secured in position prior to incremental loading and recording. Therefore, the weight of the vehicle does not need to be considered. The weights used in the laboratory are 10 kg, 5 kg, and 2 kg. The configuration of the loaded vehicle and supplementary weights is depicted in Figure 10:
In the static load experiment, the vehicle is secured in place before incremental loading and recording to prevent the vehicle from sliding forward or backward due to the applied load. To avoid structural displacement fluctuations caused by impact effects during loading, which may lead to significant fluctuations in displacement measured by displacement sensors and extracted by the algorithm, the structure is allowed to settle for one minute after loading. Subsequently, the mechanical condition of the bridge is captured through photography, and displacement sensor data are collected (Table 2).
During the dynamic load experiment, it is necessary to simulate the mechanical performance generated by a car passing over the bridge under normal working conditions. Therefore, after loading the vehicle, it is propelled at a constant speed over the bridge. During the experiment, a motor generates traction force to propel the vehicle steadily. Concurrently, the structure is monitored by a camera, and data are gathered (Table 3).

4. Experimental Results and Analysis

4.1. Experimental Data Collection

4.1.1. Image Data Collection

During the static load experiment, to document the deformation of the bridge under varying loads, the bridge structure is photographed using the camera equipment specified in Section 2. The initial condition is when the vehicle is unloaded. For each additional increment of 10 kg, images are captured after the structure has stabilized (Figure 11).
During the dynamic load experiment, the model car will be propelled by a motor located at the right end of the test beam, moving from the left to the right at a speed of 0.15 m/s. Given that the test beam is 6 m long, the dynamic load signal acquisition time will be approximately 40 s.

4.1.2. LVDT Data Collection

To authenticate the precision of the algorithm introduced in this paper, the data from Linear Variable Differential Transformer (LVDT) sensors offer the actual extent of deformation of the bridge structure. Below are the time-history displacement curves for both static and dynamic load conditions (Figure 12). Curves labeled 1 and 7 depict displacement at the near and far ends, respectively, at the one-eighth points. Curves 2 and 6 represent the displacement curves at the near and far ends, respectively, at the one-quarter points. Curves 3 and 5 represent the displacement curves at the 3/8 and 5/8 points, respectively. Curve 4 represents the measured displacement curve at the mid-span.

4.2. Edge Detection Effectiveness

4.2.1. Image Preprocessing

The images output by the camera are RGB images, consisting of a three-dimensional array of size M × N × 3. Processing RGB images generally takes more time due to their complexity. Therefore, grayscale conversion is essential in the proposed algorithm to reduce the time and hardware consumption needed for extracting structural deformation.
Common methods for image grayscale conversion include averaging, weighted averaging, and min-max normalization. In this study, the weighted averaging method is used for grayscale conversion of the collected image information, with the results as follows (Figure 13):
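A sketch of the weighted-averaging conversion; the ITU-R BT.601 luma weights used here are a common convention, since the paper does not state its exact weights:

```python
import numpy as np

def rgb_to_gray(img):
    """Weighted average over the R, G, B channels (BT.601 weights assumed)."""
    w = np.array([0.299, 0.587, 0.114])
    return img[..., :3] @ w   # collapses the last axis: M x N x 3 -> M x N

rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 255.0           # a pure-red test image
gray = rgb_to_gray(rgb)       # every pixel becomes 255 * 0.299
```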
In the experiment, although the initial output image size is 8688 × 5792, the portion of the image containing the bridge structure is only 8688 × 350, approximately 6% of the total area. Once the Region of Interest (ROI) is extracted, the processing speed is anticipated to increase significantly, from several to over ten times faster. Therefore, manual selection of the ROI is adopted. The height of the rectangular ROI is set to 1.5 times the height of the experimental girder. Initially, the lower edge of the structure is positioned on the centerline of the ROI, as illustrated in Figure 14.
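The ROI cropping step might look like the following; the vertical position of the ROI is a hypothetical value, while the frame size and the 1.5x-girder-height rule come from the text:

```python
import numpy as np

# Hypothetical frame at the camera's still-image resolution; roi_top is an
# assumed value, chosen so the girder sits near the ROI centerline.
frame = np.zeros((5792, 8688), dtype=np.uint8)
girder_height_px = 350
roi_top = 2700                              # assumed vertical position, pixels
roi_height = int(1.5 * girder_height_px)    # 1.5x the girder height, per the text
roi = frame[roi_top:roi_top + roi_height, :]  # keep full width, crop the height

frac = roi.size / frame.size   # fraction of pixels left to process (< 10%)
```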

4.2.2. DoG Algorithm for Edge Extraction

Common edge detection operators include the Sobel, Prewitt, and Canny operators, which locate edges based on changes in grayscale values in the image. The Canny operator applies non-maximum suppression and double thresholding after gradient computation, making it less susceptible to noise than the other operators and, thus, widely applicable. However, in many scenarios the results of these operators cannot be used directly. Edge extraction performed on the ROI identified earlier yields the results shown in Figure 15, Figure 16 and Figure 17:
The above results indicate that common operators cannot accurately extract the edges of the beam under the experimental conditions, highlighting the significant limitations of traditional operators.
In the novel edge detection algorithm proposed in this paper, the grayscale image first undergoes Gaussian filtering at two distinct scales, i.e., two Gaussian filters with different standard deviations, one larger and one smaller. The smaller-scale Gaussian filter preserves the edges and details within the image, while the larger-scale Gaussian filter smooths away fine detail and background texture. The Gaussian difference image is obtained by subtracting one filtered image from the other.
When selecting scales, the main considerations are the edge width, the potential displacement range, and the image characteristics. If the edges are thin and fine, such as thin lines, a smaller Gaussian scale is needed; if the edges are coarser, a larger scale is required to capture the boundary contours effectively. Smaller-scale Gaussian filters respond better at higher spatial frequencies, so combining Gaussian differences across scales covers the edge features and enhances edge detection sensitivity and accuracy. Since the edges of the experimental beam are approximately seven pixels wide, a relatively small value within a reasonable range is advisable for the smaller-scale Gaussian filter.
During Gaussian difference processing, the blurring induced by Gaussian filtering can weaken the resulting difference image, so image binarization is performed to enhance the edge information. In addition, the Difference of Gaussians (DoG) process requires Gaussian kernels at two different scales.
Given the relatively fine edges in this study, the smaller scale is initially set to 0.5 and the larger scale to 8, with the binarization threshold set to 10. The Gaussian kernel size is 3 × 3, and the Gaussian kernels for the two scales are shown in Figure 18:
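The DoG-and-binarize step can be sketched as below. The sigma values (0.5 and 8) and the threshold (10) come from the text; the separable convolution with nearest-edge padding is an implementation assumption:

```python
import numpy as np

def gaussian_blur_1d(arr, sigma, axis):
    # Separable 1-D Gaussian convolution along `axis`, nearest-edge padding.
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    idx = np.clip(np.arange(-radius, arr.shape[axis] + radius),
                  0, arr.shape[axis] - 1)
    padded = np.take(arr, idx, axis=axis)
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode='valid'), axis, padded)

def dog_edges(gray, sigma_small=0.5, sigma_large=8.0, threshold=10):
    # Subtract the coarse-scale blur from the fine-scale blur,
    # then binarize the difference at the given threshold.
    g = gray.astype(np.float64)
    fine = gaussian_blur_1d(gaussian_blur_1d(g, sigma_small, 0), sigma_small, 1)
    coarse = gaussian_blur_1d(gaussian_blur_1d(g, sigma_large, 0), sigma_large, 1)
    return (np.abs(fine - coarse) > threshold).astype(np.uint8)

# Synthetic test image: dark background with a bright band (the "girder")
img = np.zeros((100, 100))
img[40:60, :] = 200
edges = dog_edges(img)  # responds strongly at the band boundaries
```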
Under the Gaussian kernel scales of 0.5 and 8, with a binary threshold of 10, the resulting edge structure is shown in Figure 19:
These observations show that, compared with the alternative operators, the Gaussian difference algorithm proposed in this paper extracts the structural edge features more effectively: it suppresses noise more efficiently and recovers the edge line shape more completely.

4.3. Structure Displacement Extraction

As mentioned earlier, the edge image obtained by the Gaussian difference (DoG) method represents the relative position of the structure in the image. Since the structural deformation characteristics are also relative displacements, there is no need for correction of the structure information image extracted by the Gaussian difference method during the processing.
Through erosion and dilation, gaps and spikes in the image are removed. The algorithm then scans each vertical pixel column of the target area from top to bottom and deletes the first white pixel it encounters, together with its adjacent white pixels. A second top-to-bottom scan of the same column then retains the first white pixel it detects, along with its adjacent white pixels. This procedure yields the contour line of the lower edge of the structure.
Successful extraction of the structural geometric edge features with the proposed method requires selecting the same baseline for each condition at identical scales, so that the edges produced by the Gaussian difference method, which are more than one pixel wide, do not bias the results. A peeling algorithm is therefore applied to the target pixels: in each column, scanning from top to bottom, only the first pixel is preserved. Because dilation and erosion have already been applied to the preliminary image, the peeled contour is continuous. This contour serves as the reference baseline for each condition, as depicted in Figure 20:
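A sketch of the column-wise scanning and peeling described above. The exact treatment of "adjacent white pixels" is an assumption from the text: each column's first white run (the upper edge) is consumed, and the first pixel of the next run (the lower edge) is kept:

```python
import numpy as np

def peel_lower_edge(binary):
    # From a binary edge map (1 = edge), skip and consume the first white
    # run per column (the upper edge), then keep the first row of the next
    # white run: a one-pixel-wide lower-edge contour (-1 if no second run).
    rows, cols = binary.shape
    contour = np.full(cols, -1, dtype=int)
    for c in range(cols):
        col, r = binary[:, c], 0
        while r < rows and col[r] == 0:   # descend to the first white run
            r += 1
        while r < rows and col[r] == 1:   # consume it (upper edge)
            r += 1
        while r < rows and col[r] == 0:   # descend to the next white run
            r += 1
        if r < rows:
            contour[c] = r                # keep its first pixel
    return contour

# Example: upper edge at rows 2-4, lower edge at rows 10-12 in every column
edge_map = np.zeros((20, 3), dtype=np.uint8)
edge_map[2:5, :] = 1
edge_map[10:13, :] = 1
contour = peel_lower_edge(edge_map)  # row 10 in each column
```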

4.3.1. Static Load Structural Displacement Extraction

To validate the feasibility of the proposed algorithm under static load conditions, the edge extraction algorithm was applied to each scenario to obtain the baseline for every case, with the unloaded bridge structure as the initial state. The structural deformation and edge detection results under the different conditions were merged in MATLAB into a series of fused images, in which the deformation is reflected by the displacement of pixels (Figure 21).
Next, the pixel displacement S_p along the edge line in the image was calculated, and the pixel scale factor Z was used to convert it into the physical displacement S_s. In this experiment, targets with a size of 48 mm × 48 mm were used. In the figure below, D represents the actual size of a component of the structure, and d represents the corresponding length in pixels in the image. During image acquisition these targets occupied 72–75 pixels, averaging approximately 74 pixels over multiple points. The scale factor used in the experiment is therefore
Z = D/d = 48/74 = 0.6486 mm/pixel
The displacement S_s is then computed as
S_s = S_p × Z
To validate the accuracy of the algorithm, each 1/8 point is taken as a reference point. For each condition, we calculated the relative error between the image displacement S_s obtained by the above method at the seven reference points and the actual displacement S_l measured by the LVDT displacement sensor:
error = (S_s − S_l)/S_l × 100%
After extracting the overall displacement of the bridge using the algorithm proposed earlier, the displacement curve of the structure at a specific moment is obtained as follows (Figure 22):
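The conversion and error calculation amount to a couple of lines; a sketch using the scale factor and the first entry of Table 4:

```python
# Pixel-to-physical conversion and relative error, using the scale
# factor from the text (48 mm target spanning ~74 pixels on average).
Z = 48 / 74                        # mm per pixel, ~0.6486

def pixel_to_mm(s_p, z=Z):
    return s_p * z

def relative_error(s_s, s_l):
    return abs(s_s - s_l) / s_l * 100.0

# First entry of Table 4: S_p = 1.2333 px, LVDT reading S_l = 0.8099 mm
s_s = pixel_to_mm(1.2333)          # ~0.80 mm
err = relative_error(s_s, 0.8099)  # ~1.22 %
```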
By performing the aforementioned operations, the error between the image displacement and the actual displacement for each scenario can be calculated as follows (Table 4, Table 5 and Table 6):
More commonly used methods for identifying structural deformation include the Canny edge detection method and an improved Canny method with sub-pixel correction. Using these existing algorithms, the results of the 10 kg, 20 kg, and 30 kg experiments in this study were recalculated. In Figure 23, the blue sections depict the errors of the Canny operator and the improved sub-pixel Canny algorithm, respectively, while the yellow sections represent the error of the DoG algorithm proposed in this paper.
It can be seen that under static loading conditions, the DoG algorithm has higher accuracy compared to other algorithms. It can accurately extract mechanical data on structural deformation, enabling the assessment of structural safety.

4.3.2. Dynamic Load Structural Displacement Extraction

In real-world scenarios, the load conditions acting on a structure are often more complex than the static loads described above. Video capture is therefore used to record the structural loading state; its sampling rate is higher than that of still photography, which significantly improves the extraction and identification of the structural state. In the dynamic load experiment, the moving load experienced by a structure in normal use is simulated by a motor-driven model car loaded with weights, and the geometric shape of the structure is recorded on video. The video is processed frame by frame, with the DoG algorithm extracting the deformed edges of the structure in each frame. Using the first frame as the initial state, the displacement of the structure is tracked over time, yielding the deformation image of the loaded structure at any given moment [26]. Analyzing these graphical results with mechanical methods then gives the mechanical information of the structure at any time. In this experiment, this approach is used to extract the mechanical performance of the structure under dynamic loading and thereby achieve displacement recognition and detection.
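The frame-by-frame tracking loop can be sketched as follows. For brevity, a trivial brightness-threshold edge extractor stands in for the paper's full DoG pipeline, so the extractor itself is an assumption; only the first-frame-as-baseline logic and the scale factor follow the text:

```python
import numpy as np

def lower_edge_rows(frame, threshold=128):
    # Stand-in edge extractor: per column, the last row brighter than
    # `threshold` (the paper uses its DoG edge map here instead).
    mask = frame > threshold
    last = frame.shape[0] - 1 - np.argmax(mask[::-1, :], axis=0)
    last[~mask.any(axis=0)] = -1
    return last

def track_displacement(frames, scale=48 / 74):
    # Per-frame displacement of the lower edge relative to the first
    # frame, converted to mm with the scale factor from the text.
    baseline = lower_edge_rows(frames[0])
    return [(lower_edge_rows(f) - baseline) * scale for f in frames]

# Two synthetic frames: a bright band whose lower edge drops by 2 px
f0 = np.zeros((50, 10)); f0[20:30, :] = 255
f1 = np.zeros((50, 10)); f1[20:32, :] = 255
disp = track_displacement([f0, f1])
```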
Based on the results of the above algorithm, the approximate time-history displacement curve at the specified point can be obtained as follows (Figure 24):
Plotting the above DoG displacement and LVDT data together results in Figure 25:
For the results above, the NRMSE (normalized root mean square error) is used to compare the algorithm's results with the LVDT results. For the four test conditions, the NRMSE values are 0.0501 for 10 kg, 0.0568 for 20 kg, 0.0374 for 30 kg, and 0.0433 for 40 kg. In image processing, an NRMSE below 0.05 is generally considered to indicate very good algorithm performance and reliability, while values between 0.05 and 0.1 indicate good performance with relatively small errors and high result quality.
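A sketch of the NRMSE comparison; normalizing the RMSE by the range of the reference signal is one common convention and an assumption here, since the paper does not state which normalization it uses:

```python
import numpy as np

def nrmse(estimate, reference):
    # Root-mean-square error normalized by the range of the reference
    # signal (an assumed convention; others divide by its mean or std).
    estimate, reference = np.asarray(estimate), np.asarray(reference)
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / (np.max(reference) - np.min(reference))

# Toy example: DoG-style estimate vs. LVDT-style reference
lvdt = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
dog = np.array([0.1, 1.1, 1.9, 1.0, 0.1])
score = nrmse(dog, lvdt)
```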
From the time-history displacement curve results of the structural part mentioned above, it can be observed that the DoG algorithm proposed in this paper can achieve high-precision identification and extraction of displacement under the working conditions of the structure.

5. Conclusions

The DoG algorithm proposed in this paper utilizes the structural characteristics of the bridge to extract signals for bridge deformation. It achieves good accuracy under various loading conditions and demonstrates higher data stability compared to existing mature algorithms. Additionally, the following aspects require in-depth research in the context of this study:
(i)
Vibrations during the measurement process are unavoidable. When using the camera, the potential translational movements of the camera must be considered, and monitoring the camera's angular acceleration or velocity is necessary to reduce environmental noise and improve robustness.
(ii)
The transition from laboratory validation to field validation needs to be gradual and thoroughly verified.
(iii)
In practical applications, environmental complexities may arise. It is important to intelligently and accurately select the Gaussian kernel scale during the algorithm process to ensure that when elements in the image respond to the Gaussian function, the background can be more precisely filtered.
At the same time, the research in this paper is fundamental, focusing only on monitoring deformation under structural loading; the experiments were therefore conducted at a speed of 0.15 m/s. Extracting the mechanical characteristics of a structure from bridge deformation requires larger dynamic responses, for example by increasing the vehicle speed or applying an impact to the structure. From the perspective of deformation monitoring alone, however, the proposed algorithm performs well. Applying this fundamental algorithm to practical monitoring requires further research, such as improving the stability of the DoG method against complex backgrounds and enhancing the algorithm's efficiency; integrating the algorithm into an overall monitoring system in later stages is a further challenge to be addressed.

Author Contributions

Methodology, T.W.; Validation, Z.S.; Formal analysis, X.L.; Data curation, Z.Z.; Project administration, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shenzhen Science and Technology Program [CY20220818095608018], the Chongqing Natural Science Foundation General Project [CSTB2022NSCQ-MSX1409], and the Chongqing Natural Science Foundation Doctoral Direct Project [CSTB2023NSCQ-BSX0030]. The APC was funded by [CY20220818095608018].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Acknowledgments

The authors would like to thank the State Key Laboratory of Mountain Bridge and Tunnel Engineering at Chongqing Jiaotong University for their help in this investigation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hou, R.; Xia, Y. Review on the new development of vibration-based damage identification for civil engineering structures: 2010–2019. J. Sound Vib. 2021, 491, 115741. [Google Scholar] [CrossRef]
  2. He, Z.; Li, W.; Salehi, H.; Zhang, H.; Zhou, H.; Jiao, P. Integrated structural health monitoring in bridge engineering. Autom. Constr. 2022, 136, 104168. [Google Scholar] [CrossRef]
  3. Ni, P.; Han, Q.; Du, X.; Fu, J.; Xu, K. Probabilistic model updating of civil structures with a decentralized variational inference approach. Mech. Syst. Signal Process. 2024, 209, 111106. [Google Scholar] [CrossRef]
  4. Kim, K.; Choi, J.; Chung, J.; Koo, G.; Bae, I.H.; Sohn, H. Structural displacement estimation through multi-rate fusion of accelerometer and RTK-GPS displacement and velocity measurements. Measurement 2018, 130, 223–235. [Google Scholar] [CrossRef]
  5. Soleymani, A.; Jahangir, H.; Nehdi, L. Damage detection and monitoring in heritage masonry structures: Systematic review. Constr. Build. Mater. 2023, 397, 132402. [Google Scholar] [CrossRef]
  6. Liu, Y.; Bao, Y. Review of electromagnetic waves-based distance measurement technologies for remote monitoring of civil engineering structures. Measurement 2021, 176, 109193. [Google Scholar] [CrossRef]
  7. Jian, X.; Xia, Y.; Chatzi, E.; Lai, Z. Bridge influence surface identification using a deep multilayer perceptron and computer vision techniques. Struct. Health Monit. 2024, 23, 1606–1626. [Google Scholar] [CrossRef]
  8. Xu, W. Vehicle Load Identification Using Machine Vision and Displacement Influence Lines. Buildings 2024, 14, 392. [Google Scholar] [CrossRef]
  9. Yi, C.Y. Research on Bridge Dynamic Visual Displacement Measurement Method Based on UAV. Bachelor’s Thesis, Central South University, Changsha, China, 2023. [Google Scholar]
  10. Zhang, S.; Ni, P.; Wen, J.; Han, Q.; Du, X.; Xu, K. Automated vision-based multi-plane bridge displacement monitoring. Autom. Constr. 2024, 166, 105619. [Google Scholar] [CrossRef]
  11. Khuc, T.; Nguyen, T.A.; Dao, H.; Catbas, F.N. Swaying displacement measurement for structural monitoring using computer vision and an unmanned aerial vehicle. Measurement 2020, 159, 107769. [Google Scholar] [CrossRef]
  12. Han, Y.; Wu, G.; Feng, D. Vision-based displacement measurement using an unmanned aerial vehicle. Struct. Control. Health Monit. 2022, 29, e3025. Available online: https://onlinelibrary.wiley.com/doi/10.1002/stc.3025 (accessed on 14 November 2024). [CrossRef]
  13. Jana, D.; Nagarajaiah, S. Computer vision-based real-time cable tension estimation in Dubrovnik cable-stayed bridge using moving handheld video camera. Struct. Control. Health Monit. 2021, 28, e2713. Available online: https://onlinelibrary.wiley.com/doi/10.1002/stc.2713 (accessed on 14 November 2024). [CrossRef]
  14. Zheng, Y. Multi-target Structural Dynamic Displacement Measurement Method Based on Computer Vision. Hunan Commun. Sci. Technol. 2024, 50, 153–157. [Google Scholar]
  15. Zhang, J. Design and Implementation of Bridge Monitoring System Based on Multi-Sensors. Bachelor’s Thesis, Jilin Chemical Engineering College, Jilin, China, 2024. [Google Scholar] [CrossRef]
  16. Duan, X.; Chu, X.; Zhu, W.; Zhou, Z.; Luo, R.; Meng, J. Novel Method for Bridge Structural Full-Field Displacement Monitoring and Damage Identification. Appl. Sci. 2023, 13, 1756. [Google Scholar] [CrossRef]
  17. Chu, X.; Zhou, Z.; Zhu, W.; Duan, X. Multi-Point Displacement Synchronous Monitoring Method for Bridges Based on Computer Vision. Appl. Sci. 2023, 13, 6544. [Google Scholar] [CrossRef]
  18. Sun, C.; Shen, J.; Zhang, X.; Shi, H.; Wang, Y. Degradation modeling and remaining life prediction of multi-state long-life systems under random environmental influences. Meas. Sci. Technol. 2024, 35, 095110. [Google Scholar] [CrossRef]
  19. Zhang, S.; Ni, P.; Wen, J.; Han, Q.; Du, X.; Fu, J. Intelligent identification of moving forces based on visual perception. Mech. Syst. Signal Process. 2024, 214, 111372. [Google Scholar] [CrossRef]
  20. Xie, Y.; Meng, X.; Nguyen, D.T.; Xiang, Z.; Ye, G.; Hu, L. A Discussion of Building a Smart SHM Platform for Long-Span Bridge Monitoring. Sensors 2024, 24, 3163. [Google Scholar] [CrossRef]
  21. Waqas, H.A.; Sahil, M.; Riaz, A.; Ahmed, S.; Waseem, M.; Seitz, H. Efficient bridge steel bearing health monitoring using laser displacement sensors and wireless accelerometers. Front. Built Environ. 2024, 10, 1396815. [Google Scholar] [CrossRef]
  22. Luo, R. Research on Bridge Holographic Deformation Monitoring and Damage Identification Based on Pier Outreach Camera. Bachelor’s Thesis, Chongqing Jiaotong University, Chongqing, China, 2024. [Google Scholar]
  23. Giglioni, V.; Poole, J.; Venanzi, I.; Ubertini, F.; Worden, K. A domain adaptation approach to damage classification with an application to bridge monitoring. Mech. Syst. Signal Process. 2024, 209, 111135. [Google Scholar] [CrossRef]
  24. Luo, K.; Kong, X.; Zhang, J.; Hu, J.; Li, J.; Tang, H. Computer Vision-Based Bridge Inspection and Monitoring: A Review. Sensors 2023, 23, 7863. [Google Scholar] [CrossRef] [PubMed]
  25. Guzman-Acevedo, G.M.; Quintana-Rodriguez, J.A.; Vazquez-Becerra, G.E.; Garcia-Armenta, J. A reliable methodology to estimate cable tension force in cable-stayed bridges using Unmanned Aerial Vehicle (UAV). Measurement 2024, 229, 114498. [Google Scholar] [CrossRef]
  26. Yu, Z.; Zhang, G.; Zhang, C.; Zhang, Z.; Wang, Q. Visualization of Time-Series InSAR Processing and Deformation Monitoring Experiment Based on PySide. Eng. Surv. 2024, 33, 41–48. [Google Scholar] [CrossRef]
Figure 1. The convolution of the DoG with an image.
Figure 2. Diagram illustrating the effect of the DoG.
Figure 3. Four main steps of the DoG system.
Figure 4. Experimental bridge.
Figure 5. Diagram of experimental beam structure.
Figure 6. Machine vision measurement system.
Figure 7. Pinhole camera model.
Figure 8. Schematic diagram of the DoG monitoring system.
Figure 9. Experimental layout diagram.
Figure 10. Experimental vehicle and additional weights.
Figure 11. Structure state at each dynamic load level.
Figure 12. Deformation curves of the structure under dynamic load condition.
Figure 13. Grayscale image using the weighted averaging method.
Figure 14. The result of the Region of Interest (ROI) selection.
Figure 15. The result of the Canny operator.
Figure 16. The result of the Prewitt operator.
Figure 17. The result of the Sobel operator.
Figure 18. Gaussian difference method, Gaussian kernel ((a) smaller Gaussian kernel; (b) larger Gaussian kernel).
Figure 19. The result of the DoG operator.
Figure 20. Pixel extraction result.
Figure 21. Pixel displacement extraction image.
Figure 22. DoG displacement under static load.
Figure 23. Error comparison between Canny operator and DoG.
Figure 24. DoG data results.
Figure 25. Comparison between DoG algorithm and LVDT displacement under dynamic load ((a) 10 kg; (b) 20 kg; (c) 30 kg; (d) 40 kg).
Table 1. Conditions of instruments.

Canon 5DsR (Canon Limited, Tokyo, Japan)
  Sensor size: 36 × 24 mm (3:2)
  Effective pixels: 50.6 MP
  Sensor type: CMOS

EF 24–70 mm f/2.8L II USM (Canon Limited, Tokyo, Japan)
  Focal length: 24–70 mm
  Converted focal length in APS-C format: 38–112 mm
  Maximum diameter and length: 88.5 × 113 mm
  Lens structure: 18 elements in 13 groups

DH#5G103 (Jiangsu Donghua Testing Technology Co., Ltd., Taizhou, China)
  Sensitivity: 0.4 mV/mm @ 2 V
  Effective travel: 50 mm

Output parameters
  Images: 8868 × 5792
  Videos: 1920 × 1080 (25 FPS)
Table 2. Static load conditions.

Condition Number | Load Condition | Load Position
J1 | 0 kg | Near the edge of the mid-span
J2 | 10 kg | Near the edge of the mid-span
J3 | 20 kg | Near the edge of the mid-span
J4 | 30 kg | Near the edge of the mid-span
J5 | 40 kg | Near the edge of the mid-span
Table 3. Dynamic load conditions.

Condition Number | Load Condition | Load Position
D1 | 10 kg | Propelled at a constant 15 cm/s from left to right by the motor
D2 | 20 kg | Propelled at a constant 15 cm/s from left to right by the motor
D3 | 30 kg | Propelled at a constant 15 cm/s from left to right by the motor
D4 | 40 kg | Propelled at a constant 15 cm/s from left to right by the motor
Table 4. The image structure recognition results under a 10 kg load.

Reference Points | 1/8 L | 2/8 L | 3/8 L | 4/8 L | 5/8 L | 6/8 L | 7/8 L
S_p / pixel | 1.2333 | 2.3333 | 3.0000 | 3.5333 | 3.1667 | 2.2000 | 1.1333
S_s / mm | 0.8000 | 1.5135 | 1.9459 | 2.2919 | 2.0541 | 1.4270 | 0.7351
S_l / mm | 0.8099 | 1.4878 | 2.0446 | 2.1829 | 1.9112 | 1.3181 | 0.7353
Deviation | 1.22% | 1.73% | 4.82% | 4.99% | 7.48% | 8.27% | 0.02%
Table 5. The image structure recognition results under a 20 kg load.

Reference Points | 1/8 L | 2/8 L | 3/8 L | 4/8 L | 5/8 L | 6/8 L | 7/8 L
S_p / pixel | 2.7667 | 5.0000 | 6.9667 | 7.0000 | 6.5333 | 4.6667 | 2.4667
S_s / mm | 1.7946 | 3.2432 | 4.5189 | 4.5405 | 4.2378 | 3.0270 | 1.6000
S_l / mm | 1.8235 | 3.3123 | 4.4499 | 4.7400 | 4.1328 | 2.9084 | 1.5667
Deviation | 1.58% | 2.09% | 1.55% | 4.21% | 2.54% | 4.08% | 2.13%
Table 6. The image structure recognition results under a 40 kg load.

Reference Points | 1/8 L | 2/8 L | 3/8 L | 4/8 L | 5/8 L | 6/8 L | 7/8 L
S_p / pixel | 4.0667 | 7.3333 | 10.6667 | 13.0000 | 12.0000 | 8.9000 | 4.6000
S_s / mm | 2.6378 | 4.7568 | 6.9189 | 8.4324 | 7.7838 | 5.7730 | 2.9838
S_l / mm | 2.5184 | 4.5440 | 6.7134 | 8.5034 | 7.9379 | 5.8329 | 3.1568
Deviation | 4.74% | 4.68% | 3.06% | 0.83% | 1.94% | 1.03% | 5.48%

Share and Cite

MDPI and ACS Style

Shen, Z.; Zhu, W.; Wu, T.; Luo, X.; Zhou, Z. Bridge Displacements Monitoring Method Based on Pixel Sequence. Appl. Sci. 2024, 14, 11901. https://doi.org/10.3390/app142411901
