Application Framework and Optimal Features for UAV-Based Earthquake-Induced Structural Displacement Monitoring
Abstract
1. Introduction
- The monitored specimen was a reduced-scale structural model rather than a full-scale structure. Monitoring the response of a full-scale structure requires a larger camera-to-scene distance, which may compromise the resolution of the UAV-captured videos.
- The study was conducted in a controlled indoor environment, which differs from actual field applications. Factors such as weather-induced UAV drift and additional image noise caused by non-uniform lighting could therefore not be considered.
- Only one or two types of features and reference target patterns were considered in each study; the accuracy and robustness of different features and patterns for structural displacement monitoring have not been directly compared.
2. UAV Video Image Collection Program
2.1. Case Study: NHERI Converging Design (Phase II) Shake Table Test Program
2.2. UAV Vision-Based Monitoring Plan and Reference Target Layout
3. Target Detection and Tracking Algorithm
- Lens distortion correction is applied to each video frame to obtain distortion-free image frames (see Section 4.1).
- Coarse bounding boxes representing the initial regions of interest (ROIs) of each target are manually defined in the reference video frame. The reference video frame is the first frame of each test video, corresponding to a state 5 to 10 s before the testing (and hence movement of the specimen) initiates. Each coarse bounding box is 51 × 51 pixels in the image frame.
- The colored video frames are converted into grayscale images, since black-to-white contrast is easier to identify with a single-value threshold than with the three per-channel thresholds required for an RGB image. Grayscale images are also smaller (one channel instead of three), which reduces the computational cost of the algorithm. A minimal sketch of this preprocessing is given after this list.
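As a concrete illustration of this preprocessing, a minimal sketch in Python with OpenCV is shown below. The intrinsic matrix and distortion coefficients are placeholders for the values obtained from the camera calibration described in Section 4.1, and the ROI center is the center of the manually defined coarse bounding box.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients; the actual values
# come from the camera calibration described in Section 4.1.
K = np.array([[2800.0, 0.0, 1920.0],
              [0.0, 2800.0, 1080.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def preprocess_frame(frame_bgr, roi_center, half=25):
    """Undistort a frame, convert it to grayscale, and crop a 51 x 51 ROI
    centered at roi_center = (u, v) in pixel coordinates."""
    undistorted = cv2.undistort(frame_bgr, K, dist)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    u, v = roi_center
    roi = gray[v - half:v + half + 1, u - half:u + half + 1]  # 51 x 51
    return gray, roi
```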
3.1. Feature Points Tracking-Based Algorithm for Square Checkerboard Patterns
- Step-1: The grayscale initial ROI image (Figure 4a) is thresholded into a binary image (Figure 4b) using Otsu's method [34]. Pixels representing white color in the pattern are defined as foreground pixels. To remove background noise, connected components with fewer than 10 pixels are removed from the binary image.
- Step-2: Edge detection with pixel-level accuracy (Figure 4c) is performed on the binary image using the Sobel operator [35]. Although the Canny method [36] or other sub-pixel edge detection methods [37] may localize edge points more precisely, the Sobel operator is sufficient for the refined ROI generation in Step 3 at a lower computational cost, since this step only aims to filter background noise out of the initial ROI. Both the edge points along the edges of the squares (black against white or red against white) and those along the boundary of the target (white, black, or red against the background) are extracted.
- Step-3: The boundary point set of all edge points is extracted to form the bounding box of the target, representing the exact target region with pixel-level accuracy (blue solid lines in Figure 4d). The approximate center of this bounding box is calculated as the mean of the boundary point set. The bounding box is then expanded by a factor of 1.1 relative to its approximate center to obtain the refined ROI (red dashed lines in Figure 4d), ensuring that all feature points are contained within the refined ROI.
- Step-4: The Harris-Stephens feature detection algorithm with quadratic interpolation [38,39] is applied to detect the corner points within the refined ROI in the grayscale image (red dashed lines in Figure 4e). The algorithm runs a 3 × 3 window W over the refined ROI in the grayscale image and computes the spatial gradient matrix M at each pixel (x, y) in the refined ROI [38]:

$$M = \begin{bmatrix} \sum_{W} I_x^2 & \sum_{W} I_x I_y \\ \sum_{W} I_x I_y & \sum_{W} I_y^2 \end{bmatrix}$$

where $I_x$ and $I_y$ are the image intensity gradients in the x- and y-directions. Pixels at which both eigenvalues of M are large are identified as corner points, and quadratic interpolation of the corner response refines their coordinates to sub-pixel accuracy [39]. A sketch of Steps 1 through 4 and the subsequent frame-to-frame tracking is given after this list.
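The sketch below outlines Steps 1 through 4, together with the frame-to-frame Kanade-Lucas-Tomasi (KLT) tracking described in Section 6, using OpenCV in Python. Since the authors' implementation relies on the MATLAB Computer Vision Toolbox, the function choices and parameter values here (corner quality threshold, window sizes) are illustrative assumptions; in particular, `cv2.cornerSubPix` stands in for the quadratic interpolation of [39].

```python
import cv2
import numpy as np

def detect_corner_features(gray_roi):
    """Steps 1-4: Otsu threshold, noise removal, Sobel edges, refined ROI,
    and Harris-Stephens corners with sub-pixel refinement (sketch)."""
    # Step 1: Otsu's threshold (white pattern pixels -> foreground), then
    # remove connected components with fewer than 10 pixels.
    _, binary = cv2.threshold(gray_roi, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < 10:
            binary[labels == i] = 0

    # Step 2: pixel-level edge map from the Sobel operator.
    gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)
    ys, xs = np.nonzero(np.hypot(gx, gy) > 0)

    # Step 3: bounding box of all edge points, scaled by 1.1 about its
    # approximate center, defines the refined ROI.
    cx, cy = xs.mean(), ys.mean()
    x0 = max(int(cx + 1.1 * (xs.min() - cx)), 0)
    x1 = min(int(np.ceil(cx + 1.1 * (xs.max() - cx))), gray_roi.shape[1])
    y0 = max(int(cy + 1.1 * (ys.min() - cy)), 0)
    y1 = min(int(np.ceil(cy + 1.1 * (ys.max() - cy))), gray_roi.shape[0])
    refined = gray_roi[y0:y1, x0:x1]

    # Step 4: Harris-Stephens corners; cornerSubPix approximates the
    # quadratic-interpolation sub-pixel refinement of [39].
    pts = cv2.goodFeaturesToTrack(refined, maxCorners=10, qualityLevel=0.05,
                                  minDistance=3, useHarrisDetector=True, k=0.04)
    if pts is None:
        return None
    pts = cv2.cornerSubPix(
        refined, np.float32(pts), (3, 3), (-1, -1),
        (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return pts + np.float32([x0, y0])  # back to initial-ROI coordinates

def track_corner_features(prev_gray, next_gray, prev_pts):
    """KLT step: sub-pixel corner coordinates from the previous frame seed
    the tracker in the next frame, avoiding per-frame re-detection."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, np.float32(prev_pts), None,
        winSize=(21, 21), maxLevel=3)
    return next_pts[status.ravel() == 1]
```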
3.2. Hough Transform-Based Algorithm for Concentric Circular Pattern Targets
- Step-1: Canny edge detection [36] is performed on the grayscale initial ROI image to obtain sub-pixel coordinates of the circle edge points.
- Step-2: The Hough Transform is performed on the ROI grayscale image. Because each target contains three circles, a heuristic clustering algorithm is applied to cluster all circle parameters (a, b, r) into three clusters based on the radius value r. Each cluster center represents the center point and radius of one circle.
- Step-3: The geometric distance between the center point of the inner circle and the mean of the center points of the other two circles is computed. If this distance exceeds one pixel, the center point of the inner circle is suppressed in the calculation of the target geometric center. This step guards against inaccurate detection of small-diameter circles: unstable detections of the inner circle's center point were observed during implementation. The diameter of the inner circle is 6 cm, corresponding to only five to six pixels in the video frame. For circles with a small radius, the number of edge points on the circumference is relatively small; since the Hough Transform accumulates parameter votes from edge points, fewer edge points yield fewer votes in the accumulator. The inner circle is therefore more difficult to detect and localize than the other two circles.
- Step-4: The geometric center of the target (u, v) is computed as the mean of the center points of all unsuppressed circles. An example of circle detection and target center extraction for a Type 4 target is shown in Figure 6.
- Step-5: Steps 1 through 4 are repeated for all video frames. In each frame, the ROI is updated as a 51 × 51 pixel region centered at the target center (u, v) detected in the previous frame. The algorithm finally returns a time series of the geometric center of the target with sub-pixel accuracy. A sketch of one iteration of this loop is given after this list.
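A minimal sketch of one iteration of this detection loop, using OpenCV's Hough circle transform in Python, is shown below. The authors' implementation is in MATLAB, and the Hough parameter values and the simple radius ordering used in place of the full heuristic clustering are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_circle_target_center(gray_roi):
    """Steps 1-4 on one 51 x 51 grayscale ROI: Hough circle detection,
    inner-circle suppression check, and target-center averaging (sketch)."""
    # Steps 1-2: cv2.HOUGH_GRADIENT runs Canny internally; the parameter
    # values here are illustrative, not the authors' settings.
    circles = cv2.HoughCircles(gray_roi, cv2.HOUGH_GRADIENT, dp=1, minDist=2,
                               param1=100, param2=15, minRadius=2, maxRadius=25)
    if circles is None or circles.shape[1] < 3:
        return None
    circles = circles[0]                          # rows of (a, b, r)
    circles = circles[np.argsort(circles[:, 2])]  # order by radius
    # Assume one detection per concentric circle; with more candidates the
    # paper clusters the (a, b, r) parameters into three groups by radius.
    inner, middle, outer = circles[0], circles[-2], circles[-1]

    # Step 3: suppress the inner circle if its center deviates by more than
    # one pixel from the mean center of the two larger circles.
    mean_outer = (middle[:2] + outer[:2]) / 2.0
    if np.linalg.norm(inner[:2] - mean_outer) > 1.0:
        centers = np.vstack([middle[:2], outer[:2]])
    else:
        centers = np.vstack([inner[:2], middle[:2], outer[:2]])

    # Step 4: geometric center (u, v) = mean of unsuppressed circle centers.
    return centers.mean(axis=0)
```

In Step 5, the returned center (u, v) would be used to recenter the 51 × 51 ROI for the next frame.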
4. Three-Dimensional World Point Reconstruction and Structural Displacement Extraction
4.1. Camera Calibration and Lens Distortion Correction
4.2. Camera Pose Recovery
4.3. Three-Dimensional World Point Reconstruction and Structural Displacement Extraction
5. Structural Displacement Extraction Validation
5.1. Ground Truth Measurements and Comparison Plan
5.1.1. Processing Steps for Ground Truth Measurements
- Baseline correction: The raw acceleration time series is baseline corrected using the mean value of the pre-event data (the first 100 data points for each test).
- Zero-padding: The baseline-corrected acceleration data are tapered over the first and last second, then zero-padded for 20 s at both the beginning and the end of the time series.
- Filtering: A fourth-order bandpass Butterworth filter with cutoff frequencies of 0.1 and 50 Hz is applied to remove measurement noise outside those limits.
- Integration: The fourth-order Runge-Kutta method is applied to integrate the filtered acceleration and obtain the velocity. A sketch of this processing chain is given after this list.
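The sketch below strings these four steps together in Python with SciPy, assuming a sampling rate above 100 Hz so the 0.1–50 Hz band is valid. The Hann-ramp taper and other parameter details are assumptions where the text does not specify them.

```python
import numpy as np
from scipy.signal import butter, filtfilt, windows

def acceleration_to_velocity(acc, fs):
    """Sketch of the Section 5.1.1 processing chain (assumed parameters)."""
    # Baseline correction: subtract the mean of the first 100 samples.
    acc = acc - np.mean(acc[:100])

    # Taper the first and last second (a Hann ramp is assumed here),
    # then zero-pad 20 s at both ends.
    n_sec = int(fs)
    ramp = windows.hann(2 * n_sec)
    acc[:n_sec] *= ramp[:n_sec]
    acc[-n_sec:] *= ramp[n_sec:]
    pad = np.zeros(int(20 * fs))
    acc = np.concatenate([pad, acc, pad])

    # Fourth-order Butterworth bandpass, 0.1-50 Hz, applied zero-phase.
    b, a = butter(4, [0.1, 50.0], btype="bandpass", fs=fs)
    acc = filtfilt(b, a, acc)

    # RK4 integration of v' = a(t); with linear interpolation between
    # samples, the midpoint value is the average of adjacent samples.
    dt = 1.0 / fs
    vel = np.zeros_like(acc)
    for i in range(len(acc) - 1):
        a_mid = 0.5 * (acc[i] + acc[i + 1])
        vel[i + 1] = vel[i] + dt / 6.0 * (acc[i] + 4.0 * a_mid + acc[i + 1])
    return vel
```

A second application of the integration step, this time to the velocity, would produce displacement.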
5.1.2. Analog Sensor Plan for Results Comparison
5.2. Structural Displacement Tracking Results and Error Characteristics
6. Optimal Features for Earthquake-Induced Structural Displacement Monitoring
- Overall Tracking Accuracy Evaluation: RMSEs in these analyses are consistently less than 8 mm for the 67% MCER tests and generally less than 10 mm for the 100% MCER tests (only 6 of the 224 total data points for the MCER tests have errors exceeding 10 mm). The proposed structural displacement monitoring method thus achieves overall tracking accuracy at the millimeter level.
- Feature Size: For a given pattern shape (2-tile black-and-white checkerboard), significantly larger RMSEs are observed for the smaller pattern sizes (Type 3M and Type 3S) than for the larger pattern size (Type 3). Smaller reference targets occupy fewer pixels in the video frame; for example, the side length of the black square in the Type 3S pattern is 5 cm, corresponding to only four to five pixels. The intensity contrast between the target region and the background is then significantly reduced, degrading the refined ROI generation based on Otsu's thresholding and edge detection. In addition, as noted in Section 3.1, small pixel dimensions reduce the values in the spatial gradient matrix of the corner points in Harris-Stephens feature detection. Comparing 2-tile black-and-white checkerboard targets of different sizes therefore confirms that smaller targets have larger displacement tracking RMSEs. To investigate the minimum feature (pattern) size required for reasonable displacement tracking under different earthquake intensities, a statistical analysis of the normalized RMSEs with respect to the normalized tile dimensions is conducted for all black-and-white checkerboard targets (Type 1, 2, 3, 3M, and 3S) across all nine tests. For each target in a given test, both the tracking RMSE (in the X/Y-direction) and the tile dimension D (the side length of each black square) are normalized by the peak displacement ΔGT,max (in the X/Y-direction) of the corresponding ground truth measurement. Figure 16 compiles the results, with each data point of normalized RMSE versus normalized tile dimension (D/ΔGT,max) depicted by gray symbols. As shown in Table 2, the overall peak roof displacements for the nine tests are larger in the Y-direction than in the X-direction, with very large Y-direction peaks recorded during the 100% and 110% MCER Northridge earthquake tests (MID 8 and 18); consequently, both the normalized RMSEs and the normalized tile dimensions in the Y-direction are smaller than those in the X-direction. For each direction, the data points are divided into bins of fixed 10% width in normalized tile dimension, indicated by the alternating red and white background in the plot, and the statistics within each bin are presented as a box-and-whisker plot in Figure 16. As the normalized tile dimension (D/ΔGT,max) increases, the normalized RMSEs decrease from 5.78% to 3.56% in the X-direction and from 2.68% to 1.46% in the Y-direction. For both the X- and Y-directions, a relatively stable RMSE is achieved once the normalized tile dimension (D/ΔGT,max) exceeds 50% (i.e., the tile dimension of the target is greater than 50% of the peak displacement during the earthquake). A minimal sketch of this normalization and binning analysis is given after this list.
- Intensity Contrast: In the proposed target detection and tracking algorithm, the colored ROI image is converted into grayscale for convenient thresholding and reduced computational cost. Type 3 (black-and-white) and Type 3R (red-and-white) targets are both 2-tile checkerboard patterns with the same geometric shape and dimension but different colors. Five pairs of contrasting colors are involved in detecting and tracking these two patterns, listed here in descending order of intensity difference in the grayscale image: (1) black and white, (2) red and white, (3) white and background, (4) black and background, and (5) red and background. As discussed in Section 3.1, the two corner points between the red squares and the background are suppressed for the Type 3R pattern in feature point detection and tracking. Displacement tracking for the Type 3R pattern therefore relies largely on the red-to-white and white-to-background contrasts, which have relatively large intensity differences. Tracking for the Type 3 targets, in contrast, relies on the black-to-white, white-to-background, and black-to-background contrasts; the intensity difference between the black region and the background is relatively low, which may degrade feature point localization. This is reflected in the slightly larger RMSE statistics for the Type 3 targets compared to the Type 3R targets.
- Pattern shape: Square checkerboard patterns and concentric circular patterns are included for the structural displacement monitoring of the 6-story mass timber building specimen. The square checkerboard patterns provide orthogonal features with explicit feature points, while the concentric circular pattern provides only circular shapes without any explicit point for tracking. Slightly larger RMSEs are observed for the concentric circular pattern (Type 4) than for square checkerboard patterns with the same geometric dimension and color (Type 1, 2, and 3). Furthermore, additional oscillations are observed in the tracking results from the concentric circular pattern. Figure 17 compares the displacement results among the multiple reference targets at the northeast corner of the roof level (region 2) from the 67% MCER Ferndale earthquake test (MID 12). In the overlay of the displacement time series, the ground truth measurements and the UAV vision-based tracking results from all reference targets in the region align well visually. However, in the zoomed-in view, additional oscillations are clearly visible in the tracking results of the concentric circular pattern. These oscillations occur within low-amplitude regions of the displacement time series (e.g., t = 7–8.5 s in the X-direction and t = 4–5.5 s in the Y-direction for MID 12) and are not observed in the tracking results from the square checkerboard patterns. The difference stems from the algorithms used to detect and track the two feature types (orthogonal versus circular). Orthogonal features (corner points) in the square checkerboard patterns are detected once in the reference frame; thereafter, the sub-pixel coordinates of the feature points in the previous video frame seed the Kanade-Lucas-Tomasi (KLT) feature tracker in the following frame. The KLT algorithm cannot be applied to circular features, since there are no explicit feature points; instead, a Hough Transform is applied to each video frame with a coarsely updated ROI, so any error in the ROI update can propagate into the Hough Transform of the subsequent frame. Therefore, compared to concentric circular patterns, orthogonal-shaped patterns with explicit feature points trackable by the KLT algorithm are observed to reduce error propagation through the video sequence and reduce displacement measurement noise.
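To make the feature-size analysis above concrete, the sketch below reproduces the Figure 16 procedure in outline: tracking RMSEs and tile dimensions D are normalized by the ground-truth peak displacement ΔGT,max and then binned by normalized tile dimension in 10% increments. The function name, inputs, and the quartile summary are illustrative assumptions, not the authors' code.

```python
import numpy as np

def binned_normalized_rmse(rmse, tile_dim, peak_disp, bin_width=0.10):
    """Normalize RMSE and tile dimension D by the peak ground-truth
    displacement, then summarize the RMSEs within 10%-wide bins of the
    normalized tile dimension (as in Figure 16)."""
    rmse = np.asarray(rmse, dtype=float)
    peak = np.asarray(peak_disp, dtype=float)
    norm_rmse = rmse / peak                              # RMSE / ΔGT,max
    norm_dim = np.asarray(tile_dim, dtype=float) / peak  # D / ΔGT,max

    edges = np.arange(0.0, norm_dim.max() + bin_width, bin_width)
    idx = np.digitize(norm_dim, edges)
    summary = {}
    for k in np.unique(idx):
        in_bin = norm_rmse[idx == k]
        # 25th/50th/75th percentiles, as in a box-and-whisker plot.
        summary[edges[k - 1]] = np.percentile(in_bin, [25, 50, 75])
    return summary
```

Applying such a summary across all black-and-white checkerboard targets and tests yields the trend reported above: the normalized RMSE stabilizes once D/ΔGT,max exceeds roughly 50%.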
7. Conclusions
- The proposed UAV vision-based method demonstrates reasonable accuracy in tracking structural displacements during a wide range of earthquake motion inputs, with overall root-mean-square errors (RMSEs) at the millimeter level compared to the ground truth measurements from analog sensors.
- Given a specific earthquake event and input direction, the average displacement tracking RMSEs are proportional to the achieved peak input accelerations (PIAs).
- Based on the statistical analysis of the error characteristics across the various reference target patterns, pattern size, pattern shape, and the intensity contrast within the region of interest are all observed to affect the accuracy of structural displacement monitoring. Orthogonal-shaped patterns (e.g., straight-line intersections or squares) with explicit feature points that can be tracked by the Kanade-Lucas-Tomasi (KLT) algorithm are observed to reduce error propagation through the video sequence and reduce displacement measurement noise.
- Regarding the effect of pattern size on the robustness of structural displacement monitoring, a relatively stable RMSE is observed when the normalized tile dimension (D/ΔGT,max) exceeds 50% for black-and-white checkerboard patterns. The tile dimension of a black-and-white checkerboard pattern is therefore suggested to be greater than 50% of the peak displacement expected during an earthquake to ensure reasonable accuracy in tracking structural displacements.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Wu, H.; Zhong, B.; Li, H.; Love, P.; Pan, X.; Zhao, N. Combining Computer Vision with Semantic Reasoning for On-Site Safety Management in Construction. J. Build. Eng. 2021, 42, 103036.
- Hou, X.; Li, C.; Fang, Q. Computer Vision-Based Safety Risk Computing and Visualization on Construction Sites. Autom. Constr. 2023, 156, 105129.
- Shi, M.; Chen, C.; Xiao, B.; Seo, J. Vision-Based Detection Method for Construction Site Monitoring by Integrating Data Augmentation and Semisupervised Learning. J. Constr. Eng. Manag. 2024, 150, 04024027.
- Zhang, X.; Wogen, B.E.; Liu, X.; Iturburu, L.; Salmeron, M.; Dyke, S.J.; Poston, R.; Ramirez, J.A. Machine-Aided Bridge Deck Crack Condition State Assessment Using Artificial Intelligence. Sensors 2023, 23, 4192.
- Tang, W.; Jahanshahi, M.R. Active Perception Based on Deep Reinforcement Learning for Autonomous Robotic Damage Inspection. Mach. Vis. Appl. 2024, 35, 110.
- Yao, Z.; Jiang, S.; Wang, S.; Wang, J.; Liu, H.; Narazaki, Y.; Cui, J.; Spencer, B.F., Jr. Intelligent Crack Identification Method for High-Rise Buildings Aided by Synthetic Environments. Struct. Des. Tall Spec. Build. 2024, 33, e2117.
- Liu, Z.; Xue, J.; Wang, N.; Bai, W.; Mo, Y. Intelligent Damage Assessment for Post-Earthquake Buildings Using Computer Vision and Augmented Reality. Sustainability 2023, 15, 5591.
- Kustu, T.; Taskin, A. Deep Learning and Stereo Vision Based Detection of Post-Earthquake Fire Geolocation for Smart Cities within the Scope of Disaster Management: İstanbul Case. Int. J. Disaster Risk Reduct. 2023, 96, 103906.
- Cheng, M.-Y.; Sholeh, M.N.; Kwek, A. Computer Vision-Based Post-Earthquake Inspections for Building Safety Assessment. J. Build. Eng. 2024, 94, 109909.
- Cheng, C.; Kawaguchi, K. A Preliminary Study on the Response of Steel Structures Using Surveillance Camera Image with Vision-Based Method during the Great East Japan Earthquake. Measurement 2015, 62, 142–148.
- Hutchinson, T.C.; Kuester, F. Monitoring Global Earthquake-Induced Demands Using Vision-Based Sensors. IEEE Trans. Instrum. Meas. 2004, 53, 31–36.
- Wang, X.; Wittich, C.E.; Hutchinson, T.C.; Bock, Y.; Goldberg, D.; Lo, E.; Kuester, F. Methodology and Validation of UAV-Based Video Analysis Approach for Tracking Earthquake-Induced Building Displacements. J. Comput. Civ. Eng. 2020, 34, 04020045.
- Wang, X.; Lo, E.; De Vivo, L.; Hutchinson, T.C.; Kuester, F. Monitoring the Earthquake Response of Full-Scale Structures Using UAV Vision-Based Techniques. Struct. Control Health Monit. 2022, 29, e2862.
- Cao, P.; Ji, R.; Ma, Z.; Sorosh, S.; Lo, E.; Norton, T.; Driscoll, J.; Wang, X.; Hutchinson, T.; Pei, S. UAV-Based Video Analysis and Semantic Segmentation for SHM of Earthquake-Excited Structures. In Proceedings of the 18th World Conference on Earthquake Engineering, Milan, Italy, 30 June–5 July 2024.
- Weng, Y.; Shan, J.; Lu, Z.; Lu, X.; Spencer, B.F., Jr. Homography-Based Structural Displacement Measurement for Large Structures Using Unmanned Aerial Vehicles. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 1114–1128.
- Shan, J.; Huang, P.; Loong, C.N.; Liu, M. Rapid Full-Field Deformation Measurements of Tall Buildings Using UAV Videos and Deep Learning. Eng. Struct. 2024, 305, 117741.
- Perry, B.J.; Guo, Y. A Portable Three-Component Displacement Measurement Technique Using an Unmanned Aerial Vehicle (UAV) and Computer Vision: A Proof of Concept. Measurement 2021, 176, 109222.
- Ribeiro, D.; Santos, R.; Cabral, R.; Saramago, G.; Montenegro, P.; Carvalho, H.; Correia, J.; Calçada, R. Non-Contact Structural Displacement Measurement Using Unmanned Aerial Vehicles and Video-Based Systems. Mech. Syst. Signal Process. 2021, 160, 107869.
- Zhang, C.; Lu, Z.; Li, X.; Zhang, Y.; Guo, X. A Two-Stage Correction Method for UAV Movement-Induced Errors in Non-Target Computer Vision-Based Displacement Measurement. Mech. Syst. Signal Process. 2025, 224, 112131.
- Khuc, T.; Nguyen, T.A.; Dao, H.; Catbas, F.N. Swaying Displacement Measurement for Structural Monitoring Using Computer Vision and an Unmanned Aerial Vehicle. Measurement 2020, 159, 107769.
- Fukuda, Y.; Feng, M.Q.; Shinozuka, M. Cost-Effective Vision-Based System for Monitoring Dynamic Response of Civil Engineering Structures. Struct. Control Health Monit. 2010, 17, 918–936.
- Han, Y.; Wu, G.; Feng, D. Vision-Based Displacement Measurement Using an Unmanned Aerial Vehicle. Struct. Control Health Monit. 2022, 29, e3025.
- Van Den Einde, L.; Conte, J.P.; Restrepo, J.I.; Bustamante, R.; Halvorson, M.; Hutchinson, T.C.; Lai, C.-T.; Lotfizadeh, K.; Luco, J.E.; Morrison, M.L.; et al. NHERI@UC San Diego 6-DOF Large High-Performance Outdoor Shake Table Facility. Front. Built Environ. 2021, 6, 580333.
- Barbosa, A. NHERI Converging Design Project: Overview of 6-Story Shake Table Test Program. In Proceedings of the 2024 EERI Annual Meeting, Seattle, WA, USA, 9–12 April 2024.
- McBain, M.; Pieroni, L.; Araujo, R.; Simpson, B.G.; Barbosa, A. Full-Scale Shake Table Testing of a Six-Story Mass Timber Building with Post-Tensioned Rocking Walls and Buckling-Restrained Boundary Elements. J. Struct. Eng. 2025, to be submitted.
- Barbosa, A.; Simpson, B.; van de Lindt, J.; Sinha, A.; Field, T.; McBain, M.; Uarac, P.; Kontra, S.; Mishra, P.; Gioiella, L.; et al. Shake Table Testing Program for Mass Timber and Hybrid Resilient Structures Datasets for the NHERI Converging Design Project. In Shake Table Testing Program of 6-Story Mass Timber and Hybrid Resilient Structures (NHERI Converging Design Project); DesignSafe-CI, 2025. Available online: https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-5736/#detail-86b00b74-105f-4c13-a63f-594b32c52444 (accessed on 9 January 2025).
- Pei, S.; Ryan, K.L.; Berman, J.W.; van de Lindt, J.W.; Pryor, S.; Huang, D.; Wichman, S.; Busch, A.; Roser, W.; Wynn, S.L.; et al. Shake-Table Testing of a Full-Scale 10-Story Resilient Mass Timber Building. J. Struct. Eng. 2024, 150, 04024183.
- Sorosh, S.; Hutchinson, T.C.; Ryan, K.L.; Smith, K.W.; Kovac, A.; Zabet, S.; Pei, S. Experimental Characterization of a Full-Scale Stair System Detailed to Achieve Seismic Resiliency. Earthq. Eng. Struct. Dyn. 2025, submitted.
- American Society of Civil Engineers. Minimum Design Loads and Associated Criteria for Buildings and Other Structures; American Society of Civil Engineers: Reston, VA, USA, 2021; ISBN 978-0-7844-1578-8.
- Ewins, D.J. Modal Testing: Theory, Practice and Application; John Wiley & Sons: Hoboken, NJ, USA, 2009.
- DJI. Available online: https://www.dji.com (accessed on 19 September 2024).
- Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the IJCAI’81: 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; Volume 2, pp. 674–679.
- Shi, J.; Tomasi, C. Good Features to Track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. Automatica 1975, 11, 23–27.
- Kanopoulos, N.; Vasanthavada, N.; Baker, R.L. Design of an Image Edge Detection Filter Using the Sobel Operator. IEEE J. Solid-State Circuits 1988, 23, 358–367.
- Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
- Trujillo-Pino, A.; Krissian, K.; Alemán-Flores, M.; Santana-Cedrés, D. Accurate Subpixel Edge Location Based on Partial Area Effect. Image Vis. Comput. 2013, 31, 72–90.
- Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference 1988, Manchester, UK, 31 August–2 September 1988; Alvey Vision Club: Manchester, UK, 1988; pp. 147–151.
- Zhang, Z.; Lu, H.; Li, X.; Li, W.; Yuan, W. Application of Improved Harris Algorithm in Sub-Pixel Feature Point Extraction. Int. J. Comput. Electr. Eng. 2014, 6, 101–104.
- Chen, X.; Lu, L.; Gao, Y. A New Concentric Circle Detection Method Based on Hough Transform. In Proceedings of the 2012 7th International Conference on Computer Science & Education (ICCSE), Melbourne, VIC, Australia, 14–17 July 2012; pp. 753–758.
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2003; ISBN 978-0-521-54051-3.
- Bouguet, J.Y. Camera Calibration Toolbox for MATLAB. 2015. Available online: https://data.caltech.edu/records/jx9cx-fdh55 (accessed on 19 September 2024).
- Agisoft Metashape User Manual—Professional Edition, Version 2.1; 2024. Available online: https://www.agisoft.com/downloads/user-manuals/ (accessed on 19 September 2024).
- Fraser, C.S. Digital Camera Self-Calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159.
- Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ Photogrammetry: A Low-Cost, Effective Tool for Geoscience Applications. Geomorphology 2012, 179, 300–314.
- Zheng, Y.; Sugimoto, S.; Okutomi, M. ASPnP: An Accurate and Scalable Solution to the Perspective-n-Point Problem. IEICE Trans. Inf. Syst. 2013, E96.D, 1525–1535.
- Colomina, I.; Molina, P. Unmanned Aerial Systems for Photogrammetry and Remote Sensing: A Review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97.
- De Fino, M.; Galantucci, R.A.; Fatiguso, F. Condition Assessment of Heritage Buildings via Photogrammetry: A Scoping Review from the Perspective of Decision Makers. Heritage 2023, 6, 7031–7066.
- Hesch, J.A.; Roumeliotis, S.I. A Direct Least-Squares (DLS) Method for PnP. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 383–390.
- Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An Accurate O(n) Solution to the PnP Problem. Int. J. Comput. Vis. 2009, 81, 155–166.
- Schonberger, J.L.; Frahm, J.-M. Structure-From-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
- Zach, C. Robust Bundle Adjustment Revisited. In Proceedings of the 13th European Conference on Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; pp. 772–787.
- Sutton, M.A.; Yan, J.H.; Tiwari, V.; Schreier, H.W.; Orteu, J.J. The Effect of Out-of-Plane Motion on 2D and 3D Digital Image Correlation Measurements. Opt. Lasers Eng. 2008, 46, 746–757.
- MathWorks. Computer Vision Toolbox (R2023b); MathWorks Inc.: Natick, MA, USA, 2023. Available online: https://www.mathworks.com/products/computer-vision.html (accessed on 19 September 2024).
| Earthquake | MID ¹ | Input Direction | Intensity Level | PIA X [g] | PIA Y [g] | PIA Z [g] |
|---|---|---|---|---|---|---|
| 1994 Northridge, USA (Crustal ²), Station: Sun Valley—Roscoe Blvd | 7 | XYZ | 67% MCER ³ | 0.38 | 0.52 | 0.45 |
| | 8 | XYZ | 100% MCER | 0.59 | 0.83 | 0.73 |
| | 18 | XYZ | 110% MCER | 0.65 | 0.92 | 0.75 |
| 2010 Ferndale, USA (Intraslab), Station: 89486 | 12 | XYZ | 67% MCER | 0.38 | 0.41 | 0.75 |
| | 15 | XYZ | 100% MCER | 0.59 | 0.63 | 1.16 |
| 2010 Maule, Chile (Interface), Station: CSCH | 13 | XY | 67% MCER | 0.42 | 0.34 | 0.02 |
| | 16 | XY | 100% MCER | 0.64 | 0.52 | 0.03 |
| 2004 Niigata, Japan (Crustal), Station: NIGH11 | 14 | XYZ | 68.9% MCER | 0.58 | 0.40 | 0.30 |
| | 17 | XYZ | 100% MCER | 0.79 | 0.55 | 0.45 |
| MID | Intensity Level | Peak Roof Acceleration ¹,², X [g] | Peak Roof Acceleration ¹,², Y [g] | Peak Roof Displacement ², X [cm] | Peak Roof Displacement ², Y [cm] |
|---|---|---|---|---|---|
| 7 | 67% MCER | 0.64 | 0.89 | 13.94 | 22.36 |
| 8 | 100% MCER | 0.89 | 1.10 | 19.72 | 33.71 |
| 12 | 67% MCER | 0.74 | 0.83 | 20.84 | 15.99 |
| 13 | 67% MCER | 0.58 | 0.85 | 9.26 | 17.14 |
| 14 | 68.9% MCER | 0.77 | 1.07 | 19.05 | 15.99 |
| 15 | 100% MCER | 0.91 | 1.11 | 29.16 | 24.19 |
| 16 | 100% MCER | 0.72 | 1.16 | 11.61 | 22.64 |
| 17 | 100% MCER | 1.01 | 0.99 | 28.16 | 24.46 |
| 18 | 110% MCER | 0.91 | 1.32 | 24.46 | 36.28 |
Camera View | UAV Platform | Unfolded Size [L × W × H, mm] | Battery Life [min] | Frame Rate [fps] | Resolution
---|---|---|---|---|---|
Plan view (XY view) | DJI Matrice 300 UAV with DJI Zenmuse P1 Camera | 810 × 670 × 430 | 55 | 59.94 | 3840 × 2160 |
East view (YZ view) | DJI Mavic 2 Pro UAV | 322 × 242 × 84 | 31 | 29.97 | 3840 × 2160 |
North view (XZ view) | DJI Mavic 3 Enterprise UAV | 348 × 283 × 108 | 45 | 29.97 | 3840 × 2160 |
| Reference Target | Target Pattern Type | Target Dimension [cm × cm] | Number of Targets |
|---|---|---|---|
| Stationary Targets (10 targets) | Type A ¹ | 45.7 × 45.7 | 3 |
| | Type B | 45.7 × 45.7 | 5 |
| | Type C | 20 × 20 | 2 |
| Moving Targets (28 targets) | Type 1 | 20 × 20 | 4 |
| | Type 2 | 20 × 20 | 5 |
| | Type 3 | 20 × 20 | 7 |
| | Type 3R | 20 × 20 | 4 |
| | Type 3M | 15 × 15 | 2 |
| | Type 3S | 10 × 10 | 2 |
| | Type 4 | 20 × 20 | 4 |
Analog sensors assumed as ground truth for each test:

| Region Number | MID 7 | MID 8 | MID 12 | MID 13 | MID 14 | MID 15 | MID 16 | MID 17 | MID 18 |
|---|---|---|---|---|---|---|---|---|---|
| ① | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y | C048: X, C050: Y |
| ② | 121: X, 122: Y | S433: X, 122: Y | 121: X, 122: Y | 121: X, 122: Y | 121: X, 122: Y | S433: X, 122: Y | 121: X, 122: Y | 121: X, 122: Y | S433: X, 122: Y |
| ③ | 702: X, S436: Y | 702: X, S436: Y | 702: X, S436: Y | 702: X, S436: Y | S433: X, S436: Y | S433: X, S436: Y | S433: X, S436: Y | S433: X, S436: Y | S433: X, S436: Y |
| ④ | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y | C047: X, 127: Y |
| ⑤ | 123: X, 124: Y | S433: X, 124: Y | 123: X, 124: Y | 123: X, 124: Y | 123: X, 124: Y | S433: X, 124: Y | 123: X, 124: Y | 123: X, 124: Y | 123: X, 124: Y |