An Accurate and Convenient Method of Vehicle Spatiotemporal Distribution Recognition Based on Computer Vision
Abstract
1. Introduction
2. Framework of the Proposed Method
3. Vehicle Detection Based on YOLO-v5
3.1. YOLO-v5 Model
3.2. Model Training and Performance Analysis
4. Vehicle Spatiotemporal Distribution Recognition
4.1. Corner Points Marking
4.2. Determination of the Relative Position of the Camera and Vehicle
4.3. Identification of Vehicle Spatial Information Using Pose Estimation
4.4. Summary of the Proposed Method
5. Verification by Lab-Scale Tests
5.1. Verification of the Accuracy of the Recognized Vehicle Spatiotemporal Information
5.2. Investigation of Robustness to Camera Disturbance
6. Verification by Field Measurement
7. Conclusions and Discussion
- The proposed method identifies the spatiotemporal information of passing vehicles with high accuracy. Its accuracy does not depend on the camera location, so the camera can be installed wherever convenient.
- No prior information about the road is needed before measurement, so markings on the road can be eliminated. Hence, the spatiotemporal information of vehicles travelling along curved paths can also be detected.
- The method is robust to camera disturbance: it works as long as the camera has been calibrated and its intrinsic parameter matrix and distortion coefficients are known, so it is suitable for long-term monitoring (a minimal pose-estimation sketch in this spirit follows this list).
- The method also has limitations, such as high sensitivity to pixel coordinates: when the vehicle is far from the camera, the estimate becomes less accurate. The results are also affected by poor environmental conditions such as extremely weak or strong light.
- Admittedly, the method cannot identify the spatiotemporal information of vehicles fully automatically, since the marked points must be selected manually. Multiple vehicles in the same field of view make little difference to the recognition principle or its effectiveness, but they do increase the workload. How to update the algorithm so that it identifies vehicle spatiotemporal information fully automatically should be investigated in the future.
- Using a binocular vision system to reduce the influence of pixel resolution, and locating vehicle marking points more accurately with deep learning, should also be explored so that vehicle spatiotemporal information can be obtained more conveniently. In addition, implementing the algorithm on embedded devices would enable long-term monitoring.
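As a concrete illustration of the calibration-dependent pose-estimation step referenced above, the following is a minimal sketch using OpenCV's `solvePnP`. The intrinsic matrix, distortion coefficients, and corner coordinates below are all placeholder values for illustration, not numbers from the paper.

```python
import cv2
import numpy as np

# Intrinsic matrix and distortion coefficients from a one-off camera
# calibration (e.g., cv2.calibrateCamera); placeholder values.
K = np.array([[1200.0,    0.0, 960.0],
              [   0.0, 1200.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Four marked corner points in a vehicle-fixed frame (metres) and their
# manually selected pixel coordinates in the current frame (illustrative).
object_pts = np.array([[0.0, 0.0, 0.0],
                       [1.8, 0.0, 0.0],
                       [1.8, 4.5, 0.0],
                       [0.0, 4.5, 0.0]], dtype=np.float64)
image_pts = np.array([[1050.0, 1400.0],
                      [1490.0, 1360.0],
                      [1620.0,  820.0],
                      [1180.0,  850.0]], dtype=np.float64)

# Solve the perspective-n-point problem for the relative pose.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)        # rotation: vehicle frame -> camera frame
camera_in_vehicle = -R.T @ tvec   # camera position in the vehicle frame
print(camera_in_vehicle.ravel())
```

Because the translation is recovered metrically from the calibrated intrinsics and the known marker geometry alone, the camera's mounting position is unconstrained, which is the property the first and third bullets rely on.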
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Trott, J.J.; Grainger, J.W. Design of a dynamic weighbridge for recording vehicle wheel loads. Weigh. Devices 1968, 219, 01411273. [Google Scholar]
- Lee, C.E. A portable electronic scale for weighing vehicles in motion. Highw. Res. Rec. 1966, 127, 22–33. [Google Scholar]
- Caprani, C.C.; Obrien, E.J.; McLachlan, G.J. Characteristic traffic load effects from a mixture of loading events on short to medium span bridges. Struct. Saf. 2008, 30, 394–404. [Google Scholar] [CrossRef]
- O’Connor, A.; O’Brien, E.J. Traffic load modelling and factors influencing the accuracy of predicted extremes. Can. J. Civ. Eng. 2005, 32, 270–278. [Google Scholar] [CrossRef]
- Obrien, E.J.; Enright, B. Using Weigh-in-Motion Data to Determine Aggressiveness of Traffic for Bridge Loading. J. Bridge Eng. 2013, 18, 232–239. [Google Scholar] [CrossRef]
- O’Connor, C. Wheel loads from bridge strains: Laboratory studies. J. Struct. Eng.-ASCE 1988, 144, 1724–1740. [Google Scholar] [CrossRef]
- Yu, Y.; Cai, C.S.; Deng, L. State-of-the-art review on bridge weigh-in-motion technology. Adv. Struct. Eng. 2016, 19, 1514–1530. [Google Scholar] [CrossRef]
- Yuan, Y.G.; Han, W.H.; Li, G.L.; Xie, Q.; Guo, Q. Time-dependent reliability assessment of existing concrete bridges including non-stationary vehicle load and resistance processes. Eng. Struct. 2019, 197, 109426. [Google Scholar] [CrossRef]
- Yu, Y.; Cai, C.S.; Deng, L. Nothing-on-road bridge weigh-in-motion considering the transverse position of the vehicle. Struct. Infrastruct. Eng. 2018, 14, 1108–1122. [Google Scholar] [CrossRef]
- Chan, T.H.T.; Law, S.S.; Yung, T.H.; Yuan, X.R. An interpretive method for moving force identification. J. Sound Vib. 1999, 219, 503–524. [Google Scholar] [CrossRef]
- Law, S.S.; Chan, T.H.T.; Zeng, Q.H. Moving force identification: A time domain method. J. Sound Vib. 1997, 201, 1–22. [Google Scholar] [CrossRef]
- Law, S.S.; Chan, T.H.T. Moving force identification—A frequency and time domains analysis. J. Dyn. Syst. Meas. Control. 1999, 121, 394–401. [Google Scholar] [CrossRef]
- Lin, M.; Yoon, J.; Kim, B. Self-driving car location estimation based on a particle-aided unscented Kalman filter. Sensors 2020, 20, 2544. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Simeonova, S.; Shahbazi, M. Orientation- and scale-invariant multi-vehicle detection and tracking from unmanned aerial videos. Remote Sens. 2019, 11, 2155. [Google Scholar] [CrossRef]
- Balamuralidhar, N.; Tilon, S.; Nex, F. MultEYE: Monitoring system for real-time vehicle detection, tracking and speed estimation from UAV Imagery on edge-computing platforms. Remote Sens. 2021, 13, 573. [Google Scholar] [CrossRef]
- Ojio, T.; Carey, C.H.; Obrien, E.J.; Doherty, C.; Taylor, S.E. Contactless Bridge Weigh-in-Motion. J. Bridge Eng. 2015, 21. [Google Scholar] [CrossRef]
- Feng, M.Q.; Leung, R.Y.; Eckersley, C.M. Non-Contact vehicle Weigh-in-Motion using computer vision. Measurement 2020, 153, 107415. [Google Scholar] [CrossRef]
- Brown, R.; Wicks, A. Vehicle Tracking for Bridge Load Dynamics Using Vision Techniques. In Proceedings of the 34th IMAC Conference and Exposition on Structural Dynamics, Orlando, FL, USA, 25–28 January 2016; pp. 83–90. [Google Scholar]
- Lipton, A.J.; Fujiyoshi, H.; Patil, R.S. Moving target classification and tracking from real-time video. In Proceedings of the 4th IEEE Workshop on Applications of Computer Vision (WACV 98), Princeton, NJ, USA, 19–21 October 1998; pp. 8–14. [Google Scholar]
- Dan, D.H.; Ge, L.F.; Yan, X.F. Identification of moving loads based on the information fusion of weigh-in-motion system and multiple camera machine vision. Measurement 2019, 144, 155–166. [Google Scholar] [CrossRef]
- Liu, H.; Li, Q. Vehicle detection in low-altitude aircraft video. Geomat. Inf. Sci. Wuhan Univ. 2011, 36, 316–320. [Google Scholar]
- Chen, Z.C.; Li, H.; Bao, Y.Q.; Li, N.; Jin, Y. Identification of spatiotemporal distribution of vehicle loads on long-span bridges using computer vision technology. Struct. Control Health Monit. 2016, 23, 517–534. [Google Scholar] [CrossRef]
- Wen, Z.; Wang, Y.; Kuijper, A.; Di, N.; Luo, J.; Zhang, L.; Jin, M. On-orbit real-time robust cooperative target identification in complex background. Chin. J. Aeronaut. 2015, 28, 1451–1463. [Google Scholar] [CrossRef]
- Wen, Z.; Wang, Y.; Luo, J.; Kuijper, A.; Di, N.; Jin, M. Robust, fast and accurate vision-based localization of a cooperative target used for space robotic arm. Acta Astronaut. 2017, 136, 101–114. [Google Scholar] [CrossRef]
- Cao, Y.T.; Wang, G.; Yan, D.M.; Zhao, Z.M. Two algorithms for the detection and tracking of moving vehicle targets in aerial infrared image sequences. Remote Sens. 2016, 8, 28. [Google Scholar] [CrossRef]
- Jeong, H.Y.; Nguyen, H.H.; Bhawiyuga, A. Spatiotemporal local-remote sensor fusion (ST-LRSF) for cooperative vehicle positioning. Sensors 2018, 18, 1092. [Google Scholar] [CrossRef]
- Liu, K.Q.; Wang, J.Q. Fast dynamic vehicle detection in road scenarios based on pose estimation with Convex-Hull model. Sensors 2019, 19, 3136. [Google Scholar] [CrossRef]
- Lopez-Sastre, R.J.; Herranz-Perdiguero, C.; Guerrero-Gomez-Olmedo, R.; Onoro-Rubio, D.; Maldonado-Bascon, S. Boosting multi-vehicle tracking with a joint object detection and viewpoint estimation sensor. Sensors 2019, 19, 4062. [Google Scholar] [CrossRef] [PubMed]
- Tang, X.Y.; Song, H.S.; Wang, W.; Yang, Y.N. Vehicle spatial distribution and 3D trajectory extraction algorithm in a cross-camera traffic scene. Sensors 2020, 20, 6517. [Google Scholar] [CrossRef]
- Zhang, B.; Zhou, L.M.; Zhang, J. A methodology for obtaining spatiotemporal information of the vehicles on bridges based on computer vision. Comput.-Aided Civ. Infrastruct. Eng. 2019, 34, 471–487. [Google Scholar] [CrossRef]
- Zhou, Y.; Pei, Y.L.; Li, Z.W.; Fang, L.; Zhao, Y.; Yi, W.J. Vehicle weight identification system for spatiotemporal load distribution on bridges based on non-contact machine vision technology and deep learning algorithms. Measurement 2020, 159, 107801. [Google Scholar] [CrossRef]
- Gomaa, A.; Abdelwahab, M.M.; Abo-Zahhad, M.; Minematsu, T.; Taniguchi, R. Robust vehicle detection and counting algorithm employing a convolution neural network and optical flow. Sensors 2019, 19, 4588. [Google Scholar] [CrossRef]
- Jian, X.D.; Xia, Y.; Lozano-Galant, J.A.; Sun, L.M. Traffic Sensing Methodology Combining Influence Line Theory and Computer Vision Techniques for Girder Bridges. J. Sens. 2019, 2019, 3409525. [Google Scholar] [CrossRef]
- Xia, Y.; Jian, X.D.; Yan, B.; Su, D. Infrastructure Safety Oriented Traffic Load Monitoring Using Multi-Sensor and Single Camera for Short and Medium Span Bridges. Remote Sens. 2019, 11, 2651. [Google Scholar] [CrossRef]
- Ge, L.F.; Dan, D.H.; Li, H. An accurate and robust monitoring method of full-bridge traffic load distribution based on YOLO-v3 machine vision. Struct. Control Health Monit. 2020, 27, e2636. [Google Scholar] [CrossRef]
- Zhu, J.S.; Li, X.T.; Zhang, C.; Shi, T. An accurate approach for obtaining spatiotemporal information of vehicle loads on bridges based on 3D bounding box reconstruction with computer vision. Measurement 2021, 181, 109657. [Google Scholar] [CrossRef]
- Slabaugh, G.G. Computing Euler angles from a rotation matrix. Retrieved August 1999, 6, 39–63. [Google Scholar]
u_A | v_A | u_B | v_B | u_C | v_C | u_D | v_D | X | Y | Error (x)
---|---|---|---|---|---|---|---|---|---|---
1142 | 1401 | 1484 | 1201 | 2450 | 2779 | 2821 | 2423 | 401.35 | 155.94 | 0.34% |
975 | 1241 | 1323 | 1056 | 2163 | 2514 | 2531 | 2196 | 450.81 | 155.91 | 0.18% |
818 | 1077 | 1165 | 903 | 1921 | 2251 | 2293 | 1961 | 502.68 | 155.45 | 0.54% |
656 | 921 | 1004 | 752 | 1694 | 2004 | 2069 | 1739 | 550.43 | 155.11 | 0.08% |
514 | 787 | 859 | 621 | 1471 | 1776 | 1857 | 1547 | 601.73 | 156.00 | 0.29% |
397 | 647 | 730 | 499 | 1283 | 1576 | 1657 | 1357 | 648.08 | 155.65 | 0.30% |
266 | 508 | 591 | 362 | 1101 | 1379 | 1454 | 1176 | 694.36 | 155.07 | 0.81% |
144 | 389 | 469 | 253 | 927 | 1205 | 1273 | 1016 | 744.76 | 155.39 | 0.70% |
42 | 274 | 361 | 139 | 776 | 1039 | 1120 | 871 | 791.45 | 155.36 | 1.07% |
u_A | v_A | u_B | v_B | u_C | v_C | u_D | v_D | X | Y | Error (x)
---|---|---|---|---|---|---|---|---|---|---
1862 | 1201 | 2201 | 1280 | 1225 | 2545 | 1659 | 2681 | 413.94 | −186.97 | 3.49% |
1931 | 1037 | 2269 | 1103 | 1352 | 2267 | 1766 | 2391 | 463.14 | −194.12 | 2.92% |
2010 | 878 | 2330 | 942 | 1475 | 2008 | 1873 | 2116 | 513.6 | −193.21 | 2.72% |
2077 | 726 | 2389 | 788 | 1587 | 1774 | 1967 | 1873 | 560.2 | −188.45 | 1.85% |
2145 | 583 | 2446 | 637 | 1688 | 1558 | 2056 | 1647 | 607.31 | −193.19 | 1.22% |
2206 | 448 | 2498 | 501 | 1780 | 1368 | 2133 | 1447 | 650.31 | −194.18 | 0.05% |
2257 | 325 | 2544 | 373 | 1860 | 1184 | 2202 | 1260 | 697.75 | −190.90 | 0.32% |
2312 | 206 | 2592 | 253 | 1941 | 1016 | 2270 | 1084 | 744.96 | −191.64 | 0.67% |
2363 | 96 | 2636 | 140 | 2018 | 855 | 2338 | 913 | 796.74 | −193.05 | 0.41% |
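The Error (x) columns in the two tables above are consistent with ground-truth x positions spaced at 50-unit increments from 400 to 800 (units as tabulated; for a lab-scale test these are plausibly millimetres, though the extract does not say), i.e., Error (x) = |X − X_true| / X_true. A quick check against the first table's recognized X values:

```python
# Recognized X values from the first table; the tabulated Error (x)
# is reproduced by comparing against targets at 400, 450, ..., 800
# (apparently the ground-truth positions of the lab-scale test).
recognized_x = [401.35, 450.81, 502.68, 550.43, 601.73,
                648.08, 694.36, 744.76, 791.45]
for x, x_true in zip(recognized_x, range(400, 801, 50)):
    print(f"x_true={x_true}: error={abs(x - x_true) / x_true:.2%}")
```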
Point | u (pixel) | v (pixel) | x (m) | y (m)
---|---|---|---|---
P1 | 442 | 674 | 0 | 0 |
P2 | 1233 | 684 | 10.5 | 0 |
P3 | 534 | 280 | 0 | 20 |
P4 | 900 | 281 | 10.5 | 20 |
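The reference points P1–P4 above supply exactly the four pixel-to-road correspondences needed to fix a planar homography. The paper's own pipeline recovers vehicle positions through pose estimation rather than this route, but as a sketch of how such correspondences map image pixels to road coordinates under a flat-road assumption:

```python
import cv2
import numpy as np

# Pixel and road-plane coordinates of P1-P4, from the table above.
pixel_pts = np.float32([[442, 674], [1233, 684], [534, 280], [900, 281]])
road_pts  = np.float32([[0, 0], [10.5, 0], [0, 20], [10.5, 20]])

# 3x3 homography mapping pixels to road-plane metres (flat-road assumption).
H = cv2.getPerspectiveTransform(pixel_pts, road_pts)

# Map a hypothetical pixel, e.g. a detected vehicle's ground contact point.
uv = np.float32([[[800.0, 500.0]]])
xy = cv2.perspectiveTransform(uv, H)
print(xy.ravel())  # (x, y) on the road plane, in metres
```

`cv2.findHomography` would serve equally well and also accepts more than four correspondences, fitting the mapping in a least-squares sense.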
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).