3D Reconstruction of Asphalt Pavement Macro-Texture Based on Convolutional Neural Network and Monocular Image Depth Estimation
Abstract
1. Introduction
2. Methods
2.1. Data Set
1. Four NMASs (4.75, 9.5, 12.5, and 16 mm) were selected for the AC mixtures (AC-5, AC-10, AC-13, and AC-16).
2. Three NMASs (9.5, 12.5, and 16 mm; all except 4.75 mm) were used for the SMA and OGFC mixtures (SMA-10, SMA-13, SMA-16, OGFC-10, OGFC-13, and OGFC-16).
3. Four NMASs (9.5, 12.5, 16, and 19 mm) were selected for the PA mixtures (PA-10, PA-13, PA-16, and PA-20).
2.1.1. Acquisition of Asphalt Macro-Texture Data Set
2.1.2. Point Cloud Data Preprocessing
2.1.3. Image Augmentation
2.2. Introduction of Network Architecture
2.2.1. Introduction of CNN and CNN-Based Depth Estimation
(1) PReLU function
(2) RReLU function
(3) ELU function
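For reference, the three activation functions listed above can be written directly from their defining formulas. A minimal sketch follows; the slope and α values are common defaults, not the parameters trained in the paper, and scalar (non-vectorized) forms are used for clarity:

```python
import math
import random

# Formula-level sketches of the activations named above; slope/alpha values
# are illustrative defaults, not the paper's trained parameters.

def prelu(x, a=0.25):
    """PReLU: identity for x >= 0, learnable slope a for x < 0."""
    return x if x >= 0 else a * x

def rrelu(x, lower=1 / 8, upper=1 / 3):
    """RReLU: like PReLU, but the negative slope is drawn uniformly from
    [lower, upper] during training (fixed to the interval mean at test time)."""
    a = random.uniform(lower, upper)
    return x if x >= 0 else a * x

def elu(x, alpha=1.0):
    """ELU: identity for x >= 0, smooth saturation alpha*(exp(x)-1) for x < 0."""
    return x if x >= 0 else alpha * (math.exp(x) - 1.0)

print(prelu(-2.0))  # -0.5
print(elu(2.0))     # 2.0
```

Unlike ReLU, all three keep a non-zero gradient for negative inputs, which is why they are often preferred in depth-regression networks.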
2.2.2. Encoder–Decoder Network Architecture
2.3. Training Strategy
2.3.1. Loss Function
2.3.2. Transfer Learning
2.4. Evaluation of the Reconstructed 3D Pavement Macro-Texture
2.4.1. Accuracy Evaluation Index
2.4.2. Effectiveness Evaluation
3. Results and Discussion
3.1. Performance on Different Data Sets
3.2. Comparison of Different Training Results
3.3. Accuracy Evaluation of the Reconstructed Asphalt Macro-Texture
3.4. Pavement Performance Evaluation
4. Case Study
4.1. Field Comparison Test of Asphalt Pavement
4.2. Field Testing of Skid Resistance
4.3. Analysis of Relationship Between MTD and Skid Resistance of Asphalt Pavement
5. Conclusions
(1) The macro-texture RGB-D data set of asphalt pavement constructed in this paper can be used directly for model training. Moreover, the proposed CNN model shows excellent performance compared with other models on the NYU Depth V2 public data set.
(2) The pavement macro-texture reconstructed by the proposed CNN can be used directly for asphalt pavement inspection. The predicted macro-texture depth map meets the technical requirements of pavement skid-resistance testing in terms of resolution (208 × 144 mm) and measurement accuracy.
(3) Compared with the traditional sand patch method, the proposed method offers significant advantages in reconstruction efficiency and engineering applicability, and can serve as an effective complement to existing macro-texture detection systems for asphalt pavement.
(4) This study focuses on 3D macro-texture reconstruction of asphalt pavement under static conditions; its application in dynamic scenarios (e.g., vehicle-mounted mobile measurement systems) has not yet been explored. Future research will address three critical challenges of dynamic environments: (i) motion-blur compensation, (ii) viewpoint-variation handling, and (iii) real-time processing. These will be tackled through temporal modeling architectures (particularly recurrent neural networks) combined with motion-compensation algorithms, ultimately enabling high-fidelity texture reconstruction for mobile measurement applications.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Dong, N.; Prozzi, J.A.; Ni, F. Reconstruction of 3D pavement texture on handling dropouts and spikes using multiple data processing methods. Sensors 2019, 19, 278.
- ISO 13473-1:1997; Characterization of Pavement Texture by Use of Surface Profiles—Part 1: Determination of Mean Profile Depth. ISO: Geneva, Switzerland, 1997.
- Yu, M.; You, Z.; Wu, G.; Kong, L.; Liu, C.; Gao, J. Measurement and modeling of skid resistance of asphalt pavement: A review. Constr. Build. Mater. 2020, 260, 119878.
- Li, Q.; Zou, Q.; Zhang, D. Road pavement defect detection using high-precision 3D surveying technology. Geomat. Inf. Sci. Wuhan Univ. 2017, 42, 1549–1564.
- Zhang, X.; Liu, T.; Liu, C.; Chen, Z. Research on skid resistance of asphalt pavement based on three-dimensional laser-scanning technology and pressure-sensitive film. Constr. Build. Mater. 2014, 69, 49–59.
- Dan, H.C.; Lu, B.; Li, M. Evaluation of asphalt pavement texture using multi-view stereo reconstruction based on deep learning. Constr. Build. Mater. 2024, 412, 134837.
- Qi, C.; Liu, W.; Wu, C.; Su, H.; Guibas, L. Frustum PointNets for 3D object detection from RGB-D data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927.
- Shen, H.; Chai, Y. Summary of binocular vision in computer vision. Sci. Technol. Inf. 2007, 150–151.
- Schwarz, M.; Schulz, H.; Behnke, S. RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 1329–1335.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Peng, Y.; Zhang, L.; Zhang, Y.; Liu, S.; Guo, M. Deep deconvolution neural network for image super-resolution. J. Softw. 2017, 29, 926–934.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149.
- Grigorev, A.; Jiang, F.; Rho, S.; Sori, W.; Liu, S.; Sai, S. Depth estimation from single monocular images using deep hybrid network. Multimed. Tools Appl. 2017, 76, 18585–18604.
- Laina, I.; Rupprecht, C.; Belagiannis, V.; Tombari, F.; Navab, N. Deeper depth prediction with fully convolutional residual networks. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 239–248.
- Chen, D. Evaluating asphalt pavement surface texture using 3D digital imaging. Int. J. Pavement Eng. 2020, 21, 416–427.
- Acharya, P.K.; Henderson, T.C. Parameter estimation and error analysis of range data. In Proceedings of the 1988 IEEE International Conference on Robotics and Automation, Philadelphia, PA, USA, 24–29 April 1988; pp. 1709–1714.
- Guan, Y.; Cheng, X.; Shi, G. A robust method for fitting a plane to point clouds. J. Tongji Univ. (Nat. Sci.) 2008, 36, 981–984.
- Min, C.; Chen, S. Conditional extremums of functions of multi-variables. Stud. Coll. Math. 2021, 24, 72–75.
- Ma, F.; Karaman, S. Sparse-to-dense: Depth prediction from sparse depth samples and a single image. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 4796–4803.
- Zwald, L.; Lambert-Lacroix, S. The BerHu penalty and the grouped effect. arXiv 2012, arXiv:1207.6868.
- Xu, W.; Zou, L.; Wu, L.; Fu, Z. Self-supervised monocular depth learning in low-texture areas. Remote Sens. 2021, 13, 1673.
- Chen, J.; Huang, X.; Zheng, B.; Zhao, R.; Liu, X.; Cao, Q.; Zhu, S. Real-time identification system of asphalt pavement texture based on the close-range photogrammetry. Constr. Build. Mater. 2019, 226, 910–919.
- Miao, Y.; Wang, L.; Wang, X.; Gong, X. Characterizing asphalt pavement 3-D macrotexture using features of co-occurrence matrix. Int. J. Pavement Res. Technol. 2015, 8, 243.
- Liu, F.; Shen, C.; Lin, G. Deep convolutional neural fields for depth estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5162–5170.
- Wang, P.; Shen, X.; Lin, Z.; Cohen, S.; Price, B.; Yuille, A.L. Towards unified depth and semantic prediction from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 2800–2809.
- Hao, Z.; Li, Y.; You, S.; Lu, F. Detail preserving depth estimation from a single image using attention guided networks. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 304–313.
- Dong, S.; Han, S.; Wu, C.; Xu, O.; Kong, H. Asphalt pavement macrotexture reconstruction from monocular image based on deep convolutional neural network. Comput.-Aided Civ. Infrastruct. Eng. 2022, 37, 1754–1768.
- Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A comprehensive survey on transfer learning. Proc. IEEE 2021, 109, 43–76.
- JTG 3450-2019; Field Test Methods of Subgrade and Pavement for Highway Engineering. Ministry of Transport of the People’s Republic of China: Beijing, China, 2019.
Parameter | Value |
---|---|
Number of scan lines | 70 |
Required scanning width | 72.009 mm |
Actual scanning width | 71.5645 mm |
Scanning width resolution | 0.0246 mm |
Scanning width spacing | 1.04 mm |
Scan length | 101.6 mm |
Scanning length spacing | 0.0356 mm |
Number | Block Name | Input Channels | Output Channels | Output Size |
---|---|---|---|---|
– | Conv1 | 3 | 64 | 104 × 72 |
×3 | Residual Block 1 | 64 | 256 | 52 × 36 |
×4 | Residual Block 2 | 256 | 512 | 26 × 18 |
×23 | Residual Block 3 | 512 | 1024 | 13 × 9 |
×3 | Residual Block 4 | 1024 | 2048 | 13 × 9 |
– | Conv2 | 2048 | 1024 | 13 × 9 |
– | Up1 | 1024 | 512 | 26 × 18 |
– | Up2 | 512 | 256 | 52 × 36 |
– | Up3 | 256 | 128 | 104 × 72 |
– | Up4 | 128 | 64 | 208 × 144 |
– | Up5 | 256 | 16 | 208 × 144 |
– | Up6 | 512 | 16 | 208 × 144 |
– | Up7 | 1024 | 16 | 208 × 144 |
– | Up8 | 2048 | 16 | 208 × 144 |
– | Conv3 | 64 | 64 | 208 × 144 |
– | Conv4 | 3 | 32 | 208 × 144 |
– | Conv5 | 128 | 128 | 208 × 144 |
– | Conv6 | 128 | 128 | 208 × 144 |
– | Conv7 | 128 | 1 | 208 × 144 |
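The 3/4/23/3 residual-block counts in the table match a ResNet-101-style backbone. As a sanity check on the listed output sizes, the sketch below propagates the 208 × 144 input through the halving (encoder) and doubling (decoder) stages; this is pure shape bookkeeping inferred from the table, not the network itself:

```python
# Shape bookkeeping for the encoder-decoder table above (block names follow
# the table; the halving/doubling rules are inferred from the listed sizes).

def half(size):
    w, h = size
    return (w // 2, h // 2)

def double(size):
    w, h = size
    return (w * 2, h * 2)

# (block name, output channels, spatial transform)
encoder = [
    ("Conv1", 64, half),                      # 208x144 -> 104x72
    ("Residual Block 1", 256, half),          # -> 52x36
    ("Residual Block 2", 512, half),          # -> 26x18
    ("Residual Block 3", 1024, half),         # -> 13x9
    ("Residual Block 4", 2048, lambda s: s),  # stays 13x9
]

size = (208, 144)
for _name, _channels, transform in encoder:
    size = transform(size)
print(size)  # (13, 9)

# Up1..Up4 each double the spatial size back to the input resolution, where
# Conv7 finally maps 128 channels to the 1-channel depth map.
for _ in range(4):
    size = double(size)
print(size)  # (208, 144)
```

The check confirms that four doubling stages exactly undo the four halving stages, so the predicted depth map is pixel-aligned with the input image.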
Model | RMSE (V2) | RMSE (RGB) | REL (V2) | REL (RGB) | δ = 1 (V2) | δ = 1 (RGB) | δ = 2 (V2) | δ = 2 (RGB) | δ = 3 (V2) | δ = 3 (RGB) |
---|---|---|---|---|---|---|---|---|---|---|
Liu [25] | 0.824 | 1.012 | 0.230 | 0.421 | 0.614 | 0.492 | 0.883 | 0.832 | 0.971 | 0.828 |
Wang [26] | 0.745 | 0.623 | 0.220 | 0.465 | 0.605 | 0.548 | 0.890 | 0.810 | 0.970 | 0.914 |
Hao [27] | 0.555 | 0.715 | 0.127 | 0.221 | 0.841 | 0.726 | 0.966 | 0.892 | 0.991 | 0.957 |
Dong [28] | 0.592 | 0.668 | 0.139 | 0.275 | 0.826 | 0.879 | 0.946 | 0.927 | 0.987 | 0.933 |
Ours | 0.630 | 0.491 | 0.135 | 0.102 | 0.801 | 0.931 | 0.951 | 0.979 | 0.986 | 0.990 |
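The columns above follow the standard monocular depth-estimation metrics. A minimal sketch of how RMSE, mean relative error (REL), and the threshold accuracies are typically computed; "δ = k" is read here as the usual convention, the fraction of pixels with max(d/g, g/d) < 1.25^k, and the toy depth values are illustrative only, not the paper's data:

```python
import math

# Standard monocular-depth metrics; pred and gt are flat lists of predicted
# and ground-truth depths (toy values below, not the paper's data).

def rmse(pred, gt):
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(gt))

def rel(pred, gt):
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)

def delta_accuracy(pred, gt, k=1, base=1.25):
    # Fraction of pixels whose ratio to ground truth is within base**k.
    threshold = base ** k
    hits = sum(1 for p, g in zip(pred, gt) if max(p / g, g / p) < threshold)
    return hits / len(gt)

pred = [1.0, 2.1, 0.9, 4.2]
gt = [1.1, 2.0, 1.0, 4.0]
print(round(rmse(pred, gt), 4))       # 0.1323
print(round(rel(pred, gt), 4))        # 0.0727
print(delta_accuracy(pred, gt, k=1))  # 1.0
```

Lower RMSE/REL and higher δ values indicate better agreement, which is how the table's comparison should be read.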
Measuring Point Number | MTD (mm) | MTD′ (mm) | REL (%) |
---|---|---|---|
1 | 1.45 | 1.38 | 5.07% |
2 | 1.38 | 1.44 | 4.17% |
3 | 1.19 | 1.23 | 3.25% |
4 | 1.30 | 1.35 | 3.70% |
5 | 1.46 | 1.39 | 5.04% |
6 | 1.33 | 1.26 | 5.56% |
7 | 1.12 | 1.18 | 5.08% |
8 | 1.36 | 1.32 | 3.03% |
9 | 1.44 | 1.50 | 4.00% |
10 | 1.33 | 1.41 | 5.67% |
11 | 1.12 | 1.16 | 3.45% |
12 | 1.49 | 1.53 | 2.61% |
13 | 1.30 | 1.33 | 2.25% |
Measuring Point Number | MTD (mm) | MTD′ (mm) | REL (%) |
---|---|---|---|
1 | 1.74 | 1.80 | 3.33% |
2 | 1.92 | 1.96 | 2.04% |
3 | 1.83 | 1.89 | 3.17% |
4 | 1.78 | 1.73 | 2.89% |
5 | 1.9 | 1.93 | 1.55% |
6 | 1.72 | 1.67 | 2.99% |
7 | 1.74 | 1.69 | 2.96% |
8 | 2.00 | 2.05 | 2.44% |
9 | 1.95 | 1.90 | 2.63% |
10 | 1.91 | 1.96 | 2.55% |
11 | 1.68 | 1.75 | 4.00% |
12 | 1.75 | 1.68 | 4.17% |
13 | 1.57 | 1.66 | 5.42% |
14 | 1.66 | 1.63 | 1.84% |
15 | 1.83 | 1.79 | 2.23% |
Measuring Point Number | MTD (mm) | MTD′ (mm) | REL (%) |
---|---|---|---|
1 | 2.04 | 1.94 | 5.15% |
2 | 2.14 | 2.24 | −4.46% |
3 | 2.36 | 2.27 | 3.96% |
4 | 2.40 | 2.46 | −2.44% |
5 | 2.14 | 2.21 | −3.17% |
6 | 2.57 | 2.48 | 3.63% |
7 | 2.51 | 2.59 | −3.09% |
8 | 2.18 | 2.23 | −2.24% |
9 | 2.08 | 2.01 | 3.48% |
10 | 2.13 | 2.04 | 4.41% |
11 | 2.35 | 2.27 | 3.52% |
12 | 2.08 | 2.01 | 3.48% |
13 | 2.60 | 2.52 | 3.17% |
14 | 2.41 | 2.50 | −3.60% |
15 | 2.03 | 1.95 | 4.10% |
16 | 2.07 | 2.01 | 2.99% |
17 | 2.17 | 2.25 | −3.56% |
18 | 2.47 | 2.53 | −2.37% |
19 | 2.12 | 2.18 | −2.75% |
20 | 2.15 | 2.22 | −3.15% |
21 | 2.31 | 2.37 | −2.53% |
22 | 2.41 | 2.46 | −2.03% |
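The REL column of the three MTD tables above is recovered by a simple relative error against the reconstructed value MTD′. The first two tables report the unsigned form and the third the signed form; this reading is inferred from the tabulated values (e.g., |1.45 − 1.38| / 1.38 = 5.07%):

```python
# Relative error between the measured MTD and the reconstructed MTD'
# (both in mm), matching the REL columns of the tables above.

def rel_error(mtd, mtd_pred):
    """Unsigned relative error in percent, as in the first two tables."""
    return abs(mtd - mtd_pred) / mtd_pred * 100.0

def signed_rel_error(mtd, mtd_pred):
    """Signed relative error in percent, as in the third table."""
    return (mtd - mtd_pred) / mtd_pred * 100.0

print(round(rel_error(1.45, 1.38), 2))         # 5.07
print(round(signed_rel_error(2.14, 2.24), 2))  # -4.46
```

Both forms use MTD′ in the denominator, which is what reproduces the tabulated percentages exactly.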
Observed Temperature T (°C) | Temperature Correction ΔF (BPN) | Observed Temperature T (°C) | Temperature Correction ΔF (BPN) |
---|---|---|---|
0 | −6 | 25 | +2 |
5 | −4 | 30 | +3 |
10 | −3 | 35 | +4 |
15 | −1 | 40 | +7 |
20 | 0 | – | – |
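When the observed temperature falls between the tabulated points, a linear interpolation between adjacent rows is a common way to obtain the correction. The sketch below does exactly that; the interpolation itself is our assumption, since the table only specifies values at 5 °C steps:

```python
# Tabulated (temperature in deg C, correction) pairs from the table above.
TABLE = [(0, -6), (5, -4), (10, -3), (15, -1), (20, 0),
         (25, 2), (30, 3), (35, 4), (40, 7)]

def correction(temp_c):
    """Correction for an observed temperature, clamped to the table's range
    and interpolated linearly between adjacent rows (our assumption)."""
    if temp_c <= TABLE[0][0]:
        return TABLE[0][1]
    if temp_c >= TABLE[-1][0]:
        return TABLE[-1][1]
    for (t0, c0), (t1, c1) in zip(TABLE, TABLE[1:]):
        if t0 <= temp_c <= t1:
            return c0 + (c1 - c0) * (temp_c - t0) / (t1 - t0)

print(correction(22.5))  # 1.0 (midway between 0 at 20 C and +2 at 25 C)
```

The correction is then added to the measured pendulum value before comparison against skid-resistance thresholds.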
Number | FB10 | FB20 | Number | FB10 | FB20 |
---|---|---|---|---|---|
1 | 65.37 | 62.37 | 26 | 65.55 | 62.55 |
2 | 66.56 | 63.56 | 27 | 66.13 | 63.13 |
3 | 65.12 | 62.12 | 28 | 67.92 | 64.92 |
4 | 63.98 | 60.98 | 29 | 67.44 | 64.44 |
5 | 67.46 | 64.46 | 30 | 65.61 | 62.61 |
6 | 67.34 | 64.34 | 31 | 66.5 | 63.50 |
7 | 64.97 | 61.97 | 32 | 66.52 | 63.52 |
8 | 66.13 | 63.13 | 33 | 67.36 | 64.36 |
9 | 65.58 | 62.58 | 34 | 63.39 | 60.39 |
10 | 62.25 | 59.25 | 35 | 65.58 | 62.58 |
11 | 67.48 | 64.48 | 36 | 64.83 | 61.83 |
12 | 65.12 | 62.12 | 37 | 63.98 | 60.98 |
13 | 66.64 | 63.64 | 38 | 62.49 | 59.49 |
14 | 65.61 | 62.61 | 39 | 66.58 | 63.58 |
15 | 64.48 | 61.48 | 40 | 64.4 | 61.40 |
16 | 67.46 | 64.46 | 41 | 62.07 | 59.07 |
17 | 65.55 | 62.55 | 42 | 62.32 | 59.32 |
18 | 67.56 | 64.56 | 43 | 67.24 | 64.24 |
19 | 62.21 | 59.21 | 44 | 63.78 | 60.78 |
20 | 64.8 | 61.80 | 45 | 64.62 | 61.62 |
21 | 66.39 | 63.39 | 46 | 63.39 | 60.39 |
22 | 66.32 | 63.32 | 47 | 67.39 | 64.39 |
23 | 66.64 | 63.64 | 48 | 64.48 | 61.48 |
24 | 67.48 | 64.48 | 49 | 67.48 | 64.48 |
25 | 66.45 | 63.45 | 50 | 62.32 | 59.32 |
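A quick consistency check on the table above: every FB20 value equals its FB10 counterpart minus exactly 3.00. The sketch below verifies this over a subset of rows; the constant offset is an observation about the tabulated data, not a claim about the measurement procedure:

```python
# (FB10, FB20) pairs taken from a subset of rows in the table above.
pairs = [(65.37, 62.37), (66.56, 63.56), (63.98, 60.98),
         (62.25, 59.25), (67.48, 64.48), (62.32, 59.32)]

# Each FB20 differs from its FB10 counterpart by a constant 3.00 offset.
offsets = [round(fb10 - fb20, 2) for fb10, fb20 in pairs]
print(offsets)  # [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]

mean_fb10 = sum(fb10 for fb10, _ in pairs) / len(pairs)
print(round(mean_fb10, 2))  # 64.66
```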
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, X.; Yin, C. 3D Reconstruction of Asphalt Pavement Macro-Texture Based on Convolutional Neural Network and Monocular Image Depth Estimation. Appl. Sci. 2025, 15, 4684. https://doi.org/10.3390/app15094684