EDRNet: Edge-Enhanced Dynamic Routing Adaptive for Depth Completion
Abstract
1. Introduction
- We propose a Sparse Adaptive Dynamic Routing Transformer (SADRT) block that combines a CNN with a Transformer and serves as part of the basic encoder unit of EDRNet. By coupling dynamic routing with a sparse adaptive activation mechanism, the block improves the computational efficiency of the model while maintaining high accuracy, providing a feasible option for real-time applications (a minimal illustrative sketch of the gating idea follows this list).
- We design a multi-lead structure-perception network framework. In the fourth encoder layer, we introduce an edge map extracted with the Canny operator and fuse it with the RGB image and sparse depth features. This effectively combines the structural information of the RGB image and the edge image, further improving the accuracy and quality of the completed depth map.
- We design a loss function that matches depth errors at object edges. It comprises two components: an edge strength loss and an edge position-matching loss. Experiments on two publicly available datasets demonstrate that this loss improves the edge quality of the depth map by explicitly modelling the depth error in edge regions.
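The sparse adaptive activation of the SADRT block is only summarised above; as a rough, non-authoritative illustration of the underlying idea (a learned per-token gate that suppresses low-saliency tokens before attention is computed), the following PyTorch sketch shows one possible form. The module name, the learnable threshold, and the soft-gating formula are assumptions made for illustration, not the exact design from Section 2.2.

```python
# Hypothetical sketch of a sparse adaptive activation gate; the class name,
# learnable threshold, and soft-gating form are illustrative assumptions,
# not the authors' published implementation.
import torch
import torch.nn as nn


class SparseAdaptiveGate(nn.Module):
    """Scores each token and softly attenuates low-saliency tokens."""

    def __init__(self, dim: int, init_threshold: float = 0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)                                # per-token saliency score
        self.threshold = nn.Parameter(torch.tensor(init_threshold))  # learned sparsity level

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) token features from the preceding CNN stage
        saliency = torch.sigmoid(self.score(x))                      # (B, N, 1), values in (0, 1)
        # Soft gate: tokens below the learned threshold are pushed towards zero,
        # which keeps the operation differentiable while inducing sparsity.
        gate = torch.sigmoid((saliency - self.threshold) * 10.0)
        return x * gate


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 64)                 # 2 images, 196 tokens, 64 channels
    print(SparseAdaptiveGate(dim=64)(tokens).shape)  # torch.Size([2, 196, 64])
```

In practice such a gate would sit in front of the attention computation so that near-zero tokens contribute little, which is where the efficiency gain of a sparse activation comes from.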
2. Materials and Methods
2.1. Network Architecture
2.2. SADRT Block
2.2.1. Dynamic Routing Transformer
2.2.2. Adaptive Sparse Activation
- Convolutional Operation
- Sparse Activation
- Adaptive Sparse Activation
2.3. Introducing Edge Image References
2.4. Composite Depth Completion Loss Function
2.4.1. Edge-Matching Loss Function
- Edge strength loss
- Edge position-matching loss
2.4.2. L1 Loss Function
2.4.3. Total Variation Loss Function
2.4.4. Dice Loss Function
3. Results and Discussion
3.1. Datasets
3.2. Evaluation Metrics
3.3. Implementation Details
3.4. Discussion
3.4.1. Quantitative Analysis
3.4.2. Qualitative Analysis
3.4.3. Visualisation and Analysis
3.4.4. Performance Comparison of Hybrid CNN–Transformer Block
3.5. Ablation Results
3.5.1. Ablation Experiments for Each Block
3.5.2. Experiments on the Insertion Position of the Edge Image
3.5.3. Comparisons with General Feature Backbones
3.5.4. Comparison and Ablation Study of Loss Function
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Appendix A.1. Curve Fitting and Significance Analysis
Method | Edge Image | EM Loss | SADRT Block | R²↑ | Change
---|---|---|---|---|---
A | √ | √ | √ | 0.90 | -
B | | √ | √ | 0.83 | −0.07
C | √ | | √ | 0.85 | −0.05
D | √ | √ | | 0.87 | −0.03
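Here R² is taken to be the ordinary coefficient of determination of the fitted curve (this reading is an assumption; the appendix does not restate the formula). For reference, a minimal NumPy computation:

```python
# Coefficient of determination for a fitted curve (standard definition,
# independent of the authors' fitting procedure).
import numpy as np


def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    ss_res = float(np.sum((y_true - y_pred) ** 2))            # residual sum of squares
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))   # total sum of squares
    return 1.0 - ss_res / ss_tot


if __name__ == "__main__":
    y = np.array([1.0, 2.0, 3.0, 4.0])
    y_hat = np.array([1.1, 1.9, 3.2, 3.9])
    print(round(r_squared(y, y_hat), 3))                       # 0.986
```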
Appendix A.2. Experiments on the Selection of Edge Operators
Edge Method | RMSE↓ (mm) | MAE↓ (mm) | Edge SSIM↑
---|---|---|---
Canny | 708.56 | 202.56 | 0.912 |
Sobel | 715.23 | 204.69 | 0.904 |
Prewitt | 719.54 | 208.12 | 0.897 |
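The edge maps compared above can be obtained with standard image-processing calls; the sketch below shows one way to compute Canny and Sobel edge maps from an RGB frame with OpenCV. The thresholds and kernel size are illustrative choices, not values reported in the paper.

```python
# Illustrative edge-map extraction; threshold and kernel values are assumptions.
import cv2
import numpy as np


def canny_edges(rgb: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    return cv2.Canny(gray, low, high)                  # binary edge map, uint8 {0, 255}


def sobel_edges(rgb: np.ndarray, ksize: int = 3) -> np.ndarray:
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=ksize)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))  # gradient magnitude, uint8


if __name__ == "__main__":
    frame = np.random.randint(0, 256, (352, 1216, 3), dtype=np.uint8)  # KITTI-sized dummy frame
    print(canny_edges(frame).shape, sobel_edges(frame).shape)
```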
Appendix A.3. Experiments to Determine the Parameters of the Loss Function
- (1) Univariate adjustment experiments
Method (KITTI DC) | RMSE↓ (mm) | MAE↓ (mm) | R²↑
---|---|---|---
α = 0.2 | 709.23 | 202.52 | 0.899 |
α = 0.3 | 708.56 | 202.56 | 0.904 |
α = 0.4 | 708.88 | 202.92 | 0.872 |
α = 0.5 | 709.45 | 203.14 | 0.844 |
Method (KITTI DC) | RMSE↓ (mm) | MAE↓ (mm) | R²↑
---|---|---|---
β = 0.2 | 708.56 | 202.56 | 0.904 |
β = 0.4 | 708.98 | 202.87 | 0.901 |
β = 0.6 | 709.45 | 203.23 | 0.899 |
β = 0.8 | 710.23 | 203.69 | 0.900 |
- (2) Empirically oriented experiments
Method (KITTI DC) | α | β | γ | δ | RMSE↓ (mm) | MAE↓ (mm) | R²↑
---|---|---|---|---|---|---|---
A | 0.3 | 0.2 | 0.3 | 0.2 | 708.69 | 202.74 | 0.902 |
B | 0.3 | 0.2 | 0.2 | 0.3 | 708.56 | 202.56 | 0.904 |
C | 0.3 | 0.2 | 0.3 | 0.3 | 709.63 | 203.17 | 0.898 |
D | 0.3 | 0.2 | 0.2 | 0.2 | 709.02 | 202.95 | 0.897 |
- (3) Comprehensive validation
Appendix A.4. Overfitting Correlation Analysis
Appendix A.5. Correlation Analysis of Inference Speed and Power Consumption
- 1. Additional analysis of inference speed (FPS):
Method | KITTI DC (FPS)↑ | NYUv2 (FPS)↑ |
---|---|---|
SDformer [12] | 8 | 12 |
GuideFormer [10] | 9 | 12 |
CompletionFormer [11] | 12 | 14 |
Ours | 14 | 16 |
- 2. Power consumption analysis:
Method | KITTI DC Training (W)↓ | KITTI DC Inference (W)↓ | NYUv2 Training (W)↓ | NYUv2 Inference (W)↓
---|---|---|---|---
SDformer [12] | 290 | 270 | 290 | 260 |
GuideFormer [10] | 290 | 280 | 280 | 260 |
CompletionFormer [11] | 270 | 250 | 250 | 230 |
Ours | 260 | 230 | 230 | 210 |
Appendix A.6. Network Details
Name | Operator | Input Dimension | Output Dimension
---|---|---|---
input | RGB and Sparse Depth | RGB Image: Sparse Depth: | RGB Image: Sparse Depth:
Conv1 | Concat [RGB, Sparse Depth]; 3 × 3 Conv + BN + ReLU | |
Conv2 | MobileNetV2 Block × 3 | |
Conv3 | MobileNetV2 Block × 3 | |
Conv4 | SADRT Block × 3 | |
Conv5 | SADRT Block × 3; Canny image → downsampling → 1 × 1 Conv → concat | |
Conv6 | SADRT Block × 3 | |
Conv7 | SADRT Block × 3 | |
Dec4 | 3 × 3 Conv + Attention Layer | |
Dec3 | 3 × 3 Conv + Attention Layer | |
Dec2 | 3 × 3 Conv + Attention Layer | |
Dec1 | 3 × 3 Conv + Attention Layer | |
Dec0 | 3 × 3 Conv + Attention Layer | |
Prediction Depth | - | |
Refine | Spatial Propagation Network Refinement | |
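The Conv5 row above fuses the Canny edge map into the encoder by downsampling it to the feature resolution, projecting it with a 1 × 1 convolution, and concatenating it with the SADRT features. A minimal PyTorch sketch of that fusion step follows; the projection width and interpolation mode are assumptions, not values from the table.

```python
# Hedged sketch of the Conv5 edge fusion: downsample -> 1x1 Conv -> concat.
# The projection width and interpolation mode are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeFusion(nn.Module):
    def __init__(self, edge_channels: int = 16):
        super().__init__()
        self.proj = nn.Conv2d(1, edge_channels, kernel_size=1)  # 1x1 Conv on the edge map

    def forward(self, feats: torch.Tensor, edge: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) encoder features; edge: (B, 1, H0, W0) full-resolution Canny map
        edge = F.interpolate(edge, size=feats.shape[-2:], mode="nearest")  # downsampling
        return torch.cat([feats, self.proj(edge)], dim=1)                  # channel concatenation


if __name__ == "__main__":
    feats = torch.randn(1, 128, 22, 76)      # hypothetical encoder feature size
    edge = torch.rand(1, 1, 352, 1216)       # full-resolution edge map
    print(EdgeFusion()(feats, edge).shape)   # torch.Size([1, 144, 22, 76])
```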
References
- Levin, A.; Lischinski, D.; Weiss, Y. Colorization using optimization. ACM Trans. Graph. (TOG) 2004, 23, 689–694. [Google Scholar] [CrossRef]
- Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), Bombay, India, 7 January 1998; pp. 839–846. [Google Scholar] [CrossRef]
- Ku, J.; Harakeh, A.; Waslander, S.L. In Defense of Classical Image Processing: Fast Depth Completion on the CPU. In Proceedings of the 2018 15th Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada, 8–10 May 2018. [Google Scholar]
- Uhrig, J.; Schneider, N.; Schneider, L.; Franke, U.; Brox, T.; Geiger, A. Sparsity Invariant CNNs. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017; pp. 11–20. [Google Scholar] [CrossRef]
- Huang, Z.; Fan, J.; Cheng, S.; Yi, S.; Wang, X.; Li, H. HMS-Net: Hierarchical Multi-Scale Sparsity-Invariant Network for Sparse Depth Completion. IEEE Trans. Image Process. 2020, 29, 3429–3441. [Google Scholar] [CrossRef] [PubMed]
- Chodosh, N.; Wang, C.; Lucey, S. Deep Convolutional Compressed Sensing for LiDAR Depth Completion. arXiv 2018, arXiv:1803.08949. [Google Scholar] [CrossRef]
- Ma, F.; Karaman, S. Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 1–8. [Google Scholar] [CrossRef]
- Ma, F.; Cavalheiro, G.V.; Karaman, S. Self-Supervised Sparse-to-Dense: Self-Supervised Depth Completion from LiDAR and Monocular Camera. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 3288–3295. [Google Scholar] [CrossRef]
- Jaritz, M.; Charette, R.D.; Wirbel, É.; Perrotton, X.; Nashashibi, F. Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 52–60. [Google Scholar] [CrossRef]
- Rho, K.; Ha, J.; Kim, Y. GuideFormer: Transformers for Image Guided Depth Completion. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 6240–6249. [Google Scholar]
- Zhang, Y.; Guo, X.; Poggi, M.; Zhu, Z.; Huang, G.; Mattoccia, S. CompletionFormer: Depth Completion with Convolutions and Vision Transformers. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 18527–18536. [Google Scholar] [CrossRef]
- Qian, J.; Sun, M.; Lee, A.; Li, J.; Zhuo, S.; Chiang, P. SDformer: Efficient End-to-End Transformer for Depth Completion. arXiv 2024, arXiv:2409.08159. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
- Liu, S.; Mello, S.D.; Gu, J.; Zhong, G.; Yang, M.-H.; Kautz, J. Learning Affinity via Spatial Propagation Networks. In Proceedings of the Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic Routing Between Capsules. arXiv 2017, arXiv:1710.09829. [Google Scholar] [CrossRef]
- Liu, X.; Liu, Y.; Fu, W.; Liu, S. RETRACTED ARTICLE: SCTV-UNet: A COVID-19 CT segmentation network based on attention mechanism. Soft Comput 2024, 28, 473. [Google Scholar] [CrossRef] [PubMed]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision & Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
- Silberman, N.; Hoiem, D.; Kohli, P.; Fergus, R. Indoor Segmentation and Support Inference from RGBD Images. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012. [Google Scholar]
- Agarwal, A.; Arora, C. Attention Attention Everywhere: Monocular Depth Prediction with Skip Attention. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 5850–5859. [Google Scholar] [CrossRef]
- Lee, J.H.; Han, M.-K.; Ko, D.W.; Suh, I.H. From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation. arXiv 2019, arXiv:1907.10326. [Google Scholar] [CrossRef]
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Cheng, X.; Wang, P.; Yang, R. Learning Depth with Convolutional Spatial Propagation Network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2361–2379. [Google Scholar] [CrossRef] [PubMed]
- Cheng, X.; Wang, P.; Guan, C.; Yang, R. CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020. [Google Scholar]
- Xu, Y.; Zhu, X.; Shi, J.; Zhang, G.; Bao, H.; Li, H. Depth Completion from Sparse LiDAR Data with Depth-Normal Constraints. In Proceedings of the ICCV, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2811–2820. [Google Scholar]
- Tang, J.; Tian, F.-P.; Feng, W.; Li, J.; Tan, P. Learning Guided Convolutional Network for Depth Completion. IEEE Trans. Image Process. 2019, 30, 1116–1129. [Google Scholar] [CrossRef] [PubMed]
- Zhao, S.; Gong, M.; Fu, H.; Tao, D. Adaptive Context-Aware Multi-Modal Network for Depth Completion. IEEE Trans. Image Process. 2021, 30, 5264–5276. [Google Scholar] [CrossRef] [PubMed]
- Qiu, J.; Cui, Z.; Zhang, Y.; Zhang, X.; Liu, S.; Zeng, B.; Pollefeys, M. DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3308–3317. [Google Scholar] [CrossRef]
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Xu, B.; Li, J.; Yang, J. RigNet: Repetitive Image Guided Network for Depth Completion. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022. [Google Scholar]
- Srinivas, A.; Lin, T.Y.; Parmar, N.; Shlens, J.; Abbeel, P.; Vaswani, A. Bottleneck Transformers for Visual Recognition. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 16514–16524. [Google Scholar]
- Peng, Z.; Guo, Z.; Huang, W.; Wang, Y.; Xie, L.; Jiao, J.; Tian, Q.; Ye, Q. Conformer: Local Features Coupling Global Representations for Recognition and Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 9454–9468. [Google Scholar] [CrossRef] [PubMed]
- Kim, S.; Gholami, A.; Shaw, A.E.; Lee, N.; Mangalam, K.; Malik, J.; Mahoney, M.W.; Keutzer, K. Squeezeformer: An Efficient Transformer for Automatic Speech Recognition. arXiv 2022, arXiv:2206.00888. [Google Scholar] [CrossRef]
- Yang, Y.; Pan, Y.; Yin, J.; Han, J.; Ma, L.; Lu, H. Hybridformer: Improving Squeezeformer with Hybrid Attention and NSR Mechanism. In Proceedings of the ICASSP 2023–2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar] [CrossRef]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 9992–10002. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Identity Mappings in Deep Residual Networks. arXiv 2016, arXiv:1603.05027. [Google Scholar] [CrossRef]
- Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Shao, L. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. arXiv 2021, arXiv:2102.12122. [Google Scholar] [CrossRef]
- Lee, Y.; Kim, J.; Willette, J.; Hwang, S.J. MPViT: Multi-Path Vision Transformer for Dense Prediction. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 7277–7286. [Google Scholar] [CrossRef]
- Van Gansbeke, W.; Neven, D.; Brabandere, B.D.; Gool, L.V. Sparse and Noisy LiDAR Completion with RGB Guidance and Uncertainty. In Proceedings of the 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 27–31 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.B.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Liao, Y.; Huang, L.; Wang, Y.; Kodagoda, S.; Yu, Y.; Liu, Y. Parse geometry from a line: Monocular depth estimation with partial laser observation. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5059–5066. [Google Scholar] [CrossRef]
Method | KITTI DC RMSE↓ (mm) | KITTI DC MAE↓ (mm) | KITTI DC iRMSE↓ (1/km) | KITTI DC iMAE↓ (1/km) | KITTI DC SSIM↑ | KITTI DC LPIPS↓ | NYUv2 RMSE↓ (m) | NYUv2 REL↓
---|---|---|---|---|---|---|---|---
CSPN [23] | 1019.64 | 279.46 | 2.93 | 1.15 | 0.794 | 0.162 | 0.117 | 0.016 |
CSPN++ [24] | 743.69 | 209.28 | 2.07 | 0.90 | 0.896 | 0.132 | - | - |
Sparse-to-dense [7] | 814.73 | 249.95 | 2.80 | 1.21 | 0.864 | 0.101 | 0.230 | 0.044 |
FCFR [25] | 735.81 | 217.15 | 2.20 | 0.98 | 0.913 | 0.090 | 0.106 | 0.015 |
GuideNet [26] | 736.24 | 218.83 | 2.25 | 0.99 | 0.911 | 0.089 | 0.101 | 0.015 |
ACMNet [27] | 744.91 | 206.09 | 2.08 | 0.90 | 0.895 | 0.098 | 0.105 | 0.015 |
DeepLiDAR [28] | 758.38 | 226.50 | 2.56 | 1.15 | 0.875 | 0.115 | 0.115 | 0.022 |
RigNet [29] | 713.44 | 204.55 | 2.16 | 0.92 | 0.922 | 0.078 | 0.090 | 0.013 |
SDformer [12] | 809.78 | 222.32 | 2.32 | 0.93 | 0.905 | 0.095 | - | - |
GuideFormer [10] | 721.48 | 207.76 | 2.14 | 0.97 | 0.917 | 0.086 | - | - |
CompletionFormer [11] | 708.87 | 203.45 | 2.01 | 0.88 | 0.925 | 0.073 | 0.090 | 0.012 |
Ours | 708.56 | 202.56 | 2.04 | 0.88 | 0.931 | 0.075 | 0.092 | 0.012 |
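For reference, the metrics in this table follow their standard depth-completion definitions: RMSE and MAE are computed on depth in millimetres, iRMSE and iMAE on inverse depth in 1/km, REL is the mean absolute relative error (used for NYUv2), and SSIM and LPIPS are the usual structural-similarity and learned perceptual measures. All are evaluated only on pixels with valid ground truth. A minimal NumPy sketch of the first five definitions (not the official evaluation code):

```python
# Standard depth-completion metrics over valid ground-truth pixels
# (minimal sketch, not the official KITTI/NYUv2 evaluation code).
import numpy as np


def depth_metrics(pred_mm: np.ndarray, gt_mm: np.ndarray) -> dict:
    valid = gt_mm > 0                                   # only pixels with ground-truth depth
    p = pred_mm[valid].astype(np.float64)
    g = gt_mm[valid].astype(np.float64)
    inv_p, inv_g = 1e6 / p, 1e6 / g                     # inverse depth in 1/km (inputs are mm)
    return {
        "RMSE_mm": float(np.sqrt(np.mean((p - g) ** 2))),
        "MAE_mm": float(np.mean(np.abs(p - g))),
        "iRMSE_per_km": float(np.sqrt(np.mean((inv_p - inv_g) ** 2))),
        "iMAE_per_km": float(np.mean(np.abs(inv_p - inv_g))),
        "REL": float(np.mean(np.abs(p - g) / g)),
    }


if __name__ == "__main__":
    gt = np.random.uniform(1000, 80000, size=(352, 1216))   # dummy ground truth in mm
    pred = gt + np.random.normal(0, 500, size=gt.shape)     # dummy prediction
    print(depth_metrics(pred, gt))
```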
Block | RMSE↓ (mm) | MAE↓ (mm) | FLOPs↓ (G) | FPS↑
---|---|---|---|---
BoTNet [30] | 88.5 | 34.9 | 368.2 | 11 |
Conformer [31] | 88.2 | 35.1 | 366.3 | 13 |
Squeezeformer [32] | 87.9 | 34.8 | 365.7 | 12 |
HybridFormer [33] | 87.4 | 34.1 | 366.2 | 13 |
SADRT (Ours) | 86.3 | 34.1 | 364.5 | 15 |
Method | Edge Image | EM Loss | SADRT Block | RMSE↓ (mm) | MAE↓ (mm) | Params↓ (M) | FLOPs↓ (G)
---|---|---|---|---|---|---|---
Backbone Only | | | | 90.3 | 35.1 | 67.8 | 374.7
A | √ | | | 90.1 | 35.0 | 68.9 | 383.6
B | | √ | | 90.0 | 35.0 | 69.1 | 394.5
C | | | √ | 90.1 | 34.9 | 55.4 | 359.9
D | √ | √ | | 89.7 | 34.9 | 69.5 | 389.6
E | √ | | √ | 88.5 | 34.6 | 62.1 | 362.7
F | | √ | √ | 87.9 | 34.5 | 66.3 | 361.9
G | √ | √ | √ | 86.3 | 34.1 | 67.5 | 364.5
Method | Location of Edge Image | RMSE↓ (mm) | MAE↓ (mm)
---|---|---|---
A | Encoder Layer 1 | 722.41 | 204.28 |
B | Encoder Layer 4 | 708.56 | 202.56 |
C | Decoder Layer 1 | 715.23 | 201.87 |
D | Decoder Layer 4 | 725.32 | 203.16 |
Backbone | RMSE↓ (mm) | MAE↓ (mm) | FLOPs↓ (G) | FPS↑
---|---|---|---|---
Swin-Tiny [34] | 92.6 | 36.4 | 634.8 | 7 |
ResNet34 [35] | 91.4 | 35.5 | 582.1 | 7 |
PVT-Large [36] | 91.4 | 35.6 | 419.8 | 9 |
MPViT-Base [37] | 91.0 | 35.5 | 1259.3 | 3 |
CompletionFormer Tiny [11] | 90.9 | 35.3 | 389.4 | 9 |
Ours | 90.3 | 35.1 | 374.7 | 15 |
Method (KITTI DC) | RMSE↓ (mm) | MAE↓ (mm) | iRMSE↓ (1/km) | iMAE↓ (1/km)
---|---|---|---|---
L1 Loss [38] | 712.35 | 204.87 | 2.11 | 1.02 |
Smooth L1 Loss [39] | 710.23 | 204.34 | 2.06 | 0.93 |
Hybrid Loss [40] | 709.13 | 203.78 | 2.02 | 0.88 |
Ours | 708.56 | 202.56 | 2.04 | 0.88 |
Method (KITTI DC) | α | β | γ | δ | RMSE↓ (mm) | MAE↓ (mm) | iRMSE↓ (1/km) | iMAE↓ (1/km)
---|---|---|---|---|---|---|---|---
A | 0 | 0.2 | 0.2 | 0.3 | 713.23 | 205.79 | 2.23 | 0.95 |
B | 0.3 | 0 | 0.2 | 0.3 | 709.44 | 203.44 | 2.09 | 0.91 |
C | 0.3 | 0.2 | 0 | 0.3 | 708.85 | 202.93 | 2.03 | 0.89 |
D | 0.3 | 0.2 | 0.2 | 0 | 710.87 | 204.01 | 2.11 | 0.98 |
Ours | 0.3 | 0.2 | 0.2 | 0.3 | 708.56 | 202.56 | 2.04 | 0.88 |
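Reading the two tables together, the total training objective is a weighted sum of the edge-matching (EM), L1, total variation, and Dice terms, with the adopted weights α = 0.3, β = 0.2, γ = 0.2 and δ = 0.3; the mapping of weights to terms is assumed here to follow the order in which the losses are introduced in Section 2.4. The PyTorch sketch below shows that weighting with simple stand-ins for the L1 and total variation terms; the EM and Dice losses are passed in as callables because their exact formulations are given in the paper, not here.

```python
# Hedged sketch of the composite loss weighting; alpha..delta follow the table
# above, and the weight-to-term mapping assumes the ordering of Section 2.4.
import torch
import torch.nn.functional as F


def tv_loss(depth: torch.Tensor) -> torch.Tensor:
    # total variation: mean absolute difference between neighbouring pixels
    dh = (depth[..., 1:, :] - depth[..., :-1, :]).abs().mean()
    dw = (depth[..., :, 1:] - depth[..., :, :-1]).abs().mean()
    return dh + dw


def composite_loss(pred, gt, em_loss_fn, dice_loss_fn,
                   alpha=0.3, beta=0.2, gamma=0.2, delta=0.3):
    valid = gt > 0                                       # supervise only pixels with ground truth
    l1 = F.l1_loss(pred[valid], gt[valid])
    return (alpha * em_loss_fn(pred, gt)
            + beta * l1
            + gamma * tv_loss(pred)
            + delta * dice_loss_fn(pred, gt))


if __name__ == "__main__":
    pred = torch.rand(1, 1, 64, 64) * 80.0
    gt = torch.rand(1, 1, 64, 64) * 80.0
    stub = lambda p, g: torch.tensor(0.0)                # placeholders for the EM and Dice terms
    print(composite_loss(pred, gt, stub, stub).item())
```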