Article

TTFDNet: Precise Depth Estimation from Single-Frame Fringe Patterns

Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, Shenzhen Key Lab of Micro-Nano Photonic Information Technology, State Key Laboratory of Radio Frequency Heterogeneous Integration, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
*
Author to whom correspondence should be addressed.
Sensors 2024, 24(14), 4733; https://doi.org/10.3390/s24144733
Submission received: 24 June 2024 / Revised: 17 July 2024 / Accepted: 19 July 2024 / Published: 21 July 2024
(This article belongs to the Special Issue Deep Learning for Computer Vision and Image Processing Sensors)

Abstract

This work presents TTFDNet, a transformer-based network with transfer learning for end-to-end depth estimation from single-frame fringe patterns in fringe projection profilometry. TTFDNet features a precise contour and coarse depth (PCCD) pre-processor, a global multi-dimensional fusion (GMDF) module and a progressive depth extractor (PDE). It employs transfer learning through fringe structure consistency evaluation (FSCE) to leverage the transformer’s benefits even on a small dataset. Tested on 208 scenes, the model achieved a mean absolute error (MAE) of 0.00372 mm, outperforming U-Net (0.03458 mm), PDE (0.01063 mm) and PCTNet (0.00518 mm). It demonstrated precise measurement capabilities, with deviations of ~90 μm for a 25.4 mm radius ball and ~6 μm for a 20 mm thick metal part. Additionally, TTFDNet showed excellent generalization and robustness in dynamic reconstruction and under varied imaging conditions, making it suitable for practical applications in manufacturing, automation and computer vision.
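The mean absolute error (MAE) quoted above for the model comparison can be sketched as follows. This is a minimal illustration, not the authors' evaluation code; the array names and the toy depth maps are assumptions, and the metric is simply the per-pixel mean of absolute depth differences in millimetres.

```python
import numpy as np

def depth_mae(pred_mm: np.ndarray, gt_mm: np.ndarray) -> float:
    """Mean absolute depth error (mm) averaged over all pixels."""
    return float(np.mean(np.abs(pred_mm - gt_mm)))

# Toy check: a uniform 0.00372 mm offset reproduces the reported MAE value.
gt = np.zeros((4, 4))
pred = gt + 0.00372
print(depth_mae(pred, gt))  # 0.00372
```

In practice such a metric would be averaged over all test scenes (208 in the paper) rather than a single depth map.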
Keywords: fringe projection profilometry; depth estimation; deep learning; transfer learning

Share and Cite

MDPI and ACS Style

Cai, Y.; Guo, M.; Wang, C.; Lu, X.; Zeng, X.; Sun, Y.; Ai, Y.; Xu, S.; Li, J. TTFDNet: Precise Depth Estimation from Single-Frame Fringe Patterns. Sensors 2024, 24, 4733. https://doi.org/10.3390/s24144733

AMA Style

Cai Y, Guo M, Wang C, Lu X, Zeng X, Sun Y, Ai Y, Xu S, Li J. TTFDNet: Precise Depth Estimation from Single-Frame Fringe Patterns. Sensors. 2024; 24(14):4733. https://doi.org/10.3390/s24144733

Chicago/Turabian Style

Cai, Yi, Mingyu Guo, Congying Wang, Xiaowei Lu, Xuanke Zeng, Yiling Sun, Yuexia Ai, Shixiang Xu, and Jingzhen Li. 2024. "TTFDNet: Precise Depth Estimation from Single-Frame Fringe Patterns" Sensors 24, no. 14: 4733. https://doi.org/10.3390/s24144733

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
