
3D Imaging and Sensing System

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Sensing and Imaging".


Editor


Prof. Dr. Gunther Notni
Collection Editor
Group for Quality Assurance and Image Processing, Technical University Ilmenau, 98684 Ilmenau, Germany
Interests: 3D sensors; structured light; robot vision; multimodal/multispectral imaging; human–machine interaction; high-resolution surface and shape measuring methods; machine learning

Topical Collection Information

Dear Colleagues,

Three-dimensional imaging and sensing systems are becoming increasingly important in a variety of applications, such as industrial process control, robot vision, autonomous driving and autonomous systems, biometrics, medicine, forensics and cultural heritage preservation. Several 3D data acquisition techniques exist, such as structured light, stereo vision, plenoptic systems and LIDAR. Moreover, in multimodal 3D imaging, a scene is acquired simultaneously with a 3D sensor and cameras operating in different spectral ranges. As a result, the object or scene is described by its spatial 3D coordinates (point clouds), its temporal behavior and, in addition, by further image modalities (e.g., a thermal image, multispectral image, polarization image, etc.).

In this Topical Collection, we look forward to receiving contributions presenting technical, methodological and algorithmic approaches capable of contributing to the future development of 3D sensors and their applications, not limited to specific application areas. Topics of interest range from new sensor principles, real-time techniques, multi-3D-sensor systems, 3D sensors for harsh operating conditions and new concepts for the 3D detection of uncooperative (transparent and specular) objects to multimodal 3D sensors and calibration techniques.

Prof. Dr. Gunther Notni
Collection Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-time 3D sensors
  • 3D multi-sensor systems
  • 3D machine vision
  • robot vision
  • 3D vision for autonomous systems
  • 3D sensor applications
  • calibration techniques
  • multimodal 3D sensors
  • 3D inspection

Published Papers (28 papers)

2024


20 pages, 2695 KiB  
Article
Image Reconstruction Requirements for Short-Range Inductive Sensors Used in Single-Coil MIT
by Joe R. Feldkamp
Sensors 2024, 24(9), 2704; https://doi.org/10.3390/s24092704 - 24 Apr 2024
Abstract
MIT (magnetic induction tomography) image reconstruction from data acquired with a single, small inductive sensor has unique requirements not found in other imaging modalities. During the course of scanning over a target, measured inductive loss decreases rapidly with distance from the target boundary. Since inductive loss exists even at infinite separation due to losses internal to the sensor, all other measurements made in the vicinity of the target require subtraction of the infinite-separation loss. This is accomplished naturally by treating infinite-separation loss as an unknown. Furthermore, since contributions to inductive loss decline with greater depth into a conductive target, regularization penalties must be decreased with depth. A pair of squared L2 penalty norms are combined to form a 2-term Sobolev norm, including a zero-order penalty that penalizes solution departures from a default solution and a first-order penalty that promotes smoothness. While constraining the solution to be non-negative and bounded from above, the algorithm is used to perform image reconstruction on scan data obtained over a 4.3 cm thick phantom consisting of bone-like features embedded in agarose gel, with the latter having a nominal conductivity of 1.4 S/m.
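In our own notation (not the paper's), the reconstruction described above amounts to a bound-constrained least-squares problem with a two-term Sobolev penalty. A plausible sketch, where σ is the conductivity image, σ₀ the default solution, A the forward map from conductivity to inductive loss, b the loss data after subtraction of the jointly estimated infinite-separation loss, and W a diagonal weight that decays with depth into the target:

```latex
\min_{0 \le \sigma \le \sigma_{\max}}
\; \| A\sigma - b \|_2^2
\; + \; \lambda_0 \, \| W (\sigma - \sigma_0) \|_2^2
\; + \; \lambda_1 \, \| W \nabla \sigma \|_2^2
```

Here the λ₀ term is the zero-order penalty (departure from the default solution) and the λ₁ term is the first-order smoothness penalty; shrinking W with depth implements the depth-dependent relaxation of both penalties.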

15 pages, 6531 KiB  
Article
Range-Intensity-Profile-Guided Gated Light Ranging and Imaging Based on a Convolutional Neural Network
by Chenhao Xia, Xinwei Wang, Liang Sun, Yue Zhang, Bo Song and Yan Zhou
Sensors 2024, 24(7), 2151; https://doi.org/10.3390/s24072151 - 27 Mar 2024
Abstract
Three-dimensional (3D) range-gated imaging can obtain high-spatial-resolution intensity images as well as pixel-wise depth information. Several algorithms have been developed to recover depth from gated images, such as the range-intensity correlation algorithm and deep-learning-based algorithms. The traditional range-intensity correlation algorithm requires specific range-intensity profiles, which are hard to generate, while existing deep-learning-based algorithms require a large amount of real-scene training data. In this work, we propose a method of range-intensity-profile-guided gated light ranging and imaging to recover depth from gated images based on a convolutional neural network. In this method, the range-intensity profile (RIP) of a given gated light ranging and imaging system is obtained to generate synthetic training data from Grand Theft Auto V for our range-intensity ratio and semantic network (RIRS-net). The RIRS-net is mainly trained on synthetic data and fine-tuned with RIP data. The network learns both semantic depth cues and range-intensity depth cues in the synthetic data, and learns accurate range-intensity depth cues in the RIP data. In evaluation experiments on both real-scene and synthetic test datasets, our method shows better results than other algorithms.
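For readers unfamiliar with the baseline this network refines, the classical range-intensity correlation idea can be sketched in a few lines. The sketch below assumes two delayed gates with overlapping triangular intensity profiles, so that the per-pixel ratio of gate intensities varies approximately linearly with depth across the overlap zone; the gate model and parameters are illustrative, not the authors' system:

```python
import numpy as np

def depth_from_gate_ratio(I_near, I_far, z_min, z_max):
    # Classical range-intensity correlation for two overlapping triangular
    # gates: the ratio I_far / (I_near + I_far) increases roughly linearly
    # with depth over the overlap zone [z_min, z_max].
    total = I_near + I_far
    ratio = np.divide(I_far, total,
                      out=np.zeros_like(total, dtype=float),
                      where=total > 0)
    return z_min + ratio * (z_max - z_min)
```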

14 pages, 9349 KiB  
Article
3D Digital Modeling of Dental Casts from Their 3D CT Images with Scatter and Beam-Hardening Correction
by Mohamed A. A. Hegazy, Myung Hye Cho, Min Hyoung Cho and Soo Yeol Lee
Sensors 2024, 24(6), 1995; https://doi.org/10.3390/s24061995 - 21 Mar 2024
Abstract
Dental 3D modeling plays a pivotal role in digital dentistry, offering precise tools for treatment planning, implant placement, and prosthesis customization. Traditional methods rely on physical plaster casts, which pose challenges in storage, accessibility, and accuracy, fueling interest in digitization using 3D computed tomography (CT) imaging. We introduce a method that can reduce both scatter and beam-hardening artifacts simultaneously. To validate the proposed method, we carried out CT scan experiments using plaster dental casts created from dental impressions. After the artifact correction, the CT image quality was greatly improved in terms of image uniformity, contrast-to-noise ratio (CNR), and edge sharpness. We examined the correction effects on the accuracy of the 3D models generated from the CT images. Referenced to the 3D models derived from the optical scan data, the root mean square (RMS) errors were reduced by 8.8–71.7% for three dental casts of different sizes and shapes. Our method offers a solution to the challenges posed by artifacts in CT scanning of plaster dental casts, leading to enhanced 3D model accuracy. This advancement holds promise for dental professionals seeking precise digital modeling for diverse applications in dentistry.
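As a rough illustration of the accuracy metric reported above (our simplification, comparing point clouds sampled from the two models after alignment, rather than a full mesh-to-mesh comparison):

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_error(test_pts, ref_pts):
    # RMS of nearest-neighbor distances from points sampled on the
    # CT-derived model to points sampled on the aligned optical-scan
    # reference; both inputs are (N, 3) arrays.
    dists, _ = cKDTree(ref_pts).query(test_pts)
    return np.sqrt(np.mean(dists ** 2))
```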

15 pages, 21175 KiB  
Article
Pre-Segmented Down-Sampling Accelerates Graph Neural Network-Based 3D Object Detection in Autonomous Driving
by Zhenming Liang, Yingping Huang and Yanbiao Bai
Sensors 2024, 24(5), 1458; https://doi.org/10.3390/s24051458 - 23 Feb 2024
Abstract
Graph neural networks (GNNs) have been proven to be an ideal approach to deal with irregular point clouds, but they involve massive computations for searching neighboring points in the graph, which limits their application in large-scale LiDAR point cloud processing. Down-sampling is a straightforward and indispensable step in current GNN-based 3D detectors to reduce the computational burden of the model, but the commonly used down-sampling methods cannot distinguish the categories of the LiDAR points, which makes it impossible to improve the computational efficiency of GNN models without degrading their detection accuracy. In this paper, we propose (1) a LiDAR point cloud pre-segmented down-sampling (PSD) method that can selectively reduce background points while preserving the foreground object points during the process, greatly improving the computational efficiency of the model without affecting its 3D detection accuracy, and (2) a lightweight GNN-based 3D detector that can extract point features and detect objects from the raw down-sampled LiDAR point cloud directly without any pre-transformation. We test the proposed model on the KITTI 3D Object Detection Benchmark, and the results demonstrate its effectiveness and efficiency for autonomous driving 3D object detection.
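The core idea of pre-segmented down-sampling can be pictured in a few lines of numpy: once a pre-segmentation step has labeled each point as foreground or background, background points are thinned far more aggressively. The labels and keep-ratios below are hypothetical; the paper's segmentation stage is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(42)

def presegmented_downsample(points, is_foreground, bg_keep=0.1, fg_keep=1.0):
    # Keep (nearly) all foreground points but only a small random fraction
    # of background points, shrinking the graph the GNN detector must build
    # neighbor lists over without discarding object evidence.
    keep = np.where(is_foreground,
                    rng.random(len(points)) < fg_keep,
                    rng.random(len(points)) < bg_keep)
    return points[keep]
```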

2023


19 pages, 5498 KiB  
Article
Integral Imaging Display System Based on Human Visual Distance Perception Model
by Lijin Deng, Zhihong Li, Yuejianan Gu and Qi Wang
Sensors 2023, 23(21), 9011; https://doi.org/10.3390/s23219011 - 06 Nov 2023
Abstract
In an integral imaging (II) display system, the self-adjustment ability of the human eye can result in blurry observations when viewing 3D targets outside the focal plane within a specific range, which can impact the overall imaging quality of the II system. This research examines the visual characteristics of the human eye and analyzes the path of light from a point source to the eye in the process of capturing and reconstructing the light field. An overall depth of field (DOF) model of II is then derived based on the human visual system (HVS). On this basis, an II system based on the human visual distance (HVD) perception model is proposed, and an interactive II display system is constructed. The experimental results confirm the effectiveness of the proposed method: the display system improves the viewing distance range, enhances spatial resolution and provides better stereoscopic display effects. When comparing our method with three other methods, it is clear that our approach produces better results in optical experiments and objective evaluations: the cumulative probability of blur detection (CPBD) value is 38.73%, the structural similarity index (SSIM) value is 86.56%, and the peak signal-to-noise ratio (PSNR) value is 31.12 dB. These values align with subjective evaluations based on the characteristics of the human visual system.

16 pages, 9044 KiB  
Article
Edge Bleeding Artifact Reduction for Shape from Focus in Microscopic 3D Sensing
by Sang-Ho Park, Ga-Rin Park and Kwang-Ryul Baek
Sensors 2023, 23(20), 8602; https://doi.org/10.3390/s23208602 - 20 Oct 2023
Abstract
Shape from focus enables microscopic 3D sensing when combined with a microscope system. However, edge bleeding artifacts in the estimated depth easily occur in this environment. Therefore, this study analyzed the artifacts and proposed a method to reduce them. The analysis showed that the artifact factors are the depth of field of the lens, object texture, brightness difference between layers, and the slope of the object. To reduce the artifacts, a weighted focus measure method was proposed based on the asymmetry of local brightness in artifacts. The proposed reduction method was evaluated through simulation and implementation, showing edge bleeding artifact reduction rates of up to 60% for various focus measure operators. The proposed method can be used with postprocessing algorithms and reduces edge bleeding artifacts.
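For context, a plain (unweighted) shape-from-focus baseline, on top of which a weighting of focus measure values would be applied, looks like the sketch below. It uses the sum-modified-Laplacian focus measure, a common choice; the paper's specific weighting scheme is not reproduced:

```python
import numpy as np
from scipy.ndimage import convolve1d

def focus_measure(img):
    # Sum-modified-Laplacian: large where the image is sharply in focus.
    lx = convolve1d(img, [-1.0, 2.0, -1.0], axis=1)
    ly = convolve1d(img, [-1.0, 2.0, -1.0], axis=0)
    return np.abs(lx) + np.abs(ly)

def depth_from_focus(stack, z_positions):
    # stack: (N, H, W) focal stack. Per pixel, depth is the z of the slice
    # maximizing the focus measure; edge bleeding arises when a bright
    # defocused edge from a neighboring layer wins this argmax.
    fm = np.stack([focus_measure(s.astype(float)) for s in stack])
    return np.asarray(z_positions)[np.argmax(fm, axis=0)]
```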

15 pages, 1796 KiB  
Article
A 256 × 256 LiDAR Imaging System Based on a 200 mW SPAD-Based SoC with Microlens Array and Lightweight RGB-Guided Depth Completion Neural Network
by Jier Wang, Jie Li, Yifan Wu, Hengwei Yu, Lebei Cui, Miao Sun and Patrick Yin Chiang
Sensors 2023, 23(15), 6927; https://doi.org/10.3390/s23156927 - 03 Aug 2023
Abstract
Light detection and ranging (LiDAR) technology, a cutting-edge advancement in mobile applications, presents a myriad of compelling use cases, including enhancing low-light photography, capturing and sharing 3D images of fascinating objects, and elevating the overall augmented reality (AR) experience. However, its widespread adoption has been hindered by the prohibitive costs and substantial power consumption associated with its implementation in mobile devices. To surmount these obstacles, this paper proposes a low-power, low-cost, single-photon avalanche detector (SPAD)-based system-on-chip (SoC) which packages the microlens arrays (MLAs) and a lightweight RGB-guided sparse depth imaging completion neural network for 3D LiDAR imaging. The proposed SoC integrates an 8 × 8 SPAD macropixel array with time-to-digital converters (TDCs) and a charge pump, fabricated using a 180 nm bipolar-CMOS-DMOS (BCD) process. Initially, the primary function of this SoC was limited to serving as a ranging sensor. A random MLA-based homogenizing diffuser efficiently transforms Gaussian beams into flat-topped beams with a 45° field of view (FOV), enabling flash projection at the transmitter. To further enhance resolution and broaden application possibilities, a lightweight neural network employing RGB-guided sparse depth complementation is proposed, enabling a substantial expansion of image resolution from 8 × 8 to quarter video graphics array level (QVGA; 256 × 256). Experimental results demonstrate the effectiveness and stability of the hardware encompassing the SoC and optical system, as well as the lightweight features and accuracy of the algorithmic neural network. The state-of-the-art SoC-neural network solution offers a promising and inspiring foundation for developing consumer-level 3D imaging applications on mobile devices.

21 pages, 5685 KiB  
Article
A New Vision Measurement Technique with Large Field of View and High Resolution
by Yong Li, Chenguang Liu, Xiaoyu You and Jian Liu
Sensors 2023, 23(14), 6615; https://doi.org/10.3390/s23146615 - 23 Jul 2023
Abstract
The three-dimensional (3D) displacement resolution of conventional visual measurement systems can only reach tens of microns in cases involving long measuring distances (2.5 m) and large fields of view (1.5 m × 1.5 m). Therefore, a stereo vision measurement technology based on confocal scanning is proposed herein. This technology combines macroscopic visual measurement technology with confocal microscopic measurement technology to achieve a long measuring distance, a large field of view, and micron-level measuring resolution. First, we analyzed the factors affecting the 3D resolution of the visual system and developed a 3D resolution model of the visual system. Subsequently, we fabricated a prototype based on the resolution model and the proposed stereo vision measurement technology. The 3D displacement resolution measurement results in the full field of view show that the displacement resolutions of the developed equipment in the x-, y-, and z-directions can reach 2.5, 2.5, and 6 μm, respectively.

15 pages, 598 KiB  
Article
A Fast and Reliable Solution to PnP, Using Polynomial Homogeneity and a Theorem of Hilbert
by Daniel Keren, Margarita Osadchy and Amit Shahar
Sensors 2023, 23(12), 5585; https://doi.org/10.3390/s23125585 - 14 Jun 2023
Abstract
One of the most-extensively studied problems in three-dimensional Computer Vision is “Perspective-n-Point” (PnP), which concerns estimating the pose of a calibrated camera, given a set of 3D points in the world and their corresponding 2D projections in an image captured by the camera. One solution method that ranks as very accurate and robust proceeds by reducing PnP to the minimization of a fourth-degree polynomial over the three-dimensional sphere S3. Despite a great deal of effort, there is no known fast method to obtain this goal. A very common approach is solving a convex relaxation of the problem, using “Sum Of Squares” (SOS) techniques. We offer two contributions in this paper: a faster (by a factor of roughly 10) solution with respect to the state-of-the-art, which relies on the polynomial’s homogeneity; and a fast, guaranteed, easily parallelizable approximation, which makes use of a famous result of Hilbert.
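A concrete way to see how homogeneity helps: for a homogeneous quartic f, minimizing f over the unit sphere S3 is equivalent to minimizing f(q)/‖q‖⁴ over all of R⁴ minus the origin, which removes the sphere constraint entirely and lets an off-the-shelf unconstrained solver run. A toy sketch, with a random coefficient tensor standing in for the actual PnP-derived polynomial:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4, 4, 4))  # toy stand-in for the PnP quartic

def f(q):
    # Homogeneous quartic: f(t * q) = t**4 * f(q).
    return np.einsum('ijkl,i,j,k,l->', C, q, q, q, q)

# By homogeneity, the minimizer of f(q) / ||q||**4 over R^4 \ {0},
# once normalized, is a minimizer of f on the unit sphere S^3.
res = minimize(lambda q: f(q) / np.dot(q, q) ** 2, rng.standard_normal(4))
q_star = res.x / np.linalg.norm(res.x)
```

A local solver only finds a local minimum, which is exactly why a guaranteed approximation, such as the one the authors derive from Hilbert's result, is of interest.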

17 pages, 12164 KiB  
Article
Robust Detection, Segmentation, and Metrology of High Bandwidth Memory 3D Scans Using an Improved Semi-Supervised Deep Learning Approach
by Jie Wang, Richard Chang, Ziyuan Zhao and Ramanpreet Singh Pahwa
Sensors 2023, 23(12), 5470; https://doi.org/10.3390/s23125470 - 09 Jun 2023
Cited by 3
Abstract
Recent advancements in 3D deep learning have led to significant progress in improving accuracy and reducing processing time, with applications spanning various domains such as medical imaging, robotics, and autonomous vehicle navigation for identifying and segmenting different structures. In this study, we employ the latest developments in 3D semi-supervised learning to create cutting-edge models for the 3D object detection and segmentation of buried structures in high-resolution X-ray semiconductor scans. We illustrate our approach to locating the region of interest of the structures, their individual components, and their void defects. We showcase how semi-supervised learning is utilized to capitalize on the vast amounts of available unlabeled data to enhance both detection and segmentation performance. Additionally, we explore the benefit of contrastive learning in the data pre-selection step for our detection model and a multi-scale Mean Teacher training paradigm in 3D semantic segmentation to achieve better performance compared with the state of the art. Our extensive experiments have shown that our method achieves competitive performance and is able to outperform the state of the art by up to 16% on object detection and 7.8% on semantic segmentation. Additionally, our automated metrology package shows a mean error of less than 2 μm for key features such as Bond Line Thickness and pad misalignment.
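As background, the heart of any Mean Teacher scheme is a teacher network whose weights are an exponential moving average (EMA) of the student's, with a consistency loss tying their predictions on unlabeled data. A minimal PyTorch sketch of the EMA step only; the paper's multi-scale variant and 3D architectures are not reproduced:

```python
import torch

@torch.no_grad()
def ema_update(student: torch.nn.Module, teacher: torch.nn.Module,
               alpha: float = 0.999):
    # Mean Teacher: teacher <- alpha * teacher + (1 - alpha) * student,
    # called after each optimizer step. The teacher is never trained by
    # backprop; it supplies targets for the consistency loss on
    # unlabeled scans.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)
    for t, s in zip(teacher.buffers(), student.buffers()):
        t.copy_(s)  # keep running stats (e.g., BatchNorm) in sync
```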

16 pages, 63200 KiB  
Article
High-Capacity Spatial Structured Light for Robust and Accurate Reconstruction
by Feifei Gu, Hubing Du, Sicheng Wang, Bohuai Su and Zhan Song
Sensors 2023, 23(10), 4685; https://doi.org/10.3390/s23104685 - 12 May 2023
Cited by 5
Abstract
Spatial structured light (SL) can achieve three-dimensional measurements with a single shot. As an important branch in the field of dynamic reconstruction, its accuracy, robustness, and density are of vital importance. Currently, there is a wide performance gap of spatial SL between dense reconstruction (but less accurate, e.g., speckle-based SL) and accurate reconstruction (but often sparser, e.g., shape-coded SL). The central problem lies in the coding strategy and the designed coding features. This paper aims to improve the density and quantity of reconstructed point clouds by spatial SL whilst also maintaining a high accuracy. Firstly, a new pseudo-2D pattern generation strategy was developed, which can improve the coding capacity of shape-coded SL greatly. Then, to extract the dense feature points robustly and accurately, an end-to-end corner detection method based on deep learning was developed. Finally, the pseudo-2D pattern was decoded with the aid of the epipolar constraint. Experimental results validated the effectiveness of the proposed system.

19 pages, 10797 KiB  
Article
3D Object Detection via 2D Segmentation-Based Computational Integral Imaging Applied to a Real Video
by Michael Kadosh and Yitzhak Yitzhaky
Sensors 2023, 23(9), 4191; https://doi.org/10.3390/s23094191 - 22 Apr 2023
Cited by 1
Abstract
This study aims to achieve accurate three-dimensional (3D) localization of multiple objects in a complicated scene using passive imaging. It is challenging, as it requires accurate localization of the objects in all three dimensions given recorded 2D images. An integral imaging system captures the scene from multiple angles and is able to computationally produce blur-based depth information about the objects in the scene. We propose a method to detect and segment objects in a 3D space using integral-imaging data obtained by a video camera array. Using objects’ two-dimensional regions detected via deep learning, we employ local computational integral imaging in detected objects’ depth tubes to estimate the depth positions of the objects along the viewing axis. This method analyzes object-based blurring characteristics in the 3D environment efficiently. Our camera array produces an array of multiple-view videos of the scene, called elemental videos. Thus, the proposed 3D object detection applied to the video frames allows for 3D tracking of the objects with knowledge of their depth positions along the video. Results show successful 3D object detection with depth localization in a real-life scene based on passive integral imaging. Such outcomes have not been obtained in previous studies using integral imaging; mainly, the proposed method outperforms them in its ability to detect the depth locations of objects that are in close proximity to each other, regardless of the object size. This study may contribute when robust 3D object localization is desired with passive imaging, but it requires a camera or lens array imaging apparatus.

22 pages, 13278 KiB  
Article
A Cylindrical Near-Field Acoustical Holography Method Based on Cylindrical Translation Window Expansion and an Autoencoder Stacked with 3D-CNN Layers
by Jiaxuan Wang, Weihan Zhang, Zhifu Zhang and Yizhe Huang
Sensors 2023, 23(8), 4146; https://doi.org/10.3390/s23084146 - 20 Apr 2023
Cited by 1
Abstract
The performance of near-field acoustic holography (NAH) with a sparse sampling rate will be affected by spatial aliasing or inverse ill-posed equations. Through a 3D convolution neural network (CNN) and stacked autoencoder framework (CSA), the data-driven CSA-NAH method can solve this problem by utilizing the information from data in each dimension. In this paper, the cylindrical translation window (CTW) is introduced to truncate and roll out the cylindrical image to compensate for the loss of circumferential features at the truncation edge. Combined with the CSA-NAH method, a cylindrical NAH method based on stacked 3D-CNN layers (CS3C) for sparse sampling is proposed, and its feasibility is verified numerically. In addition, the planar NAH method based on the Papoulis–Gerchberg extrapolation interpolation algorithm (PGa) is introduced into the cylindrical coordinate system, and compared with the proposed method. The results show that, under the same conditions, the reconstruction error rate of the CS3C-NAH method is reduced by nearly 50%, and the effect is significant.

21 pages, 4194 KiB  
Article
Digital Fringe Projection-Based Clamping Force Estimation Algorithm for Railway Fasteners
by Zhengji Fan, Yingping Hong, Yunfeng Wang, Yanan Niu, Huixin Zhang and Chengqun Chu
Sensors 2023, 23(6), 3299; https://doi.org/10.3390/s23063299 - 21 Mar 2023
Abstract
The inspection of railway fasteners to assess their clamping force can be used to evaluate the looseness of the fasteners and improve railway safety. Although there are various methods for inspecting railway fasteners, there is still a need for non-contact, fast inspection without installing additional devices on fasteners. In this study, a system that uses digital fringe projection technology to measure the 3D topography of the fastener was developed. This system inspects the looseness through a series of algorithms, including point cloud denoising, coarse registration based on fast point feature histograms (FPFH) features, fine registration based on the iterative closest point (ICP) algorithm, specific region selection, kernel density estimation, and ridge regression. Unlike the previous inspection technology, which can only measure the geometric parameters of fasteners to characterize the tightness, this system can directly estimate the tightening torque and the bolt clamping force. Experiments on WJ-8 fasteners showed a root mean square error of 9.272 N·m and 1.94 kN for the tightening torque and clamping force, demonstrating that the system is sufficiently precise to replace manual measurement and can substantially improve inspection efficiency while evaluating railway fastener looseness.
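The two registration stages named above (FPFH-based coarse alignment, then ICP refinement) follow a standard recipe that can be sketched with Open3D; the voxel size and thresholds below are illustrative, and the call signatures assume the Open3D >= 0.14 "pipelines" API:

```python
import open3d as o3d

def register_to_reference(scan, ref, voxel=1.0):
    # Coarse FPFH + RANSAC alignment followed by point-to-plane ICP.
    def prep(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    s_down, s_fpfh = prep(scan)
    r_down, r_fpfh = prep(ref)
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        s_down, r_down, s_fpfh, r_fpfh, True, 1.5 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        s_down, r_down, 0.5 * voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation
```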

19 pages, 2311 KiB  
Article
Three-Dimensional Digital Zooming of Integral Imaging under Photon-Starved Conditions
by Gilsu Yeo and Myungjin Cho
Sensors 2023, 23(5), 2645; https://doi.org/10.3390/s23052645 - 28 Feb 2023
Abstract
In this paper, we propose a new three-dimensional (3D) visualization method for objects at long distances under photon-starved conditions. In conventional 3D image visualization techniques, the visual quality of 3D images may be degraded because object images at long distances may have low resolution. Thus, in our proposed method, we utilize digital zooming, which can crop and interpolate the region of interest from the image to improve the visual quality of 3D images at long distances. Under photon-starved conditions, 3D images at long distances may not be visualized due to the lack of photons. Photon counting integral imaging can be used to solve this problem, but objects at long distances may still have a small number of photons. In our method, a 3D image can be reconstructed by combining photon counting integral imaging with digital zooming. In addition, to estimate a more accurate 3D image at long distances under photon-starved conditions, multiple-observation photon counting integral imaging (i.e., N-observation photon counting integral imaging) is used. To show the feasibility of our proposed method, we implement optical experiments and calculate performance metrics, such as the peak sidelobe ratio. Our method can thus improve the visualization of 3D objects at long distances under photon-starved conditions.
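The photon-counting model underlying this family of methods is compact enough to sketch directly: expected per-pixel counts follow a Poisson law proportional to the normalized scene irradiance, and averaging N independent observations (the N-observation scheme) shrinks the estimator's variance. Parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def photon_count_frame(irradiance, n_photons):
    # Expected photons per pixel are proportional to normalized irradiance.
    lam = n_photons * irradiance / irradiance.sum()
    return rng.poisson(lam)

def n_observation_estimate(irradiance, n_photons, n_obs):
    # The maximum-likelihood estimate under the Poisson model is the sample
    # mean over frames; its variance falls as 1 / n_obs, which is why
    # multiple observations help most when photons are scarce.
    frames = [photon_count_frame(irradiance, n_photons) for _ in range(n_obs)]
    return np.mean(frames, axis=0)
```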

15 pages, 5225 KiB  
Article
3D Scanner-Based Identification of Welding Defects—Clustering the Results of Point Cloud Alignment
by János Hegedűs-Kuti, József Szőlősi, Dániel Varga, János Abonyi, Mátyás Andó and Tamás Ruppert
Sensors 2023, 23(5), 2503; https://doi.org/10.3390/s23052503 - 23 Feb 2023
Cited by 1
Abstract
This paper describes a framework for detecting welding errors using 3D scanner data. The proposed approach employs density-based clustering to compare point clouds and identify deviations. The discovered clusters are then classified according to standard welding fault classes. Six welding deviations defined in the ISO 5817:2014 standard were evaluated. All defects were represented through CAD models, and the method was able to detect five of these deviations. The results demonstrate that the errors can be effectively identified and grouped according to the location of the different points in the error clusters. However, the method cannot separate crack-related defects as a distinct cluster.
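The comparison-and-clustering stage can be pictured with a short scikit-learn sketch: scan points whose distance to the aligned CAD reference exceeds a tolerance are grouped by density-based clustering into candidate defect regions. The tolerance, eps and min_samples values are placeholders, not the paper's settings:

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def defect_clusters(scan_pts, cad_pts, tol=0.3, eps=0.8, min_samples=20):
    # Deviating points: scan points farther than `tol` from the CAD model.
    dists, _ = cKDTree(cad_pts).query(scan_pts)
    deviating = scan_pts[dists > tol]
    # Density-based grouping; label -1 is noise, the rest are candidate
    # defect regions to classify against standard welding fault classes.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(deviating)
    return {k: deviating[labels == k] for k in np.unique(labels) if k != -1}
```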

16 pages, 22548 KiB  
Article
Lensless Three-Dimensional Imaging under Photon-Starved Conditions
by Jae-Young Jang and Myungjin Cho
Sensors 2023, 23(4), 2336; https://doi.org/10.3390/s23042336 - 20 Feb 2023
Cited by 2
Abstract
In this paper, we propose lensless three-dimensional (3D) imaging under photon-starved conditions using a diffraction grating and a computational photon counting method. In conventional 3D imaging with or without a lens, 3D visualization of objects under photon-starved conditions may be difficult due to the lack of photons. To solve this problem, our proposed method uses diffraction grating imaging as lensless 3D imaging and a computational photon counting method for 3D visualization of objects under these conditions. In addition, to improve the visual quality of 3D images under severely photon-starved conditions, a multiple-observation photon counting method with advanced statistical estimation, such as Bayesian estimation, is proposed. The multiple-observation photon counting method can estimate more accurate 3D images by remedying the random errors of photon occurrence, because it increases the number of photon samples. To prove the ability of our proposed method, we implement optical experiments and calculate the peak sidelobe ratio as the performance metric.

33 pages, 9737 KiB  
Article
Multi-Class Classification and Multi-Output Regression of Three-Dimensional Objects Using Artificial Intelligence Applied to Digital Holographic Information
by Uma Mahesh R N and Anith Nelleri
Sensors 2023, 23(3), 1095; https://doi.org/10.3390/s23031095 - 17 Jan 2023
Abstract
Digital holographically sensed 3D data processing, which is useful for AI-based vision, is demonstrated. Three prominent methods of learning from datasets such as sensed holograms, computationally retrieved intensity and phase from holograms forming concatenated intensity–phase (whole information) images, and phase-only images (depth information) were utilized for the proposed multi-class classification and multi-output regression tasks of the chosen 3D objects in supervised learning. Each dataset comprised 2268 images obtained from the chosen eighteen 3D objects. The efficacy of our approaches was validated on experimentally generated digital holographic data then further quantified and compared using specific evaluation matrices. The machine learning classifiers had better AUC values for different classes on the holograms and whole information datasets compared to the CNN, whereas the CNN had a better performance on the phase-only image dataset compared to these classifiers. The MLP regressor was found to have a stable prediction in the test and validation sets with a fixed EV regression score of 0.00 compared to the CNN, the other regressors for holograms, and the phase-only image datasets, whereas the RF regressor showed a better performance in the validation set for the whole information dataset with a fixed EV regression score of 0.01 compared to the CNN and other regressors.

2022


10 pages, 10341 KiB  
Communication
Multi-Scale Histogram-Based Probabilistic Deep Neural Network for Super-Resolution 3D LiDAR Imaging
by Miao Sun, Shenglong Zhuo and Patrick Yin Chiang
Sensors 2023, 23(1), 420; https://doi.org/10.3390/s23010420 - 30 Dec 2022
Cited by 1
Abstract
LiDAR (Light Detection and Ranging) imaging based on SPAD (Single-Photon Avalanche Diode) technology suffers from severe area penalty for large on-chip histogram peak detection circuits required by the high precision of measured depth values. In this work, a probabilistic estimation-based super-resolution neural network for SPAD imaging that firstly uses temporal multi-scale histograms as inputs is proposed. To reduce the area and cost of on-chip histogram computation, only part of the histogram hardware for calculating the reflected photons is implemented on a chip. On account of the distribution rule of returned photons, a probabilistic encoder as a part of the network is first proposed to solve the depth estimation problem of SPADs. By jointly using this neural network with a super-resolution network, 16× up-sampling depth estimation is realized using 32 × 32 multi-scale histogram outputs. Finally, the effectiveness of this neural network was verified in the laboratory with a 32 × 32 SPAD sensor system.
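The on-chip cost the authors avoid comes from the conventional per-pixel pipeline, which in software reduces to finding each histogram's peak TDC bin and converting its time of flight to depth; the bin width below is an example value, not the paper's:

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
BIN_WIDTH = 100e-12    # example TDC bin width: 100 ps

def depth_from_histograms(hist):
    # hist: (H, W, n_bins) per-pixel photon-count histograms.
    peak_bin = np.argmax(hist, axis=-1)
    tof = peak_bin * BIN_WIDTH   # round-trip time of flight
    return C * tof / 2.0         # halve it: the pulse travels out and back
```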

11 pages, 7961 KiB  
Communication
A Continuous Motion Shape-from-Focus Method for Geometry Measurement during 3D Printing
by Jona Gladines, Seppe Sels, Michael Hillen and Steve Vanlanduit
Sensors 2022, 22(24), 9805; https://doi.org/10.3390/s22249805 - 14 Dec 2022
Cited by 1
Abstract
In 3D printing, as in other manufacturing processes, there is a push for zero-defect manufacturing, mainly to avoid waste. To evaluate the quality of the printed parts during the printing process, an accurate 3D measurement method is required. By scanning the part during the buildup, potential nonconformities to tolerances can be detected early on and the printing process can be adjusted to avoid scrapping the part. Among many methods, shape-from-focus is an accurate way of recovering 3D shapes from objects. However, the state-of-the-art implementation of the method requires the object to be stationary during a measurement. This does not reconcile with the nature of 3D printing, where continuous motion is required for the manufacturing process. This research presents a novel methodology that allows shape-from-focus to be used in a continuous scanning motion, thus making it possible to apply it to the 3D manufacturing process. By controlling the camera trigger and a tunable lens with synchronous signals, a stack of images can be created while the camera or the object is in motion. These images can be re-aligned and then used to create a 3D depth image. The impact on the quality of the 3D measurement was tested by analytically comparing the quality of a scan using the traditional stationary method and of the proposed method to a known reference. The results demonstrate a 1.22% degradation in the measurement error.

13 pages, 4823 KiB  
Article
Three-Dimensional Integral Imaging with Enhanced Lateral and Longitudinal Resolutions Using Multiple Pickup Positions
by Jiheon Lee and Myungjin Cho
Sensors 2022, 22(23), 9199; https://doi.org/10.3390/s22239199 - 26 Nov 2022
Cited by 4
Abstract
In this paper, we propose an enhancement of three-dimensional (3D) image visualization techniques by using different pickup plane reconstructions. In conventional 3D visualization techniques, synthetic aperture integral imaging (SAII) and volumetric computational reconstruction (VCR) can be utilized. However, due to the lack of image information and shifting pixels, it may be difficult to obtain better lateral and longitudinal resolutions of 3D images. Thus, we propose a new elemental image acquisition and computational reconstruction to improve both the lateral and longitudinal resolutions of 3D objects. To prove the feasibility of our proposed method, we present the performance metrics, such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and peak-to-sidelobe ratio (PSR). Therefore, our method can improve both the lateral and longitudinal resolutions of 3D objects more than the conventional technique.

20 pages, 16022 KiB  
Article
3D Point Cloud Acquisition and Correction in Radioactive and Underwater Environments Using Industrial 3D Scanners
by Dongjun Hyun, Sungmoon Joo, Ikjune Kim and Jonghwan Lee
Sensors 2022, 22(23), 9053; https://doi.org/10.3390/s22239053 - 22 Nov 2022
Cited by 2
Abstract
This study proposes a method to acquire an accurate 3D point cloud in radioactive and underwater environments using industrial 3D scanners. Applications of robotic systems at nuclear facility dismantling require 3D imaging equipment for localization of target structures in radioactive and underwater environments. The use of industrial 3D scanners may be a better option than developing prototypes for researchers with basic knowledge. However, such industrial 3D scanners are designed to operate in normal environments and cannot be used in radioactive and underwater environments. Modifications to environmental obstacles also suffer from hidden technical details of industrial 3D scanners. This study shows how 3D imaging equipment based on the industrial 3D scanner satisfies the requirements of the remote dismantling system, using a robotic system despite insufficient environmental resistance and hidden technical details of industrial 3D scanners. A housing unit is designed for waterproofing and radiation protection using windows, mirrors and shielding. Shielding protects the industrial 3D scanner from radiation damage. Mirrors reflect the light required for 3D scanning because shielding blocks the light. Windows in the waterproof housing also transmit the light required for 3D scanning with the industrial 3D scanner. The basic shielding thickness calculation method through the experimental method is described, including the analysis of the experimental results. The method for refraction correction through refraction modeling, measurement experiments and parameter studies are described. The developed 3D imaging equipment successfully satisfies the requirements of the remote dismantling system: waterproof, radiation resistance of 1 kGy and positional accuracy within 1 mm. The proposed method is expected to provide researchers with an easy approach to 3D scanning in radioactive and underwater environments.
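The refraction modeling mentioned above rests on Snell's law at each window interface; in vector form it yields the bent ray direction directly, which is the basic building block of such a correction. This is a generic sketch, not the authors' full housing model:

```python
import numpy as np

def refract(d, n, n1, n2):
    # Snell's law in vector form: bend unit ray direction `d` at a surface
    # with unit normal `n` (pointing toward the incoming ray), passing from
    # refractive index n1 into n2. Returns None on total internal reflection.
    cos_i = -float(np.dot(n, d))
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return r * d + (r * cos_i - np.sqrt(k)) * n

# Example: a camera ray passing from air (n ~ 1.0) into water (n ~ 1.33)
ray_in_water = refract(np.array([0.0, 0.6, -0.8]),
                       np.array([0.0, 0.0, 1.0]), 1.0, 1.33)
```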

15 pages, 3609 KiB  
Article
Calibration of a Catadioptric System and 3D Reconstruction Based on Surface Structured Light
by Zhenghai Lu, Yaowen Lv, Zhiqing Ai, Ke Suo, Xuanrui Gong and Yuxuan Wang
Sensors 2022, 22(19), 7385; https://doi.org/10.3390/s22197385 - 28 Sep 2022
Cited by 1
Abstract
In response to the problem of the small field of view in 3D reconstruction, a 3D reconstruction system based on a catadioptric camera and a projector was built, introducing a traditional camera to calibrate the catadioptric camera and projector system. Firstly, the intrinsic parameters of the catadioptric camera and the traditional camera are calibrated separately. Then, the calibration of the projection system is accomplished with the traditional camera. Secondly, a common coordinate system is introduced, the positions of the catadioptric camera and the projector in this coordinate system are calculated, and the positional relationship between the coordinate systems of the catadioptric camera and the projector is obtained. Finally, the projector is used to project structured light fringes to realize the reconstruction using the catadioptric camera. The experimental results show that the reconstruction error is 0.75 mm and the relative error is 0.0068 for a target of about 1 m. The calibration method and reconstruction method proposed in this paper can guarantee the ideal geometric reconstruction accuracy.

40 pages, 11261 KiB  
Article
An Experimental Assessment of Depth Estimation in Transparent and Translucent Scenes for Intel RealSense D415, SR305 and L515
by Eva Curto and Helder Araujo
Sensors 2022, 22(19), 7378; https://doi.org/10.3390/s22197378 - 28 Sep 2022
Cited by 7
Abstract
RGB-D cameras have become common in many research fields since these inexpensive devices provide dense 3D information from the observed scene. Over the past few years, the RealSense™ range from Intel® has introduced new, cost-effective RGB-D sensors with different technologies, more sophisticated in both hardware and software. Models D415, SR305, and L515 are examples of successful cameras launched by Intel® RealSense™ between 2018 and 2020. These three cameras differ in their operating principles, so their behavior concerning depth estimation in the presence of various error sources is also specific. For instance, semi-transparent and scattering media are expected error sources for an RGB-D sensor. The main new contribution of this paper is a full evaluation and comparison of the three Intel RealSense cameras in scenarios with transparency and translucency. We propose an experimental setup involving an aquarium and liquids. The evaluation, based on repeatability/precision and the statistical distribution of the acquired depth, allows us to compare the three cameras and conclude that the Intel RealSense D415 has overall the best behavior, both in terms of statistical variability (also known as precision or repeatability) and in terms of valid measurements.

16 pages, 11462 KiB  
Article
Physics-Based Simulation of Soft-Body Deformation Using RGB-D Data
by Daeun Kang, Jaeseok Moon, Saeyoung Yang, Taesoo Kwon and Yejin Kim
Sensors 2022, 22(19), 7225; https://doi.org/10.3390/s22197225 - 23 Sep 2022
Cited by 1
Abstract
Providing real-time interaction in an immersive environment has drawn considerable attention in the virtual training fields. Physics-based simulations are suitable for such environments; however, they require the definition and adjustment of coefficients that determine material properties, making the methods more complex and time-consuming. In this paper, we introduce a novel approach to simulating the soft-body deformation of an observed object. Using an off-the-shelf RGB-D sensor, the proposed approach tracks an object’s movement and simulates its deformation in an iterative manner. Polygonal models with different resolutions are used to improve the simulation speed and visual quality. During the simulation process, a low-resolution model is used for surface deformation using neighboring feature points detected from the sensor, and a volumetric model is added for internal force estimation. To visualize the observed model in detail, the deformed and low-resolution model is mapped to a high-resolution model using mean value coordinate interpolation. To handle topological deformations, such as cutting or tearing, a part intersected by a cutting tool is recognized by the sensor and responds to external forces. As shown in the experimental results, our approach generates convincing deformations of observed objects in real time.

17 pages, 5108 KiB  
Article
Millimeter-Wave Imaging System Based on Direct-Conversion Focal-Plane Array Receiver
by Sergey Korolyov, Aleksandr Goryunov, Ivan Illarionov, Vladimir Parshin and Petr Zemlyanukha
Sensors 2022, 22(19), 7132; https://doi.org/10.3390/s22197132 - 20 Sep 2022
Cited by 3
Abstract
A new approach to millimeter-wave imaging was suggested and experimentally studied. This approach can be considered as the evolution of the well-established focal-plane array (FPA) millimeter-wave imaging. The significant difference is the use of a direct-conversion array receiver, instead of the direct-detection array receiver, along with the frequency-modulated continuous-wave (FMCW) radar technique. The sensitivity of the direct-conversion receiver is several orders of magnitude higher than the sensitivity of the direct-detection one, which allows us to increase the maximum imaging range by more than one order of magnitude. The additional advantage of the direct-conversion technique is the opportunity to obtain information about the range to an object. The realization of the direct-conversion FPA imaging system was made possible by original, sensitive, simply designed receiving elements based on low-barrier Mott diodes. The suggested imaging method's main characteristics, which include the achievable angular and range resolution and the achievable maximum imaging range, were studied. A maximum range of up to 100 m was experimentally determined. A 94 GHz 8 × 8 imaging system was developed for demonstration purposes and studied in detail. The suggested technique is assumed to be useful for creating a long-range millimeter-wave camera, in particular, for robotic systems that operate in poor environmental conditions.

17 pages, 6082 KiB  
Article
Research on Multi-View 3D Reconstruction Technology Based on SFM
by Lei Gao, Yingbao Zhao, Jingchang Han and Huixian Liu
Sensors 2022, 22(12), 4366; https://doi.org/10.3390/s22124366 - 09 Jun 2022
Cited by 12
Abstract
Multi-view 3D reconstruction technology is used to restore a 3D model of practical value or required objects from a group of images. This paper designs and implements a complete multi-view 3D reconstruction pipeline. It fuses SIFT and SURF feature-point extraction results to increase the number of feature points, adds proportional constraints to improve the robustness of feature-point matching, and uses RANSAC to eliminate false matches. In the sparse reconstruction stage, the traditional incremental SFM algorithm is accurate but slow, while the traditional global SFM algorithm is fast but less accurate and less robust. Addressing these disadvantages, this paper proposes a hybrid SFM algorithm that avoids both the long runtime of incremental SFM and the low precision and poor robustness of global SFM. Finally, a depth-map-fusion MVS algorithm is used to complete the dense reconstruction of objects, and related algorithms are used to complete the surface reconstruction, which makes the reconstructed model more realistic.
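The feature-matching stage described above follows a standard OpenCV recipe, sketched below with SIFT only (SURF requires an opencv-contrib "nonfree" build, so it is omitted), Lowe's ratio test playing the role of the proportional constraint, and RANSAC rejecting false matches. File names and the intrinsics K are placeholders:

```python
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder inputs
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0.0, 640.0],                   # placeholder intrinsics
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if clearly better than the runner-up.
knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# RANSAC rejects remaining outliers while fitting the essential matrix,
# from which the relative camera pose for SFM is recovered.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```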

13 pages, 726 KiB  
Article
Deep Neural Network for Point Sets Based on Local Feature Integration
by Hao Chu, Zhenquan He, Shangdong Liu, Chuanwen Liu, Jiyuan Yang and Fei Wang
Sensors 2022, 22(9), 3209; https://doi.org/10.3390/s22093209 - 22 Apr 2022
Cited by 3
Abstract
The research of object classification and part segmentation is a hot topic in computer vision, robotics, and virtual reality. With the emergence of depth cameras, point clouds have become easier to collect and increasingly important because of their simple and unified structures. Recently, a considerable number of studies have been carried out on deep learning for 3D point clouds. However, data captured directly by sensors from the real world often suffer from severe incomplete-sampling problems. Classical networks are able to learn deep point set features efficiently, but they are not robust enough when point clouds are sparse or incomplete. In this work, a novel and general network was proposed whose performance does not depend on a large amount of point cloud input data. The mutual learning of neighboring points and the fusion between high and low feature layers can better promote the integration of local features, making the network more robust. Experiments were conducted on the ScanNet and ModelNet40 datasets, with 84.5% and 92.8% accuracy, respectively, which proved that our model is comparable to or even better than most existing methods for classification and segmentation tasks and has good local feature integration ability. Particularly, it can still maintain 87.4% accuracy when the number of input points is reduced to 128. The proposed model has bridged the gap between classical networks and point cloud processing.
