Article

High-Precision 3D Reconstruction Study with Emphasis on Refractive Calibration of GelStereo-Type Sensors

1 State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
3 Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
4 School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China
5 Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(5), 2675; https://doi.org/10.3390/s23052675
Submission received: 31 January 2023 / Revised: 21 February 2023 / Accepted: 24 February 2023 / Published: 28 February 2023
(This article belongs to the Special Issue Advances in Tactile Sensing and Robotic Grasping)

Abstract:
GelStereo sensing technology is capable of performing three-dimensional (3D) contact shape measurement under various contact structures, such as bionic curved surfaces, which gives it promising advantages in the field of visuotactile sensing. However, due to multi-medium ray refraction in the imaging system, robust and high-precision tactile 3D reconstruction remains a challenging problem for GelStereo-type sensors with different structures. In this paper, we first propose a universal Refractive Stereo Ray Tracing (RSRT) model for GelStereo-type sensing systems to realize 3D reconstruction of the contact surface. Moreover, a relative geometry-based optimization method is presented to calibrate multiple parameters of the proposed RSRT model, such as the refractive indices and structural dimensions. Furthermore, extensive quantitative calibration experiments are performed on four different GelStereo sensing platforms; the experimental results show that the proposed calibration pipeline can achieve a Euclidean distance error of less than 0.35 mm, based on which we believe that the proposed refractive calibration method can be further applied to more complex GelStereo-type and other similar visuotactile sensing systems. Such high-precision visuotactile sensors can facilitate the study of robotic dexterous manipulation.

1. Introduction

Tactile perception is one of the main ways for humans to interact with real-world environments [1,2], and holds a natural appeal when vision is impaired. Naturally, endowing robots with human-like tactile perception capabilities has become an important part of moving robots toward open real-world environments [3,4,5]. Recently, visuotactile sensing technology has received increasing attention in the robotic tactile sensing community [6]. Compared to traditional tactile arrays [7] based on capacitive [8], resistive [9], and piezoelectric [10] sensing principles, visuotactile sensing technology has significant advantages in spatial resolution, cost, and stability, especially the ability to directly obtain high-precision tactile deformation [11]. To date, visuotactile sensors have realized the measurement of various tactile patterns such as sliding contact [12,13], six-axis force/torque [14], distributed force fields [15], 3D contact deformation [16,17], etc. Furthermore, the rich tactile sensing information thus acquired significantly improves a robot's dexterous manipulation ability and enables it to complete many challenging tasks, such as USB interface assembly [18], cable manipulation [19], swing-up manipulation [20], in-hand manipulation [21], cloth manipulation [22], and more. In summary, the high-precision measurement of contact deformation by visuotactile sensors makes these complicated robotic dexterous manipulation tasks possible to a certain extent. In this sense, visuotactile sensors can be called deformation sensors. However, realizing high-precision 3D deformation measurement on arbitrary contact structures remains a very challenging problem [23,24,25].
Currently, various types of tactile sensors have been developed, with different structures used to adapt to different sensing requirements and integration scenarios. A detailed survey of visuotactile sensing technologies can be found in [6]. Nevertheless, certain sensors, such as FingerVision [26,27] and TacTip [28,29], cannot achieve high-precision 3D contact deformation measurement. To the best of our knowledge, there are four technical routes for visuotactile sensing technology that can achieve high-precision 3D contact deformation sensing. First, the GelSight-type sensors proposed by Yuan et al. [11,30] realize 3D reconstruction of the contact surface using a photometric stereo algorithm. At present, GelSight-type sensors are mostly integrated into the fingertips of parallel grippers [31]. Romero et al. extended the GelSight pipeline to a curved contact structure, although the accuracy of 3D reconstruction is reduced by approximating the surface with a plane [32]. Second, Do et al. proposed a learning-based 3D reconstruction pipeline using monocular depth estimation, which is adopted by the DenseTact sensors [24,33]. However, the generalization ability of this learning-based route to various contact scenarios in practical real-world situations requires further consideration. Third, Soft-bubble sensors [34,35] directly employ tailored depth sensors to measure 3D contact deformation, which are difficult to integrate into a setting as small as a bionic fingertip.
Recently, our previous studies proposed the GelStereo route based on a binocular stereo vision system [14,17,21]. The biggest advantage of this method is that it places no requirements on the contact surface structure, and it can achieve high-precision 3D contact deformation measurement on multi-curvature bionic contact surfaces. Due to the multi-medium refraction of light in GelStereo-type sensors, binocular stereo 3D reconstruction methods need to be remodeled; however, few studies have explored this issue. In [21], we used a regression method to directly realize high-resolution 2D-to-3D mapping (3D reconstruction), which requires a professional calibration platform to obtain enough ground-truth 2D-to-3D samples. Ma et al. presented a binocular visuotactile sensor with ray tracing modeling [36]. However, both the contact and refracting surfaces in their scene were flat, and the absolute 3D reconstruction errors were not fully evaluated.
In this paper, we carry out an in-depth study of a universal 3D reconstruction pipeline for GelStereo-type sensors, with an emphasis on refractive calibration. To begin with, a Universal Refractive Stereo Ray Tracing model, which we call GU-RSRT, is presented for GelStereo-type sensors with various structures. The GU-RSRT performs ray tracing modeling of the GelStereo imaging systems, whose parameters include the intrinsic and extrinsic parameters of the binocular camera, the refractive indices, and the structural geometry. To obtain the parameters of the GU-RSRT model, we propose a Universal Multi-Medium Refractive (UMMR) calibration method using the embedded relative geometric features of checkerboards. Furthermore, a Marker-Based Self-Calibration (MBSC) method is proposed for specific GelStereo-type sensors with known structured markers embedded on the sensor surface; it omits the checkerboard calibration step and updates the sensor parameters during daily use, significantly improving the service life of the sensor. Extensive calibration and evaluation experiments are performed on four GelStereo-type sensors with different refracting and contact surfaces. The experimental results show that the proposed refractive calibration method can obtain reasonable parameters for the GU-RSRT model, and the constructed 3D reconstruction system achieves 3D point measurement with less than 0.35 mm Euclidean distance error on different sensor platforms. Furthermore, the proposed 3D reconstruction pipeline with refractive calibration can be practically applied to high-precision 3D deformation measurement in various GelStereo-type sensors and similar visuotactile sensing systems, including those with binocular cameras undergoing multi-medium light refraction, as shown in Figure 1.
The contributions of this paper are summarized as follows:
  • A universal refractive stereo ray tracing model that can handle sensors with arbitrary refracting and contact surfaces is presented for GelStereo-type sensors.
  • A universal multi-medium refractive calibration method using the embedded geometric features of checkerboards is proposed to obtain refractive parameters in GelStereo imaging systems. The results show that the proposed calibration method can realize high precision (less than 0.35 mm Euclidean distance error) in 3D contact geometry measurements on different sensor platforms.
  • A marker-based self-calibration method that can automatically perform refractive calibration every time the sensor starts up is proposed for specific GelStereo-type sensors with known structured markers embedded on the sensor surface, allowing for prolonged sensor life.
The rest of this paper is organized as follows. We first provide a tactile 3D reconstruction pipeline for GelStereo-type sensors using a GU-RSRT model (Section 2). The refractive calibration methods are proposed to obtain the parameters of GU-RSRT model in Section 3. Then, the experimental design and results are presented in Section 4 and Section 5. Finally, we discuss the limitations and future works in Section 6, and conclude the paper in Section 7.

2. 3D Reconstruction of GelStereo-Type Sensors

GelStereo-type sensors utilize a binocular vision system for 3D geometry sensing. With sparse or dense stereo matching point pairs on the left and right tactile images [17,21], a 2D-to-3D model is needed to reconstruct the 3D tactile points with high precision. 2D-to-3D modeling using the ray tracing method has shown good performance in previous work [37]. In this paper, we present a Universal Refractive Stereo Ray Tracing model that can be instantiated for any GelStereo-type sensor. The GU-RSRT geometrically models the propagation paths of light rays through multi-medium refraction in GelStereo-type sensor imaging systems, assuming that each medium is homogeneous (i.e., the rays propagate in a straight line within each medium). Figure 2 diagrams the GU-RSRT model in detail; key symbols are listed in Table 1. Without loss of generality, we introduce the GU-RSRT model with $m$ refractions and use the general equation $f_*(x, y, z) = 0$ to describe the refracting surfaces.

2.1. Light Ray Path

Taking the left ray as an example, we backpropagate the rays from the camera optical center to the 3D points on the sensor surface in the left camera coordinate system.
Given a point on the left tactile image (denoted $(u_l, v_l)$), the direction of the ray from the left camera optical center is computed according to the pin-hole camera model:
$$ \mathbf{r}_0^l = \frac{K^{-1}[u_l, v_l, 1]^T}{\left\| K^{-1}[u_l, v_l, 1]^T \right\|_2} = \left(r_{0x}^l, r_{0y}^l, r_{0z}^l\right) \qquad (1) $$
where $K \in \mathbb{R}^{3 \times 3}$ indicates the intrinsic parameter matrix of the camera and $\mathbf{r}_0^l$ denotes the direction vector of the left ray in medium 0 (i.e., the air). The starting point of the ray is the camera optical center, denoted $O^l = (x_0^l, y_0^l, z_0^l)$. The equation of the left ray in medium 0 (short for left ray 0) is expressed as follows:
$$ \frac{x - x_0^l}{r_{0x}^l} = \frac{y - y_0^l}{r_{0y}^l} = \frac{z - z_0^l}{r_{0z}^l}. \qquad (2) $$
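As a concrete illustration of Equation (1), the back-projection of a pixel to a unit ray direction takes only a few lines of NumPy. This is a minimal sketch; the function name pixel_to_ray is ours, not part of any released code:

```python
import numpy as np

def pixel_to_ray(K, u, v):
    """Back-project pixel (u, v) to the unit ray direction r_0 of
    Equation (1), expressed in the camera coordinate system."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return d / np.linalg.norm(d)   # (r_0x, r_0y, r_0z)
```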
For rays in each medium $i$, we first solve the starting point ($\mathbf{P}_i^l$), then compute the normal vector ($\mathbf{n}_i^l$) of the refracting surface at this point, and finally infer the direction vector ($\mathbf{r}_i^l$) of the ray according to Snell's law:
The equation of the refracting surface between medium $i-1$ and $i$ can be expressed as:
$$ f_i(x, y, z) = 0. \qquad (3) $$
The solution to the simultaneous equations, including the refracting surface equation $f_i$ (Equation (3)) and the left ray $i-1$ equation, is the starting point of left ray $i$, denoted $\mathbf{P}_i^l = (x_i^l, y_i^l, z_i^l)$. For example, $\mathbf{P}_1^l = (x_1^l, y_1^l, z_1^l)$ is solved from Equations (2) and (3) with $i = 1$. Next, the normal vector of the refracting surface at point $\mathbf{P}_i^l$ is computed by
$$ \mathbf{n}_i^l = \frac{\nabla f_i(x, y, z)\big|_{\mathbf{P}_i^l}}{\left\| \nabla f_i(x, y, z)\big|_{\mathbf{P}_i^l} \right\|_2}. \qquad (4) $$
From [38], the direction vector of the left ray in medium i can be formed as follows:
$$ \mathbf{r}_i^l = \alpha\, \mathbf{r}_{i-1}^l + \beta\, \mathbf{n}_i^l = \left(r_{ix}^l, r_{iy}^l, r_{iz}^l\right) \qquad (5) $$
where $\alpha = \mu_{i-1} / \mu_i$ and
$$ \beta = \sqrt{1 - \left(\frac{\mu_{i-1}}{\mu_i}\right)^2 \left[1 - \left(\mathbf{r}_{i-1}^l \cdot \mathbf{n}_i^l\right)^2\right]} - \frac{\mu_{i-1}}{\mu_i}\, \mathbf{r}_{i-1}^l \cdot \mathbf{n}_i^l. \qquad (6) $$
Then, the equation of the left ray in medium $i$ can be expressed as
$$ \frac{x - x_i^l}{r_{ix}^l} = \frac{y - y_i^l}{r_{iy}^l} = \frac{z - z_i^l}{r_{iz}^l}. \qquad (7) $$
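The refraction step of Equations (5) and (6) might be sketched as below. This assumes the surface normal is oriented along the direction of propagation ($\mathbf{r}_{i-1}^l \cdot \mathbf{n}_i^l > 0$), consistent with the sign convention above, and the function name is ours:

```python
import numpy as np

def refract(r_prev, n, mu_prev, mu_next):
    """Refract the unit direction r_prev at a surface with unit normal n
    when passing from index mu_prev to mu_next (Equations (5) and (6)).
    Assumes n points along the direction of propagation (r_prev . n > 0);
    returns None on total internal reflection."""
    a = mu_prev / mu_next                   # alpha in Equation (5)
    c = float(np.dot(r_prev, n))
    disc = 1.0 - a * a * (1.0 - c * c)
    if disc < 0.0:
        return None                         # no transmitted ray
    beta = np.sqrt(disc) - a * c            # beta in Equation (6)
    return a * r_prev + beta * n            # unit vector by construction
```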
Using Equations (1)–(7), the paths of rays from the left camera’s optical center to medium m are computed in the left camera coordinate system. In the same way, we obtain the trajectories of rays passing through the right camera optical center in the right camera coordinate system.

2.2. Intersection Points

The left and right rays intersect in medium m on the sensor surface. This intersection point is considered as the reconstructed 3D point. In order to compute this point, we transform the right ray m from the right to the left camera coordinate system:
$$ {}^{L}\mathbf{P}_m^r = {}^{L}R_R\, {}^{R}\mathbf{P}_m^r + {}^{L}T_R = \left(x_m^r, y_m^r, z_m^r\right) \qquad (8) $$
$$ {}^{L}\mathbf{r}_m^r = {}^{L}R_R\, {}^{R}\mathbf{r}_m^r = \left(r_{mx}^r, r_{my}^r, r_{mz}^r\right) \qquad (9) $$
where ${}^{L}R_R \in \mathbb{R}^{3 \times 3}$ and ${}^{L}T_R \in \mathbb{R}^{3 \times 1}$ denote the rotation matrix and translation vector between the right and the left camera coordinate systems, respectively. Then, an equation set to solve the intersection point is built as follows:
$$ {}^{L}\mathbf{P}_m^l + t_l\, {}^{L}\mathbf{r}_m^l = {}^{L}\mathbf{P}_m^r + t_r\, {}^{L}\mathbf{r}_m^r \;\Longleftrightarrow\; \begin{cases} x_m^l + t_l r_{mx}^l = x_m^r + t_r r_{mx}^r \\ y_m^l + t_l r_{my}^l = y_m^r + t_r r_{my}^r \\ z_m^l + t_l r_{mz}^l = z_m^r + t_r r_{mz}^r. \end{cases} \qquad (10) $$
This equation set is overdetermined; thus, the least squares method is employed. The reconstructed 3D point $\mathbf{P}$ can be computed by
$$ \mathbf{P} = \frac{\left({}^{L}\mathbf{P}_m^l + \hat{t}_l\, {}^{L}\mathbf{r}_m^l\right) + \left({}^{L}\mathbf{P}_m^r + \hat{t}_r\, {}^{L}\mathbf{r}_m^r\right)}{2} \qquad (11) $$
where $(\hat{t}_l, \hat{t}_r)$ is the least squares solution of Equation (10).
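A compact illustration of Equations (10) and (11) follows: the overdetermined system is solved by least squares and the midpoint of the two closest ray points is returned. This helper is a sketch under the stated conventions, not the authors' implementation:

```python
import numpy as np

def intersect_rays(P_l, r_l, P_r, r_r):
    """Least-squares intersection of the two refracted rays in the left
    camera frame (Equations (10) and (11)): solve the overdetermined
    3x2 system t_l*r_l - t_r*r_r = P_r - P_l, then take the midpoint."""
    A = np.column_stack((r_l, -r_r))
    b = P_r - P_l
    (t_l, t_r), *_ = np.linalg.lstsq(A, b, rcond=None)
    return 0.5 * ((P_l + t_l * r_l) + (P_r + t_r * r_r))
```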

3. Refractive Calibration

3.1. Problem Formulation

To achieve high-precision 3D geometry sensing, a refractive calibration method is desirable in order to obtain a fine set of parameters for the GU-RSRT model. The model parameters are divided into three parts: camera parameters, refractive indices, and structural parameters. Zhang's method [39] is employed to obtain the intrinsic and extrinsic binocular camera parameters using checkerboard images taken in the air. Calibrating the refractive indices and structural parameters is the primary task of this section.
In GelStereo-type sensor imaging systems, the shapes of the refracting surfaces are known in advance, as they are determined by the sensors' structural design. However, the pose of the refracting surface in the camera coordinate system is uncertain due to deviations during sensor assembly. This pose is determined by several translation and orientation parameters (denoted $\phi_s$), which vary across sensor platforms and mainly depend on the characteristics of the refracting surface. In addition, the refractive indices (denoted $\phi_r = \{\mu_0, \mu_1, \ldots, \mu_m\}$) are determined by the material properties, which are affected by the components and the molding environment. In summary, the parameter sets for the structure $\phi_s$ and the refractive indices $\phi_r$ are unknown and require calibration.
Considering the reconstructed 3D points P using the GU-RSRT model, the refractive calibration is defined as an optimization problem:
$$ \phi_s^*, \phi_r^* = \arg\min_{\phi_s, \phi_r} F(\mathbf{P}) \qquad (12) $$
where $F$ is the objective function. Then, how to build this objective function becomes the main problem in calibration.

3.2. Universal Multi-Medium Refractive Calibration Method

In GelStereo-type sensor imaging systems, we observe that 2D-to-3D models that do not take refraction into consideration or that use poor refraction parameters can lead to distortion of the reconstructed 3D point clouds. Our main idea for calibration is therefore to optimize the parameters of the refraction system such that undistorted 3D point clouds are reconstructed. Objects or patterns with known structures can provide the ground truth of relative geometric features. Among these, checkerboards with known corner structures are easy to make and detect. Here, we propose a Universal Multi-Medium Refractive (UMMR) calibration method for GelStereo-type sensors. In practice, we fully pressed checkerboards onto the surface of the sensor's transparent elastomer. The 3D point of each corner was then reconstructed by the GU-RSRT model. The objective functions were designed to ensure the spatial invariance of the checkerboard corners.
As shown in Figure 3, the Euclidean distance and perpendicularity can be used to constrain the geometric relationship of the checkerboard corners. The number of checkerboard corners is $M \times N$, the length of each cell is denoted by $l$, and $C_{i,j}$ indicates the corner point at the $i$-th row and the $j$-th column. The ground truth Euclidean distance between $C_{i,j}$ and $C_{k,g}$ on the checkerboard is computed by
$$ \left\| C_{i,j} - C_{k,g} \right\|_2 = l\sqrt{(k - i)^2 + (g - j)^2}. \qquad (13) $$
The objective function for the Euclidean distance can be designed as follows:
$$ F_1 = \frac{1}{(NM)^2} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \sum_{k=0}^{N-1} \sum_{g=0}^{M-1} \Big|\, \left\| \mathbf{P}_{i,j} - \mathbf{P}_{k,g} \right\|_2 - \left\| C_{i,j} - C_{k,g} \right\|_2 \Big| \qquad (14) $$
where $\mathbf{P}_{i,j}$ indicates the reconstructed 3D point corresponding to $C_{i,j}$ using the GU-RSRT model. This function describes the 3D reconstruction error of the Euclidean distance between the checkerboard corners. The red line in Figure 3a illustrates the Euclidean distance between $C_{i,j}$ and $C_{k,g}$.
The horizontal edges of the checkerboard are perpendicular to the vertical edges. Then, the objective function for perpendicularity can be designed as
$$ F_2 = \frac{1}{NM} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \left| \frac{\overrightarrow{\mathbf{P}_{i,0}\mathbf{P}_{i,M-1}} \cdot \overrightarrow{\mathbf{P}_{0,j}\mathbf{P}_{N-1,j}}}{\big\| \overrightarrow{\mathbf{P}_{i,0}\mathbf{P}_{i,M-1}} \big\| \cdot \big\| \overrightarrow{\mathbf{P}_{0,j}\mathbf{P}_{N-1,j}} \big\|} \right|. \qquad (15) $$
This objective function expresses that the red vector is perpendicular to the blue vector in Figure 3b.
The final objective function is a linear combination of $F_1$ and $F_2$:
$$ F = \omega_1 \bar{F}_1 + \omega_2 \bar{F}_2 \qquad (16) $$
where $\bar{F}_1$ is the normalized $F_1$ and $\bar{F}_2$ is the normalized $F_2$. The weights $\omega_1$ and $\omega_2$ directly reflect the relative importance of the Euclidean distance and perpendicularity relationships. A differential evolution algorithm is employed to solve this optimization problem (Equation (12)).
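As an illustration of Equations (13)–(16), the combined objective might be assembled as below. Here `reconstruct` is a hypothetical wrapper around the GU-RSRT pipeline of Section 2 (it maps detected 2D corner pairs and a candidate parameter vector to 3D points), and the min-max normalization of $F_1$ and $F_2$ (see Section 4.3.1) is omitted for brevity:

```python
import numpy as np

def ummr_objective(params, corners_2d, grid_shape, cell_len, w1, w2, reconstruct):
    """Combined UMMR objective of Equation (16) for one checkerboard press."""
    N, M = grid_shape
    P = reconstruct(corners_2d, params).reshape(N, M, 3)

    # F1 (Eq. (14)): Euclidean-distance error over all corner pairs.
    ij = np.stack(np.meshgrid(np.arange(N), np.arange(M), indexing="ij"), -1)
    C = cell_len * ij.astype(float)              # ground-truth planar corners
    d_rec = np.linalg.norm(P.reshape(-1, 1, 3) - P.reshape(1, -1, 3), axis=-1)
    d_gt = np.linalg.norm(C.reshape(-1, 1, 2) - C.reshape(1, -1, 2), axis=-1)
    F1 = np.abs(d_rec - d_gt).mean()

    # F2 (Eq. (15)): |cos| between each row vector and each column vector.
    rows = P[:, M - 1] - P[:, 0]                 # N vectors along the rows
    cols = P[N - 1] - P[0]                       # M vectors along the columns
    cosang = (rows @ cols.T) / (
        np.linalg.norm(rows, axis=1)[:, None] * np.linalg.norm(cols, axis=1))
    F2 = np.abs(cosang).mean()

    # Min-max normalization over the parameter ranges is omitted here.
    return w1 * F1 + w2 * F2
```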

3.3. Marker-Based Self-Calibration

Instead of the checkerboard mentioned in the UMMR method, the structured markers embedded on the sensor surface can provide the ground truth of relative geometric features. A Marker-Based Self-Calibration (MBSC) method is proposed for specific GelStereo-type sensors. These sensors should have curved refracting surfaces, curved sensor surfaces, and markers with known structures.
Unlike the checkerboard with a planar structure, markers are distributed in 3D space. The Euclidean distance between markers is mainly used in self-calibration. In order to improve the computational efficiency without loss of geometric constraints, the voxel downsampling method is employed to downsample all markers into a few key markers. The objective function is designed as follows:
$$ F = \frac{1}{S^2} \sum_{i=1}^{S} \sum_{j=1}^{S} \Big|\, \left\| \mathbf{P}_i - \mathbf{P}_j \right\|_2 - \left\| \mathbf{Q}_i - \mathbf{Q}_j \right\|_2 \Big| \qquad (17) $$
where $S$ is the number of markers after downsampling and $\mathbf{Q}_i$ refers to the reference 3D point corresponding to $\mathbf{P}_i$. Compared to the UMMR method, the marker-based self-calibration method is simple and convenient. Moreover, it can be performed every time the sensor starts up, and is able to deal with the loss of 3D precision caused by slight changes in the refractive indices over time.
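Equation (17) reduces to a few lines once the markers are reconstructed. The sketch below assumes the same hypothetical `reconstruct` wrapper as in the UMMR sketch, with the reference marker array Q already voxel-downsampled in a prior step:

```python
import numpy as np

def mbsc_objective(params, markers_2d, Q, reconstruct):
    """MBSC objective of Equation (17). Q is the (S, 3) array of reference
    marker positions from the known sensor-surface structure."""
    P = reconstruct(markers_2d, params)          # (S, 3) reconstructed markers
    d_rec = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    d_ref = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)
    return np.abs(d_rec - d_ref).mean()          # equals the 1/S^2 double sum
```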

4. Experiments: Design and Setup

4.1. Experiments Design

In order to verify the effectiveness of the proposed tactile 3D reconstruction pipeline, including the GU-RSRT model and the refractive calibration methods, we carried out the following experiments on several sensor platforms.
Quantitative experiments on 3D reconstruction. First, we quantitatively evaluated the accuracy of the proposed tactile 3D reconstruction pipeline using ground truth 3D points obtained from high-precision measuring instruments. Specifically, the 3D reconstruction errors on four different GelStereo-type sensors with various refracting surfaces and sensor surfaces were evaluated using the Mean Absolute Error (MAE) in the X, Y, and Z directions and the Mean Euclidean Distance Error (MEDE) between the reconstructed 3D points and the ground truth. Moreover, on these sensor platforms we analyzed the reconstruction errors of 3D points at different contact depths and regions.
Method comparison experiments. In addition to the methods proposed in this paper, two other commonly used 3D reconstruction methods were used to ensure a comprehensive evaluation.
  • Traditional Triangulation Method (TTM): Without considering multi-medium refraction, the traditional triangulation method was applied using binocular camera parameters calibrated in the air.
  • Camera Parameters Absorption Method (CPAM): The 3D reconstruction errors caused by multi-medium refraction can be absorbed by the camera parameters to a certain extent [40,41]. In practice, we calibrated the binocular camera using checkerboard images taken on the sensor’s transparent gel surface, then used triangulation.
  • GU-RSRT+UMMR: The GU-RSRT model with parameters calibrated through the universal multi-medium refractive calibration method was applied to GelStereo-type sensors for tactile 3D reconstruction.
  • GU-RSRT+MBSC: The GU-RSRT model with parameters calibrated through the marker-based self-calibration method was applied to GelStereo-type sensors for tactile 3D reconstruction.
Ablation studies. In-depth ablation studies on the UMMR calibration method were carried out to study the importance of each relative geometric feature.
  • GU-RSRT+UMMR ($F_1$): Only the objective function for the Euclidean distance is used in UMMR calibration.
  • GU-RSRT+UMMR ($F_2$): Only the objective function for perpendicularity is used in UMMR calibration.

4.2. Platform

4.2.1. Sensor Platform

As shown in Figure 4, four GelStereo-type sensors (including GelStereo Tip, GelStereo Palm2.0, GelStereo Palm1.0, and GelStereo BioTip) with different sizes, sensor surfaces, and refracting surfaces are employed to carry out the experiments. The first row in Figure 4 diagrams the imaging system of each sensor in detail. The parameter sets which require calibration are listed in Table 2.
In these four sensors, the rays propagate in the air before entering the camera. The refractive index of air μ 0 is 1, which is exempt from calibration. In the imaging system of GelStereo Tip, GelStereo Palm2.0, and GelStereo BioTip, the rays undergo two refractions. In the imaging system of GelStereo Palm1.0, the rays undergo three refractions, where mediums 1 and 3 are the same.
The refracting surface coordinate system and the left camera coordinate system are shown in Figure 4, denoted by $\{S\}$ and $\{C_L\}$, respectively. The equation of the refracting surface $f_i$ in $\{S\}$ is known. The transformation between $\{S\}$ and $\{C_L\}$ is unknown, and contains the structural parameters to be calibrated. In the GelStereo Tip and GelStereo Palm2.0, $d_{xy}$ indicates the distance between the left camera optical center and the $XOY$-plane of $\{S\}$. The direction of the $z$-axis of $\{S\}$ expressed in $\{C_L\}$ is $(z_x, z_y, \sqrt{1 - z_x^2 - z_y^2})$, known as the normal vector of the flat refracting surface. The refracting surfaces of the GelStereo Palm1.0 are hemispherical. As a result, the orientation parameters have no effect on the ray paths, and only the translation is of concern; here, $(d_x, d_y, d_z)$ expresses the origin of $\{S\}$ in $\{C_L\}$, i.e., the translation vector between $\{S\}$ and $\{C_L\}$. Different from the GelStereo Palm1.0, the GelStereo BioTip sensor has multi-curvature refracting surfaces. In addition to the translation vector, the direction vector of the $x$-axis of $\{S\}$ in $\{C_L\}$ is denoted by $(\sqrt{1 - x_y^2 - x_z^2}, x_y, x_z)$.

4.2.2. Ground Truth Collection Platform

A platform for collecting binocular tactile image pairs and corresponding ground truth 3D points was needed for evaluation. As shown in Figure 5a, a 3D Computer Numerical Control (CNC) linear guide was employed to generate high-precision 3D positions. A thin probe with a black dot on its tip was mounted on the tool side of the 3D CNC linear guide. We used this probe tip to specify 3D positions in the workspace of the linear guide. The GelStereo-type sensor, without markers and coating layer, was fixed on the workbench. Then, the probe was driven by the linear guide to press on the transparent gel surface of the sensor. At each sampled position, the 3D position of the probe tip was read from the linear guide, with the black dot on the probe tip projected onto the left and right image planes. As shown in Figure 5b,c, the small black dot on the probe can be clearly seen in the tactile images; to detect it, we used the blob detection algorithm in OpenCV (https://docs.opencv.org/4.x/d0/d7a/classcv_1_1SimpleBlobDetector.html, accessed on 23 February 2023). Given the pixel position of the black dot on the images, the estimated 3D position was computed using the proposed pipeline. The 3D readings from the linear guide were converted to the left camera coordinate system using a transformation matrix, which was obtained by ArUco-based pose estimation [42] and the Iterative Closest Point (ICP) algorithm [43]. The converted 3D points serve as the ground truth for tactile 3D reconstruction.
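For reference, dark-blob localization of the probe tip with OpenCV's SimpleBlobDetector might look like the following; the filter thresholds and file name are illustrative values, not the authors' settings:

```python
import cv2

# Configure a detector for small dark blobs.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 0            # detect dark (black) blobs
params.filterByArea = True
params.minArea = 5              # assumed: the dot covers only a few pixels
params.maxArea = 200

detector = cv2.SimpleBlobDetector_create(params)
img = cv2.imread("left_tactile.png", cv2.IMREAD_GRAYSCALE)
keypoints = detector.detect(img)
if keypoints:
    u, v = keypoints[0].pt      # sub-pixel center of the black dot
```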

4.3. Implementation Details

4.3.1. UMMR Calibration

UMMR calibration was conducted during sensor fabrication, specifically before painting the markers and the coating layer. First, checkerboard images on the sensor surface were captured for refractive calibration using the UMMR method. Specifically, we pasted a checkerboard pattern on the flat surface of a 3D-printed calibration board. We manually pressed this calibration board onto the transparent gel surface, as shown in Figure 6a. The binocular tactile images in Figure 6b were recorded at the same time. Note that the checkerboard must be in full contact with the gel surface to ensure that all rays from the checkerboard directly enter the gel layer. The checkerboard was pressed at different positions with various poses in order to cover the sensor surface as much as possible. In addition, we chose different checkerboards for each sensor. The main principle was to select as large a checkerboard pattern as possible while ensuring complete contact with the gel surface. In practice, checkerboards with 5 × 4 grids (1 mm edge length), 10 × 8 grids (2 mm edge length), 10 × 6 grids (1 mm edge length), and 10 × 6 grids (1 mm edge length) were used for the GelStereo Tip, GelStereo Palm2.0, GelStereo Palm1.0, and GelStereo BioTip, respectively. After collecting the checkerboard images, we eliminated the distortion of these images and then detected the checkerboard corners on the image plane.
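A possible OpenCV realization of this undistort-and-detect step is sketched below. K and dist stand for the intrinsics and distortion coefficients from Zhang's calibration (assumed to be available), and the (9, 7) pattern size assumes a board with 10 × 8 squares; if the grid counts above already refer to inner corners, the size should be adjusted accordingly:

```python
import cv2

# K, dist: left-camera intrinsics and distortion coefficients obtained
# beforehand with Zhang's method (e.g., via cv2.calibrateCamera).
img = cv2.imread("tactile_checkerboard.png", cv2.IMREAD_GRAYSCALE)
und = cv2.undistort(img, K, dist)       # eliminate lens distortion first

# Pattern size counts inner corners, e.g. (9, 7) for a 10 x 8-square board.
found, corners = cv2.findChessboardCorners(und, (9, 7))
if found:
    # Refine to sub-pixel accuracy before feeding the corners to GU-RSRT.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(und, corners, (5, 5), (-1, -1), criteria)
```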
Then, the optimization problem was formulated based on Equation (16) for refractive calibration. In this paper, min-max normalization was employed for the objective function values of $F_1$ and $F_2$ over the pre-set parameter ranges. The weights $(\omega_1, \omega_2)$ in Equation (16) were set to (0.6, 0.4) for the GelStereo Tip, (0.8, 0.2) for the GelStereo Palm2.0, (0.5, 0.5) for the GelStereo Palm1.0, and (0.2, 0.8) for the GelStereo BioTip.
Finally, this optimization problem was solved using a differential evolution algorithm. The bounds of parameters are listed in Table 3, and were set according to prior knowledge, such as the material properties and sensor structure.
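Continuing the UMMR sketch from Section 3.2, the optimization for the GelStereo Tip might be launched as follows, with the Table 3 bounds for $(\mu_1, \mu_2, d_{xy}, z_x, z_y)$; the seed and tolerance are illustrative choices, and the objective arguments are assumed to have been prepared as in that sketch:

```python
from scipy.optimize import differential_evolution

# Table 3 bounds for the GelStereo Tip: (mu_1, mu_2, d_xy, z_x, z_y).
bounds = [(1.21, 1.61), (1.21, 1.61), (19.0, 25.0), (-0.2, 0.2), (-0.2, 0.2)]

# corners_2d, (N, M), cell_len, (w1, w2) = (0.6, 0.4), and the GU-RSRT
# wrapper `reconstruct` are assumed to be prepared as in Section 3.2.
result = differential_evolution(
    ummr_objective, bounds,
    args=(corners_2d, (N, M), cell_len, w1, w2, reconstruct),
    seed=0, tol=1e-8)            # seed/tol are illustrative settings
phi_star = result.x              # calibrated refractive/structural parameters
```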

4.3.2. Data Collection for Evaluation

Using the platform in Figure 5, points on the sensor surface were sampled to evaluate the 3D reconstruction errors. These sampling points are illustrated on the image plane with red dots, as shown in Figure 7. The probe was programmed to press at these sampling points with multiple depths. In practice, on the GelStereo Tip sensor, each sampling surface point was pressed seven times, with the depth from 0 mm to 3 mm in 0.5 mm intervals. On the GelStereo Palm2.0 sensor, each sampling surface point was pressed eight times, with the depth from 0 mm to 3.5 mm in 0.5 mm intervals. On the GelStereo Palm1.0 sensor, each sampling surface point was pressed eight times, with the depth from 0 mm to 2.8 mm in 0.4 mm intervals. On the GelStereo BioTip sensor, each sampling surface point was pressed four times, with the depth from 0 mm to 1.2 mm in 0.4 mm intervals.
To evaluate the 3D reconstruction errors in different regions of the sensor surface, these sampling points were divided into several groups, which are depicted by yellow lines and numbers in Figure 7; the larger the number, the farther the points lie from the center of the sensor surface.

5. Experimental Results

3D reconstruction accuracy. The refractive calibration results and 3D reconstruction errors of GelStereo Tip sensor, GelStereo Palm2.0 sensor, GelStereo Palm1.0 sensor, and GelStereo BioTip sensor are shown in Table 4, Table 5, Table 6 and Table 7, respectively. The proposed 3D reconstruction pipeline achieves mean Euclidean distance errors of 0.183 mm on the GelStereo Tip, 0.256 mm on the GelStereo Palm2.0, 0.264 mm on the GelStereo Palm1.0, and 0.328 mm on the GelStereo BioTip, indicating the feasibility of the proposed method on GelStereo-type sensors with different structures.
Methods comparison. In Figure 8, we visualize the tactile 3D point clouds obtained by the different 3D reconstruction methods. The corresponding 3D reconstruction errors are shown in Table 4, Table 5, Table 6 and Table 7. We find that 3D reconstruction using the proposed GU-RSRT model and refractive calibration method outperforms the other methods. On the GelStereo Tip, GelStereo Palm1.0, and GelStereo BioTip sensors, the MEDE of TTM is about 10 times that of the proposed method, while on the GelStereo Palm2.0 the performance of TTM is even worse. As shown in Figure 8, the 3D point clouds reconstructed using TTM are severely distorted on the GelStereo Palm1.0 and GelStereo BioTip sensors with curved refracting surfaces. In addition, the CPAM, commonly used in underwater environments, performs poorly on GelStereo-type sensors, with an MEDE of about 10 mm. One possible reason is that the refracting surface in GelStereo-type sensor imaging systems is farther away from the camera optical center than in underwater scenarios, leading to difficulties in absorbing the refraction effects into the camera parameters.
Self-calibration. The results of marker-based self-calibration and the 3D reconstruction errors of the GelStereo Palm1.0 and GelStereo BioTip sensors are shown in Table 6 and Table 7, respectively. We find that the MBSC method performs better than the UMMR calibration method on the GelStereo Palm1.0 and GelStereo BioTip. Specifically, the MEDE is reduced by about 0.03 mm. This is probably because relative geometric features in 3D space (i.e., markers on a curved surface) are more helpful for calibration than those in a 2D plane (i.e., checkerboard corners). Compared to the UMMR calibration method, the MBSC method is more accurate and convenient. Therefore, marker-based self-calibration is the preferred calibration method for these specific GelStereo-type sensors, which have curved refracting surfaces, curved sensor surfaces, and markers with known structures.
In addition, we applied this marker-based self-calibration method to the GelStereo Tip and GelStereo Palm2.0 sensors; however, the performance was not satisfactory, as shown in Table 4 and Table 5. According to [44], the 3D reconstruction errors of the GelStereo Tip and GelStereo Palm2.0 caused by multi-medium refraction are mainly distributed along the z-axis, because the refracting surfaces of these sensors are flat and their normals are almost parallel to the z-axis. Therefore, the relative geometric information along the z-axis is significant for calibration. On the GelStereo Tip, the sensor surface is flat, and the 3D coordinates of the markers are almost identical along the z-axis. Because of this, the self-calibration method based on marker distances barely works on the GelStereo Tip sensor. On the GelStereo Palm2.0, the performance of the marker-based self-calibration method might be improved by a specific marker sampling method that pays more attention to the z-axis.
3D reconstruction errors with different contact depths and regions. We further studied the reconstruction accuracy of the proposed method on different sensor platforms at different contact depths and contact regions in order to evaluate its robustness. The error distributions at different contact depths are illustrated using a violin plot in Figure 9, which indicates that the 3D reconstruction accuracy of GelStereo-type sensors using the proposed method is almost independent of contact depth. In addition, the 3D errors at different contact regions are depicted in Figure 10. The region numbers correspond to the numbers in Figure 7. It can be seen that the reconstruction errors of the outer points are larger than those of the middle points on the sensor surface. Specifically, the MEDE of the outermost points is about 1.4 times the mean value. One possible reason is that the projection of the black dot (on the probe) on the image plane is stretched into an ellipse-like shape when the probe presses at the outer regions. Inevitably, the commonly used blob detection algorithm introduces errors into the estimated center of the black dot, which affects the accuracy of 3D reconstruction. Moreover, on the GelStereo BioTip sensor we find that the errors in regions 4 and 5 are significantly larger. In Figure 8d, several green points in the positive X direction protrude from the sensor surface. This might be caused by an accuracy loss in the refracting surface equation. To solve this problem, it might be possible to either improve the production accuracy of the transparent supporting shell where refraction occurs or correct the equation of the refracting surface through calibration.
Ablation studies. The results of the ablation studies on the UMMR calibration method are presented in Table 8. The experimental results show that the objective function combining Euclidean distance and perpendicularity works better than a single objective function with Euclidean distance ($F_1$) or perpendicularity ($F_2$) on all of these GelStereo-type sensors. Moreover, we find that the UMMR calibration method with objective function $F_1$ alone achieves a MEDE of less than 0.6 mm on these sensors, which is not much worse than the full UMMR method. This finding shows the importance and indispensability of the Euclidean distance features, especially on the GelStereo Tip, GelStereo Palm2.0, and GelStereo Palm1.0 sensors. To achieve better constraints on the spatial geometry, the perpendicularity features are complementary to the Euclidean distance features.

6. Discussion

In GelStereo-type sensor imaging systems, the shapes of refracting surfaces play an important role in ray tracing. Although the refracting surface function depends on sensor design, the manufacturing process (such as 3D printing, laser cutting, etc.) of the transparent supporting plate might bring errors into this function, especially on the curved supporting plate in the GelStereo Palm1.0 and GelStereo BioTip sensors. To solve this problem, a method for correcting the function of the refracting surface should be integrated into the refractive calibration process. In this way, the precision of tactile 3D reconstruction can be further improved in GelStereo-type sensors.
As mentioned in Section 5, with a flat refracting surface the relative geometric information in the surface normal direction needs special attention during calibration. For marker-based self-calibration with high precision and computational efficiency, an algorithm generating marker pairs to compute the Euclidean distance is required, which solves the problem of effectively representing the geometric features with fewer markers on various GelStereo-type sensor platforms.

7. Conclusions

In this paper, we present a universal Refractive Stereo Ray Tracing model for GelStereo-type sensors to model tactile 3D reconstruction under multi-medium light refraction. In addition, a Universal Multi-Medium Refractive (UMMR) calibration method is proposed to obtain the refractive and structural parameters of the GU-RSRT model, in which relative geometric features on checkerboards are employed to build an optimization problem for calibration. Furthermore, a self-calibration method based on structured markers on the sensor surface is provided for specific GelStereo-type sensors.
Extensive calibration and evaluation experiments are conducted on four different GelStereo-type sensors with various structural designs. The experimental results show that the proposed refractive calibration method can obtain reasonable parameters for the GU-RSRT model, and that the mean Euclidean distance error of 3D reconstruction is less than 0.35 mm, outperforming the other 3D reconstruction methods. In addition, the accuracy of the marker-based self-calibration method is slightly better than that of the UMMR calibration method on GelStereo-type sensors with curved refracting surfaces. The self-calibration method has great potential to improve calibration efficiency and sensor service life. Moreover, our experimental results show the robustness of the proposed 3D reconstruction pipeline at different contact depths and regions.
The feasibility of the proposed tactile 3D reconstruction pipeline is fully demonstrated in this paper. Its practical application scenario is visuotactile sensing (especially high-precision 3D contact geometry measurement) based on binocular cameras subject to multi-medium light refraction. With high-precision sensing capability, GelStereo-type sensors and other similar visuotactile sensors could provide more possibilities for robots to achieve rich-contact and dexterous manipulation. In the future, we intend to further improve the 3D reconstruction performance of GelStereo-type sensors and apply them to robotic perception and manipulation tasks.

Author Contributions

Conceptualization, C.Z., S.W. and S.C.; methodology, C.Z., J.H. and S.C.; software, C.Z. and S.C.; validation, J.H. and B.Z.; formal analysis, C.Z. and S.C.; investigation, C.Z., J.H. and S.C.; data curation, J.H., Y.H. and B.Z.; writing—original draft preparation, C.Z., S.C. and S.W.; writing—review and editing, C.Z., S.C. and S.W.; visualization, Y.H.; supervision, S.W.; funding acquisition, S.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Key Research and Development Program of China under Grant 2018AAA0103003, and in part by the National Natural Science Foundation of China under Grant 62273342.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baranek, G.T.; Foster, L.G.; Berkson, G. Tactile defensiveness and stereotyped behaviors. Am. J. Occup. Ther. 1997, 51, 91–95.
  2. Dargahi, J.; Najarian, S. Human tactile perception as a standard for artificial tactile sensing—A review. Int. J. Med. Robot. Comput. Assist. Surg. 2004, 1, 23–35.
  3. Röder, B.; Rösler, F.; Spence, C. Early vision impairs tactile perception in the blind. Curr. Biol. 2004, 14, 121–124.
  4. Li, Q.; Kroemer, O.; Su, Z.; Veiga, F.F.; Kaboli, M.; Ritter, H.J. A review of tactile information: Perception and action through touch. IEEE Trans. Robot. 2020, 36, 1619–1634.
  5. Liu, Y.; Bao, R.; Tao, J.; Li, J.; Dong, M.; Pan, C. Recent progress in tactile sensors and their applications in intelligent systems. Sci. Bull. 2020, 65, 70–88.
  6. Abad, A.C.; Ranasinghe, A. Visuotactile sensors with emphasis on GelSight sensor: A review. IEEE Sens. J. 2020, 20, 7628–7638.
  7. Chi, C.; Sun, X.; Xue, N.; Li, T.; Liu, C. Recent progress in technologies for tactile sensors. Sensors 2018, 18, 948.
  8. Boutry, C.M.; Negre, M.; Jorda, M.; Vardoulis, O.; Chortos, A.; Khatib, O.; Bao, Z. A hierarchically patterned, bioinspired e-skin able to detect the direction of applied pressure for robotics. Sci. Robot. 2018, 3, eaau6914.
  9. Chen, M.; Luo, W.; Xu, Z.; Zhang, X.; Xie, B.; Wang, G.; Han, M. An ultrahigh resolution pressure sensor based on percolative metal nanoparticle arrays. Nat. Commun. 2019, 10, 4024.
  10. Park, J.; Kim, M.; Lee, Y.; Lee, H.S.; Ko, H. Fingertip skin–inspired microstructured ferroelectric skins discriminate static/dynamic pressure and temperature stimuli. Sci. Adv. 2015, 1, e1500661.
  11. Yuan, W.; Dong, S.; Adelson, E.H. GelSight: High-resolution robot tactile sensors for estimating geometry and force. Sensors 2017, 17, 2762.
  12. Yuan, W.; Li, R.; Srinivasan, M.A.; Adelson, E.H. Measurement of shear and slip with a GelSight tactile sensor. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 304–311.
  13. Dong, S.; Yuan, W.; Adelson, E.H. Improved GelSight tactile sensor for measuring geometry and slip. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 137–144.
  14. Zhang, C.; Cui, S.; Cai, Y.; Hu, J.; Wang, R.; Wang, S. Learning-based Six-axis Force/Torque Estimation Using GelStereo Fingertip Visuotactile Sensing. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 3651–3658.
  15. Ma, D.; Donlon, E.; Dong, S.; Rodriguez, A. Dense tactile force estimation using GelSlim and inverse FEM. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 5418–5424.
  16. Johnson, M.K.; Adelson, E.H. Retrographic sensing for the measurement of surface texture and shape. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1070–1077.
  17. Cui, S.; Wang, R.; Hu, J.; Zhang, C.; Chen, L.; Wang, S. Self-Supervised Contact Geometry Learning by GelStereo Visuotactile Sensing. IEEE Trans. Instrum. Meas. 2021, 71, 1–9.
  18. Li, R.; Platt, R.; Yuan, W.; ten Pas, A.; Roscup, N.; Srinivasan, M.A.; Adelson, E. Localization and manipulation of small parts using GelSight tactile sensing. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 3988–3993.
  19. She, Y.; Wang, S.; Dong, S.; Sunil, N.; Rodriguez, A.; Adelson, E. Cable manipulation with a tactile-reactive gripper. Int. J. Robot. Res. 2021, 40, 1385–1401.
  20. Wang, C.; Wang, S.; Romero, B.; Veiga, F.; Adelson, E. SwingBot: Learning physical features from in-hand tactile exploration for dynamic swing-up manipulation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 5633–5640.
  21. Cui, S.; Wang, R.; Hu, J.; Wei, J.; Wang, S.; Lou, Z. In-hand object localization using a novel high-resolution visuotactile sensor. IEEE Trans. Ind. Electron. 2021, 69, 6015–6025.
  22. Pan, C.; Lepert, M.; Yuan, S.; Antonova, R.; Bohg, J. Task-Driven In-Hand Manipulation of Unknown Objects with Tactile Sensing. arXiv 2022, arXiv:2210.13403.
  23. Sun, H.; Kuchenbecker, K.J.; Martius, G. A soft thumb-sized vision-based sensor with accurate all-round force perception. Nat. Mach. Intell. 2022, 4, 135–145.
  24. Do, W.K.; Jurewicz, B.; Kennedy III, M. DenseTact 2.0: Optical Tactile Sensor for Shape and Force Reconstruction. arXiv 2022, arXiv:2209.10122.
  25. Gomes, D.F.; Lin, Z.; Luo, S. GelTip: A finger-shaped optical tactile sensor for robotic manipulation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 9903–9909.
  26. Yamaguchi, A.; Atkeson, C.G. Implementing tactile behaviors using FingerVision. In Proceedings of the 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), Birmingham, UK, 15–17 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 241–248.
  27. Yamaguchi, A.; Atkeson, C.G. Tactile behaviors with the vision-based tactile sensor FingerVision. Int. J. Humanoid Robot. 2019, 16, 1940002.
  28. Ward-Cherrier, B.; Pestell, N.; Cramphorn, L.; Winstone, B.; Giannaccini, M.E.; Rossiter, J.; Lepora, N.F. The TacTip family: Soft optical tactile sensors with 3D-printed biomimetic morphologies. Soft Robot. 2018, 5, 216–227.
  29. Lepora, N.F. Soft biomimetic optical tactile sensing with the TacTip: A review. IEEE Sens. J. 2021, 21, 21131–21143.
  30. Donlon, E.; Dong, S.; Liu, M.; Li, J.; Adelson, E.; Rodriguez, A. GelSlim: A high-resolution, compact, robust, and calibrated tactile-sensing finger. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1927–1934.
  31. Taylor, I.; Dong, S.; Rodriguez, A. GelSlim 3.0: High-resolution measurement of shape, force and slip in a compact tactile-sensing finger. arXiv 2021, arXiv:2103.12269.
  32. Romero, B.; Veiga, F.; Adelson, E. Soft, round, high resolution tactile fingertip sensors for dexterous robotic manipulation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 4796–4802.
  33. Do, W.K.; Kennedy III, M. DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction. arXiv 2022, arXiv:2201.01367.
  34. Alspach, A.; Hashimoto, K.; Kuppuswamy, N.; Tedrake, R. Soft-bubble: A highly compliant dense geometry tactile sensor for robot manipulation. In Proceedings of the 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft); IEEE: Piscataway, NJ, USA, 2019; pp. 597–604.
  35. Kuppuswamy, N.; Alspach, A.; Uttamchandani, A.; Creasey, S.; Ikeda, T.; Tedrake, R. Soft-bubble grippers for robust and perceptive manipulation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 9917–9924.
  36. Ma, H.; Ji, J.; Lee, K.M. Effects of refraction model on binocular visuotactile sensing of 3-D deformation. IEEE Sens. J. 2022, 22, 17727–17736.
  37. Hu, J.; Cui, S.; Wang, S.; Zhang, C.; Wang, R.; Chen, L.; Li, Y. GelStereo Palm: A Novel Curved Visuotactile Sensor for 3D Geometry Sensing. IEEE Trans. Ind. Inform. 2023.
  38. Glassner, A.S. An Introduction to Ray Tracing; Morgan Kaufmann: Burlington, MA, USA, 1989.
  39. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
  40. Shortis, M. Calibration techniques for accurate measurements by underwater camera systems. Sensors 2015, 15, 30810–30826.
  41. Meline, A.; Triboulet, J.; Jouvencel, B. A camcorder for 3D underwater reconstruction of archeological objects. In Proceedings of the OCEANS 2010 MTS/IEEE SEATTLE, Seattle, WA, USA, 20–23 September 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–9.
  42. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292.
  43. Zhang, Z. Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 1994, 13, 119–152.
  44. Du, H.; Li, M.-g.; Meng, J. Study on the reconstruction method of stereo vision in glass flume. Adv. Eng. Softw. 2016, 94, 14–19.
Figure 1. Diagram of tactile 3D reconstruction based on ray tracing: (a) real GelStereo-type sensor in contact with a pen; (b) ray-tracing diagram (the blue lines indicate light ray paths inside the sensor undergoing multi-medium refraction); (c) reconstructed 3D point cloud (the points in light color denote the contact area).
Figure 2. Diagram of the GU-RSRT model.
Figure 3. The relative geometric features on the checkerboard: (a) the Euclidean distance feature and (b) the perpendicularity feature.
Figure 4. The GelStereo-type sensors. The first row shows a diagram of sensor imaging systems with multi-medium refraction. The second row shows pictures of real-world sensors. (a) The GelStereo Tip sensor. (b) The GelStereo Palm sensor with flat refracting surface (GelStereo Palm2.0). (c) The GelStereo Palm sensor with hemispherical refracting surface (GelStereo Palm1.0) [37]. © [2023] IEEE. Reprinted, with permission, from [37]. (d) The GelStereo BioTip sensor.
Figure 5. The platform for ground truth 3D points collection. (a) The 3D CNC linear guide with GelStereo-type sensors. (b) The left and right tactile image pairs when the probe presses on the sensor surface. The black dot on the probe tip can be clearly seen in the images. (c) The blob detection results of (b) are shown with a red dot on the image. The red tuples indicate the pixel positions.
Figure 6. Checkerboard images collection on a GelStereo-type sensor. (a) The image collection scene. (b) Right tactile image of the checkerboard.
Figure 7. Illustration of sampling points on sensor surfaces: (a) GelStereo Tip, (b) GelStereo Palm2.0, (c) GelStereo Palm1.0, (d) GelStereo BioTip sensor.
Figure 8. Visualization of tactile 3D point clouds on various sensors using different methods: (a) GelStereo Tip, (b) GelStereo Palm2.0, (c) GelStereo Palm1.0, (d) GelStereo BioTip.
Figure 9. Violin plot showing the distribution of 3D reconstruction errors at different contact depths. The X error, Y error, Z error, and ED error denote the error on the X-axis, Y-axis, Z-axis, and Euclidean distance between the reconstructed 3D points and the ground truth. The white dots indicate medians.
Figure 10. Violin plot showing the distribution of 3D reconstruction errors at different contact regions. The X error, Y error, Z error, and ED error denote the error on the X-axis, Y-axis, Z-axis, and Euclidean distance between the reconstructed 3D points and the ground truth. The white dots indicate medians.
Table 1. Notation for the GU-RSRT model.

| Symbols | Descriptions |
|---|---|
| $l, r$ | the ray from the left or right camera optical center (superscripts) |
| $i$ | the index of media, $i \in \{0, 1, 2, \ldots, m\}$ (subscripts) |
| $L, R$ | the left or right camera coordinate system (subscripts/superscripts) |
| $O_* \in \mathbb{R}^3$ | the camera optical center |
| $\mathbf{r}_* \in \mathbb{R}^3$ | the direction vector of the ray (unit vector) |
| $\mathbf{n}_* \in \mathbb{R}^3$ | the normal vector of the refracting surface (unit vector) |
| $\mathbf{P}_* \in \mathbb{R}^3$ | the intersections of the ray and refracting surfaces (3D points) |
| $\mu_* \in \mathbb{R}$ | refractive indices |
Table 2. The parameter sets to be calibrated in each sensor.

| Sensors | Parameters to Calibrate |
|---|---|
| GelStereo Tip | $\{\mu_1, \mu_2, d_{xy}, z_x, z_y\}$ |
| GelStereo Palm2.0 | $\{\mu_1, \mu_2, d_{xy}, z_x, z_y\}$ |
| GelStereo Palm1.0 | $\{\mu_1 = \mu_3, \mu_2, d_x, d_y, d_z\}$ |
| GelStereo BioTip | $\{\mu_1, \mu_2, d_x, d_y, d_z, x_y, x_z\}$ |
Table 3. Parameter ranges of the GelStereo-type sensors used for optimization.

| Parameters | GelStereo Tip | GelStereo Palm2.0 | Parameters | GelStereo Palm1.0 | GelStereo BioTip |
|---|---|---|---|---|---|
| $\mu_1$ | [1.21, 1.61] | [1.21, 1.61] | $\mu_1$ ($\mu_3$) | [1.21, 1.61] | [1.29, 1.69] |
| $\mu_2$ | [1.21, 1.61] | [1.21, 1.61] | $\mu_2$ | [1.29, 1.69] | [1.21, 1.61] |
| $d_{xy}$ (mm) | [19.0, 25.0] | [30.0, 36.0] | $d_x$ (mm) | [6.3, 10.3] | [2.2, 4.2] |
| $z_x$ | [−0.2, 0.2] | [−0.2, 0.2] | $d_y$ (mm) | [−1.0, 1.0] | [−1.0, 1.0] |
| $z_y$ | [−0.2, 0.2] | [−0.2, 0.2] | $d_z$ (mm) | [20, 26] | [12.5, 16.5] |
| | | | $x_y$ | / | [−0.1, 0.1] |
| | | | $x_z$ | / | [−0.1, 0.1] |
Table 4. The calibration results and 3D reconstruction errors of the GelStereo Tip sensor. The unit of $d_{xy}$, X MAE, Y MAE, Z MAE, and MEDE is mm.

| Methods | $\mu_1$ | $\mu_2$ | $d_{xy}$ | $z_x$ | $z_y$ | X MAE | Y MAE | Z MAE | MEDE |
|---|---|---|---|---|---|---|---|---|---|
| TTM | / | / | / | / | / | 0.070 | 0.098 | 1.635 | 1.642 |
| CPAM | / | / | / | / | / | 0.157 | 0.167 | 9.523 | 9.527 |
| GU-RSRT+UMMR (ours) | 1.468 | 1.405 | 21.016 | −0.0111 | 0.0157 | 0.038 | 0.068 | 0.146 | 0.183 |
| GU-RSRT+MBSC (ours) | 1.604 | 1.599 | 21.510 | 0.0611 | 0.0723 | 0.134 | 0.083 | 0.552 | 0.584 |
Table 5. The calibration results and 3D reconstruction errors of the GelStereo Palm2.0 sensor. The unit of $d_{xy}$, X MAE, Y MAE, Z MAE, and MEDE is mm.

| Methods | $\mu_1$ | $\mu_2$ | $d_{xy}$ | $z_x$ | $z_y$ | X MAE | Y MAE | Z MAE | MEDE |
|---|---|---|---|---|---|---|---|---|---|
| TTM | / | / | / | / | / | 0.146 | 0.141 | 7.624 | 7.629 |
| CPAM | / | / | / | / | / | 0.343 | 1.011 | 17.795 | 17.828 |
| GU-RSRT+UMMR (ours) | 1.443 | 1.369 | 33.935 | −0.0003 | −0.0058 | 0.076 | 0.088 | 0.200 | 0.256 |
| GU-RSRT+MBSC (ours) | 1.680 | 1.593 | 30.002 | 0.0110 | 0.0098 | 0.170 | 0.172 | 7.303 | 7.307 |
Table 6. The calibration results and 3D reconstruction errors of the GelStereo Palm1.0 sensor. The unit of $d_x$, $d_y$, $d_z$, X MAE, Y MAE, Z MAE, and MEDE is mm.

| Methods | $\mu_1 = \mu_3$ | $\mu_2$ | $d_x$ | $d_y$ | $d_z$ | X MAE | Y MAE | Z MAE | MEDE |
|---|---|---|---|---|---|---|---|---|---|
| TTM | / | / | / | / | / | 0.692 | 0.772 | 2.101 | 2.470 |
| CPAM | / | / | / | / | / | 6.120 | 0.161 | 3.695 | 7.172 |
| GU-RSRT+UMMR (ours) | 1.324 | 1.470 | 8.259 | −0.126 | 23.496 | 0.086 | 0.154 | 0.193 | 0.291 |
| GU-RSRT+MBSC (ours) | 1.379 | 1.522 | 7.503 | 0.149 | 23.793 | 0.105 | 0.123 | 0.174 | 0.264 |
Table 7. The calibration results and 3D reconstruction errors of the GelStereo BioTip sensor. The unit of $d_x$, $d_y$, $d_z$, X MAE, Y MAE, Z MAE, and MEDE is mm.

| Methods | $\mu_1$ | $\mu_2$ | $d_x$ | $d_y$ | $d_z$ | $x_y$ | $x_z$ | X MAE | Y MAE | Z MAE | MEDE |
|---|---|---|---|---|---|---|---|---|---|---|---|
| TTM | / | / | / | / | / | / | / | 0.810 | 0.994 | 2.816 | 3.224 |
| CPAM | / | / | / | / | / | / | / | 9.263 | 1.799 | 8.477 | 12.871 |
| GU-RSRT+UMMR (ours) | 1.509 | 1.373 | 3.450 | −0.115 | 14.218 | −0.0034 | 0.0304 | 0.093 | 0.079 | 0.311 | 0.365 |
| GU-RSRT+MBSC (ours) | 1.594 | 1.450 | 3.576 | 0.003 | 14.823 | 0.0027 | 0.0156 | 0.069 | 0.075 | 0.294 | 0.328 |
Table 8. The results of ablation studies on the UMMR calibration method. The unit of X MAE, Y MAE, Z MAE, and MEDE is mm.

| Sensor | Methods | X MAE | Y MAE | Z MAE | MEDE |
|---|---|---|---|---|---|
| GelStereo Tip | GU-RSRT+UMMR ($F_1$) | 0.038 | 0.066 | 0.163 | 0.197 |
| GelStereo Tip | GU-RSRT+UMMR ($F_2$) | 0.135 | 0.059 | 0.173 | 0.249 |
| GelStereo Tip | GU-RSRT+UMMR | 0.038 | 0.068 | 0.146 | 0.183 |
| GelStereo Palm2.0 | GU-RSRT+UMMR ($F_1$) | 0.078 | 0.092 | 0.330 | 0.375 |
| GelStereo Palm2.0 | GU-RSRT+UMMR ($F_2$) | 0.075 | 0.138 | 1.392 | 1.406 |
| GelStereo Palm2.0 | GU-RSRT+UMMR | 0.076 | 0.088 | 0.200 | 0.256 |
| GelStereo Palm1.0 | GU-RSRT+UMMR ($F_1$) | 0.114 | 0.159 | 0.225 | 0.331 |
| GelStereo Palm1.0 | GU-RSRT+UMMR ($F_2$) | 0.290 | 0.363 | 1.013 | 1.169 |
| GelStereo Palm1.0 | GU-RSRT+UMMR | 0.086 | 0.154 | 0.193 | 0.291 |
| GelStereo BioTip | GU-RSRT+UMMR ($F_1$) | 0.098 | 0.116 | 0.539 | 0.574 |
| GelStereo BioTip | GU-RSRT+UMMR ($F_2$) | 0.127 | 0.126 | 0.297 | 0.391 |
| GelStereo BioTip | GU-RSRT+UMMR | 0.093 | 0.079 | 0.311 | 0.365 |
