Article

Tomographic Proximity Imaging Using Conductive Sheet for Object Tracking †

Zehao Li, Shunsuke Yoshimoto and Akio Yamamoto

1 School of Engineering, The University of Tokyo, Tokyo 113-8656, Japan
2 Graduate School of Frontier Sciences, The University of Tokyo, Chiba 277-8563, Japan
* Author to whom correspondence should be addressed.
† This paper is an extended version of Li, Z.; Yoshimoto, S.; Yamamoto, A. Tomographic Approach for Proximity Imaging Using Conductive Sheet. In Proceedings of the 46th Annual Conference of the IEEE Industrial Electronics Society, Singapore, 18–21 October 2020.
Sensors 2021, 21(8), 2736; https://doi.org/10.3390/s21082736
Submission received: 23 March 2021 / Revised: 2 April 2021 / Accepted: 9 April 2021 / Published: 13 April 2021
(This article belongs to the Section Sensing and Imaging)

Abstract: This paper proposes a proximity imaging sensor based on a tomographic approach with a low-cost conductive sheet. In particular, by defining a capacitance density, physical proximity information is transformed into electric potential. A novel theoretical model is developed to solve the capacitance density problem using the tomographic approach. A prototype built on this model solves an inverse problem to image the change in capacitance density, which indicates the change in an object's proximity. In the evaluation test, the prototype reaches an error rate of 10.0–15.8% in horizontal localization at different heights. Finally, a hand-tracking demonstration is carried out, in which a position difference of 33.8–46.7 mm between the proposed sensor and a depth camera is achieved at 30 fps.

1. Introduction

Object tracking is crucial in robotics applications such as teleoperation, input interfaces, and human–robot interaction. Camera-based approaches are commonly adopted for tracking an object over a larger range [1]. Thanks to recent advances in machine learning [2,3], camera-based object tracking has been widely used in human–robot cooperation [4] and autonomous driving [5].
At closer range, however, visual solutions suffer from a restricted field of view and occlusion by non-transparent objects. These effects limit the potential of visual solutions for close-range proximity sensing, which is necessary for human–computer interfaces such as augmented reality (AR) devices [6]. Such devices usually have limited space for sensors and therefore require a thinner proximity sensor covering a wider detection area.
Several studies have addressed close-range proximity sensors, adopting three common approaches: optical, inductive, and capacitive. Optical sensors such as Light Detection and Ranging (lidar) or depth sensors are precise [7], but they are hard to implement and still suffer from occlusion. The inductive approach yields robust sensors, but it only works with conductive objects [8]. Sensors based on the capacitive approach, on the other hand, can detect both conductive and non-conductive objects, but they are vulnerable to disturbances and contamination [9,10]. Some studies combine the inductive and capacitive approaches [11]. All of these approaches focus on single-point sensing, which yields proximity at only one location.
For 3D object tracking, capacitive sensors are often arranged in arrays to generate 3D proximity information. Such sensors are usually thin, can be hidden behind surfaces, and are not restrained by a field of view. Zhang et al. [12] applied a conductive paint pattern onto a wall to track human poses. Ye et al. [13] proposed a high-performance capacitive sensor array by introducing a highly sensitive capacitance measuring circuit and grounded shields around each sensor. However, the sensor-array method usually requires complex hardware design and manufacturing, making it difficult to integrate into arbitrarily shaped devices.
Since the works in [14,15], tomographic approaches have been used to determine conductivity changes across a material from boundary-electrode information, typically to recover the pressure distribution on the material. Electrical impedance tomography sensors have proven advantages in scalability, versatility, and ease of fabrication. Most previous research has focused on force and pressure imaging [16,17,18]. Some attempts have been made to use tomography to image the proximity of cylindrical objects, but they only reveal one-dimensional information [19,20]. There has been no endeavor toward using a tomographic approach for proximity imaging on 2D surfaces.
The purpose of this paper is to capture an object's 3D position using a thin conductive sheet. To this end, a novel theoretical model is proposed to find the capacitance density, which is related to the proximity distribution, from the boundary electrodes of a conductive sheet. The proximity distribution is then used to predict the object's position. To build the model, the capacitance density induced on the conductive sheet by surrounding objects is defined and introduced into the differential equation of the tomographic approach. The system solves an inverse problem to reconstruct the capacitance density and thus estimate the proximity distribution. As the detector is a homogeneous thin layer of conductive sheet, the detection area can easily be scaled to a larger surface. Moreover, with this hardware architecture, it is possible to incorporate pressure sensing into the same sensor, granting the sensor both touch-interaction imaging and 3D object tracking at low cost and with easy manufacturing.
In our previous conference paper, the theoretical model for estimating the capacitance density distribution was presented [21]. In this paper, the proximity mapping from the capacitance density inside the model is improved. Based on the proposed model, we built a functioning prototype and evaluated the performance of the proposed system. The main contributions of this paper are as follows:
  • To present an improved proximity imaging method, extending [21], for object tracking applications.
  • To develop a proximity imaging sensor using a low-cost conductive sheet and evaluate its proximity and horizontal position estimation accuracy.
  • To implement a hand-tracking demonstration as a potential application of the proposed system.

2. Methods

2.1. Overview

Our final goal is to visualize a proximity distribution on a surface (see Figure 1). First, proximity–capacitance coupling is introduced. A single layer of conductive sheet converts the proximity information into potentials at electrodes on the sheet's boundary. The system then solves an inverse problem to find the capacitance distribution on the surface. In this paper, we assume that the conductive sheet used for capacitive coupling is a purely resistive material and that only electrically grounded objects are considered as target objects. The following discussion concerns a single time frame, within which all variables are time-dependent.
As shown in Figure 2, when a grounded object exists above a conductive surface, a capacitance $C_\mathrm{eq}$ appears between the two. If a voltage $V_\mathrm{in}$ is applied at the boundary of the surface, free electrons accumulate on the surface and on the grounded object. The distribution of electrons is represented by the charge density $\rho_e(\mathbf{r})$ at a point $\mathbf{r} \in \mathbb{R}^2$, and $\phi(\mathbf{r})$ denotes the potential at the same point. The capacitance density $\varsigma_C(\mathbf{r})$ on the surface can then be defined as
$$\varsigma_C(\mathbf{r}) = \frac{\rho_e(\mathbf{r})}{\phi(\mathbf{r})}. \tag{1}$$
The shape and distance of the grounded object placed over the surface determine the capacitance density. Establishing the precise relation between the capacitance density and the distance distribution can be considerably difficult; however, the distance between the target object and the detecting sheet can be inferred once the capacitance density is obtained.
For the sensor design, $N$ electrodes are attached to the boundary of the sheet, as shown in Figure 2. By inputting a voltage signal through one electrode and reading the voltage on the other electrodes one at a time, we obtain $N-1$ data points for one input condition. After switching the input through all the boundary electrodes, we obtain $N(N-1)$ data points for a single detection frame. Using these data to solve the inverse problem, we obtain the capacitance distribution across the sheet and can then estimate a proximity map of the grounded objects above the sensor.
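To make the scanning scheme concrete, the sketch below assembles the $N(N-1)$ amplitude vector of one frame. It is illustrative only: `read_amplitude` is a hypothetical stand-in for driving one electrode and sampling another, not the actual LabVIEW controller.

```python
import numpy as np

N = 16  # boundary electrodes, as in the prototype

def read_amplitude(drive: int, sense: int) -> float:
    """Hypothetical stand-in for one hardware reading: the amplitude on
    electrode `sense` while electrode `drive` carries the excitation."""
    rng = np.random.default_rng(drive * N + sense)
    return 1.0 + 0.01 * rng.standard_normal()

def scan_frame() -> np.ndarray:
    """Drive each electrode in turn and read the remaining N - 1,
    yielding the N(N - 1) measurements of one detection frame."""
    frame = [read_amplitude(d, s)
             for d in range(N) for s in range(N) if s != d]
    return np.asarray(frame)

v_mea = scan_frame()
print(v_mea.shape)  # (240,) for N = 16
```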

2.2. Forward Problem

A reconstruction algorithm solves the inverse problem of estimating $\varsigma_C$ from the potentials at the boundary electrodes under multiple input conditions. To define the inverse problem, we first consider the forward problem: given the capacitance density $\varsigma_C(\mathbf{r})$, the potential distribution on the sheet is obtained by solving a partial differential equation. As a thin sheet is utilized, the model is established on a 2D surface.
Considering the balance of free electrons at a point inside the conductive sheet domain $\Omega$, the accumulation rate of free electrons equals the inflow of current at that point:
$$\nabla \cdot \mathbf{j}_s(\mathbf{r}) + \frac{\partial \rho_e(\mathbf{r})}{\partial t} = 0 \quad \text{on } \Omega, \tag{2}$$
where $\mathbf{j}_s$ is the current density vector, $\rho_e$ is the density of free electrons at that point (a time-varying quantity), and $t$ is time. The current density vector can be represented as
$$\mathbf{j}_s(\mathbf{r}) = \sigma(\mathbf{r})\,\mathbf{E}(\mathbf{r}) = \sigma(\mathbf{r})\left(-\nabla \phi(\mathbf{r})\right), \tag{3}$$
where $\mathbf{E}(\mathbf{r})$ is the electric field, $\sigma(\mathbf{r})$ is the conductivity of the homogeneous sheet, and $\phi(\mathbf{r})$ is the potential at the point. Equation (2) can then be rewritten as
$$\nabla \cdot \left(\sigma(\mathbf{r})\, \nabla \phi(\mathbf{r})\right) = \frac{\partial \rho_e(\mathbf{r})}{\partial t}. \tag{4}$$
Following the shunt model [22], the potential at the input electrode region $\partial\Omega_d$ is treated as the boundary condition
$$\phi(\mathbf{r}) = V_0\, e^{j\omega t} \quad \text{on } \partial\Omega_d, \tag{5}$$
where $V_0$ is the voltage amplitude of the input sine signal and $\omega$ is its angular frequency. The potential at every point in the domain can then be represented as
$$\phi(\mathbf{r}) = V(\mathbf{r})\, e^{j(\omega t + \varphi(\mathbf{r}))} \quad \text{on } \Omega, \tag{6}$$
where $V(\mathbf{r})$ is the amplitude and $\varphi(\mathbf{r})$ is the phase. According to the definition of the capacitance density $\varsigma_C(\mathbf{r})$ in (1), the density of free electrons at position $\mathbf{r}$ is
$$\rho_e(\mathbf{r}) = \varsigma_C(\mathbf{r})\, V(\mathbf{r})\, e^{j(\omega t + \varphi(\mathbf{r}))}. \tag{7}$$
A differential equation can then be derived from (4) and (7):
$$\nabla \cdot \left(\sigma(\mathbf{r})\, \nabla \phi(\mathbf{r})\right) - j\omega\, \varsigma_C(\mathbf{r})\, \phi(\mathbf{r}) = 0. \tag{8}$$
Note that $\phi(\mathbf{r})$ is a complex number. Because the conductivity of the sheet does not change in our application, the unknown in (8) is $\varsigma_C(\mathbf{r})$.
The finite element method (FEM) is used to find the potential distribution satisfying (8). By solving the forward problem, the potentials at the electrodes can be calculated, and the output on the electrode areas $\partial\Omega_e$ can be represented as
$$\phi(\mathbf{r}) = V_e(\mathbf{r})\, e^{j(\omega t + \varphi_e(\mathbf{r}))} \quad \text{on } \partial\Omega_e. \tag{9}$$
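For concreteness, the following is a minimal P1 (linear triangle) finite-element sketch of Equation (8). It is a simplified stand-in for the solver used in this work, assuming per-element $\sigma$ and $\varsigma_C$ values and a plain Dirichlet condition on the drive electrode nodes (no contact-impedance model):

```python
import numpy as np

def solve_forward(nodes, tris, sigma, varsigma, omega, drive_nodes, v0=3.0):
    """P1 FEM sketch of Equation (8):
    div(sigma * grad(phi)) - j*omega*varsigma*phi = 0.
    nodes: (n, 2) coordinates; tris: (m, 3) vertex indices;
    sigma, varsigma: per-element values; drive_nodes: indices held at v0."""
    n = len(nodes)
    A = np.zeros((n, n), dtype=complex)
    for e, tri in enumerate(tris):
        xy = nodes[tri]
        # Gradient coefficients of the three linear shape functions
        b = np.array([xy[1, 1] - xy[2, 1],
                      xy[2, 1] - xy[0, 1],
                      xy[0, 1] - xy[1, 1]])
        c = np.array([xy[2, 0] - xy[1, 0],
                      xy[0, 0] - xy[2, 0],
                      xy[1, 0] - xy[0, 0]])
        area = 0.5 * abs(b[0] * c[1] - b[1] * c[0])
        K = sigma[e] / (4.0 * area) * (np.outer(b, b) + np.outer(c, c))
        M = varsigma[e] * area / 12.0 * (np.ones((3, 3)) + np.eye(3))
        A[np.ix_(tri, tri)] += K + 1j * omega * M   # weak form of (8)
    rhs = np.zeros(n, dtype=complex)
    A[drive_nodes, :] = 0.0                         # Dirichlet drive, Eq. (5)
    A[drive_nodes, drive_nodes] = 1.0
    rhs[drive_nodes] = v0
    phi = np.linalg.solve(A, rhs)
    return np.abs(phi), np.angle(phi)               # V(r) and phase, Eq. (6)
```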

2.3. Inverse Problem

A regularization-based imaging method [23] is utilized because of its fast reconstruction speed. The amplitude $V_e$ is used as the main measurement parameter, as it is easier to measure. The basic idea of dynamic imaging is to linearize the system around an initial density vector $\boldsymbol{\varsigma}_0$. After obtaining the Jacobian matrix $\mathbf{J}$ relating the electrode potential amplitudes to the capacitance density, the differential amplitude vector $\delta \mathbf{V}$ can be approximated as
$$\delta \mathbf{V} \approx \mathbf{J}\, \delta \boldsymbol{\varsigma} + \mathbf{w}, \tag{10}$$
where $\delta \mathbf{V}$ comprises the differences between the current electrode amplitude readings and the original amplitudes recorded before any object approaches the sensor, $\delta \boldsymbol{\varsigma}$ is the change of the capacitance density on each element, and $\mathbf{w}$ represents an error. The amplitude vector has $N(N-1)$ elements, while the capacitance density vector has $N_\mathrm{elem}$ elements. In practice, we record the original amplitude data $\mathbf{V}_\mathrm{ori}$ when there is no object in the detection range. We then subtract $\mathbf{V}_\mathrm{ori}$ from the measured data $\mathbf{V}_\mathrm{mea}$ to obtain the differential amplitude vector $\delta \mathbf{V}$ and perform the reconstruction.
Through this method, a linear mapping between the electrode potential information $\delta \mathbf{V}$ and the capacitance density change $\delta \boldsymbol{\varsigma}$ is established, and an acceptable reconstruction speed is achieved.

2.3.1. Jacobian Matrix

The Jacobian matrix is the derivative of the electrode amplitudes with respect to the capacitance density. It can be calculated numerically by perturbing the capacitance density of each element in the mesh by $\delta \varsigma$. In practice, a difference approximation of $\mathbf{J}$ is obtained by dividing $\delta V_i$ by $\delta \varsigma_k$:
$$J_{i,k} = \frac{\delta V_i}{\delta \varsigma_k}, \quad i = 1, \ldots, N(N-1), \quad k = 1, \ldots, N_\mathrm{elem}, \tag{11}$$
where $N$ is the number of electrodes and $N_\mathrm{elem}$ is the number of elements in the detection-area mesh.
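The perturbation loop of (11) can be sketched as follows. Here `forward_amplitudes` is a toy linear stand-in so that the snippet runs on its own; in the actual pipeline it would be the FEM forward solver of Section 2.2 evaluated over all drive conditions, and the element count is reduced from the 5000 used in this work:

```python
import numpy as np

N_MEAS = 16 * 15          # N(N-1) measurements
N_ELEM = 500              # 5000 elements in the paper; reduced for the toy
rng = np.random.default_rng(0)
_toy_map = rng.standard_normal((N_MEAS, N_ELEM)) * 1e-4

def forward_amplitudes(varsigma):
    """Toy stand-in for the FEM forward solver: maps a capacitance-density
    vector to the N(N-1) electrode output amplitudes."""
    return _toy_map @ varsigma

def build_jacobian(delta=1e-4):
    """Difference approximation of Equation (11): perturb one element at a
    time by delta and record the resulting amplitude change."""
    base = forward_amplitudes(np.zeros(N_ELEM))
    J = np.empty((N_MEAS, N_ELEM))
    for k in range(N_ELEM):
        v = np.zeros(N_ELEM)
        v[k] = delta
        J[:, k] = (forward_amplitudes(v) - base) / delta
    return J

J = build_jacobian()
print(J.shape)  # (240, 500)
```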

2.3.2. Regularization

As shown in (10), solving for $\delta \boldsymbol{\varsigma}$ from $\delta \mathbf{V}$ is a highly ill-posed problem. Therefore, the Tikhonov regularization method is utilized. Lionheart et al. [24] give the formal Tikhonov-regularized solution as
$$\delta \hat{\boldsymbol{\varsigma}} = \left(\mathbf{J}^{\mathsf{T}} \mathbf{J} + \lambda^2 \mathbf{Q}\right)^{-1} \mathbf{J}^{\mathsf{T}}\, \delta \mathbf{V}, \tag{12}$$
where $\lambda$ is a hyperparameter that controls the amount of regularization and $\mathbf{Q}$ is a regularization matrix, which in our case is the identity matrix $\mathbf{I}$. By pre-computing the matrix $\left(\mathbf{J}^{\mathsf{T}} \mathbf{J} + \lambda^2 \mathbf{Q}\right)^{-1} \mathbf{J}^{\mathsf{T}}$, fast reconstruction of the capacitance density is achieved.
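Since $\mathbf{Q} = \mathbf{I}$ and $\mathbf{J}$ is fixed, the whole reconstruction collapses to one precomputed matrix product per frame. A minimal sketch follows, with a random matrix standing in for the FEM-derived Jacobian and a reduced element count:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_elem = 16 * 15, 500            # 5000 elements in the paper
J = rng.standard_normal((n_meas, n_elem)) * 1e-3   # stand-in Jacobian
lam = 203.0                              # hyperparameter used in this work
Q = np.eye(n_elem)                       # identity regularization matrix

# Pre-compute the linear reconstruction operator of Equation (12)
R = np.linalg.solve(J.T @ J + lam**2 * Q, J.T)

def reconstruct(v_mea, v_ori):
    """One frame: differential amplitude vector -> capacitance-density change."""
    return R @ (v_mea - v_ori)

delta_varsigma = reconstruct(np.full(n_meas, 1.01), np.ones(n_meas))
print(delta_varsigma.shape)  # (500,)
```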

2.4. Proximity Mapping and Calibration

Different from the previous paper [21], the proximity mapping uses the following equation to approximately relate distance and capacitance for objects with a flat bottom surface. From the capacitance between two infinite parallel plates, $C = \varepsilon_0 S / d$, the capacitance density follows as $\varsigma = \varepsilon_0 / d$, where $d$ is the distance between the plates and $\varepsilon_0$ is the vacuum permittivity. We therefore approximate the relation between proximity and capacitance density as
$$d_k = b_1 + \frac{b_2}{\delta \varsigma_k}, \tag{13}$$
where $d_k$ is the normal distance of the object from the $k$-th element and $\delta \varsigma_k$ is the solver output on the $k$-th element. The parameters $b_1$ and $b_2$ are fitted from calibration data. For a complex-shaped object, however, a specific function has to be considered to better decouple proximity and capacitance, which means that a separate calibration is needed for differently shaped objects.
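Fitting $b_1$ and $b_2$ is a small least-squares problem; the sketch below uses SciPy with made-up calibration pairs purely to illustrate the shape of Equation (13):

```python
import numpy as np
from scipy.optimize import curve_fit

def proximity_model(delta_vs, b1, b2):
    """Equation (13): distance from the reconstructed density change,
    following the parallel-plate relation varsigma = eps0 / d."""
    return b1 + b2 / delta_vs

# Hypothetical calibration pairs (density change at CoP, known height in mm);
# real values would come from hovering the flat object on the XYZ stage.
dvs = np.array([2.0e-6, 1.0e-6, 6.5e-7, 5.0e-7, 4.0e-7])
heights = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

(b1, b2), _ = curve_fit(proximity_model, dvs, heights)
print(proximity_model(8.0e-7, b1, b2))  # estimated height in mm
```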

3. Implementation

3.1. Sensor Construction

The outline of the sensor is shown in Figure 3. A multiplexer switches the input signal between the electrodes, and an analog-to-digital converter (ADC) reads the voltage signal on every channel. Because of the large impedance of our system, voltage followers were placed before every ADC channel to prevent the ADC's impedance from affecting the sensor circuit. The multiplexer was a MUX36S16IPW, and the amplifier used for the voltage followers was an MCP6291RT-E/MS. The circuit is shown in Figure 4. For the ADC, we used a USB-6349 multifunction I/O device (National Instruments), which provides a 500 kHz sampling rate on all analog input channels. The control system was developed in LabVIEW. For the sensing layer, a carbon-loaded polyethylene conductive sheet (ZC-86, Engineer Inc., Yokohama, Japan) with a size of 200 mm × 200 mm was used (see Figure 4). The surface resistance of the sheet is 5 × 10³ Ω/sq at 20 °C and 50% humidity, and its moisture permeability is 60 g/m². Sixteen square copper electrodes with a side length of 12 mm were arranged evenly along the perimeter and connected to the sheet with conductive tape.
A voltage with an amplitude of 3 V and a frequency of 20 kHz was used as the excitation signal, generated by a WF-1946 (NF Corporation, Yokohama, Japan). This frequency was chosen to suit the ADC sampling rate and because it produced the largest change in electrode output amplitude with respect to the distance change of a hovering object. The channel selection on the multiplexer was controlled by the USB-6349's digital output. For each excitation condition, 500 sample points were recorded, after which the controller switched to the next input condition; the switching rate was 1 kHz. For every excitation condition, the last 250 sample points were used to compute the amplitude, and a Hamming window was applied before the fast Fourier transform (FFT) to suppress the effect of phase differences. After recording all 16 excitation conditions, the data were sent to the inverse solver. The imaging frame rate of our prototype was 30 Hz.
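The windowed amplitude extraction can be summarized as follows (a sketch assuming the stated 500 kHz sampling and 20 kHz excitation; since 250 samples hold exactly ten signal cycles, the 20 kHz component falls on a single FFT bin, and dividing by the window sum restores the sine amplitude):

```python
import numpy as np

FS = 500_000     # ADC sampling rate (Hz)
F_SIG = 20_000   # excitation frequency (Hz)

def amplitude(samples: np.ndarray) -> float:
    """Amplitude of the 20 kHz component from the last 250 samples of one
    excitation condition: Hamming window, FFT, pick the signal bin."""
    x = samples[-250:]
    w = np.hamming(len(x))
    spec = np.fft.rfft(x * w)
    k = int(round(F_SIG * len(x) / FS))  # bin 10: exactly on 20 kHz
    return 2.0 * np.abs(spec[k]) / w.sum()

t = np.arange(500) / FS                  # one excitation condition
print(amplitude(3.0 * np.sin(2 * np.pi * F_SIG * t + 0.7)))  # ~3.0
```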

3.2. Reconstruction Solver

The mesh used in the solver was generated with Altair HyperMesh. The mesh size was set to 200 mm × 200 mm to match the real sensor. The domain was divided into 5000 isosceles right triangular elements with a leg length of 4 mm. Because a linear model is used to approximate the solution, differences in mesh element size greatly affect the performance of the system; therefore, equally sized elements were used across the whole domain. The side length of each square electrode was set to 12 mm. The forward FEM simulator and the inverse solver were implemented in Python. To calculate the Jacobian matrix in (10), a capacitance density change $\delta \varsigma_k$ of $1 \times 10^{-4}$ F/m² was applied to one element, and the corresponding electrode output amplitude change $\delta V_i$ was obtained to construct the Jacobian matrix, as described in (11). The matrix was built by repeating this calculation for all 5000 elements. The choice of the hyperparameter $\lambda$ in (12) considerably affects the reconstruction results; the optimal value, chosen according to the best-resolution method in [25], was 203 in this paper. All results in the following sections were generated with the solver using these optimized parameters.
The reconstruction solver was built with the chosen hyperparameters, and some simple examples demonstrate its performance (see Figure 5). Reconstruction was applied in a detection area of 168 mm × 168 mm. A threshold was applied to the result to show the pattern more clearly: we used 80% of the maximum value as the threshold, and values below it are not shown in the result graphs.
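The display threshold reduces to a one-line mask; a sketch:

```python
import numpy as np

def threshold_image(img, frac=0.8):
    """Keep only elements at or above frac * max; NaN entries render as
    blank in the result graphs (as in Figure 5)."""
    return np.where(img >= frac * img.max(), img, np.nan)
```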
For the parameters in (13), the calibration result of an 80 mm × 80 mm object, described in Section 4, was used to fit $b_1$ and $b_2$. Figure 6 shows some measurement results of the constructed system.

4. Experiments

Two experiments were conducted to examine the performance of the sensor and show its potential applications: an evaluation experiment and a hand-tracking demonstration.

4.1. Performance Evaluation

4.1.1. Testing Device

To evaluate the sensor's proximity sensing performance, its position accuracy in measuring objects was investigated.
To evaluate the accuracy of 3D position sensing, an XYZ stage was prepared to control the object position. As shown in Figure 7, an ANYCUBIC Chiron 3D printer with 0.1 mm positioning accuracy was modified into an XYZ stage for moving the test object. An aluminum frame was attached to the end of the moving component, and the stage was calibrated so that its working space was parallel to the surface on which the sensor was placed. As shown in Figure 7, a leveling mechanism was attached between the object and the aluminum frame to compensate for the tilt caused by the extended part.
The objects were made of an electron-conjugated conductive polymer sheet with a resistance of 2.5 × 10² Ω/sq on acrylic boards, which can be laser-cut freely and assembled into any shape. Four square objects (objects A, B, C, and D in Figure 7) with side lengths of 20 mm, 30 mm, 40 mm, and 80 mm were chosen as test objects. Three further objects (objects E, F, and G in Figure 7) were chosen to demonstrate the sensor's capability of detecting arbitrarily shaped and non-flat objects. The objects were grounded using a crocodile clip attached to a slit extending from the bottom surface.

4.1.2. Metrics

To evaluate the results, the center of position (CoP) was calculated from the reconstruction result as the center of mass of the elements whose estimated capacitance density $\delta \varsigma$ exceeds 90% of the maximum value. The capacitance density at the CoP is the mean value of the elements chosen for the CoP. In this work, the position error (PE) between the reconstructed CoP and the ground truth was used as the main evaluation metric at every height. For comparison between objects, every object was hovered above the detection surface at ten heights from 5 mm to 50 mm. At each height, the center of the object visited 25 evenly distributed points inside the detection area (168 mm × 168 mm). The randomly shaped and non-flat objects were tested at three heights (10 mm, 30 mm, and 50 mm) at the same 25 points as the square objects.
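A sketch of the CoP computation follows, assuming `centroids` holds the $(x, y)$ center of every mesh element; the density-weighted centroid is one natural reading of "center of mass":

```python
import numpy as np

def center_of_position(delta_vs, centroids, frac=0.9):
    """CoP of a reconstruction: weighted centroid of the elements whose
    value exceeds frac * max, plus their mean density (the proximity
    reading used with Equation (13))."""
    mask = delta_vs >= frac * delta_vs.max()
    w = delta_vs[mask]
    cop = (centroids[mask] * w[:, None]).sum(axis=0) / w.sum()
    return cop, w.mean()

# Toy usage on a 5 x 5 grid of element centers
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
cents = np.column_stack([xs.ravel(), ys.ravel()])
vals = np.exp(-((xs - 3) ** 2 + (ys - 1) ** 2)).ravel()
print(center_of_position(vals, cents))  # CoP near (3, 1)
```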
To investigate the detection range of the sensor with respect to object size, the square objects were hovered above the center point at 20 heights from 5 mm to 100 mm. The reconstructed capacitance density value at the CoP was used to determine the detection range.
Before the experiment, every object was placed 400 mm above the center of the sensor (far beyond the detection range) to obtain the original voltage data $\mathbf{V}_\mathrm{ori}$ for that object. We measured 50 datasets at every position and took the mean of the output amplitude data, $\mathbf{V}_\mathrm{mea}$. After gathering the amplitude output at every position, the difference between the gathered data and the original voltage data was used for the reconstruction.

4.1.3. Results

Figure 8a shows the mean capacitance density at the CoP at different heights for every object, and Table 1 summarizes the corresponding relative standard deviations (RSDs). The results indicate that the reconstructed capacitance density becomes smaller as the target object becomes smaller.
However, the difference between the values for the 40 mm and 80 mm objects was not large, although the side length differs by a factor of two. Because the electric field coupling is more complex at the perimeter of a flat object, larger objects possess more evenly distributed electric fields at the center. Due to this effect, the largest reconstructed value, which represents the center value, becomes similar for larger objects at the same height. This property can be exploited for applications involving larger objects, such as hand tracking: a simple flat object can be used to calibrate the sensor, and the fitted parameters can then be applied in hand-tracking experiments.
Note that the RSD at a distance of 5 mm was much larger than at greater distances. This is caused by the same effect described above: the electric field is much more concentrated at the perimeter of the sensor sheet. As shown in Figure 8b, at smaller distances the reconstructed value at the CoP is much larger at a corner than at the center. As a result, the RSD at 5 mm is significantly larger.
The last row of Table 1 gives the detection range of every square object. Because of noise, the proposed sensor performed poorly when the reconstructed capacitance density was smaller than $3 \times 10^{-7}$ F/m²; this value was therefore used as the threshold for determining the maximum detection range. The height of the measurement point whose capacitance density at the CoP is closest to the threshold was taken as the detection range. Owing to the stronger electrical coupling of larger objects, the detection range increases with object size.
Table 2 presents the PE of the measurement results, and Figure 9 shows the CoP results for object D (80 mm × 80 mm). The PE increased with height. For every object, the PE at each height is similar, which means that the horizontal position accuracy for a single object is largely independent of its size.
Table 3 presents the standard deviation (STD) of the CoP coordinates over all samples of a single measurement at every height. The STD represents the resolution of the sensor, which varies with the size and height of the object: the resolution is higher when objects are larger or closer to the sensor.
The results for the non-square objects (E, F, and G) are shown in Figure 10 and are similar to those of square objects of comparable size. This further indicates that the PE is not affected by the shape of the object. Moreover, the data show that, albeit with somewhat larger errors, the proposed sensor can detect non-flat objects.

4.2. Hand-Tracking Application

4.2.1. Measuring Device

To obtain ground truth data for the hand, an Intel RealSense SR300 depth camera recorded the hand position simultaneously with our sensor (see Figure 11). A sleeve made of a material invisible to the depth camera was worn on the arm to restrict the visible area to the hand. First, the point cloud in the camera coordinate frame was calculated from the depth image using the intrinsics given in the data sheet. These points were then transformed from the camera coordinates $(x_c, y_c, z_c)$ to the sensor coordinates $(x_s, y_s, z_s)$. From the points in the sensor frame, the hand was extracted as the set of points satisfying $|x_s| < 300$ mm, $|y_s| < 300$ mm, and $z_s < 300$ mm. The contour of the hand in the depth image was then generated using the OpenCV library, as shown in Figure 11, and the center of the ellipse fitted around the contour was used as the ground truth for the hand position. The camera captures the upper surface of the hand, whereas our sensor detects the lower surface; therefore, after obtaining the center-point height of the hand's upper surface $z_\mathrm{up}$, we estimated the height of the lower surface as $z_\mathrm{low} = z_\mathrm{up} - 15$ mm.
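A simplified sketch of this camera-side pipeline is given below (points are assumed already transformed to the sensor frame, in mm). Unlike the actual processing, it fits the ellipse directly to the projected points rather than to an OpenCV contour, and it takes the mean $z$ of the hand points as the upper-surface height:

```python
import numpy as np
import cv2

def hand_ground_truth(points_s: np.ndarray):
    """points_s: (n, 3) point cloud in the sensor coordinate frame.
    Returns the ellipse-center (x, y) and the estimated lower-surface z."""
    box = (np.abs(points_s[:, 0]) < 300) & \
          (np.abs(points_s[:, 1]) < 300) & (points_s[:, 2] < 300)
    hand = points_s[box]
    (cx, cy), _, _ = cv2.fitEllipse(hand[:, :2].astype(np.float32))
    z_low = hand[:, 2].mean() - 15.0   # upper-to-lower surface offset (mm)
    return cx, cy, z_low
```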
To estimate the hand position with the proposed sensor, the CoP of the reconstruction image was used as the X- and Y-position estimate, and the capacitance density at the CoP was used to estimate the distance of the hand from the sensor. The human subject did not touch any grounded object but was weakly coupled to ground. To accomplish the estimation, the sensor has to be calibrated. The results in Figure 9 indicate that the estimated positions shrink toward the center as the distance increases, while the position estimate is not correlated with the size of the object. A model was therefore fitted to map the estimated CoP to the ground truth for a more accurate position. The true relationship between the estimated CoP and the ground truth is complicated; in this paper, the linear model (14) is used to simplify the calculation:
$$x = (c_1 z + c_2)\, x_s, \quad y = (c_1 z + c_2)\, y_s, \quad z = z_s, \tag{14}$$
where $c_1$ and $c_2$ were fitted from the calibration experiment using the 80 mm × 80 mm object mentioned previously. The calibration took place at ten heights from 10 mm to 100 mm; for each height, 25 points in the horizontal plane, consistent with the previous experiment, were measured. In addition, (13) was used to fit the output capacitance density at the CoP to the distance of the calibration object.
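Because (14) is linear in $c_1$ and $c_2$ once the x and y relations are stacked, the calibration reduces to ordinary least squares; a minimal sketch under that assumption:

```python
import numpy as np

def fit_scale(cop_xy, z, true_xy):
    """Fit c1, c2 in Equation (14) from calibration data:
    x = (c1*z + c2) * x_s and y = (c1*z + c2) * y_s, stacked.
    cop_xy: (n, 2) estimated CoPs; z: (n,) heights; true_xy: (n, 2)."""
    a = np.concatenate([z * cop_xy[:, 0], z * cop_xy[:, 1]])
    b = np.concatenate([cop_xy[:, 0], cop_xy[:, 1]])
    rhs = np.concatenate([true_xy[:, 0], true_xy[:, 1]])
    (c1, c2), *_ = np.linalg.lstsq(np.column_stack([a, b]), rhs, rcond=None)
    return c1, c2

# Toy check: CoPs shrink toward the center by a factor 1 / (0.002*z + 1)
z = np.repeat(np.arange(10.0, 110.0, 10.0), 2)
true_xy = np.tile(np.array([[50.0, -30.0], [-20.0, 40.0]]), (10, 1))
cop_xy = true_xy / (0.002 * z + 1.0)[:, None]
print(fit_scale(cop_xy, z, true_xy))  # approximately (0.002, 1.0)
```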

4.2.2. Results

Three sets of experiments were carried out under three different hand and arm directions. Figure 12 shows the hand position estimates generated by the camera and the sensor, and Table 4 shows the position difference between the two methods for every set. Each dataset consists of 75 frames of hand movement (around 2.5 s). The three arm directions were (1, 1, 0), (0, 1, 0), and (1, 1, 0) from the hand center in the sensor base frame. The mean distance between the sensor estimate and the depth camera was approximately 15% of the sheet side length across the three sets. In most cases, the correlation coefficient between the proposed sensor and the camera system was larger than 0.9. Because the arm is also detected by the sensor, the results tended to be biased toward the arm at larger distances from the sheet. In addition, the mismatch between the linear model (14) and the actual relationship between the estimated CoP and the ground truth may have contributed to the gap between the sensor and camera results. Finally, as the detection area is limited, the sensitivity at the perimeter is lower than inside the detection area (168 mm × 168 mm), producing larger errors near the perimeter.

4.3. Discussion

The proposed proximity imaging sensor yielded prominent results in single-object 3D position detection. In the performance evaluation, the sensor achieved horizontal errors of 10.0–19.8% of the sheet length and proximity variations of approximately 10%. For the hand-tracking task, the proposed system successfully estimated the hand position in 3D space. As shown in Table 4, the system reached a position difference of 33.4–46.6 mm, which is smaller than half a hand width, and in most cases the correlation coefficient between the depth camera and the proposed system was larger than 0.9. We therefore conclude that the proposed sensor successfully estimated 3D position information.
As shown in Table 2, the precision of positioning a single object was similar across object sizes. This means that using a larger sensor for the same object would not improve positioning accuracy. However, the detection range might change with the shape of the sheet; in particular, increasing the size of the sheet might allow the same object to be detected at a larger distance.
The position accuracy of the proposed sensor can still be improved. The contact impedance at the electrodes may affect the current distribution inside the sensor material, resulting in the distortion of the CoP from the ground truth visible in Figure 9. The error caused by contact impedance can be addressed by applying a scaling function derived from comparison with simulation data [18]. Furthermore, for single-object detection, a more precise fitting model substituting (14) could increase the position accuracy.
The temporal characteristics of the sensor are mainly determined by the ADC sampling rate, the switching speed, and the signal frequency. In this work, the imaging frame rate was 30 Hz due to software limitations; with a more integrated software system, the theoretical frame rate could reach 60 Hz on the current hardware. The signal frequency, on the other hand, is hard to increase because it affects the performance of the sensor, and higher frequencies might also be difficult for a portable ADC.
Although the experiments conducted here demonstrated the feasibility of the proposed sensor, some limitations must be acknowledged. First, it is difficult to find a universal function relating proximity and capacitance density. In the proposed system, a function was hypothesized specifically for flat-bottomed objects; for complex objects, this approximation restricts the precision of the sensor. In future work, we will first investigate how to discern the size of an object from the acquired data.
Second, as the sensing area is directly exposed to the environment, environmental noise affects the performance of the proposed system: if the voltage change triggered by the object is smaller than the noise, the object is undetectable. Currently, simple signal processing is applied to exclude the effect of phase change when obtaining the signal amplitude. In future work, the resolution of the sensor might be increased by adopting more sophisticated signal processing techniques.
Last, multiple-object detection is difficult to realize with the current reconstruction algorithm. When two objects hover very close to the sensor (closer than 1 mm), the algorithm can distinguish them in the reconstructed image; at larger distances, however, distinguishing them is infeasible. Applying different Jacobian matrices for different heights, or implementing different reconstruction approaches such as the D-bar method [26] or deep learning methods [27,28,29,30], might help solve this issue in the future.

5. Conclusions

In this paper, we presented a proximity imaging sensor based on a tomographic approach using a low-cost conductive sheet. A novel theoretical model that images proximity through capacitance–proximity coupling was proposed to realize the sensor design, and a prototype based on the proposed model was implemented and tested. The position accuracy evaluation revealed that the sensor reached a horizontal localization error rate of 10.0–15.8% at different heights inside the detection range. A hand-tracking application was then carried out: compared with the depth camera system, the proposed system achieved a position error of 33.38–46.66 mm, proving its feasibility. The proposed system is therefore expected to be used in various robotic applications, such as teleoperation, input interfaces, and interaction between humans, machines, and environments.

Author Contributions

Conceptualization, Z.L., S.Y., and A.Y.; methodology, Z.L.; software, Z.L.; validation, Z.L., S.Y., and A.Y.; formal analysis, Z.L.; investigation, Z.L.; resources, Z.L.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, S.Y. and A.Y.; visualization, Z.L.; supervision, A.Y.; project administration, S.Y.; funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Japan Society for the Promotion of Science KAKENHI under Grant JP19H04189.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data can be requested by direct email contact with the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ciaparrone, G.; Sánchez, F.L.; Tabik, S.; Troiano, L.; Tagliaferri, R.; Herrera, F. Deep learning in video multi-object tracking: A survey. Neurocomputing 2020, 381, 61–88.
  2. Hu, K.; Ye, J.; Fan, E.; Shen, S.; Huang, L.; Pi, J. A novel object tracking algorithm by fusing color and depth information based on single valued neutrosophic cross-entropy. J. Intell. Fuzzy Syst. 2017, 32, 1775–1786.
  3. Wang, Q.; Zhang, L.; Bertinetto, L.; Hu, W.; Torr, P.H. Fast online object tracking and segmentation: A unifying approach. In Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1328–1338.
  4. Palinko, O.; Rea, F.; Sandini, G.; Sciutti, A. Robot reading human gaze: Why eye tracking is better than head tracking for human–robot collaboration. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Korea, 9–14 October 2016; pp. 5048–5054.
  5. Hubmann, C.; Becker, M.; Althoff, D.; Lenz, D.; Stiller, C. Decision making for autonomous driving considering interaction and uncertain prediction of surrounding vehicles. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium, Los Angeles, CA, USA, 11–14 June 2017; pp. 1671–1678.
  6. Milford, P.N. Augmented Reality Proximity Sensing. U.S. Patent 9606612B2, 28 March 2017.
  7. Hsiao, K.; Nangeroni, P.; Huber, M.; Saxena, A.; Ng, A.Y. Reactive grasping using optical proximity sensors. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 2098–2105.
  8. Kan, W.; Huang, Y.; Zeng, X.; Guo, X.; Liu, P. A dual-mode proximity sensor with combination of inductive and capacitive sensing units. Sens. Rev. 2018, 38, 199–206.
  9. Li, N.; Zhu, H.; Wang, W.; Gong, Y. Parallel double-plate capacitive proximity sensor modelling based on effective theory. AIP Adv. 2014, 4, 1–9.
  10. Hu, X.; Yang, W. Planar capacitive sensors—designs and applications. Sens. Rev. 2010, 30, 24–39.
  11. Nguyen, T.D.; Kim, T.; Noh, J.; Phung, H.; Kang, G.; Choi, H.R. Skin-Type Proximity Sensor by Using the Change of Electromagnetic Field. IEEE Trans. Ind. Electron. 2021, 68, 2379–2388.
  12. Zhang, Y.; Yang, C.; Hudson, S.E.; Harrison, C.; Sample, A. Wall++: Room-Scale Interactive and Context-Aware Sensing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–15.
  13. Ye, Y.; He, C.; Liao, B.; Qian, G. Capacitive proximity sensor array with a simple high sensitivity capacitance measuring circuit for human–computer interaction. IEEE Sens. J. 2018, 18, 5906–5914.
  14. Nagakubo, A.; Alirezaei, H.; Kuniyoshi, Y. A deformable and deformation sensitive tactile distribution sensor. In Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics, Sanya, China, 15–18 December 2007; pp. 1301–1308.
  15. Kato, Y.; Mukai, T.; Hayakawa, T.; Shibata, T. Tactile sensor without wire and sensing element in the tactile region based on EIT method. In Proceedings of the 2007 IEEE SENSORS, Atlanta, GA, USA, 28–31 October 2007; pp. 792–795.
  16. Yao, A.; Yang, C.L.; Seo, J.K.; Soleimani, M. EIT-Based Fabric Pressure Sensing. Comput. Math. Methods Med. 2013, 2013, 1999.
  17. Russo, S.; Nefti-Meziani, S.; Carbonaro, N.; Tognetti, A. A quantitative evaluation of drive pattern selection for optimizing EIT-based stretchable sensors. Sensors 2017, 17, 1999.
  18. Yoshimoto, S.; Kuroda, Y.; Oshiro, O. Tomographic approach for universal tactile imaging with electromechanically coupled conductors. IEEE Trans. Ind. Electron. 2018, 67, 627–636.
  19. Mühlbacher-Karrer, S.; Zangl, H. Object detection based on electrical capacitance tomography. In Proceedings of the 2015 IEEE Sensors Applications Symposium (SAS), Zadar, Croatia, 13–15 April 2015; pp. 1–5.
  20. Mühlbacher-Karrer, S.; Zangl, H. Detection of conductive objects with electrical capacitance tomography. In Proceedings of the 2016 IEEE SENSORS, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3.
  21. Li, Z.; Yoshimoto, S.; Yamamoto, A. Tomographic Approach for Proximity Imaging using Conductive Sheet. In Proceedings of IECON 2020, the 46th Annual Conference of the IEEE Industrial Electronics Society, Singapore, 18–21 October 2020; pp. 748–753.
  22. Cheney, M.; Isaacson, D.; Newell, J.C. Electrical impedance tomography. SIAM Rev. 1999, 41, 85–101.
  23. Silvera-Tawil, D.; Rye, D.; Soleimani, M.; Velonaki, M. Electrical impedance tomography for artificial sensitive robotic skin: A review. IEEE Sens. J. 2014, 15, 2001–2016.
  24. Lionheart, W.; Polydorides, N.; Borsic, A. The Reconstruction Problem; CRC Press: Boca Raton, FL, USA, 2004; pp. 3–64.
  25. Graham, B.; Adler, A. Objective selection of hyperparameter for EIT. Physiol. Meas. 2006, 27, S65–S79.
  26. Knudsen, K.; Lassas, M.; Mueller, J.L.; Siltanen, S. Regularized D-bar method for the inverse conductivity problem. Inverse Probl. Imaging 2009, 3, 599.
  27. Hamilton, S.J.; Hauptmann, A. Deep D-bar: Real-time electrical impedance tomography imaging with deep neural networks. IEEE Trans. Med. Imaging 2018, 37, 2367–2377.
  28. Lin, Z.; Guo, R.; Zhang, K.; Li, M.; Yang, F.; Xu, S.; Abubakar, A. Neural network-based supervised descent method for 2D electrical impedance tomography. Physiol. Meas. 2020, 41, 074003.
  29. Martin, S.; Choi, C.T.M. Nonlinear Electrical Impedance Tomography Reconstruction Using Artificial Neural Networks and Particle Swarm Optimization. IEEE Trans. Magn. 2016, 52, 1–4.
  30. Liu, S.; Wu, H.; Huang, Y.; Yang, Y.; Jia, J. Accelerated structure-aware sparse Bayesian learning for three-dimensional electrical impedance tomography. IEEE Trans. Ind. Inform. 2019, 15, 5033–5041.
Figure 1. Overview of the proximity imaging. The system interprets physical distance information as voltage data on electrodes and then uses an inverse problem solver to map capacitance density (modified from the work in [21]).
Figure 2. Explanation of proximity–capacitance coupling and illustration of the electrode positions (modified from the work in [21]).
Figure 3. Outline of the sensor structure. Voltage followers are used to improve performance.
Figure 4. Sensor construction, circuit, and detection area.
Figure 5. Reconstruction results of the algorithm. The first and second rows show the ground truth and the reconstruction result, respectively. In the ground truth images, the black area has a capacitance density of zero, while the yellow area is the ground-truth target with a value of $1 \times 10^{-9}$ F/m².
Figure 6. Proximity reconstruction results under three different hand-hovering conditions. A threshold at 80% of the range between the minimum and maximum proximity values is applied.
Figure 7. Experiment setup for performance evaluation.
Figure 8. (a) Mean reconstructed capacitance density at the center of position (CoP) versus distance for objects of different sizes; the red line shows the fitted curve for the 80 mm object. (b) Reconstructed capacitance density at the CoP versus distance at different horizontal positions for the 80 mm object (object D).
Figure 9. Reconstructed CoPs compared with ground truth points in the experiment with the 80 mm object. The points and gray crosses represent the reconstructed CoPs and the ground truths, respectively. All units are in millimeters (mm).
Figure 10. Sensor performance for a reference object (object D) and three non-square objects (E, F, and G). The first three columns present the position error of each measurement; the points and gray crosses represent the reconstructed CoPs and the ground truths, respectively. The middle columns show the reconstructed capacitance density value at the CoP of each measurement. The reconstructed image is displayed in the last column, with the object hovering 10 mm above the central point of the sensor. All graph units are in millimeters (mm), and capacitance density values are in F/m².
Figure 11. Hand position recognition process using the depth camera.
Figure 12. Comparison of hand trajectories between the sensor and depth camera. The blue and orange lines represent the sensor and the camera, respectively.
Table 1. Relative standard deviation (RSD) of the capacitance density value at the CoP at every height. The last row presents the detection range of every square object: the height of the measurement point whose reconstructed capacitance density is closest to $3 \times 10^{-7}$ F/m².

| Distance (mm) | Object A (20 mm), RSD (%) | Object B (30 mm), RSD (%) | Object C (40 mm), RSD (%) | Object D (80 mm), RSD (%) |
|---|---|---|---|---|
| 5 | 20.5 | 21.9 | 22.2 | 27.5 |
| 10 | 6.0 | 6.5 | 7.9 | 8.3 |
| 15 | 4.9 | 4.2 | 7.9 | 6.4 |
| 20 | 9.1 | 7.2 | 7.3 | 8.9 |
| 25 | 6.3 | 5.5 | 5.2 | 8.5 |
| 30 | 5.9 | 6.5 | 6.4 | 8.3 |
| 35 | 8.1 | 6.2 | 9.8 | 8.6 |
| 40 | 11.5 | 10.2 | 11.7 | 7.8 |
| 45 | 9.1 | 9.3 | 12.0 | 12.6 |
| 50 | 10.2 | 10.9 | 10.9 | 10.9 |
| Detection range (mm) | 50 | 60 | 70 | 90 |
Table 2. PE data for experimental data and simulation data. Objects A–D are squares with side lengths of 20, 30, 40, and 80 mm, respectively.

| Distance (mm) | A: Mean (mm) | A: STD (mm) | B: Mean (mm) | B: STD (mm) | C: Mean (mm) | C: STD (mm) | D: Mean (mm) | D: STD (mm) |
|---|---|---|---|---|---|---|---|---|
| 5 | 21.518 | 8.806 | 20.806 | 7.316 | 19.935 | 7.644 | 20.725 | 6.860 |
| 10 | 22.241 | 8.681 | 22.676 | 7.769 | 21.999 | 10.089 | 21.414 | 8.833 |
| 15 | 24.601 | 11.214 | 22.909 | 8.696 | 26.662 | 14.314 | 22.067 | 10.144 |
| 20 | 28.592 | 14.277 | 21.983 | 9.313 | 24.732 | 9.994 | 22.504 | 10.675 |
| 25 | 28.030 | 12.690 | 25.554 | 12.157 | 23.756 | 11.524 | 22.815 | 11.779 |
| 30 | 27.743 | 13.933 | 26.624 | 13.009 | 25.251 | 13.458 | 24.401 | 13.887 |
| 35 | 30.909 | 15.684 | 26.621 | 12.205 | 31.335 | 15.177 | 25.189 | 14.571 |
| 40 | 37.804 | 19.256 | 26.554 | 11.726 | 29.406 | 13.474 | 24.721 | 13.829 |
| 45 | 34.174 | 15.813 | 39.648 | 19.574 | 29.178 | 16.170 | 27.489 | 14.647 |
| 50 | 38.869 | 17.132 | 39.191 | 21.504 | 30.349 | 16.173 | 36.275 | 16.522 |
Table 3. Standard deviation (STD) of the CoP coordinates from all samples (50 frames × 25 points) at one measurement point at every height. The STD of a single measurement point indicates the resolution of the proposed sensor at that height. Objects A–D are squares with side lengths of 20, 30, 40, and 80 mm, respectively.

| Distance (mm) | A: x (mm) | A: y (mm) | B: x (mm) | B: y (mm) | C: x (mm) | C: y (mm) | D: x (mm) | D: y (mm) |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.826 | 0.915 | 0.503 | 0.555 | 0.331 | 0.364 | 0.676 | 0.845 |
| 10 | 1.282 | 1.388 | 0.731 | 0.741 | 0.583 | 0.648 | 0.827 | 1.025 |
| 15 | 1.558 | 1.503 | 1.601 | 1.192 | 0.765 | 0.861 | 1.187 | 1.461 |
| 20 | 1.805 | 1.759 | 1.424 | 1.737 | 1.829 | 1.917 | 1.339 | 1.544 |
| 25 | 2.328 | 2.400 | 1.481 | 1.573 | 1.877 | 1.483 | 1.654 | 1.838 |
| 30 | 3.030 | 2.701 | 1.669 | 1.993 | 1.188 | 1.453 | 1.560 | 1.809 |
| 35 | 3.545 | 2.907 | 2.221 | 2.919 | 1.735 | 1.878 | 2.227 | 2.650 |
| 40 | 2.995 | 2.540 | 2.544 | 2.524 | 2.266 | 2.406 | 2.268 | 2.650 |
| 45 | 3.770 | 4.120 | 2.903 | 2.679 | 3.370 | 2.943 | 2.597 | 2.871 |
| 50 | 4.217 | 3.776 | 3.052 | 2.842 | 2.434 | 2.550 | 2.528 | 2.460 |
Table 4. Position difference between camera estimation and sensor estimation.

| | Set 1 | Set 2 | Set 3 |
|---|---|---|---|
| Arm direction | (1, 1, 0) | (0, 1, 0) | (1, 1, 0) |
| Mean distance (mm) | 46.66 | 33.38 | 45.14 |
| Mean X difference (mm) | 32.45 | 18.7 | 21.7 |
| Mean Y difference (mm) | 10.62 | 21.4 | 23.2 |
| Mean Z difference (mm) | 6.31 | 9.5 | 10.4 |
| X correlation coefficient | 0.905 | 0.954 | 0.892 |
| Y correlation coefficient | 0.949 | 0.981 | 0.907 |
| Z correlation coefficient | 0.851 | 0.933 | 0.725 |