Pupil and Glint Detection Using Wearable Camera Sensor and Near-Infrared LED Array

School of Mechatronical Engineering, Beijing Institute of Technology, 5 South Zhongguancun Street, Haidian District, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2015, 15(12), 30126-30141; https://doi.org/10.3390/s151229792
Submission received: 13 October 2015 / Revised: 17 November 2015 / Accepted: 27 November 2015 / Published: 2 December 2015
(This article belongs to the Section Physical Sensors)

Abstract

This paper proposes a novel pupil and glint detection method for a gaze tracking system that uses a wearable camera sensor and a near-infrared LED array. A novel circular ring rays location (CRRL) method is proposed for pupil boundary point detection. First, improved Otsu optimal threshold binarization, an opening-and-closing operation, and projections of the 3D gray-level histogram are utilized to estimate a rough pupil center and radius. Second, a circular ring area that encloses the pupil edge is determined from the rough pupil center and radius. Third, a series of rays is shot from the inner to the outer ring to collect pupil boundary points, and interference points are eliminated by calculating the gradient amplitude. Finally, an improved total least squares method is proposed to fit the collected pupil boundary points. In addition, the improved total least squares method is applied to solve a deformation of the Gaussian function to calculate the glint center. The experimental results show that the proposed method is more robust and accurate than conventional detection methods: even when interference factors such as glints and natural light reflection lie on the pupil contour, the pupil boundary points and center can be detected accurately. The proposed method contributes to enhancing the stability, accuracy, and real-time performance of gaze tracking systems.

1. Introduction

Human beings acquire 80%–90% of information about the outside world through their eyes, so visual perception can be inferred through eye gaze tracking. With the continuing development of computer/machine vision technology, gaze tracking technology has been applied more and more widely in fields such as medicine [1], production testing [2], human-machine interaction [3,4], military aviation [5,6], etc.
As one of the traditional gaze tracking methods [7,8,9,10,11,12], the pupil center-corneal reflection (PCCR) technique has been developed and improved continually in recent years [13,14,15,16,17,18]. Pupil and glint (corneal reflection) center detection plays a crucial role in gaze tracking methods based on PCCR. Images acquired by a CCD camera always contain interference factors such as eyelashes, eyelids, shadows, and natural light reflection, which cause false boundary points around the pupil contour. To ensure the accuracy of gaze estimation, a robust and accurate method of pupil and glint detection is essential.
Previous scholars have carried out a great deal of research on pupil and glint detection. Ebisawa [19] proposes a pupil detection technique using two alternating infrared light sources and the image difference of bright and dark eye images. The bright/dark eye image is acquired by switching on the light source coaxial/uncoaxial with the camera during the odd/even field alternately, which limits the sampling time. The glint position stays almost fixed; to detect it, the pupil brightness should be as low as possible. Although the image difference method is simple, the switching of light sources may affect its stability. To overcome the limitations of this technique, methods utilizing a single eye image have been proposed continually.
In [13], in order to obtain an accurate pupil center position, double ellipse fitting (rough and detailed) is performed to eliminate false boundary points; however, it is difficult to eliminate false boundary points around the pupil contour, and the double ellipse fitting costs considerable time. The glint is detected by searching near the pupil, and its centroid is then calculated as the center position. The uncertain search time and result for the glint can make the method unstable. Yoo et al. [20] acquire a rough pupil boundary by iterative projection, and snakes are utilized to converge to the pupil boundary; the elimination of false boundary points is not considered. The glint search region is limited by the rough pupil boundary, and finally the pupil and glint center positions are determined by ellipse fitting. Gwon et al. [21] locate the approximate pupil area using the CED method, and then the precise pupil center is obtained by calculating the geometric center of the black pixels. Before pupil detection, glints are erased using neighboring pixels in the horizontal direction; this erasure introduces errors into the pixels around the pupil contour and degrades the accuracy of pupil center location. To better locate the pupil boundary, Li et al. [22] develop a feature-based method. In the feature detection process, pupil contour candidates are detected along a series of rays shot from a best guess of the pupil center and marked with crosses. RANSAC is applied to differentiate pupil contour points (inliers) from interference points (outliers). When interference factors such as glints and natural light reflection are located on or around the pupil contour, some interference points and pupil contour points are mixed together; in this case, RANSAC cannot reliably differentiate them, and the location accuracy of the pupil center suffers. Krishnamoorthi and Annapoorani [23] propose a boundary extraction technique to localize the pupil. An orthogonal polynomial model is adopted to analyze the structure of an eye image, Hartley's statistical hypothesis test is employed in edge map extraction, and a where-to-go approach is proposed to find pupil boundary points with the assistance of weight assignment. Although the algorithm can locate pupil boundary points accurately, it is limited by its boundary assumption.
The remainder of this paper is organized as follows: Section 2 presents the proposed method in detail. Section 3 describes the experiments and shows the experimental results. Section 4 concludes the whole work.

2. Proposed Method

A novel and robust method of pupil and glint detection using a wearable camera sensor and near-infrared LED array for a gaze tracking system is proposed in this paper. Compared with the original Starburst, the proposed circular ring rays location (CRRL) method has higher stability, accuracy, and real-time performance. The method overcomes the location uncertainty of the initial shooting point of the rays, omits the process of shooting rays back towards the start point to collect more pupil boundary points, and omits RANSAC because interference points can be eliminated effectively; the pupil center can be detected accurately even when interference points are located on or around the pupil contour. An improved Otsu method is employed to acquire the eye's binary image. Part of the remaining interference factors (including eyelashes and eyelids) are eliminated by an opening-and-closing operation with structure elements of different sizes. Projections of the 3D gray-level histogram are utilized to estimate a rough pupil radius and center position, and a circular ring area is determined by this provisional pupil radius and center. A series of rays with equal angular spacing is shot from the inner to the outer ring to detect pupil boundary points; the gradient amplitude at each pixel is used to eliminate false boundary points. Spline interpolation is performed in the neighborhood of the boundary points to obtain subpixel-precise ones. An improved total least squares method is developed to fit an ellipse, and the pupil center position is then calculated from the fitted elliptic equation. Because the gray levels of glint pixels are higher than anywhere else, the rough glint region is estimated by binarization with a fixed threshold. According to the glint's illumination intensity (which follows a Gaussian distribution), a deformation of the Gaussian function solved by the improved total least squares is utilized to calculate the glint center.

2.1. Proposed Gaze Tracking Device

In this study, we develop a wearable gaze tracking device composed of a helmet, a monitor, an array of four near-infrared light emitting diodes (NIR LEDs), and a microspur camera, as shown in Figure 1. Because the imaging distance is limited to 3–5 cm, a microspur camera is adopted to acquire the eye image. The image resolution is 640 × 480 pixels (CCD sensor). The wavelength of each NIR LED is 850 nm and its power is less than 5 mW, so the experimental system brings no harm to human eyes [24].
Figure 1. Proposed gaze tracking device.

2.2. Pupil Detection

2.2.1. Binarization and Opening-and-Closing Operation

An improved Otsu method is employed to obtain the eye binary image in this paper. First proposed by Otsu in 1979, the Otsu method is based on adaptive threshold selection [25]. The original eye image is shown in Figure 2a, and the gray-level histogram of the eye image is shown in Figure 2b.
Figure 2. (a) Original eye image; (b) Gray-level histogram of eye image.
Assuming the number of pixels with gray level $i$ in the eye image is $n_i$, all gray levels are divided into three groups, as shown in Figure 2b:
$$G_0 = \{0, \ldots, T_1\}, \quad G_1 = \{T_1 + 1, \ldots, T_2\}, \quad G_2 = \{T_2 + 1, \ldots, 255\} \quad (1)$$
Group $G_0$ mainly contains the gray levels of black areas such as the pupil and eyelashes; group $G_1$ mainly contains the gray levels of the iris and shadows; group $G_2$ mainly contains the gray levels of the cornea and the surrounding skin. Let the occurrence probabilities of $G_0$, $G_1$, $G_2$ be $\omega_0$, $\omega_1$, $\omega_2$, and the corresponding mean gray levels be $h_0$, $h_1$, $h_2$:
$$\begin{cases} \omega_0 = \sum_{i=0}^{T_1} p_i, & h_0 = \dfrac{1}{\omega_0} \sum_{i=0}^{T_1} i\,p_i \\[4pt] \omega_1 = \sum_{i=T_1+1}^{T_2} p_i, & h_1 = \dfrac{1}{\omega_1} \sum_{i=T_1+1}^{T_2} i\,p_i \\[4pt] \omega_2 = \sum_{i=T_2+1}^{255} p_i = 1 - \omega_0 - \omega_1, & h_2 = \dfrac{1}{\omega_2} \sum_{i=T_2+1}^{255} i\,p_i = \dfrac{h - \omega_0 h_0 - \omega_1 h_1}{\omega_2} \end{cases} \quad (2)$$
Here $p_i = n_i / N$ is the occurrence probability of each gray level, $N = \sum_{i=0}^{255} n_i$ is the total number of pixels, and $h = \sum_{i=0}^{255} i\,p_i$ is the average gray level of the eye image.
The class variances are defined as
$$\sigma_0^2 = \sum_{i=0}^{T_1} (i - h_0)^2 \frac{p_i}{\omega_0}, \quad \sigma_1^2 = \sum_{i=T_1+1}^{T_2} (i - h_1)^2 \frac{p_i}{\omega_1}, \quad \sigma_2^2 = \sum_{i=T_2+1}^{255} (i - h_2)^2 \frac{p_i}{\omega_2} \quad (3)$$
The within-class variance is defined as
$$\sigma_W^2 = \omega_0 \sigma_0^2 + \omega_1 \sigma_1^2 + \omega_2 \sigma_2^2 \quad (4)$$
We develop an improved, fast method for solving the optimal thresholds. According to Equations (3) and (4), the within-class variance is transformed into the integral form of Equation (5):
$$\sigma_W^2 = \int_0^{T_1} (i - h_0)^2 \frac{p_i}{\omega_0}\,di + \int_{T_1+1}^{T_2} (i - h_1)^2 \frac{p_i}{\omega_1}\,di + \int_{T_2+1}^{255} (i - h_2)^2 \frac{p_i}{\omega_2}\,di \quad (5)$$
Taking the partial derivatives with respect to $T_1$ and $T_2$ on both sides of Equation (5) and setting them to zero gives Equation (6):
$$\begin{cases} 2T_1 - h_0 - h_1 = 0 \\ 2T_2 - h_1 - h_2 = 0 \end{cases} \quad (6)$$
The threshold-selection criterion of the Otsu method is expanded as Equation (7):
$$g(T_1, T_2) = \mathop{\mathrm{Arg\,Max}}_{0 < T_1 < T_2 < 255} \left\{ \omega_0 (h_0 - h)^2 + \omega_1 (h_1 - h)^2 + \omega_2 (h_2 - h)^2 \right\} \quad (7)$$
According to Equations (6) and (7), the optimal thresholds can be solved. For each pixel in the original eye image, the mean gray level of its 3 × 3 neighborhood is calculated and substituted for its original gray level. The occurrence probabilities of the new gray levels are then used to solve for the optimal segmentation thresholds $T_1$ and $T_2$ according to Equations (6) and (7). According to the distribution of the eye image's gray-level histogram, the value of $T_1$ is restricted to the range 0–50 and the value of $T_2$ to the range $T_1$–150. The maximum value of $g(T_1, T_2)$ is calculated according to Equation (7), and the corresponding $(T_1, T_2)$ is the optimal threshold pair. A code sketch of this restricted search is given below.
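The following is a minimal sketch of the restricted two-threshold search in Python with NumPy and OpenCV; the function and variable names are ours, not the authors'.

```python
import cv2
import numpy as np

def improved_otsu_thresholds(gray):
    """Restricted two-threshold Otsu search (sketch). Each pixel is first
    replaced by its 3x3 neighborhood mean, then the pair (T1, T2) maximizing
    the between-class variance of Equation (7) is searched with T1 in (0, 50]
    and T2 in (T1, 150], as described in the text."""
    smoothed = cv2.blur(gray, (3, 3))                    # 3x3 mean filter
    hist = np.bincount(smoothed.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                                # probabilities p_i
    levels = np.arange(256.0)
    h = (levels * p).sum()                               # global mean gray level
    best, best_score = (1, 2), -1.0
    for t1 in range(1, 51):
        w0 = p[:t1 + 1].sum()
        if w0 == 0.0:
            continue
        h0 = (levels[:t1 + 1] * p[:t1 + 1]).sum() / w0
        for t2 in range(t1 + 1, 151):
            w1 = p[t1 + 1:t2 + 1].sum()
            w2 = 1.0 - w0 - w1
            if w1 == 0.0 or w2 <= 0.0:
                continue
            h1 = (levels[t1 + 1:t2 + 1] * p[t1 + 1:t2 + 1]).sum() / w1
            h2 = (h - w0 * h0 - w1 * h1) / w2            # from Equation (2)
            score = (w0 * (h0 - h) ** 2 + w1 * (h1 - h) ** 2
                     + w2 * (h2 - h) ** 2)               # Equation (7)
            if score > best_score:
                best_score, best = score, (t1, t2)
    return best
```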
The computational complexity of the new method for solving the optimal thresholds is decreased. As shown in Table 1, the segmentation time of the improved method is less than that of the original Otsu method, which contributes to the real-time performance of eye gaze tracking.
Table 1. Segmentation time.

Method      Original Otsu    Improved Otsu
Time (ms)   32.4             17.1
In order to extract the pupil, threshold $T_1$ is used for binarization. The eye's binary image is shown in Figure 3.
Figure 3. Eye's binary image with Otsu optimal threshold.
To eliminate the remaining interference points (mainly remnant eyelashes and eyelids), an opening-and-closing operation with structure elements of different sizes is employed. According to the shape and size of the interference factors shown in Figure 3, a $0.3T_1 \times 0.3T_1$ square structure element is used in the opening operation and a $0.7T_1 \times 0.7T_1$ square structure element in the closing operation. The result is shown in Figure 4.
Figure 4. Result of opening-and-closing operation.
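A sketch of this morphological step with OpenCV follows; rounding the $0.3T_1$ and $0.7T_1$ element sizes to integers of at least 1 is our assumption.

```python
import cv2
import numpy as np

def open_close_filter(binary, t1):
    """Opening-and-closing with the size ratios given above (sketch).
    The 0.3*T1 and 0.7*T1 square structure elements follow the text."""
    k1 = max(1, int(round(0.3 * t1)))            # opening element size
    k2 = max(1, int(round(0.7 * t1)))            # closing element size
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN,
                              np.ones((k1, k1), np.uint8))
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE,
                            np.ones((k2, k2), np.uint8))
```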

2.2.2. Rough Location of Pupil Area and Center

The pupil image acquired through the opening-and-closing operation presents an elliptical shape (irregular at glints and natural light reflections). The 3D gray-level histogram of the opening-and-closing result is shown in Figure 5a, and its projections along the x and y axes are shown in Figure 5b. The rough pupil area and center position are determined by the distribution of gray levels in the projection image: the rough pupil area lies in a rectangular box with length $l_2$ and width $l_4$, the estimated pupil center is defined as $o_p = (l_1 + l_2/2,\; l_3 + l_4/2)$, and the estimated pupil radius is defined as $r_p = (l_2 + l_4)/4$.
Figure 5. (a) 3D gray-level histogram of opening-and-closing operation result; (b) Rough location of pupil area and center.
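A sketch of this rough localization, assuming the pupil is the set of nonzero pixels that survive the morphological filtering:

```python
import numpy as np

def rough_pupil_location(pupil_mask):
    """Rough pupil center o_p and radius r_p from the projections of the
    filtered binary image (sketch)."""
    ys, xs = np.nonzero(pupil_mask)
    l1, l3 = xs.min(), ys.min()                        # box corner
    l2, l4 = xs.max() - xs.min(), ys.max() - ys.min()  # box length and width
    center = (l1 + l2 / 2.0, l3 + l4 / 2.0)            # o_p in the text
    radius = (l2 + l4) / 4.0                           # r_p in the text
    return center, radius
```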

2.2.3. Collection of Pupil Boundary Points

A novel circular ring rays location (CRRL) method, based on a modified Starburst, is proposed for pupil boundary point detection. The proposed method has the following advantages over the original Starburst method. First, a series of rays is shot from the inner circular ring to the outer circular ring instead of shooting rays from a guessed point. In the original Starburst, a second round of rays is needed to collect more pupil contour candidates; in our method, shooting rays once collects sufficient pupil boundary points to fit an ellipse, which shortens the boundary point collection stage. This style of shooting rays also saves calculation time because the rays are shorter than those in the original Starburst method. Second, RANSAC is used in the original Starburst to distinguish pupil contour points (inliers) from interference points (outliers), which costs much time. We instead calculate the gradient amplitude at pixels neighboring the pupil boundary in advance, using the pixel gray values of the pupil and iris regions; a gradient amplitude threshold is then set to detect pupil boundary points, and the number of boundary points detected on each ray is counted to eliminate interference points. The experimental results show that this interference point elimination is suitable and effective in the CRRL method. Third, cubic spline interpolation is applied in the neighborhood of the collected pupil boundary points to determine subpixel-precise boundary points, which enhances the accuracy of pupil center location.
The collection steps for pupil boundary points are presented in detail below (a code sketch follows the figure):
Input: Gray-level eye image.
Output: Point set of pupil boundary points.
Step 1: Building of the circular ring area. As shown in Figure 6, in order to build a circular ring area that encloses the pupil boundary, the estimated pupil center $o_p$ is taken as the center of the inner and outer rings (green lines), with radii $0.5 r_p$ and $1.5 r_p$, respectively.
Step 2: Location of pupil boundary points. Thirty-six rays (with an equal angular gap of $10°$) are shot from the inner to the outer ring. The gradient $\nabla f = [g_x\ g_y]^T$ is calculated at each pixel location $(x, y)$ along the shooting direction of each ray, and $M(x, y) = \sqrt{g_x^2 + g_y^2}$ is taken as the gradient amplitude. According to the variation range of the gradient amplitude near the pupil contour, a gradient amplitude threshold range of $[1.3\delta, 1.5\delta]$ ($\delta = T_2 - T_1$) is set in advance to select pupil boundary points. If the gradient amplitude at pixel location $(x, y)$ along the shooting direction falls within this range, the pixel $(x, y)$ is recorded as a pupil boundary point. The number of matching pixels on each ray is counted.
Step 3: Elimination of interference points. When interference factors (glints and natural light reflections) are located on or around the pupil contour, the number of pixels matching the threshold on a ray may be greater than one. In this case, all boundary points recorded on that ray are eliminated to avoid the interference caused by glints and natural light reflection.
Step 4: Subpixel-precise location of pupil boundary points. To enhance the location accuracy, cubic spline interpolation [26] is applied in the neighborhood of the boundary points collected in Step 2 to determine subpixel-precise pupil boundary points.
Step 5: Marking of pupil boundary points. As shown in Figure 6, the determined pupil boundary points are marked with a yellow "+". All determined candidates are collected into one point set for ellipse fitting.
Figure 6. Extraction result of pupil boundary points.
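The following sketch implements the ray-shooting and interference-elimination logic of Steps 1–3; the subpixel refinement of Step 4 is omitted, and the 0.5-pixel ray step is our assumption.

```python
import numpy as np

def crrl_boundary_points(gray, center, radius, t1, t2, n_rays=36):
    """CRRL collection sketch: rays are shot from the inner ring (0.5*r_p)
    to the outer ring (1.5*r_p); pixels whose gradient amplitude falls in
    [1.3*delta, 1.5*delta] (delta = T2 - T1) are kept, and any ray with
    more than one hit is discarded as interference (Step 3)."""
    gy, gx = np.gradient(gray.astype(float))
    amp = np.hypot(gx, gy)                       # gradient amplitude M(x, y)
    lo, hi = 1.3 * (t2 - t1), 1.5 * (t2 - t1)
    cx, cy = center
    points = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        hits, seen = [], set()
        for r in np.arange(0.5 * radius, 1.5 * radius, 0.5):
            xi = int(round(cx + r * np.cos(theta)))
            yi = int(round(cy + r * np.sin(theta)))
            if (xi, yi) in seen:                 # skip revisited pixels
                continue
            seen.add((xi, yi))
            if (0 <= yi < amp.shape[0] and 0 <= xi < amp.shape[1]
                    and lo <= amp[yi, xi] <= hi):
                hits.append((xi, yi))
        if len(hits) == 1:                       # keep unambiguous rays only
            points.extend(hits)
    return np.array(points, dtype=float)
```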

2.2.4. Ellipse Fitting

Total least squares (TLS) [27,28] was first formulated in 1980. An improved total least squares method is developed in this paper to fit the collected pupil boundary points. Compared with the least squares (LS) method, total least squares accounts for errors in both the independent and dependent variables: the matrix equation $Ax = b$ is solved by considering errors in both the data matrix $A$ and the observation vector $b$. To compensate for these errors, a perturbation vector $e$ is applied to the observation vector $b$ and, simultaneously, a perturbation matrix $E$ is applied to the data matrix $A$, with both $e$ and $E$ of minimum size.
Assuming the elliptic equation of the eye pupil is $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$, the constraint $A + C = 1$ [29] is imposed in order to obtain higher fitting accuracy. The elliptic equation is then rewritten as Equation (8):
$$B x_i y_i + C (y_i^2 - x_i^2) + D x_i + E y_i + F = -x_i^2 \quad (8)$$
where $i = 1, 2, \ldots, n$ and $n$ is the number of extracted pupil boundary points. Denoting the errors in pixel position $(x, y)$ by $(v_x, v_y)$, the ideal form of Equation (8) is defined as
$$B (x_i y_i - v_{x_i} y_i) + C \left[ (y_i^2 - x_i^2) - (v_{y_i}^2 - v_{x_i}^2) \right] + D (x_i - v_{x_i}) + E (y_i - v_{y_i}) + F = -(x_i^2 - v_{x_i}^2) \quad (9)$$
Equation (8) is written in matrix form as
$$M \tau = Y \quad (10)$$
where
$$M = \begin{bmatrix} x_1 y_1 & y_1^2 - x_1^2 & x_1 & y_1 & 1 \\ x_2 y_2 & y_2^2 - x_2^2 & x_2 & y_2 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ x_n y_n & y_n^2 - x_n^2 & x_n & y_n & 1 \end{bmatrix}, \quad \tau = [B\ C\ D\ E\ F]^T, \quad Y = [-x_1^2\ \ {-x_2^2}\ \ \cdots\ \ {-x_n^2}]^T.$$
Let the augmented matrix be $H = [Y, M]$; its singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min}$ are calculated using the SVD method. According to the subspace interpretation of total least squares, the total least squares solution of the matrix equation $M\tau = Y$ is deduced as
$$\tau_{\mathrm{TLS}} = (M^T M - \sigma_{\min}^2 I)^{-1} M^T Y \quad (11)$$
where $\sigma_{\min}$ is the minimal singular value of the augmented matrix $H$; accordingly, $\sigma_{\min}^2$ is the common variance of the components of the perturbation matrix $D = [e, E]$.
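As a minimal illustration of Equation (11), assuming NumPy's standard SVD in place of the improved solver introduced below:

```python
import numpy as np

def tls_solve(M, Y):
    """TLS solution of M @ tau = Y (sketch of Equation (11)): sigma_min is
    the smallest singular value of the augmented matrix H = [Y, M]."""
    H = np.column_stack([Y, M])
    sigma_min = np.linalg.svd(H, compute_uv=False)[-1]
    k = M.shape[1]
    return np.linalg.solve(M.T @ M - sigma_min ** 2 * np.eye(k), M.T @ Y)
```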
Because the constant column of the coefficient matrix $M$ cannot be handled in the SVD, we propose an improved method for the SVD solution. Setting $\alpha_{1i} = x_i y_i$, $\alpha_{2i} = y_i^2 - x_i^2$, $\alpha_{3i} = x_i$, $\alpha_{4i} = y_i$, and $\beta_i = -x_i^2$, the error equation of the ellipse can be defined as
$$v_i = B \alpha_{1i} + C \alpha_{2i} + D \alpha_{3i} + E \alpha_{4i} + F - \beta_i \quad (12)$$
Here we set
$$\bar{\alpha}_r = \frac{1}{n} \sum_{i=1}^{n} \alpha_{ri} \ (r = 1, 2, 3, 4), \qquad \bar{\beta} = \frac{1}{n} \sum_{i=1}^{n} \beta_i \quad (13)$$
Therefore, the coefficient $F$ is expressed as
$$F = \bar{\beta} - \bar{\alpha}_1 B - \bar{\alpha}_2 C - \bar{\alpha}_3 D - \bar{\alpha}_4 E = \bar{\beta} - \bar{\alpha}^T \tau \quad (14)$$
where $\bar{\alpha} = [\bar{\alpha}_1\ \bar{\alpha}_2\ \bar{\alpha}_3\ \bar{\alpha}_4]^T$ and $\tau = [B\ C\ D\ E]^T$.
Substituting Equation (14) into Equation (12), we obtain
$$\varepsilon = X \tau - Z \quad (15)$$
where $\varepsilon = [v_1\ v_2\ \cdots\ v_n]^T$,
$$X = \begin{bmatrix} \alpha_{11} - \bar{\alpha}_1 & \alpha_{21} - \bar{\alpha}_2 & \alpha_{31} - \bar{\alpha}_3 & \alpha_{41} - \bar{\alpha}_4 \\ \alpha_{12} - \bar{\alpha}_1 & \alpha_{22} - \bar{\alpha}_2 & \alpha_{32} - \bar{\alpha}_3 & \alpha_{42} - \bar{\alpha}_4 \\ \vdots & \vdots & \vdots & \vdots \\ \alpha_{1n} - \bar{\alpha}_1 & \alpha_{2n} - \bar{\alpha}_2 & \alpha_{3n} - \bar{\alpha}_3 & \alpha_{4n} - \bar{\alpha}_4 \end{bmatrix}, \quad \tau = [B\ C\ D\ E]^T, \quad Z = [\beta_1 - \bar{\beta}\ \ \beta_2 - \bar{\beta}\ \ \cdots\ \ \beta_n - \bar{\beta}]^T.$$
The total least squares solution of the matrix equation $\varepsilon = X\tau - Z$ is
$$\tau_{\mathrm{TLS}} = (X^T X - \gamma_{\min}^2 I)^{-1} X^T Z \quad (16)$$
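Continuing the sketch, the centered system of Equations (12)–(15) can be assembled from the collected boundary points and handed to `tls_solve` above; note that this sketch uses a standard SVD, whereas the paper applies the improved SVD described next.

```python
import numpy as np

def fit_ellipse_tls(points):
    """Assemble and solve the centered system of Equations (12)-(16)
    (sketch). Returns [B, C, D, E, F], with A = 1 - C implied by the
    constraint A + C = 1."""
    x, y = points[:, 0], points[:, 1]
    alpha = np.column_stack([x * y, y ** 2 - x ** 2, x, y])  # alpha_{1..4,i}
    beta = -x ** 2                                           # beta_i
    X = alpha - alpha.mean(axis=0)                           # centered data
    Z = beta - beta.mean()
    tau = tls_solve(X, Z)                                    # [B, C, D, E]
    F = beta.mean() - alpha.mean(axis=0) @ tau               # Equation (14)
    return np.append(tau, F)
```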
A new augmented matrix is defined as $L = [Z, X]$. In order to improve the fitting accuracy and stability of TLS, a novel, fast SVD solution method is used to obtain the singular values of matrix $L$. $L$ is written in SVD form in Equation (17):
$$L = U \Sigma V^T \quad (17)$$
Matrix Q is defined as
$$Q = L^T L = (U \Sigma V^T)^T (U \Sigma V^T) = (V \Sigma U^T)(U \Sigma V^T) = V \Sigma^2 V^T \quad (18)$$
Equation (19) shows the product of two different rows $L_s$, $L_t$ of matrix $L$:
$$Q_{st} = [L_s, L_t]^T [L_s, L_t] \quad (19)$$
where $1 \le s \le 4$, $1 \le t \le 4$, $s \ne t$. The eigenvalue matrix $\Sigma_{st}$ is calculated as
$$\Sigma_{st} = \Delta_{st}^T Q_{st} \Delta_{st} \quad (20)$$
The rows of matrix $L$ are then redefined as $[L_s, L_t] \Delta_{st}$. An orthogonal transformation is conducted for any two redefined rows of matrix $L$, and the off-diagonal elements of matrix $Q$ are eliminated. The eigenvalue matrix of $Q$ is solved as
$$\Sigma = V^T Q V = V^T (L^T L) V = \begin{bmatrix} \gamma_1^2 & & & 0 \\ & \gamma_2^2 & & \\ & & \ddots & \\ 0 & & & \gamma_m^2 \end{bmatrix} \quad (21)$$
$\gamma_1, \gamma_2, \ldots, \gamma_m$ ($\gamma_1 \ge \gamma_2 \ge \cdots \ge \gamma_m$) are the singular values of matrix $L$. $\tau = [B\ C\ D\ E]^T$ is calculated according to Equation (16), and the pupil center is then obtained through Equation (22):
$$x_p = \frac{BE - 2CD}{4AC - B^2}, \qquad y_p = \frac{BD - 2AE}{4AC - B^2} \quad (22)$$
where $A = 1 - C$.
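As a small illustration, the center formula of Equation (22) can be evaluated directly from the fitted coefficients; `coeffs` here is the hypothetical output of the fitting sketch above.

```python
def ellipse_center(coeffs):
    """Pupil center from Equation (22); coeffs = [B, C, D, E, F] as
    returned by fit_ellipse_tls, with A = 1 - C."""
    B, C, D, E = coeffs[0], coeffs[1], coeffs[2], coeffs[3]
    A = 1.0 - C
    denom = 4.0 * A * C - B ** 2
    xp = (B * E - 2.0 * C * D) / denom
    yp = (B * D - 2.0 * A * E) / denom
    return xp, yp
```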
The sensitivity of the TLS problem depends on the ratio $r = (\tilde{\sigma}_p - \sigma_{p+1})/\tilde{\sigma}_p$, where $\tilde{\sigma}_p$, $\sigma_{p+1}$, and $\sigma_p$ are the least singular values of matrix $X$ (or $M$), $L$ (or $H$), and $X_0$ (or $M_0$, the coefficient matrix of the corresponding LS problem), respectively. The larger the ratio $r$, the more accurate TLS is relative to LS. During the ellipse fitting of pupil boundary points, the ratios $r$ for the TLS problem solved by SVD and by the improved SVD are 0.82 and 0.94, respectively, so the improved TLS achieves higher accuracy than the original TLS. The improved TLS method compensates for errors in pixel location, and the fitting result is closer to the ideal form of the elliptic equation (Equation (9)).
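To make the improved SVD step of Equations (18)–(21) concrete, the following sketch computes the singular values of $L$ by cyclic Jacobi rotations on $Q = L^T L$; this is our interpretation of the pairing and orthogonal-transformation scheme, not the authors' exact implementation.

```python
import numpy as np

def singular_values_jacobi(L, sweeps=30, tol=1e-12):
    """Singular values of L via cyclic Jacobi rotations on Q = L^T L
    (sketch). Returns gamma_1 >= ... >= gamma_m as in Equation (21)."""
    Q = L.T @ L
    m = Q.shape[0]
    for _ in range(sweeps):
        off = np.sqrt((Q ** 2).sum() - (np.diag(Q) ** 2).sum())
        if off < tol:                            # Q is (nearly) diagonal
            break
        for s in range(m - 1):
            for t in range(s + 1, m):
                if abs(Q[s, t]) < tol:
                    continue
                # 2x2 rotation that zeroes the off-diagonal pair (s, t)
                theta = 0.5 * np.arctan2(2.0 * Q[s, t], Q[s, s] - Q[t, t])
                c, sn = np.cos(theta), np.sin(theta)
                R = np.eye(m)
                R[s, s] = R[t, t] = c
                R[s, t], R[t, s] = -sn, sn
                Q = R.T @ Q @ R
    return np.sqrt(np.clip(np.sort(np.diag(Q))[::-1], 0.0, None))
```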
The result of the ellipse fitting is shown in Figure 7. The red ellipse represents the fitted pupil contour, and the red marker denotes its center.
Figure 7. Ellipse fitting result.

2.3. Glint Detection

Because the number of pixels in a glint region is limited and a halo exists around the glint contour, the proposed pupil detection method is not suitable for glints. Instead, improved Gaussian fitting is utilized to locate the glint center.

2.3.1. Rough Location of Glint Region

Because the illumination intensity of a glint is high and its gray levels are close to 255, a fixed threshold of 240 is adopted to binarize the eye image and extract the glints. A 2 × 2 square structure element is used in an opening-and-closing operation to filter the binary image. As shown in Figure 8, red rectangular boxes mark the rough glint regions.
Figure 8. Rough location of glint region.
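A sketch of this rough glint localization with OpenCV; returning the bounding box of each connected component is our reading of the red boxes in Figure 8.

```python
import cv2
import numpy as np

def rough_glint_regions(gray, threshold=240):
    """Rough glint localization (sketch): fixed-threshold binarization at
    240, opening-and-closing with a 2x2 square structure element, then the
    bounding box of each remaining connected component."""
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    kernel = np.ones((2, 2), np.uint8)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    return [tuple(stats[i, :4]) for i in range(1, n)]   # (x, y, w, h) boxes
```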

2.3.2. Gaussian Fitting

Figure 9a shows an enlarged glint region, and the 3D gray-level histogram of the enlarged glint is shown in Figure 9b. The glint's illumination intensity follows a Gaussian distribution [30].
Figure 9. (a) Enlarged glint region; (b) 3D gray-level histogram of enlarged glint.
The Gaussian function of the glint illumination intensity is defined as Equation (23):
$$I(x, y) = H \cdot e^{-\left[ \frac{(x - x_g)^2}{2\sigma_x^2} + \frac{(y - y_g)^2}{2\sigma_y^2} \right]} \quad (23)$$
$I(x, y)$ is the gray level of pixel $(x, y)$ in the glint region. As the amplitude of the Gaussian distribution, $H$ is the highest gray level in the glint region; $(x_g, y_g)$ represents the glint center to be calculated; and $\sigma_x$ and $\sigma_y$ are the standard deviations of the gray level in the horizontal and vertical directions, respectively. Taking the logarithm of Equation (23) and rearranging gives
$$z = a x^2 + b y^2 + c x + d y + e \quad (24)$$
where $z = \ln I(x, y)$, $a = -1/(2\sigma_x^2)$, $b = -1/(2\sigma_y^2)$, $c = x_g/\sigma_x^2$, $d = y_g/\sigma_y^2$, and $e = -x_g^2/(2\sigma_x^2) - y_g^2/(2\sigma_y^2) + \ln H$. Subpixel-precise boundary points of the glint are extracted by cubic spline interpolation in the neighborhood of the glint contour. The pixel points inside the glint contour are substituted into Equation (24), and the improved total least squares method proposed in Section 2.2.4 is used to solve the resulting overdetermined system. The glint center is then calculated according to Equation (25) from the solved values of $a$, $b$, $c$, $d$:
$$x_g = -\frac{c}{2a}, \qquad y_g = -\frac{d}{2b} \quad (25)$$
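A sketch of the glint-center computation via the linearized model of Equations (24) and (25); ordinary least squares is used here in place of the paper's improved TLS, and the spline refinement of the glint contour is omitted.

```python
import numpy as np

def gaussian_glint_center(gray, box):
    """Glint center from the linearized Gaussian model (sketch).
    box = (x, y, w, h) is a rough glint region, e.g. from
    rough_glint_regions above."""
    x0, y0, w, h = box
    patch = gray[y0:y0 + h, x0:x0 + w].astype(float)
    ys, xs = np.nonzero(patch > 1.0)            # avoid log(0)
    z = np.log(patch[ys, xs])
    # z = a*x^2 + b*y^2 + c*x + d*y + e        (Equation (24))
    A = np.column_stack([xs ** 2, ys ** 2, xs, ys, np.ones_like(z)])
    a, b, c, d, e = np.linalg.lstsq(A, z, rcond=None)[0]
    xg, yg = -c / (2.0 * a), -d / (2.0 * b)     # Equation (25)
    return x0 + xg, y0 + yg                     # back to image coordinates
```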
Figure 10 shows the detection results of the glint centers (marked with a green "+").
Figure 10. Detection result of glint center.

3. Experimental Results

3.1. Pupil Detection

3.1.1. Pupil Detection of Single Subject

The process of pupil detection is shown in Figure 11. Figure 11a–d shows four original eye images with different relative positions of the pupil and glints acquired from a single subject; (a1)–(d1) show the eye binary images obtained with the improved Otsu optimal threshold; (a2)–(d2) show the results of the opening-and-closing operation with a 5 × 5 square structure element; (a3)–(d3) show the extracted pupil boundary points (marked with a yellow "+"); and (a4)–(d4) show the pupil fitting results (red ellipses). The center of each fitted pupil contour is marked in red.
Table 2 shows the parameters of pupil detection, including the thresholds $T_1$ and $T_2$, the rough pupil center, and the final fitted pupil center.
Figure 11. (a–d) Original eye images; (a1–d1) Eye binary images with the improved Otsu optimal threshold; (a2–d2) Results of the opening-and-closing operation; (a3–d3) Extracted pupil boundary points; (a4–d4) Pupil fitting results.
Table 2. Parameters of pupil detection.

Eye Image     Threshold T1    Threshold T2    Rough Pupil Center (x0, y0)    Final Pupil Center (xp, yp)
Figure 11a    13              117             (199, 230)                     (196.39, 230.88)
Figure 11b    15              120             (311, 212)                     (310.66, 209.67)
Figure 11c    15              122             (344, 194)                     (344.34, 192.55)
Figure 11d    14              119             (379, 207)                     (378.43, 206.37)

3.1.2. Pupil Detection of Different Subjects

In order to verify the applicability of the proposed circular ring rays location (CRRL) method, original eye images of four additional subjects were acquired. The experimental results are shown in Figure 12. As described in Section 2.2.1, a larger structure element is used in the closing operation than in the opening operation. For subjects with heavy eyelashes and eyelids, these different structure element sizes ensure the complete elimination of the remnant interference caused by eyelashes and eyelids.
Figure 12. (a–d) Original eye images; (a1–d1) Eye binary images with the improved Otsu optimal threshold; (a2–d2) Results of the opening-and-closing operation; (a3–d3) Extracted pupil boundary points; (a4–d4) Pupil fitting results.
Table 3 shows the parameters of pupil detection, including the thresholds $T_1$ and $T_2$, the rough pupil center, and the final fitted pupil center.
Table 3. Parameters of pupil detection.

Eye Image     Threshold T1    Threshold T2    Rough Pupil Center (x0, y0)    Final Pupil Center (xp, yp)
Figure 12a    22              132             (282, 210)                     (284.12, 211.65)
Figure 12b    13              116             (318, 186)                     (317.46, 185.34)
Figure 12c    24              133             (292, 186)                     (293.59, 185.60)
Figure 12d    11              121             (299, 164)                     (299.38, 162.11)

3.2. Glint Detection

Glint detection is implemented for Figure 11a–d and Figure 12a–d; the process is shown in Figure 13. Figure 13(a5)–(d5) and (a7)–(d7) show the rough locations of the glints after binarization and filtering, with the glints numbered 1–4. Figure 13(a6)–(d6) and (a8)–(d8) show the glint detection results; each glint center is marked with a green "+".
Figure 13. (a5–d5) Rough location of glints in Figure 11a–d; (a6–d6) Glint detection results in Figure 11a–d; (a7–d7) Rough location of glints in Figure 12a–d; (a8–d8) Glint detection results in Figure 12a–d.
Table 4 shows the parameters of glint detection in Figure 11a–d and Figure 12a–d.
Table 4. Parameters of glint detection: detected glint centers (xg, yg).

Eye Image     Glint 1             Glint 2             Glint 3             Glint 4
Figure 11a    (212.39, 214.42)    (268.28, 214.64)    (213.53, 241.31)    (266.76, 241.24)
Figure 11b    (293.85, 201.79)    (345.34, 202.49)    (294.15, 227.63)    (343.94, 227.71)
Figure 11c    (296.90, 191.21)    (348.58, 191.36)    (298.34, 217.17)    (347.55, 217.43)
Figure 11d    (314.53, 196.17)    (366.49, 197.12)    (316.03, 221.18)    (365.87, 222.52)
Figure 12a    (264.25, 207.31)    (318.64, 208.15)    (265.12, 235.43)    (317.20, 235.98)
Figure 12b    (211.39, 133.26)    (252.13, 134.68)    (221.64, 149.52)    (251.54, 149.22)
Figure 12c    (265.47, 186.29)    (321.40, 186.24)    (263.68, 216.44)    (319.87, 215.31)
Figure 12d    (284.31, 152.37)    (331.82, 152.21)    (283.14, 176.33)    (329.13, 176.45)

3.3. Stability and Error

To evaluate the stability and accuracy of the proposed method, 105 eye images of each subject were acquired for pupil and glint detection. The stability, RMS error, and processing time of the proposed method are shown in Table 5, together with those of the detection methods in [13,20,21,22] for reference. As can be seen from Table 5, the stability, accuracy, and real-time performance of the proposed method are better than those of the methods in [13,20,21,22].
Table 5. Stability, RMS error and processing time of different methods.

                   Pupil Detection                           Glint Detection
Method             Stability   Error (pixels)   Time (ms)    Stability   Error (pixels)   Time (ms)
Proposed method    99.4%       2.17             43.6         98.7%       0.69             21.5
Paper [13]         94.9%       6.48             92.1         90.9%       1.73             38.6
Paper [20]         95.2%       7.86             65.5         94.1%       1.28             34.1
Paper [21]         97.9%       5.43             54.3         -           -                -
Paper [22]         96.6%       5.95             126.4        -           -                -

4. Conclusions

A novel and robust method of pupil and glint detection using a wearable camera sensor and near-infrared LED array for a gaze tracking system is proposed in this paper. A circular ring rays location (CRRL) method is proposed for the detection of pupil boundary points, and an improved Otsu method is proposed for threshold segmentation. The experimental results show that the segmentation time of the improved method is less than that of the original Otsu method, which contributes to the real-time performance of eye gaze tracking. The gradient amplitude threshold and the number of matching points on each ray are employed to eliminate interference factors. In order to compensate for the errors of pupil boundary points in the horizontal and vertical directions, an improved total least squares method is developed to fit the ellipse; the experiments show that it achieves higher accuracy than the original total least squares in pupil ellipse fitting. To obtain a higher location accuracy for the glints, the improved total least squares is also used to solve the deformation of the Gaussian function that yields the glint center. As the experimental results show, the stability, accuracy, and real-time performance of the proposed method are better than those of existing pupil and glint detection methods. When interference factors such as glints and natural light reflection are located near the pupil boundary, the interference points they cause can be eliminated quickly and effectively. The proposed method contributes to enhancing the stability, accuracy, and real-time performance of gaze tracking systems.

Acknowledgments

This work was supported by the Program for Changjiang Scholars and Innovative Research Team in University under Grant No. IRT1208 and the Basic Research Fund of Beijing Institute of Technology under Grant No. 20130242015. We would like to thank the editor and all anonymous reviewers for their constructive suggestions.

Author Contributions

All authors made significant contributions to this article. Jianzhong Wang was mainly responsible for the deployment of the system and revision of the paper; Guangyue Zhang was responsible for developing the pupil and glint detection method and writing the paper; Jiadong Shi, the corresponding author, was responsible for performing the experiments and analyzing the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blondon, K.; Wipfli, R.; Lovis, C. Use of eye-tracking technology in clinical reasoning: a systematic review. Stud. Health Technol. Inf. 2015, 210, 90–94. [Google Scholar]
  2. Higgins, E.; Leinenger, M.; Rayner, K. Eye movements when viewing advertisements. Front. Psychol. 2014, 5, 210. Available online: http://journal.frontiersin.org/article/10.3389/fpsyg.2014.00210/full (accessed on 30 November 2015). [Google Scholar] [CrossRef] [PubMed]
  3. Spakov, O.; Majaranta, P. Scrollable keyboards for casual eye typing. Psychol. J. 2009, 7, 159–173. [Google Scholar]
  4. Noureddin, B.; Lawrence, P.D.; Man, C.F. A non-contact device for tracking gaze in human computer interface. Comput. Vis. Image Underst. 2005, 98, 52–82. [Google Scholar] [CrossRef]
  5. Biswas, P.; Langdon, P. Multimodal intelligent eye-gaze tracking system. Int. J. Hum. Comput. Interact. 2015, 31, 277–294. [Google Scholar] [CrossRef]
  6. Lim, C.J.; Kim, D. Development of gaze tracking interface for controlling 3D contents. Sens. Actuator A Phys. 2012, 185, 151–159. [Google Scholar] [CrossRef]
  7. Yarbus, A.L. Eye Movements and Vision; Plenum Press: New York, NY, USA, 1967. [Google Scholar]
  8. Dodge, R.; Cline, T.S. The angle velocity of eye movements. Psychol. Rev. 1901, 8, 145–157. [Google Scholar] [CrossRef]
  9. Ditchburn, R.W. Eye movements and Visual Perception; Clarendon Press: Oxford, UK, 1973. [Google Scholar]
  10. Miles, W. The peep-hole method for observing eye movements in reading. J. Gen. Psychol. 1928, 1, 373–374. [Google Scholar] [CrossRef]
  11. Robinson, D.A. A method of measuring eye movements using a scleral search coil in a magnetic field. IEEE Trans. Biomed. Eng. 1963, 10, 137–145. [Google Scholar] [PubMed]
  12. Cornsweet, T.N.; Crane, H.S. Accurate two-dimensional eye tracker using first and fourth Purkinje images. J. Opt. Soc. Am. 1973, 63, 921–928. [Google Scholar] [CrossRef] [PubMed]
  13. Ohno, T.; Mukawa, N.; Yoshikawa, A. Free gaze: a gaze tracking system for everyday gaze interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications Symposium, New Orleans, LA, USA, 25–27 March 2002; pp. 125–132.
  14. Goñi, S.; Echeto, J.; Villanueva, A.; Cabeza, R. Robust algorithm for pupil-glint vector detection in a video-oculography eye tracking system. In Proceedings of the International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 941–944.
  15. Villanueva, A.; Cabeza, R. A novel gaze estimation system with one calibration point. IEEE Trans. Syst. Man Cybern. 2008, 38, 1123–1138. [Google Scholar] [CrossRef] [PubMed]
  16. Gneo, M.; Schmid, M.; Conforto, S.; D’Alessio, T. A free geometry model-independent neural eye-gaze tracking system. J. NeuroEng. Rehabil. 2012, 9, 82. [Google Scholar] [CrossRef] [PubMed]
  17. Blignaut, P. Mapping the pupil-glint vector to gaze coordinates in a simple video-based eye tracker. J. Eye Mov. Res. 2014, 7, 1–11. [Google Scholar]
  18. Lai, C.C.; Shih, S.W.; Hung, Y.P. Hybrid method for 3-D gaze tracking using glint and contour features. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 24–37. [Google Scholar]
  19. Ebisawa, Y. Unconstrained pupil detection technique using two light sources and the image difference method. Visual. Intell. Des. Engine Arch. 1995, 15, 79–89. [Google Scholar]
  20. Yoo, D.H.; Chung, M.J.; Ju, D.B.; Choi, I.H. Non-intrusive eye gaze estimation using a projective invariant under head movement. In Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, FL, USA, 15–19 May 2006; pp. 3443–3448.
  21. Gwon, S.Y.; Cho, C.W.; Lee, H.C. Robust eye and pupil detection method for gaze tracking. Int. J. Adv. Robot. Syst. 2013, 10, 1–7. [Google Scholar]
  22. Li, D.H.; Winfield, D.W.; Parkhurst, D.J. Starburst: A hybrid algorithm for video-based eye tracking combining feature-based and model-based approaches. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA, 25–25 June 2005; pp. 79–86.
  23. Krishnamoorthi, R.; Annapoorani, G. A simple boundary extraction technique for irregular pupil localization with orthogonal polynomials. Comput. Vis. Image Underst. 2012, 116, 262–273. [Google Scholar] [CrossRef]
  24. Sliney, D.; Aron-Rosa, D.; DeLori, F.; Fankhauser, F.; Landry, R.; Mainster, M.; Marshall, J.; Rassow, B.; Stuck, B.; Trokel, S.; et al. Adjustment of guidelines for exposure of the eye to optical radiation from ocular instruments: Statement from a task group of the International Commission on Non-Ionizing Radiation Protection. Appl. Opt. 2005, 44, 2162–2176. [Google Scholar] [CrossRef] [PubMed]
  25. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar]
  26. Truchetet, F.; Nicolier, F.; Laligant, O. Subpixel edge detection for dimensional control by artificial vision. J. Electron. Imaging 2001, 10, 234–239. [Google Scholar] [CrossRef]
  27. Pearson, K. On lines and planes of closest fit to systems of points in space. Philos. Mag. 1901, 2, 559–572. [Google Scholar] [CrossRef]
  28. Golub, G.H.; Van Loan, C.F. An analysis of the total least squares problem. SIAM J. Numer. Anal. 1980, 17, 883–893. [Google Scholar] [CrossRef]
  29. Gander, W.; Golub, G.H.; Strebel, R. Least-squares fitting of circles and ellipses. BIT Numer. Math. 1994, 34, 558–578. [Google Scholar] [CrossRef]
  30. Shortis, M.R.; Clarke, T.A.; Short, T. A comparison of some techniques for the subpixel location of discrete target images. In Photonics for Industrial Applications, Proceedings of the International Society for Optics and Photonics, Boston, MA, USA, 6 October 1994; pp. 239–259.
