Article

An Interactive Image Segmentation Method in Hand Gesture Recognition

Disi Chen, Gongfa Li, Ying Sun, Jianyi Kong, Guozhang Jiang, Heng Tang, Zhaojie Ju, Hui Yu and Honghai Liu
1 School of Machinery and Automation, Wuhan University of Science and Technology, Wuhan 430081, China
2 School of Computing, University of Portsmouth, Portsmouth PO1 3HE, UK
* Author to whom correspondence should be addressed.
Sensors 2017, 17(2), 253; https://doi.org/10.3390/s17020253
Submission received: 28 October 2016 / Accepted: 17 January 2017 / Published: 27 January 2017
(This article belongs to the Section Physical Sensors)

Abstract

To improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., graph cut, random walker and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model (GMM) is employed for image modelling, and the parameters of the GMM are learned by iterating the Expectation Maximum algorithm. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation results of our method are tested on an image dataset and compared with other methods by estimating the region accuracy and the boundary accuracy. Finally, five hand gestures in different backgrounds are tested on our experimental platform using the sparse representation algorithm, showing that segmentation of the hand gesture images helps to improve the recognition accuracy.

1. Introduction

Hand gesture recognition, used for visual input when controlling computers, is one of the most important aspects of human-computer interaction [1]. Compared with traditional input devices, such as mice, keyboards and data gloves [2,3], using hand gestures to control computers greatly reduces the user’s learning curve and further expands the application scenarios. To achieve hand gesture control [4], much research has been conducted by pioneers in the field. Sophisticated data gloves can capture every single movement of the finger joints with highly sensitive sensors [5,6] and store the hand gesture data. The hand gesture recognition process based on computer vision is illustrated in Figure 1. However, some essential problems have yet to be solved. Firstly, vision-driven hand gesture recognition is highly dependent on the sensitivity of image sensors, so relatively poor image quality hinders its development. Secondly, the image processing algorithms are not as robust as they are supposed to be: some cannot complete the segmentation correctly, while others meet the accuracy demands but require too many human interactions [7], which is not efficient in real applications.
To address the above problems, the image sensor industry has advanced rapidly in recent years with cutting-edge technologies. On the one hand, new kinds of image sensors, such as the Microsoft Kinect 2.0 and the Asus Xtion, have come onto the commercial market [8], and innovative infrared cameras [9] make it possible to obtain depth information from image sensors. On the other hand, innovations in image processing algorithms have made them capable of segmenting hand gestures accurately, in turn raising the accuracy of the classifiers that assign gestures to different patterns.
Image segmentation is an important stage in the whole hand gesture recognition process, and several well-known segmentation methods have been proposed to meet different image segmentation demands. For example, in the graph cut method [10], proposed by Boykov and Jolly, the main idea is to divide an image into “object” and “background”. A grey-scale histogram is established to describe the grey-scale distribution, and a cut is then drawn to separate the object from the background. The max-flow/min-cut algorithm is applied to minimize the energy function of a cut, and the segmentation is given by this minimal cut. These algorithms not only consider the whole image, but also take every morphological detail into account. Random walker [11,12] is another supervised image segmentation method, in which the image is viewed as an electric circuit: the edges are replaced by passive linear resistors, and the weight of each edge equals its electrical conductance. It has been shown to produce better segmentations than the graph cut method. Gulshan et al. [13] proposed an interactive image segmentation method that regards shape as a powerful cue for object recognition, making the problem well posed. The use of geodesic star convexity gives it a much lower error rate than Euclidean star convexity.
In the process of hand gesture recognition [13], feature extraction is also very important. Image feature methods such as HOG [14], Hu invariant moments [15] and Haar features [16] are used. In this paper, sparse representation is applied for classification and template matching, since it requires far fewer samples for training. To recognise five different hand gestures, a dictionary is built from the dataset of hand gesture images. The K-SVD algorithm [17] is then adapted for sample training, and the algorithm is evaluated and compared with other methods.

2. Modelling of Hand Gesture Images

In order to optimize the segmentation, the human visual system was carefully studied. Our eyes usually obtain a fuzzy picture of the whole scene first, and saccadic eye movements [18] then help us acquire the details of regions of interest. Inspired by the human visual system, we used the Gaussian Mixture Model (GMM) [19] to obtain an overall view of the colour distributions of the image. Since colour images are mainly represented in digital formats, with tens of thousands of pixels per image made up of red, green and blue sub-pixels, as shown in Figure 2, an M × N × 3 array was used to store the colour information of one image, where M is the horizontal resolution and N is the vertical resolution.
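As a minimal sketch (not part of the original paper's code), the array representation described above can be obtained with NumPy and Pillow; the file name gesture.png is a hypothetical placeholder:

import numpy as np
from PIL import Image

# Load an RGB image as an array with one 3-dimensional (R, G, B) vector per pixel.
img = np.asarray(Image.open("gesture.png").convert("RGB"), dtype=np.float64)
rows, cols, _ = img.shape          # vertical and horizontal resolution
pixels = img.reshape(-1, 3)        # dataset X = {x_1, ..., x_n}, n = rows * cols
print(pixels.shape)                # (rows * cols, 3)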

2.1. Single Gaussian Model

The single Gaussian distribution, also known as the normal distribution [20], was proposed by the French mathematician de Moivre in 1733. The probability density function of a single Gaussian distribution is given by the formula:
p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
where μ is the mathematical expectation (the mean), σ² is the variance of the Gaussian distribution, and exp denotes the exponential function. For convenience, the single Gaussian distribution is usually denoted as:
X \sim N(\mu, \sigma^2)
The single Gaussian distribution formula can handle grey-scale pictures, because the variable x has only one dimension. A colour image, however, is an M × N × 3 array, so any element x_j in the dataset X = {x_1, x_2, …, x_n} is at least 3-dimensional. To address this, the multi-dimensional Gaussian distribution is introduced. The d-dimensional Gaussian distribution is defined as:
N(x; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^d |\Sigma|}} \exp\left[ -\frac{(x - \mu)^T \Sigma^{-1} (x - \mu)}{2} \right]
where μ is a d-dimensional vector; for the RGB model, its components are the average red, green and blue colour density values. Σ is the covariance matrix and Σ^{-1} is its inverse. (x − μ)^T is the transpose of (x − μ). To simplify Equation (3), θ is introduced to represent the parameters μ and Σ, so the probability density function of the d-dimensional Gaussian distribution can be written as:
p(x) = N(x; \theta).
According to the law of large numbers, every pixel is one sample of the real scene. When the resolution is high enough, the average color density could be estimated.
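As a minimal sketch of Equation (3), the d-dimensional Gaussian density can be evaluated directly and checked against SciPy; the mean and covariance values below are illustrative assumptions, not values from the paper:

import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, sigma):
    """Density N(x; mu, Sigma) of a d-dimensional Gaussian, Equation (3)."""
    d = mu.shape[0]
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

mu = np.array([120.0, 90.0, 80.0])       # illustrative mean RGB value
sigma = np.diag([100.0, 120.0, 90.0])    # illustrative covariance matrix
x = np.array([130.0, 95.0, 85.0])        # one RGB pixel
print(gaussian_pdf(x, mu, sigma))
print(multivariate_normal.pdf(x, mean=mu, cov=sigma))  # should agree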

2.2. Gaussian Mixture Model of RGB Image

In reality, the colour distributions of the gesture image in Figure 2 can be represented by three histograms [21], shown in Figure 3. From the independent red, green and blue distributions in Figure 3, we can see that the gesture image cannot be described exactly by a single Gaussian model. However, each histogram has about five peaks, so five single Gaussian models should be used in modelling the gesture image.
The GMM is introduced to approximate the continuous probability distribution by increasing the number of single Gaussian models. The probability density function of a GMM with k mixed Gaussian components becomes:
p(x) = \sum_{i=1}^{k} \pi_i \, p_i(x; \theta_i) = \sum_{i=1}^{k} \pi_i \, N_i(x; \mu_i, \Sigma_i),
where i ∈ {1, 2, …, k} indexes the single Gaussian component. π_i is the mixing coefficient of the i-th component [22], or the prior probability that x belongs to the i-th single Gaussian model, with \sum_{i=1}^{k} \pi_i = 1. p_i(x; \theta_i) is the probability density function of the i-th single Gaussian model, parameterized by μ_i and Σ_i in N_i(x; \mu_i, \Sigma_i). Θ is introduced as the parameter set [23] {π_1, π_2, …, π_k, θ_1, θ_2, …, θ_k} that collects the π_i and θ_i.
As mentioned above, one RGB hand gesture image can be described by the dataset X = {x_1, x_2, …, x_n}, and if we regard X as a sample, its probability density is:
p(X; \Theta) = \prod_{j=1}^{n} p(x_j; \Theta) = L(\Theta; X), \quad x_j \in X,
where L(Θ; X) is called the likelihood function of the parameters given the sample X. We then wish to find a parameter set Θ that completes the modelling. According to the maximum likelihood method [24], the task is to find Θ̂ such that:
\hat{\Theta} = \arg\max_{\Theta} \; L(\Theta; X).
The functions L(Θ; X) and L(X; Θ) have the same form; however, since X is now used to estimate Θ, Θ becomes the variable and X the fixed data, so we write the likelihood as L(Θ; X). The value of p(X; Θ) is usually too small to be handled numerically, so we replace it with the log-likelihood function [25]:
\ln(L(\Theta; X)) = \ln\left[ \prod_{j=1}^{n} p(x_j; \Theta) \right] = \sum_{j=1}^{n} \ln\left[ \sum_{i=1}^{k} \pi_i \, p_i(x_j; \theta_i) \right].

2.3. Expectation Maximum Algorithm

After establishing the Gaussian mixture model of an RGB hand gesture image, several parameters still need to be estimated. The expectation maximum (EM) algorithm [26] is introduced for the subsequent calculations; it is a method of obtaining the parameter set Θ required by the maximum likelihood method. The algorithm has two steps, called the E-step and the M-step. To start the E-step, we introduce another probability, Q_i(x_j): the posterior probability of π_i, in other words the posterior probability that x_j from the dataset X belongs to the i-th single Gaussian model.
Q_i(x_j) = \frac{\pi_i \, p_i(x_j; \theta_i)}{\sum_{t=1}^{k} \pi_t \, p_t(x_j; \theta_t)},
where the definition of Q_i(x_j) follows from Bayes’ theorem, and \sum_{i=1}^{k} Q_i(x_j) = 1. We then use Equation (11) to rewrite the log-likelihood function in (10):
\ln(L(\Theta; X)) = \sum_{j=1}^{n} \ln\left[ \sum_{i=1}^{k} Q_i(x_j) \frac{\pi_i \, p_i(x_j; \theta_i)}{Q_i(x_j)} \right] \geq \sum_{j=1}^{n} \sum_{i=1}^{k} Q_i(x_j) \ln\left[ \frac{\pi_i \, p_i(x_j; \theta_i)}{Q_i(x_j)} \right].
From (12) to (13), Jensen’s inequality has been applied: since (\ln x)'' = -\frac{1}{x^2} \leq 0, the logarithm is concave on its domain. Hence:
\ln\left[ \sum_{i=1}^{k} Q_i(x_j) \frac{\pi_i \, p_i(x_j; \theta_i)}{Q_i(x_j)} \right] \geq \sum_{i=1}^{k} Q_i(x_j) \ln\left[ \frac{\pi_i \, p_i(x_j; \theta_i)}{Q_i(x_j)} \right],
Maximizing the lower bound in Equation (13) drives \ln(L(\Theta; X)) upward at each iteration. The EM iteration that estimates the new parameters in terms of the old ones is given as follows:
  • Initialization: initialize \mu_i^0 with random numbers [27], use identity matrices as the covariance matrices \Sigma_i^0, and set the mixing coefficients (prior probabilities) to \pi_i^0 = \frac{1}{k} to start the first iteration.
  • E-step: compute the posterior probability of \pi_i using the current parameters:
    Q_i(x_j) := \frac{\pi_i \, p_i(x_j; \theta_i)}{\sum_{t=1}^{k} \pi_t \, p_t(x_j; \theta_t)} = \frac{\pi_i \, N(x_j; \mu_i, \Sigma_i)}{\sum_{t=1}^{k} \pi_t \, N(x_j; \mu_t, \Sigma_t)}
  • M-step: update the parameters:
    \pi_i := \frac{1}{n} \sum_{j=1}^{n} Q_i(x_j)
    \mu_i := \frac{\sum_{j=1}^{n} Q_i(x_j) \, x_j}{\sum_{j=1}^{n} Q_i(x_j)}
    \Sigma_i := \frac{\sum_{j=1}^{n} Q_i(x_j) (x_j - \mu_i)(x_j - \mu_i)^T}{\sum_{j=1}^{n} Q_i(x_j)}
For most hand gesture images, the number of iterations is fixed in advance. To balance segmentation quality against efficiency, we set the number of iterations to 8 [28].
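As a minimal sketch (not the authors' implementation), the E-step and M-step above can be realised with NumPy and SciPy over the flattened pixel array from the earlier sketch. The identity-matrix initialisation, uniform priors and eight iterations follow the text; scaling the pixels to [0, 1], drawing the initial means from randomly chosen pixels and the small covariance regulariser are practical assumptions:

import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(pixels, k=5, n_iter=8, seed=0):
    """Fit a k-component GMM to an (n, 3) pixel array with n_iter EM iterations."""
    rng = np.random.default_rng(seed)
    pixels = pixels / 255.0                              # keep identity covariances non-degenerate
    n, d = pixels.shape
    mu = pixels[rng.choice(n, k, replace=False)]         # initial means from random pixels
    sigma = np.array([np.eye(d) for _ in range(k)])      # identity initial covariances
    pi = np.full(k, 1.0 / k)                             # uniform prior probabilities
    for _ in range(n_iter):
        # E-step: posterior Q_i(x_j) of each pixel under each component
        dens = np.stack([pi[i] * multivariate_normal.pdf(pixels, mu[i], sigma[i])
                         for i in range(k)], axis=1)     # shape (n, k)
        q = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing coefficients, means and covariances
        nk = q.sum(axis=0)
        pi = nk / n
        mu = (q.T @ pixels) / nk[:, None]
        for i in range(k):
            diff = pixels - mu[i]
            sigma[i] = (q[:, i, None] * diff).T @ diff / nk[i] + 1e-6 * np.eye(d)
    return pi, mu, sigma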

3. Interactive Image Segmentation

The modelling method discussed above provides a universal way of dealing with hand gesture images. To segment a digital image, a mask is introduced, as shown in Figure 4; it is a binary bitmap denoted α. By introducing it, we turn the segmentation problem into a pixel-labelling problem: each α_j ∈ {0, 1}, where 0 labels background pixels and 1 labels foreground pixels.
To deal with the GMM tractably, we introduce two independent k-component GMMs, one for the foreground modelling and one for the background modelling. Each pixel xj, either from the background or the foreground model, is marked as αj = 1 or 0. The parameters of each component become: θi = {πi(αj), μi(αj), Σi(αj); αj = 0,1, i = 1, …, k}.

3.1. Gibbs Random Field

The overall colour modelling completes the first step inspired by the human visual system; to take every detail of the image into account, a Gibbs random field (GRF) [29] is introduced. The GRF is defined as:
P(A = a) = \frac{1}{Z(T)} \exp\left( -\frac{1}{T} E(a) \right),
Here, P ( A = a ) gives the probability of the system A being in the state a. T is a constant parameter, whose unit is temperature in physics, and usually its value is 1. Z ( T ) is the partition function, and:
Z(T) = \sum_{a \in A} \exp\left( -\frac{1}{T} E(a) \right),
where E(a) is interpreted as the energy function of the state a. To apply the GRF to image segmentation, the Gibbs energy [30] is defined as follows:
E(\alpha) = E(\alpha, \Theta, X) = E(\alpha, i, \theta, X) = U(\alpha, i, \theta, X) + V(\alpha, X)
The term U(α, i, θ, X), also called the regional term, is defined in terms of the GMMs; it gives the penalty of classifying x_j as background or foreground:
U(\alpha, i, \theta, X) = -\sum_{j=1}^{n} \ln\left[ p_i(x_j) \, \pi_i(\alpha_j) \right] = \sum_{j=1}^{n} \left\{ -\ln[\pi_i(\alpha_j)] + \frac{1}{2} \ln|\Sigma_i(\alpha_j)| + \frac{1}{2} [x_j - \mu_i(\alpha_j)]^T \Sigma_i(\alpha_j)^{-1} [x_j - \mu_i(\alpha_j)] \right\}.
The boundary term V(α, X) describes the smoothness between a pixel x_u and its neighbouring pixels x_v in the neighbourhood set N:
V(\alpha, X) = \gamma \sum_{\{x_u, x_v\} \in N} [\alpha_u \neq \alpha_v] \exp\left( -\beta \| x_u - x_v \|^2 \right),
where the constant γ was set to 50 by optimizing performance over training. [\alpha_u \neq \alpha_v] is an indicator function taking the value 0 or 1 according to the condition inside the brackets. β is a constant that reflects the contrast of the pixel set N and scales the exponential term; E(\cdot) in the equation below denotes the expectation:
\beta = \frac{1}{2 \, E_{\{x_u, x_v\} \in N}\left[ (x_u - x_v)^T (x_u - x_v) \right]}
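As a minimal sketch of how the boundary term can be evaluated, the following assumes a 4-connected neighbourhood and a float image of shape (H, W, 3); β is estimated from the mean squared colour difference of neighbouring pixels, and γ = 50 then scales the resulting term as stated above:

import numpy as np

def smoothness_terms(img):
    """Return beta and the n-link weights exp(-beta * ||x_u - x_v||^2) on a 4-connected grid."""
    dh = img[:, 1:, :] - img[:, :-1, :]        # horizontal neighbour differences
    dv = img[1:, :, :] - img[:-1, :, :]        # vertical neighbour differences
    sq = np.concatenate([(dh ** 2).sum(-1).ravel(), (dv ** 2).sum(-1).ravel()])
    beta = 1.0 / (2.0 * sq.mean())             # definition of beta above
    w_h = np.exp(-beta * (dh ** 2).sum(-1))    # weights of horizontal links
    w_v = np.exp(-beta * (dv ** 2).sum(-1))    # weights of vertical links
    return beta, w_h, w_v

# V(alpha, X) is then gamma * sum of the weights of links whose two pixels get different labels.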

3.2. Automatic Seed Selection

Up to this point all the constants have been defined. To begin with, all the pixels in the picture are automatically marked as undefined and assigned to the set U [31]. B is the background seed pixel set and O is the foreground seed set. After training over the pixel set X, the set O is obtained as the segmentation result, with O ⊆ U. The three pixel sets are shown in Figure 5.
To achieve the segmentation automatically, we propose an initial seed selection method for hand gesture images. Since human skin colour has an elliptical distribution in the YCbCr colour space [32], the image is transformed from the RGB colour space to YCbCr using the equation below:
\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} + \frac{1}{256} \begin{bmatrix} 65.738 & 129.057 & 25.06 \\ -37.945 & -74.494 & 112.43 \\ 112.439 & -94.154 & -18.28 \end{bmatrix} \begin{bmatrix} r \\ g \\ b \end{bmatrix},
where Y indicates the luminance. By requiring Y ∈ (0, 80), the interference of highlights is suppressed. The Cb, Cr values of human skin colour are then located by the elliptical model given below:
\begin{cases} \dfrac{(x - 1.6)^2}{26.39^2} + \dfrac{(y - 2.41)^2}{14.03^2} < 1 \\[6pt] \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos(2.53) & \sin(2.53) \\ -\sin(2.53) & \cos(2.53) \end{bmatrix} \begin{bmatrix} C_b - 109.38 \\ C_r - 152.02 \end{bmatrix}, \end{cases}
where x and y are intermediate variables. All pixels satisfying the equations above are marked as foreground seeds belonging to the set O. We also define the pixels on the image edges as background seeds belonging to the set B, because the gestures are usually located far from the edges of the image. The results of automatic seed selection are displayed in Figure 6.
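A minimal sketch of this automatic seed selection, assuming a float RGB image in the range [0, 255]; the sign pattern of the conversion matrix and of the rotation follows the standard BT.601 / elliptical skin model and is reconstructed here as an assumption:

import numpy as np

def select_seeds(img):
    """Return boolean masks of foreground seeds (set O) and background seeds (set B)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    Y  =  16 + ( 65.738 * r + 129.057 * g +  25.06  * b) / 256
    Cb = 128 + (-37.945 * r -  74.494 * g + 112.43  * b) / 256
    Cr = 128 + (112.439 * r -  94.154 * g -  18.28  * b) / 256
    c, s = np.cos(2.53), np.sin(2.53)
    x =  c * (Cb - 109.38) + s * (Cr - 152.02)          # rotated chrominance coordinates
    y = -s * (Cb - 109.38) + c * (Cr - 152.02)
    inside = (x - 1.6) ** 2 / 26.39 ** 2 + (y - 2.41) ** 2 / 14.03 ** 2 < 1
    fg = inside & (Y > 0) & (Y < 80)                    # luminance gate from the text
    bg = np.zeros(img.shape[:2], dtype=bool)            # border pixels as background seeds
    bg[0, :] = bg[-1, :] = bg[:, 0] = bg[:, -1] = True
    return fg, bg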

3.3. Min-Cut/Max-Flow Algorithm

According to the Gibbs random field, the image segmentation, or pixel-labelling, problem amounts to minimizing the Gibbs energy function:
\min_{\{\alpha_j : x_j \in U\}} \left[ \min_{i} E(\alpha, i, \theta, X) \right]
The min-cut/max-flow algorithm [33] is used to complete the segmentation more accurately. The idea of this algorithm is to regard an image as a network of nodes, with each node taking the place of a corresponding pixel. In addition, two extra nodes, S and T, are introduced, representing the “source” and the “sink”, respectively. Node S is linked to the pixels belonging to O, while T is linked to the pixels in B, as shown in Figure 7.
There are three kinds of links in the neighbourhood N: from pixel to pixel, from pixel to S, and from pixel to T, denoted x_u–x_v, x_u–S and x_u–T. Each link is assigned a certain weight, i.e., the cost [34] of cutting it, as detailed in Table 1.
According to the max-flow/min-cut theorem, an optimal segmentation is defined by the minimum cut C, as seen in Figure 7c. C is a set of cut links whose total cost is:
|C| = \sum_{x \in U} U(C, i, \theta, x) + \sum_{x \in N} V(C, x) = E(C, i, \theta, X) - \left[ \sum_{x \in O} U(\alpha = 1, i, \theta, x) + \sum_{x \in B} U(\alpha = 0, i, \theta, x) \right]
The Gibbs energy can then be minimized using the min-cut defined above. The whole segmentation process is as follows: first, assign a GMM component i to each x_j ∈ U according to the selected U region; second, learn the parameter set Θ from the whole pixel set X; third, use the min-cut to minimize the Gibbs energy of the whole image. Then return to the first step for another round; after eight rounds, the optimal segmentation is achieved.
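As a minimal sketch of one min-cut step, the following uses the external PyMaxflow package (an assumption; the paper only invokes the max-flow/min-cut theorem). Here fg_cost and bg_cost stand for the per-pixel regional penalties U(α = 1, …) and U(α = 0, …), and pair_w is a constant stand-in for the n-link weights of Table 1:

import maxflow  # PyMaxflow package

def graph_cut_step(fg_cost, bg_cost, pair_w=1.0):
    """fg_cost, bg_cost: 2-D arrays of regional penalties. Returns a boolean segment mask."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(fg_cost.shape)   # one graph node per pixel
    g.add_grid_edges(nodeids, weights=pair_w)   # n-links between neighbouring pixels
    # t-links, following Table 1: the S-link carries U(alpha = 0), the T-link U(alpha = 1)
    g.add_grid_tedges(nodeids, bg_cost, fg_cost)
    g.maxflow()                                 # compute the minimum cut
    return g.get_grid_segments(nodeids)         # True where a pixel falls on the sink side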

4. Experimental Comparison

To evaluate interactive segmentation quantitatively, we chose the image dataset proposed by Gulshan et al. [13], which contains 49 images from the GrabCut dataset [35], 99 images from the PASCAL VOC’09 segmentation challenge [36] and 3 images from the alpha-matting dataset [37]. These images cover a wide variety of shapes, textures and backgrounds. The corresponding ground truth images and the initial seeds are also included in this dataset. The initial seed maps consist of four manually generated brush strokes, each 8 pixels wide: one for the foreground and three for the background, as shown in Figure 8.
To simulate human interactions, after the first segmentation with the initial seed map, one more seed is generated automatically in the largest connected segmentation error area (LEA). As shown in Figure 9a, the blue area is the segmentation result of the algorithm, the white area is the ground truth segmentation, and the LEA is marked in yellow. As shown in Figure 9b, the seed is a round dot (8 pixels in diameter) generated inside the LEA. The segmentation is then updated with all the seeds. This step is repeated 20 times, yielding a sequence of segmentations.
To evaluate the quality of the segmentation results, we used two measures: the region accuracy (RA) and the boundary accuracy (BA). Each evaluation is applied to a single segmentation, and all the images in Gulshan’s dataset are tested to verify that our proposed method is suitable for interactive image segmentation.

4.1. Region Accuracy

The RA of a segmentation result is evaluated by a weighted F_\beta-measure [38]. Compared with the normal F_\beta-measure, the Precision and Recall terms become:
Precision^w = \frac{TP^w}{TP^w + FP^w}
Recall^w = \frac{TP^w}{TP^w + NP^w}
where TP denotes the overlap between the ground truth and the segmented foreground pixels, FP denotes the pixels wrongly segmented as foreground compared with the ground truth image, and NP denotes the foreground pixels wrongly segmented as background.
The F_\beta^w-measure is defined as follows:
RA = F_\beta^w = \frac{(1 + \beta^2) \, Precision^w \cdot Recall^w}{\beta^2 \, Precision^w + Recall^w}
where β signifies the effectiveness of detection for a user who attaches β times as much importance to Recall^w as to Precision^w; normally β = 1. We therefore apply the F_1^w-measure to calculate the RA of the different segmentation results. The higher the RA, the better the segmentation.
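A minimal sketch of the region accuracy in its unweighted F1 form, computed from two binary masks; the weighted variant of [38] additionally weights the error pixels, which is omitted here:

import numpy as np

def region_accuracy(seg, gt, beta=1.0):
    """seg, gt: boolean foreground masks of equal shape."""
    tp = np.logical_and(seg, gt).sum()        # correctly segmented foreground pixels
    fp = np.logical_and(seg, ~gt).sum()       # background wrongly labelled as foreground
    np_ = np.logical_and(~seg, gt).sum()      # foreground wrongly labelled as background
    precision = tp / (tp + fp + 1e-12)
    recall = tp / (tp + np_ + 1e-12)
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall + 1e-12)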

4.2. Boundary Accuracy

The BA [39] is defined according to the Hausdorff distance. The boundary pixels of the ground truth image and the segmented image are denoted B_GT and B_SEG, as shown in Figure 10.
The formula is as follows:
BA = \frac{N(B_{SEG}) + N(B_{GT})}{\sum_{s \in B_{SEG}} \min_{g \in B_{GT}} dist(s, g) + \sum_{g \in B_{GT}} \min_{s \in B_{SEG}} dist(g, s)},
where g ∈ B_GT and s ∈ B_SEG, dist(·,·) denotes the Euclidean distance, and N(·) is the number of pixels in a set. The value of BA reflects the segmentation accuracy along the boundaries; higher values indicate more accurate boundaries.
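A minimal sketch of the BA formula above: boundary pixels are taken as the difference between each mask and its binary erosion, and the two sums of nearest-neighbour distances are computed with SciPy:

import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def boundary_accuracy(seg, gt):
    """seg, gt: boolean foreground masks of equal shape."""
    b_seg = np.argwhere(seg & ~binary_erosion(seg))   # boundary pixel coordinates of the segmentation
    b_gt = np.argwhere(gt & ~binary_erosion(gt))      # boundary pixel coordinates of the ground truth
    d = cdist(b_seg, b_gt)                            # pairwise Euclidean distances
    denom = d.min(axis=1).sum() + d.min(axis=0).sum()
    return (len(b_seg) + len(b_gt)) / (denom + 1e-12)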

4.3. Results Analysis

We segmented the images from the dataset using graph cut and random walker, as shown in Figure 11. The segmentation test of our method was carried out on Gulshan’s dataset as well as on our hand gesture images, and some results of our method on hand gesture image segmentation are shown in Figure 12.
For a more rigorous test, we tested the 151 images from Gulshan’s dataset and used the human interaction simulator to perform the interactions, generating seeds 20 times to refine the segmentation results further. The result of each simulation step was evaluated on the experimental platform. The RA and BA scores, averaged over the 151 segmentations, are shown in Figure 13 and Figure 14.
As the figures show, the segmentation quality increases with the number of simulated human interactions: when the number of seeds becomes large, a satisfactory segmentation is achieved. Our method obtains the best segmentation quality with few human interactions. Since the seeds are generated automatically in a single pass for hand images, our method is well suited to hand gesture image segmentation.

5. Hand Gesture Recognition

We defined five hand gestures: hand closed (HC), hand open (HO), wrist extension (WE), wrist flexion (WF), and fine pitch (FP), as shown in Figure 15.
One hundred images of each hand gesture were captured and segmented by the proposed method, using the recognition framework in Figure 16. For each gesture, 50 images are used for training and 50 for testing. To achieve better classification, we extract HOG features together with Hu invariant moments, weighted equally. The K-SVD dictionary training method [40] is used to choose atoms that represent [41] all the features and to reduce the computational cost.
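A minimal sketch of the feature extraction stage, concatenating HOG descriptors (scikit-image) with the seven Hu invariant moments (OpenCV); the HOG cell and block sizes are assumptions, and the sparse-representation / K-SVD classification stage is not shown:

import cv2
import numpy as np
from skimage.feature import hog

def gesture_features(gray):
    """gray: 2-D uint8 image of a (segmented) hand gesture."""
    h = hog(gray, orientations=9, pixels_per_cell=(16, 16),
            cells_per_block=(2, 2), feature_vector=True)
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()
    # log-scale the Hu moments so both feature groups have comparable magnitudes
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
    return np.concatenate([h, hu])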
We tested the recognition rates on both unsegmented hand images and segmented hand images. The recognition rates on unsegmented hand images are shown in Table 2, and the recognition rates on segmented hand images are shown in Table 3.
According to the results in Tables 2 and 3, segmenting the images before feature extraction increases the recognition rates for all five hand gestures compared with unsegmented images.

6. Conclusions and Future Work

In conclusion, the proposed interactive hand gesture image segmentation method meets the segmentation demands of hand gesture images without human interaction. The mechanism behind the method has been carefully explored and derived with the assistance of modern mathematical theory. Compared with other popular image segmentation methods, our method obtains better segmentation accuracy and higher quality when only limited seeds are available, and the automatic seed selection further reduces human interaction. The segmentation in turn improves the recognition rate. In future work, we could adapt this method to higher-resolution pictures, which requires simplifying the calculation process. In seed selection, the automatic selection method could be improved to overcome various interferences, such as highlights, shadows and image distortion. Further work will focus on improving the recognition rate by integrating the segmentation algorithm with more advanced recognition methods.

Acknowledgments

This work was supported by grants from the National Natural Science Foundation of China (Grant Nos. 51575407, 51575338, 61273106 and 51575412) and the EU Seventh Framework Programme (Grant No. 611391).

Author Contributions

D.C. and G.L. conceived and designed the experiments; D.C. performed the experiments; D.C. and G.L. analyzed the data; D.C. contributed reagents/materials/analysis tools; D.C. wrote the paper; H.L., Z.J. and H.Y. edited the language.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nardi, B.A. Context and Consciousness: Activity Theory and Human-Computer Interaction; MIT Press: Cambridge, MA, USA, 1996; p. 400.
2. Chen, D.C.; Li, G.F.; Jiang, G.Z.; Fang, Y.F.; Ju, Z.J.; Liu, H.H. Intelligent Computational Control of Multi-Fingered Dexterous Robotic Hand. J. Comput. Theor. Nanosci. 2015, 12, 6126–6132.
3. Ju, Z.J.; Zhu, X.Y.; Liu, H.H. Empirical Copula-Based Templates to Recognize Surface EMG Signals of Hand Motions. Int. J. Humanoid Robot. 2011, 8, 725–741.
4. Miao, W.; Li, G.F.; Jiang, G.Z.; Fang, Y.; Ju, Z.J.; Liu, H.H. Optimal grasp planning of multi-fingered robotic hands: A review. Appl. Comput. Math. 2015, 14, 238–247.
5. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: Emerging avenues and challenges. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 797–809.
6. Ju, Z.; Liu, H. Human Hand Motion Analysis with Multisensory Information. IEEE/ASME Trans. Mechatron. 2014, 19, 456–466.
7. Panagiotakis, C.; Papadakis, H.; Grinias, E.; Komodakis, N.; Fragopoulou, P.; Tziritas, G. Interactive Image Segmentation Based on Synthetic Graph Coordinates. Pattern Recognit. 2013, 46, 2940–2952.
8. Yang, D.F.; Wang, S.C.; Liu, H.P.; Liu, Z.J.; Sun, F.C. Scene modeling and autonomous navigation for robots based on kinect system. Robot 2012, 34, 581–589.
9. Wang, C.; Liu, Z.; Chan, S.C. Superpixel-Based Hand Gesture Recognition with Kinect Depth Camera. Trans. Multimed. 2015, 17, 29–39.
10. Sinop, A.K.; Grady, L. A Seeded Image Segmentation Framework Unifying Graph Cuts and Random Walker Which Yields a New Algorithm. In Proceedings of the IEEE 11th International Conference on Computer Vision (ICCV), Rio de Janeiro, Brazil, 14–20 October 2007; pp. 1–8.
11. Grady, L. Multilabel random walker image segmentation using prior models. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; pp. 763–770.
12. Couprie, C.; Grady, L.; Najman, L.; Talbot, H. Power watersheds: A new image segmentation framework extending graph cuts, random walker and optimal spanning forest. In Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV), Kyoto, Japan, 27 September–4 October 2009; pp. 731–738.
13. Varun, G.; Carsten, R.; Antonio, C.; Andrew, B.; Andrew, Z. Geodesic star convexity for interactive image segmentation. In Proceedings of the IEEE CVPR, San Francisco, CA, USA, 13–18 June 2010; pp. 3129–3136.
14. Ju, Z.; Liu, H. A Unified Fuzzy Framework for Human Hand Motion Recognition. IEEE Trans. Fuzzy Syst. 2011, 19, 901–913.
15. Xu, Y.; Yu, G.; Wang, Y.; Wu, X.; Ma, Y. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images. Sensors 2016, 16, 1325.
16. Fernando, M.; Wijjayanayake, J. Novel Approach to Use Hu Moments with Image Processing Techniques for Real Time Sign Language Communication. Int. J. Image Process. 2015, 9, 335–345.
17. Chen, Q.; Georganas, N.D.; Petriu, E.M. Real-time vision-based hand gesture recognition using haar-like features. In Proceedings of the IEEE Instrumentation & Measurement Technology Conference (IMTC), Warsaw, Poland, 1–3 May 2007; pp. 1–6.
18. Sun, R.; Wang, J.J. A Vehicle Recognition Method Based on Kernel K-SVD and Sparse Representation. Pattern Recognit. Artif. Intell. 2014, 27, 435–442.
19. Jiang, Y.V.; Won, B.-Y.; Swallow, K.M. First saccadic eye movement reveals persistent attentional guidance by implicit learning. J. Exp. Psychol. Hum. Percept. Perform. 2014, 40, 1161–1173.
20. Ju, Z.; Liu, H.; Zhu, X.; Xiong, Y. Dynamic Grasp Recognition Using Time Clustering, Gaussian Mixture Models and Hidden Markov Models. Adv. Robot. 2009, 23, 1359–1371.
21. Bian, X.; Zhang, X.; Liu, R.; Ma, L.; Fu, X. Adaptive classification of hyperspectral images using local consistency. J. Electron. Imaging 2014, 23, 063014.
22. Song, H.; Wang, Y. A spectral-spatial classification of hyperspectral images based on the algebraic multigrid method and hierarchical segmentation algorithm. Remote Sens. 2016, 8, 296.
23. Hatwar, S.; Anil, W. GMM based Image Segmentation and Analysis of Image Restoration Tecniques. Int. J. Comput. Appl. 2015, 109, 45–50.
24. Couprie, C.; Najman, L.; Talbot, H. Seeded segmentation methods for medical image analysis. In Medical Image Processing; Springer: New York, NY, USA, 2011; pp. 27–57.
25. Bańbura, M.; Modugno, M. Maximum likelihood estimation of factor models on datasets with arbitrary pattern of missing data. J. Appl. Econ. 2014, 29, 133–160.
26. Simonetto, A.; Leus, G. Distributed Maximum Likelihood Sensor Network Localization. IEEE Trans. Signal Process. 2013, 62, 1424–1437.
27. Ju, Z.; Liu, H. Fuzzy Gaussian Mixture Models. Pattern Recognit. 2012, 45, 1146–1158.
28. Zhang, Y.; Brady, M.; Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging 2001, 20, 45–57.
29. Song, W.; Cho, K.; Um, K.; Won, C.S.; Sim, S. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation. Sensors 2012, 12, 17186–17207.
30. Wei, S.; Kyungeun, C.; Kyhyun, U.; Chee, S.; Sungdae, S. Complete Scene Recovery and Terrain Classification in Textured Terrain Meshes. Sensors 2012, 12, 11221–11237.
31. Liao, L.; Lin, T.; Li, B.; Zhang, W. MR brain image segmentation based on modified fuzzy C-means clustering using fuzzy Gibbs random field. J. Biomed. Eng. 2008, 25, 1264–1270.
32. Kakumanu, P.; Makrogiannis, S.; Bourbakis, N. A survey of skin-color modeling and detection methods. Pattern Recognit. 2007, 40, 1106–1122.
33. Lee, G.; Lee, S.; Kim, G.; Park, J.; Park, Y. A Modified GrabCut Using a Clustering Technique to Reduce Image Noise. Symmetry 2016, 8, 64.
34. Ning, J.; Zhang, L.; Zhang, D.; Wu, C. Interactive image segmentation by maximal similarity based region merging. Pattern Recognit. 2010, 43, 445–456.
35. Grabcut Image Dataset. Available online: http://research.microsoft.com/enus/um/cambridge/projects/visionimagevideoediting/segmentation/grabcut.htm (accessed on 18 December 2016).
36. Everingham, M.; Van, G.L.; Williams, C.K.; Winn, I.J.; Zisserman, A. The PASCAL Visual Object Classes Challenge 2009 (VOC2009) Results. Available online: http://host.robots.ox.ac.uk/pascal/VOC/voc2009/ (accessed on 26 December 2016).
37. Rhemann, C.; Rother, C.; Wang, J.; Gelautz, M.; Kohli, P.; Rott, P. A perceptually motivated online benchmark for image matting. In Proceedings of the CVPR, Miami, FL, USA, 20–25 June 2009; pp. 1826–1833.
38. Margolin, R.; Zelnik-Manor, L.; Tal, A. How to Evaluate Foreground Maps? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 248–255.
39. Zhao, Y.; Nie, X.; Duan, Y. A benchmark for interactive image segmentation algorithms. In Proceedings of the IEEE Person-Oriented Vision, Kona, HI, USA, 7 January 2011; pp. 33–38.
40. Zhou, Y.; Liu, K.; Carrillo, R.E.; Barner, K.E.; Kiamilev, F. Kernel-based sparse representation for gesture recognition. Pattern Recognit. 2013, 46, 3208–3222.
41. Yu, F.; Zhou, F. Classification of machinery vibration signals based on group sparse representation. J. Vibroeng. 2016, 18, 1540–1545.
Figure 1. Process of hand gesture recognition.
Figure 2. The RGB format hand gesture image.
Figure 3. Color distributions of the gesture image. (a) Red distribution; (b) green distribution; (c) blue distribution.
Figure 4. The mask.
Figure 5. The relationships between three pixel sets.
Figure 6. The result of automatic seed selection.
Figure 7. Nodes and net model.
Figure 8. The evaluation samples from dataset.
Figure 9. Evaluation on the dataset.
Figure 10. Boundary extraction.
Figure 11. The evaluation on different algorithms.
Figure 12. Segmentation results of our method on hand images.
Figure 13. Region accuracy comparison.
Figure 14. Boundary accuracy comparison.
Figure 15. Five hand gestures for recognition.
Figure 16. Hand gesture recognition framework.
Table 1. The weight of each link.

Link Type    Weight                           Precondition
x_u–x_v      exp(-β‖x_u − x_v‖²)              {x_u, x_v} ∈ N
x_u–S        U(α = 0, i, θ, X)                x_u ∈ U
             K                                x_u ∈ O
             0                                x_u ∈ B
x_u–T        U(α = 1, i, θ, X)                x_u ∈ U
             0                                x_u ∈ O
             K                                x_u ∈ B

where K = 1 + max_{x_u ∈ X} Σ_{x_v : {x_u, x_v} ∈ N} exp(-β‖x_u − x_v‖²).
Table 2. Recognition rates on unsegmented hand images.

Gestures           Recognition Rate
Hand close         86.7%
Hand open          73.3%
Wrist extension    100%
Wrist flexion      100%
Fine pitch         66.7%
Overall rate       85.3%
Table 3. Recognition rates on segmented hand images.

Gestures           Recognition Rate
Hand close         93.3%
Hand open          100%
Wrist extension    100%
Wrist flexion      100%
Fine pitch         100%
Overall rate       98.7%

