Article

Vein Pattern Verification and Identification Based on Local Geometric Invariants Constructed from Minutia Points and Augmented with Barcoded Local Feature

by Yutthana Pititheeraphab 1, Nuntachai Thongpance 2, Hisayuki Aoyama 3 and Chuchart Pintavirooj 1,*
1 Faculty of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
2 College of Biomedical Engineering, Rangsit University, Pathum Thani 12000, Thailand
3 Faculty of Engineering, University of Electro-Communications, Tokyo 182-8585, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(9), 3192; https://doi.org/10.3390/app10093192
Submission received: 30 March 2020 / Revised: 23 April 2020 / Accepted: 27 April 2020 / Published: 3 May 2020
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

This paper presents the development of a hybrid-feature modality—dorsal hand vein and dorsal hand geometry—for human recognition. Our proposed hybrid feature extraction method exploits two types of features: dorsal hand geometry and local vein pattern. The peg-free system extracts minutia points (vein terminations and bifurcations) and constructs a set of geometric affine invariants, which are then used to establish the correspondence between two sets of minutiae—one from the query vein image and the other from the reference vein image. Once the correspondence is established, geometric transformation parameters are computed to align the query with the reference image. After alignment, hybrid features are extracted for identification. In this study, the algorithm was tested on a database of 140 subjects, with ten different dorsal hand images taken per individual, and yielded promising results: an equal error rate (EER) of 0.243%, indicating that our method is feasible and effective for dorsal vein recognition with high accuracy. This hierarchical scheme significantly improves the performance of personal verification and/or identification.

Graphical Abstract

1. Introduction

A biometric system uses signature points of measurable uniqueness, derived from the physiological and/or behavioral characteristics of an individual, to characterize and determine his/her identity. Biometric characteristics are preferred in security systems over more traditional security measures. They are also used in internet access, computer system security, secure electronic passport control, banking, mobile phones, credit cards, secured access to buildings, health and social services, parenthood determination, terrorist identification and corpse identification. A number of relevant biometric technologies have been developed based on diverse biometric cues, such as DNA [1], ear morphology [2], facial features [3], fingerprints [4], gait [5], hand and finger geometry [6], iris [7], keystroke [8], odor [9], palm print [10], handwriting and signature [11], voice [12], etc.
The problem of personal verification and/or identification using palm images has drawn considerable attention. Researchers have proposed various methods [6,13,14,15,16], which can be categorized into geometric-related methods and vein pattern feature methods. Geometric-related methods exploit the geometric characteristics of the hand and/or fingers to provide biometric information. Park and Kim [17] computed curvature on the extracted hand contour to detect the fingertips and valleys. The curvature then served as the primary geometric feature, while finger length, width and the angle of the finger valley were determined to serve as secondary geometric features. This technique, however, required pegs to stabilize the hand before image acquisition in order to achieve a system invariant to the geometric transformation of the hand relative to the camera coordinate system. Xiong and Xu [18] proposed peg-free hand shape analysis and recognition. In their technique, an ellipse was fitted to each finger region to derive the major and minor axes of each finger, which were later used to estimate finger rotation and translation. The technique was limited to the case of rigid transformation. Gupta et al. [19] applied principal component analysis (PCA) to the finger widths and lengths of five fingers and used the result as geometric-based features; however, this technique also assumed a rigid transformation. Guo et al. [20] presented another peg-free hand geometric identification system. This technique determined the tip of the middle finger and normalized the rotation by horizontally aligning the straight line joining the hand centroid and the middle fingertip. This technique reportedly performed well in the case of rigid transformation. Ayurzana et al. [21] used ratios of structural lengths (e.g., the ratio of index to middle finger length, or of palm length to palm width) to derive a system invariant to similarity transformation and immune to scale parameters. Other related work using geometric-based features includes [22].
Vein pattern feature methods exploit the uniqueness of the superficial vasculature inside the human body. Areas such as the hands, fingers and wrists provide a source of promising biometric data that remains relatively unaffected by aging. To effectively visualize these vein patterns, infrared imaging technology is required. As deoxyhemoglobin in the veins absorbs more infrared light, venous regions in the image appear as darker pixel values compared with the surrounding tissues. Wang et al. [23] used near infrared (NIR) light for vein pattern imaging. The disadvantage of this NIR imaging system was that imperfections on the skin surface were visible in the image, requiring their removal with image processing. To extract features, morphological thinning was applied to the vein pattern, and the Hausdorff distance was used as a similarity measure [24]. Joardar et al. [25] applied NIR to the real-time vein pattern imaging of the subcutaneous dorsal hand. This system did not require a fixed setup to place the hand during data acquisition, as they had developed a self-locating algorithm. This was used in conjunction with a two-axis pan-tilt mechanism to locate the palm dorsum automatically, irrespective of its initial positioning by the person under testing. Wang et al. [26] proposed another dorsal hand subcutaneous vein pattern NIR imaging system that was reportedly immune to the undesirable effects of visible light, achieved by surrounding the system with a rectangular box with a specially designed opening for the hand. In addition, a handle was added inside the imaging box for the user to grasp during the image acquisition process. This made the system invariant to rigid transformation, i.e., to the hand location and orientation parameters. Related work on vein pattern-based personal verification and/or identification systems using hand and/or finger images is reviewed in [27,28,29,30,31,32,33,34,35,36,37,38,39,40].
This paper proposes a hybrid-feature dorsal hand vein and dorsal geometry modality for human recognition. The salient aspects and/or contributions of this paper are enumerated as follows.
(1) This paper presents the development of a real-time, low-cost and easy-to-implement vein pattern personal verification system for human recognition.
(2) The system has a non-peg setup for placing the human hand during data acquisition with an NIR-sensitive camera, thus making the system hygienic for users, while the non-peg setup also provides comfort to the users.
(3) The proposed hybrid feature approach exploits both global and local aspects: the geometric feature is global, whereas the vein pattern feature and barcoded feature are local aspects of human recognition. This hierarchical scheme was found to significantly improve the performance.
(4) This system is invariant to affine geometric transformation. From the vein pattern image, minutia points—vein terminations and vein bifurcations—are extracted. With the extracted minutia points, we construct a set of geometric invariants and use it to establish a correspondence between two sets of minutiae: one from the query image and the other from the reference image. Once the correspondence is established, the geometric transformation parameters are computed to align the query against the reference image. With this geometric-invariant algorithm, subjects can freely orient their hand in the acquisition system.
The paper is organized as follows: Section 2 explains the vein pattern image acquisition system; Section 3 is devoted to the identification process; Section 4 describes the experimental validation procedures and results; and Section 5 presents the discussion and conclusion.

2. Vein Pattern Identification System

Our proposed contactless human recognition system using the dorsal vein pattern is shown in Figure 1. The system consists of four units: (i) image acquisition, (ii) image preprocessing, (iii) feature extraction and (iv) identification. The first unit acquires an image of the query hand without the user touching the sensor. The captured image is then preprocessed to enhance it in preparation for feature extraction. With the extracted features, the person is identified in the final unit. The following subsections provide detailed descriptions of each unit in this system.

2.1. Image Acquisition Unit

The image acquisition unit is the frontend of the system, used for capturing a person’s palm image in a contactless way. This is a crucial step, as it affects the complexity of the image preprocessing unit. Capturing high-quality images reduces the complexity of the required image processing and is expected to yield fewer recognition errors, whereas low-quality images typically result in more complex image processing and higher error rates. Obtaining high-quality images allows the relevant key points and features to be extracted more effectively. The design of palm imaging systems for human recognition has been an active research topic over the last decade. These systems exploit the infrared light absorption of deoxyhemoglobin (reduced hemoglobin, HbR) in the blood. Most of the systems presented in the literature use NIR light sources [23,41,42,43,44,45] with wavelengths of 750–1500 nm or far infrared (FIR) light sources [23,46]. Figure 2 shows our proposed image acquisition system using an 850 nm wavelength light source. Such a system can be designed to operate in two modes: a transmission mode [28,41,47], where the infrared light source and the infrared-sensitive camera are installed on opposite sides of the hand, and a reflection mode [17,19,24,26,48], where the infrared light source and infrared-sensitive camera are aligned on the same side of the hand. We opted for a transmission mode configuration, because it is considered more resistant to noise [41]. As shown in Figure 2, an infrared LED is installed in a scatter-light protection cylindrical case with a 5 cm diameter and 5 cm height. A concave lens is also installed at the orifice of the cylindrical case to direct the light onto the subject’s hand. On the opposite side of the infrared LED, an 850 nm wavelength IR-filtered camera is installed. For this purpose, a FUJIKO CCTV camera is modified by removing its IR-cut filter, which would otherwise block infrared light; equipped with a replacement 850 nm IR pass filter, it selectively transmits light at the 850 nm wavelength, and an adjustable arm allows focal length control. To prevent the scattering of ambient light, the system is also covered with protective plastic, leaving only the front side exposed for hand imaging, as shown in Figure 2b. Samples of IR-illuminated vein images are shown in Figure 2c.

2.2. Image Preprocessing Unit

As shown in Figure 3a, the images obtained from the image acquisition unit are in color, with veins that appear darker than the surrounding parts. Image preprocessing is applied to enhance the quality of the acquired images and to ensure the efficacy of the feature extraction process. The image is first converted into grayscale and then segmented, based on a global threshold [49], to extract the hand portion from the background. After that, a 3 × 3 median filter is applied to remove salt-and-pepper noise, together with opening and closing morphological operations to remove remaining speckle noise. Figure 3c shows, inside the red square frame, a zoomed view of the residual interference before image enhancement. Figure 3d illustrates a typical output image, with the hand portion in the foreground after enhancement.
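This preprocessing chain maps directly onto standard image-processing primitives. Below is a minimal sketch using OpenCV; the structuring-element shape and size are our assumptions, as the paper only specifies the 3 × 3 median filter and Otsu’s global threshold.

```python
import cv2
import numpy as np

def preprocess(bgr_image: np.ndarray) -> np.ndarray:
    """Grayscale conversion, Otsu segmentation and noise cleanup."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Global (Otsu) threshold [49] separates the hand from the background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 3x3 median filter removes salt-and-pepper noise.
    mask = cv2.medianBlur(mask, 3)
    # Opening then closing removes the remaining speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))  # assumed size
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the grayscale hand region.
    return cv2.bitwise_and(gray, gray, mask=mask)
```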

2.3. Feature Extraction Process

The proposed recognition system is based on hand geometry and vein patterns. These are utilized in a hierarchical scheme consisting of local and global features. Global features are related to hand geometry, whereas local ones are associated with vein patterns.

2.3.1. Global Hand Geometry Feature Extraction

The goal of the hand geometry feature extraction stage is to find the fingertips and the concave points (valleys) joining the fingers. The obvious characteristic of the tip and the concave point of the fingers is the extremal curvature of the contour around the hand. Therefore, most methods find the distal and concave points of the fingers from the curvature of the hand contour [17,18,50]. Fingertip points can also be identified using the convex hull of the binary palm image. Once a fingertip point is located, the concave point can be further identified by measuring the distance from the midpoint between adjacent fingertip points to the contour, known as the convexity defect [50,51]. In this research, we exploit a radius transform, measuring the distance between the hand centroid and the hand contour. The fingertips and the concave points yield the maximal and minimal radius distances, respectively. To find the hand centroid, a distance map method is applied. As shown in Figure 4, the point with the maximal distance-transform value represents the hand centroid. Figure 5 demonstrates the robustness of this centroid detection under various geometric transformations of the hand. Once the centroid is located, the Euclidean distance between the centroid and the hand contour—the so-called radius distance—is computed using Equation (1):
$$D_k^i = \sqrt{\left(Q_x - P_{ki}^{x}\right)^2 + \left(Q_y - P_{ki}^{y}\right)^2}, \tag{1}$$
where $P_k = \{P_{ki}\}_{i=1}^{I}$ is the hand contour extracted from each image, consisting of $I$ points, $P_{ki} = \{P_{ki}^{x}, P_{ki}^{y}\}$ are the coordinates of these hand contour points, $Q = \{Q_x, Q_y\}$ are the coordinates of the reference point (the centroid) and $D_k^i$ is the distance between the centroid and the hand contour.
Figure 6a shows a centroid detected with the distance transform process, and Figure 6b shows a plot of the radius distance. From the graph of radius distance, the fiducial points, including fingertips and concave points, can be located by finding the extremal points. The fingertip points are the five points of maximal radius distance, denoted T1–T5, while the concave points are the four points of minimal radius distance, denoted V1–V4 (local minima, Vn = Min(Tn : Tn+1)). The detected fingertips and concave points are shown in Figure 6c.
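The following sketch computes the distance-transform centroid and the radius distance of Equation (1) with OpenCV and NumPy. The distance-transform mask size and the peak-picking routine are our choices, not the authors’ code.

```python
import cv2
import numpy as np

def centroid_and_radius(mask: np.ndarray):
    """mask: binary hand segmentation (uint8, 0/255).
    Returns the centroid (x, y), contour points and radius distances."""
    # The interior point farthest from the boundary is the hand centroid.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    qy, qx = np.unravel_index(np.argmax(dist), dist.shape)
    # Radius distance D_k^i of Equation (1) for every contour point.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    radius = np.hypot(pts[:, 0] - qx, pts[:, 1] - qy)
    return (qx, qy), pts, radius

# Fingertips T1-T5 are the five largest local maxima of `radius`; valleys
# V1-V4 are the local minima between consecutive tips, e.g. located with
# scipy.signal.find_peaks(radius) and find_peaks(-radius).
```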

2.3.2. Local Dorsal Hand Vein Feature Extraction

There are many different types of vein minutiae. Ibrahim et al. [47] classified them into seven types, namely termination, bifurcation, lake, independent ridge, dot, spur and crossover. With their unique characteristics across individuals, vein patterns can potentially serve as biometrics. Human recognition using vein pattern biometrics can be roughly classified into two main categories: (i) minutia based and (ii) non-minutia based. In minutia-based vein pattern identification [23,41,42,43,44,45,46,48,49,50,51,52,53], minutia points, such as points of vein termination or bifurcation, are extracted and further used for human recognition. Non-minutia-based vein pattern identification matches the vein pattern of an unknown identity against that of the reference directly, without extracting minutia point information [27]. In this study, we have opted for a minutia extraction approach. The minutia points used here are the vein terminations and bifurcations. With the extracted minutia points, a set of geometric invariant features is constructed and further used to establish the correspondence between two sets of minutiae, one from the query image and the other from the reference image. Once the correspondence is established, the geometric transformation parameters are computed to align the query with the reference image. A quantitative similarity criterion is then computed and used for human recognition. Our process of minutia-based feature extraction from vein patterns is divided into two subprocesses: (i) vein extraction and (ii) minutia extraction, as explained in more detail below.

A. Vein Pattern Extraction

The vein pattern extraction process starts by converting the color palm image from the image acquisition system into a grayscale image. Veins in the grayscale image appear dark, as deoxyhemoglobin in the blood inside the veins absorbs more infrared light, so less infrared light is transmitted to the IR-sensitive camera. Vein extraction from the grayscale image is subdivided into two steps: (i) region of interest (ROI) extraction and (ii) vein extraction. The process is shown in Figure 7.
(i) ROI Extraction
To improve the efficiency of human recognition using vein patterns, a defined ROI is used rather than the whole vein image. From the hand geometric feature extraction process, we determine U2 as the point on the contour whose contour distance from T2 equals that from T2 to V2 (traversed from T2 in the opposite direction), and, similarly, U3 as the point on the contour whose contour distance from T5 equals that from T5 to V4. P1 is then marked as the midpoint between U2 and V2, and P2 as the midpoint between V4 and U3, as shown in Figure 8a. The coordinates of P3 and P4 are then calculated as in Equations (2) and (3):
$$P_3\{x, y\} = \left\{x_{P2} + \left(y_{P2} - y_{P1}\right),\; y_{P2} + \left(x_{P2} - x_{P1}\right)\right\}, \tag{2}$$

and

$$P_4\{x, y\} = \left\{x_{P1} + \left(y_{P2} - y_{P1}\right),\; y_{P1} + \left(x_{P2} - x_{P1}\right)\right\}. \tag{3}$$
This ROI bounds a section of the vein pattern inside a square box, having P1, P2, P3 and P4 as the vertices. An identified ROI is shown in Figure 8b, with the corresponding derived vein pattern shown in Figure 8c.
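A minimal sketch of the corner construction in Equations (2) and (3) follows; here `p1` and `p2` are assumed to be the midpoints located in the hand-geometry stage, and the sign conventions follow the reconstructed equations above.

```python
import numpy as np

def roi_corners(p1: np.ndarray, p2: np.ndarray):
    """p1, p2: (x, y) base points from the hand-geometry stage.
    Returns P3 and P4 per Equations (2) and (3), completing the ROI box."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    p3 = np.array([p2[0] + dy, p2[1] + dx])  # Equation (2)
    p4 = np.array([p1[0] + dy, p1[1] + dx])  # Equation (3)
    return p3, p4
```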
(ii) Vein Extraction
As shown in Figure 8c, venous regions within the image will be darker than the other areas due to the process of infrared light absorption by deoxyhemoglobin in the bloodstream. The process of extracting the vein pattern from the ROI involves three steps.
Step 1: Intensity Profile
To extract the veins from the ROI, the intensity between adjacent pixels is compared. Intensity profiles are calculated along four axes (horizontal, vertical, left diagonal and right diagonal), as shown in Figure 9. From these intensity profiles, vein location pixels are determined from the points of minimal intensity. Figure 10 shows the vertical intensity profile with the corresponding vein location pixels. Note that these pixels represent the center points of different veins.
Step 2: Calculation of the Curvatures of Profiles
As explained in Step 1, a vein location pixel is a point of local minimal intensity. However, rather than using the point of minimal intensity, we use the point of maximum curvature, where a threshold can be used to determine the width of the vein at the vein location pixel. The curvature of the intensity profile is computed with Equation (4):
$$k(z) = \frac{d^2 P_f(z)/dz^2}{\left(1 + \left(dP_f(z)/dz\right)^2\right)^{3/2}}, \tag{4}$$

where $P_f(z)$ is the intensity profile, $d^2 P_f(z)/dz^2$ is its second derivative and $dP_f(z)/dz$ is its first derivative.
Step 3: Vein Detection
Figure 11 depicts the vein extraction process. The intensity profile is shown in Figure 11a; applying Equation (4) produces the curvature graph shown in Figure 11c. To mitigate noise, a curvature threshold is set, annotated with solid lines in Figure 11c. Curvatures exceeding this threshold are considered veins. The point of maximum curvature corresponds to the center of the vein, identified as Px in Figure 11e. The points where the curvature crosses the threshold line determine the width of the vein detected in that vicinity, illustrated as Wx in Figure 11e. Pixels lying within this region are labeled as vein pixels, whereas those lying outside it are classified as non-vein pixels. To improve the accuracy of vein detection, this algorithm is repeated for intensity profiles computed in four directions (Figure 9). The resulting detections from all intensity profiles are then combined to yield the final vein image. Figure 12 demonstrates vein image detection from the four directional intensity profiles computed using the described algorithm.
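The following sketch applies Equation (4) to a single 1-D intensity profile and thresholds the curvature as in Step 3; the default threshold value is a placeholder, not the one used in the paper.

```python
import numpy as np

def vein_pixels_from_profile(profile: np.ndarray, curv_thresh: float = 0.05):
    """profile: 1-D intensity profile P_f(z). Returns a boolean mask
    marking the pixels labeled as vein along this profile."""
    p = profile.astype(float)
    dp = np.gradient(p)                # first derivative dP_f/dz
    d2p = np.gradient(dp)              # second derivative d^2P_f/dz^2
    k = d2p / (1.0 + dp ** 2) ** 1.5   # curvature, Equation (4)
    # Dark vein valleys give positive curvature peaks; pixels whose
    # curvature exceeds the threshold form the vein width W_x around
    # the curvature maximum P_x.
    return k > curv_thresh
```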

B. Minutia Extraction

The goal of this feature extraction process is to obtain minutiae from the vein image. The minutiae of interest are the points of vein termination or bifurcation. Minutia extraction is performed by the following steps:
(i)
Apply morphological closing to remove possible noise from the vein image, as shown in Figure 13a.
(ii)
Apply morphological thinning to the result from step (i).
(iii)
Apply morphological pruning to the result from step (ii) to remove existing spurs.
(iv)
Apply vein ending and vein bifurcation pattern matching to locate the minutia.
This minutia extraction process is illustrated in Figure 13. Example results of this procedure are shown in Figure 13e.
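A compact sketch of the minutia detection follows. It substitutes scikit-image’s `skeletonize` for the morphological thinning/pruning steps and uses the standard crossing-number rule in place of the pattern-matching step; this is our reading of the procedure, not the authors’ code.

```python
import numpy as np
from skimage.morphology import skeletonize

def find_minutiae(vein_binary: np.ndarray):
    """Return lists of (row, col) termination and bifurcation points
    found on the thinned vein image."""
    skel = skeletonize(vein_binary > 0).astype(np.uint8)
    terminations, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skel[r, c]:
                continue
            # Number of skeleton neighbors in the 8-neighborhood.
            n = skel[r - 1:r + 2, c - 1:c + 2].sum() - 1
            if n == 1:
                terminations.append((r, c))   # vein ending
            elif n >= 3:
                bifurcations.append((r, c))   # vein branch point
    return terminations, bifurcations
```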

2.4. Human Recognition Process Based on Vein Pattern Matching

Our method of vein pattern matching is based on minutia registration. The registration process can be carried out in the presence of a rigid transformation or an affine transformation; Figure 14 shows the process for the general nonlinear case. A set of triangles, each spanning three minutia points, is obtained from the query and from the candidate vein pattern using the minutia points previously extracted from each. These are sorted in ascending order of area prior to establishing correspondences. Once enough correspondences are established using the matching method described below, the transformation parameters are determined, the images are aligned and a distance map error between the test vein pattern image and the undo-transformed image is computed for later use in vein pattern recognition.

2.4.1. Minutia-Based Matching

Given a set of minutia points on the query vein pattern image and another on one of the templates, we want to consider a model that is preserved under the existing (rigid or affine) map. Taking the centroid as a reference point, only the minutia points within a certain distance of the centroid are considered, as shown in Figure 15. For a set of $m$ minutia points, we can form up to $\binom{m}{3}$ triangles by taking all possible triples of non-collinear minutia points. Under a rigid map, the side lengths, angles and area of each of these triangles are absolute invariants, i.e., corresponding triangles after a rigid transformation have the same invariant values. Under an affine map, only the area is a relative invariant; it can be made into an absolute invariant by considering ratios of pairs of triangle areas [there are $\binom{\binom{m}{3}}{2}$ such pairs].
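A sketch of the triangle enumeration and area computation follows; the collinearity tolerance is an assumed parameter.

```python
from itertools import combinations
import numpy as np

def triangle_areas(minutiae: np.ndarray):
    """minutiae: (m, 2) array of (x, y) points. Returns a list of
    ((i, j, k), area) tuples sorted in ascending order of area,
    skipping (near-)collinear triples."""
    tris = []
    for i, j, k in combinations(range(len(minutiae)), 3):
        (ax, ay), (bx, by), (cx, cy) = minutiae[i], minutiae[j], minutiae[k]
        # Shoelace formula for the triangle area.
        area = 0.5 * abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
        if area > 1e-6:  # assumed collinearity tolerance
            tris.append(((i, j, k), area))
    tris.sort(key=lambda t: t[1])  # ascending order of area
    return tris
```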

2.4.2. Matching in the Presence of a Rigid Transformation

The rigid transformation corresponds to the case where the hand of the query vein pattern and that of the candidate retrieved from the database are in exactly the same position and orientation. For each triangle, the shape (the triangle’s side lengths and angles) and area are preserved.
The algorithm for matching two sets of triangles, one from the template and the other from the query (Figure 16), proceeds as follows (a code sketch follows the list):
(1)
Obtain the set of triangles for the template and the query vein pattern using a triangulation process. These sets are shown in Figure 16a,g, respectively.
(2)
Sort the triangles of each set in ascending order of area, as shown in Figure 16b,f, respectively.
(3)
Reorient each triangle in the list into a standardized position, where the longest side is taken as the base of the triangle and the side lengths decrease clockwise starting from the base. The lists of triangles in the standardized position for the template and the query are shown in Figure 16c,e, respectively.
(4)
Perform a run-length technique, similar to a list-matching algorithm, by searching for the longest string of matching triangles between the template and query lists. A circular shift of the triangles in the query list is applied while searching for matches in the template list. The matching criterion between a pair of triangles, say the ith and jth, is

$$\%\varepsilon^2 = \frac{\left\| F_i - F_j \right\|^2}{\left\| F_i \right\|^2} \times 100 < \xi, \tag{5}$$
where $F_i = F_i(d_1, d_2, d_3, \alpha_1, \alpha_2, \alpha_3, A)$ is the feature vector of the ith triangle defined in Figure 17, with $d_1$, $d_2$ and $d_3$ being the lengths of the triangle sides, $\alpha_1$, $\alpha_2$ and $\alpha_3$ the angles, $A$ the area of the triangle and $\xi$ a threshold value.
(5)
Declare the match on the longest string ($N$) of triangles that yields the minimum averaged error $\sum_i \%\varepsilon_i^2 / N$.
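The sketch below implements this run-length search in simplified form (it scans runs from the start of the template list only), with `pct_error` realizing Equation (5); the default threshold `xi` is a placeholder.

```python
import numpy as np

def pct_error(fi: np.ndarray, fj: np.ndarray) -> float:
    """Percentage error between two triangle feature vectors, Equation (5)."""
    return 100.0 * float(np.sum((fi - fj) ** 2) / np.sum(fi ** 2))

def longest_match(template, query, xi=5.0):
    """Find the circular shift of the query triangle list that yields the
    longest run of consecutive matches against the template list."""
    n = len(query)
    best_len, best_shift = 0, 0
    for shift in range(n):
        run = 0
        for i in range(min(len(template), n)):
            q = np.asarray(query[(i + shift) % n])   # circular shift
            if pct_error(np.asarray(template[i]), q) < xi:
                run += 1
            else:
                break
        if run > best_len:
            best_len, best_shift = run, shift
    return best_len, best_shift
```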

2.4.3. Matching in the Presence of Affine Transformation

Under an affine transformation, the lengths and angles of the triangles are no longer preserved; the area of corresponding triangles, however, becomes a relative invariant, with the two corresponding areas related through the determinant of the linear transformation matrix T in the affine map (T, b), where b is the translation vector. If the triangle areas of the template sequence are [Aa(1),…,Aa(n)], the corresponding areas A(k) associated with the sequence of triangles on the query are related to those of the template by the following relative invariant:
$$A_a(k) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} A(k), \quad k = 1, 2, \ldots, n, \tag{6}$$
where |T| is the determinant of the transformation matrix T. As the linear matrix is unknown, absolute affine invariants are constructed out of the area relative invariants by taking the ratio of two triangle areas to cancel out the dependence on |T|. By taking the ratio of consecutive elements in the sequence, a set of absolute invariants is obtained, as in Equations (7) and (8):
$$I_a(k) = \pm \frac{A_a(k)}{\left|A_a\left((k+1) \bmod n\right)\right|}, \tag{7}$$

and

$$I(k) = \pm \frac{A(k)}{\left|A\left((k+1) \bmod n\right)\right|}, \tag{8}$$
for $k = 1, 2, \ldots, n$. In the case of noise-free measurement, the absolute invariants of the template equal those of the query, i.e., $I_a(k) = I(k)$, and in the absence of noise and occlusion, each $I_a(k)$ has a counterpart $I(k)$. That counterpart is easily determined through a circular shift involving $n$ comparisons, where $n$ is the number of invariants. To allow for noise and small deviations from an affine map, we permit a small percentage difference between corresponding invariants before declaring them as matching. This may reduce the length of the matching triangle sequence; lowering the allowable error percentage effectively makes the matching stricter. Experimentally, an error percentage of 5% was applied. We adopted a run-length method to decide on the correspondence between the two ordered sets of triangles. For every starting point in the sequence, the run-length method computes a sequence of consecutive invariants satisfying the criterion stated in Equation (9):
$$\%e^2 = \frac{\left(I(i) - I_a(j)\right)^2}{I(i)^2} \times 100 < \xi. \tag{9}$$
We declare a match on the longest string (M) of triangles that yields the minimum averaged error, given by Equation (10):
$$\sum_i \%\varepsilon_i^2 / M. \tag{10}$$
Once correspondences are found, the vertices of the matching triangles are used to estimate the affine transformation. The algorithm for matching the triangles in the query and template vein patterns under an affine map is shown in Figure 18.
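The following sketch computes the absolute invariants of Equations (7) and (8) from a sequence of triangle areas and, once correspondences are declared, estimates the affine map (T, b) from matched vertices by least squares; the solver is our choice, as the paper does not specify one.

```python
import numpy as np

def area_ratio_invariants(areas: np.ndarray) -> np.ndarray:
    """Absolute affine invariants from consecutive area ratios,
    Equations (7)/(8): I(k) = A(k) / |A((k+1) mod n)|."""
    return areas / np.abs(np.roll(areas, -1))

def estimate_affine(src: np.ndarray, dst: np.ndarray):
    """Least-squares estimate of the affine map dst ~ src @ T.T + b
    from matched triangle vertices. src, dst: (n, 2) arrays, n >= 3."""
    A = np.hstack([src, np.ones((len(src), 1))])       # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) solution
    T, b = params[:2].T, params[2]
    return T, b
```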

3. Classification and Identification of a Query Vein Pattern

3.1. Distance Map Error

Once the best warping transformation between the candidate and the query hand positions is estimated from the matching triangles, the corresponding minutia points are established from the matched minutia-based triangles. We rely only on strong matches, comparing each minutia point on the query image to the closest one on the reference image; triangles that have no counterpart are discarded. The performance of our method is quantified by an average distance map error, denoted EVeinMAP and defined in Equation (11). This is done for every possible candidate vein pattern. In an identification problem, the query is identified as the candidate that yields the smallest average geometric distance map error. In a verification problem, the identity of the query vein pattern is verified if the average geometric distance map error is below a set threshold. The distance map error is given by:
$$E_{VeinMAP} = \frac{\sum_{i=1}^{n} \left\| m_c(i) - m_q(i) \right\|^2}{n}, \tag{11}$$
where $m_q(i) = (x_i, y_i)$ is the coordinate of the ith minutia of the query image after the undo transformation, $m_c(i) = (x_i', y_i')$ is the coordinate of the reference minutia closest to that undo-transformed minutia and $n$ is the number of minutiae subjected to the undo transformation.
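A sketch of Equation (11) follows; the nearest-reference-minutia pairing is done with a KD-tree, which is an implementation choice on our part.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_map_error(query_aligned: np.ndarray, reference: np.ndarray) -> float:
    """Average distance map error of Equation (11). query_aligned: (n, 2)
    query minutiae after the undo transformation; reference: (k, 2)
    reference minutiae. Each query minutia is paired with its closest
    reference minutia."""
    dists, _ = cKDTree(reference).query(query_aligned)
    return float(np.sum(dists ** 2) / len(query_aligned))
```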
Figure 19 presents an example of the image registration of the query hand contour (white) against the reference hand contour (red).
To improve the specificity for human recognition, we used another feature vector containing physical parameters extracted from the finger and palm, including the length and width of the fingers, as shown in Figure 20.
In the figure, for example, point C1 is the midpoint between the adjacent concave points V2 and V3. The finger length is defined as the distance between the fingertip T3 and C1. The finger width is defined as the distance between points C2 and C3, where C2 is the half-distance point on the contour between T3 and V2, and C3 is the half-distance point on the contour between T3 and V3. Feature d11 is the palm width, i.e., the distance between C4 and U3, where C4 is the midpoint between V1 and U2. The error of the finger lengths and widths is then calculated using Equation (12):
$$E_{Hand} = \frac{\left\| F_x - F_y \right\|^2}{\left\| F_x \right\|^2} \times 100, \tag{12}$$
where $F_x = (l_{Thumb}, l_{Index}, l_{Middle}, l_{Ring}, l_{Little}, w_{Thumb}, w_{Index}, w_{Middle}, w_{Ring}, w_{Little}, w_{Palm})$ is the feature vector of hand $x$; $l_{Thumb}, \ldots, l_{Little}$ are the lengths of the thumb, index, middle, ring and little fingers, respectively; and $w_{Thumb}, \ldots, w_{Palm}$ are the widths of the thumb, index, middle, ring and little fingers and of the palm, respectively.

3.2. Feature Matching: Barcoded Features

Recognition can also be augmented by an appearance-based matching process that compares the barcoded features of the interiors of the matching triangles declared by the distance map, given that the query corresponds to the candidate image. Two properties make barcoded features beneficial for enhancing vein pattern identification and classification: (i) the vein image consists only of a background and a foreground (vein and non-vein), and (ii) the geometric error distance map is a global feature, whereas barcoded features reflect local properties. This combination of local and global features enhances the performance of our recognition to a certain degree.

3.2.1. Computing the Barcoded Features

Given two corresponding triangles formed by two sets of three corresponding minutia points (in the same manner as for the distance map described in Section 3.1), the vein pattern images are encapsulated by the query and candidate triangles, as shown in Figure 21a. Figure 21b presents the square boxes that bound the triangle patches of interest, from which a sub-image is derived. Each sub-image is then divided into ten vertical stripes; we compute the energy contained in each stripe, normalize the energies and assign the computed gray level to the corresponding stripe, as shown in Figure 21c. This constitutes the barcoded feature vector shown in Figure 21d. A similar feature vector is obtained for the corresponding minutia triangle on the candidate vein pattern.
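A sketch of the stripe-energy computation follows; the paper does not define “energy” precisely, so the sum of squared intensities used here is an assumption.

```python
import numpy as np

def barcode_feature(sub_image: np.ndarray, n_stripes: int = 10) -> np.ndarray:
    """Split the triangle-bounded sub-image into vertical stripes and return
    the normalized per-stripe energy as the barcoded feature vector."""
    stripes = np.array_split(sub_image.astype(float), n_stripes, axis=1)
    energy = np.array([np.sum(s ** 2) for s in stripes])  # assumed energy definition
    total = energy.sum()
    return energy / total if total > 0 else energy
```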

3.2.2. Matching Based on Barcoded Features

Once the barcoded feature vector is computed for all the query triangles (warped in accordance with the estimated transformation under an assumed candidate vein pattern) and for the declared corresponding triangles on the given candidate vein pattern, the barcoded error function is defined by Equation (13):
$$E_{Vein} = \frac{\sum_i \sum_j \left(S_c(i,j) - S_q(i,j)\right)^2}{\sum_i \sum_j S_c^2(i,j)} \times 100, \tag{13}$$
where Sc and Sq are the corresponding barcoded strips (Figure 21d) associated with the (i, j) corresponding triangles on the candidate and the query vein patterns, and where the summation is over all the triangles on the query and their counterparts on the candidate.
With the augmentation of the error metric based on the barcoded features, the overall verification and identification rule combines the geometric error map described in Section 3 with that given in the following error metric:
$$E_{Total} = \beta_1 E_{VeinMAP} + \beta_2 E_{Vein} + \beta_3 E_{Hand}, \tag{14}$$

where the weighting factors $\beta_1$, $\beta_2$ and $\beta_3$ are values between 0 and 1 with $\beta_1 + \beta_2 + \beta_3 = 1$.
The complete vein pattern human recognition process is shown in Figure 22. From the query palm image, the hand centroid and minutia points are extracted. All the minutia points located within a certain distance of the centroid are used to derive a set of absolute affine invariant features, as explained in Section 2.4.3. This set of absolute affine invariants is then matched against that of a reference image. The corresponding minutiae between the query and the reference are then used to align the query image against the reference. Once aligned, the error function given in Equation (14) is computed. The score function, defined in Equation (15), is then used to identify and/or verify the query person:
$$S = 1 - \frac{E_{Total}}{100}. \tag{15}$$
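A minimal sketch of Equations (14) and (15) follows; the default weight values are placeholders, since the paper does not report the β values used.

```python
def total_error(e_vein_map: float, e_vein: float, e_hand: float,
                betas=(0.4, 0.3, 0.3)) -> float:
    """Weighted fusion of the three error terms, Equation (14).
    The weights must sum to 1; these defaults are placeholders."""
    b1, b2, b3 = betas
    return b1 * e_vein_map + b2 * e_vein + b3 * e_hand

def score(e_total: float) -> float:
    """Matching score of Equation (15)."""
    return 1.0 - e_total / 100.0
```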

4. Experimental Results

The experiments are divided into two parts: (i) the registration of two vein patterns after performing an alignment with the affine transformations and (ii) the results of the vein pattern verification of our proposed algorithm against a database of vein patterns.

4.1. Vein Pattern Registration Based on Minutia-Based Matching in the Presence of Affine Transformation

A dorsal image is captured with our system; the size of the original image is 420 × 456 pixels. After the ROI process (see Section 2.3.2 A), the ROI vein pattern image has a size of 145 × 145 pixels. Preprocessing and enhancement are then performed on the ROI, including a median filter, morphological closing, morphological thinning, morphological pruning and minutia detection (see Section 2.3.2 B). Triangulation is performed on a set of selected minutia triplets, as described in Section 2.4.1. The sequences of triangles in both the query image and the reference image are sorted in ascending order of area. Ratios of consecutive triangle areas are formed to derive absolute invariants. The two sequences are then shifted and compared to find the matching triangles. The vertices of the corresponding matching triangles are used to estimate the affine transformation parameters. With the estimated transformation, the query image is aligned against the reference image. To provide a quantitative measure, the average distance error is computed. Figure 23 shows the matching triangles in the query image and the reference image, and Figure 24 demonstrates the alignment of the query against the reference image. The average distance map error before and after the alignment of Figure 24 is shown in Table 1. To test the alignment performance further, we randomly selected a dataset corresponding to 10 individuals with two vein images each, giving a total of (10 × 2) = 20 vein images. The data are divided into two sets, denoted the reference set and the query set. The vein images in the query set are aligned against those in the reference set using the proposed algorithm. The data are shown in Figure 25, where the reference set occupies the first row and the query set the first column. The average alignment error is also shown in Figure 25.

4.2. Vein Pattern Recognition Based on Combined Features

Our human recognition is based on hybrid features that combine geometric-related features and vein pattern features. The geometric feature is related to the geometric structure of the hand, namely the geometric error (EHand) derived from Equation (12). The vein pattern features are the vein pattern error (EVeinMAP) and the barcoded feature error (EVein) derived from Equations (11) and (13), respectively. The combined error is the weighted linear combination of these errors (Equation (14)), which is then used in our vein pattern human recognition system.
We have tested the performance of our vein pattern recognition system on 140 subjects (87 females; 53 males) aged between 18 and 40 years. Ten images are collected from each subject with different hand orientations, including left and right roll rotation, front and back pitch rotation, left and right yaw rotation and four relaxed hand positions. The roll, pitch and yaw rotations are shown in Figure 26.
The limits of the roll and pitch angles are within ±20°, while the yaw angle ranges over 0–180°. The total number of images is 1400 (140 × 10). The number of genuine matches is 6300 (45 possible pairs of vein patterns from the same hand × 140 individuals), and the number of impostor matches is 9730 ((140 × 139)/2 possible pairs of template vein patterns from different hands). The distributions of matching scores form two overlapping curves—scores for images from the same hand (red curve, right) and for images from different hands (blue curve, left)—as shown in Figure 27, where the cross-over point of the false non-match rate (FNMR) curve and the false match rate (FMR) curve is 0.607.
As shown in Figure 28, we calculated the receiver operating characteristic (ROC) curve. The area under the curve (AUC) is used as the optimization objective, since it provides a good summary of ROC performance. The AUC value of 0.99842 indicates that our method is highly discriminative. We also evaluated the overall performance in terms of the equal error rate (EER), i.e., the error rate at which the false non-match rate (FNMR) of genuine vein patterns and the false match rate (FMR) of impostor vein patterns assume the same value, as shown in Figure 29. A common approach is to use the cross-over point between the FNMR curve and the FMR curve. In this regard, we achieved an EER of 0.243%, indicating that our method is feasible and effective for dorsal vein recognition with high accuracy. Finally, a detection error tradeoff (DET) curve plotting the FMR against the FNMR is presented in Figure 30. The comparison results are presented in Table 2.

5. Discussion and Conclusion

In this paper, we introduced a hybrid method combining dorsal hand vein and dorsal geometry modalities for human recognition. We designed a hand vein pattern acquisition system that is immune to ambient light disturbance. The captured infrared image is preprocessed using a median filter, morphological closing, morphological thinning, morphological pruning and minutia detection. To find correspondences between the minutia points of two vein pattern images, a set of geometric invariants was determined based on triangles constructed from minutia point triplets. After the correspondences were established, the parameters of the relevant transformation were estimated and the two images were aligned. The performance of our method was demonstrated by its ability to register two vein pattern images scanned under a host of shape transformations. The results of the vein pattern alignment revealed that our proposed method can find corresponding minutiae and align any two vein patterns under an affine transformation. This makes our system applicable under conditions that differ from those under which the vein pattern database was constructed. For vein pattern verification, we also proposed a rule that combines the geometric distance map error with the barcoded features to verify the query vein pattern against the reference vein pattern. Our method yielded an area of 0.99842 under the ROC curve and compares favorably to other methods, with an EER of 0.243%.

Author Contributions

Conceptualization, Y.P. and C.P.; data curation, Y.P.; formal analysis, Y.P. and C.P.; investigation, Y.P.; methodology, Y.P.; project administration, C.P.; resources, N.T.; software, Y.P.; supervision, C.P.; validation, H.A. and C.P.; visualization, Y.P.; writing—original draft, Y.P.; writing—review & editing, H.A., C.P. and Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to express their gratitude to Jamie O’Reilly and Thanate Angsuwatanakul, Rangsit University, who provided insight and expertise that greatly assisted the research, as well as Thapanee Khemanuwong, King Mongkut’s Institute of Technology Ladkrabang, for assistance with English language editing and proofreading.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xue, M.; Li, J.; Xu, W.; Lu, Z.; Wang, K.; Ko, P.K.; Chan, M. A self-assembly conductive device for direct DNA identification in integrated microarray based system. In Proceedings of the Digest. International Electron Devices Meeting, San Francisco, CA, USA, 8–11 December 2002; pp. 207–210. [Google Scholar]
  2. Moreno, B.; Sanchez, A.; Vélez, J.F. On the use of outer ear images for personal identification in security applications. In Proceedings of the IEEE 33rd Annual 1999 International Carnahan Conference on Security Technology (Cat. No. 99CH36303), Madrid, Spain, 5–7 October 1999; pp. 469–476. [Google Scholar]
  3. Gao, Y.; Leung, M.K. Face recognition using line edge map. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 764–779. [Google Scholar]
  4. Alam, M.; Akhteruzzaman, M. Real time fingerprint identification. In Proceedings of the IEEE 2000 National Aerospace and Electronics Conference. NAECON 2000. Engineering Tomorrow (Cat. No. 00CH37093), Dayton, OH, USA, 12 October 2000; pp. 434–440. [Google Scholar]
  5. Lee, C.-S.; Elgammal, A. Gait style and gait content: Bilinear models for gait recognition using gait re-sampling. In Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, 19 May 2004; pp. 147–152. [Google Scholar]
  6. Sanchez-Reillo, R.; Sanchez-Avila, C.; Gonzalez-Marcos, A. Biometric identification through hand geometry measurements. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1168–1171. [Google Scholar] [CrossRef] [Green Version]
  7. Liam, L.W.; Chekima, A.; Fan, L.C.; Dargham, J.A. Iris recognition using self-organizing neural network. In Proceedings of the Student conference on research and development, Shah Alam, Malaysia, 17 July 2002; pp. 169–172. [Google Scholar]
  8. Yu, E.; Cho, S. GA-SVM wrapper approach for feature subset selection in keystroke dynamics identity verification. In Proceedings of the International Joint Conference on Neural Networks, Portland, OR, USA, 20–24 July 2003; pp. 2253–2257. [Google Scholar]
  9. Nakamoto, T.; Moriizumi, T. Odor sensor using quartz-resonator array and neural-network pattern recognition. In Proceedings of the IEEE 1988 Ultrasonics Symposium Proceedings, Chicago, IL, USA, 2–5 October 1988; pp. 613–616. [Google Scholar]
  10. Zhang, L.; Zhang, D. Characterization of Palmprints by Wavelet Signatures via Directional Context Modeling. IEEE Trans. Syst. Man, Cybern. Part B (Cybernetics) 2004, 34, 1335–1347. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Palla, S.; Lei, H.; Govindaraju, V. Signature and lexicon pruning techniques. In Proceedings of the Ninth International Workshop on Frontiers in Handwriting Recognition, Tokyo, Japan, 26–29 October 2004; pp. 474–478. [Google Scholar]
  12. Kong, W.K.; Zhang, D. Palmprint texture analysis based on low-resolution images for personal authentication. In Proceedings of the Object recognition supported by user interaction for service robots, Quebec City, QC, Canada, 11–15 August 2002; pp. 807–810. [Google Scholar]
  13. Funada, J.-i.; Ohta, N.; Mizoguchi, M.; Temma, T.; Nakanishi, K.; Murai, A.; Sugiuchi, T.; Wakabayashi, T.; Yamada, Y. Feature extraction method for palmprint considering elimination of creases. In Proceedings of the Fourteenth International Conference on Pattern Recognition (Cat. No. 98EX170), Brisbane, Australia, 20 August 1998; pp. 1849–1854. [Google Scholar]
  14. Li, W.; Zhang, D.; Xu, Z. Palmprint identification by Fourier transform. Int. J. Pattern Recognit. Artif. Intell. 2002, 16, 417–432. [Google Scholar] [CrossRef]
  15. Sharma, S.; Dubey, S.R.; Singh, S.K.; Saxena, R.; Singh, R.K. Identity verification using shape and geometry of human hands. Expert Syst. Appl. 2015, 42, 821–832. [Google Scholar] [CrossRef]
  16. You, J.; Li, W.; Zhang, D. Hierarchical palmprint identification via multiple feature extraction. Pattern Recognit. 2002, 35, 847–859. [Google Scholar] [CrossRef]
  17. Park, G.; Kim, S. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns. Sensors 2013, 13, 2895–2910. [Google Scholar] [CrossRef] [Green Version]
  18. Xiong, W.; Xu, C.; Ong, S.H. Peg-free human hand shape analysis and recognition. In Proceedings of the (ICASSP’05). IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, USA, 23 March 2005; Volume 72, pp. 77–80. [Google Scholar]
  19. Gupta, P.; Srivastava, S.; Gupta, P. An accurate infrared hand geometry and vein pattern based authentication system. Knowledge-Based Syst. 2016, 103, 143–155. [Google Scholar] [CrossRef]
  20. Guo, J.-M.; Liu, Y.-F.; Chu, M.-H.; Wu, C.-C.; Le, T.-N. Contact-free hand geometry identification system. In Proceedings of the 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 3185–3188. [Google Scholar]
  21. Ayurzana, O.; Pumbuurei, B.; Kim, H. A study of hand-geometry recognition system. In Proceedings of the Ifost, Ulaanbaatar, Mongolia, 28 June–1 July 2013; pp. 132–135. [Google Scholar]
  22. Duta, N. A survey of biometric technology based on hand shape. Pattern Recognit. 2009, 42, 2797–2806. [Google Scholar] [CrossRef]
  23. Wang, L.; Leedham, G.; Cho, S.-Y. Infrared imaging of hand vein patterns for biometric purposes. IET Comput. Vis. 2007, 1, 113–122. [Google Scholar] [CrossRef]
  24. Joardar, S.; Chatterjee, A.; Rakshit, A. A Real-Time Palm Dorsa Subcutaneous Vein Pattern Recognition System Using Collaborative Representation-Based Classification. IEEE Trans. Instrum. Meas. 2014, 64, 959–966. [Google Scholar] [CrossRef]
  25. Joardar, S.; Chatterjee, A.; Rakshit, A. Real-time NIR imaging of Palm Dorsa subcutaneous vein pattern based biometrics: An SRC based approach. IEEE Instrum. Meas. Mag. 2016, 19, 13–19. [Google Scholar] [CrossRef]
  26. Wang, Y.; Xie, W.; Yu, X.; Shark, L.-K. An automatic physical access control system based on hand vein biometric identification. IEEE Trans. Consum. Electron. 2015, 61, 320–327. [Google Scholar] [CrossRef]
  27. Alejo, W.; Rodriguez, D.; Kemper, G. A biometric method based on the matching of dilated and skeletonized IR images of the veins map of the dorsum of the hand. IEEE Lat. Am. Trans. 2015, 13, 1438–1445. [Google Scholar] [CrossRef] [Green Version]
  28. Chen, L.; Wang, J.; Yang, S.; He, H. A Finger Vein Image-Based Personal Identification System With Self-Adaptive Illuminance Control. IEEE Trans. Instrum. Meas. 2017, 66, 294–304. [Google Scholar] [CrossRef]
  29. Chen, P.; Ding, B.; Wang, H.; Liang, R.; Zhang, Y.; Zhu, W.; Liu, Y. Design of low-cost personal identification system that uses combined palm vein and palmprint biometric features. IEEE Access 2019, 7, 15922–15931. [Google Scholar] [CrossRef]
  30. Gupta, P.; Gupta, P. Multibiometric authentication system using slap fingerprints, palm dorsal vein, and hand geometry. IEEE Trans. Ind. Electron. 2018, 65, 9777–9784. [Google Scholar] [CrossRef]
  31. Kang, W.; Wu, Q. Contactless Palm Vein Recognition Using a Mutual Foreground-Based Local Binary Pattern. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1974–1985. [Google Scholar] [CrossRef]
  32. Kumar, A.; Prathyusha, K.V. Personal Authentication Using Hand Vein Triangulation and Knuckle Shape. IEEE Trans. Image Process. 2009, 18, 2127–2136. [Google Scholar] [CrossRef] [Green Version]
  33. Kumar, A.; Zhou, Y. Human Identification Using Finger Images. IEEE Trans. Image Process. 2011, 21, 2228–2244. [Google Scholar] [CrossRef]
  34. Mirmohamadsadeghi, L.; Drygajlo, A. Palm vein recognition with local texture patterns. IET Biom. 2014, 3, 198–206. [Google Scholar] [CrossRef]
  35. Nurhudatiana, A.; Kong, A.W.-K. On Criminal Identification in Color Skin Images Using Skin Marks (RPPVSM) and Fusion With Inferred Vein Patterns. IEEE Trans. Inf. Forensics Secur. 2015, 10, 916–931. [Google Scholar] [CrossRef]
  36. Ramachandra, R.; Raja, K.B.; Venkatesh, S.K.; Busch, C. Design and Development of Low-Cost Sensor to Capture Ventral and Dorsal Finger Vein for Biometric Authentication. IEEE Sens. J. 2019, 19, 6102–6111. [Google Scholar] [CrossRef]
  37. Wang, D.; Li, J.; Memik, G. User identification based on finger-vein patterns for consumer electronics devices. IEEE Trans. Consum. Electron. 2010, 56, 799–804. [Google Scholar] [CrossRef]
  38. Wang, Y.; Zhang, K.; Shark, L.-K. Personal identification based on multiple keypoint sets of dorsal hand vein images. IET Biom. 2014, 3, 234–245. [Google Scholar] [CrossRef]
  39. Yang, L.; Yang, G.; Xi, X.; Su, K.; Chen, Q.; Yin, Y. Finger Vein Code: From Indexing to Matching. IEEE Trans. Inf. Forensics Secur. 2018, 14, 1210–1223. [Google Scholar] [CrossRef]
  40. Yuksel, A.; Akarun, L.; Sankur, B. Hand vein biometry based on geometry and appearance methods. IET Comput. Vis. 2011, 5, 398. [Google Scholar] [CrossRef] [Green Version]
  41. Cao, J.; Xu, M.; Shi, W.; Yu, Z.; Salim, A.; Kilgore, P. MyPalmVein: A palm vein-based low-cost mobile identification system for wide age range. In Proceedings of the 17th International Conference on E-health Networking, Application & Services (HealthCom), Boston, MA, USA, 14–17 October 2015; pp. 292–297. [Google Scholar]
  42. Cross, J.; Smith, C. Thermographic imaging of the subcutaneous vascular network of the back of the hand for biometric identification. In Proceedings of the The Institute of Electrical and Electronics Engineers. 29th Annual 1995 International Carnahan Conference on Security Technology, Surrey, UK, 18–20 October 1995; pp. 20–35. [Google Scholar]
  43. Harsha, P.; Subashini, C. A real time embedded novel finger-vein recognition system for authenticated on teller machine. In Proceedings of the 2012 International Conference on Emerging Trends in Electrical Engineering and Energy Management (ICETEEEM), Chennai, India, 13–15 December 2012; pp. 271–275. [Google Scholar]
  44. Miura, N.; Nagasaka, A.; Miyatake, T. Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vis. Appl. 2004, 15, 194–203. [Google Scholar] [CrossRef]
  45. Wang, J.-G.; Yau, W.-Y.; Suwandy, A.; Sung, E. Person recognition by fusing palmprint and palm vein images based on “Laplacianpalm” representation. Pattern Recognit. 2008, 41, 1514–1527. [Google Scholar] [CrossRef]
  46. Lin, C.-L.; Fan, K.-C. Biometric Verification Using Thermal Images of Palm-Dorsa Vein Patterns. IEEE Trans. Circuits Syst. Video Technol. 2004, 14, 199–213. [Google Scholar] [CrossRef] [Green Version]
  47. Ibrahim, M.M.S.; Al-Namiy, F.S.; Beno, M.; Rajaji, L. Biometric authentication for secured transaction using finger vein technology. Int. Conf. Sustain. Energy Intell. Syst. 2011. [Google Scholar] [CrossRef]
  48. Gupta, P.; Gupta, P. Multi-modal fusion of palm-dorsa vein pattern for accurate personal authentication. Knowl. Based Syst. 2015, 81, 117–130. [Google Scholar] [CrossRef]
  49. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man, Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  50. Amayeh, G.; Bebis, G.; Hussain, M. A comparative study of hand recognition systems. In Proceedings of the 2010 International Workshop on Emerging Techniques and Challenges for Hand-Based Biometrics, Istanbul, Turkey, 22 August 2010; pp. 1–6. [Google Scholar]
  51. Tofighi, G.; Monadjemi, S.A.; Ghasem-Aghaee, N. Rapid hand posture recognition using adaptive histogram template of skin and hand edge contour. In Proceedings of the 6th Iranian Conference on Machine Vision and Image Processing, Isfahan, Iran, 27–28 October 2010; pp. 1–5. [Google Scholar]
  52. Huang, D.; Zhu, X.; Wang, Y.; Zhang, D. Dorsal hand vein recognition via hierarchical combination of texture and shape clues. Neurocomputing 2016, 214, 815–828. [Google Scholar] [CrossRef]
  53. Toh, K.-A.; Eng, H.-L.; Choo, Y.-S.; Cha, Y.-L.; Yau, W.-Y.; Low, K.-S. Identity verification through palm vein and crease texture. In Proceedings of the International Conference on Biometrics; Springer: Berlin/Heidelberg, Germany, 2006; pp. 546–553. [Google Scholar]
  54. Ahmad, F.; Cheng, L.-M.; Khan, A. Lightweight and Privacy-Preserving Template Generation for Palm-Vein-Based Human Recognition. IEEE Trans. Inf. Forensics Secur. 2020, 15, 184–194. [Google Scholar] [CrossRef]
  55. Cho, S.; Oh, B.-S.; Toh, K.-A.; Lin, Z. Extraction and Cross-Matching of Palm-Vein and Palmprint From the RGB and the NIR Spectrums for Identity Verification. IEEE Access 2020, 8, 4005–4021. [Google Scholar] [CrossRef]
  56. Wu, W.; Elliott, S.J.; Lin, S.; Yuan, W. Low-cost biometric recognition system based on NIR palm vein image. IET Biom. 2019, 8, 206–214. [Google Scholar] [CrossRef]
  57. Zhou, Y.; Kumar, A. Human Identification Using Palm-Vein Images. IEEE Trans. Inf. Forensics Secur. 2011, 6, 1259–1274. [Google Scholar] [CrossRef]
  58. Zhou, Y.; Kumar, A. Contactless palm vein identification using multiple representations. In Proceedings of the 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), Washington, DC, USA, 27–29 September 2010; pp. 1–6. [Google Scholar]
Figure 1. Proposed contactless human recognition using the dorsal vein pattern.
Figure 2. Image acquisition system design. (a) Setup, (b) design and construction, (c) samples of acquired images.
Figure 3. Image preprocessing: (a) original image, (b) gray scale image, (c) binarized image using the global threshold, (d) segmentation output image.
Figure 4. Extraction of fiducial points, including fingertips and concave (valley) points.
Figure 5. Robustness of hand centroid extraction based on the distance-transform image.
Figure 6. (a) Distance transform, (b) radius distance plot, (c) extracted fiducial points.
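Figures 5 and 6 suggest that the hand centroid is taken at the deepest interior point of the distance-transform image, which makes the landmark robust to finger pose. A minimal sketch of that reading (the paper gives no code) follows.

```python
import cv2
import numpy as np

def hand_centroid(hand_mask: np.ndarray) -> tuple:
    """hand_mask: uint8 binary hand image, e.g., from the sketch above."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    row, col = np.unravel_index(np.argmax(dist), dist.shape)
    return col, row  # (x, y) of the point farthest from the hand boundary
```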
Figure 7. Vein extraction process, including (i) region of interest (ROI) extraction and (ii) vein extraction.
Figure 8. Vein pattern ROI extraction: (a) reference point for the ROI; (b) ROI for vein extraction; and (c) ROI extraction result.
Figure 9. Intensity profiles in four directions: horizontal, vertical, left diagonal and right diagonal.
Figure 10. Vein location pixels A, B, C, D, E and F (left) from the vertical intensity profiles (right).
Figure 11. Vein extraction process: (a) intensity profile of the vein image; (b) computation of the intensity curvature; (c) intensity curvature; (d) peak and width detection; (e) positions of the detected peaks and widths; (f) mapping function between (a) and (e); and (g) vein extraction result.
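The profile-based detection of Figure 11 can be approximated per scan line as follows: veins appear as intensity valleys, so peaks of the second derivative (curvature) of the profile mark vein centers, and the peak widths estimate vein widths. A sketch under those assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def vein_pixels_1d(profile: np.ndarray):
    """Candidate vein centers and widths along one intensity profile (a)."""
    curvature = np.gradient(np.gradient(profile.astype(float)))  # (b), (c)
    centers, props = find_peaks(curvature, height=0, width=1)    # (d), (e)
    return centers, props["widths"]
```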
Figure 12. Vein image detection: (a) results from the four directional intensity profiles; (b) combined vein image; (c) final classified vein image.
Figure 13. Minutia extraction: (a) vein image; (b) morphological closing operation; (c) morphological thinning; (d) morphological pruning; and (e) result of minutia extraction.
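A sketch of the morphological pipeline of Figure 13 using scikit-image follows; the crossing-number rule used to classify skeleton pixels (one neighbor = termination, three or more = bifurcation) is a standard choice, not quoted from the paper, and spur pruning is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import binary_closing, disk, skeletonize

def extract_minutiae(vein_mask: np.ndarray) -> np.ndarray:
    """vein_mask: boolean vein image, e.g., the classified image of Figure 12c."""
    skel = skeletonize(binary_closing(vein_mask, disk(3)))       # (b), (c)
    # count 8-neighbors of each skeleton pixel
    nbrs = convolve(skel.astype(int), np.ones((3, 3), int)) - skel
    terminations = skel & (nbrs == 1)
    bifurcations = skel & (nbrs >= 3)
    return np.argwhere(terminations | bifurcations)              # (e)
```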
Figure 14. Registration process.
Figure 15. Triangulation on the set of minutia points close to the centroid: (a) determination of the centroid of the vein pattern image; (b) calculation of the distances between minutia points and the centroid; (c) the ten minutia points closest to the centroid; (d) local convex hull of the selected minutia set; and (e) triangles formed from the selected minutia set.
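The selection step of Figure 15 limits matching cost by triangulating only the minutiae nearest the vein-pattern centroid. The sketch below forms all triangles among the ten nearest points; the paper instead builds them from the local convex hull, so this is a simplification.

```python
import numpy as np
from itertools import combinations

def local_triangles(minutiae: np.ndarray, k: int = 10) -> list:
    """minutiae: (N, 2) array of minutia coordinates; returns vertex triplets."""
    centroid = minutiae.mean(axis=0)                                 # (a)
    order = np.argsort(np.linalg.norm(minutiae - centroid, axis=1))  # (b)
    nearest = minutiae[order[:k]]                                    # (c)
    return [nearest[list(t)] for t in combinations(range(len(nearest)), 3)]  # (e)
```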
Figure 16. Matching in the case of rigid transformation. (a) Set of triangles of vein pattern minutiae of the template. (b) Sorted (a) in ascending order. (c) Reorientation of (b) in a normalized position. (g) Set of triangles of vein pattern minutiae of the unknown. (f) Sorted (g) in ascending order. (e) Reorientation of (f) in a normalized position. (d) Matching triangles between (c) and (e).
Figure 17. Definition of the parameters in feature vector F: di is the i-th side length of the triangle, αi is the corresponding interior angle and A is the triangle area.
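Following the caption of Figure 17, the feature vector F can be assembled from the side lengths di, interior angles αi and area A of each triangle; the ordering of entries inside F below is our assumption.

```python
import numpy as np

def feature_vector(p: np.ndarray, q: np.ndarray, r: np.ndarray) -> np.ndarray:
    def angle(a, b, c):  # interior angle at vertex a
        u, v = b - a, c - a
        return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    d = [np.linalg.norm(q - p), np.linalg.norm(r - q), np.linalg.norm(p - r)]
    alpha = [angle(p, q, r), angle(q, r, p), angle(r, p, q)]
    area = 0.5 * abs((q[0]-p[0]) * (r[1]-p[1]) - (r[0]-p[0]) * (q[1]-p[1]))
    return np.array(d + alpha + [area])  # F = (d1..d3, α1..α3, A)
```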
Figure 18. Matching in the case of affine transformation. (a) Set of triangles of vein minutiae from the template. (b) Sorted (a) in ascending order. (c) The area of (b). (d) Ratio of the area of (c). (h) Set of triangles of vein minutiae from the query. (g) Sorted (h) in ascending order. (f) The area of (g). (e) Ratio of the area of (f). Matching triangles between (d) and (e) are shown in the box.
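The matching cue of Figure 18 rests on the fact that an affine map scales every area by the same factor |det A|, so ratios of triangle areas are affine invariants. A minimal sketch:

```python
import numpy as np

def tri_area(p, q, r) -> float:
    """Area of the triangle with 2-D vertices p, q, r."""
    return 0.5 * abs((q[0]-p[0]) * (r[1]-p[1]) - (r[0]-p[0]) * (q[1]-p[1]))

def area_ratios(areas) -> np.ndarray:
    a = np.sort(np.asarray(areas, dtype=float))  # ascending order, as in (b)/(g)
    return a[1:] / a[:-1]                        # consecutive ratios, as in (d)/(e)
```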
Figure 19. Image registration of the query hand contour (white) against the reference hand contour (red): (a) the query and reference hand contours; (b) registration of the two contours by affine transformation; (c) the registration result.
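Once corresponding minutia triplets are found, the affine parameters behind the registration in Figure 19 follow from a least-squares fit; cv2.estimateAffine2D is one off-the-shelf solver (our choice of tool, not necessarily the authors').

```python
import cv2
import numpy as np

def align(query_img, query_pts, ref_img, ref_pts):
    """query_pts, ref_pts: corresponding (N, 2) point arrays with N >= 3."""
    A, _ = cv2.estimateAffine2D(np.float32(query_pts), np.float32(ref_pts))
    h, w = ref_img.shape[:2]
    return cv2.warpAffine(query_img, A, (w, h))  # query aligned to reference
```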
Figure 20. Feature extraction.
Figure 21. Barcoded features: (a) vein pattern images are encapsulated by the corresponding query and candidate triangles; (b) square boxes are bounded on each triangle patch; (c) each sub-image is divided into ten vertical stripes; (d) the energy in each stripe is computed, normalized to a gray level and written into the corresponding stripe.
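The barcoded local feature of Figure 21 can be sketched as follows, taking stripe "energy" to be the sum of squared intensities (our assumption; the paper does not define the energy measure explicitly):

```python
import numpy as np

def barcode(patch: np.ndarray, n_stripes: int = 10) -> np.ndarray:
    """patch: grayscale square sub-image bounding one matched triangle (b)."""
    stripes = np.array_split(patch.astype(float), n_stripes, axis=1)  # (c)
    energy = np.array([np.sum(s ** 2) for s in stripes])              # (d)
    return np.round(255 * energy / energy.max()).astype(np.uint8)     # gray levels
```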
Figure 22. Vein pattern human recognition/verification process.
Figure 23. Matching triangles of the template (a) and the query (b) vein patterns in the presence of affine transformation.
Figure 24. Two vein patterns before (a) and after (b) alignment in the presence of affine transformation.
Figure 25. Ten randomly selected vein images in the query set aligned against each of the corresponding ten reference vein images. Images on the diagonal are perfectly matched.
Figure 26. Vein pattern impressions at varied (a,c) roll angle, (b,d) pitch angle and (e,f) yaw angle.
Figure 27. Distributions of the genuine and impostor similarity scores.
Figure 28. Zoomed-in receiver operating characteristic (ROC) curve, where the square mark inside the circle indicates the cut-off point.
Figure 29. False match rate (FMR) and false non-match rate (FNMR) curves.
Figure 30. Detection-error tradeoff (DET) curve: FMR plotted against FNMR.
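For reference, the EER behind Figures 28–30 is the operating point where FMR equals FNMR; a minimal computation from genuine and impostor similarity scores (placeholder arrays) is:

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Similarity scores: larger values indicate a more likely genuine match."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    fnmr = np.array([(genuine < t).mean() for t in thresholds])   # false non-match
    fmr = np.array([(impostor >= t).mean() for t in thresholds])  # false match
    i = int(np.argmin(np.abs(fmr - fnmr)))
    return float((fmr[i] + fnmr[i]) / 2)
```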
Table 1. Average distance map and alignment errors of the two vein patterns after performing affine transformation.

Error                                                 Before Alignment    After Alignment
Average distance map (in pixels)                      10.915              1.773
Alignment error on average (% of vein image size)     7.527               0.122
Table 2. Performance comparison of our algorithm with other matching algorithms at the EER.

Reference                       Year  Methodology                                                  Performance (EER)
Ahmad et al. [54]               2020  Wave atom transform                                          1.49% (self-built database)
Cho et al. [55]                 2020  Cross-spectral matching                                      0.472%
Wu, Elliott, Lin, & Yuan [56]   2018  Haar wavelet decomposition and partial least squares         0.4058% (self-built database)
Gupta & Gupta [30]              2018  Multibiometric fusion based on serial methodology            0.72%
Kumar et al. [32]               2009  Triangulation of hand vein images and simultaneous           1.14%; fusion: 0.38% (left hand),
                                      extraction of knuckle shape information                      0.28% (right hand)
Zhou & Kumar [57]               2011  Neighborhood Matching Radon Transform                        0.51%
Zhou & Kumar [58]               2010  Hessian phase                                                2.24%
                                      Local Radon transform                                        1.03%
                                      Ordinal code                                                 2.00%
                                      Laplacian palm                                               5.00%
                                      Score combination of the above four methods                  0.38%
Kang & Wu [31]                  2014  Improved LBP method on mutual foreground                     0.267%
Proposed algorithm              2020  Local geometric invariants from minutia points augmented     0.243%
                                      with barcoded local features
