Article

Evaluation of a Vein Biometric Recognition System on an Ordinary Smartphone

by Paula López-González, Iluminada Baturone, Mercedes Hinojosa and Rosario Arjona *
Instituto de Microelectrónica de Sevilla (IMSE-CNM), Universidad de Sevilla-CSIC, 41092 Seville, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(7), 3522; https://doi.org/10.3390/app12073522
Submission received: 8 March 2022 / Revised: 26 March 2022 / Accepted: 28 March 2022 / Published: 30 March 2022

Abstract
Nowadays, biometrics based on vein patterns is a promising technique. Vein patterns satisfy universality, distinctiveness, permanence, performance, and protection against circumvention. However, collectability and acceptability are not completely satisfied. These two properties are directly related to acquisition methods. The acquisition of vein images usually exploits the absorption of near-infrared (NIR) light by the hemoglobin inside the veins, which is higher than in the surrounding tissues. Typically, specific devices are designed to improve the quality of the vein images. However, such devices increase collectability costs and reduce acceptability. This paper focuses on using commercial smartphones with ordinary cameras as potential devices to improve collectability and acceptability. In particular, we use smartphone applications (apps), mainly employed for medical purposes, to acquire images with the smartphone camera and improve the contrast of superficial veins, as if infrared LEDs were used. A recognition system has been developed that employs the free IRVeinViewer app to acquire images from wrists and dorsal hands and a feature extraction algorithm based on SIFT (scale-invariant feature transform) with adequate pre- and post-processing stages. The recognition performance was evaluated with a database of 1000 vein images, corresponding to five samples from each of 20 wrists and 20 dorsal hands, acquired at different times of day from people of different ages and genders under five environmental conditions: outdoor during the day, indoor with natural light, indoor with natural light and a dark homogeneous background, indoor with artificial light, and darkness. The variability of the images acquired in different sessions and under different ambient conditions has a large influence on the recognition rates, so that our results are similar to those of other systems in the literature that employ specific smartphones and additional light sources. Since the reported quality assessment algorithms do not help to reject poorly acquired images, we evaluated a solution at enrollment and matching that acquires several images in succession, computes their similarity, and accepts only the samples whose similarity is greater than a threshold. This improves the recognition, and it is practical since our Android implementation works in real time and the usability of the acquisition app is high.

1. Introduction

Biometrics refers to the measurement of a biological trait [1] and its use for individual recognition. We involuntarily apply biometric recognition in many everyday situations. For example, when someone who is not in our contact list calls us on the phone, we try to associate the voice we hear with the people we know. If the voice matches one of our memories, we quickly recognize the person. Biometric recognition systems are automated methods for recognizing an individual.
There are two phases in a biometric recognition system [1]. In the first phase, enrollment, the individual registers his/her biometric data, which are stored in the system (normally in a database). The registered data are known as the template. In the second phase, matching, the individual is recognized from new biometric data. The second phase is carried out as many times as the individual has to be recognized. A biometric recognition system is composed of five stages: acquisition, preprocessing, feature extraction, template storage, and matching. Samples of biometric traits are obtained during the acquisition stage, in which sensors capture raw data. These data are preprocessed by applying filters and extracting regions of interest, among other operations. The feature extraction stage generates a representation of the samples that facilitates the generation and storage of templates at enrollment and the comparison with them at matching. The matching result (also known as the score) shows how similar the templates are.
Comparisons can be split into genuine and impostor comparisons. Genuine comparisons are computed between samples from the same individual, while impostor comparisons are computed between samples from different individuals. The false non-match rate (FNMR) and false match rate (FMR) are the usual indicators for evaluating authentication performance. They are also known as the false rejection rate (FRR) and false acceptance rate (FAR), respectively. The FNMR is determined by the proportion of genuine comparisons with a similarity score smaller than the authentication threshold. Conversely, the FMR is determined by the proportion of impostor comparisons with a score greater than the threshold. From these indicators, the equal error rate (EER) is the value at which FNMR and FMR coincide.
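To make these definitions concrete, the following minimal Python sketch (ours, not from the paper; the score lists and function name are illustrative) estimates FNMR, FMR, and the EER from lists of genuine and impostor similarity scores:

```python
import numpy as np

def estimate_eer(genuine_scores, impostor_scores):
    """Approximate the equal error rate from similarity scores.

    FNMR(t): fraction of genuine scores below threshold t.
    FMR(t):  fraction of impostor scores at or above threshold t.
    The EER is taken where the two curves are closest.
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(fnmr - fmr))
    return (fnmr[i] + fmr[i]) / 2.0

# Example: genuine comparisons should score high, impostor ones low.
print(estimate_eer([0.9, 0.8, 0.7, 0.4], [0.5, 0.3, 0.2, 0.1]))
```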
Nowadays, biometric recognition systems are employed in many application fields for access control in electronic commerce, banking, tourism, migration, offices, hospitals, the cloud, etc. The advantage of biometrics over passwords or identity cards is that biometric data do not need to be remembered and the risk of theft is reduced because biological measurements are intrinsic to individuals. Biometric recognition systems became very popular after they were introduced in everyday devices (laptops, tablets, and smartphones) to protect personal data and carry out secure payments. Biometric recognition first appeared in mobile phones in 2004 (almost twenty years ago) with the Pantech GI100. In 2007, fingerprint recognition was included in the Toshiba G500, consolidating the use of fingerprint sensors in mobile phones. The acceptance of fingerprint recognition in the smartphone market started in 2013 with the inclusion of the Touch ID system in the iPhone 5S. In response, Samsung included fingerprint recognition in the Android-based Galaxy S5 in 2014. In recent years, the number of smartphones with fingerprint recognition has increased exponentially. Meanwhile, facial recognition technologies have gained importance over the years. In 2012, Google registered a patent for facial smartphone unlocking and integrated it into Android 4.0. Similar to the Touch ID system, Apple included the Face ID system in the iPhone X in 2017. The acquisition in Face ID (as in other facial recognition systems) employs an infrared dot projector module in addition to the camera.
Biometric recognition based on vascular traits is a relatively recent technique that has not yet been sufficiently studied and exploited [2]. Vascular patterns are formed by the structure of blood vessels within the body. The typical acquisition optically senses the absorption of near-infrared (NIR) light by the hemoglobin in the bloodstream. Under illumination from NIR LEDs, vessels appear as dark structures while the surrounding tissues appear bright because vessels absorb much more light in that spectrum. Although either arteries or veins can be employed as biometric traits, veins are easier to detect and their images are clearer: there are more veins than arteries in the human body, and veins are closer to the skin surface. The body parts usually considered for biometric recognition based on veins are the fingers, hands, and wrists [2].
There are several properties that should be considered for a biometric trait [1]: (1) universality (whether all individuals possess the biometric trait); (2) distinctiveness or uniqueness (whether the biometric trait is sufficiently different across individuals); (3) permanence (whether the trait does not vary over time); (4) performance (whether the recognition is accurate and fast); (5) circumvention (whether the trait can be imitated using artifacts); (6) collectability or measurability (whether the acquisition is easy to carry out); and (7) acceptability (whether individuals accept the acquisition of the trait and the use of the recognition system).
These properties vary with the biometric trait. In the case of fingerprints and faces, universality, distinctiveness, performance, and acceptability are high. Permanence is medium because fingerprints (through use of the fingers) and faces (through aging) change over time. Regarding collectability, contact is usually required for fingerprints, which means that skin oiliness, dust, or creams may reduce the quality of the acquired images. In addition, specific sensors are required. For these reasons, the collectability of fingerprints is medium. Collectability for faces is also medium: non-specific cameras can be employed, but significant changes in appearance, such as growing a full beard or wearing a mask, affect facial recognition. Circumvention is medium in both cases since fingerprints and faces can be spoofed using silicone fingers (for fingerprints) and photos, videos, or 3D face masks (for faces).
In the case of veins, universality, distinctiveness, permanence, performance, and protection against circumvention are high: everyone has veins, vein patterns differ among individuals, veins are usually invariant throughout life, the error rate can be low and the recognition can run in real time, and, finally, veins are inside the body and difficult to spoof using artifacts. However, collectability and acceptability are medium. Although acquisition techniques are contactless, environmental factors, such as light, temperature, or humidity, and biological factors, such as the thickness of the layers of human tissue and diseases such as anemia, hyperthermia, or hypotension, affect vein recognition.
Furthermore, specific and large-sized acquisition devices, not suitable for ordinary smartphones, are typically employed to acquire veins [3,4,5,6]. Only a few works focus on vein acquisition with portable devices and smartphones [7,8,9,10,11,12]. However, a specific device is designed in [8,9]; additional illumination is employed together with a smartphone modified to remove the NIR blocking filter in [10]; smartphones with an infrared camera are employed in [7]; and only very preliminary works are presented in [11,12].
This work aims to evaluate acquisition methods and vein recognition algorithms using ordinary smartphone cameras, i.e., without modifications or additional modules. The sophistication of ordinary smartphone cameras improves the quality of the captures. In addition, there are smartphone applications (apps) that acquire vein images in the visible domain and apply filtering techniques to improve the contrast of superficial veins, simulating acquisition in the infrared spectrum. These apps are widely used to locate veins for medical purposes, such as blood extraction [13].
Our contribution in this work is to develop and implement a vein recognition system using the Xiaomi Redmi Note 8 (a widely sold smartphone with an ordinary camera) and to collect and evaluate a database of vein images acquired with the IRVeinViewer app (a free Android app for viewing veins). In order to evaluate the influence of environmental factors, different times of day under different conditions (outdoor during the day, indoor with natural light, indoor with natural light and a dark homogeneous background, indoor with artificial light, and darkness) were considered. In order to evaluate biological factors, people of different ages and genders were considered.
The state-of-the-art biometric recognition algorithms and quality assessment algorithms included in [7] and in the PLUS OpenVein toolkit [14] were analyzed as a first step to select the algorithms of our implementation. Among the recognition algorithms, the most widespread are based on the extraction of binary information (e.g., the maximum curvature [15]) and the extraction of local texture descriptors (e.g., SIFT [16]). Regarding quality assessment algorithms, the techniques employed in the literature are based on the global contrast factor (GCF) [17], a measurement of brightness uniformity and clarity (Wang17) [18], and a signal-to-noise ratio based on the human visual system (HSNR) [19].
The paper is structured as follows. Section 2 reviews the related work. The details of how the database was created with the IRVeinViewer app are given in Section 3. Details of the implemented system and its evaluation with the created database, in terms of recognition performance and quality control of the acquired images, are given in Section 4. Finally, conclusions and future work are presented in Section 5.

2. Related Work

A relevant step in vein recognition is the acquisition of samples. High-quality images reduce the complexity of image preprocessing and increase recognition accuracy. The acquisition process relies primarily on two properties [1]: (1) inoffensive near-infrared (NIR) light can penetrate the skin at radiation wavelengths longer than visible light (around 750–1000 nm), and (2) the hemoglobin in the bloodstream absorbs this radiation. Hemoglobin can be classified as oxygenated or deoxygenated, depending on whether it carries oxygen. Deoxygenated hemoglobin shows a maximum absorption at a wavelength of 760 nm, while oxygenated hemoglobin shows a higher absorption at 800 nm [20].
Vein acquisition devices are usually composed of a NIR illumination source and a camera. Based on their positions, two different acquisition methods are used: reflection and refraction (or transmission), optical phenomena that occur when a ray of light propagating through a medium encounters a surface separating it from another medium. In reflection, part of the incident light changes direction and continues to propagate in the original medium; images are thus obtained using the light reflected by the tissues and veins. In this case, the camera and the NIR light source are on the same side and the biometric sample is in front of them. In refraction, part of the light is transmitted into the other medium. In the case of transmission, the refracted light is captured by a camera placed in front of the NIR light source, with the biometric sample between the light source and the camera.
In the refraction method, the light has to pass through the human tissue before the camera captures it. Therefore, the infrared light must have a higher intensity. Moreover, due to the locations of the infrared light source and the camera, the acquisition device is bulkier. In contrast, the reflection method allows a more compact design with lower power consumption. In this work, we focus on reflection-based acquisition because it is more suitable for smartphones. In the literature, there are several proposals based on the reflection method to acquire veins in the infrared spectrum from palms, dorsal hands, fingers, and wrists. These proposals are summarized in the following.
For palm veins, the acquisition prototype described in [8] was used to acquire the VERA PalmVein database. The prototype contains a CCD camera and twenty 940-nm NIR LEDs placed on the edges of a printed circuit board (PCB). In addition, it includes a 920-nm high-pass filter to reduce ambient light and an ultrasound distance sensor to turn off the acquisition when the hand is not placed at an adequate distance. A graphical user interface (GUI), which allows for checking whether the palm is centered in the image and the infrared light is adequate, guides the acquisition. The samples were acquired inside a building. The database was evaluated in [9] considering the region of interest (ROI) of the images. The EER obtained was 9.86% for images without enhancement and 6.25% for images with enhancement.
The acquisition prototype described in [10] was used for dorsal hands. It employed the camera of a Nexus 5 smartphone modified to remove the infrared blocking filter. As the infrared lighting source, a structure was created with sixteen 850-nm NIR LEDs placed in a circle with the smartphone camera in its center. In addition, a 780-nm high-pass filter was used to reduce the influence of ambient light. Each LED was controlled individually from an app developed for the acquisition that interacted with an Arduino-based control board. The acquisition was carried out inside a car and inside a building. The extraction of the ROI was carried out manually and the images were enhanced. The recognition performance evaluation showed that the best EER value was 4.13% and the worst EER was 24.30%.
For finger veins, the acquisition prototype described in [8] was composed of twelve 850-nm NIR LEDs arranged in three groups of four LEDs on a PCB to provide adequate illumination. In addition, the power of each group of LEDs was adjusted by means of a software program. In this way, very homogeneous lighting can be provided, avoiding saturation at the central point or insufficient lighting at the edges. Since the veins are deeper in the fingers, optimal illumination is needed. A CMOS camera was employed to acquire images from a minimum distance of 10 cm. The prototype also included a 740-nm high-pass filter to reduce the effect of ambient light. The ROI was extracted, and the images were enhanced. Recognition performance results have not been reported yet.
For wrist veins, the UC3M-CV2 database [7] was acquired using the infrared cameras and NIR illumination of two Xiaomi smartphones. The acquisition considered several environmental conditions (outdoor/indoor and external artificial or natural light without direct sunlight at different times of day). A software program to guide the wrist position was developed as part of the acquisition protocol, and the images were enhanced. From the evaluation of the recognition performance, the resulting EER values ranged from 6.82% to 18.72%.
The acquisition of veins in the visible spectrum was preliminarily explored in [11,12], using images captured with ordinary smartphone cameras. In [11], veins are extracted and processed in the RGB domain, and the acquisition performance is evaluated for medical applications (e.g., vein puncture guidance). In [12], the identification capability of three kinds of neural networks processing information obtained from the ROI is evaluated. The highest identification success rates range from 94.11% to 96.07% across the three structures.

3. Creation of a Database with a Viewer App for Vein Images

In this work, a Xiaomi Redmi Note 8 smartphone and the IRVeinViewer app were used to acquire the images. The smartphone has an ordinary quad camera with 48 megapixels. Like many other smartphone cameras, it has a NIR blocking filter to improve the quality of the acquired images. Hence, acquiring veins based on the absorption of IR light is not possible. The IRVeinViewer app is a free Android application that can be used as a vein scanner. Currently, there are other similar apps, such as VeinSeek, VeinCamera, or VeinScanner. These apps transform images captured in the visible spectrum into NIR-like images by enhancing the contrast and visualization of the green and blue channels. The limitation is that only the most superficial veins can be detected. These apps are commonly used for medical applications, not for biometric recognition.
Once the IRVeinViewer app is started, the user points the rear camera at the body area. The application adjusts the contrast and shows the simulated IR images in real time on the smartphone screen. Figure 1 shows acquisition examples. Since we noticed that deep veins (such as palm and finger veins) were not well acquired with the app, we focused on veins from the dorsal hand and wrist.
Besides the acquisition device, some other factors must be considered: biological and environmental factors and the sample position. Regarding biological factors, the thickness of the subcutaneous layers can hinder the correct acquisition of the veins, and diseases (such as anemia, hyperthermia, or hypotension) can influence the way blood flows through the veins. Regarding environmental factors, sunlight contains part of the infrared spectrum, which shifts the images from blue to violet, and the recognition performance can be affected in this situation. Changes in temperature and humidity can also affect how blood flows through the veins.
In order to evaluate environmental factors, different times of day under five conditions were considered:
-
Outdoor: open areas with non-direct sunlight.
-
Indoor natural light 1: closed areas (inside a building) with natural light.
-
Indoor natural light 2: closed areas with natural light and a dark homogeneous background to capture the sample.
-
Indoor artificial light: closed areas (inside a building) with artificial light.
-
Indoor darkness: closed areas in absence of light.
Note that the smartphone flashlight was always on, since it was not possible to turn it off. This could affect the acquisition under certain light conditions.
Veins from the wrist and dorsal hand (2 body areas) were acquired from 10 volunteers for both left and right hands (2 sides). For each condition, 5 samples were taken. The complete database is composed of 1000 vein images (10 individuals × 2 body areas × 2 sides × 5 conditions × 5 samples). To assess biological factors, the ages of the volunteers ranged from 18 to 75, with an average of 36.4 years. Both genders were considered, with 8 women and 2 men.
The acquisition followed a protocol with the following steps:
  • Inform the volunteer about the purpose and procedure of the acquisition.
  • Deliver an informed consent document.
  • Collect information about age and gender.
  • Capture 5 biometric samples considering body side and area (left wrist, left dorsal hand, right wrist, and right dorsal hand) and condition (outdoor, indoor natural light 1, indoor natural light 2, indoor artificial light, and darkness).
Samples were acquired by the individuals themselves, who held the smartphone and interacted with the app. Since the IRVeinViewer app was not developed to take pictures, the samples were collected by means of screenshots. In general, individuals tried to acquire samples containing enough information about the veins.
Since the acquisition of the images was non-guided, an essential step is the extraction of the region of interest (ROI), the recognition area with the most discriminative information. In this work, manual extraction was performed. However, a guided acquisition system (similar to that proposed in [7] for vein images acquired from wrists) could be applied. Figure 2 shows examples of vein images acquired from a wrist and a dorsal hand under each condition. ROI extraction was applied to the acquisitions under the outdoor, indoor natural light 1, indoor artificial light, and darkness conditions. For the indoor natural light 2 condition, since the vein images were acquired with a dark homogeneous background, ROI extraction was not applied and only the foreground was extracted.
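Although ROI extraction in this work was manual, the foreground extraction used for the indoor natural light 2 condition can be illustrated with a simple sketch (ours, not the paper's actual procedure; the file name is hypothetical): with a dark homogeneous background, a global Otsu threshold followed by keeping the largest connected component isolates the hand or wrist.

```python
import cv2
import numpy as np

# Illustrative only: separate the bright hand/wrist foreground from a
# dark homogeneous background via Otsu thresholding.
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep the largest connected component (label 0 is the background).
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
foreground = np.where(labels == largest, img, 0).astype(np.uint8)
```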

4. Implementation and Evaluation of the Recognition System

A recognition system was developed, evaluated with the created database, and implemented as an Android application. The first step in the design was to select adequate preprocessing, feature extraction, matching, and postprocessing algorithms for the recognition system. The second step was to select quality assessment algorithms to avoid the acquisition of bad images.

4.1. Recognition Algorithms

Feature extraction algorithms for veins can be classified into four groups according to the information extracted: minutiae of the vein patterns, features based on convolutional neural networks (CNNs), binary structure of the veins, and local texture descriptors [2]. The extraction of minutiae is not commonly employed for vein images because the number of minutiae located is small. The results obtained with CNNs are promising. However, the application of CNNs to vein recognition is very recent and is not yet well-established. The most extended feature extraction techniques belong to the last two groups. Therefore, we focused on evaluating MC (maximum curvature) [15] as an algorithm for the extraction of the binary structure of the veins and SIFT (scale-invariant feature transform) [16] as an algorithm for the extraction of local texture descriptors.
MC aims to emphasize the center lines of veins with different widths and levels of brightness. Cross-sectional profiles are computed from the first and second derivatives in four directions (horizontal, vertical, and the two diagonals). Concave profiles indicate vein lines, and the local maximum within each concave area indicates the center position of a vein. Each center position receives a score based on the width and curvature of the concave area. Once the scores are calculated, a filter is applied to connect the vein centers and remove noise. To binarize the image, median thresholding is applied.
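A minimal sketch of the core curvature computation behind MC (ours; the full pipeline in the OpenVein Toolkit adds the four directions, width-based scoring, the connecting filter, and median thresholding):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def profile_curvature(profile, sigma=15.0):
    """Curvature of a 1-D cross-sectional profile:
    kappa = f'' / (1 + f'^2)^(3/2).
    Veins appear as dark valleys, so concave regions (kappa > 0 at a
    valley) mark candidate vein center lines; sigma plays the role of
    the kernel size parameter discussed below."""
    p = profile.astype(float)
    d1 = gaussian_filter1d(p, sigma, order=1)  # smoothed first derivative
    d2 = gaussian_filter1d(p, sigma, order=2)  # smoothed second derivative
    return d2 / (1.0 + d1 ** 2) ** 1.5
```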
SIFT searches for local maximum points through the space and frequency domains of the vein images, which reduces the probability of selecting noise points. A keypoint is selected by examining its neighborhood and applying a Taylor series expansion. Each detected keypoint is described by its horizontal and vertical location, scale, and orientation.
We employed the Matlab implementation of MC-based recognition included in the OpenVein Toolkit [14]. The steps for feature extraction based on MC are: (1) image rescaling, (2) ROI (or foreground) extraction, and (3) MC extraction. MC extraction requires a parameter named sigma, which defines the convolution kernel size for computing the derivatives. A sigma value of 15 was employed, according to the vein images of the acquired database. The matching of the binary features given by MC was performed with the extended version of the approach proposed in [21], computing the correlation between the reference and input features by shifting the reference features horizontally and vertically, as well as rotating them. The parameters employed were: cw = 50 and ch = 30, the number of pixels to shift in the vertical and horizontal directions, respectively; cr = 45, the rotation angle to search (from −45 to 45 degrees); and crs = 0.5, the rotation step within the search angle (−45, −44.5, −44, and so on up to 45). Figure 3 shows MC features extracted from two samples of the same individual, from the wrist and from the dorsal hand.
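As an illustration of the shift-based matching idea, the following simplified sketch (ours; it uses a Dice-style overlap instead of the toolkit's exact correlation score and omits the rotation sweep) slides the reference binary feature map over the input one:

```python
import numpy as np

def shift_match_score(ref, inp, max_shift_y=30, max_shift_x=50, step=5):
    """Slide the reference binary feature map over the input map and
    return the best normalised overlap. The full matcher also rotates
    the reference from -45 to +45 degrees in 0.5-degree steps; coarse
    shift steps keep this sketch fast. ref, inp: 2-D {0,1} arrays of
    the same shape."""
    h, w = ref.shape
    best = 0.0
    for dy in range(-max_shift_y, max_shift_y + 1, step):
        for dx in range(-max_shift_x, max_shift_x + 1, step):
            # Overlapping window of ref shifted by (dy, dx) over inp.
            a = ref[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            b = inp[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            denom = a.sum() + b.sum()
            if denom:
                best = max(best, 2.0 * np.logical_and(a, b).sum() / denom)
    return best  # Dice-style similarity in [0, 1]
```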
We employed the Python and OpenCV implementation of the SIFT-based recognition method available in [7]. The steps for feature extraction based on SIFT are: (1) image rescaling, (2) ROI (or foreground) extraction, (3) image enhancement, and (4) SIFT extraction. In [7], image enhancement is performed by first applying CLAHE (contrast-limited adaptive histogram equalization) twice and subsequently Gaussian, median, and average filters. CLAHE improves the local contrast and enhances the definition of the veins. However, since CLAHE generates high-frequency noise, the Gaussian, median, and average filters are applied to smooth the result and remove most of the generated noise. The kernel size considered for these filters was 11 × 11. As illustrated in the following subsection, we obtained somewhat better results by applying CLAHE twice and then an 11 × 11-kernel Gaussian filter. The number of keypoints detected in the images was limited to 100, retrieving just the strongest keypoints, as in [7]. The detected keypoints were compared using the Fast Library for Approximate Nearest Neighbors (FLANN) [22]. For each keypoint of the input image, the Euclidean distance to its nearest neighbors in the reference image is calculated. If these distances pass Lowe's ratio test, i.e., the distance to the nearest neighbor is smaller than the distance to the second nearest neighbor multiplied by a ratio, the keypoint is considered a match. Matching scores are the normalized number of matches, i.e., the number of matched keypoints divided by 100.
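A sketch of this preprocessing and matching chain using OpenCV (ours; the CLAHE clip limit, tile grid size, and Lowe ratio of 0.7 are illustrative choices that the paper does not report):

```python
import cv2

def preprocess(gray):
    # CLAHE applied twice, then an 11x11 Gaussian filter to smooth the
    # high-frequency noise that CLAHE introduces.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(clahe.apply(gray))
    return cv2.GaussianBlur(enhanced, (11, 11), 0)

def sift_match_score(img_ref, img_in, n_keypoints=100, ratio=0.7):
    # Detect up to 100 keypoints (the strongest ones), as in [7].
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    kp1, des1 = sift.detectAndCompute(preprocess(img_ref), None)
    kp2, des2 = sift.detectAndCompute(preprocess(img_in), None)
    # FLANN with a KD-tree index; Lowe's ratio test keeps a match only
    # when the nearest neighbour is clearly closer than the second one.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    good = []
    for pair in flann.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    # Score: number of matched keypoints normalised by the keypoint budget.
    return len(good) / float(n_keypoints), kp1, kp2, good
```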
To avoid false matching pairs, we implemented a procedure to verify that the segments connecting matching pairs feature similar lengths and orientations, since possible deformations in the acquisition process are small [23]. The standard deviations of the lengths and orientation angles were calculated over all the matching pairs, and the pairs with deviations greater than a threshold were removed. Figure 4 shows the importance of applying this postprocessing.
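A possible implementation of this postprocessing (ours; the deviation threshold k is illustrative, as the paper does not state its exact value):

```python
import numpy as np

def remove_false_pairs(kp_ref, kp_in, matches, k=1.0):
    """Geometric consistency filter (sketch): segments joining genuine
    matching pairs should have similar lengths and orientations, since
    deformation between two acquisitions is small. Pairs deviating from
    the mean by more than k standard deviations are discarded."""
    if not matches:
        return matches
    p = np.array([kp_ref[m.queryIdx].pt for m in matches])
    q = np.array([kp_in[m.trainIdx].pt for m in matches])
    d = q - p
    lengths = np.hypot(d[:, 0], d[:, 1])
    angles = np.arctan2(d[:, 1], d[:, 0])
    keep = (np.abs(lengths - lengths.mean()) <= k * lengths.std()) & \
           (np.abs(angles - angles.mean()) <= k * angles.std())
    return [m for m, ok in zip(matches, keep) if ok]
```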
As illustrated in Figure 5 for the case of wrists and SIFT, the complete biometric recognition process includes acquisition, preprocessing, feature extraction, matching, and postprocessing stages.

4.2. Quality Assessment Algorithms

Since there is no standardized proposal for quality estimation of vein images, we evaluated three methods employed in the literature and provided by the OpenVein Toolkit [14]: GCF (global contrast factor), Wang17, and HSNR (signal-to-noise ratio based on the human visual system). GCF [17] computes local contrasts at various spatial frequencies and then combines them into a global contrast. It measures the overall perceived contrast of an image, which indicates the richness of the observed details. Wang17 [18] combines brightness uniformity (computed as the mean gray-level difference between different parts of the vein image) and clarity (computed by blurring the image with a low-pass filter and comparing the variations between neighboring pixels before and after blurring). HSNR [19] simulates the cognitive function of the human visual system by combining contrast, shifting, effective area, and signal-to-noise indexes. The contrast index is based on the mean square deviation of the gray values of the pixels. The shifting index computes the horizontal and vertical deviation of the foreground's center of mass with respect to the geometric center of the whole image. The effective area index measures the foreground information with respect to the complete image information. The signal-to-noise index is computed with the contrast sensitivity weight matrix and the impulse noise distribution matrix of the vein image. The values of these quality metrics range from 0 to 8 for GCF, from 0 to 1 for Wang17, and from 0 to 100 for HSNR. Higher values mean better image quality.
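As an example of how such a metric is built, here is a simplified sketch of the GCF idea (ours; the original [17] uses a specific perceptual luminance mapping and resolution-dependent weights, which are replaced here by a uniform average):

```python
import cv2
import numpy as np

def gcf_like(gray, levels=6):
    """Average local luminance differences at several resolutions and
    combine them into one contrast value (uniform weights; a loose
    simplification of the global contrast factor)."""
    # Approximate perceptual luminance from 8-bit gray values.
    lum = 100.0 * np.sqrt((gray.astype(float) / 255.0) ** 2.2)
    total = 0.0
    for _ in range(levels):
        # Local contrast: mean absolute difference between neighbours.
        dh = np.abs(np.diff(lum, axis=1)).mean()
        dv = np.abs(np.diff(lum, axis=0)).mean()
        total += (dh + dv) / 2.0
        # Halve the resolution for the next (coarser) level.
        lum = cv2.resize(lum, (max(2, lum.shape[1] // 2),
                               max(2, lum.shape[0] // 2)),
                         interpolation=cv2.INTER_AREA)
    return total / levels
```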

4.3. Implementation Decisions and Performance Results

The images acquired with the smartphone camera have a resolution of 1080 × 1080 pixels but were rescaled to 300 × 300 pixels to reduce the execution time of the recognition algorithms.
The recognition performance results, in terms of EER values obtained with the MC and SIFT algorithms, are shown in Table 1. Matching comparisons for each separate condition were performed following the FVC (fingerprint verification competition) protocol, i.e., genuine comparisons were performed by comparing each sample to all the remaining samples from the same individual, and impostor comparisons were performed by comparing the first sample of each individual to the first sample of all the remaining individuals. For the all-conditions case, genuine comparisons included all possible comparisons, and impostor comparisons were performed by comparing the first sample of each individual under each condition to the first sample of all the remaining individuals under all conditions. In all cases, symmetric comparisons were removed.
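A sketch of how the FVC-style comparison lists can be generated for one condition and one body part (ours; 20 "individuals" (wrists or dorsal hands) with 5 samples each, as in our database):

```python
from itertools import combinations

def fvc_pairs(n_subjects=20, n_samples=5):
    """Genuine pairs: all sample pairs within a subject (symmetric
    duplicates removed). Impostor pairs: first sample of each subject
    against the first sample of every other subject."""
    genuine = [((s, i), (s, j))
               for s in range(n_subjects)
               for i, j in combinations(range(n_samples), 2)]
    impostor = [((a, 0), (b, 0))
                for a, b in combinations(range(n_subjects), 2)]
    return genuine, impostor

genuine, impostor = fvc_pairs()
print(len(genuine), len(impostor))  # 20*10 = 200 genuine, C(20,2) = 190 impostor
```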
Table 2 compares our results with those of the studies included in Section 2 that also acquire vein images with smartphones and employ the MC and SIFT algorithms. It should be noted that specific smartphones with near-infrared cameras were employed for the UC3M-CV2 database [7]. In the case of [10], an add-on module for smartphones was employed to supply an external light source; furthermore, the smartphone was modified to remove the NIR blocking filter. From these results, a first conclusion is that similar recognition rates can be achieved with our proposal, even when employing standard cameras and no additional light sources.
From the results in Table 1, several conclusions can be drawn with respect to the body parts used to locate veins, the acquisition conditions, and the algorithms considered. Regarding the body part, the results with wrists were somewhat better than with dorsal hands. We noticed that the acquisition of wrist images was perhaps easier for the users. Regarding the acquisition conditions, indoor conditions with light provided the best results. The results under indoor natural light 1 and 2 were similar, except for dorsal hands with MC, where the results for indoor natural light 2 were clearly better. The reason is that, under indoor natural light 2, recognition is based not only on vein features extracted from the ROI but also on foreground features associated with the knuckles and with how the thumb is positioned when closing the hand. This is illustrated in Figure 6 for the case of MC.
Regarding the algorithms, SIFT clearly provides better recognition results than MC. The SIFT results are further improved by adjusting the enhancement filters at preprocessing and, mainly, by removing the false matching pairs at postprocessing. An explanation for the better results is that SIFT extracts more information than MC from the images: while MC deals with binary information about the existence or absence of veins, SIFT deals with richer information summarized in the selected keypoints. Interestingly, most of the keypoints matched between genuine samples are located in areas without veins, which is expected since the areas without veins are wider. This is shown in Table 3.
Since SIFT provides better recognition results, the version with the best preprocessing and postprocessing was selected for our Android implementation. For that purpose, OpenCV in native code (C/C++) was used. The computation times obtained are shown in Table 4. Feature extraction is the slowest stage (127.62 ms), while the preprocessing (10.35 ms), matching (4.16 ms), and postprocessing (3.37 ms) are much faster. Overall, the implementation is suitable for real-time recognition. Compared to other results from the literature, the implementation in this work provides lower computation times (despite including an additional postprocessing stage).
Concerning the selection of quality assessment algorithms, Table 5 shows the results obtained with our database together with results reported for databases in the literature (the first ten rows correspond to our database). Overall, these results suggest that these algorithms rate the quality of the vein images acquired in this work as comparable to that of other vein images from databases available in the literature. The problem is that none of these algorithms allowed us to automatically eliminate the blurred images with unclearly detected veins that cause bad recognition rates. The reason is that the quality metrics evaluated by these algorithms are independent of the content and structure of the veins.
To improve the quality of the acquisition process, we find it more efficient to acquire several images in succession at both enrollment and matching and to evaluate their similarity directly with the selected SIFT algorithm, prior to storing their features as templates or comparing them with templates (which is practical since the computation time allows real-time operation). Since the similarity score for a zero false match rate (FMR0) can be known a priori, only those images whose similarity scores with the other acquired images are greater than FMR0 are finally considered for both enrollment and matching. This is transparent to the users, who can easily acquire several images in succession in real time. The recognition performance after applying this quality control and the percentage of removed samples are shown in Table 6. The recognition performance improves considerably in most cases, as expected, with a lower rejection percentage in the case of wrists, which confirms the above-mentioned idea that users may find it easier to acquire wrist images with our implementation. Based on the recognition accuracy and the removed samples, it is also confirmed that the implemented system works better under indoor conditions with light.
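A sketch of this genuine score-based quality control (ours; `capture` and `similarity` stand in for the app's camera grab and the SIFT matcher, and the number of acquisitions is illustrative):

```python
def quality_controlled_acquire(capture, similarity, fmr0_threshold, n=3):
    """Acquire n images in succession and keep only those whose
    similarity with every other acquisition exceeds the FMR0 threshold
    (the score above which no impostor comparison falls)."""
    images = [capture() for _ in range(n)]
    accepted = []
    for i, img in enumerate(images):
        scores = [similarity(img, other)
                  for j, other in enumerate(images) if j != i]
        if min(scores) > fmr0_threshold:
            accepted.append(img)
    # Accepted images feed enrollment templates or serve as matching probes.
    return accepted
```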

5. Discussion

In the context of biometric authentication systems for access control, the use of an app for ordinary smartphones is very interesting for increasing the usability (collectability) and acceptability of the system. While facial and fingerprint recognition have been widely employed in smartphones, vein-based traits have not yet been sufficiently studied and exploited. Vein acquisition in vascular biometric systems is usually performed with custom-designed devices featuring near-infrared light sources. Some of these devices may be expensive and cannot be used in some contexts because of their portability and size. In this work, we have explored the feasibility of a free app, IRVeinViewer, that detects superficial veins in images captured in the visible spectrum by an ordinary smartphone camera.
Since deep veins are not well acquired, we designed and evaluated our system with a database composed of 1000 images of dorsal hands and wrists, which have more superficial veins and are easier to acquire with a smartphone. Among the feature extraction algorithms reported for vein-based recognition, we employed maximum curvature (MC) and SIFT (scale-invariant feature transform) and evaluated the authentication performance in terms of the equal error rate (EER). The results show that the SIFT algorithm provides better EER values than MC and that wrist images allow better authentication than images of dorsal hands. Using SIFT on wrist images, the EER values range from 0.63% to 16.80%. The system implemented in Android operates in real time, with feature extraction being the slowest stage (127.62 ms).
The best and worst EER values with our database are similar to those obtained with databases whose images were acquired with specific devices. Hence, it is worth further improving the cheaper solution explored herein.
The influence of environmental factors on the authentication performance was evaluated considering acquisitions performed outdoors and indoors (with natural light, artificial light, or no light). The results show that indoor acquisitions are somewhat better than outdoor acquisitions. However, our conclusion is that how the user acquires the images with the app from one time to another has a greater influence, i.e., if the user carries out matching in the same environment as enrollment, and shortly after enrollment, the authentication is better. Therefore, one way of improving this work is to develop a smart app that guides the user during acquisition. We will consider this as future work.
The methods employed in the literature for image quality estimation did not detect images with unclearly acquired veins. Hence, it is worth exploring other methods, such as the genuine score-based evaluation considered in this work. This is something to explore further in a smart app.
Since vein-based biometrics offers high resistance to circumvention but its EER values are not expected to be low enough on ordinary smartphones, it is a good choice for multimodal biometric systems. Finally, another future line of research is the application of convolutional neural networks (CNNs) to vein recognition on ordinary smartphones, since the preliminary results in the literature are promising.

Author Contributions

Conceptualization, all authors; methodology, R.A. and I.B.; software, P.L.-G. and M.H.; validation, R.A. and I.B.; formal analysis, R.A. and I.B.; investigation, all authors; resources, P.L.-G. and M.H.; data curation, all authors; writing—original draft preparation, P.L.-G. and R.A.; writing—review and editing, R.A. and I.B.; visualization, P.L.-G.; supervision, R.A. and I.B.; project administration, I.B. and R.A.; funding acquisition, I.B. and R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was conducted thanks to Grant PDC2021-121589-I00 funded by MCIN/AEI/10.13039/501100011033 and the “European Union NextGenerationEU/PRTR”, Grant PID2020-119397RB-I00 funded by MCIN/AEI/ 10.13039/501100011033, and Grant US-1265146 funded by “Fondo Europeo de Desarrollo Regional (FEDER)” and “Consejería de Transformación Económica, Industria, Conocimiento y Universidades de la Junta de Andalucía, dentro del Programa Operativo FEDER 2014-2020”.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Jain, A.K.; Flynn, P.; Ross, A.A. Handbook of Biometrics; Springer: Boston, MA, USA, 2008.
  2. Uhl, A.; Busch, C.; Marcel, S.; Veldhuis, R. Handbook of Vascular Biometrics; Springer: Cham, Switzerland, 2020.
  3. Kauba, C.; Prommegger, B.; Uhl, A. Combined Fully Contactless Finger and Hand Vein Capturing Device with a Corresponding Dataset. Sensors 2019, 22, 14.
  4. Bosphorus Hand Database. Available online: http://bosphorus.ee.boun.edu.tr/hand/Home.aspx (accessed on 26 March 2022).
  5. Faundez-Zanuy, M.; Mekyska, J.; Font-Aragonès, X. A New Hand Image Database Simultaneously Acquired in Visible, Near-Infrared and Thermal Spectrums. Cogn. Comput. 2014, 6, 230–240.
  6. Kauba, C.; Uhl, A. Shedding Light on the Veins—Reflected Light or Transillumination in Hand-Vein Recognition. In Proceedings of the IEEE International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018.
  7. Garcia-Martin, R.; Sanchez-Reillo, R. Vein Biometric Recognition on a Smartphone. IEEE Access 2020, 8, 104801–104813.
  8. Sierro, A.; Ferrez, P.; Roduit, P. Contact-Less Palm/Finger Vein Biometrics. In Proceedings of the IEEE International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 9–11 September 2015.
  9. Tome, P.; Marcel, S. On the Vulnerability of Palm Vein Recognition to Spoofing Attacks. In Proceedings of the IEEE International Conference on Biometrics (ICB), Phuket, Thailand, 19–22 May 2015.
  10. Debiasi, L.; Kauba, C.; Prommegger, B.; Uhl, A. Near-Infrared Illumination Add-On for Mobile Hand-Vein Acquisition. In Proceedings of the IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), Redondo Beach, CA, USA, 22–25 October 2018.
  11. Song, J.H.; Kim, C.; Yoo, Y. Vein Visualization Using a Smart Phone with Multispectral Wiener Estimation for Point-of-Care Applications. IEEE J. Biomed. Health Inform. 2015, 19, 773–778.
  12. Kurban, O.C.; Nıyaz, Ö.; Yildirim, T. Neural Network Based Wrist Vein Identification Using Ordinary Camera. In Proceedings of the International Symposium on INnovations in Intelligent SysTems and Applications (INISTA), Sinaia, Romania, 2–5 August 2016.
  13. Ambati, L.S.; El-Gayar, O.; Nawar, N. Design Principles for Multiple Sclerosis Mobile Self-Management Applications: A Patient-Centric Perspective. In Proceedings of the Americas Conference on Information Systems (AMCIS), Online, 9–13 August 2021.
  14. Kauba, C.; Uhl, A. An Available Open-Source Vein Recognition Framework. Available online: https://www.wavelab.at/sources/OpenVein-Toolkit/ (accessed on 26 March 2022).
  15. Miura, N.; Nagasaka, A.; Miyatake, T. Extraction of Finger-Vein Patterns Using Maximum Curvature Points in Image Profiles. IEICE Trans. Inf. Syst. 2007, 90, 1185–1194.
  16. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  17. Matković, K.; Neumann, L.; Neumann, A.; Psik, T.; Purgathofer, W. Global Contrast Factor—A New Approach to Image Contrast. In Proceedings of the First Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Girona, Spain, 18–20 May 2005.
  18. Wang, C.; Zeng, X.; Sun, X.; Dong, W.; Zhu, Z. Quality Assessment on Near Infrared Palm Vein Image. In Proceedings of the IEEE 32nd Youth Academic Annual Conference of Chinese Association of Automation (YAC), Hefei, China, 19–21 May 2017.
  19. Ma, H.; Cui, F.P.; Oluwatoyin, P. A Non-Contact Finger Vein Image Quality Assessment Method. Appl. Mech. Mater. 2012, 239–240, 986–989.
  20. Swarbrick, J.; Boylan, J.C. Encyclopedia of Pharmaceutical Technology; CRC Press: Boca Raton, FL, USA, 1999; Volume 19.
  21. Miura, N.; Nagasaka, A.; Miyatake, T. Feature Extraction of Finger-Vein Patterns Based on Repeated Line Tracking and Its Application to Personal Identification. Mach. Vis. Appl. 2004, 15, 194–203.
  22. Muja, M.; Lowe, D.G. Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), Lisboa, Portugal, 5–8 February 2009.
  23. Zhang, G.; Meng, X. High Security Finger Vein Recognition Based on Robust Keypoint Correspondence Clustering. IEEE Access 2021, 9, 154058–154070.
Figure 1. Examples of acquisition with the IRVeinViewer app: (a,c) acquisition procedure; (b,d) final images obtained.
Figure 2. Samples acquired from a wrist (first row) and a dorsal hand (third row) together with their extracted ROIs, if applicable (second and fourth rows, respectively), for the conditions (from left to right): outdoor, indoor natural light 1, indoor natural light 2, indoor artificial light, and darkness.
Figure 3. Wrist (a,b) and dorsal hand (c,d) samples of the same individual, and their maximum curvature features (e,f) and (g,h), respectively.
Figure 4. SIFT matching for two samples of the same individual from: (a,b) the wrist, and (c,d) the dorsal hand. Matching pairs before (left) and after (right) removing false pairs (postprocessing).
Figure 5. (a) Biometric recognition process implemented in this work. Wrist result examples of (b) acquisition, (c) preprocessing, (d) SIFT feature extraction, and (e) postprocessing.
Figure 6. Maximum curvature features (right) extracted from a vein image acquired under the indoor natural light 2 condition (left). Features outside the ROI are indicated by red ellipses.
Table 1. EER values (%) obtained for the database acquired.

Condition               | Body Part   | MC    | SIFT 1 | SIFT 2 | SIFT 3
Outdoor                 | Wrist       | 15.00 | 5.64   | 6.00   | 5.50
Outdoor                 | Dorsal Hand | 16.26 | 6.00   | 5.00   | 3.76
Indoor natural light 1  | Wrist       | 9.80  | 2.35   | 1.31   | 0.63
Indoor natural light 1  | Dorsal Hand | 14.21 | 3.87   | 4.45   | 2.87
Indoor natural light 2  | Wrist       | 10.90 | 2.50   | 0.92   | 1.00
Indoor natural light 2  | Dorsal Hand | 9.83  | 3.99   | 1.28   | 0.50
Indoor artificial light | Wrist       | 10.18 | 3.73   | 3.39   | 2.82
Indoor artificial light | Dorsal Hand | 14.70 | 6.60   | 4.60   | 1.70
Darkness                | Wrist       | 7.87  | 5.37   | 3.08   | 1.43
Darkness                | Dorsal Hand | 11.78 | 4.62   | 5.93   | 5.80
All                     | Wrist       | 32.81 | 20.92  | 19.27  | 16.80
All                     | Dorsal Hand | 30.74 | 21.90  | 20.96  | 19.30

SIFT 1: CLAHE twice and 11 × 11-kernel Gaussian, median and average filters (preprocessing). SIFT 2: CLAHE twice and 11 × 11-kernel Gaussian filter (preprocessing). SIFT 3: CLAHE twice and 11 × 11-kernel Gaussian filter (preprocessing) and false pairs removed (postprocessing).
Table 2. Comparison to other proposals in the literature in terms of best and worst EER values.

Body Part (Proposal)     | Algorithm | Best EER (%) | Worst EER (%)
Wrist (This work)        | MC        | 7.87         | 32.81
Wrist (This work)        | SIFT 3    | 0.63         | 16.80
Dorsal Hand (This work)  | MC        | 9.83         | 30.74
Dorsal Hand (This work)  | SIFT 3    | 0.50         | 19.30
Wrist [7]                | SIFT 1    | 6.82         | 18.72
Dorsal Hand [10]         | MC        | 4.13         | 24.30
Dorsal Hand [10]         | SIFT 4    | 10.63        | 41.12

SIFT 1: CLAHE twice and 11 × 11-kernel Gaussian, median and average filters (preprocessing). SIFT 3: CLAHE twice and 11 × 11-kernel Gaussian filter (preprocessing) and false pairs removed (postprocessing). SIFT 4: CLAHE, high frequency emphasis and circular Gabor filters (preprocessing).
Table 3. Fractions of SIFT keypoints matched in genuine samples located in vein and no-vein areas.

Condition               | Body Part   | Vein (fraction) | No Vein (fraction)
Outdoor                 | Wrist       | 0.14            | 0.86
Outdoor                 | Dorsal Hand | 0.17            | 0.83
Indoor natural light 1  | Wrist       | 0.09            | 0.91
Indoor natural light 1  | Dorsal Hand | 0.13            | 0.87
Indoor natural light 2  | Wrist       | 0.13            | 0.87
Indoor natural light 2  | Dorsal Hand | 0.14            | 0.86
Indoor artificial light | Wrist       | 0.07            | 0.93
Indoor artificial light | Dorsal Hand | 0.10            | 0.90
Darkness                | Wrist       | 0.08            | 0.93
Darkness                | Dorsal Hand | 0.10            | 0.90
All                     | Wrist       | 0.15            | 0.85
All                     | Dorsal Hand | 0.17            | 0.83
Table 4. Computation times (ms) for the stages of vein recognition based on SIFT.

Stage              | [7] (Xiaomi Pocophone F1) | [7] (Xiaomi Mi 8) | This Work (Xiaomi Redmi Note 8)
Preprocessing      | 34                        | 28                | 10.35
Feature Extraction | 227                       | 259               | 127.62
Matching           | 6                         | 6                 | 4.16
Postprocessing     | -                         | -                 | 3.37
Total              | 267                       | 293               | 145.50
Table 5. Results of quality metrics.

Database                | Body Part            | GCF  | Wang17 | HSNR
Outdoor                 | Wrist                | 1.76 | 0.39   | 93.57
Outdoor                 | Dorsal Hand          | 1.93 | 0.36   | 92.53
Indoor natural light 1  | Wrist                | 2.13 | 0.39   | 90.76
Indoor natural light 1  | Dorsal Hand          | 2.25 | 0.38   | 91.17
Indoor natural light 2  | Wrist                | 1.81 | 0.38   | 90.34
Indoor natural light 2  | Dorsal Hand          | 1.85 | 0.35   | 91.39
Indoor artificial light | Wrist                | 2.31 | 0.38   | 91.72
Indoor artificial light | Dorsal Hand          | 2.12 | 0.37   | 91.73
Darkness                | Wrist                | 2.24 | 0.40   | 88.84
Darkness                | Dorsal Hand          | 2.13 | 0.37   | 88.13
HandVein 850 nm [3]     | Palm Hand            | 1.42 | 0.68   | 90.43
HandVein 950 nm [3]     | Palm Hand            | 1.87 | 0.66   | 91.76
Bosphorus [4]           | Dorsal Hand          | 2.69 | 0.33   | 86.12
Tecnocampus [5]         | Palm and Dorsal Hand | 2.31 | 0.37   | 85.09
Vera [9]                | Palm Hand            | 1.31 | 0.43   | 85.09
PROTECT [6]             | Dorsal Hand          | 2.80 | 0.56   | 82.43
Table 6. Recognition performance after removing samples by genuine score-based evaluation.

Condition               | Body Part   | EER (%) without Quality Control | EER (%) with Quality Control | Removed Samples (%)
Outdoor                 | Wrist       | 5.50                            | 1.10                         | 7
Outdoor                 | Dorsal Hand | 3.76                            | 1.15                         | 9
Indoor natural light 1  | Wrist       | 0.63                            | 0.50                         | 1
Indoor natural light 1  | Dorsal Hand | 2.87                            | 0                            | 9
Indoor natural light 2  | Wrist       | 1.00                            | 0                            | 1
Indoor natural light 2  | Dorsal Hand | 0.50                            | 0.37                         | 5
Indoor artificial light | Wrist       | 2.82                            | 0                            | 3
Indoor artificial light | Dorsal Hand | 1.70                            | 0.53                         | 4
Darkness                | Wrist       | 1.43                            | 0.78                         | 2
Darkness                | Dorsal Hand | 5.80                            | 2.87                         | 6
All                     | Wrist       | 16.80                           | 15.45                        | 2.8
All                     | Dorsal Hand | 19.30                           | 18.34                        | 6.6
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
