2.2.2. PIS-CVBR®

After the database collection (the experimental process is explained in a later section), the next step was to recognize the user: authentication/verification (1:1 user comparison) or identification (1:N user comparison). For this purpose, the Preprocessing and Identification Software for Contactless Vascular Biometric Recognition (PIS-CVBR®) is proposed in this paper. It is divided into three steps: preprocessing, feature extraction, and feature matching.

This software was also developed using Python™ 3.4.2.

## Preprocessing

The main goal of preprocessing was to enhance, normalize, and define the vein patterns in order to extract the features later on. This process is summarized in Figure 3. The infrared RGB images were captured at 640 × 480 resolution in the compressed ".jpg" format (Figure 3a) with TGS-CVBR® and the modified (RGB) camera. The first step, RGB-to-greyscale conversion (a monochromatic image with values from 0, black, to 255, white), is shown in Figure 3b.

In order to obtain a higher contrast between the veins and the rest of the living tissue, the adaptive histogram equalization technique Contrast Limited Adaptive Histogram Equalization (CLAHE) [20] was used (Figure 3c). To reduce the high-frequency noise (salt-and-pepper and Gaussian noise in this case) generated by this algorithm and the camera sensor, several low-pass software filters were applied (Figure 3d) in the following order: Gaussian filter, median filter, and averaging filter, all with an 11 × 11 kernel. This was the last step of the preprocessing task.

**Figure 3.** Preprocessing and Identification Software for Contactless Vascular Biometric Recognition (PIS-CVBR®): Preprocessing steps for User 0. Example images (above) and their histograms (below): (**a**) RGB image. (**b**) Image after greyscale conversion. (**c**) Image after greyscale conversion and Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. (**d**) Image after greyscale conversion, CLAHE algorithm, and filtered by Gaussian filter, Median filter, and Averaging (11 × 11 kernel).

Finally, it is important to remark that, in this paper and at this stage, ROI extraction was not considered necessary for this software. However, it would probably improve system performance and is a step to consider in future work.

## Feature Extraction

For the extraction of unique features from the wrist vein patterns, three scale- and orientation-invariant algorithms suitable for homography have been used and tested: Scale-Invariant Feature Transform (SIFT®) [34], Speeded Up Robust Features (SURF®) [35], and Oriented FAST and Rotated BRIEF (ORB) [36]. They were selected and used, along with the TGS-CVBR® algorithm, to cope with the variability in the size and orientation of the wrist area caused by the contactless acquisition.

The first algorithm, SIFT®, patented in 2004 [34], was based on the Harris Corner Detector, whose scale-variant features motivated the improvement. SIFT® is a well-known algorithm due to its excellent performance, but also due to its high computing time. In order to reduce this time, SURF® was patented in 2006 [35]. Finally, the ORB algorithm, a fusion of the modified FAST and BRIEF algorithms, was published in 2011 as a free-to-use and faster alternative.

In VBR, only SIFT® has previously been used on the wrist variant, in [37]. In the current study, however, these three algorithms are compared, and with a contactless dataset. After preprocessing, the feature extraction result (100 key points) for each algorithm, with scale and orientation, is shown in Figure 4a–c, respectively.


**Figure 4.** PIS-CVBR®: Feature extraction for User 0. Scale and orientation of the 100 key points extracted with the three algorithms used: (**a**) Scale-Invariant Feature Transform (SIFT®). (**b**) Speeded Up Robust Features (SURF®). (**c**) Oriented FAST and Rotated BRIEF (ORB).

## Feature Matching

For the feature or key point matching, two algorithms were used: the Brute Force Matcher (BFM) and the Fast Library for Approximate Nearest Neighbors (FLANN).

The matching between the wrist pattern image of one user (User 0) and a real-time video capture is shown, for the two matching algorithms, in Figure 5.

The BFM and FLANN algorithms provide distances between the matched features. These distances are similarity values between matched features or key points. For the BFM, the Hamming distance was selected. A higher distance means that the points are farther apart, i.e., less similar. To decide whether the matched points are suitable, Lowe's ratio test [34] was used for FLANN (SIFT® and SURF®), and a simple distance-score threshold was set for BFM (ORB). The performance of each algorithm is discussed in Section 3.2.2.

To obtain a real-time authentication and identification system, and to analyze the computational performance of the proposed software algorithms, TGS-CVBR® and PIS-CVBR® were combined. Figure 6 summarizes the authentication and identification process carried out in this work.

**Figure 5.** PIS-CVBR®: feature matching for User 0. Correct matching points for two samples of User 0 with the three feature extraction algorithms: (**a**) SIFT® (with Fast Library for Approximate Nearest Neighbors (FLANN)). (**b**) SURF® (with FLANN). (**c**) ORB (with Brute Force Matcher (BFM)).

**Figure 6.** TGS-CVBR® and PIS-CVBR® union: Real-time authentication and identification process.

For the authentication or verification task (green block in Figure 6, 1:1 user comparison), the unique user image pattern (User X, extracted from the database) was compared with the real-time video capture (samples), i.e., the features extracted from the image (Figure 7, left) were matched with the features extracted from the streaming video (Figure 7, right). Please see the supplementary video material for better comprehension.

**Figure 7.** TGS-CVBR® and PIS-CVBR® union: User 0 real-time authentication (screenshot). Unique user image pattern (**left** side) comparison with video (**right** side) using the SIFT® algorithm (7–8 FPS). In the video, the word "User" refers to the subject, and the wrist is predefined (User 0 = Subject 0 and right wrist, User 1 = Subject 0 and left wrist).

For the identification task (yellow block in Figure 6, 1:N user comparison), once the unique features had been extracted from each user (User 0 to User 100) at the initialization of the program, they were compared with the real-time video capture. Figure 8 shows identification examples for two users. It is important to note that, according to the ISO/IEC 19795-1:2019 standard [29], this software does not strictly identify, because it does not provide a rank index, R, of the number of users considered as potential candidates selected with a threshold T.

**Figure 8.** Final system software: User 0 and User 1 real-time identification using TGS-CVBR® and PIS-CVBR® (SIFT algorithm). (**a**) User 0 capture. (**b**) User 1 capture.


The computing performance for these two tasks is detailed in the results section, Section 3.

## *2.3. Dataset Collection: Experimental and Evaluation Procedure*

The database acquired in this work was named the UC3M-Contactless Version 1 (UC3M-CV1) database. It was collected with the proposed TGS-CVBR® and the hardware described previously. The two other databases detailed in Table 1, UC3M [8] and PUT [6], both acquired with physical contact, were also employed in this study in order to compare the results obtained with the proposed software algorithms, TGS-CVBR® and PIS-CVBR®, on contact and contactless datasets.
