**2. Finger Image Capture and Preprocessing**

## *2.1. Image Acquisition*

As shown in Figure 2a, we developed a homemade image acquisition device to obtain finger trimodal images. The finger imaging device is designed to capture fingerprint (FP), finger-vein (FV), and finger-knuckle-print (FKP) images automatically. It consists of a binocular camera with two optical filters, a fingerprint acquisition instrument, and an array of LEDs with a wavelength of 850 nm. In the imaging device, the FP images are obtained directly by the fingerprint instrument, which offers a fast collection speed. Given the imaging characteristics of a finger, the FV images are collected by illuminating the palm side of the finger with near-infrared (NIR) light in transmission mode [27]. For the FKP modality, images are acquired by reflection of a visible light source.

To make image acquisition more convenient, a collection groove of fixed size is designed into the imaging device to constrain the position of the finger. This largely avoids the image mismatch problem caused by rotation and translation of the finger. As shown in Figure 2b, the dimensions of our finger imaging device are 10.9 × 9.8 × 17.8 cm (length × width × height).

**Figure 2.** A finger trimodal image acquisition system. (**a**) the imaging schematic diagram; (**b**) a homemade image capture device; (**c**) a system interface of image acquisition.

As shown in Figure 2c, the acquisition program runs on Windows, and the software interface is built in C++. The top of the interface displays the finger trimodal images captured in real time. For the sake of user-friendly human-computer interaction, a window on the right side of the system reminds users of problems arising during operation.

From Figure 2c, it can be clearly seen that the originally captured finger trimodal images still exhibit some small posture variation. To address this problem during the acquisition process, we present a posture correction method for finger trimodal images.

## *2.2. Posture Correction*

Although a collection groove is designed in the acquisition device to fix the position of the finger, the finger may still undergo a small in-plane rotation. As the finger rotates in the plane, its edge lines change accordingly. Hence, the rotation angle of the finger posture can be calculated and corrected from the edge lines of the finger. Owing to the different illuminations, the finger edges are easier to detect and process in the finger-vein images than in the finger-knuckle-print images. Therefore, the finger in the finger-vein imaging space is selected to calculate the rotation angle, and the three modalities are then rotated and corrected together. The calculation of the finger posture angle is shown in Figure 3.

**Figure 3.** Computing finger posture angle. (**a**) the edge line of the finger; (**b**) the coordinate extraction of the finger edge line; (**c**) finger rotation direction extraction.

First, the captured finger image is filtered to remove noise, and the edge lines of the finger are detected. Then, the point coordinates of the two edge lines are extracted to calculate the midpoint coordinates. As shown in Figure 3b, {*L<sub>n</sub>*} (*n* = 1, 2, ..., *N*) denotes the coordinate set of the left edge line of the finger, {*R<sub>n</sub>*} denotes the coordinate set of the right edge line, and *X* and *Y* denote the row and column coordinates of the midpoints {*M<sub>n</sub>*}. The calculation is as follows:

$$\begin{cases} X_{M_n} = \left(X_{L_n} + X_{R_n}\right)/2 \\ Y_{M_n} = Y_{L_n} = Y_{R_n} \end{cases} \tag{1}$$
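As a concrete illustration, the midpoint computation of Equation (1) can be sketched in a few lines of Python; the edge coordinates below are hypothetical example values, not data from the device:

```python
# Sketch of Equation (1): midpoints of the two finger edge lines.
# Each entry is an (X, Y) pair, where X is the row and Y the column coordinate.
left_edge = [(10.0, 0), (10.5, 1), (11.0, 2)]   # (X_{L_n}, Y_{L_n}), example values
right_edge = [(40.0, 0), (40.5, 1), (41.0, 2)]  # (X_{R_n}, Y_{R_n}), example values

midpoints = []
for (xl, yl), (xr, yr) in zip(left_edge, right_edge):
    assert yl == yr                          # Y_{L_n} = Y_{R_n}: paired edge points share a column
    midpoints.append(((xl + xr) / 2.0, yl))  # X_{M_n} = (X_{L_n} + X_{R_n}) / 2

print(midpoints)  # → [(25.0, 0), (25.5, 1), (26.0, 2)]
```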

Linear fitting of the midpoints {*M<sub>n</sub>*} by the least-squares method yields the direction line *l*: *y* = *kx* + *b*, where:

$$\begin{cases} k = \dfrac{\sum\limits_{n=1}^{N} \left(x_{M_n} - \overline{x}\right)\left(y_{M_n} - \overline{y}\right)}{\sum\limits_{n=1}^{N} \left(x_{M_n} - \overline{x}\right)^{2}} \\ b = \overline{y} - k\overline{x} \end{cases} \tag{2}$$

Finally, according to *k*, the posture angle θ of the finger is calculated as follows:

$$\theta = \arctan(\frac{1}{k}) \tag{3}$$
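The least-squares fit of Equations (2) and (3) can be sketched as follows; the midpoint list is a hypothetical example, and the angle is converted to degrees only for readability:

```python
import math

# Sketch of Equations (2)-(3): least-squares fit of the midpoint direction
# line y = kx + b, then the posture angle theta = arctan(1/k).
midpoints = [(25.0, 0.0), (25.5, 1.0), (26.0, 2.0), (26.5, 3.0)]  # example (x, y) values

xs = [x for x, _ in midpoints]
ys = [y for _, y in midpoints]
x_bar = sum(xs) / len(xs)
y_bar = sum(ys) / len(ys)

# Equation (2): slope k and intercept b of the direction line
k = sum((x - x_bar) * (y - y_bar) for x, y in midpoints) / \
    sum((x - x_bar) ** 2 for x in xs)
b = y_bar - k * x_bar

# Equation (3): posture angle of the finger
theta = math.degrees(math.atan(1.0 / k))
print(round(k, 3), round(theta, 2))  # → 2.0 26.57
```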

Notably, the center of the finger direction line *l* should be selected as the center of rotation, which reduces the amplitude of the posture swing of the finger and improves the stability of the correction. Hence, taking the midpoint *M*<sub>*N*/2</sub> as the center of rotation and θ as the rotation angle, the finger in the finger-vein imaging space is rotated and corrected. Some original and corrected images of the same finger are shown in Figure 4.
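The rotation step above can be sketched per point as a standard 2D rotation about the line midpoint; the center, angle, and sample point below are hypothetical example values:

```python
import math

def rotate_point(x, y, cx, cy, theta_rad):
    """Rotate (x, y) about the center (cx, cy) by theta_rad (counter-clockwise)."""
    dx, dy = x - cx, y - cy
    xr = cx + dx * math.cos(theta_rad) - dy * math.sin(theta_rad)
    yr = cy + dx * math.sin(theta_rad) + dy * math.cos(theta_rad)
    return xr, yr

center = (25.75, 1.5)        # midpoint M_{N/2} of the direction line (example)
theta = math.radians(26.57)  # posture angle from Equation (3) (example)

# Correct the posture by rotating through -theta about the midpoint.
x, y = rotate_point(30.0, 1.5, *center, -theta)
print(round(x, 2), round(y, 2))
```

In practice the same transform would be applied to every pixel of the three modality images (e.g., via an affine warp), so that all three are corrected together with the angle computed in the finger-vein space.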

From Figure 4, we can see that the proposed posture correction algorithm effectively solves the problem of random in-plane rotation of the finger. This shows that the hardware design and the posture correction algorithm together achieve good results in improving the consistency of the finger posture.

**Figure 4.** Some corrected image samples after rotation.

However, the corrected finger images still contain some unnecessary background and uninformative regions. Hence, the captured finger images need to be further processed to localize the regions of interest (ROIs).
