*2.3. ROI Extraction*

Since the imaging principles and acquisition approaches of the FP, FV, and FKP traits differ, a suitable ROI extraction method must be adopted for each modality [2]. In this paper, we apply the core-point detection method to extract the FP ROI image [30], the convex direction coding method to extract the FKP ROI image [31], and the interphalangeal-joint prior method to extract the FV ROI image [3]. Accordingly, the FP, FV, and FKP images are cropped to 152 × 152, 200 × 91, and 200 × 90 pixels, respectively. Some finger trimodal ROI images are shown in Figure 5.

**Figure 5.** The finger trimodal region of interest (ROI) images of four fingers. (**a**) fingerprint (FP) ROI samples; (**b**) finger-vein (FV) ROI samples; (**c**) finger-knuckle-print (FKP) ROI samples.

#### **3. Finger Image Enhancement**

In recent decades, Gabor filters have been widely applied in many fields, since they not only extract texture information in multiple directions of an image but also reduce the influence of illumination to some extent [32]. Given the abundant, direction-rich texture information of the FP, FV, and FKP traits, oriented Gabor filters are used here to perform image enhancement.

A Gabor filter consists of a real part and an imaginary part. The real part is generally called an even-symmetric Gabor filter, which is well suited to ridge detection in an image [2]. Since the three modality images of a finger each have their own distinctive ridge textures, the real part of the Gabor filter can be used to extract their feature information effectively [33]. It can be expressed as

$$G(x, y, \theta_k, f_k) = \frac{\gamma}{2\pi\sigma^2} \exp\left\{-\frac{1}{2} \left(\frac{x_{\theta_k}^2 + \gamma^2 y_{\theta_k}^2}{\sigma^2}\right)\right\} \cos\left(2\pi f_k x_{\theta_k}\right) \tag{4}$$

where $x_{\theta_k} = x\cos\theta_k + y\sin\theta_k$, $y_{\theta_k} = y\cos\theta_k - x\sin\theta_k$, $\sigma$ and $\gamma$ denote the scale and the aspect ratio, $k = 1, 2, \ldots, K$ is the orientation index, $\theta_k$ denotes the orientation of the $k$-th Gabor filter, and $f_k$ is the central frequency of the sinusoidal plane wave. Assuming $R(x, y)$ is an original ROI image, each $k$-th Gabor-filtered image $I_k(x, y)$ can be obtained by

$$I_k(x, y) = R(x, y) * G(x, y, \theta_k, f_k) \tag{5}$$

where the symbol "∗" represents two-dimensional convolution.
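As an illustration of Equation (4), an even-symmetric Gabor kernel can be sketched as below; the kernel size and the parameter values ($\sigma$, $\gamma$, $f_k$) are placeholders for illustration, not the settings used in this paper:

```python
import numpy as np

def even_gabor_kernel(size, theta, freq, sigma, gamma):
    """Even-symmetric (real-part) Gabor kernel following Eq. (4).

    size  : kernel side length in pixels (odd)
    theta : orientation theta_k in radians
    freq  : central frequency f_k
    sigma : scale, gamma : aspect ratio
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotated coordinates: x_theta = x cos(theta) + y sin(theta),
    #                      y_theta = y cos(theta) - x sin(theta)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = y * np.cos(theta) - x * np.sin(theta)
    envelope = np.exp(-0.5 * (x_t**2 + gamma**2 * y_t**2) / sigma**2)
    carrier = np.cos(2.0 * np.pi * freq * x_t)
    return (gamma / (2.0 * np.pi * sigma**2)) * envelope * carrier
```

Because the cosine carrier and the Gaussian envelope are both even functions, the resulting kernel is point-symmetric about its center, which is the property that makes it suitable for ridge detection.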

First, the original ROI image is convolved with the $K$-channel Gabor filters. Then, the $K$ Gabor-filtered images are merged into a single image $I(x, y)$ using the competitive coding method proposed in [15]. Some Gabor-filtered images are shown in Figure 6. It can be clearly seen that the texture information of the finger images is effectively enhanced after multichannel Gabor filtering. On this basis, we apply coding-based theory to obtain more stable and reliable finger features.
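The multichannel filtering step can be sketched as follows. Note that the merge shown here keeps the pixel-wise maximum response over orientations, which is a simplified stand-in for the full competitive coding rule of [15]:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_enhance(image, kernels):
    """Convolve an ROI image with each of the K oriented Gabor kernels
    (Eq. (5)) and merge the K responses into one enhanced image.

    The merge keeps the maximum response per pixel -- a simplification
    of the competitive coding merge used in the paper.
    """
    responses = [convolve2d(image, k, mode="same", boundary="symm")
                 for k in kernels]
    return np.max(np.stack(responses, axis=0), axis=0)
```

The `boundary="symm"` choice mirrors the image at its borders so the enhanced ROI keeps the same size as the input.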

**Figure 6.** The enhanced images of the three finger modalities.

#### **4. Feature Extraction Based on Local Coding Algorithm**

To make full use of local position and gradient features between adjacent pixels in Gabor filtered images, a local coding algorithm based on generalized symmetric LGS is proposed for feature representation. The specific steps are as follows:

**Step 1** The finger trimodal images are enhanced by the $K$-channel even-symmetric Gabor filters of Section 3, yielding the Gabor-filtered images.

**Step 2** As shown in Figure 7, for each center pixel in a Gabor-enhanced image, we select three pixels on the left and three on the right of its $n \times n$ neighborhood (the square area in Figure 7) to constitute the GSLGS operator in the horizontal orientation. In the weight distribution, symmetric pixels on the left and right sides are assigned equal weights. More details are shown in Figure 7.

**Figure 7.** The design of the GSLGS operator (0° direction, *n* = 3).

Since the finger trimodal Gabor features have diverse directions, the information of the surrounding pixels in multiple orientations should be extracted for efficient feature representation. Centered on the target pixel, the GSLGS operator is rotated counterclockwise by $\theta_k$ (corresponding to Step 1), so that the GSLGS structure in an arbitrary orientation can be obtained. For instance, when $K = 4$, the structures of GSLGS are shown in Figure 8.

**Step 3** The coding process of the GSLGS operation is shown in Figure 9. In the left and right neighborhoods, the gray values of the pixels are compared in succession, starting from the target pixel. If the difference between the two compared pixels is non-negative, their relationship is coded as 1; otherwise, it is coded as 0. The coding calculation is expressed as

$$F(\theta_k) = \sum_{r=1}^{6} p_r(g_r - f_r)\,2^{6-r} + \sum_{l=1}^{6} q_l(g_l - f_l)\,2^{6-l}, \quad (k = 1, 2, \dots, K) \tag{6}$$

$$p_r(g_r - f_r) = \begin{cases} 1, & g_r - f_r \ge 0, \\ 0, & g_r - f_r < 0 \end{cases} \tag{7}$$

$$q_l(g_l - f_l) = \begin{cases} 1, & g_l - f_l \ge 0, \\ 0, & g_l - f_l < 0 \end{cases} \tag{8}$$

where $g_r$ and $f_r$ ($g_l$ and $f_l$), respectively, denote the values of two adjacent pixels in the right (left) neighborhood, and $F(\theta_k)$ represents the feature coded value in the $\theta_k$ orientation.
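The coding rule of Equations (6)–(8) can be sketched as below. The exact spatial layout of the six comparison pairs on each side follows Figure 7 and is not fixed here; the function simply takes the ordered $(g, f)$ pairs as input:

```python
def gslgs_code(right_pairs, left_pairs):
    """GSLGS coded value for one orientation, following Eqs. (6)-(8).

    right_pairs / left_pairs: six (g, f) gray-value pairs compared in
    succession along the right and left neighborhoods of the target
    pixel.  Each comparison g - f >= 0 contributes bit 1 (else 0),
    weighted by 2^(6-r) for the r-th comparison.
    """
    code = 0
    for r, (g, f) in enumerate(right_pairs, start=1):
        code += (1 if g - f >= 0 else 0) * 2 ** (6 - r)   # p_r term
    for l, (g, f) in enumerate(left_pairs, start=1):
        code += (1 if g - f >= 0 else 0) * 2 ** (6 - l)   # q_l term
    return code
```

Each side thus produces a 6-bit number in $[0, 63]$, and the coded value $F(\theta_k)$ is the sum of the two sides, in $[0, 126]$.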

From Figure 9, the coded values of the target pixel at 0° and 45° can be obtained according to Equations (6)–(8). The same calculation is then performed at 90° and 135°. Thus, the coded values of the central pixel in these four directions are calculated as follows:

$$F(\theta_1) = (010100)_2 + (110110)_2 = (0 \times 32 + 1 \times 16 + 0 \times 8 + 1 \times 4 + 0 \times 2 + 0 \times 1) + (1 \times 32 + 1 \times 16 + 0 \times 8 + 1 \times 4 + 1 \times 2 + 0 \times 1) = 74.$$

$$F(\theta_2) = (100100)_2 + (000100)_2 = (1 \times 32 + 0 \times 16 + 0 \times 8 + 1 \times 4 + 0 \times 2 + 0 \times 1) + (0 \times 32 + 0 \times 16 + 0 \times 8 + 1 \times 4 + 0 \times 2 + 0 \times 1) = 40.$$

$$F(\theta_3) = (100110)_2 + (010110)_2 = (1 \times 32 + 0 \times 16 + 0 \times 8 + 1 \times 4 + 1 \times 2 + 0 \times 1) + (0 \times 32 + 1 \times 16 + 0 \times 8 + 1 \times 4 + 1 \times 2 + 0 \times 1) = 60.$$

$$F(\theta_4) = (010011)_2 + (010101)_2 = (0 \times 32 + 1 \times 16 + 0 \times 8 + 0 \times 4 + 1 \times 2 + 1 \times 1) + (0 \times 32 + 1 \times 16 + 0 \times 8 + 1 \times 4 + 0 \times 2 + 1 \times 1) = 40.$$
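The binary-to-decimal sums above can be reproduced mechanically; each orientation code is the sum of its two 6-bit halves:

```python
# Each orientation code is the right-side 6-bit half plus the left-side
# 6-bit half, taken from the worked example above.
halves = {
    "theta_1": ("010100", "110110"),
    "theta_2": ("100100", "000100"),
    "theta_3": ("100110", "010110"),
    "theta_4": ("010011", "010101"),
}
# int(s, 2) converts a binary string to its decimal value.
codes = {k: int(a, 2) + int(b, 2) for k, (a, b) in halves.items()}
```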

**Figure 9.** The coding process of the GSLGS operator at 0° and 45°.

**Step 4** Inspired by the optimal response of Gabor filters over multiple orientations, we choose the maximum among these coded values as the final coded value, which can be defined as

$$F(x, y) = \max_{\theta_k \in [0^\circ, 180^\circ)} \{ F(\theta_k) \} \tag{9}$$

As mentioned above, the final coded value of each target pixel in the Gabor-enhanced image can be obtained with the GSLGS operator. For instance, the coded value in Figure 7 is $F(x, y) = \max\{F(\theta_1), F(\theta_2), F(\theta_3), F(\theta_4)\} = F(\theta_1) = 74$.
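Step 4 then reduces to keeping the largest coded value across the orientations:

```python
def final_code(orientation_codes):
    """Step 4 (Eq. (9)): the final coded value of a target pixel is the
    maximum of its GSLGS codes over all K orientations."""
    return max(orientation_codes)
```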

Considering the great capability of a Gabor filter to enhance texture features in any orientation, the GSLGS operator is extended to arbitrary orientations, giving it superior orientation selectivity. It can therefore effectively alleviate the image mismatch problem caused by finger pose variation. More importantly, the proposed local coding algorithm fully considers the relationships between each target pixel and its surrounding neighborhoods. In addition, the weights are distributed uniformly over the symmetric pixels on both sides, so the finger feature representation of local neighborhoods remains balanced in the GSLGS operator.
