**2. Materials and Methods**

### *2.1. Related Work*

Over the last thirty years, deep learning methods have advanced substantially, with a significant impact on a wide range of computer vision and pattern recognition applications. Automatic fingerprint recognition remains among the most active of these research topics, driven by the need to increase the recognition accuracy rate. Moreover, deep learning methods shift the focus away from handcrafted features, such as extracted minutiae, toward the analysis of the whole image. This section reviews the latest investigations in the field of fingerprint image recognition.

The most sensitive step in the fingerprint recognition scheme is image enhancement. Wang et al. [16] proposed an algorithm for fingerprint image quality enhancement, i.e., improving the clarity and continuity of ridges, based on the wavelet transform and a mechanism of compensation coefficients for each sub-band derived from a Gaussian template. Yang et al. [17] presented an enhancement technique operating in both the spatial and frequency domains: a spatial ridge-compensation filter enhanced the fingerprint image in the spatial domain, and a frequency bandpass filter then performed sharp attenuation in both the radial- and angular-frequency domains. Shrein et al. [13] used a convolutional neural network that classified fingerprints in the IAFIS (integrated automated fingerprint identification system) database with 95.9% accuracy, showing that careful image preprocessing, aimed at reducing the dimensionality of the feature vector, greatly decreased the training times even in networks of moderate depth. Mohamed [14] thoroughly investigated the factors that may affect fingerprint classification using CNNs; the proposed system consists of a preprocessing stage that increases fingerprint quality and a post-processing step devoted to training and classification. The images were resized from 512 × 512 to 200 × 200 pixels to reduce the training time, and a classification accuracy of 99.2% with a zero rejection rate was reported. Militello et al. [15] used pretrained convolutional neural networks (AlexNet, GoogLeNet, and ResNet) and two fingerprint databases with heterogeneous characteristics (PolyU and NIST) to classify fingerprints into four classes: arch, left loop, right loop, and whorl.
The comparative analysis allowed them to determine which classification setup offered the best performance in terms of precision and model efficiency. Borra et al. [18] reported a method based on a denoising procedure (the wave atom transform), image augmentation based on morphological operations, and an adaptive genetic neural network; the networks used feature values extracted from each image. The experiments, performed on the FVC2000 databases, yielded better performance than several neural network and machine learning approaches. Listyalina et al. [19] sought to classify raw fingerprint images; they proposed a transfer learning approach based on GoogLeNet, which replaces the usual steps of pre-processing, feature extraction, and classification, rather than training a deep CNN architecture from scratch. Using fingerprint images from the NIST-4 database, they reported accuracies of 94.7% and 96.2% for the five-class and four-class classification problems, respectively. Tertychnyi et al. [20] proposed an efficient deep neural network algorithm to recognize low-quality fingerprint images affected by physical damage, dryness, wetness, and/or blurriness. A VGG16 convolutional network was trained via transfer learning, and both image dimension reduction and data augmentation were applied to reduce the computing cost. They reported an average accuracy of 89.4%, almost the same as that of regular CNNs. Finally, Pandya et al. [21] proposed a model comprising a pre-processing stage (histogram equalization, Gabor-filter-based enhancement, and ridge thinning) and a classification stage using a CNN architecture; the algorithm achieved a 98.21% classification accuracy with a 0.9 loss on 560 samples (56 users providing 10 images each). Nur-A-Alam et al. [22] avoided overfitting by combining Gabor filtering with deep learning techniques and principal component analysis (PCA): the fusion of CNNs and Gabor filters extracted the meaningful features that can support automatic fingerprint authentication for personal identification and verification, while PCA reduced the dimensionality of the statistical features. The proposed approach reached an accuracy of 99.87%. An efficient unimodal and multimodal biometric system based on CNNs and feature selection for fast palmprint recognition was recently proposed by Trabelsi et al. [23]; simplified Gabor–PCA convolutional networks, an enhanced feature selection method, and dimensionality reduction achieved a high recognition rate, i.e., a 0% equal error rate (the best trade-off between false rejections and false acceptances) and 100% rank-one recognition (the percentage of samples correctly recognized at the top rank). Oleiwi et al. [24] introduced a gender classification method based on fingerprints, which integrates the Wiener filter and multi-level histogram techniques with three CNNs; the CNNs extract the fingerprint features, followed by Softmax as a classifier.
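Several of the approaches above [22,23] reduce feature dimensionality with PCA before classification. The following numpy-only sketch illustrates that step; the feature shapes and the number of retained components are assumptions chosen for illustration, not values from the cited papers.

```python
import numpy as np

# Illustrative PCA reduction: high-dimensional feature vectors
# (e.g. CNN or Gabor-filter responses) are projected onto a few
# principal components before classification.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))        # 200 samples, 512-D features

centered = features - features.mean(axis=0)   # PCA requires centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:32]                          # top 32 principal directions
reduced = centered @ components.T             # (200, 32) projected features
```

The SVD of the centered data matrix gives the principal directions directly, so no explicit covariance matrix is formed; the classifier then operates on the 32-dimensional projections.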

### *2.2. Proposed Methodology*

#### 2.2.1. Mathematical Approaches

To improve the quality of fingerprint images, first- and second-order derivative filters were used. An image is defined by an image function *A*(*x*, *y*) that gives the gray-level intensity at pixel position (*x*, *y*). The gradient vector of the image function is defined as in [25]:

$$
\nabla A(x, y) = \begin{bmatrix} G_{x} \\ G_{y} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial A(x, y)}{\partial x} \\[6pt] \dfrac{\partial A(x, y)}{\partial y} \end{bmatrix} \tag{1}
$$
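A minimal numerical sketch of Eq. (1), assuming central-difference approximations of the partial derivatives; the axis conventions and the toy array are illustrative only.

```python
import numpy as np

# Approximate the gradient vector of Eq. (1) with finite differences.
# numpy.gradient returns derivatives along axis 0 (rows, taken here
# as y) and axis 1 (columns, taken here as x).
A = np.array([[1., 2., 4.],
              [2., 3., 5.],
              [4., 5., 7.]])   # toy gray-level image A(x, y)

Gy, Gx = np.gradient(A)        # ∂A/∂y and ∂A/∂x
magnitude = np.hypot(Gx, Gy)   # |∇A|, large across ridge edges
```

At interior pixels `numpy.gradient` uses central differences, e.g. `Gx[1, 1] = (A[1, 2] - A[1, 0]) / 2`; the gradient magnitude is what an enhancement or binarization step would typically threshold.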
