Article

Recovery of Incomplete Fingerprints Based on Ridge Texture and Orientation Field

School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2873; https://doi.org/10.3390/electronics13142873
Submission received: 13 June 2024 / Revised: 15 July 2024 / Accepted: 16 July 2024 / Published: 21 July 2024

Abstract
The recovery of mutilated fingerprints plays an important role in improving the accuracy of fingerprint recognition and the speed of identity retrieval, so it is crucial to recover mutilated fingerprints efficiently and accurately. In this paper, we propose a method for the restoration of mutilated fingerprints based on the ridge texture and orientation field. First, the part to be restored is identified via the local quality of the fingerprint, and a mask image is generated. Second, a novel dual-stream fingerprint restoration network named IFSR is designed, which contains two branches, namely an orientation prediction branch guided by the fingerprint orientation field and a detail restoration branch guided by the high-quality fingerprint texture image, through which the damaged region of the mutilated fingerprint is restored. Finally, the method proposed in this paper is validated on a real dataset and an artificially damaged fingerprint dataset. The equal error rate (EER) achieved on the DB1, DB2, and DB4 datasets of FVC2002 is 0.10%, 0.12%, and 0.20%, respectively, while on the DB1, DB2, and DB4 datasets of FVC2004, the EER reaches 1.13%, 2.00%, and 0.27%, respectively. On the artificially corrupted fingerprint dataset, the restoration method achieves a peak signal-to-noise ratio (PSNR) of 16.6735.

1. Introduction

Fingerprint recognition has been widely used in the fields of security authentication and crime detection due to its excellent reliability and non-intrusive qualities [1,2]. Fingerprint recognition technology has made significant progress over the years, but there are still numerous challenges in recognizing mutilated fingerprints. In real-world scenarios, especially at crime scenes, it is common to encounter mutilated fingerprints where some of the information has been destroyed or lost, making the process of identifying the original fingerprint complex and difficult [3,4].
Residual fingerprint identification faces two main challenges. On the one hand, during feature extraction, residual fingerprints may be disturbed by factors such as acquisition conditions, the degree of breakage, and smudging, as well as background noise, which produces spurious features and degrades the matching results [5,6]. On the other hand, residual fingerprints contain less valid information, which makes it difficult for an Automatic Fingerprint Identification System (AFIS) to find enough correspondences between a residual fingerprint and a complete fingerprint during feature matching [7,8,9]. In recent years, many researchers have therefore proposed methods to enhance and restore mutilated fingerprints in order to minimize the impact of the damaged portion while ensuring identification accuracy [10,11].
Filtering methods are one of the most commonly used methods for fingerprint enhancement and restoration. Among these, Gabor filtering (GF) has good directional selectivity and frequency selectivity, which is suitable for enhancing the texture of fingerprints, and it is widely used in the field of fingerprint enhancement and restoration [12,13,14]. Mei et al. proposed an orthogonal curved-line GF on the basis of Gabor filtering (OCL-GF), which speeds up the enhancement process while retaining the enhancement effect of Gabor filtering [15]. Shams et al. proposed a new method for fingerprint image enhancement that combines a diffusion coherence filter and a 2D log-Gabor filter, which solves the bandwidth limitation problem of traditional Gabor filters [16]. Wang et al. improved the clarity and continuity of ridge structures in fingerprint images by applying a 2D discrete wavelet transform to decompose a fingerprint image into four sub-bands and generating compensated images by adaptively obtaining the compensation coefficients for each sub-band through a reference-based Gaussian template [17].
In addition to filtering methods, researchers have also attempted to infer the missing parts of low-quality fingerprints from the retained ridge structure. For example, Li et al. proposed a phase-field method that estimates the pixel values in a damaged fingerprint region from the image information outside it using the nonlocal Cahn–Hilliard (CH) equation, producing robust and efficient fingerprint restoration [18]. Gupta et al. considered the roles of minutiae density and orientation-field estimation in fingerprint reconstruction; based on the estimated orientation field and phase, a reconstructed image was obtained by augmenting the ridge structure with high-order polynomials of the continuous phase [19]. Cheng et al. proposed a multi-training-stage image enhancement algorithm that enhances fingerprints captured under dry and low-temperature conditions, restoring low-quality fingerprints to normal quality [20].
With the rapid development and wide application of deep learning, neural networks have become a powerful tool for recovering mutilated fingerprints [21,22,23]. Wong proposed a multi-task Convolutional Neural Network (CNN)-based method to recover the fingerprint ridge structure from corrupted fingerprint images by learning from the noise and damage caused by various undesirable conditions of the finger and sensor [24]. Liu et al. proposed a deep-CNN architecture with a nested UNet to segment and enhance latent fingerprints through pixel-to-pixel and end-to-end training, which can improve the quality of fingerprint images instantly [25]. Zhu et al. used a generative adversarial network to force the generation of augmented latent fingerprints in a fingerprint skeleton map and proposed Gaussian-based reconstruction loss of detail weight maps [26].
However, although deep learning-based image reconstruction methods have made some progress in theory and practice, they still have many drawbacks that cannot be ignored [27,28,29]. First, fingerprint restoration, as an important constituent step of mutilated fingerprint recognition, has the main goal of filling in the missing ridges in a fingerprint image, and the vast majority of fingerprint reconstruction methods try to combine fingerprint enhancement and fingerprint structure restoration into a single task, neglecting the reasonable planning of the restoration area. Thus, for large areas of missing fingerprints, it is often not possible to ensure the authenticity of the restored fingerprint ridge structure. Second, existing methods often do not fully consider the unique directionality and ridge flow characteristics of fingerprints when designing fingerprint reconstruction networks, and the results of such network training often fail to achieve the expected results. Finally, there is currently only a limited number of publicly available residual fingerprint datasets, which directly affects the training effect of the fingerprint recovery neural network model, leading to the overfitting of the model, as well as a decrease in prediction accuracy and a lack of generalization ability.
In view of the above problems, this study proposes a new method based on the principle of a local fingerprint quality assessment, dedicated to improving the reconstruction of mutilated fingerprints. In this paper, the reconstruction of the fingerprint ridge structure is split into two sub-steps. Firstly, a local quality assessment of the mutilated fingerprint is performed to judge its quality and determine the region that needs to be reconstructed; this process generates a mask containing information on the site to be restored. Then, a two-stream network guided by fingerprint ridges and orientation fields is designed to accurately reconstruct the ridge structure of the mask region based on the pre-acquired high-quality fingerprint structure. Finally, in order to overcome the limited number of publicly available mutilated fingerprint datasets, intact fingerprints are artificially damaged to generate new mutilated fingerprint datasets for model training. This strategy effectively compensates for the lack of mutilated fingerprint data and significantly improves the performance of the neural network model on the fingerprint recovery task. With this design, the proposed method further enhances the ability to reconstruct a mutilated fingerprint effectively while avoiding some of the weaknesses of other mutilated fingerprint restoration methods.
The main contributions of this paper are as follows:
(1)
A mask generation method based on a local quality assessment of mutilated fingerprints is proposed, and it can determine the specific location of the fingerprint structure to be restored and increase the accuracy of the fingerprint restoration process;
(2)
A network model for the recovery of mutilated fingerprints based on the ridge texture and directional field is proposed, and it can predict the ridge texture structure in the region to be recovered in the mask map based on the intact ridge texture so as to restore the damaged region of mutilated fingerprints;
(3)
Better restoration results are achieved on real datasets, namely FVC2002 [30] and FVC2004 [31], and on an artificially damaged dataset.

2. Materials and Methods

The overall structure of the method proposed in this paper is divided into two parts, as shown in Figure 1: a ridge texture extraction phase based on a local quality assessment and a ridge reconstruction phase for the residual fingerprint. In the ridge texture extraction phase, the mutilated fingerprint is first decomposed into a texture part and a cartoon part using total variation decomposition. The cartoon part contains the larger, smooth background regions of the fingerprint, and the texture part contains the structure and details of the fingerprint. Then, the local quality of the fingerprint ridge texture is evaluated using the ridge-specific frequency and orientation features of the fingerprint. Finally, the higher-quality ridge texture map and the corresponding mask map are extracted based on the quality assessment results, where the ridge texture map serves as the baseline for fingerprint reconstruction, and the mask map marks the part of the fingerprint that needs to be reconstructed.
In the ridge reconstruction phase, the ridge texture of the high-quality part of the mutilated fingerprint is first enhanced to make the ridge structure clearer. At the same time, directional field maps of the ridge texture are extracted. Then, the enhanced ridge maps and the directional field maps are spliced with the mask maps (superposed on the channel) and are used as inputs to the fingerprint ridge reconstruction network, which is proposed in this paper. The proposed fingerprint ridge reconstruction network consists of two complementary parallel dual-stream encoding–decoding networks, namely a structural reconstruction branch guided by the high-quality ridge maps and a directional reconstruction branch guided by the directional field maps, so as to make the reconstructed ridge structure more realistic and reliable.

2.1. Localized Quality Assessment of Fingerprint Images

Residual fingerprints usually contain a lot of noise, and their quality and texture attributes are extremely poor, which makes the subsequent quality assessment and feature extraction difficult. To address this, we preprocess residual fingerprints with a total variation approach, which decomposes a latent fingerprint image into two layers: a cartoon layer and a texture layer. The cartoon layer contains unwanted components (e.g., structural noise and background noise), while the texture layer mainly consists of fingerprint ridges with strong texture features. This cartoon–texture decomposition makes it easier to detect regions of interest in the texture layer. In this paper, we use the improved Localized Total Variation (LTV) method proposed in [32] to separate the cartoon portion c and the texture portion t of a mutilated fingerprint image.
The localized total variation (LTV) of a fingerprint image at pixel $I(i,j)$ is
$LTV(I(i,j)) = G_\sigma * \lvert \nabla I(i,j) \rvert$
where $G_\sigma$ is a Gaussian kernel with standard deviation $\sigma$, $*$ denotes convolution, and $\nabla$ is the image gradient.
The original image is then low-pass filtered, and the low-pass-filtered image $L_\sigma I(i,j)$ is used to calculate the relative reduction rate $\lambda(i,j)$ of the LTV at each pixel, where $L_\sigma$ denotes the low-pass filter:
$\lambda(i,j) = \dfrac{LTV(I(i,j)) - LTV(L_\sigma I(i,j))}{LTV(I(i,j))}$
The cartoon part $c(i,j)$ and the texture part $t(i,j)$ at that point are then given by
$c(i,j) = \omega(\lambda(i,j))\, L_\sigma I(i,j) + \big(1 - \omega(\lambda(i,j))\big)\, I(i,j)$
$t(i,j) = I(i,j) - c(i,j)$
where $\omega(\cdot): [0,1] \rightarrow [0,1]$ is a non-decreasing piecewise affine function that is constant and equal to 0 near 0 and constant and equal to 1 near 1. This total variation decomposition emphasizes the ridge texture properties of the fingerprint.
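As a concrete illustration, the following sketch implements the LTV-based cartoon–texture split described above with NumPy/SciPy. It follows the Buades et al. [32] definition LTV = G_σ ∗ |∇I|; the smoothing scale σ and the breakpoints of the weighting function ω(·) are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def ltv(img, sigma=2.0):
    # LTV = G_sigma * |grad I|: Gaussian-smoothed gradient magnitude
    grad_mag = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    return gaussian_filter(grad_mag, sigma)

def weight(lam, a1=0.25, a2=0.50):
    # Non-decreasing piecewise affine w(.): 0 below a1, 1 above a2 (assumed breakpoints)
    return np.clip((lam - a1) / (a2 - a1), 0.0, 1.0)

def cartoon_texture(img, sigma=2.0):
    img = img.astype(np.float64)
    low = gaussian_filter(img, sigma)                 # low-pass image L_sigma I
    ltv_i = ltv(img, sigma)
    lam = (ltv_i - ltv(low, sigma)) / (ltv_i + 1e-8)  # relative LTV reduction rate
    w = weight(lam)
    cartoon = w * low + (1.0 - w) * img
    texture = img - cartoon
    return cartoon, texture
```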
The local quality score of the texture part of a fingerprint image is calculated by combining the frequency and orientation characteristics of the fingerprint. Through this quality assessment, it is possible to identify the ridges and background noise that are of interest. The result of this quality assessment will help to accurately extract high-quality ridge texture structures from the fingerprint. Fingerprint ridge frequency is an important characteristic of a fingerprint that describes the sparseness of the ridges in a particular region of the fingerprint. The calculation of this parameter can help to assess the clarity of the ridge lines in a specific region and to use the ridge frequency as a judgment criterion to differentiate between noisy regions and fingerprint regions. The maximum amplitude within a localized area is a common measure of the localized ridge frequency of a fingerprint. For a frequency characteristic quality assessment, a fingerprint image is subdivided into small image blocks of  8 × 8  pixels. The quality assessment score of the frequency characteristics of this small image block is defined as the ridge frequency characteristics of the windowed image centered on these small blocks and covering an area size of  64 × 64  pixels. The strategy is designed to take into account the effect of ridge width on the frequency domain information, and, after observing a large number of fingerprint images, it is found that the ridge width is generally larger than 5 pixels. Since the fingerprint block for quality assessment contains at least two to three ridges, a localized area of 8 × 8 pixels is not sufficient to accurately capture all the frequency domain information. Therefore, the image area is enlarged to  64 × 64  pixels in order to obtain more complete and accurate frequency domain information.
In order to calculate the frequency feature scores of an image block, the image block is first subjected to a 2D Fourier transform to obtain the frequency information of the local region image:
$F(u,v) = \dfrac{1}{W^2} \sum_{x=0}^{W-1} \sum_{y=0}^{W-1} I(x,y)\, e^{-j 2\pi \left( \frac{ux}{W} + \frac{vy}{W} \right)}$
here $W$ denotes the width of the image block, and $I$ is the original image block; in this paper, $W = 64$. Then, a band-pass filter $H(u,v)$ is applied to restrict attention to the effective frequency range of the fingerprint image block, which corresponds to ridge periods of 5 to 15 pixels.
$G(u,v) = H(u,v)\, F(u,v)$
The maximum amplitude within this effective frequency band is taken as the frequency quality score $Q_f$ of the local block:
$Q_f = \max \{ A(u,v) \}$
where
$A(u,v) = \sqrt{ \mathrm{Re}\big(G(u,v)\big)^2 + \mathrm{Im}\big(G(u,v)\big)^2 }$
here $A(u,v)$ is the amplitude matrix of $G(u,v)$, and $\mathrm{Re}\,G(u,v)$ and $\mathrm{Im}\,G(u,v)$ are the real and imaginary parts of $G(u,v)$, respectively.
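A minimal sketch of the frequency score Q_f for one 8 × 8 block, computed on the 64 × 64 window centred on it. The exact shape of the band-pass filter H(u, v) is not specified in the text, so a hard radial band corresponding to ridge periods of 5–15 pixels is assumed here.

```python
import numpy as np

def frequency_score(window):
    # window: 64x64 grayscale patch centred on the 8x8 block being scored
    W = window.shape[0]
    F = np.fft.fftshift(np.fft.fft2(window)) / (W * W)      # normalised 2D DFT
    # Radial frequency (cycles per window) of every DFT bin
    u, v = np.meshgrid(np.arange(W) - W // 2, np.arange(W) - W // 2, indexing="ij")
    radius = np.hypot(u, v)
    # Keep only frequencies corresponding to ridge periods of roughly 5-15 pixels
    band = (radius >= W / 15.0) & (radius <= W / 5.0)
    amplitude = np.abs(F)                                    # sqrt(Re^2 + Im^2)
    return amplitude[band].max() if band.any() else 0.0
```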
Although frequency features are valuable for fingerprint localized quality assessment, due to the complexity of fingerprints in the field, it is difficult to perform accurate identification based on frequency features alone. Therefore, in this paper, the local quality of fingerprints is further assessed by combining the fingerprint directionality features.
The localized directional features of fingerprints show remarkable regularity, which is mainly reflected in the flow orientation and arrangement pattern of fingerprint ridges. In most cases, the ridges in the same fingerprint area have the same or a similar flow direction and show relatively consistent arrangement patterns. This regularity of local directional features is of great significance, and, based on the continuity and smoothness of directional features, regions with consistent directional features can be regarded as part of the same fingerprint. At the same time, this method can effectively screen out background noise and invalid information. In this paper, we use the directional consistency of a local fingerprint block with its surrounding blocks to characterize the directionality of a fingerprint [33]. Similar to the above method for frequency characteristics, the fingerprint image to be tested is segmented into small blocks with a size of  8 × 8  pixels, and the orientation difference between each fingerprint block and its surrounding 8 blocks is calculated:
$Q_o = \dfrac{1}{9} \sum_{(i',j') \in D} \left| \theta(i,j) - \theta(i',j') \right|$
here $\theta(i,j)$ is the ridge orientation of the central fingerprint block, and $\theta(i',j')$ is the ridge orientation of a fingerprint block in the region $D$ surrounding it.
The final localized quality assessment of the residual fingerprints is shown in the following equation, where  ω 1  and  ω 2  are the weighting coefficients:
$Q = \omega_1 Q_f + \omega_2 Q_o$
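The orientation-consistency score Q_o and the combined quality Q can be sketched as follows. The per-block orientation estimator, the handling of the π-periodicity of ridge orientation, and the weights ω1 and ω2 are assumptions made for illustration; they are not values reported in the paper.

```python
import numpy as np

def orientation_score(theta, i, j):
    # Mean absolute orientation difference between block (i, j) and its 3x3
    # neighbourhood D (up to nine blocks including the centre itself).
    nb = theta[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    diff = np.abs(nb - theta[i, j])
    # Fold differences into [0, pi/2]: ridge orientation is pi-periodic
    diff = np.minimum(diff, np.pi - diff)
    return diff.mean()

def local_quality(q_f, q_o, w1=0.6, w2=0.4):
    # Weighted combination Q = w1*Qf + w2*Qo from the equation above. Note that a
    # larger orientation difference means lower consistency, so in practice Q_o
    # would be mapped to a consistency measure (e.g. negated or inverted) first.
    return w1 * q_f + w2 * q_o
```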
In this method, the quality score of each fingerprint block is calculated and compared with a threshold $T$. If the quality score of a block is greater than $T$, the region is labeled as having a clear ridge structure. Conversely, if the quality score is lower than $T$, the ridge structure of this region is considered to have been destroyed, or the region is essentially background.
Based on the above evaluation, the local fingerprint images are further analyzed and processed to segment the high-quality ridge texture map and the mask map to be reconstructed. Firstly, the fingerprints are blocked based on the local fingerprint quality scores. All fingerprint blocks with quality scores greater than threshold T are identified, and distant and isolated blocks are excluded to eliminate incoherent ridge structures.
Next, the remaining fingerprint blocks are further screened. The region with the largest number of connected blocks is identified and considered the “main region” of the ridge. Regions with a distance greater than 40 pixels from the “main region” are then eliminated to remove discrete and irrelevant parts. The distance from the “main region” affects the difficulty of recovery; the farther the fingerprint region is from the “main region”, the harder it is to recover. After many experiments, we found that the best recovery result is achieved by keeping the regions with a distance of less than 40 pixels from the “main region” and discarding the regions with a distance of more than 40 pixels from the “main region”.
Finally, the minimum convex hull containing all the remaining regions is calculated. The regions within this convex hull constitute the total set of fingerprint areas to be processed. Within this set, the regions with quality scores greater than threshold T are labeled as clear ridge texture regions, and the fingerprint structure is enhanced in this part, while those with quality scores less than threshold T are labeled as regions to be recovered, which form the mask map.
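The segmentation into a high-quality ridge region and a mask map can be prototyped with standard connected-component and convex-hull tools, assuming a block-wise quality map as input. The threshold T, the block size, and the use of scikit-image are illustrative assumptions; the paper's exact region-filtering rules may differ.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import convex_hull_image

def build_masks(quality, T=0.5, block=8, max_dist_px=40):
    # quality: 2D array of per-block quality scores Q
    good = quality > T                                    # blocks with clear ridges
    labels, n = ndimage.label(good)                       # connected groups of good blocks
    if n == 0:
        return None, None
    # "Main region" = largest connected component of good blocks
    sizes = ndimage.sum(good, labels, index=np.arange(1, n + 1))
    main = labels == (np.argmax(sizes) + 1)
    # Drop good blocks farther than max_dist_px from the main region
    dist = ndimage.distance_transform_edt(~main) * block  # block grid -> pixels
    kept = good & (dist <= max_dist_px)
    # Convex hull of the kept blocks = total fingerprint region to process
    hull = convex_hull_image(kept)
    clear_mask = hull & kept                              # enhance these ridges
    restore_mask = hull & ~kept                           # reconstruct these (mask map)
    return clear_mask, restore_mask
```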

2.2. Fingerprint Ridge Reconstruction Network

2.2.1. Network Structure

In this work, residual fingerprint ridge reconstruction is considered as the problem of predicting the ridge texture structure in the to-be-restored region in the mask map based on the intact ridge texture. In order to solve this problem, this paper proposes a generative adversarial network model for ridge reconstruction. The overall structure of the proposed network is shown in Figure 2. A two-stream encoding–decoding network guided by the directional field and the ridge texture is used as a generator, and a discriminator is used to determine whether the generated ridge structure is consistent with the groundtruth.
The generator framework consists of two parallel encoding–decoding network structures. Based on the in-depth consideration of fingerprint ridge orientation properties, two branches are designed: a ridge restore branch guided by the fingerprint ridge image and an orientation prediction branch guided by the fingerprint orientation field. These two branches have the same structure but run in parallel and complement each other’s deficiencies.
The inputs to the ridge restoration branch are the texture maps of the enhanced fingerprint images and the mask maps generated with the method in Section 2.1, concatenated at the channel level so that the network can integrate and directly exploit both sources of information. The encoder consists of four convolutional blocks, in which partial convolution is used instead of regular convolution in order to handle irregular occlusion regions efficiently and prevent information interference. After each convolution, the partial convolution updates the mask, encouraging the model to fill in more of the occluded regions. In image processing and computer vision, "irregular occlusion regions" are irregularly shaped areas in which pixel information is partially or completely lost. Such regions are difficult for standard convolution, which cannot handle incomplete information effectively, which is why specialized operations such as partial convolution are needed. In this paper, "irregular occlusion regions" refers to the areas of the mask map generated in Section 2.1. After each partial convolution, the output is scale-transformed by a batch normalization (BN) layer, and a nonlinear transformation is introduced through a ReLU activation layer to strengthen the model's ability to fit complex mappings. The partial convolution kernel sizes are 7 × 7, 5 × 5, 3 × 3, and 3 × 3 with strides of 2, 2, 2, and 1, and zero-padding is used during training. The encoding stage of the orientation prediction branch has the same structure as that of the ridge restoration branch, except that its input is the combination of the fingerprint orientation field map and the mask map.
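A compact PyTorch sketch of a partial convolution layer with the mask-update rule described above, in the spirit of Liu et al. [36]. The single-channel mask convention and the channel widths in the example encoder are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Mask-aware convolution: convolves only valid pixels and updates the mask."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, pad)
        # Fixed all-ones kernel used only to count valid pixels in each window
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.stride, self.pad = stride, pad

    def forward(self, x, mask):
        # x: (B, C, H, W); mask: (B, 1, H, W) with 1 = valid pixel, 0 = hole
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.pad)
        out = self.conv(x * mask)                        # convolve valid pixels only
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * (self.ones.numel() / valid.clamp(min=1.0)) + bias
        new_mask = (valid > 0).float()                   # mask-update step
        return out * new_mask, new_mask

# One branch's encoder: kernels 7/5/3/3 and strides 2/2/2/1 as stated in the text;
# the channel widths are assumed, and the BN + ReLU that follow each block in the
# paper are omitted for brevity.
encoder = nn.ModuleList([
    PartialConv2d(2, 64, 7, 2),
    PartialConv2d(64, 128, 5, 2),
    PartialConv2d(128, 256, 3, 2),
    PartialConv2d(256, 256, 3, 1),
])
```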
In the decoding stage, the latent representation produced by the encoder of the orientation prediction branch is combined with that produced by the encoder of the ridge restoration branch, and together they serve as the input to the decoder of the ridge restoration branch; this operation allows the orientation features of the fingerprint to guide the restoration of its ridges. The decoder likewise consists of four convolutional blocks, with upsampling performed by nearest-neighbor interpolation. In addition, the encoder outputs are connected to the corresponding decoder layers through skip connections, which pass low-level information directly to the higher levels and avoid the loss of information caused by excessive network depth. In encoder–decoder architectures, information loss becomes more severe as the network deepens: because of the accumulation of layers, the original input may degrade during encoding and decoding, so that it is only incompletely represented in the final output. Skip connections are an effective and widely used remedy; they establish direct pathways between layers at different depths, allowing the output of selected encoder layers to be propagated immediately to the corresponding decoder layers, bypassing the intermediate layers and thus preventing this potential information loss. The decoding structure of the orientation prediction branch is the same as that of the ridge restoration branch and is not repeated here.

2.2.2. Loss Function

In the proposed fingerprint reconstruction network, three types of loss functions, namely adversarial loss, reconstruction loss, and perceptual loss, are used to jointly train the network. These three types of losses are fused together to jointly optimize the training of the model.
The adversarial loss is used to evaluate the difference between the generated restored image and the actual image. The form of the adversarial loss function is as follows:
$L_a = \min_G \max_D \; \mathbb{E}_{I_g, E_g}\big[\log D(I_g, E_g)\big] + \mathbb{E}_{I_o, E_o}\big[\log\big(1 - D(I_o, E_o)\big)\big]$
In this formulation, $\mathbb{E}$ denotes expectation; min and max denote the minimization and maximization performed by generator $G$ and discriminator $D$, respectively; $I_g$ and $E_g$ are the groundtruths of the complete ridge map and the orientation field map, respectively; and $I_o$ and $E_o$ represent the ridge map and the orientation field map produced by the generator. For fingerprint image restoration, we want generator $G$ to produce fingerprint images that are as realistic as possible, i.e., to minimize the adversarial loss, while discriminator $D$ should accurately distinguish real fingerprints from restored ones, i.e., maximize the adversarial loss.
In this study, the L1 norm is used as the image reconstruction loss, as shown in the following equation:
$L_r = \lVert I_o - I_g \rVert_1$
here $\lVert \cdot \rVert_1$ stands for the L1 norm, and $I_o$ and $I_g$ stand for the generated image and the groundtruth, respectively.
In order to improve the perceptual quality of the generated image, a perceptual loss function is introduced. Perceptual loss is usually defined as the difference between the generated image and the groundtruth in some feature space. A pre-trained VGG network is used as the feature extractor, and the perceptual loss is defined as the L1 norm of the difference between the features of $I_o$ and $I_g$, where $F(\cdot)$ denotes feature extraction with the VGG network:
$L_p = \lVert F(I_o) - F(I_g) \rVert_1$
The joint loss is given below, where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the weighting parameters:
$L = \lambda_1 L_a + \lambda_2 L_r + \lambda_3 L_p$
By using the above three loss functions together, the network can be optimized from different perspectives. Adversarial loss makes the generated fingerprints close to the real fingerprint distribution, reconstruction loss ensures fine pixel-level reconstruction, and perceptual loss ensures similarity in high-level semantic features. This design allows the network to generate fingerprint images that are both structurally approximate and detail accurate and that capture the complexity of real-world fingerprints, further improving the accuracy and effectiveness of fingerprint reconstruction.
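A hedged PyTorch sketch of the joint objective on the generator side is given below. The VGG-16 feature layer used for the perceptual term, the loss weights, the assumption of single-channel inputs in [0, 1], and the assumption that the discriminator ends in a sigmoid are illustrative choices not specified in the text.

```python
import torch
import torch.nn as nn
import torchvision

class JointLoss(nn.Module):
    def __init__(self, lambdas=(0.1, 1.0, 0.05)):
        super().__init__()
        # Frozen VGG-16 features (up to relu3_3) as the perceptual extractor F(.)
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.bce = nn.BCELoss()          # assumes the discriminator outputs a sigmoid score
        self.lambdas = lambdas

    def generator_loss(self, d_fake, fake, real):
        # fake/real: single-channel ridge maps in [0, 1]; repeated to 3 channels for VGG
        l_adv = self.bce(d_fake, torch.ones_like(d_fake))            # adversarial term
        l_rec = self.l1(fake, real)                                  # L1 reconstruction
        l_per = self.l1(self.vgg(fake.repeat(1, 3, 1, 1)),
                        self.vgg(real.repeat(1, 3, 1, 1)))           # perceptual term
        w1, w2, w3 = self.lambdas
        return w1 * l_adv + w2 * l_rec + w3 * l_per
```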

3. Results

3.1. Database and Details

3.1.1. Training Datasets

In this paper, we use the NIST SD300 and FVC2006 fingerprint datasets as the training data, and we use the NVIDIA Irregular Mask Dataset as the masks to corrupt the training fingerprints [34,35].
FVC2006 is one of the main datasets in the Fingerprint Verification Competition (FVC) series. FVC2006 contains four different datasets, namely DB1, DB2, DB3, and DB4, each of which contains fingerprints from 110 subjects, with six samples per subject and 660 fingerprint image samples per database.
The NIST SD300 fingerprint dataset is a fingerprint repository provided by the National Institute of Standards and Technology (NIST) for fingerprint identification research. The dataset contains a collection of rolled and normal fingerprints collected from nearly 900 subjects at the time of arrest, totaling about 9000 different fingerprint images. The dataset scans and stores fingerprint images at a resolution of 500 dpi, which provides good authenticity and representativeness.
The NVIDIA Irregular Mask Dataset is provided by NVIDIA and is specifically designed to study and improve image restoration algorithms [36]. This dataset contains a variety of irregular image masks that are used to simulate the many image corruption, occlusion, and damage situations that can occur in real life.

3.1.2. Test Datasets

The proposed method in this paper is tested on real fingerprint datasets and an artificially damaged fingerprint dataset. For real fingerprints, FVC2002 and FVC2004 are used as test sets in this paper.
Artificially corrupted fingerprint test dataset: Due to the lack of groundtruths for real fingerprints, high-quality fingerprints are artificially corrupted by adding masks in order to test the proposed method more intuitively. The high-quality fingerprints used as the groundtruth are selected from the NIST SD 302(b_U) dataset, another fingerprint library provided by the National Institute of Standards and Technology (NIST); it contains baseline operator-assisted rolled fingerprint impressions and 4-4-2 slap impressions (a 4-4-2 slap impression is a standard method of capturing fingerprints), totaling about 2000 fingerprint images [37].
The establishment method of the artificially corrupted fingerprint test set is as follows:
First, the mask images in the NVIDIA Irregular Mask Dataset are categorized into four subsets of 15%, 25%, 35%, and 45% according to the damage range, with 1000 mask images in each subset.
Second, 1000 high-quality fingerprint images are artificially selected from 2000 complete fingerprint images in the NIST SD 302(b_U) dataset, and Gabor filtering is used to enhance the fingerprint images.
Finally, the mask images and high-quality fingerprint images obtained in the first two steps are sequentially combined and spliced to obtain artificially corrupted fingerprint images, with corrupted ranges of 15%, 25%, 35%, and 45%, and the number of images in the sub-dataset of each corrupted range is 1000.
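The corruption step described above can be sketched as follows. File handling, the mask polarity (white = damaged), and the fill value used for the damaged region are assumptions for illustration.

```python
import numpy as np
from PIL import Image

def corrupt(fingerprint_path, mask_path, out_path):
    fp = np.array(Image.open(fingerprint_path).convert("L"), dtype=np.uint8)
    # Resize the irregular mask to the fingerprint size; white (>127) marks damage
    mask = np.array(Image.open(mask_path).convert("L").resize(fp.shape[::-1]))
    damaged = fp.copy()
    damaged[mask > 127] = 255          # blank out the damaged region
    Image.fromarray(damaged).save(out_path)
```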
Implementation details: The model is implemented in PyTorch, and training is performed on a single RTX 3090 (24 GB) GPU. Adam is used to optimize the loss function, with an initial learning rate of $2 \times 10^{-4}$ followed by fine-tuning at a learning rate of $5 \times 10^{-5}$. The discriminator is trained with a learning rate of 1/10 that of the generator, the batch size is set to 4, and the model is trained for 70,000 iterations.
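A possible optimizer setup matching the stated hyper-parameters is shown below; the generator and discriminator model objects are placeholders, and the exact fine-tuning schedule is an assumption.

```python
import torch

def make_optimizers(generator, discriminator, fine_tune=False):
    # Generator: Adam with lr 2e-4 for initial training, 5e-5 for fine-tuning;
    # discriminator learning rate is 1/10 of the generator's, as stated above.
    g_lr = 5e-5 if fine_tune else 2e-4
    opt_g = torch.optim.Adam(generator.parameters(), lr=g_lr)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=g_lr / 10)
    return opt_g, opt_d
```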

3.2. Results

3.2.1. Real Fingerprint Datasets

The fingerprint recovery effect of the method proposed in this paper on the FVC2002 and FVC2004 datasets is shown in Figure 3, with (a) and (b) showing two real low-quality fingerprints from the FVC datasets. The low-quality regions of the fingerprints can be accurately recognized, and the corresponding mask map can be generated using the method in Section 2.1, as can be seen in the zoomed-in views of the original and restored fingerprints. The method proposed in this paper recovers the ridge structure of the mutilated fingerprint regions well.
In order to illustrate the superior performance of the method proposed in this paper over other similar fingerprint enhancement methods, several advanced fingerprint enhancement restoration methods are selected for a comparison on the FVC2002 and FVC2004 datasets. Among them, VeriFinger is an advanced and multifunctional fingerprint recognition software developed by Neurotechnology, Inc., Vilnius, Lithuania; Gabor + MCC and phase-filed are fingerprint restoration methods based on traditional methods; and multi-task, IMMH, and MTS are methods based on deep learning.
The ultimate goal of residual fingerprint recovery is to improve the fingerprint identification accuracy of the AFIS. Thus, Table 1 lists the equal error rates (EERs) of each method on the FVC datasets; the data are taken from the corresponding papers. The method proposed in this paper achieves the best results on FVC2002 DB2, FVC2002 DB4, FVC2004 DB1, and FVC2004 DB4, whereas its EERs of 0.10% on FVC2002 DB1 and 2.00% on FVC2004 DB2 are second only to the optimal results. Taken together, these results show that the method proposed in this paper recovers the fingerprint structure better than other similar methods and thus improves the recognition accuracy of the AFIS.
Figure 4 shows several fingerprint images that cannot be reconstructed with the fingerprint restoration network. The proposed method predicts the corrupted ridges from the surrounding clear ridges; it is therefore not applicable to fingerprint images from which clear ridges are difficult to extract. Likewise, if there is no clear ridge structure around the destroyed area, that area cannot be recovered with this method either, as shown in Figure 4.

3.2.2. Artificially Damaged Fingerprint Dataset

Due to the lack of groundtruth in real fingerprints, to test the proposed method more intuitively, high-quality fingerprints are artificially corrupted by adding masks. The range of destruction is 15%, 25%, 35%, and 45%. As shown in Figure 5, the first row of images shows the complete original fingerprint image and the artificially damaged fingerprint images, and the second row demonstrates the results after recovery using the proposed method. As can be seen from the enlarged recovery detail image, the proposed method can recover the damaged fingerprint structure, and, at the same time, the recovered fingerprint feature points are similar to the original fingerprint feature points.
In order to visualize the performance of the proposed method, square masks of different sizes are used to corrupt the image. As shown in Figure 6, the effectiveness of the proposed method can be seen. Even in the face of a large area of destruction, the method proposed in this paper can still reconstruct part of the ridges.
In this experiment, ROC curves of the original intact fingerprint (groundtruth), artificially damaged fingerprints, and fingerprints reconstructed using the present method are obtained, as shown in Figure 7. Panels (a), (b), (c), and (d) show the fingerprints when the degree of damage is 15%, 25%, 35%, and 45%, respectively.
When the fingerprints are less damaged, i.e., 15% and 25%, the automatic fingerprint identification system (AFIS) is still able to extract sufficient valid feature points from the retained portion, which makes the ROC curves of the damaged and reconstructed fingerprints similar to those of the original fingerprints. However, as the degree of fingerprint damage increases, e.g., 35% and 45%, the number of effective feature points that can be extracted via the AFIS decreases, and the fingerprint destruction process may also produce misleading false features, which, together, lead to a decrease in the recognition accuracy of damaged fingerprints. By using the restoration method proposed in this paper, the damaged portion of the fingerprint can be effectively restored, increasing the number of effective features extracted via the AFIS, thereby improving the recognition accuracy of the damaged fingerprint.
Figure 7 shows that the fingerprint restoration algorithm proposed in this paper has significant restoration effects on different degrees of damage and can effectively improve the recognition accuracy of damaged fingerprints. This verifies the effectiveness and wide applicability of the proposed method. Table 2 shows the EER of the artificially damaged dataset. It can be seen that the method of this paper can effectively improve the recognition accuracy of damaged fingerprints.
The PSNR is used to further validate the effectiveness of the fingerprint reconstruction network method proposed in this paper. The PSNR is a widely accepted and objective criterion for evaluating the goodness of image restoration. Its theoretical basis is that better image restoration should produce results closer to the original image, thus resulting in a higher PSNR between the restored image and the original image.
$PSNR = 10 \log_{10} \dfrac{I_{\max}^2}{\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(I_{i,j} - K_{i,j}\right)^2}$
here, $I_{i,j}$ and $K_{i,j}$ denote the pixel values of the original fingerprint image and the recovered fingerprint image at position $(i,j)$, respectively, and $m$ and $n$ are the height and width of the image. Figure 8 shows the PSNR during training: the restoration quality improves steadily as the number of iterations increases, and the PSNR reaches 16.6735 dB at iteration 70,000. To verify the efficiency of the proposed method, the PSNR is compared with the values of other state-of-the-art methods, as shown in Table 3. These results confirm that the proposed algorithm outperforms the other methods in recovering damaged fingerprint images.
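The PSNR above can be computed directly as follows (assuming 8-bit images, so I_max = 255):

```python
import numpy as np

def psnr(original, restored, i_max=255.0):
    # Mean squared error between the groundtruth and the restored fingerprint
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(i_max ** 2 / mse)
```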

4. Conclusions

In this paper, a fingerprint restoration algorithm based on the ridge texture and orientation field is proposed. This algorithm employs a unique fingerprint local quality assessment strategy to accurately isolate mask maps that need to be recovered and produce high-quality ridge structure maps. Further, a two-stream network incorporating ridge texture and fingerprint directional field information is constructed, resulting in remarkable performance in efficiently recovering the ridge structures of mutilated fingerprints.
This study ultimately aims to improve the fingerprint recognition accuracy of the AFIS. To this end, we evaluate the EER of the proposed algorithm on recovered fingerprint images from two real fingerprint datasets, FVC2002 and FVC2004. On the DB1, DB2, and DB4 datasets of FVC2002, the experimental results show that the algorithm proposed in this paper reduces the EER to 0.10%, 0.12%, and 0.20%, respectively. Similarly, on the DB1, DB2, and DB4 datasets of FVC2004, the proposed recovery method achieves EERs as low as 1.13%, 2.00%, and 0.27%, respectively.
For deeper validation, we calculate the peak signal-to-noise ratio (PSNR) between the restored fingerprints and the groundtruth on the artificially damaged fingerprint dataset. The algorithm achieves a PSNR of 16.6735 dB when recovering the artificially corrupted dataset, which demonstrates the effectiveness and stability of the proposed method on the fingerprint reconstruction task.

Author Contributions

Conceptualization, Y.S.; methodology, Y.S.; software, Y.S.; validation, Y.S.; formal analysis, Y.S. and Y.T.; investigation, Y.S.; resources, X.C. and Y.T.; data curation, Y.S. and Y.T.; writing—original draft preparation, Y.S.; writing—review and editing, X.C. and Y.T.; visualization, Y.S.; supervision, Y.T.; project administration, X.C.; funding acquisition, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Jilin Provincial Department of Science and Technology, grant number 20210203044SF.

Data Availability Statement

FVC2002 web site: http://bias.csr.unibo.it/fvc2002, FVC2004 web site: http://bias.csr.unibo.it/fvc2004, FVC2006 web site: http://bias.csr.unibo.it/fvc2006, NIST SD 302 web site: https://data.nist.gov, NVIDIA Irregular Mask Dataset web site: https://nv-adlr.github.io/publication/partialconv-inpainting (all accessed on 30 January 2024).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maltoni, D.; Maio, D.; Jain, A.K.; Feng, J. Handbook of Fingerprint Recognition; Springer International Publishing: Cham, Switzerland, 2022; ISBN 978-3-030-83623-8. [Google Scholar]
  2. Malhotra, A.; Vatsa, M.; Singh, R.; Morris, K.B.; Noore, A. Multi-Surface Multi-Technique (MUST) Latent Fingerprint Database. IEEE Trans. Inform. Forensic Secur. 2024, 19, 1041–1055. [Google Scholar] [CrossRef]
  3. Cao, K.; Jain, A.K. Automated Latent Fingerprint Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 788–800. [Google Scholar] [CrossRef]
  4. Malhotra, A.; Sankaran, A.; Vatsa, M.; Singh, R.; Morris, K.B.; Noore, A. Understanding ACE-V Latent Fingerprint Examination Process via Eye-Gaze Analysis. IEEE Trans. Biom. Behav. Identity Sci. 2021, 3, 44–58. [Google Scholar] [CrossRef]
  5. Tran, Q.N.; Hu, J. A Multi-Filter Fingerprint Matching Framework for Cancelable Template Design. IEEE Trans. Inform. Forensic Secur. 2021, 16, 2926–2940. [Google Scholar] [CrossRef]
  6. Oblak, T.; Haraksim, R.; Peer, P.; Beslay, L. Fingermark Quality Assessment Framework with Classic and Deep Learning Ensemble Models. Knowl. Based Syst. 2022, 250, 109148. [Google Scholar] [CrossRef]
  7. Li, Y.; Pang, L.; Zhao, H.; Cao, Z.; Liu, E.; Tian, J. Indexing-Min–Max Hashing: Relaxing the Security–Performance Tradeoff for Cancelable Fingerprint Templates. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 6314–6325. [Google Scholar] [CrossRef]
  8. Fan, Y.; Huang, Y.; Cai, K.; Yan, F.; Peng, J. Surfel Set Simplification with Optimized Feature Preservation. IEEE Access 2016, 4, 10258–10269. [Google Scholar] [CrossRef]
  9. Wang, Y.; Li, Y.; Wang, J.; Lv, H. An Optical Flow Estimation Method Based on Multiscale Anisotropic Convolution. Appl. Intell. 2024, 54, 398–413. [Google Scholar] [CrossRef]
  10. Tu, Y.; Yao, Z.; Xu, J.; Liu, Y.; Zhang, Z. Fingerprint Restoration Using Cubic Bezier Curve. BMC Bioinform. 2020, 21, 514. [Google Scholar] [CrossRef]
  11. Lee, C.; Kim, S.; Kwak, S.; Hwang, Y.; Ham, S.; Kang, S.; Kim, J. Semi-Automatic Fingerprint Image Restoration Algorithm Using a Partial Differential Equation. AIMS Math. 2023, 8, 27528–27541. [Google Scholar] [CrossRef]
  12. Bian, W.; Ding, S.; Jia, W. Collaborative Filtering Model for Enhancing Fingerprint Image. IET Image Process. 2018, 12, 149–157. [Google Scholar] [CrossRef]
  13. Lei, J.; Peng, Q.; You, X.; Jabbar, H.H.; Wang, P.S.P. Fingerprint Enhancement Based on Wavelet and Anisotropic Filtering. Int. J. Patt. Recogn. Artif. Intell. 2012, 26, 1256001. [Google Scholar] [CrossRef]
  14. Zahedi, M.; Ghadi, O.R. Combining Gabor Filter and FFT for Fingerprint Enhancement Based on a Regional Adaption Method and Automatic Segmentation. SIViP 2015, 9, 267–275. [Google Scholar] [CrossRef]
  15. Mei, Y.; Zhao, B.; Zhou, Y.; Chen, S. Orthogonal Curved-line Gabor Filter for Fast Fingerprint Enhancement. Electron. Lett. 2014, 50, 175–177. [Google Scholar] [CrossRef]
  16. Shams, H.; Jan, T.; Khalil, A.A.; Ahmad, N.; Munir, A.; Khalil, R.A. Fingerprint Image Enhancement Using Multiple Filters. PeerJ Comput. Sci. 2023, 9, e1183. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, J.W.; Le, N.T.; Wang, C.C.; Lee, J.S. Enhanced Ridge Structure for Improving Fingerprint Image Quality Based on a Wavelet Domain. IEEE Signal Process. Lett. 2015, 22, 390–394. [Google Scholar] [CrossRef]
  18. Li, Y.; Xia, Q.; Lee, C.; Kim, S.; Kim, J. A Robust and Efficient Fingerprint Image Restoration Method Based on a Phase-Field Model. Pattern Recognit. 2022, 123, 108405. [Google Scholar] [CrossRef]
  19. Gupta, R.; Khari, M.; Gupta, D.; Crespo, R.G. Fingerprint Image Enhancement and Reconstruction Using the Orientation and Phase Reconstruction. Inf. Sci. 2020, 530, 201–218. [Google Scholar] [CrossRef]
  20. Cheng, C.-H.; Chiu, C.-T.; Kuan, C.-Y.; Su, Y.-C.; Liu, K.-H.; Lee, T.-C.; Chen, J.-L.; Luo, J.-Y.; Chun, W.-C.; Chang, Y.-R.; et al. Multiple Training Stage Image Enhancement Enrolled with CCRGAN Pseudo Templates for Large Area Dry Fingerprint Recognition. IEEE Access 2023, 11, 86790–86800. [Google Scholar] [CrossRef]
  21. Tian, C.; Fei, L.; Zheng, W.; Xu, Y.; Zuo, W.; Lin, C.-W. Deep Learning on Image Denoising: An Overview. Neural Netw. 2020, 131, 251–275. [Google Scholar] [CrossRef]
  22. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent Advances in Convolutional Neural Networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  23. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Salt Lake City, UT, USA, 2018; pp. 2472–2481. [Google Scholar]
  24. Wong, W.J.; Lai, S.-H. Multi-Task CNN for Restoring Corrupted Fingerprint Images. Pattern Recognit. 2020, 101, 107203. [Google Scholar] [CrossRef]
  25. Liu, M.; Qian, P. Automatic Segmentation and Enhancement of Latent Fingerprints Using Deep Nested UNets. IEEE Trans. Inform. Forensic Secur. 2021, 16, 1709–1719. [Google Scholar] [CrossRef]
  26. Zhu, Y.; Yin, X.; Hu, J. FingerGAN: A Constrained Fingerprint Generation Scheme for Latent Fingerprint Enhancement. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 8358–8371. [Google Scholar] [CrossRef]
  27. Zeng, F.; Hu, S.; Xiao, K. Research on Partial Fingerprint Recognition Algorithm Based on Deep Learning. Neural Comput. Applic 2019, 31, 4789–4798. [Google Scholar] [CrossRef]
  28. Chen, F.; Li, M.; Zhang, Y. A Fusion Method for Partial Fingerprint Recognition. Int. J. Patt. Recogn. Artif. Intell. 2013, 27, 1356009. [Google Scholar] [CrossRef]
  29. Guo, X.; Yang, H.; Huang, D. Image Inpainting via Conditional Texture and Structure Dual Generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Virtual, 11–17 October 2021. [Google Scholar]
  30. Maio, D.; Maltoni, D.; Cappelli, R.; Wayman, J.L.; Jain, A.K. FVC2002: Second Fingerprint Verification Competition. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR), Quebec City, QC, Canada, 11–15 August 2002. [Google Scholar]
  31. Maio, D.; Maltoni, D.; Cappelli, R.; Wayman, J.L.; Jain, A.K. FVC2004: Third Fingerprint Verification Competition. In Biometric Authentication; Zhang, D., Jain, A.K., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3072, pp. 1–7. ISBN 978-3-540-22146-3. [Google Scholar]
  32. Buades, A.; Le, T.; Morel, J.-M.; Vese, L. Cartoon+Texture Image Decomposition. Image Process. Line 2011, 1, 200–207. [Google Scholar] [CrossRef]
  33. Ding, S.; Bian, W.; Sun, T.; Xue, Y. Fingerprint Enhancement Rooted in the Spectra Diffusion by the Aid of the 2D Adaptive Chebyshev Band-Pass Filter with Orientation-Selective. Inf. Sci. 2017, 415–416, 233–246. [Google Scholar] [CrossRef]
  34. Fiumara, G.; Flanagan, P.; Grantham, J.; Bandini, B.; Ko, K.; Libert, J. NIST Special Database 300: Uncompressed Plain and Rolled Images from Fingerprint Cards; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2018; p. NIST TN 1993. [Google Scholar]
  35. Fierrez, J.; Ortega-Garcia, J.; Torre Toledano, D.; Gonzalez-Rodriguez, J. Biosec Baseline Corpus: A Multimodal Biometric Database. Pattern Recognit. 2007, 40, 1389–1392. [Google Scholar] [CrossRef]
  36. Liu, G.; Reda, F.A.; Shih, K.J.; Wang, T.-C.; Tao, A.; Catanzaro, B. Image Inpainting for Irregular Holes Using Partial Convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
  37. Fiumara, G.; Schwarz, M.; Heising, J.; Peterson, J.; Flanagan, P.; Marshall, K. NIST Special Database 302: Supplemental Release of Latent Annotations; National Institute of Standards and Technology (U.S.): Gaithersburg, MD, USA, 2021; p. NIST TN 2190. [Google Scholar]
Figure 1. Overall structure of the method proposed in this paper.
Figure 2. Overall structure of the network.
Figure 3. Effectiveness of fingerprint recovery on the FVC databases. (a,b) show two real low-quality fingerprints from the FVC datasets.
Figure 4. Examples of unrecoverable fingerprint images.
Figure 5. Fingerprint restoration results on artificially damaged fingerprints.
Figure 6. Recovered fingerprint images under different degrees of damage.
Figure 7. ROC curves for the artificially damaged dataset. Panels (a–d) show the fingerprints when the degree of damage is 15%, 25%, 35%, and 45%, respectively.
Figure 8. PSNR of the restored fingerprint images versus the number of training iterations.
Table 1. EER (%) of different methods on the FVC2002 and FVC2004 datasets.

Method       | FVC2002 DB1 | FVC2002 DB2 | FVC2002 DB4 | FVC2004 DB1 | FVC2004 DB2 | FVC2004 DB4
VeriFinger   | 0.25        | 0.31        | 0.35        | 5.65        | 5.46        | 2.59
Gabor + MCC  | 0.71        | 0.47        | 2.86        | 3.97        | 5.54        | 5.71
Phase-field  | 0.25        | 0.62        | 0.23        | 1.24        | 0.93        | 0.30
Multi-task   | 0.18        | 0.14        | 0.25        | 1.83        | 3.66        | 0.32
IMMH         | 0.09        | 0.23        | --          | 1.99        | 3.89        | --
MTS          | 0.04        | 2.00        | 1.46        | 1.80        | 2.00        | 0.60
This paper   | 0.10        | 0.12        | 0.20        | 1.13        | 2.00        | 0.27
Table 2. EER (%) for the artificially damaged dataset (the original intact fingerprints give an EER of 1.07%).

Damage level | Damaged | Restored
15%          | 1.30    | 1.10
25%          | 2.19    | 1.30
35%          | 3.63    | 1.32
45%          | 6.94    | 2.37
Table 3. PSNR (dB) of different methods.

Method    | VeriFinger | Gabor Filtering | Phase-field | MTS     | Multi-task | This Paper
PSNR (dB) | 13.6899    | 13.4744         | 13.8633     | 14.9734 | 15.8834    | 16.6735
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
