Article

The Effects of AI-Driven Face Restoration on Forensic Face Recognition

1 School of Software Engineering, Chengdu University of Information Technology, Chengdu 610225, China
2 Academy of Forensic Science, Shanghai 200063, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3783; https://doi.org/10.3390/app14093783
Submission received: 26 March 2024 / Revised: 19 April 2024 / Accepted: 26 April 2024 / Published: 29 April 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

In biometric recognition, face recognition is a mature and widely used technique that provides a fast, accurate, and reliable method for human identification. This paper studies the effects of face image restoration on forensic face recognition and analyzes the advantages and limitations of four state-of-the-art face image restoration methods for forensic human image identification. In total, 100 face images from an open-source face image dataset are used for the experiments. Gaussian blur processing is applied to simulate the blurred face images encountered in actual cases of forensic human image identification. Four state-of-the-art AI-driven face restoration methods are used to restore the blurred face images, and three mainstream face recognition systems are used to evaluate the changes in recognition performance for the blurred and restored face images. We find that although face image restoration can effectively remove facial noise and blurring effects, the restored images do not significantly improve the recognition performance of the face recognition systems. Face image restoration may alter the original features in face images and introduce fabricated image features, thereby affecting the accuracy of face recognition. Under current conditions, the improvement that face image restoration brings to the recognition performance of face recognition systems is limited, but it still plays a positive role in forensic human image identification.

1. Introduction

Biometric recognition is a technology that uses various human biological characteristics to identify individuals. Common biometric technologies include face recognition, fingerprint recognition, and iris recognition [1,2]. Among them, face recognition is a mature and widely used method that provides a fast, accurate, and reliable means of forensic human image identification [3].
In practical applications, face recognition systems are widely applied, but they are often affected by deteriorating image quality, such as blurring, occlusion, and abnormal lighting [4,5,6]. These problems not only reduce the accuracy and stability of face recognition but also bring great difficulties for forensic human image identification. In actual forensic human image identification cases, the target portrait often does not meet the recognition requirements of face recognition systems, which creates many difficulties for the application of face recognition technology [7,8,9]. In a comprehensive study on face identification, it was observed that forensic facial examiners, facial reviewers, and super-recognizers demonstrated superior performance compared to fingerprint examiners and students in complex face identification tasks [10]. Although decision support from Automated Facial Recognition Systems (AFRS) improved human operator performance above baseline levels in face-matching tasks, the accuracy of the combined human–AFRS effort consistently remained lower than that achieved by AFRS alone [11]. Therefore, it is of great significance to effectively solve the above problems and improve the accuracy and reliability of face recognition and forensic human image identification for achieving more secure and reliable identity recognition applications.
Face image restoration methods can effectively address the above problems. Image restoration refers to the use of computer algorithms and techniques to repair defects, noise, stains, and other issues in images, making the images clearer, more visually appealing, and more complete [12,13,14]. The latest guidelines for image quality restoration state that facial details/textures should be visible and the face should be fully visible within the image [15]. Common image restoration techniques include contrast enhancement, denoising, inpainting, restoration, and reconstruction [16,17]. These techniques can be applied in various fields, such as digital image processing, medical image processing, and forensic identification. Unlike traditional image processing algorithms used in the existing literature, such as nonlocally centralized sparse representation [18] and natural image patch models [19], this paper uses emerging AI-driven face restoration methods. AI-driven face restoration differs significantly from traditional face enhancement algorithms in terms of appearance restoration effects. AI-driven face restoration technology refers to the use of artificial intelligence to repair missing or damaged facial components through computer vision algorithms [20]. By processing low-quality images, AI-driven image restoration techniques can significantly improve image quality, with good results for facial details and textures in the restored images. AI-driven image restoration techniques mainly include denoising, deblurring, super-resolution, and image inpainting [21,22], which eliminate the impact of deteriorating image quality through preprocessing and postprocessing, thereby enhancing the identity recognizability of facial appearance. For example, denoising techniques can eliminate noise in images, making facial features clearer [23,24,25]; deblurring techniques can correct image blurring caused by motion or defocus; super-resolution techniques can increase image resolution, making facial features more detailed [26,27,28]; and image inpainting techniques can restore occluded or damaged areas. Among these techniques, deep learning has become the core method of image restoration. In particular, convolutional neural networks (CNNs) and generative adversarial networks (GANs) perform especially well in image restoration tasks [29,30,31]. CNNs can automatically learn the hierarchical and semantic information in images through stacked convolutional layers, while GANs can generate restoration results close to reality through adversarial learning between generators and discriminators [32,33,34,35,36]. In addition, advanced deep learning techniques such as attention mechanisms and recurrent neural networks (RNNs) [32,37,38] have also been widely used in the field of image restoration. Currently, many AI-driven image restoration techniques have achieved significant results in the fields of face recognition and human identification. These methods not only provide a theoretical basis for exploring the quality of face images (the performance of face recognition systems on low-quality images) but also provide stronger technical support for human identification.
This paper studies the effects of state-of-the-art face image restoration techniques on the performance of face recognition systems and analyzes the advantages and limitations of face image restoration methods in the field of forensic human image identification. The study selects 100 face images from an open-source face image dataset and uses Gaussian blur processing to simulate the blurred face image quality encountered in actual human identification cases. Four different AI-driven face restoration methods, which currently lead in performance, are used to restore the blurred face images. The similarity quantization values between the restored face images and the original face images are calculated and compared with those of the blurred face images using face recognition systems, in order to evaluate the specific quantitative impact of restored face image quality on recognition performance. In addition, this study conducts experiments with different mainstream recognition systems to explore the differences in the recognition performance of restored face images across systems, aiming to comprehensively understand the relationship between face image restoration and face recognition systems in the field of human identification.
The main contributions are summarized as follows:
  • Improving image quality with AI-driven image restoration technology in the field of forensic human image identification.
  • Analyzing the quantitative effect of restored face image quality on the recognition performance of face recognition systems.
  • Exploring the differences in the recognition performance of restored face images across different recognition systems.

2. Materials and Methods

2.1. Experimental Material

To ensure reproducibility, the face data in this experiment were sourced from the SCUT-FBP5500 dataset [39,40,41]. This dataset contains 5500 samples with relatively high image resolution, large facial areas, and variations in lighting and facial expressions. In addition, the face images in the dataset are unprocessed and retain more of the original information, which is similar to the quality of sample face images in real forensic human image identification cases. We collected 50 male and 50 female face images from this database, each 350 × 350 pixels in size. Some of the face image samples are shown in Figure 1. As the image quality of the samples used in this study is good and the aim is to analyze the impact of different image restoration methods on face recognition systems, the influence of the original image quality on the experimental results can be disregarded.
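For illustration, the following Python sketch shows one way the 100-image sample could be assembled and standardized to 350 × 350 pixels; the folder names and random seed are assumptions made for the example, not details from the study.

```python
# Hypothetical sketch: sample 50 male and 50 female faces from local copies of
# the SCUT-FBP5500 images and standardize them to 350 x 350 pixels.
# The "male/", "female/", and "samples/" folder names are assumptions.
import random
from pathlib import Path

from PIL import Image

random.seed(42)  # fix the random sample so the selection is reproducible

def sample_faces(folder, n=50):
    """Randomly pick n JPEG face images from a folder."""
    files = sorted(Path(folder).glob("*.jpg"))
    return random.sample(files, n)

Path("samples").mkdir(exist_ok=True)
for path in sample_faces("male", 50) + sample_faces("female", 50):
    img = Image.open(path).convert("RGB").resize((350, 350))
    img.save(Path("samples") / path.name)
```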

2.2. Experimental Method

Four state-of-the-art face image restoration tools were used for image restoration: HitPaw Photo Enhancer [42], Topaz Photo AI [43], Swift Photo Repair software V5.0.2.3 [44], and a blurred portrait photo HD repair tool [45]. HitPaw Photo Enhancer can reduce image blur and enlarge images without quality loss, and this photo enhancement tool can effectively repair blurred photos. Topaz Photo AI can remove sensor noise while preserving details, and its image quality adjustment module includes four parts: noise removal, sharpening, facial restoration, and resolution enhancement. The Swift Photo Repair software helps users easily repair photos, remove blemishes, change colors, etc. The blurred portrait photo HD repair tool produces high clarity and a high degree of color restoration after repair, allowing users to see the character details in the photo. To perform accurate facial verification calculations on the samples, this paper compares the performance of three well-known facial recognition engines: Baidu [46], Ali Cloud [47], and ArcSoft [48]. In the facial recognition systems used, the similarity quantitative values of Baidu, Ali Cloud, and ArcSoft all range between 0 and 100; the higher the similarity value, the higher the facial comparison similarity.
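The three engines are commercial cloud services that return a 0–100 similarity score for a pair of face images. As a hedged, local stand-in for such a verification call (the engines' actual scoring rules are proprietary), the sketch below uses the open-source face_recognition library and maps its face distance onto an illustrative 0–100 scale; the file paths are hypothetical.

```python
# Illustrative stand-in for a face verification call: the study queries the
# Baidu, Ali Cloud, and ArcSoft engines, which return 0-100 similarity scores.
# Here the open-source face_recognition library is used instead, and the 0-100
# mapping from face distance is an assumption made for the example.
import face_recognition

def similarity_score(path_a, path_b):
    """Return a rough 0-100 similarity between the faces in two image files."""
    enc_a = face_recognition.face_encodings(face_recognition.load_image_file(path_a))
    enc_b = face_recognition.face_encodings(face_recognition.load_image_file(path_b))
    if not enc_a or not enc_b:
        raise ValueError("no face detected in one of the images")
    distance = face_recognition.face_distance([enc_a[0]], enc_b[0])[0]
    return max(0.0, 1.0 - distance) * 100.0

# Example: compare an original face with its blurred counterpart (hypothetical paths).
print(similarity_score("samples/0001.jpg", "blur3/0001.jpg"))
```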
In the experiment, the original image is compared with the blurred and restored images, respectively, and the effect of image restoration on the recognition performance of the facial recognition system is analyzed by comparing the mean similarity values after blurring and after restoration. In the data analysis, the independent-samples t-test is used to compare the differences in facial recognition performance between images of different degrees of blur and the original images; this method tests whether the difference between two sets of data is significant. The impact of restoration at different blur levels on the performance of facial recognition systems is studied with one-way ANOVA. Two-way ANOVA is used to study the joint impact of blur level and restoration method on the facial recognition system. The main steps of the study are shown in Figure 2.
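As a minimal sketch of the first of these tests, the snippet below runs an independent-samples t-test on two hypothetical arrays of similarity scores (one per blur level) using SciPy; the study itself does not specify the statistical software used.

```python
# Minimal sketch of the independent-samples t-test comparing similarity scores
# for the two blur levels within one recognition system. The single-column CSV
# files of 100 scores each are hypothetical placeholders for the collected data.
import numpy as np
from scipy import stats

mb1 = np.loadtxt("scores_baidu_blur3.csv")  # MB1(i), i = 1..100
mb2 = np.loadtxt("scores_baidu_blur5.csv")  # MB2(i), i = 1..100

t_stat, p_value = stats.ttest_ind(mb1, mb2)
print(f"mean difference = {mb1.mean() - mb2.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```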

3. Related Work

In the study, the original images are blurred with Gaussian blur at two different levels (labeled blur 3 and blur 5) to simulate the blurred face images encountered in real cases, with blur radius parameters of 3.0 pixels for blur 3 and 5.0 pixels for blur 5. These parameter settings simulate the quality of facial images under moderate and severe blur conditions. Specific examples are shown in Figure 3.
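The blur simulation itself is straightforward; a minimal sketch using Pillow's GaussianBlur filter with the two radii above is shown below (the folder names are assumptions).

```python
# Sketch of the blur simulation: degrade each sample with Gaussian blur at
# radius 3.0 ("blur 3") and 5.0 pixels ("blur 5"). Folder names are assumptions.
from pathlib import Path

from PIL import Image, ImageFilter

for radius, out_dir in [(3.0, Path("blur3")), (5.0, Path("blur5"))]:
    out_dir.mkdir(exist_ok=True)
    for path in Path("samples").glob("*.jpg"):
        blurred = Image.open(path).filter(ImageFilter.GaussianBlur(radius=radius))
        blurred.save(out_dir / path.name)
```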
The four restored images for the two types of blurred facial images mentioned above are shown in Figure 4 and Figure 5, respectively.
To carry out accurate face verification calculations for the experimental samples, the study conducted performance comparison and validation experiments on well-known face recognition engines, namely Baidu, Ali Cloud, and ArcSoft. Fifty facial images were selected from the sample database and subjected to Gaussian blur with a radius of 3.0 pixels. Subsequently, the processed facial images were individually subjected to face recognition using the Baidu, Ali Cloud, and ArcSoft face recognition engines. Finally, the recognition stability of each system was compared, and the experimental results are presented in Figure 6.
We used A(i) (i = 1, …, 100) to denote the original set of facial images. B1(i) and B2(i) (i = 1, …, 100) denote the images processed with Gaussian blur radii of 3.0 and 5.0 pixels, respectively. MB1(i) and MB2(i) (i = 1, …, 100) denote the similarity scores of B1(i) and B2(i), respectively, compared with A(i) in the face verification tasks. For the quantified similarity values between the images obtained by the four restoration methods and the original images, HitPaw Photo Enhancer is referred to as Restoration 1, Topaz Photo AI as Restoration 2, the Swift Photo Repair software image processing tool as Restoration 3, and the blurred portrait photo HD restoration tool as Restoration 4.
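To make the notation concrete, the sketch below assembles the per-image similarity scores into a tidy table, assuming the scores for each condition have already been exported to single-column CSV files (the file names are hypothetical); one row corresponds to one original image A(i).

```python
# Sketch of a tidy score table: one row per original image A(i), one column per
# condition (MB1, MB2, and the four restorations of the blur-3 images). The CSV
# file names are hypothetical placeholders for exported similarity scores.
import pandas as pd

conditions = {
    "MB1": "scores_blur3.csv",        # B1(i) vs. A(i)
    "MB2": "scores_blur5.csv",        # B2(i) vs. A(i)
    "RST1": "scores_blur3_rst1.csv",  # Restoration 1 of B1(i) vs. A(i)
    "RST2": "scores_blur3_rst2.csv",
    "RST3": "scores_blur3_rst3.csv",
    "RST4": "scores_blur3_rst4.csv",
}
scores = pd.DataFrame({name: pd.read_csv(path, header=None)[0]
                       for name, path in conditions.items()})
scores.index = [f"A({i})" for i in range(1, len(scores) + 1)]
print(scores.describe())
```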

4. Results

4.1. The Impact of Blurriness Level on the Recognition Performance of Face Recognition Systems

An independent-samples t-test was carried out to analyze the effect of blur level on the recognition performance of the three face recognition systems. The results are shown in Table 1.
The above results indicated a significant impact of blurriness level on the recognition performance of the face recognition systems (p = 0.000). In all face recognition systems, the mean similarity score of the B1 images is higher than that of the B2 images, with Ali Cloud and ArcSoft showing the largest differences, both with a mean difference of around 6.5. The analysis suggested that as the blurriness level increases, both the accuracy and precision of face recognition systems decrease. This is because blurring weakens the detailed information in the images, making it difficult for the face recognition systems to extract and analyze features, leading to a decrease in recognition accuracy.

4.2. The Influence of Restoration Methods on the Recognition Performance of Face Recognition Systems

Before studying the impact of different restoration methods on the recognition performance of face recognition systems, we first used one-way ANOVA to study the effects of restoration 1, restoration 2, restoration 3, and restoration 4 under blur 3 and blur 5, respectively. Based on the experimental results, we found that the similarity scores of the AI-restored images were lower than those obtained by comparing the blurred images directly with the original images. From Figure 6, we can observe that Ali Cloud has a more stable recognition accuracy. Therefore, we primarily focus on the experimental data from the Ali Cloud recognition system for further detailed analysis.

4.2.1. Comparison of Four Different Restoration Methods for Blurriness Level 3

Table 2 presents the comparison results of four different restoration methods for Blurriness Level 3.
The results of the variance homogeneity test indicated that the variances are homogeneous (F(3, 326) = 20.173, p = 0.103). The between-groups p-value was much less than 0.05, indicating significant differences among the four methods. Pairwise comparison tests between the restoration methods showed that the facial images of restoration 1 and restoration 2 obtained higher facial similarity values than those of restoration 3 and restoration 4. There is no significant difference between restoration 3 and restoration 4 (p = 0.374). The results suggest that restoration methods 1 and 2 are more effective than the other methods, indicating that the algorithms and technology employed in these methods restore the images better by capturing more details and features. As a result, they yield better facial matching results.
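For reference, a minimal sketch of this analysis is given below: Levene's test for variance homogeneity, a one-way ANOVA over the four restoration groups, and pairwise post-hoc comparisons. The paper does not name its post-hoc procedure or software, so Tukey's HSD via SciPy/statsmodels is shown as one common choice, with hypothetical score files.

```python
# Sketch of the one-way ANOVA and pairwise comparisons for the four restoration
# methods at blur level 3. Tukey's HSD is shown as one common post-hoc choice;
# the single-column CSV score files are hypothetical.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = [np.loadtxt(f"scores_blur3_rst{k}.csv") for k in range(1, 5)]

w_stat, w_p = stats.levene(*groups)    # variance homogeneity test
f_stat, f_p = stats.f_oneway(*groups)  # between-groups one-way ANOVA
print(f"Levene: W = {w_stat:.3f}, p = {w_p:.3f}")
print(f"ANOVA:  F = {f_stat:.3f}, p = {f_p:.4f}")

scores = np.concatenate(groups)
labels = np.repeat([f"RST{k}" for k in range(1, 5)], [len(g) for g in groups])
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```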

4.2.2. Comparison of Four Different Restoration Methods for Blurriness Level 5

Table 3 presents the comparison results of four different restoration methods for blurriness level 5.
The result of the homogeneity of variance test indicated that the variances were homogeneous (F(3, 373) = 20.792, p = 0.075). Similarly, the between-groups p-value showed significant differences among the four groups. Pairwise comparisons between the restoration methods revealed that, compared to restoration 4, restoration 2 obtained a higher quantitative value for face similarity comparison, while no significant difference was observed between restoration 1 and restoration 3 (p = 0.183). Significant differences were found among the remaining pairwise comparisons. By comparing Table 2 and Table 3, we can conclude that restoration 2 achieved the best image restoration effect. In practical applications, it is necessary to choose the most suitable image restoration method based on the specific circumstances and requirements.

4.3. The Impact of Blur Level and Restoration Method on the Recognition Performance of the Face Recognition System

Two-way ANOVA was used to compare the effects of blur level and restoration method on recognition performance. The results indicated that the main effects are statistically significant, meaning that both factors have a significant impact on recognition performance. Additionally, the p-value for the interaction effect between blur level and restoration method was less than the significance level (p < 0.05), indicating a significant interaction between these two factors. Next, we analyzed the data from each face recognition system to determine the specific differences between the groups.
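A minimal sketch of such a two-way ANOVA with an interaction term, using statsmodels on a long-format table of scores (the file and column names are hypothetical), is shown below; the actual software used in the study is not specified.

```python
# Sketch of the two-way ANOVA: main effects of blur level and restoration method
# plus their interaction. "similarity_long.csv" is a hypothetical long-format
# file with one similarity score per row and columns: score, blur, method.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.read_csv("similarity_long.csv")
model = ols("score ~ C(blur) * C(method)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II ANOVA table with p-values
```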

4.3.1. Performing Pairwise Comparisons between Blur Level and Restoration Method in Baidu

The results of the comparison test between blur level and restoration method are shown in Table 4.
Table 4 shows a significant difference in recognition performance between blur levels 3 and 5, with blur 3 outperforming blur 5. Higher levels of blur can lead to errors in feature extraction, as blurred areas may be misidentified as target features. At the same blur level, significant differences in performance were observed among the restoration methods, except between methods 3 and 4. However, the performance differences between restoration methods can be minor, complicating the detection of significant differences in experiments. Advanced image preprocessing methods can be adopted to ensure that algorithms more accurately identify and process clear features rather than mistakenly treating blur as significant features. This approach not only enhances the precision of recognition but also improves the system's robustness in handling low-quality images.

4.3.2. Performing Pairwise Comparisons between Blur Level and Restoration Method in Ali Cloud

The results of the comparison test between blur level and restoration method are shown in Table 5.
From the table, we can note that under the same restoration method, the blur level also showed a significant difference (p = 0.000). Under the same blur level, taking blur 5 as the reference, there were significant differences among all pairwise comparisons except between restoration methods 1 and 3. In particular, restoration 4 showed poorer recognition performance than the other three methods. Therefore, both blur level and restoration method have a significant impact on the recognition performance of the face recognition system.
In ArcSoft, the results were similar to those of Baidu and Ali Cloud. Based on these results, we can conclude that both the blur level and the restoration method had a significant impact on recognition performance, and their interactions jointly affected recognition performance. The experiment showed that the different restoration methods achieved high recognition accuracy at the various blur levels, but as the blur increases, the recognition accuracy gradually decreases.

5. Discussion

The data indicate that the blur level has a significant impact on recognition performance. As the blurriness of the image increased, the recognition performance of the system typically decreased. Specifically, blurring increases the noise in the image, making feature extraction more difficult and resulting in higher error rates for face recognition. In practice, to reduce the impact of blurring on facial recognition systems, measures can be taken such as using better lighting conditions, selecting proper camera positions, and using image processing techniques to improve image quality.
From the experimental results, it can be seen that the repaired images have not significantly improved facial recognition performance. Sometimes, the inadequacy or failure of the restoration techniques even led to a decrease in recognition performance. The effectiveness of image restoration methods varied significantly, with HitPaw Photo Enhancer and Topaz Photo AI demonstrating superior recognition accuracy. The findings suggest that these algorithms and techniques enhance image restoration by more accurately capturing intricate details and features. Nonetheless, the distinctions among various restoration methods are often subtle. Thus, in practical applications, it is essential to carefully choose the appropriate restoration method tailored to the specific data characteristics and application requirements. Moreover, it is vital to conduct a thorough validation and assessment of the recognition system’s outcomes to significantly reduce the risk of misinterpretations and errors. Finally, in practical applications, the selection of blur level and restoration method should be tailored to specific factors like application context and data quality.
In summary, both the blur level and the restoration method had a significant impact on recognition performance, and their interactions jointly affected recognition performance. Firstly, the degree of blurring typically correlates with the loss of detail in an image, which directly affects the effectiveness of subsequent restoration algorithms. For instance, in cases of severe blurring, even advanced restoration techniques may fail to accurately recover all details, thus reducing the accuracy of recognition algorithms. Secondly, the choice of restoration method is crucial for the ultimate recognition performance. Some methods may perform better with specific types of blur, such as motion blur or Gaussian blur, while others may be more suitable for dealing with low-light conditions or noise interference. Therefore, selecting a restoration method that best matches the type of blur is key to enhancing recognition performance. In the study, the different restoration methods achieved high recognition accuracy at the various blur levels, but as the degree of blur increased, the recognition accuracy gradually decreased. The experiment also showed the limits of the recognition performance of face identification. Although face recognition technology has made significant progress in recent years, the recognition accuracy and robustness of systems still need to be improved for complex scenes and varying image quality. In addition, in the process of repairing damaged or missing images, more refined methods are needed to balance the completion of lost features with maintaining the authenticity of the original features. It is necessary to choose the most suitable image restoration method based on the specific circumstances and needs. Rigorous validation and evaluation of the recognition system's results are crucial to minimize misunderstandings and errors.

6. Conclusions

Face recognition has become an important quantitative analysis technique in the field of portrait identification. However, the research findings indicate that AI-driven face image restoration has a limited impact on recognition performance. Face image restoration did not show a significant positive effect on recognition, but it undoubtedly improves the quality of face images and plays a positive role in portrait identification applications. Regarding the application scenarios of face recognition systems in portrait identification, further research and exploration are needed to understand the role of AI-driven face image restoration techniques in improving the quality of face images.

Author Contributions

Conceptualization, J.Z., M.Y. and S.L.; Methodology, J.Z. and M.Y.; Software, J.Z. and S.L.; Results, M.Y.; Discussion, J.Z., M.Y. and S.L.; Conclusions, J.Z. and M.Y.; Writing—Original Draft Preparation, M.Y. and S.L.; Writing—Review and Editing, J.Z. and M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shanghai Science and Technology Commission Project (21DZ2200100) and the Ministry of Finance, PR China (GY2024G-6, GY2021G-3).

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yan, C.; Meng, L.; Li, L.; Zhang, J.; Wang, Z.; Yin, J.; Zhang, J.; Sun, Y.; Zheng, B. Age-invariant face recognition by multi-feature fusion and decomposition with self-attention. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2022, 18, 1–18. [Google Scholar] [CrossRef]
  2. Abudarham, N.; Shkiller, L.; Yovel, G. Critical features for face recognition. Cognition 2019, 182, 73–83. [Google Scholar] [CrossRef] [PubMed]
  3. Zeng, J.; Zhu, H.; Shi, S.; Qiu, X. Face image quality quantitative assessment for forensic identification of human images. In Proceedings of the 2018 IEEE International Conference on Progress in Informatics and Computing (PIC), Suzhou, China, 14–16 December 2018; pp. 113–116. [Google Scholar]
  4. Dai, Y.; Sun, K.; Huang, W.; Zhang, D.; Dai, G. Attention-based hierarchical pyramid feature fusion structure for efficient face recognition. IET Image Process. 2023, 17, 2399–2409. [Google Scholar] [CrossRef]
  5. Miranda, G.E.; de Freitas, S.G.; de Abreu Maia, L.V.; Melani, R.F.H. An unusual method of forensic human identification: Use of selfie photographs. Forensic Sci. Int. 2016, 263, e14–e17. [Google Scholar] [CrossRef] [PubMed]
  6. Zeng, J.; Qiu, X.; Shi, S. Image processing effects on the deep face recognition system. Math. Biosci. Eng. 2021, 18, 1187–1200. [Google Scholar] [CrossRef] [PubMed]
  7. Phillips, P.J.; White, D.; O’Toole, A.; Norell, K. Human Factors in Forensic Face Identification. In Handbook of Biometrics for Forensic Science; Springer: Berlin, Germany, 2017. [Google Scholar]
  8. Valentine, T.; Davis, J.P. Forensic facial identification: A practical guide to best practice. In Forensic Facial Identification: Theory and Practice of Identification from Eyewitnesses, Composites and CCTV; Blackwell Pub: Beach Haven, NJ, USA, 2015; pp. 323–347. [Google Scholar]
  9. Velusamy, S.; Parihar, R.; Kini, R.; Rege, A. FabSoften: Face beautification via dynamic skin smoothing, guided feathering, and texture restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 530–531. [Google Scholar]
  10. Phillips, P.J.; Yates, A.N.; Hu, Y.; Hahn, C.A.; Noyes, E.; Jackson, K.; Cavazos, J.G.; Jeckeln, G.; Ranjan, R.; Sankaranarayanan, S. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proc. Natl. Acad. Sci. USA 2018, 115, 6171–6176. [Google Scholar] [CrossRef]
  11. Carragher, D.J.; Hancock, P.J. Simulated automated facial recognition systems as decision-aids in forensic face matching tasks. J. Exp. Psychol. Gen. 2023, 152, 1286. [Google Scholar] [CrossRef] [PubMed]
  12. Koo, J.H.; Cho, S.W.; Baek, N.R.; Park, K.R. Face and body-based human recognition by GAN-based blur restoration. Sensors 2020, 20, 5229. [Google Scholar] [CrossRef] [PubMed]
  13. Dagnes, N.; Vezzetti, E.; Marcolin, F.; Tornincasa, S. Occlusion detection and restoration techniques for 3D face recognition: A literature review. Mach. Vis. Appl. 2018, 29, 789–813. [Google Scholar] [CrossRef]
  14. He, M.; Zhang, J.; Shan, S.; Chen, X. Enhancing face recognition with self-supervised 3d reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 4062–4071. [Google Scholar]
  15. Merkle, J.; Rathgeb, C.; Tams, B.; Lou, D.-P.; Dörsch, A.; Drozdowski, P. State of the art of quality assessment of facial images. arXiv 2022, arXiv:2211.08030. [Google Scholar]
  16. Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief review of image denoising techniques. Vis. Comput. Ind. Biomed. Art 2019, 2, 1–12. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, Y.; Song, W.; Fortino, G.; Qi, L.-Z.; Zhang, W.; Liotta, A. An experimental-based review of image enhancement and image restoration methods for underwater imaging. IEEE Access 2019, 7, 140233–140251. [Google Scholar] [CrossRef]
  18. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2012, 22, 1620–1630. [Google Scholar] [CrossRef] [PubMed]
  19. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486. [Google Scholar]
  20. Mao, X.; Shen, C.; Yang, Y.-B. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar] [CrossRef]
  21. Ali, A.M.; Benjdira, B.; Koubaa, A.; El-Shafai, W.; Khan, Z.; Boulila, W. Vision transformers in image restoration: A survey. Sensors 2023, 23, 2385. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, Y.; Hu, Y.; Zhang, J. Panini-Net: GAN prior based degradation-aware feature interpolation for face restoration. Proc. AAAI Conf. Artif. Intell. 2022, 36, 2576–2584. [Google Scholar] [CrossRef]
  23. Xu, W.; Zhu, Q.; Qi, N.; Chen, D. Deep sparse representation based image restoration with denoising prior. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6530–6542. [Google Scholar] [CrossRef]
  24. Thakur, R.S.; Yadav, R.N.; Gupta, L. State-of-art analysis of image denoising methods using convolutional neural networks. IET Image Process. 2019, 13, 2367–2380. [Google Scholar] [CrossRef]
  25. Liu, J.; Huang, T.-Z.; Selesnick, I.W.; Lv, X.-G.; Chen, P.-Y. Image restoration using total variation with overlapping group sparsity. Inf. Sci. 2015, 295, 232–246. [Google Scholar] [CrossRef]
  26. Shen, H.; Peng, L.; Yue, L.; Yuan, Q.; Zhang, L. Adaptive norm selection for regularized image restoration and super-resolution. IEEE Trans. Cybern. 2015, 46, 1388–1399. [Google Scholar] [CrossRef]
  27. Yue, L.; Shen, H.; Li, J.; Yuan, Q.; Zhang, H.; Zhang, L. Image super-resolution: The techniques, applications, and future. Signal Process. 2016, 128, 389–408. [Google Scholar] [CrossRef]
  28. An, F.; Wang, J. Image super-resolution reconstruction algorithm based on significant network connection-collaborative migration structure. Digit. Signal Process. 2022, 127, 103566. [Google Scholar] [CrossRef]
  29. Chua, L.O.; Roska, T. The CNN paradigm. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1993, 40, 147–156. [Google Scholar] [CrossRef]
  30. Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Modi, K.; Ghayvat, H. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 2021, 10, 2470. [Google Scholar] [CrossRef]
  31. Zhang, F.; Cai, N.; Wu, J.; Cen, G.; Wang, H.; Chen, X. Image denoising method based on a deep convolution neural network. IET Image Process. 2018, 12, 485–493. [Google Scholar] [CrossRef]
  32. Yin, W.; Kann, K.; Yu, M.; Schütze, H. Comparative study of CNN and RNN for natural language processing. arXiv 2017, arXiv:1702.01923. [Google Scholar]
  33. Zhang, Y.; Gan, Z.; Fan, K.; Chen, Z.; Henao, R.; Shen, D.; Carin, L. Adversarial feature matching for text generation. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 4006–4015. [Google Scholar]
  34. Chen, J.; Chen, J.; Chao, H.; Yang, M. Image blind denoising with generative adversarial network based noise modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3155–3164. [Google Scholar]
  35. Zhu, J.; Yang, H.; Liu, N.; Kim, M.; Zhang, W.; Yang, M.-H. Online multi-object tracking with dual matching attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 366–382. [Google Scholar]
  36. Cao, Y.-J.; Jia, L.-L.; Chen, Y.-X.; Lin, N.; Yang, C.; Zhang, B.; Liu, Z.; Li, X.-X.; Dai, H.-H. Recent advances of generative adversarial networks in computer vision. IEEE Access 2018, 7, 14985–15006. [Google Scholar] [CrossRef]
  37. Ullah, I.; Mahmoud, Q.H. Design and development of RNN anomaly detection model for IoT networks. IEEE Access 2022, 10, 62722–62750. [Google Scholar] [CrossRef]
  38. Mikolov, T.; Kombrink, S.; Burget, L.; Černocký, J.; Khudanpur, S. Extensions of recurrent neural network language model. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 5528–5531. [Google Scholar]
  39. Liang, L.; Lin, L.; Jin, L.; Xie, D.; Li, M. SCUT-FBP5500: A diverse benchmark dataset for multi-paradigm facial beauty prediction. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 1598–1603. [Google Scholar]
  40. Bougourzi, F.; Dornaika, F.; Taleb-Ahmed, A. Deep learning based face beauty prediction via dynamic robust losses and ensemble regression. Knowl. Based Syst. 2022, 242, 108246. [Google Scholar] [CrossRef]
  41. Saeed, J.N.; Abdulazeez, A.M.; Ibrahim, D.A. FIAC-Net: Facial image attractiveness classification based on light deep convolutional neural network. In Proceedings of the 2022 Second International Conference on Computer Science, Engineering and Applications (ICCSEA), Gunupur, India, 8 September 2022; pp. 1–6. [Google Scholar]
  42. Hill, J. [Official] HitPaw Photo Enhancer—One Click AI Photo Quality Enhancer. Available online: https://www.hitpaw.com/photo-enhancer.html (accessed on 1 December 2023).
  43. Labs, T. Topaz Photo AI—Maximize Image Quality with AI. Available online: https://www.topazlabs.com/topaz-photo-ai (accessed on 1 December 2023).
  44. SMS-iT. Image Format Converter—Image Format Conversion—Quick Image Converter Online. Available online: https://www.xunjietupian.com (accessed on 1 December 2023).
  45. yangtao9009. GitHub—Yangxy/GPEN. Available online: https://github.com/yangxy/GPEN (accessed on 1 December 2023).
  46. Baidu. Facial Recognition Cloud Service. Available online: https://cloud.baidu.com/product/face (accessed on 1 December 2023).
  47. Aliyun. Human Face-Alibaba Cloud Visual Intelligence Open Platform. Available online: https://vision.aliyun.com/facebody (accessed on 1 December 2023).
  48. Co, A.T. ArcSoft Vision Open Platform. Available online: https://ai.arcsoft.com.cn/product/arcface.html (accessed on 1 December 2023).
Figure 1. Partial face image material.
Figure 2. The structure diagram of the experimental principle.
Figure 3. Facial images under blurry conditions: (a) blur 3; (b) blur 5.
Figure 4. Four restoration methods for facial images under blur 3 conditions. (a) represents restoration by HitPaw Photo Enhancer; (b) represents restoration by Topaz Photo AI; (c) represents restoration by Swift Photo Restoration Software; and (d) represents restoration by the Blurry Portrait Photo HD Restoration Tool.
Figure 5. Four restoration methods for facial images under blur 5 conditions. (a) represents restoration by HitPaw Photo Enhancer; (b) represents restoration by Topaz Photo AI; (c) represents restoration by Swift Photo Restoration Software; and (d) represents restoration by the Blurry Portrait Photo HD Restoration Tool.
Figure 6. Analysis of system identification stability.
Table 1. Comparison between MB1(i) and MB2(i) (i = 1, …, 100).

Recognition System | MB1 (n = 100), M ± SD | MB2 (n = 100), M ± SD | t | p
Baidu | 98.41 ± 0.52 | 95.32 ± 1.43 | 20.29 | 0.000
Ali Cloud | 96.28 ± 1.25 | 89.84 ± 2.04 | 26.93 | 0.000
ArcSoft | 99.82 ± 0.63 | 92.87 ± 3.64 | 18.84 | 0.000
Table 2. Comparison of four different restoration methods for blur 3.

I | J | Mean Difference (I–J) | Standard Error | Significance
RST 1 | RST 2 | −1.061 * | 0.323 | 0.001
RST 1 | RST 3 | 0.926 * | 0.323 | 0.004
RST 1 | RST 4 | 1.214 * | 0.323 | 0.000
RST 2 | RST 3 | 1.987 * | 0.323 | 0.000
RST 2 | RST 4 | 2.275 * | 0.323 | 0.000
RST 3 | RST 4 | 0.287 | 0.323 | 0.374
Note. * Indicates significance at the 0.05 level.
Table 3. Comparison of four different restoration methods for blur 5.

I | J | Mean Difference (I–J) | Standard Error | Significance
RST 1 | RST 2 | −2.685 * | 0.495 | 0.000
RST 1 | RST 3 | −0.660 | 0.495 | 0.183
RST 1 | RST 4 | 1.115 * | 0.495 | 0.025
RST 2 | RST 3 | 2.025 * | 0.495 | 0.000
RST 2 | RST 4 | 3.800 * | 0.495 | 0.000
RST 3 | RST 4 | 1.775 * | 0.495 | 0.000
Note. * Indicates significance at the 0.05 level.
Table 4. Test results of comparison between blur level and restoration method under Baidu.

Restoration Method | Blur Level B1 | Blur Level B2
RST 1 | 96.91 ± 1.00 bB | 85.64 ± 7.27 bA
RST 2 | 97.35 ± 0.86 cB | 89.07 ± 6.02 aA
RST 3 | 96.44 ± 1.24 aB | 85.93 ± 6.72 bA
RST 4 | 96.37 ± 2.4 aB | 83.36 ± 7.02 cA
Note. Different lowercase letters in the table indicate significant differences (p < 0.05) in recognition performance among different restoration methods under the same blur level factor. Different uppercase letters in the table indicate significant differences (p < 0.05) in recognition performance among different blur levels under the same restoration method factor.
Table 5. Test results of comparison between blur level and restoration method under Ali Cloud.

Restoration Method | Blur Level B1 | Blur Level B2
RST 1 | 93.49 ± 1.93 bB | 83.49 ± 3.88 bA
RST 2 | 94.55 ± 1.79 cB | 86.18 ± 3.09 cA
RST 3 | 92.56 ± 2.16 aB | 84.16 ± 2.99 bA
RST 4 | 92.28 ± 3.05 aB | 82.38 ± 3.94 aA
Note. Different lowercase letters in the table indicate significant differences (p < 0.05) in recognition performance among different restoration methods under the same blur level factor. Different uppercase letters in the table indicate significant differences (p < 0.05) in recognition performance among different blur levels under the same restoration method factor.