Key-Point-Descriptor-Based Image Quality Evaluation in Photogrammetry Workflows
Abstract
1. Introduction
- Proposing a method for image quality evaluation based on key point descriptors. The proposed method reuses the descriptors extracted for the purpose of feature matching, which is performed in the photogrammetric sparse reconstruction stage, thus minimizing the computational overhead of image quality evaluation;
- Presenting a dataset construction guide for the development of a descriptor-based image quality evaluation method;
- Performing comparative evaluation of seven descriptor types (SURF, SIFT, BRISK, ORB, KAZE, FREAK, SuperPoint) as feature sources used for image quality evaluation;
- Presenting comparative results of image quality evaluation methods using five publicly available image datasets. The results show that the proposed method is effective and outperforms simple sharpness-based, BRISQUE, NIQE, PIQE, and BIQAA blind image quality metrics in selecting better-quality images and reducing image redundancy for photogrammetric reconstructions.
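The contributions above rest on turning a variable-size set of key-point descriptors, already extracted for feature matching, into a fixed-length image-level feature vector that a learning-to-rank model (the LGBMRanker named in Section 2.4) can consume. A minimal NumPy sketch of one plausible aggregation; the statistics chosen here are illustrative assumptions, not the paper's exact feature set:

```python
import numpy as np

def image_features(descriptors: np.ndarray) -> np.ndarray:
    """Aggregate an (n_keypoints x dim) descriptor matrix into a fixed-length
    image-level feature vector: per-dimension mean and standard deviation,
    plus the key point count. Illustrative only; the paper's exact
    statistics are not reproduced here."""
    d = np.asarray(descriptors, dtype=np.float64)
    return np.concatenate([d.mean(axis=0), d.std(axis=0), [float(d.shape[0])]])

# e.g., 150 SURF64 descriptors collapse to one 129-dimensional vector
feats = image_features(np.random.rand(150, 64))
```

Vectors of this kind, one per image, are what a pairwise ranker would be trained on; the key property is that the vector length depends only on the descriptor dimensionality, not on how many key points an image yields.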
2. Materials and Methods
2.1. Motivation and Method for Finding Higher-Quality Images
2.1.1. Proposed Method
2.1.2. Baseline Methods
- Sharpness uses measured image sharpness as an estimate of image quality. The method applies a Laplacian of Gaussian (LoG) filter with a fixed filter size and standard deviation. The variance of the filtered image is then calculated; a higher variance indicates greater image sharpness, thus implying lower blurriness;
- BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator) is a no-reference image quality assessment model that operates in the spatial domain. It evaluates images based on natural scene statistics, specifically, the locally normalized luminance coefficients. This metric does not require a transformation to another coordinate frame, which differentiates it from many other no-reference IQA approaches. BRISQUE is noted for its simplicity and low computational complexity, making it suitable for real-time applications;
- NIQE (Natural Image Quality Evaluator) is a completely blind image quality assessor that uses a statistical model of image features perceived as natural, fitted with a multivariate Gaussian model. It requires no subjective training on human-rated distorted images, which distinguishes it from methods that rely on human-rated training sets. NIQE is designed to assess the naturalness of images, making it useful for a variety of applications without the need for comparison to a reference image;
- PIQE (Perception-Based Image Quality Evaluator) is a no-reference metric that quantifies the perceptual quality of compressed images by measuring the visibility of artifacts and the loss of natural scene statistics caused by compression. It is particularly useful for evaluating JPEG images as it specifically measures the blockiness and blurriness introduced by JPEG compression. PIQE computes a quality score based on how much an image deviates from these expected statistical parameters, providing a measure of perceptual degradation without reference to the original;
- BIQAA (Blind Image Quality Assessment through Anisotropy) is a no-reference image quality assessment metric that evaluates image quality by analyzing the anisotropic properties of natural images. It operates on the premise that high-quality images exhibit certain statistical regularities and directional patterns. BIQAA measures the deviation from these expected anisotropic properties to quantify the degree of distortion present in the image.
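The sharpness baseline described above (variance of a Laplacian-of-Gaussian response) can be sketched as follows; the sigma value is an assumed parameter, since the exact filter settings are not restated here:

```python
import numpy as np
from scipy import ndimage

def log_sharpness(img: np.ndarray, sigma: float = 1.0) -> float:
    """Variance of the Laplacian-of-Gaussian (LoG) response of an image.
    A higher variance indicates a sharper (less blurred) image.
    sigma is an assumed value, not the paper's exact setting."""
    response = ndimage.gaussian_laplace(np.asarray(img, dtype=np.float64), sigma=sigma)
    return float(response.var())

# sanity check: blurring an edge-rich image must lower the score
rng = np.random.default_rng(0)
sharp = np.kron(rng.integers(0, 2, (8, 8)).astype(np.float64) * 255.0, np.ones((16, 16)))
blurred = ndimage.gaussian_filter(sharp, sigma=3.0)
```

Because blur suppresses the high-frequency content that the LoG filter responds to, the score of the blurred copy is reliably lower than that of the original.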
2.2. Data Preparation
- Indoor Scene Recognition (ISR) (https://web.mit.edu/torralba/www/indoor.html) (accessed on 18 March 2024) [82]. The database contains 67 indoor categories and a total of 15,620 images. The number of images varies across categories, but there are at least 100 images per category. All images are in jpg format;
- DIV2K dataset (https://data.vision.ee.ethz.ch/cvl/DIV2K/) (accessed on 18 March 2024) [83,84]. A collection of diverse, high-quality 2K-resolution images;
- KonIQ-10k IQA Database (https://database.mmsp-kn.de/koniq-10k-database.html) (accessed on 18 March 2024) [85]. KonIQ-10k was, at the time of its publication, the largest IQA dataset, consisting of 10,073 quality-scored images. It is the first in-the-wild database aiming for ecological validity with regard to the authenticity of distortions, the diversity of content, and quality-related indicators;
- Common Objects in Context (COCO) val2017 (https://cocodataset.org/#download) (accessed on 18 March 2024) [86]. COCO is a large-scale object detection, segmentation, and captioning dataset;
- Flickr2K dataset (https://github.com/limbee/NTIRE2017) (accessed on 15 May 2024) [87]. A dataset of 2650 images collected by the SNU_CVLab team for the NTIRE 2017 super-resolution challenge.
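For training a ranker on images from the datasets above, each source image can be turned into a (better, worse) pair by synthetically degrading a copy. The blur-plus-noise degradation below is an illustrative stand-in; the paper's dataset construction guide may prescribe other degradation types and strengths:

```python
import numpy as np
from scipy import ndimage

def make_quality_pair(img, blur_sigma=2.0, noise_std=5.0, seed=0):
    """Create a (better, worse) training pair from one source image by
    degrading a copy with Gaussian blur plus additive noise. The degradation
    types and strengths here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    worse = ndimage.gaussian_filter(np.asarray(img, dtype=np.float64), sigma=blur_sigma)
    worse = np.clip(worse + rng.normal(0.0, noise_std, worse.shape), 0.0, 255.0)
    return np.asarray(img, dtype=np.float64), worse

# toy 128x128 test image with strong block edges
better, worse = make_quality_pair(np.kron(np.eye(8) * 255.0, np.ones((16, 16))))
```

Pairs generated this way carry a known quality ordering by construction, which is exactly the supervision a pairwise learning-to-rank objective needs.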
2.3. Evaluation of Methods for Finding Higher-Quality Images
2.4. Software Used
- MATLAB programming and numeric computing platform (version R2023b, The MathWorks, Inc., Natick, MA, USA) for the implementation of the proposed algorithm (except for training the LGBMRanker model and SuperPoint key point detection and description) and for data analysis and visualization;
- Python (version 3.11.8) (https://www.python.org) (accessed on 18 March 2024), an interpreted, high-level, general-purpose programming language with NumPy, SciPy, and LightGBM packages. Used for the machine learning applications (LGBMRanker tool) and SuperPoint key point detection and description;
- SuperPoint research project at Magic Leap (https://github.com/magicleap/SuperPointPretrainedNetwork, accessed on 15 May 2024) [61]. The repository contains the pretrained SuperPoint network. This is a Python implementation of the SuperPoint feature point detector and descriptor that was used in the descriptor comparison experiments.
3. Results and Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
SfM | Structure from motion
ML | Machine learning
CNN | Convolutional neural network
LoG | Laplacian of Gaussian
DoG | Difference of Gaussians
References
- Li, Z.; Zhang, Z.; Luo, S.; Cai, Y.; Guo, S. An Improved Matting-SfM Algorithm for 3D Reconstruction of Self-Rotating Objects. Mathematics 2022, 10, 2892. [Google Scholar] [CrossRef]
- Matuzevičius, D.; Serackis, A.; Navakauskas, D. Mathematical models of oversaturated protein spots. Elektron. Elektrotechnika 2007, 73, 63–68. [Google Scholar]
- Gabara, G.; Sawicki, P. CRBeDaSet: A benchmark dataset for high accuracy close range 3D object reconstruction. Remote. Sens. 2023, 15, 1116. [Google Scholar] [CrossRef]
- Matuzevičius, D. Synthetic Data Generation for the Development of 2D Gel Electrophoresis Protein Spot Models. Appl. Sci. 2022, 12, 4393. [Google Scholar] [CrossRef]
- Eldefrawy, M.; King, S.A.; Starek, M. Partial scene reconstruction for close range photogrammetry using deep learning pipeline for region masking. Remote. Sens. 2022, 14, 3199. [Google Scholar] [CrossRef]
- Caradonna, G.; Tarantino, E.; Scaioni, M.; Figorito, B. Multi-image 3D reconstruction: A photogrammetric and structure from motion comparative analysis. In Proceedings of the International Conference on Computational Science and Its Applications, Melbourne, VIC, Australia, 2–5 July 2018; pp. 305–316. [Google Scholar]
- Žuraulis, V.; Matuzevičius, D.; Serackis, A. A method for automatic image rectification and stitching for vehicle yaw marks trajectory estimation. Promet-Traffic Transp. 2016, 28, 23–30. [Google Scholar] [CrossRef]
- Sledevič, T.; Serackis, A.; Plonis, D. FPGA Implementation of a Convolutional Neural Network and Its Application for Pollen Detection upon Entrance to the Beehive. Agriculture 2022, 12, 1849. [Google Scholar] [CrossRef]
- Ban, K.; Jung, E.S. Ear shape categorization for ergonomic product design. Int. J. Ind. Ergon. 2020, 80, 102962. [Google Scholar] [CrossRef]
- Mistretta, F.; Sanna, G.; Stochino, F.; Vacca, G. Structure from motion point clouds for structural monitoring. Remote. Sens. 2019, 11, 1940. [Google Scholar] [CrossRef]
- Varna, D.; Abromavičius, V. A System for a Real-Time Electronic Component Detection and Classification on a Conveyor Belt. Appl. Sci. 2022, 12, 5608. [Google Scholar] [CrossRef]
- Vacca, G. Overview of open source software for close range photogrammetry. In Proceedings of the 2019 Free and Open Source Software for Geospatial, FOSS4G 2019, Bucharest, Romania, 26–30 August 2019; Volume 42, pp. 239–245. [Google Scholar]
- Pang, T.Y.; Lo, T.S.T.; Ellena, T.; Mustafa, H.; Babalija, J.; Subic, A. Fit, stability and comfort assessment of custom-fitted bicycle helmet inner liner designs, based on 3D anthropometric data. Appl. Ergon. 2018, 68, 240–248. [Google Scholar] [CrossRef] [PubMed]
- Matuzevicius, D.; Navakauskas, D. Feature selection for segmentation of 2-D electrophoresis gel images. In Proceedings of the 2008 11th International Biennial Baltic Electronics Conference, Tallinn, Estonia, 14 April 2008; pp. 341–344. [Google Scholar]
- Luhmann, T. Close range photogrammetry for industrial applications. ISPRS J. Photogramm. Remote. Sens. 2010, 65, 558–569. [Google Scholar] [CrossRef]
- Trojnacki, M.; Dąbek, P.; Jaroszek, P. Analysis of the Influence of the Geometrical Parameters of the Body Scanner on the Accuracy of Reconstruction of the Human Figure Using the Photogrammetry Technique. Sensors 2022, 22, 9181. [Google Scholar] [CrossRef]
- Barbero-García, I.; Pierdicca, R.; Paolanti, M.; Felicetti, A.; Lerma, J.L. Combining machine learning and close-range photogrammetry for infant’s head 3D measurement: A smartphone-based solution. Measurement 2021, 182, 109686. [Google Scholar] [CrossRef]
- Leipner, A.; Obertová, Z.; Wermuth, M.; Thali, M.; Ottiker, T.; Sieberth, T. 3D mug shot—3D head models from photogrammetry for forensic identification. Forensic Sci. Int. 2019, 300, 6–12. [Google Scholar] [CrossRef]
- Abromavičius, V.; Serackis, A. Eye and EEG activity markers for visual comfort level of images. Biocybern. Biomed. Eng. 2018, 38, 810–818. [Google Scholar] [CrossRef]
- Battistoni, G.; Cassi, D.; Magnifico, M.; Pedrazzi, G.; Di Blasio, M.; Vaienti, B.; Di Blasio, A. Does Head Orientation Influence 3D Facial Imaging? A Study on Accuracy and Precision of Stereophotogrammetric Acquisition. Int. J. Environ. Res. Public Health 2021, 18, 4276. [Google Scholar] [CrossRef] [PubMed]
- Abromavicius, V.; Serackis, A.; Katkevicius, A.; Plonis, D. Evaluation of EEG-based Complementary Features for Assessment of Visual Discomfort based on Stable Depth Perception Time. Radioengineering 2018, 27, 1138–1146. [Google Scholar] [CrossRef]
- Trujillo-Jiménez, M.A.; Navarro, P.; Pazos, B.; Morales, L.; Ramallo, V.; Paschetta, C.; De Azevedo, S.; Ruderman, A.; Pérez, O.; Delrieux, C.; et al. body2vec: 3D Point Cloud Reconstruction for Precise Anthropometry with Handheld Devices. J. Imaging 2020, 6, 94. [Google Scholar] [CrossRef]
- Zeraatkar, M.; Khalili, K. A Fast and Low-Cost Human Body 3D Scanner Using 100 Cameras. J. Imaging 2020, 6, 21. [Google Scholar] [CrossRef]
- Verwulgen, S.; Lacko, D.; Vleugels, J.; Vaes, K.; Danckaers, F.; De Bruyne, G.; Huysmans, T. A new data structure and workflow for using 3D anthropometry in the design of wearable products. Int. J. Ind. Ergon. 2018, 64, 108–117. [Google Scholar] [CrossRef]
- Barbero-García, I.; Lerma, J.L.; Mora-Navarro, G. Fully automatic smartphone-based photogrammetric 3D modelling of infant’s heads for cranial deformation analysis. ISPRS J. Photogramm. Remote. Sens. 2020, 166, 268–277. [Google Scholar] [CrossRef]
- Kuo, C.C.; Wang, M.J.; Lu, J.M. Developing sizing systems using 3D scanning head anthropometric data. Measurement 2020, 152, 107264. [Google Scholar] [CrossRef]
- Zhao, Y.; Mo, Y.; Sun, M.; Zhu, Y.; Yang, C. Comparison of three-dimensional reconstruction approaches for anthropometry in apparel design. J. Text. Inst. 2019, 110, 1635–1643. [Google Scholar] [CrossRef]
- Galantucci, L.M.; Lavecchia, F.; Percoco, G. 3D Face measurement and scanning using digital close range photogrammetry: Evaluation of different solutions and experimental approaches. In Proceedings of the International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 9–20 October 2010; p. 52. [Google Scholar]
- Özyeşil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. Acta Numer. 2017, 26, 305–364. [Google Scholar] [CrossRef]
- Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from motion photogrammetry in forestry: A review. Curr. For. Rep. 2019, 5, 155–168. [Google Scholar] [CrossRef]
- Wei, Y.M.; Kang, L.; Yang, B.; Wu, L.D. Applications of structure from motion: A survey. J. Zhejiang Univ. Sci. 2013, 14, 486–494. [Google Scholar] [CrossRef]
- Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
- Yao, G.; Huang, P.; Ai, H.; Zhang, C.; Zhang, J.; Zhang, C.; Wang, F. Matching wide-baseline stereo images with weak texture using the perspective invariant local feature transformer. J. Appl. Remote. Sens. 2022, 16, 036502. [Google Scholar] [CrossRef]
- Wei, L.; Huo, J. A Global fundamental matrix estimation method of planar motion based on inlier updating. Sensors 2022, 22, 4624. [Google Scholar] [CrossRef]
- Heymsfield, S.B.; Bourgeois, B.; Ng, B.K.; Sommer, M.J.; Li, X.; Shepherd, J.A. Digital anthropometry: A critical review. Eur. J. Clin. Nutr. 2018, 72, 680–687. [Google Scholar] [CrossRef]
- Calantropio, A.; Chiabrando, F.; Seymour, B.; Kovacs, E.; Lo, E.; Rissolo, D. Image pre-processing strategies for enhancing photogrammetric 3D reconstruction of underwater shipwreck datasets. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2020, 43, 941–948. [Google Scholar] [CrossRef]
- Ballabeni, A.; Apollonio, F.I.; Gaiani, M.; Remondino, F. Advances in image pre-processing to improve automated 3D reconstruction. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2015, 40, 315–323. [Google Scholar] [CrossRef]
- Gaiani, M.; Remondino, F.; Apollonio, F.I.; Ballabeni, A. An advanced pre-processing pipeline to improve automated photogrammetric reconstructions of architectural scenes. Remote. Sens. 2016, 8, 178. [Google Scholar] [CrossRef]
- Neyer, F.; Nocerino, E.; Grün, A. Image quality improvements in low-cost underwater photogrammetry. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2019, 42, 135–142. [Google Scholar] [CrossRef]
- Li, Z.; Yuan, X.; Lam, K.W. Effects of JPEG compression on the accuracy of photogrammetric point determination. Photogramm. Eng. Remote. Sens. 2002, 68, 847–853. [Google Scholar]
- O’Connor, J. Impact of Image Quality on SfM Photogrammetry: Colour, Compression and Noise. Ph.D. Thesis, Kingston University, London, UK, September 2018. [Google Scholar]
- Song, F. Analysis of Image Quality Evaluation Technology of Photogrammetry and Remote Sensing Fusion. In Innovative Computing: Proceedings of the 4th International Conference on Innovative Computing (IC 2021); Springer: Berlin/Heidelberg, Germany, 2022; pp. 141–146. [Google Scholar]
- Saponaro, M.; Capolupo, A.; Tarantino, E.; Fratino, U. Comparative analysis of different UAV-based photogrammetric processes to improve product accuracies. In Proceedings of the Computational Science and Its Applications–ICCSA 2019: 19th International Conference, Saint Petersburg, Russia, 1–4 July 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 225–238. [Google Scholar]
- Karantanellis, E.; Arav, R.; Dille, A.; Lippl, S.; Marsy, G.; Torresani, L.; Oude Elberink, S. Evaluating the quality of photogrammetric point-clouds in challenging geo-environments–a case study in an Alpine Valley. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2020, 43, 1099–1105. [Google Scholar] [CrossRef]
- Ludwig, M.; Runge, C.M.; Friess, N.; Koch, T.L.; Richter, S.; Seyfried, S.; Wraase, L.; Lobo, A.; Sebastià, M.T.; Reudenbach, C.; et al. Quality assessment of photogrammetric methods—A workflow for reproducible UAS orthomosaics. Remote. Sens. 2020, 12, 3831. [Google Scholar] [CrossRef]
- Welch, R. Photogrammetric image evaluation techniques. Photogrammetria 1975, 31, 161–190. [Google Scholar] [CrossRef]
- Barbero-García, I.; Cabrelles, M.; Lerma, J.L.; Marqués-Mateu, Á. Smartphone-based close-range photogrammetric assessment of spherical objects. Photogramm. Rec. 2018, 33, 283–299. [Google Scholar] [CrossRef]
- Fawzy, H.E.D. The accuracy of mobile phone camera instead of high resolution camera in digital close range photogrammetry. Int. J. Civ. Eng. Technol. 2015, 6, 76–85. [Google Scholar]
- Barbero-García, I.; Lerma, J.L.; Marqués-Mateu, Á.; Miranda, P. Low-cost smartphone-based photogrammetry for the analysis of cranial deformation in infants. World Neurosurg. 2017, 102, 545–554. [Google Scholar] [CrossRef]
- Lerma, J.L.; Barbero-García, I.; Marqués-Mateu, Á.; Miranda, P. Smartphone-based video for 3D modelling: Application to infant’s cranial deformation analysis. Measurement 2018, 116, 299–306. [Google Scholar] [CrossRef]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind image quality evaluation using perception based features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6. [Google Scholar]
- Lowe, G. SIFT: The scale invariant feature transform. Int. J. 2004, 2, 2. [Google Scholar]
- Lingua, A.; Marenchino, D.; Nex, F. Performance analysis of the SIFT operator for automatic feature extraction and matching in photogrammetric applications. Sensors 2009, 9, 3745–3766. [Google Scholar] [CrossRef]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary robust invariant scalable keypoints. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2548–2555. [Google Scholar]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar]
- Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE features. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 214–227. [Google Scholar]
- Alahi, A.; Ortiz, R.; Vandergheynst, P. FREAK: Fast retina keypoint. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 510–517. [Google Scholar]
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 224–236. [Google Scholar]
- Petrakis, G.; Partsinevelos, P. Keypoint Detection and Description through Deep Learning in Unstructured Environments. Robotics 2023, 12, 137. [Google Scholar] [CrossRef]
- Georgiou, T.; Liu, Y.; Chen, W.; Lew, M. A survey of traditional and deep learning-based feature descriptors for high dimensional data in computer vision. Int. J. Multimed. Inf. Retr. 2020, 9, 135–170. [Google Scholar] [CrossRef]
- Fan, Y.; Mao, S.; Li, M.; Kang, J.; Li, B. LMFD: Lightweight multi-feature descriptors for image stitching. Sci. Rep. 2023, 13, 21162. [Google Scholar] [CrossRef]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3149–3157. [Google Scholar]
- Eltner, A.; Sofia, G. Structure from motion photogrammetric technique. In Developments in Earth Surface Processes; Elsevier: Amsterdam, The Netherlands, 2020; Volume 23, pp. 1–24. [Google Scholar]
- Urban, S.; Weinmann, M. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds. ISPRS Ann. Photogramm. Remote. Sens. Spatial Inf. Sci. 2015, 2, 121–128. [Google Scholar] [CrossRef]
- Wu, S.; Oerlemans, A.; Bakker, E.M.; Lew, M.S. A comprehensive evaluation of local detectors and descriptors. Signal Process. Image Commun. 2017, 59, 150–167. [Google Scholar] [CrossRef]
- Krig, S. Interest point detector and feature descriptor survey. Comput. Vis. Metrics Textb. Ed. 2016, 187–246. [Google Scholar]
- Marmol, A.; Peynot, T.; Eriksson, A.; Jaiprakash, A.; Roberts, J.; Crawford, R. Evaluation of keypoint detectors and descriptors in arthroscopic images for feature-based matching applications. IEEE Robot. Autom. Lett. 2017, 2, 2135–2142. [Google Scholar] [CrossRef]
- Kelman, A.; Sofka, M.; Stewart, C.V. Keypoint descriptors for matching across multiple image modalities and non-linear intensity variations. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–7. [Google Scholar]
- Sharma, S.K.; Jain, K.; Shukla, A.K. A comparative analysis of feature detectors and descriptors for image stitching. Appl. Sci. 2023, 13, 6015. [Google Scholar] [CrossRef]
- Bojanić, D.; Bartol, K.; Pribanić, T.; Petković, T.; Donoso, Y.D.; Mas, J.S. On the comparison of classic and deep keypoint detector and descriptor methods. In Proceedings of the 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 23–25 September 2019; pp. 64–69. [Google Scholar]
- Işık, Ş. A comparative evaluation of well-known feature detectors and descriptors. Int. J. Appl. Math. Electron. Comput. 2014, 3, 1–6. [Google Scholar] [CrossRef]
- Santos, A.; Ortiz de Solórzano, C.; Vaquero, J.J.; Pena, J.M.; Malpica, N.; del Pozo, F. Evaluation of autofocus functions in molecular cytogenetic analysis. J. Microsc. 1997, 188, 264–272. [Google Scholar] [CrossRef]
- Matuzevičius, D.; Serackis, A. Three-Dimensional Human Head Reconstruction Using Smartphone-Based Close-Range Video Photogrammetry. Appl. Sci. 2021, 12, 229. [Google Scholar] [CrossRef]
- Tamulionis, M.; Sledevič, T.; Abromavičius, V.; Kurpytė-Lipnickė, D.; Navakauskas, D.; Serackis, A.; Matuzevičius, D. Finding the least motion-blurred image by reusing early features of object detection network. Appl. Sci. 2023, 13, 1264. [Google Scholar] [CrossRef]
- Mittal, A.; Soundararajan, R.; Muralidhar, G.S.; Bovik, A.C.; Ghosh, J. Blind image quality assessment without training on human opinion scores. In Proceedings of the Human Vision and Electronic Imaging XVIII, Burlingame, CA, USA, 4–8 February 2013; Volume 8651, pp. 177–183. [Google Scholar]
- Mittal, A.; Soundararajan, R.; Bovik, A. Prediction of image naturalness and quality. J. Vis. 2013, 13, 1056. [Google Scholar] [CrossRef]
- Chan, R.W.; Goldsmith, P.B. A psychovisually-based image quality evaluator for JPEG images. In Proceedings of the SMC 2000 Conference Proceedings, 2000 IEEE International Conference on Systems, Man and Cybernetics: Cybernetics Evolving to Systems, Humans, Organizations, and Their Complex Interactions, Anchorage, AK, USA, 27 September 2000; Volume 2, pp. 1541–1546. [Google Scholar]
- Gabarda, S.; Cristóbal, G. Blind image quality assessment through anisotropy. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2007, 24, B42–B51. [Google Scholar] [CrossRef] [PubMed]
- Quattoni, A.; Torralba, A. Recognizing indoor scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 413–420. [Google Scholar]
- Agustsson, E.; Timofte, R. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Ignatov, A.; Timofte, R.; Van Vu, T.; Minh Luu, T.; X Pham, T.; Van Nguyen, C.; Kim, Y.; Choi, J.S.; Kim, M.; Huang, J.; et al. PIRM challenge on perceptual image enhancement on smartphones: Report. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Hosu, V.; Lin, H.; Sziranyi, T.; Saupe, D. KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment. IEEE Trans. Image Process. 2020, 29, 4041–4056. [Google Scholar] [CrossRef] [PubMed]
- Lin, T.; Maire, M.; Belongie, S.J.; Bourdev, L.D.; Girshick, R.B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. arXiv 2014, arXiv:1405.0312. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
Descriptor | Dimensionality | Precision
---|---|---
SURF | 64/128 | single-precision float
SIFT | 128 | 8-bit unsigned integer (uint8)
BRISK | 64 | binary, stored in a uint8 container
ORB (Rotated BRIEF) | 32 | binary, stored in a uint8 container
KAZE | 64/128 | single-precision float
FREAK | 64 | binary, stored in a uint8 container
SuperPoint | 256 | single-precision float
Dataset | No. of Samples | Resolution
---|---|---
– For training: | |
ISR [82] | 3318 |
– For testing: | |
ISR | |
DIV2K [83,84] | 800 |
KonIQ-10k IQA [85] | |
COCO val2017 [86] | 4231 |
Flickr2K [87] | 2650 |
Results of selecting a better-quality image from a pair [% correct selections]:

Method | ISR (Test) | DIV2K | KonIQ-10k IQA | COCO | Flickr2K | Overall
---|---|---|---|---|---|---
1. Sharpness | 78.0 | 78.4 | 78.2 | 77.5 | 76.0 | 77.6
2. BRISQUE | 73.8 | 73.6 | 72.2 | 73.1 | 72.6 | 73.1
3. NIQE | 76.4 | 77.7 | 76.5 | 77.1 | 79.6 | 77.5
4. PIQE | 73.0 | 74.5 | 73.7 | 73.7 | 78.9 | 74.8
5. BIQAA | 65.8 | 64.8 | 65.1 | 63.2 | 60.5 | 63.9
6. SURF64-ranker | 84.1 | 83.5 | 84.7 | 84.8 | 82.4 | 83.9
7. SURF128-ranker | 82.9 | 82.9 | 83.1 | 83.1 | 81.5 | 82.7
8. SIFT-ranker | 86.0 | 85.3 | 84.4 | 87.3 | 85.1 | 85.6
9. BRISK-ranker | 79.1 | 79.3 | 78.3 | 81.2 | 76.3 | 78.8
10. ORB-ranker | 77.1 | 78.9 | 78.6 | 79.1 | 76.1 | 78.2
11. KAZE64-ranker | 85.1 | 85.6 | 85.6 | 85.5 | 83.4 | 85.0
12. KAZE128-ranker | 85.4 | 85.8 | 87.2 | 84.7 | 84.7 | 85.6
13. FREAK-ranker | 69.8 | 73.2 | 70.7 | 69.4 | 67.8 | 70.2
14. SuperPoint-ranker | 82.2 | 81.8 | 80.4 | 77.4 | 79.4 | 80.2
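The "% correct selections" figures in the table above can be reproduced for any scoring method by counting how often the method assigns the higher score to the known better-quality image of each pair. A small helper, assuming higher score means better quality (the paper's tie-handling rule is not restated here, so ties count as incorrect):

```python
def selection_accuracy(scores_better, scores_worse):
    """Percentage of pairs in which the higher score was assigned to the
    known better-quality image. Ties count as incorrect selections, which
    is an assumption, not the paper's stated rule."""
    pairs = list(zip(scores_better, scores_worse))
    correct = sum(1 for b, w in pairs if b > w)
    return 100.0 * correct / len(pairs)

# e.g., a method that ranks 2 of 3 pairs correctly
acc = selection_accuracy([2.0, 3.0, 1.0], [1.0, 1.0, 2.0])
```

Each table cell is this statistic computed over all evaluation pairs drawn from the corresponding dataset.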
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Matuzevičius, D.; Urbanavičius, V.; Miniotas, D.; Mikučionis, Š.; Laptik, R.; Ušinskas, A. Key-Point-Descriptor-Based Image Quality Evaluation in Photogrammetry Workflows. Electronics 2024, 13, 2112. https://doi.org/10.3390/electronics13112112