Image Quality

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (15 July 2018) | Viewed by 24033

Special Issue Editors


Guest Editor
Laboratory of Computational and Subjective Image Quality, Electrical and Electronic Engineering, Shizuoka University, Hamamatsu, Shizuoka 432-8561, Japan
Interests: image processing; image quality; human vision; image compression; quality assessment

Guest Editor
Graduate School of Science and Engineering for Research, University of Toyama, 3190 Gofuku, Toyama 930-8555, Japan
Interests: quality of experience; image and video quality assessment; image and video coding; visual aesthetics

Guest Editor
Department of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland
Interests: quality of experience; video streaming; mobile users; field study; subjective experiments; ecological validity

Special Issue Information

Dear Colleagues,

Image and video quality have become increasingly dominant themes in many areas of signal processing. Fundamental issues regarding visual appearance, diagnostic utility, and how various preferences, experiences, and tasks shape quality judgments impact nearly all applications that make use of images and video. Today's quality assessment (QA) research has a much broader reach than it did even ten years ago, with each emerging application raising the bar in terms of both required fundamental QA knowledge (e.g., psychophysical experiments and quality databases) and QA algorithm performance.

The objective of this Special Issue is to bring together and showcase recent research on this ever-broadening topic of image quality. We seek original contributions in image and video quality assessment (IQA/VQA), including, but not limited to, the following areas:

  • New databases
  • Unique psychophysical testing paradigms
  • Visually lossless techniques and experiments
  • Image and video models for consumer content evaluation
  • Reduced-reference and no-reference algorithms
  • Stereoscopic algorithms
  • Free-viewpoint algorithms
  • Opinion-score-unaware algorithms
  • New learning-based approaches
  • Techniques/analyses for practical/real-time applications, including:
      • coding/streaming/broadcast applications
      • screen-content media
      • multiply distorted media
      • mobile/low-power devices
      • AR/VR applications
      • HDR applications
      • in-camera processing
      • surveillance applications
      • super-resolution/enhancement/recognition applications
  • Algorithms for visual aesthetics

Prof. Damon M. Chandler
Prof. Yasuhiro Inazumi
Prof. Mikołaj Leszczuk
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image quality
  • video quality
  • quality assessment
  • visual psychophysics
  • visual appearance
  • reduced-reference quality assessment
  • no-reference quality assessment
  • quality database

Published Papers (5 papers)


Research

21 pages, 3582 KiB  
Article
Study of Subjective Data Integrity for Image Quality Data Sets with Consumer Camera Content
by Jakub Nawała, Margaret H. Pinson, Mikołaj Leszczuk and Lucjan Janowski
J. Imaging 2020, 6(3), 7; https://doi.org/10.3390/jimaging6030007 - 25 Feb 2020
Cited by 3 | Viewed by 4348
Abstract
We need data sets of images and subjective scores to develop robust no reference (or blind) visual quality metrics for consumer applications. These applications have many uncontrolled variables because the camera creates the original media and the impairment simultaneously. We do not fully understand how this impacts the integrity of our subjective data. We put forward two new data sets of images from consumer cameras. The first data set, CCRIQ2, uses a strict experiment design, more suitable for camera performance evaluation. The second data set, VIME1, uses a loose experiment design that resembles the behavior of consumer photographers. We gather subjective scores through a subjective experiment with 24 participants using the Absolute Category Rating method. We make these two new data sets available royalty-free on the Consumer Digital Video Library. We also present their integrity analysis (proposing one new approach) and explore the possibility of combining CCRIQ2 with its legacy counterpart. We conclude that the loose experiment design yields unreliable data, despite adhering to international recommendations. This suggests that the classical subjective study design may not be suitable for studies using consumer content. Finally, we show that Hoßfeld–Schatz–Egger α failed to detect important differences between the two data sets. Full article
(This article belongs to the Special Issue Image Quality)
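The Absolute Category Rating method used in the study above collects integer ratings on a five-point scale, which are then reduced to a mean opinion score (MOS) per image. A minimal sketch of that reduction, with invented ratings and image names for illustration:

```python
import math

# Toy ACR ratings (1 = bad ... 5 = excellent) from several
# participants for two hypothetical test images.
ratings = {
    "img_a": [4, 5, 4, 3, 4, 5, 4],
    "img_b": [2, 3, 2, 2, 1, 3, 2],
}

def mos_with_ci(scores, z=1.96):
    """Mean opinion score and a normal-approximation 95% CI half-width."""
    n = len(scores)
    mos = sum(scores) / n
    # Sample variance (n - 1 in the denominator).
    var = sum((s - mos) ** 2 for s in scores) / (n - 1)
    half_width = z * math.sqrt(var / n)
    return mos, half_width

for name, scores in ratings.items():
    mos, ci = mos_with_ci(scores)
    print(f"{name}: MOS = {mos:.2f} +/- {ci:.2f}")
```

The normal-approximation confidence interval is the simplest common choice; the full ACR procedure is specified in ITU-T Rec. P.910.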

18 pages, 2448 KiB  
Article
Multivariate Statistical Approach to Image Quality Tasks
by Praful Gupta, Christos G. Bampis, Jack L. Glover, Nicholas G. Paulter and Alan C. Bovik
J. Imaging 2018, 4(10), 117; https://doi.org/10.3390/jimaging4100117 - 12 Oct 2018
Cited by 3 | Viewed by 3895
Abstract
Many existing natural scene statistics-based no reference image quality assessment (NR IQA) algorithms employ univariate parametric distributions to capture the statistical inconsistencies of bandpass distorted image coefficients. Here, we propose a multivariate model of natural image coefficients expressed in the bandpass spatial domain that has the potential to capture higher order correlations that may be induced by the presence of distortions. We analyze how the parameters of the multivariate model are affected by different distortion types, and we show their ability to capture distortion-sensitive image quality information. We also demonstrate the violation of Gaussianity assumptions that occur when locally estimating the energies of distorted image coefficients. Thus, we propose a generalized Gaussian-based local contrast estimator as a way to implement non-linear local gain control, which facilitates the accurate modeling of both pristine and distorted images. We integrate the novel approach of generalized contrast normalization with multivariate modeling of bandpass image coefficients into a holistic NR IQA model, which we refer to as multivariate generalized contrast normalization (MVGCN). We demonstrate the improved performance of MVGCN on quality-relevant tasks on multiple imaging modalities, including visible light image quality prediction and task success prediction on distorted X-ray images. Full article
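For context on the univariate baseline that this paper generalizes: natural-scene-statistics NR IQA models such as BRISQUE compute mean-subtracted, contrast-normalized (MSCN) coefficients and fit a generalized Gaussian shape parameter by moment matching. The sketch below illustrates that standard baseline only, not the proposed MVGCN model; the smoothing sigma and the toy image are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(image, sigma=7 / 6, c=1e-3):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:
    a simple divisive gain control over local luminance."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    std = np.sqrt(np.maximum(var, 0))
    return (img - mu) / (std + c)

def fit_ggd_shape(coeffs):
    """Moment-matching estimate of the generalized Gaussian shape
    parameter (lookup over candidate shapes, BRISQUE-style)."""
    x = coeffs.ravel()
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    shapes = np.arange(0.2, 10.0, 0.001)
    # Theoretical |mean|^2 / variance ratio for each candidate shape.
    r = gamma(2 / shapes) ** 2 / (gamma(1 / shapes) * gamma(3 / shapes))
    return shapes[np.argmin((r - rho) ** 2)]

rng = np.random.default_rng(0)
img = rng.normal(128, 20, size=(64, 64))   # stand-in for a real image
print(f"estimated GGD shape: {fit_ggd_shape(mscn(img)):.2f}")
```

Distortions shift the fitted shape (and spread) away from the values seen on pristine images, which is what makes such parameters usable as quality features.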

41 pages, 21219 KiB  
Article
On the Application LBP Texture Descriptors and Its Variants for No-Reference Image Quality Assessment
by Pedro Garcia Freitas, Luísa Peixoto Da Eira, Samuel Soares Santos and Mylene Christine Queiroz de Farias
J. Imaging 2018, 4(10), 114; https://doi.org/10.3390/jimaging4100114 - 4 Oct 2018
Cited by 16 | Viewed by 7062
Abstract
Automatically assessing the quality of an image is a critical problem for a wide range of applications in the fields of computer vision and image processing. For example, many computer vision applications, such as biometric identification, content retrieval, and object recognition, rely on input images with a specific range of quality. Therefore, an effort has been made to develop image quality assessment (IQA) methods that are able to automatically estimate quality. Among the possible IQA approaches, No-Reference IQA (NR-IQA) methods are of fundamental interest, since they can be used in most real-time multimedia applications. NR-IQA methods are capable of assessing the quality of an image without using the reference (or pristine) image. In this paper, we investigate the use of texture descriptors in the design of NR-IQA methods. The premise is that visible impairments alter the statistics of texture descriptors, making it possible to estimate quality. To investigate whether this premise is valid, we analyze the use of a set of state-of-the-art Local Binary Patterns (LBP) texture descriptors in IQA methods. In particular, we present a comprehensive review with a detailed description of the considered methods. Additionally, we propose a framework for using texture descriptors in NR-IQA methods. Our experimental results indicate that, although not all texture descriptors are suitable for NR-IQA, many can be used for this purpose, achieving good accuracy with the advantage of low computational complexity. Full article
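The simplest descriptor in the family the paper surveys is the original 3x3 LBP: each pixel receives an 8-bit code by thresholding its eight neighbors at the center value, and the normalized histogram of codes serves as a feature vector for quality regression. A minimal sketch, with a random array standing in for a real image:

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 3x3 Local Binary Pattern codes: each interior pixel
    gets an 8-bit code from thresholding its neighbors at the
    center value."""
    img = img.astype(np.float64)
    c = img[1:-1, 1:-1]
    # Eight neighbor views, clockwise from the top-left corner.
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img):
    """256-bin normalized LBP histogram, usable as a quality feature."""
    codes = lbp_8neighbors(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(0)
feat = lbp_histogram(rng.integers(0, 256, size=(64, 64)))
print(feat.shape)  # (256,)
```

Real NR-IQA pipelines of this kind typically feed such histograms, often from rotation-invariant or multiscale LBP variants, into a trained regressor that maps features to quality scores.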

24 pages, 527 KiB  
Article
GPU Acceleration of the Most Apparent Distortion Image Quality Assessment Algorithm
by Joshua Holloway, Vignesh Kannan, Yi Zhang, Damon M. Chandler and Sohum Sohoni
J. Imaging 2018, 4(10), 111; https://doi.org/10.3390/jimaging4100111 - 25 Sep 2018
Cited by 2 | Viewed by 4186
Abstract
The primary function of multimedia systems is to seamlessly transform and display content to users while maintaining the perception of acceptable quality. For images and videos, perceptual quality assessment algorithms play an important role in determining what is acceptable quality and what is unacceptable from a human visual perspective. As modern image quality assessment (IQA) algorithms gain widespread adoption, it is important to achieve a balance between their computational efficiency and their quality prediction accuracy. One way to improve computational performance to meet real-time constraints is to use simplistic models of visual perception, but such an approach has a serious drawback in terms of poor-quality predictions and limited robustness to changing distortions and viewing conditions. In this paper, we investigate the advantages and potential bottlenecks of implementing a best-in-class IQA algorithm, Most Apparent Distortion, on graphics processing units (GPUs). Our results suggest that an understanding of the GPU and CPU architectures, combined with detailed knowledge of the IQA algorithm, can lead to non-trivial speedups without compromising prediction accuracy. A single-GPU and a multi-GPU implementation showed a 24× and a 33× speedup, respectively, over the baseline CPU implementation. A bottleneck analysis revealed the kernels with the highest runtimes, and a microarchitectural analysis illustrated the underlying reasons for the high runtimes of these kernels. Programs written with optimizations such as blocking that map well to CPU memory hierarchies do not map well to the GPU’s memory hierarchy. While compute unified device architecture (CUDA) is convenient to use and is powerful in facilitating general purpose GPU (GPGPU) programming, knowledge of how a program interacts with the underlying hardware is essential for understanding performance bottlenecks and resolving them. Full article

29 pages, 1422 KiB  
Article
Viewing Experience Model of First-Person Videos
by Biao Ma and Amy R. Reibman
J. Imaging 2018, 4(9), 106; https://doi.org/10.3390/jimaging4090106 - 31 Aug 2018
Viewed by 3496
Abstract
First-Person Videos (FPVs) are recorded using wearable cameras to share the recorder’s First-Person Experience (FPE). Ideally, the FPE is conveyed by the viewing experience of the FPV. However, raw FPVs are usually too shaky to watch, which ruins the viewing experience. To solve this problem, we improve the viewing experience of FPVs by modeling it as two parts: video stability and First-Person Motion Information (FPMI). Existing video stabilization techniques can improve the video stability but damage the FPMI. We propose a Viewing Experience (VE) score, which measures both the stability and the FPMI of a FPV by exploring the mechanism of human perception. This enables us to further develop a system that can stabilize FPVs while preserving their FPMI so that the viewing experience of FPVs is improved. Objective tests show that our measurement is robust under different kinds of noise, and our system has competitive performance relative to current stabilization techniques. Subjective tests show that (1) both our stability and FPMI measurements can correctly compare the corresponding attributes of an FPV across different versions of the same content, and (2) our video processing system can effectively improve the viewing experience of FPVs. Full article
