Article

Two Low-Level Feature Distributions Based No Reference Image Quality Assessment

School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 4975; https://doi.org/10.3390/app12104975
Submission received: 18 April 2022 / Revised: 8 May 2022 / Accepted: 12 May 2022 / Published: 14 May 2022

Abstract
No reference image quality assessment (NR IQA) aims to develop quantitative measures that automatically and accurately estimate perceptual image quality without any prior information about the reference image. In this paper, we introduce a two low-level feature distributions (TLLFD) based method for NR IQA. Different from deep learning methods, the proposed method characterizes image quality with the distributions of low-level features, so it has few parameters, a simple model, high efficiency, and strong robustness. First, the texture change of the distorted image is captured by the weighted histogram of a generalized local binary pattern. Second, the Weibull distribution of the gradient magnitude is fitted to represent the structural change of the distorted image. Furthermore, support vector regression is adopted to model the complex nonlinear relationship between the feature space and the quality measure. Finally, numerical tests are performed on the LIVE, CSIQ, MICT, and TID2008 standard databases for five distortion categories: JPEG2000 (JP2K), JPEG, White Noise (WN), Gaussian Blur (GB), and Fast Fading (FF). The experimental results indicate that the TLLFD method achieves superior performance and strong generalization for image quality prediction compared to state-of-the-art full-reference, no reference, and even deep learning IQA methods.

1. Introduction

With the rapid development of information and communication technology, end-users continuously demand a higher-quality visual experience [1]. The main goal of image quality assessment (IQA) is to design an objective image quality evaluation model that is consistent with human subjective visual perception. IQA methods generally fall into two categories: subjective assessment by humans and objective assessment by designed algorithms. Objective IQA indices can be further classified as full reference (FR), no-reference (NR), and reduced-reference (RR). In particular, NR IQA methods estimate image quality by computer simulation of the human vision system (HVS) using only the distorted image, without any access to the reference images [2,3]; therefore, NR IQA is the more meaningful setting in image processing and applications.
Nowadays, deep learning has become one of the most attractive fields in artificial intelligence and machine learning. Among deep models, the convolutional neural network (CNN), a special multilayer perceptron, is probably the most popular. With the development of deep learning, CNN has not only produced a large number of variant models, but has also achieved great success in various applications, particularly in tasks involving visual, image, and natural language information [4]. Although deep neural networks are powerful in certain tasks, they have some apparent deficiencies. First, it is well known that a large amount of training data is usually required. Second, deep neural networks are complicated black-box models, which makes their structure difficult to analyze theoretically, and powerful computational facilities are usually required during training. More importantly, the learning performance of deep neural networks depends heavily on careful tuning of a huge number of hyper-parameters. Recently, a series of networks and corresponding combination methods have been proposed to improve training speed and theoretical analysis. In [5], Chen et al. proposed an incremental learning system in the form of a flat network without the need for a deep architecture, named the Broad Learning System (BLS). BLS has been demonstrated to achieve high learning accuracy with much faster learning speed. In [6], Zhou et al. proposed the multi-grained cascade forest (gcForest) method, which generates a deep forest ensemble with a cascade structure that enables gcForest to perform representation learning. gcForest has far fewer hyper-parameters than deep neural networks and is quite robust to hyper-parameter settings.
A deep network feeds the original image directly into the learning algorithm, so there is no need to extract hand-crafted features in advance. However, it often depends on a large amount of training data and GPU resources, and considerable time and effort must be spent on tuning parameters and network structure, as in [7,8]. In fact, deep neural networks are not the best choice for some tasks under certain conditions. By contrast, classical machine learning methods require designing a set of features to describe the input information, such as the scale invariant feature transform (SIFT), local binary patterns (LBP), and the histogram of oriented gradients (HOG), and then feed these features into shallow classifiers that are simple and efficient, such as the support vector machine (SVM), random forest (RF), and logistic regression (LR).
A good feature description can effectively improve the performance of a pattern recognition system. For images, features can be divided into low-level features and high-level semantic features. Low-level features mainly include color, texture, edge, and shape; high-level semantic features require identifying and interpreting objects. How to extract effective features is very important in IQA, as in many other vision tasks. Due to the limitations of computer recognition and image understanding technology, it is not yet possible to accurately analyze and interpret the semantic features of images. Therefore, researchers often adopt stable low-level features that are easy to extract by machine, so that the characteristics of human visual perception are well reflected and the objective IQA scores are consistent with subjective evaluation.

2. Related Work

This section introduces NR IQA methods based on low-level features and the methods closely related to our work. The LBP feature proposed by Ojala et al. [9,10,11] is an effective local texture descriptor, which encodes the relative intensity values between the central pixel and its surrounding pixels, and has been successfully applied in texture classification, face recognition, image retrieval, and other fields. Compared with other local texture descriptors, LBP captures local texture features simply and quickly. Only in the past decade has LBP been widely used in NR IQA, where it has become an active research topic. A novel NR IQA method based on structure and luminance information was proposed in [12], which extracts LBP to reflect the structural features of the distorted image and the distribution of normalized luminance values to express the brightness characteristics. In [13], Dai et al. used the LBP operator to extract structural information from the gradient map and the contrast-normalized map, respectively. In [14], Yue et al. proposed an NR image blurriness assessment method based on the LBP histogram feature. In [15], Zhang et al. used a Laplacian of Gaussian filter to decompose the image into multi-scale sub-images, and explored the weighted LBP histogram as the quality-aware feature fed into a support vector regression (SVR) system to obtain the quality score.
LBP has been studied extensively and deeply in many fields, but this kind of feature extraction is sensitive to image noise, and the recognition performance is susceptible to the environment, which limits its ability to describe local structure. These issues still need to be addressed in both theory and algorithms.
Wang et al. [16] proposed the structural similarity (SSIM) method, a milestone in IQA research. The core idea of SSIM is that changes in local image structure can effectively reflect the degradation of image quality. A disadvantage of SSIM is that, for severely distorted or unstructured distorted images, its predictions are inconsistent with subjective evaluation. To address this problem, Zhang et al. proposed the feature similarity (FSIM) index, which replaces the statistical features in SSIM with two low-level features, phase congruency and gradient magnitude [17]. Liu et al. [18] implemented a gradient similarity method that combines gradient features and pixel difference features, emphasizing that the gradient can effectively capture structural and contrast changes. In [19], Xue et al. proposed the gradient magnitude similarity deviation (GMSD) method, which uses the gradient as the feature and the standard deviation, instead of the previously used mean, as the pooling strategy. It is well known that the image gradient is sensitive to image distortion and can characterize the degree of quality degradation of different local structures in the distorted image. Liu et al. [20] considered that the gradient direction feature plays an important role in image quality evaluation; they used relative gradient direction and relative gradient magnitude features to evaluate image quality with an AdaBoosting back-propagation neural network.
In [12,13,14,15], the authors used LBP for feature extraction and quality evaluation. LBP has good local descriptive power but poor global performance. Since the gradient can reflect the overall structural information of the image, Refs. [18,19] both used different forms of gradient information for quality evaluation. Inspired by the above methods, we propose an NR IQA method based on two low-level feature distributions (TLLFD). The model extracts two complementary types of low-level feature distributions, which not only enhances computational efficiency, but also improves the description and discrimination of the image. The implementation of TLLFD includes three steps: (1) using the generalized LBP to obtain the sign-based and magnitude-based difference features, and then analyzing the histograms of the two features to describe the texture change of the image; (2) fitting the probability distribution of the gradient magnitude and using the distribution parameters to describe the structural changes of the distorted image; (3) establishing a nonlinear regression model by SVR. The effectiveness of the method is verified by a large number of comparative experiments against different methods on four standard IQA databases.

3. TLLFD for NR IQA

In this section, the local normalization coefficient is first described as image preprocessing, and then the two low-level feature distributions are introduced. Finally, SVR is used as our nonlinear regression model.

3.1. Local Normalization

In image processing applications, given a distorted color image, the image is first transformed to gray scale:
$I(i,j) = 0.2989\,R(i,j) + 0.5870\,G(i,j) + 0.1140\,B(i,j)$
where $R(i,j)$, $G(i,j)$, and $B(i,j)$ represent the three color components of the color image. Then, using the same processing model as [21,22], $I(i,j)$ is normalized locally to obtain the mean subtracted contrast normalized (MSCN) coefficients of the image brightness, $\hat{I}(i,j)$:
$\hat{I}(i,j) = \dfrac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C}$
where $i$ and $j$ are the spatial indices of the image, $i = 1, 2, \ldots, M$, $j = 1, 2, \ldots, N$, and $M \times N$ is the size of the image. The local mean and standard deviation are computed as:
$\mu(i,j) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} \omega_{k,l}\, I_{k,l}(i,j), \qquad \sigma(i,j) = \sqrt{\sum_{k=-K}^{K} \sum_{l=-L}^{L} \omega_{k,l}\, \big(I_{k,l}(i,j) - \mu(i,j)\big)^2}$
where $\omega = \{\omega_{k,l} \mid k = -K, \ldots, K,\ l = -L, \ldots, L\}$ is a Gaussian window of size $(2K+1) \times (2L+1)$; we take $K = L = 3$. $\mu$ and $\sigma$ are the mean and standard deviation of the local image block. $C$ is a positive constant that prevents the denominator from reaching zero; we select $C = (\alpha L)^2$ with $L = 255$, where $\alpha$ is a small constant.
With the above local normalization as the image preprocessing step, the normalized coefficients exhibit statistical changes that can be analyzed for the distorted and test images. At the same time, quantifying these changes makes it possible to predict the distortion type affecting an image and its perceived quality.
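To make the preprocessing concrete, the following Python sketch computes the MSCN coefficients under the stated settings ($K = L = 3$, $C = (\alpha L)^2$ with $L = 255$); the Gaussian width of 7/6 and $\alpha = 10^{-3}$ are illustrative assumptions not fixed by the paper.

```python
# Minimal MSCN preprocessing sketch; sigma=7/6 and alpha=1e-3 are assumptions.
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(radius=3, sigma=7/6):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # weights omega_{k,l}, normalized to sum to 1

def mscn(rgb, alpha=1e-3):
    # Gray-scale conversion (first formula of Section 3.1); rgb in [0, 255]
    I = 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]
    w = gaussian_kernel(radius=3)  # (2K+1) x (2L+1) = 7 x 7 window
    mu = convolve(I, w, mode='nearest')  # local mean mu(i, j)
    sigma = np.sqrt(convolve((I - mu) ** 2, w, mode='nearest'))  # local std
    C = (alpha * 255) ** 2  # stabilizing constant C = (alpha * L)^2
    return (I - mu) / (sigma + C)  # MSCN coefficients I_hat(i, j)
```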

3.2. Low-Level Feature Distribution

3.2.1. Local Binary Pattern

The traditional rotation invariant uniform local binary pattern (LBP) operator [9,10,11] can be defined as:
$LBP_{P,R}^{riu2} = \begin{cases} \sum_{i=0}^{P-1} S(g_i - g_c), & U(LBP_{P,R}) \le 2 \\ P+1, & \text{otherwise} \end{cases}$

$U(LBP_{P,R}) = \left| S(g_{P-1} - g_c) - S(g_0 - g_c) \right| + \sum_{i=1}^{P-1} \left| S(g_i - g_c) - S(g_{i-1} - g_c) \right|$

$S(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$

where the superscript "riu2" denotes the rotation invariant "uniform" patterns, whose number $U$ of transitions from 0 to 1 or from 1 to 0 is less than or equal to 2. $R$ is the neighborhood radius, $g_c$ is the gray value of the central pixel $(x_c, y_c)$, $P$ is the number of neighborhood pixels around the central pixel, and $g_i$ is the gray value of neighborhood pixel $i$, $i = 0, 1, \ldots, P-1$.
The rotation invariant uniform LBP pattern eventually generates only $P+2$ texture feature dimensions, comprising $P+1$ uniform patterns and 1 non-uniform pattern, so the dimension is significantly lower than that of the traditional LBP. In practice, the rotation invariant uniform LBP method still has limitations with respect to scale and image noise. Different from the traditional LBP, we extract LBP features at different scales from the MSCN-preprocessed image of Section 3.1, which is more expressive and discriminative. To address the sensitivity of LBP to image noise, Guo et al. [23] investigated the completed LBP (CLBP), which analyzes the LBP algorithm from the perspective of the local difference sign-magnitude transform. The CLBP operator is applied at each position of the texture image to extract texture features, and is defined as [23]:
$CLBP_{P,R}^{riu2} = \begin{cases} \sum_{i=0}^{P-1} S(g_i - g_c, T), & U(CLBP_{P,R}) \le 2 \\ P+1, & \text{otherwise} \end{cases}$

$S(x, T) = \begin{cases} 1, & x \ge T \\ 0, & x < T \end{cases}$
where $T$ is a threshold parameter determined adaptively. If $T$ is large, CLBP tends to describe image texture that changes dramatically; conversely, if $T$ is small, CLBP tends to describe the fine details of the texture. When $T = 0$, the operator is denoted $CLBPS_{P,R}^{riu2}$ and is similar to $LBP_{P,R}^{riu2}$; when $T \ne 0$, it is denoted $CLBPM_{P,R}^{riu2}$. Here, $T$ is set to $\frac{1}{P} \sum_{i=0}^{P-1} |g_i - g_c|$.
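As a concrete illustration, the following sketch computes the $CLBPM$ codes with the adaptive threshold above; it is a plain, unoptimized implementation for $P = 8$, $R = 1$, and the exact neighborhood ordering is an assumption, since the paper does not specify it.

```python
# Hypothetical CLBP_M sketch for P = 8, R = 1; neighborhood order is assumed.
import numpy as np

def clbp_m_codes(img, R=1):
    # 8 neighbors at radius 1, listed clockwise around the center pixel
    offs = [(-R, -R), (-R, 0), (-R, R), (0, R), (R, R), (R, 0), (R, -R), (0, -R)]
    H, W = img.shape
    # |g_i - g_c| maps: np.roll aligns each neighbor with its center pixel
    diffs = np.stack([np.abs(np.roll(img, (-dy, -dx), axis=(0, 1)) - img)
                      for dy, dx in offs])
    T = diffs.mean(axis=0)              # adaptive threshold: mean of |g_i - g_c|
    bits = (diffs >= T).astype(int)     # S(g_i - g_c, T) for each neighbor
    # uniformity U: number of 0/1 transitions around the circular pattern
    trans = np.abs(np.diff(np.vstack([bits, bits[:1]]), axis=0)).sum(axis=0)
    codes = np.where(trans <= 2, bits.sum(axis=0), 9)  # riu2 code; P + 1 = 9 otherwise
    return codes[R:H - R, R:W - R]      # drop the wrap-around border
```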
After applying the CLBP operator, local $CLBP_S$ and $CLBP_M$ maps are obtained. Then, the global structural features are extracted from the $CLBP_S$ and $CLBP_M$ maps as the visibility-weighted histograms $H_{GTLBPS}$ and $H_{GTLBPM}$, which are given by:

$H_{GTLBPS}^{P,R,T}(k) = \sum_{i=1}^{M} \sum_{j=1}^{N} |\hat{I}(i,j)|\, f\big(CLBPS_{P,R}^{riu2}(i,j), k\big), \qquad H_{GTLBPM}^{P,R,T}(k) = \sum_{i=1}^{M} \sum_{j=1}^{N} |\hat{I}(i,j)|\, f\big(CLBPM_{P,R}^{riu2}(i,j), k\big)$

where

$f(x, y) = \begin{cases} 1, & x = y \\ 0, & \text{otherwise} \end{cases}$

and $k \in [0, K]$, with $K = 9$ the maximum value of the GTLBP model, $M \times N$ the image size, and $\hat{I}(i,j)$ the MSCN coefficients.
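The weighted histogram itself reduces to a weighted bin count over the code map. A hedged sketch for the sign component, substituting scikit-image's rotation-invariant uniform LBP for the $T = 0$ case of $CLBP_S$, is:

```python
# Visibility-weighted LBP histogram sketch; skimage's 'uniform' LBP stands in
# for the CLBP_S map (the T = 0 case), which is an assumption of this sketch.
import numpy as np
from skimage.feature import local_binary_pattern

def weighted_lbp_histogram(mscn_img, P=8, R=1):
    # 'uniform' yields riu2 codes in {0, ..., P + 1}: P + 2 = 10 bins for P = 8
    codes = local_binary_pattern(mscn_img, P, R, method='uniform')
    weights = np.abs(mscn_img)  # visibility weights |I_hat(i, j)|
    hist = np.bincount(codes.astype(int).ravel(),
                       weights=weights.ravel(),
                       minlength=P + 2)
    return hist / (hist.sum() + 1e-12)  # normalized (P + 2)-bin feature
```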
Although the LBP method is widely used in many fields, it still needs further research and improvement. Some researchers have begun to study multi-feature fusion, which combines LBP with other features more effectively. In general, features should complement each other across different types of image databases and application fields. Compared with LBP, the gradient is better at describing the edge information of an image.

3.2.2. Gradient

Edges often appear at positions where the content changes between target and background, and typically represent the contours of targets in images. Therefore, image edge extraction plays a key role in the processing of computer vision systems.
Gradient is usually calculated by convolving the image with a linear filter, such as the classic Prewitt [24] and Scharr [25] filters or other filters for specific tasks. The simplest Prewitt filter is used to calculate the gradient. With Prewitt gradient [24] operator, the partial derivatives G x ( x , y ) and G y ( x , y ) of the distorted image f ( x , y ) are calculated as follows:
$G_x(x,y) = \frac{1}{3} \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} * f(x,y), \qquad G_y(x,y) = \frac{1}{3} \begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix} * f(x,y)$
where the symbol “*” denotes a convolution operation and then the gradient magnitude G ( x , y ) of the image f ( x , y ) is computed as:
$G(x,y) = \sqrt{G_x(x,y)^2 + G_y(x,y)^2}$
Statistical information is an effective and robust way to characterize local features. For example, researchers tend to use probability distributions to fit wavelet coefficients, and histogram techniques to capture the distribution of LBP outputs. The Weibull probability density function can be written as:
$p(x) = \frac{\gamma}{\beta} \left( \frac{x}{\beta} \right)^{\gamma - 1} \exp\left( -\left( \frac{x}{\beta} \right)^{\gamma} \right)$
where $x$ is the image gradient magnitude, $\gamma > 0$ is the shape parameter, and $\beta > 0$ is the scale parameter.
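Putting the two pieces together, the following sketch computes the Prewitt gradient magnitude and fits the two-parameter Weibull distribution; fixing the location at zero in scipy's weibull_min is an assumption that makes its (shape, scale) pair correspond to ($\gamma$, $\beta$) above.

```python
# Gradient feature sketch: Prewitt magnitude plus a two-parameter Weibull fit.
import numpy as np
from scipy.ndimage import convolve
from scipy.stats import weibull_min

PREWITT_X = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]]) / 3.0
PREWITT_Y = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) / 3.0

def weibull_gradient_features(img):
    gx = convolve(img.astype(float), PREWITT_X, mode='nearest')  # G_x
    gy = convolve(img.astype(float), PREWITT_Y, mode='nearest')  # G_y
    gm = np.hypot(gx, gy).ravel()       # gradient magnitude G(x, y)
    gm = gm[gm > 0]                     # Weibull support is x > 0
    gamma, _, beta = weibull_min.fit(gm, floc=0)  # shape = gamma, scale = beta
    return gamma, beta
```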
Figure 1 shows a reference image from the LIVE database and its five distorted versions: JPEG2000 (JP2K) compression, JPEG, White Noise (WN), Gaussian Blur (GB), and Fast Fading (FF). Figure 2 shows the gradient amplitude distributions of the six images in Figure 1. Among them, the distribution of WN is the most uniform and the peak value of FF is the highest; this is because image quality degradation arises from distortion, and the gradient distribution is likewise affected by the amount of distortion. The scatter plot in Figure 3 shows the Weibull parameters of the six images in Figure 1. Along the vertical axis, the separation of FF, GB, and WN is obvious; along the horizontal axis, JPEG is clearly distinguished from the reference image. They differ because images with different distortion types can have drastically different parameters. This illustrates that exploiting the sensitivity of the shape and scale parameters of the Weibull distribution to describe different distortion types is effective. It can be observed from Figure 3 that the gradient magnitude of the distorted image follows a two-parameter Weibull distribution, and human brain responses are strongly correlated with Weibull image statistics in visual perception [26].

3.2.3. Nonlinear Regression Model

After feature extraction, a regression function is used to establish the complex nonlinear relationship between the feature space and the quality score. Cortes and Vapnik [27] proposed the SVM, which can be regarded as a special type of single-hidden-layer feed-forward network, i.e., a support vector network. SVM is a machine learning method based on statistical learning theory that transforms a low-dimensional original feature space into a high-dimensional feature space by using a kernel function, and it is generally noted for handling small-sample, non-linear, and high-dimensional data. In our implementation, the LIBSVM package [28] is used to implement the nonlinear regression model SVR with a radial basis function (RBF) kernel. Consider a set of training data $\{(x_1, y_1), (x_2, y_2), \ldots, (x_l, y_l)\}$, where $x_i \in \mathbb{R}^n$ is the extracted quality-aware feature vector and $y_i$ is the corresponding difference mean opinion score (DMOS). Given the regularization parameter $C$ and the deviation parameter $\varepsilon$, the standard form of SVR can be written as [29]:
$\min_{\omega, b, \xi, \xi^*} \ \frac{1}{2} \|\omega\|^2 + C \left( \sum_{i=1}^{l} \xi_i + \sum_{i=1}^{l} \xi_i^* \right)$

$\text{s.t.} \quad \begin{cases} \omega^T \phi(x_i) + b - y_i \le \varepsilon + \xi_i \\ y_i - \omega^T \phi(x_i) - b \le \varepsilon + \xi_i^* \\ \xi_i \ge 0, \ \xi_i^* \ge 0, \quad i = 1, 2, \ldots, l \end{cases}$
where $\xi_i$ and $\xi_i^*$ are slack variables, and $\omega$ and $b$ are the weights and bias, respectively. The parameters $C$ and $\varepsilon > 0$ are found by searching for the optimal values over the sets $(2^{-3}, 2^{-2}, \ldots, 2^{10})$ and $(2^{-10}, 2^{-9}, \ldots, 2^{-6})$.
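A hedged sketch of this training stage, substituting scikit-learn's SVR for the LIBSVM package used in the paper and following the power-of-two grids quoted above, is:

```python
# SVR training sketch with an RBF kernel and power-of-two grid search;
# scikit-learn's SVR is used here in place of the paper's LIBSVM.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

def train_quality_regressor(X, y):
    # X: (n_images, n_features) quality-aware features; y: DMOS values
    param_grid = {
        "C": 2.0 ** np.arange(-3, 11),         # 2^-3, ..., 2^10
        "epsilon": 2.0 ** np.arange(-10, -5),  # 2^-10, ..., 2^-6
    }
    search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    return search.best_estimator_
```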

3.3. TLLFD Flow Chart and Feature Comparison

In order to further explain the TLLFD method, Figure 4 shows the flow chart.
The pipeline proceeds as follows. First, the distorted image is downscaled to obtain a scale-reduced image. Second, the distorted image and the scale-reduced image are locally normalized, and the texture features are extracted from the normalized images. The gradient features are then statistically analyzed with the Weibull distribution, yielding two statistical parameters, e.g., $\gamma = 0.2796$ and $\beta = 0.9625$ for the example in Figure 4 ($\gamma$ is the shape parameter and $\beta$ is the scale parameter; the values in Figure 4 are only representative). Finally, the extracted features are pooled by SVR to obtain the quality score.
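The following end-to-end sketch strings together the helper functions sketched earlier in this section (mscn, weighted_lbp_histogram, clbp_m_codes, weibull_gradient_features, assumed to be in scope); the 0.5x downscaling step is an assumption consistent with the two scales of Table 3 (22 features per scale, 44 in total).

```python
# Hedged end-to-end TLLFD feature pipeline sketch (two scales, 44 dimensions).
import numpy as np
from skimage.transform import rescale

def tllfd_features(rgb):
    feats = []
    for scale in (1.0, 0.5):  # full resolution plus one downscaled copy
        img = rgb if scale == 1.0 else rescale(rgb, scale, channel_axis=-1)
        m = mscn(img)                               # Section 3.1 sketch
        hs = weighted_lbp_histogram(m)              # sign histogram, 10 bins
        cm = clbp_m_codes(m)                        # magnitude codes (Section 3.2.1)
        w = np.abs(m[1:-1, 1:-1]).ravel()           # matching visibility weights
        hm = np.bincount(cm.ravel(), weights=w, minlength=10)
        hm = hm / (hm.sum() + 1e-12)                # magnitude histogram, 10 bins
        gamma, beta = weibull_gradient_features(m)  # Section 3.2.2 sketch
        feats.extend(np.concatenate([hs, hm, [gamma, beta]]))
    return np.asarray(feats)                        # 2 x (10 + 10 + 2) = 44 dims
```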
To further compare the features, the proposed method is contrasted with four classical methods (NIQE [22], BIQI [30], DIIVINE [31], BRISQUE [21]) in Table 1.

4. Experiments

In this section, the experimental setup is described, including IQA databases, evaluation criteria, and extract feature dimension. Then, TLLFD is compared with classic and state-of-the-art NR IQA models.

4.1. Experimental Setups

4.1.1. IQA Databases

TLLFD and state-of-the-art NR IQA models are compared on four standard IQA databases: LIVE [32], TID2008 [33], CSIQ [34], and MICT [35]. Their basic information is listed in Table 2. For the CSIQ and TID2008 databases, we only consider the four common distortion types (JP2K, JPEG, WN, GB). In addition, we exclude the 25th (synthetic) reference image and its distorted versions from the TID2008 database.

4.1.2. Evaluation Criteria

The consistency between objective evaluation and the HVS is mainly assessed in two respects. The first is accuracy: the results of the objective quality evaluation method should differ little from subjective judgments. The second is monotonicity: the subjective ranking of image quality should be consistent with the ranking produced by the objective method. The following four evaluation indices are used [36]: the Spearman rank order correlation coefficient (SROCC) and the Kendall rank order correlation coefficient (KROCC) measure prediction monotonicity, while the Pearson linear correlation coefficient (PLCC) and the root mean squared error (RMSE), calculated after the suggested monotonic logistic mapping, measure prediction accuracy.
For the calculation of PLCC and RMSE, regression analysis is used to provide a nonlinear mapping between the objective scores and the subjective mean opinion scores (MOS). For the nonlinear regression, the following mapping function suggested by Sheikh et al. [37] is used.
$f(x) = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + e^{\beta_2 (x - \beta_3)}} \right) + \beta_4 x + \beta_5$
where $x$ denotes the original objective score, and $\beta_i$ ($i = 1, 2, \ldots, 5$) are regression model parameters to be fitted. A good objective evaluation algorithm has higher SROCC, KROCC, and PLCC, and lower RMSE.
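For completeness, a sketch of the four criteria, fitting the five-parameter logistic mapping above before computing PLCC and RMSE, is given below; the initial parameter guesses are illustrative assumptions.

```python
# Evaluation criteria sketch: SROCC/KROCC directly, PLCC/RMSE after the
# five-parameter logistic mapping; the p0 initial guesses are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import kendalltau, pearsonr, spearmanr

def logistic5(x, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def iqa_criteria(obj, mos):
    obj, mos = np.asarray(obj, float), np.asarray(mos, float)
    srocc = spearmanr(obj, mos).correlation
    krocc = kendalltau(obj, mos).correlation
    p0 = [np.max(mos), 1.0, np.mean(obj), 1.0, np.mean(mos)]
    popt, _ = curve_fit(logistic5, obj, mos, p0=p0, maxfev=10000)
    mapped = logistic5(obj, *popt)
    plcc = pearsonr(mapped, mos)[0]
    rmse = float(np.sqrt(np.mean((mapped - mos) ** 2)))
    return srocc, krocc, plcc, rmse
```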

4.1.3. Feature Dimension

For the GTLBP calculation, the number of neighbors $P$ is 8 and the neighborhood radius $R$ is 1. Different SROCC, KROCC, PLCC, and RMSE indices are obtained by extracting features at different scales on LIVE. It can be observed from Table 3 that extracting features at two scales works best, and that the performance of our method is relatively stable across scales. Thus, given a 512 × 512 distorted color image, the extracted features have 44 dimensions in total.

4.2. Experimental Results and Analysis

4.2.1. Performance on Individual Databases

In this section, the overall performance of the various IQA models is tested on each individual database. Each database is divided into a training set and a test set: a random 80% of the database constitutes the training set and the remaining 20% forms the test set. This split is repeated for 1000 cross-validation runs, and the median SROCC and PLCC values are reported in Table 4. The competing algorithms include four classic FR IQA models: PSNR, SSIM [16], FSIM [17], and VSI [38]; eight classic NR IQA models: NIQE [22], ILNIQE [39], BIQI [30], DIIVINE [31], BLIINDS2 [40], BRISQUE [21], GMLOG [41], and NFERM [42]; and five state-of-the-art deep learning models: Dip IQA [43], OG IQA [20], Deep IQA [7], MEON [8], and CNN [44].
For each criterion, the two best IQA metrics are highlighted in bold. The main observations are as follows. First, TLLFD is closest to the human subjective evaluation (DMOS) on all four databases. Second, TLLFD significantly outperforms PSNR and SSIM. Unfortunately, only the CNN results on the LIVE database and the Dip IQA and Deep IQA results on the CSIQ database are available. Third, compared with the other deep learning methods, TLLFD has better quality prediction performance. On LIVE, the SROCC and PLCC values of TLLFD reach 0.96; on TID2008, they are close to 0.94; on MICT, they are close to 0.92.

4.2.2. Performance on Individual Distortion Types

This section evaluates the performance of NR IQA models on individual distortion types. For the NR IQA models, 80% of the distorted images across the five distortion types are used for training, and 20% of the distorted images of a specific distortion type are used for testing. The SROCC comparison of the NR IQA models on the four benchmark databases is listed in Table 5; the two best NR IQA models for each distortion group are shown in boldface.
From the results on single distortion types in Table 5, the TLLFD method is better than most methods. For example, for the GB distortion type on the CSIQ database, TLLFD outperforms all methods; on the other databases it is not always the optimal value, but it is at least suboptimal. In terms of the overall weighted average, TLLFD achieves the best value. It should be noted that similar results are obtained for the KROCC, PLCC, and RMSE indicators; only the SROCC values are listed here. The last row of Table 5 lists the weighted average SROCC over all distortion types, where the weights are the number of images in each distortion group. The quality prediction accuracy of TLLFD is high for individual distortion types on LIVE, TID2008, CSIQ, and MICT. For the JPEG2000, JPEG, and FF distortions it performs slightly worse, while for the WN and GB distortions it outperforms all other NR IQA methods. In summary, there are three main reasons for the performance improvement. First, different distortion types produce different CLBP maps, which effectively measure the influence of distortion on image structure. Second, the global structural feature GTLBP is obtained by a weighted histogram, which is an effective descriptor of the effects of different distortion types. Third, JPEG2000, JPEG, and FF blur the image to different degrees; blurring removes image details, leading to lower performance on these types.

4.3. Ablation Experiment

4.3.1. Cross-Database Validation and Hypothesis Testing

To illustrate the generalization capability of the TLLFD method and to guard against over-fitting, cross-database validation experiments are carried out. For a fair comparison, in Table 6 all models are trained on the full LIVE database (779 images) and tested on TID2008, CSIQ, and MICT. In Table 7, the NR IQA models are trained on the CSIQ database (600 images) and tested on the three other databases. The two best NR IQA models are shown in bold.
Tables 6 and 7 show the cross-database results. Table 6 gives the results of training on LIVE and testing on the other databases; Table 7 gives the results of training on CSIQ. From Table 6, we find that the model trained on the LIVE database transfers well to the other databases, and the TLLFD method is superior to the comparison methods on all three. From Table 7, we find that the model trained on the CSIQ database transfers somewhat less well, although the overall results are still good: except that TLLFD is slightly worse than the NFERM method on the TID2008 database, it achieves better results than the other methods on all three databases.
Next, to further demonstrate the superiority of TLLFD, we assess statistical significance with two-sample t-tests between the SROCC values obtained by the competing NR IQA methods. The null hypothesis is that the mean correlation of the row is equal to the mean correlation of the column at the 95% confidence level; the alternative hypothesis is that the mean correlation of the row is greater or less than that of the column.
In Table 8, 1 or −1 indicates that our method is statistically superior or inferior to the comparison method, and 0 means it is statistically equivalent. As Table 8 shows, on the LIVE, TID2008, and MICT databases every entry is 1, which means TLLFD is better than all comparison methods. On the CSIQ database, the TLLFD method is statistically equivalent to the NFERM method and better than the other methods.

4.3.2. Performance Comparison between LIVEWC and CID2013 Databases

To distinguish from the traditional databases, this section compares performance on a real-distortion database and a contrast-distortion database: LIVEWC and CID2013. The experimental results are shown in Table 9, with the best-performing NR IQA results marked in bold. On the LIVEWC database, the SROCC value of the proposed method is slightly lower than that of the NFERM method, while its PLCC value is higher than those of the other NR IQA methods. On the CID2013 database, the SROCC and PLCC values are 0.7786 and 0.7987, respectively, which are clearly superior to the other methods. This indicates that the TLLFD method is highly competitive with other NR IQA methods on the LIVEWC and CID2013 databases.
The proposed TLLFD method extracts two groups of features: texture features and gradient features. To explore the contribution of each group to the final evaluation result, the performance of each group is evaluated separately on five databases. As Table 10 shows, on all five databases the contribution of the texture features is higher than that of the gradient features, but using either group alone performs worse than using both groups together. This indicates that both feature sets are necessary in the TLLFD method and that they are complementary to the overall evaluation performance.

4.4. Computational Complexity Analysis

In many practical applications it is desirable to estimate the quality of an input image online; therefore, computational complexity is also an important factor when evaluating an NR IQA model. The model complexity of the NR IQA models is shown in Figure 5. Our experiments run on an ASUS A45V laptop with an Intel Core (TM) i5-3210M CPU @ 2.50 GHz and 4 GB RAM, using MATLAB R2012a (7.14) under Windows. The 2D scatter plot shows the weighted average SROCC over the four standard databases versus the running time of the different methods for extracting the features of a 512 × 512 image.
As can be easily discerned from Figure 5, the computational complexity of TLLFD is significantly lower than those of NIQE, ILNIQE, DIIVINE, and BLINDS2, although higher than those of BIQI and GMLOG. The main reasons are as follows. First, BIQI has only two features and a short extraction time; however, its performance is the worst among all the competing models. Second, GMLOG extracts 40 statistical features combining gradient magnitude and Laplacian features, so it has a simple extraction process and high operational efficiency. Third, NIQE has 36 features and a higher computational complexity, leading to weaker quality prediction performance and slower running speed. Fourth, ILNIQE extracts five types of NSS features and uses them to learn a multivariate Gaussian model to predict image quality; it has many parameters, and its performance is less competitive. Fifth, DIIVINE has a feature dimension of up to 88, so the model has a long running time and low efficiency. Sixth, BLINDS2 uses a natural scene statistics model of discrete cosine transform coefficients; the model is complicated and requires a long running time. Overall, TLLFD has low computational complexity and high efficiency.

5. Conclusions

In this paper, we propose a novel NR IQA framework, namely TLLFD. First, locally normalized information is used as the image preprocessing step, which facilitates the statistical analysis of subsequent features. Then, two low-level feature distributions are extracted, and finally quality regression is carried out using a nonlinear regression method.
The feature extraction is carried out in two stages: in one stage, the weighted CLBP histogram coefficients are taken as the image texture feature, and in the other, the parameters of the Weibull distribution fitted to the gradient map are used as the image gradient feature.
The experimental results verify that the TLLFD method is competitive with state-of-the-art NR IQA methods and even with state-of-the-art FR IQA methods. Compared with deep learning based methods, TLLFD also achieves superior performance and strong generalization.
The disadvantage of this method is that some experimental parameters are not chosen adaptively, and a deep understanding of the method requires knowledge of classical features.
Follow-up work will further study the basic features of the image and analyze them with the method proposed in this paper, so that combinations of classical features can remain competitive with deep learning methods and yield results more consistent with the true MOS.

Author Contributions

Conceptualization, H.F., G.L. and X.Y.; Data curation, H.F., G.L., X.Y., L.W. and L.Y.; Formal analysis, G.L.; Funding acquisition, L.Y.; Investigation, X.Y. and L.W.; Methodology, H.F., G.L. and X.Y.; Project administration, L.W. and L.Y.; Resources, G.L. and L.Y.; Supervision, G.L.; Validation, H.F. and L.W.; Writing—original draft, H.F. and L.Y.; Writing—review & editing, G.L. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62061040, 12162029, 61941111, 61906102), in part by the Key research and development programs of Ningxia (Grant No. 2019BEG03056), and in part by the Natural Science Foundation of Ningxia (Grant No. 2021AAC03039).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The MATLAB source code of TLLFD is available online at https://github.com/Yazhen1/TLLFD (accessed on 11 May 2022).

Acknowledgments

All individuals included in this section have consented to the acknowledgement.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

NR: No Reference
TLLFD: Two Low-Level Feature Distributions
JP2K: JPEG2000
WN: White Noise
GB: Gaussian Blur
FF: Fast Fading
IQA: Image Quality Assessment
FR: Full Reference
RR: Reduced Reference
HVS: Human Vision System
CNN: Convolutional Neural Network
BLS: Broad Learning System
SIFT: Scale Invariant Feature Transform
LBP: Local Binary Patterns
HOG: Histogram of Oriented Gradient
SVM: Support Vector Machine
RF: Random Forest
LR: Logistic Regression
SVR: Support Vector Regression
SSIM: Structural SIMilarity
FSIM: Feature SIMilarity
GMSD: Gradient Magnitude Similarity Deviation
CLBP: Completed LBP
DMOS: Difference Mean Opinion Score
SROCC: Spearman Rank Order Correlation Coefficient
KROCC: Kendall Rank Order Correlation Coefficient
PLCC: Pearson Linear Correlation Coefficient
RMSE: Root Mean Squared Error
MOS: Mean Opinion Scores
NIQE: Natural Image Quality Evaluator
ILNIQE: Integrated Local NIQE
BIQI: Blind Image Quality Indices
DIIVINE: Distortion Identification-based Image Verity and INtegrity Evaluation
BLIINDS2: Blind Image Integrity Notator Using DCT Statistics
BRISQUE: Blind/Referenceless Image Spatial QUality Evaluator
GMLOG: Gradient Magnitude Map and the Laplacian Of Gaussian
NFERM: NR Free Energy-Based Robust Metric
Dip IQA: Discriminable Image Pairs Image Quality Assessment
OG IQA: Oriented Gradients Image Quality Assessment
Deep IQA: Deep Image Quality Assessment
MEON: Multi-task End-to-End Optimized Deep Neural Network
PSNR: Peak Signal to Noise Ratio

References

1. Seshadrinathan, K.; Bovik, A.C. Automatic prediction of perceptual quality of multimedia signals: A survey. Multimed. Tools Appl. 2011, 51, 163–186.
2. Jiang, G.Y.; Huang, D.J.; Wang, X.; Yu, M. Overview on image quality assessment methods. J. Electron. Inf. Technol. 2010, 32, 219–226.
3. Li, L.; Zhou, Y.; Lin, W.; Wu, J.; Zhang, X.; Chen, B. No-reference quality assessment of deblocked images. Neurocomputing 2016, 177, 572–584.
4. Chang, L.; Deng, X.M.; Zhou, M.Q.; Wu, Z.K.; Yuan, Y.; Yang, S.; Hui, A.H. Convolutional neural networks in image understanding. Acta Autom. Sin. 2016, 42, 1300–1312.
5. Chen, C.P.; Liu, Z. Broad learning system: An effective and efficient incremental learning system without the need for deep architecture. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 10–24.
6. Zhou, Z.H.; Feng, J. Deep forest: Towards an alternative to deep neural networks. arXiv 2017, arXiv:1702.08835.
7. Bosse, S.; Maniry, D.; Müller, K.-R.; Wiegand, T.; Samek, W. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Trans. Image Process. 2018, 27, 206–219.
8. Ma, K.; Liu, W.; Zhang, K.; Duanmu, Z.; Wang, Z.; Zuo, W. End-to-end blind image quality assessment using deep neural networks. IEEE Trans. Image Process. 2018, 27, 1202–1213.
9. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59.
10. Pietikäinen, M.; Ojala, T.; Xu, Z. Rotation-invariant texture classification using feature distributions. Pattern Recognit. 2000, 33, 43–52.
11. Ojala, T.; Pietikäinen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987.
12. Li, Q.H.; Lin, W.; Xu, J.; Fang, Y. Blind image quality assessment using statistical structural and luminance features. IEEE Trans. Multimed. 2016, 18, 2457–2469.
13. Dai, T.; Gu, K.; Niu, L.; Zhang, Y.B.; Lu, W.; Xia, S.T. Referenceless quality metric of multiply-distorted images based on structural degradation. Neurocomputing 2018, 290, 185–195.
14. Yue, G.; Hou, C.; Gu, K.; Ling, N. No reference image blurriness assessment with local binary patterns. J. Vis. Commun. Image Represent. 2017, 49, 382–391.
15. Zhang, M.; Muramatsu, C.; Zhou, X.; Hara, T.; Fujita, H. Blind image quality assessment using the joint statistics of generalized local binary pattern. IEEE Signal Process. Lett. 2015, 22, 207–210.
16. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
17. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
18. Liu, A.; Lin, W.; Narwaria, M. Image quality assessment based on gradient similarity. IEEE Trans. Image Process. 2012, 21, 1500–1512.
19. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2014, 23, 684–695.
20. Liu, L.; Hua, Y.; Zhao, Q.; Huang, H.; Bovik, A.C. Blind image quality assessment by relative gradient statistics and adaboosting neural network. Signal Process. Image Commun. 2016, 40, 1–15.
21. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
22. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a completely blind image quality analyzer. IEEE Signal Process. Lett. 2012, 20, 209–212.
23. Guo, Z.; Zhang, L.; Zhang, D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans. Image Process. 2010, 19, 1657–1663.
24. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis, and Machine Vision; Cengage Learning: Stamford, CT, USA, 2008.
25. Jähne, B.; Haussecker, H.; Geissler, P. Handbook of Computer Vision and Applications; Academic Press: New York, NY, USA, 1999.
26. Scholte, H.S.; Ghebreab, S.; Waldorp, L.; Smeulders, A.W.; Lamme, V.A. Brain responses strongly correlate with Weibull image statistics when processing natural images. J. Vis. 2009, 9, 29.
27. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
28. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27.
29. Schölkopf, B.; Smola, A.J.; Bach, F. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002.
30. Moorthy, A.K.; Bovik, A.C. A two-step framework for constructing blind image quality indices. IEEE Signal Process. Lett. 2010, 17, 513–516.
31. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process. 2011, 20, 3350–3364.
32. Sheikh, H.R.; Wang, Z.; Bovik, A.C.; Cormack, L.K. Image and Video Quality Assessment Research at LIVE. Available online: http://live.ece.utexas.edu/research/quality/ (accessed on 10 December 2018).
33. Ponomarenko, N.; Lukin, V.; Zelensky, A.; Egiazarian, K.; Carli, M.; Battisti, F. TID2008: A database for evaluation of full-reference visual quality assessment metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. Available online: http://www.ponomarenko.info/tid2008.htm (accessed on 20 December 2021).
34. Larson, E.C.; Chandler, D.M. Categorical Image Quality (CSIQ) Database. Available online: http://vision.okstate.edu/csiq (accessed on 10 December 2018).
35. Horita, Y.; Shibata, K.; Kawayoke, Y. MICT Image Quality Assessment Database. Available online: http://mict.eng.utoyama.ac.jp/mictdb.html (accessed on 15 December 2021).
36. VQEG. Final Report from the Video Quality Experts Group on the Validation of Objective Models of Video Quality Assessment. 2000. Available online: http://www.vqeg.org (accessed on 10 December 2018).
37. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 2006, 15, 3440–3451.
38. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281.
39. Zhang, L.; Zhang, L.; Bovik, A.C. A feature-enriched completely blind image quality evaluator. IEEE Trans. Image Process. 2015, 24, 2579–2591.
40. Saad, M.A.; Bovik, A.C.; Charrier, C. Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 2012, 21, 3339–3352.
41. Xue, W.; Mou, X.; Zhang, L.; Bovik, A.C.; Feng, X. Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Trans. Image Process. 2014, 23, 4850–4862.
42. Gu, K.; Zhai, G.; Yang, X.; Zhang, W. Using free energy principle for blind image quality assessment. IEEE Trans. Multimed. 2015, 17, 50–63.
43. Ma, K.; Liu, W.; Liu, T.; Wang, Z.; Tao, D. dipIQ: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Trans. Image Process. 2017, 26, 3951–3964.
44. Kang, L.; Ye, P.; Li, Y.; Doermann, D. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1733–1740.
Figure 1. A reference image and its five distortion images in LIVE. (a) Reference image; (b) JP2K image; (c) JPEG image; (d) WN image; (e) GB image; (f) FF image.
Figure 2. Gradient amplitude distributions of different distorted images.
Figure 3. Weibull parameter scatter plots of different distorted images.
Figure 4. Flowchart of TLLFD NR IQA.
Figure 5. SROCC (weighted) versus running time (log scale) of different methods.
Table 1. Feature extraction and regression analysis of different methods.

Method | Feature Extraction | Regression Method
NIQE | The normalized information is fitted with a generalized Gaussian distribution to obtain statistical features | MVG
BIQI | Wavelet decomposition is used for feature extraction, and statistical features are obtained by fitting a generalized Gaussian distribution | SVR
DIIVINE | Pyramid wavelet decomposition is used for feature extraction, and a generalized Gaussian distribution is fitted to obtain statistical features | SVR
BRISQUE | (1) The MSCN coefficient histogram is fitted with a generalized Gaussian distribution; (2) the four directional histograms of MSCN coefficients are fitted with an asymmetric generalized Gaussian distribution | SVR
TLLFD | (1) The input image is normalized and preprocessed, and the weighted CLBP histogram is extracted; (2) the gradient features are fitted by a Weibull distribution to obtain statistical features | SVR
Table 2. Benchmark databases for IQA performance validation.

Database | Source Images | Distortion Types | Distorted Images | Number of Subjects | Subjective Scores
LIVE | 29 | 5 | 779 | 161 | 0–100
TID2008 | 25 | 17 | 1700 | 838 | 0–9
CSIQ | 30 | 6 | 866 | 35 | 0–1
MICT | 14 | 2 | 168 | 16 | 1–5
Table 3. Image scale influence on quality assessment (LIVE, 779 images).

Scales | SROCC | KROCC | PLCC | RMSE | Dimensions
512 × 512 | 0.949 | 0.811 | 0.951 | 8.358 | 22
512 × 512 + 256 × 256 | 0.958 | 0.828 | 0.960 | 7.577 | 44
512 × 512 + 256 × 256 + 128 × 128 | 0.956 | 0.825 | 0.959 | 7.739 | 66
Table 4. SROCC and PLCC comparison of 17 IQA models on four benchmark databases (the two best models per index are shown in bold).

IQA | LIVE (779) SROCC | LIVE PLCC | TID2008 (384) SROCC | TID2008 PLCC | CSIQ (600) SROCC | CSIQ PLCC | MICT (168) SROCC | MICT PLCC
PSNR | 0.885 | 0.883 | 0.879 | 0.860 | 0.929 | 0.854 | 0.659 | 0.679
SSIM | 0.948 | 0.945 | 0.910 | 0.941 | 0.924 | 0.933 | 0.915 | 0.920
FSIM | 0.963 | 0.960 | 0.953 | 0.929 | 0.924 | 0.912 | 0.906 | 0.800
VSI | 0.952 | 0.948 | 0.940 | 0.923 | 0.942 | 0.928 | 0.866 | 0.736
BIQI | 0.852 | 0.866 | 0.802 | 0.852 | 0.820 | 0.874 | 0.574 | 0.599
DIIVINE | 0.909 | 0.909 | 0.897 | 0.903 | 0.880 | 0.899 | 0.641 | 0.680
BLINDS2 | 0.931 | 0.937 | 0.866 | 0.906 | 0.869 | 0.912 | 0.851 | 0.875
CORNIA | 0.945 | 0.947 | 0.897 | 0.931 | 0.893 | 0.929 | 0.901 | 0.918
BRISQUE | 0.944 | 0.948 | 0.905 | 0.926 | 0.914 | 0.940 | 0.883 | 0.902
GMLOG | 0.950 | 0.954 | 0.937 | 0.945 | 0.923 | 0.951 | 0.885 | 0.888
NFERM | 0.944 | 0.949 | 0.940 | 0.951 | 0.929 | 0.953 | 0.887 | 0.892
OG IQA | 0.951 | 0.955 | 0.937 | 0.941 | 0.924 | 0.946 | - | -
CNN | 0.956 | 0.953 | - | - | - | - | - | -
Dip IQA | - | - | - | - | 0.930 | 0.949 | - | -
Deep IQA | - | - | - | - | 0.871 | 0.891 | - | -
MEON | - | - | - | - | 0.932 | 0.944 | - | -
TLLFD | 0.958 | 0.960 | 0.940 | 0.944 | 0.939 | 0.953 | 0.919 | 0.925
Table 5. SROCC comparisons of 11 NR IQA models on individual distortion types (the two best NR IQA models per group are shown in bold).

Database | Type | NIQE | ILNIQE | BIQI | DIIVINE | BLINDS2 | GMLOG | OG IQA | Dip IQA | Deep IQA | MEON | TLLFD
LIVE | JP2K | 0.924 | 0.900 | 0.824 | 0.906 | 0.931 | 0.926 | 0.937 | - | - | - | 0.950
LIVE | JPEG | 0.942 | 0.944 | 0.884 | 0.897 | 0.950 | 0.963 | 0.964 | - | - | - | 0.962
LIVE | WN | 0.972 | 0.979 | 0.965 | 0.982 | 0.946 | 0.983 | 0.987 | - | - | - | 0.987
LIVE | GB | 0.940 | 0.924 | 0.856 | 0.934 | 0.915 | 0.920 | 0.961 | - | - | - | 0.958
LIVE | FF | 0.862 | 0.844 | 0.743 | 0.854 | 0.875 | 0.901 | 0.899 | - | - | - | 0.907
TID2008 | JP2K | 0.902 | 0.937 | 0.855 | 0.895 | 0.902 | 0.935 | 0.926 | - | - | - | 0.935
TID2008 | JPEG | 0.887 | 0.887 | 0.887 | 0.887 | 0.887 | 0.884 | 0.934 | - | - | - | 0.931
TID2008 | WN | 0.817 | 0.883 | 0.756 | 0.840 | 0.685 | 0.891 | 0.907 | - | - | - | 0.904
TID2008 | GB | 0.847 | 0.860 | 0.899 | 0.890 | 0.857 | 0.886 | 0.881 | - | - | - | 0.929
CSIQ | JP2K | 0.911 | 0.796 | 0.818 | 0.871 | 0.879 | 0.918 | 0.917 | 0.944 | 0.907 | 0.898 | 0.925
CSIQ | JPEG | 0.913 | 0.828 | 0.859 | 0.883 | 0.895 | 0.917 | 0.933 | 0.936 | 0.929 | 0.948 | 0.942
CSIQ | WN | 0.925 | 0.924 | 0.850 | 0.901 | 0.868 | 0.946 | 0.941 | 0.904 | 0.933 | 0.951 | 0.949
CSIQ | GB | 0.883 | 0.905 | 0.844 | 0.895 | 0.883 | 0.915 | 0.907 | 0.932 | 0.890 | 0.918 | 0.936
MICT | JP2K | 0.836 | 0.868 | 0.660 | 0.851 | 0.894 | 0.887 | - | - | - | - | 0.988
MICT | JPEG | 0.906 | 0.868 | 0.690 | 0.755 | 0.873 | 0.941 | - | - | - | - | 0.986
Weighted average | - | 0.902 | 0.894 | 0.838 | 0.891 | 0.886 | 0.923 | 0.852 | 0.929 | 0.915 | 0.929 | 0.944
Table 6. SROCC comparison on cross-database validation when NR IQA models are trained on LIVE (the two best models are shown in bold).

Database | NIQE | ILNIQE | BIQI | DIIVINE | BLINDS2 | BRISQUE | GMLOG | NFERM | TLLFD
TID2008 | 0.795 | 0.870 | 0.813 | 0.867 | 0.864 | 0.894 | 0.911 | 0.914 | 0.915
MICT | 0.811 | 0.711 | 0.663 | 0.798 | 0.810 | 0.857 | 0.835 | 0.851 | 0.889
CSIQ | 0.869 | 0.880 | 0.785 | 0.877 | 0.902 | 0.890 | 0.899 | 0.907 | 0.928
Table 7. SROCC comparison on cross-database validation when NR IQA models are trained on CSIQ (the two best models are shown in bold).

Database | NIQE | ILNIQE | BIQI | DIIVINE | BLINDS2 | BRISQUE | GMLOG | NFERM | TLLFD
TID2008 | 0.795 | 0.870 | 0.796 | 0.852 | 0.775 | 0.889 | 0.865 | 0.904 | 0.893
MICT | 0.811 | 0.711 | 0.560 | 0.567 | 0.654 | 0.601 | 0.778 | 0.834 | 0.836
LIVE | 0.905 | 0.897 | 0.755 | 0.773 | 0.888 | 0.895 | 0.905 | 0.870 | 0.931
Table 8. Statistical significance t-test (1 (−1) indicates our method is better (worse) than the method in the column; 0 indicates our method is statistically equivalent to the method in the column).

t-Test | NIQE | ILNIQE | BIQI | DIIVINE | BLINDS2 | BRISQUE | GMLOG | NFERM
LIVE | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
CSIQ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0
TID2008 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
MICT | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Table 9. Performance comparison of the evaluation algorithms on two databases (the two best models are shown in bold).

IQA Method | LIVEWC SROCC | LIVEWC PLCC | CID2013 SROCC | CID2013 PLCC
BIQI | 0.5324 | 0.5479 | 0.6569 | 0.6757
DIIVINE | 0.5148 | 0.5283 | 0.4972 | 0.5124
BRISQUE | 0.5685 | 0.5864 | 0.4309 | 0.4783
NIQE | 0.4292 | 0.4848 | 0.6007 | 0.6136
BLIINDS2 | 0.4885 | 0.5064 | 0.4766 | 0.4987
NFERM | 0.6055 | 0.5908 | 0.6281 | 0.6322
ILNIQE | 0.5033 | 0.5127 | 0.4540 | 0.4634
TLLFD | 0.6053 | 0.6201 | 0.7786 | 0.7987
Table 10. Comparison of the contributions of the two feature groups (the best results are shown in bold).

Database | Index | Texture | Gradient | Texture + Gradient
LIVE | SROCC | 0.9343 | 0.7836 | 0.958
LIVE | PLCC | 0.9350 | 0.8063 | 0.960
CSIQ | SROCC | 0.9122 | 0.7230 | 0.939
CSIQ | PLCC | 0.9213 | 0.7765 | 0.953
TID2008 | SROCC | 0.9263 | 0.7234 | 0.940
TID2008 | PLCC | 0.9335 | 0.7568 | 0.944
LIVEWC | SROCC | 0.4976 | 0.4346 | 0.6053
LIVEWC | PLCC | 0.5601 | 0.5321 | 0.6201
CID2013 | SROCC | 0.7327 | 0.5234 | 0.7786
CID2013 | PLCC | 0.7561 | 0.5679 | 0.7987
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
