Article

Full-Reference Image Quality Assessment Based on Multi-Channel Visual Information Fusion

Benchi Jiang, Shilei Bian, Chenyang Shi and Lulu Wu
1 School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China
2 Industry Innovation Technology Research Co., Ltd., Anhui Polytechnic University, Wuhu 241000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8760; https://doi.org/10.3390/app13158760
Submission received: 27 May 2023 / Revised: 6 July 2023 / Accepted: 25 July 2023 / Published: 28 July 2023

Abstract: This study focuses on improving the alignment of objective image quality assessment (IQA) algorithms with human visual perception. Existing methodologies, predominantly those based on the Laplacian of Gaussian (LoG) filter, often neglect the impact of color channels on human visual perception. Consequently, we propose a full-reference IQA method that integrates multi-channel visual information from color images. The methodology begins by converting red, green, blue (RGB) images into the luminance (L), red–green opponent (M), and blue–yellow opponent (N) channels, i.e., the LMN color space. The LoG filter is then applied separately to the L, M, and N channels, and the convolved components are fused by a root-mean-square rule to form a multi-channel LoG similarity map, while a contrast similarity map is computed from the luminance channel and a chromaticity similarity map is derived from the color channels. Finally, the multi-channel LoG, contrast, and chromaticity features are combined, and standard-deviation pooling is used to obtain the full-reference IQA score. To validate the proposed method, distorted images from four widely used image databases were tested. The evaluation, based on four criteria, focused on the method's prediction accuracy, computational complexity, and generalizability. The Pearson linear correlation coefficient (PLCC) values ranged from 0.8822 (TID2013) to 0.9754 (LIVE), and the Spearman rank-order correlation coefficient (SROCC) values ranged from 0.8606 (TID2013) to 0.9798 (LIVE). Compared with existing methods, the proposed IQA method exhibits superior prediction accuracy with respect to human visual perception, indicating its promising potential in the field of image quality assessment.

1. Introduction

Visual perception stands as one of the most crucial senses for humans. Images, serving as an efficient and practical carrier of visual information, are progressively taking on a paramount role in human life. However, the processes of image acquisition, transmission, and storage are subject to various distortions, such as Gaussian white noise, motion blur, and white balance errors [1]. Perceptual image quality assessment (IQA) can be dichotomized into subjective and objective evaluations. The former is constrained by environmental factors and exhibits suboptimal practicality. Conversely, objective IQA predominantly leverages algorithmic methods to predict image quality, with the goal of attaining a high degree of congruence between computed values and human visual perception, ultimately supplanting human vision in the task of image quality assessment. In recent years, as the depth of research continues to expand, the theoretical underpinning of objective IQA research has gradually matured. Generally, objective IQA can be categorized into three types based on the availability of reference images [2]: full reference (FR) [3], reduced reference (RR) [4], and no reference (NR) [5] IQA. The primary focus of this paper lies in the investigation of FR-IQA methods.
Current research on FR-IQA methods is relatively advanced and, depending on the computational approach, can be broadly classified into three categories: mathematical methods, methods based on human visual system (HVS) characteristics, and learning-based methods [6]. Mathematical methods apply direct computations to image pixel gray values but overlook the varying contributions of different pixels to human vision, which leads to discrepancies between computed results and subjective perception [7]. In recent years, researchers have started to incorporate HVS characteristics [8], including brightness adaptation, contrast sensitivity, masking effects, and attention mechanisms, into IQA research. A seminal method in this domain is the structural similarity index (SSIM) proposed by Wang [9], which assesses image quality via the image's brightness, contrast, and structural information, using the RMS algorithm for the contrast computation [10]. SSIM has high computational efficiency, but its accuracy leaves considerable room for improvement. Several researchers have enhanced the SSIM algorithm. For instance, Li [11] proposed the three-component weighted SSIM (3-SSIM) method, which calculates feature similarities for the edge, texture, and smooth areas across the entire image. Although 3-SSIM further improves the accuracy of SSIM, its generalizability remains limited. Zhang [12] introduced contrast and visual saliency similarity (CVSS), which combines image contrast with visual saliency. Liu [13] proposed the gradient similarity metric (GSM) index by masking the structural information. Although the aforementioned methods exhibit high computational accuracy, their generalizability still needs to be improved. From an information-theoretic standpoint, Sheikh [14] proposed visual information fidelity (VIF), which has high computational efficiency but limited generalizability and accuracy. The Laplacian of Gaussian (LoG) operator excels at simulating human visual perception characteristics and is frequently employed in the computer vision literature [15]. Zhang [16] proposed the feature similarity index (FSIMc), which uses LoG wavelets to calculate phase congruency. Zhang [17] presented the visual saliency-induced index (VSI) for perceptual image quality assessment, drawing upon the variable attention of the human eye to different scenes; VSI employs LoG filters to extract frequency-domain features and incorporates image gradient and chromaticity information. Temel [18] proposed the perceptual similarity index (PerSIM), which applies LoG to the luminance channel to mimic the visual processing of the human retina. These three methods all utilize the LoG filter when establishing their IQA models, and their evaluation results exhibit commendable accuracy; however, they still have some deficiencies in generalization. Learning-based methods [19] typically require feature extraction and model training, but due to limited datasets, protracted training, and a dependence on the training data, these methods carry a significant disadvantage in terms of generalizability. Gao [20] employed VGGNet (a convolutional network trained for image recognition) to construct IQA methods based on an improved deep neural network architecture.
Yao [21] used a convolutional neural network to generate a visual difference map and weighted it according to the local spatial characteristics of the distorted image to evaluate image quality. Chen [22] used several difference of Gaussian (DoG) frequency bands to extract image features and constructed IQA methods through random forest regression. Learning-based methods are also used in the field of NR-IQA; for example, Zhang et al. employed a neural network-based pairwise learning-to-rank algorithm named RankNet [23] to learn the IQA method by incorporating the uncertainty into the loss function. Kim [24] proposed a two-stage NR-IQA method based on a CNN, which uses the local quality scores generated by a full-reference IQA method as the labels of the image patches. With the rapid development of IQA, IQA-related research has been widely applied in fields such as face recognition and biometric classification in recent years [25,26,27].
FR-IQA methods are designed to serve human vision, and an ideal computational method should mirror human visual perception of image texture and structural content. In general, the chromaticity channels of an image also carry structural information, but how to extract this information still needs to be explored. In previous research, LoG filtering was applied only to the image luminance channel to extract structural information. Building on this idea, more extensive structural information can be obtained by applying LoG filtering to the luminance channel and the chromaticity channels separately. In an RGB image, the luminance and chromaticity information is mixed across the three channels, which makes such extraction difficult; to address this, the RGB image is first transformed into a uniform color space. LMN is a widely used uniform color space and has been adopted in several IQA studies [1,17]. In this article, structural information is extracted from the L, M, and N channels by LoG filtering, and the three filtered components are then fused by a mathematical formula; this process is termed multi-channel LoG filtering (MLG). Since changes in contrast affect the HVS perception of image quality [28] while the contrast of the filtered image remains unchanged, this paper introduces image contrast to compensate for this limitation. The proposed method primarily targets color images, so a simple chromaticity similarity calculation is also performed on the chromaticity channels [29]. After the three components are computed, the final quality score is obtained using standard deviation (SD)-based [28] pooling. The main contribution of this paper is an IQA method that combines three image characteristics, i.e., multi-channel structural information obtained by LoG filtering, contrast information, and chroma information. On this basis, a stable and efficient FR-IQA method, termed MVFC, is established. The superiority of the proposed method is substantiated from multiple perspectives in the following sections.

2. Proposed Method

2.1. Construction of Multi-Channel LoG Filtering

During image quality assessment, image features that are consistent with human visual perception are generally introduced, such as image structure and texture. The literature [30] reports that the difference of Gaussian (DoG) can be used to simulate the response characteristics of retinal ganglion cells in cats. However, when constructing a DoG-based method, different standard deviations and scales usually have to be tuned to obtain the desired feature differences, which makes fusing responses at different scales very complex. When the scales are adjusted appropriately, the DoG operator can be approximated by the second-order derivative of the Gaussian, i.e., the LoG operator [15], which can simulate the neural mechanism of human vision. Therefore, to simplify the calculation, the LoG is used here to compute the image difference values:
$$\widehat{LoG}(m,n)=\frac{1}{2\pi\sigma^{2}}\cdot\frac{m^{2}+n^{2}-2\sigma^{2}}{\sigma^{4}}\,e^{-(m^{2}+n^{2})/(2\sigma^{2})} \quad (1)$$
where σ is the standard deviation of $\widehat{LoG}$, and m and n represent the position of each pixel (for ease of writing, x is used here to represent the pixel position, i.e., x = (m, n)).
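As a minimal illustration, the following Python sketch (not the authors' code) samples the LoG kernel of Equation (1) on a discrete grid; the kernel size and σ value below are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def log_kernel(size=11, sigma=1.5):
    """Sample the Laplacian-of-Gaussian of Equation (1) on a size x size grid."""
    half = size // 2
    m, n = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    r2 = m ** 2 + n ** 2
    # 1/(2*pi*sigma^2) * (m^2 + n^2 - 2*sigma^2)/sigma^4 * exp(-(m^2 + n^2)/(2*sigma^2))
    return (r2 - 2.0 * sigma ** 2) / (2.0 * np.pi * sigma ** 6) * np.exp(-r2 / (2.0 * sigma ** 2))
```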
Before constructing the multi-channel LoG filter, the image needs to be preprocessed; that is, the RGB image is mapped to an opponent uniform color space. At present, there are many color space conversions available, such as CIELab [31], LMN [32], YIQ [33], and YUV [34]. To reduce the errors introduced during color space conversion, the evaluation criterion values of each color space on four public databases are calculated first, and the most suitable uniform color space is determined from these values. The top-ranked evaluation criterion values are highlighted, as shown in Table 1.
From the data in Table 1, it can be seen that LMN has the best overall performance among the four uniform color spaces. Therefore, the RGB image is transformed into the LMN uniform color space in the proposed method. The transformation formulation is as shown in Equation (2):
$$\begin{bmatrix} L \\ M \\ N \end{bmatrix}=\begin{bmatrix} 0.06 & 0.63 & 0.27 \\ 0.30 & 0.04 & -0.35 \\ 0.34 & -0.60 & 0.17 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (2)$$
where L represents the luminance component, and M and N represent the chromaticity components.
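For illustration, a minimal Python sketch of the mapping in Equation (2) is given below; `rgb` is assumed to be an H × W × 3 floating-point array with channels in R, G, B order.

```python
import numpy as np

# 3 x 3 matrix of Equation (2), mapping RGB to the LMN opponent color space
RGB2LMN = np.array([[0.06,  0.63,  0.27],
                    [0.30,  0.04, -0.35],
                    [0.34, -0.60,  0.17]])

def rgb_to_lmn(rgb):
    """Return the L (luminance), M (red-green), and N (blue-yellow) planes."""
    lmn = rgb @ RGB2LMN.T                    # apply the 3 x 3 matrix to every pixel
    return lmn[..., 0], lmn[..., 1], lmn[..., 2]
```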
Traditional calculation methods generally convolve the $\widehat{LoG}$ operator only with the image luminance component, ignoring the contribution of the color components. In this paper, the $\widehat{LoG}$ operator is convolved with all three components in the uniform color space, as implemented by Equation (3):
$$L_{LoG}^{i}(x)=L^{i}(x)*\widehat{LoG}(x),\quad M_{LoG}^{i}(x)=M^{i}(x)*\widehat{LoG}(x),\quad N_{LoG}^{i}(x)=N^{i}(x)*\widehat{LoG}(x) \quad (3)$$
where i represents the image type (i.e., original image f1 and distorted image f2) and * represents convolution calculation.
Next, after convolution, the three components are fused, and this process is defined as:
$$LMN_{LoG}^{i}(x)=\left[\left(L_{LoG}^{i}(x)\right)^{2}+\left(M_{LoG}^{i}(x)\right)^{2}+\left(N_{LoG}^{i}(x)\right)^{2}\right]^{1/2} \quad (4)$$
The MLG similarity can then be represented as:
$$SIM_{LoG}(x)=\frac{2\,LMN_{LoG}^{1}(x)\,LMN_{LoG}^{2}(x)+T_{1}}{\left(LMN_{LoG}^{1}(x)\right)^{2}+\left(LMN_{LoG}^{2}(x)\right)^{2}+T_{1}} \quad (5)$$
where T1 is a constant for increasing formula stability.
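The following Python sketch illustrates Equations (3)-(5) under the assumptions above: each LMN channel is convolved with the LoG kernel, the responses are fused by the root of the sum of squares, and the point-wise similarity is formed. The helpers `log_kernel` and `rgb_to_lmn` are the hypothetical functions sketched earlier, and T1 follows the value used in Algorithm 1.

```python
import numpy as np
from scipy.ndimage import convolve

def mlg_response(rgb, kernel):
    L, M, N = rgb_to_lmn(rgb)
    responses = [convolve(c, kernel, mode="nearest") for c in (L, M, N)]   # Equation (3)
    return np.sqrt(sum(r ** 2 for r in responses))                         # Equation (4)

def mlg_similarity(rgb_ref, rgb_dist, kernel, T1=0.00006):
    g1 = mlg_response(rgb_ref, kernel)
    g2 = mlg_response(rgb_dist, kernel)
    return (2.0 * g1 * g2 + T1) / (g1 ** 2 + g2 ** 2 + T1)                 # Equation (5)
```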
To verify the performance of MLG, two groups of original and distorted images with the same specifications are selected from the TID2008 database [35], and the image quality scores of these groups are calculated using different algorithms (i.e., traditional LoG filtering and MLG). The corresponding similar images in this calculation process are listed, as shown in Figure 1.
As shown in Figure 1, there are fewer bright spots in Figure 1g,h than in Figure 1e,f, indicating that the proposed algorithm characterizes the differences between the reference and distorted images more strongly. Regarding the score values, a higher subjective score means a smaller difference between the reference and distorted images and hence a higher quality of the distorted image; likewise, a lower objective score (a smaller deviation value) indicates a smaller difference and a higher quality, and vice versa. For example, in Figure 1, the subjective score of Figure 1d is lower than that of Figure 1c; correspondingly, the MLG objective score of Figure 1e is lower than that of Figure 1f, whereas the traditional LoG objective score of Figure 1g is higher than that of Figure 1h.
These results indicate that MLG outperforms traditional LoG filtering in predictive performance and characterizes image structural information in a way that is more consistent with human observation. Together with the subjective scores of Figure 1c,d, this shows that MLG agrees better with human visual perception, has higher prediction accuracy for IQA, and better expresses the degree of distortion.

2.2. Contrast Similarity Image Calculation

Contrast reflects the brightness variation in images and is a significant visual attribute of image quality. There are many methods for calculating image contrast [16]; among them, RMS contrast captures natural stimuli with low computational complexity and shows high consistency with the subjective contrast of natural images [10]. Therefore, this paper adopts RMS contrast as the second image feature, and its global contrast expression is:
$$LC^{i}(x)=\sqrt{\frac{1}{H-1}\sum_{j=1}^{H}\left(x_{j}-\mu_{x}\right)^{2}} \quad (6)$$
where $\mu_{x}=\frac{1}{H}\sum_{j=1}^{H}x_{j}$ is the average intensity of the image.
Based on Equation (6), the global contrast similarity images of the original image f1 and the distorted image f2 can be calculated, and the expression is:
$$SIM_{LC}(x)=\frac{2\,LC_{1}(x)\,LC_{2}(x)+T_{2}}{LC_{1}^{2}(x)+LC_{2}^{2}(x)+T_{2}} \quad (7)$$
where T2 is a constant for increasing the stability of the equation.
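As a sketch of Equations (6) and (7), the contrast map below is computed as a windowed standard deviation of the luminance channel, mirroring the Gaussian window used in Algorithm 1; the window width and T2 = 50 are taken from the pseudocode and are assumptions of this illustration rather than unique choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rms_contrast(L, sigma=1.5):
    mu = gaussian_filter(L, sigma)                         # local mean
    var = gaussian_filter(L * L, sigma) - mu ** 2          # local variance
    return np.sqrt(np.clip(var, 0.0, None))                # local RMS contrast, Equation (6)

def contrast_similarity(L_ref, L_dist, T2=50.0):
    c1, c2 = rms_contrast(L_ref), rms_contrast(L_dist)
    return (2.0 * c1 * c2 + T2) / (c1 ** 2 + c2 ** 2 + T2)  # Equation (7)
```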

2.3. Chromaticity Similarity Image Calculation

Changes in image color affect perceived image quality, so chromaticity similarity is introduced as the third feature of the objective method, as is done in FSIM, VSI, and PerSIM. According to Table 1, the RGB image is mapped to the LMN uniform color space, where M represents the red–green channel and N represents the blue–yellow channel [17]. The chromaticity similarity can be calculated as:
$$SIM_{MN}(x)=\frac{2M_{1}(x)M_{2}(x)+T_{3}}{M_{1}^{2}(x)+M_{2}^{2}(x)+T_{3}}\cdot\frac{2N_{1}(x)N_{2}(x)+T_{3}}{N_{1}^{2}(x)+N_{2}^{2}(x)+T_{3}} \quad (8)$$
where T3 is a constant for increasing the stability of the equation, and M1, M2 and N1, N2 are the chromaticity channels of f1 (original image) and f2 (distorted image), respectively.
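A minimal Python sketch of Equation (8) is shown below; T3 is set equal to the T1 constant of Algorithm 1, which is an assumption of this illustration.

```python
def chroma_similarity(M1, M2, N1, N2, T3=0.00006):
    sim_m = (2.0 * M1 * M2 + T3) / (M1 ** 2 + M2 ** 2 + T3)
    sim_n = (2.0 * N1 * N2 + T3) / (N1 ** 2 + N2 ** 2 + T3)
    return sim_m * sim_n   # product of the two channel similarities, Equation (8)
```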

2.4. MVFC Method

Based on the quantification results of multi-channel LoG filtering, contrast, and chromaticity, SIMLoG, SIMLC, and SIMMN feature similarity images can be obtained. By pooling the obtained feature similarities in space, an IQA method can be obtained. The index of the proposed method can be calculated using Equation (9):
$$S=w_{1}\,\mathrm{SD}\left(SIM_{LoG}\right)+w_{2}\,\mathrm{SD}\left(SIM_{MN}\right)+w_{3}\,\mathrm{SD}\left(SIM_{LC}\right) \quad (9)$$
where w1, w2, and w3 are weights representing the importance of each term, and w1 + w2 + w3 = 1. The standard deviation (SD) can effectively highlight the feature differences of the image [28], and is calculated as $\mathrm{SD}=\sqrt{\frac{1}{H}\sum_{k=1}^{H}\left(k-\bar{k}\right)^{2}}$ with $\bar{k}=\frac{1}{H}\sum_{k=1}^{H}k$, where k denotes the feature similarity value at each position of the similarity map.
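A minimal sketch of the pooling in Equation (9) follows; the weights are the values fixed in Section 3.3 and Algorithm 1 (w1 = 0.5, w2 = 0.05, w3 = 0.45).

```python
import numpy as np

def mvfc_score(sim_log, sim_mn, sim_lc, w=(0.5, 0.05, 0.45)):
    # pool each similarity map by its standard deviation and combine with the weights
    return (w[0] * float(np.std(sim_log))
            + w[1] * float(np.std(sim_mn))
            + w[2] * float(np.std(sim_lc)))
```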
In order to express the reliability of this method intuitively, two groups of original-distorted image pairs (with subjective scores) of the same specifications are selected from the TID2008 database; the objective scores of these images are calculated with the proposed method and compared with the subjective scores. A higher subjective score means a smaller difference between the reference and distorted images and a higher quality of the distorted image; likewise, a lower objective score indicates a smaller difference and a higher quality, and vice versa.
As can be seen from Figure 2, the subjective scores of Figure 2b,c decrease in turn, indicating that Figure 2c is more distorted than Figure 2b, while the objective scores calculated by the proposed method increase in turn, which is consistent with human judgment. The subjective and objective scoring results for Figure 2e,f follow the same pattern and are not repeated here. It can be observed that the proposed method accurately expresses the degree of distortion.
The method content described above can be more intuitively expressed in Figure 3, and the calculating process of MVFC is shown in Algorithm 1.
Algorithm 1 MVFC.
Input: The reference image R and 1 distorted image D.
Output: A quality score S.
% transform into an opponent color space (steps 1–2)
H stands for R or D.
1: RH, GH, BH → LH, MH, NH (Equation (2))
2: Downsample the image
% Contrast feature map similarity calculation. (steps 3–9)
3: window ← fspecial('gaussian', 2, 1.5).
4: window ← window/sum(sum(window)).
5: muH ← filter2(window, LH, 'same').
6: mu_sqH ← muH × muH.
7: sigmaH ← sqrt(abs(filter2(window, LH × LH, 'same') − mu_sqH)).
8: C2 ← 50.
% Define C2 to avoid instabilities when denominator is close to 0
9: SIMLC ← (2× sigmaR×sigmaD + C2)/(sigmaR^2 + sigmaD^2 + C2).
% Similarity calculation in the chroma channels (M, N). (steps 10–13)
10: T1 ← 0.00006.
11: mSIM ← abs ((2 × (MR) × (MD) + T1)/((MR)^2 + (MD)^2 + T1)).
12: nSIM ← abs ((2 × (NR) × (ND) + T1)/((NR)^2 + (ND)^2 + T1)).
13: SIMMN ← mSIM×nSIM.
% Multi-channel LoG feature map extraction and similarity calculation. (steps 14–19)
14: LHDoG ← imgDoG(LH).
15: MHDoG ← imgDoG(MH).
16: NHDoG ← imgDoG(NH).
17: DoGR ← sqrt(LRDoG^2 + MRDoG^2 + NRDoG^2).
18: DoGD ← sqrt(LDDoG^2 + MDDoG^2 + NDDoG^2).
19: SIMLoG ← abs ((2×DoGR × DoGD + T1)/((DoGR)^2 + (DoGD)^2 + T1)).
% Calculate the quality score of D. (steps 20–21)
20: Output ← 0.5 × std2(SIMLoG) + 0.05 × std2(SIMMN) + 0.45 × std2(SIMLC).
21: end

3. Experimental Results and Analysis

3.1. Evaluation Metrics and Protocol

In the literature, the performance of an FR-IQA method is reported as correlation coefficients measured between the predicted and ground-truth quality scores. Four commonly used criteria are the Spearman rank-order correlation coefficient (SROCC), Kendall rank-order correlation coefficient (KROCC), Pearson linear correlation coefficient (PLCC), and root-mean-square error (RMSE). The first two characterize the prediction monotonicity of an IQA method, while PLCC reflects its prediction accuracy and RMSE reflects its prediction deviation. To calculate the last two indicators, a logistic regression analysis [17] is adopted, with the mapping function:
$$f(s)=\beta_{1}\left(\frac{1}{2}-\frac{1}{1+\exp\left(\beta_{2}\left(s-\beta_{3}\right)\right)}\right)+\beta_{4}s+\beta_{5} \quad (10)$$
where β 1 , …, β 5 are fitting parameters, s represents the original IQA score, and f(s) is the IQA score after regression.
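As an illustration of this protocol, the Python sketch below (with assumed helper names) fits the five-parameter logistic of Equation (10) between objective scores and subjective scores, then reports PLCC and RMSE on the mapped scores and SROCC/KROCC on the raw scores.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import kendalltau, pearsonr, spearmanr

def logistic5(s, b1, b2, b3, b4, b5):
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (s - b3)))) + b4 * s + b5   # Equation (10)

def evaluate(scores, mos):
    scores, mos = np.asarray(scores, float), np.asarray(mos, float)
    p0 = [np.max(mos), 1.0, np.mean(scores), 1.0, np.mean(mos)]   # rough initial guess
    params, _ = curve_fit(logistic5, scores, mos, p0=p0, maxfev=20000)
    mapped = logistic5(scores, *params)
    return {"PLCC": pearsonr(mapped, mos)[0],
            "SROCC": spearmanr(scores, mos)[0],
            "KROCC": kendalltau(scores, mos)[0],
            "RMSE": float(np.sqrt(np.mean((mapped - mos) ** 2)))}
```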

3.2. Databases

Currently, the four commonly used public image databases in the literature are TID2008 [35], CSIQ [36], LIVE [37], and TID2013 [38]. These databases contain many distortion types, rich content information, and noticeable pattern textures. Each image in these databases includes a corresponding mean opinion score (MOS) or differential mean opinion score (DMOS), which is suitable for conducting method-performance testing experiments. Table 2 lists the basic information of the databases, including a total of 6345 distorted images and 52 distortion types.

3.3. Parameter Optimization of the Method

As mentioned earlier, the parameters of the MVFC are T1, T2, w1, w2, and w3. To achieve the optimal performance of the method, suitable values must be determined. In this research, a controlled-variable approach is used to determine the parameter values, and the final values are selected according to the corresponding evaluation criterion (PLCC).
First, the values of T1, w1, w2, and w3 are kept unchanged, and the value of T2 is set in the interval [20, 70]. If T2 is too large, it will weaken the feature representation, and if it is too small, it will reduce the calculation stability. As shown in Figure 4, when the T2 value is in the interval [40, 50], the PLCC value in the four databases is the most ideal. Here, T2 = 50 is selected. Using the same method, T1 is set in the interval [0.00003, 0.00008], and when the T1 value is in [0.00004, 0.00006], the PLCC value of the MVFC in each database is the most ideal, and T1 = 0.00006 is chosen.
The selection of the numerical values of w1, w2, and w3 is consistent with the above method. Since the values of T1 and T2 have been given and w1 + w2 + w3 = 1, it is only necessary to determine the values of two variables. As shown in Figure 5, keeping the value of w3 unchanged, when the value of w1 is in the interval [0.44, 0.54], the PLCC value of the MVFC in each database is the most ideal, and w1 = 0.5 is chosen. Similarly, keeping the value of w1 unchanged, when the value of w3 is in the interval [0.41, 0.46], the PLCC value of the MVFC is the most ideal, and w3 = 0.45 is selected.
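The controlled-variable search can be sketched as follows; `compute_plcc` is a hypothetical helper that runs the MVFC on one database with the given parameters and returns its PLCC, and the grid below is an illustrative discretization of the interval [20, 70].

```python
import numpy as np

def tune_T2(databases, compute_plcc, T1=0.00006, w1=0.5, w3=0.45,
            grid=np.arange(20, 71, 5)):
    # sweep T2 while the other parameters stay fixed; keep the value that
    # maximises the mean PLCC over the four databases
    best_T2, best_plcc = None, -np.inf
    for T2 in grid:
        mean_plcc = np.mean([compute_plcc(db, T1, T2, w1, w3) for db in databases])
        if mean_plcc > best_plcc:
            best_T2, best_plcc = T2, mean_plcc
    return best_T2
```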

3.4. Overall Performance Comparison

The MVFC method extracts three types of image features, namely multi-channel LoG filtering, contrast, and chroma information. To demonstrate the performance advantages of the MVFC method, 10 classic methods are selected for comparison, including SSIM [9], CVSS [12], GSM [13], VIF [14], FSIMc [16], and VSI [17], as well as the high-precision methods RVSIM [39] and MAD [40], the high-efficiency method GMSD [41], and the recently published method CAGS [42]. As shown in Table 3, the top three evaluation results are highlighted in bold. In addition, the evaluation results on the four databases are both weighted-averaged and directly averaged, where the weight of each database is determined by the number of distorted images it contains.
As seen from Table 3, the MVFC method performs well on all four databases, and its overall performance ranks first. In particular, on the LIVE database, its evaluation scores show a clear advantage over the second- and third-ranked methods. Compared with FSIMc and VSI, which also use LoG filtering to extract image structure information, the MVFC method has a more obvious advantage on the LIVE and CSIQ databases. In the weighted-average and direct-average values of the first three indicators, the MVFC method ranks first. In addition, on the TID2008 database, the SROCC value of the MVFC method is 0.9048, ranking first among the 11 methods. In summary, the proposed MVFC method is highly competitive and has good prediction accuracy.
The methods most often ranked in the top three in Table 3 are the proposed MVFC method (22 times), CVSS (14 times), and VSI (12 times). Moreover, it can be seen from Table 3 that the SROCC and PLCC values of the MVFC method are both greater than 0.8606, indicating that the method not only has good predictive performance but also a good generalization ability.
We also compare the MVFC method with five learning-based or AI-based IQA methods, namely DoG-SSIM, dipIQ, DeepSim, DeepFR, and DIQA, with the SROCC and PLCC values chosen as the performance evaluation criteria. The results are shown in Table 4, with the top three marked in bold.
In Table 4, the top three methods by total marked count are DeepFR (8 times), the MVFC method (7 times), and DoG-SSIM (6 times); the MVFC method therefore ranks second overall. As seen from the data, the MVFC method exhibits high prediction performance and generalizes well across the commonly used public image databases.
To further demonstrate the performance advantages of the MVFC method, the fitting performance of different methods is compared using the discrete points provided in the TID2013 database. As shown in Figure 6, the regression curve of the MVFC method has a better correlation with the subjective observation values.
The MVFC integrates HVS characteristics by simultaneously extracting multi-channel LoG filtering, contrast, and chroma information, and it characterizes feature differences through standard-deviation pooling. Therefore, it achieves better accuracy and generalization than the other methods. In the multi-channel LoG extraction, the coordinated changes of the luminance and chroma components, as well as the impact of the chroma components on image structure, are taken into account; this improves the accuracy of the LoG extraction and better characterizes changes in image structure.

3.5. Performance Comparison for Individual Distortion Types

The following analysis focuses on the predictive performance of the method under different distortion types. Since the distortion types in the TID2008 database overlap with those in TID2013, they are not used repeatedly. The performance comparison for individual distortion types can be found in Table 5, where the PLCC value is used as the evaluation criterion. For each distortion type, the top three PLCC values are highlighted in bold, and the methods with the highest total count of marks are the MVFC method (21 times), VSI (16 times), and CVSS (12 times).
Comparing the data in Table 5, the total count of marks for the MVFC method ranks first among the 11 methods and is significantly higher than VSI and CVSS. It also outperforms other methods in single-distortion evaluation results. The MVFC method performs well in the LIVE and CSIQ databases, with the highest PLCC value of 0.9912, which is significantly better than the computational methods that also use LoG (VSI and FSIMc). A further analysis of the MVFC method, VSI, and CVSS (Table 5) shows that the MVFC method has lower PLCC values for CTC (contrast change) and block (different intensity local block distortions) distortion types, with values of 0.5466 and 0.6253, respectively; VSI has lower results for block (different intensity local block distortions) and CTC (contrast change) distortion types, with results of 0.4875 and 0.6595, respectively; CVSS has lower results for block (different intensity local block distortions), CTC (contrast change), and CCS (color saturation change) distortion types, with results of 0.4983, 0.4278, and 0.0893, respectively. From the above data, it can be seen that the currently more challenging distortion type evaluations are CTC (contrast change) and block (different intensity local block distortions). The MVFC method outperforms CVSS in the evaluation of these two distortion types and performs significantly better than VSI and CVSS in block evaluation. Therefore, the generalization performance of MVFC method is good, both on the TID2013 database and across all databases.
To further validate the MVFC method's performance in evaluating individual distortion types, a comparison was conducted with five recently published methods, namely DISTS [43], SG-ESSIM [44], VSPSI [45], LGV, and SWLGV [46], tested on the TID2013 database with the SROCC value chosen as the performance evaluation criterion. The results are shown in Table 6, with the top two values for each distortion type marked in bold.
In Table 6, the top three ranked computational methods based on the total marked count are VSPSI (20 times), the MVFC method (17 times), and SG-ESSIM (14 times). As seen from the data, the MVFC method exhibits a high prediction performance. Upon further analysis of the data, MVFC method’s average SROCC value across all 24 distortion types is 0.8531, which is very close to the computational methods VSPSI and SWLGV and is significantly higher than the remaining three computational methods. These results also indicate that the MVFC method has good generalizability and strong competitiveness.

3.6. Comparison of Method Computational Efficiency

To compare the computational complexity of the method proposed in this study with that of seven other methods, a pair of color images with a resolution of 512 × 512 from the TID2008 database was selected as test images. The tests were run on a computer with a 2.3 GHz Intel Core i5 processor, using MATLAB R2016a as the simulation software.
As shown in Table 7, the method in this study requires a runtime of only about 0.15 s, demonstrating relatively low computational complexity. In practical applications, this method can quickly accomplish the relevant tasks.
In the overall performance comparison of Section 3.4, the MVFC method ranks first. This is because the MVFC method combines HVS characteristics, namely the multi-channel structural information obtained by LoG filtering, with contrast and chromaticity information, and uses the standard deviation to characterize feature differences. The main limitation of this research lies in handling the individual distortion types discussed in Section 3.5. At present, none of the IQA methods mentioned in this paper achieves high evaluation performance across all distortion types, mainly because their feature preprocessing, extraction, and pooling strategies differ. From Table 5 and Table 6, it can be concluded that no single IQA method handles every distortion type well, which is the main deficiency of current IQA research. Hence, researchers should pay more attention to covering all distortion types in future IQA work.

4. Conclusions

In accordance with the high-precision and high-efficiency requirements of modern FR-IQA methods, this paper proposes a full-reference method (MVFC) based on the fusion of multi-channel visual information from color images. Image structural information is extracted using the multi-channel LoG algorithm and is pooled with chroma and contrast features using the standard deviation method. The parameters of MVFC are obtained through the controlled-variable method, ultimately yielding the computational method of this study. During the experimental phase, four classic FR-IQA databases were selected, totaling 109 reference images and 6345 distorted images, and the results were compared with those of ten classic FR-IQA methods and five learning-based methods. The experimental results indicate that, in terms of PLCC, the minimum value of this method across the four databases is 0.8822 and the maximum reaches 0.9754; its stability and precision are superior to most of the comparison methods. In single-distortion-type tests, the highest PLCC value of this method reaches 0.9912, with the lowest at 0.5466; its generalization and prediction performance also outperform most FR-IQA methods. A comparison of computational complexity with seven other methods verifies the higher computational efficiency of this method. In conclusion, the proposed FR-IQA method demonstrates stable algorithmic performance. In the future, all IQA methods, including MVFC, need to be improved to yield better performance on IQA problems in real settings.

Author Contributions

Conceptualization, B.J. and C.S.; methodology, S.B.; software, S.B.; validation, B.J., C.S. and L.W.; formal analysis, S.B.; investigation, S.B.; resources, S.B.; data curation, S.B.; writing—original draft preparation, S.B.; writing—review and editing, B.J.; visualization, S.B.; supervision, C.S.; project administration, B.J.; funding acquisition, B.J., C.S. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 52005003, Key Project of Scientific Research in universities of Anhui Province, grant number 2022AH050983, Research Start-up Foundation for Introduction of Talents of AHPU, grant number 2021YQQ027, and Scientific Research Fund of AHPU, grant number Xjky20220003, and Enterprise Cooperation Project of Anhui Future Technology Research Institute, grant number 2023qyhz02.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shi, C.Y.; Lin, Y.D. Image Quality Assessment Based on Three Features Fusion in Three Fusion Steps. Symmetry 2022, 14, 773. [Google Scholar] [CrossRef]
  2. Jiang, B.; Bian, S.; Shi, C.; Wu, L. Full reference image quality assessment based on color appearance-based phase consistency. Opt. Precis. Eng. 2023, 31, 1509–1521. [Google Scholar] [CrossRef]
  3. Shi, C.Y.; Lin, Y.D. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access 2020, 8, 97310–97320. [Google Scholar] [CrossRef]
  4. Wei, L.; Zhao, L.; Peng, J. Reduced Reference Quality Assessment for Image Retargeting by Earth Mover’s Distance. Appl. Sci. 2021, 11, 9776. [Google Scholar] [CrossRef]
  5. Shi, C.Y.; Lin, Y.D. No Reference Image Sharpness Assessment Based on Global Color Difference Variation. Chin. J. Electron. 2022, 33, 1–11. [Google Scholar]
  6. Shen, T.W.; Li, C.C.; Lin, W.F.; Tseng, Y.H.; Wu, W.F.; Wu, S.; Tseng, Z.L.; Hsu, M.H. Improving Image Quality Assessment Based on the Combination of the Power Spectrum of Fingerprint Images and Prewitt Filter. Appl. Sci. 2022, 12, 3320. [Google Scholar] [CrossRef]
  7. Deng, J.H.; Yuan, Z.M.; Liu, D.H.; Gu, G.S.H. Superpixel and Visual Saliency Synergetic Image Quality Assessment. J. Guangdong Univ. Technol. 2021, 38, 33–39. [Google Scholar]
  8. Zhang, T.; Ding, J.; Tian, Z. A reversible information hiding algorithm in AMBTC domain based on human vision system. J. Phys. Conf. Ser. 2021, 1982, 012071. [Google Scholar] [CrossRef]
  9. Wang, Z.; Bovik, A.; Sheikh, H. Image Quality Assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  10. Kukkonen, H.; Rovamo, J.; Tiippana, K. Michelson contrast, RMS contrast and energy of various spatial stimuli at threshold. Vis. Res. 1993, 33, 1431–1436. [Google Scholar] [CrossRef]
  11. Li, C.; Bovik, A.C. Three-component weighted structural similarity index. Proc. SPIE 2009, 7242, 252–260. [Google Scholar]
  12. Jia, H.Z.; Zhang, L.; Wang, T.H. Contrast and Visual Saliency Similarity-induced Index for Assessing Image Quality. IEEE Access 2018, 6, 65885–65893. [Google Scholar] [CrossRef]
  13. Liu, A.M.; Lin, W.S.; Narwaria, M. Image Quality Assessment Based on Gradient Similarity. IEEE Trans. Image Process. 2011, 21, 1500–1512. [Google Scholar] [PubMed]
  14. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444. [Google Scholar] [CrossRef]
  15. Young, R.A. The Gaussian derivative model for spatial vision: I. Retinal mechanisms. Spat. Vis. 1987, 2, 273–293. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Zhang, L.; Mou, X. FSIM: A Feature Similarity Index for Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [Green Version]
  17. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef]
  18. Temel, D.; Alregib, G. PerSIM: Multi-resolution image quality assessment in the perceptually uniform color domain. In Proceedings of the IEEE International Conference on Image Processing, Quebec City, QC, Canada, 27–30 September 2015; pp. 1682–1686. [Google Scholar]
  19. Liu, L.; Gao, M.; Zhang, Y. Application of machine learning in intelligent encryption for digital information of real-time image text under big data. EURASIP J. Wirel. Commun. Netw. 2022, 2022, 1–16. [Google Scholar] [CrossRef]
  20. Gao, F.; Wang, Y.; Li, P.; Min, T. DeepSim: Deep similarity for image quality assessment. Neurocomputing 2017, 257, 104–114. [Google Scholar] [CrossRef]
  21. Yao, W.; Liu, Y.P.; Zhu, C.H.B. Deep learning of full-reference image quality assessment based on human visual properties. Infrared Laser Eng. 2018, 47, 703004. [Google Scholar]
  22. Pei, S.H.; Chen, L.H. Image quality assessment using human visual DoG Model Fused with Random Forest. IEEE Trans. Image Process. 2015, 24, 3282–3292. [Google Scholar] [PubMed]
  23. Ma, K.; Liu, W.; Liu, T.; Wang, Z.; Tao, D. Diplq: Blind image quality assessment by learning-to-rank discriminable image pairs. IEEE Trans. Image Process. 2017, 26, 3951–3964. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Kim, J.; Nguyen, A.; Lee, S. Deep CNN-Based blind Image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 11–24. [Google Scholar] [CrossRef] [PubMed]
  25. Abboud, A.J.; Sellahewa, H.; Jassim, S.A. Quality based approach for adaptive face recognition. In Mobile Multimedia/Image Processing, Security, and Applications; SPIE: Bellingham, WA, USA, 2009; Volume 7351, pp. 175–184. [Google Scholar]
  26. Abboud, A.J.; Jassim, S.A. Biometric templates selection and update using quality measures. Proc. SPIE-Int. Soc. Opt. Eng. 2012, 8406, 74–82. [Google Scholar]
  27. Abboud, A.J.; Agaian, S.S.; Jassim, S.A. Image quality guided approach for adaptive modelling of biometric intra-class variations. Proc. SPIE-Int. Soc. Opt. Eng. 2010, 7708, 189–198. [Google Scholar]
  28. Ahmed, I.T.; Chen, S.D.; Hammad, B.T. Contrast-distorted image quality assessment based on curvelet domain features. Int. J. Electr. Comput. Eng. 2021, 11, 25–95. [Google Scholar] [CrossRef]
  29. Yang, J.; Zhang, W.; Li, X. Full reference image quality assessment by considering intra-block structure and inter-block texture. IEEE Access 2020, 8, 179702–179715. [Google Scholar] [CrossRef]
  30. Enroth-Cugell, C.; Robson, J.G. The contrast sensitivity of retinal ganglion cells of the cat. J. Physiol. 1966, 187, 517–552. [Google Scholar] [CrossRef]
  31. Berns, R.S. Extending CIELAB: Vividness Vab*, depth, Dab*, and clarity, Tab*. Color Res. Appl. 2014, 39, 322–330. [Google Scholar] [CrossRef]
  32. Geusebroek, J.M.; Van den Boomgaard, R.; Smeulders, A.W.M.; Geerts, H. Color Invariance. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1338–1350. [Google Scholar] [CrossRef] [Green Version]
  33. Kwok, S.H. Efficient gamut clipping for color image processing using LHS and YIQ. Opt. Eng. 2003, 42, 701–711. [Google Scholar] [CrossRef]
  34. Wen, X.; Pan, Z.; Hu, Y. Generative Adversarial Learning in YUV Color Space for Thin Cloud Removal on Satellite Imagery. Remote Sens. 2021, 13, 1079. [Google Scholar] [CrossRef]
  35. Ponomarenko, N.; Lukin, V.; Zelensky, A. TID2008—A Database for Evaluation of Full-Reference Visual Quality Assessment Metrics. Adv. Mod. Radioelectron. 2009, 10, 30–45. [Google Scholar]
  36. Yang, X.; Wang, T.; Ji, G. Image quality assessment via colour information fluctuation. Signal Image Video Process. 2022, 17, 1161–1171. [Google Scholar] [CrossRef]
  37. Osorio, F.; Vallejos, R.; Barraza, W. Statistical estimation of the structural similarity index for image quality assessment. Signal Image Video Process. 2022, 16, 1035–1042. [Google Scholar] [CrossRef]
  38. Ling, Y.; Zhou, F.; Guo, K. ASSP: An adaptive sample statistics-based pooling for full-reference image quality assessment. Neurocomputing 2022, 493, 568–582. [Google Scholar] [CrossRef]
  39. Yang, G.; Li, D.; Lu, F. RVSIM: A feature similarity method for full-reference image quality assessment. EURASIP J. Image Video Process. 2018, 2018, 6. [Google Scholar] [CrossRef] [Green Version]
  40. Larson, E.C.; Chandler, D.M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. J. Electron. Imaging 2010, 19, 011006. [Google Scholar]
  41. Xue, W.; Zhang, L.; Mou, X. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2013, 23, 684–695. [Google Scholar] [CrossRef] [Green Version]
  42. Shi, C.Y.; Lin, Y.D. Objective image quality assessment based on image color appearance and gradient features. Acta Phys. Sin. 2020, 69, 401–412. [Google Scholar] [CrossRef]
  43. Ding, K.; Ma, K.; Wang, S. Image quality assessment: Unifying structure and texture similarity. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2567–2581. [Google Scholar] [CrossRef] [PubMed]
  44. Varga, D. Saliency-Guided Local Full-Reference Image Quality Assessment. Signals 2022, 3, 28. [Google Scholar] [CrossRef]
  45. Wang, X.; Zheng, B.J.; Kong, L.J. Full Reference Image Quality Assessment Based on Visual Saliency and Perception Similarity Index. Packag. Eng. 2022, 43, 239–248. [Google Scholar]
  46. Varga, D. Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency. Electronics 2022, 11, 559. [Google Scholar] [CrossRef]
Figure 1. Comparison of performance between MLG and traditional LoG filtering; (a,b) are original images; (c,d) are distorted images, with subjective scores of 7.2 and 4.8286, respectively; (e,f) are MLG similarity images, with objective scores of 0.008526 and 0.008884, respectively; (g,h) are traditional LoG filtering similarity images, with objective scores of 0.015842 and 0.01499, respectively.
Figure 2. (a,d) are original images, while (b,c,e,f) are distorted images corresponding to different distortion levels of a and d. Their subjective scores are 4.0571, 3.0000, 5.1935, and 4.0645, respectively. And their objective scores are 0.0931, 0.1720, 0.0518, and 0.1177, respectively.
Figure 3. The analytical flow of the MVFC.
Figure 4. PLCC of different T1 and T2 values on four benchmark databases.
Figure 5. PLCC of different w1 and w2 values on four benchmark databases.
Figure 6. Scatter diagram of subjective MOS and method prediction scores on TID 2013 database: (a) FSIMc; (b) VSI; (c) GSM; (d) SSIM; (e) VIF; (f) MVFC.
Table 1. Overall performance of color spaces in different databases.

Database        Criterion  CIELab   YIQ      LMN      YUV
TID2013         SROCC      0.7805   0.7983   0.8123   0.7983
                KROCC      0.5828   0.6060   0.6192   0.6061
                PLCC       0.7899   0.8158   0.8241   0.8159
                RMSE       0.7603   0.7169   0.7023   0.7168
TID2008         SROCC      0.7735   0.8103   0.8138   0.8103
                KROCC      0.5717   0.6123   0.6172   0.6123
                PLCC       0.7713   0.8130   0.8161   0.8130
                RMSE       0.8541   0.7813   0.7755   0.7813
CSIQ            SROCC      0.8585   0.9075   0.9093   0.9076
                KROCC      0.6620   0.7269   0.7298   0.7269
                PLCC       0.8595   0.9069   0.9086   0.9069
                RMSE       0.1342   0.1106   0.1097   0.1106
LIVE            SROCC      0.9462   0.9656   0.9665   0.9656
                KROCC      0.8107   0.8586   0.8612   0.8586
                PLCC       0.9152   0.8772   0.8766   0.8772
                RMSE       12.6080  15.0241  15.0581  15.0244
Direct average  SROCC      0.8397   0.8704   0.8755   0.8705
                KROCC      0.6568   0.7010   0.7069   0.7010
                PLCC       0.8340   0.8532   0.8564   0.8533
Table 2. IQA benchmark databases.

Database  Source Images  Distorted Images  Distortion Types  Observers
TID2013   25             3000              24                971
TID2008   25             1700              17                838
CSIQ      30             866               6                 35
LIVE      29             779               5                 161
Table 3. Performance comparison of IQA methods on four databases.

Database          Criterion  SSIM    VIF     MAD     FSIMc   GMSD    VSI     GSM     RVSIM   CVSS    CAGS    Proposed
TID2013           SROCC      0.7417  0.6769  0.7807  0.8510  0.8044  0.8965  0.7946  0.6757  0.8069  0.8316  0.8606
                  KROCC      0.5588  0.5147  0.6035  0.6665  0.6339  0.7183  0.6255  0.5146  0.6331  0.6469  0.6810
                  PLCC       0.7895  0.7720  0.8267  0.8769  0.8590  0.9000  0.8464  0.7825  0.8406  0.8445  0.8822
                  RMSE       0.7608  0.7880  0.6975  0.5959  0.6346  0.5404  0.6603  0.7719  0.6715  0.6639  0.5838
TID2008           SROCC      0.7749  0.7491  0.8340  0.8840  0.8907  0.8979  0.8504  0.7375  0.9001  0.8231  0.9048
                  KROCC      0.5768  0.5860  0.6445  0.6991  0.7092  0.7123  0.6596  0.5628  0.7215  0.6289  0.7278
                  PLCC       0.7732  0.8084  0.8308  0.8762  0.8788  0.8762  0.8422  0.7954  0.8961  0.8091  0.9000
                  RMSE       0.8511  0.7899  0.7468  0.6468  0.6404  0.6466  0.7235  0.8133  0.5956  0.7886  0.5850
CSIQ              SROCC      0.8756  0.9195  0.9466  0.9310  0.9570  0.9423  0.9108  0.8979  0.9580  0.9198  0.9570
                  KROCC      0.6907  0.7537  0.7970  0.7690  0.764   0.7857  0.7374  0.7234  0.8173  0.7487  0.8127
                  PLCC       0.8613  0.9277  0.9502  0.9192  0.930   0.9279  0.8964  0.9236  0.9589  0.9014  0.9611
                  RMSE       0.1334  0.0980  0.0818  0.1034  0.0786  0.0979  0.1164  0.1007  0.0745  0.1137  0.0725
LIVE              SROCC      0.9479  0.9636  0.9669  0.9645  0.9603  0.9524  0.9561  0.9600  0.9672  0.9734  0.9798
                  KROCC      0.7963  0.8282  0.8421  0.8363  0.8268  0.8058  0.8150  0.8203  0.8406  0.8658  0.8863
                  PLCC       0.9449  0.9604  0.9675  0.9613  0.9595  0.9482  0.9512  0.9570  0.9651  0.9640  0.9754
                  RMSE       8.9455  7.6137  6.9073  7.5296  7.6937  8.6816  8.4327  7.9274  7.1573  8.3251  6.9051
Weighted average  SROCC      0.7942  0.7646  0.8405  0.8847  0.8544  0.9100  0.8452  0.7638  0.8721  0.8588  0.9027
                  KROCC      0.6108  0.6049  0.6702  0.7101  0.7022  0.7366  0.6732  0.6006  0.7074  0.6828  0.7414
                  PLCC       0.8140  0.8261  0.8619  0.8928  0.8892  0.9033  0.8650  0.8307  0.8869  0.8575  0.9112
Direct average    SROCC      0.8350  0.8273  0.8821  0.9076  0.9031  0.9223  0.8780  0.8178  0.9081  0.8870  0.9256
                  KROCC      0.6557  0.6707  0.7218  0.7427  0.7457  0.7555  0.7094  0.6553  0.7531  0.7226  0.7770
                  PLCC       0.8422  0.8671  0.8938  0.9084  0.9127  0.9131  0.8841  0.8646  0.9152  0.8798  0.9297
Table 4. Performance comparison with AI-based IQA methods on three databases.

Database  Criterion  DoG-SSIM  dipIQ   DeepSim  DeepFR  DIQA    Proposed
TID2013   SROCC      0.9070    0.8940  0.8460   0.8760  0.8250  0.8606
          PLCC       0.9190    0.8770  0.8720   0.8940  0.8500  0.8822
CSIQ      SROCC      0.9430    0.9300  0.9190   0.9600  0.8840  0.9570
          PLCC       0.9540    0.9490  0.9190   0.9660  0.9150  0.9611
LIVE      SROCC      0.9610    0.9580  0.9740   0.9810  0.9750  0.9798
          PLCC       0.9630    0.9570  0.9680   0.9840  0.9770  0.9754
Average   SROCC      0.9370    0.9273  0.9130   0.9390  0.8947  0.9325
          PLCC       0.9453    0.9277  0.9197   0.9480  0.9140  0.9396
Table 5. PLCC values of IQA methods for each type of distortion.

Database  Distortion  SSIM    VIF     MAD     FSIMc   GMSD    VSI     GSM     RVSIM   CVSS    CAGS    Proposed
TID2013   AGN         0.8599  0.8555  0.8846  0.9151  0.9494  0.9515  0.9129  0.8674  0.9454  0.9395  0.9537
          ANC         0.7811  0.8169  0.8302  0.8869  0.9133  0.9170  0.8576  0.8331  0.9048  0.8835  0.9138
          SCN         0.8502  0.8680  0.8835  0.8240  0.9389  0.9441  0.9257  0.8474  0.9206  0.8637  0.9470
          MN          0.8231  0.8627  0.7858  0.8478  0.7514  0.811   0.7834  0.8600  0.8039  0.7975  0.8231
          HFN         0.9037  0.9083  0.9223  0.9432  0.9548  0.9639  0.929   0.9161  0.948   0.876   0.8358
          IN          0.7649  0.8346  0.331   0.7216  0.7573  0.8637  0.8168  0.8804  0.7426  0.8344  0.7971
          QN          0.7693  0.8515  0.8365  0.8096  0.9109  0.8726  0.8659  0.7740  0.8944  0.8763  0.9072
          GB          0.9662  0.9602  0.9336  0.9549  0.9094  0.9564  0.9629  0.9733  0.9292  0.9592  0.9028
          DEN         0.9609  0.8892  0.9611  0.9647  0.9756  0.9704  0.9759  0.9343  0.9649  0.9743  0.9802
          JPEG        0.9493  0.9198  0.9612  0.973   0.9834  0.9854  0.9765  0.9626  0.9753  0.9831  0.9846
          JP2K        0.9681  0.9366  0.9738  0.9741  0.9812  0.9831  0.9801  0.9689  0.9699  0.9734  0.9787
          JGTE        0.8859  0.8922  0.8955  0.9178  0.8786  0.9454  0.9217  0.9024  0.8856  0.9098  0.8980
          J2TE        0.8725  0.8650  0.8755  0.8010  0.9087  0.9194  0.8994  0.8835  0.8884  0.8315  0.8884
          NEPN        0.7755  0.7989  0.8552  0.8057  0.8162  0.8154  0.8063  0.7368  0.8405  0.7675  0.8154
          Block       0.5622  0.5620  0.4216  0.5488  0.6351  0.4875  0.6061  0.6393  0.4983  0.6636  0.6253
          MS          0.7849  0.7081  0.6726  0.7870  0.7661  0.8025  0.7721  0.6911  0.7537  0.7577  0.7176
          CTC         0.4691  0.8362  0.2970  0.6838  0.5098  0.6595  0.6470  0.2129  0.4278  0.6346  0.5466
          CCS         0.4998  0.3541  0.1601  0.8377  0.1516  0.8106  0.7384  0.4374  0.0893  0.408   0.7324
          MGN         0.7768  0.7908  0.8528  0.8654  0.891   0.913   0.8430  0.8291  0.8825  0.8768  0.8961
          CN          0.9043  0.8866  0.9343  0.9466  0.9562  0.9544  0.9492  0.9333  0.9480  0.9390  0.9643
          LCNI        0.9141  0.8761  0.9561  0.9564  0.9699  0.9634  0.9570  0.9256  0.9671  0.9639  0.9783
          ICQD        0.7976  0.8147  0.8694  0.8053  0.9192  0.8963  0.900   0.8169  0.9124  0.9176  0.9346
          CHA         0.9599  0.9381  0.9591  0.9798  0.9709  0.9752  0.9683  0.9764  0.9697  0.9713  0.9580
          SSR         0.9690  0.9161  0.9762  0.9766  0.9848  0.9800  0.9840  0.9657  0.9785  0.9739  0.9847
CSIQ      AGWN        0.9148  0.9563  0.9514  0.9383  0.4556  0.9636  0.9505  0.9389  0.9671  0.9646  0.9601
          JPEG        0.9774  0.9833  0.9827  0.9848  0.7621  0.9808  0.9804  0.9823  0.9865  0.9791  0.9860
          JP2K        0.9698  0.9782  0.9842  0.9818  0.7883  0.9745  0.9696  0.9799  0.9856  0.9616  0.9848
          AGPN        0.8916  0.9612  0.9548  0.8172  0.5573  0.8698  0.8499  0.9194  0.9581  0.8514  0.9553
          GB          0.9363  0.9562  0.9713  0.8933  0.6558  0.8761  0.8573  0.9737  0.9815  0.8394  0.9811
          CTC         0.7867  0.9233  0.9306  0.8914  0.8775  0.8686  0.8595  0.8581  0.9449  0.8672  0.9478
LIVE      JP2K        0.8848  0.9392  0.9754  0.9794  0.9771  0.8662  0.8564  0.9619  0.978   0.9776  0.9853
          JPEG        0.9761  0.9817  0.9806  0.9849  0.9832  0.9816  0.984   0.9853  0.9523  0.9861  0.9912
          AWGN        0.9765  0.9907  0.9881  0.9772  0.9655  0.9819  0.8904  0.9809  0.982   0.9857  0.9863
          GB          0.9237  0.9626  0.9415  0.9737  0.9618  0.8544  0.8565  0.9682  0.9709  0.9572  0.9751
          FF          0.8510  0.9527  0.952   0.8515  0.9408  0.8151  0.7925  0.9623  0.9585  0.9541  0.9625
Table 6. SROCC values of IQA methods for each type of distortion.

Database  Type     DISTS   SG-ESSIM  VSPSI   LGV     SWLGV   Proposed
TID2013   AGN      0.8450  0.9360    0.9471  0.9210  0.9360  0.9500
          ANC      0.7860  0.8550    0.8722  0.9110  0.9040  0.8859
          SCN      0.8590  0.9350    0.9385  0.8870  0.9300  0.9438
          MN       0.8140  0.7150    0.7712  0.8210  0.8420  0.7480
          HFN      0.8680  0.9200    0.9218  0.9160  0.8740  0.9211
          IN       0.6740  0.8330    0.8758  0.7590  0.7950  0.8082
          QN       0.8100  0.9110    0.8765  0.8660  0.9560  0.9063
          GB       0.9260  0.9690    0.9631  0.9520  0.9610  0.9074
          DEN      0.8990  0.9630    0.9502  0.9840  0.9760  0.9592
          JPEG     0.8970  0.9500    0.9650  0.9700  0.9520  0.9532
          JP2K     0.9310  0.9490    0.9725  0.9450  0.9680  0.9625
          JGTE     0.9060  0.8230    0.9234  0.8950  0.9000  0.8804
          J2TE     0.8650  0.8990    0.9246  0.9180  0.8740  0.8960
          NEPN     0.8330  0.8010    0.8076  0.7190  0.7950  0.8140
          Block    0.3020  0.6230    0.1716  0.6030  0.6010  0.6427
          MS       0.7520  0.7060    0.7715  0.6770  0.7560  0.6566
          CTC      0.4640  0.4520    0.4763  0.6590  0.6670  0.3666
          CCS      0.7890  0.0100    0.8116  0.7500  0.7580  0.7350
          MGN      0.7900  0.9000    0.9135  0.8190  0.8410  0.8903
          CN       0.9070  0.9160    0.9231  0.8380  0.8590  0.9356
          LCNI     0.9320  0.9520    0.9583  0.8730  0.9170  0.9754
          ICQD     0.8320  0.9280    0.8856  0.8450  0.8640  0.9261
          CHA      0.8790  0.8350    0.8923  0.7930  0.7880  0.8396
          SSR      0.9440  0.9640    0.9632  0.8000  0.8100  0.9695
          Average  0.8127  0.8227    0.8532  0.8384  0.8552  0.8531
Table 7. Comparison of the efficiency of different IQA methods.

IQA Index  Time Cost (s)
SSIM       0.126
GSM        0.271
CVSS       0.436
CAGS       0.483
VSI        0.536
RVSIM      0.939
MAD        1.256
Proposed   0.148