Article
Peer-Review Record

A Visual Enhancement Network with Feature Fusion for Image Aesthetic Assessment

Electronics 2023, 12(11), 2526; https://doi.org/10.3390/electronics12112526
by Xin Zhang 1, Xinyu Jiang 2,*, Qing Song 3 and Pengzhou Zhang 4
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 27 April 2023 / Revised: 29 May 2023 / Accepted: 1 June 2023 / Published: 3 June 2023

Round 1

Reviewer 1 Report

This paper proposes FF-VEN, a feature-fusion-based visual enhancement network for image aesthetic assessment. The proposed network consists of a visual enhancement module and a shallow and deep feature fusion module. Experimental results show the effectiveness of the proposed method. I have the following comments:

1. Since the backbone network has many layers, why are the outputs of two specific layers chosen as the so-called shallow and deep features? What about using other layers for feature fusion?

2. An ablation study would help validate each proposed technique, such as the individual modules, the adaptive filter, and the feature fusion strategy.

3. Some numbers are missing in Tables 4-5 and 7.

4. In addition to image aesthetic assessment, several recent works on natural image quality assessment (IQA) should be reviewed, including GraphIQA, MetaIQA, and Lifelong IQA.

5. The presentation of the paper should be further improved. For example, some figures are blurry (such as Figures 8 and 9). Please proofread the paper and check all figures.

 

 


Author Response

We thank the reviewer for your constructive and insightful comments on our paper and for the opportunity to improve its quality. We have carefully revised our manuscript in accordance with your comments and suggestions. Our answers and explanations of the changes made to the original manuscript are attached.

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper is well written, and I am not surprised that the method works with adaptive filters. It would be better if optimized parameters for the filters were suggested.

Author Response

We thank the reviewer for your constructive and insightful comments on our paper and for the opportunity to improve its quality. We have carefully revised our manuscript in accordance with your comments and suggestions. Our answers and explanations of the changes made to the original manuscript are attached.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Thanks for the response. The authors tried to address my comments, but there are still some issues.

1. The definitions of shallow and deep features are confusing. Why are the outputs of the other layers assumed to be shallow features? Do you have any references supporting this assumption?

2. An ablation study means removing a specific component while retaining the other parts and then testing the performance. It seems the authors failed to address this point.

3. What do you mean by "some NNs only regard image aesthetic assessment as classification and regression tasks"? The reasons for the missing numbers remain unclear.

4. Yes, traditional IQA is different from IAA, so it would be better to point out this difference and cite some recent literature, as I suggested in my previous comments. With this clarification, one can better design a model for IAA; otherwise, why not simply adopt traditional IQA methods here?

5. The text in these figures is still blurry.

 


Author Response

We thank the reviewer for your constructive and insightful comments on our paper and for the opportunity to improve its quality. We have carefully revised our manuscript in accordance with your comments and suggestions. Our answers and explanations of the changes made to the original manuscript are attached.

Author Response File: Author Response.pdf

Round 3

Reviewer 1 Report

Thanks for your response, but there are still some issues.

The authors claimed that the deep features refer to the output of the final convolutional layer, while the others are shallow. However, for relatively deeper CNNs such as ResNet, the final convolutional layer is clearly not the only layer that produces deep features.

The results of VE-CNN and SDFF are not significantly different; for example, SDFF's ACC is better, but it does not perform as well as VE-CNN on LCC and the other metrics.

From the authors' response, it seems that the final output of IAA tasks is a regression score. Is there any IAA work that performs only classification without regression?

The statement that IAA is similar to subjective evaluation in IQA is confusing. What do you mean by "subjective evaluation in IQA focuses on objective scoring"? IAA models should also predict the subjective scores of IAA subjective data.

The claim that most IQA methods do not consider data enhancement is not correct. Many IQA models actually adopt this technique, since the subjective datasets are usually small.

I agree that using IQA methods for IAA needs further exploration, but at the very least the authors could point out the differences and add the references in the paper.

 


Author Response

We thank the reviewer for your constructive and insightful comments on our paper and for the opportunity to improve its quality. We have carefully revised our manuscript in accordance with your comments and suggestions. Our answers and explanations of the changes made to the original manuscript are attached.

Author Response File: Author Response.pdf
