Article
Peer-Review Record

Deep Learning Model with Transfer Learning to Infer Personal Preferences in Images

Appl. Sci. 2020, 10(21), 7641; https://doi.org/10.3390/app10217641
by Jaeho Oh 1, Mincheol Kim 2 and Sang-Woo Ban 1,2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 30 September 2020 / Revised: 19 October 2020 / Accepted: 27 October 2020 / Published: 29 October 2020

Round 1

Reviewer 1 Report

The authors propose to use relabeled Fashion-MNIST, LFW, and indoor scene recognition datasets for personal preference classification. However, the details of how the datasets were relabeled are missing. Did the authors recruit subjects to relabel the datasets? How many subjects were recruited? What instructions were given to the subjects for relabeling? What was the criterion for subjects to classify images as preference or non-preference?

The heat maps in Figure 8 show that the non-preference data's heat map has a high response on the tie, while the preference data's heat map has a high response on the face. Does this mean the trained classifier is biased towards tie detection when identifying the preference class? Users' preference classification is subjective, so it does not make sense to use the LFW and indoor scene recognition datasets for this task. Again, this is related to how the LFW and indoor scene recognition datasets were relabeled: what was the criterion for labeling a face image into the preference or non-preference category?

The authors propose to use transfer learning to improve classification accuracy and use CAM to visualize heat maps of the results. However, both transfer learning and CAM have already been proposed in the literature, so neither constitutes a contribution of this paper.

Typos: page 3: "iamges" -> "images"

Author Response

Please see the attachment.

Author Response File: Author Response.doc

Reviewer 2 Report

Overall, the paper is well written and organized. However, there are too many English errors. Please refer to the attached document, where you can find some of these errors highlighted, along with some additional comments on the manuscript. I encourage the authors to use a professional English revision service.

Major comments/issues

In all the tables in the results section, how have you measured the accuracy? Are the presented percentages the percentages of correct classifications? How did you determine the numbers of correct and incorrect classifications (i.e., true positives, false positives, true negatives, and false negatives)? You must explain this in detail.
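For clarity: if accuracy is meant in the usual confusion-matrix sense (this is an assumption, as the manuscript does not define it), it would be Accuracy = (TP + TN) / (TP + TN + FP + FN), i.e., the fraction of test images assigned to the correct preference/non-preference class.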

Concerning the “Grad_CAM” approach: for how many images can you state that “the heat map provided by Grad_CAM is generally set in a meaningful feature area where human preference can be attracted in the image”? Have you tested it for all images in the three data sets? (All the images that you have tested?) I agree that this technique “might be a possibility of explaining the user’s preference”, but you have to test it further and present some justification metrics.
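For reference, the Grad-CAM heat map referred to above is typically computed as sketched below (a minimal sketch assuming a TensorFlow/Keras classifier; the function name, layer-name argument, and shapes are illustrative and not taken from the manuscript):

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index):
    """Return a [0, 1] heat map showing which regions drive class_index."""
    # Model that outputs both the last conv feature maps and the predictions.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    # Gradients of the class score w.r.t. the conv feature maps.
    grads = tape.gradient(class_score, conv_out)
    # Channel weights: global average of the gradients (Grad-CAM weighting).
    weights = tf.reduce_mean(grads, axis=(1, 2))
    # Weighted combination of feature maps, ReLU, normalized to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))[0]
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upsample to the image size before overlaying
```

Reporting such maps, or a localization score derived from them, over the full test sets rather than a few selected examples would substantiate the claim.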


Minor comments/issues

Throughout the whole document, use one space before parentheses (both "(" and "["); it helps simplify reading.

Define every acronym/abbreviation the first time you use it.


Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.doc
