Article
Peer-Review Record

A Photo Identification Framework to Prevent Copyright Infringement with Manipulations

Appl. Sci. 2021, 11(19), 9194; https://doi.org/10.3390/app11199194
by Doyoung Kim 1, Suwoong Heo 1, Jiwoo Kang 1,*, Hogab Kang 2 and Sanghoon Lee 1,3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 20 August 2021 / Revised: 27 September 2021 / Accepted: 28 September 2021 / Published: 2 October 2021

Round 1

Reviewer 1 Report

The paper presents a new method for copyright protection with manipulation detection. The method is well described and appears to outperform prior work.

Author Response

Thank you for the careful comments.

We updated our paper while providing point-to-point responses.

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The paper presents an innovative method of photo copyright identification from a collage of cropped photos. The three main steps of the proposed algorithm (Image RoI Detection, Image Hashing, and Image Verification) are very well described, both algorithmically and mathematically, and the results obtained are better than those of the other established methods used for comparison.

The selection of cited papers and books is adequate and relevant to the topic discussed in the paper.

However, there are a few specific mistakes:

  • At lines 308-309, it is mentioned that “[…] in Section 4.2, 200,000 frames were sampled from Youtube 8M dataset”, but Section 3.4.2 mentions only the Youtube 8M dataset, not the fact that 200,000 frames were sampled.
  • At line 317, it is mentioned that “We set the thresholds to Thobj = 0.7 and Thimg = 0.9” without any explanation of how these values were established, why they should work from a theoretical point of view, or what other values were tested to support the claim that these thresholds are appropriate for the task.
  • At lines 327-328, it is mentioned that “For AKAZE [44] feature extraction, the threshold and number of octaves were set to 10^-4 and 3, respectively, and the reprojection threshold eth in Algorithm 1 was set to 0.2”. What was the reason for choosing these numerical values? Also, the reprojection threshold eth appears in Algorithm 2, not in Algorithm 1.
  • At line 381, it is mentioned that “Figure 10 shows the receiver operating characteristic (ROC) curve”, but the ROC curves are in Figure 11.
  • At lines 389-390, it is mentioned that “Table 6 also provides F1-scores and AUC (Area Under the Curve)”, but F1-score is in Table 5


The English is good overall, with only occasional misspellings:

  • Line 260: “By using matached image features” instead of “By using matched image features”
  • Line 308: “as as mentioned” instead of “as mentioned”
  • Line 367: “gemetric manipulation” instead of “geometric manipulation”
  • Line 412: “as false postive rate” instead of “as false positive rate”


Author Response

Thank you for the careful comments.

We updated our paper while providing point-to-point responses.

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

This article proposes a photo copyright identification framework that handles manipulations of stolen photos.

The article is well structured, and the results are clearly presented and compared with state-of-the-art investigations.


I have a few minor comments, as follows:

  • In lines 88 and 107, references are cited out of order. Please check the reference ordering.
  • Figures should be placed close to where they are first mentioned. Check Fig. 10.
  • In Figure 11, where is the plot of “ROC of mse”?
  • Table 5 compares the performance of different similarity metrics using F1-score and AUC. In my opinion, it would make the results more comparable if the authors also compared the different methods in terms of the false negative (FN) rate at fixed false positive (FP) rates, e.g., reporting FN when FP = 0.01, 0.05, and 0.1. This can be added to the same table.
  • The conclusion is very short. The paper is almost 20 pages; however, you conclude it in one paragraph. I believe you can draw a better conclusion.
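The fixed-FP comparison suggested above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function name and score conventions are assumptions, with similarity scores where a higher score means the pair is more likely a match. The threshold is chosen from the non-matching (negative) pair scores so that at most the target fraction become false positives, and the false-negative rate on matching (positive) pairs is reported at that threshold.

```python
def fnr_at_fpr(pos_scores, neg_scores, target_fprs):
    """For each target false-positive rate, pick the score threshold from
    the negative (non-matching pair) scores and report the false-negative
    rate on the positive (matching pair) scores at that threshold."""
    # Sort negatives from highest to lowest; a pair is declared a match
    # when its score strictly exceeds the threshold.
    neg_sorted = sorted(neg_scores, reverse=True)
    results = {}
    for fpr in target_fprs:
        # Threshold that lets at most `fpr` of negatives score above it
        # (the k highest-scoring negatives become false positives).
        k = int(fpr * len(neg_sorted))
        thresh = neg_sorted[k] if k < len(neg_sorted) else float("-inf")
        # False negatives: positive pairs at or below the threshold.
        fn = sum(1 for s in pos_scores if s <= thresh)
        results[fpr] = fn / len(pos_scores)
    return results
```

Reading the FN rate off at a handful of fixed FP operating points (0.01, 0.05, 0.1) summarizes the same trade-off as the ROC curve, but in a form that fits directly into a comparison table.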

Author Response

Thank you for the careful comments.

We updated our paper while providing point-to-point responses.

Please see the attachment.

Author Response File: Author Response.docx
