Article
Peer-Review Record

Detection and Classification of Human-Carrying Baggage Using DenseNet-161 and Fit One Cycle

Big Data Cogn. Comput. 2022, 6(4), 108; https://doi.org/10.3390/bdcc6040108
by Mohamed K. Ramadan 1,2,*, Aliaa A. A. Youssif 3 and Wessam H. El-Behaidy 2
Reviewer 1:
Reviewer 2: Anonymous
Submission received: 7 September 2022 / Revised: 25 September 2022 / Accepted: 28 September 2022 / Published: 6 October 2022
(This article belongs to the Special Issue Advancements in Deep Learning and Deep Federated Learning Models)

Round 1

Reviewer 1 Report (Previous Reviewer 2)

The authors have corrected the drawbacks and the paper can be published now.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report (New Reviewer)

The paper proposes a transfer learning approach for classifying persons carrying baggage. The authors evaluated four datasets using a pre-trained DenseNet-161 for binary and multi-class classification. The paper is well written and easy to understand, and the related work section and experiments are satisfactory. Here are some comments.

In some paragraphs, I noticed a difference in font color. There are also some sentences highlighted in yellow. I am wondering why.

1. How did you annotate the data? Did you do it manually, or did you use Amazon Mechanical Turk or another crowd-sourcing tool? That is not clear from the paper.

2. Figure captions should convey the main message of the image to the reader.

3. Figures and tables should appear on the same page as the text that refers to them (for example, line #194).

4. As I can see, the F1-scores are higher than 95%. I am curious about the generalizability of this method. Would it be possible for you to train on one dataset (for example, PETA) and test on the other datasets (for example, INRIA and MSMT17)? Please evaluate the model for binary and multi-class classification, if possible. Validation sets are not required; I just want to make sure this model is not overfitting.
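The cross-dataset protocol the reviewer suggests can be sketched as follows. This is a minimal illustration, not the authors' code: the `predict` stand-in, the toy feature values, and the dataset entries are placeholders, and in practice `predict` would be the DenseNet-161 classifier trained on the single source dataset (e.g. PETA).

```python
def f1_score(y_true, y_pred):
    """Binary F1 = 2 * precision * recall / (precision + recall)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cross_dataset_eval(model_predict, test_sets):
    """Score one fixed, already-trained model on each held-out
    dataset separately; no validation set is involved."""
    return {name: f1_score(labels, [model_predict(x) for x in images])
            for name, (images, labels) in test_sets.items()}

# Toy stand-in for a trained classifier: thresholds a scalar feature.
predict = lambda x: 1 if x > 0.5 else 0

# Hypothetical held-out datasets (feature values, ground-truth labels).
test_sets = {
    "INRIA":  ([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]),
    "MSMT17": ([0.6, 0.4, 0.8, 0.3], [1, 1, 1, 0]),
}

scores = cross_dataset_eval(predict, test_sets)
```

A large gap between the source-dataset F1 and the held-out scores in `scores` would indicate the overfitting the reviewer is concerned about.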

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The problem addressed in the paper, and the approach to solving it, are good but not novel. Basically, this paper lacks novelty. However, the reannotation of the data is something that can be taken as a takeaway from the paper.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The work is devoted to solving a specific classification problem: detecting the presence of carried luggage in an image of a human figure. The state-of-the-art deep convolutional network DenseNet-161 is used. The work does not introduce new ideas or methods, but existing approaches are selected and applied adequately and accurately. So the merit of this work is an experimental study, and its result is a performance benchmark for solving this specific problem.

All parts of the study are presented: introduction, related work review, problem statement, method description, experiment description, results, and discussion. The presentation is well structured and easy to follow.

There is a major drawback in the research design (at least in its description). The datasets used in the study comprise images that were "collected" from several publicly available databases. For the training step this is acceptable: one can train the classifier in whichever way makes it possible to obtain the result. However, using "collected" datasets for testing is questionable. What does the word "collected" mean? If it refers to some automatic procedure used to exclude images not suitable for processing (poor quality, no human figure, etc.), this might be acceptable; in that case, however, the procedure should be described, its necessity should be justified, and its impact on the final quality should be reported. But if "collected" means manual selection, the results prove nothing, since this amounts to human intervention in the image processing. The authors should run their classifier on all images in all the databases (except those used in training, of course), or at least on a randomly selected subset (if manually labeling a large number of images is a problem).
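The random-subset fallback the reviewer proposes can be sketched in a few lines. This is a hypothetical illustration, not part of the paper: the function name, the `k` budget, and the integer image ids are all assumptions; the point is that the subset excludes training images and is drawn with a fixed seed so the selection is reproducible and free of manual intervention.

```python
import random

def sample_test_subset(all_ids, train_ids, k, seed=0):
    """Randomly pick k image ids from a database, excluding those
    used in training, so the manual-labeling effort stays bounded.
    A fixed seed makes the subset reproducible."""
    pool = sorted(set(all_ids) - set(train_ids))  # held-out images only
    rng = random.Random(seed)
    return rng.sample(pool, min(k, len(pool)))

# Toy example: ids 0..999 exist, even ids were used for training,
# so only odd ids are eligible for the test subset.
subset = sample_test_subset(range(1000), range(0, 1000, 2), k=50, seed=42)
```

Reporting the seed and the subset size alongside the results would let readers verify that no hand-picking influenced the test scores.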

Minor issues.

1. Figures 5-9, part b: there is no need to draw such large squares to hold such small numbers.

2. Figures 5-13: the font size should be increased.


Author Response

Please see the attachment.

Author Response File: Author Response.pdf
