Article
Peer-Review Record

Recursive Feature Elimination for Improving Learning Points on Hand-Sign Recognition

Future Internet 2022, 14(12), 352; https://doi.org/10.3390/fi14120352
by Rung-Ching Chen 1, William Eric Manongga 1 and Christine Dewi 2,*
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 31 August 2022 / Revised: 22 November 2022 / Accepted: 24 November 2022 / Published: 26 November 2022
(This article belongs to the Special Issue Trends of Data Science and Knowledge Discovery)

Round 1

Reviewer 1 Report

Thank you very much for the opportunity to review this manuscript. The manuscript investigates hand-pose detection and analyzes the accuracy of different models. The paper is an interesting contribution to the field and provides important knowledge for developing communication tools.

General comments:

1) The whole manuscript needs proofreading for proper English, and the paper needs to be written at an academic level.

L.54-57: This section should raise the research questions (not the contribution).

Section 2.3, L90-L106: This section needs to be improved and should include techniques from data mining to explain the data cleaning process.

Section 3.1 Research Design. The design needs to be expressed verbally, not only as a figure.

Section 4 Results and Discussion. This section should report the results as well as try to explain the findings. As it stands, it presents results without any discussion. The discussion part of this section needs to be elaborated.

Section 5 Conclusion. This section should draw conclusions based on your results, not repeat them.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

In this paper, the authors perform recursive feature elimination on hand-sign detection (numerical). The authors study their implementation on four different datasets. However, in light of my comments below, I suggest the paper is not ready to be accepted for publication. 

1) My main concern is the lack of novelty. The use of RFE, random forest, MediaPipe, etc. are all very commonplace implementations. This paper seems to have just pieced together several well-established methods for a specific case study.

2) The authors say that their proposed model has fewer parameters than a CNN, which is true. But the authors' method requires an additional feature-extraction step using MediaPipe, which a CNN can do by itself. It is therefore not clear whether the proposed method has the computational advantage claimed.

(other minor comments)

3) How is the cropping done for the fourth dataset? Manually?

4) The authors claim "feature selection methods currently available in Python cannot accept a 2-dimensional array as the input feature". Are the authors basing this claim on existing Python packages? If this is the case (which I doubt, because a 2-D array can easily be flattened to a 1-D array), this is an avenue for novelty! Further detail is necessary.
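To illustrate the flattening point above: a minimal sketch, assuming the landmark features resemble MediaPipe's 21 hand landmarks with (x, y, z) coordinates (the random arrays here are stand-ins, not the paper's data). Reshaping each (21, 3) sample into a 63-element row makes it directly acceptable to scikit-learn's feature-selection tools.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
n_samples = 100
# Stand-in for per-sample MediaPipe output: 21 landmarks x 3 coordinates.
landmarks = rng.random((n_samples, 21, 3))
# Flatten each 2-D sample to one 1-D feature vector: (100, 21, 3) -> (100, 63).
X = landmarks.reshape(n_samples, -1)
y = rng.integers(0, 10, size=n_samples)  # stand-in class labels

# The flattened array works with sklearn feature selection as-is.
selector = RFE(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=30,
)
selector.fit(X, y)
print(X.shape, int(selector.support_.sum()))  # (100, 63) 30
```

Selected 1-D feature indices can be mapped back to (landmark, coordinate) pairs via `divmod(index, 3)`, so no information about which landmark a feature came from is lost by flattening.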

5) Does RFE-based feature selection differ from feature selection based on the feature importances of a single run?
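The distinction in this question can be made concrete: a single-run approach ranks features by importances from one fit and keeps the top k, whereas RFE refits the model after dropping the weakest feature at each step, so the two selections can diverge. A toy comparison on synthetic data (not the paper's datasets):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)
k = 5

# Single run: fit once, rank importances, keep the top-k features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
single_run = set(np.argsort(rf.feature_importances_)[-k:])

# RFE: repeatedly drop the least important feature and refit until k remain.
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=k).fit(X, y)
rfe_selected = set(np.flatnonzero(rfe.support_))

print(sorted(single_run), sorted(rfe_selected))
```

Because importances are recomputed after every elimination, RFE can retain a feature that looked weak in the initial fit (e.g., one whose signal was masked by a correlated feature that gets eliminated), which is exactly why the two methods may select different subsets.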

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 3 Report

The paper is well-organized and well-written. The introduction and literature are adequate. The presented method is well-detailed. The necessary tests are exposed, and the more efficient current techniques in this subject have been used to compare with the presented method. The obtained results are relevant compared with that techniques. Finally, the obtained conclusions are supported by the presented experimentation.  

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Approved after revision

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The novelty of the work is not convincing. The framework of feature extraction, selection, model development, training, and validation is standard practice in the data science field. The authors only use different pre-established tools within each part of the framework for a specific case study. I agree that using palm-centroid distances is novel, but fail to see novelty otherwise.

Some of my queries are insufficiently addressed in the authors' response. I was hoping for some mathematical/model comparisons as opposed to simple descriptions. For example, regarding my comment on CNN vs. the proposed method (MediaPipe + RFE + ...) on performance and total number of parameters, I was expecting an end-to-end run of a CNN model, comparing its accuracy as well as run time with the proposed model.

In response to one of my questions, the authors claim that a CNN requires more images for training, with which I agree. But how does MediaPipe achieve the same with fewer images? An actual model-to-model comparison is needed to really answer why the proposed model works best.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Round 3

Reviewer 2 Report

I appreciate the authors' edits based on my comments. Just one small suggestion: in Table 4, please also add the input to the CNN and to the proposed approach. I believe the inputs are different for the two models (for the CNN it is an image, and for the proposed approach it is features from MediaPipe, if I understand correctly). I presume "trained model size" in Table 4 for the proposed method does not account for the MediaPipe size. The CNN is doing feature extraction and hence has more parameters/larger size, while the proposed approach uses features already extracted by MediaPipe. Therefore the "Trained model size" comparison is not fair. The authors should add a footnote to the table or consider removing that row.

 

The same logic follows for comparing the training time. Are the authors considering training time for MediaPipe? Please make sure the comparisons are consistent. 

 

Just a minor point: the authors attached a corrected PDF with a comment/format column to the right. This was a little distracting.

 

I suggest accepting for publication after the authors addressed the above comments. 

Author Response

Please see the attachment.

Author Response File: Author Response.docx
