Article
Peer-Review Record

Fingertip Gestures Recognition Using Leap Motion and Camera for Interaction with Virtual Environment

Electronics 2020, 9(12), 1986; https://doi.org/10.3390/electronics9121986
by Inam Ur Rehman 1,*, Sehat Ullah 1, Dawar Khan 2,3, Shah Khalid 1, Aftab Alam 1, Gul Jabeen 4, Ihsan Rabbi 5, Haseeb Ur Rahman 1, Numan Ali 1, Muhammad Azher 1, Syed Nabi 1 and Sangeen Khan 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 26 July 2020 / Revised: 21 September 2020 / Accepted: 25 September 2020 / Published: 24 November 2020
(This article belongs to the Section Computer Science & Engineering)

Round 1

Reviewer 1 Report

I read the revision of this paper and I appreciate the authors' changes made in response to the suggestions brought up in the previous round of reviews.
Again, it is good to have several experiments analyzing the Leap Motion and the general camera.
Just a few minor things.

- Lines 8, 10: camera based -> camera-based.
- References should be listed in ascending order (Section 6.3.3: [27,56,48] -> [27,48,56]).

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

1. Low-cost, feature-based gestures for interaction in a virtual environment are studied in this paper.

2. Please change the ^ notation in Equation (3) to standard notation.
3. It is not statistically sound to draw conclusions based solely on means and standard deviations. Proper statistical methods, such as hypothesis testing, should be carried out to draw conclusions. For example, the results presented in Figure 9 may not support the conclusion "It means that G2 who used the camera-based system completed the task in less time as compared to G1 who used Leap Motion". All the results summarized in the conclusion section have similar issues. Overall, the statistical methods used in this research are fundamentally flawed; the conclusions are not supported by the statistical analysis of the experimental results.
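For illustration, a minimal sketch of such a two-sample hypothesis test in Python; the timing data below are hypothetical placeholders, not the paper's results.

```python
# Illustrative two-sample test on task completion times; the data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1_times = rng.normal(loc=14.0, scale=3.0, size=60)  # placeholder Leap Motion group (seconds)
g2_times = rng.normal(loc=12.5, scale=2.5, size=60)  # placeholder camera-based group (seconds)

# Welch's t-test: does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(g1_times, g2_times, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Only if p falls below a pre-chosen significance level (e.g. 0.05) can the
# difference in mean completion time be reported as statistically significant.
```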

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

 

The paper proposes simple gesture recognition, tested with Leap Motion sensors and applied in a virtual environment.

The field of research is certainly not new, and a very large number of solutions have already been proposed; similar work has been published over the past 15 years or so. The system was tested using a small number of tasks and provides lower accuracy than some existing systems. Despite the very substantial work done in the field, the paper does not include a thorough comparison with other methods.

It is not clear how this paper advances the field. The paper specifies three contributions, but none of them is genuinely new, and there is no explanation of how the work advances beyond the existing literature.

 

The project itself is good and interesting, but for the specific task of gesture recognition it does not add substantial value over what the literature already provides. Gesture recognition is a very well-studied problem, which makes it increasingly difficult to make a strong novel contribution.

 

The paper needs to show more novelty over the existing literature. Many sections, especially the abstract and introduction, need to be revised for language and style.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

  1. Line 286: what is t5.325 = 117.495?
  2. The t-test is commonly used for small sample sizes (n <= 30). You have 60 samples in each group, so the results should be approximately normally distributed. What is the particular reason for using a t-test?
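For context, a quick illustrative check (not taken from the paper) of how close the t distribution already is to the normal at this sample size:

```python
# With two groups of 60, the pooled df is 118 and the t distribution is very
# close to the standard normal, so t-test and z-test critical values nearly coincide.
from scipy import stats

df = 60 + 60 - 2                          # pooled degrees of freedom for two groups of 60
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided t critical value  (~1.980)
z_crit = stats.norm.ppf(1 - alpha / 2)    # two-sided z critical value (~1.960)
print(f"t critical (df={df}): {t_crit:.4f}")
print(f"z critical:          {z_crit:.4f}")
```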

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The paper is not exceptionally innovative and does not make a major contribution to this well-studied field. On the other hand, the paper is not scientifically wrong, and if the editors want to publish it, I do not see any harm in doing so.

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

The authors of this paper propose a few simple gestures based on one finger to navigate as well as select, release, and translate objects in a 3D scene (projected on a standard screen). The effectiveness of this gesture set is tested by investigating two tracking technologies: a Leap Motion sensor and camera-based tracking.

Moving the index finger yields navigation, bending the finger performs selection, and stretching it releases an object. My main concern with this paper is that it completely lacks novelty and innovation. The power of the gesture set is severely limited, the applied methods are commonplace, and the described algorithms are trivial. Moreover, the reader expects interaction in a true virtual environment, not a 3D scene projected on a standard screen. The quality of the 3D scene can be gauged from Figure 2 – a truly antique rendering style. The paper is far from the current frontiers of gesture and virtual reality research.
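For reference, the single-finger mapping summarized above can be sketched as follows; the names, types, and state handling are illustrative assumptions, not the authors' implementation.

```python
# Generic sketch of the single-finger mapping summarized above; names and the
# way state is tracked are assumptions for illustration only.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FingerSample:
    tip: Tuple[float, float, float]  # 3D fingertip position from the tracker
    extended: bool                   # True = finger stretched, False = finger bent

def interpret(prev: FingerSample, curr: FingerSample, holding: bool) -> str:
    """Map a fingertip state transition to an interaction event."""
    if prev.extended and not curr.extended:
        return "SELECT"      # bending the finger selects the pointed-at object
    if not prev.extended and curr.extended and holding:
        return "RELEASE"     # stretching the finger releases the held object
    if curr.tip != prev.tip:
        return "NAVIGATE"    # moving the fingertip navigates / translates
    return "IDLE"
```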

Too much space is given to describing the technicalities of the Leap Motion controller; this section reads like a technical information sheet from the manufacturer. The study does not report significance measures for the comparison of the study conditions. Instead of testing different interaction styles or gestures, a technical comparison is provided. The reader is left with no insights or conclusions for his or her future work and research.

The quality of presentation is overall very low. Text in figures is hardly readable (Figures 3 and 14) and images are warped (Figures 4 and 5). Additionally, the writing style of the paper is very poor, e.g. "With hand above 250mm above the Leap Motion considerably degrades its accuracy.". Beyond this example, there are numerous awkward sentences, missing commas, and grammar mistakes.

Due to the lack of novelty and the poor presentation, I recommend rejecting this paper.

Reviewer 2 Report

This paper proposes a single-finger interaction technique for virtual environments (VEs). Through the proposed technique, the user can perform simple selection, release, and translation tasks, and its effectiveness was analyzed with a Leap Motion and a general camera-based system. The idea of easily interacting with a single finger looks good, and even though the algorithm is simple, the user-study results show that the selection, release, and translation tasks can be completed accurately. It also seems good to have several experiments and analyses of the Leap Motion and the general camera.

However, the introduced approach provides only a minor contribution on top of existing work in the field. For example, in the camera-based system, the method of deciding the finger's open and close state based on its area looks very limited. In the proposed algorithm, if the detected area of the finger is less than the threshold, it is judged to be in the open state; otherwise it is closed. This method seems applicable only when the finger and camera are perpendicular, and cannot be applied when the finger is tilted. In addition, this part requires a colored finger sleeve for calculating the finger's pose and area. There are now methods that can estimate a finger's pose without a finger sleeve, so I am curious why the authors use this algorithm.

It also seems necessary to describe the parameters used in the algorithm. For example, on page 9, "THA = DA + 1/4 DA": why is the threshold DA + 1/4 DA? Both "THA" and "T1" denote threshold values, but the notation is used inconsistently. The algorithms on pages 5 and 9 also need more concise writing.
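For reference, a minimal sketch of the area-threshold decision being questioned here; the colour-segmentation step and the use of DA as a calibrated reference area are assumptions for illustration, not the authors' code.

```python
# Sketch of the questioned area-threshold rule ("open" if area < THA, where
# THA = DA + 1/4 DA). The colour range and the meaning of DA as a reference
# area are illustrative assumptions, not the authors' code.
import cv2
import numpy as np

def sleeve_area(frame_bgr, hsv_lo=(40, 60, 60), hsv_hi=(80, 255, 255)) -> float:
    """Return the pixel area of the largest blob matching the sleeve colour."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max((cv2.contourArea(c) for c in contours), default=0.0)

def finger_state(area: float, DA: float) -> str:
    """Classify the finger as open or closed from its detected area."""
    THA = DA + 0.25 * DA     # threshold as given in the paper: THA = DA + 1/4 DA
    return "open" if area < THA else "close"
```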

Finally, in Section 6.3.1, the analysis of why the selection and release tasks are more accurate with the camera-based system than with the Leap Motion seems insufficient.

Small notes: capitalization issues, e.g. line 2: "gesture-based", line 409: "camera-based".

Reviewer 3 Report

The paper compares two gesture interaction systems, Leap Motion and a camera-based system.

The state-of-the-art review is quite extensive. However, it lacks a critical assessment of the findings from the literature. I would add a discussion of gaps and/or drawbacks at the end of Section 2 to justify the proposed research. The experimental setup is not clearly described. Moreover, the implementation of the camera-based approach should be explained further.

In the comparison of the two systems, adequate attention should be given to the necessity of a finger cap for the camera to recognize the finger. This would be a strong limitation for camera-based applications. Is it possible to overcome this problem?

Other minor issues:

  • In the abstract, row two: “Gesture-based…” should be capitalized.
  • Page 13: the Leap Motion frame rate turns out to be lower than the camera's. I would add a consideration of the quantity of information processed.
  • Page 16: “interaction area”: would “interaction volume” be better?