Article
Peer-Review Record

Gesture-Based User Interface for Vehicle On-Board System: A Questionnaire and Research Approach

Appl. Sci. 2020, 10(18), 6620; https://doi.org/10.3390/app10186620
by Krzysztof Małecki *, Adam Nowosielski and Mateusz Kowalicki
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 29 July 2020 / Revised: 10 September 2020 / Accepted: 17 September 2020 / Published: 22 September 2020

Round 1

Reviewer 1 Report

REVIEW

 

Article titled: “Gesture-based user interface for vehicle on-board system: a survey and research approach”

  Applied Sciences  no. 899666

 

List of Authors:

 

Krzysztof Małecki, Adam Nowosielski, Mateusz Kowalicki

 

 

In this article, the Authors analyse the potential of hand gesture interaction in the vehicle environment for physically challenged drivers. To this end, the Authors propose a gesture-based interface for a vehicle on-board system. As the Authors claim, the system has been developed using available state-of-the-art solutions and investigated in terms of usability on a group of people with different physical limitations who drive a car on a daily basis, mostly using steering aid tools. The paper demonstrates the potential of hand gesture interaction in the vehicle environment for both able-bodied and physically challenged drivers. Test scenarios consisting of sets of gestures assigned to control multimedia activities in an exemplary vehicle on-board system were also defined.

 

 

My direct remarks concerning this article:

 

  1. In Section 3, the Authors present the main concepts of the interface, where the gesture acquisition, gesture recognition and gesture employment are addressed – Figure 1 (A general scheme of proposed approach).

Taking into account the world literature on this subject, various conditions and mechanisms of driving cars, it would be advisable to precisely list the criteria according to which the selection of gesture acquisition was made.

The Authors of the article should explain this carefully.

 

  2. For the study, the Authors applied the Intel RealSense SR300 camera (lines 158-163). The Authors did not conduct research using a different camera; thus, they did not investigate the effect of the camera type on the test results.

The authors of the article should clarify this issue.

 

  3. As the Authors of the article wrote (lines 212-214): “…A total of 40 voluntary participants took part in the first stage, including fully physically fit people and 17 with mobility disabilities, regardless of gender…”.

The above should be treated as a statistical study. Therefore, are 40 people enough?

The Authors should explain this precisely.

 

  4. The Authors wrote (lines 292-294): “…The accuracy of gesture recognition by particular scenarios is presented in Fig. 11. Figure 12 shows the accuracy from the user’s perspective. Both graphs depict the aggregated information accordingly for the left and right hand…”.

How did the authors calculate the accuracy?

This should be clearly explained.

 

  5. The problem of “gesture recognition” is also very important in gesture-based systems. So, applying, for example, neural networks or other sophisticated algorithms during gesture recognition seems to be an obvious addition to the References.

 

  6. There is also no comment from the Authors on the computational burden of the proposed method, or on whether their solution is used in equipment working in real conditions. This should be clearly explained.

 

  7. The Authors do not present in what way the proposed method is better than other methods which can be found in the References. This should be clearly explained.

 

The article brings up a very significant problem. The topic of this article falls within the scope of this Journal. The work should be re-reviewed after it is completed with all necessary answers to the questions above, as well as the required comments and information.

 

 

Comments for author File: Comments.pdf

Author Response

Cover letter for revised manuscript:

“Gesture-based user interface for vehicle on-board system: a survey and research approach”

 

Dear Reviewer,

First of all, we would like to thank you for all valuable and important remarks and suggestions. We referred to all the comments.

The detailed changes and answers are attached below.

Thank you so much for the opportunity to make these changes.

Authors

 

 

The answers

Reviewer 1 comments:

 

==================

In Section 3, the Authors present the main concepts of the interface, where the gesture acquisition, gesture recognition and gesture employment are addressed – Figure 1 (A general scheme of proposed approach).

Taking into account the world literature on this subject, various conditions and mechanisms of driving cars, it would be advisable to precisely list the criteria according to which the selection of gesture acquisition was made.

The Authors of the article should explain this carefully.

Thank you for this advice. We enhanced the section regarding gesture acquisition. We now consider other modalities (e.g. wireless technology, RFID, accelerometers) and carefully analyze other computer vision methods (thermal imaging, stereovision, etc.).

 

For the study, the Authors applied the Intel RealSense SR300 camera (lines 158-163). The Authors did not conduct research using a different camera. Thus, they did not investigate the effect of the camera type on the test results.

The authors of the article should clarify this issue.

That is true. We did not investigate the effect of the camera type on the test results. Our aim was to investigate the potential of gesture recognition for drivers with physical disabilities. We did not do research on the gesture recognition algorithm itself and based our investigation on available solutions. The explanation of the Intel RealSense choice has been updated in the paper. We opted for a solution which is known for good operation, so we could investigate how well current state-of-the-art technology is suited for people with physical disfunctions.

 

 

As the Authors of the article wrote (212-214) “…A total of 40 voluntary participants took part in the first stage, including fully physically fit people and 17 with mobility disabilities, regardless of gender…”.

The above should be treated as a statistical study. Therefore, is 40 people enough?

The Authors should explain this precisely.

Recruiting people for voluntary, unpaid completion of the survey is a difficult task today (especially disabled people who are, at the same time, drivers). There are known works in which 17 [1], 36 [2] or 13 [3] participants took part. Hence, it seems that 40 volunteers are enough to draw some conclusions.

  1. Jiralerspong, T., Nakanishi, E., Liu, C., & Ishikawa, J. (2017). Experimental study of real-time classification of 17 voluntary movements for multi-degree myoelectric prosthetic hand. Applied Sciences, 7(11), 1163.
  2. Hirata, K., Tanimoto, H., Sato, S., Hirata, N., Imaizumi, N., Sugihara, Y., ... & Akagi, R. (2020). Carbon dioxide hydrate as a recovery tool after fatigue of the plantar flexors. Journal of Biomechanics, 109900.
  3. Inzelberg, L., David-Pur, M., Gur, E., & Hanein, Y. (2020). Multi-channel electromyography-based mapping of spontaneous smiles. Journal of Neural Engineering, 17(2), 026025.

 

The Authors wrote (292-294) “…The accuracy of gesture recognition by particular scenarios is presented in Fig. 11. Figure 12 shows the accuracy from the user’s perspective. Both graphs depict the aggregated information accordingly for the left and right hand…”.

How did the authors calculate the accuracy?

This should be clearly explained.

The accuracy has been calculated as the ratio of correctly performed gestures to the total number of expected correct gestures. The percentages aggregate the detailed results presented in Figs. 5-10.

The paper has been updated with this information - page 12.
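The accuracy measure described above can be sketched in a few lines. This is a minimal illustration only; the function name and the example figures are ours, not from the paper:

```python
# Illustrative sketch of the accuracy measure described above: the ratio
# of correctly performed gestures to the number of expected gestures in
# a scenario, expressed as a percentage. Names are ours, not the paper's.

def gesture_accuracy(correct: int, expected: int) -> float:
    """Accuracy of one scenario, as a percentage."""
    if expected <= 0:
        raise ValueError("a scenario must expect at least one gesture")
    return 100.0 * correct / expected

# e.g. a hypothetical 10-gesture scenario with 9 correct performances:
print(gesture_accuracy(9, 10))  # 90.0
```

Per-scenario and per-user percentages would then simply aggregate such ratios over the corresponding groups of trials.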

 

The problem of “gesture recognition” is also very important in gesture-based systems. So, applying, for example, neural networks or other sophisticated algorithms during gesture recognition seems to be an obvious addition to the References.

Thank you for the remark. We noticed this research area and we upgraded the article by a paragraph presented below.

In recent years, wireless approaches to human interaction with devices have also been considered, both in the context of gesture control [1] and human activity recognition [2,3]. The idea of wireless gesture control is to utilize RFID signal phase changes to recognize different gestures. Gestures are performed in front of tags deployed in an environment. The multipath of each tag’s signal propagation then changes along with hand movements, which can be captured from the RFID phase information. The first examinations bring promising results [1]. It seems that smart watches and wristbands, equipped with many sensors, will become another means of gesture control [4]. Solutions of this kind are supported by deep-learning methods [5,6]. However, it should be noted that smart watches and smart wristbands are customarily worn on the left hand, which makes the use of such equipment by drivers reasonable for vehicles with the steering wheel placed on the right side. This is an interesting research area, but the authors of this article are not currently concerned with it.

[1] Zou, Y., Xiao, J., Han, J., Wu, K., Li, Y., & Ni, L. M. (2016). Grfid: A device-free rfid-based gesture recognition system. IEEE Transactions on Mobile Computing, 16(2), 381-393.

[2] Pradhan, S., Chai, E., Sundaresan, K., Qiu, L., Khojastepour, M. A., & Rangarajan, S. (2017, October). Rio: A pervasive rfid-based touch gesture interface. In Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking (pp. 261-274).

[3] Du, Y., Lim, Y., & Tan, Y. (2019). A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction. Sensors, 19(20), 4474.

[4] Jiang, S., Lv, B., Guo, W., Zhang, C., Wang, H., Sheng, X., & Shull, P. B. (2017). Feasibility of wrist-worn, real-time hand, and surface gesture recognition via sEMG and IMU Sensing. IEEE Transactions on Industrial Informatics, 14(8), 3376-3385.

[5] Kasnesis, P., Chatzigeorgiou, C., Toumanidis, L., & Patrikakis, C. Z. (2019, March). Gesture-based incident reporting through smart watches. In 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) (pp. 249-254). IEEE.

[6] Ahn, H. J., Kim, J. S., Shim, J. Y., & Kim, J. S. (2017, September). Hand gesture recognition for doors with neural network. In Proceedings of the International Conference on Research in Adaptive and Convergent Systems (pp. 15-18).

 

 

There is also no comment from the Authors on what the computational burden of the proposed method is and if their solution is used in equipment working in real conditions. This should be clearly explained.

The most computationally burdensome component of the system is gesture recognition. We based this task on Intel® RealSense™ technology. The manufacturer has issued the minimal hardware requirements, which are a 6th Generation Intel® Core™ processor and 4 GB of memory. Considering the equipment currently available on the market, these are not excessive requirements. We performed all experiments on a laptop equipped with an 8th generation Intel® Core™ i5 processor and 8 GB of memory, on which our self-written application was launched. We observed no delays during operation.

 

For safety reasons, we decided not to conduct experiments in traffic.

We clarified all the issues in the paper.

 

The Authors are not willing to write a presentation in what way the proposed method is better than other methods which can be found in the References. This should be clearly explained.

Our aim was to do research according to the contribution of the article. We did not want to compare methods of gesture recognition, especially since we decided to use a state-of-the-art solution.

 

The article brings up a very significant problem. The topic of this article falls within the scope of this Journal. The work should be re-reviewed after it is completed with all necessary answers to the questions above, as well as the required comments and information.

Thank you for appreciating our work.

Reviewer 2 Report

The authors present a survey of hand gesture recognition in the vehicle. The work is well described with enough experiments. So I just have some suggestions about this manuscript.

 

  1. If possible, please introduce details of the gesture recognition model based on the Intel RealSense SR300. For example, what kind of algorithm does it utilize to label the sensor data?
  2. The work cares more about camera-based approaches. Please discuss their advantages over the wireless signal-based approaches.
  3. The work in "A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction" presents a wireless sensing technology to recognize activity. This may be useful in the discussion section.

 

Author Response

Cover letter for revised manuscript:

“Gesture-based user interface for vehicle on-board system: a survey and research approach”

 

Dear Reviewer, 

First of all, we would like to thank you for all valuable and important remarks and suggestions. We referred to all the comments.

The detailed changes and answers are attached below.

Thank you so much for the opportunity to make these changes.

Authors

 

 

The answers

Reviewer 2 comments:

==================

The authors present a survey of hand gesture recognition in the vehicle. The work is well described with enough experiments.

Thank you for appreciating our work.

 

If possible, please introduce details of the gesture recognition model based on the Intel RealSense SR300. For example, what kind of algorithm does it utilize to label the sensor data?

As suggested by the reviewer, we provided some details about the technology behind the Intel RealSense sensor. The following paragraph was added:

The gesture recognition mechanisms are based here on skeletal and blob (silhouette) tracking. The most advanced mode is called "Full-hand" and offers tracking of 22 joints of the hand in 3D [1]. This mode requires the highest computational resources but offers the recognition of pre-defined gestures. The second is the "Extremities" mode, which offers tracking of 6 points: the hand’s top-most, bottom-most, right-most, left-most, center and closest (to the sensor) points [1]. Other modes include "Cursor" and "Blob". They are reserved for the simplest cases, when the central position of the hand or any object is expected to be tracked.

 

[1] Intel® RealSense™ SDK 2016 R2 Documentation: https://software.intel.com/sites/landingpage/realsense/camera-sdk/v1.1/documentation/html/index.html?doc_devguide_introduction.html [Accessed: 27.08.2020]
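The four tracking modes named in the paragraph above can be summarised as a small lookup, ordered from most to least demanding. This is purely an illustration of the mode hierarchy: it is not the RealSense SDK API, and the identifiers are ours:

```python
# Illustration only: the four tracking modes named above and the data
# each exposes, from most to least computationally demanding.
# This is NOT the Intel RealSense SDK API; identifiers are ours.
from enum import Enum

class TrackingMode(Enum):
    FULL_HAND = "22 hand joints in 3D; pre-defined gesture recognition"
    EXTREMITIES = "6 points: top-, bottom-, right-, left-most, center, closest"
    CURSOR = "central position of the hand only"
    BLOB = "silhouette of the hand or any arbitrary object"

for mode in TrackingMode:
    print(f"{mode.name}: {mode.value}")
```

In the study, the "Full-hand" mode is the relevant one, since only it recognizes the pre-defined gestures used in the scenarios.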

 

The work cares more about camera-based approaches. Please discuss their advantages over the wireless signal-based approaches.

Thank you for the remark. We noticed this research area and we upgraded the article by a paragraph presented below.

In recent years, wireless approaches to human interaction with devices have also been considered, both in the context of gesture control [1] and human activity recognition [2,3]. The idea of wireless gesture control is to utilize RFID signal phase changes to recognize different gestures. Gestures are performed in front of tags deployed in an environment. The multipath of each tag’s signal propagation then changes along with hand movements, which can be captured from the RFID phase information. The first examinations bring promising results [1]. It seems that smart watches and wristbands, equipped with many sensors, will become another means of gesture control [4]. Solutions of this kind are supported by deep-learning methods [5,6]. However, it should be noted that smart watches and smart wristbands are customarily worn on the left hand, which makes the use of such equipment by drivers reasonable for vehicles with the steering wheel placed on the right side. This is an interesting research area, but the authors of this article are not currently concerned with it.

[1] Zou, Y., Xiao, J., Han, J., Wu, K., Li, Y., & Ni, L. M. (2016). Grfid: A device-free rfid-based gesture recognition system. IEEE Transactions on Mobile Computing, 16(2), 381-393.

[2] Pradhan, S., Chai, E., Sundaresan, K., Qiu, L., Khojastepour, M. A., & Rangarajan, S. (2017, October). Rio: A pervasive rfid-based touch gesture interface. In Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking (pp. 261-274).

[3] Du, Y., Lim, Y., & Tan, Y. (2019). A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction. Sensors, 19(20), 4474.

[4] Jiang, S., Lv, B., Guo, W., Zhang, C., Wang, H., Sheng, X., & Shull, P. B. (2017). Feasibility of wrist-worn, real-time hand, and surface gesture recognition via sEMG and IMU Sensing. IEEE Transactions on Industrial Informatics, 14(8), 3376-3385.

[5] Kasnesis, P., Chatzigeorgiou, C., Toumanidis, L., & Patrikakis, C. Z. (2019, March). Gesture-based incident reporting through smart watches. In 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops) (pp. 249-254). IEEE.

[6] Ahn, H. J., Kim, J. S., Shim, J. Y., & Kim, J. S. (2017, September). Hand gesture recognition for doors with neural network. In Proceedings of the International Conference on Research in Adaptive and Convergent Systems (pp. 15-18).

 

 

The work in "A Novel Human Activity Recognition and Prediction in Smart Home Based on Interaction" presents a wireless sensing technology to recognize activity. This may be useful in the discussion section.

In the mentioned article, the authors developed the concept of human activity recognition based on RFID technology. We do not see the possibility of applying this idea in our study, in which we use a camera and an SDK to recognize gestures. We could try to use wireless technologies such as smart watches or smart wristbands to recognize gestures, but that would be an entirely different study. Nevertheless, we thank you very much for the idea of such research.

As we showed above, we refer to this in Related works section.

Reviewer 3 Report

The authors present hand gesture recognition in the car using the Intel SR300 sensor. Given that the sensor has good performance in recognizing hand gestures, the present study proposes several gesture scenarios that consist of a series of hand gestures for evaluating its usability in the car with normal and disabled subjects.

There are several issues in this paper.

There is no experiment on, or description of, gesture recognition itself.

The title contains the word "survey". And yet it seems that this study is not a survey.

Although this study employs disabled subjects for their experiment, it does not shed any light on how this technology helps them while driving. 

 

 

Author Response

Cover letter for revised manuscript:

“Gesture-based user interface for vehicle on-board system: a survey and research approach”

 

Dear Reviewer, 

First of all, we would like to thank you for all valuable and important remarks and suggestions. We referred to all the comments.

The detailed changes and answers are attached below.

Thank you so much for the opportunity to make these changes.

Authors

 

 

The answers

Reviewer 3 comments:

==================

The authors present hand gesture recognition in the car using the Intel SR300 sensor. Given that the sensor has good performance in recognizing hand gestures, the present study proposes several gesture scenarios that consist of a series of hand gestures for evaluating its usability in the car with normal and disabled subjects.

There are several issues in this paper.

There is no experiment on, or description of, gesture recognition itself.

Thank you for your comment. As observed by the Reviewer, our aim was to evaluate the usability of gesture recognition technology in the car environment for drivers with physical disabilities. Our experiment consisted of 6 scenarios in which 3 to 10 gestures were required.

Our aim was to do research according to the contribution of the article. We did not want to compare methods of gesture recognition. We decided to use state-of-the-art solution provided with the Intel RealSense sensor.

Considering the Reviewer’s comment, we extended the description of the experiment (section 4 “Evaluations”) and provided technical details behind the Intel RealSense technology (Sections: 3.1. Gesture acquisition and 3.2. Gesture recognition).  

 

The title contains the word "survey". And yet it seems that this study is not a survey.

We are not English native speakers. However, according to the English dictionary, "survey" has many meanings: "study", "examination", "investigation", "exploration", "review", but also "questionnaire" and "poll". We used it in the last meanings. We observe that, for review articles, the word "survey" is used with the "of" preposition, e.g. "a survey of face recognition algorithms". Without the "of" preposition it should suggest not a review paper but a questionnaire-based approach. Please accept our apologies for not changing the title.

If the Editor confirms the Reviewer's opinion, we will immediately change the title during the publishing process (if the article is accepted, of course).

 

Although this study employs disabled subjects for their experiment, it does not shed any light on how this technology helps them while driving.

Gesture recognition technology is now broadly considered a solved problem, with many commercial solutions available on the market. The technology for common use initiated by Kinect v1 (and also by Kinect v2) is no longer manufactured and is recognized as obsolete. Yet, with the new equipment and solutions appearing on the market, how this technology could help people with disabilities remains marginalized. The user is expected to perform a specific gesture, but what if a congenital or acquired disability interferes with the correct and expected making of that gesture?

Our aim was to evaluate current state-of-the-art technology for people with physical disfunctions. Our idea was well received. Many participants admitted they had not paid attention to this technology, as it is broadly advertised for entertainment purposes. It turned out, however, that it can prove its value as assistive technology, too.

As shown in the survey, respondents expect that evolving technology (e.g. gesture-based systems) can reduce barriers to the use of vehicles by disabled people. Through scenarios composed of sequences of gestures, we have shown that the effectiveness of controlling, e.g., the vehicle's multimedia system is high. We found that the location of the sensor is of particular importance for people with disabilities who have problems with keeping their hand in the air for a long time.

The article does not introduce a new gesture recognition method; rather, it reports an experimental result of a state-of-the-art solution evaluated on disabled and able-bodied people, which (in our opinion) is of great practical importance.

We believe our research is valuable. What is more, this technology is already available in premium class cars and many people with disabilities do not have adequate financial resources to take advantage of this technical achievement. The aim of our research is to broaden the knowledge on this subject and show that the effectiveness of ready-made solutions is high and that people with disabilities can handle certain gestures well.

 

 

Reviewer 4 Report

The article presents an interface, through gesture recognition, for controlling the main console system of a car, designed for people with disabilities. The proposal could be interesting, but the work and the experiment carried out are not clear. The document mainly describes gesture recognition (based on the RealSense tutorial) and has limited data for validation. Therefore, it is of limited importance at this stage.

Key points: 

  1. Was ethical approval obtained for experiments? 
  2. Given the wide variety of disability cases, for which specific cases is the system valid? It is necessary to present in greater detail to whom it is intended. Personally, I think that the system should be tested initially by drivers without disabilities, and if successful, then it could be considered adapting it to people with disabilities, prior to a more exhaustive study of the necessary special needs.
  3. Present a comparison of how the subject currently solves the control of the system, both in the center console and on the steering wheel, and how it would improve with gesture control.
  4. Has the recommendation of the general traffic directorates to keep both hands on the wheel while driving been taken into account? In Section 2, lines 72-80, the natural position of a disabled driver with one hand when changing gear is discussed. It appears that no prior study of the existing recommendations has been carried out. An in-depth study of this topic is necessary. Present the specific disability, the driver's case, the vehicle's adaptation and the correct natural driving position.
  5. Section 3 presents the proposed recognition interface. It would be necessary to present the full case study. What experiments have been carried out? What vehicle and what mounting has been used? The information is scattered between sections 3 and 4. It is necessary to present the experiment in greater detail. Show images of the data acquisition system, with the RealSense camera mount in the car and some example of the images obtained.
  6. Why was RealSense chosen? What alternatives were studied? It is necessary to present a study of alternatives (other models of realsense, leapmotion, kinect, etc ...).
  7. Section 3.2 Gesture recognition is a summary of the Intel RealSense guidelines. The authors have also selected the basic gestures provided in the guideline, without taking into account ease of use by people with disabilities. It would be necessary to make a preliminary study of the natural gestures used by the people for whom this system is intended and to carry out their own development to recognize them. By the way, are the images used copyrighted?
  8. How has the gesture recognition system been included with the car console? Has any application been developed?
  9. How has the gesture been validated? How does the system detect / notify if the gesture is correct, not correct or not desired?

Author Response

Cover letter for revised manuscript:

“Gesture-based user interface for vehicle on-board system: a survey and research approach”

Dear Reviewer, 

First of all, we would like to thank you for all valuable and important remarks and suggestions. We referred to all the comments.

The detailed changes and answers are attached below.

Thank you so much for the opportunity to make these changes.

Authors

 

 

The answers

Reviewer 4 comments:

==================

The article presents an interface, through gesture recognition, for controlling the system of the main console of a car, designed for people with disabilities. The proposal could be interesting, but the work carried out and the experiment carried out is not clear. The document mainly describes gesture recognition (based on the realsense tutorial) and has limited data for validation. Therefore, it is of limited importance at this stage.

We thank the Reviewer for this critical remark. However, gesture recognition technology is now broadly considered a solved problem, with many commercial solutions available on the market. The technology for common use initiated by Kinect v1 (and also by Kinect v2) is no longer manufactured and is recognized as obsolete. Yet, with the new equipment and solutions appearing on the market, how this technology could help people with disabilities remains marginalized. The user is expected to perform a specific gesture, but what if a congenital or acquired disability interferes with the correct and expected making of that gesture? That was our aim: to evaluate current state-of-the-art technology for people with physical disfunctions. Our idea was well received. Many participants admitted they had not paid attention to this technology, as it is broadly advertised for entertainment purposes. It turned out, however, that it can prove its value as assistive technology, too. For this reason, we believe our research is valuable. It is also worth noting that, according to the performed survey, many participants were not aware of this technology.

What is more, this technology is already available in premium class cars and many people with disabilities do not have adequate financial resources to take advantage of this technical achievement. The aim of our research is to broaden the knowledge on this subject and show that the effectiveness of ready-made solutions is high and that people with disabilities can handle certain gestures well.

 

Key points:

Was ethical approval obtained for experiments?

The article deals with issues based on the opinions of respondents expressed in an electronic questionnaire and on a non-medical, non-invasive research experiment. The participants were volunteers who took part in the survey and in the study entirely of their own free will. Disabled people were directed to us by the Association for the Support of Disabled People, with which we started cooperation.

Because our experiment was not an invasive medical examination and did not violate anyone's ethical standards, in our opinion there were no grounds to seek approval from a medical ethics committee.

The relevant information was included in the Introduction section.

 

Given the wide variety of disability cases, for which specific cases is the system valid? It is necessary to present in greater detail to whom it is intended. Personally, I think that the system should be tested initially by drivers without disabilities, and if successful, then it could be considered adapting it to people with disabilities, prior to a more exhaustive study of the necessary special needs.

Gesture-based systems and interactions are becoming common nowadays and are used by people in everyday life (computers, smartphones, drones, etc.). Thus, in many cases they have already been verified by non-disabled people. In this context, the authors decided to test primarily on people with disabilities, especially since the technology on which the system is based (RealSense and its SDK) is recognized and proven for physically fit people. However, it was not known what the effectiveness was and whether it was suitable for the purposes described in the aim of the article.

 

 

Present a comparison of how the subject currently solves the control of the system, both in the center console and on the steering wheel, and how it would improve with gesture control.

It is certainly an interesting aspect, but at this stage we are not able to address it. Our goal was to check the usefulness of a set of gestures in a multimedia system controlled by both fully fit and disabled people. These studies are not intended to eliminate the possibility of controlling such systems using the multimedia steering wheel. Our research is meant to enrich the offer of car manufacturers. Please note that today there are voice systems to assist drivers; they often offer the same functions as touch systems (via keys, knobs or touch screens).

 

Has the recommendation of the general directions of traffic to use both hands at the wheel to drive been taken into account? In section 2, line 72-80, the natural position of the disabled driver with one hand when changing gear is discussed. It appears that no prior study of the existing recommendations has been carried out. An in-depth study of this topic is necessary. Present the specific disability, the driver's case, the vehicle's adaptation and the correct driver's natural position

Our task was not to oppose the general traffic authorities' recommendation to keep both hands on the wheel while driving. We are simply pointing out that controlling various on-board systems requires drivers to take one hand off the wheel to press a specific button or knob. In this context, making a gesture that does not require looking directly at the button to be pressed seems more natural. Such a gesture can be performed without diverting the driver's attention from the traffic situation. Hence our research in this direction.

The relevant information was added to the Related works section.

 

 

Section 3 presents the proposed recognition interface. It would be necessary to present the full case study. What experiments have been carried out? What vehicle and what mounting has been used? The information is scattered between sections 3 and 4. It is necessary to present the experiment in greater detail. Show images of the data acquisition system, with the RealSense camera mount in the car and some example of the images obtained.

We agree with the reviewer. We provided additional information and explanation in the paper.

 

Why was RealSense chosen? What alternatives were studied? It is necessary to present a study of alternatives (other models of realsense, leapmotion, kinect, etc ...).

We have updated the article with explanations. The following paragraph was added:

“The considered alternatives to this solution are MS Kinect and Leap Motion. The first and second generations of Kinect hardware were excluded due to their dimensions. What is more, both versions are now considered obsolete and are no longer manufactured. The latest release of Kinect (the Azure Kinect, released in 2020) bases its operation strictly on cloud computing, which excludes its use in sparsely populated areas outside network coverage. Leap Motion could have been a good alternative for its small size and quite decent operation; however, it offers a limited number of recognized movement patterns (i.e. swipe, circle, key tap and screen tap). In view of the above, the Intel RealSense provides the best-suited solution for our research.”

 

Section 3.2, Gesture recognition, is a summary of the Intel RealSense guidelines. The authors have also selected the basic gestures provided in the guideline, without taking into account the ease of use by people with disabilities. It would be necessary to make a preliminary study of the natural gestures used by the people for whom this system is intended and to carry out their own development to recognize them. By the way, are the images used copyrighted?

We agree with the reviewer. We investigated the available gestures and chose for investigation only those that are easy to perform. These are four directional swipe gestures, a push gesture and two static gestures (clenched fist and victory gesture). It must be noted that many gestures are not well suited to the in-car environment, for example a “thumb down” gesture (hand closed with the thumb pointing down). Other gestures, like “spread fingers”, might be extremely difficult for people with disabilities. We believe that our selected gestures are the easiest, but user-specific adaptation should certainly be considered in the future.
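To make the selection concrete, the seven chosen gestures could be mapped to multimedia actions along the following lines. This is a minimal sketch: both the gesture names and the action assignments are illustrative, not the exact scenario definitions used in the study.

```python
# Hypothetical mapping of the seven selected gestures (four swipes,
# push, clenched fist, victory gesture) to example multimedia actions.
GESTURE_ACTIONS = {
    "swipe_left": "previous_track",
    "swipe_right": "next_track",
    "swipe_up": "volume_up",
    "swipe_down": "volume_down",
    "push": "select",
    "fist": "pause_playback",
    "v_sign": "resume_playback",
}

def action_for(gesture: str) -> str:
    # Gestures outside the selected set (e.g. "thumb_down") are ignored.
    return GESTURE_ACTIONS.get(gesture, "ignored")
```

Keeping the mapping in one table makes the future user-specific adaptation mentioned above a matter of editing data rather than recognition code.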

We added a reference to Fig. 3 for the copyrighted images.

 

How has the gesture recognition system been included with the car console? Has any application been developed?

There is no integration with the car's console in the current version of the application. We assumed that the control would take place via the CAN bus: the application would inform the multimedia system which functions should be activated. Currently, the application emits a message in JSON format, which is used for validation. This is in the prototype phase.
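Such a command message could look roughly as follows. The field names and schema are assumptions for illustration only; the response does not specify the actual JSON layout.

```python
import json
import time

def make_command_message(gesture: str, action: str) -> str:
    """Serialize a recognized gesture as a JSON command message.

    The schema (field names, timestamp format) is hypothetical; the
    prototype's real message layout is not published.
    """
    payload = {
        "gesture": gesture,            # e.g. "swipe_right"
        "action": action,              # function the multimedia system should activate
        "timestamp": int(time.time()), # seconds since the epoch
    }
    return json.dumps(payload)

# A consumer (standing in for the future CAN-bus bridge) parses it back:
message = make_command_message("swipe_right", "next_track")
decoded = json.loads(message)
```

Validation can then be done purely on the decoded dictionary, without any vehicle hardware in the loop.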

 

How has the gesture been validated? How does the system detect / notify if the gesture is correct, not correct or not desired?

We rely on the gestures provided with the Intel RealSense. For the validation of our scenarios, we developed a separate application, which handled the correct course of the scenario and provided appropriate instructions when an incorrect gesture was detected. For example, when the user made an additional (unexpected) gesture, they could make an “undo” gesture to return to the determined sequence of the given scenario (“the recommended error correction condition”: the user can, but is not forced to, correct the error).
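The validation behaviour described amounts to a small state machine. A minimal sketch, assuming hypothetical class and method names (this is not the authors' implementation):

```python
class ScenarioValidator:
    """Track a user's progress through an expected gesture sequence.

    An unexpected gesture puts the validator into an error state; an
    "undo" gesture returns the user to the determined sequence. Since
    error correction is recommended rather than forced, other gestures
    made in the error state are merely flagged.
    """

    def __init__(self, expected_sequence):
        self.expected = list(expected_sequence)
        self.position = 0    # index of the next expected gesture
        self.error = False   # True after an unexpected gesture

    def observe(self, gesture: str) -> str:
        if self.error:
            if gesture == "undo":
                self.error = False
                return "resumed"        # back on the determined sequence
            return "awaiting_undo"      # instruct the user to perform "undo"
        if self.position < len(self.expected) and gesture == self.expected[self.position]:
            self.position += 1
            return "done" if self.position == len(self.expected) else "ok"
        self.error = True
        return "unexpected"
```

For the scenario `["swipe_left", "push"]`, observing swipe_left, fist, undo, push yields ok, unexpected, resumed, done.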

Round 2

Reviewer 1 Report

REVIEW_2

 

Article titled:  “Gesture-based user interface for vehicle on-board system: a survey and research approach”

  Applied Sciences  no. 899666

 

List of Authors:

 

Krzysztof Małecki, Adam Nowosielski, Mateusz Kowalicki

 

 

The article Applied Sciences  no.899666 entitled “Gesture-based user interface for vehicle on-board system: a survey and research approach” has been carefully modified and well revised.

The work should now be accepted for publication in Applied Sciences.

 

Author Response

Dear Reviewer,

we would like to thank you for all the valuable and important remarks and suggestions. We have addressed all the reviewers' comments, including correcting the title of the article in accordance with one reviewer's opinion.

The detailed changes and answers are attached below.

Authors

 

Reviewer 1 comments:

==================

The article Applied Sciences no. 899666, entitled “Gesture-based user interface for vehicle on-board system: a survey and research approach”, has been carefully modified and well revised. The work should now be accepted for publication in Applied Sciences.

Thank you for all your valuable comments, which helped us to improve our article.

Reviewer 3 Report

If this paper is accepted, please remove the word "survey" in the title.

Author Response

Cover letter for revised manuscript:

“Gesture-based user interface for vehicle on-board system: a questionnaire and research approach”

 

Dear Reviewer,

we would like to thank you for all the valuable and important remarks and suggestions. We have addressed all the reviewers' comments, including correcting the title of the article in accordance with one reviewer's opinion.

The detailed changes and answers are attached below.

Authors

 

Reviewer 3 comments:

==================

If this paper is accepted, please remove the word "survey" in the title.

OK, we changed the word “survey” to “questionnaire”.

Reviewer 4 Report

Thanks for your responses, some issues have been answered and the article has improved, but nevertheless there are some issues that have not yet been resolved and I believe that the work still needs to mature to be taken into account. I want to encourage you to continue the line of research that you have started, as it is very interesting.

Key points:

  • It is a common mistake to think that, by not using invasive techniques, the approval of an ethics committee is not necessary (it does not have to be a medical ethics committee). Any investigation must go through an ethics committee, and depending on the investigation, one or other measures will be taken. Whenever there are human subjects, surveys are conducted and data are collected (some of them sensitive and of a personal nature), it is necessary to take specific measures and have the informed consent of the subjects, among other things. I recommend that you read Regulation (EU) 2016/679 of the European Parliament and of the Council, of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
  • The question has not been answered, What type of disability has been taken into account? For what specific disability has the study been carried out? Upper, lower motor disability,…? A person who is missing a leg is not the same as a person who is missing an arm or many other types of disabilities. Indicate clearly.
  • It is necessary to present a serious study, beyond a survey. The comparison is not to justify a substitution, it is to justify the usefulness and I think it would be necessary for the reader to have a vision of the scope of the system. The information presented is still scarce and insufficient, I strongly recommend expanding the information presented in this regard and in a more orderly way.
  • Why are some gestures easy and others difficult? On what basis has it been decided which gestures are easier? The suitability of each gesture should be presented, and an effort should be made to use more appropriate and natural personalized gestures.
  • There is no integration in the vehicle and it is assumed how it will be done, but currently it only works with a computer? Have the tests been performed on a vehicle or have they been performed in another environment? The tests have been limited to detecting gestures?
    The experiment is in a very early phase, it must mature to be taken into account.

Author Response

Cover letter for revised manuscript:

“Gesture-based user interface for vehicle on-board system: a questionnaire and research approach”

 

Dear Reviewer,

we would like to thank you for all the valuable and important remarks and suggestions. We have addressed all the reviewers' comments, including correcting the title of the article in accordance with one reviewer's opinion.

The detailed changes and answers are attached below.

Authors

 

Reviewer 4 comments:

==================

Thanks for your responses, some issues have been answered and the article has improved, but nevertheless there are some issues that have not yet been resolved and I believe that the work still needs to mature to be taken into account. I want to encourage you to continue the line of research that you have started, as it is very interesting.

Key points:

It is a common mistake to think that, by not using invasive techniques, the approval of an ethics committee is not necessary (it does not have to be a medical ethics committee). Any investigation must go through an ethics committee, and depending on the investigation, one or other measures will be taken. Whenever there are human subjects, surveys are conducted and data are collected (some of them sensitive and of a personal nature), it is necessary to take specific measures and have the informed consent of the subjects, among other things. I recommend that you read Regulation (EU) 2016/679 of the European Parliament and of the Council, of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

Thank you for your very valuable remark. We asked the Foundation for Active Rehabilitation (FAR) for cooperation; all people with disabilities who took part in our research were recruited thanks to the Foundation. For that reason, and because of our conviction about the non-invasive, voluntary and helpful nature of the experiment, we did not ask for approval. In our further research, we will seek the approval of an ethics committee.

 

The question has not been answered, What type of disability has been taken into account? For what specific disability has the study been carried out? Upper, lower motor disability,…? A person who is missing a leg is not the same as a person who is missing an arm or many other types of disabilities. Indicate clearly.

We are sorry; it was not on purpose. We thought you had noticed subsection 4.1, “Characteristic of research group”, where Table 2 describes all participants. We would just like to note that all disabled participants were physically independent and all of them had all upper and lower limbs.

Section 4.1 was updated with this information.

 

It is necessary to present a serious study, beyond a survey. The comparison is not to justify a substitution, it is to justify the usefulness and I think it would be necessary for the reader to have a vision of the scope of the system. The information presented is still scarce and insufficient, I strongly recommend expanding the information presented in this regard and in a more orderly way.

We understand the reviewer's remark aimed at significantly expanding our research and the content of the article. However, our goal was to investigate the interest in gesture-based systems supporting non-disabled and disabled drivers and to check whether ready-made solutions are suitable for direct use, especially by disabled drivers. By immediate applications, we mean low-cost solutions based on, e.g., COTS (commercial off-the-shelf) components. In our opinion, we have achieved our goal, the more so as we are employees of a department of information technology, not transport. We wanted to show the potential of such research and thus encourage discussion and cooperation with other teams. The next stages are still ahead of us, and we will certainly use the reviewer's suggestions in them. Please be understanding in this matter and respect our current work in this area.

 

 

Why are some gestures easy and others difficult? Based on the fact that it has been decided which gestures are easier ?, the suitability of each gesture should be presented and an effort should be made to use more appropriate and natural personalized gestures.

We thank for the comment. We have added the following paragraph to the paper explaining differences in the performance of gestures (sec. 3.2):

“We have chosen for our investigations only those gestures that seem easy to perform, are natural, and engage a small number of muscles. These are four directional swipe gestures, a push gesture and two static gestures (clenched fist and victory gesture). It must be noted that many gestures are not well suited to the in-car environment: they require more space to be performed properly or engage too many muscles. The latter is especially important for people with physical limitations. For example, a “thumb down” gesture is performed with a closed hand and straightened thumb, but a rotary arm movement is required so that the thumb points down. Other gestures, like “spread fingers”, might be extremely difficult for people with disabilities. We believe that our selected gestures are the easiest, but user-specific adaptation should certainly be considered in the future.”

 

There is no integration in the vehicle and it is assumed how it will be done, but currently it only works with a computer? Have the tests been performed on a vehicle or have they been performed in another environment? The tests have been limited to detecting gestures? The experiment is in a very early phase, it must mature to be taken into account.

Starting with the last comment, we agree with the Reviewer that the work is in the early phase. However, the aims of the current stage of our project have been achieved and therefore we have decided to publish the results. We believe our research is valuable.

Gesture recognition technology is now generally considered a solved problem, with many commercial solutions available on the market. Some technologies are now obsolete (e.g. Kinect v1 and Kinect v2, no longer manufactured) and new solutions keep appearing. How this technology could help people with disabilities, however, is still marginalized. It is expected that the user will perform a specific gesture, but what if a congenital or acquired disability interferes with the correct and expected performance of that gesture?

Our first aim was to evaluate current state-of-the-art technology for people with physical dysfunctions in a car environment. Our idea was well received. Many participants admitted they had not paid attention to this technology, as it is broadly advertised for entertainment. It turned out, however, that it can prove its value as an assistive technology too. We demonstrated that the effectiveness of ready-made solutions is high and that people with disabilities can handle certain gestures well.

 

In sec. 4 we have provided details of our experimental setup as follow:

“For safety reasons, we decided not to conduct experiments in traffic. The operation of both hands has been evaluated, and because we used cars with the steering wheel located on the left-hand side, when testing left-hand operation the driver was moved to the passenger seat.”

And:

“In the examined version of the prototype, the interface was not integrated with the car's console. We assume that the control would take place via the CAN bus: the application would inform the multimedia system which functions should be activated. Currently, the application emits a message in JSON format, which is used for validation. The dedicated application is launched on a laptop in the car environment.”

 

Round 3

Reviewer 3 Report

The paper has been improved according to reviewers' suggestions.

Reviewer 4 Report

The work is promising, but the current state of the work is not adequate to be accepted. I recommend maturing the work and conducting a more meaningful study.
