Article
Peer-Review Record

Integration of Virtual Reality-Enhanced Motor Imagery and Brain-Computer Interface for a Lower-Limb Rehabilitation Exoskeleton Robot

Actuators 2024, 13(7), 244; https://doi.org/10.3390/act13070244
by Chih-Jer Lin * and Ting-Yi Sie
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 3 June 2024 / Revised: 22 June 2024 / Accepted: 26 June 2024 / Published: 28 June 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have integrated VR technology with action observation, motor imagery, and a lower-limb rehabilitation exoskeleton to investigate a motor imagery brain-computer interface. Two subject positions, seated and standing, are investigated. The classification results for the seated position with the virtual reality goggles are found to be better than those for the standing position, with respective accuracies of 75.35 and 74 percent.

This study should be interesting to the readers in the field of neuroscience and rehabilitation. I do have some comments as follows:

Minor:

line 15: The sentence is not complete and is not clear.

line 33: The sentence is not complete.

line 43 and 44: There is no reference for the sentence.

The fonts and font sizes in all figures should be adjusted to improve the article's readability.

Major:

line 222: The way the signal quality feedback and VR binocular visual rendering affect the overall classification is not clear. 

line 282: The justification for the number of subjects is required.

line 403: How does the balancing activity of the brain affect its motor imagery?

The procedure for applying the system to an individual is not short. How do you think this might not be an issue for rehabilitating individuals suffering from paralysis of their lower limbs?

You have put great effort into combining all these different systems. How do you think you could simplify the whole system so that it can be adopted by, for example, a therapist with little engineering skill?

Author Response

The authors have integrated VR technology with action observation, motor imagery, and a lower-limb rehabilitation exoskeleton to investigate a motor imagery brain-computer interface. Two subject positions, seated and standing, are investigated. The classification results for the seated position with the virtual reality goggles are found to be better than those for the standing position, with respective accuracies of 75.35 and 74 percent.

This study should be interesting to the readers in the field of neuroscience and rehabilitation. I do have some comments as follows:

Minor:

Q1.1 line 15: The sentence is not complete and is not clear.

Answer: Thanks to the reviewer's suggestion, the issue has been corrected. [L15]

Q1.2 line 33: The sentence is not complete.

Answer:  Thanks to the reviewer's suggestion, the issue has been corrected. [L29]

Q1.3 line 43 and 44: There is no reference for the sentence.

Answer: Thanks to the reviewer's suggestion, the issue has been corrected. [L44]

All figures' fonts and their sizes should be modified to enhance article's readability.

Thanks to the reviewer's suggestion, the issue has been corrected.

 

Major:

Q1.4 line 222: The way the signal quality feedback and VR binocular visual rendering affect the overall classification is not clear. 

Answer: VR binocular visual rendering splits the screen into the side-by-side display mode that can be shown in VR glasses, and it was used for the data collection cases of Figure 15's model training with VR (both sitting and standing poses). VR binocular visual rendering was not used for the cue screen or the normal screen. [L223]

The signal quality feedback was used only in the open-loop and closed-loop experiments and was recorded along with the state of the entire system, solely to observe the effect of the exoskeleton on signal quality while the subject was being driven by it. [L223]

Under the different data collection conditions in Figure 15, the EEG signals captured for model optimization in Figure 10 come from different viewing conditions of the participants, and VR binocular visual rendering was used only for the model-training scenarios that employed VR. Its effect on the overall classification is therefore limited to changing the conditions under which the source EEG was recorded. [L246]
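For illustration only, the sketch below shows the general idea of side-by-side (binocular) rendering, i.e., squeezing a single cue frame into the left and right halves of the display. It is a toy NumPy example under assumed frame dimensions, not the DesktopSbS/Spacedesk pipeline used in the study.

```python
import numpy as np

def to_side_by_side(frame: np.ndarray) -> np.ndarray:
    """Toy side-by-side stereo: squeeze one mono frame into identical left/right halves.

    Purely illustrative; the study relies on DesktopSbS/Spacedesk for the actual
    VR binocular rendering.
    """
    half = frame[:, ::2, :]                      # squeeze the frame to half width
    return np.concatenate([half, half], axis=1)  # identical views for both eyes

# Hypothetical 1080p cue frame; the cue and normal screens were not rendered this way.
cue = np.zeros((1080, 1920, 3), dtype=np.uint8)
sbs = to_side_by_side(cue)
assert sbs.shape == cue.shape
```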

 

Q1.5 line 282: The justification for the number of subjects is required.

Answer: The participant information is shown in Table 4. Since closed-loop rehabilitation requires tracking of the joint model, five healthy participants covering a wide range of heights, from 1.64 m to 1.77 m, were selected for the experiment. [L384]

 

Q1.6 line 403: How does the balancing activity of the brain affect its motor imagery?

Answer: In the standing data collection phase, balancing activity is equivalent to adding an irregularly evoked lower-limb motor execution (ME) task to the data collection process. Related studies have shown similarities in cortical activation between ME and MI [32]. [L419]

Our experiment was designed to capture the brain activity features of gait MI, which serve as an activation trigger for gait rehabilitation: the exoskeleton is activated when the participant has a gait movement intention. If the ME features of the balancing activity are mixed into the data collection stage, the model's accuracy in recognizing gait MI may suffer; for example, the results in Table 5 show that the classification accuracy for the standing posture is lower, on average, than for the sitting posture. [L422]

 

Q1.7 The procedure for applying the system to an individual is not short. How do you think this might not be an issue for rehabilitating individuals suffering from paralysis of their lower limbs?

Answer: We thank the reviewer for this comment. Patients with lower-limb paralysis or paraplegia usually share several characteristics: 1. inability to perform gait tasks unaided; 2. normal brain function; 3. inability to stand unaided.

The application flow of our proposed framework is as follows. First, a seated participant wearing an EEG cap and VR goggles collects first-person gait MI signals (30 trials, about 7 minutes). Next, the MI-BCI model is trained, which takes 83.3 seconds on average [26]. This is immediately followed by the open-loop test, which takes about 140 seconds, and then by CDF auto-leveling with time-window labeling (Figure 17), which takes about 3 minutes. The sum of these times is the controllable and technically reducible preparation time, about 13.7 minutes in total.
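As a simple check of the quoted total, the preparation-time figures listed above sum as follows (a worked calculation only, not additional measured data):

```python
# Preparation-time components quoted above, all in seconds.
data_collection = 7 * 60    # seated VR gait-MI collection (30 trials)
model_training  = 83.3      # average MI-BCI training time [26]
open_loop_test  = 140       # open-loop test
cdf_leveling    = 3 * 60    # CDF auto-leveling with time-window labeling

total_s = data_collection + model_training + open_loop_test + cdf_leveling
print(f"{total_s:.1f} s ≈ {total_s / 60:.1f} min")   # 823.3 s ≈ 13.7 min
```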

Summarizing these technical and framework considerations, there is still room for improvement in time consumption, and a gap remains between the technical implementation and clinical application. At this stage, however, we have done our best to account for patients' inconvenience in the framework design.

First, the MI-BCI algorithm [26] minimizes the model training time (reducing setup time). Second, participant data can be collected in seated VR (which is friendlier to paralyzed patients and satisfies characteristic 3). Finally, CDF auto-leveling allows the seated VR model to be applied directly to exoskeleton rehabilitation, with the participant transferred directly from the seated position to the exoskeleton system (with body-weight support), minimizing the time the paralyzed patient has to stand on their own.

 

Q1.8 You have put great effort into combining all these different systems. How do you think you could simplify the whole system so that it can be adopted by, for example, a therapist with little engineering skill?

Answer: We agree that simplifying the system and providing the physiotherapist with an easy control method is an important step. We have integrated the LLRER and the treadmill into a single gait-cycle execution system (Figure 1), in which the whole system waits for a trigger command and then automatically carries out one gait cycle of rehabilitation.

The complicated part is the VR-enhanced MI-BCI, which requires opening the EEG collection interface first and then opening DesktopSbS and Spacedesk in sequence. After the EEG data are collected, model training must be started manually, and the open-loop test is conducted after training. After the open-loop test, the program automatically displays screenshots of all time windows so that the experimenter can label their true status; the model is then automatically adjusted (Figure 17), and the experimenter can directly execute the MI-BCI Main Loop on the left side of Figure 11 to start the closed-loop test.

One of the more complicated parts of the BCI process is that the user needs to switch between different programs and applications. This can be automated with QuickMacro or the Windows API for launching and switching software, packaged into a single executable file so that the therapist only needs to focus on the actual interaction with the patient; program switching and model training can then run automatically.
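As one possible realization of this automation, a minimal sketch using only the Python standard library is shown below. The executable paths are placeholders (assumptions, not actual install locations), and a packaged QuickMacro script or direct Windows API calls would serve the same purpose.

```python
import subprocess
import time

# Placeholder install paths; the real locations of the EEG collection software,
# DesktopSbS, and Spacedesk must be substituted on the target machine.
APPS = [
    r"C:\Program Files\EEGCollector\eeg_collector.exe",
    r"C:\Program Files\DesktopSbS\DesktopSbS.exe",
    r"C:\Program Files\Spacedesk\spacedeskService.exe",
]

def launch_in_sequence(apps, delay_s=5):
    """Open each application in order, pausing so the previous one can initialize."""
    for path in apps:
        subprocess.Popen([path])
        time.sleep(delay_s)

if __name__ == "__main__":
    launch_in_sequence(APPS)
```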

 

 

Reviewer 2 Report

Comments and Suggestions for Authors

Very extensive detailed article.

Overall, the work is well written and can provide useful information to readers interested in jumping into this complex new field of development.

It might be useful to include links to where the mentioned available materials can be found: for example, "Figure 11. Timeframe of the publicly available dataset." and "classifier was further validated on a publicly available dataset of the high focused gait mental task.".

The article successfully integrates VR, BCI, and exoskeleton technologies. The areas of potential application are broad, from helping patients with disabilities to homeland security.

I highly recommend publishing the article.

 

Author Response

Thanks to the reviewer's suggestion, the links and descriptions of the dataset have been added. [L278] [L484]

Reviewer 3 Report

Comments and Suggestions for Authors

Overall, I consider the paper to be good, and the information presented regarding the VR motor imagery and BCI data is quite promising.

 

Some remarks about the text

There are several phrases which end abruptly: see Abstract lines 15-16 (After applying…) or Introduction line 61 (Several previous studies have shown.).

Figures 1 and 2 are overfilled with information and are difficult to follow. Maybe the exoskeleton can be shown in a separate figure and its functions explained there.

The VR part of the paper is thoroughly presented and well explained. However, some concerns arise in relation to the paper's topic, namely rehabilitation. In lower limb rehabilitation, most patients have problems with the entire limb, and one specific problem is dropped foot (the ankle joint movement), which is not supported by the LLRER exoskeleton. It is specified that the robot has 4 DOF, but while I guess that it is a mirrored 2-DOF left and right leg exoskeleton, I did not find this information in the text. The second problem is the balance issue in a VR setting (compared to AR). This should be discussed. I do acknowledge the work of the authors and that the system has good use in multiple scenarios (e.g., knee surgery), but this should be explained. Also, there are aspects of brain activity that can appear in neuromotor disorders and might interfere with the BCI data reading and interpretation.

Regarding the experimental data, there should be some information about the subjects used in the study, which could be presented in table form.

Author Response

Comments and Suggestions for Authors

Overall, I consider the paper to be good, and the information presented regarding the VR motor imagery and BCI data is quite promising.

Some remarks about the text

Q3.1

There are several phrases which end abruptly: see Abstract lines 15-16 (After applying…) or Introduction line 61 (Several previous studies have shown.).

Answer: Thanks to the reviewer's suggestion, the issue has been corrected. [L13] [L53]

 

Q3.2

Figures 1 and 2 are overfilled with information and are difficult to follow. Maybe the exoskeleton can be shown in a separate figure and its functions explained there.

Answer:  Thanks to the reviewer's suggestion, a separate figure has been added to describe the functionality and mechanism of the exoskeleton. [L145]

 

Q3.3

The VR part of the paper is thoroughly presented and well explained. However, some concerns arise in relation to the paper's topic, namely rehabilitation. In lower limb rehabilitation, most patients have problems with the entire limb, and one specific problem is dropped foot (the ankle joint movement), which is not supported by the LLRER exoskeleton. It is specified that the robot has 4 DOF, but while I guess that it is a mirrored 2-DOF left and right leg exoskeleton, I did not find this information in the text.

Answer: Thanks to the reviewer's suggestion, a description of the exoskeleton has been added at L145. Dropped foot is indeed a very important issue for paraplegic patients; for this aspect we refer to the LOPES system, which is also designed for treadmill gait rehabilitation. The main reason for omitting the ankle joint is that applying large torques to the foot without a fit-to-size foot interface can be painful [28]. [L150]

We utilized the treadmill movement and hip actuation to ensure sufficient foot clearance during the swing phase to prevent dropped foot. However, this will need to be taken into account when upgrading the DOF of the exoskeleton in the future.

 

Q3.4

The second problem is the balance issue in a VR setting (compared to AR). This should be discussed.

Answer: We agree that this aspect was not described sufficiently in our article. In our proposed framework, VR is worn only in the data collection stage (Figure 15). In this stage the test subjects are in only two static states, sitting and standing, so balance should not be a major issue while wearing the VR goggles. [L335]

We also considered that the participant should be able to see their own legs, so in the closed-loop test we chose not to have the participant wear VR and instead used their own lower limbs as the visual and body perception feedback. [L337]

If a participant is to visualize a holographically projected action during gait rehabilitation and receive visual feedback through the animated human model, then AR may be a better choice.

 

Q3.5

I do acknowledge the work of the authors and that the system has good use in multiple scenarios (e.g., knee surgery), but this should be explained. Also, there are aspects of brain activity that can appear in neuromotor disorders and might interfere with the BCI data reading and interpretation.

Answer: We thank the reviewer for this suggestion.

For rehabilitation with sensorimotor integration, accurate matching between movement intent and sensory feedback is important to facilitate neuroplasticity [33]. When the body intends to move, the motor cortex activates first; neurotransmission to the muscles then produces lower-limb movements, and during these movements somatosensory, haptic, visual, and other sensory feedback is returned from the lower limbs. This multisensory feedback can induce neuroplasticity, which in turn enhances motor recovery [34]. [L487]

The system we developed first determines motor intention by applying the MI-BCI classification algorithm to the EEG signals during motor cortex activation. When a motor intention is detected, the lower-limb exoskeleton is triggered to assist the subject's limbs in performing the gait movement, and the subject receives multisensory feedback while being driven, completing the rehabilitation closed loop. In addition, VR goggles providing motion visualization were added in an attempt to enhance motor cortex activity and improve motor imagery performance. [L492]
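For clarity, a minimal sketch of this intention-triggered closed loop is given below. The classifier and exoskeleton interface are stand-in placeholders: the dummy band-power rule, function names, and simulated data dimensions are assumptions, not the MI-BCI algorithm or LLRER command set used in the paper.

```python
import numpy as np

def classify_window(eeg_window: np.ndarray) -> bool:
    """Stand-in for the trained MI-BCI classifier: a dummy band-power threshold."""
    return float(np.mean(eeg_window ** 2)) > 1.0   # placeholder decision rule

def trigger_gait_cycle() -> None:
    """Stand-in for the command that makes the LLRER and treadmill run one gait cycle."""
    print("gait cycle triggered")

def closed_loop(acquire_window, n_windows: int = 10) -> None:
    """Poll EEG time windows; when gait MI is detected, drive one assisted gait cycle."""
    for _ in range(n_windows):
        window = acquire_window()        # most recent EEG time window
        if classify_window(window):      # motor intention detected
            trigger_gait_cycle()         # exoskeleton drives the limbs; the resulting
                                         # multisensory feedback closes the loop

# Usage with simulated data: 8 channels x 500 samples (assumed 2 s at 250 Hz).
closed_loop(lambda: np.random.randn(8, 500))
```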

This endogenous, intention-driven rehabilitation may apply to patients who have suffered a stroke, hemiplegia, or paraplegia. It is characterized by a top-down approach initiated by the patient's own motor intention; the exoskeleton then drives the limbs, and sensorimotor feedback from the limbs reinforces the overall motor control circuitry from the bottom up. [L498]

Q3.6

Regarding the experimental data, there should be some information regarding the subjects used in the study which could be presented in table form.

Answer:  Thanks to the reviewer's suggestion, the table has been added. [L384] [L391]
