Article

Development of Sensory Virtual Reality Interface Using EMG Signal-Based Grip Strength Reflection System

1 Graduate School of IT Convergence Engineering, Daegu University, Gyeongsan 38453, Republic of Korea
2 Department of Computer and Information Engineering, Daegu University, Gyeongsan 38453, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4415; https://doi.org/10.3390/app14114415
Submission received: 18 April 2024 / Revised: 18 May 2024 / Accepted: 19 May 2024 / Published: 23 May 2024
(This article belongs to the Special Issue Monitoring of Human Physiological Signals)

Abstract

In virtual reality (VR), a factor that can maximize user immersion is the development of an intuitive and sensory interaction method. Physical devices such as the controllers or data gloves of existing VR systems are used to convey the movement intentions of the user, but their shortfall is that grip strength and fine-grained muscle force cannot be reflected. Therefore, this study aimed to establish a more sensory VR environment than existing methods provide by reflecting the grip strength of the user's flexor digitorum profundus in the VR content. In this experiment, the muscle activity of the flexor digitorum profundus was acquired from six subjects using surface electromyography, and four objects of differing intensity were created in a VR program and made destructible according to muscle activity. As a result, satisfaction improved because the users could interact with the objects inside the VR environment in a more sensory manner, and the intended motion control of the user was reflected in the VR content.

1. Introduction

Virtual reality (VR) is being integrated into various fields such as gaming, education, healthcare, art, entertainment, and business as a new interface. Particularly during the COVID-19 era, there has been active research and content production (e.g., social VR platforms) centered around the metaverse, leading to a significant increase in the number of VR content users [1]. Specifically, when used for rehabilitation, learning, and simulated training, VR is expected to be an effective training tool owing to its unique sense of immersion, which can facilitate adaptation to tasks and improve efficiency [2,3,4,5]. This is attributed to the concept of presence inherent in VR, which induces flow, thereby affecting learning outcomes [3]. In other words, enhancing immersion in the VR environment leads to improved learning outcomes, and various interactive tools such as hand tracking, eye tracking, haptic suits, and data gloves have been developed for this purpose.
Meanwhile, research has been conducted to utilize biosignals, such as electromyography (EMG), electroencephalography, and electrooculography, in VR environments to enhance sensory immersion, a step beyond traditional physical controllers and sensors [6]. Among various biosignals, surface electromyography (SEMG) is one of the more reliable methods to recognize human motion intentions because it can facilitate human-robot-environment integration with the help of an intelligent perception system [7]. This is because it allows users to actively control their motion intentions and can be easily attached to the skin without any surgical procedures, making it a useful tool.
This study aimed to investigate the impact on immersion when SEMG is utilized for interaction with objects within VR content. The acquisition site of the EMG was set to the flexor digitorum profundus (FDP), which is a muscle that bends the index, middle, ring, and little fingers; finger-bending motions can be assessed by measuring the EMG of the FDP. Moreover, using a standard VR controller may lack realism compared to using one’s hands [8]. Therefore, by employing hand tracking instead of a VR controller, a non-restrictive environment was established, allowing free movement of the hands without having to hold a controller.
To accomplish the principal aims of this work, the specific objectives were as follows:
  • To develop a novel interface that reflects hand strength, rather than hand movement, in hand tracking.
  • To propose an electromyography signal-processing method that can measure grip strength.
  • To develop breakable objects that generate particles in a VR environment.
  • To develop an interface that can destroy objects with different intensities in a VR environment.
  • To develop a VR interface that reflects grip strength and apply it to VR contents such as rehabilitation, muscle strengthening, exercise, games, and education in the future.
Figure 1 shows the overall system structure proposed in this study. The user wears a head-mounted display (HMD), attaches electrodes to the arm’s surface to acquire EMG signals from the FDP, and experiences VR using hand tracking instead of a controller. A system was developed in Unity where, while holding VR objects with their hands, the objects respond based on the measured EMG values, and the user can visually confirm this through the HMD. The effectiveness of the proposed method was verified through a mixed-methods research approach.
The research problems to be solved according to the research objectives described above were as follows:
  • Does the proposed EMG signal-processing method measure the user’s muscle activity?
  • Is the user’s grip strength reflected in the destruction of objects of different strengths in the VR content?
  • Are users satisfied with the proposed grip strength reflection interface?
In this paper, we describe the proposed system modules and analyze and interpret the results regarding their performance and the feasibility of achieving these research objectives.

2. Related Research

2.1. Flow in Virtual Environments

VR is defined as having presence, flow, and interactivity [9]. Many studies have analyzed VR by focusing on flow. Kwon et al. [3] suggested that presence, a characteristic feature of VR, induces flow, which is directly related to the improvement of learning outcomes. They speculated that enhancing interaction could lead to improved learning outcomes and analyzed the relationship between distractions, flow, and learning outcomes in HMD-based immersive learning. Jo et al. [5] proposed that VR can alleviate the burden of high cost and risk in practice-oriented vocational training. They suggested that immersion and interactive experiences involving various senses can increase learning effectiveness and skill proficiency, providing examples from anatomical practice and industrial simulations. Kim et al. [6] argued that the basic elements defining VR are realism, flow, interactivity, and autonomy. Based on study findings that “expanding the range of sensory elements creates a greater sense of realism”, they developed a multisensory effect reproduction system that introduced buoyancy, water pressure, temperature, and resistance into virtual underwater simulations. These related studies emphasize the importance of interaction using sensory information in VR, focusing on the significance of flow.

2.2. System Using EMG

Among biosignals, SEMG stands out as a powerful tool owing to its ability to enable users to actively control their motion intentions, coupled with its short temporal gap from muscle activity to interpretation. Li et al. [7] conducted research on human–robot interaction systems and highlighted factors, such as muscle fatigue, skin sweat, electrode detachment, electrode movement, and abnormal movements, which can decrease pattern-recognition accuracy in SEMG. Despite these challenges, SEMG exhibits excellent accuracy and robustness, making it a reliable method for recognizing human motion intentions and serving as an important control signal. Dwivedi et al. [8] proposed a framework for hand gesture tracking and virtual object motion decoding using data gloves and EMG. They argued that conventional methods like handheld controllers, buttons, joysticks, trackpads, and triggers are less intuitive compared to using arms and hands, implying that muscle–computer interfaces can facilitate efficient and immersive interactions with VR/AR systems. Kim et al. [9] developed an immersive game interface using a multichannel EMG module with four directionalities and presented an interface that maps each of the four channels and directions to different muscles: extensor digitorum upwards, flexor carpi ulnaris to the right, flexor carpi radialis downwards, and abductor pollicis longus to the left.
These studies imply that the use of EMG in developing interfaces is practical and beneficial. Dwivedi et al. [8] aimed to present fatigue-free and natural interaction using hand gestures. However, their approach, which involved data gloves equipped with magnetic motion capture sensors, still imposed a limitation, as participants’ hands remained in contact with the hardware. Additionally, the system presented by Kim et al. [9] operated only by comparing the acquired values for each channel with a threshold, which limited the number of commands per channel.
To address these issues, this study proposes a system that allows for the use of bare hands by applying HMD camera-based hand tracking, alleviating the inconvenience associated with hardware attachments.

3. Proposed Method

3.1. Surface Electromyography

3.1.1. Acquisition of FDP EMG

The FDP, a muscle responsible for bending the index, middle, ring, and little fingers, allows the potential differences caused by electrical activity during the corresponding finger-bending movements to be measured through electrodes placed on the skin’s surface. Figure 1b shows Ag-AgCl electrodes and snap electrodes that were attached to the FDP and connected to an Arduino UNO through the EMG module KIT0012. EMG signals are known to have a major frequency band of up to 500 Hz [10], and to prevent signal distortion (aliasing), sampling was carried out at a rate above 1 kHz in accordance with Nyquist’s theorem [11].

3.1.2. EMG Signal Processing

In Figure 2a, the original EMG signal is depicted, characterized by a high-frequency signal with a 500 Hz bandwidth. A smoothing technique was used to convert the original EMG into a low-frequency signal in real time, thereby offsetting noise and enhancing signal smoothness (Figure 2b). The acquired original EMG signals were then transmitted to Unity through serial communication, where they underwent real-time smoothing processing as a preprocessing step utilizing a moving average filter. Algorithm 1 is the pseudocode implemented in Unity for the moving average filter.
Algorithm 1 Pseudocode for the moving average filter.
EMG = ParseLong(stream.ReadLine())
total -= readings[readIndex]
readings[readIndex] = EMG
total += readings[readIndex]
readIndex = readIndex + 1
if readIndex >= numReadings then
        readIndex = 0
end if
Smoothed = total / numReadings
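Algorithm 1 can be sketched as the following self-contained Python class (a sketch for illustration only; the original runs inside Unity, and the window size `num_readings` is an assumption, not a value reported in the paper):

```python
class MovingAverageFilter:
    """Circular-buffer moving average mirroring Algorithm 1."""

    def __init__(self, num_readings=10):  # window size is an assumption
        self.readings = [0] * num_readings
        self.total = 0
        self.index = 0
        self.n = num_readings

    def smooth(self, emg_sample):
        self.total -= self.readings[self.index]  # drop the oldest sample
        self.readings[self.index] = emg_sample   # store the newest sample
        self.total += emg_sample
        self.index = (self.index + 1) % self.n   # wrap the ring index
        return self.total / self.n
```

The running total avoids re-summing the whole window on every sample, which keeps the per-sample cost constant at the >1 kHz acquisition rate.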
In Figure 3, signal acquisition from the attached sensor, signal smoothing processing, and signal recording are shown as performed in real time. During the calibration phase of the experiment, participants were asked to contract the FDP with maximum effort k times (k = 10 in this experiment), and the highest EMG value measured during the i-th contraction was set as the maximum voluntary contraction (MVC) value for that contraction, as shown in Equation (1).
$MVC_i = \max(EMG_j), \quad (i = 1, 2, \ldots, k), \; (j = 1, 2, \ldots, n)$ (1)
As in Equation (2), the mean MVC was calculated based on the average of the six MVC values, excluding the first two and the last two from k MVCs.
$meanMVC = \frac{1}{k-4} \sum_{i=3}^{k-2} MVC_i$ (2)
As in Equation (3), the normalized EMG values were derived through min–max normalization based on the mean MVC value and the real-time input of smoothed EMG values [12]. After data preprocessing, the real-time input of normalized EMG values was utilized to trigger events according to the grip strength of the objects.
$NormalizedEMG_j\,(\%) = \frac{f(EMG_j) - \min(EMG_j)}{meanMVC - \min(EMG_j)} \times 100\,(\%)$ (3)
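Equations (1)–(3) can be sketched in Python as follows (function and variable names are illustrative, not from the paper; `smoothed_trials` stands for the per-contraction smoothed EMG series):

```python
def max_voluntary_contractions(smoothed_trials):
    """Eq. (1): the MVC of trial i is the peak smoothed EMG in that trial."""
    return [max(trial) for trial in smoothed_trials]


def mean_mvc(mvcs):
    """Eq. (2): average the k - 4 middle MVCs, dropping the first two
    and last two of the k calibration contractions."""
    k = len(mvcs)
    kept = mvcs[2:k - 2]
    return sum(kept) / (k - 4)


def normalized_emg(smoothed_value, emg_min, mvc_mean):
    """Eq. (3): min-max normalization to a percentage of the mean MVC."""
    return (smoothed_value - emg_min) / (mvc_mean - emg_min) * 100
```

With k = 10 calibration contractions, `mean_mvc` averages the six middle MVCs, matching the paper's description.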

3.2. Scene Creation

3.2.1. Construction of VR Environment

For the experiment conducted in this study, an Oculus Quest 2 (Meta, MA, USA) was used as the VR HMD, and the grip reflection scene was created using the Oculus Integration SDK supported by Meta. In the created scene, users’ hands were reflected as objects in the field of view using hand tracking, and events were triggered based on the combination of the grabbed trigger (which indicated whether the user had grasped the object) and the normalized EMG.

3.2.2. Explanation of the Grip Reflection Scene

The grip reflection scene prepared for this experiment was designed based on the premise of using hand tracking without controllers. The Oculus Integration SDK used for hand tracking requires the interactor component to be assigned to the hand object and the interactable component assigned to the interactable objects to correspond. The types of interaction provided include hand grab, touch, and poke. The criteria for selecting interactions vary depending on the experiment; in this study, hand-grab interaction was mainly used to control objects.
To set different intensities for the objects, four objects with different threshold levels were created, classified into levels, and designed to be destructible by hand. These objects were differentiated by the colors green, red, purple, and gray according to their intensity and were assigned thresholds of 0.7, 0.85, 1.0, and 1.05, respectively, based on the normalized EMG. A destructible object generated fragment particles and was removed when it was in the grabbed state and the normalized EMG exceeded the object-specific threshold, providing a visual event that conveyed the impression of crushing the object when the user gripped it and applied force. A UI panel displayed various parameters as text. In particular, as shown in Figure 4, the normalized EMG parameter was displayed using a gauge UI, facilitating easy recognition by participants.
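The destruction trigger described above can be sketched as a minimal Python snippet (the actual logic runs in Unity; the function name is hypothetical, while the thresholds are those reported in the text):

```python
# Object-level thresholds on the normalized EMG, as reported in the text
THRESHOLDS = {"green": 0.70, "red": 0.85, "purple": 1.00, "gray": 1.05}


def should_destroy(color, is_grabbed, normalized_emg):
    """An object breaks (spawning fragment particles) only while it is
    grabbed AND the normalized EMG exceeds its level-specific threshold."""
    return is_grabbed and normalized_emg > THRESHOLDS[color]
```

Requiring the grabbed state prevents an object from shattering when the user merely clenches a fist near it without holding it.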
In the calibration phase, the MVCs measured were differentiated by pressing the ‘Q’ key on the keyboard or directly pressing a button in the scene. EMG signals, smoothed EMG, and time elapsed during the scene execution were recorded in a CSV file.

4. Results and Discussion

4.1. Experimental Environment

To assess satisfaction and verify the system’s reflection of grip strength, six participants (mean age 22.17 ± 2.13 years), three men and three women, were recruited. The participants were students from Daegu University, and one of them had no prior experience using VR systems. The study was conducted with individuals who did not have any musculoskeletal disorders affecting the FDP. The participants were fully informed about the experiment beforehand and consented to participate.
To evaluate their satisfaction with the proposed system, a mixed-methods research approach, incorporating both quantitative and qualitative evaluations, was used, and the system was assessed through semi-structured interviews. The questions designed for the quantitative evaluation of the system were as follows.
Q1: I felt more sensory interaction using the system that reflects muscle activity.
Q2: I did not need any special adaptation or learning to use the system.
Q3: The input of muscle activity was sufficiently reliable and stable.
Q4: I was satisfied while using the system.

4.2. Satisfaction Evaluation Results

Table 1 shows the scores given by the participants and the average scores for each item after using the system proposed in this study. All six participants were mostly satisfied with all items, resulting in an overall average score of 5 for Q1, 4.67 (0.52) for Q2, 4.83 (0.41) for Q3, and 4.83 (0.41) for Q4. Notably, all participants scored 5 for Q1, a perfect overall average. This indicates that incorporating muscle activity into existing VR content allows for more sensory interaction and reflects the users’ intentions directly into the VR content, thereby improving user satisfaction.
Below are the similar qualitative opinions and the percentage of participants who presented each, calculated as (number of responses / total number of participants) × 100.
33.33%: “It was interesting to see the difference between the green (lv 1) and black (lv 4), but it was hard to feel the difference between the red (lv 2) and purple (lv 3)”.
33.33%: “It would be good if the object showed quantitatively how much force was applied along with the event occurrence threshold”.
16.67%: “I wish the hand tracking recognition was better”.
16.67%: “It was difficult to break the objects because my nails were long”.
The qualitative opinion “It was interesting to see the difference between green (lv 1) and black (lv 4), but hard to feel the difference between red (lv 2) and purple (lv 3)” implies that users may find it difficult to distinguish or control adjacent stages when the threshold levels are divided into four. Reducing the number of stages or ensuring a sufficient gap between thresholds could enable users to differentiate the levels and feel satisfied with their interactions. Second, the opinion “It would be good if the object showed quantitatively how much force was applied along with the event occurrence threshold” indicates that users expect objective feedback when providing complex and continuous inputs such as biosignals and want to verify their inputs against quantitative figures. Third, the opinion “I wish the hand tracking recognition was better” suggests the need for more reliable hand-tracking software, hardware with broader camera coverage or different VR environments, or the removal of elements that could be misrecognized as hands before using the system. Lastly, the opinion “It was difficult to break the objects because my nails were long” suggests that, when using the system barehanded, nail length can affect usability: during the MVC measurement and object-interaction phases, which require clenching a fist, nails digging into the palm can be a problem. This could be mitigated by notifying participants of the potential issue before the experiment.

4.3. Results of EMG by Object Intensity

Through tests with participants using the proposed system, quantitative data including raw EMG and smoothed signals were obtained. Figure 5a is a graph of the smoothed EMG values between onset and cessation when one participant destroyed each object. Participants recorded sensor values of 215.27, 323.57, 187.22, 158.53, 208.07, and 202.75 to destroy a Level 1 object with a threshold of 0.7; 313.50, 357.88, 217.35, 221.68, 282.31, and 371.48 to destroy a Level 2 object with a threshold of 0.85; 271.31, 437.23, 231.18, 168.32, 312.55, and 292.49 to destroy a Level 3 object with a threshold of 1; and 340.01, 329.29, 339.75, 229.36, 367.53, and 331.68 to destroy a Level 4 object with a threshold of 1.05.
Figure 5b shows the average of the smoothed EMG values while participants attempted to destroy each object, with t-test results indicating a significant difference between Levels 1 and 4 (p < 0.001).
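As a rough illustration of such a comparison, a two-sample Welch t statistic over the per-participant Level 1 and Level 4 values reported above can be computed in pure Python (a sketch only; the paper does not specify which t-test variant was used):

```python
from math import sqrt


def welch_t(a, b):
    """Two-sample Welch t statistic (unequal variances assumed)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variance
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    return (mean_a - mean_b) / sqrt(var_a / na + var_b / nb)


# Per-participant sensor values reported for Level 1 and Level 4
level1 = [215.27, 323.57, 187.22, 158.53, 208.07, 202.75]
level4 = [340.01, 329.29, 339.75, 229.36, 367.53, 331.68]
t_stat = welch_t(level1, level4)  # negative: Level 4 required higher EMG
```

In practice a library routine (e.g., a statistics package's independent-samples t-test) would also report the p-value; the sign of the statistic alone confirms that stronger objects demanded higher muscle activity.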

4.4. Limitations and Future Work

We proposed a grip strength reflection method that incorporates muscle activity as a new approach to improving VR interfaces. Our findings showed that the proposed VR interface improved users’ satisfaction and immersion. However, there were some clear limitations and areas for improvement. First, the data acquired from the FDP in this study may have been affected by unintended actions owing to noise from the experimental environment or interference from bodily artifacts [13]. One way to correct the data would be to also acquire EMG signals from muscles that may interfere; errors caused by body rotation could then be corrected by attaching IMU sensors. We plan to apply these corrections to improve the system and to develop a version that includes sensory feedback using actuators such as motors or air pumps. In addition, the most explicit limitation was that our database was collected in a controlled laboratory environment with only six subjects. The proposed system should be verified with subject groups differing in age, gender, and ethnicity, and more subjects should be recruited to generalize the results. In future research, we will verify the system’s effectiveness with more subjects and improve the system. Finally, although our experiments focused on assessing users’ satisfaction with the system, inferential statistical methods should be used in future work to evaluate the effectiveness of the UI more reliably and strengthen our conclusions [14,15].

5. Conclusions

In this study, we used EMG signals from the FDP and hand tracking to develop a sensory VR system. Our method aimed to enhance user satisfaction in terms of interaction with VR devices and content by reflecting grip strength in the VR environment through EMG signal normalization based on the MVC. To evaluate the proposed method, participants were recruited, and a mixed-methods research approach was conducted. The survey results suggest that the users were able to feel the difference in the intensity of the objects in the system, and a significant difference in muscle activity was confirmed when interacting with objects of varying intensities. Consequently, in terms of sensory interaction, user satisfaction was improved by using the proposed system compared to conventional systems when experiencing VR, and it is expected that VR content reflecting users’ intentions can be further improved.

Author Contributions

Conceptualization, Y.S. and M.L.; methodology, Y.S. and M.L.; software, Y.S.; validation, Y.S.; formal analysis, Y.S.; investigation, Y.S.; resources, Y.S.; data curation, Y.S.; writing—original draft preparation, Y.S. and M.L.; writing—review and editing, Y.S. and M.L.; visualization, Y.S.; supervision, M.L.; project administration, M.L.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (MOTIE) (P0012724, HRD Program for Industrial Innovation).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Daegu University (No: 1040621-202403-HR-021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The code used in the study is available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kim, T.K.; Kim, S.S. Metaverse App Market and Leisure: Analysis on Oculus Apps. Knowl. Manag. Res. 2022, 23, 37–60. [Google Scholar]
  2. Lee, M.T.; Youn, J.H.; Kim, E.S. Evaluation and Analysis of VR Content Dementia Prevention Training based on Musculoskeletal Motion Tracking. J. Korea Multimed. Soc. 2020, 23, 15–23. [Google Scholar]
  3. Kwon, C.S. A Study on the Relationship of Distraction Factors, Presence, Flow, and Learning Effects in HMD—Based Immersed VR Learning. J. Korea Multimed. Soc. 2018, 21, 1002–1020. [Google Scholar]
  4. Shin, J.M.; Choi, D.S.; Kim, S.Y.; Jin, K.B. A Study on Efficiency of the Experience Oriented Self-directed Learning in the VR Vocational Training Contents. J. Knowl. Inf. Technol. Syst. 2019, 14, 71–80. [Google Scholar]
  5. Jo, J.H. Analysis of Visual Attention of Students with Developmental Disabilities in Virtual Reality Based Training Contents. J. Korea Multimed. Soc. 2021, 24, 328–335. [Google Scholar]
  6. Kim, C.M.; Youn, J.H.; Kang, I.C.; Kim, B.K. Development and Assessment of a Multi-sensory Effector System to Improve the Realism of Virtual Underwater Simulation. J. Korea Multimed. Soc. 2014, 17, 104–112. [Google Scholar] [CrossRef]
  7. Li, K.; Wang, J.Z.L.; Zhang, M.; Li, J.; Bao, S. A Review of the Key Technologies for sEMG-based Human-robot Interaction Systems. Biomed. Signal Process. Control. 2020, 62, 102074. [Google Scholar] [CrossRef]
  8. Dwivedi, A.; Kwon, Y.J.; Liarokapis, M. EMG-Based Decoding of Manipulation Motions in Virtual Reality: Towards Immersive Interfaces. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11–14 October 2020; pp. 3296–3303. [Google Scholar]
  9. Kim, K.S.; Han, Y.H.; Jung, W.B.; Lee, Y.H.; Kang, J.H.; Choi, H.H.; Mun, C.W. Technical Development of Interactive Game Interface Using Multi-Channel EMG Signal. J. Korea Game Soc. 2010, 10, 65–73. [Google Scholar]
  10. Lee, M.; Lee, J.H.; Kim, D.H. Gender Recognition using Optimal Gait Feature based on Recursive Feature Elimination in Normal Walking. Expert Syst. Appl. 2022, 189, 116040. [Google Scholar] [CrossRef]
  11. Maciejewski, M.W.; Qui, H.Z.; Rujan, I.; Mobli, M.; Hoch, J.C. Nonuniform Sampling and Spectral Aliasing. J. Magn. Reson. 2009, 199, 88–93. [Google Scholar] [CrossRef]
  12. Islam, M.J.; Ahmad, S.; Haque, F.; Reaz, M.B.I.; Bhuiyan, M.A.S.; Islam, M.R. Application of Min-Max Normalization on Subject-Invariant EMG Pattern Recognition. IEEE Trans. Instrum. Meas. 2022, 71, 1–12. [Google Scholar] [CrossRef]
  13. De Luca, C.J.; Gilmore, L.D.; Kuznetsov, M.; Roy, S.H. Filtering the Surface EMG Signal: Movement Artifact and Baseline Noise Contamination. J. Biomech. 2010, 43, 1573–1579. [Google Scholar] [CrossRef]
  14. Zhu, Y.; Geng, Y.; Huang, R.; Zhang, X.; Wang, L.; Liu, W. Driving towards the future: Exploring human-centered design and experiment of glazing projection display systems for autonomous vehicles. Int. J. Hum.–Comput. Interact. 2023, 1–16. [Google Scholar] [CrossRef]
  15. Fang, C.; He, B.; Wang, Y.; Cao, J.; Gao, S. EMG-Centered Multisensory Based Technologies for Pattern Recognition in Rehabilitation: State of the Art and Challenges. Biosensors 2020, 10, 85. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Overall structure of the proposed method in this study: (a) structure, (b) EMG electrodes attached to the flexor digitorum profundus.
Figure 2. SEMG Signals: (a) raw EMG, (b) smoothed EMG.
Figure 3. Experimental protocol block diagram from the perspective of EMG signal processing.
Figure 4. Example of a screen where an object’s event occurs due to contraction: (a) when holding an object lightly; (b) when holding an object strongly.
Figure 5. Result of the EMG value from the subject’s attempt to destroy each object: (a) graph of EMG value fluctuations while destroying objects at each level, (b) t-test results between object level 1 and level 4. Two asterisks (**) indicate statistical significance at p < 0.001.
Table 1. Questionnaire results from six subjects.

            Q1    Q2           Q3           Q4
Subject 1   5     5            5            4
Subject 2   5     4            5            5
Subject 3   5     5            5            5
Subject 4   5     5            4            5
Subject 5   5     4            5            5
Subject 6   5     5            5            5
Total       5     4.67 (0.52)  4.83 (0.41)  4.83 (0.41)

Share and Cite

MDPI and ACS Style

Shin, Y.; Lee, M. Development of Sensory Virtual Reality Interface Using EMG Signal-Based Grip Strength Reflection System. Appl. Sci. 2024, 14, 4415. https://doi.org/10.3390/app14114415

