

Applications of Virtual, Augmented, and Mixed Reality

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (15 March 2022) | Viewed by 121117

Special Issue Editor


Guest Editor
Department Technics and Projects in Engineering and Architecture, Universidad de La Laguna, 38200 Santa Cruz de Tenerife, Spain
Interests: augmented reality; virtual reality; mixed reality; human-computer interaction; wearable interaction; user experience; usability

Special Issue Information

Dear Colleagues,

The term XR (extended reality), which encompasses virtual reality, augmented reality, and mixed reality, is becoming more widely known. In recent years, XR has made remarkable progress, and expectations for its use are very high. There is no doubt about the potential of this technology.

As basic research has come to fruition, expectations for XR have increased, as have opportunities for it to be applied in different fields. These technologies offer great opportunities in education, medicine, architecture, Industry 4.0, e-commerce, gaming, healthcare, the military, emergency response, entertainment, engineering, advertising, retail, and beyond, and we may be facing a technological change as significant as the mass adoption of the PC, the internet, or the smartphone in their day.

The applications that can be developed for smartphones, tablets, and new XR wearables (glasses and headsets), which free workers and users from having to hold a device, are more numerous than we can imagine; they can save time, reduce production costs, and improve quality of life.

In the manufacturing field, the use of augmented reality has been a topic of conversation for years, but actual deployment has been slow. This is changing, however, as manufacturers explore the technology in their plants and move beyond pilots and trials to wider, day-to-day use of AR. Although AR is still at an early stage in manufacturing, there is a great deal of innovation going on and a lot of movement in the industry around AR. On the other hand, XR provides great opportunities in education and training that are not possible with traditional instruction methods and other technologies used in education. VR, AR, and MR allow learners to experience, in a safe way, environments and virtual scenarios that would normally be dangerous to learn in. Even for academic institutions and companies, it is often difficult to provide the infrastructure needed to teach or train learners and workers. Unlike some traditional instruction methods, VR, AR, and MR applications offer consistent education and training that do not vary from instructor to instructor. These virtual technologies also afford the development of psychomotor skills through physical 3D interactions with virtual elements. This is especially important when resources for training purposes are limited.

This Special Issue calls for interesting studies, applications, and experiences that will open up new uses of XR. In addition to research that steadily improves on existing issues, we welcome research papers that present new possibilities of VR, AR, and MR. Topics of interest include, but are not limited to, the following:

  • VR/AR/MR applications: manufacturing, healthcare, virtual travel, e-sports, games, cultural heritage, military, e-commerce, psychology, medicine, emergency response, entertainment, engineering, advertising, etc.
  • Brain science for VR/AR/MR
  • VR/AR/MR collaboration
  • Context awareness for VR/AR
  • Education with VR/AR/MR
  • Use of 360° video for VR
  • Display technologies for VR/AR/MR
  • Human–computer interactions in VR/AR/MR
  • Human factors in VR/AR/MR
  • Perception/presence in VR/AR/MR
  • Physiological sensing for VR/AR/MR
  • Cybersickness
  • User experience/usability in VR/AR/MR
  • Interfaces for VR/AR
  • Virtual humans/avatars in VR/AR/MR
  • Wellbeing with VR/AR/MR
  • Human behavior sensing
  • Gesture interfaces
  • Interactive simulation
  • New interaction design for VR/AR/MR
  • Integrated AR/VR devices and technologies
  • Issues in real-world and virtual-world integration
  • Social aspects of VR/AR/MR interaction

Prof. Jorge Martin-Gutierrez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Virtual, augmented, and mixed reality
  • Interactive simulation
  • HCI (human–computer interaction)
  • Human-centered design

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (23 papers)


Research

Jump to: Review

21 pages, 3466 KiB  
Article
AnyGesture: Arbitrary One-Handed Gestures for Augmented, Virtual, and Mixed Reality Applications
by Alexander Schäfer, Gerd Reis and Didier Stricker
Appl. Sci. 2022, 12(4), 1888; https://doi.org/10.3390/app12041888 - 11 Feb 2022
Cited by 11 | Viewed by 4689
Abstract
Natural user interfaces based on hand gestures are becoming increasingly popular. The need for expensive hardware long left the wide range of interaction possibilities that hand tracking enables largely unexplored. Recently, hand tracking has been built into inexpensive and widely available hardware, giving more and more people access to this technology. This work provides researchers and users with a simple yet effective way to implement various one-handed gestures, enabling deeper exploration of gesture-based interactions and interfaces. To this end, it provides a framework for the design, prototyping, testing, and implementation of one-handed gestures. The proposed framework was implemented with two main goals: first, it should be able to recognize any one-handed gesture; secondly, designing and implementing a gesture should be as simple as performing it and pressing a button to record it. The contribution of this paper is a simple yet unique way to record and recognize static and dynamic one-handed gestures. A static gesture can be captured with a template-matching approach, while dynamic gestures use previously captured spatial information. The presented approach was evaluated in a user study with 33 participants, and the implemented gestures achieved high accuracy and user acceptance. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)
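The abstract above describes recognizing static gestures by template matching against a recorded hand pose. As a rough, hypothetical sketch of that general idea (not the authors' actual AnyGesture implementation; the joint data, distance metric, and threshold used here are all assumptions):

```python
import math

def pose_distance(template, pose):
    """Mean Euclidean distance between corresponding hand joints (x, y, z tuples)."""
    assert len(template) == len(pose)
    return sum(math.dist(a, b) for a, b in zip(template, pose)) / len(template)

def matches(template, pose, threshold=0.02):
    """True if the live pose is within `threshold` (metres, hypothetical) of the template."""
    return pose_distance(template, pose) <= threshold

# Hypothetical 3-joint recorded template and a slightly perturbed live pose
template = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.05, 0.0)]
live = [(0.005, 0.0, 0.0), (0.1, 0.004, 0.0), (0.2, 0.05, 0.003)]
print(matches(template, live))  # → True
```

A real system would first normalize poses for hand position and orientation before comparing; the paper's dynamic gestures would additionally compare trajectories over time.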

21 pages, 17833 KiB  
Article
An Immersive Virtual Reality Training Game for Power Substations Evaluated in Terms of Usability and Engagement
by Iván F. Mondragón Bernal, Natalia E. Lozano-Ramírez, Julian M. Puerto Cortés, Sergio Valdivia, Rodrigo Muñoz, Juan Aragón, Rodolfo García and Giovanni Hernández
Appl. Sci. 2022, 12(2), 711; https://doi.org/10.3390/app12020711 - 12 Jan 2022
Cited by 29 | Viewed by 5376
Abstract
Safety-focused training is essential for operation and maintenance centered on the reliability of critical infrastructures, such as power grids. This paper introduces and evaluates a system for power substation operational training based on exploring and interacting with realistic models in virtual worlds using serious games. The virtual reality (VR) simulator used building information modelling (BIM) from a 115 kV substation to develop a scenario with the high technical detail needed for professional training. The system created interactive models that could be explored through a first-person-perspective serious game in a cave automatic virtual environment (CAVE). Different operational missions could be carried out in the serious game, allowing several skills to be coached. Suitability for the vocational training carried out by utility companies was evaluated in terms of usability and engagement, using a System Usability Scale (SUS) and a Game Engagement Questionnaire (GEQ) completed by 16 power substation operators. The results demonstrated marginally acceptable usability, with opportunities for improvement, and high acceptance by utility technicians of this system for safety-focused operation training in such hazardous tasks. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

16 pages, 29250 KiB  
Article
EVM: An Educational Virtual Reality Modeling Tool; Evaluation Study with Freshman Engineering Students
by Julián Conesa-Pastor and Manuel Contero
Appl. Sci. 2022, 12(1), 390; https://doi.org/10.3390/app12010390 - 31 Dec 2021
Cited by 6 | Viewed by 2359
Abstract
Educational Virtual Modeling (EVM) is a novel VR-based application for sketching and modeling in an immersive environment, designed to introduce freshman engineering students to modeling concepts and reinforce their understanding of the spatial connection between an object and its 2D projections. It was built on the Unity 3D game engine and Microsoft’s Mixed Reality Toolkit (MRTK). EVM was designed to support the creation of the typical parts used in exercises in basic engineering graphics courses, with special emphasis on a fast learning curve and a simple way to provide exercises and tutorials to students. To analyze the feasibility of using EVM for this purpose, a user study was conducted with 23 freshman and sophomore engineering students who used both EVM and Trimble SketchUp to model six parts using an axonometric view as the input. Students had no previous experience with either system. Each participant went through a brief training session and was allowed to use each tool freely for 20 min. At the end of the modeling exercises with each system, the participants rated its usability by answering the System Usability Scale (SUS) questionnaire. Additionally, they filled out a questionnaire assessing system functionality. The results showed a very high SUS score for EVM (M = 92.93, SD = 6.15), whereas Trimble SketchUp obtained a mean score of only 76.30 (SD = 6.69). The completion time for the modeling tasks with EVM showed its suitability for regular class use, even though exercises usually take longer to complete in EVM than in Trimble SketchUp. There were no statistically significant differences in the functionality assessment. At the end of the experimental session, participants were asked to express their opinions about the systems and provide suggestions for improving EVM. All participants preferred EVM as a potential tool for exercises in the engineering graphics course.
Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)
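Several papers in this issue, including the one above, report System Usability Scale (SUS) scores. For readers unfamiliar with how a score such as 92.93 is derived from the ten-item questionnaire, the standard published SUS scoring procedure can be sketched as follows (an illustration of the well-known formula, not code from any of these studies):

```python
def sus_score(responses):
    """Standard System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response). The sum (0-40) is scaled by 2.5 to 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Fully positive response pattern: agree with positive items, disagree with negative ones
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A per-participant score above roughly 68 is conventionally read as above-average usability, which is why the EVM mean of 92.93 is described as very high.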

18 pages, 9750 KiB  
Article
VRChem: A Virtual Reality Molecular Builder
by Otso Pietikäinen, Perttu Hämäläinen, Jaakko Lehtinen and Antti J. Karttunen
Appl. Sci. 2021, 11(22), 10767; https://doi.org/10.3390/app112210767 - 15 Nov 2021
Cited by 10 | Viewed by 4948
Abstract
Virtual reality provides a powerful way to visualize the three-dimensional, atomic-level structures of molecules and materials. We present new virtual reality software for molecular modeling and for testing the use of virtual reality in organic chemistry education. The open-source software, named VRChem, was developed primarily for building, visualizing and manipulating organic molecules using a head-mounted virtual reality system. The design goal of the VRChem software was to create an easy-to-use and entertaining user experience for molecular modeling in virtual reality. We discuss the design and implementation of VRChem, together with real-life user experiences collected from students and academic research staff. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

19 pages, 2803 KiB  
Article
Combining Virtual Reality and Organizational Neuroscience for Leadership Assessment
by Elena Parra, Irene Alice Chicchi Giglioli, Jestine Philip, Lucia Amalia Carrasco-Ribelles, Javier Marín-Morales and Mariano Alcañiz Raya
Appl. Sci. 2021, 11(13), 5956; https://doi.org/10.3390/app11135956 - 26 Jun 2021
Cited by 9 | Viewed by 3359
Abstract
In this article, we introduce three-dimensional Serious Games (3DSGs) under an evidence-centered design (ECD) framework and use an organizational neuroscience-based eye-tracking measure to capture implicit behavioral signals associated with leadership skills. While ECD is a well-established framework used in the design and development of assessments, it has rarely been utilized in organizational research. The study proposes a novel 3DSG combined with organizational neuroscience methods as a promising tool to assess and recognize leadership-related behavioral patterns that manifest during complex and realistic social situations. We offer a research protocol for assessing task- and relationship-oriented leadership skills that uses ECD, eye-tracking measures, and machine learning. Seamlessly embedding biological measures into 3DSGs enables objective assessment methods that are based on machine learning techniques to achieve high ecological validity. We conclude by describing a future research agenda for the combined use of 3DSGs and organizational neuroscience methods for leadership and human resources. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

17 pages, 1571 KiB  
Article
An Immersive Serious Game for the Behavioral Assessment of Psychological Needs
by Irene Alice Chicchi Giglioli, Lucia A. Carrasco-Ribelles, Elena Parra, Javier Marín-Morales and Mariano Alcañiz Raya
Appl. Sci. 2021, 11(4), 1971; https://doi.org/10.3390/app11041971 - 23 Feb 2021
Cited by 6 | Viewed by 3718
Abstract
Motivation is an essential component of mental health and well-being. In this area, researchers have identified four psychological needs that drive human behavior: attachment, self-esteem, orientation and control, and maximization of pleasure and minimization of distress. Various self-report scales and interview tools have been developed to assess these dimensions. Despite their validity, these instruments show limitations in terms of abstraction and decontextualization, and biases such as social desirability bias can affect the veracity of responses. Conversely, virtual serious games (VSGs), which are games with specific purposes, can potentially provide more ecologically valid and objective assessments than traditional approaches. Starting from these premises, the aim of this study was to investigate the feasibility of a VSG for assessing the four psychological needs. Sixty subjects participated in five VSG sessions. The results showed that the VSG was able to recognize the attachment, self-esteem, and orientation and control needs with high accuracy, and to a lesser extent the maximization of pleasure and minimization of distress need. In conclusion, this study showed the feasibility of using a VSG to enhance behavior-based assessment of psychological needs, overcoming biases present in traditional assessments. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

23 pages, 14726 KiB  
Article
Spatial Skills and Perceptions of Space: Representing 2D Drawings as 3D Drawings inside Immersive Virtual Reality
by Hugo C. Gómez-Tone, Jorge Martin-Gutierrez, John Bustamante-Escapa and Paola Bustamante-Escapa
Appl. Sci. 2021, 11(4), 1475; https://doi.org/10.3390/app11041475 - 6 Feb 2021
Cited by 28 | Viewed by 6350
Abstract
Rapid freehand drawings are of great importance in the early years of university studies in architecture, because both the physical characteristics of spaces and their sensory characteristics can be communicated through them. In order to draw architectural spaces, it is necessary to be able to visualize and manipulate them mentally, which leads to the concept of spatial skills; expressing them in drawings also requires developed spatial perception. The purpose of this research is to analyze the improvement of spatial skills through full-scale sketching of architectural spaces in immersive virtual environments, and to analyze spatial perception with reference to the capture of spatial sensations in such environments. Spatial skills training was created based on freehand drawing of architectural spaces using head-mounted displays (HMDs); the spatial sensations experienced were also registered using HMDs, but only in previously modeled realistic spaces. It was found that the training significantly improved orientation, rotation, and visualization, and that the sensory journey through and experimentation with architectural spaces realistically modeled in immersive virtual reality environments conveys the same sensations that the designer initially sought to convey. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

16 pages, 1124 KiB  
Article
Compassionate Embodied Virtual Experience Increases the Adherence to Meditation Practice
by Jaime Navarrete, Marian Martínez-Sanchis, Miguel Bellosta-Batalla, Rosa Baños, Ausiàs Cebolla and Rocío Herrero
Appl. Sci. 2021, 11(3), 1276; https://doi.org/10.3390/app11031276 - 30 Jan 2021
Cited by 12 | Viewed by 3712
Abstract
Virtual Reality (VR) could be useful for overcoming the imagery and somatosensory difficulties of compassion-based meditations, given that it helps generate empathy by making it possible to put oneself in the mind of another. Thus, the aim of this study was to evaluate the effectiveness of an embodied-VR system in generating a compassionate response and increasing the quality of and adherence to meditation practice. Health professionals and healthcare students (n = 41) were randomly assigned to a regular audio-guided meditation or to a meditation supported by an embodied-VR system, “The machine to be another”. In both conditions, there was an initial in-person session and two weeks of meditation practice at home. An implicit measure was used to measure prosocial behavior, and self-report questionnaires were administered to assess compassion-related constructs, quality of meditation, and frequency of meditation. The results revealed that participants in the embodied-VR condition meditated at home for twice as long as participants who only listened to the usual guided meditation. However, there were no significant differences in the overall quality of at-home meditation. In conclusion, this study confirms that embodied-VR systems are useful for increasing adherence to meditation practice. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

21 pages, 2652 KiB  
Article
An Immersive Virtual Reality Game for Predicting Risk Taking through the Use of Implicit Measures
by Carla de-Juan-Ripoll, José Llanes-Jurado, Irene Alice Chicchi Giglioli, Javier Marín-Morales and Mariano Alcañiz
Appl. Sci. 2021, 11(2), 825; https://doi.org/10.3390/app11020825 - 17 Jan 2021
Cited by 8 | Viewed by 3743
Abstract
Risk taking (RT) measurement constitutes a challenge for researchers and practitioners and has been addressed from different perspectives. Personality traits and temperamental aspects such as sensation seeking and impulsivity influence an individual’s approach to RT, prompting risk-seeking or risk-averse behaviors. Virtual reality has emerged as a suitable tool for RT measurement, since it enables the exposure of a person to realistic risks, allowing embodied interactions, the application of stealth assessment techniques, and physiological real-time measurement. In this article, we present the assessment on decision making in risk environments (AEMIN) tool, an enhanced version of the spheres and shield maze task, a previous tool developed by the authors. The main aim of this article is to study whether it is possible to discriminate between participants with high versus low scores on measures of personality, sensation seeking, and impulsivity through their behaviors and physiological responses while playing AEMIN. Applying machine learning methods to the dataset, we explored: (a) whether these data make it possible to discriminate between the two populations in each variable; and (b) which parameters better discriminate between the two populations in each variable. The results support the use of AEMIN as an ecological assessment tool to measure RT, since it brings to light behaviors that allow subjects to be classified into high/low risk-related psychological constructs. Regarding physiological measures, galvanic skin response seems to be less salient in the prediction models. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

19 pages, 6670 KiB  
Article
A Multi-Agent System for Data Fusion Techniques Applied to the Internet of Things Enabling Physical Rehabilitation Monitoring
by Héctor Sánchez San Blas, André Sales Mendes, Francisco García Encinas, Luís Augusto Silva and Gabriel Villarubia González
Appl. Sci. 2021, 11(1), 331; https://doi.org/10.3390/app11010331 - 31 Dec 2020
Cited by 26 | Viewed by 4377
Abstract
There are more than 800 million people in the world with chronic diseases. Many of these people do not have easy access to healthcare facilities for recovery. Telerehabilitation seeks to provide a solution to this problem. According to researchers, the topic has been treated as a medical aid, combining technologies such as the Internet of Things and virtual reality. The main objective of this work is to design a distributed platform to monitor the patient’s movements and status during rehabilitation exercises. This information can later be processed and analyzed remotely by the doctor assigned to the patient. In this way, the doctor can follow the patient’s progress, enhancing the improvement and recovery process. To achieve this, a case study was carried out using a PANGEA-based multi-agent system that coordinates the different parts of the architecture using ubiquitous computing techniques. In addition, the system gives the patient real-time feedback. This feedback system makes patients aware of their errors so that they can improve their performance in later executions. An evaluation was carried out with real patients, achieving promising results. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

17 pages, 3363 KiB  
Article
An Aerial Mixed-Reality Environment for First-Person-View Drone Flying
by Dong-Hyun Kim, Yong-Guk Go and Soo-Mi Choi
Appl. Sci. 2020, 10(16), 5436; https://doi.org/10.3390/app10165436 - 6 Aug 2020
Cited by 22 | Viewed by 4735
Abstract
A drone must be able to fly without colliding, in order to protect both its surroundings and itself. In addition, it must also incorporate numerous features of interest for drone users. In this paper, an aerial mixed-reality environment for first-person-view drone flying is proposed to provide an immersive experience and a safe environment for drone users by creating additional virtual obstacles when flying a drone in an open area. The proposed system is effective in perceiving the depth of obstacles, and it enables bidirectional interaction between the real and virtual worlds using a drone equipped with a stereo camera based on human binocular vision. In addition, it synchronizes the parameters of the real and virtual cameras to effectively and naturally create virtual objects in a real space. Based on user studies that included both general and expert users, we confirm that the proposed system successfully creates a mixed-reality environment using a flying drone by quickly recognizing real objects and stably combining them with virtual objects. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

18 pages, 1977 KiB  
Article
The Effect of the Degree of Anxiety of Learners during the Use of VR on the Flow and Learning Effect
by Chongsan Kwon
Appl. Sci. 2020, 10(14), 4932; https://doi.org/10.3390/app10144932 - 17 Jul 2020
Cited by 11 | Viewed by 3517
Abstract
Virtual reality (VR) learning content that provides negative experiences makes learners anxious. Thus, experimental research was conducted to determine how the anxiety felt by learners using VR impacts learning. To measure the learning effects, flow, a leading element of learning effects, was measured. Flow, a scale of how immersed an individual is in the work he or she is currently performing, has a positive effect on learning. The evaluation used the empirical recognition scale by Kwon (2020) and the six-item short-form State-Trait Anxiety Inventory (STAI) from Marteau and Becker (1992), which were used in the preceding study. The difference in flow between high- and low-anxiety groups was explored by measuring the anxiety the study participants felt while playing a VR-based fire safety education game that allows learners to feel the heat and wind of the fire site on their skin. As a result of the experiment, no difference in flow was found between the high- and low-anxiety groups that played the same VR game with cutaneous sensation. However, the high-anxiety group who played the VR game with cutaneous sensation showed higher flow than the group that played the basic fire safety education VR game. Based on these results, the following conclusions were drawn: the closer to reality a VR learning and training system for negative situations is, the more realistically the learner feels the anxiety. In other words, the closer to reality the virtual environment is, the more realistically the learner feels the feelings in the virtual space. In turn, through this realistic experience, the learner becomes more deeply immersed in the flow. In addition, considering that flow is a prerequisite for the learning effect, the anxiety that learners feel in the virtual environment will also have a positive effect on the learning effect.
As a result, it can be assumed that the more realistically VR is reproduced, the more effective experiential learning using VR can be. Full article
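For readers unfamiliar with the instrument, the six-item short-form STAI mentioned above is conventionally scored by rating each item from 1 to 4, reverse-scoring the three anxiety-absent items, and prorating the sum to the 20-80 range of the full inventory. The sketch below illustrates that convention; the item names and example ratings are illustrative and not taken from the study.

```python
# Sketch of scoring the six-item short-form STAI (Marteau & Bekker, 1992),
# assuming the common convention: items rated 1-4, the three anxiety-absent
# items (calm, relaxed, content) reverse-scored, and the six-item sum
# prorated by 20/6 to the 20-80 range of the full STAI.

REVERSED = {"calm", "relaxed", "content"}  # anxiety-absent items

def stai6_score(responses: dict) -> float:
    """responses maps item name -> rating in 1..4; returns score in 20..80."""
    total = 0
    for item, rating in responses.items():
        if not 1 <= rating <= 4:
            raise ValueError(f"rating out of range for {item!r}")
        total += (5 - rating) if item in REVERSED else rating
    return total * 20 / 6  # prorate six items to the 20-80 scale

# Example: a fairly calm respondent
print(stai6_score({"calm": 3, "tense": 2, "upset": 1,
                   "relaxed": 3, "content": 3, "worried": 2}))
```

Higher scores indicate higher state anxiety, which is how the high- and low-anxiety groups in the study could be separated.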
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

24 pages, 5859 KiB  
Article
Examining User Perception of the Size of Multiple Objects in Virtual Reality
by Bruce H. Thomas
Appl. Sci. 2020, 10(11), 4049; https://doi.org/10.3390/app10114049 - 11 Jun 2020
Cited by 11 | Viewed by 3604
Abstract
This article presents a user study into user perception of an object’s size when presented in virtual reality. Users’ perception of the size of virtual objects is critical to their understanding of virtual worlds. This article is concerned with virtual objects that are within arm’s reach of the user, for example virtual controls such as buttons, dials and levers that users manipulate to control the virtual reality application. It explores a user’s ability to judge the size of an object relative to a second object of a different colour. The results determined that the points of subjective equality for height and width judgement tasks, with targets ranging from 10 to 90 mm, were all within an acceptable range; that is to say, participants’ height and width judgements were very close to the target values. The just-noticeable differences were all less than 1.5 mm for the height judgement task and less than 2.3 mm for the width judgement task. Full article
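As an illustration of how point of subjective equality (PSE) and just-noticeable difference (JND) values like those above are typically derived (the article's own analysis pipeline is not reproduced here), one can fit a logistic psychometric function to the proportion of "test looks larger" responses. The stimulus sizes and response rates below are synthetic.

```python
# Illustrative sketch, not the study's code: estimate PSE and JND from
# two-alternative "is the test object larger than the reference?" judgements
# by fitting a logistic psychometric function.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Probability of responding 'larger' for a test size x (mm)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Synthetic data: 50 mm reference, test sizes in mm, and the proportion
# of 'larger' responses observed at each size.
sizes = np.array([44, 46, 48, 50, 52, 54, 56], dtype=float)
p_larger = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.93, 0.98])

(pse, slope), _ = curve_fit(psychometric, sizes, p_larger, p0=[50.0, 1.0])

# JND taken as half the 25%-75% interquartile range of the fitted curve;
# for a logistic function this equals ln(3) / slope.
jnd = np.log(3.0) / slope
print(f"PSE = {pse:.1f} mm, JND = {jnd:.2f} mm")
```

A PSE close to the 50 mm reference and a small JND would correspond to the accurate size judgements the study reports.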
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

16 pages, 5422 KiB  
Article
Comparing Augmented Reality-Assisted Assembly Functions—A Case Study on Dougong Structure
by Chih-Hsing Chu, Chien-Jung Liao and Shu-Chiang Lin
Appl. Sci. 2020, 10(10), 3383; https://doi.org/10.3390/app10103383 - 14 May 2020
Cited by 30 | Viewed by 5334
Abstract
The Dougong structure is an ancient architectural innovation of the East. Its construction method is complex and challenging to understand from drawings. Scale models were developed to preserve this culturally unique architectural technique through learning by assembling them. In this work, augmented reality (AR)-based systems that support the manual assembly of Dougong models with instant interactions were developed. The first objective was to design new AR-assisted functions that overcome existing limitations of paper-based assembly instructions. The second was to clarify, through experiments, whether and how AR can improve the operational efficiency or quality of the manual assembly process. The experimental data were analyzed with both qualitative and quantitative measures to evaluate the assembly efficiency, accuracy, and workload of these functions. The results revealed essential requirements for improving the functional design of the systems. They also showed the potential of AR as an effective human-interfacing technology for assisting the manual assembly of complex objects. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

15 pages, 8149 KiB  
Article
Efficacy of Virtual Reality in Painting Art Exhibitions Appreciation
by Chih-Long Lin, Si-Jing Chen and Rungtai Lin
Appl. Sci. 2020, 10(9), 3012; https://doi.org/10.3390/app10093012 - 26 Apr 2020
Cited by 41 | Viewed by 9521
Abstract
Virtual reality (VR) technology has been employed in a wide range of fields, from entertainment to medicine and engineering. Advances in VR also provide new opportunities for art exhibitions. This study discusses the experience of art appreciation through desktop virtual reality (Desktop VR) or head-mounted display virtual reality (HMD VR) and compares it with appreciating a physical painting. Seventy-eight university students participated in the study. According to the findings, painting evaluations and the emotions expressed during appreciation show no significant difference across the three conditions, indicating that participants perceived the paintings similarly regardless of whether they were viewed through VR. Owing to its operational limitations, however, participants considered HMD VR a tool that hinders the free appreciation of paintings. In addition, attention should be paid to the proper projected size of words and paintings for better reading and viewing. The above indicates that digital technology can narrow the gap between a virtual painting and a physical one; however, the design of object size and interaction in the VR context must still be improved so that a virtual exhibition can be as impressive as a physical one. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

17 pages, 5998 KiB  
Article
SoundFields: A Virtual Reality Game Designed to Address Auditory Hypersensitivity in Individuals with Autism Spectrum Disorder
by Daniel Johnston, Hauke Egermann and Gavin Kearney
Appl. Sci. 2020, 10(9), 2996; https://doi.org/10.3390/app10092996 - 25 Apr 2020
Cited by 38 | Viewed by 9379
Abstract
Individuals with autism spectrum disorder (ASD) are characterised as having impairments in social-emotional interaction and communication, alongside displaying repetitive behaviours and interests. Additionally, they frequently experience difficulties in processing sensory information, with particular prevalence in the auditory domain. Often triggered by everyday environmental sounds, auditory hypersensitivity can provoke self-regulatory fear responses such as crying and isolation from sounds. This paper presents SoundFields, an interactive virtual reality game designed to address this area by integrating exposure-based therapy techniques into game mechanics and delivering target auditory stimuli to the player via binaural spatial audio rendering. A pilot study was conducted with six participants diagnosed with ASD who displayed hypersensitivity to specific sounds, to evaluate the use of SoundFields as a tool to reduce levels of anxiety associated with identified problematic sounds. Over the course of the investigation, participants played the game weekly for four weeks, and all participants actively engaged with the virtual reality (VR) environment and enjoyed playing the game. Following this period, a comparison of pre- and post-study measurements showed a significant decrease in anxiety linked to the target auditory stimuli. The study results therefore suggest that SoundFields could be an effective tool for helping individuals with autism manage auditory hypersensitivity. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

16 pages, 9551 KiB  
Article
ALCC-Glasses: Arriving Light Chroma Controllable Optical See-Through Head-Mounted Display System for Color Vision Deficiency Compensation
by Ying Tang, Zhenyang Zhu, Masahiro Toyoura, Kentaro Go, Kenji Kashiwagi, Issei Fujishiro and Xiaoyang Mao
Appl. Sci. 2020, 10(7), 2381; https://doi.org/10.3390/app10072381 - 31 Mar 2020
Cited by 6 | Viewed by 3007
Abstract
About 250 million people in the world suffer from color vision deficiency (CVD). Contact lenses and glasses with a color filter are available to partially improve the vision of people with CVD. Tinted glasses uniformly change the colors in a user’s field of view (FoV), which can improve the contrast of certain colors while making others hard to identify. On the other hand, an optical see-through head-mounted display (OST-HMD) provides a new alternative by applying a controllable overlay to a user’s FoV. However, color calibration methods for people with CVD, such as the Daltonization process, need to make the calibrated colors darker, a capability not yet featured on recent commercial OST-HMDs. We propose a new approach that realizes light subtraction on OST-HMDs using a transmissive LCD panel, and we present a prototype system, named ALCC-glasses, to validate and demonstrate the new arriving light chroma controllable augmented reality technology for CVD compensation. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

17 pages, 7937 KiB  
Article
Real-Time Application for Generating Multiple Experiences from 360° Panoramic Video by Tracking Arbitrary Objects and Viewer’s Orientations
by Syed Hammad Hussain Shah, Kyungjin Han and Jong Weon Lee
Appl. Sci. 2020, 10(7), 2248; https://doi.org/10.3390/app10072248 - 26 Mar 2020
Cited by 7 | Viewed by 3181
Abstract
We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within its 360° environment than a normal video, and there can be multiple interesting areas within a 360° frame at the same time. Due to the narrow field of view of virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system is aimed at generating multiple experiences based on interesting information in different regions of a 360° video and at efficiently transferring these experiences to prospective users. The proposed system generates experiences using two approaches: (1) recording the user’s experience as the user watches a panoramic video with a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in the 360° video selected by the user. For tracking an arbitrary interesting object, we developed a pipeline around an existing simple object tracker to adapt it to 360° videos. This tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of different experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus assistance technique for efficiently transferring the generated experiences to other users in virtual reality. A technical evaluation of the system, along with a detailed user study, was performed to assess the system’s application. The findings showed that a single piece of 360° multimedia content can generate multiple experiences that can be transferred among users. Moreover, sharing 360° experiences enabled viewers to watch multiple interesting contents with less effort. Full article
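One building block any such pipeline needs is a mapping between equirectangular pixel coordinates and viewing angles, so a planar tracker's bounding box can be related to the viewer's orientation. The authors' exact convention is not given; the hedged sketch below assumes yaw in [-180, 180) degrees increasing rightward and pitch in [-90, 90] increasing upward.

```python
# Hedged sketch (assumed conventions, not the authors' code): converting
# between equirectangular pixel coordinates and (yaw, pitch) viewing angles
# for a 360° video frame of the given width and height.

def pixel_to_angles(x, y, width, height):
    """Equirectangular pixel -> (yaw, pitch) in degrees."""
    yaw = (x / width) * 360.0 - 180.0
    pitch = 90.0 - (y / height) * 180.0
    return yaw, pitch

def angles_to_pixel(yaw, pitch, width, height):
    """(yaw, pitch) in degrees -> equirectangular pixel coordinates."""
    x = (yaw + 180.0) / 360.0 * width
    y = (90.0 - pitch) / 180.0 * height
    return x, y

# The image centre maps to (yaw, pitch) = (0, 0):
w, h = 3840, 1920
print(pixel_to_angles(w / 2, h / 2, w, h))  # -> (0.0, 0.0)
```

With this mapping, a tracked object's position can be compared against the recorded head orientation to drive focus assistance cues.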
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

15 pages, 23397 KiB  
Article
Comparative Performance Characterization of Mobile AR Frameworks in the Context of AR-Based Grocery Shopping Applications
by Juhwan Lee, Sangwon Hwang, Jisun Lee and Seungwoo Kang
Appl. Sci. 2020, 10(4), 1547; https://doi.org/10.3390/app10041547 - 24 Feb 2020
Cited by 8 | Viewed by 4176
Abstract
A number of Augmented Reality (AR) frameworks are now available and used to support the development of mobile AR applications. In this paper, we measure and compare the recognition performance of three commercial AR frameworks, Vuforia, ARCore, and MAXST, and identify potential issues that can occur in real application environments. For the experiments, we assume a situation in which a consumer purchases food products in a grocery store and consider an application scenario in which AR content related to the products is displayed on a smartphone screen by recognizing those products. We use four performance metrics to compare the selected frameworks. Experimental results show that Vuforia is relatively superior to the others. Limitations of the AR frameworks when used in a real grocery store environment are also identified. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

14 pages, 20974 KiB  
Article
Towards Next Generation Technical Documentation in Augmented Reality Using a Context-Aware Information Manager
by Michele Gattullo, Alessandro Evangelista, Vito M. Manghisi, Antonio E. Uva, Michele Fiorentino, Antonio Boccaccio, Michele Ruta and Joseph L. Gabbard
Appl. Sci. 2020, 10(3), 780; https://doi.org/10.3390/app10030780 - 22 Jan 2020
Cited by 11 | Viewed by 4218
Abstract
Technical documentation is evolving from static content presented on paper or via digital publishing to real-time, on-demand content displayed via virtual and augmented reality (AR) devices. However, how best to provide a personalized and context-relevant presentation of technical information is still an open field of research. In particular, the systems described in the literature can manage only a limited number of modalities for conveying technical information and do not consider the ‘people’ factor. In this work, we therefore present a Context-Aware Technical Information Management (CATIM) system that dynamically manages (1) what information is presented and (2) how it is presented in an augmented reality interface. The system was successfully implemented, and we carried out a first evaluation in the real industrial scenario of the maintenance of a hydraulic valve. We also measured the time performance of the system, and the results revealed that CATIM performs fast enough to support interactive AR. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

17 pages, 1474 KiB  
Article
The Limited Effect of Graphic Elements in Video and Augmented Reality on Children’s Listening Comprehension
by Marta Sylvia del Río Guerra, Alejandra Estefanía Garza Martínez, Jorge Martin-Gutierrez and Vicente López-Chao
Appl. Sci. 2020, 10(2), 527; https://doi.org/10.3390/app10020527 - 10 Jan 2020
Cited by 11 | Viewed by 4185
Abstract
There is currently significant interest in the use of instructional strategies in learning environments thanks to the emergence of new multimedia systems that combine text, audio, graphics and video, such as augmented reality (AR). In this light, this study compares the effectiveness of AR and video for listening comprehension tasks. The sample consisted of thirty-two elementary school students with different levels of reading comprehension. First, the experience, instructions and objectives were introduced to all the students. Next, they were divided into two groups to perform activities: one group watched an educational video story about the Laika dog and her space journey in the mobile app Blue Planet Tales, while the other experienced the same story through AR, visualized by means of the app Augment Sales. Once the activities were completed, participants answered a comprehension test. Results (p = 0.180) indicate there are no meaningful differences between lesson format and test performance, but there are differences among the participants of the AR group according to their reading comprehension level. With respect to the time taken to complete the comprehension test, there is no significant difference between the two groups, but there is a difference between participants with high and low levels of comprehension. Finally, the SUS (System Usability Scale) questionnaire was used to measure the usability of the AR app on a smartphone. An average score of 77.5 out of 100 was obtained, which indicates that the app has a fairly good user-centered design. Full article
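The 77.5/100 figure reported above comes from the standard System Usability Scale formula: ten items rated 1 to 5, where odd-numbered (positively worded) items contribute (rating - 1), even-numbered (negatively worded) items contribute (5 - rating), and the sum is scaled by 2.5. The example response sheet below is illustrative, not the study's data.

```python
# Standard SUS scoring: ten Likert ratings (1-5) -> usability score (0-100).
def sus_score(ratings):
    """ratings: list of ten ratings in 1..5, item 1 first."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("expected ten ratings in 1..5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index i even -> odd-numbered item
        for i, r in enumerate(ratings)
    ]
    return sum(contributions) * 2.5

# Illustrative response sheet for a single participant:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2]))  # -> 82.5
```

Scores around 68 are commonly treated as average, so the study's mean of 77.5 sits in the above-average range.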
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

27 pages, 792 KiB  
Article
The Influence of Display Parameters and Display Devices over Spatial Ability Test Answers in Virtual Reality Environments
by Tibor Guzsvinecz, Cecilia Sik-Lanyi, Eva Orban-Mihalyko and Erika Perge
Appl. Sci. 2020, 10(2), 526; https://doi.org/10.3390/app10020526 - 10 Jan 2020
Cited by 9 | Viewed by 2760
Abstract
This manuscript analyzes the influence of display parameters and display devices on users’ spatial skills in virtual reality environments. To this end, the authors developed a virtual reality application that tests the spatial skills of its users. In the tests, 240 students used an LG desktop display and 61 students used the Gear VR. Statistical data are generated as the users take the tests, and the following factors are logged by the application and evaluated in this manuscript: virtual camera type, virtual camera field of view, virtual camera rotation, contrast ratio parameters, the existence of shadows, and the device used. The probabilities of correct answers were analyzed with respect to these factors using logistic regression (logit) analysis, and the influences and interactions of all factors were examined. A perspective camera, a lighter contrast ratio, no or large camera rotations, and the use of the Gear VR greatly and positively influenced the probability of correct answers. Therefore, for the assessment of spatial ability in virtual reality, these parameters and this device represent the optimal user-centric human-computer interaction practice. Full article
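To make the analysis concrete, the sketch below fits a logistic regression of answer correctness on binary display factors, in the spirit of the logit analysis the abstract describes. The data, factor encoding, and coefficients are entirely synthetic; the study's real factors include more levels and interactions.

```python
# Illustrative sketch (synthetic data, not the study's): modelling the
# probability of a correct answer from display factors with logistic
# regression. Assumed binary encodings: camera (perspective=1),
# contrast (lighter=1), device (Gear VR=1).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.integers(0, 2, size=(n, 3)).astype(float)  # camera, contrast, device

# Synthetic ground truth: each factor raises the log-odds of a correct answer.
logits = -0.3 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.6 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
print(dict(zip(["camera", "contrast", "device"], model.coef_[0].round(2))))
```

Positive fitted coefficients correspond to factors that increase the probability of a correct answer, matching the kind of conclusion the study draws for the perspective camera and the Gear VR.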
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)

Review


14 pages, 505 KiB  
Review
The Application of Virtual Reality in Engineering Education
by Maged Soliman, Apostolos Pesyridis, Damon Dalaymani-Zad, Mohammed Gronfula and Miltiadis Kourmpetis
Appl. Sci. 2021, 11(6), 2879; https://doi.org/10.3390/app11062879 - 23 Mar 2021
Cited by 118 | Viewed by 16716
Abstract
The advancement of VR technology, through increases in its processing power and decreases in its cost and form factor, has shifted research and market interest away from the gaming industry and towards education and training. In this paper, we argue, and present evidence from extensive research, that VR is an excellent tool in engineering education. Through our review, we deduced that VR has positive cognitive and pedagogical benefits in engineering education, which ultimately improve students’ understanding of the subjects, performance and grades, and education experience. In addition, the benefits extend to the university/institution in terms of reduced liability, infrastructure, and cost through the use of VR as a replacement for physical laboratories. There are added benefits of an equal educational experience for students with special needs, as well as for distance-learning students who have no access to physical labs. Furthermore, recent reviews identified that VR applications for education currently lack the integration of learning theories and objectives in their design. Hence, we have selected the constructivist and variation learning theories, as they are currently successfully implemented in engineering education and strong evidence shows they are suitable for implementation in VR for education. Full article
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality)
