Recent Advances in Extended Reality

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 April 2024) | Viewed by 14353

Special Issue Editors


Guest Editor
Department of Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
Interests: XR (VR/AR/MR); virtual avatars & agents; social interaction in XR; perception and cognition in XR; pervasive XR and IoT

Guest Editor
School of Computer Engineering, Pusan National University, Busan 46241, Korea
Interests: VR/AR; virtual humans; perception in VR/AR; multimodal interaction
School of Information Technology and Mathematical Sciences, University of South Australia, Adelaide 5095, Australia
Interests: collaborative mixed reality; empathic computing; human computer interaction

Guest Editor
Division of Electronics and Communications Engineering, Pukyong National University, Busan 48513, Korea
Interests: AR/MR; computer vision; human computer interaction; deep learning applications

Special Issue Information

Dear Colleagues,

Recent technical advances and research in extended reality (XR), which broadly includes virtual, augmented, and mixed reality (VR, AR, and MR, respectively), enable us to extend our experience and abilities in various contexts. For example, VR users naturally and intuitively interact with remote users in immersive social virtual environments using their embodied avatars; AR users extend their sensory perception and knowledge through additional sensing devices equipped with AR displays; and in situ intelligent virtual entities are augmented into the real world.

In this Special Issue, we aim to capture the current state of XR research and development while covering interdisciplinary convergence research in XR. We anticipate that this issue will address recent research findings and approaches, and possibly suggest future directions by identifying research gaps. We are pleased to invite diverse research that covers different disciplines and perspectives, including, but not limited to, the following topics:

  • New XR frameworks and platforms;
  • Sensing and tracking for XR;
  • Novel interfaces and interaction design in XR;
  • Security for XR;
  • Usability and user experience (UX) studies;
  • Human factors and ergonomics in XR;
  • Perception and cognition in XR;
  • Virtual avatars and agents;
  • Human–robot interaction with XR;
  • Remote collaboration and learning in XR;
  • Context-aware XR systems;
  • IoT and 5G for XR;
  • Industrial and occupational XR applications;
  • Other XR applications.

Dr. Kangsoo Kim
Dr. Myungho Lee
Dr. Dongsik Jo
Dr. Gun Lee
Dr. Hanhoon Park
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • VR/AR/MR/XR
  • metaverse
  • multimodal interaction
  • virtual avatars and agents
  • perception and cognition
  • tracking and sensing techniques
  • context-aware XR

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


18 pages, 6553 KiB  
Article
Assessing the Effects of Various Gaming Platforms on Players’ Affective States and Workloads through Electroencephalogram
by Pratheep Kumar Paranthaman, Spencer Graham and Nikesh Bajaj
Electronics 2024, 13(11), 2043; https://doi.org/10.3390/electronics13112043 - 23 May 2024
Cited by 2 | Viewed by 1107
Abstract
Game platforms have different impacts on player experience in terms of affective states and workloads. By studying these impacts, we can uncover detailed aspects of the gaming experience. Traditionally, understanding player experience has relied on subjective methods, such as self-reported surveys, where players reflect on their experience and effort levels. However, complementing these subjective measures with electroencephalogram (EEG) analysis introduces an objective approach to assessing player experience. In this study, we examined player experiences across PlayStation 5, Nintendo Switch, and Meta Quest 2. Using a mixed-methods approach, we merged subjective user assessments with EEG data to investigate brain activity, affective states, and workload during low- and high-stimulation games. We recruited 30 participants to play two games across three platforms. Our findings reveal a statistically significant difference between the three platforms for seven out of nine experience factors. The three platforms also have different impacts on play experience and brain activity. Additionally, we utilized a linear model to associate player experience aspects such as arousal, frustration, and mental workload with different brain regions using EEG data. Full article
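As a rough, self-contained illustration of the linear-model step this abstract describes, the sketch below fits workload scores against per-region EEG band-power features with ordinary least squares. All feature dimensions, coefficients, and data here are invented for the example, not values from the paper.

```python
import numpy as np

# Hypothetical illustration: associate per-region EEG band-power features
# with a self-reported workload score via an ordinary least-squares fit.
rng = np.random.default_rng(0)

n_trials, n_features = 60, 4  # e.g., alpha/beta power at frontal/parietal sites
X = rng.normal(size=(n_trials, n_features))
true_w = np.array([0.8, -0.5, 0.3, 0.0])  # assumed ground-truth association
y = X @ true_w + 2.0                      # workload score with intercept 2.0

# Fit y ~ X w + b by appending a bias column and solving least squares.
Xb = np.hstack([X, np.ones((n_trials, 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
w_hat, b_hat = coef[:-1], coef[-1]
```

On noiseless synthetic data like this, the fitted weights recover the assumed association exactly; with real EEG features the weights would only indicate which regions co-vary with the reported workload.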
(This article belongs to the Special Issue Recent Advances in Extended Reality)

18 pages, 1617 KiB  
Article
FusionNet: An End-to-End Hybrid Model for 6D Object Pose Estimation
by Yuning Ye and Hanhoon Park
Electronics 2023, 12(19), 4162; https://doi.org/10.3390/electronics12194162 - 7 Oct 2023
Cited by 5 | Viewed by 1677
Abstract
In this study, we propose a hybrid model for Perspective-n-Point (PnP)-based 6D object pose estimation called FusionNet that takes advantage of convolutional neural networks (CNNs) and Transformers. CNNs are effective feature extractors and remain the most popular architecture, but they have difficulty capturing long-range dependencies between features, and most CNN-based models for 6D object pose estimation are bulky and heavy. To address these problems, we propose a lightweight CNN building block with attention, design a Transformer-based global dependency encoder, and integrate them into a single model. Our model extracts dense 2D–3D point correspondences more accurately while significantly reducing the number of model parameters. Followed by a PnP header that replaces the PnP algorithm for end-to-end pose estimation, our model showed better or highly competitive pose estimation performance compared with other state-of-the-art models in experiments on the LINEMOD dataset. Full article
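For readers unfamiliar with the geometric step that a learned PnP header replaces, the sketch below recovers a camera projection matrix from 2D–3D correspondences with the classical Direct Linear Transform (DLT). This is a textbook baseline, not FusionNet; the intrinsics, pose, and points are synthetic assumptions.

```python
import numpy as np

# Classical DLT baseline: recover the 3x4 projection matrix P from
# 2D-3D point correspondences, then verify by reprojection.
rng = np.random.default_rng(1)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
R = np.eye(3)                                # identity rotation for simplicity
t = np.array([0.1, -0.2, 5.0])               # assumed translation
P_true = K @ np.hstack([R, t[:, None]])

X = rng.uniform(-1, 1, size=(8, 3))          # 3D points in front of the camera
Xh = np.hstack([X, np.ones((8, 1))])         # homogeneous coordinates
x = (P_true @ Xh.T).T
x = x[:, :2] / x[:, 2:3]                     # observed pixel coordinates

# Build the 2n x 12 homogeneous system A p = 0 and solve via SVD.
A = []
for Xw, (u, v) in zip(Xh, x):
    A.append(np.concatenate([Xw, np.zeros(4), -u * Xw]))
    A.append(np.concatenate([np.zeros(4), Xw, -v * Xw]))
A = np.array(A)
P = np.linalg.svd(A)[2][-1].reshape(3, 4)    # null-space vector, P up to scale

# Reproject with the recovered P; the scale cancels in the division.
proj = (P @ Xh.T).T
proj = proj[:, :2] / proj[:, 2:3]
err = np.abs(proj - x).max()
```

With exact correspondences the reprojection error is numerically zero; learned approaches such as the PnP header aim to make this step differentiable and robust to noisy correspondences.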

18 pages, 5736 KiB  
Article
User Experience of Multi-Mode and Multitasked Extended Reality on Different Mobile Interaction Platforms
by Hyeonah Choi, Heeyoon Jeong and Gerard Jounghyun Kim
Electronics 2023, 12(6), 1457; https://doi.org/10.3390/electronics12061457 - 19 Mar 2023
Viewed by 2357
Abstract
“Extended Reality (XR)” refers to a unified platform or content that supports all forms of “reality”—e.g., 2D, 3D virtual, augmented, and augmented virtual. We explore how the mobile device can support such a concept of XR. We evaluate the XR user experiences of multi-mode use and multitasking among three mobile platforms—(1) a bare smartphone (PhoneXR), (2) a standalone mobile headset unit (ClosedXR), and (3) a smartphone with clip-on lenses (LensXR). Two use cases were considered: (a) Experiment 1: using and switching among different modes within a single XR application while multitasking with a smartphone app, and (b) Experiment 2: general multitasking among different “reality” applications (e.g., a 2D app, AR, VR). Results showed users generally valued the immersive experience over usability—ClosedXR was clearly preferred over the others. Despite potentially offering a balanced level of immersion and usability with its touch-based interaction, LensXR was not received well. PhoneXR was not rated particularly advantageous over ClosedXR even though the latter required a controller. Usability suffered for ClosedXR only when long text had to be entered. Thus, improving the 1D/2D operations in ClosedXR for operating and multitasking would be one way to weave XR into our lives with smartphones. Full article

15 pages, 18509 KiB  
Article
Dynamically Adjusted and Peripheral Visualization of Reverse Optical Flow for VR Sickness Reduction
by Songmin Kim and Gerard J. Kim
Electronics 2023, 12(4), 861; https://doi.org/10.3390/electronics12040861 - 8 Feb 2023
Cited by 2 | Viewed by 2163
Abstract
Sickness is a major obstacle in the wide adoption of virtual reality (VR). Providing low-resolution peripheral “countervection” visualization could mitigate VR sickness. Herein, we present an extension/improvement to this work, in which the reverse optical flow of the scene features is mixed in, and the extent of the periphery is dynamically adjusted simultaneously. We comparatively evaluated the effects of our extension versus the two notable sickness reduction techniques, (1) the original peripheral countervection flow using the simple stripe pattern (with a fixed field of view and peripheral extent) and (2) the dynamic field of view adjustment (with no added visualization). The experimental results indicated that the proposed extension exhibits competitive or better sickness reduction effects and less user-perceived content intrusion, distraction, and breaks in immersion/presence. Furthermore, we tested the comparative effect of visualizing the reverse optical flow only in the lower visual periphery, which further reduced the content intrusion and lowered the sense of immersion and presence. The test indicated that using just the low visual periphery could achieve a comparable level of sickness reduction with significantly less computational effort, making it suitable for mobile applications. Full article
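As a minimal illustration of the reverse-optical-flow idea in this abstract: project static scene points before and after a small camera translation, take the image-space flow, and negate it to get the counter-vection vectors rendered in the periphery. The focal length, scene points, and camera step below are invented assumptions, not the paper's values.

```python
import numpy as np

# Sketch of peripheral "counter-vection" vectors under assumed parameters.
f = 500.0  # assumed focal length in pixels

def project(points_cam):
    # Pinhole projection of camera-space points (z > 0) to pixel offsets.
    return f * points_cam[:, :2] / points_cam[:, 2:3]

pts = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 6.0]])  # static scene points
cam_step = np.array([0.0, 0.0, 0.1])                 # small forward camera motion

p0 = project(pts)
p1 = project(pts - cam_step)  # camera moving forward = points moving closer
flow = p1 - p0                # perceived optical flow (expands outward)
reverse_flow = -flow          # counter-vection vectors for the periphery
```

Forward motion makes image points expand away from the image center, so the negated flow contracts inward, which is the cue the peripheral visualization exploits.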

17 pages, 38742 KiB  
Article
DAVE: Deep Learning-Based Asymmetric Virtual Environment for Immersive Experiential Metaverse Content
by Yunsik Cho, Seunghyun Hong, Mingyu Kim and Jinmo Kim
Electronics 2022, 11(16), 2604; https://doi.org/10.3390/electronics11162604 - 19 Aug 2022
Cited by 18 | Viewed by 3542
Abstract
In this study, we design an interface optimized for the platform by adopting deep learning in an asymmetric virtual environment where virtual reality (VR) and augmented reality (AR) users participate together. We also propose a novel experience environment called deep learning-based asymmetric virtual environment (DAVE) for immersive experiential metaverse content. First, VR users use their real hands to intuitively interact with the virtual environment and objects. A gesture interface is designed based on deep learning to directly link gestures to actions. AR users interact with virtual scenes, objects, and VR users via a touch-based input method in a mobile platform environment. A text interface is designed using deep learning to directly link handwritten text to actions. This study aims to propose a novel asymmetric virtual environment via an intuitive, easy, and fast interactive interface design as well as to create metaverse content for an experience environment and a survey experiment. This survey experiment is conducted with users to statistically analyze and investigate user interface satisfaction, user experience, and user presence in the experience environment. Full article
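To make the "gesture directly linked to action" idea concrete, here is a toy sketch in which a nearest-centroid classifier stands in for the deep network, dispatching a hand-feature vector to a VR action. Every gesture name, centroid, and action below is invented for illustration and is not from DAVE.

```python
import numpy as np

# Toy gesture -> action dispatch; a nearest-centroid classifier stands in
# for the deep gesture model described in the abstract.
CENTROIDS = {
    "grab":  np.array([1.0, 0.0, 0.0]),
    "point": np.array([0.0, 1.0, 0.0]),
    "wave":  np.array([0.0, 0.0, 1.0]),
}
ACTIONS = {"grab": "pick_up_object", "point": "teleport", "wave": "greet_user"}

def classify(features):
    # Return the gesture whose centroid is closest to the feature vector.
    return min(CENTROIDS, key=lambda g: np.linalg.norm(features - CENTROIDS[g]))

def gesture_to_action(features):
    # Direct link: classifier output indexes straight into the action table.
    return ACTIONS[classify(features)]

action = gesture_to_action(np.array([0.9, 0.1, 0.0]))
```

The same pattern applies to the AR side: a handwritten-text recognizer's output label would index the action table in place of the gesture label.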

Review


16 pages, 2782 KiB  
Review
Metaverse Solutions for Educational Evaluation
by Lingling Zi and Xin Cong
Electronics 2024, 13(6), 1017; https://doi.org/10.3390/electronics13061017 - 8 Mar 2024
Cited by 1 | Viewed by 1477
Abstract
This study aims to give a comprehensive overview of the application of the metaverse in educational evaluation. First, we characterize the metaverse and illustrate how it can support educational evaluation from the perspectives of virtual reality, augmented reality, and blockchain. Then, we outline the metaverse exploration framework and summarize its technical advantages. Based on this, we propose a metaverse-based implementation scheme to address the issues of reliability, accuracy, and credibility in educational evaluation. Finally, we discuss its implementation difficulties, performance evaluation, and future work. The proposed scheme opens up new research directions for the reform of educational evaluation while expanding the potential and reach of metaverse applications in education. We believe that this study can help researchers build an ecosystem for educational evaluation that is trustworthy, equitable, and legitimate. Full article
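As a toy sketch of the blockchain ingredient this review mentions for credibility, the snippet below hash-chains evaluation records so that tampering with any stored score invalidates every later link. The record fields are invented for illustration; a real deployment would use a distributed ledger rather than a local chain.

```python
import hashlib
import json

# Tamper-evident log: each link hashes its record together with the
# previous link's hash, so edits to earlier records break verification.
def link(prev_hash, record):
    payload = json.dumps(record, sort_keys=True).encode() + prev_hash.encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    chain, h = [], "0" * 64  # genesis hash
    for rec in records:
        h = link(h, rec)
        chain.append(h)
    return chain

def verify(records, chain):
    # The stored chain is valid only if rebuilding it yields identical hashes.
    return chain == build_chain(records)

records = [{"student": "s1", "score": 87}, {"student": "s2", "score": 91}]
chain = build_chain(records)
```

Changing any score and re-running `verify` against the original chain fails from the altered record onward, which is the tamper-evidence property credibility schemes rely on.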
