VR/AR/MR with Cloud Computing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (10 August 2021) | Viewed by 7943

Special Issue Editors


Prof. Dr. Soonchul Kwon
Guest Editor
Department of Smart System, Graduate School of Smart Convergence, Kwangwoon University, Seoul 01897, Korea
Interests: augmented reality; holography; intelligent image processing; mixed reality; volumetric capture

Prof. Dr. Hyunjun Choi
Guest Editor
Division of Marine Mechatronics, Mokpo National Maritime University, Mokpo 58628, Korea
Interests: user experience; virtual reality; digital holography; physiological signal analysis; emotion analysis

Prof. Dr. Seokhee Oh
Guest Editor
Department of Computer Engineering, Gachon University, Seongnam-si, Gyeonggi-do 13120, Korea
Interests: augmented reality; extended reality; game graphics; user experience; virtual reality

Special Issue Information

Dear Colleagues, 

As online and mobile user activity grows, demand for immersive virtual and augmented reality services is increasing. Virtual reality, augmented reality, and mixed reality continue to mature as devices, software, and content reach the market. To broaden these services, research on content acquisition, cross-platform support, cloud computing, server-side rendering, and mobile communications, as well as on devices such as head-mounted displays and smart glasses, is important.

This Special Issue, entitled “VR/AR/MR with Cloud Computing”, invites manuscripts focused on research regarding content, platforms, networks, and devices. Experimental, theoretical, and computational studies of VR/AR/MR concepts are all encouraged.

Topics include, but are not limited to: 

  • VR/AR/MR content acquisition
  • VR/AR/MR cloud computing applications
  • VR/AR/MR devices and platforms

Prof. Dr. Soonchul Kwon
Prof. Dr. Hyunjun Choi
Prof. Dr. Seokhee Oh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality, augmented reality, mixed reality, extended reality
  • OpenXR / WebXR
  • AR cloud anchor
  • volumetric capture
  • gesture interface
  • graphics pipeline
  • camera calibration
  • 3D registration and reconstruction
  • RGB and depth processing
  • remote rendering

Published Papers (3 papers)


Research

16 pages, 4320 KiB  
Article
Point-Graph Neural Network Based Novel Visual Positioning System for Indoor Navigation
by Tae-Won Jung, Chi-Seo Jeong, Soon-Chul Kwon and Kye-Dong Jung
Appl. Sci. 2021, 11(19), 9187; https://doi.org/10.3390/app11199187 - 2 Oct 2021
Cited by 6 | Viewed by 2220
Abstract
Indoor localization is a basic element in location-based services (LBSs), including seamless indoor and outdoor navigation, location-based precision marketing, spatial recognition in robotics, augmented reality, and mixed reality. The popularity of LBSs in the augmented reality and mixed reality fields has increased the demand for a stable and efficient indoor positioning method. However, the problem of indoor visual localization has not been appropriately addressed, owing to the strict trade-off between accuracy and cost. Therefore, we use point cloud and RGB characteristic information for the accurate acquisition of three-dimensional indoor space. The proposed method is a novel visual positioning system (VPS) capable of determining the user’s position by matching the pose information of the object estimated by the improved point-graph neural network (GNN) with the pose information label of a voxel database object addressed in predefined voxel units. We evaluated the performance of the proposed system considering a stationary object in indoor space. The results verify that high positioning accuracy and direction estimation can be efficiently achieved. Thus, spatial information of indoor space estimated using the proposed novel VPS can aid in indoor navigation.
(This article belongs to the Special Issue VR/AR/MR with Cloud Computing)
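The abstract above describes matching an estimated object pose against pose labels stored in a voxel-addressed database. The TypeScript sketch below illustrates that kind of lookup under stated assumptions: the `Pose` and `VoxelEntry` types, the 0.5 m voxel size, and the nearest-neighbour fallback are hypothetical and are not taken from the paper, and the pose estimation itself (performed by the improved point-graph neural network in the paper) is assumed to have already happened.

```typescript
// Hypothetical types for illustration; not the paper's data model.
interface Pose { x: number; y: number; z: number; yawDeg: number; }
interface VoxelEntry { voxelKey: string; objectPose: Pose; }

const VOXEL_SIZE = 0.5; // metres per voxel edge (illustrative value)

// Address a 3D position in predefined voxel units.
function voxelKey(p: Pose): string {
  const vx = Math.floor(p.x / VOXEL_SIZE);
  const vy = Math.floor(p.y / VOXEL_SIZE);
  const vz = Math.floor(p.z / VOXEL_SIZE);
  return `${vx},${vy},${vz}`;
}

// Match an estimated object pose to the voxel database: use the entry stored
// under the same voxel key if present, otherwise fall back to the nearest
// stored object pose.
function matchPose(estimated: Pose, db: Map<string, VoxelEntry>): VoxelEntry | undefined {
  const direct = db.get(voxelKey(estimated));
  if (direct) return direct;

  let best: VoxelEntry | undefined;
  let bestDist = Infinity;
  for (const entry of db.values()) {
    const d = Math.hypot(
      entry.objectPose.x - estimated.x,
      entry.objectPose.y - estimated.y,
      entry.objectPose.z - estimated.z,
    );
    if (d < bestDist) { bestDist = d; best = entry; }
  }
  return best;
}
```

A complete system would then recover the user's position by composing the matched pose label with the camera-to-object transform; that step is omitted from this sketch.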

16 pages, 6581 KiB  
Article
Performance Evaluation of Ground AR Anchor with WebXR Device API
by Daehyeon Lee, Woosung Shim, Munyong Lee, Seunghyun Lee, Kye-Dong Jung and Soonchul Kwon
Appl. Sci. 2021, 11(17), 7877; https://doi.org/10.3390/app11177877 - 26 Aug 2021
Cited by 4 | Viewed by 2443
Abstract
Recently, the development of 3D graphics technology has led to various technologies being combined with reality, where a new reality is defined or studied; they are typically named by combining the name of the technology with “reality”. Representative “reality” includes Augmented Reality, Virtual Reality, Mixed Reality, and eXtended Reality (XR). In particular, research on XR in the web environment is actively being conducted. The Web eXtended Reality Device Application Programming Interface (WebXR Device API), released in 2018, allows instant deployment of XR services to any XR platform requiring only an active web browser. However, the currently released tentative version has poor stability. Therefore, in this study, the performance evaluation of the WebXR Device API is performed using three experiments. A camera trajectory experiment is analyzed using ground truth; we checked the standard deviation between the ground truth and WebXR for the X, Y, and Z axes. The difference image experiment is conducted for the front, left, and right directions, which resulted in a visible difference image for each image of ground truth and WebXR, small mean absolute error, and high match rate. In the experiment for measuring the 3D rendering speed, a frame rate similar to that of real-time is obtained.
(This article belongs to the Special Issue VR/AR/MR with Cloud Computing)
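Since the paper evaluates the WebXR Device API itself, a minimal sketch of how that API exposes the viewer (camera) pose per frame may help place the camera-trajectory experiment. The calls shown (isSessionSupported, requestSession, requestReferenceSpace, getViewerPose) are standard WebXR Device API, but the sketch assumes WebXR TypeScript declarations are available, omits the WebGL layer setup needed to actually render anything, and is not the authors' test harness.

```typescript
// Minimal WebXR pose sampling: assumes a browser with WebXR support, a user
// gesture to start the session, and WebXR type declarations (e.g. @types/webxr).
async function logViewerTrajectory(): Promise<void> {
  if (!navigator.xr || !(await navigator.xr.isSessionSupported('immersive-ar'))) {
    console.warn('immersive-ar sessions are not supported in this browser');
    return;
  }
  const session = await navigator.xr.requestSession('immersive-ar');
  const refSpace = await session.requestReferenceSpace('local');

  const onFrame = (time: DOMHighResTimeStamp, frame: XRFrame): void => {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      const { x, y, z } = pose.transform.position;
      // One trajectory sample per rendered frame; compare offline against ground truth.
      console.log(`t=${time.toFixed(1)} ms  x=${x.toFixed(3)} y=${y.toFixed(3)} z=${z.toFixed(3)}`);
    }
    session.requestAnimationFrame(onFrame);
  };
  session.requestAnimationFrame(onFrame);
}
```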

14 pages, 7446 KiB  
Article
A Novel Preprocessing Method for Dynamic Point-Cloud Compression
by Mun-yong Lee, Sang-ha Lee, Kye-dong Jung, Seung-hyun Lee and Soon-chul Kwon
Appl. Sci. 2021, 11(13), 5941; https://doi.org/10.3390/app11135941 - 26 Jun 2021
Cited by 5 | Viewed by 2192
Abstract
Computer-based data processing capabilities have evolved to handle a lot of information. As such, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially. This rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method of efficiently managing and compressing animation information stored in the 3D point-clouds sequence. A compressed point-cloud is created by reconfiguring the points based on their voxels. Compared with the original point-cloud, noise caused by errors is removed, and a preprocessing procedure that achieves high performance in a redundant processing algorithm is proposed. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlap data are extracted and removed, and the file size is further reduced.
(This article belongs to the Special Issue VR/AR/MR with Cloud Computing)
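The reconfiguration step the abstract mentions, snapping points onto a voxel grid so that noisy near-duplicates collapse into a single representative per voxel, can be illustrated with a generic voxel-grid pass like the sketch below. It is a simplified stand-in, not the authors' algorithm; the types, voxel size, and first-point-wins policy are illustrative choices.

```typescript
interface Point { x: number; y: number; z: number; }

// Generic voxel-grid preprocessing: keep one representative point per occupied
// voxel, placed at the voxel centre, so redundant and noisy points are dropped.
function voxelizePointCloud(points: Point[], voxelSize: number): Point[] {
  const occupied = new Map<string, Point>();
  for (const p of points) {
    const ix = Math.floor(p.x / voxelSize);
    const iy = Math.floor(p.y / voxelSize);
    const iz = Math.floor(p.z / voxelSize);
    const key = `${ix},${iy},${iz}`;
    if (!occupied.has(key)) {
      occupied.set(key, {
        x: (ix + 0.5) * voxelSize,
        y: (iy + 0.5) * voxelSize,
        z: (iz + 0.5) * voxelSize,
      });
    }
  }
  return Array.from(occupied.values());
}
```

Across a point-cloud sequence, the same voxel keying also makes it easy to detect voxels that are unchanged between consecutive frames, which is one simple way to think about the overlap removal reported in the abstract.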
