Computer Graphics and Virtual Reality

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 October 2020) | Viewed by 35151

Special Issue Editors


Guest Editor
Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloniki, Greece
Interests: computer vision; machine learning and artificial intelligence; multi-dimensional signal processing; intelligent systems and applications; environmental informatics and remote sensing; ICT for civil protection

Guest Editor
Department of Computer Sciences & Engineering, University of Texas at Arlington, 701 S Nedderman Drive, Arlington, TX 76019, USA
Interests: human–computer interaction; human–robot interaction; user interfaces; cognitive computing; virtual reality; mixed reality

Special Issue Information

Dear Colleagues,

Virtual reality and computer graphics technologies have attracted a lot of attention in recent years, and they have been applied to a wide variety of fields, such as entertainment, education, medicine, architectural and urban design, engineering and robotics, fine arts, and cultural heritage. Important aspects of the VR experience include immersion, 3D realism, human–computer interaction, perception and scene analysis (including human motion analysis and 3D posture estimation using novel deep-learning algorithms), and intelligent adaptation and personalization.

The aim of this Special Issue is to attract world-leading researchers specialized in Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in an effort to highlight the latest exciting developments in the area, including advances in sensory interfaces, computer graphics, and AI algorithms that promote the creation of realistic, intelligent, and sophisticated 3D interactive environments. The accepted contributions will include theoretical considerations, experimental verifications, and proof-of-concept applications.

Dr. Kosmas Dimitropoulos
Dr. Nikos Grammalidis
Dr. Fillia Makedon
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Virtual, Augmented, and Mixed Reality systems
  • Computer graphics techniques
  • Serious games
  • 3D interactive environments
  • Immersive environments
  • Immersive/360° video
  • Multimodal capturing and reconstruction
  • 3D body/hand posture estimation
  • Novel human–computer interaction techniques
  • AI game adaptation algorithms
  • 3D user interaction
  • Nonvisual interfaces
  • Human factors and ergonomics
  • Data visualization

Published Papers (11 papers)

Research

20 pages, 8311 KiB  
Article
Unsupervised 3D Motion Summarization Using Stacked Auto-Encoders
by Eftychios Protopapadakis, Ioannis Rallis, Anastasios Doulamis, Nikolaos Doulamis and Athanasios Voulodimos
Appl. Sci. 2020, 10(22), 8226; https://doi.org/10.3390/app10228226 - 20 Nov 2020
Cited by 4 | Viewed by 1699
Abstract
In this paper, a deep stacked auto-encoder (SAE) scheme followed by a hierarchical Sparse Modeling for Representative Selection (SMRS) algorithm is proposed to summarize dance video sequences recorded with the VICON motion capture system. The SAE's main task is to reduce the redundant information embedded in the raw data and thus improve summarization performance. This becomes apparent when two dancers perform simultaneously and severe errors are encountered in the human point joints due to the dancers' occlusions in 3D space. Four summarization algorithms are applied to extract the key frames: density-based, Kennard–Stone, conventional SMRS, and its hierarchical variant, H-SMRS. Experiments were carried out on real-life sequences of Greek traditional dances, and the results were compared against ground-truth key frames selected by dance experts. The results indicate that H-SMRS, applied after the SAE information reduction module, extracts key frames that deviate in time from those selected by the experts by less than 0.3 s, with a standard deviation of 0.18 s. Thus, the proposed scheme can effectively represent the content of a dance sequence.
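To make the two-stage pipeline concrete, here is a minimal sketch, not the authors' code: a small stacked auto-encoder compresses per-frame skeleton vectors, and a greedy farthest-point selection over the latent trajectory stands in for the SMRS step. The frame count, joint layout, and network sizes are illustrative assumptions.

```python
# Sketch only: SAE compression + key-frame selection; farthest-point
# sampling is a simplified stand-in for SMRS.
import numpy as np
import torch
import torch.nn as nn

frames = torch.randn(500, 75)  # hypothetical: 500 frames x (25 joints * xyz)

class StackedAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(75, 32), nn.ReLU(), nn.Linear(32, 8))
        self.decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 75))

model = StackedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):  # plain reconstruction training
    recon = model.decoder(model.encoder(frames))
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad(); loss.backward(); opt.step()

def farthest_point_keyframes(z, k):
    """Greedily cover the latent trajectory; returns k frame indices."""
    idx = [0]
    d = np.linalg.norm(z - z[0], axis=1)
    for _ in range(k - 1):
        idx.append(int(d.argmax()))
        d = np.minimum(d, np.linalg.norm(z - z[idx[-1]], axis=1))
    return sorted(idx)

with torch.no_grad():
    latent = model.encoder(frames).numpy()
print(farthest_point_keyframes(latent, k=10))
```

True SMRS instead solves a row-sparse self-representation problem over the whole sequence, but the data flow is the same: compress first, then select representatives in the latent space.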

15 pages, 2363 KiB  
Article
vIS: An Immersive Virtual Storytelling System for Vocational Training
by Sanika Doolani, Luke Owens, Callen Wessels and Fillia Makedon
Appl. Sci. 2020, 10(22), 8143; https://doi.org/10.3390/app10228143 - 17 Nov 2020
Cited by 18 | Viewed by 2477
Abstract
Storytelling has been established as a proven method to communicate effectively and assist in knowledge transfer. In recent years, there has been growing interest in improving the training and learning domain using advanced technology such as Virtual Reality (VR). However, a gap exists between storytelling and VR, and it is as yet unclear how they can be combined into an effective system that not only maintains the level of engagement and immersion provided by VR technology but also provides the core strengths of storytelling. In this paper, we present vIS, a Vocational Immersive Storytelling system, which bridges the gap between storytelling and VR. vIS focuses on vocational training, in which users are trained to use a mechanical micrometer through a creative fictional story embedded inside a virtual manufacturing plant's workplace. For the evaluation, a two-phase user study with 30 participants was conducted to measure the system's effectiveness and improvements in long-term training, and to examine user experience against traditional methods of training: 2D videos and textual manuals. The results indicate that users' ability to retain their training after seven days was nearly equal for vIS and the 2D video-based technique, and considerably higher than for the text-based technique.

16 pages, 6146 KiB  
Article
Content Adaptation and Depth Perception in an Affordable Multi-View Display
by Iñigo Ezcurdia, Adriana Arregui, Oscar Ardaiz, Amalia Ortiz and Asier Marzo
Appl. Sci. 2020, 10(20), 7357; https://doi.org/10.3390/app10207357 - 21 Oct 2020
Cited by 2 | Viewed by 2075
Abstract
We present SliceView, a simple and inexpensive multi-view display made of multiple parallel translucent sheets that sit on top of a regular monitor; each sheet reflects a different 2D image, and the images are perceived cumulatively. A technical study is performed on the light reflected and transmitted by sheets of different thicknesses. A user study compares SliceView with a commercial light-field display (LookingGlass) regarding the perception of information at multiple depths. More importantly, we present automatic adaptations of existing content to SliceView: 2D layered graphics such as retro games or painting tools, movies and subtitles, and regular 3D scenes rendered with multiple clipping z-planes. We show that it is possible to create an inexpensive multi-view display and automatically adapt content for it; moreover, depth perception on some tasks is superior to that obtained with a commercial light-field display. We hope that this work stimulates further research and applications of multi-view displays.
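As a rough illustration of the clipping z-plane adaptation mentioned above (a sketch under assumed inputs, not the authors' implementation), an RGB-D frame can be split into one image per sheet by binning pixels along depth:

```python
# Hypothetical sketch: split an RGB-D frame into per-sheet RGBA layers.
import numpy as np

def slice_by_depth(rgb, depth, n_sheets=3, z_near=0.5, z_far=5.0):
    """rgb: (H, W, 3) uint8; depth: (H, W) in metres. Returns one RGBA
    image per sheet, nearest first; pixels outside [z_near, z_far) drop."""
    edges = np.linspace(z_near, z_far, n_sheets + 1)
    layers = []
    for i in range(n_sheets):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        layer = np.zeros((*depth.shape, 4), dtype=np.uint8)
        layer[mask, :3] = rgb[mask]
        layer[mask, 3] = 255  # opaque only where this slice has content
        layers.append(layer)
    return layers

# Random data standing in for a rendered colour + depth frame
layers = slice_by_depth(np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8),
                        np.random.uniform(0.5, 5.0, (240, 320)))
```

Each layer would then be shown on the monitor region reflected by the corresponding sheet.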

17 pages, 9741 KiB  
Article
An Approach to the Creation and Presentation of Reference Gesture Datasets, for the Preservation of Traditional Crafts
by Nikolaos Partarakis, Xenophon Zabulis, Antonis Chatziantoniou, Nikolaos Patsiouras and Ilia Adami
Appl. Sci. 2020, 10(20), 7325; https://doi.org/10.3390/app10207325 - 19 Oct 2020
Cited by 18 | Viewed by 2674
Abstract
A wide spectrum of digital data is becoming available to researchers and industries interested in the recording, documentation, recognition, and reproduction of human activities. In this work, we propose an approach for understanding and articulating human motion recordings into multimodal datasets and VR demonstrations of actions and activities relevant to traditional crafts. To implement the proposed approach, we introduce Animation Studio (AnimIO), which enables visualisation, editing, and semantic annotation of pertinent data. AnimIO is compatible with recordings acquired by Motion Capture (MoCap) and computer vision. Using AnimIO, the operator can isolate segments from multiple synchronous recordings and export them as multimodal animation files. AnimIO can be used to isolate motion segments that correspond to individual craft actions, as described by practitioners. The proposed approach has been iteratively designed for use by non-experts in the domain of 3D motion digitisation.
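The core editing operation is easy to picture. The sketch below (a hypothetical data layout, not AnimIO code) cuts a synchronized MoCap array into per-action segments from practitioner annotations given as time ranges:

```python
# Sketch: cut a MoCap recording into labelled segments from annotations.
import numpy as np

FPS = 120  # assumed capture rate
annotations = [("knead clay", 2.0, 4.5),   # (label, start s, end s)
               ("turn wheel", 5.0, 9.25)]

def isolate_segments(mocap, fps, annotations):
    """mocap: (n_frames, n_joints, 3) array -> {label: sub-array}."""
    return {label: mocap[int(t0 * fps):int(t1 * fps)]
            for label, t0, t1 in annotations}

segments = isolate_segments(np.zeros((1500, 25, 3)), FPS, annotations)
print({k: v.shape for k, v in segments.items()})
```

The same cut would be applied to each of the synchronous recordings before export to a multimodal animation file.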

27 pages, 642 KiB  
Article
A Comprehensive Study on Deep Learning-Based 3D Hand Pose Estimation Methods
by Theocharis Chatzis, Andreas Stergioulas, Dimitrios Konstantinidis, Kosmas Dimitropoulos and Petros Daras
Appl. Sci. 2020, 10(19), 6850; https://doi.org/10.3390/app10196850 - 30 Sep 2020
Cited by 30 | Viewed by 4586
Abstract
The field of 3D hand pose estimation has been gaining a lot of attention recently due to its significance in several applications that require human–computer interaction (HCI). Technological advances, such as cost-efficient depth cameras, coupled with the explosive progress of Deep Neural Networks (DNNs), have led to a significant boost in the development of robust markerless 3D hand pose estimation methods. Nonetheless, finger occlusions and rapid motions still pose significant challenges to the accuracy of such methods. In this survey, we provide a comprehensive study of the most representative deep learning-based methods in the literature and propose a new taxonomy based primarily on input data modality: RGB, depth, or multimodal information. Finally, we present results on the most popular RGB- and depth-based datasets and discuss potential research directions in this rapidly growing field.
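For readers unfamiliar with the depth-based family that the survey catalogues, a minimal, purely illustrative example is a CNN that regresses the 3D coordinates of 21 hand joints directly from a depth map; real methods add detection stages, structural priors, and far deeper backbones.

```python
# Illustrative only: a tiny depth-based holistic pose-regression baseline.
import torch
import torch.nn as nn

class DepthHandPoseNet(nn.Module):
    def __init__(self, n_joints=21):
        super().__init__()
        self.n_joints = n_joints
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, n_joints * 3)

    def forward(self, depth):                 # depth: (B, 1, H, W)
        z = self.features(depth).flatten(1)   # (B, 64)
        return self.head(z).view(-1, self.n_joints, 3)

pose = DepthHandPoseNet()(torch.randn(2, 1, 128, 128))
print(pose.shape)  # torch.Size([2, 21, 3])
```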

16 pages, 5782 KiB  
Article
Motion-Sphere: Visual Representation of the Subtle Motion of Human Joints
by Adithya Balasubramanyam, Ashok Kumar Patil, Bharatesh Chakravarthi, Jae Yeong Ryu and Young Ho Chai
Appl. Sci. 2020, 10(18), 6462; https://doi.org/10.3390/app10186462 - 16 Sep 2020
Cited by 8 | Viewed by 3056
Abstract
Understanding and differentiating subtle human motion over time, as sequential data, is challenging. We propose Motion-sphere, a novel trajectory-based visualization technique that represents human motion on a unit sphere. Motion-sphere adopts a two-fold approach to human motion visualization: a three-dimensional (3D) avatar reconstructs the target motion, and an interactive 3D unit sphere enables users to perceive subtle human motion as swing trajectories, with color-coded miniature 3D models for twist. This also allows for the simultaneous visual comparison of two motions. The technique is therefore applicable to a wide range of applications, including rehabilitation, choreography, and physical fitness training. The current work validates the effectiveness of the proposed technique with a user study, in comparison with existing motion visualization methods. Our findings show that Motion-sphere is informative in quantifying swing and twist movements. Motion-sphere is validated in three ways: accuracy of motion reconstruction on the avatar; accuracy of swing, twist, and speed visualization; and the usability and learnability of Motion-sphere. Multiple ranges of motion from an open online database were chosen such that all joint segments are covered. On all fronts, Motion-sphere fares well. Visualization on the 3D unit sphere and the reconstructed 3D avatar makes it intuitive to understand the nature of human motion.
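The swing/twist split that Motion-sphere visualizes is a standard quaternion decomposition. The sketch below is our illustration, not the paper's code: it separates a joint rotation into a swing, which traces the trajectory drawn on the unit sphere, and a twist about the bone axis.

```python
# Standard swing-twist decomposition of a unit quaternion q = (w, x, y, z).
import numpy as np

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def swing_twist(q, axis):
    """Split q into (swing, twist) with q = swing * twist, where the twist
    rotates about `axis` (the bone direction)."""
    axis = axis / np.linalg.norm(axis)
    proj = np.dot(q[1:], axis) * axis            # rotation-axis part along bone
    twist = np.array([q[0], *proj])
    n = np.linalg.norm(twist)
    if n < 1e-9:                                 # degenerate 180-degree swing
        return q.copy(), np.array([1.0, 0.0, 0.0, 0.0])
    twist /= n
    swing = quat_mul(q, np.array([twist[0], *(-twist[1:])]))  # q * twist^-1
    return swing, twist
```

Applying the swing to the bone's rest direction yields the point plotted on the unit sphere, and the twist angle, 2*atan2(norm of the twist vector, twist w), can drive the color coding.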

15 pages, 1903 KiB  
Article
Fluid Simulation with an L0 Based Optical Flow Deformation
by Kun Li, Na Qi and Qing Zhu
Appl. Sci. 2020, 10(18), 6351; https://doi.org/10.3390/app10186351 - 12 Sep 2020
Viewed by 2230
Abstract
Fluid simulations can be automatically interpolated using data-driven methods based on a space-time deformation. In this paper, we propose a novel data-driven fluid simulation scheme that matches two fluid surfaces with an L0-based optical flow deformation method rather than L2 regularization. The L0 gradient smoothness regularization brings out the prominent structure of the fluid in a sparsity-controlled manner, so misalignment of the deformation can be suppressed. We minimize the objective function by alternating minimization with half-quadratic splitting to solve the L0-based optical flow deformation model. Experimental results demonstrate that our proposed method generates more realistic fluid surfaces with the optimal space-time deformation under the L0 gradient smoothness constraint than under the L2 constraint, and outperforms state-of-the-art methods in terms of both objective and subjective quality.
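The half-quadratic splitting mentioned in the abstract is easiest to see in one dimension. The following sketch shows generic L0 gradient smoothing of a 1D signal, not the fluid-surface solver itself; it alternates a closed-form sparse thresholding of the gradient with an FFT-domain least-squares update, and all parameter values are illustrative.

```python
# Generic L0 gradient smoothing via half-quadratic splitting (1D sketch).
import numpy as np

def l0_smooth_1d(f, lam=0.02, kappa=2.0, beta_max=1e5):
    n = len(f)
    F = np.fft.fft(f)
    otf = np.exp(2j * np.pi * np.arange(n) / n) - 1.0  # circular forward diff
    denom = np.abs(otf) ** 2
    s, beta = f.astype(float).copy(), 2 * lam
    while beta < beta_max:
        g = np.roll(s, -1) - s                      # gradient of current s
        h = np.where(g ** 2 >= lam / beta, g, 0.0)  # L0 step: keep or kill
        S = (F + beta * np.conj(otf) * np.fft.fft(h)) / (1.0 + beta * denom)
        s = np.real(np.fft.ifft(S))                 # quadratic update of s
        beta *= kappa
    return s

noisy_step = np.repeat([0.0, 1.0, 0.3], 100) + 0.05 * np.random.randn(300)
smooth = l0_smooth_1d(noisy_step)
```

The same keep-or-kill threshold on gradients is what lets the deformation preserve the fluid's prominent structure while flattening spurious variation.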

11 pages, 29602 KiB  
Article
High-Luminance Mid-Air Image Display for Outdoor Viewing by Focusing Sunlight
by Naoya Koizumi and Koki Yuzurihara
Appl. Sci. 2020, 10(17), 5834; https://doi.org/10.3390/app10175834 - 23 Aug 2020
Viewed by 2184
Abstract
The mid-air image is a very powerful method for presenting computer graphics in a real environment, but it cannot be used in bright locations owing to the loss of brightness during the imaging process. Therefore, to form a mid-air image from a high-brightness light source, a square pyramidal mirror structure was investigated and the concentration of sunlight was simulated. We simulated the tilt angle and combination angle of the condenser as parameters to calculate the luminance at the surface of a transparent liquid crystal display. The light collector was installed at 55° from the horizontal plane and the mirror; a high level of illumination was obtained when these were laminated together at an angle of 70°. To select a suitable diffuser, we built a prototype and measured the brightness of the mid-air image with an LED lamp simulating sunlight in three settings: summer solstice, autumnal equinox, and winter solstice. The maximum luminance of the mid-air image displayed by collecting actual sunlight was estimated to be 998.6 cd/m². This is considerably higher than the maximum smartphone brightness intended for outdoor viewing, and it can thus ensure comparable visibility.

17 pages, 10480 KiB  
Article
Effectiveness of Computer-Generated Virtual Reality (VR) in Learning and Teaching Environments with Spatial Frameworks
by Parviz Safadel and David White
Appl. Sci. 2020, 10(16), 5438; https://doi.org/10.3390/app10165438 - 06 Aug 2020
Cited by 21 | Viewed by 4372
Abstract
In this paper, we highlight the benefits of using computer-generated VR to teach instructional content that has a spatial framework, as in science, technology, engineering, and mathematics (STEM) courses. Spatial ability scores were collected from a sample (N = 62) of undergraduate and graduate students. Students were required to complete an instructional tutorial on DNA molecules, delivered in VR and on a desktop computer screen, which included the necessary information about DNA and nucleotide molecules. Students also completed a comprehensive test about the spatial structure of DNA and a feedback questionnaire. Results from the questionnaire showed media use and satisfaction to be significantly related. The results also showed a significant interaction between spatial ability level (low, medium, or high) and the medium used on students' spatial understanding of DNA molecules. It may be concluded that VR visualization had a positive, compensating impact on students with low spatial ability.

9 pages, 1277 KiB  
Article
The Effect of Depth Information on Visual Complexity Perception in Three-Dimensional Textures
by Liang Li, Tatsuro Yamada and Woong Choi
Appl. Sci. 2020, 10(15), 5347; https://doi.org/10.3390/app10155347 - 03 Aug 2020
Cited by 3 | Viewed by 2654
Abstract
Visual complexity, as an attribute of images related to human perception, has been widely studied in computer science and psychology. In conventional studies, the research objects have been limited to traditional two-dimensional (2D) patterns or images. If depth information is introduced into this scenario, how does it affect our perception of the visual complexity of an image? To answer this question, we developed an experimental virtual reality system that enables the control and display of three-dimensional (3D) visual stimuli. In this study, we aimed to investigate the effect of depth information on the perception of visual complexity by comparing 2D and 3D displays of the same stimuli. We scanned three textures with different characteristics to create the experimental stimuli and recruited 25 participants for the experiment. The results showed that depth information significantly increased the perceived visual complexity of the texture images. Moreover, depth information had different degrees of impact on visual complexity for different textures: the greater the maximum depth introduced in the 3D image, the more significant the increase in perceived visual complexity. The experimental virtual reality system used in this study also provides a feasible tool for future experiments.

Review

17 pages, 484 KiB  
Review
A Systematic Review of Virtual Reality Interfaces for Controlling and Interacting with Robots
by Murphy Wonsick and Taskin Padir
Appl. Sci. 2020, 10(24), 9051; https://doi.org/10.3390/app10249051 - 18 Dec 2020
Cited by 30 | Viewed by 5835
Abstract
There is a significant amount of synergy between virtual reality (VR) and the field of robotics. However, it is only in approximately the past five years that commercial immersive VR devices have become available to developers. This new availability has led to a rapid increase in research using VR devices in the field of robotics, especially in the development of VR interfaces for operating robots. In this paper, we present a systematic review of VR interfaces for robot operation that utilize commercially available immersive VR devices. A total of 41 papers published between 2016 and 2020 were collected for review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The papers are discussed and categorized into five categories: (1) Visualization, which focuses on displaying data or information to operators; (2) Robot Control and Planning, which focuses on connecting human input or movement to robot movement; (3) Interaction, which focuses on the development of new interaction techniques and/or identifying best interaction practices; (4) Usability, which focuses on user experiences of VR interfaces; and (5) Infrastructure, which focuses on system architectures or software that support connecting VR and robots for interface development. Additionally, we provide future directions for continued development of VR interfaces for operating robots.
