Novel Advances in Collaborative Environments for Virtual, Augmented, Mixed and Extended Reality

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Big Data and Augmented Intelligence".

Deadline for manuscript submissions: 30 October 2024

Special Issue Editors


Dr. Danilo Avola
Guest Editor
Department of Computer Science, Sapienza University of Rome, 00185 Rome, Italy
Interests: computer vision (feature extraction and pattern analysis); scene and event understanding (by people and/or vehicles and/or objects); human–computer interaction (pose estimation and gesture recognition by hands and/or body); sketch-based interaction (handwriting and freehand drawing); human–behaviour recognition (actions, emotions, feelings, affects, and moods by hands, body, facial expressions, and voice); biometric analysis (person re-identification by body visual features and/or gait and/or posture/pose); artificial intelligence (machine/deep learning); medical image analysis (MRI, ultrasound, X-rays, PET, and CT); multimodal fusion models; brain–computer interfaces (interaction and security systems); signal processing; visual cryptography (by RGB images); smart environments and natural interaction (with and without virtual/augmented reality); robotics (monitoring and surveillance systems with PTZ cameras, UAVs, AUVs, rovers, and humanoids)

Dr. Bruno Fanini
Guest Editor
Institute of Heritage Science (ISPC), National Research Council (CNR), Rome, Italy
Interests: WebXR; immersive VR; 3D interfaces; spatial user interfaces; social VR

Dr. Marco Raoul Marini
Guest Editor Assistant
Department of Computer Science, Sapienza University of Rome, 00198 Rome, Italy
Interests: computer science; computer vision; virtual reality; multimodal interaction; human–computer interaction

Special Issue Information

Dear Colleagues,

Recent technological advancements have brought rapid growth in computational power and sensor accuracy, reducing production costs and encouraging researchers and developers to work with advanced immersive devices, e.g., head-mounted displays (HMDs). Moreover, machine and deep learning (ML and DL) are nowadays exploited in almost every application, since a considerable number of advanced tasks can be completed with remarkable results, such as hand tracking, natural interaction, redirection/reorientation in virtual locomotion, and action recognition. In this context, the topic of multi-user environments presents numerous challenges and extensive growth potential. Various tasks can be explored, including the latest networking technologies that provide high-speed communication to reduce latency and increase data transfer; however, virtual environment synchronization still has room to grow, and concurrency management remains an open problem. Moreover, multi-user environments facilitate the interconnection of multidisciplinary applications, from social studies to simulative experience analysis.
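To make the synchronization and concurrency challenges above concrete, the following is a minimal sketch, in TypeScript, of how a shared WebXR scene might broadcast each user's head pose over a WebSocket and resolve concurrent updates with a naive last-write-wins policy. The endpoint URL, message shape, and field names are illustrative assumptions, not the API of any specific system.

    // Hypothetical message shape for sharing a user's head pose.
    type PoseMessage = {
      userId: string;                                 // sender, assigned on join
      position: [number, number, number];             // head position in world space
      quaternion: [number, number, number, number];   // head orientation
      t: number;                                      // sender timestamp (ms)
    };

    // Most recent pose known for each remote user.
    const latestPose = new Map<string, PoseMessage>();

    // Placeholder endpoint; a real deployment would use its own relay server.
    const socket = new WebSocket("wss://example.org/xr-session");

    socket.onmessage = (event: MessageEvent) => {
      const msg = JSON.parse(event.data) as PoseMessage;
      const known = latestPose.get(msg.userId);
      // Last-write-wins: keep only the newest pose per user and drop
      // out-of-order packets, the simplest form of concurrency management.
      if (!known || msg.t > known.t) {
        latestPose.set(msg.userId, msg);
      }
    };

    // Called once per frame from the WebXR render loop with the local viewer pose.
    function publishPose(userId: string,
                         position: [number, number, number],
                         quaternion: [number, number, number, number]): void {
      if (socket.readyState === WebSocket.OPEN) {
        socket.send(JSON.stringify({ userId, position, quaternion, t: Date.now() }));
      }
    }

Last-write-wins is only the simplest policy; it illustrates why richer strategies (server-authoritative state, interest management, conflict-free replicated data types) remain active research directions for collaborative XR.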

Topics of interest in this Special Issue include, but are not limited to, the following:

  • AI in data segmentation, detection, classification, or analysis;
  • Network-based applications for VR, AR, MR, or XR environments;
  • Multidisciplinary applications exploiting multi-user VR/AR/MR/XR environments;
  • AI for interactive VR, AR, MR, or XR visualization;
  • Interactive web-based VR, AR, MR, or XR applications, tools, or services;
  • Interaction design process and methods for VR, AR, MR, or XR;
  • Human–computer interaction models for VR, AR, MR, or XR;
  • Spatial computing and 3D interfaces for VR, AR, MR, or XR;
  • Neural radiance fields (NeRF) in virtual spaces.

Dr. Danilo Avola
Dr. Bruno Fanini
Dr. Marco Raoul Marini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • virtual reality
  • augmented reality
  • mixed reality
  • extended reality
  • human–computer interface
  • human–computer interaction
  • gesture recognition
  • virtual collaborative environment
  • virtual shared environment
  • machine learning
  • deep learning
  • artificial intelligence
  • network-based virtual environment

Published Papers (2 papers)


Research

15 pages, 929 KiB  
Article
Exploring Data Input Problems in Mixed Reality Environments: Proposal and Evaluation of Natural Interaction Techniques
by Jingzhe Zhang, Tiange Chen, Wenjie Gong, Jiayue Liu and Jiangjie Chen
Future Internet 2024, 16(5), 150; https://doi.org/10.3390/fi16050150 - 27 Apr 2024
Abstract
Data input within mixed reality environments poses significant interaction challenges, notably in immersive visual analytics applications. This study assesses five numerical input techniques: three benchmark methods (Touch-Slider, Keyboard, Pinch-Slider) and two innovative multimodal techniques (Bimanual Scaling, Gesture and Voice). An experimental design was employed to compare these techniques’ input efficiency, accuracy, and user experience across varying precision and distance conditions. The findings reveal that multimodal techniques surpass slider methods in input efficiency yet are comparable to keyboards; the voice method excels in reducing cognitive load but falls short in accuracy; and the scaling method marginally leads in user satisfaction but imposes a higher physical load. Furthermore, this study outlines these techniques’ pros and cons and offers design guidelines and future research directions.
25 pages, 13896 KiB  
Article
A New Generation of Collaborative Immersive Analytics on the Web: Open-Source Services to Capture, Process and Inspect Users’ Sessions in 3D Environments
by Bruno Fanini and Giorgio Gosti
Future Internet 2024, 16(5), 147; https://doi.org/10.3390/fi16050147 - 25 Apr 2024
Abstract
Recording large amounts of users’ sessions performed through 3D applications may provide crucial insights into interaction patterns. Such data can be captured from interactive experiences in public exhibits, remote motion tracking equipment, immersive XR devices, lab installations or online web applications. Immersive analytics (IA) deals with the benefits and challenges of using immersive environments for data analysis and related design solutions to improve the quality and efficiency of the analysis process. Today, web technologies allow us to craft complex applications accessible through common browsers, and APIs like WebXR allow us to interact with and explore virtual 3D environments using immersive devices. These technologies can be used to access rich, immersive spaces but present new challenges related to performance, network bottlenecks and interface design. WebXR IA tools are still quite new in the literature: they present several challenges and leave quite unexplored the possibility of synchronous collaborative inspection. The opportunity to share the virtual space with remote analysts in fact improves sense-making tasks and offers new ways to discuss interaction patterns together, while inspecting captured records or data aggregates. Furthermore, with proper collaborative approaches, analysts are able to share machine learning (ML) pipelines and constructively discuss the outcomes and insights through tailored data visualization, directly inside immersive 3D spaces, using a web browser. Under the H2IOSC project, we present the first results of an open-source pipeline involving tools and services aimed at capturing, processing and inspecting interactive sessions collaboratively in WebXR with other analysts. The modular pipeline can be easily deployed in research infrastructures (RIs), remote dedicated hubs or local scenarios. The developed WebXR immersive analytics tool specifically offers advanced features for volumetric data inspection, query, annotation and discovery, alongside spatial interfaces. We assess the pipeline through users’ sessions captured during two remote public exhibits, by a WebXR application presenting generative AI content to visitors. We deployed the pipeline to assess the different services and to better understand how people interact with generative AI environments. The obtained results can be easily adopted for a multitude of case studies, interactive applications, remote equipment or online applications, to support or accelerate the detection of interaction patterns among remote analysts collaborating in the same 3D space.