Applications of Computer Vision in Interactive Environments

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 9262

Special Issue Editor


Guest Editor
Institute of Computer Science, Heraklion, Crete, Greece
Interests: stereo and multiple-view computer vision; pose estimation and motion estimation for objects and persons; medical and industrial image analysis; applications of computer vision in interactive environments

Special Issue Information

Dear Colleagues,

Interactive systems that access the physical world innately entail its sensing, representation, and understanding. The visual perception of surfaces, objects, and human or robotic actors in the physical environment is an essential capability of such systems. This capability entails the sensing of surfaces and the recognition or reidentification of objects and persons, as well as the representation of their motion and the understanding of their activities.

As crucial as the accuracy of sensing and the correctness of understanding environments and activities are, equally crucial is the way in which these capabilities are employed when interacting with the user or the environment. The incorporation of sensory and perceptual capabilities into interactive systems depends on the type of environment and the type of activity in which these systems are employed. As such, analysis of the application, the stakeholder needs and requirements, and the physical limitations of the environment is essential to providing intuitive, effective, efficient, and safe interactions.

For this Special Issue, we welcome the submission of original research papers and reviews addressing any aspect of the applications of computer vision in interactive environments. Topics of interest include, but are not limited to, the following:

  • 3D reconstruction, localization, and pose estimation of objects;
  • Motion estimation of rigid or articulated objects and persons;
  • Detection, recognition, and reidentification of persons or objects;
  • Detection and localization of gestures and haptic activity;
  • Applications of computer vision in interactive environments, e.g., augmented or mixed reality;
  • Applications of computer vision in educational and training environments and in systems that interact with the physical environment, such as in robotic control, human–robot collaboration, and human–robot interaction;
  • Interaction paradigms for novel user interface metaphors empowered by computer vision capabilities;
  • New methods for detecting and semantically representing and understanding human activities.

Dr. Xenophon Zabulis
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Interactive environments
  • 3D reconstruction, pose, and motion estimation
  • Augmented or mixed reality
  • Recognition and reidentification of objects and persons
  • Gesture recognition

Published Papers (3 papers)


Research

21 pages, 3102 KiB  
Article
Factors Affecting Avatar Customization Behavior in Virtual Environments
by Sixue Wu, Le Xu, Zhaoyang Dai and Younghwan Pan
Electronics 2023, 12(10), 2286; https://doi.org/10.3390/electronics12102286 - 18 May 2023
Cited by 4 | Viewed by 3583
Abstract
This research aims to examine the psychology and behavior of users when customizing avatars from the standpoint of user experience and to provide constructive contributions to Metaverse avatar customization platforms. This study analyzed the factors that affect the behavior of user-customized avatars in different virtual environments and compared the differences in public self-consciousness, self-expression, and emotional expression among customized avatars in multiple virtual contexts. Methods: Using a between-subjects experimental design, two random groups of participants were asked to customize avatars for themselves in two contexts: a multiplayer online social game (MOSG) and a virtual meeting (VM). Results: When subjects perceived a more relaxed environment, the customized avatars had less self-similarity, and the subjects exhibited a stronger self-disclosure willingness and enhanced avatar wishful identification; nevertheless, public self-consciousness was not increased. When subjects perceived a more serious environment, the customized avatars exhibited a higher degree of self-similarity, and the subjects exhibited a greater self-presentation willingness, along with enhanced identification of avatar similarity and increased public self-consciousness. Conclusions: Participants in both experiment groups expressed positive emotions. The virtual context affects the self-similarity of user-customized avatars; avatar self-similarity affects self-presentation and self-disclosure willingness; and these factors affect the behavior of the user-customized avatar.
(This article belongs to the Special Issue Applications of Computer Vision in Interactive Environments)

21 pages, 11609 KiB  
Article
Multi-Scale Presentation of Spatial Context for Cultural Heritage Applications
by Nikolaos Partarakis, Xenophon Zabulis, Nikolaos Patsiouras, Antonios Chatjiantoniou, Emmanouil Zidianakis, Eleni Mantinaki, Danae Kaplanidi, Christodoulos Ringas, Eleana Tasiopoulou, Arnaud Dubois and Anne Laure Carre
Electronics 2022, 11(2), 195; https://doi.org/10.3390/electronics11020195 - 9 Jan 2022
Cited by 4 | Viewed by 1819
Abstract
An approach to the representation and presentation of the spatial and geographical context of cultural heritage sites is proposed. The goal is to combine semantic representations of social and historical context with 3D representations of cultural heritage sites acquired through 3D reconstruction and 3D modeling technologies, in order to support their interpretation and presentation in education and tourism. Several use cases support and demonstrate the application of the proposed approach, including an immersive craft and context demonstration environment and interactive games.
(This article belongs to the Special Issue Applications of Computer Vision in Interactive Environments)

29 pages, 36855 KiB  
Article
Improving Deep Object Detection Algorithms for Game Scenes
by Minji Jung, Heekyung Yang and Kyungha Min
Electronics 2021, 10(20), 2527; https://doi.org/10.3390/electronics10202527 - 17 Oct 2021
Viewed by 2936
Abstract
The advancement and popularity of computer games make game scene analysis one of the most interesting research topics in the computer vision community. Among the various computer vision techniques, we employ object detection algorithms for the analysis, since they can both recognize and localize objects in a scene. However, applying existing object detection algorithms to game scenes does not guarantee the desired performance, since the algorithms are trained on datasets collected from the real world. To achieve the desired performance on game scenes, we built a dataset by collecting game scenes and retrained object detection algorithms that had been pre-trained on real-world datasets. We selected five object detection algorithms, namely YOLOv3, Faster R-CNN, SSD, FPN, and EfficientDet, and eight games from various genres, including first-person shooting, role-playing, sports, and driving. PascalVOC and MS COCO were employed for the pre-training of the object detection algorithms. We demonstrated the improvement in performance that comes from our strategy in two aspects: recognition and localization. The improvement in recognition performance was measured using mean average precision (mAP), and the improvement in localization using intersection over union (IoU).
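The localization metric mentioned in this abstract, intersection over union (IoU), can be sketched as follows. This is a minimal illustration of the standard formula, not code from the paper; the box format (x_min, y_min, x_max, y_max) and the function name are assumptions for the example.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted box is typically counted as a correct localization when its IoU with the ground-truth box exceeds a threshold such as 0.5, which is also how detection benchmarks like PascalVOC and MS COCO score detections.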
(This article belongs to the Special Issue Applications of Computer Vision in Interactive Environments)
