Computer Vision Datasets for Positioning, Tracking and Wayfinding

A special issue of Data (ISSN 2306-5729). This special issue belongs to the section "Information Systems and Data Management".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 5530

Special Issue Editors


Dr. Filipe Meneses
Guest Editor
Information Systems Department, University of Minho, Campus Azurém, 4800-058 Guimarães, Portugal
Interests: indoor localization; ubiquitous computing; mobile computing; pervasive computing; location

Dr. Joaquín Torres-Sospedra
Guest Editor
Institute of New Imaging Technologies, Espaitec 2, Universitat Jaume I, Avda. Vicente Sos Baynat S/N, Castelló de la Plana, Spain
Interests: neural networks; pattern recognition; machine learning; image processing; outdoor robotics; artificial intelligence; indoor localisation and positioning

Special Issue Information

Dear Colleagues,

Although Global Navigation Satellite Systems (GNSSs) are widely used for positioning, they cannot operate properly in indoor areas such as hospitals, shopping areas, factories, or warehouses. While most indoor positioning solutions depend on radio frequency, there is growing interest in positioning based on computer vision, as this approach may be a cost-effective alternative to well-known systems based on LiDAR or UWB.

This Special Issue encourages authors from academia and industry to submit their datasets for positioning, tracking, and wayfinding using computer vision. The Special Issue topics include, but are not limited to:

  • Computer vision datasets;
  • Multi-source datasets, including sensor fusion;
  • Multi-range datasets.

Co-submissions that add enough new content to previously published research articles to merit a new paper are also welcome. These may include updates to a reported dataset, a fuller release of a dataset, or additional information that enhances data transparency or reusability.

Dr. Filipe Meneses
Dr. Joaquín Torres-Sospedra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Data is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


16 pages, 6400 KiB  
Data Descriptor
RIFIS: A Novel Rice Field Sidewalk Detection Dataset for Walk-Behind Hand Tractor
by Padma Nyoman Crisnapati and Dechrit Maneetham
Data 2022, 7(10), 135; https://doi.org/10.3390/data7100135 - 25 Sep 2022
Cited by 4 | Viewed by 2096
Abstract
Rice field sidewalk (RIFIS) identification plays a crucial role in enhancing the performance of agricultural computer applications, especially for rice farming, by dividing an image into the areas of rice fields to be ploughed and the areas outside of rice fields. This division isolates the desired area and reduces the computational cost of RIFIS detection in the automation of ploughing fields using hand tractors. Testing and evaluating the performance of RIFIS detection methods requires a collection of image data that covers the various features of the rice field environment. However, the available agricultural image datasets focus only on rice plants and their diseases; a dataset that explicitly provides RIFIS imagery has not been found. This study presents a RIFIS image dataset that addresses this deficiency by including specific linear characteristics. Two geographically separated rice fields in Bali, Indonesia, were selected. The initial data were collected as several videos, which were then converted into image sequences, and manual RIFIS annotations were applied to the images. This research produced a dataset consisting of 970 high-definition RGB images (1920 × 1080 pixels) and corresponding annotations, covering a combination of 19 different features. The dataset can be used for detection not only at rice-planting time but also at harvest time, supporting a variety of applications throughout the entire year.
(This article belongs to the Special Issue Computer Vision Datasets for Positioning, Tracking and Wayfinding)
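The abstract describes collecting field videos and converting them into image sequences for manual annotation. The following sketch illustrates that frame-extraction step, assuming OpenCV; the file names, sampling stride, and output layout are illustrative and not the published RIFIS structure.

# Sketch: turn a field video into an image sequence for later manual annotation.
# Paths, names, and the sampling stride are hypothetical; the published RIFIS
# dataset defines its own layout and frame selection.
import os
import cv2

def extract_frames(video_path: str, out_dir: str, stride: int = 30) -> int:
    """Save every `stride`-th frame of `video_path` as a JPEG in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:
            # Frames keep their native resolution (1920 x 1080 in the dataset description).
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage:
# n_images = extract_frames("field_video_01.mp4", "rifis_images/field_01", stride=15)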

14 pages, 13286 KiB  
Data Descriptor
UNIPD-BPE: Synchronized RGB-D and Inertial Data for Multimodal Body Pose Estimation and Tracking
by Mattia Guidolin, Emanuele Menegatti and Monica Reggiani
Data 2022, 7(6), 79; https://doi.org/10.3390/data7060079 - 9 Jun 2022
Cited by 4 | Viewed by 2753
Abstract
The ability to estimate human motion without requiring any external on-body sensor or marker is of paramount importance in a variety of fields, ranging from human–robot interaction and Industry 4.0 to surveillance and telerehabilitation. The recent development of portable, low-cost RGB-D cameras has pushed forward the accuracy of markerless motion capture systems. However, despite the widespread use of such sensors, a dataset including complex scenes with multiple interacting people, recorded with a calibrated network of RGB-D cameras and an external system for assessing pose estimation accuracy, is still missing. This paper presents the University of Padova Body Pose Estimation dataset (UNIPD-BPE), an extensive dataset for multi-sensor body pose estimation containing both single-person and multi-person sequences with up to 4 interacting people. A network of 5 Microsoft Azure Kinect RGB-D cameras is used to record synchronized high-definition RGB and depth data of the scene from multiple viewpoints, as well as to estimate the subjects’ poses using the Azure Kinect Body Tracking SDK. Simultaneously, full-body Xsens MVN Awinda inertial suits provide accurate poses and anatomical joint angles, along with raw data from the 17 IMUs required by each suit. This dataset aims to push forward the development and validation of multi-camera markerless body pose estimation and tracking algorithms, as well as multimodal approaches focused on merging visual and inertial data.
(This article belongs to the Special Issue Computer Vision Datasets for Positioning, Tracking and Wayfinding)
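The recordings pair camera-based skeleton estimates with inertial data captured at different rates, so any multimodal use starts with aligning the two streams in time. The sketch below matches each camera frame to its nearest IMU sample by timestamp, assuming NumPy; the sampling rates and tolerance are illustrative and not the actual UNIPD-BPE format.

# Sketch: nearest-timestamp alignment of RGB-D skeleton frames and IMU samples
# before fusing the two modalities. Rates and tolerance are hypothetical;
# UNIPD-BPE defines its own file formats and synchronization.
import numpy as np

def align_streams(cam_ts: np.ndarray, imu_ts: np.ndarray, tol: float = 0.02) -> np.ndarray:
    """For each camera timestamp, return the index of the closest IMU sample,
    or -1 when no IMU sample lies within `tol` seconds."""
    idx = np.searchsorted(imu_ts, cam_ts)
    idx = np.clip(idx, 1, len(imu_ts) - 1)
    left, right = imu_ts[idx - 1], imu_ts[idx]
    nearest = np.where(np.abs(cam_ts - left) <= np.abs(cam_ts - right), idx - 1, idx)
    matched = np.abs(imu_ts[nearest] - cam_ts) <= tol
    return np.where(matched, nearest, -1)

# Hypothetical usage: a 30 Hz camera clock against a 60 Hz IMU clock with a small offset.
cam_ts = np.arange(0.0, 1.0, 1 / 30)
imu_ts = np.arange(0.0, 1.0, 1 / 60) + 0.003
pairs = align_streams(cam_ts, imu_ts)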