Computer Vision for Mobile Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 November 2021) | Viewed by 5118

Special Issue Editor


Prof. Dr. Antonio Fernández-Caballero
Guest Editor
Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, 02071 Albacete, Spain
Interests: pattern recognition; human–computer interaction; affective computing; computer vision; multi-sensor fusion

Special Issue Information

Dear Colleagues,

Over the last few years, mobile robots have taken on increasingly complex tasks. The partially or fully autonomous operation of any sophisticated mobile robotic vehicle requires computer vision together with machine learning algorithms for video analysis. Moving through an uncertain and dynamic environment demands localization and navigation capabilities grounded in an understanding of the robot's three-dimensional surroundings.

This Special Issue (SI) on “Computer Vision for Mobile Robotics” will bring together research communities worldwide interested in all aspects of computer vision for mobile robotics.

Topics of interest include (but are not limited to):

  • Vision systems for mobile robots
  • Visual sensing and perception in mobile robotics
  • Computer vision for mapping and self-localization in mobile robotics
  • Computer vision for recognition and location in mobile robotics
  • Computer vision for navigation and planning in mobile robotics
  • Computer vision for tracking in mobile robotics
  • Visual servoing in mobile robotics

Prof. Dr. Antonio Fernández-Caballero
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

12 pages, 1050 KiB  
Article
Variational Bayesian Approach to Condition-Invariant Feature Extraction for Visual Place Recognition
by Junghyun Oh and Gyuho Eoh
Appl. Sci. 2021, 11(19), 8976; https://doi.org/10.3390/app11198976 - 26 Sep 2021
Cited by 5 | Viewed by 2045
Abstract
As mobile robots perform long-term operations in large-scale environments, coping with perceptual changes has recently become an important issue. This paper introduces a stochastic variational inference and learning architecture that can extract condition-invariant features for visual place recognition in a changing environment. Under the assumption that the latent representation of a variational autoencoder can be divided into condition-invariant and condition-sensitive features, a new structure of the variational autoencoder is proposed and a variational lower bound is derived to train the model. After training, condition-invariant features are extracted from test images to compute a similarity matrix, and places can be recognized even under severe environmental changes. Experiments were conducted to verify the proposed method, and the results showed that the assumption was reasonable and effective for recognizing places in changing environments.
(This article belongs to the Special Issue Computer Vision for Mobile Robotics)
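
For readers who want a concrete starting point, the sketch below illustrates the general idea described in the abstract: a variational autoencoder whose latent vector is split into a condition-invariant block and a condition-sensitive block, with only the invariant block used to build a similarity matrix between query and reference images. This is a minimal PyTorch illustration, not the authors' implementation; the layer sizes, the fully connected encoder/decoder, and the use of cosine similarity are assumptions made for brevity.

    # Minimal sketch (not the authors' code): a VAE whose latent vector is split
    # into condition-invariant and condition-sensitive parts, with the invariant
    # part used to build a similarity matrix for place recognition.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SplitLatentVAE(nn.Module):
        def __init__(self, input_dim=4096, inv_dim=64, sens_dim=64):
            super().__init__()
            latent_dim = inv_dim + sens_dim
            self.inv_dim = inv_dim
            self.encoder = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU())
            self.fc_mu = nn.Linear(512, latent_dim)
            self.fc_logvar = nn.Linear(512, latent_dim)
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                         nn.Linear(512, input_dim))

        def encode(self, x):
            h = self.encoder(x)
            return self.fc_mu(h), self.fc_logvar(h)

        def forward(self, x):
            mu, logvar = self.encode(x)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            recon = self.decoder(z)
            # Standard ELBO-style terms: reconstruction error plus KL divergence
            # of the approximate posterior against a unit Gaussian prior.
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
            return F.mse_loss(recon, x) + kl

        def invariant_features(self, x):
            mu, _ = self.encode(x)
            return mu[:, :self.inv_dim]   # keep only the condition-invariant block

    # Place recognition: cosine similarity matrix between query and reference images.
    def similarity_matrix(model, queries, references):
        q = F.normalize(model.invariant_features(queries), dim=1)
        r = F.normalize(model.invariant_features(references), dim=1)
        return q @ r.t()   # entry (i, j): similarity of query i to reference place j

In this sketch, training would minimize the returned loss over images of the same places captured under different conditions; only invariant_features() and similarity_matrix() are needed at test time.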

14 pages, 3443 KiB  
Article
Bio-Inspired Modality Fusion for Active Speaker Detection
by Gustavo Assunção, Nuno Gonçalves and Paulo Menezes
Appl. Sci. 2021, 11(8), 3397; https://doi.org/10.3390/app11083397 - 10 Apr 2021
Viewed by 1915
Abstract
Human beings have developed fantastic abilities to integrate information from various sensory sources, exploiting their inherent complementarity. Perceptual capabilities are therefore heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., speech disambiguation from a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus region of the brain as the one responsible for this modality fusion, with a handful of biological models having been proposed to approach its underlying neurophysiological process. Deriving inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates spatial neuron cross-mapping of unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
(This article belongs to the Special Issue Computer Vision for Mobile Robotics)
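
As an illustration of the overall pattern described in the abstract (two unimodal encoders whose embeddings are fused before an active-speaker decision), the sketch below uses a plain learned fusion layer. It does not reproduce the paper's superior-colliculus-inspired cross-mapping topology, and all dimensions, module names, and the choice of PyTorch are illustrative assumptions.

    # Minimal sketch (not the authors' model): audio and visual features pass through
    # separate encoders, their embeddings are fused, and a binary classifier scores
    # whether each candidate is the active speaker.
    import torch
    import torch.nn as nn

    class ActiveSpeakerDetector(nn.Module):
        def __init__(self, audio_dim=128, visual_dim=512, fused_dim=256):
            super().__init__()
            self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, fused_dim), nn.ReLU())
            self.visual_encoder = nn.Sequential(nn.Linear(visual_dim, fused_dim), nn.ReLU())
            # Cross-modal fusion: concatenate the two embeddings and project them.
            self.fusion = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.ReLU())
            self.classifier = nn.Linear(fused_dim, 1)   # logit: is this candidate speaking?

        def forward(self, audio_feat, visual_feat):
            a = self.audio_encoder(audio_feat)
            v = self.visual_encoder(visual_feat)
            fused = self.fusion(torch.cat([a, v], dim=1))
            return self.classifier(fused).squeeze(1)

    # Usage with dummy features for a batch of 8 candidate speakers.
    model = ActiveSpeakerDetector()
    scores = model(torch.randn(8, 128), torch.randn(8, 512))
    print(torch.sigmoid(scores))   # per-candidate speaking probabilities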
