Spatial Perception and Navigation in the Absence of Vision

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 32023

Special Issue Editors


Dr. Daniel-Robert Chebat
Guest Editor
Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel 40700, Israel
Interests: brain plasticity; navigation; sensory substitution; sensory deprivation; multisensory integration

Prof. Dr. Maurice Ptito
Guest Editor
Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC H3T 1J4, Canada
Interests: vision; blindness; sensory substitution; cross modal plasticity; humans; animals; navigation

Special Issue Information

Dear Colleagues,

In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for the missing visual input through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit: blind people are able to acquire spatial knowledge about the environment. This spatial competence takes longer to achieve, but it is eventually reached through training-induced plasticity. Although complete visual deprivation leads to volumetric reductions in brain structures associated with spatial learning, blind individuals are still able to navigate, and the neural structures involved in this function are not fully understood. In this Special Issue, we invite leading experts in the field to submit reviews or original research on spatial navigation in the absence of vision, from a variety of approaches, to advance our understanding of multisensory spatial knowledge acquisition. Our aim is for these issues to be explored in people who are congenitally blind, late blind, and sighted, examining how they use multisensory information to perceive space, as well as their abilities, strategies, and corresponding mental representations.

Contributions may include, but are not limited to, the following topics:

  • Navigation and multisensory integration in the context of spatial perception;
  • Sensory substitution geared to assist space perception;
  • The role of visual experience in shaping space perception;
  • The ability of blind people to perceive and interact with space;
  • The ability of blind people to use sensory substitution devices (SSDs);
  • The special contribution of each sense to space perception;
  • The contribution of different brain regions to space perception;
  • Behavioral and brain reorganization linked with sensory deprivation in the context of space perception;
  • How brain reorganization influences the perception of space by people who are blind;
  • The use of virtual reality tools to manipulate the relative cues between vision, audition, and proprioception during navigation and spatial memory tasks.

Dr. Daniel-Robert Chebat
Prof. Dr. Maurice Ptito
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • audition
  • smell
  • animal studies
  • neural substrates

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

14 pages, 7278 KiB  
Article
Network QoS Impact on Spatial Perception through Sensory Substitution in Navigation Systems for Blind and Visually Impaired People
by Santiago Real and Alvaro Araujo
Sensors 2023, 23(6), 3219; https://doi.org/10.3390/s23063219 - 17 Mar 2023
Cited by 1 | Viewed by 2052
Abstract
A navigation system for individuals who are blind or visually impaired provides information useful for reaching a destination. Although there are different approaches, traditional designs are evolving into distributed systems with low-cost, front-end devices. These devices act as a medium between the user and the environment, encoding the information gathered on the surroundings according to theories of human perceptual and cognitive processes. Ultimately, they are rooted in sensorimotor coupling. The present work searches for the temporal constraints imposed by such human–machine interfaces, which in turn constitute a key design factor for networked solutions. To that end, three tests were conducted with a group of 25 participants under different delay conditions between motor actions and triggered stimuli. The results show a trade-off between spatial information acquisition and delay degradation, as well as a learning curve, even under impaired sensorimotor coupling.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
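
The key manipulation in this study is the latency between a motor action and the stimulus it triggers. A minimal sketch of how such a delay condition can be injected into an action-feedback loop follows; the class and parameter names are illustrative assumptions, not taken from the paper.

```python
import time
from collections import deque

class DelayedFeedbackLoop:
    """Buffer stimuli so they reach the user a fixed delay after the
    motor action that triggered them (simulating network QoS latency)."""

    def __init__(self, delay_s: float):
        self.delay_s = delay_s
        self.pending = deque()  # (due_time, stimulus) pairs

    def on_motor_action(self, stimulus: str) -> None:
        # Schedule the stimulus instead of delivering it immediately.
        self.pending.append((time.monotonic() + self.delay_s, stimulus))

    def poll(self) -> list[str]:
        # Deliver every stimulus whose artificial delay has elapsed.
        now = time.monotonic()
        due = []
        while self.pending and self.pending[0][0] <= now:
            due.append(self.pending.popleft()[1])
        return due

# Example: a hypothetical 200 ms delay condition between a head
# movement and the audio cue it triggers.
loop = DelayedFeedbackLoop(delay_s=0.200)
loop.on_motor_action("beep_left")
time.sleep(0.25)
print(loop.poll())  # ['beep_left'] -- delivered ~200 ms late
```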

10 pages, 1105 KiB  
Article
Deductive Reasoning and Working Memory Skills in Individuals with Blindness
by Eyal Heled, Noa Elul, Maurice Ptito and Daniel-Robert Chebat
Sensors 2022, 22(5), 2062; https://doi.org/10.3390/s22052062 - 7 Mar 2022
Cited by 6 | Viewed by 3408
Abstract
Deductive reasoning and working memory are integral parts of executive functioning and are important skills for blind people in everyday life. Despite the importance of these skills, the influence of visual experience on reasoning and working memory, as well as on the relationship between them, is unknown. In this study, fifteen participants with congenital blindness (CB), fifteen with late blindness (LB), fifteen sighted blindfolded controls (SbfC), and fifteen sighted participants performed two tasks of deductive reasoning and two of working memory. We found that while the CB and LB participants did not differ in their deductive reasoning abilities, the CB group performed worse than the sighted controls, and the LB group performed better than the SbfC group. Those with CB outperformed all the other groups in both working memory tests. Working memory was associated with deductive reasoning in the three groups performing without vision (CB, LB, and SbfC), but not in the sighted group. These findings suggest that deductive reasoning is not a uniform skill, and that it is associated with the onset of visual impairment, the level of reasoning difficulty, and the degree of working memory load.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)

33 pages, 4013 KiB  
Article
The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People
by Jakob Kilian, Alexander Neugebauer, Lasse Scherffig and Siegfried Wahl
Sensors 2022, 22(5), 1859; https://doi.org/10.3390/s22051859 - 26 Feb 2022
Cited by 17 | Viewed by 6219
Abstract
This paper documents the design, implementation, and evaluation of the Unfolding Space Glove—an open-source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand, thus enabling blind people to haptically explore the depth of their surrounding space and assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback—all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane to allow performance comparisons to be drawn between them. The subjects quickly learned how to use the glove and successfully completed all of the trials, though they remained slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate that spatial information can in general be processed through sensory substitution using haptic, vibrotactile interfaces. Further research would be required to evaluate the prototype's capabilities after extensive training and to derive a fully functional navigation aid from its features.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
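
As a rough illustration of the spatio-visual-to-haptic mapping described above, a depth frame can be downsampled to a small grid of per-motor vibration intensities, with nearer objects producing stronger stimuli. The sketch below assumes a 3 x 3 motor layout, a generic depth array, and a 4 m cutoff; the actual device and its mapping differ in detail.

```python
import numpy as np

def depth_to_vibration(depth_m: np.ndarray,
                       grid: tuple[int, int] = (3, 3),
                       max_range_m: float = 4.0) -> np.ndarray:
    """Map a depth frame (meters) to per-motor vibration intensities in [0, 1].

    Each cell takes the nearest depth in its image region; closer
    objects yield stronger vibration. Grid size and range are assumptions.
    """
    rows, cols = grid
    h, w = depth_m.shape
    out = np.zeros(grid)
    for r in range(rows):
        for c in range(cols):
            cell = depth_m[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            nearest = np.min(cell)
            out[r, c] = np.clip(1.0 - nearest / max_range_m, 0.0, 1.0)
    return out

# Example: a synthetic 240x320 frame with a close object in the center.
frame = np.full((240, 320), 3.5)
frame[80:160, 120:200] = 0.8           # obstacle ~0.8 m ahead
print(depth_to_vibration(frame).round(2))
```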

18 pages, 2084 KiB  
Article
Virtual Reality Systems as an Orientation Aid for People Who Are Blind to Acquire New Spatial Information
by Orly Lahav
Sensors 2022, 22(4), 1307; https://doi.org/10.3390/s22041307 - 9 Feb 2022
Cited by 6 | Viewed by 3012
Abstract
This research examines the impact of virtual environment interfaces on the exploration process, the construction of cognitive maps, and the performance of orientation tasks in real spaces by users who are blind. The study compared interaction with identical spaces using different systems: BlindAid, Virtual Cane, and real space. The two virtual systems include user-interface action commands that convey unique abilities and activities to users who are blind and that operate only in these VR systems and not in real space (e.g., teleporting the user's avatar or pointing at a virtual object to receive information). The research included 15 participants who are blind, divided into three groups: a control group and two experimental groups. Varied tasks (exploration and orientation) were used in two virtual environments and in real spaces, with both qualitative and quantitative methodologies. The results show that the participants were able to explore, construct a cognitive map, and perform orientation tasks. Participants in both virtual systems used the action commands during their exploration process: all participants used the teleport command to move their avatar to the starting point, and all Virtual Cane participants explored the environment mainly in the look-around mode, which enabled them to collect spatial information in a way that influenced their ability to construct a cognitive map based on a map model.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
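
The teleport action command mentioned in the abstract is straightforward to picture in code: it relocates the user's avatar without any physical locomotion, something that has no counterpart in real space. A toy sketch follows; all names are hypothetical, not the systems' actual APIs.

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    x: float
    y: float

class VirtualEnvironment:
    """Toy environment exposing the kind of action commands the
    abstract describes (teleport, pointing); purely illustrative."""

    def __init__(self, start: tuple[float, float]):
        self.start = start
        self.avatar = Avatar(*start)

    def teleport_to_start(self) -> None:
        # Jump the avatar home instantly -- no real-world analogue.
        self.avatar.x, self.avatar.y = self.start

    def point_at(self, obj_name: str) -> str:
        # Pointing at a virtual object returns spatial information.
        return f"'{obj_name}' is ahead of the avatar"

env = VirtualEnvironment(start=(0.0, 0.0))
env.avatar.x, env.avatar.y = 12.0, 7.5   # after some exploration
env.teleport_to_start()
print(env.avatar)  # Avatar(x=0.0, y=0.0)
```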

21 pages, 4541 KiB  
Article
Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes—Design, Implementation, and Usability Audit
by Dominik Osiński, Marta Łukowska, Dag Roar Hjelme and Michał Wierzchoń
Sensors 2021, 21(21), 7351; https://doi.org/10.3390/s21217351 - 5 Nov 2021
Cited by 6 | Viewed by 3118
Abstract
The successful development of a system realizing color sonification would enable an auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents the design considerations, development, and usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, while the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
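
To make the color-to-sound idea concrete, here is one plausible mapping from a pixel's hue to tone frequency and from its horizontal image position to stereo panning. The specific choices (hue to pitch, saturation to loudness, linear panning) are assumptions for illustration only; the paper uses its own dedicated color space and sound synthesis.

```python
import colorsys
import numpy as np

def pixel_to_stereo_tone(r: int, g: int, b: int, x_norm: float,
                         sr: int = 44100, dur_s: float = 0.5) -> np.ndarray:
    """Render one pixel as a short stereo tone.

    Hue -> frequency (220-880 Hz), saturation * value -> amplitude,
    horizontal position x_norm in [0, 1] -> left/right panning.
    All mapping choices are illustrative assumptions.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    freq = 220.0 * (2.0 ** (2.0 * h))        # two octaves across the hue circle
    t = np.linspace(0.0, dur_s, int(sr * dur_s), endpoint=False)
    mono = s * v * np.sin(2 * np.pi * freq * t)
    left, right = mono * (1.0 - x_norm), mono * x_norm   # linear pan
    return np.stack([left, right], axis=1)

# A saturated red pixel on the far left of the image:
tone = pixel_to_stereo_tone(255, 0, 0, x_norm=0.0)
print(tone.shape)   # (22050, 2) -- half a second of stereo audio
```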

17 pages, 3222 KiB  
Article
Situational Awareness: The Effect of Stimulus Type and Hearing Protection on Sound Localization
by Leah Fostick and Nir Fink
Sensors 2021, 21(21), 7044; https://doi.org/10.3390/s21217044 - 24 Oct 2021
Cited by 7 | Viewed by 2344
Abstract
The purpose of the current study was to test sound localization of a spoken word—rarely studied in the context of localization—compared to pink noise and a gunshot, while taking into account the source position and the effect of different hearing protection devices (HPDs) used by the listener. Ninety participants were divided into three groups using different HPDs. Participants were tested twice, under with-HPD and no-HPD conditions, and were requested to localize the different stimuli, which were delivered from one of eight speakers evenly distributed around them (starting from 22.5°). Localization of the word stimulus was more difficult than that of the other stimuli. HPD usage resulted in a larger mean root-mean-square error (RMSE) and increased mirror image reversal errors for all stimuli. In addition, HPD usage increased the mean RMSE and mirror image reversal errors more for stimuli delivered from the front and back than for stimuli delivered from the left and right. HPDs affect localization both through attenuation and by limiting pinnae cues when earmuffs are used. The difficulty of localizing a spoken word should be considered when assessing auditory functionality, and further investigation should include HPDs with different attenuation spectra and levels as well as further types of speech stimuli.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
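
Two of the reported measures can be written down precisely: the root-mean-square localization error must respect circular wrap-around (an error of 350° is really -10°), and a mirror image reversal is a response that matches the true source mirrored about the interaural (front-back) axis. A sketch of both computations, assuming the eight-speaker ring at 22.5° plus multiples of 45° described in the abstract:

```python
import numpy as np

def wrapped_error_deg(response: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Signed angular error in (-180, 180], respecting wrap-around."""
    return (response - target + 180.0) % 360.0 - 180.0

def rmse_deg(response: np.ndarray, target: np.ndarray) -> float:
    err = wrapped_error_deg(response, target)
    return float(np.sqrt(np.mean(err ** 2)))

def is_mirror_reversal(response: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Front-back confusion: the response equals the target mirrored
    about the interaural axis (azimuth theta -> 180 - theta)."""
    mirrored = (180.0 - target) % 360.0
    return wrapped_error_deg(response, mirrored) == 0.0

# Speakers at 22.5 deg + k*45 deg, 0 deg = straight ahead (assumed).
targets   = np.array([22.5, 67.5, 112.5, 337.5])
responses = np.array([22.5, 112.5, 67.5, 337.5])   # two front/back confusions
print(rmse_deg(responses, targets))                # ~31.8
print(is_mirror_reversal(responses, targets))      # [False  True  True False]
```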

11 pages, 2712 KiB  
Communication
Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures
by Eldad Holdengreber, Roi Yozevitch and Vitali Khavkin
Sensors 2021, 21(16), 5291; https://doi.org/10.3390/s21165291 - 5 Aug 2021
Cited by 3 | Viewed by 2116
Abstract
Muteness at its various levels is a common disability. Most technological solutions to the problem create vocal speech through the transition from sign languages to vocal acoustic sounds. We present a new approach to creating speech: a technology that does not require prior knowledge of sign language. This technology is based on the most basic level of speech, the phonetic division into vowels and consonants. The speech itself is expressed through sensing of hand movements, which are divided into three rotations: yaw, pitch, and roll. The proposed algorithm converts these rotations into vowels and consonants. For sensing the hand movements, we used a depth camera, and standard speakers were used to produce the sounds. The combination of the programmed depth camera and the speakers, together with the cognitive activity of the brain, is integrated into a unique speech interface. Using this interface, the user can develop speech through an intuitive cognitive process in accordance with ongoing brain activity, similar to the natural use of the vocal cords. Based on the performance of the presented speech interface prototype, the proposed device could be a solution for people with speech disabilities.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
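
The described algorithm amounts to quantizing three hand rotations into phoneme choices. The toy sketch below illustrates that idea; the binning scheme and phoneme tables are invented for illustration and are not the authors' actual mapping.

```python
# Hypothetical yaw/pitch/roll -> phoneme quantization; the tables and
# binning below are invented, not taken from the paper.
VOWELS = ["a", "e", "i", "o", "u"]
CONSONANTS = ["b", "d", "g", "k", "l", "m", "n", "p", "r", "s", "t", "v"]

def bin_angle(angle_deg: float, n_bins: int) -> int:
    """Quantize an angle in [-90, 90] into n_bins equal bins."""
    clipped = max(-90.0, min(90.0, angle_deg))
    idx = int((clipped + 90.0) / 180.0 * n_bins)
    return min(idx, n_bins - 1)

def gesture_to_phoneme(yaw: float, pitch: float, roll: float) -> str:
    """Roll selects vowel vs. consonant; yaw/pitch pick the phoneme."""
    if roll < 0:                                  # roll left -> vowel
        return VOWELS[bin_angle(yaw, len(VOWELS))]
    return CONSONANTS[bin_angle(pitch, len(CONSONANTS))]

# Hand rolled left, turned slightly right of center:
print(gesture_to_phoneme(yaw=15.0, pitch=0.0, roll=-30.0))  # 'i'
```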

14 pages, 1524 KiB  
Article
Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane
by Maxime Bleau, Samuel Paré, Ismaël Djerourou, Daniel R. Chebat, Ron Kupers and Maurice Ptito
Sensors 2021, 21(8), 2700; https://doi.org/10.3390/s21082700 - 12 Apr 2021
Cited by 8 | Viewed by 4630
Abstract
Vision loss has dramatic repercussions on the quality of life of affected people, particularly with respect to their orientation and mobility. Many devices are available to help blind people navigate their environment. The EyeCane is a recently developed electronic travel aid (ETA) that is inexpensive and easy to use, allowing for the detection of obstacles lying ahead within a 2 m range. The goal of this study was to investigate the potential of the EyeCane as a primary aid for spatial navigation. Three groups of participants were recruited: early blind, late blind, and sighted. They were first trained with the EyeCane and then tested in a life-size obstacle course with four obstacle types: cube, door, post, and step. Subjects were requested to cross the corridor while detecting, identifying, and avoiding the obstacles. Each participant performed 12 runs with 12 different obstacle configurations. All participants quickly learned to use the EyeCane and successfully completed all trials. Amongst the various obstacles, the step proved the hardest to detect and resulted in more collisions. Although the EyeCane was effective for detecting obstacles lying ahead, its downward sensor did not reliably detect those on the ground, rendering downward obstacles more hazardous for navigation.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
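
Functionally, an ETA of this kind reduces to mapping a range-sensor reading to a feedback intensity within its detection range. A minimal sketch of that mapping follows, with only the 2 m range taken from the abstract and everything else assumed.

```python
def distance_to_feedback(distance_m, max_range_m: float = 2.0) -> float:
    """Map a range-sensor reading to feedback intensity in [0, 1].

    Nearer obstacles give stronger feedback; readings beyond the
    2 m detection range, or missing readings (e.g., a ground-level
    obstacle the downward sensor fails to catch), give none.
    The linear mapping itself is an assumption.
    """
    if distance_m is None or distance_m > max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m

for d in (0.5, 1.5, 2.5, None):
    print(d, "->", distance_to_feedback(d))
# 0.5 -> 0.75, 1.5 -> 0.25, 2.5 -> 0.0, None -> 0.0
```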

Review

17 pages, 8342 KiB  
Review
Spatial Knowledge via Auditory Information for Blind Individuals: Spatial Cognition Studies and the Use of Audio-VR
by Amandine Afonso-Jaco and Brian F. G. Katz
Sensors 2022, 22(13), 4794; https://doi.org/10.3390/s22134794 - 24 Jun 2022
Cited by 7 | Viewed by 2959
Abstract
Spatial cognition is a daily-life ability, developed in order to understand and interact with our environment. Although all the senses are involved in elaborating a mental representation of space, the lack of vision makes this more difficult, especially because of the importance of peripheral information in updating the relative positions of surrounding landmarks as one moves. Spatial audio technology has long been used for studies of human perception, particularly in the area of auditory source localisation. The ability to reproduce individual sounds at desired positions, or complex spatial audio scenes, without the need to manipulate physical devices has provided researchers with many benefits. We present a review of several studies employing the power of spatial audio virtual reality for research in spatial cognition with blind individuals. These include studies investigating simple spatial configurations, architectural navigation, reaching to sounds, and sound design for improved acceptability. Prospects for future research, including work currently underway, are also discussed.
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
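
The spatial audio rendering these studies depend on ultimately comes down to reproducing interaural cues for a desired source position. The bare-bones sketch below places a mono signal at an azimuth using interaural time and level differences; it is a crude approximation that ignores the HRTF spectral cues real binaural renderers provide.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, approximate adult head

def render_azimuth(mono: np.ndarray, azimuth_deg: float,
                   sr: int = 44100) -> np.ndarray:
    """Place a mono signal at an azimuth using crude ITD/ILD cues.

    Positive azimuth = source to the right. The ILD rule (up to -6 dB
    at the far ear) is a simplifying assumption.
    """
    az = np.radians(azimuth_deg)
    # Woodworth-style interaural time difference.
    itd_s = (HEAD_RADIUS / SPEED_OF_SOUND) * (abs(az) + abs(np.sin(az)))
    shift = int(round(itd_s * sr))
    near_gain, far_gain = 1.0, 10 ** (-6.0 * abs(np.sin(az)) / 20.0)
    delayed = np.concatenate([np.zeros(shift), mono])[:len(mono)]
    if azimuth_deg >= 0:        # source right: left ear far and delayed
        left, right = far_gain * delayed, near_gain * mono
    else:
        left, right = near_gain * mono, far_gain * delayed
    return np.stack([left, right], axis=1)

# Half a second of noise rendered 45 degrees to the listener's right:
sig = np.random.default_rng(0).standard_normal(22050)
print(render_azimuth(sig, 45.0).shape)   # (22050, 2)
```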
