Abstract
Over the last decades, the development of navigation devices capable of guiding the blind through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s objective is to provide an updated, holistic view of this research, in order to enable developers to exploit its multidisciplinary nature. To that end, previous solutions are briefly described and analyzed from a historical perspective, from the first “Electronic Travel Aids” and early research on sensory substitution or indoor/outdoor positioning, to recent systems based on artificial vision. Thereafter, user-centered design fundamentals are addressed, including the main points of criticism of previous approaches. Finally, several technological achievements are highlighted as they could underpin future feasible designs. In line with this, smartphones and wearables with built-in cameras are indicated as potentially feasible options with which to support state-of-the-art computer vision solutions, thus allowing for both the positioning of the user and the monitoring of the surrounding area. These functionalities could then be further boosted by means of remote resources, leading to cloud computing schemas or even remote sensing via urban infrastructure.
1. Introduction
Recent studies on global health estimate that 217 million people suffer from visual impairment, and 36 million from blindness [1]. Those affected have their autonomy jeopardized in many everyday tasks, especially those that involve moving through an unknown environment.
Generally, individuals rely primarily on vision to know their own position and direction in the environment, recognizing numerous elements in their surroundings, as well as their distribution and relative location. Those tasks are usually grouped under the categories of “orientation” or “wayfinding,” while the capability to detect and avoid nearby obstacles relates to “mobility.” A lack of vision heavily hampers the performance of such tasks, requiring a conscious effort to integrate perceptions from the remaining sensory modalities, memories, or even verbal descriptions. Past work described this as a “cognitive collage” [2].
In this regard, a navigation system’s purpose is to provide users with required and/or helpful data to get to a destination point, monitoring their position on previously modeled maps. As we will see, researchers working in this field have yet to find effective, efficient, safe, and cost-effective technical solutions for both the outdoor and indoor guidance needs of blind and visually impaired people.
Nevertheless, in recent years, we have seen unprecedented scientific and technical improvements, and new tools are now at our disposal to face this challenge. Thus, this study was undertaken to re-evaluate the perspective of navigation systems for the blind and visually impaired (BVI) in this new context, attempting to integrate key elements of what is frequently a disaggregated multidisciplinary background.
Given the purpose of this work, its content and structure differ from recent reviews on the same topic (e.g., [3,4]). Section 2 presents a historical overview that gathers together previous systems in order to present a novel survey of the principles, key points, strategies, rules, and approaches of assistive device design that are currently applicable. This is particularly important in the field of non-visual human‒machine interface, as the perceptual and cognitive processes remain the same. Next, Section 3, on related innovation fields, reviews several representative devices to introduce a set of technical resources that are yet to be fully exploited, e.g., remote processing techniques, simultaneous localization and mapping (SLAM), wearable haptic displays, etc. Finally, Section 4 and Section 5 include a brief introduction to user-centered design approaches, and a discussion of the currently available technical resources, respectively.
3. Related Innovation Fields
This section focuses on related R&D technological areas that currently benefit from greater attention and investment, as they could become some of the most important contributors to BVI mobility self-sufficiency.
3.1. Mixed Reality
In recent years, virtual and real environments have been slowly breaking down barriers and becoming closer, e.g., by virtualizing physical objects or an individual’s movement, mixing virtual and real elements in an immersive scenario, etc.
When rendering mixed reality, system latencies of tenths or even hundredths of a second are often required. Specifically, meeting that limit when virtualizing features of real elements led to the development of low-latency techniques and commercial products for capturing the three-dimensional environment.
Such advances boosted the implementation of functionalities needed by navigation systems, such as obstacle detection and recognition, as exemplified by projects like NAVI [59], based on Microsoft Kinect.
Soon enough, the high potential of applying computer vision to positioning was further exploited. Simultaneous localization and mapping (SLAM), which can be found in Google’s Project Tango, allowed for centimeter-level indoor positioning accuracy. Project Tango and related technologies such as Intel RealSense provided vision-based positioning solutions, with reported applications in commercially available drones like Yuneec’s Typhoon H. Specifically, the applications for BVI navigation that had previously been contemplated materialized in the development of various prototypes. For example, the Smart Cane system [60] used a depth camera and a server for SLAM processing, allowing for six-degrees-of-freedom indoor location plus obstacle detection features. Also, ISANA [61] exploited Project Tango for indoor wayfinding and obstacle detection, using compatible hardware platforms (i.e., Phab 2 or Yellowstone mobile devices) and haptic actuators embedded in a cane. Analogously, [62] describes a novel prototype that used Tango and the Unity game engine to capture the user’s movement in a continuously updated virtual replica of the indoor environment. In addition to wayfinding and mobility assistance, SLAM techniques were also used for tasks such as face recognition [63].
Another remarkable application of this technology for the guidance of VI people lies in the user interface. One solution, proposed by Stephen L. Hicks et al. at Oxford University, exploited residual vision by enhancing 3D perception with simplified images emphasizing depth (Figure 4) [64]. They recently tried to reach the market with their Smart Specs glasses [65] through the start-up VA-ST.
Figure 4.
A VA-ST Smart Specs captured image.
Alternatively, mixed reality allows users to interact with virtual elements overlapping with their actual surroundings, thus providing intuitive cues of orientation, distance from and shapes of objects, etc.
The usage of virtual sound sources to guide pedestrians along a route is one of the classic solutions, seen in projects like UCSB PGS or even Haptic Radar. The latter combined its original IR-based obstacle avoidance system with virtual sound guidance, which resulted in positive post-trial appraisals [66]. Nevertheless, some criticisms and suggestions were made, mainly in relation to the area covered by the IR sensors and the vibrational interface.
Also, virtual sounds can be applied not only to guidance, but also to several tasks involving enhanced 3D perception, as previously seen in Virtual Acoustic Space.
Aside from solutions based on sound, virtual tactile elements were also studied, albeit apparently less. The Virtual Haptic Radar project [67], derived from Haptic Radar, is a representative example. It replaced its predecessor’s IR sensors with the combination of a three-dimensional model of the surroundings and an ultrasonic motion capture system worn by the user. As described in Figure 5, once the user entered a certain area near an object, warning vibrations were triggered accordingly.
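The triggering logic described above can be sketched as follows. This is an illustrative reconstruction, not the project’s actual code; the warning radius and the linear intensity ramp are assumptions made for the example:

```python
import math

# Hypothetical sketch of Virtual Haptic Radar's trigger logic: the wearer's
# tracked position is checked against virtual objects taken from a 3D model
# of the room, and a vibration intensity is returned once a warning zone
# around any object is entered.
WARN_RADIUS = 1.0   # assumed warning-zone radius around each object, meters

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def vibration_intensity(user_pos, objects, warn_radius=WARN_RADIUS):
    """Return an intensity in [0, 1]: 0 outside all warning zones,
    rising linearly to 1 as the nearest object is reached."""
    d_min = min(distance(user_pos, obj) for obj in objects)
    if d_min >= warn_radius:
        return 0.0
    return 1.0 - d_min / warn_radius

# Example: one virtual obstacle 0.5 m away with a 1 m warning zone
print(vibration_intensity((0, 0, 0), [(0.5, 0, 0)]))  # → 0.5
```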
Figure 5.
Virtual Haptic Radar project.
However, one of the main problems hampering tactile-based solutions is the haptic interfaces available. Most portable designs seem to resort to mechanical components, thus causing a conflict between their bulkiness and the subtlety of the induced perceptions. Alternatives such as electrotactile devices remain experimental so far.
3.2. Smartphones
Over the last decade smartphones, among other portable devices, have gradually included a variety of features that would make them resourceful platforms for developers, some of which will be discussed next.
As a stand-alone device, a smartphone shows a high and rapidly increasing processing capacity in comparison with its price. Additionally, it incorporates a diverse set of built-in tools and sensors, like cameras, GNSS modules, accelerometers, gyroscopes, or NFC readers. In addition, close-range communication via Bluetooth or Wi-Fi further expands the previous assortment of uses, e.g., by means of external sensors for obstacle detection, high-precision RTK-GNSS modules, etc.
On the other hand, mobile networks keep improving with each new release, enabling the usage of remote resources. Accordingly, cloud computing services are nowadays commercialized at various levels of abstraction, such as infrastructure (IaaS), platforms (PaaS), or software (SaaS). Remarkable examples in our line of work, as will be shown later, are the artificial vision SaaS offered by Google and Microsoft, which provide developers with APIs to access Google Cloud Platform and Microsoft Cognitive Services resources, respectively.
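As an illustration of such a cloud scheme, the snippet below builds the JSON body of a label-detection request following Google Cloud Vision’s `images:annotate` REST shape; the exact fields should be checked against the current API reference, and authentication plus the network call itself are omitted:

```python
import base64
import json

# Sketch of how a navigation app might offload scene recognition to a
# vision SaaS. The payload follows Google Cloud Vision's `images:annotate`
# request shape (verify against the current API reference); sending it
# would additionally require an API key and an HTTPS POST.
def build_label_request(image_bytes, max_results=5):
    """Build the JSON body for a label-detection request on one image."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

# Placeholder bytes stand in for a captured camera frame
body = build_label_request(b"\x89PNG...fake image bytes")
print(json.dumps(body)[:40])
```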
An additional aspect to be aware of is the acceptance of smartphones specifically by BVI users [68]. Even before accessibility for people with disabilities made its way into software design standards, as can be seen in Apple’s iOS, mobile phones had progressively become widely adopted for calls or text messages. Now, with the generational change, the number of users of these new technologies has further increased.
In this environment, research on navigation systems for BVI users found a new field to exploit, e.g., the BLE-based NavCog smartphone application [53] or purely inertial prototypes [69] for indoor wayfinding. Regarding general-purpose sensory substitution, a few visual‒auditory systems soon became publicly available software applications, e.g., EyeMusic [70,71], or even the classic vOICe [72]. Conversely, visual‒tactile sensory substitution systems were once again comparatively scarce. One example would be HamsaTouch, seen in Section 2.2, which recreates Bach-y-Rita’s and Collins et al.’s prototypes in a smartphone equipped with a haptic electrotactile display (Figure 6b). On the other hand, applications such as Seeing AI [73] or TapTapSee [74] provide users with verbal descriptions of captured images, making use of remote processing resources in a cloud computing schema.
Figure 6.
Lazzus (a). HamsaTouch (b).
Nevertheless, the focus of attention was placed on GNSS-based outdoor navigation. Next, some representative examples of available applications are briefly described:
- Moovit [75]: a free, effective, and easy-to-use tool that offers guidance on the public transport network, managing schedules, notifications, and even warnings in real time. It is one of the assets for mobility tasks recommended by ONCE (National Organization of Spanish Blind People).
- BlindSquare [76]: specifically designed for the BVI, this application conveys the relative location of previously recorded POIs through speech. It makes use of Foursquare’s and OpenStreetMap’s databases.
- Lazzus [77]: a paid application, again designed for BVI users, which coordinates GPS and built-in motion capture and orientation sensors to provide users with intuitive cues about the location of diverse POIs in the surrounding area, even including zebra crossings. It offers two modes of operation: the 360° mode verbally informs of the distance and orientation to nearby POIs, whereas the beam mode describes any POI in a virtual field of view in front of the smartphone. Its main sources of data are Google Places and OpenStreetMap.
Some of these functionalities are also shared by an increasing number of commercially available applications, each with specific characteristics and improvements. For example, Seeing AI GPS [78] includes solutions analogous to Lazzus’s 360° and beam modes plus pre-journey information; NearBy Explorer offers several POI notification filters, etc.
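The “beam mode” behavior described above can be illustrated with a simple bearing test. The geometry below is a hypothetical sketch on a local flat (east/north) approximation, not the application’s actual code:

```python
import math

# Illustrative reconstruction (not Lazzus's actual code) of a "beam mode"
# check: a POI is announced only if its bearing from the user falls inside
# an angular field of view centered on the smartphone's heading.
def bearing_deg(user, poi):
    """Compass-style bearing from user to POI on a local flat plane
    (x = east, y = north), in degrees [0, 360)."""
    dx, dy = poi[0] - user[0], poi[1] - user[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def in_beam(user, heading_deg, poi, fov_deg=60.0):
    """True if the POI lies within fov_deg centered on the device heading."""
    diff = (bearing_deg(user, poi) - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# POI due north of the user, phone pointing north: inside the beam
print(in_beam((0, 0), 0.0, (0, 10)))   # → True
# Same POI with the phone pointing east: outside a 60° beam
print(in_beam((0, 0), 90.0, (0, 10)))  # → False
```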
3.3. Wearables
So far, bone conduction headphones and smart glasses with a built-in camera have mainly been used for BVI mobility support. Furthermore, as the size and cost of sensors and microprocessors have further decreased, and given the advantages of wearable devices, the development of designs specifically aimed at these users has slowly gained momentum.
Some of the main points in favor of wearable designs include the sensors’ wider field-of-view, the usage of immersive user interfaces, or users’ request for discreet, hands-free solutions. In Figure 7, some strategic placements of these sensors and interfaces are shown, including a few examples of market-available products.
Figure 7.
Wearables for the BVI: common placements.
Firstly, regarding the sensors’ field of view, some devices rely on the user to scan their surroundings, whereas others resort to intermediary systems that monitor the scene. The first strategy therefore looked for placements that eased “scanning movements,” placing sensors on the wrist (Figure 7B), on the head (Figure 7A), or embedded in the cane (Figure 7C). Specifically, systems corresponding to Figure 7B,C tended to imitate the features of the first ETAs. This is exemplified by Ultracane, SmartCane (Figure 7C), or Sunu-band [79] (Figure 7B), all of which offer obstacle detection functionalities supported by ultrasound proximity sensors via a vibrational user interface. On the other hand, the third category of wearables (Figure 7A) was usually seen in camera-based sensory substitution or artificial vision systems, e.g., Seeing AI, Orcam MyEye [80], BrainPort, or even vOICe.
Conversely, the second strategy generally opted for a wider field of view; thus, sensors were often positioned in relatively static and non-occlusive placements over the torso (red dots in Figure 7). That was the case with Toyota’s Project Blaid [81], a camera-based, inverted-U-shaped wearable that rested on the user’s shoulders. Among its functionalities, it pursued object and face recognition, with an emphasis placed on elements related to mobility such as stairs, signals, etc.
Regarding user interfaces, speech and Braille made up the first solutions for acoustic and tactile verbal interfaces, coupled with headphones and braille displays. As an example, Figure 7B shows the “Dot” braille smartwatch.
Other kinds of solutions strived for a reduced cognitive load by means of intuitive guidance cues, usually exploiting the innate spatial perception capabilities of touch and hearing. Many examples have been mentioned in this text, from Virtual Acoustic Space or UCSB PGS to Haptic Radar. Non-occlusive headphones and vibratory interfaces are among the most commonly used devices, as they benefit from low cost and a reduced-weight design while still being able to generate immersive perceptions, such as virtual sound sources or the approach to virtual tactile objects, as seen first in Haptic Radar and later in Virtual Haptic Radar.
This latter approach is also found in the Spatial Awareness project, based on Intel RealSense. The developed prototype conveys distance measurements through the vibration of eight haptic actuators distributed over the user’s torso and legs.
4. Challenges in User-Centered System Design
As will be discussed, a major flaw in the design of navigation systems for BVI users seems to lie in a set of reiterated deficiencies concerning the knowledge of the users’ needs, capabilities, limitations, etc., despite the great amount of work that has accumulated over the last few decades. Thus, this section will attempt to gather key user-centered design features prior to a further discussion of system design in Section 5.
One of the first problems faced in the development of assistive technology is the heterogeneity of the targeted public [82]. The assistance required depends on the users’ residual vision, among other circumstances worth noting, such as physical or sensory disabilities deriving from the ageing process (81% of the BVI are aged above 49 years [1]). In particular, this section will focus on blindness as the most severe case of disability, so as to provide the reader with enough data to infer the needs of specific users.
Several user requirements concerning navigation systems for the blind have often been addressed. Firstly, regarding the disposal of environmental information, some typical features to offer are [5]:
1. “The presence, location, and preferably the nature of obstacles immediately ahead of the traveller.” This relates to obstacle avoidance support.
2. Data on the “path or surface on which the traveller is walking, such as texture, gradient, upcoming steps,” etc.
3. “The position and nature of objects to the sides of the travel path,” i.e., hedges, fences, doorways, etc.
4. Information that helps users to “maintain a straight course, notably the presence of some type of aiming point in the distance,” e.g., distant traffic sounds.
5. “Landmark location and identification,” including those previously seen, particularly in (3).
6. Information that “allows the traveller to build up a mental map, image, or schema for the chosen route to be followed.” This point involves the study of what is frequently termed “cognitive mapping” in blind individuals [83].
Whilst the first ETAs were oriented to the first category of information, solutions that placed virtual sound sources over POIs easily covered points (4) and (5), and solutions based on artificial vision could provide data in any category.
One key factor to be aware of in this context is the theory behind the development of sensory substitution devices, which has been mentioned throughout the text when describing the “cognitive load” or “intuitiveness” of some user interfaces. At this point, the work in [84] is highlighted as it introduces the basics.
In the first place, some major constraints to be considered are the difference in data throughput capacity between sensory modalities (bandwidth), and the compatibility with higher-nature cognitive processes [84]. Two respective examples of these constraints would be the overloading of touch seen in numerous attempts to convey visual perceptions [85], and the inability to decipher visual representations of sounds, even though vision has comparatively more ‘bandwidth’ than hearing.
Some other main factors would be the roles of synesthesia and neuroplasticity, or even how intelligent algorithms can be used to filter the information needed in particular scenarios [84].
Once it was proven that distant elements can be recognized through perceptions induced by visual sensory substitution devices (Section 2.2), thus straying into the field of “distal attribution” (e.g., [84,85]), an ambitious pursuit of general-purpose visual‒tactile and visual‒auditory devices began. Several recent studies in neuroscience showed the high potential of this field [86,87], as areas of the brain thought to be associated with visual-type tasks, e.g., those involved in shape recognition, showed activity under visually-encoded auditory stimulation.
Nevertheless, given the limitations of the remaining senses to collect visual-type information, it is usually necessary to focus on what users require to carry out specific tasks [88,89].
Lastly, the poor acceptance of past designs by their intended public should be taken into account; a recent discussion on this topic can be found in [88]. In line with this, an aspect that was recently taken advantage of is the growing penetration of technology in the daily routines of BVI people, with an emphasis placed on the usage of smartphones.
Figure 8 shows the steady growth of mobile phone and computer use, including how many BVI people use these devices to access the Internet, a tendency likely to continue among younger generations. This trend is also reflected in the creation of entities such as Amovil, which promotes the accessibility of these devices for BVI people, or the smartphone-compatible infrastructure of London’s WayFindr [90] (similar to [91,92]), Bucharest’s Smart Public Transport [93], or Barcelona’s NaviLens [93], which are oriented to boosting the autonomy of BVI individuals when using public transportation. In line with this, Carnegie Mellon University’s NavCog, based on a BLE network, recently added Pittsburgh International Airport to the list of supported locations [94].
Figure 8.
Percentages of Spanish BVI users of mobile phones (blue) and computers (orange); percentage of those who access the Internet (gray), and references to the overall population (green). Data obtained from INE and [51] (2013).
5. Availability of Technical Solutions
Finally, this last section will delve into some general aspects of potential architectures. Functional requirements and their feasibility will be discussed according to past experiences, the available technology, and user-related needs and constraints.
The discussion on this topic will be structured around three main functionalities of navigation systems for the blind, namely positioning, environmental monitoring, and the user interface (Figure 9). The system coordinates the abovementioned modules with complementary data, such as POIs (e.g., OpenStreetMap), maps, public transportation schedules, etc., which are available via the web.
Figure 9.
Architecture proposal for navigation assistance devices (examples included).
5.1. Positioning Systems
Focusing on assistance along a route, a navigation system needs positioning data, but the specifications may differ according to the solution pursued. For example, applications like Lazzus efficiently indicate the location and nature of POIs with accuracies of about 1 m. On the other hand, projects that simulate virtual sound sources, such as Virtual Acoustic Space, usually need centimeter-accuracy positioning, in addition to split-second response times to match the HRTF output sounds with head movements. These are typical constraints of current mixed reality applications.
Additionally, the design of navigation systems varies depending on whether it is oriented to indoor or outdoor environments (see Section 4). This particularly affects positioning techniques, which can be further classified into portable equipment, e.g., related to dead reckoning navigation solutions, or external infrastructure that ranges from BLE beacons to GNSS. The technologies to be applied would then be chosen according to the requirements of the targeted tasks, costs, etc.
Some of the most attractive solutions are those that take advantage of already deployed infrastructure, which is reflected in the absolute prevalence of GNSS for outdoor location. It can also be combined with mobile networks, or with portable alternatives such as INS and/or the previously discussed vision-based positioning. On the other hand, most of the indoor positioning techniques encountered, including those currently on the market, require deploying a beacon infrastructure, which easily pushes up costs while actual usage remains comparatively low.
At this point, portable devices for vision positioning show high promise for low-cost positioning, both in outdoor and indoor environments (Section 2.2 and Section 3.1). Additionally, vision-based solutions could provide data on the users’ surroundings (Section 2.2, Section 2.3, Section 3 and Section 5.2), and also play an important role in the design of sensory substitution devices (Section 2.2, Section 2.3, Section 3.1, Section 3.2, Section 4 and Section 5.3).
Whilst most GNSS and/or mobile networks can delimit the user’s location to within a few meters even in indoor scenarios (e.g., 5G [95]), vision positioning further improves this to centimeter precision. Furthermore, the same obstacles that degrade GNSS signals, e.g., buildings or bridges, could serve as fine reference points for solutions based on image processing, compensating for the accumulated error characteristic of dead reckoning techniques. Some current drones, like DJI’s Phantom 4, stabilize their movements through precise location feedback based on this kind of strategy.
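The complementary roles of dead reckoning and such vision fixes can be shown with a toy example; the 2% drift rate, the step length, and the snap-to-landmark reset are all invented for illustration:

```python
# Toy illustration of the fusion idea: dead reckoning (e.g., IMU-based step
# counting) accumulates error with every step, while an occasional
# vision-based fix on a recognized landmark resets the estimate.
def dead_reckon(pos, step_vec, drift=0.02):
    """Advance the 2D estimate by one step, inflated by a drift factor."""
    return (pos[0] + step_vec[0] * (1 + drift),
            pos[1] + step_vec[1] * (1 + drift))

est = (0.0, 0.0)
for _ in range(100):            # 100 steps of 0.7 m heading east
    est = dead_reckon(est, (0.7, 0.0))
print(est[0])                   # ≈ 71.4 m instead of the true 70 m

est = (70.0, 0.0)               # vision fix: a recognized landmark snaps
print(est[0])                   # the estimate back to ground truth
```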
5.2. Environmental Monitoring
As seen in Section 4, navigation systems for BVI users need to gather specific environmental data to provide efficient and safe guidance.
In this context, a first distinction to make is whether any object, feature, etc., in range is fixed to a specific location, hereinafter referred to as static (e.g., stairways) or dynamic (e.g., pedestrians) elements.
Static elements can be handled relatively easily through records of their distribution and relevant features in shared databases. This is exemplified by Wayfindr, where the user’s proximity to BLE beacons triggers guidance cues and notifications of nearby elements. Dynamic elements, on the other hand, must be managed with sensors such as cameras, sonar, LiDAR, etc., be they remote installations or equipment carried by the user.
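Proximity triggering with BLE beacons typically rests on the log-distance path-loss model. The sketch below assumes a reference power measured at 1 m and an environment-dependent exponent n; both values vary in practice and must be calibrated per deployment:

```python
# Hedged sketch of BLE-beacon proximity estimation via the log-distance
# path-loss model. tx_power_dbm is the assumed RSSI at 1 m from the beacon;
# n is the path-loss exponent (~2 in free space, higher indoors).
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Estimate the distance to a beacon, in meters, from a received RSSI."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def near_beacon(rssi_dbm, trigger_m=2.0):
    """True if the estimated distance falls within the trigger radius."""
    return rssi_to_distance(rssi_dbm) <= trigger_m

print(rssi_to_distance(-59.0))  # → 1.0 (at the reference power, d = 1 m)
print(near_beacon(-75.0))       # 16 dB below reference → ~6.3 m → False
```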
As for which technology should be used to capture those dynamic elements, it depends on the specific application. Classic examples are the sonar-based obstacle detection devices Ultracane and Miniguide, or the vision- or infrared-based systems described in Section 3.
Nevertheless, these mobility aids usually face strict reliability and robustness constraints, as failures could put users in hazardous situations. With this in mind, three alternatives are discussed next.
Firstly, in contrast to autonomous devices, these aids can draw on the user’s judgement. Starting from the premise that the sensors’ raw measurements do contain what is needed, e.g., to detect and avoid an obstacle, the issue lies in whether the user can effectively and efficiently analyze that flow of information. This delves into the domain of sensory substitution and augmentation, with Virtual Haptic Radar as an example of the potential of extended touch [96] in a context of mixed reality.
Secondly, not all orientation and mobility tasks require such extreme reliability. Common useful features include signal detection and recognition, the detection of nearby pedestrians, etc., most of which are currently implemented with artificial vision technology. These solutions range from earlier systems such as Tyflos to the recent, market-available Seeing AI and Orcam MyEye, or current prototypes such as [97]. Moreover, the potential of vision-based systems would be even higher in urban areas, as these are built with great care placed on which elements are visible.
Third and finally, the reliability and robustness of mobility-related tasks can be inherited from external resources, e.g., by leaning on urban monitoring infrastructures, as seen in Siemens’ InMobs project.
5.3. User Interface
Once the relevant data for navigation are gathered, they are then passed on to the user. However, this is one of the critical aspects in the design of products for BVI people, and it usually acts as a bottleneck for the information available in numerous navigation systems.
Speech interfaces are applicable to several tasks, e.g., providing brief descriptions of the user’s surroundings, OCR, etc., as seen in Seeing AI. However, their use involves several constraints and problems. Firstly, speech output can mask or distract from the sounds of the environment; simple, short messages are typically preferred, thus limiting the data provided. Secondly, the data gathered must be analyzed and filtered according to the user’s requirements at each time and place, a challenge similar to those faced by autonomous vehicles or drones. Thirdly, spatial cues are often suboptimal, even in the case of simple left/right indications [34]. Most of these problems extend to other linguistic interfaces (e.g., braille displays).
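The filtering problem raised in the second point can be sketched as a priority queue over pending utterances; the message categories and their ranking below are invented for illustration:

```python
import heapq

# Illustrative sketch (categories and names invented) of speech-message
# filtering: queued utterances are ranked so that safety-critical warnings
# preempt lower-priority POI descriptions.
PRIORITY = {"obstacle": 0, "turn": 1, "poi": 2}   # lower value = speak first

class SpeechQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0             # tie-breaker keeps FIFO order per level

    def push(self, kind, text):
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, text))
        self._seq += 1

    def next_utterance(self):
        """Pop the most urgent pending message, or None if the queue is empty."""
        return heapq.heappop(self._heap)[2] if self._heap else None

q = SpeechQueue()
q.push("poi", "Cafe 20 meters ahead")
q.push("obstacle", "Obstacle ahead, stop")
print(q.next_utterance())  # → "Obstacle ahead, stop"
```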
As for non-linguistic interfaces, the first limitation would be the extremely low data throughput of hearing and touch in comparison with vision, followed by the need to match the data output with “higher-nature cognitive processes” [84] (Section 4). Therefore, according to Giudice, Loomis, Klatzky et al., developers should focus on helping users to perform specific and actually needed tasks, minimizing the conveyed information, while taking advantage of the “perceptual and cognitive factors associated with non-visual information processing” [84,88].
These last factors can be exemplified by the natural cross-modal associations observed in the vOICe project, such as volume-to-brightness and pitch-to-spatial-height (see “weak synesthesia” in [98]). This was also evident in Disney-supported research on color‒vibration correspondences [99], which stemmed from the pursuit of more immersive experiences. Other illustrative cases include individuals exploiting the spatially rich information of sound to extreme levels, e.g., the echolocation techniques demonstrated by Daniel Kish, reminiscent of the first ETAs described in Section 2.
Another remarkable aspect to point out is the effect on distal attribution of the correspondence between body movement and perception [100]. For example, in the visual‒tactile experiments of Bach-y-Rita et al., it was observed that users needed to manipulate the camera themselves to notice the “contingencies between motor activity and the resulting changes in tactile stimulation” [84].
The use of these proprioception correspondences might be a fundamental element in the design of future orientation and mobility aids, given the good performance of past projects.
Several of the mentioned projects incorporate mixed-reality-type user interfaces, such as the virtual sound sources seen in UCSB PGS and Virtual Acoustic Space, or the virtual tactile objects of Virtual Haptic Radar. Another system worth highlighting is Lazzus, which tracks the smartphone’s position and orientation to trigger verbal descriptions according to the direction in which it is being pointed. As seen with Talking Signs, these approaches have users’ support [101].
Nevertheless, some of these solutions are also affected by technical limitations. While bone-conduction earphones and head motion tracking techniques suffice for most sound-based applications, portable haptic interfaces are heavily constrained. Even though haptic displays such as those commercialized by Blitab could promote tactile-map approaches, portable alternatives are limited to vibrational interfaces. These devices by no means exploit the full capabilities of touch, thus hampering further exploration in fields such as the application of extended touch [96] in a context of mixed reality. However, recent advances might revive a versatile classic solution known as electrotactile stimulation.
This technology, which benefits from low cost, low power consumption, and lightweight design, encompasses a wide range of virtual perceptions. Nevertheless, its theoretical foundation in terms of neural stimulation remains insufficient, and several designs have revealed problems related to poor electrical contact through the skin. This can be partially compensated for by choosing placements with more suitable electrical conditions, such as the tongue (BrainPort), or by using a hydrogel for better control of the electrical current flow (e.g., Forehead Retina System).
Nowadays, BrainPort itself is a market-available device that demonstrates the feasibility of this haptic technology for some applications. In addition, over the years, subsequent prototypes have strived for various improvements, such as combining electrotactile technology with mechanical stimuli [102,103], stabilizing the transcutaneous electrode‒neuron electrical contact through closed-loop designs [104], or micro-needle interfaces [105,106], etc. Furthermore, the theoretical basis of neural stimulation continues to advance through research in related fields, e.g., in the development of myoelectric prostheses that provide a sense of touch via the electrical stimulation of afferent nerves.
6. Conclusions
Numerous devices have been developed to guide and assist BVI individuals along indoor and outdoor routes. However, none has fully met both the technical requirements and user needs.
Most of these unmet needs are currently being addressed separately in several research fields, ranging from indoor positioning, computation offloading, and distributed sensing to the analysis of the spatial perceptual and cognitive processes of BVI people. Meanwhile, smartphones and similar tools are rapidly making their way into the daily routines of this population. In this context, both old and novel solutions have become feasible, some of which are already available on the market as smartphone applications or portable devices.
In line with this, the present article has attempted to provide a holistic, multidisciplinary view of research on navigation systems for this population. The feasibility of classic and new designs was then briefly discussed in terms of a newly proposed architecture scheme.
Author Contributions
Conceptualization, S.R. and A.A.; Methodology, S.R.; Formal Analysis, S.R.; Writing—Original Draft Preparation, S.R.; Writing—Review & Editing, A.A.; Supervision, A.A.
Funding
This study received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Bourne, R.R.A.; Flaxman, S.R.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.; Leasher, J.; Limburg, H.; et al. Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: A systematic review and meta-analysis. Lancet Glob. Health 2017, 5, e888–e897.
- Tversky, B. Cognitive Maps, Cognitive Collages, and Spatial Mental Models; Springer: Berlin/Heidelberg, Germany, 1993; pp. 14–24.
- Tapu, R.; Mocanu, B.; Zaharia, T. Wearable assistive devices for visually impaired: A state of the art survey. Pattern Recognit. Lett. 2018.
- Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions. Sensors 2017, 17, 565.
- Working Group on Mobility Aids for the Visually Impaired and Blind; Committee on Vision. Electronic Travel Aids: New Directions for Research; National Academies Press: Washington, DC, USA, 1986; ISBN 978-0-309-07791-0.
- Benjamin, J.M. The laser cane. Bull. Prosthet. Res. 1974, 443–450.
- Russel, L. Travel Path Sounder. In Proceedings of the Rotterdam Mobility Research Conference; American Foundation for the Blind: New York, NY, USA, 1965.
- Armstrong, J.D. Summary Report of the Research Programme on Electronic Mobility Aids; University of Nottingham: Nottingham, UK, 1973.
- Pressey, N. Mowat sensor. Focus 1977, 11, 35–39.
- Heyes, A.D. The Sonic Pathfinder—A new travel aid for the blind. In High Technology Aids for the Disabled; Elsevier: Edinburgh, UK, 1983; pp. 165–171.
- Maude, D.R.; Mark, M.U.; Smith, R.W. AFB's Computerized Travel Aid: Two Years of Research. J. Vis. Impair. Blind. 1983, 77, 71, 74–75.
- Collins, C.C. On Mobility Aids for the Blind. In Electronic Spatial Sensing for the Blind; Springer: Dordrecht, The Netherlands, 1985; pp. 35–64.
- Collins, C.C. Tactile Television-Mechanical and Electrical Image Projection. IEEE Trans. Man-Mach. Syst. 1970, 11, 65–71.
- Rantala, J. Spatial Touch in Presenting Information with Mobile Devices; University of Tampere: Tampere, Finland, 2014.
- BrainPort, Wicab. Available online: https://www.wicab.com/brainport-vision-pro (accessed on 29 July 2019).
- Grant, P.; Spencer, L.; Arnoldussen, A.; Hogle, R.; Nau, A.; Szlyk, J.; Nussdorf, J.; Fletcher, D.C.; Gordon, K.; Seiple, W. The Functional Performance of the BrainPort V100 Device in Persons Who Are Profoundly Blind. J. Vis. Impair. Blind. 2016, 110, 77–89.
- Kajimoto, H.; Kanno, Y.; Tachi, S. Forehead electro-tactile display for vision substitution. In Proceedings of the EuroHaptics, Paris, France, 3–6 July 2006.
- Kajimoto, H.; Suzuki, M.; Kanno, Y. HamsaTouch: Tactile Vision Substitution with Smartphone and Electro-Tactile Display. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems: Extended Abstracts, Toronto, ON, Canada, 26 April–1 May 2014; pp. 1273–1278.
- Cassinelli, A.; Reynolds, C.; Ishikawa, M. Augmenting spatial awareness with haptic radar. In Proceedings of the 10th IEEE International Symposium on Wearable Computers (ISWC 2006), Montreux, Switzerland, 11–14 October 2006; pp. 61–64.
- Kay, L. An ultrasonic sensing probe as a mobility aid for the blind. Ultrasonics 1964, 2, 53–59.
- Kay, L. A sonar aid to enhance spatial perception of the blind: Engineering design and evaluation. Radio Electron. Eng. 1974, 44, 605.
- Sainarayanan, G.; Nagarajan, R.; Yaacob, S. Fuzzy image processing scheme for autonomous navigation of human blind. Appl. Soft Comput. J. 2007, 7, 257–264.
- Ifukube, T.; Sasaki, T.; Peng, C. A blind mobility aid modeled after echolocation of bats. IEEE Trans. Biomed. Eng. 1991, 38, 461–465.
- Meijer, P.B.L. An Experimental System for Auditory Image Representations. IEEE Trans. Biomed. Eng. 1992, 39, 112–121.
- Haigh, A.; Brown, D.J.; Meijer, P.; Proulx, M.J. How well do you see what you hear? The acuity of visual-to-auditory sensory substitution. Front. Psychol. 2013, 4.
- Ward, J.; Meijer, P. Visual experiences in the blind induced by an auditory sensory substitution device. Conscious. Cognit. 2010, 19, 492–500.
- Gonzalez-Mora, J.L.; Rodriguez-Hernandez, A.F.; Burunat, E.; Martin, F.; Castellano, M.A. Seeing the world by hearing: Virtual Acoustic Space (VAS) a new space perception system for blind people. In Proceedings of the 2006 2nd International Conference on Information & Communication Technologies, Damascus, Syria, 24–28 April 2006; Volume 1, pp. 837–842.
- Hersh, M.A.; Johnson, M.A. Assistive Technology for Visually Impaired and Blind People; Springer: London, UK, 2008; ISBN 9781846288661.
- Ultracane. Available online: https://www.ultracane.com/ (accessed on 29 July 2019).
- Tachi, S.; Komoriya, K. Guide dog robot. In Autonomous Mobile Robots: Control, Planning, and Architecture; Mechanical Engineering Laboratory: Ibaraki, Japan, 1985; pp. 360–367.
- Borenstein, J. The guidecane—A computerized travel aid for the active guidance of blind pedestrians. In Proceedings of the 1997 International Conference on Robotics and Automation (ICRA 1997), Albuquerque, NM, USA, 20–25 April 1997; IEEE: Piscataway, NJ, USA; Volume 2, pp. 1283–1288.
- Shoval, S.; Borenstein, J.; Koren, Y. Mobile robot obstacle avoidance in a computerized travel aid for the blind. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994; pp. 2023–2028.
- Loomis, J.M. Digital Map and Navigation System for the Visually Impaired; Department of Psychology, University of California, Santa Barbara: Santa Barbara, CA, USA; Unpublished work, 1985.
- Loomis, J.M.; Golledge, R.G.; Klatzky, R.L.; Marston, J.R. Assisting wayfinding in visually impaired travelers. In Applied Spatial Cognition: From Research to Cognitive Technology; Lawrence Erlbaum Associates, Inc.: Mahwah, NJ, USA, 2007; pp. 179–203.
- Crandall, W.; Bentzen, B.L.; Myers, L.; Brabyn, J. New orientation and accessibility option for persons with visual impairment: Transportation applications for remote infrared audible signage. Clin. Exp. Optom. 2001, 84, 120–131.
- Loomis, J.M.; Klatzky, R.L.; Golledge, R.G. Auditory Distance Perception in Real, Virtual, and Mixed Environments. In Mixed Reality; Springer: Berlin/Heidelberg, Germany, 1999; pp. 201–214.
- PERNASVIP—Final Report. 2011. Available online: pernasvip.di.uoa.gr/DELIVERABLES/D14.doc (accessed on 1 August 2019).
- Ran, L.; Helal, S.; Moore, S. Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service. In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications, Orlando, FL, USA, 14–17 March 2004.
- Harada, T.; Kaneko, Y.; Hirahara, Y.; Yanashima, K.; Magatani, K. Development of the navigation system for visually impaired. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; pp. 4900–4903.
- Cheok, A.D.; Li, Y. Ubiquitous interaction with positioning and navigation using a novel light sensor-based information transmission system. Pers. Ubiquitous Comput. 2008, 12, 445–458.
- Bouet, M.; Dos Santos, A.L. RFID tags: Positioning principles and localization techniques. In Proceedings of the 1st IFIP Wireless Days, Dubai, UAE, 24–27 November 2008; pp. 1–5.
- Kulyukin, V.A.; Nicholson, J. RFID in Robot-Assisted Indoor Navigation for the Visually Impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; Volume 2, pp. 1979–1984.
- Kulyukin, V.; Gharpure, C.; Nicholson, J. RoboCart: Toward robot-assisted navigation of grocery stores by the visually impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 2845–2850.
- Ganz, A.; Schafer, J.; Gandhi, S.; Puleo, E.; Wilson, C.; Robertson, M. PERCEPT Indoor Navigation System for the Blind and Visually Impaired: Architecture and Experimentation. Int. J. Telemed. Appl. 2012.
- Lanigan, P.; Paulos, A.; Williams, A.; Rossi, D.; Narasimhan, P. Trinetra: Assistive Technologies for Grocery Shopping for the Blind. In Proceedings of the 2006 10th IEEE International Symposium on Wearable Computers, Montreux, Switzerland, 11–14 October 2006; pp. 147–148.
- Hub, A.; Diepstraten, J.; Ertl, T. Design and development of an indoor navigation and object identification system for the blind. In Proceedings of the 6th International ACM SIGACCESS Conference on Computers and Accessibility, Atlanta, GA, USA, 18–20 October 2004; pp. 147–152.
- Hub, A.; Hartter, T.; Ertl, T. Interactive tracking of movable objects for the blind on the basis of environment models and perception-oriented object recognition methods. In Proceedings of the Eighth International ACM SIGACCESS Conference on Computers and Accessibility, Portland, OR, USA, 23–25 October 2006; pp. 111–118.
- Fernandes, H.; Costa, P.; Filipe, V.; Hadjileontiadis, L.; Barroso, J. Stereo vision in blind navigation assistance. In Proceedings of the World Automation Congress, Kobe, Japan, 19–23 September 2010; pp. 1–6.
- Fernandes, H.; Costa, P.; Paredes, H.; Filipe, V.; Barroso, J. Integrating Computer Vision Object Recognition with Location Based Services for the Blind; Springer: Cham, Switzerland, 2014; pp. 493–500.
- Martinez-Sala, A.S.; Losilla, F.; Sánchez-Aarnoutse, J.C.; García-Haro, J. Design, implementation and evaluation of an indoor navigation system for visually impaired people. Sensors 2015, 15, 32168–32187.
- Riehle, T.H.; Lichter, P.; Giudice, N.A. An indoor navigation system to support the visually impaired. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4435–4438.
- Legge, G.E.; Beckmann, P.J.; Tjan, B.S.; Havey, G.; Kramer, K.; Rolkosky, D.; Gage, R.; Chen, M.; Puchakayala, S.; Rangarajan, A. Indoor Navigation by People with Visual Impairment Using a Digital Sign System. PLoS ONE 2013, 8, 14–15.
- Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K. NavCog: A navigational cognitive assistant for the blind. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '16), Florence, Italy, 6–9 September 2016; pp. 90–99.
- Murata, M.; Ahmetovic, D.; Sato, D.; Takagi, H.; Kitani, K.M.; Asakawa, C. Smartphone-Based Indoor Localization for Blind Navigation across Building Complexes. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018; pp. 1–10.
- Giudice, N.A.; Whalen, W.E.; Riehle, T.H.; Anderson, S.M.; Doore, S.A. Evaluation of an Accessible, Free Indoor Navigation System by Users Who Are Blind in the Mall of America. J. Vis. Impair. Blind. 2019, 113, 140–155.
- Dakopoulos, D. Tyflos: A Wearable Navigation Prototype for Blind & Visually Impaired; Design, Modelling and Experimental Results; Wright State University and OhioLINK: Dayton, OH, USA, 2009.
- Meers, S.; Ward, K. A vision system for providing 3D perception of the environment via transcutaneous electro-neural stimulation. In Proceedings of the Eighth International Conference on Information Visualisation, London, UK, 14–16 July 2004; pp. 546–552.
- Meers, S.; Ward, K. A Substitute Vision System for Providing 3D Perception and GPS Navigation via Electro-Tactile Stimulation. In Proceedings of the International Conference on Sensing Technology, Palmerston North, New Zealand, 21–23 November 2005; pp. 551–556.
- Zöllner, M.; Huber, S.; Jetter, H.-C.; Reiterer, H. NAVI—A Proof-of-Concept of a Mobile Navigational Aid for Visually Impaired Based on the Microsoft Kinect; Human-Computer Interaction—INTERACT; Springer: Berlin/Heidelberg, Germany, 2011; pp. 584–587.
- Zhang, H.; Ye, C. An Indoor Wayfinding System Based on Geometric Features Aided Graph SLAM for the Visually Impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1592–1604.
- Li, B.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Trans. Mob. Comput. 2019, 18, 702–714.
- Jafri, R.; Campos, R.L.; Ali, S.A.; Arabnia, H.R. Visual and Infrared Sensor Data-Based Obstacle Detection for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine. IEEE Access 2017, 6, 443–454.
- Neto, L.B.; Grijalva, F.; Maike, V.R.M.L.; Martini, L.C.; Florencio, D.; Baranauskas, M.C.C.; Rocha, A.; Goldenstein, S. A Kinect-Based Wearable Face Recognition System to Aid Visually Impaired Users. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 52–64.
- Hicks, S.L.; Wilson, I.; Muhammed, L.; Worsfold, J.; Downes, S.M.; Kennard, C. A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals. PLoS ONE 2013, 8, e67695.
- VA-ST Smart Specs—MIT Technology Review. Available online: https://www.technologyreview.com/s/538491/augmented-reality-glasses-could-help-legally-blind-navigate/ (accessed on 29 July 2019).
- Cassinelli, A.; Sampaio, E.; Joffily, S.B.; Lima, H.R.S.; Gusmo, B.P.G.R. Do blind people move more confidently with the Tactile Radar? Technol. Disabil. 2014, 26, 161–170.
- Zerroug, A.; Cassinelli, A.; Ishikawa, M. Virtual Haptic Radar. In Proceedings of the ACM SIGGRAPH ASIA 2009 Sketches, Yokohama, Japan, 16–19 December 2009.
- Fundación Vodafone España. Acceso y uso de las TIC por las personas con discapacidad [Access to and Use of ICT by People with Disabilities]; Fundación Vodafone España: Madrid, Spain, 2013; Available online: http://www.fundacionvodafone.es/publicacion/acceso-y-uso-de-las-tic-por-las-personas-con-discapacidad (accessed on 1 August 2019).
- Apostolopoulos, I.; Fallah, N.; Folmer, E.; Bekris, K.E. Integrated online localization and navigation for people with visual impairments using smart phones. In Proceedings of the International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 1322–1329.
- BrainVisionRehab. Available online: https://www.brainvisionrehab.com/ (accessed on 29 July 2019).
- EyeMusic. Available online: https://play.google.com/store/apps/details?id=com.quickode.eyemusic&hl=en (accessed on 29 July 2019).
- The vOICe. Available online: https://www.seeingwithsound.com/ (accessed on 29 July 2019).
- Microsoft Seeing AI. Available online: https://www.microsoft.com/en-us/ai/seeing-ai (accessed on 29 July 2019).
- TapTapSee—Smartphone application. Available online: http://taptapseeapp.com/ (accessed on 29 July 2019).
- Moovit. Available online: https://company.moovit.com/ (accessed on 29 July 2019).
- BlindSquare. Available online: http://www.blindsquare.com/about/ (accessed on 29 July 2019).
- Lazzus. Available online: http://www.lazzus.com/en/ (accessed on 29 July 2019).
- Seeing Eye GPS. Available online: https://www.senderogroup.com/ (accessed on 29 July 2019).
- Sunu Band. Available online: https://www.sunu.com/en/index.html (accessed on 29 July 2019).
- Orcam MyEye. Available online: https://www.orcam.com/en/myeye2/ (accessed on 29 July 2019).
- Project Blaid. Available online: https://www.toyota.co.uk/world-of-toyota/stories-news-events/toyota-project-blaid (accessed on 29 July 2019).
- Schinazi, V. Representing Space: The Development, Content and Accuracy of Mental Representations by the Blind and Visually Impaired. Ph.D. Thesis, University College London, London, UK, 2008.
- Ungar, S. Cognitive Mapping without Visual Experience. In Cognitive Mapping: Past Present and Future; Routledge: London, UK, 2000; pp. 221–248.
- Loomis, J.M.; Klatzky, R.L.; Giudice, N.A. Sensory substitution of vision: Importance of perceptual and cognitive processing. In Assistive Technology for Blindness and Low Vision; CRC Press: Boca Raton, FL, USA, 2012; pp. 162–191.
- Spence, C. The skin as a medium for sensory substitution. Multisens. Res. 2014, 27, 293–312.
- Maidenbaum, S.; Abboud, S.; Amedi, A. Sensory substitution: Closing the gap between basic research and widespread practical visual rehabilitation. Neurosci. Biobehav. Rev. 2014, 41, 3–15.
- Proulx, M.J.; Brown, D.J.; Pasqualotto, A.; Meijer, P. Multisensory perceptual learning and sensory substitution. Neurosci. Biobehav. Rev. 2014, 41, 16–25.
- Giudice, N.A. Navigating without Vision: Principles of Blind Spatial Cognition. In Handbook of Behavioral and Cognitive Geography; Edward Elgar Publishing: Cheltenham, UK; Northampton, MA, USA, 2018; pp. 260–288.
- Giudice, N.A.; Legge, G.E. Blind Navigation and the Role of Technology. In Engineering Handbook of Smart Technology for Aging, Disability, and Independence; John Wiley & Sons: Hoboken, NJ, USA, 2008; pp. 479–500.
- Wayfindr. Available online: https://www.wayfindr.net/ (accessed on 29 July 2019).
- Kobayashi, S.; Koshizuka, N.; Sakamura, K.; Bessho, M.; Kim, J.-E. Navigating Visually Impaired Travelers in a Large Train Station Using Smartphone and Bluetooth Low Energy. In Proceedings of the 31st Annual ACM Symposium on Applied Computing, Pisa, Italy, 4–8 April 2016; pp. 604–611.
- Cheraghi, S.A.; Namboodiri, V.; Walker, L. GuideBeacon: Beacon-based indoor wayfinding for the blind, visually impaired, and disoriented. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications Workshops, Kona, HI, USA, 13–17 March 2017; pp. 121–130.
- NaviLens—Smartphone Application. Available online: https://www.navilens.com/ (accessed on 29 July 2019).
- NavCog. Available online: http://www.cs.cmu.edu/~NavCog/navcog.html (accessed on 29 July 2019).
- NGMN Alliance. NGMN 5G White Paper; NGMN Alliance: Frankfurt, Germany, 2015; pp. 1–125.
- Giudice, N.A.; Klatzky, R.L.; Bennett, C.R.; Loomis, J.M. Perception of 3-D location based on vision, touch, and extended touch. Exp. Brain Res. 2013, 224, 141–153.
- Lin, B.S.; Lee, C.C.; Chiang, P.Y. Simple smartphone-based guiding system for visually impaired people. Sensors 2017, 17, 1371.
- Martino, G.; Marks, L.E. Synesthesia: Strong and Weak. Curr. Dir. Psychol. Sci. 2001, 10, 61–65.
- Delazio, A.; Israr, A.; Klatzky, R.L. Cross-Modal Correspondence between vibrations and colors. In Proceedings of the 2017 IEEE World Haptics Conference (WHC), Munich, Germany, 5–9 June 2017; pp. 219–224.
- Briscoe, R. Bodily action and distal attribution in sensory substitution. In Sensory Substitution and Augmentation; Macpherson, F., Ed.; Proceedings of the British Academy; Oxford University Press: London, UK, 2015; pp. 1–13.
- Golledge, R.G.; Marston, J.R.; Loomis, J.M.; Klatzky, R.L. Stated preferences for components of a personal guidance system for nonvisual navigation. J. Vis. Impair. Blind. 2004, 98, 135–147.
- D'Alonzo, M.; Dosen, S.; Cipriani, C.; Farina, D. HyVE-hybrid vibro-electrotactile stimulation-is an efficient approach to multi-channel sensory feedback. IEEE Trans. Haptics 2014, 7, 181–190.
- Yoshimoto, S.; Kuroda, Y.; Imura, M.; Oshiro, O. Material roughness modulation via electrotactile augmentation. IEEE Trans. Haptics 2015, 8, 199–208.
- Kajimoto, H. Electrotactile display with real-time impedance feedback using pulse width modulation. IEEE Trans. Haptics 2012, 5, 184–188.
- Kitamura, N.; Miki, N. Micro-needle-based electro tactile display to present various tactile sensation. In Proceedings of the 28th IEEE International Conference on Micro Electro Mechanical Systems (MEMS), Estoril, Portugal, 18–22 January 2015; pp. 649–650.
- Tezuka, M.; Kitamura, N.; Tanaka, K.; Miki, N. Presentation of Various Tactile Sensations Using Micro-Needle Electrotactile Display. PLoS ONE 2016, 11, e0148410.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).