Article

Interactive Indoor Audio-Map as a Digital Equivalent of the Tactile Map

by
Dariusz Gotlib
*,
Krzysztof Lipka
and
Hubert Świech
Faculty of Geodesy and Cartography, Warsaw University of Technology, pl. Politechniki 1, 00-661 Warszawa, Poland
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(16), 8975; https://doi.org/10.3390/app15168975
Submission received: 11 July 2025 / Revised: 5 August 2025 / Accepted: 8 August 2025 / Published: 14 August 2025
(This article belongs to the Section Earth Sciences)

Abstract

There are still relatively few applications that serve the function of a traditional tactile map, allowing visually impaired individuals to explore a digital map by sliding their fingers across it. Moreover, existing technological solutions either lack a spatial learning mode or provide only limited functionality, focusing primarily on navigating to a selected destination. To address these gaps, the authors have proposed an original concept for an indoor mobile application that enables map exploration by sliding a finger across the smartphone screen, using audio spatial descriptions as the primary medium for conveying information. The spatial descriptions are hierarchical and contextual: each is anchored in space and has a defined extent of influence. GIS technology forms the basis for data management and analysis. The application is designed to support spatial orientation during user interaction with the digital map. The research emphasis was on creating an effective cartographic communication message, utilizing voice-based delivery of spatial information stored in a virtual building model (within a database) and in tags placed in real-world buildings. Techniques such as Text-to-Speech, TalkBack, and QR codes were employed to achieve this. Preliminary tests conducted with both blind and sighted people demonstrated the usefulness of the proposed concept. The proposed solution, although aimed at supporting people with disabilities, can also be useful and attractive to all users of navigation applications and may influence the development of such applications.

1. Introduction

One of the traditional solutions that enable blind and visually impaired people to explore space is the tactile map, i.e., a map that is read by touch or, to a limited extent, by sight. One of the key requirements of tactile maps is to enable blind people to learn them by heart by sliding their fingers over them [1]. An analysis of the scientific literature and the mobile application market leads to the conclusion that, to date, very few electronic applications have been developed that perform functions similar to those of the traditional tactile map. Therefore, the authors undertook the task of conducting research and then proposing a method of operation for a mobile application performing functions equivalent to a traditional tactile map but using sound as the information carrier. The focus was on the development of applications and maps to support indoor mobility. The purpose of this paper is to present the results of the research and development work carried out in this area, together with an example implementation in the form of a mobile application called Audio-Map.
The idea of creating this type of application originated in the Department of Cartography at the Warsaw University of Technology (WUT). The first prototype was realized as part of one of the theses [2]. The implementation work from the demonstrator to the first version of the application was carried out within the framework of the project “Warsaw University of Technology as an Ambassador of Innovation for Accessibility” subsidized by the EU (Operational Program Knowledge, Education, Development).
This article will pay attention not only to the proposed conceptual and technological solution, but also to the importance of non-graphic means of expression in cartography—in this case, sound and textual descriptions of space.
According to the authors, as technology advances, these solutions will become increasingly significant. Providing tools that support spatial orientation and navigation for blind individuals is a particularly challenging task that requires an unconventional approach. Traditional tactile maps inherently have significant limitations in terms of the amount of available information. On the other hand, navigation applications must be highly reliable and based on the highest-quality positioning systems to be useful for blind users. While outdoor navigation benefits from GNSS signal availability, indoor navigation presents a greater challenge due to the continued lack of high-precision indoor positioning systems. When available, these systems typically cover only very limited areas.
However, positioning technologies using Wi-Fi and BLE beacons are becoming more widespread. Despite this, their accuracy remains insufficient for safe turn-by-turn navigation, particularly in large indoor spaces, where positioning errors can reach several meters and signals may be affected by human movement within the building.
For this reason, the authors argue that, in addition to improving standard outdoor navigation solutions, alternative approaches should be explored to overcome these technological limitations. One such approach involves creating textual descriptions of spaces (either manually or automatically) and effectively integrating them into localization and navigation applications. This method does not require precise localization but instead relies on a more human-like approach to building situational awareness—through map analysis and mental mapping. Research has shown that the use of conventional turn-by-turn navigation reduces situational awareness. Therefore, efforts should be made to develop solutions that support this cognitive process.
Sighted users do not experience a lack of situational awareness as severely as blind individuals, for obvious reasons. Consequently, the authors propose prioritizing support for visually impaired users and leveraging their experiences to inform the development of new navigation paradigms. Their insights and needs could, in turn, influence the design of Location-Based Services (LBS) applications for other user groups. This approach may be particularly relevant for emergency services (e.g., police, firefighters), where traditional visual-based navigation methods may not be feasible inside buildings. Similarly, it could benefit passengers in train stations, shoppers in crowded malls, and tourists navigating museums. Unlike driving a car, where a smartphone can be mounted for occasional reference, pedestrians lack a convenient way to frequently check a visual map. Instead, receiving auditory and sound-based information would be a more effective method of navigation, not merely guiding users to execute simple commands but also helping them build an understanding of the spatial layout of a building and the path to their destination.
Therefore, research on the development of digital tactile maps appears to be of particular importance. However, there remains a research gap in this area. The novelty of the present study lies in its proposal of a new approach to designing applications not only for blind users but also for a broader audience of LBS application users. At this stage, a demonstrative application has been developed to validate the feasibility of this approach and preliminarily confirm its usability. It is important to emphasize that further extensive research and subsequent implementation efforts are planned.
So far, the creators of tactile maps and the developers of navigation maps have largely formed two distinct groups of researchers and manufacturers. This article highlights the need for collaboration between these two domains and the potential benefits of combining their expertise. A unified approach could lead to the development of new products that integrate the advantages of both conceptual and technological solutions.

2. Materials and Methods

The first phase of the research analyzed the scientific literature and the mobile application market with regard to the theory and practice of developing tactile maps and electronic maps adapted to the needs of people with visual impairments. In particular, solutions for buildings and their interiors were analyzed. The analysis excluded classic navigation applications that do not perform functions similar to tactile maps and focus only on navigating the user. Discussing the issue of effective navigation goes beyond the objectives of this article. On the basis of an analysis of the advantages and, above all, the disadvantages of existing solutions, a concept was proposed for the development of a new product performing functions similar to those of traditional tactile maps (at the logical level). Subsequently, a series of experiments was carried out in an IT environment, the purpose of which was first to check the correctness and feasibility of the proposed concept, and then to prepare a demonstrator used to obtain the first feedback from potential users. Then the first version of the application, called “Audio-Map WUT”, was developed. Finally, Audio-Map WUT was compared to other commercially available applications and to traditional tactile maps. Figure 1 shows a general flow chart of the research, development, and design work.

2.1. Literature Analysis

One can find many scientific articles on the subject of tactile maps for visually impaired people. In what follows, the analysis will focus on solutions that are more or less related to digital technology. The literature survey focused on three key issues: challenges in developing tactile maps, technologies that enhance spatial awareness for blind and visually impaired individuals, and the exploration of digital alternatives to tactile maps.

2.1.1. Selected Challenges in Creating Tactile Maps

Rowell and Ungar [3] interviewed blind and visually impaired people about their requirements, experiences and preferences for using tactile maps. Among other things, the interviews revealed that tactile maps are the best way for visually impaired people to present spatial information but are rarely used due to limited accessibility and insufficient information. Respondents prefer large-scale tactile maps, but more for general spatial orientation than for detailed navigation. In the paper [4], the significant impact of tactile maps on improving user experiences, including those of blind and visually impaired people, is highlighted. The author pointed out that tactile maps can offer a more engaging and multidimensional experience than maps that rely solely on visuals. As an example, the author cited a three-dimensional map of Paris that won a British Cartographic Society award. This map is made of laser-cut wood, combining traditional materials with modern technology, which not only provides a visually appealing design but also a tactile experience that engages users on multiple sensory levels. The author also noted the importance of research on tactile esthetics in cartography and how different materials can affect perception and interaction with maps. The paper [5] reviewed the literature on the automatic generation of traditional tactile maps, including algorithms and models for generalizing such maps. It was shown that there is currently a lack of comprehensive solutions for the automatic generation of tactile maps. Article [6] is devoted to the methodology of evaluating tactile maps in terms of their informational value, while article [7] summarized the knowledge on the correct design of tactile maps. The latter collected detailed guidelines for the design of tactile symbols and highlighted the problems associated with the standardization of tactile map production (e.g., different methods of map production, different meanings of symbols in different parts of the world).
According to the researchers [8] it can be easier and cheaper to produce high-quality tactile maps using UV printing techniques than the techniques used previously. Preliminary experiments have confirmed that UV printing meets all the requirements for producing tactile maps. An experimental map of a historic park made using this technique was positively evaluated by a test group of 15 visually impaired people.

2.1.2. Technologies That Improve Spatial Awareness for Blind and Visually Impaired People

A very important issue in the implementation of tactile maps is to take into account the very limited spatial awareness of blind and visually impaired people. These people have problems locating obstacles and moving to new places. To address these problems, researchers in [9] designed smart glasses for the blind and visually impaired. These glasses use IoT technology to facilitate movement and improve users’ independence by offering real-time support: proximity alarms, object detection, distance measurement and location mapping. Studies have shown that the glasses can have a positive impact on improving the spatial awareness of blind and visually impaired people. Ten years earlier, researchers [10] developed a system that improves the spatial awareness of visually impaired people. The system works by replacing visual information with appropriate auditory and haptic information. People with visual impairments participated in a training phase to improve their ability to locate the sound source and orient themselves in space. In the training phase, the user can move the mouse cursor inside a virtual circle on the device’s screen while listening to 3D sound. The sound is generated depending on the angle between the center of the circle and the point the user points to. This exercise is designed to improve the ability to locate sounds in space. Feedback to the user included vibrations on a haptic belt placed on the head. Experiments showed that those who completed the training performed better at sound localization and spatial orientation than those who did not. The developed method may contribute to the development of assistive devices for people with visual impairments. Engel and Weber [11] created a prototype system for automatically generating audio-tactile maps of building interiors. Two applications were created as part of this research. The first is used to generate SVG files containing information about the building interior (e.g., room names and functions, POIs).
For this purpose, a model and method of tagging building interior elements called Simple Indoor Tagging created for the well-known OpenStreetMap (OSM) community project was used. Based on the generated SVG files, a tactile map was printed, enhanced with information about the selected building. In addition, a mobile application was developed, connected via Bluetooth to a digital pen, which can be used to read the tags on the printed tactile map. In this way, users of the mobile application can listen to information about the space in the building as they move a digital pen across the printed map. Preliminary research with a blind person has shown that the designed system can provide a low-cost solution to increase the independence of people with visual impairments.
In the study [12], the TouchIt3D technology was developed, which combines a 3D tactile map with a mobile device. Interactive maps were created that respond to user touch through vibrations and voice information. Studies have shown that such a solution can significantly improve the quality of life for people with visual impairments.
Tactile map technology was further examined in study [13], where the focus was on adapting it to the needs of blind and visually impaired users. The researchers developed tactile topographic maps of the Netherlands and Europe using swell paper, a heat-sensitive material that creates raised lines and textures, making the printed content tactile. The resulting maps were tested with blind people, who found that the maps facilitated spatial orientation by helping them to create mental representations of geographical areas. Participants also found the maps to be a valuable complement to mobile navigation systems, improving both spatial orientation and independent mobility.
In the study [14] a review of Greek and international literature on the design, use, and effectiveness of haptic maps was conducted. The study also reviewed technologies (e.g., multimedia tactile maps, optical character recognition, text-to-speech conversion, computer vision, ultrasonic sensors, voice instructions, screen readers, RFID, 3D printing) that support visually impaired individuals in daily activities. The authors highlighted the significant role of the mental map in the navigation process and the need to consider the specific characteristics of the users of these maps. They also noted the lack of universal methods for creating tactile maps.
The publication [15] showed that audio-tactile maps and audio-haptic maps can be more useful than verbal descriptions in the navigation process and in improving spatial orientation. The study verified the ability to find specific points of interest in an urban area, based on a map previously studied by a blind person. A study published a year earlier [16] showed that audio-tactile maps facilitate independent and efficient navigation for people with visual impairments, as well as the detection of specific points of interest (e.g., cafe, market). The study used the “Geomagic Touch”, i.e., a haptic device that allows virtual objects to be felt through feedback in three spatial directions (left-right, front-back, up-down). Participants in the experiment virtually navigated a map using the haptic device and received audio feedback on the location of landmarks, dangerous places, intersections and street names. Based on the constructed mental map, the participants tried to move around in the real world, identifying landmarks.
The authors [17] presented a mobile application that uses the SVG Tiny format to record additional information, which can then be delivered to the user in audio form. According to the study, blind people testing the app were very satisfied with its level of accessibility.
Researchers [18] integrated a mobile app with a tactile map (Figure 2). They encoded relevant information (metadata) in the form of barcodes and deployed it on the physical map. The user of the map-integrated mobile app decodes the metadata using the smartphone’s camera, and then detailed descriptions of the map are retrieved from a global database based on the metadata.

2.1.3. Geoinformation Applications Supporting the Blind and Visually Impaired Users

There is no shortage of navigation apps on mobile app distribution platforms (GooglePlay and AppStore) that take into account the needs of people with visual impairments. An example is Lazarillo, which helps blind and visually impaired people navigate unfamiliar terrain outdoors and in some buildings. With voice messages, Lazarillo informs people about nearby landmarks, such as intersections. Another example is the Seeing Assistant Move app, which helps visually impaired people navigate on foot. Among other things, it enables route planning and step-by-step navigation. In addition, the app can be controlled by voice commands. Another navigation app commonly used by visually impaired people is Google Maps, which from time to time introduces facilities for the blind and visually impaired, such as information on the accessibility of places of interest for people with disabilities. Also important are applications not directly related to navigation, but which make it easier for the blind and visually impaired to explore space. An example is the Envision AI virtual assistant, which describes objects within range of a smartphone’s camera. Another example of this type of application is NaviLens, which recognizes objects (e.g., doors, bus stops) based on scanned high-contrast QR codes. The Be My Eyes app, on the other hand, connects blind and visually impaired users with sighted volunteers via video calls; the volunteers help by recognizing objects, among other things. It is also worth mentioning the Soundscape app, which aims to improve spatial orientation for blind and visually impaired people. The app provides users with information about their surroundings using three-dimensional sound signals, creating an immersive acoustic environment that supports the recognition and localization of elements of the environment, such as streets, intersections, public transport stops, and distinctive landmarks.
Screen readers, such as TalkBack designed for Android and VoiceOver designed for iOS, are also very important for blind and visually impaired users. They enable sightless users to use the device by listening to descriptions of application interface elements.
In the context of modern tactile maps, it is also worth mentioning interesting projects that have not been described in the scientific literature, but about which information can be found on the Internet. The first example is the project described in [19], which aimed to develop a comprehensive technology for creating low-cost tactile maps of historic parks and gardens using 3D printing technology. In addition, a pilot implementation of this technology was carried out. Three-dimensional printing technology was also used in another project aimed at developing a tactile map for a public library in Lawton, Oklahoma [20]. Project participants identified the main cartographic challenges in designing tactile maps: generalization, map scale, and appropriate symbolism. The authors followed the Braille Authority of North America (BANA) guidelines and standards for tactile graphics. In turn, students [21] used AI techniques to develop a device with a camera that translates text extracted from a photo into Braille. Blind people can also use the device to learn languages. That same year, an article [22] described the vital importance of tactile maps in the Netherlands. The production of these maps often uses ArcGIS technology and special “swell” paper, which creates raised lines and shapes in response to heat. Direct Access also uses “swell” paper to create sensory maps that enable blind and visually impaired people to better understand space and navigate independently [23].

2.1.4. Comparison of Products That Improve Spatial Awareness for Blind and Visually Impaired People

An analysis of the literature has shown that similar but non-identical and ambiguous terms such as audio-haptic map, audio guide, audio-tactile map, sound map, sensory map are often used in the context of tactile maps. What they have in common is the use of sound as the main medium for conveying information. Table 1 summarizes these products and provides information on the method of information transmission and an example of functionality for comparison. All of the terms listed refer to maps (models of space) that support spatial orientation and navigation.
All the products described above share a common main goal, which is to provide the recipient with information about space. Sensory maps seem to be the most versatile, making it easier for blind and visually impaired people to orient themselves in space by engaging multiple senses simultaneously (e.g., touch, hearing, sight). Maps of this type can also prove useful for people with conditions such as epilepsy or PTSD (post-traumatic stress disorder). This topic is covered in a paper [24], which proposes the use of sensory maps to ensure that people with disabilities can safely visit exhibitions in museums. In the case of audio-haptic, audio-tactile and audio-visual maps, the intended recipient is usually a blind or visually impaired person; however, audio maps can also be useful for able-bodied people. Audio guides, on the other hand, are intended by design for a wide audience. In addition to sound, audio-haptic maps use mechanical stimuli, such as vibration, to provide users of electronic devices (such as smartphones) with feedback about the space. Audio-tactile maps, on the other hand, additionally use other forms of haptic perception, such as thermal (feeling temperature) and textural (feeling the surface and texture of objects).

2.2. Description of the Audio-Map WUT Application Concept

2.2.1. The Concept of Geo-Descriptions

Although there are already mobile applications that, to a limited extent, perform the functions of a tactile map, no research or technological solutions have been found that use the new concept of geo-descriptions, that is, multi-faceted (locational, warning, cognitive, navigational) and multi-level (building, floor, local) descriptions of space, linked to a spatial feature (point, line, area) and delivered in text or audio form [25,26].
The purpose of location geo-descriptions is to describe the space around the user so that they can understand where they are. Examples of locational geo-descriptions are given in Table 2.
Geo-descriptions are hierarchically linked. The description of a location is assigned to a building zone, and the zone is assigned to the entire building. The user can only read the detailed description of the location, but after confirmation, they can also read the parent description of the zone or the entire building.
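This hierarchical linkage can be illustrated with a minimal sketch, shown here in Java for illustration; all class, field, and method names are hypothetical and do not reflect the actual Audio-Map WUT code. A detailed location description keeps a reference to its parent zone description, which in turn references the building-level description, so the application can offer progressively more general texts on user confirmation.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the hierarchical geo-description model described
// above (location -> zone -> building).
public class GeoDescriptionDemo {

    static class GeoDescription {
        final String level;          // "building", "zone" or "location"
        final String text;           // the spoken/displayed description
        final GeoDescription parent; // null for the building-level root

        GeoDescription(String level, String text, GeoDescription parent) {
            this.level = level;
            this.text = text;
            this.parent = parent;
        }

        // Walk up the hierarchy: the detailed description first, then,
        // after user confirmation, the parent zone and building texts.
        List<String> chainToRoot() {
            List<String> chain = new ArrayList<>();
            for (GeoDescription d = this; d != null; d = d.parent) {
                chain.add(d.text);
            }
            return chain;
        }
    }

    public static void main(String[] args) {
        GeoDescription building = new GeoDescription("building",
                "Main Building of Warsaw University of Technology.", null);
        GeoDescription zone = new GeoDescription("zone",
                "West wing, ground floor.", building);
        GeoDescription location = new GeoDescription("location",
                "You are in front of room 129.", zone);

        for (String text : location.chainToRoot()) {
            System.out.println(text);
        }
    }
}
```

In such a model, the application reads only `chainToRoot().get(0)` by default and continues down the list only when the user asks for the broader context.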
Warning geo-descriptions are mainly intended for blind people who cannot use their sense of sight to notice obstacles that impede smooth movement, or dangers such as stairs, thresholds, steps, and low ceilings. On the basis of this type of geo-description, the application should generate a warning message at the appropriate moment, when the user approaches a certain place, e.g., “Watch out! There are stairs around you. The stairs in this part of the building do not have handrails.” Cognitive geo-descriptions are descriptive information assigned to important rooms, special places, and objects within a building that may be helpful to users (e.g., dean’s office, ATM, male/female restroom, coffee shop). A geo-description of this type should help the user learn about the space before navigating, as well as help identify where the app user is located. An example is: “You are standing in front of the Office of the Dean of the Faculty of Geodesy and Cartography. The Faculty was founded on 30 June 1921 and celebrated its centennial in 2021. For the first year of study of the 1921/22 academic year, 38 candidates were admitted. At present, more than 1000 students study at the Faculty.”
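The proximity-based triggering of warning geo-descriptions could be sketched as follows. This is a hypothetical Java fragment: the coordinate system, the trigger radius, and all names are assumptions, since the current application version does not yet include the indoor positioning needed to drive such triggers.

```java
// Hypothetical sketch: emit a warning geo-description when the user's
// position comes within a fixed radius of a hazard's anchor point.
// Coordinates are assumed to be planar floor coordinates in metres.
public class WarningTrigger {

    static final double TRIGGER_RADIUS_M = 3.0; // assumed trigger distance

    // Returns the warning text if the user is within the trigger radius
    // of the hazard anchor point, or null otherwise.
    static String checkWarning(double userX, double userY,
                               double hazardX, double hazardY,
                               String warningText) {
        double dx = userX - hazardX;
        double dy = userY - hazardY;
        boolean inRange = Math.hypot(dx, dy) <= TRIGGER_RADIUS_M;
        return inRange ? warningText : null;
    }

    public static void main(String[] args) {
        String w = checkWarning(10.0, 5.0, 11.5, 5.5,
                "Watch out! There are stairs around you.");
        System.out.println(w != null ? w : "No warning in range.");
    }
}
```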
The task of navigational geo-descriptions is to describe the transition between successive waypoints in a building in an appropriate, understandable and effective way. Descriptions of this type guide the user inside the building while avoiding words that directly indicate direction, e.g., left, right. The following description may serve as an example: “Climb the stairs to the half-floor, then walk along the railings to the end of the corridor”. This type of geo-description has not been used in the current version of the Audio-Map WUT application and is only planned for use in the next version, when the indoor positioning system is enabled.
Geo-descriptions are texts that describe space in the context of use in Location-Based Services. They should be structured in a way that supports situational awareness and spatial learning. Reading or listening to them resembles a conversation with someone giving directions or assisting with navigation. Their hierarchical structure allows for retrieving either general or more detailed information, depending on the user’s needs.
Therefore, the authors of this article decided to develop a mobile application based on this very idea. This type of application can be considered the digital equivalent of a traditional tactile map, an audio-tactile map and, to some extent, an audio guide.
The main function of the Audio-Map WUT mobile application is to provide users with voice-read information about the building (especially its interior) and to play audio descriptions of the space while the user traces their finger over the map displayed on the smartphone screen.
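The finger-tracing interaction can be sketched as follows: a touch point (converted from screen to map coordinates) is tested against room polygons, and the matched room’s description is handed to a speech engine. This is a simplified, hypothetical Java illustration with one hard-coded room; on Android, the speech output would be produced with the platform’s TextToSpeech API rather than printed to the console.

```java
// Hypothetical sketch of the finger-tracing interaction: hit-test a touch
// point against room polygons and return the text to be spoken.
public class FingerTraceDemo {

    // Ray-casting point-in-polygon test (polygon given as vertex arrays).
    static boolean contains(double[] xs, double[] ys, double px, double py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            if ((ys[i] > py) != (ys[j] > py)
                    && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    static String describe(double px, double py) {
        // One hard-coded rectangular "room" for illustration; the real
        // application reads room geometry from its spatial database.
        double[] xs = {0, 4, 4, 0};
        double[] ys = {0, 0, 3, 3};
        if (contains(xs, ys, px, py)) {
            return "You are pointing at room 101, the lecture hall.";
        }
        return "Corridor.";
    }

    public static void main(String[] args) {
        // Simulated touch position in map coordinates; a real app would
        // first transform screen coordinates to map coordinates.
        System.out.println(describe(2.0, 1.5)); // inside the room polygon
    }
}
```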

2.2.2. Meeting the WCAG Requirements

The Audio-Map WUT application has been designed to meet the requirements of the Web Content Accessibility Guidelines (WCAG) standard version 2.1, which is a standard for building and designing websites adapted for people with disabilities but can also be applied to mobile applications. The first of the four principles underpinning this standard is that content should be perceivable: users should be able to perceive it with the senses available to them. According to the second principle, content should be operable: users should be able to navigate the site and use all functions regardless of how they are controlled (e.g., using the keyboard alone). The third WCAG principle is to provide users with understandable content and a comprehensible way in which the application works. According to the fourth principle, content should be robust, meaning that it should work properly in different browsers, on different devices, and with assistive technologies, in order to ensure accessibility for as many users as possible. The Audio-Map WUT application has been implemented taking into account all of the aforementioned principles, including by providing clear and precise geo-descriptions and other audio cues, and by using high color contrast in the user interface with consideration for the blind and visually impaired. The assumption was to achieve AA level compliance. This level is the middle of three degrees of compliance (A, AA, AAA) and is widely recognized as the minimum standard in the design of accessible digital applications. It encompasses both basic and more advanced accessibility requirements aimed at providing full access for people with various disabilities, including those who are blind or visually impaired. As part of the evaluation process, experts conducted a detailed analysis of the compliance of the Audio-Map application’s functionality with individual WCAG 2.1 criteria. The analysis resulted in a report covering 50 accessibility criteria at levels A and AA, each assigned to one of three evaluation categories: achieved, not applicable, or not achieved.
The vast majority of the key criteria were rated as achieved—only four of the analyzed criteria did not meet the requirements. In the next version of the application, appropriate modifications are planned to enable full compliance with level AA.

3. Results

3.1. Software Architecture

The Audio-Map WUT architecture consists of the following key components (Figure 3):
  • Mobile Application—Audio-Map WUT mobile application implemented in Kotlin programming language;
  • ArcGIS Runtime SDK for Android (version 100.15.0)—A software library dedicated to Android for map display and integration with various ArcGIS services;
  • External database and services—A backend system that was developed as part of the “Building Accessibility Maps” project implemented at Warsaw University of Technology from 2022 to 2023.
The project resulted in the development of a concept, an information system structure, a database (2D and 3D models of dozens of WUT buildings) and several navigation and location applications, including the Audio-Map WUT. This component includes REST services providing data about buildings (e.g., geometry and room names) and REST services providing geo-descriptions. All services within this component have been created using ArcGIS technology.
The source of all data in the Audio-Map WUT mobile application is a spatial database created in PostgreSQL and stored on the CENAGIS server (Center for Geospatial and Satellite Analysis) which is accessed via ArcGIS REST Services. Integrating the mobile application with the ArcGIS SDK was a natural choice when the backend system was developed using ArcGIS technology. This enabled the optimal presentation of different layers on the building map. The modular and scalable architecture allows further development of the system.
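As an illustration of how the mobile client might address such a service, the sketch below assembles a query URL for a hypothetical geo-description layer. The service root, layer number, and field names are assumptions, though the `where`, `outFields`, and `f` query parameters follow the general convention of ArcGIS REST feature service `query` endpoints.

```java
// Hypothetical sketch of building an ArcGIS REST feature-service query URL
// that requests the geo-descriptions of a single building.
public class GeoDescriptionQuery {

    static String buildQueryUrl(String serviceRoot, int layerId, String buildingId) {
        // The where clause is URL-encoded; field name "building_id" is an
        // assumption, not the actual Audio-Map WUT schema.
        return serviceRoot + "/" + layerId + "/query"
                + "?where=" + java.net.URLEncoder.encode(
                        "building_id='" + buildingId + "'",
                        java.nio.charset.StandardCharsets.UTF_8)
                + "&outFields=*"
                + "&f=json";
    }

    public static void main(String[] args) {
        String url = buildQueryUrl(
                "https://example.org/arcgis/rest/services/AudioMap/FeatureServer",
                2, "GG");
        System.out.println(url);
    }
}
```

The mobile application would issue an HTTP GET against such a URL (in practice via the ArcGIS Runtime SDK rather than by hand) and parse the returned JSON feature set.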

3.2. Implementation Technology

The building models (both 2D and 3D) have been developed in accordance with the general principles of creating GIS-type databases and form the so-called WUT Building Accessibility Maps. Some of them are also saved in the IFC format typical of BIM (Building Information Modeling) technology; however, the IFC models serve other purposes. A special process for managing building database updates has been implemented at WUT. All local building administrators have access to the online version of the application and should report changes on an ongoing basis (daily). A GIS coordinator has also been hired in the central building administration team to oversee this process and update geometric data and geo-descriptions.
A classic relational spatial database has been implemented, based on PostgreSQL (version 17) with the PostGIS (version 3.5.0) extension. The database schema was designed in-house, drawing partial inspiration from the CityGML (version 3.0) model, but tailored to the specific requirements of the application. While the structure itself does not critically affect system performance, several optimization techniques have been employed to ensure efficient processing of spatial data, including geometrically complex objects. To enhance application performance, data is served in three thematic layers: building geometry, points of interest (POI), and geo-descriptions. This modular separation enables selective data retrieval and reduces the load on both the server and client devices. Additionally, spatial indexing has been applied—GiST (Generalized Search Tree) indexes are used on geometry columns, significantly accelerating spatial queries. Data transmitted to the mobile application is pre-filtered spatially (e.g., restricted to the currently selected building), thereby minimizing data transfer and reducing client-side computational load.
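As an illustration of this pre-filtering, the sketch below assembles a spatially restricted query of the kind described above. The table name, column names, and SRID are hypothetical, not the actual Audio-Map WUT schema; the `&&` operator performs the bounding-box test accelerated by the GiST index on the geometry column.

```kotlin
// Hypothetical sketch: building a server-side spatial pre-filter query.
// Table and column names (rooms, geom, building_id) and the SRID (2178)
// are illustrative, not the actual Audio-Map WUT schema.
data class BBox(val minX: Double, val minY: Double, val maxX: Double, val maxY: Double)

fun roomQueryFor(buildingId: Int, floor: Int, bbox: BBox): String = """
    SELECT room_id, category, ST_AsGeoJSON(geom) AS geometry
    FROM rooms
    WHERE building_id = $buildingId
      AND floor = $floor
      -- && is the bounding-box overlap test, served by the GiST index on geom
      AND geom && ST_MakeEnvelope(${bbox.minX}, ${bbox.minY}, ${bbox.maxX}, ${bbox.maxY}, 2178)
""".trimIndent()
```

Restricting the query to the currently selected building and floor in this way keeps the payload sent to the mobile client small.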
The Audio-Map WUT mobile application at this stage has been implemented in the Android Studio development environment, using the Kotlin programming language. Thus, it can only run in an Android environment. ArcGIS Runtime SDK (Software Development Kit) for Android (100.15.0) technology was used to display the base map and manage layers on the map.
All services for sharing spatial data have been published through a portal included in ArcGIS Enterprise. The portal allows the creation and sharing of maps, applications, and other GIS resources on any infrastructure. In the case of Audio-Map WUT, the CENAGIS (Center for Geospatial and Satellite Analysis, Warsaw/Józefosław, Poland) cyber-infrastructure was used.
One of the key features in the context of Audio-Map WUT, as the electronic equivalent of a tactile map, is the conversion of space descriptions assigned to specific locations or areas into sound. For this purpose, the Text-to-Speech (TTS) function provided by the Android system was used. The TTS language is set dynamically depending on the language currently selected at the operating system level. However, at the moment, the Audio-Map WUT application supports only two languages, Polish and English. Therefore, if another language is set at the operating system level, the TTS engine language is automatically set to English. In order to enable audio description of the user interface (a function necessary for the blind), the TalkBack (version 16.0) function available in the Android operating system was used. To enable this function, all elements of the user interface were marked with special text labels.
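The language-fallback rule described above can be sketched as a small pure function; the function name is illustrative, and in the real application the result would be used to configure the Android TextToSpeech engine:

```kotlin
// Sketch of the fallback rule: Polish stays Polish, any other system
// locale falls back to English. Function name is illustrative.
fun ttsLanguageFor(systemLanguage: String): String =
    when (systemLanguage.lowercase()) {
        "pl" -> "pl"   // Polish geo-descriptions and UI labels
        "en" -> "en"
        else -> "en"   // unsupported locales fall back to English
    }
```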
The implementation of the QR code reading module uses the GmsBarcodeScanner (version 18.3.1) interface, which is part of the ML Kit—a set of tools providing functions based on machine learning algorithms, optimized for mobile applications. The scanner uses advanced machine learning techniques to detect QR codes and barcodes, making it capable of quickly and accurately recognizing codes, even in dimly lit areas.
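Once a code has been scanned, its payload must be decoded into a location. A hedged sketch of what this decoding might look like is given below; the payload format `WUT;building=GG;floor=2;room=215` and all names are assumptions for illustration, as the actual encoding used in the WUT tags is not specified here.

```kotlin
// Hypothetical sketch: decoding a location tag scanned from a QR code or
// NFC sticker. The "WUT;key=value;..." payload format is assumed for
// illustration; the real encoding may differ.
data class TagLocation(val building: String, val floor: Int, val room: String)

fun parseLocationTag(payload: String): TagLocation? {
    val parts = payload.split(';')
    if (parts.firstOrNull() != "WUT") return null   // not one of our tags
    val fields = parts.drop(1).mapNotNull {
        val kv = it.split('=', limit = 2)
        if (kv.size == 2) kv[0] to kv[1] else null
    }.toMap()
    val building = fields["building"] ?: return null
    val floor = fields["floor"]?.toIntOrNull() ?: return null
    val room = fields["room"] ?: return null
    return TagLocation(building, floor, room)
}
```

Returning `null` for unrecognized payloads lets the application ignore foreign QR codes (e.g., plain URLs) gracefully.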

3.3. Description of the Functions and Operation of the Audio-Map WUT Application

Upon launching the app, a map of the Warsaw University of Technology is displayed. The application is integrated with the TalkBack (version 16.0) tool, a screen reader available on Android, which allows users to operate the application's GUI without relying on sight. The solution can be described as a kind of voice assistant. It is also possible to disable TalkBack, in which case an alternative voice assistant based on TTS (text-to-speech) technology is activated. Users who want to use the app only by sight can disable the voice assistant from the settings screen. The app operates in two modes: Map Browsing Mode and Learning Mode, which can be changed using the switch at the top of the screen (Figure 4a). Map Browsing Mode makes it easier to select areas of interest, such as a specific building or room. It is also designed for assistants who help the blind and visually impaired navigate. The method of use is similar to classic map applications (e.g., changing the scale, moving the view). After selecting a building, a 2D building model is displayed for the active floor, showing floors, rooms, doors, elevators, stairs and selected equipment or infrastructure elements. In addition, there is a layer showing important places in the building (PoI), such as conference rooms, secretariats, lecture halls and toilets. Upon clicking on a selected room, the user receives both an audio and text description of the chosen location (Figure 4a), making the application accessible to deaf individuals as well.
In this mode, building and room search functions are also available, as well as precise positioning of the user based on information encoded in QR codes or NFC tags placed next to each door handle (Figure 4b). In addition to identifying and describing a specific location, the user can listen to a message describing an entire floor, a section of a floor or even an entire building. This mechanism allows the user to build spatial awareness. Independently of the application, a brief message written in Braille is placed next to each door handle in the Main Building of the WUT (test facility). Below are some screenshots of the application showing the functionality described above.
All the functions described so far are available in Map Browsing Mode. However, the key functionality is available in Learning Mode, in which the user can explore the space in the building by smoothly moving a finger across the map (Figure 4c–f). As the user explores the map, voice information is read out regarding the numbers and categories of “passing” rooms, the names of points of interest (PoI), and the contents of geo-descriptions (area, zone, location and warning). In Learning Mode, it is not possible to change the scale with zoom-in and zoom-out gestures, as each touch of the screen is used to identify objects on the map under the user’s finger. In addition to identifying the layout of rooms, the user can activate six additional layers with geo-descriptions, which are the source of information read to the user while exploring the map.
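The core of Learning Mode, mapping each touch point to the object under the finger whose description is then spoken, can be sketched as follows. Rooms are simplified here to axis-aligned rectangles for illustration; the real application tests the point against full room geometries served by ArcGIS.

```kotlin
// Minimal sketch of the Learning Mode hit test: each touch point is
// resolved to the room under the finger, whose label is then passed
// to the TTS engine. Rooms are modeled as rectangles for illustration.
data class Room(val label: String, val minX: Double, val minY: Double,
                val maxX: Double, val maxY: Double)

fun roomUnderFinger(x: Double, y: Double, rooms: List<Room>): String? =
    rooms.firstOrNull { x in it.minX..it.maxX && y in it.minY..it.maxY }?.label
```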
The first (active by default) layer with geo-descriptions contains a description of the building zones, if any (e.g., north zone, southwest zone). Zone names refer to geographic directions. During the experiments, it was considered to refer to the building axis, but this could cause confusion for the user when moving from the building to the outside and when entering the building. This is a debatable issue and there are no ideal solutions in this regard. The authors plan to conduct separate research in this area.
When the geo-description layer is changed to “location”, the map displays icons representing location geo-descriptions, providing the user with information about specific locations previously marked on the building plan. This type of information makes it easier for users to orient themselves in the space and continue their independent journey to the designated destination [25,26]. When a finger is hovered over such a place, the content of the geo-description is read out, such as: “You are at the side entrance to the Main Building, which you can use to reach the Main Library. The corridor begins with a slightly sloping exit for people using wheelchairs.” Many location geo-descriptions are logically linked to detailed area descriptions. In this case, when a location geo-description is played, a question dialog box is displayed that allows the user to listen to the area (master) message, e.g., “You are in the Main Building of the university. The building has four floors and a hexagonal shape. In the center, there is a large auditorium with an open space extending from the ground floor to the fourth floor, surrounded by cloisters”. This is intended to provide the user with a broader spatial context of the location they are pointing to on the map. In addition to reading the details, it is possible to obtain information on which part of an entire floor or building the indicated place is located in. Area messages are more general in nature than local ones.
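The link between a location geo-description and its area (master) message can be modeled as a simple hierarchy; the class and field names below are illustrative, not the application's actual data model.

```kotlin
// Sketch of the hierarchical link: a location geo-description may point
// to a more general area (master) description, which the user can request
// as a follow-up via the question dialog. Names are illustrative.
data class GeoDescription(val text: String, val master: GeoDescription? = null)

// Messages in playback order: the local description first, then, if the
// user accepts the dialog, the more general master description.
fun playbackSequence(d: GeoDescription, userWantsContext: Boolean): List<String> =
    if (userWantsContext && d.master != null) listOf(d.text, d.master.text)
    else listOf(d.text)
```

This ordering reflects the detailed-to-general direction of exploration; a top-level description with no master simply plays alone.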
Another type of geo-description is referred to as cognitive. When the layer of cognitive geo-descriptions is activated, icons representing their locations are marked on the map. These are usually descriptions of so-called PoI (Point of Interest) objects, e.g., coffee shop, ATM, dean’s office, library. This type of information makes it easier for the user to learn about objects and rooms both in a basic context (e.g., category, opening hours of offices, name) and in an expanded context (e.g., additional description for visitors to a museum or other interesting place in the building) [25,26].
Zoning geo-descriptions, on the other hand, are intended to make it easier to understand the layout of individual floors. This is especially important in the case of non-standard floor numbering.
When geo-descriptions from the “warning” category are activated, the map displays icons placed in places considered important for safe movement. This type of information is particularly important for blind people who cannot notice obstacles and dangerous places (e.g., stairs, steps).
Activating one of the “locational”, “cognitive”, or “warning” layers allows a more detailed exploration, during which descriptions are read out for each room indicated with a finger, such as “74, administrative”. On the other hand, activating one of the “zone”, “floor”, or “building” layers provides more general information about a building by reading area geo-descriptions. They are read according to the user’s needs before entering the building, to obtain preliminary information about its spatial characteristics, or on request as a master message after listening to detailed information in the indicated areas. This enables “learning” a building’s space from the general to the detailed (or vice versa) and helps build situational awareness and a mental map of the building being visited.
In the application interface, under the map there is a panel containing buttons for navigating the map (up, down, left, right), centering the map on the selected object, centering the map on the full view of the building, and changing the active floor. Each press of this panel not only moves the map or changes the scale, but also simultaneously announces the action performed, e.g., “moved the map”, “zoomed out”. Section 3.4 describes the result of the preliminary verification of the correctness of the concept of the described system.

3.4. Preliminary Verification of the Correctness of the Concept

At this stage, only preliminary verification studies have been performed. Comprehensive testing of the application is planned, including both manual and automated testing. So far, the application has been evaluated by a small group of blind testers from an external company and a group of employees and students of the Warsaw University of Technology.
The first test was organized by the specialized company, selected through a public procurement process, with expertise in assessing the accessibility of products for visually impaired individuals. The study comprised two in-depth interviews, group testing conducted on the WUT campus, an individual test following the group sessions, and expert evaluations. The field tests included participants with the following demographic profiles: a woman, approximately 45 years old, with a mobility impairment; a woman, approximately 30 years old, who is nearly blind; a man, approximately 65 years old, who is nearly blind and has a mobility impairment; a man, approximately 45 years old, with a mobility impairment; a man, approximately 30 years old, who is completely blind; and a man, approximately 35 years old, who is completely blind. Additionally, the in-depth interviews involved three participants between the ages of 30 and 65 who were either blind or nearly blind and were proficient in using both computers and mobile phones.
The research report concluded that participants responded positively to the concept of presenting the WUT campus on an interactive map in the Audio-Map form. They were particularly interested in understanding the spatial arrangement of buildings, the layout of rooms, floor listings, and the means of navigating between different campus zones. In terms of the auditory feedback provided by the application, the content, length, and quality of the geo-descriptions were well received. The primary recommendations for improving the application focused on the logic and structure of the interface, the interaction between the TalkBack system and the proprietary Text-to-Speech-based system, and compliance with WCAG contrast standards. These included suggestions for controlling the application with gestures, the level of detail in the geo-descriptions read when sliding a finger across the map, changing the position of some user interface elements and expanding the voice assistant.
One of the key recommendations was to develop an iOS version of the application, as the expert involved in the study noted that the majority of completely blind users rely on iPhones, whereas Android is more commonly used by individuals with partial vision, who can often navigate using the standard visual interface without the need for full voice support. Another crucial recommendation was to ensure that all users—whether blind, visually impaired, or sighted—could utilize the same interface. Additionally, the study indicated that QR codes should serve only as a complementary solution to real-time positioning systems, such as BLE beacons, rather than as a primary navigation tool.
Detailed selected comments from individual users or collective opinions/suggestions from experts obtained during research prior to the implementation of the first version of the application are presented in Table 3.
The comments from the blind people have been particularly valuable. Many of the detailed suggestions from the test group have already been fully (e.g., 2, 7, 10, 11, 12, 15) or partly (e.g., 6, 8) incorporated into the current version of the application, while the remaining recommendations will be implemented in future updates. The analysis of findings also highlighted the importance of selecting a test group based on familiarity with the operating system, as prior experience and user habits significantly influence the perceived usability of the application.
The second test was a preliminary internal study with two objectives. The first was to evaluate whether geo-descriptions generated according to the proposed method are helpful in raising situational awareness. The second, additional objective was to determine whether it is possible to automate the generation of geo-descriptions using artificial intelligence and whether such descriptions are perceived similarly to those written by humans. The proposed method of generating them using, among other things, Large Language Models (LLMs) was presented in article [27]. The study group received a specially prepared research version of the application (GeoDescTest) for this purpose (Figure 5).
The results of these tests are presented in article [27]. The study involved reading geo-descriptions and then identifying the described locations on a map; therefore, sighted individuals were required at this stage. A total of 66 students (28 women and 38 men) participated in the study. The tests tentatively confirmed the validity of the adopted concept and revealed no fundamental errors.

4. Discussion

4.1. Discussion of the Results

In this section, the proposed application is compared with other technological solutions that facilitate orientation and navigation inside buildings for blind and visually impaired individuals. For the purposes of the analysis, three groups of products were identified and are characterized below:
  • Traditional tactile map;
  • Currently available electronic solutions using sound as a means of communication;
  • Proposed solutions.
The main advantage of tactile maps is the representation of geospatial objects by means of convex elements. The high popularity of tactile maps translates into their ease of use by people with visual disabilities. In addition, tactile maps do not require power or integration with additional devices, which translates into their reliability under various conditions (e.g., lack of access to the Internet) and lack of risk of cyber-attacks and technological failures. However, the amount of information placed on a tactile map is very limited compared to its digital counterpart. In addition, the information content of a tactile map is static and cannot be automatically updated. Physical updating is a time-consuming and expensive process. Other limitations of traditional tactile maps include: lack of use of sound as one of the methods of interacting with the map, lack of interactive information retrieval, difficult storage and portability due to size.
As mentioned in the introductory part of the paper, alternative electronic solutions that assist blind and visually impaired people with spatial orientation and navigation include: audio guides, haptic devices that allow users to interact with virtual objects through feedback (e.g., Geomagic Touch), and maps adapted for use by people with disabilities (audio, audio-haptic, audio-tactile and sensory). The integration of tactile maps (e.g., 3D printed or UV) with mobile devices enables the use of sound as an additional medium for conveying information. Audio information is also used in audio guides and sensory maps. For both traditional tactile maps and the alternatives described above, the problem is the limited mobility of these solutions (or their complete lack of usability on the move). One exception is a prototype mobile application [11] for generating audio descriptions of building interiors, based on a point indicated on the screen. Using a digital pen and a smartphone, the user can obtain detailed information about the building, including the location and structure of objects, allowing independent navigation. The main advantages of this approach are:
  • The ability to significantly increase the scope of geo-information presented to the user;
  • Automatic delivery and updating of data;
  • Enabling interaction with the map by providing content dependent on the point indicated on the map.
The variety of available technologies makes it easier for the blind and visually impaired to build a mental map of space. However, in all cases analyzed, the scope of geo-information provided is severely limited. The authors of this article have not found comprehensive solutions that take into account the various needs of users, such as searching for a place, navigation, learning about space.
The solution to these problems can be the system proposed by the authors of this article, which provides the user with various types of geo-descriptions (navigational, locational, cognitive, warning). The most important advantages of this approach, tested in an example implementation in the form of the Audio-Map WUT application, are:
  • High level of interactivity provided by smooth reading of the displayed information as the user slides a finger across the smartphone screen;
  • Facilitating the creation of a mental map of the building by providing descriptions at several levels of detail (generalization) and different spatial scopes (local and area geo-descriptions);
  • Enabling building navigation (navigation geo-descriptions);
  • The ability to provide and update geo-information online;
  • Much wider range of geo-information delivered than in the case of a traditional tactile map (basically unlimited);
  • High level of customization (choice of communication channel and interface operation methods: audio, text, use of TTS (text to speech) or Google Talkback).
On the other hand, the proposed solution requires users to have a smartphone with the application installed (only the Android version is available at the moment), the operation of which may pose a barrier.
Other disadvantages of Audio-Map WUT compared to traditional tactile maps are:
  • Limited ability to present the 3rd dimension in a convenient way (at the current stage of application development);
  • Potential failures (network problems, server failures).
When comparing the advantages and disadvantages of the three types of solutions, it should also be mentioned that traditional tactile maps in most cases require reading information in Braille, a skill that is not common among visually impaired people. It must also be taken into account that the touch area of tactile maps tends to be much larger than that of electronic products, especially smartphones, which in some cases may make electronic applications less comfortable to use.
Table 4 shows a comparison of the features of a traditional tactile map, currently available electronic products and the Audio-Map WUT.
The Audio-Map WUT application is distinguished by its adaptability to a specific use case (navigation, learning about space, searching for specific rooms in a building). Its distinguishing features in relation to the other analyzed products are, first of all, the wide range of information provided, strong support for building a mental map, and universality, i.e., the possibility of using the application both by people with and without visual impairments. The separation of two modes of using the application (map browsing and learning) is very important. This allows the application’s interface to be greatly simplified, making the application easier to use. The key functionality of the application, which is similar to that of a traditional tactile map, is available in Learning Mode (exploration of the space by moving a finger across the map). Providing all other options (building and room search engine, positioning via QR code, application settings) on a single screen would force a significant reduction in the size of the map, which is the most important element of the user interface. Another distinguishing element of Audio-Map WUT is its tight integration with Google’s TalkBack screen reader. All key elements of the user interface have special descriptions assigned to them, which are automatically read when TalkBack is enabled. In this way, the user always knows which button was pressed and what it did, for example, moving the map view, zooming in, changing the floor of a building, or changing the mode of use. A mechanism has also been added to announce changes of the active screen. However, when implementing the application’s own mechanism for interpreting gestures on the map (swiping with one or more fingers) in Learning Mode, conflicts were encountered with the gestures interpreted by TalkBack.
For this reason, it was decided to include additional buttons for moving the map (vertically and horizontally) and for changing the scale of the map; as a result, the types of gestures interpreted in the context of map exploration were kept to a minimum. Additional advantages of this approach include simplifying the use of the application (buttons are more intuitive than learning additional specific finger movements) and resolving ambiguity about the user’s intentions (e.g., moving a finger across the map can mean both an attempt to move the map and a desire to listen to the content of geo-descriptions associated with the indicated area). The authors also decided to take into account the situation in which a blind or visually impaired person does not have TalkBack enabled. In such a case, it is possible to use an alternative proprietary assistant based on TTS (text-to-speech) technology, which reads the content of the active geo-description and a general description of the visible part of the user interface (e.g., “Room numbers and places of interest are displayed”). Unlike TalkBack, the TTS-based assistant can also be useful for people with good vision as an alternative medium for conveying information. For example, when using the app while moving through a crowded room, listening to the contents of a geo-description may be more convenient than analyzing a graphic version of a map. It should be noted, however, that this solution required protection against the simultaneous use of both assistants (TalkBack and TTS). Otherwise, messages (geo-descriptions, screen descriptions and screen transitions) would be read independently by each assistant, resulting in duplicated information.
Although the current version of the application is Android-based, the usability evaluation included iOS users as well. No platform-specific usability issues were identified, although several participants expressed interest in an iOS version. While TalkBack and iOS’s VoiceOver differ in technical implementation, both provide comparable screen reader functionality. VoiceOver tends to offer more consistent behavior across devices and richer gesture support. TalkBack, on the other hand, may exhibit variability depending on the Android version and device manufacturer. These differences did not significantly affect the comfort of using the application in the tests conducted.
In summary, the main advantages of the proposed concept of digital equivalents of tactile maps can be considered the possibility of conveying a very wide set of information in a manner analogous to the use of classic tactile maps. Compared to conventional navigation applications, a key benefit is the conceptual consistency between tactile maps and digital tactile maps. Additionally, this approach fosters the development of situational awareness—an aspect often lacking in standard navigation systems. Another advantage of this solution is its flexibility in delivering content, which can be dynamically modified and improved. Moreover, it allows for the simultaneous presentation of information in multiple formats: visual, auditory, and tactile. A particularly noteworthy benefit is that applications designed according to this concept can be used effectively by blind, visually impaired, and sighted individuals alike. The main drawback, in comparison to traditional tactile maps, is the limited tactile surface available when using standard smartphones, as well as the need for a certain level of proficiency in operating electronic devices. Regarding the first version of Audio-Map, a significant limitation is that it currently functions only in the Android environment, which restricts its user base—particularly among blind individuals, who frequently rely on iOS devices. Additionally, several interface elements still require refinement based on tester recommendations.

4.2. A Forward-Looking Perspective

Further research and implementation work is currently underway to develop the proposed concept (as well as the product itself) on many levels. Among other things, additional navigation functionality is being developed, using a system of BLE beacon positioning and navigation geo-descriptions (allowing navigation commands to be given in the form of descriptions rather than just simple directional commands). A system based on BLE beacon technology has already been installed and is operational at the Warsaw University of Technology. Nominal positioning accuracy is about 5 m; in large auditoriums and open spaces, this accuracy may be degraded. The positioning method is based on multilateration using signal strength (RSSI). The system is already being used in other navigation applications and will be integrated into WUT’s Audio-Map application in the future. A prototype of such an application has already been developed.
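As a rough illustration of RSSI-based indoor positioning, the sketch below estimates beacon distances with the log-distance path-loss model and approximates the position as a distance-weighted centroid. This is a simplification: the WUT system uses multilateration, and the parameter values here (transmit power at 1 m, path-loss exponent) are assumptions, not the calibrated values of the deployment.

```kotlin
import kotlin.math.pow

// Illustrative sketch of RSSI-based positioning. Distances come from the
// log-distance path-loss model; the position is a weighted centroid of the
// beacons (weights ~ 1/distance). txPower and n are assumed values.
data class Beacon(val x: Double, val y: Double, val rssi: Int)

fun estimatedDistance(rssi: Int, txPower: Int = -59, n: Double = 2.0): Double =
    10.0.pow((txPower - rssi) / (10.0 * n))

fun weightedCentroid(beacons: List<Beacon>): Pair<Double, Double> {
    val weights = beacons.map { 1.0 / estimatedDistance(it.rssi) }
    val total = weights.sum()
    val x = beacons.zip(weights).sumOf { (b, w) -> b.x * w } / total
    val y = beacons.zip(weights).sumOf { (b, w) -> b.y * w } / total
    return x to y
}
```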
Tactile perception includes the ability to recognize different types of touch, such as pressure and vibration. The current version of Audio-Map WUT makes partial use of one of the mechanisms of tactile perception—surface sensing. In the learning (map exploration) mode, the user can recognize rooms by sliding a finger across the map or tapping on it. An expansion of this mode is planned. Different types of geo-descriptions could be associated with specific gestures, for example:
  • Single click—reading a location geo-description;
  • Double click—reading the navigation geo-description;
  • Long click—reading a cognitive geo-description.
Users have different preferences for gestures, so a configurator is planned to be created to allow any association of gestures with geo-description categories.
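The planned configurator could be sketched as a rebindable mapping from gestures to geo-description categories; all names below are illustrative, and the defaults mirror the example list above.

```kotlin
// Sketch of the planned gesture configurator: gestures map to
// geo-description categories and the user can rebind them.
enum class Gesture { SINGLE_CLICK, DOUBLE_CLICK, LONG_CLICK }
enum class GeoDescCategory { LOCATION, NAVIGATION, COGNITIVE }

class GestureConfig {
    private val bindings = mutableMapOf(
        Gesture.SINGLE_CLICK to GeoDescCategory.LOCATION,
        Gesture.DOUBLE_CLICK to GeoDescCategory.NAVIGATION,
        Gesture.LONG_CLICK to GeoDescCategory.COGNITIVE
    )
    fun categoryFor(g: Gesture): GeoDescCategory = bindings.getValue(g)
    fun rebind(g: Gesture, c: GeoDescCategory) { bindings[g] = c }
}
```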
The authors of this article also plan to use the vibration function on a smartphone. It would be possible to associate different types of vibration with certain types of rooms, for example:
  • Gentle and long vibrations—enclosed rooms (e.g., laboratory, toilet, office);
  • Pulsating vibrations (a series of short vibrations in a specific rhythm)—corridors;
  • Short and intense vibrations—dangerous places (e.g., stairs, high thresholds).
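Such a mapping could be expressed as room-category-specific waveforms in the off/on millisecond format consumed by Android's `VibrationEffect.createWaveform`; the timing values below are illustrative assumptions, not user-tested patterns.

```kotlin
// Sketch of mapping room categories to vibration waveforms (alternating
// off/on durations in milliseconds). The concrete timings are illustrative.
enum class RoomType { ENCLOSED_ROOM, CORRIDOR, DANGER }

fun vibrationPattern(type: RoomType): LongArray = when (type) {
    RoomType.ENCLOSED_ROOM -> longArrayOf(0, 400)             // one gentle, long pulse
    RoomType.CORRIDOR -> longArrayOf(0, 80, 120, 80, 120, 80) // pulsating rhythm
    RoomType.DANGER -> longArrayOf(0, 60, 40, 60)             // short, intense bursts
}
```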
Recent research supports this approach, as it confirms that vibration-based feedback can be effectively distinguished and memorized by blind users when designed carefully. Several vibration patterns were developed in a study [28] to resemble natural sounds, such as a heartbeat, knocking, or engine. Blind participants were able to identify up to 10 distinct patterns with high accuracy, reaching up to 90% in some instances. These findings support the use of custom-designed vibration signals in indoor navigation systems for blind users, offering a non-visual and private alternative to only voice- or sound-based feedback. Other studies [29] also confirm the effectiveness of vibration-based navigation. Their authors developed three haptic feedback methods for smartphones: Pattern, ScreenEdge, and Wand. In a user study involving eight blind participants navigating a pre-programmed route without audio cues, the Pattern method (using 1–4 vibration pulses to indicate direction) and ScreenEdge method (vibrating screen regions corresponding to directions) achieved high accuracy (96%). Most participants preferred these two methods over Wand, which relied on compass orientation and was less reliable. The study demonstrates that blind users can interpret directional vibration feedback with minimal error, supporting its use as a viable alternative to auditory navigation.
A separate issue requiring deeper analysis is the simulation of the variable speed of movement within a building. For this purpose, the rate of moving a finger on the map could be used. An example implementation of the application could be as follows:
  • Slow finger swiping—slow reading of space descriptions;
  • Fast finger swiping—fast reading of space descriptions.
Alternatively, another solution is possible:
  • Slow finger swiping—reading cognitive geo-descriptions;
  • Fast finger swiping—reading location geo-descriptions.
Another approach to this problem could be to interpret the tilt of the phone (detected by the gyroscope) as accelerating and decelerating movement around the building:
  • Tilting the smartphone forward—simulating acceleration of movement (faster geo-description reading);
  • Tilting the smartphone backwards—simulating deceleration of movement (slower geo-description reading).
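The tilt-based variant can be sketched as mapping the pitch angle reported by the device's orientation sensor to a playback-speed multiplier. The gain and clamping limits are assumed values for illustration:

```python
# Illustrative sketch: map smartphone pitch (degrees, as reported by the
# gyroscope/orientation sensor) to a simulated movement/reading speed
# multiplier. Forward tilt accelerates, backward tilt slows down.
# The gain and the 0.5-2.0 clamping range are assumptions.
def speed_multiplier(pitch_deg, gain=0.02, min_mult=0.5, max_mult=2.0):
    """pitch_deg > 0 = tilted forward, pitch_deg < 0 = tilted backward."""
    mult = 1.0 + gain * pitch_deg
    return max(min_mult, min(max_mult, mult))
```

Clamping the multiplier keeps extreme tilts from making the geo-descriptions unintelligibly fast or slow.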
This mechanism could be particularly important in combination with a 3D digital map, with the smartphone acting as a motion controller in a 3D virtual environment. In the context of building a mental map of complex spaces in a building, 3D maps may offer more advantages than 2D maps. Three-dimensional maps are already available but are not yet used, owing to performance limitations of the technologies employed in the Audio-Map WUT application. The problem stems from the complexity of the building models and from the rendering engine (ESRI server) not being adapted to this type of application. In this case, multivariate cartographic generalization of the data is needed, which is the subject of a separate study.
An important research direction is the integration of the user interface for both sighted and blind individuals, in line with the expectations of the study group. User preferences should be configurable within the application settings. A single, unified application for different user groups would help optimize costs while ensuring inclusivity.
Another advantage of this approach is that assistants for individuals with disabilities and spatial orientation trainers can use the same application. This also facilitates interactions between blind users and sighted individuals in public spaces—ensuring that the application remains accessible and understandable to everyone when assistance is needed.

5. Conclusions

The article presents the concept of a digital equivalent of a tactile map for visually impaired and blind people moving inside buildings. The purpose of the described solution is to support the creation of a mental map of building interiors (and thus spatial orientation) by playing audio geo-descriptions of the space as the user moves a finger (or cursor) across the digital map on the device screen (e.g., a smartphone). Automated reading of spatial information during real user movement is also possible when the application is integrated with indoor positioning systems (e.g., Bluetooth Low Energy beacons). Such a combination would not only provide a function similar to that of tactile maps but would also enable full navigation, allowing synergy between both types of approaches and solutions.
The authors also presented an application that serves as a sample implementation of the proposed concept. Preliminary tests conducted with people with visual impairments and with students without visual impairments confirmed the assumed usability of the proposed product. In the article, Audio-Map WUT was compared with traditional tactile maps and with alternative digital products used by blind and visually impaired individuals. Solutions similar to the concept proposed in this article already exist; however, no system has yet implemented the concept of advanced, hierarchical geo-descriptions (descriptions of space categorized by the user's purpose), which is itself a new idea.
Based on the work carried out, the following general conclusions can be made:
  • At present, there are very good technological opportunities to provide visually impaired people with solutions that support navigation far more effectively than traditional tactile maps;
  • Emerging solutions combine several channels of conveying information: tactile, audio, and haptic, which appears to be the right trend;
  • The proposed solution supporting people with disabilities can also be useful and attractive to all users of navigation applications and may affect the development of such applications.
All the concepts described above will be further verified during the next planned interviews with blind and visually impaired people. In addition, the current version of the application will be examined in detail by appropriately selected test groups consisting of both blind and visually impaired people. The test results will form the basis for introducing new features and improvements.

Author Contributions

Conceptualization, D.G.; methodology, D.G. and K.L.; software, H.Ś. and K.L.; validation, H.Ś.; formal analysis, D.G.; investigation, K.L. and H.Ś.; resources, D.G.; data curation, H.Ś.; writing—original draft preparation, D.G. and K.L.; writing—review and editing, D.G.; visualization, D.G. and K.L.; supervision, D.G.; project administration, D.G.; funding acquisition, D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by The National Centre for Research and Development under Grant number POWR.03.05.00-00-A022/19.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors report there are no competing interests to declare.

References

  1. Olczyk, M. The rules of developing tactile maps for blind and visually impaired. Pol. Cartogr. Rev. 2014, 46, 432–442.
  2. Świech, H. Prototype of a Mobile Application Supporting Spatial Orientation Inside Buildings, with Special Attention to the Needs of Blind People. Bachelor's Thesis, Warsaw University of Technology, Warsaw, Poland, 2023; unpublished.
  3. Rowell, J.; Ungar, S. Feeling Our Way: Tactile Map User Requirements—A Survey. In Proceedings of the ICA 2005, New York City, NY, USA, 26–30 May 2005.
  4. Kent, A.J. Maps, Materiality and Tactile Aesthetics. Cartogr. J. 2019, 56, 1–3.
  5. Wabiński, J.; Mościcka, A. Automatic (Tactile) Map Generation—A Systematic Literature Review. ISPRS Int. J. Geo-Inf. 2019, 8, 293.
  6. Wabiński, J.; Mościcka, A.; Kuźma, M. The Information Value of Tactile Maps: A Comparison of Maps Printed with the Use of Different Techniques. Cartogr. J. 2020, 58, 30.
  7. Wabiński, J.; Mościcka, A.; Touya, G. Guidelines for Standardizing the Design of Tactile Maps: A Review of Research and Best Practice. Cartogr. J. 2022, 59, 239–258.
  8. Wabiński, J.; Araszkiewicz, A.; Mościcka, A.; Śmiechowska-Petrovskij, E.; Kiliszek, D. UV Printed Tactile Maps of Historic Parks. Agil. GISci. Ser. 2024, 5, 50.
  9. Swathi, K.; Gayathri, P.; Manasa, D.; Pavani, G.; Likhitha, B. Smart Glasses for Visually Impaired People Using IoT. Int. Res. J. Adv. Eng. Manag. 2024, 2, 1390–1394.
  10. Mitruț, O.; Butean, A. Designing 3D Audio and Haptic Interfaces for Training the Sound Localization Ability of the Visually Impaired People. Bull. Polytech. Inst. Iași 2014, 50, 87–94.
  11. Engel, C.; Weber, G. ATIM: Automated Generation of Interactive, Audio-Tactile Indoor Maps by Means of a Digital Pen. In Proceedings of the Computers Helping People with Special Needs, Lecco, Italy, 11–15 July 2022.
  12. Barvíř, R.; Vondráková, A.; Brus, J. TouchIt3D: Technology (Not Only) for Tactile Maps. Abstr. ICA 2019, 1, 24.
  13. Altenaa, V.; Rijnberk, D.; Kuijer, M.; Jansen, C.; Min, E.; Welbergen, C.; Visser, T.; Vaart, E.; Nauta, F. Tailoring Tactile Maps Based on Blind Users' Needs. Proc. ICA 2023, 5, 22.
  14. Gkanidi, M.; Drigas, A. Tactile Maps and New Technologies for Blind and People with Visual Impairments. Int. J. Manag. Humanit. 2021, 5, 1–9.
  15. Papadopoulos, K.; Koustriava, E.; Koukourikos, P.; Kartasidou, L.; Barouti, M.; Varveris, A.; Misiou, M.; Zacharogeorga, T.; Anastasiadis, T. Comparison of three orientation and mobility aids for individuals with blindness: Verbal description, audio-tactile map and audio-haptic map. Assist. Technol. 2016, 29, 1–7.
  16. Papadopoulos, K.; Koukourikos, P.; Koustriava, E.; Marina, M.; Asimis, V.; Valari, E. Audio-Haptic Map: An Orientation and Mobility Aid for Individuals with Blindness. Procedia Comput. Sci. 2015, 67, 223–230.
  17. Calle-Jimenez, T.; Luján-Mora, S.; Arias-Flores, H.; Ramos, C.; Nunes, I.L. Designing Accessible Maps on Mobile Devices for Blind and Visually Impaired Users. Adv. Ind. Des. 2020, 1202, 110–116.
  18. Götzelmann, T.; Winkler, K. SmartTactMaps: A Smartphone-Based Approach to Support Blind Persons in Exploring Tactile Maps. In Proceedings of the 8th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, 1–3 July 2015; Volume 2, pp. 1–8.
  19. Science in Poland. Newly Developed Tactile Maps for the Blind Will Be Used More for Navigation. Available online: https://scienceinpoland.pl/en/news/news%2C96444%2Cnewly-developed-tactile-maps-blind-will-be-used-more-navigation.html (accessed on 2 July 2025).
  20. ArcGIS StoryMaps. Methods on Creating a Tactile Map for Lawton Public Library. Available online: https://storymaps.arcgis.com/stories/7150df6e8d3b4535bfcf5226f068942b (accessed on 2 July 2025).
  21. Science in Poland. Students Create Photo-to-Braille Translator for the Blind. Available online: https://scienceinpoland.pl/en/news/news%2C97150%2Cstudents-create-photo-braille-translator-blind.html (accessed on 2 July 2025).
  22. Esri. Tactile Maps Built with GIS Help People Who Are Blind Gain Spatial Awareness. Available online: https://www.esri.com/about/newsroom/arcnews/tactile-maps-built-with-gis-help-people-who-are-blind-gain-spatial-awareness/ (accessed on 2 July 2025).
  23. Direct Access. Tactile and Sensory Maps. Available online: https://directaccessgp.com/uk/media/tactile-and-sensory-maps (accessed on 2 July 2025).
  24. Cieslik, E. Accessibility and Exhibit Safety: The Importance of Sensory Maps. Collect. A J. Mus. Arch. Prof. 2024, 20, 365–384.
  25. Gotlib, D.; Świech, H. Building Accessibility Maps: Cartographic Solutions to Assist Mobility of People with Disabilities. In Proceedings of the 31st International Cartographic Conference (ICC 2023), Cape Town, South Africa, 13–18 August 2023.
  26. Gotlib, D.; Świech, H. Selected issues of the design and implementation process of mobile applications using text and voice geospatial description on the example of "Accessibility Map of Buildings". Rocz. Geomat. 2023, 21, 7–29.
  27. Lipka, K.; Gotlib, D.; Choromański, K. The Use of Language Models to Support the Development of Cartographic Descriptions of a Building's Interior. Appl. Sci. 2024, 14, 9343.
  28. Khusro, S.; Shah, B.; Khan, I.; Rahman, S. Haptic Feedback to Assist Blind People in Indoor Environment Using Vibration Patterns. Sensors 2022, 22, 361.
  29. Azenkot, S.; Ladner, R.E.; Wobbrock, J.O. Smartphone haptic feedback for nonvisual wayfinding. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2011), Dundee, UK, 24–26 October 2011.
Figure 1. The general flow chart of the research, development and design work (own elaboration).
Figure 2. The user holds the smartphone with the camera pointed at the tactile map. By touching the map with a finger, the user listens to audio descriptions provided by the mobile application [source: prof. T. Götzelmann, Technische Hochschule Nürnberg].
Figure 3. Diagram of the main components of the system (own elaboration).
Figure 4. (a) Panel showing the content of the geo-description about the area marked on the map (Map Viewing Mode), (b) QR code scanner (Map Viewing Mode), (c) Layer with location geo-descriptions (Learning Mode), (d) Layer with zone geo-descriptions (Learning Mode), (e) Layer with cognitive geo-descriptions (Learning Mode), (f) Layer with warning geo-descriptions (Learning Mode).
Figure 5. Screenshot from GeoDescTest app (own elaboration).
Table 1. Comparison of selected types of maps supporting spatial orientation and navigation using sound as the main medium of information transmission (own elaboration).
Product Type | Information Medium | Main Features and Example of Functionality
audio-haptic map | sound, haptic forms of touch | A mobile application that helps users navigate by generating audio descriptions of the space, such as descriptions of the user's location and route directions, and by using vibrations to indicate direction of movement, distance to the destination, and obstacles along the route.
audio-tactile map | sound, various forms of touch | A map with physical, convex, tactile elements and with sounds describing these elements. This could be a tactile map combined with sensors that activate sound when an element is touched or when a receiving device (such as an NFC reader) is brought close.
sound map | sound | A mobile application that provides visually impaired users with navigation guidance in the form of audio messages generated based on the location of the smartphone as determined by a GNSS receiver.
sensory map | touch, sound, sight | A map with physical, convex elements, Braille text, contrasting colors, and sounds describing different areas of the map.
audio guide | sound | A virtual guide to a museum describing the exhibits in the user's immediate vicinity.
Table 2. List of different types of locational geo-descriptions (own elaboration).
Geo-Description Type | Spatial Coverage | Explanation | Example
Local | refers to a specific place (e.g., entrance) or room | The geo-description contains information about what the immediate surroundings of the selected location look like. | You are at the beginning of the cloisters surrounding the Main Auditorium on the third floor. You are at the junction of two corridors near the entrance to the main offices of the Geodesy and Cartography Faculty.
Zone | relates to floors, wings/sections of the building | The geo-description includes such information as the range of room numbers in a given part of the building, the name of the part, and its function. | You are on the first floor in the northern part with the staff offices. The numbering in this part of the building starts with 120 and ends with 144.
Area | relates to the entire building or its surroundings, giving spatial context | The geo-description includes such information as the number of floors, the name of the building, and the function of the building. For the blind, information is provided on the appearance of the building (e.g., a glass building, a historic building in the neoclassical style); for people with physical disabilities, information on how to reach the entrance adapted for them. | The building has an irregular shape, in which the northern and southern parts can be distinguished. The main entrance to the building is on the east side. The southern part has teaching rooms, and the northern part has staff offices. The southern part has two floors, and the northern part has four floors.
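The local/zone/area hierarchy from Table 2 can be sketched as an ordered lookup from the smallest containing unit up to the whole building. The data structures, identifiers, and texts below are illustrative assumptions, not the application's actual data model:

```python
# Illustrative sketch of the hierarchical geo-description lookup of Table 2:
# for a touched map point, descriptions are collected from the smallest
# containing unit (local) up to the whole building (area). All identifiers
# and texts here are hypothetical.
GEO_DESCRIPTIONS = {
    "local": {"entrance_A": "You are at the main entrance, near the reception desk."},
    "zone": {"north_wing_1": "You are on the first floor in the northern part with the staff offices."},
    "area": {"main_building": "The building has an irregular shape with northern and southern parts."},
}

def descriptions_for(local_id, zone_id, area_id):
    """Collect descriptions in local -> zone -> area order for read-out."""
    ids = [("local", local_id), ("zone", zone_id), ("area", area_id)]
    return [GEO_DESCRIPTIONS[level][key]
            for level, key in ids if key in GEO_DESCRIPTIONS[level]]
```

Reading the list in this order mirrors the hierarchy of the table: the most specific description first, then progressively wider spatial context.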
Table 3. Detailed selected comments from individual users or collective opinions/suggestions from experts (source: internal report commissioned by WUT and prepared by the company Altix).
ID | Testers/Users' Comments
1 | The testers responded positively to the idea of presenting the WUT campus using an interactive map. They were particularly interested in exploring the spatial relationships between buildings, the layout of rooms, the list of floors, and the means of navigating between different zones.
2 | The testers noted issues with the repetition of announcements, the inability to replay messages, and difficulties in relocating previously explored areas on the map. However, the content, length, and overall quality of the announcements were evaluated positively. Testers understood the purpose of each message and the characteristics of the described sections, rooms, etc.
3 | The testers responded positively to the idea of presenting the WUT campus using an interactive map. They were particularly interested in exploring the spatial relationships between buildings, the layout of rooms, the list of floors, and the means of navigating between different zones.
4 | The geo-descriptions themselves were positively evaluated. The level of detail was deemed appropriate. Messages concerning buildings, floors, and zones were described as concise and informative, often including useful hints such as which rooms are located in particular areas or what amenities are available on certain floors. One participant expressed satisfaction with having access to both general information about the building's appearance and potential obstacles, such as book carts positioned in the reading room.
5 | When asked whether she would like to familiarize herself with the building map before visiting, the tester answered affirmatively, stating it would be very helpful. She suggested it would be more convenient to explore the building using the app on a larger screen.
6 | The testers recommended maximizing the map view at the expense of the interface to facilitate better exploration of the displayed areas.
7 | After searching for an object using the search engine, the message read aloud does not include the object's location in relation to the zone, floor, or building.
8 | A major difficulty reported by all testers was independently locating doors. QR codes are too small to be read by the camera from various points in the corridor. Given these issues, QR codes should be treated solely as a complementary solution to beacon-based navigation, for example, to verify a room number.
9 | The majority of completely blind users use iPhones. The iPhone is considered easier to use and less distracting. Therefore, the primary recommendation is to accelerate the development of the iOS version.
10 | Testers suggested that in this mode, the application should initially present a full view of the WUT campus. The entire campus should be visible on screen, and dragging a finger across it should identify which building is underneath. To help users remember building locations, this screen should remain static, without the ability to zoom, rotate, or shift the map.
11 | Testers recommended that all users, whether blind, visually impaired, or sighted, should interact with the same interface. This approach would facilitate knowledge-sharing between users with and without disabilities and would also reduce development and maintenance efforts.
12 | Testers proposed the inclusion of two user modes: Exploration Mode and Navigation Mode. In Exploration Mode, the application should deliver detailed messages describing individual parts of the campus. In Navigation Mode, messages should be concise, assuming the user has already familiarized themselves with the campus and the layout of areas of interest.
13 | Testers experienced difficulties using TalkBack and VoiceOver for reading the map and interacting with some oversized interface elements, as well as due to existing user habits.
14 | The interface's audio feedback functioned differently from what the tester was accustomed to (VoiceOver in iOS). Touching a button triggered its immediate activation, without the opportunity to review available button labels beforehand. As a result, the user switched between functions in a disorganized manner and was unable to "take control" of the application.
15 | Testers pointed out the need to optimize zoom functionality.
Table 4. Comparison of features of traditional tactile map, available electronic products and proposed solution (own elaboration).
Feature | Traditional Tactile Map | Electronic Products Currently Available to Assist People with Visual Impairments | Proposed Solution
Type of recipient | blind, visually impaired | blind, visually impaired, sensory disability, epilepsy, PTSD, person without disability | blind, visually impaired, sensory disability, person without disability
Information medium | touch, sight | touch, sound, sight | touch, sound, sight
Physical form | yes | in some products | no
Mobility | usually none | mostly limited | full
Risk of failure | small (physical damage) | moderate (physical damage, network problems, server failures) | moderate (network problems, server failures)
Scope of information | very limited | average | wide
Content type | static | dynamic | dynamic
Data update | none (usually the need to replace the entire map) | dependent on the specific product; partial online update possible | online
Interactivity | no | from small to high, depending on the specific product | high
Supporting mental map creation and spatial orientation training | limited | moderate | high
Customizable | no | limited | fully
Popularity | high | moderate | small (just being introduced into use)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
