Article

The Influences of Different Sensory Modalities and Cognitive Loads on Walking Navigation: A Preliminary Study

Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(24), 16727; https://doi.org/10.3390/su142416727
Submission received: 18 November 2022 / Revised: 9 December 2022 / Accepted: 12 December 2022 / Published: 13 December 2022
(This article belongs to the Special Issue Sustainable and Safe Road User Behaviour)

Abstract

External cognitive burden has long been considered an important factor in pedestrian navigation safety problems, as pedestrians who are navigating inevitably acquire external information through their senses. Therefore, the influences of different sensory modalities and cognitive loads on walking navigation are worth in-depth investigation as a foundation for improving pedestrians’ safety during navigation. This study investigated users’ performance in visual, auditory, and tactile navigation under different cognitive loads through an experimental simulation. Thirty-six participants were recruited for the experiment. A computer program simulating walking navigation was used, and three different cognitive load task groups were set up. Participants’ reaction times and performance were recorded during the experiment, and a post-test questionnaire was administered for evaluation purposes. The tests yielded the following findings. First, visual navigation performed best in the load-free condition, with reaction times significantly faster than those in auditory and tactile navigation, whereas the difference between the latter two was not significant. Second, there was a significant interaction between navigation types and cognitive load types. Specifically, in the load-free condition, the reaction time in auditory navigation was significantly slower than those in visual and tactile navigation. In the condition with auditory load, the reaction time in visual navigation was significantly faster than those in auditory and tactile navigation. In the condition with visual load, there were no significant differences among the three navigation types.

1. Introduction

Walking is generally considered a sustainable and healthy way to travel. However, the cognitive load of navigation interaction during walking and pathfinding may be an unavoidable factor affecting walking safety. At present, visual navigation is the dominant type of navigation, but in fact there are many single-modality and multimodal sensory navigation methods to choose from. Moreover, with the popularization of ubiquitous personal communication devices, navigation has evolved iteratively and become widely used.
Specifically, digital mapping and navigation are designed to help people arrive at destinations efficiently, especially as smartphones have become commonplace, and people can look up locations and initiate navigation at any time. Navigation systems based on digital maps require the user’s attention to the screen, which can be problematic when visual attention is required in the environment to avoid obstacles, other pedestrians, or surrounding traffic [1]. Existing pedestrian navigation systems are usually developed from car navigation and are not designed for pedestrian mobility characteristics [2,3]. The main problem is that navigation is not the main task when traveling, and people are often visually or auditorily occupied [4]. For example, while navigating on foot, the user may also be burdened with other visual or auditory cognitive tasks, such as checking other information or being on their phones and having conversations with others. In this context, cognitive loads may even cause safety issues in walking navigation.
To explore the interaction options for sensory modalities, this study simulated a typical cognitive load scenario for navigation in an urban environment and examined the relationship between different sensory modalities and different cognitive loads on users while navigating on foot under virtual wayfinding.
To our knowledge, few studies have specifically compared navigation under different cognitive loads across these three sensory modalities or drawn conclusions about the relationship between navigation style and cognitive load. The contribution of this work is as follows: cognitive load was introduced into the walking navigation scenario to establish a relationship between cognitive load and sensory modalities. A quantitative study was carried out using several cognitive load manipulations in a task-simulated navigation situation, collecting objective data on reaction times in different states and analyzing them in combination with subjective data from a post-test questionnaire. This paper contributes to the understanding of sensory load and provides insight into the future design of navigation modalities, especially with regard to safety-related factors that are closely tied to sensory modalities and cognitive loads.
The rest of this paper is organized as follows: Section 2 reviews the current state of research on sensory modalities and cognitive loads related to walking navigation and proposes research goals; Section 3 describes the experiment, including the participants, materials, and process, followed by the results of the experiment, analyzing the collected data of the success rate, reaction time, satisfaction, and subjective ratings of task load; Section 4 discusses the results; and Section 5 summarizes this work and gives an overview of future work.

2. Background

2.1. Walking Navigation and Sensory Modality

The process of navigation consists of two main components: locomotion and wayfinding. Locomotion is the physical movement of a person from a starting position to a destination; wayfinding is the process of finding a destination [5]. Wayfinding usually occurs during directed movement in unfamiliar environments. People primarily obtain visual guidance from the environment, for example using landmarks for orientation and decision-making [6] and cognitive elements such as the names of paths and turning points [7,8]. People sometimes require navigation systems to provide more integrated navigation instructions and to help reduce the cognitive load during wayfinding [9].
The main reason for this reduction in cognitive load is that navigation systems integrate the spatial information on which people focus. Instead of traversing all the information about the environment, people use navigation systems to reallocate their attentional resources [10,11]. However, some studies have shown that users’ attention is divided between the navigation system and the environment when using navigation devices [12]. For example, users’ attention is drawn to location signals (flashing lights or beeps) on the GPS navigation system [10], and using other applications on the phone during navigation can lead to distraction.
Psychologically, attention is inextricably linked to the senses. The receptors of the eyes, ears, and skin receive energy stimuli, such as light and sound, which are transformed into sensations of sight, hearing, and touch, also known as sensory modalities. Humans use these sensory modalities to receive stimulus signals from most sources of information in the external environment [13]. At the same time, the designer can allocate the user’s attention by predetermining which navigation attributes to select and how the system will represent them [14]. Therefore, many navigation systems have been designed and validated for usability from the perspective of sensory modality.
Van der Bie et al. built a multimodal messaging framework for transmission through smartwatches and smartphones; their work shows how the communication of wayfinding information to people with visual impairment (PVI) can be improved by utilizing four modalities: audio, voice, tactile, and visual [15].
  • Visual
Paper maps are the most traditional navigation tool and contain a lot of information, but they can distract the user while walking [16]. Many electronic navigation devices present the user with a large amount of information in pictures and text on a screen. The advent of augmented reality has added a new solution for visual navigation. For example, Narzt used a car windscreen as a prospective carrier for an augmented reality application that guides the way with differently colored paths [17], and Cherchi used the camera preview of a mobile phone combined with augmented reality signals, guiding the user with a 3D arrow pointing to the current target point [18].
  • Auditory
Auditory navigation uses both verbal and nonverbal audio. Verbal audio (spoken messages such as “turn right”) is more prevalent in navigation systems, but it can be affected by noise in the environment [18,19]. Navigation systems that use nonverbal audio can accurately communicate distance and forward-direction information to users and require less of the user’s attention [16,20]. Spearcon-based nonverbal audio feedback can communicate distance and forward-direction information to users during navigation [21], and research has shown that Chinese-based spearcons are more effective in communicating with native Chinese speakers [20]. In addition, a study by Shelton showed that auditory reaction times were faster than visual reaction times; moreover, men responded faster than women to both auditory and visual stimuli [22].
  • Tactile
Tactile information is less commonly used in human–machine interfaces. It consists of conveying navigation information through various vibration patterns (rhythm, intensity, and frequency [23]) of wearable components (e.g., wristbands, belts, and helmets) in contact with the skin. Tactile information appears to reduce the number of errors and the cognitive load compared to other navigation modalities [24]. Arab’s study showed that haptic navigation aids reduced the number of errors made by older users [25], and Ernst’s study showed that less noisy channels carrying more reliable information are given higher weight in the integration of visual and haptic information [26].

2.2. Walking Navigation and Cognitive Load

Sweller proposed Cognitive Load Theory (CLT), which defines “cognitive load” as the total amount of information that an individual can process during information processing [27]. Individuals have limited cognitive resources, which are consumed by the cognitive processing required to learn knowledge and solve problems [28]. When the cognitive load is too high, working memory can become overloaded, which can affect the user’s behavior, for example, failing to notice relevant information, failing to respond appropriately, or performing incorrect actions.
When walking in an unknown environment, we often use navigation devices to support the cognitive processes needed to navigate as optimally as possible [29]. However, the formation of mental spatial representations is demanding and limited by the capacity of human attention [30,31,32]. For example, while navigating a walk, the person next to you may talk to you. In this case, the navigation device sends out route guidance instructions while you listen to the words of the person next to you and give a response. If this is an unfamiliar journey, it will be difficult to notice what the person next to you says and to ensure correct orientation, and the response to the navigation message will be slower during the conversation.
According to Brugger [5], people perform two tasks while navigating: wayfinding and locomotion. In the wayfinding task, the user receives navigation guidance instructions and compares the corresponding information in the environment to confirm the direction of travel. In the locomotion task, the user interacts more with the environment, e.g., using a mobile phone, talking to a companion, paying attention to traffic signs, etc. Therefore, there is an interrelationship between the user’s navigation task and the cognitive load between the human, the machine, and the environment during navigation (Figure 1). The user’s cognitive load consists of two components: the navigation load caused by navigation guidance instructions during wayfinding and the incremental visual or auditory cognitive loads caused by the environment during travel, i.e., the environmental loads. When both parts of the cognitive load need to be processed simultaneously, the user needs to allocate their cognitive resources and attention between the two parts to ensure that the navigation task is completed.
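Schematically (this notation is ours, not the authors’), the decomposition described above can be written as

$$L_{\text{user}} = L_{\text{navigation}} + L_{\text{environment}},$$

where $L_{\text{navigation}}$ is the load imposed by the guidance instructions during wayfinding and $L_{\text{environment}}$ is the incremental visual or auditory load from the environment during travel; both draw on the same limited pool of cognitive resources.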

2.3. Research Objectives

Current navigation devices mainly rely on the turn-by-turn navigation command strategy [33,34]. In a navigation scenario, the user receives auditory or visual instructions at each intersection as he or she moves along the route. The study was based on this navigation method, and the experiment set the navigation type and the cognitive load increment as independent variables. The first independent variable, the navigation type, included visual navigation, auditory navigation, and tactile navigation; the second independent variable, the cognitive load increment, was introduced by setting up corresponding tasks: a cognitive load induced predominantly visually (visual load), a cognitive load induced predominantly auditorily (auditory load), and a control group (load-free).
Based on existing studies and the relationship between navigation types and cognitive load increments, the following hypotheses were proposed:
  • Increasing the cognitive load has a negative effect on navigation efficiency.
  • Navigation instructions and cognitive load in the same sensory modality interfere with each other, leading to a decrease in navigation efficiency.
  • When the navigation instructions and the cognitive load are in different sensory modalities, navigation efficiency is less affected.

3. Experiments

3.1. Methods

3.1.1. Participants

In our experiment, 36 healthy adults (22–29 years old; 18 males and 18 females) were recruited through a university in Guangzhou. All participants were naive to the purpose of the experiment, reported normal hearing and normal or corrected-to-normal vision, and were right-handed. Informed consent was obtained from all participants, who were reimbursed with a gift. All procedures were approved by the Academic Ethics Committee of Guangdong University of Technology (approval code GDUTXS2022085).

3.1.2. Research Design

A 3 (navigation type: visual navigation, auditory navigation, and tactile navigation) × 3 (cognitive load increment: visual load, auditory load, and load-free) mixed design was adopted. The navigation type was the between-subject variable and the cognitive load increment was the within-subject variable. The dependent variables included the success rate, reaction time, satisfaction with navigation, and subjective load ratings. All participants were randomly assigned to one of three groups (six males and six females in each group), and each group used one navigation type.
There were three types of navigation in the experiment. When using visual navigation, the corresponding directional sign turned red at the turn (Figure 2); when using auditory navigation, the program emitted a corresponding voice at the turn, e.g., “turn right”; when using tactile navigation, a vibrating device worn by the participant on the wrist vibrated at the turn, with the left wristband vibrating for a left turn, the right wristband vibrating for a right turn, and both vibrating for straight ahead.
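As a minimal illustrative sketch (not the authors’ actual program), the nine experimental cells and the per-modality cue mapping described above can be expressed as follows; the function name and the textual cue encodings are our assumptions.

```python
from itertools import product

# The 3 x 3 design: navigation type (between-subject) x cognitive load (within-subject).
NAVIGATION_TYPES = ["visual", "auditory", "tactile"]
COGNITIVE_LOADS = ["load-free", "auditory load", "visual load"]
CONDITIONS = list(product(NAVIGATION_TYPES, COGNITIVE_LOADS))  # 9 cells

def cue_for(navigation_type: str, direction: str) -> str:
    """Describe the cue presented at a turn, following the mapping in the text."""
    if navigation_type == "visual":
        return f"the '{direction}' sign on the screen turns red"
    if navigation_type == "auditory":
        return f'voice prompt: "turn {direction}"' if direction != "straight" else 'voice prompt: "go straight"'
    # Tactile: left wristband = left turn, right wristband = right turn, both = straight.
    return {"left": "left wristband vibrates",
            "right": "right wristband vibrates",
            "straight": "both wristbands vibrate"}[direction]

print(cue_for("tactile", "straight"))  # -> both wristbands vibrate
```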
There were three cognitive load increments in the experiment. The visual load represented visually induced cognitive load during navigation; the experiment used a puzzle game to occupy participants’ visual attention and simulate the visual cognitive load increment. The auditory load represented auditorily induced cognitive load during navigation; the experiment used spoken multiplication questions posed to the participants (these were simple multiplication questions that did not cause excessive cognitive load) to simulate the auditory cognitive load increment. The load-free group was regarded as the control group, in which no additional cognitive load increment was generated by an added task.
The success rate was the percentage of participants who completed all cognitive load increment tasks, reflecting the difficulty of the task. The reaction time was the time that elapsed between the presentation of a sensory stimulus and the subsequent behavioral response [35]; it reflects the efficiency of human navigation under specific conditions. For each participant and condition, reaction time was computed as the average response time at the seven turns. In addition, satisfaction with the navigation types and subjective load ratings were collected from all participants in a post-test questionnaire, reflecting the participants’ subjective evaluation of the experiment.
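For clarity, the reaction-time measure for each participant and load condition is the mean over the seven cue points:

$$\overline{RT} = \frac{1}{7}\sum_{i=1}^{7} RT_i,$$

where $RT_i$ is the interval from cue onset at intersection $i$ to the participant’s key press.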

3.1.3. Apparatus

The experimental program was created in Visual Studio Code (version 1.46.1); it provided virtual signposts, gave navigation instructions in an orderly manner on an ASUS computer (GU603H), and recorded data such as participant response times. The experimental setup consisted of the ASUS computer with an external keyboard, a homemade haptic wearable, and an iPad, as shown in Figure 3.
The computer program interface is shown in Figure 4. The computer monitor presented navigation cues (Figure 4b), road signs at turns (Figure 4c), and visual navigation (Figure 2), with a screen resolution of 2560 × 1600 pixels, a refresh rate of 165 Hz, and a viewing distance of 57 cm. Auditory navigation was output through the speakers of the ASUS computer. Haptic navigation was presented through a haptic wearable device (a vibrating wristband driven by an Arduino UNO R3) worn on the wrist. In addition, the auditory load task was also output through the speakers of the ASUS computer, and the visual load task was presented on the iPad (Air 3).

3.1.4. Tasks

Participants sat in front of a computer for the experiment. The computer program (Figure 4a) provided three types of navigation (visual navigation, auditory navigation, and tactile navigation), and participants received one of the navigation cue types according to their group. There were three cognitive load increment tasks (visual load, auditory load, and load-free, as shown in Figure 4a). The visual load task required participants to process a visually induced cognitive task while receiving navigation (participants attempted to complete a 24-piece puzzle, and puzzle completion was recorded). The auditory load task required participants to process an auditorily induced cognitive task while receiving navigation (participants attempted to answer the spoken multiplication questions, and the number of questions answered and the percentage of correct answers were recorded).
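The exact question set is not specified in the text; as a hypothetical sketch, simple spoken multiplication questions of the kind described above could be generated as follows (the function name and operand range are our assumptions).

```python
import random

def make_multiplication_question(rng=random):
    """Generate one simple multiplication question and its correct answer.

    Illustrative only: the operand range 2-9 is an assumption, chosen so the
    questions stay easy and do not impose excessive cognitive load.
    """
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return f"What is {a} times {b}?", a * b

question, answer = make_multiplication_question()
print(question, "->", answer)
```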
To reduce order effects, the three routes used for the cognitive load tasks presented the navigation instructions in different orders but had the same length (and therefore the same travel time in the experiment). The total route travel time was 141 s, comprising seven intersections (navigation cue points) and eight segments of travel.

3.1.5. Procedures

Before the experiments, participants were asked to familiarize themselves with the experimental procedures.
During the experiment, the virtual road signs on the screen changed as the participants traveled, and the participants were prompted by the instant navigation cues to choose the way forward. When a participant advanced to an intersection, the virtual road marker appeared on the screen, and the navigation prompt was issued 1 s later. The participant received the prompt and selected a direction by pressing the direction keys on the external keyboard. If the participant’s choice of direction matched the navigation instruction, the participant continued to the next segment, and the reaction time for receiving the navigation instruction at that intersection was automatically recorded by the program; if the participant chose the wrong direction, the route stopped immediately and was judged a failure. The participant’s reactions during the process were recorded.
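The timing logic described above (road marker appears, cue issued 1 s later, reaction time measured from cue onset to the key press, and a wrong choice ending the route) could be sketched roughly as follows. This is an illustrative reconstruction, not the authors’ program; present_sign, present_cue, and wait_for_keypress stand in for unspecified I/O helpers.

```python
import time

def run_intersection(expected_direction, present_sign, present_cue, wait_for_keypress):
    """Run one intersection and return (reaction_time_ms, correct)."""
    present_sign()                      # virtual road marker appears on the screen
    time.sleep(1.0)                     # navigation prompt is issued 1 s later
    cue_onset = time.perf_counter()
    present_cue(expected_direction)     # visual, auditory, or tactile cue
    pressed = wait_for_keypress()       # participant presses a direction key
    reaction_time_ms = (time.perf_counter() - cue_onset) * 1000.0
    correct = (pressed == expected_direction)
    return reaction_time_ms, correct    # a wrong choice ends the route as a failure
```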
After completing the three walking routes, participants evaluated the navigation type on five-point Likert scales, selecting the descriptions that best matched their experience during the task. The questionnaire included the System Usability Scale (SUS), subjective ratings of load (task difficulty: “It was difficult to use this system for immediate navigation” and task effort: “It required a lot of effort on my part to use this system for immediate navigation”), and open-ended questions.
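For reference, the SUS yields a 0–100 score from ten five-point items using the standard scoring rule sketched below; we assume the satisfaction scores reported in Section 3.2.3 were computed this way.

```python
def sus_score(responses):
    """Standard SUS scoring: odd items contribute (r - 1), even items (5 - r),
    and the sum is scaled by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```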

3.2. Results

Five participants who did not complete the full task were excluded from the analysis. This left ten samples for visual navigation, nine for auditory navigation, and twelve for tactile navigation, for a total of thirty-one samples.
SPSS (Statistical Product and Service Solutions) version 26.0 was used to process the data.

3.2.1. Success Rate

A few participants failed to complete the entire task due to errors during the experiment; the uncompleted tasks are discussed in Section 4.3. The success rate was 83% for visual navigation, 75% for auditory navigation, and 100% for tactile navigation, χ2(2) = 3.79, p = 0.150. The high success rates indicate that turn-by-turn navigation was effective across the different cognitive load scenarios and that people could reach their destination with each navigation method.
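As an illustration of the group comparison, a chi-square test on the completion counts could be computed as follows. The counts here are inferred from the reported success rates (10/12, 9/12, and 12/12); the statistic reported above may have been computed slightly differently, so this is only a sketch.

```python
from scipy.stats import chi2_contingency

# Completed vs. failed counts per navigation group, inferred from the success rates.
observed = [[10, 2],   # visual navigation
            [9, 3],    # auditory navigation
            [12, 0]]   # tactile navigation
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```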

3.2.2. Reaction Time

The results of the 3 × 3 repeated measures ANOVA showed that the main effect of the navigation type was not significant, F (2, 28) = 2.14, p > 0.05; the post hoc test (LSD) showed that the RT in visual navigation (1173.41 ms) was significantly faster than in auditory navigation (1788.04 ms), p = 0.049, while the RT in tactile navigation (1204.70 ms) was not significantly different from the others. The main effect of the cognitive load increment was significant, F (2, 56) = 11.30, p < 0.05, indicating that the RT under the visual load increment (2031.34 ms) was significantly slower than those under the auditory load increment (1442.11 ms) and load-free (1363.96 ms). The interaction between the navigation type and cognitive load increment was significant, F (4, 56) = 3.61, p < 0.05. A simple effect test showed that in the condition with auditory load, there were significant differences among the three navigation types, F (2, 28) = 9.62, p = 0.001, and multiple comparisons (LSD) showed that the reaction time in visual navigation (M = 922.62 ms, SD = 318.77 ms) was significantly faster than those in auditory navigation (M = 1903.43 ms, SD = 257.53 ms), p < 0.001, and in tactile navigation (M = 1529.04 ms, SD = 700.91 ms), p = 0.008. In the condition without load, there were significant differences among the three navigation types, F (2, 28) = 6.79, p = 0.004, and multiple comparisons (LSD) showed that the reaction time in auditory navigation (M = 1788.04 ms, SD = 233.04 ms) was significantly slower than those in visual navigation (M = 1173.41 ms, SD = 619.12 ms), p = 0.003, and in tactile navigation (M = 1204.70 ms, SD = 275.11 ms), p = 0.003. In the condition with visual load, there were no significant differences among the three navigation types (Mauditory = 1865.97 ms, Mvisual = 2175.75 ms, and Mtactile = 2035.03 ms). Table 1 reports the mean and standard deviation for the different conditions.
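The analysis was run in SPSS; an analogous mixed-design ANOVA and the follow-up pairwise comparisons could be reproduced in Python, for example with the pingouin package. The data frame layout and column names below are assumptions (long format, one row per participant and load condition).

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: columns 'participant', 'navigation' (between),
# 'load' (within), and 'rt' (mean reaction time in ms for that cell).
df = pd.read_csv("reaction_times_long.csv")  # hypothetical file name

aov = pg.mixed_anova(data=df, dv="rt", within="load",
                     subject="participant", between="navigation")
posthoc = pg.pairwise_tests(data=df, dv="rt", within="load",
                            subject="participant", between="navigation")
print(aov)
print(posthoc)
```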
Figure 5 shows the interaction of the two factors. In the auditory navigation condition, the navigation reaction times were close for the different loads; in the visual navigation condition, the visual load increased the navigation reaction time, and the auditory load decreased the reaction time; in the tactile navigation condition, the visual load increased the navigation reaction time.

3.2.3. Satisfaction and Subjective Ratings of Task Load

The subjective ratings of the users’ overall satisfaction with the type of navigation showed that participants had higher overall satisfaction with the tactile navigation cues than with the auditory and visual cues (visual: M = 63.25, SD = 11.67; auditory: M = 62.5, SD = 8.84; and tactile: M = 66.67, SD = 12.36). However, there was no significant main effect of navigation cue type (F (2, 28) = 0.43, p > 0.05). This implies that although tactile navigation was somewhat more satisfying for participants, it was not significantly different from the other navigation types.
The users’ subjective ratings of task load showed no significant differences between navigation cue types under the load-free and auditory load conditions (load-free difficulty: F (2, 28) = 0.04, p > 0.05; load-free effort: F (2, 28) = 0.61, p > 0.05; auditory load difficulty: F (2, 28) = 2.30, p > 0.05; auditory load effort: F (2, 28) = 2.51, p > 0.05). Under the visual load condition, there were significant differences between the three navigation cue types (visual load difficulty: F (2, 28) = 5.79, p < 0.05 and visual load effort: F (2, 28) = 5.97, p < 0.05).
The multiple comparisons showed that, for difficulty under the visual load condition, there was a significant difference between auditory and visual navigation (p = 0.01), a significant difference between visual and tactile navigation (p = 0.02), and no significant difference between auditory and tactile navigation (p = 0.30). Thus, participants perceived less difficulty in processing the visual load when using auditory or tactile navigation than when using visual navigation (auditory: M = 2.00, SD = 1.23; visual: M = 3.60, SD = 3.60; and tactile: M = 2.50, SD = 0.91). For effort, there was a significant difference between auditory and visual navigation (p = 0.01), a significant difference between visual and tactile navigation (p = 0.04), and no significant difference between auditory and tactile navigation (p = 0.14). Thus, participants perceived less effort in processing the visual load when using auditory or tactile navigation than when using visual navigation (auditory: M = 2.11, SD = 1.17; visual: M = 3.80, SD = 1.14; and tactile: M = 2.83, SD = 0.94).

4. Discussion

4.1. Participants Had the Shortest Reaction Time When Using Visual Navigation

According to the study, visual guidance instructions were superior to tactile and auditory guidance instructions in the absence of any distraction (load-free). This finding is consistent with intuition and in line with Yagi et al.’s study, which found that reaction times to visual stimuli were faster than those to auditory stimuli [36]. On the other hand, a study by Thompson et al. recorded an average reaction time of approximately 180–200 ms for the detection of visual stimuli, whereas for sound it was approximately 140–160 ms [37]. The difference from those results may arise because our experiment simulated pedestrian walking and the visual channel was constantly occupied (participants watched the computer’s virtual signposts). Alternatively, the body’s state of movement may also affect response times, as Verleger’s study confirmed that visual response times were faster than auditory response times during or after movement [38].
The study found a significant interaction between navigation type and cognitive load. Specifically, visual navigation was significantly faster than auditory and tactile navigation under the auditory load condition. This suggests that the type of cognitive load affects the effectiveness of different navigation methods. These findings have important implications for the design of navigation systems and technologies. To optimize performance, navigation systems should be flexible and able to adapt to different cognitive load conditions. For example, when the user needs to talk to someone while walking or to hold a conference call, visual navigation may be more effective; however, when visual attention is needed for other information, auditory and tactile navigation are more suitable.
Future studies should explore the impact of cognitive load on navigation performance in more detail, including conducting real-world experiments with different types of cognitive load and investigating how individual differences in cognitive ability affect navigation performance. Additionally, research could examine the potential benefits of combining different navigation methods to improve navigation performance in challenging environments.

4.2. Participants Were Least Influenced by the Loading Task When Using Auditory Navigation

The difference between the reaction times under the visual load, auditory load, and load-free conditions was used to measure the impact of each load increment. In auditory navigation, the incremental impact of the auditory load was 115.39 ms and that of the visual load was 77.93 ms. In visual navigation, the incremental impact of the auditory load was −250.79 ms (a negative value represents a positive impact, i.e., a decrease in reaction time), and the incremental impact of the visual load was 1002.34 ms. In tactile navigation, the incremental impact of the auditory load was 324.34 ms and that of the visual load was 830.33 ms. Among these combinations of sensory modalities, the pairing of visual navigation with auditory load stood out with a negative impact, showing that visual navigation and auditory load interfered little with each other in the navigation context; the performance of auditory navigation, in particular, was more stable across the two cognitive load conditions. Under virtual wayfinding, the two navigation subtasks, locomotion and wayfinding, both occupied visual resources and were carried on the computer screen, while environmental sounds occupied auditory attention resources; information processed in different modalities thus interfered less.
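These impact values follow directly from the cell means in Table 1; the short sketch below reproduces them by subtracting each group’s load-free mean from the corresponding load-condition mean.

```python
# Cell means (ms) from Table 1.
means = {
    ("visual", "load-free"): 1173.41, ("visual", "auditory load"): 922.62,
    ("visual", "visual load"): 2175.75,
    ("auditory", "load-free"): 1788.04, ("auditory", "auditory load"): 1903.43,
    ("auditory", "visual load"): 1865.97,
    ("tactile", "load-free"): 1204.70, ("tactile", "auditory load"): 1529.04,
    ("tactile", "visual load"): 2035.03,
}

for nav in ("visual", "auditory", "tactile"):
    for load in ("auditory load", "visual load"):
        impact = means[(nav, load)] - means[(nav, "load-free")]
        # e.g., visual navigation under auditory load: -250.79 ms (faster responses)
        print(f"{nav} navigation, {load}: {impact:+.2f} ms")
```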
Overall, under visual navigation, additional visual information may compete with the navigation task for attentional resources, resulting in a decline in visual navigation performance. By contrast, owing to the obligatory, omnidirectional nature of hearing, auditory navigation may be less affected by additional cognitive load.

4.3. Reasons for Participants Not Completing the Full Task

A total of twelve valid samples for tactile guidance instructions, ten for visual guidance instructions, and nine for auditory guidance instructions were collected in this experiment. The post-test interviews revealed that participants 4 and 34 failed the visual load task mainly because it was difficult to complete the puzzle during navigation and the puzzle attracted most of their attention. Three participants reported that they were conflicted between listening to the navigation instructions and listening to the multiplication questions and, in their panic, chose the wrong direction, failing the auditory load task. In contrast, some of the participants who completed all the tasks indicated that, when the navigation instructions conflicted with the visual task, they prioritized the more urgent navigation instructions, forcing their gaze away from the puzzle; when the navigation instructions conflicted with the auditory task, they usually prioritized the auditory task. One participant said it was similar to walking down the road while talking to a companion: simply navigating and ignoring the companion’s question can feel a bit rude.
These findings suggest that the presence of conflicting cognitive demands affects navigation performance. In particular, visual and auditory tasks may compete with the navigation task for attentional resources, leading to slower reaction times and lower performance. However, even in the presence of conflicting demands, individuals are able to prioritize the navigation task and complete it successfully. This highlights the importance of considering cognitive load when designing navigation systems and technologies, as a navigation system with low cognitive load helps safeguard pedestrian safety.

5. Conclusions

This study examined the effects of visual, auditory, and tactile guidance instructions on navigation efficiency and user satisfaction under different cognitive loads. The results showed that visual navigation performed best in the externally load-free condition. In the visual navigation condition, the visual load increased the navigation reaction time, while the auditory load produced the best performance. In the auditory navigation condition, the auditory load increased the navigation reaction time. In the tactile navigation condition, the visual load increased the navigation reaction time.
These findings have practical implications for the design of navigation systems and technologies. The finding that visual navigation is most effective under no-load conditions suggests that visual aids may be particularly useful for guiding individuals in relatively simple environments. On the other hand, the interaction between navigation type and cognitive load suggests that navigation systems should be flexible and able to adapt to different load conditions to optimize performance.
This study brought the concept of cognitive load into the design study of audio, visual, and tactile navigation modes, aiming to provide insights into the current phenomenon of cognitive overload in walking navigation interactions. By studying the user’s cognitive load for different navigation methods under different cognitive load conditions, the cognitive load of a product can be controlled within a reasonable range that is acceptable to the user through corresponding interaction strategies, thus improving the usability and user experience of navigation products. Adequate investigation of the influences of different kinds of cognitive load is essential for assessing the factors affecting travel safety.
Future research should investigate the impact of different cognitive loads on travel safety. This could involve studying the effects of multichannel navigation instructions (visual, auditory, and tactile) on cognitive load, as well as the impact of different sensory channels on the effectiveness of navigation instructions. For example, the present results suggest that visual navigation is most effective under no-load conditions, while auditory navigation is least affected by additional load; however, further research is needed to understand the specific effects of different navigation instructions on cognitive load and travel safety.
Additionally, it would be valuable to evaluate the safety implications of different navigation and transportation methods in the context of the emerging intelligent era. For example, with the development of self-driving cars and other advanced transportation technologies, it is important to understand how these technologies can be designed to optimize safety and user satisfaction. Such research could provide important insights into the factors that affect travel safety and could help develop more effective and sustainable navigation and transportation technologies.
Overall, this line of research has the potential to improve the usability and safety of navigation systems and to enhance the user experience of navigation and transportation technologies. By better understanding the effects of cognitive load on navigation performance and safety, researchers and designers can develop navigation systems that adapt to different cognitive load conditions, ultimately helping to make navigation and transportation more efficient, safe, and user-friendly in the intelligent era.

Author Contributions

Conceptualization, X.Z. and T.X.; Data curation, L.J.; formal analysis, J.Z.; funding acquisition, D.-B.L.; methodology, L.J., J.Z. and T.X.; project administration, D.-B.L.; resources, X.Z.; software, L.J. and J.L.; supervision, X.Z. and D.-B.L.; writing—original draft, T.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by grants from the National Social Science Foundation of China (21WZSB032 and 18BYY089), the Humanity and Social Science Youth Foundation of the Ministry of Education of China, grant number 18YJCZH249, the Guangzhou Science and Technology Planning Project, grant number 201904010241, and the Humanity Design and Engineering Research Team (263303306).

Institutional Review Board Statement

This work has been approved by the Departmental Ethics Committee and the Institutional Review Board of the Guangdong University of Technology.

Informed Consent Statement

Written informed consent has been obtained from the participants to publish this paper.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rümelin, S.; Rukzio, E.; Hardy, R. NaviRadar: A novel tactile information display for pedestrian navigation. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011. [Google Scholar]
  2. Radoczky, V. How to design a pedestrian navigation system for indoor and outdoor environments. In Location Based Services and TeleCartography; Springer: Berlin/Heidelberg, Germany, 2007; pp. 301–316. [Google Scholar]
  3. Renaudin, V.; Dommes, A.; Guilbot, M. Engineering, human, and legal challenges of navigation systems for personal mobility. IEEE Trans. Intell. Transp. Syst. 2016, 18, 177–191. [Google Scholar] [CrossRef]
  4. Jicol, C.; Lloyd-Esenkaya, T.; Proulx, M.J.; Lange-Smith, S.; Scheller, M.; O’Neill, E.; Petrini, K. Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired. Front. Psychol. 2020, 11, 1443. [Google Scholar] [CrossRef]
  5. Brugger, A.; Richter, K.F.; Fabrikant, S.I. How does navigation system behavior influence human behavior? Cogn. Res. Princ. Implic. 2019, 4, 5. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Shah, P.; Miyake, A. The Cambridge Handbook of Visuospatial Thinking; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
  7. Fang, H.; Song, Z.; Yang, L.; Ma, Y.; Qin, Q. Spatial Cognitive Elements of VR Mobile City Navigation Map. Geomat. Inf. Sci. Wuhan Univ. 2019, 8, 1124–1130. [Google Scholar]
  8. Zhang, X.; Jiang, N.; Feng, C.; Zhang, J. Research on Spatial Cognition of LBS Mobile Navigation Map. Surv. Mapp. Geol. Miner. Resour. 2013, 29, 3–5. [Google Scholar]
  9. Allen, G.L. Cognitive Abilities in the Service of Wayfinding: A Functional Approach. Prof. Geogr. 1999, 51, 555–561. [Google Scholar] [CrossRef]
  10. Ishikawa, T.; Fujiwara, H.; Imai, O.; Okabe, A. Wayfinding with a GPS-based mobile navigation system: A comparison with maps and direct experience. J. Environ. Psychol. 2008, 28, 74–82. [Google Scholar] [CrossRef]
  11. Willis, K.S.; Hölscher, C.; Wilbertz, G.; Li, C. A comparison of spatial knowledge acquisition with maps and mobile maps. Comput. Environ. Urban Syst. 2009, 33, 100–110. [Google Scholar] [CrossRef]
  12. Gardony, A.L.; Brunyé, T.T.; Mahoney, C.R.; Taylor, H.A. How Navigational Aids Impair Spatial Memory: Evidence for Divided Attention. Spat. Cogn. Comput. 2013, 13, 319–350. [Google Scholar] [CrossRef]
  13. Small, D.M.; Prescott, J. Odor/taste integration and the perception of flavor. Exp. Brain Res. 2005, 166, 345–357. [Google Scholar] [CrossRef]
  14. Parasuraman, R. Designing automation for human use: Empirical studies and quantitative models. Ergonomics 2000, 43, 931–951. [Google Scholar] [CrossRef] [PubMed]
  15. van der Bie, J.; Ben Allouch, S.; Jaschinski, C. Communicating Multimodal Wayfinding Messages for Visually Impaired People via Wearables. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, Taipei, Taiwan, 1–4 October 2019; p. 7. [Google Scholar]
  16. Liljedahl, M.; Lindberg, S.; Delsing, K.; Polojärvi, M.; Saloranta, T.; Alakärppä, I. Testing Two Tools for Multimodal Navigation. Adv. Hum.-Comput. Interact. 2012, 2012, 1–10. [Google Scholar] [CrossRef]
  17. Narzt, W.; Pomberger, G.; Ferscha, A.; Kolb, D.; Müller, R.; Wieghardt, J.; Hörtner, H.; Lindinger, C. Augmented reality navigation systems. Univers. Access Inf. Soc. 2005, 4, 177–187. [Google Scholar] [CrossRef]
  18. Scateni, G.C.F.S.R. AR Turn-by-turn navigation in small urban areas and information browsing. Smart Tools Apps Graph. 2014. [Google Scholar] [CrossRef]
  19. Giannopoulos, I.; Kiefer, P.; Raubal, M. GazeNav: Gaze-Based Pedestrian Navigation. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services, Copenhagen, Denmark, 24–27 August 2015; pp. 337–346. [Google Scholar]
  20. Hussain, I.; Chen, L.; Mirza, H.T.; Wang, L.; Chen, G.; Memon, I. Chinese-Based Spearcons: Improving Pedestrian Navigation Performance in Eyes-Free Environment. Int. J. Hum.-Comput. Interact. 2016, 32, 460–469. [Google Scholar] [CrossRef]
  21. Hussain, I.; Chen, L.; Mirza, H.T.; Xing, K.; Chen, G. A Comparative Study of Sonification Methods to Represent Distance and Forward-Direction in Pedestrian Navigation. Int. J. Hum.-Comput. Interact. 2014, 30, 740–751. [Google Scholar] [CrossRef]
  22. Shelton, J.; Kumar, G.P. Comparison between Auditory and Visual Simple Reaction Times. Neurosci. Med. 2010, 1, 30–32. [Google Scholar] [CrossRef] [Green Version]
  23. Brown, L.M.; Brewster, S.A.; Purchase, H.C. Multidimensional tactons for non-visual information presentation in mobile devices. In Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services, Helsinki, Finland, 12–15 September 2006. [Google Scholar]
  24. Pielot, M.; Boll, S. Tactile Wayfinder: Comparison of tactile waypoint navigation with commercial pedestrian navigation systems. In International Conference on Pervasive Computing; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  25. Arab, F.; Panëels, S.; Anastassova, M.; Coeugnet, S.; Le Morellec, F.; Dommes, A.; Chevalier, A. Haptic patterns and older adults: To repeat or not to repeat? In Proceedings of the 2015 IEEE World Haptics Conference (WHC), Evanston, IL, USA, 22–26 June 2015. [Google Scholar]
  26. Ernst, M.; Banks, M. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 2002, 415, 429–433. [Google Scholar]
  27. Sweller, J. Element Interactivity and Intrinsic, Extraneous, and Germane Cognitive Load. Educ. Psychol. Rev. 2010, 22, 123–138. [Google Scholar] [CrossRef]
  28. Sweller, J. Cognitive Load During Problem Solving: Effects on Learning. Cogn. Sci. 1988, 12, 257–285. [Google Scholar] [CrossRef]
  29. Ludwig, B.; Müller, M.; Ohm, C. Empirical Evidence for Context-aware Interfaces to Pedestrian Navigation Systems. KI—Künstliche Intell. 2014, 28, 271–281. [Google Scholar] [CrossRef]
  30. Münzer, S.; Hölscher, C. Entwicklung und Validierung eines Fragebogens zu räumlichen Strategien. Diagnostica 2011. [Google Scholar] [CrossRef]
  31. Wahn, B.; Konig, P. Can Limitations of Visuospatial Attention Be Circumvented? A Review. Front. Psychol. 2017, 8, 1896. [Google Scholar] [CrossRef] [PubMed]
  32. Weisberg, S.M.; Newcombe, N.S. Cognitive Maps: Some People Make Them, Some People Struggle. Curr. Dir. Psychol. Sci. 2018, 27, 220–226. [Google Scholar] [PubMed]
  33. Ben-Elia, E. An exploratory real-world wayfinding experiment: A comparison of drivers’ spatial learning with a paper map vs. turn-by-turn audiovisual route guidance. Transp. Res. Interdiscip. Perspect. 2021, 9, 100280. [Google Scholar] [CrossRef]
  34. Gramann, K.; Hoepner, P.; Karrer-Gauss, K. Modified Navigation Instructions for Spatial Navigation Assistance Systems Lead to Incidental Spatial Learning. Front. Psychol. 2017, 8, 193. [Google Scholar] [CrossRef] [Green Version]
  35. Kasozi, K.I.; Mbiydzneyuy, N.E.; Namubiru, S.; Safiriyu, A.A.; Sulaiman, S.O.; Okpanachi, A.O.; Ninsiima, H.I. A study on visual, audio and tactile reaction time among medical students at Kampala International University in Uganda. Afr. Health Sci. 2018, 18, 828–836. [Google Scholar] [CrossRef] [Green Version]
  36. Yagi, Y.; Coburn, K.L.; Estes, K.M.; Arruda, J.E. Effects of aerobic exercise and gender on visual and auditory P300, reaction time, and accuracy. Eur. J. Appl. Physiol. Occup. Physiol. 1999, 80, 402–408. [Google Scholar] [CrossRef]
  37. Thompson, P.; Colebatch, J.; Brown, P.; Rothwell, J.; Day, B.; Obeso, J.; Marsden, C. Voluntary stimulus-sensitive jerks and jumps mimicking myoclonus or pathological startle syndromes. Mov. Disord. Off. J. Mov. Disord. Soc. 1992, 7, 257–262. [Google Scholar] [CrossRef]
  38. Verleger, R. On the utility of P3 latency as an index of mental chronometry. Psychophysiology 1997, 34, 131–156. [Google Scholar] [CrossRef]
Figure 1. The relationship between the navigation task and the cognitive load in humans (blue), machine (yellow), and environment (green).
Figure 2. Visual navigation. As shown in the picture, the steering direction turns red for a left turn.
Figure 3. Devices used in the experiment. 1. An ASUS computer (GU603H). The display shows a section of the route under virtual conditions; the computer speakers play the audio. 2. A computer external keyboard. Participants press the corresponding left, straight, and right direction keys after receiving navigation cues to complete the corresponding direction selection. 3. Haptic wearable devices (including the vibrating wristband, an Arduino UNO R3). 4. iPad. Used in the visual load. Participants used this device to complete the puzzle. The puzzle details are available at (https://www.jigsawplanet.com/?rc=play&pid=1aef19862cc6, accessed on 20 June 2022).
Figure 4. Computer program interface. (a) Home page. The participants experienced the program for the first time and became familiar with the experimental operation; they selected the navigation type and cognitive load task, entered the participant number, and pressed the ok button to enter the test; (b) The route in progress. This simulated the time participants traveled on the route; (c) Select the direction to proceed. The participants received a navigation instruction and pressed the direction button to select; (d) Wrong choice. The round of navigation was over, and they returned to the home page; (e) Successfully reached. They returned to the home page.
Figure 5. The marginal mean of the reaction time with respect to the estimated incremental cognitive load for the three navigation types.
Table 1. The mean and standard deviation (ms) of the reaction times under the different conditions.

              | Visual Navigation         | Auditory Navigation       | Tactile Navigation
Visual Load   | M = 2175.75, SD = 1304.39 | M = 1865.97, SD = 250.85  | M = 2035.03, SD = 880.23
Auditory Load | M = 922.62, SD = 318.77   | M = 1903.43, SD = 257.53  | M = 1529.04, SD = 700.91
Load-free     | M = 1173.41, SD = 619.12  | M = 1788.04, SD = 233.05  | M = 1204.70, SD = 275.11
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
