4.1. RQ1: Indoor-Outdoor vs. Outdoor-Only Mobile Navigation Aid in Campus Wayfinding (Performance and Perceived Workload)
This research question investigated how participants’ wayfinding performance and perceived workload in campus navigation tasks differed between the control group (using Google Maps, an outdoor navigation app) and the experiment group (using MazeMap, an indoor-outdoor navigation app). This research question focused on hypotheses H1 and H2.
The null hypothesis for H1 was that there was no difference between the wayfinding performance of participants in the control (outdoor navigation app: Google Maps) and experiment (indoor-outdoor navigation app: MazeMap) groups.
Wayfinding performance was measured in terms of total task duration, total help, total wrong directions, total hesitation, and total task completion. An independent t-test was conducted to compare the wayfinding performance between the control and experiment groups. The independent variable was the group with its navigation aid (outdoor navigation app vs. indoor-outdoor navigation app), and the dependent variables were the wayfinding performance parameters: total task duration, total help, total hesitation, total wrong directions, and total task completion. The t-test is appropriate for our experiment, as it tests whether there is a statistically significant difference between two independent groups. The results from the t-test are shown in Table 1.
According to the results, no significant difference was found in wayfinding performance between the control and experiment groups. There were no significant differences in total task duration (t(20) = 0.828, p = 0.418), total help (t(20) = 1.411, p = 0.174), total wrong directions (t(20) = −0.146, p = 0.886), total hesitation (t(20) = 1.101, p = 0.284), or total task completion (t(20) = −0.323, p = 0.750) between the two groups (p ≥ 0.05), failing to reject the null hypothesis. Despite this, the mean task duration (M = 24.82, SD = 7.561) for the control group (CG) was higher than that of the experiment group (EG) (M = 22.00, SD = 8.390). Similarly, the mean help (M = 1.18, SD = 1.168) and hesitation (M = 2.27, SD = 1.555) for the control group were higher than the mean help (M = 0.55, SD = 0.934) and hesitation (M = 1.64, SD = 1.120) for the experiment group. However, the magnitude of the difference in the means for total task duration (mean difference = 2.818, 95% CI: [−4.285, 9.922]), total help (mean difference = 0.636, 95% CI: [−0.304, 1.577]), and total hesitation (mean difference = 0.636, 95% CI: [−0.569, 1.842]) was not statistically significant, as each 95% confidence interval included zero. Hence, H1 was not supported.
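As a cross-check, the reported t statistic and confidence interval for total task duration can be reproduced from the summary values above. The following is a minimal sketch (not the analysis software actually used in the study), assuming equal group sizes of 11, which is implied by the reported df of 20:

```python
import math

def pooled_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic (pooled variance) from summary statistics."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df  # pooled variance
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))           # standard error of the mean difference
    return (m1 - m2) / se, df, se

# Total task duration: control group (Google Maps) vs. experiment group (MazeMap).
# n = 11 per group is an assumption inferred from the reported df = 20.
t, df, se = pooled_t_from_summary(24.82, 7.561, 11, 22.00, 8.390, 11)
print(round(t, 3), df)  # matches the reported t(20) = 0.828 up to rounding

# 95% CI for the mean difference; 2.086 is the standard two-tailed
# t critical value for df = 20 at alpha = 0.05.
diff = 24.82 - 22.00
ci = (diff - 2.086 * se, diff + 2.086 * se)
print(round(ci[0], 3), round(ci[1], 3))  # close to the reported [−4.285, 9.922]
```

Because the CI straddles zero, the mean difference is not significant at the 5% level, consistent with the p-value reported above.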
The null hypothesis for H2 was that there was no difference between the perceived workload of participants in the control (Google Maps) and experiment (MazeMap) groups.
Perceived workload was measured using the NASA TLX score and the subjective workload rating (SWR) score. Again, an independent t-test was conducted to compare the perceived workload between the control and experiment groups, as it tests whether there is a statistically significant difference between two independent groups. The independent variable was the group with its navigation aid (outdoor navigation app vs. indoor-outdoor navigation app). The dependent variables were the NASA TLX score and the SWR score. The results from the t-test for H2 are shown in Table 2. According to the results, there was no significant difference in NASA TLX score (t(20) = −0.429, p = 0.673) or SWR score (t(20) = 0.706, p = 0.488) between the two groups (p ≥ 0.05), failing to reject the null hypothesis. Hence, H2 was also not supported.
4.2. RQ2: Effect of Spatial Ability Skills on Campus Wayfinding (Performance and Perceived Workload)
This research question investigated the effect of spatial ability skills on the perceived workload and wayfinding performance of all participants in indoor-outdoor campus navigation tasks, regardless of their group (i.e., the navigation aid used). The spatial ability skills of all participants were assessed before they conducted the navigation task with a mobile navigation aid. After the first phase (pre-task phase, see Figure 1), participants were randomly assigned to the control and experiment groups. The descriptive statistics presented in Table 3 show that the spatial ability skill parameters of participants in both groups are fairly equal, which indicates that the results in the previous section were not affected by participants’ spatial ability skills. The spatial ability skills were measured on three parameters: a spatial reasoning test, a spatial orientation test, and navigation (using the Santa Barbara Sense of Direction (SBSOD) scale). The scores of these three factors were summed to obtain an overall score for spatial ability skills.
A series of Pearson’s correlations were conducted to identify any potential correlation between the participants’ spatial ability skill parameters (spatial reasoning score, spatial orientation score, and SBSOD score), their perceived workload (NASA TLX score and SWR score), and the wayfinding performance parameters (task duration, task completion, total help, total hesitation, and total wrong directions). Pearson’s correlation measures the strength and direction of the linear relationship between two variables. It was used to determine whether the variables of interest are related to each other, and the data met all the assumptions required for using Pearson’s correlation. All results for Pearson’s correlation are presented in Table 4 and Table 5. This research question focused on hypotheses H3 and H4.
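Pearson’s r is the covariance of two samples normalized by the product of their standard deviations. A minimal sketch of the computation, using hypothetical score data purely for illustration (not the study’s raw data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: higher spatial ability paired with lower workload ratings.
ability = [10, 14, 9, 17, 12, 20, 15, 8]
workload = [62, 48, 70, 35, 55, 30, 44, 75]
r = pearson_r(ability, workload)
print(round(r, 3))  # a strong negative correlation for this toy data
```

A negative r near −1, as in this toy example, is the pattern the hypotheses H3 and H4 probe for in the real data.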
The third hypothesis focuses on the relationship between spatial ability skills and workload in campus wayfinding. The null hypothesis for H3 was that the spatial ability skills of participants will not impact the workload they experience in the university campus wayfinding task.
Pearson’s test (see Table 4) revealed a significant negative relationship between the overall score for spatial ability skill and the perceived workload parameters: NASA TLX and SWR. This means that participants’ spatial ability skills affect the workload they experience in indoor-outdoor campus wayfinding tasks using mobile navigation apps: participants with higher spatial ability skills experienced less workload during campus navigation. However, looking in detail at the three parameters of spatial ability skill, only spatial orientation has a significant relationship with the NASA TLX score. There was no significant relationship between either of the other two spatial ability skill parameters (spatial reasoning and SBSOD) and perceived workload (NASA TLX score and SWR score).
Pearson’s correlation between the overall spatial ability skill and the NASA TLX score was strong and statistically significant (r = −0.559, p < 0.01), rejecting the null hypothesis. Similarly, the correlation between spatial orientation and the NASA TLX score was also strong and statistically significant (r = −0.500, p < 0.05). At the same time, a medium correlation (r = −0.466, p < 0.05) was found between the overall spatial ability skill and the SWR score. Hence, H3 was supported. This shows that participants with higher spatial ability skills experience less workload in the indoor-outdoor campus wayfinding task.
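The reported significance levels can be sanity-checked by converting each r into a t statistic with df = n − 2 and comparing it against standard critical values. This is a hedged sketch, assuming a total sample of n = 22 (inferred from the df = 20 reported for the group t-tests, not stated explicitly here):

```python
import math

def r_to_t(r, n):
    """t statistic for testing H0: rho = 0, with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r**2)

n = 22  # assumption: total sample size implied by df = 20 elsewhere in the section
# Standard two-tailed t critical values for df = 20: 2.086 (alpha = .05), 2.845 (alpha = .01).
t_tlx = r_to_t(-0.559, n)  # overall spatial ability vs. NASA TLX
t_swr = r_to_t(-0.466, n)  # overall spatial ability vs. SWR
print(round(t_tlx, 3), round(t_swr, 3))
print(abs(t_tlx) > 2.845)  # consistent with the reported p < 0.01
print(abs(t_swr) > 2.086)  # consistent with the reported p < 0.05
```

Both correlations clear their respective critical values, agreeing with the significance levels reported in Table 4.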
The fourth hypothesis focuses on the relationship between spatial ability skills and wayfinding performance in campus wayfinding. The null hypothesis for H4 was that the spatial ability skills of participants will not impact their wayfinding performance in the university campus wayfinding task.
Pearson’s test (see Table 5) revealed a significant negative relationship for only one of the three parameters of spatial ability skills, i.e., navigation (SBSOD score), with total hesitation. This means that participants’ sense of direction affects one of the five parameters of wayfinding performance, i.e., total hesitation: participants with a better sense of direction experienced less hesitation during indoor-outdoor campus navigation using the mobile app. However, neither of the other two parameters of spatial ability nor the overall spatial ability skill had a significant relationship with any of the five wayfinding performance parameters (total task duration, total help, total hesitation, total wrong directions, and total task completion). Pearson’s correlation between SBSOD and total hesitation was a medium correlation and statistically significant (r = −0.498, p < 0.05), rejecting the null hypothesis. Hence, H4 was partially supported.
4.3. RQ3: Challenges in University Campus Wayfinding Using Mobile Navigation Apps
The last research question investigated the challenges faced by participants during the indoor-outdoor university campus wayfinding task using mobile navigation apps. As the participants were divided into control and experiment groups, two navigation apps were used: the control group used Google Maps (an outdoor-only navigation app), and the experiment group used MazeMap (an indoor-outdoor navigation app). The qualitative data were collected through observation notes from the wayfinding task and short post-task interview responses from participants of both groups regarding their experience and challenges in university campus wayfinding. According to [50], qualitative data analysis is important to deepen the understanding of research participants. Therefore, the procedure described by Gioia et al. [49] was followed for analyzing, interpreting, and presenting the data. The results highlight the issues in indoor-outdoor campus wayfinding using the two navigation apps, Google Maps and MazeMap. They also provide insight into challenges related to the specific technology used and how these navigation applications can be further improved to facilitate university campus wayfinding that combines indoor and outdoor navigation.
The analysis followed the grounded theory approach described by [51,52] and resulted in a data structure with three levels, as shown in Figure 2:
- (1) First column—the first level highlighted the challenges from participants’ feedback, i.e., raw data.
- (2) Second column—the second level identified the themes for these challenges.
- (3) Third column—the third level aggregated the created themes into categories.
For simplicity of representation, in the first column of Figure 2, the participants’ feedback is encoded using the colors orange, blue, and black. Blue represents feedback from participants using Google Maps, orange represents feedback from participants using MazeMap, and black indicates that the same feedback came from both groups. Moreover, the frequency of occurrence of challenges for each theme is reflected in the second column with numbers in round brackets, using blue for Google Maps and orange for MazeMap, and the cumulative counts across all themes are presented for each category in the third column of Figure 2.
Our analysis resulted in five categories that characterize challenges in indoor-outdoor university campus wayfinding using mobile navigation apps: campus landscape-related issues, navigation app-related issues, wayfinding issues, technical issues, and mental and physical effort. According to the qualitative analysis (presented in Figure 2), the category “campus landscape related issues” was based on five themes related to challenges linked to the university campus landscape: building search, entry/exit points, floor layout, access information, and room search. It was difficult for some participants to find the right building. Moreover, participants found it challenging to find the entry and exit points to get into or out of a building. Similarly, finding room numbers and understanding the floor map by navigating corridors was troublesome for some participants. Lastly, lack of access to some places made it difficult to navigate to the desired location, as certain corridors and rooms were not accessible to everyone and required card access to open doors. As shown in Figure 2, participants in both groups (Google Maps and MazeMap) reported a similar number of challenges in this category. However, participants using Google Maps faced more difficulty in building and room searches compared to participants using MazeMap. This is explained by the fact that, in addition to outdoor navigation, MazeMap also provides indoor navigation, which Google Maps lacks. Moreover, the experiment group (participants using MazeMap) reported more access problems: they were following the route provided by the app but had access issues at certain places along that route, which was more frustrating precisely because they were following the directed path. For the control group (participants using Google Maps), the reported access issues were comparatively fewer, as participants were not using Google Maps inside and were orienting themselves, so they interpreted a blocked passage as a wrong route instead of an access issue.
The category “navigation app related issues” was based on three themes covering challenges linked to the technology used (Google Maps and MazeMap): broken in-campus/indoor navigation, missing or no information provided, and app usage/usability. For “broken in-campus/indoor navigation”, participants using Google Maps experienced more challenges than the MazeMap group, as the app does not provide indoor navigation. Most participants reported that they did not use Google Maps for indoor navigation, as it was unhelpful and pointless to use inside when one’s position in the building is not marked. In addition, some of the campus buildings were not registered in Google Maps, and participants needed to zoom in on the map to search for the building. On the contrary, participants using MazeMap had no complaints about finding the building on the map. The issues they experienced were that the indoor location given in the app was not very accurate, or that it was difficult to follow the map when the indoor positioning was not working well. For “missing or no information provided”, most participants using Google Maps complained that the app is not complete and detailed enough for campus wayfinding, as no information was provided for indoor navigation: rooms and buildings were either not found on the map or their names were unclear. On the contrary, the issues faced by the MazeMap group related more to missing information, such as missing room numbers, a lack of proper indicators for each room, or missing information about restricted access to some places. Lastly, for “app usage/usability”, the group using MazeMap reported more issues than the group using Google Maps. The usability issues related to MazeMap included inconsistencies with room search, a confusing floor bar on the left, no option for language selection, and a generally disliked overall experience of using the app. Participants using Google Maps, in contrast, only reported that they needed to scan the map to find the building and that they were not satisfied with the app for campus navigation, as it was not helpful for indoor navigation.
The “wayfinding issues” category was related to the following themes: help and support for navigation, live current location, and university signage issues. Overall, the Google Maps group experienced more wayfinding issues than the MazeMap group. For “help and support for navigation”, most participants using Google Maps relied heavily on help from physical maps and directions within the university campus. This is because indoor navigation was not provided, and even for outdoor navigation, some building names were not clear or helpful. On the contrary, based on the qualitative analysis, the participants using MazeMap only asked for assistance when they got stuck or when certain information was not provided in the app, for example, when it was difficult to understand whether to go out of the building or continue inside while following the route in the app because the signal dropped, the door was not accessible, or the current location was unclear. For “live current location”, most issues were reported by the MazeMap group. Overall, the majority of participants were satisfied with the live location feature provided by Google Maps, except that it was not available for indoor navigation. The MazeMap group also referred to it when giving feedback: “MazeMap should also provide live location like Google Maps”. Although MazeMap provides the current location, most participants faced issues with its accuracy. Most of the time, only a “?” was shown instead of the exact current location when inside a building, which made orientation difficult as participants could not understand where they were and where to go. Moreover, participants also complained that MazeMap does not automatically select the current location to create the route to the destination; instead, the user must enter the current location manually, which is sometimes difficult when the exact location is unknown.
Lastly, for “university signage issues”, most challenges were reported by participants using Google Maps. This is because participants in this group relied more on university signage and physical maps on campus, which is also in line with the themes “help and support for navigation” and “broken in-campus/indoor navigation”. The university signage issues included building names not being visible or highlighted at the entrance, room numbers being randomly assigned and unintuitive, font size and color contrast issues with the displayed room labels, missing floor maps in some buildings, physical maps inside the buildings being unhelpful for finding the meeting rooms, and a lack of proper directions for each room. These problems exist in the physical environment, not inside the maps of the navigation apps.
The category “technical issues” was related to app restarting, GPS/in-building signal issues, and internet connectivity. The participants using MazeMap reported more technical issues: there were problems with the app restarting and dropping directions, the GPS stopped working inside, and the indoor positioning system had issues, as participants experienced connection problems inside buildings. On the other hand, participants using Google Maps had only a few technical issues, related to GPS lag.
Lastly, the “mental and physical effort” category included challenges related to physical exertion and the time limit. Both groups (Google Maps and MazeMap) experienced similar challenges related to these themes. Participants were tired, as they had to walk more when the app did not help them or give them information about quicker or easier paths (such as the availability of elevators instead of stairs). Similarly, most participants were stressed by the time limit for finding a location, a pressure similar to running late for a meeting.
Overall, from the qualitative analysis presented in Figure 2, analyzing the counts in the category column (third column) shows that both the Google Maps and MazeMap groups faced a comparable number of challenges for “campus landscape related issues”, “navigation app related issues”, and “mental and physical effort”. However, the analysis highlights that participants using Google Maps experienced more “wayfinding issues”, while participants using MazeMap experienced more “technical issues”.