1. Introduction
3D visualization of geospatial data is employed today in many fields and in relation to many specific issues. Some universal applications, such as Google Earth™ or Virtual Earth®, and many domain-specific solutions can be applied in several areas. An overview of these is presented, for example, by Shiode [1], Abdul-Rahman and Pilouk [2], or Biljecki et al. [3]. Despite the wide dissemination of 3D visualization applications, relatively little is known about their theoretical background. We can still agree with the notion of Wood et al. [4], who claimed that we do not know enough about how 3D visualizations can be used appropriately and effectively.
Buchroithner and Knust [5] distinguished between two basic types of 3D visualization: Real-3D and pseudo-3D visualization. Real-3D visualizations engage both binocular and monocular depth cues in geovisualizations using the principles of stereoscopy. Pseudo-3D visualizations are usually displayed on planar media (e.g., computer screens or widescreen projections) and are perceived by engaging only monocular depth cues [5]. In general, pseudo-3D visualization is considered a cheaper and more widely disseminated type of visualization, since it places no further demands on peripheral technology to provide stereoscopy. This paper examines pseudo-3D visualization in more detail, specifically studying 3D maps presented on planar media.
Many different definitions of 3D maps exist. Some of them, for example, Hájek, Jedlička, and Čada [6], concentrate mainly on map content, where 3D maps are understood to include Digital Terrain Models, 2D data draped onto terrain, 3D models of objects, or 3D symbols. Other definitions describe the specifics of the 3D map creation process (generalization, symbolization). Haeberling, Bär, and Hurni [7] define a 3D map as the generalized representation of a specific area using symbolization to illustrate its physical features. Finally, some definitions are more complex and consider the characteristics of the resulting 3D map. Bandrova [8] defines a 3D map as a computer-generated, mathematically defined, three-dimensional, highly realistic virtual representation of the world's surface, which also includes the objects and phenomena of nature and society. Schobesberger and Patterson [9] characterize a 3D map as the depiction of terrain with faux three-dimensionality containing perspective that diminishes the scale of distant areas.
We understand a 3D map as a pseudo-3D or real-3D depiction of the geographic environment and its natural or socio-economic objects and phenomena using a mathematical basis (geographical or projection coordinate systems with a Z-scale of input data and a graphical projection, such as perspective or orthogonal projection) and cartographical processes (generalization, symbolization, etc.). 3D maps usually employ a bird's eye view. We define an interactive 3D map as a 3D map that allows, at least, navigational (or viewpoint) interactivity [10], whilst static 3D maps are most often perspective views. Static 3D maps can also be tangible (for example, printed by a 3D printer), but in this paper, we deal only with 3D virtual maps displayed on a computer screen.
This paper consists of a theoretical section, which is a literature review, and an empirical section, which is an experimental study. The literature review is divided into three parts, according to three fundamental factors important in cartographic user studies: Stimuli, task, and user [11,12]. The main objectives of the empirical section relate to these three dimensions. First, we want to identify the differences between interactive and static 3D maps. Second, we want to explore the role of different types of tasks performed on 3D maps, and the individual and group differences between 3D map users.
2. Related Concepts and Research
The differences between interactive and static 3D maps are based on the different psychological processes underlying their perception. From this point of view, we can discuss the concept of informational and computational equivalence [13]. Larkin and Simon [13] suggested that two different visualizations are informationally equivalent when the information contained in one visualization is derivable from the other. However, the processes of derivation may require a different level of computation, since visualizations may be depicted using different graphics or interfaces. Two different external representations (in our case, 3D maps) are considered computationally equivalent if a person needs to perform the same number of mental processes (computations) when reading them to obtain the same information. The issue of interactivity in 3D maps is also discussed in the field of cartography, as interactive visualization solves problems with the overall visibility and readability of complex and informationally rich areas [14].
Complex terrain models and 3D maps can be identical in terms of the information they contain; however, specific options to interact with them may promote, or conversely, hinder the number of computations required when reading them. This implies that two equivalent 3D maps may not be comparable when considering only their content. It is necessary to consider the process of interaction with the specific interface (which can be characterized as physical computation), and based on this interaction, we need to measure the mental processes required to obtain specific information (mental computations). Following the work of Vygotsky and later Leontiev, human activity is regarded as the core aspect of the process of perception and directly structures the user's cognitive functions [15]. Leontiev [16] understood human activity as a "circular structure" comprising: Initial afferentation → effector processes regulating contact with the objective environment → correction and enrichment, by means of reverse connections, of the original afferent image.
Boy [15] noted the importance of the actionist perspective, which sees the human mental reflection of the world as created not exclusively by external influences, but also by the processes through which users/people come into practical contact with the objective world. Neisser adopted a similar point of view in his cyclic model [17]. We must consider the fact that the user/operator is no longer regarded as a passive cognitive system experiencing the stimuli given by the external environment (as was usual in the traditional experimental research paradigm). The user is an active scout, exploring the environment or system and engaging his or her inner intentions, and can influence the situation with specific activities or by following specific goals [15]. From this point of view, the effectiveness of interactive visualizations can be tested against static visualizations (also in terms of the type and quantity of interactions).
2.1. Static versus Interactive 3D Maps
Most of the recent user studies in cartography use either only static 3D maps or only interactive 3D maps as stimuli, not both. Static stimuli have been used, for example, by Schobesberger and Patterson [9], Engel et al. [18], Niedomysl et al. [19], Popelka and Brychtová [20], Seipel [21], Popelka and Doležalová [22], Preppernau and Jenny [23], Rautenbach et al. [24], Zhou et al. [25], and Liu et al. [26], whilst interactive stimuli have been used by Abend et al. [27], Wilkening and Fabrikant [28], Treves et al. [29], Špriňarová et al. [30], McKenzie and Klippel [31], Carbonell-Carrera and Saorín [32], and Herman et al. [33].
Some studies have comprised two successive parts, using interactive stimuli in one part and static stimuli in the other, for example, Juřík et al. [34]. A direct comparison of static versus interactive visualizations was conducted by Bleisch, Dykes, and Nebiker [35], Herbert and Chen [36], and Kubíček et al. [37]. Bleisch, Dykes, and Nebiker [35] compared the reading of bar chart heights in static 2D visualizations and bar charts placed in a 3D environment. Interaction in the 3D environment was enabled, but it was not monitored or analyzed, so we do not know whether participants used the interactive capabilities of the 3D stimuli to achieve a better solution or whether they made decisions solely on the basis of visual information. Herbert and Chen [36] tried to identify whether users preferred 2D maps and plans or interactive geovisualizations from the ArcScene software in matters of spatial planning. In both of these studies, two independent variables (level of interactivity and dimensionality of visualization) were not treated separately, so it is not possible to identify their individual effects.
Kubíček et al. [37] investigated the roles of both types of 3D visualization (pseudo-3D versus real-3D) and the level of navigational interactivity (static versus interactive) when working with terrain profiles. The results of this study indicated that the type of 3D visualization does not affect user performance significantly, but that the level of navigational interactivity has a significant influence on the usability of a particular 3D visualization. Previous experiments suffered from several methodological limitations (e.g., an untreated primacy effect, whereby solving tasks in the early phase may improve performance in the later phase), and with regard to this work, we proposed a design that focuses exclusively on comparing interactive and non-interactive 3D maps. In this study, we focused on designing a test that would make the comparison of static and interactive 3D maps as objective as possible.
The test battery and data collection procedure for comparing static and interactive 3D maps were evaluated in a pilot study conducted by Herman and Stachoň [38]. In the present study, we emphasize the importance of measuring and analyzing the process of interaction in its entire complexity (i.e., as deeply as possible) while controlling all the residual variables.
2.2. The Nature of Tasks in 3D Maps
3D maps can be used for different purposes, and these specific purposes predetermine the nature of the tasks to be solved. Interactivity represents only one of the possible factors; another is task complexity. Increasing the complexity of a task changes the quality of the cognitive processes involved in solving it; in the language of mathematics, we no longer speak of addition but rather of multiplication or squaring. We can suppose that as the complexity of the task grows, the significance of interactivity will be distinctly emphasized: in a more complex task, interactivity will be more helpful, and we therefore expect a significant and larger effect in the interactive condition. To address this question, it is necessary to determine a typology of tasks for 3D maps in terms of their complexity. Various taxonomies of tasks for 3D maps have been used, for example, by Kjellin et al. [39], Boér, Çöltekin, and Clarke [40], and Rautenbach et al. [41]. The basic tasks describing interaction with a specific interface or environment are called interaction primitives (IPs) [10]. IPs represent the elementary activities that can be performed during the process of interaction. An applicable taxonomy of IPs (tasks) for 3D maps was proposed by Juřík, Herman, and Šašinka [42]. A generalized version of this taxonomy is shown in Table 1.
The frequency of use of individual task types varies considerably in the papers analyzed. Some are used very often (search and spatial understanding), while others are used relatively rarely. We selected tasks related to pattern recognition, spatial understanding, and combined tasks. The simplest search tasks were not tested separately, as they form part of the more complex tasks. Pattern recognition tasks had not been used in any previous user study with 3D maps, and testing this type of task was therefore a challenge. Spatial understanding tasks, which have been used in most studies, were selected to make the results of our study comparable. We did not consider planning or shape description tasks because their evaluation can be difficult and largely subjective. Combined tasks have not been used often, but they lead to more complex cognitive processes and emulate real user interaction, so this type of task was also applied.
2.3. Users of 3D Maps and Their Spatial Abilities
Some previous studies have suggested that the use of 3D maps may promote realistic perception of the spatial arrangement of a scene, making it easier for laypersons to form an impression of the scene without the need for any symbolic language [37]. By contrast, some studies supported the claim that certain forms of 3D visualization may increase the time required to solve tasks and create visual discomfort during use [21,45]. For experts experienced in map reading, 3D visualization may lower the clarity of the depicted content and increase the chance of making an error [34].
These inconsistencies in map depiction have still not been explored in depth, as several important factors contribute. Besides the type of visualization, interactivity, and task type (discussed above), the level of (geo)expertise and innate spatial abilities (or, more precisely, cognitive style) are involved when dealing with map content. The observer's focus on an object in a scene, or on the overall spatial arrangement of the scene, plays an important role in computational processes. Individual spatial abilities determine how efficiently people remember and understand the spatial arrangement of a scene, and based on their mental image of this scene, they can be more or less successful when dealing with specific tasks [46,47].
The existence of people who are more spatially oriented, those who are more object oriented, and those who are mainly verbally oriented has been explored in many psychological studies [48,49,50]. This orientation is measured, for example, with the Object-Spatial Imagery and Verbal Questionnaire (OSIVQ) developed by Blazhenkova and Kozhevnikov [51]. From this point of view, these three factors must be considered as part of the experimental design when evaluating geographical products. In the present study, we involved experts and laypersons in the field of geography (to compare the results of these two groups of users) and measured their object-spatial orientation using the OSIVQ. Both groups were tested on interactive and static 3D maps.
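To make the scoring concrete, the sketch below averages the 1-5 Likert responses of the items belonging to each OSIVQ scale, which is the scoring convention reported by Blazhenkova and Kozhevnikov [51]; the item-to-scale key shown here is a placeholder, not the published assignment.

```python
def osivq_scores(responses, scale_items):
    """Average 1-5 Likert ratings per OSIVQ scale.

    `responses` maps item number -> rating; `scale_items` maps a scale
    name ('object', 'spatial', 'verbal') -> the item numbers belonging
    to that scale."""
    return {scale: sum(responses[i] for i in items) / len(items)
            for scale, items in scale_items.items()}

# Placeholder key: the published OSIVQ assigns 15 of its 45 items to
# each scale; the grouping below is illustrative only.
key = {
    "object": range(1, 16),
    "spatial": range(16, 31),
    "verbal": range(31, 46),
}
ratings = {i: 3 for i in range(1, 46)}  # dummy all-"3" answers
print(osivq_scores(ratings, key))       # {'object': 3.0, 'spatial': 3.0, 'verbal': 3.0}
```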
3. Experimental Study
The aim of the study is to analyze the differences in effectiveness (correctness) [52], efficiency (response times) [53], and subjective preferences [52] when working with static and interactive 3D maps. Correctness, response times, and subjective preferences were the dependent variables. The independent variables were the level of interactivity, the level of expertise, and the task type. We addressed three research questions (RQ), which were further specified by nine hypotheses (H). The research questions and hypotheses were defined based on the literature and our pilot studies. Hypotheses H1, H2, and H3 were based on Kubíček et al. [37] and Herman and Stachoň [38]. Hypotheses H4 and H5 were based on Špriňarová et al. [30] and Juřík et al. [34]. Hypotheses H6 and H7 were based on Bowman et al. [54] and Herman et al. [55]. Hypotheses H8 and H9 were based on Štěrba et al. [56] and Stachoň et al. [57].
RQ1: Does user performance differ between interactive and static 3D maps?
- H1: Participants solve interactive tasks with greater accuracy than static tasks.
- H2: Participants solve static tasks faster than interactive tasks.
- H3: Participants subjectively prefer interactive tasks to static tasks.
RQ2: Does user performance differ regarding different task types?
- H4: Static and interactive tasks show significant differences in accuracy in all three task subcategories (spatial understanding, pattern recognition, and combined tasks).
- H5: Static and interactive tasks show significant differences in the time required to complete the tasks in all three task subcategories (spatial understanding, pattern recognition, and combined tasks).
RQ3: Does user performance differ between experts and laypersons?
- H6: Experts solve the tasks with greater accuracy than laypersons.
- H7: Experts solve the tasks faster than laypersons.
- H8: Accuracy and speed of user response in laypersons correlate significantly with a high spatial factor score in the OSIVQ questionnaire.
- H9: Accuracy and speed of user response in experts do not correlate with a high spatial factor score in the OSIVQ questionnaire.
3.1. Methods
3.1.1. Participants
A total of 76 participants took part in the study. Testing was conducted in May and June 2018. The participants were recruited via email, social networks, and personal contact. The overwhelming majority of the participants were students or graduates of Masaryk University and the Technical University of Liberec. Masaryk University's ethics committee approved this research. The majority of the participants could be considered experts, as they were geography or cartography graduates or students who had obtained at least a bachelor's degree. A smaller number of participants were members of the general public. For more details about gender, age, self-reported experience, and field of education, see Table 2. The participants agreed to the experimental procedure, participated voluntarily, and could withdraw freely from the experiment at any time. All the participants had normal or corrected-to-normal vision. The environmental conditions (including lighting and other environmental factors) were kept constant for all the participants.
3.1.2. Procedure
A mixed factorial (2 × 3 × 2) design [58] was chosen for the study. The level of interactivity (static vs. interactive) and task type (spatial understanding, pattern recognition, and combined tasks) were the within-subject factors. The level of expertise (experts vs. laypersons) was the between-subject factor. To maximize the internal validity of the study, four versions of the test battery (Figure 1: I, II, III, and IV) were created to counterbalance the static and interactive tasks. The geographical stimuli could not be artificially designed, so the geographical nature of specific tasks was also counterbalanced (regarding the specific region used and its difficulty, as discussed below). The tasks in the test battery were counterbalanced to prevent a primacy/learning effect and to reduce the potential influence of task diversity. Equal numbers of experts and laypersons were assigned to each of these four versions.
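As an illustration of the counterbalancing logic, the following minimal sketch assigns participants to the four battery versions while keeping expert and layperson counts equal per version; the participant data structure and labels are assumptions, not part of the original test software.

```python
from itertools import cycle

def assign_versions(participants):
    """Assign participants to the four counterbalanced battery versions
    (I-IV) so that experts and laypersons are spread evenly across them.

    `participants` is a list of (participant_id, is_expert) tuples; which
    tasks are static or interactive in each version is defined by the
    test battery itself."""
    versions = {"I": [], "II": [], "III": [], "IV": []}
    # Separate rotations per expertise group keep group sizes equal
    # across versions, as required by the between-subject factor.
    expert_cycle = cycle(versions)
    layperson_cycle = cycle(versions)
    for pid, is_expert in participants:
        version = next(expert_cycle) if is_expert else next(layperson_cycle)
        versions[version].append(pid)
    return versions

# Example: 8 participants, half of them experts.
people = [(f"P{i:02d}", i % 2 == 0) for i in range(8)]
print(assign_versions(people))
```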
The test battery began with an introductory questionnaire on personal information and previous 3D visualization experience. Two training tasks followed (one static and one interactive, in which participants tried all three possible types of virtual movement). After training, 24 test tasks with 3D maps were completed. Finally, the OSIVQ questionnaire was administered. The test tasks with 3D maps were divided into six blocks. Each block was introduced with detailed instructions and ended with a brief subjective evaluation. Before testing, participants were instructed that the correctness of responses was important, that their response times would be recorded, and that they should ideally solve the tasks both accurately and quickly.
For this outline of 3D map tasks, we created tasks that engaged elementary cognitive processes, as in Reference [59]. We selected spatial understanding tasks (A, B, C), pattern recognition tasks (E), and combined tasks (D, F). Specifically, we formulated the tasks as follows:
A. Select which of the four buildings is at the lowest altitude.
B. Select which of the four buildings is in the location with the lowest signal intensity.
C. Determine which of the four buildings are visible from the top of the signal transmitter.
D. Determine which of the four buildings are visible from the top of the signal transmitter and are also in the location with the lowest signal intensity.
E. Determine whether the spatial distribution of signal intensity depends on altitude or on terrain slope.
F. Compare the average altitudes and average terrain slopes in the highlighted areas.
Signal intensity (in tasks B, D, and E) was depicted with an orange color scale (color intensity). Responses to all the above-mentioned tasks required choosing from four options. Most of the tasks (A, B, D, E, F) had only one correct answer; only task C required more than one correct answer for successful completion.
3.1.3. Apparatus
3DmoveR (3D Movement and Interaction Recorder), an original application developed at the Faculty of Science, Masaryk University [55] and optimized to record the process of user interaction with 3D maps, was employed for user testing. 3DmoveR is based on a combination of user logging and an online questionnaire engaging practical spatial tasks. The application is freely available under a BSD (Berkeley Software Distribution) license. Open web technologies (JavaScript, jQuery, WebGL, and PHP) were used for its implementation. All user interaction data and user responses were recorded and stored on the server for later analysis. 3DmoveR and two derived variants, 3DtouchR (3D Touch Interaction Recorder) and 3DgazeR (3D Gaze Recorder), have been used in several user studies [31,38,42,43,44,55,60].
For the empirical part of this study, 3DmoveR version 2.0 was used. The main shift from the previous version lay in replacing the X3DOM library for rendering 3D geospatial data with the similarly focused Three.js library. This change extended support to various types of devices (mouse-controlled desktop PCs, laptops with touchpads, or tablets) and across operating system platforms and web browsers. In addition to better hardware and software support, this change brought other benefits, such as automated, and therefore faster, stimuli preparation (using open-source GIS tools: QGIS 2.18 and the Qgis2threejs plugin), more precise stimulus control settings (assigning specific movements to different keys or prohibiting all types of movement for static stimuli), and customization of user movement in 3D scenes, offering better control and greater accuracy than the previous 3DmoveR version.
Although 3DmoveR is primarily designed for testing 3D geospatial data, it is also possible to create slides containing classic questionnaires (e.g., the OSIVQ) with this tool. Based on the results of a previous survey described in Juřík et al. [60], the testing interface comprised a classic PC with keyboard and mouse, a monitor (screen resolution 1920 × 1080 px), and the Windows OS. The application was launched in the Google Chrome web browser, as the survey in Juřík et al. [60] found that this software configuration was the most commonly used by respondents. This configuration also contributed to the ecological validity of the results.
3.1.4. Materials
Digital terrain models formed the principal part of the stimuli in this experiment. Terrain models from the EU-DEM [61], a freely available data source, were the primary data input for the stimuli. Four homogeneous areas from different parts of Europe were chosen for processing. Two areas represented mountainous terrain (southeastern France; the borderland of Italy and Austria), and the other two represented less rugged, rather hilly terrain (southern Norway; the borderland of the Czech Republic, Poland, and Germany). Each area was divided into six squares (20 × 20 km), and two similar squares were prepared for the training tasks. These digital terrain models were processed in QGIS 2.18 with the Qgis2threejs plugin. The objects required for each task were created and edited manually in QGIS, and the textures for tasks from blocks B, D, and E were also created in QGIS. The symbology and visualization style, as well as the Z-factor (1.5, the default value), were set in the Qgis2threejs plugin.
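For illustration, the tiling step can be expressed as follows; this is a sketch assuming a projected metric coordinate system (e.g., the EU-DEM's EPSG:3035), and the corner coordinates in the example are invented, not those of the actual study areas.

```python
def tile_extents(min_x, min_y, cols=3, rows=2, size_m=20_000):
    """Split a study area into `cols` x `rows` square tiles of
    `size_m` metres, returning (xmin, ymin, xmax, ymax) extents.
    Assumes a projected (metric) coordinate system."""
    extents = []
    for r in range(rows):
        for c in range(cols):
            xmin = min_x + c * size_m
            ymin = min_y + r * size_m
            extents.append((xmin, ymin, xmin + size_m, ymin + size_m))
    return extents

# Hypothetical lower-left corner of one study area (EPSG:3035 metres);
# yields the six 20 x 20 km tiles described above.
for ext in tile_extents(4_200_000, 2_500_000):
    print(ext)
```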
The HTML and JavaScript files were then exported from the plugin. The graphical user interface (GUI) for entering responses was created in HTML (Figure 2). All 3D scene controls were defined within one JavaScript file and modified so that the interactive variants were controlled only by the mouse and the static variants of the 3D scenes allowed no navigational interactivity. The initial position of the virtual camera in the interactive version of each task corresponded to the position of the virtual camera in the static version of the same task. A detailed description of the experimental testing is available in an online video (https://youtu.be/Xat0slCx-Yg).
3.2. Results
The research design included three main factors, so we first performed a three-way analysis of variance (ANOVA), following Warne [62], to gain a complex picture of the research issue. We analyzed the influence of the observed factors (level of interactivity, level of expertise, and task type) and their interaction effects on the participants' performance (i.e., correctness and response times). The interactivity factor had two levels (interactive and static), the task type factor had three levels (spatial understanding, pattern recognition, and combined tasks), and the expertise factor had two levels (laypersons and experts). Descriptive statistics are given in Table 3 and the ANOVA results in Table 4. Regarding the hypotheses outlined, we further analyzed the data using the t-test with Levene's test for equality of variances [63] or the Mann-Whitney U test [64] (depending on whether the data had a normal distribution) to look more closely at the research questions.
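The analysis pipeline described above can be sketched as follows with statsmodels and SciPy; the data frame `df` and its column names are assumptions, and the original analysis may have been run in different software.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

def three_way_anova(df, dv):
    """Three-way ANOVA: interactivity x task type x expertise.

    `df` is assumed to hold one row per participant and condition, with
    columns 'interactivity', 'task_type', 'expertise' and the dependent
    variable `dv` ('correctness' or 'time')."""
    model = smf.ols(
        f"{dv} ~ C(interactivity) * C(task_type) * C(expertise)",
        data=df,
    ).fit()
    return sm.stats.anova_lm(model, typ=2)  # Table 4-style output

def compare_groups(a, b, alpha=0.05):
    """Follow-up comparison chosen by normality, as in the text: a
    t-test when both samples pass Shapiro-Wilk, otherwise the
    Mann-Whitney U test."""
    normal = (stats.shapiro(a).pvalue > alpha
              and stats.shapiro(b).pvalue > alpha)
    if normal:
        # Levene's test decides whether equal variances can be assumed.
        equal_var = stats.levene(a, b).pvalue > alpha
        return "t-test", stats.ttest_ind(a, b, equal_var=equal_var)
    return "Mann-Whitney U", stats.mannwhitneyu(a, b)
```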
Regarding task response times, the dataset did not show a normal distribution, so we transformed the response times toward a normal distribution using the Box-Cox transformation (λ = 0.3), as recommended for working with specific variables such as time [65].
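A minimal sketch of this transformation with SciPy, using the fixed λ = 0.3 reported above; the response-time values are illustrative only.

```python
from scipy import stats

# Response times in seconds (illustrative values, not the study data).
times = [4.2, 7.9, 12.5, 3.1, 25.0, 9.4, 6.6, 18.2]

# Box-Cox with the lambda reported in the study; the transform requires
# strictly positive data, which holds for response times.
transformed = stats.boxcox(times, lmbda=0.3)

# Re-check normality after the transformation.
print(stats.shapiro(transformed))
```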
We found statistically significant differences in task type and interactivity for task-solving times, and in expertise, task type, and interactivity for correctness. Only the interaction of the interactivity and task type factors was found to be statistically significant for both correctness and response times (see Table 4 and Figure 3).
3.2.1. RQ1: Does User Performance Differ between Interactive and Static 3D Maps?
Correctness
To answer the general research question of whether interactivity influenced user performance with maps, we compared the overall performance of all the participants for the interactivity factor (static vs. interactive tasks, regardless of task type; see Figure 4). The assumption of normal distribution was not fulfilled for correctness: the Shapiro-Wilk test of normality (as in Reference [66]) did not indicate a normal distribution of the correct answers for interactive and static tasks (see Table 5). Therefore, we conducted the Mann-Whitney U test to measure the differences in correct answers between static and interactive tasks. Significant differences were found: static tasks were solved with less accuracy than interactive tasks, as shown in Figure 4 and Table 5.
Response Times
Similarly to accuracy, we compared the time the participants needed to complete all the tasks. The assumption of normal distribution of the time required for task-solving was fulfilled (see the Shapiro-Wilk test in Table 5), so we used the t-test with Levene's test for equality of variances to assess the differences between static and interactive tasks. For more details, see Table 5 and Figure 4.
Subjective Preferences
The subjective preferences of the participants indicated that most considered interactivity in maps a helpful feature for solving the given tasks. Across all the experimental tasks, 89% of the users reported that interactive task-solving was easier, as they agreed or strongly agreed with the claim that task solving was easier under interactive conditions. Figure 5 presents a detailed summary of the specific answers for each of the six task blocks.
3.2.2. RQ2: Does User Performance Differ Regarding Different Task Types?
Correctness
To gain deeper insight into the role of task complexity, we compared user performance across the three task categories: Spatial understanding, pattern recognition, and combined tasks. Accuracy in the spatial understanding and combined task categories showed statistically significant differences between the static and interactive conditions, while the pattern recognition tasks showed no significant differences. The central tendency values are summarized in Table 6.
Response Times
Differences between the task categories were also found in the time required to complete the given tasks. As with accuracy, differences were found in both spatial understanding and combined tasks, while pattern recognition tasks showed no significant differences. The central tendency values are summarized in Table 6. The data showed that combined tasks were solved the fastest, while pattern recognition tasks were solved the slowest.
3.2.3. RQ3: Does User Performance Differ between Experts and Laypersons?
Expertise
A normal distribution of the data was assumed (see Table 7), so we conducted t-tests to measure the differences between experts' and laypersons' response times and correctness. The empirical evidence supported our expectation that experts would achieve higher accuracy than laypersons when solving cartographic tasks. However, experts were neither significantly faster nor slower than laypersons. For more details, see Table 7 and Figure 6.
Cognitive Style
We investigated whether the objective performance (correctness and response time) of individual participants was related to the cognitive styles detected via the OSIVQ questionnaire. Correctness and response times were both aggregated by task type through addition. Table 8 shows the correlation coefficients calculated for the task types and all three cognitive styles (spatial, object, and verbal) from the OSIVQ. No significant correlation was found at this level. When we analyzed the data at a more detailed level, a positive correlation was found only between correctness and spatial cognitive style for experts in task block B (r = 0.347; p-value = 0.021), which was a spatial understanding task.
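The correlation check can be reproduced along these lines; the sketch assumes per-participant aggregates, and the values shown are illustrative, not the study data.

```python
from scipy import stats

# Aggregated correctness of experts in task block B and their OSIVQ
# spatial-scale scores (illustrative values only).
correct_block_b = [3, 4, 2, 4, 3, 4, 4, 2, 3, 4]
spatial_scores = [3.1, 4.0, 2.6, 3.8, 3.0, 4.2, 3.9, 2.4, 3.3, 4.1]

# Pearson's r, as reported in Table 8 and the block-B result above.
r, p = stats.pearsonr(correct_block_b, spatial_scores)
print(f"r = {r:.3f}, p-value = {p:.3f}")
```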
4. Discussion
The ANOVA results suggested significant main effects on the correctness of answers for the factors of interactivity, task type, and expertise, and for the interaction of the task type and interactivity factors, at the significance level of 0.001 (except for expertise, p < 0.05). For response times, only the factors of interactivity and task type showed significant differences, also at the significance level of 0.001. According to these results, all the mentioned factors significantly influenced user performance when evaluating the altitude of objects placed in virtual 3D visualizations. It should be mentioned that the factor of expertise had a significant effect only on correctness, implying that the role of expertise may be smaller when working with virtual 3D visualizations. The data also strongly emphasized the advantage of interactivity when working with this type of stimuli, which grew significantly with more complex (combined) tasks (see Section 3.2.1).
4.1. Research Question 1
Based on the given data, H1, H2, and H3 were confirmed. These hypotheses related to the results of the entire test (user response accuracy and speed were aggregated over all 24 tasks and all participants). As expected, we found significant differences in overall user response accuracy and speed between interactive and static 3D maps. Interactive tasks were solved with greater accuracy (H1), while static tasks were solved faster (H2). Interactivity offered users a good option to explore the content more precisely and identify the best solution (accuracy) but required more time. In the subjective comparison of interactive and static 3D maps, 89% of the participants strongly agreed or agreed with the statement that task solving was easier with interactive 3D maps. This unequal proportion corroborated the remaining hypothesis, that users preferred interactive tasks (H3).
4.2. Research Question 2
We hypothesized that interactivity had the same effect in all three subcategories of applied tasks (spatial understanding, pattern recognition, and combined tasks). For both accuracy (H4) and speed (H5), the effect in combined tasks and spatial understanding tasks favored interactivity, although in pattern recognition tasks, no significant differences between interactive and static 3D maps were found. In pattern recognition tasks, accuracy and the time required were quite high regardless of interactivity. The data suggested that pattern recognition tasks were probably more difficult than the other types, because the participants took longer to solve them, though this greater effort led them to the correct answers. In future research, we should also consider that the experiment contained fewer pattern recognition tasks (in terms of the number of trials) than tasks in the other subcategories, so an existing effect may not have been detected.
The significant improvement in accuracy in interactive combined tasks, suggested to be the most complex of the three subcategories, indicated the real added value of interactive versions of 3D maps. In complex 3D tasks, where all the necessary data cannot be depicted in an easily accessible way, the importance of interactivity grows significantly.
4.3. Research Question 3
The examination of the effect of expertise confirmed H6, which predicted greater accuracy for experts (geography students or graduates). However, H7 was rejected, as we did not find any statistically significant difference in response time between experts and laypersons. We also investigated the suggested relationship between the cognitive styles measured by the OSIVQ questionnaire and user performance in the tasks (H8 and H9). We could discern no relationship between specific cognitive styles and specific task types. Therefore, H8 was refuted, and H9 was confirmed. We can assume that all the cognitive strategies represented in the OSIVQ questionnaire were involved in the process of solving complex cartographic tasks (which 3D map tasks certainly are).
4.4. General Discussion
Some limitations of our experimental design and procedure should be discussed. Individual task types (RQ2) were somewhat difficult to compare due to the unequal numbers of experimental tasks, especially pattern recognition tasks, of which there were only four in each test variant (two interactive and two static). However, in our opinion, it is possible to compare combined tasks (8 in each test variant) and spatial understanding tasks (12 in each test variant). For this reason, we analyzed the correctness and response times over all the test answers for RQ1 (H1, H2, and H3) and RQ3 (H6 and H7). Another possible limitation relates to the number of tested participants. In general, a higher number of participants generates more representative results. However, in the experimental testing of 3D maps, the number of participants is usually lower (e.g., the average number of participants in the studies listed in Table 1 is 49), and such studies have produced significant results (e.g., References [20,28,35]).
In general, some authors have stated that comparing static and interactive maps can be problematic from the perspectives of cartography (e.g., Roth et al. [67]) and psychology (as discussed by Juřík et al. [60]). However, as mentioned in Section 2.1, other authors have made such comparisons, particularly to explore the processes of perception and decision making [36–69]. From the perspective of traditional experiments, spatial data represent noisy, ambiguous stimuli, which are hard to control and adjust as sets of controlled test tasks. Researching this issue requires a well-controlled, balanced experimental design with the maximum possible range of measurable variables, which was an objective of the present study. Importantly, from the literature review, we recognized that the results of previous user studies that used static 3D maps as stimuli cannot be generalized or transferred to interactive 3D maps or virtual reality, as fundamental components of virtual reality are 3D visualization of spatial data and navigational interactivity.
5. Conclusions and Future Work
In this study, we investigated the influence of interactivity in virtual 3D maps on user performance (accuracy and speed). The users consisted of both experts and laypersons. The participants completed an online testing battery with various task types including both interactive and static geovisualizations. We found significant differences in both accuracy and speed. Our data indicated that various tasks in 3D maps were solved more accurately in the presence of interactivity, and that users subjectively preferred to solve interactive tasks. However, tasks were solved faster with static visualizations. Further analysis indicated some differences between specific types of task-solving. Differences between experts and laypersons in overall task-solving accuracy were also identified.
Despite the limited number of available participants, our results can contribute to the development of new systems using 3D maps designed for landscape management, precision farming, environmental protection, and crisis management, where tasks that consider both terrain (altitudes and slopes) and thematic information are performed on 3D geovisualizations using color intensity as the main variable. Such 3D maps have been used, for example, by Jedlička and Charvát [70] to visualize yield potential, Herman and Řezník [71] to map noise, Christen et al. [72] to visualize avalanche simulations, and Dübel et al. [73] and Sieber et al. [74] to represent hazardous weather.
Specifically, the benefits of interactive 3D maps are influenced by factors derived from the purpose of the map: map use conditions, the type (and complexity) of map tasks, and the potential map users.
- Interactive 3D maps are suitable for purposes where a more accurate solution or decision is required and there is little or no time pressure on the speed of this decision.
- Interactive 3D maps are more suitable for complex tasks (see Section 2.2 for more on task complexity).
- Interactive 3D maps are more suitable for geospatial data experts (geographers, spatial and urban planners, etc.); the use of 3D maps for laypersons should be considered carefully.
- For user studies, one clear recommendation can be made: If experimental results are to be generalized to interactive 3D maps and virtual reality, interactive 3D maps should be used as stimuli.
From a technological point of view, it is now possible to perform user testing directly with interactive 3D maps, which may be more appropriate regarding the transferability of results into the practical design and implementation of 3D geovisualizations (see Juřík et al. [60]). Technologies that permit testing under controlled [38] and non-controlled conditions [56] can be connected to eye-tracking devices [33] and various interfaces, such as touch screens [43,44] or the Wii Remote Controller [30], and to other technologies for stereoscopic (real-3D) visualization with different levels of immersion [34,37,57].
We would like to continue this testing and will work directly on interactive 3D testing. Importantly, user interaction and movement in the virtual environment can be described, analyzed, and compared, for example, with the results of the OSIVQ questionnaire, or inspected to determine whether any relationship exists between the OSIVQ results and navigation in photorealistic and immersive virtual environments.
We also want to focus on more complex tasks that include advanced types of interaction. However, the difficulty of each task is also affected by the shape and complexity of the terrain and by the distances between the objects inserted into this terrain. It must therefore be mentioned that 3D maps (and GIS data in general) represent complex stimuli which do not allow a strict experimental design (as usually required in experimental studies). For this reason, comprehensive data collection on interactive 3D maps is required to gain better insight into the processes of decision making and task solving.