Article

Evaluation of User Performance in Interactive and Static 3D Maps

1 Department of Geography, Faculty of Science, Masaryk University, 611 37 Brno, Czech Republic
2 Department of Psychology, Faculty of Arts, Masaryk University, 602 00 Brno, Czech Republic
3 Department of Applied Mathematics, Faculty of Science, Humanities and Education, Technical University of Liberec, 461 17 Liberec, Czech Republic
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2018, 7(11), 415; https://doi.org/10.3390/ijgi7110415
Submission received: 4 September 2018 / Revised: 19 October 2018 / Accepted: 23 October 2018 / Published: 26 October 2018
(This article belongs to the Special Issue Cognitive Aspects of Human-Computer Interaction for GIS)

Abstract

Interactive 3D visualizations of geospatial data are currently available and popular through various applications such as Google Earth™ and others. Several studies have focused on user performance with 3D maps, but static 3D maps were mostly used as stimuli. The main objective of this paper was to identify differences between interactive and static 3D maps. We also explored the role of different tasks and the inter-individual differences of map users. In the experimental study, we analyzed effectiveness, efficiency, and subjective preferences when working with static and interactive 3D maps. The study included 76 participants and used a within-subjects design. Experimental testing was performed using our own testing tool, 3DmoveR 2.0, which is based on a user logging method and open web technologies. We demonstrated statistically significant differences between interactive and static 3D maps in effectiveness, efficiency, and subjective preferences. Interactivity influenced the results mainly in ‘spatial understanding’ and ‘combined’ tasks. From the identified differences, we concluded that the results of user studies with static 3D maps as stimuli cannot be transferred to interactive 3D visualizations or virtual reality.

1. Introduction

3D visualization of geospatial data is employed today in many fields and in relation to many specific issues. Some universal applications, such as Google Earth™ or Virtual Earth®, and many domain-specific solutions can be applied in several areas. An overview of these is presented, for example, by Shiode [1], Abdul-Rahman and Pilouk [2], or Biljecki et al. [3]. Despite the wide dissemination of 3D visualization applications, relatively little is known about their theoretical background. We can still agree with the notion of Wood et al. [4], who claimed that we do not know enough about how 3D visualizations can be used appropriately and effectively.
Buchroithner and Knust [5] distinguished between two basic types of 3D visualization: Real-3D and pseudo-3D visualization. Real-3D visualizations bring both binocular and monocular depth cues into geovisualizations using the principles of stereoscopy. Pseudo-3D visualizations are usually displayed on planar media (e.g., computer screens or widescreen projections) and are perceived by engaging only monocular depth cues [5]. In general, pseudo-3D visualization is considered a cheaper and more widely disseminated type of visualization, since it makes no further demands on peripheral technology to provide stereoscopy. This paper examines pseudo-3D visualization in more detail, specifically studying 3D maps presented on planar media.
Many different definitions of 3D maps exist. Some of them, for example, Hájek, Jedlička, and Čada [6], concentrate mainly on map content, where 3D maps are understood to include Digital Terrain Models, 2D data draped onto terrain, 3D models of objects, or 3D symbols. Other definitions describe the specifics of the 3D map creation process (generalization, symbolization). Haeberling, Bär, and Hurni [7] define a 3D map as the generalized representation of a specific area using symbolization to illustrate its physical features. Finally, some definitions are more complex and consider the characteristics of the resulting 3D map. Bandrova [8] defines a 3D map as a computer generated, mathematically defined, three-dimensional, highly-realistic virtual representation of the world’s surface, which also includes the objects and phenomena in nature and society. Schobesberger and Patterson [9] characterize a 3D map as the depiction of terrain with faux three-dimensionality containing perspective that diminishes the scale of distant areas.
We understand a 3D map as a pseudo-3D or real-3D depiction of the geographic environment and its natural or socio-economic objects and phenomena using a mathematical basis (geographical or projection coordinate systems with a Z-scale of input data and a graphical projection, such as perspective or orthogonal projection) and cartographical processes (generalization, symbolization, etc.). 3D maps usually employ a bird’s eye view. We define an interactive 3D map as a 3D map which allows, at least, navigational (or viewpoint) interactivity [10], whilst static 3D maps are most often perspective views. Static 3D maps can also be tangible (for example, printed on a 3D printer), but in this paper, we deal only with 3D virtual maps displayed on a computer screen.
This paper consists of a theoretical section, which is a literature review, and an empirical section, which is an experimental study. The literature review is divided into three parts, according to three fundamental factors important in cartography user studies: Stimuli, task, and user [11,12]. The main objectives of the empirical section relate to these three dimensions. First, we want to find out the differences between interactive and static 3D maps. Second, we want to explore the role of different types of tasks performed on the 3D maps, and the individual and group differences between 3D map users.

2. Related Concepts and Research

The differences between interactive and static 3D maps are based on the different psychological processes underlying their perception. From this point of view, we can discuss the concept of information and computational equivalence [13]. Larkin and Simon [13] suggested that two different visualizations are information equivalent when the information contained in one visualization is derivable from the other. However, the processes of derivation may require different levels of computation, since visualizations may be depicted using different graphics or interfaces. Two different external representations (in our case, 3D maps) are considered computationally equivalent if a person needs to perform the same number of mental processes (computations) when reading them to extract the same information. The issue of interactivity in 3D maps is also discussed in the field of cartography, as interactive visualization solves problems with the overall visibility and readability of complex and informationally rich areas [14].
Complex terrain models and 3D maps can be identical in terms of the information they contain; however, specific options to interact with them may facilitate or, conversely, hinder reading by changing the number of computations required. This implies that two equivalent 3D maps may not be comparable when considering only their content. It is necessary to consider the process of interaction with the specific interface (which can be characterized as physical computation), and based on this interaction, we need to measure the mental processes required to achieve specific information (mental computations). Following the work of Vygotsky and later Leontiev, human activity is regarded as the core aspect of the process of perception and directly structures the user’s cognitive functions [15]. Leontiev [16] understood human activity as a “circular structure” comprising: Initial afferentation → effector processes regulating contact with the objective environment → correction and enrichment by means of reverse connections of the original afferent image.
Boy [15] noted the importance of the actionists’ perspective, which sees the human mental reflection of the world as created not exclusively by external influences but also by the processes through which people come into practical contact with the objective world. Neisser adopted a similar point of view when he proposed his cyclic model [17]. We must consider the fact that the user/operator is no longer regarded as a passive cognitive system experiencing the stimuli given by the external environment (as was usual in the traditional experimental research paradigm). The user is an active scout who explores the environment or system, engages his or her inner intentions, and can influence the situation through specific activities or by following specific goals [15]. From this point of view, the effectiveness of interactive visualizations can be tested against static visualizations (also in terms of the type and quantity of interactions).

2.1. Static versus Interactive 3D Maps

Most of the recent user studies in cartography use either only static 3D maps or interactive 3D maps as stimuli, not both. Static stimuli have been used, for example, by Schobesberger and Patterson [9], Engel et al. [18], Niedomysl et al. [19], Popelka and Brychtová [20], Seipel [21], Popelka and Doležalová [22], Preppernau and Jenny [23], Rautenbach et al. [24], Zhou et al. [25], and Liu et al. [26], whilst interactive stimuli have been used by Abend et al. [27], Wilkening and Fabrikant [28], Treves et al. [29], Špriňarová et al. [30], McKenzie and Klippel [31], Carbonell-Carrera and Saorín [32], and Herman et al. [33].
Some studies have been conducted in two successive parts, using interactive stimuli in one part and static stimuli in the other, for example, Juřík et al. [34]. A direct comparison of static versus interactive visualizations was conducted by Bleisch, Dykes, and Nebiker [35], Herbert and Chen [36], and Kubíček et al. [37]. Bleisch, Dykes, and Nebiker [35] compared the reading of bar chart heights in static 2D visualizations and bar charts placed in a 3D environment. Interaction in the 3D environment was enabled, but it was not monitored or analyzed, so we do not know whether participants used the interactive capabilities of the 3D stimuli to achieve a better solution or made decisions solely on the basis of visual information. Herbert and Chen [36] tried to identify whether users preferred 2D maps and plans or interactive geovisualizations from the ArcScene software in matters of spatial planning. In both of these studies, the two independent variables (level of interactivity and dimensionality of visualization) were not separated, so it is not possible to identify their individual effects.
Kubíček et al. [37] investigated the roles of both types of 3D visualization (pseudo-3D versus real-3D) and the level of navigational interactivity (static versus interactive) when working with terrain profiles. The results of this study indicated that the type of 3D visualization does not significantly affect the performance of users, but the level of navigational interactivity has a significant influence on the usability of a particular 3D visualization. Previous experiments suffered from several methodological limitations (e.g., an untreated primacy effect, where solving tasks in the early phase may improve performance in the later phase); with these in mind, we proposed a design that focuses exclusively on comparing interactive and non-interactive 3D maps. In this study, we focused on designing a test that would make the comparison of static and interactive 3D maps as objective as possible.
The test battery and data collection procedure for comparing static and interactive 3D maps were evaluated in a pilot study conducted by Herman and Stachoň [38]. In the present study, we emphasize the importance of measuring and analyzing the process of interaction in its entire complexity (i.e., as deeply as possible) while controlling all the residual variables.

2.2. The Nature of Tasks in 3D Maps

3D maps can be used for different purposes, and these specific purposes predetermine the nature of the tasks to be solved. Interactivity represents only one of the possible factors; another is task complexity. Increasing the complexity of a task changes the quality of the cognitive processes involved in task solving: in the language of mathematics, we no longer speak of addition but rather of multiplication or squaring. We can suppose that as the complexity of the task grows, the significance of interactivity will be distinctly emphasized: in a more complex task, interactivity will be more helpful, and we therefore expect a significant and larger effect in the interactive condition. For this question, it is necessary to determine a typology of tasks for 3D maps in terms of their complexity. Various taxonomies of tasks for 3D maps have been used, for example, by Kjellin et al. [39], Boér, Çöltekin, and Clarke [40], and Rautenbach et al. [41]. The basic tasks describing the interaction with a specific interface or environment are called interaction primitives (IPs) [10]. IPs represent the elementary activities that can be performed during the process of interaction. An applicable taxonomy of IPs (tasks) for 3D maps was proposed by Juřík, Herman, and Šašinka [42]. A generalized version of this taxonomy is shown in Table 1.
The frequency of use of individual task types varies considerably in the papers analyzed. Some are used very often (search and spatial understanding), while others are used relatively rarely. We selected tasks related to pattern recognition, spatial understanding, and combined tasks. The simplest search tasks were not tested separately, as they form a part of the more complex tasks. Pattern recognition tasks have not been used in any previous user study with 3D maps, and testing this type of task was therefore a challenge. Spatial understanding tasks, which have been used in most studies, were selected to make the results of our study comparable. We did not consider planning or shape description tasks because their evaluation can be difficult and largely subjective. Combined tasks have not been used often, but they lead to more complex cognitive processes and emulate real user interaction, so this type of task was applied.

2.3. Users of 3D Maps and Their Spatial Abilities

Some previous studies suggested that the use of 3D maps may promote the realistic perception of spatial arrangement of the scene, making it easier for laypersons to form an impression of the scene without the necessity for any symbolic language [37]. By contrast, some studies supported the claim that some forms of 3D visualization may increase the time required to solve tasks and create visual discomfort during their use [21,45]. For experts experienced in map reading, 3D visualization may lower the clarity of depicted content and increase the chance of making an error [34].
These inconsistencies in findings on map depiction have still not been explored very deeply, as several important factors contribute to them. Besides the type of visualization, interactivity, and task type (discussed above), the level of (geo)expertise and innate spatial ability (or, better, cognitive style) are involved when dealing with map content. The observer’s focus on an object in a scene, or on the overall spatial arrangement of the scene, plays an important role in computational processes. Individual spatial abilities determine the efficiency of remembering and understanding the spatial arrangement of the scene, and based on the mental image of this scene, users can be more or less successful when dealing with specific tasks [46,47].
The existence of people who are more spatially oriented, those who are more object oriented, and those who are mainly verbally oriented has been explored in many psychological studies [48,49,50]. This orientation is measured, for example, with the Object-Spatial Imagery and Verbal Questionnaire (OSIVQ) developed by Blazhenkova and Kozhevnikov [51]. From this point of view, these three factors must be considered in the experimental design when evaluating geographical products. In the present study, we involved experts and laypersons in the field of geography (to compare the results of these two groups of users) and measured their object-spatial orientation using the OSIVQ. These participants were tested on both interactive and static 3D maps.

3. Experimental Study

The aim of the study was to analyze differences in effectiveness (correctness) [52], efficiency (response times) [53], and subjective preferences [52] when working with static and interactive 3D maps. Correctness, response times, and subjective preferences were the dependent variables. The independent variables were the level of interactivity, level of expertise, and task type. We addressed three research questions (RQ), which were further specified by nine hypotheses (H). The research questions and hypotheses were defined based on the literature and our pilot studies. Hypotheses H1, H2, and H3 were based on Kubíček et al. [37] and Herman and Stachoň [38]. Hypotheses H4 and H5 were based on Špriňarová et al. [30] and Juřík et al. [34]. Hypotheses H6 and H7 were based on Bowman et al. [54] and Herman et al. [55]. Hypotheses H8 and H9 were based on Štěrba et al. [56] and Stachoň et al. [57].
  • RQ1: Does user performance differ between interactive and static 3D maps?
    H1: Participants solve interactive tasks with greater accuracy than static tasks.
    H2: Participants solve static tasks faster than interactive tasks.
    H3: Participants subjectively prefer interactive tasks to static tasks.
  • RQ2: Does user performance differ regarding different task types?
    H4: Static and interactive tasks have significant differences in accuracy in all three task subcategories (spatial understanding, pattern recognition, combined tasks).
    H5: Static and interactive tasks have significant differences in the time required to complete the tasks in all three task subcategories (spatial understanding, pattern recognition, combined tasks).
  • RQ3: Does user performance differ between experts and laypersons?
    H6: Experts solve the tasks with greater accuracy than laypersons.
    H7: Experts solve the tasks faster than laypersons.
    H8: Accuracy and speed of user response in laypersons significantly correlates with the high spatial factor score detected in the OSIVQ questionnaire.
    H9: Accuracy and speed of user response in experts does not correlate with the high spatial factor score detected in the OSIVQ questionnaire.

3.1. Methods

3.1.1. Participants

A total of 76 participants took part in the study. Testing was conducted in May and June 2018. The participants were recruited via email, social networks, and personal contact. The overwhelming majority of the participants were students and graduates of Masaryk University and the Technical University of Liberec. Masaryk University’s ethics committee approved this research. The majority of the participants could be considered experts, as they were either geography or cartography graduates or students who had obtained at least a bachelor’s degree. A smaller number of participants were members of the general public. For more details about gender, age, self-reported experience, and field of education, see Table 2. The participants agreed to the experimental procedure, participated voluntarily, and could withdraw freely from the experiment at any time. All the participants had normal or corrected-to-normal vision. The environmental conditions (including lighting and other environmental factors) were kept constant for all the participants.

3.1.2. Procedure

A mixed factorial (2 × 3 × 2) design [58] was chosen for the study. The level of interactivity (static vs. interactive) and task type (spatial understanding, pattern recognition, and combined tasks) were the within-subject factors along which differences were expected. The level of expertise (experts vs. laypersons) was the between-subject factor. To maximize the internal validity of the study, four versions of the test battery (Figure 1—I, II, III, and IV) were created to counterbalance the static and interactive tasks. The geographical stimuli could not be artificially designed, so the geographical nature of specific tasks (the specific region used and its difficulty, as discussed) was also counterbalanced. The order of tasks in the test battery was likewise counterbalanced to prevent a primacy/learning effect and to reduce the potential effect of task diversity. Equal numbers of experts and laypersons were assigned to each of the four versions.
The test battery began with an introductory questionnaire on personal information and previous 3D visualization experience. Two training tasks followed (one static and one interactive, in which participants tried all three possible types of virtual movement). After training, 24 testing tasks with 3D maps were completed. Finally, the OSIVQ questionnaire was administered. The testing tasks with 3D maps were divided into six blocks; each block was introduced with detailed instructions and ended with a brief subjective evaluation. Before testing, participants were instructed that the correctness of responses was important and that their response times would be recorded, so tasks should ideally be solved both accurately and quickly.
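For illustration, one counterbalanced variant of the test battery can be thought of as a simple configuration structure. The following JavaScript sketch uses our own hypothetical notation (not the actual 3DmoveR 2.0 configuration format); the block and region identifiers are assumptions:

```javascript
// Hypothetical sketch of one test-battery variant (our own notation, not
// the actual 3DmoveR 2.0 configuration). The battery comprises an
// introductory questionnaire, two training tasks, six blocks (A-F) of four
// tasks each (24 in total), and the OSIVQ questionnaire. The
// interactive/static conditions and the geographic regions are permuted
// across variants I-IV.
const variantI = {
  intro: ["personal-information", "previous-3d-experience"],
  training: [
    { id: "T1", interactive: false },
    { id: "T2", interactive: true }, // all three movement types tried here
  ],
  blocks: [
    {
      id: "A",
      type: "spatial-understanding",
      tasks: [
        { region: "FR", interactive: true },
        { region: "NO", interactive: false },
        { region: "IT-AT", interactive: true },
        { region: "CZ-PL-DE", interactive: false },
      ],
    },
    // blocks B and C (spatial understanding), D and F (combined),
    // and E (pattern recognition) are defined analogously
  ],
  outro: ["OSIVQ"],
};
```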
For this outline of 3D map tasks, we created tasks that engaged elementary cognitive processes, as in Reference [59]. We selected spatial understanding tasks (A, B, C), pattern recognition tasks (E), and combined tasks (D, F). Specifically, we formulated the tasks as follows:
  • A: Select which of four buildings is at the lowest altitude.
  • B: Select which of four buildings is in the location with the lowest signal intensity.
  • C: Determine which of four buildings are visible from the top of the signal transmitter.
  • D: Determine which of four buildings are visible from the top of the signal transmitter and are also in the location with the lowest signal intensity.
  • E: Determine whether the spatial distribution of signal intensity depends on altitude or terrain slope.
    • Determine whether signal intensity depends on altitude.
    • Determine whether signal intensity depends on terrain slope.
  • F: Compare the average altitudes and average terrain slopes in the highlighted areas.
    • Determine which of the four areas is located at the highest average altitude.
    • Determine which of the four areas is characterized by the highest average terrain slope.
Signal intensity (in tasks B, D, and E) was depicted with an orange color scale (color intensity). Responses to all the above-mentioned tasks required choosing from four options. Most of the tasks (A, B, D, E, F) had only one correct answer; only task C required selecting more than one option to be completed correctly.
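Because only task C accepts several correct options, answer evaluation must compare sets of selected options rather than single values. The following is a minimal scoring sketch (our own illustration, not the actual 3DmoveR code):

```javascript
// Minimal scoring sketch (our own illustration): a response counts as
// correct only if the set of selected options equals the set of correct
// options. This covers the single-answer tasks (A, B, D, E, F) as well as
// the multiple-answer task C.
function isCorrect(selected, correct) {
  if (selected.length !== correct.length) return false;
  const expected = new Set(correct);
  return selected.every((option) => expected.has(option));
}

console.log(isCorrect(["building2"], ["building2"])); // true (single answer)
console.log(isCorrect(["b1", "b3"], ["b3", "b1"]));   // true (task C, order-free)
console.log(isCorrect(["b1"], ["b1", "b3"]));         // false (incomplete answer)
```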

3.1.3. Apparatus

3DmoveR (3D Movement and Interaction Recorder), our original application developed at the Faculty of Science, Masaryk University [55] to record the process of user interaction with 3D maps, was employed for user testing. 3DmoveR is based on a combination of a user logging approach and an online questionnaire engaging practical spatial tasks. The application is freely available under a BSD (Berkeley Software Distribution) license. Open web technologies (JavaScript, jQuery, WebGL, and PHP) were used for its implementation. All user interaction data and user responses were recorded and stored on the server for later analysis. 3DmoveR and two derived variants, 3DtouchR (3D Touch Interaction Recorder) and 3DgazeR (3D Gaze Recorder), have been used in several user studies [31,38,42,43,44,55,60].
For the empirical part of this study, 3DmoveR version 2.0 was used. The main shift from the previous version lay in replacing the X3DOM library for rendering 3D geospatial data with the similarly focused Three.js library. This change extended support to various types of devices (mouse-controlled desktop PCs, laptops with touchpads, or tablets) and across all operating system platforms and web browsers. In addition to better hardware and software support, this change had other benefits, such as automated, and therefore faster, stimuli preparation (using open source GIS: QGIS 2.18 and the Qgis2threejs plugin), more precise stimuli control settings (assigning specific movements to different keys or prohibiting all types of movement for static stimuli), and customization of user movement in 3D scenes, allowing better control and greater accuracy than the previous 3DmoveR version.
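To illustrate the user logging approach on top of Three.js and jQuery, the pose of the virtual camera can be sampled at a fixed rate and submitted to the server together with the response. The snippet below is a simplified sketch under our own naming (the log.php endpoint, the field names, and the 100 ms sampling interval are assumptions, not the actual 3DmoveR implementation):

```javascript
// Simplified sketch of interaction logging (our own code; the endpoint and
// field names are assumptions). Assumes three.js and jQuery are loaded.
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 10000);

const trajectory = [];
let currentTaskId = "A1"; // set by the test battery for each task

// sample the virtual camera pose ten times per second
setInterval(() => {
  trajectory.push({
    t: Date.now(),
    position: camera.position.toArray(), // [x, y, z]
    rotation: camera.rotation.toArray(), // [x, y, z, order]
  });
}, 100);

// on task completion, store the movement record and the response server-side
function submitTask(answer) {
  $.post("log.php", {
    task: currentTaskId,
    answer: JSON.stringify(answer),
    trajectory: JSON.stringify(trajectory),
  });
  trajectory.length = 0; // reset the log for the next task
}
```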
Although 3DmoveR is primarily designed for testing 3D geospatial data, it can also be used to create slides containing classic questionnaires (e.g., the OSIVQ). Based on the results of a previous survey described in Juřík et al. [60], the testing interface comprised a classic PC with keyboard and mouse, a monitor (screen resolution 1920 × 1080 px), and Windows OS. The application was launched in the Google Chrome web browser, as the survey in Juřík et al. [60] found that this software configuration was the most commonly used by respondents; it also contributed to the ecological validity of the results.

3.1.4. Materials

Digital terrain models formed the principal part of the stimuli in this experiment. Terrain models from the EU-DEM [61], a freely available data source, were the primary data input for the stimuli. Four homogeneous areas from different parts of Europe were chosen for processing. Two areas represented mountainous terrain (southeastern France; the borderland of Italy and Austria), and the other two represented less rugged, rather hilly terrain (southern Norway; the borderland of the Czech Republic, Poland, and Germany). Each area was divided into six rectangles (20 × 20 km), and two further equal squares were prepared for the training tasks. The digital terrain models were processed in QGIS 2.18 with the Qgis2threejs plugin. The objects required for each task were created and edited manually in QGIS, and the textures for tasks from blocks B, D, and E were also created in QGIS. The symbology and visualization style, as well as the Z-factor (1.5, the default value), were set in the Qgis2threejs plugin.
HTML and JavaScript files were then exported from the plugin. The graphical user interface (GUI) for entering responses was created in HTML (Figure 2). All 3D scene controls were defined within one JavaScript file and modified so that the interactive variants were controlled only by a mouse and the static variants allowed no navigational interactivity (see the sketch below). The initial position of the virtual camera in the interactive version of each task corresponded to the position of the virtual camera in the static version of the corresponding task. See the video available online for a detailed description of the experimental testing (https://youtu.be/Xat0slCx-Yg).
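A sketch of such a static/interactive switch, written against the OrbitControls helper distributed with Three.js, is shown below. It is our own simplification (the study modified the controls exported by the Qgis2threejs plugin), and the fields of the task object are assumptions:

```javascript
// Sketch of the static/interactive switch (our own simplification; field
// names are assumptions). Assumes THREE.OrbitControls is available.
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 10000);
const renderer = new THREE.WebGLRenderer();
const controls = new THREE.OrbitControls(camera, renderer.domElement);

function configureScene(task) {
  // the interactive variant starts from the same viewpoint as the static one
  camera.position.set(...task.initialCameraPosition); // [x, y, z]
  camera.lookAt(new THREE.Vector3(...task.initialCameraTarget));

  // mouse-only navigation in interactive variants, none in static variants
  controls.enableRotate = task.interactive;
  controls.enableZoom = task.interactive;
  controls.enablePan = task.interactive;
  controls.enabled = task.interactive;
  controls.update();
}
```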

3.2. Results

The research design included three main factors, so we first performed a three-way analysis of variance (ANOVA), following Warne [62], to gain a complex picture of the research issue. We analyzed the influence of the observed factors (level of interactivity, level of expertise, and task type) and the interaction effects of these factors on the participants’ performance (i.e., correctness and response times). The interactivity factor had two levels (interactive and static), the task type factor had three levels (spatial understanding, pattern recognition, and combined tasks), and the expertise factor had two levels (laypersons and experts). See the descriptive statistics in Table 3 and the results of the ANOVA in Table 4. Regarding the hypotheses outlined, we further analyzed the data using Student’s t-test (with Levene’s test for equality of variances [63]) or the Mann-Whitney U test [64] (depending on whether the data had a normal distribution) to look more closely at the discussed research questions.
Regarding task response times, the dataset did not show a normal distribution, so we transformed the response times toward normality using the Box-Cox transformation (λ = 0.3), as recommended when working with variables such as time [65].
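For reference, the Box-Cox transformation of a positive response time $y$ is defined as follows; with λ = 0.3, the first branch applies:

```latex
y^{(\lambda)} =
\begin{cases}
  \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\[2mm]
  \ln y, & \lambda = 0,
\end{cases}
\qquad \lambda = 0.3 \ \text{in this study.}
```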
We found statistically significant differences across task type and interactivity for task-solving times, and across expertise, task type, and interactivity for correctness. Among the interactions, only the interaction of the interactivity and task type factors was statistically significant for both correctness and response times (see Table 4 and Figure 3).

3.2.1. RQ1: Does User Performance Differ between Interactive and Static 3D Maps?

Correctness

To answer the general research question of whether interactivity influenced user performance with maps, we compared the overall performance of all the participants for the interactivity factor (static vs. interactive tasks, regardless of task type; see Figure 4). The assumption of normal distribution was not fulfilled for correctness: the Shapiro-Wilk test (as in Reference [66]) indicated that the correct answers for interactive and static tasks were not normally distributed (see Table 5). Therefore, we conducted the Mann-Whitney U test to measure the differences in correct answers between static and interactive tasks. Significant differences were found: static tasks were solved with less accuracy than interactive tasks, as shown in Figure 4 and Table 5.

Response Times

Similar to accuracy, we compared the time the participants needed to complete all the tasks. The assumption of normal distribution of the time required for task solving was fulfilled (see the Shapiro-Wilk test, Table 5), so we used Student’s t-test (with Levene’s test for equality of variances) to assess the differences between static and interactive tasks. For more details, see Table 5 and Figure 4.

Subjective Preferences

The subjective preferences of the participants indicated that most considered interactivity in maps a helpful feature for solving the given tasks: across all the experimental tasks, 89% of the users agreed or strongly agreed with the claim that task solving was easier under the interactive condition. Figure 5 presents a detailed summary of the specific answers for each of the six task blocks.

3.2.2. RQ2: Does User Performance Differ Regarding Different Task Types?

Correctness

To gain deeper insights into the role of task complexity, we compared user performance across specific task types divided into three categories: Spatial understanding, pattern recognition, and combined tasks. Accuracy in the spatial understanding and combined tasks categories showed statistically significant differences between the static and interactive conditions. The pattern recognition tasks showed no significant differences. The values of the central tendency are summarized in Table 6.

Response Times

Differences in the task categories were also found in the time required to complete the given tasks. Similar to accuracy, differences were found in both spatial understanding tasks and combined tasks, while pattern recognition tasks showed no significant differences. The central tendency values are summarized in Table 6. The data showed that combined tasks were solved the fastest, while pattern recognition tasks were solved the slowest.

3.2.3. RQ3: Does User Performance Differ between Experts and Laypersons?

Expertise

The normal distribution of the data was assumed (see Table 7), so we conducted t-tests to measure the exact differences between experts’ and laypersons’ response times and correctness. The empirical evidence supported our expectation that experts would have higher accuracy than laypersons when solving cartographic tasks. However, experts were not significantly faster or slower than laypersons. For more details, see Table 7 and Figure 6.

Cognitive Style

We investigated whether the objective performance (correctness and response time) of individual participants was related to the cognitive styles detected via the OSIVQ questionnaire. Correctness and response times were both aggregated according to task type by summation. Table 8 shows the correlation coefficients calculated for the task types and all three cognitive styles (spatial, object, and verbal) from the OSIVQ. No significant correlation was found at this level. When we analyzed the data at a more detailed level, a positive correlation was found only between correctness and spatial cognitive style for experts in task block B (r = 0.347; p-value = 0.021), which was a spatial understanding task.

4. Discussion

The ANOVA results suggested significant main effects on the correctness of answers for the factors of interactivity, task type, and expertise, and for the interaction of the task type and interactivity factors, at the significance level of 0.001 (except for expertise, p < 0.05). For response times, only the factors of interactivity and task type showed significant differences, also at the significance level of 0.001. According to these results, all the mentioned factors significantly influenced user performance when evaluating the altitude of objects placed in virtual 3D visualizations. It should be mentioned that the factor of expertise had a significant effect only for correctness, which implies that when working with virtual 3D visualizations, the role of expertise may be smaller. The data also strongly emphasized the advantage of interactivity when working with this type of stimuli, an advantage which grew significantly with more complex (combined) tasks (see Section 3.2.2).

4.1. Research Question 1

Based on the given data, H1, H2, and H3 were confirmed. These hypotheses related to the results of the entire test (user response accuracy and speed were aggregated over all 24 tasks and all participants). As expected, we found significant differences in overall user response accuracy and speed between interactive and static 3D maps. Interactive tasks were solved with greater accuracy (H1), while static tasks were solved faster (H2). Interactivity offered users a good option to explore content more precisely and identify the best solution (accuracy), but it required more time. In the subjective comparison of interactive and static 3D maps, 89% of the participants strongly agreed or agreed with the statement that task solving was easier with interactive 3D maps. This unequal proportion corroborated the remaining hypothesis, that users preferred interactive tasks (H3).

4.2. Research Question 2

We hypothesized that interactivity would have the same effect in all three subcategories of applied tasks (spatial understanding, pattern recognition, and combined tasks). For accuracy (H4) and speed (H5), the effect in combined tasks and spatial understanding tasks favored interactivity, although in pattern recognition tasks, no significant differences between interactive and static 3D maps were found. In pattern recognition tasks, accuracy and the required time were quite high regardless of interactivity. The data suggested that pattern recognition tasks were probably more difficult than the other types, because the participants took longer to solve them, though this greater effort led them to the correct answers. In future research, we should also consider that the experiment contained fewer pattern recognition tasks than tasks in the other subcategories, so an existing effect may not have been detected.
The significant improvement of accuracy in interactive combined tasks—suggested to be the most complex of the three subcategories—indicated the real value added to interactive versions of 3D maps. In complex 3D tasks, where all the necessary data could not be depicted in easily accessible ways, the importance of interactivity grew significantly.

4.3. Research Question 3

The examination of the effect of expertise confirmed H6, which predicted greater accuracy by experts (geography students or graduates). However, H7 was rejected, as we did not find any statistically significant difference in response time between experts and laypersons. We also investigated the suggested relationship between the cognitive styles measured by the OSIVQ questionnaire and user performance in tasks (H8 and H9). We could discern no relationship between specific cognitive styles and specific task types. Therefore, H8 was refuted, and H9 was confirmed. We can assume that all the cognitive strategies represented in the OSIVQ questionnaire were involved in the process of solving complex cartographic tasks (which 3D map tasks certainly are).

4.4. General Discussion

Some limitations of our experimental design and procedure should be discussed. Individual task types (RQ2) were somewhat difficult to compare due to the unequal numbers of tasks per type, especially pattern recognition tasks, of which there were only four in each test variant (two interactive and two static). However, in our opinion, it is possible to compare the combined tasks (8 in each test variant) and the spatial understanding tasks (12 in each test variant). For this reason, we analyzed the correctness and response times of all the test answers for RQ1 (H1, H2, and H3) and RQ3 (H6 and H7). Another possible limitation relates to the number of tested participants. In general, a higher number of participants generates more representative results. However, in the experimental testing of 3D maps, the number of participants is usually lower (e.g., the average number of participants in the studies listed in Table 1 is 49), and such studies have produced significant results (e.g., References [20,28,35]).
In general, some authors have stated that comparing static and interactive maps can be problematic from the perspectives of cartography (e.g., Roth et al. [67]) and psychology (as discussed by Juřík et al. [60]). However, as mentioned in Section 2.1, other authors have made such comparisons, particularly to explore the processes of perception and decision making [36–69]. From the perspective of traditional experiments, spatial data represent noisy, ambiguous stimuli, which are hard to control and adjust as sets of controlled test tasks. Researching this issue requires a well-controlled, balanced experimental design with the maximum possible range of measurable variables, and meeting this condition was an objective of the present study. Importantly, from the literature review, we recognized that the results of previous user studies that used static 3D maps as stimuli cannot be generalized or transferred to interactive 3D maps or virtual reality, as fundamental components of virtual reality are 3D visualization of spatial data and navigational interactivity.

5. Conclusions and Future Work

In this study, we investigated the influence of interactivity in virtual 3D maps on user performance (accuracy and speed). The users consisted of both experts and laypersons. The participants completed an online testing battery with various task types including both interactive and static geovisualizations. We found significant differences in both accuracy and speed. Our data indicated that various tasks in 3D maps were solved more accurately in the presence of interactivity, and that users subjectively preferred to solve interactive tasks. However, tasks were solved faster with static visualizations. Further analysis indicated some differences between specific types of task-solving. Differences between experts and laypersons in overall task-solving accuracy were also identified.
Despite the limited number of available participants, our results can contribute to the development of new systems using 3D maps designed for landscape management, precision farming, environmental protection, and crisis management, where tasks that consider both terrain (altitudes and slopes) and thematic information are performed on 3D geovisualizations using color intensity as the main variable. Such 3D maps have been used, for example, by Jedlička and Charvát [70] to visualize yield potential, Herman and Řezník [71] to map noise, Christen et al. [72] to visualize avalanche simulations, and Dübel et al. [73] and Sieber et al. [74] to represent hazardous weather.
Specifically, the benefits of interactive 3D maps are influenced by factors stemming from the purpose of the map: the map use conditions, the type (and complexity) of map tasks, and the potential map users.
  • Interactive 3D maps are suitable for purposes where a more accurate solution or decision is required and there is little or no time pressure on the speed of this decision.
  • Interactive 3D maps are more suitable for complex tasks (see Section 2.2 for more on task complexity).
  • Interactive 3D maps are more suitable for geospatial data experts (geographers, spatial and urban planners, etc.); the use of 3D maps by laypersons must be considered carefully.
For user studies, one clear recommendation can be made: If experimental results are to be generalized for interactive 3D maps and virtual reality, interactive 3D maps should be used as stimuli.
From a technological point of view, it is now possible to perform user testing directly with interactive 3D maps, which may be more appropriate with regard to the transferability of results into practice in the design and implementation of 3D geovisualizations (see Juřík et al. [60]). Technologies that permit testing in controlled [38] and non-controlled conditions [56] can be connected to eye-tracking devices [33], to various interfaces such as touch screens [43,44] or the Wii Remote Controller [30], and to other technologies for stereoscopic (real-3D) visualization with different immersion levels [34,37,57].
We would like to continue this line of testing and will work directly with interactive 3D maps. Importantly, user interaction and movement in the virtual environment can be described, analyzed, and compared, for example, with the results of the OSIVQ questionnaire, to inspect whether any relationship exists between the OSIVQ results and navigation in photorealistic and immersive virtual environments.
We also want to focus on more complex tasks that include advanced types of interaction. However, the difficulty of each task is also affected by the shape and complexity of the terrain or the distance between objects inserted into this terrain. Therefore, it must also be mentioned that 3D maps (and GIS data in general) represent complex stimuli which do not allow us to create a strict experimental design (as usually required in experimental studies). Regarding this, comprehensive data collection is required in interactive 3D maps to acquire better insight into the processes of decision making and task solving.

Supplementary Materials

A video documenting an experimental battery can be accessed at: https://youtu.be/Xat0slCx-Yg.

Author Contributions

L.H. was responsible for the literature review (cartographical and geoinformatics aspects), implementation of the testing tool, design of the experimental study, data analysis, and interpretation and discussion of the results. L.H. also conducted the experimental study and coordinated the preparation of the whole paper. V.J. collaborated on the literature review (psychological aspects), data analysis, and discussion of the results. Z.S. advised on the experimental design and collaborated on the interpretation and discussion of the results. D.V. and J.R. collaborated on the execution of the experimental study and data analysis. T.Ř. advised on the implementation of the testing tool and collaborated on the interpretation of the results.

Funding

This research was supported by the grants of the Masaryk University “The influence of cartographic visualization methods in the success of solving practical and educational spatial tasks” (Grant No. MUNI/M/0846/2015), “Integrated research on environmental changes in the landscape sphere of Earth III” (Grant No. MUNI/A/1251/2017) and by the grant of the Ministry of Education, Youth and Sports of the Czech Republic (Grant No. LTACH-17002) “Dynamic Mapping Methods Oriented to Risk and Disaster Management in the Era of Big Data”. This research was supported also by the research infrastructure HUME Lab Experimental Humanities Laboratory, Faculty of Arts, Masaryk University.

Acknowledgments

We are grateful to Dajana Snopková for helping us with the statistical analysis. Finally, we would like to thank the participants for their time and effort.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shiode, N. 3D Urban Models: Recent Developments in the Digital Modelling of Urban Environments in Three-dimensions. GeoJournal 2001, 52, 263–269. [Google Scholar] [CrossRef]
  2. Abdul-Rahman, A.; Pilouk, M. Spatial Data Modelling for 3D GIS, 1st ed.; Springer: Berlin, Germany, 2008; 289p, ISBN 978-3-540-74166-4. [Google Scholar]
  3. Biljecki, F.; Stoter, J.; Ledoux, H.; Zlatanova, S.; Çöltekin, A. Applications of 3D city models: State of the art review. ISPRS Int. J. Geo Inf. 2015, 4, 2842–2889. [Google Scholar] [CrossRef] [Green Version]
  4. Wood, J.; Kirschenbauer, S.; Döllner, J.; Lopes, A.; Bodum, L. Using 3D in Visualization. In Exploring Geovisualization, 1st ed.; Dykes, J., MacEachren, A.M., Kraak, M.-J., Eds.; Elsevier: Amsterdam, The Netherlands, 2005; pp. 295–312. [Google Scholar]
  5. Buchroithner, M.F.; Knust, C. True-3D in Cartography—Current Hard and Softcopy Developments. In Geospatial Visualisation, 1st ed.; Moore, A., Drecki, I., Eds.; Springer: Berlin, Germany, 2013; pp. 41–65. [Google Scholar]
  6. Hájek, P.; Jedlička, K.; Čada, V. Principles of Cartographic Design for 3D Maps Focused on Urban Areas. In Proceedings of the 6th International Conference on Cartography and GIS, Albena, Bulgaria, 13–17 June 2016; Bandrova, T., Konečný, M., Eds.; Bulgarian Cartographic Association: Sofia, Bulgaria, 2016; pp. 297–307. [Google Scholar]
  7. Haeberling, C.; Bär, H.; Hurni, L. Proposed Cartographic Design Principles for 3D Maps: A Contribution to an Extended Cartographic Theory. Cartogr. Int. J. Geogr. Inf. Geovisual. 2008, 43, 175–188. [Google Scholar] [CrossRef]
  8. Bandrova, T. Innovative Technology for the Creation of 3D Maps. Data Sci. J. 2006, 4, 53–58. [Google Scholar] [CrossRef]
  9. Schobesberger, D.; Patterson, T. Evaluating the Effectiveness of 2D vs. 3D Trailhead Maps. In Proceedings of the 6th ICA Mountain Cartography Workshop Mountain Mapping and Visualisation, Lenk, Switzerland, 1–15 February 2008; pp. 201–205. [Google Scholar]
  10. Roth, R. Cartographic Interaction Primitives: Framework and Synthesis. Cartogr. J. 2012, 49, 376–395. [Google Scholar] [CrossRef]
  11. Šašinka, Č. Interindividuální Rozdíly v Percepci Prostoru a Map. Ph.D. Thesis, Masaryk University, Brno, Czech Republic, 2012. [Google Scholar]
  12. Lokka, I.-E.; Çöltekin, A. Simulating Navigation with Virtual 3D Geovisualizations—A Focus on Memory Related Factors. In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Halounová, L., Ed.; Copernicus GmbH: Gottingen, Germany, 2016; Volume XLI-B2, pp. 671–673. [Google Scholar]
  13. Larkin, J.H.; Simon, H.A. Why a Diagram is (Sometimes) Worth Ten Thousand Words. Cogn. Sci. 1987, 11, 65–100. [Google Scholar] [CrossRef] [Green Version]
  14. Shepherd, I. Travails in the Third Dimension: A Critical Evaluation of Three Dimensional Geographical Visualization. In Geographic Visualization: Concepts, Tools and Applications, 1st ed.; Dodge, M., McDerby, M., Turner, M., Eds.; John Wiley & Sons, Ltd.: Chichester, UK, 2008; pp. 199–222. ISBN 978-0-470-51511-2. [Google Scholar]
  15. Boy, G.A. The Handbook of Human-Machine Interaction: A Human-Centered Design Approach, 1st ed.; Ashgate: Farnham, UK, 2011; 478p, ISBN 9781138075825. [Google Scholar]
  16. Leontiev, A. Activity and Consciousness. 1977. Available online: https://www.marxists.org/archive/leontev/works/activity-consciousness.pdf (accessed on 15 August 2018).
  17. Neisser, U. Cognition and Reality: Principles and Implications of Cognitive Psychology, 1st ed.; W. H. Freeman & Company: San Francisco, CA, USA, 1976; 230p, ISBN 13-978-0716704775. [Google Scholar]
  18. Engel, J.; Semmo, A.; Trapp, S.; Döllner, J. Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments. In Proceedings of the 18th International Workshop on Vision, Modeling and Visualization (VMV 2013), Lugano, Switzerland, 11–13 September 2013; The Eurographics Association: Geneve, Switzerland, 2013. [Google Scholar]
  19. Niedomysl, T.; Elldér, E.; Larsson, A.; Thelin, M.; Jansund, B. Learning Benefits of Using 2D versus 3D Maps: Evidence from a Randomized Controlled Experiment. J. Geogr. 2013, 112, 87–96. [Google Scholar] [CrossRef]
  20. Popelka, S.; Brychtová, A. Eye-tracking Study on Different Perception of 2D and 3D Terrain Visualization. Cartogr. J. 2013, 50, 240–375. [Google Scholar] [CrossRef]
  21. Seipel, S. Evaluating 2D and 3D Geovisualisations for Basic Spatial Assessment. Behav. Inf. Technol. 2013, 32, 845–858. [Google Scholar] [CrossRef]
  22. Popelka, S.; Doležalová, J. Non-Photorealistic 3D Visualization in City Maps: An Eye-Tracking Study. In Modern Trends in Cartography, 1st ed.; Brus, S., Vondráková, A., Voženílek, V., Eds.; Springer: Berlin, Germany, 2015; pp. 357–367. ISBN 978-3-319-07925-7. [Google Scholar]
  23. Preppernau, C.A.; Jenny, B. Three-dimensional versus Conventional Volcanic Hazard Maps. Nat. Hazards 2015, 78, 1329–1347. [Google Scholar] [CrossRef]
  24. Rautenbach, V.; Coetzee, S.; Çöltekin, A. Investigating the Use Of 3D Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa. In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Halounová, L., Ed.; Copernicus GmbH: Gottingen, Germany, 2016; Volume XLI-B2, pp. 425–431. [Google Scholar]
  25. Zhou, Y.; Dao, T.H.D.; Thill, J.-C.; Delmelle, E. Enhanced 3D Visualization Techniques in Support of Indoor Location Planning. Comput. Environ. Urban Syst. 2016, 50, 15–29. [Google Scholar] [CrossRef]
  26. Liu, B.; Dong, W.; Meng, L. Using Eye Tracking to Explore the Guidance and Constancy of Visual Variables in 3D Visualization. ISPRS Int. J. Geo Inf. 2017, 6, 274. [Google Scholar] [CrossRef]
  27. Abend, P.; Thielmann, T.; Ewerth, R.; Seiler, D.; Műhling, M.; Dőring, J.; Grauer, M.; Freisleben, B. Geobrowsing Behaviour in Google Earth—A Semantic Video Content Analysis of On-Screen Navigation. In GI_Forum 2012: Geovisualization, Society and Learning; Jekel, T., Car, A., Griesebner, G., Eds.; Wichmann: Berlin, Germany, 2012; pp. 2–13. [Google Scholar]
  28. Wilkening, J.; Fabrikant, S.I. How Users Interact with a 3D Geo-Browser under Time Pressure. Cartogr. Geogr. Inf. Sci. 2013, 40, 40–52. [Google Scholar] [CrossRef] [Green Version]
  29. Treves, R.; Viterbo, P.; Haklay, M. Footprints in the Sky: Using Student Tracklogs from a “Bird’s Eye View” Virtual Field Trip to Enhance Learning. J. Geogr. High. Educ. 2015, 39, 97–110. [Google Scholar] [CrossRef]
  30. Špriňarová, K.; Juřík, V.; Šašinka, Č.; Herman, L.; Štěrba, Z.; Stachoň, Z.; Chmelík, J.; Kozlíková, B. Human-computer Interaction in Real 3D and Pseudo-3D Cartographic Visualization: A Comparative Study. In Cartography—Maps Connecting the World: 27th International Cartographic Conference 2015—ICC2015, 1st ed.; Sluter, C.R., Ed.; Springer: Berlin, Germany, 2015; pp. 59–73. ISBN 978-3-319-17737-3. [Google Scholar]
  31. McKenzie, G.; Klippel, A. The Interaction of Landmarks and Map Alignment in You-Are-Here Maps. Cartogr. J. 2016, 53, 43–54. [Google Scholar] [CrossRef]
  32. Carbonell-Carrera, C.; Saorín, J. Geospatial Google Street View with Virtual Reality: A Motivational Approach for Spatial Training Education. ISPRS Int. J. Geo Inf. 2017, 6, 261. [Google Scholar] [CrossRef]
  33. Herman, L.; Popelka, S.; Hejlová, V. Eye-tracking Analysis of Interactive 3D Geovisualizations. J. Eye Mov. Res. 2017, 10, 1–15. [Google Scholar] [CrossRef]
  34. Juřík, V.; Herman, L.; Šašinka, Č.; Stachoň, Z.; Chmelík, J. When the Display Matters: A Multifaceted Perspective on 3D Geovisualizations. Open Geosci. 2017, 9, 89–100. [Google Scholar] [CrossRef]
  35. Bleisch, S.; Dykes, J.; Nebiker, S. Evaluating the Effectiveness of Representing Numeric Information Through Abstract Graphics in 3D Desktop Virtual Environments. Cartogr. J. 2008, 45, 216–226. [Google Scholar] [CrossRef] [Green Version]
  36. Herbert, G.; Chen, X. A Comparison of Usefulness of 2D and 3D Representations of Urban Planning. Cartogr. Geogr. Inf. Sci. 2015, 42, 22–32. [Google Scholar] [CrossRef]
  37. Kubíček, P.; Šašinka, Č.; Stachoň, Z.; Herman, L.; Juřík, V.; Urbánek, T.; Chmelík, J. Identification of Altitude Profiles in 3D Geovisualizations: The Role of Interaction and Spatial Abilities. Int. J. Digit. Earth 2017. [Google Scholar] [CrossRef]
  38. Herman, L.; Stachoň, Z. Comparison of User Performance with Interactive and Static 3D Visualization—Pilot Study. In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Halounová, L., Ed.; Copernicus GmbH: Gottingen, Germany, 2016; Volume XLI-B2, pp. 655–661. [Google Scholar]
  39. Kjellin, A.; Pettersson, L.W.; Seipel, S. Evaluating 2D and 3D Visualizations of Spatiotemporal Information. ACM Trans. Appl. Percept. 2010, 7, 1–23. [Google Scholar] [CrossRef]
  40. Boér, A.; Çöltekin, A.; Clarke, K.C. An Evaluation of Web-based Geovisualizations for Different Levels of Abstraction and Realism—What do users predict? In Proceedings of the International Cartographic Conference, Dresden, Germany, 25–30 August 2013.
  41. Rautenbach, V.; Coetzee, S.; Çöltekin, A. Towards Evaluating the Map literacy of Planners in 2D Maps and 3D Models in South Africa. In Proceedings of the AfricaGEO 2014 Conference, Cape Town, South Africa, 1–3 July 2014. [Google Scholar]
  42. Juřík, V.; Herman, L.; Šašinka, Č. Interaction Primitives in 3D Geovisualizations. In In Useful Geography: Transfer from Research to Practice. Proceedings of the 25th Central European Conference, Brno, Czech Republic, 2–13 October 2017; Svobodová, H., Ed.; Masaryk University: Brno, Czech Republic, 2018; pp. 294–303. [Google Scholar]
  43. Herman, L.; Stachoň, Z. Controlling 3D Geovisualizations through Touch Screen—The Role of Users Age and Gesture Intuitiveness. In Proceedings of the 7th International Conference on Cartography and GIS, Sozopol, Bulgaria, 18–23 June 2018; Bandrova, T., Konečný, M., Eds.; Bulgarian Cartographic Association: Sofia, Bulgaria, 2018; Volume 1, pp. 473–480. [Google Scholar]
  44. Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues. In ISPRS Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Dimopoulou, E., Van Oosterom, P., Eds.; Copernicus GmbH: Gottingen, Germany, 2016; Volume XLII-2/W2, pp. 33–40. [Google Scholar]
  45. Livatino, S.; De Paolis, L.T.; D’Agostino, M.; Zocco, A.; Agrimi, A.; De Santis, A.; Bruno, L.V.; Lapresa, M. Stereoscopic Visualization and 3-D Technologies in Medical Endoscopic Teleoperation. IEEE Trans. Ind. Electron. 2015, 62, 525–535. [Google Scholar] [CrossRef]
  46. Hayes, J.; Allinson, C.W. Cognitive Style and its Relevance for Management Practice. Br. J. Manag. 1994, 5, 53–71. [Google Scholar] [CrossRef]
  47. Kozhevnikov, M. Cognitive Styles in the Context of Modern Psychology: Toward an Integrated Framework of Cognitive Style. Psychol. Bull. 2007, 133, 464–481. [Google Scholar] [CrossRef] [PubMed]
  48. Peterson, E.R.; Deary, I.J.; Austin, E.J. A New Measure of Verbal–Imagery Cognitive Style: VICS. Pers. Individ. Differ. 2005, 38, 1269–1281. [Google Scholar] [CrossRef]
  49. Blajenkova, O.; Kozhevnikov, M.; Motes, M.A. Object-spatial imagery: A new self-report imagery questionnaire. Appl. Cogn. Psychol. 2006, 20, 239–263. [Google Scholar] [CrossRef]
  50. Jonassen, D.H.; Grabowski, B.L. Handbook of Individual Differences, Learning, and Instruction; Routledge: Abingdon, UK, 2012; 512p, ISBN 978-0805814132. [Google Scholar]
  51. Blazhenkova, O.; Kozhevnikov, M. The New Object-Spatial-Verbal Cognitive Style Model: Theory and Measurement. Appl. Cogn. Psychol. 2009, 23, 638–663. [Google Scholar] [CrossRef]
  52. Rubin, J.; Chisnell, D.; Spool, J. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, 2nd ed.; Wiley: Hoboken, NJ, USA, 2008; 384p, ISBN 978-0-470-18548-3. [Google Scholar]
  53. IEEE 610:1990. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries; IEEE: Piscataway, NJ, USA, 1990. [CrossRef]
  54. Bowman, D.A.; Kruijff, E.; Poupyrev, I.; LaViola, J.J. 3D User Interfaces: Theory and Practice, 1st ed.; Addison Wesley Longman Publishing: Redwood City, CA, USA, 2004; 512p, ISBN 978-0321980045. [Google Scholar]
  55. Herman, L.; Řezník, T.; Stachoň, Z.; Russnák, J. The Design and Testing of 3DmoveR: An Experimental Tool for Usability Studies of Interactive 3D Maps. Cartogr. Perspect. 2018, 90, 31–63. [Google Scholar] [CrossRef]
  56. Štěrba, Z.; Šašinka, Č.; Stachoň, Z.; Štampach, R.; Morong, K. Selected Issues of Experimental Testing in Cartography, 1st ed.; Masaryk University, MuniPress: Brno, Czech Republic, 2015; 120p, ISBN 978-80-210-7909-0. [Google Scholar]
  57. Stachoň, Z.; Kubíček, P.; Málek, F.; Krejčí, M.; Herman, L. The Role of Hue and Realism in Virtual Reality. In Proceedings of the 7th International Conference on Cartography and GIS, Sozopol, Bulgaria, 18–23 June 2018; Bandrova, T., Konečný, M., Eds.; Bulgarian Cartographic Association: Sofia, Bulgaria, 2018; Volume 1, pp. 932–941. [Google Scholar]
  58. Howell, D. Statistical Methods for Psychology, 7th ed.; Cengage Wadsworth: Belmont, CA, USA, 2010; 793p, ISBN 978-0-495-59784-1. [Google Scholar]
  59. Anderson, L.W.; Krathwohl, D.R.; Bloom, B.S. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, 1st ed.; Longman: New York, NY, USA, 2001; 352p, ISBN 978-0801319037. [Google Scholar]
  60. Juřík, V.; Herman, L.; Šašinka, Č.; Stachoň, Z.; Chmelík, J.; Strnadová, A.; Kubíček, P. Behavior Analysis in Virtual Geovisualizations: Towards Ecological Validity. In Proceedings of the 7th International Conference on Cartography and GIS, Sozopol, Bulgaria, 18–23 June 2018; Bandrova, T., Konečný, M., Eds.; Bulgarian Cartographic Association: Sofia, Bulgaria, 2018; Volume 1, pp. 518–527. [Google Scholar]
  61. European Environment Agency. Copernicus Land Monitoring Service—EU-DEM. Available online: https://www.eea.europa.eu/data-and-maps/data/copernicus-land-monitoring-service-eu-dem (accessed on 15 September 2018).
  62. Warne, R.T. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists. Pract. Assess. Res. Eval. 2014, 19, 1–10. [Google Scholar]
  63. Levene, H. Robust Tests for Equality of Variances. In Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling, 1st ed.; Olkin, I., Ed.; Stanford University Press: Palo Alto, CA, USA, 1960; pp. 278–292. [Google Scholar]
  64. Mann, H.B.; Whitney, D.R. On a Test of Whether One of Two Random Variables is Stochastically Larger than the Other. Ann. Math. Stat. 1947, 18, 50–60. [Google Scholar] [CrossRef]
  65. Box, G.E.; Cox, D.R. An Analysis of Transformations. J. R. Stat. Soc. Ser. B 1964, 26, 211–252. [Google Scholar]
  66. Shapiro, S.S.; Wilk, M.B. An Analysis of Variance Test for Normality (Complete Samples). Biometrika 1965, 52, 591–611. [Google Scholar] [CrossRef]
  67. Roth, R.E.; Çöltekin, A.; Delazari, L.; Filho, H.F.; Griffin, A.; Hall, A.; Korpi, J.; Lokka, I.-E.; Mendonça, A.; Ooms, K.; et al. User Studies in Cartography: Opportunities for Empirical Research on Interactive Maps and Visualizations. Int. J. Cartogr. 2017, 3, 61–89. [Google Scholar] [CrossRef]
  68. Keskin, M.; Çelik, B.; Doğru, A.Ö.; Pakdil, M.E. A Comparison of Space-Time 2D and 3D Geovisualization. In Proceedings of the 27th International Cartographic Conference, Rio de Janeiro, Brazil, 23–28 August 2015; pp. 1–17. [Google Scholar]
  69. Bogucka, E.P.; Jahnke, M. Feasibility of the Space–Time Cube in Temporal Cultural Landscape Visualization. ISPRS Int. J. Geo-Inf. 2018, 7, 209. [Google Scholar] [CrossRef]
  70. Jedlička, K.; Charvát, K. Visualisation of Big Data in Agriculture and Rural Development. In Proceedings of the IST-Africa Week Conference, Gaborone, Botswana, 9–11 May 2018; Cunningham, P., Cunningham, M., Eds.; IEEE: Piscataway, NJ, USA, 2018; pp. 1–8. [Google Scholar]
  71. Herman, L.; Řezník, T. Web 3D Visualization of Noise Mapping for Extended INSPIRE Buildings Model. In Environmental Software Systems. Fostering Information Sharing, Proceedings of the IFIP Advances in Information and Communication Technology, Neusiedl am See, Austria, 9–11 October 2013; Hřebíček, J., Schimak, G., Kubásek, M., Rizzoli, A.E., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 413, pp. 414–424. [Google Scholar]
  72. Christen, M.; Bühler, Y.; Bartelt, P.; Leine, R.; Glover, J.; Schweizer, A.; Graf, C.; McArdell, B.W.; Gerber, W.; Deubelbeiss, Y.; et al. Integral Hazard Management Using a Unified Software Environment: Numerical Simulation Tool “RAMMS” for Gravitational Natural Hazards. In Proceedings of the 12th Congress INTERPRAEVENT 2012, Grenoble, France, 23–26 April 2012; Koboltschnig, G., Hübl, J., Braun, J., Eds.; International Research Society INTERPRAEVENT: Klagenfurt, Austria, 2012; Volume 1, pp. 77–86. [Google Scholar]
  73. Dübel, S.; Röhlig, M.; Tominski, C.; Schumann, H. Visualizing 3D Terrain, Geo-Spatial Data, and Uncertainty. Informatics 2017, 4, 6. [Google Scholar] [CrossRef]
  74. Sieber, R.; Hollenstein, L.; Eichenberger, R. Concepts and Techniques of an Online 3D Atlas—Challenges in Cartographic 3D Visualization. In Proceedings of the Leveraging Applications of Formal Methods, Verification and Validation, Applications and Case Studies, Heraklion, Greece, 15–18 October 2012; Margaria, T., Steffen, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 325–326. [Google Scholar] [CrossRef]
Figure 1. Design of the experiment (four versions of the test battery are marked as I, II, III, and IV).
Figure 2. Examples of the 3DmoveR user interface: (a) Task A—selecting which of four buildings is at the lowest altitude—interactive variant; (b) Task D—determining which of four buildings are visible from the top of the signal transmitter and are also in the location with the lowest signal intensity—static variant; (c) Task Ea—determining whether signal intensity depends on terrain slope—interactive variant; (d) Task Fb—determining which of four areas is characterized by the highest average terrain slope—static variant.
Figure 3. Average correctness (left) and average response times (right) by task type.
Figure 4. Total correctness (left) and total response time (right) of user responses in terms of the interactivity factor.
Figure 5. Subjective preferences of interactive 3D maps for individual task types.
Figure 6. Comparison of correctness of responses (left) and response times (right) by laypersons and experts.
Table 1. Taxonomy of tasks in empirical research of 3D maps (ranked from the simplest to the most complex task according to Juřík, Herman and Šašinka [42]), with the number of user studies employing each task type.

| Task | Number of User Studies | User Studies |
|---|---|---|
| Search | 16 | [9,18,19,20,22,23,24,27,28,29,30,32,39,41,43] |
| Pattern recognition | 0 | – |
| Spatial understanding | 18 | [18,19,20,21,23,24,25,26,28,30,31,33,34,35,37,38,41,44] |
| Quantitative estimation | 2 | [18,24] |
| Shape description | 3 | [9,28,32] |
| Combined tasks | 1 | [34] |
| Planning | 3 | [23,30,34] |
Table 2. Participants’ characteristics.

| Characteristic | | Laypersons | Experts |
|---|---|---|---|
| Total | N | 32 | 44 |
| Females | N | 19 | 18 |
| Males | N | 13 | 26 |
| Age | Min | 18 | 20 |
| | Mean | 26.875 | 26.795 |
| | Stdv | 7.602 | 4.203 |
| | Median | 23.500 | 26.000 |
| | Max | 42 | 46 |
| Self-reported experience (How often do you work with …?) | PC (median) | 1 (daily) | 1 (daily) |
| | Maps (median) | 3 (occasionally) | 2 (regularly) |
| | 3D visualizations (median) | 4 (singularly) | 3 (occasionally) |
| Field of education | N | 6 foreign languages; 5 psychology; 4 economy; 4 informatics; 3 pedagogy; 2 law; 8 other humanities | 23 geography; 12 cartography; 8 geoinformatics; 1 geodesy |
Table 3. Correctness of user responses and time demands in individual tasks according to level of interactivity, level of expertise, and task type.

Correctness per one task [0–1]:

| Task Type | Statistic | Interactive, Laypersons | Interactive, Experts | Static, Laypersons | Static, Experts |
|---|---|---|---|---|---|
| Combined tasks | Median | 1.000 | 1.000 | 1.000 | 1.000 |
| | Mean | 0.907 | 0.949 | 0.519 | 0.559 |
| | Stdv | 0.289 | 0.219 | 0.498 | 0.495 |
| Spatial understanding | Median | 1.000 | 1.000 | 1.000 | 1.000 |
| | Mean | 0.751 | 0.740 | 0.565 | 0.630 |
| | Stdv | 0.431 | 0.438 | 0.495 | 0.482 |
| Pattern recognition | Median | 1.000 | 1.000 | 1.000 | 1.000 |
| | Mean | 0.862 | 0.865 | 0.738 | 0.921 |
| | Stdv | 0.343 | 0.340 | 0.436 | 0.268 |

Response time for one task [s]:

| Task Type | Statistic | Interactive, Laypersons | Interactive, Experts | Static, Laypersons | Static, Experts |
|---|---|---|---|---|---|
| Combined tasks | Median | 26.670 | 24.725 | 17.290 | 17.835 |
| | Mean | 31.673 | 29.553 | 19.371 | 20.616 |
| | Stdv | 22.978 | 23.713 | 10.278 | 12.989 |
| Spatial understanding | Median | 26.705 | 22.680 | 17.520 | 18.155 |
| | Mean | 34.450 | 31.138 | 19.982 | 21.509 |
| | Stdv | 27.660 | 25.524 | 11.416 | 15.428 |
| Pattern recognition | Median | 35.740 | 30.780 | 27.265 | 25.610 |
| | Mean | 36.594 | 32.740 | 31.342 | 28.201 |
| | Stdv | 18.714 | 19.758 | 18.556 | 14.839 |
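The descriptive statistics in Table 3 can be recomputed directly from per-task records such as those logged by 3DmoveR 2.0. Below is a minimal pandas sketch, assuming a hypothetical trial_log.csv export with one row per participant and task; the column names (task_type, interactivity, expertise, correct, response_time_s) are illustrative, not the tool’s actual schema:

```python
import pandas as pd

# Hypothetical per-task log export from 3DmoveR 2.0 (column names are
# illustrative assumptions, not the tool's real schema).
df = pd.read_csv("trial_log.csv")

# Median, mean, and standard deviation of correctness and response time,
# grouped by task type, interactivity, and expertise (cf. Table 3).
summary = (
    df.groupby(["task_type", "interactivity", "expertise"])[
        ["correct", "response_time_s"]
    ]
    .agg(["median", "mean", "std"])
    .round(3)
)
print(summary)
```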
Table 4. ANOVA results—correctness and response times.

| Measure | Predictor | Df | Sum of Squares | Mean Square | F-Value | p-Value | Significance |
|---|---|---|---|---|---|---|---|
| Correctness per one task [0–1] | Interactivity | 1 | 19.2 | 19.172 | 106.015 | <2e−16 | *** |
| | Task type | 2 | 7.5 | 3.753 | 20.752 | 1.23e−09 | *** |
| | Expertise | 1 | 0.8 | 0.844 | 4.666 | 0.0309 | * |
| | Interactivity × task type | 2 | 8.8 | 4.391 | 24.281 | 3.92e−11 | *** |
| | Interactivity × expertise | 1 | 0.5 | 0.524 | 2.897 | 0.0889 | . |
| | Task type × expertise | 2 | 0.3 | 0.128 | 0.706 | 0.4937 | |
| | Interactivity × task type × expertise | 2 | 0.4 | 0.212 | 1.171 | 0.3101 | |
| Response time for one task [s] | Interactivity | 1 | 22,243 | 22,243 | 114.173 | <2e−16 | *** |
| | Task type | 2 | 7375 | 3687 | 18.927 | 7.32e−09 | *** |
| | Expertise | 1 | 598 | 598 | 3.069 | 0.0800 | . |
| | Interactivity × task type | 2 | 1156 | 578 | 2.966 | 0.0518 | . |
| | Interactivity × expertise | 1 | 732 | 732 | 3.760 | 0.0527 | . |
| | Task type × expertise | 2 | 116 | 58 | 0.298 | 0.7420 | |
| | Interactivity × task type × expertise | 2 | 75 | 38 | 0.193 | 0.8244 | |

Significance codes: *** significance level = 0.001; * significance level = 0.05; . significance level = 0.1.
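Table 4 reports R-style sequential ANOVA output. A minimal sketch of a comparable three-way factorial ANOVA in Python with statsmodels, reusing the hypothetical trial_log.csv layout from the sketch above; it illustrates the model structure only, not the authors’ actual analysis pipeline:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical per-task log (same illustrative columns as above).
df = pd.read_csv("trial_log.csv")

# Three-way factorial ANOVA: interactivity x task type x expertise,
# fitted separately for correctness and response time.
for dv in ["correct", "response_time_s"]:
    model = smf.ols(
        f"{dv} ~ C(interactivity) * C(task_type) * C(expertise)", data=df
    ).fit()
    # anova_lm defaults to sequential (Type I) sums of squares,
    # matching the style of output reported in Table 4.
    print(anova_lm(model))
```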
Table 5. Overall measures (correctness and response time) and analysis results for the interactivity factor.

Correctness [number of correct answers in the whole experiment]; group comparison by Mann–Whitney U test:

| Interactivity | Median | Mean | Stdv | Shapiro–Wilk p-Value | U | p-Value |
|---|---|---|---|---|---|---|
| Interactive | 20.000 | 19.816 | 1.522 | 0.023 | 25.500 | 0.000 |
| Static | 15.000 | 14.895 | 1.689 | 0.035 | | |

Total response time for all tasks in the experiment [s]; group comparison by t-test (with Levene’s test for equality of variances):

| Interactivity | Median | Mean | Stdv | Shapiro–Wilk p-Value | T | Df | p-Value |
|---|---|---|---|---|---|---|---|
| Interactive | 777.370 | 772.165 | 199.075 | 0.627 | 6.412 | 59.314 | 2.61e−08 |
| Static | 526.750 | 529.675 | 115.305 | 0.916 | | | |
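The test selection in Tables 5–7 follows a common pattern: Shapiro–Wilk checks normality, after which either a Mann–Whitney U test (non-normal data) or a t-test (normal data, with Levene’s test guiding the equal-variance assumption) compares the two groups. A minimal SciPy sketch of that decision logic; the helper name and the α = 0.05 threshold are illustrative assumptions, not the authors’ code:

```python
from scipy import stats

def compare_conditions(group_a, group_b, alpha=0.05):
    """Hypothetical helper mirroring the test selection in Tables 5-7."""
    # Shapiro-Wilk on each group: treat data as non-normal if either p < alpha.
    normal = (stats.shapiro(group_a).pvalue > alpha
              and stats.shapiro(group_b).pvalue > alpha)
    if not normal:
        # Non-normal distributions -> non-parametric Mann-Whitney U test.
        return stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
    # Normal distributions -> t-test; Levene's test decides whether equal
    # variances can be assumed (Welch correction is applied otherwise).
    equal_var = stats.levene(group_a, group_b).pvalue > alpha
    return stats.ttest_ind(group_a, group_b, equal_var=equal_var)
```

The fractional degrees of freedom reported in Tables 5 and 6 (e.g., Df = 59.314) are consistent with the Welch-corrected branch of this logic.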
Table 6. Overall measures (correctness and response time) and analysis results for the task type factor.

Correctness per one task [0–1]; group comparison by Mann–Whitney U test:

| Task Type | Interactivity | Median | Mean | Stdv | Shapiro–Wilk p-Value | U | p-Value |
|---|---|---|---|---|---|---|---|
| Combined tasks | Interactive | 1.000 | 0.931 | 0.253 | 0.000 | 4.500 | 0.000 |
| | Static | 1.000 | 0.541 | 0.498 | 0.000 | | |
| Spatial understanding | Interactive | 1.000 | 0.744 | 0.436 | 0.002 | 201.000 | 0.000 |
| | Static | 1.000 | 0.602 | 0.489 | 0.002 | | |
| Pattern recognition | Interactive | 1.000 | 0.863 | 0.343 | 0.000 | 661.000 | 0.000 |
| | Static | 1.000 | 0.843 | 0.362 | 0.000 | | |

Response time for one task [s]; group comparison by t-test (with Levene’s test for equality of variances):

| Task Type | Interactivity | Median | Mean | Stdv | Shapiro–Wilk p-Value | T | Df | p-Value |
|---|---|---|---|---|---|---|---|---|
| Combined tasks | Interactive | 25.745 | 30.463 | 23.504 | 0.915 | −5.732 | 60.295 | 3.38e−07 |
| | Static | 17.595 | 20.100 | 11.954 | 0.135 | | | |
| Spatial understanding | Interactive | 25.745 | 32.549 | 26.552 | 0.038 | 6.572 | 57.045 | 1.64e−08 |
| | Static | 18.030 | 20.872 | 13.928 | 0.077 | | | |
| Pattern recognition | Interactive | 32.515 | 34.369 | 19.539 | 0.178 | 2.018 | 66.906 | 0.048 |
| | Static | 26.245 | 29.545 | 16.692 | 0.252 | | | |
Table 7. Overall measures (correctness and response time) and analysis results for the expertise factor.

Correctness [number of correct answers in the whole experiment]; group comparison by t-test (with Levene’s test for equality of variances):

| Expertise | Median | Mean | Stdv | Shapiro–Wilk p-Value | T | Df | p-Value |
|---|---|---|---|---|---|---|---|
| Experts | 18.000 | 17.795 | 1.961 | 0.266 | 2.253 | 74 | 0.027 |
| Laypersons | 17.000 | 16.750 | 1.984 | 0.357 | | | |

Total response time for all tasks in the experiment [s]:

| Expertise | Median | Mean | Stdv | Shapiro–Wilk p-Value | T | Df | p-Value |
|---|---|---|---|---|---|---|---|
| Experts | 605.095 | 638.979 | 242.324 | 0.221 | 0.520 | 74 | 0.605 |
| Laypersons | 635.950 | 667.339 | 216.389 | 0.398 | | | |
Table 8. Correlation coefficients (significance level α = 0.05) for correctness, response times, and cognitive styles measured via the Object-Spatial Imagery and Verbal Questionnaire (OSIVQ) (spatial, object, and verbal). Cells show R (p-value).

| Measure | Task Type | Laypersons: Object | Laypersons: Spatial | Laypersons: Verbal | Experts: Object | Experts: Spatial | Experts: Verbal |
|---|---|---|---|---|---|---|---|
| Correctness | Combined tasks | −0.155 (0.396) | 0.057 (0.756) | −0.013 (0.943) | 0.082 (0.598) | −0.167 (0.277) | −0.070 (0.651) |
| | Spatial understanding | 0.013 (0.942) | −0.111 (0.545) | −0.018 (0.924) | 0.137 (0.376) | 0.018 (0.908) | −0.145 (0.348) |
| | Pattern recognition | −0.110 (0.548) | −0.155 (0.398) | −0.089 (0.629) | 0.041 (0.793) | 0.076 (0.625) | −0.088 (0.569) |
| | Overall | −0.114 (0.535) | −0.119 (0.518) | −0.056 (0.760) | 0.147 (0.340) | −0.045 (0.770) | −0.162 (0.295) |
| Response times | Combined tasks | 0.201 (0.269) | 0.013 (0.944) | −0.104 (0.572) | 0.065 (0.675) | −0.071 (0.648) | 0.108 (0.486) |
| | Spatial understanding | 0.108 (0.557) | 0.069 (0.707) | −0.130 (0.479) | −0.008 (0.957) | −0.096 (0.533) | 0.226 (0.141) |
| | Pattern recognition | 0.348 (0.051) | 0.092 (0.616) | −0.085 (0.645) | 0.127 (0.413) | −0.036 (0.817) | 0.262 (0.085) |
| | Overall | 0.208 (0.253) | 0.064 (0.726) | −0.127 (0.488) | 0.042 (0.789) | −0.085 (0.582) | 0.215 (0.161) |
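A sketch of the correlation analysis summarized in Table 8, assuming a hypothetical participants.csv with one row per participant (OSIVQ scale scores plus aggregated performance); Pearson’s R is used here, and all column names are illustrative assumptions:

```python
import pandas as pd
from scipy import stats

# Hypothetical aggregated data: one row per participant, with OSIVQ scale
# scores and total correctness (column names are illustrative assumptions).
df = pd.read_csv("participants.csv")

# Correlation of each OSIVQ scale with overall correctness, computed
# separately for laypersons and experts (cf. Table 8).
for group, sub in df.groupby("expertise"):
    for scale in ["osivq_object", "osivq_spatial", "osivq_verbal"]:
        r, p = stats.pearsonr(sub[scale], sub["total_correct"])
        print(f"{group:>10} {scale:<14} R = {r:+.3f} (p = {p:.3f})")
```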
