### 3.2.1. Setup

In our experiment, participants navigated a virtual office via an HTC Vive Pro HMD (refresh rate: 90 Hz, resolution: 1440 × 1600 pixels per eye, FoV: 110°) connected to a workstation (Intel(R) Core(TM) i7-6850K CPU @ 3.60 GHz). The environment was developed in Unity 3D version 2019.4.35f1, a game engine by Unity Technologies (San Francisco, CA, USA) that is widely used by game developers worldwide. Data analysis was performed using R version 4.2.2 and RStudio version 2022.07.2+576. R is a programming language and software environment for statistical computing and graphics, developed by the R Development Core Team and maintained by the R Foundation (Vienna, Austria); RStudio is an integrated development environment (IDE) for R, developed by RStudio, Inc. (Boston, MA, USA).

### 3.2.2. Experience Design

The experience was designed as a role-playing serious game in which the user, in the role of a new employee, is exposed to two different levels of uncertainty in the context of interpersonal communication in a workplace scenario. To this aim, the story plot that develops within the experience takes inspiration from Amelia Bedelia, the protagonist and title character of the children's book series by Peggy Parish [62]. Amelia Bedelia is a housekeeper who receives written instructions because her boss cannot be present in the house on her first day of work, and she interprets those instructions literally. Each instruction contains lexical ambiguities. Despite such ambiguity, Amelia stays positive and expresses her excitement to do her job well and make her boss happy, but she repeatedly misunderstands the guidance. Inspired by this story line, our application implements:


Please see Table 3 for a comparison between these features in the proposed platform and the Amelia Bedelia story.


**Table 3.** A comparison of the elements in the proposed platform with those in the Amelia Bedelia story.

The application includes two phases: the "Familiarization" phase and the "Main" phase. The goal of the Familiarization phase is to remove any uncertainty arising from unfamiliarity with VR interfaces and the related context. For this purpose, it provides information and a step-by-step tutorial with feedback to familiarize the user with the context and let him/her feel confident with the interactions that will then be executed. In this phase, the user gets to know the boss, his/her role, the space s/he will be working in, the means of communication, and the way s/he can accomplish the tasks indicated by the boss using the available interfaces. At the end of this phase, once the system confirms that the user has successfully executed all steps, s/he reaches the virtual office by pressing the "Move to the office" button on the panel attached to the left hand (see Figure 2 for some screenshots taken from the Familiarization phase).

**Figure 2.** Screenshots showing some steps of the familiarization phase, from left to right: (**a**) selecting the "I am ready" button by the user; (**b**) teleporting to a destination; (**c**) reading instructions of the task from the panel on the user's left hand; (**d**) removing an interactive object; (**e**) submitting the current task; (**f**) selecting the "Move to the office" button to move there.

The Main phase starts with the user finding himself/herself inside the virtual office in front of a door. After 10 s, a phone starts ringing, and s/he should answer it. The boss is on the phone, welcoming the player, asking him/her to follow some instructions, and explaining three options that will be available during the experience: submitting the current task, refusing it, and requesting help, all via buttons on the panel. The boss also says that he will call again if something important comes up. By pressing the "I am ready" button on the panel, the description of the first task appears. The user can now teleport to move within the office environment, removing objects according to the instructions. The removed objects become visible in the "Item" tab of the panel, and the user can cancel a previous removal by pressing the close button near the corresponding image. Task 1 finishes when the user presses either the "Submit task" button or the "I do not do this task" button, after which s/he is asked to complete a second task. In the middle of the second task, after a specific number of objects has been removed, the phone rings again. The boss warns the user that it may be necessary to cancel previous removals in order to follow a new set of instructions. Task 2 finishes when the user presses either the "Submit task" button or the "I will not do this task" button. At any time, the user can also decide to quit by pressing the "Exit the game" button.
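For clarity, the Main-phase flow described above can be viewed as a simple state machine driven by phone and button events. The following Python fragment is an illustrative abstraction only (the actual application was implemented in Unity 3D); button labels are taken from the text, while state names and method names are our own:

```python
# Illustrative sketch of the Main-phase flow, NOT the study's Unity
# implementation. Button labels come from the experience description;
# state names are hypothetical. Timing (10 s phone delay) is omitted.

class MainPhase:
    def __init__(self):
        self.state = "WAIT_FOR_PHONE"   # phone rings 10 s after entering
        self.removed_items = []         # items shown in the "Item" tab

    def on_phone_answered(self):
        if self.state == "WAIT_FOR_PHONE":
            self.state = "BRIEFING"     # boss explains the three options

    def on_button(self, label):
        if self.state == "BRIEFING" and label == "I am ready":
            self.state = "TASK_1"       # first task description appears
        elif self.state == "TASK_1" and label in ("Submit task",
                                                  "I do not do this task"):
            self.state = "TASK_2"       # second task begins
        elif self.state == "TASK_2" and label in ("Submit task",
                                                  "I will not do this task"):
            self.state = "FINISHED"
        elif label == "Exit the game":
            self.state = "EXITED"       # available at any time

    def remove_object(self, name):
        self.removed_items.append(name)   # appears in the "Item" tab

    def cancel_removal(self, name):
        self.removed_items.remove(name)   # close button near each image
```

The second phone call during Task 2 does not change the state in this sketch; it only signals that previously removed objects may need to be cancelled via `cancel_removal`.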

The virtual office is hence furnished with interactive and non-interactive objects, as well as two dynamic blackboards serving as sources of information (see Figure 3 for some screenshots of the virtual office environment). As shown in Figure 4, there are five possible sources of information in the experience: (1) a small blackboard that displays the name of the current task; (2) a big blackboard that communicates the current status; (3) a small blackboard attached to the panel, which is a closeup of the big one; (4) a phone that blinks and rings when the boss calls; and (5) a task board showing the instructions for each task. The different parts of the panel are shown in Figure 5, and an example of teleporting and removing interactive objects is shown in Figure 6. In addition, to increase immersion, an ambient sound simulating the sounds coming from nearby offices is played during the Main phase; this also helps reduce the confounding effects of noises coming from the real world.

**Figure 3.** Screenshots showing two views of the office: (**a**) the view of the office from the user's perspective at the beginning; (**b**) another view of the office.

**Figure 4.** Screenshots showing five sources of information for the user during the experience: (**a**) Arrow 1 points to a small blackboard that displays the name of the current task. Arrow 2 points to a big blackboard that communicates the current status of the tasks during the experience; (**b**) Arrow 3 points to a small blackboard attached to the panel that is a closeup of the big one; (**c**) Arrow 4 points to a phone that blinks and rings when the boss calls; (**d**) Arrow 5 points to a task board that shows the instructions for each task.

**Figure 5.** Screenshots showing different parts of the panel for interactions: (**a**,**d**) selecting the "I am ready" and "Answering the phone" buttons by the user; (**b**,**e**) visualization of the task tab; (**c**,**f**) visualization of the item tab.

**Figure 6.** Screenshots showing the participant: (**a**) removing interactive objects; (**b**) using teleportation to move in the environment.

The tasks consist of sequences of instructions to search for and remove objects, delivered in written or verbal form at different moments of the experience. The tasks include two levels of uncertainty, inspired by the definition of Hillen et al. [28]. As explained before, uncertainty appears in terms of ambiguity, probability, and complexity. Ambiguities result from incomplete guides and instructions, probability appears as unexpected task changes, and complexity as a change in the number of causal factors in the instructions. Following this framework, we designed two tasks representing two levels of uncertainty:

Task 1 (or base task): This includes a simple and clear set of instructions. For each step, the number, location, and color of the objects to be removed are clearly stated.

Instructions for Task 1:


Task 2 (or task with an intermediate level of uncertainty): This task has a higher degree of uncertainty, with more complex instructions than Task 1. It comprises lexical ambiguity in the instructions, instructions with missing information, and the possibility of instruction changes on the fly (see Figure 7 for the placement of objects in seven different areas of the environment with their associated rugs).

Instructions for Task 2:


Instructions for Task 2 (after change):


**Figure 7.** Screenshots of the seven different areas for object placement, each characterized by the color of its associated rug.
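The way the two tasks instantiate the three sources of uncertainty described above can be summarized in a small lookup structure. The following Python fragment is a hypothetical encoding for illustration only; the field names and values are our own labels for the properties stated in the task descriptions, not part of the implementation or of Hillen et al. [28]:

```python
# Hypothetical summary (our labels, for illustration) of how the two
# tasks instantiate the three uncertainty dimensions described above.
TASK_UNCERTAINTY = {
    "Task 1": {                # base task: simple, clear instructions
        "ambiguity": False,
        "probability": False,  # no unexpected task changes
        "complexity": "low",
    },
    "Task 2": {                # intermediate level of uncertainty
        "ambiguity": True,     # lexical ambiguity in the instructions
        "probability": True,   # instructions may change on the fly
        "complexity": "high",  # missing information, more causal factors
    },
}

def uncertainty_level(task):
    """Count how many uncertainty dimensions a task exhibits."""
    entry = TASK_UNCERTAINTY[task]
    return sum([entry["ambiguity"], entry["probability"],
                entry["complexity"] == "high"])
```

Under this encoding, Task 1 exhibits none of the three dimensions and Task 2 exhibits all three, matching the intended ordering of the two uncertainty levels.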
