Article

Realizing Ambient Serious Games in Higher Education—Concept and Heuristic Evaluation

Institute of Telematics, University of Lübeck, 23562 Lübeck, Germany
* Author to whom correspondence should be addressed.
Trends High. Educ. 2025, 4(3), 52; https://doi.org/10.3390/higheredu4030052
Submission received: 29 July 2025 / Revised: 4 September 2025 / Accepted: 8 September 2025 / Published: 16 September 2025

Abstract

In the IoT era, higher education is transforming into a more digitalized learning environment, also known as Education 4.0. In this context, the use of serious games for teaching is also evolving into a more adaptive digital form. This paper describes how this could be realized in combination with Learning Management Systems and smart environments at universities. The concept is based on previously published requirements elicited through a human-centered design process. It forms the foundation upon which several games were designed and implemented. Two heuristic studies were conducted to evaluate the general concept and the implemented games. The results show that the games are suitable for educational use and that their integration into smart environments could be appropriate for addressing the requirements of the Education 4.0 paradigm.

1. Introduction

Similarly to industry, education is undergoing a transformation toward greater digitalization and individualization. Education 4.0 is a term that refers to a paradigm in which university teaching is adapted to students [1]. Digital technologies support the delivery of educational content and its adaptation to the current campus context and users [1]. Concepts that integrate computers into the environment to make this paradigm a reality are referred to as Ambient Learning Management Systems [2] or Pervasive University [3]. These systems use sensors that recognize context and users, adapting digital content or its presentation accordingly [2,3].
Along with the technological transformation of the university environment, the organization of teaching is changing as well [4]. Rather than direct lecture-centered instruction, Education 4.0 employs active, learner-centered teaching methods [5]. One such method is the implementation of serious games. Studies on serious games have shown positive effects on learning adherence and motivation [6,7,8,9]. However, examples of adaptive games that use innovative digital technologies in university teaching, implemented in line with the transformation toward Education 4.0, remain scarce [10].
Ambient Serious Games are one way of transforming serious games into Education 4.0. These are serious games that are implemented using computers embedded in the environment and adapt to the user and the educational environment [11].
In prior work, a human-centered design process was used to determine the requirements for Ambient Serious Games. Based on these, this paper presents games that meet the requirements for Ambient Serious Games and reports the results of two heuristic evaluations. The following research questions are addressed:
  • How can Ambient Serious Games be designed to meet the requirements?
  • How are concepts for such games assessed by experts?
  • How are the implemented games perceived by users with expert knowledge?
In conclusion, the results of the investigations will be discussed, and the steps required for better integration into everyday university life will be outlined (see Section 6).

1.1. Ambient Serious Games

Ambient Serious Games are digital games designed for a serious purpose—such as education, training, or behavior change—that are embedded in smart environments. They combine elements of ambient games and serious games, and often operate in ubiquitous computing contexts, meaning the technology is integrated into the physical environment in a way that can be unobtrusive or even invisible to the player (cf. [11,12]).
Although they do not completely overlap, there are related works in the field of pervasive or serious games that are similar to Ambient Serious Games. One example is the game LINA, developed by Mittmann et al. [13]. It is aimed at primary school pupils who use smartphones to follow augmented clues and find their fictitious missing classmate, Lina. Players work together in their regular classroom setting. The augmentation enriches the environment, but the smartphones as interaction devices themselves are not the focus of the students’ perception. Additionally, the augmentation is limited to audio output via headphones, and no other technology is used to enrich the spatial environment with smart capabilities to support immersion. Another example is the game developed by Stoldt et al. [14]. The game is categorized as a pervasive game and consists of a spatial environment with four pillars as its cornerstones. These are illuminated with changing colors to create a spatial atmosphere. In order to answer knowledge questions, cubes integrated into different narratives must be assigned to the pillars. The narratives are interchangeable but not adapted to the players, distinguishing the game from Ambient Serious Games. Another example that incorporates contextual information is the game Searching for Us [15]. With the help of a web app and two haptic avatars set in wooden frames, players can experience the story of two characters. Environmental data is recorded and influences the story presented by the app. The game adapts to the user through context sensitivity; however, it is played outside by moving around in a city—not in a smart, computer–enriched environment. There are many other examples of pervasive or ambient games that overlap with Ambient Serious Games. Some of these games have a serious purpose. For instance, the aforementioned games focus on teaching social skills (see [13,15]) or testing factual knowledge (see [14]). 
Despite their partial overlap, these examples illustrate the difference between Ambient Serious Games and previously published works. However, it cannot be ruled out that games already exist that fully correspond to the above definition.
The following sections present our approach to Ambient Serious Games, their concepts and implementation, as well as their mechanics. First, we present the published requirements and the procedure used to evaluate them.

1.2. The Use of Learning Management Systems

In higher education, instructors use so-called Learning Management Systems to organize teaching. Learning Management Systems support both the structuring and delivery of courses. Primarily, they provide a platform where instructors and students can access lecture materials in various media formats. These materials are organized into so-called courses, in which students enroll [16]. A differentiated role management system controls access rights within these courses. In addition, Learning Management Systems offer further communication tools such as chats or forums [16,17]. Due to their digital nature, Learning Management Systems enable the integration of interactive learning tools such as whiteboards, calendars, or quizzes [17]. Instructors can evaluate the latter.
Both commercial and open-source solutions are available on the market. In Germany, seven Learning Management System solutions are widely used in higher education [17]: (1) Moodle, (2) Blackboard, (3) Clix, (4) ILIAS, (5) OLAT, (6) Stud.IP, and (7) Chamilo.
Since the games are developed for use in higher education, it is reasonable to draw on existing information from Learning Management Systems when creating them. Interactive tests can be reused to create game tasks, minimizing the effort required of lecturers to incorporate games into their courses. To ensure educational games are easily accessible and can be seamlessly integrated into teaching, it makes sense to link them interoperably to the already familiar, widely used interactive test features. Therefore, the process of learning through exercises and quizzes in Learning Management Systems is analyzed in the following.
Instructors integrate interactive tests into their respective Learning Management System courses and create the content. They decide which parts of the lecture are reviewed or tested and provide the media used within them. When designing the interactive tests, they also define the difficulty level. Offering identical tests at multiple difficulty levels is uncommon. Interactive tests consist of at least one question, and instructors determine their order. Depending on the settings, students can usually repeat interactive tests, which may serve purely for self-reflection. However, instructors can also use interactive tests to assess student performance. Time limits for completion are optional. Instructors—and, if allowed, students—can access automatic evaluations in the Learning Management Systems.
Students access interactive tests within the course environment of the Learning Management System. Once a test starts, they can begin answering the questions. From that point on, the Learning Management System tracks the time spent. Students do not need to follow a linear order and can revise answers after submission. Once they complete all questions, they submit the entire test. Whether each question must be answered depends on the instructor’s settings. The Learning Management System stores the results, and students can access them, provided the instructor has granted permission. Instructors have access to the results of all interactive tests submitted within their assigned courses.
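The attempt flow described above (time tracking from the start, nonlinear answering with revision, and storage of results on submission) can be sketched as follows. This is an illustrative Python sketch; the class and field names are our assumptions and do not correspond to any particular Learning Management System.

```python
from dataclasses import dataclass, field
import time

@dataclass
class TestAttempt:
    """Hypothetical model of one student's interactive test attempt."""
    question_ids: list
    answers: dict = field(default_factory=dict)
    started_at: float = None
    submitted: bool = False

    def start(self):
        # The LMS starts tracking the time spent from this point on.
        self.started_at = time.time()

    def answer(self, question_id, response):
        if self.submitted:
            raise RuntimeError("attempt already submitted")
        # Nonlinear order: any question may be answered or revised.
        self.answers[question_id] = response

    def submit(self):
        # Results are stored so instructors (and, if allowed, students)
        # can access the automatic evaluation.
        self.submitted = True
        return dict(self.answers)
```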

Question Types

The Learning Management System platforms commonly used in Germany offer a variety of question types in their interactive test tools [16,18,19,20,21,22,23,24]. To analyze the available question types, the documentation and user guides of the respective systems were reviewed. Identical question types with different names were grouped together. Table 1 provides an overview of the distribution of individual question types across the various Learning Management Systems.
It becomes evident that the question types Multiple Choice, Single Choice, and Free Text are available in all seven examined Learning Management Systems. The True/False and Cloze types are missing in one, and Assignment in two Learning Management Systems.
The Cloze and Free Text question types require students to enter text freely. In the context of ambient systems, this poses a challenge both for input and for the automatic correction required by games. In such cases, converting the task into another question type becomes necessary. This conversion helps prevent erroneous input due to poor usability in the game’s interface and simplifies automated correction by using discrete answer formats. For instance, a Cloze question may be reformulated into a Cloze selection type, which is supported by three of the Learning Management Systems examined.
The games presented in this paper are based on interactive tests with Assignment, Cloze selection, Multiple Choice, Single Choice, and True/False questions. Therefore, a software application was implemented that converts the questions into Java objects and integrates them into the games.
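The conversion step can be sketched as follows. The authors' implementation maps exported questions onto Java objects; this Python sketch only illustrates the general idea of restricting input to the supported types and normalizing everything to discrete answer formats. All field names and the input dictionary layout are assumptions.

```python
from dataclasses import dataclass

@dataclass
class GameQuestion:
    prompt: str
    options: list   # discrete answer options shown in the game
    correct: set    # indices of the correct options

# Only the question types named above are accepted; Free Text and plain
# Cloze are excluded because they cannot be corrected automatically with
# discrete answers (see the discussion of conversion above).
SUPPORTED_TYPES = {"assignment", "cloze_selection", "multiple_choice",
                   "single_choice", "true_false"}

def convert(raw: dict) -> GameQuestion:
    qtype = raw["type"]
    if qtype not in SUPPORTED_TYPES:
        raise ValueError(f"unsupported question type: {qtype}")
    if qtype == "true_false":
        # Normalize True/False into a two-option discrete question.
        options = ["True", "False"]
        correct = {0} if raw["answer"] else {1}
    else:
        options = raw["options"]
        correct = set(raw["correct"])
    return GameQuestion(raw["prompt"], options, correct)
```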

1.3. The Organic Software Approach

Interaction with various devices follows the concept of the Internet of Things, where smart objects include sensors, computing units, and network interfaces, allowing flexible combinations [25,26]. Here, these smart objects are connected via an organic computing architecture. The concept of organic computing introduces essential aspects such as self-description, self-reflection, and context awareness [26,27,28], which support the connection of input and output components according to their properties and user needs.
This concept is realized in this work with the Ambient Reflection Framework (cf. [26]). It enables dynamic connection, localization, and explanation of smart objects. Furthermore, Kordts [29] extended the framework to integrate applications and provide interactive tutorials. The framework allows individual game components to connect dynamically to available devices. These components, realized as standalone web applications in our case, as well as the input and output devices, can describe their own states and services, communicate with each other, and respond to changes. This includes, for instance, a status change due to the execution of an interaction, such as the selection of an answer option, but also changes to the hardware, such as the shutdown of a device or the loss of a connection. In addition, devices can be combined into ensembles that form a common unit from the perspective of user interaction. Several ensembles can communicate with one another via the Ambient Reflection Framework to connect individual units. With further extensions, the framework can also deploy applications, and therefore games, to suitable devices in the environment without requiring user interaction [30]. In a university setting, it is important that the framework provides tools that allow objects with a software interface to connect to it (see Virtual Device Daemon in [30]). This means that the framework can be used without requiring specific devices from certain manufacturers.
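The self-description idea behind this architecture can be sketched as follows: each smart object publishes its state and services, listeners react to changes (such as a lost connection), and ensembles group devices into one unit from the user's perspective. This is a minimal illustrative sketch; the actual Ambient Reflection Framework API is not reproduced here, and all names are our assumptions.

```python
class SmartObject:
    """A device that can describe itself and announce state changes."""
    def __init__(self, name, services):
        self.name = name
        self.services = services      # e.g. ["display"] or ["light"]
        self.state = "connected"
        self._listeners = []

    def describe(self):
        # Self-description: properties other components can match against.
        return {"name": self.name, "services": self.services, "state": self.state}

    def on_change(self, callback):
        self._listeners.append(callback)

    def set_state(self, state):
        # A hardware change (e.g. connection loss) notifies all listeners.
        self.state = state
        for cb in self._listeners:
            cb(self.describe())

class Ensemble:
    """Devices that form a common unit from the user's perspective."""
    def __init__(self, devices):
        self.devices = devices

    def find(self, service):
        # Match components to devices by their described services,
        # ignoring devices that are no longer connected.
        return [d for d in self.devices
                if service in d.services and d.state == "connected"]
```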
Based on this framework, Ambient Serious Games can achieve the following adaptive features:
  • Adaptation to the physical abilities of the users, as the system can integrate input devices that support barrier-free interaction.
  • Adaptation to the current environment. The system can connect to any device that supports the framework, allowing the same game to run in rooms with different setups.
  • Real-time deployment of personalized games in the environment by distributing applications to appropriate smart objects.
  • Adapted tutorials for an explanation of the dynamically connected smart objects.
Systems can transmit the necessary user profile information—similar to interactive tests—from a Learning Management System to the game environment, enabling adaptation to both the user and environment.

2. Methods and Materials

The game design is based on the requirements of Ambient Serious Games, which lead to design decisions. Table 2 presents a list of the published requirements. The identifier listed in the table will be used as a reference in the results section.
The game design process was realized using the IntelliJ [32] development environment, which is frequently used in programming contexts. All games include videos introducing the gameplay with game characters for an emotional connection (see requirement J3). Each video is based on texts spoken by the characters. These were created with the large language model ChatGPT 3.5 [33], which is freely available for a certain number of requests. After creation they were revised by hand.
The revised results were in turn supplemented with ChatGPT to include sounds that make the speech sound more natural when read aloud, for example pauses, sighs, or throat clearing. The results of these prompts were revised again and then converted into audio signals using Bark [34], a freely available text-to-speech tool. These audio files were then edited. The generated audio files have the characteristic that texts are read aloud at varying speeds or that unknown words are slurred or pronounced unusually. Where possible, these passages were sped up or removed, provided that the meaning was retained.
For the visual representation, freely available 3D models were used and animated using Unity [35]. A script animates the movement of the mouths based on the amplitude deflection of the audio signals in the vertical direction. Body gestures were animated manually and integrated at a suitable point. The videos created with Unity were then given fade-in effects and finally edited.
When creating the game content, particular attention was paid to ethical aspects concerning requirements L1 and L2.
For the learning environment, we used an organic computing approach with self-describing devices that can be dynamically connected (see Section 1.3). Six ensembles were built with this technology, each consisting of one screen, one button board for input, and a smart light bulb on top of the screen. The screens were connected to computers with speakers for individual sounds at each ensemble. Each computer runs a standalone application that forms part of the complete game application. Figure 1 shows the room’s layout from two angles, revealing all six ensembles. The ensembles were placed at approximately equal distances from each other whenever possible to address requirement K2. Additionally, the environment was equipped with moving-head spotlights for background lighting and loudspeakers for background music. A software component replaces predefined gaps in the game applications with interactive tests exported from the Learning Management System.
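The gap-replacement step can be sketched as a simple template substitution: predefined placeholders in a game application are filled with questions from the exported interactive test. The placeholder syntax `{{question:N}}` is an assumption for illustration, not the authors' actual format.

```python
import re

def fill_gaps(template: str, questions: list) -> str:
    """Replace each hypothetical {{question:N}} placeholder with the
    N-th question exported from the Learning Management System."""
    def repl(match):
        index = int(match.group(1))
        return questions[index]
    return re.sub(r"\{\{question:(\d+)\}\}", repl, template)
```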

3. Game Design

Requirements A1 through A4 describe the need for adaptation to the learner. To this end, a questionnaire that determines player characteristics precedes the game. Additionally, before playing the game, an appropriate interactive test from the Learning Management System must be transferred to it, providing the content for the knowledge assessment. After playing the game, a record of the correct or incorrect answers to the knowledge questions is sent back to the Learning Management System. These features also address requirements H1, H2, and H4.
According to the theory of different player types and the requirements of Ambient Serious Games, separate games should be designed for each type of player. This paper uses the player type questionnaire developed by Krath and von Korflesch [36] to classify players into one of the three profiles we identified in a previous study [10]. The mechanics used in the games follow the outcomes of that study and were assigned to the player types of the underlying HEXAD framework (cf. [36,37,38]).
For this reason, three game variants are presented below, followed by the evaluation results.
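The classification step can be sketched as follows. This is purely illustrative: the real mapping from questionnaire scores to the three profiles identified in the authors' earlier study is not reproduced here, and the type-to-profile associations and scoring rule below are invented for illustration only.

```python
# Hypothetical association of each profile with HEXAD player types.
# These weights are assumptions, not the mapping from the cited study.
PROFILE_TYPES = {
    "social_achiever": ["socialiser", "achiever"],
    "socializer": ["socialiser", "free_spirit"],
    "player": ["player"],
}

def classify(scores: dict) -> str:
    """Pick the profile whose associated HEXAD types score highest
    on average in the questionnaire results."""
    def profile_score(types):
        return sum(scores.get(t, 0) for t in types) / len(types)
    return max(PROFILE_TYPES, key=lambda p: profile_score(PROFILE_TYPES[p]))
```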

3.1. Social Achiever Game

The first game is for the Social Achiever player profile (cf. [10]). This profile responds to mechanics such as Learning, Nonlinear Gameplay, Knowledge Sharing, Certificates, Challenges, and Exploratory Tasks.
Since the concept targets single-player experiences, mechanics that require interaction with others, such as Knowledge Sharing, are only partially applicable. Similarly, implementing nonlinear gameplay is challenging within the short duration of each play session. An alternative would be to offer several branching paths that continue linearly or unfold in parallel through short segments.
To address this, a virtual escape room scenario was developed that incorporates the remaining relevant mechanics. The game spans three virtual rooms. Players must answer questions based on Learning Management System interactive tests in each room to unlock clues, which are then used to solve puzzles (cf. requirement B1). Each puzzle yields a code used to open a locked door. Solving all three puzzles is required to escape. Players must explore the doors to correctly interpret the clues and solve each challenge.
This design emphasizes the mechanics of Challenges and Exploratory Tasks. The game ties these mechanics together through a narrative: a professor (see Figure 2) designed the escape room to find a worthy successor for his lab. Although not explicit, this introduces the Certificates mechanic. The narrative also allows for scientific puzzles that teach skills such as binary number conversion. Clues appear in the same order on each device, reinforcing the Learning mechanic.
Table 3 outlines the game sequence. The second and third columns represent an ensemble consisting of the arcade button board, game interface, and feedback lamp. To keep the table clear, it groups together ensembles that behave identically. The second column shows the room lighting and background audio for each phase. This is used to create a subtle background atmosphere that does not capture the students’ attention and remains in the users’ periphery, in line with requirements J1 and J2.
The game begins with a prompt for user input. Every interactive interface features an embedded tutorial (see requirements C1 and C2) that explains how to use the arcade button boards (see Figure 3). The initial screen shows a dedicated input key on the interface.
After starting, a video plays on one of the output devices. It shows a stylized character in a white lab coat introducing the game’s story and explaining the sequence. In the next section, the test questions are displayed (see Figure 4). Correct answers trigger the corresponding ensemble’s feedback lamp to light up green, and the corresponding screen displays a clue. Incorrect answers cause the lamp to turn red, and another unanswered question appears, if available. Otherwise, the current question remains. Thus, it is necessary to answer all the questions to achieve the game’s characterizing goal (requirement B1) without the learning content preventing its achievement (requirement I1). An audio cue reinforces the positive or negative feedback. Combining audio and light provides multimodal feedback to address requirements I2 and I3.
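The feedback loop described above can be sketched as follows: a correct answer turns the ensemble's lamp green and reveals a clue, while a wrong answer turns it red and serves another unanswered question if one remains. The device calls are stubbed and all names are illustrative.

```python
def handle_answer(correct, lamp, screen, remaining, current):
    """Minimal sketch of the per-question feedback behavior.

    lamp and screen are callables standing in for the ensemble's
    feedback lamp and display; remaining is the list of still
    unanswered questions."""
    if correct:
        lamp("green")
        screen("clue")          # the clue for the current puzzle appears
        return None             # this question is done
    lamp("red")
    # Present another unanswered question, or keep the current one
    # so that every question must eventually be answered correctly.
    return remaining.pop(0) if remaining else current
```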
After collecting all clues, the corresponding door appears. A moving light effect highlights the final door to increase suspense.
The first puzzle reveals blocks of three letters that form the phrase ’TELL ME THE TRUTH!’. Players must enter this phrase into a command-line interface embedded in the door (Figure 5A). An AI module then generates a task-specific response containing the door code in block letters.
The second door (Figure 5B) involves converting four-digit binary numbers to decimal. Clues present 15 bits in groups of three. The door interface displays a mask with a leading zero and blank fields for the 15 digits. Players convert each block to decimal and input the resulting numbers in the order indicated by the clues.
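The conversion behind the second door can be worked through in code: the 15 clue bits, grouped in blocks of three, are each converted to a decimal digit and concatenated in the order given by the clues. This sketch assumes that the leading zero shown in the door mask simply pads each three-bit block, so the decimal value of each block is unchanged.

```python
def door_code(bits: str) -> str:
    """Convert a string of clue bits, in blocks of three, into the
    decimal door code (illustrative sketch of the puzzle's logic)."""
    assert len(bits) % 3 == 0
    blocks = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    # Each block (with an implicit leading zero) becomes one decimal digit.
    return "".join(str(int(b, 2)) for b in blocks)
```

For example, the 15 bits `001 010 011 100 101` would yield the code `12345`.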
The third door (Figure 5C) uses a periodic table. Clues specify elements, and players must combine their atomic numbers in the correct sequence to get the code. Duplicate numbers must be excluded, as instructed in the text tutorial.
The clue interfaces use a template text layout. In contrast, each door interface is custom-designed and embedded as a scalable vector graphic in HTML. After the interface loads in the browser, the input field positions are calculated and overlaid using HTML elements that capture user input. These fields are linked to the input devices via the organic framework (see Section 1.3).
Text tutorials use the same templates as the clues but also include the path to an audio file. The professor’s voice explains each puzzle in these recordings.
Finally, the outro-screens do not require user input. They credit the creators of audio assets when licensing requires attribution. After 10 s, the game shuts down automatically.

3.2. Socializer Game

The second profile is the Socializer Profile. This player profile allows for various design possibilities (cf. [10]), particularly in terms of the number of mechanics that can be integrated. However, since the game is single-player, integrating social interactions with other players presents a challenge. To address this, the game features a main character who engages directly with the player. Unlike the antagonist in the escape scenario, this character is not designed as an opponent but rather depends on the player’s help. This setup is intended to create the feeling of working together as a team with a virtual companion. The game narrative unfolds as a space adventure.
The character plays the role of a high-tech space officer tasked with solving the theft of a seed capsule (see Figure 6). However, his forensic technician has played a prank on him: the officer can only use his equipment after answering a series of knowledge questions correctly. The player must now help by answering these questions, enabling access to the required tools.
To incorporate the Levels or Progression mechanic, the officer gains access to various fictional forensic technologies throughout the game, each suited to different investigative tasks. Initially, correct answers to knowledge questions allow the player to analyze suspect interviews. Every correct answer unlocks an "investigator question" at a terminal, which presents a follow-up question and a corresponding response based on the interviews. Players are free to choose which investigator question they want to reveal based on the knowledge questions they answer correctly.
Each suspect appears within a specific device setup. The investigator questions and their corresponding answers were generated and adapted using ChatGPT 3.5. The responses are structured so that each suspect provides at least two clues pointing to the actual culprit, wrongly accuses another character, and falsely clears a third. For each suspect, six to seven question–answer pairs are available. Out of the total 34 pairs, players can unlock ten. Eight random pairs have already been revealed by the officer.
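The interrogation economy described above can be sketched as follows: of the 34 question-answer pairs, eight are pre-revealed at random, and the player can unlock up to ten more, choosing freely which hidden investigator question each correctly answered knowledge question reveals. Function and parameter names are our assumptions.

```python
import random

def setup_interrogation(pairs, pre_revealed=8, rng=None):
    """Return the indices of the pairs the officer has already revealed."""
    rng = rng or random.Random()
    return set(rng.sample(range(len(pairs)), pre_revealed))

def unlock(revealed, choice, budget=10, used=0):
    """Reveal the chosen pair if the player still has unlocks left.
    Returns the updated revealed set and the number of unlocks used."""
    if used >= budget or choice in revealed:
        return revealed, used
    revealed.add(choice)
    return revealed, used + 1
```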
Once all available questions have been answered, the player must participate in a voting phase to identify the correct suspect. A correct guess unlocks an achievement. This is followed by the culprit’s escape, described in detail by the officer. The player must now operate a search drone using the same question-based system to track the suspect. All remaining knowledge questions are presented simultaneously, and the interface switches to a black screen once each one is answered correctly. This search sequence represents a Challenge mechanic, demanding heightened effort and concentration.
After completing all questions, the drone catches the suspect. The officer then provides a summary of the motives, and the game concludes with an outro scene. Table 4 summarizes the gameplay. The lighting effect described in the table is intended to heighten suspense, similar to the escape room sequence. The subfigures in Figure 7 illustrate the interfaces developed for conducting interrogations, identifying suspects, and performing the drone search.
The feedback behavior for answered questions mirrors the one used in the escape room game. The same applies to the game start, embedded videos, and outros.

3.3. Player Game

The last player profile is called Player. Due to the similarities between the player profile Player and Socializer in terms of suitable mechanics (cf. [10]), the two games align in many aspects. Most of the mentioned mechanics are strongly represented in the Player profile. An exception is the mechanic Guilds or Teams, which is more relevant for the Socializer profile, along with the associated mechanics Social Status, Social Discovery, Social Comparison, and Sharing Knowledge. Nevertheless, the narrative remains appropriate.
What particularly characterizes the Player profile is the relevance of reward-based mechanics. This results from the strong expression of the Player type within the profile. According to Kapp [40], these mechanics are primarily structural in nature. Therefore, no content-related changes to the games are necessary to implement them.
Based on these considerations, the game was adapted for the Player profile by introducing a points system for correctly answered questions. Players receive five points for a correct answer on the first attempt, three on the second, and one on the third. The system sums the points and displays a leaderboard at the end of the game. This leaderboard appears between game segments 21 and 22 (see Table 4). The corresponding interface is shown in Figure 8.
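The points system can be sketched directly from the rules above (five, three, and one point for a correct answer on the first, second, and third attempt, summed into the leaderboard total). The data representation is an assumption for illustration.

```python
# Points awarded by the attempt on which a question was answered correctly.
POINTS_BY_ATTEMPT = {1: 5, 2: 3, 3: 1}

def score(attempts_per_question):
    """Sum a player's points. Each entry is the attempt number (1-3)
    on which the question was answered correctly; anything else
    (e.g. None) yields no points."""
    return sum(POINTS_BY_ATTEMPT.get(a, 0) for a in attempts_per_question)
```

For example, a player who answers four questions on attempts 1, 2, 3, and 1 scores 5 + 3 + 1 + 5 = 14 points.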

4. Evaluation of Game Concepts

Before implementation, the game concepts, including story, mechanics, and environment, underwent a formative evaluation conducted with experts. This served to assess the playability of the games before their technical realization. The following research questions were addressed:
  • Are the game concepts perceived as playable and do they sufficiently meet design principles for serious games, or are there shortcomings that should be addressed prior to technical implementation?
  • What changes need to be made to improve the game concepts so that they are suitable as serious games?
To tailor the concept to the experts, their player types were first determined using the Player Type Questionnaire by Krath and von Korflesch [36]. The concepts were then presented using a PowerPoint presentation supported by graphics and audio elements from the games and their planned user interfaces.
Each concept represented a long-form version of the games, designed to span multiple interactive tests across a semester and structured into six progressive levels.
Each evaluation session began with the presentation of one game concept, tailored to the expert’s player type. The study team introduced the concept and answered any questions. After the presentation, each expert assessed the game using a heuristic checklist and provided content-based feedback on the design principles. This was done digitally using a LimeSurvey questionnaire hosted on the Institute’s server. The study lead remained available for questions throughout the evaluation.
The heuristic set used was an adapted version of the PLAY heuristics [41]. This set was selected because it focuses on playability, has been validated in previous research, and, according to the authors, can be modified by omitting heuristics as needed. This flexibility ensures that only relevant heuristics are applied, especially when a playable prototype is not available. Alternatives such as HEP [42] and PHEG [43], which each include 43 heuristics, would require significantly more time from participants.
Since evaluating certain subcategories—such as Enduring Play, Challenge, Strategy and Pace, and Player’s Perception of Control (from the Game Play category), as well as Status and Score, Screen Layout, Navigation, and Error Prevention (from the Usability and Game Mechanics category)—requires an interactive prototype, a total of 24 heuristics were excluded from the concept evaluation.
In addition to qualitative feedback, participants rated the implementation of each concept on a five-point scale from “very good” to “not good”. All heuristics within a category were rated together, resulting in 11 quantitative scores per evaluation.
The study aimed to recruit a sample of 5 to 10 experts (cf. [44,45,46]), based on the following inclusion criteria:
  • A Master’s degree in Media Informatics, Computer Science, Game Design, or a related field.
  • Experience with digital or non-digital games, with at least one gaming session per month over the past six months.
  • Proficient understanding of both English and German.
Individuals with psychological or physical impairments affecting vision or those sensitive to light and audio effects were excluded from participation.
Experts were recruited from the university environment. Academic staff from the computer science institutes at the university were contacted via email or in person. Interested individuals received the study information materials and could reach out to the study team if they wished to participate. All preparatory information, including player type data and survey responses, was collected via email and online questionnaires.
In total, nine participants took part in the survey. Each game concept was presented at least once. Table 5 summarizes the results of the quantitative part of the survey. Most participants rated the heuristics as neutral to good. However, one participant assessed all heuristics as rather neutral or negative.
The following section summarizes the qualitative comments. Each heuristic received feedback, which will be discussed in more detail in the context of the quantitative results. A complete list of remarks is available in the Supplementary Materials.
Consistency in Game World: Participants noted the linearity of the game flow, which preserves the game state by design. One person positively highlighted the separate protocol feature.
Goals: Participants considered the initial video sufficient to explain the game objectives. However, they missed options for individual adaptation to reward outstanding performance. One suggestion involved allowing players to customize their avatar or companion character based on success. Another participant emphasized that the player must have acquired the necessary knowledge through the lecture content.
Emotional Connection: Most participants associated emotional connection primarily with the companion avatar. They recommended making this character more human or emotional. One participant suggested giving the astronaut a robotic voice to match the graphic style. Two others missed a player character avatar and proposed shortening the introduction video.
Coolness/Entertainment: Participants mentioned several aspects relevant to the entertainment factor, such as (a) a multimedia presentation of all characters, not just the companion, and (b) side quests with rewards; they also noted (c) the risk that answering learning questions might feel tedious. In general, they viewed the concept of an Ambient Serious Game as beneficial to the coolness/entertainment factor.
Humor: Due to the short presentation, participants found it difficult to evaluate this heuristic. Some, however, noted that the companion character shows potential for humorous interaction.
Immersion: Participants rated immersion as sufficient, thanks to the combination of lighting, audio effects, and background music. They viewed the graphic quality as limited, likely due to budget constraints. Several participants expressed interest in additional video content.
Documentation/Tutorial: One participant requested a skippable tutorial, with the option to access the information later. While participants generally found the instructions sufficient, one person believed a full play-through would be necessary for a final judgment. They also felt that answering questions became easier when players were already familiar with the Learning Management System's interactive tests.
Game Provides Feedback: Participants found the feedback adequate but criticized the weak connection between gameplay and questions. One suggestion was to visually or narratively reflect incorrect answers within the game environment.
Terminology: Some participants found it hard to distinguish this heuristic from “Goals.” They acknowledged the clear definition of objectives but again requested stronger individual adaptation and better integration between learning content and gameplay.
Burden on Player: Participants felt that the game did not punish them unnecessarily. They considered the ambient interaction manageable, although they preferred the option to skip tutorials. A full play-through seemed necessary for a final assessment.
Game Story Immersion: Participants rated the story as sufficient but suggested increasing immersion by giving all characters a richer multimedia presentation or by integrating the player more directly into the narrative.

Conclusions

When considering the content-related feedback, several suggestions emerged for improving the game concept. Multiple participants found the tutorial videos too lengthy and recommended making them skippable—a change that should be implemented to avoid losing player interest early. Feedback also emphasized the importance of well-developed companion characters and additional figures. Depending on available resources, these characters should be rich in detail and exhibit distinct personalities that influence their actions and expressions. Another crucial point concerns a stronger connection between learning content and gameplay. However, because the games are generated dynamically from a concept sketch and from independently created Learning Management System tests, achieving this integration poses significant challenges. Ideally, educators should be able to create games that align with their lecture content in a simple way; however, this approach would require a separate game for each test, resulting in considerable resource demands. This issue is further addressed in Section 6. The next section describes how these findings influenced the implementation of the game concepts.
The implementation of the games took into account the need for characters with humorous or relatable traits. Players can now skip videos, and short tutorial texts provide the most important information (marked with * in Table 3 and Table 4). In the escape room game, the text tutorials are additionally accompanied by audio files, whereas the space adventure game uses text only. This choice reflects their function: the adventure text tutorials summarize key explanations from the videos without introducing new information, while the tutorials in the escape room game introduce the puzzles and give additional hints. At the time of the evaluation, only the introduction video existed, while other instructions appeared solely as text.

5. Evaluation of Game Prototypes

The objective of this evaluation is to assess the usability and playability of the playable prototypes, which were developed based on previously evaluated concepts. The evaluation addresses the following research questions:
  • What usability and playability issues arise when implementing the games using the prototypes?
  • What changes are necessary to improve the games for further use?
This formative evaluation also follows a heuristic approach. In addition to assessing playability, the experts primarily focus on evaluating the usability of the technical implementation. For this purpose, the HEEG heuristic set [47] was chosen, as it covers both usability and playability aspects. Moreover, HEEG includes criteria specifically related to educational content.
Most other heuristic sets identified during the literature review that address both usability and playability do not focus on educational games and are therefore not suitable alternatives. One exception is the set proposed by Fitchat and Jordaan [48], which explicitly targets serious games. However, their set concentrates on user experience rather than directly addressing usability and playability. Additionally, it was tested only with users in a single game and has not been evaluated by independent experts.
Each recruited expert received a game customized to a specific player type, which was determined using the questionnaire published by Krath and von Korflesch [36]. After playing the game, the experts evaluated it using an online questionnaire based on the HEEG heuristics, similar to the previous evaluation. The interactive tests included basic arithmetic problems and two cloze tasks covering natural numbers and prime numbers.
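The assignment of a game variant by player type can be sketched as follows. The sketch assumes Likert-style item scores summed per HEXAD type with a simple argmax; the item values, tie-breaking rule, and mapping to game variants are illustrative assumptions, not the published scoring procedure of the questionnaire:

```python
# Hypothetical sketch: determine the dominant HEXAD player type from
# per-type Likert item scores, then select a matching game variant.
HEXAD_TYPES = ["Philanthropist", "Socialiser", "Free Spirit",
               "Achiever", "Player", "Disruptor"]

def dominant_type(item_scores):
    """item_scores: dict mapping HEXAD type -> list of Likert item values.

    Returns the type with the highest summed score; ties are resolved by
    the order in HEXAD_TYPES (an illustrative tie-breaking rule).
    """
    totals = {t: sum(item_scores[t]) for t in HEXAD_TYPES}
    return max(HEXAD_TYPES, key=lambda t: totals[t])
```

An expert whose "Player" items score highest would, in this sketch, receive the game variant tailored to the "Player" type.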
The target sample corresponds to that of the previous evaluation, but the inclusion criteria differ, as this study required experts in media informatics who are familiar with usability principles from their education. The criteria for participation were as follows:
  • Currently enrolled in or graduated from a Media Informatics program.
  • Adequate proficiency in both English and German.
Individuals with significant visual, auditory, or motor impairments that could interfere with gameplay (e.g., sensitivity to light/audio effects or difficulty interacting with the game interface) were excluded.
Experts were recruited from the Bachelor’s and Master’s programs in Media Informatics at the university. This recruitment strategy ensured that the participants are or were students at the university and could identify with the target group. Invitations were distributed via email and the Moodle forum for study participation announcements. Alumni were also contacted via email within the university network. The information materials were sent to potential participants, who could then express their interest by contacting the study coordinator. Any questions were answered individually, either in writing or in person.
Five experts participated, and each game variant was played at least once. The evaluation produced the following results. All answers can be found in the Supplementary Materials.
Enduring Play: Participants found the tasks varied and appreciated the physical activity, which added dynamic elements to the gameplay. Despite some content similarities, mistakes did not feel like punishment. Repetition in math problems was offset by changing locations. While participants did not report frustration or monotony, they felt unsure at the beginning about how many tasks they needed to complete and whether they had to do so consecutively.
Challenge, Strategy and Pace: The variety of tasks and alternation between types and physical movement helped prevent fatigue. The gameplay remained engaging due to its dynamic flow. While the physical demands were moderate, participants recommended age-based adjustments to prevent overload.
Goals: In general, the game communicated goals clearly. However, after specific actions like code entry, participants briefly felt confused about what to do next. The overarching goal—identifying the thief—remained understandable. Participants suggested simplifying the introduction scenes, as the amount of information made it hard to grasp the context.
Concentration: According to participants, the game supported focus well. Task completion involved few distractions, and room lighting and music positively influenced the atmosphere. However, the connection between the math tasks and the space adventure’s questioning segment was not immediately clear, which led to a sense of performing unnecessary tasks.
Immersion: Music and lighting contributed significantly to immersion. The task types and spatial arrangement of terminals enhanced the experience. Still, some participants found the long introduction texts disruptive. One person missed auditory feedback for correct answers, even though it existed.
Documentation/Tutorial: Participants generally understood the game mechanics easily. However, they suggested a clearer and less text-heavy introduction. Minor inconsistencies appeared in the controls, such as navigation directions. The link between tasks and game objectives was not always obvious, and it was unclear to some that points were displayed after answering correctly in the "Player" profile.
Game Provides Feedback: Participants perceived basic feedback as present. However, the game did not always clearly indicate how many questions remained. They also wanted feedback on their overall game performance.
Screen Layout: Participants described the design as consistent and functional, especially for the controllers. However, the math tasks appeared somewhat minimalist and detached from the overall game style. The text for the questionnaire could benefit from a more compact presentation.
Navigation: Participants found navigation intuitive and simple. Nevertheless, minor inconsistencies occurred, particularly regarding the direction of text navigation during suspect questioning in the space adventure.
Error Prevention: Participants noted that context-specific explanations would have helped at the beginning. Misunderstandings occasionally occurred when starting tasks. In-game hints could ease entry, and additional help solving door puzzles was also requested.
Control: Participants reported that controls were generally simple and mostly consistent. However, some inconsistencies appeared between the controller and interface. After making mistakes, it was not always clear how to correct entries.
Supports Active Learning: The game allowed for a certain degree of exploration, which initially confused some participants. It primarily tested existing knowledge rather than teaching new material. The math tasks challenged players’ prior understanding.
Engenders Engagement: Puzzles and hidden information piqued player curiosity. The difficulty level seemed appropriate, and exploration was rewarded.
Appropriateness: Participants found the scenario well suited to the learning content and felt it was meaningfully embedded. They believed the game could serve as a useful supplement, though the artificial nature of the test task made it hard to evaluate exact integration.
Supports Reflection: The game did not include deliberate reflection phases. Participants suggested adding a brief summary or learning feedback after completing the tasks to support the learning process.
Provides Equitable Experience: Some tasks required specific prior knowledge, such as binary codes or prime numbers. As a result, not all tasks were equally accessible to everyone. Participants missed adaptive adjustments to individual knowledge levels.

Conclusions

The evaluation indicates that the implemented learning games were generally perceived as feasible and immersive. However, several usability aspects could be improved by refining the user interfaces. Most importantly, the game introductions and explanations require adjustments. The arcade button boards used to control the games were largely intuitive, with the exception of the questioning interface in the space adventure. The following section outlines all changes made to finalize the game design in accordance with the descriptions in Section 3.1.
We expanded the existing introduction videos with additional video or audio content to better explain the gameplay. The updated videos now guide players to the next step. In the space adventure, the video mentions the total number of useful questions for suspect interrogation. It also explains that players may freely choose the order of answering tasks or questioning suspects. For the “Player” profile, the explanation about scoring was added. These changes aim to ensure players understand what task comes next and how solving practice tasks connects to the game’s main objective. Additionally, the introduction of a forensic specialist into the story (see Section 3.1) provides a logical reason why players must complete tasks to conduct the interrogation. In the escape room game, the system now displays tips after players enter an incorrect code twice. A text tutorial explicitly informs players about this feature.
The team also made visual changes to the user interfaces. This included modifying the interfaces of the interviews (see Figure 7), which participants had criticized for information overload and confusing navigation direction. The list of suspect questions now appears one at a time. A new horizontal navigation system allows players to switch between questions. An icon indicates whether another question is available or not yet unlocked. Switching questions also updates the corresponding answer, if one was already revealed. This redesign addresses the need for a more compact questionnaire display.
To align the practice questions with the space adventure, the team replaced the background with one matching the game’s other interfaces. The true/false and assignment tasks now include a header that states the total number of statements and the current number. This change aims to eliminate confusion about how many questions remain.
The display time for tutorials was increased from five to ten seconds, while the time for showing feedback about correctness and earned points was reduced to two seconds.
Evaluation results related to learning content suggest that the generated games may effectively assess previously acquired knowledge. However, proper content delivery and post-game transfer of results to the Learning Management System are essential to enable reflection.

6. Discussion

The presented games demonstrate how the requirements of Ambient Serious Games can be incorporated into a learning game. These games embed questions from a Learning Management System into a meaningful narrative structure. The evaluation results demonstrate that a multimodal presentation of learning content is feasible. The test subjects predominantly rated the user interfaces as comprehensible and suitable. Criticisms of individual interface elements were addressed and incorporated into the revised version.
The second heuristic evaluation showed that experts and potential users perceive the applications as games and could see themselves using them for learning. In particular, users requested the ability to transfer correct and incorrect answers back into the Learning Management System to enable reflection on one’s level of knowledge.
The game concepts addressed the requirements formulated in Section 2. The player type is determined based on the user’s characteristics and generates a suitable game process. The overarching goal of the games is to complete a Learning Management System interactive test. All tasks must be completed to successfully finish the game.
In-game tutorials, in the form of videos and text overlays, help players operate the game without additional instructions. Interactive objects are explained via graphical user interfaces. New learning objects can be integrated into the game if they are incorporated into the technical, organically structured software framework. Each question is accompanied by multimedia feedback, which most participants rated as helpful. Additionally, audio effects and background music increase immersion. The introduction of characters also encourages an emotional connection to the game.
However, not all requirements could be implemented in the current game prototypes. In particular, adapting to over- or underchallenging situations or to the emotional states of players was not possible, as capturing this information over a spatial distance is technically challenging. Integrating sensor measurements to adapt the course of the game, and considering individual learning speed on this basis, remain open research questions; from the perspective of Education 4.0, however, they are needed for learning experiences fully tailored to students.
While the current game structure allows different learning content to be linked to the narrative, this results in a rather loose coupling of game and content. Closer integration of content would be possible if teachers could easily create their own game content, perhaps with the help of Artificial Intelligence-supported tools. This would allow for the creation of more player-type-specific variants or several game modes with a common narrative to appeal to different types of players.
The studies included participants without language or physical limitations, so no disadvantages arose in this context. However, this is not the case in typical university scenarios. In principle, a technical adaptation of the interaction devices to potential limitations is possible thanks to the organic software approach. However, further research into meaningful implementation and didactic design is required.
There are also limitations regarding the study’s conduct. The second heuristic evaluation was conducted with the minimum required number of participants, so it can be assumed that a larger sample size could uncover further potential for improvement. Additionally, the sample included alumni as well as students.
A larger-scale study would be necessary to quantitatively evaluate the quality of the games. Additionally, it should be investigated whether player-type based adaptation actually causes significant differences in game perception and learning experience.
Further development of the technical infrastructure is necessary to sustainably integrate Ambient Serious Games into everyday university life. The goal should be an integrated system that connects Learning Management Systems and game environments, automatically detects player types, and generates suitable games. Teachers, and perhaps also students themselves, could then create their own games for their courses and make them available to students in a short amount of time.

7. Conclusions

The presented work shows that combining existing hardware and spatial resources with suitable software is a viable approach for creating Ambient Serious Games. Spaces not designed for Ambient Serious Games can be enhanced with smart objects to provide spatial learning games.
To reproduce the presented concept, the following components are required: (1) university premises with smart objects or the ability to be supplemented accordingly, (2) an established Learning Management System from which interactive tests can be exported, and (3) ready-made game sketches for each type of player that can be finalized by software using information from the Learning Management System.
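The data flow implied by these three components can be sketched as follows. All names here (`load_lms_test`, `GameSketch`, `generate_game`, and the export format) are hypothetical and only illustrate how an exported interactive test and a player-type-specific game sketch could be combined into a finalized game:

```python
from dataclasses import dataclass

@dataclass
class Question:
    """One task exported from the Learning Management System (illustrative)."""
    text: str
    answer: str

@dataclass
class GameSketch:
    """Ready-made narrative frame for one player type (hypothetical)."""
    player_type: str
    narrative: str

def load_lms_test(export):
    """Parse an exported interactive test into questions. The real export
    format depends on the Learning Management System; a list of dicts is
    assumed here purely for illustration."""
    return [Question(q["text"], q["answer"]) for q in export]

def generate_game(sketch, questions):
    """Finalize a game sketch by embedding the exported questions into its
    narrative -- corresponding to component (3) of the reproduction recipe."""
    return {"player_type": sketch.player_type,
            "narrative": sketch.narrative,
            "tasks": questions}
```

In this sketch, one `GameSketch` per player type would be prepared in advance, and `generate_game` would be run whenever a lecturer exports a new test.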
The ideas in this work can be used to create more Ambient Serious Games. In the future, however, it will be necessary to link the individual components and, in particular, to provide an editor for Ambient Serious Games that automatically generates games tailored to students, offering lecturers an easy way to implement them without requiring in-depth knowledge of software development or smart environments.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/higheredu4030052/s1.

Author Contributions

Conceptualization, L.C.B. and A.S.; methodology, L.C.B.; software, L.C.B.; validation, L.C.B.; formal analysis, L.C.B.; investigation, L.C.B.; resources, L.C.B. and A.S.; writing—original draft preparation, L.C.B.; writing—review and editing, L.C.B. and A.S.; visualization, L.C.B.; supervision, A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of University of Lübeck (file 2024-288, date of original approval: 10 June 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Informed consent has been obtained from the participants to publish this paper.

Data Availability Statement

All results are reported in the Supplementary Materials.

Acknowledgments

We thank all subjects for participating in our study. In addition, we thank B. Kordts for support during the study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Miranda, J.; Navarrete, C.; Noguez, J.; Molina-Espinosa, J.M.; Ramírez-Montoya, M.S.; Navarro-Tuch, S.A.; Bustamante-Bello, M.R.; Rosas-Fernández, J.B.; Molina, A. The Core Components of Education 4.0 in Higher Education: Three Case Studies in Engineering Education. Comput. Electr. Eng. 2021, 93, 107278. [Google Scholar] [CrossRef]
  2. Rahimi, I.D. Ambient Intelligence in Learning Management System (LMS). In Intelligent Computing; Springer: Cham, Switzerland, 2022; pp. 379–387. [Google Scholar] [CrossRef]
  3. Tavangarian, D.; Lucke, U. Pervasive University—A Technical Perspective Die Pervasive University aus technischer Perspektive. It—Inf. Technol. 2009, 51, 6–13. [Google Scholar] [CrossRef]
  4. Mourtzis, D.; Vlachou, E.; Dimitrakopoulos, G.; Zogopoulos, V. Cyber-Physical Systems and Education 4.0—The Teaching Factory 4.0 Concept. Procedia Manuf. 2018, 23, 129–134. [Google Scholar] [CrossRef]
  5. Tang, S.; Long, M.; Tong, F.; Wang, Z.; Zhang, H.; Sutton-Jones, K.L. A Comparative Study of Problem-Based Learning and Traditional Approaches in College English Classrooms: Analyzing Pedagogical Behaviors Via Classroom Observation. Behav. Sci. 2020, 10, 105. [Google Scholar] [CrossRef] [PubMed]
  6. Backlund, P.; Hendrix, M. Educational games—Are they worth the effort? A literature survey of the effectiveness of serious games. In Proceedings of the 2013 5th International Conference on Games and Virtual Worlds for Serious Applications, VS-GAMES, Poole, UK, 11–13 September 2013; pp. 1–8. [Google Scholar] [CrossRef]
  7. Faiella, F.; Ricciardi, M. Gamification and learning: A review of issues and research. J. E-Learn. Knowl. Soc. 2015, 11. [Google Scholar] [CrossRef]
  8. Boeker, M.; Andel, P.; Vach, W.; Frankenschmidt, A. Game-Based E-Learning Is More Effective than a Conventional Instructional Method: A Randomized Controlled Trial with Third-Year Medical Students. PLoS ONE 2013, 8, e82328. [Google Scholar] [CrossRef] [PubMed]
  9. Susi, T.; Johannesson, M.; Backlund, P. Serious Games—An Overview. 2007. Available online: https://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-1279 (accessed on 6 June 2025).
  10. Brandl, L.C.; Schrader, A. Serious Games in Higher Education in the Transforming Process to Education 4.0—Systematized Review. Educ. Sci. 2024, 14, 281. [Google Scholar] [CrossRef]
  11. Brandl, L.C.; Schrader, A. Towards Ambient Serious Games in Higher Education—Vision, Requirements and Technical Solutions. In Human-Computer Interaction; Springer: Cham, Switzerland, 2025; In press. [Google Scholar]
  12. Brandl, L.C.; Kordts, B.; Schrader, A. Technological Challenges of Ambient Serious Games in Higher Education. In Proceedings of the MuM’23 Workshops on Making a Real Connection and Interruptions and Attention Management, Vienna, Austria, 3 December 2023; Volume 3712. [Google Scholar]
  13. Mittmann, G.; Barnard, A.; Krammer, I.; Martins, D.; Dias, J. LINA—A Social Augmented Reality Game around Mental Health, Supporting Real-world Connection and Sense of Belonging for Early Adolescents. ACM Hum. Comput. Interact. 2022, 6, 1–21. [Google Scholar] [CrossRef]
  14. Stoldt, F.; Brandl, L.C.; Schrader, A. Pervasive Serious Game for Exam Preparation: Exploring the Motivational Effects of Game Narratives. In Serious Games; Springer: Cham, Switzerland, 2023; pp. 439–446. [Google Scholar] [CrossRef]
  15. Echeverri, D. Searching for Us: A Pervasive Tangible Narrative About Friendship. In Proceedings of the Designing Interactive Systems Conference, Copenhagen Denmark, 1–5 July 2024; pp. 284–288. [Google Scholar] [CrossRef]
  16. Turnbull, D.; Luck, J.; Chugh, R. Learning Content Management Systems. In Encyclopedia of Education and Information Technologies; Springer International Publishing: Cham, Switzerland, 2020; p. 1051. [Google Scholar] [CrossRef]
  17. Popplow, A. Auswahl einer Lernplattform für wissenschaftliche Weiterbildung. Z. Hochsch. Weiterbild. (ZHWB) 2018, 1, 60–67. [Google Scholar] [CrossRef]
  18. LTD, M.P. Fragetypen—MoodleDocs. 2018. Available online: https://docs.moodle.org/311/de/Fragetypen (accessed on 15 November 2021).
  19. Ilias. DOCU: Dokumentation für Autoren. 2021. Available online: https://docu.ilias.de/ilias.php?ref_id=2221&obj_id=42809&cmd=layout&cmdClass=illmpresentationgui&cmdNode=eg&baseClass=ilLMPresentationGUI (accessed on 15 November 2021).
  20. frentix GmbH Test Fragetypen—OpenOlat 16.0 Benutzerhandbuch—OpenOlat Confluence. 2021. Available online: https://confluence.openolat.org/display/OO160DE/Test+Fragetypen (accessed on 18 November 2021).
  21. Milius, F. CLIX—Learning-Management-System für Unternehmen, Bildungsdienstleister und Hochschulen. Wirtschaftsinformatik 2002, 44, 163–170. [Google Scholar] [CrossRef]
  22. 2i2L Sar. Adding Questions to the Test. 2021. Available online: https://docs.chamilo.org/teacher-guide/interactivit_tests/adding_questions_to_the_test (accessed on 18 November 2021).
  23. Stud.IP e.V.und die Autor/-innen der Stud.IP-Dokumentation. Stud.IP-Nutzerdokumentation (deutsch): GestaltungAnlage. 2008. Available online: https://hilfe.studip.de/help/4.2/de/Vips/GestaltungAnlage (accessed on 18 November 2021).
  24. Blackboard Inc. Fragentypen | Blackboard-Hilfe. 2018. Available online: https://help.blackboard.com/de-de/Learn/Instructor/Ultra/Tests_Pools_Surveys/Question_Types (accessed on 16 November 2021).
  25. Madakam, S.; Ramaswamy, R.; Tripathi, S. Internet of Things (IoT): A Literature Review. J. Comput. Commun. 2015, 3, 164–173. [Google Scholar] [CrossRef]
  26. Burmeister, D. Selbstreflexive Geräteverbünde in Smarten Umgebungen. Ph.D. Thesis, Universität zu Lübeck, Lübeck, Germany, 2018. [Google Scholar]
  27. Müller-Schloer, C.; von der Malsburg, C.; Würt, R.P. Aktuelles Schlagwort: Organic Computing. Inform. Spektrum 2004, 27, 332–336. [Google Scholar] [CrossRef]
  28. Poslad, S. Autonomous Systems and Artificial Life. In Ubiquitous Computing: Smart Devices, Environments and Interactions; John Wiley & Sons: Hoboken, NJ, USA, 2009; pp. 317–341. [Google Scholar]
  29. Kordts, B. Selbstreflexive Smarte Umgebungen Im Intensivkontext. Ph.D. Thesis, Universität zu Lübeck, Lübeck, Germany, 2023. [Google Scholar]
  30. Kordts, B.; Brandl, L.C.; Schrader, A. Managing Self-Explaining Ambient Applications. Front. Internet Things 2025, 4, 1623733. [Google Scholar] [CrossRef]
  31. Cowan, N. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behav. Brain Sci. 2001, 24, 87–114, Discussion 114–185. [Google Scholar] [CrossRef] [PubMed]
  32. Kosukhina, M. IntelliJ IDEA 2023.3.6 Is Out! | The IntelliJ IDEA Blog. 2024. Available online: https://blog.jetbrains.com/idea/2024/03/intellij-idea-2023-3-6/ (accessed on 20 May 2025).
  33. OpenAI. ChatGPT (GPT-4.5). 2025. Available online: https://chat.openai.com (accessed on 2 May 2025).
  34. Suno, Inc. Bark. 2025. Available online: https://github.com/suno-ai/bark (accessed on 30 May 2025).
  35. Technologies, U. Echtzeit-Entwicklungsplattform von Unity | 3D, 2D, VR- und AR-Engine. 2025. Available online: https://unity.com (accessed on 2 May 2025).
  36. Krath, J.; von Korflesch, H.F.O. Player Types and Game Element Preferences: Investigating the Relationship with the Gamification User Types HEXAD Scale. In HCI in Games: Experience Design and Game Mechanics; Springer: Cham, Switzerland, 2021; pp. 219–238. [Google Scholar] [CrossRef]
  37. Tondello, G.F.; Wehbe, R.R.; Diamond, L.; Busch, M.; Marczewski, A.; Nacke, L.E. The Gamification User Types Hexad Scale. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, Austin, TX, USA, 16–19 October 2016; pp. 229–243. [Google Scholar] [CrossRef]
  38. Marczewski, A. Even Ninja Monkeys Like to Play: Gamification, Game Thinking and Motivational Design; CreateSpace Independent Publishing Platform: North Charleston, SC, USA, 2015. [Google Scholar]
  39. TurboSquid. The TurboSquid 3D Model License. Available online: https://blog.turbosquid.com/turbosquid-3d-model-license/ (accessed on 29 July 2025).
  40. Kapp, K.M. The Gamification of Learning and Instruction Fieldbook: Ideas into Practice; John Wiley & Sons: Chichester, UK, 2013. [Google Scholar]
  41. Desurvire, H.; Wiberg, C. Game Usability Heuristics (PLAY) for Evaluating and Designing Better Games: The Next Iteration. In Online Communities and Social Computing; Springer: Berlin/Heidelberg, Germany, 2009; pp. 557–566. [Google Scholar] [CrossRef]
  42. Desurvire, H.; Caplan, M.; Toth, J.A. Using Heuristics to Evaluate the Playability of Games. In Proceedings of the CHI ’04 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004; pp. 1509–1512. [Google Scholar] [CrossRef]
  43. Mohamed, H.; Jaafar, A. Development and Potential Analysis of Heuristic Evaluation for Educational Computer Game (PHEG). In Proceedings of the 5th International Conference on Computer Sciences and Convergence Information Technology, Seoul, Republic of Korea, 30 November–2 December 2010; pp. 222–227. [Google Scholar] [CrossRef]
  44. Korhonen, H. The Explanatory Power of Playability Heuristics. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology, Lisbon, Portugal, 8–11 November 2011; pp. 1–8. [Google Scholar] [CrossRef]
  45. Nielsen, J.; Molich, R. Heuristic Evaluation of User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 1–5 April 1990; pp. 249–256. [Google Scholar] [CrossRef]
  46. Yáñez-Gómez, R.; Cascado-Caballero, D.; Sevillano, J.L. Academic Methods for Usability Evaluation of Serious Games: A Systematic Review. Multimed. Tools Appl. 2017, 76, 5755–5784. [Google Scholar] [CrossRef]
  47. Barbosa, M.B.; Rêgo, M.; de Medeiros, I. HEEG: Heuristic evaluation for educational games. In Proceedings of the SBC—SBGames 2015, Teresina, PI, Brazil, 11–13 November 2015; pp. 224–227. [Google Scholar]
  48. Fitchat, L.; Jordaan, D.B. Ten Heuristics to evaluate the User Experience of Serious Games. Int. J. Soc. Sci. Humanit. Stud. 2016, 8, 209–225. [Google Scholar]
Figure 1. The six device ensembles distributed across the room illustrate the technical setup. Six arcade button boards, six interface displays, and six feedback lamps are shown, mounted in the rigging (marked in orange). Additionally, four moving head spots for room lighting and speakers for sound output are visible (marked in mint).
Figure 2. The professor character featured in the escape room. The 3D model is freely available under an open license (see [39]).
Figure 3. On the left side, the figure illustrates the integrated tutorial in German. If not all the inputs are needed, a shortened tutorial is displayed. From left to right, the texts mean “left/previous element”, “enter”, and “right/next element”. On the right side, the button board for user interaction is shown.
Figure 4. Display of a multiple-choice question. The left side presents the original German version and the right side an English translation of the screen.
Figure 5. Escape room doors labeled (AC) in chronological order.
Figure 6. The character representing the space officer. The 3D model is freely available under an open license (see [39]).
Figure 7. Custom-designed interfaces of the space adventure. (A) shows the survey interface in a state where the selected question remains unanswered and no additional questions have been unlocked. A customized tutorial appears in the bottom-right corner of (A). (A’) shows the version of this interface updated after the second evaluation. (B) presents the suspect selection screen, while (C) illustrates the first step of the pursuit.
Figure 8. Interface of the space adventure tailored to the player type, displaying the generated leaderboard. The [Pseudonym] placeholder is replaced by the player’s chosen pseudonym. The player’s total score is accumulated during the game. The fictitious players’ high scores are generated within the possible point range, so the real player is not always in first place.
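The leaderboard generation described in the caption of Figure 8 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the opponent names, the uniform sampling of fictitious scores, and the function signature are all assumptions.

```python
import random

def build_leaderboard(player_name: str, player_score: int,
                      max_score: int, n_fake: int = 5) -> list[tuple[str, int]]:
    """Sketch of an adaptive leaderboard in the spirit of Figure 8.

    Fictitious opponents receive scores drawn uniformly from the possible
    point range, so the real player is not guaranteed first place.
    Names and sampling strategy are illustrative assumptions.
    """
    fake_names = ["Nova", "Orion", "Vega", "Lyra", "Atlas"]  # hypothetical
    entries = [(name, random.randint(0, max_score))
               for name in fake_names[:n_fake]]
    entries.append((player_name, player_score))
    # Sort descending by score to produce the displayed ranking.
    entries.sort(key=lambda e: e[1], reverse=True)
    return entries
```

A driver could call `build_leaderboard("[Pseudonym]", accumulated_score, max_score)` at the end of a session and render the returned ranking on the display.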
Table 1. Overview of various question types and their availability across the respective Learning Management System platforms. Question types supported by fewer than three platforms are not listed. The term Kprim refers to a group of four True/False questions.
| Question type | Moodle | Blackboard | Clix | Ilias | Olat | Stud-IP | Chamilo |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Single Choice | x | x | x | x | x | x | x |
| Multiple Choice | x | x | x | x | x | x | x |
| Freetext | x | x | x | x | x | x | x |
| True/False | x | x | x | - | x | x | x |
| Cloze | x | x | x | x | x | - | x |
| Assignment | x | x | x | x | - | - | x |
| Hotspot/Imagemap | x | - | - | x | x | - | x |
| Numeric | x | - | - | x | x | x | - |
| Sequence | - | - | x | x | x | - | - |
| Kprim | x | - | - | x | x | - | - |
| Calculated | x | x | - | x | - | - | - |
| Cloze selection | x | - | - | x | - | - | x |
Table 2. Requirements of Ambient Serious Games published in [11].
| Identifier | Requirement |
| --- | --- |
| A1 | User profiles should describe individual characteristics and allow the system to adapt accordingly |
| A2 | The system should analyze the user during gameplay and adapt dynamically |
| A3 | User profiles should update after each game session |
| A4 | Profiles should include knowledge level and player type |
| B1 | The game should pursue a clear characterizing goal |
| C1 | The game should include in-game tutorials |
| C2 | Interactive objects should explain how to use them |
| D1 | New types of learning objects should be introduced |
| D2 | Learning content should be presented in a multimodal and adaptive manner |
| E1 | User interfaces should be intuitive and free from unnecessary information |
| F1 | The game should support the user in reaching the characterizing goal |
| F2 | Learning content should be adapted to avoid both under- and over-challenging the user and support flow |
| F3 | The system should recognize and respond to user emotions |
| G1 | The game itself should be the user’s focus, not just a gamified exercise |
| G2 | It should be perceived as a genuine game, not merely a gamified task |
| H1 | Appropriate methods should be used based on the application context and target group |
| H2 | The game should offer an engaging experience for various player types |
| H3 | The game should connect theoretical and practical knowledge |
| H4 | Learning and training tasks should be integrated into the gameplay |
| I1 | Learning content should not hinder gameplay or progression |
| I2 | In-game feedback should help users understand whether their answers are correct and how to improve |
| I3 | Feedback should be immediate and multimodal |
| J1 | Appropriate background music and audio effects should enhance the experience |
| J2 | Learning content should be segmented according to the Cognitive Load Theory (ideally 4 ± 1 chunks) (cf. [31]) |
| J3 | The game should foster an emotional connection with the user |
| K1 | The learning pace should adapt to the player |
| K2 | Integrated movement elements should promote well-being and better focus |
| L1 | Avoid political or religious content, unless it is directly related to the educational material |
| L2 | Avoid stereotypical or racist depictions to ensure cultural neutrality |
| L3 | Ensure language is not a barrier to understanding the content |
| L4 | Avoid excluding users with physical or cognitive impairments, provided they meet the academic requirements |
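Requirements A1 to A4 describe a user profile that captures player type and knowledge level and is updated after every game session. A minimal sketch of such a profile is shown below; the field names and the running-average update rule are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Minimal user profile sketch following requirements A1-A4.

    Field names and the update rule are illustrative assumptions.
    """
    player_type: str = "unknown"   # e.g., a Hexad player type (cf. [37])
    knowledge_level: float = 0.0   # 0.0 (novice) .. 1.0 (expert)
    sessions_played: int = 0

    def update_after_session(self, correct: int, total: int) -> None:
        # A3: update the profile after each game session, here with a
        # running average over the session's answer accuracy (assumption).
        accuracy = correct / total if total else 0.0
        n = self.sessions_played
        self.knowledge_level = (self.knowledge_level * n + accuracy) / (n + 1)
        self.sessions_played = n + 1
```

A game could then read `knowledge_level` to select easier or harder questions (requirement F2) and `player_type` to choose motivating elements such as the leaderboard (requirement H2).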
Table 3. Sequence of the escape room game. Each ensemble consists of a display, a button board, and a light bulb. Entries marked with * were added to the sequence after the first evaluation.
| Game Phase | Room Atmosphere | Device Ensemble A | Device Ensembles B to F |
| --- | --- | --- | --- |
| 1 | White lighting, mysterious background music | Game begins | Black screen |
| 2 | White lighting | Plays video | Black screen |
| 3 | Green lighting, mysterious background music | Shows text tutorial * | Displays question |
| 4 | Green lighting, mysterious background music | Shows door | Displays clue |
| 5 | Blue lighting, audio explanation | Shows text tutorial | Displays question |
| 6 | Blue lighting, mysterious background music | Shows door | Displays clue |
| 7 | Red lighting, audio explanation | Shows text tutorial * | Displays question |
| 8 | Red lighting with motion effect, mysterious background music | Shows door | Displays clue |
| 9 | Warm white lighting with motion effect | Plays video | Black screen |
| 10 | Warm white lighting, mysterious background music | Outro | Outro |
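A game sequence like the one in Table 3 lends itself to a data-driven representation, where each phase bundles the room atmosphere with the content of the device ensembles. The sketch below transcribes the first four phases; the type and field names are assumptions for illustration.

```python
from typing import NamedTuple

class Phase(NamedTuple):
    atmosphere: str     # room lighting and audio
    ensemble_a: str     # content shown on device ensemble A
    ensembles_b_f: str  # content shown on device ensembles B to F

# Phases 1-4 of the escape room sequence (Table 3), transcribed as data.
ESCAPE_ROOM = [
    Phase("White lighting, mysterious background music", "Game begins", "Black screen"),
    Phase("White lighting", "Plays video", "Black screen"),
    Phase("Green lighting, mysterious background music", "Shows text tutorial", "Displays question"),
    Phase("Green lighting, mysterious background music", "Shows door", "Displays clue"),
]

def run(sequence):
    # A real driver would push each phase's settings to the lights,
    # speakers, and displays; here we simply yield the phases in order.
    for number, phase in enumerate(sequence, start=1):
        yield number, phase
```

Keeping the sequence as data rather than control flow makes it straightforward to insert phases later, as was done with the tutorial phases marked * after the first evaluation.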
Table 4. Sequence of the space adventure. The term “space music” refers to music associated with the theme of space. Entries marked with * were added to the sequence after the first evaluation, and entries marked with ** after the second evaluation.
| Game Phase | Room Atmosphere | Device Ensemble A | Device Ensembles B to F |
| --- | --- | --- | --- |
| 1 | White lighting, space music | Game start | Black screen |
| 2 | Blue lighting | Video | Black screen |
| 3 | Blue lighting, space music | Text tutorial * | Black screen |
| 4 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 5 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 6 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 7 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 8 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 9 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 10 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 11 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 12 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 13 | Blue lighting with movement effects, space music | Question | Suspect interrogation |
| 14 | Blue lighting | Video ** | Suspect interrogation |
| 15 | Blue lighting, space music | Text tutorial * | Suspect interrogation |
| 16 | Blue lighting with movement effects, space music | Suspect selection | Suspect interrogation |
| 17 | Blue lighting | Video ** | Black screen |
| 18 | Blue lighting, space music | Text tutorial * | Black screen |
| 19 | Blue lighting with movement effects, space music | Drone search map | Question |
| 20 |  | Video | Black screen |
| 21 | Blue lighting, space music | Text tutorial * | Black screen |
| 22 | Blue lighting, space music | Outro | Outro |
Table 5. Results of the quantitative survey. The numbers indicate how many subjects selected the respective answer option.
| Heuristic | Good | Rather Good | Neutral | Rather Not Good | Not Good |
| --- | --- | --- | --- | --- | --- |
| Consistency in Game World | 4 | 4 | 0 | 0 | 1 |
| Goals | 3 | 5 | 0 | 0 | 0 |
| Emotional Connection | 1 | 5 | 2 | 1 | 0 |
| Coolness/Entertainment | 4 | 2 | 3 | 0 | 0 |
| Humor | 2 | 3 | 0 | 1 | 0 |
| Immersion | 5 | 3 | 1 | 0 | 0 |
| Documentation/Tutorial | 5 | 2 | 1 | 0 | 0 |
| Game Provides Feedback | 5 | 3 | 1 | 0 | 0 |
| Terminology | 2 | 5 | 1 | 0 | 0 |
| Burden on Player | 6 | 1 | 1 | 0 | 0 |
| Game Story Immersion | 4 | 4 | 1 | 0 | 0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Brandl, L.C.; Schrader, A. Realizing Ambient Serious Games in Higher Education—Concept and Heuristic Evaluation. Trends High. Educ. 2025, 4, 52. https://doi.org/10.3390/higheredu4030052
