Article

The Impact of a Digital Game-Based AI Chatbot on Students’ Academic Performance, Higher-Order Thinking, and Behavioral Patterns in an Information Technology Curriculum

College of Educational Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(15), 6418; https://doi.org/10.3390/app14156418
Submission received: 22 June 2024 / Revised: 21 July 2024 / Accepted: 21 July 2024 / Published: 23 July 2024
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

In the age of intelligence, information technology (IT) education has become a focus of attention in the education sector. However, traditional teaching methods fall short in motivating students and fostering higher-order thinking, and they have difficulty providing a personalized learning experience. Although AI chatbots, as an innovative teaching tool, can provide instant feedback, fully enhancing learner engagement remains challenging. To address these issues, this study developed a digital game-based AI chatbot system to enhance students’ learning experience through digital game-based learning strategies. The study used a quasi-experimental design, with the experimental group using a digital game-based AI chatbot and the control group using a traditional AI chatbot. The two groups were compared on learning performance in IT courses, higher-order thinking (including problem-solving, computational thinking, and creativity), learning motivation, and flow experience. In addition, the behavioral patterns of high-achieving and low-achieving students in the experimental group were analyzed. The results showed that the experimental group significantly outperformed the control group in academic performance, problem-solving, computational thinking, learning motivation, and flow experience, but there was no significant difference in creativity tendency. Behavioral pattern analysis showed that high-achieving students in the experimental group adopted more systematic learning strategies, while low-achieving students relied more on immediate feedback and external help; nevertheless, both groups actively conversed with the AI chatbot and explored problem-solving strategies in the digital game. Therefore, AI chatbots based on digital games can be effectively used in IT courses to help students construct knowledge and develop higher-order thinking.

1. Introduction

With the advancement of intelligent technologies, information technology (IT) education has become an important part of basic education, exerting a decisive impact on students’ future careers and the overall development of society [1]. IT courses are not only aimed at imparting foundational knowledge and programming skills but also emphasize the cultivation of higher-order thinking skills, including problem-solving abilities, computational thinking, and creativity [2]. However, in most IT courses, teachers typically rely on textbooks, lectures, and programming instruction to help students master relevant knowledge and operational skills. Such teaching methods struggle to ignite students’ interest in learning and enhance their higher-order thinking [3]. Moreover, due to individual differences among students, traditional teaching methods often fail to provide sufficient personalized feedback, limiting the maximization of learning outcomes [4]. This makes learning dull and fails to motivate students to engage in positive learning behaviors. Therefore, enhancing students’ interest in learning, cultivating higher-order thinking, and providing personalized learning services in IT courses are of paramount importance.
AI chatbots, as an innovative teaching tool, have shown immense potential in providing personalized support and enhancing students’ higher-order thinking [5]. They can guide students like human tutors and provide immediate feedback [6]. Additionally, AI chatbots can cater to students’ unique learning needs [7], offering personalized learning experiences. They can also alleviate learners’ stress and reduce cognitive load [8]. Numerous studies have demonstrated that AI chatbots can improve learners’ academic performance and cultivate their thinking skills, particularly problem-solving and innovation abilities. For instance, Lin and Ye [9] applied AI chatbots in biology and found that they significantly improved students’ academic achievements. Li et al. [10] used a quasi-experimental method with the course “Remote Education” and discovered that AI chatbot-supported teaching robots significantly enhanced students’ problem-solving, innovation, and collaborative learning abilities. Li [11] used a ChatGPT-based flipped learning instructional approach for teaching and learning and found that this approach significantly improved learners’ performance and creative thinking. Hu [12] used a generative AI chatbot as a virtual learning partner in a business ethics course and found that it improved students’ problem-solving skills. On the other hand, research has found that AI chatbots still face challenges in comprehensively enhancing learner engagement. Some researchers applied ChatGPT in mathematics learning and found no significant difference in learners’ intrinsic motivation, enthusiasm, and social engagement compared to traditional Google search-assisted learning [7]. Kuhail et al.’s [13] systematic review analyzed various applications of educational chatbots and found that over time students lose interest in interacting with an AI chatbot, which does not occur when interacting with a human partner. Therefore, despite the progress made by AI chatbots in improving academic performance, cultivating higher-order thinking, and providing personalized education, challenges remain in fully enhancing learner engagement and learning investment.
Digital game-based learning (DGBL) introduces competitive mechanisms, achievement systems, and reward mechanisms to provide learners with engaging, interactive, and challenging learning environments, significantly boosting students’ learning motivation and participation [14]. This teaching strategy addresses the issues of dull classroom instruction and low student engagement [15,16], motivating learners to persist in their studies [17]. Moreover, DGBL can enhance learners’ flow experiences [18]. A good flow experience during learning will make learners more focused and even enjoy the learning process [19]. Additionally, researchers have introduced educational games into AI courses, finding that educational games can significantly enhance learners’ AI literacy, provide emotional support, and improve cognitive engagement [20]. Game-based learning environments can promote self-regulation among elementary school students [21]. Other researchers have found that AI-based chatbots in virtual reality game learning environments can enhance learners’ behavioral engagement, emotional engagement, and metacognitive awareness [22]. Evidently, digital game-based learning has great potential in further promoting students’ higher-order thinking, academic performance, learning motivation, and flow experiences.
To enable students to better grasp foundational knowledge and develop higher-order thinking in IT courses while also improving learning motivation and flow experience, this study integrates an AI chatbot into a digital game-based learning environment, developing a digital game-based AI chatbot. To verify its effectiveness, a quasi-experimental design is employed to compare the application effects of the digital game-based AI chatbot and traditional AI chatbot in IT courses. The specific research questions are as follows:
  • Does the digital game-based AI chatbot improve students’ academic performance better than the traditional AI chatbot?
  • Compared to the traditional AI chatbot, does the digital game-based AI chatbot more effectively cultivate students’ higher-order thinking abilities? (Higher-order thinking abilities comprise three dimensions: problem-solving tendency, computational thinking, and creativity.)
  • Do students using the digital game-based AI chatbot for learning have higher learning motivation compared to using the traditional AI chatbot?
  • Does the digital game-based AI chatbot provide a better flow experience compared to the traditional AI chatbot?
  • How do students’ learning behaviors manifest when using the digital game-based AI chatbot, and what are the behavioral differences between high and low achievers?

2. Literature Review

2.1. Information Technology Education

Information technology (IT) education refers to a series of courses offered in schools or educational institutions aimed at teaching students the foundational knowledge of information and communication technology. Such education enables students to apply these technologies in practice and cultivates higher-order thinking skills. IT course content includes computer operations, programming languages, network security, artificial intelligence, and machine learning, often featuring interdisciplinary and relatively complex topics. With the advancement of intelligent technologies, IT education can be divided into three stages: early computer education focused on programming, an application stage emphasizing skills, and a literacy-oriented stage [23]. Evidently, as intelligent technologies advance, IT courses must not only impart knowledge but also focus on solving real-world problems and fostering higher-order thinking skills in students [24]. However, many current IT classes still rely on traditional teaching methods centered on learning operations and programming, which result in monotonous classes with low student engagement and inadequate stimulation of higher-order thinking [25]. Additionally, the traditional classroom teaching model restricts real-time feedback and personalized guidance. Thus, enhancing students’ motivation, cultivating their thinking abilities, and providing personalized feedback in IT courses are of paramount importance.

2.2. AI Chatbots

AI chatbots are software programs built using natural language processing and machine learning technologies that can understand user queries and interact with humans through text or speech [26]. In education, AI chatbots can serve as learning partners, tutors, and learning resource managers [27]. For example, researchers have applied AI chatbots in English learning, finding that chatbots as learning partners can provide real-time feedback and personalized correction, significantly improving students’ English speaking skills and willingness to communicate [28]. AI chatbots can also guide students like human tutors, with studies showing that AI chatbot training can be more effective than traditional teacher-led instruction [29]. Additionally, research has found that guided AI chatbot-assisted learning can significantly enhance learners’ self-regulation, higher-order thinking, and knowledge construction [30]. Furthermore, AI chatbots that recommend resources can personalize recommendations based on learners’ learning styles and characteristics [31]. Thus, AI chatbots are interactive tools that provide timely feedback and personalized learning services, helping students address learning challenges and improve their academic performance [32]. Despite extensive research demonstrating the positive impacts of AI chatbots on student performance and cognitive development, challenges remain in fully enhancing learner engagement and investment [7]. Moreover, research on the application of AI chatbots in IT education is still lacking. Additionally, meta-analytic studies have found that the educational effectiveness of AI chatbots tends to diminish over longer experiment durations [33]. Therefore, new teaching methods and strategies are needed to boost student motivation and engagement with AI chatbot-assisted learning.

2.3. Digital Game-Based Learning

Digital game-based learning (DGBL) is an educational approach that integrates learning content into digital games using games as teaching tools to achieve educational goals [34]. Digital games feature engaging storylines, clear instructional objectives, and reward mechanisms that can effectively increase learners’ participation, flow experience, and learning outcomes [35]. To date, digital game-based learning has been used in a wide range of disciplines. For instance, Hwang et al. [36] utilized concept mapping-based digital games to help learners understand complex chemical problems, improving problem-solving skills and scientific self-efficacy. Wen et al. [37] developed a problem-solving simulation game for high school physics, integrating physics knowledge into game tasks to help learners think reflectively about scientific issues. Hwang et al. [38] proposed a two-tier testing digital game method for learning about world-famous artworks, incorporating two-tier tests into the gamified teaching process to help learners construct art appreciation knowledge and enhance learning motivation and flow experience. In addition, recent research further supports the great potential of DGBL to enhance students’ higher-order thinking, academic performance, and motivation. For example, Ng et al. [39] developed an online platform for gamifying e-books and found that gamification can provide emotional and cognitive support as well as an enjoyable experience for the development of learners’ AI literacy, and is effective in increasing motivation and self-efficacy in learning. Li and Li [40] found experimentally that game-based learning can stimulate college students to generate new ideas and have a significant positive impact on students’ creativity. Amzalag and Peretz [41] found that students’ motivation and engagement increased when digital learning games were integrated into teaching. Other researchers have further suggested that digital game-based learning can reduce students’ cognitive conformity, increase motivation, and promote effective learning behaviors [42]. Thus, DGBL can help learners understand complex knowledge and develop higher-order thinking, and enhance their flow experience, making them more immersed in the learning process. Consequently, this study incorporates gamified elements into AI chatbot-supported IT education, employing digital game-based learning strategies to stimulate student motivation and flow experience, thereby further promoting knowledge mastery and the cultivation of higher-order thinking.

2.4. Higher-Order Thinking Skills

Higher-order thinking refers to the creative application of knowledge or methods to solve problems [43]. It is a crucial 21st-century skill and a core focus in education, essential for student academic success [44]. The primary goal of IT education is to cultivate students’ higher-order thinking and promote their ability to solve complex problems. Various researchers have different views on the core elements of higher-order thinking. For example, Li et al. [45] believe that the core elements of higher-order thinking include computational thinking, problem-solving thinking, creative thinking, and critical thinking. Hwang et al. [46] suggest that higher-order thinking involves complex cognitive processes beyond simple memory and understanding, such as critical thinking, problem-solving, and creativity. Sun et al. [47] argue that the core elements of higher-order thinking include metacognition, critical thinking, and creative thinking. Combining the perspectives of multiple scholars and the goals of IT education, this study identifies computational thinking, problem-solving, and creativity as the core elements of higher-order thinking. Computational thinking refers to using concepts and techniques from computer science to solve complex problems through step-by-step analysis [48]. Problem-solving ability is the capacity to effectively identify, analyze, and resolve complex or unknown problems [49]. Creativity involves breaking traditional thinking patterns to generate novel and valuable ideas [50].

3. Digital Game-Based AI Chatbot System

3.1. System Framework

AI chatbots can provide immediate learning assistance and personalized services, much like a tutor, thereby improving students’ academic achievements [51]. Digital game-based learning (DGBL) uses rich contexts and storylines to engage learners, features clear instructional tasks, and employs incentive mechanisms to enhance student engagement, interactivity, immediate feedback, and multi-sensory learning [36,52]. This study builds on existing research by deeply integrating the advantages of AI chatbots with DGBL strategies, developing a digital game-based AI chatbot system named “Sound Guardian” to teach amplifier content in IT courses. The system aims to help learners acquire relevant IT knowledge, enhance higher-order thinking skills, and increase motivation and flow experience while playing games. The system framework of the digital game-based AI chatbot is shown in Figure 1.
The overall framework includes the following modules:
1. Learning Content Module: Provides materials and resources related to IT learning, including texts, images, micro-lesson videos, and exercises, ensuring the content’s scientific and educational value.
2. Digital Game Task Module: Designs interesting game tasks and challenges based on teaching goals and content. During the game-based teaching process, the system rewards and scores students according to their performance, recording their game progress.
3. Question Prompt Module: Provides guided thinking points during complex problem-solving processes, helping students focus on key concepts and important information, find clues to solve problems, reduce frustration, and boost confidence.
4. AI Chatbot Guidance and Feedback Module: When students encounter difficulties in game tasks, the AI chatbot offers supportive feedback, answers questions, and provides additional learning resources to help learners understand complex concepts or solve problems.
5. Learning Notes Module: Stores important information that learners record or bookmark during game-based learning for easy review and sharing.
6. Student Profile Module: Records detailed data on each student’s logins, study times, and task results, including personal information, learning progress, and assessment results. This data can be used to analyze learning outcomes and improve teaching methods.
Data from these modules is stored in corresponding databases, including learning and task databases, human–computer interaction databases, learning profile databases, and learning analysis databases. This framework facilitates the updating, modification, and subsequent teaching analysis of content.
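For concreteness, the following is a minimal sketch of how records produced by these modules might be typed and routed to the corresponding databases. All class and field names are hypothetical illustrations; the paper does not disclose the actual schema of the “Sound Guardian” system.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record types mirroring the modules above; names are
# illustrative only, not taken from the actual system implementation.

@dataclass
class InteractionRecord:              # -> human-computer interaction database
    student_id: str
    timestamp: datetime
    behavior_code: str                # e.g., one of the eleven codes in Table 1
    detail: str = ""                  # free text of the dialogue turn, if any

@dataclass
class GameTaskRecord:                 # -> learning and task database
    task_id: str
    objective: str                    # the teaching goal the task is built on
    points_awarded: int = 0
    progress: float = 0.0             # 0.0-1.0 completion of the level

@dataclass
class ProfileRecord:                  # -> learning profile database
    student_id: str
    logins: int = 0
    study_minutes: float = 0.0
    task_results: dict = field(default_factory=dict)  # task_id -> score
```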

3.2. System Introduction

“Sound Guardian” is a teaching assistant system that integrates AI chatbot technology with DGBL, specifically designed to teach the scientific principles and applications of amplifiers. The system guides students through four structured educational levels using real-world scenarios involving upgrading and maintaining a city’s sound system. Each game level is closely designed around specific teaching objectives, covering key knowledge points such as the working principles of amplifiers, construction, sound quality optimization, and future AI amplifier designs. Additionally, each level includes corresponding teaching video resources, ensuring students can systematically grasp the design and working principles of amplifiers while completing tasks. The AI chatbot in the system acts as a tutor and assistant, providing real-time guidance and problem-solving support when students face technical difficulties. The interface of the “Sound Guardian” system, shown in Figure 2, includes the game interface, AI chatbot interface, teaching video interface, and game task clues.

3.3. System Strategies

To better enhance learners’ academic achievement and higher-order thinking in IT courses, the system developed in this study uses the following strategies: first, considering the complexity of IT courses and students’ lack of prior knowledge and problem-solving skills, we designed a system with AI chatbot conversational features to help students overcome difficulties during the game. The AI chatbot is able to answer questions and provide immediate feedback and additional learning resources, enabling students to overcome challenges on their own and learn at their own pace [53]. This approach not only accelerates the problem-solving process but also deepens understanding of the learning content and helps students develop higher-order thinking skills. Second, the AI chatbot in this study can proactively ask students if they need help and facilitates learning through a question-prompting module. This module provides a structured list of questions from which students can select relevant questions or talk directly to the AI chatbot for support information. In addition, it has been found that closely linking scaffolding to game challenges can motivate students and facilitate their mastery of relevant knowledge, which in turn improves academic performance and leads to better immersion in the game [54]. Based on this, we embedded the AI chatbot as a learning tool into the digital game environment, directly associating the learning objectives with the game tasks so that learners not only obtain an immediate sense of achievement in the process of completing the tasks but also implicitly master the core subject knowledge. Moreover, digital game-based learning includes a variety of game elements, such as challenges, game points, and competitive role-playing, which provide learners with enjoyable experiences that enhance their motivation and flow experience [55,56]. In the system we developed, points and AI chatbot favorability were set as incentives: learners receive points for watching instructional videos, while skipping videos yields fewer points, and asking the AI chatbot questions increases its favorability. These gamification elements aim to motivate learners to study seriously. Finally, we designed corresponding task clues in the game and displayed key knowledge points through hint boxes to deepen learners’ understanding. Through these strategies, the system developed in this study can effectively enhance learners’ academic achievement and higher-order thinking skills in IT courses.
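As a concrete illustration of the incentive mechanics just described (points for watching instructional videos, fewer points for skipping them, and AI chatbot favorability that grows with questioning), the following is a minimal Python sketch. The specific point values are assumptions; the paper does not report the actual scoring rules.

```python
class IncentiveState:
    """Hypothetical per-learner incentive state; values are illustrative."""

    def __init__(self) -> None:
        self.points = 0         # game points earned
        self.favorability = 0   # AI chatbot favorability toward the learner

    def watch_video(self, completed: bool) -> None:
        # Full viewing is rewarded more than skipping (assumed values).
        self.points += 10 if completed else 2

    def ask_chatbot(self) -> None:
        # Each question raises favorability, nudging active help-seeking.
        self.favorability += 1


state = IncentiveState()
state.watch_video(completed=True)        # earns full points
state.ask_chatbot()                      # earns favorability
print(state.points, state.favorability)  # -> 10 1
```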

4. Experimental Design

This study used a mixed-methods quasi-experimental design, including quantitative and qualitative analyses. The independent variable is the type of AI chatbot: a digital game-based AI chatbot or a traditional AI chatbot. The dependent variables are learners’ academic performance, motivation, flow experience, and higher-order thinking. Specifically, quantitative analysis was used to compare the differences between the two groups of students in terms of academic achievement, motivation, flow experience, and higher-order thinking. Meanwhile, lag sequential analysis was used to analyze the differences in the behavioral patterns of high achievers and low achievers in the digital game-based AI chatbot group.

4.1. Participants

The study involved two sixth-grade classes from a primary school in Hangzhou, Zhejiang Province, with a total of 77 students aged between 11 and 12 years. One class, consisting of 38 students (19 boys, 19 girls), was the experimental group that learned IT course content using a digital game-based AI chatbot. The other class, with 39 students (20 boys, 19 girls), served as the control group and learned using a traditional AI chatbot. The students had been studying IT courses since the third grade and had a basic understanding of the subject. The main content of the course for this study was the amplifier unit in the IT curriculum.

4.2. Experimental Procedure

The experimental procedure of this study is illustrated in Figure 3. Initially, a 15 min introduction to the learning activity was given. This was followed by a 20 min pre-test questionnaire, which assessed knowledge of the “amplifier” unit, higher-order thinking, flow experience, and learning motivation. Subsequently, all students engaged in 40 min of AI chatbot-assisted learning using computers. Both groups had access to the same micro-lesson resources and question prompt lists; the only difference was the type of AI chatbot used. Specifically, the experimental group used a digital game-based AI chatbot, while the control group used a traditional AI chatbot. After the learning activity, all students completed a 20 min post-test questionnaire.

4.3. Measurement Tools

Regarding IT learning performance, both the pre-test and post-test questions were selected from the Zhejiang Province sixth-grade IT unit test papers. Two experienced IT teachers collaborated to select, discuss, and refine the questions. The test comprised 20 multiple-choice questions, each worth 5 points, with a total score of 100 points.
For higher-order thinking, this study assessed three dimensions: problem-solving tendency, computational thinking, and creativity tendency. The problem-solving tendency questionnaire was adapted from Lai and Hwang [57] and had a Cronbach’s α value of 0.883, indicating high reliability. The computational thinking tendency questionnaire, adapted from Hwang et al. [58], included 5 items with a Cronbach’s α value of 0.880, also indicating high reliability. The creativity tendency questionnaire, adapted from Lai and Hwang [57], comprised 5 items with a Cronbach’s α value of 0.768. All questionnaires used a 5-point Likert scale.
For learning motivation, the study used a scale developed by Pintrich et al. [59], which included intrinsic and extrinsic motivation and consisted of 7 items rated on a 5-point Likert scale. The Cronbach’s α value was 0.847.
Flow experience was assessed using a modified scale by Pearce et al. [60]. This scale included 8 items rated on a 5-point Likert scale, with a Cronbach’s α value of 0.901, and was used to evaluate the flow experience cultivated during the learning activities.
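The reported reliabilities follow the standard Cronbach’s alpha formula, α = k/(k − 1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the summed scale. A minimal sketch with illustrative demonstration data (not the study’s responses) follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses (6 students x 5 items); not study data.
demo = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 5, 4, 4],
    [3, 4, 3, 3, 4],
])
print(round(cronbach_alpha(demo), 3))
```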

4.4. Behavior Coding Scheme

For the analysis of learning behaviors, this study used a learning behavior coding scheme adapted from Liang et al. [22] and further confirmed through discussions with two experienced IT teachers. Table 1 lists the eleven codes and their corresponding learning behaviors.

4.5. Data Analysis

In this study, we first used the Shapiro–Wilk test to check the normality of the pre-test and post-test data. The data did not meet the assumption of normal distribution, indicating that non-parametric tests were required. Therefore, this study used the Mann–Whitney U test to analyze the differences between the two groups of learners in terms of academic performance, higher-order thinking, motivation, and flow experience. In addition, the study used lag sequential analysis to analyze the behavioral patterns of high- and low-achieving students in the experimental group.
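A minimal sketch of this two-step procedure using SciPy is shown below; the scores are randomly generated placeholders, not the study’s data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder post-test scores for the two classes (38 vs. 39 students).
experimental = rng.integers(50, 101, size=38).astype(float)
control = rng.integers(40, 91, size=39).astype(float)

# Step 1: Shapiro-Wilk normality check for each group.
for name, scores in (("experimental", experimental), ("control", control)):
    w, p = stats.shapiro(scores)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}")  # p < 0.05 suggests non-normal

# Step 2: since normality fails, compare the groups with the non-parametric
# Mann-Whitney U test instead of an independent-samples t-test.
u, p = stats.mannwhitneyu(experimental, control, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```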

5. Experimental Results

5.1. Learning Achievement

To explore the differences in learning performance between the experimental group and the control group, a Mann–Whitney U test was conducted. The results are shown in Table 2. The findings indicate that there was no significant difference in learning performance between the two groups before the experiment (p = 0.063 > 0.05). However, after the experiment, the learning performance scores of the experimental group and the control group were 71.71 and 63.21, respectively, with a significant difference between the two groups (p = 0.005). This suggests that the digital game-based AI chatbot was more effective in improving learners’ academic performance compared to the traditional AI chatbot.

5.2. Higher-Order Thinking Ability

The Mann–Whitney U test was used to compare the two groups on the three dimensions of higher-order thinking: problem-solving tendency, computational thinking tendency, and creativity tendency. The results are shown in Table 3. In the problem-solving tendency dimension, there was no significant difference between the two groups before the experiment (p = 0.489 > 0.05); however, a significant difference was observed after the experiment (U = 427.50, Z = −3.216, p = 0.001). Regarding computational thinking, there was no significant difference between the two groups before the experiment (p = 0.144 > 0.05), but a significant difference emerged after the experiment (p = 0.006). In the creativity dimension, there was no significant difference between the two groups either before or after the experiment (p > 0.05). These results indicate that, compared to the traditional AI chatbot, the digital game-based AI chatbot is more conducive to fostering learners’ problem-solving tendency and computational thinking, while no significant difference was found in creativity.

5.3. Learning Motivation

Table 4 presents the results of the Mann–Whitney U test for the learning motivation of the two groups. There was no significant difference in intrinsic motivation (p = 0.164) or extrinsic motivation (p = 0.992) between the two groups before the experiment. However, after the experiment, significant differences were observed between the experimental and control groups in both intrinsic and extrinsic motivation. This suggests that the digital game-based AI chatbot is more conducive to improving learners’ intrinsic and extrinsic motivation than the traditional AI chatbot.

5.4. Flow Experience

In this study, the Mann–Whitney U test was used to compare the flow experience of the two groups of students, and the results are shown in Table 5. Before the experiment, there was no significant difference in flow experience between the two groups (p = 0.503). After the experiment, there was a significant difference between the two groups’ flow experiences (p = 0.023). The results show that the digital game-based AI chatbot was more conducive to learners’ flow experience than the traditional AI chatbot.

5.5. Learning Behavior

To further explore the learning processes of learners supported by AI chatbots in digital game-based learning environments, this study recorded learners’ behaviors during gameplay and employed the behavior coding method proposed by Liang et al. [22]. The specific coding descriptions are detailed in Table 1. The experimental group was divided into high-achievement and low-achievement groups to investigate the learning processes of students with varying performance levels. This grouping helps us to understand the behavioral patterns of high and low achievers in AI chatbot-supported game-based learning, enabling more targeted educational interventions and support.
The median pre-test score of 55 points was used as the grouping criterion, as the median is an effective dividing point that splits the dataset into two roughly equal parts, enhancing the stability and reliability of statistical analyses [61,62]. Students scoring below 55 points were classified as the low-achievement group, comprising 21 students (scoring between 30 and 50 points), generating 589 behavior codes. Those scoring above 55 points were classified as the high-achievement group, consisting of 17 students (scoring between 55 and 75 points), generating 827 behavior codes. As this was the introduction of new content, students had not previously learned the relevant knowledge, resulting in generally low pre-test scores.
Lag sequential analysis was employed to reveal the differences in behavioral patterns between the high-achievement and low-achievement groups in AI chatbot-supported digital game-based learning. Table 6 presents the frequencies and ratios of behaviors in both groups. Descriptive statistical analysis first showed that for the behavior of engaging in free dialogue with the AI chatbot (AF), the high-achievement group had a proportion of 18.9%, while the low-achievement group had a proportion of 14.1%. This indicates that both high and low achievers actively engaged in free dialogue with the AI chatbot, which could enhance learning motivation and efficiency for all students. Secondly, comparing the frequency of skipping videos (K), it was found that low achievers more frequently chose to skip videos. Furthermore, in behaviors coded as submitting correct answers (C) and engaging in selective dialogue with the AI chatbot (AS), high achievers exhibited higher frequencies compared to low achievers. This reflects higher problem-solving efficiency and more precise information-filtering abilities in the high-achievement group. Finally, to determine whether there were significant differences in learning behaviors between high and low achievers under AI chatbot support, a chi-square analysis was conducted on the frequency of learning behaviors. The results showed χ2(10) = 25.712, p = 0.004, indicating significant differences in behavior distribution between the high- and low-achievement groups.
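This chi-square comparison amounts to a test of independence on a 2 × 11 contingency table (two achievement groups by eleven behavior codes), which yields the reported 10 degrees of freedom. A minimal sketch follows; the cell frequencies are placeholders, although the row totals match the 827 and 589 behavior codes reported above.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: high- and low-achievement groups; columns: the eleven behavior codes
# of Table 1. Cell counts are illustrative; only the row totals (827, 589)
# come from the study.
observed = np.array([
    [156, 120, 95, 88, 80, 75, 68, 55, 40, 30, 20],  # high achievers (n=827)
    [ 83,  95, 60, 70, 55, 50, 48, 45, 35, 28, 20],  # low achievers  (n=589)
])
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4f}")      # dof = (2-1)*(11-1) = 10
```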
To further explore the statistical significance and underlying dynamics of the observed behavior frequencies, adjusted residual analysis was performed on the transition frequencies of behaviors between the high- and low-achievement groups. This analysis helps identify behavior transitions significantly higher or lower than expected, providing deeper insights into the behavioral differences between the two groups. Table 7 presents the residuals for the high-achievement group, with rows representing the initial behaviors and columns representing subsequent behaviors. The results showed 21 behavior transitions with z-scores greater than 1.96, indicating statistically significant sequences. Similarly, Table 8 presents the results for the low-achievement group, with 20 behavior transitions having z-scores greater than 1.96.
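In lag sequential analysis, these z-scores are the adjusted residuals of the behavior transition matrix: for each initial behavior followed by another, z = (O − E) / √(E(1 − r/N)(1 − c/N)), where O is the observed transition count, E = rc/N its expectation, r and c the row and column totals, and N the total number of transitions. A minimal sketch with a toy transition matrix (not the study’s data) follows:

```python
import numpy as np

def adjusted_residuals(transitions: np.ndarray) -> np.ndarray:
    """Adjusted residuals (z-scores) for a behavior transition matrix.

    transitions[i, j] counts how often behavior j immediately follows
    behavior i; z > 1.96 flags transitions occurring significantly more
    often than expected by chance.
    """
    n = transitions.sum()
    row = transitions.sum(axis=1, keepdims=True)   # r_i, shape (k, 1)
    col = transitions.sum(axis=0, keepdims=True)   # c_j, shape (1, k)
    expected = row @ col / n                       # E_ij = r_i * c_j / n
    variance = expected * (1 - row / n) * (1 - col / n)
    return (transitions - expected) / np.sqrt(variance)

# Toy 3-behavior transition matrix with placeholder counts.
demo = np.array([
    [ 5, 20,  3],
    [18,  4,  6],
    [ 2,  7, 10],
])
z = adjusted_residuals(demo)
print(np.round(z, 2))
print(z > 1.96)   # significant transition sequences
```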
Based on Table 7 and Table 8, behavioral pathways were constructed, as shown in Figure 4. The left diagram represents the learning behavior pathways of high achievers, while the right diagram represents those of low achievers. Black lines indicate important sequences occurring in both groups, while red lines indicate important sequences unique to either the high or low achievers.
Figure 4 shows that high achievers, after starting a task, tend to engage in selective dialogue with the AI chatbot and repeat this dialogue (Q→H→AS→AS), demonstrating targeted and in-depth help-seeking behavior. Additionally, these students shift to watching instructional videos (V→I) or reading task clues (V→L) when needed, then seek help from the AI chatbot again (I→H, L→H), leading to selective dialogue (H→AS), often repeating the process to ensure deep understanding and thorough problem resolution. After these interactions, they typically start new tasks (V→Q).
In contrast, low achievers also seek help from the AI chatbot after starting tasks (Q→H) but engage more frequently in free dialogue (H→AF) and free questioning (AS→AF). After watching videos or reading task clues, they typically proceed directly to answering questions (L→S) rather than seeking further help.
In summary, comparing the behavioral patterns of high and low achievers, we find that regardless of achievement level, students actively seek help from the AI chatbot during AI chatbot-supported digital game-based learning. However, high achievers exhibit more systematic and in-depth learning strategies, engaging in multiple interactions and repeated confirmations to master task essentials, while low achievers rely more on immediate feedback and external assistance.

6. Discussion

This study integrates an AI chatbot into a digital game environment, developing a game-based AI chatbot to address issues of low learning motivation and difficulty in stimulating higher-order thinking in information technology courses. The game-based AI chatbot provides personalized learning services and immediate feedback, combined with gamification elements, enhancing students’ enthusiasm for learning IT, improving their understanding of IT knowledge, and fostering higher-order thinking skills. Compared to traditional AI chatbots, the game-based AI chatbot shows superior results.
Firstly, the results indicate that the game-based AI chatbot significantly improves learners’ academic performance compared to traditional AI chatbots. This finding aligns with the results of Chen et al. [63] and Ng et al. [39]. The game-based learning environment is more engaging than traditional learning environments [34], enhancing learners’ attention and motivating continuous learning [64]. It can also unfold teaching content through game storylines to guide learners towards learning objectives [65]. Traditional chatbots lack storylines, whereas the game-based AI chatbot naturally presents teaching content within story contexts. Additionally, gamification elements such as leaderboards, points, and badges have been shown to improve students’ academic performance and learning motivation [66]. The game-based AI chatbot developed in this study includes a point system and AI chatbot favorability levels to encourage diligent learning. Moreover, AI chatbots in a gaming environment provide a more interactive and immersive learning experience [67], enabling students to actively participate in the learning process through interactions with AI, completing tasks, and solving problems, leading to a deeper understanding of knowledge. Therefore, there are significant differences in learning achievements between the two groups.
Secondly, regarding the cultivation of higher-order thinking skills, the results show that the game-based AI chatbot significantly enhances learners’ problem-solving tendencies and computational thinking abilities compared to traditional AI chatbots. This finding is consistent with the results of Hooshyar et al. [68] and Hsu and Wu [69]. The game-based AI chatbot provides a rich and engaging learning environment where students can better understand and apply computational thinking and problem-solving strategies by solving real-world problems in games. Additionally, utilizing student questioning strategies in a digital game environment can enhance computational thinking abilities [70]. Educational game tasks with clear goal orientation provide students with clear directions at each stage, thereby strengthening their problem-solving tendencies. Conversely, traditional AI chatbot-supported learning lacks clear instructional tasks and story contexts, making it relatively harder for the control group students to break down problems. Therefore, the game-based AI chatbot outperforms the traditional AI chatbot in terms of problem-solving tendencies and computational thinking. However, there is no significant difference between the experimental and control groups in the creativity dimension. This result may be attributed to the fact that digital games often revolve around specific goals and tasks, which, despite requiring problem-solving and logical reasoning, may not provide sufficient space and opportunities for students to engage in free creation or highly innovative thinking. Additionally, different students exhibit varying levels of creativity and expression styles; some may already demonstrate high creativity levels in traditional AI chatbot environments, and the game-based environment may not significantly enhance their creative performance. Furthermore, both groups can ask questions to the AI chatbot and express their ideas, resulting in no significant difference in creativity tendencies between the two groups.
Moreover, there are significant differences in learning motivation and flow experience between the two groups after the experiment. Students using the game-based AI chatbot demonstrate significantly better learning motivation and flow experience compared to those using traditional AI chatbots. This result aligns with previous research findings [38,71]. Embedding an AI chatbot into digital games does not diminish students’ learning motivation; instead, it creates more challenges, making learning more enjoyable. Game-based learning can immerse learners in the learning process, helping them achieve better flow experiences [72]. The reason lies in gamification elements such as points, rewards, levels, and challenges, which motivate learners to continually strive towards their learning goals [73]. Additionally, the narrative and role-playing elements in digital games immerse students in a virtual world, enhancing the emotional engagement in the learning experience [74]. For instance, in this study, students in the experimental group played the role of “Sound Guardians,” participating in the upgrade and maintenance of a city’s sound amplification system. Such role-playing enables deeper engagement in educational game-based learning. Moreover, digital games can enhance the learning experience through multi-sensory stimulation, including visual, auditory, and tactile inputs. This multi-sensory stimulation increases the fun of learning, thereby improving learners’ motivation. Consequently, the game-based AI chatbot effectively enhances students’ learning motivation and flow experience in IT courses.
Finally, in the study of learning behaviors in a game-based environment, the experimental group students actively watched instructional videos and read task clues, thereby strengthening their understanding of IT knowledge points. They also actively asked questions to the AI chatbot and engaged in repeated dialogues to seek problem-solving methods. This behavior was evident in both high and low achievers, driven by game mechanisms that award points and increase AI chatbot favorability through interactions with the chatbot, motivating students to actively think and solve problems. Furthermore, since the students in the experimental group were between 11 and 12 years old, they were in a transitional cognitive development stage, moving from concrete operational thought to formal operational thought. Consequently, they required more assistance and feedback when dealing with complex tasks and information. Additionally, students in this age group have a high acceptance of games and interactive learning tools. They are easily motivated by game mechanisms and point systems, which foster their proactivity and enthusiasm.
By comparing the learning behaviors of high-achieving and low-achieving students, several key differences can be observed. First, low-achieving students more frequently skipped videos compared to their high-achieving counterparts. This behavioral discrepancy may be attributed to the relative lack of self-regulation and attention span among low-achieving students [75], resulting in impatience in sustaining attention to video content. Conversely, high-achieving students are more inclined to adopt systematic learning strategies [76], watching the videos in full to build a comprehensive knowledge structure. Additionally, the current instructional videos might lack sufficient interactivity and engagement, particularly for low-achieving students. Future research should consider enhancing video content by incorporating interactive Q&A sessions and visually appealing elements to increase engagement. Furthermore, low-achieving students tended to seek help from the AI chatbot after skipping videos, suggesting that interactions with the AI chatbot were more appealing to them. Based on this finding, educators might design more learning modules that integrate AI interactions to improve learning outcomes.
Second, at the beginning of tasks, high-achieving students tended to engage in selective dialogue (H→AS), while low-achieving students frequently engaged in free dialogue and repetitive questioning (H→AS, H→AF, AS→AF). This indicates that high-achieving students had clearer questions and task objectives, enabling them to seek targeted assistance from the AI chatbot, whereas low-achieving students encountered more difficulties in understanding tasks or information, requiring more external help to resolve issues. This finding aligns with Li et al. [61], who found that low achievers rely more on concept maps in their study on the effects of a concept map-based two-tier test strategy on student behavioral patterns. The difference may be due to high-achieving students possessing stronger problem-solving skills [77], resulting in more efficient and goal-oriented interactions with the AI chatbot. Low-achieving students might lack confidence when facing difficulties, prompting them to seek help more frequently. Based on these findings, educators can further optimize AI chatbot-supported digital game-based learning by providing personalized guidance and support, especially offering more detailed steps and motivational feedback for low-achieving students.
Third, after watching instructional videos or reading task clues, high-achieving students interacted multiple times with the AI chatbot (I→H, L→H) to ensure a deep understanding of the tasks, whereas low-achieving students started answering questions directly after reading task clues (L→S). This suggests that high-achieving students employed more in-depth and reflective learning strategies, while low-achieving students quickly moved to task execution, reflecting shallow information processing and a preference for speed over depth. This difference could stem from high-achieving students’ stronger cognitive abilities, allowing them to better understand and process complex information, whereas low-achieving students exhibited surface-level understanding and reliance on external help. Additionally, high-achieving students might have higher learning motivation and self-efficacy [78].
Fourth, high-achieving students submitted correct answers more frequently than low-achieving students. This discrepancy could be due to low-achieving students frequently skipping instructional videos, negatively impacting their understanding of key concepts. Furthermore, high-achieving students demonstrated better interaction skills with the AI chatbot, engaging in more targeted and in-depth dialogues. They likely possessed stronger foundational or prior knowledge related to the learning content, facilitating linkages and comprehension of new information, thus enhancing answer accuracy. Low-achieving students might lack these abilities, affecting their learning efficiency and problem-solving skills.
Despite the significant behavioral differences between high- and low-achieving students, these differences underscore the utility of AI chatbots in digital game-based learning. For low achievers, AI chatbots serve as immediate feedback providers and guides, helping them understand and complete tasks, compensating for deficiencies in self-regulation and foundational knowledge. For high achievers, AI chatbots function as partners for knowledge deepening and confirmation, supporting autonomous learning and deep understanding, leveraging their high cognitive abilities and learning strategies. This differentiated support highlights the potential of AI chatbots in addressing individual differences and promoting personalized learning in digital game-based environments. To further enhance the educational benefits of AI chatbots in digital games, future development should consider incorporating more personalized and adaptive features to better recognize and respond to the specific needs of different students.

7. Conclusions

In this study, a digital game-based AI chatbot was developed, and a quasi-experiment was conducted to evaluate its effectiveness in an IT course. It was found that the digital game-based AI chatbot significantly improved learners’ academic performance compared to the traditional AI chatbot. Second, in terms of higher-order thinking development, it significantly improved learners’ problem-solving tendencies and computational thinking compared to the traditional AI chatbot, but there was no significant difference in creativity. In addition, the digital game-based AI chatbot was significantly more effective than the traditional AI chatbot in improving learners’ motivation and flow experience. Moreover, the results of the learning behavior analysis of the experimental group showed that both the high- and low-achieving subgroups actively interacted with the AI chatbot during the problem-solving process, which in turn facilitated understanding of the knowledge points and improved their ability to ask questions. Therefore, the digital game-based AI chatbot developed in this study is effective when applied to an IT course.
These findings offer valuable insights for educators and educational game designers. First, we encourage teachers to incorporate AI chatbots into their teaching practices. AI chatbots can provide immediate responses to students’ questions, helping them overcome difficulties encountered during learning and offering personalized support and feedback. Second, AI chatbots should be deeply integrated with digital game-based learning strategies. Gamified design can stimulate students’ exploratory spirit and willingness to engage in active learning. Moreover, this study found that unfolding instructional content through game narratives can better guide learners to achieve their learning objectives. Therefore, future educators should integrate clear learning tasks with story backgrounds when using AI chatbots to assist teaching. By combining narrative elements with real-world problems, educators can enhance student engagement and motivation. Finally, instruction should be learner-centered, taking into account students’ cognitive levels and the behavioral differences exhibited in AI chatbot-supported digital game-based learning. This approach involves providing customized scaffolding resources to more effectively assist various types of students in achieving their learning goals. For high achievers, more challenging tasks and open-ended questions can be offered to encourage deeper thinking. Conversely, for low achievers, more specific task cues and engaging micro-videos can be provided to help them build a knowledge framework and enhance understanding.
However, this study has certain limitations. First, the duration of the experiment was relatively short. Future research should collect more evidence to demonstrate the impact of AI chatbot-supported digital game-based learning on students’ learning outcomes. Second, the sample size used in analyzing the learning behavior paths of the experimental group was relatively small, which might limit the generalizability and statistical power of our findings. Although we enhanced the reliability of the results through statistical methods such as the chi-square test, future studies should consider using larger sample sizes to validate our preliminary findings and increase the complexity of the analysis. Additionally, developing such educational games is time-consuming and requires a high level of systematic thinking from designers. Lastly, this study only explored the application of AI chatbots in digital game-based learning within the context of information technology. It remains to be seen whether this approach can be applied to other subjects.
Based on the current findings, we offer the following recommendations for future research: (1) To better understand the long-term impact of AI chatbot-supported digital game-based learning on students’ academic performance and higher-order thinking skills, future studies should extend the duration of the experiment and increase the sample size. These measures will enhance the statistical power and generalizability of the research results, ensuring the stability and reliability of the findings. (2) Future research should investigate how to optimize AI chatbot dialogue strategies to better meet individualized learning needs and improve student learning outcomes. (3) The application of game-based AI chatbots should be explored across different subjects to further validate their effectiveness and enhance teaching outcomes. (4) Future research could compare the effectiveness of AI chatbots embedded in different game environments to identify optimal design principles for educational games incorporating AI chatbots. (5) Combining gamification with virtual reality (VR) and augmented reality (AR) technologies for interaction with AI chatbots could provide a more immersive learning experience, warranting further exploration. These recommendations aim to address the limitations identified and enhance the understanding and effectiveness of AI chatbots in educational contexts.

Author Contributions

Y.X. and J.Z. (Jingdong Zhu) designed and conducted the study and analyzed and summarized the data. M.W. is responsible for the development of the game. F.Q., Y.Y. and J.Z. (Jie Zhang) participated in data analysis. All authors contributed to the article and approved the submitted version. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request from the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Topalli, D.; Cagiltay, N.E. Improving programming skills in engineering education through problem-based game projects with Scratch. Comput. Educ. 2018, 120, 64–74. [Google Scholar] [CrossRef]
  2. Fang, J.W.; Shao, D.; Hwang, G.J.; Chang, S.C. From critique to computational thinking: A peer-assessment-supported problem identification, flow definition, coding, and testing approach for computer programming instruction. J. Educ. Comput. Res. 2022, 60, 1301–1324. [Google Scholar] [CrossRef]
  3. Yue, Y.-L.; Zhang, X.-J.; Leung, Y.-F. How does artificial intelligence teaching cultivate high school students’ computational thinking?—An empirical study based on AI case-driven Python programming teaching. Basic Educ. 2022, 19, 74–84. [Google Scholar] [CrossRef]
  4. Hu, B.H.; Liu, L.; Duo, H. An Exploration of a Hybrid Site Rotation Model to Support Differentiated Programming Instruction in Elementary Schools. Mod. Educ. Technol. 2023, 33, 59–69. [Google Scholar] [CrossRef]
  5. Wu, R.; Yu, Z. Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. Br. J. Educ. Technol. 2024, 55, 10–33. [Google Scholar] [CrossRef]
  6. Escalante, J.; Pack, A.; Barrett, A. AI-generated feedback on writing: Insights into efficacy and ENL student preference. Int. J. Educ. Technol. High. Educ. 2023, 20, 57. [Google Scholar] [CrossRef]
  7. Wu, T.T.; Lee, H.Y.; Li, P.H.; Huang, C.N.; Huang, Y.M. Promoting self-regulation progress and knowledge construction in blended learning via ChatGPT-based learning aid. J. Educ. Comput. Res. 2024, 61, 3–31. [Google Scholar] [CrossRef]
  8. Stathakarou, N.; Nifakos, S.; Karlgren, K.; Konstantinidis, S.T.; Bamidis, P.D.; Pattichis, C.S.; Davoody, N. Students’ perceptions on chatbots’ potential and design characteristics in healthcare education. In The Importance of Health Informatics in Public Health During a Pandemic; IOS Press: Clifton, VA, USA, 2020; pp. 209–212. [Google Scholar]
  9. Lin, Y.T.; Ye, J.H. Development of an educational chatbot system for enhancing students’ biology learning performance. J. Internet Technol. 2023, 24, 275–281. [Google Scholar]
  10. Li, H.F.; Wang, W.; Li, G.X.; Wang, Y. Intelligent midwifery pedagogy: An example of the teaching practice of “Intelligent Socratic Conversation Robot”. Open Educ. Res. 2024, 30, 89–99. [Google Scholar] [CrossRef]
  11. Li, H. Effects of a ChatGPT-based flipped learning guiding approach on learners’ courseware project performances and perceptions. Australas. J. Educ. Technol. 2023, 39, 40–58. [Google Scholar] [CrossRef]
  12. Hu, Y.H. Improving ethical dilemma learning: Featuring thinking aloud pair problem solving (TAPPS) and AI-assisted virtual learning companion. Educ. Inf. Technol. 2024, 1–21. [Google Scholar] [CrossRef]
  13. Kuhail, M.A.; Alturki, N.; Alramlawi, S.; Alhejori, K. Interacting with educational chatbots: A systematic review. Educ. Inf. Technol. 2023, 28, 973–1018. [Google Scholar] [CrossRef]
  14. Chen, C.H.; Liu, J.H.; Shou, W.C. How competition in a game-based science learning environment influences students’ learning achievement, flow experience, and learning behavioral patterns. J. Educ. Technol. Soc. 2018, 21, 164–176. [Google Scholar]
  15. Prensky, M. Digital natives, digital immigrants part 1. Horizon 2001, 9, 1–6. [Google Scholar] [CrossRef]
  16. Prensky, M. The games generations: How learners have changed. Digit. Game-Based Learn. 2001, 1, 1–26. [Google Scholar]
  17. Zimmerman, B.J.; Schunk, D.H. Self-regulated learning and performance: An introduction and an overview. In Handbook of Self-Regulation of Learning and Performance; Routledge: London, UK, 2011; pp. 15–26. [Google Scholar]
  18. Hung, C.Y.; Sun, J.C.Y.; Yu, P.T. The benefits of a challenge: Student motivation and flow experience in tablet-PC-game-based learning. Interact. Learn. Environ. 2015, 23, 172–190. [Google Scholar] [CrossRef]
  19. Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience; Harper & Row: New York, NY, USA, 1990. [Google Scholar]
  20. Yeoh, C.P.; Li, C.T.; Hou, H.T. Game-based collaborative scientific inquiry learning using realistic context and inquiry process-based multidimensional scaffolding. Int. J. Sci. Educ. 2024, 1–23. [Google Scholar] [CrossRef]
  21. Yang, Y.F.; Lee, I.C.; Tseng, C.C.; Lai, S.C. Developing students’ self-regulated learning strategies to facilitate vocabulary development in a digital game-based learning environment. J. Res. Technol. Educ. 2024, 1–20. [Google Scholar] [CrossRef]
  22. Liang, H.Y.; Hwang, G.J.; Hsu, T.Y.; Yeh, J.Y. Effect of an AI-based chatbot on students’ learning performance in alternate reality game-based museum learning. Br. J. Educ. Technol. 2024. [Google Scholar] [CrossRef]
  23. Yang, X.; Liu, X. Information technology education: Historical evolution and design logic. China Electron. Educ. 2023, 434, 70–76. [Google Scholar]
  24. Yang, X.; Liu, X. Learning Styles in Compulsory Information Technology Curriculum: Digital Learning. Curric. Teach. Mater. Teach. Methods 2023, 43, 139–144. [Google Scholar] [CrossRef]
  25. Wu, J.; Guo, Q.; Zhu, S. Why and how to implement IT curriculum: New thinking in the perspective of digital transformation of education. China Electron. Educ. 2024, 444, 59–67. [Google Scholar]
  26. Zhang, J.; Oh, Y.J.; Lange, P.; Yu, Z.; Fukuoka, Y. Artificial intelligence chatbot behavior change model for designing artificial intelligence chatbots to promote physical activity and a healthy diet. J. Med. Internet Res. 2020, 22, e22845. [Google Scholar] [CrossRef]
  27. Fidan, M.; Gencel, N. Supporting the instructional videos with chatbot and peer feedback mechanisms in online learning: The effects on learning performance and intrinsic motivation. J. Educ. Comput. Res. 2022, 60, 1716–1741. [Google Scholar] [CrossRef]
  28. Yuan, Y. An empirical study of the efficacy of AI chatbots for English as a foreign language learning in primary education. Interact. Learn. Environ. 2023, 1–16. [Google Scholar] [CrossRef]
  29. Yuan, C.C.; Li, C.H.; Peng, C.C. Development of mobile interactive courses based on an artificial intelligence chatbot on the communication software LINE. Interact. Learn. Environ. 2023, 31, 3562–3576. [Google Scholar] [CrossRef]
  30. Lee, H.Y.; Chen, P.H.; Wang, W.S.; Huang, Y.M.; Wu, T.T. Empowering ChatGPT with guidance mechanism in blended learning: Effect of self-regulated learning, higher-order thinking skills, and knowledge construction. Int. J. Educ. Technol. High. Educ. 2024, 21, 1–28. [Google Scholar] [CrossRef]
  31. Huang, A.Y.; Lu, O.H.; Yang, S.J. Effects of artificial Intelligence–Enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Comput. Educ. 2023, 194, 104684. [Google Scholar] [CrossRef]
  32. Huang, W.; Hew, K.F.; Fryer, L.K. Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. J. Comput. Assist. Learn. 2021, 38, 237–257. [Google Scholar] [CrossRef]
  33. Alemdag, E. The effect of chatbots on learning: A meta-analysis of empirical research. J. Res. Technol. Educ. 2023, 1–23. [Google Scholar] [CrossRef]
  34. Erhel, S.; Jamet, E. Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Comput. Educ. 2013, 67, 156–167. [Google Scholar] [CrossRef]
  35. Yang, K.H.; Lu, B.C. Towards the successful game-based learning: Detection and feedback to misconceptions is the key. Comput. Educ. 2021, 160, 104033. [Google Scholar] [CrossRef]
  36. Hwang, G.J.; Chuang, W.H.; Hsia, L.H. Comprehending complex chemistry problems in a structured and enjoyable manner: A concept mapping-based contextual gaming approach. Educ. Inf. Technol. 2024, 1–23. [Google Scholar] [CrossRef]
  37. Wen, C.T.; Chang, C.J.; Chang, M.H.; Fan Chiang, S.H.; Liu, C.C.; Hwang, F.K.; Tsai, C.C. The learning analytics of model-based learning facilitated by a problem-solving simulation game. Instr. Sci. 2018, 46, 847–867. [Google Scholar] [CrossRef]
  38. Hwang, G.J.; Chiu, M.C.; Hsia, L.H.; Chu, H.C. Promoting art appreciation performances and behaviors in effective and joyful contexts: A two-tier test-based digital gaming approach. Comput. Educ. 2023, 194, 104706. [Google Scholar] [CrossRef]
  39. Ng, D.T.K.; Chen, X.; Leung, J.K.L.; Chu, S.K.W. Fostering students’ AI literacy development through educational games: AI knowledge, affective and cognitive engagement. J. Comput. Assist. Learn. 2024. [Google Scholar] [CrossRef]
  40. Li, X.; Li, R. How game-based learning supports the creativity of university students? The context of China. Innov. Educ. Teach. Int. 2024, 1–15. [Google Scholar] [CrossRef]
  41. Amzalag, M.; Kadusi, D.; Peretz, S. Enhancing Academic Achievement and Engagement Through Digital Game-Based Learning: An Empirical Study on Middle School Students. J. Educ. Comput. Res. 2024, 07356331241236937. [Google Scholar] [CrossRef]
  42. Chen, C.H.; Chang, C.L. Effectiveness of AI-assisted game-based learning on science learning outcomes, intrinsic motivation, cognitive load, and learning behavior. Educ. Inf. Technol. 2024, 1–22. [Google Scholar] [CrossRef]
  43. Lewis, A.; Smith, D. Defining higher order thinking. Theory Into Pract. 1993, 32, 131–137. [Google Scholar] [CrossRef]
  44. Huang, Y.M.; Silitonga, L.M.; Wu, T.T. Applying a business simulation game in a flipped classroom to enhance engagement, learning achievement, and higher-order thinking skills. Comput. Educ. 2022, 183, 104494. [Google Scholar] [CrossRef]
  45. Li, W.; Huang, J.Y.; Liu, C.Y.; Tseng, J.C.; Wang, S.P. A study on the relationship between students’ learning engagement and higher-order thinking skills in programming learning. Think. Ski. Creat. 2023, 49, 101369. [Google Scholar] [CrossRef]
  46. Hwang, G.-J.; Lai, C.-L.; Liang, J.-C.; Chu, H.-C.; Tsai, C.-C. A long-term experiment to investigate the relationships between high school students’ perceptions of mobile learning and peer interaction and higher-order thinking tendencies. Educ. Technol. Res. Dev. 2018, 66, 75–93. [Google Scholar] [CrossRef]
  47. Sun, H.; Xie, Y.; Lavonen, J. Exploring the structure of students’ scientific higher order thinking in science education. Think. Ski. Creat. 2022, 43, 100999. [Google Scholar] [CrossRef]
  48. Wing, J.M. Computational thinking. Commun. ACM 2006, 49, 33–35. [Google Scholar] [CrossRef]
  49. Lu, K.; Yang, H.H.; Shi, Y.; Wang, X. Examining the key influencing factors on college students’ higher-order thinking skills in the smart classroom environment. Int. J. Educ. Technol. High. Educ. 2021, 18, 1. [Google Scholar] [CrossRef]
  50. Bray, A.; Byrne, P.; O’Kelly, M. A short instrument for measuring students’ confidence with ‘key skills’ (SICKS): Development, validation and initial results. Think. Ski. Creat. 2020, 37, 100700. [Google Scholar] [CrossRef]
  51. Song, C.; Song, Y. Enhancing academic writing skills and motivation: Assessing the efficacy of ChatGPT in AI-assisted language learning for EFL students. Front. Psychol. 2023, 14, 1260843. [Google Scholar] [CrossRef] [PubMed]
  52. Tay, J.; Goh, Y.M.; Safiena, S.; Bound, H. Designing digital game-based learning for professional upskilling: A systematic literature review. Comput. Educ. 2022, 184, 104518. [Google Scholar] [CrossRef]
  53. Ait Baha, T.; El Hajji, M.; Es-Saady, Y.; Fadili, H. The impact of educational chatbot on student learning experience. Educ. Inf. Technol. 2024, 29, 10153–10176. [Google Scholar] [CrossRef]
  54. Kuo, C.H.; Chen, M.J.; Nababan, R.; She, H.C. Space adventure game-based learning: How games and scaffolds affect eighth graders’ physics learning and game immersion. IEEE Trans. Learn. Technol. 2023, 17, 229–240. [Google Scholar] [CrossRef]
  55. Chen, S.Y.; Chang, Y.M. The impacts of real competition and virtual competition in digital game-based learning. Comput. Hum. Behav. 2020, 104, 106171. [Google Scholar] [CrossRef]
  56. Lo, J.J.; Ji, N.W.; Syu, Y.H.; You, W.J.; Chen, Y.T. Developing a digital game-based situated learning system for ocean ecology. Trans. Edutainment 2008, 1, 51–61. [Google Scholar]
  57. Lai, C.L.; Hwang, G.J. Effects of mobile learning time on students’ conception of collaboration, communication, complex problem-solving, meta-cognitive awareness and creativity. Int. J. Mob. Learn. Organ. 2014, 8, 276–291. [Google Scholar] [CrossRef]
  58. Hwang, G.J.; Li, K.C.; Lai, C.L. Trends and strategies for conducting effective STEM research and applications: A mobile and ubiquitous learning perspective. Int. J. Mob. Learn. Organ. 2020, 14, 161–183. [Google Scholar] [CrossRef]
  59. Pintrich, P.R.; Smith, D.A.F.; Garcia, T.; McKeachie, W.J. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ); ERIC Document Reproduction Service No. ED 338122; National Center for Research to Improve Postsecondary Teaching and Learning: Ann Arbor, MI, USA, 1991. [Google Scholar]
  60. Pearce, J.M.; Ainley, M.; Howard, S. The ebb and flow of online learning. Comput. Hum. Behav. 2005, 21, 745–771. [Google Scholar] [CrossRef]
  61. Li, F.Y.; Hwang, G.J.; Chen, P.Y.; Lin, Y.J. Effects of a concept mapping-based two-tier test strategy on students’ digital game-based learning performances and behavioral patterns. Comput. Educ. 2021, 173, 104293. [Google Scholar] [CrossRef]
  62. Xu, W.; Xing, Q.W.; Zhu, J.D.; Liu, X.; Jin, P.N. Effectiveness of an extended-reality interactive learning system in a dance training course. Educ. Inf. Technol. 2023, 28, 16637–16667. [Google Scholar] [CrossRef]
  63. Chen, Y.C.; Hwang, G.J.; Lai, C.L. Motivating students to become self-regulatory learners: A gamified mobile self-regulated learning approach. Educ. Inf. Technol. 2024, 1–24. [Google Scholar] [CrossRef]
  64. Zimmerman, B.J.; Bonner, S.; Kovach, R. Developing Self-Regulated Learners: Beyond Achievement to Self-Efficacy; American Psychological Association: Washington, DC, USA, 1996. [Google Scholar] [CrossRef]
  65. Wouters, P.; van Oostendorp, H. Overview of instructional techniques to facilitate learning and motivation of serious games. In Instructional Techniques to Facilitate Learning and Motivation of Serious Games; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1–16. [Google Scholar] [CrossRef]
  66. Huang, B.; Hew, K.F. Implementing a theory-driven gamification model in higher education flipped courses: Effects on out-of-class activity completion and quality of artifacts. Comput. Educ. 2018, 125, 254–272. [Google Scholar] [CrossRef]
  67. Chien, C.C.; Chan, H.Y.; Hou, H.T. Learning by playing with generative AI: Design and evaluation of a role-playing educational game with generative AI as scaffolding for instant feedback interaction. J. Res. Technol. Educ. 2024, 1–20. [Google Scholar] [CrossRef]
  68. Hooshyar, D.; Pedaste, M.; Yang, Y.; Malva, L.; Hwang, G.J.; Wang, M.; Lim, H.; Delev, D. From gaming to computational thinking: An adaptive educational computer game-based learning approach. J. Educ. Comput. Res. 2020, 59, 383–409. [Google Scholar] [CrossRef]
  69. Hsu, C.Y.; Wu, T.T. Application of Business Simulation Games in Flipped Classrooms to Facilitate Student Engagement and Higher-Order Thinking Skills for Sustainable Learning Practices. Sustainability 2023, 15, 16867. [Google Scholar] [CrossRef]
  70. Cheng, Y.P.; Lai, C.F.; Chen, Y.T.; Wang, W.S.; Huang, Y.M.; Wu, T.T. Enhancing student’s computational thinking skills with student-generated questions strategy in a game-based learning platform. Comput. Educ. 2023, 200, 104794. [Google Scholar] [CrossRef]
  71. Chang, C.C.; Warden, C.A.; Liang, C.; Lin, G.Y. Effects of digital game-based learning on achievement, flow and overall cognitive load. Australas. J. Educ. Technol. 2018, 34. [Google Scholar] [CrossRef]
  72. Hamari, J.; Shernoff, D.J.; Rowe, E.; Coller, B.; Asbell-Clarke, J.; Edwards, T. Challenging games help students learn: An empirical study on engagement, flow and immersion in game-based learning. Comput. Hum. Behav. 2016, 54, 170–179. [Google Scholar] [CrossRef]
  73. Sun, L.; Kangas, M.; Ruokamo, H. Game-based features in intelligent game-based learning environments: A systematic literature review. Interact. Learn. Environ. 2023, 1–17. [Google Scholar] [CrossRef]
  74. Alexiou, A.; Schippers, M.C. Digital game elements, user experience and learning: A conceptual framework. Educ. Inf. Technol. 2018, 23, 2545–2567. [Google Scholar] [CrossRef]
  75. McClelland, M.M.; Cameron, C.E. Self-regulation and academic achievement in elementary school children. New Dir. Child Adolesc. Dev. 2011, 2011, 29–44. [Google Scholar] [CrossRef]
  76. Hirt, C.N.; Karlen, Y.; Merki, K.M.; Suter, F. What makes high achievers different from low achievers? Self-regulated learners in the context of a high-stakes academic long-term task. Learn. Individ. Differ. 2021, 92, 102085. [Google Scholar] [CrossRef]
  77. Ispir, O.A.; Ay, Z.S.P.; Saygi, E. High achiever students’ self regulated learning strategies, motivation towards mathematics, and their thinking styles. Egit. Ve Bilim 2011, 36, 235. [Google Scholar]
  78. Guo, W.; Bai, B. Effects of self-regulated learning strategy use on motivation in EFL writing: A comparison between high and low achievers in Hong Kong primary schools. Appl. Linguist. Rev. 2022, 13, 117–139. [Google Scholar] [CrossRef]
Figure 1. Digital game-based AI chatbot system framework.
Figure 2. Key interfaces of the “Sound Guardian” system.
Figure 3. Experimental procedure.
Figure 4. The learning behavioral paths of the high- and low-achievement students in the experimental group.
Table 1. Behavior coding scheme.

Code | Description | Behavior Explanation
K | Skip video | Click the Skip Video button to skip the instructional video
V | Skip the questions | Click the Leave a Question button to skip asking the AI chatbot a question
Q | Starting a mission | Click on the Explore Map mission button to start the game mission
H | Tap on the help guide | Ask the AI chatbot for help
AF | Free conversation with AI chatbot | Open the dialog window to talk freely with the AI chatbot
AS | Selective dialogue with AI chatbot | Open the dialog window to select a conversation with the AI chatbot
S | Start answering questions | Start answering questions
W | Wrong answer | Submit a wrong answer
C | Correct answer | Submit a correct answer
L | Read the assignment thread | Read the contents of the tip box in the game
I | Watch instructional videos | Watch instructional video resources from Game Quest
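To make the coding scheme concrete, the sketch below shows how raw interaction-log events could be mapped onto these eleven behavior codes to produce the coded sequences used in the behavioral analysis. The event names and the encode_log helper are hypothetical illustrations, not the system’s actual logging interface.

```python
# A minimal sketch (hypothetical event names) of turning raw interaction logs
# into the Table 1 behavior codes before sequential analysis.
BEHAVIOR_CODES = {
    "skip_video": "K",       # skip the instructional video
    "skip_question": "V",    # skip asking the AI chatbot a question
    "start_mission": "Q",    # start a game mission from the explore map
    "open_help_guide": "H",  # ask the AI chatbot for help
    "free_chat": "AF",       # talk freely with the AI chatbot
    "selective_chat": "AS",  # choose from preset conversation options
    "start_answering": "S",  # begin answering questions
    "wrong_answer": "W",     # submit a wrong answer
    "correct_answer": "C",   # submit a correct answer
    "read_tip_box": "L",     # read the in-game tip box
    "watch_video": "I",      # watch an instructional video in a quest
}

def encode_log(events):
    """Map a chronologically ordered list of raw log events to behavior codes."""
    return [BEHAVIOR_CODES[e] for e in events if e in BEHAVIOR_CODES]

# One student's session becomes the coded sequence Q -> L -> AF -> S -> C
print(encode_log(["start_mission", "read_tip_box", "free_chat",
                  "start_answering", "correct_answer"]))
```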
Table 2. The Mann–Whitney U test results for the two groups’ learning achievements.

Measure | Group | N | Mean | Mean Rank | U | Z | p
Learning achievement (pre-test) | CG | 39 | 55.51 | 43.59 | 562.00 | −1.856 | 0.063
Learning achievement (pre-test) | EG | 38 | 50.79 | 34.29 | | |
Learning achievement (post-test) | CG | 39 | 63.21 | 31.97 | 467.00 | −2.809 | 0.005 **
Learning achievement (post-test) | EG | 38 | 71.71 | 46.21 | | |
** p < 0.01.
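For readers who want to reproduce this style of comparison, the sketch below runs a two-sided Mann–Whitney U test on two independent samples with SciPy. The score arrays are illustrative placeholders, not the study data, and the Z value is derived from the normal approximation without tie correction, so it may differ slightly from the tie-corrected output of statistical packages.

```python
# A minimal sketch of the group comparisons in Tables 2-5: a two-sided
# Mann-Whitney U test on independent samples (placeholder data, not the study's).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
cg_scores = rng.normal(63, 10, 39)  # control group, n = 39
eg_scores = rng.normal(72, 10, 38)  # experimental group, n = 38

u_stat, p_value = mannwhitneyu(cg_scores, eg_scores, alternative="two-sided")

# Normal-approximation Z (no tie correction), as reported alongside U and p
n1, n2 = len(cg_scores), len(eg_scores)
z = (u_stat - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
print(f"U = {u_stat:.1f}, Z = {z:.3f}, p = {p_value:.3f}")
```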
Table 3. The Mann–Whitney U test results for the two groups’ higher-order thinking skills.

Measure | Group | N | Mean | Mean Rank | U | Z | p
Problem-solving tendency (pre-test) | CG | 39 | 3.46 | 37.27 | 673.5 | −0.693 | 0.489
Problem-solving tendency (pre-test) | EG | 38 | 3.67 | 40.78 | | |
Problem-solving tendency (post-test) | CG | 39 | 3.65 | 30.96 | 427.5 | −3.216 | 0.001 ***
Problem-solving tendency (post-test) | EG | 38 | 4.16 | 47.25 | | |
Computational thinking (pre-test) | CG | 39 | 3.12 | 35.35 | 598.5 | −1.462 | 0.144
Computational thinking (pre-test) | EG | 38 | 3.39 | 42.75 | | |
Computational thinking (post-test) | CG | 39 | 3.57 | 32.12 | 472.5 | −2.757 | 0.006 **
Computational thinking (post-test) | EG | 38 | 3.97 | 46.02 | | |
Creative tendency (pre-test) | CG | 39 | 3.49 | 38.77 | 732.0 | −0.093 | 0.926
Creative tendency (pre-test) | EG | 38 | 3.61 | 39.24 | | |
Creative tendency (post-test) | CG | 39 | 3.81 | 39.97 | 703.0 | −0.391 | 0.696
Creative tendency (post-test) | EG | 38 | 3.78 | 38.00 | | |
** p < 0.01, *** p < 0.001.
Table 4. The Mann–Whitney U test results for the two groups’ learning motivation.

Measure | Group | N | Mean | Mean Rank | U | Z | p
Intrinsic motivation (pre-test) | CG | 39 | 3.48 | 35.54 | 606 | −1.391 | 0.164
Intrinsic motivation (pre-test) | EG | 38 | 3.72 | 42.55 | | |
Intrinsic motivation (post-test) | CG | 39 | 3.8 | 31.12 | 433.5 | −3.174 | 0.002 **
Intrinsic motivation (post-test) | EG | 38 | 4.27 | 47.09 | | |
Extrinsic motivation (pre-test) | CG | 39 | 3.7 | 39.03 | 740 | −0.01 | 0.992
Extrinsic motivation (pre-test) | EG | 38 | 3.75 | 38.97 | | |
Extrinsic motivation (post-test) | CG | 39 | 3.85 | 33.73 | 535.5 | −2.13 | 0.033 *
Extrinsic motivation (post-test) | EG | 38 | 4.21 | 44.41 | | |
* p < 0.05, ** p < 0.01.
Table 5. The Mann–Whitney U test results for the two groups’ flow experience.

Measure | Group | N | Mean | Mean Rank | U | Z | p
Flow experience (pre-test) | CG | 39 | 3.57 | 37.32 | 675.5 | −0.67 | 0.503
Flow experience (pre-test) | EG | 38 | 3.71 | 40.72 | | |
Flow experience (post-test) | CG | 39 | 3.64 | 33.32 | 519.5 | −2.27 | 0.023 *
Flow experience (post-test) | EG | 38 | 4.08 | 44.83 | | |
* p < 0.05.
Table 6. The frequency and rate of behaviors of the high-achievers and low-achievers.

Behavioral Code | Low-Achievers Group (Intergroup Ratio) | High-Achievers Group (Intergroup Ratio) | Total (Total Ratio)
K: Skip video | 35 (5.9%) | 23 (2.8%) | 58 (4.1%)
V: Skip the questions | 68 (11.5%) | 103 (12.5%) | 171 (12.1%)
Q: Starting a mission | 53 (9.0%) | 68 (8.2%) | 121 (8.5%)
H: Tap on the help guide | 69 (11.7%) | 103 (12.5%) | 172 (12.1%)
AF: Free conversation with AI chatbot | 83 (14.1%) | 156 (18.9%) | 239 (16.9%)
AS: Selective dialogue with AI chatbot | 22 (3.7%) | 51 (6.2%) | 73 (5.2%)
S: Start answering questions | 84 (14.3%) | 102 (12.3%) | 186 (13.1%)
W: Wrong answer | 44 (7.4%) | 36 (4.4%) | 80 (5.6%)
C: Correct answer | 39 (6.6%) | 66 (8.0%) | 105 (7.4%)
L: Read the assignment thread | 53 (9.0%) | 68 (8.2%) | 121 (8.5%)
I: Watch instructional videos | 39 (6.6%) | 51 (6.2%) | 90 (6.4%)
Total | 589 (100%) | 827 (100%) | 1416
Note: χ²(10) = 25.712, p = 0.004 < 0.01, V = 0.135.
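The omnibus result in the note can be reproduced directly from the Table 6 frequencies: a chi-square test of independence on the 11 × 2 contingency table, with Cramér’s V = sqrt(χ²/(N·(k − 1))) as the effect size, where k is the smaller table dimension. A sketch using SciPy:

```python
# Reproducing the Table 6 note: chi-square test of independence on the
# 11 x 2 behavior-frequency table, plus Cramer's V as effect size.
import numpy as np
from scipy.stats import chi2_contingency

# Frequencies from Table 6 (order: K, V, Q, H, AF, AS, S, W, C, L, I)
low = np.array([35, 68, 53, 69, 83, 22, 84, 44, 39, 53, 39])       # low achievers
high = np.array([23, 103, 68, 103, 156, 51, 102, 36, 66, 68, 51])  # high achievers
table = np.column_stack([low, high])  # shape (11, 2)

chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}, V = {cramers_v:.3f}")
# Matches the reported chi2(10) = 25.712, p = 0.004, V = 0.135
```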
Table 7. Adjusted residuals table (z-score) for the high achievers in the experimental group.

 | K | V | Q | H | AF | AS | S | W | C | L | I
K | −0.499 | −1.316 | −0.932 | 4.465 * | −1.39 | −0.803 | −1.226 | −0.548 | −1.064 | 3.605 * | −0.837
V | −1.316 | −3.471 | 4.759 * | −3.269 | −3.669 | −2.119 | 1.324 | −1.445 | −2.807 | 10.802 * | 2.883 *
Q | −0.977 | −2.579 | −1.827 | 6.888 * | −2.725 | −1.574 | −2.403 | −1.073 | −2.086 | −1.916 | 11.178 *
H | −1.303 | 9.643 * | −2.436 | −3.238 | 0.181 | 7.939 * | −3.204 | −1.431 | −2.78 | −2.554 | −2.187
AF | −1.403 | 1.947 | −2.622 | −3.485 | 14.566 * | −1.138 | −3.449 | −1.54 | −2.993 | −2.75 | −2.354
AS | −0.803 | 5.504 * | −1.501 | −1.996 | 1.146 | 3.947 * | −1.975 | −0.882 | −1.714 | −1.574 | −1.348
S | −1.226 | −3.235 | −2.292 | −3.046 | −3.419 | −1.975 | −3.014 | 9.174 * | 17.829 * | −2.403 | −2.057
W | −0.524 | −1.381 | 2.435 * | −0.394 | −1.46 | −0.843 | 5.115 * | −0.575 | −1.117 | −1.026 | −0.879
C | −0.947 | −2.5 | 0.227 | −1.292 | −2.642 | −1.526 | 13.198 * | −1.04 | −2.022 | −1.857 | −1.59
L | −0.977 | −2.579 | 6.611 * | 5.853 * | −2.725 | −1.574 | 1.249 | −1.073 | −2.086 | −1.916 | −1.64
I | 12.241 * | −2.208 | −1.564 | 5.011 * | −2.333 | −1.348 | −2.057 | −0.919 | −1.786 | 1.92 | −1.404
* Z > 1.96.
Table 8. Adjusted residuals table (z-score) for the low achievers in the experimental group.

 | K | V | Q | H | AF | AS | S | W | C | L | I
K | −1.235 | −1.83 | −1.351 | 7.598 * | −2.011 | −0.63 | −1.989 | −1.406 | −1.322 | 4.456 * | −1.322
V | −1.83 | −2.711 | 1.693 | −2.676 | −2.98 | −0.934 | −0.68 | −2.083 | −1.959 | 11.307 * | 3.683 *
Q | −1.563 | −2.316 | −1.71 | 7.617 * | −2.546 | −0.798 | −2.518 | −1.779 | −1.674 | −1.979 | 9.669 *
H | −1.83 | 11.746 * | −2.001 | −2.676 | 1.968 * | 4.046 * | −2.947 | −2.083 | −1.959 | −2.316 | −1.959
AF | −2.011 | 2.867 * | −2.2 | −2.942 | 13.097 * | 0.136 | −3.239 | −2.289 | −2.154 | −2.546 | −2.154
AS | −0.63 | 1.556 | −0.689 | −0.922 | 2.459 * | 2.894 * | −1.015 | −0.717 | −0.675 | −0.798 | −0.675
S | −1.989 | −2.947 | −2.175 | −2.909 | −3.239 | −1.015 | −3.203 | 12.85 * | 12.615 * | −2.518 | −2.13
W | −1.294 | −1.917 | 1.034 | −1.247 | −2.107 | −0.66 | 9.336 * | −0.684 | −1.385 | −1.638 | −1.385
C | −1.174 | −1.739 | 1.383 | −0.312 | −1.912 | −0.599 | 7.927 * | −1.336 | −1.257 | −1.486 | −1.257
L | −1.563 | −2.316 | 8.037 * | 1.565 | −2.546 | −0.798 | 4.147 * | −1.779 | −1.674 | −1.979 | −1.674
I | 17.72 * | −1.959 | −1.446 | −0.668 | −2.154 | −0.675 | −2.13 | −1.505 | −1.416 | −0.965 | −1.416
* Z > 1.96.
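Tables 7 and 8 follow the standard lag sequential analysis procedure: count lag-1 transitions between coded behaviors (rows as antecedents, columns as consequents), then compute adjusted residuals (z-scores) and treat cells with Z > 1.96 as significant behavioral paths. The sketch below is a minimal implementation of that procedure using the Allison–Liker adjusted residual; the input sequences are illustrative stand-ins for the students’ coded logs, not the study data.

```python
# A minimal sketch of the lag-1 sequential analysis behind Tables 7 and 8:
# build a transition-frequency matrix from coded sequences, then compute
# Allison-Liker adjusted residuals; z > 1.96 marks a significant transition.
import numpy as np

CODES = ["K", "V", "Q", "H", "AF", "AS", "S", "W", "C", "L", "I"]
IDX = {c: i for i, c in enumerate(CODES)}

def transition_matrix(sequences):
    """Count lag-1 transitions across all students' coded sequences."""
    m = np.zeros((len(CODES), len(CODES)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            m[IDX[a], IDX[b]] += 1
    return m

def adjusted_residuals(m):
    """Allison-Liker adjusted residual for each transition cell."""
    n = m.sum()
    row_p = m.sum(axis=1, keepdims=True) / n  # antecedent marginal probabilities
    col_p = m.sum(axis=0, keepdims=True) / n  # consequent marginal probabilities
    expected = n * row_p * col_p
    with np.errstate(invalid="ignore", divide="ignore"):  # empty rows/cols -> nan
        return (m - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))

# Illustrative coded sequences for two students
seqs = [["Q", "L", "AF", "S", "C", "Q"],
        ["Q", "H", "I", "S", "W", "H", "S", "C"]]
z = adjusted_residuals(transition_matrix(seqs))
for i, j in np.argwhere(z > 1.96):
    print(f"{CODES[i]} -> {CODES[j]}: z = {z[i, j]:.3f}")
```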
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
