Article

A Digital Math Game and Multiple-Try Use with Primary Students: A Sex Analysis on Motivation and Learning

by Claudio Cubillos 1,*, Silvana Roncagliolo 1, Daniel Cabrera-Paniagua 2 and Rosa Maria Vicari 3

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile
2 Escuela de Ingeniería Informática, Universidad de Valparaíso, Valparaíso 2362905, Chile
3 Instituto de Informática, Universidade Federal do Rio Grande do Sul, Porto Alegre 90010-150, Brazil
* Author to whom correspondence should be addressed.
Behav. Sci. 2024, 14(6), 488; https://doi.org/10.3390/bs14060488
Submission received: 16 April 2024 / Revised: 29 May 2024 / Accepted: 5 June 2024 / Published: 8 June 2024

Abstract:
Sex differences have been a rarely addressed aspect in digital game-based learning (DGBL). Likewise, mixed results have been reported regarding the effects according to sex and the conditions that generate these effects. The present work studied the effects of a drill-and-practice mathematical game on primary students. The study focused on an analysis by sex, measuring motivation and learning in the practice activity. Additionally, two instructional mechanics for question answering were compared to search for possible differences: a multiple-try feedback (MTF) condition and a single-try feedback (STF) condition. A total of 81 students from four courses and two schools participated in the intervention. The study’s main findings were as follows: (a) the girls outperformed the boys in terms of learning gains; (b) the girls presented lower levels of perceived competence and autonomy than the boys; (c) under MTF, the girls presented lower levels of autonomy but no differences in competence compared with the boys; (d) under STF, the girls presented lower levels of competence but no differences in autonomy compared with the boys; (e) no sex differences existed in interest, effort, or value, either in general or per instructional condition. This study enhances the knowledge of sex differences under diverse instructional settings, in particular providing insights into the possible differences by sex when varying the number of attempts provided to students.

1. Introduction

The rapid advancement of technology, particularly in computing and information technology (IT), has significantly impacted education, with various tools now available to support learning processes, ranging from Learning Management Systems (LMSs) like Chamilo, Moodle, and H5P, through educational game generators (e.g., Educandy, Educaplay), to specific educational math platforms like photoMath and Symbolab. These tools have been shown to significantly influence learner motivation, attitudes, and achievement, especially in mathematics [1]. However, sex differences in technology use and learning have been observed, with males showing more interest in using computers and technologies than females [2,3]. Males also perceive multimedia technology for learning as more beneficial than females do [3].
In terms of cognition and learning, studies suggest that females perform better than males in planning and attention tasks [4,5]. However, the research also indicates that men and women employ distinct strategies in planning tasks [5,6], with women having higher verbal and memory skills, while men exhibit stronger capabilities in spatial cognition and learning, and both develop different approaches to complex planning tasks throughout their lives. This suggests that sex is a key factor in the efficient and effective use of educational digital technologies.
Integrating gaming with learning is not an easy task, as even setting up any of the above-mentioned learning platforms is not straightforward. Many of these tools allow for the configuration of different instructional features, such as question types, number of attempts, and types of feedback. However, a key problem in their use is that it is not always clear which combinations of all these available characteristics are most suitable for a specific group of students. Adding the factor of sex differences makes it even less clear which of these characteristics generate sex-neutral effects and which do not. While studies on digital game-based learning (DGBL) have extensively researched various emotional and motivational factors, studies covering sex differences in learning gains and engagement [4,7,8], attitudes [9,10], motivation [11], and anxiety [12,13] have shown contradictory findings.
This study aimed to address these gaps by exploring the motivational and learning effects of using a digital math game on primary students. It focused on identifying possible differences by sex, especially when varying the number of attempts provided as part of the instructional design. This research will contribute to a deeper understanding of how sex influences learning outcomes in DGBL, ultimately aiding in the development of more effective and inclusive educational technologies.
The next section provides a background on the existing literature covering sex differences in digital learning contexts and motivation with its related affective outcomes, and then deepens the concepts of instructional feedback and multiple attempts. A sub-section on the Self-Determination Theory follows, framing the motivational constructs used in our study. Section 3 presents the objectives and research questions guiding the current study together with a detailed description of the experiment’s materials and methods, including the MatematicaST game, participants, instruments, and procedure used. Then, Section 4 and Section 5 show the results and discussion, and Section 6 presents the conclusion with the main findings and their implications.

2. Background

2.1. Sex Differences and Learning in Digital Contexts

The exploration of sex differences in learning reveals that males and females interact with technology in distinct ways. Teen girls, for instance, tend to spend more time on smartphones and engage more with social networks, while teen boys are more inclined to spend their time playing games on electronic devices [14]. This divergence in technological engagement extends to mobile learning settings, where males appear to be more influenced by social factors than their female counterparts [15]. This study focuses on the biological sex of students rather than their gender identification or orientation. Nevertheless, students were asked to write their gender identification on the perception questionnaire, and no gender–sex discrepancies were registered; therefore, all participants appear to have identified with their biological sex.
Delving deeper into the field of gaming, a sex-based dichotomy becomes evident. Males often perceive games as unique and engaging experiences, which they find relaxing and conducive to better performance. This association of diversion and catharsis with gaming is typically more pronounced in males [16]. In contrast, it seems that females view games differently. Rather than seeing them as memorable or entertaining experiences, females tend to regard games as just an alternative learning method [16]. Furthermore, females are frequently more skeptical than males about the instructional potential of games, indicating a more cautious approach to the integration of gaming into learning [16].
When focusing on academic performance, research indicates that females generally outperform males, exhibiting superior academic achievements [17] and excelling in learning assignments [18]. This trend extends to students’ disposition towards risk, with a study by Cipriani [19] revealing that females tend to be more cautious in multiple-choice tests, omitting more items and guessing less than males. This behavior is attributed to males being more risk-prone, interpreting risk as a challenge rather than a threat.
Despite these observed differences, there is a scarcity of studies investigating sex differences in gameplay and digital game-based learning (DGBL) [8]. However, the existing body of research suggests that females may have an edge over males in terms of engagement and learning gains in DGBL [8,20,21]. For instance, Lukosch et al. [4] found that female participants outperformed males in a game designed to develop planning operation skills in a container terminal setting.
Similarly, a study by Yeo et al. [22] on fifth-grade students in Taiwan revealed that females showed greater improvements in learning performance than males when using a digital food-chain game. Similar results were also observed in a study by Khan et al. [8], which involved an eighth-grade chemistry (reactivity) game-based learning application. The study found that the girls outperformed the boys in the post-test scores under the educational application, while no gender differences were observed under traditional science instruction. These findings lend further support to the notion of sex-based learning differences.
However, it is worth noting that other research [10,11,23] has reported no significant learning differences by sex. For example, a study by Chung and Chang [11] involving a digital game for first-aid training in English for Chinese non-native speakers found no significant sex differences in the learning achievements. These contrasting findings underscore the need for further investigation to clarify under which conditions one sex may have a learning advantage over the other. The collective insights from these studies justify and highlight the need for further research in this area to better understand and cater to these distinct learning preferences.

2.2. Sex, Motivation, and Related Affective Outcomes

The role of sex in education, particularly in the context of motivation and affective outcomes, has been the subject of extensive research. However, past studies have revealed a complex interplay of sex, learning performance, and motivation in education. In the context of student attitudes towards e-assessments, Bahar and Asil [9] found that males reported more positive attitudes than females. In contrast, Manero et al. [24] found no sex differences in students’ interest in theater-going after using an educational video game. The researchers explored the impact of sex, age, and gaming habits on the effectiveness of the game “La Dama Boba” (The Foolish Lady), a graphical adventure based on the theater play of the same name. Interestingly, in the study of the digital game for first-aid training (EFA) by Chung and Chang [11], girls presented higher motivation levels than boys, although they had similar learning levels. The authors argue that their game was gender-moderated (or balanced), as the three learning activities used elements such as storylines, challenges that did not emphasize competition, educational values, fun factors, and interactivity that offered game exploration opportunities, enhancing female motivation. They also measured usability with a System Usability Scale (SUS) test, finding a non-significant higher score for females.
A study by Khan et al. [8] measured students’ engagement with four factors: positive body language, consistent focus, confidence, and fun and excitement. They found that girls in the learning application presented higher engagement levels than boys, although this was not significant, by using the “student engagement walk-through checklist” [25] administered by teachers during the intervention. In the experience with the food-chain digital game of Yeo et al. [22] with primary students, they determined that the attention of the medium- and high-prior-knowledge (PK) groups was significantly higher than that of the low-PK group. Also, the medium-PK group reported higher satisfaction levels than the low- and high-PK groups. The results also varied by sex. In the high-PK group, males presented higher attention levels than females, while in the low-PK group, males showed higher confidence than females. The authors used the Attention, Relevance, Confidence, and Satisfaction (ARCS) Learning Motivation Scale [26], which is based on the expectation value theory and individuals’ success expectations.
A recently explored variable has been student anxiety, especially in math [27,28] and digital game-based learning [29]. Diverse authors [12,30] found no sex differences concerning anxiety and achievement in math. The Goetz et al. [31] study examined possible sex differences between trait (habitual) and state (transitory) anxiety in mathematics in nearly 700 students. They concluded that females presented higher trait math anxiety than males, but no differences existed for state anxiety measured with experience-sampling methods during activities like taking a math test or attending math lessons. Another relevant finding was that girls’ self-perceptions of their competence in mathematics were lower than boys’ despite their having similar grades in math. The authors argue that this difference in perceived competence helps to partially explain the sex differences between trait and state anxiety. More recently, Wang [13] suggested that a student’s spatial ability can be a relevant factor in mediating math anxiety when considering sex differences.
In conclusion, the effects of sex on learning and motivation are not straightforward. Different variables and conditions make it hard to draw consistent conclusions from the revisited literature. Diverse authors [8,10,20] highlight the need for more research focusing on sex, learning approaches, and motivation for building strategies that foster sex parity in DGBL and education.

2.3. Multiple-Attempt Use with Instructional Feedback

Instructional feedback can be defined as providing students with information to correct their answers, aiding in error identification, misconception correction, and the enhancement of problem-solving strategies and self-regulation [32,33,34]. In a broader sense, feedback encompasses any information provided by an agent—such as a teacher, peer, book, parent, or experience—regarding a person’s performance or understanding [35]. In this study, the instructional feedback was centered on the task level and its formative dimension (learning), diverging from delayed summary feedback [36] and feedback focused on task motivation or self-regulation processes [34].
Feedback provision can vary widely [37], with three feedback types being most frequently used in game-based learning settings during the last decade [34,38,39] as part of single-try feedback (STF) implementations. Knowledge of Result (KR) confirms the correctness of students’ answers or marks errors without additional information [34]. Knowledge of Correct Response (KCR) specifies the correct answer without further explanation [40]. Elaborated feedback (EF) provides information on why an answer is right or wrong, offering explanations, additional materials, hints, or a combination of these [39,40]. Additionally, feedback types differ based on the number of attempts allowed. Answer-Until-Correct (AUC) feedback permits multiple attempts until the correct answer is reached, offering KR feedback between attempts [41]. Multiple-try feedback (MTF) allows a limited number of attempts, typically providing KR but sometimes offering KCR after the final try or hints on the first attempt [42].
Past literature has progressed in clarifying under what conditions each type of feedback works best but has mainly focused on single-try (STF) alternatives. A study by Van der Kleij, Feskens, and Eggen [33] revealed that much research exists on elaborated feedback combined with KR or KCR in computer-based environments. Their meta-analysis showed that the effects of different feedback types were moderated by the kind of learning to be achieved, with EF outperforming both KCR and KR under high-order learning outcomes (LOs), while no major differences existed among these in low-order learning that involved verbatim recall of information.
The above highlights the relevance of task complexity. By considering that feedback’s primary importance is the correction of errors, one would expect to see larger effects for instruction with higher error rates [43]; that is, more difficult topics have more possibilities for learning improvement. Differences in single-try feedback alternatives (KR, KCR, and EF) appear only upon student error.
Regarding multiple-attempt use, there is much less literature, much of it dated, and it provides mixed results on the learning effects. Some studies show significant learning gains with multiple-try solutions [36,42,44], while others find no significant differences when compared to single-try alternatives [45,46,47]. Only the review by Clariana and Koul [44] compared MTF to different feedback types (KR, KCR, and EF), determining that MTF outperformed the other feedback types for higher-order learning outcomes (effect size—ES: 0.11) while being equivalent to or inferior for lower-order outcomes (ES: 0.22). It seems that in situations in which the test items involve comprehension and understanding rather than simple recall, such as in mathematical problem solving, the invitation to try again offers a chance for the elaboration and reorganization of information, potentially enhancing learning [42].
Past research has provided different arguments and used diverse theories to explain multiple-try effects. On the one hand, some research shows that the use of multiple trials (AUC and MTF) offers an interactive mechanism [38] for students’ errors, giving them multiple immediate exposures to the same item [48]. From a contiguity theory perspective, multiple trials seem beneficial compared to single attempts because they may engage learners in additional active processing following errors [49,50]. It is also in line with the information-processing theory, which argues that the continued engagement with a question needed upon an incorrect response can offer potential advantages [51,52], and with the idea that providing the correct answer after only one response, as happens with KCR, may “short circuit” learning [49,53].
On the other hand, some research argues that repeatedly asking a learner to answer the same question can be frustrating [49,54]. Providing multiple attempts at errors might encourage deeper thinking about the lesson unless the learner falls on random guessing because of frustration or impatience [55]. Furthermore, diverse past studies support the idea that learners, particularly those with low academic performances, can feel frustrated when lessons employ multiple-try feedback [55,56,57].
Referring to Salomon and Globerson’s concept of mindfulness [58], feedback can aid learning when received mindfully but might hinder it if it promotes mindlessness [42]. Along this line, besides knowing that MTF seems to be more beneficial for high-level learning outcomes, there is not much clarity regarding what other factors and their interactions promote mindful or mindless trial-and-error behavior when using multiple attempts.

2.4. Motivation and Self-Determination

The Self-Determination Theory (SDT) [59,60] is a psychological framework for explaining the motivation construct that is frequently used in DGBL. However, its use for explaining the effects of multiple trials has been limited. To the best of our knowledge, this is the first effort to apply the SDT to explain sex differences when using multiple attempts under DGBL. The SDT introduces the concept of intrinsic motivation, grounded in the innate psychological needs for competence, autonomy, and relatedness [61]. Competence involves the need for effectiveness in interactions with the environment [61], while autonomy regards the experience of free choice and “freedom in initiating one’s behavior” [62]. Relatedness refers to the fundamental human need for belongingness and connection with others [62]. In addition, the Cognitive Evaluation Theory (CET), an SDT sub-theory, explores factors that either enhance or hinder intrinsic motivation [60]. The CET states that fostering intrinsic motivation in an educational setting involves providing diverse stimulating instructional elements and adequate challenges and promoting learner initiative and autonomy without control or pressure [59]. In this sense, activities with optimal challenge and effort levels enhance motivation [61]. Adequate challenges lead to effort, generating feelings of competence, while the absence of controlling conditions can positively impact effort levels [59]. Therefore, the SDT-CET makes use of the following constructs to assess motivation: interest, competence, pressure, effort, choice, and value. All of them are positively correlated with motivation except for pressure, which is why it is often used in reverse mode as no pressure.
Research on game-based learning has used the SDT with diverse focuses. Liao, Chen, and Shih [63] studied the relationship between cognitive load, motivation, and learning outcomes, while the authors of [64] studied the properties of scaffolds on intrinsic motivation. The SDT was also used to explain the relation of gamification techniques with motivation in virtual reality [65], and for bridging learning mechanics and game mechanics in GBL contexts [66]. Liu, Wang, and Lee [67] found that game quality and feedback significantly influenced the motivation and learning performances of students.
In our previous study utilizing MatematicaST [68], the focus was on contrasting multiple-try feedback and the single-try condition with KCR among primary-school students. The results revealed that MTF provided higher learning and higher levels of perceived competence and autonomy than the single-try condition with KCR. Also, multiple-try feedback presented an increase in perceived pressure. No significant differences were observed in terms of the perceived effort and value between the conditions, and both remained consistently high. As well, no sex analysis was conducted, as it was not a factor or focus of the paper.
In summary, the SDT has been applied to gamification, game mechanics, and game quality in computer-based learning contexts, but without controlling the instructional feedback, such as single-try and multiple attempts. There is little empirical research examining the SDT constructs with the sex and feedback types under DGBL contexts, showing the necessity of advancing the literature on how sex affects students’ motivation and the related affective dimensions, such as effort, pressure, and value, and also on how the feedback type and multiple-attempt inclusion could affect sex outcomes in terms of learning and motivation.

3. Materials and Methods

3.1. Study Objectives and Research Questions

Based on the above, the present study first aimed to explore the impacts of sex when using a digital math game with primary students. Second, it intended to assess whether such impacts differ across the two instructional feedback conditions (namely, MTF and STF). The study focused not only on the performance effects (learning gains) but also on the motivational effects based on the SDT-CET and the operationalization in terms of the interest, competence, autonomy, effort, no-pressure, and value constructs. Three high-order math-learning objectives were selected in this drill-and-practice training game. Therefore, based on the research gaps leveraged from the past literature, the following research questions were elaborated to guide the present investigation:
  • RQ1: What are the differences in the learning outcomes between male and female students using MatematicaST?
  • RQ2: What differences exist in motivation in terms of interest (RQ2a), competence (RQ2b), and autonomy (RQ2c) between male and female students using MatematicaST?
  • RQ3: What are the differences in the perceptions of effort (RQ3a), no pressure (RQ3b), and value (RQ3c) between male and female students using MatematicaST?
  • RQ4: What differences exist in the effects of sex between the two feedback conditions (MTF and STF)?

3.2. MatematicaST Game

“MatematicaST” is a web platform with mathematical games, developed by our research group starting in 2017 as part of a Research & Development project. It has been piloted in elementary school courses in Chile, ranging from the 3rd to the 6th grade (8 to 11 years old), across various schools to practice mathematical topics [68]. For the current intervention, three math mini-games were offered: number identification, number ordering, and money counting (see Figure 1), aligned with the national curriculum for the 3rd and 4th grades in Chile. The number identification game exercises place value in the symbolic representation of numbers with units, tens, and hundreds. The number-ordering game focuses on arranging four numbers of up to 3 digits from smallest to largest. Players can drag the numbers to rearrange them and confirm the obtained sequence. The money-counting game focuses on counting coins with values ranging from 1 to 500 CLP.
In each game, the question appears at the top and can be listened to by pressing the play button. Players have three lives (represented by red hearts) and earn ten points for each correct answer. Total points are displayed at the upper right. The platform also features a list of high scores on the right side.
The games have two versions, which vary the type of instructional feedback provided to students. In the STF version of the games, students have one opportunity to answer each exercise. After their response, they receive feedback indicating whether the answer is correct or incorrect, along with the correct solution in case of a wrong answer. In the MTF version, students have three attempts per exercise without losing a life. After each attempt, immediate feedback on their answers’ correctness (KR) is given. The correct answer is not revealed even after the last try. For more details on MatematicaST games and versions, please refer to [68].
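The answer-handling logic of the two versions can be sketched as follows (an illustrative Python sketch, not the platform’s actual code; the function name and the `get_response` callback are hypothetical):

```python
def answer_item(correct_answer, get_response, condition):
    """Illustrative sketch of one exercise under the two feedback conditions.

    condition: "STF" (single try, Knowledge of Correct Response) or
               "MTF" (up to three tries, Knowledge of Result only).
    """
    max_tries = 1 if condition == "STF" else 3
    for attempt in range(1, max_tries + 1):
        response = get_response(attempt)
        if response == correct_answer:
            return {"correct": True, "attempts": attempt, "points": 10}
        # KR feedback: only correctness is signalled between tries
    result = {"correct": False, "attempts": max_tries, "points": 0}
    if condition == "STF":
        # KCR: reveal the correct solution after the single failed try
        result["shown_answer"] = correct_answer
    return result
```

Under MTF, a correct answer on any of the three tries earns the ten points; under STF, a single miss ends the item and the correct solution is shown (KCR).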

3.3. Participants and Design

In this study, students from the third and fourth grades from three schools located in the Valparaíso region of Chile were engaged in the intervention, as the learning outcomes covered by the games corresponded to the mathematical topics covered in these grades, as part of the national curriculum. Initially, the study considered the participation of 95 students. However, after controlling for complete participation (student absences on activity days, missing written parental consent, or incomplete responses in post-test or post-IMI), the final number of subjects was 81.
The research followed a randomized pre-test–intervention–post-test design, with the sex (male/female) being the first factor of analysis and the number of attempts with the feedback type provided as a second factor (STF/MTF). The STF condition considered a single try with Knowledge of the Correct Response after answering. The MTF condition involved multiple tries (3 attempts) with Knowledge of the Result among the trials. Students were randomly assigned to these conditions, with 41 participants (23 male, 18 female) in the multiple-try group and 40 participants (19 male, 21 female) in the STF condition group.
The experiment involved private and subsidized schools, with high and medium socioeconomic statuses (SESs), respectively. However, students from public schools (low SES) were not represented and should be included in future studies. In Chile in 2022, 56% of students attended subsidized schools, 35% attended public schools, and the remaining 9% attended private ones [69].

3.4. Procedure

The study was conducted over two consecutive school days during math lessons. On day 1, students were taken to the computer lab or provided with notebooks, depending on the school facilities. They were initially given a motivational talk outlining the experiment’s objectives and clear instructions about the procedures and the tests to be administered (10 min). Subsequently, students took the pre-IMI-test and the learning pre-test in a paper-and-pencil format for 20 min. Following this, students accessed the web platform and engaged in the MatematicaST game activity on school computers for 1 h. On day 2, students responded to the post-IMI-test and the learning post-test (20 min).

3.5. Instruments

3.5.1. Math Tests

The pre- and post-tests in this study were designed based on three specific learning objectives outlined in the Chilean mathematics program and its corresponding guide textbook. Collaborative efforts were made with teachers from participating schools, who actively contributed to the construction of these tests. Their valuable feedback was utilized to improve the instructions, exercises, and testing conditions. Each pre-test and post-test encompassed 18 items, with six exercises for each of the three learning objectives: (a) number identification, (b) number ordering, and (c) money counting. The tests were structured to have a maximum score of 9.0 points, ensuring a comprehensive assessment aligned with the targeted learning outcomes.

3.5.2. Motivational Questionnaire

The measurement of motivation following the Self-Determination Theory and its Cognitive Evaluation Theory was operationalized using the Intrinsic Motivation Inventory (IMI) [70]. The IMI assesses various constructs related to intrinsic motivation during specific activities. It should also be noted that its reliability and validity in measuring these constructs have been proven through numerous studies focusing on intrinsic motivation [7,64,66,68]. This research incorporated six key motivation constructs related to students’ self-perceptions: interest, competence, pressure, effort, choice (autonomy), and value. The perception tests consisted of 16 sentences rated on a 5-point Likert scale [71], ranging from 1 (“strongly disagree”) to 5 (“strongly agree”).
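Because pressure is reverse-coded on the 5-point scale (a raw rating r becomes 6 − r), higher scores uniformly indicate a more favorable state. A minimal scoring sketch (the helper and the item values are illustrative, not the actual IMI scoring key):

```python
def score_construct(item_scores, reverse=False):
    """Average a construct's Likert items (rated 1-5), reverse-coding if needed."""
    if reverse:
        # On a 5-point scale, a raw rating r becomes 6 - r
        item_scores = [6 - s for s in item_scores]
    return sum(item_scores) / len(item_scores)

# A student reporting high pressure (raw 5s and 4s) gets a low "no pressure" score
no_pressure = score_construct([5, 4, 5], reverse=True)
```

For instance, raw pressure ratings of 5, 4, and 5 become 1, 2, and 1, giving a "no pressure" score of 4/3.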

4. Results

The data were processed with SPSS software, version 29, for the statistical analyses and graph generation. Partial eta squared (η2) was used as a measure of the effect size, with values of 0.01, 0.06, and 0.14 denoting small, medium, and large effect sizes, respectively, as proposed by Cohen [72]. First, Table 1 shows the descriptive statistics for the pre- and post-test scores per sex and condition. Then, to assess the overall learning and the possible effects of sex, a two-way repeated-measures analysis of variance (ANOVA) was performed, considering sex as a between-subject factor and the (pre/post) test type as a within-subject factor.
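For reference, partial eta squared relates to an F statistic through η2 = (F · df1)/(F · df1 + df2), so the reported effect sizes can be recovered from the F values and degrees of freedom. A small Python check (illustrative only):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic: SS_effect / (SS_effect + SS_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Test-type effect from the learning ANOVA: F(1, 79) = 19.380 gives eta^2 ~ 0.197
eta = partial_eta_squared(19.380, 1, 79)
```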

4.1. Learning Gains by Sex

The results show a significant difference between the pre- and post-test scores with F(1, 79) = 19.380, p < 0.001, η2 = 0.197, meaning that there were significant learning gains with the activity. The main effect of sex on learning was significant with F(1, 79) = 12.678, p < 0.001, η2 = 0.138, indicating differences by sex. When evaluating the effects of sex per test type (pre-/post-test), the results show that males obtained significantly higher pre-test scores than females (F(1, 79) = 11.950, p < 0.001, η2 = 0.131), but not post-test scores (F(1, 79) = 2.926, p = 0.091, η2 = 0.036).
To consider the pre-test differences, a two-way analysis of covariance (ANCOVA) was carried out on the pre-/post-test learning gains, with the sex and conditions as factors and the pre-test as the controlled covariate. It showed a significant single effect of sex on the learning gains with F(1, 76) = 4.309, p = 0.041, η2 = 0.054, meaning that females learned significantly more than males after adjusting for the pre-test scores (see Figure 2). The single effect of the conditions was not significant (F(1, 76) = 0.001, p = 0.972), meaning that the learning gains in both conditions (STF and MTF) were similar.
The two-way interaction of sex and condition was not significant (F(1, 76) = 0.073, p = 0.788, η² = 0.001). Pairwise comparisons of sex by condition showed that the females learned more than the males in both conditions, but the differences were not significant, with F(1, 76) = 1.694, p = 0.197, η² = 0.022 for MTF and F(1, 76) = 2.995, p = 0.088, η² = 0.038 for STF. Also, when analyzing the effect of condition within each sex, non-significant differences were obtained for the females (F(1, 76) = 0.027, p = 0.870) and the males (F(1, 76) = 0.048, p = 0.828), meaning that the learning gains were similar across the conditions for both sexes.
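As a minimal sketch of this ANCOVA design for readers working outside SPSS (the synthetic data and column names below are illustrative, not the study’s data), the model can be fit with statsmodels as an OLS regression of the gains on the covariate plus the two factors and their interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Synthetic stand-in data: 80 students, two factors, one covariate.
rng = np.random.default_rng(0)
n = 80
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], n),
    "cond": rng.choice(["STF", "MTF"], n),
    "pre": rng.normal(70, 10, n),
})
# Gains depend on the pre-test (higher pre-test -> less room to gain),
# with no built-in condition effect, mirroring the null finding.
df["gain"] = 30 - 0.3 * df["pre"] + rng.normal(0, 5, n)

# Two-way ANCOVA: gain ~ sex * condition, controlling for pre-test.
model = smf.ols("gain ~ pre + C(sex) * C(cond)", data=df).fit()
table = anova_lm(model, typ=2)  # Type II sums of squares
```

The resulting table reports an F-test per term (covariate, each factor, and the sex-by-condition interaction), analogous to the values reported above.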

4.2. Sex and Motivation

Table 2 shows the descriptive statistics of the IMI dimensions per sex and condition. The pre- and post-IMI-test values comprised measures for the dimensions of interest, perceived competence, effort, no pressure (the reverse of pressure), perceived choice, and value. To assess the internal consistency of the questionnaires, Cronbach’s alpha values of 0.72 for the pre-IMI-test, 0.69 for the post-IMI-test, and 0.81 for both perception tests combined were obtained, which fall in the acceptable-to-good range [73].
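Cronbach’s alpha can be computed directly from the item-score matrix, as the following minimal sketch shows (an illustration, not the study’s SPSS output):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    using sample variances (ddof=1)."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Two perfectly consistent items (every respondent gives both the same rating) yield alpha = 1.0, and alpha decreases as the items covary less.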
The differences between the post-IMI and pre-IMI values were analyzed. A three-way repeated-measures ANOVA was carried out to evaluate the effects of condition (MTF/STF), sex (male/female), and the six IMI dimensions on the pre-/post-IMI delta values.
First, the results show a significant single main effect of sex on the IMI delta values, with F(1, 77) = 4.053, p = 0.048, η² = 0.050 (males: mean = 0.253, 95% CI [0.113, 0.392]; females: mean = 0.050, 95% CI [−0.095, 0.194]), meaning that the males presented higher gains in the overall IMI scores when considering all six motivation-related constructs as a whole.
The interaction between the IMI dimensions and sex was non-significant (F(5, 73) = 1.648, p = 0.158, η² = 0.101), as was the interaction between the IMI dimensions and the conditions (F(5, 73) = 0.159, p = 0.977, η² = 0.011). However, the three-way interaction among the six IMI dimensions, sex, and the conditions was significant, with F(5, 73) = 2.969, p = 0.017, η² = 0.169.
The post hoc tests used the Bonferroni adjustment for multiple comparisons. When analyzing the simple main effect of sex on the IMI dimensions, we found significant differences in competence (F(1, 77) = 7.195, p = 0.009, η² = 0.085) and choice (F(1, 77) = 4.407, p = 0.039, η² = 0.054) favoring the males over the females, as Figure 3 shows. For the rest of the IMI dimensions, no significant differences were obtained.
Now, when analyzing the three-way interaction among sex, the IMI dimensions, and the conditions, we found that, under the MTF condition (see Figure 4), there were no significant sex differences for the IMI competence construct (F(1, 77) = 0.002, p = 0.963), while, for the choice dimension, the males presented a significantly higher gain than the females (F(1, 77) = 5.982, p = 0.017, η² = 0.072). Conversely, under the STF condition (see Figure 5), the males obtained significantly higher gains for competence (F(1, 77) = 13.956, p < 0.001, η² = 0.153), while no significant differences existed between the males and females for perceived choice (F(1, 77) = 0.280, p = 0.598).
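The Bonferroni adjustment applied in these post hoc tests simply scales each p-value by the number of comparisons, capping the result at 1.0. A minimal sketch:

```python
def bonferroni(p_values):
    """Bonferroni-adjust a family of p-values: multiply each by the
    number of comparisons and cap at 1.0. A comparison is significant
    at level alpha if its adjusted p-value is below alpha."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]
```

For example, with three comparisons, a raw p-value of 0.01 becomes 0.03 (still below 0.05), whereas 0.04 becomes 0.12 and is no longer significant.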

5. Discussion

5.1. RQ1: What Are the Differences in the Learning Outcomes between Male and Female Students Using MatematicaST?

When analyzing RQ1, the results by sex differ in terms of the learning gains: the females presented higher learning gains than the males when adjusting for the pre-test scores. At first, our findings seemed to support past research suggesting that females show more engagement and learning gains than males under DGBL [4,8,20], and to contradict research showing no significant sex differences in learning [10,11]. However, it is relevant to consider that the males had higher pre-test scores than the females and that the post-test scores were similar between the sexes. Hence, the females’ higher learning gains were at least partially supported by their lower prior knowledge level, as diverse studies support the idea that lower levels of prior knowledge leave more room for learning improvements [22,34,38,74].
Now, from the perspective of the instructional feedback conditions in the present experiment, the multiple-try and single-try game versions produced similar learning gains for each sex. Therefore, the reported sex effects on learning seem not to have been affected by these instructional feedback conditions. Furthermore, the present results show no learning differences between the two instructional conditions involved in this experiment, namely, STF and MTF. Such results contradict past research supporting the idea that MTF outperforms STF for higher-order or complex learning outcomes that go beyond verbatim tasks [33,44,57,68].
A factor that could have affected the results is the overall high pre-test scores (and, hence, high prior knowledge levels) compared with the literature. While past studies usually report pre-test levels in the range of 50–80% [42,45,46,47], in our case, the average pre-test score was 84%, and for the males under the MTF game version, it was 95%, possibly leaving little room for learning improvements in this group.

5.2. RQ2: What Differences Exist in Motivation in Terms of Interest (RQ2a), Competence (RQ2b), and Autonomy (RQ2c) between Male and Female Students Using MatematicaST?

When looking at sex perceptions in terms of interest (RQ2a), no differences were found between the sexes. The results support Manero et al. [24], who found no interest differences by sex, and they partially contradict Chung and Chang’s study [11], which states that “in a moderate genre digital game, female learners’ motivation is significantly higher than that of male learners”. In our case, the game appears to have had a moderate sex influence on learning but not on motivation. This could be due to the basic mechanics of the game, which do not require much coordination skill (unlike a shooting or jumping-platform game). This may have benefited the females, who, as evidenced in [3,11], prefer exploration mechanics over competitive ones. Furthermore, the gamification elements used in our MatematicaST game were points, rankings, and levels, which foster progression and competition dynamics. These elements and their associated mechanics could have negatively affected the females due to the absence of other game elements, such as the challenges, storylines, and fun interactions found in Chung and Chang’s game [11]. However, despite the general belief that males find games more relaxing and engaging than females do [16], our research revealed that the males did not perceive MatematicaST as more interesting. This discrepancy may be attributed to boys’ preferences for action games, as highlighted by Khan et al. [8], whereas MatematicaST employs a point-and-click mechanic.
Regarding the competence (RQ2b) and autonomy (choice) (RQ2c) dimensions, both sexes reported positive increments. However, the males showed significantly higher increments compared to the females in both constructs. Despite the females’ higher learning gains, the males presented higher perceived competence and autonomy, both with medium effect sizes. Such results do not follow the general idea proposed by SDT-CET that competence and autonomy are strong predictors of motivation and learning [59,60,61,62]. These contradicting results may be attributed to the Dunning–Kruger effect [75], where students tend to significantly overestimate their performance, fostering a belief in their adequate knowledge of a given topic.
Now, from the perspective of the females, despite having higher learning gains, they perceived themselves as less competent compared to the males. Such results are coherent with Goetz et al. [31] regarding “students’ beliefs about their competence in mathematics, with female students reporting lower perceived competence than male despite having the same average grades.” From a social learning theory perspective [76], such sex differences in math competence perceptions reveal that there is still a strong cultural bias that identifies math as a predominantly male domain due to stereotypes. Past research has recognized that gender math stereotypes negatively affect women’s mathematics achievements, primarily through their self-concept [77], leading them to underestimate their mathematical abilities and experience more mathematics anxiety than boys [78]. Furthermore, although boys continue to outperform girls in mathematics in many countries, these gaps have narrowed or even shifted in favor of female students in the last two decades [79].
As competence and autonomy are strong predictors of intrinsic motivation, we can infer that the males felt significantly more motivated by the activity. This is corroborated by the overall higher IMI scores of the males compared with the females when considering all six motivational constructs together, and with almost a medium effect size. Perhaps this motivation was due to the gaming itself and that males tend to be more competitive than females at that age, especially in computer gameplay scenarios. Also, the points mechanism and the leaderboard could have positively affected the males’ motivation. Our findings support Bonanno and Kommers [16], who reported that boys consider playing games a memorable and unique experience. Also, males’ high pre-test and post-test scores could make them feel confident in their knowledge of the math topics involved, increasing their perception of their competence and autonomy while reducing their interest in a subject that they feel they have already mastered.

5.3. RQ3: What Are the Differences in Perceptions of Effort (RQ3a), No Pressure (RQ3b), and Value (RQ3c) between Male and Female Students Using MatematicaST?

Concerning the effort construct (RQ3a), no significant differences were observed between the males and females. Also, both sexes showed decreasing pre-/post-test delta values, indicating that the students ended up needing to invest less effort than they were willing to at the beginning of the activity. Although the pre-/post-test difference is negative, the post-test level remains high (80%), suggesting the absence of factors perceived as controlling by the students [59], which could have negatively affected their motivation and learning. An interesting situation is that the females showed a decrease in effort despite perceiving themselves as less competent than the males while outperforming them in learning gains. An interpretation of this phenomenon might be rooted in cultural factors: it is plausible that the females perceived their mathematical abilities to be lower than they actually were, causing them to invest more effort than the mathematical tasks required. This misperception has been documented in previous studies on sex stereotypes in mathematics [77,79].
Regarding the no-pressure construct (RQ3b), no significant differences existed between the males and females. Considering pressure and anxiety as related constructs, the results support findings from past gender math anxiety studies [31] indicating that no sex differences exist for state anxiety (that is, anxiety during specific activities, such as, in this case, participating in a math training activity with a digital game). Both sexes showed decreases in effort and increases in the no-pressure dimension, corroborating that both intervention conditions were suitable for promoting learning [59].
In terms of the value dimension (RQ3c), although there were no significant differences between the sexes, the males presented higher levels than the females. This is aligned with the idea that females do not see games as a unique experience but rather as another alternative for learning, and they find them less useful compared to males [16].

5.4. RQ4: What Differences Exist in the Effects of Sex between the Two Feedback Conditions (MTF and STF)?

As indicated earlier, there were no learning differences between the two instructional conditions, MTF and STF, both overall and when evaluated for each sex. However, upon examining the potential effects on the motivational variables resulting from the interaction of sex and the instructional conditions, the results revealed significant differences. Specifically, it was found that in the MTF condition, there were no significant sex differences regarding the sense of competence, but there were differences in terms of autonomy. Conversely, in the STF condition, sex differences were present for competence but not for the autonomy construct.
These results are relevant, as they suggest a possible sex difference between the two instructional implementations (MTF and STF) used in our experiment. They directly indicate that when using multiple attempts, the females felt as competent as the males in the exercise activity, but, at the same time, they felt less free or autonomous than the males. One possible explanation could stem from the fact that females perceive themselves as less competent in math despite achieving similar learning outcomes [31]. Therefore, the mechanic of having multiple attempts (MTF) to answer the same exercise makes the student face an item they know they have answered incorrectly, generating a degree of frustration [49,54,55] and reinforcing the idea of limited effectiveness for an item that needs repetition. If we add to this the fact that the pre-test level (and therefore, prior knowledge) was lower in the females, we can assume that they probably used the retry mechanism more often than the males when not answering correctly on the first attempt. This aligns with the idea that low achievers tend to feel more frustrated when using MTF [55,56,57].
This study contributes valuable insights into sex differences in DGBL, emphasizing the need for gender-responsive curriculum designs and evidence-based policies. Curriculum developers should strike a balance between autonomy and competence. For girls, this means fostering confidence while maintaining high standards of knowledge acquisition.
Our findings suggest that girls may feel less confident or empowered when given multiple attempts to answer questions. Thus, curriculum developers could consider strategies to boost girls’ autonomy, such as providing clear guidelines, encouraging self-assessment, and fostering a growth mindset. The study findings also suggest that girls might struggle more when they have only one chance to answer questions. Therefore, curriculum designers could explore ways to enhance girls’ competence, such as by providing adaptive feedback based on sex-specific needs (for example, implementing additional elaborated feedback (e.g., hints, worked-out examples) for girls during the single attempt).
Policymakers should recognize sex differences in DGBL and promote equity. Policies can encourage schools to adopt inclusive practices that cater to diverse learning needs. For instance, allocating resources for sex-sensitive game-based learning initiatives can foster equal opportunities. Also, policies should emphasize teacher training in sex-responsive pedagogy. Educators need awareness and skills to adapt their teaching methods to accommodate sex-specific preferences.
Regarding the study limitations, as age is an important personal trait in educational contexts [34,38], it could have affected the multiple attempts’ effectiveness and the sex differences found in this work. Therefore, future studies should include subjects from other levels, from preschoolers [46] and secondary-school students to tertiary-education students. Also, the elderly outside formal learning should be considered, with games aiding their cognitive and emotional needs [80]. Other limitations arise from the nature of the learning objectives used. Although they were of high complexity, future research should include LOs from other mathematical domains. Differences in students’ socioeconomic statuses (SESs) can be considered another possible limitation, as the study involved private and subsidized schools but not public ones.
Likewise, these results open up the possibility of questioning whether there might be other instructional elements and combinations thereof that also generate sex differences, either in terms of learning or motivation. From this, it becomes necessary to move forward with further analyses. These studies should initially focus on replicating these results and then proceed to evaluate the various factors and interactions that generate different effects in both sexes when varying the number of attempts.

6. Conclusions

The present study provided empirical results on the different effects obtained by sex when using MatematicaST, a digital math game for primary students. The results indicate that the females outperformed the males in learning gains, despite presenting lower motivation levels, especially regarding the self-perceptions of competence and autonomy. In this way, the present research contributes to the literature not only by providing a successful case of game-based learning favoring females in terms of learning, but also by giving insights into the internal emotional processes that could affect such differences in learning.
This study is perhaps the first research providing empirical evidence on the effects of multiple attempts by sex. An interaction effect between sex and the feedback condition was found. The females under multiple-try feedback presented lower autonomy levels than the males, while under STF, there were no differences. In contrast, the females under MTF presented competence levels similar to those of the males, but under STF, the females presented lower levels. Such findings suggest that not all feedback types are sex-neutral, especially those involving multiple attempts. It will be relevant to check whether these findings extend to other learning contexts, and to delve deeper into the factors that generate such sex differences.
The practical implications concern the use of multiple attempts not only in automated assessments and testing but also in game-based formative activities. The use of multiple attempts in existing LMS activities (e.g., in Moodle) can support learning gains in students, especially females. Also, the design of future educational computer systems should include multiple attempts while considering the possible sex differences revealed in this study. By acknowledging these findings, educators and policymakers can create more effective and equitable learning environments for all students.
Future research directions should also include multiple-try implementations with various attempt numbers, instructional feedback types, and game mechanics. Also, possible sex differences in multiple-try use and its outcomes under all these variants need further study. Finally, further studies that incorporate other demographic variables that may influence the effectiveness of DGBL, such as age, income level, race, location, parent’s employment, and level of education, are needed.

Author Contributions

Conceptualization, C.C. and S.R.; methodology, C.C. and S.R.; software, C.C.; validation, C.C., S.R., D.C.-P. and R.M.V.; formal analysis, C.C.; investigation, C.C.; resources, C.C.; data curation, C.C.; writing—original draft preparation, C.C.; writing—review and editing, C.C., D.C.-P. and R.M.V.; visualization, S.R.; supervision, S.R.; project administration, C.C.; funding acquisition, C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Pontifical Catholic University of Valparaíso, grant number DI 039.360/2023.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Bioethics and Biosecurity Committee of Pontificia Universidad Católica de Valparaíso (protocol code BIOEPUCV-H 659-2023 on 29 June 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study. Parents and legal tutors also signed a written informed consent managed and coordinated with the schools’ authorities involved in this study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available because they may contain identifying information about the study subjects.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Higgins, K.; Huscroft-Dangelo, J.; Crawford, L. Effects of Technology in Mathematics on Achievement, Motivation, and Attitude: A Meta-Analysis. J. Educ. Comput. Res. 2019, 57, 283–319. [Google Scholar] [CrossRef]
  2. Ahuja, M.K.; Thatcher, J.B. Moving beyond intentions and toward the theory of trying: Effects of work environment and gender on post-adoption information technology use. MIS Q. 2005, 29, 427–459. [Google Scholar] [CrossRef]
  3. Denden, M.; Tlili, M.; Essalmi, F.; Jemni, M.; Chen, N.-S.; Burgos, D. Effects of gender and personality differences on students’ perception of game design elements in educational gamification. Int. J. Hum. Comput. Stud. 2021, 154, 102674. [Google Scholar] [CrossRef]
  4. Lukosch, H.K.; Kurapati, S.; Groen, D.; Verbraeck, A. Gender and Cultural Differences in Game-Based Learning Experiences. Electron. J. E-Learn. 2017, 15, 310–319. [Google Scholar]
  5. Naglieri, J.A.; Rojahn, J. Gender differences in planning, attention, simultaneous, and successive (PASS) cognitive processes and achievement. J. Educ. Psychol. 2001, 93, 430–437. [Google Scholar] [CrossRef]
  6. Boghi, A.; Rasetti, R.; Avidano, F.; Manzone, C.; Orsi, L.; D’agata, F.; Caroppo, P.; Bergui, M.; Rocca, P.; Pulvirenti, L.; et al. The effect of gender on planning: An fMRI study using the Tower of London task. Neuroimage 2006, 33, 999–1010. [Google Scholar] [CrossRef] [PubMed]
  7. Chen, C.; Law, V.; Huang, K. The roles of engagement and competition on learner’s performance and motivation in game-based science learning. Educ. Technol. Res. Dev. 2019, 67, 1003–1024. [Google Scholar] [CrossRef]
  8. Khan, A.; Ahmad, F.; Malik, M.M. Use of digital game based learning and gamification in secondary school science: The effect on student engagement, learning and gender difference. Educ. Inf. Technol. 2017, 22, 2767–2804. [Google Scholar] [CrossRef]
  9. Bahar, M.; Asil, M. Attitude towards e-assessment: Influence of gender, computer usage and level of education. Open Learn. J. Open Distance E-Learn. 2018, 33, 221–237. [Google Scholar] [CrossRef]
  10. Dorji, U.; Panjaburee, P.; Srisawasdi, N. Gender differences in students’ learning achievements and awareness through residence energy saving game-based inquiry playing. J. Comput. Educ. 2015, 2, 227–243. [Google Scholar] [CrossRef]
  11. Chung, L.-Y.; Chang, R.-C. The Effect of Gender on Motivation and Student Achievement in Digital Game-based Learning: A Case Study of a Contented-Based Classroom. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 2309–2327. [Google Scholar] [CrossRef]
  12. Erturan, S.; Jansen, B. An investigation of boys’ and girls’ emotional experience of math, their math performance, and the relation between these variables. Eur. J. Psychol. Educ. 2015, 30, 421–435. [Google Scholar] [CrossRef]
  13. Wang, L. Mediation Relationships Among Gender, Spatial Ability, Math Anxiety, and Math Achievement. Educ. Psychol. Rev. 2020, 32, 1–15. [Google Scholar] [CrossRef]
  14. Twenge, J.M.; Martin, G.N. Gender differences in associations between digital media use and psychological well-being: Evidence from three large datasets. J. Adolesc. 2020, 79, 91–102. [Google Scholar] [CrossRef] [PubMed]
  15. Wang, Y.S.; Wu, M.C.; Wang, H.Y. Investigating the determinants and age and gender differences in the acceptance of mobile learning. Br. J. Educ. Technol. 2009, 40, 92–118. [Google Scholar] [CrossRef]
  16. Bonanno, P.; Kommers, P.A.M. Exploring the influence of gender and gaming competence on attitudes towards using instructional games. Br. J. Educ. Technol. 2008, 39, 97–109. [Google Scholar] [CrossRef]
  17. Carvalho, R.G.G. Gender differences in academic achievement: The mediating role of personality. Personal. Individ. Differ. 2016, 94, 54–58. [Google Scholar] [CrossRef]
  18. Matthews, J.S.; Ponitz, C.C.; Morrison, F.J. Early gender differences in self-regulation and academic achievement. J. Educ. Psychol. 2009, 101, 689–704. [Google Scholar] [CrossRef]
  19. Cipriani, G.P. Gender difference in willingness to guess after a failure. J. Econ. Educ. 2018, 49, 299–306. [Google Scholar] [CrossRef]
  20. Chang, M.; Evans, M.; Kim, S.; Deater-Deckard, K.; Norton, A. Educational Video Games and Students’ Game Engagement. In Proceedings of the 2014 International Conference on Information Science & Applications (ICISA), Seoul, Republic of Korea, 6–9 May 2014; pp. 1–3. [Google Scholar] [CrossRef]
  21. Klisch, Y.; Miller, L.M.; Wang, S.; Epstein, J. The impact of a science education game on Students’ learning and perception of inhalants as body pollutants. J. Sci. Educ. Technol. 2012, 21, 295–303. [Google Scholar] [CrossRef]
  22. Yeo, J.H.; Cho, I.H.; Hwang, G.H.; Yang, H.H. Impact of gender and prior knowledge on learning performance and motivation in a digital game-based learning biology course. Educ. Technol. Res. Dev. 2022, 70, 989–1008. [Google Scholar] [CrossRef]
  23. Lester, J.C.; Spires, H.A.; Nietfeld, J.L.; Minogue, J.; Mott, B.W.; Lobene, E.V. Designing gamebased learning environments for elementary science education: A narrative-centered learning perspective. Inf. Sci. 2014, 264, 4–18. [Google Scholar] [CrossRef]
  24. Manero, B.; Torrente, J.; Fernández-Vara, C.; Fernández-Manjón, B. Investigating the Impact of Gaming Habits, Gender, and Age on the Effectiveness of an Educational Video Game: An Exploratory Study. IEEE Trans. Learn. Technol. 2017, 10, 236–246. [Google Scholar] [CrossRef]
  25. Jones, R. Student Engagement Teacher Handbook; International Center for Leadership in Education: Rexford, MT, USA, 2009. [Google Scholar]
  26. Keller, J.M. Using the ARCS motivational process in computer-based instruction and distance education. New Dir. Teach. Learn. 1999, 78, 39–47. [Google Scholar] [CrossRef]
  27. Pérez-Fuentes, M.D.C.; Núñez, A.; Molero, M.D.M.; Gázquez, J.J.; Rosário, P.; Núñez, J.C. The Role of Anxiety in the Relationship between Self-efficacy and Math Achievement. Psicol. Educ. 2020, 26, 137–143. [Google Scholar] [CrossRef]
  28. Namkung, J.M.; Peng, P.; Lin, X. The relation between mathematics anxiety and mathematics performance among school-aged students: A meta-analysis. Rev. Educ. Res. 2019, 89, 459–496. [Google Scholar] [CrossRef]
  29. Vanbecelaere, S.; Van den Berghe, K.; Cornillie, F.; Sasanguie, D.; Reynvoet, B.; Depaepe, F. The effects of two digital educational games on cognitive and non-cognitive math and reading outcomes. Comput. Educ. 2020, 143, 103680. [Google Scholar] [CrossRef]
  30. Abín, A.; Núñez, J.C.; Rodríguez, C.; Cueli, M.; García, T.; Rosário, P. Predicting Mathematics Achievement in Secondary Education: The Role of Cognitive, Motivational, and Emotional Variables. Front. Psychol. 2020, 11, 876. [Google Scholar] [CrossRef]
  31. Goetz, T.; Bieg, M.; Lüdtke, O.; Pekrun, R.; Hall, N.C. Do Girls Really Experience More Anxiety in Mathematics? Psychol. Sci. 2013, 24, 2079–2087. [Google Scholar] [CrossRef]
  32. Barana, A.; Marchisio, M.; Sacchet, M. Interactive Feedback for Learning Mathematics in a Digital Learning Environment. Educ. Sci. 2021, 11, 279. [Google Scholar] [CrossRef]
  33. Van der Kleij, F.M.; Feskens, R.C.W.; Eggen, T.J.H.M. Effects of Feedback in a Computer-Based Learning Environment on Students’ Learning Outcomes: A Meta-Analysis. Rev. Educ. Res. 2015, 85, 475–511. [Google Scholar] [CrossRef]
  34. Shute, V.J. Focus on Formative Feedback. Rev. Educ. Res. 2008, 78, 153–189. [Google Scholar] [CrossRef]
  35. Hattie, J.; Timperley, H. The power of feedback. Rev. Educ. Res. 2007, 77, 81–112. [Google Scholar] [CrossRef]
  36. Candel, C.; Vidal-Abarca, E.; Cerdán, R.; Lippmann, M.; Narciss, S. Effects of timing of formative feedback in computer-assisted learning environments. J. Comput. Assist. Learn. 2020, 36, 718–728. [Google Scholar] [CrossRef]
  37. Andre, T.; Thieman, A. Level of adjunct question, type of feedback, and learning concepts by reading. Contemp. Educ. Psychol. 1988, 13, 296–307. [Google Scholar] [CrossRef]
  38. Narciss, S. Feedback strategies for interactive learning tasks. In Handbook of Research on Educational Communications and Technology, 3rd ed.; Spector, J.M., Merril, M.D., van Merriënboer, J.J.G., Driscoll, M.P., Eds.; Lawrence Erlbaum: Mahwah, NJ, USA, 2008; pp. 125–144. [Google Scholar]
  39. Wang, Z.; Gong, S.-Y.; Xu, S.; Hu, X.-E. Elaborated feedback and learning: Examining cognitive and motivational influences. Comput. Educ. 2019, 136, 130–140. [Google Scholar] [CrossRef]
  40. Tsai, F.-H.; Tsai, C.-C.; Lin, K.-Y. The evaluation of different gaming modes and feedback types on game-based formative assessment in an online learning environment. Comput. Educ. 2015, 81, 259–269. [Google Scholar] [CrossRef]
  41. Montazeri, M.; Salimi, E.-A. Assessing motivation to speak (MTS) and willingness to communicate through metalinguistic corrective feedback. Learn. Motiv. 2019, 68, 101594. [Google Scholar] [CrossRef]
  42. Attali, Y. Effects of multiple-try feedback and question type during mathematics problem solving on performance in similar problems. Comput. Educ. 2015, 86, 260–267. [Google Scholar] [CrossRef]
  43. Bangert-Drowns, R.L.; Kulik, C.-L.C.; Kulik, J.A.; Morgan, M. The Instructional Effect of Feedback in Test-Like Events. Rev. Educ. Res. 1991, 61, 213–238. [Google Scholar] [CrossRef]
  44. Clariana, R.B.; Koul, R. Multiple-Try Feedback and Higher-Order Learning Outcomes. Int. J. Instr. Media 2005, 32, 239. [Google Scholar]
  45. Stevenson, C.E.; Hickendorff, M. Learning to solve figural matrix analogies: The paths children take. Learn. Individ. Differ. 2018, 66, 16–28. [Google Scholar] [CrossRef]
  46. Understanding Human-Media Interaction: The Effect of the Computer’s Answer-Until-Correct (AUC) vs. Knowledge-of-Result (KR) Task Feedback on Young Users’ Behavioral Interaction Development through a Digital Playground. Available online: https://childhood-developmental-disorders.imedpub.com/articles/towards-understanding-humanmedia-interaction-the-effect-of-the-computers-answeruntil-correct-auc-vs-knowledgeofresult-kr-task-feed.php?aid=7865 (accessed on 23 April 2024).
  47. Fyfe, E.R. Providing feedback on computer-based algebra homework in middle-school classrooms. Comput. Hum. Behav. 2016, 63, 568–574. [Google Scholar] [CrossRef]
  48. Clariana, R.B.; Wagner, D.; Murphy, L.C.R. Applying a Connectionist Description of Feedback Timing. Educ. Technol. Res. Dev. 2000, 48, 5–21. [Google Scholar] [CrossRef]
  49. Clariana, R.B.; Ross, S.M.; Morrison, G.R. The effects of different feedback strategies using computer-administered multiple-choice questions as instruction. Educ. Technol. Res. Dev. 1991, 39, 5–17. [Google Scholar] [CrossRef]
  50. Guthrie, E.R. The Psychology of Learning; Harper & Brothers: New York, NY, USA, 1935. [Google Scholar]
  51. Dempsey, J.V. Interactive Instruction and Feedback; Educational Technology: Jersey City, NJ, USA, 1993. [Google Scholar]
  52. Noonan, J.V. Feedback procedures in computer-assisted instruction: Knowledge-of-results, knowledge-of-correct-response, process explanations, and second attempts after explanations. Diss. Abstr. Int. 1984, 45, 131. [Google Scholar]
  53. Schimmel, B.J. Feedback use by low-ability students in computer-based education. Diss. Abstr. Int. 1986, 47, 4068. [Google Scholar]
  54. Dick, W.; Latta, R. Comparative effects of ability and presentation mode in computer-assisted instruction and programed instruction. ECTJ 1970, 18, 33–45. [Google Scholar] [CrossRef]
  55. Clariana, R.B.; Koul, R. The effects of different forms of feedback on fuzzy and verbatim memory of science principles. Br. J. Educ. Psychol. 2006, 76, 259–270. [Google Scholar] [CrossRef]
  56. Morrison, G.M.; Ross, S.M.; Gopalakrishnan, M.; Casey, J. The effects of feedback and incentives on achievement in computer-based instruction. Contemp. Educ. Psychol. 1995, 20, 32–50. [Google Scholar] [CrossRef]
  57. Clariana, R.B.; Lee, D. The effects of recognition and recall study tasks with feedback in a computer-based vocabulary lesson. Educ. Technol. Res. Dev. 2001, 49, 23–36. [Google Scholar] [CrossRef]
  58. Salomon, G.; Globerson, T. Skill may not be enough: The role of mindfulness in learning and transfer. Int. J. Educ. Res. 1987, 11, 623–637. [Google Scholar] [CrossRef]
  59. Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemp. Educ. Psychol. 2020, 61, 101860. [Google Scholar] [CrossRef]
  60. Ryan, R.; Deci, E. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef] [PubMed]
  61. Deci, E.L.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Plenum Press: New York, NY, USA, 1985. [Google Scholar]
  62. Deci, E.; Ryan, R. The “What” and “Why” of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychol. Inq. 2000, 11, 227–268. [Google Scholar] [CrossRef]
  63. Liao, C.W.; Chen, C.H.; Shih, S.J. The interactivity of video and collaboration for learning achievement, intrinsic motivation, cognitive load, and behavior patterns in a digital game-based learning environment. Comput. Educ. 2019, 133, 43–55. [Google Scholar] [CrossRef]
  64. Chen, C.-H.; Law, V. Scaffolding individual and collaborative game-based learning in learning performance and intrinsic motivation. Comput. Hum. Behav. 2016, 55B, 1201–1212. [Google Scholar] [CrossRef]
  65. Huang, Y.; Backman, S.J.; Backman, K.F.; McGuire, F.A.; Moore, D. An investigation of motivation and experience in virtual learning environments: A self-determination theory. Educ. Inf. Technol. 2019, 24, 591–611. [Google Scholar] [CrossRef]
  66. Proulx, J.-N.; Romero, M.; Arnab, S. Learning Mechanics and Game Mechanics Under the Perspective of Self-Determination Theory to Foster Motivation in Digital Game Based Learning. Simul. Gaming 2017, 48, 81–97. [Google Scholar] [CrossRef]
  67. Liu, Y.C.; Wang, W.-T.; Lee, T.-L. An Integrated View of Information Feedback, Game Quality, and Autonomous Motivation for Evaluating Game-Based Learning Effectiveness. J. Educ. Comput. Res. 2020, 59, 3–40. [Google Scholar] [CrossRef]
  68. Cubillos, C.; Roncagliolo, S.; Cabrera-Paniagua, D. Learning and Motivation When Using Multiple-Try in a Digital Game for Primary Students in Chile. Educ. Sci. 2023, 13, 1119. [Google Scholar] [CrossRef]
  69. Sostiene. Qué Son y Cómo Funcionan los Colegios Particulares Subvencionados en Chile [What Subsidized Private Schools in Chile Are and How They Work]. Available online: https://www.sostiene.cl/colegios-particulares-subvencionados/ (accessed on 23 April 2024).
  70. Monteiro, V.; Mata, L.; Peixoto, F. Intrinsic Motivation Inventory: Psychometric Properties in the Context of First Language and Mathematics Learning. Psicol. Reflex. Crit. 2015, 28, 434–443. [Google Scholar] [CrossRef]
  71. Joshi, A.; Kale, S.; Chandel, S.; Pal, D.K. Likert Scale: Explored and Explained. Br. J. Appl. Sci. Technol. 2015, 7, 396–403. [Google Scholar] [CrossRef]
  72. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Erlbaum: Hillsdale, NJ, USA, 1988. [Google Scholar]
  73. Gliem, J.A.; Gliem, R.R. Calculating, Interpreting, and Reporting Cronbach’s Alpha Reliability Coefficient for Likert-Type Scales. In Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education; Ohio State University: Columbus, OH, USA, 2003; Available online: https://hdl.handle.net/1805/344 (accessed on 23 April 2024).
  74. Fyfe, E.R.; Rittle-Johnson, B.; DeCaro, M.S. The effects of feedback during exploratory mathematics problem solving: Prior knowledge matters. J. Educ. Psychol. 2012, 104, 1094–1108. [Google Scholar] [CrossRef]
  75. Kruger, J.; Dunning, D. Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J. Personal. Soc. Psychol. 1999, 77, 1121–1134. [Google Scholar] [CrossRef]
  76. Bandura, A.; Walters, R.H. Social Learning Theory; Prentice Hall: Englewood Cliffs, NJ, USA, 1977; Volume 1. [Google Scholar]
  77. Xie, F.; Yang, Y.; Xiao, C. Gender-math stereotypes and mathematical performance: The role of attitude toward mathematics and math self-concept. Eur. J. Psychol. Educ. 2023, 38, 695–708. [Google Scholar] [CrossRef]
  78. Becker, J.R.; Hall, J. Research on gender and mathematics: Exploring new and future directions. ZDM Math. Educ. 2024, 56, 141–151. [Google Scholar] [CrossRef]
  79. Perez Mejias, P.; McAllister, D.E.; Diaz, K.G.; Ravest, J. A longitudinal study of the gender gap in mathematics achievement: Evidence from Chile. Educ. Stud. Math. 2021, 107, 583–605. [Google Scholar] [CrossRef]
  80. Rienzo, A.; Cubillos, C. Playability and Player Experience in Digital Games for Elderly: A Systematic Literature Review. Sensors 2020, 20, 3958. [Google Scholar] [CrossRef]
Figure 1. (a) Number ordering mini-game; (b) money counting mini-game.
Figure 2. Pre-/post-test learning differences per sex and feedback condition after controlling for the pre-test.
Figure 3. Pre-/post-test delta values of motivational constructs per sex.
Figure 4. Pre-/post-test delta values of motivational constructs per sex for the MTF condition.
Figure 5. Pre-/post-test delta values of motivational constructs per sex for the STF condition.
Table 1. Pre-test, post-test, and learning-gain descriptive statistics per sex and condition.
| Sex | Condition | Pre-Test M (SD) | Post-Test M (SD) | Gain M (SD) | N |
|---|---|---|---|---|---|
| Female | MTF | 6.89 (2.16) | 7.67 (1.99) | 0.78 *** (0.94) | 18 |
| Female | STF | 6.99 (1.97) | 7.79 (1.55) | 0.80 *** (1.00) | 21 |
| Female | Total | 6.94 (2.03) | 7.73 (1.74) | 0.79 *** (0.96) | 39 |
| Male | MTF | 8.54 (0.80) | 8.58 (0.83) | 0.03 (0.74) | 23 |
| Male | STF | 7.83 (1.53) | 7.97 (1.57) | 0.14 (0.94) | 19 |
| Male | Total | 8.22 (1.22) | 8.30 (1.25) | 0.08 (0.82) | 42 |
| Total | MTF | 7.82 (1.74) | 8.18 (1.51) | 0.36 ** (0.90) | 41 |
| Total | STF | 7.39 (1.80) | 7.88 (1.54) | 0.49 ** (1.01) | 40 |
| Total | Total | 7.60 (1.77) | 8.03 (1.52) | 0.42 *** (0.95) | 81 |

MTF: multiple-try feedback. STF: single-try feedback. ** p < 0.01, *** p < 0.001.
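The gain columns in Table 1 are per-student post-test minus pre-test differences, with significance assessed by a paired comparison against zero. A minimal sketch of that computation, using only the Python standard library (the scores below are hypothetical illustrations, not the study's raw data):

```python
import math
from statistics import mean, stdev

def gain_stats(pre, post):
    """Per-student learning gains (post - pre), returning the mean gain,
    its sample SD, a paired-samples t statistic t = m / (s / sqrt(n)),
    and n. Assumes pre[i] and post[i] belong to the same student."""
    gains = [b - a for a, b in zip(pre, post)]
    n = len(gains)
    m, s = mean(gains), stdev(gains)
    t = m / (s / math.sqrt(n))
    return m, s, t, n

# Hypothetical pre-/post-test scores for five students.
pre  = [6.0, 7.5, 8.0, 6.5, 7.0]
post = [7.0, 8.0, 8.5, 7.5, 7.0]
m, s, t, n = gain_stats(pre, post)
```

The t statistic would then be compared against a t distribution with n − 1 degrees of freedom to obtain the p-values marked with asterisks in the table.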
Table 2. IMI perception constructs’ descriptive statistics per condition.
| IMI Construct | Sex | Condition | Pre-Test M (SD) | Post-Test M (SD) | Gain M (SD) | N |
|---|---|---|---|---|---|---|
| Interest | Female | MTF | 4.64 (0.48) | 4.69 (0.73) | 0.05 (0.75) | 18 |
| Interest | Female | STF | 4.70 (0.56) | 4.85 (0.38) | 0.15 (0.69) | 21 |
| Interest | Female | Total | 4.67 (0.52) | 4.77 (0.56) | 0.10 (0.71) | 39 |
| Interest | Male | MTF | 4.56 (0.60) | 4.64 (0.67) | 0.08 (0.63) | 23 |
| Interest | Male | STF | 4.58 (0.59) | 4.77 (0.44) | 0.18 (0.59) | 19 |
| Interest | Male | Total | 4.57 (0.58) | 4.70 (0.57) | 0.13 (0.60) | 42 |
| Interest | Total | MTF | 4.59 (0.54) | 4.66 (0.69) | 0.07 (0.68) | 41 |
| Interest | Total | STF | 4.65 (0.57) | 4.81 (0.40) | 0.17 (0.63) | 40 |
| Interest | Total | Total | 4.62 (0.55) | 4.74 (0.57) | 0.12 (0.65) | 81 |
| Competence | Female | MTF | 3.96 (0.83) | 4.38 (0.74) | 0.41 * (0.82) | 18 |
| Competence | Female | STF | 4.29 (0.75) | 4.29 (0.92) | −0.02 (0.72) | 21 |
| Competence | Female | Total | 4.14 (0.80) | 4.33 (0.83) | 0.18 (0.79) | 39 |
| Competence | Male | MTF | 4.13 (0.78) | 4.56 (0.69) | 0.42 * (0.82) | 23 |
| Competence | Male | STF | 3.83 (0.86) | 4.75 (0.46) | 0.94 *** (0.90) | 19 |
| Competence | Male | Total | 4.00 (0.82) | 4.65 (0.60) | 0.65 *** (0.88) | 42 |
| Competence | Total | MTF | 4.06 (0.80) | 4.48 (0.71) | 0.41 ** (0.81) | 41 |
| Competence | Total | STF | 4.07 (0.83) | 4.51 (0.77) | 0.44 *** (0.93) | 40 |
| Competence | Total | Total | 4.06 (0.81) | 4.49 (0.73) | 0.42 *** (0.87) | 81 |
| Effort | Female | MTF | 4.56 (0.75) | 4.19 (0.89) | −0.36 (0.74) | 18 |
| Effort | Female | STF | 4.52 (0.84) | 4.02 (0.99) | −0.50 * (0.87) | 21 |
| Effort | Female | Total | 4.54 (0.79) | 4.10 (0.94) | −0.44 ** (0.80) | 39 |
| Effort | Male | MTF | 4.50 (0.78) | 3.85 (0.98) | −0.65 ** (0.99) | 23 |
| Effort | Male | STF | 4.55 (0.71) | 4.29 (0.79) | −0.26 (1.07) | 19 |
| Effort | Male | Total | 4.52 (0.74) | 4.05 (0.92) | −0.48 ** (1.04) | 42 |
| Effort | Total | MTF | 4.52 (0.76) | 4.00 (0.95) | −0.52 *** (0.89) | 41 |
| Effort | Total | STF | 4.54 (0.77) | 4.15 (0.90) | −0.39 * (0.96) | 40 |
| Effort | Total | Total | 4.53 (0.76) | 4.07 (0.92) | −0.46 *** (0.93) | 81 |
| No Pressure | Female | MTF | 3.92 (0.79) | 3.97 (0.90) | 0.06 (1.26) | 18 |
| No Pressure | Female | STF | 4.05 (1.00) | 4.40 (0.90) | 0.36 (0.91) | 21 |
| No Pressure | Female | Total | 3.99 (0.90) | 4.21 (0.92) | 0.22 (1.08) | 39 |
| No Pressure | Male | MTF | 3.67 (0.91) | 3.83 (1.24) | 0.15 (1.21) | 23 |
| No Pressure | Male | STF | 3.71 (0.73) | 4.13 (1.10) | 0.42 † (0.98) | 19 |
| No Pressure | Male | Total | 3.69 (0.83) | 3.96 (1.18) | 0.27 † (1.11) | 42 |
| No Pressure | Total | MTF | 3.78 (0.86) | 3.89 (1.09) | 0.11 (1.22) | 41 |
| No Pressure | Total | STF | 3.89 (0.89) | 4.27 (1.00) | 0.39 * (0.93) | 40 |
| No Pressure | Total | Total | 3.83 (0.87) | 4.08 (1.06) | 0.25 * (1.09) | 81 |
| Choice | Female | MTF | 3.96 (0.90) | 3.99 (0.66) | 0.02 (0.92) | 18 |
| Choice | Female | STF | 3.70 (1.10) | 4.10 (0.75) | 0.39 † (0.97) | 21 |
| Choice | Female | Total | 3.82 (1.01) | 4.05 (0.70) | 0.22 (0.95) | 39 |
| Choice | Male | MTF | 3.34 (1.16) | 4.12 (0.75) | 0.77 *** (0.95) | 23 |
| Choice | Male | STF | 3.44 (0.89) | 3.99 (0.95) | 0.55 * (1.03) | 19 |
| Choice | Male | Total | 3.38 (1.04) | 4.06 (0.84) | 0.67 *** (0.98) | 42 |
| Choice | Total | MTF | 3.61 (1.09) | 4.07 (0.70) | 0.44 ** (1.00) | 41 |
| Choice | Total | STF | 3.57 (1.00) | 4.05 (0.84) | 0.46 ** (0.99) | 40 |
| Choice | Total | Total | 3.59 (1.04) | 4.06 (0.77) | 0.45 *** (0.98) | 81 |
| Value | Female | MTF | 4.52 (0.62) | 4.42 (0.88) | −0.11 (0.69) | 18 |
| Value | Female | STF | 4.63 (0.60) | 4.79 (0.37) | 0.16 (0.65) | 21 |
| Value | Female | Total | 4.58 (0.60) | 4.62 (0.67) | 0.04 (0.67) | 39 |
| Value | Male | MTF | 4.46 (0.52) | 4.74 (0.47) | 0.28 † (0.53) | 23 |
| Value | Male | STF | 4.34 (0.50) | 4.50 (0.94) | 0.16 (1.04) | 19 |
| Value | Male | Total | 4.41 (0.51) | 4.63 (0.72) | 0.22 † (0.80) | 42 |
| Value | Total | MTF | 4.49 (0.56) | 4.60 (0.69) | 0.11 (0.63) | 41 |
| Value | Total | STF | 4.49 (0.57) | 4.65 (0.71) | 0.16 (0.85) | 40 |
| Value | Total | Total | 4.49 (0.56) | 4.62 (0.70) | 0.13 (0.74) | 81 |

MTF: multiple-try feedback. STF: single-try feedback. † p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Cubillos, C.; Roncagliolo, S.; Cabrera-Paniagua, D.; Vicari, R.M. A Digital Math Game and Multiple-Try Use with Primary Students: A Sex Analysis on Motivation and Learning. Behav. Sci. 2024, 14, 488. https://doi.org/10.3390/bs14060488

