Article

The Mediating Role of Generative AI Self-Regulation on Students’ Critical Thinking and Problem-Solving

1 School of Business and Management, Queen Mary University of London, London E1 4NS, UK
2 School of Economics and Management, Beijing University of Chemical Technology, Beijing 100029, China
3 School of Design, University of Leeds, Leeds LS2 9JT, UK
4 Centre for Instructional Technology & Multimedia, Universiti Sains Malaysia, Gelugor 11700, Pulau Pinang, Malaysia
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(12), 1302; https://doi.org/10.3390/educsci14121302
Submission received: 16 October 2024 / Revised: 14 November 2024 / Accepted: 20 November 2024 / Published: 27 November 2024
(This article belongs to the Section Higher Education)

Abstract

Within the rapid integration of AI into educational settings, understanding its impact on essential cognitive skills is crucial for developing effective teaching strategies and improving student outcomes. This study examines the influence of generative artificial intelligence (GenAI) on students’ critical thinking and problem-solving skills in higher education. Our research specifically investigates how the perceived ease of use, usefulness, and learning value of GenAI tools might influence students’ critical thinking and problem-solving skills, and whether self-regulation serves as a mediator in this relationship. Utilising a quantitative approach, we surveyed 223 students and analysed their responses using a structural equation modelling method. The results reveal that the ease of use of GenAI significantly enhances self-regulation, which in turn positively impacts both the critical thinking and problem-solving abilities of students. However, the perceived usefulness and learning value of GenAI were not found to significantly influence these skills through self-regulation. These findings suggest that, while AI tools can offer an environment conducive to developing higher-order cognitive skills, this might not necessarily translate to the enhancement of students’ skills. This research contributes to the ongoing literature on the role of technology in education by highlighting the importance of designing GenAI tools that support self-regulated learning. Furthermore, it calls for educators and developers to focus not just on the functionality of AI, but also on how these tools can be integrated into curricula to effectively support critical thinking and problem-solving. The practical implications of our research highlight the need for AI tools that are user-friendly and aligned with educational goals, enhancing their adoption and effectiveness in improving student outcomes. 
It is crucial for educators to integrate strategies that promote self-regulation within AI-enhanced learning environments to maximise their impact on student learning.

1. Introduction

The incorporation of generative artificial intelligence (GenAI) into the educational sector signifies a paradigm shift that promises to redefine traditional teaching and learning paradigms. As higher education continues to evolve, the potential of GenAI as a transformative tool is undeniable. The discourse surrounding GenAI’s role in academia is multifaceted, oscillating between enthusiasm for its potential to significantly improve educational delivery and scepticism about its efficacy in nurturing the foundational skills necessary for academic success and lifelong learning [1]. Despite the optimism surrounding GenAI’s capabilities, including its promise for personalised education paths and enhanced engagement through interactive learning platforms, critical questions remain [2]. In particular, the effectiveness of GenAI in cultivating deep, critical thinking and problem-solving skills among students stands as an area for exploration. These skills are increasingly recognised as indispensable for students to effectively interpret information, make informed decisions, and navigate the problem-rich environments of both academia and the professional world.
Moreover, the discussion on GenAI’s educational potential is not without its concerns. The literature [3] points to the need for a deeper understanding of how GenAI applications can support the development of self-regulated learning practices and metacognitive strategies—skills that are paramount for students to manage their learning processes, adapt strategies, and reflect on their learning journey. Despite the extensive literature highlighting both the advantages and challenges of incorporating technology into higher education, there remains a notable gap in empirical studies examining AI’s role in enhancing self-regulated learning and the development of metacognitive skills [4]. This gap points to the need for a detailed investigation into how GenAI might either facilitate or impede the cultivation of essential cognitive skills such as critical thinking and problem solving. To address this, our research is designed to assess the extent to which GenAI can support or hinder these key areas of student development. We propose to investigate a theoretical model that centres on three specific attributes of GenAI: perceived usefulness, ease of use, and perceived learning value, and their collective impact on fostering critical thinking and problem-solving capabilities among students. In addition, this study aims to explore the potential mediating role of self-regulation on the relationship between GenAI use and the enhancement of critical thinking and problem-solving skills. The guiding research questions for this investigation are as follows: (1) What is the relationship between the use of AI and the development of students’ critical thinking and problem-solving skills? And, (2) does self-regulation serve as a mediating factor in the relationship between AI use and the enhancement of critical thinking and problem-solving skills? 
Through this research, we aim to contribute valuable insights into the effective integration of GenAI tools in educational settings, specifically focusing on their potential to enrich students’ learning experiences by enhancing critical cognitive skills and facilitating a more self-regulated approach to learning.

2. Literature Review and Hypothesis Development

2.1. The Impact of AI on Skill Development

Perceived usefulness and perceived ease of use are fundamental beliefs at the core of the technology acceptance model (TAM), used to forecast users’ attitudes and behavioural intentions toward adopting specific technologies [5]. Perceived usefulness is the belief that using a particular system would improve one’s job performance [5]. Conversely, perceived ease of use is understood as the extent to which an individual perceives that using a system would be free of effort, both physical and mental [5]. Within the context of AI in education (AIED), perceived usefulness can be seen as the likelihood of students using AI tools based on their belief in the tools’ ability to improve academic performance [6]; meanwhile, perceived ease of use is the degree of effortlessness students associate with using AI for educational support. Perceived learning value is the belief that interaction with AI tools and applications will aid in achieving educational objectives [7].
Critical thinking and problem-solving skills are essential in the 21st century, vital for both academic achievement and career success [8]. Defined as “the art of evaluating cognitive processes” by Paul and Elder [9], critical thinking encompasses a range of cognitive skills (i.e., analysis, reasoning, and argumentation) and affective dispositions (i.e., open-mindedness and flexibility in evaluation) [10]. Davis and Barnett [11] categorised the cognitive skills of critical thinking into four groups: lower-level thinking skills, which include interpretation, explanation, and the recognition of assumptions; higher-level thinking skills, which encompass analysis and synthesis; complex skills, which involve induction, deduction, and inference; and metacognitive skills, considered to be the highest level. Research has demonstrated a connection between employing GenAI technologies and fostering these critical thinking and problem-solving skills. For example, Bailey and Almusharraf [12] observed that certain chatbot-generated questions are very useful in aiding and nurturing students’ initial critical thinking stages, such as memorisation and evaluation of content, thereby enhancing their comprehension. In a similar vein, Essien et al. [3] reported that ChatGPT, serving as an effective learning companion, positively influences the development of students’ cognitive abilities, including knowledge understanding and application. Furthermore, their findings suggest that ChatGPT encourages deeper engagement with intricate scenarios, thus facilitating higher-order learning processes like analysis and evaluation.
The risk of AI circulating incorrect information has raised alarms, created concerns, and presented obstacles to educational outcomes [13]. Farrokhnia et al. [14] have demonstrated that employing GenAI tools might undermine advanced cognitive capabilities, such as critical and analytical thinking. Sun and Hoelscher [15] further emphasise the necessity for students to develop skills for critically assessing information and making knowledgeable choices when utilising AI for their studies. However, the phenomenon of GenAI producing biased or inaccurate information, commonly referred to as GenAI hallucination, has surprisingly been observed to contribute to the enhancement of students’ critical analysis skills. Although such inaccuracies may reduce the learning value students derive from the tool, they also require students to distinguish between accurate and inaccurate AI-generated content, a competency that is increasingly essential as they engage with AI in learning environments and one that sharpens their analytical abilities [16]. The theoretical model of our paper is shown in Figure 1.
Following the model, we propose the following hypotheses:
H1a. 
The perceived usefulness of GenAI is positively associated with the enhancement of students’ critical thinking skills.
H1b. 
The perceived learning value of GenAI tools significantly correlates with their critical thinking skill development.
H1c. 
The perceived ease of use of GenAI positively influences students’ critical thinking skill development.
The existing literature has explored the impact of GenAI on students’ problem-solving skills [3]. Cam and Kiyici [17] found that students who received support from robotics-assisted programming education showed significant improvement in their problem-solving skills. Additionally, Urban et al. [18] compared students at their universities who used ChatGPT to those who did not, demonstrating that ChatGPT usage significantly supports students in understanding tasks and enhances students’ task resolution skills. Therefore, in this study, we propose the following hypotheses:
H2a. 
The perceived usefulness of GenAI is positively associated with the enhancement of students’ problem-solving skills.
H2b. 
The perceived learning value of GenAI tools significantly correlates with their problem-solving skill development.
H2c. 
The perceived ease of use of GenAI positively influences students’ problem-solving skill development.

2.2. The Mediating Role of Self-Regulation

Self-regulation is understood as a deliberate and flexible process through which students can plan, adjust, and oversee their cognitive, emotional, and behavioural responses to task requirements [19,20]. It is a crucial component of critical thinking, aiding students through complex cognitive processes [21]. Rooted in the theory of self-regulated learning, learners engage in metacognitive practices such as setting goals, planning, monitoring progress, and evaluating outcomes to manage their learning effectively and achieve their educational objectives. This involves refining their metacognitive strategies [22] and maintaining motivation and engagement with their studies [23].
Nonetheless, students often encounter difficulties in this area, particularly when they lack prior knowledge or face challenging learning tasks [24]. The advent of GenAI has introduced a novel educational tool that, with its constant availability and user-friendliness, offers immediate feedback on inquiries, thereby promoting the development of cognitive abilities, including understanding of subject matter and application of knowledge [4]. Zhou and Lilian [25] posited that GenAI has the potential to function as a new stakeholder in students’ learning journeys, assisting students in independent learning by offering constructive insights, feedback, and guidance, acting as a 24/7 study buddy and teaching assistant. This perspective is corroborated by numerous empirical studies. For instance, Zhou et al. [26] discovered that GenAI could support self-directed learning in entrepreneurship education, enhance subject matter comprehension, and assist students in generating ideas and conducting market research. Similarly, studies have indicated that students who purposefully and effectively integrate AI into their study routines can improve their metacognitive engagement. Pinto et al. [27] observed that students do not simply depend on AI tools passively; instead, they strategically use these technologies in their study practices, leveraging their functionalities to deepen understanding and hone their problem-solving abilities.
The three GenAI attributes examined in this study (perceived usefulness, ease of use, and learning value) have been shown to significantly influence students’ motivation for self-directed learning and, subsequently, their skill development. For instance, Li [28] investigated factors affecting college students’ actual use of AI-based systems, finding that the perceived usefulness and ease of use of GenAI substantially fostered learning motivation, increased interest, and facilitated goal achievement. Similarly, Al-Abdullatif [29] found that, among students in Saudi higher education institutions, AI’s perceived usefulness, ease of use, and learning value positively affected their willingness to use GenAI. Hence, we deduce that these features, when perceived by students as enhancing learning performance, value, and convenience, can bolster self-regulated learning and ultimately support students’ metacognitive thinking and problem-solving abilities. Based on the discussion above, we propose the following hypotheses:
H3a. 
Self-regulation mediates the relationship between the perceived usefulness of GenAI tools and the development of students’ critical thinking skills.
H3b. 
Self-regulation mediates the relationship between the perceived learning value of GenAI tools and the development of students’ critical thinking skills.
H3c. 
Self-regulation mediates the relationship between the perceived ease of use of GenAI tools and the development of students’ critical thinking skills.
H4a. 
Self-regulation mediates the relationship between the perceived usefulness of GenAI tools and the development of students’ problem-solving skills.
H4b. 
Self-regulation mediates the relationship between the perceived learning value of GenAI tools and the development of students’ problem-solving skills.
H4c. 
Self-regulation mediates the relationship between the perceived ease of use of GenAI tools and the development of students’ problem-solving skills.

3. Method

3.1. Instrument and Sample

This study investigated students’ perceptions of GenAI’s usefulness, learning value, ease of use, self-regulation, critical thinking, and problem-solving. The items for measuring GenAI’s ease of use (n: 4) and usefulness (n: 4) were adapted from Davis [5]. These items have been widely used by many previous studies on technology integration and its use in learning. In addition, the learning value gained from the use of GenAI was assessed by 4 items which we adapted from Alves [30], who originally developed them based on the work of Zimmerman [31]. Given the widespread use of AI tools in higher education, it was anticipated that this might enhance perceived value by providing personalised and adaptive learning experiences, fostering critical thinking skills, and offering innovative approaches to problem-solving. We also adapted 5 items from the Motivated Strategies for Learning Questionnaire (MSLQ) by Pintrich [32] to measure students’ perceived critical thinking. The reason for using the MSLQ in an AI context was its relevance in explaining how GenAI helped students to apply previous knowledge to new situations in order to make critical evaluations of the content. We also adapted 11 items to determine how GenAI helped students in various problem-solving situations. These items were adapted from Rosenbaum [33] to reflect specific types of self-controlling behaviours when learning with GenAI. All the items were assessed using a 7-point Likert scale ranging from strongly disagree to strongly agree. Since the participants were from a public university in China, we translated the study description and items from English into Chinese. We also employed a back-translation process to ensure that the translation retained the same meaning as the original English version that we adapted or modified from previous studies.
We conducted a content validity test on the modified items with 12 scholars and students to ensure that the items were understood and captured the intended meaning of the study variables.
After validating the study items, we recruited students from various undergraduate and postgraduate courses. The number of respondents was determined using GPower 3.1, as recommended by previous studies such as Kang [34]. Based on the input parameters in GPower, the minimum sample size for a structural model with 11 interactions (6 direct effects and 5 mediating effects) was calculated to be 178 respondents. This estimation was based on a statistical power of 0.95, a moderate effect size (Cohen’s f²) of 0.15, and an error probability of 0.05. Therefore, email invitations were sent to 650 individuals randomly selected from a pool of students undertaking different undergraduate and postgraduate courses at a university in China. The respondents were asked to answer a series of questions about their personal experience with GenAI in learning. The first section of the email provided a detailed description of the study’s purpose and the respondents’ rights to participate. A link to the survey was included in the email to direct students to a page containing questions for specific study variables (see Table 1).
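The GPower estimate can be reproduced approximately in code. The sketch below (our illustration, not part of the original analysis) searches for the smallest sample size at which an F-test for a linear model reaches the stated power; treating the model’s 11 paths as regression predictors mirrors GPower’s linear multiple regression module, which we assume was the procedure used.

```python
from scipy.stats import f as f_dist, ncf


def min_sample_size(f2=0.15, n_predictors=11, alpha=0.05, target_power=0.95):
    """Smallest N reaching target power for an F-test on a linear model.

    Follows GPower's convention for the noncentrality parameter: lambda = f2 * N.
    """
    n = n_predictors + 2  # need at least one denominator degree of freedom
    while True:
        df1 = n_predictors
        df2 = n - n_predictors - 1
        crit = f_dist.ppf(1 - alpha, df1, df2)       # critical F under H0
        power = 1 - ncf.cdf(crit, df1, df2, f2 * n)  # power under H1
        if power >= target_power:
            return n
        n += 1


print(min_sample_size())  # lands near the 178 respondents reported
```

With the default parameters the search returns a figure in the neighbourhood of the 178 respondents reported above; lowering the target power or the number of predictors shrinks the required sample accordingly.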
We made two attempts to gather data from the target population. As shown in Table 2, this resulted in 223 responses, which surpassed the recommended number of 178. The demographic breakdown revealed that 67% (n: 148) of the respondents were female, while 33% were male (n: 75). Regarding age distribution, 121 respondents were aged 18–21 years old, 62 were between 22 and 25 years old, 18 were 26–30 years old, and 22 were over 30 years old. In terms of academic level, 155 respondents were enrolled in undergraduate programmes, 65 were in master’s programmes, and only 3 were pursuing a PhD.

3.2. Model Evaluation

The study model underwent examination through Partial Least Squares Structural Equation Modelling (PLS-SEM), a two-stage multivariate analysis method. In the first stage, measurement values were assessed to ensure they met all required thresholds before proceeding to the evaluation of the structural model. We measured the goodness of fit of the proposed model through confirmatory factor analysis (CFA), which was conducted to assess the overall measurement model. The mean values of the skewness and kurtosis were smaller than the prescribed levels (skewness < 2.0; kurtosis < 7.0), indicating no significant problems regarding the multivariate normality of the data [35]. Also, for the measurement model to have a sufficiently good fit, the ratio of the χ² value to degrees of freedom (CMIN/df) should not exceed 3; the comparative fit index (CFI), the Tucker–Lewis index (TLI), and the non-normed fit index (NNFI) should exceed 0.9; and the root mean square error of approximation (RMSEA) should not exceed 0.05 [36].
SmartPLS was employed to assess the convergent validity of the scales according to the criteria outlined by Fornell and Larcker [37]. These criteria consist of ensuring that: (1) all indicator loadings are statistically significant and exceed 0.7; (2) construct reliability (CR) surpasses 0.7; and (3) the average variance extracted (AVE) for each construct is higher than 0.5. Meanwhile, we evaluated the discriminant validity of the scales based on the guidelines established by Fornell and Larcker [37], wherein the square root of the AVE for each construct should exceed its correlations with any other construct.
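Both convergent validity statistics follow directly from the standardised indicator loadings. A minimal sketch of the standard formulas (the loadings shown are illustrative values, not the study’s data):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # error variance per indicator
    return s ** 2 / (s ** 2 + error)


def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)


# Example: a four-indicator construct with uniform loadings of 0.8
loadings = [0.8, 0.8, 0.8, 0.8]
print(composite_reliability(loadings))       # ~0.877, above the 0.7 threshold
print(average_variance_extracted(loadings))  # ~0.64, above the 0.5 threshold
```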

4. Results

Our analytical process commenced with an examination of the factor loadings. The analysis revealed that all item loadings exceeded the threshold of 0.7.
There are six latent variables in the measurement model. Table 3 details the internal consistency and convergent validity of these variables. Each latent variable has a Cronbach’s alpha value exceeding 0.7, signifying reliable internal consistency. The Average Variance Extracted (AVE) values surpass the 0.5 threshold, indicating a satisfactory level of convergent validity within the model.
Table 4 presents the outcomes of the Fornell–Larcker criterion analysis. The findings demonstrate that the square root of the AVE for each construct surpasses its correlation with any other construct, signifying that each construct has stronger associations with its own indicators than with other constructs. This provides evidence of good discriminant validity within the model. The heterotrait–monotrait ratio (HTMT) was also adopted to test the discriminant validity of each variable. The results, shown in Table 5, indicate that correlations between variables are below the maximum threshold of 0.9, ranging from 0.514 to 0.755, thereby indicating adequate discriminant validity among the constructs within the model.
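Both discriminant validity checks reduce to simple matrix comparisons. The sketch below is our illustration of the general procedure (the matrices used in the test are placeholders, not the study’s data): the Fornell–Larcker check compares the square root of each AVE against the inter-construct correlations, while HTMT relates average between-construct item correlations to average within-construct ones.

```python
import numpy as np


def fornell_larcker_ok(ave, construct_corr):
    """True if sqrt(AVE) of every construct exceeds its correlations with all others."""
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(construct_corr, dtype=float)
    off_diag = corr - np.diag(np.diag(corr))  # zero out the diagonal
    return all(sqrt_ave[i] > np.abs(off_diag[i]).max() for i in range(len(sqrt_ave)))


def htmt(item_corr, items_a, items_b):
    """Heterotrait-monotrait ratio for two constructs, from an item correlation matrix."""
    R = np.asarray(item_corr, dtype=float)
    hetero = R[np.ix_(items_a, items_b)].mean()  # between-construct correlations
    mono_a = R[np.ix_(items_a, items_a)][np.triu_indices(len(items_a), 1)].mean()
    mono_b = R[np.ix_(items_b, items_b)][np.triu_indices(len(items_b), 1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)
```

Under the thresholds used here, discriminant validity holds when `fornell_larcker_ok` returns True and every pairwise HTMT value stays below 0.9.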
Table 6 presents the results of the hypothesis testing. GenAI’s usefulness and ease of use showed a positive correlation with critical thinking, supporting H1a and H1c (β = 0.215, p < 0.05; β = 0.205, p < 0.05). In contrast, the perceived learning value derived from AI (β = −0.001, p > 0.05) does not appear to significantly influence users’ critical thinking in the context of GenAI use; therefore, hypothesis H1b is not supported. Notably, perceptions of the usefulness, learning value, and ease of use of a tool or system significantly and positively affect problem-solving skills (β = 0.199, p < 0.001; β = 0.237, p < 0.01; β = 0.279, p < 0.01, respectively), thus confirming hypotheses H2a, H2b, and H2c.
Hypothesis H3a, which examined the mediating role of self-regulation between perceived usefulness and critical thinking, was rejected (β = 0.037, p > 0.05), indicating insufficient statistical evidence for a significant indirect effect. The p-value of 0.066 for hypothesis H3b (β = 0.076) suggests a possible mediating effect of self-regulation between perceived learning value and critical thinking skills; however, since this value does not meet the significance threshold of 0.05, hypothesis H3b is also rejected.
The analysis revealed a significant mediating role of self-regulation in the relationship between perceived ease of use and problem-solving skills (β = 0.120, p < 0.01), supporting hypothesis H4c. The indirect paths from perceived usefulness (β = 0.021, p > 0.05) and from perceived learning value (β = 0.043, p > 0.05) to problem-solving skills through self-regulation were not statistically significant, therefore rejecting hypotheses H4a and H4b.
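In PLS-SEM, indirect effects such as these are conventionally tested by bootstrapping the product of the two path coefficients (predictor to mediator, mediator to outcome). The simplified single-mediator sketch below uses ordinary least squares in place of the full PLS estimator, so it illustrates the logic of the test rather than reproducing SmartPLS output:

```python
import numpy as np


def bootstrap_indirect_effect(x, m, y, n_boot=2000, seed=42):
    """Percentile bootstrap CI for the indirect effect a*b in x -> m -> y."""
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    rng = np.random.default_rng(seed)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        # a-path: regress the mediator on the predictor
        a = np.linalg.lstsq(np.column_stack([np.ones(n), xs]), ms, rcond=None)[0][1]
        # b-path: regress the outcome on the mediator, controlling for the predictor
        b = np.linalg.lstsq(np.column_stack([np.ones(n), xs, ms]), ys, rcond=None)[0][2]
        effects[i] = a * b
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return lo, hi  # the indirect effect is "significant" if this CI excludes zero
```

A 95% confidence interval that excludes zero corresponds to the p < 0.05 criterion applied to the mediation hypotheses above.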
In Figure 2, the resulting R² value indicates that the combined effect of perceived usefulness, perceived learning value, and perceived ease of use explains 45.3% of the variance in self-regulation (R² = 0.453). The R² values of 0.501 for critical thinking and 0.609 for problem-solving skills indicate that approximately 50.1% of the variance in critical thinking and 60.9% of the variance in problem-solving skills are explained by the combined effect of the perceived usefulness, perceived learning value, and perceived ease of use of AI. The Standardised Root Mean Square Residual (SRMR) is 0.067, below the threshold of 0.08 suggested by Hu and Bentler [38], indicating a good fit.
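SRMR summarises the discrepancy between the observed correlation matrix and the model-implied one. A minimal sketch of the general formula (our illustration, not SmartPLS’s internal implementation):

```python
import numpy as np


def srmr(observed_corr, implied_corr):
    """Root mean square of the residuals over the lower triangle (diagonal included)."""
    obs = np.asarray(observed_corr, dtype=float)
    imp = np.asarray(implied_corr, dtype=float)
    resid = obs - imp
    rows, cols = np.tril_indices_from(resid)  # each unique element counted once
    return float(np.sqrt(np.mean(resid[rows, cols] ** 2)))
```

A value below 0.08, as with the 0.067 reported here, is conventionally read as a good fit.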

5. Discussion

5.1. The Impact of GenAI on Employability Development

Our findings demonstrate that GenAI can indeed play a constructive role in enhancing students’ critical thinking and problem-solving capabilities, aligning with prior research, such as that by Essien et al. [3] and Urban et al. [18]. However, consistent with previous findings, the influence of GenAI on these skills appears to be modest. While the majority of the existing literature treats GenAI generically, without delving into specific features that may bolster employability skills, our investigation focuses on three particular aspects of GenAI: its perceived usefulness, ease of use, and perceived learning value. Our results indicate that these GenAI characteristics significantly contribute to fostering students’ problem-solving abilities, such as facilitating knowledge inquiry, breaking down complex tasks into manageable segments, and encouraging the generation of alternative solutions. This complements the observations made by Urban et al. [18], who found that students engaging with ChatGPT exhibit enhanced creative problem-solving skills compared to those who do not use the tool.
Despite our findings affirming that the perceived usefulness and ease of use of GenAI can augment students’ critical thinking, the data did not corroborate the influence of perceived learning value on critical thinking. This finding contrasts with research by Essien et al. [3], who observed that the phenomenon of “AI hallucination” encourages students to engage in fact-checking, thereby fostering the development of advanced cognitive skills such as analysis and evaluation. The variation in findings may stem from the high-quality outputs provided by paid AI services, which could lead students to overly trust the generated content, accepting it unquestioningly. This situation raises concerns about students uncritically accepting AI-produced materials, highlighting the urgent need to cultivate skills in critical evaluation among learners.

5.2. Results of the Mediating Role of Self-Regulation

The results indicate that self-regulation serves as a mediator in the relationship between the ease of use of AI tools and the development of critical thinking and problem-solving skills. This suggests that when students find AI tools straightforward to use, they can more effectively employ self-regulatory strategies, as they are not bogged down by the operational challenges associated with other educational technologies. This ease of use allows students to focus more mental resources on engaging with the material, which in turn facilitates the deeper cognitive processing crucial for critical thinking and problem-solving. This aligns with the existing literature which suggests that students can utilise AI for basic learning tasks such as summarising and understanding assignments, thereby freeing up time to concentrate on more complex reasoning tasks like analysis, evaluation, and creation [3].
However, the findings showed that self-regulation did not mediate the relationship between the perceived usefulness and learning value of AI and the development of advanced-level skills. This observation corresponds with patterns noted in the critical thinking domain, suggesting that students’ perceptions of AI’s usefulness and learning value do not directly lead to improved problem-solving skills via self-regulated learning mechanisms, similar to the findings of Ghotbi et al. [39] and Chan and Hu [40]. Developing problem-solving and critical thinking skills involves more than just the application of knowledge; it also requires the integration of creative and analytical thinking to solve new or unfamiliar challenges. Effective skill development in these areas likely necessitates direct, hands-on experience with problem-solving tasks, guided reflection, and iterative practice with feedback. While AI-generated responses and content may seem credible, they can also produce misinformation and “alternative facts” based on the prompts provided by users [41]. If students rely on AI tools and believe that AI tools are highly useful and deliver accurate information, they may miss opportunities to critically reflect on and learn from their mistakes, potentially impeding the cognitive processes necessary for developing critical employability skills.

5.3. Theoretical and Practical Implications

Our study advances theoretical understanding in two key areas, the TAM and self-regulation theory, particularly within the context of AI’s role in education. Firstly, by identifying specific AI features (ease of use, perceived usefulness, and perceived learning value) that influence the development of critical thinking and problem-solving skills, our research adds granularity to the TAM framework. This specificity allows for a more nuanced understanding of how AI adoption impacts educational outcomes, suggesting that future TAM research should consider the differential impact of various technology attributes on learning processes. Secondly, with respect to self-regulation theory, the mediating role of self-regulation identified in our study suggests a synergistic relationship between AI use and self-regulated learning. This finding encourages a re-examination of self-regulation theory to account for how AI tools, particularly those characterised by user-friendliness and perceived value, can foster or impede the development of self-regulatory capacities and, by extension, critical cognitive skills. Thirdly, our study, conducted within China’s educational framework, where substantial governmental support for GenAI and robust collaboration between GenAI developers and educational experts exist, deepens our understanding of contextual elements, such as national AI policy and academia–industry collaboration, that influence AI tool efficacy in higher education.
In addition, education in China is highly structured and hierarchical with a strong emphasis on academic achievement, standardised testing, and teacher authority. This cultural backdrop influences both educators’ and students’ approaches to GenAI tools, where GenAI is often seen as a means to enhance efficiency and support academic performance. This insight prompts a call for more culturally sensitive theoretical models that recognise the variability in technology acceptance and learning outcomes across different educational and cultural landscapes.
From a practical perspective, recognising the importance of ease of use, perceived usefulness, and learning value in GenAI tools underscores the need for developers to prioritise these features. GenAI tools that are intuitive to use and aligned with learning objectives are more likely to be embraced by students and can lead to better educational outcomes. Educators should be aware of the critical role of self-regulation in mediating the benefits of AI tools for learning. Instructional strategies that promote self-regulation, such as goal setting, self-monitoring, and reflective practice, should be integrated into AI-enhanced learning environments to maximise their impact on student learning. Our findings underscore the need for educators and instructional designers to consider how AI tools are integrated into the curriculum. They suggest moving beyond assumptions that positive perceptions of AI automatically translate into learning gains. Instead, a more nuanced approach that considers the specific cognitive processes involved in problem-solving, coupled with targeted instructional strategies and support, may be necessary to fully realise the potential of AI for enhancing students’ problem-solving skills. The variability of AI’s impact across different educational contexts highlights the need for culturally sensitive implementation strategies. Educators and policymakers should consider local educational norms, technological infrastructure, and student needs when integrating AI into the curriculum to ensure that its benefits are accessible to all students.

5.4. Limitations and Future Directions

One limitation is the scope of the GenAI features examined. While we focused on ease of use, perceived usefulness, and perceived learning value, other attributes of AI, such as interactivity, feedback quality, and personalisation, were not explored. Future studies could investigate these additional features to provide a more comprehensive understanding of how various aspects of AI contribute to learning outcomes.

Another limitation relates to the study’s context, conducted within the Chinese educational system. While this provided specific insights, the findings may not be universally applicable across different cultural and educational settings. Future research should consider cross-cultural studies to examine how cultural nuances impact the effectiveness of AI in education. Such studies could help in developing more globally applicable educational technologies and strategies.

Furthermore, our study primarily utilised quantitative methods to assess the relationship between AI’s perceived attributes and cognitive skill development. Future research could benefit from incorporating qualitative methodologies, such as interviews or focus groups, to gain deeper insights into students’ experiences and perceptions of using AI for learning. This mixed-methods approach could uncover the nuanced ways in which AI influences learning processes and outcomes.

Lastly, the mediating role of self-regulation highlighted in our findings suggests that interventions designed to enhance self-regulatory skills could amplify the benefits of AI in education. Future research could develop and test such interventions, assessing their effectiveness in improving critical thinking and problem-solving skills in conjunction with AI tool use.

6. Conclusions

Our research illuminates the impact of GenAI on critical thinking and problem-solving skills among students in China’s higher education sector. Features such as ease of use, perceived usefulness, and learning value contribute directly to the enhancement of these advanced skills, although only ease of use did so indirectly through self-regulation. AI’s user-friendliness enables educators to improve teaching strategies and motivates students to incorporate AI into their learning responsibly. However, as AI technology evolves, its increasingly accurate and sophisticated content requires careful integration into educational practices. There is a risk that an overemphasis on AI’s usefulness and learning value could oversimplify cognitive processes and inhibit the cultivation of critical thinking. It is therefore crucial to adopt a considered approach that encourages efficient and effective AI use in self-directed learning, positioning AI as a supportive educational tool rather than a primary source of content.

Author Contributions

Conceptualization, X.Z., D.T. and H.A.-S.; data and methodology, X.Z. and D.T.; formal analysis, X.Z.; investigation, H.A.-S.; writing—original draft, X.Z.; writing—review and editing, D.T. and H.A.-S.; project administration, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Beijing Municipal Social Science Funds (Grant No. 23GLB016).

Institutional Review Board Statement

This study was reviewed and approved by the Institutional Review Board of the School of Economics and Management, Beijing University of Chemical Technology.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. O’Dea, X. Generative AI: Is it a paradigm shift for higher education? Stud. High. Educ. 2024, 49, 1–6.
2. Molenaar, I.; de Mooij, S.; Azevedo, R.; Bannert, M.; Järvelä, S.; Gašević, D. Measuring self-regulated learning and the role of AI: Five years of research using multimodal multichannel data. Comput. Hum. Behav. 2023, 139, 107540.
3. Essien, A.; Bukoye, O.T.; O’Dea, X.; Kremantzis, M. The influence of AI text generators on critical thinking skills in UK business schools. Stud. High. Educ. 2024, 49, 865–882.
4. Lodge, J.M.; de Barba, P.; Broadbent, J. Learning with generative artificial intelligence within a network of co-regulation. J. Univ. Teach. Learn. Pract. 2023, 20.
5. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340.
6. Alfadda, H.A.; Mahdi, H.S. Measuring students’ use of Zoom application in language course based on the technology acceptance model (TAM). J. Psycholinguist. Res. 2021, 50, 883–900.
7. Maheshwari, G. Factors influencing entrepreneurial intentions the most for university students in Vietnam: Educational support, personality traits or TPB components? Educ. Train. 2021, 63, 1138–1153.
8. Penkauskienė, D.; Railienė, A.; Cruz, G. How is critical thinking valued by the labour market? Employer perspectives from different European countries. Stud. High. Educ. 2019, 44, 804–815.
9. Paul, R.; Elder, L. The Miniature Guide to Critical Thinking Concepts and Tools; Rowman & Littlefield: Lanham, MD, USA, 2019.
10. Ennis, R.H. Critical thinking across the curriculum: A vision. Topoi 2018, 37, 165–184.
11. Davies, M.; Barnett, R. (Eds.) The Palgrave Handbook of Critical Thinking in Higher Education; Palgrave Macmillan US: New York, NY, USA, 2015.
12. Bailey, D.; Almusharraf, N.; Hatcher, R. Finding Satisfaction: Intrinsic Motivation for Synchronous and Asynchronous Communication in the Online Language Learning Context; Springer: New York, NY, USA, 2021; p. 3.
13. Walter, Y. Embracing the future of artificial intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. Int. J. Educ. Technol. High. Educ. 2024, 21, 15.
14. Farrokhnia, M.; Banihashem, S.K.; Noroozi, O.; Wals, A. A SWOT analysis of ChatGPT: Implications for educational practice and research. Innov. Educ. Teach. Int. 2023, 61, 460–474.
15. Sun, G.H.; Hoelscher, S.H. The ChatGPT storm and what faculty can do. Nurse Educ. 2023, 48, 119–124.
16. Musi, E.; Carmi, E.; Reed, C.; Yates, S.; O’Halloran, K. Developing misinformation immunity: How to reason-check fallacious news in a human–computer interaction environment. Soc. Media Soc. 2023, 9.
17. Cam, E.; Kiyici, M. The impact of robotics assisted programming education on academic success, problem solving skills and motivation. JETOL 2022, 5, 47–65.
18. Urban, M.; Děchtěrenko, F.; Lukavský, J.; Hrabalová, V.; Švacha, F.; Brom, C.; Urban, K. ChatGPT improves creative problem-solving performance in university students: An experimental study. Comput. Educ. 2024, 215, 105031.
19. Schunk, D.H.; Zimmerman, B.J. (Eds.) Handbook of Self-Regulation of Learning and Performance; Taylor & Francis: Abingdon, UK, 2018.
20. Beckman, K.; Apps, T.; Bennett, S.; Dalgarno, B.; Kennedy, G.; Lockyer, L. Self-regulation in open-ended online assignment tasks: The importance of initial task interpretation and goal setting. Stud. High. Educ. 2021, 46, 821–835.
21. Lau, J.Y. Metacognitive Education: Going Beyond Critical Thinking. In The Palgrave Handbook of Critical Thinking in Higher Education; Davies, M., Barnett, R., Eds.; Palgrave Macmillan US: New York, NY, USA, 2015; pp. 373–389.
22. Winne, P.H. Leveraging Big Data to Help Each Learner and Accelerate Learning Science. Teach. Coll. Rec. 2017, 119, 1–24.
23. Greene, J.A.; Azevedo, R. A theoretical review of Winne and Hadwin’s model of self-regulated learning: New perspectives and directions. Rev. Educ. Res. 2007, 77, 334–372.
24. Seufert, T. The interplay between self-regulation in learning and cognitive load. Educ. Res. Rev. 2018, 24, 116–129.
25. Zhou, X.; Schofield, L. Using social learning theories to explore the role of generative artificial intelligence (AI) in collaborative learning. JLDHE 2024, 30.
26. Zhou, X.; Zhang, J.J.; Chen, C. Unveiling students’ experiences and perceptions of artificial intelligence usage in higher education. J. Univ. Teach. Learn. Pract. 2024, 21.
27. Pinto, P.H.R.; de Araujo, V.M.U.; Junior, C.D.S.F.; Goulart, L.L.; Beltrão, J.V.C.; Aguiar, G.S.; Avelino, E.L. Assessing the psychological impact of generative AI on data science education: An exploratory study. Preprints 2023, 2023120379.
28. Li, K. Determinants of college students’ actual use of AI-based systems: An extension of the technology acceptance model. Sustainability 2023, 15, 5221.
29. Al-Abdullatif, A.M. Modeling students’ perceptions of chatbots in learning: Integrating technology acceptance with the value-based adoption model. Educ. Sci. 2023, 13, 1151.
30. Alves, H. The measurement of perceived value in higher education: A unidimensional approach. Serv. Ind. J. 2011, 31, 1943–1960.
31. Zimmerman, B.J. Attaining Self-Regulation: A Social Cognitive Perspective. In Handbook of Self-Regulation; Boekaerts, M., Pintrich, P.R., Zeidner, M., Eds.; Elsevier: Amsterdam, The Netherlands, 2000; pp. 13–39.
32. Pintrich, P.R. A Manual for the Use of the Motivated Strategies for Learning Questionnaire (MSLQ); U.S. Department of Education: Washington, DC, USA, 1991.
33. Rosenbaum, M. A schedule for assessing self-control behaviors: Preliminary findings. Behav. Ther. 1980, 11, 109–121.
34. Kang, H. Sample size determination and power analysis using the G*Power software. J. Educ. Eval. Health Prof. 2021, 18.
35. Muthén, B.; Kaplan, D.; Hollis, M. On structural equation modeling with data that are not missing completely at random. Psychometrika 1987, 52, 431–462.
36. Bentler, P.M.; Speckart, G. Models of attitude–behavior relations. Psychol. Rev. 1979, 86, 452.
37. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50.
38. Hu, L.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55.
39. Ghotbi, N.; Ho, M.T.; Mantello, P. Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. AI Soc. 2022, 37, 283–290.
40. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43.
41. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 342–363.
Figure 1. Theoretical model.
Figure 2. The effect coefficients and outer loadings of the latent reflective variables. Source: authors, based on graphs generated by SmartPLS 4.1.
Table 1. Questionnaire items used in this study.
| Variable | Item | Source |
| --- | --- | --- |
| Perceived Usefulness | 1. Using GenAI improves my performance in my study. | Davis (1989) [5] |
|  | 2. Using GenAI in my study increases my productivity. |  |
|  | 3. Using GenAI enhances my effectiveness in my study. |  |
|  | 4. I find GenAI to be useful in my study. |  |
| Perceived Learning Value | 1. The experience I have gained in using GenAI will help me get a good job. (Future goals) | Alves (2011) [30] |
|  | 2. Taking into consideration the price I pay for using GenAI (fees, charges, etc.), I believe GenAI provides quality service. (Trade-off price/quality) |  |
|  | 3. Compared with other learning-supporting software, I consider that I receive quality service for the price that I pay for GenAI. (Comparison with alternatives) |  |
|  | 4. I feel happy about my choice of GenAI tools. (Emotion) |  |
| Perceived Ease of Use | 1. My interaction with GenAI is clear and understandable. | Davis (1989) [5] |
|  | 2. Interacting with GenAI does not require a lot of my mental effort. |  |
|  | 3. I find GenAI to be easy to use. |  |
|  | 4. I find it easy to get GenAI to do what I want it to do. |  |
| Self-Regulation | 1. When I use GenAI to read for this course, I make up questions to help focus my reading. | Pintrich (1991) [32] |
|  | 2. If the materials provided by GenAI are difficult to understand, I am able to change the way I read the material. |  |
|  | 3. I try to change the way I use GenAI for study in order to fit the course requirements and instructors’ teaching style. |  |
|  | 4. When I use GenAI, I try to think through a topic and decide what I am supposed to learn from it rather than just reading it over. |  |
| Critical Thinking | 1. I often find myself questioning things I read from GenAI to decide if I find them convincing. | Pintrich (1991) [32] |
|  | 2. When a theory, interpretation or conclusion is presented in GenAI, I try to decide if there is good supporting evidence. |  |
|  | 3. I treat GenAI content as a starting point and try to develop my own ideas about it. |  |
|  | 4. I try to play around with ideas of my own related to what I am learning in GenAI. |  |
|  | 5. Whenever I read an assertion or conclusion generated by GenAI, I think about possible alternatives. |  |
| Problem Solving | 1. When I use GenAI to do my learning tasks, I think about the less boring parts of the task and the reward that I will receive once I am finished. | Rosenbaum (1980) [33] |
|  | 2. When I have to do something that is anxiety-arousing for me, I try to visualize how I will overcome my anxieties while doing it with GenAI. |  |
|  | 3. When I am faced with a difficult problem, I try to approach its solution in a systematic way using GenAI. |  |
|  | 4. When I find that I have difficulties in concentrating on my learning, I look for ways to increase my concentration with GenAI. |  |
|  | 5. When I plan to learn with GenAI, I remove all the things that are not relevant to my learning. |  |
|  | 6. When I use GenAI to get rid of a bad habit, I first try to find out all the factors that maintain this habit. |  |
|  | 7. When I find it difficult to settle down and do a certain task, I use GenAI to help me look for ways to settle down. |  |
|  | 8. GenAI tools help me to finish a learning task I have to do and then start doing the things I really like. |  |
|  | 9. Facing the need to make a decision, I usually find out all the possible alternatives with the help of GenAI instead of deciding quickly and spontaneously. |  |
|  | 10. I usually plan my work with GenAI when faced with a number of things to do. |  |
|  | 11. If I find it difficult to concentrate on a certain task, I use GenAI to help me divide the job into smaller segments. |  |
Table 2. Demographic information of the participants.
| Category | Details | Respondents | Percentage |
| --- | --- | --- | --- |
| Gender | Female | 148 | 67% |
|  | Male | 75 | 33% |
| Age | 18–21 years | 121 | 54.26% |
|  | 22–25 years | 62 | 27.80% |
|  | 26–30 years | 18 | 8.07% |
|  | Over 30 years | 22 | 9.87% |
| Academic Level | Undergraduate | 155 | 69.51% |
|  | Master’s | 65 | 29.15% |
|  | PhD | 3 | 1.35% |
Table 3. The assessment of the internal consistency and convergent validity within the evaluated model.
| Construct | Cronbach’s Alpha | Composite Reliability (rho_a) | Composite Reliability (rho_c) | Average Variance Extracted (AVE) |
| --- | --- | --- | --- | --- |
| Critical thinking | 0.900 | 0.901 | 0.926 | 0.716 |
| Perceived learning value | 0.782 | 0.797 | 0.858 | 0.603 |
| Perceived ease of use | 0.854 | 0.867 | 0.901 | 0.694 |
| Perceived usefulness | 0.906 | 0.910 | 0.934 | 0.780 |
| Problem-solving skills | 0.935 | 0.937 | 0.944 | 0.608 |
| Self-regulation | 0.843 | 0.843 | 0.895 | 0.680 |
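As a brief illustration of the thresholds behind Table 3 (Cronbach’s alpha and composite reliability ≥ 0.70, AVE ≥ 0.50), the sketch below computes composite reliability (rho_c) and AVE from a set of standardized loadings. The loadings are invented for demonstration and are not those estimated in this study (which SmartPLS 4.1 reports directly; see Figure 2):

```python
# Convergent-validity metrics from standardized indicator loadings.
# The loadings are hypothetical, for illustration only.

def composite_reliability(loadings):
    """rho_c = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    s = sum(loadings)
    error = sum(1 - l**2 for l in loadings)  # indicator error variances
    return s**2 / (s**2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l**2 for l in loadings) / len(loadings)

loadings = [0.82, 0.85, 0.79, 0.88]  # hypothetical 4-item construct
rho_c = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
print(f"rho_c = {rho_c:.3f}, AVE = {ave:.3f}")
assert rho_c >= 0.70 and ave >= 0.50  # conventional cutoffs
```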
Table 4. Assessment of discriminant validity (Fornell–Larcker criterion).
|  | Critical Thinking | Perceived Learning Value | Perceived Ease of Use | Perceived Usefulness | Problem-Solving Skills | Self-Regulation |
| --- | --- | --- | --- | --- | --- | --- |
| Critical thinking | 0.846 |  |  |  |  |  |
| Perceived learning value | 0.480 | 0.776 |  |  |  |  |
| Perceived ease of use | 0.578 | 0.580 | 0.833 |  |  |  |
| Perceived usefulness | 0.493 | 0.644 | 0.459 | 0.883 |  |  |
| Problem-solving skills | 0.591 | 0.654 | 0.663 | 0.584 | 0.780 |  |
| Self-regulation | 0.650 | 0.525 | 0.644 | 0.432 | 0.631 | 0.825 |
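The Fornell–Larcker criterion in Table 4 holds when the square root of each construct’s AVE (the diagonal) exceeds that construct’s correlations with every other construct. The sketch below replays this check in Python using the AVE values from Table 3 and the correlations from Table 4; the construct abbreviations (CT, PLV, PEOU, PU, PS, SR) are ours:

```python
import math

# AVE values from Table 3.
ave = {
    "CT": 0.716, "PLV": 0.603, "PEOU": 0.694,
    "PU": 0.780, "PS": 0.608, "SR": 0.680,
}
# Off-diagonal latent correlations from Table 4 (lower triangle).
corr = {
    ("PLV", "CT"): 0.480, ("PEOU", "CT"): 0.578, ("PEOU", "PLV"): 0.580,
    ("PU", "CT"): 0.493, ("PU", "PLV"): 0.644, ("PU", "PEOU"): 0.459,
    ("PS", "CT"): 0.591, ("PS", "PLV"): 0.654, ("PS", "PEOU"): 0.663,
    ("PS", "PU"): 0.584, ("SR", "CT"): 0.650, ("SR", "PLV"): 0.525,
    ("SR", "PEOU"): 0.644, ("SR", "PU"): 0.432, ("SR", "PS"): 0.631,
}

# Fornell–Larcker: sqrt(AVE) must exceed every correlation involving
# the construct. All six constructs pass, matching Table 4.
for c, v in ave.items():
    diag = math.sqrt(v)
    rs = [r for pair, r in corr.items() if c in pair]
    print(f"{c}: sqrt(AVE) = {diag:.3f}, max r = {max(rs):.3f}")
    assert diag > max(rs)
```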
Table 5. Application of the HTMT report to assess discriminant validity.
|  | Critical Thinking | Perceived Learning Value | Perceived Ease of Use | Perceived Usefulness | Problem-Solving Skills | Self-Regulation |
| --- | --- | --- | --- | --- | --- | --- |
| Critical thinking |  |  |  |  |  |  |
| Perceived learning value | 0.549 |  |  |  |  |  |
| Perceived ease of use | 0.645 | 0.682 |  |  |  |  |
| Perceived usefulness | 0.543 | 0.760 | 0.514 |  |  |  |
| Problem-solving skills | 0.641 | 0.755 | 0.729 | 0.630 |  |  |
| Self-regulation | 0.746 | 0.623 | 0.745 | 0.490 | 0.707 |  |
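For context, the HTMT ratio is computed from item-level correlations rather than construct scores: the mean heterotrait (between-construct) item correlation divided by the geometric mean of the average monotrait (within-construct) item correlations. The sketch below illustrates this on synthetic data for two 4-item constructs (not the study’s items); values below the conventional 0.85/0.90 cutoffs indicate discriminant validity:

```python
import numpy as np

# Synthetic data: two correlated factors, four indicators each.
rng = np.random.default_rng(1)
n = 223                                          # same n as this study
f1, f2 = rng.normal(size=(2, n))
f2 = 0.5 * f1 + np.sqrt(1 - 0.25) * f2           # factor correlation 0.5
items1 = 0.8 * f1[:, None] + 0.6 * rng.normal(size=(n, 4))
items2 = 0.8 * f2[:, None] + 0.6 * rng.normal(size=(n, 4))

R = np.corrcoef(np.hstack([items1, items2]), rowvar=False)
k = 4
hetero = R[:k, k:].mean()                        # between-construct item r's
tri = np.triu_indices(k, 1)                      # off-diagonal, within block
mono1 = R[:k, :k][tri].mean()                    # within construct 1
mono2 = R[k:, k:][tri].mean()                    # within construct 2
htmt = hetero / np.sqrt(mono1 * mono2)
print(f"HTMT = {htmt:.3f}")                      # < 0.85 suggests discriminant validity
```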
Table 6. Values associated with the asymptotic significance p and t-test for the structural model hypotheses.
| Hypotheses | Original Sample (O) | Sample Mean (M) | Standard Deviation (STDEV) | T Statistics (\|O/STDEV\|) | p Values | Results |
| --- | --- | --- | --- | --- | --- | --- |
| H1a: Perceived usefulness → critical thinking | 0.215 | 0.213 | 0.082 | 2.638 | 0.008 | Support |
| H1b: Perceived learning value → critical thinking | −0.001 | −0.001 | 0.082 | 0.017 | 0.986 | Reject |
| H1c: Perceived ease of use → critical thinking | 0.205 | 0.205 | 0.080 | 2.580 | 0.010 | Support |
| H2a: Perceived usefulness → problem-solving skills | 0.199 | 0.201 | 0.064 | 3.121 | 0.002 | Support |
| H2b: Perceived learning value → problem-solving skills | 0.237 | 0.238 | 0.072 | 3.280 | 0.001 | Support |
| H2c: Perceived ease of use → problem-solving skills | 0.279 | 0.281 | 0.066 | 4.202 | 0.000 | Support |
| H3a: Perceived usefulness → self-regulation → critical thinking | 0.037 | 0.040 | 0.033 | 1.130 | 0.259 | Reject |
| H3b: Perceived learning value → self-regulation → critical thinking | 0.076 | 0.075 | 0.041 | 1.841 | 0.066 | Reject |
| H3c: Perceived ease of use → self-regulation → critical thinking | 0.213 | 0.215 | 0.050 | 4.230 | 0.000 | Support |
| H4a: Perceived usefulness → self-regulation → problem-solving skills | 0.021 | 0.021 | 0.018 | 1.150 | 0.250 | Reject |
| H4b: Perceived learning value → self-regulation → problem-solving skills | 0.043 | 0.043 | 0.027 | 1.610 | 0.107 | Reject |
| H4c: Perceived ease of use → self-regulation → problem-solving skills | 0.120 | 0.120 | 0.036 | 3.375 | 0.001 | Support |
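The indirect-effect tests for H3 and H4 follow the usual bootstrap logic: estimate the a (predictor → mediator) and b (mediator → outcome, controlling for the predictor) paths, multiply them, and divide by the bootstrap standard deviation of that product to obtain the t statistic. The sketch below illustrates this on simulated regression data with the same sample size (n = 223); it does not reproduce the study’s SmartPLS estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated X -> M -> Y mediation data (synthetic, for illustration).
n = 223
x = rng.normal(size=n)                       # e.g. perceived ease of use
m = 0.6 * x + rng.normal(size=n)             # self-regulation (mediator)
y = 0.5 * m + 0.2 * x + rng.normal(size=n)   # e.g. critical thinking

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]               # X -> M slope
    b = np.linalg.lstsq(np.column_stack([m, x, np.ones_like(x)]),
                        y, rcond=None)[0][0]  # M -> Y, controlling for X
    return a * b

# Bootstrap: resample cases, re-estimate the product a*b each time.
boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
est = indirect_effect(x, m, y)
t_stat = est / boot.std(ddof=1)
print(f"indirect = {est:.3f}, t = {t_stat:.2f}")  # |t| > 1.96 => significant
```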
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhou, X.; Teng, D.; Al-Samarraie, H. The Mediating Role of Generative AI Self-Regulation on Students’ Critical Thinking and Problem-Solving. Educ. Sci. 2024, 14, 1302. https://doi.org/10.3390/educsci14121302


