Article

Effects of Technology Perceptions, Teacher Beliefs, and AI Literacy on AI Technology Adoption in Sustainable Mathematics Education

1 School of Mathematical Sciences, East China Normal University, Shanghai 200241, China
2 Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, Shanghai 200241, China
3 Faculty of Arts, University of Auckland, Auckland 1010, New Zealand
* Authors to whom correspondence should be addressed.
Sustainability 2025, 17(8), 3698; https://doi.org/10.3390/su17083698
Submission received: 17 March 2025 / Revised: 11 April 2025 / Accepted: 17 April 2025 / Published: 19 April 2025
(This article belongs to the Special Issue Artificial Intelligence in Education and Sustainable Development)

Abstract

Artificial intelligence has significantly transformed educational practices across disciplines. This study investigated the cognitive–behavioral mechanisms underpinning mathematics teachers’ engagement with AI teaching tools through an extended technology acceptance model. Utilizing structural equation modeling with data from 500 mathematics educators, we delineated psychological pathways connecting perceptual variables to technology engagement and pedagogical outcomes. Results revealed that perceived usefulness functioned as the primary determinant of AI engagement, while perceived ease of use operated exclusively through sequential mediational pathways, challenging conventional technology acceptance paradigms. Domain-specific factors, such as teacher AI literacy and mathematics teaching beliefs, emerged as significant mediators that conditioned technology-related behavioral responses. The mediators in this study illustrated differential attitudinal mechanisms through which perceptual variables transformed into engagement behaviors. These findings extended technology acceptance theories in educational contexts by demonstrating how domain-specific cognitive structures modulated perception–behavior relationships in professional technology adoption in mathematics education.

1. Introduction

1.1. Research Background and Problem Statement

Educational technologies have driven profound educational reform on a global scale. This transformation has fundamentally reshaped the roles of educational stakeholders, resources, and tools [1]. The proliferation of generative artificial intelligence (AI) technologies in educational contexts represents a particularly consequential contemporary development. These innovations have precipitated complex psychological responses among mathematics educators, reconfiguring their cognitive appraisal processes and decisions regarding technology integration into teaching practice [2,3]. These cognitive–behavioral responses manifest through intricate psychological mechanisms. Educators assess technological affordances against established pedagogical schemas, navigate perceived implementation risks, and ultimately enact behavioral engagement patterns that determine instructional outcomes [4].
Integrating artificial intelligence technologies into education offers multidimensional affordances that transcend traditional instructional modalities, precipitating fundamental reconsiderations of pedagogical approaches across disciplines [5]. Specifically, AI technologies in mathematics education facilitate adaptive learning environments that respond dynamically to individual cognitive trajectories, thereby cultivating personalized learning pathways calibrated to students’ developmental readiness [3]. These technologies have substantively enhanced mathematical conceptualization through sophisticated visualization algorithms, making abstract mathematical constructs accessible through multiple representational modalities; they have further expanded students’ cognitive engagement with complex mathematical structures [6].
Furthermore, AI-enabled analytics systems provide unprecedented granularity in formative assessment mechanisms, enabling the real-time diagnostic evaluation of student comprehension patterns and procedural misconceptions that traditional assessment methodologies often fail to identify [7]. The algorithmic adaptivity of AI teaching tools facilitates automated problem generation calibrated to optimal cognitive challenge thresholds. Such tools create instructional scaffolding and systematically advance students’ procedural fluency toward conceptual mastery through appropriately sequenced mathematical tasks [8]. Critically, these technological affordances operate not as substitutive mechanisms displacing educator agency but rather as augmentative tools that amplify teachers’ instructional capacities [2]. Such technology applications have enabled educators to reallocate cognitive resources from routine procedural facilitation toward higher-order pedagogical functions, including conceptual scaffolding and mathematical reasoning development [2].
Introducing these AI-enabled capabilities into mathematics instruction holds substantial promise for advancing sustainable educational development by improving instructional efficiency, cognitive accessibility, and learning outcome equity. These benefits collectively underscore the significance of examining the psychological mechanisms through which mathematics educators engage with these evolving technological resources.
The cognitive architecture supporting technology adoption decisions encompasses multiple interconnected dimensions. These include instrumental assessments regarding effectiveness enhancement, procedural evaluations of implementation complexity, and normative cognition concerning alignment with established pedagogical values. Within this dynamic psychological landscape, mathematics educators function as cognitive agents whose technology-related decision processes significantly modulate how technological affordances translate into instructional behaviors and subsequent educational outcomes [9]. The interaction between technology, educators, and content is of crucial importance for the sustainable development of educational assessment, technology integration, and pedagogical reform.
A particularly consequential psychological tension emerges between frameworks of technological acceptance and deeply entrenched epistemological belief structures regarding mathematics instruction [10]. This cognitive dissonance materializes in the perceived incongruence between technological utility assessments and pedagogical coherence evaluations. Educators’ philosophical orientations toward mathematical knowledge construction frequently conflict with the algorithmic learning models underpinning AI systems [9]. The cognitive resolution of these tensions constitutes a critical psychological process that determines whether technological innovations become meaningfully integrated within existing belief structures or remain perpetually peripheral to core instructional designs and modes. This complex interplay between cognitive belief structures and technological appraisal processes remains significantly undertheorized within educational technology adoption research, particularly in domain-specific explorations.
The extant literature has predominantly examined isolated attitudinal components of AI implementation in mathematics education, including cognitive perceptions, affective responses, or behavioral utilization patterns; for example, AI chatbots have been introduced into learning environments to strengthen learning engagement and motivation [11]. Nevertheless, a comprehensive theoretical framework elucidating the integrated cognitive–behavioral mechanisms whereby teachers’ technological appraisals influence engagement remains conspicuously absent [12,13]. This theoretical gap presents a substantial impediment to progress, as it restricts the development of psychologically informed interventions aimed at facilitating adaptive technological integration. Without a nuanced understanding of the cognitive–behavioral pathways connecting technological perceptions to adoption behaviors, educational institutions risk substantial investment in technological infrastructure without corresponding psychological preparation that addresses the cognitive barriers and behavioral facilitators of effective technology integration.
Consequently, the current study aims to address a critical gap in the technology acceptance literature by establishing a domain-specific, comprehensive framework for explaining and predicting AI teaching tool adoption among mathematics teachers. We propose and empirically validate an integrated cognitive–behavioral framework that elucidates these complex psychological mechanisms to sustainably support the future-oriented reform and implementation of AI-assisted education. To examine the complex interconnections between multiple variables, structural equation modeling (SEM) was chosen for the current study; SEM belongs to a family of sophisticated multivariate statistical approaches that excel at exploring inter-relationships (e.g., [14]). SEM can simultaneously assess multiple interdependent relationships while accounting for measurement error—a critical consideration when operationalizing latent psychological constructs such as perceptions, beliefs, and literacy. The analytical procedure involved a two-phase approach: measurement model validity was first established through confirmatory factor analysis, followed by structural model assessment to evaluate the hypothesized causal pathways.
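To make this two-phase procedure concrete, the sketch below shows how such a model could be specified and estimated in Python with the open-source semopy package; this is an illustration only (the study itself used Mplus 8.3, as described in Section 3.3.2), and the indicator names, data file, and model string are hypothetical.

```python
import pandas as pd
import semopy  # open-source SEM library, used here purely for illustration

# Lavaan-style description: "=~" defines the measurement (CFA) part,
# "~" defines a structural regression path. Indicator names are hypothetical.
DESC = """
PU  =~ pu1 + pu2 + pu3
TAE =~ tae1 + tae2 + tae3
TAE ~ PU
"""

df = pd.read_csv("teacher_survey.csv")  # hypothetical survey export

# Fitting the combined model yields both phases' output: the measurement
# estimates (loadings) and the structural path coefficients.
model = semopy.Model(DESC)
model.fit(df)
print(model.inspect())           # loadings, path estimates, p-values
print(semopy.calc_stats(model))  # chi-square, CFI, TLI, RMSEA, etc.
```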

1.2. Research Objectives and Significance

The primary objective of this investigation is to delineate the cognitive–behavioral pathways through which mathematics teachers’ perceptual appraisals of AI teaching tools influence their engagement patterns and perceived impacts on student mathematical literacy outcomes [15]. By conceptualizing teacher engagement as a multidimensional response encompassing both utilization frequency and psychological investment, this study transcends simplistic adoption–outcome correlations that have characterized many existing studies. Instead, we seek to elucidate the mediating psychological mechanisms—particularly domain-specific knowledge structures and belief systems—that modulate the relationship between cognitive technology appraisals and behavioral engagement manifestations [10].
Theoretically, this investigation will significantly contribute to the literature on technology integration in future-oriented mathematics education. It integrates two previously disparate theoretical frameworks: technology acceptance models and teacher belief system theories [13]. This theoretical synthesis enables a more comprehensive psychological lens to examine the complex interplay between technological perceptions, domain-specific cognitive structures in mathematics education, and engagement patterns that collectively determine the implementation effectiveness of technology-enhanced teaching. By extending conventional technology acceptance frameworks to specialized cognitive constructs relevant to educational contexts, particularly the perceived risks of implementing AI technologies and domain-specific belief structures, this study advances the theoretical understanding of how professional cognitive schemas condition technology-related behavioral decisions in specialized disciplinary domains [15]. This approach should better prepare and inform future technological designs and applications.
The practical significance of this research lies in its potential to inform psychologically sophisticated interventions aimed at facilitating mathematics teachers’ effective and sustainable AI integration into pedagogical practice [8]. By identifying specific determinants and mediators that facilitate or impede successful AI technology implementation in mathematics education, this study provides empirical guidance for psychological intervention initiatives, professional development programs, and institutional policies. This study will enable more effective educational interventions to optimize technological utilization and modern educational reform through targeted cognitive restructuring and behavioral activation approaches. Furthermore, the findings offer valuable insights for educational technology developers seeking to design AI tools that accommodate educators’ cognitive frameworks and behavioral implementation preferences and satisfy educational needs, thereby enhancing their adoptability and sustainable utilization [13].

2. Literature Review and Theoretical Background

2.1. Cognitive Determinants of Technology Acceptance and Adoption

The cognitive architecture underlying technology adoption decisions represents a complex interplay of perceptual, evaluative, and attributional processes that transcend simplistic utilitarian assessments [16]. These cognitive structures are particularly nuanced in education, as they intersect with pedagogical epistemologies that are often crystallized through longstanding professional practice [17].
The technology acceptance model (TAM), one of the most influential models of technology acceptance and adoption, was originally conceptualized by Davis [18] and subsequently refined through numerous theoretical iterations (e.g., [19,20]). It provided a foundational cognitive framework for understanding how mental representations of technological affordances translated into behavioral adoption intentions. The model’s core perceptual constructs—perceived usefulness and perceived ease of use—functioned as primary cognitive appraisals that subsequently conditioned behavioral response tendencies [21]. Numerous studies extended this classic model to external predictors and outcomes in cognitive, behavioral, affective, and social aspects [20,22,23].
Perceived usefulness constitutes a cognitive evaluation of the instrumental value of a technology, representing the subjective assessment of how technological implementation would enhance task performance. This cognitive appraisal operates as a direct determinant of behavioral intention and is particularly salient in professional contexts where performance efficacy is paramount [17]. The cognitive processing of usefulness perceptions is inherently domain-specific as it necessitates the mental simulation of potential implementation scenarios within existing instructional frameworks. In mathematics education specifically, this cognitive evaluation process involves complex assessments of how AI teaching tools might augment concept representation, facilitate procedural skill development, enhance problem-solving scaffolding, or enable formative assessment mechanisms [6,24].
Perceived ease of use represents a cognitive assessment of the anticipated effort expenditure, which functions both directly and indirectly on behavioral intention formation [21]. This cognitive parameter assumes particular significance in educational contexts where cognitive load considerations are already substantial due to the multifaceted demands of classroom orchestration. Within mathematics education specifically, this cognitive appraisal encompasses anticipated effort requirements for technological mastery, interface navigation, instructional integration, and task facilitation during technology-mediated learning activities [7].
The adaptation of the TAM to AI’s educational applications necessitated the expansion of its cognitive parameters to accommodate domain-specific mental models and risk assessments that characterize pedagogical technology adoption [25]. Notably, perceived risk emerged as a critical cognitive determinant in AI-empowered education, encompassing the assessment of potential negative outcomes across multiple dimensions: algorithmic reliability, data privacy, intellectual property, student cognitive autonomy, and pedagogical displacement [26]. Alongside problematic technology use, such risk cognitions and concerns might result in technology stress and anxiety [14], operating as inhibitory factors within the broader adoption decision architecture. Thus, perceived risk functioned as a cognitive counterweight to perceived benefits in the mental calculus of technology acceptance [27].
Contemporary cognitive elaborations of technology acceptance frameworks further recognized the mediating role of social–cognitive factors, particularly subjective norms and institutional facilitating conditions, which acknowledged how individual cognitive processing was embedded within broader social cognition networks [27]. This social–cognitive dimension was especially pronounced in educational institutions, where departmental cultures, administrative messaging, and collegial influence significantly shaped individual cognitive appraisals of technological innovation. The present investigation aimed to extend this cognitive framework by examining how domain-specific cognitive structures—specifically mathematical pedagogical beliefs and AI literacy—modulated the relationship between general technological perceptions and specific engagement patterns among mathematics educators.

2.2. Engagement Patterns in Educational Technology Contexts

Engagement with educational technologies constitutes a multidimensional response pattern that transcends the binary notion of adoption versus rejection; instead, it manifests across a continuous spectrum of implementation behaviors characterized by variations in frequency, duration, intensity, and sophistication [28,29]. This repertoire encompasses observable actions ranging from exploratory (experimental technology utilization), through adaptive (customization to specific instructional contexts) and integrative (incorporation within established pedagogical routines), to transformative behaviors (fundamental reconfiguration of instructional approaches to leverage technological affordances) [30,31]. The conceptual complexity of technology engagement underscores the inadequacy of unidimensional conceptualizations and highlights the need for multifaceted frameworks to capture this concept. For instance, earlier theories specified the behavioral, emotional, agentic, cognitive, and social aspects of engagement (e.g., as reflected in [14]).
Within mathematics education specifically, engagement with AI teaching tools exhibited domain-specific manifestations across instructional phases. Preparatory engagement included AI utilization for problem generation, concept exemplification, and differentiated material development. Instructional engagement encompassed real-time implementation of AI-augmented demonstrations, interactive problem-solving, and adaptive assessment mechanisms. Post-instructional engagement involved AI-facilitated feedback provision, learning analytics interpretation, and instructional refinement based on algorithmic performance insights [30]. The temporal distribution of these engagement behaviors reflected the complex patterns that characterized meaningful technology integration into specialized educational domains.
The psychological antecedents of these engagement patterns extended beyond mere intention formation to encompass a complex array of cognitive, affective, and contextual determinants [32]. Cognitive determinants include technology-related knowledge structures, self-efficacy beliefs, outcome expectancies, and implementation planning capacities; affective determinants encompass emotional responses, including technological enthusiasm, implementation anxiety, and satisfaction with previous technology encounters; contextual determinants involve institutional support structures, resource availability, professional development opportunities, and collegial behavioral modeling [33]. This multifaceted determinant structure highlighted the complex psychological foundations underlying engagement with technology adoption.
A particularly critical psychological dimension of engagement was technological literacy, which constituted the specialized knowledge architecture that enabled effective technological implementation [34]. For AI teaching tools specifically, this literacy comprised multiple knowledge domains: algorithmic literacy (understanding computational processes underlying AI systems), data literacy (comprehending how training data influences system outputs), and application literacy (knowledge of effective implementation strategies) [35,36,37]. These knowledge structures functioned as enabling factors for sophisticated engagement behaviors, facilitating progression from superficial technology utilization to pedagogically transformative implementation approaches [38]. Technological literacy and readiness were considered solutions to technostress in higher educational contexts among students [39], which could presumably apply to other educational stakeholders. Thus, the significant role of technology literacy has been increasingly noticed in the AI era, along with the changing landscape of technologies and their applications.
The relationship between engagement and technological literacy was characterized by reciprocal causality: initial engagement fostered literacy development through experiential learning, which subsequently enabled more sophisticated engagement behaviors in recursive developmental cycles [32]. This dynamic inter-relationship was moderated by several factors, including institutional scaffolding, professional learning communities, and individual self-regulatory capacities. Research suggested that engagement–literacy developmental trajectories exhibited distinct patterns across technology implementation phases: Early-stage engagement primarily enhanced functional knowledge components, while sustained engagement progressively developed critical evaluative capacities through reflective implementation experiences [40].
The present investigation extended this engagement framework by examining how various psychological determinants—specifically, technological perceptions, contextualized teacher AI literacy, and pedagogical beliefs—could collectively shape engagement patterns with AI teaching tools in mathematics education contexts. By conceptualizing engagement as a multidimensional construct rather than a simplistic adoption metric, this study would seek to illuminate the complex psychological mechanisms through which cognitive appraisals translate into observable implementation behaviors, ultimately influencing educational outcomes [41].

2.3. Domain-Specific Cognitive Structures as Psychological Mediators

Mathematics teachers’ beliefs constituted specialized cognitive architectures that functioned as interpretive schemas, through which technological innovations were evaluated, categorized, and ultimately embraced or rejected. These domain-specific belief structures operated as cognitive mediators that filtered perceptual inputs, conditioned affective responses, and ultimately shaped implementation decisions regarding educational technologies [42]. The mediational function of these belief systems was particularly pronounced when technological innovations potentially disrupted established instructional paradigms or challenged core pedagogical assumptions—conditions frequently associated with AI implementation in mathematics education.
The cognitive composition of mathematics teaching beliefs encompasses multiple interconnected dimensions that collectively form coherent mental models of effective instruction. Epistemological beliefs concern the nature of mathematical knowledge, ranging from instrumentalist perspectives (mathematics as a collection of rules and procedures) to constructivist orientations (mathematics as conceptual understanding developed through exploration and discovery) [43]. Pedagogical beliefs address optimal instructional approaches, spanning a continuum from direct instruction (emphasizing procedural fluency through demonstration and practice) to inquiry-based methodologies (prioritizing student-centered exploration and mathematical reasoning) [44]. Evaluative beliefs encompass concepts of mathematical proficiency assessment, ranging from product-oriented approaches (focusing on solution accuracy) to process-oriented frameworks (emphasizing strategy sophistication and metacognitive awareness).
The cognitive centrality and stability of these belief structures rendered them powerful mediators of technological acceptance processes. Beliefs function as cognitive filters through which teachers evaluate the congruence between technological affordances and their instructional priorities, with greater perceived alignment facilitating higher acceptance and more sustainable technology adoption behaviors [45]. This filtering mechanism operates through cognitive consistency principles: technologies perceived as reinforcing existing belief structures encountered minimal resistance, while those presenting potential belief contradictions triggered cognitive dissonance that frequently manifested as implementation resistance or superficial adoption patterns [46]. Consequently, the same technological innovation could elicit dramatically different adoption responses across teachers with divergent belief structures, even within identical institutional contexts.
The mediational function of teacher beliefs extended beyond initial acceptance decisions to influence implementation depth and sustainability. Teachers with constructivist epistemological orientations generally demonstrated greater receptivity to technological innovations that facilitated student-centered exploration, conceptual visualization, and personalized learning pathways—affordances frequently associated with sophisticated AI teaching tools [47]. Conversely, teachers with instrumentalist orientations might perceive these technological affordances as potentially undermining procedural mastery or diluting instructor authority. These contrasting cognitive interpretations of identical technological features illuminated how belief structures fundamentally reconfigured the psychological significance of technological innovations.
The technological domain of artificial intelligence introduced unique cognitive tensions within mathematics educators’ belief systems. AI teaching tools frequently embodied specific mathematical epistemologies that might align or conflict with teachers’ established belief architectures, thereby conditioning both adoption decisions and implementation approaches [48]. These cognitive tensions were particularly evident in AI applications that employed machine learning algorithms to generate alternative solution pathways or adaptive problem sequences, that is, issues diverging from teachers’ preferred instructional progression [49]. Similarly, AI tools that emphasized procedural efficiency through automated calculation or symbolic manipulation might conflict with teachers’ beliefs regarding the developmental importance of cognitive struggle and procedural mastery [50].
The dynamic interplay between technological implementations and belief structures created potential for reciprocal influences, wherein successful technology integration experiences could gradually reconfigure aspects of teachers’ belief systems through cognitive accommodation processes [51]. This bidirectional relationship underscores the complex cognitive dynamics underlying technology acceptance in specialized educational domains, where technological innovations simultaneously function as objects of cognitive evaluation and potential catalysts for belief evolution. The present investigation built upon this theoretical foundation by examining how teachers’ mathematics-specific belief structures mediated the relationship between technological perceptions and engagement, thereby illuminating the cognitive mechanisms through which general acceptance parameters translated into domain-specific implementation approaches.

2.4. Research Model and Hypotheses

The theoretical framework underpinning this study integrates the extended technology acceptance model with constructs from the theory of teachers’ belief systems. This integration comprehensively captures the complex determinants of mathematics teachers’ engagement with AI tools and the subsequent impacts on educational outcomes [52]. The integrated model posits that teachers’ perceptions of AI teaching tools—specifically perceived usefulness, perceived ease of use, and perceived risks—influence their engagement with these technologies, which in turn affects students’ mathematical literacy as perceived by teachers [53]. Crucially, the relationships between teachers’ AI technology perceptions and outcomes are hypothesized to be mediated by two key constructs: teachers’ AI literacy and mathematics teaching beliefs. These factors collectively determine how effectively teachers can leverage AI affordances to enhance students’ learning outcomes and sustain improvements in mathematics teaching.
The preceding theoretical examination illuminated a sophisticated cognitive–behavioral architecture, wherein teachers’ technological perceptions, domain-specific knowledge structures, and epistemological belief systems functioned as interconnected determinants of technology engagement and educational outcomes. Building upon the reviewed theoretical constructs, cognitive determinants of technology acceptance [16,18], engagement patterns [32], and domain-specific cognitive mediators [42], we formulated the following hypotheses (Figure 1):
H1. 
Perceived usefulness significantly and positively predicts (a) teachers’ AI literacy, (b) teachers’ mathematics beliefs, (c) teachers’ AI engagement, and (d) the perceived impact on students’ mathematics literacy.
H2. 
Perceived ease of use significantly and positively predicts (a) teachers’ AI literacy, (b) teachers’ mathematics beliefs, (c) teachers’ AI engagement, and (d) the perceived impact on students’ mathematics literacy.
H3. 
Perceived risk significantly and negatively predicts (a) teachers’ AI literacy, (b) teachers’ mathematics beliefs, (c) teachers’ AI engagement, and (d) the perceived impact on students’ mathematics literacy.
H4. 
Teachers’ AI literacy significantly and positively predicts (a) teachers’ AI engagement and (b) the perceived impact on students’ mathematics literacy.
H5. 
Teachers’ mathematics beliefs significantly and positively predict (a) teachers’ AI engagement and (b) the perceived impact on students’ mathematics literacy.
H6. 
Teacher AI engagement significantly and positively predicts the perceived impact on students’ mathematics literacy.
These hypotheses collectively reflect the complex and multi-variate nature of technology integration into educational contexts and acknowledge the critical role of teacher agency in determining how technological affordances translate into pedagogical practices and learning outcomes [54,55]. In the following sections, we present the methodologies employed to test this model.
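For illustration, the full set of hypothesized relationships can be written compactly in lavaan-style model syntax, shown here as it would be passed to an SEM package such as semopy (the study itself used Mplus). Construct abbreviations follow the paper, item counts anticipate Section 3.2, and the indicator names are hypothetical placeholders.

```python
# Hypothesized measurement and structural model for H1-H6 (Figure 1).
HYPOTHESIZED_MODEL = """
# measurement model
PU   =~ pu1 + pu2 + pu3
PEOU =~ peou1 + peou2 + peou3
PR   =~ pr1 + pr2 + pr3
TAL  =~ tal1 + tal2 + tal3
TMB  =~ tmb1 + tmb2 + tmb3 + tmb4 + tmb5
TAE  =~ tae1 + tae2 + tae3
PIML =~ piml1 + piml2 + piml3 + piml4 + piml5
# structural model
TAL  ~ PU + PEOU + PR                    # H1a, H2a, H3a
TMB  ~ PU + PEOU + PR                    # H1b, H2b, H3b
TAE  ~ PU + PEOU + PR + TAL + TMB        # H1c, H2c, H3c, H4a, H5a
PIML ~ PU + PEOU + PR + TAL + TMB + TAE  # H1d, H2d, H3d, H4b, H5b, H6
"""
```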

3. Research Methodology

3.1. Research Design and Sample

The current investigation employed a cross-sectional survey design to examine the inter-relationships between mathematics teachers’ perceptions of AI teaching tools, beliefs, AI literacy, and engagement and the impacts on students’ mathematical literacy outcomes [56] (Appendix A). This methodological approach facilitated the simultaneous examination of multiple theoretical constructs within naturalistic educational settings while maintaining the systematic measurement of key variables [57]. Though cross-sectional designs inherently constrained causal inference, this approach provided an appropriate initial framework for examining associations among the focal constructs—teacher perceptions about AI tools, AI engagement, AI literacy, discipline-specific teaching beliefs, and student achievements [58].
The sampling procedure implemented a multistage stratified sampling strategy to enhance population representativeness across diverse educational contexts [59]. The initial sampling frame encompassed secondary mathematics teachers from 47 schools within four administrative districts, stratified according to school type (public/private), geographical location (urban/suburban/rural), and socioeconomic indicators. These stratification parameters were established to mitigate selection biases that might otherwise compromise external validity [60]. Sample size determination proceeded through a priori power analysis using G*Power 3.1.9.7 software, with parameters established based on structural equation modeling requirements (α = 0.05, power = 0.80, anticipated effect size = 0.30). To account for potential non-responses and incomplete data (estimated at 25%), we distributed invitations to more participants than the minimum required sample size.
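The attrition adjustment reduces to simple arithmetic; the sketch below illustrates it with a hypothetical minimum sample size, since the paper reports the power analysis parameters but not the resulting minimum n.

```python
import math

attrition_rate = 0.25  # estimated non-response / incomplete-data rate
n_min = 400            # hypothetical minimum n from the a priori power analysis
n_invitations = math.ceil(n_min / (1 - attrition_rate))
print(n_invitations)   # 534: invitations needed to retain ~400 usable responses
```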
Each participating teacher provided responses for one intact mathematics class, thus creating teacher–class pairs as the primary unit of analysis. The demographic section collected information regarding the teachers’ gender, age, teaching experience, educational qualifications, duration of AI experience, and AI usage frequency. This comprehensive demographic profiling could enhance the interpretability of findings within specific educational contexts [61].

3.2. Measurement Tools

3.2.1. Extended Technology Acceptance Model Scales

Teachers’ perceptions regarding AI teaching tools were assessed using a contextualized adaptation of established TAM instrumentation, encompassing three primary dimensions: perceived usefulness, perceived ease of use, and perceived AI risks. The perceived usefulness (PU) subscale, comprising three items adapted from Davis [18] and Scherer et al. [62], assessed teachers’ evaluation of AI tools’ contribution to instructional effectiveness, their efficiency, and their differentiation capacity [63]. The perceived ease of use (PEOU) subscale, also containing three items derived from Davis [18] and Teo [64], measured teachers’ assessment of AI tools’ comprehensibility and learnability, as well as integration simplicity [65]. The perceived AI risks (PR) subscale, consisting of three items adapted from Featherman and Pavlou [66] and Wang et al. [67], evaluated teachers’ concerns regarding technological dependency, mathematical accuracy, and algorithmic bias [68]. All items employed five-point Likert scales ranging from “strongly disagree” (1) to “strongly agree” (5), with appropriate reverse coding for risk-related items.
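Reverse coding a five-point Likert item is a one-line transformation; a minimal sketch follows, with a hypothetical column name for a risk item.

```python
import pandas as pd

def reverse_code(item: pd.Series, scale_max: int = 5) -> pd.Series:
    """Reverse-code a 1..scale_max Likert item (1 -> 5, 2 -> 4, ...), so that
    higher scores consistently reflect more favorable appraisals."""
    return (scale_max + 1) - item

# survey["pr1"] = reverse_code(survey["pr1"])  # hypothetical risk item
```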
The instruments underwent comprehensive psychometric evaluation in previous research, demonstrating satisfactory reliability coefficients and validity indicators across diverse educational contexts. The adapted measures were pilot-tested with a representative sample of mathematics teachers (n = 42), who were not included in the main study, to verify the instrument’s clarity, relevance, and contextual appropriateness [69,70].

3.2.2. Teachers’ AI Literacy and Engagement Measures

Teachers’ AI literacy (TAL) was assessed using a three-item instrument adapted from Ng [71] and Peterson et al. [72], measuring teachers’ capacity to critically evaluate AI-generated mathematics content, comprehend AI’s capabilities and limitations, and adapt AI materials to specific learning objectives [73]. Teachers’ AI engagement (TAE) was measured using a three-item scale adapted from Schaufeli et al. [74] and Ifinedo [75], assessing experimentation frequency with diverse AI applications, enthusiasm regarding classroom AI integration, and the systematic evaluation of AI effectiveness across mathematical domains [76]. Both constructs utilized five-point response scales, with AI literacy employing agreement ratings and AI engagement utilizing frequency assessments ranging from “never” (1) to “very often” (5).
The psychometric properties of these instruments were established in previous educational technology research, with factor analysis studies confirming their construct validity and internal consistency. The current study implemented these measures with minor adaptations to ensure contextual relevance for mathematics education settings [77].

3.2.3. Teachers’ Mathematics Teaching Beliefs Questionnaire

To capture teachers’ mathematics teaching beliefs (TMB), their epistemological and pedagogical orientations toward mathematics instruction were assessed using a five-item instrument synthesizing elements from Peterson et al. [72] and Stipek et al. [78]. This measure evaluated beliefs regarding (1) guided exploration versus direct instruction, (2) conceptual understanding versus procedural fluency, (3) error tolerance in mathematical learning, (4) technology’s complementary role in instruction, and (5) the encouragement of multiple solution pathways [79]. Five-point Likert scales captured agreement levels, with higher scores indicating constructivist orientations emphasizing conceptual understanding, student autonomy, and multiple representation utilization.
This instrument was selected based on its theoretical alignment with the contemporary mathematics education literature, and it has demonstrated psychometric properties in previous studies investigating teacher belief systems. Content validity was further established through expert review involving mathematics education researchers and experienced practitioners [80].

3.2.4. Student Mathematical Literacy Assessment

The teacher-perceived impact on students’ mathematics literacy (PIML) was operationalized using a five-item instrument aligned with the PISA Mathematics Framework [81] and Wilkins [82]. This variable captured teachers’ observations of AI tools’ impact on students’ (1) mathematical problem formulation abilities, (2) concept and procedure application proficiency, (3) mathematical result interpretation capacity, (4) problem-solving persistence, and (5) mathematics learning engagement [83]. The five-point scale ranged from “strong negative impact” (1) to “strong positive impact” (5), assessing perceived changes in student capabilities attributable to AI implementation.
While acknowledging the inherent limitations of teacher-reported outcome measures, this approach was selected for several methodological reasons: (1) it facilitates consistent assessment across diverse curricular contexts where standardized achievement measures might lack content validity; (2) it enables the evaluation of multidimensional literacy constructs beyond computational proficiency; and (3) it accommodates the diverse implementation timelines across participating classrooms [84].

3.3. Data Collection and Analysis

3.3.1. Data Collection Procedures

Data collection proceeded through a systematic multi-phase process designed to maximize response quality while minimizing the administrative burden on participants [85]. Following institutional review board approval and administrative permissions from participating schools, teachers received electronic invitations containing project information, consent documentation, and personalized survey links. Survey administration employed a secure web-based platform with data encryption protocols that safeguarded participant confidentiality while enabling automated response validation and completion monitoring.
Quality control mechanisms included (1) attention check items strategically embedded within survey instruments, (2) forced-response options for critical variables with logical validation parameters, (3) timestamp monitoring to identify potentially rushed or inattentive responses, and (4) follow-up verification for anomalous response patterns. These protocols aimed to minimize missing data and ensure response integrity. For incomplete submissions, participants received automated reminders with personalized survey links, enabling completion without data duplication. All data collection procedures adhered to institutional ethical guidelines regarding voluntary participation, informed consent, confidentiality protections, and secure data storage.
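A screening pass implementing rules (1) and (3) might look like the sketch below; the column names and duration threshold are hypothetical stand-ins for whatever the survey platform exports.

```python
import pandas as pd

MIN_DURATION_SECONDS = 120  # hypothetical plausibility threshold for rule (3)

def screen_responses(responses: pd.DataFrame) -> pd.DataFrame:
    """Drop submissions that fail the embedded attention check or whose
    completion time suggests rushed, inattentive responding."""
    passed_attention = responses["attention_check"] == responses["attention_expected"]
    plausible_pace = responses["duration_seconds"] >= MIN_DURATION_SECONDS
    return responses[passed_attention & plausible_pace]
```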

3.3.2. Measurement Model Validation

The analytical approach employed Mplus 8.3 to perform a two-phase structural equation modeling procedure, beginning with measurement model validation followed by structural model assessment [86], which is commonly adopted in structural equation modeling studies in educational domains (e.g., [22,87]). The measurement phase utilized confirmatory factor analysis (CFA) with robust maximum likelihood estimation to accommodate potential non-normality in indicator distributions. This approach enabled the simultaneous evaluation of all measurement instruments within an integrated model, allowing examinations of both discriminant and convergent validity.
The measurement validation process examined several psychometric properties: (1) indicator reliability through standardized factor loadings, (2) construct reliability using Cronbach’s alpha and composite reliability indices, (3) convergent validity through average variance extracted (AVE) calculations, and (4) discriminant validity using the Fornell–Larcker criterion. Model refinement, if necessary, would proceed through an iterative process guided by both statistical considerations (modification indices and standardized residuals) and theoretical coherence, ensuring that any specification modifications maintained conceptual integrity while improving empirical fit.
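Each of these psychometric criteria has a simple closed form; the sketch below computes them directly from raw item scores and standardized CFA loadings, under the standard assumption of uncorrelated measurement errors.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / var(total))."""
    k = items.shape[1]
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / total_variance)

def ave_and_composite_reliability(loadings: np.ndarray) -> tuple[float, float]:
    """AVE and composite reliability from standardized CFA loadings,
    taking each error variance as 1 - loading**2."""
    squared = loadings ** 2
    ave = squared.mean()
    cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - squared).sum())
    return ave, cr

def fornell_larcker_holds(ave: dict, correlations: pd.DataFrame) -> bool:
    """Discriminant validity: sqrt(AVE) of each construct must exceed its
    correlations with all other constructs."""
    return all(
        np.sqrt(ave[c]) > correlations.loc[c].drop(c).abs().max()
        for c in correlations.columns
    )
```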

3.3.3. Structural Model Analysis

Hypothesis testing employed structural equation modeling with bootstrapping procedures to evaluate direct, indirect, and total effects within the proposed theoretical framework [88]. This analytical approach offered several advantages for examining complex inter-relationships, such as (1) simultaneous estimation of multiple dependence relationships, (2) accommodation of latent variables with multiple indicators, (3) explicit modeling of measurement error, and (4) assessment of mediating mechanisms within an integrated analytical framework [89].
The hypothesized structural model specified teacher technology perceptions (perceived usefulness, ease of use, and risks) as exogenous variables influencing AI literacy and mathematics teaching beliefs; teaching beliefs and AI literacy sequentially influenced teacher AI engagement, ultimately influencing the perceived impact on students’ mathematical literacy. Mediation analysis would employ bootstrapping with 5000 resamples to generate bias-corrected confidence intervals for indirect effects, providing robust inference regarding mediating mechanisms without assuming normal sampling distributions [90]. This approach would enable the decomposition of total effects into direct and specific indirect pathways, facilitating the nuanced interpretation of multi-variate relationships within the theoretical framework [91].
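The bias-corrected bootstrap logic can be illustrated on observed variables (the actual analysis estimates these effects on latent variables within the SEM in Mplus); in the sketch below, x, m, and y are hypothetical stand-ins for a predictor, a mediator, and an outcome, and the bootstrap distribution is assumed to straddle the point estimate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2025)

def indirect_effect(x, m, y):
    """a*b indirect effect from two OLS fits: m ~ x (slope a), y ~ x + m (slope b)."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

def bc_bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05):
    """Bias-corrected (BC, no acceleration) bootstrap CI for the indirect effect."""
    theta = indirect_effect(x, m, y)
    n = len(x)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample teacher-class pairs
        boot[i] = indirect_effect(x[idx], m[idx], y[idx])
    z0 = norm.ppf(np.mean(boot < theta))  # bias-correction constant
    adj = norm.cdf(2 * z0 + norm.ppf([alpha / 2, 1 - alpha / 2]))
    return theta, np.quantile(boot, adj)  # estimate and BC confidence interval
```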
Model adequacy assessment employed multiple fit indices to overcome limitations associated with exclusive reliance on chi-square statistics, which are sensitive to sample size and minor model misspecifications [92]. Model evaluation would incorporate absolute and incremental fit indices to provide a comprehensive assessment of model adequacy: the χ2 test (acknowledging its sensitivity to sample size), RMSEA with 90% confidence intervals (assessing the approximation error with precision estimation), CFI and TLI (comparing model fit against null and independence baselines), and SRMR (evaluating the average standardized residual magnitude) [93,94]. Additionally, alternative model specifications would be systematically evaluated to mitigate confirmation bias and explore potential theoretical refinements, enhancing the robustness of conclusions regarding the hypothesized mediation framework [95].
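For reference, these indices have standard closed forms, where the subscript t denotes the target model, 0 the baseline (null) model, N the sample size, p the number of observed variables, s the sample covariances, and \hat{\sigma} the model-implied covariances:

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_t - df_t,\, 0)}{df_t (N-1)}}, \qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_t - df_t,\, 0)}{\max(\chi^2_0 - df_0,\, 0)},

\mathrm{TLI} = \frac{\chi^2_0/df_0 - \chi^2_t/df_t}{\chi^2_0/df_0 - 1}, \qquad
\mathrm{SRMR} = \sqrt{\frac{2}{p(p+1)} \sum_{i \le j} \left( \frac{s_{ij} - \hat{\sigma}_{ij}}{\sqrt{s_{ii} s_{jj}}} \right)^{2}}
```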

4. Research Results

4.1. Descriptive Statistics and Sample Characteristics

The demographic composition of our sample provided essential contextual parameters for interpreting technology acceptance patterns among mathematics educators. As depicted in Table 1, the gender distribution (53% female and 47% male) approximates the demographic structure of the broader mathematics teacher population, enhancing the generalizability of findings within technology acceptance research. The sample’s age distribution (M = 38.46 years, SD = 8.74) and professional experience spectrum (M = 12.83 years, SD = 7.35) were particularly salient for technology acceptance investigations, as they spanned cohorts with potentially divergent technological socialization patterns—a critical consideration when applying extended TAM frameworks in educational settings [62,63].
The institutional diversity of the sample—with representation across elementary (33.0%), middle (35.6%), and high school (31.4%) levels, and across public (61.6%), private (25.8%), and charter/alternative (12.6%) institutions—provided a robust basis for examining technology acceptance mechanisms across varied pedagogical contexts. This diversity strengthened the ecological validity of our findings, addressing a notable limitation in previous TAM applications in specialized educational domains [25,26].
The participants’ experience with AI teaching tools (M = 26.72 months, SD = 17.53) revealed that our sample extended beyond early adoption phases, suggesting that reported attitudes likely reflected substantive engagement rather than novelty effects that confounded previous technology acceptance studies. The usage frequency distribution—with 54.2% reporting at least weekly utilization—indicated sufficient engagement for meaningful assessment of the constructs central to our theoretical framework.
The consistency in mean scores across primary constructs (ranging from 3.32 to 3.45 on a five-point scale) warranted theoretical interpretation. The observed pattern, with teacher AI literacy (TAL) demonstrating the lowest mean (M = 3.32, SD = 1.22) and perceived ease of use (PEOU) registering the highest (M = 3.45, SD = 1.20), suggested a potential theoretical tension between technical facility with AI tools and deeper pedagogical integration—a pattern consistent with domain-specific elaborations of the TAM in knowledge-intensive professional contexts [17,21].

4.2. Measurement Model Assessment

Prior to hypothesis testing, we conducted a rigorous psychometric evaluation of our measurement instruments to establish their theoretical and statistical reliability and validity. The internal consistency reliability coefficients (Cronbach’s α) ranged from 0.773 (perceived AI risks) to 0.869 (perceived impact on mathematics literacy), all exceeding the conventional threshold of 0.700 [86]. These values, presented in Table 2, indicated substantive measurement fidelity across our theoretical constructs.
The convergent validity assessment yielded average variance extracted (AVE) values ranging from 0.652 (teacher’s mathematics beliefs) to 0.777 (perceived usefulness), surpassing the established criterion of 0.500 (Table 2). This indicated that our constructs explained between 65.2% and 77.7% of the variance in their respective indicators, a substantial improvement over previous implementations of the TAM in educational technology contexts, where convergent validity was marginally established [27].
Discriminant validity was rigorously established using the Fornell–Larcker criterion. As Table 3 demonstrates, the square root of AVE for each construct on the diagonal in bold consistently exceeds inter-construct correlations, confirming that each construct captures a distinct theoretical dimension rather than alternative manifestations of the same underlying phenomenon. The correlation matrix further reveals theoretically consistent relationships among key variables, with the strongest associations observed between teachers’ AI engagement and the perceived impact on mathematics literacy (r = 0.604, p < 0.001) and between perceived usefulness and teachers’ AI engagement (r = 0.596, p < 0.001). These correlation patterns aligned with the TAM’s theoretical emphasis on perceived usefulness as a primary driver of technology engagement [18,62].
The measurement model demonstrated excellent fit across all indices. Table 4 presents the model fit indices, the recommended values, and the model evaluation results: χ2/df = 1.106, which is substantially below the conservative threshold of 3.000; RMSEA = 0.021 (90% CI = [0.000, 0.034]), indicating excellent fit with narrow confidence intervals; and CFI = 0.991 and TLI = 0.989, both exceeding the rigorous 0.950 benchmark. The SRMR value of 0.048 further confirms the model’s precision in reproducing the empirical covariance structure. These fit statistics collectively exceeded the parameters typically reported in TAM studies in educational contexts [25,27], establishing a robust foundation for subsequent structural analysis.

4.3. Structural Model Results

4.3.1. Cognitive Processing Pathways to Mathematics Teachers’ AI Literacy and Belief Structures

The structural equation model elucidated the complex psychological architecture through which cognitive appraisals influence domain-specific knowledge structures and belief systems (Table 5 and Figure 2). Hypothesis testing revealed that perceived ease of use (PEOU) significantly activates cognitive pathways to teachers’ AI literacy (TAL) (β = 0.597, p < 0.001), confirming H2a. The substantial magnitude of this cognitive processing pathway suggests that perceptions of technological accessibility function as critical cognitive precursors catalyzing the development of domain-specific technological knowledge structures among mathematics educators. This finding extended conventional technology acceptance frameworks by illuminating the specific cognitive mechanism through which general accessibility perceptions facilitated specialized knowledge acquisition and integration, a process particularly salient in knowledge-intensive professional domains where technological mastery requires substantial cognitive investment.
Contrary to the theoretical postulation in H1a, perceived usefulness (PU) demonstrated an insignificant cognitive pathway to teachers’ AI literacy (β = 0.003, p = 0.974). This null effect suggested a fundamental psychological distinction between utility-oriented and knowledge-oriented cognitive processing in specialized instructional domains. The absence of a direct cognitive linkage between usefulness perceptions and literacy development indicated that instrumental evaluations of technological utility operated through distinct psychological mechanisms that bypassed knowledge construction processes, a nuanced cognitive processing distinction that challenged the universality of standard causal technology acceptance architectures. Similarly, perceived AI risks (PR) exhibited an insignificant influence on teachers’ AI literacy (β = 0.107, p = 0.180), rejecting H3a and suggesting that risk-related cognitive appraisals influenced behavioral responses through alternative psychological mechanisms rather than through knowledge structure modifications.
Regarding cognitive pathways to belief systems, PEOU demonstrated a significant influence on teachers’ mathematics beliefs (TMB) (β = 0.328, p = 0.004), confirming H2b and illuminating how perceptions of technological accessibility activate cognitive restructuring processes that modify domain-specific epistemological schemas. This finding suggests that cognitive ease assessments could initiate belief accommodation processes, wherein existing pedagogical schemas incorporate technological elements, particularly when those technologies are perceived as cognitively accessible. However, neither PU (β = −0.121, p = 0.280) nor PR (β = 0.061, p = 0.469) established significant cognitive pathways to teacher belief structures, which rejects H1b and H3b, respectively. This differential pattern of cognitive influence reveals a theoretically significant distinction: while accessibility perceptions could initiate belief modification processes, utility and risk assessments operate through psychologically distinct mechanisms that bypass explicit belief restructuring, a cognitive processing differentiation with substantial implications for understanding how various technological perceptions influence professional belief systems in specialized domains.

4.3.2. Psychological Determinants of Teachers’ AI Engagement

The analysis of the psychological architecture underlying teachers’ AI engagement (TAE) revealed a sophisticated constellation of cognitive and knowledge-based determinants with differential influence magnitudes. Perceived usefulness emerged as the predominant cognitive determinant of teachers’ AI engagement (β = 0.522, p < 0.001), supporting H1c and confirming the central psychological proposition that utility appraisals function as the primary activating mechanisms for technology-related behavioral responses. The substantial magnitude of this cognitive–behavioral pathway indicates the primacy of instrumental evaluations in professional decision-making contexts, where performance optimization constitutes a central motivational concern.
Teachers’ AI literacy manifested as the second most potent psychological determinant of teachers’ AI engagement (β = 0.327, p < 0.001), confirming H4a and illuminating the critical role of domain-specific knowledge structures in facilitating technology implementation behaviors. This finding revealed how specialized technological knowledge functions as a psychological enabler that transforms general implementation intentions into specific behavioral manifestations, a psychological process often undertheorized in conventional technology acceptance frameworks that emphasize perceptual factors while neglecting knowledge-based behavioral determinants. Teachers’ mathematics beliefs demonstrated a comparable influence on AI engagement (β = 0.268, p < 0.001), supporting H5a and revealing how domain-specific epistemological schemas condition technological engagement through cognitive consistency mechanisms, wherein implementation behaviors align with underlying belief structures.
Perceived AI risks (PR) exhibited a significant inhibitory effect on teachers’ AI engagement (TAE) (β = −0.185, p = 0.003), confirming H3c and demonstrating how risk cognitions functioned as psychological barriers to implementation behaviors through protective psychological mechanisms. The comparative influence magnitude of this inhibitory pathway (approximately one-third of the magnitude of the facilitating usefulness pathway) suggested a psychological counterbalancing process, wherein positive utility appraisals attenuated behavioral inhibition stemming from risk perceptions—a cognitive–behavioral balancing mechanism that explains the ambivalent implementation responses frequently observed in educational technology contexts.
A particularly noteworthy finding involved the absence of a direct cognitive influence of perceived ease of use on teachers’ AI engagement (β = −0.012, p = 0.908), rejecting H2c and challenging a fundamental proposition of conventional technology acceptance models. This finding necessitated substantial theoretical reconceptualization. In specialized educational contexts, accessibility perceptions appeared to operate exclusively through indirect psychological mechanisms (via knowledge and belief structures) rather than directly activating behavioral responses. This processing distinction indicates a complex psychological architecture, wherein perceptions of ease influence knowledge development and belief modification, which subsequently shape behavioral manifestations—a sequential cognitive–behavioral mechanism that transcends the direct perception–behavior linkages postulated in traditional acceptance frameworks.

4.3.3. Determinants of Perceived Educational Outcomes: Cognitive and Behavioral Influence Pathways

The structural model revealed a sophisticated psychological architecture underlying determinants of the perceived impact on students’ mathematics literacy (PIML). Teachers’ AI engagement demonstrated a substantial direct effect on PIML (β = 0.308, p = 0.013), confirming H6 and illuminating how behavioral implementation patterns function as critical experiential mechanisms through which various psychological factors ultimately influence outcome assessments. This finding revealed the essential mediational role of behavioral engagement in the perception–outcome relationship, suggesting that behavioral implementation experiences provide critical feedback that shapes educational impact evaluations.
Teachers’ mathematics beliefs established a significant direct cognitive pathway to perceived impacts on students’ mathematics literacy (β = 0.256, p < 0.001), supporting H5b and revealing how epistemological schemas independently shape perceptions of technological educational effectiveness. This finding extended conventional technology acceptance frameworks by demonstrating how domain-specific belief structures directly influence outcome assessments beyond their effects on implementation behaviors, a cognitive processing mechanism that explains how identical technological implementations produce divergent effectiveness evaluations among educators with different belief systems. This psychological process underscores the powerful interpretive function of belief structures in professional evaluation contexts.
Perceived usefulness demonstrated a direct cognitive influence on the perceived impact on students’ mathematics literacy (β = 0.269, p = 0.019), confirming H1d and revealing how utility appraisals shape effectiveness perceptions through dual psychological pathways, i.e., directly through cognitive association mechanisms and indirectly through behavioral implementation experiences. This dual-process influence architecture illuminates how instrumental evaluations of technological utility operate simultaneously through cognitive and behavioral channels to shape overall effectiveness assessments. Similarly, perceived AI risks established a significant negative cognitive pathway to outcome perceptions (β = −0.161, p = 0.012), supporting H3d and revealing how risk cognitions influence effectiveness evaluations through parallel psychological mechanisms.
Contrary to theoretical postulations, teachers’ AI literacy exhibited an insignificant direct cognitive pathway to the perceived impact on students’ mathematics literacy (β = 0.049, p = 0.568), failing to support H4b. This unexpected finding revealed a nuanced psychological distinction: while technological knowledge structures enable behavioral implementation, their influence on outcome assessments operates predominantly through experiential feedback derived from behavioral engagement rather than through direct cognitive association mechanisms. This processing differentiation suggests that knowledge structures function primarily as behavioral enablers rather than direct determinants of effectiveness evaluations—a psychologically significant distinction for understanding how technological competence influences outcome perceptions in professional contexts.
Similarly, perceived ease of use demonstrated an insignificant direct cognitive pathway to the perceived impact on students’ mathematics literacy (β = 0.066, p = 0.493), rejecting H2d but further supporting the interpretation that accessibility perceptions operate exclusively through indirect psychological mechanisms in specialized educational domains. This finding reinforces the conceptualization of PEOU as an antecedent factor that influences outcome perceptions entirely through sequential mediational mechanisms rather than through direct cognitive associations, a processing architecture substantially divergent from conventional technology acceptance formulations that posit direct perception–outcome linkages.

4.4. Mediation Analysis Reveals Theoretical Mechanisms and Causal Pathways

To elucidate the mechanisms through which the TAM constructs influence perceived educational outcomes, we conducted a bootstrapping-based mediation analysis (5000 resamples), a methodologically robust approach for testing complex mediation patterns in structural equation models [88,89]. This analysis is summarized in Table 6 and reveals theoretically significant mediational pathways that elaborate on conventional causal TAM structures.
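For readers who wish to replicate the resampling logic, the following minimal sketch illustrates percentile bootstrapping of a single indirect effect. It deliberately simplifies our procedure: the study estimated all paths simultaneously in a latent-variable SEM, whereas this sketch approximates one pathway with ordinary least squares on observed composite scores, and all variable names and values are illustrative placeholders.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=42):
    """Percentile bootstrap of the indirect effect x -> m -> y (a*b).

    A simplified OLS approximation of one mediation pathway; the study
    itself used full SEM with latent constructs. x, m, y are 1-D arrays
    of composite scores (e.g., PU, TAE, PIML).
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        # Path a: regress mediator on predictor (slope of m ~ x).
        a = np.polyfit(xs, ms, 1)[0]
        # Path b: regress outcome on mediator, controlling for predictor.
        X = np.column_stack([np.ones(n), ms, xs])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][1]
        estimates[i] = a * b               # indirect effect for this resample
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (lo, hi)

# Hypothetical usage with simulated composite scores for 500 teachers:
rng = np.random.default_rng(0)
pu = rng.normal(size=500)
tae = 0.5 * pu + rng.normal(size=500)
piml = 0.4 * tae + 0.2 * pu + rng.normal(size=500)
effect, ci = bootstrap_indirect_effect(pu, tae, piml)
print(f"indirect effect = {effect:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

Under this logic, an indirect effect is deemed significant when the 95% percentile interval across the 5000 resamples excludes zero, which is the criterion applied to the pathways reported below.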
Teachers’ AI engagement emerged as a critical mediating mechanism in several theoretically significant pathways. The indirect effect from perceived usefulness to the perceived impact on mathematics literacy through TAE (indirect effect = 0.161, p = 0.018, 95% CI = [0.028, 0.294]) accounted for 37.4% of the total PU→PIML effect, indicating substantial partial mediation. This finding underscores the dual operation of utility perceptions, i.e., through direct cognitive influence and through behavioral engagement mechanisms, a theoretical refinement that extends standard TAM formulations.
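The reported proportion of mediation can be reproduced from the direct and indirect estimates, since the total effect is their sum:

$$\frac{\text{indirect}}{\text{total}} = \frac{0.161}{0.161 + 0.269} = \frac{0.161}{0.430} \approx 0.374.$$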
Similarly, TAE significantly mediated the relationship between perceived AI risks and PIML (indirect effect = −0.057, p = 0.047, 95% CI = [−0.114, −0.001]), demonstrating how risk perceptions inhibit perceived educational outcomes partially through reduced engagement. The confidence interval’s proximity to zero suggests that this mediational pathway, while statistically significant, exhibits marginal practical significance compared with the direct effect of risk perceptions.
The mediational role of TAE extends to the effects of both teachers’ AI literacy (indirect effect = 0.101, p = 0.030, 95% CI = [0.010, 0.192]) and teachers’ mathematics beliefs (indirect effect = 0.082, p = 0.020, 95% CI = [0.013, 0.152]) on PIML. These findings illuminate how domain-specific knowledge and beliefs influence perceived educational outcomes primarily through their effects on engagement behaviors rather than through direct cognitive mechanisms, a theoretically significant insight for understanding technology integration in specialized educational domains.
The analysis further revealed a theoretically complex serial mediation effect for the pathway from perceived ease of use through teachers’ AI literacy and teachers’ AI engagement to PIML (indirect effect = 0.060, p = 0.047, 95% CI = [0.001, 0.119]). This multi-step mediational pathway illuminates how ease of use perceptions, which showed insignificant direct effects on engagement or outcomes, exert influence through a sequential causal chain by facilitating literacy development, which subsequently enables engagement and ultimately influences outcome perceptions. This finding represents a significant theoretical elaboration of the TAM in specialized educational contexts, demonstrating how general technological perceptions translate into domain-specific outcomes through sequentially mediated pathways.
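As in standard SEM practice, this serial indirect effect equals the product of the constituent standardized path coefficients:

$$\text{indirect}_{PEOU \rightarrow PIML} = \beta_{PEOU \rightarrow TAL} \times \beta_{TAL \rightarrow TAE} \times \beta_{TAE \rightarrow PIML},$$

with the product of the estimated coefficients along this chain yielding the reported value of 0.060.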
The total indirect effect from PEOU to PIML was statistically significant (total indirect effect = 0.197, p = 0.003, 95% CI = [0.066, 0.328]) despite the insignificant direct effect (β = 0.066, p = 0.493). This pattern indicates complete mediation, revealing that PEOU influences perceived educational outcomes entirely through indirect pathways—primarily through sequential mechanisms involving literacy and engagement rather than through the direct effects hypothesized in conventional TAM formulations.
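This pattern is visible in the standard decomposition of the total effect into its direct and indirect components:

$$\text{total}_{PEOU \rightarrow PIML} = \underbrace{0.066}_{\text{direct, n.s.}} + \underbrace{0.197}_{\text{indirect, significant}} = 0.263,$$

where only the indirect component differs reliably from zero, the defining signature of complete mediation.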
The contrasting patterns of mediation for different predictors—complete mediation for PEOU versus partial mediation for PU and PR—illuminate theoretical distinctions in how different TAM components influence educational outcomes in specialized domains. These nuanced mediational patterns challenge the universality of standard causal TAM structures and suggest the need for domain-specific theoretical elaborations that account for the complex interplay between technological perceptions, specialized knowledge, pedagogical beliefs, and engagement behaviors in educational technology application.

5. Discussion and Conclusions

5.1. Main Research Findings and Theoretical Significance

This study illuminates the cognitive–behavioral mechanisms through which mathematics educators’ technological perceptions transform into engagement patterns and teacher-perceived educational outcomes in AI-mediated instructional contexts. Our empirical analysis extends conventional technology acceptance frameworks by delineating the following three primary theoretical contributions: (1) differential processing mechanisms across technology acceptance constructs, (2) domain-specific mediational pathways that transform perceptual inputs into behavioral manifestations, and (3) distinctive attitudinal routes through which technological perceptions influence educational outcomes.
The results reveal that perceived ease of use (PEOU) does not significantly predict teachers’ AI engagement (H2c). Comparing this result with the other direct paths, we identified a nuanced attitudinal pattern wherein PEOU presumably functions exclusively through sequential mediational pathways. For example, it contributes to teachers’ mathematics beliefs (H2b) and teachers’ AI literacy (H2a), both of which are significant positive predictors of teachers’ AI engagement (H4a and H5a). Such chain effects also demonstrate the value of examining domain-specific factors in technology acceptance research. In particular, this finding substantially refines conventional technology acceptance postulations, indicating that in knowledge-intensive professional domains, accessibility perceptions function as distal cognitive antecedents influencing behavior exclusively through intermediary psychological mechanisms.
Specifically, PEOU’s significantly positive effect on mathematics teachers’ AI literacy (H2a) reveals a cognitive processing sequence in which accessibility perceptions facilitate domain-specific knowledge acquisition, which may subsequently enable behavioral implementation. This sequential architecture underscores the limitations of direct perception–behavior formulations in complex professional contexts and emphasizes the critical mediating role of knowledge structures in translating perceptual inputs into behavioral manifestations. In our research context, mathematics teaching involves professional knowledge that is more abstract and transferable than that of disciplines such as the social sciences and humanities. Demographic features, including educational background, experience with educational technologies, age, and gender, may produce further variation across individual teachers, whereas H2a identifies a perceptual pathway at the level of group characteristics. As Chiu et al. [28] suggest, this cognitive processing distinction reflects the heightened complexity of professional technological implementation, which necessitates substantial knowledge development before perceptual assessments manifest behaviorally.
In contrast to the insignificant effect of PEOU, perceived usefulness (PU) establishes a direct cognitive pathway to teachers’ AI engagement (H1c), suggesting a fundamentally different psychological processing mechanism. This divergent pattern indicates a dual-route cognitive architecture where different perceptual dimensions operate through distinct mechanisms: usefulness perceptions function as proximal perceptual activators directly catalyzing engagement, while ease perceptions operate as distal precursors facilitating knowledge development and belief accommodation, which subsequently enable behavioral responses. This cognitive processing differentiation extends Granić and Marangunić’s [16] theoretical proposition regarding variable-specific influence mechanisms, suggesting that professional domains amplify these processing distinctions through heightened knowledge requirements and established belief structures.
Teachers’ AI literacy is identified as a significant determinant of AI engagement (H4a), a finding that transcends conventional acceptance frameworks by illuminating the essential role of domain-specific knowledge structures in facilitating implementation behaviors. This finding reveals how professional technological competence functions as a psychological enabler, transforming general implementation intentions into specific behavioral manifestations, a process inadequately captured in perception-centric acceptance models. Our study elucidates that mathematics teachers’ engagement with AI tools rests on their AI literacy, underscoring the pronounced significance of digital literacy in this context. Despite existing teacher training programs and projects, teachers’ experience with such technologies and their pertinent knowledge are likely to produce differences in subsequent technology adoption and student learning outcomes. This parallels traditional mathematics teaching contexts, where prior involvement in effective pedagogical arrangements provides positive feedback and encourages teachers to practice them again; AI teaching tools appear to be no exception. It is also noteworthy that, alongside such positive feedback, the discouragement of experiencing the disadvantages and limitations of AI may become a barrier to future implementation. The significance of digital literacy in teacher education and development should therefore be highlighted. This suggestion aligns with the recognized importance of Technological Pedagogical and Content Knowledge (TPACK) among STEM (Science, Technology, Engineering, and Mathematics) teachers [96]. This cognitive–behavioral primacy of knowledge structures aligns with Allen and Kendeou’s [35] proposition that educational technology implementation necessitates sophisticated knowledge architectures extending beyond general technological familiarity.
The discovery of differential mediational patterns across technological perceptions, i.e., between (1) the completely mediated effect of PEOU and (2) the partially mediated effects of PU and PR on PIML, is theoretically consequential. This pattern illuminates fundamental distinctions in cognitive–behavioral processing mechanisms: accessibility perceptions operate entirely through sequential mediational pathways, while utility and risk perceptions function through parallel processing routes, including both direct linkages and indirect pathways. This processing differentiation suggests a theoretical refinement of unitary technology acceptance models, supporting Tram’s [15] proposition that different perceptual dimensions engage distinct processing mechanisms, warranting separate theoretical conceptualization. These variable-specific processing architectures significantly advance understanding of the psychological complexity underlying technology acceptance in specialized domains.
Teachers’ mathematics beliefs substantially influence AI engagement (H5a) and the perceived impact on mathematical literacy (H5b), which illuminates how domain-specific epistemological schemas function as critical cognitive mediators to condition evaluations of both implementation and effectiveness. This finding extends conventional frameworks by demonstrating how professional belief structures shape engagement through cognitive consistency mechanisms that align implementation behaviors with underlying epistemologies. The dual influence of belief structures reveals their powerful mediational function in professional contexts, supporting Drijvers and Sinclair’s [48] contention that educational technology implementation is filtered through established pedagogical belief systems that determine both behavioral responses and effectiveness assessments.
The complex inter-relationships among perceived risks, usefulness perceptions, and engagement suggest a sophisticated cognitive balancing mechanism wherein positive utility appraisals partially counterbalance risk-related inhibitory effects. Perceived AI risks significantly and negatively influence teachers’ AI engagement (H3c), forming a psychological barrier to technology implementation, yet the comparative magnitude of this effect (approximately one-third of the positive usefulness effect) suggests a counterbalancing dynamic that explains the ambivalent implementation patterns frequently observed in educational technology contexts. This mechanism extends Hazzan-Bishara et al.’s [12] conceptualization of technology adoption as resulting from dynamic tensions between facilitating and inhibiting factors. In our study, potential reasons include teacher-perceived threats that AI tools might replace human teachers or foster academic misconduct. Risks in academic, developmental, affective, and other domains can collectively offset the appeal of the perceived usefulness and ease of use of such tools. Such explanations can enrich research on AI and educational ethics. Previous studies have elucidated the potential sources of higher-education students’ technostress and ethical concerns about AI technologies (e.g., [39]). In the reshaped practice of teaching, instructors may hold distinctive attitudes that are worth exploring further.
Teachers’ AI engagement is a critical mediating variable across multiple pathways, which substantiates its conceptualization as a central psychological process transforming perceptual inputs into educational outcomes. This mediational primacy supports Bond et al.’s [32,33] theoretical proposition regarding engagement as a multidimensional construct functioning as an essential translational mechanism. Our findings extend this conceptualization by empirically delineating the specific determinants shaping engagement behaviors, thereby illuminating the complex psychological architecture underlying observable implementation patterns.
The significant chain mediation pathways from PEOU to PIML via TAL and TAE represent a theoretically significant elaboration of cognitive–behavioral processing sequences. This multi-step mediational chain illuminates how general technological perceptions translate into domain-specific outcomes through sequential psychological processes, transcending conventional models’ parsimonious formulations while enhancing explanatory sophistication. This elaborated cognitive–behavioral sequence supports Wen and Cai’s [52] proposition regarding the necessity of multiple mediational frameworks for understanding complex psychological processes while empirically delineating the specific sequential mechanisms through which perceptual inputs influence outcome assessments.
The insignificant direct effect of teachers’ AI literacy on the perceived impact on mathematics literacy (H4b), coupled with its significant indirect effect through engagement, suggests a psychological processing distinction, wherein knowledge structures influence outcome assessments primarily by enabling implementation rather than through a direct cognitive association. This finding reveals a theoretically significant dissociation between knowledge possession and outcome evaluation, suggesting knowledge structures function predominantly as behavioral enablers rather than direct determinants of effectiveness assessments. This processing differentiation extends Li et al.’s [38] theoretical distinction between technological knowledge and implementation effectiveness.
In summary, the above findings have collectively clarified our understanding of the complex cognitive–behavioral architecture underlying technology acceptance in specialized educational domains. While certain core propositions regarding the primacy of utility perceptions retain validity across contexts, the psychological mechanisms through which perceptions translate into behaviors and learning outcomes exhibit domain-specific complexities necessitating substantial theoretical elaboration. The differential processing patterns, sequential mediational chains, and variable-specific influence mechanisms identified collectively illuminate the sophisticated psychological architecture underlying technology adoption decisions in professional educational contexts, substantially advancing understanding of the cognitive–behavioral mechanisms determining implementation effectiveness in technology-mediated educational environments.

5.2. Practical Implications

The empirical findings from this investigation yield substantial practical implications for mathematics education stakeholders seeking to optimize AI technology integration within pedagogical frameworks. These implications extend across multiple levels of educational practice, from individual teacher development to systemic implementation strategies.

5.2.1. Optimizing AI Technology Training for Mathematics Teachers

Our findings regarding the primacy of perceived usefulness in determining teachers’ AI engagement (H1c) suggest that professional development initiatives should emphasize concrete pedagogical benefits rather than technological features in isolation. This represents a significant reorientation from conventional technology training approaches that often prioritize operational functionality over pedagogical application. Training for mathematics teachers should systematically demonstrate how specific AI functionalities address persistent instructional challenges, such as differentiation, formative assessment, and conceptual visualization, thereby establishing clear utility connections that catalyze adoption intentions.
The significant influence of teachers’ AI literacy on engagement (H4a), coupled with PEOU’s substantial effect on literacy development (H2a), indicates that professional development should adopt a sequenced approach, beginning with accessibility-focused instruction that minimizes perceived complexity, challenges, and technostress, progressing to domain-specific literacy development, and culminating in pedagogical integration. This multi-phase approach aligns with Nti-Asante’s [97] iterative design framework for implementing mathematics education technology, which emphasizes progressive competence development rather than comprehensive simultaneous skill acquisition.
The negative influence of perceived AI risks on engagement (H3c) suggests that professional development should explicitly address potential concerns, particularly regarding algorithmic reliability, equity implications, and student dependency risks, rather than emphasizing only positive affordances. Training modules should incorporate guided critical analyses of AI-generated mathematical content to develop teachers’ evaluative capacities, thereby transforming risk perceptions from adoption barriers into professional judgment opportunities. This recommendation extends Busuttil and Calleja’s [10] finding that mathematics teachers’ risk concerns can be productively reframed as opportunities for developing critical technological discernment rather than as impediments to adoption.

5.2.2. Leveraging Teachers’ Mathematics Beliefs in Technology Integration

The significant influence of teachers’ mathematics beliefs on both engagement (H5a) and perceived impact on students’ mathematics literacy (H5b) indicates that technology integration initiatives should actively engage with teachers’ existing pedagogical philosophies rather than imposing technological imperatives that may conflict with core instructional values. Professional development facilitators should explicitly connect AI functionalities to diverse mathematical teaching approaches—from constructivist exploration to procedural fluency development—demonstrating how various technological affordances can enhance rather than displace preferred instructional methodologies.
This approach necessitates differentiated professional development that acknowledges the heterogeneity of mathematics teaching philosophies rather than presuming a uniform pedagogical stance. Implementation protocols should incorporate explicit reflection on how specific AI capabilities align with individual teachers’ mathematical learning theories, creating coherence between technological affordances and pedagogical values. This recommendation extends Chou et al.’s [9] finding that congruence between technological capabilities and existing pedagogical beliefs constitutes a critical precondition for meaningful technology integration in mathematics education.

5.2.3. Enhancing AI’s Impact Through Engagement-Centered Implementation

The significant mediating role of teachers’ AI engagement across multiple pathways suggests that implementation strategies should prioritize creating sustained interaction opportunities rather than merely providing access or initial training. School leaders should establish collaborative exploration communities that normalize regular experimentation with AI tools, systematic reflection on implementation outcomes, and iterative refinement of integration approaches. These communities should incorporate structured sharing of successful integration strategies, creating a professional knowledge ecosystem that accelerates collective engagement.
The identification of a significant sequential mediation pathway (PEOU→TAL→TAE→PIML) indicates that implementation timelines should accommodate the progressive development of engagement behaviors rather than expecting immediate pedagogical impact. Administrative evaluation frameworks should recognize the developmental nature of technology integration, with metrics that evolve from adoption and exploration indicators to sophisticated pedagogical application measures over extended implementation periods. This recommendation aligns with Henkel et al.’s [8] finding that educational technology efficacy in mathematics contexts emerges through progressive implementation phases rather than through immediate transformation.

5.2.4. Balancing Efficiency and Pedagogical Integrity

The complex influencing patterns on the perceived impact on mathematics literacy, including direct effects from PU, PR, and TMB alongside indirect effects through engagement, suggest that implementation guidance should balance efficiency-oriented and pedagogically oriented integration approaches. Mathematics instructional leaders should develop AI integration rubrics that evaluate both operational effectiveness (time efficiency and task completion) and mathematical learning integrity (conceptual understanding, problem-solving autonomy, and cognitive engagement). This dual-focus evaluation framework would prevent technological implementation that achieves procedural efficiency at the expense of deeper mathematical learning processes.
This balanced approach addresses the theoretical tension identified in our findings: while utility perceptions strongly drive adoption decisions, mathematics teaching beliefs independently shape impact perceptions. Implementation protocols should therefore incorporate explicit consideration of how efficiency gains through AI tools can complement rather than compromise core mathematical learning principles. This recommendation extends Shin et al.’s [50] finding that effective STEAM programs integrating data science and AI technologies in mathematics education require explicit alignment between technological efficiencies and substantive disciplinary learning processes.

5.2.5. Systemic Implementation Considerations

Beyond individual and classroom-level implications, our findings suggest several systemic considerations for educational policymakers and institutional leaders. The differential influence magnitudes of various factors on AI engagement and perceived impact indicate that comprehensive implementation strategies should address multiple dimensions simultaneously rather than focusing exclusively on technological infrastructure or training provision.
Specifically, the substantive influence of mathematics teaching beliefs on both engagement and perceived impact suggests that technology integration policies should acknowledge and accommodate pedagogical diversity rather than presuming a singular “best practice” approach to AI implementation. Policy frameworks should establish broad parameters for appropriate AI utilization while preserving instructional autonomy regarding specific integration methodologies. This recommendation aligns with Lazarides et al.’s [98] finding that teachers’ motivational beliefs influence student outcomes through differentiated teaching practices rather than through standardized implementation approaches.
The identification of teachers’ AI literacy as a critical mediating mechanism between ease of use perceptions and engagement suggests that credentialing and professional development systems should incorporate domain-specific technological competence standards rather than generic digital literacy frameworks. These standards should explicitly address the unique characteristics of AI applications in mathematics instruction, including algorithm evaluation, output verification, and pedagogical adaptations of AI-generated content. This recommendation extends Pan and Wang’s [31] proposition regarding the necessity of context-specific AI literacy frameworks for educators in different disciplinary domains.

5.3. Limitations and Future Research Directions

5.3.1. Limitations and Justifications

The current study yields theoretically and practically significant insights. However, we acknowledge that it contains some methodological and conceptual limitations. As in previous studies, these limitations are not fatal, but researchers should be aware of them and interpret the results with appropriate caution.
The cross-sectional design of this study is appropriate for initial model testing; however, it precludes definitive causal inferences regarding the temporal relationships among theoretical constructs. While our structural equation modeling approach enables theoretical path analysis, the contemporaneous measurement of all variables introduces potential bidirectionality concerns, particularly regarding the relationships between literacy, beliefs, and engagement. As Zwart et al. [99] noted, technology integration in educational contexts often involves reciprocal rather than unidirectional relationships among key constructs, a complexity that cross-sectional designs cannot fully disentangle. Nevertheless, in line with numerous studies on the predictors and impacts of technology acceptance, cross-sectional designs retain substantial research and practical value for guiding educational technology application and development.
The reliance on self-reported measures for both predictor and outcome variables introduces potential common method bias concerns, notwithstanding our rigorous psychometric validation procedures. As Yi et al. [58] have noted, teacher perceptions of technological educational impact may diverge from objectively measured student learning outcomes, a distinction our measurement approach cannot address. This limitation is particularly salient regarding the terminal outcome variable (perceived impact on mathematics literacy), which captures teacher perceptions rather than direct student assessments. However, because self-reported data offer researchers across educational domains the most direct means of exploring psychological mechanisms such as technology acceptance (e.g., [14,100]), the current study still contributes substantially to the existing literature.
The sampling approach, while yielding an educationally diverse participant pool, may not fully represent the broader population of mathematics educators, particularly those in rural or under-resourced settings where technological infrastructure constraints may introduce additional acceptance barriers. As Chen and Liu [59] have noted, technology acceptance mechanisms may operate differently in resource-constrained educational environments, a contextual variation that our sample may not adequately capture.
While our theoretical framework integrates TAM constructs with domain-specific factors (mathematics teachers’ AI literacy and teaching beliefs), it does not fully capture the multidimensional nature of each construct domain. Our operationalization of teachers’ mathematics beliefs, although psychometrically robust, necessarily simplifies the complex belief structures that mathematics educators hold regarding teaching and learning processes. As Forgasz and Leder [42] have noted, mathematics teaching beliefs encompass multiple dimensions—epistemological, pedagogical, and evaluative—that may interact differently with technological perceptions.
Similarly, our measurement of teachers’ AI literacy may not fully capture the multifaceted nature of this emerging competence domain, although it demonstrated strong psychometric properties. As Allen and Kendeou [35] have noted, AI literacy encompasses technical, critical, and creative dimensions that may exert differential influences on engagement behaviors and educational applications.
Our theoretical framework, while incorporating risk perceptions as a critical extension to standard TAM formulations, does not comprehensively address the diverse ethical considerations that may influence AI acceptance in educational contexts. As Hazzan-Bishara et al. [12] have noted, ethical concerns regarding algorithmic bias, intellectual autonomy, and assessment validity constitute distinct dimensions that may influence technology acceptance through different mechanisms.
Finally, while our model addresses the perceived impact on mathematics literacy as the terminal outcome variable, it does not comprehensively capture the full range of potential educational outcomes that AI technology integration might influence. As Sanders et al. [83] have noted, mathematics education encompasses multiple outcome domains—procedural fluency, conceptual understanding, problem-solving capacity, and mathematical identity development—that may be differentially affected by technological integration.

5.3.2. Implications for Future Research and Teaching

In response to the research limitations and findings, the current study offers several suggestions for future research and teaching in technology-assisted education.
Methodologically, future research should employ longitudinal designs that capture the evolving relationships among technological perceptions, specialized knowledge, pedagogical beliefs, and engagement behaviors across extended implementation periods. Such designs would enable a more robust examination of potential reciprocal and developmental relationships, particularly regarding how initial engagement experiences might recursively influence subsequent perceptions and beliefs. Complementary experimental approaches incorporating randomized professional development interventions would further strengthen causal inferences regarding the malleability of key mediating mechanisms. Qualitative approaches to technology acceptance can also advance this topic by elucidating the complex configurations of conditions underlying technology adoption, sustainable reform, and integration with educational technologies (e.g., [101]).
Additionally, future research should incorporate multi-method measurement approaches that triangulate self-reported perceptions with behavioral observations, artifact analysis, and direct student outcome assessments through standardized tests. Mixed-method designs integrating qualitative classroom observations with quantitative engagement and outcome measures would provide a richer contextual understanding of how technological perceptions translate into instructional behaviors and student learning experiences. Objective measures of student mathematics competence development would further strengthen validity by directly assessing the educational outcomes that our model addresses through teacher perceptions.
Contextual and demographic factors could further enrich research on such topics. In our study, demographic features were not treated as moderators of the hypotheses. One reason is that, given the limited scope of this study, exploring numerous contextual and demographic moderators would have been impractical and unreliable. More importantly, a reliable moderation analysis requires a more balanced sample across subgroups, whereas ours contained dominant proportions of certain categories. Such imbalance is common, as the demographic characteristics of samples in previous studies typically reflect domain-specific realities. Future studies interested in these moderators should aim to collect more comprehensive and balanced samples before statistical analysis.
For data collection, future research should employ stratified sampling designs that ensure representation across diverse educational contexts, with particular attention to resource disparities that may moderate technology acceptance relationships. Comparative analyses across different educational environments would illuminate how contextual factors condition the mechanisms through which various perceptions influence engagement behaviors and educational outcomes. Multi-level modeling approaches would further enhance contextual understanding by examining how institutional and systemic factors moderate individual-level technology acceptance processes [102].
Regarding the conceptual framework, future research should adopt more nuanced operationalizations of mathematics teaching beliefs, distinguishing between different belief dimensions and examining their differential interactions with technological perceptions and engagement behaviors. Latent profile analyses identifying distinct belief constellations would further enhance understanding of how different pedagogical orientations condition technology acceptance processes in mathematics education contexts.
Future research should develop and validate more comprehensive AI literacy measures that distinguish between technical operational knowledge, critical evaluative capacities, and creative adaptive competencies [103,104]. Such measures would enable a more nuanced examination of how different literacy dimensions influence engagement behaviors and perceived educational impacts. Longitudinal investigations of literacy development trajectories would further enhance understanding of how different dimensions evolve through professional experience and formal development initiatives.
Future research should incorporate more comprehensive ethical consideration frameworks, distinguishing between different dimensions of ethical concern and examining their differential influences on acceptance processes (e.g., [22]). Mixed-method approaches integrating ethical reasoning analyses with quantitative acceptance measures would provide a richer understanding of how various ethical considerations condition technology integration decisions in mathematics education contexts.
Future research should adopt more differentiated outcome frameworks that distinguish between different mathematics learning dimensions and examine how various acceptance factors influence each dimension through potentially distinct mechanisms. A longitudinal mixed-method design that tracks multiple outcome domains across extended implementation periods would provide a more comprehensive understanding of how technology acceptance processes influence diverse educational outcomes in mathematics education contexts.

5.3.3. Emerging Research Frontiers

Beyond addressing methodological and conceptual limitations, our findings suggest several innovative research frontiers that could substantively advance the understanding of AI technology acceptance in mathematics education contexts.
First, the identification of teachers’ AI engagement as a critical mediating mechanism suggests the need for more sophisticated conceptualization and measurement of engagement behaviors in educational technology contexts. Future research should develop multidimensional engagement frameworks that distinguish between different engagement types—exploratory, adaptive, evaluative, and collaborative—and examine their differential relationships with various perceptions and outcomes. Such research would extend Bond et al.’s [32,33] conceptual work on engagement dimensionality into the specific domain of AI-enhanced mathematics education.
Second, the complex serial mediation pathway identified in our analysis (PEOU→TAL→TAE→PIML) suggests the need for more sophisticated process-oriented research examining the developmental trajectories through which general technological perceptions translate into specialized educational outcomes. Future research employing experience sampling methodologies, microgenetic designs, and qualitative process tracing would provide a richer understanding of how these sequential mechanisms unfold in authentic educational contexts. Such process-oriented research would extend Otto et al.’s [55] work on feedback systems in educational technology contexts by illuminating the micro-processes through which perceptions transform into behaviors and outcomes.
Third, the differential mediation patterns identified across different TAM components (complete versus partial mediation) suggest the need for more nuanced theoretical elaborations that accommodate construct-specific influence mechanisms rather than presuming universal causal structures. Future theoretical and empirical work should develop and test moderated mediation frameworks that specify how different contextual factors condition the mechanisms through which various perceptions influence behaviors and outcomes. Such research would extend Scherer et al.’s [62] meta-analytic work by developing more contextually sensitive theoretical models of technology acceptance in specialized educational domains.
Finally, the rapidly evolving nature of AI technologies in educational contexts suggests the need for anticipatory research examining how acceptance mechanisms might shift as these technologies become more sophisticated and ubiquitous (e.g., [105]). Future research employing longitudinal panel designs, technological forecasting methodologies, and scenario-based experimental approaches would enhance understanding of how acceptance processes evolve alongside technological capabilities. Such research would extend Li’s [2] work on internal and external adoption influences by examining their dynamic evolution across technological development cycles.
In conclusion, while acknowledging these limitations, our investigation has established a robust foundation for understanding the complex mechanisms through which extended TAM constructs influence AI technology acceptance and perceived educational impacts in mathematics education contexts. The identified limitations do not undermine the theoretical and practical significance of our findings but rather suggest productive avenues for future research that would further advance understanding of this critical domain at the intersection of technological innovation and mathematics education.

6. Conclusions

This investigation delineates the cognitive–behavioral mechanisms through which technology perceptions influence mathematics teachers’ AI engagement and perceived educational outcomes. Our findings reveal distinct psychological pathways that significantly refine conventional technology acceptance frameworks in specialized educational contexts.
Three key mechanisms emerge from our structural analysis. First, perceived usefulness directly impacts engagement, while perceived ease of use functions exclusively through sequential mediational pathways, challenging standard TAM formulations that presume uniform causal relationships. Second, the domain-specific factors of teachers’ AI literacy and mathematics teaching beliefs significantly mediate technology acceptance processes, demonstrating how professional cognitive structures transform general perceptions into specific implementation behaviors. Third, the complete mediation pattern for perceived ease of use versus partial mediation for perceived usefulness illustrates how different perceptual dimensions operate through distinct influence mechanisms rather than through uniform psychological processes.
These findings necessitate multidimensional professional development approaches that address both technological perceptions and domain-specific cognitive structures. Effective interventions should emphasize concrete pedagogical benefits while developing specialized literacy and aligning technological affordances with existing belief structures, a sequence that reflects the complex mediational architecture identified in our model.
While acknowledging cross-sectional design limitations, this research establishes a foundation for understanding how technological perceptions transform into engagement behaviors and perceived outcomes in mathematics education. Future longitudinal studies employing multi-method measurement approaches would further illuminate the developmental trajectories and reciprocal relationships among these constructs in increasingly AI-mediated educational environments.

Author Contributions

Conceptualization, T.L. and J.Z.; methodology, T.L. and J.Z.; software, J.Z.; validation, J.Z.; formal analysis, T.L.; resources, J.Z.; data curation, J.Z. and T.L.; writing—original draft preparation, J.Z. and T.L.; writing—review and editing, T.L. and J.Z.; visualization, J.Z. and T.L.; supervision, B.X.; project administration, B.X.; funding acquisition, B.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Shanghai Key Laboratory of Pure Mathematics and Mathematical Practice, and was funded in part by the Science and Technology Commission of Shanghai Municipality (No. 22DZ2229014).

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and has been approved by the Ethics Committee of East China Normal University, with the approval number HR 806-2023.

Informed Consent Statement

All participants voluntarily signed informed consent forms after being fully informed about the purpose, process, potential risks, and benefits of this study. Data collection and processing for this study were conducted anonymously.

Data Availability Statement

The data supporting this study’s findings are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Measurement scales and questionnaire items.
| Construct | Items | Source | Scale Type |
| --- | --- | --- | --- |
| Perceived Usefulness (PU) | PU1: AI teaching tools contribute to enhanced instructional effectiveness. PU2: AI teaching tools improve instructional efficiency. PU3: AI teaching tools enhance differentiation capacity. | Davis [17]; Scherer et al. [58] | 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) |
| Perceived Ease of Use (PEOU) | PEOU1: AI teaching tools are comprehensible. PEOU2: AI teaching tools are learnable with reasonable effort. PEOU3: AI teaching tools integrate simply into existing instructional practices. | Davis [17]; Teo [60] | 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) |
| Perceived AI Risks (PR) | PR1: AI teaching tools may create technological dependency. PR2: AI teaching tools may have mathematical accuracy concerns. PR3: AI teaching tools may perpetuate algorithmic bias. | Featherman & Pavlou [62]; Wang et al. [63] | 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) * |
| Teacher’s AI Literacy (TAL) | TAL1: Capacity to critically evaluate AI-generated mathematics content. TAL2: Comprehension of AI capabilities and limitations. TAL3: Ability to adapt AI materials to specific learning objectives. | Ng [67]; Peterson et al. [68] | 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) |
| Teacher’s AI Engagement (TAE) | TAE1: Experimentation frequency with diverse AI applications. TAE2: Enthusiasm regarding classroom AI integration. TAE3: Systematic evaluation of AI effectiveness across mathematical domains. | Schaufeli et al. [70]; Ifinedo [71] | 5-point frequency scale (1 = never, 5 = very often) |
| Teacher’s Mathematics Teaching Beliefs (TMB) | TMB1: Beliefs regarding guided exploration versus direct instruction. TMB2: Beliefs regarding conceptual understanding versus procedural fluency. TMB3: Beliefs regarding error tolerance in mathematical learning. TMB4: Beliefs regarding technology’s complementary role in instruction. TMB5: Beliefs regarding multiple solution pathway encouragement. | Peterson et al. [74]; Stipek et al. [75] | 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) ** |
| Perceived Impact on Mathematics Literacy (PIML) | PIML1: Impact on students’ mathematical problem formulation abilities. PIML2: Impact on students’ concept and procedure application proficiency. PIML3: Impact on students’ mathematical result interpretation capacity. PIML4: Impact on students’ problem-solving persistence. PIML5: Impact on students’ mathematics learning engagement. | PISA Mathematics Framework [72]; Wilkins [78] | 5-point impact scale (1 = strong negative impact, 5 = strong positive impact) |

Note: * indicates that the p-value is less than 0.05, showing significance; ** indicates that the p-value is less than 0.01, showing high significance.

References

  1. Wang, C.L.; Chen, X.J.; Yu, T.; Liu, Y.D.; Jing, Y.H. Education reform and change driven by digital technology: A bibliometric study from a global perspective. Humanit. Soc. Sci. Commun. 2024, 11, 256. [Google Scholar] [CrossRef]
  2. Li, M. Integrating artificial intelligence in primary mathematics education: Investigating internal and external influences on teacher adoption. Int. J. Sci. Math. Educ. 2024, 1–26. [Google Scholar] [CrossRef]
  3. Yeo, S.; Moon, J.; Kim, D.J. Transforming mathematics education with AI: Innovations, implementations, and insights. Math. Educ. 2024, 63, 387–392. [Google Scholar] [CrossRef]
  4. Pelton, T.; Pelton, L.F. Using Generative AI in Mathematics Education: Critical Discussions and Practical Strategies for Preservice Teachers, Teachers, and Teacher Educators. In Proceedings of the Society for Information Technology & Teacher Education International Conference (SITE 2024), Las Vegas, NV, USA, 17–21 March 2024. [Google Scholar]
  5. Alam, A.; Mohanty, A. Educational technology: Exploring the convergence of technology and pedagogy through mobility, interactivity, AI, and learning tools. Cogent Eng. 2023, 10, 2283282. [Google Scholar] [CrossRef]
  6. Awang, L.A.; Yusop, F.D.; Danaee, M. Current practices and future direction of artificial intelligence in mathematics education: A systematic review. Int. Electron. J. Math. Educ. 2025, 20, em0823. [Google Scholar] [CrossRef] [PubMed]
  7. Bešlić, A.; Bešlić, J.; Kamber Hamzić, D. Artificial Intelligence in Elementary Math Education: Analyzing Impact on Students Achievements. In Proceedings of the International Conference on Digital Transformation in Education and Artificial Intelligence Application, Cham, Switzerland, 24–26 April 2024. [Google Scholar]
  8. Henkel, O.; Horne-Robinson, H.; Kozhakhmetova, N.; Lee, A. Effective and scalable math support: Evidence on the impact of an AI-tutor on math achievement in Ghana. arXiv 2024, arXiv:2402.09809. [Google Scholar]
  9. Chou, C.M.; Shen, T.C.; Shen, T.C.; Shen, C.H. Teachers’ adoption of AI-supported teaching behavior and its influencing factors: Using structural equation modeling. J. Comput. Educ. 2024, 1–44. [Google Scholar] [CrossRef]
  10. Busuttil, L.; Calleja, J. Teachers’ beliefs and practices about the potential of ChatGPT in teaching Mathematics in secondary schools. Digit. Exp. Math. Educ. 2025, 11, 140–166. [Google Scholar] [CrossRef]
  11. Roca, M.D.L.; Chan, M.M.; Garcia-Cabot, A.; Garcia-Lopez, E.; Amado-Salvatierra, H. The impact of a chatbot working as an assistant in a course for supporting student learning and engagement. Comput. Appl. Eng. Educ. 2024, 32, e22750. [Google Scholar] [CrossRef]
  12. Hazzan-Bishara, A.; Kol, O.; Levy, S. The factors affecting teachers’ adoption of AI technologies: A unified model of external and internal determinants. Educ. Inf. Technol. 2025, 1–27. [Google Scholar] [CrossRef]
  13. Ma, S.; Lei, L. The factors influencing teacher education students’ willingness to adopt artificial intelligence technology for information-based teaching. Asia Pac. J. Educ. 2024, 44, 94–111. [Google Scholar] [CrossRef]
  14. Lin, Y.; Niu, R.; Yu, Z. Roles of ambiguity tolerance and learning effectiveness: Structural equation modeling evidence from EFL students’ perceptions of factors influencing peer collaboration. Lang. Teach. Res. 2023, 13621688231216201. [Google Scholar] [CrossRef]
  15. Tram, N.H.M. Unveiling the drivers of AI integration among language teachers: Integrating UTAUT and AI-TPACK. Comput. Sch. 2024, 1–21. [Google Scholar] [CrossRef]
  16. Granić, A.; Marangunić, N. Technology acceptance model in educational context: A systematic literature review. Br. J. Educ. Technol. 2019, 50, 2572–2593. [Google Scholar] [CrossRef]
  17. Gupta, C.; Gupta, V.; Stachowiak, A. Adoption of ICT-based teaching in engineering: An extended technology acceptance model perspective. IEEE Access 2021, 9, 58652–58666. [Google Scholar] [CrossRef]
  18. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  19. Lin, Y.; Yu, Z. Extending Technology Acceptance Model to higher-education students’ use of digital academic reading tools on computers. Int. J. Educ. Technol. High. Educ. 2023, 20, 34. [Google Scholar] [CrossRef]
  20. Wang, C.L.; Dai, J.; Zhu, K.K.; Yu, T.; Gu, X.Q. Understanding the continuance intention of college students toward new e-learning spaces based on an integrated model of the TAM and TTF. Int. J. Hum. Comput. Int. 2023, 40, 8419–8432. [Google Scholar] [CrossRef]
  21. Zhonggen, Y.; Xiaozhi, Y. An extended technology acceptance model of a mobile learning technology. Comput. Appl. Eng. Educ. 2019, 27, 721–732. [Google Scholar] [CrossRef]
  22. Lin, Y.; Yu, Z. Learner perceptions of artificial intelligence-generated pedagogical agents in language learning videos: Embodiment effects on technology acceptance. Int. J. Hum-Comput. Int. 2025, 41, 1606–1627. [Google Scholar] [CrossRef]
  23. Shahzad, M.F.; Xu, S.; Asif, M. Factors affecting generative artificial intelligence, such as ChatGPT, use in higher education: An application of technology acceptance model. Br. Educ. Res. J. 2025, 51, 489–513. [Google Scholar] [CrossRef]
  24. Li, M.; Manzari, E. AI utilization in primary mathematics education: A case study from a southwestern Chinese city. Educ. Inf. Technol. 2025, 1–34. [Google Scholar] [CrossRef]
  25. Al-Rahmi, A.M.; Shamsuddin, A.; Alturki, U.; Aldraiweesh, A.; Yusof, F.M.; Al-Rahmi, W.M.; Aljeraiwi, A.A. The influence of information system success and technology acceptance model on social media factors in education. Sustainability 2021, 13, 7770. [Google Scholar] [CrossRef]
  26. Han, J.H.; Sa, H.J. Acceptance of and satisfaction with online educational classes through the technology acceptance model (TAM): The COVID-19 situation in Korea. Asia Pac. Educ. Rev. 2022, 23, 403–415. [Google Scholar] [CrossRef]
  27. Carvajal-Morales, J.M.; León-Plúas, E.E.; Valenzuela-Cobos, J.D.; Guevara-Viejó, F. Educational design in the adoption of ICT for sustainable digital learning in social and business sciences: A structural equation model. Sustainability 2024, 16, 10674. [Google Scholar] [CrossRef]
  28. Chiu, T.K.; Ahmad, Z.; Çoban, M. Development and validation of teacher artificial intelligence (AI) competence self-efficacy (TAICS) scale. Educ. Inf. Technol. 2024, 30, 6667–6685. [Google Scholar] [CrossRef]
  29. Liu, Z.; Tang, Q.; Ouyang, F.; Long, T.; Liu, S. Profiling students’ learning engagement in MOOC discussions to identify learning achievement: An automated configurational approach. Comput. Educ. 2024, 219, 105109. [Google Scholar] [CrossRef]
  30. Michel, C.; Pierrot, L. A Proposal for Assessing Digital Maturity in French Primary Education: Design of Tools and Methods. In Proceedings of the 16th International Conference on Computer Supported Education (CSEDU), Angers, France, 2–4 May 2024. [Google Scholar]
  31. Pan, Z.; Wang, Y. From technology-challenged teachers to empowered digitalized citizens: Exploring the profiles and antecedents of teacher AI literacy in the Chinese EFL context. Eur. J. Educ. 2025, 60, e70020. [Google Scholar] [CrossRef]
  32. Bond, M.; Bedenlier, S.; Buntins, K.; Kerres, M.; Zawacki-Richter, O. Facilitating student engagement in higher education through educational technology: A narrative systematic review in the field of education. Contemp. Issues Technol. Teach. Educ. 2020, 20, 315–368. [Google Scholar]
  33. Bond, M.; Buntins, K.; Bedenlier, S.; Zawacki-Richter, O.; Kerres, M. Mapping research in student engagement and educational technology in higher education: A systematic evidence map. Int. J. Educ. Technol. High. Educ. 2020, 17, 2. [Google Scholar] [CrossRef]
  34. Yim, I.H.Y. A critical review of teaching and learning artificial intelligence (AI) literacy: Developing an intelligence-based AI literacy framework for primary school education. Comput. Educ. Artif. Intell. 2024, 7, 100319. [Google Scholar] [CrossRef]
  35. Allen, L.K.; Kendeou, P. ED-AI Lit: An Interdisciplinary framework for AI literacy in education. Policy Insights Behav. Brain Sci. 2024, 11, 3–10. [Google Scholar] [CrossRef]
  36. Jing, Y.H.; Wang, H.M.; Chen, X.J.; Wang, C.L. What factors will affect the effectiveness of using ChatGPT to solve programming problems? A quasi-experimental study. Humanit. Soc. Sci. Commun. 2024, 11, 319. [Google Scholar] [CrossRef]
37. Wang, C.L.; Wang, H.M.; Li, Y.Y.; Dai, J.; Gu, X.Q.; Yu, T. Factors influencing university students’ behavioral intention to use generative artificial intelligence: Integrating the Theory of Planned Behavior and AI Literacy. Int. J. Hum.-Comput. Interact. 2024, 1–23. [Google Scholar] [CrossRef]
  38. Li, L.; Fengchao, Y.; Zhang, E. A systematic review of learning task design for K-12 AI education: Trends, challenges, and opportunities. Comput. Educ. Artif. Intell. 2024, 6, 100217. [Google Scholar] [CrossRef]
  39. Lin, Y.; Yu, Z. An integrated bibliometric analysis and systematic review modelling students’ technostress in higher education. Behav. Inf. Technol. 2025, 44, 631–655. [Google Scholar] [CrossRef]
  40. Mehmood, S. Exploring digital leadership, technology integration, and teacher task performance in higher education institutions: A moderated-mediation study. J. Digitovation Inf. Syst. 2023, 3, 141–155. [Google Scholar] [CrossRef]
  41. Schubert, T.; Oosterlinck, T.; Stevens, R.D.; Maxwell, P.H.; van der Schaar, M. AI education for clinicians. EClinicalMedicine 2025, 79, 1–7. [Google Scholar] [CrossRef]
  42. Forgasz, H.J.; Leder, G.C. Beliefs about mathematics and mathematics teaching. In International Handbook of Mathematics Teacher Education; Sriraman, B., Ed.; Sense Publishers: Rotterdam, The Netherlands, 2008; Volume 1, pp. 173–192. [Google Scholar]
  43. Xie, S.; Cai, J. Teachers’ beliefs about mathematics, learning, teaching, students, and teachers: Perspectives from Chinese high school in-service mathematics teachers. Int. J. Sci. Math. Educ. 2021, 19, 747–769. [Google Scholar] [CrossRef]
  44. Kraft, T. Exceptional Mathematics Teachers’ Beliefs About the Nature of Mathematics and Teaching and Learning. Doctoral Dissertation, George Mason University, Fairfax, VA, USA, 2024. [Google Scholar]
  45. Karatas, I. Changing pre-service mathematics teachers’ beliefs about using computers for teaching and learning mathematics: The effect of three different models. Eur. J. Teach. Educ. 2014, 37, 390–405. [Google Scholar] [CrossRef]
  46. Hidayat, R.; Zainuddin, Z.; Mazlan, N.H. The relationship between technological pedagogical content knowledge and belief among preservice mathematics teachers. Acta Psychol. 2024, 249, 104432. [Google Scholar] [CrossRef] [PubMed]
  47. Simantirakis, T. An Investigation into the Effect of Educational Background and Math Anxiety on Teacher Candidates’ Pedagogical Beliefs. Master’s Thesis, Trent University, Peterborough, ON, Canada, 2024. [Google Scholar]
  48. Drijvers, P.; Sinclair, N. The role of digital technologies in mathematics education: Purposes and perspectives. ZDM Math. Educ. 2024, 56, 239–248. [Google Scholar] [CrossRef]
  49. Yang, X.; Kaiser, G. The impact of mathematics teachers’ professional competence on instructional quality and students’ mathematics learning outcomes. Curr. Opin. Behav. Sci. 2022, 48, 101225. [Google Scholar] [CrossRef]
  50. Shin, S.H.; Sim, J.; Moon, C.; Kim, N.; Hwang, J. Effects of STEAM programs emphasizing data science and AI on students’ attitudes toward mathematics and science. KEDI J. Educ. Policy 2024, 21, 21–38. [Google Scholar] [CrossRef]
51. Simanullang, S.R.; Nasution, M.D.; Azis, Z. The effect of guided discovery model using LKPD on mathematics learning outcomes of middle school students. J. MathEduc. Nusantara 2022, 5, 1–6. [Google Scholar]
  52. Wen, Q.; Cai, J. Applying structural equation modeling to examine the role of teacher beliefs and practices in differentiated instruction in physical education: Multiple mediation analyses. Psychol. Sch. 2024, 61, 3045–3062. [Google Scholar] [CrossRef]
  53. Wen, K.; Liu, Q. Examining the relationship between teaching ability and smart education adoption in K-12 schools: A moderated mediation analysis. J. Pedagog. Res. 2024, 8, 381–396. [Google Scholar] [CrossRef]
54. Karumbaiah, S.; Borchers, C.; Shou, T.; Falhs, A.C.; Liu, P.; Nagashima, T.; Aleven, V. A Spatiotemporal Analysis of Teacher Practices in Supporting Student Learning and Engagement in an AI-enabled Classroom. In Proceedings of the International Conference on Artificial Intelligence in Education, Tokyo, Japan, 3–7 June 2023. [Google Scholar]
  55. Otto, D.; Assenmacher, V.; Bente, A.; Gellner, C.; Waage, M.; Deckert, R.; Kuche, J. Student Acceptance of AI-based Feedback Systems: An Analysis Based on the Technology Acceptance Model (TAM). In Proceedings of the 18th International Technology, Education and Development Conference (INTED2024), Valencia, Spain, 4–6 March 2024. [Google Scholar]
  56. Antwi-Boampong, A. Towards a faculty blended learning adoption model for higher education. Educ. Inf. Technol. 2020, 25, 1639–1662. [Google Scholar] [CrossRef]
  57. Perez, E.; Manca, S.; Fernández-Pascual, R.; Mc Guckin, C. A systematic review of social media as a teaching and learning tool in higher education: A theoretical grounding perspective. Educ. Inf. Technol. 2023, 28, 11921–11950. [Google Scholar] [CrossRef]
  58. Yi, L.; Liu, D.; Jiang, T.; Xian, Y. The effectiveness of AI on K-12 students’ mathematics learning: A systematic review and meta-analysis. Int. J. Sci. Math. Educ. 2025, 23, 1105–1126. [Google Scholar] [CrossRef]
  59. Chen, R.S.; Liu, I.F. Research on the effectiveness of information technology in reducing the rural-urban knowledge divide. Comput. Educ. 2013, 63, 437–445. [Google Scholar] [CrossRef]
  60. Waheed, M.; Ul-Ain, N.; Leišytė, L. Quality and Competency for Sustainable Digital Future of Education: Context of Digital Learning System. In Proceedings of the European Conference on Information Systems 2023 (ECIS 2023), Kristiansand, Norway, 11 May 2023. [Google Scholar]
  61. Linardatos, G.; Apostolou, D. Investigating high school students’ perception about digital comics creation in the classroom. Educ. Inf. Technol. 2023, 28, 10079–10101. [Google Scholar] [CrossRef]
  62. Scherer, R.; Siddiq, F.; Tondeur, J. The technology acceptance model (TAM): A meta-analytic structural equation modeling approach to explaining teachers’ adoption of digital technology in education. Comput. Educ. 2019, 128, 13–35. [Google Scholar] [CrossRef]
  63. Teo, T. Development and validation of the E-learning Acceptance Measure (ElAM). Internet High. Educ. 2010, 13, 148–152. [Google Scholar] [CrossRef]
  64. Teo, T. Factors influencing teachers’ intention to use technology: Model development and test. Comput. Educ. 2011, 57, 2432–2440. [Google Scholar] [CrossRef]
  65. Colvin, C.A.; Goh, A. Validation of the technology acceptance model for police. J. Crim. Justice 2005, 33, 89–95. [Google Scholar] [CrossRef]
  66. Featherman, M.S.; Pavlou, P.A. Predicting e-services adoption: A perceived risk facets perspective. Int. J. Hum.-Comput. Stud. 2003, 59, 451–474. [Google Scholar] [CrossRef]
  67. Wang, Y.; Wang, S.; Wang, J.; Wei, J.; Wang, C. An empirical study of consumers’ intention to use ride-sharing services: Using an extended technology acceptance model. Transportation 2019, 47, 397–415. [Google Scholar] [CrossRef]
  68. Kwee-Meier, S.T.; Bützler, J.E.; Schlick, C. Development and validation of a technology acceptance model for safety-enhancing, wearable locating systems. Behav. Inf. Technol. 2016, 35, 394–409. [Google Scholar] [CrossRef]
  69. Balaman, F.; Baş, M. Perception of using e-learning platforms in the scope of the technology acceptance model (TAM): A scale development study. Interact. Learn. Environ. 2023, 31, 5395–5419. [Google Scholar] [CrossRef]
  70. Rajeb, M.; Wang, Y.; Man, K.; Morett, L.M. Students’ acceptance of online learning in developing nations: Scale development and validation. Educ. Technol. Res. Dev. 2023, 71, 767–792. [Google Scholar] [CrossRef] [PubMed]
  71. Ng, W. Can we teach digital natives digital literacy? Comput. Educ. 2012, 59, 1065–1078. [Google Scholar] [CrossRef]
  72. Peterson, P.L.; Fennema, E.; Carpenter, T.P.; Loef, M. Teacher’s pedagogical content beliefs in mathematics. Cogn. Instr. 1989, 6, 1–40. [Google Scholar] [CrossRef]
  73. Boulden, D.C.; Rachmatullah, A.; Oliver, K.M.; Wiebe, E. Measuring in-service teacher self-efficacy for teaching computational thinking: Development and validation of the T-STEM CT. Educ. Inf. Technol. 2021, 26, 4663–4689. [Google Scholar] [CrossRef]
  74. Schaufeli, W.B.; Salanova, M.; González-Romá, V.; Bakker, A.B. The measurement of engagement and burnout: A two sample confirmatory factor analytic approach. J. Happiness Stud. 2002, 3, 71–92. [Google Scholar] [CrossRef]
  75. Ifinedo, P. Determinants of students’ continuance intention to use blogs to learn: An empirical investigation. Behav. Inf. Technol. 2018, 37, 381–392. [Google Scholar] [CrossRef]
  76. Herring Watson, J.; Rockinson-Szapkiw, A.J. Developing and validating the intention to use technology-enabled learning (I-TEL) scale. J. Res. Technol. Educ. 2023, 55, 971–985. [Google Scholar] [CrossRef]
  77. Antonietti, C.; Schmitz, M.L.; Consoli, T.; Cattaneo, A.; Gonon, P.; Petko, D. Development and validation of the ICAP Technology Scale to measure how teachers integrate technology into learning activities. Comput. Educ. 2023, 192, 104648. [Google Scholar] [CrossRef]
  78. Stipek, D.J.; Givvin, K.B.; Salmon, J.M.; MacGyvers, V.L. Teachers’ beliefs and practices related to mathematics instruction. Teach. Teach. Educ. 2001, 17, 213–226. [Google Scholar] [CrossRef]
  79. Pérez-Villalobos, C.; Ventura-Ventura, J.; Spormann-Romeri, C.; Melipillán, R.; Jara-Reyes, C.; Paredes-Villarroel, X.; Matus-Betancourt, O. Satisfaction with remote teaching during the first semester of the COVID-19 crisis: Psychometric properties of a scale for health students. PLoS ONE 2021, 16, e0250739. [Google Scholar] [CrossRef]
  80. Talbert, E.; Hofkens, T.; Wang, M.T. Does student-centered instruction engage students differently? The moderation effect of student ethnicity. J. Educ. Res. 2019, 112, 327–341. [Google Scholar] [CrossRef]
  81. OECD. PISA 2021 Mathematics Framework (Draft); OECD Publishing: Paris, France, 2018. [Google Scholar]
  82. Wilkins, J.L. Preparing for the 21st century: The status of quantitative literacy in the United States. Sch. Sci. Math. 2000, 100, 405–418. [Google Scholar] [CrossRef]
  83. Sanders, M.; Mathew, M.; Petty, R.; Arredondo, S.; Rambo-Hernandez, K.E. Learning together: A mixed methods analysis of team-based learning in mathematics. J. Educ. Res. 2024, 117, 241–254. [Google Scholar] [CrossRef]
  84. Vlachogianni, P.; Tselios, N. Perceived usability evaluation of educational technology using the System Usability Scale (SUS): A systematic review. J. Res. Technol. Educ. 2022, 54, 392–409. [Google Scholar] [CrossRef]
  85. Osman, R.B.; Choo, P.S.; Rahmat, M.K. Understanding student teachers’ behavioural intention to use technology: Technology Acceptance Model (TAM) validation and testing. Int. J. Instr. 2013, 6, 89–104. [Google Scholar]
  86. Hair, J.; Alamer, A. Partial Least Squares Structural Equation Modeling (PLS-SEM) in second language and education research: Guidelines using an applied example. Res. Methods Appl. Linguist. 2022, 1, 100027. [Google Scholar] [CrossRef]
  87. Yu, T.; Dai, J.; Wang, C.L. Adoption of blended learning: Chinese university students’ perspectives. Humanit. Soc. Sci. Commun. 2023, 10, 390. [Google Scholar] [CrossRef]
  88. Panchenko, L.F.; Velychko, V.Y. Structural Equation Modeling in Educational Research: A Case-study for PhD Training. In Proceedings of the 1st Symposium on Advances in Educational Technology, Kyiv, Ukraine, 12–13 November 2022. [Google Scholar]
  89. Stone, B.M. The ethical use of fit indices in structural equation modeling: Recommendations for psychologists. Front. Psychol. 2021, 12, 783226. [Google Scholar] [CrossRef]
  90. Sobaih, A.E.E.; Elshaer, I.A. Structural equation modeling-based multi-group analysis: Examining the role of gender in the link between entrepreneurship orientation and entrepreneurial intention. Mathematics 2022, 10, 3719. [Google Scholar] [CrossRef]
  91. Kidd, J.C. Mediation Analysis in Genetic Studies. Doctoral Dissertation, The University of North Carolina, Chapel Hill, NC, USA, 2021. [Google Scholar]
  92. Yin, H.; Huang, S. Applying structural equation modelling to research on teaching and teacher education: Looking back and forward. Teach. Teach. Educ. 2021, 107, 103438. [Google Scholar] [CrossRef]
93. Fang, J.; Wen, Z.; Zhang, M.; Sun, P. The analyses of multiple mediation effects based on structural equation modeling. J. Psychol. Sci. 2014, 37, 735. [Google Scholar]
  94. Yu, T.; Zhang, Y.; Teoh, A.P.; Wang, A.; Wang, C.L. Factors influencing university students’ behavioral intention to use electric car-sharing services in Guangzhou, China. SAGE Open 2023, 13, 1–27. [Google Scholar] [CrossRef]
  95. Oamen, T.; Ihekoronye, M.R.; Enitan, O.O. A parallel mediation analysis of behavioral constructs linking intention-to-use technology to performance outcomes among pharmaceutical executives. J. Technol. Manag. Bus. 2024, 11, 1–16. [Google Scholar] [CrossRef]
  96. Dolgopolovas, V.; Dagiene, V. Competency-based TPACK approaches to computational thinking and integrated STEM: A conceptual exploration. Comput. Appl. Eng. Educ. 2024, 32, e22788. [Google Scholar] [CrossRef]
  97. Nti-Asante, E. Engaging students in making artificial intelligence tools for mathematics education: An iterative design approach. J. Math. Educ. 2024, 17, 16–37. [Google Scholar]
  98. Lazarides, R.; Schiefele, U.; Hettinger, K.; Frommelt, M.C. Tracing the signal from teachers to students: How teachers’ motivational beliefs longitudinally relate to student interest through student-reported teaching practices. J. Educ. Psychol. 2023, 115, 290. [Google Scholar] [CrossRef]
  99. Zwart, D.P.; Van Luit, J.E.; Noroozi, O.; Goei, S.L. The effects of digital learning material on students’ mathematics learning in vocational education. Cogent Educ. 2017, 4, 1313581. [Google Scholar] [CrossRef]
  100. Turner, M.; Kitchenham, B.; Brereton, P.; Charters, S.; Budgen, D. Does the technology acceptance model predict actual use? A systematic literature review. Inf. Softw. Technol. 2010, 52, 463–479. [Google Scholar] [CrossRef]
  101. Wang, C.L.; Chen, X.J.; Hu, Z.B.; Jin, S.; Gu, X.Q. Deconstructing university learners’ adoption intention towards AIGC technology: A mixed-methods study using ChatGPT as an example. J. Comput. Assist. Learn. 2025, 41, e13117. [Google Scholar] [CrossRef]
  102. Shayan, P.; Rondinelli, R.; van Zaanen, M.; Atzmueller, M. Multi-level analysis of learning management systems’ user acceptance exemplified in two system case studies. Data 2023, 8, 45. [Google Scholar] [CrossRef]
  103. Lintner, T. A systematic review of AI literacy scales. NPJ Sci. Learn. 2024, 9, 50. [Google Scholar] [CrossRef] [PubMed]
  104. Wang, B.; Rau, P.L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [Google Scholar] [CrossRef]
  105. Hakim, V.G.A.; Paiman, N.A.; Rahman, M.H.S. Genie-on-demand: A custom AI chatbot for enhancing learning performance, self-efficacy, and technology acceptance in occupational health and safety for engineering education. Comput. Appl. Eng. Educ. 2024, 32, e22800. [Google Scholar] [CrossRef]
Figure 1. A hypothesized research model.
Figure 2. SEM path diagram.
Table 1. Descriptive statistics of respondent characteristics and key constructs.

| Variable | Categories/Statistics | Frequency/Value | Percentage/SD |
|---|---|---|---|
| Demographic Characteristics | | | |
| Gender | Female | 265 | 53.00% |
|  | Male | 235 | 47.00% |
| Age (years) | Mean | 38.46 | 8.74 |
|  | Range | 24–58 | - |
| Teaching Experience (years) | Mean | 12.83 | 7.35 |
|  | <5 years | 102 | 20.40% |
|  | 5–10 years | 148 | 29.60% |
|  | 11–20 years | 165 | 33.00% |
|  | >20 years | 85 | 17.00% |
| Grade Level | Elementary | 165 | 33.00% |
|  | Middle School | 178 | 35.60% |
|  | High School | 157 | 31.40% |
| School Type | Public | 308 | 61.60% |
|  | Private | 129 | 25.80% |
|  | Charter/Other | 63 | 12.60% |
| AI Experience (months) | Mean | 26.72 | 17.53 |
|  | Range | 3–62 | - |
| AI Usage Frequency | Daily | 97 | 19.40% |
|  | Several times weekly | 174 | 34.80% |
|  | Weekly | 129 | 25.80% |
|  | Monthly/Less frequently | 100 | 20.00% |
| Construct Measures | | | |
| Perceived Usefulness (PU) | Mean | 3.39 | 1.19 |
| Perceived Ease of Use (PEOU) | Mean | 3.45 | 1.20 |
| Perceived AI Risks (PR) | Mean | 3.38 | 1.28 |
| Teacher’s AI Literacy (TAL) | Mean | 3.32 | 1.22 |
| Teacher’s AI Engagement (TAE) | Mean | 3.39 | 1.22 |
| Teacher’s Mathematics Beliefs (TMB) | Mean | 3.36 | 1.23 |
| Perceived Impact on Mathematics Literacy (PIML) | Mean | 3.41 | 1.21 |
Table 2. Measurement scales and psychometric properties.

| Construct | Abbreviation | Items | Cronbach’s α | AVE |
|---|---|---|---|---|
| Perceived Usefulness | PU | 3 | 0.857 | 0.777 |
| Perceived Ease of Use | PEOU | 3 | 0.838 | 0.756 |
| Perceived AI Risks | PR | 3 | 0.773 | 0.688 |
| Teacher’s AI Literacy | TAL | 3 | 0.839 | 0.757 |
| Teacher’s AI Engagement | TAE | 3 | 0.841 | 0.760 |
| Teacher’s Mathematics Beliefs | TMB | 5 | 0.866 | 0.652 |
| Perceived Impact on Mathematics Literacy | PIML | 5 | 0.869 | 0.656 |
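All constructs clear the conventional reliability (α ≥ 0.70) and convergent validity (AVE ≥ 0.50) benchmarks. For readers reproducing these psychometrics, the conventional definitions are

$$
\alpha=\frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_{i}^{2}}{\sigma_{T}^{2}}\right),
\qquad
\mathrm{AVE}=\frac{1}{k}\sum_{i=1}^{k}\lambda_{i}^{2},
$$

where k is the number of items in a scale, σᵢ² the variance of item i, σ_T² the variance of the scale total, and λᵢ the standardized loading of item i on its construct.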
Table 3. Construct correlation matrix and discriminant validity.

|  | PU | PEOU | PR | TAL | TAE | TMB | PIML |
|---|---|---|---|---|---|---|---|
| PU | 0.881 | | | | | | |
| PEOU | 0.494 | 0.869 | | | | | |
| PR | −0.185 | −0.098 | 0.829 | | | | |
| TAL | 0.348 | 0.382 | −0.057 | 0.870 | | | |
| TAE | 0.596 | 0.392 | −0.248 | 0.508 | 0.872 | | |
| TMB | 0.147 | 0.179 | −0.042 | 0.333 | 0.348 | 0.807 | |
| PIML | 0.528 | 0.376 | −0.215 | 0.429 | 0.604 | 0.414 | 0.810 |
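The diagonal entries of Table 3 are the square roots of the AVE values in Table 2, so discriminant validity holds under the Fornell–Larcker criterion whenever each diagonal entry exceeds that construct’s correlations with all other constructs. A minimal sketch of this check, with the values transcribed from Tables 2 and 3:

```python
import math

# AVE values from Table 2; their square roots are the diagonal of Table 3.
ave = {"PU": 0.777, "PEOU": 0.756, "PR": 0.688, "TAL": 0.757,
       "TAE": 0.760, "TMB": 0.652, "PIML": 0.656}

# Largest absolute inter-construct correlation for each construct (Table 3).
max_abs_r = {"PU": 0.596, "PEOU": 0.494, "PR": 0.248, "TAL": 0.508,
             "TAE": 0.604, "TMB": 0.414, "PIML": 0.604}

for c in ave:
    sqrt_ave = math.sqrt(ave[c])       # e.g., sqrt(0.777) ≈ 0.881 for PU
    passes = sqrt_ave > max_abs_r[c]   # Fornell–Larcker discriminant check
    print(f"{c:5s} sqrt(AVE) = {sqrt_ave:.3f}  max|r| = {max_abs_r[c]:.3f}  "
          f"{'pass' if passes else 'fail'}")
```

Every construct passes; the tightest margin is TAE (0.872 against its 0.604 correlation with PIML).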
Table 4. Measurement model fit indices.

| Fit Index | Value | Threshold | Interpretation |
|---|---|---|---|
| χ²/df | 1.106 (282.143/255) | <3.00 | Excellent |
| RMSEA | 0.021 | <0.08 | Excellent |
| RMSEA 90% CI | [0.000, 0.034] | Upper bound <0.08 | Excellent |
| RMSEA p-value | 1.000 | >0.05 | Excellent |
| CFI | 0.991 | >0.95 | Excellent |
| TLI | 0.989 | >0.95 | Excellent |
| SRMR | 0.048 | <0.08 | Excellent |
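As a reference for replication, the indices in Table 4 follow their conventional definitions,

$$
\mathrm{RMSEA}=\sqrt{\frac{\max\!\left(\chi_{M}^{2}-df_{M},\,0\right)}{df_{M}\,(N-1)}},\qquad
\mathrm{CFI}=1-\frac{\max\!\left(\chi_{M}^{2}-df_{M},\,0\right)}{\max\!\left(\chi_{B}^{2}-df_{B},\;\chi_{M}^{2}-df_{M},\;0\right)},\qquad
\mathrm{TLI}=\frac{\chi_{B}^{2}/df_{B}-\chi_{M}^{2}/df_{M}}{\chi_{B}^{2}/df_{B}-1},
$$

where M indexes the hypothesized model, B the baseline (independence) model, and N the sample size. Note that the baseline χ² is not reported in the table, and software packages differ in whether N or N − 1 appears in the RMSEA denominator, so independently reproduced values may differ slightly.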
Table 5. Structural model path coefficients.

| Hypothesis and Path | Standardized Coefficient (β) | S.E. | t-Value | p-Value | Hypothesis Support |
|---|---|---|---|---|---|
| Effects on Teacher AI Literacy (TAL) | | | | | |
| H1a: PU → TAL | 0.003 | 0.105 | 0.032 | 0.974 | Not supported |
| H2a: PEOU → TAL | 0.597 | 0.104 | 5.746 | <0.001 | Supported |
| H3a: PR → TAL | 0.107 | 0.080 | 1.342 | 0.180 | Not supported |
| Effects on Teacher Mathematics Beliefs (TMB) | | | | | |
| H1b: PU → TMB | −0.121 | 0.112 | −1.081 | 0.280 | Not supported |
| H2b: PEOU → TMB | 0.328 | 0.114 | 2.872 | 0.004 | Supported |
| H3b: PR → TMB | 0.061 | 0.085 | 0.724 | 0.469 | Not supported |
| Effects on Teacher AI Engagement (TAE) | | | | | |
| H1c: PU → TAE | 0.522 | 0.073 | 7.185 | <0.001 | Supported |
| H2c: PEOU → TAE | −0.012 | 0.100 | −0.116 | 0.908 | Not supported |
| H3c: PR → TAE | −0.185 | 0.063 | −2.935 | 0.003 | Supported |
| H4a: TAL → TAE | 0.327 | 0.075 | 4.360 | <0.001 | Supported |
| H5a: TMB → TAE | 0.268 | 0.055 | 4.870 | <0.001 | Supported |
| Effects on Perceived Impact on Mathematics Literacy (PIML) | | | | | |
| H6: TAE → PIML | 0.308 | 0.124 | 2.492 | 0.013 | Supported |
| H4b: TAL → PIML | 0.049 | 0.086 | 0.572 | 0.568 | Not supported |
| H5b: TMB → PIML | 0.256 | 0.063 | 4.050 | <0.001 | Supported |
| H1d: PU → PIML | 0.269 | 0.115 | 2.340 | 0.019 | Supported |
| H2d: PEOU → PIML | 0.066 | 0.096 | 0.686 | 0.493 | Not supported |
| H3d: PR → PIML | −0.161 | 0.064 | −2.506 | 0.012 | Supported |
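For readers mapping these estimates back onto the model, the structural portion implied by the hypothesized paths in Table 5 can be written as the following system (a notational sketch: γ collects effects of the exogenous perception variables, β effects among endogenous constructs, and ζ the structural disturbances):

$$
\begin{aligned}
\mathrm{TAL}&=\gamma_{11}\,\mathrm{PU}+\gamma_{12}\,\mathrm{PEOU}+\gamma_{13}\,\mathrm{PR}+\zeta_{1},\\
\mathrm{TMB}&=\gamma_{21}\,\mathrm{PU}+\gamma_{22}\,\mathrm{PEOU}+\gamma_{23}\,\mathrm{PR}+\zeta_{2},\\
\mathrm{TAE}&=\gamma_{31}\,\mathrm{PU}+\gamma_{32}\,\mathrm{PEOU}+\gamma_{33}\,\mathrm{PR}+\beta_{31}\,\mathrm{TAL}+\beta_{32}\,\mathrm{TMB}+\zeta_{3},\\
\mathrm{PIML}&=\gamma_{41}\,\mathrm{PU}+\gamma_{42}\,\mathrm{PEOU}+\gamma_{43}\,\mathrm{PR}+\beta_{41}\,\mathrm{TAL}+\beta_{42}\,\mathrm{TMB}+\beta_{43}\,\mathrm{TAE}+\zeta_{4}.
\end{aligned}
$$

Each row of Table 5 estimates one coefficient in this system; for example, H1a estimates γ₁₁ and H6 estimates β₄₃.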
Table 6. Mediation analysis of indirect effects.

| Indirect Path | Standardized Effect | S.E. | t-Value | p-Value | 95% CI |
|---|---|---|---|---|---|
| Indirect Effects through Teachers’ AI Literacy (TAL) | | | | | |
| PU → TAL → PIML | 0.000 | 0.005 | 0.032 | 0.975 | [−0.010, 0.010] |
| PEOU → TAL → PIML | 0.029 | 0.052 | 0.566 | 0.571 | [−0.073, 0.131] |
| PR → TAL → PIML | 0.005 | 0.010 | 0.505 | 0.613 | [−0.015, 0.025] |
| Indirect Effects through Teachers’ Mathematics Beliefs (TMB) | | | | | |
| PU → TMB → PIML | −0.031 | 0.030 | −1.038 | 0.299 | [−0.090, 0.028] |
| PEOU → TMB → PIML | 0.084 | 0.037 | 2.268 | 0.023 | [0.011, 0.157] |
| PR → TMB → PIML | 0.016 | 0.023 | 0.698 | 0.485 | [−0.029, 0.061] |
| Indirect Effects through Teachers’ AI Engagement (TAE) | | | | | |
| PU → TAE → PIML | 0.161 | 0.068 | 2.367 | 0.018 | [0.028, 0.294] |
| PEOU → TAE → PIML | −0.004 | 0.031 | −0.117 | 0.907 | [−0.065, 0.057] |
| PR → TAE → PIML | −0.057 | 0.029 | −1.991 | 0.047 | [−0.114, −0.001] |
| TAL → TAE → PIML | 0.101 | 0.046 | 2.176 | 0.030 | [0.010, 0.192] |
| TMB → TAE → PIML | 0.082 | 0.036 | 2.320 | 0.020 | [0.013, 0.152] |
| Serial Mediation Effects | | | | | |
| PU → TAL → TAE → PIML | 0.000 | 0.011 | 0.032 | 0.974 | [−0.022, 0.022] |
| PEOU → TAL → TAE → PIML | 0.060 | 0.030 | 1.989 | 0.047 | [0.001, 0.119] |
| PR → TAL → TAE → PIML | 0.011 | 0.009 | 1.169 | 0.243 | [−0.007, 0.029] |
| PU → TMB → TAE → PIML | −0.010 | 0.011 | −0.951 | 0.342 | [−0.031, 0.011] |
| PEOU → TMB → TAE → PIML | 0.027 | 0.015 | 1.752 | 0.080 | [−0.003, 0.057] |
| PR → TMB → TAE → PIML | 0.005 | 0.007 | 0.706 | 0.480 | [−0.009, 0.019] |
| Total Indirect Effects | | | | | |
| PU → PIML (total indirect) | 0.120 | 0.082 | 1.465 | 0.143 | [−0.041, 0.281] |
| PEOU → PIML (total indirect) | 0.197 | 0.067 | 2.935 | 0.003 | [0.066, 0.328] |
| PR → PIML (total indirect) | −0.020 | 0.046 | −0.439 | 0.660 | [−0.110, 0.070] |
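As a quick arithmetic cross-check (a sketch, not the authors’ inferential procedure), the standardized point estimates in Table 6 are recoverable as products of the corresponding path coefficients in Table 5; the standard errors and 95% CIs, by contrast, require the raw data:

```python
# Standardized path coefficients transcribed from Table 5.
paths = {
    ("PU", "TAE"): 0.522,   ("PR", "TAE"): -0.185,
    ("PEOU", "TAL"): 0.597, ("PEOU", "TMB"): 0.328,
    ("TAL", "TAE"): 0.327,  ("TMB", "TAE"): 0.268,
    ("TAE", "PIML"): 0.308, ("TMB", "PIML"): 0.256,
}

def indirect_effect(*chain):
    """Multiply coefficients along a mediational chain, e.g. PU -> TAE -> PIML."""
    product = 1.0
    for a, b in zip(chain, chain[1:]):
        product *= paths[(a, b)]
    return product

# Each result matches the corresponding standardized effect in Table 6.
print(f"{indirect_effect('PU', 'TAE', 'PIML'):+.3f}")           # +0.161
print(f"{indirect_effect('PEOU', 'TMB', 'PIML'):+.3f}")         # +0.084
print(f"{indirect_effect('PR', 'TAE', 'PIML'):+.3f}")           # -0.057
print(f"{indirect_effect('PEOU', 'TAL', 'TAE', 'PIML'):+.3f}")  # +0.060
```

The serial path PEOU → TAL → TAE → PIML illustrates the decomposition: 0.597 × 0.327 × 0.308 ≈ 0.060, the estimate reported in Table 6.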
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
