1. Introduction
The field of education has continually evolved to address the diverse needs of students, including those with learning disabilities. These disabilities, which encompass a range of challenges such as dyslexia, attention-deficit/hyperactivity disorder (ADHD), and specific learning disorders in reading or writing, significantly impact students’ academic performance and overall learning experience (Lemons et al., 2018; Fletcher et al., 2018). Research has consistently underscored the necessity of tailored interventions to support these students, as conventional instructional strategies often fall short of meeting their unique needs (Bernacki et al., 2020). As educators and researchers seek more effective solutions, the integration of technology in education has gained momentum, offering promising avenues for improving learning outcomes (Choi-Lundberg et al., 2023).
Grammar, a fundamental component of language proficiency, presents a significant challenge for many students with learning disabilities. Difficulties in grammar comprehension and application can hinder both written and verbal communication, further exacerbating academic struggles (Kormos, 2016). Studies indicate that deficits in grammatical skills often stem from underlying cognitive processing issues, such as working memory limitations or auditory processing difficulties (Parker, 2022). For instance, research has shown that students with dyslexia may struggle with syntactic awareness (Farris et al., 2021), while those with ADHD often encounter obstacles in maintaining focus on grammatical structures during learning tasks (Gutiérrez et al., 2023). These findings highlight the critical need for targeted strategies that address the root causes of grammatical difficulties (Katsarou et al., 2024).
Artificial intelligence (AI) has emerged as a transformative tool in various fields, including education. Leveraging AI for personalized learning has shown considerable promise, particularly for students with diverse learning needs (Alam & Mohanty, 2023). AI-driven applications can analyze individual performance, identify specific areas of difficulty, and provide customized feedback in real time (Haleem et al., 2022). For students with learning disabilities, these capabilities are especially valuable, as they allow for interventions that are both adaptive and responsive. Recent studies have explored the efficacy of AI tools in enhancing language skills, with many reporting significant improvements in grammar proficiency among students who used AI-based learning platforms (Jadhav et al., 2024; Özdere, 2023).
One notable advantage of AI in education is its ability to provide immediate and constructive feedback, which is crucial for mastering grammatical concepts (Zhang et al., 2022). Unlike conventional classroom settings, where feedback may be delayed or generalized, AI tools can offer instant corrections and explanations tailored to the learner’s specific errors (Yuan, 2023). This real-time interaction not only facilitates deeper understanding but also encourages active engagement, an essential factor for students with learning disabilities who may struggle with motivation or self-confidence (Abdi et al., 2024). Furthermore, AI systems can adapt to the pace and learning style of each student, creating a more inclusive and supportive educational environment (Yang et al., 2024).
Despite the promising potential of AI, its implementation in evaluating and improving grammar performance among children with learning disabilities remains a relatively underexplored area (Wilcox et al., 2022). Existing research has predominantly focused on general applications of AI in education, with limited studies examining its impact on specific challenges like grammar acquisition (Norton & Buchanan, 2022). Preliminary findings, however, suggest that AI-driven interventions can significantly enhance grammatical skills by addressing individual deficits and promoting consistent practice (Hoang et al., 2021). For example, adaptive learning platforms that incorporate natural language processing (NLP) algorithms have demonstrated effectiveness in helping students recognize and correct grammatical errors, thereby reinforcing language proficiency (Shin et al., 2024).
The theoretical framework of this study is grounded in cognitive, educational, and artificial intelligence theories that support personalized learning for children with learning disabilities. Cognitive Load Theory (Sweller, 1988) informs the study by recognizing that these children often experience higher cognitive strain when processing grammar rules. The AI-based tool mitigates this by providing real-time, adaptive feedback, reducing unnecessary cognitive overload, and enhancing retention through immediate correction. Additionally, the study aligns with Constructivist Learning Theory (Piaget, 1950; Vygotsky, 1978) by emphasizing interactive and scaffolded instruction, allowing learners to gradually build grammatical knowledge through progressively challenging tasks. The Universal Design for Learning (UDL) Framework (Rose & Meyer, 2002) further supports the study by ensuring accessibility and flexibility, offering multiple means of engagement, representation, and expression to cater to diverse learning needs. Moreover, Behaviorist Reinforcement Principles (Skinner, 1957) are evident in the AI tool’s use of reward-based incentives, such as instant feedback, points, and badges, which help sustain motivation, particularly for students with attention difficulties. Finally, the study builds on the Artificial Intelligence in Education (AIED) Framework, leveraging Natural Language Processing (NLP) and Bayesian Knowledge Tracing (BKT) to analyze patterns of grammatical errors and predict optimal learning pathways. By integrating these theoretical perspectives, the study establishes a strong foundation for exploring how AI-driven interventions can enhance grammar acquisition for children with learning disabilities, making learning more adaptive, engaging, and effective.
This study aims to build on these findings by investigating the use of AI in assessing and supporting grammar performance among children with learning disabilities. By analyzing how AI tools can identify patterns of grammatical errors, provide targeted feedback, and track progress over time, this research seeks to contribute to the growing body of knowledge on technology-enhanced learning, especially in the Greek language, where research remains scarce (Alam & Mohanty, 2023). Ultimately, the goal is to explore how AI can be harnessed to create more equitable educational opportunities, ensuring that students with learning disabilities have the support they need to thrive academically (Holmes & Porayska-Pomsta, 2023).
2. Materials and Strategies
A sample of 100 children aged 8–12 with diagnosed learning disabilities participated in the study, divided into four distinct groups: students diagnosed with dyslexia, ADHD, language disorders, and other related disorders. Participants were recruited, together with their psychologists, from a Greek public differential diagnosis center and from special education programs in urban and suburban areas to ensure a diverse representation of socio-economic backgrounds. Written parental consent was obtained before the study, which took place during school hours, and strict data protection measures were implemented to ensure confidentiality. All participant data were anonymized using unique identification codes, securely stored on password-protected servers, and encrypted before transmission. Access was restricted to designated research team members, minimizing exposure. The study adhered to the GDPR and national data protection laws and received approval from a university ethics board. Additionally, a data retention policy was in place, ensuring that all collected information would be securely stored for five years before permanent deletion, in compliance with institutional guidelines. The study aimed to evaluate the effectiveness of an AI-based grammar assessment tool compared to conventional paper-based grammar instruction.
The AI-based tool was programmed to administer a series of grammar tests: sentence correction, identifying and correcting grammatical errors in sentences (e.g., original: “She go to the store every day.” Corrected: “She goes to the store every day.”); verb conjugation, choosing the correct verb form in various tenses (e.g., question: “By this time next year, she ___ (complete) her degree.” Answer choices: (a) completes, (b) will complete, (c) completed. Correct answer: “will complete”); and pronoun usage, selecting appropriate pronouns based on sentence context (e.g., question: “Neither of the students brought ___ books to class.” Answer choices: (a) their, (b) his or her, (c) its. Correct answer: “his or her”). Each test adapted to the user’s proficiency level in real time, allowing for a personalized learning experience. Pre-test and post-test measures were taken to evaluate performance improvements over four weeks of AI-mediated practice.
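As a concrete illustration, one way an item like the verb-conjugation question above could be represented and graded in software is sketched below. The data structure, field names, and `grade` function are hypothetical and are not taken from the study’s actual tool.

```python
# Hypothetical sketch of how a multiple-choice grammar item and its grading
# could be represented; names and structure are illustrative only.
from dataclasses import dataclass


@dataclass
class GrammarItem:
    prompt: str      # question shown to the student
    choices: list    # answer options
    correct: str     # the expected answer
    skill: str       # grammar skill the item targets


def grade(item: GrammarItem, response: str) -> bool:
    """Return True if the student's response matches the answer key."""
    return response.strip().lower() == item.correct.strip().lower()


item = GrammarItem(
    prompt="By this time next year, she ___ (complete) her degree.",
    choices=["completes", "will complete", "completed"],
    correct="will complete",
    skill="verb conjugation",
)

print(grade(item, "will complete"))  # → True
print(grade(item, "completes"))      # → False
```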
The AI-driven grammar assessment tool utilized in this study was designed specifically for children with learning disabilities. It is a supervised machine learning-based system that employs natural language processing (NLP) and adaptive learning algorithms to tailor grammar exercises to each student’s unique needs. The tool integrates error analysis models to identify patterns in grammatical mistakes and provides instant corrective feedback with detailed explanations. The AI system was trained using a large corpus of Greek language grammar rules, including data from student-written texts with annotated grammatical errors. It uses reinforcement learning to improve its feedback mechanisms based on student responses over time. The system also applies Bayesian Knowledge Tracing (BKT) to track student progress and predict the next optimal learning challenge.
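The BKT component mentioned above has a standard formulation: the system keeps a running probability that the student has mastered a skill and updates it after every observed response. A minimal sketch of the classic update rule follows; the parameter values are illustrative defaults, not those of the study’s system.

```python
# Minimal sketch of a standard Bayesian Knowledge Tracing (BKT) update.
# The slip, guess, and learn parameters are illustrative defaults,
# not the values used by the study's tool.
def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.3):
    """Update P(skill mastered) after one observed response."""
    if correct:
        # P(mastered | correct): mastered students rarely slip
        posterior = (p_mastery * (1 - slip)) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess
        )
    else:
        # P(mastered | wrong): unmastered students rarely guess right
        posterior = (p_mastery * slip) / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess)
        )
    # Account for the chance the skill was learned during this attempt
    return posterior + (1 - posterior) * learn


p = 0.5                      # prior: 50% chance the skill is mastered
p = bkt_update(p, correct=True)
print(round(p, 3))           # → 0.873 (estimate rises after a correct answer)
```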
The tool evaluates students’ initial proficiency through a diagnostic test and adjusts difficulty levels accordingly. When a mistake is made, the AI provides context-aware explanations, guiding the student toward self-correction. Performance metrics, such as accuracy, response time, and engagement levels, are tracked and visualized for educators and parents. The AI-based tool recorded user interactions, including time spent on each task, number of attempts per question, and completion rates. Higher engagement was indicated by sustained interaction, minimal inactivity periods, and repeated attempts to improve scores. The system includes reward-based incentives, such as points and badges, to enhance student motivation. The AI model is a hybrid system combining rule-based processing and machine learning. It integrates transformer-based NLP models, similar to GPT-based architectures, but optimized for Greek language grammar assessment. The AI tool is currently not open source, as it was developed exclusively for this research. However, a beta version is available upon request for educators and researchers interested in conducting further studies. The research team is exploring potential collaborations for integration into broader educational platforms.
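The paragraph above describes difficulty adjustment driven by tracked proficiency. One plausible mapping from a mastery estimate to the next task’s difficulty might look like the following sketch; the three-level scheme and the thresholds are hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch of difficulty selection from a mastery estimate;
# the levels and thresholds are illustrative, not the study's actual policy.
def next_difficulty(p_mastery: float) -> str:
    if p_mastery < 0.4:
        return "easy"      # reinforce fundamentals first
    if p_mastery < 0.8:
        return "medium"    # practice at the edge of current ability
    return "hard"          # challenge once mastery is likely


print(next_difficulty(0.25))  # → easy
print(next_difficulty(0.85))  # → hard
```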
The study utilized a quasi-experimental design with two groups: (a) an experimental group (n = 50) that received AI-facilitated grammar assessments and personalized feedback, and (b) a control group (n = 50) that completed conventional paper-based grammar tests without personalized feedback.
The following metrics were used to assess the effectiveness of the interventions:
Accuracy rate (%): The percentage of correct answers on grammar tasks.
Completion time (minutes): Average time taken to complete each test.
Engagement level: Based on self-reported satisfaction surveys and observed test completion rates.
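The first two metrics above are simple aggregates; a minimal sketch of how they could be computed is shown below, with illustrative function names.

```python
# Sketch of the two objective metrics defined above; names are illustrative.
def accuracy_rate(correct: int, total: int) -> float:
    """Percentage of correct answers on grammar tasks."""
    return 100.0 * correct / total


def mean_completion_time(times_minutes: list) -> float:
    """Average time taken to complete each test, in minutes."""
    return sum(times_minutes) / len(times_minutes)


print(accuracy_rate(9, 20))                 # → 45.0
print(mean_completion_time([14.0, 15.0]))   # → 14.5
```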
3. Results
Descriptive statistics were computed to summarize the demographics of the sample (Table 1), the diagnoses (Table 2), study variables, and baseline test scores. For each group, the mean, standard deviation (SD), and range of scores were calculated to ensure comparability at the start of the intervention (Table 3).
Figure 1 provides an overview of the demographic characteristics of participants across the experimental and control groups. The left panel illustrates the mean age and standard deviation for each group. The experimental group had a mean age of 10.2 years (SD = 1.4), while the control group had a mean age of 10.1 years (SD = 1.5). The overall mean age across both groups was 10.15 years (SD = 1.45). These values indicate that both groups are closely matched in terms of age distribution. The right panel displays the gender distribution and urban/suburban residency breakdown. The gender distribution appears balanced, with 28 males and 22 females in the experimental group and 27 males and 23 females in the control group, resulting in an overall distribution of 55 males and 45 females. Similarly, the urban/suburban split remains consistent across groups, with approximately 55% of participants residing in urban areas and 45% in suburban areas.
Figure 2 presents the distribution of diagnosed learning disabilities across the experimental and control groups. Dyslexia was the most common diagnosis, with 36% of participants in the experimental group and 34% in the control group, resulting in an overall prevalence of 35%. ADHD was reported in 28% of the experimental group and 30% of the control group, with a total of 29%. The distribution of Specific Language Impairment (SLI) and Mixed/Other diagnoses was relatively similar between groups, with minor variations. These results indicate that the two groups had a comparable distribution of learning disabilities, ensuring that any observed effects in later analyses are not confounded by initial group differences.
Figure 3 presents the baseline grammar performance of participants in the experimental and control groups. The experimental group had a mean accuracy of 45.6% (SD = 6.2), while the control group had a mean accuracy of 46.2% (SD = 5.8). An independent samples t-test revealed no statistically significant difference between groups at baseline, p = 0.71, confirming that both groups started with comparable grammar performance levels before the intervention.
Descriptive statistics showed that the mean baseline accuracy was 45.6% (SD = 6.2) for the experimental group and 46.2% (SD = 5.8) for the control group. Independent samples t-tests comparing baseline grammar performance confirmed no significant differences between groups (t(98) = −0.37, p = 0.71), validating their comparability for further analysis (Table 3). A repeated measures ANOVA was employed to assess changes in grammar performance over time within and between groups, with “Time” (pre-test vs. post-test) as the within-subjects factor and “Group” (experimental vs. control) as the between-subjects factor. Significant main effects were found for both factors, and a significant interaction effect indicated that the experimental group improved more substantially than the control group (F(1, 98) = 24.7, p < 0.001).
Post-intervention means and standard deviations were calculated for each group. The experimental group achieved a mean accuracy of 78.5% (SD = 5.6), while the control group achieved 70.2% (SD = 6.1) (Table 4). The difference was statistically significant (p < 0.001). Cohen’s d was calculated to determine the magnitude of the effect of the AI tool on grammar performance. The effect size was found to be 0.84, indicating a large effect (Table 5).
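Cohen’s d for two independent groups is the difference in means divided by the pooled standard deviation. A sketch on toy arrays follows; whether the study applied the formula to post-test scores or to gain scores is not specified here, so the numbers below are purely illustrative.

```python
# Cohen's d with a pooled standard deviation, illustrated on toy arrays;
# the study's exact inputs are not reproduced here.
import math


def cohens_d(a, b):
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd


print(cohens_d([2, 3, 4], [1, 2, 3]))  # → 1.0, a large effect by convention
```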
Figure 4 illustrates the post-intervention grammar performance of participants in the experimental and control groups. The experimental group achieved a mean accuracy of 78.5% (SD = 5.6), while the control group achieved 70.2% (SD = 6.1). The observed difference was statistically significant (
p < 0.001), suggesting that the AI-based intervention led to greater improvements in grammar accuracy compared to the conventional paper-based approach.
Figure 5 presents the effect size of the AI tool on grammar performance, measured using Cohen’s d. The effect size was calculated as 0.84, which exceeds the 0.8 threshold for a large effect. This suggests that the AI-based intervention had a substantial impact on improving grammar accuracy compared to the conventional paper-based approach.
Additional analyses were conducted for engagement levels and time efficiency. High engagement levels in the experimental group were confirmed by observational data and survey responses. Test completion times were analyzed using paired samples t-tests for each group, confirming significant reductions in the experimental group (Table 6 and Table 7).
Figure 6 demonstrates the average test completion times before and after the intervention for both groups. The experimental (AI) group reduced their test completion time from 14.5 min to 9.2 min, a decrease of 5.3 min. In contrast, the control (paper) group showed a smaller reduction, from 15.1 min to 12.4 min (2.7 min). The greater reduction in the experimental group suggests that the AI-based intervention improved efficiency more significantly than the conventional method.
Figure 7 shows the engagement levels in the experimental and control groups. The experimental (AI) group demonstrated significantly higher engagement, with 76% of participants classified as highly engaged, compared to only 42% in the control group. Conversely, the control group showed higher levels of moderate (36%) and low engagement (22%), suggesting that the conventional paper-based method resulted in lower overall engagement. These findings indicate that the AI-based intervention was more effective in promoting active participation.
The study revealed a significant improvement in grammar accuracy for the experimental group that utilized the AI-based tool. The mean accuracy score increased from 45.6% (SD = 6.2) in the pre-test to 78.4% (SD = 5.6) in the post-test, reflecting an average gain of 32.8%. In contrast, the control group, which completed conventional paper-based tests, showed an improvement from 46.2% (SD = 5.8) to 62.3% (SD = 6.1), with an average gain of only 16.1%. The observed difference in improvement between the two groups was statistically significant (p < 0.001). These findings suggest that the AI tool effectively enhanced grammatical accuracy by providing real-time feedback and adaptive testing, which are not features of conventional strategies.
The experimental group demonstrated notable reductions in test completion time, reflecting enhanced processing speed and improved proficiency in grammar tasks. On average, students in the AI group reduced their test completion time by 5.3 min, with a pre-test average of 14.5 min and a post-test average of 9.2 min. In comparison, the control group showed a reduction of only 2.7 min, from a pre-test average of 15.1 min to a post-test average of 12.4 min. These results underscore the AI tool’s ability to streamline cognitive processing and task execution, likely due to its adaptive and interactive nature.
Engagement levels were significantly higher among students in the experimental group. According to self-reported surveys and observational data, 76% of the AI group exhibited high engagement levels, compared to only 42% in the control group. Moderate engagement was reported by 18% of the AI group and 36% of the control group. Low engagement levels were minimal in the AI group (6%) but significantly higher in the control group (22%). These engagement trends were further corroborated by completion rates, with fewer dropouts observed in the experimental group. The personalized feedback and interactive elements of the AI tool likely contributed to these positive engagement outcomes, enhancing motivation and reducing frustration among participants.
4. Discussion
The findings of this study underscore the potential of AI-driven tools in supporting modern pedagogical strategies and facilitating personalized learning, particularly for children with learning disabilities. Rather than positioning AI as superior to traditional approaches, this study highlights its role as a complementary tool that enhances instructional strategies by providing real-time feedback, adapting to individual learning needs, and maintaining high levels of engagement.
The findings of this study highlight the effectiveness of AI-driven interventions in enhancing grammar performance, engagement, and efficiency among children with learning disabilities. The significant improvement in grammar accuracy (from 45.6% to 78.5%) in the experimental group, compared to the more modest gains in the control group, underscores the impact of personalized, adaptive learning experiences. While the study effectively presents these results, a deeper connection to the theoretical framework strengthens the interpretation of these findings.
Cognitive Load Theory (Sweller, 1988) provides a critical lens for understanding the effectiveness of the AI-driven tool. Grammar learning can impose a high cognitive load, particularly for students with learning disabilities, due to the demands on working memory and processing speed. The AI system mitigated these challenges by offering real-time, adaptive feedback, which reduced extraneous cognitive load by guiding students toward correct grammatical structures without unnecessary frustration. The reduction in test completion time (from 14.5 to 9.2 min) in the experimental group further supports this, suggesting that the AI tool streamlined cognitive processing and improved efficiency in task execution.
From the perspective of Constructivist Learning Theory (Piaget, 1950; Vygotsky, 1978), the study’s AI intervention aligns with the principles of scaffolded, interactive learning. The tool’s progressively challenging tasks and context-aware explanations allowed students to construct their understanding of grammar rules at an individualized pace. Unlike static paper-based methods, the AI system provided immediate corrective feedback, reinforcing the role of active engagement and self-regulation in learning. The high levels of engagement (76% of students in the AI group compared to 42% in the control group) reflect the constructivist emphasis on interactive and student-centered learning.
Behaviorist Reinforcement Principles (Skinner, 1957) further explain the observed motivational benefits. The AI tool’s reward-based system, including points, badges, and instant positive reinforcement, likely contributed to the higher engagement and sustained effort observed in the experimental group. The behaviorist model suggests that immediate feedback and reinforcement encourage repetition and mastery, which aligns with the increased accuracy rates observed in post-test performance.
Additionally, the Universal Design for Learning (UDL) framework (Rose & Meyer, 2002) offers a relevant perspective on accessibility and differentiated instruction. By providing multiple means of representation, engagement, and expression, the AI tool accommodated diverse learning needs, ensuring that students with different cognitive profiles benefited from personalized instruction. The strong effect size (Cohen’s d = 0.84) reinforces the argument that adaptive learning environments can lead to substantial improvements in academic performance.
Finally, the integration of Artificial Intelligence in Education (AIED) principles demonstrates how machine learning and Natural Language Processing (NLP) algorithms can be leveraged to create individualized learning trajectories. The tool’s ability to analyze grammatical errors, predict learning patterns, and adjust difficulty levels aligns with the broader discussion on AI’s role in education as a personalized, data-driven support system. Therefore, the study’s findings support the notion that AI-based interventions can significantly enhance grammar learning, engagement, and efficiency by aligning with well-established learning theories. Explicitly linking these results to Cognitive Load Theory, Constructivist Learning, Behaviorist Reinforcement, UDL, and AIED frameworks not only strengthens the theoretical foundation of this research but also reinforces the broader implications of AI in inclusive education. Future research should explore long-term retention effects and the integration of AI with teacher-led instruction to further enhance its impact.
One of the most significant findings is the improvement in grammar accuracy observed in the experimental group (mean gain of 32.8%) compared to the control group (16.1%). While previous research has demonstrated moderate gains using conventional instructional strategies or static digital tools (typically 10–15%), the adaptability and immediate feedback mechanisms of AI-driven systems appear to provide additional support in fostering a deeper understanding of grammatical concepts. These findings align with existing literature that emphasizes the importance of interactive and adaptive learning environments.
Another key aspect is the efficiency observed in task completion. The AI-driven tool facilitated a reduction of 5.3 min in test completion time for the experimental group, compared to a 2.7-min reduction in the control group. This suggests that AI-based strategies may streamline cognitive processing by reducing task-related anxiety and promoting sustained engagement. While previous research has acknowledged the challenges children with learning disabilities face in processing speed, few studies have specifically examined how adaptive AI systems influence this factor.
Engagement levels further support the effectiveness of AI as an instructional tool, with 76% of the experimental group reporting high engagement. Prior research has consistently identified engagement as a strong predictor of learning outcomes, yet traditional strategies and non-adaptive tools often struggle to maintain student interest. By incorporating interactive and responsive features, AI-based approaches may offer a valuable means of fostering sustained motivation and attentiveness among learners.
Moreover, the observed effect size (Cohen’s d = 0.84) suggests a substantial impact of AI-enhanced strategies on grammar performance. This is notable given that meta-analyses of interventions for learning disabilities often report small to medium effect sizes. While this study contributes to a growing body of evidence supporting adaptive AI tools, it is important to view these findings as part of an evolving discussion on optimizing learning interventions rather than as definitive proof of AI’s superiority over other educational approaches.
In comparison to previous research (e.g., Zhang et al., 2022; Haleem et al., 2022), which emphasized AI’s potential in providing immediate feedback and personalized learning pathways, this study corroborates these findings while specifically addressing their implications for children with learning disabilities. Additionally, while studies by Özdere (2023) and Alam and Mohanty (2023) explored the broader benefits of technology in language education, this study provides targeted evidence on how AI tools can support learners with processing delays and attention difficulties.
Unlike earlier research relying on static digital tools (e.g., Norton & Buchanan, 2022), the high engagement levels observed in this study suggest that interactive and adaptive AI systems may be particularly effective in sustaining student motivation. Similarly, while studies such as Hoang et al. (2021) and Shin et al. (2024) documented improvements in grammar skills through AI applications, this study contributes new insights by examining time efficiency as an additional factor influencing learning outcomes.
Despite these promising findings, further research is needed to explore the long-term retention of skills acquired through AI-supported learning. Additionally, integrating AI tools with teacher-led interventions or collaborative learning activities may offer further benefits, a hypothesis supported by previous research but not directly tested in this study. Future studies should also consider the interaction between AI-driven strategies and other instructional approaches to develop more comprehensive educational frameworks.
In conclusion, this study highlights the valuable role of AI in supporting contemporary pedagogical strategies by enabling personalized learning experiences, improving engagement, and enhancing efficiency. Rather than replacing traditional approaches, AI-driven tools should be viewed as a means of enriching and complementing existing instructional strategies to better meet the diverse needs of learners.
5. Conclusions
In conclusion, this study provides compelling evidence of the efficacy of AI-driven tools in enhancing grammar performance, engagement, and efficiency among children with learning disabilities. By addressing individual needs with real-time adaptability, the AI tool outperformed conventional strategies on every outcome measured here. Future studies should aim to replicate these findings across different populations and extend the analysis to long-term impacts, thereby solidifying the role of AI in inclusive education.
The integration of artificial intelligence in education represents a pivotal advancement in addressing the diverse and often complex needs of students with learning disabilities. This study demonstrates that AI-driven tools are not merely supplementary but can serve as transformative agents, offering tailored learning experiences that conventional strategies cannot match. The significant improvements observed in grammar performance, efficiency, and engagement among the experimental group underscore the potential of adaptive technologies to bridge longstanding gaps in educational accessibility and effectiveness.
However, as with any innovation, the journey of integrating AI into educational frameworks is ongoing. While this study establishes a strong foundation, future efforts must focus on long-term impacts, scalability, and the integration of AI tools within broader pedagogical strategies. Collaboration between educators, technologists, and policymakers will be critical in ensuring these tools are effectively deployed and equitably accessible.
The promise of AI in education is vast, but its success hinges on thoughtful implementation and a commitment to inclusivity. By continuing to explore and refine these technologies, we can move closer to a future where every learner, regardless of their challenges, has the opportunity to reach their full potential. This study serves as a testament to the possibilities that lie ahead and a call to action for further innovation and research in this transformative field.