Article

Evaluating the Potential of Generative Artificial Intelligence to Innovate Feedback Processes

by Gilberto Huesca 1,*, Mariana E. Elizondo-García 2, Ricardo Aguayo-González 3, Claudia H. Aguayo-Hernández 4, Tanya González-Buenrostro 5 and Yuridia A. Verdugo-Jasso 6

1 School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
2 Educational Innovation and Digital Learning, Tecnologico de Monterrey, Monterrey 64849, Mexico
3 School of Architecture, Art and Design, Tecnologico de Monterrey, Mexico City 14380, Mexico
4 Academic Vicerectory, Tecnologico de Monterrey, Monterrey 64849, Mexico
5 Digital Enablement and Transformation, Tecnologico de Monterrey, Monterrey 64849, Mexico
6 Food Innovation International Center, Tecnologico de Monterrey, Ciudad Obregón 85010, Mexico
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(4), 505; https://doi.org/10.3390/educsci15040505
Submission received: 22 March 2025 / Revised: 13 April 2025 / Accepted: 16 April 2025 / Published: 18 April 2025
(This article belongs to the Special Issue Generative-AI-Enhanced Learning Environments and Applications)

Abstract: Feedback is an essential component of the teaching–learning process; however, it can vary in quality due to different contexts and students’ and professors’ individual characteristics. This research explores the effect of generative artificial intelligence (GenAI) in strengthening personalized and timely feedback, first defining an adaptable framework to integrate GenAI into the feedback mechanisms described in theoretical frameworks. We applied a between-subjects analysis in an experimental research design with 263 undergraduate students across multiple disciplines, using a pretest–post-test process with control and focus groups to evaluate students’ perceptions of artificial intelligence-enhanced feedback versus traditional professor-led feedback. The results show that students who used GenAI reported statistically significantly higher satisfaction levels and a greater sense of ownership in the feedback process. Additionally, GenAI scaffolded continuous improvement and active student participation through a structured and accessible feedback environment, with 97% of students willing to reuse the tool. These findings show that GenAI is a valuable tool to complement professors in the creation of an integrated feedback model. This study outlines directions for future research on combining artificial intelligence with innovative strategies to produce a long-term impact on education.

Graphical Abstract

1. Introduction

The accelerated pace of technological advances has pushed educational institutions into a continuous process of experimentation and opportunity creation. According to Gartner (Sheehan et al., 2024), the 2035 vision focuses on personalizing the teaching–learning process, putting students at its center, where their individual needs will define the development of their learning path.
Artificial intelligence (AI) has emerged as a point of attention in several aspects of human life. Additionally, with the democratization of generative artificial intelligence (GenAI) through ChatGPT, whose arrival in 2022 marked a turning point for many environments, the teaching–learning process began to transform both inside and outside the classroom. According to UNESCO (2023) and Chang et al. (2024), this tool can act as a personal tutor or a study companion. In particular, its ability to act as a creation engine is key to maintaining structured conversations with students (such as those involved in the feedback process). Also, Angulo Valdearenas et al. (2024) set out some use cases in education to improve learning personalization by means of course adaptation and access to resources in multiple languages, promoting an inclusive and student-centered education. It is therefore imperative to analyze and research the implementation of GenAI in education to understand its impact, scope, and risks.

1.1. Feedback in Education

Feedback is an essential part of the assessment process; it boosts learning and scaffolds self-regulation and self-direction abilities in students (Panadero et al., 2018). It is an active and dynamic process in which students interpret and use information to improve their learning. The concept is inherent to sustainable assessment (Boud & Soler, 2016), as students analyze, discern, and assess what they have accomplished and what needs to be improved. Feedback encourages students’ understanding of discipline standards and improvement plans (Carless, 2013).
From this active perspective, Carless (2013) promotes a dialogical approach where interpretations, meanings, or expectations are shared to give the students the opportunity to understand the standards of the discipline so that they can draw up a plan to achieve them.
Models have been developed to guide professors in this task. Hattie and Timperley (2007) proposed that effective feedback should answer three questions: Where am I going? How am I going? And where to next? These questions define three levels of feedback:
  • Feed up: Explain what is expected in the activity and how it relates to the learning objectives.
  • Feed back: Analyze the students’ work and indicate what is good and what needs improvement.
  • Feed forward: Suggest concrete actions to improve future deliveries or the understanding of the topic.
Additionally, the authors state that feedback should address the task, the process, self-regulation, and personal attitudes. Aligned with this, Stobart (2018) proposed that effective feedback develops students’ autonomy and self-regulation, going beyond mere error correction. This model emphasizes understandable, timely, and future-oriented feedback that fosters students’ deep and continuous learning and their empowerment over their learning strategies. Wesolowski (2020) extends this by arguing that the key to successful feedback is defining clear and relevant assessment criteria to guide professors during formative and summative assessments.
Such perspectives have become more visible with the shift in focus from teaching to learning (Mendiola & González, 2020; Moreno Olivos, 2016; Stiggins, 2005; Wiliam et al., 2004; Brown, 2005). However, ensuring that students receive meaningful and individualized feedback remains a key challenge, especially in large classes (Khahro & Javed, 2022) and when student engagement varies (Carless & Winstone, 2023). These obstacles highlight the need for dynamic and structured feedback systems that promote clarity, encourage student participation, and adapt to different learning contexts, helping students take an active role and develop critical thinking about their own learning (Boud & Molloy, 2013). There is no doubt that this will contribute to accelerating learning, optimizing the quality of what is learned, and improving individual and collective achievements, as well as equipping students with lifelong skills, as Hounsell (2007) states.

1.2. GenAI and Its Use in Education and Feedback

Research on GenAI applied to learning is still in its initial stages, with limited empirical studies addressing its effectiveness, such as the works of Abdelghani et al. (2023), Xu and Liu (2025), Huesca et al. (2024), and Teng (2025). Furthermore, the role of institutions has also been explored in works such as Tran et al. (2024) and Korseberg and Elken (2024).
Some studies have shown that multimodal tools, such as ChatGPT, can increase interaction, accessibility, and effectiveness of learning (Bewersdorff et al., 2025), as well as the capacity for self-regulation and academic performance (Afzaal et al., 2021; Sung et al., 2025), confirming the transformative role of GenAI in education.
Feedback supported by GenAI has been explored within the assessment process, showing that it increases personalization (Y. Zhou et al., 2025; Naz & Robertson, 2024; Güner et al., 2024) and helps professors manage this process more easily in large groups (Pozdniakov et al., 2024). In this sense, Jiménez (2024) showed that ChatGPT eliminates professors’ time constraints by strengthening student autonomy.
From the students’ point of view, Campos (2025) reveals that students express satisfaction mainly because the tool gives immediate and specific answers about what they should improve in their assignments. This automation of feedback, which makes it more efficient and timely, stems from GenAI’s natural language capabilities. These features are useful for generating specific, real-time feedback that can be adapted to the style and level of each student and help them monitor their performance in order to improve.
For example, Teng (2024) showed that feedback provided by ChatGPT can improve writing motivation and student autonomy. Also, ChatGPT’s ability to personalize and deliver feedback in a timely manner led Hutson et al. (2024) to conclude in their study that it creates highly responsive, student-centered learning environments that become motivating and rewarding academic experiences. Furthermore, this motivation is a key element for the success of strategies that integrate GenAI. Chu et al. (2025) state that students with higher learning motivation show a more positive attitude when using GenAI for creative tasks.
However, further explorations are needed to understand the full scope of GenAI feedback and to analyze contrasting results. For example, on the one hand, Dai et al. (2024) found that ChatGPT surpassed the laborious feedback activity carried out by professors. On the other hand, Lin and Crosthwaite (2024) concluded that, compared to the feedback provided by ChatGPT when checking written work, professors’ feedback is more consistent, comprehensive, and global.
It is important to pause here to note that, although these works have taken important steps in GenAI research in education, the state of the art has not yet connected the tool with key educational theories. A step forward must be taken in this direction to provide a theoretical basis for the use of GenAI and to extend traditional theories toward the elements required by technological advances.
In addition, ethical dilemmas and conflicts arise with the use of GenAI. Hagendorff (2024) created a taxonomy of 19 ethical topics, exposing issues related to fairness and bias, regulation, governance, privacy, authorship, and transparency, in areas such as education and learning, sustainability, and the arts. Regarding education and learning, Z. Wang et al. (2025) state that the main causes of students’ unethical behavior are time pressure, challenging courses, and a notable lack of knowledge among professors about these tools. This underscores the need for teacher training, so that professors can convey the usefulness of the tool and how to focus its use, while avoiding both an overestimation of its features and the impression that learning through this tool requires little personal effort (Al Murshidi et al., 2024).
To advance the integration of GenAI into the teaching–learning process, it is essential to promote collaboration between professors, researchers, educational institutions, and policymakers. Such an approach will ensure the effective, ethical, and responsible use of these tools, promoting critical thinking and originality among students (Cordero et al., 2024).
This study takes a step forward in this direction by introducing a framework for using GenAI to enrich the feedback process based on educational theory. The specific objectives of this work are (1) to present a methodology for integrating GenAI tools into traditional feedback processes and (2) to present the results of a statistical analysis of students’ perceptions of the feedback received using GenAI compared to the traditional feedback process. This work aims to serve as a guide for educators and institutions seeking to integrate AI tools into education.

2. The Use of GenAI to Enhance the Feedback Process: A Proposed Methodology

Figure 1 represents the traditional feedback process, where students rely on the professor for feedback at specific times, which can lead to long waiting times.
To explore the impact of GenAI on the feedback process, a methodology that integrates ChatGPT into the teaching–learning process was designed. The approach of this methodology focuses on providing students with timely, iterative, and structured feedback, without replacing the professor’s role in the final evaluation. This methodology is described next.
Step 1: Professors select a topic within the course.
Step 2: Professors design an activity, linked to the selected topic, and clearly define the reasons for carrying it out (motivation); the impact they want to achieve on students (objective); and what students should do (instructions).
Step 3: Professors define the deliverables of the activity and design the instrument for its evaluation. The detailed description of these two aspects facilitates the process of integrating the GenAI tool into the feedback process.
Step 4: Professors build a prompt for the AI feedback tool. Prompts guide the model to generate responses, in a conversational way, that align with the user’s intent. Figure 2 shows the structure of the prompt, highlighting the key elements needed to generate structured feedback; a minimal sketch of how these sections can be assembled into a single prompt is given after the list below. Each of these sections is described as follows:
  • Intention and Context. The personality that the GenAI tool will exhibit is described here. This specification can be made using the Persona Pattern for prompt engineering (White et al., 2023) by giving instructions like “Act as a person who is an expert on topic x”. Likewise, in this section, the characteristics of the students are presented. It is suggested to indicate the name of the course (as a specification of the domain in which the tool will be deployed) and the level of studies of the course (to indicate the depth to be applied). Other elements can be added for further specification. The objective of these definitions is to give an initial context to the AI tool.
  • Task Description and Instructions. The activity instructions designed in the previous steps are provided. It is important to mention that the length of the prompts can confuse the tool, so it is suggested to make a precise summary. Additionally, it is important to declare the deliverables to be produced by students.
  • Learning Objectives. This section describes the objectives to be achieved in the activity. This will center the tool’s answer on what should be achieved by students.
  • Evaluation Criteria. The evaluation of the activity is based on previously defined criteria. It is recommended to use rubrics or checklists to delimit the expected results. Having clear criteria is essential to ensure that feedback is understandable and useful for students (Wesolowski, 2020; Brookhart, 2020). It is important to mention that evaluation is the exclusive responsibility of the professors. AI cannot replace the ethical and expert judgment of professors, who have the full context of the student’s performance. Furthermore, only professors can consider emotional and motivational factors in the final evaluation (Burner et al., 2025). In contrast, the tool cannot directly observe the student’s actions or the development of attitudinal or behavioral components. Therefore, this section is only a reference for the tool.
  • AI Behavior and Expectations. The type of interaction that the tool must have with the students, what is allowed, and what is not allowed within this interaction are defined in this section. For example, professors can tell the tool to not provide a direct solution to the task. Patterns such as Question Refinement or Cognitive Verifier (White et al., 2023) can also be applied so the tool can have more precision in its answers. Additionally, a tool’s introduction message could be defined.
  • Feedback Format. A structured format is established to guarantee that the answers provided by the tool are clear, organized, and aligned with pedagogical principles of effective feedback. In this section of the prompt, professors can define elements in the three levels of feedback mentioned by Hattie and Timperley (2007) (feed up, feed back, and feed forward).
    The Context Manager pattern (White et al., 2023) can be applied here to maintain a fixed response structure to avoid redundant or disorganized information.
    The perception of feedback influences students’ motivation and commitment, so structuring feedback in a balanced way is essential to maintain their confidence in the learning process (Mayordomo et al., 2022; Van Boekel et al., 2023). So, a balance between positive aspects and areas for improvement should be enforced. This will make the feedback immediate, understandable, and actionable and will avoid infinite loops in the conversation.
  • Additional Guidelines. Other elements can be added to fine-tune the tool’s interactions. For example, to use friendly and pleasant language during conversation.
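As a complement to the sections above, the following is a minimal, hypothetical sketch of how the prompt elements of Figure 2 could be assembled programmatically. The course name placeholders, template text, and helper function name are illustrative assumptions, not taken from the study.

```python
# A minimal sketch of assembling the prompt sections described above into a
# single system prompt. All placeholder names below are hypothetical.

FEEDBACK_PROMPT_TEMPLATE = """\
[Intention and Context]
Act as a person who is an expert on {topic}. You will give formative feedback
to undergraduate students of the course "{course_name}".

[Task Description and Instructions]
{activity_summary}
Deliverables: {deliverables}

[Learning Objectives]
{learning_objectives}

[Evaluation Criteria]
Use the following rubric only as a reference; you must NOT assign grades:
{rubric}

[AI Behavior and Expectations]
- Do not provide a direct solution to the task.
- Ask clarifying questions when the submission is ambiguous.

[Feedback Format]
Always answer with three sections:
1. Feed up: what is expected and how it relates to the learning objectives.
2. Feed back: strengths and areas for improvement in the submitted work.
3. Feed forward: concrete actions to improve the next delivery.
Balance positive aspects and areas for improvement.

[Additional Guidelines]
Use friendly and pleasant language.
"""


def build_feedback_prompt(topic, course_name, activity_summary, deliverables,
                          learning_objectives, rubric):
    """Fill the template with the activity-specific information from Steps 1-3."""
    return FEEDBACK_PROMPT_TEMPLATE.format(
        topic=topic,
        course_name=course_name,
        activity_summary=activity_summary,
        deliverables=deliverables,
        learning_objectives=learning_objectives,
        rubric=rubric,
    )
```

In practice, professors would fill the placeholders with the motivation, objectives, instructions, deliverables, and rubric defined in Steps 2 and 3.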
Step 5: Prompt refinement. Once the prompt is designed, iterative tests are needed to ensure it generates useful feedback. It is recommended to use examples of deliverables with diverse levels of quality to observe the tool’s behavior.
Step 6: Publication of the activity. The activity is published along with a description of how to use the AI tool. This guide could include instructions on safeguarding student and third-party data, preventing plagiarism, and upholding academic integrity.
Step 7: Presentation of the AI feedback tool. It is important to present a detailed explanation of the AI tool’s use and advantages to the students. A live demonstration would be useful here.
Step 8: Guidance during the process. Professors must continuously monitor students’ work and the use of the AI tool. Professors must maintain an active presence throughout the process, ensuring that students understand how to leverage AI as an improvement resource and not as a replacement for professor guidance. In the initial stages, this support can focus on promoting the use of the tool, as some students may be skeptical due to a lack of familiarity with the technology or concerns about its impact on learning (X. Zhou et al., 2024).
Step 9: Evaluation and final feedback. Finally, professors must assess the students’ delivery. This step can be used to highlight principal elements of the course that may have been overlooked by the tool. Additionally, a grade can be assigned if required. Figure 3 shows a summary of the proposed methodology.
The following sections present the analysis of this methodology’s implementation.

3. Hypothesis

This research aims to explore whether ChatGPT improves students’ perceptions of their feedback experience compared to a traditional teaching–learning process. Based on this, the following hypothesis is defined for this work:
H1. 
A teaching–learning process enhanced with the use of ChatGPT will have a greater positive effect on students’ perceptions of their feedback experience when solving a learning activity, compared to a traditional process with no intervention of the artificial intelligence tool.

4. Materials and Methods

This study applied a between-subjects analysis in an experimental research design to the results of 263 students enrolled in undergraduate courses. Students were organized into 7 groups from different disciplines. The characteristics of these groups and the treatment received are described as follows:
  • One group of a course in Discrete Mathematics, a curricular course for undergraduate programs related to Computer Science, taught by a team of 2 professors. This group was selected as a focus group (n = 17). The methodology was implemented in a challenge that lasted 5 weeks (the course’s duration). Students had to solve a problem linked to reality. The AI tool was configured to help students create a written report by giving feedback about the organization of the document and how to strengthen its links to the course concepts. This group was taught in a virtual format.
  • Five groups of a course related to Architecture, a curricular course for the Architecture undergraduate program. These groups were all taught by the same professor. Two groups were randomly designated as control groups (n = 27), while three groups were designated as focus groups (n = 69). Students wrote a prompt to generate an image intended to inspire others to commit to fighting global warming and climate change by learning to design zero-carbon buildings. This activity lasted 4 weeks. These groups were taught in a face-to-face format.
  • One group of a course related to Biomimicry, an elective course open to any undergraduate program in the institution. This group was taught by a team of 2 professors in a virtual format. Due to the high enrollment in this course (n = 150) and the fact that it included students from different campuses of the institution, the course was delivered as a single group, with no possibility of splitting students into separate control and focus groups. As part of this study, all students completed two separate learning activities. The first activity, considered the control group implementation, was completed without using any AI tool and covered a different topic, but was equivalent in scope, difficulty, and grading weight to the second activity. The second activity, considered the focus group implementation, included access to a customized AI tool that provided formative feedback. This approach was chosen to ensure equitable access to the AI tool for all students, considering the group’s diverse composition and large class size. In the focus group activity, students analyzed a Leadership in Energy and Environmental Design-certified project, a globally recognized certification system for sustainable buildings developed by the U.S. Green Building Council. The AI tool was set up to support students in developing a written analysis and an infographic, providing feedback on structure, clarity, and coherence. In addition, it helped them strengthen their sustainability analysis and improvement proposals, promoting a more robust and informed approach. The students had almost 4 weeks to develop the activity.
The GenAI tools used were configured as custom Generative Pre-trained Transformers (GPTs) based on OpenAI ChatGPT’s GPT-4 Turbo model, which can handle large contexts and was specifically designed for high conversational capability and quality text generation.
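The study configured these tools as custom GPTs through the ChatGPT interface rather than through code. Purely as an illustration, an equivalent programmatic setup using the OpenAI Python SDK might look like the sketch below; the model identifier, the build_feedback_prompt helper from the earlier sketch, and the assumption that an API key is available in the environment are all assumptions, not the authors’ implementation.

```python
# Illustrative sketch only: an API-based equivalent of the custom GPT setup.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def get_feedback(system_prompt: str, student_submission: str) -> str:
    """Send a student's draft together with the configured prompt and return feedback."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # large-context conversational model (assumed identifier)
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": student_submission},
        ],
    )
    return response.choices[0].message.content
```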
An 11-question survey, applied as a pretest and post-test, was designed to collect the degree of satisfaction with feedback. Informed consent was obtained from all subjects involved in this study. This consent was collected in the class in which the activity and the tool were presented; in this class, professors also explained the research objectives before the pretest was applied. The post-test was administered in the class following the students’ submission of their work. An analytical strategy was applied following these steps:
  • Instrument validation. Experts were consulted and a statistical process was applied.
  • A statistical analysis of the difference in perception of the feedback received between the focus and control groups in the pretest results was conducted to determine if the groups had any difference in their experiences in previous courses.
  • Analysis of the difference in perception of the feedback received between the focus and control groups in the post-test results was conducted by applying an ordinal logistic regression to validate the hypothesis of this work.
  • Students who decided not to participate in this research were not asked to answer the surveys, and their data were not included.
Students were recruited from existing course enrollments, and random assignment was used in courses where multiple sections were available. The Biomimicry group was the only one selected to receive both treatments. Table 1 gives an overview of the sample assignments considering non-usable data.

5. Results

The results of the analytical strategy are presented in this section.

5.1. Instrument Validation

A total of 22 experts were consulted regarding their opinion on whether each item was “essential”, “useful, but not essential”, or “not necessary” to measure students’ perceptions. A Content Validity Ratio (CVR) value was calculated according to Lawshe (1975) and Wilson et al. (2012). The critical value used was 0.418. Questions, experts’ item classification, CVR, and decisions for each item are provided in Table 2.
Before removing any item, the survey’s CVR was 0.455 > 0.418, so the instrument was validated. After removing items, this value was 0.818.
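For reference, Lawshe’s CVR for an item is computed as CVR = (n_e − N/2)/(N/2), where n_e is the number of experts who rated the item “essential” and N is the total number of experts (22 here). The sketch below illustrates the calculation; the per-item expert counts are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch of Lawshe's Content Validity Ratio: CVR = (n_e - N/2) / (N/2).
# The per-item "essential" counts below are hypothetical, for illustration only.

def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    return (n_essential - n_experts / 2) / (n_experts / 2)

CRITICAL_CVR = 0.418  # critical value used in this study for 22 experts

essential_counts = {"item_1": 18, "item_2": 12, "item_3": 20}  # hypothetical
for item, n_e in essential_counts.items():
    cvr = content_validity_ratio(n_e, 22)
    decision = "keep" if cvr >= CRITICAL_CVR else "remove"
    print(f"{item}: CVR = {cvr:.3f} -> {decision}")
```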
An item was added to the survey given experts’ suggestions:
  • Would you recommend the use of the AI tool to your colleagues or friends? Answers: Yes, No, or I do not know.
The final instrument had the following items:
  • In general, how satisfied are you with the feedback you received for the activity?
  • Did you use the AI tool to receive feedback during the activity?
  • Did the feedback you received in this activity from the AI tool make you realize your areas of opportunity or improvement?
  • Did you use the feedback the AI tool gave you to improve the delivery of your activity?
  • Would you ask the AI tool for feedback again?
  • Would you recommend the use of the AI tool to your colleagues and/or friends?
On the other hand, the instrument’s internal consistency was also analyzed. For this purpose, the post-test results for questions 1, 3, 4, and 5 were used. Question 2 was not considered because, if it is not answered positively, the other questions cannot be answered. Question 6 was not included in the analysis because it involves an external factor (recommendation to another student), while the other questions refer to an internal factor (personal experience). Cronbach’s alpha = 0.612 and composite reliability = 0.631. These values are acceptable for exploratory research according to Hair Junior et al. (2014).
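As an illustration of the internal-consistency check, the sketch below computes Cronbach’s alpha from a matrix of coded Likert responses. The response matrix shown is a hypothetical placeholder; the actual computation would use the post-test answers to items 1, 3, 4, and 5.

```python
# Minimal sketch of Cronbach's alpha for the four retained items.
# The response matrix is hypothetical (rows = respondents, columns = items).
import numpy as np


def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_respondents, n_items) matrix of numeric Likert responses (1-4)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


example = np.array([[4, 3, 4, 4],
                    [3, 3, 2, 3],
                    [4, 4, 4, 4],
                    [2, 3, 3, 2]])
print(f"Cronbach's alpha = {cronbach_alpha(example):.3f}")
```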
Item 1 was the only question used in the pretest and was worded as follows:
“In general, and in your experience, what is the average level of satisfaction you have with the feedback you have received on previous courses?”
In the post-test, only item 1 was presented to the control group students. The complete survey was presented to the focus group students.
For the following analysis, answers were codified into numeric values as follows: (1) Very low, (2) Low, (3) High, and (4) Very high.

5.2. Statistical Analysis of the Difference in Perception of the Feedback Received on Previous Experiences (Pretest Results)

The results in the pretest (N = 201) were used in a median difference analysis between samples. Only for this analysis, a separate sample was created containing the students enrolled in the Biomimicry course given that they received both treatments. Table 3 shows the exploratory analysis, Figure 4 shows the Likert scale chart, and Figure 5 shows the density chart of the samples.
A Levene’s test (Y. Wang et al., 2017) showed homoscedasticity (F = 0.6389, p = 0.529). Given these results, a Kruskal–Wallis rank sum test (Ostertagova et al., 2014) was applied. The results (chi-squared = 5.3851, p = 0.06771) do not show evidence of a difference in the median. These results suggest samples have similar feedback experiences.
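The reported statistics follow standard implementations of these tests; a minimal sketch of the pretest comparison using SciPy is shown below, with hypothetical coded responses standing in for the control, focus, and Biomimicry samples.

```python
# Sketch of the pretest comparison: Levene's test for homogeneity of variance,
# followed by a Kruskal-Wallis rank sum test on the coded Likert responses (1-4).
# The three sample lists below are hypothetical placeholders.
from scipy import stats

control = [3, 4, 3, 2, 3, 4, 3]
focus = [3, 3, 4, 3, 2, 4, 4]
biomimicry = [4, 3, 3, 3, 4, 2, 3]

levene_stat, levene_p = stats.levene(control, focus, biomimicry)
print(f"Levene's test: F = {levene_stat:.4f}, p = {levene_p:.4f}")

kw_stat, kw_p = stats.kruskal(control, focus, biomimicry)
print(f"Kruskal-Wallis: chi-squared = {kw_stat:.4f}, p = {kw_p:.4f}")
```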

5.3. Analysis of the Difference in Perception of the Feedback Received When Solving the Learning Activities Comparing Focus and Control Groups (Post-Test Results)

For this analysis, data from students in the focus group who indicated that they did not use the tool (question 2) were removed. Regarding students in the Biomimicry course, the results of the implementation of the activity without using the AI tool were classified as the control group, while those of the implementation of the activity using the AI tool were classified as focus groups. A total of 252 values were used in this analysis. Table 4 shows the exploratory analysis, Figure 6 shows the Likert scale chart, and Figure 7 shows the density chart of the samples.
These results suggest that the focus group had a better perception of the feedback received.
Next, an ordinal logistic regression (Larasati et al., 2011) was applied. The predictor variable was the treatment received, and the criterion variable was the feedback perception; the treatment was found to contribute to the model (estimate = 0.7167, SE = 0.2569, z value = 2.79, p = 0.00527). Threshold coefficients are listed in Table 5. An ANOVA comparison found that this model differs from the null model (LR.stat = 7.9296, df = 1, Pr(>Chisq) = 0.004863). Given the presence of other variables (course modality, type of activity, or course discipline, for example), other models were tested considering the same criterion variable and combinations of these variables as predictors. All of these models indicated that the only significant predictor was the treatment received.
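The reported output format (e.g., Pr(>Chisq)) suggests the regression was fitted in R; as a hedged illustration, the sketch below reproduces the same analysis pattern in Python with statsmodels, using hypothetical placeholder data in place of the 252 post-test records.

```python
# Sketch of the post-test analysis: an ordinal logistic regression with the
# treatment (GenAI-enhanced vs. traditional feedback) as predictor and the coded
# perception level (1-4) as criterion, followed by a likelihood-ratio comparison
# against the null (thresholds-only) model. The data frame is hypothetical.
import numpy as np
import pandas as pd
from scipy.stats import chi2
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.DataFrame({
    "treatment":  [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],  # 1 = used GenAI
    "perception": [1, 2, 2, 3, 3, 3, 4, 3, 2, 3, 3, 4, 4, 4, 3, 4],  # 1-4 Likert code
})

# Treat the criterion as an ordered categorical variable
perception = df["perception"].astype(pd.CategoricalDtype([1, 2, 3, 4], ordered=True))

model = OrderedModel(perception, df[["treatment"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())  # coefficient estimate, SE, z value, p, and thresholds

# Likelihood-ratio test against the null model: a thresholds-only ordinal logit
# reproduces the marginal category proportions exactly, so its maximized
# log-likelihood has the closed form sum_k n_k * log(n_k / N).
counts = df["perception"].value_counts()
ll_null = float((counts * np.log(counts / counts.sum())).sum())
lr_stat = 2 * (result.llf - ll_null)
print(f"LR stat = {lr_stat:.4f}, df = 1, p = {chi2.sf(lr_stat, df=1):.6f}")
```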
The adjusted predictions plot can be found in Figure 8. This plot shows the incidence of the tool’s use in each of the satisfaction levels.
Students who did not use the tool tend to select the levels representing low satisfaction; the same is observed for level 3, which represents high feedback perception. However, for level 4 (very high feedback perception), students who used the tool show a greater tendency to reach this level.

5.4. Opinions of Students Who Used the GenAI Tool

The results of questions 3, 4, 5, and 6 can be found in Figure 9. Answers from 135 students were collected for this analysis.

6. Discussion

In this work, we found that students who used the GenAI tool had a more positive perception of the feedback process than students who did not use the tool. This diverges markedly from Tossell et al. (2024), who reported that participants’ expectations of the learning value of ChatGPT did not change after using it during an essay creation activity. This difference is likely because our methodology creates a conversation between students and the GenAI tool, encouraging them to reflect on the activity’s educational objectives within each partial delivery, thus creating a continuous improvement cycle.
Building such an educational process while avoiding excessive use of the tool, as stated by H. Wang et al. (2024), is a complex task for professors, given their limited time and the number of students they may have. The proposed methodology aims to reduce professors’ workload while creating a structured learning environment that shapes a comprehensive learning experience for students. Albdrani and Al-Shargabi (2023) note that this type of environment also stimulates engagement, exploration, explanation, elaboration, personalization, and evaluation cycles.
This ability for personalization was also noted by Márquez and Martínez (2024), who compared the performance of professors and ChatGPT when grading activities and found that ChatGPT fails to fully identify the quality of deliveries. In this sense, this research shows that a clear methodology contributes to providing a personalized feedback experience despite the limitations of the tool.
Another finding of this work is that a high percentage of students who used ChatGPT (97%) would recommend its use to their classmates or friends. This is consistent with the results of Schmidt-Fajlik (2023), who reports that this is because ChatGPT gives detailed feedback and its explanations are easy to understand. The methodology proposed in this work strengthens these characteristics by specifying prompt tuning points that define the type and form of the feedback to be generated. This tuning is highly relevant and allows the AI tool to adapt to different profiles and disciplines. Thus, the ability to generate detailed explanations specifically targeted at achieving educational objectives is an advantage of our methodology over traditional AI tools focused only on error detection. As stated by Kohnke (2024), this type of environment could be useful for developing high-level knowledge and skills.
On the other hand, trust and ethical issues arise regarding the use of ChatGPT. For example, Ngo (2023) explains that, although a high proportion of students report positive experiences when using ChatGPT for academic purposes, they also have concerns about the credibility of the information provided by the tool. Adding to this, Wecks et al. (2024) state that students’ use of AI tools may be correlated with a decrease in their academic results. Hence, future research on ethical guidance and training in these tools for teachers and students is important.
From another point of view, Stamper et al. (2024) emphasize the creation of frameworks rooted in pedagogical models. Considering this, the framework presented in this work has the advantage of being based on the feedback model of Hattie and Timperley (2007). This recognizes the efforts made in the state of the art and extends the frontiers of knowledge by integrating tools to support the teaching–learning process.
One limitation of this research is the lack of analysis of the impact on learning gains linked to the change in the feedback process. It is possible that, although students have a positive experience with the tool, their learning gains do not show a significant improvement, as reported in the study by Sun et al. (2024). Linked to this, the fact that the methodology showed positive results despite the heterogeneity of the groups is an advantage of this work. However, it can also be recognized as a limitation, given that, to explore the results in greater depth, similar experiments could be conducted in groups from the same discipline or with the same type of delivery format, for example.
The results of this research showed that 98% of the students who used the GenAI tool were able to identify their areas of opportunity and that 81% used the feedback to improve their work. These characteristics reflect the development of self-regulation and self-direction skills in students, which is a fundamental responsibility of institutions and professors, as mentioned by Falconí et al. (2024). Furthermore, 96% of students said they would use the tool again, which aligns with research in which students perceive GenAI feedback as more comprehensive (Allen & Mizumoto, 2024), detailed (Guo & Wang, 2024), attractive, and less intimidating (Allen & Mizumoto, 2024).
Additionally, students indicated that they would like to continue using the tool in other activities. This agrees with Boud (2015), who established that the feedback process must be directed by students as people capable of decision and action.
Finally, there is an opportunity for future work to identify whether the methodology had an impact on the time professors could invest in the feedback process. Such a study is valuable because teachers recognize that feedback is a key action for learning but, at the same time, a time-consuming activity, as found by Aguayo-Hernández et al. (2024).
A summary of the works analyzed in this section can be found in Table 6.

7. Conclusions

This research work presented a statistical analysis on the comparison of students’ perceptions of feedback provided by a process that integrated a GenAI tool and a traditional process. A total of 263 undergraduate students in Architecture, Biomimicry, and Discrete Mathematics participated in this study. A statistically analyzed and expert-validated survey was used to collect students’ insights in a pretest–post-test process with focus and control groups.
Furthermore, a methodology to enhance the traditional feedback process with ChatGPT, with the aim of achieving the course’s educational objectives, was presented.
It was found that the AI-enhanced feedback process showed a greater positive effect on students’ perceptions of their feedback experience compared to a traditional process, supporting the hypothesis of this work.
This demonstrates that AI tools can be effective enablers that give students a customized and interactive experience. Based on these characteristics, the learning environment created with the help of the GenAI tool shapes an educational process centered on the student. This environment promotes a guiding and supportive role for teachers and allows them to focus their efforts on tasks of greater interest and service to their students.
Another advantage of this methodology is that it builds an autonomous learning environment and fosters student ownership of their learning process. These positive elements are amplified by the conversational features of GenAI tools, which craft a cyclical and incremental learning process according to students’ needs. Moreover, a strength of this methodology is that it extends pedagogical models and educational methodologies contained in the state of the art, a characteristic that gives solidity to the positive results.
On the other hand, reasoning capabilities that go beyond basic probabilistic prediction have recently been integrated into GenAI tools. Future work to analyze these models’ influence on the development of metacognitive skills related to the feedback process would be of interest to extend the methodology presented in this work.
This work opens new perspectives on how different modalities (text, images, and videos) and tools (other than ChatGPT) can be used to improve the teaching–learning process and learning gains. Future work can be centered on exploring the impact of these elements on grades or on achieving students’ learning outcomes by developing, at the same time, skills and conceptual knowledge.
However, it is important to investigate the potential drawbacks and restrictions of implementing these technologies widely, such as ethical concerns, the danger of over-reliance on AI, or potential access restrictions to technology.
A further limitation of this work is that it was implemented at the undergraduate level in only a few disciplinary fields. Further research is needed to extend this analysis to other fields, such as Social Sciences or Business, and to other levels, such as elementary or secondary school.
Finally, the interest generated by the application of artificial intelligence tools in everyday aspects, such as education, raises the need for serious experimentation to clarify and classify their impact on society. This study contributes significantly to this purpose by offering a promising path on how to integrate artificial intelligence into innovative educational processes.

Author Contributions

Conceptualization, C.H.A.-H., G.H., M.E.E.-G., R.A.-G. and Y.A.V.-J.; data curation, G.H.; formal analysis, G.H.; investigation, C.H.A.-H., T.G.-B., G.H., M.E.E.-G. and R.A.-G.; methodology, C.H.A.-H., G.H., M.E.E.-G. and R.A.-G.; project administration, G.H.; resources, C.H.A.-H., T.G.-B., G.H., M.E.E.-G. and R.A.-G.; software, T.G.-B., G.H., M.E.E.-G. and R.A.-G.; supervision, G.H.; validation, M.E.E.-G.; visualization, C.H.A.-H., G.H., M.E.E.-G. and R.A.-G.; writing—original draft, C.H.A.-H., T.G.-B., G.H., M.E.E.-G., R.A.-G. and Y.A.V.-J.; writing—review & editing, C.H.A.-H., G.H. and Y.A.V.-J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Research Board of Tecnologico de Monterrey (protocol code CA-EIC-2408-02; 20 August 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Acknowledgments

The authors would like to acknowledge the support of the Summit AI 2024 event, Tecnologico de Monterrey, Mexico, in the production of this work. The authors would like to acknowledge the pedagogical guidance of Centro de Desarrollo Docente e Innovación Educativa (CEDDIE), Tecnologico de Monterrey, Mexico, during the implementation of this research. The authors would like to acknowledge Caribay Godoy Rangel for proofreading the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
ANOVA: Analysis of Variance
CVR: Content Validity Ratio
GenAI: Generative artificial intelligence

References

  1. Abdelghani, R., Sauzéon, H., & Oudeyer, P. Y. (2023). Generative AI in the classroom: Can students remain active learners? arXiv, arXiv:2310.03192. [Google Scholar]
  2. Afzaal, M., Nouri, J., Zia, A., Papapetrou, P., Fors, U., Wu, Y., Li, X., & Weegar, R. (2021). Explainable AI for data-driven feedback and intelligent action recommendations to support students’ self-regulation. Frontiers in Artificial Intelligence, 4, 723447. [Google Scholar] [CrossRef]
  3. Aguayo-Hernández, C. H., Sánchez Guerrero, A., & Vázquez-Villegas, P. (2024). The learning assessment process in higher education: A grounded theory approach. Education Sciences, 14(9), 984. [Google Scholar] [CrossRef]
  4. Albdrani, R. N., & Al-Shargabi, A. A. (2023). Investigating the effectiveness of ChatGPT for providing personalized learning experience: A case study. International Journal of Advanced Computer Science & Applications, 14(11). [Google Scholar] [CrossRef]
  5. Allen, T. J., & Mizumoto, A. (2024). ChatGPT over my friends: Japanese English-as-a-Foreign-Language learners’ preferences for editing and proofreading strategies. RELC Journal, 00336882241262533. [Google Scholar] [CrossRef]
  6. Al Murshidi, G., Shulgina, G., Kapuza, A., & Costley, J. (2024). How understanding the limitations and risks of using ChatGPT can contribute to willingness to use. Smart Learning Environments, 11(1), 36. [Google Scholar] [CrossRef]
  7. Angulo Valdearenas, M. J., Clarisó, R., Domènech Coll, M., Garcia-Brustenga, G., Gómez Cardosa, D., & Mas Garcia, X. (2024). Com incorporar la IA en les activitats d’aprenentatge. Repositori Institucional (O2) Universitat Oberta de Catalunya. Available online: http://hdl.handle.net/10609/151242 (accessed on 12 January 2025).
  8. Bewersdorff, A., Hartmann, C., Hornberger, M., Seßler, K., Bannert, M., Kasneci, E., Kasneci, G., Zhai, X., & Nerdel, C. (2025). Taking the next step with generative artificial intelligence: The transformative role of multimodal large language models in science education. Learning and Individual Differences, 118, 102601. [Google Scholar] [CrossRef]
  9. Boud, D. (2015). Feedback: Ensuring that it leads to enhanced learning. The Clinical Teacher, 12(1), 3–7. [Google Scholar] [CrossRef]
  10. Boud, D., & Molloy, E. (2013). Rethinking models of feedback for learning: The challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698–712. [Google Scholar]
  11. Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413. [Google Scholar]
  12. Brookhart, S. M. (2020). Feedback and measurement. In Classroom assessment and educational measurement (p. 63). Routledge. [Google Scholar]
  13. Brown, S. (2005). Assessment for learning. Learning and Teaching in Higher Education, (1), 81–89. [Google Scholar]
  14. Burner, T., Lindvig, Y., & Wærness, J. I. (2025). “We Should Not Be Like a Dinosaur”—Using AI Technologies to Provide Formative Feedback to Students. Education Sciences, 15(1), 58. [Google Scholar] [CrossRef]
  15. Campos, M. (2025). AI-assisted feedback in CLIL courses as a self-regulated language learning mechanism: Students’ perceptions and experiences. European Public & Social Innovation Review, 10, 1–14. [Google Scholar]
  16. Carless, D. (2013). Sustainable feedback and the development of student self-evaluative capacities. In Reconceptualising feedback in higher education (pp. 113–122). Routledge. [Google Scholar]
  17. Carless, D., & Winstone, N. (2023). Teacher feedback literacy and its interplay with student feedback literacy. Teaching in Higher Education, 28(1), 150–163. [Google Scholar] [CrossRef]
  18. Chang, C. Y., Chen, I. H., & Tang, K. Y. (2024). Roles and research trends of ChatGPT-based learning. Educational Technology & Society, 27(4), 471–486. [Google Scholar]
  19. Chu, H. C., Lu, Y. C., & Tu, Y. F. (2025). How GenAI-supported multi-modal presentations benefit students with different motivation levels. Educational Technology & Society, 28(1), 250–269. [Google Scholar]
  20. Cordero, J., Torres-Zambrano, J., & Cordero-Castillo, A. (2024). Integration of Generative Artificial Intelligence in Higher Education: Best Practices. Education Sciences, 15(1), 32. [Google Scholar] [CrossRef]
  21. Dai, W., Tsai, Y. S., Lin, J., Aldino, A., Jin, H., Li, T., Gašević, D., & Chen, G. (2024). Assessing the proficiency of large language models in automatic feedback generation: An evaluation study. Computers and Education: Artificial Intelligence, 7, 100299. [Google Scholar] [CrossRef]
  22. Falconí, C. A. R., Figueroa, I. J. G., Farinango, E. V. G., & Dávila, C. N. M. (2024). Estrategias para fomentar la autonomía del estudiante en la educación universitaria: Promoviendo el aprendizaje autorregulado y la autodirección académica. Reincisol, 3(5), 691–704. [Google Scholar] [CrossRef]
  23. Guo, K., & Wang, D. (2024). To resist it or to embrace it? Examining ChatGPT’s potential to support teacher feedback in EFL writing. Education and Information Technologies, 29(7), 8435–8463. [Google Scholar] [CrossRef]
  24. Güner, H., Er, E., Akçapinar, G., & Khalil, M. (2024). From chalkboards to AI-powered learning. Educational Technology & Society, 27(2), 386–404. [Google Scholar]
  25. Hagendorff, T. (2024). Mapping the ethics of generative AI: A comprehensive scoping review. Minds and Machines, 34(4), 39. [Google Scholar] [CrossRef]
  26. Hair Junior, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2014). A primer on partial least squares structural equation modeling (PLS-SEM). SAGE Publications, Inc. [Google Scholar]
  27. Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112. [Google Scholar] [CrossRef]
  28. Hounsell, D. (2007). Towards more sustainable feedback to students. In Rethinking assessment in higher education (pp. 111–123). Routledge. [Google Scholar]
  29. Huesca, G., Martínez-Treviño, Y., Molina-Espinosa, J. M., Sanromán-Calleros, A. R., Martínez-Román, R., Cendejas-Castro, E. A., & Bustos, R. (2024). Effectiveness of using ChatGPT as a tool to strengthen benefits of the flipped learning strategy. Education Sciences, 14(6), 660. [Google Scholar] [CrossRef]
  30. Hutson, J., Fulcher, B., & Ratican, J. (2024). Enhancing assessment and feedback in game design programs: Leveraging generative AI for efficient and meaningful evaluation. International Journal of Educational Research and Innovation, 1–20. [Google Scholar] [CrossRef]
  31. Jiménez, A. F. (2024). Integration of AI helping teachers in traditional teaching roles. European Public & Social Innovation Review, 9, 1–17. [Google Scholar]
  32. Khahro, S. H., & Javed, Y. (2022). Key challenges in 21st century learning: A way forward towards sustainable higher educational institutions. Sustainability, 14(23), 16080. [Google Scholar] [CrossRef]
  33. Kohnke, L. (2024). Exploring EAP students’ perceptions of GenAI and traditional grammar-checking tools for language learning. Computers and Education: Artificial Intelligence, 7, 100279. [Google Scholar] [CrossRef]
  34. Korseberg, L., & Elken, M. (2024). Waiting for the revolution: How higher education institutions initially responded to ChatGPT. Higher Education, 1–16. [Google Scholar] [CrossRef]
  35. Larasati, A., DeYong, C., & Slevitch, L. (2011). Comparing neural network and ordinal logistic regression to analyze attitude responses. Service Science, 3(4), 304–312. [Google Scholar] [CrossRef]
  36. Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 567. [Google Scholar] [CrossRef]
  37. Lin, S., & Crosthwaite, P. (2024). The grass is not always greener: Teacher vs. GPT-assisted written corrective feedback. System, 127, 103529. [Google Scholar] [CrossRef]
  38. Mayordomo, R. M., Espasa, A., Guasch, T., & Martínez-Melo, M. (2022). Perception of online feedback and its impact on cognitive and emotional engagement with feedback. Education and Information Technologies, 27(6), 7947–7971. [Google Scholar] [CrossRef]
  39. Márquez, A. M. B., & Martínez, E. R. (2024). Retroalimentación formativa con inteligencia artificial generativa: Un caso de estudio. Wímb lu, 19(2), 1–20. [Google Scholar]
  40. Mendiola, M. S., & González, A. M. (2020). Evaluación del y para el aprendizaje: Instrumentos y estrategias. Imagia Comunicación. [Google Scholar]
  41. Moreno Olivos, T. (2016). Evaluación del aprendizaje y para el aprendizaje: Reinventar la evaluación en el aula. Universidad Autónoma Metropolitana. [Google Scholar]
  42. Naz, I., & Robertson, R. (2024). Exploring the feasibility and efficacy of ChatGPT3 for personalized feedback in teaching. Electronic Journal of e-Learning, 22(2), 98–111. [Google Scholar] [CrossRef]
  43. Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning, 18(17), 4. [Google Scholar] [CrossRef]
  44. Ostertagova, E., Ostertag, O., & Kováč, J. (2014). Methodology and application of the Kruskal-Wallis test. Applied Mechanics and Materials, 611, 115–120. [Google Scholar] [CrossRef]
  45. Panadero, E., Andrade, H., & Brookhart, S. (2018). Fusing self-regulated learning and formative assessment: A roadmap of where we are, how we got here, and where we are going. The Australian Educational Researcher, 45, 13–31. [Google Scholar] [CrossRef]
  46. Pozdniakov, S., Brazil, J., Abdi, S., Bakharia, A., Sadiq, S., Gašević, D., Denny, P., & Khosravi, H. (2024). Large language models meet user interfaces: The case of provisioning feedback. Computers and Education: Artificial Intelligence, 7, 100289. [Google Scholar] [CrossRef]
  47. Schmidt-Fajlik, R. (2023). ChatGPT as a grammar checker for Japanese English language learners: A comparison with Grammarly and ProWritingAid. AsiaCALL Online Journal, 14(1), 105–119. [Google Scholar] [CrossRef]
  48. Sheehan, T., Riley, P., Farrell, G., Mahmood, S., Calhoun, K., & Thayer, T.-L. (2024, December 3). Predicts 2024: Education automation, adaptability and acceleration. Gartner. Available online: https://www.gartner.com/en/documents/5004931 (accessed on 12 January 2025).
  49. Stamper, J., Xiao, R., & Hou, X. (2024, July 8–12). Enhancing llm-based feedback: Insights from intelligent tutoring systems and the learning sciences. International Conference on Artificial Intelligence in Education (pp. 32–43), Recife, Brazil. [Google Scholar]
  50. Stiggins, R. (2005). From formative assessment to assessment for learning: A path to success in standards-based schools. Phi Delta Kappan, 87(4), 324–328. [Google Scholar] [CrossRef]
  51. Stobart, G. (2018). Becoming proficient: An alternative perspective on the role of feedback. In A. A. Lipnevich, & J. K. Smith (Eds.), The cambridge handbook of instructional feedback (pp. 29–51). Cambridge University Press. [Google Scholar]
  52. Sun, D., Boudouaia, A., Zhu, C., & Li, Y. (2024). Would ChatGPT-facilitated programming mode impact college students’ programming behaviors, performances, and perceptions? An empirical study. International Journal of Educational Technology in Higher Education, 21(1), 14. [Google Scholar] [CrossRef]
  53. Sung, G., Guillain, L., & Schneider, B. (2025). Using AI to Care: Lessons Learned from Leveraging Generative AI for Personalized Affective-Motivational Feedback. International Journal of Artificial Intelligence in Education, 1–40. [Google Scholar] [CrossRef]
  54. Teng, M. F. (2024). “ChatGPT is the companion, not enemies”: EFL learners’ perceptions and experiences in using ChatGPT for feedback in writing. Computers and Education: Artificial Intelligence, 7, 100270. [Google Scholar] [CrossRef]
  55. Teng, M. F. (2025). Metacognitive Awareness and EFL Learners’ Perceptions and Experiences in Utilising ChatGPT for Writing Feedback. European Journal of Education, 60(1), e12811. [Google Scholar] [CrossRef]
  56. Tossell, C. C., Tenhundfeld, N. L., Momen, A., Cooley, K., & de Visser, E. J. (2024). Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence. IEEE Transactions on Learning Technologies, 17, 1069–1081. [Google Scholar] [CrossRef]
  57. Tran, T. M., Bakajic, M., & Pullman, M. (2024). Teacher’s pet or rebel? Practitioners’ perspectives on the impacts of ChatGPT on course design. Higher Education. [Google Scholar] [CrossRef]
  58. UNESCO. (2023). ChatGPT e inteligencia artificial en la educación superior. La Organización de las Naciones Unidas para la Educación, la Ciencia y la Cultura. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000385146_spa (accessed on 21 November 2024).
  59. Van Boekel, M., Hufnagle, A. S., Weisen, S., & Troy, A. (2023). The feedback I want versus the feedback I need: Investigating students’ perceptions of feedback. Psychology in the Schools, 60(9), 3389–3402. [Google Scholar] [CrossRef]
  60. Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, 100326. [Google Scholar] [CrossRef]
  61. Wang, Y., Rodríguez de Gil, P., Chen, Y. H., Kromrey, J. D., Kim, E. S., Pham, T., Nguyen, D., & Romano, J. L. (2017). Comparing the performance of approaches for testing the homogeneity of variance assumption in one-factor ANOVA models. Educational and Psychological Measurement, 77(2), 305–329. [Google Scholar] [CrossRef]
  62. Wang, Z., Yin, Z., Zheng, Y., Li, X., & Zhang, L. (2025). Will graduate students engage in unethical uses of GPT? An exploratory study to understand their perceptions. Educational Technology & Society, 28(1), 286–300. [Google Scholar]
  63. Wecks, J. O., Voshaar, J., Plate, B. J., & Zimmermann, J. (2024). Generative AI usage and academic performance. Available online: https://ssrn.com/abstract=4812513 (accessed on 4 January 2025).
  64. Wesolowski, B. C. (2020). “Classroometrics”: The validity, reliability, and fairness of classroom music assessments. Music Educators Journal, 106(3), 29–37. [Google Scholar] [CrossRef]
  65. White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv, arXiv:2302.11382. [Google Scholar]
  66. Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice, 11(1), 49–65. [Google Scholar]
  67. Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe’s content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197–210. [Google Scholar] [CrossRef]
  68. Xu, J., & Liu, Q. (2025). Uncurtaining windows of motivation, enjoyment, critical thinking, and autonomy in AI-integrated education: Duolingo Vs. ChatGPT. Learning and Motivation, 89, 102100. [Google Scholar] [CrossRef]
  69. Zhou, X., Zhang, J., & Chan, C. (2024). Unveiling students’ experiences and perceptions of Artificial Intelligence usage in higher education. Journal of University Teaching and Learning Practice, 21(6), 126–145. [Google Scholar] [CrossRef]
  70. Zhou, Y., Zhang, M., Jiang, Y. H., Gao, X., Liu, N., & Jiang, B. (2025). A Study on Educational Data Analysis and Personalized Feedback Report Generation Based on Tags and ChatGPT. arXiv, arXiv:2501.06819. [Google Scholar]
Figure 1. Traditional feedback process.
Figure 2. Prompt structure used to provide the AI feedback tool with a set of statements describing the activity and the tool's behavior.
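As a rough illustration of how a prompt of the kind summarized in Figure 2 might be assembled programmatically, the sketch below concatenates a few labeled statements about the activity and the desired tool behavior. The section labels, wording, and the build_feedback_prompt helper are hypothetical illustrations only; they do not reproduce the actual prompt shown in the figure.

```python
# Hypothetical sketch only: the section labels and wording below are
# illustrative and are not the authors' actual prompt from Figure 2.
def build_feedback_prompt(activity: str, rubric: str, submission: str) -> str:
    """Assemble a structured feedback prompt from labeled statements."""
    return "\n\n".join([
        "ROLE: You are a feedback assistant for an undergraduate course activity.",
        f"ACTIVITY: {activity}",
        f"ASSESSMENT CRITERIA: {rubric}",
        ("BEHAVIOR: Point out strengths, areas of opportunity, and concrete next "
         "steps. Do not assign a grade and do not rewrite the student's work."),
        f"STUDENT SUBMISSION: {submission}",
    ])


if __name__ == "__main__":
    prompt = build_feedback_prompt(
        activity="Propose a biomimicry-inspired solution to an urban heat problem.",
        rubric="Clarity of the biological analogy, feasibility, and originality.",
        submission="(student text would be inserted here)",
    )
    print(prompt)
```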
Figure 3. Proposed methodology to integrate generative artificial intelligence into the feedback process described by Hattie and Timperley (2007).
Figure 4. Likert scale chart of satisfaction with the feedback received in previous learning experiences.
Figure 5. Density chart of satisfaction with the feedback received in previous learning experiences.
Figure 6. Likert scale chart of satisfaction with the feedback received while solving activities with (focus group) and without (control group) GenAI.
Figure 7. Density chart of satisfaction with the feedback received while solving activities with (focus group) and without (control group) GenAI.
Figure 8. Adjusted predictions from the ordinal logistic regression of feedback perception (criterion variable) on the treatment received (predictor variable).
Figure 9. Answers from students who used the GenAI tool to questions 3. “Did the feedback you received in this activity from the AI tool make you realize your areas of opportunity or improvement?”, 4. “Would you ask the AI tool for feedback again?”, 5. “Did you use the feedback the AI tool gave you to improve the delivery of your activity?”, and 6. “Would you recommend the use of the AI tool to your colleagues and/or friends?”.
Table 1. Sample distribution considering non-usable data.

Course | Candidate Students | Control Group Usable Data | Focus Group Usable Data
Discrete Mathematics | 17 | 0 | 16
Architecture | 96 | 20 | 51
Biomimicry | 150 | 105 | 60
Total | 263 | 125 | 127
Table 2. Questions, expert classification, and Content Validity Ratio (CVR) values for items in the survey to collect students’ perceptions of the feedback received when solving a learning activity. For each item, the decision about keeping, removing, or adding it is shown.

# | Question | Type of Answer | Essential | Useful, But Not Essential | Not Necessary | Content Validity Ratio | Decision
1 | What is your level of satisfaction with the feedback you received for the activity? | Very low, Low, High, Very high | 20 | 2 | 0 | 0.82 | To keep and modify the wording
2 | Do you think you had a better feedback experience in this activity than in previous courses or activities? | Yes, No | 14 | 7 | 1 | 0.27 | To remove
3 | On average, how many times did you request feedback from any entity or person during the activity? | Numeric | 11 | 8 | 3 | 0.00 | To remove
4 | Order the following entities with respect to the frequency with which you turned to them to ask for feedback or to resolve doubts during the activity: professor, other students, friendships, Internet, artificial intelligence tools. The first option is the one you used the most and the last one is the one you used the least. | Order a list of items | 11 | 11 | 0 | 0.00 | To remove
5 | During the activity, did you turn to other entities to receive feedback or resolve questions? If yes, write down the entities in the following space separated by commas. | Yes (text), No | 8 | 10 | 4 | −0.27 | To remove
6 | Did you use the AI tool to receive feedback during the activity? | Yes, No | 20 | 2 | 0 | 0.82 | To keep
7 | Was the feedback you received in the activity from the AI tool useful in improving your performance? | Yes, No | 21 | 1 | 0 | 0.91 | To remove. Even though this item is validated, experts said that it repeats the same idea as item 8.
8 | Did the feedback you received in this activity from the AI tool make you realize your areas of opportunity or improvement? | Not at all useful, Not very useful, Useful, Very useful | 21 | 1 | 0 | 0.91 | To keep
9 | Did you use the feedback the AI tool gave you to improve the delivery of your activity? | Yes, No | 20 | 2 | 0 | 0.82 | To keep
10 | Would you ask the AI tool for feedback again? | Never, Sometimes, Frequently, Always | 19 | 3 | 0 | 0.73 | To keep
11 | Indicate three characteristics that you value or liked about the feedback and/or use of the AI tool. | Text | 11 | 11 | 0 | 0.00 | To remove
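For clarity, the CVR column in Table 2 follows Lawshe’s formula, CVR = (n_e − N/2)/(N/2), where n_e is the number of experts who rated an item as essential and N is the panel size (the counts in each row of the table sum to 22 experts). The short sketch below simply re-derives the reported values from those counts; it is a worked illustration of the formula, not the authors’ analysis code.

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's CVR: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half


# "Essential" counts taken from Table 2; every row sums to a 22-expert panel.
essential_counts = {1: 20, 2: 14, 3: 11, 4: 11, 5: 8,
                    6: 20, 7: 21, 8: 21, 9: 20, 10: 19, 11: 11}

for item, n_essential in essential_counts.items():
    cvr = content_validity_ratio(n_essential, n_experts=22)
    print(f"Item {item:2d}: CVR = {cvr:.2f}")
```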
Table 3. Exploratory analysis of satisfaction with feedback received in previous learning experiences.

Value | Focus Group | Control Group | Both Treatments
N | 70 | 23 | 108
Min. | 2.00 | 2.00 | 1.00
1st qu. | 3.00 | 3.00 | 3.00
Median | 3.00 | 3.00 | 3.00
Mean | 3.06 | 3.30 | 3.23
3rd qu. | 3.00 | 4.00 | 4.00
Max. | 4.00 | 4.00 | 4.00
Standard deviation | 0.56 | 0.56 | 0.61
Table 4. Exploratory analysis of satisfaction with feedback received while solving activities with (focus group) and without (control group) GenAI.

Value | Focus Group | Control Group
N | 127 | 125
Min. | 3.00 | 1.00
1st qu. | 3.00 | 3.00
Median | 3.00 | 3.00
Mean | 3.49 | 3.27
3rd qu. | 4.00 | 4.00
Max. | 4.00 | 4.00
Standard deviation | 0.50 | 0.60
Table 5. Threshold coefficients for the ordinal logistic regression applied to the control and focus groups.

Threshold   Estimate    Std. Error   z Value
1|2         −4.5273     0.7168       −6.316
2|3         −3.4069     0.4253       −8.010
3|4         0.7314      0.1905       3.839
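The estimates in Table 5 are the cut points of a proportional-odds (ordinal logistic) model relating the treatment received to the ordinal satisfaction response, and each z value is the estimate divided by its standard error (e.g., −4.5273/0.7168 ≈ −6.32). As a minimal sketch of how such a model can be fitted, the code below uses statsmodels’ OrderedModel; this is only one possible tool, not necessarily the one the authors used, and the file and column names are hypothetical.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical input: one row per student, with satisfaction coded 1-4 and
# treatment coded 1 for the focus (GenAI) group and 0 for the control group.
df = pd.read_csv("satisfaction_posttest.csv")

model = OrderedModel(
    df["satisfaction"],   # ordinal criterion variable (1-4)
    df[["treatment"]],    # predictor: GenAI vs. traditional feedback
    distr="logit",        # proportional-odds (ordinal logistic) link
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())

# statsmodels stores the cut points in an internal parameterization
# (log-scale increments after the first one); this converts them to the
# 1|2, 2|3, and 3|4 thresholds reported in Table 5.
thresholds = model.transform_threshold_params(result.params)
print(thresholds)
```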
Table 6. Comparison of works analyzed in the discussion section.

Reference | Strategy | Number of Students | Application Context | Discipline | Type of Study | Results
This work | ChatGPT as an extension tool for the feedback process | 263 | Undergraduate students | Discrete Mathematics, Architecture, Biomimicry | Qualitative and quantitative. Pretest–post-test. Control and focus groups. | There is a significant difference in students’ perceptions between those who used GenAI and those who did not. ChatGPT would be recommended by 97% of students.
Tossell et al. (2024) | ChatGPT for an essay creation activity | 24 | Air Force academy | Design | Quantitative. Pretest–post-test. | No significant difference in perception of the value of the tool.
Schmidt-Fajlik (2023) | ChatGPT for writing | 69 | University | Japanese writing | Qualitative | ChatGPT is highly recommended by students because of its detailed feedback and easy-to-understand explanations.
Kohnke (2024) | GenAI for writing | 14 | University | English for academic purposes | Qualitative | The in-depth explanations of GenAI tools help students better understand the feedback.
Sun et al. (2024) | GenAI for programming | 82 | University | Programming for Educational Technology majors | Quantitative. Control and focus groups. | Positive student acceptance of ChatGPT, with no significant difference regarding learning gains.
Albdrani and Al-Shargabi (2023) | ChatGPT for an Internet of Things course | 20 | University | Computer Sciences | Qualitative and quantitative. Control and focus groups. | Positive perception of a personalized learning experience.
Ngo (2023) | ChatGPT for academic purposes | 200 | University | Information Technology, Business Administration, Media Communication, Hospitality and Tourism, Linguistics, Graphic Design | Qualitative | A total of 86% of students reported high satisfaction when using ChatGPT in previous academic experiences.
Márquez and Martínez (2024) | Comparison between feedback provided by a professor and ChatGPT | No data | University | Psychology | — | ChatGPT only partially identifies the quality of the activities, and the grades assigned by the two actors differed. ChatGPT showed the ability to personalize feedback.
Wecks et al. (2024) | Detection of the use of GenAI in writing essays | 193 | University | Financial Accounting | Quantitative. Control and focus groups. | Students who used GenAI ranked lower on the final exam.
