Proceeding Paper

Role of Mathematics Teachers in Learner’s Diversity Using AI Tools †

School of Education and Languages, Hong Kong Metropolitan University, Hong Kong, China
Presented at the 2024 IEEE 7th International Conference on Knowledge Innovation and Invention, Nagoya, Japan, 16–18 August 2024.
Eng. Proc. 2025, 89(1), 19; https://doi.org/10.3390/engproc2025089019
Published: 26 February 2025

Abstract

The advancement of artificial intelligence (AI) has attracted attention across disciplines, and various studies have examined the role and outcomes of AI in education (AIEd). However, how teachers can use AI to cater to learner diversity in mathematics education remains underexplored. Therefore, this study investigated how different AI tools assist mathematics teachers in developing teaching materials for students. Teachers were invited to use AI tools to develop their teaching and learning materials. The findings can be used to enhance remedial and enrichment measures in teaching secondary mathematics and to construct a framework that helps teachers address learner diversity with AI tools.

1. Introduction

Artificial intelligence (AI) offers both opportunities and challenges for enhancing teaching efficiency. To effectively harness AI’s benefits, teachers’ professional development is crucial: teachers play a pivotal role in implementing AI-based curricula, and their readiness and perceptions regarding AI usage must be understood. In the 21st century, equipping students with AI literacy is important. Therefore, this study aims to assess how secondary mathematics teachers integrate AI into the teaching of secondary mathematics.
Yau et al. [1] studied how to design an AI curriculum. Through a collaborative effort of university professors and secondary school teachers, an innovative AI curriculum was co-designed, and teachers’ technical knowledge gaps were identified. This study underscores the importance of collaboration among experts and educators in designing effective AI curricula tailored to students’ contextual needs and improving learning outcomes. Therefore, teachers need to collaborate for the development of AI-based materials in secondary mathematics.
In this study, two focus groups of in-service local secondary mathematics teachers in Hong Kong were involved: the first group focused on how they used AI tools to prepare materials for remedial classes, and the second group on how they used AI tools to prepare materials for enrichment classes. How secondary mathematics teachers interact with AI tools was also explored.

2. Literature Review

2.1. Readiness in Using AI

The successful implementation of generative AI in education relies on the readiness of teachers. The importance of teacher training is emphasized to integrate AI curricula for various subjects. Chiu and Li [2] emphasized the significance of professional training and support for teachers. Providing educators with the necessary tools and guidance is crucial for effective AI implementation. Ravi et al. [3] recommend that teacher support and professional development sessions precede the introduction of new AI curricula. This ensures that teachers, even those without prior experience, are equipped to confidently deliver foundational AI knowledge. Ravi et al. [3] delved into teachers’ perceptions and experiences in implementing AI curricula in middle schools. Educators expressed a need for clearer directions and specifications in training materials to navigate novel concepts effectively. Additionally, they requested hands-on examples for incorporating AI into teaching practices. Loong and Herbert [4] explored the integration of digital technology by primary teachers in mathematics classrooms. Yet, the readiness of Hong Kong secondary mathematics teachers in this regard remains underexplored.

2.2. Prompting Techniques

Schorcht et al. [5] used four variants of prompting techniques in mathematical problem-solving: Zero-Shot, Chain-of-Thought, Ask-me-Anything, and Few-Shot. In Zero-Shot, a task is entered into AI without additional input. In Chain-of-Thought, the user provides sequential guides to AI, and AI then offers an organized sequential output. These intermediate steps assist the generative AI in producing an accurate solution, and Chain-of-Thought significantly improves the AI’s performance in straightforward problem-solving tasks involving basic computational steps.
In Ask-me-Anything (Arora et al. [6]), the user instructs AI to avoid long answers: when AI is asked a question, the answer it generates must be short, and AI waits for further instructions before proceeding. In doing so, AI asks the user questions to confirm that it is performing as expected, making the process interactive. In Few-Shot, an additional task with a given solution is introduced to AI alongside the original problem; the user then inputs a similar mathematics problem with only small differences in details. Schorcht et al. [5] highlighted the effectiveness of Chain-of-Thought and Ask-me-Anything in enhancing AI’s strategic approaches to problem-solving.
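To make the four variants concrete, the following sketch shows what each prompt style might look like for one simple factorization task. The task and all wording are hypothetical illustrations, not prompts taken from the paper or from Schorcht et al. [5]:

```python
# Hypothetical prompt texts illustrating the four variants described by
# Schorcht et al. [5], applied to a single factorization task.
BASE_TASK = "Factorize x^2 + 5x + 6."

PROMPT_VARIANTS = {
    # Zero-Shot: the bare task with no additional input.
    "zero_shot": BASE_TASK,
    # Chain-of-Thought: the user supplies the intermediate steps to follow.
    "chain_of_thought": (
        BASE_TASK
        + " First find two numbers whose product is 6 and whose sum is 5,"
        " then rewrite the expression as a product of two binomials."
    ),
    # Ask-me-Anything: short answers, with the AI confirming each step.
    "ask_me_anything": (
        BASE_TASK
        + " Answer briefly, one step at a time, and ask me a question"
        " before each step to confirm you are on track."
    ),
    # Few-Shot: a solved example accompanies the new task.
    "few_shot": "Example: x^2 + 3x + 2 = (x + 1)(x + 2). Now: " + BASE_TASK,
}

for name, prompt in PROMPT_VARIANTS.items():
    print(f"{name}: {prompt}")
```

The same base task appears in all four variants; only the scaffolding around it changes, which is what distinguishes the techniques.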
It is still necessary to research how secondary mathematics teachers use AI, particularly how they interact with various AI tools to tailor curricula to different learning needs.

3. Methodology

According to Williams and Katz [7], a focus group with individuals who share a common interest or characteristic provides information on a particular subject matter. Regarding the purposes of a focus group, Krueger and Casey [8] noted that a focus group promotes a comfortable atmosphere of disclosure among the members of the group to share their ideas, experiences, and attitudes about a topic. Participants play both an active and passive role as they influence and are influenced. The researcher, on the other hand, is the moderator, facilitator, observer, and investigator.
The concept of grounded theory was proposed by Glaser and Strauss [9] to build theory from data. Following this approach, researchers are encouraged to discover theories from research data rather than testing existing hypotheses. According to Gibson and Hartman [10], grounded theory is characterized by its openness: the researcher should not rely on preconceived concepts or ideas when conducting the research. Since AI is an innovation in education, grounded theory can ensure the openness of the findings and implications drawn.
Grounded theory was applied to the two focus groups to elicit how teachers select AI tools, what teachers want to achieve with the selected AI tools, and how secondary mathematics teachers interact with them. Six secondary mathematics teachers were recruited, three assigned to the remedial group and three to the enrichment group. They designed materials that fit their teaching needs, chose their own AI tools, and evaluated the effectiveness of those tools.

3.1. Remedial Group

In this group, teachers used AI tools to prepare remedial materials for a remedial topic and determined the corresponding objectives. They captured all the prompts and responses from AI and recorded how they interacted with it. They also compared the effectiveness of different AI tools and stated the benefits and limitations of using them.

3.2. Enrichment Group

In this group, teachers used AI tools to prepare enrichment materials for an enrichment topic and determined the corresponding objectives. They were given the same set of instructions as teachers in the remedial group.
Teachers from both groups chose AI tools and the topics in which they were interested. Teachers recorded the prompting questions and AI responses for data analysis.

4. Findings

4.1. Remedial Group

  • Teacher A
Teacher A chose two questions: Q1. Plot the graph of x² − x − 1 = 0; Q2. Solve x² − x − 1 = 0.
He used the mobile app versions of three AI tools: POE, Copilot, and Wolfram Alpha, giving the same prompt to each without further interaction. For Q1, Teacher A concluded that POE could not plot the graph; it only showed HTML code and could not convert it into a graph or picture, whereas Copilot and Wolfram Alpha provided the correct graph, and the graph on Copilot was downloadable. For Q2, POE and Copilot offered correct answers, but only as plain text, which was not easy to read; Wolfram Alpha represented answers as approximate or exact values and could also give various solutions. The teacher described the benefits of AI tools as speed and abundant information, but several AI tools did not provide graphs and only generated ideas with too much information.
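As a quick check on Q2, the quadratic formula gives the roots directly. The sketch below assumes Teacher A's equation is x² − x − 1 = 0 (whose exact roots are (1 ± √5)/2); the helper function is illustrative, not part of any tool the teachers used:

```python
import math

def solve_quadratic(a, b, c):
    """Return the two real roots of a*x^2 + b*x + c = 0.

    Assumes the discriminant b^2 - 4ac is non-negative.
    """
    disc = math.sqrt(b * b - 4 * a * c)
    return ((-b + disc) / (2 * a), (-b - disc) / (2 * a))

# Q2 from Teacher A: x^2 - x - 1 = 0, exact roots (1 ± sqrt(5)) / 2.
roots = solve_quadratic(1, -1, -1)
print(roots)  # approximately (1.618..., -0.618...)
```

A teacher can use such a check to verify the exact values an AI tool reports, as Wolfram Alpha did for this question.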
  • Teacher B
Teacher B wanted to use worksheet exercises designed by a textbook publisher, but there were not enough exercises, and no solutions were provided. The topic chosen was factorization. Teacher B used two AI tools, Magic School and Wolfram Alpha, to generate more exercises. He chose a YouTube video clip teaching factorization and used the YouTube Video Questions tool in Magic School, filling in the template with eighth-grade-level questions: 10 multiple-choice questions and the URL of the YouTube video. Ten multiple-choice questions were then generated with an answer key. Teacher B used Wolfram Alpha to generate an exercise on quadratic factorization for students, using Wolfram Problem Generator with the prompt “Factor w² + 13w + 42” at the ‘Beginner’ level; eight multiple-choice questions were generated with a set of answer keys. Teacher B also tried the step-by-step function in Wolfram Alpha with the prompt “Factor 2x² − 3x − 5”, and a step-by-step solution was generated. AI tools saved him time in preparing the online remedial materials, and students could obtain hints from AI anytime and anywhere. However, the wording of the questions generated by AI was not the same as the teacher’s own wording.
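The two factorization prompts above have integer factorizations that can be verified mechanically. The brute-force helper below is an illustrative sketch (not part of Wolfram Alpha): it searches for integers p, q, r, s with (p·x + q)(r·x + s) = a·x² + b·x + c:

```python
def factor_quadratic(a, b, c):
    """Find integers (p, q, r, s) with (p*x + q)(r*x + s) = a*x^2 + b*x + c,
    or return None if no such integer factorization exists."""
    for p in range(1, abs(a) + 1):
        if a % p:
            continue
        r = a // p
        for q in range(-abs(c), abs(c) + 1):
            if q == 0 or c % q:
                continue
            s = c // q
            if p * s + q * r == b:
                return (p, q, r, s)
    return None

print(factor_quadratic(1, 13, 42))  # (1, 6, 1, 7), i.e. (w + 6)(w + 7)
print(factor_quadratic(2, -3, -5))  # (1, 1, 2, -5), i.e. (x + 1)(2x - 5)
```

This confirms w² + 13w + 42 = (w + 6)(w + 7) and 2x² − 3x − 5 = (x + 1)(2x − 5), the kind of answer key the generated exercises would carry.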
  • Teacher C
Teacher C had the same objective as Teacher B: preparing a set of exercise questions on factoring quadratic polynomials. He used AI to prepare exercises on the factor method. He used POE with the prompts “Can you generate 10 questions for factorizing quadratic polynomial?” and “Once again, share the solution with me?”. POE provided 10 questions with correct solutions. He used the same prompts for Copilot, which could also provide 10 questions with correct solutions; both POE and Copilot provided text responses. Teacher C also wanted to know if AI could plot the graph of y = 2x + 1. He asked POE and Copilot, “Can you use Desmos or GeoGebra to plot the graph of y = 2x + 1?”. Copilot provided the steps of using Desmos and GeoGebra to plot the graph. Desmos gave a scatter plot, while Copilot looked for images and gave sketches of a graph that did not exactly show y = 2x + 1. AI tools generated questions and answers effectively and efficiently, particularly for mechanical questions and topics. However, precise wording was needed to obtain good AI responses, and AI could not show the mathematics in proper format; for example, POE did not show the answers in mathematical notation.

4.2. Enrichment Group

  • Teacher D
Teacher D chose a question from the Hong Kong Mathematics Olympics Competition as shown in Figure 1.
Teacher D used POE, Thetawise, and Wolfram Alpha to test whether these AI tools could give correct solutions, using the same prompt for the three AI tools: “Let f(x) be a polynomial of degree 2, where f(1) = 1/2, f(2) = 1/6, f(3) = 1/12. Find the value of f(6)”.
POE correctly set up the system of three equations but solved it incorrectly, and Teacher D did not notice that POE gave the incorrect answer. Thetawise correctly set up the system of three equations and solved it with numerical methods, which provided an estimate. He then prompted Thetawise again, adding the words “exact value”: “Let f(x) be a polynomial of degree 2, where f(1) = 1/2, f(2) = 1/6, f(3) = 1/12. Find the value of f(6) with exact value”. Thetawise gave what Teacher D wanted. Teacher D observed that Wolfram Alpha could not understand the question; he did not revise the prompt and considered that Wolfram Alpha failed the task. AI tools were time-saving and generally gave clear and detailed explanations of the corresponding mathematics problems, but it was difficult to identify incorrect steps, and users need to modify the question when inputting it.
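Teacher D's problem can be checked independently with exact rational arithmetic. Assuming the given values are f(1) = 1/2, f(2) = 1/6, f(3) = 1/12, Lagrange interpolation through the three points yields the unique degree-2 polynomial and hence f(6); the helper below is an illustrative sketch, not any teacher's tool:

```python
from fractions import Fraction

def quadratic_interpolate(points, x):
    """Evaluate at x the unique degree-2 polynomial through three points,
    using Lagrange interpolation with exact rational arithmetic."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Teacher D's data: f(1) = 1/2, f(2) = 1/6, f(3) = 1/12; find f(6).
pts = [(1, Fraction(1, 2)), (2, Fraction(1, 6)), (3, Fraction(1, 12))]
print(quadratic_interpolate(pts, 6))  # 4/3
```

The exact value f(6) = 4/3 is the kind of answer Thetawise returned only after the “exact value” refinement, whereas its first attempt gave a numerical estimate.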
  • Teacher E
Teacher E captured the image of a DSE question—a high-stakes public examination for senior secondary students in Hong Kong (HKDSE 2024 paper 1 Q17) as shown in Figure 2. He imported the image of the question to three different AI tools: Copilot, POE, and Thetawise. He wanted to know if the AI tools could give the correct solutions within three prompts.
Copilot was able to read the question from the captured image. However, it missed the slope of Γ. Teacher E set the second prompt as “In (a)(i), the solution given by you, the slope of QR is incorrect, check again please”, but the slope of Γ generated by Copilot was still incorrect. Teacher E used POE with the same first prompt. POE was also able to read the question from the captured image and gave the correct answer in (a)(i). However, POE missed the slope of QR and represented the sign wrongly. POE was almost correct in (b)(i): √(49 + 16) should be √65 instead of 7, and it did not explain why QR is the diameter of C. Teacher E set the second prompt as “In (a)(i), the solution given by you, the slope of QR is incorrect, check again please”. POE corrected the slope of QR but failed to note that Γ is perpendicular to QR, so the slope of Γ was incorrect. Teacher E set the third prompt as “The new solution is not correct. Γ is a line perpendicular to line segment QR, if the slope of QR is 4/7, the slope of Γ should be −7/4”. The AI made a mistake in this third and last step: instead of y + 5 = −(7/4)x + (21/4), it should be y = −(7/4)x + 1/4. Teacher E concluded that POE could not successfully complete the task.
Teacher E used Thetawise with the first prompt as shown in Figure 2. Thetawise was also able to read the question from the captured image; it translated the math symbols into LaTeX form and gave the answers correctly in (a)(i) and (a)(ii). For part (b), the AI found the equation of C by setting up a system of simultaneous equations and finished the calculation of (b)(i) correctly. In (b)(ii), it used a more complex approach than expected and stopped before attempting to find the radius r of the circle. Teacher E then continued with the second prompt “Do you notice that the circumference UVW passes through the center of C in part (b)(i)? If yes, you should be able to find the diameter of the circumcircle UVW, hence the area of it”. Thetawise responded and found the center of the circle from an equation that applied the distance formula. Yet, it seemed relatively unable to comprehend deductive geometry in this situation: the distance between point U and the center of circle C should be the diameter of the required circle because its subtended angle is 90°.
Teacher E set the third prompt as “In your answer, the distance between point U and the center of C should be the diameter of the required circle because it’s subtended angle is 90°”. Thetawise then completed the question correctly. After obtaining the correct solution, Teacher E was interested in exploring whether Thetawise could generate similar questions. From the above results, only Thetawise generated the correct solution within a few prompts, and it was able to display easily readable solutions using LaTeX. Thus, Teacher E rated Thetawise above POE and Copilot and used it to generate a question similar to the given one.
AI is a handy tool for preparing questions and solutions with clear prompts, and it boosts self-directed learning because students can learn at home using AI tools. AI provides instant feedback in learning, and users can learn the background knowledge of a subject easily. Teacher E also observed that AI differs from Google in that AI organizes learning material for users, and teachers can even tailor-make a lesson with clear prompts such as “tell me about quadratic equation in 50 words”. However, the solution given by AI may not be correct, so the effectiveness of AI depends on the depth of the user’s knowledge of the subject matter.
  • Teacher F
Teacher F selected a junior secondary competition question which was not a standard textbook-type or examination-type question. He was interested in exploring if AI provided the correct solution to the question. He used Thetawise, MathGPT, and Julius AI to solve this problem. Teacher F fed the image of Figure 3 to the three AI tools.
Both Thetawise and MathGPT showed the step-by-step solution with correct answers. Julius AI provided the solutions with several steps, but it stopped before giving the full solutions and prompted Teacher F to say what he wanted. Teacher F set the second prompt as “Solve the whole question and give the full solution”. Julius AI gave a wrong answer, and the third prompt was “It seems that the answer is 1/3. Did you make any mistake?”. Julius AI finally gave the correct answer.
Besides using AI to develop enrichment materials, Teacher F shared his dialog with AI about how he prepared a set of remedial exercises. He used Websim AI. He used eight prompts to interact with AI. First prompt: “Create 5 factorization questions related to only taking out common factor, perfect square, and difference of two squares”; second prompt: “Put the answer at the end of all questions. Don’t use Hide/Show Button”; third prompt: “Don’t show “Factor the following expression” in each question, but show “Factorize the following expressions” at the beginning of all 5 questions”; fourth prompt: “Don’t show “(common factor)” in each question. 1 row 1 question”; fifth prompt: “1 row 1 answer, no explanation needed”; sixth prompt: “Create 15 more questions, giving a total of 20 questions”; seventh prompt: “Answer of Q6 and Q12 can be further factorized using the difference of two squares”; eighth prompt: “Q16, Q18, and Q20 involve other skills. Delete them and create 3 others”. Teacher F finally prepared a set of exercises with 20 questions on the factorization of quadratic polynomials with answers.
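The kind of exercise set Teacher F iteratively prompted for can be sketched programmatically. The generator below is purely illustrative (it is not Websim AI's output or API): it draws questions from the same three skills Teacher F named, with answers, and a fixed seed makes the set reproducible:

```python
import random

def make_factorization_questions(n, seed=0):
    """Illustrative generator: n factorization questions drawn from three
    skills (common factor, perfect square, difference of two squares),
    each returned as a (question, answer) pair."""
    rng = random.Random(seed)  # fixed seed => reproducible worksheet
    questions = []
    for _ in range(n):
        a = rng.randint(2, 9)
        kind = rng.choice(["common", "square", "diff"])
        if kind == "common":
            b = rng.randint(2, 9)
            q = f"{a}x^2 + {a * b}x"
            ans = f"{a}x(x + {b})"
        elif kind == "square":
            q = f"x^2 + {2 * a}x + {a * a}"
            ans = f"(x + {a})^2"
        else:  # difference of two squares
            q = f"x^2 - {a * a}"
            ans = f"(x + {a})(x - {a})"
        questions.append((q, ans))
    return questions

for q, ans in make_factorization_questions(5):
    print(f"Factorize: {q}    Answer: {ans}")
```

Teacher F's eight prompts performed by hand what such a script does in one pass: fixing the format, listing answers at the end, and regenerating items that fall outside the intended skills.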
Using Websim AI, teachers can generate different types of questions to address users’ needs. It shortens preparation time when generating many exercises with slight or large variations according to the user’s input commands; the time to set questions, prepare solutions, and typeset is reduced. Another benefit shared by Teacher F is that Wolfram Problem Generator can generate questions at different levels within the scope defined on its website, and the questions can differ each time they are generated. This provides extra resources to teachers.
One limitation concerns math questions with diagrams, in 2D or 3D, especially when the information is provided in the diagrams rather than in the question text; mistakes made by AI are often found in diagrams. Another limitation concerns cross-topic math questions requiring higher-order thinking. It is easy for AI to solve or create a difficult math question on a single specific topic, but it becomes problematic when several topics are involved. For example, for a long HKDSE Mathematics Paper 1 question requiring knowledge of arithmetic and geometric sequences, the transformation of functions, quadratic equations, the centers of triangles, and coordinate geometry, the AI solver could not produce a successful solution. When AI is asked to create a math question spanning different topics, the topics are dealt with in separate parts, and coherence among these parts is absent.

5. Analysis

5.1. Remedial Group

Teachers in the remedial group and the enrichment group had different expectations of AI. Teachers in the remedial group had the following expectations:
  • Plot graphs (Teachers A, C);
  • Give correct answers and solutions (Teachers A, B and C);
  • Obtain step-by-step solutions (Teacher B);
  • Generate exercises efficiently (Teachers B, C and F);
  • Obtain more information on the topic (Teacher A);
  • Generate questions with proper mathematics notation (Teacher C).
The AI tools that the teachers used in the remedial group are shown in Table 1. All teachers chose AI tools according to their needs. For example, Wolfram Alpha has a Problem Generator that helped teachers prepare sets of questions swiftly; Magic School helped teachers set questions based on a YouTube video; and Websim AI helped teachers generate questions and save them on a website so that they could provide aid to students instantly.
Teacher B chose Wolfram Alpha as it could generate many reliable questions and thus ensure efficiency in developing teaching and learning materials. Teachers A, C, and F criticized that AI tools could not draw graphs of functions or show proper mathematical notation; better tools are needed to fulfill their needs.

5.2. Enrichment Group

Teachers in the enrichment group had different expectations of AI as follows:
  • Solve structured questions (Teachers D, E and F);
  • Give a prescriptive solution (Teachers D and F);
  • Generate a similar exercise to a solved problem (Teachers E and F);
  • Generate exercises with different levels of difficulty (Teacher E);
  • Solve cross-topic questions (Teacher F).
The AI tools that the teachers chose and used in the enrichment group are shown in Table 2. Teachers D, E, and F chose Thetawise as it could solve different mathematics problems including questions at the university level. POE and Copilot were also popular.
Teachers in the enrichment group used different AI tools to solve structured questions chosen from public examinations or mathematics competitions. Teachers found it time-consuming to work out the solutions to such questions, so they sought help from AI. AI could provide them with much information and detailed steps, but not all the solutions were correct; teachers had to evaluate the solutions from AI carefully and give AI relevant hints. The three AI tools MathGPT, Thetawise, and Julius AI could solve many mathematics problems correctly, but the solutions were sometimes too complicated and advanced and might not fit the educational purpose. Teachers also had to keep providing guidance and information to help AI correct its mistakes. Teacher feedback gathered in this study revealed that AI could not solve deductive geometry and trigonometry problems that involve geometrical diagrams.
Regarding what teachers expected to achieve with the selected AI tools, Teachers A, B, and C wanted to have well-structured exercises with solutions to help students consolidate particular mathematics knowledge and skills. It usually took the teachers much time to prepare remedial materials. Teachers D and E recognized that AI gave incorrect steps, and sometimes it was hard to identify the mistaken steps. Teachers E and F suggested that AI tools could not effectively solve some cross-topic mathematics problems.

5.3. Prompting Techniques

Schorcht et al. [5] suggested four variants of different prompting techniques including Zero-Shot, Chain-of-Thought, Ask-me-Anything, and Few-Shot. Teachers’ general prompting techniques were matched into these four variants of prompting techniques as shown in Table 3.
Teachers were not trained in different prompting techniques and used their own ways of interacting with the AI tools; as a result, teachers in this study used only Zero-Shot and Chain-of-Thought prompting. Teachers A, B, and D used Zero-Shot prompting when dialoguing with AI. When the AI tools gave what they wanted (e.g., an exercise with solutions), they considered the task completed and evaluated AI as useful. For complex mathematics problems, however, Zero-Shot prompting was not useful in generating a correct solution. Teachers C, E, and F used Chain-of-Thought prompting to refine the questions, correct mistakes, and provide more details to AI to amend the solutions. Teacher F used Chain-of-Thought prompting to generate a set of exercise questions in a particular format and to re-generate appropriate questions.
Teachers A and D used Zero-Shot prompting when using their selected AI tools for all the questions and evaluated the effectiveness of AI from the answers generated. They did not keep interacting with AI to refine the answers generated. Teachers A, B, and D evaluated the effectiveness of the AI tools, concluding that the AI tools could not provide graphs and did not give proper mathematical notation. Teachers E and F interacted with AI by employing the Chain-of-Thought prompting technique. Their feedback revealed that AI could amend the solution if the guide given to AI was clear and relevant. The accuracy of the structured solutions depended largely on teachers’ knowledge when feeding input to AI.

6. Conclusions and Implications

Different AI tools helped secondary mathematics teachers generate exercises on a particular content area, verify the answers to a problem, create exercises efficiently, and seek hints and information about complex mathematics problems. Mathematics teachers rated AI tools by evaluating whether the tools could (a) give a correct solution; (b) give a solution using a particular method; and (c) prepare exercises with solutions effectively. Teachers in the two groups employed different prompting techniques when interacting with AI: Zero-Shot and Chain-of-Thought. One significant challenge was that teachers using the Zero-Shot prompting technique could fail to identify the mistakes made by AI and thus accept an incorrect solution. Teachers used Zero-Shot prompting to generate remedial exercises efficiently, choosing the default templates in the AI tools to save time. Teachers who used Chain-of-Thought prompting wanted to check whether AI tools could solve structured mathematics problems; they also used Chain-of-Thought to prepare questions similar to the original problem with different levels of difficulty.
In mathematics problem solving, there are established solutions to a problem, and AI can generate information and descriptions of various methods. However, AI may (a) make calculation mistakes; (b) choose the wrong approach to tackle the problem; or (c) develop an interpretation of the problem different from the user’s expectation. It is the teacher’s role to correct mistakes in the answers generated by AI: teachers need to interpret the questions, give information to AI, and be equipped with sufficient content knowledge to evaluate the solutions given by AI. In this study, Chain-of-Thought prompting produced more accurate solutions, in a better format and at the levels of difficulty that teachers intended to develop. Further study is necessary to explore a prompting framework based on Chain-of-Thought. By differentiating the objectives of remedial and enrichment needs, secondary mathematics teachers can benefit from a well-structured prompting framework or prompting cycle. AI technology is advancing at an unprecedented rate, and different AI problem solvers and geometry solvers are emerging, which necessitates further study on the collaboration of different AI tools.

Funding

This research was funded by a Hong Kong Metropolitan University Research Grant (No. RD/2024/1.2).

Institutional Review Board Statement

The study was reviewed, approved, and conducted by “Research Ethics Committee” of Hong Kong Metropolitan University (HKMU) with approval date: 14 May 2024; approval code: RD/2024/1.2. All the participants’ related information collected in this study is treated with strict confidentiality and is reported in an aggregated manner and anonymized form to ensure data privacy and protection.

Informed Consent Statement

Informed consent was obtained from all human subjects involved in this study.

Data Availability Statement

Due to ethical, data privacy, and data protection concerns, the raw data cannot be made publicly available.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Yau, K.W.; Chai, C.S.; Chiu, T.K.; Meng, H.; King, I.; Yam, Y. A phenomenographic approach on teacher conceptions of teaching Artificial Intelligence (AI) in K-12 schools. Educ. Inf. Technol. 2023, 28, 1041–1064. [Google Scholar] [CrossRef]
  2. Chiu, T.K.; Li, Y. How Can Emerging Technologies Impact STEM Education? J. STEM Educ. Res. 2023, 6, 375–384. [Google Scholar] [CrossRef]
  3. Ravi, P.; Broski, A.; Stump, G.; Abelson, H.; Klopfer, E.; Breazeal, C. Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms. arXiv 2023, arXiv:2312.04839. [Google Scholar]
  4. Loong, E.Y.; Herbert, S. Primary school teachers’ use of digital technology in mathematics: The complexities. Math. Educ. Res. J. 2018, 30, 475–498. [Google Scholar] [CrossRef]
  5. Schorcht, S.; Buchholtz, N.; Baumanns, L. Prompt the problem–investigating the mathematics educational quality of AI-supported problem solving by comparing prompt techniques. In Frontiers in Education; Frontiers Media SA: Lausanne, Switzerland, 2024; Volume 9, p. 1386075. [Google Scholar]
  6. Arora, S.; Narayan, A.; Chen, M.F.; Orr, L.; Guha, N.; Bhatia, K.; Chami, I.; Re, C. Ask Me Anything: A Simple Strategy for Prompting Language Models. In Proceedings of the Eleventh International Conference on Learning Representations 2023, Kigali, Rwanda, 1–5 May 2023; Available online: https://openreview.net/pdf?id=bhUPJnS2g0X (accessed on 6 July 2024).
  7. Williams, A.; Katz, L. The use of focus group methodology in education: Some theoretical and practical considerations. IEJLL Int. Electron. J. Leadersh. Learn. 2001, 5. [Google Scholar]
  8. Krueger, R.; Casey, M. Focus Groups: A Practical Guide for Applied Research, 3rd ed.; Sage: Newbury Park, CA, USA, 2000. [Google Scholar]
  9. Glaser, B.G.; Strauss, A.L. The Discovery of Grounded Theory: Strategies for Qualitative Research; Weidenfeld and Nicolson: London, UK, 1968. [Google Scholar]
  10. Gibson, B.; Hartman, J. Rediscovering Grounded Theory; SAGE Publications Ltd.: Newbury Park, CA, USA, 2014. [Google Scholar]
Figure 1. Prompting question 1 from Teacher D.
Figure 2. Prompting questions from Teacher E.
Figure 3. Prompting question by Teacher F.
Table 1. AI tools used in the remedial group.

Teacher A: POE, Copilot, Wolfram Alpha
Teacher B: Wolfram Alpha, Magic School
Teacher C: POE, Copilot
Teacher F: Wolfram Alpha, Websim AI
Table 2. AI tools used in the enrichment group.

Teacher D: POE, Wolfram Alpha, Thetawise
Teacher E: POE, Copilot, Thetawise
Teacher F: MathGPT, Thetawise, Julius AI
Table 3. Prompting techniques used by the teachers (of the four variants: Zero-Shot, Chain-of-Thought, Ask-Me-Anything, Few-Shot).

Teacher A: Zero-Shot
Teacher B: Zero-Shot
Teacher C: Chain-of-Thought
Teacher D: Zero-Shot
Teacher E: Chain-of-Thought
Teacher F: Chain-of-Thought
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Cheng, W.-K. Role of Mathematics Teachers in Learner’s Diversity Using AI Tools. Eng. Proc. 2025, 89, 19. https://doi.org/10.3390/engproc2025089019

