Automated Test Creation Using Large Language Models: A Practical Application
Abstract
1. Introduction
2. The Evolution of Software Applications for Test Creation
- Traditional systems are highly dependent on predefined rules and templates, which limits the ability to generate diverse questions, especially when more complex or adaptive test formats are required.
- These systems require significant manual configuration, both in creating rules and selecting appropriate keys and distractors, making them time-consuming and labor-intensive.
- Adaptivity is limited: it is usually based only on prior answers and cannot easily adjust to dynamically changing conditions.
- These applications perform only limited semantic processing, which reduces their ability to understand and interpret deeper contexts and relationships within the text.
3. Using ChatGPT for Test Creation
- Purpose of the test: the specific purpose of the test to be considered when creating the questions (e.g., assessment of understanding, self-preparation, competition preparation);
- Topic coverage: distribution of questions across different subtopics (e.g., five questions for each of the four subtopics within one main topic);
- Difficulty level: the complexity level of the questions (e.g., easy, medium, hard);
- Question type: the type of questions to be generated (e.g., multiple choice, open-ended questions, true/false, fill-in-the-blank);
- Question format: additional requirements for the format of the questions (e.g., use of diagrams or charts, inclusion of case studies or scenarios);
- Question style: the type of expression (e.g., academic, conversational, technical);
- Number of answers: how many answers the multiple-choice questions should have (e.g., four possible answers, one correct);
- Feedback: inclusion of explanations or comments for correct and incorrect answers (e.g., a brief explanation for each correct answer);
- Specific guidelines: additional instructions or criteria for creating the questions (e.g., avoiding questions with misleading answers, including questions that require critical thinking);
- Use of sources: inclusion of questions based on specific sources or texts (e.g., questions based on a specific article, textbook, or study);
- Context and examples: inclusion of questions with context or examples (e.g., questions that use real-life situations or case studies);
- Test format: the output format in which ChatGPT should provide the questions and answers (e.g., table, text file, Excel file);
- Target audience: the age group or education level of the learners (e.g., high school students, university students, professionals), etc.
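For illustration, the following minimal sketch shows how several of the parameters listed above could be combined into a single ChatGPT prompt. The function and wording are assumptions made for illustration, not the authors’ exact prompt.

```python
# A minimal sketch: assembling test-generation parameters into one prompt.
# All names and phrasing are illustrative, not the authors' actual prompt.

def build_prompt(topic: str, audience: str, difficulty: str,
                 question_type: str, num_answers: int, num_questions: int) -> str:
    return (
        f"Create {num_questions} {question_type} questions on the topic '{topic}' "
        f"for {audience}. Difficulty level: {difficulty}. "
        f"Each question must have {num_answers} possible answers, exactly one of "
        "which is correct. Provide a brief explanation for each correct answer. "
        "Avoid misleading answer options and include questions that require "
        "critical thinking."
    )

print(build_prompt("relational databases", "university students", "medium",
                   "multiple-choice", 4, 5))
```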
- Reusability: Questions and tests can be stored in a database and reused multiple times. This eliminates the need for the educator to keep tests in separate files, where searching for and reusing them is more difficult.
- Personalization: The application can include specific criteria for creating tests that are tailored to a particular subject area, educator, or educational institution. This allows for the creation of tests adapted to specific needs and teaching styles.
- Flexibility and adaptability: Developing a custom application can provide greater flexibility in design and features, allowing adaptation to various test formats and question types.
- Data protection: Using a proprietary software product allows control over the security and privacy of learners’ data, avoiding the risks associated with confidential data being processed by third parties.
- Specific educational methodologies: The educator may use unique teaching methods that existing applications do not support, and the custom application can be designed to support them.
- Integration with other systems: The custom application can be more easily integrated with other internal systems that the educational institution or teacher already uses.
4. AI-Powered Quiz Creation and Management Tool
4.1. Technologies and Methods Used
4.2. Key Roles and Functionalities
- User management: The administrator can manage user accounts, including creating, modifying, and deleting accounts.
- Entering text with educational content: The educator enters the educational content on the basis of which test questions and answers are to be created.
- Generating questions and answers: The system uses the specialized module and the ChatGPT API to generate multiple questions and answers based on the entered text, with the correct answer marked as such.
- Approving test questions: The educator reviews the generated questions and answers and approves the appropriate ones according to their own judgment. The approved answers are saved in the database. There is an option for subsequent editing of questions and answers.
- Creating quizzes: The educator can create quizzes by selecting from the test questions stored in the database.
- Taking quizzes: Learners take quizzes as assigned by the educator.
- Grading quizzes: The system automatically grades the quiz submissions provided by the learners (see the sketch following this list).
- Reviewing quizzes: Learners have the opportunity to review the graded quizzes and the mistakes they made.
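To make the grading step above concrete, the following is a minimal Python sketch of automatic grading against stored correct-answer flags. The data structures and names are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch of automatic grading: compare each learner-selected
# answer with the stored correct answer. Data structures are assumed.

def grade_submission(correct_answer_ids: dict[int, int],
                     student_answers: dict[int, int | None]) -> float:
    """Return the score as a fraction of correctly answered questions.

    correct_answer_ids: question_id -> id of the correct answer
    student_answers:    question_id -> id of the learner's chosen answer
                        (None if the question was left unanswered)
    """
    if not correct_answer_ids:
        return 0.0
    correct = sum(
        1 for qid, ans in correct_answer_ids.items()
        if student_answers.get(qid) == ans
    )
    return correct / len(correct_answer_ids)

# Example: 2 of 3 questions answered correctly -> 0.67
print(round(grade_submission({1: 11, 2: 22, 3: 33}, {1: 11, 2: 22, 3: 30}), 2))
```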
- Bloom’s taxonomy level, e.g., Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation
- Difficulty of questions, e.g., easy, medium, difficult
- Question style, e.g., academic, conversational, technical
- Feedback, e.g., no feedback, correct/incorrect (i.e., only indicates if the answer is correct or incorrect), a brief explanation for each correct answer, etc.
- Specific guidelines for questions, e.g., avoiding questions with misleading answers, creating questions that require critical thinking, including real-life examples, incorporating interdisciplinary questions, short and precise formulations, etc.
- Training for specific skills, e.g., computational skills, problem-solving, reading comprehension, data interpretation skills, language skills, research skills, etc.
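One possible way to represent these categorization parameters in code is sketched below. The enumeration values mirror the examples above, while the class and field names are illustrative assumptions rather than the application’s actual types.

```python
# Hypothetical representation of the question-generation parameters above.
from dataclasses import dataclass, field
from enum import Enum

class BloomLevel(Enum):
    KNOWLEDGE = "Knowledge"
    COMPREHENSION = "Comprehension"
    APPLICATION = "Application"
    ANALYSIS = "Analysis"
    SYNTHESIS = "Synthesis"
    EVALUATION = "Evaluation"

class Difficulty(Enum):
    EASY = "easy"
    MEDIUM = "medium"
    DIFFICULT = "difficult"

@dataclass
class GenerationConfig:
    bloom_level: BloomLevel = BloomLevel.KNOWLEDGE
    difficulty: Difficulty = Difficulty.MEDIUM
    question_style: str = "academic"        # academic | conversational | technical
    feedback: str = "brief explanation"     # none | correct/incorrect | explanation
    guidelines: list[str] = field(default_factory=list)   # e.g., "short and precise"
    skill_training: list[str] = field(default_factory=list)  # e.g., "problem-solving"
```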
4.3. Communication between the Application and LLM APIs. Prompt Engineering
- Teacher Configuration: A user with the role of a teacher configures the parameters for generating test questions, such as the topic, target group, test complexity, question types, and other specific requirements.
- Prompt Generation: Upon starting the process, the application generates a prompt that includes the teacher-defined parameters and a JSON template for the response structure.
- Prompt Submission: The client application sends this prompt to the LLM through the appropriate model’s API.
- LLM Response: The LLM returns an output that contains the generated test questions and answers in JSON format.
- Output Processing: The client application processes the received JSON output and displays the questions and answers in a user-friendly format.
- Teacher Review and Storage: The teacher reviews and edits the questions, after which they are saved in the application’s database.
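A minimal end-to-end sketch of this workflow is given below, assuming the official openai Python client and a JSON response format. The JSON template, model name, and parsing logic are illustrative; the paper does not publish its exact prompt or client code.

```python
# Sketch of the prompt -> LLM -> JSON round trip described above,
# assuming the openai Python client. Template and model name are placeholders.
import json
from openai import OpenAI

JSON_TEMPLATE = """Return the result strictly as JSON in this structure:
{"questions": [{"text": "...",
                "answers": [{"text": "...", "correct": true,
                             "feedback": "..."}]}]}"""

def generate_questions(prompt: str) -> list[dict]:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model name
        messages=[{"role": "user", "content": prompt + "\n" + JSON_TEMPLATE}],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    return json.loads(response.choices[0].message.content)["questions"]

for q in generate_questions("Create 5 multiple-choice questions on SQL joins "
                            "for university students, medium difficulty, "
                            "4 answers each, exactly one correct."):
    print(q["text"])
```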
4.4. System Architecture
- Managing user profile information
- Entering data and text for generating test questions; approving and editing test questions and answers; configuring quizzes
- Taking quizzes, reviewing received results, and more.
- Generating test questions and configuring tests
- Executing and automatically grading tests
- Storing the tests in the database
- Personalization for users, utilizing the User Management System.
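The paper does not prescribe a specific server framework; purely as an assumption, the following FastAPI skeleton illustrates how the server-side modules above could be exposed as HTTP endpoints.

```python
# Hypothetical illustration (FastAPI) of the server-side modules above.
# Framework, routes, and payload shapes are not specified in the paper.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class GenerationRequest(BaseModel):
    resource_text: str        # educational content entered by the teacher
    num_questions: int = 5

@app.post("/questions/generate")
def generate_questions(req: GenerationRequest) -> dict:
    # Would invoke the LLM communication module (see the Section 4.3 sketch)
    # and return draft questions for teacher review.
    return {"questions": []}

@app.post("/tests/{test_id}/submissions")
def submit_test(test_id: int, answers: dict[int, int]) -> dict:
    # Would store the learner's answers and grade the submission automatically.
    return {"test_id": test_id, "score": 0.0}
```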
4.5. System Database
- Users: Contains information about all system users, including administrators, teachers, and learners. Attributes include a unique identifier, username, password, and role in the system.
- Educational Resources: Stores resources in the form of text or files, which are used for generating test questions. Includes a unique resource identifier, text, file name, and author identifier, which is a foreign key to the user who provided the resource.
- Test Categories: Stores test categories that define both individual questions and the overall test. Attributes include a category identifier and its name.
- Questions: Contains the questions generated through the OpenAI API. Includes a question identifier, question category identifier, question content, and an optional foreign key to the educational resource, as questions can be created without being tied to specific educational resources. Optional attributes for additional categorization of the question, used when generating it from the OpenAI API, include Bloom Level, Difficulty, Guidelines, Question Style, and Skill Training.
- Answers: Stores all answers associated with the questions. Consists of an answer identifier, question identifier, answer text, feedback for the answer, and a boolean flag indicating whether the answer is correct.
- Tests: Stores information about the created tests, including a unique test identifier, test title, test author identifier, test category identifier, test creation date, and a boolean flag indicating whether the test is active.
- Test Questions: Establishes a relationship between tests and questions and contains the identifiers of the test and the question.
- Test Submissions: Records the test solutions submitted by learners and includes a unique submission identifier, test identifier, student identifier, and score.
- Student Answers: Contains the answers given by learners to the test questions, including the submission identifier, question identifier, and an optional identifier of the given answer.
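As an illustration, the entities above can be expressed as relational DDL. The following SQLite sketch, embedded in Python, infers column names from the descriptions and may differ from the authors’ actual schema.

```python
# Sketch of the schema above as SQLite DDL. Column names are inferred
# from the entity descriptions, not taken from the authors' database.
import sqlite3

DDL = """
CREATE TABLE users (
    id INTEGER PRIMARY KEY, username TEXT NOT NULL,
    password TEXT NOT NULL, role TEXT NOT NULL);
CREATE TABLE educational_resources (
    id INTEGER PRIMARY KEY, text TEXT, file_name TEXT,
    author_id INTEGER REFERENCES users(id));
CREATE TABLE test_categories (
    id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE questions (
    id INTEGER PRIMARY KEY,
    category_id INTEGER REFERENCES test_categories(id),
    content TEXT NOT NULL,
    resource_id INTEGER REFERENCES educational_resources(id),  -- optional
    bloom_level TEXT, difficulty TEXT, guidelines TEXT,
    question_style TEXT, skill_training TEXT);
CREATE TABLE answers (
    id INTEGER PRIMARY KEY,
    question_id INTEGER NOT NULL REFERENCES questions(id),
    text TEXT NOT NULL, feedback TEXT, is_correct BOOLEAN NOT NULL);
CREATE TABLE tests (
    id INTEGER PRIMARY KEY, title TEXT NOT NULL,
    author_id INTEGER REFERENCES users(id),
    category_id INTEGER REFERENCES test_categories(id),
    created_at DATE, is_active BOOLEAN NOT NULL DEFAULT 1);
CREATE TABLE test_questions (
    test_id INTEGER REFERENCES tests(id),
    question_id INTEGER REFERENCES questions(id),
    PRIMARY KEY (test_id, question_id));
CREATE TABLE test_submissions (
    id INTEGER PRIMARY KEY,
    test_id INTEGER REFERENCES tests(id),
    student_id INTEGER REFERENCES users(id),
    score REAL);
CREATE TABLE student_answers (
    submission_id INTEGER REFERENCES test_submissions(id),
    question_id INTEGER REFERENCES questions(id),
    answer_id INTEGER REFERENCES answers(id));  -- optional: may be NULL
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
print("schema created")
```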
5. Experiments
6. Future Work
- Empirical study: The application will be used for generating and conducting tests in existing computer science courses, allowing it to be evaluated by both teachers and students. Through observation and surveys, the usability of the application and its impact on teachers’ work, the learning process, and learning outcomes will be assessed.
- Comparative analysis with traditional test creation methods: Studies will be conducted to gather opinions from both teachers and students regarding the quality, relevance, adaptability, and evaluation of questions generated with the help of AI.
- Exploration of the application’s potential in different fields of education: This research will focus on the adaptability of the application across disciplines outside the computer and technical sciences. Experiments will be conducted with courses in the social sciences and humanities to test its ability to generate questions in different contexts and at varying levels of complexity.
- Evaluation of applicability in different teaching strategies: The application will be studied in various teaching strategies and approaches, such as problem-based and project-based learning, self-learning, group work, etc. Experiments and analyses will be conducted to explore how the application can be used to support different teaching strategies and forms of learning.
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References