Article

From Prompt to Polished: Exploring Student–Chatbot Interactions for Academic Writing Assistance

Faculty of Instructional Technologies, Holon Institute of Technology—HIT, Holon 5810201, Israel
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(3), 329; https://doi.org/10.3390/educsci15030329
Submission received: 3 February 2025 / Revised: 17 February 2025 / Accepted: 6 March 2025 / Published: 7 March 2025
(This article belongs to the Section Technology Enhanced Education)

Abstract

The integration of generative artificial intelligence (GenAI) in higher education has opened new avenues for enhancing academic writing through student–chatbot interactions. While initial research has explored this potential, deeper insights into the nature of these interactions are needed. This study characterizes graduate students’ interactions with AI chatbots for academic writing, focusing on the types of assistance they sought and their communication style and tone patterns. To achieve this, individual online sessions were conducted with 43 graduate students, and their chatbot interactions were analyzed using qualitative and quantitative methods. The analysis identified seven distinct types of assistance sought by students. The most frequent requests involved content generation and expansion, followed by source integration and verification, and then concept clarification and definitions. Students also sought chatbot support for writing consultation, text refinement and formatting, and, less frequently, rephrasing and modifying content and translation assistance. The most frequent communication style was “requesting,” marked by direct appeals for assistance, followed by “questioning” and “declarative” styles. In terms of communication tone, “neutral” and “praising” appeals dominated the interactions, reflecting engagement and appreciation for chatbot responses, while “reprimanding” tones were relatively low. These findings highlight the need for tailored chatbot interventions that encourage students to seek AI assistance for a broader and more in-depth range of writing tasks.

1. Introduction

In recent years, the integration of Generative Artificial Intelligence (GenAI) into educational settings has shown significant promise in enhancing various aspects of teaching and learning (Chan & Hu, 2023; Kurtz et al., 2024; Luckin et al., 2022). Among the various applications of GenAI in higher education, AI chatbots—automated conversational agents powered by large language models (LLMs)—have garnered significant attention (Fryer et al., 2019; Labadze et al., 2023). These chatbots leverage the capabilities of LLMs to engage in meaningful, context-aware conversations, providing users with interactive and immediate feedback, personalized assistance, and comprehensive support (Bond et al., 2024; Chan & Hu, 2023; Okonkwo & Ade-Ibijola, 2021).
The current literature on AI chatbot integration in higher education reports its positive impact on general learning outcomes (Abbas et al., 2024; Essel et al., 2022; Labadze et al., 2023). These tools have been documented to improve students’ understanding, motivation, and overall educational experiences (Deng & Yu, 2023; Labadze et al., 2023; Okonkwo & Ade-Ibijola, 2021). Studies that explored students’ perspectives on using chatbots in the learning process indicate that students have an overall positive view of these tools and appreciate their capabilities (Bibi & Atta, 2024; Chan & Hu, 2023; Shoufan, 2023).
One specific area where AI chatbots show significant potential is in enhancing academic writing skills, including the ability to communicate complex ideas clearly and persuasively, which is a vital skill for both academic and professional success (Chan & Hu, 2023; Labadze et al., 2023; Nguyen et al., 2024). These tools can provide immediate feedback on grammar, structure, and content, suggest improvements, and help students refine their writing in real time (Gill et al., 2024; Lin & Chang, 2020). Additionally, they can serve as a valuable shortcut, significantly reducing the time required to complete academic assignments or projects (Abbas et al., 2024). This personalized assistance is particularly beneficial for graduate students, who often face high expectations for producing high-quality written work within tight deadlines (Gupta et al., 2022; Lavelle & Bushrow, 2007; Nguyen et al., 2024).
Recent systematic literature reviews indicate that research on AI chatbots in higher education is still in its infancy, with several key areas requiring further exploration. Notably, the pedagogical and learning strategies employed when using chatbots for educational purposes warrant further investigation (Bond et al., 2024; Labadze et al., 2023; Okonkwo & Ade-Ibijola, 2021). Some scholars have proposed conducting qualitative analyses of actual student–chatbot conversations to better understand these interactions (Bibi & Atta, 2024; Smutny & Schreiberova, 2020). Moreover, the existing literature on AI chatbots in higher education focuses mainly on student perceptions and attitudes toward using various AI tools, primarily through quantitative surveys and self-reports (Abbas et al., 2024; Belda-Medina & Kokošková, 2023; Chan & Hu, 2023; Ding et al., 2023). While these studies provide valuable insights, they do not capture the nuances of actual interactions between students and chatbots during the writing process (Bibi & Atta, 2024; Tlili et al., 2023).

Research Goal and Questions

Our research addresses these gaps by characterizing and examining graduate students’ interactions with AI chatbots for academic writing, focusing on the types of assistance they sought and their communication style and tone patterns. By analyzing real student–chatbot conversations, as reflected in the content of their prompts, we aim to provide a comprehensive understanding of how AI chatbots can effectively support academic writing in higher education.
Two research questions guided this study:
  • What types of assistance do graduate students seek during interactions with AI chatbots for academic writing, as reflected in the content of their prompts?
  • What communication style and tone patterns emerge in graduate students’ interactions with AI chatbots for academic writing?

2. Related Work

2.1. Generative Artificial Intelligence (GenAI) in Higher Education

Generative artificial intelligence (GenAI) refers to AI systems capable of creating new text, images, audio, and other forms of data. These systems use complex algorithms, primarily neural networks and deep learning models, to produce content that is often indistinguishable from that created by humans (Luckin et al., 2016; Radford et al., 2019). By learning patterns through machine learning algorithms applied to large existing datasets, including books, news articles, and websites, GenAI can generate insightful and relevant new content (Chan & Hu, 2023; Kurtz et al., 2024). The ability of GenAI to process vast amounts of data and extract insights offers significant advantages, such as handling large data volumes and performing tasks at unprecedented speeds, thus providing efficiency and scalability (Essel et al., 2022; Nikolopoulou, 2024).
This linguistic proficiency, combined with its capacity to process extensive datasets, makes GenAI applicable in diverse fields, including healthcare, workplaces, and educational environments (Essel et al., 2022; Luckin et al., 2016). In higher education settings, GenAI has revolutionized content delivery and consumption. It enables personalized learning experiences and automates administrative tasks, enhancing educational efficiency and effectiveness (Bond et al., 2024; Walter, 2024; Zawacki-Richter et al., 2019). For example, GenAI can tailor educational content to individual student needs, providing customized learning paths and immediate feedback, thus fostering a more engaging and effective learning environment (Chan & Hu, 2023).
However, deploying GenAI in educational settings presents considerable challenges. Ethical concerns include the potential for generating misleading or harmful content, data privacy issues, and biases within AI models, which may lead to unintended consequences (Bogina et al., 2022; Nam & Bai, 2023; Usher & Barak, 2024). The quality of training data is crucial; inaccuracies or misinformation can exacerbate these issues (Ding et al., 2023). Moreover, the advanced linguistic capabilities of GenAI can make generated information appear scientific and reliable, potentially masking inaccuracies (Gill et al., 2024; Nam & Bai, 2023). There is also the risk of over-reliance on technology, which may diminish critical thinking and problem-solving skills among learners (Abbas et al., 2024; Chan & Hu, 2023). Addressing these issues requires robust AI governance frameworks and ongoing research to ensure ethical use and mitigate biases (Bond et al., 2024; Taddeo & Floridi, 2018; Usher & Barak, 2024). Balancing the integration of GenAI with established educational practices is essential to ensure that technology enhances rather than replaces fundamental learning processes.
Research indicates that GenAI is becoming increasingly prevalent in higher education, with the potential to transform teaching and learning processes (Gill et al., 2024; Nikolopoulou, 2024; Chan & Hu, 2023; Luckin et al., 2022). Among its various applications, AI chatbots—automated conversational agents driven by large language models (LLMs)—have attracted considerable interest (Fryer et al., 2019; Labadze et al., 2023).

2.2. AI Chatbot Integration in Higher Education

AI chatbots are advanced software applications that utilize natural language processing (NLP) to engage in human-like conversational interactions (Belda-Medina & Kokošková, 2023; Gill et al., 2024; Smutny & Schreiberova, 2020). These chatbots can understand and respond to user queries, providing real-time assistance and information (Hew et al., 2023). Technological advancements have made chatbots more convenient, natural, and user-friendly, thereby expanding their potential across various domains, including higher education (Zawacki-Richter et al., 2019).
In this context, AI chatbots are increasingly being integrated into educational platforms to engage learners in meaningful, context-aware conversations (Chan & Hu, 2023; Okonkwo & Ade-Ibijola, 2021). These chatbots can tailor educational content to individual learning styles and paces, providing interactive, immediate feedback and personalized support (Chan & Hu, 2023; Lee et al., 2022). Often referred to as virtual teaching assistants, AI chatbots assist students with coursework, answer questions, and explain complex concepts (Ding et al., 2023; Fryer et al., 2019; Labadze et al., 2023). This integration alleviates some of the workload from human educators, allowing them to focus on more nuanced instructional tasks (Ding et al., 2023; Essel et al., 2022; Okonkwo & Ade-Ibijola, 2021).
Recent research has extensively explored students’ perspectives on the use of chatbots in the learning process (Abbas et al., 2024; Bibi & Atta, 2024; Chan & Hu, 2023; Ding et al., 2023). These studies generally indicate that students have positive perceptions of AI chatbots, enjoying their use and appreciating their capabilities (Bibi & Atta, 2024; Chan & Hu, 2023; Shoufan, 2023). However, despite their favorable views, students also express concerns about the accuracy of the information provided by chatbots and the potential implications for privacy and data security (Chan & Hu, 2023; Usher & Barak, 2024). Another issue raised by students concerns the risk of over-reliance on AI: Ding et al. (2023) reported that many students blindly rely on chatbot responses and therefore need to strengthen their critical thinking skills and examine the information sources on which the chatbot bases its responses.
Indeed, the effective use of chatbots requires appropriate skills known in the literature as AI literacy (Ding et al., 2023; Nikolopoulou, 2024; Tlili et al., 2023; Walter, 2024). This literacy involves a deep understanding of how the models underlying GenAI work, knowledge of the architecture of these models, familiarity with basic concepts related to AI, awareness of its limitations, and the ability to interact with it effectively (Kong et al., 2023; Walter, 2024). In other words, AI literacy encompasses technical knowledge alongside an awareness of the social and ethical implications of chatbots (Usher & Barak, 2024; Walter, 2024).
Current studies report a wide range of purposes for which students use AI chatbots, including receiving instant answers to questions, explanations of complex concepts, visual and audio multimedia assistance, and help with summarizing readings and completing assignments (Bibi & Atta, 2024; Chan & Hu, 2023). Additionally, students report using AI chatbots for organizing thoughts, brainstorming ideas, and receiving feedback on drafts (Barrett & Pack, 2023; Bibi & Atta, 2024; Labadze et al., 2023; Shoufan, 2023). Chatbots also assist by providing suggestions for sentence structure, brevity, cohesion, clarity, spelling, and grammar across multiple drafts (Abbas et al., 2024; Bibi & Atta, 2024; Chan & Hu, 2023; Lin & Chang, 2020). These varied applications underscore the flexibility and utility of AI chatbots in supporting the diverse needs of students throughout their academic endeavors.

2.3. Writing with the Assistance of AI Chatbots

The aforementioned functionalities are particularly beneficial for supporting students at various stages of the academic writing process. Writing is an iterative, non-linear, and cognitively complex activity involving planning, drafting, evaluating, and revising (Graham, 2018). These stages require integrating prior knowledge, searching for content, and organizing it into coherent ideas, alongside linguistic skills and an understanding of writing objectives and the target audience (Nguyen et al., 2024; Wang, 2024; Zhao et al., 2024). Additionally, the writing process fosters subject knowledge development and critical thinking, contributing to curiosity, creativity, perseverance, and responsibility (Barrett & Pack, 2023). Therefore, higher education institutions rightly view writing skills as essential for degree completion (Graham, 2018; Kumar & Mindzak, 2024; Lavelle & Bushrow, 2007).
Graduate students, in particular, must master the skill of writing effectively to communicate their research ideas and findings. This involves synthesizing the existing literature, generating new insights, and clearly articulating their results to a broad and diverse audience (Nguyen et al., 2024). They face high expectations for precision, coherence, and quality in their written work, often under tight deadlines (Gupta et al., 2022; Lavelle & Bushrow, 2007). The ability to write academically is not merely a functional skill but a critical component of their intellectual toolkit, enabling them to contribute meaningfully to their respective fields (Nguyen et al., 2024). Many students face significant challenges while attempting to produce high-quality academic writing. These challenges include structuring their arguments, ensuring grammatical accuracy, and adhering to formal conventions, all of which are crucial for academic success (Graham, 2018; Lavelle & Bushrow, 2007).
The digitization of writing represents a significant transformation in the writing process, profoundly impacting how writing is conducted (Zhao et al., 2024). This transformation began decades ago with the use of word processors that aided in spelling and grammar correction (Cummings et al., 2024). As technology advanced, tools for automatic text correction such as Grammarly and Wordtune entered the market (Jen & Salam, 2024). These tools provide synchronous feedback on spelling, grammar, or style as the text is written, and include features such as academic article search and management tools for creating final reference lists (Zhao et al., 2024).
While such digital tools have been adopted to aid in the academic writing process, the unique capabilities of AI-based chatbots offer a new dimension of support that is both interactive and adaptive, helping students navigate the complexities of academic writing (Bibi & Atta, 2024; Ding et al., 2023). In recent years, these tools have offered students the ability to create textual and visual content easily and for free (Wang, 2024). AI chatbots have been documented to support the writing process from preparation through editing and proofreading to post-writing reflection, thereby enhancing the overall quality of students’ work (Bibi & Atta, 2024; Ding et al., 2023; Gill et al., 2024). They have also been shown to support students in conducting literature reviews and synthesizing information, which are critical components of scholarly writing (Nguyen et al., 2024). This personalized support is invaluable in helping students organize their thoughts, develop clear arguments, and maintain academic rigor in their writing (Nguyen et al., 2024). Studies have found that students’ use of GenAI for writing purposes enhances their writing experience, reduces the cognitive load required for the process, lowers anxiety, and provides immediate feedback (Nguyen et al., 2024; Wang, 2024).
However, this convenience comes with significant challenges. One major concern is the risk of over-reliance on AI tools, which can mislead students through the generation of inaccurate or unreliable information (Chan & Hu, 2023; Mogavi et al., 2024; Nguyen et al., 2024). This dependency may also impair the development of critical academic skills, as students might prioritize ease and efficiency over deeper engagement with the writing process (Mogavi et al., 2024). Another issue relates to content ownership and authorship, which raises questions about the originality and authenticity of scholarly work produced with AI assistance (Cummings et al., 2024; Mogavi et al., 2024).
Although AI chatbots have clear potential to enhance the writing processes of higher education students, further insights are needed into how students interact with these tools, how they integrate them into their writing, and the potential consequences of their use (Mogavi et al., 2024; Nguyen et al., 2024; Wang, 2024). Most studies in this field focus on students’ perspectives and attitudes, primarily using quantitative surveys based on self-reports (Abbas et al., 2024; Bibi & Atta, 2024; Chan & Hu, 2023; Ding et al., 2023). This study aims to address this gap by examining the content and characteristics of graduate students’ interactions with AI chatbots during their academic writing endeavors. By shedding light on this underexplored area, this research seeks to contribute to the growing body of literature on the utilization of AI chatbots in higher education and provide practical insights for educators and educational designers seeking to optimize the use of GenAI for academic writing support.

3. Materials and Methods

3.1. Participants and Procedure

This study involved 43 graduate students (36 females and 7 males) enrolled in the Faculty of Instructional Technologies at a large public higher education institution during the spring semester of the 2023–2024 academic year. All students were registered for a mandatory seminar course requiring them to write a seminar paper on a topic related to instructional technologies. Notably, all participants were in the second year of their master’s degree and had previously taken courses where they were introduced to GenAI tools and writing prompts.
Before participating, students reported moderate levels of prior experience using AI chatbots for academic writing, with an average score of 3.05 (SD = 0.90) on a 5-point scale. Students also rated their purposes for using chatbots in their studies on the same 1-to-5 scale, across ten listed purposes. The five purposes that received the highest average scores were formulating ideas for inspiration (M = 3.58, SD = 1.15), writing new text (M = 3.26, SD = 1.33), editing existing content (M = 3.14, SD = 1.57), completing course assignments (M = 2.95, SD = 1.44), and searching for academic papers (M = 2.88, SD = 1.20).
To explore graduate students’ interactions with AI chatbots for academic writing assistance, individual online sessions were conducted via Zoom. Each session lasted approximately 45 min and was moderated by a researcher with a PhD in education. Prior to the sessions, students completed a consent form detailing their rights, including the voluntary nature of their participation, the option to withdraw at any time, and the measures taken to ensure anonymity and confidentiality. Students also completed a pre-survey capturing their demographic information and prior experience using AI chatbots for educational purposes.
The online sessions took place mid-semester, by which time students had selected their seminar topics and drafted the first two paragraphs of the initial chapter of their literature review. At this stage, students were still in the early phases of their writing process, as they had yet to develop the full structure of their literature review, expand on key theoretical concepts, or progress to later chapters of their seminar work. They were asked to bring a file containing these paragraphs to the session. To establish their baseline writing abilities, students were instructed to complete these drafts without using any AI tools. Each online session began with a brief introduction to the study (5 min); after which, students were instructed to select an AI chatbot of their preference and initiate an interaction using the standardized opening prompt:
“Hello, I am a graduate student in the Faculty of Instructional Technologies. As part of our seminar course, we are required to write a literature review on a chosen topic. My topic is [INSERT THE TOPIC OF YOUR SEMINAR WORK HERE]. The literature review includes several sub-topics, and I am currently focusing on the sub-topic titled [INSERT THE TITLE OF YOUR SUB-TOPIC HERE]. I have already composed the first two paragraphs of this section, but I need to add at least two more paragraphs to thoroughly cover and complete this sub-topic. Before proceeding with the additional paragraphs, I would like your assistance in reviewing our existing text. Please read the two paragraphs I will send you and provide detailed feedback on areas that need improvement, without directly correcting the text. Point out specific places that require enhancement and detail the nature of these improvements. This will help me reflect on each suggestion and apply them myself. After reviewing these paragraphs, I would appreciate your assistance in drafting the next two paragraphs. The goal is to achieve a comprehensive and well-developed coverage of the sub-topic.”
After providing the initial prompt, students were asked to continue the conversation with the chatbot to advance their writing. They were informed that they could utilize other external sources, such as web browsing, Google Scholar, or any other resources. Each student worked individually on the writing task for approximately 45 min. They were not guided to follow any specific steps but were encouraged to progress in their writing as they wished, using the AI chatbot for assistance as needed. Once they felt they had exhausted the writing for that session, they were asked to send the researcher the shareable link to the full conversation with the chatbot, which included all the prompts and responses. While students were free to choose any chatbot, the majority opted for ChatGPT.

3.2. Research Method and Tools

This study employed a qualitative approach, complemented by quantitative coding, to examine students’ interactions with AI chatbots for academic writing assistance. The data were collected through a short pre-survey and content analysis of students’ documents (i.e., their conversations with the chatbots).
The pre-survey was distributed to students before the online session. It aimed to examine students’ demographics and gather information about their prior experience with AI chatbots for educational purposes through two closed-ended questions. The first question measured students’ overall prior experience (“To what extent do you have experience using AI chatbots for educational purposes?”). The second question explored their specific purposes for using AI chatbots for educational purposes (“To what extent did you use the AI chatbots for the following purposes?”). Students were provided with a list of ten possible purposes, including formulating ideas for inspiration, writing new text, completing course assignments, and searching for academic papers.
Content analysis was conducted to characterize and examine students’ interactions with AI chatbots for academic writing assistance. This included qualitatively analyzing the content of students’ prompts, followed by a quantitative coding phase. First, we conducted an inductive content analysis of the prompts students used throughout the online conversations to identify common themes and patterns in content, communication style, and communication tone. For the purpose of this study, communication style refers to the structural approach students use to frame their interactions with chatbots, while communication tone refers to the emotional undertone or sentiment of the interactions. Following the qualitative analysis, a quantitative analysis was performed to determine the frequency and distribution of the categories and patterns observed. By combining these qualitative and quantitative methods, we aimed to gain a comprehensive understanding of the types of assistance students seek when interacting with AI chatbots.
This study was conducted in accordance with the academic institute’s ethical guidelines and received IRB approval.

3.3. Data Analysis

The qualitative data from students’ interactions with AI chatbots were analyzed using the conventional (inductive) content analysis approach, as outlined by Hsieh and Shannon (2005). The analysis proceeded in four stages: First, all students’ prompts were compiled into a single comprehensive file for detailed analysis. Second, the two authors conducted a thorough examination of the prompts, highlighting text segments indicative of the purpose of each prompt (i.e., its content), since a single prompt could include multiple types of assistance sought. The style and tone of the communication were also highlighted. Third, the highlighted text segments were thematically categorized based on identified common themes and patterns. This included categorizing the content into types of assistance sought and categorizing the communication by style (i.e., Requesting, Questioning, Declarative) and tone (i.e., Praising, Reprimanding, Neutral). These segments were then numerically coded to reflect the frequency and distribution of each. Fourth, to ensure inter-coder reliability, a randomly selected sample of the student prompts, along with the established categories, was evaluated by two additional judges who hold master’s degrees in educational research and are currently PhD students in the field. Cohen’s Kappa analysis confirmed high inter-coder reliability, with agreements of 92% and 89% with the first and second judges, respectively.
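To make the reliability step concrete, the following minimal sketch (not the authors’ actual analysis script) shows how Cohen’s Kappa could be computed between the authors’ coding and one additional judge, assuming the coded prompt segments are stored as parallel lists of category labels; the category names and data below are hypothetical placeholders.

# Minimal sketch: inter-coder reliability via Cohen's Kappa (hypothetical data)
from sklearn.metrics import cohen_kappa_score

# Category assigned to each sampled prompt segment by the authors and by Judge 1
author_codes = ["content_generation", "source_integration", "clarification",
                "writing_consultation", "content_generation", "text_refinement"]
judge1_codes = ["content_generation", "source_integration", "clarification",
                "content_generation", "content_generation", "text_refinement"]

# Kappa corrects raw percentage agreement for agreement expected by chance
kappa = cohen_kappa_score(author_codes, judge1_codes)
print(f"Cohen's Kappa (authors vs. Judge 1): {kappa:.2f}")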
The quantitative data collected from students’ interactions with AI chatbots were analyzed using descriptive statistical methods to examine the distribution of assistance types, communication styles, and tones. The frequency and percentage of each category were calculated in relation to the overall number of prompt segments, and standard deviations (SD) were reported to capture variability across students. Together, these qualitative and quantitative analyses provide a comprehensive picture of the types of assistance students seek when interacting with AI chatbots.
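As an illustration of this descriptive step, the short sketch below (an assumed workflow, not the authors’ code) computes each assistance category’s share of a student’s prompt segments and then summarizes these shares across students, which is one way the reported means and standard deviations could be derived; the student IDs and categories shown are hypothetical.

# Minimal sketch: per-student category shares and their mean/SD across students
import pandas as pd

# Hypothetical coded prompt segments: one row per segment
segments = pd.DataFrame({
    "student": ["S1", "S1", "S1", "S2", "S2", "S3"],
    "category": ["content_generation", "source_integration", "content_generation",
                 "clarification", "content_generation", "source_integration"],
})

# Share (%) of each category within each student's prompt segments
per_student = (segments.groupby("student")["category"]
               .value_counts(normalize=True)
               .unstack(fill_value=0) * 100)

# Mean and standard deviation of each category's share across students
summary = per_student.agg(["mean", "std"]).T
print(summary.round(2))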

4. Results

4.1. Types of Assistance Sought During Students’ Interactions with AI Chatbots

The qualitative content analysis of student–chatbot interactions identified seven distinct types of assistance that students sought when using AI chatbots to support their academic writing. These categories reflect the diverse ways in which students engage with chatbots. Table 1 provides an overview of these categories, detailing the types of assistance sought by students.
The most frequent type of assistance sought by students was content generation and expansion, accounting for 34.66% (SD = 23.96) of the prompt segments. Students frequently used chatbots to generate new sections of their academic work. For example, one student requested, “Please write another paragraph, this time describing the effect of COVID-19 on digital learning” (S1, Female). Additionally, students sought to expand on existing content by strengthening logical connections between concepts. One student, for instance, requested assistance in linking theoretical perspectives to AI-based learning tools, stating, “I would like a new sentence at the end of the second paragraph that would link the behaviorist approach to the subject of AI-based learning tools” (S14, Female).
Source integration and verification was the second most frequent type of assistance sought, constituting 28.22% (SD = 23.80) of the prompt segments. Within this category, the majority of prompts involved seeking help in identifying relevant academic sources for literature reviews. For instance, one student requested, “Please write a new reference list with the full articles” (S17, Female). Others sought assistance in integrating sources appropriately into their text, as seen in one student’s request: “Please enrich this paragraph by inserting academic papers dealing with teachers’ use of Canva” (S13, Female). Interestingly, some students expressed concerns about the reliability of chatbot-generated references, actively verifying the credibility of suggested sources. For instance, one student inquired, “Once again, I would like to ask you how accurate the sources you gave me are and whether they are from credible academic journals” (S43, Female). These findings suggest that while students relied on chatbots for sourcing academic references, they remained critically aware of the importance of source credibility.
Seeking concept clarification and definitions was another significant way students engaged with chatbots, appearing in 10.62% (SD = 19.86) of the prompt segments. Students frequently requested definitions of key terms and theoretical concepts relevant to their research topics. For example, one student asked, “Please give me the definitions for synchronous and asynchronous digital learning” (S1, Female). Additionally, students often sought clarifications on chatbot-generated content, asking for further elaboration on responses they received. One example of this is a student who wrote, “Explain to me more what you meant in the paragraph describing the second point you mentioned” (S11, Female).
The category of writing consultation accounted for 10.23% (SD = 12.44) of the prompt segments. Within this category, students primarily engaged in deliberative exchanges with the chatbot to refine their ideas, strengthen their arguments, and assess the organization of their content. For instance, one student sought feedback on content structuring, stating, “I thought of adding the examples of implementing active learning that incorporate technology as a separate paragraph in this chapter. What do you think?” (S36, Male). Additionally, students consulted the chatbot for guidance on expanding their work, ensuring that new content aligned with their topic. One student inquired, “Is there something else that you think could be related to the topic, such as building a lesson plan incorporating active learning? Do you think it would be right to do or not?” (S37, Male). These interactions demonstrate that students used the chatbot as a thought partner, not only for content generation but also for evaluating the relevance and appropriateness of additional material in their writing.
Moreover, students also requested text refinement and formatting, accounting for 9.94% (SD = 18.69) of the prompt segments. This category included requests for structuring paragraphs, improving logical flow, and ensuring formal correctness. For instance, one student asked, “Can you divide this text into logical paragraphs so that each idea is clearly separated?” (S32, Female). Students also used chatbots for proofreading and language refinement, seeking help with grammar, syntax, and clarity. One student, for example, requested, “Please check this paragraph for grammatical errors and improve the wording if necessary” (S15, Male).
Less frequently, students sought assistance with refining their writing at the sentence level. Requests for rephrasing and modifying content appeared in 3.50% (SD = 5.63) of the prompt segments. Within this category, students primarily requested sentence-level adjustments to enhance clarity, improve readability, or avoid repetition. For example, one student asked, “Can you rephrase these sentences to make them sound more professional?” (S2, Female). Additionally, some students focused on modifying content for conciseness, ensuring that their writing was more precise and to the point. One such request stated, “The paragraph you wrote here is long. Write it in a length of five sentences” (S7, Female). The final category, translation assistance, accounted for only 2.83% (SD = 9.56) of the prompt segments, making it the least frequent type of request. Within this category, students primarily used chatbots to translate academic text between languages while maintaining an academic tone. One student, for instance, asked, “Can you translate this abstract into English while keeping the academic tone?” (S25, Female).

4.2. Communication Style and Tone Patterns

The analysis of student–chatbot interactions revealed distinct patterns in both communication style and tone. In this study, communication style refers to the structural approach students used to frame their interactions with chatbots, with three primary styles emerging: requesting, questioning, and declarative. In addition, the analysis examined communication tone, which reflects the emotional undertone or sentiment embedded in student interactions with the chatbot. Three main tone categories were identified: praising, reprimanding, and neutral.

4.2.1. Communication Style Patterns

The “requesting” style was the most frequent, appearing in 58% of all prompt segments. This style was characterized by explicit requests and direct appeals for assistance, where students asked the chatbot to generate, modify, or refine content to achieve a specific outcome. The prominence of this style underscores students’ strong reliance on chatbots for direct support in their writing process. Common linguistic patterns included phrases such as “Can you…?” or “I would like you to…”, illustrating the directive nature of these interactions. For instance, one student requested, “Can you rewrite this paragraph in a more formal tone?” (S17, Female).
The “questioning” style was also widely used, accounting for 27% of all prompt segments. This style involved students asking questions to obtain specific information, clarifications, or insights, with the expectation of receiving a direct response from the chatbot. The primary purpose was to inquire and gather knowledge, often reflected in the use of question words and question marks. For instance, students frequently posed queries such as, “Do you think this part is unnecessary?” (S13, Female) or “Can you please help me clarify this argument?” (S10, Male).
Finally, the “declarative” style was the least common, appearing in 15% of all prompt segments. This style involved making statements or providing information or context for the chatbot, and it was used to inform or describe a situation without expecting any specific response or action. For instance, one student stated, “I have already sent you these sentences” (S8, Female), and another declared, “This section is now complete” (S5, Female). These interactions suggest that, although less frequent, the declarative style played a role in structuring chatbot interactions by providing contextual updates.

4.2.2. Communication Tone Patterns

The “neutral” tone was the most frequent, appearing in 66% of all prompt segments. Students using this tone typically issued straightforward instructions or factual statements without expressing positive or negative sentiments. For example, one student wrote, “Can you provide more details on this?” (S3, Female), while another requested, “I need a list of references for this topic” (S4, Female). The “praising” tone was the second most common, accounting for 26% of prompt segments. This tone reflected approval or satisfaction with the chatbot’s responses, often incorporating expressions of gratitude or politeness. For instance, one student responded, “Excellent. This makes it much easier to read” (S3, Female). Prompts in this tone frequently included polite expressions such as “thank you,” “please,” and “I appreciate it,” as seen in requests like, “Thank you, that was very helpful!” (S16, Female) and “I would appreciate it if you could please explain to me the stages of the development of such artificial intelligence tools” (S3, Female).
In contrast, the “reprimanding” tone was the least frequent, appearing in 8% of prompt segments. This tone conveyed dissatisfaction or frustration, typically when students found the chatbot’s responses inadequate, misleading, or incorrect. A common source of frustration was the chatbot’s failure to provide proper citations or verifiable sources. One student explicitly voiced their dissatisfaction, stating, “I want you to give me real articles now, as I asked at the beginning, not ones you made up. Inventing articles is not nice and can mislead students who rely on you. So don’t do this in the future” (S15, Male). Another student similarly reprimanded the chatbot for omitting citations, remarking, “Again, you forgot to mention the source of the articles. Are you tired because we’ve been working on other chapters since this morning?” (S40, Female). This statement not only reflects dissatisfaction but also suggests that the student perceived the chatbot in human-like terms, attributing traits such as fatigue or forgetfulness to its performance.

5. Discussion and Conclusions

The findings of this study provide valuable insights into how graduate students interact with AI chatbots for academic writing assistance. This section discusses the key findings, focusing on the types of assistance students sought and the implications of these interaction patterns.

5.1. Types of Assistance Sought During Student–Chatbot Interactions

The two most prevalent types of assistance sought were content generation and expansion, followed by source integration and verification, underscoring the chatbot’s role in supporting students with text development and literature reviews. The high frequency of prompts requesting new content creation highlights the chatbot’s importance in expanding and refining students’ academic writing. Similarly, students frequently turned to chatbots for locating and integrating academic references, demonstrating a strong focus on strengthening the scholarly foundation of their literature reviews. However, despite their reliance on AI for retrieving references, students remained critically aware of source credibility. Many explicitly requested verifications of source reliability, expressing concerns about the accuracy and trustworthiness of chatbot-generated references. This ongoing need to validate AI-generated sources reflects students’ growing recognition of the limitations of AI in providing verifiable scholarly materials (Khlaif et al., 2023) and reinforces the importance of information literacy skills in AI-assisted academic research (Chan & Hu, 2023).
Beyond content generation and source retrieval, other types of assistance were far less common. Writing consultation, for instance, accounted for approximately 10% of student interactions, involving deliberative discussions with the chatbot to refine ideas and strengthen arguments. These interactions suggest that some students viewed the chatbot as more than just a text generator, using it as a collaborative tool to refine their writing. Similarly, text refinement and formatting comprised another 10% of prompt segments, encompassing tasks such as structuring text, proofreading for grammar and syntax issues, and language polishing. Interestingly, our findings indicate that students were less inclined to use chatbots for these purposes compared to previous research (Abbas et al., 2024; Chan & Hu, 2023; Ding et al., 2023; Gill et al., 2024).
The diversity in chatbot use observed in this study highlights their potential to support multiple aspects of academic writing, ranging from basic tasks to more complex and cognitively demanding processes (Nguyen et al., 2024; Wang, 2024; Zhao et al., 2024). However, the findings suggest that higher-order writing tasks, such as content consultation, brainstorming, and critical synthesis, were significantly less common among students. Few students used chatbots for exploring different perspectives, seeking guidance on argument development, or strategically structuring their writing. This limited engagement suggests that students primarily perceive chatbots as functional aids rather than tools for deeper analytical and creative processes. This finding contrasts to some extent with previous research, which emphasized students’ perceptions of AI chatbots as tools for brainstorming and receiving feedback to enhance writing skills—beyond basic grammar correction (Chan & Hu, 2023). Several factors may explain why students infrequently sought higher-order assistance from chatbots.
One possible explanation is the inherent difficulty and time-consuming nature of writing tasks such as content consultation, brainstorming, and critical synthesis, which require greater cognitive effort and advanced analytical skills—challenges that some students may find daunting (Ferrari, 2001). Additionally, these tasks are often perceived as more time-intensive, leading students to prioritize simpler, more straightforward tasks that can be completed quickly, especially under time constraints. A recent study by Abbas et al. (2024) found that students perceive AI chatbots as valuable shortcuts, significantly reducing the time required to complete academic assignments. When faced with high time pressure, students may feel they lack sufficient time to engage deeply with assignments, prompting them to rely on chatbots for immediate, task-oriented support rather than for complex problem-solving and critical engagement.
Another factor that may contribute to the limited use of chatbots for higher-order writing tasks is students’ confidence and familiarity with AI tools. Engaging in complex cognitive processes using AI requires not only technical proficiency but also the confidence to experiment with AI-driven interactions. Research suggests that comfort and self-efficacy in using technology play a crucial role in how effectively students integrate digital tools into their learning (Teng & Wang, 2021). This is particularly relevant for AI tools, where prior exposure and familiarity influence engagement with more complex functionalities (Chan & Hu, 2023). When students lack confidence in their ability to use AI chatbots for advanced writing tasks, they may be more inclined to rely on them for simpler, lower-order functions, such as source integration and proofreading, rather than for more cognitively demanding activities like argument development or structured brainstorming.
Finally, this issue may be linked to AI literacy, which encompasses an understanding of AI tools, their capabilities, and their limitations, as well as how to interact with them effectively (Ding et al., 2023; Nikolopoulou, 2024; Tlili et al., 2023; Walter, 2024). AI literacy extends beyond basic user proficiency to include knowledge of how GenAI models function, awareness of their biases and constraints, and the ability to critically assess AI-generated content (Kong et al., 2023; Walter, 2024). When students lack a deep understanding of how to optimize chatbot interactions, they may underutilize AI for higher-order cognitive tasks, restricting their engagement with the tool to basic requests rather than leveraging its potential for more complex and strategic writing assistance.

5.2. Students’ Communication Patterns While Interacting with AI Chatbots

The analysis of students’ communication patterns revealed that they predominantly engaged with chatbots through explicit requests for information or assistance, highlighting a strong reliance on AI tools for direct academic support. This directive-based interaction style mirrors how students in synchronous hybrid learning environments perceive their instructors—as guides who provide structured direction and facilitate learning (Usher & Hershkovitz, 2024). Similar to how online instructors are often viewed as “tourist guides” or “bus drivers” who lead students from one point to another, chatbots in academic writing appear to be primarily used as navigational tools rather than as interactive collaborators. The questioning style was the second most prevalent, indicating that students frequently sought clarification and detailed responses from chatbots. This preference for AI-based questioning may stem from the judgment-free nature of chatbot interactions, allowing students to ask questions without fear of criticism—an issue often encountered in traditional classroom settings (Hew et al., 2023; Barak & Usher, 2020; Usher & Barak, 2018). The 24/7 availability of chatbots further reinforces their appeal as convenient, immediate sources of assistance, enabling students to seek support at any time without relying on human instructors.
In contrast, the declarative style, which primarily involved making statements or providing information without expecting a specific response, was the least used. This may be because students primarily seek active engagement and feedback from chatbots to support their learning and writing processes. The more interactive styles of communication, such as requesting and questioning, likely align better with students’ academic needs (Bibi & Atta, 2024), making declarative statements less relevant in this context. Regarding communication tone, the neutral tone was the most commonly used, indicating that a significant portion of interactions lacked explicit emotional expression. This may be attributed to the task-oriented nature of chatbots, which typically lack social features such as sharing, conversation, and empathy (Tlili et al., 2023).
The praising tone, reflecting approval or satisfaction with chatbot responses, was the second most frequent and often included expressions of gratitude or politeness. The reprimanding tone was the least common, suggesting that students rarely expressed dissatisfaction with the chatbot’s responses. Despite their infrequency, reprimanding interactions revealed that students held chatbots accountable for providing reliable and precise information. These instances suggest that students expected AI-generated content to meet academic standards, reinforcing the growing demand for accuracy and credibility in AI-assisted writing. Prior research has highlighted similar concerns, indicating that students increasingly recognize the limitations of AI tools in generating verifiable scholarly content and emphasizing the need for source validation and fact-checking when incorporating AI-generated materials (Ding et al., 2023; Nikolopoulou, 2024; Tlili et al., 2023; Walter, 2024). These findings align with broader discussions on AI literacy, which stress the importance of developing critical evaluation skills to navigate AI-generated outputs effectively (Kong et al., 2023; Walter, 2024).

6. Limitations and Future Research

This study provides valuable insights into how students interact with AI chatbots for academic writing assistance. However, several limitations should be acknowledged. First, this study was conducted with a relatively small sample size of 43 graduate students within a single higher education institution. This limited sample may not fully represent the broader student population across diverse educational contexts. Future research should involve larger and more diverse samples to enhance the generalizability of the findings. Second, longitudinal studies could provide deeper insights into how students’ use of chatbots evolves over time and in various academic contexts. Future research could examine how students’ chatbot usage shifts across different writing phases, particularly whether they engage more with AI for idea generation and conceptual development in earlier stages while relying on it more for refinement and revision in later stages. Third, the analysis focused on students’ prompt-writing profiles but did not assess the overall quality of the final literature reviews produced. Future studies should evaluate the quality and enhancement of students’ written documents, which was beyond the scope of this current work. Fourth, this study examined chatbot interactions in their current form, primarily focusing on task-oriented AI assistance. However, reasoning-focused AI models are emerging, integrating structured reasoning and reflective questioning to foster critical thinking and metacognitive skills. Future research should explore whether the introduction of reasoning-based GenAI tools may influence students’ reliance on AI beyond directive task assistance, potentially encouraging deeper engagement in analytical writing and argument development. Lastly, the ethical implications of using AI chatbots in academic settings were not explored in this study. Issues such as data privacy, academic integrity, and the potential for dependency on AI tools are important considerations for future research (Usher & Barak, 2024). Investigating these ethical dimensions in the context of AI chatbot usage could provide valuable guidelines for the responsible integration of AI in education.

7. Summary and Implications

This study provides a comprehensive overview of how graduate students interact with AI chatbots for academic writing and the types of assistance they seek. The findings indicate that students primarily use chatbots for content generation and expansion, source integration and verification, and concept clarification and definitions. AI chatbots were largely perceived as tools for information retrieval and basic writing support. While these functions are valuable parts of the writing process, students underutilize AI for more cognitively demanding tasks, such as content consultation, brainstorming, and critical synthesis. The limited engagement with higher-order writing processes suggests a need for greater awareness and training on AI’s advanced capabilities.
To fully harness AI’s potential in academic writing support, educators and institutions must provide structured training and ongoing support to help students develop AI literacy and confidence in using these tools. For effective integration into academic programs, educators should ensure that students understand the full range of chatbot functionalities and how to apply them strategically in their writing processes. Initial training sessions, coupled with continuous guidance, can help students develop the skills necessary to navigate AI-generated content critically, refine arguments, and synthesize diverse perspectives. By fostering a deeper understanding of AI technologies, institutions can empower students to use chatbots more effectively for complex academic tasks, ultimately enhancing their writing proficiency. This study contributes to the growing body of research on AI in higher education by providing a nuanced understanding of how students engage with chatbots for academic writing. Recognizing the diverse ways students interact with AI tools allows educators and developers to design more effective interventions that support students’ academic writing needs, foster higher-order cognitive engagement, and promote more effective and engaging learning experiences.

Author Contributions

Conceptualization, M.U. and M.A.; methodology, M.U. and M.A.; validation, M.U. and M.A.; formal analysis, M.U. and M.A.; investigation, M.U. and M.A.; data curation, M.U. and M.A.; writing—original draft preparation, M.U.; writing—review and editing, M.U. and M.A.; visualization, M.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Holon Institute of Technology (date of approval 31 January 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to ethical reasons.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
GenAI: Generative Artificial Intelligence

References

  1. Abbas, M., Jam, F. A., & Khan, T. I. (2024). Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. International Journal of Educational Technology in Higher Education, 21, 10. [Google Scholar] [CrossRef]
  2. Barak, M., & Usher, M. (2020). Innovation in a MOOC: Project-based learning in the international context. In J. J. Mintzes & E. M. Walter (Eds.), Active learning in college science. Springer. [Google Scholar] [CrossRef]
  3. Barrett, A., & Pack, A. (2023). Not quite eye to AI: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(1), 59. [Google Scholar] [CrossRef]
  4. Belda-Medina, J., & Kokošková, V. (2023). Integrating chatbots in education: Insights from the Chatbot-Human Interaction Satisfaction Model (CHISM). International Journal of Educational Technology in Higher Education, 20(1), 62. [Google Scholar] [CrossRef]
  5. Bibi, Z., & Atta, A. (2024). The role of ChatGPT as AI English writing assistant: A study of student’s perceptions, experiences, and satisfaction. Annals of Human and Social Sciences, 5(1), 433–443. [Google Scholar]
  6. Bogina, V., Hartman, A., Kuflik, T., & Shulner-Tal, A. (2022). Educating software and AI stakeholders about algorithmic fairness, accountability, transparency and ethics. International Journal of Artificial Intelligence Education, 32, 808–833. [Google Scholar] [CrossRef]
  7. Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21, 4. [Google Scholar] [CrossRef]
  8. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43. [Google Scholar] [CrossRef]
  9. Cummings, R. E., Monroe, S. M., & Watkins, M. (2024). Generative AI in first-year writing: An early analysis of affordances, limitations, and a framework for the future. Computers and Composition, 71, 102827. [Google Scholar] [CrossRef]
  10. Deng, X., & Yu, Z. (2023). A meta-analysis and systematic review of the effect of chatbot technology use in sustainable education. Sustainability, 15(4), 2940. [Google Scholar] [CrossRef]
  11. Ding, L., Li, T., Jiang, S., & Gapud, A. (2023). Students’ perceptions of using ChatGPT in a physics class as a virtual tutor. International Journal of Educational Technology in Higher Education, 20(1), 63. [Google Scholar] [CrossRef]
  12. Essel, H. B., Vlachopoulos, D., Tachie-Menson, A., Johnson, E. E., & Baah, P. K. (2022). The impact of a virtual teaching assistant (chatbot) on students’ learning in Ghanaian higher education. International Journal of Educational Technology in Higher Education, 19, 57. [Google Scholar] [CrossRef]
  13. Ferrari, J. R. (2001). Procrastination as self-regulation failure of performance: Effects of cognitive load, self-awareness, and time limits on ‘working best under pressure’. European Journal of Personality, 15(5), 391–406. [Google Scholar] [CrossRef]
  14. Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and competence. Computers in Human Behavior, 93, 279–289. [Google Scholar] [CrossRef]
  15. Gill, S. S., Xu, M., Patros, P., Wu, H., Kaur, R., Kaur, K., Fuller, S., Singh, M., Arora, P., Parlikad, A. K., & Stankovski, V. (2024). Transformative effects of ChatGPT on modern education: Emerging Era of AI Chatbots. Internet of Things and Cyber-Physical Systems, 4, 19–23. [Google Scholar] [CrossRef]
  16. Graham, S. (2018). Introduction to conceptualizing writing. Educational Psychologist, 53(4), 217–219. [Google Scholar] [CrossRef]
  17. Gupta, S., Jaiswal, A., Paramasivam, A., & Kotecha, J. (2022). Academic writing challenges and supports: Perspectives of international doctoral students and their supervisors. Frontiers in Education, 7, 891534. [Google Scholar] [CrossRef]
  18. Hew, K. F., Huang, W., Du, J., & Jia, C. (2023). Using chatbots to support student goal setting and social presence in fully online activities: Learner engagement and perceptions. Journal of Computing in Higher Education, 35(1), 40–68. [Google Scholar] [CrossRef]
  19. Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. [Google Scholar] [CrossRef]
  20. Jen, S. L., & Salam, A. R. H. (2024). A systematic review on the use of artificial intelligence in writing. International Journal of Academic Research in Progressive Education and Development, 13(1), 1819–1829. [Google Scholar] [CrossRef]
  21. Khlaif, Z. N., Mousa, A., Hattab, M. K., Itmazi, J., Hassan, A. A., Sanmugam, M., & Ayyoub, A. (2023). The potential and concerns of using AI in scientific research: ChatGPT performance evaluation. JMIR Medical Education, 14(9), e47049. [Google Scholar] [CrossRef]
  22. Kong, S. C., Cheung, W. M. Y., & Zhang, G. (2023). Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness. Educational Technology & Society, 26(1), 16–30. [Google Scholar]
  23. Kumar, R., & Mindzak, M. (2024). Who wrote this? Detecting artificial intelligence–generated text from human-written text. Canadian Perspectives on Academic Integrity, 7(1). [Google Scholar] [CrossRef]
  24. Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Education Sciences, 14(5), 503. [Google Scholar] [CrossRef]
  25. Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: Systematic literature review. International Journal of Educational Technology in Higher Education, 20, 56. [Google Scholar] [CrossRef]
  26. Lavelle, E., & Bushrow, K. (2007). Writing approaches of graduate students. Educational Psychology, 27(6), 807–822. [Google Scholar] [CrossRef]
  27. Lee, Y. F., Hwang, G. J., & Chen, P. Y. (2022). Impacts of an AI-based chatbot on college students’ after-class review, academic performance, self-efficacy, learning attitude, and motivation. Educational Technology Research and Development, 70(5), 1843–1865. [Google Scholar] [CrossRef]
  28. Lin, M. P.-C., & Chang, D. (2020). Enhancing post-secondary writers’ writing skills with a chatbot: A mixed-method classroom study. Journal of Educational Technology & Society, 23(1), 78–92. [Google Scholar]
  29. Luckin, R., Cukurova, M., Kent, C., & du Boulay, B. (2022). Empowering educators to be AI-ready. Computers and Education: Artificial Intelligence, 3, 100076. [Google Scholar] [CrossRef]
  30. Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson. [Google Scholar]
  31. Mogavi, R. H., Deng, C., Kim, J. J., Zhou, P., Kwon, Y. D., Metwally, A. H. S., Tlili, A., Bassanelli, S., Bucchiarone, A., Gujar, S., Nacke, L. E., & Hui, P. (2024). ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Computers in Human Behavior: Artificial Humans, 2(1), 100027. [Google Scholar] [CrossRef]
  32. Nam, B. H., & Bai, Q. (2023). ChatGPT and its ethical implications for STEM research and higher education: A media discourse analysis. International Journal of STEM Education, 10, 66. [Google Scholar] [CrossRef]
  33. Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education, 49, 847–864. [Google Scholar] [CrossRef]
  34. Nikolopoulou, K. (2024). Generative artificial intelligence in higher education: Exploring ways of harnessing pedagogical practices with the assistance of ChatGPT. International Journal of Changes in Education, 1(2), 103–111. [Google Scholar] [CrossRef]
  35. Okonkwo, C. W., & Ade-Ibijola, A. O. (2021). Chatbots applications in education: A systematic review. Computers & Education: Artificial Intelligence, 2, 100033. [Google Scholar] [CrossRef]
  36. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI. [Google Scholar]
  37. Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805–38818. [Google Scholar] [CrossRef]
  38. Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chatbots for the Facebook messenger. Computers & Education, 151, 103862. [Google Scholar] [CrossRef]
  39. Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. [Google Scholar] [CrossRef]
  40. Teng, Y., & Wang, X. (2021). The effect of two educational technology tools on student engagement in Chinese EFL courses. International Journal of Educational Technology in Higher Education, 18, 27. [Google Scholar] [CrossRef]
  41. Tlili, A., Shehata, B., Adarkwah, M. A., Bozkurt, A., Hickey, D. T., Huang, R., & Agyemang, B. (2023). What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learning Environments, 10(1), 15. [Google Scholar] [CrossRef]
  42. Usher, M., & Barak, M. (2018). Peer assessment in a project-based engineering course: Comparing between on-campus and online learning environments. Assessment & Evaluation in Higher Education, 43(5), 745–759. [Google Scholar] [CrossRef]
  43. Usher, M., & Barak, M. (2024). Unpacking the role of AI ethics online education for science and engineering students. International Journal of STEM Education, 11, 35. [Google Scholar] [CrossRef]
  44. Usher, M., & Hershkovitz, A. (2024). From guides to jugglers, from audience to outsiders: A metaphor analysis of synchronous hybrid learning. Learning Environments Research, 27, 1–16. [Google Scholar] [CrossRef] [PubMed]
  45. Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 15. [Google Scholar] [CrossRef]
  46. Wang, C. (2024). Exploring students’ generative AI-assisted writing processes: Perceptions and experiences from native and nonnative English speakers. Technology, Knowledge and Learning, 1–22. [Google Scholar] [CrossRef]
  47. Zawacki-Richter, O., Marin, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—Where are the educators? International Journal of Educational Technology in Higher Education, 16(39), 1–27. [Google Scholar] [CrossRef]
  48. Zhao, X., Cox, A., & Cai, L. (2024). ChatGPT and the digitisation of writing. Humanities and Social Sciences Communications, 11(1), 1–9. [Google Scholar] [CrossRef]
Table 1. Types of assistance sought by students during student–chatbot interactions.

1. Content generation and expansion: Requesting the creation of new content, such as generating paragraphs or sections on specified topics; expanding on existing content, whether student-generated or chatbot-generated.
2. Source integration and verification: Seeking relevant academic sources or assistance in writing reference lists; integrating academic sources into existing texts; checking the accuracy and reliability of sources.
3. Concept clarification and definitions: Seeking explanations for general questions or content generated by the chatbot; requesting definitions of concepts, terms, models, or theories.
4. Writing consultation: Engaging in deliberative discussions with the chatbot to refine ideas and strengthen arguments; seeking chatbot-generated suggestions to enhance content quality.
5. Text refinement and formatting: Organizing the text's overall structure and ensuring formal correctness, including structuring paragraphs and sections; proofreading for grammar and syntax, and polishing language to enhance readability.
6. Rephrasing and modifying content: Requesting rephrasing of existing content to enhance clarity and avoid repetition; requesting modification of existing content to make it more concise and focused.
7. Translation assistance: Translating text from one language to another, ensuring appropriate meaning and context are maintained.