Article

A Model to Develop Chatbots for Assisting the Teaching and Learning Process

by Sonia Mendoza 1, Luis Martín Sánchez-Adame 1,*, José Fidel Urquiza-Yllescas 1, Beatriz A. González-Beltrán 2 and Dominique Decouchant 3

1 Computer Science Department, CINVESTAV-IPN, Mexico City 07360, Mexico
2 Systems Department, UAM-Azcapotzalco, Mexico City 02200, Mexico
3 Information Technologies Department, UAM-Cuajimalpa, Mexico City 05348, Mexico
* Author to whom correspondence should be addressed.
Sensors 2022, 22(15), 5532; https://doi.org/10.3390/s22155532
Submission received: 6 June 2022 / Revised: 19 July 2022 / Accepted: 20 July 2022 / Published: 25 July 2022
(This article belongs to the Special Issue Smart Educational Systems: Hardware and Software Aspects)

Abstract

Recently, in the commercial and entertainment sectors, we have seen increasing interest in incorporating chatbots into websites and apps in order to assist customers and clients. In the academic area, chatbots are useful for providing guidance and information about courses, admission processes and procedures, study programs, and scholarly services. However, these virtual assistants have limited mechanisms to suitably support the teaching and learning process, considering that such mechanisms should benefit all the people involved. In this article, we design a model for developing a chatbot that serves as an extra-school tool to carry out academic and administrative tasks and to facilitate communication between middle-school students and academic staff (e.g., teachers, social workers, psychologists, and pedagogues). Our approach is designed to help less tech-savvy people by offering them a familiar environment, using a conversational agent to ease and guide their interactions. The proposed model has been validated by implementing a multi-platform chatbot that provides both text-based and voice-based communication and uses state-of-the-art technology. The chatbot was tested with the help of students and teachers from a Mexican middle school, and the evaluation results show that our prototype received positive usability and user experience ratings from these end-users.

1. Introduction

Chatbots are computer software systems that use natural language processing to assist humans in activities of various kinds [1]. These systems usually perform searches for keywords, phrases, examples, and patterns identified in their knowledge bases and translate them into queries. As a result, chatbots provide people with information concerning products, places, services, and events in online sales services and social networks [2].
These conversational agents have been implemented in multiple sectors, e.g., entertainment [3] and e-commerce [4], to carry out several tasks, e.g., providing recommendations [5], responding to FAQs [2], and guiding procedures [6]. Insider Intelligence predicts that, by 2024, consumer retail companies will spend $142 billion on chatbots worldwide, up from just $2.8 billion in 2019 [7]. In fact, the COVID-19 pandemic has shown that this expectation is rapidly coming true. For instance, big enterprises have had to improvise basic chatbots on their websites or through WhatsApp in order to deal with several problems arising from the unforeseen increase in online sales [8].
Thanks to recent technological advances, schools and universities around the world are progressively investing in educational software systems, which are not meant to replace teachers but to supply useful tools that allow students to achieve a better academic education [9,10,11]. Now more than ever, due to the current global health crisis, students receive an essential part of their preparation via online information, e.g., homework, class topics, and exercises. Therefore, the education sector should not be ignored, since providing timely and accurate feedback is crucial for successful school achievement [12].
In particular, the use and development of chatbots has begun to attract academic institutions [13,14], since chatbots can be a valuable aid for both students and teachers in obtaining and providing information about procedures, school services, and courses, among other things [15,16]. Nevertheless, most of these chatbots have limited mechanisms to adequately support the learning and teaching process, as they must be beneficial for both students and academic staff (e.g., teachers, pedagogues, psychologists, and social workers).
A major problem that educational software faces is digital illiteracy. As already mentioned, the pandemic not only caused certain technological tools to boom, but also exposed how many people lack the expertise to use them, in whole or in part [17]. Thus, many teachers are not sufficiently trained or confident enough to use digital tools that can help improve the teaching and learning process [18,19]. This has to do not only with their age, but also with their socio-economic level, their educational level, and the culture that surrounds them [20]. Mexican teachers are no exception in this respect: when forced to use the Internet as the main medium for teaching their classes, many were severely constrained, especially those in rural areas with limited resources [21]. Thus, the inherent problems of educational software are compounded by those of digital illiteracy, and chatbots have emerged as one way of bridging this gap [22].
In this article, we describe a novel model to develop chatbots that serve as extra-school tools. The proposed model defines key components that allow a resulting chatbot to act as an intermediary between students and the personnel involved in the educational process. The main functionalities provided by such a chatbot include facilitating communication, completing partial information, issuing reminders, advising students, monitoring events or situations, providing useful information, and directing students’ questions and doubts to the appropriate person.
Our model also defines several roles for interaction with a chatbot (e.g., teacher, student, and administrative staff); each one accomplishes particular tasks in the teaching and learning process. For example, users who play the student role can receive tips for their classes, plus reminders of project deadlines and exam dates. Users in the teacher role can receive questions from students and recommend exercises and supplementary material to reinforce particular topics. To provide an easier means of interaction for less tech-savvy people, our model defines components with voice and text user interfaces.
To validate the proposed model, we have designed and implemented a chatbot as a multi-platform application, using state-of-the-art technologies. The resulting chatbot was tested by students and teachers from a Mexican middle school, who focused on evaluating the usability and user experience (UX) of this prototype. In general, our results show that the chatbot has obtained favorable ratings.
This article is organized in the following way. After analyzing related work (see Section 2), we explain the background of the proposal presented here (see Section 3). Then, we describe our model for the development of chatbots as support tools for the teaching and learning process in middle schools (see Section 4). Afterwards, we briefly describe the implementation of a chatbot following our model (see Section 5). Next, we present the assessments conducted, their results, and a discussion about the end-user evaluations of the chatbot (see Section 6). Finally, we conclude the work carried out and give some ideas for improving it (see Section 7).

2. Related Work

The related work is organized into two categories: School Service-Oriented and Student/Teacher-Oriented. These categories differ in the granularity of the information the chatbot can receive and send. The School Service-Oriented category includes chatbots focused on more general tasks, since the target users are typically members of the general public searching for basic information. Conversely, the Student/Teacher-Oriented category encompasses more specialized chatbots that perform more precise tasks and satisfy the need for more personalized interactions.
The School Service-Oriented category includes chatbots that answer frequently asked questions (FAQs) or provide users with general information, such as educational offers, fees, procedures, processes, and schedules. Chatbots within this category are helpful for academic institutions, since they provide an automatic service to both internal and external users who require administrative and academic information. The major advantages of this kind of chatbot are constant availability, decreased workload for the staff, simultaneous attention to multiple users, and accessibility from any computing device. Within this category, we identify four features:
  • Information: It refers to information, such as educational offers, staff contact data, and study plans, needed by people interested in becoming a student of the institution. Examples of chatbots that provide this kind of information are: CiSA [23], EASElective [24], KEMTbot [25], LiSA [26], TutorDocente [27], E-orientation [28], FIT-EBot [29], Mekni et al. [30], UMT-BOT [31], and Ranoliya et al. [2].
  • FAQ: It concerns questions and answers that people commonly ask. This characteristic also exists in the Student/Teacher-Oriented category. Works like CiSA [23], TutorDocente [27], FIT-EBot [29], Mekni et al. [30], Ranoliya et al. [2], DINA [6], and Lee et al. [32] are examples of chatbots that include FAQs.
  • Procedures and processes: It is a guide for students to carry out administrative procedures, e.g., how to enroll in a class or what requirements are needed to get certified, and even the steps of the admission process. This characteristic is also present in the Student/Teacher-Oriented category. Some examples of these chatbots are: TutorDocente [27] and UMT-BOT [31].
  • Schedule: It is related to specific information on academic activities, such as events, calls, and evaluations. The chatbot EASElective [24] can be mentioned as a representative of this characteristic.
The Student/Teacher-Oriented category includes chatbots that interact with both students and teachers. From this category, we can find seven features:
  • Evaluation: It refers to evaluation instruments for students, e.g., homework, quizzes, exams, essays, and practices. Examples of chatbots with this feature are: KEMTbot [25], CHARLIE [33], Lecturer’s Apprentice [34], T-Bot/Q-Bot [35], NLAST [36], Tribubot [37], Bigham et al. [38], LTKA-Bot [39], QuizBot [40], and Ikastenbot [41].
  • Feedback: The system provides students with feedback about their progress in class. As examples, we can mention Lecturer’s Apprentice [34], T-Bot/Q-Bot [35], NLAST [36], Tribubot [37], LTKA-Bot [39], QuizBot [40], Ikastenbot [41], Gómez Róspide and Puente [42], Chatbot [43], and Nguyen et al. [44].
  • Q&A: People can ask specific questions to the chatbot, which can supply concrete context-based answers. Chatbots that exemplify this characteristic are: Lecturer’s Apprentice [34], T-Bot/Q-Bot [35], NLAST [36], Tribubot [37], LTKA-Bot [39], QuizBot [40], Gómez Róspide and Puente [42], Chatbot [43], Nguyen et al. [44], Bala et al. [45], Infobot [46], Dutta [47], Niranjan et al. [48], Reyes et al. [49], and Sreelakshmi et al. [50].
  • Reports: The system provides teachers with details about the academic progress of their students. To illustrate this feature, we mention Ikastenbot [41].
  • Subjects: In this case, the system can interact with students about the classes they have registered for. TutorDocente [27], Lecturer’s Apprentice [34], T-Bot/Q-Bot [35], Tribubot [37], LTKA-Bot [39], Gómez Róspide and Puente [42], Chatbot [43], and Infobot [46] are chatbots that allow conversations of this type.
  • Support: It provides students with some type of assistance, e.g., how to use laboratory equipment. As a representative of this feature, we can name TutorDocente [27].
  • Tutorships: It is about clarifying doubts about specific topics by providing students with some form of orientation, e.g., the complexity of binary search algorithms. Among these chatbots, we can mention TutorDocente [27], Lecturer’s Apprentice [34], T-Bot/Q-Bot [35], NLAST [36], Tribubot [37], LTKA-Bot [39], QuizBot [40], Gómez Róspide and Puente [42], Chatbot [43], Infobot [46], Doly [51], and CultureBot [52].
Most chatbots fall into only one category, but in many we can detect more than one feature. This is natural, as a complete solution must consider multiple functions to become truly valuable. From our review, only KEMTbot [25] and TutorDocente [27] fall simultaneously within the School Service-Oriented and Student/Teacher-Oriented categories. This trend is due to the technological development of chatbots; i.e., as new frameworks, libraries, and other implementation elements emerge, it becomes easier to develop feature-rich chatbots, leaving behind agents that simply serve as “answering machines” (like the majority in the School Service-Oriented category). However, this also presents a problem, as it is possible to launch a chatbot full of features without any of them actually working well.
In this way, academic environments truly represent a challenge for these types of systems, as they should be conceived as tools that help all those involved in the teaching and learning process, making their work easier and supporting them with their problems. To achieve this goal, all the implemented features must work together in harmony, as we intend in our proposal.

3. Background

In this section, we summarize the background of the proposal described in this article, which is intended to be a generic solution [53].
In our previous work, we used the Google Design Sprint technique [54] to reach a feasible solution quickly. This technique is a way to create valuable products, since the results are not just aesthetically pleasing and usable; the technique also fosters a way of thinking and a shift in skills [55].
The main goal of our former work was to develop a software tool intended for schools. Thus, students can have extra support for their courses, a source of information about school policies, and an alternative form of interaction among end-users. To meet this objective, our university established a collaboration agreement with a Mexican middle school within the framework of a particular project in charge of these institutions.
We approached this technological development as follows. Firstly, we spent some days becoming acquainted with the most typical school processes and procedures. We also conducted interviews with the main actors. As a result of these activities, we designed three personas who portray the critical roles that end-users of the software tool can play: student, teacher, and administrative staff.
To find the characteristics and requirements of each user role [56,57], we refined our interviews and then applied this knowledge to each persona in user stories format [58]. Then, using the personas and their user stories, we discussed several scenarios.
It should be noted that, as part of our interviews, we learned that the school had already tried to implement a Moodle-based system in 2015. The adoption of this system was an initiative of some teachers, and it was used for a few months. However, this endeavor was not successful for two reasons: (1) there was no technical manager with sufficient expertise to maintain the system and add options as needed, so it became obsolete; and (2) most teachers participated very little. Although they were open to the idea of integrating a digital tool into their classes, more than 60% described themselves as digitally illiterate, as their experience was limited to basic tasks such as surfing the Internet (e.g., consulting Wikipedia) or using a social networking application on their smartphones.
Thus, we had a scenario with several constraints. The school needed a system that was low-maintenance, attractive, and simple enough for everyone involved to use and be motivated to do so. It was clear that making a traditional, entirely form-based application was not a viable option, as it would require staff training and would be too costly for a public school to develop and maintain. In addition, because of the previously failed implementation, we faced extra reluctance from teachers, which would not help the acceptance of the new tool [59]. Therefore, drawing on the requirements and characteristics of our personas, we decided that a likely way forward would be to develop a chatbot, as all stakeholders were familiar with applications such as WhatsApp. Additionally, this kind of system has already been successfully implemented in other educational contexts [60,61].
Once the chatbot solution was chosen, a rapid implementation was carried out in order to determine to what extent this solution satisfied end-users’ requirements, to outline what developing the chatbot entirely would involve, and to assess the pertinence of the selected technology. If we had opted, for example, for non-functional prototypes, perhaps the tests would not have been conclusive about whether a chatbot was a suitable solution.
Finally, the first tests with end-users were conducted. We assessed the workload perceived by the participants after they had performed some tasks. Eight individuals from the middle school were gathered: two teachers (a 36-year-old man and a 28-year-old woman) and six students (three female and three male, between 14 and 15 years old) who had not used a chatbot before. To perform our tests, we first let them interact with the chatbot’s user interface for a few minutes, in order to become familiar with the different widgets and their functions. Afterwards, we asked them to complete specific tasks according to their roles. As our results showed, the chatbot-based solution seems to be a good option: anyone can use it, many academic processes can be systematized, it is always available, and it allows end-users to participate according to their roles.
Although these incipient results demonstrated that a chatbot could be a useful tool in our scenario, they also revealed that we needed a much more robust interaction design scheme, as text-only or voice-only interaction can be confusing or tiresome in some respects, e.g., it is more prone to input errors [62]. We needed a hybrid tool: a conversational agent flexible enough to help end-users without being costly to develop, plus mechanisms that mirror everyday tasks in familiar environments.

4. A Model for Educational Chatbots

In this section, we describe the main components of a novel model for developing chatbots that support the teaching and learning process in Mexican middle schools.
As shown in Figure 1, our model provides developers with a user interface component, which offers two possible interaction modes: voice-based and text-based. In this way, end-users can choose their preferred mode to express themselves and communicate with other end-users or with the conversational agent.
End-users interact with the components of the model according to a role. As mentioned before, we identified three primary roles (student, teacher, and administrative staff), so we defined a component that manages the specific activities of each role. In this way, by adding new roles or removing those that are no longer useful, our model can be easily maintained.
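To illustrate this modularity, the following TypeScript sketch shows one possible way to organize pluggable role components so that roles can be registered or removed independently. The interface and names are our own assumptions for illustration, not taken from the implementation described in Section 5.

```
// Hypothetical sketch of pluggable role components; names and shapes
// are illustrative assumptions, not the published implementation.
interface RoleComponent {
  role: "student" | "teacher" | "administrative-staff";
  activities: string[]; // activities this role may perform
  handle(activity: string, payload: unknown): Promise<string>;
}

class RoleRegistry {
  private components = new Map<string, RoleComponent>();

  // Adding a new role does not affect the existing ones.
  register(component: RoleComponent): void {
    this.components.set(component.role, component);
  }

  // A role that is no longer useful can simply be removed.
  unregister(role: string): void {
    this.components.delete(role);
  }

  async dispatch(role: string, activity: string, payload: unknown): Promise<string> {
    const component = this.components.get(role);
    if (!component || !component.activities.includes(activity)) {
      throw new Error(`Activity "${activity}" is not available for role "${role}"`);
    }
    return component.handle(activity, payload);
  }
}
```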
The teacher role component aims to allow teachers to organize their activities in and out of classes, to help them communicate better with their students, and to improve their classes and their work with the administrative staff. Through this component, teachers are able to visualize all their information classified by group and grade, upload extra material for their classes, see the questions and doubts of their students, post notices and reminders for their students, and note the students’ strengths and weaknesses.
The student role component intends to help students communicate with their teachers, improve their learning, and share their problems and concerns with their teachers or other members of the academic staff, e.g., the psychologist, pedagogue, or social worker. Via this component, students can send their homework to their teachers; know the schedule of their classes, exams, and other important events; see extra material and presentations of the topics covered in class; send messages to their teachers; and check their class grades.
The administrative staff role component aims to allow the psychologist, pedagogue, or social worker to get closer to the students, providing the required attention to those who need it most. Through this component, the administrative staff can send reminders about important dates (e.g., enrolment), upload forms for school procedures, and answer questions and doubts of the students.
Our model also provides students with additional instructional materials suggested by their teachers. In this way, on behalf of the teacher, the conversational agent can provide students with information about entire class topics, answer FAQs about crucial school processes and procedures, send automatic event reminders, guide students through each step of their administrative work, direct students to the correct office and responsible person for each organizational task, and hold conversations on class topics and personal matters.
The resource manager and calendar components manage places, dates, and times for scheduling academic or sporting events.
The conversational agent component helps users carry out their activities by relying on natural language processing to ease and guide their interactions with the components of the model. Moreover, since we designed a knowledge base per functionality, developers can add, modify, or delete a knowledge base without affecting the context of the others.
The remaining components are responsible for managing teacher, student, and administrative staff profiles; groups; courses; schedules; and important information exchanged among end-users, e.g., questions, answers, events, reminders, extra-class material, messages, and homework.
In the next subsections, we detail these critical components of our model.

4.1. Modeling User Roles

According to the Mexican middle-school educational system (which also applies to other systems, such as the French one), a teacher teaches at least one course, e.g., biology, mathematics, or history. Although more than one teacher can teach the same course, a course is taught to a group of students by a single teacher (e.g., Ms. Toklas and Mr. Robinson teach Spanish for groups 1A and 1B, respectively).
At least two students form a group, but one student is a member of only one group. A group of students has to take multiple courses during a school year. Therefore, a course is identified thanks to the course name, the group, and grade (e.g., English for group A in the first year). In some schools, students stay in the same classroom to take all of their courses, whereas in other schools, students have to move between different classrooms to take their courses. Since a classroom is a resource that cannot be shared between two groups of students, our model also considers this natural constraint. Thus, each course is assigned a single time and place (and vice versa) to cover both cases: mobile or stationary groups. It is worth mentioning that this property is common to all the components of our model to avoid sharing a resource that is intrinsically exclusive. In this way, the calendar and resource manager components that allow the user roles to schedule time and places can alert them to these types of issues.
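As a rough illustration of these constraints, the following sketch (our own simplification, not the project’s actual data model) identifies a course by its name, group, and grade, and treats each (classroom, weekday, hour) slot as an exclusive resource:

```
// Simplified sketch of the scheduling constraints (illustrative only).
interface Course {
  name: string;      // e.g., "English"
  group: string;     // e.g., "A"
  grade: number;     // e.g., 1 (first year)
  teacherId: string; // a course is taught to a group by a single teacher
}

interface TimeSlot {
  classroom: string;
  weekday: number; // 0 = Monday ... 4 = Friday
  hour: number;    // start hour, 24-hour clock
}

// A classroom cannot be shared by two groups at the same time, so each
// occupied (classroom, weekday, hour) slot maps to exactly one course.
const schedule = new Map<string, Course>();
const slotKey = (s: TimeSlot) => `${s.classroom}|${s.weekday}|${s.hour}`;

function assign(course: Course, slot: TimeSlot): boolean {
  if (schedule.has(slotKey(slot))) return false; // conflict: alert the user
  schedule.set(slotKey(slot), course);
  return true;
}
```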
Figure 2 shows the high-level model for the teacher and student roles, which can act as both information producers and consumers. In the next subsections, we describe the user models of our proposal following the producer/consumer approach for each role identified in our former study.

4.1.1. The Teacher Role

As an information producer, a teacher can:
  • Create multiple extra-class materials for one or more courses. An extra-class material can even be the result of the collaborative work of several teachers. In this way, a single course can have one or more associated extra-class materials to reinforce topics covered in class. An extra-class material can be formed by links to websites, files in the cloud, and YouTube videos. To facilitate reutilization, all these elements of information can be shared among several extra-class materials.
  • Generate one or more event announcements about an exam, homework, or a class project. Each event announcement has a place and time associated with it to indicate where and when the event will take place. In general, an event announcement is related to a course, which can have multiple events specified.
  • Define a reminder in the context of an event announcement. In fact, an event announcement can have one or several reminders, which can be issued automatically according to a periodicity determined by the teacher (e.g., each week until 24 h before the deadline). Typically, a reminder is associated with a single event. However, several event announcements can be concentrated in just one reminder, as long as the deadlines and periodicities of said events are similar or relatively close and the recipients are the same.
As an information consumer, a teacher can:
  • Accept pieces of homework for one or more courses. Typically, a piece of homework is assigned to a specific teacher who is responsible for reviewing and grading it. It is important to point out that the conversational agent takes care of informing the correct teacher that homework is ready, once the creator (i.e., a student or a group) has established that it is in the “completed” state.
  • Receive statistical reports of student performance for the teacher’s courses.

4.1.2. The Student Role

As information producer, a student can:
  • Create multiple pieces of homework for one or more courses. A piece of homework can be the result of work done by an individual or by two or more students. A course may require the student to complete at least one piece of homework. Like extra-class material, a piece of homework can contain one or more links to websites, files in the cloud, and YouTube videos.
As information consumer, a student can:
  • Accept extra-class materials for one or more courses. An extra-class material is intended for a single student, a specific group of students (e.g., topic “Usage of Passive Voice in English” for all or part of group C in the second year) or even some groups of students (e.g., groups A, B, and C in the second year). It is important to mention that the conversational agent takes care of informing the correct students that an extra-class material is ready, once the creator (i.e., a teacher or a group) has established that it is in the “completed” state.
  • Receive an announcement or a periodic reminder of an academic, administrative, or athletic event. In general, an event announcement and its reminders are intended for an entire group of students, all or some groups of the same grade, specific groups of different grades, or even all the student community of the institution.

4.1.3. The Administrative Staff Role

Within this role, several subroles can be derived, since the pedagogical team does not consist solely of teachers. Other actors play critical roles in the successful academic performance of students. Thus, we identify the following subroles: social worker, psychologist, pedagogue, and system administrator.
As an information producer, an administrative staff member can:
  • Manage (register, update, and delete) user profiles depending on his/her role (system administrator subrole).
  • Manage groups and courses (system administrator subrole).
  • Generate one or more event announcements about conferences, sports, registrations, etc. Unlike event announcements created by teachers, which are anchored to a course, these events have broader coverage, since they are intended for several groups or even for all the students of the school. These are events that take place in the auditorium, in the gym, or in the school yard, so they are also associated with respective places, dates, and times. As mentioned above, the resource manager and calendar components of our model are responsible for ensuring that the selected places, dates, and times do not conflict with those of previously defined events (all subroles).
  • Define a reminder in the context of an event announcement. A reminder created by an administrative staff member has the same characteristics of a reminder created by a teacher (all subroles).
As information consumer, an administrative staff member can:
  • Receive alerts from the conversational agent about a student with emotional problems; e.g., as a result of a conversation with the student, the conversational agent can direct them to the psychologist or social worker, with prior consent of the student (psychologist and social worker subroles).
  • Receive alerts from the conversational agent about a student with school achievement problems; e.g., as a result of a conversation with the student and the analysis of their grades, the conversational agent can inform the pedagogue or social worker about it (pedagogue and social worker subroles).
  • Receive statistical reports of student performance per student or group of one or all courses (pedagogue and social worker subroles).

4.1.4. Common to All Roles

As information producer, a user with any role can:
  • Send voice or text messages to an individual or a group within any of the supported roles. A message may or may not be in the context of a course, but a course may be involved in none, one, or more messages.
  • Ask voice or text questions.
  • Create voice or text answers to questions.
A question and an answer have the same characteristics as a message, except that an answer is always anchored to a question, whereas a question may go unanswered. In addition, a question may have several answers. It is important to note that when a question has not been answered for a while, the conversational agent asks the corresponding person to answer it, discard it, or even report it.
As information consumer, a user with any role can:
  • Receive one or more messages.
  • Accept questions and answers.

4.2. Modeling Conversational Agent Functionality

As a proactive component, the conversational agent can take the initiative to perform certain actions in order to solve a problem it has detected. For instance, as shown in Figure 3, the conversational agent can determine that a student has poor performance in one or more courses from the analysis of their exam scores. Thus, it can start a dialogue with that student when detecting that he/she is active, in order to obtain more useful information (e.g., some causes of their poor performance) and encourage the student.
On the other hand, the conversational agent can also talk to a teacher about comprehension problems on a particular topic that one or more students have explicitly raised or that it has deduced from exam scores. Moreover, if these academic problems persist for a period of time, the conversational agent can put the student in contact with the pedagogue (or social worker) or only alert the latter about these issues, so that the pedagogue can take actions that help the teacher and their students achieve a better teaching and learning process.
In a similar way, when a student has poor performance, the conversational agent can also try to talk with him/her to see whether he/she is emotionally well, in case the student has not previously taken the initiative to talk to it about this matter. If, as a result of said talk, the conversational agent manages to infer (by identifying keywords in the dialogue) that the student is a victim of some type of violence at home or at school, it can put the student in contact with the psychologist (or social worker) or only alert the latter about these problems, so that the psychologist can intervene to help the student.
Furthermore, the conversational agent can take the initiative to provide a teacher with student performance statistics when it notices that the teacher has not requested them for a while. It is important to mention that these statistical reports are created automatically. The conversational agent can also send these performance reports to the pedagogue (or social worker) so that he/she is informed of the progress of each group of students in the different courses and can act accordingly.
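The following sketch illustrates the kind of rules the agent could apply for this proactive behavior. The 0–10 grading scale, the passing threshold, and the keyword list are assumptions made for illustration; the article does not specify them.

```
// Illustrative sketch of the proactive monitoring rules (assumed
// thresholds and keywords; not taken from the actual implementation).
interface ExamScore { courseId: string; score: number; } // 0-10 scale assumed

// Average the scores per course and flag those below a passing threshold.
function strugglingCourses(scores: ExamScore[], threshold = 6): string[] {
  const byCourse = new Map<string, number[]>();
  for (const { courseId, score } of scores) {
    const list = byCourse.get(courseId) ?? [];
    list.push(score);
    byCourse.set(courseId, list);
  }
  const flagged: string[] = [];
  for (const [courseId, list] of byCourse) {
    const avg = list.reduce((a, b) => a + b, 0) / list.length;
    if (avg < threshold) flagged.push(courseId);
  }
  return flagged;
}

// Very rough keyword check used to decide whether to alert the
// psychologist or social worker (with the student's consent).
const riskKeywords = ["bullying", "violence", "afraid", "alone"];
function mentionsRisk(dialogue: string): boolean {
  const text = dialogue.toLowerCase();
  return riskKeywords.some(k => text.includes(k));
}
```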
The conversational agent can also help complete actions performed on the initiative of user roles (see Figure 4). As we already mentioned in Section 3, our model is conceived as a hybrid proposal; i.e., the conversational agent is the guide that helps users perform their tasks not only with text and voice but also through widgets. The interaction design we propose is intended to help people with little or no experience with any software tool, as our premise is to save them from navigating through menus and settings that can be overwhelming. Instead, we move them to a familiar environment (a chat like WhatsApp), making them perceive it as a safer environment, hence improving the overall UX [63]. Additionally, the chat format keeps interactions concise, as the conversational agent will always prompt end-users for all the necessary data, no matter how they choose to initiate a task.
Figure 4 shows examples of activities that can be fulfilled by the different user roles with the help of the conversational agent. The processes of registration and classification of user profiles according to their role are carried out by the system administrator, who is a member of the administrative staff.
As mentioned before, there are some activities that can be managed by more than one role. For instance, activities such as scheduling events and sending a question to students can be carried out by a teacher or an administrative staff member. For this reason, these activities encompass both roles in Figure 4.
Both administrative staff members and teachers can schedule events, specifying the date, time, and place, which are automatically stored. When a teacher or administrative staff member needs to schedule an event, the conversational agent helps them with this task, as depicted in Figure 5. The description of the event is obviously the responsibility of the human, but the conversational agent provides a calendar showing available dates and times to make it easy to select when the event will take place. Then, according to the chosen date and time, the conversational agent checks the availability of places in the school and offers the end-user only the available ones, as in the sketch below.
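A helper of the following kind could back this dialogue step; the list of places and the booking shape are assumptions made for illustration:

```
// Hypothetical availability check behind the scheduling dialogue.
interface Booking { place: string; date: string; hour: number; }

const allPlaces = ["auditorium", "gym", "school yard"]; // assumed resources

// Offer only the places still free at the chosen date and time.
function availablePlaces(bookings: Booking[], date: string, hour: number): string[] {
  const taken = new Set(
    bookings
      .filter(b => b.date === date && b.hour === hour)
      .map(b => b.place)
  );
  return allPlaces.filter(p => !taken.has(p));
}
```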
Using this information, the conversational agent can provide the intended students (e.g., a specific group) with periodic reminders, freeing teachers from this cumbersome activity. In addition, at any moment, students can ask about a reminder for previously scheduled events. When the conversational agent does not find information about the requested event, it asks the student for more information in order to answer their request (although, for simplification, this flow ends in Figure 4).
Teachers can ask the conversational agent to memorize links to files, websites, and YouTube videos and to associate them with a topic. In this way, the conversational agent can provide students with extra-class materials created or approved by their teachers, and students can also ask for them. In case a student needs extra-class material about a topic, but the conversational agent does not find any information, it notifies the teacher that said student requested material.
The conversational agent allows teachers and administrative staff to send a question to a student or to all students belonging to the same group (this activity is also available to students, but it is not depicted in Figure 4 for space reasons). When the conversational agent delivers a question to the intended students, it asks them for an answer. If a student ignores the response request, the conversational agent makes a certain number of periodic reminders. If these reminders fail to obtain an answer from the student, the conversational agent informs the corresponding teacher or administrative staff member about it.
Similarly, students can ask the conversational agent to send their homework to their teachers. If desired, teachers can be notified about homework delivery through alternative means, such as email. The conversational agent also sends students an acknowledgment that their teacher has received their homework. When the conversational agent receives homework from a student, it classifies the delivery as either “on time” or “out of time” to facilitate the work of teachers, as in the sketch below.
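The classification itself reduces to a comparison against the assignment’s deadline, as in this minimal sketch (the types are our own illustration):

```
// Minimal sketch of the "on time" / "out of time" classification.
interface Homework { courseId: string; studentId: string; submittedAt: Date; }
interface Assignment { courseId: string; deadline: Date; }

function classifyDelivery(hw: Homework, a: Assignment): "on time" | "out of time" {
  return hw.submittedAt <= a.deadline ? "on time" : "out of time";
}
```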

5. Implementation of Our Model for Educational Chatbots

Figure 6 shows the tools and technologies that make up a prototype of our model for chatbots intended to assist the teaching and learning process. The developed chatbot follows a Web-based client/server approach, since it allows end-users to access the resulting chatbot anytime and anywhere, using a PC or a mobile platform.
End-users can interact with the chatbot through a keyboard or via voice. This is possible thanks to the user interface implemented for the three possible user roles, i.e., teachers, students, and administrative staff. The components for these roles, on both the Web client side (i.e., the user interface) and the server side (i.e., the functional core), were developed with AngularJS.
The work of processing and understanding natural language is carried out with Google cloud technologies: Firebase and Dialogflow. In particular, Firebase is in charge of storing information closely related to the chatbot, such as questions and answers. On the other hand, Dialogflow is responsible for training the model that nourishes the chatbot, which gives it “life” and “behaviour” and allows communication between the chatbot and the different user roles. We also use a MySQL database to store chatbot-independent information, e.g., user data, reminders, extra-class material, messages, homework, and courses.
We created a database API that serves as an object-relational mapper (ORM) to map the relational database to objects and facilitate their use in the server-side components. The latter connect with Dialogflow for the conversational agent and with Firebase for the NoSQL storage of both plain text (real-time database) and complex files (cloud storage). The link between Dialogflow and Firebase occurs because Firebase provides data to the Dialogflow-hosted agent, and the agent itself stores data in Firebase.
The connection between Dialogflow and the MySQL database is provided by intents, which are the main processes by which Dialogflow performs natural language processing. When an end-user asks a question, it is processed through the three steps of an intent: (1) intent matching, which is about recognising what the end-user wants to do; (2) entity extraction, which extracts the relevant data (entities) about what the end-user wants; and (3) dialogue control, which shapes the conversation. In this way, once the intent has extracted the entities needed for an operation, they are used to perform the corresponding queries on the MySQL database, which in turn returns the result to the intent in order to give a response to the end-user (see Figure 7).
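For readers unfamiliar with Dialogflow fulfillment, the following sketch shows how such a webhook could bridge a matched intent to a MySQL query. It assumes the Dialogflow ES request/response JSON format; the intent name, table, and column names are hypothetical, not taken from the project’s repository.

```
// Hypothetical Dialogflow ES fulfillment webhook backed by MySQL
// (intent, table, and column names are illustrative assumptions).
import express from "express";
import mysql from "mysql2/promise";

const app = express();
app.use(express.json());
const pool = mysql.createPool({ host: "localhost", user: "chatbot", database: "school" });

app.post("/webhook", async (req, res) => {
  // (1) Intent matching was done by Dialogflow; we read its result.
  const intent: string = req.body.queryResult.intent.displayName;
  // (2) Entity extraction: Dialogflow passes the entities as parameters.
  const params = req.body.queryResult.parameters;

  let fulfillmentText = "Sorry, I could not find that information.";
  if (intent === "NextExamDate") { // hypothetical intent name
    const [rows] = await pool.query(
      "SELECT date FROM events WHERE course = ? AND type = 'exam' ORDER BY date LIMIT 1",
      [params.course]
    );
    const result = rows as { date: string }[];
    if (result.length > 0) {
      fulfillmentText = `Your next ${params.course} exam is on ${result[0].date}.`;
    }
  }
  // (3) Dialogue control: the returned text shapes the agent's reply.
  res.json({ fulfillmentText });
});

app.listen(3000);
```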
Finally, to improve the reproducibility of our project, we published this implementation in a GitHub repository: https://github.com/jfuy/CinvestavCHATBOT (accessed on 21 June 2022).

6. Evaluation of Our Chatbot with End-Users

In this section, we describe the assessment that was carried out to evaluate our prototype. First, we explain the context of the trials. Next, we report the results of the tests (see Section 6.1). Finally, we analyze the significance of the obtained data (see Section 6.2).
To evaluate the proposed model, we put our chatbot prototype to the test. With the help of teachers, students, and the User Experience Questionnaire (UEQ) [64], we evaluated the UX of our prototype. We chose to do tests of this kind because we were interested in the reception of the end-users to this type of technology, the way we implemented the functionalities, and the general attitude towards the components of our model.
We chose the UEQ because it is a questionnaire that people can answer quickly, and it considers both hedonic and pragmatic aspects. Through a series of 26 semantic differentials (e.g., good-bad), it measures six scales [65]:
  • Attractiveness: General opinion of the artifact. Do users like or dislike it?
  • Perspicuity: Is it easy to get comfortable with the artifact and to learn how to use it?
  • Efficiency: Can users solve their tasks without unnecessary effort? Does it react fast?
  • Dependability: Does the user feel in control of the interaction? Is it reliable and foreseeable?
  • Stimulation: Is it appealing and encouraging to use the artifact? Is it fun to use?
  • Novelty: Is the design of the artifact imaginative? Does it catch the interest of users?
We used an opportunistic sample to recruit our participants, all from the same middle school. The group consisted of 10 teachers (7 female and 3 male) with an average age of 41.2 years and 10 students (6 male and 4 female) with an average age of 14.1 years. The teachers teach a variety of classes (e.g., Computer Science, Mathematics, History, Geography, and Spanish), and the pupils belonged to the 2nd and 3rd years. In all cases, they had no experience using educational chatbots, only brief encounters with casual chatbots such as those used in customer service.
To minimize non-relevant stimuli, we conducted our tests in a quiet environment, so that participants felt comfortable. Sessions were conducted at around 11 a.m. over 20 days (one participant per day). Each volunteer participated individually in the assessment, accompanied by an on-site moderator.
Once we had given them a little time to familiarize themselves with the chatbot (on a PC), we asked the participants to carry out the following tasks:
  • Ask something related to a class (e.g., when is the next History test?).
  • Ask the chatbot for help on a particular topic (e.g., can you help me to solve equations?).
  • Send a file to another user (e.g., I want to send my Geography assignment).
After completing the three tasks, the participants answered the questionnaire; each assessment took around 45 min in total. Thus, both teachers and students had a sample of the functionalities implemented in our prototype.

6.1. Results

The UEQ employs a 7-point scale ranging in score from −3 (horribly bad) to +3 (extremely good) to gather participants’ ratings for each semantic differential; i.e., if a participant selects number 1 on the scale of a differential, it is given a score of −3; on the other hand, if the participant answers 7, the score is +3 [64].
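The transformation is a simple shift of the raw 1–7 answer; in the official UEQ, item polarity is randomized, so reverse-coded items must be flipped first. A sketch (our own illustration):

```
// Map a raw 1-7 answer to the -3..+3 UEQ score; 'reversed' flags items
// whose polarity is inverted (the UEQ randomizes item polarity).
function ueqScore(answer: number, reversed = false): number {
  if (answer < 1 || answer > 7) throw new Error("answer must be 1-7");
  const score = answer - 4; // 1 -> -3, 4 -> 0, 7 -> +3
  return reversed ? -score : score;
}

console.log(ueqScore(1)); // -3
console.log(ueqScore(7)); // +3
```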
Table 1 and Table 2 present the means and standard deviations for each scale, computed from the results obtained from the groups of students and teachers. They also contain the confidence intervals of the results. Figure 8 is a graphic representation of these results.
The confidence interval is a measure of the estimation precision of the scale mean. The smaller the confidence interval, the higher the estimation precision and the more we can trust the results. The width of the confidence interval depends on the number of available data points and on how consistently the participants judged the evaluated product: the more consistent their opinions, the smaller the confidence interval [64].
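Concretely, for a scale with n ratings, the 95% interval around the mean follows the usual normal approximation, mean ± 1.96 · sd / √n, which makes both dependencies visible: more data (larger n) and more consistent judgments (smaller sd) narrow the interval. A sketch of the computation (our summary; presumably what the UEQ analysis tool reports):

```
// 95% confidence interval for a scale mean: mean ± 1.96 * sd / sqrt(n).
function confidenceInterval(values: number[]): { mean: number; halfWidth: number } {
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const variance = values.reduce((a, v) => a + (v - mean) ** 2, 0) / (n - 1);
  return { mean, halfWidth: 1.96 * Math.sqrt(variance / n) };
}
```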
Reliability was evaluated by assessing the internal consistency of the UEQ scales. The Cronbach’s alpha coefficients of the instrument are shown in Table 3 for students and teachers, respectively.
The scales of the UEQ can be grouped into pragmatic quality (perspicuity, efficiency, dependability) and hedonic quality (stimulation, novelty). Pragmatic quality describes the task-related quality aspects, whereas hedonic quality describes the non-task-related quality aspects [66]. Table 4 shows the means for both qualities.
The UEQ also provides a benchmarking tool, which contains the data of 21,175 persons from 468 studies concerning different products (e.g., business software, webpages, webshops, and social networks). The benchmark classifies a product into five categories [66]:
  • Excellent: in the range of the 10% best results.
  • Good: 10% of the results in the benchmark data set are better, and 75% of the results are worse.
  • Above average: 25% of the results in the benchmark are better than the result for the evaluated product; 50% of the results are worse.
  • Below average: 50% of the results in the benchmark are better than the result for the evaluated product; 25% of the results are worse.
  • Bad: in the range of the 25% worst results.
Our results on the benchmark are found in Figure 9, for students and teachers.

6.2. Discussion

Certainly, our model for chatbots aims to improve the teaching and learning process by automating and digitizing everyday processes. In that respect, it is no different from any other software engineering case. However, our context has an element that cannot be ignored, i.e., technological illiteracy on the part of many teachers and administrative staff. Thus, our user-centered design approach required end-user evaluations, since a UX assessment allows us to learn about user perceptions: the usability, which indicates how easy our chatbot was to use, and the hedonic aspects, which indicate the emotions provoked by the use of our application. These tests showed whether the design of our model is coherent with the problems encountered and whether the implementation was adequate.
We aimed to create a truly useful tool that can be used on a daily basis in the classroom and grow according to the needs of the community as they arise. This is a big challenge, as we not only relied on elements of natural language processing and understanding, but also had to develop mechanisms that make the tasks in the teaching and learning process easier. This is why the UX evaluation was vital, as it allowed us to know how end-users appreciated our chatbot; the last thing we wanted was to end up with a cumbersome application that no one would like to use.
For UEQ scales, averages usually range between −2 and +2; i.e., participants do not tend to rate with extremes [66]. Taking this into account, our results were positive for both groups. As can be seen in Table 1 and Table 2, all averages were greater than or close to 2. In the case of the students, we had better ratings, since the confidence intervals are narrower than those obtained with the teachers. We attribute this to the fact that students have much more experience with software applications of various kinds, so they are more comfortable experimenting with a new system. Qualitatively, this is consistent with our general observations of the participants: the teachers were much more cautious when performing the test tasks. Nevertheless, both groups yielded consistent data, as all Cronbach’s alpha coefficients were greater than 0.7 (see Table 3).
Our positive results are reflected jointly in the pragmatic and hedonic qualities (see Table 4). This indicates that the implemented mechanisms were well received as task solvers and as items that are enjoyable to use.
The last aspect that the UEQ provides is the benchmark (see Figure 9). The students’ averages fell in the “Excellent” category in all cases. The teachers’ averages were similar, except that the Perspicuity scale fell in the “Good” range. Of course, these results should be interpreted with caution, as the context in which the benchmark studies were conducted is missing. We can only mention the interpretation offered by the UEQ; i.e., our scales were in the top 10% range.
Another reason to choose an end-user evaluation was the nature of our context. As can be seen in Section 2, educational chatbots can offer a variety of features: they can focus on a specific task with limited functionality, or they can offer more mechanisms without any particular specialty. Of course, neither approach is inherently good or bad; it depends on the needs of the community the chatbot is targeting.
What we did observe in our state-of-the-art research is that there is a certain epistemic immaturity in the field of educational chatbots. Most of the research does not detail the models that were developed, so the mechanisms cannot be easily replicated. Additionally, the kinds of assessments applied vary: on the same subject, there are tests and results that focus on educational performance, others that prefer a software-quality approach, and still others that analyze the usability and UX of their systems (as in our case). If the differing granularity of the development descriptions is added to this, the result is that presenting a work of this nature is complicated.
In our case, we decided to detail the characteristics of our model, not only because of the technical effort involved, but also because it is the best way we found to represent the specific needs of our study community, i.e., how an everyday task can be solved through a chatbot.

7. Conclusions and Future Work

In this article, we proposed a novel model for the development of chatbots that assist in the teaching and learning process in Mexican middle schools. This proposal defines three user roles, each with well-defined activities that guide the user–chatbot interaction. We devised the roles in the first phase of our project, through “Google Design Sprint,” which allowed us to obtain the requirements and wishes of end-users.
Our commitment was to create a highly flexible model with components that allow close interoperability, but at the same time, are sufficiently defined and independent for organic evolution to be possible. Another of our objectives was to create a modular design for the roles, since the interaction with the chatbot ultimately depends on the role that the end-user takes.
We consider this an interdisciplinary work, which is why it was not easy to choose the type of tests to be carried out. We think the most holistic scenario was to implement the model and test it with end-users. This allowed us not only to have first-hand experience of the software’s performance, but also to see whether it adequately met the requirements of end-users.
Regarding the results of the UEQ test, it can be said that both teachers and students found the chatbot to be a practical and friendly tool, considering that it was the first time they had used such technology. The results in the efficiency and dependability dimensions were very positive. These two dimensions were the most important, since they indicate whether the chatbot was the correct tool for the scenario in question (Mexican middle schools), as well as whether the implementation covered the stated requirements. We had a small sample, but the results obtained were reliable and auspicious. Additionally, thanks to the Google technologies used in our implementation, a low-maintenance system was achieved, as Dialogflow agents are self-training [67].
A chatbot developed with our model can be a useful tool for the teaching and learning process. A big problem that many teachers face (at least in our experience) is that they are, to some extent, digitally illiterate, so using natural language as a means of interaction may be a more straightforward process. We plan to continue working closely with members of the middle school to improve our model for educational chatbots. Similarly, we intend to integrate the chatbot’s functions into an application more familiar to end-users, such as WhatsApp or Facebook Messenger.
In addition, since our model involves handling potentially sensitive data (e.g., mental health issues), it is important to establish an information flow policy. End-users must be assured that their information will not be leaked by third parties due to bugs or misconfigurations [68]. This will ensure that particular materials are only accessible to the right people; e.g., reports on violence or depression should only reach the hands of psychologists.

Author Contributions

Conceptualization, S.M. and L.M.S.-A.; methodology, J.F.U.-Y. and L.M.S.-A.; software, J.F.U.-Y.; validation, S.M. and L.M.S.-A.; formal analysis, S.M. and B.A.G.-B.; investigation, S.M. and L.M.S.-A.; resources, D.D. and B.A.G.-B.; writing—original draft preparation, S.M., L.M.S.-A. and J.F.U.-Y.; writing—review and editing, S.M., B.A.G.-B. and L.M.S.-A.; visualization, D.D.; supervision, D.D.; project administration, D.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of CINVESTAV-IPN (12 June 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shawar, B.A.; Atwell, E. Chatbots: Are they really useful? Ldv Forum 2007, 22, 29–49. [Google Scholar]
  2. Ranoliya, B.R.; Raghuwanshi, N.; Singh, S. Chatbot for university related FAQs. In Proceedings of the 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 1525–1530. [Google Scholar] [CrossRef]
  3. Dian Sano, A.V.; Daud Imanuel, T.; Intanadias Calista, M.; Nindito, H.; Raharto Condrobimo, A. The Application of AGNES Algorithm to Optimize Knowledge Base for Tourism Chatbot. In Proceedings of the 2018 International Conference on Information Management and Technology (ICIMTech), Jakarta, Indonesia, 3–5 September 2018; pp. 65–68. [Google Scholar] [CrossRef]
  4. Ravi, R. Intelligent Chatbot for Easy Web-Analytics Insights. In Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Bangalore, India, 19–22 September 2018; pp. 2193–2195. [Google Scholar] [CrossRef]
  5. Argal, A.; Gupta, S.; Modi, A.; Pandey, P.; Shim, S.; Choo, C. Intelligent travel chatbot for predictive recommendation in echo platform. In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 176–183. [Google Scholar] [CrossRef]
  6. Agus Santoso, H.; Anisa Sri Winarsih, N.; Mulyanto, E.; Wilujeng saraswati, G.; Enggar Sukmana, S.; Rustad, S.; Syaifur Rohman, M.; Nugraha, A.; Firdausillah, F. Dinus Intelligent Assistance (DINA) Chatbot for University Admission Services. In Proceedings of the 2018 International Seminar on Application for Technology of Information and Communication, Semarang, Indonesia, 21–22 September 2018; pp. 417–423. [Google Scholar] [CrossRef]
  7. Insider Intelligence. Chatbot Market in 2021: Stats, Trends, and Companies in the Growing AI Chatbot Industry. 2021. Available online: https://www.insiderintelligence.com/insights/chatbot-market-stats-trends/ (accessed on 21 June 2022).
  8. Morelos, M.; Alcántara, A. Liverpool Redobla Esfuerzos en Comercio Electrónico, Pero Queda Rezagado Ante Coppel, Walmart y Amazon. 2020. Available online: https://elceo.com/tecnologia/liverpool-redobla-esfuerzos-en-comercio-electronico-pero-queda-rezagado-ante-coppel-walmart-y-amazon/ (accessed on 21 June 2022).
  9. Biancarosa, G.; Griffiths, G.G. Technology tools to support reading in the digital age. Future Child. 2012, 22, 139–160. [Google Scholar] [CrossRef] [PubMed]
  10. Kumar, V.; Bhardwaj, A. Role of Cloud Computing in School Education. In Handbook of Research on Diverse Teaching Strategies for the Technology-Rich Classroom; IGI Global: Hershey, PA, USA, 2020; pp. 98–108. [Google Scholar]
  11. Zhong, S.H.; Li, Y.; Liu, Y.; Wang, Z. A computational investigation of learning behaviors in MOOCs. Comput. Appl. Eng. Educ. 2017, 25, 693–705. [Google Scholar] [CrossRef]
  12. Ndukwe, I.G.; Daniel, B.K.; Amadi, C.E. A Machine Learning Grading System Using Chatbots. In Artificial Intelligence in Education; Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 365–368. [Google Scholar]
  13. AdmitHub. AdmitHub. 2019. Available online: https://www.admithub.com/ (accessed on 21 June 2022).
  14. Maderer, J. Jill Watson, Round Three. 2017. Available online: https://www.news.gatech.edu/2017/01/09/jill-watson-round-three (accessed on 21 June 2022).
  15. Shaw, A. Using Chatbots to Teach Socially Intelligent Computing Principles in Introductory Computer Science Courses. In Proceedings of the 2012 Ninth International Conference on Information Technology—New Generations, Las Vegas, NV, USA, 16–18 April 2012; pp. 850–851. [Google Scholar] [CrossRef]
  16. Molnár, G.; Szüts, Z. The Role of Chatbots in Formal Education. In Proceedings of the 2018 IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 13–15 September 2018; pp. 000197–000202. [Google Scholar] [CrossRef]
  17. Pinto, F.; Macadar, M.A. Using Chatbots to Enlarge Human Capabilities—Insights from a Pandemic Context. In Proceedings of the 14th International Conference on Theory and Practice of Electronic Governance, ICEGOV 2021, Athens, Greece, 6–8 October 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 555–557. [Google Scholar] [CrossRef]
  18. Dhawan, S.; Batra, G. Artificial Intelligence in Higher Education: Promises, Perils, and Perspective. Expand. Knowl. Horizon. OJAS 2020, 11, 11–22. [Google Scholar]
  19. Cortina-Pérez, B.; Gallardo-Vigil, M.Á.; Jiménez-Jiménez, M.Á.; Trujillo-Torres, J.M. Digital illiteracy: A challenge for 21st century teachers / El analfabetismo digital: Un reto de los docentes del siglo XXI. Cult. Educ. 2014, 26, 231–264. [Google Scholar] [CrossRef]
  20. Neves, B.B.; Waycott, J.; Malta, S. Old and afraid of new communication technologies? Reconceptualising and contesting the ‘age-based digital divide’. J. Sociol. 2018, 54, 236–248. [Google Scholar] [CrossRef] [Green Version]
  21. Porras, V.d.C.A.; Albores, I.A. Scale to measure the uses of CAT tools in university teachers during the COVID-19 pandemic in segregated territories. J. Posit. Psychol. Wellbeing 2022, 6, 2390–2403. [Google Scholar]
  22. Sriwisathiyakun, K.; Dhamanitayakul, C. Enhancing digital literacy with an intelligent conversational agent for senior citizens in Thailand. Educ. Inf. Technol. 2022, 27, 6251–6271. [Google Scholar] [CrossRef]
  23. Heo, J.; Lee, J. CiSA: An Inclusive Chatbot Service for International Students and Academics. In HCI International 2019—Late Breaking Papers; Stephanidis, C., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 153–167. [Google Scholar]
  24. Chun Ho, C.; Lee, H.L.; Lo, W.K.; Lui, K.F.A. Developing a Chatbot for College Student Programme Advisement. In Proceedings of the 2018 International Symposium on Educational Technology (ISET), Osaka, Japan, 31 July–2 August 2018; pp. 52–56. [Google Scholar]
  25. Ondáš, S.; Pleva, M.; Hládek, D. How chatbots can be involved in the education process. In Proceedings of the 2019 17th International Conference on Emerging eLearning Technologies and Applications (ICETA), Starý Smokovec, Slovakia, 21–22 November 2019; pp. 575–580. [Google Scholar] [CrossRef]
  26. Dibitonto, M.; Leszczynska, K.; Tazzi, F.; Medaglia, C.M. Chatbot in a Campus Environment: Design of LiSA, a Virtual Assistant to Help Students in Their University Life. In Human-Computer Interaction. Interaction Technologies; Kurosu, M., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 103–116. [Google Scholar]
  27. Cordero, J.; Toledo, A.; Guamán, F.; Barba-Guamán, L. Use of chatbots for user service in higher education institutions. In Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Seville, Spain, 24–27 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  28. Zahour, O.; Benlahmar, E.H.; Eddaoui, A.; Ouchra, H.; Hourrane, O. A system for educational and vocational guidance in Morocco: Chatbot E-Orientation. In Proceedings of the 17th International Conference on Mobile Systems and Pervasive Computing (MobiSPC), the 15th International Conference on Future Networks and Communications (FNC), the 10th International Conference on Sustainable Energy Information Technology, Leuven, Belgium, 9–12 August 2020; Volume 175, pp. 554–559. [Google Scholar] [CrossRef]
  29. Hien, H.T.; Cuong, P.N.; Nam, L.N.H.; Nhung, H.L.T.K.; Thang, L.D. Intelligent Assistants in Higher-Education Environments: The FIT-EBot, a Chatbot for Administrative and Learning Support. In Proceedings of the Ninth International Symposium on Information and Communication Technology, SoICT 2018, Danang City, Vietnam, 6–7 December 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 69–76. [Google Scholar] [CrossRef]
  30. Mekni, M.; Baani, Z.; Sulieman, D. A Smart Virtual Assistant for Students. In Proceedings of the 3rd International Conference on Applications of Intelligent Systems, APPIS 2020, Las Palmas de Gran Canaria, Spain, 7–9 January 2020; Association for Computing Machinery: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  31. Valle-Rosado, L.; García-García, M.; López-Martínez, J. Desarrollo e implementación de un bot conversacional como apoyo a los estudiantes en su proceso de titulación. In Proceedings of the International Conference on Robotics and Computing, Karlsruhe, Germany, 6–10 May 2013. [Google Scholar]
  32. Lee, K.; Jo, J.; Kim, J.; Kang, Y. Can Chatbots Help Reduce the Workload of Administrative Officers?—Implementing and Deploying FAQ Chatbot Service in a University. In HCI International 2019—Posters; Stephanidis, C., Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 348–354. [Google Scholar]
  33. Mikic-Fonte, F.A.; Burguillo, J.C.; Llamas, M.; Rodriguez, D.A.; Rodriguez, E. Charlie: An AIML-based chatterbot which works as an interface among INES and humans. In Proceedings of the 2009 EAEEIE Annual Conference, Valencia, Spain, 22–24 June 2009; pp. 1–6. [Google Scholar] [CrossRef]
  34. Ismail, M.; Ade-Ibijola, A. Lecturer’s Apprentice: A Chatbot for Assisting Novice Programmers. In Proceedings of the 2019 International Multidisciplinary Information Technology and Engineering Conference (IMITEC), Vanderbijlpark, South Africa, 21–22 November 2019; pp. 1–8. [Google Scholar]
  35. Mikic-Fonte, F.A.; Burguillo, J.C.; Rodriguez, D.A.; Rodriguez, E.; Llamas, M. T-Bot and Q-Bot: A couple of AIML-based bots for tutoring courses and evaluating students. In Proceedings of the 2008 38th Annual Frontiers in Education Conference, Saratoga Springs, NY, USA, 22–25 October 2008; pp. S3A-7–S3A-12. [Google Scholar]
  36. Mikic-Fonte, F.A.; Nistal, M.L.; Rial, J.C.B.; Rodríguez, M.C. NLAST: A natural language assistant for students. In Proceedings of the 2016 IEEE Global Engineering Education Conference (EDUCON), Abu Dhabi, United Arab Emirates, 10–13 April 2016; pp. 709–713. [Google Scholar] [CrossRef]
  37. Rafael, M.S.; María, T.B.L.; Antonio, F.U.; Hanns, D.L.F.M. Support to the learning of the Chilean tax system using artificial intelligence through a chatbot. In Proceedings of the 2019 38th International Conference of the Chilean Computer Science Society (SCCC), Concepcion, Chile, 4–9 November 2019; pp. 1–8. [Google Scholar]
  38. Bigham, J.P.; Aller, M.B.; Brudvik, J.T.; Leung, J.O.; Yazzolino, L.A.; Ladner, R.E. Inspiring Blind High School Students to Pursue Computer Science with Instant Messaging Chatbots. SIGCSE Bull. 2008, 40, 449–453. [Google Scholar] [CrossRef]
  39. Mulyana, E.; Hakimi, R.; Hendrawan. Bringing Automation to the Classroom: A ChatOps-Based Approach. In Proceedings of the 2018 4th International Conference on Wireless and Telematics (ICWT), Bali, Indonesia, 12–13 July 2018; pp. 1–6. [Google Scholar]
  40. Ruan, S.; Jiang, L.; Xu, J.; Tham, B.J.K.; Qiu, Z.; Zhu, Y.; Murnane, E.L.; Brunskill, E.; Landay, J.A. QuizBot: A Dialogue-Based Adaptive Learning System for Factual Knowledge. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, Glasgow, UK, 4–9 May 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–13. [Google Scholar] [CrossRef] [Green Version]
  41. Pereira, J.; Barcina, M.A. A Chatbot Assistant for Writing Good Quality Technical Reports. In Proceedings of the Seventh International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM’19, León, Spain, 16–18 October 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 59–64. [Google Scholar] [CrossRef]
  42. Gómez Róspide, C.; Puente Águeda, C. Agente Virtual Inteligente Aplicado a un Entorno Educativo. Rev. Pensam. Matemático 2012, 2, 195–207. [Google Scholar]
  43. Benotti, L.; Martnez, M.C.; Schapachnik, F. A Tool for Introducing Computer Science with Automatic Formative Assessment. IEEE Trans. Learn. Technol. 2018, 11, 179–192. [Google Scholar] [CrossRef]
  44. Nguyen, H.D.; Pham, V.T.; Tran, D.A.; Le, T.T. Intelligent tutoring chatbot for solving mathematical problems in High-school. In Proceedings of the 2019 11th International Conference on Knowledge and Systems Engineering (KSE), Da Nang, Vietnam, 24–26 October 2019; pp. 1–6. [Google Scholar] [CrossRef]
  45. Bala, K.; Kumar, M.; Hulawale, S.; Pandita, S. Chat-Bot For College Management System Using A.I. Int. Rsearch J. Eng. Technol. 2017, 4, 2030–2033. [Google Scholar]
  46. Lee, L.K.; Fung, Y.C.; Pun, Y.W.; Wong, K.K.; Yu, M.T.Y.; Wu, N.I. Using a Multiplatform Chatbot as an Online Tutor in a University Course. In Proceedings of the 2020 International Symposium on Educational Technology (ISET), Bangkok, Thailand, 24–27 August 2020; pp. 53–56. [Google Scholar] [CrossRef]
  47. Dutta, D. Developing an Intelligent Chat-bot Tool to Assist High School Students for Learning General Knowledge Subjects; Technical Report; Georgia Institute of Technology: Atlanta, GA, USA, 2017. [Google Scholar]
  48. Niranjan, M.; Saipreethy, M.S.; Kumar, T.G. An intelligent question answering conversational agent using Naïve Bayesian classifier. In Proceedings of the 2012 IEEE International Conference on Technology Enhanced Education (ICTEE), Amritapuri, India, 3–5 January 2012; pp. 1–5. [Google Scholar]
  49. Reyes, R.; Garza, D.; Garrido, L.; De la Cueva, V.; Ramirez, J. Methodology for the Implementation of Virtual Assistants for Education Using Google Dialogflow. In Advances in Soft Computing; Martínez-Villaseñor, L., Batyrshin, I., Marín-Hernández, A., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 440–451. [Google Scholar]
  50. Sreelakshmi, A.S.; Abhinaya, S.B.; Nair, A.; Jaya Nirmala, S. A Question Answering and Quiz Generation Chatbot for Education. In Proceedings of the 2019 Grace Hopper Celebration India (GHCI), Bangalore, India, 6–8 November 2019; pp. 1–6. [Google Scholar]
  51. Kowsher, M.; Tithi, F.S.; Ashraful Alam, M.; Huda, M.N.; Md Moheuddin, M.; Rosul, M.G. Doly: Bengali Chatbot for Bengali Education. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–6. [Google Scholar]
  52. Nias, J.; Ruffin, M. CultureBot: A Culturally Relevant Humanoid Robotic Dialogue Agent. In Proceedings of the 2020 ACM Southeast Conference, ACM SE ’20, Tampa, FL, USA, 2–4 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 280–283. [Google Scholar] [CrossRef]
  53. Mendoza, S.; Hernández-León, M.; Sánchez-Adame, L.M.; Rodríguez, J.; Decouchant, D.; Meneses Viveros, A. Supporting Student-Teacher Interaction Through a Chatbot. In Human-Computer Interaction. Perspectives on Design; Kurosu, M., Ed.; Springer International Publishing: Cham, Switzerland, 2020; pp. 210–223. [Google Scholar]
  54. Banfield, R.; Lombardo, C.T.; Wax, T. Design Sprint: A Practical Guidebook for Building Great Digital Products; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2015. [Google Scholar]
  55. Sari, E.; Tedjasaputra, A. Designing Valuable Products with Design Sprint. In Human-Computer Interaction—INTERACT 2017; Bernhaupt, R., Dalvi, G., Joshi, A., Balkrishan, D.K., O’Neill, J., Winckler, M., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 391–394. [Google Scholar]
  56. Keijzer-Broers, W.J.W.; de Reuver, M. Applying Agile Design Sprint Methods in Action Design Research: Prototyping a Health and Wellbeing Platform. In Tackling Society’s Grand Challenges with Design Science; Parsons, J., Tuunanen, T., Venable, J., Donnellan, B., Helfert, M., Kenneally, J., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 68–80. [Google Scholar]
  57. Southall, H.; Marmion, M.; Davies, A. Adapting Jake Knapp’s Design Sprint Approach for AR/VR Applications in Digital Heritage. In Augmented Reality and Virtual Reality: The Power of AR and VR for Business; tom Dieck, M.C., Jung, T., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 59–70. [Google Scholar]
  58. Cohn, M. User Stories Applied: For Agile Software Development; Addison-Wesley Professional: Boston MA, USA, 2004. [Google Scholar]
  59. Yeo, S.; Rutherford, T.; Campbell, T. Understanding elementary mathematics teachers’ intention to use a digital game through the technology acceptance model. Educ. Inf. Technol. 2022. [Google Scholar] [CrossRef]
  60. Lim, J.S.; Zhang, J. Adoption of AI-driven personalization in digital news platforms: An integrative model of technology acceptance and perceived contingency. Technol. Soc. 2022, 69, 101965. [Google Scholar] [CrossRef]
  61. Merelo, J.J.; Castillo, P.A.; Mora, A.M.; Barranco, F.; Abbas, N.; Guillén, A.; Tsivitanidou, O. Exploring the Role of Chatbots and Messaging Applications in Higher Education: A Teacher’s Perspective. In Learning and Collaboration Technologies. Novel Technological Environments; Zaphiris, P., Ioannou, A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 205–223. [Google Scholar]
  62. Wobbrock, J.O.; Myers, B.A. Analyzing the Input Stream for Character- Level Errors in Unconstrained Text Entry Evaluations. ACM Trans. Comput.-Hum. Interact. 2006, 13, 458–489. [Google Scholar] [CrossRef]
  63. Sánchez-Adame, L.M.; Urquiza-Yllescas, J.F.; Mendoza, S. Measuring Anticipated and Episodic UX of Tasks in Social Networks. Appl. Sci. 2020, 10, 8199. [Google Scholar] [CrossRef]
  64. Laugwitz, B.; Held, T.; Schrepp, M. Construction and Evaluation of a User Experience Questionnaire. In HCI and Usability for Education and Work; Holzinger, A., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 63–76. [Google Scholar]
  65. Hinderks, A.; Schrepp, M.; Mayo, F.J.D.; Escalona, M.J.; Thomaschewski, J. Developing a UX KPI based on the user experience questionnaire. Comput. Stand. Interfaces 2019, 65, 38–44. [Google Scholar] [CrossRef]
  66. Schrepp, M.; Hinderks, A.; Thomaschewski, J. Construction of a Benchmark for the User Experience Questionnaire (UEQ). Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 40–44. [Google Scholar] [CrossRef] [Green Version]
  67. Google. Training. 2022. Available online: https://cloud.google.com/dialogflow/es/docs/training (accessed on 21 June 2022).
  68. Pasquier, T.M.; Singh, J.; Eyers, D.; Bacon, J. Camflow: Managed Data-Sharing for Cloud Services. IEEE Trans. Cloud Comput. 2017, 5, 472–484. [Google Scholar] [CrossRef] [Green Version]
Figure 1. A model for a chatbot assisting the teaching and learning process in middle schools.
Figure 2. Teacher and student role models using the producer/consumer approach.
Figure 3. Some actions taken at the initiative of the conversational agent to help user roles.
Figure 4. Some activities performed by user roles with the help of the conversational agent.
Figure 5. The completion of tasks is facilitated by widgets and by asking for missing information; e.g., to schedule an exam, a calendar widget is used, and any data that the teacher did not initially provide are requested.
Figure 6. Tools and technologies used to implement the main components of our chatbot.
Figure 7. Intents are the main unit of work in Dialogflow. By processing them, the agent can extract entities, perform operations, and hold conversations with end-users.
Figure 8. Results per scale.
Figure 9. Benchmark results.
Table 1. Students' results (n = 10). Confidence interval (p = 0.05) per scale.

Scale            Mean    Std. Dev.   Confidence   Confidence Interval
Attractiveness   2.483   0.372       0.231        [2.253, 2.714]
Perspicuity      2.525   0.416       0.258        [2.267, 2.783]
Efficiency       2.600   0.474       0.294        [2.306, 2.894]
Dependability    2.375   0.445       0.276        [2.099, 2.651]
Stimulation      2.350   0.555       0.344        [2.006, 2.694]
Novelty          2.600   0.394       0.244        [2.356, 2.844]
Table 2. Teachers' results (n = 10). Confidence interval (p = 0.05) per scale.

Scale            Mean    Std. Dev.   Confidence   Confidence Interval
Attractiveness   2.200   0.637       0.395        [1.805, 2.595]
Perspicuity      1.950   0.762       0.472        [1.478, 2.422]
Efficiency       2.150   0.626       0.388        [1.762, 2.538]
Dependability    2.025   0.721       0.447        [1.578, 2.472]
Stimulation      1.850   0.899       0.557        [1.293, 2.407]
Novelty          2.025   0.702       0.435        [1.590, 2.460]
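The confidence columns in Tables 1 and 2 are consistent with a 95% normal-approximation interval around each scale mean; e.g., for the students' Attractiveness scale:

\[
\text{Confidence} = z_{0.975}\,\frac{s}{\sqrt{n}} \approx 1.96 \times \frac{0.372}{\sqrt{10}} \approx 0.231,
\qquad
\bar{x} \pm 0.231 = [2.253,\ 2.714].
\]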
Table 3. Cronbach's alpha coefficients.

Scale            Students   Teachers
Attractiveness   0.81       0.92
Perspicuity      0.80       0.92
Efficiency       0.94       0.88
Dependability    0.89       0.94
Stimulation      0.84       0.96
Novelty          0.76       0.92
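For readers who wish to reproduce coefficients like those in Table 3, the sketch below computes Cronbach's alpha from a respondents-by-items score matrix using the standard formula; it is our own illustration (the demo matrix is fabricated), not code or data from the study.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items in the scale
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Fabricated demo: 5 respondents answering a 4-item scale.
demo = np.array([
    [3, 2, 3, 3],
    [2, 2, 1, 2],
    [3, 3, 3, 2],
    [1, 1, 2, 1],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(demo), 2))
```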
Table 4. Pragmatic and hedonic qualities.

Scale               Students   Teachers
Pragmatic Quality   2.50       2.04
Hedonic Quality     2.48       1.94
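Following the standard UEQ grouping, the pragmatic quality in Table 4 is the mean of the Perspicuity, Efficiency, and Dependability scale means, and the hedonic quality is the mean of the Stimulation and Novelty scale means; e.g., for the students, (2.525 + 2.600 + 2.375)/3 = 2.50 and (2.350 + 2.600)/2 ≈ 2.48.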
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
