Article

Using the Engagement Profile to Design an Engaging Robotic Teaching Assistant for Students

1 Department of Intelligent Systems and Digital Design, School of Information Technology, Halmstad University, 301 18 Halmstad, Sweden
2 Norsk Regnesentral, 0373 Oslo, Norway
* Authors to whom correspondence should be addressed.
Robotics 2019, 8(1), 21; https://doi.org/10.3390/robotics8010021
Submission received: 8 February 2019 / Revised: 7 March 2019 / Accepted: 10 March 2019 / Published: 13 March 2019

Abstract
We report on an exploratory study conducted at a graduate school in Sweden with a humanoid robot, Baxter. First, we describe a list of potentially useful capabilities for a robot teaching assistant derived from brainstorming and interviews with faculty members, teachers, and students. These capabilities consist of reading educational materials out loud, greeting, alerting, allowing remote operation, providing clarifications, and moving to carry out physical tasks. Secondly, we present feedback on how the robot’s capabilities, demonstrated in part with the Wizard of Oz approach, were perceived and iteratively adapted over the course of several lectures, using the Engagement Profile tool. Thirdly, we discuss observations regarding the capabilities and the development process. Our findings suggest that the chosen capabilities and the Engagement Profile tool provide a promising basis for using a social robot as a teaching assistant. We find that enhancing the robot’s autonomous capabilities and further investigating the role of embodiment are important topics for future work.

1. Introduction

University courses are designed based on requirements of students and teachers, to engage students and encourage them to learn actively. During the last century, a transition has taken place from behaviorist learning, featuring passive knowledge transfer and repetitions, to cognitive learning leveraging knowledge of how people process information, and constructivist learning considering subjective needs and backgrounds of students. Recently, online collaborative learning theory has been proposed, which facilitates social collaborations via transformational digital technologies [1]. Digital technologies have been described as not just an aid for teaching, but rather as something which has changed how students learn and our concept of learning [2]. For example, through “multi-inclusive” and multimodal designs, such technologies can facilitate different learning styles (e.g., being intelligible to both “serialists” and “holists”), engaging students via visual, auditory, or kinesthetic stimuli [3]. In particular, the promise of robotic technologies for engaging students is being increasingly recognized, with the result that we are now in the midst of a “robotics revolution” in education, in which robots are being used more and more in classrooms around the world, targeting various age groups and disciplines [4].
One kind of robot increasingly being used in applications such as teaching, where the social aspect is of the essence, is the social robot: a (semi-)autonomous system with a physical embodiment that interacts and communicates with humans or other agents by following the social behaviors and rules attached to its role [5]. As such, social robotics is part of the larger field of human-robot interaction (HRI) [6]. One instance of how a social robot could be used in teaching is that it could act as a teaching assistant.
The benefits of robot teaching assistants can be seen in relation to alternatives such as employing more human teachers or other digital technologies. With an increasing number of students seeking university degrees, teachers can have little time to prepare due to other responsibilities, and it would be helpful to offload some of the teachers’ work. From the student perspective, opportunities for one-to-one interactions with the teacher in typical university classes are limited; robots can help compensate for this shortage of teacher time.
Furthermore, robots have surpassed humans in many capabilities: e.g., sensing, memory, arithmetic, the ability to work and concentrate continuously without sleep or breaks, and the ability to communicate in many languages. Thus, robots could use their ability to collate student data to adapt the pace of learning, provide extra wait time for answers, and be patient if a task must be carried out many times. By this we do not suggest that human teachers should be replaced by robots; rather, we imagine a situation in which humans and robots can complement one another.
Several studies have provided evidence that robots show marked benefits over screen-based technologies in education, including at the university level, in terms of both learning outcomes and motivation [7,8,9]; the use of social robots in education has also been reviewed [10]. Yet, there could also be disadvantages to using robots in a university classroom. In the same way that digital technologies, such as slide presentations, have been described as “soporific or dazzling” [11], it could be damaging if robots were seen as a facetious distraction, or as an excuse for teachers to avoid having to deal with bothersome students. Moreover, socially interactive robotics is an emerging field which is not fully mature; imperfect capabilities could disappoint and disillusion rather than engage, and yield results which might differ from those of more refined systems in the future. For robot technology to succeed in teaching, useful capabilities and potential pitfalls should be identified and considered.
The contribution of the current paper is to report on our experiences designing and deploying a robot teaching assistant in a university-level engineering course over a three-week period; we designed the robot’s capabilities based on interviews with teachers at a Swedish university, then adapted and analyzed the capabilities using the Engagement Profile tool. Considerations and future challenges are also described, to inform subsequent investigations.

2. Related Work

In a variety of studies, robots are the subject of learning [12,13,14,15,16], where they are used as learning material. In such settings, students can experience engagement by conducting practice-based learning, assembling robots and using them to test their hypotheses. In other settings, robots can be used as tools for enabling remote attendance, typically with children as the target learning group. For example, the Pebbles robot was used to allow sick children to remotely attend classes [17]. Also, Telenoid, a remotely operated robot, was used for children’s groupwork; increased participation and pro-activeness were reported, along with the suggestion that communication restrictions imposed by using robots can actually facilitate collaboration [18]. Additionally, a tele-teaching approach was reported with the AV1 robot, where students can use an avatar to remotely attend classes and be present in the classroom; in this approach, the classic robot aspects are more in the background [19].
Pioneering work on companion robots was conducted by Kanda and colleagues, who deployed a humanoid robot in an elementary school [20,21]. Children were free to interact and play with the robot outside of class time during a thirty-minute break after lunch. The robot was described to the children as only speaking English, which allowed controlling the complexity of the interactions and also motivated the students to use and learn English. As a result, the children learned new vocabulary with the robot present. Such robots have also been used as tutors to stimulate scientific curiosity in children [22], as well as to improve children’s performance, engagement, and motivation in learning sign language [23].
One question is how such a tutor robot should interact during learning. One study reported that children seemed to learn better when a NAO robot acted more like a peer than a teacher [24]. However, in general, serious robots might be more effective in serious tasks than playful robots [25], and role assignment in educational HRI has been observed to be a complex and dynamic process which can be difficult to control [26]. Additionally, although social behaviors can be effective for engaging students, they can also in some cases distract children, and hence should be incorporated into interactions with care [27]. Furthermore, positive effects have been observed from personalizing the behavior of two autonomous robots, which acted as learning companions for children [28].
In other studies, robots acted as teaching assistants in the classroom. A RoboSapien robot was used in an elementary school for five weeks; it read stories using different voices, led recitals, provided friendly feedback, gave quizzes, and moved when students asked [29]. Similarly, a NAO robot was used with children to read out vocabulary words with pictures shown on slides behind it and to pantomime the meanings, while also providing entertainment such as singing and dancing [30]. As a result, it was observed that students learned faster and learned more words, compared to a control group. Additionally, a NAO robot was used in two studies, with children with autism spectrum disorder and with children who were second language learners, observing increased participation and involvement in language learning [31].
Thus, various robotics studies have been conducted with children. Further details can be found in a number of reviews which have been made available (e.g., regarding the use of social robotics for early language learning [32]), and we also provide some examples of robots in education in Table 1.
Fewer studies have been conducted with adults. An idea was reported that science students with disabilities could use robots to conduct experiments remotely [36]. Furthermore, a study carried out with university students found that the physical presence of a robot tutor facilitated learning outside the classroom [8]. Outside of robotics, an artificial intelligence was used for an online course to answer frequently asked questions, which worked so convincingly that students reportedly did not know they were interacting with a non-human teaching assistant [37,38].
Why have studies on robot agents up until now focused predominantly on children? It has been pointed out that children can have problems with making friends and bullying [33]. It has also been stated that “in general younger children are more enthusiastic about robots” [39], citing a work which dealt with elementary to high school students but not adults [34]. Furthermore, this latter work stated: “It is also noteworthy that the research interest in HRI in classrooms has been largely skewed towards elementary school settings.” We speculate that there could also be a feeling that robots are enjoyable due to their novelty, and therefore more applicable to the domain of children—whereas the adult world has sometimes been perceived in the past as a place for work and seriousness rather than emotions like enjoyment, although this perception is changing [40].
It is not yet clear if the results of studies with children will also apply directly to adults, e.g., university students. In general, adults typically have rich specialized knowledge and experience, especially at the master’s level, and are more self-directed and needs-driven than children [41]. As well, there can be large differences in the degrees of knowledge adults possess, which can cause stress for teachers [42]. Robots can be programmed with a wide range of encyclopedic knowledge, which could be useful to overcome knowledge differences. Moreover, language studies have shown that robots are more successful when students have some ability and interest [7]. Additionally, in adult classrooms, where the students have spent years gaining expertise, face-saving becomes more important, as being judged by another adult can be humiliating [41]; making mistakes in front of a robot could be less embarrassing than in front of a human teacher.
In regard to self-direction, it could be easier for a robot to keep adults’ attention. Some children have been reported as not listening to a robot’s quiz and covering its eyes with their hands [22]. Various abusive behavior, including kicking and punching, by young people toward robots has also been reported [43,44]. By contrast, adults at universities have more freedom to select what they will study, picking majors and courses. However, adults can also experience various needs, responsibilities, and worries which are not typical for children—from financial factors such as part-time work, loans, and mortgages, to caring for dependents and age-related health problems—which can lead to mental fatigue. In such cases, the communicative power of robots using visual, aural, and possibly haptic modalities could facilitate learning.
Thus, previous work suggested that robots could also be useful in the context of adult learning, but it was not clear to us what kinds of robot capabilities would be desirable or how interactions could be structured to be engaging.

3. Materials and Methods

To investigate how a robotic teaching assistant can be used at the university level, we performed an experiment at the Department of Intelligent Systems and Digital Design (ISDD) at Halmstad University in Sweden, as described below. After gathering requirements from the engineering teachers, we selected a course and a robot to facilitate lectures as a teaching assistant. We used an iterative approach, starting with an initial design of the robotic teaching assistant, which was analyzed using the Engagement Profile tool and updated over the course of three weeks.

3.1. Robot Teaching Assistant Capabilities

A wide range of tasks can potentially be performed by teaching assistants, including tutoring support, grading assignments and tests (also invigilating), assisting students with special needs, replying to emails, and answering questions during office hours. To select capabilities which might usefully be incorporated into a teaching assistant robot, we conducted a brainstorming session during a regular weekly meeting of the teachers at the ISDD. We asked for any comments the teachers might have about where a robot could be helpful, especially regarding problems the teachers had faced in the classroom before. As a result, we identified challenges regarding, e.g., fatigue (specifically of the voice), absences, ambiguity, distraction, and physical chores. From this list, we suggested six capabilities of interest for the robotic teaching assistant: C1—reading, C2—greeting, C3—alerting, C4—remote operation, C5—clarification, and C6—motion. We describe these capabilities in Table 2 and elaborate in more detail below.
Speech is an important communication modality in classroom teaching, but excessive speaking can be tiring for lecturers, who typically have a heavy workload, and monotonous for students. As examples, our staff mentioned that lectures typically last two hours and oral exams can last two days for large classes. To address this challenge, a robotic assistant can orally present material and moderate quizzes (C1). Further, a robotic assistant can be used to greet the students (C2).
Students and teachers can miss classes for various reasons, such as illness and traveling. In such cases, video conferencing is an option, but it can require people who are present to spend time setting up equipment (e.g., microphones, speakers, and viewing angles). Robotic teaching assistants can be used to overcome this challenge via remote operation (C4).
Teachers try to scaffold students’ understanding while together tackling appropriately challenging material, but for various reasons, e.g., because prior knowledge typically varies by student, additional help can occasionally be desirable. An example given at the brainstorming meeting at the ISDD was the challenge of visualizing data when teaching topics such as machine learning. This challenge can be addressed by clarification (C5) and reading (C1); i.e., automatically supporting the teacher by looking up and showing topics on a robot’s display while the teacher talks.
It can also be hard for a teacher to divide their attention during class between multiple factors, such as lecture content, timing, and students, which can lead to errors [45]. Examples included making coding mistakes that students did not point out, blocking part of the view of a presentation, forgetting students’ names and backgrounds, and not immediately seeing a student with their hand up. This challenge can be addressed by an alerting functionality (C3), where the robotic assistant alerts the teacher when needed. Further, the greeting functionality (C2) can be useful in case the teacher forgets students’ names.
Common physical tasks, such as handing out materials, closing and opening doors and windows, and lowering projection screens, can also consume class time and require effort. Using the motion capability (C6), the robotic assistant can provide locomotion and object manipulation to carry out such tasks.
Given the exploratory nature of our study, we decided to focus most on capability C1 (reading) and especially on quizzes, which were described as useful low-hanging fruit and could engage students by encouraging active learning. We did not further consider additional suggestions that were not clearly related to a problem; these included incorporating playfulness, personalizing interactions by calling students by name or speaking their native languages, and using a range of different voices and dialects.
To realize these capabilities, a mid-fidelity prototyping approach was followed, to balance obtaining accurate insight into how a completed robot would be perceived with allowing observations to be collected quickly and practically [46]. In line with grounded theory and an intention to explore through observations, we also drew on an idea from previous work that mistakes made by a robot can have positive effects in helping students to feel less self-conscious about performing perfectly [35].

3.2. The DEIS Course

As a testbed for designing our robotic teaching assistant, we selected a course called Design of Embedded and Intelligent Systems (DEIS), which is a double-credit compulsory course for second-year master’s degree students in the Embedded and Intelligent Systems Programme at Halmstad University. The course aims to improve both the breadth and depth of the students’ conceptual and practical knowledge in a collaborative, creative, and critical manner. Students attend lectures and labs which are supervised by eight teachers, while also working independently on a problem-solving project in small groups. Lectures cover a wide range of topics, including statistical inference, robotics, sensor fusion, embedded programming, motion planning, simulation, communication, and image processing; the project involves developing platooning capabilities for small wheeled robots. Learning is evaluated via a de-contextualized oral exam and a contextualized written report (50% each). The oral exam is in “ordered outcome format”, in alignment with the structure of observed learning outcomes (SOLO) taxonomy of knowledge, in which students receive questions requiring uni-structural, relational, and creative responses [47].
We assumed that this course would be appropriate due to its contents, as it seemed intuitive to use a robot to teach a course about robots; the students study robotic components during the course, and can enrich their knowledge by seeing these components function together in a working robot. As the course is difficult to teach, a need for teaching assistance arose; a high level of student engagement is required because students, coming from many different backgrounds such as data mining, intelligent vehicles, electronics, and communications, are expected to gain knowledge that is both wide-ranging and deep.

3.3. The Baxter Robot

Several robots were available for use at the ISDD, including NAO robots [48] and a Turtlebot [49]. A Baxter robot on a Ridgeback mobile base (shown in Figure 1 and hereafter referred to as Baxter) was selected due to its versatile interactive capabilities and engaging appearance. To interact with people, Baxter has various actuators: two seven degrees-of-freedom arms capable of moving objects up to 2 kg, an omni-directional mobile base enabling movement within a classroom, speakers, and a display. Baxter also has a number of sensors, including a microphone, cameras in its head and wrists, and force sensors in its arms. Thirteen sonar sensors situated in a ring around its head, IR range sensors in its wrists, and a laser and inertial measurement unit in its base were available but not used in the current study. Baxter’s height (180 cm) was also considered to be a potential advantage, as height plays a key role in how attractive, persuasive, and dominant a robot is perceived to be [50], and such qualities are linked with engagement [51].
Visual and aural recognition was conducted using the open source computer vision library (OpenCV) [52] and the speech recognition toolkit CMU PocketSphinx [53]. Robot behaviors were triggered and robot states changed by the teacher pressing buttons on a graphical user interface running on a desktop, with the Robot Operating System (ROS) [54] used for inter-machine communication. A face to show on the robot’s screen was designed by people from the communication department of Halmstad University, from which we constructed variations to convey various emotions and states. Gestures were recorded and played back using Baxter’s software development library for Python.
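As an illustration of this architecture, the following is a minimal sketch, not our deployed code, of how a desktop GUI can publish behavior triggers over ROS; the topic name /ta_robot/behavior and the behavior labels are hypothetical, and a node on the robot would subscribe to this topic and play back the matching speech or gesture sequence.

```python
# Minimal sketch: a teacher-facing GUI that publishes behavior triggers
# over ROS. The topic name and behavior labels are hypothetical.
import Tkinter as tk  # Python 2, as commonly used with ROS at the time
import rospy
from std_msgs.msg import String

rospy.init_node('teacher_gui')
pub = rospy.Publisher('/ta_robot/behavior', String, queue_size=1)

root = tk.Tk()
root.title('Robot TA control')
for behavior in ['greet', 'quiz_1', 'alert_teacher', 'goodbye']:
    # Each button publishes one behavior label; a subscriber on the
    # robot maps the label to a prepared speech/gesture sequence.
    tk.Button(root, text=behavior,
              command=lambda b=behavior: pub.publish(String(b))).pack(fill=tk.X)
root.mainloop()
```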

3.4. The Engagement Profile

For the evaluation of the capabilities and the design of the robotic teaching assistant, we desired some way to assess the degree to which our robot engaged students. The Glossary of Education Reform [55] refers to engagement as follows: “In education, student engagement refers to the degree of attention, curiosity, interest, optimism, and passion that students show when they are learning or being taught, ..., and the concept of ‘student engagement’ is predicated on the belief that learning improves when students are inquisitive, interested, or inspired, and that learning tends to suffer when students are bored, dispassionate, disaffected, or otherwise ‘disengaged.’” According to this glossary, forms of engagement include (a) intellectual, (b) emotional, (c) behavioral, (d) physical, (e) social, and (f) cultural engagement.
In our work, we use the Engagement Profile, which was originally developed to assess engagement factors for exhibits in science centres and museums [56], which are considered informal teaching arenas. We posit that the Engagement Profile can be applied to a setting where the robotic teaching assistant is used in a formal learning environment. Similar to installations in science centres and museums, the robotic teaching assistant represents an artifact that the students interact with during their studies and classes. Thus, we can assume that increased engagement by the students will help inspire and facilitate learning, as well as improve learning outcomes.
The Engagement Profile [57] quantifies the characteristics of installations along eight dimensions, each of which is given a value between 0 and 5. The dimensions of the Engagement Profile represent the degrees of competition (C), narrative elements (N), interaction (I), physical activity (P), user control (U), social aspects (S), achievements awareness (A), and exploration possibilities (E). External influences are not taken into account in the Engagement Profile, since these are not properties of the direct learning environment. Physical factors, such as noise, light, or smell, could play a role in the perception of engagement, but need to be handled outside the Engagement Profile. Properties that belong to the context, such as social factors, institutional factors, or recent incidents personally or globally, are excluded. However, these factors still need to be taken into account in the assessment process, e.g., as suggested for a different setting [58].
To adapt the Engagement Profile to a more formal learning environment with a robot teaching assistant, we replaced references to the original domain (i.e., installations in museums and science centres) by terms that are related to the use of a robot teaching assistant. The short form of the adapted version of the Engagement Profile is shown in Figure 2.
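To make the structure of the profile concrete, the sketch below shows one way such a profile can be represented and compared against a preferred range; the dimension values and ranges here are illustrative placeholders, not the actual values from Table 3 and Table 4.

```python
# Minimal sketch: representing an Engagement Profile (eight dimensions,
# each scored 0-5) and checking it against a preferred range.
# All values below are illustrative, not those of Tables 3 and 4.
DIMENSIONS = ['C', 'N', 'I', 'P', 'U', 'S', 'A', 'E']

# Profile of a hypothetical first-lecture design.
current = {'C': 1, 'N': 1, 'I': 2, 'P': 1, 'U': 1, 'S': 2, 'A': 1, 'E': 1}

# Preferred (min, max) per dimension for the teaching setting.
preferred = {d: (1, 3) for d in DIMENSIONS}

def below_preferred(profile, ranges):
    """Return the dimensions whose value falls below the preferred range."""
    return [d for d in DIMENSIONS if profile[d] < ranges[d][0]]

print(below_preferred(current, preferred))  # dimensions to strengthen
```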

4. Study

The Baxter robot was used in four classes over three weeks in Autumn 2017, conducted by the teacher responsible for the course. In the first week there were two classes, on Thursday and Friday, and in subsequent weeks only on Thursdays. The classes were conducted from 10:15 to 12:00. The classroom was kept constant, with the layout shown in Figure 3. The room was well-lit, and there was little noise from the outside. The study was conducted with 24 students (average age 26.8 years, SD = 4.7; 8 females, 16 males; from approximately ten different countries, with a majority from Asia).

4.1. Setup of the Study

The study followed the iterative design process described for a different application area [56]. From the description of the capabilities of interest and the context, we identified the ranges of preferred and suitable values of the Engagement Profile, as shown in Table 3. We also identified the influence of the classroom setting on the values of the Engagement Profile. The Engagement Profile of the robotic teaching assistant at the start of the experiments is explained in Table 4 and visualized in Figure 4. In this diagram, the green area shows the preferred values for our teaching setting, while the blue hatches show the values for the implementation of the first lecture.
Each week during the experiment, the students were sent a URL to a questionnaire that they were asked to answer. The URL was sent after lectures 2, 3, and 4, respectively. The questionnaires contained the questions given in Table 5, Table 6 and Table 7, two questions about gender and age group, as well as a field for a free-form comment. The questionnaires were identical each week. They were implemented using Google Forms [59], with a new form each week.
The students were involved in the assessment by answering a standardized questionnaire with eight questions, one for each dimension of the Engagement Profile. This questionnaire is given in Table 5. Further, we asked five more questions about their satisfaction with the learning experience, as shown in Table 6. To be better informed about which capabilities the students prefer, we asked a further six questions, shown in Table 7.
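As an illustration of how such responses can be aggregated into the weekly profiles discussed below, the following minimal sketch averages each dimension over all respondents; the CSV layout and column names are assumptions, not our actual form export.

```python
# Minimal sketch: averaging per-dimension questionnaire responses
# (0-5 scale) into a weekly Engagement Profile. The CSV column names
# ('C', 'N', ..., 'E') are assumed, not our actual export format.
import csv
from collections import defaultdict

DIMENSIONS = ['C', 'N', 'I', 'P', 'U', 'S', 'A', 'E']

def weekly_profile(csv_path):
    """Return the mean value of each dimension over all respondents."""
    sums, count = defaultdict(float), 0
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            count += 1
            for d in DIMENSIONS:
                sums[d] += float(row[d])
    return {d: sums[d] / count for d in DIMENSIONS} if count else {}
```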

4.2. Description of the Experiment

In the iterative design of our experiment, the experience design was changed each week in two ways: (1) to address changes desired by students, and (2) to test new possibilities and content for each robot capability. This led to a chain of designing, implementing, observing, and obtaining feedback to adjust the requirements for the next iteration, as is summarized in Table 8. In the following diary, we describe the actions and observations we made; some examples of the teaching assistant robot interacting with the class are also shown in Figure 5.

4.2.1. Day 1

The initial design was based on the outcome of a brainstorming session with the teachers of the ISDD, as described in Section 3.1, which highlighted the six potentially useful capabilities C1–C6. Except for the robot, the basic structure of the course held in the previous year (lecture format) and its contents were retained.
C1
Reading: quiz content was split between the lecture slides and the robot, based on the assumption that the slides would be better for clearly communicating some complex information such as equations, while the robot would be more interesting in general for communicating simpler content such as quiz questions. Six slides in the lecture presentation were allocated for quizzes. On reaching a quiz slide, the teacher pressed a button on the GUI to trigger the robot to ask questions and show a puzzled face on its display. Quiz topics included computational logic and time complexity. For example, one quiz slide showed a deterministic and non-deterministic state machine and some strings; the robot asked the students to consider which strings would be accepted by each.
C2
Greeting: the robot was set up to introduce itself at the beginning of class, stating its name, describing its role as teaching assistant, and priming students’ expectations that it was a work in progress, while waving a hand and smiling; at the end of class it said thank you and goodbye, again waving.
C3
Alerting: the robot looked toward the teacher and stated that the teacher had forgotten to explain a topic.
C4
Remote operation: students were invited during a break to teleoperate the robot using a handheld controller.
C5
Clarification: the robot automatically showed some example images on its display based on recognizing keywords spoken by the teacher: specifically, the names of some common charts, such as ‘Venn diagram’, ‘histogram’, ‘pie chart’, and ‘Gantt chart’ (a sketch of this keyword-to-image mapping follows this list).
C6
Motion: the robot took an attendance sheet from the teacher in its gripper and moved forward to hand it to the nearest student.
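The keyword-based clarification above (C5) can be sketched as follows; this is a minimal illustration assuming a recognizer such as PocketSphinx that yields hypothesis strings, and the image paths and display callback are hypothetical.

```python
# Minimal sketch of the clarification capability (C5): map recognized
# chart names to example images. Image paths and the display callback
# are hypothetical; a recognizer such as PocketSphinx is assumed to
# deliver lowercase hypothesis strings.
KEYWORD_IMAGES = {
    'venn diagram': 'images/venn.png',
    'histogram': 'images/histogram.png',
    'pie chart': 'images/pie.png',
    'gantt chart': 'images/gantt.png',
}

def on_speech_hypothesis(text, show_image):
    """On a keyword match, show the corresponding image on the robot."""
    for keyword, path in KEYWORD_IMAGES.items():
        if keyword in text.lower():
            show_image(path)  # e.g., publish the image to Baxter's display
            break
```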
After the lecture, the students were asked to anonymously describe their experience. Over half the class described the initial experience with the robot in class as positive (15 people), using the adjectives good (5), awesome (4), cool (2), and fun (2); one third thought it was engaging (8), describing the experience as interesting (5), exciting (3), and motivating (2). Five people had various neutral questions about the robot, and nine people voiced suggestions for improvement, six of which were to improve the sound.

4.2.2. Day 2

Based on the feedback from day 1, which was mostly positive, the system was kept the same, just increasing the volume of the robot’s speech. Further, we tried to enhance the experience as follows:
C1
Reading: six quizzes were conducted, in regard to circuits, connectors, computers, math, and programming languages.
C2
Greeting: the robot greeted the class at the beginning and said goodbye at the end of the lecture.
C3
Alerting: the robot advised the teacher at one point that a description was not clear.
C4
Remote operation: the students listened to a former master’s student describe her experience by speaking remotely through the robot.
C5
Clarification: the robot recognized keywords which the students said and displayed related images on its display.
C6
Motion: the robot shook hands with students who wished to do so during a break.
At the end of day 2, the students were asked to answer the questionnaire with the questions shown in Table 5, Table 6 and Table 7. The students had until the next lecture to respond to the questionnaire. Nine students answered this questionnaire. The analysis of this questionnaire indicates, for the dimensions of the Engagement Profile, that the students desired more exploration (E), user control (U), physical activity (P), and social interaction (S), as can be seen in Figure 6 (w1).
For questions Q1–Q5, all responses were on the positive side. Specifically, Q5 and Q1 indicated that the students would like to repeat the experience, and that they liked it. However, the students were not so convinced about the learning effect of the experience. Regarding the capabilities, remote operation and extra content scored highest. The detailed results are shown in the section for week 1 of Table 9.
Based on these results, we applied the following changes for day 3: (a) To increase exploration, the robot suggested some sources for extra learning. (b) To increase exploration and user control, the robot gave students the choice to hear more about some topics, or take additional quizzes. (c) To increase the physical and social interaction, the students were asked to wave their hands to indicate interest and to come see the robot during the break.

4.2.3. Day 3

Based on the feedback and evaluation from the previous day, the setup of the robot for day 3 was as follows:
C1
Reading: quizzes on math, pattern recognition, and statistics.
C2
Greeting: hello, bye (2).
C3
Alerting: the robot alerted the class that it was time to go for a short outing to a workshop with tools.
C4
Remote operation: video conference with remote person (the second author in Oslo, Norway).
C5
Clarification: the robot described some extra resources.
C6
Motion: during the break, handshakes and face recognition.
At the end of day 3, the students were asked to answer the questionnaire with the questions shown in Table 5, Table 6 and Table 7. The students had until the next lecture to respond to the questionnaire. Only five students answered this questionnaire. The analysis of this questionnaire indicates the dimensions of the Engagement Profile that the students desired more of: narrative (N), user control (U), visible achievements (A), and the possibility for exploration (E), as can be seen in Figure 6 (w2). We interpreted this to mean that (a) the storyline and roles should be more evident, (b) there should be more possibilities to go in depth with extra content to explore on one’s own, (c) the interaction should be more influenced by what the students did, and (d) there should be more feedback on how well the students are doing.
For questions Q1–Q5, as well as for the capabilities C1 to C6, we abstain from comments, as the number of responses is too low. See the section for week 2 of Table 9 for details.

4.2.4. Day 4

Based on the feedback and evaluation from the previous week, we added several new elements: (a) The robot presented an outline of the “storyline” for that day’s class (what activities would be conducted and why) and clarified roles. (b) For each main topic of this lecture (robotics and computer vision), the robot gave the students some free time to study and take quizzes; the robot gave a reminder when it was time to move on. (c) Baxter was set up to recognize faces and provide feedback for specific students at the end of the lecture.
The following functionality was implemented:
C1
Reading: sensors, actuators, computer vision, summary (6).
C2
Greeting: hello (storyline, roles), bye (2).
C3
Alerting: time to change topics or switch between listening to lectures and exploring.
C4
Remote operation: make the robot’s gaze follow people moving left to right or vice versa.
C5
Clarification: showing quiz questions and answers on the robot’s display.
C6
Motion: the robot handed out robot kits to a representative from each project group.
For day 4, additional functionality was implemented: facial recognition for personalization, and a looking-around feature to show awareness. Face recognition was implemented using OpenCV [52] and a support vector machine (SVM) classifier with local binary pattern features, trained on data collected from the students. The robot recognized faces using its head camera and showed the faces on its display.
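A minimal sketch of this pipeline is given below, under the assumption of LBP histograms fed to an SVM; the preprocessing details (face detector, crop size, LBP parameters) are illustrative and not necessarily those of our implementation.

```python
# Minimal sketch: face recognition with local binary pattern (LBP)
# features and an SVM. Parameters and preprocessing are illustrative.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    """Normalized histogram of uniform LBP codes for a face crop."""
    lbp = local_binary_pattern(gray_face, points, radius, method='uniform')
    hist, _ = np.histogram(lbp.ravel(), bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def train_recognizer(face_crops, student_ids):
    """Fit an SVM on LBP histograms of grayscale face crops."""
    clf = SVC(kernel='linear')
    clf.fit([lbp_histogram(f) for f in face_crops], student_ids)
    return clf

def recognize(clf, frame, cascade):
    """Detect faces with a Haar cascade and classify each crop."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    names = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        names.append(clf.predict([lbp_histogram(crop)])[0])
    return names

# Usage: cascade = cv2.CascadeClassifier(
#     cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
```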
For the look-around feature, the intention was for the robot to look toward the part of the classroom which was most active, to show awareness. For example, if a student waved her or his hand, the robot could demonstrate awareness by looking toward them. Background subtraction was used to extract motion from the students. In other words, moving an arm in front of the robot’s camera with a non-skin-colored background resulted in a difference between the colors of pixels in image frames over time, which could be quantified. Images from the robot’s head camera were split into two regions, left and right. The robot was given two states, “head left” and “head right”, which entailed looking toward students on the left or right of the classroom. The regions were defined with overlap (hysteresis) based on the robot’s state, to avoid jitter between states due to motion in the middle of the image. Further, the robot was made to wait a short time after moving, to avoid reacting to its own motion.
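One way to realize this logic is sketched below; the background subtractor, switching margin, and state names are illustrative assumptions rather than our exact implementation.

```python
# Minimal sketch: look-around logic with background subtraction and
# hysteresis. The switching margin and subtractor choice are illustrative.
import cv2

class LookAround(object):
    def __init__(self, margin=0.1):
        self.subtractor = cv2.createBackgroundSubtractorMOG2()
        self.state = 'head_left'
        self.margin = margin  # overlap band around the image middle

    def update(self, frame):
        """Return 'head_left' or 'head_right' for a new camera frame."""
        mask = self.subtractor.apply(frame)
        h, w = mask.shape
        left = cv2.countNonZero(mask[:, :w // 2])
        right = cv2.countNonZero(mask[:, w // 2:])
        total = left + right
        if total > 0:
            share_right = right / float(total)
            # Hysteresis: switch only on a clear majority of motion on
            # the opposite side, so mid-image motion does not cause jitter.
            if self.state == 'head_left' and share_right > 0.5 + self.margin:
                self.state = 'head_right'
            elif self.state == 'head_right' and share_right < 0.5 - self.margin:
                self.state = 'head_left'
        return self.state
```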
At the end of day 4, the students were asked to answer the questionnaire with the questions shown in Table 5, Table 6 and Table 7. The students had until the next lecture to respond to the questionnaire. Twelve students answered this questionnaire. The analysis of this questionnaire indicates, for the dimensions of the Engagement Profile, that the students desired more narrative (N), social activity (S), visible achievements (A), and the possibility for exploration (E), as can be seen in Figure 6 (w3).
For questions Q1–Q5, all responses were on the positive side. Specifically, Q1 and Q2 indicated that the students liked the experience and found it engaging. Still, the learning effect scored lowest here. Regarding the capabilities, the alert functionality, remote operation, and extra content scored highest. The detailed results are shown in the section for week 3 of Table 9.

4.3. Observations

Our experiment design addressed a complex scenario and was performed in an exploratory way. While the experiments were ongoing, unexpected events happened that influenced the further path of our experiments. One of the authors, therefore, took on a role as an observer. A diary of events is presented below, and some examples are shown in Figure 7.

Day 1

The robot’s voice, which had seemed sufficiently loud during development, was not perceived as loud enough by the students. Also, when the robot handed an attendance sheet to a student, the student reached out to grab the sheet, but when the robot did not immediately let go, the student retracted their hand and the sheet fell to the ground.

Day 2

Speech recognition was difficult because students sometimes did not speak loudly despite being requested to do so, possibly out of shyness, and the teacher often had to repeat the students’ words in order for the robot to react.

Day 3

After class some students described a difficulty with waving to vote for or against exploring material, due to feeling reluctant to oppose the wishes of their fellow students. One suggestion was to use an online poll for anonymity, and to assign tasks for a longer period of time during which students could be free to move around, to also allow for physical activity.
The robot’s scheduled reminder was not used because a student reminded the teacher before the robot could. The teacher had said that the class would leave for their outing at 11:35, but a student reminded the class at 11:30. Thereafter, the students started to pack their belongings, and because of the commotion, the last comments of the robot were also not effective; the teacher had to call for the students to give their attention while the robot was speaking. Handshakes with face recognition were also not demonstrated due to insufficient time, as more time than expected was required by the students to solve the math problems given by the robot.

Day 4

Due to a scheduling mistake, another class was held directly before the DEIS course class, and there was no time to prepare the robot; therefore, face recognition could not be shown during class time and was demonstrated afterwards. Quizzes appeared to work well, with some students gathering around the robot during each time slot allocated for exploration. The downside was that small groups could interact but not the whole class. Also, handing out materials was slow, as the robot’s motions were kept slow out of safety concerns. The robot’s phrasing when reminding of the time elicited some laughter: when it said “It is time”, a student said, “Time for what?” and laughed.

Afterwards

The robot’s power adaptors were stolen, luckily after the last day. The classroom was shared with other classes, and although the classroom was to be locked outside of class time, some hurry or mishap resulted in the door being left unlocked.
Another interesting finding was that the robot’s power cables were consistently removed from the wall whenever we tried to let the robot charge overnight. Moreover, someone opened a panel on the robot and disconnected power coming from the robot’s main battery to an inverter. We do not know if this was a security guard or a teacher, but such problems could affect the use of robots in venues which are shared by many different users.

5. Discussion

In our research design, we explored possibilities for using a robot as a teaching assistant by using the Engagement Profile to vary six distinct capabilities, which were measured against user feedback regarding satisfaction and some engagement factors. The Engagement Profile and this methodology have been applied earlier in connection with evaluating exhibits in science centres and museums. Using this design methodology, one performs several iterations in which the design of the robot teaching assistant is altered by changing its capabilities, followed by an evaluation step that gives evidence for how to make further changes.
In our research, we used the Engagement Profile as an evaluation platform. For this purpose, we adjusted the Engagement Profile to the use case of robots in a teaching context. The transition from installations in science centres and museums, for which the Engagement Profile originally was designed, to robots in a teaching context seemed straightforward. Further, the categories in the Engagement Profile, i.e., competition, narrative, interaction, physical activity, user control, social aspects, achievements, and exploration, appeared to be suitable for the analysis. Also, the questionnaires integrating the Engagement Profile questions, the user satisfaction questions, and the capability questions appeared to constitute a suitable analysis procedure.
By using a mid-fidelity prototyping approach, we were able to implement the six capabilities and adapt them each week. Although performance was not always perfect, we observed, as previous studies have, that weakness could be perceived as a plus, as when mistakes lighten the mood of the class [35], or perhaps by eliciting altruistic feelings to protect and help an imperfectly functioning robot [60]; for example, the students laughed when the robot reminded the class it was time to do something but did not specify what it was time for.
Since the actual design of the robotic teaching assistant setup was at the low end of the desired ranges (cf. Figure 4), we observe that the majority of the participants desired more of all eight Engagement Profile dimensions. However, it seems puzzling that the participants did not want much more of the physical dimension, since being physical is a vital part of the nature of a robot, which, as noted previously, would seem to make robots be perceived as more engaging than virtual agents or objects. A possible reason is that the prototype motion capabilities were quite simplified, and students might not have been able to imagine them resulting in a significant reduction of the teacher’s workload, which could benefit the class. One could also reason that appliances that are less capable of physical interaction than a robot (such as Google Home or Amazon Alexa) could be sufficient for use as a teaching assistant, which might be desirable due to reduced cost. However, we believe that the property of a robotic teaching assistant being recognized as a tangible entity which can communicate and physically affect us and the world around us will be essential, especially in the future as designs become more complex and powerful.

5.1. Limitations

The current work has various limitations regarding the study design, our approach to designing for engagement, and the robot’s capabilities. We discuss these limitations and their potential impact below.
Our study was conducted with young adults of various nationalities in an engineering master’s course, but there are many variables, including age, culture, gender, and expertise, which can affect the perception of robots. For example, age in adult classrooms can differ substantially. Young adults can be uncertain about their status as adults, with low rates of marriage, parenthood, and occupational experience [61], and can also have different media preferences than older adults [62]. As well, university classes in Sweden exhibit high cultural diversity, especially at the master’s level [63], and cultural differences in attitudes toward robots have been noted [64]. Moreover, men have expressed more positive attitudes toward robots than women [65].
In regard to the Engagement Profile, the online questionnaires were convenient for collecting statistics but alone did not provide a way to find out why the students thought the way they did (i.e., the kind of probing which interviews allow for), which complicated efforts to improve the system.
Also, the students had a week to answer the questionnaires, which was not optimal. It would have been better if the responses had been given immediately after the learning experience, as there is evidence in the literature that results can be biased when the memory is not fresh [66]. However, there were practical reasons for performing the surveys as described previously.
Additionally, participation in the questionnaires was low, possibly because the questions were the same each week, we did not give incentives to the students, and the questionnaires were to be completed outside of class time. For this reason, the responses might have been biased and not representative of the class as a whole. Thus, the low number of responses limited our analysis and led to some outcomes that we could only use as indications; this was especially so in week 2, when only five students responded, so statistical analyses might not be conclusive for such low numbers of responses. However, the results still give some indications that the participants reacted positively to the experiments, as shown in Table 9.
Also, the impact of changes made in the design was not as large as expected. Possibly, the introduction of a new artifact such as a robot teaching assistant had a greater influence on satisfaction and engagement than adjusting the robot’s capabilities. In hindsight, we recognize that the changes to the capabilities might not have been large enough between the iterations to show an impact on the Engagement Profile and satisfaction ratings. Probably, more iterations would have been useful, so that the novelty factor of using a robotic teaching assistant would be reduced, leaving more room for the users to compare the changes between iterations.
Some limitations also apply to the mid-fidelity prototyping approach we employed, where capabilities were implemented only as prototypes or mock-ups. Thus, the teacher needed to construct situations and effectuate some actions by the robot manually, as capabilities were not fully autonomous. Being forced to press buttons to trigger events required the teacher to concentrate on the robot instead of on teaching. We assume that this had an impact on the students’ experience and satisfaction. Further, content for the robot, such as quizzes and gestures, needed to be prepared in advance. As there is no authoring system available, this can require time and resources, as well as being inflexible.
Moreover, the six capabilities we investigated were derived from functionality requirements, and consideration of their usefulness was conducted from a theoretical perspective. We did not explicitly consider the abstract property of a robotic teaching assistant being recognized as a tangible entity that one can communicate with, or the degree to which the robot being evaluated is actually perceived as possessing the different capabilities. For future studies, we suggest that the impact of such properties be included in the research design.
In conducting our study, some unexpected behaviors were also observed, as described above, such as the mistimed hand-over of the attendance sheet. Such events can influence how the students think about the robot, but we believe this was not a critical problem for our work for the following reasons. First, the contribution of the work lies elsewhere: in reporting on how an iterative design tool (the Engagement Profile) from a different domain can be applied to designing a robot teaching assistant, based on requirements we identified from teachers. Also, the students reported a positive general impression of the robot despite the few failures which occurred, and the literature reports that mistakes from a robot can actually have some positive effects in helping students to feel less self-conscious about performing perfectly [35]. Moreover, it was known from the start that some failures could be expected due to the challenging setting: the paper explores testing “in the wild” in a real-world setting where there is really no way to avoid all troubles from occurring [67,68]—there is no robot we know of which already has the capabilities we described, and even human teachers can make mistakes in real class settings, as our teachers described. The students also were aware from the start, from the robot’s self-introduction, that the robot was a work in progress.
Conversely, we think it is a strength of the adopted approach that we report on such observations, because our exploratory experiment (based on accepted methodology such as grounded theory and prototyping approaches [69]) is designed also for the purpose of trying to expose such failures early on. It is known that publication bias and the file-drawer effect are serious problems which severely impede the scientific community’s understanding [70], and hiding or designing around failures runs the risk that others will make similar mistakes, which would be desirable to avoid. Since there is increasing interest in this area, with many courses around the world starting to use robots, we think there is a use for identifying such potential failing points early on by adopting such a research design.

5.2. Future Work

As described previously, various past studies have already reported good results using robots with people of different ages and cultures, and with both non-engineering students (e.g., in language classes) and engineering students, but more work is needed. Future work will involve comparing different demographics (e.g., young adults versus the elderly, different cultures, and different fields).
For the Engagement Profile, we shall use the questions we conceived to further develop this analysis and design methodology for artifacts in a teaching context. We will also explore the possibility of dedicating time at the end of classes to answering questionnaires, in order to gain more responses, and of conducting interviews to gain insight into why the students thought the way they did.
Regarding capabilities for a robot teaching assistant, we focused on six capabilities which our teachers indicated as desirable, but there could be various other qualities which could be useful.
Also, in general, we used buttons and timers to trigger robot behaviors such as quizzes and reminders, but the robot itself should be able to flexibly determine the right time to act; this includes recognizing when students are listening.
As well, content such as quizzes or suggestions for extra reading had to be prepared ahead of time manually by the teacher, and it was not possible to take into account what each student knew; future work will explore how the robot itself can construct or select content, which will have benefits such as allowing for continuous evaluation. A fundamental related problem is analysis of student behaviors and responses, which can be addressed by approaches such as educational data mining (EDM) and learning analytics (LA) [71]. For example, text mining can be conducted on a student’s verbal or written responses; the abstract structure underlying an answer can be represented using a tree or in the case of programming in terms of how input data are transformed [72]. Some other typical applications of such approaches include characterizing students by learning behaviors, knowledge levels, or personalities [73].
Such analysis should be utilized to plan a robot’s behaviors. For example, some work has already explored how deep learning can be used for an intelligent tutoring system (ITS) to select quiz questions in an optimal way for a single student based on knowledge tracing [74]. Further work is required to explore how to do so for groups of students in a class. This is not a simple question as decisions must be made about which students, if any, to prioritize: weaker students who require help to meet minimum learning requirements, or stronger students who are trying hard to learn. Additionally, another attractive prospect would be if a robot can itself learn to improve its knowledge base, e.g., from YouTube videos [75]; domain-specific learning has been demonstrated in various studies but general learning across various topics, and machine creativity for generating new content, are still desirable goals for future work.
As well, insight from various previous studies could be used to take these capabilities to a higher level of technological readiness: Greetings, a useful way to enhance engagement [76], could be more effective if personalized. For example, a system was developed which could greet people personally, proposing that robots can remember thousands of faces and theoretically surpass humans in ability to tailor interactions toward specific individuals [77]. However, much future work remains to be done in this area in regard to how content can be personalized, e.g., by recognizing features such as clothing or hairstyle, or leveraging prior knowledge of human social conventions.
As well, for the alerting capability, adaptive reminding systems have been developed which seek to avoid annoying the person being reminded or making them overly reliant on the system, such as for the nurse robot PEARL [78]. We expect that this challenge will become more difficult when dealing with a group of people, where some times and actions might be good for some students but not for others. For example, some students might finish an exercise faster than others, but should a robot wait before initiating discussion until the majority of students have stopped making progress on the exercise; and if so, how can this be recognized? Possibly, a robot could detect, for each student, when the student’s attention is no longer directed toward a problem (e.g., by tracking eye gaze [79]), when nothing has been written for a while, when verbal behavior focuses on an unrelated topic, or when potentially informative emotions such as satisfaction, frustration, or defeat are communicated—although we expect accurate inference to be quite challenging given the complexity of human behavior.
Various interesting possibilities for improved remote operation are suggested in the literature. For example, a robot could be used by students with a physical disability to carry out physical tasks [36], or students with cancer to remotely attend classes [19]. Moreover, it was suggested that robots’ limited capabilities actually facilitate collaboration, e.g., by increasing participation, proactiveness, and stimulation for operators [18,21]. Future work could consider how to share time on robots between multiple students, and provide enhanced sensing capabilities which human students might not have, like the ability to zoom in on slides. Moreover, ethical concerns should be investigated, such as if students using robots to attend classes could be vulnerable to being hassled or feeling stigmatized; or conversely, who is accountable if a student uses a robot to cause harm?
Clarification and automatically supporting the teacher (e.g., by looking up and showing topics on a display while the teacher talks) could be improved by incorporating state-of-the-art speech recognition and the ability to conduct information retrieval with verbose natural language queries, such as is incorporated into IBM’s Watson, Apple’s Siri, Google Assistant, and Facebook Graph Search [80]. How to display such information in an optimally informative way is an interesting question for future work, both for individual students and groups of students.
Physical tasks such as handing out materials could also be improved by building on previous work: for example, approaches have been described for making hand-overs natural and effective [81], for handing materials to seated people [82], and for approaching people to deliver handouts [83]. For example, a robot's velocity profile can be controlled to be distinctive and to evoke trust [84] in students who might not be accustomed to interacting with robots. Mechanisms for dealing with complex and narrow human environments with obstacles could also be considered; for example, a large robot could deploy a drone to deliver handouts to students in locations which are difficult to access directly.
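To make the velocity-profile idea concrete, the sketch below generates a standard minimum-jerk trajectory, one common way to realize the "slow in, slow out" principle discussed in [84]; the distance, duration, and sample rate are arbitrary examples, not values from the study.

```python
import numpy as np

def minimum_jerk(start, goal, duration, hz=50.0):
    """Positions along one axis whose velocity eases in and out of the move."""
    t = np.linspace(0.0, duration, int(duration * hz))
    tau = t / duration
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5  # smooth 0 -> 1 blend
    return start + (goal - start) * s

positions = minimum_jerk(start=0.0, goal=0.4, duration=2.0)  # meters
velocities = np.gradient(positions, 1.0 / 50.0)  # ~0 at ends, peak mid-motion
```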
As well, the question of whether an embodiment is truly needed arises again. As noted, various studies have suggested the usefulness of robots as an engaging medium within certain contexts. For example, a human-like robot was considered more enjoyable than a human or a box with a speaker for reading poetry [85]; the authors speculated that the robot was easier to focus attention on than the box, and that its neutral delivery left more room for interpretation, allowing people to concentrate and immerse themselves more. However, five of the six capabilities we explored, all except physical tasks, do not strictly require an embodiment. Not using an embodiment could be cheaper; moreover, using speakers and screens distributed through a classroom (e.g., at desks) could make it easier for all students to see and hear equally well. This suggests the usefulness of future work examining cost/value trade-offs for having an embodiment within various contexts. Furthermore, additional mechanisms such as emotional speech synthesis could be used to give capabilities such as reading, greeting, and alerting a more engaging delivery.

6. Conclusions

The contribution of the current study lies in reporting our experiences exploring a process for designing an engaging teaching assistant robot. This involved developing a robotic teaching assistant for a university-level engineering course and observing how this robot was received by the students. We performed an analysis once a week over a three-week period using questionnaires that contained questions about robot capabilities and user satisfaction, together with an adapted version of the Engagement Profile. This process is also summarized in a video accompanying the current article (Video S1: Supplemental video).
During the study, we examined capabilities identified by teachers as potentially desirable for a robot teaching assistant: reading, greeting, alerting, remote operation, clarification, and motion. We related the findings for each capability to previous results from the literature and provided recommendations for future work. Personalizing these capabilities could improve interactions with the robot teaching assistant; moreover, the role of embodiment needs further investigation.
As a secondary finding, the adaptation of the Engagement Profile to the case of a robotic teaching assistant, and the iterative design process used in developing the robot, appeared to be useful. However, the changes between iterations should have been more substantial to obtain stronger indications of what to improve.
The novelty of this work lies in the adaptation of the Engagement Profile to the scenario of robots in education, in the reported list of desired capabilities for a teaching assistant robot, and in the observations regarding the students' experiences. We look forward to reporting further advances on the outlined challenges as robots are increasingly adopted to engage adult learners as well.

Supplementary Materials

Video S1: Supplemental video. This paper is also summarized in a video available online: https://youtu.be/Gkr1b5zvp9Y.

Author Contributions

Conceptualization, M.C.; data curation, W.L.; formal analysis, W.L.; investigation, M.C.; writing—original draft, M.C.; writing—review and editing, W.L. M.C. developed the research design and performed the study on-site at Halmstad University. He also acted as the teacher for the class. W.L. adapted the Engagement Profile to the teaching context and performed the analysis of the questionnaires. Both authors contributed substantially to the writing of the article.

Funding

The first author received funding from the Swedish Knowledge Foundation (Sidus AIR no. 20140220 and CAISR 2010/0271) and also some travel funding from the REMIND project (H2020-MSCA-RISE No. 734355). The Engagement Profile has been developed in the context of the project VisitorEngagement funded by the Research Council of Norway in the BIA programme, grant number 228737.

Acknowledgments

The authors express their gratitude to their colleagues for helpful comments while preparing this paper. Special thanks go to Ivar Solheim at Norsk Regnesentral for discussions about the use of robots in education, and to Alberto Montebelli at the University of Skövde regarding possibilities for enhancing engagement via robots.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations and names are used in this manuscript:
ASD: Autism spectrum disorder
Ci: Capability i; see Table 2
DEIS: Design of Embedded and Intelligent Systems (course acronym at Halmstad University)
EDM: Educational data mining
ISDD: Department of Intelligent Systems and Digital Design (at Halmstad University, Sweden)
ITS: Intelligent tutoring system
HRI: Human-robot interaction
LA: Learning analytics
NAO: a humanoid robot created by the company SoftBank Robotics
ROS: Robot Operating System
SOLO: Structure of observed learning outcomes
OpenCV: Open source computer vision library
SVM: Support vector machine

References

  1. Harasim, L. Learning Theory and Online Technologies, 1st ed.; Routledge: New York, NY, USA, 2011. [Google Scholar]
  2. Säljö, R. Digital tools and challenges to institutional traditions of learning: Technologies, social memory and the performative nature of learning. J. Comput. Assist. Learn. 2010, 26, 53–64. [Google Scholar] [CrossRef]
  3. Entwistle, N. Teaching for Understanding at University: Deep Approaches and Distinctive Ways of Thinking; Universities into the 21st Century; Palgrave Macmillan: Basingstoke, UK, 2009. [Google Scholar]
  4. Druin, A.; Hendler, J.A. Robots for Kids: Exploring New Technologies for Learning; Morgan Kaufmann: Burlington, MA, USA, 2000. [Google Scholar]
  5. Fong, T.W.; Nourbakhsh, I.; Dautenhahn, K. A Survey of Socially Interactive Robots: Concepts, Design, and Applications; Technical Report CMU-RI-TR-02-29; Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA, 2002. [Google Scholar]
  6. Steinfeld, A.; Fong, T.; Kaber, D.; Lewis, M.; Scholtz, J.; Schultz, A.; Goodrich, M. Common Metrics for Human-robot Interaction. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-robot Interaction, Salt Lake City, UT, USA, 2–3 March 2006; pp. 33–40. [Google Scholar] [CrossRef]
  7. Han, J.; Jo, M.; Jones, V.; Jo, J.H. Comparative Study on the Educational Use of Home Robots for Children. J. Media Res. 2008, 4. [Google Scholar] [CrossRef]
  8. Leyzberg, D.; Spaulding, S.; Toneva, M.; Scassellati, B. The Physical Presence of a Robot Tutor Increases Cognitive Learning Gains. In Proceedings of the 34th Annual Meeting of the Cognitive Science Society, CogSci 2012, Sapporo, Japan, 1–4 August 2012; pp. 1882–1887. [Google Scholar]
  9. Belpaeme, T.; Kennedy, J.; Baxter, P.; Vogt, P.; Krahmer, E.; Kopp, S.; Bergmann, K.; Leseman, P.; Küntay, A.C.; Göksun, T.; et al. L2TOR—Second Language Tutoring using Social Robots. In Proceedings of the ICSR 2015 Workshop on Educational Robotics (WONDER) 2015, Paris, France, 26–30 October 2015. [Google Scholar]
  10. Belpaeme, T.; Kennedy, J.; Ramachandran, A.; Scassellati, B.; Tanaka, F. Social robots for education: A review. Sci. Robot. 2018, 3. [Google Scholar] [CrossRef]
  11. Biggs, J.B.; Tang, C. Teaching for Quality Learning at University; Open University Press/Mc Graw-Hill Education: New York, NY, USA, 2007. [Google Scholar]
  12. Avanzato, R. Collaborative mobile robot design in an introductory programming course for engineers. In Proceedings of the 1998 ASEE Annual Conference, Seattle, WA, USA, 28 June–1 July 1998. [Google Scholar]
  13. Klassner, F.; Anderson, S.D. MindStorms: Not Just for K-12 Anymore. IEEE Robot. Autom. Mag. 2003, 10, 12–18. [Google Scholar] [CrossRef]
  14. Fernandes, E.; Fermé, E.; Oliveira, R. Using Robots to Learn Functions in Math Class. In Proceedings of the Seventeenth International Commission on Mathematical Instruction (ICMI) Study Conference “Technology Revisited”, Hanoi, Vietnam, 3–8 December 2006; pp. 152–159. [Google Scholar]
  15. Church, W.; Ford, T.; Perova, N.; Rogers, C. Physics with robotics: Using Lego Mindstorms in high school education. In Proceedings of the Advancement of Artificial Intelligence Spring Symposium, Palo Alto, CA, USA, 22–24 March 2010; pp. 47–49. [Google Scholar]
  16. Castledine, A.; Chalmers, C. LEGO Robotics: An Authentic Problem Solving Tool? Des. Technol. Educ. 2011, 16, 19–27. [Google Scholar]
  17. Fels, D.; Waalen, J.; Zhai, S.; Weiss, P. Telepresence Under Exceptional Circumstances: Enriching the Connection to School for Sick Children. In Proceedings of the IFIP INTERACT01: Human-Computer Interaction, Tokyo, Japan, 9–13 July 2001; pp. 617–624. [Google Scholar]
  18. Yamazaki, R.; Nishio, S.; Ogawa, K.; Ishiguro, H.; Matsumura, K.; Koda, K.; Fujinami, T. How Does Telenoid Affect the Communication Between Children in Classroom Setting? In CHI ’12 Extended Abstracts on Human Factors in Computing Systems; ACM: New York, NY, USA, 2012; pp. 351–366. [Google Scholar]
  19. Børsting, J.; Culén, A.L. A Robot Avatar: Easier Access to Education and Reduction in Isolation? In Proceedings of the International Conference on E-Health 2016, IADIS, Funchal, Portugal, 1–3 July 2016; pp. 34–44. [Google Scholar]
  20. Kanda, T.; Hirano, T.; Eaton, D.; Ishiguro, H. Interactive Robots as Social Partners and Peer Tutors for Children: A Field Trial. Hum.-Comput. Interact. 2004, 19, 61–84. [Google Scholar] [Green Version]
  21. Kanda, T.; Ishiguro, H. Communication Robots for Elementary Schools. In Proceedings of the AISB’05 Symposium Robot Companions: Hard Problems and Open Challenges in Robot-Human Interaction, Hatfield, UK, 12–15 April 2005; pp. 54–63. [Google Scholar]
  22. Shiomi, M.; Kanda, T.; Howley, I.; Hayashi, K.; Hagita, N. Can a Social Robot Stimulate Science Curiosity in Classrooms? Int. J. Soc. Robot. 2015, 7, 641–652. [Google Scholar] [CrossRef]
  23. Köse, H.; Uluer, P.; Akalın, N.; Yorgancı, R.; Özkul, A.; Ince, G. The effect of embodiment in sign language tutoring with assistive humanoid robots. Int. J. Soc. Robot. 2015, 7, 537–548. [Google Scholar] [CrossRef]
  24. Zaga, C.; Lohse, M.; Truong, K.; Evers, V. The Effect of a Robot’s Social Character on Children Task Engagement: Peer Versus Tutor. In Proceedings of the 7th International Conference on Social Robotics, ICSR 2015, Paris, France, 26–30 October 2015; pp. 704–713. [Google Scholar] [CrossRef]
  25. Goetz, J.; Kiesler, S.; Powers, A. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication, ROMAN 2003, Millbrae, CA, USA, 31 October–2 November 2003; pp. 55–60. [Google Scholar]
  26. Alves-Oliveira, P.; Sequeira, P.; Paiva, A. The role that an educational robot plays. In Proceedings of the 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2016), New York, NY, USA, 26–31 August 2016; pp. 817–822. [Google Scholar]
  27. Kennedy, J.; Baxter, P.; Belpaeme, T. The robot who tried too hard: Social behaviour of a robot tutor can negatively affect child learning. In Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015), Portland, OR, USA, 2–5 March 2015; pp. 67–74. [Google Scholar]
  28. Baxter, P.; Ashurst, E.; Read, R.; Kennedy, J.; Belpaeme, T. Robot education peers in a situated primary school study: Personalisation promotes child learning. PLoS ONE 2017, 12, e0178126. [Google Scholar] [CrossRef]
  29. Chang, C.W.; Lee, J.H.; Chao, P.Y.; Wang, C.Y.; Chen, G.D. Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School. Educ. Technol. Soc. 2010, 13, 13–24. [Google Scholar]
  30. Alemi, M.; Meghdari, A.; Ghazisaedy, M. Employing Humanoid Robots for Teaching English Language in Iranian Junior High-Schools. Int. J. Humanoid Robot. 2014, 11, 1450022. [Google Scholar] [CrossRef]
  31. Fuglerud, K.S.; Solheim, I. The use of social robots for supporting language training of children. In Proceedings of the Universal Design and Higher Education in Transformation Congress (UDHEIT2018), Dublin Castle, Ireland, 30 October–2 November 2018; pp. 1–8. [Google Scholar] [CrossRef]
  32. Kanero, J.; Geçkin, V.; Oranç, C.; Mamus, E.; Küntay, A.C.; Göksun, T. Social Robots for Early Language Learning: Current Evidence and Future Directions. Child Dev. Perspect. 2018. [Google Scholar] [CrossRef]
  33. Kanda, T.; Nabe, S.; Hiraki, K.; Ishiguro, H.; Hagita, N. Human friendship estimation model for communication robots. Auton. Robots 2008, 24, 135–145. [Google Scholar] [CrossRef]
  34. Saerbeck, M.; Schut, T.; Bartneck, C.; Janse, M.D. Expressive Robots in Education: Varying the Degree of Social Supportive Behavior of a Robotic Tutor. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI2010), Atlanta, Georgia, 10–15 April 2010; pp. 1613–1622. [Google Scholar]
  35. Alemi, M.; Meghdari, A.; Ghazisaedy, M. The effect of employing humanoid robots for teaching English on students’ anxiety and attitude. In Proceedings of the 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM), Nottingham, UK, 6–7 July 2014; pp. 754–759. [Google Scholar] [CrossRef]
  36. Cooper, M.; Keating, D.; Harwin, W.; Dautenhahn, K. Robots in the classroom-tools for accessible education. In Assistive Technology on the Threshold of the New Millennium, Assistive Technology Research Series; IOS Press: Amsterdam, The Netherlands, 1999; Volume 4, pp. 448–452. [Google Scholar]
  37. Goel, A.K.; Joyner, D.A. Design of an Online Course on Knowledge-Based AI. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 4089–4094. [Google Scholar]
  38. Maderer, J. Artificial Intelligence Course Creates AI Teaching Assistant: Students Didn't Know Their TA Was a Computer. Georgia Tech News Center. 9 May 2016. Available online: https://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant (accessed on 12 March 2019).
  39. Shin, N.; Kim, S. Learning about, from, and with Robots: Students’ Perspectives. In Proceedings of the 16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2007), Jeju Island, Korea, 26–29 August 2007; pp. 1040–1045. [Google Scholar] [CrossRef]
  40. Cramer, H.; Mentis, H.M.; Fernaeus, Y. Serious work on playful experiences: A preliminary set of challenges. In Proceedings of the CSCW 2010 Fun, Seriously Workshop, Savannah, GA, USA, 6–10 February 2010; Available online: https://pdfs.semanticscholar.org/ae58/923f3925f8f1a558b73a6fe307e5a5562522.pdf (accessed on 12 March 2019).
  41. Knowles, M.S. The Modern Practice of Adult Education: Andragogy versus Pedagogy; Cambridge Adult Education; Prentice Hall Regents: Englewood Cliffs, NJ, USA, 1970. [Google Scholar]
  42. Rayner, G.M.; Burke da Silva, K. Building pedagogical bridges between secondary and tertiary biology: A multi-institutional, national endeavour. In Proceedings of the STEM 2014—International Science, Technology, Engineering and Mathematics in Education Conference, Vancouver, BC, Canada, 12–15 July 2014; pp. 1–6. [Google Scholar]
  43. Brščić, D.; Kidokoro, H.; Suehiro, Y.; Kanda, T. Escaping from Children’s Abuse of Social Robots. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2015), Portland, OR, USA, 2–5 March 2015; pp. 59–66. [Google Scholar] [CrossRef]
  44. Salvini, P.; Ciaravella, G.; Yu, W.; Ferri, G.; Manzi, A.; Mazzolai, B.; Laschi, C.; Oh, S.R.; Dario, P. How safe are service robots in urban environments? Bullying a robot. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 12–15 September 2010; pp. 1–7. [Google Scholar] [CrossRef]
  45. Spink, A.; Cole, C.; Waller, M. Multitasking behavior. Annu. Rev. Inf. Sci. Technol. 2008, 42, 93–118. [Google Scholar] [CrossRef]
  46. Engelberg, D.; Seffah, A. A framework for rapid mid-fidelity prototyping of web sites. In Proceedings of the IFIP World Computer Congress, Poznan, Poland, 25–30 August 2002; pp. 203–215. [Google Scholar]
  47. Biggs, J.B.; Collis, K.F. Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome); Academic Press: Cambridge, MA, USA, 1982. [Google Scholar]
  48. SoftBank Robotics. Find out More about NAO. Available online: https://www.ald.softbankrobotics.com/en/robots/nao/find-out-more-about-nao (accessed on 12 March 2019).
  49. Open Source Robotics Foundation. What Is a TurtleBot? 2018. Available online: https://www.turtlebot.com (accessed on 12 March 2019).
  50. Rae, I.; Takayama, L.; Mutlu, B. The influence of height in robot-mediated communication. In Proceedings of the 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Tokyo, Japan, 4–6 March 2013; pp. 1–8. [Google Scholar] [CrossRef]
  51. Kim, T.; Hong, H.; Magerko, B. Designing for Persuasion: Toward Ambient Eco-Visualization for Awareness. In Persuasive Technology: 5th International Conference, PERSUASIVE 2010, Copenhagen, Denmark, 7–10 June 2010; Ploug, T., Hasle, P., Oinas-Kukkonen, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 106–116. [Google Scholar]
  52. OpenCV Team. OpenCV Library. 2018. Available online: https://opencv.org (accessed on 12 March 2019).
  53. CMU Sphinx Developers. CMU Sphinx/Pocketsphinx. 2018. Available online: https://github.com/cmusphinx/pocketsphinx (accessed on 12 March 2019).
  54. Open Source Robotics Foundation. ROS—Robot Operating System. 2018. Available online: https://www.ros.org (accessed on 12 March 2019).
  55. The Glossary of Education Reform. Student Engagement. Interactive Web Pages. 2016. Available online: http://edglossary.org/student-engagement/ (accessed on 12 March 2019).
  56. Leister, W.; Tjøstheim, I.; Joryd, G.; de Brisis, M.; Lauritzsen, S.; Reisæter, S. An Evaluation-Driven Design Process for Exhibitions. Multimodal Technol. Interact. 2017, 1, 25. [Google Scholar] [CrossRef]
  57. Leister, W.; Tjøstheim, I.; Joryd, G.; Schulz, T.; Larssen, A.; de Brisis, M. Assessing Visitor Engagement in Science Centres and Museums. J. Adv. Life Sci. 2016, 8, 49–63. [Google Scholar]
  58. Ocampo-Agudelo, J.; Maya, J.; Roldán, A. A Tool for the Design of Experience-Centred Exhibits in Science Centres. Poster at Science Centre World Summit—SCWS2017, Tokyo, Japan, 15–17 November 2017. [Google Scholar] [CrossRef]
  59. Google Inc. About Google Forms. 2018. Available online: https://www.google.com/forms/about/ (accessed on 12 March 2019).
  60. Gray, H.M.; Gray, K.; Wegner, D.M. Dimensions of mind perception. Science 2007, 315, 619. [Google Scholar] [CrossRef]
  61. Arnett, J.J. Are college students adults? Their conceptions of the transition to adulthood. J. Adult Dev. 1994, 1, 213–224. [Google Scholar] [CrossRef]
  62. Mundorf, N.; Brownell, W. Media preferences of older and younger adults. Gerontologist 1990, 30, 685–691. [Google Scholar] [CrossRef]
  63. Bolton, K.; Kuteeva, M. English as an academic language at a Swedish university: Parallel language use and the ‘threat’of English. J. Multiling. Multicult. Dev. 2012, 33, 429–447. [Google Scholar] [CrossRef]
  64. Bartneck, C.; Nomura, T.; Kanda, T.; Suzuki, T.; Kato, K. A cross-cultural study on attitudes towards robots. In Proceedings of the HCI International, Las Vegas, NV, USA, 22–27 July 2005. [Google Scholar]
  65. Kuo, I.H.; Rabindran, J.M.; Broadbent, E.; Lee, Y.I.; Kerse, N.; Stafford, R.; MacDonald, B.A. Age and gender factors in user acceptance of healthcare robots. In Proceedings of the RO-MAN 2009—The 18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 214–219. [Google Scholar]
  66. Furnham, A. Response bias, social desirability and dissimulation. Personal. Individ. Differ. 1986, 7, 385–400. [Google Scholar] [CrossRef]
  67. Sabanovic, S.; Michalowski, M.P.; Simmons, R. Robots in the wild: Observing human-robot social interaction outside the lab. In Proceedings of the 9th IEEE International Workshop on Advanced Motion Control, Istanbul, Turkey, 27–29 March 2006; pp. 596–601. [Google Scholar]
  68. Salter, T.; Werry, I.; Michaud, F. Going into the wild in child–robot interaction studies: Issues in social robotic development. Intell. Serv. Robot. 2008, 1, 93–108. [Google Scholar] [CrossRef]
  69. Strauss, A.; Corbin, J. Grounded theory methodology. Handb. Qual. Res. 1994, 17, 273–285. [Google Scholar]
  70. Rosenthal, R. The file drawer problem and tolerance for null results. Psychol. Bull. 1979, 86, 638. [Google Scholar] [CrossRef]
  71. Slater, S.; Joksimović, S.; Kovanovic, V.; Baker, R.S.; Gasevic, D. Tools for educational data mining: A review. J. Educ. Behav. Stat. 2017, 42, 85–106. [Google Scholar] [CrossRef]
  72. Piech, C.; Huang, J.; Nguyen, A.; Phulsuksombati, M.; Sahami, M.; Guibas, L. Learning program embeddings to propagate feedback on student code. arXiv, 2015; arXiv:1505.05969. [Google Scholar]
  73. Dutt, A.; Ismail, M.A.; Herawan, T. A systematic review on educational data mining. IEEE Access 2017, 5, 15991–16005. [Google Scholar] [CrossRef]
  74. Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L.J.; Sohl-Dickstein, J. Deep knowledge tracing. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 505–513. [Google Scholar]
  75. Rothfuss, J.; Ferreira, F.; Aksoy, E.E.; Zhou, Y.; Asfour, T. Deep episodic memory: Encoding, recalling, and predicting episodic experiences for robot action execution. IEEE Robot. Autom. Lett. 2018, 3, 4007–4014. [Google Scholar] [CrossRef]
  76. Heenan, B.; Greenberg, S.; Aghel-Manesh, S.; Sharlin, E. Designing Social Greetings in Human Robot Interaction. In Proceedings of the 2014 Conference on Designing Interactive Systems, Vancouver, BC, Canada, 21–25 June 2014; pp. 855–864. [Google Scholar] [CrossRef]
  77. Glas, D.F.; Wada, K.; Shiomi, M.; Kanda, T.; Ishiguro, H.; Hagita, N. Personal Greetings: Personalizing Robot Utterances Based on Novelty of Observed Behavior. Int. J. Soc. Robot. 2017, 9, 181–198. [Google Scholar] [CrossRef]
  78. Pollack, M.E.; Brown, L.; Colbry, D.; Orosz, C.; Peintner, B.; Ramakrishnan, S.; Engberg, S.; Matthews, J.T.; Dunbar-Jacob, J.; McCarthy, C.E. Pearl: A Mobile Robotic Assistant for the Elderly. In Proceedings of the AAAI Workshop on Automation as Caregiver, Edmonton, AB, Canada, 28–29 July 2002. [Google Scholar]
  79. Schindler, M.; Lilienthal, A. Eye-Tracking For Studying Mathematical Difficulties: Also in Inclusive Settings. In Proceedings of the Annual Meeting of the International Group for the Psychology of Mathematics Education (PME-42), Umeå, Sweden, 3–8 July 2018; Volume 4, pp. 115–122. [Google Scholar]
  80. Gupta, M.; Bendersky, M. Information retrieval with verbose queries. Found. Trends Inf. Retr. 2015, 9, 209–354. [Google Scholar] [CrossRef]
  81. Cakmak, M.; Srinivasa, S.S.; Lee, M.K.; Kiesler, S.; Forlizzi, J. Using Spatial and Temporal Contrast for Fluent Robot-Human Hand-Overs. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2011), Lausanne, Switzerland, 8–11 March 2011; pp. 489–496. [Google Scholar]
  82. Koay, K.; Sisbot, E.; Syrdal, D.; Walters, M.; Dautenhahn, K.; Alami, R. Exploratory Study of a Robot Approaching a Person in the Context of Handing Over an Object. In Proceedings of the AAAI-Spring Symposium 2007: SS07, Multidisciplinary Collaboration for Socially Assistive Robotics, Palo Alto, CA, USA, 26–28 March 2007; pp. 18–24. [Google Scholar]
  83. Shi, C.; Shiomi, M.; Smith, C.; Kanda, T.; Ishiguro, H. A model of distributional handing interaction for a mobile robot. In Proceedings of the Robotics: Science and Systems, Berlin, Germany, 24–28 June 2013. [Google Scholar]
  84. Schulz, T.; Herstad, J.; Tørresen, J. Classifying Human and Robot Movement at Home and Implementing Robot Movement Using the Slow in, Slow out Animation Principle. Int. J. Adv. Intell. Syst. 2018, 11, 234–244. [Google Scholar]
  85. Ogawa, K.; Taura, K.; Ishiguro, H. Possibilities of Androids as poetry-reciting agent. In Proceedings of the 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 565–570. [Google Scholar] [CrossRef]
Figure 1. Basic concept: a robot teaching assistant can help learning at a master’s level engineering course.
Figure 2. The dimensions of the Engagement Profile, explained with short definitions and adapted to the teaching case. To assign a value to a property, find the number adjacent to the phrases that fit best.
Figure 3. Classroom setup.
Figure 4. The Engagement Profile for the learning experience with the robotic teaching assistant. The green areas show the preferred values; the blue hatches show the Engagement Profile of the robotic teaching assistant during the first week.
Figure 5. Some examples of the robot’s capabilities being demonstrated. (a) Reading: students gathered around the robot during a free exploration session, to participate in quizzes. (b) Greetings: the robot smiled when saying hello and closed its eyes as a metaphor for going to sleep when saying goodbye. (c) Remote operation: a graduated student and the second author in Norway speak through the robot. (d) Clarification: The robot shows some different kinds of graphs while the teacher speaks. (e) Alerting: the robot looks at the teacher while issuing a reminder. (f) Motion: the robot takes a kit from the teacher to hand to students.
Figure 6. Weekly change diagrams. Colour code: blue = stay; green = increase; (red = decrease).
Figure 7. Some examples of lessons learned. (a) Reading: the robot’s monitor showing students waving to vote on learning activities is mostly dark; students described feeling shy about waving out of fear that they would act against the wishes of others. (b) Greeting: we explored designing different waving behaviors as the robot is not capable of moving its arm like a human, but the robot’s farewell wave in the first lecture was described by a student as resembling a Nazi salute. (c) Clarification: the robot’s speech recognition was not robust enough to hear some of the students at the back of the room during quizzes. (d) Alerting: a student reminded the class about an outing five minutes ahead of schedule, so the robot’s reminder was not used. (e) Remote operation: some pauses used to allow the robot to more easily distinguish its own movements from students’ movements interfered with handing out kits to the students. (f) Motion: the robot was too slow when handing out an attendance sheet on the first day and dropped the sheet.
Table 1. Some examples of previous work on robot agents in a teaching context, classified by the role of the robot. Benefits refers to the advantages of using a robot, as compared to a human.
Reference | Benefits | Robot | Capabilities | Class | Outcomes
Role: Tutor outside class:
[20,22,33] | Prevent bullying, provide friendship | Robovie | Recognize children, quizzes, gaze, entertaining | Elementary school language and science classes in Japan, 1–2 months | Better retention, some increased curiosity
[34] | - | iCat | Read, gaze, feedback w/ facial expressions, nod, shake head, idling | 16 10–11 year old elementary school children in Holland, 1 h | Social behaviors facilitated learning
Role: Avatar:
[19] | Tele-teaching, avoid loneliness | AV1 | Remote operation, avatar | 9 12–16 year old adolescents at school in Norway | Users provided positive feedback
[18] | Change social interaction, expand human capabilities | Telenoid | Convey operator’s voice, arms move | 28 9–10 year old elementary school children in Japan, 2 days | Limitations of robot had positive effects on collaboration
Role: Teaching Assistant:
[29] | Repeatability, digitization, fantastic appearance, different voices | RoboSapiens | Read (feedback), remote control | Elementary school for five weeks, in Taiwan | Students were motivated, and suggestions for improvement were made
[35] | Repeatability, AI, sensors | NAO | Read words, pantomime, entertainment (sing, dance) | 12 year old students in Iran | Students learned faster and more words, compared to control
[31] | Second language learning tool | NAO | Listen, repeat, feedback | Pre-school children in Norway; children with autism (ASD) | Increased participation and involvement
Table 2. List of capabilities for the robot teaching assistant.
Capability | Description
C1, Reading | to orally present material such as quizzes.
C2, Greeting | to greet the students.
C3, Alerting | to alert the teacher.
C4, Remote operation | to facilitate communication with persons at remote places.
C5, Clarification | to present extra material on request.
C6, Motion | to perform physical tasks by means of locomotion and object manipulation.
Table 3. Influence of the robot capabilities on the preferred ranges in the Engagement Profile. Values outside the suitable range are counter-indicative to the intentions of the teaching robot.
Capability | Preferred Range | Suitable Range
C1, Reading | N ≥ 1 | N ≥ 1
C1, Quiz | C = 2–4; A = 1–4 | C = 1–5; A = 0–5
C2, Greeting | I ≥ 2 | I ≥ 2
C4, Remote operation | S ≥ 2 | S ≥ 2
C5, Clarification | N ≥ 2 | N ≥ 2
C3, Alerting | N ≥ 2; A ≥ 1 | N ≥ 2; A ≥ 1
C6, Motion | - | -
classroom setting | P = 1–3 | P = 0–3
one robot in front | S = 3, 4 | S ≥ 2
use of robot in general | U = 1–3; E = 0–4 | U = 0–5; E = 0–5
Table 4. The Engagement Profile of the robotic teaching assistant at the start of the experiments.
C: 2 | Competition with robot. The students will discuss and respond to quiz questions in pairs in front of the class. Some students might implicitly perceive themselves to be competing with others to some extent, but in general the students will work together as a class to answer the robot’s questions.
N: 2 | Limited narrative structure. The robot follows a simple storyline: introducing itself, why it is participating and what it should do, conducting its task, and saying goodbye.
I: 2 | Limited interactivity. The students will respond to the robot’s quizzes, but the responses will not change how the interaction proceeds.
P: 0 | Look only. The students will get the chance to also pilot the robot via a controller if they wish during the break, and they will maybe also receive handouts from the robot, but in general the students will mostly look only.
U: 1 | Linear chronology. The robot will give quizzes in a predefined sequence during the lecture. The users can affect how many quizzes are given by the time they take to answer (lectures can last only two hours, so if time runs out, quizzes can be given at a later date).
S: 3 | One student, others cheer and engage. The robot will conduct social behaviors aimed at the group, greeting and quizzing.
A: 1 | Immediate feedback. Answers to quizzes will be given in general very soon after students respond. We do not plan to give scores currently, to avoid having some students worry about losing face, although scores could be a fun way to motivate some students.
E: 0 | Defined view. The students will investigate topics through a standard lecture view, and also from an applied view in participating in quizzes, but both perspectives are predefined and the robot will only be involved with the applied/quiz perspective.
Table 5. The questions for the student opinion, using the following scale: −2 (much less), −1 (less), 0 (as now), 1 (more), 2 (much more).
Question | Formulation
QC | Should there be more or less competition between groups and participants in the learning experience?
QN | Should the storyline and roles in the learning experience be more evident or less evident in the learning experience?
QI | Should there be more or less feedback on the choices you made in the learning experience?
QP | Should there be more or less physical activity in the learning experience?
QU | Should the learning experience be more or less influenced by what you did during the experience?
QS | Should more or less be done in a group (as opposed to individually) during the learning experience?
QA | Should there be more or less feedback on how well you are doing during the learning experience?
QE | Should there be more or less possibilities to go in depth with extra content to explore on your own?
Table 6. Formulation of the additional questions using the scale 1 (disagree) to 5 (agree).
Question | Formulation
Q1 | I liked the learning experience.
Q2 | The learning experience was engaging.
Q3 | I learned much during the learning experience.
Q4 | I recommend the learning experience to other students.
Q5 | I would like to have this type of learning experience for future course content.
Table 7. Formulation of the questions about robot capabilities using the scale 1 (disagree) to 5 (agree).
Question | Formulation
C1 | The ability to read material (e.g., giving quizzes) will be helpful for a robot teaching assistant.
C2 | The ability to greet people (e.g., saying hello and goodbye at the start and end of a class) will be helpful for a robot teaching assistant.
C3 | The ability to alert the teacher (e.g., if the teacher has forgotten to mention something or an explanation is unclear) will be helpful for a robot teaching assistant.
C4 | The ability to be remotely controlled (e.g., for people who cannot attend class due to illness or travel) will be helpful for a robot teaching assistant.
C5 | The ability to provide additional information (e.g., visualizing data, or adding information about topics which the teacher or students are discussing) will be helpful for a robot teaching assistant.
C6 | The ability to interact physically with people (e.g., fetching and handing out materials, handshakes) will be helpful for a robot teaching assistant.
Table 8. Overview of capabilities and feedback.
 | Day 1 | Day 2 | Day 3 | Day 4
 | Week 1 | Week 2 | Week 3
Design | lecture with six behaviours | sound increased | voting via waving, links | explore sessions
Implementation:
Reading | basic | basic | on request | for small groups
Greeting | outline/roles | basic | basic | outline/roles
Remote operation | locomotion | recording | video conference | gaze
Clarification | teacher | students | extra material | teacher and students
Alerting | omission | clarification | event | switch topic
Motion | sheet | handshake | handshake | robot kits
Feedback | good, more volume | more exploration, user control, physical, social interaction | more narrative, exploration, control, awareness | more narrative, social interaction, awareness, exploration
Table 9. Results from questionnaires, weeks 1–3, for Q1 to Q5 and C1 to C6.
 | Q1 (Like) | Q2 (Engage) | Q3 (Learn) | Q4 (Recmd.) | Q5 (Again) | C1 (Read) | C2 (Greet) | C3 (Alert) | C4 (Remote) | C5 (Content) | C6 (Interact)
Week 1 (n = 9):
mean | 3.8 | 3.4 | 3.4 | 3.7 | 4.1 | 3.7 | 3.7 | 4.1 | 4.0 | 4.0 | 4.0
median | 4.0 | 4.0 | 3.0 | 4.0 | 4.0 | 3.0 | 3.0 | 5.0 | 4.0 | 4.0 | 4.0
90% | 5.0 | 4.2 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0
10% | 2.0 | 2.0 | 2.0 | 2.0 | 3.0 | 2.8 | 2.8 | 2.8 | 2.8 | 2.8 | 2.8
variance | 1.4 | 1.0 | 1.3 | 1.5 | 0.6 | 1.3 | 1.3 | 1.4 | 1.0 | 1.0 | 1.3
positive | 67% | 56% | 44% | 56% | 78% | 44% | 44% | 67% | 78% | 78% | 67%
neutral | 11% | 22% | 33% | 22% | 22% | 44% | 44% | 22% | 11% | 11% | 22%
negative | 22% | 22% | 22% | 22% | 0% | 11% | 11% | 11% | 11% | 11% | 11%
Week 2 (n = 5):
mean | 4.6 | 4.4 | 4.2 | 4.6 | 4.6 | 4.2 | 4.4 | 4.4 | 4.2 | 4.6 | 4.2
median | 5.0 | 4.0 | 4.0 | 5.0 | 5.0 | 4.0 | 4.0 | 5.0 | 5.0 | 5.0 | 4.0
90% | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 4.6
10% | 4.0 | 4.0 | 3.4 | 4.0 | 4.0 | 3.4 | 4.0 | 3.2 | 2.8 | 4.0 | 4.0
variance | 0.3 | 0.3 | 0.7 | 0.3 | 0.3 | 0.7 | 0.3 | 1.8 | 1.7 | 0.3 | 0.2
positive | 100% | 100% | 80% | 100% | 100% | 80% | 100% | 80% | 80% | 100% | 100%
neutral | 0% | 0% | 20% | 0% | 0% | 20% | 0% | 0% | 0% | 0% | 0%
negative | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 20% | 20% | 0% | 0%
Week 3 (n = 12):
mean | 4.4 | 4.2 | 4.0 | 4.3 | 4.3 | 4.0 | 3.9 | 4.4 | 4.3 | 4.3 | 4.1
median | 4.5 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 4.0 | 5.0 | 4.0 | 4.5 | 4.0
90% | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0 | 5.0
10% | 4.0 | 4.0 | 3.0 | 3.1 | 3.1 | 3.0 | 3.0 | 3.1 | 3.1 | 3.1 | 3.0
variance | 0.5 | 0.4 | 0.6 | 0.6 | 0.6 | 0.6 | 1.4 | 0.6 | 0.6 | 0.6 | 0.6
positive | 92% | 92% | 75% | 83% | 83% | 75% | 75% | 83% | 83% | 83% | 75%
neutral | 8% | 8% | 25% | 17% | 17% | 25% | 17% | 17% | 17% | 17% | 25%
negative | 0% | 0% | 0% | 0% | 0% | 0% | 8% | 0% | 0% | 0% | 0%
