Article

Expanding the Frontiers of Industrial Robots beyond Factories: Design and in the Wild Validation

1 Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo 184-8588, Japan
2 Industrial Cyber-Physical Systems Research Center, National Institute of Advanced Industrial Science and Technology, Koto-ku, Tokyo 135-0064, Japan
3 Engineering Division, Kawada Robotics, Taito-ku, Tokyo 111-0036, Japan
* Author to whom correspondence should be addressed.
Machines 2022, 10(12), 1179; https://doi.org/10.3390/machines10121179
Submission received: 26 October 2022 / Revised: 29 November 2022 / Accepted: 3 December 2022 / Published: 7 December 2022
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract: Robots able to coexist and interact with humans are key elements for Society 5.0. To produce the right expectations towards robots, it will be necessary to expose the true current capabilities of robots to the general public. In this context, Human–Robot Interaction (HRI) in the wild emerges as a relevant paradigm. In this article, we take on the challenge of bringing an industrial robot (NEXTAGE Open) outside factories and laboratories to be used in a public setting. We designed a multi-modal interactive scenario that integrates state-of-the-art sensory devices, deep learning methods for perception, and a human–machine graphical interface that monitors the system and provides useful information to participants. The main objective of the presented work is to build a robust and fully autonomous robotic system able to: (1) share the same space as humans, (2) work in a public and crowded space, and (3) provide an intuitive and engaging experience for a robotic exhibition. In addition, we measured the attitudes, perceptions, expectations, and emotional reactions of volunteers. Results suggest that participants considered our proposed scenario enjoyable, safe, interesting, and clear. These points are also the main concerns of participants about sharing workspaces and daily environments with robots. However, we note some limitations, with a sample biased toward Japanese and male participants. In future work, we will improve our scenario with non-functional features and emotional expressions from the robot.

1. Introduction

Robots with advanced interactive capabilities are promising alternatives for more efficient and flexible industrial systems [1]. However, the lack of acceptance and positive attitudes towards robots among employees has limited the spread of collaborative robots in factories [2,3]. Moreover, the current academic and commercial interest in these advanced robots is expected to expand beyond factories to more ecological and everyday-life scenarios, such as restaurants, shops, and homes [4]. Unlike traditional efforts in industrial robotics, generally focused on increasing task performance, the next generation of robotics applications enabling interaction with humans requires the consideration of hedonic factors (i.e., emotions and desires) as well as ergonomic ones [5,6,7]. These novel research activities will enable manufacturers and designers to build alternatives or counter-measures to increase the acceptability, social impact, and desirability of interactive robots [2,8]. Many research articles, such as [9,10,11], suggest that low acceptance and negative attitudes toward robots are in part produced by the mismatch between people’s expectations (shaped in part by social media, movies, or inexperience with robots) and the real capabilities of robots. Robots must be able to work outside laboratories for people to form more realistic expectations. However, due to the complexity required to create robust applications with robots, Human–Robot Interaction (HRI) applications are rarely tested in uncontrolled and public settings. Moreover, they are often presented and evaluated with convenience samples (e.g., laboratory and faculty members) [12,13]. This approach makes data collection more manageable and avoids many technical issues that often arise when robots are required to perform in open, uncertain, and crowded environments. However, there is a need in the HRI community to move towards approaches that enable the acquisition of more valuable theoretical and technical insight through the use and validation of robotic systems in natural, open, everyday environments. This approach is referred to in the literature as HRI in the wild [12,13]. Common types of robots used in this emergent paradigm are social and service robots; some examples are presented in [14,15,16]. In many cases, these robots are either pre-programmed or remotely controlled rather than fully autonomous. Some recent examples using this methodology are [17,18,19]. In this article, we took on the challenge of bringing an industrial robot to the wild through the development of a multi-modal and distributed system architecture. This architecture integrates advanced sensors, effective deep learning methods, and a human–machine graphical interface to enable fully autonomous HRI. This paper is organized as follows. Section 2 presents the related works. Section 3 presents the objectives and contributions of this article. Section 4 presents the proposed system architecture. Section 5 presents the experimental methodology. Section 6 presents the results. Discussion and conclusions follow.

2. Related Work

Exposure to robots is basically done in three ways: no HRI (i.e., participants are not asked to imagine, view, or interact with a robot), indirect HRI (i.e., participants are asked to imagine an HRI task or observe an interaction with the robot using videos or images), and direct HRI (i.e., the robot is physically present and interacts with participants) [3]. Several cross-cultural studies about acceptance and attitudes toward robots in different domains have been presented in previous works, such as [20,21,22]. However, these types of studies use no HRI or indirect HRI. While the study of these non-functional and human-centered aspects through direct interaction is a popular topic in some social robotics areas, such as Robot-Assisted Therapy (RAT) [23], education [24] and elderly care [25], attention to these aspects when performing advanced Human–Robot Interaction (HRI) activities using industrial and collaborative robots is still limited [26]. A recent review of research articles evaluating attitudes, anxiety, acceptance, and trust in social robotics domains is presented in [3]. The authors found that “studies providing direct HRI may report different attitudes to studies where participants do not directly interact with a robot”. Moreover, attitudes can change depending on the application domain and the design of the robot (e.g., humanoid-like or anthropomorphic). In the context of industrial robotics, studies collecting attitudes, expectations, or perceptions towards robots after direct HRI are still rare. For example, Aaltonen et al. [27] gathered possible expectations of factory workers from the industrial and academic points of view. However, data collection was performed with an online questionnaire with no direct HRI. Moreover, experimental research with industrial and collaborative robots has mainly been limited to lab experiments with convenience samples (e.g., students or people working in a research lab) [28]. Laboratory studies are still mainstream in HRI. However, validation of robotic systems in the wild will become increasingly relevant as the need for advanced robotic systems that can be used outside factories and laboratories grows.
Museums and exhibitions are suitable in the wild scenarios for exposing novel technological achievements to people with different backgrounds and interests [29]. An example is proposed in [30], where an ethnographic study is performed in a museum with a humanoid robot. Other examples of HRI demonstrations in the wild using FROG (a tour guide robot) are presented in [31,32]. Other suitable locations for in the wild experiments are shopping malls, such as the work done by Kanda et al. [33]. They used a robot (Robovie-IIF) to help customers find their way in the mall. The robot was partially teleoperated to overcome speech recognition difficulties and to keep the interaction smooth. We can also mention some of our previous work on robot interaction in schools [14,15]. In those studies, the Pepper robot was used to entertain children in their school with some activities, in uncontrolled scenarios. In the current article, we take on the technical challenge of designing and executing an HRI application with a dual-arm industrial robot in a public and crowded scenario, specifically a robotic exhibition.
Table 1 summarizes recent and similar works reporting the development of robotic systems with industrial robots able to share the same space with people and that collect the users’ perceptions towards robots after direct HRI. This table shows that one of this article’s main novelties compared with previous works is the experimental setting, which is not a closed room or laboratory. Instead, validation of our proposed system is performed in a public and noisy scenario with no control of environmental conditions (e.g., illumination or the number of people in the field of view of the robot). Similar to [26], our proposed system is integrated into a dual-arm industrial robot. Unlike [26], where the robot is remotely controlled, the robotic system proposed in this article is fully autonomous. Moreover, many systems developed in previous and similar works follow an industrial task, which in most cases requires some previous training. An exception is [34], where a straightforward but suitable task is proposed for enabling people with mental and physical disabilities to interact with an industrial robot. Complex industrial tasks, such as assembly, are inappropriate for a robotic exhibition where people interact with the robot voluntarily and have no time for complex explanations. Therefore, we designed an intuitive socio-emotional scenario where the robot can guide and adapt to human actions and emotions.

3. Objectives and Contributions

This article proposes a novel application where a dual-arm industrial robot, originally designed to be used by industrial workers, interacts with people from different backgrounds outside laboratories and factories. The main objective of the project presented in this article is to develop an intuitive, social, and engaging interactive scenario for the International Robot EXhibition (IREX) using the NEXTAGE Open robot by KAWADA ROBOTICS CORPORATION [36]. Rather than proposing a typical scenario where the robot is isolated from visitors or a cooperative industrial task (which often requires previous training), we developed an application where humans intuitively interact with an industrial robot through emotions and body motions. In this way, visitors can experience what it is like to interact with a robot with advanced grasping, perceptual, and cognitive capabilities in an everyday-like situation. Therefore, the main contributions of this work are (i) the creation of an intuitive and engaging HRI scenario that integrates the NEXTAGE Open dual-arm industrial robot; (ii) the execution and validation of the proposed HRI scenario in an unconstrained, crowded, and dynamic environment; and (iii) the validation of the proposed scenario using self-reports that capture participants’ emotional experience of the robot platform and the proposed HRI scenario, as well as their potential needs and attitudes towards industrial robots with social capabilities.
Moreover, this system is validated with a broader range of users rather than only factory workers or faculty members (e.g., students).

4. Design and Implementation

4.1. Hardware

The experimental setup is composed of four parts. The main one is the NEXTAGE Open robot from Kawada Robotics [36]. It is an upper-body anthropomorphic robot with 15 DoFs: 6 for each arm, 2 in the neck, and 1 for the waist. The robot also has 4 cameras: 2 in the head and 1 in each hand. Each hand is equipped with a three-fingered pneumatic gripper with a payload of 1.5 kg. The robot is controlled by an Intel NUC PC running Ubuntu 16.
The second part is composed of three sensors used to detect the user’s interactions with the robot. People detection is achieved with two devices (redundancy for robustness): a RealSense depth camera (RS) and a set of ultrasonic sensors (US). The latter is a custom-made array of six HC-SR04 sensors, see Figure 1. Both are used to estimate the proximity of the user to start and sustain the interaction and are placed below the robot on the front panel; only the sensing elements are visible, through two slits. The user’s choice is detected with a Leap Motion sensor placed on the booth’s edge, in front of the objects’ window. It detects a choice among three positions: left (chocolate), center (pen), and right (eraser). A mark was placed on the floor to indicate the user’s ideal position, which maximizes detection.
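To illustrate how the two proximity devices can be combined, the following minimal Python sketch fuses the RealSense estimate and the ultrasonic readings into a single presence decision. The threshold value, field names, and function signature are illustrative assumptions and are not taken from the actual implementation.

# Minimal sketch of the redundant people-detection logic described above.
# The threshold and the way each sensor reports distances are assumptions
# made for illustration only.

from typing import Optional, Sequence

ENGAGE_DISTANCE_M = 1.0  # hypothetical distance below which a visitor is "present"

def user_present(depth_distance_m: Optional[float],
                 ultrasonic_distances_m: Sequence[float]) -> bool:
    """Return True if either sensing modality reports a person close enough.

    depth_distance_m: nearest-person distance estimated from the RealSense
        frame (None if no person was detected).
    ultrasonic_distances_m: latest readings from the six HC-SR04 sensors.
    """
    depth_hit = depth_distance_m is not None and depth_distance_m < ENGAGE_DISTANCE_M
    us_hit = any(d < ENGAGE_DISTANCE_M for d in ultrasonic_distances_m)
    # Redundancy for robustness: either device is enough to start the interaction.
    return depth_hit or us_hit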
The cameras are used for two purposes. The embedded eye camera (EC) detects the person’s facial expression and the engagement of the user. According to these, the robot changes its behavior, for example by knowing whether the user is engaged in the interaction. The hand cameras (HC) are used to detect the position of the objects on the table and to grasp them.
The third part is placed behind the robot: a monitor installed on a wall displays the Graphical User Interface (GUI). It indicates to the user what to do, see Figure 2. The instructions are written in English and Japanese; visual feedback from the cameras (the user’s face with their engagement and the recognition of the objects from the HCs) is also displayed.
The last part consists of the computers. Besides the computer used to control the robot, two computers compose the apparatus. They run the algorithms, which are divided between both machines to maximize robustness, see Section 4.2.

4.2. Software

The software architecture is described in Figure 3. The first PC (desktop: 32 GB RAM, Ryzen 1900X CPU with 8 cores at 3.8 GHz, Nvidia GeForce RTX 2070 GPU) is dedicated to vision, with three resource-consuming algorithms. The second PC (Dell Inspiron 14 5000 laptop: 8 GB RAM, i7-7500 CPU at 2.70 GHz) runs the sensor scripts. Data are gathered by two blackboard scripts (i.e., shared working memories) and sent to the control PC and the GUI using NEP [37] over a WLAN.
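The following Python sketch illustrates the role of a blackboard script as a shared working memory that stores the latest message from each perception or sensor module and forwards a single snapshot to the control PC. The actual system uses NEP [37] for publish/subscribe over the WLAN; here the messaging layer is abstracted away and the topic names are hypothetical.

# Illustrative sketch of a "blackboard" (shared working memory). The real
# transport (NEP publish/subscribe) is not reproduced here.

import json
import threading
import time

class Blackboard:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}    # topic -> latest payload
        self._stamps = {}  # topic -> time of last update

    def update(self, topic: str, payload: dict) -> None:
        """Called by a subscriber callback whenever a module publishes new data."""
        with self._lock:
            self._data[topic] = payload
            self._stamps[topic] = time.time()

    def snapshot(self, max_age_s: float = 1.0) -> str:
        """Serialize all recent entries; stale topics are dropped."""
        now = time.time()
        with self._lock:
            fresh = {t: v for t, v in self._data.items()
                     if now - self._stamps[t] <= max_age_s}
        return json.dumps(fresh)

# Example usage (topic names are made up for illustration):
# board = Blackboard()
# board.update("emotion", {"label": "happy", "score": 0.91})
# board.update("ultrasonic", {"distances_m": [1.2, 0.8, 1.5, 2.0, 1.9, 1.1]})
# control_msg = board.snapshot()  # sent to the control PC over the network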

4.2.1. Eye Camera Algorithms

The first algorithm detects the gaze direction and head orientation of the user with respect to the robot, and from these determines the user’s engagement. If the user is not looking the robot in the eye, the robot continues its task until the user becomes engaged in the interaction. We use an algorithm we developed, described in [38].
The second algorithm recognizes five different emotions of the user: happy, sad, surprised, angry, or neutral (we use the model provided by OpenVINO: https://docs.openvino.ai/latest/omz_models_model_emotions_recognition_retail_0003.html, accessed on 2 December 2022). The emotions are used to determine whether the robot will give the gift or not. If the emotion of the user is negative (e.g., angry), the robot takes the gift back and lets the user choose another one; we assume the user changed their mind and would prefer another gift.
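As an illustration, the following sketch shows how a face crop from the eye camera could be classified with the emotions-recognition-retail-0003 model using the OpenVINO Python runtime. The model path, preprocessing details, and label order are assumptions that should be checked against the model documentation linked above; this is a hedged example, not the authors’ actual code.

# Hedged sketch of emotion classification with OpenVINO (API of OpenVINO >= 2022).
# File path, input size, and label order are placeholders/assumptions.

import cv2
import numpy as np
from openvino.runtime import Core

EMOTIONS = ("neutral", "happy", "sad", "surprise", "anger")  # assumed label order

core = Core()
model = core.read_model("emotions-recognition-retail-0003.xml")  # placeholder path
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

def classify_emotion(face_bgr: np.ndarray) -> str:
    """face_bgr: cropped face image (H x W x 3, BGR) from the eye camera."""
    blob = cv2.resize(face_bgr, (64, 64)).transpose(2, 0, 1)[np.newaxis]  # 1x3x64x64
    probs = compiled([blob.astype(np.float32)])[output_layer].flatten()
    return EMOTIONS[int(np.argmax(probs))]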

4.2.2. Hand Cameras Algorithms

Two identical algorithms run simultaneously to process the video stream of each hand. Each algorithm performs object recognition by feature extraction and classification. Customized models were created for the objects the robot has to manipulate. Faster R-CNN was applied for these models, taking advantage of its ability to detect different objects with sufficient precision.
The algorithm was trained to detect the three objects the robot has to grasp: a pen, a chocolate, and an eraser (with a smiley shape) (Figure 4). The model was trained with about 300 images containing one or several objects, fully visible or partially occluded, under different lighting conditions and backgrounds.
The algorithm’s output is the object’s name, the bounding box (BB), and the confidence score (the probability of being a true positive). Only objects detected with a confidence score over 75% are used, and their bounding box coordinates are sent to the control algorithm. Moreover, for the chocolate and the eraser, the BB is expected to be square, so the shape of the BB is checked. If it is a rectangle rather than a square, the object is only partially in the field of view. In that case, the center of the BB is not the center of the object, so, to avoid grasping problems, the object is discarded until it becomes fully visible.
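A minimal sketch of this post-processing step is shown below; the tolerance used to decide whether a bounding box is “square enough” is an illustrative assumption.

# Sketch of the detection post-processing described above: keep detections with
# a confidence above 75%, and for the square objects (chocolate, eraser) discard
# clearly rectangular bounding boxes, i.e., partially visible objects.

SQUARE_OBJECTS = {"chocolate", "eraser"}

def filter_detections(detections, score_thresh=0.75, square_tol=0.2):
    """detections: iterable of (name, (x1, y1, x2, y2), score) tuples."""
    kept = []
    for name, (x1, y1, x2, y2), score in detections:
        if score < score_thresh:
            continue
        w, h = x2 - x1, y2 - y1
        if name in SQUARE_OBJECTS and abs(w - h) > square_tol * max(w, h):
            # Rectangular BB for a square object: the object is only partially
            # visible, so its BB center is not its true center; skip it.
            continue
        kept.append((name, (x1, y1, x2, y2), score))
    return kept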

4.2.3. Sensors Algorithms

The algorithm is separated into two parts. The first runs on an Arduino to which the six ultrasonic sensors (US) are connected. The script loops over them to obtain the current distance data and sends it via serial communication to the sensor PC. A second algorithm collects the data from the six sensors and sends them via NEP to the blackboard sensor script, which gathers all the sensor data before sending them to the control algorithm.
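The following sketch illustrates the sensor-PC side of this pipeline: reading the six distances streamed by the Arduino over serial (using pyserial) and handing them to the blackboard. The port name, baud rate, and line format (six comma-separated values in centimeters) are assumptions for illustration.

# Hedged sketch of the second part of the sensor pipeline on the sensor PC.

import serial  # pyserial

def read_ultrasonic(board, port="/dev/ttyACM0", baud=9600):
    """Continuously read Arduino frames and forward them to the blackboard."""
    with serial.Serial(port, baud, timeout=1.0) as ser:
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            parts = line.split(",")
            if len(parts) != 6:
                continue  # incomplete or garbled frame, skip it
            try:
                distances_cm = [float(p) for p in parts]
            except ValueError:
                continue
            # Forward to the blackboard sensor script (see Section 4.2).
            board.update("ultrasonic",
                         {"distances_m": [d / 100.0 for d in distances_cm]})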

4.2.4. Control Algorithm

The algorithm is detailed in Figure 5. Four threads run in the background to gather the data sent via NEP. Moreover, the algorithm uses a virtual grid to track the objects on the plates. The quantity and position of each object on that grid are recorded. The cameras are used to determine the precise position of the different objects and facilitate the grasping task.
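A simplified sketch of such a virtual grid is given below; the grid dimensions, cell naming, and update logic are illustrative assumptions rather than the actual data structure.

# Simplified sketch of the virtual grid used to track objects on the plates.

import random

class ObjectGrid:
    def __init__(self, rows=2, cols=3):
        # cell -> object name, or None if the cell is empty
        self.cells = {(r, c): None for r in range(rows) for c in range(cols)}

    def place(self, cell, obj_name):
        self.cells[cell] = obj_name

    def remove(self, cell):
        obj = self.cells[cell]
        self.cells[cell] = None
        return obj

    def count(self, obj_name):
        return sum(1 for v in self.cells.values() if v == obj_name)

    def random_move(self):
        """Pick a random occupied cell and a random free cell (the Working task)."""
        occupied = [c for c, v in self.cells.items() if v is not None]
        free = [c for c, v in self.cells.items() if v is None]
        if occupied and free:
            src, dst = random.choice(occupied), random.choice(free)
            self.place(dst, self.remove(src))
            return src, dst  # the control algorithm converts cells to arm targets
        return None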
The Working task is used when no one interacts with the robot. In this case, the robot moves objects on the grid randomly. When a user is detected, the robot stops this task and turns its head to the user. If it has an object in its hand, it proposes it to the user and gives it to them if the facial expression and engagement are positive.
The robot then Presents the objects with a hand gesture (and on the TV screen as well) and waits for the user to point at the object of their choice; the pointing is detected by the Leap Motion sensor. The robot picks the selected object and offers it to the user. While doing so, the user’s facial expression and engagement are checked, and according to them, the robot will give the object or pick a new one (see Section 4.2.1).
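As an illustration, the pointing detection can be reduced to mapping the lateral hand position reported by the Leap Motion to one of the three choice zones described in Section 4.1; the zone boundaries below are hypothetical values, not the calibrated ones used in the system.

# Sketch of mapping the palm position above the Leap Motion to a choice.
# Boundaries (in mm, in the sensor frame) are hypothetical.

def detect_choice(palm_x_mm: float) -> str:
    """palm_x_mm: lateral position of the user's palm above the sensor."""
    if palm_x_mm < -60:
        return "chocolate"   # left zone
    if palm_x_mm > 60:
        return "eraser"      # right zone
    return "pen"             # center zone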
The Refill task is performed when the robot has nothing to do with the user (i.e., during the Working task) and the plates are almost empty. If they are completely empty, the task is performed even if a user is present. To refill, the robot turns 90° to access a big plate with different objects. This plate can be safely accessed by the staff even while the robot is running. Because the objects are placed randomly on that plate, the robot relies only on the hand cameras with the object detection algorithm to pick the correct object.
For simplification, the Invite next user action is not shown in Figure 5. This action is performed after two interactions with the same user. Indeed, we limited users to two consecutive interactions to avoid monopolization.

4.2.5. Human–Robot Graphical User Interface

We designed a Graphical User Interface (GUI) able to: (a) provide feedback to people interacting with the robot about the decisions and actions taken by the robot (e.g., the detected facial expression of the user and the selected gift), (b) give instructions to users about how to proceed in the interaction, and (c) enable developers to monitor the status of sensors and the results from deep learning algorithms. This GUI was developed using modern web technologies and JavaScript libraries, such as Node.js, HTML, CSS, and Vue.js. The interface subscribes to the blackboard sensor, blackboard vision, and decision-making modules using NEP. The information obtained from these external modules is used to dynamically change the elements displayed to the users. Figure 2 presents the basic sections of this interface and examples of the different elements changing according to the robot and sensors’ status. When there is no interaction between the robot and a user, the background color of the interface is green; otherwise, it is blue.

5. Validation in the Wild

The experiment took place at the International Robot EXhibition (IREX) 2019 in Tokyo, Japan, a four-day public event held at the end of December, before the COVID-19 pandemic. As presented in Section 3, the main objective of this work was to build a system architecture that integrates state-of-the-art perceptual tools to provide engaging experiences for visitors of IREX 2019. In order to know whether this objective was met and to identify possible improvements for subsequent iterations, we formulated research question Q1 (see below). In addition to the main goal of this article, we also gathered the attitudes and expectations of IREX visitors after a direct interaction with an industrial robot with affective and cognitive skills. Therefore, we formulated two additional research questions, Q2 and Q3.
Q1: 
What are the emotional reactions and perceptions of visitors towards the proposed interactive scenario? 
Q2: 
What is the attitude of visitors towards robots after direct interactions with an industrial robot with affective and cognitive skills? 
Q3: 
What are the potential expectations of visitors towards robots in their working and everyday environment? 

5.1. Subjective Validation of the Proposed System

We use semantic differential (SD) self-reporting questionnaires to capture participants’ emotional reactions to the robot and the proposed HRI. The results from these questionnaires are used to answer research question Q1. In order to select the items of this questionnaire, we follow recommendations from the Kansei Engineering ergonomics discipline [39,40]. Kansei Engineering is a human-centered and ergonomic approach often used to capture and analyze the emotional, need-related, and social values of people towards products, interfaces, and services [41]. The first step in developing a self-reporting questionnaire is to collect a certain number of emotional words and adjectives that are relevant to the design of some specific product or application. Therefore, these adjectives can vary according to the application domain [39]. Additionally, these adjectives can be obtained after consulting experts in the domain or reviewing state-of-the-art articles. We collected 30 possible pairs from state-of-the-art works on industrial and social robotics. Finally, 18 pairs were selected to be applied in different 5-point Kansei (K) questionnaires. We classified these pairs of words into two sections, K-1 and K-2, which are presented in Table 2 and Table 3, respectively. While K-1 was designed to capture the feelings of visitors about the HRI scenario (using 6 pairs), K-2 was designed to capture impressions about the robot’s design, usefulness, and skills (using 13 pairs). Additionally, two open questions, identified as OP-1 and OP-2, were used to answer question Q3. These two questions were defined as follows:
OP-1: 
If you had to work together with a robot, what would be the main characteristics you think the robot should have? 
OP-2: 
If you had to live with a robot, what would be the main characteristics you think the robot should have? 
We use the widely cited Negative Attitudes toward Robots Scale (NARS) [42] questionnaire to capture the attitudes of participants after a direct interaction with the robot and, in this way, answer question Q2. According to [3], NARS is the most popular questionnaire used in robotics to measure attitudes towards robots. This questionnaire comprises 14 questions with a 5-point Likert scale (1: I strongly disagree, 2: I disagree, 3: Undecided, 4: I agree, 5: I strongly agree). These questions are classified into three sections: NARS-S1, NARS-S2, and NARS-S3. Generally, the average value of each of these three sections is calculated and reported separately. These average values range from 1 to 5. The NARS-S1 section is composed of six questions and is designed to capture attitudes toward situations of interaction with robots. The NARS-S2 section is composed of five questions and is designed to capture attitudes toward the social influence of robots. Finally, the NARS-S3 section is composed of three questions and is designed to capture attitudes toward emotions in interaction with robots [43]. Then, the NARS-S1, NARS-S2, and NARS-S3 values are averaged to obtain the general NARS score, which also ranges from 1 to 5. We use the original version of NARS defined in [42], which is in Japanese, and its English version.
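For clarity, the following sketch shows how the NARS scores described above can be computed from one participant’s 14 Likert answers. The item-to-section ordering is an assumption for illustration, and any reverse-scored items are assumed to have been recoded beforehand.

# Minimal sketch of NARS scoring following the section sizes given in the text
# (S1: 6 items, S2: 5 items, S3: 3 items). Item order is assumed.

from statistics import mean

def nars_scores(answers):
    """answers: list of 14 integers in [1, 5], ordered S1 items, then S2, then S3."""
    assert len(answers) == 14
    s1 = mean(answers[0:6])    # interaction situations
    s2 = mean(answers[6:11])   # social influence of robots
    s3 = mean(answers[11:14])  # emotions in interaction with robots
    overall = mean([s1, s2, s3])
    return {"NARS-S1": s1, "NARS-S2": s2, "NARS-S3": s3, "NARS": overall}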
In total, NARS-S1, NARS-S2, and NARS-S3, together with the proposed K-1, K-2, OP-1, and OP-2 questions, comprise 35 questions. It is relevant to highlight that the research methodology and objectives of this article differ from most classical HRI research activities performed in laboratories or structured environments, where participants are hired or are members of the same laboratory or school. This type of participant has the time and motivation to effectively answer large questionnaires built with five or more items for the same psychological or usability construct. This traditional approach is often applied to prove the validity and reliability of the results using tools such as Cronbach’s alpha [44]. In the application presented in this article, qualitative data are obtained from visitors of a robotic exhibition who participated voluntarily. Therefore, many typical considerations for increasing validity, sensitivity, and reliability made in structured, descriptive, or explanatory research performed in laboratories are not suitable and are out of this project’s scope. In this context, a common topic of discussion in the literature is when to use single or multiple items for the same construct in self-reporting questionnaires. For example, [45] discusses how the use of multiple-item measures is costly, aggravates respondent behaviour, and increases response errors; moreover, even “a second or third item of the same construct contributes little to the information obtained from the first item”. Diamantopoulos et al. [46] suggest that single-item approaches are viable options in exploratory research. This type of research is usually performed at a preliminary stage, in unstructured settings, such as in this work. Recent works discussing the suitability of single-item or multi-item constructs are [47,48]. In practice, single-item approaches can be considered suitable for well-understood constructs, such as those measuring satisfaction, while multi-item approaches are more suitable for complex constructs, such as trust and attitudes [46,49]. In this work, we use a single-item approach for measuring satisfaction-related constructs and a multi-item approach (NARS) to measure attitudes. Martinez et al. [50] state that questionnaires for visitors in museums and expositions must be brief and simple, otherwise “it is unlikely that visitors will fill out the questionnaires if they are too long to complete or too difficult to understand, and responses will not reflect the real experience”. Moreover, many practitioners, textbooks, and research articles suggest that long questionnaires should be avoided [51,52,53] to prevent careless responses and respondent fatigue, as well as to motivate visitors to participate in the survey. Therefore, we divided the proposed 35 questions among four questionnaires (P1, P2, P3, and P4) that were applied on different days of the IREX exposition. The NARS-S1 questions are asked in P1. The NARS-S2 and NARS-S3 questions are asked in P2. The K-1 (6 items), OP-1, and OP-2 questions are asked in P3. Finally, the K-2 (13 items) questions are asked in P4. The first part of questionnaire P4 (composed of 6 items) was used to capture impressions of perceived intelligence and animacy towards the proposed robotic system. The second part of questionnaire P4 (composed of 7 items) was used to capture design-related aspects of the robot platform. Four demographic questions (age range, gender, country, and robotics experience) are included in P1, P2, P3, and P4.
We provided two versions of each questionnaire, one in Japanese and the other in English, and participants were free to choose the language that suited them the most. Figure 6 sums up the content of each questionnaire.

5.2. Participants

The participants of this study are visitors of IREX 2019 who voluntarily interacted with the proposed HRI system. After they interacted with the robot, we asked them whether they could fill out one of the questionnaires described above, thereby giving consent to use their answers for research purposes. No personal data were collected. The total number of respondents is 207. From questionnaires P1 and P2, 5 and 2 participants, respectively, were discarded because they did not answer all of the questions. We discarded participants only in P1 and P2 because the NARS score is built from the answers to every question. In contrast, the answers to the questions of P3 and P4 can be considered individually, so empty answers were not discarded. Table 4 summarizes the demographic data of each questionnaire. Because IREX is held in Tokyo, Japan, most of the participants are Japanese (between 70% and 86% depending on the questionnaire). We also divided the participants into groups according to their knowledge of robotics: novices (1/5 and 2/5 on the Likert scale) and people who are knowledgeable about robots, called experts (3/5, 4/5, and 5/5).

6. Results

6.1. User Perceptions and Emotional Reactions

We use the results from questionnaires P3 and P4 to answer Q1 (What are the emotional reactions and perceptions of visitors towards the proposed interactive scenario?). The results on emotional reactions (P3) are shown in Table 2. Table 3 contains the results from the P4 questionnaire, regarding participants’ impressions of the robot. We can gather some dimensions into similar concepts (C): satisfaction (α) and comfort (β) of the participants after interacting with the robot (K-1); and behavior (γ), interactions (δ), and appearance (ε) for their impressions (K-2).

6.2. Negative Attitudes Towards Robots

We use the results from questionnaires P1 and P2 to answer research question Q2 (What is the attitude of visitors towards robots after direct interactions with an industrial robot with affective and cognitive skills?). The mean value and standard deviation (SD) for NARS-S1, NARS-S2 and NARS-S3 are summarized in Table 5. The mean values of each NARS section can be interpreted as positive (values close to 1), neutral (values close to 3), and negative (values close to 5). We divided the participants into male–female, novice–expert and Japanese–non-Japanese groups. Table 6 shows the mean and standard deviation values for each of these groups.

6.3. User Needs and Desires

We use the open questions OP-1 and OP-2 to capture the potential needs and design desires of users towards robots. Relevant words used by participants to describe needs and desired features of robots in working environments are: kind, safe, intuitive, responsive, cute, convenient, fast, accurate, and efficient. On the other hand, participants consider that robots in their everyday environment must be fun, kind, safe, and cute. Moreover, they consider that robots must have effective and interesting communication skills, understand feelings and emotions, wear clothes, and have convenient and interesting functionalities such as being able to cook and dance.

6.4. Hypothesis

With the collected data and the different groups created, we can formulate some hypotheses. Together with the research questions, they guide the analysis of the results.
H1. 
The cultural background (country of origin) of the participants has an influence on their perception of the robot. 
H2. 
The gender of the participants has an influence on their perception of the robot. 
H3. 
The participants’ knowledge about robots has an influence on their perception of the robot. 
Regarding the cultural background (H1), Jiang and Cheng pointed out better acceptance of robots by Chinese people [54]; and Bröhl et al. mentioned that Japanese people are more used to seeing robots in everyday life than Chinese or US people [55].
Concerning knowledge about robots (H3), we can hypothesize that experts have a more rational preconception about robots, based on their real capabilities rather than on a fictional image of robots [56].

7. Discussion

7.1. Regarding Q1 and Q2

In Table 2, the visitors expressed having a highly positive experience, feeling happy (μ = 1.38; σ = 0.64), relaxed (μ = 2.06; σ = 1.23), and interested (μ = 1.26; σ = 0.52) in the proposed HRI scenario. Moreover, rather than considering interaction with the NEXTAGE Open industrial robot a dangerous task, visitors mostly expressed feeling safe (μ = 1.32; σ = 0.75). They also considered the proposed HRI scenario as clear (μ̄ = 0.87; σ = 1.21) or intuitive enough, with a value very close to 1, which was one of the main objectives of this project. (To keep coherence, the values in the text are presented so that a value close to 1 corresponds to the described adjective; when an adjective lies on the 5-side of the scale, the five’s complement (6 − μ) is used, symbolized by a bar: μ̄.) We can sum up the feelings of the participants and reply to Q1 by analyzing the different concepts: they were satisfied [α] (scores around 1.3) and felt comfortable [β] (scores between 1.3 and 2) after their interaction with the robot.
Results shown in Table 3 suggest that visitors perceived the robot as a smart (μ = 1.67; σ = 0.67) and complex machine (μ = 3.52; σ = 1.10) that lies between a lifelike and an artificial entity (μ = 2.80; σ = 1.27). Even though the robot was able to recognize emotions and make decisions based on them and on engagement, visitors had a neutral impression on the emotional–emotionless pair (μ = 3.02; σ = 1.12). One factor influencing this result may be the lack of whole-body expressive movements in the proposed application, which could be one possible improvement of the system to obtain a better response from the participants [57]. This lack of emotion could also explain how participants perceived the robot’s movements: they considered them neither responsive nor slow, and mostly static rather than dynamic, with scores of around 3 for both pairs. This is confirmed by the analysis of their impressions regarding the behaviour [γ] and interactions [δ] of the robot; participants had difficulty interpreting them and graded them neutrally, with scores of around 3, aside from intelligence and usefulness, which were judged positively.
From Table 3, we can also see that the perception of the robot’s appearance is positive; the visitors see it as cute (μ = 1.86; σ = 1.04), desirable (μ = 1.77; σ = 0.86), and attractive (μ = 1.73; σ = 0.99). During the experiment, the robot was dressed in a hat and an apron, which could also explain the positive reaction of the public. However, visitors were quite neutral regarding their familiarity (μ = 3.27; σ = 1.30) with NEXTAGE; even though it is not well known to the general public, its design is still close to what a person could expect a robot to be, and the participants judged its appearance [ε] positively (scores around 1.7).
Nevertheless, the analysis of the different subgroups does not allow us to answer the three hypotheses (H1–H3). Indeed, almost none of the p-values of Welch’s t-test permit us to reject the null hypothesis. This is clearest for the novice–expert groups, where most of the p-values are close to 1. We can still point out some significant differences, with p < 0.05. We can observe some differences in the perception of the robot, especially regarding the country of origin. Thus, the Japanese participants described the presented robot as less dynamic (μ = 3.53; σ = 1.20) than the foreign participants (μ = 2.20; σ = 1.10). One explanation could be the habituation to robots in everyday life in Japan; those robots, such as Pepper, try to be dynamic to catch the audience’s attention, whereas NEXTAGE is originally an industrial robot and has more rigid movements. Our results also show that foreigners (μ = 2.50; σ = 0.84) seem more familiar with that kind of robot than Japanese participants (μ = 3.39; σ = 1.35). However, both groups found the robot cute and liked it in a similar proportion. On the other hand, men tend to see more life (μ = 2.48; σ = 1.19) in the robot than women (μ = 3.29; σ = 1.31).
To answer the second research question (Q2), we can also use the NARS analysis of P1 and P2. As shown in Table 5, participants have a positive attitude towards interacting with industrial robots, a neutral and slightly positive attitude towards the social influence of robots, and a neutral attitude toward emotions in interaction with robots. Results from Welch’s t-test shown in Table 6 suggest that there is no statistically significant effect of gender (p-value higher than 0.05), again dismissing H2. This result differs from those reported in [2], which suggested that women have more negative attitudes towards robots than men. However, our results agree with those recently reported in [3], which performed a systematic mapping of research articles exploring attitudes, trust, and acceptance in different contexts of social robotics and found that “the gender of the participants [is] not associated with their affective attitudes toward social robots”. Similarly, the t-test applied to the novice and expert groups indicates no statistically significant effect of robotics-related experience (H3). A possible explanation is that experts in robotics may be people working in marketing for robotic companies, or workers in factories where robots and humans work in isolation from each other. Therefore, for most of them it may also be the first time they actually share the same space with an industrial robot in an interactive scenario. This can explain why results from people with more experience in robotics presented values similar to those identified as novices (in many cases families and tourists). On the other hand, the novice group, even though they know less about robots, can still be interested in them, which is why they visited IREX. The analysis by country is also non-significant and does not allow us to verify H1, even though [55] pointed out some differences in the perception of robots across countries and previous analyses using NARS indicated that the country of origin has an influence on attitudes towards robots [58].
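For reference, the subgroup comparisons reported above correspond to Welch’s (unequal-variance) t-test, which can be computed as in the following sketch; the rating arrays are placeholders, not the collected data.

# Sketch of Welch's t-test for comparing two subgroups (e.g., Japanese vs.
# non-Japanese ratings of one semantic differential pair).

from scipy.stats import ttest_ind

group_a = [3, 4, 5, 3, 4, 2, 5, 4]   # hypothetical 5-point ratings, group A
group_b = [2, 2, 3, 1, 2, 3, 2]      # hypothetical 5-point ratings, group B

t_stat, p_value = ttest_ind(group_a, group_b, equal_var=False)  # Welch's test
significant = p_value < 0.05  # threshold used in the text
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {significant}")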
As explained in Section 2, the attitudes, acceptance, and trust towards robots reported in studies change depending on the application domain, the type of exposure, and the design of the robot. Romeo and Lado found positive perceptions about robots during the pandemic in Spain for Generation Z (1997–2012) [59]. However, those results could simply confirm a general tendency regarding the acceptance of robots in Spain or by the younger generation. Therefore, robotic systems such as the one presented in this article will likely become more relevant in the near future. This study mainly showed positive attitudes and impressions of visitors towards the use of robots. This suggests that the main objective of the project presented in this work was met, showing that people with no experience could interact with a robot, without prior teaching, in an enjoyable scenario. However, these impressions can be biased by the fact that participants received a gift. How this factor and other complex factors (such as cultural background) may or may not result in positive attitudes, acceptance, or trust, and how they affect the perception of the robot’s different attributes, such as its behavior or appearance, is out of the scope of this article and can be discussed in future iterations/studies of the proposed cognitive system.

7.2. Regarding Q3

Concerning the third research question (Q3, What are the potential expectations of visitors towards robots in their working and everyday environment?), the words and desired features used in the answers to OP-1 and OP-2 suggest that providing robots with pleasing design/aesthetics, intuitive communication skills, and enjoyable HRI activities can be relevant for improving the user experience and desirability of robots in both working and everyday settings. This contrasts with the traditional utilitarian objectives of robotics, which focus on performance (i.e., efficiency and effectiveness), and agrees with recent studies in HCI and HRI showing the importance of hedonic factors (e.g., aesthetics, pleasure, and emotions) for the successful integration of novel technologies [39,40,60,61]. The identification of the needs and desires of possible users of robots is a relevant task for developing robots and applications with greater acceptance, social impact, and market penetration [62,63].
The participants also focused strongly on safety, especially in work settings: 39% of the answers to OP-1 mention it. For instance, one participant mentioned that “a human should be able to turn it off”. This comment may be influenced by science fiction books or films. Liang and Lee pointed out that people exposed to that kind of media have a higher fear of robots and AI [56]. They also identified that only 30% of the US population has no fear of robots and AI, and that most people make no distinction between fear of robots and fear of AI. This fear of robots/AI can explain the emphasis on safety; it has been one of the conditions of robots’ acceptance in factories since the 1960s [64].
Even though we mentioned above that participants are not solely looking for efficiency, it is still a recurrent comment, especially in the case of working with robots. Robots are essential for the so-called Industry 5.0 to improve process efficiency [65]. People expect them to perform their tasks quickly and accurately.

7.3. Limitations

Because of the nature of the experiment, with no selection of participants, the population presents some bias. Apart from questionnaire P4, all the questionnaires have an unbalanced gender distribution, with around 80% males for P1 and P2 and around 60% for P3. Even though the “I” of IREX stands for international, the proportion of non-Japanese participants is below 30%. Due to the small number of foreign participants, we grouped all non-Japanese participants together, even though disparities exist even between close countries, such as France and Germany [58]. This heterogeneous group could explain the difficulty in interpreting the results.
The difference in size between the subgroups may also explain the lack of statistically significant differences in the results and why we cannot clearly answer the hypotheses. Since this was not the main goal of our experiment, it is not a major problem, but it should be taken into account in future work.

8. Conclusions and Future Work

Industrial robots are advanced machines able to generate engaging HRI scenarios that are difficult to achieve with most social robots, such as tasks requiring advanced manipulation of objects. However, most of them are evaluated and used in factories or research laboratories. In this work, we proposed the initial iteration of an advanced industrial robotic system able to deal with a noisy, crowded, and public environment (an international robotic exhibition). We successfully took on the challenge of bringing this robot to the wild and made a step toward developing systems that help in understanding the social and technical factors influencing the adoption of robots in everyday environments. The proposed system performed during the whole event (four days) without any technical issues and managed the interactions with hundreds of visitors. The system can recognize human faces, states, actions, and emotions from the user; then, in an unscripted scenario, the robot adapts its behavior according to the user’s. This exploratory study demonstrated that an industrial robot can be used with the general public, in the wild, without prior specific training. In addition, the results from the applied questionnaires suggest that visitors considered our proposed scenario enjoyable, safe, and interesting, suggesting that the proposed work’s main objective was met. The expectations collected in this preliminary study also coincide with previous studies showing the importance of non-functional elements, such as aesthetics, in HRI and HCI. Some participants also mentioned that safety and convenience would be important points for them if they had to work with robots in the future. These remarks are particularly important as design guidelines, as the participants that joined our experiment were mainly familiar with robots; these characteristics may be even more prominent for the widespread adoption of robots by the general public. Non-verbal communication with the robot could be one direction to investigate to improve safety and convenience. Future studies will focus on exploring how non-functional features, emotional expression, and different robot personalities, as well as the different errors presented in the interaction, can influence or improve attitudes, positive experiences, trust, and acceptance toward robots in everyday and public scenarios. We also plan to verify, with a wider population, the impact of gender, knowledge about robots, and country of origin; concerning the latter, it would be better to have a group for each nationality.

Author Contributions

Conceptualization, L.R., E.C. and G.V.; methodology, S.C., L.R., E.C., S.H. and S.Y.; software, S.C., L.R., E.C., S.H. and S.Y.; validation, G.V.; formal analysis, S.C.; funding acquisition, G.V.; investigation, S.C., L.R., E.C., S.H., S.Y. and G.V.; resources, V.L., Y.K. (Yuichiro Kawasumi) and Y.K. (Yasutoshi Kudou); data curation, S.C.; writing—original draft preparation, S.C.; writing—review and editing, V.L., Y.K. (Yuichiro Kawasumi), Y.K. (Yasutoshi Kudou) and G.V.; visualization, S.C.; supervision, G.V.; project administration, G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by KAWADA ROBOTICS.

Institutional Review Board Statement

The study followed the guidelines of the Ethics Committee of the Tokyo University of Agriculture and Technology, Tokyo, Japan.

Informed Consent Statement

Participants were fully informed of the anonymity of the questionnaire, and by filling it out they gave their consent.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BB	Bounding Box
CPU	Central Processing Unit
DoF	Degree of Freedom
EC	Eye Camera
GPU	Graphics Processing Unit
GUI	Graphical User Interface
HC	Hand Camera
HCI	Human–Computer Interaction
HRI	Human–Robot Interaction
IREX	International Robot EXhibition
OP	Open question
P1–4	Questionnaires 1–4
RAT	Robot-Assisted Therapy
RCNN	Region-based Convolutional Neural Network
RS	RealSense
US	Ultrasonic Sensor
WLAN	Wireless Local Area Network

References

  1. Darvish, K.; Wanderlingh, F.; Bruno, B.; Simetti, E.; Mastrogiovanni, F.; Casalino, G. Flexible human–robot cooperation models for assisted shop-floor tasks. Mechatronics 2018, 51, 97–114. [Google Scholar] [CrossRef] [Green Version]
  2. Müller-Abdelrazeq, S.L.; Schönefeld, K.; Haberstroh, M.; Hees, F. Interacting with collaborative robots—A study on attitudes and acceptance in industrial contexts. In Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction; Springer: Berlin/Heidelberg, Germany, 2019; pp. 101–117. [Google Scholar]
  3. Naneva, S.; Gou, M.S.; Webb, T.L.; Prescott, T.J. A Systematic Review of Attitudes, Anxiety, Acceptance, and Trust Towards Social Robots. Int. J. Soc. Robot. 2020, 12, 1179–1201. [Google Scholar] [CrossRef]
  4. Wendt, T.M.; Himmelsbach, U.B.; Lai, M.; Waßmer, M. Time-of-flight cameras enabling collaborative robots for improved safety in medical applications. Int. J. Interdiscip. Telecommun. Netw. IJITN 2017, 9, 10–17. [Google Scholar] [CrossRef]
  5. Oron-Gilad, T.; Hancock, P.A. From ergonomics to hedonomics: Trends in human factors and technology—The role of hedonomics revisited. In Emotions and Affect in Human Factors and Human-Computer Interaction; Elsevier: Amsterdam, The Netherlands, 2017; pp. 185–194. [Google Scholar]
  6. Fukuyama, M. Society 5.0: Aiming for a new human-centered society. Jpn. Spotlight 2018, 27, 47–50. [Google Scholar]
  7. Kadir, B.A.; Broberg, O.; da Conceicao, C.S. Current research and future perspectives on human factors and ergonomics in Industry 4.0. Comput. Ind. Eng. 2019, 137, 106004. [Google Scholar] [CrossRef]
  8. Kummer, T.F.; Schäfer, K.; Todorova, N. Acceptance of hospital nurses toward sensor-based medication systems: A questionnaire survey. Int. J. Nurs. Stud. 2013, 50, 508–517. [Google Scholar] [CrossRef]
  9. Sundar, S.S.; Waddell, T.F.; Jung, E.H. The Hollywood robot syndrome media effects on older adults’ attitudes toward robots and adoption intentions. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; IEEE: New Jersey, NJ, USA, 2016; pp. 343–350. [Google Scholar]
  10. Beer, J.M.; Prakash, A.; Mitzner, T.L.; Rogers, W.A. Understanding Robot Acceptance; Technical Report; Georgia Institute of Technology: Atlanta, GA, USA, 2011. [Google Scholar]
  11. Bröhl, C.; Nelles, J.; Brandl, C.; Mertens, A.; Schlick, C.M. TAM reloaded: A technology acceptance model for human-robot cooperation in production systems. In Proceedings of the International Conference on Human-Computer Interaction, Paris, France, 14–16 September 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 97–103. [Google Scholar]
  12. Jung, M.; Hinds, P. Robots in the wild: A time for more robust theories of human-robot interaction. ACM Trans. Hum.-Robot Interact. THRI 2018, 7, 2. [Google Scholar] [CrossRef] [Green Version]
  13. Šabanović, S.; Reeder, S.; Kechavarzi, B. Designing robots in the wild: In situ prototype evaluation for a break management robot. J. Hum.-Robot Interact. 2014, 3, 70–88. [Google Scholar] [CrossRef] [Green Version]
  14. Coronado, E.; Indurkhya, X.; Venture, G. Robots Meet Children, Development of Semi-Autonomous Control Systems for Children-Robot Interaction in the Wild. In Proceedings of the 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM), Toyonaka, Japan, 3–5 July 2019; pp. 360–365. [Google Scholar] [CrossRef]
  15. Venture, G.; Indurkhya, B.; Izui, T. Dance with me! Child-robot interaction in the wild. In Proceedings of the International Conference on Social Robotics, Tsukuba, Japan, 22–24 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 375–382. [Google Scholar]
  16. Foster, M.E.; Alami, R.; Gestranius, O.; Lemon, O.; Niemelä, M.; Odobez, J.M.; Pandey, A.K. The MuMMER project: Engaging human-robot interaction in real-world public spaces. In Proceedings of the International Conference on Social Robotics, Kansas City, MO, USA, 1–3 November 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 753–763. [Google Scholar]
  17. Bremner, P.; Koschate, M.; Levine, M. Humanoid robot avatars: An ‘in the wild’ usability study. In Proceedings of the 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), New York, NY, USA, 26–31 August 2016; IEEE: New Jersey, NJ, USA, 2016; pp. 624–629. [Google Scholar]
  18. Björling, E.A.; Thomas, K.; Rose, E.J.; Cakmak, M. Exploring teens as robot operators, users and witnesses in the wild. Front. Robot. AI 2020, 7, 5. [Google Scholar] [CrossRef] [Green Version]
  19. Zguda, P.; Kołota, A.; Venture, G.; Sniezynski, B.; Indurkhya, B. Exploring the Role of Trust and Expectations in CRI Using In-the-Wild Studies. Electronics 2021, 10, 347. [Google Scholar] [CrossRef]
  20. Nomura, T.T.; Syrdal, D.S.; Dautenhahn, K. Differences on social acceptance of humanoid robots between Japan and the UK. In Proceedings of the Procs 4th Int Symposium on New Frontiers in Human-Robot Interaction, The Society for the Study of Artificial Intelligence and the Simulation of Behaviour, Canterbury, UK, 21–22 April 2015. [Google Scholar]
  21. Haring, K.S.; Mougenot, C.; Ono, F.; Watanabe, K. Cultural differences in perception and attitude towards robots. Int. J. Affect. Eng. 2014, 13, 149–157. [Google Scholar] [CrossRef] [Green Version]
  22. Brondi, S.; Pivetti, M.; Di Battista, S.; Sarrica, M. What do we expect from robots? Social representations, attitudes and evaluations of robots in daily life. Technol. Soc. 2021, 66, 101663. [Google Scholar] [CrossRef]
  23. Sinnema, L.; Alimardani, M. The Attitude of Elderly and Young Adults Towards a Humanoid Robot as a Facilitator for Social Interaction. In Proceedings of the International Conference on Social Robotics, Madrid, Spain, 26–29 November 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 24–33. [Google Scholar]
  24. Smakman, M.H.; Konijn, E.A.; Vogt, P.; Pankowska, P. Attitudes towards social robots in education: Enthusiast, practical, troubled, sceptic, and mindfully positive. Robotics 2021, 10, 24. [Google Scholar] [CrossRef]
  25. Chen, S.C.; Jones, C.; Moyle, W. Health Professional and Workers Attitudes Towards the Use of Social Robots for Older Adults in Long-Term Care. Int. J. Soc. Robot. 2019, 12, 1–13. [Google Scholar] [CrossRef]
  26. Elprama, S.A.; Jewell, C.I.; Jacobs, A.; El Makrini, I.; Vanderborght, B. Attitudes of factory workers towards industrial and collaborative robots. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 113–114. [Google Scholar]
  27. Aaltonen, I.; Salmi, T. Experiences and expectations of collaborative robots in industry and academia: Barriers and development needs. Procedia Manuf. 2019, 38, 1151–1158. [Google Scholar] [CrossRef]
  28. Elprama, B.; El Makrini, I.; Jacobs, A. Acceptance of collaborative robots by factory workers: A pilot study on the importance of social cues of anthropomorphic robots. In Proceedings of the International Symposium on Robot and Human Interactive Communication, New York, NY, USA, 26–31 August 2016. [Google Scholar]
  29. Kim, M.G.; Lee, J.; Aichi, Y.; Morishita, H.; Makino, M. Effectiveness of robot exhibition through visitors experience: A case study of Nagoya Science Hiroba exhibition in Japan. In Proceedings of the 2016 International Symposium on Micro-NanoMechatronics and Human Science (MHS), Nagoya, Japan, 28–30 November 2016; IEEE: New Jersey, NJ, USA, 2016; pp. 1–5. [Google Scholar]
  30. Vidal, D.; Gaussier, P. Visitor or artefact! An experiment with a humanoid robot at the Musée du Quai Branly in Paris. In Wording Robotics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 101–117. [Google Scholar]
  31. Karreman, D.; Ludden, G.; Evers, V. Visiting cultural heritage with a tour guide robot: A user evaluation study in-the-wild. In Proceedings of the International Conference on Social Robotics, Paris, France, 26–30 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 317–326. [Google Scholar]
  32. Joosse, M.; Evers, V. A guide robot at the airport: First impressions. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, Vienna, Austria, 6–9 March 2017; pp. 149–150. [Google Scholar]
  33. Kanda, T.; Shiomi, M.; Miyashita, Z.; Ishiguro, H.; Hagita, N. A Communication Robot in a Shopping Mall. IEEE Trans. Robot. 2010, 26, 897–913. [Google Scholar] [CrossRef]
  34. Drolshagen, S.; Pfingsthorn, M.; Gliesche, P.; Hein, A. Acceptance of Industrial Collaborative Robots by People With Disabilities in Sheltered Workshops. Front. Robot. AI 2021, 7, 173. [Google Scholar] [CrossRef] [PubMed]
  35. Rossato, C.; Pluchino, P.; Cellini, N.; Jacucci, G.; Spagnolli, A.; Gamberini, L. Facing with Collaborative Robots: The Subjective Experience in Senior and Younger Workers. Cyberpsychology Behav. Soc. Netw. 2021, 24, 349–356. [Google Scholar] [CrossRef]
  36. Kawada Robotics. 2022. Available online: http://nextage.kawada.jp/en/open/ (accessed on 2 December 2022).
  37. Coronado, E.; Venture, G. Towards IoT-Aided Human–Robot Interaction Using NEP and ROS: A Platform-Independent, Accessible and Distributed Approach. Sensors 2020, 20, 1500. [Google Scholar] [CrossRef] [Green Version]
  38. Shi, Y.; Chen, Y.; Rincon Ardila, L.; Venture, G.; Bourguet, M.L. A Visual Sensing Platform for Robot Teachers. In Proceedings of the 7th International Conference on Human-Agent Interaction, Kyoto, Japan, 6–10 October 2019; pp. 200–201. [Google Scholar] [CrossRef]
  39. Nagamachi, M.; Lokman, A.M. Innovations of Kansei Engineering; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  40. Coronado, E.; Venture, G.; Yamanobe, N. Applying Kansei/Affective Engineering Methodologies in the Design of Social and Service Robots: A Systematic Review. Int. J. Soc. Robot. 2021, 13, 1161–1171. [Google Scholar] [CrossRef]
  41. Nagamachi, M. Kansei/Affective Engineering; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  42. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Measurement of negative attitudes toward robots. Interact. Stud. 2006, 7, 437–454. [Google Scholar] [CrossRef]
  43. Nomura, T.; Suzuki, T.; Kanda, T.; Kato, K. Altered attitudes of people toward robots: Investigation through the Negative Attitudes toward Robots Scale. In Proceedings of the AAAI-06 Workshop on Human Implications of Human-Robot Interaction, Menlo Park, CA, USA, 17 July 2006; Volume 2006, pp. 29–35. [Google Scholar]
  44. Gliem, J.A.; Gliem, R.R. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. In Proceedings of the Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, Milwaukee, WI, USA, 13–15 October 2003. [Google Scholar]
  45. Drolet, A.L.; Morrison, D.G. Do we really need multiple-item measures in service research? J. Serv. Res. 2001, 3, 196–204. [Google Scholar] [CrossRef]
  46. Diamantopoulos, A.; Sarstedt, M.; Fuchs, C.; Wilczynski, P.; Kaiser, S. Guidelines for choosing between multi-item and single-item scales for construct measurement: A predictive validity perspective. J. Acad. Mark. Sci. 2012, 40, 434–449. [Google Scholar] [CrossRef] [Green Version]
  47. Bergkvist, L. Appropriate use of single-item measures is here to stay. Mark. Lett. 2015, 26, 245–255. [Google Scholar] [CrossRef]
  48. Freed, L. Innovating Analytics: How the Next Generation of Net Promoter Can Increase Sales and Drive Business Results; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  49. Sauro, J. Is a Single Item Enough to Measure a Construct? 2018. Available online: https://measuringu.com/single-multi-items/ (accessed on 2 December 2022).
  50. Martinez-Molina, A.; Boarin, P.; Tort-Ausina, I.; Vivancos, J.L. Assessing visitors’ thermal comfort in historic museum buildings: Results from a Post-Occupancy Evaluation on a case study. Build. Environ. 2018, 132, 291–302. [Google Scholar] [CrossRef] [Green Version]
  51. Burchell, B.; Marsh, C. The effect of questionnaire length on survey response. Qual. Quant. 1992, 26, 233–244. [Google Scholar] [CrossRef]
  52. Steyn, R. How many items are too many? An analysis of respondent disengagement when completing questionnaires. Afr. J. Hosp. Tour. Leis. 2017, 6, 41. [Google Scholar]
  53. Krosnick, J.A. Questionnaire design. In The Palgrave Handbook of Survey Research; Springer: Berlin/Heidelberg, Germany, 2018; pp. 439–455. [Google Scholar]
  54. Jiang, H.; Cheng, L. Public Perception and Reception of Robotic Applications in Public Health Emergencies Based on a Questionnaire Survey Conducted during COVID-19. Int. J. Environ. Res. Public Health 2021, 18, 10908. [Google Scholar] [CrossRef]
  55. Bröhl, C.; Nelles, J.; Brandl, C.; Mertens, A.; Nitsch, V. Human–Robot Collaboration Acceptance Model: Development and Comparison for Germany, Japan, China and the USA. Int. J. Soc. Robot. 2019, 11, 709–726. [Google Scholar] [CrossRef] [Green Version]
  56. Liang, Y.; Lee, S.A. Fear of Autonomous Robots and Artificial Intelligence: Evidence from National Representative Data with Probability Sampling. Int. J. Soc. Robot. 2017, 9, 379–384. [Google Scholar] [CrossRef]
  57. Venture, G.; Kulić, D. Robot expressive motions: A survey of generation and evaluation methods. ACM Trans. Hum.-Robot Interact. THRI 2019, 8, 1–17. [Google Scholar] [CrossRef] [Green Version]
  58. Nomura, T. Cultural differences in social acceptance of robots. In Proceedings of the 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, Portugal, 28 August–1 September 2017; pp. 534–538. [Google Scholar] [CrossRef]
  59. Romero, J.; Lado, N. Service robots and COVID-19: Exploring perceptions of prevention efficacy at hotels in generation Z. Int. J. Contemp. Hosp. Manag. 2021, 33, 4057–4078. [Google Scholar] [CrossRef]
  60. Hoffman, G.; Ju, W. Designing robots with movement in mind. J. Hum.-Robot Interact. 2014, 3, 91–122. [Google Scholar] [CrossRef]
  61. Helander, M.G.; Khalid, H.M. Underlying theories of hedonomics for affective and pleasurable design. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Orlando, FL, USA, 26–30 September 2005; SAGE Publications: Los Angeles, CA, USA, 2005; Volume 49, pp. 1691–1695. [Google Scholar]
  62. Díaz, C.E.; Fernández, R.; Armada, M.; García, F. A research review on clinical needs, technical requirements, and normativity in the design of surgical robots. Int. J. Med. Robot. Comput. Assist. Surg. 2017, 13, e1801. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Glende, S.; Conrad, I.; Krezdorn, L.; Klemcke, S.; Krätzel, C. Increasing the acceptance of assistive robots for older people through marketing strategies based on stakeholder needs. Int. J. Soc. Robot. 2016, 8, 355–369. [Google Scholar] [CrossRef]
  64. Kirschgens, L.A.; Ugarte, I.Z.; Uriarte, E.G.; Rosas, A.M.; Vilches, V.M. Robot hazards: From safety to security. arXiv 2018, arXiv:1806.06681. [Google Scholar]
  65. Prassida, G.F.; Asfari, U. A conceptual model for the acceptance of collaborative robots in industry 5.0. Procedia Comput. Sci. 2022, 197, 61–67. [Google Scholar] [CrossRef]
Figure 1. The robot at the booth.
Figure 2. Top: main elements of the designed Graphical User Interface (GUI). Bottom: examples of elements shown in the GUI according to the status of the interaction: (a) no human interaction is taking place, (b) the interface shows which object the human has selected, (c) the robot reports what it is doing and how it feels.
Figure 3. General architecture of the project. The GUI follows a client–server architecture: the server is written in Python and the client in JavaScript.
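For readers who wish to reproduce a similar monitoring interface, the following is a minimal sketch of the Python server side of such a client–server split; the route name, port, and status fields are illustrative assumptions and not the actual API used in the project.

```python
# Minimal sketch of a GUI back-end in the spirit of Figure 3: a Python server
# exposing the current interaction status so that a JavaScript front-end can
# poll it. Route, port, and fields are assumptions for illustration only.
from flask import Flask, jsonify

app = Flask(__name__)

# Shared state that the robot-side modules would keep up to date.
interaction_status = {
    "state": "idle",             # e.g., "idle", "object_selected", "handing_gift"
    "selected_object": None,     # object chosen by the visitor, if any
    "robot_message": "Waiting for a visitor...",
}

@app.route("/status")
def status():
    """Return the current interaction status as JSON for the GUI client."""
    return jsonify(interaction_status)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A JavaScript client can then poll this endpoint (for example with fetch) and update the on-screen elements shown in Figure 2.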
Figure 4. Real scene with the objects the robot has to detect and grasp.
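As a rough indication of how the objects in such a scene could be located with an off-the-shelf deep learning detector, the sketch below runs a pretrained torchvision model on a snapshot of the table; the model choice, the file name, and the confidence threshold are assumptions for illustration, not the perception pipeline actually used in the paper.

```python
# Hedged sketch: detect objects in a snapshot of the booth table with a
# pretrained detector. This stands in for, but is not, the paper's pipeline.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("scene.jpg").convert("RGB")  # hypothetical camera snapshot
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections; boxes could then be mapped to grasp targets.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.7:
        print(f"class id {int(label)} at {box.tolist()} (score {score:.2f})")
```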
Figure 5. Flowchart of the robot command logic. The green boxes are threads running in the background that keep the data up to date. The actions marked in red call the 'Updating plate state' action. The Cognitive engagement thread estimates the user's engagement towards the robot from the user's position, emotion, and gaze.
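To make the caption above more concrete, the following is a simplified sketch (not the authors' implementation) of background threads that keep shared perception data up to date while an engagement score is computed from it; the distance-and-gaze heuristic and its weights are assumptions.

```python
# Simplified sketch of the background-thread pattern suggested by Figure 5.
import threading
import time

shared = {"face_distance_m": None, "gaze_on_robot": False, "emotion": "neutral"}
lock = threading.Lock()

def perception_thread():
    """Placeholder loop: would read the camera and refresh the shared data."""
    while True:
        with lock:
            shared["face_distance_m"] = 1.2    # dummy values standing in for sensors
            shared["gaze_on_robot"] = True
        time.sleep(0.1)

def cognitive_engagement():
    """Very rough engagement score in [0, 1] from distance and gaze."""
    with lock:
        d = shared["face_distance_m"]
        gaze = shared["gaze_on_robot"]
    if d is None:
        return 0.0
    proximity = max(0.0, 1.0 - d / 3.0)        # closer than ~3 m counts as engaged
    return 0.5 * proximity + 0.5 * (1.0 if gaze else 0.0)

if __name__ == "__main__":
    threading.Thread(target=perception_thread, daemon=True).start()
    time.sleep(0.3)
    print(f"estimated engagement: {cognitive_engagement():.2f}")
```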
Figure 6. The content of each questionnaire (P1–P4). The number in each red circle indicates the number of questions.
Table 1. Research articles proposing interactive systems with industrial robots that assess perceptions towards robots after direct Human–Robot Interaction.
Article | Robotic Platform | Setting | Task | Autonomy | Training Required | Participants
Muller et al. [2] | Universal Robot 5 robot arm | Laboratory | Assembly task | Fully autonomous | Yes | 90 subjects, mainly students from a technical university
Rossato et al. [35] | Universal Robot 10e robot arm | Laboratory | Collaborative task | Fully autonomous | Yes | 20 industrial senior and younger workers
Drolshagen et al. [34] | KUKA LBR iiwa 7 R800 (robot arm) | Closed room | The robot picks up wooden sticks to hand them over to the worker | Fully autonomous | No | 10 participants with mental or physical disabilities
Elprama et al. [26] | Baxter dual-arm robot | Closed room | Participants instruct the robot to put blocks inside boxes | Remote controlled | Yes | 11 car factory employees
This work | NEXTAGE Open dual-arm robot | Public space | The robot gives gifts to visitors according to their instructions and facial expression | Fully autonomous | No | Hundreds, but only 207 answered the questionnaires
Table 2. Semantic analysis results (K-1) about the feelings participants had regarding the robot. The concepts (C) relate to satisfaction (α) and comfort (β).
Semantic evaluation reported as μ (σ).
C | Dimension: Positive (1)–Negative (5) | Japanese | Non-J. | p | Male | Female | p | Novice | Expert | p | Total
α | Happy–Unhappy | 1.43 (0.68) | 1.14 (0.38) | 0.136 | 1.46 (0.71) | 1.29 (0.56) | 0.346 | 1.24 (0.60) | 1.55 (0.67) | 0.109 | 1.38 (0.64)
α | Interested–Boring | 1.28 (0.55) | 1.14 (0.38) | 0.447 | 1.27 (0.53) | 1.24 (0.54) | 0.844 | 1.24 (0.52) | 1.27 (0.55) | 0.836 | 1.26 (0.52)
α | Disappointed–Amused | 4.60 (0.63) | 5.00 (0.00) | 0.000 | 4.54 (0.65) | 4.81 (0.51) | 0.116 | 0.72 (0.61) | 4.59 (0.59) | 0.467 | 4.66 (0.59)
β | Relaxed–Anxious | 2.10 (1.22) | 1.86 (1.46) | 0.690 | 1.88 (0.99) | 2.29 (1.49) | 0.297 | 2.28 (1.43) | 1.82 (0.96) | 0.196 | 2.06 (1.23)
β | Safe–Danger | 1.25 (0.54) | 1.71 (1.50) | 0.447 | 1.23 (0.51) | 1.43 (0.98) | 0.409 | 1.28 (0.89) | 1.36 (0.58) | 0.702 | 1.32 (0.75)
β | Confused–Clear | 4.20 (1.14) | 3.71 (1.70) | 0.491 | 4.23 (1.18) | 4.00 (1.30) | 0.532 | 4.20 (1.29) | 4.05 (1.17) | 0.669 | 4.13 (1.21)
Table 3. Semantic analysis results (K-2). The concepts (C) relate to the robot's behavior (γ), interactions (δ), and appearance (ε).
Semantic evaluation reported as μ (σ).
C | Dimension: Positive (1)–Negative (5) | Japanese | Non-J. | p | Male | Female | p | Novice | Expert | p | Total
γ | Smart–Stupid | 1.61 (0.72) | 1.33 (0.52) | 0.290 | 1.44 (0.58) | 1.76 (0.83) | 0.176 | 1.57 (0.66) | 1.57 (0.75) | 0.977 | 1.57 (0.69)
γ | Simple–Complicated | 3.58 (1.00) | 3.17 (1.72) | 0.590 | 3.67 (1.11) | 3.29 (1.10) | 0.284 | 3.61 (1.12) | 3.43 (1.12) | 0.597 | 3.52 (1.10)
γ | Dynamic–Static | 3.53 (1.20) | 2.20 (1.10) | 0.050 | 3.26 (1.10) | 3.56 (1.50) | 0.488 | 3.36 (1.33) | 3.38 (1.20) | 0.964 | 3.37 (1.24)
γ | Responsive–Slow | 3.00 (1.23) | 2.50 (0.55) | 0.16 | 2.96 (1.26) | 2.88 (1.05) | 0.820 | 3.04 (1.26) | 2.81 (1.08) | 0.511 | 2.93 (1.16)
δ | Lifelike–Artificial | 2.89 (1.29) | 2.17 (1.17) | 0.205 | 2.48 (1.19) | 3.29 (1.31) | 0.046 | 2.57 (1.27) | 3.05 (1.28) | 0.218 | 2.80 (1.27)
δ | Emotional–Emotionless | 3.05 (1.11) | 2.83 (1.33) | 0.714 | 2.93 (1.11) | 3.18 (1.19) | 0.489 | 3.04 (1.15) | 3.00 (1.14) | 0.900 | 3.02 (1.12)
δ | Useful–Useless | 1.89 (1.01) | 1.50 (0.84) | 0.330 | 1.81 (0.96) | 1.88 (1.05) | 0.832 | 1.65 (0.88) | 2.05 (1.07) | 0.192 | 1.84 (0.98)
δ | Familiar–Unknown | 3.39 (1.35) | 2.50 (0.84) | 0.053 | 3.11 (1.45) | 3.53 (1.07) | 0.278 | 3.26 (1.39) | 3.29 (1.27) | 0.951 | 3.27 (1.30)
ε | Desirable–Undesirable | 1.84 (0.89) | 1.20 (0.45) | 0.029 | 1.63 (0.84) | 2.00 (0.89) | 0.189 | 1.78 (1.04) | 1.75 (0.64) | 0.901 | 1.77 (0.86)
ε | Cute–Ugly | 1.95 (1.09) | 1.33 (0.52) | 0.043 | 1.81 (0.96) | 1.94 (1.20) | 0.716 | 1.87 (0.97) | 1.86 (1.15) | 0.969 | 1.86 (1.04)
ε | Modern–Old | 1.57 (0.83) | 1.33 (0.52) | 0.374 | 1.35 (0.69) | 1.82 (0.88) | 0.070 | 1.61 (0.78) | 1.45 (0.83) | 0.523 | 1.53 (0.79)
ε | Attractive–Unattractive | 1.79 (1.04) | 1.33 (0.52) | 0.116 | 1.67 (1.00) | 1.82 (1.01) | 0.619 | 1.74 (1.01) | 1.71 (1.01) | 0.935 | 1.73 (0.99)
ε | Like–Dislike | 1.68 (0.87) | 1.00 (0.00) | 0.000 | 1.44 (0.75) | 1.82 (0.95) | 0.175 | 1.65 (0.78) | 1.52 (0.93) | 0.623 | 1.59 (0.83)
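For reference, the per-group statistics and the p columns in Tables 2 and 3 correspond to group means, standard deviations, and Welch's t-tests; the snippet below is a minimal sketch on invented ratings, not the study data.

```python
# Minimal sketch: mean, standard deviation, and Welch's t-test p-value for one
# semantic-differential item (1-5 scale) compared between two groups.
import numpy as np
from scipy import stats

japanese = np.array([1, 2, 1, 1, 2, 1, 3, 1, 1, 2])   # invented example ratings
non_japanese = np.array([1, 1, 2, 1, 1, 1])

for name, x in [("Japanese", japanese), ("Non-Japanese", non_japanese)]:
    print(f"{name}: mu = {x.mean():.2f}, sigma = {x.std(ddof=1):.2f}")

# Welch's t-test (unequal variances), as used for the p columns of the tables.
t, p = stats.ttest_ind(japanese, non_japanese, equal_var=False)
print(f"Welch's t-test: t = {t:.2f}, p = {p:.3f}")
```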
Table 4. Demographic data for each questionnaire. Some participants did not report their gender or country; for the latter, we assumed they were Japanese or non-Japanese depending on the language of the questionnaire they used (Japanese or English).
Group | P1 | P2 | P3 | P4
Considered answers | 78 (100%) | 67 (100%) | 47 (100%) | 44 (100%)
Japanese | 57 (73%) | 47 (70%) | 40 (85%) | 38 (86%)
Non-Japanese | 22 (28%) | 20 (30%) | 7 (15%) | 6 (14%)
Male | 62 (79%) | 55 (82%) | 26 (55%) | 27 (61%)
Female | 15 (19%) | 12 (18%) | 21 (45%) | 17 (39%)
Novice | 21 (27%) | 22 (33%) | 25 (53%) | 23 (52%)
Expert | 57 (73%) | 45 (67%) | 22 (47%) | 20 (46%)
OP-1 | | | 18 (38%) |
OP-2 | | | 21 (45%) |
Table 5. Average (μ) and standard deviation (σ) values for each NARS subscale. Values close to one represent positive attitudes and values close to five represent negative attitudes. Because the NARS-S3 subscale uses positively worded questions, the five's complement of its mean is shown.
Type | μ | σ
Interaction (S1) | 1.90 | 0.72
Social (S2) | 2.60 | 0.96
Emotion (S3) | 2.62 | 0.97
Table 6. Average (μ) and standard deviation (σ) of the NARS questionnaire for each group. The column p reports the p-value of Welch's t-test. Because the NARS-S3 subscale uses positively worded questions, the five's complement of its mean is shown.
Values reported as μ (σ); each p value is shared by the pair of groups it compares.
Groups | Interaction (S1) | p | Social (S2) | p | Emotion (S3) | p
Japanese | 1.90 (0.64) | 0.890 | 2.47 (0.97) | 0.084 | 2.73 (1.01) | 0.130
Non-Japanese | 1.87 (0.92) | | 2.90 (0.88) | | 2.37 (0.82) |
Male | 1.91 (0.71) | 0.890 | 2.61 (1.02) | 0.809 | 2.62 (0.99) | 0.963
Female | 1.88 (0.78) | | 2.55 (0.66) | | 2.61 (0.86) |
Novice | 2.07 (0.65) | 0.168 | 2.71 (0.85) | 0.484 | 2.59 (0.92) | 0.852
Expert | 1.83 (0.74) | | 2.54 (1.01) | | 2.64 (1.00) |
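As a small illustration of the scoring convention in Tables 5 and 6, the sketch below reverses positively worded NARS-S3 answers on a 1–5 Likert scale; the sample answers are invented, and reading the captions' "five's complement" as the standard 6 − x reversal is our assumption.

```python
# Hedged sketch: reverse-score positively worded Likert items (1-5 scale) so
# that values close to 1 mean positive attitudes, as in Tables 5 and 6.
import numpy as np

s3_answers = np.array([4, 5, 3, 4, 2])    # invented 1-5 answers to positive items

reversed_scores = 6 - s3_answers           # maps 1<->5 and 2<->4; 3 stays 3
print(f"reversed mean: {reversed_scores.mean():.2f}, "
      f"sd: {reversed_scores.std(ddof=1):.2f}")
```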
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
