Article
Peer-Review Record

Redefining User Expectations: The Impact of Adjustable Social Autonomy in Human–Robot Interaction

Electronics 2024, 13(1), 127; https://doi.org/10.3390/electronics13010127
by Filippo Cantucci 1,*, Rino Falcone 1 and Marco Marini 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 20 October 2023 / Revised: 12 December 2023 / Accepted: 18 December 2023 / Published: 28 December 2023
(This article belongs to the Special Issue Human Computer Interaction in Intelligent System)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This paper addresses Human-Robot Interaction for guidance in cultural heritage sites. It investigates human user satisfaction with an autonomous robot equipped with a computational model that integrates the principles of Adjustable Social Autonomy. Three main research hypotheses were examined and discussed in the paper. Generally, the scientific content is solid. The authors should focus on correcting all the language and linguistic mistakes. In addition, some paragraphs should be better explained, such as the beginning of Section 4.1 (it is understood later in the paper).

Comments on the Quality of English Language

The authors should focus on correcting all the language and linguistic mistakes. In addition, some paragraphs should be better explained, such as the beginning of Section 4.1 (it is understood later in the paper).

Author Response

Dear Reviewer,

Thank you for reviewing our work. In the hope of meeting your requirements, we have endeavored to minimize errors in English. Additionally, we have revised Section 4.1 with the aim of making it more understandable in the context of the submitted article.

Reviewer 2 Report

Comments and Suggestions for Authors

1. The article does not provide a more in-depth analysis of the decision-making process of robots to explain why different levels of autonomy have different impacts on user satisfaction.

2. In the discussion section, the explanation of the results was not detailed, and there was no clear explanation of why autonomous adaptive robots would lead to a decrease in user satisfaction.

3. There are many grammar errors, and improvement of the English language is required.

Comments on the Quality of English Language

There are many grammar errors, and improvement of the English language is required.

Author Response

Dear Reviewer, thank you for the valuable comments provided in reviewing our work. We hope that our answers address your concerns.

1) The article does not provide a more in-depth analysis of the decision-making process of robots to explain why different levels of autonomy have different impacts on user satisfaction.

Reply: Thank you for your comment. We attempted to provide this analysis by modifying Section 4.1. This section explains how the levels of adoption can affect the user's satisfaction, owing to the robot's capability to not align the adopted task with the user's task delegation. The revised Section 4.1 reads as follows:

“As mentioned earlier, the robot employed different levels of adaptation in its interactions to offer optimal support to the user. Specifically, two types of assistance were utilized in this experiment: literal help and critical help. When the robot opted for literal help, it constructed the tour by selecting the most relevant artworks from the artistic period explicitly indicated by the user. Conversely, when the robot chose to provide critical help, the suggested tour was crafted based on broader criteria. In other words, the robot took into account additional information from the user's profile, such as tolerance for room crowding or disinterest in specific artistic periods. Furthermore, the robot considered the potential interest the user might have in a highly relevant work present in the virtual museum — an aspect implicitly deduced by the robot and not explicitly declared by the user. Specifically, the robot endeavored to optimize the relationship between the relevance of the artworks and the virtual rooms' level of crowding. In the case of critical help, as is evident, the robot does not directly align with the user's request but presents an alternative tour that could still be of interest, perhaps even more so. The strength of this form of assistance lies precisely in the robot's ability to go beyond the goals declared by the user and address other needs and interests that the user may not immediately consider. However, it is important to note that this type of assistance is susceptible to the robot's potential erroneous interpretations, and the user may be reluctant to accept a tutoring role played by the robot that was not directly requested.”

2) In the discussion section, the explanation of the results was not detailed, and there was no clear explanation of why autonomous adaptive robots would lead to a decrease in user satisfaction.

Reply: We attempted to provide this explanation by adding the following paragraph to the discussion: “An autonomous robot that adapts its behavior may not be able to tailor it to the user's needs but may instead adopt other criteria, such as considering only the resources available in the physical world. This could lead the robot to make choices that potentially have nothing to do with the user's expectations, resulting in user dissatisfaction with the outcome achieved by the robot. What we observed in our experiment is that a robot capable of adapting its behavior to the user's needs, including implicit mental states, can adopt a task in a way that does not decrease the user's satisfaction with the result obtained.”

3) There are many grammar errors, and improvement of the English language is required.

Reply: We have tried to minimize the English errors.

Reviewer 3 Report

Comments and Suggestions for Authors

This study investigates human users satisfaction when interacting with a robot whose decision-making process is guided by a computational cognitive model integrating the principles of Adjustable Social Autonomy. A within-subjects experimental study was designed in the domain of Cultural Heritage. The results indicated that as the robot's level of autonomy in task adoption increased, user satisfaction with the robot decreased, while satisfaction with the tour itself improved. The results highlight the potential of Adjustable Social Autonomy as a paradigm for developing autonomous adaptive social robots that can improve user experiences in multiple real HRI domains.


There are still some issues in the manuscript:

1, In the Abstract: Line 6:“human users satisfaction” should be “human user’s satisfaction”?

2, In Section 4 (Methodology), the authors state that their experiment investigated four items. Subsequently, they state that the explanation and trustworthiness results analysis were beyond the scope of the present article. In their work, they focused on analyzing the impact of intelligent help on user satisfaction as an indicator of the robot's ability to intercept the user's needs, even when not explicitly declared. Since that is the case, why was the experiment designed with four items?

3, In Section 4.2, what is the difference between the first hypothesis (H1) and the second hypothesis (H2)?

The first hypothesis concerns satisfaction with the quality of the tour, and the second concerns satisfaction with the robot's performance? If so, isn't the quality of the tour a result of the robot's performance?

• (H1): User satisfaction regarding the quality of the tour suggested by the robot was higher when the robot provided critical help, as opposed to literal help.

• (H2): Users exhibited greater satisfaction with the robot’s performance when the robot operated in critical help, rather than providing literal help.

• (H3): Users experienced a higher level of surprise with the robot’s selection when it performed critical help compared to literal help.


4, According to [5], the contractor can adopt the task τ at different levels of autonomy: there are seven kinds of help.

Why did this article only select three types of help for research, and what are the criteria for selecting these three types?

Comments on the Quality of English Language

English language is fine.

Author Response

Dear reviewer, we appreciate the valuable feedback provided during the review of our work. Your comments have been instrumental in our efforts to enhance the quality of our project.

1) In the Abstract: Line 6: “human users satisfaction” should be “human user’s satisfaction”?

Reply: Thank you for the suggestion. We changed “user satisfaction” to “user’s satisfaction” throughout the paper.

2) In the section 4 (Methodology), the author proposed their experiment investigated four items. Subsequently, the author made a statement that the explanation and trustworthiness results analysis were beyond the scope of the present article. In their work they focused on analyzing the impact of intelligent help on user satisfaction, as an indicator of the robot’s ability to intercept the user’s needs, even when not explicitly declared. Since that's the case, why should the experiment be designed with four items?

Reply: Thank you for this valuable suggestion. It is true that we proposed an experiment with four items and described only two of them. We chose to proceed this way because, in this experiment, we aimed to analyze two fundamental elements: autonomy and trustworthiness in HRI. We focused on autonomy, framing the work in relation to a robot's ability to adapt to user needs and demonstrating the potential of adjustable social autonomy, without considering other concepts such as trustworthiness or explainability. Considering trust and explainability would have required framing the work within a different state of the art than the one addressed in this study. Upon investigating the literature, we realized that it would be more appropriate to present the results separately, even though they come from the same experiment. Therefore, we decided to defer the analysis of the two items related to explainability and trustworthiness to future work. In that subsequent work, we plan to describe both the theoretical tools and the literature in a more specific manner, having a full article at our disposal. We believe this is a prudent choice given the complexity of the topics we are trying to address and the extensive literature associated with them.

3) In Section 4.2, what is the difference between the first hypothesis (H1) and the second hypothesis (H2)?

Reply: Thank you for this valuable objection. We have identified that the use of “robot's performance” could be misleading and might suggest that the two hypotheses constitute a single investigation. In reality, while the first hypothesis specifically pertains to satisfaction with the robot's final choice (the suggested tour), the second aims to investigate user satisfaction with the robot's behavior. This behavior is intended as the decision-making process that led to the adoption of the task by performing either literal help or critical help. What the user evaluates in the second hypothesis is precisely how the robot operated with respect to the task initially delegated to it, regardless of the satisfaction linked to the outcome of that behavior. The user assesses how satisfied they are with a robot that has done exactly what was asked of it compared to when it has done something not aligned with the delegated task. We indeed observed that while the result proposed by the robot was more satisfying in the case of critical help, this was not true for the satisfaction associated with its behavior, which decreased compared to when the robot provided literal help. We therefore renamed the variable from “robot's performance” to “robot's behaviour” and introduced this explanation in Section 4.2.

4) According to [5], the contractor can adopt the task τ at different levels of autonomy: there are seven kinds of help. Why did this article select only three types of help for research, and what are the criteria for selecting these three?

Reply: In the Adjustable Social Autonomy theory, there are more levels of assistance than those investigated in our study. We made this choice because we selected the types of assistance that humans most frequently provide when interacting with each other. Additionally, we chose the minimum number of assistance types compatible with what we wanted to investigate in the paper, without risking compromising its comprehensibility. Critical help is the simplest form of assistance that allows the robot to adopt a task in a manner not aligned with the user's initial request, enabling the robot to consider implicit mental states; this demonstrates its ability to hold a theory of mind and to adapt its behavior to the mental states correctly attributed to the user. Literal help is the standard form of assistance in which the robot does exactly what is requested. In the future, we will consider expanding the robot's forms of assistance to encompass all levels of autonomy modeled in the Adjustable Social Autonomy theory.
