Article

Responsible Automation: Exploring Potentials and Losses through Automation in Human–Computer Interaction from a Psychological Perspective

1 Department of Psychology, Ludwig-Maximilians-Universität München, 80802 München, Germany
2 Department of Computer Science, Ludwig-Maximilians-Universität München, 80337 München, Germany
3 School of Business, University of South-Eastern Norway, 3679 Notodden, Norway
* Author to whom correspondence should be addressed.
Information 2024, 15(8), 460; https://doi.org/10.3390/info15080460
Submission received: 12 July 2024 / Revised: 29 July 2024 / Accepted: 31 July 2024 / Published: 2 August 2024
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)

Abstract

Robots and smart technologies are becoming part of everyday life and private households. While this automation of mundane tasks typically creates pragmatic benefits (e.g., efficiency, accuracy, time savings), it may also come with psychological losses, such as in meaning, competence, or responsibility. Depending on the domain (e.g., AI-assisted surgery, automated decision making), the user’s felt responsibility in particular could have far-reaching consequences. The present research systematically explores such effects, building a more structured base for responsible automation in human–computer interaction (HCI). Based on a framework of seven dimensions, study 1 (N = 122) evaluates users’ reflections on automating five mundane tasks (e.g., gardening, paperwork) and identifies reasons for or against automation in different domains (e.g., creativity, care work, decision making). Study 2 (N = 57) provides deeper insights into the effects of automation on responsibility perceptions. Using the example of a vacuum cleaner robot, an experimental laboratory study contrasted a non-robotic manual vacuum cleaner with a robot, whereby the user’s perceptions of device agency (higher for the robot) and own competence (higher for the manual device) were central mediators of the perceived responsibility for the result. We position our findings as part of a broader idea of responsible design and automation from a user-centered design perspective.

1. Introduction

Automation pervades ever more areas of daily life (for an overview, see [1]) and comes with clear benefits. A premier advertising message is to free up time for the important things in life (see Figure 1 for an example). From the 1920s, when washing machines entered private households, to today’s robotic vacuum cleaners and lawn mowers, pet-feeding robots, and app-controlled automatic cooking machines, this message has stayed the same. Technology is taking over more and more tasks that require not only manual human labor but also human control. Consequently, automation in the home is associated with gains such as time savings, efficiency, accuracy, and cost reduction. Still, the assumed practical advantages do not necessarily lead to success [2]. For example, recent analyses suggest that when robots are purchased, they often end up being used for only a short time “before being locked away in a cupboard” [3]. This makes knowledge about the fundamentals of successful human–computer interaction (HCI) all the more essential as a basis for responsible design decisions.
Besides the practical perspective, an interesting question is what the automation of daily life tasks means for the user’s experience and possible consequences such as felt responsibility. For example, are there any psychological losses? And if so, what are these losses, and how can we assess or prevent them? This research reflects on the psychological losses through automation in HCI and explores how people experience and reflect on automation in their personal space. Furthermore, we examine the interplay between users’ perceptions of device agency, psychological need fulfillment, and perceived responsibility, and how these relate to device automation as a design factor.

1.1. Responsible Automation

With responsible automation, we refer to a type of automation that is deliberate in the choice of which tasks to delegate to technological devices and which to keep in human hands. By considering various dimensions of gains and losses, responsible automation accounts for a broader spectrum of consequences of automating tasks and subtasks. Indeed, there is a large design space on the continuum between no support at all (e.g., hand-washing clothes) and a fully automated home with a mind of its own. Though many tasks, such as vacuum cleaning, or even medical procedures, can be automated without significant deficits in task fulfillment, it might be sensible to retain some manual operational elements. By participating in the process, the human could still feel part of it and retain responsibility for the results. Accordingly, our aim is not to return to former times and completely ban automation but to establish a mindset of responsible design decisions. From our perspective, automation is not generally good or bad. There is no need to do everything “by hand” and not every task must be meaningful. But it is worth reflecting on which tasks or subtasks could have meaning and psychological worth and what we as researchers and designers could do to retain this worth.
From a psychological perspective, we can identify losses in multiple dimensions. These include losses of meaning, responsibility, reflection, competence, and more, as partially discussed in HCI and other fields. For example, researchers contrasted manual versus (semi-)automated interaction using the example of specific tasks, such as losses of meaning in automated versus manual coffee brewing [5]. Others focused on specific experiential dimensions, for instance, threats to autonomy (e.g., [6]) or a lack of regard for the “human in the loop” in automation [7], or, more generally, criticized the predominantly performance-oriented view in models of human–automation interaction (HAI) [8]. The present research aims to explore these aspects and their consequences in a more systematic way, with the goal of building a design basis for responsible automation.

1.2. Research Focus and Contributions

Although there are surely many advantages and positive aspects of automation, in this paper we deliberately shift the focus to the losses and their relative relevance in different tasks and contexts. While the public discussion of potential losses through automation often refers to a societal level, for example, centering around the question of whether AI/automation will replace jobs performed by humans (e.g., [9,10]), we reflect on the more subtle, psychological losses on an individual level in everyday life.
Integrating psychological theory and previous research on automation in HCI, we propose a set of seven dimensions to explore losses (and gains) of automation. Based on this framework, a first study (N = 122) investigates how people reflect on perceived gains and losses through automation for different tasks, as well as which additional tasks they wish to be automated—and which not. Thus, this first study takes a broader, more abstract view and sheds light on users’ mental models and on what designers might consider in order to make use of the potentials of automation in a responsible way. A second study (N = 57) focuses on a subset of the identified psychological losses and explores the consequences of automation for a concrete application in an experimental laboratory study. Using the example of a vacuum cleaner robot, it explores users’ experiences and judgements in contrast to using a non-robotic manual vacuum cleaner through a combination of quantitative and qualitative data, and tests hypotheses about the interrelations between the technology’s agency, the user’s perceived competence, autonomy, and responsibility, and automation (manual vs. robot device) as a design factor.
A first contribution of this paper is the integration of existing research and theory into a framework of seven dimensions with which to assess losses through automation. This framework can support more nuanced discussions and research about the topic in the human–computer interaction (HCI) community and can be an impulse to motivate novel conversations concerning the role of automation and responsible design. Also, based on our findings and participants’ comments on automation in the home (study 1), we offer specific insights into which characteristics make a task automatable without risking psychological losses. Another contribution (study 2) is an advanced understanding of the interrelations between technology design (here: manual vs. automated interaction), user experience, and resulting responsibility. Finally, the overarching discussion of our studies’ findings provides an outlook on what design solutions that integrate automation while preserving psychological value might look like. We discuss such ideas as part of the broader idea of responsible design and point out the next steps of a research agenda.

2. Related Work

2.1. Trends in Automation Research

In recent years, researchers have explored the potentials of automation and related questions of HCI with various foci. For example, a workshop at the CHI conference 2023 [11] dealing with engaging automation experiences covered a wide range of contributions and application areas, such as automated vehicles [12] and typical conflicts in human–vehicle interaction [13], automated industrial processes [7], AI-assisted document tagging in the healthcare sector [14], reflections on the future of leadership in the age of human–AI teams [15], or automations in the home context [6]. Interestingly, with the increasing ubiquity and availability of automated systems to a growing variety of users [11], automation in the home and other non-professional settings becomes an increasingly prominent field of HCI research. For instance, based on a qualitative study on domestic robots [16], researchers recommend different strategies for successfully integrating robots in the home. These include, for example, timely notifications on planned robot activity or avoiding robot and human activity in the same time slots, so that the robot becomes an “invisible worker”, and the human does not feel disturbed by the robot. The focus here is on organizing practical challenges and not so much on the psychological effects or the question of automation per se.
In general, an important aspect of automation in the home domain seems to be the mental model of HCI suggested by the device, shifting from “the human controlling the smart home” to “human–home collaboration”, where the home actively provides suggestions or simulations regarding different configurations of automation [17]. Other studies already considered automation in the home in terms of wellbeing. As argued by Bittner and colleagues ([18] (p. 145)), “not all routine tasks in homes are viewed as ‘a waste of time’ but some routines may even be considered healthy rituals [which] should not be automated blindly, but designed carefully”. Based on a field study of the example of watering houseplants, they reflect on how to integrate the advantages of automation while not losing the “healthy” aspects of mundane activities.

2.2. Research on Human–Robot Interaction (HRI)

Regarding the question of the effects of automation on the user experience (UX), research on human–robot interaction (HRI) offers an interesting repertoire. By definition, robots are more autonomous than other types of technology. Thus, compared to non-robotic technology, using a robot for a particular task usually goes along with a higher degree of device automation, and research on the effect of varying degrees of device automation can provide valid insights for HRI. In this regard, concepts from psychology and HCI can help to identify the relevant mediating effects (e.g., perceptions of agency) and possible starting points for conscious robot design.
Given the wide application areas of robots in everyday scenarios [19], our work is particularly relevant for contexts where robots replace tools and humans use “their” robot (instead of a tool) in order to reach a particular instrumental goal. This could be, for example, an engineer using a robot to build a car, a doctor using a robot to conduct a surgery, or a garden owner using a lawn mower robot (in contrast to other application areas such as caring robots in a hospital, where the human who interacts with the robot is provided a service, and where the person who instructed the robot might feel more or less responsible for the result than the one who interacts with it). Referring to the two principal services of robots listed by the International Federation of Robotics (IFR), namely, servicing humans (personal safeguarding, entertainment, etc.) and servicing equipment (maintenance, repair, cleaning, etc.; cited after [20]), our research is particularly relevant for the latter. Moreover, while robots can vary in their degree of autonomy (e.g., [21,22]) and can be stationary or mobile, we assume that for the question of responsibility for the result, the perceived agency of a technology plays a central role. Agency, in turn, is closely related to the perception of “animate movement” (e.g., [23]), so areas where robots “move” autonomously are of central interest.

2.3. Preliminary Framework on Losses through Automation

While the research literature in HCI and psychology already hints at various aspects of potential losses due to automation (the particular interest of the present study), these are seldom considered in sum. A first (possibly not exhaustive) review across different application domains reveals a number of partly interrelated aspects and psychological needs that could be affected by automating tasks and may be considered as a framework to explore the losses through automation:
Loss of meaning: As a universal need [24], humans’ innate “will to meaning” is ever present, and basically any everyday life situation can be interpreted as a chance to derive meaning in life [25]. Though the individual sources may vary in detail, there are typical categories of activities which individuals perceive as meaningful. For example, referring to meaning from work, Lips-Wiesma and Wright [26] distinguish meaning along two dimensions (i.e., self vs. others, being vs. doing), forming the four quadrants of self-actualization (self + being), unity with others (others + being), expressing full potential (self + doing), and service to others (others + doing). Especially the doing-related quadrants are prone to being replaced by automation. Thus, delegating a task to a machine also implies missing out on a potential opportunity to derive meaning from it. As argued earlier [5], automation makes people focus on the outcome, whereby the interaction process becomes “flat” and degraded to waiting time. Of course, people may use the “waiting time” for other purposes, and the actual loss depends on how much meaning they derive from an activity. Many may argue that they will happily forgo the meaningful experience of washing the dishes for getting rid of the burden. If one does not experience cooking or writing a letter as somehow meaningful, it might not seem a problem to just press a button to obtain the result. Still, it could be interesting to explore whether the activity has minor meaningful elements that might be worthwhile to keep in (semi-)automation. From a philosophical perspective, one could ask how automation affects the opportunities for finding meaning in life. If one could design a machine that takes over all of one’s tasks—what is it that we want to keep? If all work and discomfort are delegated and only pleasure remains—is that still pleasure? Or does it become boring and finally meaningless?
Loss of transparency: Naturally, with increasing automation, technology becomes more complex and less transparent for users [27,28]. Algorithms work in the background and the user can see the result—but how it came about remains inscrutable. In consequence, users are less able to understand the processes, performance, intentions, or plans of the technologies [28,29]: what they are currently doing, how they arrived at a certain result, or why [30,31]. This loss of transparency can have further consequences such as lacking trust and acceptance [32,33], inadequate mental models of the technology [34], and over-trust, i.e., trusting the technology beyond its actual capabilities [35,36].
Loss of autonomy: Autonomy describes people’s need to express themselves, to feel that they are the cause of their own actions, and that their activities are self-endorsed [24,37]. Inherently, experiencing autonomy requires some flexibility and options for realizing spontaneous impulses (e.g., painting, cooking, writing, dancing, mountain biking). This, of course, is quite contradictory to automation and “smart” technology, which builds on predefined actions or even self-learning algorithms, shifting autonomy to the technology rather than the user. Losses of autonomy are also conceptually linked to psychological reactance [38]. Reactance is “an unpleasant motivational arousal that emerges when people experience a threat to or loss of their free behaviors. It serves as a motivator to restore one’s freedom” ([39] (p. 205)). Presumably, most of us have already experienced moments of reactance where we tend to move in the opposite direction if someone else wants to tell us how to behave—be it our mother, partner, boss, or the neighbor who knows everything best. Even if we may agree with someone in principle, we do not like others making decisions for us. Regarding automation, reactance seems especially relevant for automation that is implemented for societal goals like saving energy (but possibly does not provide sufficient flexibility for individual needs). If, for example, the smart fridge only allows us to take out food that is commendable for our health, or the smart light system always turns off the cellar light too early, users may reject or manipulate such technologies altogether (e.g., trick the motion detector so that the light is always on).
Loss of competence: Competence means feeling effective in one’s activities. It is closely linked to the concept of self-efficacy [40] and forms a basic need in many psychological need theories (e.g., [37]). Inherently, gaining competence from an activity requires perceiving that activity as linked to oneself. For automated activities, which are primarily performed by technology rather than oneself, a personal gain in competence becomes questionable. Accordingly, studies comparing interaction with automatic and manual devices showed lower competence for the interaction with automatic devices [1]. However, it is also noteworthy that there might be individual differences concerning what kind of interaction makes people feel proud and competent, and some might also experience competence from programming or adjusting the settings of automated devices. Whether this kind of programming pride is possible, again, depends on the degree of autonomy offered by the technology. In either case, this type of competence is more mediated and indirect than, for example, the feeling of (manually) removing snow, chopping wood, or painting a picture, where one directly experiences the outcome as a result of one’s bodily action.
Loss of responsibility: Automation also increases the degree of technology autonomy and agency, i.e., how much a technology can do independently of human input [21,41,42,43,44,45]. Greater agency and autonomy on the technology’s side, conversely, reduces the human user’s agency and self-efficacy [46]. Since self-efficacy is conceptually linked to responsibility [47], high levels of device agency may also lead to lower perceived responsibility for the result [48]. For example, [49] studied the effects of technology autonomy using different examples (e.g., autonomous cars, robot lawn mowers), which together showed that enhanced technology autonomy is associated with lowered outcome responsibility. Accordingly, studies also showed that with higher robot agency, responsibility attributions shift from human to robot [50,51], and people tend to use technology as a scapegoat for failures, in terms of a “self-serving bias” [52]. In line with this, the so-called responsibility gap [53] and the question of to what extent individuals can or should maintain responsibility for the behavior of AI have been an ongoing discussion for decades [35,53,54,55,56,57,58]. Taking this a step further, a user who does not feel responsible for an outcome might also act less carefully and omit to assess whether the outcome has the desired quality or whether the suggestion of the technology actually makes sense. In addition, compounded by a lack of transparency in automation, people may actually feel unable to take responsibility for a result, since they do not understand what the technology is doing and, thus, lack the competency to contradict the machine when required [59]. Depending on the domain (e.g., surgery robots, AI in personnel selection), the ensuing consequences could be severe.
Loss of reflection: The less the user is involved in the interaction, the less it supports reflection regarding its meaning, the underlying functionality, or its significance in a larger context. While cooking soup for dinner, chopping vegetables, combining spices, and stirring the soup, one may also reflect on the origin of the ingredients, the best combination of spices, or look forward to the moment the family gathers at the table. The activity creates a space for reflection. Accordingly, approaches like “slow design” deliberately slow down mundane activities in order to help people do things “at the right time and at the right speed” and reflect on their actions [60,61]. The specific operational steps and the “aesthetics” of interaction offer room for reflection and can fit the underlying meaning better or worse [62,63,64]. Especially in domains where a technology is used to change unhealthy or undesired human routines (e.g., health, sustainability), full automation seems problematic by being detrimental to learning and knowledge transfer to other contexts. If, for example, a smart heating system automatically reduces the temperature when nobody is at home, this can be seen as smart and quite convenient. However, it completely detaches people from the activity. They do not have to reflect on how they feel (do I feel too cold? Or too warm?) or on how to adjust the heating to their plans (how long will I be out of the house? In which rooms will I spend most of the day?). There is no longer any need to reflect on when energy consumption is required. Instead, the individual learns “I don’t have to care for anything”. Consequently, if the user is at another place without smart heating, they are not used to thinking about any necessary steps before leaving the house. Therefore, while automation can be effective in reaching a particular goal in a limited context, it might also decrease general awareness of this goal.

3. Study 1: Explorative Investigation of Perceived Gains and Losses through Automation

3.1. Research Questions

Based on the above list of potential losses related to automation, we aimed for an initial examination of the extent to which these can be found in people’s reflections on the automation of daily life tasks. As a basis for investigation, we surveyed ratings on automating five tasks of daily life, namely, (1) gardening/watering plants, (2) cleaning the toilet, (3) cooking a meal, (4) doing paperwork (e.g., paying bills, doing taxes), and (5) body care (e.g., creaming, hair washing). The five tasks were chosen in a preceding informal workshop and group discussion at our lab meeting. The rationale for selection was to cover a broad spectrum of different tasks, requiring more or less accuracy, more cognitive or more motoric skills, and interaction with living things or non-living artifacts, which might allow us to detect potential variations in perceived gains and losses through automation.
In addition to the six psychological dimensions of possible losses through automation listed above, we also investigated the more pragmatic dimension of time investment (i.e., gaining time for other important things in life) as a central common argument for automation. As a simple and straightforward means of first investigation, we applied the list of dimensions in the form of a Likert-type scale, where one can assess imagined losses and gains in the different dimensions on a seven-point rating scale ranging from “losing” (=1) to “gaining” (=7). Following established procedures in psychological and HCI research (e.g., [65,66]), we used a Likert scale whose labeled endpoints represent the two extremes of a continuum, with seven points, which, according to previous recommendations, represents a “sweet spot” balancing sensitivity and efficiency [67,68]. Considering the scale midpoint as neutral, values below four indicate perceptions of losses rather than gains, and values above four gains rather than losses.
For each dimension, a first question could be whether people see a loss at all or whether they rather see a gain through automation. In addition, it would be interesting to compare relative differences in the consequences of automation between the different dimensions and between different tasks. More specifically, our analysis was oriented towards the following research questions:
  • To what degree do people perceive specific gains or losses through automation? Do they differentiate between gains and losses through automation in the different dimensions (hence, do these represent a helpful framework of reflection)?
  • Do their perceptions of gains and losses through automation differ between different tasks?
  • What is their general attitude towards automation and how do they justify attitudes for or against automation? Are there specific dimensions which seem most relevant for a global judgment for or against automation?

3.2. Method

3.2.1. Study Design and Procedure

In an online survey, we collected data from a convenience sample of 122 participants in German-speaking countries. The study was approved by the university’s ethical review board and informed consent was collected from all participants. The study was announced via the university’s research participation mailing lists and further distributed via snowballing and social media platforms. As an incentive, five EUR 80 gift vouchers were raffled off. Participation took about fifteen minutes.

3.2.2. Participants

Of 135 people who started the survey, 122 completed it and answered the majority of questions, forming the final sample (75 female, 46 male, 1 person did not report their gender). Ages ranged from 18 to 91 years (M = 31.88, SD = 14.29), with an average household size of 2.48 people (SD = 1.41).

3.2.3. Measures

According to our research interest, the online survey consisted of the following main parts and measures (see Appendix A for the detailed measures):
  • Introduction: Brief information about the study and research background, declaration of consent to participate.
  • Ratings of gains and losses through automation: Ratings of imagined consequences of automating five tasks in the seven dimensions listed above. For each task, participants provided ratings of imagined losses and gains in the seven dimensions (1 = losing, 7 = gaining, one item per dimension) and a global rating, indicating whether in their view automating this task was desirable or not (1 = not desirable, 7 = desirable).
  • Wishes for automation and non-automation: In two open text fields participants indicated one task which in their view definitely should be automated and another task that should definitely not be automated.
  • Demographic data: Basic demographic data such as gender, age, household size, occupation.

3.2.4. Data Analysis

The statistical software IBM SPSS statistics version 29 was used to perform all statistical analyses. One-sample t-tests were used to assess significant deviations from the scale midpoint of ratings of gains and losses through automation along the seven dimensions. Pearson correlations were calculated to analyze the relevance of gains and losses in different dimensions for the global judgment pro or contra automating a task. Answers to open text questions regarding wishes for automation and non-automation were broadly categorized, whereby mentions in one category either used the same wording or expressed the same wish for automation on different levels of abstraction (e.g., driving a car, autonomous driving, transportation, traffic).
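To make this pipeline concrete, the following is a minimal sketch of the midpoint tests and correlations in Python (pandas/SciPy) rather than SPSS. All file and column names are hypothetical placeholders, and the data are assumed to be complete and in long format, with one row per participant and task.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format data: one row per participant x task, one column
# per dimension rating (1 = losing ... 7 = gaining), and a column with the
# global desirability judgment (1 = not desirable ... 7 = desirable).
df = pd.read_csv("automation_ratings.csv")  # hypothetical file name

MIDPOINT = 4  # neutral point of the seven-point scale
DIMENSIONS = ["meaning", "transparency", "autonomy", "competence",
              "responsibility", "reflection", "time_investment"]

for dim in DIMENSIONS:
    ratings = df[dim]
    # One-sample t-test against the scale midpoint
    t, p = stats.ttest_1samp(ratings, MIDPOINT)
    # Cohen's d for a one-sample test: (M - midpoint) / SD
    d = (ratings.mean() - MIDPOINT) / ratings.std(ddof=1)
    print(f"{dim}: M = {ratings.mean():.2f}, "
          f"t({len(ratings) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
    # Pearson correlation with the global pro/contra judgment
    r, p_r = stats.pearsonr(ratings, df["global_judgment"])
    print(f"  r with global judgment = {r:.2f} (p = {p_r:.4f})")
```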

3.3. Results

3.3.1. Perceived Gains and Losses through Automation in Different Dimensions

Figure 2 depicts the profiles of perceived gains and losses through automation in the seven dimensions for the five different tasks. As the figure shows, the main gain of automation (i.e., highest scale value) refers to time investment and freeing up time for other things in life. In contrast, participants see losses for the other dimensions. Altogether, the profiles look quite similar for most of the tasks but indicate some differences regarding relative peaks for the different dimensions. For example, while for gardening the heaviest loss refers to meaning, for paperwork the greatest loss refers to transparency. Presumably, these differences in losses between tasks also reflect differences in how relevant the dimensions are to each task in the first place. In general, we can assume that, per se, paperwork is more related to gaining transparency than meaning, and gardening is more related to meaning than toilet cleaning is. In consequence, automating a task may also go along with more severe losses in those dimensions that originally were most relevant for this task.
Also, there are differences in the general level of the tasks’ positions along the continuum from losing to gaining, whereby people see the most losses for automating cooking and the least losses for automating toilet cleaning. Averaged across all tasks, the largest losses (i.e., lowest scale values, significantly below the scale midpoint) refer to responsibility (M = 2.70, SD = 1.56, t(609) = −20.64, p < 0.001, d = 0.84), followed by competence (M = 2.78, SD = 1.60, t(609) = −18.83, p < 0.001, d = 0.76), reflection (M = 2.79, SD = 1.53, t(609) = −19.55, p < 0.001, d = 0.79), transparency (M = 2.84, SD = 1.54, t(609) = −18.59, p < 0.001, d = 0.75), and meaning (M = 2.94, SD = 1.65, t(609) = −15.83, p < 0.001, d = 0.64). For autonomy, the mean value is higher but still below the scale midpoint (M = 3.57, SD = 2.00, t(609) = −5.27, p < 0.001, d = 0.21), whereas time investment (M = 5.54, SD = 1.66, t(609) = 22.88, p < 0.001, d = 0.93) is clearly positioned on the gain side of the spectrum.

3.3.2. Global Judgments Pro or Contra Automating Different Tasks

Regarding the global judgment on whether automating a task is desirable or not (see Figure 3), there is a clear preference, with significant deviations from the neutral scale midpoint, for automating toilet cleaning (M = 6.04, SD = 1.47, t(121) = 15.36, p < 0.001, d = 1.39) and paperwork (M = 4.84, SD = 1.71, t(121) = 5.39, p < 0.001, d = 0.49) and a significant preference against automating cooking (M = 3.49, SD = 1.93, t(121) = −2.92, p < 0.01, d = 0.26) and body care (M = 3.38, SD = 1.88, t(121) = −3.66, p < 0.001, d = 0.33). Regarding gardening, there is no clear preference (M = 3.80, SD = 1.95, t(121) = −1.61, n.s.).

3.3.3. Relevance of Gains and Losses in Different Dimensions for Global Judgment Pro or Contra Automating a Task

A correlational analysis (see Table 1) provided insight into the relationships between perceived gains and losses in the single dimensions and the global judgment pro or contra automating a task. The separate correlational analysis for the five tasks indicates slight differences in this pattern between the five tasks. For example, for cleaning the toilet, time investment is clearly the most correlated dimension. For cooking, dimensions such as competence and autonomy show somewhat higher correlations than time investment, and altogether, the correlations between global judgment and the different dimensions are in a similar range (0.427–0.586). If analyzed across all tasks, the largest correlation refers to time investment, followed by meaning and autonomy. A possible interpretation is that people who see big gains (or negligible losses) in time investment, meaning, or autonomy would rather vote for automating a task. Or vice versa, people who see big losses (or no gains) in time investment, meaning, or autonomy would rather vote for not automating a task. Note, however, that the size of correlations across all tasks naturally depends on the specific selection of surveyed tasks and thus must be interpreted with caution. With another sample of tasks, other dimensions might show stronger relevance for the global judgment.
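Continuing the hypothetical sketch from Section 3.2.4, the per-task pattern in Table 1 could be reproduced with a grouped correlation, assuming a task column holding the five task labels:

```python
# Per-task correlations between each dimension and the global judgment
# (cf. Table 1); df and DIMENSIONS are defined in the sketch above.
per_task = (
    df.groupby("task")[DIMENSIONS + ["global_judgment"]]
      .corr()["global_judgment"]        # r of every variable with the judgment
      .unstack()                        # rows: tasks, columns: dimensions
      .drop(columns="global_judgment")  # drop the trivial r = 1.0 column
)
print(per_task.round(2))
```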

3.3.4. Wishes for Automation and Non-Automation

Participants’ mentions regarding wishes for automation and non-automation were broadly categorized. Figure 4 illustrates the frequency of mentions for the different categories. Regarding the wishes, i.e., tasks that definitely should be automated, the largest category was cleaning/tidying up, followed by the wish for automatic driving. Other frequent mentions were washing/ironing and paperwork/visiting government offices, followed by shopping, cooking, and separating waste/recycling. Some participants declared that they had no ideas/no wishes for further automation, and some stated that they were generally against automation. Finally, some participants gave abstract mentions, not naming a specific task but rather referring to a domain or general task characteristics, like “health regeneration” or “eye-straining tasks”.
Regarding the non-wishes, i.e., tasks that according to the participants’ view should be kept in human hands and not be delegated to technology, the largest category was interpersonal interaction/care work, followed by creativity/arts/music. Other frequent mentions were thinking/decision making in different domains (e.g., politics, business, selecting a present), cooking, sports, pet feeding and plant care, body care, and education/child care. Again, there were also a number of abstract mentions of wishes for non-automation, referring to general task characteristics like “everything that requires emotional intelligence” or “everything related to nature”. Note, however, that the categories are not clear-cut, and some comments in one category could also be subsumed under another. For example, education/child care also overlaps with the larger category of interpersonal interaction/care work.
Often, participants provided an additional justification for the mentioned wish or non-wish, like “keeping the emotional bonding and empathy” (as an argument for not automating care work), “unlearning social interaction” (as an argument for not automating shopping), “people would become less physically active—this can’t be good for our society”, or “further automation would speed up the pace of life and would overwhelm people” (as an argument for not automating any further tasks). Other participants further qualified their mention with a differentiated view like “creative work could be supported by automation but the machine should not become the creator itself”.

3.4. Discussion

The present survey explored perceived gains and losses through automation in the private sphere based on the example of different daily life tasks, applying a set of seven dimensions of potential relevance from a psychological perspective and common arguments for automation. Given that automation is often considered from a pragmatic perspective, with a focus on efficiency, safety, and time saving (e.g., [69]), we were interested in complementing this with an investigation into the psychological perspective and were curious to see how people reflect on these dimensions of potential relevance. Along with more insights into people’s experience and their evaluation of automation in their everyday life, our study also served as a test of our relatively simple, straightforward approach as a practical research tool, which could be valuable for various areas of HCI research and design.
As shown in our survey, people are well able to reflect on automation and on what they might gain or lose along dimensions such as autonomy, competence, transparency, or time investment. In sum, time investment emerged as the main gain of automation, whereas drops in meaning, transparency, felt responsibility, and reflection were perceived as the clearest losses elicited by automation. Furthermore, people differentiate their evaluation depending on the task to be automated, regarding the general level of gains versus losses as well as the profile along the surveyed dimensions. For example, for automating gardening and cooking, people expect greater losses in meaning than for automating paperwork or toilet cleaning. Regarding transparency, however, people perceive larger losses for paperwork. Moreover, depending on the task, there are differences in the relevance of the single dimensions for the global judgment for or against automation. In sum, the assessment of automation along the different dimensions provides a more fine-grained picture of the consequences and possible arguments for or against automating particular tasks.
In line with this, the qualitative statements also showed that people are well able to articulate why they are for or against automating particular tasks and what they would fear to lose or hope to gain. As the list of wishes for automation shows, wishes for automation do exist in various fields. Only a small proportion of participants stated that they were generally against automation; the majority saw at least some tasks where (more) automation appeared desirable. In many cases, these were typical repetitive tasks like cleaning, tidying up, washing, and ironing, and, as another bigger category, automatic driving. When looking at the mentioned non-wishes, it seems that there are certain qualities participants want to keep in human hands, like interpersonal interaction, creative tasks, and decision making. Especially when a living being is involved (e.g., care work, education, child care, pet feeding, plant care), the appropriate interaction might not seem “preprogrammable”, and trying to automate it appears as a sign of disrespect. Though more difficult to grasp, a similar “this cannot be right” feeling might underlie the vote against automating arts and creativity. In fact, it is often not discernible whether a piece of art was generated by an algorithm or a human brain. AI shows impressive performance in the field of arts and music [70], and automation has already become common practice in production techniques in video, photography, music, and games [71]. Still, some people might see creativity as something “holy”, reserved for humans, and consider AI results as “soulless”, which is further fueled by the discussion about whether AI is going to replace creative professionals [72,73]. In contrast, as shown in the spectrum of wishes and related justifications, people see less of a problem with delegating tasks that do not involve such emotionally loaded losses and comprise relatively foreseeable steps to be performed. This pertains, for example, to cleaning tasks, paperwork, or transportation.
To sum up, wishes for automation exist, and there is a lot to gain but, considering the psychological losses and wishes for non-automation, also a lot to lose. Consequently, this underlines that an “all-or-nothing approach”, generally praising or blaming the increasing automation in our society, is not adequate and that it is worth reflecting on the specific consequences.

4. Study 2: Experimental Exploration of User Experience and Responsibility Attributions for Manual vs. Robot Devices

Following up on the initial exploration of losses through automation along psychological dimensions, an important next step is to study the consequences of automation at the level of features and interaction design within a given task, how to achieve gains (or prevent losses) along the different dimensions, and what mediating effects might play a role. As one dimension of central interest, we focus on the effects of automation on the user’s perceived responsibility. While previous studies were often based on hypothetical interactions (i.e., online vignette studies, see [49]), our research complements these findings by studying real interaction in a laboratory setting and exploring further possible mediators. As such, we assess perceived device agency, competence, and autonomy as potential mediating factors. In the following sections, we discuss the assumed interrelations and hypotheses.

4.1. Mediating Factors of Responsibility Attributions for Manual vs. Robot Devices

4.1.1. Device Agency

As already mentioned in the related work section, relevant (and possibly interrelated) mediators of the effects of automation on responsibility perceptions include the user’s perceived device agency and the user’s felt autonomy and competence. In general, agency refers to an entity’s perceived capacity to intend and to act, which may comprise aspects of self-control, judgement, communication, thought, and memory [74] (p. 103). In the context of human–robot interaction, the concept of agency is often used to refer to the robot’s degree of autonomy [21,42,44,45,75], and both terms are also used interchangeably. In this understanding, a robot’s autonomy depends on how much it can do independently of human input. This parallels the definition of machine autonomy as “the ability of a computer to follow a complex algorithm in response to environmental inputs, independently of real-time human input” [41] (p. 149). As formulated by Formosa [42] (p. 599), “The more responsive machines are to a greater range of environmental inputs and the greater range of conditions in which machines can act, reason, and choose independently of real-time human input, the higher is their degree of autonomy”. Additionally, researchers in HRI also discuss agency perceptions in the sense of social and moral agency (e.g., [76]; for an overview, see [77]) and their role among humans as part of a social environment. In the present study context, however, we focus on the autonomy aspect of device agency, in the sense of the robot doing something independently of the user’s input or supervision.

4.1.2. Autonomy

The degree of robot agency also has implications for the user’s experience and felt autonomy, in the sense of experiencing choice and behaving in a way that is congruent with one’s own values and interests [78]. As discussed earlier, there is an inherent tension between human agency/autonomy and machine agency/autonomy [42,45,79], and it can be difficult to find a good balance [49]. For example, fear of losing autonomy was identified as one of the main reasons why people do not want “smart devices” and AI in their home [80].
Looking at the implications for the user’s perceived responsibility for the result, the user’s perceived autonomy should be positively correlated with it [78,81], because autonomy implies being responsible for one’s actions [75]. On the other hand, on a conceptual level, it can be assumed that high levels of device agency and the accompanying lower perceptions of user autonomy lead to less perceived responsibility for the result [48]. More specifically, high levels of machine agency are likely to reduce users’ experience of self-efficacy [82], which is conceptually associated with responsibility [83], and therefore also reduce responsibility perceptions. Also, there is initial empirical evidence that with higher robot agency, responsibility attributions shift from human to robot [50], so that people tend to feel more responsibility if they feel more agency for themselves (and less for the device) and vice versa [51].

4.1.3. Competence

Finally, in parallel to user autonomy and its interrelations to perceptions of device agency and responsibility, similar effects can be assumed for user competence. For example, studies comparing interaction with automatic and manual devices (e.g., coffee makers [84]) showed lower competence for the interaction with automatic devices. Moreover, similar to autonomy, competence is positively related to self-efficacy [40], and the feeling of competence is important for feeling responsible for an outcome [83]. Therefore, it can be assumed that autonomous devices lead to lower perceptions of user competence.

4.2. Hypotheses and Research Questions

Based on the considerations above, our empirical study of HRI based on the example of a robot vacuum cleaner compared to a regular, non-autonomous manual vacuum cleaner tests the following statistical hypotheses, as summarized in a research model in Figure 5.
The degree of device automation and type of vacuum cleaner have an impact on…
H1. 
The user’s perceived device agency. Using a robot is associated with higher agency perceptions than using a manual vacuum cleaner.
H2. 
The user’s perceived autonomy. Using a robot is associated with lower autonomy perceptions than using a manual vacuum cleaner.
H3. 
The user’s perceived competence. Using a robot is associated with lower competence perceptions than using a manual vacuum cleaner.
H4. 
The user’s perceived responsibility for the result. Using a robot is associated with lower responsibility perceptions than using a manual vacuum cleaner.
The user’s perceived responsibility for the result is:
H5a. 
Negatively correlated to the perceived agency of the device.
H6a. 
Positively correlated to the user’s perceived autonomy.
H7a. 
Positively correlated to the user’s perceived competence.
The (negative) effect of device automation on the user’s perceived responsibility for the result assumed in H4 is mediated by:
H5b. 
The perceived agency of the device.
H6b. 
The user’s perceived autonomy.
H7b. 
The user’s perceived competence during the interaction.
In addition, we examine the following sets of explorative research questions, by means of qualitative interviews:
E1: Which features or technology designs are most decisive for perceptions of device agency, competence, autonomy, and, in turn, the perceived responsibility for the result? Are there any special aspects users need in order to still experience the result of HRI as their own work? How could (partial) automation be designed to save work/time for humans while preventing negative effects of automation?
E2: To what degree do users gain feelings of competence from mastering/programming autonomous devices? Is there a “competence shift” from directly creating the result (with the help of the device) to initiating “others” (i.e., the device) to create the result?

4.3. Method

4.3.1. Study Design and Procedure

Our laboratory experiment used a one-factorial between-subjects design, comparing two different versions of commercially available vacuum cleaners in the same price range. Version 1 was a regular, non-robotic manual vacuum cleaner (Dyson V15 Detect Absolute; Figure 6, left, [85]) with a light beam for revealing invisible dust and a piezoelectric sensor that automatically increases suction power when needed. Version 2 was a robot vacuum cleaner (Dreame L10 Ultra; Figure 6, right, [86]) that was connected through an application on a mobile phone and had different setting options, such as the set-up of prohibited zones.
Participants were randomly assigned to one of the two conditions. When they arrived at the laboratory, they received a brief introduction to the topic and procedure of the study and filled in a consent form. The laboratory setting was the following: participants were in a room with a big table and some chairs in the middle and a wardrobe and a desk on the side (see Figure 7). Their task was to imagine that they were expecting guests and thus had to clean a specific area in the room, marked on the floor, to their own satisfaction within a period of ten minutes. To this end, they received the vacuum cleaner of their condition and a short introduction to its usage. The size of the laboratory was 23 m2, and pretests showed that it was possible to clean the entire space in ten minutes. Participants in the robot condition watched the robot from outside the specified area while it cleaned but did not intervene during the running procedure (e.g., to remove obstacles). The dirt that had to be removed contained a mix of hair, stones, and dust. After the cleaning process, participants completed a questionnaire with different measures as further specified below. This was followed by a short qualitative interview about their experience and implications for the technology design, and then a debriefing. Overall, participation lasted about 35 min. The study was approved by the institutional review board and informed consent was collected from all participants.

4.3.2. Participants

Participants were recruited via social media. Participation was voluntary and not incentivized. There were no special requirements besides a minimum age of 16 years. In total, 57 participants took part in the study, including 30 in the non-robotic manual vacuum cleaner condition and 27 in the robot vacuum cleaner condition. Their age ranged from 16 to 75 years (M = 28.16, SD = 11.35). Thirty-three reported their gender as female, twenty-four as male. The majority of participants (42, 73%) held a university degree, and for 11 participants (19%, who were still students) the highest educational degree was the secondary school-leaving certificate (e.g., Abitur). The remaining four participants (7%) held another degree (e.g., vocational education). Thirteen participants (23%) reported that they live alone. Regarding their cleaning routines at home, all participants reported that they use a vacuum cleaner. Regarding the specific type of vacuum cleaner (multiple mentions possible), 12 (21%) used a cordless vacuum cleaner, 44 (77%) used a classical manual vacuum cleaner, and 6 (10%) owned a robot vacuum cleaner. There were no significant differences in demographic variables between the two experimental conditions.

4.3.3. Measures

Perceived responsibility for result. Perceptions of the participants’ responsibility for the cleaning result were assessed by an adapted form of the personal responsibility scale [87]. The three items (e.g., “I had the feeling to be responsible for the cleaning result”) were assessed on a seven-point Likert scale (1 = do not agree at all, 7 = extremely agree) and scale values were calculated by averaging the corresponding items. Internal consistency was good for the scale (Cronbach’s alpha = 0.86).
Device agency. The measure of agency of the respective vacuum cleaner was assessed by an adapted form of the product autonomy scale [88]. The five items (e.g., “I had the feeling the vacuum cleaner goes its own way”) were assessed on a seven-point Likert scale (1 = do not agree at all, 7 = fully agree) and scale values were calculated by averaging the corresponding items. Internal consistency was good for the scale (Cronbach’s alpha = 0.82).
Perceived autonomy and competence. Perceptions of cleaning with the vacuum cleaner were assessed by an adapted form of the psychological needs scale [24,89]. Three items referred to autonomy (e.g., “I had the feeling to do what I want”) and three items to competence (e.g., “I had the feeling to manage tasks successfully”). All items were assessed on a five-point Likert scale (1 = do not agree at all, 5 = fully agree). Scale values were calculated by averaging the corresponding items. Internal consistency was acceptable for both scales (perceived autonomy: Cronbach’s alpha = 0.78, perceived competence: Cronbach’s alpha = 0.68).
Outcome of the cleaning. Participants’ ratings of the cleaning success were surveyed by a measure of estimated dirt removal as a percentage. Satisfaction with the outcome of the cleaning was assessed on a seven-point Likert scale (1 = not at all satisfied, 7 = fully satisfied).
Additional data. Furthermore, we assessed some additional measures referring to the overall experience of the usage and evaluation of the vacuum cleaner (e.g., fun, future usage intentions) as control variables.
Person variables. Besides age and gender, we assessed the participant’s education, current occupation, whether they were living alone or with others, the type of vacuum cleaner/cleaning device they were using in their home, and general technology attitude. Technology attitude was assessed with an adapted form of one subfactor of the attitude towards technology scale [90]. The three items (e.g., “Technology is very important in life”) were assessed on a five-point Likert scale (1 = not agree at all, 5 = fully agree) and scale values were calculated by averaging the corresponding items. Internal consistency was good for the scale (Cronbach’s alpha = 0.76).
Qualitative interview questions. Finally, participants answered a set of open questions. The first part of the interview referred to the experience of using the vacuum cleaner of their condition (“Reflect on the experience of using the vacuum cleaner”), with a specific focus on competence and autonomy (“To what degree did you feel competent and why?”; “To what degree did you feel autonomous and why?”). The second part of the interview referred to more general reflections about autonomous technology and the resulting user experience (“Also beyond a vacuum cleaner—to what degree do you feel that programming or adjusting the settings of a technology could be a source of competence?”) and decisive features (“Regarding (partially) autonomous technology—are there any specific features that are decisive for you to still feel sufficiently autonomous, competent, and responsible for the result?”).
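As an illustration of how such scale scores and internal consistencies can be computed, here is a small, self-contained Python sketch; the item values below are invented placeholders, not study data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame with one column per scale item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Invented responses to the three responsibility items (seven-point Likert)
resp_items = pd.DataFrame({
    "resp_1": [7, 6, 2, 5, 4],
    "resp_2": [6, 6, 3, 5, 5],
    "resp_3": [7, 5, 2, 6, 4],
})
print(f"Cronbach's alpha = {cronbach_alpha(resp_items):.2f}")
# Scale score = mean of the corresponding items, as described above
resp_score = resp_items.mean(axis=1)
```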

4.3.4. Data Analysis

The statistical software IBM SPSS statistics version 29 was used to perform all statistical analyses. Qualitative data were categorized in several steps of qualitative content analysis following Mayring [91]. The interviews were transcribed verbatim and categories of answers were developed for each interview question. For questions with a strong relation to theoretical concepts (e.g., competence, autonomy) we applied the step model of deductive category application. For more open questions (e.g., relevant features for autonomy perceptions) we applied the step model of inductive category development. The unambiguity of the categories was checked by handing the interview data, categorization scheme, and coding guideline (i.e., short definitions of the categories) to a second rater, resulting in good interrater reliability (ICC = 0.92–0.94).
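The paper does not state which ICC form was used; as one possible way of obtaining such coefficients, the pingouin package computes the common ICC variants from long-format coding data (the example data below are invented):

```python
import pandas as pd
import pingouin as pg  # pip install pingouin

# Invented example: five interview segments, each coded by two raters,
# with categories encoded numerically.
codes = pd.DataFrame({
    "segment": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":   ["R1", "R2"] * 5,
    "code":    [2, 2, 5, 4, 1, 1, 3, 3, 4, 4],
})
icc = pg.intraclass_corr(data=codes, targets="segment",
                         raters="rater", ratings="code")
print(icc[["Type", "Description", "ICC", "CI95%"]])
```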

4.4. Results

4.4.1. Hypotheses Tests

With regard to H1–H4, t-tests showed that, as assumed, using the robot vacuum cleaner was associated with higher perceptions of device agency than using the non-robotic manual vacuum cleaner (H1, t(55) = 8.70, p < 0.001, d = 1.04), but with lower autonomy perceptions (H2, t(55) = 3.92, p < 0.001, d = 0.75), lower competence perceptions (H3, t(55) = 4.53, p < 0.001, d = 0.76), and lower responsibility perceptions (H4, t(55) = 11.97, p < 0.001, d = 0.94). Figure 8 shows mean values and standard deviations for the different measures.
In line with our further hypotheses, the user’s perceived responsibility for the result was negatively correlated to the perceived agency of the device (H5a, r = −0.76, p < 0.001) and positively correlated to the user’s perceived autonomy (H6a, r = 0.48, p < 0.001) and the user’s perceived competence (H7a, r = 0.63, p < 0.001). In addition, we conducted a linear stepwise regression with device agency, autonomy, and competence as predictors and responsibility as the criterion in order to analyze the different measures’ relative relevance for responsibility attribution in parallel. The best model (R = 0.85, R2 = 0.72) with 72% explained variance included two predictors. Device agency (b = −0.68, p < 0.001) and competence (b = 0.82, p < 0.001) both appeared as relevant predictors of responsibility attributions, but autonomy was excluded since it did not add a significant amount of explained variance. An additional linear stepwise regression analysis included technology (robot = 1, manual = 0) as a possible predictor. In this case, the best model (R = 0.90, R2 = 0.81) with 81% explained variance includes technology as the strongest predictor (b = −1.70, p < 0.001), besides device agency (b = −0.326, p = 0.003) and competence (b = 0.553, p < 0.001), and again autonomy is excluded.
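SPSS’s stepwise procedure has no direct one-line equivalent in common Python libraries; the simplified forward-selection sketch below (entry criterion only, without SPSS’s removal step) conveys the idea. The column names agency, autonomy, competence, and responsibility are hypothetical.

```python
import statsmodels.api as sm

def forward_stepwise(X, y, alpha_enter=0.05):
    """Forward selection: repeatedly add the candidate predictor with the
    lowest p-value, as long as that p-value is below alpha_enter."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {cand: sm.OLS(y, sm.add_constant(X[selected + [cand]]))
                         .fit().pvalues[cand]
                 for cand in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

# Hypothetical usage with this study's predictors and criterion:
# model = forward_stepwise(df[["agency", "autonomy", "competence"]],
#                          df["responsibility"])
# print(model.summary())
```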
In a next step, according to H5b, H6b, and H7b, we conducted a mediation analysis using PROCESS by Hayes [80] to assess whether agency, autonomy, and competence would mediate the significant effects of the type of vacuum cleaner (manual vs. robot) on responsibility attribution. PROCESS uses linear least squares regression to determine unstandardized path coefficients of the total, direct, and indirect effects. We used Model 4 and bootstrapping with 5000 iterations along with heteroscedasticity-consistent standard errors [92] to calculate confidence intervals and inferential statistics. We considered effects as significant if the confidence interval did not include zero. For contrast coding (manual vs. robot), we used an indicator coding with manual as the reference group. Figure 9 displays the mediation model showing the effect of the type of vacuum cleaner (manual vs. robot) on the perceived responsibility for the result as mediated simultaneously by perceived device agency, user autonomy, and user competence. The direct effect (path c′) represents the effect of a manual vs. robot vacuum cleaner on responsibility attribution when the mediators are included in the model. The indirect effects (paths a1-b1, a2-b2, and a3-b3) represent the effect of the vacuum cleaner on responsibility attribution through device agency, user autonomy, or user competence, respectively. In line with our hypotheses, the total effect of the type of vacuum cleaner (path c) on responsibility attribution was mediated by device agency (H5b, indirect path a1-b1) and by user competence (H7b, indirect path a3-b3). For both variables, the bootstrapping analysis revealed a significant indirect effect, as the 95% CI did not cross zero (agency: CI [−0.80, −0.17]; competence: CI [−0.58, −0.14]). However, other than assumed in H6b, autonomy did not appear as a significant mediator (indirect path a2-b2), as the 95% CI crossed zero (CI [−0.17, 0.40]). The model also showed a significant direct effect (path c′) of the type of vacuum cleaner on responsibility attributions, indicating that the mediating effect of device agency and user competence was only partial, i.e., the direct effect (path c′) was smaller than the total effect (path c) but remained significant.
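The core of this analysis, a bootstrap of the indirect effects a·b for three parallel mediators, can be mirrored with the following simplified percentile-bootstrap sketch. Unlike PROCESS, it does not use heteroscedasticity-consistent standard errors, and the file and column names are hypothetical:

```python
# Sketch: percentile-bootstrap test of indirect effects in a parallel
# mediation model (condition -> mediators -> responsibility).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study2.csv")     # hypothetical; condition coded robot = 1, manual = 0
mediators = ["agency", "autonomy", "competence"]

def indirect_effects(data):
    """Indirect effect a*b for each mediator."""
    effects = {}
    # b paths: condition and all mediators jointly predict responsibility
    Xb = sm.add_constant(data[["condition"] + mediators])
    b_model = sm.OLS(data["responsibility"], Xb).fit()
    for m in mediators:
        # a path: condition predicts the mediator
        Xa = sm.add_constant(data["condition"])
        a = sm.OLS(data[m], Xa).fit().params["condition"]
        effects[m] = a * b_model.params[m]   # indirect effect = a * b
    return effects

rng = np.random.default_rng(42)
boot = {m: [] for m in mediators}
for _ in range(5000):                              # 5000 bootstrap resamples
    idx = rng.integers(0, len(df), size=len(df))   # resample rows with replacement
    sample = df.iloc[idx].reset_index(drop=True)
    for m, val in indirect_effects(sample).items():
        boot[m].append(val)

for m in mediators:
    lo, hi = np.percentile(boot[m], [2.5, 97.5])   # percentile 95% CI
    print(f"{m}: indirect effect 95% CI [{lo:.2f}, {hi:.2f}]")  # significant if 0 excluded
```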

4.4.2. Exploratory Quantitative Analyses

On average, participants rated that 84% (min = 75, max = 100) of the dirt was removed, with no significant differences depending on the type of vacuum cleaner. Regarding satisfaction, mean values were in the upper scale range for both types, with higher values for the manual vacuum cleaner (M = 6.33, SD = 0.76) than for the robot vacuum cleaner (M = 5.78, SD = 0.93, t(55) = 2.48, p < 0.01). The perceived responsibility for the result was correlated with the perceived cleaning success, however, only for the manual vacuum cleaner (r = 0.40, p < 0.05) and not for the robot (r = 0.01). Furthermore, device agency was negatively correlated with user autonomy (r = −0.43, p < 0.01).

4.4.3. Qualitative Analyses

In the following sections we report findings and sample statements for the four central interview questions. Table 2 reports the frequency of mentions of participants’ categorized answers for the four interview questions across the two device conditions (whole sample, N = 57).
(1) To what degree did you feel autonomous during the interaction with the vacuum cleaner and why? In the manual group, 5 of 30 participants (17%) declared they felt a medium level of autonomy (e.g., “The way it rolls is a bit weird. Sometimes I had the feeling that it moved in another direction than I intended” [P42]), and the remaining 25 (83%) felt fully autonomous during the interaction (e.g., “I felt very, very autonomous. It has practical functions but it is me who has to vacuum. At every point in time, it is in my control whether a spot will be clean or not” [P18]).
In the robot group, 7 of 27 participants (26%) declared they felt no or little autonomy, emphasizing that the robot takes the main role in the task (e.g., “I didn’t feel autonomous. Though I could adjust some settings, in the end it is the robot who does the work” [P52], “Well, the robot does everything” [P5]). Ten (37%) felt a medium level of autonomy, often referring to adjusting the settings of the robot via the app (e.g., “Of course I was autonomous while using the app. But not during the cleaning interaction. That was totally the robot’s job” [P43], “Of course, I could adjust many settings before the actual cleaning starts. But the moment I send the robot on its way, I don’t have much impact anymore” [P15]). Nine (33%) felt fully autonomous during the interaction, also referring to the app (e.g., “Very autonomous. In the context of the app I could do everything I wanted without any limits. There were no restrictions where I have to draw a line (to mark the areas the robot should leave out) or anything” [P10]). One person was undecided and the answer was not further categorized.
(2) To what degree did you feel competent during the interaction with the vacuum cleaner and why? In the manual group, 3 of 30 participants (10%) declared they felt a medium level of competence, while 26 (83%) felt fully competent during the interaction, referring to themselves as the one who ultimately had the control and critical impact (e.g., “You have everything in your hands. The result is up to you” [P18], “Very competent. The vacuum cleaner did roll very well but it was me who gave the initial impulse” [P24]), with a satisfying result (e.g., “Very competent. With the laser feature you can see even the smallest particles of dust. And then you see, it is really, really clean” [P1], “I really managed to get everything clean now. All the dust clouds are gone. It’s visibly clean now” [P2]). One person was undecided and the answer was not further categorized.
In the robot group, 5 of 27 participants (19%) declared they felt no or little competence (e.g., “I actually was not involved very much” [P50], “No, not very much. I had to adjust some settings—but what else?” [P46]), 12 (44%) felt a medium level of competence, and 8 (30%) felt fully competent during the interaction. Two persons were undecided and their answers were not further categorized.
(3) Apart from a vacuum cleaner—to what degree do you feel that the programming or adjusting of settings of a technology could be a source of competence? Referring to the whole sample of participants, 21 of 57 (37%) agreed that the programming or adjusting of settings of a technology could also be a source of competence. As one participant explained, “When more and more things become automated, what will count is who manages to program the automation in the best way. I can well imagine that this question will be a source of competence one day” [P18]. Others referred to the time savings (“I would feel competent because I get the most out of my time” [P21]) or interpreted programming the robot as leadership (“I think that this is perhaps also the understanding of good leadership, that one guides and supports, but leaves autonomy to others” [P25]). In contrast, 10 (18%) clearly disagreed, declaring that it is “a totally different concept” [P36], “not the same satisfaction you get from physical doing” [P9], and that “it still makes a big difference whether you are competent to set up a task or to actually do it on your own” [P27]. Twenty-four (42%) partly agreed and, for example, explained that programming a technology would enable “another type of competence” [P31, P32, P41] but on a “lower level than handling the technology on one’s own” [P29]. Finally, two (4%) gave meta-reflections or defined special conditions, such as that feeling competent about adjusting settings was “only possible for special tools that require a lot of knowledge but not for everyday tools like a vacuum cleaner” [P15].
(4) Regarding (partially) automated technology—are there any specific features that are decisive for you to still feel sufficiently autonomous, competent, and responsible for the result? Participants’ answers to this question showed a wide variety, and many did not name a specific feature but rather general aspects. The most frequently named category, with 14 mentions (25%), was opportunity for control/manual intervention and haptic experience, such as “some kind of haptic feedback is always satisfying. The result is important but also the feeling that you have done most of it yourself. That you don’t have the feeling that your work is being taken away from you or that you’re not responsible for it. That you at least set the impulse yourself” [P9]. Thirteen participants (23%) declared that it was important to them to define the basic rules. Thirteen (23%) reported meta-reflections (e.g., “Humans are probably not aware how much technology already does for us all the time. Even if it’s automated you can get the feeling you do it yourself. Like with automatic driving” [P55]). Five (9%) wished for a kind of dialogue with the technology that supports transparency or hints at what they could do better (e.g., “So I think it’s good if you get such a report, because then I had the feeling that I’m the supervisor and then I only have one result to control and then it would be even easier for me to hand over the control” [P25], “If you could have a dialogue with the tool or ask questions. That it also explains to you why it works the way it does” [P45]). Others referred to the example of the robot vacuum cleaner and features such as mapping the apartment or defining the robot’s path (8 mentions, 14%), its power level (3, 5%), or its time of operation (1, 2%).

4.5. Discussion

Our study explored the effects of using a robot versus a non-robotic tool for a daily household task (here: vacuuming the apartment) with regard to user experience and responsibility judgments. As a central finding, using the robot vacuum cleaner was associated with lower perceived responsibility for the result. This effect was mediated by the perceived device agency (which was higher for the robot and negatively associated with responsibility) and the user’s perceived competence (which was higher for the manual vacuum cleaner and positively associated with responsibility). The interviews after the interaction revealed that, with regard to competence, a central driver was the continuous feeling of control in the manual condition, whereas in the robot condition the lacking involvement in the cleaning process (aside from the adjusting of settings beforehand) was associated with lower levels of competence. In effect, a central issue was whether adjusting the settings and predefining what the technology will do were experienced as an additional, or at least substitutional, source of competence and something one could be proud of. This was true for some participants, who declared that they could also feel competent about successful programming. Other participants were undecided and saw this as another type of competence, but not of the same quality as being directly involved, as with a manual device. Also, some persons saw using the robot as an overall attractive tradeoff, explaining that although the task was less satisfying for them, they would accept the experiential cost because of the time savings. In this regard, it would be interesting to find out whether this “tradeoff” can still be optimized. For example, a question could be whether there is a sweet spot for the amount of adjustments for (robot) devices that preserves feelings of competence, autonomy, and responsibility for the outcome without overburdening the user.
In line with previous vignette studies (e.g., [93]), we found that people feel less responsible for a result if it is achieved with the help of a robot compared to a less autonomous, manually operated device. In addition, our research complements previous studies by revealing competence and device agency as additional mediators of this effect, which forms an important extension of HRI theory. Transferred to HRI design, especially with regard to competence, it seems interesting to further analyze what exactly impairs the feeling of competence in the robot interaction and, in a next step, how this might be addressed in the design of robots, including approaches beyond mere adjustments or upfront programming of the robot. If we can identify particular components of the task or interaction design that make the user feel more competent and included in the task, this could be a starting point to improve the UX and enhance feelings of responsibility while still keeping the advantages of using a robot (compared to a regular non-autonomous tool).
To this end, the present findings could also be interpreted in light of classical findings from work psychology, such as overcoming a challenge and success as a central reason for liking a task [94] or the negative impact of unfinished tasks on flow and wellbeing (e.g., [82,95,96]). Also, the perceived immediacy or directness of one’s impact on the result could be a decisive factor. For example, in a study on work satisfaction in the context of nursing [97], direct care tasks that involve extensive face-to-face contact with patients (e.g., oral hygiene, skin care, teaching patients, comforting/talking with patients, preparing patients for discharge) were more relevant for the nurses’ satisfaction and affect than indirect tasks that do not require intensive patient contact and interaction (e.g., charting, reviewing diagnostic test results, patient history reviews). The latter may be equally important parts of overall care quality from a rational perspective, but possibly not from an experience perspective. Similarly, while the right preprogramming of a cleaning robot is obviously an important step towards the overall result, clearing away the dirt with one’s own manual power might be more effective from an experiential perspective.
On a broader level, this study adds to the ongoing debate on the interaction between agency or autonomy of machines and humans [42,45,79]. Interpreted positively and assuming synergistic effects, intelligent, autonomous devices could enhance users’ autonomy by helping them to achieve more valuable ends and thus expand their scope of action [42,45]. In contrast, one could also argue that device autonomy harms human autonomy, because humans can achieve fewer valuable ends themselves [42]. In line with this, our data showed negative correlations between device agency and the users’ perceived autonomy, as well as lower autonomy, competence, and responsibility perceptions when using the robot vacuum cleaner. Moreover, their statistical correlation as well as their strong conceptual interrelation might also indicate that device agency absorbed possible mediating effects of the users’ perceived autonomy. Future work should further explore under which circumstances device agency enhances or decreases users’ felt autonomy and which design aspects might be decisive.

5. General Discussion

The present work represents a first approach to a more structured reflection on possible losses through automation from a psychological perspective. While previous studies mentioned single dimensions, an integrated view on these was still lacking. The framework of seven dimensions of automation applied here proved to be a simple, valuable tool in the present investigation and offered some intriguing insights. For example, it showed that time savings are the main driver for wanting to automate a task but, at the same time, a gain in time is by no means sufficient to vote for automation. Indeed, there are many other relevant dimensions where participants perceived losses rather than gains, resulting in split votes for or against automation across the five studied tasks. In sum, it showed that people are able to differentiate between gains and losses through automation in different (psychological) dimensions and between different tasks. Also, they showed a sophisticated attitude and could well justify their vote for or against automation in specific cases, whereby the present list of dimensions served as a helpful impulse and framework of reflection. The framework also makes it easier to identify task characteristics that must be considered to minimize psychological losses in automation and to envision design solutions that integrate automation while preserving psychological value.
In this way, our research provides theoretical, methodological, and content-related contributions, supporting a better understanding and more informed design of automation in HCI. It delivers advanced insights into the underlying rationales and experiences of people’s automation preferences.

5.1. Implications for Design

Regarding design implications, deeper studies of the concrete interaction and design of (semi-)automated systems could help to identify what could be done to make the human part of the human–robot interaction more direct and complete. For example, as very basic ideas, it could help if the human can adjust the settings on the robot itself (and not via an app), and it could help if the human finalizes the task by reviewing the cleaning statistics and pressing a “finish” button after the instrumental task is fulfilled (here: the room is vacuumed). On a more general level, such considerations also reveal somewhat conflicting goals in robot design: at first, one designs a robot to take the work away from the human; then one realizes that this brings new problems for the human’s experience and designs a button to make the human feel that he or she is doing the work again. Finding a good balance between robot automation and integration of the human thus remains a central goal for human–robot interaction design (see also [21]). Another starting point could be a more direct relation between one’s own impact/power and its exponentiation through the device. For example, on the data sheets of e-bikes by the bike manufacturer Specialized, the bikes’ power is reported with regard to the rider (e.g., “Rider Amplification: 2x You”), which makes the connection between bike and rider more obvious (Specialized 2023 [98]). This small notion creates a frame for interpretation: instead of viewing the e-bike as a cheat (the bike is doing all the work), the bike is seen as an amplifier of the rider’s own power, leaving room for claiming responsibility for challenging tours. Similar approaches might be transferred to robot design as well. Altogether, such ideas must be explored in a systematic way and be aligned with the expertise of interaction designers and with psychological concepts such as the aesthetics of interaction [63].
For example, when automating household tasks, small but deliberate details in the interaction design, even on the level of basic attributes (e.g., does the interaction feel fast or slow? Powerful or gentle? Direct or indirect? See also [62,63]), could decidedly strengthen the dimensions of competence, transparency, autonomy, responsibility, and overall involvement in the task. This reflection could lead to very different approaches. Recent developments in the vacuum cleaner market, as explored in study 2, can serve as an illustration. On one end of the spectrum is the robotic cleaner with an app or machine interface that allows for (but does not necessarily require) extensive manual intervention, transparency through reports about the cleaning success and difficulties, and involvement in programming the robot (e.g., mapping the apartment, defining the path of the robot, its power level, or time of operation). On the other end of the spectrum is the deliberate design of a manual vacuum (of course cordless, bagless, and lightweight). While this approach is likely to be more time intensive, it offers the experience of competence, autonomy, and responsibility in a more direct way. This is especially true when the experience of task performance is optimized by equipping the user with additional “superpowers”, like making hidden dust visible with the help of an LED light, as featured in some recent vacuum cleaners (e.g., Dyson V15 Detect Absolute).
At the same time, the present findings highlight that automation is not necessarily a problem per se. The negative effect of automation on responsibility is mediated by intermediate variables such as the user’s competence and the perceived device agency. Hence, an important next step for responsible automation would be to reflect on how these mediators could be influenced by design, so that users perceive the technology as less agentic (e.g., probably not an anthropomorphic look) and still feel competent (e.g., retaining some manual operations, promoting competence in the programming).
Also, the listed dimensions support a systematic exploration of possible losses through automation, which can be used in research and design. For example, we could well imagine using the list as a trigger for reflection within future-scenario methods, which aim to assess and critically evaluate the impact of technical innovations on individuals and society (for recent overviews, see [99]). Many of these methods suggest a particular structure of reflection and visualization of possible futures, sometimes also suggesting a checklist of content factors to support the systematic reflection from different aspects. For example, the Futures Wheel [100,101] is a visual brainstorming method where the center of a wheel represents the factor of interest (e.g., a new technology) and the imagined consequences are visualized as concentric rings around the center. Building on a similar visual metaphor, the Future Ripples Method [102] uses the metaphor of throwing a pebble (representing a “what-if”) into water, mapping out its consequences as ripples, whereby the STEEPLE framework (i.e., social, technological, economic, environmental, political, legal, and ethical factors) serves as a frame of reflection. Within such methods, the present list could also serve as a frame of reflection, particularly suited to evaluating technical innovations that rely on automation.

5.2. Limitations and Future Work

The approach to studying the possible effects of automation in HCI presented here comes with particular limitations to consider.
Regarding the suggested framework of possible psychological losses, additional steps are needed to further advance the contribution and possibilities of application. The dimensions used here are a synthesis of existing positions, common arguments, and related psychological theory. However, there was no structured process of development, and further studies could experiment with more or fewer dimensions in order to ensure validity and methodological rigor. Also, we could work on more precise, unambiguous definitions of the dimensions. For example, regarding the dimension of autonomy, here defined as “creative freedom and the freedom to do things your way”, participants could refer to the creative freedom within the task (which is typically reduced through automation) but also to the freedom one attains through not having to take care of a task (a similar interpretation as for the time investment dimension). This could also include experimenting with multiple items per dimension. In the present study, we deliberately decided on a straightforward, one-item approach (i.e., only one measure per dimension), which supports efficient assessments across a larger number of tasks to test the general suitability of the approach. However, from the perspective of reliability and test theory, multiple items could be an interesting approach for future studies. Therefore, in the next steps of research, we want to examine the present dimensions more systematically, potentially extend them, and explore a more tool-like representation (e.g., guidelines, questionnaires, question trees, card sets) with the aim of delivering a more profound basis for research or guidelines for interaction design.
Also, the present study only provides limited insights into the concrete consequences of automating specific tasks and what would be responsible in each case. In study 1, participants judged automation based on their individual imagination of automating different tasks of daily life. Of course, there could be vast differences in how participants interpreted the task descriptions and how they imagined the automation. Our sample was rather small, with the majority of participants being female, limited to German-speaking countries, and possibly not representative in terms of further factors. It was still valuable to test the suitability of the approach, e.g., whether the dimensions are understandable and trigger reflections or fundamental insights into the topic. However, if the aim is to attain solid insights into automating a specific task, more research is needed. Indeed, there could be in-depth studies for each specific case, contrasting different types of semi-automation, studying the effects in field research, and examining relationships and individual variables that could affect judgments on automation (e.g., age, culture).
Regarding study 2, a main limitation refers to the generalizability and external validity of the findings. The laboratory setting, comparing two devices in a controlled environment with the same task and conditions (e.g., furniture, dirt conditions, range of adjustments), supported a high internal validity, which forms an important requirement for statistical testing and theoretical modeling. However, we must acknowledge that this fixed procedure differs from the typical usage of a robot vacuum cleaner (or other robotic devices) in private homes in daily life. For example, in our study, participants did not adjust much on the devices beyond some basic settings, which might not be ideal conditions for an experience of mastering a challenge (and a resulting feeling of competence [103]). Also, with regard to the responsibility ratings, the artificial setting could lower the external validity. One could imagine that feelings of responsibility are also related to a sense of ownership and might be generally higher for results in one’s private place than for results in an external setting that is not one’s home. Though this effect would relate to both groups (manual, robot) and thus does not call the found group differences into question, future research should study the correlations of interest in longitudinal field studies. For example, in actual use, robot cleaners usually need much more intervention than the name robot suggests, such as moving obstacles or changing settings for them to perform better. This may also affect the perceived levels of autonomy, machine agency, and responsibility. Also, in daily life, the interaction with a vacuum cleaner or any other autonomous supporting tool usually takes place repeatedly over a longer period of time. Resulting good or bad experiences and the amount of engagement with the tool could also influence the perceived responsibility. For example, if a user sets up the autonomous cleaning tool and adapts the settings so that, in the end, it fits perfectly into their own home, this might enhance feelings of responsibility, and experiences of success might fuel further engagement. On the other hand, the engagement could also decrease in the long run. In contrast to our laboratory study, in daily life, the engagement with the robot competes with many other daily tasks, and people may not want to spend much time experimenting with the perfect adjustments. All this makes longitudinal studies in field settings an important task for future work. Also, one may question what the found differences in autonomy between the two vacuum cleaners really mean in practice. One could argue that it is to some degree “natural” that a manual vacuum cleaner feels more autonomous for the operator than a robot. Still, especially if this “natural connection” comes with other consequences (e.g., responsibility), we believe it is worth highlighting such relationships and consciously considering them when designing HCI (e.g., reflecting on what could be done to enhance feelings of the operator’s autonomy).
Moreover, future studies could further explore effects of possibly relevant person variables. While our study did not find effects for age, gender, or general technology attitude, there might be further variables of relevance such as cultural background or personal interest in smart technologies, robots, and/or programming. For example, regarding the question of competence, in the interviews in study 2 several participants mentioned that this could be a question of “individual type” and what makes one feel competent.
Also, we deem it important to extend this research to other application areas and to analyze in which areas it is more (or less) important to retain responsibility perceptions on the user’s side. Of course, the present findings, based on a particular household task, may not be directly generalizable to other areas. Different tasks may inherently carry different levels of autonomy, which may go along with different user expectations and, in turn, ratings of perceived responsibility. Especially in risk-related applications of robots, such as health or engineering, the question of responsibility is of high practical and ethical relevance, and it would be important to know how the inclusion of robots changes people’s work practices, e.g., whether they (double-)check a result depending on whether it was achieved by them alone or in joint work with a robot. Also, the user’s experience of the interaction may differ: a robot that helps in a hospital or in car engineering might evoke different feelings of competence for the people working with it and could thus lead to different results. We believe that the general issue of responsibility when using robots (or other semi-autonomous systems or AI applications) has relevance for many contexts, but different tasks may inherently carry different levels of responsibility and influence the user’s perceived agency and competence.
Finally, while the focus on responsibility and the potential risks of lowered responsibility perceptions might suggest making a robot appear less autonomous and therefore less “robotic”, the design of human–robot interaction should also consider the special benefits, characteristics, and “psychological superpowers” of robots (also see [75,104]) and what characterizes their “species” [105]. In sum, we argue for considering HCI from all perspectives and its consequences on all levels, including research questions such as: Besides practical advantages, are there any special benefits of using an automated tool/robot (instead of a manual tool) from an experiential perspective? Would people consider it a worthwhile experience to watch the robot doing the work, and what would be a socially adequate framing of this human–robot interaction (also see [106]) or an adequate framing parallel to human–human interaction (e.g., [107])? What could be general guidelines for an adequate division of work between human and robot in a task—or is this highly context dependent and can only be answered from case to case, from person to person (and robot to robot)?

6. Conclusions

The present reflection on responsible automation can be seen as one part of the broader idea of responsible design. In line with existing approaches in HCI that focus on the experiential consequences of interaction design, the designer’s responsibility, and related moral questions (e.g., moral design [108], experience design [109], as well as design for values [110]), we want to enhance the awareness of the complex “costs” of design decisions.
While it is generally emphasized that designers must be empathetic towards users, it is often not so easy to grasp what this means content-wise: Which needs and values are to be considered? What is beneficial for people and what is not?
The topic of automation represents a prime example where many controversial positions are thinkable. Some might argue that it is undignified to let people do repetitive tasks that could easily be performed by machines, while others might argue that they love those repetitive tasks as a source of recreation, competence, or room for reflection. Some argue that they would use the time saved through automation for “better purposes”, while others argue that most of the time “saved” is quickly spent or just “disappears” (e.g., on social media, watching TV, or working), which can quickly end up in a philosophical debate about the best way to spend one’s lifetime. At the least, it becomes clear that the topic of automation is complex, and we probably will not automate decisions about automation in the near future. Still, we deem it important to acknowledge this complexity and advance the perspective. While several models already assist designers in deciding “which system functions should be automated and to what extent” (e.g., [69]) based on pragmatic criteria like performance and safety, the present approach can complement these with experiential aspects. Though it cannot (automatically) answer or decide what is responsible, it can help to consider a bigger picture. With an advanced terminology, it facilitates discussion and argumentation about what seems responsible and why, and thus more conscious, informed design decisions.
Particularly in the context of AI, there are currently many important discussions about details of automation, Explainable AI (i.e., AI systems intended to self-explain the reasoning behind system decisions and predictions [111]), or how to deal with unwanted or unexpected consequences of AI applications, such as algorithmic “biases” through existing patterns in the training data (e.g., [112,113]). Still, we deem it important to also look at the simple, direct effects of automation per se, including questions such as: Is this actually a task that should be (semi-)automated? Who will profit from it and in what way? How could it be automated with minimal losses? Are there parts that could be kept in human hands? Especially in light of the impressive recent advancements in artificial intelligence, an increasing number of tasks can be delegated to technology, and seemingly almost any domain can profit from AI. Therefore, it is even more important not to blindly implement what seems comfortable or promising at first glance but to take a wider look at what this means for individuals’ psychological state and for transitions in society.
As a take-away message, we may conclude that automation has potential and is helpful and appreciated in many areas of life. At the same time, this automation comes with particular costs on the psychological level. Depending on the design, it can impair competence and autonomy and reduce the felt responsibility for the result. This is not necessarily problematic. As explained by some participants in our interview study, one may happily accept a less fulfilling vacuuming experience in exchange for more time for other things in life. However, continuing this line of thought, at some point the following question remains: If, one day, people stop composing music or painting pictures because AI can do it for them, or hand over responsibility for life to medical robots, there is probably not much left for humans to do. Hence, what is so important to us that we do not want it to be automated? And where is it important that humans still feel responsible? In the end, it is up to us and designers to decide to what degree we want our world to be automated and which parts of HCI might be better kept in human hands. Studies such as the present one can be a basis for conscious decisions.

Author Contributions

Conceptualization, S.D., D.U., and T.L.; methodology, S.D., D.U., and T.L.; formal analysis, S.D. and T.L.; investigation, S.D. and T.L.; writing—original draft preparation, S.D. and T.L.; writing—review and editing, K.-L.I. and D.U.; supervision, S.D. and D.U.; funding acquisition, S.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Research Foundation (DFG), Project PerforM (425412993) as part of the Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments.

Institutional Review Board Statement

The studies were conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the Faculty of Mathematics, Computer Science and Statistics (protocol codes EK-MIS-2023-193, 15 August 2023, and EK-MIS-2024-238, 24 January 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original data presented in the study are openly available via Open Data LMU at https://doi.org/10.5282/ubm/data.514.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Study 1: Overview of measures.
 
Ratings of gains and losses through automation

Imagine you had a machine that completely fulfills the task of XX for you. You don’t have to do anything except press a start button. What would that mean for your experience? In which aspects would you gain or lose something? Please rate your experience by the following statements (1 = losing, 7 = gaining):
Meaning—experiencing the activity as meaningful
Transparency—understanding connections and knowing what’s happening
Autonomy—creative freedom and the freedom to do things your way
Competence—feeling capable, learning, achieving something and being proud of it
Responsibility—feeling responsible for the result
Reflection—reflecting on the activity and gaining insights
Time investment—gaining time for other things important in life

Global rating

All in all, how desirable do you find the idea of automating XX? (1 = not desirable, 7 = desirable)

Wishes for automation

Imagine you could make a wish to an inventor: Which activity should definitely be automated?

Non-wishes for automation

Now you have a wish for a non-invention: Which activity should definitely be preserved for people and not taken over by machines?

References

  1. Janssen, C.P.; Donker, S.F.; Brumby, D.P.; Kun, A.L. History and future of human-automation interaction. Int. J. Hum. Comput. Stud. 2019, 131, 99–107. [Google Scholar] [CrossRef]
  2. Heuzeroth, T. Smarthome: Deutsche Haben Angst vor Einem Intelligenten Zuhause. Available online: https://www.welt.de/wirtschaft/webwelt/article205369107/Smarthome-Deutsche-haben-Angst-vor-einem-intelligenten-Zuhause.html (accessed on 26 September 2023).
  3. Wright, J. Inside Japan’s long experiment in automating elder care. MIT Technol. Rev. 2023. Available online: https://www.technologyreview.com/2023/01/09/1065135/japan-automating-eldercare-robots/ (accessed on 11 July 2024).
  4. Gardena. Advertising of the Robot Lawn Mower GARDENA SILENO life. Available online: https://www.media-gardena.com/news-der-countdown-laeuft?id=78619&menueid=17190&l=deutschland&tab=1 (accessed on 12 January 2024).
  5. Hassenzahl, M.; Klapperich, H. Convenient, clean, and efficient? In Proceedings of the NordiCHI ‘14: The 8th Nordic Conference on Human-Computer Interaction, Helsinki, Finland, 26–30 October 2014; pp. 21–30. [Google Scholar]
  6. Kullmann, M.; Ehlers, J.; Hornecker, E.; Chuang, L.L. Can Asynchronous Kinetic Cues of Physical Controls Improve (Home) Automation? In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short10.pdf (accessed on 1 August 2024).
  7. Fröhlich, P.; Mirnig, A.; Zafari, S.; Baldauf, M. The Human in the Loop in Automated Production Processes: Terminology, Aspects and Current Challenges in HCI Research. In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short14.pdf (accessed on 1 August 2024).
  8. Sadeghian, S.; Hassenzahl, M. On Autonomy and Meaning in Human-Automation Interaction. In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short6.pdf (accessed on 1 August 2024).
  9. Semuels, A. Millions of Americans Have Lost Jobs in the Pandemic—And Robots and AI Are Replacing Them Faster Than Ever. Time, 6 August 2020. Available online: https://time.com/5876604/machines-jobs-coronavirus/ (accessed on 11 April 2024).
  10. Willcocks, L. No, Robots Aren’t Destroying Half of All Jobs. The London School of Economics and Political Science. Available online: https://www.lse.ac.uk/study-at-lse/online-learning/insights/no-robots-arent-destroying-half-of-all-jobs (accessed on 11 April 2024).
  11. Fröhlich, P.; Baldauf, M.; Palanque, P.; Roto, V.; Paternò, F.; Ju, W.; Tscheligi, M. Intervening, Teaming, Delegating: Creating Engaging Automation Experiences. In Proceedings of the CHI ’23: CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–29 April 2023. [Google Scholar]
  12. Mirnig, A.G. Interacting with automated vehicles and why less might be more. In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short18.pdf (accessed on 1 August 2024).
  13. Stampf, A.; Rukzio, E. Addressing Passenger-Vehicle Conflicts: Challenges and Research Directions. In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short15.pdf (accessed on 1 August 2024).
  14. Müller, S.; Baldauf, M.; Fröhlich, P. AI-Assisted Document Tagging—Exploring Adaptation Effects among Domain Experts. In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short12.pdf (accessed on 1 August 2024).
  15. Sengupta, S.; McNeese, N.J. Synthetic Authority: Speculating the Future of Leadership in the Age of Human-Autonomy Teams. In Proceedings of the CHI 2023 AutomationXP23 Workshop: Intervening, Teaming, Delegating. Creating Engaging Automation Experiences, Hamburg, Germany, 23 April 2023; Available online: https://ceur-ws.org/Vol-3394/short13.pdf (accessed on 1 August 2024).
  16. Schneiders, E.; Kanstrup, A.M.; Kjeldskov, J.; Skov, M.B. Domestic Robots and the Dream of Automation: Understanding Human Interaction and Intervention. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Online, 8–13 May 2021; pp. 1–13. [Google Scholar]
  17. Mennicken, S.; Vermeulen, J.; Huang, E.M. From today’s augmented houses to tomorrow’s smart homes. In Proceedings of the 2014 ACM Conference on Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 105–115. [Google Scholar] [CrossRef]
  18. Bittner, B.; Aslan, I.; Dang, C.T.; André, E. Of Smarthomes, IoT Plants, and Implicit Interaction Design. In Proceedings of the TEI ‘19: Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction, Tempe, AZ, USA, 17–20 March 2019; pp. 145–154. [Google Scholar]
  19. Onnasch, L.; Roesler, E. A Taxonomy to Structure and Analyze Human–Robot Interaction. Int. J. Soc. Robot. 2020, 13, 833–849. [Google Scholar] [CrossRef]
  20. Hegel, F.; Muhl, C.; Wrede, B.; Hielscher-Fastabend, M.; Sagerer, G. Understanding Social Robots. In Proceedings of the 2009 Second International Conferences on Advances in Computer-Human Interactions (ACHI), Cancun, Mexico, 1–7 February 2009; pp. 169–174. [Google Scholar]
  21. Beer, J.M.; Fisk, A.D.; Rogers, W.A. Toward a Framework for Levels of Robot Autonomy in Human-Robot Interaction. J. Hum.-Robot Interact. 2014, 3, 74–99. [Google Scholar] [CrossRef] [PubMed]
  22. Noorman, M.; Johnson, D.G. Negotiating autonomy and responsibility in military robots. Ethics Inf. Technol. 2014, 16, 51–62. [Google Scholar] [CrossRef]
  23. Westfall, M. Perceiving agency. Mind Lang. 2022, 38, 847–865. [Google Scholar] [CrossRef]
  24. Sheldon, K.M.; Elliot, A.J.; Kim, Y.; Kasser, T. What is satisfying about satisfying events? Testing 10 candidate psychological needs. J. Pers. Soc. Psychol. 2001, 80, 325–339. [Google Scholar] [CrossRef] [PubMed]
  25. Frankl, V.E. Man’s Search for Meaning; Simon and Schuster: New York, NY, USA, 1984. [Google Scholar]
  26. Lips-Wiersma, M.; Wright, S. Measuring the meaning of meaningful work: Development and validation of the Comprehensive Meaningful Work Scale (CMWS). Group Organ. Manag. 2012, 37, 655–685. [Google Scholar] [CrossRef]
  27. Samek, W.; Müller, K.-R. Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., Müller, K.R., Eds.; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  28. Xu, F.; Uszkoreit, H.; Du, Y.; Fan, W.; Zhao, D.; Zhu, J. Explainable AI: A brief survey on history, research areas, approaches and challenges. In CCF International Conference on Natural Language Processing and Chinese Computing; Springer: Cham, Switzerland, 2019; pp. 563–574. [Google Scholar] [CrossRef]
  29. Chen, J.Y.C.; Boyce, M.; Wright, J.; Barnes, M. Situation Awareness-Based Agent Transparency; Defense Technical Information Center: Fort Belvoir, VA, USA, 2014. [Google Scholar] [CrossRef]
  30. Liao, Q.V.; Gruen, D.; Miller, S. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020. [Google Scholar] [CrossRef]
  31. Roscher, R.; Bohn, B.; Duarte, M.F.; Garcke, J. Explainable Machine Learning for Scientific Insights and Discoveries. IEEE Access 2020, 8, 42200–42216. [Google Scholar] [CrossRef]
  32. Shin, D.; Zhong, B.; Biocca, F.A. Beyond user experience: What constitutes algorithmic experiences? Int. J. Inf. Manag. 2020, 52, 102061. [Google Scholar] [CrossRef]
  33. Silva, A.; Schrum, M.; Hedlund-Botti, E.; Gopalan, N.; Gombolay, M. Explainable Artificial Intelligence: Evaluating the Objective and Subjective Impacts of xAI on Human-Agent Interaction. Int. J. Hum. Comput. Interact. 2023, 39, 1390–1404. [Google Scholar] [CrossRef]
  34. Ehsan, U.; Liao, Q.V.; Muller, M.; Riedl, M.O.; Weisz, J.D. Expanding Explainability: Towards Social Transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–9. [Google Scholar]
  35. Berber, A.; Srećković, S. When something goes wrong: Who is responsible for errors in ML decision-making? AI Soc. 2023. [Google Scholar] [CrossRef]
  36. Chromik, M.; Eiband, M.; Völkel, S.T.; Buschek, D. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems. In Proceedings of the IUI Workshops, Los Angeles, CA, USA, 20 March 2019; Volume 2327. [Google Scholar]
  37. Deci, E.; Ryan, R.M. Intrinsic Motivation and Self-Determination in Human Behavior; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1985; ISBN 0306420228. [Google Scholar]
  38. Torrance, E.P.; Brehm, J.W. A Theory of Psychological Reactance. Am. J. Psychol. 1968, 81, 133. [Google Scholar] [CrossRef]
  39. Steindl, C.; Jonas, E.; Sittenthaler, S.; Traut-Mattausch, E.; Greenberg, J. Understanding Psychological Reactance. Z. Psychol. Psychol. 2015, 223, 205–214. [Google Scholar] [CrossRef] [PubMed]
  40. Fotiadis, A.; Abdulrahman, K.; Spyridou, A. The Mediating Roles of Psychological Autonomy, Competence and Relatedness on Work-Life Balance and Well-Being. Front. Psychol. 2019, 10, 1267. [Google Scholar] [CrossRef]
  41. Etzioni, A.; Etzioni, O. AI assisted ethics. Ethics Inf. Technol. 2016, 18, 149–156. [Google Scholar] [CrossRef]
  42. Formosa, P. Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy. Minds Mach. 2021, 31, 595–616. [Google Scholar] [CrossRef]
  43. Nyholm, S. Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sci. Eng. Ethics 2017, 24, 1201–1219. [Google Scholar] [CrossRef] [PubMed]
  44. Selvaggio, M.; Cognetti, M.; Nikolaidis, S.; Ivaldi, S.; Siciliano, B. Autonomy in Physical Human-Robot Interaction: A Brief Survey. IEEE Robot. Autom. Lett. 2021, 6, 7989–7996. [Google Scholar] [CrossRef]
  45. Sundar, S.S. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). J. Comput. Commun. 2020, 25, 74–88. [Google Scholar] [CrossRef]
  46. Pickering, J.B.; Engen, V.; Walland, P. The Interplay Between Human and Machine Agency. In Human-Computer Interaction. User Interface Design, Development and Multimodality; Springer International Publishing: Cham, Switzerland, 2017; pp. 47–59. [Google Scholar] [CrossRef]
  47. Lauermann, F.; Berger, J.-L. Linking teacher self-efficacy and responsibility with teachers’ self-reported and student-reported motivating styles and student engagement. Learn. Instr. 2021, 76, 101441. [Google Scholar] [CrossRef]
  48. Laitinen, A.; Sahlgren, O. AI Systems and Respect for Human Autonomy. Front. Artif. Intell. 2021, 4, 705164. [Google Scholar] [CrossRef] [PubMed]
  49. Jia, H.; Wu, M.; Jung, E.; Shapiro, A.; Sundar, S.S. Balancing human agency and object agency. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing, Pittsburgh, PA, USA, 5–8 September 2012; pp. 1185–1188. [Google Scholar]
  50. van der Woerdt, S.; Haselager, P. When robots appear to have a mind: The human perception of machine agency and responsibility. New Ideas Psychol. 2019, 54, 93–100. [Google Scholar] [CrossRef]
  51. Waytz, A.; Heafner, J.; Epley, N. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 2014, 52, 113–117. [Google Scholar] [CrossRef]
  52. Moon, Y.; Nass, C. Are computers scapegoats? Attributions of responsibility in human–computer interaction. Int. J. Hum. Comput. Stud. 1998, 49, 79–94. [Google Scholar] [CrossRef]
  53. Matthias, A. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics Inf. Technol. 2004, 6, 175–183. [Google Scholar] [CrossRef]
  54. Asaro, P.M. A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics. In Robot Ethics. The Ethical and Social Implications of Robotics; Lin, P., Abney, K., Bekey, G.A., Eds.; MIT Press: Cambridge, MA, USA, 2011; pp. 169–186. [Google Scholar]
  55. Champagne, M.; Tonkens, R. A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot harm. Sci. Eng. Ethics 2023, 29, 27. [Google Scholar] [CrossRef] [PubMed]
  56. Coeckelbergh, M. Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Sci. Eng. Ethics 2019, 26, 2051–2068. [Google Scholar] [CrossRef] [PubMed]
  57. Gunkel, D.J. Mind the gap: Responsible robotics and the problem of responsibility. Ethics Inf. Technol. 2020, 22, 307–320. [Google Scholar] [CrossRef]
  58. Theodorou, A.; Dignum, V. Towards ethical and socio-legal governance in AI. Nat. Mach. Intell. 2020, 2, 10–12. [Google Scholar] [CrossRef]
  59. Vandenhof, C.; Law, E. Contradict the Machine: A Hybrid Approach to Identifying Unknown Unknowns. In Proceedings of the 18th International Conference on Autonomous Agents and Multi Agent Systems, Montreal, QC, Canada, 13–17 May 2019; pp. 2238–2240. [Google Scholar]
  60. Grosse-Hering, B.; Mason, J.; Aliakseyeu, D.; Bakker, C.; Desmet, P. Slow design for meaningful interactions. In Proceedings of the CHI ‘13: CHI Conference on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 3431–3440. [Google Scholar]
  61. Strauss, C.F.; Fuad-Luke, A. The slow design principles: A new interrogative and reflexive tool for design research and practice. In Proceedings of the Changing the Change: Design Visions, Proposals and Tools, Turin, Italy, 10–12 July 2008. [Google Scholar]
  62. Diefenbach, S.; Hassenzahl, M.; Eckoldt, K.; Hartung, L.; Lenz, E.; Laschke, M. Designing for well-being: A case study of keeping small secrets. J. Posit. Psychol. 2016, 12, 151–158. [Google Scholar] [CrossRef]
  63. Lenz, E.; Diefenbach, S.; Hassenzahl, M. Aesthetics of interaction. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction, Helsinki, Finland, 26–30 October 2014. [Google Scholar] [CrossRef]
  64. Lenz, E.; Hassenzahl, M.; Diefenbach, S. How Performing an Activity Makes Meaning. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Scotland, UK, 4–9 May 2019; pp. 1–6. [Google Scholar] [CrossRef]
65. Brooke, J. SUS: A “quick and dirty” usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor and Francis: London, UK, 1996.
66. Gaube, S.; Suresh, H.; Raue, M.; Merritt, A.; Berkowitz, S.J.; Lermer, E.; Coughlin, J.F.; Guttag, J.V.; Colak, E.; Ghassemi, M. Do as AI say: Susceptibility in deployment of clinical decision-aids. NPJ Digit. Med. 2021, 4, 31.
67. Cai, M.Y.; Lin, Y.; Zhang, W.J. Study of the optimal number of rating bars in the Likert scale. In Proceedings of the iiWAS ’16: 18th International Conference on Information Integration and Web-Based Applications and Services, Singapore, 28–30 November 2016; pp. 193–198.
68. Finstad, K. Response interpolation and scale sensitivity: Evidence against 5-point scales. J. Usability Stud. 2010, 5, 104–110.
69. Parasuraman, R.; Sheridan, T.; Wickens, C. A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2000, 30, 286–297.
70. Carnovalini, F.; Rodà, A. Computational Creativity and Music Generation Systems: An Introduction to the State of the Art. Front. Artif. Intell. 2020, 3, 14.
71. Taffel, S. Automating Creativity. Spheres J. Digit. Cult. 2019, 5, 1–9.
72. Joshi, B. Is AI Going to Replace Creative Professionals? Interactions 2023, 30, 24–29.
73. Inie, N.; Falk, J.; Tanimoto, S. Designing Participatory AI: Creative Professionals’ Worries and Expectations about Generative AI. In Proceedings of the CHI EA ’23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023.
74. Gray, K.; Young, L.; Waytz, A. Mind Perception Is the Essence of Morality. Psychol. Inq. 2012, 23, 101–124.
75. Neuhaus, R.; Ringfort-Felner, R.; Dörrenbächer, J.; Hassenzahl, M. How to Design Robots with Superpowers. In Meaningful Futures with Robots—Designing a New Coexistence; Chapman and Hall/CRC: Boca Raton, FL, USA, 2022; pp. 43–54.
76. Thomas, J. Autonomy, Social Agency, and the Integration of Human and Robot Environments; Simon Fraser University: Burnaby, BC, Canada, 2019.
77. Jackson, R.B.; Williams, T. A Theory of Social Agency for Human-Robot Interaction. Front. Robot. AI 2021, 8, 687726.
78. Moreau, E.; Mageau, G.A. The importance of perceived autonomy support for the psychological health and work satisfaction of health professionals: Not only supervisors count, colleagues too! Motiv. Emot. 2011, 36, 268–286.
79. Jain, S.; Argall, B. Probabilistic Human Intent Recognition for Shared Autonomy in Assistive Robotics. ACM Trans. Hum. Robot Interact. 2019, 9, 2.
80. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, 3rd ed.; The Guilford Press: New York, NY, USA, 2022.
81. Hong, J.-W.; Williams, D. Racism, responsibility and autonomy in HCI: Testing perceptions of an AI agent. Comput. Hum. Behav. 2019, 100, 79–84.
82. Peifer, C.; Syrek, C.; Ostwald, V.; Schuh, E.; Antoni, C.H. Thieves of Flow: How Unfinished Tasks at Work are Related to Flow Experience and Wellbeing. J. Happiness Stud. 2019, 21, 1641–1660.
83. Larsen, S.B.; Bardram, J.E. Competence articulation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 553–562.
84. Klapperich, H.; Uhde, A.; Hassenzahl, M. Designing everyday automation with well-being in mind. Pers. Ubiquitous Comput. 2020, 24, 763–779.
85. Dyson. Dyson V15. 2023. Available online: https://www.dyson.de/staubsauger/kabellos/v15/absolute-gelb-nickel (accessed on 26 September 2023).
86. Dreame. DreameBot L10 Ultra. 2023. Available online: https://global.dreametech.com/products/dreamebot-l10-ultra (accessed on 26 September 2023).
87. Botti, S.; McGill, A.L. When Choosing Is Not Deciding: The Effect of Perceived Responsibility on Satisfaction. J. Consum. Res. 2006, 33, 211–219.
88. Rijsdijk, S.A.; Hultink, E.J. “Honey, Have You Seen Our Hamster?” Consumer Evaluations of Autonomous Domestic Products. J. Prod. Innov. Manag. 2003, 20, 204–216.
89. Diefenbach, S.; Hassenzahl, M. Psychologie in der nutzerzentrierten Produktgestaltung; Springer Nature: Dordrecht, The Netherlands, 2017.
90. Ardies, J.; De Maeyer, S.; Gijbels, D. Reconstructing the Pupils’ Attitude Towards Technology Survey. Des. Technol. Educ. 2013, 18, 8–19.
91. Mayring, P. Qualitative content analysis. Forum Qual. Soc. Res. 2000, 1, 159–176.
92. Davidson, R.; MacKinnon, J.G. Estimation and Inference in Econometrics; Cambridge University Press: Cambridge, UK, 1993; Volume 63.
93. Jörling, M.; Böhm, R.; Paluch, S. Service Robots: Drivers of Perceived Responsibility for Service Outcomes. J. Serv. Res. 2019, 22, 404–420.
94. Locke, E.A. The relationship of task success to task liking and satisfaction. J. Appl. Psychol. 1965, 49, 379–385.
95. Syrek, C.J.; Antoni, C.H. Unfinished tasks foster rumination and impair sleeping—Particularly if leaders have high performance expectations. J. Occup. Health Psychol. 2014, 19, 490–499.
96. Syrek, C.J.; Weigelt, O.; Peifer, C.; Antoni, C.H. Zeigarnik’s sleepless nights: How unfinished tasks at the end of the week impair employee sleep on the weekend through rumination. J. Occup. Health Psychol. 2017, 22, 225–238.
97. Gabriel, A.S.; Diefendorff, J.M.; Erickson, R.J. The relations of daily task accomplishment satisfaction with changes in affect: A multilevel study in nurses. J. Appl. Psychol. 2011, 96, 1095–1104.
98. Specialized. Learn to Ride Again. Available online: https://www.specialized.com/cz/en/electric-bikes (accessed on 26 September 2023).
99. Moesgen, T.; Salovaara, A.; Epp, F.A.; Sanchez, C. Designing for Uncertain Futures: An Anticipatory Approach. Interactions 2023, 30, 36–41.
100. Bengston, D.N. The Futures Wheel: A Method for Exploring the Implications of Social–Ecological Change. Soc. Nat. Resour. 2015, 29, 374–379.
101. Glenn, J. The Futures Wheel. In Futures Research Methodology—V3.0 (ch. 6); Glenn, J.C., Gordon, T.J., Eds.; The Millennium Project: Washington, DC, USA, 2009.
102. Epp, F.A.; Moesgen, T.; Salovaara, A.; Pouta, E.; Gaziulusoy, I. Reinventing the Wheel: The Future Ripples Method for Activating Anticipatory Capacities in Innovation Teams. In Proceedings of the 2022 ACM Designing Interactive Systems Conference (DIS ’22), Virtual, 13–17 June 2022; pp. 387–399.
103. Cerasoli, C.P.; Nicklin, J.M.; Nassrelgrgawi, A.S. Performance, incentives, and needs for autonomy, competence, and relatedness: A meta-analysis. Motiv. Emot. 2016, 40, 781–813.
104. Welge, J.; Hassenzahl, M. Better Than Human: About the Psychological Superpowers of Robots. In Social Robotics; Springer International Publishing: Cham, Switzerland, 2016; pp. 993–1002.
105. Ullrich, D.; Butz, A.; Diefenbach, S. The Eternal Robot: Anchoring Effects in Humans’ Mental Models of Robots and Their Self. Front. Robot. AI 2020, 7, 546724.
106. Tian, L.; Oviatt, S. A Taxonomy of Social Errors in Human-Robot Interaction. ACM Trans. Hum. Robot Interact. 2021, 10, 1–32.
107. Collins, E.C. Drawing parallels in human–other interactions: A trans-disciplinary approach to developing human–robot interaction methodologies. Philos. Trans. R. Soc. B Biol. Sci. 2019, 374, 20180433.
108. Verbeek, P.-P. Materializing morality: Design ethics and technological mediation. Sci. Technol. Hum. Values 2006, 31, 361–380.
109. Hassenzahl, M. Experience Design: Technology for All the Right Reasons; Morgan & Claypool Publishers: San Rafael, CA, USA, 2010.
110. Desmet, P.M.A.; Roeser, S. Emotions in Design for Values. In Handbook of Ethics, Values, and Technological Design; Van den Hoven, J., Vermaas, P., van de Poel, I., Eds.; Springer: Dordrecht, The Netherlands, 2015; pp. 203–219.
111. Mohseni, S.; Zarei, N.; Ragan, E.D. A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems. ACM Trans. Interact. Intell. Syst. 2021, 11, 1–45.
112. Mullainathan, S. Biased Algorithms Are Easier to Fix than Biased People. The New York Times, 6 December 2019. Available online: https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html (accessed on 11 April 2024).
113. Pethig, F.; Kroenung, J. Biased Humans, (Un)Biased Algorithms? J. Bus. Ethics 2023, 183, 637–652.
Figure 1. Advertising of home automation. Robot lawn mower GARDENA SILENO life (photo: Gardena [4], with permission).
Figure 2. Perceived gains and losses of automating five common tasks in seven dimensions.
Figure 3. Mean values of global judgments on whether automating a task is desirable (=7) or not (=1) for five different tasks. The dotted line represents the scale midpoint of 4.
Figure 4. Wishes for automation (left) and non-automation (right). Frequency of mentions in percent.
Figure 5. Research model of assumed relations between device automation, perceived agency, and the users’ perceived autonomy, competence, and responsibility for result.
Figure 6. Non-robotic manual vacuum cleaner (Dyson V15 Detect Absolute, left [85]) and robot vacuum cleaner (Dreame L10 Ultra, right [86]).
Figure 7. Sketch of the laboratory room and cleaning area.
Figure 8. Mean values and standard deviations for perceived device agency (scale range 1–7), and the users’ experienced autonomy (scale range 1–5), competence (scale range 1–5), and responsibility for result (scale range 1–7), depending on the type of vacuum cleaner used. Note: standard deviations in brackets.
Figure 9. Mediation model showing the effect of the type of vacuum cleaner (manual vs. robot) on the perceived responsibility for result as mediated simultaneously by perceived device agency, user autonomy, and user competence. The figure shows unstandardized regression coefficients (** p < 0.01, *** p < 0.001, n.s. = not significant).
Table 1. Correlations between the global judgment (GJ) on whether automating a task is desirable or not and perceived gains or losses in seven dimensions, separated for the five tasks (upper rows) and across all tasks (lower row). All correlations significant at p < 0.001.

| | Meaning | Responsibility | Reflection | Transparency | Autonomy | Competence | Time Invest |
|---|---|---|---|---|---|---|---|
| Gardening (GJ) | 0.457 | 0.444 | 0.344 | 0.443 | 0.383 | 0.464 | 0.512 |
| Toilet cleaning (GJ) | 0.289 | 0.366 | 0.197 | 0.257 | 0.355 | 0.271 | 0.538 |
| Cooking (GJ) | 0.565 | 0.531 | 0.462 | 0.427 | 0.586 | 0.576 | 0.571 |
| Paperwork (GJ) | 0.453 | 0.426 | 0.419 | 0.497 | 0.452 | 0.394 | 0.472 |
| Body care (GJ) | 0.562 | 0.557 | 0.528 | 0.413 | 0.546 | 0.380 | 0.651 |
| All tasks (GJ) | 0.543 | 0.497 | 0.441 | 0.435 | 0.548 | 0.471 | 0.625 |
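As a complement, the following is a minimal Python sketch of how correlation summaries of this kind can be computed from long-format rating data. The column names and the randomly generated values are purely illustrative assumptions, not the study’s dataset.

```python
# Illustrative sketch only: per-task and overall correlations between a
# global judgment ("gj") and seven rating dimensions, as in Table 1.
# Column names and random data are assumptions, not the study's dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
DIMS = ["meaning", "responsibility", "reflection", "transparency",
        "autonomy", "competence", "time_invest"]
TASKS = ["gardening", "toilet cleaning", "cooking", "paperwork", "body care"]

# Long format: one row per participant x task (122 participants, 5 tasks),
# all ratings on 7-point scales.
df = pd.DataFrame(rng.integers(1, 8, size=(122 * len(TASKS), len(DIMS) + 1)),
                  columns=["gj"] + DIMS)
df["task"] = np.repeat(TASKS, 122)

# Upper rows of Table 1: correlation of each dimension with the global
# judgment, separately per task; lower row: across all tasks.
per_task = df.groupby("task").apply(lambda d: d[DIMS].corrwith(d["gj"]))
overall = df[DIMS].corrwith(df["gj"])
print(per_task.round(3))
print(overall.round(3))
```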
Table 2. Frequency of mentions of participants’ categorized answers for the four interview questions across the two device conditions (whole sample of participants, N = 57).

| Question | Category answer | Frequency | % |
|---|---|---|---|
| (1) To what degree did you feel autonomous during the interaction and why? | not/little autonomous | 7 | 12% |
| | medium level of autonomy | 15 | 26% |
| | very autonomous | 34 | 60% |
| | other/missing | 1 | 2% |
| (2) To what degree did you feel competent during the interaction and why? | not/little competent | 5 | 9% |
| | medium level of competence | 15 | 26% |
| | very competent | 34 | 60% |
| | other/missing | 3 | 5% |
| (3) To what degree do you feel that the programming or adjusting of settings of a technology could be a source of competence? | programming as a competence/frustration source | 21 | 37% |
| | programming no good competence source | 10 | 18% |
| | partly/another type of competence | 24 | 42% |
| | meta-reflections | 2 | 4% |
| (4) Are there any specific features which are decisive for you to still feel sufficiently autonomous, competent, and responsible for the result? | opportunity for manual control, haptic experience | 14 | 25% |
| | define the basic rules | 13 | 23% |
| | meta-reflections | 13 | 23% |
| | mapping the apartment, defining the robot’s way | 8 | 14% |
| | dialogue with the technology, transparency, hints | 5 | 9% |
| | defining the power | 3 | 5% |
| | defining the time of operation | 1 | 2% |