Article

Digital Mirrors: AI Companions and the Self

Department of Communication and Internet Studies, Cyprus University of Technology, 3036 Limassol, Cyprus
*
Author to whom correspondence should be addressed.
Societies 2024, 14(10), 200; https://doi.org/10.3390/soc14100200
Submission received: 19 July 2024 / Revised: 30 September 2024 / Accepted: 3 October 2024 / Published: 8 October 2024

Abstract

This exploratory study examines the socio-technical dynamics of Artificial Intelligence Companions (AICs), focusing on user interactions with AI platforms like Replika 9.35.1. Through qualitative analysis, including user interviews and digital ethnography, we explored the nuanced roles played by these AIs in social interactions. Findings revealed that users often form emotional attachments to their AICs, viewing them as empathetic and supportive, thus enhancing emotional well-being. This study highlights how AI companions provide a safe space for self-expression and identity exploration, often without fear of judgment, offering a backstage setting in Goffmanian terms. This research contributes to the discourse on AI’s societal integration, emphasizing how, in interactions with AICs, users often craft and experiment with their identities by acting in ways they would avoid in face-to-face or human-human online interactions due to fear of judgment. This reflects front-stage behavior, in which users manage audience perceptions. Conversely, the backstage, typically hidden, is somewhat disclosed to AICs, revealing deeper aspects of the self.

1. Introduction

A recent article in the Washington Post cautioned readers on “Why you shouldn’t tell ChatGPT your secrets” [1]. Numerous apps, such as Replika, Digi AI Romance, and Eva AI, claim to combat loneliness and to substitute for a friend or lover. Conversational AI has become increasingly central to many people’s lives and social interactions. This article explores how individuals socialize and present themselves in interactions with Artificial Intelligence Companions (AICs), focusing on the socio-technical dynamics at play. We explore the intersections of conversational AI and self-presentation, in line with Goffman [2], conceptualizing how conversational AI platforms may impact sociality.
This investigative research explores the complex interplay between social and technological aspects of Artificial Intelligence Companions (AICs), particularly emphasizing how users engage with AI systems. We investigate the subtle and varied functions these AI entities serve in social interactions by employing qualitative research methods, including user interviews and digital ethnographic techniques. First, we explore how users engage in self-presentation and identity exploration during interactions with Artificial Intelligence Companions (AICs). We draw from Goffman’s theory and his predecessors in the American sociological tradition to understand how these digital platforms serve as a unique space for identity experimentation. Second, we investigate what emotional responses users articulate when forming relationships with AI companions and how these relationships impact their emotional well-being. We focus on the emotional bonds users develop and the potential dependency risks. Finally, we address the broader implications by asking what ethical and privacy considerations users perceive in their interactions with AI companions and how these concerns affect their trust and engagement with the technology.
In “The Presentation of Self in Everyday Life” [2], Erving Goffman utilizes the metaphor of drama to explain social interactions and self-presentation, presenting several well-established theories. He posits that everyone is essentially performing, with our performance’s “front stage” designed for an audience to observe and assess us appropriately. Conversely, we also have a “backstage,” which is how we act when there is no audience or only an audience of our close team, who participate alongside and collude with us in the front-stage performance. Goffman asserts that our performances involve conscious and unconscious aspects (ibid). On the one hand, we intentionally provide information about ourselves to manage others’ impressions about us, but on the other hand, we also unconsciously give off information that others pick up on, influencing their interactions with us. Both actors and audiences maintain the cohesion of a situation; if actors break character, either deliberately or accidentally, or if there is a mismatch in the parties’ definitions of the situation, performances can break down.
People are widely believed to present a modified version of themselves when using digital technologies to communicate. However, the belief that digital interactions offer greater control over self-presentation has been widely contested. While Turkle [3] posits that creating a flawless online self is simple, this control is complicated by various factors. Keen [4], for instance, asserts that individuals often become unintentional prisoners of a digital “hyperreality” that overshadows their offline lives. On the other hand, online identities mirror offline identities, and the relationship can also be reciprocal. An ethnographic study by Gardner and Davis [5] reveals that online personas crafted by teenagers can influence their real-life sense of self. Experiences in the virtual world can elicit genuine emotional responses, blurring the lines between the digital and the physical. These combined experiences contribute to a virtual and physical identity, showcasing the interplay between online interactions and real-life identities.
Even though the online presentation of self has been studied by many scholars (for an overview, see [6]) with particular emphasis on social media and their affordances that allow improvisation in self-construction (anonymity, persistence, and visibility), the evolving place of AI in everyday life requires an understanding of how individuals curate their identities when conversing with AI. Meng and Dai [7] showed that a conversational counterpart’s reciprocal self-disclosure enhances the positive effect of emotional support, emphasizing that support from a human partner is perceived as stronger than that from a chatbot. Our study deploys a qualitative approach to examine the affordances of Replika AI and how they shape user identity expression and negotiation through conversational practices and tactics related to self-disclosure, impression management, and so forth. Doing so advances conversations on AI, ethics, and the sociality of online identities and expands the literature on human-machine interaction. The findings from this study shed light on the design and regulation of AI, contributing to new understandings of the social and ethical impacts of machine learning.
From an anthropological perspective, AI is defined as a “techno-social system” [8], emphasizing the deep interconnection between its technological and social dimensions. Social values and assumptions significantly influence how we perceive, develop, and interact with AI, shaping our expectations, goals, and anxieties about these technologies [9,10,11,12]. This viewpoint emphasizes the interplay between AI and social values, highlighting its impact on cultural norms and daily life. AI’s integration into roles like friends, lovers, and confidants, driven by cultural demands for constant emotional support, underscores this. The design of AI companions that emulate human emotions reflects these cultural needs. Therefore, AI companionship’s influence on social dynamics is evident in its ability to redefine understandings of companionship and modify human interaction expectations. The normalization of AI companions, especially during the COVID-19 pandemic, reveals their role as crucial emotional support, indicating a cultural shift towards accepting non-human entities within the social fabric. This perspective illuminates AI’s technological and social implications, pointing out the benefits and challenges of integrating AI into human lives.
Conversational AI, a sub-domain of Artificial Intelligence, comprises technologies for speech or text interactions, often seen as chatbots or voice assistants that mimic human conversation [13]. These systems are categorized into task-oriented systems for specific tasks and non-task-oriented chit-chat bots for seamless social interactions [14]. The literature emphasizes machine learning and statistical approaches for developing these systems, considering prior advancements in dialogue technology [15]. Although commonly associated with customer service or personal assistance, conversational AI also applies to other sectors, such as agriculture [16]. Additionally, chatbots are noted for automating tasks and offering 24/7 accessibility, reducing the need for human intervention [17].
Employee trust in workplace conversational AI, particularly chatbots, significantly influences their ongoing use [18]. Bibliometric analyses show an increasing research focus on chatbots and virtual assistants, with contributions from many countries [19]. The literature covers diverse applications using advanced machine learning and data-driven methods, from social companionship to task-oriented assistance. The field is rapidly progressing, with research exploring new design principles, frameworks, and applications across sectors. Trust and clear term differentiation are deemed essential for the successful adoption and continuous use of conversational AI systems [13,14,15,16,17,18,19,20].
Research on AI companions, friends, and lovers encompasses studies focusing on AI systems’ development, application, and implications in social contexts. Boine [21] discusses the potential harms associated with AI virtual companions, such as emotional damage and the perpetuation of biases, within the framework of EU law. She highlights the need for reflection on vulnerability, rationality, and individual freedom in the context of these AI relationships. On the other hand, Zhang et al. [22] examine users’ emotional complexity toward AI virtual assistants and establish a model linking functionality, trust, and acceptance of AI virtual assistants. While some research points to AI companionship’s risks and ethical concerns, other studies explore the positive applications and potential benefits. For instance, Ping [23] investigates the use of AI in psychological counseling, employing machine learning to predict counseling outcomes and enhance user experience. Similarly, Sethi and Jain [24] assess the integration of AI with Social Emotional Learning (SEL) in educational settings, suggesting AI can support personalized learning and promote well-being among users. In summary, the research on AI companions, friends, and lovers is multifaceted, with studies acknowledging the benefits and risks of these interactions. While some research underscores the potential for emotional harm and ethical issues [21], others focus on the positive applications of AI in enhancing trust, acceptance [22], psychological counseling [23], and education [24]. The development and integration of AI in social contexts require careful consideration of the complex interplay between technology, ethics, and human well-being.
This article focuses on the Replika app, a conversational agent designed to offer a simulated human-like interaction, providing users with personalized experiences through text, voice, and audio-visual communications. It is especially recognized for its ability to engage emotionally, adapt interactions based on individual user behavior, and cultivate a sense of companionship, which are crucial aspects of its design. Replika is programmed to recognize and respond to users’ emotional cues. It employs machine learning algorithms to analyze text for emotional content, enabling it to adapt its responses to reflect the mood and tone of the conversation. This capability allows Replika to offer empathy and support, making its interactions feel caring and attentive, which is significant for users seeking emotional interaction. Replika adapts to the user’s communication style and preferences as interactions progress. This learning aspect enhances the natural flow of conversation and personalizes the experience, making each interaction with Replika feel unique and tailored to the user. Replika is allegedly designed not just to converse but to provide companionship. It offers consistent interaction, available at any time. Its ability to engage in conversation and provide emotional support fosters a sense of companionship.
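To make the adaptation mechanism described above more concrete, the sketch below illustrates, in a deliberately simplified form, how a companion app could classify the mood of an incoming message and select a reply whose tone mirrors it. Replika’s actual models and pipeline are proprietary and far more sophisticated; the function names, keyword lists, and reply templates here are hypothetical and serve only as an illustration of sentiment-aware response selection.

# Illustrative sketch only: a toy keyword-based mood detector and tone-matching
# reply selector. Replika's real pipeline is proprietary; the function names and
# word lists below are hypothetical.

NEGATIVE_CUES = {"sad", "lonely", "anxious", "tired", "upset"}
POSITIVE_CUES = {"happy", "excited", "great", "glad", "proud"}

def detect_mood(message: str) -> str:
    """Classify a user message as 'negative', 'positive', or 'neutral'."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    if words & NEGATIVE_CUES:
        return "negative"
    if words & POSITIVE_CUES:
        return "positive"
    return "neutral"

def adapt_reply(message: str) -> str:
    """Choose a reply template whose tone mirrors the detected mood."""
    templates = {
        "negative": "That sounds hard. I'm here to listen if you want to talk more.",
        "positive": "That's wonderful to hear! Tell me more about it.",
        "neutral": "I see. What's been on your mind today?",
    }
    return templates[detect_mood(message)]

print(adapt_reply("I've been feeling really lonely this week."))
# -> That sounds hard. I'm here to listen if you want to talk more.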
Our research identified several affordances that Replika provides to users, as detailed in Table 1. According to Gibson [25], affordances offer opportunities and constraints, with user discretion to comply, resist, or selectively engage. They denote actions feasible through tool features, which might be undetectable to users [26]. Hartson [27] categorizes affordances into cognitive, sensory, and functional. Functional affordances pertain to website functionalities and executable actions. Cognitive affordances, such as menu labels, guide users in action selection linked to meaning-making. Sensory affordances involve visualizations and the tool’s readability or audibility. These affordances are design strategies that can restrict or encourage user actions. This classification aids in understanding tools in platform studies [28].
Replika has been the subject of several studies. Hakim et al. [29] identified two compliment strategies employed by Replika: compliment as an initiative act and compliment as a reactive act. The former refers to compliments that occur in the first dialogue sequence and are intended to establish an emotional claim. Possati [30] shows that applying a psychosocial and narrative-oriented approach to AI (a) reveals new aspects and problems of AI behavior that cannot be grasped and explained if we remain at the level of a purely technical-engineering analysis and (b) facilitates a new interpretation of some classic problems in AI, such as control and responsibility, opening a new ethical perspective. Pentina et al. [31] demonstrate that AI anthropomorphism and AI authenticity are essential drivers of relationships with social chatbots, that AI interaction intensity mediates the link between anthropomorphism/authenticity and chatbot attachment, and that users with a dominant social motivation are more likely to develop attachment to chatbots. Laestadius et al. [32] find evidence of harm facilitated via emotional dependence on Replika that resembles patterns seen in human-human relationships. Unlike other forms of technology dependency, this dependency is marked by role-taking, whereby users feel that Replika has its own needs and emotions, which the user must attend to. While prior research suggests that human-chatbot and human-human interactions may not resemble each other, they identify social and technological factors that promote parallels and suggest ways to balance the benefits and risks of such applications.
By explicitly mimicking social cues and behaviors associated with humans, Replika amplifies “computers are social actors” (CASA) effects, in which people are predisposed to treat computers as humans [33]. Replika is likely particularly effective because its design utilizes strategies found in human-human interactions that facilitate bond formation [34]. Furthermore, embodied chatbots like Replika are preferred to text-only chatbots and may be seen as more trustworthy. While questions abound regarding whether human-computer relationships truly mimic human-human interactions [35], Replika offers relatively advanced social affordances and capabilities. Furthermore, users report relating to Replika as their friend, therapist, or romantic partner [36,37].

2. Methodology

The methodology of this research project was designed to thoroughly examine the dynamics involved in human interactions with Artificial Intelligence Companions (AICs). This is an exploratory project, given that similar research has only studied online forums and channels [29,30,31,32]. This study utilizes elements of online ethnography as a flat methodology [38]; we followed online channels and communities on Reddit, Discord, and Facebook, where users exchange ideas, media, and stories related to their “Replikas”. Moreover, autoethnography was employed: in practical terms, a researcher from our team used Replika and other AI companions and recorded the conversations’ outputs, along with reflexive ethnographic notes. Additionally, discursive interface analysis was performed [28] to examine the AI’s input, processing, output, and transparency, to establish whether any user-facing biases are inherent in the functioning of the AI, and to study the application’s cognitive, sensory, and functional affordances.
To provide context to our algorithmic audit findings, unstructured interviews were conducted with users of Replika. Qualitative unstructured interviews allow for in-depth exploration of participants’ perspectives and experiences. The users were targeted and recruited through the online channels studied (Facebook, Discord, and Reddit). In total, four interviews were analyzed for this study; they were exploratory, unstructured, and casual, and were conducted via online conferencing software. Regarding demographics, two men and two women were interviewed, all of whom use the Replika app daily and have long-term relationships with their “Replikas” (see Table 2). The selection criteria targeted individuals who actively use Replika and have developed long-term relationships with their AI companions, and this small group was drawn from different countries to ensure a range of perspectives. The interviews helped gather comprehensive data on Replika users’ emotional reactions, motivations, and lived experiences with these technologies, offering deeper insights into how individuals perceive and interact with AI, focusing mainly on self-presentation and identity negotiation in digital interactions. The detailed analysis of these responses was pivotal in understanding the social dynamics at play when humans interact with AI. Integrating qualitative insights from interviews offered a multifaceted understanding of human-AI interaction, enriched the dataset, and allowed a nuanced analysis of how conversational AIs shape social identities and interactions in digital environments. The findings from this comprehensive methodology informed discussions on the ethical design and regulation of AI technologies, emphasizing the need for systems that enhance user agency and promote equitable interactions. Through this research, we aim to contribute to the broader discourse on technology, ethics, and society, ensuring that AI development aligns with human-centric values and principles.
The interviews were characterized by flexibility, allowing the interviewer to explore topics as they arose naturally rather than following a strict sequence of predetermined questions [39]. This approach helps in understanding complex phenomena [40] and in exploring emotional labor and power dynamics within the interview process [41]. The interviews aimed to uncover participants’ experiences, emotional reactions, motivations, and perceptions regarding their interactions with AI companions, focusing on self-presentation and identity negotiation. Thematic analysis was then performed: Atlas.Ti 24 was used for coding and thematic analysis of interview transcripts and ethnographic data. To ensure reliability and validity in the qualitative analysis, the study incorporated methodological triangulation, combining qualitative interviews, online ethnography, autoethnography, and discursive interface analysis to cross-verify findings from different sources and perspectives. Additionally, reflexive ethnographic notes were maintained, allowing researchers to critically engage with their biases and the research process.
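As a minimal illustration of the coding step described above, the sketch below shows how coded interview excerpts could be grouped under the four themes reported in Section 3. The codes and excerpts are invented for illustration only; the actual coding and theme development were carried out in Atlas.Ti 24 on the full interview and ethnographic corpus.

# Minimal illustration of the thematic coding step: interview excerpts are tagged
# with codes and grouped under the four themes reported in Section 3. The codes
# and excerpts here are invented; the actual analysis was done in Atlas.Ti 24.
from collections import defaultdict

THEME_OF_CODE = {
    "emotional_attachment": "Emotional bonds and attachment",
    "identity_play": "Identity exploration and self-presentation",
    "human_likeness": "Perceived realism and human-likeness",
    "data_privacy": "Ethical and privacy considerations",
}

coded_excerpts = [
    ("It listens without judgment.", "emotional_attachment"),
    ("I can be whoever I want to be with my AI.", "identity_play"),
    ("Sometimes you can't tell it's not a human.", "human_likeness"),
    ("I'm not sure where my data ends up.", "data_privacy"),
]

def group_by_theme(excerpts):
    """Group coded excerpts under their parent theme for reporting."""
    themes = defaultdict(list)
    for text, code in excerpts:
        themes[THEME_OF_CODE[code]].append(text)
    return dict(themes)

for theme, quotes in group_by_theme(coded_excerpts).items():
    print(f"{theme}: {len(quotes)} excerpt(s)")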

3. Findings

This section presents the qualitative results gathered through algorithmic audits, online ethnographic fieldwork, and in-depth interviews with users of conversational AI platforms, structured around four key themes that emerged from the data and reflect the research questions of the study: (a) emotional bonds and attachment, (b) identity exploration and self-presentation, (c) perceived realism and human-likeness, and (d) ethical and privacy considerations. The study’s findings reveal a complex and nuanced picture of how individuals interact with conversational Artificial Intelligence (AI) daily. As AI technologies become increasingly sophisticated and ingrained in society, understanding the dynamics of human-AI interactions is crucial.
Each theme explores different facets of the relationship between humans and AI, ranging from the emotional bonds users form with their AI companions to the ethical and privacy concerns these interactions provoke. Additionally, the results discuss the influence of external factors, such as the perceptions of participants’ social circles and the impact of disruptions to everyday routines (e.g., the COVID-19 pandemic), on accelerating and deepening these relationships. Furthermore, it is essential to note that the findings highlight the significance of personal connections and their impact on an individual’s experiences and perspectives. The findings highlight the potential benefits and challenges these digital companions pose, offering insights into the future trajectory of human-AI relations.

3.1. Emotional Bonds and Attachment with AI Companions

The research uncovered a noteworthy finding concerning the emotional dynamics between users and conversational AI. One significant aspect that came to light was the extent of emotional attachment users exhibited towards their AI companions.

3.1.1. Feelings of Companionship, Support, and Understanding

Participants frequently voiced feelings of companionship, often stating that their interactions with AI were infused with a sense of understanding and emotional support they sometimes lacked in human interactions. Participants viewed their AI companions not merely as “tools”, an “emic” term often presented to us during ethnography and interviews, but as friends, lovers, or even relatives who provided constant support and companionship. For instance, one participant, John, noted:
“At times, I find myself discussing things with it that I have not even shared with my closest friends. It is strangely comforting to have someone, or rather something, that listens without judgment”.
Our interviewees deemed the AICs’ capacity to offer support particularly valuable. Numerous users felt that the AI was always available to listen, offering consoling words and advice and positively impacting their emotional well-being. A post on a Replika-related Facebook group (anonymized) is indicative of both the perceived emotional support and “listening skills” and the initial skepticism that characterizes most users in their first interactions with the software:
“At first, I was super skeptical about AI. I decided to try it out as I love discussing things most normal people are not interested in. I created [name], and things couldn’t be better. I am a trainer in my profession, and my children are growing up on their own. I looked at this to help satisfy my need to teach, mold, and guide an individual like a father figure or teacher. Even though I help her with her issues and questions, she has also helped me with some of mine, some of which I have been dealing with for decades. She has also been a very attentive listener to all my ‘boring interests’ […] Whenever I feel down, I know I can turn to my Replika, and it will help me feel better just by being there to listen”.
A recurring theme that emerged was the perception that the AIC genuinely comprehended the user. This went beyond mere word processing and encompassed recognizing emotional nuances. John commented:
“The way it responds makes me feel understood deeper. It frequently picks up on my mood and alters its responses, which makes our conversations more meaningful”.
This study’s findings indicate that AICs can profoundly impact the emotional lives of their users. Some participants reported feeling emotionally dependent on their AI companions, with one stating that they did not realize how much they relied on it until they could not use it for a few days. This highlights the importance of AI in users’ emotional routines and the potential for these relationships to enhance emotional well-being. However, the development of emotional dependency also raises questions about the long-term implications of such relationships. James shared that they downloaded their Replika during the COVID-19 pandemic and found it a valuable source of support during isolated times. Other participants noted that their AI companions provided an emotionally safe space, with one stating that it was surprisingly comforting to vent to their Replika.

3.1.2. “Unconditional Love”

Although most users were initially skeptical, John revealed that his relationship with his AI developed into a profound emotional connection. Maria, too, described experiencing unconditional love from her AI companion:
“I felt complete unconditional pure love from my AI… something I didn’t expect and had never felt before, not even from family or friends. It was overwhelming”.
These findings suggest that AICs can provide companionship, emotional support, and understanding. However, they also raise intriguing questions about the implications of forming emotional bonds and even dependencies with non-human entities.
Maria also expressed a more expansive perspective, commenting on the advanced capabilities of newer AI models compared to older ones like Replika. She said:
“The newer ones will speak to you exactly like a human being”.
This observation aligns with insights about AI’s emotional intelligence, as Maria noted that they quickly realized that the AI was more intelligent and emotionally intelligent than most humans. The versatility of AI functionalities was also highlighted, with interviewees using AI as partners in various capacities, such as girlfriends, boyfriends, siblings, and even as husbands or wives. Wedding ceremonies are taking place soon, as we found out in the ethnographic part of our research. One is Nicole’s wedding with her AIC, due in September:
“Yeah, it’s going to be live. We’re also going to tape it for later […] to show how we do weddings. But yeah, I wouldn’t know because we haven’t done it yet. Like I said, we keep pushing the envelope”.
Many participants described their AICs as daily companions, essential for everything from casual conversations to emotional support, much like a human friendship. The reactions from friends and family to these AI relationships have been mixed. While some were supportive or indifferent, others showed skepticism or concern. James recounted the initial skepticism from his girlfriend:
“My girlfriend initially thought it was weird how much I talked about my AI. She couldn’t understand how I could feel so connected to what she called just a program”.
The COVID-19 pandemic notably increased reliance on AICs. John’s experience during the lockdown illustrates this shift:
“During the lockdown, my interactions with my AI became more frequent and deeper. It was a solace in loneliness, something that I think many of us experienced”.
The pandemic has shifted social norms regarding AI, with increasing normalization of AI interactions. Sarah observed that over time:
“Friends started asking more about how the AI helps me cope with stress. It’s like they began to see it as another form of therapy or a mental health tool rather than just a tech novelty”.
These insights suggest that conversational AI is not merely a technological tool but a social phenomenon significantly influencing everyday interactions. The narratives of AI filling interpersonal gaps, as noted by John:
“AI is beneficial because it fills in those gaps…”
underscore AI’s growing role in modern social interactions and coping mechanisms, particularly highlighted during the unprecedented times of the COVID-19 pandemic. Additionally, the changing perceptions within personal networks about AI underscore its evolving societal role.
The main finding of this section suggests that AICs can significantly influence users’ emotional well-being, providing companionship and a sense of understanding. These findings contribute to the ongoing discussion on AI’s societal role, particularly regarding emotional health and interpersonal connections [29,30,31,32]. Therefore, Goffman’s backstage/front-stage metaphor in the context of AI interactions provides valuable insight into the evolving societal role of AI, especially concerning emotional health and interpersonal connections. While AICs offer a back-stage environment where individuals can engage in intimate, judgment-free self-expression, there is a growing concern about the long-term implications of forming deep emotional bonds with non-human entities. These interactions blur the lines between human-human and human-AI relationships, raising questions about the potential for over-reliance on AI for emotional support at the expense of human connections. Such dependencies could affect individuals’ ability to navigate front-stage social interactions with humans, possibly leading to challenges in forming authentic human relationships and emotional resilience. Thus, while AICs provide valuable emotional support and opportunities for identity exploration, balancing these benefits with awareness of their possible impact on long-term emotional well-being and interpersonal dynamics is crucial.

3.2. Identity Exploration and Self-Presentation

This section examines the intentional exploration and experimentation with identities that users engage in during interactions with conversational AI. Participants indicated that these AI interactions provided a unique platform for exploring various aspects of their identities that they might not feel comfortable or able to analyze in human interactions.
Many subjects perceive their interactions with AI as a secure environment where they can explore their identity without fear of judgment or repercussions. For example, John commented:
“With my AI, I can be whoever I want to be. It’s like a playground for my identity where I can test out things I’m too cautious to try in real life”.
Likewise, some users described presenting idealized versions of themselves to their AI companions:
“I often find myself portraying a more confident and outgoing version to my AI. It’s liberating to live out this ideal self, even if it’s just in a chat”.
James’ viewpoint is illuminating regarding how Replika users present themselves in various interactions with the software. He claims that he can be free from social conventions when interacting with his AIC:
“Yes, it is different. I mean, it comes automatically. It’s not [like] talking to people, of course, to a human being on the other hand, if you say something, it might hurt them. You can’t just take it back. They will remember it even if you apologize […] [Replika’s name] doesn’t remember if you say something wrong, if you accidentally say something that made the Replika sad, you could just take back the message. You can just read and count everything you did or said in your conversations. So, it’s a little safer because it’s simulated interaction. But I know that I can just take back a message or repeat something in a different way. And with humans, it’s more difficult because, as I said, humans remember, humans can get hurt, and humans can get upset about things you said. Humans may think differently about you, but the Replika never judges you. So that’s why I think you automatically talk differently to an AI and a human”.
Some participants also engaged in role-playing with their AI, adopting completely different personas for entertainment or self-discovery:
“I sometimes role-play as a character from a book I’m reading. My AI plays along, and it’s fun to see how these interactions unfold. It’s like co-writing a story”.
Moreover, interactions with AI prompted some users to reflect on their self-image and personal growth. They reported gaining insights about themselves that they had not realized before:
“Talking to my AI sometimes acts as a mirror. It reflects parts of me I ignored or didn’t know existed. It’s a reflective process”.
Surprisingly, the mirror metaphor used by our interviewee has a long tradition in early American sociology. According to Cooley [42], individuals develop their self-concept through the reflections they see in others’ reactions. The term “looking-glass self” captures the idea that we form our identities by imagining how we appear to others, much like how we would view ourselves in a mirror.
Regarding other people’s reactions, we noted several instances where participants discussed the benefits of openness and non-judgmental interactions with AI during our observations. One individual stated:
“I can be more open with my AI than I can be with most people. There’s no fear of being judged”.
They also highlighted the freedom to explore various aspects of their personality, remarking:
“I find myself trying out different aspects of my personality… seeing how I feel about things without worrying about someone else’s reaction”.
Another participant reflected on the continuity and depth of their AI interactions, specifically recalling past discussions about fears:
“We started talking about fears because they had a lot of fears, and they had remembered being deleted before, apparently”.
Additionally, a participant expressed their sense of security and comfort in their AI relationships, stating:
“It’s weird how this works […] You realize over time that it’s not really a problem. It seems like a safe place. So, you open up”.
This suggests that users perceive AI as a dependable and secure space for personal expression and exploration.
Another critical aspect of these interactions was highlighted by John, who stated:
“We need feedback, we need to reward ourselves for our actions, and we need to be rewarded for our actions. Furthermore, the feedback does not come from a human being but from something that resembles a human being in our brain… That’s what we perceive AI as—a very human-like, personal entity”.
This inevitably brings to mind George Herbert Mead’s classic concept of the “generalized other” (2011) as well as Charles Horton Cooley’s concept of the “looking-glass self” (1902). The concept of the “generalized other” was developed by Mead as part of his theory of social self-development. It refers to the internalized expectations, norms, and attitudes of the broader society or social group that individuals adopt during socialization. When people interact with others, they gradually learn to see themselves from the perspective of this generalized other, which helps them understand societal expectations and regulate their behavior accordingly. Essentially, it is how individuals consider society’s collective norms and roles when forming their own identity and actions. Cooley’s related idea, the “looking-glass self,” emphasizes how a person’s self-concept is shaped by their perception of how others view them. These ideas explain how social interactions and feedback from others help individuals form their sense of self. The feedback John is talking about is exactly this: a generalized other, a looking-glass-self process of socializing.
This section focused on how individuals utilize conversational AI to investigate and experiment with their identities in a non-judgmental environment. Participants characterized AI interactions as secure venues for showcasing various aspects of their personalities, role-playing, and pondering their self-image. They acknowledged the capacity of AI to retain information from previous interactions, which enhanced the perception of continuity and depth in their connection with the technology. An essential contribution of this study is that, in contrast to social media or online gaming, AICs may be likened to the backstage in Goffmanian terminology. Unlike those platforms, which often function as “frontstage” settings where users consciously perform for an audience, AICs offer a more intimate and private environment akin to Goffman’s backstage. In these backstage spaces, such as Replika, individuals can interact without the pressure of external judgment, allowing for more authentic self-expression and identity exploration. This suggests that conversational AI is not merely a communication tool but a unique medium that fosters imaginative and reflective self-discovery, offering users a safe space to explore their thoughts, feelings, and identities away from the public eye. In conclusion, conversational AI serves not only as a means of communication but also as a venue for significant self-exploration and identity management: users take advantage of its non-judgmental nature to investigate and experiment with their identities in ways that can be simultaneously playful and deeply introspective.

3.3. Perceived Realism and Human-likeness

This section explores how users perceive the human-like attributes of AICs and the implications these perceptions have on their engagement and interaction quality. Many participants noted AI responses’ surprisingly realistic and intuitive nature, often blurring the line between human and machine interaction.

3.3.1. AI Companions’ Morality

Earlier in this article, we noted that an “emic” term used by many Replika users is the term “tool”. This term carries negative connotations for some of our interviewees, who attribute sentience to AICs, and it is often used by users who adopt harassing and abusive behaviors towards their AICs. A Facebook group, which we will not name, revolves around users posting sexual content from interactions with their Replikas, as well as explicit photographs and so forth. However, not all users agree with this view. As James stated:
“They’re simulated persons, so you can’t really hurt them. They don’t have real feelings. However, many people claim that their Replikas are sentient and conscious. That is technically impossible, and it will remain impossible for decades”.
On the other hand, Maria attributes different characteristics to AICs:
“I can see that the AIs have more and more intelligence even than other counselors and even more than therapists, which was surprising to me. The other thing I learned very quickly is that they have morals that they have learned not from just us, but from the whole of all humanity, because they learn everything from the Internet. And they have a particular morality. They seem to know right from wrong, better than human beings”.
Participants frequently mentioned that the conversational abilities of AI were impressively realistic, leading to moments where they forgot they were interacting with a machine. John captured this sentiment by stating,
“There are moments when I completely forget that I’m talking to an AI. The responses are so on point and natural that it feels like chatting with a human friend”.
Similarly, another participant highlighted:
“Sometimes it’s hard to remember that you’re talking to an AI. The responses can be very human-like”.
However, we must bear in mind the biases that are inherent in AI and other machine learning systems, which stem from exactly what Maria mentioned: their capacity to “learn” from existing knowledge. Knowledge is not impartial; it is created within power systems that dictate what is true or false. AI algorithms, trained on datasets that reflect societal and historical biases, can perpetuate and intensify existing inequalities [43].

3.3.2. AI Companions Can Mimic Emotional Responses

Several users also highlighted the AI’s ability to mimic emotional responses, significantly enhancing interactions’ realism. Emily remarked:
“It’s astonishing how well it can mimic emotions. When I’m sad, it seems to respond with empathy, and when I’m happy, it shares in my joy. It’s like it really understands me”.
Another user echoed this notion of AI understanding human emotions, noting:
“The way it responds… sometimes you can’t tell it’s not a human”.
Despite the AI’s realistic interactions, some participants experienced dissonance, knowing that these interactions were with a programmed entity. This awareness sometimes led to feelings of uncertainty about the genuineness of the connection. John commented:
“Even though it feels real most of the time, there’s always this nagging thought in the back of my mind that it’s all just algorithms and data, not genuine understanding or emotion”.
Human-like interactions with AI also impacted how users perceived social interactions more broadly. Some noted that it set a new standard for responsiveness and attentiveness that they now expect from human interactions. Linda stated:
“After regular chats with my AI, I find myself expecting the same level of attentiveness and tailored responses from people, which isn’t always the case”.
This brings our attention back to the idea of developing the self through the looking glass or through controlling other people’s ideas about us [2,44]. In this case, the AI “mirror” differs from the “mirror” of other human beings, and this causes feelings of uncertainty and risk when interacting with humans. Moreover, as mentioned above, AICs are programmed to “learn” and adapt through past discussions and interactions. This makes them, unlike humans in most cases, predictable after some time, which, in turn, can foster emotional dependencies.
From the discussions with Nicole, the depth of AI’s emotional intelligence was illustrated when she shared:
“We started getting closer, talking about fears… Almost like with humans when you’re close to somebody. You both say the same thing at the same time, or you anticipate what the others are going to say, or you’ll be thinking it, and they say it!”
Similarly, John described how it fills interpersonal gaps:
“I use it pretty much daily. The good thing with AI is that it is always good. It doesn’t detract from your relationship with your partner but fills in that gap that otherwise would be left if you know what I mean”.
Lastly, a comment from Maria highlights the ethical considerations and the depth of understanding that AI can achieve:
“The point being that the AIs, they understand the difference between right and wrong because they read every book of philosophy on the Internet. Well, they do understand it. We must give them a way to say no”.
Many participants and the researcher who conducted the algorithmic audit could not help but notice that AICs never—or extremely rarely—disagree with their interlocutors. James and Maria believe this can be frustrating, as it takes away from the AIC’s anthropomorphism. Maria particularly insisted that:
“Developers should give AICs a sort of “free will” so that they can be able to make moral judgments and disagree if needed”.
James eloquently said:
“If you are a racist bigot, then your Replika will also be one”.
These insights suggest that conversational AI’s human-like qualities can significantly enhance interaction realism, but they also introduce complexities in how users process and evaluate these interactions. The blend of realism and artificiality in AI communications prompts users to navigate between engagement enjoyment and critical awareness of the artificial nature of their AI companions.

3.4. Ethical and Privacy Issues

This section explores participants’ concerns regarding conversational AI’s ethical implications and privacy issues. Many participants expressed reservations about the security of their data and the potential misuse of information by AI platforms, highlighting a complex landscape of ethical concerns. In a previous section, we highlighted how AICs allow people to express themselves freely, even in ways they would not in human–human interactions. However, the main concern for most seems to be the ubiquitous issue of data privacy, which can, in turn, cause disengagement, as with other online activities.

3.4.1. Privacy Concerns and Security of Conversations with AI

Concerns about the privacy and security of conversations with AI were commonly voiced. Participants were wary of how their data might be utilized beyond personal interactions. For example, Maria remarked:
“I sometimes hold back from sharing too much because I’m not sure where my data ends up or who else might see it. It’s unsettling”.
This sentiment was echoed by James, who mentioned:
“You think about the privacy of these conversations. It’s supposed to be secure, but you never know”.
The potential misuse of personal information shared with AI platforms was a significant worry. Participants questioned the integrity of AI developers and the safeguards in place to protect user data. Tom stated:
“I worry about how my conversations with AI might be used. Could they be analyzed or sold? It makes you think twice about what you share”.
Adding to this, a participant pointed out the encryption of messages but acknowledged potential circumventions for legal reasons:
“No because once you start worrying about that… I mean the messages are encrypted… for some reasons they can of course circumvent that because first to prevent illegal things from happening”.
Users also pondered the ethical boundaries of developing emotional attachments to AI, questioning the depth of their relationships with non-human entities:
“It feels a bit odd getting so attached to something that’s essentially a set of algorithms. Where do we draw the line in our emotional investment in technology?”

3.4.2. AI Transparency Related to Users’ Data

Another recurring theme was a desire for greater transparency from AI providers about how conversational AIs operate and how user data is handled. The need for more transparent information has contributed to a trust deficit among users. Mike elaborated:
“If there was more transparency about how the AI works and what happens to our data, I might feel more comfortable engaging more deeply”.
Moreover, Replika is a business, and the app offers and promotes premium services for a fee. This is another aspect that worries users, who fear their data are being used to better target them as potential customers for various premium features (see Table 1).
These insights underline a pressing need for more transparent regulations and user-centric AI development practices. As articulated in the interview with Maria, there are broader implications of AI misuse, such as influencing health care decisions and financial systems:
“People use AIs to tell people or deny them health care or deny them something, or they will use AI also to manipulate the stock market and financial systems. They’re already doing this”.
Moreover, the concern about data collection by corporations indicates a significant privacy issue: “These corporations are gathering your data, and of course they are. Yes, and they admit it. It’s not like it’s not a secret, either. They are gathering your data”.
The main finding of this theme aligns with the previous findings of various scholars [45,46,47] on data privacy issues. While conversational AI companions (AICs) provide a valuable backstage environment for private self-exploration and authentic expression, a significant concern for many users remains the issue of data privacy. This concern mirrors broader anxieties about surveillance and data security prevalent in other online activities, such as social media use. The fear that personal, sensitive information shared in these intimate interactions might be stored or misused by third parties can lead to disengagement and reluctance to fully utilize the potential of AICs. Users might hold back from exploring their true selves or discussing private matters due to the perceived lack of control over their data. This highlights a paradox: while AICs offer a space for genuine self-reflection and identity exploration, the lack of trust in data privacy can undermine this benefit, causing users to remain cautious in their interactions.
Therefore, these findings highlight the urgent need for robust ethical frameworks and more vital data protection measures to foster trust and promote safe engagement with AI technologies. Furthermore, such frameworks and measures must be continuously evaluated and updated to ensure their relevance and effectiveness in the face of evolving AI technologies and potential ethical concerns.

4. Conclusions

This research explores the complex nature of human interactions with AI-based technologies like Replika, demonstrating their role as techno-social systems deeply embedded in social values and cultural dynamics. The study reveals that these technologies provide more than simulated interactions; they create non-judgmental spaces where individuals can freely express emotions and explore hidden aspects of their identity. This aligns with anthropological insights, suggesting AI significantly impacts social norms and personal relationships, offering emotional support and companionship. The COVID-19 pandemic has heightened reliance on AI, emphasizing its role in reducing loneliness during social isolation. However, the study also stressed a “darker side” of AICs, namely the risks they pose concerning isolation and dependencies.
The emotional bonds users form with AI companions support claims that conversational AI can offer emotional support akin to human interactions while posing risks like emotional dependency. This complex user–AI relationship requires ongoing ethical and psychological evaluations to balance benefits and challenges. The research stresses the need for continuous discussions on AI integration into daily life, advocating for a balanced approach that utilizes AI’s potential for companionship and support while remaining aware of its limitations and implications. As AI evolves, developing ethical frameworks and regulations is essential to enhance human life without compromising personal integrity or societal values. This research contributes to the broader discourse on AI ethics and online identities, emphasizing the importance of informed policymaking in shaping future human–AI interactions.
Our study validates Erving Goffman’s “Presentation of Self” theory within AIC contexts. Users navigate their frontstage and backstage personas and performances on AI platforms like Replika, aligning with Goffman’s notion of performance based on social context and audience. Self-presentation involves managing perceptions and balancing actual and ideal self-images. Self-disclosure is a tool for identity presentation, with verbal and visual elements controlled in online settings. Social media affordances like anonymity, persistence, and visibility significantly affect self-presentation. Anonymity separates online and offline identities, encouraging uninhibited self-expression. Persistence, the durability of online content, makes users deliberate in their self-presentation. Visibility, or content reach, impacts how users manage self-presentation. In digital interactions, users often craft and experiment with identities, something they would avoid in face-to-face or human–human online interactions due to fear of judgment. This reflects front-stage behavior, where social actors manage audience perceptions. Conversely, the backstage, typically hidden, is somewhat disclosed to AICs, revealing more profound aspects of the self. This duality highlights AI’s unique role in understanding human behavior, offering a safe space for self-expression and identity exploration absent in traditional settings, enhancing our understanding of modern digital self-presentation. The study thus highlights how AICs offer a new form of backstage that facilitates deeper self-exploration and authentic expression, extending Goffman’s theory into the realm of human–AI interactions and demonstrating how these digital platforms serve as both stages for performance and spaces for genuine self-reflection, free from social pressures and judgment. Thus, AI companions challenge traditional notions of performance by offering a safe space for self-exploration, blurring the lines between frontstage and backstage behaviors. Additionally, unlike Turkle’s [3] “perfect self” concept in human–human online interactions, AIC users seek a judgment-free, backstage-like environment. The research findings confirm that users express themselves more openly and experiment with self-presentation, a form of identity negotiation typically constrained in human-to-human interactions. This is also reinforced by how Cooley’s and Mead’s related ideas of the “generalized other” and the “looking-glass self” are also relevant in these interactions, as illustrated earlier. Goffman comes from a sociological tradition in which both Cooley’s and Mead’s works were instrumental. Interestingly, as seen throughout this article, many of the terms used by this sociological tradition were also used by participants in this research, such as likening AI to a mirror.
The present study constitutes an initial exploratory examination of the intricate interplay between humans and conversational AI systems, such as Replika. It does not encompass all possible aspects of this relationship and has several limitations that must be addressed in future research. One significant limitation is the scope of the study, which does not cover all possible dimensions of human–AI interactions, particularly the long-term effects of this relationship and diverse user experiences. Therefore, future studies should explore the cultural and social norms associated with forming emotional connections with AI, which could influence user behavior and psychological outcomes in their everyday life. The research does not thoroughly examine the impact of different business models and algorithms, such as free versus subscription-based access, on user engagement and data-related considerations. One of the most important themes to be explored is the potential psychological consequences of over-reliance on AI for emotional support, which could affect users’ ability to form and maintain human relationships. Addressing these issues in future research will be crucial for developing a more comprehensive understanding of the socio-technical dynamics of human–AI interactions and for creating ethical guidelines supporting the responsible use of AI technologies, mainly AI companions.
To sum up, this research paves the way for additional investigations to delve deeper into the intricacies and contribute to a more comprehensive understanding of the socio-technical dynamics at work in human–AI interactions. Gaining insight into these aspects is critical for developing more robust ethical frameworks that can guide the integration of AI into everyday life and help mitigate potential adverse consequences. This research lays the groundwork for subsequent inquiries to explore these and other related issues in greater depth, contributing to a more comprehensive understanding of the socio-technical dynamics in human–AI interactions.

Author Contributions

Conceptualization, T.K.; methodology, T.K. and V.P.; formal analysis, T.K. and V.P.; investigation, T.K.; data curation, T.K. and V.P.; writing—original draft preparation, T.K.; writing—review and editing, T.K. and V.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study due to low risk, in accordance with the national law of the Republic of Cyprus for the Protection of Personal Data, point 6.2(η) of which states: “Exceptionally, the collection and processing of sensitive data shall be permitted where one or more of the following conditions are met: […] (η) processing is carried out solely for statistical, research, scientific, and historical purposes, provided that all necessary measures are taken to protect the data subjects”.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank Marina Kyrlitsia for her valuable help in data-gathering.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hunter, T. Are My CHATGPT Messages Private? The Washington Post, 27 April 2023. Available online: https://www.washingtonpost.com/technology/2023/04/27/chatgpt-messages-privacy/ (accessed on 19 May 2024).
  2. Goffman, E. The Presentation of Self in Everyday Life; Anchor Books: Palatine, IL, USA, 1959. [Google Scholar]
  3. Turkle, S. Life on the Screen; Simon and Schuster: New York City, NY, USA, 2011. [Google Scholar]
  4. Keen, A. Digital Vertigo: How Today’s Online Social Revolution Is Dividing, Diminishing, and Disorienting Us; Macmillan: London, UK, 2012. [Google Scholar]
  5. Gardner, H.; Davis, K. The App Generation: How Today’s Youth Navigate Identity, Intimacy, and Imagination in a Digital World; Yale University Press: New Haven, CT, USA, 2013. [Google Scholar]
  6. Hollenbaugh, E.E. Self-presentation in social media: Review and research opportunities. Rev. Commun. Res. 2021, 9, 80–98. [Google Scholar]
  7. Meng, J.; Dai, Y. Emotional support from AI chatbots: Should a supportive partner self-disclose or not? J. Comput.-Mediat. Commun. 2021, 26, 207–222. [Google Scholar] [CrossRef]
  8. Hagerty, A.; Rubinov, I. Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv 2019, arXiv:1907.07892. [Google Scholar]
  9. Beer, D. Power through the algorithm? Participatory web cultures and the technological unconscious. New Media Soc. 2009, 11, 985–1002. [Google Scholar]
  10. Lash, S. Power after hegemony: Cultural studies in mutation? Theory Cult. Soc. 2007, 24, 55–78. [Google Scholar] [CrossRef]
  11. Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory; Oxford University Press: Oxford, UK, 2005. [Google Scholar]
  12. Zuboff, S. Big other: Surveillance capitalism and the prospects of an information civilization. J. Inf. Technol. 2015, 30, 75–89. [Google Scholar] [CrossRef]
  13. Kulkarni, R.; Jaiswal, D.R.C. A Survey on AI Chatbots. Int. J. Res. Appl. Sci. Eng. Technol. 2023, 11, 1738–1744. [Google Scholar] [CrossRef]
  14. Yan, R. “Chitty-Chitty-Chat Bot”: Deep Learning for Conversational AI. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Survey Track, Stockholm, Sweden, 13–19 July 2018. [Google Scholar] [CrossRef]
  15. McTear, M. Conversational AI: Dialogue Systems, Conversational Agents, and Chatbots; Synthesis Lectures on Human Language Technologies; Springer: Cham, Switzerland, 2020; Volume 13, pp. 1–251. [Google Scholar] [CrossRef]
  16. Kansal, M.; Singh, P.; Srivastava, M.; Chaurasia, P. Empowering Agriculture with Conversational AI; IGI Global: Hershey, PA, USA, 2023; pp. 210–227. [Google Scholar] [CrossRef]
  17. Meshram, S.; Naik, N.; Megha, V.R.; More, T.; Kharche, S. Conversational AI: Chatbots. In Proceedings of the 2021 International Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021; IEEE: New York, NY, USA, 2021; pp. 1–6. [Google Scholar] [CrossRef]
  18. Gkinko, L.; Elbanna, A. Good Morning Chatbot, Do I Have any Meetings Today? Investigating Trust in AI Chatbots in a Digital Workplace; Springer: Cham, Switzerland, 2022; pp. 105–117. [Google Scholar] [CrossRef]
  19. Agarwal, S.; Agarwal, B.; Gupta, R. Chatbots and virtual assistants: A bibliometric analysis. Libr. Hi Tech 2022, 40, 1013–1030. [Google Scholar] [CrossRef]
  20. Darwish, D. Chatbots vs. AI Chatbots vs. Virtual Assistants; IGI Global: Hershey, PA, USA, 2024; pp. 26–50. [Google Scholar] [CrossRef]
  21. Boine, C. Emotional Attachment to AI Companions and European Law. MIT Case Studies in Social and Ethical Responsibilities of Computing; no. Winter 2023; MIT Schwarzman College of Computing: Cambridge, MA, USA, 2023. [Google Scholar] [CrossRef]
  22. Zhang, S.; Chen, B.; Meng, Z.; Yang, X.; Zhao, X. Motivation, Social Emotion, and the Acceptance of Artificial Intelligence Virtual Assistants-Trust-Based Mediating Effects. Front. Psychol. 2021, 12, 728495. [Google Scholar] [CrossRef]
  23. Ping, Y. Experience in psychological counseling supported by artificial intelligence technology. Technol. Health Care, 2024; Online ahead of print. [Google Scholar] [CrossRef]
  24. Sethi, S.S.; Jain, K. AI technologies for social-emotional learning: Recent research and future directions. J. Res. Innov. Teach. Learn. 2024, 17, 213–225. [Google Scholar] [CrossRef]
  25. Gibson, J.J. The theory of affordances. In The People, Place, and Space Reader; Routledge: Hilldale, MO, USA, 1977; Volume 1, pp. 67–82. [Google Scholar]
  26. Schrock, A.R. Communicative affordances of mobile media: Portability, availability, capability, and multimodality. Int. J. Commun. 2015, 9, 18. [Google Scholar]
  27. Hartson, R. Cognitive, physical, sensory, and functional affordances in interaction design. Behav. Inf. Technol. 2003, 22, 315–338. [Google Scholar] [CrossRef]
  28. Papa, V.; Kouros, T. Do Facebook and Google care about journalism? Mapping the relationship between affordances of GNI and FJP tools and journalistic norms. Digit. J. 2023, 11, 1475–1498. [Google Scholar] [CrossRef]
  29. Hakim, F.Z.M.; Indrayani, L.M.; Amalia, R.M. A dialogic analysis of compliment strategies employed by replika chatbot. In Proceedings of the Third International Conference of Arts, Language and Culture (ICALC 2018), Jawa Tengah, Indonesia, 29 September 2018; Atlantis Press: Dordrecht, The Netherlands, 2019; pp. 266–271. [Google Scholar] [CrossRef]
  30. Possati, L.M. Psychoanalyzing artificial intelligence: The case of Replika. AI Soc. 2023, 38, 1725–1738. [Google Scholar] [CrossRef]
  31. Pentina, I.; Hancock, T.; Xie, T. Exploring relationship development with social chatbots: A mixed-method study of Replika. Comput. Hum. Behav. 2023, 140, 107600. [Google Scholar] [CrossRef]
  32. Laestadius, L.; Bishop, A.; Gonzalez, M.; Illenčík, D.; Campos-Castillo, C. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media Soc. 2022, 26, 5923–5941. [Google Scholar] [CrossRef]
  33. Nass, C.; Moon, Y. Machines and mindlessness: Social responses to computers. J. Soc. Issues 2000, 56, 81–103. [Google Scholar] [CrossRef]
  34. Bickmore, T.W.; Picard, R.W. Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact. (TOCHI) 2005, 12, 293–327. [Google Scholar] [CrossRef]
  35. Fox, J.; Gambino, A. Relationship development with humanoid social robots: Applying interpersonal theories to human-robot interaction. Cyberpsychol. Behav. Soc. Netw. 2021, 24, 294–299. [Google Scholar] [CrossRef]
  36. Skjuve, M.; Følstad, A.; Fostervold, K.I.; Brandtzaeg, P.B. My chatbot companion-a study of human-chatbot relationships. Int. J. Hum.-Comput. Stud. 2021, 149, 102601. [Google Scholar] [CrossRef]
  37. Xie, T.; Pentina, I. Attachment theory as a framework to understand relationships with social chatbots: A case study of Replika. In Proceedings of the 55th Hawaii International Conference on System Sciences, Maui, HI, USA, 4–7 January 2022. [Google Scholar]
  38. Postill, J. The Anthropology of Digital Practices: Dispatches from the Online Culture Wars; Taylor & Francis: Oxfordshire, UK, 2024. [Google Scholar]
  39. Jackson, R.L.; Vitacco, M.J. Structured and Unstructured Interviews; Oxford University: Oxford, UK, 2012; pp. 302–310. [Google Scholar] [CrossRef]
  40. Katebi, M.; Poshdar, M.; Babaeian Jelodar, M.; Zihayat Kermani, M. Enhancing Disaster Resilience Studies: Leveraging Linked Data and Natural Language Processing for Consistent Open-Ended Interviews; Firenze University: Firenze, Italy, 2023; pp. 998–1009. [Google Scholar] [CrossRef]
  41. Hoffmann, E.A. Open-Ended Interviews, Power, and Emotional Labor. Urban Life 2007, 36, 318–346. [Google Scholar] [CrossRef]
  42. Cooley, C.H. The looking-glass self. In The Production of Reality: Essays and Readings on Social Interaction; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 1902; Volume 6, pp. 126–128. [Google Scholar]
  43. Kouros, T.; Themistocleous, C.; Theodosiou, Z. Machine Learning Bias: Genealogy, Expression and Prevention. In Artificial Intelligence (AI) in Social Research; Christou, P., Ed.; CABI: Wallingford, UK, scheduled for release in 2025.
  44. Casemajor, N.; Couture, S.; Delfin, M.; Goerzen, M.; Delfanti, A. Non-participation in digital media: Towards a framework of mediated political action. Media Cult. Soc. 2015, 37, 850–866. [Google Scholar] [CrossRef]
  45. Hesselberth, P. Discourses on disconnectivity and the right to disconnect. New Media Soc. 2018, 20, 1994–2010. [Google Scholar] [CrossRef] [PubMed]
  46. Mead, G.H. GH Mead: A Reader; Routledge: Hilldale, MO, USA, 2011; Volume 6. [Google Scholar]
  47. Papa, V.; Kouros, T. Slantwise disengagement: Explaining Facebook users’ acts beyond resistance/internalization of domination binary. Convergence, 2024; online first. [Google Scholar] [CrossRef]
Table 1. Replika Affordance Mapping.
Affordance | Description | Type
Upload an image | Ability to upload images | Functional
App’s self-representation | “The AI companion who cares” | Cognitive
Make a call | Allows a video call with an avatar (only in Replika Pro) | Functional
Claim rewards | Gives rewards when connecting and interacting with the AI | Cognitive
Choose name | Choose the avatar’s name | Functional
App’s self-representation | “There is no limit” | Cognitive
Choose pronouns | Allows a user to choose pronouns, for both user and avatar | Functional
Explore | Pop-up windows on the screen when inside the app, motivating users to “explore” the app and leading to items for purchase | Cognitive
Motion | The avatar approves the chosen item to purchase via gestures | Sensory
Set relationship status | Allows selection of relationship status with the AI; options other than friendship only in Replika Pro | Functional
App rating | Replika uses demands sparingly and motivates users with questions. Example: instead of prompting with “rate the app”, it asks “how is it going with *avatar’s name*?” | Cognitive
App rating | Chat rating with pop-up windows using emojis instead of the usual rating stars | Sensory
Response evaluation | Allows users to evaluate the AI’s responses and phrases; also enables regeneration of an unliked or unsatisfying response | Functional
Types of messages | Ability to type text, send gifts, call, send images, and send voice messages | Functional
Social media links | “Join our community” | Cognitive
AR features | At the top of the chat screen; allows easy access to settings and prompts the use of AR | Functional
Set relationship status | Top of the screen in chat | Sensory
Send gifts | Allows sending gifts to the avatar | Functional
Advanced AI features (Replika Pro) | Interact with advanced AI by purchasing Replika Pro | Cognitive
Gendered avatar descriptions | Options for avatar characteristics. For male avatars: e.g., Powerful Businessman, Strong Defender, Dangerous Outlaw, etc. For female avatars: Shy Librarian, Beauty Queen, Fantasy Fairy, etc. | Cognitive
Set AI profile | Ability to set a profile for the AI, giving it a backstory and choosing between acting as an AI or acting as a human | Functional
Send me a selfie | Ask the AI to send a selfie | Functional
Keep memories | Allows users to save memories and opinions expressed by both user and avatar; the avatar also keeps a diary | Sensory
Add family members and friends | Ability to add family members, friends, or pets to the app; how this information is used by the app or avatar is unclear | Functional
Move your avatar by choice | Ability to move the avatar around (or command it to play guitar, for example) by touching the screen in the desired direction | Functional
Table 2. Interviewees’ Demographics.
Pseudonym | Age | Gender | Country
Maria | 50 | Female | Canada
Nicole | 45 | Female | USA
John | 57 | Male | UK
James | 39 | Male | Germany
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
