Article

Exploring Motivators for Trust in the Dichotomy of Human–AI Trust Dynamics

by
Michael Gerlich
Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School, 8302 Kloten, Switzerland
Soc. Sci. 2024, 13(5), 251; https://doi.org/10.3390/socsci13050251
Submission received: 27 February 2024 / Revised: 26 April 2024 / Accepted: 30 April 2024 / Published: 6 May 2024
(This article belongs to the Section Contemporary Politics and Society)

Abstract

This study analyses the dimensions of trust in artificial intelligence (AI), focusing on why a significant portion of the UK population demonstrates a higher level of trust in AI compared to humans. Conducted through a mixed-methods approach, this study gathered 894 responses, with 451 meeting the criteria for analysis. It utilised a combination of a six-step Likert-scale survey and open-ended questions to explore the psychological, sociocultural, and technological facets of trust. The analysis was underpinned by structural equation modelling (SEM) and correlation techniques. The results unveil a strong predilection for trusting AI, mainly due to its perceived impartiality and accuracy, which participants likened to conventional computing systems. This preference starkly contrasts with the scepticism towards human reliability, which is influenced by the perception of inherent self-interest and dishonesty in humans, further exacerbated by a general distrust in media narratives. Additionally, this study highlights a significant correlation between distrust in AI and an unwavering confidence in human judgment, illustrating a dichotomy in trust orientations. This investigation illuminates the complex dynamics of trust in the era of digital technology, making a significant contribution to the ongoing discourse on AI’s societal integration and underscoring vital considerations for future AI development and policymaking.

1. Introduction

1.1. Background

The introduction of new technologies in recent years has considerably altered people’s decision-making processes (Rahaman 2023). AI is one of the most recent technologies used to make judgements. AI is a branch of computer science that draws on linguistics, psychology, philosophy, and other subjects to enable computers to accomplish activities that would normally require human intelligence (Van Duin and Bakhshi 2017). AI-powered personalised decision making enables businesses to examine customer behaviour and develop products and services tailored to each individual customer according to their needs and preferences. A personalised strategy powered by AI can greatly increase campaign efficacy and boost sales (Rahaman 2023). Personalisation has gained popularity as a marketing strategy in recent years (Unemyr and Wass 2018). It uses prediction algorithms to deliver unique communications that are personalised and optimised for each user (Unemyr and Wass 2018; Gerlich 2023b). It involves more than making content, offers, or product recommendations; it also entails developing user experiences that foster engagement and increase retention (Gerlich et al. 2023). In this scenario, the consumer is the person able to purchase a good or service that involves AI engagement. Consumers everywhere are continually adjusting to new technology in order to live, work, and shop differently, frequently in an effort to lead more economical daily lives (Rogers 2023). Even without fully understanding these new tools, consumers may suddenly find themselves dependent on them. Users of new technologies, like consumers, depend on them to simplify their lives and save money and time.
Thanks to the development of new technologies like AI, companies now have almost continual interaction with customers, which presents many chances to gain their confidence. Nonetheless, businesses have just as many chances to make a mistake and lose customers’ trust (Rogers 2023). Although AI-powered personalisation may enhance and preserve customer trust, it is crucial to address data security and privacy as well (Morey et al. 2015). Customers’ acceptance of this new technology depends critically on their level of trust in AI and technology. Customers can control the personal information they share online if a company is transparent about how it collects and uses customer data (Morey et al. 2015). These personalised systems, also known as decision support systems, may provide suggestions that one might normally find in conflict with one’s own perspectives, only this time the suggestions are provided by a system, not a person. However, because people tend to see technical systems as social actors, they may build “relationships” with them and feel emotions and attitudes towards them in the same way they would with other humans. As a result, human trust in a decision support system (DSS) and its suggestions is as vital to decision making as human guidance (Madhavan and Wiegmann 2007).
It is important to understand what human–AI trust is, what influences it, and how much it can be improved via empirical research. Because of its diverse and interdisciplinary nature, and because it is easily confused with related notions like confidence, distrust, dependence, obedience, and trustworthiness, human trust is a difficult subject to investigate. As such, designing an experimental technique and selecting suitable metrics of trust can be difficult. Although there have been a few prior attempts, the work of McKnight and associates (McKnight et al. 2002a, 2002b) is the most noteworthy. The researchers proposed that more transparent technology would consistently obtain higher levels of trust from users, whereas people’s confidence in more sophisticated technology would be more variable. This hypothesis was confirmed, showing a negative relationship between technological sophistication and human–computer trust: higher levels of trust were associated with technologies whose functions are always clear and straightforward, as opposed to more sophisticated technologies, whose functions can range from transparent to opaque.
This research might have several implications for human–AI trust, one of which is that humans feel more at ease with AI when they comprehend it and believe that its goals and purposes are clear. In addition to the factors evaluated in the Technology Acceptance Model (TAM), a thorough analysis of the link between human trust and AI requires adherence to the theoretical methodology employed by McKnight and colleagues. This integrative method allows us to concentrate on understanding the complex nature of trust in technology as a whole. McKnight et al. (2002a) proposed and verified the division of trust into four constructs—trusting beliefs, disposition to trust, institution-based trust, and trusting intentions—as well as sixteen distinct subconstructs. This approach examines situational and intrinsic factors while acknowledging the complexity of the trust construct. By looking closely at these components, we can distinguish the discrete components of trust that are sustained over time by trait-driven factors from those that may be altered by straightforward, practical actions. Prior knowledge of the human trust domain and its application to this research issue is therefore crucial.

1.2. Research Purpose

By misrepresenting the factual and practical capabilities of AI, the mismatch between knowledge and comprehension might ultimately result in an overestimation or underestimation of AI-based technologies. This research seeks to gain a profound understanding of these characteristics and determine their impact on respondents’ overall trust in AI. A considerable proportion of the population places more faith in AI than in people (Gerlich 2023a). Consequently, this analysis delves into the reasons behind the trust in AI, while simultaneously exploring the causes of distrust in humans. Additionally, the investigation explores how individuals’ opinions about AI are influenced by their beliefs. Despite extensive large-scale studies examining public opinions on AI, a significant gap remains between quantitative results and people’s personal views, attitudes, and beliefs. Using a mixed-methods approach, this study fills this gap by asking participants to rate the biases, misconceptions, hazards, and opportunities surrounding AI and to evaluate how these factors ultimately affect their willingness to trust and use the technology.

2. Literature Review

The public’s perceptions of AI remain divided (Fast and Horvitz 2017). While some view it as innovative, others are wary of its black-box nature and lack of explainability (Kieslich et al. 2022). Current studies point to an increasing interest in AI and AI-based technology. This curiosity is not limited to the academic community; the general population exhibits it as well. In contemporary Western society, smartphones, smartwatches, and smart TVs are practically universal, and the trend is rising. On the other hand, many contend that the rate at which technology is developing is not in line with our moral, legal, or social norms (Jobin et al. 2019). Gerlich (2023a) studied the perceptions and acceptance of AI at business schools in the UK, USA, Germany, and Switzerland. The findings showed that a large number of participants seem to trust AI more than humans. Similar tendencies were shown by a study on virtual influencers (Gerlich 2023b).
Another study examined high school students’ opinions on AI and gauged the extent of their cynical hostility towards the technology (Bochniarz et al. 2022). According to the findings, participants mistrusted AI more when they perceived it as hostile, as a danger, or as subjective, meaning controlled by emotions. Kelley and colleagues (2021) gathered data on the public perception of AI from eight nations in a large-scale survey with over 10,000 participants. The researchers split the respondents into four groups reflecting different perspectives on AI: (1) excitement; (2) worry; (3) usefulness; and (4) futuristic views of its advancement. Just 12% of the responses were categorised as seeing AI as useful, while about 23% of the respondents were classified as having sentiments of worry. Approximately 35% of the participants did not fit into any of the four sentiment categories. These figures imply that people are more concerned about the dangers of AI than excited about its potential. Studies showing people’s fear of losing their jobs, privacy concerns, or moral conundrums (Dietterich and Horvitz 2015; Kieslich et al. 2022) corroborate this. A second study, focused on medical AI, discovered that individuals erroneously believe that some AI technologies are currently in use when, in reality, they are not, and that they rely on false information frequently presented by the media (Stai et al. 2020). Because of this mismatch between knowledge and comprehension, the factual and practical capabilities of AI may ultimately be misrepresented, leading to an overestimation or underestimation of AI-based technologies. This research aims to acquire a more profound comprehension of these attributes and ascertain their impact on the participants’ overall trust in AI.

2.1. Effects of AI Trust

Yang and Wibowo (2022) observed that people are willing to trust emerging technologies like AI at different stages. People’s faith in AI is founded on their belief that it is reliable and works consistently. The durability of this confidence over time depends on how much the AI service provider can be trusted or relied upon. User trust in AI reaches its highest level when users have faith in the technology and intend to continue depending on it (Yang and Wibowo 2022). This conforms to Bedué and Fritzsche’s (2021) claim that compliance, standards, and social norms are not sufficient, as the wider environment also has to be taken into account.
Recently, the European Commission’s High-Level Expert Group on AI (HLEG 2020, p. 35) stated that a relationship of trust with AI is possible and has to be developed by working on building its trustworthiness. People’s anxiety about the opinions of friends, family members, and others regarding their own conduct has an impact on their decision-making abilities and trusting capacity (Mostafa and Kasamani 2021). By contrast, Ryan (2020) disagrees with the notion of trusting AI, stating that AI does not have the capability to build trust, for several reasons. Trust is one of the core defining features of human relationships; what AI fosters is a form of reliance on its capabilities rather than trust, because trust is built on emotive states and AI cannot be held responsible for its actions. He further states that this dependency on AI should not be viewed as trust, as doing so dilutes the value one places on interpersonal trust between humans and therefore undermines the responsibility one develops in building that trust (Ryan 2020).
Lastly, user-related factors such as personality types and past experiences with technology including AI as well as preconceived ideas about AI affect trust in it (Yang and Wibowo 2022). Trust based on these traits allows for affirmative cognitive–behavioural benefits that can be realised by users (Yang and Wibowo 2022).

2.2. Agreement on AI

Customers may have to make trade-offs while utilising AI-enabled services, according to research by Ameen et al. (2021). Some of these trade-offs may involve a loss of privacy, a decrease in human interaction, a loss of control, the requirement for more time, and the possibility of negative feelings that could make people feel worse about AI-enabled services. According to Wang et al. (2019), trust between customers and retailers strengthens the relationship commitment between them. This commitment is fundamental to the success of any relationship between humans and automation systems like AI (Hengstler et al. 2016). For this, privacy is a critical component: customers may have apprehensions about their data and how they are used and controlled (Wang et al. 2019). Prakash et al. (2023) highlighted that certain criteria are critical for predicting trust in the context of AI. According to their findings, the most important factor in determining trust is ease of use. Customers are more likely to adopt new technologies, such as AI, that are easy to understand (Mostafa and Kasamani 2021). Multiple studies have shown that two levels of trust exist when using technology-mediated services—trust in the technology itself (Hengstler et al. 2016; Siau and Wang 2018) and trust in the firm behind the technology, which encompasses the process by which data are collected and how the purpose of the technology is communicated (Nienaber and Schewe 2014). According to Hengstler et al. (2016), when AI is implemented in the service industry, its purpose should be communicated early on, when knowledge levels are low. By communicating effectively and proactively, the brand can increase the chances of the technology being accepted by society at large. Past studies have also corroborated that the more customers understand the brand, the more likely they are to build a relationship with it in the long term (Keiningham et al. 2017). Bedué and Fritzsche’s (2021) findings elaborated that enhancing consumers’ understanding of AI is crucial for cultivating a higher level of trust; this might be accomplished by educating clients about artificial intelligence, for example by explaining algorithmic results.
Further predictors of trust are social presence, perceived benefit, and the propensity to trust technology (Prakash et al. 2023).

2.3. Integrity and AI

Bedué and Fritzsche (2021) identify transparency as one of the central challenges their research has demonstrated for AI. As with every technology, transparency is crucial, but with AI it has been shown to be even more so, because it directly affects people’s confidence in AI. The same study by Bedué and Fritzsche (2021) makes it abundantly evident that ensuring data and application integrity alone will not suffice to win confidence in the context of AI. Ryan (2020) questions the capacity of AI to build trust, as it is merely used to help make decisions and is set up by multi-agent systems in which it is not known who makes the decisions. This view was previously highlighted by Bryson (2018), who states that AI is merely a software development technique which should only increase the trustworthiness of its founding institution rather than be trusted in itself.
Additionally, the research conducted by Prakash et al. (2023) broadens our knowledge by demonstrating that conversational cues and social qualities are critical to the perception of and trust in AI. Forming these positive individual perceptions allows trust to develop into a trusting belief (Prakash et al. 2023). According to earlier studies, speaking with a person is often seen as carrying a higher risk to a consumer’s privacy than using an AI service (Song et al. 2022). Customers believe that people are driven by subjective interests and therefore prefer to maintain their privacy. This is not always the case, though, and some customers need more interpersonal connection than others. Greater familiarity with computer agents may increase a consumer’s likelihood of trust, their perception of risk, and their awareness of how computer agents might retain the data they create (Song et al. 2022). It is therefore critical that clients realise they are using a secure information system. Regardless of whether the service is supplied by a person or an AI system, all customer service personnel must respect their privacy (Song et al. 2022).

2.4. Ethics and AI

Technology is advancing rapidly, with autonomous agents and growing quantities of available data, yet ethical concerns have surfaced in tandem with these advancements (Klockmann et al. 2022). It is important to remember that ethics in AI and personalisation should not be viewed as a hindrance or a problem, but rather as something that will expand responsibility, action, and autonomy. The development of morally sound computers has been the subject of much earlier research, yet people’s reactions to these machines suggest they are less concerned with moral behaviour when engaging with technology (Giroux et al. 2022). Consumers have questioned the lack of human interaction that might result in problematic moral acts, notwithstanding the fact that some firms employ AI irresponsibly. With AI-based services, or services that have AI as an enabler, non-monetary sacrifices are also often made, including the absence of human interaction, which may lead to social isolation and can influence the customer experience (Davenport et al. 2020). These sacrifices can be viewed as a loss of human control (Murphy 2017), given the need for more personal data to operate seamlessly and the high level of coordination required in the social context (Christakis 2019). It was discovered that although people apply social principles to machines, the degree of social and moral norms is not as strong as when humans engage with one another. This contributes to the notion that computers operate as social agents (Giroux et al. 2022). Their study demonstrated that customers’ moral motivation to disclose faults is strengthened when they regard AI as more human.

2.5. AI Customisation

In AI personalisation, the seamless delivery of tailored experiences to clients is revolutionary (Fan et al. 2022). When users are familiar with the role of AI in the personalisation process, they are provided with highly relevant recommended materials. Moreover, by using natural language and technology to emphasise services that help people, AI may provide information more quickly and accurately (Brill et al. 2019; Baabdullah et al. 2022). Because the end user is supplied with personalised material based on their consumption behaviour, the integration of customer personalisation and customisation is seen as a successful way to enhance the full suite of products (Pappas et al. 2017). Using the individual’s profile, products and service offerings are made individual-focused, because AI algorithms decide the why, when, what, and how of approaching customers (Zanker et al. 2019). Both AI and machine learning enhance the brand’s image and delivery of services using customer profiling tools (O’Riordan 2019). According to Lambillotte et al. (2022), personalised content may provide subjective and enjoyable customer experiences, whereas generic or non-personalised information can result in what can be described as a factory-like experience devoid of a subjective element. Furthermore, it has also been shown that personalised content helps ensure that visitors stay longer and eventually buy the specific product that meets their needs. This is the outcome of recent technology that makes judgements through the amalgamation of AI and consumer judgment, drawing on consumer data to find patterns and reach decisions (Klaus and Zaichkowsky 2020). Thus, with modern technology, businesses can use consumer personalisation that offers a more individualised experience and is both perceived as and, in reality, personalised, as stated by Lambillotte et al. (2022). However, recent regulatory measures such as the EU’s General Data Protection Regulation (GDPR) have brought to light the need for discussion on the ethical aspects of AI and the levels of transparency needed to make algorithm-based decision making visible. The level of compliance needed to meet these regulatory requirements is a challenge for industries and brands that rely on personalisation.
There may be differences between the real and the perceived qualities of customisation. Such a situation may occur, for example, if a message that was originally personalised is interpreted as generalised or non-personalised; the opposite may also occur. Content is sometimes interpreted differently when presented at different times, through different channels, and in different forms; thus, customer customisation can also mean several things (Li 2016). For instance, a message sent through social mass media might be perceived differently from the same one delivered through e-mail. Therefore, firms have to take the contextual situation and the distribution channel into consideration when developing a consumer customisation strategy.

2.6. Trust in AI Customisation

As new forms of customised features and AI-enabled services emerge, the user’s experience is sometimes compromised or sacrificed. Butt et al.’s (2021) research supports the observation that consumers are initially apprehensive about the benefits that AI-enabled services bring, but eventually become more willing to adopt new technology that will improve the quality of their lives. It has also been demonstrated that optimism increases with increasing use of AI-integrated services. Important characteristics include human-centricity (Butt et al. 2021) and human-likeness, because they can have a strong influence on customers’ tendency to adopt AI-enabled services. Organisations that account for these characteristics can end up developing better AI-based services than those performing poorly. The service has to satisfy the demands and desires of end users by taking their perspectives into consideration. It is necessary to consider the benefits of AI and its provision of rational optimisation tools to enhance efficacy, team building, and the fulfilment of client desires (Xu et al. 2020). However, this may differ across businesses and depend on how clients see AI.
Trust is a key factor in the acceptance of AI, and trust from the customer base is an important consideration for improving the overall customer experience. Nagy and Hajdú (2021) emphasised that if this trust is not upheld, internet traffic will decline. If the AI-enabled service is personalised in terms of the user interface, content, and interaction process, then it is crucial that customers see it as trustworthy and safe and that the sacrifices they make feel less burdensome (Ameen et al. 2021). Since customer trust plays a key part in the consumer experience, it is imperative to consider it while delivering AI-enabled services and AI personalisation (Ameen et al. 2021; Nagy and Hajdú 2021). Highly personalised experiences can make customers feel that they are sacrificing a great deal (data privacy, control, etc.). In the study by Ameen et al. (2021) on AI-based shopping experiences, people sacrificed much in terms of loss of human interaction, privacy, time consumption, and loss of data control due to AI-enabled services. On the other hand, the same study showed how trust is built between AI and customers when AI provides high-quality services that are personalised and delivered at the customers’ convenience (Ameen et al. 2021). In order to provide customer assistance, the business must be open and honest about how it uses customer data analysis, since this also has an impact on the consumer’s convenience and level of service (Ameen et al. 2021).
According to Zerilli et al. (2022), trust is a subjective attitude that permits individuals to take risky actions. Customers who have faith in technology are able to believe that using a device will produce the desired result, such as effectively navigating to a restaurant by using Google Maps for directions (Chang et al. 2017). Trust is a critical component in forecasting user behaviour in technology adoption models. Extending the TAM, Choung et al. (2022) found that trust positively predicts perceived usefulness (PU). Further research discovered that the greatest indicator of behavioural intentions to use AI for iris scanning was trust, which lessened the effect of PU (Miltgen et al. 2013). Therefore, a crucial element in the adoption of AI is trust in the technology provider and in AI itself. Using these studies as a base, the questionnaire was designed to capture responses from respondents across multiple facets. This is further explained in the next section.

3. Materials and Methods

Building on earlier research (Gerlich 2023a), this study examined why the majority of participants, all from the UK, trust AI more than humans. Given the dearth of prior research, an inductive approach and an exploratory design were deemed the most suitable for this research.
The researcher’s aim in this study was to gain a deeper understanding of customer trust in personalised AI services and to identify themes and patterns that can help build a conceptual framework on trust and AI. As no theoretical framework was employed, an inductive approach was chosen. It was assumed that research-based insights would yield a comprehensive understanding of consumers’ opinions (Bell et al. 2019). Furthermore, input from customers gives an insight into the social reality surrounding the participants (Bell et al. 2019). The inductive approach allows researchers to gain deeper knowledge of consumer behaviour and to derive conclusions from actual data, instead of relying only on pre-existing conceptions (Hackett 2016).
To conduct the data analysis, a hybrid method involving multiple steps was utilised. The first step involved designing a questionnaire to capture multiple facets of trust, such as its psychological, social, and technical components. This questionnaire also collected demographic data, including age, gender, education, and income. The questionnaire was designed to investigate the nuances of trust in AI, particularly in the context of experiences with standard computing tools, perceptions of neutrality and bias, and the influence of marketing, influencers, and political discourse. This broadened questionnaire-based approach has several advantages:
  • A variety of opinions from all parties involved, along with their underlying assumptions and influences, were gathered. Such diverse data are a distinctive strength of this method and were crucial for understanding both faith in and scepticism about AI.
  • Engaging a broader range of participants ensured that the research captured not only direct interaction with AI but also the ways participants’ opinions are shaped by advertisements, societal beliefs, and personal values.
  • The questionnaire’s extensive range of questions enabled the identification of multiple themes. A larger sample size made it possible to determine the degree to which different groups trust AI. These patterns may be examined using a variety of variables, including age, education level, cultural background, and prior experience with technology.
  • The flexibility of comparative research is another benefit of this method. The range of participants spanned from complete sceptics to strong proponents of AI. This kind of study can help in understanding why certain groups or people seem to trust AI more than others.
  • This method also offers the option of maximising the generalisability of the data across groups by increasing the sample size. Such considerations are particularly crucial for research aiming to understand public attitudes and opinions. While the research could have focused only on those who trust AI to understand their unique motivators, integrating the broader public gives a more thorough insight into society’s perceptions of AI, including why some people may be hesitant to trust it. This comprehensive approach is critical for gaining a holistic understanding of public confidence in AI technology.
In analysing views and trust in AI, this study naturally focused on those who have strong feelings about AI. The nature of convenience and voluntary sampling, which is sometimes criticised for recruiting people at opposite extremes of the opinion spectrum, is especially useful in this situation. As observed by Etikan et al. (2016), convenience sampling is useful for studying certain features or viewpoints within a community, particularly when these qualities are not equally distributed.
Furthermore, the argument that such sampling methods may not provide a representative cross-section of the broader population (Bryman 2012) is less relevant in this scenario. The goal here is not to generalise to the entire population but rather to obtain a better understanding of the motivators and deterrents of faith in AI among people who have developed differing perspectives. For this, strong opinions are more than just statistical outliers; they are important to the prime objective of the study. Bryman (2012) agrees that, while convenience sampling has drawbacks in terms of representativeness, it delivers useful insights when investigating specific, well-defined phenomena.
This method is consistent with this research’s exploratory nature, since it tries to dive into the complexities of trust in AI, a topic where neutrality or indifference is less illuminating than definite viewpoints. As a result, collecting strong opinions, positive or negative, becomes critical. This methodological decision is backed by the work of Etikan et al. (2016), who argue that when a study focuses on certain attitudes or views, the benefits of convenience sampling in reaching these specific sectors of the population exceed its constraints. While admitting the inherent flaws of convenience and voluntary sampling, this study uses these biases to its advantage by explicitly targeting individuals with well-defined views on AI. This methodological choice allows for a more concentrated investigation into the underlying causes of trust in AI, which contributes greatly to our knowledge of this complicated and multifaceted problem.

3.1. Sampling

For this study, non-probability sampling, specifically purposive sampling combined with voluntary sampling, was applied. Purposive sampling involves selecting a sample based on the judgment of the researcher, aiming to include participants who are deemed representative of a larger group—in this case, the academic environment of business schools. Purposive sampling allows the researcher to focus on specific characteristics of a population that are pertinent to the research question. For example, selecting business schools as the setting and including both students and faculty ensures that the sample reflects a diversity of perspectives within the academic community and makes it highly probable that the participants have working experience with AI, which is essential for this study. While voluntary sampling often faces criticism due to its susceptibility to self-selection bias, where only individuals with a strong interest in the topic—representing either extreme enthusiasm or significant disapproval—typically choose to participate, this characteristic does not detract from the research objectives in this case. Rather, it aligns closely with the study’s purpose, which explicitly focuses on individuals who possess direct experience with the subject matter. This experience is fundamental to the research questions being addressed, as it ensures that the responses reflect the insights of those who are most engaged with and affected by the topic under investigation. Consequently, the utilisation of voluntary sampling in this context not only facilitates the collection of pertinent data but also enhances the depth and relevance of the findings by focusing on a segment of the population that is intrinsically motivated to provide meaningful and informed responses.
This approach, while useful in exploratory phases of research where specific populations are of interest, does not allow for generalisations to the wider population with the same degree of confidence as probability sampling methods, where each member of the population has a known chance of being selected. This should be considered when interpreting the research findings, especially in terms of their applicability and transferability to other contexts.

3.2. Participant Selection

In November 2023, the survey questionnaire was distributed to nearly 1000 participants in the UK. The participants were students and faculty members from five UK business schools. Business school members were chosen because their probability of interaction with AI is high (Gerlich 2023a). Those interested in the study were given further information, were guaranteed data protection, and were asked for their consent. Only participants with a valid university/business school email account received access to the survey platform (https://www.surveyhero.com/, accessed on 26 February 2024).

3.3. Data Collection

A total of 894 replies were received, of which 451 were valid (all questions answered). The survey included 33 questions, of which 10 were open questions and 23 were closed questions with a 6-step Likert-scale response system.

3.4. Model

The participant survey data were analysed using the structural equation modelling (SEM) approach. SEM is a statistical technique for evaluating structural models with latent variables (Meyers et al. 2013). It may be used to analyse two different types of models: measurement models and structural models. The measurement model assesses how well the observed relationships between variables match the predicted associations, while, as Meyers et al. (2013) note, the structural model can be implemented independently of the measurement model. If the observed data match the hypothesised model, the hypothesised model can be considered a useful description of the data. The main emphasis of this work was the structural model; therefore, SEM was chosen for this study and was used to assess the current study’s hypothesis. Additionally, several correlation analyses were conducted to identify important interdependencies between certain variables.
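To make this step concrete, the following minimal sketch shows how a structural model of this kind could be specified and estimated in Python. It is an illustration only: the study itself used GSCA software, whereas the sketch assumes the open-source semopy package, and the variable names (messaging, experiences, analytics, perception, trust) are hypothetical composite scores rather than the author’s actual data columns.

# Minimal sketch of a structural model linking survey factors to trust in AI.
# Assumption: each column holds a composite Likert score; the study itself used
# GSCA (ver. 1.2.1), while this sketch uses the semopy package instead.
import pandas as pd
import semopy

data = pd.read_csv("survey_responses.csv")  # hypothetical file name

model_desc = """
perception ~ messaging + experiences + analytics
trust ~ perception
"""

model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())           # path coefficients and their significance
print(semopy.calc_stats(model))  # overall fit statistics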

3.5. Explanation of Factors

As previously discussed, research shows that, while AI is now better at predicting outcomes than humans (e.g., weather reports, decision options), there are still cases where people prefer humans to make forecasts and predictions, a phenomenon known as algorithm aversion as was explained by Dietvorst et al. (2015). Another line of study found that individuals rely more on algorithm-driven AI than human guidance, i.e., algorithm appreciation (Helberger et al. 2020; Araujo et al. 2020; Logg et al. 2019). Such algorithmic appreciation may be driven by machine heuristics (Sundar and Kim 2019), but other variables may contribute to these positive evaluations. Among the many aspects that may influence consumers’ perceptions of AI adoption, recent studies have expressed an interest in the function of explainability. Recently, researchers have begun to investigate if and how explainability and the external environment surrounding AI might impact perceptions of justice, trust, and acceptance of AI choices. Thus, discussing the decision-making process increases transparency, which might result in varying opinions of subjective justice. Nonetheless, it is unclear how these views would emerge in the sphere of education, and if algorithmic appreciation can be anticipated if an algorithm takes over human activities such as test marking. Until recently, few studies have evaluated the potential role of AI in education (Chen et al. 2020; Murphy 2017), but they mostly sketch and hypothesise on what AI may provide for future education. It is uncertain whether algorithmic decision makers are seen to be on par with or even more fair than humans, and whether these judgements are impacted by machine heuristics and explanations. To dive deeper into these perceptions, the elements that shape them were investigated. An illustrated model of the elements that influence these impressions is shown in Figure 1.
Using these categories as the basis, a set of 33 questions was developed, framed to address the respective categories. A total of 20 quantitative questions were assessed on a 6-step Likert scale; 3 demographic questions and 10 qualitative questions were also included in the questionnaire. These questions were an attempt to measure AI-related impact and perception development among people, touching on multiple subfactors that contribute directly to main factors, such as messaging, that have an impact on perception development. For example, messaging is a main factor, but the subfactors, or channels, that contribute to it are messaging by marketing teams, content shared by influencers, and what politicians (prominent people within society) and management say about the product or service. Gauging the impact of these subfactors leads to an assessment of the messaging impact. The other questions were designed similarly and are tabulated in Table 1.
Please refer to Appendix A for the detailed questionnaire. The entire analysis was designed around the responses to the subfactors that contribute to the main factors that help in gauging the perception.
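As an illustration of how the subfactor responses could be aggregated into the main factors before modelling, the short sketch below averages the Likert items belonging to each factor into one composite score per respondent. The item column names are hypothetical placeholders; the actual item-to-factor mapping is the one given in Table 1 and Appendix A.

# Sketch: aggregating 6-step Likert items into composite main-factor scores.
# Column names are hypothetical placeholders for the items in Appendix A.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

factor_items = {
    "messaging":   ["msg_marketing", "msg_influencers", "msg_politicians"],
    "experiences": ["exp_quality_of_life", "exp_past_technology"],
    "analytics":   ["ana_objectivity", "ana_decisions_without_humans"],
}

for factor, items in factor_items.items():
    # The mean of the subfactor items gives one observed score per main factor.
    responses[factor] = responses[items].mean(axis=1)

print(responses[list(factor_items)].describe())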

4. Results and Discussion

This study set out to identify the variables that establish trust in AI and support its further adoption in society. To test the hypotheses and answer the research question (what are the motivators for trust in AI and distrust in humans?), a between-subject design was used with two decision makers (human and AI) and three components (messaging, experiences, and analytics). A total of 451 individuals (50.8% female, 49.2% male) were selected from a pool of over 800 respondents based on survey questionnaire completion. Participants were 18 years old or older, with nearly equal percentages of respondents in each age category. Similarly, there was a practically equal distribution of education levels among participants, ranging from high school to doctorate, with vocationally trained and bachelor’s-educated respondents being the most numerous. Based on the responses, it may reasonably be assumed that there is no skew in the data with respect to demographic characteristics.
The next step in the methodology was to perform an SEM analysis using the relational structure between the factors described above. The analysis rests on the assumption that the creation of a positive perception of AI leads to the development of trust in it among individuals.
Using GSCA software (ver. 1.2.1), SEM was carried out on the entire data set as detailed in the methodology section. The following are the test results:
H0. 
The proposed model fits the data well, and a number of variables have a direct bearing on people’s degree of confidence in AI.
H1. 
The degree of confidence in AI technology is not affected by a variety of circumstances, and the proposed model is not a suitable fit.
The amount of variation across all variables (indicators and components) that a given model specification can account for is referred to as FIT. FIT values (Table 2) range from 0 to 1, similar to R-squared in linear regression. The value increases with the share of variance in the variables accounted for by the specified model. For example, FIT = 0.50 means that the model accounts for half of the variation in all variables. There are no uniform FIT criteria for evaluating a good fit; as a result, this parameter alone is insufficient to draw conclusions about the model’s quality of fit. However, when paired with the mean squared residuals, it can give a clearer verdict on the goodness of fit of the model.
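For reference, the behaviour described above (e.g., FIT = 0.50 meaning that half of the variance is explained) follows from a definition of the form

\[
\mathrm{FIT} \;=\; 1 \;-\; \frac{\sum \mathrm{SS}_{\mathrm{residual}}}{\sum \mathrm{SS}_{\mathrm{total}}},
\]

where the sums of squares run over all indicators and components in the model. This is a sketch of the usual GSCA convention rather than the exact expression implemented in the software, and it shows why FIT behaves like an R-squared aggregated across all variables.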
The SRMR (standardised root-mean-squared residual) and GFI (goodness-of-fit index) are proportional to the difference between the sample covariances and the covariances reproduced by the GSCA parameter estimates. Following current recommendations, the GFI and SRMR cutoff criteria in GSCA should be used as a general guideline (Cho et al. 2020).
When the sample size is 100, a GFI of 0.89 or higher and an SRMR of less than 0.14 indicate a satisfactory fit. While either index can be used to assess model fit, the SRMR with the previously specified cutoff value may be preferred over the GFI with its proposed cutoff value. Furthermore, a GFI somewhat below this cutoff (around 0.85) may still indicate a solid fit if the SRMR is less than 0.09.
When the sample size exceeds 100, a GFI greater than 0.93 and an SRMR of less than 0.12 indicate a good fit. In this case, there is no preference for utilising a combination of the indexes over using each one alone, or for using one index over another. The recommended cutoff values for each indicator can be used separately to assess the model’s fit.
Based on the parameter values obtained, it can be stated that the subfactors contributing to each main factor are a good fit for defining that factor, and that the relational model used for the study is a good fit for the purpose of the analysis. In simpler terms, the factors of messaging, experiences, and use of data analytics help develop the perception of AI, which in turn creates trust in AI depending on whether the perception is positive or negative. Therefore, the null hypothesis is retained. In our case, the factors of experiences and the use of data analytics help develop a positive perception, thus aiding trust, while messaging by various stakeholders is detrimental to positive perception, thereby reducing trust. Similar inferences can be validated using correlation analysis.
As seen from the correlation analysis (Table 3), experiences and analytics have a positive impact on the perception of AI, whereas messaging has a negative impact. Read together with the questions about human bias and machine objectivity, it can be inferred that people perceive an element of bias and discretion in the messages conveyed by various parties. These discretions can be motivated by economic, financial, personal, or environmental reasons. Therefore, consumers would like an unbiased opinion or decision, which can be given by AI. This observation is further validated by the open-ended question of why AI is trustworthy, to which respondents gave the following answers:
As a machine, it has no interest (41% of responses);
It can neutrally evaluate situations (61% of responses);
AI will be smarter than humans (38% of responses);
It combines all available knowledge, unlike individual humans, who use their knowledge only partially and often for their own benefit (21% of responses).
When asked about the factors that contribute to the trustworthiness of AI, the neutral and logical approach was the second most frequent answer, after the depth of knowledge used by AI.
When a straightforward question was asked, “What are the main reasons for your mistrust in human statements?”, the majority of responses mentioned reasons like the following:
Humans have self-interest that they prioritise (49% of responses);
Humans lie most of the time (37% of responses);
One cannot trust the media anymore (52% of responses).
Further, the analysis of whether respondents trust AI more than humans (Figure 2) revealed that the majority (67%) overall trust AI more than humans (33%).
The Bayesian Binomial Test (Table 4) confirms the trust levels.
Bayesian Binomial Test Interpretation:
  • Trust Level 1 (Full Trust in AI): Proportion: 0.235 indicates that 23.5% of the participants have a full level of trust in AI over humans. BF10: 4.568 × 10^27 suggests extremely strong evidence against the null hypothesis (that the true proportion is 0.5). This indicates that the level of full trust is significantly less than the neutral expectation but notably strong among certain segments of the sample.
  • Trust Level 2 (High Trust in AI): Proportion: 0.439, where 43.9% of respondents show high trust in AI. BF10: 1.687 provides moderate evidence against the null hypothesis. This value indicates that high trust in AI, while not reaching the neutral expectation of 0.5, is relatively substantial.
  • Trust Level 5 (Low Trust in AI): Proportion: 0.038, where only 3.8% of participants display a low level of trust in AI. BF10: 4.699 × 10^102 provides extraordinarily strong evidence against the null hypothesis. This extremely high Bayes factor indicates that very few respondents have low trust in AI, which is significantly below what would be expected if opinions were neutral.
  • Trust Level 6 (No Trust in AI): Proportion: 0.288 indicates that 28.8% of respondents have no trust in AI at all. BF10: 7.229 × 10^16 suggests very strong evidence against the null hypothesis. This indicates that a significant minority of the population explicitly distrusts AI compared to humans, more than expected under a neutral scenario.
The Bayesian analysis reveals substantial deviations from neutral trust in AI. The proportions at each trust level are significantly different from 0.5, indicating that attitudes towards AI are polarised. Most notably, a considerable number of respondents either fully trust or have high trust in AI (levels 1 and 2), suggesting a positive disposition towards AI among a majority of the sample. Conversely, a significant minority exhibits no trust (level 6), with very few respondents falling into the low-trust category (level 5).
These findings suggest that while there is a significant inclination towards trusting AI among the surveyed population, there remains a notable segment that holds no trust at all, highlighting critical areas for further exploration into the underlying reasons for distrust or conditional trust in AI. This polarised trust distribution could have implications for the adoption and integration of AI technologies in contexts where trust is a pivotal factor.
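The Bayes factors reported in Table 4 can, in principle, be reproduced with a standard Bayesian binomial test that compares the null value of 0.5 against a Beta prior on the proportion. The sketch below assumes a uniform Beta(1, 1) prior and recovers approximate counts from the reported proportions and the sample of 451; both are assumptions made for illustration, not a description of the exact software settings used.

# Sketch: Bayes factor BF10 for a binomial proportion against H0: theta = 0.5,
# assuming a uniform Beta(1, 1) prior on theta under H1.
from math import lgamma, log, exp

def betaln(a: float, b: float) -> float:
    # Log of the Beta function B(a, b).
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10_binomial(k: int, n: int, a: float = 1.0, b: float = 1.0) -> float:
    # BF10 = p(k | H1) / p(k | H0); the binomial coefficient cancels out.
    log_m1 = betaln(k + a, n - k + b) - betaln(a, b)  # marginal likelihood, H1
    log_m0 = n * log(0.5)                             # likelihood under H0
    return exp(log_m1 - log_m0)

n = 451
for label, proportion in [("Full trust", 0.235), ("High trust", 0.439),
                          ("Low trust", 0.038), ("No trust", 0.288)]:
    k = round(proportion * n)  # approximate count recovered from the proportion
    print(f"{label}: k = {k}, BF10 = {bf10_binomial(k, n):.3e}")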
When further analysing the correlation between trust in AI and distrust in humans (Table 5), we notice a very strong correlation between the two variables. This means that participants who trust AI also distrust humans, and vice versa.
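For completeness, an association of this kind can be checked with a rank correlation, which is appropriate for ordinal Likert responses. The sketch below assumes hypothetical column names and uses Spearman’s coefficient; the article does not state which correlation statistic underlies Table 5, so this is an illustrative choice only.

# Sketch: rank correlation between a trust-in-AI item and a distrust-in-humans
# item. Column names are hypothetical; Spearman's rho is an assumed choice
# because the responses are ordinal 6-step Likert ratings.
import pandas as pd
from scipy.stats import spearmanr

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

rho, p_value = spearmanr(responses["trust_in_ai"],
                         responses["distrust_in_humans"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")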
The findings from this study suggest that consumers perceive artificial intelligence (AI) as a preferable alternative to human discretion and influence, which are often perceived as motivated by self-interest. This inference draws significantly on the patterns observed in the collected responses, which elucidate the underlying reasons for this trust in technology.
Firstly, the study indicates that individuals tend to trust AI more than humans due to their belief in the inherent impartiality of technology. Unlike humans, technology is not driven by personal interests, which appears to resonate with users based on their prior experiences with technological applications. This trust is further bolstered by the perception that technology operates on an objective basis, devoid of personal biases.
Secondly, the distrust in humans is accentuated by the scepticism towards politicians, public figures, and statements made on social media platforms, which are frequently considered unreliable. The common perception is that such communications are often tainted by personal agendas, whether to sell a product or to manipulate public opinion for personal gain.
Moreover, this research highlights that positive experiences with technology enhance the overall perception of AI-related decision making. The data-driven nature of AI, coupled with its capacity to make decisions based on analytics rather than emotion or personal interest, engenders a higher level of trust. This objectivity is viewed as contributing to a more reliable decision-making process, which in turn fosters greater confidence in AI systems over traditional human-driven methods.
While it is acknowledged that negative experiences, such as encounters with fraud or the misuse of technology, can adversely affect perceptions of AI, these incidents have been systematically incorporated into the overall evaluation framework through the use of the questionnaire. This methodological approach allows for a balanced assessment of AI, taking into account both its advantages and potential pitfalls.

5. Conclusions

Artificial intelligence is increasingly becoming a staple in everyday life and the workplace, driving continuous innovation that is reshaping how work is performed and services are delivered. Technologies like ChatGPT and other generative AI platforms are exerting a substantial impact across various industries, prompting increased investment due to the significant benefits they offer to individuals, companies, and society at large. Organisations deploy AI for myriad purposes, including enhancing predictive accuracy, optimising products and services, fostering innovation, improving productivity and efficiency, and reducing costs. However, the deployment of AI is not without risks and challenges, central among which is the issue of trust in the reliability of AI systems, which are fundamentally composed of data, algorithms, and software applications. For AI to be widely adopted across diverse industries and service sectors, and for its benefits to be fully realised, gaining universal acceptance and trust is imperative.
In this article, we explore perceptions, attitudes, and trust towards AI among people in the UK. We collected valid data from 451 participants across five business schools, employing a mixed-methods approach to capture diverse user themes and patterns. This empirical study also delves into participants’ optimism towards technology, encompassing biases, misconceptions, risks, opportunities concerning AI, and their confidence in navigating uncertain environments. Despite extensive research into public perceptions of AI, individual attitudes and viewpoints have often been overlooked due to concerns about the generalisability of findings. Nevertheless, these individual perceptions are crucial for understanding technology adoption, particularly when it involves innovative solutions like AI.
Our findings reveal that the majority of participants, who are generally well educated and frequently interact with AI, report predominantly positive experiences with AI services. These positive interactions significantly contribute to their trust in AI, which they associate with the reliability and neutrality historically linked to computer technology. Participants expressed a strong belief that technology operates without self-interest—unlike humans, who may be biased and therefore less trustworthy.
Another significant insight from this study is that customised or tailored content enhances trust in AI, as the material produced meets user expectations. This trust extends beyond the service and its usability to the perceived benefits of AI personalisation chosen by the users. The data also suggest that customer scepticism towards AI customisation stems from various challenges, including previous negative experiences that have fostered mistrust towards human influencers. Influencers, social media users, and public figures often appear to mislead, lie, and prioritise their own interests. This widespread distrust extends to politicians and public figures whose statements on social media are generally viewed with suspicion, as individuals frequently manipulate information to further their agendas or sell products. This has heightened general concerns about the integrity of human-driven communication channels. On social media, users often consume information passively without critical analysis, leading to an overconsumption of potentially false or misleading content. Consequently, the reliability of information from public authorities and the media has become increasingly critical, yet increasingly doubted.
This study also highlights a significant shift among the population towards technology as a more trustworthy alternative, given their disenchantment with traditional human-driven information sources, except for close personal relationships like friends and family. AI is favoured for its ability to circumvent human arbitrariness and self-centeredness, offering fair and impartial decision making. However, factors such as AI personalisation accuracy, data integrity, and cybersecurity are critical and can negatively impact trust levels.
While the benefits of AI are clear, they cannot be universally applied to satisfy the objectives and desires of all stakeholders, including businesses, influencers, and politicians, who often have financial interests at stake. Nevertheless, it is reasonable to suggest that businesses might leverage AI to enhance their revenues by controlling the messaging and underlying algorithms. Furthermore, the future of AI also raises concerns among consumers, particularly regarding job security, which influences their willingness to engage with AI services. Security concerns regarding the processing of their data and other associated risks make consumers cautious. Addressing these concerns is essential as we continue to expand the integration of AI in various sectors.
The research findings elucidate significant implications for businesses and society at large, underscoring the transformative potential of AI in reshaping interactions and trust dynamics. For businesses, the enhanced trust in AI, spurred by positive experiences and the perception of AI’s impartiality, suggests an opportunity to integrate AI solutions more deeply into operational and customer service strategies. This can lead to improved efficiency, heightened innovation, and a competitive edge in markets where trust equates to consumer loyalty and brand strength. Moreover, the demand for personalised AI-driven services indicates a shift in consumer expectations, which businesses can leverage to deliver more tailored and engaging user experiences, thereby increasing customer satisfaction and retention. The increased use of virtual influencers seems to also be advisable.
On a societal level, the findings highlight a growing reliance on technology as a more trustworthy alternative to traditional human-driven information sources, which are often perceived as biased or self-interested. This trust in technology could lead to a greater acceptance of AI in critical decision-making roles within public and private sectors, potentially enhancing transparency and fairness in processes typically vulnerable to human error or corruption. However, the concerns about job security and data privacy associated with AI adoption also call for robust ethical frameworks and regulatory measures to ensure that the integration of AI technologies protects individuals’ interests and promotes a fair and equitable society. Thus, this research underscores the need for a balanced approach to AI implementation, one that maximises its benefits while addressing the ethical, legal, and social challenges it poses.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of SBS Swiss Business School (protocol code EC23/FR12, 19 July 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Supporting data can be requested from the author.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

The questionnaire and the variables it represents:
Demographics
1. What is your age?
  • 18–24
  • 25–34
  • 35–44
  • 45–54
  • 55–64
  • Above 64
2. What is your gender?
  • Female
  • Male
  • Other (specify)
3. What is your education?
  • High school (freshmen)
  • Trade/vocational/technical
  • Bachelors
  • Masters
  • Doctorate
4. General Trust in Technology:
  • How much do you trust technology to improve your quality of life? (1 = No trust at all, 6 = Complete trust)
  • To what extent do you believe technology has a positive impact on society? (1 = Strongly disagree, 6 = Strongly agree)
5. Specific Trust in AI:
  • What aspects of AI do you find most trustworthy? (Open-ended)
  • Rate your trust in AI to make decisions without human intervention. (1 = No trust at all, 6 = Complete trust)
6. Comparative Trust (AI vs. Humans):
  • In what situations do you trust AI more than human judgment? (Open-ended)
  • Do you perceive AI as more objective than human decision-making? (1 = Strongly disagree, 6 = Strongly agree)
7. Sociocultural Influences:
  • Does your cultural background influence your trust in AI? (1 = Not at all, 6 = Significantly)
  • How do societal norms and values shape your perception of AI? (Open-ended)
8. Psychological Aspects of Trust:
  • Do personal experiences with technology affect your trust in AI? (1 = Not at all, 6 = Significantly)
  • To what degree do media portrayals of AI impact your trust in it? (1 = No impact, 6 = Major impact)
9. Risk Perception:
  • What are your primary concerns about trusting AI? (Open-ended)
  • Rate your level of concern about potential misuse of AI. (1 = No concern, 6 = Extremely concerned)
10. Perceived Benefits of AI:
  • What benefits of AI do you think contribute to its trustworthiness? (Open-ended)
  • How does the potential of AI in solving complex problems influence your trust in it? (1 = Not at all, 6 = Significantly)
11. Ethical Considerations:
  • How do ethical considerations around AI affect your trust in it? (1 = No effect, 6 = Significant effect)
  • Rate your agreement: “AI can be trusted to act ethically”. (1 = Strongly disagree, 6 = Strongly agree)
12. Future Orientation:
  • How optimistic are you about the future developments of AI? (1 = Very pessimistic, 6 = Very optimistic)
  • What is your perception of the long-term implications of trusting AI? (Open-ended)
13. Experiences with Standard Computing Tools:
  • Describe your level of trust in standard computing tools (e.g., software) for accuracy and reliability. (1 = No trust at all, 6 = Complete trust)
  • Have your experiences with standard computing tools influenced your trust in AI? Please explain. (Open-ended)
14. Perception of Neutrality and Bias in AI:
  • To what extent do you believe AI is free from biases compared to human beings? (1 = Strongly disagree, 6 = Strongly agree)
  • In your opinion, how does the perceived neutrality of computers influence your trust in AI? (Open-ended)
15. Influence of Marketing and Influencers:
  • How do marketing and the role of influencers affect your trust in human statements? (1 = No effect, 6 = Significant effect)
  • Compare your trust in information disseminated by AI vs. that shared by human influencers. (Open-ended)
16. Political Factors and Trust in Human Statements:
  • Rate your level of trust in statements made by political figures. (1 = No trust at all, 6 = Complete trust)
  • How do political factors influence your trust in human statements versus statements made by AI? (Open-ended)
17. Projection of Computer Experiences onto AI:
  • To what degree do you think your experiences with computers (like using Excel) shape your expectations and trust in AI? (1 = Not at all, 6 = Significantly)
  • In what ways do you differentiate between your experiences with traditional software and AI systems? (Open-ended)
18. Mistrust in Human Statements:
  • What are the main reasons for your mistrust in human statements (if any)? (Open-ended)
  • Compare your trust in data or information provided by AI systems versus human sources. (1 = Always trust AI more, 6 = Always trust humans more)

References

  1. Ameen, Nisreen, Ali Tarhini, Alexander Reppel, and Amitabh Anand. 2021. Customer experiences in the age of AI. Computers in Human Behavior 114: 106548. [Google Scholar] [CrossRef]
  2. Araujo, Theo, Natali Helberger, Sanne Kruikemeier, and Claes H. de Vreese. 2020. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Soc 35: 611–23. [Google Scholar] [CrossRef]
  3. Baabdullah, Abdullah M., Ali Abdallah Alalwan, Raed S. Algharabat, Bhimaraya Metri, and Nripendra P. Rana. 2022. Virtual agents and flow experience: An empirical examination of AI-powered chatbots. Technological Forecasting and Social Change 181: 121772. [Google Scholar] [CrossRef]
  4. Bedué, Patrick, and Albrecht Fritzsche. 2021. Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management 35: 530–49. [Google Scholar] [CrossRef]
  5. Bell, Emma, Alan Bryman, and Bill Harley. 2019. Business Research Methods, 5th ed. Oxford: Oxford University Press. [Google Scholar]
  6. Bochniarz, Klaudia T., Stanislaw K. Czerwinski, Artur Sawicki, and Pawel A. Atroszko. 2022. Attitudes to AI among high school students: Understanding distrust towards humans will not help us understand distrust towards AI. Personality and Individual Differences 185: 111299. [Google Scholar] [CrossRef]
  7. Brill, Thomas M., Laura Munoz, and Richard J. Miller. 2019. Siri, Alexa, and other digital assistants: A study of customer satisfaction with AI applications. Journal of Marketing Management 35: 1401–36. [Google Scholar] [CrossRef]
  8. Bryman, Alan. 2012. Social Research Methods. Oxford: Oxford University Press. [Google Scholar]
  9. Bryson, Joanna J. 2018. Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology 20: 15–26. [Google Scholar] [CrossRef]
  10. Butt, Asad H., Hassan Ahmad, Muhammad A. S. Goraya, Muhammad S. Akram, and Muhammad N. Shafique. 2021. Let’s play: Me and my AI-powered avatar as one team. Psychol Mark 38: 1014–25. [Google Scholar] [CrossRef]
  11. Chang, Shuchih E., Anne Y. Liu, and Wei Shen. 2017. User trust in social networking services: A comparison of Facebook and LinkedIn. Computers in Human Behavior 69: 207–17. [Google Scholar] [CrossRef]
  12. Chen, Lija, Pingping Chen, and Zhijan Lin. 2020. Artificial Intelligence in Education: A Review. IEEE Access 8: 75264–78. [Google Scholar] [CrossRef]
  13. Cho, Gyeongcheol, Heungsun Hwang, Marko Sarstedt, and Christian Ringle. 2020. Cutoff criteria for overall model fit indexes in generalized structured component analysis. Journal of Marketing Analytics 8: 189–202. [Google Scholar] [CrossRef]
  14. Choung, Hyesun, Prabu David, and Arun Ross. 2022. Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction 39: 1727–39. [Google Scholar] [CrossRef]
  15. Christakis, Nicholas. 2019. How AI Will Rewire Us. For Better and for Worse, Robots Will Alter Humans’ Capacity for Altruism, Love, and Friendship. Available online: https://www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/ (accessed on 20 February 2024).
  16. Davenport, Thomas, Abhijit Guha, Dhruv Grewal, and Timna Bressgott. 2020. How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science 48: 24–42. [Google Scholar] [CrossRef]
  17. Dietterich, Thomas G., and Eric J. Horvitz. 2015. Rise of concerns about AI: Reflections and directions. Communications of the ACM 58: 38–40. [Google Scholar] [CrossRef]
  18. Dietvorst, Berkeley J., Joseph P. Simmons, and Cade Massey. 2015. Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General 144: 114–26. [Google Scholar] [CrossRef]
  19. Etikan, Iker, Sulaiman A. Musa, and Rukayya S. Alkassim. 2016. Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics 5: 1–4. [Google Scholar] [CrossRef]
  20. Fan, Hua, Bing Han, Wei Gao, and Wenqian Li. 2022. How AI chatbots have reshaped the frontline interface in China: Examining the role of sales–service ambidexterity and the personalization–privacy paradox. International Journal of Emerging Markets 17: 967–86. [Google Scholar] [CrossRef]
  21. Fast, Ethan, and Eric Horvitz. 2017. Long-Term Trends in the Public Perception of AI. Paper presented at Thirty-First AAAI Conference on Artificial Intelligence (AAAI’17), San Francisco, CA, USA, February 4–9; Washington: AAAI Press, pp. 963–69. [Google Scholar] [CrossRef]
  22. Gerlich, Michael. 2023a. Perceptions and Acceptance of AI: A Multi-Dimensional Study. Social Sciences 12: 502. [Google Scholar] [CrossRef]
  23. Gerlich, Michael. 2023b. The Power of Virtual Influencers: Impact on Consumer Behaviour and Attitudes in the Age of AI. Administrative Sciences 13: 178. [Google Scholar] [CrossRef]
  24. Gerlich, Michael, Walaa Elsayed, and Konstantin Sokolovskiy. 2023. Artificial intelligence as toolset for analysis of public opinion and social interaction in marketing: Identification of micro and nano influencers. Frontiers in Communication 8: 1075654. [Google Scholar] [CrossRef]
  25. Giroux, Marilyn, Jungkeun Kim, Jacob C. Lee, and Jongwon Park. 2022. AI and Declined Guilt: Retailing Morality Comparison Between Human and AI. Journal of Business Ethics 178: 1027–41. [Google Scholar] [CrossRef]
  26. Hackett, Paul M. W. 2016. Consumer Psychology: A Study Guide to Qualitative Research Methods. Leverkusen: Barbara Budrich. [Google Scholar]
  27. Helberger, Natali, Jisu Huh, George Milne, Joanna Strycharz, and Hari Sundaram. 2020. Macro and Exogenous Factors in Computational Advertising: Key Issues and New Research Directions. Journal of Advertising 49: 377–93. [Google Scholar] [CrossRef]
  28. Hengstler, Monika, Ellen Enkel, and Selina Duelli. 2016. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change 105: 105–20. [Google Scholar] [CrossRef]
  29. European Commission, Directorate-General for Communications Networks, Content and Technology. 2020. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self Assessment. Brussels: European Commission. Available online: https://data.europa.eu/doi/10.2759/002360 (accessed on 27 February 2024).
  30. Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1: 389–99. [Google Scholar] [CrossRef]
  31. Keiningham, Timothy, Joan Ball, Sabine Benoit, Helen L. Bruce, Alexander Buoye, Julija Dzenkovska, Linda Nasr, Yi-Chun Ou, and Mohamed Zaki. 2017. The interplay of customer experience and commitment. Journal of Services Marketing 31: 148–60. [Google Scholar] [CrossRef]
  32. Kieslich, Kimon, Birte Keller, and Christopher Starke. 2022. AI ethics by design. Evaluating public perception on the importance of ethical design principles of AI. Big Data and Society 9. [Google Scholar] [CrossRef]
  33. Klaus, Phil, and Judy Zaichkowsky. 2020. AI voice bots: A services marketing research agenda. Journal of Services Marketing 34: 389–98. [Google Scholar] [CrossRef]
  34. Klockmann, Victor, Alicia von Schenk, and Marie C. Villeval. 2022. AI, ethics, and intergenerational responsibility. Journal of Economic Behavior and Organization 203: 284–317. [Google Scholar] [CrossRef]
  35. Lambillotte, Laetitia, Nathan Magrofuoco, Ingrid Poncin, and Jean Vanderdonckt. 2022. Enhancing playful customer experience with personalization. Journal of Retailing and Consumer Services 68: 103017. [Google Scholar] [CrossRef]
  36. Li, Cong. 2016. When does web-based personalization really work? The distinction between actual personalization and perceived personalization. Computers in Human Behavior 54: 25–33. [Google Scholar] [CrossRef]
  37. Logg, Jennifer M., Julia Minson, and Don A. Moore. 2019. Algorithm Appreciation: People Prefer Algorithmic To Human Judgment. Organizational Behavior and Human Decision Processes 151: 90–103. [Google Scholar] [CrossRef]
  38. Madhavan, Poornima, and Douglas A. Wiegmann. 2007. Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science 8: 277–301. [Google Scholar] [CrossRef]
  39. McKnight, D. Harrison, Vivek Choudhury, and Charles Kacmar. 2002a. The impact of initial consumer trust on intentions to transact with a web site: A trust building model. The Journal of Strategic Information Systems 11: 297–323. [Google Scholar] [CrossRef]
  40. McKnight, D. Harrison, Vivek Choudhury, and Charles Kacmar. 2002b. Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research 13: 334–59. [Google Scholar] [CrossRef]
  41. Meyers, Lawrence, Glenn Gamst, and Anthony Guarino. 2013. Applied Multivariate Research: Design and Interpretation. Los Angeles: Sage Publications, Inc. [Google Scholar]
  42. Miltgen, Caroline Lancelot, Aleš Popovič, and Tiago Oliveira. 2013. Determinants of end-user acceptance of biometrics: Integrating the “Big 3” of technology acceptance with privacy context. Decision Support Systems 56: 103–14. [Google Scholar] [CrossRef]
  43. Morey, Tim, Theodore Forbath, and Allison Schoop. 2015. Customer Data: Designing for Transparency and Trust. Harvard Business Review. Available online: https://hbr.org/2015/05/customer-data-designing-for-transparency-and-trust (accessed on 2 February 2024).
  44. Mostafa, Rania Badr, and Tamara Kasamani. 2021. Antecedents and consequences of chatbot initial trust. European Journal of Marketing 56: 1748–71. [Google Scholar] [CrossRef]
  45. Murphy, Margi. 2017. A Mind of Its Own Humanity Is Already Losing Control of Artificial Intelligence and It Could Spell Disaster for Our Species, Warn Experts. Available online: https://www.thesun.co.uk/tech/3306890/humanity-is-already-losing-control-of-artificial-intelligence-and-it-could-spell-disaster-for-our-species/ (accessed on 2 February 2024).
  46. Nagy, Szabolcs, and Noémi Hajdú. 2021. Consumer Acceptance of the Use of AI in Online Shopping: Evidence from Hungary. The Amfiteatru Economic Journal 23: 155. [Google Scholar] [CrossRef]
  47. Nienaber, Ann-Marie, and Gerhard Schewe. 2014. Enhancing trust or reducing perceived risk, what matters more when launching a new product? International Journal of Innovation Management 18: 1–24. [Google Scholar] [CrossRef]
  48. O’Riordan, Peter. 2019. Using AI and Personalization to Provide a Complete Brand Experience. Available online: https://www.aithority.com/guest-authors/using-ai-and-personalization-to-provide-a-complete-brand-experience/ (accessed on 2 February 2024).
  49. Pappas, Ilias O., Panos E. Kourouthanassis, Michail N. Giannakos, and George Lekakos. 2017. The interplay of online shopping motivations and experiential factors on personalized e-commerce: A complexity theory approach. Telematics and Informatics 34: 730–42. [Google Scholar] [CrossRef]
  50. Prakash, Ashish V., Arun Joshi, Shuhi Nim, and Saini Das. 2023. Determinants and consequences of trust in AI-based customer service chatbots. The Service Industries Journal 43: 642–75. [Google Scholar] [CrossRef]
  51. Rahaman, Mizanur. 2023. Digital Marketing in the Era of Artificial Intelligence (AI). Available online: https://www.linkedin.com/pulse/digital-marketing-era-ai-artificial-intelligence-mizanur-rahaman (accessed on 2 February 2024).
  52. Rogers, Kristina. 2023. How Consumers Rely on Technology but Don’t Trust It|EY—Global. Available online: https://www.ey.com/en_gl/consumer-products-retail/how-to-serve-consumers-who-rely-on-tech-but-dont-trust-tech (accessed on 2 February 2024).
  53. Ryan, Mark. 2020. In AI we trust: Ethics, artificial intelligence, and reliability. Science and Engineering Ethics 26: 2749–67. [Google Scholar] [CrossRef]
  54. Siau, Keng, and Weiyu Wang. 2018. Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal 31: 47–53. [Google Scholar]
  55. Song, Mengmeng, Xinyu Xing, Yucong Duan, Jason Cohen, and Jian Mou. 2022. Will AI replace human customer service? The impact of communication quality and privacy risks on adoption intention. Journal of Retailing and Consumer Services 66: 102900. [Google Scholar] [CrossRef]
  56. Stai, Bethany, Nick Heller, Sean McSweeney, Jack Rickman, Paul Blake, Ranveer Vasdev, Zach Edgerton, Resha Tejpaul, Matt Peterson, Joel Rosenberg, and et al. 2020. Public perceptions of AI and robotics in medicine. Journal of Endourology 34: 1041–48. [Google Scholar] [CrossRef]
  57. Sundar, Shyam, and Jinyoung Kim. 2019. Machine Heuristic: When We Trust Computers More than Humans with Our Personal Information. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, May 4–9; pp. 1–9. [Google Scholar] [CrossRef]
  58. Unemyr, Magnus, and Martin Wass. 2018. Data-Driven Marketing with AI: Harness the Power of Predictive Marketing and Machine Learning. Jönköping: Independently Published. [Google Scholar]
  59. Van Duin, Stefan, and Naser Bakhshi. 2017. Part 1: AI Defined|Deloitte|Technology Services. Deloitte Sweden. Available online: https://www2.deloitte.com/content/dam/Deloitte/nl/Documents/deloitte-analytics/deloitte-nl-data-analytics-artificial-intelligence-whitepaper-eng.pdf (accessed on 2 February 2024).
  60. Wang, Xuequn, Mina Tajvidi, Xiaolin Lin, and Nick Hajli. 2019. Towards an ethical and trustworthy social commerce community for brand value co-creation: A trust-commitment perspective. Journal of Business Ethics 167: 137–52. [Google Scholar] [CrossRef]
  61. Xu, Yingzi, Chih-Hui Shieh, Patrick van Esch, and I-Ling Ling. 2020. AI customer service: Task complexity, problem-solving ability, and usage intention. Australasian Marketing Journal 28: 189–99. [Google Scholar] [CrossRef]
  62. Yang, Rongbin, and Santoso Wibowo. 2022. User trust in AI: A comprehensive conceptual framework. Electronic Markets 56: 347–69. [Google Scholar] [CrossRef]
  63. Zanker, Markus, Laurens Rook, and Dietmar Jannach. 2019. Measuring the impact of online personalisation: Past, present and future. International Journal of Human-Computer Studies 131: 160–68. [Google Scholar] [CrossRef]
  64. Zerilli, John, Umang Bhatt, and Adrian Weller. 2022. How transparency modulates trust in AI. Patterns 3: 100455. [Google Scholar] [CrossRef]
Figure 1. SEM framework.
Figure 2. Trust in AI versus humans.
Table 1. Factor categorisation.

Main Factor | Perception | Experiences | Messaging | Data Analytics
Subfactor 1 | Quality of Life | Personal Experiences | Marketing | Tools
Subfactor 2 | Positive Impact | Media | Influencers | Data
Subfactor 3 | Decision Approach | Misuse | Political | Objectivity
Subfactor 4 | Potential | Ethical | Office bearers | Biases
Subfactor 5 | Futuristic | Cultural
Table 2. Model fitness.

FIT | GFI | SRMR
0.655 | 0.972 | 0.115
Table 3. Correlation factors.

 | Perception | Experiential | Messaging | Analytics
Perception | 1 | 0.782 | −0.911 | 0.96
Experiential | 0.782 | 1 | −0.849 | 0.826
Messaging | −0.911 | −0.849 | 1 | −0.964
Analytics | 0.96 | 0.826 | −0.964 | 1
Table 4. Bayesian Binomial Test.

Level | Counts | Total | Proportion | BF10
Trust more AI than Humans
1 | 106 | 451 | 0.235 | 4.568 × 10²⁷
2 | 198 | 451 | 0.439 | 1.687
5 | 17 | 451 | 0.038 | 4.699 × 10¹⁰²
6 | 130 | 451 | 0.288 | 7.229 × 10¹⁶
Note. Proportions tested against value of 0.5. The shape of the prior distribution under the alternative hypothesis is specified by Beta(1, 1).
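For readers who wish to check how Bayes factors of this kind arise, the sketch below illustrates the standard Bayesian binomial test against a test value of 0.5 with a Beta(1, 1) prior, as stated in the note to Table 4. It is an illustrative reconstruction rather than the study’s actual analysis script (the function name and layout are ours); the binomial coefficient cancels between the marginal likelihoods under H1 and H0, so only the Beta and Bernoulli terms are needed.

```python
import numpy as np
from scipy.special import betaln

def binomial_bf10(k, n, theta0=0.5, a=1.0, b=1.0):
    """BF10 for a binomial test of k successes out of n against theta0,
    with a Beta(a, b) prior on the proportion under H1."""
    # Log marginal likelihood under H1 (binomial coefficient omitted, as it cancels).
    log_m1 = betaln(k + a, n - k + b) - betaln(a, b)
    # Log likelihood under H0 with the proportion fixed at theta0 (coefficient omitted).
    log_m0 = k * np.log(theta0) + (n - k) * np.log(1.0 - theta0)
    return np.exp(log_m1 - log_m0)

# Counts per response level as reported in Table 4 (total n = 451).
for level, count in [(1, 106), (2, 198), (5, 17), (6, 130)]:
    print(f"Level {level}: BF10 ≈ {binomial_bf10(count, 451):.3e}")
```

Run on the counts in Table 4, this reproduces Bayes factors of the same order of magnitude as those reported (around 10²⁷ for level 1 and roughly 1.7 for level 2).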
Table 5. Pearson’s correlation between trust in AI and distrust in humans.

Variable | Statistic | Trust AI | Distrust Human
1. Trust AI | Pearson’s r | |
 | p-value | |
2. Distrust Human | Pearson’s r | 0.960 |
 | p-value | <0.001 |
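The Pearson correlation reported in Table 5 can likewise be reproduced with standard tooling. The snippet below is a minimal, hypothetical sketch: the file name trust_scores.csv and the column names trust_ai and distrust_human stand in for the aggregated participant scores, which are available from the author on request rather than distributed with the article.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file and column names; the underlying scores are not published with the article.
df = pd.read_csv("trust_scores.csv")          # assumed columns: trust_ai, distrust_human
r, p = pearsonr(df["trust_ai"], df["distrust_human"])
print(f"Pearson's r = {r:.3f}, p = {p:.3g}")  # Table 5 reports r = 0.960, p < 0.001
```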