Article

Public Anxieties About AI: Implications for Corporate Strategy and Societal Impact

Center for Strategic Corporate Foresight and Sustainability, SBS Swiss Business School, 8302 Kloten, Switzerland
Adm. Sci. 2024, 14(11), 288; https://doi.org/10.3390/admsci14110288
Submission received: 14 September 2024 / Revised: 13 October 2024 / Accepted: 2 November 2024 / Published: 5 November 2024

Abstract

This research critically examines the underlying anxieties surrounding artificial intelligence (AI) that are often concealed in public discourse, particularly in the United Kingdom. Despite an initial reluctance to acknowledge AI-related fears in focus groups, where 86% of participants claimed no significant concerns, further exploration through anonymous surveys and interviews uncovered deep anxieties about AI’s impact on job security, data privacy, and ethical governance. The research employed a mixed-methods approach, incorporating focus groups, a survey of 867 participants, and 53 semi-structured interviews to investigate these anxieties in depth. The study identifies key sources of concern, ranging from the fear of job displacement to the opacity of AI systems, particularly in relation to data handling and the control exerted by corporations and governments. The analysis reveals that anxieties are not evenly distributed across demographics but rather shaped by factors such as age, education, and occupation. These findings point to the necessity of addressing these anxieties to foster trust in AI technologies. This study highlights the need for ethical and transparent AI governance, providing critical insights for policymakers and organisations as they navigate the complex socio-technical landscape that AI presents.

1. Introduction

Technological advancement continues to reshape industries globally, with artificial intelligence (AI) standing at the forefront of this transformation. The development of autonomous systems capable of performing complex cognitive tasks is rapidly changing not only business practices but also societal norms. Companies in developed nations are increasingly adopting AI solutions to improve efficiency, reduce costs, and optimise performance. Automation, driven by AI, has begun to redefine the nature of work and decision-making across various sectors. As such, the influence of AI extends beyond the technical realm, impacting the economy, society, and individual lives in unprecedented ways (Marr 2019; Bryson 2019).
Leaders such as Jeff Bezos, Sergey Brin, and Satya Nadella have described AI as ushering in a “golden age” of innovation, tackling problems once relegated to science fiction (Marr 2019). AI’s ability to process vast amounts of data, simulate human decision-making, and optimise operations has created new opportunities across industries, from healthcare to finance (Bryson 2019). However, this enthusiasm is accompanied by growing concerns regarding the social, ethical, and economic implications of AI’s widespread adoption (Smith and Anderson 2014; Winfield 2019).
Globally, AI research has generated mixed reactions, ranging from optimism to apprehension. Ray Kurzweil envisions a future where AI and human intelligence coexist harmoniously to solve global challenges, while others, like Elon Musk and the late Stephen Hawking, have raised concerns about AI’s existential risks (Kurzweil 2005; Barrat 2013). Although these apocalyptic predictions are speculative, more immediate concerns focus on AI’s impact on employment, data privacy, and ethical dilemmas in its implementation. Such concerns are particularly acute in industries where AI-driven automation is replacing jobs, creating uncertainty about the future of human labour (Brynjolfsson and McAfee 2014).
Public perceptions of AI vary significantly across regions and demographic groups, shaping how AI technologies are received and adopted (Yigitcanlar et al. 2024). Job displacement is a key issue, with studies showing that AI’s ability to automate repetitive tasks threatens jobs in sectors reliant on manual or cognitive labour (Autor 2015). Frey and Osborne (2013) estimated that nearly half of all jobs in the United States could be automated within the next few decades, mirroring concerns in the United Kingdom, where industries most vulnerable to AI-driven disruptions must balance efficiency gains against potential job loss (Graetz and Michaels 2015).
Beyond employment, AI also raises significant concerns around data privacy and security. AI systems rely heavily on vast amounts of data, which raises questions about how such data are collected, processed, and stored. Ethical issues surrounding AI include the risk of privacy breaches, data misuse, and biases embedded in algorithms (Floridi 2016; Stahl and Wright 2018). Scholars stress the need for transparent and accountable AI systems to protect user data and ensure fairness (Winfield 2019). As AI becomes more integrated into personal, governmental, and corporate decision-making, the urgency to address these ethical concerns becomes more pressing.
Although much public discourse around AI has focused on its technical and economic implications, there remains a gap in understanding how people experience and express anxieties about AI in their everyday lives. Analysing these concerns offers critical insights into the psychological and social dimensions of AI adoption. Understanding these anxieties is essential for shaping AI’s development in ways that foster trust, transparency, and societal acceptance (Stahl and Wright 2018). Studies on socio-technical imaginaries further illustrate the role of public perception in influencing AI’s implementation across different contexts (Sartori and Bocca 2023).
This study takes a unique approach by uncovering hidden or suppressed anxieties about AI. Preliminary focus group discussions revealed an intriguing pattern: when asked directly, 86% of participants claimed to have no anxieties about AI. This reluctance to express concerns in public, particularly in environments where AI is viewed positively, suggests a form of social desirability bias. However, anonymous surveys revealed significant anxieties about AI, illustrating a contrast between public reticence and private honesty. This divergence highlights deeper societal concerns, which companies and policymakers must address to avoid resistance to AI adoption, mistrust in its applications, or widespread opposition to the technology’s benefits.
This study examines underlying anxieties related to AI in the UK, focusing on issues such as job displacement, data security, and ethical control of AI technologies by businesses and governments. Through comprehensive surveys, focus groups, and semi-structured interviews, the research aims to quantify the level of anxiety and explore demographic variations in these concerns. The insights gained from this research will inform responsible and transparent AI development, ensuring that public concerns are acknowledged and addressed.
Building on the concerns outlined, this study seeks to provide an in-depth understanding of public anxieties related to the rapid development and integration of AI technologies. The research focuses on five central areas:
  • What are the primary sources of anxiety related to AI development, and which areas—such as job loss, data security, or ethical concerns—generate the most public concern?
  • How significant is the anxiety associated with the rapid speed of AI development, particularly in sectors where AI adoption is fast-paced?
  • How do anxieties about job displacement due to AI vary across different demographic groups, such as age, education, and occupation?
  • How do public perceptions of AI’s technological complexity influence anxiety, and do these perceptions differ based on individuals’ levels of technical literacy?
  • To what extent do concerns about the control that businesses, governments, and other institutions have over AI development contribute to public anxieties, particularly in relation to ethical governance and trust?

2. Literature Review

The rapid advancement of artificial intelligence (AI) has prompted extensive academic inquiry into its economic, social, and ethical implications. However, the literature often exhibits a confirmation bias, focusing predominantly on AI’s potential benefits and downplaying or oversimplifying the anxieties that individuals and societies may harbour. Public discussions on social media, particularly on platforms like Twitter, reveal varied opinions on AI, influenced by occupational and usage patterns (Miyazaki et al. 2024). This review critically examines the existing body of research, focusing on job displacement, data privacy, ethical governance, and the underexplored issue of social pressures that prevent individuals from openly discussing their AI-related concerns in public settings.

2.1. Economic Impact and Job Displacement

AI’s potential to disrupt labour markets has been a central theme in the literature for over a decade, with early works such as Brynjolfsson and McAfee’s (2014) predicting that automation would replace millions of jobs, particularly those involving routine, manual tasks. Their work laid the foundation for much of the current discourse, but it also sparked debate about the true scale of AI-driven job displacement. Autor (2015) offers a more tempered view, arguing that while some jobs will indeed disappear, others will be created, particularly in high-skill sectors. This viewpoint reflects the ‘creative destruction’ theory often associated with technological progress, but it has been criticised as overly optimistic and as failing to account for the difficulties workers face in transitioning to new roles.
More recent studies continue to forecast significant disruptions in employment. Frey and Osborne’s (2013) much-cited paper suggested that up to 47% of U.S. jobs could be automated within the next two decades. Yet, this prediction has been met with scepticism from researchers like Bessen (2019), who contend that such estimates fail to consider the capacity of firms and workers to adapt. While Frey and Osborne focus on potential risks, critics argue that there is insufficient empirical evidence to support such catastrophic scenarios. Bessen (2019) calls for a more nuanced analysis of how job markets evolve alongside technological changes, particularly in different economic contexts.
However, even as researchers debate the magnitude of job displacement, few studies focus on the emotional and psychological impacts that these forecasts have on the workforce. Public perceptions of AI-induced unemployment often generate anxieties that are not sufficiently captured in economic models. Mishra et al. (2023) emphasise the need for qualitative research to explore how workers in vulnerable industries perceive AI and automation. The existing literature often overlooks the fact that such anxieties are deeply personal and may not always align with empirical predictions. Thus, a gap persists in understanding how the fear of job loss, even if unsubstantiated, affects workers’ mental health and their willingness to embrace technological change.

2.2. Data Privacy and Security Concerns

AI’s reliance on large-scale data processing has heightened concerns about privacy and data security, particularly as personal information is increasingly leveraged for AI-driven decision-making. Floridi (2016) has raised critical questions about AI’s ability to handle sensitive data ethically, but his work primarily focuses on the theoretical risks of data misuse. More contemporary studies, such as that of Stahl and Wright (2018), attempt to address these concerns by examining how regulatory frameworks like the General Data Protection Regulation (GDPR) are evolving to keep pace with AI developments. While these regulatory measures offer some level of protection, they remain inadequate in dealing with the complexity of AI systems, particularly those involving machine learning algorithms that operate with minimal human oversight. Recent surveys indicate growing public concern over AI’s role in data privacy and its management of personal information (Tyson and Kikuchi 2023).
One area of contention in the literature is the question of whether AI can ever be truly transparent and accountable. Zarsky (2016) argues that the opacity of AI algorithms, often referred to as the “black-box” problem, undermines public trust. While advocates of AI regulation, like Winfield (2019), propose increased transparency as a solution, other scholars are sceptical about whether this is feasible in practice. Mishra et al. (2023) note that transparency itself is not a panacea, as AI systems are often so complex that making them fully transparent may overwhelm users with technical details, rendering the transparency ineffective.
This debate points to a broader gap in the literature: most research focuses on the technical and legal challenges of AI-related privacy risks, but few studies explore how these risks contribute to public anxiety. Public concerns about data privacy are often heightened by a lack of understanding about how AI processes data, yet much of the literature treats these concerns as secondary to the technical and regulatory challenges. There is a need for more interdisciplinary research that bridges the gap between technical AI studies and the social sciences, exploring how data privacy anxieties influence public attitudes toward AI adoption.
While there is a growing body of literature on the economic and technical implications of AI, public anxieties about AI’s rapid adoption have been explored in several recent studies, particularly in the UK. For instance, large-scale surveys such as the UK Government’s “Public Attitudes to Data and AI Tracker Survey” (UK Government 2023) highlight that concerns around AI’s implications for job security, data privacy, and ethical governance are prevalent across various demographic groups. Similarly, a comprehensive study by the Ada Lovelace Institute (2023) found that public concerns regarding AI are often linked to a perceived lack of transparency in how AI systems are developed and controlled by both corporate and governmental entities. These studies indicate a broad awareness and anxiety about AI’s impact on employment and data handling, with 88% of respondents reporting unease about AI’s potential misuse in areas such as surveillance and data privacy.
In a comparative study spanning five countries, Gillespie et al. (2021) demonstrated that trust in AI varies significantly across national contexts, with the UK reflecting some of the highest levels of concern about AI’s rapid deployment. This body of work provides valuable context for understanding public anxieties about AI, particularly around issues of trust, control, and ethical use. While these studies offer important insights, they often rely on broad quantitative measures, leaving less room for the nuanced exploration of how these anxieties manifest in different social settings, as is the focus of the present study. Our research builds on these findings by investigating how public anxieties are both suppressed and expressed (Gerlich 2024b) differently in anonymous surveys and public group settings, particularly in environments where AI adoption is presented in overwhelmingly positive terms.
Therefore, while acknowledging that the landscape of public anxieties around AI is well-explored, this study provides a unique contribution by examining the hidden anxieties that may not be captured in large-scale surveys or public forums. The contrast between public reluctance to admit concerns and the private expression of deep anxieties in anonymous settings reveals the complexity of how AI is perceived across different social contexts.

2.3. Ethical Governance and Trust

The ethical governance of AI is another area that has generated significant debate in the academic community. Scholars such as Nemitz (2018) argue that the ethical risks posed by AI systems, including bias and discrimination, are not sufficiently addressed by current regulatory frameworks. This concern is particularly relevant as AI systems are increasingly deployed in high-stakes decision-making processes, such as hiring, criminal justice, and healthcare. Theodorou (2022) similarly critiques the lack of robust mechanisms for ensuring accountability, pointing out that even when AI systems are regulated, the enforcement of ethical standards is inconsistent. Ethical and governance challenges surrounding AI, such as concerns about data privacy and algorithmic bias, continue to evolve as the technology advances (Floridi and Cowls 2022).
However, not all scholars agree on the extent of the risks posed by AI. Dignum (2020) takes a more optimistic view, arguing that ethical AI is achievable through the adoption of best practices and international standards. This perspective, while constructive, has been criticised for underestimating the cultural and institutional barriers that can impede the implementation of such standards. For instance, companies driven by profit motives may be reluctant to prioritise ethical considerations over efficiency and cost-effectiveness (Stahl and Wright 2018).
The literature on ethical AI governance also suffers from a lack of empirical studies. Much of the discourse revolves around hypothetical scenarios—such as the use of AI in warfare or surveillance—which, while important, divert attention away from the more immediate and tangible ethical challenges facing AI developers and users today. Trust in AI is shaped by governance frameworks and public perceptions of transparency within institutions that implement AI systems (Gillespie et al. 2023). Additionally, the literature often fails to consider the role of public trust in shaping AI governance. While transparency and accountability are frequently cited as solutions to ethical dilemmas, few studies investigate how these factors influence public perceptions of AI systems. This omission points to a significant gap in the literature: understanding how to build and maintain public trust in AI is critical for its successful adoption, yet little research has been conducted on the relationship between ethical governance and public anxiety.

2.4. Social Pressures and Hidden Anxieties

One of the most underexplored aspects of public perceptions of AI is the role of social pressure in shaping how people express their anxieties. Preliminary focus groups conducted for this study revealed that individuals are often reluctant to admit their concerns about AI in public settings, particularly in professional environments where AI is framed as a positive and inevitable development. In these contexts, expressing anxiety may be seen as a sign of resistance to progress or even weakness. This reluctance to voice concerns publicly is corroborated by research in the field of psychology, which suggests that people are more likely to conform to the perceived norms of their social group, especially when discussing emerging technologies (Smith and Anderson 2014).
Yet, in anonymous surveys, the picture changes dramatically. The preliminary results from this study show that when given the opportunity to express their thoughts privately, many respondents revealed significant anxieties about AI, particularly regarding job security, data privacy, and the ethical control of AI systems. This discrepancy between public discourse and private concerns highlights a critical gap in the literature: the social dynamics that shape how individuals express—or suppress—AI-related anxieties have been largely overlooked. Studies have shown that anonymity reduces social desirability bias, encouraging individuals to express their true feelings about contentious issues (Joinson 1999). Similarly, research by Smith and Anderson (2014) on public attitudes towards algorithms indicates that people tend to withhold their true concerns in public forums, revealing more nuanced or negative views when surveyed anonymously. There is little research on how workplace environments, peer pressure, or societal expectations influence people’s willingness to discuss their fears about AI. This oversight is significant because it suggests that current surveys and studies, which often rely on public or semi-public forms of data collection, may underestimate the true extent of public anxiety.
Understanding the role of social pressure in shaping AI perceptions is critical for businesses and policymakers. As companies increasingly implement AI technologies, they need to be aware that the absence of vocal concerns does not necessarily mean the absence of anxiety. Anonymous survey methods, such as those used in this study, offer a more accurate picture of public sentiment, revealing hidden fears that may otherwise remain unexpressed. The literature currently lacks a robust framework for understanding these hidden anxieties, which represents a crucial area for future research.
The growing body of literature on public anxieties about artificial intelligence (AI) highlights that these concerns are not uniform but shaped by various socio-cultural and national contexts. In the UK, the “Public Attitudes to Data and AI Tracker Survey” (2023) provides extensive insights into public concerns surrounding AI’s implications for data privacy, ethical governance, and job displacement. Over 60% of respondents expressed discomfort with AI systems making autonomous decisions without human oversight, highlighting fears about the opacity of AI’s decision-making processes and the potential misuse of personal data by both corporations and governments (UK Government 2023). Supporting this, the Ada Lovelace Institute (2023) identifies that a lack of transparency in AI governance exacerbates public anxiety, especially in areas like surveillance and the ethical management of AI systems. Although the public recognises the potential benefits of AI, there is a pervasive fear that without proper oversight, AI could be misused in ways that harm individual rights, particularly in contexts such as criminal justice or healthcare. These findings underline the need for clearer, more accessible governance structures to mitigate public fears about AI’s unchecked autonomy (Ada Lovelace Institute 2023).
On a broader scale, Gillespie et al. (2021) conducted a comparative study across five countries, revealing that public trust in AI varies significantly by national context. The UK, in particular, exhibited higher levels of anxiety about AI compared to countries like Germany and the United States. This can be attributed to historical factors, such as past government data scandals, which have left a legacy of distrust in how AI systems are governed. The study suggests that addressing these anxieties will require not only improved governance but also more public engagement and education about AI’s capabilities and limitations (Gillespie et al. 2021).
Binns et al. (2018) explored ethical concerns surrounding AI in Europe, finding that issues related to transparency and accountability were more prevalent in countries with strong regulatory frameworks, such as Germany and France. In contrast, studies in the United States, such as those by Brynjolfsson and McAfee (2014), indicate that job displacement due to AI and automation is a dominant concern, especially in industries heavily impacted by AI-driven automation, such as manufacturing and transportation. These findings underscore the broader societal anxieties about AI’s potential to disrupt labour markets and exacerbate inequality.
Adding to this global perspective, Chen et al. (2023) explore public trust in AI across China, the US, and Europe, highlighting how national contexts influence the public’s perception of AI governance and trust. In China, the government’s active role in promoting AI technologies and its integration into everyday services leads to higher levels of trust in AI. Conversely, in the US and Europe, concerns about data privacy, ethical governance, and job displacement remain at the forefront of public anxiety (Chen et al. 2023). This contrast underscores how socio-political contexts shape perceptions of AI, with Western countries often exhibiting more scepticism towards AI’s societal impact.
Yigitcanlar et al. (2024) offer further insights from Australia, where public perceptions of AI are framed by both optimism and concern. Their study on Australian cities reveals that while many view AI as essential for the future of urban development, anxieties surrounding data privacy and potential surveillance continue to shape public discourse. Australians express a pragmatic approach to AI adoption, balancing its benefits with ethical concerns about its broader societal implications. This nuanced view provides valuable insight into the global conversation on AI anxieties (Yigitcanlar et al. 2024).
These large-scale surveys provide a valuable foundation for understanding public anxieties about AI; however, they often fail to capture the more nuanced concerns that emerge in social contexts. Previous research, such as Gerlich’s (2024a), highlights the growing trust in AI due to its perceived impartiality and reliability compared to human decision-making. Unlike humans, who are often seen as biased, AI is perceived as neutral, making it a more trustworthy decision-making tool in certain contexts.
The existing literature on AI’s societal impact reveals both strengths and significant gaps. While much attention has been paid to the technical and economic implications of AI, the emotional and social dimensions of public anxiety remain underexplored. The literature’s tendency to focus on theoretical risks, such as job displacement and data privacy breaches, often neglects the real-world emotional responses that these risks generate. Furthermore, the role of social pressure in suppressing public expressions of anxiety has not been adequately addressed. This study seeks to fill these gaps by providing a more nuanced understanding of public anxieties surrounding AI, particularly through the use of anonymous survey methods that capture hidden concerns. By doing so, it aims to contribute to a more balanced and comprehensive discourse on AI’s role in society.

3. Materials and Methods

This study employs a mixed-methods approach, combining focus group discussions, survey data, and semi-structured interviews, consistent with the recommendations for business research methodology provided by Bryman and Bell (2011). The use of surveys, group discussions, and interviews allows for a thorough exploration of both overtly expressed and latent anxieties. The combination of anonymous surveys and group discussions offers a holistic understanding of public concern, particularly in the workplace, where social pressures may influence the willingness to express anxieties.

3.1. Research Design

The research follows a sequential explanatory design, commencing with focus group discussions to explore the social dynamics that shape the expression of AI-related anxieties. This qualitative phase was followed by a large-scale survey to quantify the extent and distribution of these concerns across a broader population, and by a final round of semi-structured interviews that explored the survey findings in greater depth. This staged approach allows initial qualitative insights to be contextualised and validated through the subsequent quantitative data collection.

3.1.1. Focus Group Discussions

At the outset of the study, five focus group discussions were conducted, with each group consisting of 10 participants representing an even demographic mix in terms of age, gender, and occupation. The focus group sessions were facilitated by trained moderators who adhered to a semi-structured format. Moderators were instructed to introduce the discussion topics neutrally, without leading participants toward particular responses. Their role was to encourage open discussion while ensuring that all participants had an opportunity to share their perspectives. The focus group format allowed for the exploration of initial attitudes toward AI, followed by a deeper engagement with the topic as participants discussed their concerns in more detail. To ensure neutrality, the moderators employed reflective techniques, repeating back participants’ statements for clarity without adding new content or influencing the direction of the discussion. They used non-verbal cues and open-ended questions to encourage further elaboration, helping participants to fully articulate their views without guiding them towards specific responses. These measures ensured that the data collected from the focus groups represented genuine participant concerns rather than moderator influence.

3.1.2. Survey Design and Data Collection

Following the focus groups, a structured survey was developed. A pre-testing phase was conducted with a sample of 10 participants to ensure the clarity and relevance of the survey items. Feedback from this group was used to refine the questionnaire before its full deployment to a representative sample of 867 individuals across the United Kingdom. The survey aimed to quantify public anxieties about AI, focusing on key areas identified during the focus groups: job displacement, data privacy, and the ethical use of AI technologies. Stratified random sampling was used to ensure the sample reflected a diverse range of demographics, including variations in age, gender, education, and occupation. The questionnaire was designed specifically for this study, drawing upon key themes identified in the recent literature on public attitudes toward artificial intelligence (AI). These themes were integrated into the survey to capture the most relevant anxieties related to AI’s adoption and impact on job displacement, data privacy, and ethical governance. While the questions are original, they align closely with previous findings in AI-related studies.
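As a concrete illustration, the stratified random sampling described above could be realised along the following lines. This is a minimal sketch under stated assumptions, not the study’s actual procedure: the sampling frame, column names, and proportional-allocation rule are all illustrative.

```python
import pandas as pd

def stratified_sample(frame: pd.DataFrame, strata: list[str], n_total: int,
                      seed: int = 42) -> pd.DataFrame:
    """Draw a random sample whose strata sizes are proportional to the frame."""
    return frame.groupby(strata, group_keys=False).apply(
        lambda g: g.sample(n=max(1, round(n_total * len(g) / len(frame))),
                           random_state=seed)
    )

# Illustrative frame; the study stratified by age, gender, education, and
# occupation (the column names here are assumptions).
frame = pd.DataFrame({
    "age_band": ["18-50"] * 6 + ["50+"] * 4,
    "gender": ["m", "f"] * 5,
    "education": ["school", "college", "postgrad", "college", "school",
                  "postgrad", "college", "school", "postgrad", "college"],
})
print(stratified_sample(frame, ["age_band", "gender"], n_total=6))
```

Proportional allocation keeps each stratum’s share of the sample equal to its share of the frame, which is one common way to achieve the demographic diversity described above.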
The survey consisted of 30 Likert-scale questions, with response options ranging from 1 (strongly disagree) to 6 (strongly agree). Participants were asked to respond to statements such as “AI poses a significant threat to jobs in my industry” and “I am concerned about how businesses use AI to collect and process personal data”. The Likert scale allowed for nuanced responses, capturing varying degrees of anxiety (Bryson 2019).
One of the key strengths of the survey design was its anonymity, which allowed participants to express their concerns without fear of judgment or peer pressure. This approach was particularly important given the findings from the focus groups, where social dynamics played a significant role in shaping individuals’ willingness to admit anxieties in a public setting (Joinson 1999). By conducting the survey anonymously, this study sought to reveal hidden anxieties that might otherwise go unexpressed.

3.1.3. Semi-Structured Interviews

The final phase of the study involved 53 semi-structured interviews conducted with participants who volunteered for further discussion after completing the survey. These interviews allowed for a deeper exploration of the anxieties identified in both the focus groups and the survey. Each interview followed a flexible guide designed to elicit more detailed responses about the sources of anxiety, including questions such as the following:
  • “Can you elaborate on your concerns about how AI will impact your job or industry?”
  • “How do you feel about the way AI systems handle personal data?”
  • “Do you think AI is being developed too quickly for society to handle responsibly?”
The semi-structured format allowed participants to discuss their personal experiences and provide insights into the reasons behind their concerns. A thematic analysis of the interviews helped identify recurring patterns and deeper layers of anxiety, such as fears about AI exacerbating inequalities and concerns about AI’s lack of transparency.

3.1.4. Survey Questions and Variables

The survey was structured with 30 questions designed to measure respondents’ anxieties about AI across a range of key variables. These variables were carefully selected based on the insights from the focus group discussions and existing literature on public perceptions of AI. For the survey questions, please refer to Appendix A:
  • Job Displacement (5 questions).
  • Data Privacy and Security (6 questions), based on Floridi (2016) and Zarsky (2016).
  • Technological Complexity (5 questions), based on Winfield (2019).
  • Ethical Governance and Control (7 questions), based on Nemitz (2018) and Theodorou (2022).
  • AI’s Speed of Development (3 questions), based on Bryson (2019).
  • Impact on Future Generations (4 questions).
The selection of variables was carefully designed to capture the broad range of anxieties that the focus groups identified. The use of a 6-point Likert scale allowed for greater nuance, enabling participants to express varying degrees of concern rather than forcing them into binary “yes/no” answers. This structure was vital in ensuring that the subtlety of respondents’ anxieties could be quantified. The combination of these variables not only aligns with the existing literature but also fills gaps in current research by providing a more comprehensive, multi-dimensional view of public anxieties regarding AI. The survey was particularly designed to measure both explicit anxieties (e.g., job loss) and more abstract concerns (e.g., technological complexity and ethical control), addressing both immediate and long-term societal fears.

3.2. Data Analysis Methods

The data analysis was conducted in two distinct phases: quantitative and qualitative. Each phase provided complementary insights into public anxieties about AI. Descriptive statistics, correlation analysis, and thematic analysis were employed to ensure a comprehensive interpretation of both the survey and interview data.

3.2.1. Quantitative Data Analysis

1. Descriptive Statistics: Descriptive statistics were used to summarise the demographic characteristics of the survey respondents and their responses to the 30 Likert-scale questions. Metrics such as the mean, median, mode, and standard deviation were calculated for each theme, which included job displacement, data privacy, and control over AI systems. The objective was to determine broad trends in public anxieties related to AI. For instance, low mean scores indicated heightened concerns about job security and the rapid pace of AI development. The descriptive analysis facilitated the ranking of anxiety themes by identifying those of the greatest concern. Respondents strongly agreed with statements such as “AI poses a significant threat to jobs in my industry” and “I do not trust AI systems to handle my personal data securely”. (A worked sketch of these computations, under assumed column names, follows this list.)
2. Correlation Analysis: A pairwise Pearson correlation analysis was performed to assess the relationships between demographic variables (e.g., age, gender, education, employment status) and levels of AI-related anxiety. Correlation coefficients range from −1 to +1, representing the strength and direction of these relationships. A positive correlation (closer to +1) indicates that as one variable increases, so does the other; a negative correlation (closer to −1) indicates an inverse relationship. For instance, the correlation analysis explored whether certain demographic factors, such as age or education, were associated with increased concerns about job displacement or AI governance. The analysis also ensured that continuous variables (such as age) and categorical variables (such as gender) were processed appropriately through cross-tabulations and correlation coefficients.
3. Handling of Variables: Demographic variables were processed according to their nature: continuous variables like age were analysed using Pearson correlation coefficients, while categorical variables such as gender and occupation were examined using cross-tabulations to assess group differences in anxiety levels. This distinction ensured that the analysis accurately captured the relationships between demographic factors and AI-related anxieties.
4. Reliability of Survey Instrument: Cronbach’s alpha was calculated for the 30 Likert-scale items to assess the internal consistency of the survey instrument. The alpha value exceeded 0.80, indicating high reliability and ensuring that the questions measured the same underlying constructs (AI-related anxieties) consistently.
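To make items 1 to 4 concrete, the sketch below shows how the descriptive statistics, demographic correlations, cross-tabulations, and Cronbach’s alpha could be computed with standard scientific Python tooling. It is an illustrative sketch rather than the study’s analysis code: the DataFrame layout, column names, and item names are assumptions, and Cronbach’s alpha is computed directly from its textbook definition.

```python
import pandas as pd
from scipy import stats

def theme_descriptives(df: pd.DataFrame, items: list[str]) -> pd.Series:
    """Mean, median, mode, and standard deviation of a theme score
    formed by averaging that theme's Likert items."""
    score = df[items].mean(axis=1)
    return pd.Series({
        "mean": score.mean(),
        "median": score.median(),
        "mode": score.round().mode().iloc[0],
        "std": score.std(),
    })

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Tiny synthetic example (the real survey had 867 respondents and 30 items;
# the column names here are illustrative assumptions).
df = pd.DataFrame({
    "age": [24, 37, 52, 61, 45, 29],
    "gender": ["m", "f", "f", "m", "f", "m"],
    "job_loss_q1": [1, 2, 1, 1, 3, 2],
    "job_loss_q2": [2, 2, 1, 1, 2, 3],
})
job_items = ["job_loss_q1", "job_loss_q2"]

print(theme_descriptives(df, job_items))                      # item 1: descriptive statistics
r, p = stats.pearsonr(df["age"], df[job_items].mean(axis=1))  # item 2: continuous variables
print(f"r = {r:.2f}, p = {p:.3f}")
print(pd.crosstab(df["gender"], df["job_loss_q1"]))           # item 3: categorical variables
print(f"alpha = {cronbach_alpha(df[job_items]):.2f}")         # item 4: reliability
```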

3.2.2. Qualitative Data Analysis

The qualitative data, drawn from both focus group discussions and semi-structured interviews, were analysed using thematic analysis. This method enabled the identification of recurring themes and patterns, enriching the quantitative data by providing deeper insights into the underlying reasons for public anxieties about AI.
1. Thematic Analysis of Focus Groups: Five focus group discussions, each comprising 10 participants, were conducted to explore how anxieties about AI were expressed in a public group setting. Initially, 86% of participants claimed they had no significant concerns about AI. However, after specific examples of AI-related risks were introduced, the percentage of participants admitting to such concerns rose to 88%. Using Braun and Clarke’s (2006) six-step thematic analysis process, the focus group transcripts were coded for recurring themes such as job displacement, ethical governance, and data privacy. This process allowed for the identification of key anxieties and social dynamics, particularly the reluctance to admit concerns in public settings.
2. Thematic Analysis of Interviews: Semi-structured interviews were conducted with 53 participants to gain deeper insights into the anxieties identified during the focus group and survey phases. The interviews followed a flexible guide, allowing respondents to elaborate on specific concerns related to job security, control over AI systems, and data privacy. The interview transcripts were analysed using the same thematic analysis process, which involved generating codes, identifying themes, and refining these themes to reflect the core anxieties expressed. This analysis revealed a consistent pattern: many participants expressed concerns about the lack of transparency in AI systems and the potential for AI to exacerbate inequalities.
3. Inter-Rater Reliability: To ensure the reliability of the qualitative analysis, two independent researchers coded a subset of the interview and focus group transcripts. Cohen’s kappa was calculated to measure inter-rater reliability, yielding a value of 0.81, which indicates a high level of agreement between coders. This process reduced subjective bias and enhanced the rigour of the thematic analysis. (A minimal sketch of this computation follows the list.)
4. Triangulation: The triangulation of qualitative and quantitative data was employed to validate the findings and ensure consistency across different data sources. The themes that emerged from the focus group discussions and interviews were cross-referenced with the survey results. For example, the concerns about job displacement expressed during interviews mirrored the high levels of anxiety reported in the survey. This triangulation reinforced the robustness of the study’s findings and provided a comprehensive understanding of public anxieties about AI.
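The inter-rater reliability check in item 3 can be illustrated with a short sketch. The code labels below are hypothetical examples rather than the study’s actual codebook, and scikit-learn’s cohen_kappa_score is used as one common implementation of the statistic.

```python
from collections import Counter

from sklearn.metrics import cohen_kappa_score

# Hypothetical codes assigned by two independent coders to the same
# transcript segments (labels are illustrative, not the study's codebook).
coder_a = ["job_loss", "privacy", "control", "privacy", "job_loss", "inequality"]
coder_b = ["job_loss", "privacy", "control", "control", "job_loss", "inequality"]

# Inter-rater reliability across the jointly coded subset.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # the study reports 0.81

# Theme frequencies among the agreed codes, used to rank recurring anxieties.
agreed = [a for a, b in zip(coder_a, coder_b) if a == b]
print(Counter(agreed).most_common())
```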
This analysis method provided a thorough understanding of public anxieties related to AI. The combination of descriptive statistics, correlation analysis, and thematic analysis ensured that both quantitative and qualitative data were rigorously interpreted. The integration of multiple data sources enriched the study’s conclusions and provided valuable insights for policymakers and businesses seeking to address public concerns about AI.

3.3. Ethical Considerations

All participants provided informed consent before taking part in the study, with assurances that their responses would remain confidential. The survey was conducted anonymously, and interview transcripts were anonymised to protect participants’ identities. Ethical approval was obtained from the relevant institutional ethics committee before data collection commenced.

4. Results and Discussion

4.1. Demographic Characteristics

The survey data were collected from 867 participants, offering a broad and representative sample of the population. Of the respondents, 51% were male and 49% female, indicating an even gender distribution (Figure 1). In terms of age, the majority of respondents (67%) fell within the 18–50 age group, with 33% aged over 50. In terms of education, the sample was also diverse: 36% of participants held a high school diploma, 32% had a college degree, and 32% had a postgraduate degree. Employment sectors varied, with 34% identifying as self-employed, 35% working in the service industry, and 26% employed in other sectors. The sample also included 11% of respondents from other European countries and 11% from outside Europe, though the majority (78%) were UK nationals.
These demographic insights form the basis for understanding how AI-related anxieties differ across groups. The relatively high level of education among respondents could influence the type of anxieties expressed, particularly concerning technological complexity and the ethical use of AI, as the literature suggests that highly educated individuals may have more nuanced concerns about the societal implications of AI (Smith and Anderson 2014). In contrast, individuals with lower education levels may be more focused on the immediate threat of job displacement, as automation often targets low-skill, routine tasks (Frey and Osborne 2013).

4.2. Descriptive Statistics

Table 1 presents the descriptive statistics summarising the participants’ responses to key survey items. These statistics highlight the primary concerns that emerged from the survey, particularly focusing on anxieties related to the speed of AI development, job displacement, technological complexity, control over AI technologies, and data security. As seen in the table, the public’s greatest concern lies in the rapid pace of AI’s advancement, followed by anxieties about job loss and data privacy. The mean and median values provide a detailed overview of the distribution of these concerns across the participant sample.
The mean values of the responses (Table 1) are lowest for the speed of change and job loss-related concerns. Speed of change captures anxieties that originate from how quickly the ecosystem is changing as AI is adopted, whereas job loss-related concerns relate directly to job insecurity and the prospect of systems and machines replacing less skilled, human-oriented jobs. Both concerns weigh on people when they evaluate AI as a technology for tomorrow, and respondents also anticipate that the speed at which the technology is assimilating into daily processes will leave a permanent mark on society in general, which they find deeply concerning. The median response for job loss-related concerns is 1, which indicates that more than 50% of respondents strongly agreed that job loss is a definite concern as AI grows in popularity and adoption. Based on the skewness and kurtosis values, the distributions for both speed of change and job loss-related concerns have fat tails, but only the job loss data are left-skewed, meaning that the level of concern is higher for job loss than for people’s inability to cope with the speed at which change is happening. The results show that the most significant concern is the speed of change, with a mean score of 1.54, followed closely by job loss, with a mean of 1.33. While both concerns are critical, the speed of change reflects the public’s greatest anxiety about AI’s rapid development.
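For readers who wish to reproduce this style of distributional summary, the sketch below shows how the median, skewness, and kurtosis discussed above can be computed; the response vector is invented for illustration and is not the study’s data.

```python
import numpy as np
from scipy import stats

# Illustrative responses for one Likert theme; not the study's raw data.
theme_scores = np.array([1, 1, 1, 2, 1, 3, 1, 2, 1, 4])

print("mean:", theme_scores.mean())
print("median:", np.median(theme_scores))  # a median of 1 means over half answered 1
# Negative skew = long left tail; positive skew = long right tail.
print("skewness:", stats.skew(theme_scores))
# scipy reports excess kurtosis: values > 0 indicate fatter tails than a normal.
print("excess kurtosis:", stats.kurtosis(theme_scores))
```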

4.3. Public Anxieties Related to AI

Public perceptions of AI remain influenced by biases, which significantly impact the overall trust and acceptance of AI technologies (Brauner et al. 2023). The analysis of survey data reveals several significant public concerns related to AI, with the primary areas of anxiety focusing on job displacement, data privacy, and control over AI technologies by businesses and governments. Table 2 summarises these key themes.
The speed of change emerged as the leading concern, followed closely by control over AI technologies and job displacement, with 91% of respondents agreeing or strongly agreeing with the statement ‘AI poses a significant threat to jobs in my industry’. This figure is consistent with the existing literature on automation and job loss, particularly in sectors dependent on routine tasks, where AI is most likely to replace human workers (Brynjolfsson and McAfee 2014). Job displacement has been a prominent topic of discussion in AI studies, especially in countries like the UK, where service industry jobs, which are often lower-skill jobs, are particularly vulnerable to automation (Frey and Osborne 2013).
Similarly, 88% of respondents expressed concerns about data privacy, specifically the potential for AI-driven systems to misuse personal information. The thematic analysis of interviews further reinforced this anxiety, with many participants voicing concerns about the ethical implications of AI collecting and processing personal data. These concerns align with broader societal fears about data security, as AI systems have been increasingly used for surveillance and decision-making, often without sufficient regulatory oversight (Floridi 2016). The prominence of this anxiety suggests that the public may not trust organisations to manage AI responsibly, which has been a central theme in AI ethics discussions (Stahl and Wright 2018).
Control over AI technologies by businesses and governments was also a major concern, with 84% of respondents indicating that they worry about the power imbalance created by AI. Interview data revealed that participants feared the increasing use of AI by large corporations and governments to manipulate or control public behaviour, which reflects broader discussions in the literature about the ethical governance of AI and the potential for AI technologies to reinforce existing power structures (Winfield 2019). This concern is particularly relevant in the context of AI’s growing role in decision-making processes, where transparency is often lacking. The use of AI in the public sector presents specific challenges around trust and transparency in decision-making processes (Chen et al. 2023).

4.4. Job Loss and Economic Anxiety

While job displacement was a significant source of anxiety, the speed of change emerged as the leading concern, with 96.3% of respondents expressing high anxiety about the rapid pace of AI development. Job displacement followed closely, with 91% of respondents expressing concern about AI’s impact on their employment prospects. This aligns with extensive research showing that automation poses the greatest threat to routine, manual jobs, particularly in industries such as manufacturing and services (Autor 2015). The correlation analysis, presented in Table 3, shows a strong positive correlation (r = 0.93) between job displacement anxiety and respondents’ country of origin, indicating that non-UK nationals are more likely to fear job loss due to AI. The analysis also reveals several significant relationships between income levels and participants’ concerns about AI. A very weak negative correlation (−0.018) was found between income level and concern over job loss, indicating that fears about AI-induced job displacement are prevalent across income groups rather than concentrated among lower-income participants. This suggests that anxieties about employment displacement are broad-reaching and that AI implementation strategies should account for resistance to job automation from a wide range of socio-economic backgrounds.
Additionally, there is a negligible correlation (−0.001) between income levels and concerns over data privacy, suggesting that worries about how AI systems handle personal data are consistent across income groups. These findings imply that data privacy concerns could affect AI adoption universally, rather than being restricted to specific economic groups. Companies integrating AI systems must address these concerns broadly, ensuring data security for all socio-economic levels.
A strong positive correlation (0.926) was observed between concerns about data privacy and control over AI technologies, indicating that individuals who express worry over data privacy are also concerned about broader governance issues. These results highlight that for AI to be widely adopted, businesses and governments must address both control and privacy issues simultaneously, as they are closely interconnected. Without this, the public’s trust in AI systems may wane, affecting the rate of AI adoption.
Additionally, a moderate positive correlation (r = 0.25) can be seen between anxieties over job loss and control, suggesting that fears about losing control over work environments exacerbate concerns about job security. As concerns about the centralisation of AI control rise, so do anxieties about losing jobs to automation. The fear of losing control over one’s work environment is closely tied to anxieties about job security, reinforcing the need for transparent AI governance.
This finding resonates with studies showing that migrant workers often feel more vulnerable to job displacement, as they are more likely to be employed in industries susceptible to automation (Graetz and Michaels 2015). Furthermore, the thematic analysis of interviews indicated that participants feared the replacement of jobs not only in low-skill sectors but also in professions traditionally considered secure, such as healthcare and law, which are increasingly adopting AI for tasks like diagnosis and legal research (Mishra et al. 2023).
Job loss ranked third among public anxieties, following speed of change and security concerns, with 91% of respondents expressing concern about the impact of AI on employment (Table 4). This aligns with concerns raised by Barrat (2013), who explores the broader implications of AI for human employment and governance. Several participants also acknowledged the potential for AI to create new job opportunities, particularly in tech-driven industries. This more nuanced view is consistent with optimistic perspectives on AI, such as those put forward by Kurzweil (2005), which suggest that automation will lead to the creation of new, higher-skilled jobs. Nevertheless, among respondents employed in routine tasks, the immediate fear of unemployment remained dominant, even though speed of change and security concerns ranked higher overall.
These correlations offer important insights into how different demographic groups may respond to AI adoption in the workforce. For example, the strong positive correlation between job loss anxiety and respondents’ country of origin (r = 0.93) suggests that non-UK nationals, who may already face precarious employment situations, feel particularly vulnerable to AI-induced job displacement. This heightened anxiety could hinder their acceptance of AI technologies in industries where automation is on the rise, particularly those reliant on foreign labour.
Similarly, the moderate correlation between job loss and concerns over control (r = 0.25) indicates that as anxieties about businesses and governments controlling AI rise, there is a corresponding increase in concerns over job security. This relationship underscores how trust in AI governance and control mechanisms plays a crucial role in shaping public perceptions of its societal impacts, particularly in the context of employment.
In terms of educational disparities, the weak correlation between educational attainment and technological complexity anxiety (r = −0.03) suggests that those with lower educational qualifications might struggle more with adapting to AI technologies. This points to the importance of tailored AI literacy programmes to ensure that AI adoption does not exacerbate existing educational inequalities.
These findings indicate that public anxieties about AI are not uniform but rather shaped by underlying demographic factors, which, in turn, may influence the public’s willingness to adopt or support AI technologies. Understanding these demographic nuances is crucial for policymakers and businesses aiming to foster a more inclusive and equitable approach to AI adoption.

4.5. Data Privacy and Security Concerns

Businesses’ and governments’ control over AI technologies was the second most significant concern, followed closely by job displacement. However, security concerns, including data privacy and misuse of personal information, also ranked high, with 88% of respondents expressing concern about how AI systems handle personal information. The thematic analysis of interview data revealed that many participants felt uneasy about the increasing use of AI in data processing, particularly in sectors such as finance and healthcare, where sensitive personal data are routinely collected and analysed. These concerns echo the ethical issues discussed by Floridi (2016), who emphasised the importance of transparent AI systems to mitigate risks to personal data.
The concerns expressed about data privacy reflect a broader societal fear about the potential misuse of personal data by AI systems, especially given recent high-profile data breaches and scandals involving AI-driven technologies (Floridi 2016). Participants in the interviews raised issues related to the opacity of AI systems, expressing doubts about the ability of governments and regulatory bodies to adequately oversee and manage AI technologies. The sentiment that “AI is advancing faster than we can regulate it” was a recurring theme, with participants highlighting that the rapid pace of AI development often outstrips the creation of legal and ethical frameworks necessary to protect personal data. This aligns with research on the challenges of AI governance, which emphasises the difficulty of regulating technologies that evolve faster than the regulatory systems meant to govern them (Stahl and Wright 2018). Furthermore, the concerns expressed by participants about AI-driven data collection are consistent with studies on the “black-box” nature of AI systems, where the complexity and opacity of algorithms prevent public understanding and reduce trust in how personal data are handled (Zarsky 2016). Participants voiced concerns over the opaque nature of AI systems, which often lack transparency in their decision-making processes, as highlighted by Citron and Pasquale (2014).

4.6. Technological Complexity and Control

Concerns about the complexity and control of AI were the third most prominent theme, with 84% of respondents agreeing or strongly agreeing with the statement, “AI is advancing too quickly for society to manage its consequences”. This reflects the broader governance challenges outlined by Bradford et al. (2024), who stressed the importance of decentralising AI control to alleviate public concerns. The survey results revealed that anxiety about technological complexity was particularly high among respondents with postgraduate degrees, who may have a better understanding of the intricacies of AI systems and the potential risks associated with their unchecked development. This concern mirrors the broader academic debate about the societal impacts of rapid technological change, where public institutions and policymakers struggle to keep pace with innovation (Bryson 2019).
Interview participants emphasised the need for greater transparency in AI development and expressed fears about the potential for AI to be used for unethical purposes, particularly by large corporations and governments. Several participants raised concerns about AI being used to manipulate public opinion, control consumer behaviour, or reinforce existing power structures. These fears are supported by the literature, which highlights the ethical risks of allowing powerful entities to control AI technologies without adequate oversight (Nemitz 2018).
The analysis also revealed that respondents were concerned about the lack of public involvement in discussions about AI governance. Many interviewees felt that AI was being developed and implemented without sufficient input from the broader public, which they believed could lead to technologies that do not reflect societal values or priorities. This is consistent with calls in the literature for more inclusive and participatory approaches to AI governance, which would involve a wider range of stakeholders in decision-making processes (Dignum 2020).

4.7. Future of the Next Generation

A significant portion of the respondents (72%) expressed concerns about the impact of AI on future generations, particularly in terms of job opportunities, education, and societal well-being. The fear that AI could disrupt traditional career paths and create new inequalities was a recurring theme, both in the survey responses and the interviews. Many participants worried that the rapid pace of technological change could lead to a world where future generations are overly dependent on AI, with fewer opportunities for meaningful employment or personal autonomy. These concerns reflect broader societal anxieties about the long-term impacts of AI on future generations, as documented in research on intergenerational fears about technological change (Smith and Anderson 2014). Interview participants highlighted the potential for AI to exacerbate existing social and economic inequalities, particularly if access to AI technologies is concentrated in the hands of a few wealthy individuals or corporations. This concern is echoed in the literature, which warns that the unequal distribution of AI’s benefits and risks could deepen societal divisions and create new forms of inequality (Taddeo and Floridi 2018).
At the same time, some participants expressed optimism about AI’s potential to improve education and healthcare for future generations. They pointed to the possibilities for AI to enhance personalised learning experiences, streamline healthcare delivery, and address some of the most pressing global challenges, such as climate change and resource scarcity. This duality—where AI is seen as both a threat and an opportunity—highlights the complexity of public attitudes toward AI and underscores the need for balanced, inclusive discussions about its future development (Acemoglu and Restrepo 2019).

Correlation Analysis

Correlation is a bivariate measure that quantifies the direction and strength of the relationship between two variables. The correlation coefficient ranges from −1 to +1: a value of ±1 indicates a perfect linear relationship, and values approaching 0 indicate a progressively weaker association. The sign of the coefficient indicates the direction of the relationship, with a positive value denoting a positive association and a negative value an inverse one. In the current study, a pairwise correlation analysis was performed between each demographic variable (age, gender, education, origin, and employment status) and the thematic factors. The resulting coefficients are tabulated below (Table 5).
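For transparency, the coefficients in Table 5 can be reproduced with standard tooling. The sketch below is illustrative only, assuming the responses sit in a pandas DataFrame with one column per demographic variable and per theme score; the file name and column names are hypothetical, not taken from the study’s materials.

```python
import pandas as pd

# Hypothetical layout: one row per respondent (n = 867), demographic
# variables coded numerically as in Appendix A, and one column per
# theme score.
df = pd.read_csv("survey_responses.csv")  # hypothetical file name

themes = ["Speed", "Job Loss", "Tech Diff", "Control", "Future", "Security"]

# One matrix per demographic factor, mirroring the layout of Table 5:
# the demographic column plus the six theme columns, correlated
# pairwise (Pearson's r, the pandas default).
for demo in ["Origin", "Occupation", "Education", "Gender", "Age"]:
    print(df[[demo] + themes].corr().round(2))
```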
Only one factor pair, origin and job loss, is highly correlated, with a coefficient of 0.93. This is primarily an artefact of the sample composition: roughly three quarters of respondents share the same origin, and job loss is the leading concern across all respondents. This pair must therefore be rejected, and no inferences should be drawn from it; a level 2 analysis of origin-related job loss responses confirmed that respondents of all origins gave almost identical answers. The next pair showing an appreciable coefficient is job loss versus control. This result is intuitive and anticipated: as state and employer control increase with the adoption of AI, manual jobs are expected to be replaced by machines in the pursuit of efficiency and cost-effectiveness, so greater automation is expected to bring greater unemployment. Concerns about jobs and concerns about control are therefore somewhat related.

4.8. Thematic Analysis of Interviews and Focus Groups

The thematic analysis of the fifty-three semi-structured interviews and five focus group discussions revealed a range of anxieties about artificial intelligence (AI), echoing the concerns identified in the survey but providing deeper insight into the reasons behind these fears. The qualitative data identified key themes around job displacement, data privacy, control over AI, and ethical governance, reflecting how public concerns extend beyond immediate, practical issues to broader societal implications. The participants’ responses, including the way they evolved over the course of discussions, shed light on the social dynamics that shape how people express or suppress their anxieties in public.
During the focus group discussions, participants’ attitudes toward AI evolved significantly. Initially, many participants expressed few or no concerns about AI, but as the discussions progressed, deeper anxieties began to surface. This shift in responses suggests that participants may have struggled to conceptualise their anxieties at the outset or felt social pressure to conform to the general excitement surrounding AI.
An important finding of the study is the role of social desirability bias (Paulhus 2002) in shaping public expressions of anxiety. In social settings, individuals may suppress their true concerns to avoid appearing anxious or resistant to technological progress. However, as the discussions continued and a more open environment developed, participants felt more comfortable admitting their anxieties. This highlights the complexity of discussing AI-related concerns in public forums, where social pressures can prevent honest discourse. The ability to openly express anxieties only after prolonged discussion suggests that creating safe, judgment-free spaces is crucial for understanding the full range of public concerns about AI.

4.8.1. Job Loss and Economic Displacement

While job displacement emerged as a prominent theme, concerns about control over AI technologies were also significant. Table 5 reveals a notable correlation between fears of job displacement and control, reflecting participants’ anxieties about how centralised AI governance might lead to job insecurity. Across the interviews, 72% of participants explicitly voiced concerns about the potential for AI to replace jobs, particularly in sectors that are increasingly automating routine tasks. For example, a participant working in customer service mentioned, “I can already see the impact—self-service kiosks are everywhere, and we’re told it’s more efficient, but it feels like I’m being replaced”. This anxiety was not limited to low-skill jobs; participants in highly skilled professions also expressed uncertainty about the future. A software engineer remarked, “AI is getting better at tasks we used to think only humans could do. I worry about how long before it makes my role redundant”.
In the focus groups, the discussion of job displacement was initially met with hesitation, with 85% of participants stating at the start that they did not have significant anxieties about AI. However, as the conversation progressed, and examples of AI-related job losses in various sectors were provided, participants began to acknowledge their fears. By the end of the sessions, 88% admitted that they were indeed anxious about how AI could impact their job security. One participant remarked, “I didn’t think it was a real concern before, but when you hear about how AI is being used to automate jobs across different industries, it starts to hit home”.
This shift in openness is indicative of the social pressures that affect how individuals express their anxieties in public. In private, people are more willing to admit their concerns, but in group settings, there is often reluctance to express fear, especially in environments where technological innovation is seen as inevitable or necessary. These findings align with the work of Autor (2015), who highlights the hidden fears people have about technological displacement, especially in workplaces that encourage enthusiasm for innovation.
The interviews and focus groups also revealed nuanced insights into the long-term economic impacts of AI. Participants feared not only for their own jobs but also for future generations. One interviewee stated, “It’s not just about whether I’ll lose my job—what kind of jobs will be left for our kids? Will AI take over everything, leaving only a handful of jobs for humans?” This intergenerational anxiety is well-documented in the literature on technological disruptions, where parents express concern about the ability of their children to navigate a world dominated by AI (Smith and Anderson 2014).
A key element of the study was the comparison between the survey data and the findings from the focus groups. In the anonymous surveys, participants were more willing to report anxieties about AI upfront. For example, over 80% of survey respondents expressed concern about data privacy and job displacement. However, in the initial stages of the focus group discussions, fewer participants expressed such concerns, with only 40% openly discussing anxieties related to AI. As the discussions progressed, and participants felt more comfortable, this number rose significantly. This divergence suggests that the format of data collection can influence the level of openness with which individuals express their concerns.
The focus groups offered a deeper, more nuanced exploration of anxieties, revealing that many participants were initially hesitant to express their true feelings in a social setting. In contrast, the survey’s anonymity allowed participants to express concerns more freely. This is consistent with the idea of social desirability bias, where individuals may suppress their anxieties in group settings to avoid appearing anxious or pessimistic about AI. These findings underscore the importance of using both qualitative and quantitative methods in exploring complex issues like public anxieties about AI, as each method reveals different facets of the participants’ attitudes.

4.8.2. Data Privacy and Security Concerns

Data privacy emerged as a central theme, with over half of the interviewees (52%) expressing concerns about AI’s role in the collection, storage, and misuse of personal information. A participant working in finance highlighted the vulnerability of personal data in AI-driven systems: “Every financial transaction is now tracked and processed by AI systems. What scares me is how easy it would be for that data to be used against us—by corporations, hackers, or even governments”. This concern was echoed by another participant who worked in healthcare, who said, “We deal with incredibly sensitive patient information, and while AI helps process it faster, I don’t feel confident that it’s always secure”.
The focus group discussions around data privacy also reflected deep concerns. Initially, many participants had not considered the full extent of AI’s role in data handling, but once the discussion delved into examples of AI systems being used in surveillance and predictive policing, anxiety levels rose. One participant commented, “The more I think about it, the more I realise that we have no control over what AI systems do with our data. It feels like we’re being watched all the time”.
These concerns align with the existing literature on the risks associated with AI-driven data collection, particularly regarding the ethical challenges and potential for abuse (Floridi 2016). The participants’ unease about the lack of transparency in how data are handled, and the perceived inadequacy of current regulations, reflects broader societal fears about the growing role of AI in surveillance. As one participant noted, “AI is so opaque—how are we supposed to know who has access to our data and what they’re doing with it?”

4.8.3. Control and Ethical Governance

Another major theme was the issue of control over AI systems, with 84% of respondents indicating that they were concerned about the power imbalance created by AI. The interviews provided deeper insights into how this concern manifested, with participants frequently mentioning the concentration of AI development and deployment in the hands of a few large corporations and governments. One participant remarked, “It’s not just about what AI can do, but who controls it. The more powerful AI becomes, the more control those at the top will have over all of us”.
In the focus groups, the idea of control was closely tied to ethical concerns about the misuse of AI by businesses and governments. Participants feared that AI could be used to manipulate public opinion, influence consumer behaviour, or even restrict individual freedoms. One participant pointed out, “We’ve already seen how social media algorithms can shape what we think—what’s stopping AI from doing the same on a larger scale?”
The discussions also highlighted concerns about the lack of public involvement in decisions about AI governance. Many interviewees expressed frustration that AI technologies were being developed and implemented without adequate input from ordinary people. One participant stated, “AI is shaping our future, but it feels like all the decisions are being made behind closed doors. We’re not part of the conversation”. This sentiment aligns with calls in the literature for more participatory approaches to AI governance, where stakeholders from various sectors, including the public, are actively involved in shaping AI policies (Dignum 2020).

4.8.4. Technological Complexity and the Future of the Next Generation

Technological complexity and its implications for the future were further recurring themes. Over 80% of interviewees and focus group participants expressed concerns that AI was advancing too quickly for society to adapt. One interviewee noted, “It feels like we’re hurtling towards a future that we don’t fully understand, and I’m not sure if we’re prepared for it”. This concern was particularly pronounced among parents, who worried about the impact of AI on future generations. One participant expressed anxiety about her children’s future, saying, “Will my kids even have jobs when they grow up? Or will AI have taken over everything by then?”
The focus groups revealed a mix of optimism and fear about AI’s role in education and the workforce of the future. Some participants acknowledged the potential for AI to enhance learning opportunities, particularly through personalised education systems. However, many were sceptical about whether these benefits would be accessible to all. As one participant noted, “AI could revolutionise education, but only if it’s available to everyone. Otherwise, it’ll just widen the gap between those who have access and those who don’t”. This aligns with research that highlights the risk of AI exacerbating existing inequalities.
The thematic analysis of the interviews and focus group discussions revealed deep-rooted public anxieties about AI, extending beyond immediate concerns about job loss to more complex issues around data privacy, control, and the future of society. The participants’ responses provide critical insights into how social dynamics shape the expression of these anxieties, with people more likely to admit their fears in anonymous or private settings. These findings not only support the survey results but also offer new perspectives on the underlying drivers of public anxiety, highlighting the need for more inclusive and transparent AI governance.

5. Conclusions

This study sought to explore the anxieties individuals have about artificial intelligence (AI) through an investigation structured around seven research questions. Each question was designed to uncover the social, psychological, and demographic factors that shape how these anxieties are expressed, particularly in work and group settings. The results provided a comprehensive understanding of public fears regarding AI, offering valuable insights for both academic research and practical applications in business environments.
The findings showed that job displacement, data privacy, and ethical governance were the most significant concerns. An overwhelming 91% of survey respondents expressed fears about job loss due to AI, aligning with studies by Frey and Osborne (2013), who highlighted the potential for AI to automate routine tasks, particularly in vulnerable industries such as manufacturing and services. However, this study extended the discussion by revealing that even those in traditionally secure professions, such as healthcare, feared displacement. This result builds on Gerlich’s (2023) observation that fears of AI are becoming more prevalent across a broader range of industries, suggesting that the reach of AI anxiety is not limited to low-skill sectors. For AI to be successfully integrated into society, addressing issues of public trust and ensuring robust ethical governance are crucial (Stahl et al. 2023).
The focus group discussions revealed that only 15% of participants initially admitted to having concerns about AI, but after facilitated discussions, 88% acknowledged their fears. This significant shift confirms that many individuals are reluctant to admit their anxieties in public settings due to social desirability bias, a phenomenon also identified in the work of Gerlich (2024b). These findings have profound implications for businesses, as employees may suppress their concerns about AI in organisational settings, creating an illusion of widespread acceptance when, in fact, underlying fears could hinder the success of AI implementation projects.
The correlation analysis showed that non-UK nationals were significantly more anxious about job loss compared to UK nationals, with a strong correlation (r = 0.93) indicating heightened vulnerability to AI-driven displacement among foreign workers. This result is consistent with Graetz and Michaels’ (2015) work on migrant workers and automation, suggesting that demographic factors such as nationality and employment status play a key role in shaping public fears about AI. The study also found that younger respondents expressed more concern about the future implications of AI for job security, particularly as AI technologies rapidly evolve.
The study found that 88% of respondents expressed concerns about the misuse of personal data by AI systems, with many interviewees highlighting the opaque nature of AI-driven data processing. These concerns mirror the findings of Floridi (2016), who emphasised the ethical challenges of AI in data privacy and the need for greater transparency in AI governance. The interviews and focus group discussions further revealed that individuals felt powerless in the face of AI’s increasing role in data collection, with several participants raising fears about the lack of oversight and regulation. This highlights the need for businesses and policymakers to address data privacy issues more rigorously, especially as AI technologies become more integrated into daily life. AI’s role in scientific research introduces a range of ethical challenges, calling for updated guidance to navigate these complexities (Resnik and Hosseini 2024).
The survey and interviews revealed that 84% of participants were concerned about the speed of AI development and its potential to outpace societal and regulatory capacities. Participants expressed doubts about whether governments and businesses were adequately prepared to manage the ethical and operational challenges posed by AI. This aligns with the observations of Winfield (2019), who argued that the rapid advancement of AI often leaves public institutions struggling to keep up, potentially exacerbating public fears. These results underscore the importance of transparency and education in AI governance, as the public remains uncertain about AI’s long-term consequences.
The results showed that 72% of respondents were anxious about the implications of AI for future employment and societal well-being. This intergenerational fear was particularly pronounced among parents, who worried about whether AI would eliminate job opportunities for their children. Similar concerns were found in the work of Smith and Anderson (2014), who documented the increasing anxiety surrounding AI’s impact on future generations. Participants in this study echoed these fears, with several interviewees questioning whether AI would create a world where human skills and autonomy are diminished.
The study revealed that businesses are likely to face significant challenges in implementing AI technologies if they fail to address employee anxieties. The findings suggest that while employees may not openly express their concerns about AI in organisational settings, these fears remain prevalent and could manifest in resistance or sabotage during AI-driven change initiatives. Gerlich (2024a) emphasised that organisations often overlook the human element in AI implementation, leading to friction between technological progress and employee well-being. This study reinforces that view, suggesting that companies must proactively engage with employees, creating safe spaces for them to voice their concerns. Failure to do so could result in significant obstacles to AI adoption, including decreased morale, reduced productivity, and active resistance to AI-driven projects.
The relevance of these findings extends beyond the theoretical realm into practical applications for both academia and business. For academia, this study contributes to the growing literature on AI anxieties by offering a comprehensive, mixed-methods approach that combines survey data with in-depth qualitative insights. The study’s findings on how social contexts influence the expression of anxieties add a new dimension to the existing research on AI-related fears, particularly in workplace settings. This aligns with the work of Gerlich (2023, 2024b), who emphasised the importance of understanding the unspoken anxieties employees may harbour about AI technologies.
For businesses, the findings offer a critical warning: companies should not assume that employees are fully on board with AI adoption simply because they do not openly express concerns. The research suggests that AI implementation projects could face significant obstacles if organisations fail to acknowledge and address the anxieties that exist beneath the surface. Businesses should develop comprehensive communication strategies that encourage openness and transparency, allowing employees to express their fears without fear of judgment or reprisal. By creating an environment where employees feel heard, companies can mitigate the risk of resistance and ensure smoother transitions during AI-driven change projects.
One limitation of this study is the absence of geographic and industry-specific data, which could have provided further insights into how AI-related anxieties vary across different regions and sectors. Additionally, while the study offers valuable insights into public anxieties about AI, it focuses primarily on the UK context, limiting the generalisability of the findings to other national contexts. Future research should consider expanding the sample to include diverse geographic regions and industries to provide a more comprehensive understanding of how AI adoption impacts various demographic and professional groups.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of SBS Swiss Business School (protocol code EC23/FR18, 1 September 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Supporting data can be requested from the author.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Survey questions:
Demographic Questions
What is your age?
What is your gender? (1 = male, 2 = female)
What is your highest level of education? (1 = high school, 2 = college, 3 = post grad)
What is your current occupation? (1 = unemployed, 2 = student, 3 = employed, 4 = self-employed, 5 = retired)
What is your country of origin? (1 = UK, 2 = EU, 3 = rest of the world)
Likert Scale Questions (1 = Strongly Agree to 6 = Strongly Disagree)
I feel anxious about the rapid pace at which AI technology is advancing. (Speed of Change)
The speed at which AI is developing makes it difficult for society to adapt. (Speed of Change)
I worry that AI will cause significant job losses. (Job Loss)
AI technology poses a threat to job security for future generations. (Job Loss)
Understanding AI technology is challenging for most people. (Technological Difficulty)
The complexity of AI technology makes me uneasy. (Technological Difficulty)
I believe that with AI, governments have too much control over citizens. (State Control)
Employers can use AI in ways that can harm employees’ interests. (Employer Control)
Businesses may exploit AI to maximise profits at the expense of societal well-being. (Business Control)
AI development is crucial for the future success of the next generation. (Future of Children)
I am concerned about the implications of AI for the future of my children. (Future of Children)
The use of AI increases the risk of security breaches and data theft. (Security)
AI technology makes personal data less secure. (Security)
The development of AI could lead to more sophisticated cyberattacks. (Security)
AI-driven automation could lead to significant societal changes. (Speed of Change)
I feel anxious about the potential misuse of AI by governments. (State Control)
Businesses do not fully consider the ethical implications of AI. (Business Control)
The next generation will be better equipped to handle AI-related challenges than we are. (Future of Children)
AI development should be regulated more strictly to ensure public safety. (Security)
Open-Ended Questions
What is your biggest concern about the future development of AI? (General Anxiety)
How do you think society should address the challenges posed by AI technology? (General Anxiety)

References

1. Acemoglu, Daron, and Pascual Restrepo. 2019. The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand. Economics Working Papers (No. 24334). Cambridge: National Bureau of Economic Research.
2. Ada Lovelace Institute. 2023. How Do People Feel About AI? London: The Alan Turing Institute. Available online: https://www.adalovelaceinstitute.org/wp-content/uploads/2023/06/Ada-Lovelace-Institute-The-Alan-Turing-Institute-How-do-people-feel-about-AI.pdf (accessed on 2 February 2024).
3. Autor, David H. 2015. Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives 29: 3–30.
4. Barrat, James. 2013. Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books.
5. Bessen, James E. 2019. AI and Jobs: The Role of Demand. NBER Working Paper No. 24235. Cambridge: National Bureau of Economic Research.
6. Binns, Reuben, Michael Veale, Max Van Kleek, and Nigel Shadbolt. 2018. ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions. Paper presented at the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, April 21–26.
7. Bradford, Angelina, Mindy Weisberger, and Nicoletta Lanese. 2024. Deductive Reasoning vs. Inductive Reasoning. Live Science. Available online: https://www.livescience.com/21569-deduction-vs-induction.html (accessed on 9 May 2024).
8. Braun, Virginia, and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3: 77–101.
9. Brauner, Philipp, Anna Hick, Rahel Philipsen, and Martina Ziefle. 2023. What does the public think about artificial intelligence?—A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science 5: 1113903.
10. Bryman, Alan, and Emma Bell. 2011. Business Research Methods, 3rd ed. Oxford: Oxford University Press.
11. Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company.
12. Bryson, Joanna J. 2019. The past decade and future of AI’s impact on society. Technology and Society 41: 23–42.
13. Chen, Yu-Che, Michael J. Ahn, and Yi-Fan Wang. 2023. Artificial intelligence and public values: Value impacts and governance in the public sector. Sustainability 15: 4796.
14. Citron, Danielle K., and Frank A. Pasquale. 2014. The scored society: Due process for automated predictions. Washington Law Review 89: 1–33. Available online: https://digitalcommons.law.uw.edu/wlr/vol89/iss1/2 (accessed on 13 March 2024).
15. Dignum, Virginia. 2020. Responsible artificial intelligence: Designing AI for human values. IT Professional 21: 18–19.
16. Floridi, Luciano. 2016. Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics 22: 1669–88.
17. Floridi, Luciano, and Josh Cowls. 2022. Ethics and governance of artificial intelligence: Public anxieties and regulatory challenges. Journal of Information, Communication and Ethics in Society 20: 45–62.
18. Frey, Carl Benedikt, and Michael A. Osborne. 2013. The future of employment: How susceptible are jobs to computerization? Technological Forecasting and Social Change 114: 254–80.
19. Gerlich, Michael. 2023. Perceptions and acceptance of artificial intelligence: A multi-dimensional study. Social Sciences 12: 502.
20. Gerlich, Michael. 2024a. Exploring motivators for trust in the dichotomy of human—AI trust dynamics. Social Sciences 13: 251.
21. Gerlich, Michael. 2024b. Brace for Impact: Facing the AI Revolution and Geopolitical Shifts in a Future Societal Scenario for 2025–2040. Societies 14: 180.
22. Gillespie, Nicole, Steve Lockey, and Caitlin Curtis. 2021. Trust in Artificial Intelligence: A Five Country Study. Saint Lucia: The University of Queensland and KPMG Australia.
23. Gillespie, Nicole, Steve Lockey, Caitlin Curtis, Jack Pool, and Arian Akbari. 2023. Trust in Artificial Intelligence: A Global Study. Brisbane: The University of Queensland and KPMG Australia.
24. Graetz, Georg, and Guy Michaels. 2015. Robots at work. Centre for Economic Performance Discussion Paper No. 1335. London: London School of Economics and Political Science. Available online: https://cep.lse.ac.uk/pubs/download/dp1335.pdf (accessed on 15 May 2024).
25. Joinson, Adam N. 1999. Social desirability, anonymity, and Internet-based questionnaires. Behavior Research Methods, Instruments, & Computers 31: 433–38.
26. Kurzweil, Ray. 2005. The Singularity is Near: When Humans Transcend Biology. New York: The Viking Press.
27. Marr, Bernard. 2019. Artificial Intelligence in Practice: How 50 Successful Companies Used AI and Machine Learning to Solve Problems. Hoboken: John Wiley & Sons.
28. Mishra, A., N. Kumar, and R. Kapoor. 2023. Transparency and trust in AI systems: A comprehensive review. Journal of AI and Ethics 4: 231–50.
29. Miyazaki, Kunihiro, Taichi Murayama, Takayuki Uchiba, Jisun An, and Haewoon Kwak. 2024. Public perception of generative AI on Twitter: An empirical study based on occupation and usage. EPJ Data Science 13: 2.
30. Nemitz, Paul. 2018. Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376: 20180089.
31. Paulhus, Delroy L. 2002. Socially desirable responding: The evolution of a construct. In The Role of Constructs in Psychological and Educational Measurement. Edited by H. Braun, D. N. Jackson and D. E. Wiley. Mahwah: Lawrence Erlbaum Associates, pp. 49–69.
32. Resnik, David B., and Mohammad Hosseini. 2024. The ethics of using artificial intelligence in scientific research: New guidance needed for a new tool. AI and Ethics, 1–23.
33. Sartori, Laura, and Giulia Bocca. 2023. Minding the gap(s): Public perceptions of AI and socio-technical imaginaries. AI & Society 38: 443–58.
34. Smith, Aaron, and Janna Anderson. 2014. AI, Robotics, and the Future of Jobs. Washington, DC: Pew Research Center.
35. Stahl, Bernd Carsten, and David Wright. 2018. Ethics and privacy in AI: Implementing responsible research and innovation. IEEE Security & Privacy 16: 26–33.
36. Stahl, Bernd Carsten, Laurence Brooks, Tally Hatzakis, Nicole Santiago, and David Wright. 2023. Exploring ethics and human rights in artificial intelligence—A Delphi study. Technological Forecasting and Social Change 191: 122502.
37. Taddeo, Mariarosaria, and Luciano Floridi. 2018. Artificial intelligence and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics 24: 505–28.
38. Theodorou, Andreas. 2022. Accountability and control in AI governance. Science and Engineering Ethics 28: 123–39.
39. Tyson, Alec, and Eileen Kikuchi. 2023. Growing Public Concern About the Role of Artificial Intelligence in Daily Life. Washington, DC: Pew Research Center.
40. UK Government. 2023. Public Attitudes to Data and AI Tracker Survey Wave 3. Available online: https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-3 (accessed on 11 October 2024).
41. Winfield, Alan F. 2019. Ethical governance in robotics and AI. Nature Electronics 2: 46–48.
42. Yigitcanlar, Tan, Kenan Degirmenci, and Tommi Inkinen. 2024. Drivers behind the public perception of artificial intelligence: Insights from major Australian cities. AI & Society 39: 833–53.
43. Zarsky, Tal Z. 2016. The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology, & Human Values 41: 118–32.
Figure 1. Demographic distribution of respondents.
Table 1. Descriptive statistics of responses.

                          Speed of Change   Job Loss   Tech Diff   Control   Future of Gen   Security
Mean                      1.540             1.333      2.283       1.793     3.238           1.755
Standard Error            0.018             0.024      0.051       0.014     0.041           0.012
Median                    1.333             1.000      2.000       1.800     3.333           1.750
Mode                      1.00              1.00       1.00        1.80      4.00            1.75
Standard Deviation        0.528             0.704      1.510       0.421     1.208           0.340
Sample Variance           0.279             0.496      2.279       0.177     1.459           0.115
Kurtosis                  4.279             6.199      −0.111      −0.062    −1.093          −0.476
Skewness                  1.550             2.389      1.044       0.416     −0.117          −0.007
Minimum                   1                 1          1           1         1               1
Maximum                   4.7               5.5        6.0         3.2       5.3             2.5
Count                     867               867        867         867       867             867
Confidence Level (95.0%)  0.035             0.047      0.101       0.028     0.081           0.023
Table 2. Key themes analysed.

Theme                                                                                                  No. of Questions
Speed of change—pace of advancement and difficulty in keeping up the pace to adapt, societal changes  3
Job loss—unemployment, job security concerns                                                          2
Technological difficulty—challenges, ease                                                             2
Control—state control, employer control, business control                                             5
Future of children—success and implications for the next generation                                   3
Security—data breach, data theft, data abuse, cyberattack                                             4
TOTAL                                                                                                 19
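The per-theme columns summarised in Table 1 appear to be per-respondent means of each theme’s Likert items: the fractional medians (1.333 for the three-item Speed of Change theme, 1.750 for the four-item Security theme) are consistent with such averaging. A minimal sketch under that assumption, grouping items Q6–Q24 by the factor groups reported in Table 3; the column names are hypothetical:

```python
import pandas as pd

# Likert items (coded 1-6) grouped into themes, following the factor
# groups in Table 3 and the question order in Appendix A.
theme_items = {
    "Speed":     ["Q6", "Q7", "Q20"],
    "Job Loss":  ["Q8", "Q9"],
    "Tech Diff": ["Q10", "Q11"],
    "Control":   ["Q12", "Q13", "Q14", "Q21", "Q22"],
    "Future":    ["Q15", "Q16", "Q23"],
    "Security":  ["Q17", "Q18", "Q19", "Q24"],
}

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

# Theme score = per-respondent mean of the theme's items; describe()
# then yields statistics of the kind reported in Table 1.
for theme, items in theme_items.items():
    df[theme] = df[items].mean(axis=1)
print(df[list(theme_items)].describe())
```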
Table 3. Level 2 analysis of questions and measurement of anxiety level.

      1     2     3     4     5     6     High Anxiety %   Low Anxiety %   Factor Group
Q6    733   102   18    8     6     –     96.3%            0.7%            Speed of change
Q7    737   96    21    8     5     –     96.1%            0.6%            Speed of change
Q8    675   105   70    11    6     –     90.0%            0.7%            Job loss
Q9    687   106   61    7     4     2     91.5%            0.7%            Job loss
Q10   364   220   89    72    89    33    67.4%            14.1%           Technological difficulties
Q11   399   204   90    45    75    54    69.6%            14.9%           Technological difficulties
Q12   699   95    73    –     –     –     91.6%            0.0%            Fear of control
Q13   604   191   72    –     –     –     91.7%            0.0%            Fear of control
Q14   509   162   106   45    40    5     77.4%            5.2%            Fear of control
Q15   142   145   136   151   145   148   33.1%            33.8%           Future of generation
Q16   200   225   231   211   –     –     49.0%            0.0%            Future of generation
Q17   285   291   291   –     –     –     66.4%            0.0%            Security concerns
Q18   431   436   –     –     –     –     100.0%           0.0%            Security concerns
Q19   416   451   –     –     –     –     100.0%           0.0%            Security concerns
Q20   300   235   195   137   –     –     61.7%            0.0%            Speed of change
Q21   285   289   293   –     –     –     66.2%            0.0%            Fear of control
Q22   224   202   223   218   –     –     49.1%            0.0%            Fear of control
Q23   158   126   113   125   140   205   32.8%            39.8%           Future of generation
Q24   294   286   287   –     –     –     66.9%            0.0%            Security concerns
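The High/Low Anxiety percentages in Table 3 follow directly from the response counts: responses of 1–2 are tallied as high anxiety and 5–6 as low anxiety, divided by the sample of 867 (for Q6, (733 + 102)/867 ≈ 96.3% and 6/867 ≈ 0.7%). A short sketch of this tally, again with hypothetical file and column names:

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
n = len(df)  # 867 respondents

for q in [f"Q{i}" for i in range(6, 25)]:  # the 19 Likert items
    counts = df[q].value_counts()
    # 1 = Strongly Agree ... 6 = Strongly Disagree
    high = counts.reindex([1, 2], fill_value=0).sum() / n
    low = counts.reindex([5, 6], fill_value=0).sum() / n
    print(f"{q}: high anxiety {high:.1%}, low anxiety {low:.1%}")
```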
Table 4. Ranking of theme-based anxieties/concerns.

Rank   Theme
1      Speed of change
2      Job loss
3      Security concerns
4      Fear of control
5      Technological difficulties
6      Future of next generation
Table 5. Correlation analysis of each demographic factor with the theme-based factors.

            Origin   Speed   Job Loss   Tech Diff   Control   Future   Security
Origin      1.00
Speed       0.03     1.00
Job Loss    0.93     0.03    1.00
Tech Diff   −0.01    −0.03   −0.02      1.00
Control     0.26     0.00    0.25       −0.06       1.00
Future      0.03     −0.09   0.03       −0.01       −0.02     1.00
Security    0.00     −0.01   0.00       −0.02       −0.02     0.02     1.00

            Occupation   Speed   Job Loss   Tech Diff   Control   Future   Security
Occupation  1.00
Speed       −0.01        1.00
Job Loss    0.00         0.03    1.00
Tech Diff   0.00         −0.03   −0.02      1.00
Control     0.03         0.00    0.25       −0.06       1.00
Future      0.05         −0.09   0.03       −0.01       −0.02     1.00
Security    0.04         −0.01   0.00       −0.02       −0.02     0.02     1.00

            Education   Speed   Job Loss   Tech Diff   Control   Future   Security
Education   1.00
Speed       0.01        1.00
Job Loss    0.00        0.03    1.00
Tech Diff   −0.03       −0.03   −0.02      1.00
Control     −0.01       0.00    0.25       −0.06       1.00
Future      0.01        −0.09   0.03       −0.01       −0.02     1.00
Security    0.00        −0.01   0.00       −0.02       −0.02     0.02     1.00

            Gender   Speed   Job Loss   Tech Diff   Control   Future   Security
Gender      1.00
Speed       −0.01    1.00
Job Loss    −0.02    0.03    1.00
Tech Diff   −0.06    −0.03   −0.02      1.00
Control     −0.02    0.00    0.25       −0.06       1.00
Future      0.01     −0.09   0.03       −0.01       −0.02     1.00
Security    0.00     −0.01   0.00       −0.02       −0.02     0.02     1.00

            Age     Speed   Job Loss   Tech Diff   Control   Future   Security
Age         1.00
Speed       0.01    1.00
Job Loss    −0.03   0.03    1.00
Tech Diff   0.01    −0.03   −0.02      1.00
Control     0.03    0.00    0.25       −0.06       1.00
Future      0.04    −0.09   0.03       −0.01       −0.02     1.00
Security    −0.01   −0.01   0.00       −0.02       −0.02     0.02     1.00