Public Anxieties About AI: Implications for Corporate Strategy and Societal Impact
Abstract
1. Introduction
- What are the primary sources of anxiety related to AI development, and which areas—such as job loss, data security, or ethical concerns—generate the most public concern?
- How significant is the anxiety associated with the rapid speed of AI development, particularly in sectors where AI adoption is fast-paced?
- How do anxieties about job displacement due to AI vary across different demographic groups, such as age, education, and occupation?
- How do public perceptions of AI’s technological complexity influence anxiety, and do these perceptions differ based on individuals’ levels of technical literacy?
- To what extent do concerns about the control that businesses, governments, and other institutions have over AI development contribute to public anxieties, particularly in relation to ethical governance and trust?
2. Literature Review
2.1. Economic Impact and Job Displacement
2.2. Data Privacy and Security Concerns
2.3. Ethical Governance and Trust
2.4. Social Pressures and Hidden Anxieties
3. Materials and Methods
3.1. Research Design
3.1.1. Focus Group Discussions
3.1.2. Survey Design and Data Collection
3.1.3. Semi-Structured Interviews
- “Can you elaborate on your concerns about how AI will impact your job or industry?”
- “How do you feel about the way AI systems handle personal data?”
- “Do you think AI is being developed too quickly for society to handle responsibly?”
3.1.4. Survey Questions and Variables
- Job Displacement (5 questions)
- Impact on Future Generations (4 questions)
3.2. Data Analysis Methods
3.2.1. Quantitative Data Analysis
1. Descriptive Statistics: Descriptive statistics were used to summarise the demographic characteristics of the survey respondents and their responses to the 19 Likert-scale questions. The mean, median, mode, and standard deviation were calculated for each theme, including job displacement, data privacy, and control over AI systems, to identify broad trends in public anxieties related to AI. Because the scale ran from 1 (Strongly Agree) to 6 (Strongly Disagree), low mean scores indicated heightened concern, for instance about job security and the rapid pace of AI development.
2. Correlation Analysis: A pairwise Pearson correlation analysis was performed to assess the relationships between demographic variables (e.g., age, gender, education, employment status) and levels of AI-related anxiety. Correlation coefficients range from −1 to +1, representing the direction and strength of each relationship: a coefficient closer to +1 indicates that as one variable increases, so does the other, while a coefficient closer to −1 indicates an inverse relationship.
3. Handling of Variables: Demographic variables were processed according to their nature: continuous variables such as age were analysed using Pearson correlation coefficients, while categorical variables such as gender and occupation were examined using cross-tabulations to assess group differences in anxiety levels. This distinction ensured that the analysis accurately captured the relationships between demographic factors and AI-related anxieties.
4. Reliability of Survey Instrument: Cronbach’s alpha was calculated for the 19 Likert-scale items to assess the internal consistency of the survey instrument. The alpha value exceeded 0.80, indicating high reliability and confirming that the items consistently measured the same underlying constructs (AI-related anxieties).
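The quantitative steps above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not the study's actual analysis code; the data-generating choices below (867 respondents, 19 items, a shared latent driver) merely mirror the survey's shape and are assumptions for the example.

```python
import numpy as np

def describe(theme_scores):
    """Mean, median, and sample standard deviation for one Likert-scale theme."""
    x = np.asarray(theme_scores, dtype=float)
    return {"mean": x.mean(), "median": float(np.median(x)), "sd": x.std(ddof=1)}

def pearson_r(x, y):
    """Pairwise Pearson correlation coefficient, in [-1, +1]."""
    return float(np.corrcoef(x, y)[0, 1])

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variance_sum = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variance_sum / total_variance)

# Illustrative synthetic data: 867 respondents, 19 items on a 1-6 scale,
# driven by a shared latent "anxiety" score so the items are internally consistent.
rng = np.random.default_rng(42)
latent = rng.integers(1, 7, size=(867, 1))
responses = np.clip(latent + rng.integers(-1, 2, size=(867, 19)), 1, 6)

print(describe(responses[:, 0]))
print(cronbach_alpha(responses))  # high (> 0.8) because items share a latent driver
```

Cross-tabulations for the categorical variables would follow the same pattern, grouping respondents by category before computing per-group summaries.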
3.2.2. Qualitative Data Analysis
1. Thematic Analysis of Focus Groups: Five focus group discussions, each comprising 10 participants, were conducted to explore how anxieties about AI were expressed in a public group setting. Initially, 85% of participants claimed they had no significant concerns about AI; however, after specific examples of AI-related risks were introduced, the proportion of participants admitting to such concerns rose to 88%.
2. Thematic Analysis of Interviews: Semi-structured interviews were conducted with 53 participants to gain deeper insights into the anxieties identified during the focus group and survey phases. The interviews followed a flexible guide, allowing respondents to elaborate on specific concerns related to job security, control over AI systems, and data privacy.
3. Inter-Rater Reliability: To ensure the reliability of the qualitative analysis, two independent researchers coded a subset of the interview and focus group transcripts. Cohen’s kappa was calculated to measure inter-rater reliability, yielding a value of 0.81, which indicates substantial agreement between coders. This process reduced subjective bias and enhanced the rigour of the thematic analysis.
4. Triangulation: Triangulation of the qualitative and quantitative data was employed to validate the findings and ensure consistency across data sources. Themes that emerged from the focus group discussions and interviews were cross-referenced with the survey results; for example, the concerns about job displacement expressed during interviews mirrored the high levels of anxiety reported in the survey. This triangulation reinforced the robustness of the study’s findings and provided a comprehensive understanding of public anxieties about AI.
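The agreement statistic in step 3 can be computed directly from two coders' labels. A minimal sketch, with invented theme labels for illustration (the study's actual coding scheme is not reproduced here):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must label the same non-empty set of units")
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if each coder assigned labels independently at
    # their own marginal rates.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned by two coders to eight transcript excerpts.
a = ["job", "job", "privacy", "control", "job", "privacy", "control", "control"]
b = ["job", "job", "privacy", "control", "privacy", "privacy", "control", "job"]
print(round(cohens_kappa(a, b), 2))  # → 0.63
```

Values above 0.61 are conventionally read as "substantial" agreement, which is the band the study's reported 0.81 falls well within.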
3.3. Ethical Considerations
4. Results and Discussion
4.1. Demographic Characteristics
4.2. Descriptive Statistics
4.3. Public Anxieties Related to AI
4.4. Job Loss and Economic Anxiety
4.5. Data Privacy and Security Concerns
4.6. Technological Complexity and Control
4.7. Future of the Next Generation
Correlation Analysis
4.8. Thematic Analysis of Interviews and Focus Groups
4.8.1. Job Loss and Economic Displacement
4.8.2. Data Privacy and Security Concerns
4.8.3. Control and Ethical Governance
4.8.4. Technological Complexity and the Future of the Next Generation
5. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Demographic Questions
- What is your age?
- What is your gender? (1 = male, 2 = female)
- What is your highest level of education? (1 = high school, 2 = college, 3 = post grad)
- What is your current occupation? (1 = unemployed, 2 = student, 3 = employed, 4 = self-employed, 5 = retired)
- What is your country of origin? (1 = UK, 2 = EU, 3 = rest of the world)
Likert Scale Questions (1 = Strongly Agree to 6 = Strongly Disagree)
- I feel anxious about the rapid pace at which AI technology is advancing. (Speed of Change)
- The speed at which AI is developing makes it difficult for society to adapt. (Speed of Change)
- I worry that AI will cause significant job losses. (Job Loss)
- AI technology poses a threat to job security for future generations. (Job Loss)
- Understanding AI technology is challenging for most people. (Technological Difficulty)
- The complexity of AI technology makes me uneasy. (Technological Difficulty)
- I believe that with AI, governments have too much control over citizens. (State Control)
- Employers can use AI in ways that can harm employees’ interests. (Employer Control)
- Businesses may exploit AI to maximise profits at the expense of societal well-being. (Business Control)
- AI development is crucial for the future success of the next generation. (Future of Children)
- I am concerned about the implications of AI for the future of my children. (Future of Children)
- The use of AI increases the risk of security breaches and data theft. (Security)
- AI technology makes personal data less secure. (Security)
- The development of AI could lead to more sophisticated cyberattacks. (Security)
- AI-driven automation could lead to significant societal changes. (Speed of Change)
- I feel anxious about the potential misuse of AI by governments. (State Control)
- Businesses do not fully consider the ethical implications of AI. (Business Control)
- The next generation will be better equipped to handle AI-related challenges than we are. (Future of Children)
- AI development should be regulated more strictly to ensure public safety. (Security)
Open-Ended Questions
- What is your biggest concern about the future development of AI? (General Anxiety)
- How do you think society should address the challenges posed by AI technology? (General Anxiety)
References
Statistic | Speed of Change | Job Loss | Tech Diff | Control | Future of Gen | Security
---|---|---|---|---|---|---
Mean | 1.540 | 1.333 | 2.283 | 1.793 | 3.238 | 1.755
Standard Error | 0.018 | 0.024 | 0.051 | 0.014 | 0.041 | 0.012
Median | 1.333 | 1.000 | 2.000 | 1.800 | 3.333 | 1.750
Mode | 1.00 | 1.00 | 1.00 | 1.80 | 4.00 | 1.75
Standard Deviation | 0.528 | 0.704 | 1.510 | 0.421 | 1.208 | 0.340
Sample Variance | 0.279 | 0.496 | 2.279 | 0.177 | 1.459 | 0.115
Kurtosis | 4.279 | 6.199 | −0.111 | −0.062 | −1.093 | −0.476
Skewness | 1.550 | 2.389 | 1.044 | 0.416 | −0.117 | −0.007
Minimum | 1 | 1 | 1 | 1 | 1 | 1
Maximum | 4.7 | 5.5 | 6.0 | 3.2 | 5.3 | 2.5
Count | 867 | 867 | 867 | 867 | 867 | 867
Confidence Level (95.0%) | 0.035 | 0.047 | 0.101 | 0.028 | 0.081 | 0.023
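The confidence-level row above can be sanity-checked against the standard deviations: for n = 867, the 95% half-width is approximately 1.96 × sd / √n (a normal approximation; the exact t multiplier for 866 degrees of freedom differs only in the third decimal). A quick verification:

```python
import math

def ci_half_width(std_dev, n, z=1.96):
    """Approximate 95% confidence half-width for a sample mean."""
    return z * std_dev / math.sqrt(n)

# Speed of Change: sd = 0.528, n = 867 -> ~0.035, matching the table.
print(round(ci_half_width(0.528, 867), 3))
# Job Loss: sd = 0.704 -> ~0.047, also matching.
print(round(ci_half_width(0.704, 867), 3))
```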
Theme | No. of Questions |
---|---|
Speed of change—pace of advancement and difficulty in keeping up the pace to adapt, societal changes | 3 |
Job Loss—unemployment, job security concerns | 2 |
Technological difficulty—challenges, ease | 2 |
Control—state control, employer control, business control | 5
Future of children—success and implication on next gen | 3 |
Security—data breach, data theft, data abuse, cyber attack | 4 |
TOTAL | 19 |
Question | 1 | 2 | 3 | 4 | 5 | 6 | High Anxiety % | Low Anxiety % | Factor Group
---|---|---|---|---|---|---|---|---|---
Q6 | 733 | 102 | 18 | 8 | 6 | 96.3% | 0.7% | Speed of change | |
Q7 | 737 | 96 | 21 | 8 | 5 | 96.1% | 0.6% | Speed of change | |
Q8 | 675 | 105 | 70 | 11 | 6 | 90.0% | 0.7% | Job loss | |
Q9 | 687 | 106 | 61 | 7 | 4 | 2 | 91.5% | 0.7% | Job loss |
Q10 | 364 | 220 | 89 | 72 | 89 | 33 | 67.4% | 14.1% | Technological difficulties |
Q11 | 399 | 204 | 90 | 45 | 75 | 54 | 69.6% | 14.9% | Technological difficulties |
Q12 | 699 | 95 | 73 | 91.6% | 0.0% | Fear of control | |||
Q13 | 604 | 191 | 72 | 91.7% | 0.0% | Fear of control | |||
Q14 | 509 | 162 | 106 | 45 | 40 | 5 | 77.4% | 5.2% | Fear of control |
Q15 | 142 | 145 | 136 | 151 | 145 | 148 | 33.1% | 33.8% | Future of generation |
Q16 | 200 | 225 | 231 | 211 | 49.0% | 0.0% | Future of generation | ||
Q17 | 285 | 291 | 291 | 66.4% | 0.0% | Security concerns | |||
Q18 | 431 | 436 | 100.0% | 0.0% | Security concerns | ||||
Q19 | 416 | 451 | 100.0% | 0.0% | Security concerns | ||||
Q20 | 300 | 235 | 195 | 137 | 61.7% | 0.0% | Speed of change | ||
Q21 | 285 | 289 | 293 | 66.2% | 0.0% | Fear of control | |||
Q22 | 224 | 202 | 223 | 218 | 49.1% | 0.0% | Fear of control | ||
Q23 | 158 | 126 | 113 | 125 | 140 | 205 | 32.8% | 39.8% | Future of generation |
Q24 | 294 | 286 | 287 | 66.9% | 0.0% | Security concerns |
Rank | Theme |
---|---|
1 | Speed of change |
2 | Job loss |
3 | Security concerns |
4 | Fear of control |
5 | Technological difficulties |
6 | Future of next generation |
| | Origin | Speed | Job Loss | Tech Diff | Control | Future | Security |
|---|---|---|---|---|---|---|---|
| Origin | 1.00 | | | | | | |
| Speed | 0.03 | 1.00 | | | | | |
| Job Loss | 0.93 | 0.03 | 1.00 | | | | |
| Tech Diff | −0.01 | −0.03 | −0.02 | 1.00 | | | |
| Control | 0.26 | 0.00 | 0.25 | −0.06 | 1.00 | | |
| Future | 0.03 | −0.09 | 0.03 | −0.01 | −0.02 | 1.00 | |
| Security | 0.00 | −0.01 | 0.00 | −0.02 | −0.02 | 0.02 | 1.00 |
| | Occupation | Speed | Job Loss | Tech Diff | Control | Future | Security |
|---|---|---|---|---|---|---|---|
| Occupation | 1.00 | | | | | | |
| Speed | −0.01 | 1.00 | | | | | |
| Job Loss | 0.00 | 0.03 | 1.00 | | | | |
| Tech Diff | 0.00 | −0.03 | −0.02 | 1.00 | | | |
| Control | 0.03 | 0.00 | 0.25 | −0.06 | 1.00 | | |
| Future | 0.05 | −0.09 | 0.03 | −0.01 | −0.02 | 1.00 | |
| Security | 0.04 | −0.01 | 0.00 | −0.02 | −0.02 | 0.02 | 1.00 |
| | Education | Speed | Job Loss | Tech Diff | Control | Future | Security |
|---|---|---|---|---|---|---|---|
| Education | 1.00 | | | | | | |
| Speed | 0.01 | 1.00 | | | | | |
| Job Loss | 0.00 | 0.03 | 1.00 | | | | |
| Tech Diff | −0.03 | −0.03 | −0.02 | 1.00 | | | |
| Control | −0.01 | 0.00 | 0.25 | −0.06 | 1.00 | | |
| Future | 0.01 | −0.09 | 0.03 | −0.01 | −0.02 | 1.00 | |
| Security | 0.00 | −0.01 | 0.00 | −0.02 | −0.02 | 0.02 | 1.00 |
| | Gender | Speed | Job Loss | Tech Diff | Control | Future | Security |
|---|---|---|---|---|---|---|---|
| Gender | 1.00 | | | | | | |
| Speed | −0.01 | 1.00 | | | | | |
| Job Loss | −0.02 | 0.03 | 1.00 | | | | |
| Tech Diff | −0.06 | −0.03 | −0.02 | 1.00 | | | |
| Control | −0.02 | 0.00 | 0.25 | −0.06 | 1.00 | | |
| Future | 0.01 | −0.09 | 0.03 | −0.01 | −0.02 | 1.00 | |
| Security | 0.00 | −0.01 | 0.00 | −0.02 | −0.02 | 0.02 | 1.00 |
| | Age | Speed | Job Loss | Tech Diff | Control | Future | Security |
|---|---|---|---|---|---|---|---|
| Age | 1.00 | | | | | | |
| Speed | 0.01 | 1.00 | | | | | |
| Job Loss | −0.03 | 0.03 | 1.00 | | | | |
| Tech Diff | 0.01 | −0.03 | −0.02 | 1.00 | | | |
| Control | 0.03 | 0.00 | 0.25 | −0.06 | 1.00 | | |
| Future | 0.04 | −0.09 | 0.03 | −0.01 | −0.02 | 1.00 | |
| Security | −0.01 | −0.01 | 0.00 | −0.02 | −0.02 | 0.02 | 1.00 |
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Gerlich, M. Public Anxieties About AI: Implications for Corporate Strategy and Societal Impact. Adm. Sci. 2024, 14, 288. https://doi.org/10.3390/admsci14110288