Employees’ Appraisals and Trust of Artificial Intelligences’ Transparency and Opacity
Abstract
1. Introduction
2. Theoretical Background and Research Hypotheses
2.1. Cognitive Appraisal Theory
2.2. AI Transparency and Employees’ Trust in AI
2.3. The Mediating Effect of Employees’ Challenge Appraisals
2.4. The Mediating Effect of Employees’ Threat Appraisals
2.5. Employees’ Appraisals of AI in AI Transparency and Opacity
2.6. The Moderating Effect of Employees’ Domain Knowledge of AI
3. Methodology
3.1. Sample and Data Collection
3.2. Procedure and Manipulation
3.3. Measures
4. Results
4.1. Validity and Reliability
4.2. Manipulation Check of AI Transparency
4.3. Difference Test for Challenge Appraisals, Threat Appraisals, and Trust
4.4. Test of Mediating Effect
4.5. Test of Moderating Effect
5. Discussion
5.1. Theoretical Implications
5.2. Practical Implications
5.3. Limitations and Suggestions for Future Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
1. Glikson, E.; Woolley, A.W. Human Trust in Artificial Intelligence: Review of Empirical Research. Acad. Manag. Ann. 2020, 14, 627–660.
2. Hengstler, M.; Enkel, E.; Duelli, S. Applied Artificial Intelligence and Trust—The Case of Autonomous Vehicles and Medical Assistance Devices. Technol. Forecast. Soc. Chang. 2016, 105, 105–120.
3. Guan, H.; Dong, L.; Zhao, A. Ethical Risk Factors and Mechanisms in Artificial Intelligence Decision Making. Behav. Sci. 2022, 12, 343.
4. Siau, K.; Wang, W. Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. J. Database Manag. 2020, 31, 74–87.
5. Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI 2017), Melbourne, Australia, 19–25 August 2017; Volume 17, pp. 4691–4697.
6. Zhao, R.; Benbasat, I.; Cavusoglu, H. Do Users Always Want to Know More? Investigating the Relationship between System Transparency and Users’ Trust in Advice-Giving Systems. In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm/Uppsala, Sweden, 8–14 June 2019.
7. Liu, B. In AI We Trust? Effects of Agency Locus and Transparency on Uncertainty Reduction in Human–AI Interaction. J. Comput.-Mediat. Commun. 2021, 26, 384–402.
8. Felzmann, H.; Villaronga, E.F.; Lutz, C.; Tamò-Larrieux, A. Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns. Big Data Soc. 2019, 6, 2053951719860542.
9. Höddinghaus, M.; Sondern, D.; Hertel, G. The Automation of Leadership Functions: Would People Trust Decision Algorithms? Comput. Hum. Behav. 2021, 116, 106635.
10. Cramer, H.; Evers, V.; Ramlal, S.; Van Someren, M.; Rutledge, L.; Stash, N.; Aroyo, L.; Wielinga, B. The Effects of Transparency on Trust in and Acceptance of a Content-Based Art Recommender. User Model. User-Adapt. Interact. 2008, 18, 455–496.
11. Dogruel, L. Too Much Information? Examining the Impact of Different Levels of Transparency on Consumers’ Evaluations of Targeted Advertising. Commun. Res. Rep. 2019, 36, 383–392.
12. Juma, C. Innovation and Its Enemies: Why People Resist New Technologies; Oxford University Press: Oxford, UK, 2016.
13. Lazarus, R.S.; Folkman, S. Stress, Appraisal, and Coping; Springer Publishing Company: New York, NY, USA, 1984.
14. Cao, J.; Yao, J. Linking Different Artificial Intelligence Functions to Employees’ Psychological Appraisals and Work. In Academy of Management Proceedings; Academy of Management: Briarcliff Manor, NY, USA, 2020; Volume 2020, p. 19876.
15. Hoff, K.A.; Bashir, M. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Hum. Factors 2015, 57, 407–434.
16. Allen, R.; Choudhury, P. Algorithm-Augmented Work and Domain Experience: The Countervailing Forces of Ability and Aversion. Organ. Sci. 2022, 33, 149–169.
17. Ragot, M.; Martin, N.; Cojean, S. AI-Generated vs. Human Artworks. A Perception Bias towards Artificial Intelligence? In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–10.
18. Chiu, Y.-T.; Zhu, Y.-Q.; Corbett, J. In the Hearts and Minds of Employees: A Model of Pre-Adoptive Appraisal toward Artificial Intelligence in Organizations. Int. J. Inf. Manag. 2021, 60, 102379.
19. Walker, K.L. Surrendering Information through the Looking Glass: Transparency, Trust, and Protection. J. Public Policy Mark. 2016, 35, 144–158.
20. Kulesza, T.; Stumpf, S.; Burnett, M.; Yang, S.; Kwan, I.; Wong, W.-K. Too Much, Too Little, or Just Right? Ways Explanations Impact End Users’ Mental Models. In Proceedings of the 2013 IEEE Symposium on Visual Languages and Human Centric Computing, San Jose, CA, USA, 15–19 September 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 3–10.
21. Chander, A.; Srinivasan, R.; Chelian, S.; Wang, J.; Uchino, K. Working with Beliefs: AI Transparency in the Enterprise. In Proceedings of the IUI Workshops, Tokyo, Japan, 11 March 2018.
22. de Fine Licht, K.; de Fine Licht, J. Artificial Intelligence, Transparency, and Public Decision-Making. AI Soc. 2020, 35, 917–926.
23. de Fine Licht, J.; Naurin, D.; Esaiasson, P.; Gilljam, M. When Does Transparency Generate Legitimacy? Experimenting on a Context-Bound Relationship. Governance 2014, 27, 111–134.
24. Dzindolet, M.T.; Peterson, S.A.; Pomranky, R.A.; Pierce, L.G.; Beck, H.P. The Role of Trust in Automation Reliance. Int. J. Hum.-Comput. Stud. 2003, 58, 697–718.
25. Wang, W.; Benbasat, I. Recommendation Agents for Electronic Commerce: Effects of Explanation Facilities on Trusting Beliefs. J. Manag. Inf. Syst. 2007, 23, 217–246.
26. Wang, W.; Benbasat, I. Empirical Assessment of Alternative Designs for Enhancing Different Types of Trusting Beliefs in Online Recommendation Agents. J. Manag. Inf. Syst. 2016, 33, 744–775.
27. Kovoor-Misra, S. Understanding Perceived Organizational Identity during Crisis and Change: A Threat/Opportunity Framework. J. Organ. Chang. Manag. 2009, 22, 494–510.
28. Liu, K.; Tao, D. The Roles of Trust, Personalization, Loss of Privacy, and Anthropomorphism in Public Acceptance of Smart Healthcare Services. Comput. Hum. Behav. 2022, 127, 107026.
29. Brougham, D.; Haar, J. Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA): Employees’ Perceptions of Our Future Workplace. J. Manag. Organ. 2018, 24, 239–257.
30. Schmid, K.; Ramiah, A.A.; Hewstone, M. Neighborhood Ethnic Diversity and Trust: The Role of Intergroup Contact and Perceived Threat. Psychol. Sci. 2014, 25, 665–674.
31. Doshi-Velez, F.; Kim, B. Considerations for Evaluation and Generalization in Interpretable Machine Learning. In Explainable and Interpretable Models in Computer Vision and Machine Learning; Springer: Cham, Switzerland, 2018; pp. 3–17.
32. Lipton, Z.C. The Mythos of Model Interpretability: In Machine Learning, the Concept of Interpretability Is Both Important and Slippery. Queue 2018, 16, 31–57.
33. Kim, B.; Park, J.; Suh, J. Transparency and Accountability in AI Decision Support: Explaining and Visualizing Convolutional Neural Networks for Text Information. Decis. Support Syst. 2020, 134, 113302.
34. Parasuraman, R.; Manzey, D.H. Complacency and Bias in Human Use of Automation: An Attentional Integration. Hum. Factors 2010, 52, 381–410.
35. Logg, J.M.; Minson, J.A.; Moore, D.A. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organ. Behav. Hum. Decis. Process. 2019, 151, 90–103.
36. Kim, T.; Hinds, P. Who Should I Blame? Effects of Autonomy and Transparency on Attributions in Human–Robot Interaction. In Proceedings of ROMAN 2006—The 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 80–85.
37. Ötting, S.K.; Maier, G.W. The Importance of Procedural Justice in Human–Machine Interactions: Intelligent Systems as New Decision Agents in Organizations. Comput. Hum. Behav. 2018, 89, 27–39.
38. Faul, F.; Erdfelder, E.; Lang, A.-G.; Buchner, A. G*Power 3: A Flexible Statistical Power Analysis Program for the Social, Behavioral, and Biomedical Sciences. Behav. Res. Methods 2007, 39, 175–191.
39. Hu, J.; Ma, X.; Xu, X.; Liu, Y. Treat for Affection? Customers’ Differentiated Responses to Pro-Customer Deviance. Tour. Manag. 2022, 93, 104619.
40. Huang, M.; Ju, D.; Yam, K.C.; Liu, S.; Qin, X.; Tian, G. Employee Humor Can Shield Them from Abusive Supervision. J. Bus. Ethics 2022, 1–18.
41. Zhang, Q.; Wang, X.-H.; Nerstad, C.G.; Ren, H.; Gao, R. Motivational Climates, Work Passion, and Behavioral Consequences. J. Organ. Behav. 2022, 43, 1579–1597.
42. Aguinis, H.; Villamor, I.; Ramani, R.S. MTurk Research: Review and Recommendations. J. Manag. 2021, 47, 823–837.
43. Drach-Zahavy, A.; Erez, M. Challenge versus Threat Effects on the Goal–Performance Relationship. Organ. Behav. Hum. Decis. Process. 2002, 88, 667–682.
44. Zhou, L.; Wang, W.; Xu, J.D.; Liu, T.; Gu, J. Perceived Information Transparency in B2C E-Commerce: An Empirical Investigation. Inf. Manag. 2018, 55, 912–927.
45. Hayes, A.F. An Index and Test of Linear Moderated Mediation. Multivar. Behav. Res. 2015, 50, 1–22.
46. Schwartz, R.; Vassilev, A.; Greene, K.; Perine, L.; Burt, A.; Hall, P. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. NIST Spec. Publ. 2022, 1270, 1–77.
47. Gutiérrez, F.; Seipp, K.; Ochoa, X.; Chiluiza, K.; De Laet, T.; Verbert, K. LADA: A Learning Analytics Dashboard for Academic Advising. Comput. Hum. Behav. 2020, 107, 105826.
48. Schmidt, P.; Biessmann, F.; Teubner, T. Transparency and Trust in Artificial Intelligence Systems. J. Decis. Syst. 2020, 29, 260–278.
49. Smith, C. An Employee’s Best Friend? How AI Can Boost Employee Engagement and Performance. Strateg. HR Rev. 2019, 18, 17–20.
50. IBM. Trust and Transparency in AI. Available online: https://www.ibm.com/watson/trust-transparency (accessed on 18 January 2019).
| Constructs | Items | References |
|---|---|---|
| Perceived transparency | I can access a great deal of information which explains how the AI system works. | Zhao et al. [6] |
| | I can see plenty of information about the AI system’s inner logic. | |
| | I feel that the amount of available information regarding the AI system’s reasoning is large. | |
| Challenge appraisals | AI-involved work seems like a challenge to me. | Drach-Zahavy and Erez [43] |
| | AI-involved work provides opportunities to exercise reasoning skills. | |
| | AI-involved work provides opportunities to overcome obstacles. | |
| | AI-involved work provides opportunities to strengthen my self-esteem. | |
| Threat appraisals | AI-involved work seems like a threat to me. | Drach-Zahavy and Erez [43] |
| | I’m worried that AI-involved work might reveal my weaknesses. | |
| | AI-involved work seems long and tiresome. | |
| | I’m worried that AI-involved work might threaten my self-esteem. | |
| Trust | I would heavily rely on the AI system. | Höddinghaus et al. [9] |
| | I would trust the AI system completely. | |
| | I would feel comfortable relying on the AI system. | |
| Domain knowledge | I know pretty much about AI systems. | Zhou et al. [44] |
| | Among my circle of friends, I’m one of the “experts” on AI systems. | |
| | Compared to most other people, I know less about AI systems. | |
| Constructs | Items | Standardized Factor Loading (λ) | t-Value | Cronbach’s α | CR | AVE |
|---|---|---|---|---|---|---|
| Trust | TRU01 | 0.922 | 23.072 | 0.934 | 0.936 | 0.829 |
| | TRU02 | 0.891 | 21.788 | | | |
| | TRU03 | 0.918 | 22.903 | | | |
| Challenge appraisals | CHA01 | 0.490 | 9.523 | 0.831 | 0.835 | 0.568 |
| | CHA02 | 0.817 | 18.271 | | | |
| | CHA03 | 0.860 | 19.701 | | | |
| | CHA04 | 0.791 | 17.460 | | | |
| Threat appraisals | THR01 | 0.853 | 20.007 | 0.888 | 0.892 | 0.676 |
| | THR02 | 0.866 | 20.451 | | | |
| | THR03 | 0.655 | 13.777 | | | |
| | THR04 | 0.892 | 21.426 | | | |
| Perceived transparency | PER01 | 0.972 | 25.735 | 0.969 | 0.969 | 0.913 |
| | PER02 | 0.953 | 24.825 | | | |
| | PER03 | 0.942 | 24.292 | | | |
| Domain knowledge | KNO01 | 0.846 | 19.650 | 0.899 | 0.902 | 0.755 |
| | KNO02 | 0.888 | 21.163 | | | |
| | KNO03 | 0.872 | 20.581 | | | |
| Constructs | Mean | SD | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|---|---|
| 1. Trust | 4.708 | 1.474 | 0.910 | | | | |
| 2. Challenge appraisals | 5.064 | 1.125 | 0.495 *** | 0.754 | | | |
| 3. Threat appraisals | 2.785 | 1.254 | −0.385 *** | −0.389 *** | 0.822 | | |
| 4. Perceived transparency | 3.939 | 2.190 | 0.596 *** | 0.395 *** | −0.145 ** | 0.956 | |
| 5. Domain knowledge | 4.908 | 1.184 | 0.414 *** | 0.551 *** | −0.370 *** | 0.351 *** | 0.869 |

Note: Diagonal values are the square roots of each construct’s AVE; ** p < 0.01, *** p < 0.001.
| | AI Opacity | AI Transparency | F-Test |
|---|---|---|---|
| Challenge appraisals | 4.795 (SE = 0.091) | 5.344 (SE = 0.065) | F(1, 373) = 23.687, p < 0.001 |
| Threat appraisals | 2.916 (SE = 0.094) | 2.650 (SE = 0.088) | F(1, 373) = 4.281, p < 0.05 |
| Trust | 3.967 (SE = 0.116) | 5.476 (SE = 0.058) | F(1, 373) = 133.063, p < 0.001 |
| | Correlation | Challenge Appraisals | Threat Appraisals | t-Value | df | Sig. |
|---|---|---|---|---|---|---|
| AI opacity | −0.322 *** | 4.795 (SE = 0.091) | 2.916 (SE = 0.094) | 12.465 | 190 | p < 0.001 |
| AI transparency | −0.242 *** | 5.344 (SE = 0.065) | 2.650 (SE = 0.088) | 22.185 | 183 | p < 0.001 |
| Dependent Variable | Independent Variable | β | SE | t | LLCI | ULCI | R² | F |
|---|---|---|---|---|---|---|---|---|
| Trust | Constant | 3.677 *** | 0.568 | 6.472 | 2.560 | 4.794 | 0.265 | 33.304 *** |
| | AI transparency | 1.528 *** | 0.134 | 11.407 | 1.265 | 1.792 | | |
| | Gender | −0.100 | 0.133 | −0.753 | −0.360 | 0.161 | | |
| | Age | 0.008 | 0.013 | 0.633 | −0.017 | 0.033 | | |
| | Educational background | 0.026 | 0.125 | 0.211 | −0.219 | 0.271 | | |
| Challenge appraisals | Constant | 4.623 *** | 0.489 | 9.448 | 3.661 | 5.585 | 0.064 | 6.356 *** |
| | AI transparency | 0.531 *** | 0.115 | 4.603 | 0.304 | 0.758 | | |
| | Gender | −0.016 | 0.114 | −0.137 | −0.240 | 0.209 | | |
| | Age | −0.006 | 0.011 | −0.577 | −0.027 | 0.015 | | |
| | Educational background | 0.126 | 0.107 | 1.175 | −0.085 | 0.337 | | |
| Threat appraisals | Constant | 4.399 *** | 0.553 | 7.954 | 3.311 | 5.486 | 0.037 | 3.558 ** |
| | AI transparency | −0.284 * | 0.130 | −2.174 | −0.540 | −0.027 | | |
| | Gender | 0.101 | 0.129 | 0.784 | −0.153 | 0.355 | | |
| | Age | −0.015 | 0.012 | −1.226 | −0.039 | 0.009 | | |
| | Educational background | −0.346 ** | 0.121 | −2.850 | −0.585 | −0.107 | | |
| Trust | Constant | 3.690 *** | 0.639 | 5.773 | 2.433 | 4.947 | 0.413 | 43.056 *** |
| | AI transparency | 1.283 *** | 0.124 | 10.383 | 1.040 | 1.526 | | |
| | Challenge appraisals | 0.294 *** | 0.057 | 5.214 | 0.183 | 0.406 | | |
| | Threat appraisals | −0.312 *** | 0.050 | −6.253 | −0.411 | −0.214 | | |
| | Gender | −0.064 | 0.119 | −0.535 | −0.297 | 0.170 | | |
| | Age | 0.005 | 0.011 | 0.452 | −0.017 | 0.027 | | |
| | Educational background | −0.119 | 0.113 | −1.053 | −0.341 | 0.103 | | |

Note: LLCI and ULCI denote the lower and upper limits of the 95% confidence interval.
| | Effect | SE | LLCI | ULCI |
|---|---|---|---|---|
| Total effect | 1.528 | 0.134 | 1.265 | 1.792 |
| Direct effect | 1.283 | 0.124 | 1.040 | 1.526 |
| Total mediating effect | 0.245 | 0.073 | 0.112 | 0.399 |
| Mediating effect via challenge appraisals | 0.156 | 0.052 | 0.068 | 0.272 |
| Mediating effect via threat appraisals | 0.089 | 0.044 | 0.009 | 0.183 |

Note: LLCI and ULCI denote the lower and upper limits of the 95% confidence interval.
| Dependent Variable | Independent Variable | β | SE | t | LLCI | ULCI | R² | F |
|---|---|---|---|---|---|---|---|---|
| Challenge appraisals | Constant | 2.861 *** | 0.497 | 5.760 | 1.885 | 3.838 | 0.244 | 19.785 *** |
| | AI transparency | 1.268 * | 0.491 | 2.584 | 0.303 | 2.233 | | |
| | Domain knowledge | 0.485 *** | 0.056 | 8.610 | 0.374 | 0.596 | | |
| | AI transparency × Domain knowledge | −0.201 * | 0.095 | −2.116 | −0.389 | −0.014 | | |
| | Gender | −0.063 | 0.104 | −0.604 | −0.267 | 0.141 | | |
| | Age | −0.007 | 0.010 | −0.719 | −0.026 | 0.012 | | |
| | Educational background | −0.010 | 0.098 | −0.101 | −0.202 | 0.183 | | |
| Threat appraisals | Constant | 6.048 *** | 0.590 | 10.257 | 4.889 | 7.207 | 0.142 | 10.139 *** |
| | AI transparency | −1.526 ** | 0.583 | −2.619 | −2.671 | −0.380 | | |
| | Domain knowledge | −0.436 *** | 0.067 | −6.522 | −0.568 | −0.305 | | |
| | AI transparency × Domain knowledge | 0.291 * | 0.113 | 2.577 | 0.069 | 0.514 | | |
| | Gender | 0.124 | 0.123 | 1.003 | −0.119 | 0.366 | | |
| | Age | −0.015 | 0.012 | −1.317 | −0.038 | 0.008 | | |
| | Educational background | −0.231 * | 0.116 | −1.991 | −0.460 | −0.003 | | |
| Trust | Constant | 3.690 *** | 0.639 | 5.773 | 2.433 | 4.947 | 0.413 | 43.056 *** |
| | AI transparency | 1.283 *** | 0.124 | 10.383 | 1.040 | 1.526 | | |
| | Challenge appraisals | 0.294 *** | 0.057 | 5.214 | 0.183 | 0.406 | | |
| | Threat appraisals | −0.312 *** | 0.050 | −6.253 | −0.411 | −0.214 | | |
| | Gender | −0.064 | 0.119 | −0.535 | −0.297 | 0.170 | | |
| | Age | 0.005 | 0.011 | 0.452 | −0.017 | 0.027 | | |
| | Educational background | −0.119 | 0.113 | −1.053 | −0.341 | 0.103 | | |

Note: LLCI and ULCI denote the lower and upper limits of the 95% confidence interval.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, L.; Li, Y.; Fan, F. Employees’ Appraisals and Trust of Artificial Intelligences’ Transparency and Opacity. Behav. Sci. 2023, 13, 344. https://doi.org/10.3390/bs13040344