Article

When Healthcare Professionals Use AI: Exploring Work Well-Being Through Psychological Needs Satisfaction and Job Complexity

1 SILC Business School, Shanghai University, Shanghai 200444, China
2 School of Management, Fudan University, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Behav. Sci. 2025, 15(1), 88; https://doi.org/10.3390/bs15010088
Submission received: 4 November 2024 / Revised: 14 January 2025 / Accepted: 16 January 2025 / Published: 18 January 2025

Abstract

This study examines how the use of artificial intelligence (AI) by healthcare professionals affects their work well-being through the satisfaction of basic psychological needs, framed within Self-Determination Theory. Data were collected from 280 healthcare professionals across various departments in Chinese hospitals, and hierarchical regression analyses were conducted to assess the relationships between the use of AI, psychological needs satisfaction (autonomy, competence, and relatedness), and work well-being. The results reveal that the use of AI enhances work well-being indirectly by increasing the satisfaction of these psychological needs. Additionally, job complexity serves as a boundary condition that moderates the relationship between the use of AI and work well-being. Specifically, job complexity weakens the relationship between the use of AI and the satisfaction of autonomy and competence, while having no significant effect on the relationship between the use of AI and the satisfaction of relatedness. These findings suggest that the impact of the use of AI on healthcare professionals’ well-being is contingent on job complexity. This study highlights that promoting healthcare professionals’ well-being at work in the context of AI adoption requires not only technological implementation but also ongoing adaptation to meet their evolving psychological needs. These insights provide a theoretical foundation and practical guidance for integrating AI into healthcare to support the well-being of healthcare professionals.

1. Introduction

In recent years, global healthcare systems have confronted unprecedented challenges, with healthcare professionals experiencing intensifying work stress amid persistent workforce shortages (Bamforth et al., 2023; Ahmed, 2019). Meanwhile, medical artificial intelligence (AI) has demonstrated substantial advancements across multiple domains, including radiological imaging (Lebovitz et al., 2022; Jussupow et al., 2021), workflow optimization (Topol, 2019; Ahmed et al., 2022), and intelligent health management (Topol, 2019), alleviating healthcare professionals’ workload and augmenting the precision of clinical diagnosis and treatment decision-making (Ahmed et al., 2022; Delshad et al., 2021). For instance, in radiological imaging, AI systems leveraging big data techniques assist physicians in rapidly and accurately analyzing CT, X-ray, MRI, and other imaging data, automatically identifying potential lesion areas. This improves the accuracy and efficiency of detecting conditions such as lung cancer (Topol, 2019). Moreover, AI can extract pertinent information from patient histories, automatically generate medical summaries and treatment recommendations, and update medical records in real time. This streamlines health data management, enabling physicians to focus on clinical judgment while ensuring accurate and up-to-date patient information (Jussupow et al., 2021; Topol, 2019). These technological advancements not only improve the efficiency of medical services but also profoundly impact the work experiences and work well-being of healthcare professionals. As the cornerstone of the healthcare system, the work well-being of healthcare professionals is intrinsically tied to the quality of patient care, team collaboration, and the overall sustainability of the healthcare system (Bamforth et al., 2023). 
Given the ongoing and widespread integration of medical AI, it has become increasingly critical to conduct a comprehensive examination of its impact on healthcare professionals’ work well-being. Such an investigation is essential for optimizing the application of AI technologies and fostering sustainable work environments in healthcare settings.
A comprehensive review of the existing literature reveals that research on medical AI has primarily concentrated on examining technological acceptance among patients and healthcare professionals (Huo et al., 2024a; Huo et al., 2024b; Cai et al., 2024; Huo et al., 2023; Buck et al., 2022). Despite the rapid adoption of AI technologies in medical practice, there remains a significant gap in understanding the full impact of AI on healthcare professionals, particularly with respect to its utility and psychological implications. In other professional domains, the literature on AI usage has largely focused on its effects on work outcomes (Man Tang et al., 2022; Shao et al., 2024; Chowdhury et al., 2023; Malik et al., 2023). Existing studies have approached AI usage through theoretical lenses such as complementarity and role theory (Man Tang et al., 2022), self-regulation theory (Man Tang et al., 2023), and cognitive load theory (Shao et al., 2024), predominantly analyzing how AI usage affects job performance. However, these investigations have critically overlooked subjective work experience related to employee work well-being, thereby presenting a notable research gap. The introduction of AI transcends mere performance optimization; it fundamentally reshapes work experiences and psychological dynamics within work environments (Chowdhury et al., 2023; Malik et al., 2023). While current empirical research predominantly conceptualizes AI as a performance enhancement tool, it fails to appreciate the technology’s profound transformative potential on professionals’ work processes and experiences. The nuanced mechanisms through which AI usage influences work well-being remain largely unexplored and theoretically underdeveloped. 
Despite the pervasive integration of medical AI in healthcare, there is still a substantial deficit in our understanding of how, when, and to what extent AI usage affects the work well-being of healthcare professionals (Man Tang et al., 2023; Au-Yong-Oliveira et al., 2021; Bauer & Thamm, 2021).
Accordingly, this study aims to bridge the gap in existing research by examining the impact of AI usage on healthcare professionals’ work well-being. We argue that in high-stress medical environments, medical AI offers the potential to enhance work efficiency by automating routine tasks, streamlining diagnostic processes, and supporting clinical decision-making (Bekbolatova et al., 2024), thereby alleviating professional burden and improving their work well-being. To frame this exploration, we draw upon Self-Determination Theory (SDT), which offers a comprehensive theoretical lens for understanding the complex relationship between AI usage and healthcare professionals’ work well-being. According to SDT, basic psychological needs serve as key mediators linking contextual factors, such as AI adoption, to employee well-being (Ryan & Deci, 2000; Duan et al., 2024). SDT posits that three fundamental psychological needs—autonomy, competence, and relatedness—are essential for fostering optimal motivation and well-being. When work environments meet these needs, employees are more likely to experience heightened intrinsic motivation and enhanced psychological well-being (Olafsen & Frølund, 2018; Van den Broeck et al., 2016). Building on this framework, we propose that AI usage influences these three dimensions of psychological needs satisfaction in distinct ways. First, AI can enhance the needs for autonomy and competence satisfaction of healthcare professionals by improving clinical decision-making efficiency and supporting skill development, thereby boosting job performance and intrinsic motivation. Second, AI can foster the need for relatedness satisfaction by strengthening healthcare professionals’ sense of connections and communications, contributing to greater well-being and satisfaction. Together, these three basic psychological needs function synergistically to promote healthcare professionals’ work well-being.
Building upon the framework of SDT, the present study also seeks to explore the boundary conditions under which AI utilization affects psychological needs satisfaction and work well-being. The existing literature in the healthcare domain has primarily focused on the technology itself (Topol, 2019), its features (Jussupow et al., 2021), or individual traits as factors influencing psychological needs (Arslan et al., 2022). However, there is a notable gap in research addressing how unique job characteristics within the medical field may moderate these relationships. In this context, McAnally and Hagger (2024) emphasize that job characteristics can interact with contextual factors, such as AI adoption, to influence the satisfaction of basic psychological needs, which in turn affects employees’ psychological states and behaviors. Building on this insight, we propose job complexity as a critical moderating boundary condition in the relationship between the use of AI, psychological needs satisfaction, and work well-being. By considering varying levels of job complexity, this study aims to deepen our understanding of how AI impacts healthcare professionals’ work experiences across medical work environments.
In conclusion, our research aims to address a critical question: how (i.e., through psychological needs satisfaction) and when (i.e., in relation to job complexity) does medical AI usage influence healthcare professionals’ work well-being? To answer this question, we develop a comprehensive theoretical framework to elucidate the impact of medical AI on healthcare professionals’ work well-being (as illustrated in Figure 1). The model is subsequently tested using data from 280 online survey responses collected from healthcare professionals in Chinese hospitals. Our study makes several key contributions. First, by drawing on SDT, we shift the research focus beyond technological acceptance to explore the positive and nuanced psychological impacts of AI usage (Huo et al., 2024b; Huo et al., 2023). By examining healthcare professionals’ subjective experiences following the implementation of AI technologies, we move beyond traditional performance-centered frameworks that primarily focus on task performance (Man Tang et al., 2022, 2023; Shao et al., 2024; Leroy, 2024). Second, we extend the application of SDT by incorporating the need for autonomy satisfaction, need for competence satisfaction, and need for relatedness satisfaction into the AI workplace context. This approach offers a theoretically grounded explanation of how AI usage influences work well-being, thereby expanding the theoretical boundaries of SDT within digital work environments. Finally, we investigate the boundary conditions of use of AI by introducing job complexity as a critical moderating variable. While the existing healthcare literature predominantly focuses on technological features (Jussupow et al., 2021; Topol, 2019; Xu et al., 2023), our study addresses a significant research gap by examining the role of job characteristics as a moderating factor. 
By considering the unique work characteristics of the medical field, we provide a more nuanced understanding of the psychological implications of AI usage in healthcare settings.

2. Theoretical Background and Hypotheses Development

2.1. Use of AI and Work Well-Being

Self-Determination Theory. Self-Determination Theory (SDT), introduced in the late 1970s by American psychologists Edward Deci and Richard Ryan, explores the dynamics of motivation in human behavior. SDT elucidates how external environmental factors influence internal motivation and the process of internalization, shedding light on the pathways that shape individual motivation (Ryan & Deci, 2000; Deci & Ryan, 1985; Ryan & Deci, 2020; Van den Broeck et al., 2016). Empirical research within the SDT framework, viewed through an interdisciplinary lens, consistently demonstrates that when the social environment supports individuals’ innate needs for autonomy, competence, and relatedness, this support not only fosters high-quality motivation but also enhances health-promoting behaviors, vitality, and the pursuit of life goals, thereby contributing to overall well-being (Deci & Ryan, 1985; Ryan & Deci, 2020; Ryan et al., 2022; Sheldon & Prentice, 2019; Gillison et al., 2019). As a result, an increasing body of literature suggests that SDT provides a robust and comprehensive framework for understanding human motivation, particularly in the context of a rapidly changing world. It offers valuable insights into the underlying mechanisms of behavior change, both in real time and in anticipation of future shifts. Central to this theory are three dimensions of satisfaction of psychological needs—autonomy, competence, and relatedness—that foster internal motivation. Autonomy emphasizes self-determination and self-regulation, reflecting a sense of ownership and psychological freedom in one’s actions. Competence pertains to the need to proficiently interact with the environment and cultivate new skills, capturing an individual’s innate drive to explore, manipulate their surroundings, and overcome challenges. Relatedness represents the fundamental need for social connection, including the feeling of being close to and valued by others, underscoring its essential role in emotional and social well-being.
Building on SDT, AI technologies in the healthcare sector play a pivotal role as facilitators within the social work environment, shaping healthcare professionals’ motivation and enhancing the outcomes of their daily tasks. In the management of thyroid nodules, AI-assisted tools effectively alleviate physicians’ workload by significantly reducing the time required for image review. These tools offer notable benefits in improving diagnostic efficiency while maintaining high levels of diagnostic accuracy (Chiniara & Bentein, 2016; Moor et al., 2023). As AI technologies continue to evolve, healthcare professionals are increasingly utilizing AI tools to optimize patient care (Topol, 2019; Moor et al., 2023). For example, AI-assisted CT tools are employed by radiologists as supplementary aids following their initial independent assessments, thereby improving diagnostic efficiency, particularly in cancer detection, such as lung cancer. These tools not only provide reliable foundations for confirming diagnoses but also expedite the evaluation of disease progression and the development of treatment plans.
On the one hand, medical AI usage alleviates the daily burden of strenuous tasks for healthcare professionals, allowing them to dedicate more time and energy to complex, high-value responsibilities. On the other hand, by validating their initial judgments with AI-generated results such as CT-image results, healthcare professionals can enhance their competence and expand their knowledge. This dynamic interaction between AI tools and healthcare professionals can also support the satisfaction of intrinsic motivation and need, helping individuals align their competencies with the demands of an evolving technological work environment. Moreover, intrinsically motivated behaviors, which are more closely aligned with an individual’s core values and interests, tend to result in higher levels of satisfaction and sustained well-being. Extensive research has demonstrated that intrinsic motivation is not only linked to short-term enjoyment but also contributes to long-term mental health and work well-being (Van den Broeck et al., 2016; Ryan & Deci, 2020; Ryan et al., 2022; Sheldon & Prentice, 2019). Consequently, this alignment between AI-driven tasks and intrinsic motivation may ultimately foster improved job satisfaction and overall work well-being (Ryan & Deci, 2020; Wallace et al., 2016).
Therefore, we propose the following hypothesis:
H1. 
Healthcare professionals’ use of AI is positively associated with work well-being.

2.2. Use of AI and Psychological Needs Satisfaction

Although a substantial body of literature highlights the “mixed blessings and disadvantages” associated with the integration of AI and human professionals (Topol, 2019; Jia et al., 2024; Yam et al., 2023; Liang et al., 2022; Teng et al., 2024; Ding, 2021), the introduction of AI in healthcare organizations, such as hospitals, underscores a growing consensus among academics regarding the advantages of integrating intelligent technologies into the workplace (Man Tang et al., 2023). For example, Topol (2019) provides an overview of the current state of medical AI development and cites various concerns, including data privacy risks (such as the increased likelihood of identifying individuals through genomic sequences in vast databases, exacerbated by hacking and data breaches) (Albahri et al., 2023), algorithmic bias (e.g., diagnostic algorithms for melanoma that fail to account for skin color, or genomic databases that remain grossly under-representative of minorities) (Chen et al., 2023), and lack of transparency (where the opacity of AI systems creates uncertainty among professionals when AI outputs deviate from clinical judgments without clear explanations, requiring additional time and effort to verify results, which may decrease productivity) (Jussupow et al., 2021). Nevertheless, Topol (2019) and numerous researchers argue that, despite these challenges, nearly all types of clinicians, from specialists to caregivers, will increasingly adopt AI technologies to address critical issues such as healthcare resource imbalances and the shortage of healthcare professionals (Song et al., 2024; Feng & Hua, 2022; Zhang & Zhao, 2024; Li & Qin, 2023; Gu et al., 2019; Fan et al., 2020; Pan et al., 2019).
Based on the aforementioned information, AI serves as an external environmental stimulus that has catalyzed the redesign and optimization of healthcare workflows (Buck et al., 2022), empowering healthcare professionals to make more autonomous decisions by leveraging AI for diagnosis, treatment planning, and patient management (Man Tang et al., 2023; Makarius et al., 2020; Zahlan et al., 2023). For instance, the Galen image-recognition platform’s AI-driven algorithms could assess and classify cancers, thereby enhancing the diagnostic accuracy of pathologists while also reducing the time required for diagnosis. Similarly, See-Mode, a Singapore-based company, integrates medical imaging with AI to assist clinicians in predicting strokes, potentially saving lives (Zahlan et al., 2023). The adoption of such AI technologies largely depends on the individual clinician’s willingness, positioning them as supplementary tools to support healthcare professionals. Importantly, these AI-powered devices are not confined to specific medical specialties; rather, they offer broad benefits across various sectors of healthcare, supporting healthcare professionals in diverse contexts to varying extents (Khan et al., 2024). According to SDT, theorists suggest that individuals possess intrinsic tendencies toward integration, growth, and well-being, contingent upon the satisfaction of basic psychological needs. In this context, AI usage enables healthcare professionals to allocate more time and resources toward personal and professional development, including the personalized management of complex, non-procedural cases and the exploration of innovative treatment options. Thus, AI-assisted applications allow physicians to achieve greater autonomy in their work, thereby partially satisfying their need for autonomy in the process.
The utilization of intelligent machines for clinical decision support represents a transformative approach in healthcare. For instance, advanced healthcare AI models can diagnose patients by analyzing data from digitized electronic health records (EHRs), summarizing the patient’s current status, predicting potential future developments, and recommending treatment plans to assist physicians in their diagnoses (Kalra et al., 2024; Spring et al., 2022; Youssef et al., 2023). In this integrated interaction between human experts and machines, both parties leverage their complementary strengths. Typically, machine outputs are used to challenge initial human judgments, while human inputs serve to refine and optimize machine-generated outputs. This synergistic relationship facilitates knowledge translation and integration, enabling both humans and machines to learn from each other’s inputs and outputs, thereby enhancing their respective capabilities (Jussupow et al., 2021; Moor et al., 2023). Specifically, numerous AI-assisted diagnostic tools incorporate built-in feedback mechanisms that enable physicians to provide input after reviewing and correcting the AI-generated diagnostic results. For instance, in early breast cancer screening, an AI system is trained on a large dataset of medical images to automatically detect potential lesion areas and generate preliminary diagnostic outputs. However, the AI system’s output is not infallible and may include occasional false positives or missed diagnoses. In such instances, physicians can manually annotate the correct lesion areas and specify the nature of the misdiagnosis through the system’s interface. These annotations are then stored and incorporated into the system’s training dataset, contributing to the refinement of the model. By continuously accumulating feedback data from clinicians, the AI system can progressively enhance its diagnostic accuracy over time. 
Through this process, human experts can expand and deepen their expertise, while machines enhance their accuracy and efficiency. This collaboration not only improves overall performance but also fosters innovation and continuous learning. Consequently, as individuals engage with new technologies, they partially satisfy their needs to control and navigate their environment by anticipating the challenges posed by these technologies and acquiring proficiency in the novel techniques required for their work.
AI acts as a technological enabler, facilitating interaction with smart devices and fostering new forms of communication within healthcare. On the one hand, the integration of AI in clinical data sharing has promoted cross-sector collaboration, reinforcing its role in the medical field. These technologies allow healthcare professionals to collaborate more effectively, monitor treatment efficacy, and adjust strategies in real time (Man Tang et al., 2023; Youssef et al., 2023). On the other hand, the impact of technological advancements on individuals is increasingly complex (Wang et al., 2023). The shift from viewing machines as mere tools for production to recognizing them as integral components of organizational and economic systems carries significant implications (Arslan et al., 2022). Healthcare professionals may perceive AI technology either as a simple tool that supports their tasks or as a more emotionally significant entity. When AI is considered a team member, the collaborative process not only aids in task completion but also enhances expertise through interactive engagement. This iterative collaboration with AI-assisted diagnostic tools—characterized by continuous feedback and input–output exchanges, as previously discussed—mirrors the dynamics of human team communication. Such interactions enable healthcare professionals to experience a sense of connection and value, akin to the mutual recognition found in human collaboration. As a result, this AI-based model strengthens communication and mutual support among team members, thereby enhancing physicians’ need for relatedness satisfaction—a sense of connection and belonging with others (Arslan et al., 2022).
Taken together, we propose the following hypothesis:
H2. 
The use of AI is positively associated with the need for (a) autonomy, (b) competence, and (c) relatedness satisfaction of healthcare professionals.

2.3. The Mediation of Three Dimensions of Psychological Needs Satisfaction

Within the framework of SDT and healthcare AI, it becomes evident that the integration of AI not only improves the efficiency and quality of patient care but also addresses the fundamental psychological needs of healthcare professionals, specifically the need for autonomy satisfaction, need for competence satisfaction, and need for relatedness satisfaction. The satisfaction of these needs plays a critical role in promoting physicians’ mental health, motivation, and overall well-being (Brady et al., 2020). Extending this theoretical framework to healthcare AI, we observe that AI systems facilitate the gratification of these psychological needs, which, in turn, can enhance physicians’ work well-being, thereby increasing their motivation and job performance (Van den Broeck et al., 2016; Kahn, 1990).
As AI optimizes workflows, it enables healthcare professionals to exercise greater autonomy in decision-making processes related to diagnosis, treatment planning, and patient management. This enhanced autonomy fosters a heightened sense of control over their work while also reducing external pressures (Cramarenco et al., 2023). As a result, the satisfaction of autonomy not only bolsters intrinsic motivation but also encourages more active engagement in professional activities (Van den Broeck et al., 2016; Hood & Patton, 2022). Over time, interactions with AI allow healthcare professionals to refine their skills and expand their expertise, further reinforcing their sense of competence. Together, the satisfaction of autonomy and competence needs contributes to a stronger sense of agency within their professional environment, fostering empowerment both at work and in life and promoting work well-being.
In addition, according to SDT, the need for relatedness satisfaction is intricately linked to an individual’s mental health. In the healthcare context, when individuals experience meaningful connections and mutual concern during their interactions with AI, they gain a more profound perspective on life, leading to enhanced well-being and satisfaction. Extensive literature supports the view that the satisfaction of these basic psychological needs significantly contributes to improved health and well-being (Van den Broeck et al., 2016). Based on these insights, we propose the following hypothesis:
H3. 
The needs for (a) autonomy, (b) competence, and (c) relatedness mediate the relationship between the use of AI and healthcare professionals’ well-being at work.

2.4. The Moderating Impact of Job Complexity

The preceding arguments have highlighted the positive impact of AI utilization on the performance of healthcare professionals. However, in the healthcare sector—where innovation and rigor are intertwined—it is reasonable to posit that the user-centered use of AI may diversely impact personnel. Integrating Cognitive Evaluation Theory (CET), a sub-theory of SDT, helps to explain how intrinsic motivation can vary depending on environmental conditions. According to CET, intrinsic motivation flourishes when individuals have the opportunity to pursue their personal interests, goals, and values within a supportive environment. Conversely, when environmental factors constrain these pursuits, or when individuals are subject to extrinsic rewards or punishments for engaging in controlling behaviors, intrinsic motivation is likely to diminish (Ryan & Deci, 2000). Consequently, this perspective reinforces the notion that psychological needs are significantly influenced by the surrounding work environment, including factors such as task characteristics (e.g., complexity and variety) (Parent-Rocheleau & Parker, 2022).
Job complexity (JC), a key attribute of job characteristics, refers to the depth and breadth of psychological demands faced by employees in the workplace (Morgeson & Humphrey, 2006). Generally, higher levels of job complexity are associated with increased mental demands and challenges, which can lead to positive motivational outcomes (Humphrey et al., 2007; Parent-Rocheleau & Parker, 2022). Fasbender and Gerpott (2023) emphasize that job characteristics such as job complexity, responsibility, and autonomy have greater motivational potential than more routinized and formalized job attributes.
In the healthcare context, however, we contend that as job complexity increases, tasks become more detailed and specialized, thereby intensifying the demands placed on healthcare professionals’ expertise. High-complexity tasks, driven by the intrinsic motivation to provide accurate patient care, often necessitate more in-depth analysis and judgment from physicians. In such situations, healthcare AI systems are generally limited in their ability to address these complexities (Lebovitz et al., 2022). The findings of Jussupow et al. (2021) further support this view, as demonstrated by their 10-month qualitative study conducted across three departments (Breast Imaging, Chest Imaging, and Pediatric Imaging) at a teaching hospital in the United States. The study revealed that, particularly in the Breast Imaging and Pediatric Imaging departments, physicians frequently encounter significant uncertainties in disease diagnosis, especially when confirming cancer diagnoses. These uncertainties, in turn, limit the effectiveness of AI in these clinical contexts. For example, in breast cancer diagnosis, the high incidence and risk profile of the disease render early detection of the most treatable stages critical. Any errors in assessment carry substantial risks, potentially leading to significant negative impacts on patient outcomes. Moreover, the variability in patient anatomy and the inherent complexity of breast tissue contribute to the characterization of breast cancer as a multifaceted and unpredictable disease. In practice, radiologists often begin their assessments by reviewing mammographic images, a process fraught with uncertainty as they attempt to identify abnormal regions within the complex architecture of breast tissue and to determine the likelihood of malignancy or benignity.
Consequently, as job complexity increases, the accuracy and reliability of medical AI may decline. This is because complex tasks often involve greater variability and uncertainty, which can adversely affect the predictions and judgments made by AI systems (Topol, 2019; Wang et al., 2024). In such scenarios, the assistance provided by AI may prove insufficient to address the diverse and intricate challenges healthcare professionals encounter, thus failing to meet all professional requirements. When faced with a highly complex job, healthcare professionals may derive limited benefits from the use of AI, thereby weakening the relationship between the use of AI and psychological needs satisfaction. Therefore, we propose the following hypotheses:
H4a. 
Job complexity moderates the impact of AI usage on the need for autonomy satisfaction, such that the relationship is weaker with a higher level of job complexity.
H4b. 
Job complexity moderates the impact of AI usage on the need for competence satisfaction, such that the relationship is weaker with a higher level of job complexity.
H4c. 
Job complexity moderates the impact of AI usage on the need for relatedness satisfaction, such that the relationship is weaker with a higher level of job complexity.
Building on the preceding arguments, we hypothesize that the use of AI leads to an increase in the satisfaction of the psychological needs for autonomy, competence, and relatedness (H2). Taking it one step further, we posit that the need for autonomy, competence, and relatedness satisfaction serves as a mediator in the positive and indirect relationship between the use of AI and work well-being (H3). In addition, we propose that job complexity can moderate the relationship between the use of AI and the need for autonomy, competence and relatedness satisfaction (H4a, H4b, and H4c). These relationships can be conceptualized within a moderated mediation model. Therefore, combining H1–H4, we propose the following moderated mediation hypotheses:
H5a. 
Job complexity moderates the mediation effect of the need for autonomy satisfaction on the relationship between AI usage and work well-being, such that this mediation effect will be weaker when job complexity is higher.
H5b. 
Job complexity moderates the mediation effect of the need for competence satisfaction on the relationship between AI usage and work well-being, such that this mediation effect will be weaker when job complexity is higher.
H5c. 
Job complexity moderates the mediation effect of the need for relatedness satisfaction on the relationship between AI usage and work well-being, such that this mediation effect will be weaker when job complexity is higher.

3. Methods

3.1. Design, Setting, and Participants

To test our hypotheses and mitigate the potential impact of common method bias (CMB)1 in the collection of self-reported data, we conducted two separate questionnaire surveys distributed via online platforms in China. Given that our study focuses on the extent of AI use rather than its mere adoption, all participants, healthcare professionals from various departments such as radiology, ophthalmology, and cardiology, were required to have prior experience with medical AI (Huo et al., 2024a). The first wave of data was collected in February 2024, during which demographic information (i.e., gender, age, educational background, and job tenure) was gathered, along with measures assessing the use of AI and job complexity. The second wave, collected in March 2024, measured psychological needs satisfaction (i.e., autonomy, competence, and relatedness) and work well-being.
After matching the data from the two waves and excluding incomplete responses as well as those failing to meet the screening criteria, a total of 280 valid responses were retained. Among the participants, 29.3% were male, 42.9% were aged between 26 and 35 years, and 55% had more than six years of work experience. Regarding educational background, 41.1% held a bachelor’s degree, while 21.4% held a master’s degree. These demographic characteristics may influence the relationships between the study variables in different ways. For instance, younger healthcare professionals may exhibit greater enthusiasm and openness toward adopting AI in the workplace, whereas those with higher levels of education may possess a more advanced understanding of AI technology, potentially leading to more favorable attitudes toward its use (Bühler et al., 2022). The potential impact of these demographic factors will be addressed in the subsequent analysis.

3.2. Variables and Measurement

In addition to collecting demographic data, we measured all other constructs in our survey using a five-point Likert scale (for its simplicity, ease of understanding, and convenience for quantitative analysis), ranging from 1 (strongly disagree) to 5 (strongly agree) (Chyung et al., 2017). The original English scales were translated into Chinese using a back-translation process, conducted by a team of researchers proficient in both Chinese and English to ensure translation accuracy. Additionally, to maintain the cultural appropriateness of the scale, we engaged in multiple discussions and revisions during the translation process, particularly addressing any ambiguous items in the original scale through detailed discussions and adjustments. To ensure that the translation preserved the core meaning of the original items while aligning with the cultural background and understanding of the research participants, we conducted a pilot survey with experts in the fields of medical AI and organizational behavior before the formal data collection. After repeatedly confirming the clarity of the measurements and resolving any ambiguities, we finalized the formal survey.
Use of AI. A three-item scale, adopted from Man Tang et al. (2023), was used to measure the extent to which healthcare professionals in hospitals use medical AI. A sample item is "I depend on medical AI to help me with work-related tasks", and the Cronbach's alpha coefficient for this construct was 0.878 (as shown in Table 1).
Psychological needs satisfaction. We measured this construct using La Guardia et al.'s (2000) nine-item scale, adapted to the implementation of medical AI in medical settings. Needs satisfaction comprises three dimensions, the need for autonomy satisfaction, the need for competence satisfaction, and the need for relatedness satisfaction, with three items per dimension. For the need for autonomy satisfaction, an illustrative item is "When I collaborate with medical AI, I can still follow my own approach to diagnosis and treatment". For the need for competence satisfaction, a sample item is "When I collaborate with medical AI, I feel capable in my work". For the need for relatedness satisfaction, a sample item is "When collaborating with medical AI, I feel as if it cares for and supports me like a colleague". The Cronbach's alpha values for these dimensions were 0.755, 0.784, and 0.803, respectively.
Work well-being. To assess healthcare professionals’ work well-being, we utilized a situationally adapted six-item scale developed by Zheng et al. (2015). A representative item from this scale is “Since the introduction of medical AI, I find my work to be more interesting”. The Cronbach’s alpha of this variable was 0.888.
Job complexity. Job complexity in healthcare institutions was assessed using a four-item scale adapted from Zacher et al. (2010). An example item is "My current work tasks are very complex". The scale demonstrated good internal consistency, with a Cronbach's alpha of 0.860. Across all measured variables, Cronbach's alpha values ranged from 0.755 to 0.888, with all exceeding the acceptable threshold of 0.7, thereby indicating strong internal consistency across the variables.
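For readers less familiar with the reliability statistic reported above, Cronbach's alpha can be computed directly from item responses. The sketch below uses a hypothetical four-item scale with simulated ratings, not the study's data:

```python
# Cronbach's alpha for a set of scale items (illustrative data only).
def cronbach_alpha(items):
    """items: one list of responses per item, all of equal length."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # population variance, as in the classical formula
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Total score per respondent across the k items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Four hypothetical items rated by six respondents on a 1-5 Likert scale.
responses = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
    [4, 5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(responses)  # high inter-item agreement -> alpha near 0.9
```

Values above 0.7, as for all scales in this study, are conventionally taken to indicate acceptable internal consistency.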
Control variables. Gender, age, education, and job tenure were included as control variables due to their potential impact on the study outcomes. Gender may influence healthcare professionals’ acceptance of AI, while age is related to career experience and technological adaptability. Education reflects the ability to comprehend and apply technological tools, and tenure of job in the workforce may affect proficiency with technology. Controlling for these variables helps minimize confounding effects and ensures the robustness of the study’s findings (Bühler et al., 2022).

4. Analyses and Results

4.1. Reliability and Confirmatory Factor Analysis

Utilizing confirmatory factor analysis (CFA), we extracted the factor loadings for the items in this study. Based on these loadings, we calculated the composite reliability (CR) and average variance extracted (AVE) for the six variables. All constructs exhibited Cronbach's α and CR values greater than 0.7, while the AVE values exceeded 0.5, indicating strong convergent validity. These results demonstrate good internal consistency within the measurement model, supporting the accuracy and reliability of our study findings (Fornell & Larcker, 1981).
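Given standardized factor loadings from a CFA, CR and AVE follow from the standard Fornell–Larcker formulas. The loadings below are hypothetical placeholders, not our estimates:

```python
# Composite reliability (CR) and average variance extracted (AVE)
# from standardized CFA loadings (Fornell & Larcker, 1981).
def composite_reliability(loadings):
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # indicator error variances
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

loadings = [0.82, 0.79, 0.75]  # hypothetical three-indicator construct
cr = composite_reliability(loadings)        # ~0.83, above the 0.7 threshold
ave = average_variance_extracted(loadings)  # ~0.62, above the 0.5 threshold
```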

4.2. Common Method Bias and Discriminant Validity

To verify the absence of serious CMB, we first conducted Harman's one-factor test. The results indicated that the predominant factor accounted for only 36.88% of the variance, falling below the 50% threshold recommended by Fuller et al. (2016).
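Harman's test is typically run as an unrotated exploratory factor analysis in SPSS; as a rough stand-in, the variance share of the first principal component of the item correlation matrix can be computed as below. The data are simulated with a deliberately weak common factor and are purely illustrative:

```python
# Approximate Harman's one-factor check: share of total variance on the
# first principal component of the item correlation matrix (simulated data).
import random

random.seed(42)
n, k = 200, 6
common = [random.gauss(0, 1) for _ in range(n)]  # weak shared method factor
data = [[0.5 * common[i] + random.gauss(0, 1) for i in range(n)] for _ in range(k)]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

R = [[corr(data[i], data[j]) for j in range(k)] for i in range(k)]

# Dominant eigenvalue of R via power iteration.
v = [1.0] * k
for _ in range(200):
    w = [sum(R[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]
eigenvalue = sum(v[i] * sum(R[i][j] * v[j] for j in range(k)) for i in range(k))
first_factor_share = eigenvalue / k  # compare against the 50% threshold
```

A share below 50% (the study observed 36.88%) suggests that no single factor dominates the common variance.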
Considering the possible limitations of Harman's one-factor test, we adopted the CFA marker technique suggested by Williams et al. (2010) to further test for CMB. First, we selected age as the marker variable, as it is theoretically unrelated to, and not significantly correlated with, the six substantive latent variables (use of AI, need for autonomy satisfaction, need for competence satisfaction, need for relatedness satisfaction, work well-being, and job complexity) in our research model (see Table 2). The presence of CMB was assessed through chi-square difference tests across several nested models. Specifically, in the CFA model, we allowed the six substantive latent variables to be fully correlated with the marker variable to estimate the marker variable's factor loadings and measurement error variances. Next, we constructed a baseline model in which the marker variable was assumed to be orthogonal to the other latent variables, with its factor loadings and error variances fixed. We then compared the method-C and method-U models, where the former assumes equal method effects for all substantive indicators and the latter allows these effects to differ. Finally, we fixed the substantive factor correlations in the method-U model to the values obtained from the baseline model to form the method-R model. As presented in Table 3, a significant difference was observed between the method-C model and the baseline model [Δχ²(Δdf = 1) = 0.53, p < 0.05]. The comparison of the method-U model with the baseline model also revealed a significant difference [Δχ²(Δdf = 22) = 35.325, p < 0.05]. Consequently, additional testing was performed: a chi-square difference test between the method-U and method-R models showed no significant difference between the two models [Δχ²(Δdf = 16) = 10.456, p > 0.05].
Thus, the CFA Marker Technique indicated that there was no serious CMB in our research model.
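Each model comparison above reduces to a chi-square difference test on nested models. For one degree of freedom, the p-value has a closed form via the complementary error function; the Δχ² value below is hypothetical, chosen only to illustrate the decision rule:

```python
# Chi-square difference test for nested models (df = 1 case).
import math

def chi2_sf_df1(x):
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(x / 2))

delta_chi2 = 5.2  # hypothetical fit difference between two nested models
p_value = chi2_sf_df1(delta_chi2)
models_differ = p_value < 0.05  # reject equal fit if p < 0.05
```

For larger Δdf, as in the method-U comparisons, the general chi-square distribution is required.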
As shown in Table 4, the six-factor model demonstrates a good fit (χ²/df = 1.534 < 3; RMSEA = 0.044 < 0.05; SRMR = 0.0488 < 0.05; TLI = 0.962 > 0.9; CFI = 0.968 > 0.9). Additionally, compared to the alternative models (ranging from the one-factor to the five-factor model), the six-factor model showed the best fit, indicating that its discriminant validity was acceptable.

4.3. Descriptive Statistics Analysis

Table 2 presents the descriptive statistics of the variables, including their means, standard deviations, and inter-construct correlations. The results indicated no significant relationships between the control variables (gender, age, education, and tenure) and the dependent variable, work well-being. Therefore, these control variables were unlikely to influence the relationships between the other variables in the research model and work well-being. Furthermore, the use of AI was significantly and positively correlated with all three dimensions of psychological needs satisfaction: autonomy (r = 0.256, p < 0.01), competence (r = 0.324, p < 0.01), and relatedness (r = 0.33, p < 0.01). Likewise, these three dimensions were significantly correlated with work well-being. These findings establish a basis for the subsequent hypothesis testing.

4.4. Hypothesis Testing

We further conducted hierarchical regression analysis on the variables presented in Table 5. In Model 2, after including both the independent and control variables, we observed a significant positive regression coefficient for the effect of the use of AI on work well-being (β = 0.297, p < 0.001), thus supporting H1. Additionally, Model 4 revealed a significant positive relationship between the use of AI and the need for autonomy satisfaction (β = 0.269, p < 0.001), confirming H2a. Similarly, Models 8 and 12 demonstrated significant positive relationships between the use of AI and the needs for competence (β = 0.332, p < 0.001) and relatedness (β = 0.358, p < 0.001) satisfaction, thereby supporting both H2b and H2c.
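The hierarchical step can be illustrated with the standard two-predictor R² formula on standardized variables: enter a control first, then add the focal predictor and examine the R² increment. All data below are simulated and the effect sizes are arbitrary:

```python
# Hierarchical regression sketch: R^2 with a control variable alone,
# then the increment when the focal predictor is added (simulated data).
import random

random.seed(1)
n = 280
control = [random.gauss(0, 1) for _ in range(n)]
ai_use = [random.gauss(0, 1) for _ in range(n)]
wellbeing = [0.1 * c + 0.3 * a + random.gauss(0, 1)
             for c, a in zip(control, ai_use)]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

r_yc = corr(wellbeing, control)
r_ya = corr(wellbeing, ai_use)
r_ca = corr(control, ai_use)
r2_step1 = r_yc ** 2  # Model 1: control only
# Model 2: control + AI use (two-predictor R^2 from correlations).
r2_step2 = (r_yc ** 2 + r_ya ** 2 - 2 * r_yc * r_ya * r_ca) / (1 - r_ca ** 2)
delta_r2 = r2_step2 - r2_step1  # variance uniquely attributable to AI use
```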

4.4.1. Mediation Effects: Three Dimensions of Psychological Needs Satisfaction

The bootstrap method was used to confirm the statistical significance of the path coefficients and the mediation effect through 5000 bootstrap samples. Hypotheses were considered supported if 0 was not included within the 95% confidence interval (CI) (Wang et al., 2023). As shown in Table 6, the total effect of the use of AI on work well-being in our research model is statistically significant (β = 0.236, p < 0.01), thus providing further support for H1. Additionally, the three dimensions of psychological needs satisfaction—autonomy (β = 0.081, 95% CI [0.033, 0.138]), competence (β = 0.046, 95% CI [0.002, 0.103]), and relatedness (β = 0.035, 95% CI [0.003, 0.073])—were found to significantly mediate the relationship between the use of AI and work well-being (see Table 6 and Figure 2). These results support H3a, H3b, and H3c. Moreover, it is noteworthy that the three dimensions of basic psychological needs fully mediate the relationship between the use of AI and work well-being, as indicated in Table 6.
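The percentile-bootstrap logic behind these confidence intervals can be sketched as follows. The data are simulated with nonzero a and b paths, and the b path is estimated here by a simple regression of the outcome on the mediator, a simplification of the full model that PROCESS fits:

```python
# Percentile bootstrap for an indirect effect a*b (simulated data;
# variable names are illustrative, not the study's measures).
import random

random.seed(7)
n = 280
ai_use = [random.gauss(3.5, 0.8) for _ in range(n)]
autonomy = [0.3 * x + random.gauss(0, 0.6) for x in ai_use]      # a = 0.3
wellbeing = [0.4 * m + random.gauss(0, 0.6) for m in autonomy]   # b = 0.4

def slope(x, y):  # simple OLS slope of y on x
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

boot = []
for _ in range(5000):  # 5000 resamples, as in the study
    idx = [random.randrange(n) for _ in range(n)]
    x = [ai_use[i] for i in idx]
    m = [autonomy[i] for i in idx]
    y = [wellbeing[i] for i in idx]
    boot.append(slope(x, m) * slope(m, y))  # indirect effect per resample

boot.sort()
ci_low, ci_high = boot[124], boot[4874]  # 2.5th and 97.5th percentiles
significant = not (ci_low <= 0 <= ci_high)
```

The indirect effect is judged significant when, as here, the 95% interval excludes 0.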

4.4.2. The Moderation Effect of Job Complexity

To test H4a, H4b, and H4c, we employed Process Model 7 in SPSS 26.0 to examine the moderating effect of job complexity on the relationship between the use of AI and psychological needs satisfaction (need for autonomy satisfaction, need for competence satisfaction, and need for relatedness satisfaction). First, as indicated in Model 6 of Table 5, the interaction term between the use of AI and job complexity exhibits a significant negative effect on the need for autonomy satisfaction (β = −0.13, p < 0.05). The Johnson–Neyman test further revealed that the relationship between the use of AI and the need for autonomy satisfaction became non-significant (p > 0.05) when job complexity exceeded a value of 3.778, suggesting that higher levels of job complexity weaken the relationship between the use of AI and the need for autonomy satisfaction. Thus, the results support H4a. Second, as shown in Model 10 of Table 5, there is also a significant negative effect of the interaction term between the use of AI and job complexity on the need for competence satisfaction (β = −0.146, p < 0.05). Similarly, the Johnson–Neyman test indicated that when the value of job complexity exceeded 4.197, the effect of the use of AI on the need for competence satisfaction was non-significant (p > 0.05), further demonstrating that increased job complexity weakens the relationship between these constructs. Consequently, H4b is supported.
Additionally, the moderating influence of job complexity on the relationships between the use of AI and the needs for autonomy and competence satisfaction are illustrated in Figure 3 and Figure 4. However, as observed in Model 14 (in Table 5), the interaction term between the use of AI and job complexity does not yield a significant effect on the need for relatedness satisfaction (β = −0.025, p > 0.05). Consequently, H4c is not supported.
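The direction of this moderation can be read from the fitted conditional effect, effect(W) = b1 + b3·W. The coefficients below are hypothetical round numbers (only the sign of b3 mirrors Model 6); note that the actual Johnson–Neyman boundary depends on the coefficients' standard errors, not merely the zero crossing of the point estimate:

```python
# Conditional (simple-slope) effect of AI use at different levels of
# job complexity; coefficients are hypothetical round numbers.
b1 = 0.49   # effect of AI use when job complexity is 0 (hypothetical)
b3 = -0.13  # AI use x job complexity interaction (sign as in Model 6)

def conditional_effect(w):
    return b1 + b3 * w

effect_low = conditional_effect(2.0)   # lower complexity -> stronger effect
effect_high = conditional_effect(4.5)  # higher complexity -> effect near zero or below
zero_crossing = -b1 / b3  # moderator value where the point estimate reaches 0
```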

4.4.3. Moderated Mediation Effect Test

To further verify the existence of the moderated mediation effect, we employed the methodology recommended by James and Brett (1984). Based on the moderation results for H4a, H4b, and H4c, job complexity significantly moderates the relationship between the use of AI and healthcare professionals' need for autonomy satisfaction. To test H5a, H5b, and H5c, we then conducted a moderated mediation analysis using Process Model 7 in SPSS 26.0, with 5000 bootstrap samples and 95% confidence intervals. As shown in Table 7, for the pathway with the need for autonomy satisfaction as the mediator and job complexity as the moderator, the index of moderated mediation is −0.035 (Boot SE = 0.019), with a 95% bootstrap confidence interval of [−0.077, −0.002], which excludes 0. This indicates that the moderated mediation effect of the pathway "use of AI→need for autonomy satisfaction→work well-being" is significant. These findings validate the moderated mediation role of job complexity in the relationship between the use of AI and work well-being, thereby supporting H5a.
Furthermore, for the pathway in which the need for competence satisfaction served as the mediator (Table 7), the index is −0.018 (Boot SE = 0.013), with a 95% bootstrap confidence interval of [−0.051, 0], which contains 0. This suggests that the moderated mediation effect of job complexity in the pathway "use of AI→need for competence satisfaction→work well-being" is not significant; therefore, H5b is not supported.
Similarly, Table 7 shows that for the pathway "use of AI→need for relatedness satisfaction→work well-being", the moderated mediation effect is not significant (index = −0.002, Boot SE = 0.005, 95% bootstrap confidence interval [−0.015, 0.008]). Therefore, H5c is not supported.
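Formally, for this first-stage moderated mediation model, the index of moderated mediation is the product of the interaction coefficient on the mediator (a3) and the mediator-to-outcome path (b), and the conditional indirect effect at moderator value w is (a1 + a3·w)·b. The coefficients below are hypothetical round numbers, consistent in sign with the reported pattern but not the study's estimates:

```python
# Index of moderated mediation for a first-stage moderated model:
# index = a3 * b (hypothetical coefficients, not the study's estimates).
a1 = 0.49   # effect of AI use on the mediator at moderator = 0
a3 = -0.13  # AI use x job complexity interaction on the mediator
b = 0.27    # mediator -> work well-being path

index = a3 * b  # negative: the indirect effect declines as complexity rises

def conditional_indirect(w):
    return (a1 + a3 * w) * b
```

A bootstrap confidence interval around this index, as reported in Table 7, then determines its significance.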

5. Discussion

We propose a theoretical model through the lens of Self-Determination Theory (SDT) to elucidate the relationship between healthcare professionals' use of AI and their work well-being. Specifically, we argue that the three dimensions of psychological needs satisfaction (autonomy, competence, and relatedness) mediate the relationship between AI usage and work well-being. While the existing literature highlights the "double-edged sword" of AI usage (Man Tang et al., 2023; Liang et al., 2022; Ding, 2021), our model emphasizes the potential positive outcomes of AI integration, particularly its ability to enhance healthcare professionals' work well-being. Notably, the use of AI in healthcare can stimulate intrinsic motivation, fostering professional growth, increasing job satisfaction, and enhancing overall well-being and innovative performance (Jussupow et al., 2021; Van den Broeck et al., 2016; Gagné et al., 2022).
Contrary to findings from previous studies, our research demonstrates that job complexity acts as a boundary condition, weakening the relationship between healthcare professionals' use of AI technologies and their satisfaction of autonomy and competence needs. This is primarily due to the challenges posed by complex medical tasks, which may exceed the processing capabilities of medical AI and thereby limit its utility in providing decision support (Topol, 2019; Shen et al., 2019). Additionally, job complexity weakens the connection between the use of AI and healthcare professionals' sense of autonomy at work, which in turn attenuates the indirect effect of AI on work well-being. Interestingly, while job complexity diminishes the link between the use of AI and the need for competence satisfaction, it does not significantly weaken the indirect effect of AI usage on work well-being through this need. Healthcare professionals may experience temporary disruptions in their competence needs due to the increased effort required to resolve complex medical cases. However, their intrinsic motivation, coping strategies, and professional responsibility enable them to maintain a positive attitude and continue prioritizing patient care, ultimately sustaining their work well-being (Chan et al., 2020; Selvachandran et al., 2023; Sendak et al., 2020).
In light of these findings, job complexity does not significantly weaken the relationship between the use of AI and the need for relatedness satisfaction, and healthcare professionals' recognition of the benefits of AI adoption remains largely unaffected; consequently, H4c and H5c are not supported. This further highlights a key finding of our study: within the context of healthcare development in China, the introduction of AI benefits healthcare professionals, AI-based devices, and other stakeholders. Moreover, our research aligns with the optimistic perspectives on AI in healthcare, as articulated by proponents such as Topol (2019), reaffirming the potential positive impact of AI in this field. We hope that these findings can inform stakeholders on the effective development and integration of AI technologies, ultimately improving healthcare outcomes and enhancing the work experiences of healthcare professionals.

5.1. Theoretical Implications

The work well-being of healthcare professionals is integral to both patient care and the advancement of AI in healthcare. As outlined above, our study offers significant insights into the positive performance and progression of healthcare professionals within this domain. By employing SDT as a foundational framework, we underscore the significance of intrinsic motivation, highlighting that individuals are often driven by self-transcendent values that prioritize the welfare of others over self-interest. Our findings also foster optimism regarding the integration of intelligent machines in healthcare settings, suggesting that both the creators and users of healthcare AI can recognize its potential performance benefits. Specifically, as noted earlier, the close interaction and coupling between human experts and intelligent machines facilitate the transfer and integration of knowledge, allowing both parties to learn from each other's inputs and outputs, thereby enhancing their respective capabilities. This provides positive empirical evidence of the beneficial effects of such collaboration. This perspective can serve as a guiding principle for stakeholders in the effective development and integration of AI technologies in healthcare (Man Tang et al., 2023; Dediu et al., 2018; Economou-Zavlanos et al., 2024).
Second, our contribution lies in the articulation of an intrinsic mechanism that elucidates the relationship between the use of AI and the well-being of healthcare professionals. By incorporating the satisfaction of the psychological needs for autonomy, competence, and relatedness as mediating variables, we develop a theoretical model that interprets how employees’ adoption of smart machines in healthcare correlates with enhanced work well-being. Notably, there is a paucity of research exploring the impact of AI usage on psychological needs satisfaction within the medical field; our study addresses this gap and contributes to the understanding of the mechanisms underlying psychological needs satisfaction. Moreover, by examining the technological implications of intelligent machines and their role in satisfying healthcare professionals’ psychological needs, we provide new insights into the positive effects of human–machine collaboration in healthcare. Our findings further demonstrate that healthcare professionals can be relieved from routine, standardized tasks, allowing them to engage in more nuanced and valuable responsibilities. This shift not only facilitates the integration of human expertise with AI but also fosters the mutual advancement of both human and AI capabilities.
Third, we acknowledge that the unique characteristics of the healthcare industry establish a boundary condition for our findings, particularly regarding the limitations of job complexity on the positive aspects of healthcare AI applications. Our research highlights the detrimental effect of job complexity on the relationship between the use of AI and the satisfaction of psychological needs, thereby enhancing our understanding of the constraints associated with healthcare AI in practical settings. These limitations extend beyond technological capabilities, highlighting the inability of AI systems to fully replace healthcare professionals in making independent decisions and adapting to the diverse and complex healthcare environments. This underscores the indispensable role of healthcare professionals in medical AI applications and their irreplaceability in delivering high-quality healthcare services. Consequently, we advocate for the development of a more collaborative working model in the application of medical AI, in which AI systems serve as auxiliary tools that provide informational support to healthcare professionals. In this model, healthcare professionals would leverage their professional knowledge and experience to interpret and evaluate the insights generated by AI, facilitating a partnership that ensures the delivery of optimal patient care.

5.2. Practical Implications

Our research reveals several practical implications. First, we emphasize that the implementation of smart machines in healthcare settings can stimulate the intrinsic motivation of healthcare professionals, thereby enhancing their well-being at work. To maximize the benefits of AI, healthcare organizations should actively promote collaborative working models that position smart technologies as supportive tools rather than replacements in healthcare professionals' decision-making. Additionally, we recommend that hospital administrators highlight the advantages of user-friendly AI systems to facilitate effective workflows. By fostering a culture of mutual learning between AI and healthcare professionals, organizations can empower healthcare professionals while alleviating routine workloads, ultimately improving job satisfaction, well-being, and innovative employee performance. This approach serves as a catalyst for adapting to an increasingly technology-driven and dynamic work environment, ensuring that healthcare organizations remain aligned with contemporary developments in the field.
Second, our research underlines the importance of addressing the psychological needs of healthcare professionals within the medical field. Specifically, the need for autonomy satisfaction, need for competence satisfaction, and need for relatedness satisfaction are identified as critical aspects. It is essential for healthcare organizations to actively support these needs through initiatives such as flexible work arrangements, continuous professional development, and the cultivation of a positive team culture. These measures not only enhance healthcare professionals’ sense of well-being at work but also cultivate a culture of innovation within the profession (Guo et al., 2025). Moreover, hospital administrators should ensure that new healthcare professionals receive adequate resources and organizational support for the integration and application of AI technologies. This support will enable them to adapt their learning capabilities and leverage AI effectively, thereby stimulating their innovative thinking and enhancing their professional abilities.
Third, job complexity serves as a boundary condition that influences the practical utility of intelligent systems in healthcare settings. When addressing high-complexity medical tasks, healthcare organizations must recognize the limitations of AI and encourage healthcare professionals to rely on their professional expertise and judgment in decision-making (Canhoto & Clear, 2020). To better navigate the complexities of the medical environment, institutions should provide necessary support, such as strengthening team cohesion and streamlining work processes. This support will enable healthcare professionals to more effectively manage challenges. Concurrently, continuous advancements in medical AI technology are crucial. By optimizing algorithms and enhancing learning capabilities, AI systems can be better equipped to handle complex cases, thereby offering more robust support to healthcare professionals in diagnostics and treatment.
Moreover, work well-being is a multidimensional concept influenced by a variety of factors (Miao et al., 2024). Healthcare organizations should adopt a holistic approach when developing strategies to enhance the well-being of healthcare professionals. This approach should consider key elements such as optimizing the work environment, fostering positive interpersonal relationships, supporting opportunities for personal and professional growth, and promoting a healthy work-life balance. By implementing these comprehensive measures, organizations can more effectively promote healthcare professionals’ well-being. Such efforts not only benefit individual professionals but also contribute to the overall quality and efficiency of healthcare services, particularly in the context of integrating AI technologies.

5.3. Limitations and Directions for Future Research

Our study has certain limitations that suggest directions for future research. First, the scope of this study is confined to the impact of AI use in healthcare on the work well-being of healthcare professionals. Healthcare is distinctive among fields given the critical nature of its life-safety concerns, which may limit the applicability of our theoretical model to other professions that also involve emerging technologies (Topol, 2019). As such, the generalizability of our findings is constrained. Future research could broaden this scope by collecting and analyzing data from diverse sectors, such as services, finance, and education, to enhance the generalizability of the results.
Second, although this study employs a time-lagged survey design to mitigate CMB (Piyathasanan et al., 2018), the data are primarily collected through self-reported questionnaires. This methodology does not allow for definitive causal inferences regarding the relationships between the use of AI, psychological needs satisfaction, and work well-being. Future studies could address this limitation by adopting controlled experimental designs or longitudinal data collection to further validate our findings. Additionally, while this research examines the role of job complexity as a moderating factor, it does not fully capture the nuances of the relationship between the use of AI and work well-being. Future research could delve deeper into this relationship by considering variables such as the specialization and personality traits of physicians, as well as the varying levels of AI sophistication (e.g., automation AI vs. augmentation AI) (Guo et al., 2025; Nazareno & Schiff, 2021). This would provide a more comprehensive understanding of how these factors impact the integration of AI in healthcare settings.
Third, our study primarily relies on samples collected from healthcare professionals in Chinese hospitals. As such, the representativeness of the sample may be influenced by regional, cultural, and national differences in the development of AI, which could introduce certain biases. For instance, due to cultural differences, healthcare professionals in collectivist and individualist societies may approach complex medical challenges differently. Therefore, future research could benefit from collecting a broader and more diverse sample, including data from various countries. Comparative and cross-cultural analyses would provide a more comprehensive understanding of the specific impacts of medical AI usage on healthcare professionals.
To more comprehensively assess the broader impact of AI in healthcare, future studies could also focus on the patient experience, particularly examining the effects of AI implementation on patient trust and satisfaction. Additionally, the influence of AI on the doctor–patient relationship warrants rigorous investigation, as AI has the potential to significantly alter communication patterns and trust within this critical interaction.

6. Conclusions

Drawing on SDT, we develop a conceptual framework to explore the positive impact of the use of AI on healthcare professionals. This framework offers new theoretical insights into the relationship between intelligent machines and healthcare professionals’ work well-being, emphasizing the beneficial effects of AI technologies. Specifically, AI optimizes workflows and improves the efficiency of diagnosing routine or simple conditions. By reducing the time spent on routine tasks, AI allows healthcare professionals to focus on more complex and rewarding tasks, such as diagnosing complex medical conditions.
Moreover, this shift enhances their professional knowledge and capabilities, fostering greater intrinsic motivation and satisfying their psychological needs. As a result, healthcare professionals’ work well-being is further strengthened, enabling them to adapt more effectively to a technology-driven, continuously evolving work environment. Additionally, this study acknowledges the limitations of AI in addressing complex medical tasks. In summary, our research contributes to the literature by addressing the impact of AI on healthcare professionals’ psychological needs and related outcomes, offering valuable insights for the effective integration of AI in healthcare settings.

Author Contributions

Conceptualization, W.H. and Q.L.; methodology, W.H. and X.L.; software, Y.W.; validation, B.L., Q.L. and Y.W.; formal analysis, W.H.; investigation, W.H. and B.L.; resources, W.H.; data curation, Q.L.; writing—original draft preparation, W.H.; writing—review and editing, Q.L. and B.L.; visualization, Y.W.; supervision, B.L. and X.L.; project administration, W.H. and Q.L.; funding acquisition, W.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Shanghai Philosophy and Social Science Planning Project [grant number 2022EGL004] and the National Natural Science Foundation of China [grant number 72472095; grant number 72302140; grant number 72072110].

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by Shanghai University (protocol code: 2019-001; date of approval: 29 December 2019).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and ethical constraints.

Conflicts of Interest

The authors declare no conflicts of interest.

Note

1
Self-report measures are one of the main sources of measurement error and may introduce common method bias into the collected data through factors such as respondents' consistency motifs, transient emotional states, and related influences. Collecting data at two points in time (one month apart) allows for more independent, objective, and reliable estimates of the correlations between the variables (Podsakoff et al., 2003).

References

  1. Ahmed, A., Boopathy, P., & Sudhagararajan, S. (2022). Artificial intelligence for the novel corona virus (COVID-19) pandemic: Opportunities, challenges, and future directions. International Journal of E-Health and Medical Communications (IJEHMC), 13(2), 1–21. [Google Scholar] [CrossRef]
  2. Ahmed, I. (2019). Staff well-being in high-risk operating room environment: Definition, facilitators, stressors, leadership, and team-working—A case-study from a large teaching hospital. International Journal of Healthcare Management, 12(1), 1–17. [Google Scholar] [CrossRef]
  3. Albahri, A. S., Duhaim, A. M., Fadhel, M. A., Alnoor, A., Baqer, N. S., Alzubaidi, L., Albahri, O. S., Alamoodi, A. H., Bai, J., Salhi, A., & Deveci, M. (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156–191. [Google Scholar] [CrossRef]
  4. Arslan, A., Cooper, C., Khan, Z., Golgeci, I., & Ali, I. (2022). Artificial intelligence and human workers interaction at team level: A conceptual assessment of the challenges and potential HRM strategies. International Journal of Manpower, 43(1), 75–88. [Google Scholar] [CrossRef]
  5. Au-Yong-Oliveira, M., Pesqueira, A., Sousa, M. J., Dal Mas, F., & Soliman, M. (2021). The potential of big data research in healthcare for medical doctors’ learning. Journal of Medical Systems, 45(1), 13. [Google Scholar] [CrossRef]
  6. Bamforth, K., Rae, P., Maben, J., Lloyd, H., & Pearce, S. (2023). Perceptions of healthcare professionals’ psychological wellbeing at work and the link to patients’ experiences of care: A scoping review. International Journal of Nursing Studies Advances, 2023, 100148. [Google Scholar] [CrossRef]
  7. Bauer, C., & Thamm, A. (2021). Six areas of healthcare where AI is effectively saving lives today. In P. Glauner, P. Plugmann, & G. Lerzynski (Eds.), Digitalization in healthcare: Implementing innovation and artificial intelligence (pp. 245–267). Future of Business and Finance. [Google Scholar]
  8. Bekbolatova, M., Mayer, J., Ong, C. W., & Toma, M. (2024). Transformative potential of AI in Healthcare: Definitions, applications, and navigating the ethical Landscape and Public perspectives. Healthcare, 12(2), 125. [Google Scholar] [CrossRef]
  9. Brady, G. M., Truxillo, D. M., Cadiz, D. M., Rineer, J. R., Caughlin, D. E., & Bodner, T. (2020). Opening the black box: Examining the nomological network of work ability and its role in organizational research. Journal of Applied Psychology, 105(6), 637. [Google Scholar] [CrossRef]
  10. Buck, C., Doctor, E., Hennrich, J., Jöhnk, J., & Eymann, T. (2022). General practitioners’ attitudes toward artificial intelligence–enabled systems: Interview study. Journal of Medical Internet Research, 24(1), e28916. [Google Scholar] [CrossRef]
  11. Bühler, M. M., Jelinek, T., & Nübel, K. (2022). Training and preparing tomorrow’s workforce for the fourth industrial revolution. Education Sciences, 12(11), 782. [Google Scholar] [CrossRef]
  12. Cai, Z., He, H., Huo, W., & Xu, X. (2024). More unique, more accepting? Integrating sense of uniqueness, perceived knowledge, and perceived empathy with acceptance of medical artificial intelligence. International Journal of Human–Computer Interaction, 40(24), 8433–8446. [Google Scholar] [CrossRef]
  13. Canhoto, A. I., & Clear, F. (2020). Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential. Business Horizons, 63(2), 183–193. [Google Scholar] [CrossRef]
  14. Chan, H. P., Hadjiiski, L. M., & Samala, R. K. (2020). Computer-aided diagnosis in the era of deep learning. Medical Physics, 47(5), e218–e227. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, P., Wu, L., & Wang, L. (2023). AI fairness in data management and analytics: A review on challenges, methodologies and applications. Applied Sciences, 13(18), 10258. [Google Scholar] [CrossRef]
  16. Chiniara, M., & Bentein, K. (2016). Linking servant leadership to individual performance: Differentiating the mediating role of autonomy, competence and relatedness need satisfaction. The Leadership Quarterly, 27(1), 124–141. [Google Scholar] [CrossRef]
  17. Chowdhury, S., Dey, P., Joel-Edgar, S., Bhattacharya, S., Rodriguez-Espindola, O., Abadie, A., & Truong, L. (2023). Unlocking the value of artificial intelligence in human resource management through AI capability framework. Human Resource Management Review, 33(1), 100899. [Google Scholar] [CrossRef]
  18. Chyung, S. Y., Roberts, K., Swanson, I., & Hankinson, A. (2017). Evidence-based survey design: The use of a midpoint on the Likert scale. Performance Improvement, 56(10), 15–23. [Google Scholar] [CrossRef]
  19. Cramarenco, R. E., Burcă-Voicu, M. I., & Dabija, D. C. (2023). The impact of artificial intelligence (AI) on employees’ skills and well-being in global labor markets: A systematic review. Oeconomia Copernicana, 14(3), 731–767. [Google Scholar] [CrossRef]
  20. Deci, E. L., & Ryan, R. M. (1985). The general causality orientations scale: Self-determination in personality. Journal of Research in Personality, 19(2), 109–134. [Google Scholar] [CrossRef]
  21. Dediu, V., Leka, S., & Jain, A. (2018). Job demands, job resources and innovative work behaviour: A European Union study. European Journal of Work and Organizational Psychology, 27(3), 310–323. [Google Scholar] [CrossRef]
  22. Delshad, S., Dontaraju, V. S., & Chengat, V. (2021). Artificial intelligence-based application provides accurate medical triage advice when compared to consensus decisions of healthcare providers. Cureus, 13(8), e16956. [Google Scholar] [CrossRef] [PubMed]
  23. Ding, L. (2021). Employees’ challenge-hindrance appraisals toward STARA awareness and competitive productivity: A micro-level case. International Journal of Contemporary Hospitality Management, 33(9), 2950–2969. [Google Scholar] [CrossRef]
  24. Duan, Z., Zeng, Q., & Liu, X. (2024). Examining the Effect of Supervisors’ Humble Leadership on Immediate and Delayed Well-Being in Postgraduate Students. Behavioral Sciences, 14(11), 1004. [Google Scholar] [CrossRef] [PubMed]
  25. Economou-Zavlanos, N. J., Bessias, S., Cary, M. P., Jr., Bedoya, A. D., Goldstein, B. A., Jelovsek, J. E., O’Brien, C. L., Walden, N., Elmore, M., Parrish, A. B., & Poon, E. G. (2024). Translating ethical and quality principles for the effective, safe and fair development, deployment and use of artificial intelligence technologies in healthcare. Journal of the American Medical Informatics Association, 31(3), 705–713. [Google Scholar] [CrossRef] [PubMed]
  26. Fan, W., Liu, J., Zhu, S., & Pardalos, P. M. (2020). Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system (AIMDSS). Annals of Operations Research, 294(1), 567–592. [Google Scholar] [CrossRef]
  27. Fasbender, U., & Gerpott, F. H. (2023). Designing work for change and its unintended side effects. Journal of Vocational Behavior, 145, 103913. [Google Scholar] [CrossRef]
  28. Feng, Z., & Hua, X. (2022). Applications and current status of AI in the medical field. Journal of Physics: Conference Series, 2289(1), 012030. [Google Scholar] [CrossRef]
  29. Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. [Google Scholar] [CrossRef]
  30. Fuller, C. M., Simmering, M. J., Atinc, G., Atinc, Y. O., & Babin, B. J. (2016). Common methods variance detection in business research. Journal of Business Research, 69, 3192–3198. [Google Scholar] [CrossRef]
  31. Gagné, M., Parker, S. K., Griffin, M. A., Dunlop, P. D., Knight, C., Klonek, F. E., & Parent-Rocheleau, X. (2022). Understanding and shaping the future of work with self-determination theory. Nature Reviews Psychology, 1(7), 378–392. [Google Scholar] [CrossRef]
  32. Gillison, F. B., Rouse, P., Standage, M., Sebire, S. J., & Ryan, R. M. (2019). A meta-analysis of techniques to promote motivation for health behaviour change from a self-determination theory perspective. Health Psychology Review, 13(1), 110–130. [Google Scholar] [CrossRef] [PubMed]
  33. Gu, D., Deng, S., Zheng, Q., Liang, C., & Wu, J. (2019). Impacts of case-based health knowledge system in hospital management: The mediating role of group effectiveness. Information & Management, 56(8), 103162. [Google Scholar]
  34. Guo, M., Gu, M., & Huo, B. (2025). The impacts of automation and augmentation AI use on physicians’ performance: An ambidextrous perspective. International Journal of Operations & Production Management, 45(1), 114–151. [Google Scholar]
  35. Hood, C., & Patton, R. (2022). Exploring the role of psychological need fulfilment on stress, job satisfaction and turnover intention in support staff working in inpatient mental health hospitals in the NHS: A self-determination theory perspective. Journal of Mental Health, 31(5), 692–698. [Google Scholar] [CrossRef]
  36. Humphrey, S. E., Nahrgang, J. D., & Morgeson, F. P. (2007). Integrating motivational, social, and contextual work design features: A meta-analytic summary and theoretical extension of the work design literature. Journal of Applied Psychology, 92(5), 1332. [Google Scholar] [CrossRef]
  37. Huo, W., Luo, W., Yan, J., Wang, Y., & Deng, Y. (2024a). Medical artificial intelligence information disclosure on healthcare professional involvement in innovation: A transactional theory of stress and coping model. International Journal of Human-Computer Interaction, 40(22), 7655–7667. [Google Scholar] [CrossRef]
  38. Huo, W., Yuan, X., Li, X., Luo, W., Xie, J., & Shi, B. (2023). Increasing acceptance of medical AI: The role of medical staff participation in AI development. International Journal of Medical Informatics, 175, 105073. [Google Scholar] [CrossRef]
  39. Huo, W., Zhang, Z., Qu, J., Yan, J., Yan, S., Yan, J., & Shi, B. (2024b). Speciesism and preference of human–artificial intelligence interaction: A study on medical artificial intelligence. International Journal of Human-Computer Interaction, 40(11), 2925–2937. [Google Scholar] [CrossRef]
  40. James, L. R., & Brett, J. M. (1984). Mediators, moderators, and tests for mediation. Journal of Applied Psychology, 69(2), 307. [Google Scholar] [CrossRef]
  41. Jia, N., Luo, X., Fang, Z., & Liao, C. (2024). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5–32. [Google Scholar] [CrossRef]
  42. Jussupow, E., Spohrer, K., Heinzl, A., & Gawlitza, J. (2021). Augmenting medical diagnosis decisions? An investigation into physicians’ decision-making process with artificial intelligence. Information Systems Research, 32(3), 713–735. [Google Scholar] [CrossRef]
  43. Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at work. Academy of Management Journal, 33(4), 692–724. [Google Scholar] [CrossRef]
  44. Kalra, N., Verma, P., & Verma, S. (2024). Advancements in AI based healthcare techniques with focus on diagnostic techniques. Computers in Biology and Medicine, 179, 108917. [Google Scholar] [CrossRef] [PubMed]
  45. Khan, M., Shiwlani, A., Qayyum, M. U., Sherani, A. M. K., & Hussain, H. K. (2024). AI-powered healthcare revolution: An extensive examination of innovative methods in cancer treatment. BULLET: Jurnal Multidisiplin Ilmu, 3(1), 87–98. [Google Scholar]
  46. La Guardia, J. G., Ryan, R. M., Couchman, C. E., & Deci, E. L. (2000). Within-person variation in security of attachment: A self-determination theory perspective on attachment, need fulfillment, and well-being. Journal of Personality and Social Psychology, 79(3), 367. [Google Scholar] [CrossRef]
  47. Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To engage or not to engage with AI for critical judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization Science, 33(1), 126–148. [Google Scholar] [CrossRef]
  48. Leroy, H. (2024). Motivating people to work: The value behind diverse assumptions. Journal of Management Studies, Early View. [Google Scholar] [CrossRef]
  49. Li, Q., & Qin, Y. (2023). AI in medical education: Medical student perception, curriculum recommendations and design suggestions. BMC Medical Education, 23(1), 852. [Google Scholar] [CrossRef]
  50. Liang, X., Guo, G., Shu, L., Gong, Q., & Luo, P. (2022). Investigating the double-edged sword effect of AI awareness on employee’s service innovative behavior. Tourism Management, 92, 104564. [Google Scholar] [CrossRef]
  51. Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, 120, 262–273. [Google Scholar] [CrossRef]
  52. Malik, A., Budhwar, P., & Kazmi, B. A. (2023). Artificial intelligence (AI)-assisted HRM: Towards an extended strategic framework. Human Resource Management Review, 33(1), 100940. [Google Scholar] [CrossRef]
  53. Man Tang, P., Koopman, J., McClean, S. T., Zhang, J. H., Li, C. H., De Cremer, D., Lu, Y., & Ng, C. T. S. (2022). When conscientious employees meet intelligent machines: An integrative approach inspired by complementarity theory and role theory. Academy of Management Journal, 65(3), 1019–1054. [Google Scholar] [CrossRef]
  54. Man Tang, P., Koopman, J., Yam, K. C., De Cremer, D., Zhang, J. H., & Reynders, P. (2023). The self-regulatory consequences of dependence on intelligent machines at work: Evidence from field and experimental studies. Human Resource Management, 62(5), 721–744. [Google Scholar] [CrossRef]
  55. McAnally, K., & Hagger, M. S. (2024). Self-determination theory and workplace outcomes: A conceptual review and future research directions. Behavioral Sciences, 14(6), 428. [Google Scholar] [CrossRef]
  56. Miao, C., Liu, C., Zhou, Y., Zou, X., Song, L., Chung, J. W., Tan, W., Li, X., & Li, D. (2024). Nurses’ perspectives on professional self-concept and its influencing factors: A qualitative study. BMC Nursing, 23(1), 237. [Google Scholar] [CrossRef]
  57. Moor, M., Banerjee, O., Abad, Z. S. H., Krumholz, H. M., Leskovec, J., Topol, E. J., & Rajpurkar, P. (2023). Foundation models for generalist medical artificial intelligence. Nature, 616(7956), 259–265. [Google Scholar] [CrossRef]
  58. Morgeson, F. P., & Humphrey, S. E. (2006). The Work Design Questionnaire (WDQ): Developing and validating a comprehensive measure for assessing job design and the nature of work. Journal of Applied Psychology, 91(6), 1321. [Google Scholar] [CrossRef]
  59. Nazareno, L., & Schiff, D. S. (2021). The impact of automation and artificial intelligence on worker well-being. Technology in Society, 67, 101679. [Google Scholar] [CrossRef]
  60. Olafsen, A. H., & Frølund, C. W. (2018). Challenge accepted! Distinguishing between challenge-and hindrance demands. Journal of Managerial Psychology, 33(4/5), 345–357. [Google Scholar] [CrossRef]
  61. Pan, J., Ding, S., Wu, D., Yang, S., & Yang, J. (2019). Exploring behavioural intentions toward smart healthcare services among medical practitioners: A technology transfer perspective. International Journal of Production Research, 57(18), 5801–5820. [Google Scholar] [CrossRef]
  62. Parent-Rocheleau, X., & Parker, S. K. (2022). Algorithms as work designers: How algorithmic management influences the design of jobs. Human Resource Management Review, 32(3), 100838. [Google Scholar] [CrossRef]
  63. Piyathasanan, B., Mathies, C., Patterson, P. G., & de Ruyter, K. (2018). Continued value creation in crowdsourcing from creative process engagement. Journal of Services Marketing, 32(1), 19–33. [Google Scholar] [CrossRef]
  64. Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879. [Google Scholar] [CrossRef] [PubMed]
  65. Richardson, H. A., Simmering, M. J., & Sturman, M. C. (2009). A tale of three perspectives: Examining post hoc statistical techniques for detection and correction of common method variance. Organizational Research Methods, 12(4), 762–800. [Google Scholar] [CrossRef]
  66. Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68. [Google Scholar] [CrossRef]
  67. Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology, 61, 101860. [Google Scholar] [CrossRef]
  68. Ryan, R. M., Duineveld, J. J., Di Domenico, S. I., Ryan, W. S., Steward, B. A., & Bradshaw, E. L. (2022). We know this much is (meta-analytically) true: A meta-review of meta-analytic findings evaluating self-determination theory. Psychological Bulletin, 148(11–12), 813. [Google Scholar] [CrossRef]
  69. Selvachandran, G., Quek, S. G., Paramesran, R., Ding, W., & Son, L. H. (2023). Developments in the detection of diabetic retinopathy: A state-of-the-art review of computer-aided diagnosis and machine learning methods. Artificial Intelligence Review, 56(2), 915–964. [Google Scholar] [CrossRef]
  70. Sendak, M., Elish, M. C., Gao, M., Futoma, J., Ratliff, W., Nichols, M., Bedoya, A., Balu, S., & O’Brien, C. (2020, January 27–30). “The human body is a black box” supporting clinical decision-making with deep learning. 2020 Conference on Fairness, Accountability, and Transparency (pp. 99–109), Barcelona, Spain. [Google Scholar]
  71. Shao, Y., Huang, C., Song, Y., Wang, M., Song, Y. H., & Shao, R. (2024). Using augmentation-based AI tool at work: A daily investigation of learning-based benefit and challenge. Journal of Management, 2024, 01492063241266503. [Google Scholar] [CrossRef]
  72. Sheldon, K. M., & Prentice, M. (2019). Self-determination theory as a foundation for personality researchers. Journal of Personality, 87(1), 5–14. [Google Scholar] [CrossRef]
  73. Shen, J., Zhang, C. J., Jiang, B., Chen, J., Song, J., Liu, Z., He, Z., Wong, S. Y., Fang, P. H., & Ming, W. K. (2019). Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Medical Informatics, 7(3), e10010. [Google Scholar] [CrossRef] [PubMed]
  74. Song, Z., Cai, J., Zhou, Y., Jiang, Y., Huang, S., Gu, L., & Tan, J. (2024). Knowledge, attitudes and practices among anesthesia and thoracic surgery medical staff toward AI-PCA. Journal of Multidisciplinary Healthcare, 31, 3295–3304. [Google Scholar] [CrossRef] [PubMed]
  75. Spring, M., Faulconbridge, J., & Sarwar, A. (2022). How information technology automates and augments processes: Insights from Artificial-Intelligence-based systems in professional service operations. Journal of Operations Management, 68(6–7), 592–618. [Google Scholar] [CrossRef]
  76. Teng, R., Zhou, S., Zheng, W., & Ma, C. (2024). Artificial intelligence (AI) awareness and work withdrawal: Evaluating chained mediation through negative work-related rumination and emotional exhaustion. International Journal of Contemporary Hospitality Management, 36(7), 2311–2326. [Google Scholar] [CrossRef]
  77. Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. [Google Scholar] [CrossRef]
  78. Van den Broeck, A., Ferris, D. L., Chang, C. H., & Rosen, C. C. (2016). A review of self-determination theory’s basic psychological needs at work. Journal of Management, 42(5), 1195–1229. [Google Scholar] [CrossRef]
  79. Wallace, J. C., Butts, M. M., Johnson, P. D., Stevens, F. G., & Smith, M. B. (2016). A multilevel model of employee innovation: Understanding the effects of regulatory focus, thriving, and employee involvement climate. Journal of Management, 42(4), 982–1004. [Google Scholar] [CrossRef]
  80. Wang, W., Chen, L., Xiong, M., & Wang, Y. (2023). Accelerating AI adoption with responsible AI signals and employee engagement mechanisms in health care. Information Systems Frontiers, 25(6), 2239–2256. [Google Scholar] [CrossRef]
  81. Wang, W., Gao, G., & Agarwal, R. (2024). Friend or foe? Teaming between artificial intelligence and workers with variation in experience. Management Science, 70(9), 5753–5775. [Google Scholar] [CrossRef]
  82. Williams, L. J., Hartman, N., & Cavazotte, F. (2010). Method variance and marker variables: A review and comprehensive CFA marker technique. Organizational Research Methods, 13(3), 477–514. [Google Scholar] [CrossRef]
  83. Xu, Q., Xie, W., Liao, B., Hu, C., Qin, L., Yang, Z., Xiong, H., Lyu, Y., Zhou, Y., & Luo, A. (2023). Interpretability of clinical decision support systems based on artificial intelligence from technological and medical perspective: A systematic review. Journal of Healthcare Engineering, 2023(1), 9919269. [Google Scholar] [CrossRef] [PubMed]
  84. Yam, K. C., Man Tang, P., Jackson, J. C., Su, R., & Gray, K. (2023). The rise of robots increases job insecurity and maladaptive workplace behaviors: Multimethod evidence. Journal of Applied Psychology, 108(5), 850. [Google Scholar] [CrossRef] [PubMed]
  85. Youssef, A., Ng, M. Y., Long, J., Hernandez-Boussard, T., Shah, N., Miner, A., Larson, D., & Langlotz, C. P. (2023). Organizational factors in clinical data sharing for artificial intelligence in health care. JAMA Network Open, 6(12), e2348422. [Google Scholar] [CrossRef] [PubMed]
  86. Zacher, H., Heusner, S., Schmitz, M., Zwierzanska, M. M., & Frese, M. (2010). Focus on opportunities as a mediator of the relationships between age, job complexity, and work performance. Journal of Vocational Behavior, 76(3), 374–386. [Google Scholar] [CrossRef]
  87. Zahlan, A., Ranjan, R. P., & Hayes, D. (2023). Artificial intelligence innovation in healthcare: Literature review, exploratory analysis, and future research. Technology in Society, 74, 102321. [Google Scholar] [CrossRef]
  88. Zhang, D., & Zhao, X. (2024). Understanding adoption intention of virtual medical consultation systems: Perceptions of ChatGPT and satisfaction with doctors. Computers in Human Behavior, 159, 108359. [Google Scholar] [CrossRef]
  89. Zheng, X., Zhu, W., Zhao, H., & Zhang, C. (2015). Employee well-being in organizations: Theoretical model, scale development, and cross-cultural validation. Journal of Organizational Behavior, 36(5), 621–644. [Google Scholar] [CrossRef]
Figure 1. The proposed theoretical model.
Figure 2. Results of model effect analysis. Note: * p < 0.05. *** p < 0.001.
Figure 3. Two-way interaction between USAI and JC for predicting NS_AUT. Note: USAI = Use of AI. NS_AUT = Need for Autonomy Satisfaction. JC = Job Complexity.
Figure 4. Two-way interaction between USAI and JC for predicting NS_COM. Note: USAI = Use of AI. NS_COM = Need for Competence Satisfaction. JC = Job Complexity.
Table 1. Measurement items, reliability, and internal consistency reliability.
| Variables and Survey Items | Factor Loading | Cronbach’s α | CR | AVE |
| --- | --- | --- | --- | --- |
| Use of AI (USAI) | | 0.878 | 0.880 | 0.710 |
| I depend on medical AI to help me with work-related tasks. | 0.876 | | | |
| I collaborate with medical AI to make key work-related decisions. | 0.831 | | | |
| I use AI to review and monitor the quality of my work. | 0.819 | | | |
| Need for Autonomy Satisfaction (NS_AUT) | | 0.755 | 0.757 | 0.509 |
| When I collaborate with medical AI, I can still follow my own approach to diagnosis and treatment. | 0.724 | | | |
| When I collaborate with medical AI, I still have a voice and the ability to express my opinions. | 0.728 | | | |
| When I collaborate with medical AI, I feel controlled and pressured to act in a certain way. (R) | 0.688 | | | |
| Need for Competence Satisfaction (NS_COM) | | 0.784 | 0.784 | 0.548 |
| When I collaborate with medical AI, I feel capable in my work. | 0.757 | | | |
| When I collaborate with medical AI, I often feel inadequate or incompetent. (R) | 0.745 | | | |
| When I collaborate with medical AI, I feel both capable and efficient. | 0.718 | | | |
| Need for Relatedness Satisfaction (NS_REL) | | 0.803 | 0.803 | 0.576 |
| When collaborating with medical AI, I feel as if it cares for and supports me like a colleague. | 0.785 | | | |
| When collaborating with medical AI, I feel I can establish a relationship with it, just like with a colleague. | 0.730 | | | |
| When collaborating with medical AI, I feel a strong sense of closeness and warmth, as if it were a colleague. | 0.760 | | | |
| Work Well-being (WWB) | | 0.888 | 0.890 | 0.573 |
| Since the introduction of medical AI, I find my work to be more interesting. | 0.757 | | | |
| Since the introduction of medical AI, overall, I am very satisfied with the work I do. | 0.819 | | | |
| Since the introduction of medical AI, I am always able to find ways to enrich my work. | 0.711 | | | |
| Since the introduction of medical AI, I am generally satisfied with the specific tasks I perform. | 0.778 | | | |
| Since the introduction of medical AI, I feel that my work is a meaningful experience. | 0.794 | | | |
| Since the introduction of medical AI, I am generally satisfied with the sense of accomplishment I gain from my work. | 0.672 | | | |
| Job Complexity (JC) | | 0.860 | 0.841 | 0.575 |
| My current work tasks are very complex. | 0.659 | | | |
| I have to make very complex decisions in my work. | 0.601 | | | |
| In my work, I need to apply all the knowledge and skills I possess. | 0.908 | | | |
| In my work, I need to continuously learn knowledge related to new things. | 0.825 | | | |
Note: CR = composite reliability. AVE = average variance extracted. R = Reverse.
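The CR and AVE values in Table 1 follow the standard composite-reliability and average-variance-extracted formulas and can be reproduced directly from the reported standardized loadings. A minimal Python sketch, using the three USAI loadings from Table 1:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l * l for l in loadings)  # error variance per item = 1 - lambda^2
    return squared_sum / (squared_sum + error)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

usai = [0.876, 0.831, 0.819]  # USAI factor loadings from Table 1
print(f"CR  = {composite_reliability(usai):.3f}")       # CR  = 0.880
print(f"AVE = {average_variance_extracted(usai):.3f}")  # AVE = 0.710
```

Both values match the table (CR = 0.880, AVE = 0.710); the same functions reproduce the other constructs' rows.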
Table 2. Descriptive statistics and inter-construct correlations.
| Variable | Mean | SD | USAI | NS_AUT | NS_COM | NS_REL | WWB | JC | Gender | Age | Edu | Tenure |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| USAI | 3.133 | 1.130 | | | | | | | | | | |
| NS_AUT | 3.652 | 0.913 | 0.256 ** | | | | | | | | | |
| NS_COM | 3.639 | 0.934 | 0.324 ** | 0.752 ** | | | | | | | | |
| NS_REL | 3.457 | 1.030 | 0.366 ** | 0.532 ** | 0.616 ** | | | | | | | |
| WWB | 3.638 | 0.895 | 0.284 ** | 0.597 ** | 0.557 ** | 0.460 ** | | | | | | |
| JC | 3.415 | 1.087 | 0.406 ** | 0.306 ** | 0.302 ** | 0.313 ** | 0.314 ** | | | | | |
| Gender | 1.710 | 0.456 | 0.002 | 0.125 * | 0.119 * | 0.090 | 0.060 | 0.074 | | | | |
| Age | 3.070 | 0.903 | 0.011 | −0.019 | −0.050 | 0.046 | 0.059 | 0.047 | −0.073 | | | |
| Edu | 2.740 | 0.903 | −0.149 * | 0.022 | −0.037 | −0.113 | 0.050 | −0.008 | −0.088 | 0.153 * | | |
| Tenure | 2.850 | 1.323 | 0.059 | −0.043 | −0.069 | 0.042 | 0.047 | 0.113 | −0.027 | 0.710 ** | −0.069 | |
Note: n = 280. * p < 0.05. ** p < 0.01. USAI = Use of AI. NS_AUT = Need for Autonomy Satisfaction. NS_COM = Need for Competence Satisfaction. NS_REL = Need for Relatedness Satisfaction. WWB = Work Well-being. JC = Job Complexity.
Table 3. Chi-square, goodness-of-fit values, and model comparison tests.
| Model | χ² | df | CFI |
| --- | --- | --- | --- |
| 1. CFA | 318.522 | 209 | 0.966 |
| 2. Baseline | 325.530 | 215 | 0.966 |
| 3. Method-C | 325.009 | 214 | 0.966 |
| 4. Method-U | 290.205 | 193 | 0.970 |
| 5. Method-R | 300.661 | 209 | 0.972 |

Chi-Square Model Comparison Tests

| ΔModel | Δχ² | Δdf | χ² Critical Value (0.05) |
| --- | --- | --- | --- |
| 1. Baseline vs. Method-C | 0.53 | 1 | 3.841 |
| 2. Baseline vs. Method-U | 35.325 * | 22 | 34.382 |
| 3. Method-U vs. Method-R | 10.456 | 16 | 26.296 |
Note: * p < 0.05. CFA = Confirmatory factor analysis. CFI = Comparative Fit Index. Method-C = Control method variance model. Method-U = Unrestricted method variance model. Method-R = method variance model proposed by Richardson et al. (2009).
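The critical values used in these nested-model comparisons are standard upper 5% quantiles of the chi-square distribution at the relevant Δdf (1 for Baseline vs. Method-C, 16 for Method-U vs. Method-R). Assuming SciPy is available, they can be reproduced with its inverse CDF:

```python
from scipy.stats import chi2

# Upper 5% critical values of the chi-square distribution
for df in (1, 16):
    print(f"df = {df}: {chi2.ppf(0.95, df):.3f}")
# df = 1: 3.841
# df = 16: 26.296
```

A Δχ² exceeding the critical value at the given Δdf indicates that the less-constrained model fits significantly better.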
Table 4. Discriminant validity and common method bias.
| Model | χ² | df | χ²/df | RMSEA | SRMR | TLI | CFI |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Six-factor model a | 296.107 | 193 | 1.534 | 0.044 | 0.0488 | 0.962 | 0.968 |
| Five-factor model b | 308.888 | 198 | 1.560 | 0.045 | 0.0501 | 0.960 | 0.966 |
| Five-factor model c | 378.884 | 198 | 1.914 | 0.057 | 0.0545 | 0.935 | 0.944 |
| Four-factor model d | 404.500 | 202 | 2.002 | 0.060 | 0.0571 | 0.929 | 0.938 |
| Three-factor model e | 692.306 | 205 | 3.377 | 0.092 | 0.0748 | 0.831 | 0.850 |
| Two-factor model f | 1009.233 | 207 | 4.876 | 0.118 | 0.1041 | 0.725 | 0.753 |
| One-factor model g | 1374.746 | 208 | 6.609 | 0.142 | 0.1182 | 0.602 | 0.641 |
Note: a Six-factor model: USAI, NS_AUT, NS_COM, NS_REL, WWB, JC. b Five-factor model: USAI, NS_AUT + NS_COM, NS_REL, WWB, JC. c Five-factor model: USAI, NS_AUT, NS_COM + NS_REL, WWB, JC. d Four-factor model: USAI, NS_AUT + NS_COM + NS_REL, WWB, JC. e Three-factor model: USAI, NS_AUT + NS_COM + NS_REL + WWB, JC. f Two-factor model: USAI, NS_AUT + NS_COM + NS_REL + WWB + JC. g One-factor model: USAI + NS_AUT + NS_COM + NS_REL + WWB + JC. RMSEA = Root Mean Square Error of Approximation. SRMR = Standardized Root Mean Square Residual. TLI = Tucker–Lewis Index. CFI = Comparative Fit Index. USAI = Use of AI. NS_AUT = Need for Autonomy Satisfaction. NS_COM = Need for Competence Satisfaction. NS_REL = Need for Relatedness Satisfaction. WWB = Work Well-being. JC = Job Complexity.
Table 5. Results of hypothesis testing.
Dependent variables: WWB (Models 1–2) and NS_AUT (Models 3–6).

| Variable (β) | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Gender | 0.068 | 0.070 | 0.127 * | 0.130 | 0.111 | 0.111 |
| Age | 0.039 | 0.041 | 0.027 | 0.029 | 0.044 | 0.041 |
| Edu | 0.051 | 0.095 | 0.025 | 0.064 | 0.045 | 0.040 |
| Tenure | 0.024 | 0.009 | −0.058 | −0.072 | −0.106 | −0.101 |
| USAI | | 0.297 *** | | 0.269 *** | 0.171 ** | 0.170 ** |
| JC | | | | | 0.239 *** | 0.212 ** |
| USAI × JC | | | | | | −0.130 * |
| F | 0.697 | 5.840 *** | 1.290 | 5.352 *** | 7.133 *** | 6.941 *** |
| ΔF | 0.697 | 26.158 *** | 1.290 | 21.222 *** | 14.698 *** | 5.141 * |
| R² | 0.010 | 0.096 | 0.018 | 0.089 | 0.136 | 0.152 |
| ΔR² | 0.010 | 0.086 | 0.018 | 0.071 | 0.047 | 0.016 |

Dependent variables: NS_COM (Models 7–10) and NS_REL (Models 11–14).

| Variable (β) | Model 7 | Model 8 | Model 9 | Model 10 | Model 11 | Model 12 | Model 13 | Model 14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gender | 0.115 | 0.118 * | 0.102 | 0.101 | 0.085 | 0.089 | 0.073 | 0.073 |
| Age | 0.025 | 0.027 | 0.040 | 0.037 | 0.092 | 0.094 | 0.107 | 0.106 |
| Edu | −0.036 | 0.012 | −0.005 | −0.010 | −0.122 | −0.070 | −0.086 | −0.087 |
| Tenure | −0.087 | −0.104 | −0.134 | −0.127 | −0.029 | −0.048 | −0.076 | −0.075 |
| USAI | | 0.332 *** | 0.247 *** | 0.246 *** | | 0.358 *** | 0.276 *** | 0.276 *** |
| JC | | | 0.207 ** | 0.177 ** | | | 0.198 ** | 0.193 ** |
| USAI × JC | | | | −0.146 * | | | | −0.025 |
| F | 1.380 | 7.980 *** | 8.804 *** | 8.678 *** | 1.729 | 9.619 *** | 10.081 *** | 8.644 *** |
| ΔF | 1.380 | 33.723 *** | 11.410 ** | 6.799 * | 1.729 | 40.191 *** | 10.690 ** | 0.202 |
| R² | 0.020 | 0.127 | 0.162 | 0.183 | 0.025 | 0.149 | 0.181 | 0.182 |
| ΔR² | 0.020 | 0.107 | 0.035 | 0.020 | 0.025 | 0.125 | 0.032 | 0.001 |
Note: n = 280. * p < 0.05. ** p < 0.01. *** p < 0.001. USAI = Use of AI. NS_AUT = Need for Autonomy Satisfaction. NS_COM = Need for Competence Satisfaction. NS_REL = Need for Relatedness Satisfaction. WWB = Work Well-being. JC = Job Complexity.
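The negative interaction coefficients can be probed with simple slopes: the slope of USAI at a given moderator level is the USAI coefficient plus the interaction coefficient times the moderator value. A sketch using the standardized coefficients from Model 6 (USAI = 0.17, USAI × JC = −0.13), evaluated at JC = ±1 SD (an illustrative probing convention; the paper does not report its exact probing values):

```python
def simple_slope(b_x, b_xz, z):
    """Slope of the focal predictor at moderator value z: b_x + b_xz * z."""
    return b_x + b_xz * z

# Model 6 (NS_AUT): beta(USAI) = 0.17, beta(USAI x JC) = -0.13
for z, label in ((-1.0, "low JC (-1 SD)"), (1.0, "high JC (+1 SD)")):
    print(f"{label}: {simple_slope(0.17, -0.13, z):.2f}")
# low JC (-1 SD): 0.30
# high JC (+1 SD): 0.04
```

The much steeper slope under low job complexity mirrors the pattern plotted in Figures 3 and 4: USAI predicts autonomy and competence satisfaction mainly when job complexity is low.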
Table 6. Results of mediation effects.
| Path: USAI → WWB | Coefficient | t | p | LLCI (95% CI) | ULCI (95% CI) |
|---|---|---|---|---|---|
| Total effect | 0.236 | 5.114 | 0.000 | 0.145 | 0.326 |
| Direct effect | 0.074 | 1.834 | 0.068 | −0.005 | 0.154 |
| Indirect effects | | | | | |
| USAI → NS_AUT → WWB | 0.081 | | | 0.033 | 0.138 |
| USAI → NS_COM → WWB | 0.046 | | | 0.002 | 0.103 |
| USAI → NS_REL → WWB | 0.035 | | | 0.003 | 0.073 |
Note: USAI = Use of AI. NS_AUT = Need for Autonomy Satisfaction. NS_COM = Need for Competence Satisfaction. NS_REL = Need for Relatedness Satisfaction. WWB = Work Well-being. JC = Job Complexity.
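Indirect-effect confidence intervals of the kind reported in Table 6 are typically obtained by percentile-bootstrapping the product of the X→M slope and the M→Y slope (with X controlled), as in PROCESS-style mediation analysis. A minimal pure-Python sketch on synthetic data, assuming a single mediator (all variable names and generating coefficients are illustrative):

```python
import random

def slope_simple(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def slope_partial(m, x, y):
    """Coefficient of m in the OLS regression of y on m and x (closed form)."""
    n = len(y)
    mm, mx, my = sum(m) / n, sum(x) / n, sum(y) / n
    smm = sum((v - mm) ** 2 for v in m)
    sxx = sum((v - mx) ** 2 for v in x)
    smx = sum((a - mm) * (b - mx) for a, b in zip(m, x))
    smy = sum((a - mm) * (b - my) for a, b in zip(m, y))
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return (smy * sxx - sxy * smx) / (smm * sxx - smx ** 2)

def indirect(x, m, y):
    """Indirect effect a*b: (X -> M slope) times (M -> Y slope controlling for X)."""
    return slope_simple(x, m) * slope_partial(m, x, y)

random.seed(7)
n = 280
usai = [random.gauss(0, 1) for _ in range(n)]
ns_aut = [0.4 * v + random.gauss(0, 1) for v in usai]                    # X -> M path
wwb = [0.35 * mv + 0.07 * xv + random.gauss(0, 1)                        # M, X -> Y paths
       for mv, xv in zip(ns_aut, usai)]

point = indirect(usai, ns_aut, wwb)
idx = list(range(n))
boots = []
for _ in range(1000):                                  # resample cases with replacement
    s = [random.choice(idx) for _ in range(n)]
    boots.append(indirect([usai[i] for i in s],
                          [ns_aut[i] for i in s],
                          [wwb[i] for i in s]))
boots.sort()
lo, hi = boots[24], boots[974]                         # 95% percentile bootstrap CI
print(round(point, 3), round(lo, 3), round(hi, 3))
```

As in Table 6, the indirect effect is judged significant when the bootstrap interval excludes zero.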
Table 7. Results for moderated mediation test.
| Mediation Variable | Moderator | Effect | 95% BootCI | Index of Moderated Mediation | 95% BootCI |
|---|---|---|---|---|---|
| NS_AUT | L-JC (−1 SD) | 0.089 | [0.026, 0.170] | −0.035 | [−0.077, −0.002] |
| | H-JC (+1 SD) | 0.013 | [−0.030, 0.059] | | |
| NS_COM | L-JC (−1 SD) | 0.054 | [0.003, 0.129] | −0.018 | [−0.051, 0.000] |
| | H-JC (+1 SD) | 0.014 | [−0.006, 0.047] | | |
| NS_REL | L-JC (−1 SD) | 0.029 | [0.002, 0.068] | −0.002 | [−0.015, 0.008] |
| | H-JC (+1 SD) | 0.024 | [0.002, 0.060] | | |
Note: NS_AUT = Need for Autonomy Satisfaction. NS_COM = Need for Competence Satisfaction. NS_REL = Need for Relatedness Satisfaction. JC = Job Complexity. H = High. L = Low. SD = Standard Deviation. CI = Confidence Interval.
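The index of moderated mediation in Table 7 is the product of the X × W interaction coefficient on the mediator (a3) and the mediator-to-outcome path (b); the conditional indirect effect at moderator value W is (a1 + a3·W)·b, evaluated at W = ±1 SD. A small arithmetic sketch, using illustrative coefficients chosen near the Model 6 estimates rather than the study's exact values:

```python
# Illustrative path coefficients (assumed, not the study's exact estimates)
a1 = 0.17    # USAI -> NS_AUT main effect (cf. Model 6)
a3 = -0.13   # USAI x JC interaction on NS_AUT (cf. Model 6)
b = 0.27     # NS_AUT -> WWB

def conditional_indirect(w):
    """Indirect effect of USAI on WWB through NS_AUT at moderator value w (in SD units)."""
    return (a1 + a3 * w) * b

index_mod_med = a3 * b                                  # index of moderated mediation
low, high = conditional_indirect(-1), conditional_indirect(+1)
print(round(low, 3), round(high, 3), round(index_mod_med, 3))   # -> 0.081 0.011 -0.035
```

The pattern mirrors the NS_AUT rows of Table 7: the indirect effect is stronger under low job complexity than under high job complexity, and the negative index quantifies that difference per SD of JC.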

Share and Cite

Huo, W.; Li, Q.; Liang, B.; Wang, Y.; Li, X. When Healthcare Professionals Use AI: Exploring Work Well-Being Through Psychological Needs Satisfaction and Job Complexity. Behav. Sci. 2025, 15, 88. https://doi.org/10.3390/bs15010088
