1. Introduction
With its myriad applications, artificial intelligence (AI) holds the promise of truly revolutionizing patient care. Artificial intelligence in healthcare (AIH) is an emerging technological system that enables healthcare providers to manage and analyze data by emulating human cognitive functions with greater accuracy [1]. This technology is introducing a paradigm shift to the healthcare industry, aided by the increasing availability of healthcare data and the progress of analytical techniques [1]. Reflecting the legal, internationally accepted status of AI in healthcare, the medical device category of software as a medical device (SaMD) defines it as “analytical software with a significant potential for automating routine functions normally performed by a human”; it does not simply “simulate” human reasoning, but rather performs analytical tasks previously performed only by a human.
The primary aim of many AI applications within the healthcare space is to analyze and understand the relationships between prevention and/or treatment options and their related patient outcomes [2]. Natural language processing (NLP) and machine learning techniques (MLT) are the two main categories of AIH methods. NLP augments structured medical data by drawing on unstructured sources, including medical journals and clinical records, which can then be analyzed with MLT. MLT, in turn, seeks to estimate disease outcomes and cluster patients' characteristics by analyzing structured medical data, including imaging and genetic profiles [2,3]. While research on AI's potential in the healthcare industry is often directed towards validating its efficacy in improving care, the risks it may introduce to both patients and providers, such as algorithmic bias and machine morality issues, are also worthy of exploration. These possible challenges further highlight the need to regulate such technology [4]. This also underscores the importance of ethically integrating AI and big data into the current healthcare landscape, in order to develop armamentariums that are satisfactory to patients and providers across all strata. To do so, we must first appraise the various factors that affect the adoption of artificial intelligence by healthcare professionals.
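To make the NLP-then-MLT division above concrete, the sketch below turns a free-text note into structured keyword features and feeds them to a toy risk model. The note, keyword list, and model weights are purely hypothetical illustrations, not artifacts of any study cited here.

```python
import math

# Minimal sketch of the NLP -> MLT pipeline described above.
# Step 1 (NLP): turn an unstructured clinical note into structured features.
# Step 2 (MLT): estimate an outcome from those structured features.
# The note, keywords, and model weights are hypothetical illustrations.

KEYWORDS = ["chest pain", "dyspnea", "fever", "cough"]

def extract_features(note: str) -> list[int]:
    """NLP step: binary indicators for clinically relevant keywords.
    (A real system would also handle negation, synonyms, etc.)"""
    text = note.lower()
    return [1 if kw in text else 0 for kw in KEYWORDS]

def predict_risk(features: list[int], weights: list[float], bias: float) -> float:
    """MLT step: a toy logistic risk score (stand-in for a trained model)."""
    score = bias + sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-score))

note = "Patient reports chest pain and dyspnea."
features = extract_features(note)
risk = predict_risk(features, weights=[1.2, 0.8, 0.4, 0.3], bias=-1.5)
print(features, round(risk, 3))
```

The point of the sketch is the division of labor: the NLP step produces the structured representation, and the MLT step consumes it to estimate an outcome.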
The world is on the brink of what is often called “The Fourth Industrial Revolution”. Rapid advances in technology are helping to blur the lines between physical, biological, and digital realms to completely reimagine the way we experience most aspects of life. Exponential leaps in computing power and the increased availability of large volumes of data are hastening this progress and have led to a world in which AI is already ubiquitous [
5]. From virtual assistants to drones and self-driving cars, most industries are seeking to explore AI's potential, and healthcare is no exception. Today, AI is commonly harnessed in the field of medicine to perform a vast range of functions, including administrative support, clinical decision making, and interventions in management; it has also been shown to reduce overall healthcare costs and medical errors, among other benefits [
6]. These advantages could greatly improve a system that is additionally challenged by a surge in non-communicable diseases and an aging population. The availability of AI, however, does not guarantee its usage. Throughout the world, healthcare providers are faced with an increased presence of technology in their day-to-day practice and must learn how to effectively channel these new resources into improving patient outcomes, and perhaps physician satisfaction as well [7]. Many questions remain unanswered about the opportunities and threats posed by AI in healthcare, as well as its effects on individual practitioners, policymakers, and institutions. Therefore, it is critical to learn what drives and what prevents healthcare professionals from integrating AI into their daily work [8,9,10]. The current research aims to achieve this by extensively investigating these variables. One can ensure the successful application of AI in healthcare by first identifying potential barriers to adoption and then developing effective tactics to overcome them.
This study contributes to research on AI in healthcare by investigating the factors that influence its uptake by healthcare practitioners. Through a comprehensive investigation of these factors, we can better understand what prevents wider acceptance and create strategies to address it. Understanding the drivers of and barriers to the adoption of AI by healthcare providers is crucial to ensuring its smooth implementation in the industry, and this is the purpose of this study. Our research has the potential to inform and influence healthcare authorities and organizations, as they are responsible for promoting the appropriate and productive use of AI.
2. Related Works
Miller et al. [
8] stated that the role of AI in the future of medicine is as yet uncertain. By running large datasets (big data) through complex mathematical models (algorithms), computers are trained to recognize patterns that would otherwise be undecipherable using biostatistics. Training improves the reliability of AI predictive models by correcting the algorithm's flaws. Successful applications of AI for image analysis can be seen in radiology, pathology, and dermatology, where it is outpacing human specialists in both the speed and accuracy of diagnosis. While there can be no absolute certainty in a diagnosis, system performance is reliably improved when machines and doctors work together. By using natural language processing to sort through the constantly growing scientific literature and compile years of heterogeneous electronic medical information, cognitive programs are already having an impact on medical practice. In these ways, AI has the potential to enhance the standard of care, reduce medical errors, increase the number of people willing to participate in clinical trials, and ultimately improve the lives of people with chronic diseases. Despite the growing number of IoT applications in hospitals and their associated benefits and drawbacks, few investigations had been undertaken to determine whether IoT services are genuinely in demand. To verify this demand, Kang et al. [9] surveyed working hospital nurses. There were 1086 responses (a 90.2% response rate), and scores of five out of seven points or higher were obtained for all service questions, indicating a high demand for all services. A vital sign device interface system was the most sought-after. Nurses in wards expressed a higher demand for IoT services that improve patient care, whereas nurses in non-ward departments expressed a higher demand for IoT services that improve staff productivity. Overall, the findings offer a road map for the development of services that might enhance both the effectiveness of clinicians' workflows and the health of their patients.
To examine how consumers accept and use technology, Venkatesh et al. [
10] extended the scope of the unified theory of acceptance and use of technology (UTAUT). The proposed UTAUT2 modifies UTAUT to include the concepts of hedonic motivation, price value, and habit. The impacts of these dimensions on behavioral intention and technology use are hypothesized to be moderated by individual variations such as age, gender, and experience. The theory was validated by the outcomes of a two-stage online survey among 1512 mobile Internet users, with data on technology use gathered four months after the initial survey. The proposed extensions in UTAUT2 increased the variance explained in behavioral intention (from 56% to 74%) and in technology use (from 40% to 52%).
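The "variance explained" figures reported for UTAUT versus UTAUT2 are R² values; the sketch below shows how such a comparison is computed. The survey scores and model predictions here are fabricated for illustration (chosen so the resulting R² values happen to mirror the reported 56% and 74%); only the R² formula itself is standard.

```python
# Comparing variance explained (R^2) by a baseline vs. an extended model.
# All numbers below are hypothetical survey scores, not data from any study.

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

behavioral_intention = [3.0, 4.5, 2.5, 5.0, 4.0, 3.5]

# Predictions from a baseline model (original UTAUT constructs only) ...
baseline_pred = [3.5, 3.9, 3.2, 4.3, 3.6, 3.1]
# ... and from an extended model adding hedonic motivation, price value, habit.
extended_pred = [3.45, 4.05, 2.95, 4.55, 3.6, 3.1]

print(round(r_squared(behavioral_intention, baseline_pred), 2))
print(round(r_squared(behavioral_intention, extended_pred), 2))
```

A higher R² for the extended model is what "the extensions increased the variance explained" means operationally.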
In their review, Khanijahani et al. [
11] discussed the organizational, professional, and patient characteristics associated with the adoption of artificial intelligence in healthcare. They performed a comprehensive search of available databases and analyzed data from 27 papers that fit their criteria. Healthcare providers and patients were shown to place significant weight on psychosocial factors such as perceived ease of use or usefulness, performance or effort expectancy, and social influence. The adoption of AI technology was also found to be affected by structural factors such as organization size, workflow, training, and security. Patient demographic and medical characteristics were found to be associated with AI adoption, but only in a small number of studies. The authors suggest that a more holistic strategy for AI adoption is required, one which accounts for the interactions between professionals, organizations, and patients. To fully comprehend the mechanisms of AI adoption in healthcare organizations and environments, more study is needed.
The study by Zhai et al. [
12] examined which aspects of artificial intelligence (AI) contouring technology in China are most appealing to radiation oncologists. Based on the Unified Theory of Acceptance and Use of Technology (UTAUT), the researchers created a model that incorporates the factors of perceived risk and resistance bias. A total of 307 participants filled out the survey, and the results were analyzed using structural equation modeling. The findings indicated that Chinese radiation oncologists had a generally favorable impression of AI-assisted contouring technology. Performance expectancy, social influence, and facilitating conditions significantly influenced behavioral intention, whereas effort expectancy, perceived risk, and resistance bias did not. The results imply that technology resistance is minimal and unrelated to behavioral intention among Chinese radiation oncologists. These findings may inform efforts to increase the use of AI-assisted contouring in Chinese medical facilities.
Dos Santos et al. [
13] explored the perspectives of medical students on the use of AI in radiology and medicine. The findings highlight that the majority of students understand what AI could achieve and what that could mean for their fields, and that they are not concerned that AI will replace human radiologists or doctors. Given the growing prevalence of AI technology in healthcare, the study emphasizes the need to include AI education in the medical curriculum. The results also show that medical students are optimistic about the future of AI in medicine and do not see it as a threat to the need for human doctors. Overall, the study stresses the importance of doctors and other medical workers keeping up with the ever-changing healthcare technology landscape.
A multicenter survey-based study conducted by Sit et al. [
14] in 19 UK medical schools to evaluate the attitudes and perceptions of UK medical students towards artificial intelligence and radiology revealed that the majority of medical students recognize the importance of AI in healthcare and believe that it will play a crucial role in the field. Although AI is increasingly being used in radiology, nearly half of the students said they were less interested in pursuing a career in the area as a result. The study also discovered that only a tiny percentage of students received any kind of formal instruction on AI, and that those who did reported a stronger interest in radiology and a higher opinion of their ability to use AI technologies. Still, many of these students lacked self-assurance and knowledge regarding the application of AI in healthcare. To prevent AI from discouraging students from studying radiology, the study recommends that medical schools improve and expand AI training and present real use cases and constraints. A study conducted by Lennartz et al. [15] on 229 patients scheduled for computed tomography or magnetic resonance imaging found that patients preferred physicians over AI for most clinical tasks, the sole exception being treatment planning based on current scientific evidence. In situations where doctors' diagnoses and those of AI disagreed, patients tended to side with their doctors, and in both diagnosis and treatment, patients overwhelmingly desired that AI be overseen by a human doctor. The study found that although patients are aware of AI's potential to help doctors apply the latest scientific evidence in practice, they still prefer AI applications that are supervised by doctors. To safeguard patient interests and uphold ethical norms, it stresses the importance of the transparency and regulation of AI in healthcare. Overall, the study highlights the significance of considering physicians' roles when implementing AI in healthcare and of prioritizing patients' preferences and values.
5. Discussion
Despite the ability of AI to generate high-quality data, there are reservations about its real-world application, in part due to the lack of clarity surrounding a technology that is often poorly understood [8]. The demand for AI-based services may also be affected by a wide spectrum of factors. A study by Kang et al. comparing ward and non-ward nurses showed that individuals working in non-ward departments demonstrated a higher demand for such tools to increase productivity than their ward counterparts [
9]. The collective concerns of accuracy and efficiency must also be skillfully balanced against individual concerns of privacy and liability [
19].
This study aimed to investigate the attitudes of healthcare professionals, specifically doctors, towards the use of artificial intelligence in their routine clinical practice. Most of the initial hypotheses of our model were supported; however, some unexpected results contradicted our original assumptions. The authors hereby provide insights into the determinant factors affecting the adoption of artificial intelligence by healthcare professionals.
The degree to which an individual believes that utilizing a specific system will enhance his or her job performance is defined as performance expectancy (PE) [10]. Healthcare professionals are inclined to act more cautiously than other technology adopters when deciding which technologies to adopt in order to provide high-quality healthcare services. The user's perception of the AIH system's capability represents a considerable dimension of trust in the AIH system, which is highly associated with the representation of mathematical error, the quality of the input data, and the algorithms considered in the decision-making process. McKnight and collaborators described trust in technology as beliefs about a technology's capability, regardless of any antecedent motives or will [20]. Fan and collaborators considered PE and initial trust (IT) as meaningful predictors of behavioral intention to adopt AIH [10]. Furthermore, IT has been demonstrated to have a robust impact, which is inconsistent with trend studies, in which PE is commonly found to have the greatest impact [21]. Nakrem and collaborators demonstrated that healthcare providers are less likely to adopt digital technology if they do not understand the underlying rationale for using it or do not believe that it can improve the quality of care delivered [22]. Propensity to trust (PT) is defined as a stable contributing factor referring to a person's overall predisposition to trust other people or technology [23]. Mayer's trust theory reflects the pivotal role of PT as an antecedent factor shaping people's trust, especially when there is no preceding interaction between trustee and trustor [24].
Effort expectancy (EE) refers to the degree of ease associated with technology adoption [19]. Perceived ease of use and usefulness therefore correspond to effort expectancy and performance expectancy, respectively, all of which are regarded as robust predictors of AIH adoption by healthcare providers [20]. Social influence (SI) refers to the degree to which a person perceives that important others believe he or she should adopt a new system [10]. SI was found to be a meaningful predictor of an individual's intention to adopt health and mobile diet apps and other mobile health services. Behavioral intention (BI) is more affected by SI and facilitating conditions than by PE. For instance, when Chinese physicians encountered AIH-based technology, their therapeutic point of view was more prone to be driven by individuals with whom they share close relationships, e.g., hospital leaders, colleagues, and friends [23]. This concept, aligned with the ideology of the Chinese philosophy of “utilitarian Guanxi”, which combines profit with objective intentions, reveals a culture of vertical collectivism [23]. Moreover, social propaganda, such as news stories regarding the successful use of AIH-based technology by healthcare providers, is likely to affect physicians' perceptions of adopting such technologies [11,12]. While the aforementioned contributing factors were found to significantly affect AIH-based technology adoption in an appreciable number of studies, others did not report consistent results. Other factors, including standardization in healthcare practices and process orientation, corresponded strongly with augmenting the perceived ease of adoption of AIH-based technologies. Concerning the technology acceptance model, it is not unexpected that all the above-mentioned psychological factors could positively or negatively affect people's intention to adopt AIH-based technologies. This, in turn, depends largely on how professionals perceive the consequences of adopting such technologies and the extent to which they can trust them [20].
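Survey studies in this tradition typically score each construct (PE, EE, SI, BI, and so on) as the mean of several seven-point Likert items before modeling. A minimal sketch of that scoring step follows; the item responses are entirely hypothetical, and only the averaging convention is standard practice in this literature.

```python
# Scoring UTAUT-style constructs from Likert items (1-7 scale).
# The item responses below are hypothetical illustrations.

responses = {
    "PE": [6, 5, 6],  # performance expectancy items
    "EE": [4, 5, 4],  # effort expectancy items
    "SI": [5, 6, 6],  # social influence items
    "BI": [6, 6, 5],  # behavioral intention items
}

def construct_score(items: list[int]) -> float:
    """Average the Likert items belonging to one construct."""
    return sum(items) / len(items)

scores = {name: round(construct_score(items), 2) for name, items in responses.items()}
print(scores)
```

These per-respondent construct scores are what then enter the regression or structural equation models discussed above.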
Personal innovativeness (PI) refers to the degree to which a person is receptive to novel views and independently makes innovative choices [
25]. Considering innovation diffusion theory, differences in personal innovativeness lead to different individual reactions, yielding a predisposed tendency towards using the innovation [
As noted by Webster and collaborators, PI is a comparatively stable descriptor of each person and remains invariant across different situations and internal variables [
27]. A growing trend in the literature has regarded PI as a pivotal contributing factor when it comes to the adoption of novel mobile commerce, consumer products, mobile payment, and mobile diet apps. Moreover, it also appears to be an antecedent of intention to adopt information and communication technologies [
25]. Yi and collaborators noted the positive link between PI and perceived ease of adoption in the context of PDA acceptance by healthcare providers [
28].
Task complexity (TAC) is defined as the level of required load and perceived difficulty to perform a task successfully [
5]; thus, if healthcare providers are convinced that their given tasks are challenging, they are more prone to accept AIH-based support to augment their PE [20]. According to one study, Canadian medical students perceived a promising role for AIH in addressing TAC, as it has proven efficient enough to build personalized medication regimens, perform robotic surgery, and provide diagnoses and prognoses. In the case of complex diagnostic tasks that threaten the patient's survival, healthcare professionals may show a higher appreciation of AIH-based technologies than of classic ones [18]. Streamlined technology-use processes and more transparent interfaces could make these tools less complex to operate [
18]. Zhou and collaborators verified that technological characteristics play a pivotal role as a direct antecedent of EE in the intention to adopt mobile banking [29]. However, healthcare professionals have raised concerns regarding the capacity of digital platforms to account for socio-demographic characteristics, including gender, race, and ethnicity [
30].
With the rapid evolution of AIH applications, the hypothesis that doctors' authority may be challenged, or that doctors may even be replaced by AIH products, has recently raised several concerns and remains one of the most serious barriers facing the adoption of AIH-based technology [
18,
31]. According to one study, 49% of English medical students expressed reduced enthusiasm for pursuing radiology as a profession because of AIH, and 17% of German medical students were convinced that healthcare professionals might be replaced by AIH [29,30]. Fan et al. [18] indicated that PSC has no meaningful impact on healthcare providers' intention to adopt AIH systems. Optimally, AIH will not entirely replace professionals, but will instead enable them to focus more on the important aspects of health care [
32].
Gardner and collaborators addressed age as another contributing factor, as younger healthcare professionals were found to be the most likely to adopt automated pain recognition procedures [33]. According to another investigation, minorities and male individuals were more likely to adopt AIH services than to consult conventional physicians [
34].
According to a study among patients scheduled for magnetic resonance imaging or computed tomography, the severity of illness was proposed as a determinant factor for the adoption of AIH: AIH acceptance was meaningfully higher for diseases of medium-to-low severity than for high-severity diseases [
15]. The role of previous experience of missed or delayed diagnoses should be highlighted as a positively correlated factor in adopting AIH [
In one study in our literature review, cancer patients who had received traditional Chinese medicine, patients who had never undergone chemotherapy, and those who had undergone an operation were more likely to adopt AIH services [
34].
At a time when the adoption of artificial intelligence is significantly changing the landscape of the healthcare industry, India is uniquely positioned to leverage AI to address its many shortcomings in patient care. As of June 2018, India had 0.76 doctors and 2.09 nurses per 1000 population, compared with the WHO's recommendations of 1 doctor and 2.5 nurses, respectively [
36]. The gap in patient care due to this shortage is compounded by overwhelmed primary healthcare services, the absence of uniformity in physician training, the inaccessibility of standardized testing facilities, and the poor maintenance of patient records. Another major barrier to the delivery of good care is the lack of access to quality healthcare services. A total of 67% of all doctors in India are clustered in urban areas, serving 33% of its population, and under-equipped public healthcare systems result in 78.8% and 71.7% of urban and rural cases, respectively, being treated in private (and invariably more expensive) facilities [
37]. India's public health expenditure amounts to 1.28% of its GDP, one of the lowest figures globally; this corresponds to USD 62 per capita, far behind most BRIC and Southeast Asian countries [
38]. Affordability also remains an issue, with a significant number of people (~63 million) being driven into poverty every year as a result of healthcare expenses. As of 2015, private expenditure accounted for approximately 70% of healthcare costs, ~62% of which was calculated to be out-of-pocket expenditure, estimated to be the highest proportion of any country in the world. The liquidation of assets and the borrowing of loans remain the primary means of financing healthcare expenditure in ~47% of rural and ~31% of urban households. Needless to say, the poorest and most marginalized sections of society are the worst affected [
39]. Several challenges remain for the management of big data, including a lack of interoperability and unstructured and unorganized data [
40]. Moreover, data security, ethical use, and data privacy remain substantial challenges globally, particularly in developing countries [
31]. Minority populations tend to be underrepresented in the datasets used to develop AIH algorithms [31]. Many AIH algorithms are regarded as black boxes, which cannot readily be evaluated for bias [31]. According to Precedence Research, the global AIH market size is projected to reach approximately USD 187.95 billion by 2030, growing at a compound annual growth rate of 37% from 2022 to 2030 [41]. Total private- and public-sector investment in AIH exceeded approximately USD 6.6 billion by the year 2021, indicating the scope of its utilization [11]. By applying AIH solutions, the industry is estimated to save approximately USD 150 billion per year by 2026 [11].
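As a rough sanity check on these projections, the cited 2030 figure and 37% compound annual growth rate imply a 2022 base-year market size of roughly USD 15 billion. The short back-calculation below is our own arithmetic illustration of compound growth, not a figure taken from the cited report.

```python
# Back-calculating the implied 2022 market size from the cited projection:
# future = base * (1 + cagr) ** years  =>  base = future / (1 + cagr) ** years
projected_2030 = 187.95  # USD billion, per the cited projection
cagr = 0.37              # 37% compound annual growth rate
years = 2030 - 2022      # 8 growth periods

implied_2022 = projected_2030 / (1 + cagr) ** years
print(round(implied_2022, 1))  # implied base-year market size, USD billion
```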
The underrepresentation of certain populations in the datasets used to develop AIH algorithms, and the opaqueness of these algorithms with respect to bias evaluation, are concerning [31]. Despite these issues, the global AIH market is expected to expand significantly, and investment continues to grow. It is therefore essential to ensure that AIH algorithms are developed with fairness and inclusivity in mind, to prevent the perpetuation of systemic biases and inequities in society. Addressing the issues of underrepresentation and bias evaluation in the development of AIH algorithms is critical to achieving equitable and just outcomes [31,41,42].
The current study has several limitations. Our data collection was geographically limited to India, which warrants the need for cross-regional studies. Subsequent studies in multidisciplinary and interdisciplinary healthcare settings, containing more items such as patients' historical procedures, severity of illness, and underlying disorders or cultural factors, could shed more light on this subject. The specific attributes of the different AI models deployed across healthcare settings are bound to differ and would need dedicated studies to understand the pearls and pitfalls of individual models. Furthermore, given the lack of specialized AI training and experience within our respondent cohort, it would be ideal to expand the study by giving the same questionnaire to a properly trained group of practitioners, provided with a correct definition of AI at the beginning of the survey, and to compare the two study groups to draw valuable evidence-based conclusions.