1. Introduction
Digital technologies have profound effects on all areas of modern life, including life at the workplace. By digitalisation we here refer to all forms of digital technologies, including artificial intelligence (AI) and robot technologies. Certain forms of digitalisation simply entail replacing paper with digital files, while more complex instances involve computers and machines that perform a wide variety of tasks. From self-service checkouts in major supermarkets, IKEA, and the post office, to automated warehouses, it is becoming increasingly evident that robots are not merely performing tasks, but taking over jobs from humans [1,2,3]. While some are wary of the displacement that occurs when robots perform tasks previously performed by humans, others argue that robots only perform tasks that should have been carried out by robots in the first place, and never by humans [4]. Some have even argued that robots should be designed and conceived of as a form of slaves [5], although this can have repercussions for human relationships [6].
Researchers and consultants have examined how jobs are increasingly susceptible to automation for some time now. For example, studies claim that big data techniques could substitute for humans in non-routine cognitive tasks and that increased robot dexterity will allow robots to perform an increasing number of manual tasks previously thought to require humans [7,8,9]. A recent and extensive quantitative study on industrial robots and human replacement also shows that, although not in alarming numbers, there is a tendency towards worker replacement in industrial environments due to the productivity robots offer [10]. The World Economic Forum [11] suggests that, instead of replacing existing occupations and job categories, robots and AI will substitute for specific tasks and free workers up to focus on new tasks. This notion is taken further by Danaher [3], who argues that work in general is, in fact, something that most people would benefit from being freed from. Along the same lines, the European Parliament points out that healthcare robots may ease the work of care assistants by performing automated tasks [12]. This technology will allow caregivers to devote more time to diagnosis and better-planned treatment options.
Understanding the effects of digitalisation on work life requires us to understand the effects of digital technology on the tasks we perform. Usually, these effects entail further impacts that are not always foreseeable, and a broader and more comprehensive analysis is therefore required. In this article, the changing nature of work in the healthcare sector is used as a case for analysing such change on three levels: the societal level (macro), the organisational level (meso), and the individual level (micro). The societal level involves asking whether the new technologies in question entail technological change of a substitutional or infrastructural kind [13]. For example, some argue that digital technologies are now taking us towards a fourth industrial revolution (4IR from hereon) [14], which would entail that technologies such as AI, big data, and modern robotics lead to change in the technological infrastructure. In addition, we argue that it is important to simultaneously ask how these changes—of both kinds—affect the experience of work from the organisational and individual perspectives. We here use the theories of Kaptelinin [15] and Norman [16] to examine the changing nature of tasks on an individual level, while they also allow us to examine change on an intermediate (meso) level.
Our contribution in this perspective article explores how distinguishing between the micro, meso, and macro levels, and between activities viewed from the system and personal perspectives, allows us to better understand the nature of digitalisation and the technological change it entails for the healthcare sector. While the changes in the healthcare sector involve particular technologies and occupations, we argue that these examples, and the analytical framework we develop based on them, are relevant to understanding other sectors as well, and technological change in general. We emphasise the importance of a layered analytical approach, which precludes us from going into detail on all the particular technologies and occupations in the sector, but allows us to understand the actual magnitude of these changes and foster informed regulatory and societal responses. By analysing these transformations through a layered micro-meso-macro approach, we can encourage an informed and proportionate response, help preserve the rule of law, and avoid what has been called “regulatory madness” (in French, la folie normative) [17]. Such madness implies a rushed over-regulatory response that will not necessarily solve the problems it aims to address, while it could hamper innovation without providing a usable compass to guide society and technological development. It could, moreover, contribute to the creation of “legal bubbles” that arise in times of innovation and increased economic investment in areas where legislation is still immature and the consequences of the new technologies in question are still partly unknown [18]. A layered approach would help prevent the disruptive consequences of ignoring the implications of technological development at different levels.
We argue that, while AI, big data, and robotics are revolutionary technologies, most of the changes we see involve technological substitution, and not infrastructural change. This entails that the notion of a fourth industrial revolution might not be the most useful concept for understanding the changes we now see happening through digitalisation in the healthcare sector. Such changes might certainly affect the experience of work life for a large number of individuals, but this does not necessarily lead to the conclusion that the technological structure is changing in radical and new or revolutionary ways. Not having a clear idea about the nature and extent of these transformations may lead to regulatory, economic, political, and societal responses that are disproportionate to the nature of these changes and, therefore, may have a range of unintended effects.
This article is structured as follows. In Section 2 we present a selection of key examples of digitalisation and automation in the healthcare sector. In Section 3, we present a theoretical framework for analysing these examples, and we apply this framework to the examples in the discussion in Section 4. More focused studies, based, for example, on systematic literature reviews of the kinds of changes we here describe within a particular part of the healthcare sector, will be a natural next step for testing the approach and hypotheses put forth in this article.
2. Healthcare Automation Transformations
Healthcare automation based on digital technologies is here examined through two major technologies: AI and robotics. AI is expanding the frontiers of medical practice [19,20]. The increased availability of data, improved computing power, and advances in machine learning have led to a proliferation of AI systems [21,22]. Various medical domains previously reserved for human experts are increasingly augmented or changed by the implementation of AI, including decision making in clinical practice (e.g., disease diagnosis, automated surgery, patient monitoring, foetus monitoring in the prenatal phase), translational medical research (such as improvements in drug discovery, drug repurposing, and genetic variant annotation), and tasks related to basic biomedical research (e.g., automated data collection, gene function annotation, and literature mining) [23,24,25]. In addition to the automation of data collection and improved data from traditional sources, researchers have also found new sources of data for healthcare research, such as data from social networks, for example Twitter [26].
One example of a research field that is highly relevant for both AI and robotics in the healthcare sector is research related to dementia, which is expected to be a key challenge for the healthcare sector—and society in general—in the years to come [27,28]. A recent study shows that a deep learning model could predict Alzheimer’s disease on average six years before the final diagnosis was made [29]. In another recent study from a related field, researchers show that an AI-powered triage and diagnostic system produces differential diagnoses with precision and recall comparable to those of human doctors [30]. Although these systems only outperform human doctors in certain cases, the findings show that, on average, the AI system assigned triages more safely than human doctors did.
A study from the University of North Carolina School of Medicine tested IBM Watson for Genomics against 1018 cancer diagnoses based on tumour and normal tissue DNA sequencing [31]. The results showed that human oncologists agreed with 99% of the treatment plans from IBM Watson. Moreover, Watson identified treatment options human doctors had missed in 30% of the cases. In a different study, Watson analysed 638 treatment recommendations for breast cancer with a human-Watson concordance rate of 93% [32]. These technologies may have the potential to predict healthcare-related outcomes, including genetic disorders or suicide risk, leading to earlier intervention and potentially saving more lives [33,34].
A key challenge associated with the use of AI systems in the healthcare sector is the lack of transparency and explainability [21], a topic which is receiving increasing attention from regulators, as seen, for example, in the European General Data Protection Regulation (GDPR) [35]. As the healthcare sector must be considered a security- and privacy-sensitive domain, transparency and the development of means to uncover, for example, bias in decision-making systems are vital [21]. Such issues highlight the need for a suitable regulatory approach and response, which applies to both AI and robotics systems.
Although decision support systems that combine aggregated patient information have existed for a while, progress in this domain conveys the impression that machines will soon outperform humans at specific tasks. The fear is that, even in the healthcare sector, which was previously portrayed as relatively immune to automation, there is a clear tendency for professional tasks to become increasingly susceptible to digitalisation or automation. Routine tasks—both cognitive and physical—are already being automated on a large scale. However, big data techniques also enable machines to substitute for humans in non-routine cognitive tasks, and progress in robot dexterity could allow robots to perform increasingly complex manual tasks, leading to what is perceived as a profound transformation of healthcare workplaces. However, how profound and fundamental are these changes really? That is the question we return to in Section 4.
We now turn to the other key technology examined in this article, namely robots. One example we will consider in Section 4 is the introduction of robots in care, with a particular focus on care for the elderly [28,36]. In addition to social robots, which mainly allow for the automation of therapeutic and welfare-increasing interventions, there are a large number of robots that provide physical assistance while also functioning as an interface to various digital technologies [37]. Assistive technology has been developed to, for example, help with feeding, lifting, and washing [38]. There are also a number of ways in which such technologies can be used to sense, monitor, and alert when particular situations occur, such as an elderly person falling in their bathroom [39]. These kinds of technologies might be applied in eldercare facilities, but they will also allow an increasing number of people to age at home [40].
Nevertheless, robots are not only used in care settings. Robot-assisted surgery (RAS) is associated with a number of benefits that we will return to shortly, and introducing a robot into the doctor-patient relationship changes how surgeries are performed. RAS extends the abilities of the doctor, but it also presents new challenges. A review of 14 years of data from the Food and Drug Administration (FDA) shows that surgical robots can cause injury or death if they spontaneously power down mid-operation due to system errors or imaging problems [41]. Broken or burnt robot pieces can fall into the patient, electric sparks may burn human tissue, and instruments may operate unintentionally; all of which may cause harm, including death [41]. Moreover, as surgical robots’ perception, decision-making power, and capacity to perform tasks autonomously increase, the surgeon’s duties and oversight over the surgical procedure will inevitably change. Other issues relating to cybersecurity and privacy will also become more significant [42].
Additionally, security vulnerabilities may allow unauthorised users to remotely access, control, and issue commands to robots, potentially causing harm to patients [43]. Despite the widespread adoption of RAS for minimally invasive surgery (MIS), a non-negligible number of technical difficulties and complications are still experienced during procedures performed by surgical robots. To prevent, or at least reduce, such preventable incidents in the future, advanced techniques in the design and operation of robotic surgical systems and enhanced mechanisms for adverse event reporting are important [41].
While these ethical considerations are crucial for achieving responsible and beneficial digitalisation, we must also note that RAS, for example, provides a wide range of benefits, as surgery might be made more reliable, precise, and effective, and expert surgeons will be available to a broader range of potential patients. Such benefits must be weighed against the potential downsides just discussed, and policy related to such technologies involves examining whether technologies, such as RAS, are overall beneficial for patients, as no technology—and no human—can ever be perfectly safe or error-free.
The introduction of highly sophisticated machines in the healthcare domain may entail several changes, but the nature of these changes may not be immediately apparent. This is because analyses of the changes caused by the introduction of a particular technology often fail to consider the broader consequences it may have at multiple levels, including the individual, the organisational, and the societal. For instance, robot-mediated surgeries may have implications for the new roles and responsibilities of medical practitioners and staff (individual), the allocation of responsibility and insurance (organisational), or even the education of future medical doctors (societal) [44]. In the following section we introduce a layered theoretical framework that helps in understanding and differentiating between the various consequences of technology adoption—whether positive or negative—at different levels.
3. Theoretical Framework
To examine how work in the healthcare sector is changing, we develop a layered framework for analysing these changes at multiple levels: the social and economic (macro), intermediate or organisational (meso), and individual (micro) levels. The macro level relates to large-scale and long-term impacts on societies and economies as production systems [45], which is the level that discussions of industrial revolutions usually refer to [13]. When the focus is shifted to the intermediate level, including organisations and the relationships between organisations, institutions, political bodies, and regulators, we refer to the intermediate or meso level [45]. Finally, the micro level refers to changes that affect individuals, or that are limited to changes within organisations or groups [45]. With such a broad focus, our main goal is to provide a framework for analysing the effects of digitalisation, and the examples we use cannot provide a complete picture of how healthcare is changing. They will, however, provide a starting point for this discussion, which can subsequently be tested, supplemented, and continued in more focused and empirical research.
First, we distinguish between substitutional and infrastructural technological change [13]. Technological substitution involves using technology to perform tasks in a more efficient manner within the existing sociotechnical framework [13]. If a technology enables a worker to do things more quickly, for example, without really changing the nature of the work, technological substitution allows for increased productivity without broader implications for the socio-technical system. Technologies may also lead to more fundamental changes, however, if they involve changes in the very infrastructure of work. Electrical power and the combustion engine, for example, are technologies that are seen as having changed the technological infrastructure. Such changes entail changes in the broader socio-technical structure, involving, for example, what sort of tasks people are needed for, the educational requirements for working with the new technologies, structural changes in companies, and whether technologies allow for production and work in larger or smaller units. This, in turn, may change society itself, through the changes it leads to in income structures, education, and even residential patterns [13].
In the case of welfare technology, imagine the effects of social robots in elder care. When a robot seal, such as Paro, is introduced to elderly people with dementia [36], how does this change the work of the caretakers? Does it make the caretakers more effective, as they have a new tool that enables them to care for more elderly people or provide better care for the same number? Or does the introduction of robots change the nature of the work, and consequently entail more fundamental changes for those who work in the care sector, including changes regarding what sort of skills are required and, not least, how many people work in the sector? The first situation, in which work is simply made a bit more effective, would be an example of technological substitution, while the latter might entail changes fundamental enough to be infrastructural.
Sharkey and Sharkey [38], Coeckelbergh [46], and Sparrow [47] reason about situations in which robots have completely replaced humans in elder care. Such dystopian scenarios are not necessarily infrastructural just because of the scope of substitution in care facilities. If robots perform the tasks and actions almost precisely as humans would, and if this replacement does not entail wider societal effects related to, for example, education, employment, the need for substantial relocation of workers, or changed economic structures, the change might still be substitutional [13]. In addition, the use of technology in particularly sensitive domains—such as domains involving care, which are traditionally assumed to require a ‘human touch’—may entail even broader long-term consequences for society [48].
This takes us into the domain of human–computer interaction (HCI), in which the relationship between humans and computers is studied. In this article, we limit this discussion to the introduction of two similar, but slightly different, perspectives on such interaction: the cognitive approach and activity theory [15,16]. Norman [16] is a proponent of the cognitive approach in the field, and he explains that there are two different views of artefacts—devices that “maintain, display or operate upon information” in order to, for example, assist us in cognitive tasks. The system view involves seeing the actor, the task, and the artefact as a whole, and it is the capacity of this system that is affected by the introduction of an artefact. From the personal view, however, which is the view of the human actor, their capacity is not necessarily enhanced by the artefact even if the capacity of the system is increased. As the task itself is changed, this can be experienced both positively and negatively by an actor, irrespective of the effect on the system of which they are a constituent part [16]. If we take this approach to the introduction of artefacts in general, we see that the two views provide different perspectives on issues of automation and the introduction of AI in industry. From one perspective, humans are empowered, but from the other, the tasks are changed, and the actor may even feel diminished. Also of importance is the possibility that the capacity of AI has gone beyond the role of the cognitive artefacts discussed here, as AI systems are often less a help or tool for human actors than an autonomous replacement.
Activity theory is another approach to HCI, in which both of Norman’s views are considered personal [15]. In activity theory, tools are seen to empower, and even change, the actor, and we focus in particular on the notions of mediation, internalisation, and externalisation of skills, as described in the literature on activity theory [49,50]. Kaptelinin [15] refers to studies showing that we often go through three phases when tools are used to assist us in tasks. First, we cannot effectively use the tool, so performance of the task is the same with or without it. In the second phase, we perform better with the tool than without it. The third phase is the most interesting: we can now perform the original task better than before, even without the tool [15]. Using tools can actually change us and help us learn how to do new things. One example of how cognitive artefacts might help internalise new skills is how Go and chess players are adapting their strategies and achieving new levels of skill by using computer software, such as AlphaGo and AlphaZero, to analyse, practice, and play in new ways [51,52,53]. In theory, AI systems with superhuman diagnostic, mapping, and planning abilities for interventions and surgeries, VR goggles for training, augmented reality glasses used while operating in normal contexts, and RAS systems might have similar effects, indicating that such systems do not make humans obsolete, but instead provide new avenues for human development.
While tools can empower individuals, technology also inevitably changes power relations, and structural power refers to the distribution of power in a given setting [54]. All technological change potentially impacts existing power structures, and digitalisation in any sector inevitably involves shifts in power that must be examined when the effects of technology are discussed. An example of such effects was seen when snowmobiles were introduced into Skolt Lapland, completely changing the economy of reindeer herding. The changes went far beyond the simple acts of herding and gathering the reindeer, as they affected just about every aspect of Skolt societal institutions, social relations, economy, and the distribution of wealth and work [55]. Infrastructural changes such as these are most clearly linked to shifts of power, but substitutional and more subtle changes also involve such shifts. While we mainly focus on the effects of new technologies on the technological infrastructure and on individuals’ experience of work, we will also keep an eye on how the changes discussed alter power relations, as these are central to understanding the future of work. For example, automation in healthcare likely entails a power shift in favour of manufacturers and developers of digital technologies, a development that necessitates both political awareness and, most likely, various regulatory responses aimed at alleviating the potential negative consequences of such power shifts for both individuals and society.
5. Discussion: Long Term Changes and Areas of Future Research
Digitalisation and automation are poised to make the healthcare sector more effective in terms of resources (both material and human), which might be used to improve resource allocation in healthcare. For now, it seems that the near-future healthcare sector will be clearly recognisable and not radically different in organisation from what we see today, even if it will be more technical. We expect that the need for human beings in the sector will remain relatively stable, but that the people who work in the sector in the future will have a different type of education, and that their responsibilities will often relate to translating and controlling the operation of advanced AI and robotic systems rather than to direct interaction with patients. Such a change may create an incentive structure that promotes larger institutions and efforts to garner the benefits of economies of scale, as these systems are (a) costly, and (b) have the potential to care for a large number of patients.
While we have concluded that the changes at the macro level are as of now somewhat limited, we have also shown that there is a clear potential for important macro-level impacts in the future. Automation leading to job replacement and changed demands for education and competencies are two areas we have discussed here, with a particular emphasis on digital competence [67]. We also wish to point to three other areas of potential macro-level change that require more research.
First, there is a possibility that technological change leads to a change in how care is both perceived and delivered. While some have previously conceived of care as something restricted to human–human relations, this might change, and our accompanying ideas of what constitutes quality care (refer to Figure 1 above) could simultaneously change [62]. Danaher [68] provides the foundation for such research in a recent article examining axiological futurism, the study of how values change as a result of technological change. In a similar vein, Sætra [69] deals with how robots designed for love might change the very concept of love, and empirical research to test the strength of such hypotheses is required to accurately evaluate the fears related to certain negative effects of digitalisation in the healthcare sector.
Secondly, while more digitalisation and automation allow for care to be delivered in new, and potentially more effective, ways, new challenges also manifest themselves. Digital skills will be important for workers, but technology is also susceptible to cyber-attacks, which may have fatal consequences for, for example, patient safety [70]. Politicians, institutions, organisations, and individuals will all be required to account for such attacks. We might also note that issues of privacy are highly relevant in this context, given the growth of sensor technology in networked devices and surveillance equipment, and in robots in general, which have a number of sensors and methods for storing and transmitting data [39]. Digital competence related to the protection of privacy, and more in-depth knowledge of how “digital dossiers” affect individuals facing these new technologies [71], are crucial for achieving a responsible and beneficial digitalisation process. In addition, there is always the risk of malfunction, and we as societies must decide what sort of backup systems, for example, we demand, as old ways of performing tasks might relatively quickly be forgotten or become impractical once digitalisation and automation are implemented. If, or when, technology then fails, we must either accept such failure or require the option of doing things the traditional way as well.
Thirdly, new technologies often require new interpretations of existing policies or the creation of new policy mechanisms to frame the developments accordingly. The EU, for example, recently proposed the AI Act as a regulation to align AI development with EU fundamental rights. All the developments in healthcare discussed in this article introduce new demands for legislators and policymakers. Digitalisation and automation are not nature-given phenomena that simply occur. Or, rather, they do not have to be. We, as a society, have an opportunity to control and direct all the changes discussed, but this requires politicians to work proactively to understand the implications of new technologies and to work actively with the industries and the healthcare sector to make sure that the future develops in a direction we desire. As argued by Sætra and Fosch-Villaronga [72], this should not entail preventing foundational research on AI or robotics, but instead actively regulating and legislating the application of such technologies. As shown in the various examples discussed in this article, digitalisation is associated with a number of important benefits for patients, workers, and society in general, and it is imperative that the domains of science, ethics, and politics interact in such a way that these benefits can be realised while the key challenges created by the same technologies are remedied [72].
6. Conclusions
Technology is shaping the healthcare sector and changing it in a variety of ways. We have argued that, while AI, big data, and modern robotics are changing the healthcare sector, these changes are evolutionary and partly a logical consequence of techno-solutionism. A metaphor for understanding current technological development is travelling towards a mountain by car: while the mountain in the distance seems to barely move, the markings on the road advance and pass at high speed. This disconnect between the rapid advances and seemingly radical changes on the micro level (the markings on the road in this metaphor) and the delayed impacts on the macro level (the mountain) often prompts a disproportionate response that does not match the actual need created by these changes when they are properly analysed. Still, and inevitably, the insertion of technologies in any sector is not straightforward and has consequences for society at multiple levels that need an adequate response.
This article has shown that, by applying a framework in which effects on the micro, meso, and macro levels are distinguished from each other, the nature of technological change in the healthcare sector can be more clearly understood. Our approach allows for distinguishing between system and personal perspectives when examining effects on the micro level, which further helps explain why changes that may appear both radical and fundamental at the micro and meso levels are not necessarily associated with revolutionary macro-level changes. A key contribution of this article has thus been to show that a broad analytical perspective is required for understanding technological change and informing policymaking and society. Our work has also provided the foundation for further research of a more focused, and also empirical, nature.
On the micro and meso levels, individuals and organisations are experiencing changes, yet these changes do not, for the most part, involve the replacement of humans by machines, but rather a transformation of the skills and jobs that humans perform. As new human–machine partnerships are formed, workers are as of yet largely able to keep up with these changes, which is why we argue that the changes at the macro level are somewhat limited. However, we show that the long-term effects of digitalisation do entail new requirements for digital competence, and these changes will, in the long term, have the potential to change the entire structure of the educational system, as the healthcare sector and other sectors increasingly require workers with medium-to-high-level digital skills and more administrative training.
While our analysis undermines the assumption that digital technologies, and AI and robotics in particular, constitute a fourth industrial revolution, their effects on the micro and meso levels still require both political awareness and proper regulatory responses. By analysing technological transformation through the lens of a layered approach, a better-informed and proportionate response that calibrates societal expectations, preserves the rule of law, and avoids “regulatory madness” can be provided to guide society and technological development.