Article

From Corporate Digital Responsibility to Responsible Digital Ecosystems

by
Bernd Carsten Stahl
School of Computer Science, University of Nottingham, Nottingham NG7 2RD, UK
Sustainability 2024, 16(12), 4972; https://doi.org/10.3390/su16124972
Submission received: 18 May 2024 / Revised: 8 June 2024 / Accepted: 9 June 2024 / Published: 11 June 2024
(This article belongs to the Section Economic and Business Aspects of Sustainability)

Abstract

The significant and rapidly growing impact that digital technologies have on all aspects of our lives has raised awareness of their benefits but also of concerns and worries linked to the development and use of these technologies. The concept of responsibility, as expressed in terms such as corporate social responsibility or responsible research and innovation, represents a prominent discourse and a set of practices that are used to deal with social and ethical concerns and which are now a well-established part of the broader sustainability literature. Recently, a novel discourse has gained prominence that specifically explores the question of which responsibilities arise for organisations based on their engagement with digital technologies. This discourse uses the term ‘corporate digital responsibility’ to describe how organisations can understand, shape, and discharge their responsibilities not just in a narrow economic sense, but also in their broader moral, social and environmental dimensions. This article takes its point of departure from the corporate digital responsibility discussion and aims to contribute to the theoretical underpinnings of the term. It suggests that an appropriate metaphor to approach the reality of current socio-technical systems is that of an “ecosystem”. This metaphor accommodates many of the systems features of contemporary digital technologies, but it clashes with traditional conceptions of responsibility. This article argues that a concept of responsible digital ecosystems is a precondition for the successful ascription of responsibilities in digital contexts. Such ecosystems can be understood as systems that are designed to foster and support existing and novel responsibilities and that align responsibility ascriptions with a view to achieving desirable and acceptable outcomes. Based on these ideas of responsible digital ecosystems, this article spells out some of the implications for research and practice that the adoption of this conceptual framework entails.

1. Introduction

It is easy to make the case that the pervasive impact that digital technologies have on current societies requires a heightened level of attention to ethical and social questions, and in particular explicit attention to questions of responsibility. Attention to such ethical and social questions is a well-established part of the sustainability literature, linked both to the environmental sustainability of digital technologies (such as the rapidly increasing energy consumption of the AI industry) and to broader questions of the sustainability of organisations and business models.
It is not always easy to translate this call for responsibility into conceptually sound positions that can guide the development and use of technology and the organisational and social responses to pervasive socio-technical change in ways that are sensitive to broader sustainability questions. There are numerous reasons why this translation is challenging. Key among them is the nature of digital technology and the digital transformation that it engenders. It is difficult to delineate clearly which technology has which consequences even within relatively tightly defined horizons. Moreover, digital technologies are fundamentally socio-technical systems that form part of, constitute, and overlap with other socio-technical systems, leading to problems of complexity and difficulties when one attempts to control them. Such socio-technical systems raise complex questions of responsibility. Who or what is responsible, what are they responsible for, and what happens as a result of such responsibilities are some of the pertinent questions that are difficult to answer for all socio-technical systems and are exacerbated by the unpredictability of digital technologies.
This article therefore develops a conceptual approach to digital responsibility, starting from the well-established position that any successful intervention into digital technologies needs to take into account their character as systems. It argues that it is suitable to approach them using the theoretical concept of digital ecosystems. The question that drives this article is thus: how can digital ecosystems be rendered responsible? In light of the large number of actual and potential digital ecosystems, no simple and straightforward answer to this question can be expected. In addition, any ascription of responsibility within or to digital ecosystems will by necessity require empirical insights, for example into the capability of the underlying technology but also into the specifics of the socio-technical system(s) within which the technical artefacts are deployed. However, any such empirical insights can only be gathered on the basis of sufficiently sophisticated conceptual underpinnings.
This conceptual article aims to provide the theoretical underpinnings required to undertake meaningful empirical research in the area. As a non-empirical article, it draws neither on quantitative nor qualitative data [1], nor does it claim to be a literature review of any sort, whether systematic, testing, narrative or any other type [2,3,4]. As a conceptual article, it aims to clarify a theoretical and conceptual position, in this case the importance of recognising the ecosystems nature of digital technologies. By developing a theoretical position, it aims to stimulate scholarly debate and provide the basis for subsequent research. The quality of this type of research contribution can be assessed by the relevance of its research question or field, its clarity and coherence, the validity of its justification, and its originality or contribution to the field [5,6].
This conceptual article makes an important contribution to the discourse on digital responsibility and the role of ethics and responsibility with regard to digital technologies more broadly. This is of academic interest to those research communities that focus on such questions, for example in the fields of ethics of technology, computer ethics, or corporate social and digital responsibility. It is furthermore of importance to researchers in information systems (IS) and related fields who want to undertake empirical research on ethical and related topics, as it provides them with an appropriate conceptual basis to plan and execute such research. This can facilitate research of importance to decision makers, be they in industry, policy or elsewhere.
This article shows that digital ecosystems cannot be subjects of responsibility in a traditional sense, as they do not fulfil the conditions typically linked to the status of being a subject of responsibility, such as the possession of agency, personality, emotions or the ability to react appropriately to the anticipation of sanctions. However, this does not preclude the application of other and more recent concepts of responsibility to digital ecosystems. This article suggests that it is fruitful to see responsible digital ecosystems as those socio-technical systems involving digital technologies that are conducive to the creation and expansion of existing networks of responsibility. Human beings are always embedded in such networks of responsibility. The active management and alignment of these existing responsibilities to achieve aims and objectives that are justifiably deemed to be acceptable and desirable would be a key feature of a responsible digital ecosystem. Following this line of thought allows the identification of key aspects of responsible digital ecosystems and of implications they have for research and practice in systems research, design and development.
This paper is structured as follows. It starts by introducing the concept of digital ecosystems. This includes some of the characteristics of digital technologies and digital transformation. It proceeds by demonstrating the importance of systems thinking when considering these technologies and argues that the innovation ecosystems literature provides a strong basis for practically implementing such systems thinking. This is followed by a discussion of digital responsibility, which commences with a brief introduction of traditional (analogue) responsibility and leads to a discussion of recent ethical research in digital technologies. The concept of corporate digital responsibility is then touched on, which in turn leads to a discussion of responsible digital ecosystems.

2. Digital Ecosystems

The argument put forward in this article hinges on the recognition that digital technologies and the responsibilities arising from their development and use are best understood by using the metaphor of an ecosystem. This section therefore starts by introducing digital technologies and the concept of digital transformation before exploring in some more detail the systems character of such technologies which then sets the background for the introduction of the concept of innovation ecosystems to digital technologies.

2.1. Digital Technologies and Their Transformative Impact

A good starting point for the conceptual analysis of digital responsibility is to take a look at the technologies underpinning the concept and driving the need for it. The term ‘digital technologies’ can be defined as “tools, systems and devices that can generate, create, store or process data. The data processing and logic capabilities of digital technologies are enabled through microprocessors that are programmed to perform various functions” [7]. The data that are at the centre of attention can normally be expressed, stored and processed in digital form, i.e., as a collection of 0s and 1s. While this may seem straightforward and we are all familiar with digital technologies, most prominently with personal computers and smartphones, the exact delimitation of digital technologies is not trivial. There are computing technologies that do not exclusively use digital data or digital data processing structures. Examples include certain types of neuromorphic hardware, such as the BrainScaleS system [8], or computing devices based on quantum effects, so-called quantum computers [9]. Strictly speaking, such technologies may not be considered digital technologies, but they hold the potential to radically change our world through their computational capabilities and should thus plausibly be included in a discussion of digital responsibility.
Rather than focusing on the exact definition, it may thus be more fruitful to delineate the phenomenon under investigation by looking at the characteristics and capabilities of digital technologies. We have already alluded to the importance of data in digital technologies. These produce, collect, process and output data. Digital data are not an objective representation of an independent phenomenon but a product of human intention and intervention that not only describes but can fundamentally change the nature of the phenomenon. Zuboff [10] made this point powerfully in the context of industrial production when she coined the term ‘informate’, by which she expressed the observation that the increased emphasis on data affects the social reality in question. To some degree, this is probably a trivial point to make in a world where social interaction has fundamentally changed due to social media, where e-commerce has revolutionised retail, and where consumer products from toothbrushes to sex toys are now internet enabled. But it is nevertheless an important point to keep in mind, as it is a key motivator of the call for digital responsibility.
The growing attention to data has led to the emergence of data science and attention to the concept of ‘big data’ [11,12]. This focus on data per se has been overtaken in recent years by a focus on a particular family of technologies that make use of, and make sense of, these data, namely artificial intelligence (AI). Defining AI is a thankless task, as there appears to be no common core to all uses of the term [13]. Instead, this article confines itself to stating that AI constitutes the currently most prominent example of digital technologies. There are immense hopes linked to AI as well as significant worries and concerns. One of the reasons for the concerns is that many instantiations of current machine learning technologies are difficult or impossible to fully understand, even by the experts who build them [14]. This is caused by the opacity of systems based on neural networks, which provide the technical basis of many of the most prominent and successful examples of AI [15,16].
While the opacity of AI is due to the nature of the underlying technology, it points to an older and well-established characteristic of digital technologies, namely their unpredictability. Moor, in his seminal discussion of computer ethics [17], coined the term ‘logical malleability’, which he saw as one of the characteristics of computing that called for ethical attention. Logical malleability stands for the fact that computers can be made to perform any task that can be characterised in terms of inputs, outputs, and connecting logical operations. Unlike many other technologies that are developed with a specific purpose in mind, digital technologies are open to an infinity of possible purposes which their designers and developers may never have considered. While technology in general is not determined [18], i.e., it is impossible to comprehensively predict how it will be used and which consequences this will have, the problem of prediction is exacerbated for digital technologies, which by definition do not have a fixed purpose.
This leaves humanity with a significant problem. On the one hand, the digital technologies we produce and use promise and often display large-scale social, economic and political impact. The metaphor of a revolution is often evoked to highlight the severity of changes engendered by digital technologies [19,20]. We are collectively calling forth and starting this technology-driven revolution. On the other hand, we often do not understand in detail the way in which the technologies that promote the revolution work and struggle to predict the consequences of their use. One stream of research that seeks to make sense of these changes and channel them in desirable directions is that of digital transformation [21,22,23,24]. The term ‘transformation’ instead of ‘revolution’ suggests an intentional and well-understood process that has predictable and desirable outcomes.
The interesting question arising from this attempt to channel the chaos of digital revolutions into the order of digital transformation is how this could be achieved. This question has many facets, touching on technical, social, legal, and economic aspects. The part of the question that this article focuses on relates to the ethical questions that lead to calls for digital responsibility. Before we proceed to look at the nature of these ethical questions and develop the concept of responsible digital ecosystems, we need to briefly review why systems thinking plays a central role in digital technologies and how the metaphor of ecosystems can help us better understand them.

2.2. Systems Thinking

It should not be difficult to convince an audience of IS scholars of the importance of the concept of ‘system’ as it is a constituent part of the name of the field or discipline. However, it has been observed that the IS field in practice tends to make little use of its roots in systems theory [25]. It is therefore worth highlighting some aspects of systems and systems thinking that can inform our approach to digital technologies and questions surrounding digital responsibility.
Systems theories and systems thinking have been influential in shaping the debate on digital technologies since their inception in the wake of the Second World War. However, some authors trace systems thinking as an expression of holistic views emphasising the integral relationship between human and non-human nature back to ancient spiritual traditions, including Hinduism, Buddhism, Taoism and Islam [26]. Similarly, Ropohl [27] points to the relationship between the whole and its parts as a key aspect of systems thinking that can be found in Aristotle’s writings.
Systems thinking in this broad understanding of the term is thus anything but new. In its current instantiations, it can be seen as an approach or worldview that can support and inspire research across disciplines [28]. Systems thinking is characterised by its holistic approach [29] that avoids what Reynolds and Holwell [26] call the trap of reductionism, i.e., it creates awareness of the interconnectivity between variables, and the trap of dogmatism, i.e., the temptation to work on the basis of a single unquestioning (and unquestioned) perspective.
The need for systems thinking in complex modern societies is difficult to deny. The foundations of contemporary systems thinking point to the problems of traditional scientific approaches and their reliance on identifiable causal chains in dealing with complex problems in the biosocial sciences but also in the area of technical impacts on society [30]. These problems have since become even more pronounced and visible. The complexity of current socio-technical systems, such as smart cities [31], defies simple, straightforward, and linear responses, thus calling for approaches informed by systems thinking.
Systems thinking can help better conceptualise and deal with systems-related phenomena, such as emergence and the emergent properties of systems. These can range from specific behaviours of systems up to complex emergent phenomena, including consciousness [32]. Emergent properties contribute to the uncertainty that research encounters and that systems thinking can address [33].
There is no single authoritative expression of systems thinking. Instead, there are various schools of systems thinking using different theoretical and methodological approaches, which are not always mutually compatible. There are different ways of categorising the various approaches to systems. One categorisation that is useful for thinking about responsible digital ecosystems is the division of systems approaches into ‘hard’, ‘soft’, and ‘critical’ [26]. This distinction has a close affinity to the well-discussed distinction of research paradigms in the field of information systems [34,35,36]. Hard systems approaches assume that systems are objectively real entities in an external world which can be clearly delineated and described, often using quantitative methods. Soft approaches, notably including Checkland’s soft systems [37], see systems as social constructs that can better help us understand complex social assemblages. Critical approaches, finally, emphasise the emancipatory potential of systems [38].
In addition to these fundamental approaches to systems, it is important for this paper to explore the relationship between systems and ethics. Again, there is no agreement on this point. Some authors, such as Habermas, see a fundamental disconnect between ethics and systems because ethics is traditionally based on individual human agency, whereas systems approaches tend to abstract from individual humans and explain social developments in terms of structural variables. From this perspective, systems theory struggles with the translation of descriptive work into normative positions [39]. The opposing view has, however, been prominent in many approaches to systems thinking and systems theory. Bertalanffy [30], for example, uses his general systems theory to express the hope that it can overcome the mechanistic worldview that he sees as being responsible for the catastrophes of the mid-20th century and proposes that systems theory can offer a worldview of “reverence for the living”. Soft systems approaches not only understand ethical questions as part of the social reality they describe but reflect on their own embedding in normative worldviews. Critical systems approaches identify causes of human misery and alienation [27] and look to systems thinking as a way to overcome them. These views are reflected in more recent insights on how responsibility can be designed into large socio-technical systems [40], which brings us to the systems approach at the centre of this article.

2.3. Innovation Ecosystems and Digital Ecosystems

So far, I have argued that we need to take into account the systems character of digital technologies if we want to adequately understand the ethical and responsibility challenges they pose. At the same time, there are many ways of formulating systems thinking and practically using it in research, which raises the question of how best to approach questions of ethics and responsibility in current systems that are influenced and often shaped by digital technologies. The answer that I propose to this question is to use the discourse on technology ecosystems that has developed in the field of innovation studies. It is important to understand some of the characteristics of the debate concerning these innovation ecosystems, as they raise particular challenges for implementing responsibility.
A good starting point for the introduction of the innovation ecosystems discussion is to point out that it does not refer to ecosystems in the biological sense of systems comprising living organisms and their environments, even though innovation ecosystems are embedded in biological ecosystems and can have significant influence on them. Rather, the term ‘ecosystem’ in this context should be understood as a metaphor that is used to better understand how socio-technical environments develop. When Moore [41] introduced the metaphor to the academic discourse, he did so to develop a better understanding of competition, of why some organisations thrive while others disappear. The terminology has caught on in various parts of the academic debate, with the innovation studies literature probably leading the way. It has also proven to be popular among policy makers [42]. The immediate attraction probably comes from the fact that everybody can envisage various biological ecosystems and see the parallel to socio-technical systems in terms of the number of participants, the complex relationships between them, and the potential for radical change over time.
The metaphor can furthermore be used for various purposes from different perspectives. From an organisational perspective, it can be used to describe why and how certain organisations emerge and thrive or fail to do so [43]. Such insights can be used for functional purposes, for example by developing strategies [44], which can be based on an improved understanding of opportunities for growth [45], and can help design practical interventions, for example in the form of innovation management activities [46]. The ecosystems perspective can also allow an organisation to better understand its own position in a broader environment, which can lead to a better understanding of the competition [47] and identify opportunities for collaboration [48,49].
In addition to the immediate benefits of the ecosystems perspective to an organisation that forms part of or wants to understand its environment, the innovation ecosystems perspective offers benefits to other stakeholders. It can help technology developers better understand, and thus better align, the technology they are developing with the use environment it is likely to encounter [50]. One can use innovation ecosystems perspectives from a social vantage point [47] to drive public policy decisions on which systems to support [51]. From an academic perspective, innovation ecosystems are a useful theoretical position that allows an explanation of change, progress and the mechanisms of innovation [43,45]. On a practical level, they can provide a basis for collaboration across disciplines, for example with systems-oriented fields such as information systems [52].
In this article, I will not be able to offer a comprehensive overview of the innovation ecosystems literature. But it is important to highlight some of the characteristics of these ecosystems to understand their relevance for social and ethical issues and how they relate to questions of responsibility. One aspect that innovation ecosystems have in common with systems more broadly, as described for example in general systems theory [30], is the complex interplay between the overall system and its components [52]. Granstrand and Holgersson [53] define an innovation ecosystem as “the evolving set of actors, activities, and artifacts, and the institutions and relations, including complementary and substitute relations, that are important for the innovative performance of an actor or a population of actors”, thus pointing to the different types of components that contribute to the system as a whole. A key characteristic, in particular with regard to achieving desired outcomes of systems, is that there is no central point of control. This is not a defect that can be remedied but a defining feature of complex adaptive systems [54,55].
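To fix ideas, the components named in this definition can be grouped schematically. The following notation is an illustrative sketch introduced here for exposition only; it is not part of Granstrand and Holgersson's own formulation:

$$E_t = (A_t, V_t, F_t, I_t, R_t),$$

where, at a given time $t$, $A_t$ denotes the set of actors, $V_t$ the activities, $F_t$ the artifacts, $I_t$ the institutions, and $R_t$ the relations between these components, including complementary and substitute relations. The index $t$ reflects that the set is evolving rather than static.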
Instead of being subject to centralised and managerial direction, ecosystems change through the mechanism of evolution [56,57,58]. Evolution, however, is typically not predictable. And while evolution is a key aspect of ecosystems, these systems need a certain amount of stability to survive and facilitate evolution. The relationships between members of ecosystems are thus subject to change [59]: ecosystem members can cooperate and co-evolve [60], but they can also compete and annihilate one another. System members can be highly heterogeneous, as indicated by the fact that individual human beings, organisations, governmental structures but also technologies can be seen as components of such systems. Innovation ecosystems can, but do not have to, have a central node in the form of a technology platform [47,48].
A final point worth highlighting is the ontological nature of innovation ecosystems, which is closely related to the question of the boundaries of the system. Put differently, the question is whether innovation ecosystems are natural occurrences that can simply be observed or whether they are social constructs that emerge through the process of observation. This is a fundamental question that arises in similar forms in general systems theory as well [27]. I will remain agnostic on this question, as the argument proposed here does not require a strong ontological commitment. However, the drawing of the system's boundaries is important across ontological positions, even though the exact meaning of this drawing of boundaries changes. As is the case with biological ecosystems, one can describe innovation ecosystems from a multitude of perspectives. The choice of perspective determines the shape of the system, its members and its dynamics. This choice of perspective is thus not an objective and neutral one, but needs to be critically reflected on by the observer and may be subject to debate.
Having now argued that digital technologies can be described in terms of innovation ecosystems, we can progress to the discussion of the implications that this perspective has for understanding and dealing with ethical and social concerns and thus what it means for digital responsibility.

3. Digital Ethics and Responsibility

In this paper, I want to answer the question of how digital ecosystems can be rendered responsible. Having argued that the ecosystems perspective is useful for understanding the socio-technical world co-constituted by digital technologies, I now need to discuss the role of responsibility in this context. For this purpose, I will start by giving an overview of what might be called digital ethics [61,62], i.e., the discussion of ethical aspects of digital technologies. From there, I will proceed to look at responsibility as a traditional response to ethical questions and issues and explore the main features of this concept. This will be followed by an introduction of two discourses of relevance for responsible digital ecosystems, namely corporate digital responsibility and responsible (research and) innovation. This section thus provides the conceptual underpinnings to move from the digital ecosystems discussed earlier to the integration of the idea of responsibility in those ecosystems.

3.1. Digital Ethics: Ethics of Digital Technologies, Computing, and AI

The call for digital responsibility is triggered by the awareness that digital technologies can raise ethical and social questions and concerns. In order to assess whether a model of digital responsibility promises a viable response to those questions and concerns, it is therefore important to understand the nature and detail of what they are, how they materialise and how they can be evaluated. I will therefore offer a high-level overview of the key strands of digital ethics.
An understanding of digital ethics implies an understanding of ethics. The use of the term ‘ethics’ in the English language refers to a number of different phenomena. They all refer in some way to the distinction between right and wrong or good and bad, as well as various types of statements referring to this distinction. Stahl [63] suggests four different levels of the use of the term. He distinguishes between the intuitive feeling that an individual holds that something is right or wrong (moral intuition), explicit statements about agreed-upon rules (explicit morality), the analysis and justification of these moral rules (ethical theory), and higher-level reflection on these theories (reflection and meta-ethics). Philosophical ethics tends to focus on the third and fourth levels in this classification. There are numerous ethical theories that philosophers have been discussing for millennia. Much of the current ethics of technology discourse highlights three classical ones. Utilitarianism is a theoretical position which states that the ethical quality of an action can be assessed by looking at its consequences and adding up the total of the utility it produces minus the total of the disutility. The option with the highest net utility score is the one that is to be ethically preferred [64,65]. An opposing position is taken by deontological ethics, which holds that the ethical quality of an action is determined by the intention of the agent, notably in the Kantian tradition by the adherence of the agent to the Categorical Imperative, which holds that the maxim motivating the action must be universalisable [66,67]. The third classical theory often referred to is virtue ethics, which locates ethical quality in the character of the agent as someone who prudently navigates between extreme positions to come to a measured one [68,69]. Ethical theory is by no means confined to these classical positions. A further stream of ethical thinking of high relevance to the current article is that of an ethics of responsibility. Originally proposed by Max Weber [70,71], the ethics of responsibility has been prominently adopted to address key challenges of the modern world [72].
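To make the utilitarian calculus just described more concrete, it can be stated compactly as follows. This is an illustrative formalisation only; the symbols (the set of available options $A$, utilities $u_i$, disutilities $d_j$) are introduced here for exposition and are not drawn from the cited literature:

$$U(a) = \sum_i u_i(a) - \sum_j d_j(a), \qquad a^{*} = \arg\max_{a \in A} U(a),$$

where $u_i(a)$ and $d_j(a)$ denote the utilities and disutilities that option $a \in A$ produces across those affected, and $a^{*}$ is the option that is to be ethically preferred.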
While ethical theory is predominantly concerned with the critical reflection and justification of moral positions, practical deliberations in applied ethics devote much more attention to specific moral issues. Those are cases where technology leads to situations which are deemed to be of moral relevance, either positively or negatively. Most of what one can glean from the media relates to such moral issues. Examples would be bias and discrimination caused by AI, social exclusion due to digital divides, and unemployment caused by rationalisation, but also a reduction in CO2 emissions because of optimised production or better social embedding due to social media. All of these can count as ethical issues in that they lead to a state that is deemed to be morally good or morally bad and technology plays a role in facilitating this state.
It is worth highlighting that an awareness of digital ethics, i.e., an awareness that digital technologies can have ethically relevant consequences, has accompanied the development of digital technologies from the outset. Norbert Wiener [73,74], one of the founding figures of digital computing, highlighted early on some of the key issues that are still being discussed, including the possibility of machines gaining autonomy or leading to unemployment. Joseph Weizenbaum [75], who developed the first digital chatbot, pre-empted many of the ethical questions that are currently discussed in the context of generative AI.
While awareness of ethical concerns can thus be observed throughout the history of digital computing, the first attempt to organise this topic area into a coherent discourse happened during the 1980s and 1990s under the heading of computer ethics [17,76,77]. The field of computer ethics draws from a wide range of disciplines across philosophy, computer science, social studies, legal studies and their various sub-disciplines. The terminology used to describe the field has reflected different points of emphasis and the preferences of different authors, covering terms such as ethics and ICT [78], information technology ethics [79] and cyberethics [80,81].
The term ‘computer ethics’ seems to put an emphasis on the physical manifestation of the computer as an artifact. It has long been recognised that this artifact may not be the most interesting and relevant aspect to consider and there has been a long-standing debate on the ethics of information or information ethics [82,83,84], which to a significant extent overlaps with computer ethics [76]. More recently in the context of the emergence and rapidly growing relevance of big data and AI, the discourse has expanded to include new terms such as data ethics [85], the ethics of AI [86,87], or responsible AI [88]. These different terms and the discourses they are related to overlap somewhat, even though they tend to incorporate new disciplines, contributors and areas of knowledge. To a significant degree, they can be mapped to one another [89].
In this paper, I use the term ‘digital ethics’ to cover all of these streams of debate. They have in common that they are interested in moral concerns raised by digital technologies and the way in which these moral concerns could be reflected on and evaluated from a theoretical ethical perspective.
One final word on this view of digital ethics is in order before we can proceed to the discussion of responsibility. There is currently heightened attention to digital ethics, which started somewhere in the mid-2010s due to the remarkable progress of machine learning, in particular deep learning [15,16] facilitated by neural network technologies, and which received a further boost due to the arrival of generative AI, notably ChatGPT and its competitors. The flurry of interventions in this debate, including very broad media coverage, seems to suggest that there have been substantial changes to the technology which may raise fundamentally new and undiscovered ethical concerns. At this point, I am not convinced that such a position is warranted. Recent technical developments, including those in the area of generative AI, may expose new stakeholders to ethical concerns and may exacerbate problems and issues. It is less clear, however, whether and to which degree these are fundamentally novel. A relatively recent systematic literature review of digital ethics [90] lists a number of issues including privacy, professionalism, autonomy, trust, consent, identity, inclusion, deception, health-related issues and intellectual property, all of which remain relevant. A more recent exploration of the ethics of generative AI [91] suggests that the issues that have previously been identified in computer and information ethics cover the topics currently discussed with regard to ChatGPT. The point here is not so much to claim that there is nothing new under the sun, but to point to the continuity of digital ethics, which means that we can build on existing insights. While the current discourse overflows with references to radical change and revolution, I suggest that the topics and figures of thought relevant to digital ethics have temporal stability and it is thus worth considering past insights when thinking about digital responsibility. In order to progress this, the next step is to look at the concept of responsibility and its precedent in the pre-digital world.

3.2. Analogue Responsibility

I have called this section ‘analogue’ responsibility to underline the fact that it covers the concept of responsibility independently of digital (or any other) technology. Responsibility has been a core concept of philosophical ethics for a long time, and it is worth outlining what responsibility refers to and why it has been a popular concept, to ensure that its application to digital ecosystems is plausible.
The etymology of the word responsibility points to the term ‘response’ [92], which suggests that responsibility has something to do with the ability to respond, to answer. A typical way to describe responsibility is to look at its structure and function. The communicative nature of responsibility implies that there is someone or something that responds to someone else. The one who responds is typically called the subject of responsibility. The subject responds because of something that has happened or will happen that they have a level of influence on. This something is typically called the object of responsibility. This relationship between subject and object is observed or overseen by an entity that one can call the authority of responsibility that judges the subject on the basis of a set of agreed rules which lead to positive or negative sanctions. To use a simple example, one can look at a criminal (subject) who has committed a crime (object) and has to answer to the judge (authority) under the rules of criminal law, which can lead to a prison sentence (sanction).
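The structure just outlined can be summarised compactly. The following tuple notation is an illustrative device for exposition only, not a notation taken from the responsibility literature:

$$R = \langle S, O, A, N, \Sigma \rangle,$$

where $S$ is the subject, $O$ the object, $A$ the authority, $N$ the agreed rules, and $\Sigma$ the sanctions. In the example above, $S$ is the criminal, $O$ the crime, $A$ the judge, $N$ the rules of criminal law, and $\Sigma$ the prison sentence.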
There are numerous further aspects that can describe or affect a relationship of responsibility. There are different types of responsibility (e.g., moral, legal, professional, personal). Responsibility can be prospective or retrospective. Different instances of responsibility can thus look vastly different. One aspect that they have in common, however, is that they are social constructions that further a particular social aim. This may be to provide incentives for certain behaviour, to punish an act that is deemed worthy of punishment, to achieve a socially desired outcome or a combination of these and possibly other goals. It is important to keep in mind this nature of responsibility as a social construct.
Another aspect of responsibility to keep in mind is that responsibility relationships and ascriptions never happen in a vacuum. While new instances of responsibility can and do arise, they always do this in a context of existing responsibility. If, for example, we were to promote a novel responsibility of AI developers for the long-term consequences of the use of their developments, then this would happen in a context where the developer already has many responsibilities in their various roles, for example as a citizen, a parent, a member of the neighbourhood association. They will have responsibilities as an AI developer, for example contractually defined ones with customers, responsibilities within their organisation or their professional body. A novel instance of responsibility will need to fit into this existing context and form part of what can best be described as a network of responsibilities [93]. These networks of responsibility change over time but, as humans, we are always part of them in different roles throughout our lifetimes.
The focus of this article is on ethics, as highlighted in the previous section. It is worth pointing out, however, that there are numerous types of responsibility relationships and moral responsibility is only one particular type. By moral responsibility we refer to responsibility relationships that take ethical principles, such as those enumerated above, as their normative basis. Other types of responsibility exist which do not directly refer to ethics, such as role responsibility, legal responsibility, or political responsibility. Without being able to argue the point in detail here, this article assumes that all responsibility ascriptions have a moral component to them. They are united by a desire to achieve an aim that is deemed to be good or desirable, an aim that will at least have some ethical relevance. The exact distinction between ethical and other aims of responsibility and how these translate into social practice in innovation ecosystems calls for a more fine-grained analysis than this article can provide. However, looking at the responsibility discourse, there are still a number of aspects that can further our understanding of responsibility in digital ecosystems.
There has been considerable discussion of what renders an ascription of responsibility viable. This is an important question when looking at responsibility and digital ecosystems. Much of this discussion focuses on the subject of responsibility, which traditionally has been assumed by default to be an individual human being. Many of the conditions attached to successful responsibility ascriptions refer to the criteria a human being has to meet to be deemed a suitable subject of responsibility. They include personal characteristics [94,95], such as rationality and reason, that allow the subject to understand the ascription and react accordingly. The subject needs to be able to act intentionally [96] and to have an appropriate range of emotions to motivate their intentions and follow them through [97]. The subject needs to be able to exert self-control, and they must be able to react to sanctions for the ascription to be successful. In addition to these internal states, the subject must play a role in the causal chains that lead to the object, which implies at least some level of influence or power to change outcomes [72]. This, in turn, is linked to the freedom of will and freedom of action that allow the subject to exert this influence, which implies a minimum level of knowledge of the role that the subject plays.
It is easy to see how these points play out in everyday situations: we generally agree that a lack of knowledge may limit responsibility, as does a lack of certain capacities, which is why we typically do not hold children or severely mentally disabled people responsible.
This discussion of the conditions of responsibility could easily be extended, and it can be observed playing out in courtrooms and family living rooms on a daily basis. For this article, it is important to underline, however, that this traditional picture of responsibility, with the individual, rational, adult human being serving as the subject, is not the only model of responsibility. There has been a long debate about whether it is possible to have collective subjects of responsibility [98]. This debate has progressed in the most detail with regard to organisations as possible subjects [99], which has led to a cautious acceptance of the idea of non-individual responsibility. It has been realised in some jurisdictions that accept that there is a case for holding collectives responsible, for example through the mechanism of corporate manslaughter.
The other example of non-individual responsibility of relevance to this article is that of the responsibility of machines, in particular computing machines. This idea has been controversially discussed throughout the history of computing [74,75] and is often linked to questions such as the possibility of personhood or consciousness of technology, which is a staple of the debate on transhumanism, singularity and artificial general intelligence (AGI) [100,101,102]. I do not want to engage with this debate here but would like to point out that, following the definition of responsibility as the social ascription of an object to a subject with the aim of achieving a socially desirable outcome, responsibility of machines may be viable without having to rely on the heavy conceptual baggage of AGI [103].
This very brief overview of the concept of responsibility suggests that it may be useful to think about questions of digital ethics using the term. Responsibility as a social construct based on communication does not posit a particular ethical position. Instead, it can be seen as a procedural approach that aims to elicit what is deemed to be acceptable and find ways to achieve such acceptable ends by ascribing objects to subjects and allocating sanctions accordingly. Responsibility is not always explicitly ethical in nature, but arguably has at least ethical undertones in practice. Before we come to the application of responsibility to digital ecosystems, it makes sense to briefly review two related concepts that aim to apply ideas of responsibility to digital technologies, namely corporate digital responsibility and responsible (research and) innovation.

3.3. Corporate Digital Responsibility

A relatively recent development in the discourse on digital ethics has been the emergence of the concept of corporate digital responsibility (CDR). This idea brings together various strands of previous discourses and has its roots in discussions of computer ethics and business ethics [104]. A review of existing definitions in the nascent CDR literature [105] shows that it combines various concerns about digital ethics with an organisation-centric approach to the context in which these ethical questions are considered. Lobschat et al. [106] see CDR as “the set of shared values and norms guiding an organization’s operations with respect to four main processes related to digital technology and data”. The processes they refer to are technology creation and data capture, operation and decision making, inspection and impact assessment, and the refinement of technology and data.
In the context of this article, CDR can be interpreted as one facet of digital ethics that focuses on how topics and questions from digital ethics are perceived and dealt with by organisations. This intersection of digital technologies, social questions and organisational practice explains why the term is prominent in the community of IS scholars who tend to work on questions involving the organisational side of information technologies [107]. One advantage of this positioning of the CDR debate is that it can draw on existing ways of dealing with ethical questions such as corporate social responsibility (CSR) which are not strongly represented in the digital ethics discourse [105]. In addition, CDR is located in a discourse where the concept of responsibility is no longer by definition confined to individual human beings as subjects of responsibility thus opening the perspective towards broader questions of responsibility and the role that organisations can play.
From the perspective of this paper, the CDR discourse can therefore play an important role in understanding responsible digital ecosystems by providing theoretical perspectives such as CSR and by bringing in research communities that have not traditionally been included in the discussion of digital ethics. A further advantage of aligning CDR with CSR is that it can help sidestep the difficult question of which ethical theories should be adopted. As indicated earlier, all responsibility relationships can be said to have an ethical connotation. However, in light of the absence of a universally agreed ethical framework, it is possible to sidestep questions of philosophical ethics and work with the existing consensus on what counts as acceptable and appropriate. The CSR debate provides a good example of this in the form of the ISO 26000 standard on social responsibility [108,109], a procedural governance framework that requires no explicit commitment to any ethical position. At the same time, the focus on the organisation and on questions of digital ethics from an organisational perspective means that CDR does not necessarily cover the entire breadth of issues, questions, topics and perspectives that are relevant to digital ethics and that motivate my proposal to focus on responsible digital ecosystems.

3.4. Responsible (Research and) Innovation

A final theoretical position worth highlighting because of its influence on and relevance to responsible digital ecosystems is that of responsible (research and) innovation (R(R)I) [110]. This discourse, which rose to prominence in the early 2010s, can best be seen as a development of work on research and innovation governance. It draws from various roots across several disciplines, including the philosophy of technology, science and technology studies, and technology assessment. While not exclusively focused on digital technologies, it has from its inception included contributions related to questions of responsibility in digital technologies [111].
This discourse is of interest here because it takes a novel perspective on responsibility. In their seminal paper, Stilgoe, Owen and Macnaghten [112] highlight four dimensions that they see as crucial for R(R)I: anticipation, reflexivity, inclusion and responsiveness, thus underlining the communicative and reciprocal nature of responsibility. These ideas have proven to be highly influential in the field of research governance and in particular research funding, being adopted by the UK’s Engineering and Physical Sciences Research Council [113], and they now form part of the guidance for all research funded by UK Research and Innovation. These principles have been further developed with a specific focus on digital technologies [114]. There is a closely related but not fully identical discourse that covers these topics of research and innovation governance at the EU funding level, which has led to the somewhat confusing acronym R(R)I [110]. For the purposes of this article, we can ignore the fine points of this debate and see it as “the on-going process of aligning research and innovation to the values, needs and expectations of society” [115]. It is worth highlighting that discussions of R(R)I have an established position in the sustainability literature [116,117,118].
The R(R)I discussion is relevant to the argument put forward here because it has established a concept of responsibility that moves beyond the traditional view introduced earlier and that focuses on the communicative nature without taking strong positions on who or what should answer to whom for what and with which consequences. It implies that communication, responsiveness, a willingness to engage with stakeholders and to question one’s position on the basis of such feedback and anticipatory expectations will lead to socially desirable and morally acceptable outcomes. This position raises some questions about its empirical validity which have yet to be answered, but it points towards a concept of responsibility that I will now explore with regard to digital ecosystems.

4. Responsible Digital Ecosystems

Having introduced the building blocks of responsible digital ecosystems, I am now in a position to put them together and outline what the concept could look like in theory and in practice. I will start by highlighting the challenges before exploring how these could be addressed or overcome to move towards digital ecosystems that can claim to be responsible.

4.1. Challenges of Responsible Digital Ecosystems

A key challenge for the concept of responsible digital ecosystems comes from the traditional notion of responsibility, where a clearly identified subject is held responsible for an object before an authority, which leads to sanctions. While such instances of responsibility exist, form part of most social relationships and can be found within digital ecosystems, they are not applicable to these ecosystems as a whole. To put it differently, it is difficult to see how a digital ecosystem could be the subject of responsibility. Ecosystems, whether biological or metaphorical, do not fulfil the criteria and conditions that are typically associated with being a subject. They have no intention, they cannot act in a concerted manner, they have no overall awareness or understanding of current situations and possible consequences, they lack the emotional capacity to react to threats or promises of sanctions, and they cannot respond in a traditional sense of the word. This means that the term responsible digital ecosystems must refer to a different conceptualisation of responsibility.
A second challenge of digital ecosystems that renders responsibility ascriptions problematic, even when going beyond the idea of the system itself being the subject of responsibility, is the complexity of these systems. The exact nature of this complexity depends on the definition of the system, on where exactly boundaries are drawn around it. The key challenge is the drawing of the boundary, as it drives most other aspects of the system, including its members, their dynamics, and external influences. It is not obvious how the boundaries are to be drawn. In the policy discourse, there is typically an implied assumption that digital ecosystems refer to the totality of digital technologies and their socio-technical implementation within a nation state. This is plausible because it means that some aspects, such as the legal underpinning and established mechanisms of attributing sanctions through judicial mechanisms, are consistent across the system. However, digital ecosystems are certainly not confined to national ecosystems. It is equally plausible to draw the boundary around a particular organisation, an industry or an application area, as well as to go beyond national boundaries, as very few digital technologies have natural boundaries that coincide with national ones.
Drawing the boundary of the digital ecosystem in question is thus an activity that requires detailed deliberation and justification. The choice made will lead to descriptions of systems that are more or less accessible and offer different opportunities for the observer to understand their internal dynamics and possible interventions. It will never be the only way of delineating the system, and competing perspectives will always be available.
A further challenge is that of epistemic uncertainty. The above issues would arise even if we had perfect knowledge of the digital technology, its socio-technical realisation, and the moral preferences, social dynamics and stakeholders involved in the system. In practice, however, no observer will ever have this level of certainty. The nature and capability of the technical substrate can always throw up surprises and are subject to continuous development. The use of any technology is not determined, and it is therefore impossible to predict what its eventual uses will be. This is true for any technology, but, as indicated earlier, Moor [17] highlighted that “logical malleability”, the fact that the technology is made as a general-purpose tool, is a design feature of computing technologies. Epistemic uncertainty furthermore arises with regard to moral preferences and ethical positions. These can be described using social science methods, but they change over time. One of the main advantages of the use of the ecosystems metaphor, namely its ability to reflect the dynamic nature of socio-technical systems, thus turns out to increase the challenges of using it, as future uses, future moral preferences and perceptions, and future ethical and legal positions are by definition unknown. Given all of these challenges, the question arises how responsible digital ecosystems can be conceptualised.

4.2. Responsibility of, in and through Digital Ecosystems

The concept of a responsible digital ecosystem can be understood as an expression of the desire to ensure that the ensemble of socio-technical systems and the social, economic, legal and other broader systems it is embedded in will have beneficial consequences for society. This means that these responsible digital ecosystems are capable of identifying issues, questions, and concerns arising from the design and use of the digital technologies and providing structures of responsibility that promote socially desirable, ethically acceptable and sustainable uses of technology.
As indicated earlier, the ecosystem itself is not a plausible candidate for being the subject of responsibility. Instead, a responsible digital ecosystem would have to be a system that fosters and supports responsibility. This can be achieved by strengthening the different aspects of responsibility. In practice, this could mean that the ecosystem provides communication structures in which claims to responsibility can be raised, contested, and negotiated. This includes the identification of relevant subjects and the attribution of objects of responsibility to those subjects. The system must allow open communication as well as mechanisms to query claims and to propose and implement sanctions.
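To make these abstract requirements more tangible, the following sketch models a responsibility ascription as a simple data structure, together with the communicative lifecycle of raising, contesting, and resolving a claim. It is a minimal, purely illustrative toy model rather than a description of any existing system; all names, states, and example values are my own assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ClaimState(Enum):
    """Lifecycle of a responsibility claim within the ecosystem."""
    PROPOSED = auto()    # claim has been raised
    CONTESTED = auto()   # claim is being queried or negotiated
    ACCEPTED = auto()    # proposed subject has accepted the ascription
    REJECTED = auto()    # proposed subject has rejected the ascription


@dataclass
class ResponsibilityAscription:
    """One ascription: who (subject) answers for what (object), to whom."""
    subject: str                 # e.g. a person, role, or department
    obj: str                     # e.g. a technology or its consequences
    authority: str               # who adjudicates, e.g. an ethics board
    norms: str                   # the normative basis of the ascription
    sanctions: list[str] = field(default_factory=list)
    state: ClaimState = ClaimState.PROPOSED

    def contest(self) -> None:
        self.state = ClaimState.CONTESTED

    def resolve(self, accepted: bool) -> None:
        self.state = ClaimState.ACCEPTED if accepted else ClaimState.REJECTED


# Usage: raise a claim, let the proposed subject contest and then accept it.
claim = ResponsibilityAscription(
    subject="Data Science Team",                          # hypothetical
    obj="bias monitoring of the loan-approval model",     # hypothetical
    authority="internal AI ethics board",                 # hypothetical
    norms="company responsible-AI policy",                # hypothetical
    sanctions=["mandatory retraining", "escalation to board"],
)
claim.contest()
claim.resolve(accepted=True)
print(claim.state)  # ClaimState.ACCEPTED
```

The point of the sketch is not the code itself but the structure it makes explicit: every ascription names a subject, an object, an adjudicating authority, a normative basis, and possible sanctions, and each claim can be openly queried before it is settled.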
If we use the example of a large company that is active in the development and deployment of digital technologies, then we can suggest some of the characteristics that such a company would have to possess in order to count as a responsible digital ecosystem. To facilitate responsibility ascriptions, the company would have to have a culture that facilitates and supports communication. Members of the organisation need to feel empowered to raise questions and issues and to propose subjects of responsibility who are capable of dealing with them. The proposed subjects need to be able to accept or reject these proposals, and the company needs the flexibility to accommodate new responsibility structures. It must be able to recognise the normative assumptions underpinning responsibilities and translate them into organisational structures, for example by assigning questions arising from novel capabilities of digital technologies to specific individuals or departments. The company will furthermore need the ability to react appropriately to outside influences, to understand external voices, and to translate them into contributions to its internal communications.
Such a company, as an example of a responsible digital ecosystem, thus needs a repository of knowledge, not only about the digital technologies it works on but also about ways of interacting with internal and external stakeholders, communicating normative positions, and realising them. This knowledge base can only be deployed successfully if the company has a governance structure that is open, flexible, and capable of learning. If these conditions are met, then the company can support the development and critical reflection of responsibility structures, which will help it address the challenges linked to digital technologies and allow it to be considered a responsible digital ecosystem. It will then be a company that fosters responsibilities and thereby showcases responsibility within the ecosystem. The company is not a stand-alone entity in an empty world but forms part of larger political, economic, and social systems that it interacts with. These larger ecosystems can use such a company to promote responsibilities, thus rendering the company a vehicle through which responsibilities are expressed and realised.
A further observation is that these new and developing responsibilities will be embedded in the network of existing responsibilities outlined above. Different instances of responsibility may be mutually supporting and synergistic, but they may also conflict. A responsible digital ecosystem will thus need a way to shape, maintain, develop, coordinate, and align existing and novel research and innovation-related processes, actors, and responsibilities. Elsewhere I have suggested the term “meta-responsibility” for these activities [119,120]. The term ‘meta’ here stands for a higher-level responsibility that deals with existing responsibilities. Establishing such a meta-responsibility within a company may be relatively easy, as existing management structures can be used to achieve it. In other types of responsible digital ecosystems that do not have corporate structures, it may be more challenging.
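Continuing the illustrative sketch above, the following function hints at what a meta-responsibility might minimally involve in computational terms: surveying a network of ascriptions to surface conflicts (several subjects accepted for the same object) and gaps (objects of responsibility that nobody covers). The checks chosen are my own assumptions about what ‘aligning’ a responsibility network could mean at its simplest; real alignment would of course be a social and organisational process rather than a computation.

```python
from collections import defaultdict

# Assumes the ResponsibilityAscription / ClaimState sketch above is in scope.

def align_network(ascriptions, required_objects):
    """Toy meta-responsibility check over a network of ascriptions.

    ascriptions: iterable of ResponsibilityAscription instances
    required_objects: objects of responsibility the ecosystem must cover
    Returns (conflicts, gaps), both illustrative notions only.
    """
    accepted = defaultdict(set)
    for a in ascriptions:
        if a.state is ClaimState.ACCEPTED:
            accepted[a.obj].add(a.subject)

    # Conflicts: one object answered for by several subjects at once;
    # these may need coordination rather than being problems per se.
    conflicts = {obj: subs for obj, subs in accepted.items() if len(subs) > 1}

    # Gaps: objects of responsibility that no accepted subject covers.
    gaps = [obj for obj in required_objects if obj not in accepted]

    return conflicts, gaps


conflicts, gaps = align_network(
    [claim],  # the single ascription from the sketch above
    required_objects=[
        "bias monitoring of the loan-approval model",
        "energy footprint of model training",   # hypothetical second object
    ],
)
print(conflicts)  # {}
print(gaps)       # ['energy footprint of model training']
```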

5. Implications and Conclusions

In this article, I have argued that questions of digital responsibility may be best viewed through the lens of the metaphor of a digital ecosystem. I have then explored what the concept of responsibility can mean in such ecosystems by looking at traditional concepts of responsibility as well as more recent developments such as responsible (research and) innovation and corporate digital responsibility. I have then proposed a concept of the responsible digital ecosystem that builds on and incorporates ideas of responsibility that go beyond the traditional individual human being as a subject, thereby drawing on ideas of CDR and R(R)I. I believe that these ideas rest on a conceptually robust foundation, which warrants the question of which implications the adoption of the concept of responsible digital ecosystems has for research and practice.

5.1. Implications for Research

The key implication for research that arises from the use of the ecosystems metaphor to describe current digital landscapes and their ways of identifying and addressing social and ethical issues is the call for a change in perspective. Instead of focusing on specific technologies or their applications, researchers interested in digital responsibility need to adopt systems thinking [25,121].
This call for a stronger focus on systems thinking touches both on the framing of research questions and on the methodologies used to address them. The framing of research is always important and needs to be justified as part of any research project. The ecosystems perspective places an additional onus on researchers not only to draw the boundaries around the research question they want to pursue but also to acknowledge that this drawing of boundaries is already part of the research itself: it rests on often implicit assumptions that need to be made explicit and subjected to external scrutiny. Adopting a systems approach carries the implication that the boundaries of the system under investigation form part of the research question.
In addition to the way research is framed and research questions are designed, the turn towards systems thinking offers a broad array of methodologies that researchers can consider when undertaking research, collecting data, and analysing it. With regard to digital responsibility, there are existing approaches in the IS field, such as Mumford’s ETHICS methodology, that build on systems thinking [122,123]. In addition, there are many other approaches to research that are informed by systems thinking, both within the IS field [124] and in other disciplines [125]. The discourse around systems is broad and diverse and offers many different understandings of the nature of systems and ways to investigate them. The call for the integration of systems thinking into research methodology is therefore not a call for methodological uniformity but for a shift in perspective that will continue to allow a multiplicity of approaches to research.

5.2. Implications for Practice

The argument put forward in this article has implications not only for research but also for the practice of developing and using digital technologies in society. For individuals and organisations involved in these activities, it points to the need to consider more explicitly which ecosystem(s) they are part of or wish to interact with. The delineation of boundaries is as much of practical relevance as it is of theoretical importance. This delineation process can help practitioners better assess who their stakeholders are, which mechanisms they have available to interact with them, and how to structure this interaction successfully.
The leadership of organisations active in this space may profit from the ecosystem perspective by developing their organisations so that they form part of successful ecosystems. Some of the insights developed earlier, such as the need for a well-maintained knowledge base that includes but goes beyond technical knowledge, or the requirement for adaptive and flexible governance structures, can be seen in this light.
Practical implications furthermore arise for policy makers who are interested in developing policies for digital technologies and the ecosystems in which they are realised. The question of policy development for digital ecosystems is currently a hot topic, in particular with regard to AI. Policy makers can take various lessons from this discussion of responsible digital ecosystems. This can start with the drawing of boundaries, which is particularly problematic in the AI policy discourse. AI is an exceptionally poorly defined concept, and my suspicion is that any AI-specific policy will suffer greatly from this conceptual lack of clarity, which translates into poorly defined boundaries of the ecosystems that it is to cover. On the other hand, the AI policy discourse provides examples of the adoption of some of the insights from the ecosystems perspective. For example, there now seems to be broad agreement that AI policy calls not only for a well-developed body of knowledge but also for gatekeepers who can validate and extend this knowledge. There are various initiatives to establish such gatekeepers, typically on the national level, such as the Alan Turing Institute in the UK. Similarly, one can observe numerous attempts to grow the knowledge base and the number of individuals who can access it through various education and training initiatives, again typically defined and funded at the level of the nation state.
A key question for policy makers arises from the complex nature of existing and novel responsibilities related to digital ecosystems. I have suggested above that this complexity calls for a way to align existing responsibility networks. It is not necessarily clear how this might be achieved. However, if we take the idea of a meta-responsibility seriously, then, in line with traditional thinking on responsibility, we will need a subject of this meta-responsibility, that is, someone or something that is held responsible for aligning the network of responsibilities. What exactly such a subject of meta-responsibility would look like is not obvious. However, on the national policy level, where much of the current policy discussion is focused, it might take the form of a regulatory body, something that has been suggested in the context of AI [126]. It is currently unclear whether nation states or political bodies such as the European Union will go down this route, but if such regulators for AI were to be established, they might serve as models or expand to cover digital ecosystems more broadly.

5.3. Conclusions

The fact that digital technologies raise ethical and social concerns is well established. The concept of responsibility offers mechanisms for steering actions in ways that promote desirable outcomes by linking objects of responsibility to subjects, typically through authorities that draw on accepted normative foundations to allocate sanctions to those subjects. In this paper, I have suggested that digital responsibility, if it is to be successful, needs to take into consideration the complexity of current digital socio-technical systems, and that one way of achieving this is to adopt the metaphor of digital ecosystems. I then used different concepts of responsibility to discuss what would constitute responsible digital ecosystems. In the final section, I suggested some relevant implications for research and practice that would arise from the adoption of this concept.
The ideas spelled out here give some indication of practical next steps, but they probably raise more questions than they answer. This conceptual essay will require significant empirical underpinning and research on digital ecosystems to show whether it does in fact offer a viable way to approach responsibility questions of digital technologies. While this article thus cannot be the final word on digital responsibility, it can be read as a call for the development of a research programme that has a realistic chance to progress the state of the art and to help us understand the nature of, and possible interventions into, digital technologies, in the hope of minimising risks and maximising benefits.

Funding

This work was supported by the UK Engineering and Physical Sciences Research Council [Horizon Digital Economy Research ‘Trusted Data Driven Products’: EP/T022493/1] and [RAI UK: Creating an International Ecosystem for Responsible AI Research and Innovation: EP/Y009800/1].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Schwartz, S.J. Publishing Non-Empirical Papers. In The Savvy Academic: Publishing in the Social and Health Sciences; Schwartz, S.J., Ed.; Oxford University Press: Oxford, UK, 2022; pp. 470–490. ISBN 978-0-19-009591-8. [Google Scholar]
  2. Rowe, F. What Literature Review Is Not: Diversity, Boundaries and Recommendations. Eur. J. Inf. Syst. 2014, 23, 241–255. [Google Scholar] [CrossRef]
  3. Boell, S.K.; Cecez-Kecmanovic, D. On Being ‘Systematic’ in Literature Reviews in IS. J. Inf. Technol. 2015, 30, 161–173. [Google Scholar] [CrossRef]
  4. Snyder, H. Literature Review as a Research Methodology: An Overview and Guidelines. J. Bus. Res. 2019, 104, 333–339. [Google Scholar] [CrossRef]
  5. Reese, S.D. Writing the Conceptual Article: A Practical Guide. Digit. J. 2023, 11, 1195–1210. [Google Scholar] [CrossRef]
  6. Jaakkola, E. Designing Conceptual Articles: Four Approaches. AMS Rev. 2020, 10, 18–26. [Google Scholar] [CrossRef]
  7. Johnston, K.; Kervin, L.; Wyeth, P. Defining Digital Technology; Australian Research Council Centre of Excellence for the Digital Child: Kelvin Grove, QLD, Australia, 2022. [Google Scholar]
  8. Schmitt, S.; Klähn, J.; Bellec, G.; Grübl, A.; Güttler, M.; Hartel, A.; Hartmann, S.; Husmann, D.; Husmann, K.; Jeltsch, S.; et al. Neuromorphic Hardware in the Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 2227–2234. [Google Scholar]
  9. Steane, A. Quantum Computing. Rep. Prog. Phys. 1998, 61, 117. [Google Scholar] [CrossRef]
  10. Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power; Basic Books: New York, NY, USA, 1988; ISBN 0-465-03212-5. [Google Scholar]
  11. Saltz, J.S.; Dewar, N. Data Science Ethical Considerations: A Systematic Literature Review and Proposed Project Framework. Ethics Inf. Technol. 2019, 21, 197–208. [Google Scholar] [CrossRef]
  12. Someh, I.; Davern, M.; Breidbach, C.; Shanks, G. Ethical Issues in Big Data Analytics: A Stakeholder Perspective. Commun. Assoc. Inf. Syst. 2019, 44, 718–747. [Google Scholar] [CrossRef]
  13. Elsevier. Artificial Intelligence: How Knowledge Is Created, Transferred, and Used—Trends in China, Europe, and the United States; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
  14. Alpaydin, E. Introduction to Machine Learning; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  15. Bengio, Y.; Lecun, Y.; Hinton, G. Deep Learning for AI. Commun. ACM 2021, 64, 58–65. [Google Scholar] [CrossRef]
  16. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  17. Moor, J.H. What Is Computer Ethics. Metaphilosophy 1985, 16, 266–275. [Google Scholar] [CrossRef]
  18. Feenberg, A. Questioning Technology, 1st ed.; Routledge: London, UK, 1999; ISBN 0-415-19755-4. [Google Scholar]
  19. Brynjolfsson, E.; McAfee, A. Race against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy; Digital Frontier Press: Lexington, MA, USA, 2012. [Google Scholar]
  20. Floridi, L. Artificial Intelligence’s New Frontier: Artificial Companions and the Fourth Revolution. Metaphilosophy 2008, 39, 651–655. [Google Scholar] [CrossRef]
  21. Majchrzak, A.; Markus, M.L.; Wareham, J. Designing for Digital Transformation: Lessons for Information Systems Research from the Study of ICT and Societal Challenges. MIS Q. 2016, 40, 267–277. [Google Scholar] [CrossRef]
  22. Rowe, F. Being Critical Is Good, but Better with Philosophy! From Digital Transformation and Values to the Future of IS Research. Eur. J. Inf. Syst. 2018, 27, 380–393. [Google Scholar] [CrossRef]
  23. Verhoef, P.C.; Broekhuizen, T.; Bart, Y.; Bhattacharya, A.; Qi Dong, J.; Fabian, N.; Haenlein, M. Digital Transformation: A Multidisciplinary Reflection and Research Agenda. J. Bus. Res. 2021, 122, 889–901. [Google Scholar] [CrossRef]
  24. Zimmer, M.P.; Järveläinen, J. Digital-Sustainable Co-Transformation: Introducing the Triple Bottom Line of Sustainability to Digital Transformation Research. In Proceedings of the HCC 2022, IFIP AICT 656 Proceedings, Tokyo, Japan, 8–9 September 2022; Springer: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  25. Lee, A.S. Retrospect and Prospect: Information Systems Research in the Last and next 25 Years. J. Inf. Technol. 2010, 25, 336–348. [Google Scholar] [CrossRef]
  26. Hoverstadt, P. The Viable System Model. In Systems Approaches to Managing Change: A Practical Guide; Reynolds, M., Holwell, S., Eds.; Springer: London, UK, 2010; pp. 87–134. ISBN 978-1-84882-808-7. [Google Scholar]
  27. Ropohl, G. Philosophy of Socio-Technical Systems. Soc. Philos. Technol. Q. Electron. J. 1999, 4, 186–194. [Google Scholar] [CrossRef]
  28. Demetis, D.; Lee, A. When Humans Using the IT Artifact Becomes IT Using the Human Artifact. J. Assoc. Inf. Syst. 2018, 19, 929–952. [Google Scholar] [CrossRef]
  29. Checkland, P. Soft Systems Methodology: A Thirty Year Retrospective. Syst. Res. Behav. Sci. 2000, 17, S11–S58. [Google Scholar] [CrossRef]
  30. Von Bertalanffy, L. General System Theory: Foundations, Development, Applications, 2015 Illustrated ed.; George Braziller Inc.: New York, NY, USA, 1968. [Google Scholar]
  31. Mantelero, A.; Esposito, M.S. An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems. Comput. Law Secur. Rev. 2021, 41, 105561. [Google Scholar] [CrossRef]
  32. Hofstadter, D. Godel, Escher, Bach: An Eternal Golden Braid, 20th anniversary ed.; Basic Books: New York, NY, USA, 1999; ISBN 978-0-465-02656-2. [Google Scholar]
  33. Luhmann, N. Soziale Systeme: Grundriß Einer Allgemeinen Theorie, 1st ed.; Suhrkamp Verlag: Frankfurt am Main, Germany, 1987; ISBN 978-3-518-28266-3. [Google Scholar]
  34. Chen, W.; Hirschheim, R. A Paradigmatic and Methodological Examination of Information Systems Research from 1991 to 2001. Inf. Syst. J. 2004, 14, 197–235. [Google Scholar] [CrossRef]
  35. Chua, W.F. Radical Developments in Accounting Thought. Account. Rev. 1986, 61, 601–632. [Google Scholar]
  36. Orlikowski, W.J.; Baroudi, J.J. Studying Information Technology in Organizations: Research Approaches and Assumptions. Inf. Syst. Res. 1991, 2, 1–28. [Google Scholar] [CrossRef]
  37. Checkland, P.; Poulter, J. Soft Systems Methodology. In Systems Approaches to Managing Change: A Practical Guide; Reynolds, M., Holwell, S., Eds.; Springer: London, UK, 2010; pp. 191–242. ISBN 978-1-84882-808-7. [Google Scholar]
  38. Ulrich, W. A Brief Introduction to Critical Systems Heuristics (CSH); The Open University: Milton Keynes, UK, 2005. [Google Scholar]
  39. Habermas, J. Legitimationsprobleme im Spatkapitalismus; Suhrkamp Verlag: Frankfurt am Main, Germany, 1973. [Google Scholar]
  40. Lu, Q.; Zhu, L.; Xu, X.; Whittle, J. Responsible-AI-by-Design: A Pattern Collection for Designing Responsible AI Systems. IEEE Softw. 2023, 40, 63–71. [Google Scholar] [CrossRef]
  41. Moore, J.F. Predators and Prey: A New Ecology of Competition. Harv. Bus. Rev. 1993, 71, 75–86. [Google Scholar]
  42. Oh, D.-S.; Phillips, F.; Park, S.; Lee, E. Innovation Ecosystems: A Critical Examination. Technovation 2016, 54, 1–6. [Google Scholar] [CrossRef]
  43. Jacobides, M.G.; Cennamo, C.; Gawer, A. Towards a Theory of Ecosystems. Strateg. Manag. J. 2018, 39, 2255–2276. [Google Scholar] [CrossRef]
  44. Adner, R. Ecosystem as Structure: An Actionable Construct for Strategy. J. Manag. 2017, 43, 39–58. [Google Scholar] [CrossRef]
  45. Nylund, P.A.; Ferras-Hernandez, X.; Brem, A. Strategies for Activating Innovation Ecosystems: Introduction of a Taxonomy. IEEE Eng. Manag. Rev. 2019, 47, 60–66. [Google Scholar] [CrossRef]
  46. Ritala, P.; Almpanopoulou, A. In Defense of ‘Eco’ in Innovation Ecosystem. Technovation 2017, 60, 39–42. [Google Scholar] [CrossRef]
  47. De Vasconcelos Gomes, L.A.; Facin, A.L.F.; Salerno, M.S.; Ikenami, R.K. Unpacking the Innovation Ecosystem Construct: Evolution, Gaps and Trends. Technol. Forecast. Soc. Chang. 2018, 136, 30–48. [Google Scholar] [CrossRef]
  48. Gobble, M.M. Charting the Innovation Ecosystem. Res. Technol. Manag. 2014, 57, 55–59. [Google Scholar]
  49. Lis, D.; Otto, B. Towards a Taxonomy of Ecosystem Data Governance. In Proceedings of the 54th Hawaii International Conference on System Sciences, Kauai, HI, USA, 5 January 2021; pp. 6067–6076. [Google Scholar]
  50. Cath, C.; Mittelstadt, B.; Wachter, S.; Taddeo, M.; Floridi, L. Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach. Sci. Eng. Ethics 2018, 24, 505–528. [Google Scholar] [CrossRef]
  51. Asplund, F.; Björk, J.; Magnusson, M.; Patrick, A.J. The Genesis of Public-Private Innovation Ecosystems: Bias and Challenges. Technol. Forecast. Soc. Chang. 2021, 162, 120378. [Google Scholar] [CrossRef]
  52. Wang, P. Connecting the Parts with the Whole: Toward an Information Ecology Theory of Digital Innovation Ecosystems. Manag. Inf. Syst. Q. 2021, 45, 397–422. [Google Scholar] [CrossRef]
  53. Granstrand, O.; Holgersson, M. Innovation Ecosystems: A Conceptual Review and a New Definition. Technovation 2020, 90–91, 102098. [Google Scholar] [CrossRef]
  54. Phillips, M.A.; Ritala, P. A Complex Adaptive Systems Agenda for Ecosystem Research Methodology. Technol. Forecast. Soc. Chang. 2019, 148, 119739. [Google Scholar] [CrossRef]
  55. Smolka, M.; Böschen, S. Responsible Innovation Ecosystem Governance: Socio-Technical Integration Research for Systems-Level Capacity Building. J. Responsible Innov. 2023, 10, 2207937. [Google Scholar] [CrossRef]
  56. Chae, B.K. A General Framework for Studying the Evolution of the Digital Innovation Ecosystem: The Case of Big Data. Int. J. Inf. Manag. 2019, 45, 83–94. [Google Scholar] [CrossRef]
  57. Porra, J. Colonial Systems. Inf. Syst. Res. 1999, 10, 38–69. [Google Scholar] [CrossRef]
  58. Wareham, J.; Fox, P.B.; Cano Giner, J.L. Technology Ecosystem Governance. Organ. Sci. 2014, 25, 1195–1215. [Google Scholar] [CrossRef]
  59. Tsujimoto, M.; Kajikawa, Y.; Tomita, J.; Matsumoto, Y. A Review of the Ecosystem Concept—Towards Coherent Ecosystem Design. Technol. Forecast. Soc. Chang. 2018, 136, 49–58. [Google Scholar] [CrossRef]
  60. Klimas, P.; Czakon, W. Species in the Wild: A Typology of Innovation Ecosystems. Rev. Manag. Sci. 2021, 73, 1–34. [Google Scholar] [CrossRef]
  61. European Data Protection Supervisor. Towards a New Digital Ethics; EDPS: Brussels, Belgium, 2015. [Google Scholar]
  62. Floridi, L. Soft Ethics and the Governance of the Digital. Philos. Technol. 2018, 31, 1–8. [Google Scholar] [CrossRef]
  63. Stahl, B.C. Morality, Ethics, and Reflection: A Categorization of Normative IS Research. J. Assoc. Inf. Syst. 2012, 13, 636–656. [Google Scholar] [CrossRef]
  64. Bentham, J. An Introduction to the Principles of Morals and Legislation; Dover Publications Inc.: Mineola, NY, USA, 1789; ISBN 0-486-45452-5. [Google Scholar]
  65. Mill, J.S. Utilitarianism, 2nd revised ed.; Hackett Publishing Co., Inc.: Indianapolis, IN, USA, 1861; ISBN 0-87220-605-X. [Google Scholar]
  66. Kant, I. Kritik Der Praktischen Vernunft; Reclam: Ditzingen, Germany, 1788; ISBN 3-15-001111-6. [Google Scholar]
  67. Kant, I. Grundlegung Zur Metaphysik Der Sitten; Reclam: Ditzingen, Germany, 1797; ISBN 3-15-004507-X. [Google Scholar]
  68. Aristotle; Barnes, J. The Nicomachean Ethics; Tredennick, H., Ed.; New Edition; Penguin Classics: London, UK; New York, NY, USA, 2004; ISBN 978-0-14-044949-5. [Google Scholar]
  69. MacIntyre, A.C. After Virtue: A Study in Moral Theory; University of Notre Dame Press: Notre Dame, IN, USA, 2007; ISBN 978-0-268-03504-4. [Google Scholar]
  70. Weber, M. Politik als Beruf, 11th ed.; Duncker & Humblot: Berlin, Germany, 2010; ISBN 978-3-428-13479-3. [Google Scholar]
  71. Weber, M. Die Protestantische Ethik und der Geist des Kapitalismus; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2015; ISBN 978-1-5084-8443-1. [Google Scholar]
  72. Jonas, H. Das Prinzip Verantwortung: Versuch Einer Ethik Für Die Technologische Zivilisation; Suhrkamp: Frankfurt am Main, Germany, 1984; ISBN 3-518-37585-7. [Google Scholar]
  73. Wiener, N. The Human Use of Human Beings; Doubleday: New York, NY, USA, 1954. [Google Scholar]
  74. Wiener, N. Some Moral and Technical Consequences of Automation. Science 1960, 131, 1355–1358. [Google Scholar] [CrossRef] [PubMed]
  75. Weizenbaum, J. Computer Power and Human Reason: From Judgement to Calculation; New Edition; W.H. Freeman & Co. Ltd.: New York, NY, USA, 1977; ISBN 0-7167-0463-3. [Google Scholar]
  76. Bynum, T.W. The Historical Roots of Information and Computer Ethics. In The Cambridge Handbook of Information and Computer Ethics; Floridi, L., Ed.; Cambridge University Press: Cambridge, UK, 2010; pp. 20–38. ISBN 0-521-71772-8. [Google Scholar]
  77. Johnson, D.G. Computer Ethics; Prentice Hall: Hoboken, NJ, USA, 1985. [Google Scholar]
  78. Rogerson, S. Ethics and ICT. In The Oxford Handbook of Management Information Systems: Critical Perspectives and New Directions; Galliers, R.D., Currie, W., Eds.; OUP Oxford: Oxford, UK, 2011; pp. 601–622. ISBN 0-19-958058-8. [Google Scholar]
  79. Spinello, R.A. Case Studies in Information Technology Ethics, 2nd ed.; Pearson: Upper Saddle River, NJ, USA, 2002; ISBN 978-0-13-099150-8. [Google Scholar]
  80. Baird, R.M.; Ramsower, R.M.; Rosenbaum, S.E. (Eds.) Cyberethics: Social and Moral Issues in the Computer Age; Prometheus: Amherst, NY, USA, 2000; ISBN 978-1-57392-790-1. [Google Scholar]
  81. Spinello, R.A.; Tavani, H.T. Readings in CyberEthics; Jones and Bartlett Publishers, Inc.: Burlington, MA, USA, 2001; ISBN 0-7637-1500-X. [Google Scholar]
  82. Capurro, R. Information Ethics for and from Africa. J. Am. Soc. Inf. Sci. 2008, 59, 1162–1170. [Google Scholar] [CrossRef]
  83. Ess, C. Luciano Floridi’s Philosophy of Information and Information Ethics: Critical Reflections and the State of the Art. Ethics Inf. Technol. 2008, 10, 89–96. [Google Scholar] [CrossRef]
  84. Floridi, L. Information Ethics: On the Philosophical Foundation of Computer Ethics. Ethics Inf. Technol. 1999, 1, 33–52. [Google Scholar] [CrossRef]
  85. Floridi, L.; Taddeo, M. What Is Data Ethics? Phil. Trans. R. Soc. A 2016, 374, 20160360. [Google Scholar] [CrossRef]
  86. Borenstein, J.; Grodzinsky, F.S.; Howard, A.; Miller, K.W.; Wolf, M.J. AI Ethics: A Long History and a Recent Burst of Attention. Computer 2021, 54, 96–102. [Google Scholar] [CrossRef]
  87. Coeckelbergh, M. AI Ethics; MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  88. Dignum, V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2019; ISBN 978-3-030-30370-9. [Google Scholar]
  89. Stahl, B.C. From Computer Ethics and the Ethics of AI towards an Ethics of Digital Ecosystems. AI Ethics 2022, 2, 65–77. [Google Scholar] [CrossRef]
  90. Stahl, B.C.; Timmermans, J.; Mittelstadt, B.D. The Ethics of Computing: A Survey of the Computing-Oriented Literature. ACM Comput. Surv. 2016, 48, 55:1–55:38. [Google Scholar] [CrossRef]
  91. Stahl, B.C.; Eke, D. The Ethics of ChatGPT—Exploring the Ethical Issues of an Emerging Technology. Int. J. Inf. Manag. 2024, 74, 102700. [Google Scholar] [CrossRef]
  92. Lewis, H.D. The Non-Moral Notion of Collective Responsibility. In Individual and Collective Responsibility; French, P., Ed.; Schenkman: Cambridge, MA, USA, 1972; pp. 116–144. [Google Scholar]
  93. Timmermans, J.; Yaghmaei, E.; Stahl, B.C.; Brem, A. Research and Innovation Processes Revisited—Networked Responsibility in Industry. Sustainability 2017, 8, 307–334. [Google Scholar] [CrossRef]
  94. Hart, H.L.A. Punishment and Responsibility: Essays in the Philosophy of Law; Clarendon Press: Oxford, UK, 1968. [Google Scholar]
  95. Weckert, J.; Adeney, D. (Eds.) Computer and Information Ethics; Contributions to the Study of Computer Science; Greenwood Press: Westport, CT, USA, 1997; ISBN 0-313-29362-7. [Google Scholar]
  96. Collste, G. Ethics in the Age of Information Technology; Centrum för Tillämpad Etik: Linköping, Sweden, 2000. [Google Scholar]
  97. Wallace, R.J. Responsibility and the Moral Sentiments; Harvard University Press: Cambridge, MA, USA, 1998; ISBN 0-674-76623-7. [Google Scholar]
  98. French, P. (Ed.) Individual and Collective Responsibility; Schenkman: Cambridge, MA, USA, 1972. [Google Scholar]
  99. Werhane, P.H. Persons, Rights, and Corporations; Prentice-Hall: Hoboken, NJ, USA, 1985; ISBN 978-0-13-660341-2. [Google Scholar]
  100. Baum, S. A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy; Social Science Research Network: Rochester, NY, USA, 2017. [Google Scholar]
  101. Kurzweil, R. The Singularity Is Near; Gerald Duckworth & Co. Ltd.: London, UK, 2006; ISBN 978-0-7156-3561-2. [Google Scholar]
  102. Livingstone, D. Transhumanism: The History of a Dangerous Idea; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2015; ISBN 978-1-5152-3257-5. [Google Scholar]
  103. Stahl, B.C. Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. Minds Mach. 2004, 14, 67–83. [Google Scholar] [CrossRef]
  104. Mueller, B. Corporate Digital Responsibility. Bus. Inf. Syst. Eng. 2022, 64, 689–700. [Google Scholar] [CrossRef]
  105. Herden, C.J.; Alliu, E.; Cakici, A.; Cormier, T.; Deguelle, C.; Gambhir, S.; Griffiths, C.; Gupta, S.; Kamani, S.R.; Kiratli, Y.-S.; et al. Corporate Digital Responsibility. NachhaltigkeitsManagementForum 2021, 29, 13–29. [Google Scholar] [CrossRef]
  106. Lobschat, L.; Mueller, B.; Eggers, F.; Brandimarte, L.; Diefenbach, S.; Kroschke, M.; Wirtz, J. Corporate Digital Responsibility. J. Bus. Res. 2021, 122, 875–888. [Google Scholar] [CrossRef]
  107. Mihale-Wilson, C.; Hinz, O.; van der Aalst, W.; Weinhardt, C. Corporate Digital Responsibility. Bus. Inf. Syst. Eng. 2022, 64, 127–132. [Google Scholar] [CrossRef]
  108. ISO 26000:2010; Guidance on Social Responsibility. ISO: Geneva, Switzerland, 2010.
  109. Herciu, M. ISO 26000—An Integrative Approach of Corporate Social Responsibility. Stud. Bus. Econ. 2016, 11, 73–79. [Google Scholar] [CrossRef]
  110. Shanley, D. Imagining the Future through Revisiting the Past: The Value of History in Thinking about R(R)I’s Possible Future(s). J. Responsible Innov. 2021, 8, 234–253. [Google Scholar] [CrossRef]
  111. Von Schomberg, R. (Ed.) Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields; Publication Office of the European Union: Luxembourg, 2011. [Google Scholar]
  112. Stilgoe, J.; Owen, R.; Macnaghten, P. Developing a Framework for Responsible Innovation. Res. Policy 2013, 42, 1568–1580. [Google Scholar] [CrossRef]
  113. Owen, R. The UK Engineering and Physical Sciences Research Council’s Commitment to a Framework for Responsible Innovation. J. Responsible Innov. 2014, 1, 113–117. [Google Scholar] [CrossRef]
  114. Jirotka, M.; Grimpe, B.; Stahl, B.; Hartswood, M.; Eden, G. Responsible Research and Innovation in the Digital Age. Commun. ACM 2017, 60, 62–68. [Google Scholar] [CrossRef]
  115. European Commission. Rome Declaration on Responsible Research and Innovation in Europe; European Commission: Brussels, Belgium, 2014. [Google Scholar]
  116. Gurzawska, A.; Mäkinen, M.; Brey, P. Implementation of Responsible Research and Innovation (RRI) Practices in Industry: Providing the Right Incentives. Sustainability 2017, 9, 1759. [Google Scholar] [CrossRef]
  117. Martinuzzi, A.; Blok, V.; Brem, A.; Stahl, B.; Schönherr, N. Responsible Research and Innovation in Industry—Challenges, Insights and Perspectives. Sustainability 2018, 10, 702. [Google Scholar] [CrossRef]
  118. Stahl, B.C.; Obach, M.; Yaghmaei, E.; Ikonen, V.; Chatfield, K.; Brem, A. The Responsible Research and Innovation (RRI) Maturity Model: Linking Theory and Practice. Sustainability 2017, 9, 1036. [Google Scholar] [CrossRef]
  119. Stahl, B.C. Responsible Research and Innovation: The Role of Privacy in an Emerging Framework. Sci. Public Policy 2013, 40, 708–716. [Google Scholar] [CrossRef]
  120. Stahl, B.C. Responsible Innovation Ecosystems: Ethical Implications of the Application of the Ecosystem Concept to Artificial Intelligence. Int. J. Inf. Manag. 2022, 62, 102441. [Google Scholar] [CrossRef]
  121. Lee, A.S. Thinking about Social Theory and Philosophy for Information Systems. In Social Theory and Philosophy for Information Systems; Mingers, J., Willcocks, L., Eds.; Wiley: Chichester, UK, 2004; pp. 1–26. [Google Scholar]
  122. Mumford, E. Effective Systems Design and Requirements Analysis: The ETHICS Approach; Palgrave Macmillan: London, UK, 1995; ISBN 0-333-63908-1. [Google Scholar]
  123. Mumford, E. Systems Design: Ethical Tools for Ethical Change; Palgrave Macmillan: London, UK, 1996; ISBN 0-333-66946-0. [Google Scholar]
  124. Checkland, P.; Poulter, J. Learning for Action: A Short Definitive Account of Soft Systems Methodology and Its Use for Practitioner, Teachers, and Students; Wiley: Hoboken, NJ, USA, 2006; ISBN 978-0-470-02554-3. [Google Scholar]
  125. Salmon, P.M.; Stanton, N.A.; Walker, G.H.; Hulme, A.; Goode, N.; Thompson, J.; Read, G.J.M. Handbook of Systems Thinking Methods, 1st ed.; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  126. Stahl, B.C.; Rodrigues, R.; Santiago, N.; Macnish, K. A European Agency for Artificial Intelligence: Protecting Fundamental Rights and Ethical Values. Comput. Law Secur. Rev. 2022, 45, 105661. [Google Scholar] [CrossRef]