Article

Our New Artificial Intelligence Infrastructure: Becoming Locked into an Unsustainable Future

by Scott Robbins 1,* and Aimee van Wynsberghe 2
1 Center for Science and Thought, University of Bonn, Poppelsdorfer Allee 28, 53115 Bonn, Germany
2 Institute for Science and Ethics, University of Bonn, Bonner Talweg 57, 53113 Bonn, Germany
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(8), 4829; https://doi.org/10.3390/su14084829
Submission received: 28 February 2022 / Revised: 13 April 2022 / Accepted: 15 April 2022 / Published: 18 April 2022

Abstract

Artificial intelligence (AI) is becoming increasingly important for the infrastructures that support many of society's functions. Transportation, security, energy, education, the workplace, and government have all incorporated AI into their infrastructures for enhancement and/or protection. In this paper, we argue that not only is AI seen as a tool for augmenting existing infrastructures, but AI itself is becoming an infrastructure that many services of today and tomorrow will depend upon. Considering the vast environmental consequences associated with the development and use of AI, which the world is only beginning to learn about, the necessity of addressing AI alongside the concept of infrastructure points toward the phenomenon of carbon lock-in. Carbon lock-in refers to society's constrained ability to reduce carbon emissions technologically, economically, politically, and socially. These constraints are due to the inherent inertia created by entrenched technological, institutional, and behavioral norms. That is, the drive for AI adoption in virtually every sector of society will create dependencies and interdependencies from which it will be hard to escape. The crux of this paper boils down to this: in conceptualizing AI as infrastructure, we can recognize the risk of lock-in, not just carbon lock-in but lock-in as it relates to all the physical needs of the infrastructure of AI. This does not exclude the possibility of solutions arising with the rise of these technologies; however, given these points, it is of the utmost importance that we ask inconvenient questions regarding these environmental costs before becoming locked into this new AI infrastructure.

1. Introduction

Artificial intelligence (AI) is becoming increasingly important for the infrastructures that support many of society's functions. Transportation, security, energy, education, the workplace, and government have all incorporated AI into their infrastructures for enhancement and/or protection. Not only is AI seen as a tool for augmenting existing infrastructures, but AI itself is becoming an infrastructure that many services of today and tomorrow will depend upon. There is a growing body of research addressing the impact of AI on the environment. This literature shows that developing and using AI requires an enormous amount of computational power, which in turn increases carbon emissions. The effects of AI on environmental justice will be vast, considering also the mining of precious minerals and the vulnerable populations exploited in these processes. This is deeply concerning given the grave situation the world finds itself in regarding the climate. The Intergovernmental Panel on Climate Change (IPCC) goes so far as to say that it is "code red for humanity" [1]. Given that there is high confidence that climate change is to a large extent human-induced [2], we should be asking more questions before introducing a new human-made carbon-emitting infrastructure powered by AI.
The field of sustainable AI has been put forward as a way of addressing the environmental justice issues associated with AI throughout its lifecycle [3]. Sustainable AI is about more than applying AI to achieve climate goals (though much work in the field is devoted to this idea; see, e.g., [4,5,6,7,8,9,10]); it is about understanding and measuring the environmental impact of developing and using AI. The little information we have on the environmental impact of AI is, to say the least, not encouraging [11]. Many of the questions surrounding the sustainability of AI thus remain unanswered. These answers are needed for society to make an informed choice regarding the use of AI in a particular context. This makes AI a huge environmental risk, as AI continues to be implemented in a broad range of contexts despite this opacity regarding its environmental consequences.
It may not be immediately clear why AI researchers and developers, in particular, must pay attention to issues of environmental sustainability. Does not everything need to consider issues of sustainability? In this paper, we argue that the environmental consequences associated with AI are essential issues of AI ethics. The way we choose to build and implement AI today will have profound consequences for the future, and this warrants a specific focus on AI's sustainability. This special attention is due to the connection between AI and the concept of infrastructure.
In what follows, we illustrate how AI has traditionally been understood as conceptually distinct from infrastructure. From this vantage point, AI can be used to enhance or protect existing infrastructures. We also point out that AI is dependent on vast infrastructures which are climate intensive, e.g., AI needs electricity, precious minerals, data transfer networks, etc. AI is increasingly being used to power the next generation of digital services. That is, AI is now the infrastructure relied upon by digital services. Consider the Facebook outage of 2021, which showed how many businesses in Ghana were unable to function without the Facebook infrastructure. Facebook's services are AI-powered services. Everything from how content is displayed, moderated, and sorted is powered by AI [12]. Furthermore, the advertising ecosystem from which Facebook makes its money is AI-powered [13]. It is safe to say that without AI there is no Facebook. Consider also the business model of social networking companies, which rely on targeted advertising to generate revenue. The necessity of addressing AI alongside the concept of infrastructure points toward the phenomenon of carbon lock-in, whereby society's ability to technologically, economically, politically, and socially reduce carbon emissions is constrained due to the inherent inertia created by entrenched technological, institutional, and behavioral norms [14]. The negative outcomes that AI adoption creates may also give rise to innovative, environmentally sound solutions. However, without knowing the extent of the problem and giving that problem the attention it deserves, those solutions will never come about. Given these points, we must ask inconvenient questions regarding these environmental costs before becoming locked into this new AI infrastructure. No amount of convenience provided by AI can justify further decimating our planet.

2. AI Ethics and the Sustainability of AI

It is important here to note what we mean by AI. The concept is overused and can refer to many different things. For this article, AI refers to the methodology of creating algorithms driven by the rise of machine learning (ML). ML algorithms "use statistics and probability to 'learn' from large datasets" [15]. This learning is not restricted to picking out features that humans could understand, which gives the resulting algorithms greater power than we have seen before. This is a pragmatic definition, as it excludes other methodologies that would otherwise fall under the definition of AI. For example, expert systems and decision trees were for decades the only AI algorithms out there. However, they are not what is driving the rise of AI in our everyday lives, and their impact is similar to that of traditional software applications. ML is what has driven the need for more data, sensors, computing power, etc. We would prefer that everyone simply say ML rather than AI (because that is usually what is being referred to). However, as it stands, AI is the standard concept that people encounter in academic literature, popular culture, and the media. In what follows we use AI to refer to ML algorithms. This means that while autonomous cars and medical technologies are not themselves AI, more and more of these technologies are powered by ML algorithms.
As AI applications spread across society, AI ethicists have begun to uncover risks associated with the technology: risks, for example, concerning the use of historical data to train algorithms when such data embeds stereotypes and discriminatory assumptions about individuals and groups in society. The consequence of this practice is often further discrimination against those individuals and groups. AI ethics, in short, is dedicated to uncovering and understanding the ethical issues associated with the development and use (i.e., the entire life cycle) of AI-powered machines: how does AI threaten the ability of individuals and groups to live a "good life"? Once the risks have been identified, the goal is to prevent and/or mitigate them.
The field of AI ethics has grown in importance over the last decade, as seen in the increase in academic publications on the topic (for example, the Berkman Klein Center identified 36 "prominent AI principles" documents [16]), the involvement of AI ethics in the policy forum (e.g., the European Commission's High-Level Expert Group on AI) [17], and the adoption of AI ethics into the business and consulting space (see, e.g., [18,19]). In each of these sectors, there are certain canonical ethical issues pertaining to AI that are being discussed, most often concerning particular AI methodologies. Machine learning, for example, has been described as a method that creates a kind of opacity, given that it is often impossible to know and/or understand the rules generated by the model used to make a prediction. Stemming from this technical feature come ethical issues related to transparency (e.g., should a particular technology be used if it is impossible to understand how it arrives at an output); responsibility (e.g., should a particular technology be used if this lack of transparency leads to confusion about who is responsible for consequences of a decision that are not known or understood by the programmer); and security (e.g., how can we ensure the security of a system whose functioning we do not entirely understand). To be sure, none of these concerns have been resolved.
Without diminishing the significance of the above issues, it is also important to note that little attention has been paid, to date, to the environmental consequences of making and using AI. A small group of researchers has begun to study carbon emissions [11] and computing power [20]; however, there is little incentive for academics and/or industry to incorporate this systematically into research and production methods. There is no regulation to demand an environmental assessment of the impacts of making and/or using AI/ML. The systematic accounting of these environmental impacts is necessary to have a better idea of the large-scale impact of making and using AI systems. Moreover, “accurate accounting of carbon and energy impacts aligns with energy efficiency [21], raises awareness, and drives mitigation efforts, among other benefits” [20]. It is this connection—between AI and environmental consequences—that drives the points made in this paper. Namely, that we must know the specifics of this connection before we become (more) dependent on AI.
To be sure, the environmental costs of making and using AI do not end with direct carbon emissions or computing power. The systems used to create and run AI models require precious minerals that are mined in often horrible conditions for the individuals involved [22]. Water is needed to cool the computing centers. There will be electronic waste (e-waste) resulting from the updating of materials, computers, and data centers. Historically, e-waste has been dumped in underdeveloped countries, exposing inhabitants to toxic chemicals in their water supplies and agricultural land [23]. These concerns are essentially issues of environmental justice, and while they focus on environmental consequences, they point to societal concerns that have been, to date, invisible in public discourse. As Hasselbalch describes, data ethics is not only about power but also is power [24]. AI ethics is not only about power asymmetries but is power insofar as the loudest voices are the ones who determine the ethical issues of importance and priority. The movement to focus on sustainability is about revealing the hidden demographics who suffer and will continue to suffer as AI becomes more and more pervasive in our daily lives.
Sustainable AI was first defined by van Wynsberghe in 2021 as a “movement to foster change in the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice” [3]. Given the high costs already identified, we suggest that AI researchers (both ethicists and computer scientists) along with AI practitioners (AI developers) and policymakers (those involved with drafting legislation concerning the governance of AI) ought to shift focus to explicitly, and quickly, address the hidden environmental costs associated with AI development and usage.
Reframing AI ethics discussions in terms of sustainability opens up novel insights. First, to use the phrase "sustainable AI" demands that one consider sustainability as a value within the AI ethics domain, one that is deserving of greater attention. Second, the label of sustainability invokes the recognition of the environment as a starting point for addressing AI ethics issues. The environment becomes a lens through which societal and economic issues are uncovered, conceptualized, and understood. Third, sustainability as a concept emphasizes issues of intra- and inter-generational justice. Attention to environmental consequences demands consideration of the impacts, and our responsibilities to mitigate said impacts, on younger generations as well as those yet to come.
Fourth, sustainability demands the recognition of AI on a larger scale rather than in one or two specific applications. To date, the focus of AI ethics has been on mitigating concerns of privacy, safety, and fairness, to name a few. With this narrow view of the impacts of AI, researchers run the risk of overlooking the larger structural issue of AI as infrastructure; they cannot see the forest for the trees. By this, we mean that in focusing on issues of design, or how to implement the technology, researchers to date have been unable to take a step back and understand the magnitude of AI development and use. AI is not one or two models that will be restricted to a particular sector or a particular application. Instead, AI is being promoted as an innovation suitable for any sector, for any application. From our perspective, it is thus paramount to address AI alongside the notion of infrastructure.

3. AI and Infrastructure

We begin by asking: "What is the relationship between AI and infrastructure?" As Kate Crawford describes in her book "Atlas of AI", there is a fascinating phenomenon concerning the materiality of AI/ML: the language used to describe its materials refers to algorithms and "the cloud", making AI seem immaterial. In reality, however, there is a vast physical infrastructure behind the production of AI. Water is needed to cool computing centers, and the water obtained for this comes from public infrastructures. Electricity is needed to fuel computing centers, and the grids through which the electricity travels are often publicly funded networks. Minerals are required for batteries and microchips. These minerals are part of a long chain of procurement in which humans often work in slave-like conditions and the environment is degraded by the way the minerals are sourced (see, e.g., [25,26,27]). These realities are kept hidden to ensure enthusiasm toward AI. Consequently, the hidden materiality of AI fosters a lack of understanding of the breadth of the physical infrastructures powering it. This does not entail that AI and its materiality are worse than other industries regarding carbon emissions. Rather, the materiality of AI points to a non-negligible impact on the environment, one that must be included in the cost-benefit analysis of specific AI-powered services and products.
Not only does the development and use of AI rely on existing infrastructures, but AI is also seen as a powerful tool to support, enhance, or protect infrastructure. AI was used by Google in 2016 to understand how to conserve electricity in its data centers, allowing the company to enhance its energy conservation efforts [28]. AI is used in the banking sector to predict when/if a fraudulent transaction has occurred, allowing banks to react faster for their customers' protection [29,30]. AI is used in both the public and private sectors to protect against spam and phishing schemes [31,32]. AI is used across the transportation sector in a variety of ways, from managing traffic lights [33] to autonomous vehicles intended to reduce fatalities [34].
As we see, AI can be understood as dependent on existing infrastructure and/or as enhancing existing infrastructure. Our aim now is to argue that AI should itself be understood as infrastructure. And it is this understanding that adds urgency to the environmental concerns. Infrastructure is not easily defined, and we do not attempt here to settle any debates on that subject. What we can do is take some properties of infrastructure and show how they relate to AI.

3.1. Infrastructure Properties

Susan Leigh Star [35] lists nine such properties (which she calls dimensions). We highlight a few here in connection with AI. First, infrastructure has the feature of embeddedness. That is, it exists within other structures, social arrangements, and technologies [35]. AI can clearly be said to have this property, as it is embedded into the technologies and structures that we interact with daily, e.g., simple tools such as Google Maps or the advertising shown to us whenever we are online. When AI is implemented, it often does not stand alone but interacts with the technologies we use and takes data from our social arrangements (and/or actions) to generate its outputs; e.g., advertisements require data from our search history and previous purchases, fed to an AI to predict what might be appealing.
Second is the property of transparency. Infrastructure is transparent to use. When we turn on a light switch, we do not see the infrastructure of wiring and power grids that enables the light to come on; we simply enjoy the convenience of light. Likewise, when we turn on Netflix, we do not see the infrastructure of cables, servers, and algorithms (often AI) that enables those recommendations to populate the home screen. Our attention is drawn to the result that infrastructure enables, not the process that leads to the result. In our many daily interactions with AI, we could be excused for not even knowing that AI was driving what was happening.
Third, infrastructure becomes visible upon breakdown. When infrastructure ceases to function properly, our attention directs itself toward that infrastructure. When the light does not turn on upon flipping the light switch, we direct our attention to the fuse box, and if that does not work, we may have to call the company that runs the infrastructure providing our electricity. Much attention has been given to AI when it functions improperly. When Google's AI-powered image labeling system incorrectly labeled people of color as gorillas, it quickly drew people's attention to the algorithm and the data that serve as the infrastructure of that system.
Fourth, infrastructure is modular. Infrastructure does not simply grow from nothing. It is put on top of other infrastructure and must take on the benefits and negatives that come with it. The original wiring of the internet was done through the existing phone lines. Only incrementally was this replaced with fiber optic cables that power the internet that we have today. This is because the infrastructure we have come to rely on has its own inertia—it has to work with the existing infrastructure because we depend on it. AI must also be placed on top of existing infrastructure. It interacts with platforms, algorithms, and the infrastructure that powers the internet. We see new phones with processors that enable AI features [36]—thereby starting the modular process that slowly replaces old infrastructure.

3.2. AI as Infrastructure

This listing of infrastructure properties provides a basis for understanding how AI can already be considered infrastructure and how this will continue in the years to come. Currently, AI is evaluated in terms of its impact on infrastructure (i.e., as being conceptually distinct from infrastructure); however, in the (near) future AI must be evaluated as infrastructure itself. It follows that any new infrastructure, because of its importance and resistance to change, should be an environmentally sustainable one. Consequently, evaluating AI requires insight into the environmental consequences of AI as infrastructure.
Part of the reason for writing this paper is that the environmental sustainability of AI is unknown, and for the reasons outlined above, this is an unacceptable situation. Furthermore, one cannot state the environmental sustainability of AI in broad strokes. Particular systems in particular contexts may be environmentally sustainable (e.g., running on green servers) while others are not. The point is that for governments and consumers to make informed decisions regarding AI-powered solutions, the environmental sustainability of those solutions themselves must be known and factored in.

4. Locked in with AI

The crux of this paper boils down to this: in conceptualizing AI as infrastructure, we can recognize the risk of lock-in, not just carbon lock-in but lock-in as it relates to all the physical needs of the infrastructure of AI.
The phenomenon of lock-in is most referenced in terms of carbon lock-in and the concern for greenhouse gas (GHG) emissions. Carbon lock-in refers to "the dynamic whereby prior decisions relating to GHG-emitting technologies, infrastructure, practices, and their supporting networks constrain future paths, making it more challenging, even impossible, to subsequently pursue more optimal paths toward low-carbon objectives" [37]. Coal power plants are an oft-cited example of carbon lock-in [37,38]. While expensive to build, coal plants are cheap to operate. This creates political, economic, and social conditions that make it difficult to replace this high carbon-emitting infrastructure.
This points to the fact that the choices we make now regarding our new AI-augmented infrastructure relate not only to the carbon emissions it will produce but also to the creation of constraints that will prevent us from changing course if that infrastructure is found to be unsustainable.
Self-driving vehicles require a large amount of energy to capture, store, and process the large amounts of data required to navigate their environments. One estimate shows that this takes roughly 2500 watts, enough to light 40 incandescent light bulbs, for just one car [39]. Multiple studies have attempted to estimate the energy savings and costs of self-driving cars (see, e.g., [40,41]). They factor in the energy consumed by the sensors capturing data, the onboard computers and processors, and data transfer, as well as the efficiency gained by automating driving. However, there is a range of variables that are not accounted for in such analyses, such as hardware production. Thus, we argue here that it is not enough to address a limited number of variables; rather, the entire system (from procurement to development to recycling) must be considered.
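To make the scale of such figures concrete, the underlying arithmetic can be sketched in a few lines of Python. This is a rough illustration only: the driving hours and grid carbon intensity below are our own assumptions, not values taken from the cited studies.

# Back-of-the-envelope sketch of what a constant 2.5 kW onboard compute
# load implies per vehicle per year. Assumed values are illustrative.
COMPUTE_POWER_KW = 2.5           # onboard sensing/compute load cited above [39]
DRIVING_HOURS_PER_YEAR = 400     # assumption: roughly 1.1 h of driving per day
GRID_KG_CO2_PER_KWH = 0.4        # assumption: moderately decarbonized grid

energy_kwh = COMPUTE_POWER_KW * DRIVING_HOURS_PER_YEAR
co2_kg = energy_kwh * GRID_KG_CO2_PER_KWH

print(f"Compute energy per car-year: {energy_kwh:.0f} kWh")
print(f"Associated emissions: {co2_kg:.0f} kg CO2e")
# Scaled to a hypothetical fleet of 10 million such vehicles:
print(f"Fleet demand: {energy_kwh * 10_000_000 / 1e9:.0f} TWh per year")

Even under these conservative assumptions, the compute load alone adds roughly a megawatt-hour per vehicle per year, before any of the hardware production costs discussed below are counted.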
In what follows we highlight some of the major processes that come with the rise of AI. This points to what must be measured and accounted for when we evaluate the cost of a particular AI application. The costs of producing the hardware that runs the algorithms, the costs of collecting and transmitting the data used and processed by AI, the computational cost of training and using the model, the disposal of the network of hardware needed by AI, and the costs of ensuring that the algorithms are aligned with ethical principles must all be factored in. This list is not supposed to be exhaustive; rather, it should point to the fact that a lot of work must be done before we even have the information necessary to make informed decisions regarding the use of a particular AI system.

4.1. Hardware Production

The hardware used in the AI lifecycle is, to say the least, non-negligible in terms of energy consumption. There are the obvious components such as the servers and their components (e.g., hard drives, GPUs, etc.) that are required to run the algorithms and store large amounts of data. However, there are also many devices used to collect data such as video cameras, lidar sensors, motion detectors, and so on. It has been shown that the manufacturing of these devices “as opposed to hardware use and energy consumption, accounts for most of the carbon output attributable to hardware systems” [42]. The rise of “edge computing” is fueling the rise in these devices.
Edge computing has been defined as "the enabling technologies allowing computation to be performed at the edge of the network, on downstream data on behalf of cloud services and upstream data on behalf of IoT services" [43]. This simply means that the processing of data happens at the edge of the network, closer to the source of the data, rather than in some central cloud server. This could, for example, mean that facial recognition processing happens on the smart CCTV camera rather than the video footage being sent to the cloud. This can save on the cost of transferring data and reduce the need for energy-intensive cloud servers; however, it increases the need for complex devices. It is estimated that the number of these devices will increase almost five-fold by 2030, to 7.8 billion [44].
Many of these modern technological devices contain rare earth elements (REEs). For example, REEs are found in "hybrid vehicles, rechargeable batteries, wind turbines, mobile phones, flat-screen display panels, compact fluorescent light bulbs, laptop computers, disk drives, catalytic converters, etc." [45]. The production of these devices has a huge impact, not only on the environment but also on vulnerable populations that suffer human rights violations [25]. The environmental impacts are not fully understood; however, it is understood that they are significant: "REE mining and refining generate significant amounts of liquid and solid wastes, with potentially deleterious effects on the environment, and it is expected to continue increasing in the future because they are irreplaceable in many technological sectors" [45].
As we increasingly depend upon AI-powered technologies, we increase the need for REEs and the processes that produce them. Currently, these processes are terrible for the environment and the people who work in the mines. This cost cannot be ignored when we tally the benefits and consequences of delegating more and more to AI.

4.2. Data Collection and Transmission

AI applications require input data to be processed, and this input can be virtually any kind of data. Security services use video feeds as input for facial recognition algorithms. Biometric sensors attached to people (e.g., smartwatches) collect and send data to healthcare AI algorithms used to detect, for example, heart problems. Smart cities use a vast network of sensors and devices (see Section 4.1 above) to collect data as inputs for the many AI algorithms promising to better our cities. It has been calculated that "Internet use has a carbon footprint ranging from 28 to 63 g CO2 equivalent per gigabyte (GB)" [46]. The power necessary to keep these sensors active, as well as the energy required to transmit their data, is non-negligible.
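As a rough illustration of what this per-gigabyte range implies, consider a single always-on camera streaming video to the cloud. The bitrate below is an assumed value for one HD stream, and the cited range itself carries large uncertainty.

# Illustrative sketch of the transmission footprint implied by the
# 28-63 g CO2e per GB range cited above [46]. The bitrate is an assumption.
LOW_G_PER_GB, HIGH_G_PER_GB = 28, 63
BITRATE_MBPS = 4                          # assumption: one HD CCTV stream
SECONDS_PER_YEAR = 365 * 24 * 3600

gb_per_year = BITRATE_MBPS / 8 * SECONDS_PER_YEAR / 1000  # MB/s -> GB/year
low_kg = gb_per_year * LOW_G_PER_GB / 1000
high_kg = gb_per_year * HIGH_G_PER_GB / 1000

print(f"Data sent per camera-year: {gb_per_year:,.0f} GB")
print(f"Transmission footprint: {low_kg:,.0f} to {high_kg:,.0f} kg CO2e")

On these assumptions, a single camera's video stream alone accounts for several hundred kilograms of CO2e per year, which is also why the edge computing approach of Section 4.1, processing footage on the device, is often framed as a savings.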
The Shift Project estimates that the digital sector was responsible for 4% of greenhouse gas emissions in 2020 [47]. This is similar to the emissions caused by pre-COVID levels of commercial aviation [48]. The Shift Project further estimates an 8% rise year over year due to several factors, including the rise of the Internet of Things (IoT) and an explosion in data traffic [47]. Increasingly relying upon AI will exacerbate these factors.
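An 8% annual rise compounds quickly. The short sketch below projects the implied trajectory of digital-sector emissions, indexed to 100 in 2020; it holds everything else fixed, which of course it will not be.

import math

# Digital-sector emissions indexed to 100 in 2020, growing 8% per year [47].
for year in (2025, 2030):
    print(year, f"index = {100 * 1.08 ** (year - 2020):.0f}")

# At 8% per year, the sector's emissions double roughly every nine years:
print(f"doubling time: {math.log(2) / math.log(1.08):.1f} years")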
Transmitting the massive amounts of video, image, pollution, temperature, biometric, radar, lidar, etc. data to cloud servers for processing by AI algorithms takes energy. By increasingly relying upon AI to run our society, we become locked into needing this vast network of data transmission. We should know more about its energy cost to responsibly evaluate whether or not certain AI applications are worth it.

4.3. AI Model Creation and Data Processing

The most often cited statistic regarding the creation of AI models is that a common large AI model emits more than 626,000 pounds of carbon dioxide, equivalent to five times the lifetime emissions of an automobile [11]. While this number may be far lower depending on the specific context, for example, when designers are simply fine-tuning a model that has already been trained, there is no question that AI requires an exponentially increasing amount of computing power [49]. Once the model is trained and the algorithm is live, inputs must be given to that model for processing. Videos, images, text, sound, etc. all need to be classified using the model in question. This has its own associated cost, and with, for example, video input, this can be a large cost. Efforts are being made to reduce this cost by, for example, feeding the model only specific frames of the video rather than the whole thing. Other methods are also being explored [50].
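The accounting behind such training estimates is, at its core, simple: energy equals hardware power draw times training time times datacenter overhead (PUE), and emissions equal energy times the carbon intensity of the grid. A minimal sketch in Python follows; all numeric inputs are illustrative assumptions rather than measured values from [11].

# Minimal sketch of the accounting used in studies such as [11] and [20]:
# emissions = power draw x training time x datacenter overhead x grid intensity.
def training_emissions_kg(gpu_count: int, gpu_power_kw: float, hours: float,
                          pue: float = 1.6,              # assumed datacenter overhead
                          grid_kg_per_kwh: float = 0.43  # assumed grid intensity
                          ) -> float:
    """Estimated CO2e in kg for one training run."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Example: a hypothetical 64-GPU run at 0.3 kW per GPU for two weeks.
print(f"{training_emissions_kg(64, 0.3, 24 * 14):,.0f} kg CO2e")

Note that this covers a single training run; hyperparameter and architecture searches multiply it by the number of runs, which is a large part of headline figures like the one cited above.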
Once the hardware is set up, the coding is done, the model is trained on the collected training data, and everything is running smoothly, there is the problem that all of this will need updating. We learned that many AI systems failed during the COVID-19 pandemic simply because our behavior changed drastically, making many ML models useless [51]. New behavior requires new models, which can mean redoing some of the processes listed above, furthering the environmental impact, in terms of carbon emissions, of these systems.

4.4. Hardware Disposal

Finally, the process of recycling and disposing of hardware must be accounted for. In 2019 the world generated "53.6 Mt [million metric tons] of e-waste…and is projected to grow to 74.7 Mt by 2030" [52]. This figure, of course, covers all types of e-waste, including appliances and personal devices, not just AI devices alone. The point is that an increased reliance upon AI will require the disposal of more e-waste. While it may seem reasonable for someone designing an AI application not to spend time thinking about this, ignoring it while setting up a society that depends more and more on AI would be a critical failure.
Not all computer hardware is used to power AI; however, AI requires an extreme amount of computational power, which requires not only more hardware but new hardware. Anything which relies upon computer hardware should factor in the cost of the disposal and recycling of that hardware. Here we only want to point out that this is also a cost of using AI. Furthermore, "there is a growing demand for specialized hardware accelerators with optimized memory hierarchies that can meet the enormous compute and memory requirements of" machine learning [53]. McKinsey, in a report, found that "AI-related semiconductors will see growth of about 18 percent annually over the next few years—five times greater than the rate for semiconductors used in non-AI applications" [54]. This shows that there is a rise in hardware specifically designed for AI.
There must be a plan for the recycling of all of this hardware—and the environmental cost associated with such recycling must be factored in when setting up a society dependent on AI and the hardware it requires.

4.5. Ethics Alignment

The rise of AI has precipitated a rise in work pointing out the many ethical issues associated with it. Methods for overcoming these risks have been proposed and implemented, and some of these methods themselves come with a cost. For example, many contemporary AI methodologies (e.g., deep neural networks) are not explainable. That is, the considerations which contribute to the output are unknown even to the designers of the algorithm [15,55,56,57]. When we delegate certain decisions to AI, this lack of explanation will not be acceptable. Delegating judicial decisions [58] or moral decisions [59] to AI requires an explanation for the outputs generated.
Various methodologies have been developed to overcome this lack of explainability. For example, one proposal suggests that we can use counterfactual explanations. That is, an explanation can be provided by finding the smallest change in the input that would yield a positive outcome [60]. Visual methods that apply to specific models have also been proposed, such as Gradient and Guided Backpropagation, which yield visual explanations showing which features of an image most contributed to an output. Other methods are more general, for instance LIME and SHAP, which aim to highlight feature importance for a particular output (for a review of such methods, see, e.g., [61]).
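To illustrate the counterfactual idea on a toy example: given a trained classifier and a rejected input, one searches for the smallest change that flips the decision. The sketch below uses a synthetic dataset and a brute-force scan over a single feature; methods such as [60] pose this properly as an optimization problem.

# Toy sketch of a counterfactual explanation: the smallest change to one
# feature that flips the model's decision. Data and model are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))              # e.g., income and debt, standardized
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # synthetic approval rule
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.2, 0.5]])        # currently denied (class 0)

# Scan increasing changes to feature 0 until the decision flips.
for delta in np.arange(0.0, 3.0, 0.05):
    candidate = applicant + np.array([[delta, 0.0]])
    if model.predict(candidate)[0] == 1:
        print(f"Counterfactual: increase feature 0 by {delta:.2f}")
        break

Even this brute-force scan runs the underlying model many times, and surrogate-based methods such as LIME fit an additional model per explanation; either way, the explanation carries its own compute cost.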
These explanation methods often require their own trained models, which exacerbates the environmental costs pointed out in the sections above. When the use of AI requires the use of more AI to overcome ethical issues, the environmental cost of the additional models must also be calculated.

5. Conclusions

It is no secret that AI requires a vast amount of energy to accomplish its tasks. Any industry uses energy to accomplish its tasks. What we have shown to be special about AI is that AI is increasingly becoming the infrastructure required for society to function. Governments, schools, cars, hospitals, banking, etc. are all becoming dependent upon this AI-powered infrastructure. This is a choice that society is making. Choices as important as these cannot be made without thinking about the environmental consequences, and little is known about the breadth of the environmental consequences associated with AI as infrastructure.
Choosing a path that leads to greater harm to the environment is unacceptable. Choosing a path in ignorance of its impact on the environment is also unacceptable. So far, we are blindly going forward with the creation of a dependence relationship on a technology whose environmental impact, based on the little we do know, is extremely high. While much work is being done to mitigate this impact, that work and its results should be known before creating this dependence. We run the risk of locking ourselves into a technological infrastructure that is energy-intensive in both its development and use, as well as energy-intensive to mitigate certain ethical concerns. This is precisely the aim of the sustainable AI domain: to investigate and make clear that there is a plethora of environmental risks associated with AI, and to argue that these risks ought to be the starting point in any ethical analysis of AI/ML.
The argument from large tech companies that most of the energy they use is renewable, and therefore has little impact on the environment, is frivolous. The use of energy, renewable or not, during a time that has been called "code red for humanity" is of great importance. The question before any AI model is created should be: is this worth the environmental cost that we will be locked into for decades? The answer will often be no.

Author Contributions

Conceptualization, S.R. and A.v.W.; writing—original draft preparation, S.R. and A.v.W.; writing—review and editing, S.R. and A.v.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The Alexander von Humboldt Foundation (The Alexander von Humboldt Stiftung) in Germany in the form of a Professorship for Aimee van Wynsberghe.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. McGrath, M. Climate Change: IPCC Report Is "Code Red for Humanity". BBC News, 2021. Available online: https://www.bbc.com/news/science-environment-58130705 (accessed on 22 March 2022).
2. IPCC. Climate Change 2022 Impacts, Adaptation and Vulnerability: Summary for Policymakers; Intergovernmental Panel on Climate Change: Geneva, Switzerland, 2022.
3. van Wynsberghe, A. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI Ethics 2021, 1, 213–218.
4. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Fuso Nerini, F. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233.
5. Tomašev, N.; Cornebise, J.; Hutter, F.; Mohamed, S.; Picciariello, A.; Connelly, B.; Belgrave, D.C.M.; Ezer, D.; van der Haert, F.C.; Mugisha, F.; et al. AI for Social Good: Unlocking the Opportunity for Positive Impact. Nat. Commun. 2020, 11, 2468.
6. Sætra, H.S. AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System. Sustainability 2021, 13, 1738.
7. Nishant, R.; Kennedy, M.; Corbett, J. Artificial Intelligence for Sustainability: Challenges, Opportunities, and a Research Agenda. Int. J. Inf. Manag. 2020, 53, 102104.
8. Lahsen, M. Should AI Be Designed to Save Us From Ourselves?: Artificial Intelligence for Sustainability. IEEE Technol. Soc. Mag. 2020, 39, 60–67.
9. Dauvergne, P. AI in the Wild: Sustainability in the Age of Artificial Intelligence; MIT Press: Cambridge, MA, USA, 2020; ISBN 978-0-262-53933-3.
10. Tsolakis, N.; Zissis, D.; Papaefthimiou, S.; Korfiatis, N. Towards AI Driven Environmental Sustainability: An Application of Automated Logistics in Container Port Terminals. Int. J. Prod. Res. 2021, 1–21.
11. Strubell, E.; Ganesh, A.; McCallum, A. Energy and Policy Considerations for Deep Learning in NLP. arXiv 2019, arXiv:1906.02243.
12. Macaulay, T. Here's How AI Determines What You See on the Facebook News Feed. Available online: https://thenextweb.com/news/heres-how-ai-determines-what-you-see-on-facebook-news (accessed on 22 March 2022).
13. Facebook. How Does Facebook Use Machine Learning to Deliver Ads? Available online: https://www.facebook.com/business/news/good-questions-real-answers-how-does-facebook-use-machine-learning-to-deliver-ads (accessed on 22 March 2022).
14. Seto, K.C.; Davis, S.J.; Mitchell, R.B.; Stokes, E.C.; Unruh, G.; Ürge-Vorsatz, D. Carbon Lock-In: Types, Causes, and Policy Implications. Annu. Rev. Environ. Resour. 2016, 41, 425–452.
15. Robbins, S. AI and the Path to Envelopment: Knowledge as a First Step towards the Responsible Regulation and Use of AI-Powered Machines. AI Soc. 2020, 35, 391–400.
16. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI; Social Science Research Network: Rochester, NY, USA, 2020.
17. High-Level Expert Group on AI. Ethics Guidelines for Trustworthy AI. Available online: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (accessed on 15 January 2020).
18. AI at Google: Our Principles. Available online: https://www.blog.google/technology/ai/ai-principles/ (accessed on 14 January 2019).
19. Nadella, S. Microsoft's CEO Explores How Humans and A.I. Can Solve Society's Challenges—Together. Available online: https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html (accessed on 14 January 2019).
20. Henderson, P.; Hu, J.; Romoff, J.; Brunskill, E.; Jurafsky, D.; Pineau, J. Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. J. Mach. Learn. Res. 2020, 21, 1–43.
21. Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Commun. ACM 2020, 63, 54–63.
22. Crawford, K. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021; ISBN 978-0-300-20957-0.
23. Vidal, J. Toxic "e-Waste" Dumped in Poor Nations, Says United Nations. The Guardian, 2013. Available online: https://www.theguardian.com/global-development/2013/dec/14/toxic-ewaste-illegal-dumping-developing-countries (accessed on 22 March 2022).
24. Hasselbalch, G. Data Ethics of Power: A Human Approach in the Big Data and AI Era; Edward Elgar Publishing: Cheltenham, UK, 2021; ISBN 978-1-80220-311-0.
25. Amnesty International. "This Is What We Die for": Human Rights Abuses in the Democratic Republic of the Congo Power the Global Trade in Cobalt; Amnesty International: London, UK, 2016.
26. Searcey, D.; Lipton, E.; Gilbertson, A. Hunt for the 'Blood Diamond of Batteries' Impedes Green Energy Push. New York Times, 29 November 2021.
27. Human Rights Watch. Precious Metal, Cheap Labor: Child Labor and Corporate Responsibility in Ghana's Artisanal Gold Mines; Human Rights Watch: New York, NY, USA, 2015.
28. DeepMind. DeepMind AI Reduces Google Data Centre Cooling Bill by 40%. Available online: https://www.deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-by-40 (accessed on 13 April 2022).
29. Pierre, R. Detecting Financial Fraud Using Machine Learning: Winning the War Against Imbalanced Data. Available online: https://towardsdatascience.com/detecting-financial-fraud-using-machine-learning-three-ways-of-winning-the-war-against-imbalanced-a03f8815cce9 (accessed on 30 June 2019).
30. West, J.; Bhattacharya, M. Intelligent Financial Fraud Detection: A Comprehensive Review. Comput. Secur. 2016, 57, 47–66.
31. Karim, A.; Azam, S.; Shanmugam, B.; Kannoorpatti, K.; Alazab, M. A Comprehensive Survey for Intelligent Spam Email Detection. IEEE Access 2019, 7, 168261–168295.
32. Basit, A.; Zafar, M.; Liu, X.; Javed, A.R.; Jalil, Z.; Kifayat, K. A Comprehensive Survey of AI-Enabled Phishing Attacks Detection Techniques. Telecommun. Syst. 2021, 76, 139–154.
33. Srivastava, M.D.; Sachin, S.; Sharma, S.; Tyagi, U. Smart Traffic Control System Using PLC and SCADA. Int. J. Innov. Res. Sci. Eng. Technol. 2012, 1, 169–172.
34. Fleetwood, J. Public Health, Ethics, and Autonomous Vehicles. Am. J. Public Health 2017, 107, 532–537.
35. Star, S.L. The Ethnography of Infrastructure. Am. Behav. Sci. 1999, 43, 377–391.
36. Molloy, D. Google's Pixel 6 Processor Brings AI Photo Features. BBC News, 2021. Available online: https://www.bbc.com/news/technology-58955304 (accessed on 22 March 2022).
37. Erickson, P.; Kartha, S.; Lazarus, M.; Tempest, K. Assessing Carbon Lock-In. Environ. Res. Lett. 2015, 10, 084023.
38. OECD. Energy, Climate Change and Environment: 2014 Insights; International Energy Agency: Paris, France, 2014.
39. Stewart, J. Self-Driving Cars Use Crazy Amounts of Power, and It's Becoming a Problem. Wired, 6 February 2018. Available online: https://www.wired.com/story/self-driving-cars-power-consumption-nvidia-chip/ (accessed on 21 October 2021).
40. Lee, J.; Kockelman, K.M. Energy Implications of Self-Driving Vehicles. In Proceedings of the 98th Annual Meeting of the Transportation Research Board, Washington, DC, USA, 13–17 January 2019. Available online: https://www.caee.utexas.edu/prof/Kockelman/public_html/TRB19EnergyAndEmissions.pdf (accessed on 22 March 2022).
41. Liu, Z.; Tan, H.; Kuang, X.; Hao, H.; Zhao, F. The Negative Impact of Vehicular Intelligence on Energy Consumption. J. Adv. Transp. 2019, 2019, e1521928.
42. Gupta, U.; Kim, Y.G.; Lee, S.; Tse, J.; Lee, H.-H.S.; Wei, G.-Y.; Brooks, D.; Wu, C.-J. Chasing Carbon: The Elusive Environmental Footprint of Computing. In Proceedings of the 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Korea, 27 February–3 March 2021; pp. 854–867.
43. Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge Computing: Vision and Challenges. IEEE Internet Things J. 2016, 3, 637–646.
44. Transforma Insights. Edge Computing Set for Rapid Growth, across Both IoT Devices and 'Campus Edge'. Available online: https://transformainsights.com/edge-computing-rapid-growth-iot (accessed on 25 February 2022).
45. Edahbi, M.; Plante, B.; Benzaazoua, M. Environmental Challenges and Identification of the Knowledge Gaps Associated with REE Mine Wastes Management. J. Clean. Prod. 2019, 212, 1232–1241.
46. Obringer, R.; Rachunok, B.; Maia-Silva, D.; Arbabzadeh, M.; Nateghi, R.; Madani, K. The Overlooked Environmental Footprint of Increasing Internet Use. Resour. Conserv. Recycl. 2021, 167, 105389.
47. The Shift Project. Lean ICT: Towards Digital Sobriety; The Shift Project: Paris, France, 2019.
48. Griffiths, S. Why Your Internet Habits Are Not as Clean as You Think. Available online: https://www.bbc.com/future/article/20200305-why-your-internet-habits-are-not-as-clean-as-you-think (accessed on 25 February 2022).
49. Thompson, N.C.; Greenewald, K.; Lee, K.; Manso, G.F. The Computational Limits of Deep Learning. arXiv 2020, arXiv:2007.05558.
50. Martineau, K. Shrinking Deep Learning's Carbon Footprint. Available online: https://news.mit.edu/2020/shrinking-deep-learning-carbon-footprint-0807 (accessed on 28 February 2022).
51. Heaven, W.D. Our Weird Behavior during the Pandemic Is Messing with AI Models. Available online: https://www.technologyreview.com/2020/05/11/1001563/covid-pandemic-broken-ai-machine-learning-amazon-retail-fraud-humans-in-the-loop/ (accessed on 28 February 2022).
52. Forti, V.; Baldé, C.P.; Kuehr, R.; Bel, G. The Global E-Waste Monitor 2020: Quantities, Flows and the Circular Economy Potential; United Nations University (UNU)/United Nations Institute for Training and Research (UNITAR)—Co-Hosted SCYCLE Programme: Bonn, Germany; International Telecommunication Union (ITU): Geneva, Switzerland; International Solid Waste Association (ISWA): Rotterdam, The Netherlands, 2020.
53. Capra, M.; Bussolino, B.; Marchisio, A.; Shafique, M.; Masera, G.; Martina, M. An Updated Survey of Efficient Hardware Architectures for Accelerating Deep Convolutional Neural Networks. Future Internet 2020, 12, 113.
54. Batra, G.; Jacobson, Z.; Madhav, S.; Queirolo, A.; Santhanam, N. Artificial-Intelligence Hardware: New Opportunities for Semiconductor Companies; McKinsey & Company: Hong Kong, China, 2018.
55. Robbins, S. A Misdirected Principle with a Catch: Explicability for AI. Minds Mach. 2019, 29, 495–514.
56. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707.
57. Robbins, S.; Henschke, A. The Value of Transparency: Bulk Data and Authoritarianism. Surveill. Soc. 2017, 15, 582–589.
58. McKay, C. Predicting Risk in Criminal Procedure: Actuarial Tools, Algorithms, AI and Judicial Decision-Making. Curr. Issues Crim. Justice 2020, 32, 22–39.
59. van Wynsberghe, A.; Robbins, S. Critiquing the Reasons for Making Artificial Moral Agents. Sci. Eng. Ethics 2019, 25, 719–735.
60. Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. J. Law Technol. 2017, 31, 841.
61. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2021, 23, 18.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
