Article

Beyond Metrics: Navigating AI through Sustainable Paradigms

1 Macadamia Sustainability Research Hub, Galilee 1522500, Israel
2 Department of Industrial Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(24), 16789; https://doi.org/10.3390/su152416789
Submission received: 27 September 2023 / Revised: 3 December 2023 / Accepted: 6 December 2023 / Published: 13 December 2023

Abstract

This manuscript presents an innovative approach to the concept of sustainability in the realm of Artificial Intelligence (AI), recognizing that sustainability is a dynamic vision characterized by harmony and balance. We argue that achieving sustainability in AI systems requires moving beyond rigid adherence to protocols and compliance checklists, which tend to simplify sustainability into static criteria. Instead, sustainable AI should reflect the balance and adaptability intrinsic to the broader vision of sustainability. In crafting this vision, we draw upon the principles of complex systems theory, the wisdom of philosophical doctrines, and the insights of ecology, weaving them into a comprehensive paradigm.

1. Introduction

In September 2015, a seminal gathering at the United Nations (UN) Headquarters in New York culminated in the declaration of a groundbreaking set of global objectives known as ‘Agenda 2030’ [1]. This paradigmatic shift in global policymaking aims to augment the quality of human life through a trifold focus: economic vitality, social equity, and environmental sustainability. Within the broader framework of the Sustainable Development Goals (SDGs), Artificial Intelligence (AI) and computational technologies are increasingly conceived as catalyzing agents that can powerfully advance these objectives. (Although we use the term ‘AI’ throughout this article, our primary focus is on the subset of AI that employs machine learning and deep learning techniques.)
In alignment with the SDGs, which span multiple domains, including environmental, economic, and social dimensions, as illustrated by Figure 1, AI has the potential to serve many purposes conducive to achieving these goals. Specifically, AI can contribute to Good Health and Well-Being (SDG 3), Quality Education (SDG 4), Clean Water and Sanitation (SDG 6), and Responsible Consumption and Production (SDG 12). AI applications have been used to address the challenges of climate change [2], to develop circular economies and construct intelligent urban infrastructure optimized for resource utilization [3], and to implement multiple conservation strategies through AI analysis. Among these, wildlife corridors have been identified as an effective countermeasure against habitat fragmentation [4]. Several studies have also tackled additional issues, such as integrating and analyzing data from diverse sensors at various scales, quantifying and depicting uncertainty in species forecasting, and modeling migration [5,6,7].
However, sustainability cannot be achieved through guidelines and bureaucracy alone. Rather, there is a critical need for a paradigm shift in individual and collective values. As articulated by Bailey [8], the prevalent focus on immediate self-maximization must evolve toward a more balanced approach that includes individual restraint and a long-term sense of responsibility. This means that rather than exploiting resources for immediate gains, there should be an ethical commitment to preserving them for future generations. By adjusting our values in this manner, we can work toward a more sustainable and equitable system that takes into account the long-term impact of our actions on these diverse but interconnected resources.
In this work, we aim to examine the nexus between AI and sustainability and to propose a framework for AI that aligns more cohesively with sustainability.
Research Question: What represents the most fitting paradigm for AI developers to align with sustainability?
This question is not confined to a narrow appraisal of AI’s immediate effects on the SDGs [9]. The quest to interpret what constitutes sustainable AI is challenging given the diversity of perspectives on the utility of sustainability in resource management, stemming chiefly from the absence of a universally accepted definition [10]. Originating from an inquiry into the delicate balance between nature and society, sustainability encapsulates the aspiration to foster a future characterized by well-being and opportunities for development. Sustainability is sometimes described as a concept that encompasses the responsible management and conservation of shared resources with the aim of fulfilling both current and future human needs [11]. This definition has its roots in the 1987 Brundtland Report by the World Commission on Environment and Development, which emphasized the need for equitable and lasting resource utilization.
In our pursuit of sustainable AI, it is becoming increasingly apparent that the solutions we seek may not be fully realized within the confines of our current frameworks and practices, which are often at odds with sustainable principles [12]. Remember that sustainability has not been achieved yet; it is a stated aspiration of governments and societies, a vision [13,14]. Therefore, it is hardly surprising that a universal definition remains elusive. This quest requires us to look beyond the status quo and to explore ‘other worlds’ for this aspiration. These could be other traditions with their time-tested philosophy, the natural world with its inherent systems of balance and renewal, or even an envisioned scientific future that challenges our accepted norms (Kemp and Martens [13] speak about sustainability science as a new form of science). We must, therefore, cast our intellectual nets wider to these ‘other worlds’ that hold alternative modes of coexistence and interaction with our environment. By doing so, we are closer to unearthing innovative approaches and untapped knowledge that could guide the creation of truly sustainable AI systems.
The proposed thesis diverges from the prevailing focus on controlling either the energy consumption and CO2 emissions associated with the deployment of AI [15,16,17] or natural processes through AI [18]. Since sustainability is not a static target but a vision seeking harmony between human civilization and the planet’s ecosystems, our approach necessitates a multidisciplinary understanding that transcends mere technicalities and quantifiable metrics. Therefore, our inquiry begins with exploring frameworks that inherently reflect this harmonious vision. Our proposed paradigm is grounded in the tenets of complex systems theory, philosophical frameworks, and ecological principles. In positing these frameworks, we present a novel alternative for the field of AI—one that challenges the status quo and proposes a paradigm that reflects the vision of harmony.
This paper is organized in the following way: Section 2 and Section 3 critically review the main trajectories of ongoing efforts at the intersection of AI and sustainability. Section 2 begins with an overview of AI and machine learning and delves into the effort and cost associated with training general AI models. Understanding the effort involved in developing AI systems is an essential starting point for our discussion. Section 3 illustrates the paradox of AI and sustainability by raising issues of justice and environmental costs intrinsic to these technologies, even when trying to address the SDGs. It implies that the vision of sustainability extends beyond what can be captured through rigid guidelines and bureaucratic measures. In Section 4, we introduce philosophic and ecological frameworks that reflect the harmonious nature of sustainability; these building blocks comprise the proposed paradigm. Section 5 takes the first steps toward unifying the proposed frameworks and proposes several interpretations for their implementation in the field of AI. Finally, Section 6 concludes this work, summarizing our findings and reflecting on the implications for AI within the broader context of sustainability.

2. The Effort and Cost of Training AI Models

Artificial Intelligence is the scientific discipline that seeks to develop intelligent machines, while machine learning is a major subset of AI that involves the use of statistical techniques to improve computer programs through ‘learning’ from experience rather than being pre-programmed by explicit instructions. The principle that experience is provided to a machine learning system through data representative of the real world is pivotal to the functionality and reliability of the model’s output.
The core objective of most machine learning algorithms is to generalize from the training data to unseen situations in a process known as inductive learning. This involves constructing a model—either implicitly or explicitly—that captures the underlying patterns and structures within the data. Through the application of various methods such as regression, classification, or clustering, these algorithms infer rules and relationships that enable them to make predictions or decisions about new data points. The ability of a model to generalize well is pivotal; it indicates that the model has not just “memorized” the training data (overfitting) but has learned a simplified representation of the core trends present within the data that are applicable to the broader environment it represents. This generalization is a statistical process wherein the model applies what it has learned from a sample to the entire population, making inductive learning a cornerstone of machine learning’s predictive capacities.
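As a minimal sketch of this train-then-generalize workflow (using scikit-learn on a synthetic dataset; the data and parameter choices are illustrative assumptions rather than details from this article), a model is fit on a training split and then evaluated on held-out data to check that it has generalized rather than memorized:

```python
# Minimal, illustrative sketch of inductive learning and generalization
# (synthetic data and parameter choices are assumptions, not from the article).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Labeled examples standing in for "experience" drawn from the environment.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out unseen data to estimate generalization rather than memorization.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))
```

A large gap between the two accuracies would indicate memorization of the training sample rather than a representation that transfers to the broader population.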
Deep learning represents a special case of machine learning, characterized by its capacity to process and learn from vast amounts of data through layers of neural networks. Unlike traditional machine learning algorithms, which may require manual feature selection, deep learning architectures autonomously identify features. These networks consist of multiple layers of interconnected nodes, or neurons, that simulate the decision-making process. As data traverses through these layers, each one progressively extracts higher-level features, with deeper layers capturing more complex and abstract representations. This hierarchical learning approach enables deep learning models to perform exceptionally well in tasks such as image and speech recognition, where the nuances of patterns are paramount. Furthermore, deep learning models are particularly adept at handling non-linear and high-dimensional data, making them a potent tool for tackling a range of complex problems that were previously insurmountable for machines. With their growing prevalence, deep learning models are shaping up to be a cornerstone of AI, driving advancements across various fields with their unparalleled pattern recognition capabilities.
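To make the layered structure concrete, the brief sketch below (in PyTorch; the layer sizes and depth are illustrative assumptions, not a recommended architecture) stacks a few fully connected layers, each transforming the previous layer’s output into a higher-level representation:

```python
# Illustrative sketch of a small feed-forward deep network
# (layer sizes are assumptions chosen only for demonstration).
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # first hidden layer: lower-level features
    nn.Linear(128, 64), nn.ReLU(),   # deeper layer: more abstract representations
    nn.Linear(64, 10),               # output layer: e.g., scores for 10 classes
)

x = torch.randn(32, 64)              # a batch of 32 examples with 64 features each
print(model(x).shape)                # -> torch.Size([32, 10])
```

Real deep learning systems differ mainly in scale and in specialized layer types, but the principle of data flowing through successive transformations is the same.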
To learn more about the capacities of machine learning algorithms, please refer to [19].
AI stands apart from other technologies due to its unparalleled complexity, adaptability, and potential for autonomy. While various technologies pose environmental and societal concerns, AI’s distinctiveness lies in its ability to learn from data, make decisions, and perform tasks that traditionally required human intelligence. This level of sophistication allows AI systems to not only execute predefined processes but also to evolve their performance over time through continuous learning, often surpassing human capabilities in specific domains.
Furthermore, AI’s influence extends beyond individual tasks to systemic impacts. Its integrative nature means it can optimize entire networks, from energy grids to transportation systems, with significant efficiency gains. However, these optimizations come with the caveat of increased energy demands for processing massive amounts of data and potential unintended consequences arising from complex AI behaviors; this results in a larger carbon footprint associated with AI [20]. The societal implications are profound as well; AI can inadvertently perpetuate biases present in training data, leading to ethical dilemmas and governance challenges unseen in other technologies [21].
The application of machine learning to sustainability challenges introduces a layer of complexity that is both promising and problematic. On the one hand, these algorithms have the potential to navigate the intricacies of multi-variable, non-linear, and constrained problems often encountered in sustainability domains such as resource allocation, supply chain management, and ecological conservation [22,23,24]. However, the very features that make these algorithms powerful—namely, their ability to consider multiple dimensions simultaneously—also pose challenges.
Larger machine learning models (i.e., those with more layers and parameters) tend to fit the data better. However, this improvement comes with an increasing financial and environmental cost [25]. The rapid escalation in model size often outpaces the corresponding gains in performance, signaling that the high costs associated with expanding machine learning models are likely to persist. Increasing concerns surrounding the energy consumption associated with computing suggest that energy costs may soon become a predominant component in the total cost of ownership [25]. While energy management is a key issue for data center operations, capital expenditures, operational outlays, and environmental ramifications must also be considered [26].
The computational cost for training a machine learning model is determined by a variety of factors, including the complexity of the model architecture, the optimization algorithm employed and its loss function, and the volume and intricacy of the dataset being used. In artificial neural networks (ANN), a subset of machine learning algorithms (as illustrated in Figure 2), a loss function is a mathematical formulation employed to measure the disparity between the network’s predictions and the actual target values in the training data. The loss function quantifies how well the model approximates the underlying function that maps inputs to outputs. During the optimization process, the goal is to minimize the value of the loss function, adjusting the network’s weights to achieve this objective. This fine-tuning process necessitates a substantial number of computations, thereby consuming significant processing time [27].
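As an illustrative (not article-specific) example, a common loss for regression tasks is the mean squared error over N training pairs, which the optimizer drives toward a minimum by adjusting the network’s weights θ:

\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \bigl( y_i - f(x_i; \theta) \bigr)^2, \qquad \theta^{*} = \arg\min_{\theta} \mathcal{L}(\theta),

where f(x_i; θ) denotes the network’s prediction for input x_i and y_i the corresponding target value.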
In traditional machine learning models, the complexity was principally determined by the size of the training set [28]. However, the rise of Deep Neural Networks (DNNs), a specialized type of ANN designed to model and understand complex patterns and representations through multiple layers between the input and output layers, has shifted the focus toward architectural intricacies as a major determinant of computational complexity. The multiple hidden layers of DNNs often include specialized types such as convolutional [29] and recurrent layers [30], thereby enhancing their representational power but also adding computational challenges.
Optimization algorithms used in DNN training, like Stochastic Gradient Descent (SGD), have to navigate a considerably larger parameter space as they iteratively minimize the loss function while updating potentially millions or even billions of parameters. This results in a more computationally intensive process, as these algorithms often require more iterations to converge to an optimal solution. Each of these iterations involves the calculation and updating of a substantially larger set of parameters compared to traditional models. In DNNs, the forward and backward passes during the training phase become more resource-intensive due to the increased number of neurons and their interconnections [31].
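As a minimal sketch of this iterative weight-update process (mini-batch gradient descent on a toy linear model with synthetic data; the learning rate, batch size, and number of epochs are illustrative assumptions), each pass computes a gradient of the loss and nudges the parameters accordingly:

```python
# Illustrative mini-batch SGD on a toy linear regression problem
# (synthetic data; learning rate, batch size, and epochs are assumptions).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                   # 1000 samples, 5 features
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)     # noisy targets

w = np.zeros(5)                                  # parameters to be learned
lr, batch_size = 0.05, 32

for epoch in range(20):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        residual = X[batch] @ w - y[batch]
        grad = 2 * X[batch].T @ residual / len(batch)   # gradient of the MSE loss
        w -= lr * grad                                  # parameter update

print("learned weights:", np.round(w, 2))
```

In a DNN, the same update loop runs over millions or billions of parameters, with each gradient obtained via the backward pass, which is why training cost scales so steeply with model size.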
Moreover, the intricate nature of DNNs often necessitates the application of advanced regularization techniques to mitigate overfitting, further adding to the computational overhead. Issues such as vanishing gradients [32] also become more prevalent as the network architecture deepens, requiring additional algorithmic or architectural adjustments. Additionally, the volume and complexity of the dataset contribute to the computational load; larger datasets usually require extended periods of training. Consequently, while DNNs offer improved performance and capabilities, this comes at the cost of significantly higher computational complexity compared to traditional machine learning models.
The energy consumption and resulting carbon emissions of Large Language Models (LLMs), particularly neural transformer models, are surging. According to recent estimates, training a large neural transformer model could consume as much as 280,000 kg of CO2 [33]. Although it is problematic to assess the emissions of these models [16], this figure is alarming not only in an absolute sense but also when considered in the context of individual or collective scientific contributions. To elucidate, a transatlantic flight from London to New York emits about 900 kg of CO2 per passenger, a sobering figure that is often cited in discussions around individual and corporate carbon footprints. The training of one large neural model therefore equates to the emissions of over 300 passengers on such transatlantic flights. Thus, many Natural Language Processing (NLP) endeavors carry a substantial greenhouse gas footprint.
An examination in 2018 by the OpenAI Lab [34], which aims to ensure that artificial general intelligence serves the collective good, disclosed an astonishing trend: the computational resources devoted to training expansive AI models have been doubling approximately every 3.4 months since 2012, amounting to roughly a 300,000x increase in computational capacity over that period [15]. This rate diverges dramatically from Moore’s Law, which predicts a doubling only every 18 months. This trend is illustrated in Figure 3.

Complexity under the Lens of Sustainability

It is noteworthy that the computational complexity of most machine learning models tends to exceed linear proportions relative to the number of samples or features. This escalation in complexity is a significant concern, especially when dealing with large-scale datasets. One effective approach to mitigating this issue is through dimensionality partitioning in a mutually exclusive manner. By breaking down the problem space into smaller, mutually exclusive subsets, one can effectively reduce the computational burden.
For instance, techniques such as divide-and-conquer algorithms [35] and ensemble methods [36] can be utilized to partition the problem into more manageable sub-problems. These sub-problems can be solved independently, perhaps even in parallel, and their solutions can be recombined to form the final solution. This methodological shift does not just distribute computational load but also significantly decreases the algorithm’s computational complexity, making it more scalable and efficient. Such partitioning techniques thus offer a promising pathway for dealing with computationally intensive learning algorithms, aiding in both scalability and the effective utilization of computational resources.
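As a simple, hedged illustration of such partitioning (a bagging-style split into disjoint subsets with a majority vote; the dataset and model choices are assumptions for demonstration), each sub-model is trained on its own slice of the data, and the slices could be processed in parallel:

```python
# Illustrative partition-and-combine sketch: train sub-models on disjoint data
# slices and recombine by majority vote (dataset and model choice are assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)

n_parts = 5
parts = np.array_split(np.random.default_rng(0).permutation(len(X)), n_parts)

# Each sub-model sees only its own partition; these fits are independent.
models = [DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]) for idx in parts]

# Recombine the sub-models' predictions by majority vote.
votes = np.stack([m.predict(X) for m in models])
combined = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy on the full set:", (combined == y).mean())
```

Because each sub-problem involves only a fraction of the samples, the per-model training cost drops, which is precisely where the scalability benefit of divide-and-conquer and ensemble schemes comes from.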
The question of scalability in the application of machine learning algorithms to vast datasets is both imperative and complex. On the one hand, organizations across various domains are accumulating large repositories of data—ranging from customer behavior to scientific observations—that could be instrumental for extracting valuable knowledge. On the other hand, not all machine learning algorithms can efficiently process such colossal datasets. The motivation for tackling this scalability issue is multifaceted. One salient reason is the direct correlation between the size of the training set and the accuracy of the learned models. Smaller datasets often lead to overfitting, particularly in instances where the feature space is large or the program needs to learn “small disjuncts” or rare cases that are essential for high accuracy [37]. The problem is exacerbated when noise is present in the data, making it challenging to discern between genuine special cases and outliers.
Another facet of the scalability problem pertains to the computational complexity of learning algorithms. Algorithms with greater-than-linear complexity can become unmanageable as dataset sizes increase [28]. This issue is not just about predictive modeling but also extends to data-mining applications focused on the discovery of previously unknown knowledge. However, achieving this without being overwhelmed by spurious small disjuncts necessitates large enough datasets to facilitate confident generalization. Therefore, there is an increasing need for the development of fast, scalable algorithms that can efficiently handle large datasets while preserving or even enhancing the accuracy and interpretability of the models [38].
Various methods for data representation employ sparse feature representations, especially in text-mining tasks [39]. For example, in the traditional text representation method based on term frequencies (TF), each feature corresponds to the frequency of a specific n-gram. This straightforward technique has proven highly efficacious in traditional text classification tasks, including sentiment analysis [40]. However, this methodology’s high-dimensional feature space can be susceptible to issues related to the “curse of dimensionality”. As the number of dimensions (i.e., variables or constraints) increases, the computational cost of finding the optimal solution grows exponentially, and the sparse, high-dimensional representation also leaves the model vulnerable to overfitting [41]. This leads to scenarios where optimization tasks become computationally infeasible, thus limiting the algorithms’ applicability in real-world, large-scale sustainability challenges.
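As a brief, hedged illustration of how quickly such term-frequency representations grow (the toy corpus below is an assumption made for demonstration, not material from the article), even a few short documents already yield a wide, sparse feature matrix:

```python
# Illustrative term-frequency (n-gram) representation of a toy corpus
# (the corpus is an assumption made only for demonstration).
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "solar power reduces emissions",
    "wind power and solar power",
    "emissions from data centers are rising",
]

# Unigrams and bigrams as features; each feature counts one n-gram's frequency.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(corpus)

print("documents x features:", X.shape)                        # sparse, wide matrix
print("example features:", vectorizer.get_feature_names_out()[:8])
```

On realistic corpora the feature count can reach the hundreds of thousands, which is where the curse of dimensionality and the attendant overfitting risk become pressing.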

3. The Paradox of Sustainable AI: The Ethical and Environmental Costs

Building on our review of the effort, cost, and resources required for developing any AI system, as discussed in the previous section, this section reviews AI systems that are designated to address challenges related to sustainability. This presents a paradox that underscores a disquieting irony: the AI technologies that are being leveraged to achieve the SDGs and protect our ecosystems might also, inadvertently, contribute to their degradation [42,43]; thus, technologies are labeled sustainable even when such an assessment is not warranted.

3.1. The Ethical and Environmental Costs

AI models have shown promise in monitoring ecological systems, simulating the impacts of various interventions, and optimizing resource allocation for sustainability. Intriguingly, despite the substantial computational and environmental costs associated with deploying expansive AI models, marginalized communities often benefit least from such technological advancements [44]. These communities often carry the burden of climate-related hazards [45]. Various reports corroborate that economically disadvantaged families frequently reside in areas inherently vulnerable to an array of climate-induced perils, including but not limited to mudslides, extreme heat events, water contamination, and flooding. Such conditions not only amplify preexisting social inequalities but also render these communities particularly susceptible due to their limited capacity for adaptation. In this context, note that reducing inequality is an important aspect of the SDGs, particularly SDG 10 (reduced inequalities).
To illustrate the impact of climate change on marginal populations, it is reported that as of 2000, 11% of the global population resided in low-elevation coastal zones, many of whom were economically disadvantaged and confined to flood-prone areas [46]. This pattern is particularly pronounced in regions like South and East Asia, as well as Latin America and the Caribbean. In addition, about 29% of the global population lives in arid, semi-arid, or dry sub-humid zones, facing intensified challenges due to climate change. These structural inequalities are further nuanced by differing degrees of vulnerability among social groups. For example, in Mumbai, India, the poor spend a higher proportion of their income on flood-related home repairs than wealthier citizens. Likewise, the devastation wrought by Hurricane Katrina in New Orleans in 2005 revealed how intersecting inequalities—defined by income, race, and education—contributed to the increased vulnerability of low-income African Americans, making recovery significantly more challenging for this demographic.
The United Nations Agenda 2030 [1], with its explicit calls for “urgent action to combat climate change and its impacts” as well as for “…transformative steps that are urgently needed to shift the world onto a sustainable and resilient path,” is fostering a sense of acute urgency in the global community [47]. This urgency is increasingly turning attention toward AI as a presumed panacea for our multifaceted environmental and social crises. The rising desperation to meet these goals within a dwindling timeframe has the potential to position AI as a sort of “magic wand” to instantaneously resolve issues that are deeply rooted in complex systems [48]. While AI indeed offers promising avenues for accelerating progress toward sustainability, this perception risks simplifying the profound ethical and logistical challenges involved in deploying AI solutions on a large scale. Caution is therefore essential to ensure that this sense of urgency does not compromise the careful, ethical application of AI in the pursuit of global sustainability.
This paradox is further illuminated by our enduring human ambition to exert control over nature—a quest rooted in ancient instincts and compounded by modern hubris. Within the sphere of AI, this pursuit is evident in attempts to not only regulate the intricacies of natural ecosystems [49] but also to govern human behavior and social dynamics [50]. Particularly in regions where democratic oversight, ethical scrutiny, and transparency are scant, AI can inadvertently become an instrument for perpetuating nationalism, amplifying biases, and infringing upon human rights [51]. Other disquieting applications include the development of “citizen scores,” which aim to control social conduct through AI-driven analytics [52], and the CalGang database, employed for forecasting gang-related violent crime, which has been criticized for its disproportionate bias and pervasive inaccuracies [53].
The concept of exerting greater control over natural and human systems through scientific and technological interventions is contentious when viewed through the lens of the Anthropocene—a term reflecting humanity’s significant impact on the planet [54]. This raises a pivotal question: does amplifying our influence over Earth’s processes truly correspond with the vision of harmony, or does it contribute to the very challenges we seek to overcome? The notion of increasing our agency with regard to our ecosystems and societies demands scrutiny to discern whether it is a step towards remediation or a continuation of the problems attributed to the Anthropocene era.

3.2. Metrics for Sustainable AI

There is a growing call towards the development and incorporation of new evaluative metrics [16,55,56]. These metrics should capture a comprehensive range of factors, including but not limited to a model’s performance, computational efficiency, and environmental impact. By incorporating these dimensions into a unified evaluation framework, researchers and policymakers can be provided with the necessary tools to make more responsible and informed decisions about when and how to invest in new model training or updates.
Gauging the energy expenditure associated with DNNs proves to be a more intricate task compared to evaluating other metrics like model size (storage cost) and computational operations (throughput). This complexity arises largely because a considerable fraction of the energy usage stems from data transfer, a variable not easily quantifiable solely from the DNN’s architecture. Yang and colleagues [33] introduce a methodology that considers the DNN’s structural design among other parameters to estimate its energy consumption.
The Pareto frontier can be used to define the optimal trade-off between cost and prediction performance, delineating a boundary in the objective space where any further improvement in one metric would necessitate a compromise in the other. This frontier serves as a valuable guide for decision-makers since it allows them to balance the computational cost of an algorithm against its predictive accuracy, facilitating more informed and sustainable choices in model selection and deployment. For example, Ofek and colleagues [28] examine points along the Pareto frontier to identify algorithms that offer the most favorable trade-offs, thereby optimizing the allocation of computational resources without sacrificing the quality of predictions.
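A minimal sketch of this idea is given below (with made-up cost/accuracy pairs; the candidate models and their numbers are assumptions used purely for illustration). A model lies on the Pareto frontier if no other candidate is simultaneously cheaper and at least as accurate:

```python
# Illustrative Pareto-frontier selection over (cost, accuracy) pairs.
# The candidate models and their numbers are assumptions for illustration only.
candidates = {
    "model_a": (1.0, 0.81),   # (relative training cost, predictive accuracy)
    "model_b": (2.5, 0.86),
    "model_c": (2.0, 0.84),
    "model_d": (6.0, 0.87),
    "model_e": (5.0, 0.85),
}

def pareto_frontier(models):
    """Keep a model only if no other model is both cheaper and at least as accurate."""
    frontier = []
    for name, (cost, acc) in models.items():
        dominated = any(
            other != name and o_cost <= cost and o_acc >= acc
            for other, (o_cost, o_acc) in models.items()
        )
        if not dominated:
            frontier.append(name)
    return frontier

print("Pareto-optimal models:", pareto_frontier(candidates))   # model_e is dominated
```

Decision-makers can then restrict attention to the frontier models and choose among them according to how much predictive accuracy a unit of additional computational cost is worth in their context.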

4. Towards Sustainability: Philosophic and Ecological Paradigms

The previous section posed a critical analysis of the shortcomings inherent in a checklist approach to sustainability [57], which raises issues of justice and inevitably misses the core of what sustainability truly aims to achieve. This oversight is unsurprising given the multifaceted nature and interconnectedness of our world, aspects that cannot be fully captured through achieving a single goal.
Since this work calls for a deeper integration of principles that reflect a harmonious vision, this section delves into the frameworks that can enrich the development of sustainable AI, while the next section proposes how these principles can be operationalized in the context of AI. Recall that our position diverges from the prevailing narrative of tension [58] and contends that the path to sustainable AI must be rooted in harmony; therefore, the design of AI systems must possess the qualities inherent in the nature of sustainability itself—qualities such as adaptability, balance, harmony, and a recognition of interconnectedness.
Which frameworks inherently embody this nature?
In the following, we present a philosophical approach alongside principles from complex systems theory and ecology. These frameworks are deemed alternatives due to their distinct approach to understanding and interacting with the world, which corresponds with the aspiration for harmony and balance.

4.1. Towards Ethical Sustainability: The Ecological Principle

In charting the course towards ethical sustainability, this section foregrounds the ecological principle, sidestepping an in-depth analysis of the ethics of sustainability since this concept is still under discussion. Some works posit sustainability as a core tenet of ethics, suggesting that the value of ecosystems—both intrinsically and as a foundation for human prosperity—persists beyond their presence; this view recognizes our capacity to rectify unsustainable practices through deliberate action [59]. Another proposition is to place complexity as an inherent attribute of ethical sustainability [60], largely due to the multifaceted nature of sustainability itself.
Rather than attempting to rigidly define ethical sustainability, we highlight the ecological principle as its fundamental aspect. This principle mandates a comprehensive approach that emphasizes the interconnectedness of all things, including wildlife, plant life, and the less privileged sections of society. The ecological principle therefore calls for balance and harmony, advocating for approaches that maintain the equilibrium between diverse life forms and the environments they inhabit, including humanity and its societal constructs. Ethics that correspond with this principle demand an examination of progress from the perspective of its impact on all these stakeholders, including potential social injustices, economic implications, and non-human lives. Ethical sustainability, therefore, requires a thorough appreciation of these intricate interrelations and their inherent complexities. Simplistic optimization of single aspects, such as carbon emissions, is insufficient; instead, we must aim for a harmonized balance that cultivates fair and lasting resolutions.
This ecological principle should be pivotal in the engineering and sciences of sustainability because it expands the ethical vision to include a multitude of perspectives, integrating concerns about privacy, autonomy, and control, as well as addressing biases against minorities that may be perpetuated by AI systems [21]. By placing the ecological principle at the heart of sustainability efforts, we ensure that the trajectory of AI development and deployment is not just technically and economically sound but also ethically attuned to the broader fabric of life it is meant to serve [61].

4.2. Self-Organization Paradigm

The integration of complex systems within the sustainability discourse [57] brings to light the intricate interplay between agents capable of dynamic information exchange and adaptation. As we delve deeper into the operational mechanics of these systems, we find that self-organization [62], intrinsic to complex systems, stands as a profound counter-point to the prevailing trend of exerting technological control over natural processes. This process, rooted in the concepts of emergent behavior and decentralization, corresponds with ecological models that emphasize the natural emergence of order without rigid control. It is this very relinquishment of control that mirrors the harmonious essence of sustainability, making self-organization an apt framework for articulating our aspirations for a sustainable future.
Self-organization stands as a defiant challenge to the Anthropocene’s premise of human dominion and the belief that human-devised algorithms should supplant the self-regulating mechanisms of nature. This paradigm questions the interventionist approach of optimizing ecosystems through algorithms, which may not account for the comprehensive wisdom embedded in natural processes. By advocating for decentralized models, self-organization reframes our interaction with AI, shifting from exerting stringent control to fostering collaborative coexistence with the complexities of natural and social systems. This approach honors the intricate governance that nature inherently possesses, suggesting that true sustainability lies in complementing [63], not commanding, the organic order of our environment.
More specifically, self-organization refers to the emergent structures formed by networks of agents (e.g., bees) who collaborate and coordinate without centralized control or management (e.g., without a controlling queen-bee). These self-organizing systems are inherently more effective at various tasks (e.g., decision-making), as they leverage the collective intelligence and localized knowledge of multiple stakeholders. Unlike isolated autonomous systems where each entity operates independently or centrally managed systems where decision-making is concentrated at a single point (e.g., policymakers using AI models for decision support), self-organizing networks capitalize on decentralized behavior. Such networks are highly responsive to environmental variances and common needs, enabling them to adapt behavior in a more efficient and sustainable manner. For the decentralized decision-making process in the nest selection of bees, please refer to [64].
Comparative studies indicate that such self-organizing systems can, in specific contexts, outperform their centralized, hierarchical counterparts. For instance, studies suggest that self-organizing networks of farmers are inherently more adept at effective water management than either isolated autonomous systems or centrally managed ones [65]. This view contradicts the narrative that technological interventions, often driven by AI, are universally superior to optimizing resource management.

4.3. Effortless Paradigm

The Daoist philosophy, with its emphasis on harmony, balance, and purposeful non-action, presents an enriching lens through which to examine the principles of sustainability [66]. Rooted in ancient Chinese thought, Daoism accentuates the interconnectedness of all things and advocates for a harmonious coexistence with the natural world. This ethos remarkably aligns with the contemporary imperatives of sustainable development, offering an alternative framework that extends modern sustainability paradigms [67]. Daoism guides us to not just optimize but to transform—fostering behaviors that are intrinsically aligned with the cycles and balances of ecological systems. Thus, Daoism enriches the discourse on sustainability by introducing holistic, long-term perspectives that echo its age-old wisdom, potentially reshaping our understanding of what it means to live sustainably.
Approximately 2500 years ago, Laozi, the founding figure of Daoism, articulated the principle that human activities should align with the laws of the Earth. In Daoism, the concept of “wu-wei,” originally signifying ‘no (wu) effort/action (wei),’ proves to be a deeply profound idea that has historically been challenging to interpret. Wu-wei does not mean to do nothing. Instead, it consciously suggests not doing too much, or simply unforced action, flowing naturally from the circumstances [68].
The Daoist approach does not align closely with the principles of Agenda 2030, but it offers an alternative framework that could profoundly impact the interpretation and implementation of sustainability [67]. SDGs 12 and 13 focus on responsible consumption and climate action, respectively. Traditional interpretations of these goals have emphasized energy management, optimization, and minimal waste, but the Daoist approach suggests a more transformative orientation. Specifically, SDG 12 would not merely be about making production and consumption less harmful, but about genuinely consuming less. This aligns with wu-wei: by deliberately doing less, we may actually achieve more in terms of sustainability and ecological balance.
Similarly, SDG 13, which calls for urgent action to combat climate change, could benefit from the Daoist counsel to “act less”. This does not advocate for neglecting climate change; on the contrary, it recognizes that sometimes the best action is restrained and thoughtful, rather than overpowering and investing a huge effort. A Daoist interpretation would guide us towards climate solutions that are not just about doing more—more technology, more control, more mitigation—but about judiciously doing less of what harms the planet.

5. Navigating beyond Metrics: A Paradigm Shift

The growing preoccupation with metrics in assessing the environmental footprint of AI systems [55,56] may serve to commodify nature in an unsettling manner. Firstly, this approach reduces the multi-dimensional value of natural resources and ecosystems to mere economic terms, facilitating their acquisition and regulation through market-based mechanisms [69]. This simplification inadvertently advances a transactional paradigm wherein nature is reduced to a series of assets to be optimized rather than an interconnected system to be respected. Secondly, the complex, adaptive nature of ecological systems poses a significant challenge to the accurate measurement of AI’s environmental impact and contribution. Given the intricate web of dependencies and feedback loops that characterize natural ecosystems, it is overly optimistic, if not naive, to presume that all potential ramifications can be captured through quantifiable metrics.
A dependence on metrics not only steers us away from the harmonious approach required for true sustainability, but it also cultivates a narrow view that may overlook the broader, often immeasurable aspects of ecological influence.
In the quest to place AI within the vision of sustainability, this section takes the first steps towards synthesizing the three interrelated paradigms: the Effortless Paradigm, the Ecological Principle, and Self-Organization. Together, they possess the qualities inherent in the nature of sustainability itself—adaptability, balance, harmony, and a recognition of interconnectedness.
This unified paradigm calls for a shift because it is predicated on the pursuit of synergy over subjugation, moving from AI that seeks to dominate natural and societal systems to one that seeks harmonious integration. Daoism’s principle of ‘wu-wei’ signifies unforced action attuned to the natural world. The philosopher Martin Buber interprets “nondoing” as a noncoercive and responsive doing, and it has been claimed to be the key experience and conception that Europe can learn from Chinese philosophy in order to temper its thirst for power and domination over things and over others [70].
Self-organization becomes a metaphor for AI development that seeks synergy rather than subjugation. The ecological concepts become blueprints for AI systems that enhance our ecosystems, championing a development ethos that values coexistence and mutual benefit. The synergy of these paradigms may lead to emerging outcomes where AI is developed not as a forceful tool of human ambition but as an effortless, integrated, and responsible participant in the broader ecological and societal context.
The applications of this new paradigm within AI are multifaceted, offering a spectrum of possibilities for the future direction of the field. Herein, we outline initial steps that exemplify the paths AI development can take, aligning with our approach.
The Effortless Paradigm invites us to reimagine AI as a system that operates with a sense of ease and unforced functionality. One such interpretation is the promotion of incremental learning, which stands out within machine learning for its ability to continuously assimilate new knowledge from incoming data without the necessity of revisiting the entire original dataset [71]. This characteristic allows modifications and improvements to be made to the model in a dynamic, ongoing manner, circumventing the exhaustive process typically associated with training a completely new model from scratch. By embracing incremental learning, we enable AI systems to grow and evolve efficiently, mirroring the adaptability of living organisms. This adaptive learning process ensures that AI applications remain relevant and effective within the dynamic contexts in which they operate.
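As a hedged, minimal sketch of incremental learning (using scikit-learn’s partial_fit interface; the simulated data stream and model choice are illustrative assumptions), a model assimilates successive batches of data without retraining from scratch:

```python
# Illustrative incremental (out-of-core) learning: the model is updated batch by
# batch via partial_fit, without revisiting earlier data (setup is an assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
classes = np.unique(y)

model = SGDClassifier(random_state=0)

# Simulate a data stream arriving in chunks over time.
for X_chunk, y_chunk in zip(np.array_split(X, 10), np.array_split(y, 10)):
    model.partial_fit(X_chunk, y_chunk, classes=classes)

print("accuracy after streaming updates:", round(model.score(X, y), 3))
```

Each call to partial_fit touches only the newly arrived chunk, so the cost of keeping the model current grows with the new data rather than with the full history.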
The Ecological Principle broadens our understanding of AI’s role within the web of life; one interpretation may encourage AI designers and developers to undertake a mapping of the stakeholders relevant to their task. This initial step is crucial in cultivating awareness of the AI’s contextual ecosystem and promoting a conscientious approach to its deployment. By identifying and understanding the various parties affected—ranging from direct users to indirect ecological and social entities—developers can anticipate and mitigate potential adverse impacts, thus fostering, for example, balance when obtaining fair data representations [72].
Self-Organization champions, for example, decentralized AI in a fleet of autonomous vehicles; decentralization enables individual cars to make real-time decisions based on immediate environmental inputs and intra-fleet communication [73]. This not only optimizes traffic flow and reduces congestion, but also reduces central control in the system, a desirable value in itself.
Within the context of our proposed paradigm, which emphasizes harmony, adaptability, and less intervention, transfer learning [74] may emerge as a favored approach. This technique aligns well with the ethos of reducing unnecessary computational strain and resource consumption, as it leverages pre-trained models and adapts them to new but related tasks, avoiding the need to build and train models from the ground up for every unique application. Operationalizing transfer learning within AI systems means embracing the interconnectedness of knowledge domains. It allows us to draw from a vast pool of pre-existing models (such as models on musical preferences), fine-tuning them with a specific, often smaller set of data that are relevant to the new context (for example, film preferences).
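As a hedged sketch of this reuse-and-adapt pattern (in PyTorch, with a small stand-in ‘pre-trained’ base network; the architecture, sizes, and the musical/film-preference framing follow the example above and are assumptions made for illustration), the pre-trained layers are frozen and only a new task-specific head is trained:

```python
# Illustrative transfer-learning sketch: freeze a (stand-in) pre-trained base and
# train only a new task-specific head (architecture and sizes are assumptions).
import torch
from torch import nn

# Stand-in for a model pre-trained on a source task (e.g., musical preferences).
base = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32), nn.ReLU())

for param in base.parameters():
    param.requires_grad = False          # keep the pre-trained knowledge fixed

head = nn.Linear(32, 2)                  # new head for the target task (e.g., film preferences)
model = nn.Sequential(base, head)

# Only the head's parameters are updated, so far less computation and data are needed.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 64), torch.randint(0, 2, (16,))   # a small target-task batch
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("fine-tuning loss on one batch:", round(loss.item(), 3))
```

Because gradients are computed only for the small head, the adaptation step consumes a fraction of the resources required to train a full model from the ground up, which is exactly the reduced-effort alignment the paradigm calls for.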

6. Conclusions

In conclusion, we recognize that the essence of sustainability cannot be distilled into a checklist or a set of prescriptive guidelines. Instead, it is a vision and an aspiration for harmony and balance. The paradigm we advocate for in AI development is reflective of this vision—emphasizing adaptability, balance, harmony, and a profound recognition of interconnectedness. This paradigm is a living, dynamic, and responsive concept, and as such, it can be intertwined with new technology.
Our exploration has led us to propose a first step in the synthesis of the Effortless Paradigm, the Ecological Principle, and the notion of Self-Organization, forging a path for AI development that aligns with the natural order and operates within the principles of ecological integrity and ethical stewardship. This new paradigm reimagines AI not as a force dominating nature but as a facilitator of natural harmony.
The enthusiasm with which AI has been embraced as a solution to sustainability challenges must be tempered with caution. We must guard against the ‘hammer-and-nail’ syndrome, where AI becomes the go-to solution for all problems, potentially sidelining the ethical and ecological imperatives that are the hallmark of true sustainability. Hence, we promote judicious and thoughtful applications of AI, emphasizing that their use should extend beyond mere convenience and truly align with the complex and diverse dimensions of sustainability.
The current emphasis on metrics to evaluate AI’s environmental footprint risks simplifying nature into economic terms, overlooking the inherent complexity and value of natural ecosystems. In this vein, we call for a paradigm shift: the pursuit of synergy over subjugation, moving from AI that seeks to dominate natural and societal systems to one that seeks harmonious integration. The need for such an approach is clear: it is not merely an option but an imperative for navigating the complexities of our present and future challenges. By adopting a multidimensional approach that synthesizes technological innovation with the vision for harmony, we can hope to forge a future that honors both human creativity and the natural systems that sustain life.

Author Contributions

Writing—original draft, N.O.; Supervision, O.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations. Transforming Our World: The 2030 Agenda for Sustainable Development; Department of Economic and Social Affairs, United Nations: New York, NY, USA, 2015.
  2. Fang, F.; Tambe, M.; Dilkina, B.; Plumptre, A.J. AI and Conservation; Cambridge University Press: Cambridge, UK, 2019; ISBN 1-316-51292-4. [Google Scholar]
  3. Carter, J.G.; Cavan, G.; Connelly, A.; Guy, S.; Handley, J.; Kazmierczak, A. Climate Change and the City: Building Capacity for Urban Adaptation. Prog. Plan. 2015, 95, 1–66. [Google Scholar] [CrossRef]
  4. Gomes, C.; Dietterich, T.; Barrett, C.; Conrad, J.; Dilkina, B.; Ermon, S.; Fang, F.; Farnsworth, A.; Fern, A.; Fern, X. Computational Sustainability: Computing for a Better World and a Sustainable Future. Commun. ACM 2019, 62, 56–65. [Google Scholar] [CrossRef]
  5. Sheldon, D.R.; Dietterich, T. Collective Graphical Models. In Proceedings of the 24th International Conference on Neural Information Processing Systems, Granada, Spain, 12–15 December 2011; Volume 24. [Google Scholar]
  6. Sullivan, B.L.; Aycrigg, J.L.; Barry, J.H.; Bonney, R.E.; Bruns, N.; Cooper, C.B.; Damoulas, T.; Dhondt, A.A.; Dietterich, T.; Farnsworth, A. The eBird Enterprise: An Integrated Approach to Development and Application of Citizen Science. Biol. Conserv. 2014, 169, 31–40. [Google Scholar] [CrossRef]
  7. Jain, P.; Coogan, S.C.; Subramanian, S.G.; Crowley, M.; Taylor, S.; Flannigan, M.D. A Review of Machine Learning Applications in Wildfire Science and Management. Environ. Rev. 2020, 28, 478–505. [Google Scholar] [CrossRef]
  8. Bailey, E.W. Incorporating Ecological Ethics into Manifest Destiny: Sustainable Development, the Population Explosion, and the Tradition of Substantive Due Process. Tul. Envtl. LJ 2007, 21, 473. [Google Scholar]
  9. Khakurel, J.; Penzenstadler, B.; Porras, J.; Knutas, A.; Zhang, W. The Rise of Artificial Intelligence under the Lens of Sustainability. Technologies 2018, 6, 100. [Google Scholar] [CrossRef]
  10. Pajak, P. Sustainability, Ecosystem Management, and Indicators: Thinking Globally and Acting Locally in the 21st Century. Fisheries 2000, 25, 16–30. [Google Scholar] [CrossRef]
  11. Ali, S.M.; Appolloni, A.; Cavallaro, F.; D’Adamo, I.; Di Vaio, A.; Ferella, F.; Gastaldi, M.; Ikram, M.; Kumar, N.M.; Martin, M.A.; et al. Development Goals towards Sustainability. Sustainability 2023, 15, 9443. [Google Scholar] [CrossRef]
  12. Heilinger, J.-C.; Kempt, H.; Nagel, S. Beware of Sustainable AI! Uses and Abuses of a Worthy Goal. AI Ethics 2023, 4, 1–12. [Google Scholar] [CrossRef]
  13. Kemp, R.; Martens, P. Sustainable Development: How to Manage Something That Is Subjective and Never Can Be Achieved? Sustain. Sci. Pract. Policy 2007, 3, 5–14. [Google Scholar] [CrossRef]
  14. Mensah, J. Sustainable Development: Meaning, History, Principles, Pillars, and Implications for Human Action: Literature Review. Cogent Soc. Sci. 2019, 5, 1653531. [Google Scholar] [CrossRef]
  15. Dhar, P. The Carbon Impact of Artificial Intelligence. Nat. Mach. Intell. 2020, 2, 423–425. [Google Scholar] [CrossRef]
  16. Patterson, D.; Gonzalez, J.; Le, Q.; Liang, C.; Munguia, L.-M.; Rothchild, D.; So, D.; Texier, M.; Dean, J. Carbon Emissions and Large Neural Network Training. arXiv 2021, arXiv:2104.10350. [Google Scholar]
  17. Strubell, E.; Ganesh, A.; McCallum, A. Energy and Policy Considerations for Deep Learning in NLP. arXiv 2019, arXiv:1906.02243. [Google Scholar]
  18. Burrell, J.; Fourcade, M. The Society of Algorithms. Annu. Rev. Sociol. 2021, 47, 213–237. [Google Scholar] [CrossRef]
  19. Rokach, L.; Maimon, O.; Shmueli, E. Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook; Springer Nature: Berlin, Germany, 2023; ISBN 3-031-24628-4. [Google Scholar]
  20. Brevini, B. Is AI Good for the Planet? Polity Press: Cambridge, UK, 2021. [Google Scholar]
  21. Sison, A.J.G.; Daza, M.T.; Gozalo-Brizuela, R.; Garrido-Merchán, E.C. ChatGPT: More than a Weapon of Mass Deception, Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective. arXiv 2023, arXiv:2304.11215. [Google Scholar]
  22. Dilkina, B.; Houtman, R.; Gomes, C.P.; Montgomery, C.A.; McKelvey, K.S.; Kendall, K.; Graves, T.A.; Bernstein, R.; Schwartz, M.K. Trade-offs and Efficiencies in Optimal Budget-constrained Multispecies Corridor Networks. Conserv. Biol. 2017, 31, 192–202. [Google Scholar] [CrossRef]
  23. Fisher, D. A Selected Summary of AI for Computational Sustainability. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  24. Gomes, C.P. Computational Sustainability: Computational Methods for a Sustainable Environment, Economy, and Society. Bridge 2009, 39, 5–13. [Google Scholar]
  25. Brown, R.E.; Brown, R.; Masanet, E.; Nordman, B.; Tschudi, B.; Shehabi, A.; Stanley, J.; Koomey, J.; Sartor, D.; Chan, P. Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431; Lawrence Berkeley National Laboratory (LBNL): Berkeley, CA, USA, 2007. [Google Scholar]
  26. Barroso, L.A.; Hölzle, U. The Case for Energy-Proportional Computing. Computer 2007, 40, 33–37. [Google Scholar] [CrossRef]
  27. Garbin, C.; Zhu, X.; Marques, O. Dropout vs. Batch Normalization: An Empirical Study of Their Impact to Deep Learning. Multimed. Tools Appl. 2020, 79, 12777–12815. [Google Scholar] [CrossRef]
  28. Ofek, N.; Rokach, L.; Stern, R.; Shabtai, A. Fast-CBUS: A Fast Clustering-Based Undersampling Method for Addressing the Class Imbalance Problem. Neurocomputing 2017, 243, 88–102. [Google Scholar] [CrossRef]
  29. O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015, arXiv:1511.08458. [Google Scholar]
  30. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [Google Scholar] [CrossRef]
  31. Zhou, X.; Zhang, W.; Chen, Z.; Diao, S.; Zhang, T. Efficient Neural Network Training via Forward and Backward Propagation Sparsification. Adv. Neural Inf. Process. Syst. 2021, 34, 15216–15229. [Google Scholar]
  32. Hanin, B. Which Neural Net Architectures Give Rise to Exploding and Vanishing Gradients? In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada, 2–8 December 2018; Volume 31. [Google Scholar]
  33. Yang, T.-J.; Chen, Y.-H.; Emer, J.; Sze, V. A Method to Estimate the Energy Consumption of Deep Neural Networks. In Proceedings of the 51st Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 29 October–1 November 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1916–1920. [Google Scholar]
  34. Amodei, D.; Hernandez, D. AI and Compute. Available online: https://openai.com/research/ai-and-compute (accessed on 24 September 2023).
  35. Hsieh, C.-J.; Si, S.; Dhillon, I. A Divide-and-Conquer Solver for Kernel Support Vector Machines. In Proceedings of the 31st International Conference on Machine Learning, PMLR, Beijing, China, 22–24 June 2014; pp. 566–574. [Google Scholar]
  36. Ganaie, M.A.; Hu, M.; Malik, A.K.; Tanveer, M.; Suganthan, P.N. Ensemble Deep Learning: A Review. Eng. Appl. Artif. Intell. 2022, 115, 105151. [Google Scholar] [CrossRef]
  37. Provost, F.; Kolluri, V. A Survey of Methods for Scaling Up Inductive Algorithms. Data Min. Knowl. Discov. 1999, 3, 131–169. [Google Scholar] [CrossRef]
  38. Nguyen, A.-P.; Moreno, D.L.; Le-Bel, N.; Rodríguez Martínez, M. MonoNet: Enhancing Interpretability in Neural Networks via Monotonic Features. Bioinform. Adv. 2023, 3, vbad016. [Google Scholar] [CrossRef]
  39. Ofek, N.; Katz, G.; Shapira, B.; Bar-Zev, Y. Sentiment Analysis in Transcribed Utterances. In Advances in Knowledge Discovery and Data Mining; Lecture Notes in Computer Science; Cao, T., Lim, E.-P., Zhou, Z.-H., Ho, T.-B., Cheung, D., Motoda, H., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9078, pp. 27–38. ISBN 978-3-319-18031-1. [Google Scholar]
  40. Xia, R.; Zong, C. Exploring the Use of Word Relation Features for Sentiment Classification. In Proceedings of the Coling 2010, Beijing, China, 23–27 August 2010; pp. 1336–1344. [Google Scholar]
  41. Bengio, Y.; Delalleau, O.; Le Roux, N. The Curse of Dimensionality for Local Kernel Machines. Techn. Rep. 2005, 1258, 1. [Google Scholar]
  42. Nordgren, A. Artificial Intelligence and Climate Change: Ethical Issues. J. Inf. Commun. Ethics Soc. 2023, 21, 1–15. [Google Scholar] [CrossRef]
  43. Cowls, J.; Tsamados, A.; Taddeo, M.; Floridi, L. The AI Gambit: Leveraging Artificial Intelligence to Combat Climate Change—Opportunities, Challenges, and Recommendations. AI Soc. 2023, 38, 283–307. [Google Scholar] [CrossRef]
  44. Bender, E.M.; Gebru, T.; McMillan-Major, A.; Shmitchell, S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event, 3–10 March 2021; pp. 610–623. [Google Scholar]
  45. Bullard, R.D. Confronting Environmental Racism: Voices from the Grassroots; South End Press: Boston, MA, USA, 1993; ISBN 0-89608-446-9. [Google Scholar]
  46. Report: Inequalities Exacerbate Climate Impacts on Poor. Available online: https://www.un.org/sustainabledevelopment/blog/2016/10/report-inequalities-exacerbate-climate-impacts-on-poor/ (accessed on 24 September 2023).
  47. Cui, S.; Gao, Y.; Huang, Y.; Shen, L.; Zhao, Q.; Pan, Y.; Zhuang, S. Advances and Applications of Machine Learning and Deep Learning in Environmental Ecology and Health. Environ. Pollut. 2023, 335, 122358. [Google Scholar] [CrossRef]
  48. Berti-Equille, L.; Dao, D.; Ermon, S.; Goswami, B. Challenges in KDD and ML for Sustainable Development. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 14 August 2021; pp. 4031–4032. [Google Scholar]
  49. Wu, X.; Gomes-Selman, J.; Shi, Q.; Xue, Y.; Garcia-Villacorta, R.; Anderson, E.; Sethi, S.; Steinschneider, S.; Flecker, A.; Gomes, C. Efficiently Approximating the Pareto Frontier: Hydropower Dam Placement in the Amazon Basin. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32. [Google Scholar] [CrossRef]
  50. Portier, K.; Greer, G.E.; Rokach, L.; Ofek, N.; Wang, Y.; Biyani, P.; Yu, M.; Banerjee, S.; Zhao, K.; Mitra, P.; et al. Understanding Topics and Sentiment in an Online Cancer Survivor Community. JNCI Monogr. 2013, 2013, 195–198. [Google Scholar] [CrossRef]
  51. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Fuso Nerini, F. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef] [PubMed]
  52. Nagler, J.; van den Hoven, J.; Helbing, D. An Extension of Asimov’s Robotics Laws. In Towards Digital Enlightenment; Helbing, D., Ed.; Springer: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
  53. Crawford, K. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021; ISBN 0-300-25239-0. [Google Scholar]
  54. Coeckelbergh, M. AI for Climate: Freedom, Justice, and Other Ethical and Political Challenges. AI Ethics 2021, 1, 67–72. [Google Scholar] [CrossRef]
  55. Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Commun. ACM 2020, 63, 54–63. [Google Scholar] [CrossRef]
  56. Henderson, P.; Hu, J.; Romoff, J.; Brunskill, E.; Jurafsky, D.; Pineau, J. Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. J. Mach. Learn. Res. 2020, 21, 10039–10081. [Google Scholar]
  57. Bolte, L.; Vandemeulebroucke, T.; Van Wynsberghe, A. From an Ethics of Carefulness to an Ethics of Desirability: Going Beyond Current Ethics Approaches to Sustainable AI. Sustainability 2022, 14, 4472. [Google Scholar] [CrossRef]
  58. Van Wynsberghe, A. Sustainable AI: AI for Sustainability and the Sustainability of AI. AI Ethics 2021, 1, 213–218. [Google Scholar] [CrossRef]
  59. Pierce, J.; Jameton, A. The Ethics of Environmentally Responsible Health Care; Oxford University Press: Oxford, UK, 2004; ISBN 0-19-513903-8. [Google Scholar]
  60. Asheim, G.B. Sustainability: Ethical Foundations and Economic Properties; Policy Research Working Paper Series 1302; The World Bank: Washington, DC, USA, 1994. [Google Scholar]
  61. Nash, K.L.; Blythe, J.L.; Cvitanovic, C.; Fulton, E.A.; Halpern, B.S.; Milner-Gulland, E.; Addison, P.F.; Pecl, G.T.; Watson, R.A.; Blanchard, J.L. To Achieve a Sustainable Blue Future, Progress Assessments Must Include Interdependencies between the Sustainable Development Goals. One Earth 2020, 2, 161–173. [Google Scholar] [CrossRef]
  62. Seeley, T.D. When Is Self-Organization Used in Biological Systems? Biol. Bull. 2002, 202, 314–318. [Google Scholar] [CrossRef]
  63. Walker, B.; Holling, C.S.; Carpenter, S.R.; Kinzig, A. Resilience, Adaptability and Transformability in Social–Ecological Systems. Ecol. Soc. 2004, 9, 5. [Google Scholar] [CrossRef]
  64. Seeley, T.D. Honeybee Democracy; Princeton University Press: Princeton, NJ, USA, 2011; ISBN 978-1-4008-3595-9. [Google Scholar]
  65. Lansing, J.S.; Kremer, J.N. Emergent Properties of Balinese Water Temple Networks: Coadaptation on a Rugged Fitness Landscape. Am. Anthropol. 1993, 95, 97–114. [Google Scholar] [CrossRef]
  66. Nelson, E.S. Daoism and Environmental Philosophy: Nourishing Life; Routledge: London, UK, 2020; ISBN 0-429-67822-3. [Google Scholar]
  67. Luo, Q. Daoism and Environmental Sustainability—A Completely Different Way of Thinking. In Proceedings of the International Workshop on Sustainable City Region, Bali, Indonesia, 23–24 February 2009; pp. 164–171. [Google Scholar]
  68. Loy, D. Wei-Wu-Wei: Nondual Action. Philos. East West 1985, 35, 73–86. [Google Scholar] [CrossRef]
  69. Acevedo, B.; Malevicius, R. Rethinking Education for Sustainability in Management Education: Going beyond Metrics toward Human Virtues. In Handbook of Research on International Business and Models for Global Purpose-Driven Companies; IGI Global: Hershey, PA, USA, 2021; pp. 219–234. [Google Scholar]
  70. Nelson, E.S. Technology and the Way: Buber, Heidegger, and Lao-Zhuang “Daoism”. J. Chin. Philos. 2014, 41, 307–327. [Google Scholar] [CrossRef]
  71. Luo, Y.; Yin, L.; Bai, W.; Mao, K. An Appraisal of Incremental Learning Methods. Entropy 2020, 22, 1190. [Google Scholar] [CrossRef] [PubMed]
  72. Pagano, T.P.; Loureiro, R.B.; Lisboa, F.V.N.; Peixoto, R.M.; Guimarães, G.A.S.; Cruz, G.O.R.; Araujo, M.M.; Santos, L.L.; Cruz, M.A.S.; Oliveira, E.L.S.; et al. Bias and Unfairness in Machine Learning Models: A Systematic Review on Datasets, Tools, Fairness Metrics, and Identification and Mitigation Methods. Big Data Cogn. Comput. 2023, 7, 15. [Google Scholar] [CrossRef]
  73. Zhang, Y.; Macke, W.; Cui, J.; Urieli, D.; Stone, P. Learning a Robust Multiagent Driving Policy for Traffic Congestion Reduction. Neural Comput. Appl. 2021, 1–14. [Google Scholar] [CrossRef]
  74. Weiss, K.; Khoshgoftaar, T.M.; Wang, D. A Survey of Transfer Learning. J. Big Data 2016, 3, 9. [Google Scholar] [CrossRef]
Figure 1. On 25 September 2015, under the auspices of the United Nations and as part of a wider 2030 Agenda for Sustainable Development, 193 countries agreed on a set of 17 ambitious goals, referred to as the Sustainable Development Goals.
Figure 2. Deep Learning in the context of Artificial Intelligence.
Figure 3. Visualization of the escalating compute requirements of the largest AI training runs, from AlexNet (2012) to AlphaGo Zero: an exponential fit to the data points indicates a doubling time of 3.4 months, as sourced from OpenAI [34].
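To make the quoted 3.4-month doubling time concrete, the following back-of-the-envelope sketch (our illustration; the symbols C_0 and C(t) are introduced here for exposition and do not appear in [34]) converts it into an approximate annual growth factor:

% Compute growth implied by a 3.4-month doubling time (illustrative only)
C(t) = C_0 \cdot 2^{\,t/3.4}, \qquad t \text{ in months}
\frac{C(12)}{C_0} = 2^{\,12/3.4} \approx 11.5

That is, under this fit, the compute devoted to the largest training runs grows by roughly an order of magnitude per year.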