1. Introduction
In September 2015, a seminal gathering at the United Nations (UN) Headquarters in New York culminated in the declaration of a groundbreaking set of global objectives known as ‘Agenda 2030’ [
1]. This paradigmatic shift in global policymaking aims to improve the quality of human life through a threefold focus: economic vitality, social equity, and environmental sustainability. Within the broader framework of the Sustainable Development Goals (SDGs), Artificial Intelligence (AI) and computational technologies are increasingly conceived as catalyzing agents that can powerfully advance these objectives. (Although we use the term ‘AI’ throughout this article, our primary focus is on the subset of AI that employs machine learning and deep learning techniques.)
In alignment with the SDGs, which span multiple domains, including environmental, economic, and social dimensions, as illustrated by
Figure 1, AI has the potential to serve many purposes conducive to achieving these goals. Specifically, AI can contribute to Good Health and Well-Being (SDG 3), Quality Education (SDG 4), Clean Water and Sanitation (SDG 6), and Responsible Consumption and Production (SDG 12). The challenges of climate change have been addressed using AI applications [
2]; the development of circular economies and the construction of intelligent urban infrastructure optimized for resource utilization have been developed using AI [
3], and multiple conservation strategies have been implemented using AI analysis. Among these, wildlife corridors have been identified as an effective countermeasure against habitat fragmentation [
4]. Several studies have also tackled additional issues, such as integrating and analyzing data from diverse sensors at various scales, quantifying and depicting uncertainty in species forecasting, and modeling migration [
5,6,7].
However, sustainability cannot be achieved through guidelines and bureaucracy. Rather, there is a critical need for a paradigm shift in individual and collective values. As articulated by Bailey in 2008 [
8], the prevalent focus on immediate self-maximization must evolve toward a more balanced approach that includes individual restraint and a long-term sense of responsibility. This means that rather than exploiting resources for immediate gains, there should be an ethical commitment to preserving them for future generations. By adjusting our values in this manner, the goal is to create a more sustainable and equitable system that takes into account the long-term impact of our actions on these diverse but interconnected resources.
In this work, we aim to examine the nexus between AI and sustainability and to propose a framework for AI that aligns more cohesively with sustainability.
Research Question: What represents the most fitting paradigm for AI developers to align with sustainability?
This question is not confined to a narrow appraisal of AI’s immediate effects on the SDGs [
9]. The quest to interpret what constitutes sustainable AI is challenging given the diversity of perspectives on the utility of sustainability in resource management, stemming chiefly from the absence of a universally accepted definition [
10]. Originating from an inquiry into the delicate balance between nature and society, sustainability encapsulates the aspiration for fostering a future characterized by well-being and opportunities for development. Sometimes sustainability is referred to as a concept that encompasses the responsible management and conservation of shared resources with the aim of fulfilling both current and future human needs [
11]. This definition has its roots in the 1987 Brundtland Report by the World Commission on Environment and Development, which emphasized the need for equitable and lasting resource utilization.
In our pursuit of sustainable AI, it is becoming increasingly apparent that the solutions we seek may not be fully realized within the confines of our current frameworks and practices, which are often at odds with sustainable principles [
12]. Remember that sustainability has not been achieved yet; it is a stated aspiration of governments and societies, a vision [
13,14].
13] speak about sustainability science as a new form of science). We must, therefore, cast our intellectual nets wider to these ‘other worlds’ that hold alternative modes of coexistence and interaction with our environment. By doing so, we are closer to unearthing innovative approaches and untapped knowledge that could guide the creation of truly sustainable AI systems.
The proposed thesis diverges from the prevailing focus on controlling either the energy consumption and CO₂ emissions associated with the deployment of AI [15,16,17] or natural processes through AI [
18]. Since sustainability is not a static target but a vision seeking harmony between human civilization and the planet’s ecosystems, our approach necessitates a multidisciplinary understanding that transcends mere technicalities and quantifiable metrics. Therefore, our inquiry begins with exploring frameworks that inherently reflect the harmonious vision itself. Our proposed paradigm is grounded in the tenets of complex systems theory, philosophical frameworks, and ecological principles. In positing these frameworks, we present a novel alternative to the field of AI—one that challenges the status quo and proposes a paradigm that reflects the vision for harmony.
This paper is organized in the following way:
Section 2 and
Section 3 critically review the main trajectories of ongoing efforts at the intersection of AI and sustainability.
Section 2 begins with an overview of AI and machine learning and delves into the effort and cost associated with training general AI models. Understanding the effort involved in developing AI systems is an essential starting point for our discussion.
Section 3 illustrates the paradox of AI and sustainability by raising issues of justice and environmental costs intrinsic to these technologies, even when trying to address the SDGs. It implies that the vision of sustainability extends beyond what can be captured through rigid guidelines and bureaucratic measures. In
Section 4, we introduce philosophic and ecological frameworks that reflect the harmonious nature of sustainability; these building blocks comprise the proposed paradigm.
Section 5 takes the first steps toward unifying the proposed frameworks and proposes a few interpretations for their implementation in the field of AI. Finally,
Section 6 concludes this work, summarizing our findings and reflecting on the implications for AI within the broader context of sustainability.
2. The Effort and Cost of Training AI Models
Artificial Intelligence is the scientific discipline that seeks to develop intelligent machines, while machine learning is a major subset of AI that involves the use of statistical techniques to improve computer programs through ‘learning’ from experience rather than being pre-programmed with explicit instructions. The principle that experience is provided to a machine learning system through data representative of the real world is pivotal to the functionality and reliability of the model’s output.
The core objective of most machine learning algorithms is to generalize from the training data to unseen situations in a process known as inductive learning. This involves constructing a model—either implicitly or explicitly—that captures the underlying patterns and structures within the data. Through the application of various methods such as regression, classification, or clustering, these algorithms infer rules and relationships that enable them to make predictions or decisions about new data points. The ability of a model to generalize well is pivotal; it indicates that the model has not just “memorized” the training data (overfitting) but has learned a simplified representation of the core trends within the data that are applicable to the broader environment it represents. This generalization is a statistical process wherein the model applies what it has learned from a sample to the entire population, making inductive learning a cornerstone of machine learning’s predictive capacities.
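To make this concrete, the brief Python sketch below fits a simple classifier to a training sample and evaluates it on held-out data; the synthetic dataset, the scikit-learn model, and the split ratio are illustrative choices of ours rather than prescriptions from the cited literature.

```python
# Minimal sketch of inductive learning: fit on a sample, evaluate on unseen data.
# The dataset, model, and split ratio are illustrative, not from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
# A small gap between the two scores suggests the model generalizes
# rather than merely memorizing the training sample.
print(f"train accuracy: {train_acc:.3f}, test accuracy: {test_acc:.3f}")
```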
Deep learning represents a special case of machine learning, characterized by its capacity to process and learn from vast amounts of data through layers of neural networks. Unlike traditional machine learning algorithms, which may require manual feature selection, deep learning architectures autonomously identify features. These networks consist of multiple layers of interconnected nodes, or neurons, that simulate the decision-making process. As data traverses through these layers, each one progressively extracts higher-level features, with deeper layers capturing more complex and abstract representations. This hierarchical learning approach enables deep learning models to perform exceptionally well in tasks such as image and speech recognition, where the nuances of patterns are paramount. Furthermore, deep learning models are particularly adept at handling non-linear and high-dimensional data, making them a potent tool for tackling a range of complex problems that were previously insurmountable for machines. With their growing prevalence, deep learning models are shaping up to be a cornerstone of AI, driving advancements across various fields with their unparalleled pattern recognition capabilities.
To learn more about the capacities of machine learning algorithms, please refer to [
19].
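As a minimal, illustrative sketch of the layered structure described above, the following PyTorch snippet stacks a few fully connected layers; the layer sizes and the input dimension are arbitrary assumptions made only for demonstration.

```python
# A toy deep network: stacked layers that successively transform the input into
# higher-level representations. Layer sizes are arbitrary illustrative choices.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # early layer: lower-level features
    nn.Linear(256, 64), nn.ReLU(),    # deeper layer: more abstract features
    nn.Linear(64, 10),                # output layer: class scores
)

x = torch.randn(32, 784)              # a batch of 32 flattened inputs (e.g., 28x28 images)
logits = model(x)
print(logits.shape)                   # torch.Size([32, 10])
```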
AI stands apart from other technologies due to its unparalleled complexity, adaptability, and potential for autonomy. While various technologies pose environmental and societal concerns, AI’s distinctiveness lies in its ability to learn from data, make decisions, and perform tasks that traditionally required human intelligence. This level of sophistication allows AI systems to not only execute predefined processes but also to evolve their performance over time through continuous learning, often surpassing human capabilities in specific domains.
Furthermore, AI’s influence extends beyond individual tasks to systemic impacts. Its integrative nature means it can optimize entire networks, from energy grids to transportation systems, with significant efficiency gains. However, these optimizations come with the caveat of increased energy demands for processing massive amounts of data and potential unintended consequences arising from complex AI behaviors; this results in a larger carbon footprint associated with AI [
20]. The societal implications are profound as well; AI can inadvertently perpetuate biases present in training data, leading to ethical dilemmas and governance challenges unseen in other technologies [
21].
The application of machine learning to sustainability challenges introduces a layer of complexity that is both promising and problematic. On the one hand, these algorithms have the potential to navigate the intricacies of multi-variable, non-linear, and constrained problems often encountered in sustainability domains such as resource allocation, supply chain management, and ecological conservation [
22,23,24]. However, the very features that make these algorithms powerful—namely, their ability to consider multiple dimensions simultaneously—also pose challenges.
Larger machine learning models (i.e., those with more layers and parameters) tend to fit the data better. However, this improvement comes with increasing financial and environmental costs [
25]. The rapid escalation in model size often outpaces the corresponding gains in performance, signaling that the high costs associated with expanding machine learning models are likely to persist. Increasing concerns surrounding the energy consumption associated with computing suggest that energy costs may soon become a predominant component in the total cost of ownership [
25]. While energy management is a key issue for data center operations, capital expenditures, operational outlays, and environmental ramifications must also be considered [
26].
The computational cost for training a machine learning model is determined by a variety of factors, including the complexity of the model architecture, the optimization algorithm employed and its loss function, and the volume and intricacy of the dataset being used. In artificial neural networks (ANN), a subset of machine learning algorithms (as illustrated in
Figure 2), a loss function is a mathematical formulation employed to measure the disparity between the network’s predictions and the actual target values in the training data. The loss function quantifies how well the model approximates the underlying function that maps inputs to outputs. During the optimization process, the goal is to minimize the value of the loss function, adjusting the network’s weights to achieve this objective. This fine-tuning process necessitates a substantial number of computations, thereby consuming significant processing time [
27].
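The toy PyTorch sketch below illustrates, under simplifying assumptions, a single such optimization step: a loss function measures the disparity between predictions and targets, gradients are computed, and the weights are adjusted to reduce the loss. The one-layer network, the mean-squared-error loss, and the random data are hypothetical stand-ins for a real training setup.

```python
# Sketch of a loss function and one weight update for a tiny stand-in network.
import torch
import torch.nn as nn

net = nn.Linear(4, 1)                         # a one-layer stand-in for an ANN
loss_fn = nn.MSELoss()                        # disparity between prediction and target
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

x = torch.randn(8, 4)                         # a mini-batch of inputs
target = torch.randn(8, 1)                    # the corresponding true values

prediction = net(x)
loss = loss_fn(prediction, target)            # quantify how far off the model is
loss.backward()                               # compute gradients w.r.t. the weights
optimizer.step()                              # adjust weights to reduce the loss
```

In full-scale training, this forward-backward-update cycle repeats over many mini-batches and epochs, which is where the bulk of the computational cost arises.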
In traditional machine learning models, the complexity was principally determined by the size of the training set [
28]. However, the rise of Deep Neural Networks (DNNs), a specialized type of ANN designed to model and understand complex patterns and representations through multiple layers between the input and output layers, has shifted the focus toward architectural intricacies as a major determinant of computational complexity. DNNs contain multiple hidden layers that often include specialized types such as convolutional [
29] and recurrent layers [
30], thereby enhancing their representational power but also adding computational challenges.
Optimization algorithms used in DNN training, like Stochastic Gradient Descent (SGD), have to navigate a considerably larger parameter space as they iteratively minimize the loss function while updating potentially millions or even billions of parameters. This results in a more computationally intensive process, as these algorithms often require more iterations to converge to an optimal solution. Each of these iterations involves the calculation and updating of a substantially larger set of parameters compared to traditional models. In DNNs, the forward and backward passes during the training phase become more resource-intensive due to the increased number of neurons and their interconnections [
31].
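To give a rough sense of scale, the sketch below counts trainable parameters for a shallow network and a still modest deeper one; each SGD iteration must compute and apply a gradient for every one of these parameters. The layer widths are invented for illustration, and production DNNs are orders of magnitude larger.

```python
# Illustrative parameter counts; layer widths are made-up examples.
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

shallow = nn.Sequential(nn.Linear(100, 10))
deep = nn.Sequential(
    nn.Linear(100, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

print(count_parameters(shallow))   # about 1 thousand parameters
print(count_parameters(deep))      # about 17 million parameters
```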
Moreover, the intricate nature of DNNs often necessitates the application of advanced regularization techniques to mitigate overfitting, further adding to the computational overhead. Issues such as vanishing gradients [
32] also become more prevalent as the network architecture deepens, requiring additional algorithmic or architectural adjustments. Additionally, the volume and complexity of the dataset contribute to the computational load; larger datasets usually require extended periods of training. Consequently, while DNNs offer improved performance and capabilities, this comes at the cost of significantly higher computational complexity compared to traditional machine learning models.
The energy consumption and resulting carbon emissions of Large Language Models (LLMs), particularly neural transformer models, are surging. According to recent estimates, training a large neural transformer model could emit as much as 280,000 kg of CO₂ [33]. Although it is problematic to assess the emissions of these models [16], this figure is alarming not only in an absolute sense but also when considered in the context of individual or collective scientific contributions. To elucidate, a transatlantic flight from London to New York emits about 900 kg of CO₂ per passenger, a sobering figure that is often cited in discussions around individual and corporate carbon footprints. The training of one large neural model therefore equates to the emissions of more than 300 transatlantic passenger flights. Thus, many Natural Language Processing (NLP) endeavors carry a substantial greenhouse gas footprint.
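A back-of-the-envelope calculation, using only the figures cited above, makes the comparison explicit:

```python
# Rough comparison based on the figures cited in the text.
training_emissions_kg = 280_000      # estimated CO2 for one large transformer [33]
flight_emissions_kg = 900            # per-passenger London-New York flight

equivalent_passengers = training_emissions_kg / flight_emissions_kg
print(round(equivalent_passengers))  # roughly 311 transatlantic passenger flights
```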
An examination in 2018 by the OpenAI Lab, which aims to ensure that artificial general intelligence serves the collective good, disclosed an astonishing trend: the computational resources devoted to training expansive AI models have been doubling approximately every 3.4 months since 2012. This rate diverges dramatically from Moore’s Law, which predicts a doubling roughly every 18 months, and amounts to a more than 300,000-fold increase in computational capacity [
15]. This is illustrated in
Figure 3.
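As a rough sanity check on these figures, the snippet below asks how long a 300,000-fold increase takes at each doubling period; it uses only the numbers quoted above.

```python
import math

# How long does a 300,000x increase take at each doubling rate?
doublings = math.log2(300_000)       # about 18.2 doublings
print(doublings * 3.4 / 12)          # ~5.2 years at a 3.4-month doubling period
print(doublings * 18 / 12)           # ~27 years under Moore's Law
```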
Complexity under the Lens of Sustainability
It is noteworthy that the computational complexity of most machine learning models tends to exceed linear proportions relative to the number of samples or features. This escalation in complexity is a significant concern, especially when dealing with large-scale datasets. One effective approach to mitigating this issue is through dimensionality partitioning in a mutually exclusive manner. By breaking down the problem space into smaller, mutually exclusive subsets, one can effectively reduce the computational burden.
For instance, techniques such as divide-and-conquer algorithms [
35] and ensemble methods [
36] can be utilized to partition the problem into more manageable sub-problems. These sub-problems can be solved independently, perhaps even in parallel, and their solutions can be recombined to form the final solution. This methodological shift does not just distribute computational load but also significantly decreases the algorithm’s computational complexity, making it more scalable and efficient. Such partitioning techniques thus offer a promising pathway for dealing with computationally intensive learning algorithms, aiding in both scalability and the effective utilization of computational resources.
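The sketch below illustrates one simple, hypothetical instance of this idea: the data are split into disjoint partitions, an independent model is trained on each (these trainings could run in parallel), and the sub-models are recombined by majority vote. The dataset, base learner, and number of partitions are illustrative assumptions.

```python
# Sketch of partitioning: train independent models on disjoint subsets of the data
# and combine their predictions by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
n_parts = 4
models = []
for part_X, part_y in zip(np.array_split(X, n_parts), np.array_split(y, n_parts)):
    models.append(DecisionTreeClassifier(random_state=0).fit(part_X, part_y))  # independent sub-problems

def predict(models, X_new):
    votes = np.stack([m.predict(X_new) for m in models])   # each sub-model votes
    return np.round(votes.mean(axis=0)).astype(int)        # majority vote (binary labels)

print(predict(models, X[:5]), y[:5])
```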
The question of scalability in the application of machine learning algorithms to vast datasets is both imperative and complex. On the one hand, organizations across various domains are accumulating large repositories of data—ranging from customer behavior to scientific observations—that could be instrumental for extracting valuable knowledge. On the other hand, not all machine learning algorithms can efficiently process such colossal datasets. The motivation for tackling this scalability issue is multifaceted. One salient reason is the direct correlation between the size of the training set and the accuracy of the learned models. Smaller datasets often lead to overfitting, particularly in instances where the feature space is large or the program needs to learn “small disjuncts” or rare cases that are essential for high accuracy [
37]. The problem is exacerbated when noise is present in the data, making it challenging to distinguish genuine special cases from outliers.
Another facet of the scalability problem pertains to the computational complexity of learning algorithms. Algorithms with greater-than-linear complexity can become unmanageable as dataset sizes increase [
28]. This issue is not just about predictive modeling but also extends to data-mining applications focused on the discovery of previously unknown knowledge. However, achieving this without being overwhelmed by spurious small disjuncts necessitates large enough datasets to facilitate confident generalization. Therefore, there is an increasing need for the development of fast, scalable algorithms that can efficiently handle large datasets while preserving or even enhancing the accuracy and interpretability of the models [
38].
Various methods for data representation employ sparse feature representations, especially in text-mining tasks [
39]. For example, in the traditional text representation method based on term frequencies (TF), each feature corresponds to the frequency of a specific n-gram. This straightforward technique has proven highly efficacious in traditional text classification tasks, including sentiment analysis [
40]. However, this methodology’s high-dimensional feature space can be susceptible to issues related to the “curse of dimensionality”. As the number of dimensions (i.e., variables or constraints) increases, the computational cost of finding an optimal solution grows exponentially. This can render optimization tasks computationally infeasible, limiting the algorithms’ applicability to real-world, large-scale sustainability challenges, and it also leaves the model vulnerable to overfitting [
41].
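For illustration, the following sketch builds a term-frequency representation with scikit-learn; the tiny corpus is invented, and the point is only that each distinct n-gram becomes a dimension, stored sparsely, so the feature space grows quickly.

```python
# Term-frequency representation: each column counts a distinct n-gram, stored sparsely.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "solar power reduces emissions",
    "wind power reduces costs",
    "emissions and costs both matter",
]
vectorizer = CountVectorizer(ngram_range=(1, 2))   # unigrams and bigrams
X = vectorizer.fit_transform(corpus)               # a sparse document-term matrix

print(X.shape)   # (3 documents, number of distinct n-grams)
print(X.nnz)     # non-zero entries; most of the matrix is zeros
```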
4. Towards Sustainability: Philosophic and Ecological Paradigms
The previous section presented a critical analysis of the shortcomings inherent in a checklist approach to sustainability [
57], which raises issues of justice and inevitably misses the core of what sustainability truly aims to achieve. This oversight is unsurprising given the multifaceted nature and interconnectedness of our world, aspects that cannot be fully captured through achieving a single goal.
Since this work calls for a deeper integration of principles that reflect a harmonious vision, the current step delves into the frameworks that can enrich the development of sustainable AI, while the next section proposes how these principles can be operationalized in the context of AI. Recall that our position diverges from the prevailing narrative of tension [
58] and contends that the path to sustainable AI must be rooted in harmony; therefore, the design of AI systems must possess the qualities inherent in the nature of sustainability itself—qualities such as adaptability, balance, harmony, and a recognition of interconnectedness.
What frameworks essentially belong to this nature?
In what follows, we present a philosophical approach together with principles from complex systems theory and ecology. These frameworks are deemed alternatives due to their distinct approach to understanding and interacting with the world, which corresponds with the aspiration for harmony and balance.
4.1. Towards Ethical Sustainability: The Ecological Principle
In charting the course towards ethical sustainability, this section foregrounds the ecological principle, sidestepping an in-depth analysis of the ethics of sustainability since this concept is still under discussion. Some works posit sustainability as a core tenet of ethics, suggesting that the value of ecosystems—both intrinsically and as a foundation for human prosperity—persists beyond their presence. This view also recognizes our capacity to rectify unsustainable practices through deliberate action [
59]. Another proposition is to place complexity as an inherent attribute of ethical sustainability [
60], largely due to the multifaceted nature of sustainability itself.
Rather than attempting to rigidly define ethical sustainability, we highlight the ecological principle as its fundamental aspect. This principle mandates a comprehensive approach that emphasizes the interconnectedness of all things, including wildlife, plant life, and the less privileged sections of society. The ecological principle therefore calls for balance and harmony, advocating for approaches that maintain the equilibrium between diverse life forms and the environments they inhabit, including humanity and its societal constructs. An ethics that corresponds with this principle demands an examination of progress from the perspective of its impact on all these stakeholders, including potential social injustices, economic implications, and non-human lives. Ethical sustainability thus requires a thorough appreciation of these intricate interrelations and their inherent complexities. Simplistic optimization of single aspects, such as carbon emissions, is insufficient; instead, we must aim for a harmonized balance that cultivates fair and lasting resolutions.
This ecological principle should be pivotal in the engineering and sciences of sustainability because it expands the ethical vision to include a multitude of perspectives, integrating concerns about privacy, autonomy, and control, as well as addressing biases against minorities that may be perpetuated by AI systems [
21]. By placing the ecological principle at the heart of sustainability efforts, we ensure that the trajectory of AI development and deployment is not just technically and economically sound but also ethically attuned to the broader fabric of life it is meant to serve [
61].
4.2. Self-Organization Paradigm
The integration of complex systems within the sustainability discourse [
57] brings to light the intricate interplay between agents capable of dynamic information exchange and adaptation. As we delve deeper into the operational mechanics of these systems, we find that self-organization [
62], intrinsic to complex systems, stands as a profound counterpoint to the prevailing trend of exerting technological control over natural processes. This process, rooted in the concepts of emergent behavior and decentralization, corresponds with ecological models that emphasize the natural emergence of order without rigid control. It is this very relinquishment of control that mirrors the harmonious essence of sustainability, making self-organization an apt framework for articulating our aspirations for a sustainable future.
Self-organization stands as a defiant challenge to the Anthropocene’s premise of human dominion and the belief that human-devised algorithms should supplant the self-regulating mechanisms of nature. This paradigm questions the interventionist approach of optimizing ecosystems through algorithms, which may not account for the comprehensive wisdom embedded in natural processes. By advocating for decentralized models, self-organization reframes our interaction with AI, shifting from exerting stringent control to fostering collaborative coexistence with the complexities of natural and social systems. This approach honors the intricate governance that nature inherently possesses, suggesting that true sustainability lies in complementing [
63], not commanding, the organic order of our environment.
More specifically, self-organization refers to the emergent structures formed by networks of agents (e.g., bees) who collaborate and coordinate without centralized control or management (e.g., without a controlling queen-bee). These self-organizing systems are inherently more effective at various tasks (e.g., decision-making), as they leverage the collective intelligence and localized knowledge of multiple stakeholders. Unlike isolated autonomous systems where each entity operates independently or centrally managed systems where decision-making is concentrated at a single point (e.g., policymakers using AI models for decision support), self-organizing networks capitalize on decentralized behavior. Such networks are highly responsive to environmental variances and common needs, enabling them to adapt behavior in a more efficient and sustainable manner. For the decentralized decision-making process in the nest selection of bees, please refer to [
64].
Comparative studies indicate that such self-organizing systems can, in specific contexts, outperform their centralized, hierarchical counterparts. For instance, studies suggest that self-organizing networks of farmers are inherently more adept at effective water management than either isolated autonomous systems or centrally managed ones [
65]. This view contradicts the narrative that technological interventions, often driven by AI, are universally superior for optimizing resource management.
4.3. Effortless Paradigm
The Daoist philosophy, with its emphasis on harmony, balance, and purposeful non-action, presents an enriching lens through which to examine the principles of sustainability [
66]. Rooted in ancient Chinese thought, Daoism accentuates the interconnectedness of all things and advocates for a harmonious coexistence with the natural world. This ethos remarkably aligns with the contemporary imperatives of sustainable development, offering an alternative framework that extends modern sustainability paradigms [
67]. Daoism guides us to not just optimize but to transform—fostering behaviors that are intrinsically aligned with the cycles and balances of ecological systems. Thus, Daoism enriches the discourse on sustainability by introducing holistic, long-term perspectives that echo its age-old wisdom, potentially reshaping our understanding of what it means to live sustainably.
Approximately 2500 years ago, Laozi, the founding figure of Daoism, articulated the principle that human activities should align with the laws of the Earth. In Daoism, the concept of “wu-wei”, originally signifying ‘no (wu) effort/work (wei)’, proves to be a deeply profound idea that has historically been challenging. Wu-wei does not mean to do nothing. Instead, it suggests consciously not doing too much, or simply unforced action, flowing naturally from the circumstances [
68].
The Daoist approach does not align closely with the principles of Agenda 2030, but it offers an alternative framework that could profoundly impact the interpretation and implementation of sustainability [
67]. SDGs 12 and 13 focus on responsible consumption and climate action, respectively. Traditional interpretations of these goals have emphasized energy management, optimization, and minimal waste, but the Daoist approach suggests a more transformative orientation. Specifically, SDG 12 would not merely be about making production and consumption less harmful, but about genuinely consuming less. This aligns with wu-wei: by deliberately doing less, we may actually achieve more in terms of sustainability and ecological balance.
Similarly, SDG 13, which calls for urgent action to combat climate change, could benefit from the Daoist counsel to “act less”. This does not advocate neglecting climate change; on the contrary, it recognizes that sometimes the best action is restrained and thoughtful rather than overpowering and effort-intensive. A Daoist interpretation would guide us towards climate solutions that are not just about doing more—more technology, more control, more mitigation—but about judiciously doing less of what harms the planet.
5. Navigating beyond Metrics: A Paradigm Shift
The growing preoccupation with metrics in assessing the environmental footprint of AI systems [
55,
56] may serve to commodify nature in an unsettling manner. Firstly, this approach reduces the multi-dimensional value of natural resources and ecosystems to mere economic terms, facilitating their acquisition and regulation through market-based mechanisms [
69]. This simplification inadvertently advances a transactional paradigm wherein nature is reduced to a series of assets to be optimized rather than an interconnected system to be respected. Secondly, the complex, adaptive nature of ecological systems poses a significant challenge to the accurate measurement of AI’s environmental impact and contribution. Given the intricate web of dependencies and feedback loops that characterize natural ecosystems, it is overly optimistic, if not naive, to presume that all potential ramifications can be captured through quantifiable metrics.
A dependence on metrics not only steers us away from the harmonious approach required for true sustainability, but it also cultivates a narrow view that may overlook the broader, often immeasurable aspects of ecological influence.
In the quest to place AI within the vision of sustainability, this section takes the first steps towards synthesizing the three interrelated paradigms: the Effortless Paradigm, the Ecological Principle, and Self-Organization. Together, they possess the qualities inherent in the nature of sustainability itself—adaptability, balance, harmony, and a recognition of interconnectedness.
This unified paradigm calls for a shift because it is predicated on the pursuit of synergy over subjugation, signaling a move from AI that seeks to dominate natural and societal systems to one that seeks harmonious integration. Daoism’s principle of ‘wu-wei’ signifies unforced action attuned to the natural world. The philosopher Martin Buber interprets “nondoing” as a noncoercive and responsive doing, and it is claimed to be the key experience and conception that Europe can learn from Chinese philosophy in order to temper its thirst for power and domination over things and over others [
70].
Self-organization becomes a metaphor for AI development that seeks synergy rather than subjugation. The ecological concepts become blueprints for AI systems that enhance our ecosystems, championing a development ethos that values coexistence and mutual benefit. The synergy of these paradigms may lead to emerging outcomes where AI is developed not as a forceful tool of human ambition but as an effortless, integrated, and responsible participant in the broader ecological and societal context.
The applications of this new paradigm within AI are multifaceted, offering a spectrum of possibilities for the future direction of the field. Herein, we outline initial steps that exemplify the paths AI development can take, aligning with our approach.
The Effortless Paradigm invites us to reimagine AI as a system that operates with a sense of ease and unforced functionality. One such interpretation could be to promote incremental learning, which stands out within machine learning for its ability to continuously assimilate new knowledge from incoming data without the necessity of revisiting the entire original dataset [
71]. This characteristic allows for modifications and improvements to be made to the model in a dynamic, ongoing manner, circumventing the exhaustive process typically associated with training a completely new model from scratch. By embracing incremental learning, we enable AI systems to grow and evolve efficiently, mirroring the adaptability of living organisms. This adaptive learning process ensures that AI applications remain relevant and effective within the dynamic contexts in which they operate.
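A minimal sketch of this idea, assuming a scikit-learn estimator that supports partial_fit, is shown below; the streaming batches and the simple synthetic labels are hypothetical and serve only to show that the model is updated without revisiting earlier data.

```python
# Sketch of incremental learning: the model is updated batch by batch instead of
# being retrained from scratch. Dataset and model choices are illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])                       # label set must be declared up front

rng = np.random.default_rng(0)
for _ in range(10):                              # ten batches arriving over time
    X_batch = rng.normal(size=(100, 5))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)   # assimilate new data only

X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print(model.score(X_test, y_test))               # evaluate after incremental updates
```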
The Ecological Principle broadens our understanding of AI’s role within the web of life; one interpretation may encourage AI designers and developers to undertake a mapping of the stakeholders relevant to their task. This initial step is crucial in cultivating awareness of the AI’s contextual ecosystem and promoting a conscientious approach to its deployment. By identifying and understanding the various parties affected—ranging from direct users to indirect ecological and social entities—developers can anticipate and mitigate potential adverse impacts, thus fostering, for example, balance when obtaining fair data representations [
72].
Self-Organization champions decentralized AI; in a fleet of autonomous vehicles, for example, decentralization enables individual cars to make real-time decisions based on immediate environmental inputs and intra-fleet communication [
73]. This not only optimizes traffic flow and reduces congestion, but also reduces central control in the system, a desirable value in itself.
Within the context of our proposed paradigm, which emphasizes harmony, adaptability, and less intervention, transfer learning [
74] may emerge as a favored approach. This technique aligns well with the ethos of reducing unnecessary computational strain and resource consumption, as it leverages pre-trained models and adapts them to new but related tasks, avoiding the need to build and train models from the ground up for every unique application. Operationalizing transfer learning within AI systems means embracing the interconnectedness of knowledge domains. It allows us to draw from a vast pool of pre-existing models (such as models on musical preferences), fine-tuning them with a specific, often smaller set of data that are relevant to the new context (for example, film preferences).
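The sketch below shows one common, hypothetical form of transfer learning: a pre-trained vision backbone is frozen and only a small task-specific head is trained. We use an image model for concreteness, but the same pattern would apply to the preference-model example mentioned above; the backbone choice, head size, and data are illustrative assumptions.

```python
# Sketch of transfer learning: reuse a pre-trained backbone, freeze its weights,
# and train only a small task-specific head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained on ImageNet
for param in backbone.parameters():
    param.requires_grad = False                        # keep pre-trained knowledge fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 3)   # new head for a 3-class task
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

x = torch.randn(8, 3, 224, 224)                        # a small batch of new-task images
labels = torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(backbone(x), labels)
loss.backward()
optimizer.step()                                       # only the head is updated
```

Because only the small head is optimized, far fewer parameters are updated per step than in training from scratch, which is precisely the reduction in computational strain the paradigm favors.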
6. Conclusions
In conclusion, we recognize that the essence of sustainability cannot be distilled into a checklist or a set of prescriptive guidelines. Instead, it is a vision and aspiration for harmony and balance. The paradigm we advocate for in AI development is reflective of this vision—emphasizing adaptability, balance, harmony, and a profound recognition of interconnectedness. This paradigm is a living, dynamic, and responsive concept, and as such, it can be intertwined with new technology.
Our exploration has led us to propose a first step in the synthesis of the Effortless Paradigm, the Ecological Principle, and the notion of Self-Organization, forging a path for AI development that aligns with the natural order and operates within the principles of ecological integrity and ethical stewardship. This new paradigm reimagines AI not as a force dominating nature but as a facilitator of natural harmony.
The enthusiasm with which AI has been embraced as a solution to sustainability challenges must be tempered with caution. We must guard against the ‘hammer-and-nail’ syndrome, where AI becomes the go-to solution for all problems, potentially sidelining the ethical and ecological imperatives that are the hallmark of true sustainability. Hence, we promote the judicious and thoughtful application of AI, emphasizing that its use should extend beyond mere convenience and truly align with the complex and diverse dimensions of sustainability.
The current emphasis on metrics to evaluate AI’s environmental footprint risks simplifying nature into economic terms, overlooking the inherent complexity and value of natural ecosystems. In this vein, we call for a paradigm shift: the pursuit of synergy over subjugation, signaling a move from AI that seeks to dominate natural and societal systems to one that seeks harmonious integration. The need for such an approach is clear: it is not merely an option but an imperative for navigating the complexities of our present and future challenges. By adopting a multidimensional approach that synthesizes technological innovation with the vision for harmony, we can hope to forge a future that honors both human creativity and the natural systems that sustain life.