Article

A Systematic Approach to Autonomous Agents

by Gordana Dodig-Crnkovic 1,* and Mark Burgin 2,†
1 Department of Computer Science and Engineering, Chalmers University of Technology, 412 96 Gothenburg, Sweden
2 Department of Computer Science, University of California, Los Angeles (UCLA), Los Angeles, CA 90095, USA
* Author to whom correspondence should be addressed.
† Deceased author.
Philosophies 2024, 9(2), 44; https://doi.org/10.3390/philosophies9020044
Submission received: 30 January 2024 / Revised: 8 March 2024 / Accepted: 24 March 2024 / Published: 27 March 2024
(This article belongs to the Special Issue in Memory of Professor Mark Burgin)

Abstract: Agents and agent-based systems are becoming essential in the development of various fields, such as artificial intelligence, ubiquitous computing, ambient intelligence, autonomous computing, and intelligent robotics. The concept of autonomous agents, inspired by the observed agency in living systems, is also central to current theories on the origin, development, and evolution of life. Therefore, it is crucial to develop an accurate understanding of agents and the concept of agency. This paper begins by discussing the role of agency in natural systems as an inspiration and motivation for agential technologies and then introduces the idea of artificial agents. A systematic approach to the classification of artificial agents is presented. This classification aids in understanding the existing state of artificial agents and projects their potential future roles in addressing specific types of problems with dedicated agent types.

1. Prologue

This paper is dedicated to the memory of Professor Mark Burgin and is based on our common research. Over 15 years of fruitful collaboration, beginning with our first meeting at Stephen Wolfram’s NKS 2007 conference at the University of Vermont, Burlington, USA [1], we discussed fundamental questions of information science and computation [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Our discussions included the taxonomies of computation and information, as well as the methodological and philosophical aspects of these research areas.
In 2014, we started studying the concept of agency and its importance for artifactual/technological systems, which resulted in a preprint [3] on a systematic approach to artificial agents. However, the focus of our collaboration shifted to shared community-serving projects within the IS4SI society (International Society for the Study of Information), such as conference organization and book editing, while we concurrently worked on the conceptual consolidation of fundamental knowledge in the fields of information and computation. Consequently, the article on agents remained in preprint form [3], awaiting a time when we could revisit the topic, complete the article, and publish it in a journal. Regrettably, that time never arrived.
As a tribute to Professor Burgin, I have decided to publish an updated version of our preprint “A Systematic Approach to Artificial Agents”, adding an explanation of how the concepts of autonomous agency and agent-based models emerged from observing living systems. This update also discusses the increasing significance of agent-based models in various fields, such as biology, evolution, and artificial intelligence, and includes new connections, clarifications, and references.
Currently, the concept of an autonomous agent—which communicates information (with data as its atomic element), processes it, and acts upon it—plays an increasingly prominent role in our understanding of natural phenomena, as well as in the networks of computational artifacts that concurrently process and exchange information. Social networks of agents and ecologies are additional areas where agent-based models are vitally important.

2. Living Agency as Inspiration for Artificial Agents

Life is characterized by organisms’ ability to act in the world. Historically, humans were considered the only autonomous agents in nature. The classical humanist understanding of agency starts with a “notion of humans as a certain kind of preformed willful agent” [18].
However, further research revealed autonomous agency as a feature of living systems from the single cell up [19,20,21,22,23,24]. Historical perspectives on biological agency and autonomy can be found in [21,22,23,25,26,27].
Biological agency refers to the capacity of living organisms to act autonomously and make decisions based on their internal processes and external stimuli. It encompasses the ability of organisms to sense their environment, process information, and respond to changes in ways that promote their survival and reproduction. Biological agency is a fundamental concept in the field of biology and is often explored in the context of topics such as behavior, genetics, and evolution. It helps us understand how organisms interact with their surroundings and adapt to various challenges.
The agency of living organisms was an inspiration for artificial agents.
Okasha, in “The Concept of Agent in Biology: Motivations and Meanings” [28], defines a “minimal concept” of agency as “simply that of doing something or behaving”.
Barandiaran et al., in “Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-temporality in Action”, provide the following three conditions for an agent [29]:
(a) a system must define its own individuality;
(b) it must be the active source of activity in its environment (interactional asymmetry); and
(c) it must regulate this activity in relation to certain norms (normativity).
For biological systems, the distinction between constitutive and interactive autonomy is important, as described by Maturana [30]. Moreno and Mossio explain the relationship as follows:
“(W)e argue that autonomy involves also an interactive dimension, enabling biological systems to maintain themselves in an environment. We will refer to this interactive dimension as agency. A system that realizes constitutive closure (metabolism) and agency, even in a minimal form, is an autonomous system, and therefore a biological organism.”
[20]
Sultan, Moczek, and Walsh [31] introduce a biological agency perspective that points out how the response capacities of organisms shape phenotypic expression, inheritance, and trait innovation.
“Prevailing approaches to the causes of development, inheritance, and innovation, we argue, should be augmented by explanations that fully take into account biological agency—ways that organisms themselves actively shape their own structure and function.”
Agency can be described as the capacity of acting while behavior is the way of acting.
With regard to the classification of behavior, Rosenblueth, Wiener, and Bigelow [32] (p. 21) state that
“Cognitive capacities of diverse body forms occupy a gradient of increasing agency and self-determination, starting from purely reactive processes to those which have feedback, learning, memory, anticipation, and the ability to modify their own goals and model themselves and counterfactual conditions within the external world.”
Taking the agential view of organisms seriously changes the understanding of the processes of evolution and development. This means that evolution does not proceed only and primarily through random variations but also through the goal-directed behavior of biological agents [24,31,33,34,35,36,37,38,39,40].
“This means that we ought to take agency seriously—to better understand the concept and its role in explaining biological phenomena—if we aim to obtain an organismic theory of evolution in the original spirit of Darwin’s struggle for existence. This kind of understanding must rely on an agential perspective on evolution, complementing and succeeding existing structural, functional, and processual approaches.”
[24] (p. 159)
Levin talks about “Darwin’s agential materials” and the evolutionary implications of the multiscale structure of biological competencies as they appear in developmental biology [41].

3. Introduction

Artificial agents are advanced tools used to achieve various goals and solve problems. The main difference between ordinary tools and agents is that agents can function more or less independently from those who delegated agency to them. For a long time, people used only other people and sometimes animals as their agents. Developments in information processing technology, computers, and their networks have made it possible to build and use artificial agents. At present, the most popular approaches in artificial intelligence are based on agents.
Over the past decade, research on agents and multi-agent systems has significantly matured, and these technologies have been effectively integrated into real-world applications. They now serve as a foundation, providing key abstractions for tackling complex issues in distributed systems, interactive processes, concurrency, autonomy, reactivity, decentralization, and dynamic adaptation.
Intelligent agents form a basis for many kinds of advanced software systems that incorporate varying methodologies, diverse sources of domain knowledge, and a variety of data types. The intelligent agent approach has been applied extensively in business applications, and more recently in medical decision support systems [42,43] and ecology [44]. In the general paradigm, the human decision maker is considered to be an agent and is incorporated into the decision process. The overall decision is facilitated by a task manager that assigns subtasks to the appropriate agent and combines conclusions reached by agents to form the final decision.

4. The Concept of an Artificial Agent

There are several definitions of intelligent software agents [18,45,46,47,48]. However, they describe rather than define agents in terms of their tasks, autonomy, and communication capabilities. Some of the major definitions and descriptions of agents are provided in Jansen [49].
  • Agents are semi-autonomous computer programs that intelligently assist the user with computer applications, employing artificial intelligence techniques to help with daily computer tasks, such as reading electronic mail, maintaining a calendar, and filing information. Agents learn through example-based reasoning and can improve their performance over time.
  • Agents are computational systems that inhabit some complex, dynamic environment and sense and act autonomously to realize a set of goals or tasks.
  • Agents are software robots that think and act on behalf of a user to carry out tasks. Agents will help meet the growing need for more functional, flexible, and personal computing and telecommunications systems. Uses for intelligent agents include self-contained tasks, operating semi-autonomously, and communication between the user and systems resources.
  • Agents are software programs that implement user delegation. Agents manage complexity, support user mobility, and lower the entry level for new users. Agents are a design model similar to client-server computing, rather than strictly a technology, program, or product.
Franklin and Graesser [50] have collected and analyzed a more extended list of definitions:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors, Russell and Norvig [51].
Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment. By doing so they realize a set of goals or tasks for which they are designed, Maes [52].
Let us define an agent as a persistent software entity dedicated to a specific purpose. ‘Persistent’ distinguishes agents from subroutines; agents have their own ideas about how to accomplish tasks, their own agendas. ‘Special purpose’ distinguishes them from entire multifunction applications; agents are typically much smaller, Smith, Cypher, and Spohrer [53].
Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions, Hayes-Roth [54].
Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user’s goals or desires, Gilbert [55].
An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future, Franklin and Graesser [50].
Tosic and Agha [56] distinguish between two types of autonomous agency (see the minimal sketch after the list):
  • Weak autonomous agency ≈ control of own state + reactivity + persistence.
  • Strong autonomous agency ≈ weak autonomous agency + goal-orientation + pro-activity.
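To make this layered distinction concrete, the following minimal sketch (our illustrative reading, with hypothetical class names, not code from Tosic and Agha) expresses weak autonomy as control of one's own state plus reactivity plus persistence, and strong autonomy as that same interface extended with a goal and proactive behavior.

```python
from abc import ABC, abstractmethod


class WeakAutonomousAgent(ABC):
    """Weak autonomous agency ~ control of own state + reactivity + persistence."""

    def __init__(self) -> None:
        self.state: dict = {}        # the agent controls its own state

    @abstractmethod
    def react(self, event: str) -> None:
        """Respond to an external event (reactivity)."""

    def persists(self) -> bool:
        """The agent keeps existing between interactions (persistence)."""
        return True


class StrongAutonomousAgent(WeakAutonomousAgent):
    """Strong autonomous agency ~ weak agency + goal-orientation + pro-activity."""

    def __init__(self, goal: str) -> None:
        super().__init__()
        self.goal = goal             # goal-orientation

    @abstractmethod
    def act_proactively(self) -> None:
        """Initiate activity without waiting for an external trigger (pro-activity)."""
```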
As we mentioned in the Introduction, there are also natural and social agents. For instance, the term “agent” in the context of business or economic modeling refers to natural real-world objects, such as organizations, companies, or people. These real-world objects are capable of displaying autonomous behavior. They react to external events and are capable of initiating activities and interacting with other objects (agency).
Thus, it is reasonable to assume that an agent is anything (or anybody) that can be viewed as perceiving its environment through sensors and acting upon this environment through effectors. A human agent has eyes, ears, and other organs for sensors, with hands, legs, mouth, and other body parts for effectors. A robotic agent uses cameras, infrared range finders, and other sensing devices as sensors, and it uses various body parts as effectors. A software agent has communication channels both for sensors and effectors.
This gives us the following informational structure of an agent, reflecting the agent’s information flows: Raw information (Receptors) leads to Descriptive information (Processors), which leads to Prescriptive information (Effectors).
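As a minimal illustration of this receptors–processors–effectors flow (a hypothetical sketch of ours, not an implementation from the literature), each stage transforms one kind of information into the next:

```python
class SimpleInformationAgent:
    """Toy agent structured as receptors -> processors -> effectors."""

    def sense(self, raw_signal: float) -> dict:
        # Receptor: raw information -> descriptive information
        return {"temperature": raw_signal}

    def decide(self, description: dict) -> str:
        # Processor: descriptive information -> prescriptive information
        return "cool_down" if description["temperature"] > 25.0 else "idle"

    def act(self, prescription: str) -> None:
        # Effector: prescriptive information -> action on the environment
        print(f"executing action: {prescription}")

    def step(self, raw_signal: float) -> None:
        self.act(self.decide(self.sense(raw_signal)))


agent = SimpleInformationAgent()
agent.step(30.0)   # prints: executing action: cool_down
```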

5. Typology of Agents

There are different types of intelligent agents [57]. For instance, Russell and Norvig [51] define an agent as a program that perceives and acts in an environment.
Russell and Norvig consider the following four types (the first two are illustrated in the sketch after the list):
- Simple reflex (or tropistic, or behavioristic) agents, which respond immediately to percepts.
- Agents with memory, which maintain an internal state used to keep track of past states of the world.
- Goal-based agents, which, in addition to state information, have goal information that describes desirable situations.
- Utility-based agents, which base their decisions on classic axiomatic utility theory in order to act rationally.
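The sketch below (our own hypothetical example, not taken from Russell and Norvig) contrasts the first two types: the reflex agent maps the current percept directly to an action through a fixed rule table, while the agent with memory also consults an internal state accumulated from past percepts.

```python
class ReflexAgent:
    """Simple reflex agent: responds immediately to the current percept."""

    RULES = {"dirty": "clean", "clean": "move"}   # percept -> action

    def act(self, percept: str) -> str:
        return self.RULES.get(percept, "wait")


class MemoryAgent(ReflexAgent):
    """Agent with memory: keeps an internal state tracking past percepts."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def act(self, percept: str) -> str:
        self.history.append(percept)              # update the internal state
        if self.history.count("dirty") > 3:       # a decision based on past states
            return "request_maintenance"
        return super().act(percept)


agent = MemoryAgent()
print([agent.act(p) for p in ["dirty", "clean", "dirty", "dirty", "dirty"]])
# ['clean', 'move', 'clean', 'clean', 'request_maintenance']
```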
The general structure of the world in the form of the Existential Triad gives us the following three classes of agents:
- Physical agents.
- Virtual agents.
- Structural or information agents.
People, animals, and robots are examples of physical agents. Software agents and cognitive processes are examples of virtual agents. The head of a Turing machine (cf., for example, Burgin [58]) is an example of a structural agent.
Physical agents belong to the following three classes:
- Biological agents.
- Artificial agents.
- Hybrid agents.
People, animals, and microorganisms are examples of biological agents. Robots are examples of artificial agents. Hybrid agents consist of biological and artificial parts (see Venda [59]).
Mizzaro [60] classifies agents according to three parameters (perception, reasoning, and memory):
- Perceiving agents, with various perception levels. Complete perceiving agents, which have a complete perception of the world, constitute the highest level of perceiving agents. Opposite to perceiving agents, there are no-perception agents, which are completely isolated from their environment. This does not fit the definition of Russell and Norvig [51], but it is consistent with a more general definition of an agent.
- Reasoning agents, with various reasoning capabilities. Reasoning agents derive new knowledge items from their existing knowledge state. At the highest level of reasoning agents, we have omniscient agents, which are capable of actualizing all their potential knowledge by logical reasoning. Opposite to reasoning agents, Mizzaro [60] identifies non-reasoning agents, which are unable to derive new knowledge items from the knowledge they already possess.
- Memorizing agents, which are permanent-memory agents, no-memory agents, or volatile-memory agents. Humans are volatile-memory agents.
Here, we suggest a classification of agents based on their attributive dimensions.
(I) According to the cognitive/intelligence criterion, there are:
- Reflex (or tropistic, or behavioristic) agents, which realize the simple schema action–reaction.
- Model-based agents, which have a model of their environment.
- Inference-based agents, which use inference in their activity.
- Predictive (prognostic, anticipative) agents, which use prediction in their activity.
- Evaluation-based agents, which use evaluation in their activity.
Some of these classes are also considered in Russell and Norvig [51].
Note that prediction and/or evaluation do not necessarily involve inference.
(II) According to the dynamic criterion, there are:
- Static agents, which do not move (at least, not by themselves), e.g., a desktop computer.
- Mobile agents, which can move with some degree of freedom.
- Effector-mobile agents, which have effectors that can move.
- Receptor-mobile (sensor-mobile) agents, which have receptors that can move.
Note that mobility can be realized on different levels and to different degrees.
(III) According to the interaction criterion, there are:
- Deliberative (proactive) agents, which anticipate what is going to happen in their environment and organize their activity taking into account these predictions.
- Reactive agents, which react to changes in the environment.
- Inactive agents, which do the same thing independently of what happens in the environment.
(IV) According to the autonomy criterion, there are:
- Controlled agents.
- Dependent agents.
- Autonomous agents.
There are many kinds and levels of dependence. Control is considered the highest level.
(V) According to the learning criterion, there are:
- Conservative agents, which do not learn at all.
- Remembering agents, which realize the lowest level of learning, namely remembering or memorizing.
- Learning agents.
(VI) According to the cooperation criterion, there are:
- Individualistic agents, which do not interact with other agents.
- Competitive agents, which do not collaborate but only compete.
- Collaborative agents.
There are many kinds and levels of competition. For instance, it can be competition at any cost or competition according to definite (e.g., moral) rules and/or principles. Collaboration can also take different forms.
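Read together, criteria (I)–(VI) describe an agent as a point in a space of independent attributive dimensions. The following encoding (a hypothetical illustration of ours, not a scheme from the cited literature) makes this explicit; the algorithmic criterion introduced below could be added as one more field.

```python
from dataclasses import dataclass
from enum import Enum

# One enumeration per classification criterion (I)-(VI).
Cognition = Enum("Cognition", "REFLEX MODEL_BASED INFERENCE_BASED PREDICTIVE EVALUATION_BASED")
Mobility = Enum("Mobility", "STATIC MOBILE EFFECTOR_MOBILE RECEPTOR_MOBILE")
Interaction = Enum("Interaction", "DELIBERATIVE REACTIVE INACTIVE")
Autonomy = Enum("Autonomy", "CONTROLLED DEPENDENT AUTONOMOUS")
Learning = Enum("Learning", "CONSERVATIVE REMEMBERING LEARNING")
Cooperation = Enum("Cooperation", "INDIVIDUALISTIC COMPETITIVE COLLABORATIVE")


@dataclass
class AgentProfile:
    """An agent is characterized by one value along each attributive dimension."""
    cognition: Cognition
    mobility: Mobility
    interaction: Interaction
    autonomy: Autonomy
    learning: Learning
    cooperation: Cooperation


# Example: a thermostat-like device is reflex, static, reactive,
# controlled, non-learning, and individualistic.
thermostat = AgentProfile(Cognition.REFLEX, Mobility.STATIC, Interaction.REACTIVE,
                          Autonomy.CONTROLLED, Learning.CONSERVATIVE,
                          Cooperation.INDIVIDUALISTIC)
```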
There is one more dimension (criterion for classification), which underlies all the others: the algorithmic dimension. Indeed, an agent can perform operations (e.g., building a model of the environment or making an evaluation) and actions (e.g., moving from one place to another) in different modes and using various types of algorithms.
(VII) According to the algorithmic criterion, there are:
- Sub-recursive agents, which use only sub-recursive algorithms, e.g., finite automata.
- Recursive agents, which use any recursive algorithms, such as Turing machines, random access machines, Kolmogorov algorithms, or Minsky machines.
- Super-recursive agents, which can use super-recursive algorithms, such as inductive Turing machines or trial-and-error machines.
The difference between recursive and super-recursive agents is that, at some moment after receiving or formulating a task and starting to fulfill it, a recursive agent will stop and inform the operator that the task is fulfilled. In a similar situation, a super-recursive agent can fulfill tasks that do not demand stopping. For instance, a program of a satellite computer that observes changes in the atmosphere for weather prediction has to carry out observations all the time because there is no moment at which all these observations are completed. As a result, super-recursive agents can perform many more tasks and solve many more problems than recursive agents (cf., for example, [61]).
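The toy sketch below (ours; an informal analogy, not a formal model of inductive Turing machines) illustrates the operational difference: the recursive agent computes its answer and halts, whereas the monitoring agent keeps emitting a tentative answer after every observation and never halts by itself; its result is whatever value the outputs eventually stabilize on.

```python
from itertools import count


def recursive_agent(task_input: int) -> int:
    """Recursive agent: computes a result and then halts."""
    return task_input * task_input


def monitoring_agent(readings):
    """Inductive-style agent: after every observation it yields its current
    best answer; the computation itself is never 'finished'."""
    running_max = float("-inf")
    for reading in readings:
        running_max = max(running_max, reading)
        yield running_max                 # tentative result, always available


print(recursive_agent(7))                 # 49, and the agent stops


def sensor_stream():                      # unbounded stream of observations
    for t in count():
        yield (t * 4) % 11                # stand-in for atmospheric readings


agent = monitoring_agent(sensor_stream())
print([answer for _, answer in zip(range(6), agent)])
# [0, 4, 8, 8, 8, 9] -- the tentative answers eventually stabilize,
# but the agent itself never halts
```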
A cognitive agent has a system of knowledge K. Such an agent perceives information from the world, and this information changes its initial knowledge state, i.e., the state of the system K.
In general, agents may be usefully classified according to the subset of these properties that they enjoy, Franklin and Graesser [50]. When properties are organized in definite classes, it is possible to use sub-classification schemes via control structures, environments (database, file system, network, Internet), language, or applications. For instance, the distinction between data-based and knowledge-based agents is based on one part of the agent’s environment, namely the source of information. Generalizing the approach of Franklin and Graesser [50], we can classify agents by their internal and external components. For instance, a control structure is an internal component, while a source of information is an external component. A slightly different approach to the taxonomy of agent properties is based on an aspect of an agent, for example, on agent functions. Thus, the separation of signal and image analysis agents is related to agent functions.
Brustoloni [62] offers another classification by function, distinguishing regulation, planning, and adaptive agents.
Different types of automata can be associated with the types of agents. Reflex agents may be modeled by automata without memory and are represented by decision tables. All other types of agents demand memory [51]. Agents of the third and higher levels additionally need a sufficiently powerful processor, whose power varies with the level of the agent. Agents that perform simple tasks may use a finite-automaton processor. More sophisticated agents demand processors that perform inference and have the computational power of Turing machines. Processors and program systems for intelligent agents have to utilize super-recursive automata and algorithms, Burgin [58].

6. Conclusions and Future Work

Agency and agent-based solutions are becoming increasingly interesting for a wide variety of classes of problems, and increasingly important as we face ubiquitous computing, ambient intelligence, autonomous computing, intelligent systems, and intelligent robotics, to name but a few emerging areas. In this paper, we presented an extended classification of autonomous agents as a contribution to a systematic approach. This classification aims at a better understanding of what kinds of agents can be created and what types of problems demand a specific kind of agent for their solution. It is crucial for both conceptual and practical advancement to enhance our fundamental understanding of agents: their nature, capabilities, and potential for development and action.
Autonomous agents are advanced AI-driven programs that operate towards a defined goal. They are capable of independently generating, executing, and dynamically prioritizing tasks in a continuous loop until their intended objective is achieved. These agents can take on a wide array of activities, from managing social media accounts to making investment decisions in financial markets and even writing poetry. The underlying programming techniques and AI technologies, which include generative models like GPT, represent a cutting-edge frontier in the field of artificial intelligence.
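A rough, hypothetical sketch of such a continuous loop is given below (our illustration of the general pattern, not any particular product's implementation): the agent repeatedly picks the highest-priority task, executes it, and may add newly generated tasks back into the queue until the goal is judged to be achieved or no tasks remain.

```python
import heapq


def run_autonomous_agent(goal, initial_tasks, execute, propose, goal_reached):
    """Generic task loop: prioritize -> execute -> generate new tasks.

    `execute`, `propose`, and `goal_reached` are caller-supplied callables
    standing in for the LLM- or rule-based components of a real agent."""
    queue = list(initial_tasks)                       # (priority, task) pairs
    heapq.heapify(queue)
    results = []
    while queue and not goal_reached(goal, results):
        _, task = heapq.heappop(queue)                # highest-priority task first
        results.append(execute(task))                 # act, record the outcome
        for item in propose(task, results):
            heapq.heappush(queue, item)               # dynamic re-prioritization
    return results


# Toy usage: the "goal" counts as reached after two executed tasks.
out = run_autonomous_agent(
    goal="write a short report",
    initial_tasks=[(1, "collect sources")],
    execute=lambda task: f"done: {task}",
    propose=lambda task, res: [(2, "draft outline")] if len(res) == 1 else [],
    goal_reached=lambda goal, res: len(res) >= 2,
)
print(out)   # ['done: collect sources', 'done: draft outline']
```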
An extensive list of autonomous agent use cases presented by Schlicht [63] includes agents used as personal assistants [64], as well as agents used in healthcare, education, finance, retail, manufacturing, agriculture, transportation, energy, legal system, real estate, entertainment, gaming, human resources, public safety, physical environment, natural resource optimization, space exploration, art and design, news, customer support, etc.
A review of the recent literature on agent-based models and multi-agent systems shows that ABMs are used in many scientific domains, including biology (e.g., population dynamics, stochastic gene expression, morphogenesis, evolution, development), ecology, epidemiology (the spread of epidemics and strategies to manage them), networks, economics, and even philosophy [65].
Looking at the frequency of articles with agent-based models, the following journals are representative: Ecological Modelling, Nature, Science, Journal of Theoretical Biology, Ecology, The American Naturalist, Trends in Ecology & Evolution, Lecture Notes in Artificial Intelligence, and Proceedings of the National Academy of Sciences.
Autonomous agents offer a new approach to building generative AI models. Autonomous agents are software programs that can act independently to achieve a goal. In the context of generative AI, autonomous agents can be used to generate content without the need for human intervention. Based on their autonomous capabilities, the collaborative use of autonomous AI agents as assistants and teammates opens new and exciting possibilities [66], where mechanisms of second-order cybernetics come into play [67] (pp. 283–286). As second-order cybernetics suggests, humans and AI agents are part of a shared system. Autonomous AI agents could be designed not just to perform tasks but to observe and adapt to the behavior and feedback of human users. The agents would adjust their actions based on this interaction, leading to a more intuitive and responsive collaboration.
The future of generative AI is exciting and full of possibilities. Autonomous agents have the potential to make a real difference in our world, and we are only just beginning to explore that potential.
These are just a few examples of the many ways that autonomous agents could be used to generate content in the future. As autonomous agents continue to develop, we can expect to see even more innovative and creative applications of this technology.
Apart from being used in a variety of practical applications [68], agent models are used in sciences to study the behavior of complex systems—physical, biological (including artificial life), economic, or social [69]. Agent models are even used as a metaphor for the real world, as in Karen Barad’s agential realism, conceiving the world as intra-acting agencies, where entities emerge from their interactions rather than pre-existing them [70].
Today’s large language models are just the start of the generative AI revolution; coming next are autonomous agents that work independently to achieve an assigned goal. Autonomous agents can plan task execution, monitor the output, adapt, and use tools to accomplish goals. They can sense and act on their environment. Finally, building on generative AI’s ability to mimic human behavior, agents could make it possible to run simulations at a large scale for a wide range of products and services.

Author Contributions

Conceptualization, methodology, investigation, and writing of the original draft [3], G.D.-C. and M.B.; writing, review, and editing of the present extended version, G.D.-C. It was only possible for G.D.-C. to read the published version of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Swedish Research Council, VR grant MORCOM@COG.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The author would like to thank the reviewers for their encouraging, helpful, constructive, and instructive comments.

Conflicts of Interest

The author declares no conflicts of interest. The funders had no role in the design of this study, in the writing of this manuscript, or in the decision to publish the results.

References

  1. NKS 2007 Wolfram Science Conference. Available online: http://www.wolframscience.com/conference/2007/presentations (accessed on 29 January 2024).
  2. Dodig-Crnkovic, G.; Burgin, M. Philosophy and Methodology of Information: The Study of Information in a Transdisciplinary Perspective; World Scientific Publishing Co. Series in Information Studies; World Scientific Publishing Co.: Singapore, 2008; Volume 3. [Google Scholar]
  3. Burgin, M.; Dodig-Crnkovic, G. A Systematic Approach to Artificial Agents. arXiv 2009, arXiv:0902.3513. [Google Scholar]
  4. Dodig-Crnkovic, G.; Burgin, M. Philosophy and Methodology of Information; World Scientific: Singapore, 2018. [Google Scholar]
  5. Dodig-Crnkovic, G.; Burgin, M. The Study of Information in the Context of Knowledge Ecology. In Philosophy and Methodology of Information: The Study of Information in the Transdisciplinary Perspective; World Scientific: Singapore, 2019. [Google Scholar]
  6. Burgin, M.; Dodig-Crnkovic, G. A Multiscale Taxonomy of Information in the World. Theor. Inf. Studies. 2019, 11, 3–27. [Google Scholar]
  7. Dodig-Crnkovic, G.; Burgin, M. Recent Books Delineating the Emergent Academic Field of the Study of Information. Proceedings 2020, 47, 6. [Google Scholar] [CrossRef]
  8. Burgin, M.; Dodig-Crnkovic, G. Theoretical Information Studies: Information in the World; World Scientific Publishing Co. Series in Information Studies; World Scientific Publishing Co.: Singapore, 2020. [Google Scholar]
  9. Burgin, M.; Dodig-Crnkovic, G. Prolegomena to Information Taxonomy. Proceedings 2017, 1, 210–213. [Google Scholar] [CrossRef]
  10. Burgin, M.; Dodig-Crnkovic, G. Information and Computation—Omnipresent and Pervasive. In Information and Computation; World Scientific Pub Co Inc.: New York, NY, USA; London, UK; Singapore, 2011; pp. vii–xxxii. [Google Scholar]
  11. Burgin, M.; Dodig-Crnkovic, G. From the closed (axiomatic) universe to an open world. In Proceedings of the AISB/IACAP World Congress 2012: Natural Computing/Unconventional Computing and Its Philosophical Significance, Part of Alan Turing Year 2012, Birmingham, UK, 2–6 July 2012; The Society for the Study of Artificial Intelligence and Simulation of Behaviour: Swansea, UK, 2012. [Google Scholar]
  12. Dodig-Crnkovic, G.; Burgin, M. Axiomatic tools versus constructive approach to unconventional algorithms. In Proceedings of the AISB/IACAP World Congress 2012: Natural Computing/Unconventional Computing and Its Philosophical Significance, Part of Alan Turing Year 2012, Birmingham, UK, 2–6 July 2012. [Google Scholar]
  13. Dodig-Crnkovic, G.; Burgin, M. Information Dynamics in a Categorical Setting. In Information and Computation; World Scientific Publishing Co. Series in Information Studies; World Scientific Publishing Co.: Singapore, 2012; pp. 35–78. [Google Scholar] [CrossRef]
  14. Burgin, M.; Dodig-Crnkovic, G. The Nature of Computation and The Development of Computational Models. In Proceedings of the Computability in Europe 2013 (CiE 2013) the Nature of Computation, University of Milano-Bicocca, Milano, Italy, 1–5 July 2013. [Google Scholar]
  15. Burgin, M.; Dodig-Crnkovic, G. From the Closed Classical Algorithmic Universe to an Open World of Algorithmic Constellations. In Computing Nature; Studies in Applied Philosophy, Epistemology and Rational Ethics; Dodig-Crnkovic, G., Giovagnoli, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; Volume 7. [Google Scholar] [CrossRef]
  16. Burgin, M.; Dodig-Crnkovic, G. A Taxonomy of Computation and Information Architecture. In Proceedings of the 2015 European Conference on Software Architecture Workshops (ECSAW ’15), Cavtat, Croatia, 7–11 September 2015; Galster, M., Ed.; ACM Press: New York, NY, USA, 2015. [Google Scholar]
  17. Burgin, M.; Dodig-Crnkovic, G. Computation as Information Transformation. In Proceedings of the IS4IS Summit Vienna 2015, Vienna University of Technology, Vienna, Austria, 3–7 June 2015. [Google Scholar]
  18. Pickering, A. What Is Agency? A View from Science Studies and Cybernetics. Biol. Theory 2023, 19, 16–21. [Google Scholar] [CrossRef]
  19. Winning, J.; Bechtel, W. Review of Biological Autonomy by Alvaro Moreno and Matteo Mossio. Philos. Sci. 2016, 83, 446–452. [Google Scholar] [CrossRef]
  20. Moreno, A.; Mossio, M. Biological Autonomy: A Philosophical and Theoretical Enquiry; Springer: Dordrecht, The Netherlands, 2015; ISBN 978-94-017-9837-2. [Google Scholar]
  21. Winning, J.; Bechtel, W.; Moreno, A.; Mossio, M. Review of Biological Autonomy. Philos. Sci. 2024, 83, 446–452. [Google Scholar] [CrossRef]
  22. García-Valdecasas, M. On the naturalisation of teleology: Self-organisation, autopoiesis and teleodynamics. Adapt. Behav. 2021, 30, 103–117. [Google Scholar] [CrossRef]
  23. Montévil, M.; Mossio, M. Biological organisation as closure of constraints. J. Theor. Biol. 2015, 372, 179–191. [Google Scholar] [CrossRef]
  24. Jaeger, J. The Fourth Perspective: Evolution and Organismal Agency BT. In Organization in Biology; Mossio, M., Ed.; Springer International Publishing: Cham, Switzerland, 2024; pp. 159–186. ISBN 978-3-031-38968-9. [Google Scholar]
  25. Omankwu, O.C.; Nwagu, C.K.; Inyiama, H. Historic Perspective of Intelligent Agents. Int. J. Comput. Sci. Inf. Secur. (IJCSIS) 2017, 15, 119–123. [Google Scholar]
  26. Heylighen, F.; Busseniers, E. Modeling autopoiesis and cognition with reaction networks. Biosystems 2023, 230, 104937. [Google Scholar] [CrossRef]
  27. Kesić, S. Complexity and biocomplexity: Overview of some historical aspects and philosophical basis. Ecol. Complex. 2024, 57, 101072. [Google Scholar] [CrossRef]
  28. Okasha, S. The Concept of Agent in Biology: Motivations and Meanings. Biol. Theory 2023, 19, 6–10. [Google Scholar] [CrossRef]
  29. Barandiaran, X.E.; Di Paolo, E.; Rohde, M. Defining Agency: Individuality, Normativity, Asymmetry, and Spatio-temporality in Action. Adapt. Behav. 2009, 17, 367–386. [Google Scholar] [CrossRef]
  30. Maturana, H. Autopoiesis, Structural Coupling and Cognition: A history of these and other notions in the biology of cognition. Cybern. Hum. Knowing 2002, 9, 5–34. [Google Scholar]
  31. Sultan, S.E.; Moczek, A.P.; Walsh, D.M. Bridging the explanatory gaps: What can we learn from a biological agency perspective? BioEssays 2021, 44, e2100185. [Google Scholar] [CrossRef] [PubMed]
  32. Rosenblueth, A.; Wiener, N.; Bigelow, J. Behavior, Purpose and Teleology. Philos. Sci. 1943, 10, 18–24. [Google Scholar] [CrossRef]
  33. Walsh, D. Piaget’s Paradox: Adaptation, Evolution, and Agency. Hum. Dev. 2023, 67, 273–287. [Google Scholar] [CrossRef]
  34. Varela, F.; Thompson, E.; Rosch, E. The Embodied Mind: Cognitive Science and Human Experience; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  35. Frodeman, R.; Klein, J.T.; Mitcham, C. (Eds.) The Oxford Handbook of Interdisciplinarity; OUP Oxford: Oxford, UK, 2010; ISBN 9780199236916. [Google Scholar]
  36. Walsh, D.M.; Rupik, G. The agential perspective: Countermapping the modern synthesis. Evol. Dev. 2023, 25, 335–352. [Google Scholar] [CrossRef]
  37. Ball, P. Organisms as Agents of Evolution; John Templeton Foundation: West Conshohocken, PA, USA, 2023. [Google Scholar]
  38. Baluška, F.; Miller, W.B.; Reber, A.S. Cellular and evolutionary perspectives on organismal cognition: From unicellular to multicellular organisms. Biol. J. Linn. Soc. 2022, 139, blac005. [Google Scholar] [CrossRef]
  39. Torday, J.; Miller, W. Cellular-Molecular Mechanisms in Epigenetic Evolutionary Biology; Springer: Cham, Switzerland, 2020; ISBN 9783030381332. [Google Scholar]
  40. Corning, P.A.; Kauffman, S.A.; Noble, D.; Shapiro, J.A.; Vane-Wright, R.I. Evolution “On Purpose”: Teleonomy in Living Systems; The MIT Press: Cambridge, MA, USA, 2023; ISBN 9780262376013. [Google Scholar]
  41. Levin, M. Darwin’s agential materials: Evolutionary implications of multiscale competency in developmental biology. Cell. Mol. Life Sci. 2023, 80, 142. [Google Scholar] [CrossRef] [PubMed]
  42. Hsu, C.; Goldberg, H.S. Knowledge-mediated retrieval of laboratory observations. Proc. AMIA Symp. 1999, 23, 809–813. [Google Scholar]
  43. Lanzola, G.; Gatti, L.; Falasconi, S.; Stefanelli, M. A framework for building cooperative software agents in medical applications. Artif. Intell. Med. 1999, 16, 223–249. [Google Scholar] [CrossRef] [PubMed]
  44. Judson, O.P. The Rise of the Individual-based Model in Ecology. Trends Ecol. Evol. 1994, 9, 9–14. [Google Scholar] [CrossRef]
  45. Nwana, H.S. Software agents: An overview. Knowl. Eng. Rev. 1996, 11, 205–244. [Google Scholar] [CrossRef]
  46. Murch, R.; Johnson, T. Intelligent Software Agents; Prentice Hall PTR: Hoboken, NJ, USA, 1998; ISBN 0130110213. [Google Scholar]
  47. Rabuzin, K.; Maleković, M.; Bača, M. A survey of the properties of agents. J. Inf. Organ. Sci. 2006, 30, 29–54. [Google Scholar] [CrossRef]
  48. Dattathrani, S.; De’, R. The Concept of Agency in the Era of Artificial Intelligence: Dimensions and Degrees. Inf. Syst. Front. 2023, 25, 29–54. [Google Scholar] [CrossRef]
  49. Jansen, J. Using an Intelligent Agent to Enhance Search Engine Performance. First Monday 1997, 2. [Google Scholar] [CrossRef]
  50. Franklin, S.; Graesser, A. Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents. In Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages. ATAL 1996; Lecture Notes in Computer Science; Müller, J.P., Wooldridge, M.J., Jennings, N.R., Eds.; Springer: Berlin/Heidelberg, Germany, 1996; Volume 1193, pp. 21–35. [Google Scholar]
  51. Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach; Prentice-Hall: Englewood Cliffs, NJ, USA, 1995. [Google Scholar]
  52. Maes, P. Artificial life meets entertainment: Lifelike autonomous agents. Commun. ACM 1995, 38, 108–114. [Google Scholar] [CrossRef]
  53. Smith, D.C.; Cypher, A.; Spohrer, J. KidSim: Programming Agents without a Programming Language. Commun. ACM 1994, 37, 55–56. [Google Scholar] [CrossRef]
  54. Hayes-Roth, B. An Architecture for Adaptive Intelligent Systems. Artif. Intell. Spec. Issue Agents Interactivity 1995, 72, 329–365. [Google Scholar] [CrossRef]
  55. Gilbert, D. Intelligent Agents: The Right Information at the Right Time. Available online: https://fmfi-uk.hq.sk/Informatika/Uvod%20Do%20Umelej%20Inteligencie/clanky/ibm-iagt.pdf (accessed on 29 January 2024).
  56. Tosic, P.T.; Agha, G.A. Towards a hierarchical taxonomy of autonomous agents. In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), The Hague, The Netherlands, 10–13 October 2004; Volume 4, pp. 3421–3426. [Google Scholar]
  57. Jennings, N.R.; Wooldridge, M. Intelligent agents: Theory and practice. Knowl. Eng. Rev. 1995, 10, 115–152. [Google Scholar] [CrossRef]
  58. Burgin, M. Super-Recursive Algorithms; Springer: New York, NY, USA, 2005; ISBN 0387955690. [Google Scholar]
  59. Venda, V.F. Hybrid Intelligence Systems: Evolution, Psychology, Informatics. Mach. Eng. Ind. 1990, 448. (In Russian) [Google Scholar]
  60. Mizzaro, S. Towards a Theory of Epistemic Information; Kawaguchi, E., Kangassalo, H., Jaakkola, H., Eds.; IOS Press: Amsterdam, The Netherlands, 2001; ISBN 978-1-58603-163-3. [Google Scholar]
  61. Burgin, M. Nonlinear Phenomena in Spaces of Algorithms. Int. J. Comput. Math. 2003, 80, 1449–1476. [Google Scholar] [CrossRef]
  62. Brustoloni, J.C. Autonomous Agents: Characterization and Requirements. Carnegie Mellon Technical Report CMU-CS-91-204; Carnegie Mellon University: Pittsburgh, PA, USA, 1991. [Google Scholar]
  63. Schlicht, M. Autonomous Agent Use Cases. Available online: http://tinyurl.com/4zb5ve8w (accessed on 29 January 2024).
  64. Grochow, J.M. A taxonomy of automated assistants. Commun. ACM 2020, 63, 39–41. [Google Scholar] [CrossRef]
  65. Šešelja, D. Agent-Based Modeling in the Philosophy of Science. In the Stanford Encyclopedia of Philosophy (Winter 2023 Edition). Available online: https://plato.stanford.edu/archives/win2023/entries/agent-modeling-philscience (accessed on 29 January 2024).
  66. Hauptman, A.I.; Schelble, B.G.; McNeese, N.J.; Madathil, K.C. Adapt and overcome: Perceptions of adaptive autonomous agents for human-AI teaming. Comput. Hum. Behav. 2023, 138, 107451. [Google Scholar] [CrossRef]
  67. von Foerster, H. Understanding Understanding: Essays on Cybernetics and Cognition; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar]
  68. Golubin, A.B.; Kopnin, A.A. Agent-Based Modeling Using Artificial Intelligence as a Method for Creating Rational Consumption and Production Models. E3S Web Conf. 2023, 451, 04001. [Google Scholar] [CrossRef]
  69. Castiglione, F. Agent Based Modeling and Simulation, Introduction to BT. In Encyclopedia of Complexity and Systems Science; Meyers, R.A., Ed.; Springer: New York, NY, USA, 2009; pp. 197–200. ISBN 978-0-387-30440-3. [Google Scholar]
  70. Barad, K. Meeting the Universe Halfway; Duke University Press: Durham, NC, USA, 2007; ISBN 9780822388128. [Google Scholar]
