Intelligent agents, powered by artificial intelligence and machine learning, have become increasingly prevalent in decision-making tasks across fields such as video games [1], autonomous driving [2], and IoT systems [3], among others. An intelligent agent perceives its environment, takes autonomous actions to achieve goals, and can improve its performance through learning and knowledge acquisition [4]. Such agents are designed to learn from collected data, adapt to their environment, and make informed decisions to carry out tasks.
With the advent of the fourth industrial revolution, the lack of transparency in artificial intelligence-based systems has become a pivotal obstacle to their adoption, motivating the application of explainable AI (XAI) [5] to these systems. For instance, conventional intelligent agents are trained to perform tasks without considering the causal relationships underlying the problem they must solve, an issue that must be addressed. However, as pointed out by Maes et al. [6], incorporating causal inference into intelligent agents is challenging due to the numerous hidden variables within the model. Nevertheless, there has been a recent surge of interest in developing algorithms that generate agent behavior that is interpretable in terms of goals, plans, or rewards, as discussed by Chakraborti et al. [7]. This has paved the way for research, such as that of Neufeld and Kristtorn [8], Jezic et al. [9], and Meganck et al. [10], demonstrating the feasibility of integrating probabilistic reasoning and causal maps into the logic of intelligent agents. By adopting causal reasoning, intelligent agents can move beyond merely identifying correlations to discerning the fundamental causes of events, facilitating more robust, reliable, and interpretable decision-making. For example, Dasgupta et al. [11] adopted “meta-reinforcement learning” to generate an agent capable of executing tasks through causal inference, even without explicit knowledge of causality; the efficiency of their algorithm was demonstrated in experiments using navigation tasks in the virtual world of StarCraft II, a video game. Similarly, in the design of a smart grid power system, ref. [12] proposed a causal inference communication model (CICM) to ensure good system performance while reducing communication bandwidth. Miao et al. [13] presented a dynamic inference agent that uses numerical rather than symbolic representations for modeling, inference, and decision-making. Ceballos and Cantu [14] suggested a design method and agent architecture that build on the Beliefs, Desires, and Intentions (BDI) framework described in [15] for creating intelligent agents. Jensen [16] highlighted the difficulty of accurately modeling an agent’s causal structure using the Angry Birds AI Competition as an example, where agents must analyze levels and predict the physical consequences of their actions to achieve high scores, as described by Renz et al. [17]. In response to this challenge, Tziortziotis et al. [18] developed an agent architecture employing Bayesian inference to enhance decision-making.
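To make concrete the distinction drawn above between identifying correlations and discerning causes, the following Python sketch (an illustrative example of ours, not taken from any of the cited works) simulates a hidden confounder Z that drives both X and Y while X has no causal effect on Y. Observational data show a strong X–Y correlation, but intervening on X, i.e., setting it independently of Z, reveals that the association vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural causal model: hidden confounder Z drives both X and Y;
# there is no X -> Y edge.
Z = rng.normal(size=n)
X = Z + rng.normal(scale=0.5, size=n)   # Z -> X
Y = Z + rng.normal(scale=0.5, size=n)   # Z -> Y

# Observational regime: X and Y correlate strongly through Z.
obs_corr = np.corrcoef(X, Y)[0, 1]

# Interventional regime do(X := x): the agent sets X externally,
# severing the Z -> X edge; Y's mechanism is unchanged.
X_do = rng.normal(size=n)
Y_do = Z + rng.normal(scale=0.5, size=n)
int_corr = np.corrcoef(X_do, Y_do)[0, 1]

print(f"observational corr(X, Y)  = {obs_corr:.2f}")   # high (about 0.8)
print(f"interventional corr(X, Y) = {int_corr:.2f}")   # near zero
```

An agent that acts on the observational correlation would wrongly expect manipulating X to change Y; a causally informed agent, reasoning over the interventional distribution, would not.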
Although some studies have addressed the development of intelligent agents that incorporate causal inference in their learning, experimentation on this topic remains scarce, indicating the need for more empirical studies to gain a deeper understanding of how causal inference can enhance the decision-making capabilities of intelligent agents.
This research thoroughly examines the intersection between intelligent agents and causal inference, exploring how incorporating causal reasoning can significantly enhance decision-making and task execution and provide these agents with a distinctive advantage over other systems. The primary purpose of this investigation is to contribute to the growing body of knowledge in the fields of intelligent agent systems and causal inference, shedding light on the promising potential of merging these areas to create more informed and transparent decision-making systems.
The present article is organized into four sections. In Section 1, we discuss the motivation behind the study, present notable contributions to the development of intelligent agents endowed with causal reasoning, and position our work as one of the pioneering initiatives exploring the advantages and possibilities offered by agents that use causal inference for decision-making and task accomplishment. In Section 2, we present the methodology for generating the virtual environment in which the agents interact, specifying the behavior of the three agent types that form an integral part of our experiment: GuardBOT (GBOT), ExplorerBOT (EBOT), and CausalBOT (CBOT). We also describe the data generation and causal inference processes employed to test the hypothesis and determine the most effective task-completion agent. In Section 3, we analyze the results obtained from each process considered in the methodology. Finally, in Section 4, we present the conclusions derived from the study.