Probabilistic Inference in Goal-Directed Human and Animal Decision-Making

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (18 October 2021) | Viewed by 34414

Special Issue Editors


Dr. Francesco Donnarumma
Guest Editor
Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR), San Martino della Battaglia 44, 00185 Roma, Italy
Interests: machine learning; deep learning; active inference; BCI

Dr. Domenico Maisto
Guest Editor
Institute of High Performance Computing and Networking (ICAR), National Research Council of Italy, Via Pietro Castellino, 111, 80131 Naples, Italy
Interests: artificial intelligence; machine learning; computational intelligence; computational neuroscience; cognitive science

Dr. Ivilin Stoianov
Guest Editor
Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Via Martiri della Libertà 2, 35137 Padova, Italy
Interests: neurocomputational modelling; machine learning; neural networks; deep learning; Bayesian reinforcement learning; visual perception; cognitive number processing; decision making and planning; spatial navigation

Special Issue Information

Dear Colleagues,

Since Helmholtz’s early insight into visual perception, the hypothesis that the brain performs inferential processing to accomplish perceptual tasks has received extensive empirical support and, more generally, the Bayesian probabilistic framework has proven capable of providing computational theories of cognitive functioning with high explanatory value. According to this idea, incoming sensory evidence is combined with prior information to estimate the state of the external world. Similarly, inferential processing might underlie decision making: humans and animals are thought to combine habitual control (model-free decisions) with predictions from internal models of their interactions with the environment (model-based decisions) to guide behavior flexibly and efficiently. However, while model-free control benefits from a consolidated mathematical framework inherited from classical reinforcement learning, together with highly efficient recent algorithmic-level accounts, model-based decision making still lacks converging computational theories that are both biologically plausible and computationally cost-effective.
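
The core inferential step described above can be summarised in a few lines. The Python sketch below combines a prior belief over two hypothetical world states with the likelihood of a single noisy observation to produce a posterior estimate; the two-state world and all numbers are illustrative assumptions, not taken from any specific model.

```python
import numpy as np

# Two hypothetical world states; the agent infers which one it occupies
# by modulating incoming sensory evidence with prior information.
prior = np.array([0.7, 0.3])        # prior belief over states s1 and s2
likelihood = np.array([0.2, 0.6])   # P(observation | s1), P(observation | s2)

posterior = prior * likelihood      # Bayes' rule, unnormalised
posterior /= posterior.sum()        # normalise to a probability distribution

print(posterior)                    # the evidence shifts the belief toward s2
```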

This Special Issue focuses on recent advances in probabilistic inference in goal-directed human and animal decision making. We welcome submissions that:

  • Shed light on the computations of neuronal circuits involved in goal-directed decision making, with a focus on the inferential mechanisms involved;
  • Propose novel probabilistic models and methods in decision making, including (but not limited to) information-theoretic approaches, statistical and free-energy minimization, hierarchical models, and deep networks;
  • Introduce decision-making applications in ethological, social, psychological, psychiatric, robotics, and computer science research.

Dr. Francesco Donnarumma
Dr. Domenico Maisto
Dr. Ivilin Stoianov
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Model-based decision making
  • Bayesian inference
  • Hierarchical probabilistic models
  • Approximate probabilistic inference
  • Deep networks
  • Information theory methods
  • Temporal dynamics inference
  • Computational neuroscience
  • Social decision making
  • Cognitive systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

15 pages, 3322 KiB  
Article
Biases and Variability from Costly Bayesian Inference
by Arthur Prat-Carrabin, Florent Meyniel, Misha Tsodyks and Rava Azeredo da Silveira
Entropy 2021, 23(5), 603; https://doi.org/10.3390/e23050603 - 13 May 2021
Cited by 3 | Viewed by 2789
Abstract
When humans infer underlying probabilities from stochastic observations, they exhibit biases and variability that cannot be explained on the basis of sound, Bayesian manipulations of probability. This is especially salient when beliefs are updated as a function of sequential observations. We introduce a theoretical framework in which biases and variability emerge from a trade-off between Bayesian inference and the cognitive cost of carrying out probabilistic computations. We consider two forms of the cost: a precision cost and an unpredictability cost; these penalize beliefs that are less entropic and less deterministic, respectively. We apply our framework to the case of a Bernoulli variable: the bias of a coin is inferred from a sequence of coin flips. Theoretical predictions are qualitatively different depending on the form of the cost. A precision cost induces overestimation of small probabilities, on average, and a limited memory of past observations, and, consequently, a fluctuating bias. An unpredictability cost induces underestimation of small probabilities and a fixed bias that remains appreciable even for nearly unbiased observations. The case of a fair (equiprobable) coin, however, is singular, with non-trivial and slow fluctuations in the inferred bias. The proposed framework of costly Bayesian inference illustrates the richness of a ‘resource-rational’ (or ‘bounded-rational’) picture of seemingly irrational human cognition.
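
A minimal sketch of the kind of trade-off the paper studies, assuming a simple grid representation of beliefs about a coin's bias and modelling a precision cost by tempering the belief after each update (so that older observations decay); the weight lam, the flip sequence, and the tempering rule are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

# Costly Bayesian inference of a coin's bias, sketched on a grid of candidate
# bias values. A precision cost is approximated by flattening (tempering) the
# belief after every update, which limits precision and induces a fading
# memory of past flips. Illustrative only; the paper's cost functions differ
# in detail.

grid = np.linspace(0.001, 0.999, 999)    # candidate values of the bias
log_belief = np.zeros_like(grid)         # uniform prior, in log space
lam = 2.0                                # hypothetical weight of the precision cost

flips = [1, 1, 0, 1, 1, 1, 0, 1]         # observed flips (1 = heads)
for x in flips:
    log_belief += x * np.log(grid) + (1 - x) * np.log(1 - grid)  # Bayesian update
    log_belief /= 1.0 + lam              # cost term: flatten the belief; old evidence decays

belief = np.exp(log_belief - log_belief.max())
belief /= belief.sum()
print(f"estimated bias: {np.sum(grid * belief):.3f}")  # pulled toward 0.5 relative to exact Bayes
```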

15 pages, 3023 KiB  
Article
Mismatch Negativity and Stimulus-Preceding Negativity in Paradigms of Increasing Auditory Complexity: A Possible Role in Predictive Coding
by Francisco J. Ruiz-Martínez, Antonio Arjona and Carlos M. Gómez
Entropy 2021, 23(3), 346; https://doi.org/10.3390/e23030346 - 15 Mar 2021
Cited by 4 | Viewed by 2844
Abstract
The auditory mismatch negativity (MMN) has been considered a preattentive index of auditory processing and/or a signature of prediction error computation. This study tries to demonstrate the presence of an MMN to deviant trials included in complex auditory stimuli sequences, and its possible relationship to predictive coding. Additionally, the transfer of information between trials is expected to be represented by stimulus-preceding negativity (SPN), which would possibly fit the predictive coding framework. To accomplish these objectives, the EEG of 31 subjects was recorded during an auditory paradigm in which trials composed of stimulus sequences with increasing or decreasing frequencies were intermingled with deviant trials presenting an unexpected ending. Our results showed the presence of an MMN in response to deviant trials. An SPN appeared during the intertrial interval and its amplitude was reduced in response to deviant trials. The presence of an MMN in complex sequences of sounds and the generation of an SPN component, with different amplitudes in deviant and standard trials, would support the predictive coding framework.

23 pages, 3754 KiB  
Article
Space Emerges from What We Know—Spatial Categorisations Induced by Information Constraints
by Nicola Catenacci Volpi and Daniel Polani
Entropy 2020, 22(10), 1179; https://doi.org/10.3390/e22101179 - 19 Oct 2020
Cited by 2 | Viewed by 3103
Abstract
Seeking goals carried out by agents with a level of competency requires an “understanding” of the structure of their world. While abstract formal descriptions of a world structure in terms of geometric axioms can be formulated in principle, it is not likely that this is the representation that is actually employed by biological organisms or that should be used by biologically plausible models. Instead, we operate by the assumption that biological organisms are constrained in their information processing capacities, which in the past has led to a number of insightful hypotheses and models for biologically plausible behaviour generation. Here we use this approach to study various types of spatial categorizations that emerge through such informational constraints imposed on embodied agents. We will see that geometrically-rich spatial representations emerge when agents employ a trade-off between the minimisation of the Shannon information used to describe locations within the environment and the reduction of the location error generated by the resulting approximate spatial description. In addition, agents do not always need to construct these representations from the ground up, but they can obtain them by refining less precise spatial descriptions constructed previously. Importantly, we find that these can be optimal at both steps of refinement, as guaranteed by the successive refinement principle from information theory. Finally, clusters induced by these spatial representations via the information bottleneck method are able to reflect the environment’s topology without relying on an explicit geometric description of the environment’s structure. Our findings suggest that the fundamental geometric notions possessed by natural agents do not need to be part of their a priori knowledge but could emerge as a byproduct of the pressure to process information parsimoniously.
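
A minimal sketch of the compression/error trade-off at the heart of the paper, using a standard rate-distortion (Blahut–Arimoto-style) iteration rather than the authors' exact successive-refinement or information-bottleneck setup; the 1-D environment, the four codewords, and the trade-off parameter beta are illustrative assumptions.

```python
import numpy as np

# Locations on a 1-D line are described by a small set of codewords
# ("spatial categories"). The soft encoder below trades the information used
# to describe a location against the resulting squared location error,
# via a Blahut-Arimoto-style fixed-point iteration. Illustrative toy setup.

locations = np.linspace(0.0, 1.0, 50)                 # 1-D environment
p_x = np.full(len(locations), 1.0 / len(locations))   # uniform visitation probability

codewords = np.linspace(0.0, 1.0, 4)                  # four available spatial categories
beta = 40.0                                           # pressure to keep location error low

d = (locations[:, None] - codewords[None, :]) ** 2    # squared location error
p_c = np.full(len(codewords), 1.0 / len(codewords))   # marginal over codewords

for _ in range(200):
    enc = p_c[None, :] * np.exp(-beta * d)            # soft assignment of locations to codewords
    enc /= enc.sum(axis=1, keepdims=True)
    p_c = p_x @ enc                                   # updated codeword marginal

# higher beta yields sharper spatial categories; lower beta yields coarser ones
print(np.round(enc[::10], 2))
```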

16 pages, 3460 KiB  
Article
The Convergence of a Cooperation Markov Decision Process System
by Xiaoling Mo, Daoyun Xu and Zufeng Fu
Entropy 2020, 22(9), 955; https://doi.org/10.3390/e22090955 - 30 Aug 2020
Cited by 1 | Viewed by 2874
Abstract
In a general Markov decision process system, only one agent’s learning evolution is considered. However, considering the learning evolution of a single agent is limiting for many problems, as more and more applications involve multiple agents, whose interactions can be either cooperative or game-like (competitive). Therefore, this paper introduces a Cooperation Markov Decision Process (CMDP) system with two agents, which is suitable for modelling the learning evolution of cooperative decisions between two agents. It is further shown that the value function in the CMDP system converges, and that the limit is independent of the initial value function. This paper presents an algorithm for finding the optimal strategy pair (πk0, πk1) in the CMDP system, whose fundamental task is to find an optimal strategy pair and form an evolutionary system CMDP(πk0, πk1). Finally, an example is given to support the theoretical results.
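
A minimal sketch of the convergence property discussed in the paper: value iteration on a toy two-agent cooperative MDP with a shared reward, run from two different initial value functions, reaches the same fixed point. The joint-action formulation and the transition and reward numbers are hypothetical stand-ins, not the paper's CMDP construction.

```python
import numpy as np

# Toy cooperative MDP: two agents choose a joint action and share one reward.
# Value iteration converges to the same value function (and optimal joint
# strategy pair) regardless of the initial values, since the Bellman backup
# is a gamma-contraction. Illustrative sketch only.

n_states = 3
joint_actions = [(a0, a1) for a0 in range(2) for a1 in range(2)]
gamma = 0.9

rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, len(joint_actions)))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, len(joint_actions)))             # shared reward R[s, a]

def value_iteration(v0, iters=500):
    v = v0.copy()
    for _ in range(iters):
        q = R + gamma * P @ v        # value of every joint action in every state
        v = q.max(axis=1)            # cooperating agents maximise the shared value
    return v, q.argmax(axis=1)       # converged values and the optimal joint action per state

v_a, pi_a = value_iteration(np.zeros(n_states))
v_b, pi_b = value_iteration(rng.uniform(-10.0, 10.0, n_states))
print(np.allclose(v_a, v_b, atol=1e-6), (pi_a == pi_b).all())  # same limit, same strategy pair
```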

29 pages, 1823 KiB  
Article
Is the Free-Energy Principle a Formal Theory of Semantics? From Variational Density Dynamics to Neural and Phenotypic Representations
by Maxwell J. D. Ramstead, Karl J. Friston and Inês Hipólito
Entropy 2020, 22(8), 889; https://doi.org/10.3390/e22080889 - 13 Aug 2020
Cited by 82 | Viewed by 10442
Abstract
The aim of this paper is twofold: (1) to assess whether the construct of neural representations plays an explanatory role under the variational free-energy principle and its corollary process theory, active inference; and (2) if so, to assess which philosophical stance—in relation to the ontological and epistemological status of representations—is most appropriate. We focus on non-realist (deflationary and fictionalist-instrumentalist) approaches. We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims. After reviewing the free-energy principle and active inference, we argue that the model of adaptive phenotypes under the free-energy principle can be used to furnish a formal semantics, enabling us to assign semantic content to specific phenotypic states (the internal states of a Markovian system that exists far from equilibrium). We propose a modified fictionalist account—an organism-centered fictionalism or instrumentalism. We argue that, under the free-energy principle, pursuing even a deflationary account of the content of neural representations licenses the appeal to the kind of semantic content involved in the ‘aboutness’ or intentionality of cognitive systems; our position is thus coherent with, but rests on distinct assumptions from, the realist position. We argue that the free-energy principle thereby explains the aboutness or intentionality in living systems and hence their capacity to parse their sensory stream using an ontology or set of semantic factors.

26 pages, 7851 KiB  
Article
Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network
by Takazumi Matsumoto and Jun Tani
Entropy 2020, 22(5), 564; https://doi.org/10.3390/e22050564 - 18 May 2020
Cited by 26 | Viewed by 6461
Abstract
It is crucial to ask how agents can achieve goals by generating action plans using only partial models of the world acquired through habituated sensory-motor experiences. Although many existing robotics studies use a forward model framework, there are generalization issues with high degrees of freedom. The current study shows that the predictive coding (PC) and active inference (AIF) frameworks, which employ a generative model, can develop better generalization by learning a prior distribution in a low dimensional latent state space representing probabilistic structures extracted from well habituated sensory-motor trajectories. In our proposed model, learning is carried out by inferring optimal latent variables as well as synaptic weights for maximizing the evidence lower bound, while goal-directed planning is accomplished by inferring latent variables for maximizing the estimated lower bound. Our proposed model was evaluated with both simple and complex robotic tasks in simulation, which demonstrated sufficient generalization in learning with limited training data by setting an intermediate value for a regularization coefficient. Furthermore, comparative simulation results show that the proposed model outperforms a conventional forward model in goal-directed planning, due to the learned prior confining the search of motor plans within the range of habituated trajectories.
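
A minimal sketch of the planning-by-latent-inference idea the paper develops: a fixed linear decoder stands in for the trained variational RNN, mapping a low-dimensional latent plan to predicted outcomes, and planning adjusts the latent variables by gradient descent so that predictions approach the goal while a prior term keeps the plan within habituated (low-norm) latent values. The decoder W, the goal vector, and the regularisation weight are hypothetical stand-ins for the learned model.

```python
import numpy as np

# Goal-directed planning as inference over latent variables. A fixed linear
# decoder W plays the role of a trained generative model; the latent plan z is
# optimised so the predicted outcome W @ z matches the goal, with a prior term
# keeping z small (i.e., within "habituated" regions). Illustrative sketch.

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 2))        # hypothetical decoder: 2-D latent -> 6-D outcome
goal = rng.normal(size=6)          # desired sensory outcome
w_prior = 0.1                      # weight of the prior / regularisation term
lr = 0.05                          # gradient step size

z = np.zeros(2)                    # initial latent plan
for _ in range(500):
    pred = W @ z
    # gradient of 0.5 * ||pred - goal||^2 + 0.5 * w_prior * ||z||^2 w.r.t. z
    grad = W.T @ (pred - goal) + w_prior * z
    z -= lr * grad

print("latent plan:", np.round(z, 3))
print("residual goal error:", round(float(np.linalg.norm(W @ z - goal)), 3))
```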

20 pages, 2082 KiB  
Article
Inferring What to Do (And What Not to)
by Thomas Parr
Entropy 2020, 22(5), 536; https://doi.org/10.3390/e22050536 - 11 May 2020
Cited by 7 | Viewed by 4382
Abstract
In recent years, the “planning as inference” paradigm has become central to the study of behaviour. The advance offered by this is the formalisation of motivation as a prior belief about “how I am going to act”. This paper provides an overview of the factors that contribute to this prior. These are rooted in optimal experimental design, information theory, and statistical decision making. We unpack how these factors imply a functional architecture for motivated behaviour. This raises an important question: how can we put this architecture to work in the service of understanding observed neurobiological structure? To answer this question, we draw from established techniques in experimental studies of behaviour. Typically, these examine the influence of perturbations of the nervous system—which include pathological insults or optogenetic manipulations—to see their influence on behaviour. Here, we argue that the message passing that emerges from inferring what to do can be similarly perturbed. If a given perturbation elicits the same behaviours as a focal brain lesion, this provides a functional interpretation of empirical findings and an anatomical grounding for theoretical results. We highlight examples of this approach that influence different sorts of goal-directed behaviour, active learning, and decision making. Finally, we summarise their implications for the neuroanatomy of inferring what to do (and what not to).
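
A minimal sketch of how such a prior over policies can be scored, assuming the common discrete active-inference decomposition of expected free energy into risk (divergence of predicted from preferred outcomes) plus ambiguity (expected outcome uncertainty); the likelihood matrix, preferences, and candidate policies below are toy assumptions rather than anything taken from the paper.

```python
import numpy as np

# Prior over two candidate policies from (negative) expected free energy:
# G(policy) = risk + ambiguity, prior ∝ softmax(-G). Toy numbers throughout.

A = np.array([[0.9, 0.1],          # likelihood P(outcome | state); columns are states
              [0.1, 0.9]])
C = np.array([0.8, 0.2])           # preferred distribution over outcomes

# predicted state distributions under two candidate policies
Q_s = {"policy_0": np.array([0.9, 0.1]),
       "policy_1": np.array([0.2, 0.8])}

def expected_free_energy(q_s):
    q_o = A @ q_s                                    # predicted outcome distribution
    risk = np.sum(q_o * (np.log(q_o) - np.log(C)))   # KL[q(o) || preferences]
    H_A = -np.sum(A * np.log(A), axis=0)             # outcome entropy for each state
    ambiguity = q_s @ H_A                            # expected ambiguity
    return risk + ambiguity

G = np.array([expected_free_energy(q) for q in Q_s.values()])
prior = np.exp(-G) / np.exp(-G).sum()                # softmax over negative EFE
print(dict(zip(Q_s, np.round(prior, 3))))            # policy_0 is favoured here
```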
