%A Friston, Karl
%A Schwartenbeck, Philipp
%A FitzGerald, Thomas
%A Moutoussis, Michael
%A Behrens, Tim
%A Dolan, Raymond
%D 2013
%J Frontiers in Human Neuroscience
%G English
%K active inference, agency, Bayesian, bounded rationality, embodied cognition, free energy, inference, utility theory
%R 10.3389/fnhum.2013.00598
%N 598
%8 2013-September-25
%9 Hypothesis and Theory
%+ Prof Karl Friston, UCL, London, United Kingdom, k.friston@ucl.ac.uk
%! The anatomy of choice
%T The anatomy of choice: active inference and agency
%U https://www.frontiersin.org/article/10.3389/fnhum.2013.00598
%V 7
%0 Journal Article
%@ 1662-5161
%X This paper considers agency in the setting of embodied or active inference. In brief, we associate a sense of agency with prior beliefs about action and ask what sorts of beliefs underlie optimal behavior. In particular, we consider prior beliefs that action minimizes the Kullback–Leibler (KL) divergence between desired states and attainable states in the future. This allows one to formulate bounded rationality as approximate Bayesian inference that optimizes a free energy bound on model evidence. We show that constructs like expected utility, exploration bonuses, softmax choice rules and optimism bias emerge as natural consequences of this formulation. Previous accounts of active inference have focused on predictive coding and Bayesian filtering schemes for minimizing free energy. Here, we consider variational Bayes as an alternative scheme that provides formal constraints on the computational anatomy of inference and action—constraints that are remarkably consistent with neuroanatomy. Furthermore, this scheme contextualizes optimal decision theory and economic (utilitarian) formulations as pure inference problems. For example, expected utility theory emerges as a special case of free energy minimization, where the sensitivity or inverse temperature (of softmax functions and quantal response equilibria) has a unique and Bayes-optimal solution—that minimizes free energy. This sensitivity corresponds to the precision of beliefs about behavior, such that attainable goals are afforded a higher precision or confidence. In turn, this means that optimal behavior entails a representation of confidence about outcomes that are under an agent's control.