%A Pezzulo, Giovanni
%A Rigoli, Francesco
%A Chersi, Fabian
%D 2013
%J Frontiers in Psychology
%C
%F
%G English
%K anticipation, mental simulation, habitual choice, goal-directed choice, model-based reinforcement learning, forward sweeps
%Q
%R 10.3389/fpsyg.2013.00092
%W
%L
%M
%P
%7
%8 2013-March-04
%9 Original Research
%+ Dr Giovanni Pezzulo, National Research Council of Italy, Institute of Cognitive Sciences and Technologies, via S. Martino della Battaglia, 44, Rome, 00185, Italy, giovanni.pezzulo@gmail.com
%#
%! The Mixed Instrumental Controller
%*
%<
%T The Mixed Instrumental Controller: Using Value of Information to Combine Habitual Choice and Mental Simulation
%U https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00092
%V 4
%0 JOURNAL ARTICLE
%@ 1664-1078
%X Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-based and model-free methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefit analysis to decide whether to choose an action immediately based on the available “cached” value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated “Value of Information” exceeds its costs. The model proposes a method to compute the Value of Information based on the uncertainty of the action values and on the distance between the cached values of the alternative actions. Overall, the model by default chooses on the basis of the cheaper model-free estimates, and integrates them with costly model-based predictions only when this is useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation to neurobiological evidence on the hippocampus–ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
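
The abstract describes the Mixed Instrumental Controller only informally. The following is a minimal, hypothetical Python sketch of that decision scheme, not the paper's implementation: the specific Value of Information heuristic, the Gaussian treatment of cached values, and the constants SIM_COST, VOI_SCALE, and N_SAMPLES are assumptions introduced here for illustration; the article itself defines the actual computations.

import random
import statistics

SIM_COST = 0.1      # assumed fixed cost of mental simulation (effort + reward delay)
VOI_SCALE = 1.0     # assumed scaling of uncertainty into Value of Information
N_SAMPLES = 20      # assumed number of simulated outcome samples per action

def value_of_information(cached):
    """Heuristic VoI: high when cached action values are uncertain and close together.

    `cached` maps each action to (mean value, standard deviation). This is an
    illustrative stand-in for the paper's VoI, which is likewise based on the
    uncertainty of action values and the distance between cached values.
    """
    means = [m for m, _ in cached.values()]
    sds = [s for _, s in cached.values()]
    best, second = sorted(means, reverse=True)[:2]
    distance = best - second               # small distance -> simulation more useful
    uncertainty = statistics.mean(sds)     # high uncertainty -> simulation more useful
    return VOI_SCALE * uncertainty / (distance + 1e-6)

def mic_choose(cached, simulate_outcome):
    """Choose an action from cached (model-free) values by default; refine them with
    sampled model-based predictions only when VoI exceeds the simulation cost."""
    if value_of_information(cached) > SIM_COST:
        for action in cached:
            # Mental simulation: sample expected outcomes and update the cached value.
            samples = [simulate_outcome(action) for _ in range(N_SAMPLES)]
            cached[action] = (statistics.mean(samples), statistics.stdev(samples))
    # The final choice is always made on the (possibly updated) cached values.
    return max(cached, key=lambda a: cached[a][0])

# Example: two maze arms with similar, uncertain cached values trigger simulation.
cached_values = {"left": (0.50, 0.30), "right": (0.55, 0.35)}
print(mic_choose(cached_values, lambda a: random.gauss(0.8 if a == "left" else 0.4, 0.1)))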