%A Berthet,Pierre
%A Hellgren Kotaleski,Jeanette
%A Lansner,Anders
%D 2012
%J Frontiers in Behavioral Neuroscience
%G English
%K Basal Ganglia,BCPNN,behaviour selection,direct-indirect pathway,Dopamine,Hebbian-Bayesian plasticity,reinforcement learning
%R 10.3389/fnbeh.2012.00065
%8 2012-October-02
%9 Original Research
%+ Prof Anders Lansner,Stockholms Universitet,Computational Biology,Roslagstullsbacken 35,Stockholm,10691,Sweden,ala@kth.se
%+ Prof Anders Lansner,KTH Royal Institute of Technology,School of Computer Science and Communication,Roslagstullsbacken 35,Stockholm,10691,Sweden,ala@kth.se
%+ Prof Anders Lansner,Stockholm Brain Institute,Stockholm,Sweden,ala@kth.se
%! Basal Ganglia inspired abstract model with reconfigurable connectivity
%T Action selection performance of a reconfigurable basal ganglia inspired model with Hebbian–Bayesian Go-NoGo connectivity
%U https://www.frontiersin.org/articles/10.3389/fnbeh.2012.00065
%V 6
%0 JOURNAL ARTICLE
%@ 1662-5153
%X Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine-dependent learning. The dopaminergic signal to the striatum, the input stage of the BG, has commonly been described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in modulating synaptic plasticity at cortico-striatal synapses in the direct and indirect pathways. We developed an abstract computational model of the BG, with a dual-pathway structure functionally corresponding to the direct and indirect pathways, and compared its behavior with biological data as well as with other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian–Bayesian learning rule based on the co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and indirect (NoGo) pathways, as well as a reward prediction (RP) system, which act in a complementary fashion. We investigated the performance of the model when different configurations of the Go, NoGo, and RP systems were used, e.g., only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning, and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioral data. Our results, however, show that there is no single configuration of this BG model that handles all the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and on how much time is available.
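
The abstract describes an RPE-gated, three-factor update acting on separate Go (direct) and NoGo (indirect) pathway weights, with a reward prediction (RP) system supplying the prediction error. The following is a minimal, illustrative Python sketch of that idea only; the tabular state/action representation, the softmax action selection, and names such as w_go, w_nogo, rp, and lr are assumptions made for this example, not the paper's BCPNN implementation.

# Minimal sketch (not the paper's BCPNN rule): an RPE-gated, three-factor
# Hebbian update for separate Go and NoGo weight matrices.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
w_go = np.zeros((n_states, n_actions))    # direct ("Go") pathway weights
w_nogo = np.zeros((n_states, n_actions))  # indirect ("NoGo") pathway weights
rp = np.zeros(n_states)                   # reward prediction (RP) per state
lr = 0.1                                  # learning rate

def select_action(state):
    # Action selection from the net Go-minus-NoGo drive via a softmax.
    net = w_go[state] - w_nogo[state]
    p = np.exp(net - net.max())
    p /= p.sum()
    return rng.choice(n_actions, p=p)

def update(state, action, reward):
    # Three factors: presynaptic (state) unit, postsynaptic (action) unit,
    # and the reward prediction error.
    rpe = reward - rp[state]
    w_go[state, action] += lr * max(rpe, 0.0)     # positive RPE strengthens Go
    w_nogo[state, action] += lr * max(-rpe, 0.0)  # negative RPE strengthens NoGo
    rp[state] += lr * rpe                         # critic-like update of the RP system

# One trial: observe state 0, act, receive a reward of 1.0.
a = select_action(0)
update(0, a, 1.0)

Freezing w_go, w_nogo, or rp in this sketch loosely corresponds to the Go-only, NoGo-only, and RP-only configurations whose performance the abstract compares.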