%A Harlé,Katia M.
%A Zhang,Shunan
%A Schiff,Max
%A Mackey,Scott
%A Paulus,Martin P.
%A Yu,Angela J.
%D 2015
%J Frontiers in Psychology
%G English
%K Bayesian model,decision-making,reward processing,methamphetamine,stimulant,Addiction,multi-armed bandit task
%R 10.3389/fpsyg.2015.01910
%8 2015-December-18
%9 Original Research
%+ Dr Katia M. Harlé,Department of Psychiatry, University of California San Diego,La Jolla, CA, USA,kharle@ucsd.edu
%! Reward-based decision-making in methamphetamine dependence
%T Altered Statistical Learning and Decision-Making in Methamphetamine Dependence: Evidence from a Two-Armed Bandit Task
%U https://www.frontiersin.org/articles/10.3389/fpsyg.2015.01910
%V 6
%0 JOURNAL ARTICLE
%@ 1664-1078
%X Understanding how humans weigh long-term and short-term goals is important for both basic cognitive science and clinical neuroscience, as substance users need to balance the appeal of an immediate high vs. the long-term goal of sobriety. We use a computational model to identify learning and decision-making abnormalities in methamphetamine-dependent individuals (MDI, n = 16) vs. healthy control subjects (HCS, n = 16) in a two-armed bandit task. In this task, subjects repeatedly choose between two arms with fixed but unknown reward rates. Each choice yields not only potential immediate reward but also information useful for long-term reward accumulation, thus pitting exploration against exploitation. We formalize the task as comprising a learning component, the updating of estimated reward rates based on ongoing observations, and a decision-making component, the choice among options based on current beliefs and uncertainties about reward rates. We model the learning component as iterative Bayesian inference (the Dynamic Belief Model), and the decision component using five competing decision policies: Win-stay/Lose-shift (WSLS), ε-Greedy, τ-Switch, Softmax, and Knowledge Gradient. HCS and MDI differ significantly in how they learn about reward rates and use them to make decisions. HCS learn from past observations but weigh recent data more, and their decision policy is best fit as Softmax. MDI are more likely to follow the simple learning-independent policy of WSLS, and among MDI best fit by Softmax, they have more pessimistic prior beliefs about reward rates and are less likely to choose the option estimated to be most rewarding. Neurally, MDI's tendency to avoid the most rewarding option is associated with lower gray matter volume in the dorsal lateral nucleus of the thalamus. More broadly, our work illustrates the ability of our computational framework to help reveal subtle learning and decision-making abnormalities associated with substance use.
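
For readers who want a concrete picture of the two model components the abstract names, here is a minimal Python sketch (not the authors' code) of a Dynamic Belief Model learner paired with a Softmax decision policy on a two-armed bandit. The discretized reward-rate grid, the flat prior, and the parameter values (gamma, beta) are illustrative assumptions, not fitted values from the paper, which instead fits subject-specific priors and parameters.

import numpy as np

# Discretized hypotheses for an arm's Bernoulli reward rate (assumed grid).
GRID = np.linspace(0.01, 0.99, 99)

def dbm_update(belief, reward, prior, gamma=0.8):
    """One Dynamic Belief Model step for a single arm.

    gamma is the assumed probability that the arm's reward rate stayed the
    same since the last trial; with probability 1 - gamma it is redrawn from
    the prior. This mixture is what makes recent observations weigh more
    heavily than older ones, as the abstract describes for HCS.
    """
    likelihood = GRID if reward else (1.0 - GRID)  # Bernoulli likelihood
    posterior = belief * likelihood
    posterior /= posterior.sum()
    return gamma * posterior + (1.0 - gamma) * prior

def softmax_choice(estimates, beta=3.0, rng=np.random.default_rng()):
    """Choose an arm with probability proportional to exp(beta * estimate)."""
    logits = beta * np.asarray(estimates)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(estimates), p=probs)

# Toy run: two arms with fixed but unknown reward rates, as in the task.
rng = np.random.default_rng(0)
true_rates = [0.3, 0.7]
prior = np.ones_like(GRID) / GRID.size   # flat prior for illustration only
beliefs = [prior.copy(), prior.copy()]

for trial in range(100):
    estimates = [b @ GRID for b in beliefs]   # posterior-mean reward rates
    arm = softmax_choice(estimates, beta=3.0, rng=rng)
    reward = rng.random() < true_rates[arm]
    beliefs[arm] = dbm_update(beliefs[arm], reward, prior)

In this framing, the group differences the abstract reports map onto model pieces: a more pessimistic prior would shift the initial belief mass toward low reward rates, and a lower beta would make choices less tied to the estimated-best option.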