
EDITORIAL article

Front. Neurosci., 10 October 2011
Sec. Decision Neuroscience
This article is part of the Research Topic Neurobiology of Choice.

Frontiers research topic on the neurobiology of choice

Julia Trommershäuser*

  • Center for Neural Science, New York University, New York, NY, USA

Research on economic decision-making seeks to understand how subjects choose between plans of action (lotteries, gambles, prospects) that have economic consequences. The key difficulty in making such decisions is that typically no plan of action available to the decision-maker guarantees a specific outcome; rather, consequences are risky or uncertain. More recently, researchers in psychology and in behavioral and computational neuroscience have started to apply the theoretical principles of statistical decision theory to studying choice behavior and its neural basis in the laboratory, for instance in electrophysiological studies of animals making choices for primary rewards such as juice and in neuroimaging studies of humans making choices for money. Moreover, researchers across all these fields are, in parallel, studying how decisions are guided by learning and how the computations relevant to decisions and choices are represented neurally.

This Frontiers Research Topic on The Neurobiology of Choice combines contributions from researchers in the fields of neurobiology and behavioral and computational neuroscience that discuss the neural computations underlying decision-making and adaptive behavior.

Placing motor and cognitive decisions in a common theoretical framework brings into sharp relief one apparent difference between them. Researchers have long argued that, in sensorimotor tasks, humans and animals mostly make choices that are nearly optimal, in the sense of approaching maximal expected utility or of complying with principles of statistical inference. In contrast, work on traditional economic decision-making tasks often focuses on situations in which participants violate the predictions of expected utility theory, for instance by misrepresenting the frequency of rare events or because emotional factors interfere with the decision (Kirk et al., 2011).

More recently, researchers in psychology and neuroscience have started to apply these theoretical principles to studying choice behavior and its neural basis in the laboratory, for instance in electrophysiological studies of animals making choices for primary rewards such as juice (Milstein and Dorris, 2011; Opris et al., 2011) and in neuroimaging studies of humans making choices for money (Delgado et al., 2011).

Meanwhile, a largely different group of researchers, working in the field of sensorimotor control, has also recently drawn on statistical decision theory and reinforcement learning in order to reformulate the problem of hand and eye movement control (Stoloff et al., 2011). An important question of current research in both fields is how decisions are shaped by learning (Delgado et al., 2011) and by delays in reward. Tasks of the latter type concern the discounting of future rewards: larger rewards available only after a delay are often forgone in favor of smaller rewards that are available immediately (Ray and Bossaerts, 2011).
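
For illustration only (the functional forms below are standard expressions from the discounting literature, not results from any article in this Topic), a delayed reward of amount A available after delay D is commonly valued with an exponential or a hyperbolic discount function with a free discount-rate parameter k > 0:

```latex
% Illustrative, standard discount functions (not specific to any cited study):
% A = reward amount, D = delay until delivery, k > 0 = discount rate.
V_{\text{exponential}}(A, D) = A\, e^{-kD},
\qquad
V_{\text{hyperbolic}}(A, D) = \frac{A}{1 + kD}.
```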

Bayesian decision theory describes how, in many circumstances, to select between possible courses of action on the basis of a specified loss function, for example by maximizing expected utility (Pezzulo and Rigoli, 2011).
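
As a schematic sketch of this prescription (the notation below is generic, not drawn from Pezzulo and Rigoli, 2011), the decision-maker chooses the action that maximizes expected utility, or equivalently minimizes expected loss, under her beliefs about the uncertain state of the world:

```latex
% Schematic expected-utility (minimum-Bayes-risk) action selection.
% a = candidate action, s = uncertain state of the world, p(s) = belief over states,
% o(a, s) = outcome of taking action a in state s, U = utility (the negative of a loss).
a^{*} = \arg\max_{a} \; \mathbb{E}_{s \sim p(s)}\!\left[ U\big(o(a, s)\big) \right]
      = \arg\max_{a} \sum_{s} p(s)\, U\big(o(a, s)\big).
```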

However, in most decision tasks, neither the outcomes associated with the different plans of action nor the probabilities of their occurrence are available to the decision-maker prior to making the decision. Under these conditions, it is necessary to learn about the available outcomes from trial-and-error experience (Stoloff et al., 2011). The field of reinforcement learning (e.g., Sutton and Barto, 1998; Balleine et al., 2008; Niv and Montague, 2008) extends decision-theoretic accounts to situations involving learning. This theoretical framework and its underlying statistical principles have been used to explain the role of learning both in traditional choice tasks (e.g., Behrens et al., 2007; Dayan and Daw, 2008) and in sensorimotor adaptation (e.g., Körding et al., 2007).
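
As a minimal illustration of such trial-and-error value learning (a generic sketch in the spirit of Sutton and Barto, 1998, not code from any of the cited studies; all names and parameter values are arbitrary), a delta-rule update can learn the values of a few options from sampled rewards:

```python
import random

def run_bandit(reward_probs, n_trials=1000, alpha=0.1, epsilon=0.1):
    """Delta-rule value learning on a simple multi-armed bandit.
    reward_probs, alpha, epsilon, and n_trials are illustrative choices,
    not parameters taken from any of the cited studies."""
    q = [0.0] * len(reward_probs)          # learned value estimate per option
    for _ in range(n_trials):
        # epsilon-greedy choice: usually exploit the current best estimate, sometimes explore
        if random.random() < epsilon:
            choice = random.randrange(len(q))
        else:
            choice = max(range(len(q)), key=lambda a: q[a])
        reward = 1.0 if random.random() < reward_probs[choice] else 0.0
        # move the estimate toward the obtained reward in proportion to the prediction error
        q[choice] += alpha * (reward - q[choice])
    return q

# With enough trials the value estimates approach the true reward probabilities.
print(run_bandit([0.2, 0.5, 0.8]))
```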

Reinforcement learning theories also play an important role in another key area of current work on decision-making: the study of the neural processes underlying these functions, most prominently the midbrain dopamine system. Notably, this system is involved both in motivated decisions (Kurniawan et al., 2011) and in movement, though how these functions relate remains a subject of ongoing research and controversy. In addition to dopaminergic recordings, work in monkeys on learning about decisions from rewards has focused on frontal cortex (e.g., Lee and Seo, 2007) and on posterior parietal cortex, which is classically thought to be part of the so-called dorsal visual processing stream.

Besides the underlying theoretical parallels between the two fields, and the growing interest in both fields in similar learning processes and common neural mechanisms, two recent developments make the time ripe to begin building a bridge between research on decision-making and research on optimal motor control. The first is the availability of new experimental tools, such as functional MRI, for measuring the neural processes underlying human and non-human decision behavior, both during the decision process and following the choice (Hansen et al., 2011; Santos et al., 2011). The second is the availability of new analytical tools, specifically the growing application of behavioral and computational methods from psychophysics and Bayesian decision theory to decision-making (Baldassi and Simoncini, 2011). This has created a situation in which researchers across fields have started to use a common set of conceptual tools for defining problems, building computational models, and designing and analyzing experiments.

References

Baldassi, S., and Simoncini, C. (2011). Reward sharpens orientation coding independently of attention. Front. Neurosci. 5:13. doi: 10.3389/fnins.2011.00013

Balleine, B. W., Daw, N. D., and O'Doherty, J. P. (2008). "Multiple forms of value learning and the function of dopamine," in Neuroeconomics: Decision Making and the Brain, eds P. W. Glimcher, C. F. Camerer, E. Fehr and R. A. Poldrack (London: Elsevier), 367–385.

Behrens, T. E., Woolrich, M. W., Walton, M. E., and Rushworth, M. F. (2007). Learning the value of information in an uncertain world. Nat. Neurosci. 10, 1214–1221.

Dayan, P., and Daw, N. D. (2008). Decision theory, reinforcement learning, and the brain. Cogn. Affect. Behav. Neurosci. 8, 429–453.

Delgado, M. R., Jou, R. L., and Phelps, E. A. (2011). Neural systems underlying aversive conditioning in humans with primary and secondary reinforcers. Front. Neurosci. 5:71. doi: 10.3389/fnins.2011.00071

Hansen, K. A., Hillenbrand, S. F., and Ungerleider, L. G. (2011). Persistency of priors-induced bias in decision behavior and the fMRI signal. Front. Neurosci. 5:29. doi: 10.3389/fnins.2011.00029

Kirk, U., Downar, J., and Montague, P. R. (2011). Interoception drives increased rational decision-making in meditators playing the ultimatum game. Front. Neurosci. 5:49. doi: 10.3389/fnins.2011.00049

Körding, K. P., Tenenbaum, J. B., and Shadmehr, R. (2007). The dynamics of memory as a consequence of optimal adaptation to a changing body. Nat. Neurosci. 10, 779–786.

Kurniawan, I. T., Guitart-Masip, M., and Dolan, R. J. (2011). Dopamine and effort-based decision making. Front. Neurosci. 5:81. doi: 10.3389/fnins.2011.00081

Lee, D., and Seo, H. (2007). Mechanisms of reinforcement learning and decision making in the primate dorsolateral prefrontal cortex. Ann. N. Y. Acad. Sci. 1104, 108–122.

Milstein, D. M., and Dorris, M. C. (2011). Saccade generation is influenced by relative expected subjective value under conditions of uncertainty. Front. Neurosci. in press.

Niv, Y., and Montague, P. R. (2008). “Theoretical and empirical studies of learning,” in Neuroeconomics: Decision Making and the Brain, eds P. W. Glimcher, C. F. Camerer, E. Fehr and R. A. Poldrack (London: Elsevier), 331–348.

Opris, I., Lebedev, M., and Nelson, R. J. (2011). Motor planning under unpredictable reward: modulations of movement vigor and primate striatum activity. Front. Neurosci. 5:61. doi: 10.3389/fnins.2011.00061

Pezzulo, G., and Rigoli, F. (2011). The value of foresight: how prospection affects decision-making. Front. Neurosci. 5:79. doi: 10.3389/fnins.2011.00079

Ray, D., and Bossaerts, P. (2011). Positive temporal dependence of the biological clock implies hyperbolic discounting. Front. Neurosci. 5:2. doi: 10.3389/fnins.2011.00002

Santos, J. P., Seixas, D., Brandão, S., and Moutinho, L. (2011). Investigating the role of the ventromedial prefrontal cortex in the assessment of brands. Front. Neurosci. 5:77. doi: 10.3389/fnins.2011.00077

Stoloff, R. H., Taylor, J. A., Xu, J., Ridderikhoff, A., and Ivry, R. B. (2011). Effect of reinforcement history on hand choice in an unconstrained reaching task. Front. Neurosci. 5:41. doi: 10.3389/fnins.2011.00041

Sutton, R. S., and Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.

Citation: Trommershäuser J (2011) Frontiers research topic on the neurobiology of choice. Front. Neurosci. 5:119. doi: 10.3389/fnins.2011.00119

Received: 15 September 2011; Accepted: 15 September 2011;
Published online: 10 October 2011.

Copyright: © 2011 Trommershäuser. This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.

*Correspondence: julia.trommershaeuser@nyu.edu

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.