Front. Behav. Neurosci., 23 October 2009
Sec. Motivation and Reward
Volume 3 - 2009 | https://doi.org/10.3389/neuro.08.039.2009
Department of Psychology, Stony Brook University, Stony Brook, NY, USA
Decisions frequently have consequences that play out over time and these temporal factors can exert strong influences on behavior. For example, decision-makers exhibit delay discounting, behaving as though immediately consumable goods are more valuable than those available only after some delay. With the use of functional magnetic resonance imaging, we are now beginning to characterize the physiological bases of such behavior in humans and to link work on this topic from neuroscience, psychology, and economics. Here we review recent neurocognitive investigations of temporal decision-making and outline the theoretical picture that is beginning to take shape. Taken as a whole, this body of work illustrates the progress made in understanding temporal choice behavior. However, we also note several questions that remain unresolved and areas where future work is needed.
Decision-makers are frequently faced with choices that differ in the timing of their consequences. Such intertemporal choices require shrewd decision-makers to consider, not only what they want, but when they want it. For example, when asked to deliver a guest lecture, your response is likely to depend strongly on whether the lecture is to be delivered relatively soon or in the more distant future. More gravely, decisions about whether to refinance one’s mortgage (Harding, 2000 ) and about whether governments should spend money to protect the environment (Dasgupta, 2008 ; Hardisty and Weber, 2009 ) can be characterized as intertemporal choices. Furthermore, abnormalities in intertemporal choice behavior have been associated with an array of undesirable behavior including drug addiction (Kirby and Petry, 2004 ; Rossow, 2008 ). Given the relevance of intertemporal choice, it is clear that we have much to gain by understanding how intertemporal choices are made, what factors influence intertemporal choices, and what is responsible for aberrations in intertemporal choice in some patient populations.
Like much of decision-making, intertemporal choice has long been the province of economics. Work from this field has provided both normative guidelines for intertemporal choice and the theoretical tools to evaluate observed behavior. Empirical support has come primarily from psychology and has, as it often does, focused on decision-makers’ deviations from the prescriptions of economics. With the recent interest in utilizing neurobiological techniques to understand decision making behavior (Glimcher, 2003 ; Glimcher et al., 2008 ), particularly functional magnetic resonance imaging (fMRI), we are in the position to observe the operation of the processes responsible for intertemporal decisions, processes that are extremely difficult to evaluate using behavior alone. Here, we review recent work on intertemporal choice with a focus on studies involving humans, the majority of which have utilized a combination of behavior, fMRI, and quantitative economic theory.
The majority of intertemporal choice studies have been designed to explore delay discounting, the robust finding that animals, including humans, behave as though immediately consumable goods are more valuable than those only available after some delay. This phenomenon is so powerful that decision-makers frequently forgo delayed rewards in favor of immediate rewards even when the delayed rewards are objectively more valuable. For example, a decision-maker might choose $100 delivered immediately over $200 to be delivered in 3 years. Such a choice is said to reflect the subjective value of the $200 option, discounted according to the associated 3-year delay. The sway of negative events is similarly blunted by delay. The idea of working on your taxes next month seems less unpleasant than the prospect of working on them tonight.
Economics has viewed delay discounting from within the framework of discounted utility theory (Samuelson, 1937 ) according to which the subjective value of goods drops by a fixed percentage (frequently referred to as the discount rate) for each unit of time that those goods are delayed. If a decision-maker discounts the future at a rate of 10% annually, then $100 available in a year is only worth $90 right now. That same reward offered in 5 years is only worth $59. If this drop in subjective value is plotted over time, the resulting discounting curve is exponential in shape.
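The arithmetic of discounted utility theory can be made concrete with a short sketch; the 10% rate and dollar amounts are simply those of the example above:

```python
def exponential_discount(amount, delay_years, annual_rate=0.10):
    """Subjective value under discounted utility theory: value drops
    by a fixed percentage (the discount rate) per unit of delay."""
    return amount * (1.0 - annual_rate) ** delay_years

# $100 delayed 1 year at a 10% annual discount rate
print(round(exponential_discount(100, 1)))  # 90
# The same $100 delayed 5 years
print(round(exponential_discount(100, 5)))  # 59
```

Plotting this function over delay yields the exponentially shaped discounting curve described above.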
Behavioral work on delay discounting has primarily focused on two major facets of the phenomenon. First, it appears that animals, including humans, do not discount exponentially. Given that such behavior is arguably non-normative, this possibility has generated a large body of behavioral data (Ainslie and Herrnstein, 1981 ; Loewenstein and Thaler, 1989 ; Ainslie, 1992 ; Green et al., 1994a ; Kirby and Herrnstein, 1995 ; Rachlin, 1995 ; Kirby, 1997 ) nearly all of which demonstrates that decision-makers behave as though their discount rate declines as rewards are pushed further into the future. Waiting 2 years for a reward might be worth 10% less than waiting 1 year, but waiting 4 years for a reward might be worth only 5% less than waiting 3 years. Such discounting is referred to as hyperbolic or quasi-hyperbolic and is blamed for a variety of unwanted behavior (Ainslie, 2001 ), all stemming from the fact that hyperbolic discounters make one set of choices about rewards in the distant future only to reverse their preferences as those same rewards draw near.
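The preference reversals characteristic of hyperbolic discounting can be illustrated with a minimal sketch; the Mazur-style form V = A/(1 + kD) and the rate k = 0.5 per year are illustrative assumptions rather than estimates from any study cited here:

```python
def hyperbolic_value(amount, delay_years, k=0.5):
    """Mazur-style hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay_years)

def prefers_larger_later(front_end_delay):
    """Choice between $100 at delay d and $200 at delay d + 3 years."""
    small_soon = hyperbolic_value(100, front_end_delay)
    large_late = hyperbolic_value(200, front_end_delay + 3)
    return large_late > small_soon

# When the smaller reward is immediate, it wins ...
print(prefers_larger_later(0))   # False
# ... but pushing both rewards 10 years into the future
# reverses the preference.
print(prefers_larger_later(10))  # True
```

An exponential discounter would never reverse: adding a common front-end delay multiplies both options' values by the same factor, leaving the preference unchanged.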
Second, work has focused on the rate of discounting itself and has found that discounting rates vary across individuals and contexts, and are sometimes unreasonably extreme. For example, in two of the more widely cited studies (Hausman, 1979 ; Gately, 1980 ), discount rates were estimated based on the purchase price and operating costs of home appliances. The estimated rates were shown to be significantly greater than those typically assumed by economists (anywhere from 25 to 300% per year, well above the rates at which consumers borrow and invest). Thus, there has been a significant effort to characterize the rate at which various populations discount rewards. For example, children (Green et al., 1994b ; Scheres et al., 2006 ), including those with attention deficit hyperactivity disorder (ADHD, Barkley et al., 2001 ), alcoholics (Vuchinich and Simpson, 1998 ; Petry, 2001 ), smokers (Bickel et al., 1999 ; Reynolds et al., 2004 ), cocaine and heroin addicts (Coffey et al., 2003 ; Kirby and Petry, 2004 ), and compulsive gamblers (Holt et al., 2003 ) all discount at a faster rate than healthy adults; they exhibit a relative inability to wait for rewards. In contrast, older adults (Green et al., 1994b ) and those with a higher IQ (Shamosh and Gray, 2008 ) have been shown to discount at a slower rate; they exhibit relative patience.
Utilizing classic intertemporal choice tasks, recent work in cognitive neuroscience has begun to address the neural mechanisms associated with delay discounting. One basic question that this field is uniquely suited to address is what distinguishes those occasions on which decision-makers choose to wait from those occasions on which they choose immediate rewards. That is, what leads to patient and impatient choices? Wittmann et al. (2007 ) utilized fMRI and a standard intertemporal choice task (though with completely hypothetical rewards). Based on subjects’ choices, the magnitude of the immediate option was adjusted incrementally to find the point at which that particular decision-maker would be indifferent between the immediate and delayed options. Trials on which the delayed option was chosen (patient choices) were then compared to trials on which the immediate option was chosen (impatient choices). This contrast yielded a network of brain regions that included bilateral posterior insular cortex, left posterior cingulate, as well as temporal and parietal regions. Interestingly, no regions appeared to exhibit greater activity when choosing the immediate reward. This study also observed higher levels of activity in the striatum when subjects were asked about rewards to be delivered in the near future (<1 year) than when they were asked about delayed rewards in the distant future (≥1 year).
These findings, particularly the involvement of the insula, extend previous work (Tanaka et al., 2004 ) on reward-based learning that has shown a delay-related gradient running from anterior to posterior insular cortex. When subjects learn to make sequences of actions to acquire monetary rewards, anterior and inferior portions of insular cortex appear to be differentially involved in producing reward-prediction error signals related to immediate rewards. In contrast, posterior and superior portions of insular cortex appear to serve this same function when learning about more delayed rewards. Taken together, these studies suggest that decisions involving increased delay are associated with activity in the posterior insula.
Though insular cortex has been implicated in a variety of sensory, cognitive, and emotional processes, there are intriguing intersections between these decision-related findings and previous work on pain. Mirroring the delay-related gradient in insular cortex, work on pain perception has found a similar differentiation along the anterior–posterior axis with more posterior portions associated with the more sensory aspects of pain processing and the more anterior portions associated with the more cognitive or emotional aspects of pain (Singer et al., 2004 ). For example, the anticipation of impending pain elicits activity in more anterior portions of the insula than the subsequent pain experience itself (Ploghaus et al., 1999 ). More generally, insular cortex has been associated with drug addiction, a condition marked by, and presumably maintained by, pronounced difficulties in weighing short-term gains (e.g., drug-use) against long-term outcomes (e.g., jail, health). For example, cocaine addicts exhibit structural abnormalities in insular cortex including white matter lesions (Bartzokis et al., 1999 ) and a reduction in gray matter (Franklin et al., 2002 ). In particular, insular activity appears to be closely related to drug craving (Garavan et al., 2000 ; Kilts et al., 2001 ; Schneider et al., 2001 ; Bonson et al., 2002 ; Brody et al., 2002 ; Myrick et al., 2004 ; Wang et al., 2007 ) and relapse (Paulus et al., 2005 ; Naqvi et al., 2007 ). Abnormal insular activation has also been found in individuals with ADHD (Ernst et al., 2003 ; Rubia et al., 2009 , but see Scheres et al., 2006 ) and conduct disorder (Rubia et al., 2009 ) who, like addicts, exhibit diminished patience in delay discounting tasks (Barkley et al., 2001 ).
Other work has sought to explore what neural features distinguish patient individuals from impatient individuals. Activity in the striatum has been shown (Hariri et al., 2006 ) to predict discounting rates across individuals such that larger but less discriminative reward prediction errors are associated with diminished patience. This same pattern of striatal activity was recently shown (Forbes et al., 2009 ) to be associated with genetic variation in genotypes thought to influence the release, availability, and signaling strength of dopamine (DAT1, DRD2, and DRD4). There also appears (Boettiger et al., 2007 ) to be a relationship between polymorphic variation of the catechol-O-methyltransferase (COMT) gene and delay discounting with the 158Val/Val genotype being associated with diminished patience and hyperactivity in dorso-lateral prefrontal cortex (DLPFC) and posterior parietal cortex (with no apparent effects in the striatum). The 158Val/Val genotype has also been linked to perseverative errors during reinforcement learning tasks which have been attributed to reduced levels of dopamine in prefrontal cortex (Egan et al., 2001 ; Frank et al., 2007 ). Lastly, there is intriguing evidence (Yacubian et al., 2007 ) that variation of COMT and DAT may interact to modulate complex patterns of activity in the striatum during reward processing.
More recent neurocognitive work has explored delay discounting using more fine-grained analytical methods. For example, an fMRI study by Kable and Glimcher (2007) utilized what they refer to as a “neurometric” approach in order to explore brain regions whose activity varied with the subjective value of various monetary rewards. In their study, decision-makers completed a standard intertemporal choice task in which they chose between pairs of rewards that varied both in their magnitude and in when they would be delivered. For example, a subject might choose between $20 to be delivered that day and $40 to be delivered 30 days later. By observing how changes in delay and reward magnitude modulated behavioral choices, the discounting curves underlying subjects’ choices could be reconstructed (Myerson and Green, 1995 ). These reconstructed curves could then be used to compute the idiosyncratic subjective value of any arbitrary reward–delay combination. To explore the neural representation of subjective value, these authors investigated what, if any, brain regions exhibited activity that corresponded to these subjective value functions. The results indicated that the ventral striatum, medial prefrontal cortex, and posterior cingulate cortex exhibited such a pattern of activity. Variation in these regions’ activity was better predicted by subjective value than by several related quantities (e.g., delay, reward magnitude, choice) and closely mirrored individual differences in subjects’ discounting rates.
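The general logic of reconstructing an idiosyncratic discounting curve from behavior can be sketched as follows. This is a simplified illustration (solving for the hyperbolic parameter k at a single behavioral indifference point), not the authors' actual estimation procedure, and the dollar amounts are those of the example above:

```python
def hyperbolic_sv(amount, delay, k):
    """Hyperbolic subjective value: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

def k_from_indifference(immediate_amount, delayed_amount, delay):
    """If a decision-maker is indifferent between an immediate reward and
    a delayed one, then A_now = A_late / (1 + k * D); solve for k."""
    return (delayed_amount / immediate_amount - 1.0) / delay

# Indifference between $20 today and $40 in 30 days implies k = 1/30
# per day ...
k = k_from_indifference(20, 40, 30)
# ... which can then assign a subjective value to any arbitrary
# reward-delay combination, e.g., $60 in 15 days:
print(round(hyperbolic_sv(60, 15, k), 2))  # 40.0
```

In practice, k is typically fit across many choices (e.g., by maximum likelihood with a probabilistic choice rule), but the resulting curve plays the same role: it converts any reward-delay pair into a single subjective value that can be regressed against neural activity.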
One problem in relating the neurobiological work on human intertemporal choice with the currently larger literature on non-human animals (Cardinal, 2006 ) is that the delays typically utilized in human tasks (e.g., days, months, years) are significantly longer than those used with other animals (e.g., less than a minute). In an attempt to bridge this gap, recent work (Gregorios-Pippas et al., 2009 ) has investigated human delay discounting utilizing relatively short delays. Subjects completed a delay discounting task involving delays ranging from 4 to 14 s. Unlike other studies, subjects did not choose between rewards. Instead, subjects were presented with visual cues about impending, temporally delayed rewards with the identity of the cue reliably signaling the length of the delay (although rewards were only paid out at the conclusion of the study). The results revealed that the visual cues elicited graded increases in activity in the ventral striatum (the focus of this study) such that cues associated with shorter delays (thus indicating more subjectively valuable rewards) elicited greater striatal activity. Furthermore, these neural responses mirrored individual subjects’ patterns of choice in a separate behavioral choice task. Intriguingly, these cue-induced neural responses tended to decrease as subjects’ total accumulated reward increased, suggesting a potential neural analog of diminishing marginal utility (Edwards, 1954 ). Taken together with the work of Kable and Glimcher (2007) , these results suggest that activity in the ventral striatum, along with portions of anterior and posterior medial cortex, exhibits a graded signal that represents the subjective value of delayed rewards.
This, along with related pharmacological work demonstrating the role of dopamine in delay discounting (Montague and Berns, 2002 ; Kheramin et al., 2004 ; Winstanley et al., 2004 ; Phillips et al., 2007 ; Moustafa et al., 2008 ) suggests that striatal–cortical circuitry is likely to be a key player in the valuation of delayed rewards and a target for therapeutic work on disorders characterized by impulsive behavior (Rahman et al., 2001 ).
Despite the large and growing literature describing the neural signals that represent the idiosyncratic, subjective value of delayed rewards, we ultimately wish to understand the origin of these value signals, their variation across healthy individuals, and their aberrations in clinical populations. If impatient choices occur when the subjective value of delayed rewards is insufficient, it seems reasonable to ask why delayed rewards are not valued more strongly. With a better understanding of how subjective value is computed, we would be in a much better position to design both diagnostic instruments and treatments.
Theorizing in psychology has emphasized the idea that choices between delayed rewards (as well as other types of choices) involve a competition between “the passions” and reason (Ainslie, 1975 , 2001 ; Schelling, 1984 ; Loewenstein, 1996 ; Soman et al., 2005 ). Some (Metcalfe and Mischel, 1999 ) have suggested that this competition is between rational, cognitive processes and irrational, emotional processes. Others (Thaler and Sheffrin, 1981 ; Ainslie, 1992 ; McClure et al., 2004 ) have suggested a competition between a prudent, far-sighted process concerned with overall welfare and a greedy, myopic process more concerned with immediate gains. Regardless of the details, what is common across these accounts is the belief that the relative value of waiting and immediate gratification results from a struggle between mutually incompatible drives. If the prudent, rational, cognitive system is able to suppress the greedy, myopic, emotional system, then the decision-maker will see the wisdom of waiting and exhibit relative patience. Otherwise, the emotional system will dominate, producing a strong aversion to waiting and relative impatience.
Several broad literatures have yielded data in support of this general proposal, though it is predominantly indirect in nature. For example, there appear to be large inter-species differences in delay discounting, though the comparison is plagued by methodological differences which make interpretation difficult. Compared to humans, non-human animals exhibit greater impatience for delayed rewards (Logue et al., 1986 ). Even monkeys, which exhibit relative cognitive sophistication, will choose immediate rewards over significantly larger delayed rewards even when the delay is only several seconds (Kim et al., 2008 ; Hwang et al., 2009 ). For pigeons, the situation is even more dramatic, with immediate rewards losing approximately 50% of their value when delayed by a single second (Mazur, 1984 ). To the extent that one associates prudent, rational control of behavior with frontal lobe function (and to the extent that species differences are not a methodological artifact), these differences across species suggestively mirror the phylogenetic development of frontal cortex (Fuster, 2002 ). Similarly, delay discounting behavior appears to follow a systematic trajectory over the course of the human lifespan (Green et al., 1994b , 1999b ). Relative to young adults, children exhibit significantly less patience for delayed rewards. Here again, this developmental trend is generally consistent with the ontogenetic changes taking place in frontal cortex (Sowell et al., 1999 ; Fuster, 2002 ). A related and growing literature has also demonstrated a strong relationship between overall intellectual ability and patience (Mischel et al., 1989 ; Burks et al., 2009 ). Indeed, a recent meta-analysis of 24 relevant delay discounting studies ultimately concluded that higher IQ is reliably associated with greater patience (Shamosh and Gray, 2008 ).
Two related studies by McClure and colleagues (McClure et al., 2004 ; McClure et al., 2007 ) provide the first neural evidence to support the idea that delay discounting involves a dual-process competition. Specifically, this group tested Laibson’s beta-delta account of discounting (Laibson, 1997 ) which posits two components: one concerned with immediate rewards (beta) and one concerned with delayed rewards (delta). Using a traditional delay discounting task, decision-makers were asked to choose between pairs of rewards of varying sizes to be delivered at various points in the future. To isolate neural activity associated with the beta component, trials involving an immediate reward were compared with trials that involved only delayed rewards. This comparison revealed several brain regions that exhibited greater activity when faced with an immediate reward. These regions included ventral striatum, medial prefrontal cortex, and posterior cingulate cortex. To isolate the delta component, brain regions that were activated by the task, but that did not distinguish between the different delays were selected. This resulted in a broad network of regions including dorsolateral and ventrolateral portions of prefrontal cortex as well as lateral orbital frontal cortex.
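Laibson's beta-delta account can be stated compactly; the parameter values below are purely illustrative:

```python
def beta_delta_value(amount, delay, beta=0.7, delta=0.95):
    """Quasi-hyperbolic discounting: V = A if D == 0, else
    beta * delta**D * A. The beta < 1 term applies a one-time penalty
    to any delay at all (present bias); delta discounts each additional
    unit of delay exponentially."""
    if delay == 0:
        return float(amount)
    return beta * (delta ** delay) * amount

# The beta penalty makes the step from 'now' to 'next period' feel
# much steeper than the step from period 10 to period 11, even though
# delta itself is constant.
drop_now = beta_delta_value(100, 0) - beta_delta_value(100, 1)
drop_later = beta_delta_value(100, 10) - beta_delta_value(100, 11)
print(round(drop_now, 2), round(drop_later, 2))
```

On this account, the beta component privileges immediately available rewards while the delta component treats all delays evenhandedly, which is why the fMRI contrast isolating the beta component compared trials containing an immediate option against trials containing only delayed options.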
In isolation, these contrasts are relatively coarse, especially given how well-specified the theory being tested is. Critically, however, further analyses demonstrated that the relative activity in these two networks was predictive of subjects’ choices. When faced with a choice between an immediate reward and a delayed reward, choosing the immediate reward was associated with increased activity in the beta network and decreased activity in the delta network. Choices for the delayed reward were associated with the opposite pattern. A recent replication of this study (McClure et al., 2007 ) generalized these findings to decisions involving primary rewards (juice and water) and shorter delays (up to 20 min). Contrasts revealed similar networks of brain regions associated with the beta and delta components. Furthermore, choices were again found to be predicted by the relative activity in the two networks, this time utilizing more rigorous regression analyses.
It is interesting to note that the anatomical details of the beta and delta networks grossly mirror the psychologist’s conceptualization. The greedy, irrational, myopic drive is embodied by portions of the evolutionarily older limbic system whereas the rational, patient drive is embodied by the relatively recent frontal cortex (particularly DLPFC). The relationship between activity in these networks and choice behavior also matches the expected competition. To the extent that DLPFC can suppress the relatively insolent limbic system, the decision-maker will make choices that are beneficial in the long-run. If the passionate limbic system can overcome the DLPFC’s control, the decision-maker makes impatient choices.
Along with a fairly well-entrenched theoretical story, investigations into the mechanisms underlying delay discounting face another hurdle: such investigations are simply difficult to conduct. The above investigation of the beta and delta networks is illustrative. Though these findings are consistent with the theoretical framework proposed by its authors, this interpretation has been criticized (Kable and Glimcher, 2007 ) as being consistent with alternative formulations. Recall the investigations into the neural representation of subjective value reviewed above. These studies found that activity in a highly similar set of regions was related to both delay and reward magnitude (and their combination, subjective value). Thus, it is possible that the ostensible beta network exhibited greater activity for immediate rewards simply because the immediate reward represented an option with a large subjective value. Furthermore, if choices are made on the basis of subjective value, then it is not surprising that activity in the beta network should be related to choice behavior (see regression analyses, McClure et al., 2007 ). Below we outline other obstacles.
The work reviewed above illustrates that neuroscientists have done much to shed light on what distinguishes patient from impatient choices and individuals and have even begun to gain insight into the cognitive and neural processes that govern decisions about delayed rewards. However, there is an even more basic issue that has been largely ignored. Why are delayed rewards discounted at all? Why are small, immediate rewards ever tempting enough to eclipse larger, delayed rewards? Why would rational decision-makers not always wait for larger rewards, regardless of the associated delay?
Again, one likely explanation may come from a long history of theorizing in economics (Yaari, 1965 ; Benzion et al., 1989 ; Prelec and Loewenstein, 1991 ; Sozou, 1998 ; Dasgupta and Maskin, 2005 ), ecology (Kacelnik, 2003 ), and psychology (Mischel, 1966 ; Stevenson, 1986 ; Mazur, 1989 ; Rachlin et al., 1991 ; Mazur, 1995 , 1997 ) which suggests that delay exerts its influence on choices via the perceived risk associated with waiting, a suggestion that has been referred to as the implicit risk hypothesis (Benzion et al., 1989 ). If a decision-maker believes that the probability of acquiring a promised reward is uncertain simply by virtue of being delayed, then that decision-maker is justified in reducing the subjective value of the reward. For example, a bird waiting for fruit to ripen might choose to eat some immediately if it believed the fruit’s future availability was not guaranteed (i.e., it could be eaten by a competitor, it might rot, etc.). Furthermore, decision-makers might believe that the probability of receiving a promised reward generally decreases with time, which would give rise to the monotonic decreases in subjective value that occur with increases in delay.
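The implicit risk hypothesis has a simple quantitative form. If promised rewards "survive" at a constant hazard rate λ per unit time, the probability of actually collecting a reward delayed by t is exp(-λt), and the resulting expected value falls off exactly like an exponential discounting curve. A minimal sketch (the hazard rate here is chosen, for illustration, to reproduce a 10% annual discount rate):

```python
import math

def survival_probability(delay, hazard_rate):
    """P(reward still available after `delay`) under a constant hazard rate."""
    return math.exp(-hazard_rate * delay)

def implied_subjective_value(amount, delay, hazard_rate):
    """Expected value of a delayed reward if delay merely implies risk."""
    return amount * survival_probability(delay, hazard_rate)

# A hazard rate of -ln(0.9) ~ 0.105 per year makes delay-as-risk
# indistinguishable from a 10% annual exponential discount rate:
lam = -math.log(0.9)
print(round(implied_subjective_value(100, 1, lam), 2))  # 90.0
print(round(implied_subjective_value(100, 5, lam), 2))  # 59.05
```

This equivalence is what makes the hypothesis hard to test: contrasts between immediate and delayed rewards are, on this view, also contrasts between high and low probability rewards.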
In one sense, the implicit risk hypothesis is attractive because it has the potential to entirely eliminate the phenomenon of delay discounting by translating time, the processing of which we are just beginning to grapple with (Mauk and Buonomano, 2004 ), into probability and uncertainty, concepts that are relatively well understood. In another sense, however, this hypothesis creates ambiguity when attempting to interpret previous delay discounting results (both behavioral and neural). For example, according to the implicit risk hypothesis, comparisons between immediate and delayed rewards are actually comparisons between high and low probability rewards. Thus, any results from such comparisons (e.g., contrasts in fMRI analyses) could reflect temporal processing or the processing of implicit probability or both. Similarly, one can reconceptualize the computation of subjective value as reflecting implicit probability instead of delay and the same can be done for dual-process accounts of choice.
Because of this potential ambiguity, it is instructive to briefly compare the temporal decision-making results reviewed above with work on choice under risk and uncertainty. Just as with the delay discounting work reviewed above, insular cortex has been implicated in risky decision-making. For example, insular cortex exhibits greater activity when decision-makers choose a low probability reward than when they choose a high probability reward (Paulus et al., 2003 ). Furthermore, insular activity predicts the likelihood of choosing low probability rewards. Reward probability also modulates activity in orbitofrontal and ventromedial frontal cortices (Critchley et al., 2001 ; Smith et al., 2002 ; Clark et al., 2008 ; Xue et al., 2009 ) as well as in the striatum (Hsu et al., 2005 ; Xue et al., 2009 ), and activity in the striatum is correlated with the subjective value of risky rewards (Hsu et al., 2005 ; Knutson et al., 2005 ; Yacubian et al., 2006 ; Tobler et al., 2007 ; Yacubian et al., 2007 ). The overlap between these findings and those from investigations of ostensibly temporal decision-making suggests that there is at least a reasonable possibility that the implicit risk hypothesis is correct. To be clear, reducing temporal decision-making to risky choice in no way trivializes the work on temporal decision-making. Indeed, substantiating the neural equivalence of delay and reward probability would be a major step forward, helping to unify two, currently separate, processes and to validate long-standing theory.
Unfortunately, not all of the empirical evidence for the implicit risk hypothesis is as straightforward. For example, manipulations of probability and delay appear to elicit different patterns of choices (Ostaszewski et al., 1998 ; Holt et al., 2003 ; Green and Myerson, 2004 ; Chapman and Weber, 2006 ). As reward magnitudes increase, probability appears to have more influence on behavior, whereas delay appears to have less influence (Green et al., 1999a ). Temporal decisions appear to depend on whether the relevant rewards are immediately consumable (e.g., candy) or not (e.g., money), whereas discounting over probability does not (Estle et al., 2007 ). Furthermore, some authors (Green et al., 1999a ) have noted that the lay concept of “impulsivity” seems to best describe an increased preference for low probability rewards (e.g., the temptation to play the lottery) but a decreased preference for delayed rewards (e.g., the temptation to take out a payday loan). Indeed, even with large samples, choice behavior in delay and probability discounting tasks is only weakly correlated within individual subjects (Myerson et al., 2003 ). Lastly, there are results showing that delay can have behavioral consequences even when probability is held constant. Work on what is referred to as the temporal resolution of uncertainty (Chew and Ho, 1994 ; Arai, 1997 ) has found that people exhibit strong preferences between gambles in which reward delivery time is fixed and which differ only in when the outcome of the gamble is revealed.
The partial dissociation of risky and temporal decision-making implies that the neural basis of temporal decision-making is significantly less clear than it might appear. Without appropriate comparisons, it remains ambiguous as to whether delay discounting results are being driven by delay, the risk implied by delay, or both. Recent work has begun to tackle this issue directly.
The first direct test of the implicit risk hypothesis to utilize fMRI (or any other physiological measure) was recently carried out (Weber and Huettel, 2008 ). Subjects in this study were asked to make two sorts of choices. First, subjects performed a classic risky choice task, choosing between rewards that varied in both magnitude and probability (e.g., a 50% chance of $13.50 or a 100% chance of $7). Second, subjects performed a traditional delay discounting task, choosing between rewards that varied in both magnitude and delay (e.g., $6.25 today or $9.25 in 1 month). According to the implicit risk hypothesis, these two conditions should be essentially identical because the stated delays are only influencing choices via the risk they imply. In contrast, this study revealed a variety of brain regions that were differentially engaged by the two tasks. Risky choice elicited greater involvement of posterior portions of parietal cortex, anterior cingulate, and anterior portions of the insula whereas the delay discounting task elicited greater involvement of DLPFC, posterior cingulate, and the caudate. Unfortunately, this comparison was complicated by the fact that subjects exhibited strikingly different patterns of choice in the risky and delayed choice tasks. Thus, it remains somewhat unclear whether the neural dissociation of risk and delay was driven by the task dimensions or other factors.
A recent fMRI study from our lab (Luhmann et al., 2008 ) has taken a slightly different approach to this same question. Rather than comparing risky and delay tasks, we instead chose to compare a risky task with a temporal resolution of uncertainty task that involved both risk and delay. Doing so allowed us to exert considerable control over the decision variables and to thus isolate behavioral and physiological results specifically tied to the temporal dimension. Subjects chose between pairs of small rewards (10 and 20 cents) that were delivered with varying probabilities (39–100%). In the immediate condition, the uncertainty associated with subjects’ choices was resolved immediately; subjects learned whether they would or would not be receiving their chosen reward as soon as they made their choice. In the delay condition, the uncertainty associated with subjects’ choices was resolved only after some variable delay. The delays were constructed such that lower probability rewards were resolved after a longer delay and higher probability rewards were resolved after a shorter delay. Specifically, the probabilities were such that the delay period embodied a constant hazard rate, a pattern that has been theorized to underlie normative delay discounting (Sozou, 1998 ; Dasgupta and Maskin, 2005 ). Comparing the two conditions, we found that both risk and delay exerted influence on subjects’ choices. Subjects were significantly more likely to choose the larger, less probable reward when the outcomes were revealed immediately, despite the fact that the probabilities were identical. Neurally, the delay condition elicited greater activity in the posterior cingulate than did the immediate condition. Furthermore, we observed parametric effects in the parahippocampal gyri, the anterior cingulate, and portions of superior parietal cortex such that activity in these regions increased as the delay associated with chosen rewards increased.
Lastly, we found that differences in individuals’ attitudes toward the delay component of our task were mirrored by activity in a region of frontopolar cortex.
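The constant-hazard construction described above can be sketched directly: under a hazard rate λ, the probability that a reward survives a delay d is e^(−λd), so each reward probability fixes a resolution delay d = −ln(p)/λ. The hazard rate and probability values below are illustrative, not the parameters used in the experiment:

```python
import math

def delay_for_probability(p: float, hazard_rate: float) -> float:
    """Delay at which a constant-hazard process leaves survival
    probability p, i.e. solve p = exp(-hazard_rate * d) for d."""
    return -math.log(p) / hazard_rate

hazard_rate = 0.1  # hypothetical hazard per second
for p in (1.00, 0.80, 0.60, 0.39):
    d = delay_for_probability(p, hazard_rate)
    print(f"p = {p:.2f} -> uncertainty resolved after {d:.1f} s")
# Lower-probability rewards resolve after longer delays,
# exactly the coupling built into the delay condition.
```

Because every probability maps onto a unique delay in this way, a subject indifferent to timing should behave identically in the immediate and delay conditions, which makes the observed behavioral difference between conditions interpretable as an effect of delay per se.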
These two studies appear to contradict the implicit risk hypothesis and may begin to shed some light on how delay and risk exert dissociable influences on choice. We have noted that the specific brain regions implicated in temporal processing in our study have also been implicated in prospection, the imagining of events in one’s future (Okuda et al., 2003; Addis et al., 2007; Buckner and Carroll, 2007; Hassabis et al., 2007; Szpunar et al., 2007; Addis and Schacter, 2008). Thus, we suggested that one way in which temporal decisions might differ from similar risky decisions is that deliberation about temporal choices is likely to involve evaluating not only the reward and its value, but also the experience of waiting itself. Our subjects’ choice behavior implied that delaying the resolution of uncertainty decreased the subjective value of options (much as when reward delivery is delayed). This may well be because the delay interval itself evoked a negative subjective experience. Decision-makers able to foresee such experiences as they deliberate would be better positioned to make superior choices.
This possibility highlights the true complexity of temporal decision-making. Risky choice involves presenting a choice to the decision-maker, allowing a choice to be made, and resolving the outcome, all of which can, and usually do, happen rather quickly. Temporal decisions, on the other hand, are made at one point in time but produce consequences that subsequently stretch out over time. Decision-makers have subjective experiences as they attempt to make a decision, while waiting, when uncertainty is resolved, and when receiving (or not receiving) the reward itself. To the extent that any or all of these experiences can be forecast before choices are made (Wilson and Gilbert, 2005), they can presumably exert some influence on decision-makers’ behavior (Loewenstein, 1987; Loewenstein et al., 2001).
Indeed, we already know something about the physiology of anticipation itself. For example, in a particularly elegant fMRI study (Berns et al., 2006 ), human subjects were shown cues followed by a variable delay and then an electric shock. The identity of the cue signaled the duration of the delay interval, so subjects knew in advance how long they would have to wait. As subjects waited for the shock delivery, a network of brain regions exhibited a complex and theoretically interesting pattern of activity. Regions that respond to pain, including somatosensory cortex, insular cortex, and the anterior cingulate, exhibited activity that reflected both the anticipation of the impending shock as well as dread, the negative, subjective experience associated with the waiting itself. Furthermore, the neural patterns exhibited during this delay period were associated with preferences in a behavioral decision-making task performed separately. Those subjects that exhibited the strongest neural effects of dread were more likely to choose stronger, immediate shocks over weaker, but delayed shocks.
We also know that delay-period activity is modulated by factors such as risk. In one study (Critchley et al., 2001), decision-makers made risky choices and had the outcomes of their choices withheld for 8 s. Across trials, the probability of winning was varied to investigate anticipatory processes. Several regions, including the anterior cingulate, orbitofrontal cortex, and anterior insula, exhibited delay-period activity that reflected the amount of uncertainty subjects were facing.
These findings suggest that temporal decisions pose a formidable challenge for even the savvy decision-maker. Despite the relatively simple descriptions temporal outcomes can take (e.g., $100 in 12 months), they actually embody a complex sequence of emotional and cognitive events, each of which can potentially influence the subjective value of a choice. Delay periods can elicit negative emotional reactions (e.g., dread) and thus decrease the value of delayed outcomes, but waiting can also elicit positive emotional reactions (Loewenstein, 1987; Chew and Ho, 1994) and thus act to increase value. This lability, coupled with the finding that people are not necessarily adept at predicting their future emotional reactions (Wilson and Gilbert, 2005), makes temporal choice look even more difficult. Furthermore, not only do these factors complicate temporal decision-making itself, they also complicate our attempts to study it; attempts to fully control and dissociate each of the relevant influences are unlikely to be feasible. Nonetheless, studies that acknowledge and take these factors into account have the potential to help overcome some of the ambiguities noted above and to paint a much richer picture of temporal decision-making.
The work reviewed above suggests that we are just beginning to gain insight into the nature of temporal decision-making. Much of this work has sought to explore the phenomenon of delay discounting: the finding that waiting for rewards decreases their attractiveness. As a first step, this work has begun to characterize the difference between patient and impatient choices within a single individual as well as between patient and impatient individuals. While such descriptions are critically informative, we ultimately need to understand the processes that operate to produce temporal decisions. Fortunately, the study of decision-making has a wealth of theoretical tools available from economics. More recent work has attempted to leverage economic theories to better understand the neural patterns observed during decision-making. The results have been provocative, but many questions remain unanswered.
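Delay discounting is most often quantified with a hyperbolic value function, V = A/(1 + kD), in which larger values of the discount rate k correspond to more impatient choosers (see, e.g., Myerson and Green, 1995, for model comparisons). A minimal sketch of how the same offered pair can elicit a patient or an impatient choice; the k values here are illustrative only:

```python
def hyperbolic_value(amount: float, delay: float, k: float) -> float:
    """Subjective value of `amount` delivered after `delay`,
    under hyperbolic discounting with discount rate k."""
    return amount / (1.0 + k * delay)

def prefers_delayed(small_now: float, large_amount: float,
                    delay: float, k: float) -> bool:
    """True if the discounted large reward beats the immediate small one."""
    return hyperbolic_value(large_amount, delay, k) > small_now

# $6.25 now vs. $9.25 in 1 month (delay in months), two illustrative k's:
print(prefers_delayed(6.25, 9.25, 1.0, k=0.1))  # patient chooser waits: True
print(prefers_delayed(6.25, 9.25, 1.0, k=2.0))  # impatient chooser: False
```

Fitting k to an individual's choices is the standard way the studies reviewed here separate patient from impatient individuals, but the parameter itself is silent about the underlying processes, which is exactly the gap noted above.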
We have pointed to several places where there is currently ambiguity in the treatment of temporal decision-making: correlates of patience vs. subjective value signals, delay vs. risk, etc. Furthermore, we have tried to point to a small number of relatively unexplored dimensions that are likely to be relevant for temporal decision-making: affective forecasting, anticipation, etc. This is certainly not an exhaustive list, as others (Berns et al., 2007 ; Wittmann and Paulus, 2008 ) have noted additional relevant factors. At this point, it appears that, despite complicating our experimental designs, investigating the interactions between these factors would be most valuable in illuminating the subtleties of temporal decision-making. Given the clinical and public policy implications of temporal decision-making as well as the sheer scientific potential in psychology, neuroscience, and economics, the benefits of such an approach seem to outweigh the costs.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
We are grateful to Heather Moore and Marla Krukowski for comments and feedback on drafts of this manuscript.
Addis, D. R., and Schacter, D. L. (2008). Constructive episodic simulation: temporal distance and detail of past and future events modulate hippocampal engagement. Hippocampus 18, 227–237.
Addis, D. R., Wong, A. T., and Schacter, D. L. (2007). Remembering the past and imagining the future: common and distinct neural substrates during event construction and elaboration. Neuropsychologia 45, 1363–1377.
Ainslie, G. (1975). Specious reward: a behavioral theory of impulsiveness and impulse control. Psychol. Bull. 82, 463–496.
Barkley, R. A., Edwards, G., Laneri, M., Fletcher, K., and Metevia, L. (2001). Executive functioning, temporal discounting, and sense of time in adolescents with attention deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD). J. Abnorm. Child. Psychol. 29, 541–556.
Bartzokis, G., Goldstein, I. B., Hance, D. B., Beckson, M., Shapiro, D., Lu, P. H., Edwards, N., Mintz, J., and Bridge, P. (1999). The incidence of T2-weighted MR imaging signal abnormalities in the brain of cocaine-dependent patients is age-related and region-specific. Am. J. Neuroradiol. 20, 1628–1635.
Benzion, U., Rapoport, A., and Yagil, J. (1989). Discount rates inferred from decisions: an experimental study. Manage. Sci. 35, 270–284.
Berns, G. S., Chappelow, J., Cekic, M., Zink, C. F., Pagnoni, G., and Martin-Skurski, M. E. (2006). Neurobiological substrates of dread. Science 312, 754–758.
Berns, G. S., Laibson, D., and Loewenstein, G. (2007). Intertemporal choice–toward an integrative framework. Trends Cogn. Sci. 11, 482–488.
Bickel, W. K., Odum, A. L., and Madden, G. J. (1999). Impulsivity and cigarette smoking: delay discounting in current, never, and ex-smokers. Psychopharmacology 146, 447–454.
Boettiger, C. A., Mitchell, J. M., Tavares, V. C., Robertson, M., Joslyn, G., D’Esposito, M., and Fields, H. L. (2007). Immediate reward bias in humans: fronto-parietal networks and a role for the catechol-O-methyltransferase 158Val/Val genotype. J. Neurosci. 27, 14383–14391.
Bonson, K. R., Grant, S. J., Contoreggi, C. S., Links, J. M., Metcalfe, J., Weyl, H. L., Kurian, V., Ernst, M., and London, E. D. (2002). Neural systems and cue-induced cocaine craving. Neuropsychopharmacology 26, 376–386.
Brody, A. L., Mandelkern, M. A., London, E. D., Childress, A. R., Lee, G. S., Bota, R. G., Ho, M. L., Saxena, S., Baxter, L. R. Jr., Madsen, D., and Jarvik, M. E. (2002). Brain metabolic changes during cigarette craving. Arch. Gen. Psychiatry 59, 1162–1172.
Buckner, R. L., and Carroll, D. C. (2007). Self-projection and the brain. Trends Cogn. Sci. 11, 49–57.
Burks, S. V., Carpenter, J. P., Goette, L., and Rustichini, A. (2009). Cognitive skills affect economic preferences, strategic behavior, and job attachment. Proc. Natl. Acad. Sci. U.S.A. 106, 7745–7750.
Cardinal, R. N. (2006). Neural systems implicated in delayed and probabilistic reinforcement. Neural Netw. 19, 1277–1301.
Chew, S. H., and Ho, J. L. (1994). Hope: an empirical study of attitude toward the timing of uncertainty resolution. J. Risk Uncertain. 8, 267–288.
Clark, L., Bechara, A., Damasio, H., Aitken, M. R. F., Sahakian, B. J., and Robbins, T. W. (2008). Differential effects of insular and ventromedial prefrontal cortex lesions on risky decision-making. Brain 131, 1311–1322.
Coffey, S. F., Gudleski, G. D., Saladin, M. E., and Brady, K. T. (2003). Impulsivity and rapid discounting of delayed hypothetical rewards in cocaine-dependent individuals. Exp. Clin. Psychopharmacol. 11, 18–25.
Critchley, H. D., Mathias, C. J., and Dolan, R. J. (2001). Neural activity in the human brain relating to uncertainty and arousal during anticipation. Neuron 29, 537–545.
Dasgupta, P., and Maskin, E. (2005). Uncertainty and hyperbolic discounting. Am. Econ. Rev. 95, 1290–1299.
Egan, M. F., Goldberg, T. E., Kolachana, B. S., Callicott, J. H., Mazzanti, C. M., Straub, R. E., Goldman, D., and Weinberger, D. R. (2001). Effect of COMT Val108/158 Met genotype on frontal lobe function and risk for schizophrenia. Proc. Natl. Acad. Sci. U.S.A. 98, 6917–6922.
Ernst, M., Kimes, A. S., London, E. D., Matochik, J. A., Eldreth, D., Tata, S., Contoreggi, C., Leff, M., and Bolla, K. (2003). Neural substrates of decision making in adults with attention deficit hyperactivity disorder. Am. J. Psychiatry 160, 1061–1070.
Estle, S. J., Green, L., Myerson, J., and Holt, D. D. (2007). Discounting of monetary and directly consumable rewards. Psychol. Sci. 18, 58–63.
Forbes, E. E., Brown, S. M., Kimak, M., Ferrell, R. E., Manuck, S. B., and Hariri, A. R. (2009). Genetic variation in components of dopamine neurotransmission impacts ventral striatal reactivity associated with impulsivity. Mol. Psychiatry 14, 60–70.
Frank, M. J., Moustafa, A. A., Haughey, H. M., Curran, T., and Hutchison, K. E. (2007). Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning. Proc. Natl. Acad. Sci. U.S.A. 104, 16311–16316.
Franklin, T. R., Acton, P. D., Maldjian, J. A., Gray, J. D., Croft, J. R., Dackis, C. A., O’Brien, C. P., and Childress, A. R. (2002). Decreased gray matter concentration in the insular, orbitofrontal, cingulate, and temporal cortices of cocaine patients. Biol. Psychiatry 51, 134–142.
Garavan, H., Pankiewicz, J., Bloom, A., Cho, J. K., Sperry, L., Ross, T. J., Salmeron, B. J., Risinger, R., Kelley, D., and Stein, E. A. (2000). Cue-induced cocaine craving: neuroanatomical specificity for drug users and drug stimuli. Am. J. Psychiatry 157, 1789–1798.
Gately, D. (1980). Individual discount rates and the purchase and utilization of energy-using durables: comment. Bell J. Econ. 11, 373–374.
Green, L., Fry, A. F., and Myerson, J. (1994b). Discounting of delayed rewards: a life-span comparison. Psychol. Sci. 5, 33–36.
Green, L., and Myerson, J. (2004). A discounting framework for choice with delayed and probabilistic rewards. Psychol. Bull. 130, 769–792.
Green, L., Myerson, J., and Ostaszewski, P. (1999a). Amount of reward has opposite effects on the discounting of delayed and probabilistic outcomes. J. Exp. Psychol. Learn. Mem. Cogn. 25, 418–427.
Green, L., Myerson, J., and Ostaszewski, P. (1999b). Discounting of delayed rewards across the life span: age differences in individual discounting functions. Behav. Process. 46, 89–96.
Gregorios-Pippas, L., Tobler, P. N., and Schultz, W. (2009). Short-term temporal discounting of reward value in human ventral striatum. J. Neurophysiol. 101, 1507–1523.
Harding, J. P. (2000). Mortgage valuation with optimal intertemporal refinancing strategies. J. Hous. Econ. 9, 233–266.
Hardisty, D. J., and Weber, E. U. (2009). Discounting future green: money versus the environment. J. Exp. Psychol. Gen. 138, 329–340.
Hariri, A. R., Brown, S. M., Williamson, D. E., Flory, J. D., de Wit, H., and Manuck, S. B. (2006). Preference for immediate over delayed rewards is associated with magnitude of ventral striatal activity. J. Neurosci. 26, 13213.
Hassabis, D., Kumaran, D., Vann, S. D., and Maguire, E. A. (2007). Patients with hippocampal amnesia cannot imagine new experiences. Proc. Natl. Acad. Sci. U.S.A. 104, 1726.
Hausman, J. A. (1979). Individual discount rates and the purchase and utilization of energy-using durables. Bell J. Econ. 10, 33–54.
Holt, D. D., Green, L., and Myerson, J. (2003). Is discounting impulsive? Evidence from temporal and probability discounting in gambling and non-gambling college students. Behav. Process. 64, 355–367.
Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., and Camerer, C. F. (2005). Neural systems responding to degrees of uncertainty in human decision-making. Science 310, 1680–1683.
Hwang, J., Kim, S., and Lee, D. (2009). Temporal discounting and inter-temporal choice in rhesus monkeys. Front. Behav. Neurosci. 3:9. doi:10.3389/neuro.08.009.2009.
Kable, J. W., and Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625–1633.
Kheramin, S., Body, S., Ho, M., Velazquez-Martinez, D. N., Bradshaw, C. M., Szabadi, E., Deakin, J. F. W., and Anderson, I. M. (2004). Effects of orbital prefrontal cortex dopamine depletion on inter-temporal choice: a quantitative analysis. Psychopharmacology 175, 206–214.
Kilts, C. D., Schweitzer, J. B., Quinn, C. K., Gross, R. E., Faber, T. L., Muhammad, F., Ely, T. D., Hoffman, J. M., and Drexler, K. P. G. (2001). Neural activity related to drug craving in cocaine addiction. Arch. Gen. Psychiatry 58, 334–341.
Kim, S., Hwang, J., and Lee, D. (2008). Prefrontal coding of temporally discounted values during intertemporal choice. Neuron 59, 161–172.
Kirby, K. N. (1997). Bidding on the future: evidence against normative discounting of delayed rewards. J. Exp. Psychol. Gen. 126, 54–70.
Kirby, K. N., and Herrnstein, R. J. (1995). Preference reversals due to myopic discounting of delayed reward. Psychol. Sci. 6, 83–89.
Kirby, K. N., and Petry, N. M. (2004). Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls. Addiction 99, 461–471.
Knutson, B., Taylor, J., Kaufman, M., Peterson, R., and Glover, G. (2005). Distributed neural representation of expected value. J. Neurosci. 25, 4806–4812.
Loewenstein, G. (1987). Anticipation and the valuation of delayed consumption. Econ. J. 97, 666–684.
Loewenstein, G. (1996). Out of control: visceral influences on behavior. Organ. Behav. Hum. Decis. Process. 65, 272–292.
Loewenstein, G. F., Weber, E. U., Hsee, C. K., and Welch, N. (2001). Risk as feelings. Psychol. Bull. 127, 267–286.
Logue, A. W., Peña-Correal, T. E., Rodriguez, M. L., and Kabela, E. (1986). Self-control in adult humans: variation in positive reinforcer amount and delay. J. Exp. Anal. Behav. 46, 159–173.
Luhmann, C. C., Chun, M. M., Yi, D. J., Lee, D., and Wang, X. J. (2008). Neural dissociation of delay and uncertainty in intertemporal choice. J. Neurosci. 28, 14459–14466.
Mauk, M., and Buonomano, D. V. (2004). The neural basis of temporal processing. Annu. Rev. Neurosci. 27, 307–340.
Mazur, J. E. (1984). Tests of an equivalence rule for fixed and variable reinforcer delays. J. Exp. Psychol. Anim. Behav. Process. 10, 426–436.
Mazur, J. E. (1995). Conditioned reinforcement and choice with delayed and uncertain primary reinforcers. J. Exp. Anal. Behav. 63, 139–150.
McClure, S. M., Ericson, K. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2007). Time discounting for primary rewards. J. Neurosci. 27, 5796–5804.
McClure, S. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science 306, 503–507.
Metcalfe, J., and Mischel, W. (1999). A hot/cool-system analysis of delay of gratification: dynamics of willpower. Psychol. Rev. 106, 3–19.
Mischel, W. (1966). Theory and research on the antecedents of self-imposed delay of reward. Prog. Exp. Pers. Res. 3, 85–132.
Mischel, W., Shoda, Y., and Rodriguez, M. I. (1989). Delay of gratification in children. Science 244, 933–938.
Montague, P. R., and Berns, G. S. (2002). Neural economics and the biological substrates of valuation. Neuron 36, 265–284.
Moustafa, A. A., Cohen, M. X., Sherman, S. J., and Frank, M. J. (2008). A role for dopamine in temporal decision making and reward maximization in Parkinsonism. J. Neurosci. 28, 12294–12304.
Myerson, J., and Green, L. (1995). Discounting of delayed rewards: models of individual choice. J. Exp. Anal. Behav. 64, 263–276.
Myerson, J., Green, L., Scott Hanson, J., Holt, D. D., and Estle, S. J. (2003). Discounting delayed and probabilistic rewards: processes and traits. J. Econ. Psychol. 24, 619–635.
Myrick, H., Anton, R. F., Li, X., Henderson, S., Drobes, D., Voronin, K., and George, M. S. (2004). Differential brain activity in alcoholics and social drinkers to alcohol cues: relationship to craving. Neuropsychopharmacology 29, 393–402.
Naqvi, N. H., Rudrauf, D., Damasio, H., and Bechara, A. (2007). Damage to the insula disrupts addiction to cigarette smoking. Science 315, 531–534.
Okuda, J., Fujii, T., Ohtake, H., Tsukiura, T., Tanji, K., Suzuki, K., Kawashima, R., Fukuda, H., Itoh, M., and Yamadori, A. (2003). Thinking of the future and past: the roles of the frontal pole and the medial temporal lobes. Neuroimage 19, 1369–1380.
Paulus, M. P., Rogalsky, C., Simmons, A., Feinstein, J. S., and Stein, M. B. (2003). Increased activation in the right insula during risk-taking decision making is related to harm avoidance and neuroticism. Neuroimage 19, 1439–1448.
Paulus, M. P., Tapert, S. F., and Schuckit, M. A. (2005). Neural activation patterns of methamphetamine-dependent subjects during decision making predict relapse. Arch. Gen. Psychiatry 62, 761–768.
Petry, N. M. (2001). Delay discounting of money and alcohol in actively using alcoholics, currently abstinent alcoholics, and controls. Psychopharmacology 154, 243–250.
Phillips, P. E. M., Walton, M. E., and Jhou, T. C. (2007). Calculating utility: preclinical evidence for cost–benefit analysis by mesolimbic dopamine. Psychopharmacology 191, 483–495.
Ploghaus, A., Tracey, I., Gati, J. S., Clare, S., Menon, R. S., Matthews, P. M., and Rawlins, J. N. P. (1999). Dissociating pain from its anticipation in the human brain. Science 284, 1979–1981.
Prelec, D., and Loewenstein, G. (1991). Decision making over time and under uncertainty: a common approach. Manage. Sci. 37, 770–786.
Rachlin, H., Raineri, A., and Cross, D. (1991). Subjective probability and delay. J. Exp. Anal. Behav. 55, 233–244.
Rahman, S., Sahakian, B. J., Cardinal, R. N., Rogers, R. D., and Robbins, T. W. (2001). Decision making and neuropsychiatry. Trends Cogn. Sci. 5, 271–277.
Reynolds, B., Richards, J. B., Horn, K., and Karraker, K. (2004). Delay discounting and probability discounting as related to cigarette smoking status in adults. Behav. Process. 65, 35–42.
Rubia, K., Smith, A. B., Halari, R., Matsukura, F., Mohammad, M., Taylor, E., and Brammer, M. J. (2009). Disorder-specific dissociation of orbitofrontal dysfunction in boys with pure conduct disorder during reward and ventrolateral prefrontal dysfunction in boys with pure ADHD during sustained attention. Am. J. Psychiatry 166, 83–94.
Scheres, A., Dijkstra, M., Ainslie, E., Balkan, J., Reynolds, B., Sonuga-Barke, E., and Castellanos, F. X. (2006). Temporal and probabilistic discounting of rewards in children and adolescents: effects of age and ADHD symptoms. Neuropsychologia 44, 2092–2103.
Schneider, F., Habel, U., Wagner, M., Franke, P., Salloum, J. B., Shah, N. J., Toni, I., Sulzbach, C., Honig, K., and Maier, W. (2001). Subcortical correlates of craving in recently abstinent alcoholic patients. Am. J. Psychiatry 158, 1075–1083.
Shamosh, N. A., and Gray, J. R. (2008). Delay discounting and intelligence: a meta-analysis. Intelligence 4, 289–305.
Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., and Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science 303, 1157–1162.
Smith, K., Dickhaut, J., McCabe, K., and Pardo, J. V. (2002). Neuronal substrates for choice under ambiguity, risk, gains, and losses. Manage. Sci. 48, 711–718.
Soman, D., Ainslie, G., Frederick, S., Li, X., Lynch, J., Moreau, P., Mitchell, A., Read, D., Sawyer, A., and Trope, Y. (2005). The psychology of intertemporal discounting: why are distant events valued differently from proximal ones? Mark. Lett. 16, 347–360.
Sowell, E. R., Thompson, P. M., Holmes, C. J., Jernigan, T. L., and Toga, A. W. (1999). In vivo evidence for postadolescent brain maturation in frontal and striatal regions. Nat. Neurosci. 2, 859–860.
Sozou, P. D. (1998). On hyperbolic discounting and uncertain hazard rates. Proc. Biol. Sci. 265, 2015–2020.
Stevenson, M. K. (1986). A discounting model for decisions with delayed positive or negative outcomes. J. Exp. Psychol. Gen. 115, 131–154.
Szpunar, K. K., Watson, J. M., and McDermott, K. B. (2007). Neural substrates of envisioning the future. Proc. Natl. Acad. Sci. U.S.A. 104, 642.
Tanaka, S. C., Doya, K., Okada, G., Ueda, K., Okamoto, Y., and Yamawaki, S. (2004). Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nat. Neurosci. 7, 887–893.
Tobler, P. N., O’Doherty, J. P., Dolan, R. J., and Schultz, W. (2007). Reward value coding distinct from risk attitude-related uncertainty coding in human reward systems. J. Neurophysiol. 97, 1621.
Vuchinich, R. E., and Simpson, C. A. (1998). Hyperbolic temporal discounting in social drinkers and problem drinkers. Exp. Clin. Psychopharmacol. 6, 292–305.
Wang, Z., Faith, M., Patterson, F., Tang, K., Kerrin, K., Wileyto, E. P., Detre, J. A., and Lerman, C. (2007). Neural substrates of abstinence-induced cigarette cravings in chronic smokers. J. Neurosci. 27, 14035–14040.
Weber, B., and Huettel, S. (2008). The neural substrates of probabilistic and intertemporal decision making. Brain Res. 1234, 104–115.
Wilson, T. D., and Gilbert, D. T. (2005). Affective forecasting: knowing what to want. Curr. Dir. Psychol. Sci. 14, 131–134.
Winstanley, C. A., Theobald, D. E. H., Cardinal, R. N., and Robbins, T. W. (2004). Contrasting roles of basolateral amygdala and orbitofrontal cortex in impulsive choice. J. Neurosci. 24, 4718–4722.
Wittmann, M., Leland, D. S., and Paulus, M. P. (2007). Time and decision making: differential contribution of the posterior insular cortex and the striatum during a delay discounting task. Exp. Brain Res. 179, 643–653.
Wittmann, M., and Paulus, M. P. (2008). Decision making, impulsivity and time perception. Trends Cogn. Sci. 12, 7–12.
Xue, G., Lu, Z., Levin, I. P., Weller, J. A., Li, X., and Bechara, A. (2009). Functional dissociations of risk and reward processing in the medial prefrontal cortex. Cereb. Cortex 19, 1019–1027.
Yaari, M. E. (1965). Uncertain lifetime, life insurance, and the theory of the consumer. Rev. Econ. Stud. 137–150.
Yacubian, J., Glascher, J., Schroeder, K., Sommer, T., Braus, D. F., and Buchel, C. (2006). Dissociable systems for gain- and loss-related value predictions and errors of prediction in the human brain. J. Neurosci. 26, 9530–9537.
Yacubian, J., Sommer, T., Schroeder, K., Gläscher, J., Kalisch, R., Leuenberger, B., Braus, D. F., and Büchel, C. (2007). Gene–gene interaction associated with neural reward sensitivity. Proc. Natl. Acad. Sci. U.S.A. 104, 8125–8130.