Abstract
What has transpired immediately before has a strong influence on how sensory stimuli are processed and perceived. In particular, temporal context can have contrastive effects, repelling perception away from the interpretation of the context stimulus, and attractive effects, whereby perception repeats upon successive presentations of the same stimulus. For decades, scientists have documented contrastive and attractive temporal context effects (TCEs) mostly with simple visual stimuli. But both types of effects also occur in other modalities, e.g., audition and touch, and for stimuli of varying complexity, raising the possibility that context effects reflect general computational principles of sensory systems. Neuroimaging shows that contrastive and attractive context effects arise from neural processes in different areas of the cerebral cortex, suggesting two separate operations with distinct functional roles. Bayesian models can provide a functional account of both context effects, whereby prior experience adjusts sensory systems to optimize perception of future stimuli.
Introduction
The information our senses receive is often ambiguous, incomplete, and discontinuous. Nevertheless, we perceive our environment as a unified whole. According to a theory originally proposed by Helmholtz (1867), our brains achieve this using prior information. For example, when talking to someone, we decipher each word by taking into account not only the sounds and movements coming from the speaker’s mouth but also the meaning of preceding words, the topic of the conversation, as well as our lifelong knowledge of language. One particularly important type of experience that can aid perception is what has just occurred (i.e., temporal context). Temporal context effects (TCEs) are evident in various experimental phenomena, such as visual aftereffects. Yet, TCEs also occur in other sensory modalities, such as audition and touch. Given the pervasive influence that previous experience has on perception, it is essential to understand what factors determine how this perceptual adjustment occurs. Here, we explain TCEs using Bayesian theory, with an emphasis on how it explains TCEs across senses and levels of processing.
TCEs in Perception
Opposing Context Effects on Perception
Two TCEs have been investigated most extensively. The first typically occurs when a non-ambiguous, salient context stimulus (e.g., leftward-tilted lines) precedes a test stimulus (e.g., vertically oriented lines); perception is then repelled away from the interpretation of the context stimulus, such that participants perceive the test lines as tilted rightward (Figure 1A). Similarly, in the waterfall illusion, also known as the motion aftereffect, a rock beside the stream is usually perceived as moving upward after one stares at the downward motion of the waterfall (Addams, 1834). This contrastive effect is known as adaptation, negative aftereffect, or habituation. There is considerable evidence that it results from neural adaptation, which in turn alters the balance of population activity, thus favoring perception of features that are not adapted (Grunewald and Lankheet, 1996; Huk et al., 2001).
Figure 1
The second type of effect is typically (Brascamp et al., 2007), but not always (Fischer and Whitney, 2014), observed when context and test stimuli are weak or ambiguous. For example, if two Necker cubes are presented sequentially, the percept of the first cube (e.g., face down) will typically dominate how the second cube is perceived (i.e., face down again), despite the fact that either perceptual interpretation of the cube is equally likely in isolation (Figure 1B). This attractive effect is known as facilitation, perceptual memory, hysteresis, or stabilization (Pearson and Brascamp, 2008). Although the term “priming” is also sometimes used, this can also refer to facilitation of reaction times (Pickering and Ferreira, 2008). Importantly, unlike the first (contrastive) effect, which results from stimulus features of the context, attractive effects depend more strongly on the perceptual interpretation of the context stimulus (Hock et al., 1996; De Lucia et al., 2010; Schwiedrzik et al., 2014; but see Kanai and Verstraten, 2005).
Why do these two context effects occur and why do they have opposite effects on perception? One possibility is that attractive and contrastive context effects serve different functions. Although there are many theories that explain why contrastive effects are so pervasive in perception, they mostly agree that contrastive effects sensitize the brain to take in new information, facilitating the detection of changes (Barlow, 1990; Clifford et al., 2000). On the other hand, attractive effects may stabilize perception in the face of noisy, discontinuous, and constantly changing environments brought about by eye and head movements, stimulus occlusion, or spontaneous neural fluctuations (Kleinschmidt et al., 2002). The coexistence of attractive and contrastive effects (Snyder et al., 2009; Denison et al., 2011; Schwiedrzik et al., 2014) may then endow the brain with the flexibility to deal with constantly changing internal and external demands, accomplishing balance between constancy and variation.
General-Purpose Mechanisms Across Senses
TCEs also occur in senses other than vision (Riskey et al., 1979; Hulshoff Pol et al., 1998; Carter et al., 2008; Thiel et al., 2014). In audition, frequency glides cause aftereffects similar to those observed for visual motion, whereby a test stimulus—an illusory frequency modulation in a constant-frequency stimulus—is perceived as having the opposite direction to a context stimulus with periodic frequency modulation (Shu et al., 1993). Just as the motion aftereffect transfers between eyes (Anstis et al., 1998), this effect transfers between ears, suggesting that both effects arise from central motion processing. Attractive effects have been observed with ambiguous tritone stimuli (i.e., two tones that have no pitch height cues and form a six-semitone interval, Deutsch, 1997), by presenting tone pairs with intervals of increasing size for each successive pair, starting with an ascending melodic interval (i.e., D#-E, D#-F, D#-F#, D#-G, … D#-D) or a descending melodic interval (i.e., D#-D, D#-C#, D#-C, D#-B, … D#-E; Giangrande et al., 2003). These ordered conditions biased the normally ambiguous tritone stimulus to be perceived as moving in the same direction as the initial small intervals.
Auditory scene analysis (Bregman, 1990; Snyder and Alain, 2007; Snyder et al., 2012), the perceptual segregation and integration of sound elements to form auditory objects, also shows contrastive and attractive context effects in experimental paradigms such as the stream segregation task. Listeners are presented with sequences of two tones of different frequencies in an alternating pattern, and the likelihood of perceiving one or two streams depends on the frequency separation between the tones (Δf): the larger the separation, the more likely one hears two segregated streams. However, when a preceding context sequence has a larger Δf than the test, listeners are less likely to perceive the test sequence as two segregated streams—a contrastive effect (Figure 1C; Snyder et al., 2008, 2009; Snyder and Weintraub, 2011). An attractive effect of prior perception occurs during stream segregation tasks when context and test sequences have the same Δf (Figure 1D). The fact that the prior Δf effect generalizes to different frequency ranges (Snyder et al., 2009) but less so to different rhythmic patterns (Snyder and Weintraub, 2011) points to the recruitment of complex representations that are not arranged according to frequency. The contrastive effect declines over several seconds (Snyder et al., 2008; Snyder and Weintraub, 2013) but also shows a persistent component (Snyder and Weintraub, 2013), suggesting the involvement of long auditory memory stores (Cowan, 1984; Lu et al., 1992). Importantly, the temporal dynamics of TCEs in audition can be strikingly similar to those in vision: for example, contrastive TCEs in visual apparent motion scale with log alternation rate (Anstis et al., 1985), as do contrastive effects in auditory stream segregation (Anstis and Saida, 1985).
The fact that TCEs in vision and audition are not only phenomenologically similar but also both centrally based and share temporal properties suggests that the existence of attractive and contrastive context effects across domains reflects the likely operation of general-purpose mechanisms, achieving balance between perceptual constancy and variation. The shared properties of attractive and contrastive TCEs may originate from common computations by individual neurons or neural networks that similarly shape the processing of stimuli across domains.
Context Effects with Complex Stimuli
Context effects are not limited to simple stimuli, such as tones or oriented lines, but are also prevalent for complex stimuli and can even occur across senses. In audition, a classic study demonstrated that listeners’ perception of a target word changed, for example, from “bit” to “bet”, depending on the acoustic characteristics of the preceding phrase (Ladefoged and Broadbent, 1957; also see Laing et al., 2012). Similarly, listeners perceive an ambiguous target word as either “bet” or “but” depending on the frequency content of the preceding context phrase (Huang and Holt, 2012). After exposure to a low-frequency version of the phrase, listeners more often hear “bet”, which typically contains more high-frequency energy; following a high-frequency version of the phrase, they more often hear “but”, which contains more low-frequency energy. Critically, the same patterns of contrastive effects are observed when the context consists of sequences of tones with frequency content similar to that of the phrase, suggesting a general computation that is not speech-specific. Speech context effects have been found to occur when context and target are presented dichotically (Holt and Lotto, 2002), suggesting involvement of cortical mechanisms. Contrastive effects have also been described in studies that manipulate speaking rate (Heffner et al., 2013) or style (casual vs. clear, Vitela et al., 2013) and with audiovisual stimuli: for example, presenting a face unambiguously articulating the syllable /aba/ or /ada/ decreases the likelihood of perceiving the following ambiguous syllable as /aba/ or /ada/, respectively (Bertelson et al., 2003).
Attractive effects in speech perception have been reported for word identification (Tuller et al., 1994; Case et al., 1995) and for determining the meaning of a sentence through prosodic (or pitch) cues (Raczaszek et al., 1999). Again, attractive effects on speech perception are also observed across sensory modalities: brief exposure to a face articulating the syllable /aba/ or /ada/ increases the likelihood of perceiving the identity of ambiguous auditory targets in the direction of the visual component (Bertelson et al., 2003; Vroomen et al., 2007). Attractive effects of similar magnitude and with a similar time course have also been reported for the perception of ambiguous phonemes when participants were previously exposed to speaking-face stimuli (a face articulating /t/ or /p/) or lexical stimuli embedded into a word (van Linden and Vroomen, 2007), suggesting that abstract representations may mediate such crossmodal effects.
In vision, attractive and contrastive context effects are also found during perception of complex stimuli, including objects (Daelli et al., 2010), bodies (Rhodes et al., 2013), and scenes (Greene and Oliva, 2010). Particularly striking contextual effects occur for faces, affecting the perception of gender, race, expression (Webster et al., 2004), gaze direction (Jenkins et al., 2006), and age (Schweinberger et al., 2010). A notable case in point is facial identity: adaptation to a face typically biases perception of subsequently presented faces away from the adapting stimulus towards a different identity (Leopold et al., 2001). These effects exhibit properties similar to context effects with simple visual stimuli, e.g., a power-law dependency on stimulus duration (Leopold et al., 2005). Additionally, both contrastive (Zhao and Chubb, 2001; Anderson and Wilson, 2005; Jiang et al., 2006) and attractive effects (Brooks et al., 2002; Oruc and Barton, 2010) show invariance to various image transformations such as size, position, and view, congruent with the known invariance of higher-level visual representations. Furthermore, facial identity can be both adapted (Hills et al., 2010) and primed (Stevenage et al., 2012) crossmodally using voices, which may involve higher-level multimodal brain areas (Perrodin et al., 2014). Together, these characteristics of attractive and contrastive context effects indicate the involvement of higher-level visual areas, where neurons respond to their preferred category irrespective of variations in simple stimulus properties.
This then suggests that both in vision and audition, contrastive and attractive effects occur for simple features and for higher-level representations. The temporal properties of context effects for simple stimuli seem to be conserved for complex stimulus processing, indicating that the mechanisms underlying these effects may also be conserved across processing stages. Furthermore, the fact that comparable context effects are observed across and between sensory modalities strongly suggests general computational principles.
Different Context Effects, Different Mechanisms
Given that identical stimuli can elicit both types of contextual effects, the question arises whether both effects result from the same or separate neuronal mechanisms. We recently addressed this issue in a functional imaging study (Schwiedrzik et al., 2014) using multistable visual stimuli in which both attractive and contrastive effects were present concurrently but could be separately quantified. The two effects mapped onto distinct cortical networks: higher-order fronto-parietal areas, in particular dorsomedial prefrontal cortex (dmPFC), and higher-order visual areas were active for the attractive effects, while contrastive effects were confined to early visual areas (Figure 2A). Involvement of fronto-parietal areas in attractive effects has also been observed in a recent study on face processing (Kaiser et al., 2013) and for crossmodal context effects (Kilian-Hutten et al., 2011). Furthermore, these neuroimaging results mesh well with behavioral studies: for example, while attractive context effects can transfer beyond the exact location where the context stimulus was presented (Knapen et al., 2009), contrastive effects can be restricted to the retinotopic location of the contextual stimulus (Knapen et al., 2010). This implies that attractive effects arise in brain areas with larger receptive fields. In addition, contrastive effects only occur for test stimuli that are very similar to the context, while attractive effects allow for more variability between context and test (Gepshtein and Kubovy, 2005), in line with the fact that higher brain areas show broader tuning; finally, attractive context effects display a longer time constant than contrastive context effects (Pastukhov and Braun, 2013), mirroring neurons in higher brain areas that integrate over longer time windows (Honey et al., 2012; Chaudhuri et al., 2015).
Figure 2
Together, these results argue for a general dissociation of the underlying mechanisms of attractive and contrastive context effects, and suggest that attractive context effects involve higher-level areas, while contrastive context effects originate in earlier, sensory areas. This processing hierarchy may relate to the function of the proposed computations of attractive and contrastive context effects. In particular, stabilization may be more easily achieved at higher processing stages where neurons exhibit larger receptive fields, longer time constants, and invariance to simple features, while the extraction of new information requires detailed and sensitive representations at earlier sensory processing stages.
Bayesian Models of Context Effects
Bayesian models provide a general theoretical framework in which attractive and contrastive context effects can be understood at the computational level. Here, perception is framed as an inferential process whereby previous experience is combined with current sensory information. This entails a prior, i.e., a probability distribution of likely percepts given previous experience, and a likelihood function, which can be thought of as the currently available evidence. These two distributions are combined into the posterior distribution through Bayes’ rule, and the maximum of this distribution is what is perceived. The Bayesian framework thus takes temporal context explicitly into account. The different components of Bayesian models allow accommodating the finding that attractive and contrastive effects co-occur and are implemented in different brain regions (Schwiedrzik et al., 2014). We recently proposed one such model (Figure 2B): in this model, attractive contextual effects are conceptualized as resulting from changes in the priors. This is similar to a model by Hohwy et al. (2008), where attractive effects occur as long as the prior distribution is skewed towards the currently dominant percept. We account for contrastive effects by assuming that neural adaptation reduces the available sensory evidence and thus the likelihood function (Stocker and Simoncelli, 2006), linking the Bayesian likelihood function directly to neuronal mechanisms. Alternatively, contrastive effects can be modeled as a temporal drift in the prior distribution, caused by a “hyperprior” that emphasizes change over stability (Hohwy et al., 2008), or by assuming an asymmetric likelihood function that follows the principles of “efficient coding” (Wei and Stocker, 2013). 
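The combination of a prior (shaped by the previous percept) with a likelihood (weakened by adaptation) can be sketched numerically. The following is a minimal illustration of this Bayesian scheme, not a reimplementation of any of the cited models; the feature axis (tilt in degrees), the Gaussian shapes, and all parameter values are arbitrary choices for demonstration.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian over a discretized feature axis."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Feature axis, e.g., perceived tilt in degrees (illustrative units).
x = np.linspace(-30.0, 30.0, 601)

# Attractive effect: the prior is centered on the previous percept (+5 deg),
# so it pulls the posterior peak toward the context.
prior = gaussian(x, mu=5.0, sigma=8.0)

# Contrastive effect, in one of the variants mentioned in the text: adaptation
# reduces the reliability of the sensory evidence, which one could model as a
# broadened likelihood for the current stimulus (true value 0 deg).
likelihood = gaussian(x, mu=0.0, sigma=4.0)

# Bayes' rule: the posterior is proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()  # normalize over the grid

# The maximum of the posterior (MAP) is taken as the percept.
percept = x[np.argmax(posterior)]
# percept falls between 0 (stimulus) and +5 (previous percept):
# an attractive shift toward the context.
```

Sharpening the likelihood (smaller sigma) pulls the percept back toward the stimulus, while a tighter prior pulls it toward the previous percept; this is how the relative reliabilities of evidence and experience trade off in such models.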
Thus, no matter the specific implementation, the Bayesian framework is compatible with dissociation between attractive and contrastive context effects at the neural level, highlighting its usefulness in understanding the coexistence of these two phenomena.
Conclusions
In summary, TCEs powerfully modify perception across different senses and stimuli of varying complexity. Neuroimaging data suggest that distinct neural circuits in low-level and high-level brain areas mediate contrastive and attractive effects, respectively. The remarkable similarity of the contextual effects across sensory modalities and stimulus complexity strongly indicates the conservation of mechanisms throughout different sensory systems and levels of processing and raises the possibility that a single computational framework is sufficient to explain both context effects. Bayesian models of perception appear to be useful here as they unify perception, inference, and memory within the same framework. In particular, we show that a Bayesian account can accommodate both attractive and contrastive context effects, at least in vision. However, generalizing this framework to other sensory modalities remains an important challenge for future research. Similarly, more research is needed on the neurophysiological mechanisms, especially of attractive context effects, and directly linking these mechanisms to Bayesian concepts through explicit modeling.
Finding similar context effects across the senses raises the question of the extent to which modality-specific vs. modality-general brain areas mediate them. At least for simple stimuli, it appears that contrastive effects result from modulations of early modality-specific sensory cortex, while attractive effects recruit fronto-parietal and higher-level modality-specific areas (Kilian-Hutten et al., 2011; Kaiser et al., 2013; Schwiedrzik et al., 2014). The activation of frontal and parietal areas is intriguing, as these areas are known to respond to multimodal stimuli (Calvert, 2001), making them suitable to convey abstract (modality-general) memory representations. Thus, future studies should evaluate whether the same brain areas are active for attractive contextual effects in different senses.
Another interesting question is to what degree attractive and contrastive effects occur between senses. The areal separation described above suggests that contrastive effects might exert weak crossmodal contextual effects, given the recruitment of early unimodal cortex where crossmodal interactions are often weak, while attractive effects that engage higher unimodal areas and associative cortex may lead to greater crossmodal influences. However, crossmodal context effects might be implemented in a different way because the computational demands for unimodal and multimodal representations differ. For instance, multimodal inputs require transformations into a representational format that can make contact with other senses, a demand that is less strong for unimodal representations. Furthermore, it is important to consider that the occurrence of crossmodal influences might be determined less by the neuroanatomical locations of particular processes, but by the extent to which stimuli in different modalities are perceived as arising from the same objects in the physical world. Such a flexible arrangement might enable organisms to behave adaptively in complex multisensory situations typical of everyday life.
Statements
Acknowledgments
We were supported by the National Science Foundation [BCS1026023] and the Army Research Office [W911NF-12-1-0256] (JSS and ADV); a Human Frontier Science Program Long-Term Fellowship [LT001118/2012-L] (CMS); a Marie Curie International Outgoing Fellowship of the European Community’s Seventh Framework Programme [299372] (LM); and the LOEWE Neuronale Koordination Forschungsschwerpunkt Frankfurt (NeFF) (LM).
Conflict of interest
The reviewer Rachel Denison declares that, despite being affiliated with the same institution as author Lucia Melloni, the review process was handled objectively. The other authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
Addams, R. (1834). An account of a peculiar optical phenomenon seen after having looked at a moving body. Lond. Edinb. Phil. Mag. 5, 373–374. doi: 10.1080/14786443408648481
Anderson, N. D., and Wilson, H. R. (2005). The nature of synthetic face adaptation. Vision Res. 45, 1815–1828. doi: 10.1016/j.visres.2005.01.012
Anstis, S., and Saida, S. (1985). Adaptation to auditory streaming of frequency-modulated tones. J. Exp. Psychol. Hum. Percept. Perform. 11, 257–271. doi: 10.1037/0096-1523.11.3.257
Anstis, S., Giaschi, D., and Cogan, A. I. (1985). Adaptation to apparent motion. Vision Res. 25, 1051–1062. doi: 10.1016/0042-6989(85)90093-8
Anstis, S., Verstraten, F. A. J., and Mather, G. (1998). The motion aftereffect: a review. Trends Cogn. Sci. 2, 111–117. doi: 10.1016/S1364-6613(98)01142-5
Barlow, H. B. (1990). “A theory about the functional role and synaptic mechanism of visual aftereffects,” in Vision: Coding and Efficiency, ed. C. Blakemore (Cambridge: Cambridge University Press), 363–375.
Bertelson, P., Vroomen, J., and De Gelder, B. (2003). Visual recalibration of auditory speech identification: a McGurk aftereffect. Psychol. Sci. 14, 592–597. doi: 10.1046/j.0956-7976.2003.psci_1470.x
Brascamp, J. W., Knapen, T. H., Kanai, R., van Ee, R., and van den Berg, A. V. (2007). Flash suppression and flash facilitation in binocular rivalry. J. Vis. 7:12. doi: 10.1167/7.12.12
Bregman, A. S. (1990). Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, MA: MIT Press.
Brooks, B. E., Rosielle, L. J., and Cooper, E. E. (2002). The priming of face recognition after metric transformations. Perception 31, 297–313. doi: 10.1068/p3283
Calvert, G. A. (2001). Crossmodal processing in the human brain: insights from functional neuroimaging studies. Cereb. Cortex 11, 1110–1123. doi: 10.1093/cercor/11.12.1110
Carter, O., Konkle, T., Wang, Q., Hayward, V., and Moore, C. (2008). Tactile rivalry demonstrated with an ambiguous apparent-motion quartet. Curr. Biol. 18, 1050–1054. doi: 10.1016/j.cub.2008.06.027
Case, P., Tuller, B., Ding, M., and Kelso, J. A. (1995). Evaluation of a dynamical model of speech perception. Percept. Psychophys. 57, 977–988. doi: 10.3758/bf03205457
Chaudhuri, R., Knoblauch, K., Gariel, M. A., Kennedy, H., and Wang, X. J. (2015). A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex. Neuron 88, 419–431. doi: 10.1016/j.neuron.2015.09.008
Clifford, C. W., Wenderoth, P., and Spehar, B. (2000). A functional angle on some after-effects in cortical vision. Proc. Biol. Sci. 267, 1705–1710. doi: 10.1098/rspb.2000.1198
Cowan, N. (1984). On short and long auditory stores. Psychol. Bull. 96, 341–370. doi: 10.1037/0033-2909.96.2.341
Daelli, V., Van Rijsbergen, N. J., and Treves, A. (2010). How recent experience affects the perception of ambiguous objects. Brain Res. 1322, 81–91. doi: 10.1016/j.brainres.2010.01.060
De Lucia, M., Cocchi, L., Martuzzi, R., Meuli, R. A., Clarke, S., and Murray, M. M. (2010). Perceptual and semantic contributions to repetition priming of environmental sounds. Cereb. Cortex 20, 1676–1684. doi: 10.1093/cercor/bhp230
Denison, R. N., Piazza, E. A., and Silver, M. A. (2011). Predictive context influences perceptual selection during binocular rivalry. Front. Hum. Neurosci. 5:166. doi: 10.3389/fnhum.2011.00166
Deutsch, D. (1997). The tritone paradox: a link between music and speech. Curr. Dir. Psychol. Sci. 6, 174–180. doi: 10.1111/1467-8721.ep10772951
Fischer, J., and Whitney, D. (2014). Serial dependence in visual perception. Nat. Neurosci. 17, 738–743. doi: 10.1038/nn.3689
Gepshtein, S., and Kubovy, M. (2005). Stability and change in perception: spatial organization in temporal context. Exp. Brain Res. 160, 487–495. doi: 10.1007/s00221-004-2038-3
Giangrande, J., Tuller, B., and Kelso, J. A. S. (2003). Perceptual dynamics of circular pitch. Music Percept. 20, 241–262. doi: 10.1525/mp.2003.20.3.241
Greene, M. R., and Oliva, A. (2010). High-level aftereffects to global scene properties. J. Exp. Psychol. Hum. Percept. Perform. 36, 1430–1442. doi: 10.1037/a0019058
Grunewald, A., and Lankheet, M. J. (1996). Orthogonal motion after-effect illusion predicted by a model of cortical motion processing. Nature 384, 358–360. doi: 10.1038/384358a0
Heffner, C. C., Dilley, L. C., McAuley, J. D., and Pitt, M. A. (2013). When cues combine: how distal and proximal acoustic cues are integrated in word segmentation. Lang. Cogn. Process. 28, 1275–1302. doi: 10.1080/01690965.2012.672229
Helmholtz, H. (1867). Handbuch der Physiologischen Optik. Leipzig: Leopold Voss.
Hills, P. J., Elward, R. L., and Lewis, M. B. (2010). Cross-modal face identity aftereffects and their relation to priming. J. Exp. Psychol. Hum. Percept. Perform. 36, 876–891. doi: 10.1037/a0018731
Hock, H. S., Schoner, G., and Hochstein, S. (1996). Perceptual stability and the selective adaptation of perceived and unperceived motion directions. Vision Res. 36, 3311–3323. doi: 10.1016/0042-6989(95)00277-4
Hohwy, J., Roepstorff, A., and Friston, K. (2008). Predictive coding explains binocular rivalry: an epistemological review. Cognition 108, 687–701. doi: 10.1016/j.cognition.2008.05.010
Holt, L. L., and Lotto, A. J. (2002). Behavioral examinations of the level of auditory processing of speech context effects. Hear. Res. 167, 156–169. doi: 10.1016/s0378-5955(02)00383-0
Honey, C. J., Thesen, T., Donner, T. H., Silbert, L. J., Carlson, C. E., Devinsky, O., et al. (2012). Slow cortical dynamics and the accumulation of information over long timescales. Neuron 76, 423–434. doi: 10.1016/j.neuron.2012.08.011
Huang, J., and Holt, L. L. (2012). Listening for the norm: adaptive coding in speech categorization. Front. Psychol. 3:10. doi: 10.3389/fpsyg.2012.00010
Huk, A. C., Ress, D., and Heeger, D. J. (2001). Neuronal basis of the motion aftereffect reconsidered. Neuron 32, 161–172. doi: 10.1016/s0896-6273(01)00452-4
Hulshoff Pol, H. E., Hijman, R., Baare, W. F., and Van Ree, J. M. (1998). Effects of context on judgements of odor intensities in humans. Chem. Senses 23, 131–135. doi: 10.1093/chemse/23.2.131
Jenkins, R., Beaver, J. D., and Calder, A. J. (2006). I thought you were looking at me: direction-specific aftereffects in gaze perception. Psychol. Sci. 17, 506–513. doi: 10.1111/j.1467-9280.2006.01736.x
Jiang, F., Blanz, V., and O’Toole, A. J. (2006). Probing the visual representation of faces with adaptation: a view from the other side of the mean. Psychol. Sci. 17, 493–500. doi: 10.1111/j.1467-9280.2006.01734.x
Kaiser, D., Walther, C., Schweinberger, S. R., and Kovacs, G. (2013). Dissociating the neural bases of repetition-priming and adaptation in the human brain for faces. J. Neurophysiol. 110, 2727–2738. doi: 10.1152/jn.00277.2013
Kanai, R., and Verstraten, F. A. (2005). Perceptual manifestations of fast neural plasticity: motion priming, rapid motion aftereffect and perceptual sensitization. Vision Res. 45, 3109–3116. doi: 10.1016/j.visres.2005.05.014
Kilian-Hutten, N., Vroomen, J., and Formisano, E. (2011). Brain activation during audiovisual exposure anticipates future perception of ambiguous speech. Neuroimage 57, 1601–1607. doi: 10.1016/j.neuroimage.2011.05.043
Kleinschmidt, A., Büchel, C., Hutton, C., Friston, K. J., and Frackowiak, R. S. (2002). The neural structures expressing perceptual hysteresis in visual letter recognition. Neuron 34, 659–666. doi: 10.1016/s0896-6273(02)00694-3
Knapen, T., Brascamp, J., Adams, W. J., and Graf, E. W. (2009). The spatial scale of perceptual memory in ambiguous figure perception. J. Vis. 9:16. doi: 10.1167/9.13.16
Knapen, T., Rolfs, M., Wexler, M., and Cavanagh, P. (2010). The reference frame of the tilt aftereffect. J. Vis. 10:8. doi: 10.1167/10.1.8
Ladefoged, P., and Broadbent, D. E. (1957). Information conveyed by vowels. J. Acoust. Soc. Am. 29, 98–104. doi: 10.1121/1.1908694
Laing, E. J., Liu, R., Lotto, A. J., and Holt, L. L. (2012). Tuned with a tune: talker normalization via general auditory processes. Front. Psychol. 3:203. doi: 10.3389/fpsyg.2012.00203
Leopold, D. A., O’Toole, A. J., Vetter, T., and Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nat. Neurosci. 4, 89–94. doi: 10.1038/82947
Leopold, D. A., Rhodes, G., Muller, K. M., and Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proc. Biol. Sci. 272, 897–904. doi: 10.1167/5.8.830
Lu, Z. L., Williamson, S. J., and Kaufman, L. (1992). Behavioral lifetime of human auditory sensory memory predicted by physiological measures. Science 258, 1668–1670. doi: 10.1126/science.1455246
Oruc, I., and Barton, J. J. (2010). A novel face aftereffect based on recognition contrast thresholds. Vision Res. 50, 1845–1854. doi: 10.1016/j.visres.2010.06.005
Pastukhov, A., and Braun, J. (2013). Disparate time-courses of adaptation and facilitation in multi-stable perception. Learn. Percept. 5, 101–118. doi: 10.1556/lp.5.2013.suppl2.7
Pearson, J., and Brascamp, J. (2008). Sensory memory for ambiguous vision. Trends Cogn. Sci. 12, 334–341. doi: 10.1016/j.tics.2008.05.006
Perrodin, C., Kayser, C., Logothetis, N. K., and Petkov, C. I. (2014). Auditory and visual modulation of temporal lobe neurons in voice-sensitive and association cortices. J. Neurosci. 34, 2524–2537. doi: 10.1523/jneurosci.2805-13.2014
Pickering, M. J., and Ferreira, V. S. (2008). Structural priming: a critical review. Psychol. Bull. 134, 427–459. doi: 10.1037/0033-2909.134.3.427
Raczaszek, J., Tuller, B., Shapiro, L. P., Case, P., and Kelso, S. (1999). Categorization of ambiguous sentences as a function of a changing prosodic parameter: a dynamical approach. J. Psycholinguist. Res. 28, 367–393. doi: 10.1023/A:1023289031747
Rhodes, G., Jeffery, L., Boeing, A., and Calder, A. J. (2013). Visual coding of human bodies: perceptual aftereffects reveal norm-based, opponent coding of body identity. J. Exp. Psychol. Hum. Percept. Perform. 39, 313–317. doi: 10.1037/a0031568
Riskey, D. R., Parducci, A., and Beauchamp, G. K. (1979). Effects of context in judgments of sweetness and pleasantness. Percept. Psychophys. 26, 171–176. doi: 10.3758/bf03199865
Schweinberger, S. R., Zaske, R., Walther, C., Golle, J., Kovacs, G., and Wiese, H. (2010). Young without plastic surgery: perceptual adaptation to the age of female and male faces. Vision Res. 50, 2570–2576. doi: 10.1016/j.visres.2010.08.017
Schwiedrzik, C. M., Ruff, C. C., Lazar, A., Leitner, F. C., Singer, W., and Melloni, L. (2014). Untangling perceptual memory: hysteresis and adaptation map into separate cortical networks. Cereb. Cortex 24, 1152–1164. doi: 10.1093/cercor/bhs396
Shu, Z. J., Swindale, N. V., and Cynader, M. S. (1993). Spectral motion produces an auditory after-effect. Nature 364, 721–723. doi: 10.1038/364721a0
Snyder, J. S., and Alain, C. (2007). Toward a neurophysiological theory of auditory stream segregation. Psychol. Bull. 133, 780–799. doi: 10.1037/0033-2909.133.5.780
Snyder, J. S., and Weintraub, D. M. (2011). Pattern specificity in the effect of prior Δf on auditory stream segregation. J. Exp. Psychol. Hum. Percept. Perform. 37, 1649–1656. doi: 10.1037/a0023098
Snyder, J. S., and Weintraub, D. M. (2013). Loss and persistence of implicit memory for sound: evidence from auditory stream segregation context effects. Atten. Percept. Psychophys. 75, 1059–1074. doi: 10.3758/s13414-013-0460-y
Snyder, J. S., Carter, O. L., Hannon, E. E., and Alain, C. (2009). Adaptation reveals multiple levels of representation in auditory stream segregation. J. Exp. Psychol. Hum. Percept. Perform. 35, 1232–1244. doi: 10.1037/a0012741
Snyder, J. S., Carter, O. L., Lee, S. K., Hannon, E. E., and Alain, C. (2008). Effects of context on auditory stream segregation. J. Exp. Psychol. Hum. Percept. Perform. 34, 1007–1016. doi: 10.1037/0096-1523.34.4.1007
Snyder, J. S., Gregg, M. K., Weintraub, D. M., and Alain, C. (2012). Attention, awareness and the perception of auditory scenes. Front. Psychol. 3:15. doi: 10.3389/fpsyg.2012.00015
Stevenage, S. V., Hugill, A. R., and Lewis, H. G. (2012). Integrating voice recognition into models of person perception. J. Cogn. Psychol. 24, 409–419. doi: 10.1080/20445911.2011.642859
Stocker, A. A., and Simoncelli, E. P. (2006). Sensory adaptation within a Bayesian framework for perception. Adv. Neural Inf. Process. Syst. 18, 1291–1298.
Thiel, S. D., Bitzer, S., Nierhaus, T., Kalberlah, C., Preusser, S., Neumann, J., et al. (2014). Hysteresis as an implicit prior in tactile spatial decision making. PLoS One 9:e89802. doi: 10.1371/journal.pone.0089802
Tuller, B., Case, P., Ding, M., and Kelso, J. A. (1994). The nonlinear dynamics of speech categorization. J. Exp. Psychol. Hum. Percept. Perform. 20, 3–16. doi: 10.1037/0096-1523.20.1.3
van Linden, S., and Vroomen, J. (2007). Recalibration of phonetic categories by lipread speech versus lexical information. J. Exp. Psychol. Hum. Percept. Perform. 33, 1483–1494. doi: 10.1037/0096-1523.33.6.1483
71
VitelaA. D.WarnerN.LottoA. J. (2013). Perceptual compensation for differences in speaking style. Front. Psychol.4:399. 10.3389/fpsyg.2013.00399
72
VroomenJ.van LindenS.De GelderB.BertelsonP. (2007). Visual recalibration and selective adaptation in auditory-visual speech perception: contrasting build-up courses. Neuropsychologia45, 572–577. 10.1016/j.neuropsychologia.2006.01.031
73
WebsterM. A.KapingD.MizokamiY.DuhamelP. (2004). Adaptation to natural facial categories. Nature428, 557–561. 10.1038/nature02420
74
WeiX. X.StockerA. A. (2013). Efficient coding provides a direct link between prior and likelihood in perceptual Bayesian inference. Adv. Neural. Inform. Process Syst.25, 1313–1321.
75
ZhaoL.ChubbC. (2001). The size-tuning of the face-distortion after-effect. Vision Res.41, 2979–2994. 10.1016/s0042-6989(01)00202-4
Keywords
context effects, adaptation, perceptual memory, Bayesian modeling, vision, audition
Citation
Snyder JS, Schwiedrzik CM, Vitela AD and Melloni L (2015) How previous experience shapes perception in different sensory modalities. Front. Hum. Neurosci. 9:594. doi: 10.3389/fnhum.2015.00594
Received
02 July 2015
Accepted
12 October 2015
Published
31 October 2015
Volume
9 - 2015
Edited by
Michael A. Silver, University of California, Berkeley, USA
Reviewed by
Marzia De Lucia, Laboratoire de Recherche en Neuroimagerie - LREN, Switzerland; Rachel Denison, New York University, USA
Copyright
© 2015 Snyder, Schwiedrzik, Vitela and Melloni.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution and reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Joel S. Snyder joel.snyder@unlv.edu; Lucia Melloni lucia.melloni@brain.mpg.de
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.