
MINI REVIEW article

Front. Hum. Neurosci., 10 October 2019
Sec. Sensory Neuroscience
Volume 13 - 2019 | https://doi.org/10.3389/fnhum.2019.00353

Challenges to the Modularity Thesis Under the Bayesian Brain Models

  • Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Gandhinagar, India

The modularity assumption is central to most theoretical and empirical approaches in cognitive science. The Bayesian Brain (BB) models are a class of neuro-computational models that aim to ground perception, cognition, and action under a single computational principle of prediction-error minimization. It has been argued that the proposals of BB models contradict the modular nature of mind, since the modularity assumption entails computational separation of individual modules. This review examines how BB models address the assumption of modularity. Empirical evidence of top-down influences on early sensory processes is often cited as a case against the modularity thesis. In the modularity thesis, such top-down effects are attributed to attentional modulation of the output of an early, impenetrable stage of sensory processing. This attentional-mediation argument defends the modularity thesis. We analyse this argument using the novel conception of attention in the BB models and attempt to reconcile the classical bottom-up vs. top-down dichotomy of information processing within the information passing scheme of the BB models. Theoretical considerations and empirical findings associated with BB models that address the modularity assumption are reviewed. Further, we examine the modularity of perceptual and motor systems.

1. Introduction

The modularity of cognitive processes is a fundamental principle of the representationalist paradigm (Fodor, 1983). Information encapsulation, according to Fodor (2001), is a necessary condition for modularity. It entails the restriction of information flow into a computational module from another module, referred to as cognitive impenetrability. The assumption of information encapsulation was critical to the paradigm shift from the behaviorist to a cognitivist perspective of mental functions as it makes mental processes tractable (Carruthers, 2006) and thus computationally realizable.

Pylyshyn (1980) extended the concept of modularity to computational systems and posited that for any theoretical account of a mental process to be explanatory, it should have a cognitively impenetrable functional architecture. For example, the computations in the visual module should have some domain-specific architecture that enables them to transform information uniquely. The range of inputs a module can parse and compute defines its domain specificity. Further, Pylyshyn (1980) argued that modules should be computationally autonomous to make meaningful propositions about mental faculties. A convincing demonstration of information encapsulation in early vision is the persistence of visual illusions even when one is consciously aware of the illusion. Prinz (2006) counters this argument-from-illusion by claiming that illusion is an instance of perception trumping belief when the two conflict, and that when there is no conflict, belief can indeed affect perception. Churchland (1988) argues that visual illusions can be overridden by a voluntary attempt to modify the character of the visual experience. For instance, when the drawing of a cube lacks visual cues about its orientation, the farther side is interpreted as the closer side by the observer. This illusion, called the Necker-cube illusion, can be overridden by a deliberate “mental” inversion of the cube.

Higher-order cognitive states, such as desires (Balcetis and Dunning, 2006), morality (Gantman and Van Bavel, 2014), and racial category (Levin and Banaji, 2006), have been shown to affect perceptual processing. Such instances of top-down effects on perception are cited as evidence for cognitive penetration. In other words, if beliefs affect early visual processes like color perception, it suggests that the information in the “belief processing” module penetrates the color perception module. The strictest form of the modularity thesis, known as Massive Modularity, ascribes absolute information encapsulation to all modules of cognition, including the central systems. However, central systems such as reasoning and decision-making involve the integration of domain-general representations, violating modularity. The defense for massive modularity is that language, with its ability to encode and transform conceptual representations, integrates information across modules (Carruthers, 2006). However, there is empirical support for the notion that content integration is not restricted to the faculty of language (Varley and Siegal, 2000), suggesting non-modularity of central systems (Rice, 2011). The modest form of modularity, in contrast, claims that there is a set of common central functions that do not follow information encapsulation. For example, analysis of resting-state BOLD activity has shown the existence of local nodes that are tightly connected within a specific functional module and connector nodes that integrate information across individual modules (Bertolero et al., 2015). Modest modularity maintains that input systems, such as perception, are modular, whereas the domain-general integrative processes are non-modular.

Firestone and Scholl (2016) presented an extensive critique of the studies that report top-down effects on early sensory processing. They argue that the changes in perceptual state reported in studies demonstrating top-down effects arise from various empirical pitfalls. Furthermore, the studies that report “valid” top-down effects are explained as peripheral attentional effects. Attention is the mechanism that guides the selection of relevant information from the environment. Attentional mechanisms are found to be responsible for both the enhancement (Carrasco et al., 2000) and inhibition (Tipper, 1985) of sensory representations. The attention-mediation argument for modularity is the proposition that attention affects early perception by selecting one or a few representations over others. Consequently, attentional guidance cannot imply penetration, as attention merely changes the output of the early sensory processes. According to Pylyshyn (1999), “[attentional guidance does] not count as cognitive penetration because they do not alter the contents of perception.” Thus, attention is believed to perform the function of integrating the outputs of the impenetrable early sensory processing.

On the contrary, there is another view that the classical models of attention are built on the assumption of modularity, and consequently, attention does not solve the problem of modularity; rather, modularity solves the problem of attention (Van der Heijden, 1995). For instance, the Feature Integration Theory (FIT) (Treisman and Gelade, 1980), a widely accepted model of attention, proposes a dichotomy between bottom-up and top-down processing. In this account, bottom-up processing involves the computation of fundamental featural dimensions, such as color and orientation, by domain-specific units. These bottom-up units are believed to implement their natural constraints and are automatic to the extent that they do not engage in inferential processing (Pylyshyn, 1999). Top-down information, such as goals and desires, has no access to these “feature detectors” that process the fundamental dimensions of the sensory signal. The dichotomy of bottom-up/top-down processing is defined in terms of how each mode of processing is affected by cognitive state: bottom-up processing is invariant to cognitive states, whereas top-down processing is influenced by them. Thus, the evidence that corroborates the proposed distinction between bottom-up and top-down processing (for a review, see Theeuwes, 2010) points to the cognitive impenetrability of early perceptual processing. Furthermore, the automatic nature of bottom-up units is a defining feature of modular systems (Fodor, 1983). The classical formulation of attention is challenged by the recent models classified as Bayesian Brain (BB) models. In the next section, we review how attention is defined in BB models and place the bottom-up/top-down dichotomy within the information passing scheme of the BB models.

2. What Is Bottom-Up in Bayesian Brain?

According to Helmholtz (1925), perception is an inference on the sensory states. This inferential process is necessitated by the absence of a one-to-one mapping between the external environment and the information encoded by the senses. Any given sensation could give rise to many possible interpretations. However, we solve what is termed the “inverse problem” of mapping the sensory state back onto its causes and perceive a relatively stable reality. The inherent ambiguity in the data gathered by the senses necessitates a hypothesis-testing process to build a singular percept (Gregory, 1980). BB models solve the inverse problem by generating optimal predictions about the causes of the sensory state. The predictions are compared against incoming information. The information that matches the prediction is “explained away,” and the deviation from the prediction (the prediction-error) updates the generative model, which optimizes future predictions (Rao and Ballard, 1999; Friston et al., 2006; Clark, 2013).
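
To make the prediction-error minimization scheme concrete, the following is a minimal sketch in Python, assuming a single processing level, a known linear generative model, and simple gradient descent on the error; it illustrates the general idea rather than implementing Rao and Ballard's hierarchical model, and all variable names are ours.

import numpy as np

# Minimal sketch of prediction-error (PE) minimization under strong assumptions:
# one level, a known linear generative model (sensation = W @ causes), and
# gradient descent on the squared PE.
rng = np.random.default_rng(0)

W = rng.normal(size=(10, 3))              # generative mapping: hidden causes -> sensations
true_causes = np.array([1.0, -0.5, 2.0])  # the actual causes in the environment
sensation = W @ true_causes + 0.05 * rng.normal(size=10)

z_hat = np.zeros(3)                       # current hypothesis about the hidden causes
step_size = 0.02

for _ in range(500):
    prediction = W @ z_hat                # top-down prediction of the sensory state
    pe = sensation - prediction           # bottom-up prediction error
    z_hat += step_size * W.T @ pe         # update the hypothesis to explain away the PE

print(np.round(z_hat, 2))                 # approaches the true hidden causes

In this reading, what travels “up” is the residual (the PE) and what travels “down” is the prediction, a point elaborated in the following paragraphs.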

BB models posit that the predictive nature of mind entails dissemination of top-down information that affects early stages of perceptual processing. For example, in the version of the BB model developed by Lee and Mumford (2003), higher-order contextual beliefs interact with early visual processing, and the early visual areas do not merely extract features but are also involved in image segmentation and figure-ground segregation. Whether this predictive top-down influence amounts to cognitive penetration is debated. Lupyan (2015) provides an extensive review of how BB models present a case for cognitive penetration of perception based on evidence from cross-modal effects and perceptual illusions. According to Lupyan (2015), the extent of penetrability can be defined in terms of the contribution of a perceptual process toward the minimization of the system-level prediction-error (PE).

BB models redefine the classic notions about the nature of bottom-up and top-down information (for a review, see Rauss and Pourtois, 2013). According to BB models, the bottom-up information carries the prediction-error and the top-down information carries predictions about the sensory causes. In the literature, prediction is often referred to as anticipation (Butz and Pezzulo, 2008), expectation (Summerfield and Egner, 2009), or preparation (Brunia, 1999). These terms are generally conceptualized in a domain-specific manner in individual studies. Prediction in the BB model involves domain-general signaling about the sensory states and is estimated from the “model-of-the-world.” This generative model encodes the statistical regularities in the environment. The sensory signals that are consistent with the prediction of the generative model are silenced (Summerfield et al., 2008). It is hypothesized that predictions are encoded by deep pyramidal cells and PEs are encoded by superficial pyramidal cells (Bastos et al., 2012).

According to the Free Energy Principle (FEP), which is a generalization of predictive coding, the top-down prediction is weighted by the precision of the PE. Precision quantifies the amount of uncertainty about the information at each level of the cortical hierarchy and is functionally modulated by attention. The metaphor Feldman and Friston (2010) use for attention is that of the Standard Error (SE) in statistical decision-making. The test statistic, on which the statistical inference is made, is obtained by dividing the mean difference by the SE. When the SE is high, the test statistic will be low, and the hypothesis under test is less likely to be accepted. Attention does to perceptual inference what the SE does to statistical inference. When implemented as a hierarchical information passing scheme, attention affects perception by optimizing precision, so that signals with higher precision are weighted over signals with low precision. Consequently, at every level in the hierarchy, the signals conveying prediction and attention (precision-weighted PE) information influence perception. In sum, the top-down information is the precision-weighted prediction, referred to as hyperpriors (Hohwy et al., 2008), and the bottom-up information is the precision-weighted PE.
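
As a toy illustration of this weighting (a sketch assuming Gaussian predictions and evidence, not Feldman and Friston's full hierarchical scheme), the snippet below combines a top-down prediction and a sensory sample in proportion to their precisions; the attention_gain parameter, a multiplicative boost to sensory precision, is our illustrative stand-in for attentional modulation.

def precision_weighted_update(prior_mean, prior_precision,
                              sample, sensory_precision, attention_gain=1.0):
    # Attention is modeled here (an assumption of this sketch) as a gain on the
    # precision of the sensory PE, so attended errors pull the estimate harder.
    pi_s = attention_gain * sensory_precision
    pe = sample - prior_mean                                    # prediction error
    posterior_mean = prior_mean + (pi_s / (prior_precision + pi_s)) * pe
    return posterior_mean, prior_precision + pi_s               # updated belief and its precision

print(precision_weighted_update(0.0, 4.0, 1.0, 1.0, attention_gain=1.0))  # unattended: (0.2, 5.0)
print(precision_weighted_update(0.0, 4.0, 1.0, 1.0, attention_gain=4.0))  # attended:   (0.5, 8.0)

The same prediction error moves the estimate further when attention raises the precision assigned to it, which is the sense in which the precision-weighted PE constitutes the bottom-up signal.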

The effects of attention observed in studies subscribing to the classical models of attention conflate attention and prediction (Summerfield and Egner, 2009). Hence, the extent to which prediction and attention separately contribute to early perceptual effects is mostly unexplored. Empirical evidence corroborating the dissociation of attention and prediction comes from the orthogonal manipulation of spatial attention and feature prediction. Wyart et al. (2012) found that an increase in the prior probability of signal occurrence led to an increase in baseline performance, whereas attentional cueing led to increased signal-to-noise precision at the attended location. A similar manipulation of attention and prediction has shown that attention can reverse the sensory silencing of prediction on BOLD responses (Kok et al., 2011). The difference in BOLD response to expected and unexpected percepts was pronounced in the presence of attention, corroborating the idea that attention improves the precision of the PE (Jiang et al., 2013).

O'Callaghan et al. (2017) argued that the top-down effects on early perceptual processing could be considered penetration by predictive information, referred to as predictive penetration. This argument is corroborated by neurophysiological evidence reporting rapid access of top-down information by early perceptual processing. The Orbitofrontal Cortex (OFC) responds to low spatial frequency information about an object roughly 50 ms before recognition-related activity starts in the Inferior Temporal (IT) area, and early activity in the OFC was a better predictor of successful object recognition than activity in the IT region (Bar et al., 2006). Does this suggest that predictions are changing the contents of perception? It has been argued that when attention and prediction are separated, the influence of prediction on early sensory processing is restricted to response selection (Rungratsameetaweemana and Serences, 2019). Prediction is found to change the criterion, a signal detection measure of response bias (Bang and Rahnev, 2017), but not the sensitivity (d′) of the perceptual inference (Summerfield and Egner, 2016). This suggests that prediction alone does not significantly affect early sensory processing.
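
The signal-detection distinction drawn here can be made concrete with a short sketch: both sensitivity (d′) and criterion (c) are computed from the hit and false-alarm rates, so a manipulation that only shifts response bias moves c while leaving d′ roughly unchanged. The rates below are made-up illustrative values, not data from the cited studies.

from scipy.stats import norm

def dprime_and_criterion(hit_rate, false_alarm_rate):
    # Standard signal-detection indices computed from hit and false-alarm rates.
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)   # (sensitivity d', criterion c)

print(dprime_and_criterion(0.84, 0.31))  # baseline block
print(dprime_and_criterion(0.93, 0.50))  # expected stimuli: more liberal c, similar d'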

The bottom-up information, according to the information passing scheme described by the FEP, is not the output of “feature detectors” but the information deviating from the top-down prediction: the PE. Empirically, this suggests that classical bottom-up effects are susceptible to the uncertainty (precision) attached to the bottom-up cues. The bottom-up information is continuously modulated by the real-time estimation of precision. Evidence corroborating this has been demonstrated using the irrelevant singleton paradigm (Vatterott and Vecera, 2012), where attentional capture by color singletons (a classic bottom-up cue) changed as a function of time. Similarly, neural activity associated with “pop-out”-like saliency was induced through experience and behavioral relevance (Lee et al., 2002). The observation of experience-dependent changes to classic bottom-up cues shows that precision-weighting dynamically alters early components of perception.

The claim that bottom-up units are not invariant to top-down effects also challenges the “automaticity assumption” of modular systems. There have been suggestions that automaticity is conditional on the set of circumstances available to the agent (Bargh, 1989). Anderson and Folk (2014) reported that involuntary response inhibition could be modulated by the mechanisms of goal-directed processing. The Stroop effect was shown to be eliminated when a single letter was colored instead of the whole word (Besner et al., 1997). Future studies investigating precision-dependent changes to classic bottom-up cues can corroborate the information passing scheme proposed by BB models and reconcile the bottom-up/top-down dichotomy.

In the next section, we examine the penetrability of the perceptual systems by the motor systems. Modularity is assumed by many of the influential models that explain perception-action interaction, such as the Optimal Control Theory (OCT) (Wolpert, 1997) and the dorsal-ventral model of vision (Mishkin et al., 1983). We attempt to analyze the modularity assumption within OCT and the alternate proposal by FEP based on a non-modular approach to explain perception-action interaction.

3. Modularity of Perception and Action

The separation of perceptual and motor systems into modules that work independently and sequentially is a classic notion in cognitive science. In the classical “sandwich” model (Hurley, 2002), the perceptual system builds the internal representation of the external environment, and the motor system derives motor commands based on the output of the perceptual system, mediated by integrative cognitive processes. Studies reporting dynamic interaction between action and perception have questioned the classical sandwich model. Estimation of physical aspects of the environment, such as size, distance, and slope, has been found to be modulated by factors such as effort (Witt et al., 2004), handedness (Linkenauger et al., 2009), graspability (Linkenauger et al., 2011), and skill (Witt and Proffitt, 2005). When participants had to exert more effort to throw a ball at a target, their perceived distance to the target also increased. Objects presented near the hand are also shown to be perceived better, manifested as improved change detection performance (Tseng et al., 2012) and faster perceptual processing (Thomas and Sunny, 2017).

On the one hand, the enactive theories (Varela et al., 2017) posit that such effects can be understood as emerging from the agent-environment interaction, where perception and action are coupled together in a non-modular, non-sequential, and non-encapsulated manner (Baltieri and Buckley, 2018); the enactive approach, however, rejects the idea that the agent engages in an inferential process or generates an internal representation of the environment. On the other hand, the optimal control theory of action and motor control assumes that the agent constructs an internal representation of the environment. Importantly, perception and action are considered separate modules in OCT (Wolpert and Kawato, 1998). In OCT (Wolpert et al., 1995), motor control depends on two computationally independent and informationally encapsulated modules: the estimator and the controller. The estimator predicts the future sensory state from the current sensory state given the motor command and is referred to as the forward model. The controller provides the motor command that causes the sensory state predicted by the estimator and is referred to as the inverse model. Thus, in OCT, perception and action are computationally separated as forward and inverse models (Figure 1).

Figure 1. Simplified outline of the information flow in OCT and FEP that illustrates how perception and action are linked. In OCT, perception involves the generation of the predicted sensation that feeds into the controller, and the controller feeds the forward model with the efference copy of the motor command. In FEP, the proprioceptive PE, estimated by comparing the generative model with the sensation, is fed into the controller. Perception and action are separated as forward and inverse model (efference copy) in OCT. In FEP, the inverse model is replaced by the Bayesian inversion of the forward model.
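
To caricature the modular division just described, the sketch below (a schematic under toy assumptions, not the full optimal-control machinery of noise, delays, and cost functions) implements the estimator and the controller as two independent functions in a one-dimensional tracking loop that exchange only a motor command and its efference copy; every name in it is illustrative.

# Schematic of OCT's two encapsulated modules for a toy 1D reaching task.
# Assumed dynamics: next_position = position + command (no noise or delay).

def forward_model(position, efference_copy):
    # Estimator: predicts the next sensory state from the copy of the motor command.
    return position + efference_copy

def inverse_model(position, goal, gain=0.5):
    # Controller: maps the desired state onto a motor command.
    return gain * (goal - position)

position, goal = 0.0, 1.0
for _ in range(10):
    command = inverse_model(position, goal)        # the inverse model issues a command
    expected = forward_model(position, command)    # the forward model predicts its sensory effect
    position = position + command                  # the world applies the command
print(round(position, 3), round(expected, 3))      # both approach the goal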

FEP, while maintaining that perception involves inferential processing, proposes a non-modular approach to understanding perception-action coupling. Friston (2011) questions the separation of forward and inverse models in OCT and proposes an alternative formulation in which the Bayesian inversion of the forward model replaces the inverse model. That is, the top-down projections in this framework are not motor commands but predictions about proprioceptive sensations. PE minimization is achieved in two ways: first, by making accurate predictions about the sensations; and second, by acting in such a way that sensations matching the predictions are selectively sampled. This is called active inference (Feldman and Friston, 2010). In active inference, action minimizes the sensory PE so that the predictions are fulfilled (Friston et al., 2011). This conception views perception and action as inferential processes that are not computationally separated and are thus non-modular. Friston et al. (2010) note, “the central nervous system is not divided into the motor and sensory systems but is one perceptual inference machine that provides predictions of optimal action, in terms of its expected outcomes.”
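
For contrast with the OCT sketch above, here is an equally minimal caricature of the active-inference alternative under the same toy dynamics: there is no inverse model, and movement is generated by descending the proprioceptive prediction error, so the same top-down prediction drives both perception and action. This is our illustrative sketch, not Friston's formulation, which additionally involves a hierarchical generative model and precision dynamics.

# Minimal caricature of active inference for the same toy 1D task: no inverse
# model; action directly reduces the proprioceptive prediction error.

predicted_position = 1.0      # top-down (proprioceptive) prediction: "I am at the goal"
position = 0.0
action_rate = 0.5

for _ in range(10):
    proprioceptive_pe = predicted_position - position   # sensation deviates from the prediction
    position += action_rate * proprioceptive_pe          # acting fulfils the prediction

print(round(position, 3))     # the prediction is (approximately) realized by action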

FEP combines the non-modular approach of the enactive theories with Helmholtzian inferential representations. This representationalist, non-modular approach of FEP could be argued to trivialize the idea of representation. According to Ramsey (2007), formulating representation as a mediating structure between the external environment and behavior amounts to trivialization. Gładziejewski (2016) argues that the representation in the BB models is as much “action-guiding” as the representation of a cartographic map, and that it non-trivially “recapitulates” the causal-probabilistic structure of the environment. Although OCT and FEP both maintain a representationalist approach to describing perceptual and motor systems, FEP rejects the separation of forward and inverse models.

The difference between the proposals of the OCT and the FEP can be understood by examining how each framework explains the sensory attenuation of action-effects. Sensory attenuation is the reduction in subjective sensitivity to self-generated sensory effects. The classic demonstration of this effect is the inability to tickle ourselves (Blakemore et al., 1999). Apart from somatosensation, sensory attenuation of self-caused action-effects has been reported in the visual (Cardoso-Leite et al., 2010) and auditory modalities (Hughes and Waszak, 2011). When participants associated a specific sensory outcome (a Gabor patch) with a unique action (a keypress), the sensitivity to the predicted action-effect was reduced (Cardoso-Leite et al., 2010). According to OCT, the perceived intensity of an action-effect is proportional to the amplitude of the PE (the difference between the forward-model prediction and the sensation). However, in most of the studies reporting sensory attenuation, the responses are made to stimuli applied or generated by the agent and not by the experimenter. Voss et al. (2008) observed sensory attenuation for experimenter-generated sensations that occurred while the participant was preparing a movement. This suggests that sensory attenuation happens even when the agent does not generate a forward-model prediction (Voss et al., 2008; Brown et al., 2013).

The FEP, in contrast, does not posit separate forward and inverse models. In FEP, sensory attenuation is an effect of reducing the precision of the sensory PE. Mechanistically, this is achieved by withdrawing attention from the consequences of action, thereby reducing the intensity of the sensation (Brown et al., 2013). One piece of evidence pointing to the role of attention in sensory attenuation is the reduction in action-effect learning when an action was paired with more than one effect, suggesting that action-effect associations compete for attentional resources (Watson et al., 2015). A valid attention cue is shown to result in faster processing of action-effects (Gozli et al., 2016). In the auditory modality, motor predictions are shown to modulate the action-effect negativity at posterior electrodes when the stimulus is unattended but not when it is attended, suggesting an interactive effect of motor prediction and attention on sensory attenuation (Jones et al., 2013). This evidence does not yet sufficiently corroborate the “withdrawal of attention” hypothesis proposed by FEP. A convincing test of FEP's predictions about sensory attenuation would orthogonally manipulate action-effect prediction and spatial attention to dissociate the separable contributions of action-prediction and attention to sensory attenuation (Schröger et al., 2015a,b).
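
The two explanations contrasted in this section can be restated as a toy calculation (an illustration of the verbal claims above, with made-up numbers and scaling): in the OCT-style account, perceived intensity tracks the residual left after subtracting the forward-model prediction, whereas in the FEP-style account it tracks the precision assigned to the sensory PE, which is lowered when attention is withdrawn from a self-generated effect.

def oct_intensity(sensation, forward_prediction):
    # OCT-style account: intensity ~ amplitude of the prediction error.
    return abs(sensation - forward_prediction)

def fep_intensity(sensation, expected, pe_precision):
    # FEP-style account: intensity ~ precision-weighted prediction error.
    return pe_precision * abs(sensation - expected)

# A self-generated touch is well predicted by the forward model (OCT) or is
# assigned low PE precision via withdrawn attention (FEP); an external touch is neither.
print(oct_intensity(1.0, 0.9))                    # self-generated, OCT: attenuated
print(fep_intensity(1.0, 0.0, pe_precision=0.2))  # self-generated, FEP: attenuated
print(fep_intensity(1.0, 0.0, pe_precision=1.0))  # externally generated: full intensity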

4. Summary

In the current review, we explored the nature of information processing in the BB models and its implications for the assumption of modularity. Recent empirical findings question the classic notion that bottom-up units are invariant to top-down influences. The proposed nature of bottom-up and top-down processing in the BB models is corroborated by empirical findings that report experience-dependent changes to the perceptual quality of classical bottom-up information. Dynamic, real-time changes in the estimated precision affect perceptual inference. Defining attention as the modulator of precision/synaptic gain provides a rich and nuanced conception of attention.

Early sensory processing is influenced not only by precision-weighting but also by top-down predictions that carry information about expected sensory states. Top-down predictions and bottom-up sensory evidence are affected by attention at each level of the cortical hierarchy. Such a top-down influence is not equivalent to changing the output of early sensation. Clark (2016) claims that the formulation of attention in the BB model makes it a mechanism that dynamically re-configures the cognitive architecture of a given stage. Thus, the BB model's definition of attention questions the idea that attention merely changes the output of early perception. In the context of perception-action interaction, the BB models do not hold a modular view in which perception and action are computationally separated as forward and inverse models. In the FEP, motor control relies on the Bayesian inversion of the forward model, which minimizes the proprioceptive (sensory) PE without a separate inverse model.

In order to build a unified epistemology of mental functions, the BB models need to explain empirical findings from diverse domains of cognition, emotion, perception, and action. BB models offer a domain-general formulation of information passing, in which the external environment and internal representations are defined in terms of their causal-probabilistic structure. In other words, the model itself is neutral about the content of perceptual experience. The perceptual content is determined by the winning hypothesis of the Bayesian inferential process (Hohwy et al., 2008). This definition of perceptual content can facilitate the integration of this framework into theorizations about diverse mental functions. The attempt to build a unified theory necessitates a novel approach in which the mental function of interest is not ascribed an epistemic boundary at the computational level. The BB models appropriate the enactive notion of information flow, where epistemic boundaries between the mind, the body, and the environment are not necessary to explain the behavior of the system.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Anderson, B. A., and Folk, C. L. (2014). Conditional automaticity in response selection: contingent involuntary response inhibition with varied stimulus-response mapping. Psychol. Sci. 25, 547–554. doi: 10.1177/0956797613511086

Balcetis, E., and Dunning, D. (2006). See what you want to see: motivational influences on visual perception. J. Pers. Soc. Psychol. 91:612. doi: 10.1037/0022-3514.91.4.612

Baltieri, M., and Buckley, C. L. (2018). “The modularity of action and perception revisited using control theory and active inference,” in Artificial Life Conference Proceedings (Cambridge, MA: MIT Press), 121–128.

Bang, J. W., and Rahnev, D. (2017). Stimulus expectation alters decision criterion but not sensory signal in perceptual decision making. Sci. Rep. 7:17072. doi: 10.1038/s41598-017-16885-2

Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, A. M., et al. (2006). Top-down facilitation of visual recognition. Proc. Natl. Acad. Sci. U.S.A. 103, 449–454. doi: 10.1073/pnas.0507062103

Bargh, J. A. (1989). Conditional automaticity: varieties of automatic influence in social perception and cognition. Unintend. Thought 3, 51–69.

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., and Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron 76, 695–711. doi: 10.1016/j.neuron.2012.10.038

Bertolero, M. A., Yeo, B. T., and D'Esposito, M. (2015). The modular and integrative functional architecture of the human brain. Proc. Natl. Acad. Sci. U.S.A. 112, E6798–E6807. doi: 10.1073/pnas.1510619112

Besner, D., Stolz, J. A., and Boutilier, C. (1997). The stroop effect and the myth of automaticity. Psychon. Bull. Rev. 4, 221–225. doi: 10.3758/BF03209396

Blakemore, S. J., Frith, C. D., and Wolpert, D. M. (1999). Spatio-temporal prediction modulates the perception of self-produced stimuli. J. Cognit. Neurosci. 11, 551–559. doi: 10.1162/089892999563607

Brown, H., Adams, R. A., Parees, I., Edwards, M., and Friston, K. (2013). Active inference, sensory attenuation and illusions. Cognit. Process. 14, 411–427. doi: 10.1007/s10339-013-0571-3

Brunia, C. (1999). Neural aspects of anticipatory behavior. Acta Psychol. 101, 213–242. doi: 10.1016/S0001-6918(99)00006-2

Butz, M. V., and Pezzulo, G. (2008). “Benefits of anticipations in cognitive agents,” in The Challenge of Anticipation (Berlin; Heidelberg: Springer), 45–62.

Cardoso-Leite, P., Mamassian, P., Schütz-Bosbach, S., and Waszak, F. (2010). A new look at sensory attenuation: action-effect anticipation affects sensitivity, not response bias. Psychol. Sci. 21, 1740–1745. doi: 10.1177/0956797610389187

Carrasco, M., Penpeci-Talgar, C., and Eckstein, M. (2000). Spatial covert attention increases contrast sensitivity across the csf: support for signal enhancement. Vis. Res. 40, 1203–1215. doi: 10.1016/S0042-6989(00)00024-9

Carruthers, P. (2006). The Architecture of the Mind. Oxford, UK: Oxford University Press.

Churchland, P. M. (1988). Perceptual plasticity and theoretical neutrality: a reply to jerry fodor. Philos. Sci. 55, 167–187. doi: 10.1086/289425

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 36, 181–204. doi: 10.1017/S0140525X12000477

Clark, A. (2016). Attention alters predictive processing. Behav. Brain Sci. 39:e234. doi: 10.1017/S0140525X15002472

Feldman, H., and Friston, K. J. (2010). Attention, uncertainty, and free-energy. Front. Hum. Neurosci. 4:215. doi: 10.3389/fnhum.2010.00215

Firestone, C., and Scholl, B. J. (2016). Cognition does not affect perception: evaluating the evidence for ‘top-down’ effects. Behav. Brain Sci. 39:e229. doi: 10.1017/S0140525X15000965

Fodor, J. A. (1983). The Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press.

Fodor, J. A. (2001). The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press.

Friston, K. (2011). What is optimal about motor control? Neuron 72, 488–498. doi: 10.1016/j.neuron.2011.10.018

Friston, K., Kilner, J., and Harrison, L. (2006). A free energy principle for the brain. J Physiol. 100, 70–87. doi: 10.1016/j.jphysparis.2006.10.001

Friston, K., Mattout, J., and Kilner, J. (2011). Action understanding and active inference. Biol. Cybern. 104, 137–160. doi: 10.1007/s00422-011-0424-z

Friston, K. J., Daunizeau, J., Kilner, J., and Kiebel, S. J. (2010). Action and behavior: a free-energy formulation. Biol. Cybern. 102, 227–260. doi: 10.1007/s00422-010-0364-z

Gantman, A. P., and Van Bavel, J. J. (2014). The moral pop-out effect: enhanced perceptual awareness of morally relevant stimuli. Cognition 132, 22–29. doi: 10.1016/j.cognition.2014.02.007

Gładziejewski, P. (2016). Predictive coding and representationalism. Synthese 193, 559–582. doi: 10.1007/s11229-015-0762-9

Gozli, D. G., Aslam, H., and Pratt, J. (2016). Visuospatial cueing by self-caused features: orienting of attention and action–outcome associative learning. Psychon. Bull. Rev. 23, 459–467. doi: 10.3758/s13423-015-0906-4

Gregory, R. L. (1980). Perceptions as hypotheses. Philos. Trans. R. Soc. Lond. B Biol. Sci. 290, 181–197. doi: 10.1098/rstb.1980.0090

Helmholtz, H. (1925). Helmholtz's Treatise on Physiological Optics Volume 3, Translated by JPC Southall. New York, NY: Optical Society of America; Dover.

Hohwy, J., Roepstorff, A., and Friston, K. (2008). Predictive coding explains binocular rivalry: an epistemological review. Cognition 108, 687–701. doi: 10.1016/j.cognition.2008.05.010

Hughes, G., and Waszak, F. (2011). ERP correlates of action effect prediction and visual sensory attenuation in voluntary action. Neuroimage 56, 1632–1640. doi: 10.1016/j.neuroimage.2011.02.057

Hurley, S. L. (2002). Consciousness in Action. Cambridge, MA: Harvard University Press.

Jiang, J., Summerfield, C., and Egner, T. (2013). Attention sharpens the distinction between expected and unexpected percepts in the visual brain. J. Neurosci. 33, 18438–18447. doi: 10.1523/JNEUROSCI.3308-13.2013

Jones, A., Hughes, G., and Waszak, F. (2013). The interaction between attention and motor prediction. An ERP study. Neuroimage 83, 533–541. doi: 10.1016/j.neuroimage.2013.07.004

Kok, P., Rahnev, D., Jehee, J. F., Lau, H. C., and de Lange, F. P. (2011). Attention reverses the effect of prediction in silencing sensory signals. Cereb. Cortex 22, 2197–2206. doi: 10.1093/cercor/bhr310

Lee, T. S., and Mumford, D. (2003). Hierarchical bayesian inference in the visual cortex. J Opt Soc Am A Opt Image Sci Vis. 20, 1434–1448. doi: 10.1364/JOSAA.20.001434

Lee, T. S., Yang, C. F., Romero, R. D., and Mumford, D. (2002). Neural activity in early visual cortex reflects behavioral experience and higher-order perceptual saliency. Nat. Neurosci. 5:589. doi: 10.1038/nn860

Levin, D. T., and Banaji, M. R. (2006). Distortions in the perceived lightness of faces: the role of race categories. J. Exp. Psychol. Gen. 135:501. doi: 10.1037/0096-3445.135.4.501

Linkenauger, S. A., Witt, J. K., and Proffitt, D. R. (2011). Taking a hands-on approach: apparent grasping ability scales the perception of object size. J. Exp. Psychol. Hum. Percept. Perform. 37:1432. doi: 10.1037/a0024248

Linkenauger, S. A., Witt, J. K., Stefanucci, J. K., Bakdash, J. Z., and Proffitt, D. R. (2009). The effects of handedness and reachability on perceived distance. J. Exp. Psychol. Hum. Percept. Perform. 35:1649. doi: 10.1037/a0016875

Lupyan, G. (2015). Cognitive penetrability of perception in the age of prediction: predictive systems are penetrable systems. Rev. Philos. Psychol. 6, 547–569. doi: 10.1007/s13164-015-0253-4

Mishkin, M., Ungerleider, L. G., and Macko, K. A. (1983). Object vision and spatial vision: two cortical pathways. Trends Neurosci. 6, 414–417. doi: 10.1016/0166-2236(83)90190-X

O'Callaghan, C., Kveraga, K., Shine, J. M., and Bar, M. (2017). Predictions penetrate perception: converging insights from brain, behaviour and disorder. Conscious. Cognit. 47, 63–74. doi: 10.1016/j.concog.2016.05.003

Prinz, J. J. (2006). “Is the mind really modular?” in Contemporary Debates in Cognitive Science, ed R. J. Stainton (Hoboken, NJ: Wiley-Blackwell), 22–36.

Pylyshyn, Z. (1999). Is vision continuous with cognition?: The case for cognitive impenetrability of visual perception. Behav. Brain Sci. 22, 341–365. doi: 10.1017/S0140525X99002022

Pylyshyn, Z. W. (1980). Computation and cognition: issues in the foundations of cognitive science. Behav. Brain Sci. 3, 111–132. doi: 10.1017/S0140525X00002053

Ramsey, W. M. (2007). Representation Reconsidered. Cambridge, UK: Cambridge University Press.

Rao, R. P., and Ballard, D. H. (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 2, 79–87. doi: 10.1038/4580

Rauss, K., and Pourtois, G. (2013). What is bottom-up and what is top-down in predictive coding? Front. Psychol. 4:276. doi: 10.3389/fpsyg.2013.00276

Rice, C. (2011). Massive modularity, content integration, and language. Philos. Sci. 78, 800–812. doi: 10.1086/662259

Rungratsameetaweemana, N., and Serences, J. T. (2019). Dissociating the impact of attention and expectation on early sensory processing. Curr. Opin. Psychol. 29, 181–186. doi: 10.1016/j.copsyc.2019.03.014

Schröger, E., Kotz, S. A., and SanMiguel, I. (2015a). Bridging prediction and attention in current research on perception and action. Brain Res. 1626:1. doi: 10.1016/j.brainres.2015.08.037

Schröger, E., Marzecová, A., and SanMiguel, I. (2015b). Attention and prediction in human audition: a lesson from cognitive psychophysiology. Eur. J. Neurosci. 41, 641–664. doi: 10.1111/ejn.12816

Summerfield, C., and Egner, T. (2009). Expectation (and attention) in visual cognition. Trends Cognit. Sci. 13, 403–409. doi: 10.1016/j.tics.2009.06.003

Summerfield, C., and Egner, T. (2016). Feature-based attention and feature-based expectation. Trends Cognit. Sci. 20, 401–404. doi: 10.1016/j.tics.2016.03.008

Summerfield, C., Trittschuh, E. H., Monti, J. M., Mesulam, M.-M., and Egner, T. (2008). Neural repetition suppression reflects fulfilled perceptual expectations. Nat. Neurosci. 11:1004. doi: 10.1038/nn.2163

Theeuwes, J. (2010). Top–down and bottom–up control of visual selection. Acta Psychol. 135, 77–99. doi: 10.1016/j.actpsy.2010.02.006

Thomas, T., and Sunny, M. M. (2017). Slower attentional disengagement but faster perceptual processing near the hand. Acta Psychol. 174, 40–47. doi: 10.1016/j.actpsy.2017.01.005

Tipper, S. P. (1985). The negative priming effect: inhibitory priming by ignored objects. Q. J. Exp. Psychol. 37, 571–590. doi: 10.1080/14640748508400920

Treisman, A. M., and Gelade, G. (1980). A feature-integration theory of attention. Cognit. Psychol. 12, 97–136. doi: 10.1016/0010-0285(80)90005-5

Tseng, P., Bridgeman, B., and Juan, C. H. (2012). Take the matter into your own hands: a brief review of the effect of nearby-hands on visual processing. Vis. Res. 72, 74–77. doi: 10.1016/j.visres.2012.09.005

Van der Heijden, A. (1995). Modularity and attention. Vis. Cognit. 2, 269–302. doi: 10.1080/13506289508401734

Varela, F. J., Thompson, E., and Rosch, E. (2017). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.

Varley, R., and Siegal, M. (2000). Evidence for cognition without grammar from causal reasoning and ‘theory of mind’ in an agrammatic aphasic patient. Curr. Biol. 10, 723–726. doi: 10.1016/S0960-9822(00)00538-8

Vatterott, D. B., and Vecera, S. P. (2012). Experience-dependent attentional tuning of distractor rejection. Psychon. Bull. Rev. 19, 871–878. doi: 10.3758/s13423-012-0280-4

Voss, M., Ingram, J. N., Wolpert, D. M., and Haggard, P. (2008). Mere expectation to move causes attenuation of sensory signals. PLoS ONE 3:e2866. doi: 10.1371/journal.pone.0002866

Watson, P., van Steenbergen, H., de Wit, S., Wiers, R. W., and Hommel, B. (2015). Limits of ideomotor action–outcome acquisition. Brain Res. 1626, 45–53. doi: 10.1016/j.brainres.2015.02.020

Witt, J. K., and Proffitt, D. R. (2005). See the ball, hit the ball apparent ball size is correlated with batting average. Psychol. Sci. 16, 937–938. doi: 10.1111/j.1467-9280.2005.01640.x

Witt, J. K., Proffitt, D. R., and Epstein, W. (2004). Perceiving distance: a role of effort and intent. Perception 33, 577–590. doi: 10.1068/p5090

Wolpert, D. M. (1997). Computational approaches to motor control. Trends Cognit. Sci. 1, 209–216. doi: 10.1016/S1364-6613(97)01070-X

Wolpert, D. M., Ghahramani, Z., and Jordan, M. I. (1995). An internal model for sensorimotor integration. Science 269, 1880–1882. doi: 10.1126/science.7569931

Wolpert, D. M., and Kawato, M. (1998). Multiple paired forward and inverse models for motor control. Neural Netw. 11, 1317–1329. doi: 10.1016/S0893-6080(98)00066-5

Wyart, V., Nobre, A. C., and Summerfield, C. (2012). Dissociable prior influences of signal probability and relevance on visual contrast sensitivity. Proc. Natl. Acad. Sci. U.S.A. 109, 3593–3598. doi: 10.1073/pnas.1120118109

Keywords: modularity hypothesis, attention, precision, cognitive penetrability of perception, predictive coding

Citation: George N and Sunny MM (2019) Challenges to the Modularity Thesis Under the Bayesian Brain Models. Front. Hum. Neurosci. 13:353. doi: 10.3389/fnhum.2019.00353

Received: 11 April 2019; Accepted: 23 September 2019;
Published: 10 October 2019.

Edited by:

Yoshiyuki Kubota, National Institute for Physiological Sciences (NIPS), Japan

Reviewed by:

Miao Cao, Fudan University, China
Anna Marzecová, Ghent University, Belgium

Copyright © 2019 George and Sunny. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Nithin George, n.george@iitgn.ac.in
