MINI REVIEW article

Front. Psychol., 27 October 2017

Sec. Perception Science

Volume 8 - 2017 | https://doi.org/10.3389/fpsyg.2017.01896

Can Limitations of Visuospatial Attention Be Circumvented? A Review

  • 1. Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany

  • 2. Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany


Abstract

In daily life, humans are bombarded with visual input. Yet, their attentional capacities for processing this input are severely limited. Several studies have investigated factors that influence these attentional limitations and have identified methods to circumvent them. Here, we provide a review of these findings. We first review studies that have demonstrated limitations of visuospatial attention and investigated physiological correlates of these limitations. We then review studies in multisensory research that have explored whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Finally, we discuss research from the field of joint action that has investigated how limitations of visuospatial attention can be circumvented by distributing task demands across people and providing them with multisensory input. We conclude that limitations of visuospatial attention can be circumvented by distributing attentional processing across sensory modalities when tasks involve spatial as well as object-based attentional processing. However, if only spatial attentional processing is required, limitations of visuospatial attention cannot be circumvented by distributing attentional processing. These findings from multisensory research are applicable to visuospatial tasks that are performed jointly by two individuals. That is, in a joint visuospatial task requiring object-based as well as spatial attentional processing, joint performance is facilitated when task demands are distributed across sensory modalities. Future research could further investigate how applying findings from multisensory research to joint action research may facilitate joint performance. Generally, findings are applicable to real-world scenarios such as aviation or car-driving to circumvent limitations of visuospatial attention.

1. Introduction

In everyday life, humans continuously process information from several sensory modalities. However, the amount of information humans can process is limited (Marois and Ivanoff, 2005; Dux et al., 2006). In particular, using attentional mechanisms, humans can selectively attend to only a limited amount of information while neglecting irrelevant sensory input (James, 1890; Chun et al., 2011). Researchers have explained these limitations in terms of a limited pool of attentional resources that can be depleted under high attentional demands (Kahneman, 1973; Wickens, 2002; Lavie, 2005). These limitations apply not only to sensory processing but also to motor processing (e.g., see Pashler, 1994; Dux et al., 2006; Sigman and Dehaene, 2008); in this review, however, we primarily focus on limitations in sensory processing.

Regarding the type of attentional demands, a central distinction in attention research is that between object-based attention and spatial attention (Fink et al., 1997; Serences et al., 2004; Soto and Blanco, 2004). Object-based attention refers to selectively attending to features of an object (e.g., its color or shape), whereas spatial attention refers to selectively attending to a location in space.

In the present review, we primarily focus on limitations of spatial attention in the visual sensory modality and on how they can be circumvented. We first review findings about these limitations, with a focus on visuospatial tasks, and briefly describe physiological correlates of attentional processing during visuospatial task performance. We then turn to multisensory research that has investigated whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Subsequently, we review research in which findings from multisensory research are applied to joint tasks (i.e., tasks that are performed jointly by two individuals). Finally, we conclude with future directions for research on how findings from multisensory research could be used to circumvent limitations of visuospatial attention in joint tasks.

2. Limitations of visuospatial attention and physiological correlates

Limitations of visuospatial attention have been investigated in a wide variety of visuospatial tasks. One task that has been suggested to be highly suitable for investigating visuospatial attentional processing [among others such as response-competition tasks (Lavie, 2005, 2010; Matusz et al., 2015) or orthogonal cueing tasks (Spence and Driver, 2004; Spence, 2010)] is the “Multiple Object Tracking” (MOT) task (Pylyshyn and Storm, 1988; Yantis, 1992) (see Figure 1A for a typical trial logic). In this task, the attentional load can be systematically varied (i.e., by varying the number of targets that need to be tracked) while the perceptual load (i.e., the total number of displayed objects) is kept constant (Cavanagh and Alvarez, 2005; Arrighi et al., 2011; Wahn and König, 2015a,b). Notably, apart from spatial attentional demands, the MOT task also involves anticipatory processes (i.e., predicting the trajectories of the targets' movements) (Keane and Pylyshyn, 2006; Atsma et al., 2012). However, in several studies using the MOT task, the trajectories of the targets change randomly (e.g., in Wahn and König, 2015a,b), so that, at least in these cases, the MOT task primarily involves spatial attentional processing. The general finding across studies is that performance in the MOT task systematically decreases with an increasing number of targets (see Figure 1B), suggesting a limit of visuospatial attentional resources (Alvarez and Franconeri, 2007; Wahn et al., 2016a). Moreover, these capacity limitations are stable across several repetitions of the experiment on consecutive days (Wahn et al., 2016a, see Figure 1B) and over considerably longer periods of time (Alnæs et al., 2014).

Figure 1

(A) Multiple object tracking (MOT) task trial logic. First, several stationary objects are shown on a computer screen. A subset of these objects is indicated as targets (here in gray). Then, the target indication is removed (i.e., targets become indistinguishable from the other objects) and all objects start moving randomly across the screen. After several seconds, the objects stop moving and participants are asked to select the previously indicated target objects. (B) MOT performance (i.e., percent correct of selected targets) as a function of attentional load (i.e., number of tracked objects) and days of measurement. (C) Pupil size increases relative to a passive viewing condition (i.e., tracking no targets) as a function of attentional load and days of measurement. Error bars in (B,C) are standard error of the mean. All figures have been adapted from Wahn et al. (2016a).

The behavioral findings from the MOT task have been corroborated by studies of the physiological correlates of attentional processing. A prominent physiological correlate of attentional processing is pupil size (Heinrich, 1896; Kahneman and Beatty, 1966; Beatty, 1982; Hoeks and Levelt, 1993; Wierda et al., 2012; Mathôt et al., 2013; Alnæs et al., 2014; Lisi et al., 2015; Mathôt et al., 2016). Recent studies using the MOT task have shown that increases in pupil size are associated with increases in attentional load (Alnæs et al., 2014; Wahn et al., 2016a). Specifically, when participants perform the MOT task at varying levels of attentional load, pupil sizes systematically increase with attentional load, and these increases are consistently found for measurements on consecutive days (Wahn et al., 2016a, see Figure 1C). Apart from these studies of pupil size, researchers have also investigated physiological correlates of attentional processing using fMRI and EEG. Parietal brain regions typically associated with attentional processing were found to be active when participants performed the MOT task (Jovicich et al., 2001; Howe et al., 2009; Jahn et al., 2012; Alnæs et al., 2014), but notably also during several other spatial tasks (Mishkin and Ungerleider, 1982; Livingstone and Hubel, 1988; Maeder et al., 2001; Reed et al., 2005; Ahveninen et al., 2006; Ungerleider and Pessoa, 2008), suggesting that performing the MOT task recruits brain regions typically associated with visuospatial attention. Moreover, several EEG studies have identified neural correlates whose activity rises with increasing attentional load in the MOT task (Sternshein et al., 2011; Drew et al., 2013).

In sum, the MOT task has served to assess visuospatial limitations of attentional resources in a number of studies (Alvarez and Franconeri, 2007; Alnæs et al., 2014; Wahn et al., 2016a) and their physiological correlates (Jovicich et al., 2001; Howe et al., 2009; Jahn et al., 2012; Alnæs et al., 2014; Wahn et al., 2016a). In the following, we discuss how the use of the MOT and other spatial tasks has been extended to investigate spatial attentional resources across multiple sensory modalities.

3. Circumventing limitations of visuospatial attention

A question that has been extensively investigated in multisensory research is whether there are distinct pools of attentional resources for each sensory modality or one shared pool of attentional resources for all sensory modalities. Studies have found empirical support for the hypothesis that there are distinct resources (Duncan et al., 1997; Potter et al., 1998; Soto-Faraco and Spence, 2002; Larsen et al., 2003; Alais et al., 2006; Hein et al., 2006; Sinnett et al., 2006; Talsma et al., 2006; Van der Burg et al., 2007; Keitel et al., 2013; Finoia et al., 2015) as well as for the hypothesis that there are shared resources (Jolicoeur, 1999; Arnell and Larson, 2002; Soto-Faraco et al., 2002; Arnell and Jenkins, 2004; Macdonald and Lavie, 2011; Raveh and Lavie, 2015). In principle, if there are separate pools of attentional resources, attentional limitations in one sensory modality can be circumvented by distributing attentional processing across several sensory modalities. Conversely, if there is only one shared pool of attentional resources for all sensory modalities, attentional limitations in one sensory modality cannot be circumvented by distributing attentional processing across several sensory modalities.

The question of whether there are shared or distinct attentional resources across the sensory modalities has often been investigated using dual task designs (Pashler, 1994). In a dual task design, participants perform two tasks separately (“single task condition”) or at the same time (“dual task condition”). The extent to which attentional resources are shared for two tasks is assessed by comparing performance in the single task condition with performance in the dual task condition. If the attentional resources required for the two tasks are shared, task performance should decrease in the dual task condition relative to the single task condition. If attentional resources required for the two tasks are distinct, performance in the single and dual task conditions should not differ. In multisensory research, the two tasks in a dual task design are performed either in the same sensory modality or in different sensory modalities. The rationale of the design is that two tasks performed in the same sensory modality should always share attentional resources while two tasks performed in separate sensory modalities may or may not rely on shared attentional resources. That is, if attentional resources are distinct across sensory modalities, tasks performed in two separate sensory modalities should interfere less than tasks performed in the same sensory modality.
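The inference logic of a dual task design can be illustrated with a minimal sketch. All numbers, the function name, and the fixed threshold are hypothetical; in an actual study the single-vs.-dual comparison would be made with a statistical test rather than a cutoff:

```python
def resources_shared(single_perf, dual_perf, threshold=2.0):
    """Infer shared vs. distinct attentional resources from a dual task
    comparison: a reliable performance drop from the single task to the
    dual task condition indicates that the two tasks draw on shared
    resources. (Illustrative sketch; a real analysis would use a
    statistical test, not a fixed threshold.)"""
    return (single_perf - dual_perf) > threshold

# Hypothetical percent-correct values:
print(resources_shared(92.0, 78.0))  # large dual task cost -> shared (True)
print(resources_shared(92.0, 91.0))  # no meaningful cost -> distinct (False)
```

The same comparison is run once for task pairs within one sensory modality (where interference is expected) and once for pairs across modalities (where interference is diagnostic of shared resources).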

In the following, we focus on research that has used dual task designs to investigate how limitations in attentional resources for visuospatial attention can be circumvented by distributing information processing across sensory modalities. Several researchers have suggested that one factor influencing the allocation of attentional resources across sensory modalities is the task-specific type of attentional processing (Bonnel and Hafter, 1998; Chan and Newell, 2008; Arrighi et al., 2011; Wahn and König, 2016; Wahn et al., 2017c). That is, the allocation of attentional resources depends on whether tasks performed in separate sensory modalities require object-based attention or spatial attention (for a recent review, see Wahn and König, 2017). In recent studies (Arrighi et al., 2011; Wahn and König, 2015a,b), this task-dependency in attentional resource allocation has been tested in a dual task design involving a visuospatial task (i.e., a MOT task). In particular, the MOT task was performed either alone or in combination with a secondary task performed in the visual, auditory, or tactile sensory modality. The secondary task required either object-based attention (i.e., a discrimination task) or spatial attention (i.e., a localization task). When participants performed the MOT task in combination with an object-based attention task in another sensory modality (i.e., an auditory pitch discrimination task), distinct attentional resources were found for the visual and auditory modalities (Arrighi et al., 2011). However, in studies in which participants performed the MOT task in combination with either a tactile (Wahn and König, 2015b) or an auditory localization task (Wahn and König, 2015a), findings suggest that attentional resources are shared across the visual, tactile, and auditory sensory modalities. In particular, results showed that two spatial attention tasks interfered with each other equally regardless of whether they were performed in two separate sensory modalities or in the same sensory modality (see Figure 2A).

Figure 2

(A) Dual task interference when participants perform the MOT task in combination with a visual (VI), auditory (AU), tactile (TA), audiovisual (VIAU), or visuotactile (VITA) localization task. Interference is measured as the reduction in performance between single and dual task conditions. In particular, the performance reductions in both tasks (i.e., the MOT and localization tasks) are combined by taking the Euclidean distance between the performances in the single and dual task conditions, separately for each combination of tasks (MOT+VI, MOT+AU, MOT+TA, MOT+VIAU, MOT+VITA). (B) Increase in search time, relative to performing the visual search task alone, when participants perform the same task in combination with the VI, TA, or VITA localization task. (C) Joint visual search task conditions. Co-actors jointly searched for a target among distractors on two separate computer screens. A black mask was applied to the whole screen so that only the currently viewed location was visible to each co-actor. Co-actors received the information about where their co-actor was looking either via a visual map (VI) displayed below their viewed location, via vibrations on a vibrotactile belt (TA), or via tones received through headphones (AU). (D) Joint visual search results. Search performance (i.e., the time of the co-actor who found the target first) as a function of the sensory modality (VI, TA, or AU) in which the gaze information was received. Error bars in (A,B,D) are standard error of the mean. *Indicates significant comparisons at an alpha of 0.05. (A) has been adapted from Wahn and König (2015a,b), (B) from Wahn and König (2016), and (C,D) from Wahn et al. (2016c).
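The combined interference measure described in the Figure 2 caption can be sketched as follows; the function name and the performance values are hypothetical illustrations, not data from the cited studies:

```python
from math import hypot

def dual_task_interference(single_mot, dual_mot, single_loc, dual_loc):
    """Combine the performance drops in both tasks into one interference
    score: the Euclidean distance between the (MOT, localization)
    performance point in the single task conditions and the corresponding
    point in the dual task condition."""
    return hypot(single_mot - dual_mot, single_loc - dual_loc)

# Hypothetical percent-correct values for one task pairing:
score = dual_task_interference(90.0, 80.0, 95.0, 90.0)
print(round(score, 2))  # sqrt(10**2 + 5**2) ~ 11.18
```

A larger score indicates stronger mutual interference between the MOT task and the localization task for that pairing.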

Further support for these conclusions comes from another study (Wahn and König, 2016). In contrast to the earlier studies (Wahn and König, 2015a,b), this time an object-based attention task was combined with a spatial attention task. In particular, participants performed a visual search task in combination with either a visual or a tactile localization task. In line with the findings above (Arrighi et al., 2011), participants performed the visual search task faster in combination with the tactile localization task than in combination with the visual localization task (see Figure 2B). These findings suggest that attentional resources for the sensory modalities are distinct when tasks involve different types of attentional processing, i.e., object-based and spatial attentional processing.

In sum, the findings discussed above suggest that the allocation of attentional resources across sensory modalities (i.e., whether they are shared or distinct) depends on what type of attentional processing is required in a task. In particular, if tasks only require spatial attentional processing, findings suggest that attentional resources are shared across sensory modalities (Wahn and König, 2015a,b). However, if tasks also require object-based attentional processing, findings suggest that attentional resources are distinct across the sensory modalities (Arrighi et al., 2011; Wahn and König, 2016). Importantly, limitations in visuospatial attention can be circumvented by distributing attentional processing across sensory modalities if tasks involve object-based as well as spatial attentional processing.

Apart from this task-dependency, we want to emphasize that several other factors influence attentional processing, such as motor demands (Marois and Ivanoff, 2005; Dux et al., 2006) and the sensory modality in which task load is increased (Rees et al., 2001; Macdonald and Lavie, 2011; Molloy et al., 2015; Raveh and Lavie, 2015) (for a detailed discussion, see Wahn and König, 2017). Another important factor to consider is the age of participants. Findings of a recent study (Matusz et al., 2015) suggest that conclusions about the distribution of attentional resources across the sensory modalities in adults do not necessarily generalize to children. In addition, we want to note that another effective means to circumvent limitations in one sensory modality is to provide redundant information via several sensory modalities, thereby taking advantage of the behavioral benefits of multisensory integration (i.e., faster reaction times and higher accuracy) (Meredith and Stein, 1983; Ernst and Banks, 2002; Helbig and Ernst, 2008; Stein and Stanford, 2008; Gibney et al., 2017). The process of multisensory integration has been argued to be independent of top-down influences (Matusz and Eimer, 2011; De Meo et al., 2015; ten Oever et al., 2016) and to be robust against additional attentional demands (Wahn and König, 2015a,b) for low-level stimuli (for more general reviews on the topic, see van Atteveldt et al., 2014; Chen and Spence, 2016; Macaluso et al., 2016; Tang et al., 2016), making it highly suitable for circumventing limitations within one sensory modality.

4. Circumventing limitations of visuospatial attention in joint tasks

In the previous sections, we have reviewed studies in which participants perform a task alone. However, in many situations in daily life, tasks are performed jointly by two or more humans with a shared goal (Sebanz et al., 2006; Vesper et al., 2017), for instance, when two humans carry a table together (Sebanz et al., 2006), search for a friend in a crowd (Brennan et al., 2008), or play team sports such as basketball or soccer. In such joint tasks, humans often achieve a higher performance than the better individual would achieve alone (i.e., a collective benefit) (Bahrami et al., 2010). Collective benefits have been investigated in several task domains, such as visuomotor tasks (Knoblich and Jordan, 2003; Masumoto and Inui, 2013; Ganesh et al., 2014; Skewes et al., 2015; Rigoli et al., 2015; Wahn et al., 2016b), decision-making tasks (Bahrami et al., 2010, 2012a,b), and visuospatial tasks (Brennan et al., 2008; Neider et al., 2010; Brennan and Enns, 2015; Wahn et al., 2016c, 2017b).

Regarding visuospatial tasks, several studies have investigated joint performance in visual search tasks (Brennan et al., 2008; Neider et al., 2010; Brennan and Enns, 2015; Wahn et al., 2016c). In particular, Brennan et al. (2008) investigated how performance in a joint visual search task, in which two co-actors jointly search for a target stimulus among distractor stimuli, depends on how information is exchanged between the two co-actors. Brennan et al. (2008) found that co-actors performed the joint search fastest and divided the task demands most effectively in the condition in which they received gaze information (i.e., a continuous display of the co-actor's gaze location), suggesting that co-actors benefit greatly from receiving spatial information about the actions of their co-actor (see also Wahn et al., 2017b).

The task demands in the joint visual search task as employed by Brennan et al. (2008) involve a combination of object-based attention (i.e., discriminating targets from distractors in the visual search task) and spatial attention (i.e., localizing where the co-actor is looking using the gaze information). As reported above, findings in multisensory research suggest that limitations of visuospatial attention can be effectively circumvented by distributing information processing across sensory modalities if processing involves a combination of object-based and spatial attention (Arrighi et al., 2011; Wahn and König, 2016). In a recent study (Wahn et al., 2016c), these findings from multisensory research were applied to a joint visual search setting similar to the one used by Brennan et al. (2008). In particular, the researchers investigated whether joint visual search performance is faster when actors receive information about their co-actor's viewed location via the auditory or tactile sensory modality than when they receive this information via the visual modality (see Figure 2C). Co-actors indeed searched faster when they received the viewing information via the tactile or auditory sensory modality than via the visual sensory modality (see Figure 2D). These results suggest that the findings from multisensory research mentioned above (Arrighi et al., 2011; Wahn and König, 2016) can be successfully applied to a joint visuospatial task.

5. Conclusions and future directions

The aim of the present article was to review recent studies investigating limitations of visuospatial attention. These studies have reliably found limitations of visuospatial attention and physiological correlates whose activity rises with increasing visuospatial attentional demands (Sternshein et al., 2011; Drew et al., 2013; Alnæs et al., 2014; Wahn et al., 2016a). Findings from multisensory research have demonstrated that such limitations of visuospatial attention can be circumvented by distributing information processing across sensory modalities (Arrighi et al., 2011; Wahn and König, 2015a,b, 2016), and these findings are applicable to joint tasks (Wahn et al., 2016c).

Apart from the study above (Wahn et al., 2016c), other studies on joint action have investigated how the use of multisensory stimuli (e.g., visual and auditory) can serve to facilitate joint performance (Knoblich and Jordan, 2003) and how the process of multisensory integration is affected by social settings (Heed et al., 2010; Wahn et al., 2017a). However, these studies have not investigated how distributing information processing across sensory modalities could facilitate joint performance. We suggest that future studies further investigate to what extent findings from multisensory research are applicable to joint tasks. In particular, attentional limitations may be circumvented in any joint task that involves a combination of object-based and spatial attentional processing in the visual sensory modality, thereby possibly facilitating joint performance.

The possibility of circumventing limitations of visuospatial attention is also relevant for many real-world tasks that require visuospatial attention, such as car driving (Spence and Read, 2003; Kunar et al., 2008; Spence and Ho, 2012), air-traffic control (Giraudet et al., 2014), aviation (Nikolic et al., 1998; Sklar and Sarter, 1999), navigation (Nagel et al., 2005; Kaspar et al., 2014; König et al., 2016), and rehabilitation (Johansson, 2012; Maidenbaum et al., 2014). Notably, when applying findings to real-world tasks, additional factors such as how much the task has been practiced (Ruthruff et al., 2001; Chirimuuta et al., 2007) or memorized (Matusz et al., 2017) should be taken into account, as real-world tasks are often highly practiced and memorized. More generally, in such scenarios limitations of visuospatial attention could be effectively circumvented by distributing attentional processing across sensory modalities, thereby improving human performance and reducing the risk of accidents.

Statements

Author contributions

Drafted the manuscript: BW. Revised the manuscript: BW and PK.

Funding

We acknowledge the support by H2020 – H2020-FETPROACT-2014 641321 – socSMCs (for BW) and ERC-2010-AdG #269716 – MULTISENSE (for PK). Moreover, we acknowledge support from the Deutsche Forschungsgemeinschaft (DFG), Open Access Publishing Fund of Osnabrück University, and an open access publishing award by Osnabrück University (for BW).

Acknowledgments

This review is in large part a reproduction of Basil Wahn's Ph.D. thesis [“Limitations of visuospatial attention (and how to circumvent them)”], which can be found online in the following University repository: https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2017051515895

Note that the Ph.D. thesis is available only in the University repository. Moreover, submitting the thesis as a mini review to this journal is in line with the policies of the University of Osnabrück.

We also acknowledge that Figures 1, 2 have appeared in part in our earlier publications. We referenced the original publications in the Figure captions and obtained permission to re-use the Figures.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  • 1

    AhveninenJ.JääskelinenI. P.RaijT.BonmassarG.DevoreS.HämäläinenM.et al. (2006). Task-modulated “what” and “where” pathways in human auditory cortex. Proc. Natl. Acad. Sci. U.S.A.103, 1460814613. 10.1073/pnas.0510480103

  • 2

    AlaisD.MorroneC.BurrD. (2006). Separate attentional resources for vision and audition. Proc. R. Soc. B Biol. Sci.273, 13391345. 10.1098/rspb.2005.3420

  • 3

    AlnæsD.SneveM. H.EspesethT.EndestadT.van de PavertS. H. P.LaengB. (2014). Pupil size signals mental effort deployed during multiple object tracking and predicts brain activity in the dorsal attention network and the locus coeruleus. J. Vis.14, 1. 10.1167/14.4.1

  • 4

    AlvarezG. A.FranconeriS. L. (2007). How many objects can you track?: Evidence for a resource-limited attentive tracking mechanism. J. Vis.7:14. 10.1167/7.13.14

  • 5

    ArnellK. M.JenkinsR. (2004). Revisiting within-modality and cross-modality attentional blinks: Effects of target–distractor similarity. Percept. Psychophys.66, 11471161. 10.3758/BF03196842

  • 6

    ArnellK. M.LarsonJ. M. (2002). Cross-modality attentional blinks without preparatory task-set switching. Psychon. Bull. Rev.9, 497506. 10.3758/BF03196305

  • 7

    ArrighiR.LunardiR.BurrD. (2011). Vision and audition do not share attentional resources in sustained tasks. Front. Psychol.2:56. 10.3389/fpsyg.2011.00056

  • 8

    AtsmaJ.KoningA.van LierR. (2012). Multiple object tracking: anticipatory attention doesn't “bounce.”J. Vis.12, 11. 10.1167/12.13.1

  • 9

    BahramiB.OlsenK.BangD.RoepstorffA.ReesG.FrithC. (2012a). Together, slowly but surely: the role of social interaction and feedback on the build-up of benefit in collective decision-making. J. Exp. Psychol. Hum. Percept. Perform.38, 38. 10.1037/a0025708

  • 10

    BahramiB.OlsenK.BangD.RoepstorffA.ReesG.FrithC. (2012b). What failure in collective decision-making tells us about metacognition. Philos. Trans. R. Soc. B Biol. Sci.367, 13501365. 10.1098/rstb.2011.0420

  • 11

    BahramiB.OlsenK.LathamP. E.RoepstorffA.ReesG.FrithC. D. (2010). Optimally interacting minds. Science329, 10811085. 10.1126/science.1185718

  • 12

    BeattyJ. (1982). Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol. Bull.91, 276. 10.1037/0033-2909.91.2.276

  • 13

    BonnelA.-M.HafterE. R. (1998). Divided attention between simultaneous auditory and visual signals. Percept. Psychophys.60, 179190. 10.3758/BF03206027

  • 14

    BrennanA. A.EnnsJ. T. (2015). When two heads are better than one: interactive versus independent benefits of collaborative cognition. Psychon. Bull. Rev.22, 10761082. 10.3758/s13423-014-0765-4

  • 15

    BrennanS. E.ChenX.DickinsonC. A.NeiderM. B.ZelinskyG. J. (2008). Coordinating cognition: the costs and benefits of shared gaze during collaborative search. Cognition106, 14651477. 10.1016/j.cognition.2007.05.012

  • 16

    CavanaghP.AlvarezG. A. (2005). Tracking multiple targets with multifocal attention. Trends Cogn. Sci.9, 349354. 10.1016/j.tics.2005.05.009

  • 17

    ChanJ. S.NewellF. N. (2008). Behavioral evidence for task-dependent “what” versus “where” processing within and across modalities. Percept. Psychophys.70, 3649. 10.3758/PP.70.1.36

  • 18

    ChenY.-C.SpenceC. (2016). Hemispheric asymmetry: looking for a novel signature of the modulation of spatial attention in multisensory processing. Psychon. Bull. Rev.24, 690707. 10.3758/s13423-016-1154-y

  • 19

    ChirimuutaM.BurrD.MorroneM. C. (2007). The role of perceptual learning on modality-specific visual attentional effects. Vis. Res.47, 6070. 10.1016/j.visres.2006.09.002

  • 20

    ChunM. M.GolombJ. D.Turk-BrowneN. B. (2011). A taxonomy of external and internal attention. Ann. Rev. Psychol.62, 73101. 10.1146/annurev.psych.093008.100427

  • 21

    De MeoR.MurrayM. M.ClarkeS.MatuszP. J. (2015). Top-down control and early multisensory processes: chicken vs. egg. Front. Integr. Neurosci.9:17. 10.3389/fnint.2015.00017

  • 22

    DrewT.HorowitzT. S.VogelE. K. (2013). Swapping or dropping? electrophysiological measures of difficulty during multiple object tracking. Cognition126, 213223. 10.1016/j.cognition.2012.10.003

  • 23

    DuncanJ.MartensS.WardR. (1997). Restricted attentional capacity within but not between sensory modalities. Nature397, 808810. 10.1038/42947

  • 24

    Dux P. E., Ivanoff J., Asplund C. L., Marois R. (2006). Isolation of a central bottleneck of information processing with time-resolved fMRI. Neuron 52, 1109–1120. 10.1016/j.neuron.2006.11.009

  • 25

    Ernst M. O., Banks M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433. 10.1038/415429a

  • 26

    Fink G., Dolan R., Halligan P., Marshall J., Frith C. (1997). Space-based and object-based visual attention: shared and specific neural domains. Brain 120, 2013–2028. 10.1093/brain/120.11.2013

  • 27

    Finoia P., Mitchell D. J., Hauk O., Beste C., Pizzella V., Duncan J. (2015). Concurrent brain responses to separate auditory and visual targets. J. Neurophysiol. 114, 1239–1247. 10.1152/jn.01050.2014

  • 28

    Ganesh G., Takagi A., Osu R., Yoshioka T., Kawato M., Burdet E. (2014). Two is better than one: physical interactions improve motor performance in humans. Sci. Rep. 4:3824. 10.1038/srep03824

  • 29

    Gibney K. D., Aligbe E., Eggleston B. A., Nunes S. R., Kerkhoff W. G., Dean C. L., et al. (2017). Visual distractors disrupt audiovisual integration regardless of stimulus complexity. Front. Integr. Neurosci. 11:1. 10.3389/fnint.2017.00001

  • 30

    Giraudet L., Berenger M., Imbert J.-P., Tremblay S., Causse M. (2014). Inattentional deafness in simulated air traffic control tasks: a behavioral and P300 analysis, in 5th International Conference on Applied Human Factors and Ergonomics (Kraków).

  • 31

    Heed T., Habets B., Sebanz N., Knoblich G. (2010). Others' actions reduce crossmodal integration in peripersonal space. Curr. Biol. 20, 1345–1349. 10.1016/j.cub.2010.05.068

  • 32

    Hein G., Parr A., Duncan J. (2006). Within-modality and cross-modality attentional blinks in a simple discrimination task. Percept. Psychophys. 68, 54–61. 10.3758/BF03193655

  • 33

    Heinrich W. (1896). Die Aufmerksamkeit und die Funktion der Sinnesorgane. Zeitschrift für Psychologie und Physiologie der Sinnesorgane 11, 342–388.

  • 34

    Helbig H. B., Ernst M. O. (2008). Visual-haptic cue weighting is independent of modality-specific attention. J. Vis. 8:21. 10.1167/8.1.21

  • 35

    Hoeks B., Levelt W. J. (1993). Pupillary dilation as a measure of attention: a quantitative system analysis. Behav. Res. Methods Instrum. Comput. 25, 16–26. 10.3758/BF03204445

  • 36

    Howe P. D., Horowitz T. S., Morocz I. A., Wolfe J., Livingstone M. S. (2009). Using fMRI to distinguish components of the multiple object tracking task. J. Vis. 9:10. 10.1167/9.4.10

  • 37

    Jahn G., Wendt J., Lotze M., Papenmeier F., Huff M. (2012). Brain activation during spatial updating and attentive tracking of moving targets. Brain Cogn. 78, 105–113. 10.1016/j.bandc.2011.12.001

  • 38

    James W. (1890). The Principles of Psychology. Cambridge, MA: Harvard UP.

  • 39

    Johansson B. B. (2012). Multisensory stimulation in stroke rehabilitation. Front. Hum. Neurosci. 6:60. 10.3389/fnhum.2012.00060

  • 40

    Jolicoeur P. (1999). Restricted attentional capacity between sensory modalities. Psychon. Bull. Rev. 6, 87–92. 10.3758/BF03210813

  • 41

    Jovicich J., Peters R. J., Koch C., Braun J., Chang L., Ernst T. (2001). Brain areas specific for attentional load in a motion-tracking task. J. Cogn. Neurosci. 13, 1048–1058. 10.1162/089892901753294347

  • 42

    Kahneman D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall.

  • 43

    Kahneman D., Beatty J. (1966). Pupil diameter and load on memory. Science 154, 1583–1585. 10.1126/science.154.3756.1583

  • 44

    Kaspar K., König S., Schwandt J., König P. (2014). The experience of new sensorimotor contingencies by sensory augmentation. Conscious. Cogn. 28, 47–63. 10.1016/j.concog.2014.06.006

  • 45

    Keane B. P., Pylyshyn Z. W. (2006). Is motion extrapolation employed in multiple object tracking? Tracking as a low-level, non-predictive function. Cogn. Psychol. 52, 346–368. 10.1016/j.cogpsych.2005.12.001

  • 46

    Keitel C., Maess B., Schröger E., Müller M. M. (2013). Early visual and auditory processing rely on modality-specific attentional resources. Neuroimage 70, 240–249. 10.1016/j.neuroimage.2012.12.046

  • 47

    Knoblich G., Jordan J. S. (2003). Action coordination in groups and individuals: learning anticipatory control. J. Exp. Psychol. Learn. Mem. Cogn. 29, 1006–1016. 10.1037/0278-7393.29.5.1006

  • 48

    König S. U., Schumann F., Keyser J., Goeke C., Krause C., Wache S., et al. (2016). Learning new sensorimotor contingencies: effects of long-term use of sensory augmentation on the brain and conscious perception. PLoS ONE 11:e0166647. 10.1371/journal.pone.0166647

  • 49

    Kunar M. A., Carter R., Cohen M., Horowitz T. S. (2008). Telephone conversation impairs sustained visual attention via a central bottleneck. Psychon. Bull. Rev. 15, 1135–1140. 10.3758/PBR.15.6.1135

  • 50

    Larsen A., McIlhagga W., Baert J., Bundesen C. (2003). Seeing or hearing? Perceptual independence, modality confusions, and crossmodal congruity effects with focused and divided attention. Percept. Psychophys. 65, 568–574. 10.3758/BF03194583

  • 51

    Lavie N. (2005). Distracted and confused? Selective attention under load. Trends Cogn. Sci. 9, 75–82. 10.1016/j.tics.2004.12.004

  • 52

    Lavie N. (2010). Attention, distraction, and cognitive control under load. Curr. Dir. Psychol. Sci. 19, 143–148. 10.1177/0963721410370295

  • 53

    Lisi M., Bonato M., Zorzi M. (2015). Pupil dilation reveals top-down attentional load during spatial monitoring. Biol. Psychol. 112, 39–45. 10.1016/j.biopsycho.2015.10.002

  • 54

    Livingstone M., Hubel D. (1988). Segregation of form, color, movement, and depth: anatomy, physiology, and perception. Science 240, 740–749. 10.1126/science.3283936

  • 55

    Macaluso E., Noppeney U., Talsma D., Vercillo T., Hartcher-O'Brien J., Adam R. (2016). The curious incident of attention in multisensory integration: bottom-up vs. top-down. Multisens. Res. 29, 557–583. 10.1163/22134808-00002528

  • 56

    Macdonald J. S., Lavie N. (2011). Visual perceptual load induces inattentional deafness. Atten. Percept. Psychophys. 73, 1780–1789. 10.3758/s13414-011-0144-4

  • 57

    Maeder P. P., Meuli R. A., Adriani M., Bellmann A., Fornari E., Thiran J.-P., et al. (2001). Distinct pathways involved in sound recognition and localization: a human fMRI study. Neuroimage 14, 802–816. 10.1006/nimg.2001.0888

  • 58

    Maidenbaum S., Abboud S., Amedi A. (2014). Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci. Biobehav. Rev. 41, 3–15. 10.1016/j.neubiorev.2013.11.007

  • 59

    Marois R., Ivanoff J. (2005). Capacity limits of information processing in the brain. Trends Cogn. Sci. 9, 296–305. 10.1016/j.tics.2005.04.010

  • 60

    Masumoto J., Inui N. (2013). Two heads are better than one: both complementary and synchronous strategies facilitate joint action. J. Neurophysiol. 109, 1307–1314. 10.1152/jn.00776.2012

  • 61

    Mathôt S., Melmi J.-B., van der Linden L., Van der Stigchel S. (2016). The mind-writing pupil: a human-computer interface based on decoding of covert attention through pupillometry. PLoS ONE 11:e0148805. 10.1371/journal.pone.0148805

  • 62

    Mathôt S., Van der Linden L., Grainger J., Vitu F. (2013). The pupillary light response reveals the focus of covert visual attention. PLoS ONE 8:e78168. 10.1371/journal.pone.0078168

  • 63

    Matusz P. J., Broadbent H., Ferrari J., Forrest B., Merkley R., Scerif G. (2015). Multi-modal distraction: insights from children's limited attention. Cognition 136, 156–165. 10.1016/j.cognition.2014.11.031

  • 64

    Matusz P. J., Eimer M. (2011). Multisensory enhancement of attentional capture in visual search. Psychon. Bull. Rev. 18, 904–909. 10.3758/s13423-011-0131-8

  • 65

    Matusz P. J., Wallace M. T., Murray M. M. (2017). A multisensory perspective on object memory. Neuropsychologia. [Epub ahead of print]. 10.1016/j.neuropsychologia.2017.04.008

  • 66

    Meredith M. A., Stein B. E. (1983). Interactions among converging sensory inputs in the superior colliculus. Science 221, 389–391. 10.1126/science.6867718

  • 67

    Mishkin M., Ungerleider L. G. (1982). Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behav. Brain Res. 6, 57–77. 10.1016/0166-4328(82)90081-X

  • 68

    Molloy K., Griffiths T. D., Chait M., Lavie N. (2015). Inattentional deafness: visual load leads to time-specific suppression of auditory evoked responses. J. Neurosci. 35, 16046–16054. 10.1523/JNEUROSCI.2931-15.2015

  • 69

    Nagel S. K., Carl C., Kringe T., Märtin R., König P. (2005). Beyond sensory substitution—learning the sixth sense. J. Neural Eng. 2, R13–R26. 10.1088/1741-2560/2/4/R02

  • 70

    Neider M. B., Chen X., Dickinson C. A., Brennan S. E., Zelinsky G. J. (2010). Coordinating spatial referencing using shared gaze. Psychon. Bull. Rev. 17, 718–724. 10.3758/PBR.17.5.718

  • 71

    Nikolic M. I., Sklar A. E., Sarter N. B. (1998). Multisensory feedback in support of pilot-automation coordination: the case of uncommanded mode transitions, in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 42 (Chicago, IL: SAGE Publications), 239–243.

  • 72

    Pashler H. (1994). Dual-task interference in simple tasks: data and theory. Psychol. Bull. 116:220. 10.1037/0033-2909.116.2.220

  • 73

    Potter M. C., Chun M. M., Banks B. S., Muckenhoupt M. (1998). Two attentional deficits in serial target search: the visual attentional blink and an amodal task-switch deficit. J. Exp. Psychol. Learn. Mem. Cogn. 24, 979–992. 10.1037/0278-7393.24.4.979

  • 74

    Pylyshyn Z. W., Storm R. W. (1988). Tracking multiple independent targets: evidence for a parallel tracking mechanism. Spat. Vis. 3, 179–197. 10.1163/156856888X00122

  • 75

    Raveh D., Lavie N. (2015). Load-induced inattentional deafness. Atten. Percept. Psychophys. 77, 483–492. 10.3758/s13414-014-0776-2

  • 76

    Reed C. L., Klatzky R. L., Halgren E. (2005). What vs. where in touch: an fMRI study. Neuroimage 25, 718–726. 10.1016/j.neuroimage.2004.11.044

  • 77

    Rees G., Frith C., Lavie N. (2001). Processing of irrelevant visual motion during performance of an auditory attention task. Neuropsychologia 39, 937–949. 10.1016/S0028-3932(01)00016-1

  • 78

    Rigoli L., Romero V., Shockley K., Funke G. J., Strang A. J., Richardson M. J. (2015). Effects of complementary control on the coordination dynamics of joint-action, in Proceedings of the 37th Annual Conference of the Cognitive Science Society (Pasadena, CA), 1997–2002.

  • 79

    Ruthruff E., Johnston J. C., Van Selst M. (2001). Why practice reduces dual-task interference. J. Exp. Psychol. Hum. Percept. Perform. 27:3. 10.1037/0096-1523.27.1.3

  • 80

    Sebanz N., Bekkering H., Knoblich G. (2006). Joint action: bodies and minds moving together. Trends Cogn. Sci. 10, 70–76. 10.1016/j.tics.2005.12.009

  • 81

    Serences J. T., Schwarzbach J., Courtney S. M., Golay X., Yantis S. (2004). Control of object-based attention in human cortex. Cereb. Cortex 14, 1346–1357. 10.1093/cercor/bhh095

  • 82

    Sigman M., Dehaene S. (2008). Brain mechanisms of serial and parallel processing during dual-task performance. J. Neurosci. 28, 7585–7598. 10.1523/JNEUROSCI.0948-08.2008

  • 83

    Sinnett S., Costa A., Soto-Faraco S. (2006). Manipulating inattentional blindness within and across sensory modalities. Q. J. Exp. Psychol. 59, 1425–1442. 10.1080/17470210500298948

  • 84

    Skewes J. C., Skewes L., Michael J., Konvalinka I. (2015). Synchronised and complementary coordination mechanisms in an asymmetric joint aiming task. Exp. Brain Res. 233, 551–565. 10.1007/s00221-014-4135-2

  • 85

    Sklar A. E., Sarter N. B. (1999). Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains. Hum. Factors 41, 543–552. 10.1518/001872099779656716

  • 86

    Soto D., Blanco M. J. (2004). Spatial attention and object-based attention: a comparison within a single task. Vis. Res. 44, 69–81. 10.1016/j.visres.2003.08.013

  • 87

    Soto-Faraco S., Spence C. (2002). Modality-specific auditory and visual temporal processing deficits. Q. J. Exp. Psychol. 55, 23–40. 10.1080/02724980143000136

  • 88

    Soto-Faraco S., Spence C., Fairbank K., Kingstone A., Hillstrom A. P., Shapiro K. (2002). A crossmodal attentional blink between vision and touch. Psychon. Bull. Rev. 9, 731–738. 10.3758/BF03196328

  • 89

    Spence C. (2010). Crossmodal spatial attention. Ann. N. Y. Acad. Sci. 1191, 182–200. 10.1111/j.1749-6632.2010.05440.x

  • 90

    Spence C., Driver J. (2004). Crossmodal Space and Crossmodal Attention. Oxford, UK: Oxford University Press.

  • 91

    Spence C., Ho C. (2012). The Multisensory Driver: Implications for Ergonomic Car Interface Design. Hampshire: Ashgate Publishing.

  • 92

    Spence C., Read L. (2003). Speech shadowing while driving: on the difficulty of splitting attention between eye and ear. Psychol. Sci. 14, 251–256. 10.1111/1467-9280.02439

  • 93

    Stein B. E., Stanford T. R. (2008). Multisensory integration: current issues from the perspective of the single neuron. Nat. Rev. Neurosci. 9, 255–266. 10.1038/nrn2331

  • 94

    Sternshein H., Agam Y., Sekuler R. (2011). EEG correlates of attentional load during multiple object tracking. PLoS ONE 6:e22660. 10.1371/journal.pone.0022660

  • 95

    Talsma D., Doty T. J., Strowd R., Woldorff M. G. (2006). Attentional capacity for processing concurrent stimuli is larger across sensory modalities than within a modality. Psychophysiology 43, 541–549. 10.1111/j.1469-8986.2006.00452.x

  • 96

    Tang X., Wu J., Shen Y. (2016). The interactions of multisensory integration with endogenous and exogenous attention. Neurosci. Biobehav. Rev. 61, 208–224. 10.1016/j.neubiorev.2015.11.002

  • 97

    ten Oever S., Romei V., van Atteveldt N., Soto-Faraco S., Murray M. M., Matusz P. J. (2016). The COGs (context, object, and goals) in multisensory processing. Exp. Brain Res. 234, 1307–1323. 10.1007/s00221-016-4590-z

  • 98

    Ungerleider L. G., Pessoa L. (2008). What and where pathways. Scholarpedia 3:5342. 10.4249/scholarpedia.5342

  • 99

    van Atteveldt N., Murray M. M., Thut G., Schroeder C. E. (2014). Multisensory integration: flexible use of general operations. Neuron 81, 1240–1253. 10.1016/j.neuron.2014.02.044

  • 100

    Van der Burg E., Olivers C. N. L., Bronkhorst A. W., Koelewijn T., Theeuwes J. (2007). The absence of an auditory–visual attentional blink is not due to echoic memory. Percept. Psychophys. 69, 1230–1241. 10.3758/BF03193958

  • 101

    Vesper C., Abramova E., Bütepage J., Ciardo F., Crossey B., Effenberg A., et al. (2017). Joint action: mental representations, shared information and general mechanisms for coordinating with others. Front. Psychol. 7:2039. 10.3389/fpsyg.2016.02039

  • 102

    Wahn B., Ferris D. P., Hairston W. D., König P. (2016a). Pupil sizes scale with attentional load and task experience in a multiple object tracking task. PLoS ONE 11:e0168087. 10.1371/journal.pone.0168087

  • 103

    Wahn B., Keshava A., Sinnett S., Kingstone A., König P. (2017a). Audiovisual integration is affected by performing a task jointly, in Proceedings of the 39th Annual Conference of the Cognitive Science Society (Austin, TX), 1296–1301.

  • 104

    Wahn B., Kingstone A., König P. (2017b). Two trackers are better than one: information about the co-actor's actions and performance scores contribute to the collective benefit in a joint visuospatial task. Front. Psychol. 8:669. 10.3389/fpsyg.2017.00669

  • 105

    Wahn B., König P. (2015a). Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration. Front. Psychol. 6:1084. 10.3389/fpsyg.2015.01084

  • 106

    Wahn B., König P. (2015b). Vision and haptics share spatial attentional resources and visuotactile integration is not affected by high attentional load. Multisens. Res. 28, 371–392. 10.1163/22134808-00002482

  • 107

    Wahn B., König P. (2016). Attentional resource allocation in visuotactile processing depends on the task, but optimal visuotactile integration does not depend on attentional resources. Front. Integr. Neurosci. 10:13. 10.3389/fnint.2016.00013

  • 108

    Wahn B., König P. (2017). Is attentional resource allocation across sensory modalities task-dependent? Adv. Cogn. Psychol. 13, 83–96. 10.5709/acp-0209-2

  • 109

    Wahn B., Murali S., Sinnett S., König P. (2017c). Auditory stimulus detection partially depends on visuospatial attentional resources. i-Perception 8:2041669516688026. 10.1177/2041669516688026

  • 110

    Wahn B., Schmitz L., König P., Knoblich G. (2016b). Benefiting from being alike: interindividual skill differences predict collective benefit in joint object control, in Proceedings of the 38th Annual Conference of the Cognitive Science Society (Austin, TX), 2747–2752.

  • 111

    Wahn B., Schwandt J., Krüger M., Crafa D., Nunnendorf V., König P. (2016c). Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search. Ergonomics 59, 781–795. 10.1080/00140139.2015.1099742

  • 112

    Wickens C. D. (2002). Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 3, 159–177. 10.1080/14639220210123806

  • 113

    Wierda S. M., van Rijn H., Taatgen N. A., Martens S. (2012). Pupil dilation deconvolution reveals the dynamics of attention at high temporal resolution. Proc. Natl. Acad. Sci. U.S.A. 109, 8456–8460. 10.1073/pnas.1201858109

  • 114

    Yantis S. (1992). Multielement visual tracking: attention and perceptual organization. Cogn. Psychol. 24, 295–340. 10.1016/0010-0285(92)90010-Y

Keywords

multisensory processing, visuospatial attention, joint action, attentional resources, multiple object tracking

Citation

Wahn B and König P (2017) Can Limitations of Visuospatial Attention Be Circumvented? A Review. Front. Psychol. 8:1896. doi: 10.3389/fpsyg.2017.01896

Received

17 May 2017

Accepted

12 October 2017

Published

27 October 2017

Volume

8 - 2017

Edited by

Kathrin Ohla, Medical School Berlin, Germany

Reviewed by

Pawel J. Matusz, Centre Hospitalier Universitaire Vaudois (CHUV), Switzerland; Roberto Arrighi, University of Florence, Italy

*Correspondence: Basil Wahn

This article was submitted to Perception Science, a section of the journal Frontiers in Psychology

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
