AUTHOR=Adeel Ahsan TITLE=Conscious Multisensory Integration: Introducing a Universal Contextual Field in Biological and Deep Artificial Neural Networks JOURNAL=Frontiers in Computational Neuroscience VOLUME=14 YEAR=2020 URL=https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2020.00015 DOI=10.3389/fncom.2020.00015 ISSN=1662-5188 ABSTRACT=Awareness plays a major role in human cognition and adaptive behaviour, though the mechanisms involved remain uncertain. Awareness is not an objectively established fact, and its role in multisensory integration is not yet fully understood; hence, questions remain: (1) how does the biological neuron integrate incoming multisensory signals with respect to different situations? (2) how are the roles of multisensory signals defined such that they help the neuron(s) generate a precise control command, complying with the anticipated behavioural constraints of the environment? In this research, we introduce a new theory of conscious multisensory integration (CMI) that sheds light on the aforementioned crucial neuroscience questions. Specifically, the contextual field (CF) of pyramidal cells in coherent infomax theory (Kay et al., 1998; Kay and Phillips, 2011) is split into two functionally distinctive integrated input fields: the local contextual field (LCF) and the universal contextual field (UCF). The LCF defines the modulatory sensory signal coming from other parts of the brain (in principle, from anywhere in space-time), and the UCF defines the outside environment and anticipated behaviour (based on past learning and reasoning). Both the LCF and UCF are integrated with the receptive field (RF) to develop a new class of contextually-adaptive (CA) neuron, which adapts to changing situations. The proposed theory is evaluated using human contextual audio-visual (AV) speech modelling.
Firstly, using a deep recurrent neural network, it is shown (at the network level) that the parallel three-stream (RF+LCF+UCF) model outperforms the parallel two-stream (RF+LCF) and single-stream (RF-only) models in terms of enhanced learning, processing, and decision-making. The AV speech enhancement results provide clues to contextual modulation and selective multisensory (AV) information amplification/suppression with respect to the outside world. Lastly, the RF, LCF, and UCF are integrated at the neural level to enable further quantification of multiway mutual/shared information. We hypothesize that the pyramidal cell, in addition to the classical excitatory and inhibitory signals, receives LCF and UCF inputs. Furthermore, the integration of the RF, LCF, and UCF produces a widely distributed activity pattern, defining the outside environment, anticipated behaviour, and the corresponding precise multisensory integration.
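The neural-level integration of RF, LCF, and UCF can be illustrated with a toy numerical sketch of a single contextually-adaptive unit, in the spirit of the coherent-infomax modulatory activation function: the RF provides the driving input, while the combined LCF and UCF context amplifies or suppresses that drive without being able to create output on its own. The weights, dimensions, and exact nonlinearity below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def contextually_modulated(rf, lcf, ucf, w_r, w_l, w_u):
    """Toy contextually-adaptive unit (illustrative only).

    rf  : receptive-field input vector (driving)
    lcf : local contextual field vector (modulatory)
    ucf : universal contextual field vector (modulatory)
    w_* : corresponding weight vectors
    """
    r = rf @ w_r                # integrated driving (RF) input
    c = lcf @ w_l + ucf @ w_u   # integrated contextual (LCF + UCF) input
    # Modulatory interaction: context scales the RF drive but cannot
    # flip its sign (exp > 0) and cannot produce output when r = 0.
    a = 0.5 * r * (1.0 + np.exp(r * c))
    return np.tanh(a)

rf, lcf, ucf = rng.normal(size=4), rng.normal(size=3), rng.normal(size=3)
w_r, w_l, w_u = rng.normal(size=4), rng.normal(size=3), rng.normal(size=3)
print(contextually_modulated(rf, lcf, ucf, w_r, w_l, w_u))
```

Note the two properties that make the contextual fields modulatory rather than driving: with zero context the unit reduces to its pure RF response, and with zero RF drive the output is zero regardless of context.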