WM: an integrated framework for modeling the visual system
Pamela M. Baker 1* and W. Bair 1
1 University of Washington, Department of Biological Structure, United States
Visual neuroscience is a broad field consisting of many sub-specialities, covering a range of experimental paradigms, visual modalities, and scales of description. Forging links between experimental findings across subfields is important for advancing our understanding of visual function. Across and within these subfields many models have been built, but they vary widely in their fundamental architecture, level of physiological detail, the nature of their inputs, and the types of responses or outputs that they generate. These differences make it difficult or impossible to directly test the models with novel visual stimuli, to make direct comparisons between models, or to compare model output with the wealth of experimental data.
To address these issues, we have developed a framework for building working models (WM) of the visual system based on the following principles: (1) models should be able to operate on any visual input, so that they can be tested in a variety of ways with stimuli relevant to color, motion, form, and depth processing; (2) models should produce outputs like those recorded in experimental studies (spikes, membrane voltages, conductances), to facilitate direct comparison with experimental data and to make clear predictions that can be tested experimentally; and (3) models should be fast and easy to run online by anyone; otherwise they will never be tested enough to be thoroughly understood.
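As a rough illustration of principle (2), the outputs a model run needs to expose can be pictured as a per-unit record holding spike times, a voltage trace, and conductance traces. The type below is a sketch in C (the language of the simulation core); the name wm_response_record and all of its fields are assumptions made for illustration, not the data structures actually used in WM.

/*
 * Illustrative sketch only: one record of the kinds of output principle (2)
 * calls for (spike times, membrane voltage, synaptic conductances).
 */
typedef struct {
    int     unit_id;   /* index of the model unit being recorded           */
    int     nspikes;   /* number of spikes in the trial                    */
    double *spike_t;   /* spike times (ms)                                 */
    double *vm;        /* membrane voltage trace (mV), one value per step  */
    double *g_exc;     /* excitatory synaptic conductance trace            */
    double *g_inh;     /* inhibitory synaptic conductance trace            */
    int     nt;        /* number of time samples in the traces             */
    double  dt_ms;     /* sampling interval (ms)                           */
} wm_response_record;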
This framework supports simple LN (linear-nonlinear) models with only a few parameters, which can be used to elucidate basic principles (Heess and Bair, 2010), as well as large-scale population models that implement physiologically plausible networks (Baker and Bair, 2012). Thus, conceptually minimal as well as physiologically elaborate models can be studied fruitfully within the framework. Within WM, users can vary model parameters, select and design stimuli, choose which responses to record, and run the model in parallel across a cluster of CPUs. The software is written in C with MPI for computational speed, and in Java for platform independence and web-based interaction.
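To make the LN case concrete, the sketch below implements a toy LN stage in C: a causal temporal filter (the linear step), half-wave rectification scaled to a firing rate (the nonlinear step), and Poisson spike generation. The stimulus, kernel shape, and all parameter values are illustrative assumptions, not taken from the WM code.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NT    1000    /* number of time steps            */
#define NK    32      /* temporal filter length (taps)   */
#define DT_MS 1.0     /* time step, ms                   */

int main(void)
{
    const double pi = 3.14159265358979323846;
    double stim[NT], kern[NK];
    int spikes[NT], nspikes = 0;

    /* Illustrative stimulus: sinusoidal luminance modulation over time. */
    for (int t = 0; t < NT; t++)
        stim[t] = sin(2.0 * pi * 0.02 * t);

    /* Illustrative biphasic temporal kernel. */
    for (int k = 0; k < NK; k++)
        kern[k] = exp(-k / 5.0) - 0.6 * exp(-k / 10.0);

    srand(1);
    for (int t = 0; t < NT; t++) {
        /* L: causal convolution of the stimulus with the kernel. */
        double v = 0.0;
        for (int k = 0; k < NK && k <= t; k++)
            v += kern[k] * stim[t - k];

        /* N: half-wave rectification scaled to a firing rate (spikes/s). */
        double rate = 100.0 * (v > 0.0 ? v : 0.0);

        /* Poisson spike generation within one time step. */
        double p = rate * DT_MS / 1000.0;
        spikes[t] = ((double)rand() / RAND_MAX) < p ? 1 : 0;
        nspikes += spikes[t];
    }

    printf("total spikes in %.0f ms: %d\n", NT * DT_MS, nspikes);
    return 0;
}

Compiled and run (e.g., cc ln_sketch.c -lm; the file name is arbitrary), this prints the spike count for one trial, the same kind of spike-train output the framework is designed to expose.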
The building blocks of the population models are physiological cell classes connected by realistic synapses. The units in the model, from retinal ganglion cells (RGCs) onward, generate spikes and can be switched between Poisson spike generation and conductance-driven integrate-and-fire dynamics, so model outputs include spike trains, membrane voltages, and synaptic conductances. Cells in the LGN and cortex are organized topographically by receptive field position, and maps for attributes such as orientation, spatial frequency (SF), and ocular dominance are easily defined. Model neurons can be connected synaptically not only probabilistically between populations or over spatial extent, but also on the basis of these mapped visual attributes.
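The switch between Poisson units and conductance-driven units amounts to replacing a rate-driven spike generator with a membrane equation driven by excitatory and inhibitory conductances. The sketch below shows one common form of such a conductance-driven integrate-and-fire update in C; the parameter values and the regular synaptic input are placeholders chosen for illustration, not values from the WM population models.

#include <stdio.h>

int main(void)
{
    const double dt      =   0.1;   /* time step, ms                       */
    const double tau_m   =  20.0;   /* membrane time constant, ms          */
    const double v_rest  = -70.0;   /* resting / leak reversal, mV         */
    const double v_thr   = -54.0;   /* spike threshold, mV                 */
    const double v_reset = -70.0;   /* post-spike reset, mV                */
    const double e_exc   =   0.0;   /* excitatory reversal, mV             */
    const double e_inh   = -80.0;   /* inhibitory reversal, mV             */
    const double tau_e   =   5.0;   /* excitatory conductance decay, ms    */
    const double tau_i   =  10.0;   /* inhibitory conductance decay, ms    */

    double v = v_rest, g_e = 0.0, g_i = 0.0;   /* conductances in units of the leak */
    int nspikes = 0;

    for (int step = 0; step < 10000; step++) {   /* 1 s of simulated time */
        /* Illustrative synaptic input: a regular barrage of excitation with
           weaker inhibition, standing in for spikes from presynaptic units. */
        if (step % 20 == 0) g_e += 0.20;
        if (step % 50 == 0) g_i += 0.10;

        /* Exponential decay of the synaptic conductances. */
        g_e -= dt * g_e / tau_e;
        g_i -= dt * g_i / tau_i;

        /* Conductance-driven membrane equation (leak + E + I currents). */
        double dv = (-(v - v_rest) + g_e * (e_exc - v) + g_i * (e_inh - v)) / tau_m;
        v += dt * dv;

        /* Threshold crossing produces a spike and resets the voltage. */
        if (v >= v_thr) {
            nspikes++;
            v = v_reset;
        }
    }
    printf("spikes in 1 s: %d\n", nspikes);
    return 0;
}

Replacing the threshold-and-reset step with a Poisson draw like the one in the previous sketch is one way to picture the Poisson versus integrate-and-fire switch described above.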
We have built an online portal to the framework at http://www.iModel.org. It gives investigators the ability to explore model architectures online, view visual stimuli, select models, stimuli, and responses to use in simulations, run those simulations on our cluster, and display and analyze simulation output. All of this functionality is freely available and requires only a web browser and an internet connection. We will present examples of models built within this framework, along with prototypes of tools for online access and simulation in the cloud.
References
Baker PM, Bair W (2012) Inter-neuronal correlation distinguishes mechanisms of direction selectivity in cortical circuit models. J Neurosci 32:8800–8816.
Heess N, Bair W (2010) Direction opponency, not quadrature, is key to the 1/4 cycle preference for apparent motion in the motion energy model. J Neurosci 30:11300–11304.
Keywords: visual system, population models, data analysis tools, simulation software, online portal, cortical network models
Conference: Neuroinformatics 2013, Stockholm, Sweden, 27 Aug – 29 Aug, 2013.
Presentation Type: Demo
Topic: Computational neuroscience
Citation: Baker PM and Bair W (2013). WM: an integrated framework for modeling the visual system. Front. Neuroinform. Conference Abstract: Neuroinformatics 2013. doi: 10.3389/conf.fninf.2013.09.00084
Copyright:
The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers.
They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.
The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.
Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.
For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.
Received: 08 Apr 2013; Published Online: 11 Jul 2013.
* Correspondence: Dr. Pamela M. Baker, University of Washington, Department of Biological Structure, Seattle, WA, 98112, United States, pmbaker@uw.edu