Event Abstract

Integration of visuo-motor network models by MUSIC

  • 1 Kyoto University, OIST, Japan
  • 2 Kyoto University, OIST, NAIST, RIKEN, Japan
  • 3 Kyoto University, NAIST, Japan
  • 4 OIST, NAIST, Japan
  • 5 NAIST, Japan

1. Introduction
A challenging goal in computational neuroscience is to realize a "whole-brain" simulation linking sensory inputs to motor outputs. The visual system is a good starting point for this goal, because the brain areas involved have been well studied and models of them have been proposed. Here we aim to connect such spiking-neuron models and close the visual sensorimotor loop.
In connecting multiple models, a common difficulty is communicating events among heterogeneous processes, since the models are built by independent groups and implemented on different computer platforms. MUSIC [1], the Multi-Simulation Coordinator, was recently developed to remedy this difficulty. In this study, we implement the visual-oculomotor system by using MUSIC to connect models of the superior colliculi, the brainstem integrators, and the eyeballs on the left and right sides.
2. Method
MUSIC
MUSIC [1] is an API that allows large-scale neuronal network simulators using MPI internally to exchange data at runtime. It provides mechanisms to transfer massive amounts of event information and continuous values from one parallel application to another. Developed as a C++ library, it is available on a wide range of UNIX systems as well as on Blue Gene/L.
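For illustration, the following is a minimal sketch of a C++ module that receives spike events through a MUSIC port, modeled on the usage described in [1]. The port name, index range, latency, and duration are illustrative assumptions, and exact signatures may differ between MUSIC versions.

    #include <mpi.h>
    #include <music.hh>

    // Called by MUSIC for each spike event delivered to the input port.
    class SpikeHandler : public MUSIC::EventHandlerGlobalIndex {
    public:
      void operator() (double t, MUSIC::GlobalIndex id)
      {
        // Process the spike of neuron `id` occurring at time `t` (seconds).
      }
    };

    int main (int argc, char* argv[])
    {
      MUSIC::Setup* setup = new MUSIC::Setup (argc, argv);

      // Publish an event input port named "in" (name chosen for this sketch).
      MUSIC::EventInputPort* in = setup->publishEventInput ("in");

      SpikeHandler handler;
      MUSIC::LinearIndex indices (0, 100);  // this process owns neuron ids 0-99
      in->map (&indices, &handler, 0.001);  // accept up to 1 ms latency

      // The Runtime takes over the Setup; tick() exchanges data with peers.
      MUSIC::Runtime* runtime = new MUSIC::Runtime (setup, 0.001);
      while (runtime->time () < 0.7)        // 700 ms of simulated time
        runtime->tick ();
      runtime->finalize ();
      delete runtime;
      return 0;
    }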
Overall architecture
Fig. 1 shows a schematic diagram of our current implementation, whose aim is to reproduce the low-level saccadic sensorimotor loop. The visual inputs, which encode the target position, are generated by an application module implemented in C++ or Python. These data are delivered to the superior colliculus (SC) model introduced by Morén et al. [2] and transformed into activation patterns of inhibitory and excitatory burst neurons that encode the planned saccade vector. The neural circuit in the SC model consists of leaky integrate-and-fire neurons and is implemented in NEST [3]. The neural activities in the SC are then sent to the brainstem integrator network model [4], also implemented in NEST, which creates the signal for holding the eyes at a fixed position. The output is finally transformed into eye position by the oculomotor plant models implemented in C++. All communication among the modules is handled by MUSIC.
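At launch time, the modules are wired together by a MUSIC configuration file and started as a single MPI job. The sketch below shows how the four modules of Fig. 1 could be declared and connected for one side; the binary names, process counts, and port widths are hypothetical.

    [visual]
      binary=visual_input
      np=1
    [sc]
      binary=nest_sc
      np=4
    [integrator]
      binary=nest_integrator
      np=4
    [plant]
      binary=oculomotor_plant
      np=1

    visual.out -> sc.in [1000]
    sc.out -> integrator.in [2000]
    integrator.out -> plant.in [2]

Launching this description, e.g. with mpirun -np 10 music saccade.music, starts all four binaries and establishes the connections before the simulation begins.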
3. Results
The entire model consisted of about 40,000 spiking neurons. The model reproduced the standard behavior of bursts in SC neurons and the persistent activity of the integrator neurons. A 700 ms simulation of the network ran within 9 minutes on a desktop machine running a Linux OS. The projection to the oculomotor plant through MUSIC also worked as expected.
4. Conclusion
The communication between the several processes was handled reliably by MUSIC, and we observed no overhead attributable to the use of the MUSIC API. We next need to examine how the current architecture scales to richer simulations, and will then test its feasibility for large-scale simulation on RIKEN's Next-Generation Supercomputer.

References

1. M. Djurfeldt, J. Hjorth, J. M. Eppler, N. Dudani, M. Helias, T. C. Potjans, U. S. Bhalla, M. Diesmann, J. Hellgren Kotaleski and Ö. Ekeberg (2010). Run-Time Interoperability Between Neuronal Network Simulators Based on the MUSIC Framework. Neuroinformatics 8(1):43-60. doi:10.1007/s12021-010-9064-z

2. J. Morén, T. Shibata and K. Doya (2010). Toward a Spiking-Neuron Model of the Oculomotor System. In From Animals to Animats: The 11th International Conference on Simulation of Adaptive Behavior.

3. M.-O. Gewaltig and M. Diesmann (2007). NEST (Neural Simulation Tool). Scholarpedia 2(4):1430.

4. H. S. Seung et al. (2000). Stability of the Memory of Eye Position in a Recurrent Network of Conductance-Based Model Neurons. Neuron 26(1):259-271.

Conference: Neuroinformatics 2010, Kobe, Japan, 30 Aug - 1 Sep, 2010.

Presentation Type: Poster Presentation

Topic: Computational Neuroscience

Citation: Cassagnes A, Doya K, Moren J, Yoshimoto J and Shibata T (2010). Integration of visuo-motor network models by MUSIC. Front. Neurosci. Conference Abstract: Neuroinformatics 2010. doi: 10.3389/conf.fnins.2010.13.00070


Received: 11 Jun 2010; Published Online: 11 Jun 2010.

* Correspondence: Aurelien Cassagnes, Kyoto University, OIST, Okinawa, Japan, aurelien@oist.jp