Event Abstract

Developing software tools for parameter fitting and validation of neuronal models

  • 1 Pázmány Péter Catholic University, Faculty of Information Technology and Bionics, Hungary
  • 2 Hungarian Academy of Sciences, Institute of Experimental Medicine, Hungary
  • 3 École Polytechnique Fédérale de Lausanne, Switzerland

Anatomically and biophysically detailed conductance-based neuronal models can be useful tools for understanding the behavior and function of neurons. Although more and more experimental data are available to constrain the many parameters of multi-compartmental models, there generally remain several parameters whose values have not been determined experimentally. The values of these unknown parameters are often set using manual, ad hoc procedures, with the aim of reproducing the behavior of the cell in one or a few specific paradigms. However, the performance of such a model outside its original context typically remains unexplored, and systematic comparisons of different models are difficult and therefore rare, which limits the reusability of these models. Recently, several solutions have been developed for the systematic optimization of neuronal parameters based on the quantitative evaluation of model performance, but customizing these tools to individual needs can be a substantial challenge. To overcome these problems, we are developing software tools for automatic model validation and for the automated, intuitive fitting of unknown model parameters.

For automatic and quantitative model validation we are developing a Python test suite, called HippoUnit, which is based on NeuronUnit, a SciUnit repository for testing neuronal models (Gerkin and Omar, 2013). HippoUnit automatically performs simulations that mimic experimental protocols on detailed hippocampal CA1 pyramidal cell models built in the NEURON simulator. To test a model, the user needs to create a Python class for the model, including its intrinsic mechanisms; receptor models can also be added for synaptic stimulation of the model cell. The tests of HippoUnit use feature-based error functions to compare the output of the model to the results of experimental measurements on several different cells. Errors are typically measured as the difference from the mean of the experimental data, expressed in units of the experimental standard deviation (Druckmann et al., 2007). The final output of a test is an error score, which is the sum of the errors of all the features evaluated by the given test, together with a number of figures that illustrate the model's behavior and the extracted feature values. Besides HippoUnit's own functions, the Electrophys Feature Extraction Library (eFEL) of the Blue Brain Project is used for feature extraction.

So far, three different tests have been implemented in HippoUnit. The Somatic Feature Test uses the somatic spiking features of eFEL, which we also use for parameter fitting; target values for these features were extracted from recordings of rat CA1 pyramidal neurons performed in the laboratory of Alex Thomson. The Depolarization Block Test determines whether the model enters depolarization block in response to prolonged, large-amplitude current injections into the soma, using experimental data from Bianchi et al. (2012). Finally, the Oblique Integration Test probes the integration properties of oblique dendrites according to the experimental results of Losonczy and Magee (2006). Using HippoUnit, we have compared the behavior of several CA1 pyramidal cell models in these domains, and found that each of these models performs well in some domains (typically on the features it was built to capture) but poorly in others.
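As a simple illustration of the feature-based error scores computed by these tests, consider the following sketch in plain Python (not HippoUnit's actual API; the feature names and values are hypothetical):

    # Illustrative sketch only (not HippoUnit's actual API): each feature error is
    # the distance of the model's feature value from the experimental mean,
    # expressed in units of the experimental standard deviation; the final error
    # score of a test is the sum of these errors over all features it evaluates.

    def feature_error(model_value, exp_mean, exp_sd):
        """Error of a single feature, in units of the experimental SD."""
        return abs(model_value - exp_mean) / exp_sd

    def error_score(model_features, experimental_stats):
        """Sum of the feature errors of a test.

        model_features:     {feature name: value measured on the model}
        experimental_stats: {feature name: (experimental mean, experimental SD)}
        """
        return sum(
            feature_error(value, *experimental_stats[name])
            for name, value in model_features.items()
        )

    # Hypothetical values for two somatic spiking features
    model_features = {"mean_frequency": 18.0, "AP_amplitude": 76.0}
    experimental_stats = {"mean_frequency": (15.0, 4.0), "AP_amplitude": (80.0, 5.0)}
    print(error_score(model_features, experimental_stats))  # 0.75 + 0.80 = 1.55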
We also present the improvements made to the Optimizer software tool since its initial release (the version described in Friedrich et al., 2014). Optimizer is a general-purpose tool for fitting the parameters of neuronal models. It offers a graphical user interface (GUI) that allows non-expert users to perform optimization in several commonly used scenarios. It implements several different optimization algorithms and a number of fitness functions, which can also be combined. Optimizer has a modular structure that makes it easy to extend by adding new optimization algorithms and/or fitness functions. Recently added optimization methods include random search as well as the differential evolution and particle swarm algorithms from the inspyred package. The SciPy implementation of simulated annealing, which has been deprecated, has been replaced by the basinhopping algorithm (also from SciPy). The software can now handle a combination of voltage traces and corresponding explicit spike times, which is important for the correct optimization of integrate-and-fire models. Several bugs have been fixed; most importantly, when selecting parameters for optimization in the GUI, only actual parameters (and not state variables) of NEURON models are displayed. In addition, the software can now handle abstract data that have already been extracted from traces, and can use fitness functions that evaluate a set of traces (the model's responses to different stimuli) rather than a single trace. These capabilities have been used to fit the behavior of a CA1 pyramidal cell model to the somatic spiking features and the depolarization block features described above.

The performance of several optimization algorithms in Optimizer has been systematically compared on a set of benchmark problems, including the optimization of both conductance-based and integrate-and-fire models. We concluded that the classic evolutionary algorithm included in Optimizer was effective on all types of problems, whereas the basinhopping algorithm performed well for conductance-based models (with continuously varying features) but not for integrate-and-fire models with discrete feature values.

Automated tools for model fitting and validation should enable a more principled and systematic approach to model building and validation. Together with efforts on other important components, such as standardized model representation, such tools should make possible the reproducible construction, validation, and comparison of detailed neural models, and encourage collaborative research in computational neuroscience.
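As an illustration of the combined, multi-trace fitness functions and the basinhopping-based optimization described above, a minimal, self-contained sketch follows (the toy model and all parameter values are hypothetical; this is not Optimizer's actual interface):

    # Illustrative sketch only (not Optimizer's actual interface): a combined
    # fitness function, computed as a weighted sum of per-trace mean-squared
    # errors over the model's responses to several stimuli, is minimized with
    # SciPy's basinhopping algorithm.

    import numpy as np
    from scipy.optimize import basinhopping

    def simulate(params, stimulus, t):
        """Hypothetical stand-in for simulating the neuron model for one stimulus."""
        tau, gain = params
        return gain * stimulus * (1.0 - np.exp(-t / tau))

    def combined_fitness(params, stimuli, targets, t, weights):
        """Weighted sum of mean-squared errors across all stimulus/response pairs."""
        return sum(
            w * np.mean((simulate(params, s, t) - target) ** 2)
            for s, target, w in zip(stimuli, targets, weights)
        )

    t = np.linspace(0.0, 100.0, 200)          # time points (ms)
    stimuli = [0.1, 0.2]                      # two current amplitudes (nA)
    targets = [simulate((20.0, 50.0), s, t) for s in stimuli]  # synthetic "data"
    weights = [1.0, 1.0]

    result = basinhopping(
        lambda p: combined_fitness(p, stimuli, targets, t, weights),
        x0=[10.0, 10.0], niter=50,
        minimizer_kwargs={"method": "L-BFGS-B",
                          "bounds": [(0.1, 100.0), (0.1, 100.0)]},
    )
    print(result.x)  # should approach the parameters used to generate the targets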

Acknowledgements

This work was supported by the Hungarian Scientific Research Fund (OTKA K115441), ERC-2011-ADG-294313 (SERRACO), and the EU FP7 Grant 604102 (Human Brain Project).

References

Gerkin, R.C. and Omar, C. (2013). NeuroUnit: Validation Tests for Neuroscience Models. Front. Neuroinform. Conference Abstract: Neuroinformatics 2013. doi:10.3389/conf.fninf.2013.09.00013

Druckmann, S., Banitt, Y., Gidon, A., Schürmann, F., Markram, H., and Segev, I. (2007). A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front. Neurosci. 1:1, 7–18. doi: 10.3389/neuro.01.1.1.001.2007

Van Geit, W., Moor, R., Ranjan, R., Riquelme, L., and Rössert, C. (2016). BlueBrain/eFEL [online]. GitHub. Available at: https://github.com/BlueBrain/eFEL [Accessed 23 Mar. 2016].

Bianchi, D., Marasco, A., Limongiello, A., Marchetti, C., Marie, H., Tirozzi, B., and Migliore, M. (2012). On the mechanisms underlying the depolarization block in the spiking dynamics of CA1 pyramidal neurons. J. Comput. Neurosci. 33:2, 207–225.

Losonczy, A. and Magee, J. C. (2006). Integrative Properties of Radial Oblique Dendrites in Hippocampal CA1 Pyramidal Neurons. Neuron 50:2, 291–307.

Friedrich, P., Vella, M., Gulyás, A. I., Freund, T. F., and Káli, S. (2014). A flexible, interactive software tool for fitting the parameters of neuronal models. Front. Neuroinform. 8: 63.

Keywords: parameter fitting, model validation, CA1 pyramidal cell, neuronal models, dendritic integration, depolarization block, python, compartmental models

Conference: Neuroinformatics 2016, Reading, United Kingdom, 3 Sep - 4 Sep, 2016.

Presentation Type: Poster

Topic: Computational neuroscience

Citation: Sáray S, Friedrich P, Rössert CA, Bagi B, Kacz L, Török MP, Muller EB, Freund TF and Káli S (2016). Developing software tools for parameter fitting and validation of neuronal models. Front. Neuroinform. Conference Abstract: Neuroinformatics 2016. doi: 10.3389/conf.fninf.2016.20.00076


Received: 31 May 2016; Published Online: 18 Jul 2016.

* Correspondence: Ms. Sára Sáray, Pázmány Péter Catholic University, Faculty of Information Technology and Bionics, Budapest, 1083, Hungary, saraysari@gmail.com