Event Abstract

A testsuite for a neural simulation engine

  • 1 Honda Research Institute Europe GmbH, Germany
  • 2 Norwegian University of Life Sciences, Norway
  • 3 RIKEN Brain Science Institute, Japan

Testing is a standard activity of the software development process [1,2]. However, little research has been carried out on the specific problems of testing neuronal simulation engines. We found that with the growing complexity of the software and the growing number of developers, formalized and systematic testing becomes critical: changes made to the code for a new feature are more likely to break existing functionality, and no single researcher has a full overview of the code. The rapid growth of neuroscientific knowledge and changing research directions require an incremental/iterative development style [3] which extends over the full lifetime of the product. Thus, there is no single testing phase; the same tests need to be carried out repeatedly over a time span of many years. Furthermore, there is usually no rigorous way to determine whether the result of a simulation is correct. Instead, the scientist's confidence rests on comparing the results of different implementations and on verifying the correctness of critical components of the system.

Here we present the architecture of the testing framework we have developed for NEST [4]. The fully automated testsuite consists of a collection of test scripts which are executed sequentially by a concise shell script. While high expressiveness of the test code is a goal, sometimes more primitive code is used so that a test does not depend on the correctness of a further component. NEST's built-in simulation language (SLI) is the lowest level at which tests are formulated. The C++ classes of the simulation kernel contain no dedicated test code but are equipped with checks of invariants (assertions).

In designing the tests we found the following principles effective:

  1. Test the ability to report errors (self-test).
  2. Organize tests hierarchically from simple to complex.
  3. Test that objects have the expected default values and accept parameter changes.
  4. Compare simulation results with analytical results for simple scenarios.
  5. Check the correctness of results with simpler algorithms.
  6. Test the convergence of results with decreasing simulation time step.
  7. Check for expected accuracy.
  8. Test the invariance of results with increasing numbers of processors.
  9. Create regression tests for fixed problems.

It is essential to combine these principles. For example, a simulation may converge with decreasing time step, but to the wrong result. Convergence to the correct result still does not guarantee correctness of the implementation: if a signal delay is offset by one time step, the simulation may converge, but with unexpectedly large errors at a given resolution.

Acknowledgements: Next-Generation Supercomputer Project of MEXT, EU Grant 15879 (FACETS), BMBF Grant 01GQ0420, Helmholtz Alliance on Systems Biology


1. Sommerville I (2007) Software Engineering (8th edn), Harlow, Addison-Wesley

2. Beck K & Andres C (2004) Extreme Programming Explained (2nd edn), Boston, Addison-Wesley

3. Diesmann M & Gewaltig M-O (2002) GWDG Bericht, 58:43-70

4. Gewaltig M-O & Diesmann M (2007) Scholarpedia, 2(4):1430

Conference: Neuroinformatics 2009, Pilsen, Czechia, 6 Sep - 8 Sep, 2009.

Presentation Type: Poster Presentation

Topic: Large scale modeling

Citation: Eppler JM, Kupper R, Plesser HE and Diesmann M (2009). A testsuite for a neural simulation engine. Front. Neur. Conference Abstract: Neuroinformatics 2009. doi: 10.3389/conf.neuro.11.2009.08.042

Received: 22 May 2009; Published Online: 22 May 2009.

* Correspondence: Jochen M Eppler, Honda Research Institute Europe GmbH, Offenbach, Germany, j.eppler@fz-juelich.de
