A Spiking Neural Network Builder for Systematic Data-to-Model Workflow

In building biological neural network models, it is crucial to efficiently convert diverse anatomical and physiological data into parameters of neurons and synapses and to systematically estimate unknown parameters in reference to experimental observations. Web-based tools for systematic model building can improve the transparency and reproducibility of computational models and can facilitate collaborative model building, validation, and evolution. Here, we present a framework to support collaborative data-driven development of spiking neural network (SNN) models based on the Entity-Relationship (ER) data description commonly used in large-scale business software development. We organize all data attributes, including species, brain regions, neuron types, projections, neuron models, and references, as tables and relations within a database management system (DBMS) and provide GUI interfaces for data registration and visualization. This allows a robust "business-oriented" data representation that supports collaborative model building and traceability of source information for every detail of a model. We tested this data-to-model framework in cortical and striatal network models by successfully combining data from papers with existing neuron and synapse models and by generating NEST simulation code for various network sizes. Our framework also helps to check data integrity and consistency and to compare data across species. The framework enables the modeling of any region of the brain and is being deployed to support the integration of anatomical and physiological datasets from the Brain/MINDS project for systematic SNN modeling of the marmoset brain.


Neuron in-degree:
The number of neurons from a source population providing inputs to a single neuron in a target population is estimated as the number of input synapses $\nu_{x,y}$, also known as the in-degree of the target neuron:

$$\nu_{x,y} = \frac{N_Y \, P_{Y \to X} \, \alpha_{y \to x}}{N_X}$$

where $Y$ and $X$ are the source and target populations respectively, $N_Y$ and $N_X$ the corresponding numbers of neurons, $P_{Y \to X}$ the proportion of neurons in $Y$ projecting to $X$, and $\alpha_{y \to x}$ the average number of synapses each neuron of $Y$ makes in $X$. The in-degree is used for connection rules of the type fixed in-degree (Hahne et al. (2021)).
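The in-degree estimate above can be sketched in a few lines of Python. This is an illustrative helper, not SNNbuilder's actual code; the example population sizes are hypothetical.

```python
# A minimal sketch of the in-degree estimate
# nu = N_Y * P_{Y->X} * alpha_{y->x} / N_X described above.

def in_degree(n_source, n_target, p_projecting, synapses_per_neuron):
    """Average number of input synapses per target neuron.

    n_source            -- N_Y, neurons in the source population
    n_target            -- N_X, neurons in the target population
    p_projecting        -- P_{Y->X}, fraction of Y projecting to X
    synapses_per_neuron -- alpha_{y->x}, synapses each projecting neuron makes in X
    """
    return n_source * p_projecting * synapses_per_neuron / n_target

# Example: 10,000 source neurons, 40% of them projecting, 200 synapses each,
# targeting a population of 8,000 neurons.
print(in_degree(10_000, 8_000, 0.4, 200))  # -> 100.0
```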

Neuron out-degree:
$\alpha_{y \to x}$ is approximated by synaptic bouton counts, frequently found in single-axon tracing studies. Besides being used to define the in-degree described above, $\alpha_{y \to x}$ is also used stand-alone for connection rules of the type fixed out-degree.

Redundancy:
Considering that a source-target neuron pair may have multiple synaptic contacts, an average redundancy $\rho_{Y \to X}$ is considered (Girard et al. (2020)). Redundancy is a number between 1 (each synapse comes from a different neuron) and $\nu_{x,y}$ (a single neuron provides all synapses), and it is used to adjust the in-degree: $\nu_{x,y} / \rho_{Y \to X}$.
Connection rules of the type "in-degree" draw $\nu_{x,y} / \rho_{Y \to X}$ neurons of $Y$ within a pre-defined spatial mask (e.g. a circular mask) and connect them to each neuron of $X$. There is a trade-off between the neural population size (number of neurons), the size of the spatial mask, and the redundancy value. The connection mask should enclose enough neurons to achieve the in-degree required by the connection rule; if not, a higher redundancy adjusts the in-degree to the number of neurons available within the mask. The same applies to out-degree-based connectivity.
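The redundancy adjustment and the mask trade-off can be sketched as follows. The helper name and the capping behavior are an assumed reading of the text (the adjusted in-degree cannot exceed the neurons available inside the mask), not SNNbuilder's actual implementation.

```python
# A minimal sketch: adjust the in-degree by redundancy (nu / rho) and
# limit it by the number of candidate source neurons inside the mask.

def effective_in_degree(nu, rho, n_in_mask):
    """In-degree after redundancy adjustment, capped by the number of
    source neurons available within the spatial mask."""
    adjusted = nu / rho
    return min(adjusted, n_in_mask)

# 100 synapses per target neuron, redundancy 2 -> draw 50 distinct sources;
# if only 40 neurons fall inside the mask, only 40 can be drawn.
print(effective_in_degree(100, 2.0, 80))  # -> 50.0
print(effective_in_degree(100, 2.0, 40))  # -> 40
```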
Adjustment by redundancy is automatic if $\rho_{Y \to X}$ is specified (Fig. 9.23). By default, redundancy is 1 and no adjustment occurs. Testing several redundancy values is possible; however, each value must be changed or activated/deactivated through the GUI, and the simulation code re-generated.
Redundancy is also useful to lighten computation when scaling to larger models, approximately conserving the model's properties (see Connection weight below), although limitations on the neural dynamics may occur (van Albada et al. (2015)). Higher redundancy creates fewer synapses and adjusts the synaptic weight to maintain a similar network activity.

Dendritic attenuation:
Dendritic attenuation $\gamma_{x,y}$ is considered as a function of the mean diameter $d_x$ and the average maximal extent $l_x$ of the dendrite, as well as the mean distance $r_x$ at which synaptic contacts are made from the soma, expressed as a percentage of $l_x$. Attenuation is computed as:

$$\gamma_{x,y} = \exp\!\left(-\frac{(r_x/100)\, l_x}{L_x}\right), \qquad L_x = \sqrt{\frac{R_m}{R_i} \cdot \frac{d_x}{4}}$$

with $L_x$ the dendrite-based electrotonic constant (Koch (2004)), and $R_i$ and $R_m$ the intracellular and membrane resistances respectively.
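The cable-theory-style attenuation can be sketched numerically. All parameter values below are hypothetical, and the formulas follow the standard electrotonic-constant form assumed above, not necessarily SNNbuilder's exact implementation.

```python
import math

# A minimal sketch of dendritic attenuation: L_x is the electrotonic
# constant derived from the membrane/intracellular resistances and the
# dendrite diameter, and the PSP decays exponentially with the
# synapse-to-soma distance.

def electrotonic_constant(d_x, r_m, r_i):
    """L_x = sqrt((R_m / R_i) * d_x / 4); units must be consistent (cm)."""
    return math.sqrt((r_m / r_i) * d_x / 4.0)

def attenuation(r_pct, l_x, L_x):
    """exp(-(r/100) * l_x / L_x), with r_pct the synapse-to-soma distance
    as a percentage of the dendritic extent l_x."""
    return math.exp(-(r_pct / 100.0) * l_x / L_x)

# Hypothetical values: 2 um diameter, R_m = 10 kOhm*cm^2, R_i = 100 Ohm*cm,
# dendritic extent 0.03 cm, synapses at 50% of the extent.
L = electrotonic_constant(2e-4, 1e4, 100.0)   # ~0.071 cm
print(attenuation(50.0, 0.03, L))             # ~0.81 (19% loss at the soma)
```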

Connection weight:
The post-synaptic potential change caused by a single spike at the location of the synapse, $V_n$ (mV), is attenuated by $\gamma_{x,y}$ on its way to the soma, and adjusted by the redundancy $\rho_{Y \to X}$. The connection weight $w_n$ is then computed as:

$$w_n = \rho_{Y \to X} \, \gamma_{x,y} \, V_n$$

Besides PSP-based connection weights, an alternative approach for defining connection weights is available at the synapse-level parameters (synapse tab) or within the NEST synapse model (other parameters tab, Fig. 9.18). The connection weight is not attenuated when "None" is assigned to the parameter "Location to Soma" in the "Receptor & Location to soma" tab (Fig. 9.18) in Projections.
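The PSP-based weight can be sketched as below. The multiplicative form $\rho \cdot \gamma \cdot V_n$ is an assumed reading of the equation above (attenuation scales the PSP toward the soma, redundancy stands in for the merged redundant synapses); the numbers are hypothetical.

```python
# A minimal sketch of the PSP-based connection weight: the synaptic PSP
# V_n is scaled by the dendritic attenuation gamma and by the redundancy
# rho, which accounts for the redundant synapses merged into one connection.

def connection_weight(v_n_mv, gamma, rho):
    """w_n = rho * gamma * V_n (mV at the soma per connection)."""
    return rho * gamma * v_n_mv

# A 0.5 mV PSP at the synapse, attenuated to 80% at the soma,
# with redundancy 2 merging two synaptic contacts per connection.
print(connection_weight(0.5, 0.8, 2.0))  # -> 0.8
```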
PSP rise time:
Depending on the receptor type, the PSP rise time $t_{V_n}$ is used as the tau_syn parameter of the synaptic alpha function included in several NEST neuron models.

Connection parameters:
Connection rules of the type "in-degree" use $\nu_{x,y}$ as the number of incoming connections to a single target neuron, whereas rules of the type "out-degree" adopt $\alpha_{y \to x}$ as the number of outgoing connections from a single source neuron. Rules of the type "constant probability" and "distance-dependent Gaussian probability" use p (probability) and std (standard deviation) respectively, which are parameters specified by GUI. SNNbuilder implements topological connection rules for NEST 3. The spatial mask, which defines the subset of neurons considered as potential targets (or sources) for each source (target) neuron, is assumed to be the axonal spatial domain (focused or diffuse), circular or spherical (2D or 3D spatial organization), of configurable radius. Other parameters, such as the connection strength $w_n$ and the axonal delay, are mapped to connection rules as well.

Figure B.1. SNNbuilder incorporates a function to import connectomic data (in JSON format) from a remote URL, avoiding the manual specification of neurons and projections. The non-enclosed area corresponds to the current SNNbuilder release. Future work (dotted squares) targets application-to-application integration in real time and cross-collaboration; for example, connectomic-data retrieval by a web service at Brain/MINDS, data exchange as provider/consumer of web services to/from other projects, machine-learning services, knowledge graphs, and others.

Figure B.2. Several values surveyed for the dendritic extent of a neuron type serve as the "collective" knowledge, computed as the mean value (blue squares). References are recorded as well, linking the original sources by DOIs (red square). Notes, memos, or reminders support model documentation (red square, top side).
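The mapping of the derived quantities onto NEST 3 connection rules can be sketched as below. The population sizes, weight, and delay are hypothetical; SNNbuilder generates the actual call from the database.

```python
# A minimal sketch of how the parameters above map onto a NEST 3
# fixed-indegree connection call.

conn_spec = {
    "rule": "fixed_indegree",
    "indegree": 100,          # nu / rho, redundancy-adjusted in-degree
}
syn_spec = {
    "weight": 0.8,            # w_n (mV), PSP-based connection weight
    "delay": 1.5,             # axonal delay (ms)
}

# With NEST 3 installed, these dictionaries would connect two populations:
#   import nest
#   src = nest.Create("iaf_psc_alpha", 10_000)
#   tgt = nest.Create("iaf_psc_alpha", 8_000)
#   nest.Connect(src, tgt, conn_spec=conn_spec, syn_spec=syn_spec)
print(conn_spec["rule"])  # -> fixed_indegree
```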