Edited by: Mayank R. Mehta, University of California, Los Angeles, USA
Reviewed by: Jeffrey S. Taube, Dartmouth College, USA; Sen Song, Tsinghua University, USA
*Correspondence: Hector J. I. Page, Department of Experimental Psychology, Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, University of Oxford, Tinbergen Building, 9 South Parks Road, Oxford OX1 3UD, UK e-mail:
This article was submitted to the journal Frontiers in Computational Neuroscience.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Head direction cells fire to signal the direction in which an animal's head is pointing. They are able to track head direction using only internally derived information (path integration). In this simulation study we investigate the factors that affect path integration accuracy. Specifically, two major limiting factors are identified: rise time, the time it takes a neuron to start firing after stimulation, and the presence of symmetric non-offset within-layer recurrent collateral connectivity. On the basis of the latter, we make the important prediction that head direction cell regions directly involved in path integration will not contain this type of connectivity, giving a theoretical explanation for architectural observations. Increased neuronal rise time is found to slow path integration, and the slowing effect of a given rise time is more severe in the context of short conduction delays. Further work is suggested on the basis of our findings, which represent a valuable contribution to understanding of the head direction cell system.
Head direction (HD) cells respond to the animal's HD in the horizontal plane (Ranck,
Most path integration models are based on a continuous attractor neural network (CANN) layer of HD cells. External input shifts a packet of activity representing current HD through the HD layer (Skaggs et al.,
Path integration must happen at the correct speed for the system to accurately track true HD. However, the factors governing path integration speed have not been fully investigated. One theory of how time-accurate path integration is achieved (Walters et al.,
The model used in this paper is based on the path integration mechanism of Walters et al. (
During training, in the case of the self-organizing network, activity moves through the HD layer at constant velocity V. There is a delay Δ
During testing, pre-synaptic neurons influence post-synaptic neurons via recurrent collateral connections, which contain a conduction delay Δ
where |
Path integration during testing has only reached a maximum of 81% accuracy in previous work utilizing this mechanism (Walters and Stringer,
One potential source of inaccuracy comes from within-layer symmetrical recurrent collateral connectivity, which has been used in past CANNs to stabilize HD cell activity in the dark. In such models, the layer of HD cells receives two peaked weight profiles: an offset profile representing idiothetic input required for path integration, and a non-offset profile originating from the same layer to stabilize HD activity in the dark. However, non-offset within-layer connectivity will reduce the effect of any offset weight profile projecting into that layer. The resultant weight profile will be a combination of these offset and non-offset components, and thus a given pre-synaptic cell will project most strongly to a different post-synaptic cell than in the case of a fully asymmetrical weight profile. This will change the value of |
Another factor is the time neurons take to fire in response to input, known as rise time. Rise time means that even with purely asymmetrical connectivity, path integration will not be 100% accurate: the mechanism given above does not quite work, because post-synaptic cells will not begin firing instantaneously. There will instead be a short time lag, the rise time, between when they first receive input and when they begin firing. This rise time,
Rise time will act in the context of a given conduction delay: the accuracy of the observed packet speed is proportional to the ratio of the conduction delay to the sum of the conduction delay and the rise time. This relationship can be expressed as

v_observed = v_target × Δ / (Δ + rise time)
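As a minimal sketch (the function and argument names here are ours, not taken from the original), this proportionality can be computed directly:

```python
def observed_packet_speed(target_speed, conduction_delay, rise_time):
    """Packet speed predicted when the post-synaptic rise time adds to
    the axonal conduction delay; all names are illustrative."""
    return target_speed * conduction_delay / (conduction_delay + rise_time)

# With zero rise time the target speed is reproduced exactly:
print(observed_packet_speed(180.0, 0.01, 0.0))   # 180.0
# A rise time equal to the conduction delay halves the packet speed:
print(observed_packet_speed(180.0, 0.01, 0.01))  # 90.0
```

Note that for a fixed rise time, shrinking the conduction delay shrinks the ratio and so slows the packet further, matching the observation that short delays make a given rise time more damaging.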
The coefficient Δ
The neural network model in this paper utilizes a single layer of HD neurons, with recurrent connections that can contain both an offset component, representing idiothetic path integration input, and symmetric non-offset recurrent collaterals. The inclusion of non-offset recurrent collateral connectivity is shown to have a slowing effect on path integration speed, preventing accurate reproduction of a target velocity. The architectural prediction is made that HD areas of the rat brain involved in processing idiothetic signals for path integration will not contain within-layer recurrent collateral connectivity for this reason. Also demonstrated is the slowing effect of neuronal rise time, specifically in relation to axonal conduction delays. These findings represent a major contribution to understanding of time-accuracy in path integration and shed light on the architecture of the HD system.
Here we provide details of the model's governing equations and the simulation protocol.
The model used for this paper, based on a CANN, is pictured as a schematic in Figure
All cells influence one another via excitatory recurrent collateral connections. Recurrent weights are pre-wired with Gaussian profiles to have either an offset profile or a combination of an offset profile with an added non-offset component of varying strength. These are constructed in a Gaussian configuration: offset weights have an included term, such that each cell projects maximally not to itself but to an offset cell in the same layer. This offset acts to drive activity through the network at a fixed speed, intended to match a target speed,
Recurrent collateral connections contain a delay Δ
The activation level
where
The firing rate
Recurrent collateral weights from HD cells back onto the HD cell layer are pre-wired with mild variations depending on whether they are to be purely offset or a mixture of offset and non-offset components. Offset weights are initialized using the Gaussian function
where
which creates a wrap-around effect, with weights between HD cells as a population remaining continuous across the 360°/0° divide. Note that the pre-synaptic preferred directions are incremented by a fixed offset,
which effectively pre-wires connectivity with an offset that matches the amount by which a packet ought to have moved over the course of the conduction delay, given a particular target packet speed.
In some simulations, the weight profile is initialized with a combination of offset and non-offset components. In this case, the resulting weight profile is a simple addition of both offset weights, as calculated above, and non-offset weights. These non-offset recurrent collateral weights are pre-wired using the same method, with the change that pre-synaptic preferred firing directions are no longer incremented by an offset, such that the previous equation is now calculated as
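As an illustrative sketch, assuming a ring of evenly spaced preferred directions (the cell count, Gaussian width, offset, and mixing value below are placeholders, not the paper's parameters), the offset and non-offset Gaussian profiles with wrap-around could be wired as:

```python
import numpy as np

def recurrent_weights(n_cells=100, sigma_deg=20.0, offset_deg=0.0):
    """Gaussian recurrent profile over a ring of HD cells. Each
    pre-synaptic preferred direction is incremented by a fixed offset,
    so cell j projects maximally to the cell offset_deg ahead of it.
    The circular difference keeps weights continuous across 360/0 deg."""
    dirs = np.arange(n_cells) * 360.0 / n_cells                 # preferred directions
    diff = (dirs[None, :] + offset_deg - dirs[:, None]) % 360.0  # pre (cols) -> post (rows)
    diff = np.minimum(diff, 360.0 - diff)                        # wrap-around distance
    return np.exp(-diff**2 / (2.0 * sigma_deg**2))

# Combined profile: offset component plus a non-offset component whose
# relative strength is set by a mixing parameter (value illustrative):
lam = 0.5
W = recurrent_weights(offset_deg=7.2) + lam * recurrent_weights(offset_deg=0.0)
```

With `offset_deg=0.0` the profile is symmetric (each cell projects maximally to itself); any non-zero `lam` pulls the peak of the combined profile back toward the pre-synaptic cell, which is the antagonism between components discussed above.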
The relative strengths of the offset and non-offset weight components are modified by the parameter λ
In cases where an offset weight profile must be self-organized, synaptic weights are updated every timestep according to a local associative Hebbian learning rule
where
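A minimal sketch of such a delayed associative update (the learning-rate name `k` and values are illustrative): pre-synaptic firing from one conduction delay in the past is associated with current post-synaptic firing, so training at constant velocity imprints an offset matching the distance the packet travels during the delay.

```python
import numpy as np

def hebbian_update(W, post_rates, pre_rates_delayed, k=0.01):
    """One timestep of a local associative rule: each weight grows with
    the product of the current post-synaptic firing rate and the
    pre-synaptic firing rate from one conduction delay earlier."""
    return W + k * np.outer(post_rates, pre_rates_delayed)

# During training the packet moves at constant velocity, so the cell
# active one delay ago sits behind the currently active cell; the rule
# therefore strengthens exactly the offset connections.
W = np.zeros((5, 5))
W = hebbian_update(W, post_rates=[0, 0, 1, 0, 0],
                   pre_rates_delayed=[0, 1, 0, 0, 0], k=1.0)
```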
The differential equations given for this model cannot be solved analytically. Instead, they are implemented in the computer model by making discrete approximations of their solutions. A Forward Euler finite difference scheme is used to approximate all differential equations during simulation, and the value of the forward Euler timestep size used, δ
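A hedged sketch of the forward Euler scheme for a leaky-integrator neuron (the time constant and step size follow the orders of magnitude in the parameter tables; the sigmoid parameters are purely illustrative):

```python
import math

def euler_trace(inputs, tau=0.001, dt=0.0001):
    """Forward Euler integration of tau * dh/dt = -h + input(t).
    Returns the activation at every discrete timestep."""
    h, trace = 0.0, []
    for x in inputs:
        h += (dt / tau) * (-h + x)
        trace.append(h)
    return trace

def firing_rate(h, alpha=0.5, beta=10.0):
    """Illustrative sigmoid transfer function from activation to rate."""
    return 1.0 / (1.0 + math.exp(-2.0 * beta * (h - alpha)))

# Under a constant input the activation rises toward the input value;
# the time this takes is the neuronal rise time discussed above.
trace = euler_trace([1.0] * 200)
```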
Head direction cells | 500
| 500
ϕ1 | 200.0
σ | 10.0°
τ | 0.001 s
| 0.005
δ | 0.0001 s
λ | 10.0
σ | 20.0°
| 180.0°/s
Δ | 0.01 s
Head direction cells | 500
Training time | 298.5 s
Testing time | 2.0 s
| 500
ϕ1 | 60.0
τ | 0.001 s
| 0.01
| 50.0
δ | 0.0001 s
λ | 70.0
σ | 30.0°/s
| 180.0°/s
Δ | 0.01 s
| 0.01
At the beginning of each simulation, firing rates
If the network is not self-organizing, recurrent collateral connections are pre-wired as offset or a mixture of offset and non-offset, as detailed in Equations (3–6). Synaptic weights are then normalized as in Equation (7). An external input,
where λ
which creates a wrap-around effect, with response profiles of HD cells as a population remaining continuous across the 360°/0° divide.
This input acts solely to initialize an activity packet at location
In order to calculate the speed of motion of the HD layer activity packet, the center of mass (i.e., location) of HD layer firing rates is computed at each forward Euler timestep, using an established population vector scheme (Georgopoulos et al.,
where θ
However, because the ring of preferred directions wraps around, taking the mean of a set of locations spanning the 360°/0° mark is problematic. For example, the mean of three cells firing maximally at 310°, 330°, and 350° is correctly calculated by the above formula as 330°. However, the mean of three cells firing maximally at 350°, 10°, and 30° is incorrectly calculated as 130°, rather than the correct value of 10°. To account for this, the following corrected formula is used instead
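The corrected computation amounts to summing unit vectors and taking the direction of the resultant with the two-argument arctangent; a minimal sketch (the function name is ours):

```python
import math

def circular_mean_deg(angles_deg, weights=None):
    """Circular (population-vector) mean: sum unit vectors pointing at
    each angle, optionally weighted by firing rate, and return the
    direction of the resultant. Handles sets spanning the 360/0 mark."""
    if weights is None:
        weights = [1.0] * len(angles_deg)
    s = sum(w * math.sin(math.radians(a)) for a, w in zip(angles_deg, weights))
    c = sum(w * math.cos(math.radians(a)) for a, w in zip(angles_deg, weights))
    return math.degrees(math.atan2(s, c)) % 360.0

print(circular_mean_deg([310, 330, 350]))  # ≈ 330.0
print(circular_mean_deg([350, 10, 30]))    # ≈ 10.0; a naive mean gives 130
```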
We set the offset of the asymmetric weight component to a known value in pre-wired simulations. However, in cases where a non-offset component is added to the offset component, the new effective offset must be calculated. This is also true of self-organizing simulations, where the offset is unknown and develops as a result of training. In order to calculate the weight offset in the efferent recurrent weights for individual HD cells, a similar center of mass, referred to as the weight vector
where the sum is over pre-synaptic HD cells
Again, a corrected formula (Batschelet,
where
The new offset value is then calculated using the final projecting weight vector
which, similarly to activity packet speed calculations, accounts for the circular nature of the data.
Figure
Figure
Figure
We hypothesize that the time taken for pre-synaptic neurons to drive up a post-synaptic neuron will be too long by the amount that post-synaptic neuronal rise time adds to the axonal conduction delay. If this is the case, a constant rise time should be more or less severe in the context of varying Δ
Consistent with this hypothesis, we see that packet speed is slowest for small values of Δ
In this paper we identify, through simulation, two key hypothesized sources of error in path integration. Firstly, the antagonism between symmetrical (non-offset) recurrent collateral weights and asymmetrical (offset) weights. Secondly, neuronal rise time. Both of these factors are investigated in two variants of the model: one with pre-wired synaptic weights and one which undergoes training and accompanying self-organization of the offset weight profile.
Here we show that packet speed is reduced in direct proportion to the strength of the non-offset weight component, which represents symmetrical recurrent collateral connectivity; a key aspect of most continuous attractor neural networks (CANNs). We therefore hypothesize that individual layers of the HD system, which play a dominant role in combining allothetic and idiothetic signals to perform path integration, do not contain recurrent connections to help stabilize activity packets representing current HD, because these connections would slow down path integration as soon as the animal begins to rotate. The fundamental problem is that the recurrent connections within a layer could only learn one specific rotation speed, whether this is either no rotation or some fixed speed of rotation. If the animal rotates at any other speed, these recurrent connections will introduce error into the path integration. Specifically, error will be introduced if signals for two different speeds (e.g., current head velocity and no head velocity) co-occur in time, a necessary consequence of within-layer recurrent connectivity.
Following the attractor hypothesis of Skaggs et al. (
What, then, are the alternatives? We suggest that between-layer rather than within-layer connectivity is crucial. This is based on neurophysiological evidence demonstrating reciprocal connectivity between excitatory lateral mammillary nucleus (LMN) cells and inhibitory dorsal tegmental nucleus (DTN) cells (Allen and Hopkins,
We propose an architecture, shown in Figure
Some AHV cells show firing even when the head is not rotating (Bassett and Taube,
Within the COMB layer, neurons learn to respond to specific combinations of HD and AHV. At any moment, only the correct subset of COMB cells is activated, corresponding to the current rotation speed. This means that the bi-directional connections between the two layers learn to encode specific rotation speeds, with different connections encoding different speeds, thus reducing error from interference between speeds. Consequently, the connectivity between the two layers can implement accurate path integration across all of the trained speeds, including stabilizing the HD activity packet during no rotation.
We also show that a long neuronal rise time introduces error in path integration: the longer the rise time for a given conduction delay, the greater the error. Rise time also interacts with the axonal conduction delay used: the shorter the conduction delay for a given rise time, the more severe the slowing effect. Rise time does not, however, appear to affect the self-organization of offset connectivity. This is because both pre- and post-synaptic cells are driven up dynamically by the same external input, and thus both have the same rise time. Crucially, the post-synaptic cell is driven up during training by external input rather than, as in testing, by the firing of the pre-synaptic cell conveyed via recurrent connections. During testing, post-synaptic cells are driven up by the firing of pre-synaptic cells via connectivity containing a conduction delay, to which the post-synaptic rise time is effectively added, constituting an error in path integration. Our findings support the intuition that rise time causes signal transmission between HD cells to take too long during testing but not during training, with the correct offset self-organizing even with long rise times.
Whilst rise time is a particular issue for a rate-coded neural network as in this paper, it may be ameliorated by shifting to a spiking network addressing the fine dynamics of neuronal firing. Such networks can update their representations very rapidly (Brunel and Wang,
This paper focuses on an architectural approach to the issue of path integration speed. Previous work involving within-layer symmetric connectivity has suggested that path integration accuracy can be improved with neuronal mechanisms. One example is short-term depression (STD) (Fung et al.,
This model represents a simple, yet powerful, approach to uncovering key factors affecting accuracy of path integration speed. Several simplifications have been made in striving for maximum explanatory power. In this context, it is important to be clear about what issues we are and are not addressing. Firstly, HD cells are explicitly pre-designated with a preferred direction. In reality, the preferred direction of individual HD cells must be calibrated in some way to establish the specific connectivity, both between HD cells and from visual areas onto HD areas. This paper, however, is not focused on answering how HD cells acquire a preferred direction, but rather on how mature HD cells behave during path integration.
We employ a single layer of recurrent connections with varying degrees of asymmetry. The final weight profile is an additive combination of two components. This represents two sources of input to HD cells: non-offset synapses from the HD layer back onto itself, and the offset output of some path integration system. Reducing these two sources to one layer clearly demonstrates their interaction. We believe the case in which HD cells receive these inputs from two separate sources would not be qualitatively different, provided the inputs arrive at the same time.
All simulations reported here were run with a target velocity of
Here we investigate the issue of path integration speed in continuous attractor models of the HD cell system. Two major factors were discovered to affect the replay speed of path integration systems: the presence of within-layer symmetric non-offset recurrent collateral connectivity, and neuronal rise time. In the case of rise time, it appears that this factor does not adversely affect the self-organization of these path integration systems. We show that it is possible to have perfect path integration accuracy if rise time is negated and within-layer connectivity is purely offset. These findings represent a major contribution to theoretical understanding of the factors governing accuracy of path integration and result in key architectural predictions. Future approaches to coping with these speed-limiting factors are suggested.
Hector J. I. Page, Simon M. Stringer, and Daniel Walters designed the model and simulation protocol, discussed results, and commented on the manuscript. Hector J. I. Page and Simon M. Stringer wrote the manuscript; Hector J. I. Page and Daniel Walters revised the manuscript; Hector J. I. Page programmed the simulations and analyzed the results.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This work was supported by the Hintze Family Charitable Foundation.