Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation

Brain-inspired hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for artificial intelligence. The Neural Engineering Framework (NEF) brings forth a theoretical framework for representing high-dimensional mathematical constructs with spiking neurons to implement functional large-scale neural networks. Here, we present OZ, a programmable analog implementation of NEF-inspired spiking neurons. OZ neurons can be dynamically programmed to feature varying high-dimensional response curves with positive and negative encoders for a neuromorphic distributed representation of normalized input data. Our hardware design demonstrates full correspondence with NEF across firing rates, encoding vectors, and intercepts. OZ neurons can be independently configured in real-time to allow efficient spanning of a representation space, thus using fewer neurons and therefore less power for neuromorphic data representation.


INTRODUCTION
Although artificial intelligence has emerged as the focal point for countless state-of-the-art developments, in many ways it falls short of biological intelligence, particularly in terms of energy efficiency. For instance, the honeybee is capable of exceptional navigation while possessing just under 1 million neurons and consuming only 10^-3 W of power. Comparably, an autonomous car would need over 10^3 W of sensing and computing power, a millionfold decrease in energetic efficiency (Liu et al., 2014). Consequently, brain-inspired hardware designs have been used in numerous applications, particularly in neuro-robotics (Krichmar and Wagatsuma, 2011; Zaidel et al., 2021) and smart-edge devices (Krestinskaya et al., 2019; Zhang et al., 2020). In neuromorphic computing architectures, the computational principles of biological neural circuits are utilized to design artificial neural systems. A neuromorphic circuit comprises densely connected, physically implemented computing elements (e.g., silicon neurons), which communicate with spikes. Notable neuromorphic hardware includes TrueNorth (DeBole et al., 2019), developed by IBM Research; Loihi, developed by Intel Labs; NeuroGrid (Benjamin et al., 2014), developed at Stanford University; and SpiNNaker (Furber et al., 2014), developed at the University of Manchester. One theoretical framework, which allows for efficient data encoding and decoding with spiking neurons, is the Neural Engineering Framework (NEF) (Eliasmith and Anderson, 2003). NEF is one of the most utilized theoretical frameworks in neuromorphic computing. It was adopted for various neuromorphic tasks, ranging from neuro-robotics (DeWolf et al., 2020) to high-level cognition (Eliasmith et al., 2012).
NEF was compiled to work on multiple neuromorphic hardware platforms using Nengo, a Python-based "neural compiler," which translates high-level descriptions to low-level neural models (Bekolay et al., 2014). NEF was shown to be remarkably versatile, as a version of it was compiled on each of the neuromorphic hardware designs listed earlier (Mundy et al., 2015; Boahen, 2017; Fischl et al., 2018; Lin et al., 2018), although they do not follow the same paradigm of neuromorphic implementation. While Loihi, TrueNorth, and SpiNNaker are purely digital systems, in the sense that both computing and communication are carried out digitally, NeuroGrid is a mixed analog-digital circuit in which synaptic computations are implemented with analog circuitry. Although these general-purpose computing architectures adopted the digital realm for better adherence to application programming and ease of fabrication, analog implementation of synapses (such as the one implemented in NeuroGrid) is commonly found in analog neuromorphic sensing and signal processing. Notably, some of the first and most significant successes in neuromorphic architectures have been in vision (Indiveri and Douglas, 2000) and sound (Liu and Delbruck, 2010) processing.
NEF-inspired neurons were previously implemented directly in both digital and analog circuitry. For example, NEF-inspired neurons were implemented on a digital Field-Programmable Gate Array (FPGA) circuit and used for pattern recognition (Wang et al., 2017). However, it is not clear whether such implementations can approach the density, energy efficiency, and resilience of large-scale neuromorphic systems (Indiveri et al., 2011). Current analog implementations of NEF-inspired neurons rely on the stochasticity of circuit fabrication to constitute the variational activity patterns required to span a representation space. The activity pattern of these neurons cannot be modulated or programmed; therefore, using them for precise representation of a mathematical construct, even in low dimension, requires a large number of neurons and hence has suboptimal energy consumption (see section "Discussion" for further details) (Mayr et al., 2014; Boahen, 2017). Here, we present OZ, a programmable, analog implementation of a NEF-inspired spiking neuron. OZ utilizes several of the most well-known building blocks for analog spiking neurons to provide a design with a programmable high-dimensional response curve and a temporally integrated output.

Circuit Simulations and Analysis
All circuit simulations in this study were executed using LTspice, offered by Analog Devices (2008). The simulator is based on the open-source SPICE framework (Nagel and Pederson, 1973), which utilizes the numerical Newton-Raphson method to analyze non-linear systems (Nichols et al., 1994). Signal analysis was performed using Python scripts we developed. Curve and surface fittings were performed using MATLAB's curve fitting toolbox. Simulation files are available upon request.

Distributed Neuronal Representation With Neural Engineering Framework
Let a be a representation, or a function, of a stimulus x using a = f(x). With NEF, high-level network specifications, given in terms of vectors and functions, are transformed into a set, or an ensemble, of spiking neurons. A neural representation will therefore take the form a = G(J(x)), where G is a spiking neuron model [e.g., the leaky-integrate-and-fire (LIF) model (Burkitt, 2006)] and J is the integrated input introduced to the neuron. NEF uses a distributed neuronal representation, where each neuron i responds independently to x, resulting in a_i = G_i(J_i(x)). One possible model for J would be J = αx + J_bias, where α is a gain term and J_bias is a fixed background current. Neurons often have some preferred stimulus e (preferred direction, or encoder) to which they respond with a high frequency of spikes [e.g., direction selectivity in retinal ganglion cells (Ankri et al., 2020)]. J is therefore more appropriately defined using J = αx · e + J_bias, where the dot product x · e equals 1 when x and e point in the same direction and -1 when they oppose each other. To conclude, in NEF, a neuron's firing rate δ_i is defined using:

δ_i(x) = G_i(α_i x · e_i + J_i_bias),    (1)

An ensemble of neurons, in which each neuron has a gain and a preferred direction, distributively represents a vectorized (or high-dimensional) stimulus x. The represented stimulus x̂ can be decoded using:

x̂ = Σ_i d_i (a_i * h),    (2)

where d_i is a linear decoder, optimized to reproduce x using least-squares optimization, and a_i * h is the spiking activity a_i convolved with a filter h (both are functions of time). NEF is described in detail in Eliasmith and Anderson (2003) and succinctly reviewed in Stewart and Eliasmith (2014). NEF is the foundation upon which our neuron model is built. Particularly, it is utilized here to represent a high-dimensional stimulus with spiking neurons distributively.
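As a minimal numerical sketch of Eq. 1, the encoding of a stimulus into a firing rate can be written as follows (this assumes the standard NEF LIF rate approximation with a threshold current normalized to 1; the parameter values are illustrative, not taken from the circuit):

```python
import math

def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    # Standard NEF LIF rate approximation; threshold current normalized to 1.
    if j <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * math.log(1.0 / (1.0 - 1.0 / j)))

def nef_rate(x, e, alpha, j_bias):
    # Eq. 1: delta_i(x) = G_i(alpha_i * <x, e_i> + J_i_bias)
    j = alpha * sum(xi * ei for xi, ei in zip(x, e)) + j_bias
    return lif_rate(j)
```

A positively encoded neuron (e = [1]) with alpha = 2 and j_bias = 1 is silent at x = 0 (its intercept) and fires at an increasing rate as x grows toward 1, while the same stimulus drives a negatively encoded neuron (e = [-1]) below threshold.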

Analog Building Blocks
In a seminal review by Indiveri et al. (2011), "Neuromorphic silicon neuron circuits," the fundamental building blocks of analog neurons were described. Among them were (1) the pulse current source synaptic circuit, (2) the subthreshold first-order LPF circuit, and (3) the voltage-amplifier LIF neuron (Figures 1A-C). We will briefly revisit these circuits here, as they constitute the OZ neuron's main building blocks.

FIGURE 1 | Neuron building blocks. (A) Pulse current source synapse. This voltage-controlled current source is activated by an active-low input spike, producing a current that follows the input voltage pattern and dynamics. (B) Subthreshold first-order LPF circuit. This circuit provides temporal control of both the charging and discharging of a capacitor, allowing for temporal integration of incoming spikes. (C) Voltage-amplifier LIF neuron. This spike-generating circuit provides precise control of the generated spikes, including their rise time, width, fall time, and refractory period. (D) Signal traces for the current-source synapse. When a spike arrives (synapse is activated low; V_in), current I_syn is proportionally generated. This synapse offers magnitude control, where I_syn is proportionally correlated to V_w. (E) Signal traces for the log-domain integrator synapse. The log-domain integrator synapse features linear integration of incoming spikes, where, ahead of saturation, each spike contributes equally to I_syn. (F) Signal traces for the voltage-amplifier LIF neuron. The neuron is driven by I_in, which was generated by the subthreshold first-order LPF circuit (described earlier). The circuit has two voltage inverters: the first inverter (I_inv1) drives the I_Na current, and the second inverter (I_inv2) drives the I_K currents. These currents adhere to the behavior of biological neurons, providing precise control of the spikes' dynamics.
The pulse current-source synapse (Figure 1A), proposed by Mead (1989), was one of the first artificial synapse circuits created. It is a voltage-controlled current source, driven by an active-low input spike. The resulting current I_syn is defined using:

I_syn = I_0 exp(κ(V_dd - V_w)/U_T),    (3)

where V_dd is the supply voltage, I_0 is the leakage current of transistor M_w, which is activated in the subthreshold regime, κ is the subthreshold slope factor, and U_T is the thermal voltage (approximately 26 mV at room temperature). This circuit allows for controlling the magnitude of I_syn such that when V_w equals V_dd, I_syn is I_0. As we decrease V_w, I_0 is scaled up exponentially, increasing I_syn accordingly (Figure 1D). While offering control over the magnitude of I_syn, the pulse current-source synapse does not provide temporal modulation.
The subthreshold first-order LPF circuit (Figure 1B), proposed by Merolla and Boahen (2004), offers linear integration of incoming spikes. This circuit is built upon the charge and discharge synapse [described in Bartolozzi and Indiveri (2007)], which provides temporal control of the charging and discharging of C_syn. In the charge and discharge synapse, the incoming active-high spikes activate the transistor M_in. During a spike, V_syn decreases linearly, at a rate set by the net current I_w - I_τ, where I_w is the current driven through transistor M_w (and regulated by V_w) and I_τ is the current driven through transistor M_τ (and regulated by V_τ). This net current is responsible for discharging C_syn. The linearly decreasing V_syn drives I_syn by regulating transistor M_out. In this log-domain circuit, the logarithmic relationship between the transistor's V_gs and its current is used to exhibit overall linear properties [see Indiveri et al. (2011) for a detailed analysis]. The governing equations of this synapse during a spike (I_syn_spike) and between spikes (I_syn_flat) are:

I_syn_spike(t) = I_syn^- exp((t - t_i^-)/τ_c),
I_syn_flat(t) = I_syn exp(-(t - t_i)/τ_d),

where t_i^- and t_i are the times at which spike i arrives and terminates, respectively, I_syn^- and I_syn are the values of I_syn at times t_i^- and t_i, respectively, τ_c is the time constant of the capacitor charge, which equals C U_T/κ(I_w - I_τ), and τ_d is the time constant of the capacitor discharge, which equals C U_T/κI_τ. Controlling the charge and discharge of C_syn allows for temporal control of both the rise and fall times of V_syn, thus providing the ability to temporally integrate multiple incoming spikes (Figure 1E).

FIGURE 2 | An illustration of four OZ neurons grouped into two branches, one with positive and the other with negative encoders. Each branch initiates with an input preprocessing module, and each neuron comprises a spike generation and a temporal integration module. Each neuron has different tuning and therefore produces spikes with a different dynamic.
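The two time constants governing the charge and discharge of C_syn can be sketched as follows (the component values in the comments are illustrative assumptions, not the fabricated circuit's values):

```python
U_T = 0.026    # thermal voltage [V]
KAPPA = 0.7    # subthreshold slope factor (illustrative)

def lpf_time_constants(c_syn, i_w, i_tau):
    """Charge/discharge time constants of the log-domain LPF synapse.

    tau_c = C*U_T / (kappa * (I_w - I_tau))   (capacitor charge, during a spike)
    tau_d = C*U_T / (kappa * I_tau)           (capacitor discharge, between spikes)
    """
    tau_c = c_syn * U_T / (KAPPA * (i_w - i_tau))
    tau_d = c_syn * U_T / (KAPPA * i_tau)
    return tau_c, tau_d
```

Because I_τ appears alone in the denominator of τ_d, a small bias current yields a slow decay between spikes, which is what lets the synapse accumulate (temporally integrate) several incoming spikes before the response relaxes.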
The voltage-amplifier LIF neuron is a spike-generating circuit proposed by van Schaik (2001) and shown in Figure 1C. This circuit enhances the classic axon-hillock neuron design [described in Mead (1989)] with precise control of the generated spikes' dynamics, including their rise time, width, fall time, and refractory period. Capacitor C_mem models the neuron membrane, and V_lk, which regulates the conductance of transistor M_lk, controls its leakage current I_lk. In the absence of an input current from incoming spikes (flat phase), I_lk drives the membrane voltage V_mem down to 0 V. When an input current is present, the net incoming current I_in - I_lk charges C_mem, increasing V_mem. When V_mem exceeds V_th, an action potential is generated via an operational amplifier (op-amp). This action potential is introduced into a voltage inverter, where high logical states are transformed into low logical states and vice versa. A low logical voltage state activates transistor M_Na, through which the I_Na current is driven, charging C_mem and creating a sustained high voltage (constituting the spike). A second voltage inverter drives I_kup through transistor M_inv2, charging C_k, thus controlling the spike's width. As C_k charges, it activates transistor M_k, through which I_k is driven. I_k discharges C_mem, and when V_mem drops below V_th, the amplifier's output drops to a low state. In response, the first voltage inverter's output is driven high, deactivating transistor M_Na and thus terminating I_Na. The second inverter's output voltage is driven low, terminating I_kup and allowing I_ref to discharge C_k. As long as I_ref is not strong enough to discharge C_k, the circuit cannot be further stimulated by incoming current (assuming I_in < I_k), constituting a refractory period. The generated spikes are shown in Figure 1F. This process is a direct embodiment of the biological behavior, in which an influx of sodium ions (Na+) and a delayed outflux of potassium ions (K+) govern the initiation of an action potential.

FIGURE 3 | OZ neuron analog design. (A) Preprocessing module for negatively encoded OZ neurons. The module inverses the input voltage, aligns it to initiate at 0 V, and reinverts and scales it to terminate at 3.3 V. (B) Neuron's spike generator. Voltages from two weighted inputs are transformed into a proportional current, injected into a modified voltage-amplifier LIF neuron. The neuron produces a spike train according to its response dynamic. The spike train is introduced into a temporal integration circuit. (C) Eight OZ neurons, four positively encoded (right) and four negatively encoded (left). All neurons were stimulated with a linearly increasing voltage, rising from -1 to 1 V. Each neuron was modulated with various values of V_lk to produce a spike train at a particular rate, starting from a specific input (intercept).
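The charge-leak-fire-refractory cycle described above can be sketched as a simple discrete-time behavioral model (an idealized abstraction, not a transistor-level simulation; all constants are illustrative):

```python
def simulate_lif(i_in, i_lk=0.5, c_mem=1.0, v_th=1.0, t_ref=5, steps=200):
    """Idealized LIF behavior: dV/dt = (I_in - I_lk)/C_mem, reset on threshold.

    Returns the number of spikes emitted over `steps` time steps.
    """
    v_mem, refractory, spikes = 0.0, 0, 0
    for _ in range(steps):
        if refractory > 0:            # circuit cannot be stimulated during refractory period
            refractory -= 1
            continue
        # Leakage pulls V_mem toward 0 V; the floor models the 0 V rest state.
        v_mem = max(0.0, v_mem + (i_in - i_lk) / c_mem)
        if v_mem >= v_th:             # threshold crossing triggers a spike
            spikes += 1
            v_mem = 0.0               # I_K discharges C_mem back to rest
            refractory = t_ref        # I_ref must discharge C_k before the next spike
    return spikes
```

The model reproduces the qualitative behavior in the text: no spikes when the leakage current exceeds the input current, and a firing rate that grows with the net charging current but is capped by the refractory period.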

Circuit Design
In our circuit design, stimulus x is introduced through preprocessing modules to two branches, one connected to positively encoded OZ neurons and the other to negatively encoded OZ neurons. These preprocessing modules accept an input voltage ranging from -1 to 1 V (corresponding to the default input normalization scheme used by NEF) and produce an output voltage ranging from 0 to 3.3 V. Each OZ neuron comprises two consecutive modules: a spike generator and a temporal integrator. Each spike generator is characterized by a tuning curve, modulated using control signals, thus realizing Eq. 1. A generated spike train is introduced to a temporal integrator, which integrates the incoming spikes, thus realizing Eq. 2 and constituting a NEF-inspired neuron. The circuit schematic for two negatively encoded and two positively encoded neurons is shown in Figure 2. The negative preprocessing circuit comprises two consecutive modules: the first inverses the voltage and aligns it to initiate at 0 V, and the second reinverts and scales it so it terminates at 3.3 V (the circuit's V_dd) (Figure 3A). The first module uses an op-amp-based adder to add 1 V to the input signal (aligning it to 0 V) and inverts it according to V_o = -(V_1 + V_2), where V_1 and V_2 are the adder's two input voltages (here, the stimulus and a 1 V reference). The resulting voltage spans a 2 V range. The second module uses an inverting amplifier that scales its input voltage according to -R_fb/R_in, where R_fb is the feedback resistor and R_in is the amplifier's input terminal resistor. Here, R_fb2 = 1.65 kΩ and R_in2 = 1 kΩ, achieving a scaling factor of -1.65, which transforms the 2 V range into a 3.3 V output. The positive preprocessing module resembles the negative preprocessing module, with the addition of another voltage inverter, which produces a similar waveform initiating at 3.3 V and terminating at 0 V.
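The end-to-end voltage mapping of the two preprocessing branches can be sketched as a behavioral model of the signal levels (assuming ideal op-amps and the resistor values given in the text):

```python
V_DD = 3.3  # supply voltage [V]

def preprocess_negative(v_in):
    # Stage 1: inverting adder, V_o = -(v_in + 1 V) -> spans a 2 V range
    v1 = -(v_in + 1.0)
    # Stage 2: inverting amplifier, gain -R_fb/R_in = -1.65 kOhm / 1 kOhm = -1.65
    return -1.65 * v1

def preprocess_positive(v_in):
    # Same chain plus one more voltage inverter: initiates at 3.3 V, terminates at 0 V
    return V_DD - preprocess_negative(v_in)
```

Sweeping v_in from -1 to 1 V, the negative branch output rises from 0 to 3.3 V while the positive branch output falls from 3.3 to 0 V, giving the two encoder polarities their mirrored drive signals.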
The OZ neuron is shown in Figure 3B. It is based on modified versions of the pulse current source synaptic circuit (for weighted input), the voltage-amplifier LIF neuron (for spike generation), and the subthreshold first-order LPF circuit (for temporal integration). The pulse current-source synapse is used to convert an input voltage into a proportional current, introduced into the spike generation circuit and defined by V_w according to Eq. 3. The voltage-amplifier LIF neuron's response dynamic is predominantly determined by the values of I_kup, I_kdn, I_Na, and I_ref and the leakage current I_lk (driven through transistor M_lk), which are regulated, respectively, by V_kup, V_kdn, V_Na, V_ref, and V_lk via dedicated transistors. Therefore, this neuron has five degrees of freedom (DOF): V_lk controls the discharge rate of C_mem, V_ref controls the spikes' refractory period, V_kup and V_kdn control the fall time of the generated spikes, and V_Na controls the spikes' rise time. Furthermore, its spiking dynamic relies on an op-amp, which comprises multiple transistors and resistors. OZ's spike generator is a NEF-optimized circuit design, in which V_kup, V_Na, and V_kdn are redundant. Furthermore, it does not rely on an op-amp for spike generation, as the amplifier has no significant functional effect in terms of the neuron's firing rate and intercept (see section "Discussion"). A NEF-tailored design should also enable high-dimensional input representation, which can be achieved by concatenating the input module (highlighted in Figure 3B as weighted input; see section "Circuit Analysis"). Finally, temporal integration can be achieved via a simplified LPF temporal integration circuit. In OZ, capacitor C_int is charged by current I_int, which is activated by the generated spike train and driven through transistor M_int.
C_int discharges at a constant rate through a leakage current, which is driven through transistor M_int2 and regulated by a continuously held voltage V_int. The voltage on C_int constitutes the OZ neuron's output.
A useful way of representing a neuron's response to varying inputs is a response, or tuning, curve, which is one of the most fundamental concepts of NEF. In NEF, a tuning curve is defined using an intercept, the input value at which the neuron starts to produce spikes at a high rate, and its maximal firing rate. OZ's tuning curve can be programmed to control both. For circuit analysis, we built eight OZ neurons, four with positive and four with negative encoders. Each neuron has d + 2 DOF, where d is the dimensionality of the input, corresponding to d values of V_w (which regulate each input dimension), with V_lk and V_ref constituting the two other DOF. Each of the eight neurons was defined to feature a different intercept and maximal firing rate, and each was stimulated with the same input voltage, linearly increasing from -1 to 1 V over 1 s. Results are shown in Figure 3C.
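In NEF terms, programming a neuron's intercept and maximal firing rate amounts to solving for a gain and a bias current. A sketch of that mapping for the standard LIF rate model (following the common NEF parameterization; the time constants are illustrative, not OZ circuit values):

```python
import math

TAU_RC, TAU_REF = 0.02, 0.002  # membrane and refractory time constants [s]

def lif_rate(j):
    # LIF rate approximation with threshold current normalized to 1.
    if j <= 1.0:
        return 0.0
    return 1.0 / (TAU_REF + TAU_RC * math.log(1.0 / (1.0 - 1.0 / j)))

def gain_bias(intercept, max_rate):
    # Solve for (alpha, j_bias) such that the neuron is silent at the
    # intercept and fires at max_rate when the encoded input equals 1.
    j_max = 1.0 / (1.0 - math.exp((TAU_REF - 1.0 / max_rate) / TAU_RC))
    alpha = (j_max - 1.0) / (1.0 - intercept)
    return alpha, 1.0 - alpha * intercept
```

In OZ, this same two-parameter specification (intercept and maximal rate) is realized physically through V_lk and V_w rather than through a software gain and bias.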

Architectural Design
For the sake of discussion, we will consider one-dimensional (1D) OZ neurons. First, we shall consider the classic voltage-amplifier LIF neuron, shown in Figure 1C. This design relies on an op-amp for spike generation. From a functional perspective, the op-amp provides the neuron with a digital attribute, splitting the neuron into an analog pre-op-amp circuit and a digital post-op-amp circuit. Particularly, when an incoming current induces V_mem to exceed a predefined threshold voltage, the op-amp yields a square signal, which generates a sharp I_Na response. This fast response induces sharp swing-ups in V_mem and V_out. Without the op-amp, this transition between states is gradual (Figures 4A-C). Although both designs permit spike generation, the op-amp-based design can generate spikes at a higher frequency and amplitude. To compensate for that, we can discard both the I_Na and I_kup controls through the removal of their regulating transistors. Removing these resistance-inducing transistors maximizes I_Na and I_kup, thus achieving op-amp-like frequency and amplitude (Figures 4D-F). Moreover, without the op-amp, there is no need to explicitly define a threshold, providing a more straightforward and biologically plausible design.

Neuron Control
Did we lose control over the maximal firing rate of our neuron by eliminating the regulation of I_kup? Fortunately, both I_kup and I_ref impact the neuron's firing rate. Although I_kup limits the neuron's firing rate by governing the rise time of the generated spikes, I_ref does so by setting the refractory period between spikes. Controlling both currents is redundant, as both impose similar constraints, as shown in Figure 4G (V_lk and V_w are held constant at 2.2 and 0.5 V, respectively).
V_lk controls the discharge rate of C_mem by regulating a leakage current through transistor M_lk. As long as this leakage current is lower than the input current (driven through the weighted input module), V_mem will rise toward saturation. As we decrease V_lk, the leakage current drops and C_mem charges faster. As a result, the neuron's intercept (the input value at which the neuron initiates spikes at a high rate) increases for positively encoded neurons and decreases for negatively encoded neurons. Neurons exhibit their maximal firing rate when their input voltage is either -1 or 1 V, depending on the neuron's encoding. The maximal firing rate is proportionally dependent on the charging of the membrane capacitance C_mem: the faster C_mem charges, the more frequently the neuron will emit spikes. However, a neuron's maximal firing rate is not entirely decoupled from its intercept. Although a neuron's intercept is controlled by V_lk, it can also be modulated by V_w, which provides magnitude control of the input current. Therefore, although V_lk can be used to define the neuron's intercept, V_w can impose on it a firing rate constraint. For example, the neuron's spiking rate will not exceed 400 Hz when V_w is set to 2.2 V. The constraint imposed by V_lk and V_w on the neuron's spiking rate is demonstrated in Figure 4H. Using surface fitting (R² > 0.98), the neuron's intercept N_int and its maximal firing rate N_FR can each be described with a fitted surface over V_lk and V_w. In Figure 5, the tuning curves of our eight OZ neurons are demonstrated alongside the tuning curves of eight simulated neurons, computed directly with NEF. The tuning curves indicate varying intercepts and spiking rates, showcasing the produced spike trains' high predictability and the full correspondence between our hardware design and NEF. In Figures 5B,D, we compared 2D tuning curves. This was achieved with OZ by concatenating two weighted inputs, x_1 and x_2, weighting them with V_w1 and V_w2, respectively.
Results show the high predictability of the neuron's response to a high-dimensional stimulus.

DISCUSSION
Numerous digital and analog designs of spiking neurons have been previously proposed. For example, Yang et al. (2018b) proposed a biologically plausible, conductance-based implementation of spiking neurons with an FPGA. This design was used to simulate 1 million neurons by utilizing six state-of-the-art FPGA chips simultaneously, achieving biological plausibility and scale (Yang et al., 2018a). Furthermore, it was shown to feature a multicompartmental neuron design, supporting the morphologically detailed realization of neural networks (Yang et al., 2019). Biologically plausible spiking neurons were also implemented in analog circuits, featuring spike adaptation (Aamir et al., 2017). Although incredibly versatile and highly configurable, these designs were guided by a bottom-up approach, tailored to reproduce biological behavior. However, to achieve function-optimized neural networks (e.g., for neuro-robotics or other smart-edge devices), top-down modeling is more suitable (Eliasmith and Trujillo, 2014). By setting aside morphological and physiological constraints, NEF allows top-down optimization, with which a high-level specification can be realized in spiking neurons with a minimal number of explicitly defined neuronal characteristics.

FIGURE 5 | In panels A,C, each color stands for one neuron spiking at a specific rate in response to an input voltage (x_1). In the OZ-based 2D representation, V_w for x_1 was held constant at 2.5 V, V_ref at 0.4 V, and V_lk at 0.729 V. V_w values for x_2 were 3.3, 2.8, 2.6, and 2.4 V, left to right, respectively.
NEF is one of the most utilized theoretical frameworks in neuromorphic computing. A version of NEF was compiled on various digital neuromorphic systems, such as Intel's Loihi and IBM's TrueNorth (Fischl et al., 2018; Lin et al., 2018), as well as on hybrid analog/digital systems such as NeuroGrid (Boahen, 2017). NEF-inspired neurons were directly implemented in both digital (Wang et al., 2017) and analog (Indiveri et al., 2011) circuitry. Although digital NEF-inspired implementations are versatile and programmable, they are fundamentally less energy- and area-efficient than analog circuitry (Amara et al., 2006). Current analog implementations of NEF-inspired neurons rely on the inherent stochasticity of the integrated circuit fabrication process to create the variation in neurons' tuning required to span a representation space (Mayr et al., 2014; Boahen, 2017) or to support machine learning (Tripathi et al., 2019).
Neurons in NEF represent mathematical constructs, and the accuracy of that representation is fundamentally limited by the neurons' tuning curves. A tuning curve is defined using an intercept and a maximal spike rate. The intercept determines the part of the representation space for which the neuron will fire. In 1D, uniformly distributed intercepts will uniformly span the representation space. A neuron with an intercept of 0 will be active for 50% of that space, and a neuron with an intercept of 0.75 will be active for only 12.5% of that space. However, using randomly distributed tuning curves would require many more neurons to achieve adequate spanning of the space. When an input does not drive a neuron to spike, that neuron is essentially a waste of space and energy.
Moreover, as we advance toward representing values in higher dimensions, articulating and carefully defining neurons' tuning curves becomes a critical design factor. This was attested to by the authors of Mayr et al. (2014) as they discussed their analog implementation of a NEF-inspired neuron: "as the spread of the curves is determined by random effects of the manufacturing process, individual instances of the ADC [the designated application for that design] have to be checked for sufficient spread, thus defining a yield in terms of ADC resolution. When comparing the two families of tuning curves, the main observation is that the Nengo generated neurons tend to vary more, especially in their gain. . . this has a significant impact on the overall computation. If the neurons do not encode for sufficiently different features of the input signal, the representation of the input signal degrades" (Mayr et al., 2014).
Moreover, our proposed implementation offers a high-dimensional representation. Distributing intercepts uniformly between -1 and 1 makes sense for 1D ensembles. Because a neuron's intercept defines the part of the representation space in which the neuron fires, in a 1D representation, uniformly distributed intercepts create a uniform spanning of that space. In higher dimensions, the proportions of activity get smaller (or larger for negatively encoded neurons). In high dimensions, a naive distribution of intercepts results in many neurons either rarely producing spikes or being always active (Figure 6A). In both cases, these neurons are essentially not contributing to the representation. A representation space in 2D is a 3D sphere, in which each neuron's encoder points to a cap, which specifies the region in which that neuron is active (Gosmann and Eliasmith, 2016). The intercept is the location of the cap's cutoff plane. The ratio between the cap's and the sphere's volumes is the percentage of the representation space in which a neuron is active. A generalized sphere in higher dimension is a hypersphere. The volume of a hypersphere cap v_cap is defined with:

v_cap = ½ C_d r^d I_((2rh - h²)/r²)((d + 1)/2, ½),

where C_d is the volume of a unit hypersphere of dimension d, r is the radius, h is the cap's height, and I_x(a, b) is the regularized incomplete beta function. Here, r = 1 (representation is in [-1, 1]) and h = 1 - x, where x is the intercept. The ratio p between the cap's volume v_cap and the hypersphere's volume C_d is therefore:

p = ½ I_(1 - x²)((d + 1)/2, ½).    (6)

To span a high-dimensional representation space more efficiently, we can use the inverse of Eq. 6 to derive, for a desired p value, the intercept that will create it:

x = √(1 - I⁻¹_(2p)((d + 1)/2, ½)),    (7)

where I⁻¹ is the inverse of the regularized incomplete beta function. With Eq. 7, we can generate the intercepts to better span the representation space. Utilizing this equation can provide the intercept distribution for which the spiking activity pattern is uniform (Figure 6B).
This is a clear example of the importance of being able to modulate neurons' tuning curves in a high-dimensional representation. The importance of this discussion was recently highlighted in DeWolf et al. (2020) in the context of neuro-robotics.
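The active proportion p, and the shrinking of activity regions with dimensionality, can be checked with a small Monte Carlo sketch (a pure-Python estimate of Eq. 6, sampling uniformly on the unit hypersphere; the sample counts are arbitrary):

```python
import math
import random

def active_proportion(intercept, d, n=200_000, seed=0):
    """Estimate the fraction of the unit hypersphere where x . e >= intercept.

    Points uniform on the sphere are drawn by normalizing Gaussian samples;
    by symmetry we can take e = (1, 0, ..., 0) and test the first coordinate.
    """
    rng = random.Random(seed)
    active = 0
    for _ in range(n):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        if v[0] / norm >= intercept:
            active += 1
    return active / n
```

An intercept of 0 is active on half the sphere in any dimension, whereas a fixed positive intercept covers a rapidly shrinking proportion as d grows; this is why Eq. 7 derives intercepts from a target p rather than distributing them uniformly.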
Here, we presented the OZ neuron: a programmable analog implementation of a spiking neuron whose tuning curve can be explicitly defined. With our system design, for a uniform distribution of tuning curves (required in most low-dimensional applications), only one of the positive and negative branches has to be defined, cutting in half the number of neurons that have to be controlled. Because we can design the neurons' tuning curves to accurately span the representation space following a particular application's needs, the number of neurons required to span that space can be significantly reduced. Moreover, and uniquely, neurons' tuning curves can be changed in real-time to provide a dynamically modulated neuromorphic representation. However, when the required number of neurons is large, the resulting control overhead must be considered. Our design can be scaled to a full very-large-scale integration (VLSI) neuromorphic circuit design, providing an analog, distributed, and energy-efficient neuromorphic representation of high-dimensional mathematical constructs.

DATA AVAILABILITY STATEMENT
Model simulation files will be provided upon request.

AUTHOR CONTRIBUTIONS
AH designed the circuits and performed circuit simulation and analysis. EE conceptualized the research, designed the circuits, and wrote the manuscript. Both authors contributed to the article and approved the submitted version.

FUNDING
This research was supported by the Israel Innovation Authority (EzerTech) and the Open University of Israel research grant.