ORIGINAL RESEARCH article

Front. Neurosci., 09 January 2026

Sec. Neuromorphic Engineering

Volume 19 - 2025 | https://doi.org/10.3389/fnins.2025.1735027

Sequential analysis and its applications to neuromorphic engineering

  • International Centre for Neuromorphic Systems, The MARCS Institute, Western Sydney University, Sydney, NSW, Australia


Abstract

Introduction:

Neuromorphic circuits operate by comparing fluctuating signals to thresholds. This operation underpins sensing and computation in both neuromorphic architectures and biological nervous systems. Rigorous analysis of such systems is rarely attempted because the statistical tools to study them are both inaccessible and largely unknown to the neuromorphic community.

Methods:

We offer a gentle introduction to one such tool, sequential analysis, a classical framework that addresses a particular class of threshold-crossing problems. We define the formal problem analyzed in sequential analysis and present Abraham Wald's elegant methodology for solving it.

Results:

We then apply this framework to three examples in neuromorphic engineering, demonstrating how it can serve as a benchmark, proxy model, and design tool. Our introduction is understandable without prior training in probability or statistics.

Discussion:

Sequential analysis provides the statistical limits of circuit performance, tractable abstractions of complex circuit behavior, and constructive rules for circuit design. It establishes rigorous statistical baselines for evaluating hardware. It links low-level circuit parameters to observable dynamics, clarifying the computational role of neuromorphic architectures. By translating performance goals into optimal thresholds and design parameters, it offers principled prescriptions that go beyond empirical tuning.

1 Introduction

Neuromorphic engineering develops hardware and software systems based on the structure and function of nervous systems. Its principal goal is to design efficient, adaptive, and robust computation beyond conventional digital architectures (Christensen et al., 2022; Indiveri et al., 2011; Mead, 1989). Many neuromorphic devices encode and process information using discrete “spikes” or “events” (Mahowald, 1992). These spikes are typically generated when an internal variable, e.g., a voltage, crosses a threshold.

Threshold crossing problems are well-studied in statistics. They arise whenever a fluctuating process is compared to one or more boundaries (Monk et al., 2015, 2024, 2014; Monk and van Schaik, 2021, 2022; Gold and Shadlen, 2007; Monk and van Schaik, 2020; Urzay et al., 2023). One example is a spiking neuron whose membrane potential exceeds the firing threshold (Mani et al., 2025; Shadlen and Shohamy, 2016; Kira et al., 2015). Another is a pixel in an event-based sensor whose voltage crosses “on” or “off” thresholds to generate events (Gallego et al., 2022). Statistics provides powerful tools for analyzing these problems (Doob, 1953; Lai, 2009b; Tartakovsky et al., 2014; Taylor and Karlin, 1984). But those tools remain largely inaccessible to the neuromorphic community. Much of the statistics literature is written in abstract mathematical language, which obscures its applicability to neuromorphic systems.

In this study, we introduce sequential analysis, a classical statistical framework for threshold crossing problems, to the neuromorphic community (Wald, 1944). Sequential analysis was pioneered by Abraham Wald (Wald, 1944, 1947) to study optimal decision-making when data arrive over time. It provides exact results for decision accuracy, decision times, and optimal thresholds. We demonstrate that these results translate naturally to neuromorphic circuits. In the Methods, we present the formal problem of sequential analysis and its solution. Then, in the Results, we demonstrate how sequential analysis functions as a benchmark, proxy model, and design tool in three neuromorphic applications. We deliberately avoid technical jargon to make the derivations accessible to readers without a background in probability theory. All prerequisite materials for our derivations are available as Supplementary material online. Threshold crossing problems provide a rigorous and intuitive lens for benchmarking, interpreting, and designing neuromorphic architectures. By formalizing how evidence is accumulated toward a decision, sequential analysis provides a principled framework for interpreting the behavior and computational role of both neuromorphic circuits and biological neurons.

2 Materials and methods

If any step in the following derivation is unclear, the Supplementary material (SI) walks through the underlying principles in plain language. No prior background in statistics is required to understand this material beyond the SI.

Figure 1 illustrates a sequential analysis problem (Wald, 1944; Monk and van Schaik, 2020). Let St represent the cumulative sum of t realizations of a random variable X~Pr(X). At each time step, we observe a new realization of X and add it to the sum of all previous realizations, St−1. Thus, St is a random walk that changes by X at each time step. The key assumption of sequential analysis is that X is independent and identically distributed (i.i.d.) for all time steps.
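As a concrete illustration of this setup, the absorbing random walk can be simulated in a few lines. This sketch is ours, not from the article; the Gaussian choice of Pr(X) and all parameter values are illustrative (they mirror Figure 1's a = 3, b = −2).

```python
import random

def run_walk(mu, sigma, a, b, s0=0.0, rng=random):
    """Simulate S_t = S_{t-1} + X_t with i.i.d. X ~ N(mu, sigma^2)
    until the sum crosses barrier a (above) or b (below).
    Returns (barrier_hit, T): which barrier stopped the walk and the
    number of realizations T it took."""
    s, t = s0, 0
    while b < s < a:
        s += rng.gauss(mu, sigma)   # one new i.i.d. realization of X
        t += 1
    return ('a' if s >= a else 'b'), t

random.seed(1)
barrier, T = run_walk(mu=0.0, sigma=1.0, a=3.0, b=-2.0)
```

Running many such walks and recording `barrier` and `T` yields exactly the quantities sequential analysis computes in closed form below.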

Figure 1


Schematic of a sequential analysis problem. (Left) We observe realizations of a random variable X, one per time step. X is assumed to be independent and identically distributed at every time step. In this example, Pr(X) is a normal distribution with mean μ and variance σ2 (magenta trace). Three samples from the first three time steps, X1, X2, and X3, are shown in cyan, green, and red, respectively. (Right) Sequential analysis considers the cumulative sum St of those realizations of X. It compares St to two constant absorbing barriers a and b (e.g., on and off thresholds). While b < St < a, we observe new realizations of X and continue adding them to the cumulative sum. We want to find the probabilities that the sum hits either barrier before the other and the distribution of the number of realizations T required to hit it. In this example, ST = a and T = 10 time steps (black dot, upper-right of the panel).

The left panel in Figure 1 plots an example distribution Pr(X) as a normal distribution with mean μ and variance σ2 (magenta trace). The left panel also shows three realizations drawn from Pr(X) at the first three time steps, shown in cyan, green, and red, respectively. The right panel in Figure 1 shows that we add each realization of X to the sum of all previous observations.

The right panel in Figure 1 also illustrates that sequential analysis assumes the random walk St to be between two constant thresholds b and a (horizontal dashed black lines). In this example, b = −2, a = 3, and the initial sum is S0 = 0. As long as b < St < a, we continue making new observations of X and adding them to St (gray, cyan, green, and red dots). When St crosses either threshold (e.g., threshold a at random time T = 10, black dot, right panel), we stop the random walk. One goal of sequential analysis is to obtain the probability of crossing one threshold before the other, i.e., Pr(ST = a) and Pr(ST = b). Another goal is to find the conditional distributions of T, Pr(T ∣ ST = a) and Pr(T ∣ ST = b).

Abraham Wald derived threshold crossing probabilities and conditional time distributions from a martingale (Wald, 1944; Lai, 2009b; Doob, 1953). Consider the conditional expectation:

$$\mathbb{E}\left[\exp(S_t h)\,\phi_X(h)^{-t} \mid S_{t-1}\right],$$

where ϕX(h) is the moment-generating function (MGF) of X and h is its (real) independent variable. Wald's analysis requires that the MGF be well-defined within a specific domain of h. It also requires regularity conditions, e.g., integrability and boundedness. In most practical applications of sequential analysis (including the examples we will consider here), these conditions hold.

We can quickly show that this conditional expectation is a martingale. Notice that ϕX(h) is a deterministic function. Moreover, t is just the number of time steps that have elapsed up to a given time. Since neither is random, we can pull them out of the conditional expectation:

$$\mathbb{E}\left[\exp(S_t h)\,\phi_X(h)^{-t} \mid S_{t-1}\right] = \phi_X(h)^{-t}\,\mathbb{E}\left[\exp(S_t h) \mid S_{t-1}\right].$$

Insert St = St−1 + Xt in the right-hand side:

$$\phi_X(h)^{-t}\,\mathbb{E}\left[\exp\big((S_{t-1} + X_t)h\big) \mid S_{t-1}\right].$$

Given St−1, the term exp(St−1h) is not random, so pull it out:

$$\phi_X(h)^{-t}\exp(S_{t-1}h)\,\mathbb{E}\left[\exp(X_t h) \mid S_{t-1}\right].$$

Since the X are i.i.d., Xt is independent of St−1:

$$\phi_X(h)^{-t}\exp(S_{t-1}h)\,\mathbb{E}\left[\exp(X_t h)\right].$$

Recognize the expectation as the MGF of X and simplify:

$$\phi_X(h)^{-t}\exp(S_{t-1}h)\,\phi_X(h) = \exp(S_{t-1}h)\,\phi_X(h)^{-(t-1)}.$$

Comparing this expression to our original conditional expectation, we observe that it forms a martingale:

$$\mathbb{E}\left[\exp(S_t h)\,\phi_X(h)^{-t} \mid S_{t-1}\right] = \exp(S_{t-1}h)\,\phi_X(h)^{-(t-1)}.$$

Next, we write this martingale as a conservation statement. Take the expectations of both sides and apply the law of total expectation:

$$\mathbb{E}\left[\exp(S_t h)\,\phi_X(h)^{-t}\right] = \mathbb{E}\left[\exp(S_{t-1}h)\,\phi_X(h)^{-(t-1)}\right].$$

By induction:

$$\mathbb{E}\left[\exp(S_t h)\,\phi_X(h)^{-t}\right] = \exp(S_0 h),$$

assuming that S0 is known (i.e., not random) and t begins at 0. This equation states that the expectation of the quantity exp(Sth)ϕX(h)−t is conserved throughout this stochastic process.

Doob's optional stopping theorem states that a randomly stopped martingale is still a martingale (Doob, 1953). Thus, insert the random stopping time T for t:

$$\mathbb{E}\left[\exp(S_T h)\,\phi_X(h)^{-T}\right] = \exp(S_0 h). \tag{1}$$

Equation 1 is known as the fundamental identity in sequential analysis (Wald, 1944; Tartakovsky et al., 2014). Wald derived threshold crossing probabilities and conditional time distributions from it (Wald, 1944, 1947). The reason we can extract so much threshold crossing information from it is that it is valid for all values of h. We extract our desired quantities by choosing special values of h and inserting them into Equation 1 (Monk and van Schaik, 2020).

Figure 2 visualizes the special values of h that we insert into the martingale. First, we calculate the probabilities of crossing the threshold. The left panel of Figure 2 plots the MGF of X, given the same distribution as shown in the left panel of Figure 1. Given weak assumptions (Wald, 1944), ϕX(h) is convex and crosses 1 at exactly two values of h (cyan lines and markers). Since all MGFs equal 1 at h = 0 by definition, we discard one of those crossings because it provides no useful information (square cyan marker, left panel Figure 2). The second crossing occurs at a nontrivial value h = h0 ≠ 0 (circle cyan marker). Inserting h0 into Equation 1, where the factor ϕX(h0)−T equals 1:

$$\mathbb{E}\left[\exp(S_T h_0)\right] = \exp(S_0 h_0).$$

For compact notation, let α ≡ Pr(ST = a) and β ≡ Pr(ST = b). Split the expectation, conditional on crossing either threshold first:

$$\alpha\,\mathbb{E}\left[\exp(S_T h_0) \mid S_T = a\right] + \beta\,\mathbb{E}\left[\exp(S_T h_0) \mid S_T = b\right] = \exp(S_0 h_0).$$

Given that ST is a or b, the terms in the conditional expectations are just constants (i.e., not random):

$$\alpha \exp(a h_0) + \beta \exp(b h_0) = \exp(S_0 h_0).$$

The process will cross either threshold in finite time (Wald, 1944), so we can insert β = 1−α and rearrange:

$$\alpha = \frac{\exp(S_0 h_0) - \exp(b h_0)}{\exp(a h_0) - \exp(b h_0)}. \tag{2}$$
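The closed-form crossing probability can be cross-checked by Monte Carlo simulation. A minimal sketch, assuming Gaussian steps (for which the nontrivial root of ϕX(h) = 1 is h0 = −2μ/σ2) and illustrative parameter values chosen small enough that threshold overshoot is negligible:

```python
import math, random

# Illustrative parameters: tiny Gaussian steps so overshoot is negligible.
mu, sigma = 0.000125, 0.005
a, b, s0 = 0.05, -0.05, 0.0

# For X ~ N(mu, sigma^2), phi_X(h) = 1 has the nontrivial root h0 = -2*mu/sigma^2.
h0 = -2.0 * mu / sigma**2
alpha_theory = ((math.exp(s0 * h0) - math.exp(b * h0))
                / (math.exp(a * h0) - math.exp(b * h0)))

# Monte Carlo estimate of Pr(S_T = a).
random.seed(0)
trials, hits_a = 10_000, 0
for _ in range(trials):
    s = s0
    while b < s < a:
        s += random.gauss(mu, sigma)
    hits_a += (s >= a)
alpha_sim = hits_a / trials
```

The two numbers should agree to within sampling error plus a small overshoot bias.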

The right panel of Figure 2 shows the values of h that we insert into Equation 1 to obtain the conditional characteristic functions (CCFs) of threshold crossing times. Under weak assumptions (Wald, 1944), ϕX(h) crosses any horizontal line in the neighborhood of 1 at two real values of h, not just the line at 1 itself (magenta lines and markers, left panel Figure 2). So its logarithm has two real roots in h in that neighborhood. Then for imaginary τ, −logϕX(h) = τ has two complex roots h1(τ) and h2(τ):

$$-\log\phi_X\big(h_{1,2}(\tau)\big) = \tau.$$

Insert h1(τ) into Equation 1, noting that ϕX(h1(τ))−T = exp(τT):

$$\mathbb{E}\left[\exp\big(S_T h_1(\tau) + \tau T\big)\right] = \exp\big(S_0 h_1(\tau)\big).$$

Split the expectation, conditional on hitting either threshold first:

$$\alpha\,\mathbb{E}\left[\exp(S_T h_1 + \tau T) \mid S_T = a\right] + \beta\,\mathbb{E}\left[\exp(S_T h_1 + \tau T) \mid S_T = b\right] = \exp(S_0 h_1).$$

Given which threshold was crossed first, exp(STh1) is not random. Pull it out of the conditional expectations:

$$\alpha \exp(a h_1)\,\mathbb{E}\left[\exp(\tau T) \mid S_T = a\right] + \beta \exp(b h_1)\,\mathbb{E}\left[\exp(\tau T) \mid S_T = b\right] = \exp(S_0 h_1).$$

Recognize the conditional expectations as the CCFs of T, ψT|a(τ) and ψT|b(τ):

$$\alpha \exp(a h_1)\,\psi_{T|a}(\tau) + \beta \exp(b h_1)\,\psi_{T|b}(\tau) = \exp(S_0 h_1).$$

Now, insert h2(τ) into Equation 1, repeat the same argument, and we have a system of two equations:

$$\begin{aligned} \alpha \exp(a h_1)\,\psi_{T|a}(\tau) + \beta \exp(b h_1)\,\psi_{T|b}(\tau) &= \exp(S_0 h_1),\\ \alpha \exp(a h_2)\,\psi_{T|a}(\tau) + \beta \exp(b h_2)\,\psi_{T|b}(\tau) &= \exp(S_0 h_2). \end{aligned}$$

Since we have two equations, we can rearrange for both CCFs:

$$\psi_{T|a}(\tau) = \frac{\exp(S_0 h_1 + b h_2) - \exp(S_0 h_2 + b h_1)}{\alpha\left[\exp(a h_1 + b h_2) - \exp(a h_2 + b h_1)\right]}, \qquad \psi_{T|b}(\tau) = \frac{\exp(S_0 h_2 + a h_1) - \exp(S_0 h_1 + a h_2)}{\beta\left[\exp(a h_1 + b h_2) - \exp(a h_2 + b h_1)\right]}. \tag{3}$$

By the law of total expectation, the marginal CF of T is as follows:

$$\psi_T(\tau) = \alpha\,\psi_{T|a}(\tau) + \beta\,\psi_{T|b}(\tau).$$

When 𝔼[X] = 0, Equation 2 is undefined. In this special case, ϕX(h) only crosses 1 once, at the trivial value h = h0 = 0. We circumvent this issue by taking the limit of Equation 2 as h0 → 0:

$$\alpha = \lim_{h_0 \to 0} \frac{\exp(S_0 h_0) - \exp(b h_0)}{\exp(a h_0) - \exp(b h_0)} = \frac{S_0 - b}{a - b}.$$

Alternatively, we can find a simpler martingale that allows us to find α in this special case. When 𝔼[X] = 0,

$$\mathbb{E}\left[S_t \mid S_{t-1}\right] = S_{t-1} + \mathbb{E}[X] = S_{t-1},$$

and we have another martingale. Writing it as a conservation statement and invoking Doob's optional stopping theorem:

$$\mathbb{E}\left[S_T\right] = S_0.$$

Split the expectation conditional on which threshold was crossed:

$$\alpha\,\mathbb{E}\left[S_T \mid S_T = a\right] + (1-\alpha)\,\mathbb{E}\left[S_T \mid S_T = b\right] = S_0.$$

Evaluate the conditional expectations and rearrange:

$$\alpha a + (1-\alpha)\,b = S_0 \quad\Longrightarrow\quad \alpha = \frac{S_0 - b}{a - b},$$

and we recover the same expression.
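The zero-drift result α = (S0 − b)/(a − b), as we read the limit above, admits the same kind of numerical check. Parameters are again illustrative:

```python
import random

a, b, s0 = 0.05, -0.05, 0.01
alpha_theory = (s0 - b) / (a - b)      # zero-drift crossing probability

random.seed(0)
trials, hits_a = 10_000, 0
for _ in range(trials):
    s = s0
    while b < s < a:
        s += random.gauss(0.0, 0.005)  # E[X] = 0: driftless steps
    hits_a += (s >= a)
alpha_sim = hits_a / trials
```

Note that in the driftless case the answer depends only on where S0 sits between the thresholds, not on the step variance.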

Figure 2


Special values of h extract threshold crossing probabilities and times from Wald's martingale. (Left) Under weak assumptions, the MGF of a random variable is convex and crosses 1 at exactly two values of h (cyan markers and lines). All MGFs cross 1 at h = 0 (square cyan marker), so that crossing is trivial and we discard it. The other crossing is nontrivial, and we use it to compute threshold-crossing probabilities. Furthermore, under those weak assumptions, the MGF crosses any horizontal line in the neighborhood of 1 at two values of h (magenta markers and lines). Then the logarithm of the MGF has two real roots in h in the neighborhood of 0. So for imaginary τ, the logarithm of the MGF has two complex roots. (Right) Those two complex roots h1(τ) and h2(τ) for the MGF in the left panel. We use them to calculate conditional threshold crossing times. Solid traces represent one root, and dashed traces represent the other. Red and gray traces represent the real and imaginary parts of those roots, respectively. When τ = 0, those complex roots pass through the MGF crossings of 1 (cyan markers and lines).

Wald's analysis is exact when ST hits either threshold exactly, i.e., when there is never any threshold overshoot. Generally speaking, ST can overshoot thresholds more when the variance of X is high. If the threshold overshoot is large with respect to the distance between the thresholds, then Wald's analysis can yield inaccurate estimates of threshold crossing probabilities and times. The literature reports techniques to estimate and/or bound threshold overshoot in order to obtain more accurate approximations of these quantities (Lai, 2009a). Since this paper is an introduction to sequential analysis, these techniques are beyond the scope of this study. We can often reduce barrier overshoot by defining the time step to be extremely brief so that St changes only slightly from one time step to the next. This approach is valid for many neuromorphic applications, in which a continuous quantity fluctuates between thresholds.
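The effect of overshoot, and the small-step remedy, can be seen directly in simulation. In this illustrative zero-drift example (our sketch, with arbitrary parameters), coarse steps bias the empirical crossing probability away from Wald's answer, while fine steps barely do:

```python
import random

def estimate_alpha(sigma, trials, a=0.05, b=-0.05, s0=0.01, seed=0):
    """Fraction of zero-mean Gaussian walks that hit a before b."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = s0
        while b < s < a:
            s += rng.gauss(0.0, sigma)
        hits += (s >= a)
    return hits / trials

alpha_exact = (0.01 - (-0.05)) / (0.05 - (-0.05))   # Wald: (S0 - b)/(a - b) = 0.6

# Coarse steps overshoot the thresholds badly; fine steps barely overshoot.
bias_coarse = abs(estimate_alpha(0.05, 10_000) - alpha_exact)
bias_fine = abs(estimate_alpha(0.002, 5_000) - alpha_exact)
```

Shrinking the step size is exactly the "extremely brief time step" remedy described above.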

3 Results

We now show how to apply sequential analysis in neuromorphic engineering, including worked examples.

3.1 Characterizing idealized event sensor pixel noise

Figure 3 illustrates key differences between conventional and neuromorphic vision sensors. Panel A is a schematic of a visual scene representation by a conventional camera. Conventional cameras discretize an analog signal (i.e., incident light, top plot) uniformly across space (by pixels) and time (by frames). They output a series of digitized frames comprising pixel values that describe the light they observed at uniformly spaced moments in time (panel A, bottom plot). Panel B shows how neuromorphic vision sensors (e.g., event sensors) represent a scene. Event sensor pixels generate events in response to changes in light intensity, typically by monitoring changes in log intensity or voltage (Gallego et al., 2022). When the integrated change exceeds a certain positive threshold, an ‘on event' is generated (green shaded area and raster plot, panel B). Conversely, if the change is sufficiently negative, an ‘off event' is triggered (red shaded area and raster plot, panel B). Output events can occur within microseconds of a change in light intensity.

Figure 3


Event sensors represent visual data differently than conventional cameras. (A) Conventional sensors convert an analog signal (top plot) into a series of frames that uniformly discretize the signal in space and time (bottom plot). (B) Neuromorphic sensors efficiently represent that analog signal (top plot) with precisely timed events (raster plots, bottom) when it increases (ON, green) or decreases (OFF, red). (C) Simplified circuit diagram of a neuromorphic vision sensor pixel. (D) Illustrative log-intensity of photons over time log λ(t) on the highlighted pixel in (E). (F) Corresponding voltage trajectory V(t) (cyan trace) for the highlighted neuromorphic pixel in panel G. V(t) approximates the change in log photon intensity from (D). V(t) is compared to on (green) and off (red) thresholds. Threshold crossings generate events that mark the microsecond at which the log photon intensity changed beyond the threshold. When either threshold is crossed, V(t) is reset to the other threshold. Collectively, the panels demonstrate key advantages of neuromorphic sensing: microsecond temporal resolution, sparse output proportional to scene dynamics, and in-pixel analog preprocessing that reduces bandwidth and energy consumption.

Figure 3C is a diagram illustrating how an event sensor pixel achieves such remarkable temporal precision. A photodiode transduces photons and charges a capacitor. The charging current is proportional to the change in the log intensity log λ(t) of the incoming photons. Panel D shows a hypothetical log intensity of photons incident on a single pixel from the image of panel E. Panel F illustrates that the event sensor pixel then compares the voltage across the capacitor to on (green horizontal line) and off (red horizontal line) thresholds. When the voltage exceeds either threshold, the pixel generates a corresponding event. The green and red raster plots in panel F present hypothetical event output of a single pixel from the neuromorphic ‘image' in panel G. The circuitry implementing this comparison and event generation is not shown in panel C. Event sensors can output data at different rates depending on the scene that they are filming. If nothing in the scene is moving with respect to the sensor, its output events are sparse, and its resulting output data rate can be very low, and vice versa. For example, the background of the scene illustrated in panel E is not moving with respect to the event sensor. Thus, event sensor pixels with a static background in their field of view will output few events, as shown in panel G. In contrast, conventional cameras output the same amount of data regardless of the scene.

Theoretically, if nothing in a scene changes, the event sensor should not output any events. However, in practice, event pixels generate ‘noisy' events even in a static scene with no true intensity change (Padala et al., 2018). Since there is no change in intensity, the light incident on a pixel during a timestep does not vary (regardless of how we define a timestep). Therefore, the key assumption of sequential analysis (i.i.d. timesteps) is met. We can then use sequential analysis to characterize the statistics of those noisy events, at least for idealized pixels. Even if sequential analysis does not accurately model the circuitry of a real event sensor pixel, it provides a benchmark for comparing the statistical properties of noisy events from a real event sensor.

The top plot in Figure 4 considers an idealized event sensor pixel as a sequential analysis problem. Let the cumulative sum St represent the voltage of the pixel Vt at time t. Let the change in the sum X be the change in the pixel voltage ΔV on a time step. To keep our calculations simple, let ΔV be normally distributed with mean μ and variance σ2. The thresholds a and b directly map to the on and off thresholds of the pixel. The top plot also shows two example realizations of the voltage path. One crosses the on threshold first (red trace and dashed line), and the other crosses the off threshold first (blue trace and dashed line). The waiting times for both voltage-path realizations are shown on the x-axis.

Figure 4


Noisy events from an idealized event sensor pixel receiving constant light intensity are a sequential analysis problem. (Top row) Schematic of a single event sensor pixel's voltage fluctuating between an on and an off threshold. We assume that the pixel receives constant light intensity and that the change in pixel voltage on a time step is i.i.d. We want to find the probability that the pixel generates an on or off event first, along with the conditional waiting time distributions. Two example voltage-path realizations are shown: one ultimately crosses the on threshold (red trace, upper dashed line) and the other crosses the off threshold (blue trace, lower dashed line). The waiting times of both paths are shown on the x-axis. (Middle row) Theoretical (solid traces) and simulation (dashed traces) CCFs and threshold crossing probabilities are practically identical. Real (pink/red) and imaginary (gray/black) parts of CCFs are plotted separately. (Bottom row) Analogous to the middle row, but with conditional probability distributions instead of CCFs. The red and blue traces are effectively inverse Fourier transforms of the CCFs above. The gold histograms are simulation results for conditional waiting times at threshold crossings. The two voltage-path realizations from the top row are plotted as samples from those distributions (red and blue bars). Simulation curves reflect 100,000 independent trials; sampling variance is below the line thickness at the plotted scale.

First, we calculate the probability that a pixel generates an on or off event first, i.e., threshold crossing probabilities. The MGF of ΔV is as follows:

$$\phi_{\Delta V}(h) = \exp\!\left(\mu h + \frac{\sigma^2 h^2}{2}\right).$$

This MGF was plotted in the left panel of Figure 2. ϕΔV(h) crosses 1 twice, once at zero and again at a non-trivial value:

$$h_0 = -\frac{2\mu}{\sigma^2}.$$

Inserting h0 into Equation 2, we determine the probability that the pixel will generate an on event before an off event. Put in some example parameter values a = 3, b = −2, V0 = 0, μ = 10−5, and σ = 10−2:

$$h_0 = -0.2, \qquad \alpha = \frac{1 - \exp(0.4)}{\exp(-0.6) - \exp(0.4)} \approx 0.522.$$
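These numbers can be reproduced directly; the numeric α below is our evaluation of Equation 2 with the stated parameters:

```python
import math

# Parameter values from the text.
a, b, V0 = 3.0, -2.0, 0.0
mu, sigma = 1e-5, 1e-2

# Nontrivial root of phi_dV(h) = exp(mu*h + sigma**2 * h**2 / 2) = 1.
h0 = -2.0 * mu / sigma**2            # = -0.2

# Equation 2: probability of an on event before an off event.
alpha = ((math.exp(V0 * h0) - math.exp(b * h0))
         / (math.exp(a * h0) - math.exp(b * h0)))
```

The slight positive drift μ makes on events marginally more likely than off events, even though the off threshold is closer.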

Next, we calculate waiting-time CCFs for noisy on- and off-events. For imaginary τ, −logϕΔV(h) = τ has two complex roots h1(τ) and h2(τ) (right panel, Figure 2). Obtaining their expressions is straightforward:

$$h_{1,2}(\tau) = \frac{-\mu \pm \sqrt{\mu^2 - 2\sigma^2\tau}}{\sigma^2}.$$

One root, say h1(τ), is given by one sign in the ± symbol, say +. The other root h2(τ) is given by the other sign. The right panel in Figure 2 plots these two complex roots. Inserting them into Equations 3, we find the waiting time CCFs of the idealized pixel's noisy events.
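As a sanity check on the root formula and on Equations 3: at τ = 0 the two roots should collapse to the two crossings of 1 (h = 0 and h = h0), and every CCF should evaluate to 1, as any characteristic function does at the origin. A sketch using the same illustrative parameters:

```python
import cmath

mu, sigma = 1e-5, 1e-2
a, b, S0 = 3.0, -2.0, 0.0

def roots(tau):
    """Complex roots h1, h2 of -log(phi_dV(h)) = tau for Gaussian steps."""
    d = cmath.sqrt(mu**2 - 2.0 * sigma**2 * tau)
    return (-mu + d) / sigma**2, (-mu - d) / sigma**2

h0 = -2.0 * mu / sigma**2
alpha = ((cmath.exp(S0 * h0) - cmath.exp(b * h0))
         / (cmath.exp(a * h0) - cmath.exp(b * h0)))

def psi_T_given_a(tau):
    """CCF of the crossing time, conditioned on hitting a (Equations 3)."""
    h1, h2 = roots(tau)
    num = cmath.exp(S0 * h1 + b * h2) - cmath.exp(S0 * h2 + b * h1)
    den = alpha * (cmath.exp(a * h1 + b * h2) - cmath.exp(a * h2 + b * h1))
    return num / den

h1_0, h2_0 = roots(0.0)
psi0 = psi_T_given_a(0.0)   # should equal 1
```

Evaluating `psi_T_given_a` on a grid of imaginary τ = iω reproduces the theoretical CCF traces in the middle row of Figure 4.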

The middle row of Figure 4 plots these CCFs for the parameter values stated above (thick solid traces). Pink and gray traces plot the real and imaginary parts of the CCFs, respectively. We also print the threshold crossing probabilities α and 1−α. These panels also compare simulation results with theoretical results. We ran 100,000 independent simulations of this idealized event sensor pixel with the parameter values stated above. We saved which threshold was crossed first and how many time steps the process took to reach it. Then we computed the Fourier transforms of our resulting waiting-time distributions to compare them with our theoretical CCFs. The reported numbers in the panels indicate very strong agreement between our calculated and simulated threshold-crossing probabilities. The dashed red and black traces in the middle panels show that our simulated CCFs very closely match our theoretical CCFs. Since we assumed that the pixel voltage changes only very slightly at each time step, the voltage is extremely unlikely to overshoot either threshold appreciably. So Wald's analysis is practically exact, and we ran enough simulations to converge on his solution.

The bottom row of Figure 4 shows that we can recover conditional probability distributions of T from those CCFs. For example, we find Pr(T|VT = on) from ψT|on(τ) via the inverse Fourier transform:

$$\Pr(T = t \mid V_T = \text{on}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \psi_{T|\text{on}}(i\omega)\,\exp(-i\omega t)\,d\omega,$$

writing τ = iω, since T is an integer number of time steps.

Pr(T|VT = off) is found analogously. The red and blue traces in the bottom row show the conditional probability distributions calculated from the CCFs in the middle row. The gold bars are histograms of our simulation results. Again, we see a very strong match between Wald's analysis and our simulation results. The two example voltage-path realizations from the top row are also displayed as samples from their corresponding conditional waiting-time distributions.

We can construct other waiting time CFs from ψT|a(τ) and ψT|b(τ). For example, let A represent the waiting time until a pixel generates an on event. Let B represent the number of off events that occur while we wait for the on event. Since each threshold crossing is an i.i.d. trial, the CCF of A given B is:

$$\psi_{A|B}(\tau) = \psi_{T|b}(\tau)^{B}\,\psi_{T|a}(\tau).$$

We can calculate the CF of A from this CCF:

$$\psi_A(\tau) = \mathbb{E}\left[\psi_{A|B}(\tau)\right] = \mathbb{E}\left[\psi_{T|b}(\tau)^{B}\right]\psi_{T|a}(\tau).$$

We evaluate this expectation by noting that B~Geom(1−α):

$$\psi_A(\tau) = \psi_{T|a}(\tau)\sum_{n=0}^{\infty} \alpha\,(1-\alpha)^n\,\psi_{T|b}(\tau)^n,$$

then recognizing the sum as a geometric series:

$$\psi_A(\tau) = \frac{\alpha\,\psi_{T|a}(\tau)}{1 - (1-\alpha)\,\psi_{T|b}(\tau)}.$$
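The geometric-series step is easy to verify numerically whenever |(1−α)ψT|b| < 1. The complex values below are arbitrary placeholders, not outputs of the pixel model:

```python
# Truncated series vs. closed form:
#   sum_n alpha*(1-alpha)**n * psi_b**n * psi_a  ==  alpha*psi_a / (1 - (1-alpha)*psi_b)
alpha = 0.522                  # illustrative crossing probability
psi_a = 0.9 - 0.1j             # arbitrary CCF values with |(1-alpha)*psi_b| < 1
psi_b = 0.8 + 0.3j

closed_form = alpha * psi_a / (1.0 - (1.0 - alpha) * psi_b)
truncated = sum(alpha * (1.0 - alpha)**n * psi_b**n * psi_a for n in range(200))

# At tau = 0 every CCF equals 1, so psi_A(0) = alpha / (1 - (1 - alpha)) = 1.
norm_check = alpha * 1.0 / (1.0 - (1.0 - alpha) * 1.0)
```

The normalization check confirms that ψA is itself a valid characteristic function.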

3.2 Leaky current-integrating node given SPAD input

Figure 5 shows another hardware implementation of a sequential analysis problem (Delic and Afshar, 2020). A single-photon avalanche diode (SPAD) operated in Geiger mode converts every detected photon into a precisely timed electrical impulse. Early SPAD-event processing circuits illustrate this principle (Afshar et al., 2020). Efficient real-time processing of SPAD array data is the subject of active investigation (Gyongy et al., 2020; Delic and Afshar, 2024). The gold trace in the top plot of Figure 5 is a hypothetical cumulative count of photons detected by a SPAD (left y-axis, top plot). When that SPAD feeds a current-integrating node with a finite leak conductance, the node voltage V(t) executes a biased random walk. V(t) jumps up by a fixed amount u upon each photon arrival and drifts down with constant slope m between arrivals (Delic and Afshar, 2024; Morrison et al., 2020). The cyan trace in the top plot is a realization of that random walk, given the observation of photons (right y-axis, top plot). When V(t) crosses an on or off threshold (dashed green and red lines in the top plot), an event is generated (shown in green and red raster plots), and V(t) is reset between these points. Sequential analysis can thoroughly and tractably analyze this circuit.

Figure 5


Diagram and operation of a leaky capacitive node that integrates SPAD sensor input. (Top plot) A single-photon avalanche diode (SPAD) converts each detected photon (the gold trace is the cumulative photon count, left y-axis) into identical current impulses that charge a capacitive node. The voltage across this node V(t) (cyan trace, right y-axis) leaks through a constant conductance and decays at a rate m between photon arrivals. Each photon arrival causes an instantaneous voltage jump of u in V(t). Two programmable comparators monitor V(t): an upper (on) threshold and a lower (off) threshold. When V(t) crosses the on threshold, an on event is emitted, and vice versa. After each event, Vt is reset between the thresholds. Notice that on events are more frequent as the photon count increases, and off events are more frequent when the photon count stagnates. (Middle plot) The leak rate m is decreased or increased after each on or off event, respectively (blue trace). This adaptive change implements a refractory-like gain control mechanism that balances the event rate between the two thresholds. (Bottom plot) Circuit diagram of the SPAD and capacitive node.

The tractability of sequential analysis is a powerful feature. Absorption probabilities and times are expressed as functions of the problem's input parameters. Thus, if we change some parameter values upon threshold crossing, Wald's methodology remains applicable for analyzing future threshold crossings. For example, say we change the value of the decay rate m after every threshold crossing. We could increase or decrease it depending on which threshold is crossed to achieve a habituation-like gain-control mechanism. The blue trace in the middle plot of Figure 5 is a realization of m when we implement such a rule. Every time an on event is generated, the slope decreases, and vice versa. Thus, every event biases V(t) toward the opposite threshold on the next random walk. In Figure 5, notice that an increasing photon count initially causes the current-integrating node to frequently generate on events. However, as m decreases, the on events habituate, and we begin to observe off events. Then, when the photon count stagnates, we observe the reverse effect. We could achieve a similar gain-control effect through other mechanisms. For example, upon crossing a threshold, we could change the threshold values themselves to bias future crossings analogously. The circuit at the bottom of Figure 5 is a diagram of how to implement these types of feedback mechanisms in hardware. Whichever mechanism(s) we use, sequential analysis tells us how to compute threshold-crossing probabilities and times.
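The adaptation rules described above can be written as a small event-driven update. This is our sketch of the idea, not the authors' circuit: the step sizes `dm` and `dtheta` are hypothetical, and either mechanism (decay-rate adaptation or threshold shifting) could be used on its own:

```python
def update_after_event(event, m, thresholds, dm=1e-4, dtheta=0.1):
    """Habituation-like gain control applied after a threshold crossing.
    event: 'on' or 'off'. Returns the adapted decay rate and thresholds."""
    on_th, off_th = thresholds
    if event == 'on':
        m -= dm                                  # steeper decay: bias toward off
        on_th, off_th = on_th + dtheta, off_th + dtheta
    elif event == 'off':
        m += dm                                  # shallower decay: bias toward on
        on_th, off_th = on_th - dtheta, off_th - dtheta
    return m, (on_th, off_th)

m, thresholds = -0.00999, (10.0, -10.0)
m, thresholds = update_after_event('on', m, thresholds)
```

Because Wald's expressions are functions of the current parameter values, the analysis can simply be re-applied with the updated m and thresholds after every event.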

Define a ‘time step' to be the waiting time until a photon arrival, including the arrival itself. We model photon arrival to the SPAD as a Poisson process with rate λ. Then the waiting time until photon arrival E is exponentially distributed; E ~ Exp(λ). While waiting for the photon to arrive at the SPAD, the node voltage decays linearly at a rate m < 0. When a photon arrives, its voltage bumps up by a constant amount u > 0. Thus, the change in the node's voltage over a time step is as follows:

$$X = u + mE, \qquad E \sim \text{Exp}(\lambda).$$

The MGF of X is as follows:

$$\phi_X(h) = \exp(uh)\,\frac{\lambda}{\lambda - mh}, \qquad mh < \lambda.$$
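This MGF, a deterministic jump u plus m times an Exp(λ) waiting time, is easy to validate by Monte Carlo. Parameter values here are illustrative:

```python
import math, random

u, m, lam = 0.1, -0.01, 0.1           # illustrative jump, decay rate, photon rate
h = 1.0                               # any h with m*h < lam

# Closed form: phi_X(h) = exp(u*h) * lam / (lam - m*h).
phi_theory = math.exp(u * h) * lam / (lam - m * h)

# Monte Carlo estimate of E[exp(h*X)] with X = u + m*E, E ~ Exp(lam).
random.seed(0)
n = 200_000
acc = 0.0
for _ in range(n):
    E = random.expovariate(lam)       # waiting time until the next photon
    acc += math.exp(h * (u + m * E))
phi_mc = acc / n
```

The agreement confirms that the exponential factor (from the jump) and the rational factor (from the exponentially distributed decay interval) combine as stated.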

Again, we begin by calculating threshold crossing probabilities. Notably, we cannot find a closed-form expression for the non-trivial crossing at ϕX(h0) = 1. We could numerically evaluate that crossing, but instead, we will employ an approximation. Assume that 𝔼[X]≈0 so that h0≈0. Setting ϕX(h0) = 1 gives λexp(uh0) = λ − mh0. Taylor expanding the exponential:

$$\lambda\left(1 + u h_0 + \frac{u^2 h_0^2}{2}\right) \approx \lambda - m h_0.$$

Rearranging for h0:

$$h_0 \approx -\frac{2}{u^2}\left(u + \frac{m}{\lambda}\right) = -\frac{2\,\mathbb{E}[X]}{u^2}.$$

Equation 2 is very sensitive to the value of h0 used because h0 appears in exponentials. Thus, our approximation for h0 must be very accurate to yield accurate approximations of threshold crossing probabilities. We can enhance the accuracy of our approximation by adding more terms to the Taylor expansion of the exponential. Or we can revert to a numerical solver to achieve sufficient accuracy for Wald's analysis. Whatever method we use to obtain h0, threshold crossing probabilities are then given by Equation 2.
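Both routes to h0 can be compared directly. The sketch below (illustrative parameters matching the Figure 6 example) pits the second-order Taylor approximation against a plain bisection root-finder applied to ϕX(h) = 1:

```python
import math

u, m, lam = 0.1, -0.00999, 0.1        # E[X] = u + m/lam is close to 0

# Second-order Taylor approximation: h0 ~ -2*(u + m/lam)/u**2.
h0_taylor = -2.0 * (u + m / lam) / u**2

# Numerical root of phi_X(h) = 1, i.e. lam*exp(u*h) - (lam - m*h) = 0,
# bracketed away from the trivial root at h = 0.
def f(h):
    return lam * math.exp(u * h) - (lam - m * h)

lo, hi = -0.1, -0.001                 # f(lo) > 0 > f(hi) for these parameters
for _ in range(100):                  # plain bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
h0_numeric = 0.5 * (lo + hi)
```

With these near-zero-drift parameters, the two estimates agree to several decimal places, which is the accuracy regime in which Equation 2's exponentials remain trustworthy.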

Next, we obtain waiting time CCFs by finding two complex roots h1(τ) and h2(τ) to the equation −logϕX(h) = τ:

$$\lambda \exp(uh) = (\lambda - mh)\exp(-\tau).$$

Rearranging, and again Taylor expanding the exponential to second order, we find that h(τ) is given by the quadratic formula:

$$h_{1,2}(\tau) = \frac{-\left(\lambda u + m e^{-\tau}\right) \pm \sqrt{\left(\lambda u + m e^{-\tau}\right)^2 - 2\lambda^2 u^2\left(1 - e^{-\tau}\right)}}{\lambda u^2}.$$

Again, h1(τ) is given by one sign in the ± symbol, and h2(τ) by the other. Inserting h1(τ) and h2(τ) into Equations 3, we find the waiting time CCFs to threshold crossing.

Figure 6 applies Wald's analysis to the circuit diagram at the bottom of Figure 5. We achieved gain control for the circuit by adjusting the on and off thresholds, depending on which threshold was crossed. The top row of Figure 6 shows one example of how the thresholds evolve over a single trial of 2 × 10⁶ seconds (red and blue traces). We initialized V0 = 0 and the on and off thresholds at 10 and −10, respectively. Both thresholds were increased or decreased by 0.1, depending on which was crossed. Our other parameter values were u = 0.1, m = −0.00999, and λ = 0.1 (so 𝔼[X] ≈ 0). Halfway through the simulation, we slightly increased the photon intensity from 0.1 to 0.102 photons per second (yellow shaded background, top plot). The timings of on- and off-threshold crossings are represented as red and blue raster plots, respectively. For the first half of the simulation, on and off events are approximately equally frequent, so thresholds do not appreciably change. Then, when λ increases, on events become significantly more frequent. Both thresholds increase, then balance each other at new values.

Figure 6


A SPAD sensor feeding inputs to a thresholded capacitive node is a sequential analysis problem. (Top row) Output spikes and thresholds of a capacitive node given SPAD sensor input. The SPAD sensor transduces Poisson-distributed photons into electrical impulses. Those impulses are accumulated by a capacitive node. When an impulse arrives, the node's voltage bumps up by an amount u = 0.1. While no impulses arrive, the voltage decays linearly at a rate of m = −0.00999. The voltage is bounded by two thresholds (red and blue traces), and the node produces on and off events (green and red raster plots) when either threshold is crossed. Both thresholds increase or decrease by 0.1 when the on or off threshold is crossed, respectively. Halfway through the simulation, the photon rate increases from λ = 0.1 to λ = 0.102 (yellow shaded area). As event frequency increases, off events become rare. The thresholds increase accordingly and stabilize at new, higher values when on and off events occur at similar rates. (Middle row) CCFs of the number of photons required to cross the on (left panel) or off (right panel) threshold. CCFs and simulation results are plotted in a manner analogous to those in Figure 4. Each panel plots two CCFs, one with thresholds at −5 and 15 (solid traces) and the other with thresholds at −15 and 5 (dashed traces). (Bottom row) Threshold evolution of the capacitive node from 1,000 independent simulations identical to the top row. Thresholds were initialized at −10 and 10 with an initial voltage set to V0 = 0 in all simulations. Simulation curves reflect 100,000 independent trials; sampling variance is below line thickness at the plotted scale.

The middle row of Figure 6 plots CCFs of the number of photons required to hit either threshold, for particular parameter values. It is directly analogous to the middle row of Figure 4. The only difference between the middle rows of Figures 4 and 6 is that we plot two CCFs in each panel instead of one. Each CCF was plotted for different threshold values. The solid-trace CCFs used threshold values of a = 15 and b = −5, and the dashed-trace CCFs used a = 5 and b = −15. For all CCFs we used V0 = 0, λ = 0.1, and u = 0.1. We set m = −λu + 10⁻⁵ to ensure that 𝔼[X] ≈ 0 for all CCFs. The middle row of Figure 6 shows that Wald's CCFs are sensitive to threshold values. So every time the capacitive node spikes and we change the threshold values, those CCFs can change appreciably.

The bottom row of Figure 6 plots the evolution of the capacitive node's on and off thresholds over 1,000 independent trials. As in the top row, we changed the rate of incoming photons from λ = 0.1 to λ = 0.102 halfway through each trial (yellow-shaded background). Each time the node spiked, we adjusted both thresholds, as in the top row, depending on which threshold was crossed. Even a small 2% increase in the photon rate biases the node voltage to cross the on threshold with a much higher probability and in a much shorter time. Since both thresholds increase with each event from the node (and vice versa), both thresholds increase soon after the rate of incoming photons increases. Eventually, the thresholds saturate and stabilize around new values.

Recall that our definition of a time step was the waiting time until a photon arrival, including the arrival itself. Therefore, the random variables of our CCFs are the number of photon arrivals required to cross one threshold or the other. In Figure 6, notice that the random variable in the CCFs (middle row) and the x-axes of the top and bottom plots are numbers of photons (P), and not time T. We can easily switch random variables from P to T. Upon threshold crossing, P and T are linearly related:

mT + uP = a − V0 (crossing at a), or mT + uP = b − V0 (crossing at b), so T = (a − V0 − uP)/m or T = (b − V0 − uP)/m.

So the CCFs of T are:

ψT|a(τ) = e^{iτ(a−V0)/m} ψP|a(−uτ/m), and ψT|b(τ) = e^{iτ(b−V0)/m} ψP|b(−uτ/m).
3.3 Hypothesis testing

Sequential analysis is directly applicable to hypothesis testing (Wald and Wolfowitz, 1948). Suppose we have two competing hypotheses H0 (the “null hypothesis”) and H1 (the “alternative hypothesis”). We observe data that support one hypothesis or the other until we accumulate sufficient evidence to accept one and reject the other with a specified confidence level. For example, imagine a SPAD sensor receiving photons at a rate λ that can only be one of two values, λ0 or λ1. We can set our null hypothesis to be that λ = λ0 and our alternative hypothesis to be λ = λ1. We observe the output of the SPAD sensor until we conclude, with a specified confidence level, that the photon rate is one value or the other.

The top plot in Figure 7 frames sequential analysis as an online algorithm for hypothesis testing. This algorithm is called the sequential probability ratio test (Wald and Wolfowitz, 1948), and it has been applied across many disciplines (Li and Kulldorff, 2010; Kulldorff et al., 2011; Wang and Wan, 2017; Gold and Shadlen, 2007). Let the sum St be the log-likelihood ratio Lt of the data under the two hypotheses. For example, we calculate Lt for a Poisson rate of incoming photons by taking the log of the ratio of Poisson distributions:

Lt = log[(e^{−λ1t}(λ1t)^P/P!)/(e^{−λ0t}(λ0t)^P/P!)] = P log(λ1/λ0) − (λ1 − λ0)t
for P photons in time t. The inset in the top plot shows how Lt increases by a constant amount upon photon arrival and decays linearly between arrivals. While b<Lt<a, we continue observing new data points because we cannot accept either hypothesis with sufficient confidence. Then, when Lt crosses either threshold, we accept the corresponding hypothesis. Two example paths of Lt are plotted in Figure 7, one crossing the upper threshold (red path) and the other the lower threshold (blue path).

Figure 7


The sequential probability ratio test is a famous example of a sequential analysis problem. (Top plot) The sum St is the log-likelihood ratio Lt of the data under two competing hypotheses. In this example, the hypotheses are the photon arrival rates λ0 and λ1. Two example paths of Lt are plotted, one crossing threshold a (red) and the other b (blue). Thresholds map to the probabilities of correct and incorrect hypothesis detection pc and pi (y-axis). When Lt crosses a threshold, the test accepts the corresponding hypothesis. The inset is a magnification showing the behavior of Lt given the first few photons. (Bottom plots) CCFs of the number of photons required for threshold crossing, ψP|a(τ) (left column) and ψP|b(τ) (right column). The top CCFs assume a photon rate of λ0 and the bottom CCFs assume a photon rate of λ1. Interpretation of these panels is directly analogous to all other CCFs presented in earlier figures. In all plots, λ0 = 0.1, λ1 = 0.102, pc = 0.9, and pi = 0.1. Notice that α ≈ pc or pi, depending on which hypothesis was true. Simulation curves reflect 100,000 independent trials; sampling variance is below line thickness at the plotted scale.

The thresholds a and b map beautifully to the probabilities of correct and incorrect hypothesis detection pc and pi:

a = log(pc/pi), b = log((1 − pc)/(1 − pi)).
Thus, before we start the test, we decide on ‘acceptable' errors: accepting the hypothesis H0 when H1 is true, and vice versa. Then we simply calculate the thresholds that we should use for the test to achieve those probabilities. In Figure 7, we set pc = 0.9 and pi = 0.1. One reason that the sequential probability ratio test is a very popular hypothesis testing algorithm is that it enjoys a significant optimality property. No other test can achieve the same or better pc with a lower expected number of samples (Wald and Wolfowitz, 1948). When samples are expensive, e.g., testing the efficacy of a new drug that is expensive to manufacture, this optimality property is very valuable.
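The recipe above (choose error probabilities, compute thresholds, accumulate the log-likelihood ratio until it escapes the interval (b, a)) can be sketched as follows for Poisson photon streams. Here a = log(pc/pi) and b = log((1 − pc)/(1 − pi)) are Wald's standard threshold approximations; the function name and parameter values are our own:

```python
import math
import random

def sprt_poisson(lam_true, lam0=0.1, lam1=0.102, pc=0.9, pi=0.1, seed=1):
    """Sequential probability ratio test between Poisson rates lam0 and lam1.

    The log-likelihood ratio L bumps up by log(lam1/lam0) at each photon and
    decays at rate -(lam1 - lam0) between photons. It accumulates until it
    escapes the interval (b, a), at which point we accept a hypothesis.
    """
    a = math.log(pc / pi)               # accept H1 when L >= a
    b = math.log((1 - pc) / (1 - pi))   # accept H0 when L <= b
    u = math.log(lam1 / lam0)           # jump per photon
    m = -(lam1 - lam0)                  # decay rate between photons
    rng = random.Random(seed)
    L, t, photons = 0.0, 0.0, 0
    while b < L < a:
        wait = rng.expovariate(lam_true)  # waiting time until the next photon
        t += wait
        L += m * wait + u
        photons += 1
    return ('H1' if L >= a else 'H0'), photons, t

decision, photons, t = sprt_poisson(lam_true=0.102)
```

Running many trials with the true rate set to λ1 accepts H1 in roughly a fraction pc of trials, as the threshold construction intends.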

Our sequential probability ratio test on Poisson-distributed photons is identical to our application of the current-integrating node, given SPAD sensor input, from the previous subsection. The linear decay between photon arrivals (what we called m) is −(λ1 − λ0) here. The voltage bump upon photon arrival (what we called u) is log λ1 − log λ0 here. Therefore, the capacitive node from the previous subsection implements a sequential probability ratio test for two particular photon intensities. The values of the photon intensities in the test are implied by the slope of the decay between photon arrivals and the amount its voltage bumps up upon photon arrivals:

λ1 − λ0 = −m and log(λ1/λ0) = u, so λ0 = −m/(e^u − 1) and λ1 = −m e^u/(e^u − 1).
The thresholds of the capacitive node define probabilities of correct and incorrect hypothesis detection of the test. Thus, when we changed thresholds after each threshold crossing in the previous subsection, we implicitly changed those detection probabilities. The output spikes of the node are its assertions that it accepts one photon rate over another. We can simply reuse the mathematical analysis from the previous subsection for this sequential probability ratio test.
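A quick numerical check of this correspondence: assuming u = log(λ1/λ0) and m = −(λ1 − λ0), inverting recovers the implied photon rates from a node's decay rate and bump size. The parameter values are example choices of ours:

```python
import math

m, u = -0.00999, 0.1   # example node parameters (our choice)

# Invert u = log(lam1/lam0) and m = -(lam1 - lam0) to recover the implied rates:
lam0 = -m / (math.exp(u) - 1.0)
lam1 = lam0 * math.exp(u)
```

Substituting the recovered rates back into the forward map reproduces m and u exactly, confirming that a given (m, u) pair implies a unique pair of hypothesized photon intensities.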

The bottom half of Figure 7 plots four waiting time CCFs that we obtained from Equation 3. More specifically, they are CCFs for the number of photons P required to cross either threshold. Again, real and imaginary parts of the CCFs are given by the pink/red and gray/black traces, respectively. Again, solid traces are theoretical results from Equation 3 and dashed traces are Fourier transforms of simulated waiting times until threshold crossing. In all plots, we set the photon rates to be λ0 = 0.1 and λ1 = 0.102. The CCFs in the left column are ψP|a(τ) and the CCFs in the right column are ψP|b(τ). The top CCFs were evaluated under hypothesis H0, i.e., the photon rate was λ = λ0. The bottom CCFs were evaluated under hypothesis H1. We print threshold crossing probabilities in the left panels. Notice that α ≈ 0.1 (under H0) and α ≈ 0.9 (under H1) are in close agreement with our desired probabilities of incorrect and correct detection, pi and pc.

4 Discussion

The mathematics of threshold-crossing problems, exemplified by sequential analysis, provides a rigorous framework for evaluating and interpreting neuromorphic architectures. We have illustrated how the same mathematical framework can play three different roles, depending on its context. First, sequential analysis can serve as a benchmark to compare against hardware. Similar to the Carnot engine in thermodynamics, it does not need to model real devices to be useful. Second, it can serve as a proxy model that makes circuit behavior tractable. By abstracting away transistor-level details, sequential analysis offers a tractable model that maps circuit parameters to dynamics. Third, it can serve as a design tool that prescribes optimal circuit behavior. Framing circuits as statistical decision-makers turns parameter selection into a principled mapping from problem specification to circuit design. Together, these three roles establish sequential analysis as a benchmark, proxy model, and design tool for neuromorphic hardware within a unified statistical language.

Our first application, noisy event pixels, illustrates the value of sequential analysis as a benchmark. Real pixels generate spurious events even under constant illumination (Lichtsteiner et al., 2008; iniVation, 2020; Hamilton et al., 2014). The statistics of those events are difficult to model in detail because of device mismatch and circuit-level complexity. Sequential analysis does not attempt to reproduce those details. Instead, it provides the ideal statistical baseline for fluctuations, defining what noise would look like if the pixel were to follow a perfectly tractable random process. This role is directly analogous to that of the Carnot engine in thermodynamics. No real physical engine achieves it, but it defines the efficiency ceiling and establishes a rigorous language for comparison. In the same way, sequential analysis supplies neuromorphic engineers with an ideal standard against which measured device behavior can be evaluated. Deviations from this benchmark are then informative rather than merely problematic.

Our second application, adaptive dynamics in neuromorphic circuits, illustrates the value of sequential analysis as a proxy model. Circuit behavior is often shaped by many interdependent parameters at the transistor level, making tractable analysis impossible. Sequential analysis does not replicate those low-level mechanisms. Instead, it provides a simplified statistical model in which adaptation appears as a shift in decision thresholds or effective time constants. This abstraction connects measurable input–output behavior to the circuit's underlying computational role. Sequential analysis provides a tractable stand-in that reveals how circuit parameters drive observable dynamics.

Our third application, decision-making circuits, illustrates the value of sequential analysis as a design tool (Burri et al., 2014; Woods et al., 2019). When circuits are viewed as statistical decision-makers, problem requirements such as tolerable error rates map directly onto circuit parameters, such as thresholds and decay rates. Sequential analysis provides the formal framework that defines this mapping, making design choices constructive rather than empirical. Under the assumptions of sequential analysis, hardware decision times inherit valuable optimality properties (Wald and Wolfowitz, 1948). This perspective also makes the computational function of a circuit node transparent. For example, a current-integrating node can be interpreted as implementing an online sequential probability ratio test on SPAD input. Each output event is its probabilistic assertion that the incident photon intensity belongs to one hypothesis rather than another. Threshold values specify the confidence level of those assertions. Sequential analysis, therefore, provides both design prescriptions and rigorous probabilistic interpretations of circuit behavior.

While powerful, sequential analysis rests on restrictive assumptions that limit its direct applicability. Classical formulations require independent and identically distributed observations. In practice, neuromorphic signals rarely meet this criterion. Inputs are often time-varying, reflecting both stimulus dynamics and adaptive circuit responses. They are also correlated across space and time, violating the independence assumptions of Wald's classical approach. Moreover, optimal sequential tests assume that the likelihood ratio between competing hypotheses can be written down explicitly. In practice, this explicit representation is rarely available. The probability distributions of real inputs are often too complex to admit closed-form likelihoods, e.g., due to device mismatch. These challenges do not invalidate sequential analysis, but they highlight the need for extensions that adapt the framework to more realistic conditions.

The assumption of stationarity is incompatible with sensory streams, where input rates fluctuate due to both external stimuli and circuit-level adaptation. Sequential analysis can still be applied by invoking the time-rescaling theorem (Brown et al., 2002; Haslinger et al., 2010), which transforms an inhomogeneous Poisson process into a homogeneous one of unit rate. This transformation preserves the statistics of event timing while removing the nonstationarity. So we can apply classical sequential methods in the rescaled time domain. If we also had an estimate of the inhomogeneous intensity (Ng and Murphy, 2019), we could invert our results to return to the original time domain. Circuits that integrate changing input rates can still be interpreted within a sequential framework, but with temporal variability absorbed into the rescaling.
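The rescaling step can be sketched as follows. We generate an inhomogeneous Poisson process by thinning, then map each event time ti to τi = Λ(ti), where Λ is the cumulative intensity; the τi then form a unit-rate process. The intensity function, integration grid, and function names below are illustrative choices of ours:

```python
import math
import random

def sample_inhomogeneous(intensity, lam_max, t_max, seed=0):
    """Generate event times of an inhomogeneous Poisson process by thinning."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam_max)      # candidate from rate-lam_max process
        if t > t_max:
            return times
        if rng.random() < intensity(t) / lam_max:
            times.append(t)                # keep with probability lam(t)/lam_max

def rescale_times(event_times, intensity, dt=0.01):
    """Map event times to a unit-rate process via the time-rescaling theorem.

    tau_i = Lambda(t_i), with Lambda the cumulative intensity, accumulated
    here by trapezoidal integration on a grid of width dt.
    """
    taus, acc, prev = [], 0.0, 0.0
    for t in event_times:
        n = max(1, int((t - prev) / dt))
        step = (t - prev) / n
        for k in range(n):
            s0 = prev + k * step
            s1 = s0 + step
            acc += 0.5 * (intensity(s0) + intensity(s1)) * step
        taus.append(acc)
        prev = t
    return taus
```

After rescaling, the inter-event intervals of the τi are approximately Exp(1), which is the standard goodness-of-fit check associated with the time-rescaling theorem.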

The assumption of independence is also violated in practice, since neuromorphic signals are often correlated across space and time. These dependencies break the classical proofs of optimality that rely on the i.i.d. structure of Wald's original framework. However, the sequential paradigm itself does not require independence. It only requires that we can quantify how new evidence modifies the decision statistic. Correlations can be incorporated directly into the likelihood function when the joint distribution is known or approximated. Even when exact models are unavailable, structured approximations can preserve the essential behavior of the test statistic. Correlation complicates the math, but the core idea of accumulating evidence toward a threshold remains valid.

A further assumption is the absence of threshold overshoot: increments are implicitly taken to be small enough that the process crosses the boundary exactly rather than leaping past it. Real neuromorphic circuits do not obey this constraint. In practice, overshoot does not invalidate the sequential framework; it alters only the mapping between the physical voltage trajectory and the effective statistical test. Standard corrections are available: boundary-adjustment factors derived from renewal theory, first-passage approximations for jump processes, or continuous-time formulations in which the hazard rate characterizes crossings without requiring perfect boundary contact. These provide engineering countermeasures that preserve the utility of sequential analysis even when threshold crossings occur with finite overshoot.

Although our derivations rely on i.i.d. and stationary increments, their role in the three case studies is as a principled reference rather than a literal description of hardware or biology. For the idealized event-pixel example, the assumption isolates the fundamental stochastic mechanism underlying threshold crossings; real pixels introduce temporal correlations, drift, and device-level variability, but these effects serve as structured deviations from the baseline predicted by sequential analysis rather than contradictions. For the LIF node driven by SPAD-like input, the model already includes leaky dynamics and thus diverges from strict i.i.d. behavior; here, sequential analysis provides an analytically tractable approximation that captures the correct scaling of crossing statistics even when the microscopic increments are not perfectly independent. In the SPRT case study, non-i.i.d. evidence streams simply alter the effective log-likelihood update without compromising the decision-theoretic framework. Across all three examples, the idealized assumptions provide a clean baseline that clarifies how each system behaves when higher-order correlations, drift, or non-stationarities are added. This makes deviations interpretable and allows the sequential analysis predictions to function as a benchmark rather than a surrogate for full hardware realism.

Our contribution is conceptual rather than device-specific. We establish how classical sequential analysis provides analytically tractable predictions for threshold-crossing behavior and decision dynamics, and we demonstrate this across three distinct neuromorphic contexts. The case studies serve as proof of principle, showing how the framework can be integrated into existing modeling and design workflows. The simulations validate the theory under controlled conditions. Measurements of event-sensor noise often show deviations from the idealized model used in our first case study, including illumination-dependent drift and heavier-tailed inter-event statistics. These discrepancies are expected: sequential analysis defines the baseline fluctuations in the absence of such circuitry-specific effects, and real data typically sit above this baseline in predictable ways. Hardware evaluation is, therefore, a natural next step for future work.

Although our case studies focus on single nodes, the same framework extends naturally to larger SNNs. Threshold tuning in multi-layer networks is often heuristic, whereas sequential analysis provides explicit predictions for firing rates, false-alarm probabilities, and detection delays. These quantities can be propagated through a network because each layer filters and transforms the distribution observed by the previous one. This makes sequential analysis a principled complement to existing tuning methods: it provides analytical targets for threshold setting and clarifies how architectural changes affect the statistics of threshold events.

Sequential analysis occupies a distinct position relative to existing neuromorphic modeling approaches. Circuit-level models, such as mixed-signal simulations, differential-equation descriptions of neurons, or large-scale numerical SNN frameworks, emphasize numerical fidelity but rarely yield closed-form predictions for error rates, latency distributions, or threshold-crossing probabilities. In contrast, sequential analysis trades low-level detail for analytical transparency; it provides exact or asymptotic expressions for these quantities with far lower computational cost. Its value is therefore complementary rather than competitive: classical models capture device realism, while sequential analysis supplies principled performance bounds and interpretable operating points that would otherwise require extensive simulation to estimate.

The same threshold-centric view also connects to contemporary SNN architectures. In models such as Spiking Transformers (Zhao et al., 2025), multi-modal SNNs with temporal attention (Shen et al., 2025), or recurrent SNNs employing adaptive history mechanisms (Xu et al., 2023), performance ultimately depends on how local membrane-state variables cross internal thresholds to emit spikes. Sequential analysis provides closed-form links between increment statistics, firing probabilities, and expected latencies. These metrics can inform the co-design of thresholds or attention-gating rules in these architectures. More biologically grounded models with adaptive dendritic processes (Mao et al., 2025) can also be framed in this way: the dendritic dynamics shape the effective distribution seen at the soma (or the relevant spike initiation zone), and the same machinery characterizes the resulting crossing statistics. Thus, while specific implementation details differ across architectures, the underlying principles (increment distributions, drift, variance, and threshold geometry) admit the same analytical treatment and offer a complementary tool for understanding and tuning complex SNNs.

Classical sequential tests require exact likelihood functions to define optimal decision rules (Singh and Zhurbenko, 1975; Bartoo and Puri, 1967). The assumption of known likelihood ratios is fragile because the underlying distributions may be analytically intractable. When these functions are unknown, one approach is to exploit the Karlin–Rubin theorem: within exponential families, we can still identify uniformly most powerful tests, even without explicit forms of the likelihood ratio. More generally, sequential analysis can be extended to composite hypothesis testing, where decision rules are constructed across a family of possible distributions rather than a single known model (Tartakovsky et al., 2014). We can often relax the requirement of closed-form likelihoods while retaining the sequential structure of accumulating evidence toward a threshold.

Sequential analysis provides more than a narrow mathematical idealization. We consider it a practical language for thinking about neuromorphic circuits. As a benchmark, it defines rigorous statistical baselines that reveal when hardware deviates from expectation. As a proxy model, it makes intractable circuit dynamics interpretable by recasting them in a simplified but faithful statistical form. As a design tool, it translates performance goals into circuit parameters, providing constructive prescriptions for circuit design rather than empirical tuning. While its classical formulation relies on restrictive assumptions of stationarity, independence, and closed-form likelihoods, these are not fatal obstacles. Extensions such as time-rescaling, correlated likelihoods, and composite hypothesis testing preserve the sequential paradigm while broadening its reach to realistic neuromorphic signals and circuits.

Statements

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.

Author contributions

SM: Investigation, Writing – review & editing, Writing – original draft, Formal analysis, Validation, Methodology, Conceptualization. SA: Visualization, Writing – original draft, Supervision, Validation, Methodology, Writing – review & editing. TM: Visualization, Project administration, Formal analysis, Writing – review & editing, Conceptualization, Supervision, Methodology, Writing – original draft, Investigation.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2025.1735027/full#supplementary-material

References

  • 1

    Afshar S. Hamilton T. J. Davis L. van Schaik A. Delic D. V. (2020). Event based processing of single photon avalanche diode sensors. IEEE Sens. J. 20, 76777691. doi: 10.1109/JSEN.2020.2979761

  • 2

    Bartoo J. B. Puri P. S. (1967). On optimal asymptotic tests of composite statistical hypotheses. Ann. Math. Stat. 38, 18451852. doi: 10.1214/aoms/1177698617

  • 3

    Brown E. N. Barbieri R. Ventura V. Kass R. E. Frank L. M. (2002). The time-rescaling theorem and its application to neural spike train data analysis. Neural Comput. 14, 325346. doi: 10.1162/08997660252741149

  • 4

    Burri S. Charbon E. Bruschini C. (2014). Architecture and applications of a high resolution gated SPAD image sensor. Optics Express22, 1757317589. doi: 10.1364/OE.22.017573

  • 5

    Christensen D. V. Dittmann R. Linares-Barranco B. Sebastian A. Le Gallo M. Redaelli A. et al . (2022). 2022 roadmap on neuromorphic computing and engineering. Neuromorph. Comput. Eng. 2:022501. doi: 10.1088/2634-4386/ac4a83

  • 6

    Delic D. V. Afshar S. (2020). Neuromorphic single photon avalanche detector (spad) array microchip. Filed15:2020. Available online at: https://patents.google.com/patent/US11474215B2/en

  • 7

    Delic D. V. Afshar S. (2024). “Neuromorphic computing for compact LiDAR systems,” in More than Moore Devices and Integration for Semiconductors, chap. 9, eds F. Iacopi and F. Balestra (Cham: Springer Nature), 191240. doi: 10.1007/978-3-031-21610-7_6

  • 8

    Doob J. L. (1953). Stochastic Processes. New York, NY: Wiley.

  • 9

    Gallego G. Delbruck T. Orchard G. Bartolozzi C. Taba B. Censi A. et al . (2022). Event-based vision: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 44, 154180. doi: 10.1109/TPAMI.2020.3008413

  • 10

    Gold J. I. Shadlen M. N. (2007). The neural basis of decision making. Annu. Rev. Neurosci. 30, 535574. doi: 10.1146/annurev.neuro.29.051605.113038

  • 11

    Gyongy I. Halimi A. et al . (2020). A 128 × 128 SPAD motion triggered time of flight image sensor with in pixel histogram and column parallel vision processor. IEEE J. Solid State Circuits55, 17621775. doi: 10.1109/JSSC.2020.2993722

  • 12

    Hamilton T. J. Afshar S. van Schaik A. Tapson J. (2014). Stochastic electronics: a neuro-inspired design paradigm for integrated circuits. Proc. IEEE102, 843859. doi: 10.1109/JPROC.2014.2310713

  • 13

    Haslinger R. Pipa G. Brown E. (2010). Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking. Neural Comput. 22, 24772506. doi: 10.1162/NECO_a_00015

  • 14

    Indiveri G. Linares-Barranco B. Hamilton T. J. Schaik A. v Etienne-CummingsR.et al. (2011). Neuromorphic silicon neuron circuits. Front. Neurosci. 5:73. doi: 10.3389/fnins.2011.00073

  • 15

    iniVation A. (2020). Understanding the Performance of Neuromorphic Event-Based Vision Sensors. Tech. Rep, Zurich, Switzerland.

  • 16

    Kira S. Yang T. Shadlen M. N. (2015). A neural implementation of wald's sequential probability ratio test. Neuron85, 861873. doi: 10.1016/j.neuron.2015.01.007

  • 17

    Kulldorff M. Davis R. L. Kolczak †, M. Lewis E. Lieu T. R. P . (2011). A maximized sequential probability ratio test for drug and vaccine safety surveillance. Sequential Anal. 30, 5878. doi: 10.1080/07474946.2011.539924

  • 18

    Lai T. L. (2009a). Martingales in Sequential Analysis and Time Series, 1945-1985, Volume 5. Seminairé d'Histoire du Calcul des Probabilitéset de la Statistique, EHESS, Paris; Laboratoire de Probabiliteś et Modeleś Aleatoireś. Paris: Univerité Paris VI et VII.

  • 19

    Lai T. L. (2009b). Sequential analysis: Some classical problems and new challenges. Stat. Sin. 19, 303351. Available online at: https://www.jstor.org/stable/24306854

  • 20

    Li L. Kulldorff M. (2010). A conditional maximized sequential probability ratio test for pharmacovigilance. Stat. Med. 29, 284295. doi: 10.1002/sim.3780

  • 21

    Lichtsteiner P. Posch C. Delbruck T. (2008). A 128x128 120 db 15 us latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits43, 566576. doi: 10.1109/JSSC.2007.914337

  • 22

    Mahowald M. (1992). VLSI Analogs of Neuronal Visual Processing: A Synthesis of Form and Function (Ph.D. thesis). California Institute of Technology Pasadena, Pasadena, California, United States.

    Mani S., Hurley P., van Schaik A., and Monk T. (2025). The leaky integrate-and-fire neuron is a change-point detector for compound Poisson processes. Neural Comput. 37, 926–956. doi: 10.1162/neco_a_01750

    Mao J., Zheng H., Yin H., Fan H., Mei L., Guo H., et al. (2025). Adaptive dendritic plasticity in brain-inspired dynamic neural networks for enhanced multi-timescale feature extraction. Neural Netw. 194:108191. doi: 10.1016/j.neunet.2025.108191

    Mead C. (1989). Analog VLSI and Neural Systems. NASA STI/Recon Techn. Rep. A90:16574.

    Monk T., Dennler N., Ralph N., Rastogi S., Afshar S., Urbizagastegui P., et al. (2024). Electrical signaling beyond neurons. Neural Comput. 36, 1939–2029. doi: 10.1162/neco_a_01696

    Monk T., Green P., and Paulin M. (2014). Martingales and fixation probabilities of evolutionary graphs. Proc. R. Soc. A: Math. Phys. Eng. Sci. 470:20130730. doi: 10.1098/rspa.2013.0730

    Monk T., Paulin M. G., and Green P. (2015). Ecological constraints on the origin of neurones. J. Math. Biol. 71, 1299–1324. doi: 10.1007/s00285-015-0862-7

    Monk T. and van Schaik A. (2020). Wald's martingale and the conditional distributions of absorption time in the Moran process. Proc. R. Soc. A: Math. Phys. Eng. Sci. 476:20200135. doi: 10.1098/rspa.2020.0135

    Monk T. and van Schaik A. (2021). Martingales and the characteristic functions of absorption time on bipartite graphs. R. Soc. Open Sci. 8:210657. doi: 10.1098/rsos.210657

    Monk T. and van Schaik A. (2022). Martingales and the fixation time of evolutionary graphs with arbitrary dimensionality. R. Soc. Open Sci. 9:220011. doi: 10.1098/rsos.220011

    Morrison D., Kennedy S., Delic D. V., Yuce M. R., and Redoute J. (2020). A 64 × 64 SPAD flash LiDAR sensor using a triple integration timing technique with 1.95 mm depth resolution. IEEE Sens. J. 20, 14072–14082. doi: 10.1109/JSEN.2020.3030788

    Ng T. L. J. and Murphy T. B. (2019). Estimation of the intensity function of an inhomogeneous Poisson process with a change-point. Can. J. Stat. 47, 604–618. doi: 10.1002/cjs.11514

    Padala V., Basu A., and Orchard G. (2018). A noise filtering algorithm for event-based asynchronous change detection image sensors on TrueNorth and its implementation on TrueNorth. Front. Neurosci. 12:118. doi: 10.3389/fnins.2018.00118

    Shadlen M. N. and Shohamy D. (2016). Decision making and sequential sampling from memory. Neuron 90, 927–939. doi: 10.1016/j.neuron.2016.04.036

    Shen J., Xie Y., Xu Q., Pan G., Tang H., and Chen B. (2025). “Spiking neural networks with temporal attention-guided adaptive fusion for imbalanced multi-modal learning,” in Proceedings of the 33rd ACM International Conference on Multimedia (Dublin), 11042–11051. doi: 10.1145/3746027.3755622

    Singh A. C. and Zhurbenko I. G. (1975). The power of the optimal asymptotic tests of composite statistical hypotheses. Proc. Natl. Acad. Sci. U. S. A. 72, 577–580. doi: 10.1073/pnas.72.2.577

    Tartakovsky A., Nikiforov I., and Basseville M. (2014). Sequential Analysis: Hypothesis Testing and Changepoint Detection, 1st Edn. Boca Raton, FL: Chapman and Hall/CRC. doi: 10.1201/b17279

    Taylor H. M. and Karlin S. (1984). An Introduction to Stochastic Modeling. Cambridge, MA: Academic Press. doi: 10.1016/B978-0-12-684880-9.50004-6

    Urzay C., Ahad N., Azabou M., Schneider A., Atamkuri G., Hengen K. B., et al. (2023). Detecting change points in neural population activity with contrastive metric learning. Int. IEEE EMBS Conf. Neural Eng. 2023, 1–4. doi: 10.1109/NER52421.2023.10123821

    Wald A. (1944). On cumulative sums of random variables. Ann. Math. Stat. 15, 283–296. doi: 10.1214/aoms/1177731235

    Wald A. (1947). Sequential Analysis. New York, NY: John Wiley and Sons.

    Wald A. and Wolfowitz J. (1948). Optimum character of the sequential probability ratio test. Ann. Math. Stat. 19, 326–339. doi: 10.1214/aoms/1177730197

    Wang W. and Wan H. (2017). “Sequential probability ratio test for multiple-objective ranking and selection,” in 2017 Winter Simulation Conference (WSC) (Piscataway, NJ), 1998–2009. doi: 10.1109/WSC.2017.8247934

    Woods W., Delic D. V., Smith B., Świerkowski L., Day G., Devrelis V., et al. (2019). “Object detection and recognition using laser radars incorporating novel SPAD technology,” in Proc. SPIE 11005, Laser Radar Technology and Applications XXIV (Bellingham, WA), 110050M. doi: 10.1117/12.2517869

    Xu Q., Gao Y., Shen J., Li Y., Ran X., Tang H., et al. (2023). Enhancing adaptive history reserving by spiking convolutional block attention module in recurrent neural networks. Adv. Neural Inform. Process. Syst. 36, 58890–58901.

    Zhao L., Huang Z., Ding J., and Yu Z. (2025). “TTFSFormer: a TTFS-based lossless conversion of spiking transformer,” in Forty-second International Conference on Machine Learning (Vancouver, BC), 77558–77571.

Keywords

applied statistics, event camera, event sensor, hypothesis testing, likelihood ratio, neuromorphic computing, sequential analysis, threshold crossing

Citation

Mani S, Afshar S and Monk T (2026) Sequential analysis and its applications to neuromorphic engineering. Front. Neurosci. 19:1735027. doi: 10.3389/fnins.2025.1735027

Received

29 October 2025

Revised

27 November 2025

Accepted

08 December 2025

Published

09 January 2026

Volume

19 - 2025

Edited by

Lei Deng, Tsinghua University, China

Reviewed by

Rong Yao, Taiyuan University of Technology, China

Jiangrong Shen, Xi'an Jiaotong University, China

Copyright

*Correspondence: Shivaram Mani,

Disclaimer

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
