
ORIGINAL RESEARCH article

Front. Netw. Physiol., 21 October 2025

Sec. Networks in the Brain System

Volume 5 - 2025 | https://doi.org/10.3389/fnetp.2025.1646391

This article is part of the Research Topic "Self-Organization of Complex Physiological Networks: Synergetic Principles and Applications — In Memory of Hermann Haken"

Magnitude-constrained optimal chaotic desynchronization of neural populations

Michael Zimet1, Faranak Rajabi2 and Jeff Moehlis1,2*
  • 1Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA, United States
  • 2Department of Mechanical Engineering, University of California, Santa Barbara, Santa Barbara, CA, United States

In this paper, we calculate magnitude-constrained optimal stimuli for desynchronizing a population of neurons by maximizing the Lyapunov exponent for the phase difference between pairs of neurons while simultaneously minimizing the energy which is used. This theoretical result informs the way optimal inputs can be designed for deep brain stimulation in cases where there is a biological or electronic constraint on the amount of current that can be applied. By exploring a range of parameter values, we characterize how the constraint magnitude affects the Lyapunov exponent and energy usage. Finally, we demonstrate the efficacy of this approach by considering a computational model for a population of neurons with repeated event-triggered optimal inputs.

1 Introduction

As deep brain stimulation (DBS) emerges as an effective therapy for a wide range of neurological disorders, theoretical perspectives are growing to help inform effective stimulation protocols. DBS technology involves surgically implanting electrodes to deliver electrical stimuli over time to specific brain regions with wires connecting the implanted electrodes to an implantable pulse generator, which can be programmed to set DBS parameters (Lozano and Lipsman, 2013; Sandoval-Pistorius et al., 2023; Frey et al., 2022). Novel technologies enable the optimization of DBS stimulation parameters including the input pulse’s shape, amplitude, frequency, and interstimulus interval (Najera et al., 2023; Krauss et al., 2021). Computational analysis of the interaction between an applied electrical stimulus and the spiking dynamics of neural populations can inform the effective design of DBS parameters to enhance clinical outcomes while respecting device engineering constraints.

Among other conditions, deep brain stimulation has become an effective treatment for Parkinson’s disease, where pathological synchronization of the basal ganglia-cortical loop is associated with dopaminergic denervation of the striatum, which over time leads to motor impairment including tremors, bradykinesia, and akinesia (Hammond et al., 2007; Neumann et al., 2007). In particular, the parkinsonian low-dopamine state is observed to be related to excessive synchronization in the beta frequency band (15–30 Hz) in the subthalamic nucleus (STN) of the basal ganglia (Rubchinsky et al., 2012; Asl et al., 2022). It has been proposed that the symptoms of parkinsonian resting tremors are caused by excessive synchronization in populations of neurons firing at a similar frequency to that of the tremor (Tass, 2006). Deep brain stimulation works as a therapeutic intervention by modulating such synchronization patterns, often ameliorating motor impairment for patients living with Parkinson’s disease. DBS is also used in the treatment of essential tremor (ET), epilepsy, Tourette’s syndrome, obsessive compulsive disorder, and treatment-resistant depression.

The most commonly offered protocol for DBS therapy is continuous high-frequency stimulation (Lozano and Lipsman, 2013; Sandoval-Pistorius et al., 2023). Despite its clinical success, conventional high-frequency DBS faces several limitations including diminishing efficacy over time, stimulation-induced side effects, and high energy consumption necessitating battery replacements. Furthermore, conventional DBS is open-loop stimulation: the device continuously applies electrical inputs as long as it is on, and its static parameters do not adapt to the dynamic nature of disease symptoms, limiting its long-term effectiveness. In recent years, there has been growing interest in developing alternative stimulation paradigms that can effectively disrupt pathological synchrony while minimizing energy consumption and side effects. Several desynchronization methods have been proposed and tested on patients, including coordinated reset stimulation (Tass, 2003; Manos et al., 2021), adaptive deep brain stimulation (aDBS) (Sandoval-Pistorius et al., 2023), and phase-specific stimulation approaches (Cagnan et al., 2017). Such techniques often leverage theoretical frameworks from nonlinear dynamics and control theory to design stimuli that can efficiently desynchronize neural populations.

In particular, coordinated reset uses multiple electrode implants which deliver identical impulses separated by a time delay between implants (Tass, 2003; Lysyansky et al., 2011; Lücken et al., 2013; Popovych and Tass, 2014; Kubota and Rubin, 2018; Khaledi-Nasab et al., 2022). This leads to clustering behavior for the neural populations, in which each cluster fires at different times, giving (partial) desynchronization of the dynamics. This approach has achieved preliminary clinical success (Adamchic et al., 2014; Manos et al., 2021). In adaptive deep brain stimulation (aDBS), closed-loop systems monitor biomarkers in real time and can initiate changes in stimulation parameters (Sandoval-Pistorius et al., 2023; Oehrn et al., 2024). The goal of aDBS is to improve clinical outcomes by designing control signals based on neural data to deliver stimulation only when needed. For instance, in Parkinson’s disease, a potential biomarker is the amplitude of the pathological beta rhythms, with stimulation becoming active if this exceeds some prescribed threshold. On demand stimulation through aDBS can prolong device battery life, thereby extending device lifetimes and prolonging the interval between surgeries in clinical treatment protocols. Finally, for phase-specific stimulation the inputs occur at a particular dynamical phase in order to disrupt synchrony (Cagnan et al., 2017; Cagnan et al., 2019; cf. Holt et al., 2016).

In parallel, there have been a number of computational studies exploring the mechanisms by which DBS might be working (e.g., Santaniello et al., 2015; Spiliotis et al., 2022). There have also been theoretical and computational studies exploring different strategies for desynchronizing neural populations (Wilson and Moehlis, 2022), with approaches including delayed feedback control (Rosenblum and Pikovsky, 2004a; Rosenblum and Pikovsky, 2004b; Popovych et al., 2006; Popovych et al., 2017), phase randomization through optimal phase resetting (Danzl et al., 2009; Nabi et al., 2013; Rajabi et al., 2025), phase distribution control (Monga et al., 2018; Monga and Moehlis, 2019), cluster control (Wilson and Moehlis, 2015a; Matchen and Moehlis, 2018; Wilson, 2020; Qin et al., 2023), machine learning and data-driven approaches (Matchen and Moehlis, 2021; Vu et al., 2024), and chaotic desynchronization (Wilson et al., 2011; Wilson and Moehlis, 2014b; Wilson and Moehlis, 2014a). Each of these approaches has advantages and disadvantages based on the control objective and what is known about and what can be measured for the neural dynamics (Wilson and Moehlis, 2022).

In this paper, we focus on chaotic desynchronization, for which an energy-optimal stimulus exponentially desynchronizes a population of neurons. This approach relies on phase reduction methods, which have proven particularly valuable in analyzing and controlling neural oscillators (Monga et al., 2019; Wilson and Moehlis, 2022). These methods allow for the simplification of complex neuronal dynamics into phase models, where the behavior of an oscillating neuron can be characterized by its phase and response to perturbations, captured by the phase response curve (PRC). Unlike previous studies of chaotic desynchronization, here we include a constraint on stimulus magnitude. Such constraints are important engineering considerations for the practical applications of DBS, as there can exist both biological limitations on the maximum electrical stimulation that can be safely applied to brain tissue as well as electronic limitations on the current that stimulation devices can reliably store and deliver over time.

Specifically, in this paper we calculate magnitude-constrained optimal stimuli that maximize the Lyapunov exponent for the phase difference between pairs of neurons while simultaneously minimizing energy consumption. The Lyapunov exponent quantifies the exponential divergence rate of initially close trajectories, making it an appropriate measure of desynchronization efficiency. By systematically exploring different constraint magnitudes, we characterize the tradeoff between maximum allowable stimulus amplitude, desynchronization efficacy, and energy usage. We set up the optimal control problem in Section 2.1. In Section 2.2 we describe several canonical phase response curves representing different types of neuronal dynamics: Sinusoidal, SNIPER, Hodgkin-Huxley, and Reduced Hodgkin-Huxley models. In Section 3.1, we investigate our approach for each PRC, computing the optimal stimulus under various magnitude constraints and evaluating its performance in desynchronizing initially synchronized neurons. In Section 3.2, we validate our approach using computational simulations of neural populations with coupling and noise, demonstrating that our magnitude-constrained optimal stimuli can effectively desynchronize neural populations. Finally, a discussion of our results is given in Section 4. Overall, this work provides a theoretical foundation for designing energy-efficient DBS protocols that respect hardware and biological constraints while effectively disrupting pathological neural synchronization. We respectfully present this study as a tribute to the pioneering work of Hermann Haken on the control of complex systems.

2 Methods

2.1 Optimal control problem

We present a procedure for finding an energy-optimal stimulus which maximizes the Lyapunov exponent associated with the phase difference between a pair of neurons, while accounting for a constraint on the stimulus magnitude. This approach is based on the phase reduction of neural oscillators in the presence of an input (see, for example, Monga et al., 2019), and only requires knowledge of a neuron’s phase response curve (PRC). We note that the PRC can in principle be measured experimentally (Netoff et al., 2012), or can be calculated numerically if the model is known (Ermentrout, 2002; Monga et al., 2019). In particular, we consider the following set of equations:

\[
\frac{d\theta_i}{dt} = \omega + Z(\theta_i)\,u(t), \qquad i = 1, 2, \tag{1}
\]

where θi ∈ [0, 2π) is the phase of the ith neuron; ω = 2π/T, where T is the period of the neuron in the absence of stimulus; and u(t) is the control stimulus. Note that here we are assuming that the neurons are identical (having the same ω and Z(·)), and for simplicity we assume that the neurons are the same distance from the electrode, so they receive the same stimulus u(t). Neuron i fires an action potential when θi crosses through 0.
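A minimal numerical sketch of the phase model in Equation 1, integrated with forward Euler; the step size and the zero-stimulus check are illustrative choices, not values taken from the paper.

```python
import numpy as np

def simulate_phase(omega, Z, u, t1, dt=1e-3, theta0=0.0):
    """Forward-Euler integration of dθ/dt = ω + Z(θ)u(t)  (Eq. 1)."""
    theta, t = theta0, 0.0
    while t < t1:
        theta += dt*(omega + Z(theta)*u(t))
        t += dt
    return theta

# With u ≡ 0 the phase advances by ωt1: one full cycle when t1 = T = 2π/ω
theta_T = simulate_phase(1.0, lambda th: 0.5*np.sin(th), lambda t: 0.0, 2*np.pi)
```

With a nonzero stimulus, the same routine gives the perturbed phase used throughout Section 3.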

Following Wilson and Moehlis (2014b), we suppose that the neurons are nearly synchronized (θ1 ≈ θ2). Defining ϕ = θ2 − θ1, we obtain

\[
\frac{d\phi}{dt} = Z'(\theta)\,u(t)\,\phi + \mathcal{O}(\phi^2). \tag{2}
\]

Linearizing about ϕ = 0, the solution to Equation 2 is ϕ ∝ exp(Λt), where

\[
\Lambda(\tau) = \frac{\log \phi(\tau)}{\tau} = \frac{1}{\tau} \int_a^{a+\tau} Z'(\theta(s))\,u(s)\,ds. \tag{3}
\]

Here Λ can be viewed as the finite time Lyapunov exponent (Abouzeid and Ermentrout, 2009), which characterizes the exponential growth or decay of the phase difference ϕ. A positive Lyapunov exponent will correspond to the divergence of nearby trajectories, and hence desynchronization. We note that this Lyapunov exponent corresponds to the phase difference direction, so it is not directly related to the non-trivial Floquet multipliers which describe the transverse stability of the periodic orbits for the neurons (Guckenheimer and Holmes, 1983). We formulate the control problem in terms of the cost function

\[
G[u(t)] = \int_0^{t_1} \left[ u(t)^2 - \beta\, Z'(\theta)\,u(t) \right] dt, \tag{4}
\]

where the goal is to maximize the Lyapunov exponent while minimizing the energy used; here the energy is the integral of the square of the control stimulus u. Here t1 is the chosen duration of the stimulus, and β > 0 is a parameter that scales the importance of the Lyapunov exponent term relative to the energy term. Generalizing the formulation in Wilson and Moehlis (2014b), here we consider a magnitude constraint on the control stimulus given by Equation 5:

\[
|u(t)| \le u_{\max}. \tag{5}
\]

In order to account for this constraint, we use a Hamiltonian formulation for the optimal control problem (Kirk, 1998), with the Hamiltonian given in Equation 6:

\[
H(\theta, \lambda, u) = u^2 - \beta\, Z'(\theta)\,u + \lambda\left(\omega + Z(\theta)\,u\right), \tag{6}
\]

where λ is the Lagrange multiplier or co-state for the system. From Hamilton’s equations,

\[
\dot{\theta} = \frac{\partial H}{\partial \lambda} = \omega + Z(\theta)\,u(t), \tag{7}
\]
\[
\dot{\lambda} = -\frac{\partial H}{\partial \theta} = \left(\beta\, Z''(\theta) - \lambda\, Z'(\theta)\right) u(t). \tag{8}
\]

This defines a two-point boundary value problem which must be solved subject to the boundary conditions θ(0) = 0 and θ(t1) = ωt1. The latter boundary condition ensures that the phase at time t1 is the same as it would have been in the absence of stimulus. The function u(t) in these equations will be found using Pontryagin's minimum principle (Kirk, 1998), which states that u should be chosen to extremize the Hamiltonian, subject to the constraints. If there is no constraint on the magnitude of u(t), the optimal control stimulus is the solution to ∂H/∂u = 0, giving Equation 9:

\[
2u - \beta\, Z'(\theta) + \lambda\, Z(\theta) = 0, \tag{9}
\]
\[
u(t) = \frac{\beta\, Z'(\theta) - \lambda\, Z(\theta)}{2} \equiv \tilde{u}(t), \tag{10}
\]

where ũ(t) is the optimal unconstrained input. With constraints, Pontryagin’s minimum principle gives the following expression for the optimal magnitude-constrained input u*(t):

\[
u^*(t) = \begin{cases}
-u_{\max} & \text{if } \tilde{u}(t) \le -u_{\max}, \\[2pt]
\tilde{u}(t) & \text{if } -u_{\max} < \tilde{u}(t) < u_{\max}, \\[2pt]
u_{\max} & \text{if } \tilde{u}(t) \ge u_{\max}.
\end{cases} \tag{11}
\]

In particular, the optimal u*(t) might or might not be the solution to ∂H/∂u = 0, because the extremum may be reached at a constraint boundary. To summarize, we numerically solve the two-point boundary value problem, Equations 7, 8, with the control given by Equation 11, using a shooting method; the resulting u*(t) is the optimal magnitude-constrained stimulus.
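The shooting procedure can be sketched as follows for the sinusoidal PRC with the Section 3 values Zd = 0.5, ω = 1, β = 2, and umax = 0.35. The forward-Euler step, the bisection bracket for λ(0), and the iteration counts are illustrative choices of this sketch, not values stated in the paper.

```python
import math
import numpy as np

Zd, omega, beta, umax = 0.5, 1.0, 2.0, 0.35
t1, dt = 2*math.pi, 1e-3
n = int(t1/dt)

def integrate(lam0):
    """Forward-Euler sweep of Eqs. 7-8 returning θ(t1), θ(t), and u*(t)."""
    th, lam = 0.0, lam0
    ths, us = np.empty(n), np.empty(n)
    for k in range(n):
        Zth, Zp, Zpp = Zd*math.sin(th), Zd*math.cos(th), -Zd*math.sin(th)
        u = min(umax, max(-umax, (beta*Zp - lam*Zth)/2))   # Eqs. 10-11
        ths[k], us[k] = th, u
        th  += dt*(omega + Zth*u)                          # Eq. 7
        lam += dt*(beta*Zpp - lam*Zp)*u                    # Eq. 8
    return th, ths, us

def shoot(lo=-5.0, hi=5.0, iters=60):
    """Bisect on λ(0) so that the boundary condition θ(t1) = ωt1 holds."""
    f = lambda l0: integrate(l0)[0] - omega*t1
    flo = f(lo)
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        fm = f(mid)
        if (flo > 0) == (fm > 0):
            lo, flo = mid, fm
        else:
            hi = mid
    return 0.5*(lo + hi)

lam0_star = shoot()
theta_end, ths, u_star = integrate(lam0_star)
Lyap = float(np.mean(Zd*np.cos(ths)*u_star))   # Eq. 3 averaged over one cycle
```

For this example the constraint is active over part of the cycle, and the resulting stimulus yields a positive finite-time Lyapunov exponent.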

2.2 Example phase response curves

The PRC quantifies the effect of an external stimulus on the phase of a periodic orbit. In this paper, we consider four example PRCs: Sinusoidal, SNIPER, Hodgkin-Huxley, and Reduced Hodgkin-Huxley, as shown in Figure 1.

Figure 1

Figure 1. Phase Response Curves for the neuron models considered in the paper. (A) Sinusoidal with Zd=0.5, (B) SNIPER with Zd=0.5, (C) Hodgkin-Huxley, and (D) Reduced Hodgkin-Huxley.

The Sinusoidal PRC

\[
Z(\theta) = Z_d \sin\theta, \tag{12}
\]

shown in Figure 1A with Zd=0.5, is a special case of the PRC found for periodic orbits close to a supercritical Hopf bifurcation. (Recall that when a parameter is on one side of a supercritical Hopf bifurcation there is a stable fixed point and no periodic orbit, and when the parameter is on the other side there is an unstable fixed point and a stable periodic orbit.) More generally, the PRC for a periodic orbit close to a supercritical Hopf bifurcation is a phase-shifted form of Equation 12; see Brown et al. (2004). This is an example of a Type II PRC (Hansel et al., 1995), in which the PRC takes both positive and negative values.

Periodic orbits can also arise from a SNIPER bifurcation, which stands for Saddle-Node Infinite Period bifurcation; this is also often called a SNIC bifurcation, which stands for Saddle-Node Invariant Circle bifurcation. Here, for a parameter on one side of the bifurcation there is a stable fixed point and a saddle fixed point that lie on an invariant circle. As the parameter is varied, these fixed points annihilate in a saddle-node bifurcation, and when the parameter is on the other side of the bifurcation there is a stable periodic orbit whose period approaches infinity as the bifurcation is approached. For a periodic orbit near a SNIPER bifurcation, the PRC is approximately given by Equation 13 (Ermentrout, 1996; Brown et al., 2004):

\[
Z(\theta) = Z_d\left(1 - \cos\theta\right), \tag{13}
\]

shown in Figure 1B with Zd=0.5. This is an example of a Type I PRC (Hansel et al., 1995; Ermentrout, 1996), in which the PRC takes only non-negative values.
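The Type I/Type II distinction is easy to check numerically for the two analytic PRCs; the grid size below is an illustrative choice.

```python
import numpy as np

theta = np.linspace(0.0, 2*np.pi, 1001)
Zd = 0.5
Z_sin    = Zd*np.sin(theta)          # Eq. 12: Type II, takes both signs
Z_sniper = Zd*(1.0 - np.cos(theta))  # Eq. 13: Type I, non-negative
```

The SNIPER PRC attains its maximum 2Zd at θ = π, while the sinusoidal PRC changes sign at θ = π.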

The Hodgkin-Huxley equations are a well-studied conductance-based model for neural activity, and were developed to describe the dynamics for a squid giant axon (Hodgkin and Huxley, 1952). Mathematically, they are a four-dimensional set of coupled ordinary differential equations for the voltage across the neural membrane and three gating variables associated with the flow of ions across the membrane. The full equations are given in the Supplementary Appendix. We chose a baseline current value Ib=10 so that the Hodgkin-Huxley equations have a stable periodic orbit, and then found the PRC for this periodic orbit using XPP (Ermentrout, 2002). For computational convenience, we approximate this as a Fourier series with the first ten sin() and cos() terms to give the PRC shown in Figure 1C.

Finally, the Reduced Hodgkin-Huxley equations are an approximation to the full Hodgkin-Huxley equations (Keener and Sneyd, 1998; Moehlis, 2006). Mathematically, they are a two-dimensional set of coupled ordinary differential equations for the voltage across the neural membrane and one gating variable. The equations are given in the Supplementary Appendix. We chose a baseline current value Ib=10 so that the Reduced Hodgkin-Huxley equations have a stable periodic orbit, and then found the PRC for this periodic orbit using XPP (Ermentrout, 2002). For computational convenience, we approximate this as a Fourier series with the first two hundred sin() and cos() terms to give the PRC shown in Figure 1D. The PRCs for the Hodgkin-Huxley and Reduced Hodgkin-Huxley models are examples of Type II PRCs.

3 Results

3.1 Results for pairs of neurons

In this section, we consider the dynamics of a pair of neurons satisfying Equation 1, where u*(t) is chosen to be the optimal control stimulus for different values of the constraint umax. For simplicity, we will take t1=T, so that the duration of the control stimulus is equal to the period of the neuron in the absence of stimulus. By design, we expect that application of one cycle of the optimal stimulus will cause the phase difference ϕ to increase, at least when the initial value for ϕ is small. We will consider an event-based approach for which multiple cycles of the optimal control stimulus are applied, where a new cycle of the control stimulus is triggered when θ1=0, that is, when Neuron 1 fires an action potential. We will see that this leads to growing phase difference ϕ.
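A toy version of this event-based protocol for a pair of phase neurons with the sinusoidal PRC (Zd = 0.5, ω = 1, β = 2) can be sketched as follows. As a stand-in for the optimal stimulus we use the clipped weak-input approximation u(t) ≈ clip(βZ'(ω(t − t_fire))/2, ±umax), restarted whenever Neuron 1 fires; this illustrates the protocol, not the exact stimulus computed from the boundary value problem.

```python
import math

Zd, omega, beta, umax, dt = 0.5, 1.0, 2.0, 0.35, 1e-3
th1, th2 = 0.0, 1e-4                    # nearly synchronized initial phases
phi0 = th2 - th1
t_fire, next_fire = 0.0, 2*math.pi      # neuron 1 fires at multiples of 2π
n = int(5*2*math.pi/dt)                 # roughly five firing cycles
for k in range(n):
    t = k*dt
    # clipped stand-in stimulus, restarted at each firing event
    u = min(umax, max(-umax, 0.5*beta*Zd*math.cos(omega*(t - t_fire))))
    th1 += dt*(omega + Zd*math.sin(th1)*u)
    th2 += dt*(omega + Zd*math.sin(th2)*u)
    if th1 >= next_fire:                # event: trigger a new stimulus cycle
        t_fire, next_fire = t, next_fire + 2*math.pi
phi_end = th2 - th1
```

Over repeated event-triggered cycles the phase difference grows by roughly a constant factor per cycle, consistent with a positive Lyapunov exponent.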

Figure 2 shows results for the Sinusoidal PRC with Zd=0.5, β=2 and ω=1, corresponding to T=2π. In particular, Figure 2A shows the calculated optimal control stimuli for the unconstrained case, umax=0.35, and umax=0.2. In this case, adding a magnitude constraint gives an optimal control stimulus which appears to be very similar to the unconstrained stimulus “chopped off” at the constraint; we will discuss this further below. We observe that the unconstrained input has the highest efficacy of desynchronization as measured by the Lyapunov exponent, and also the highest energy utilization. The input with a constraint umax=0.35 gives desynchronization results that are similar while achieving a significant reduction in energy usage.

Figure 2

Figure 2. Results for Sinusoidal PRC, with Zd=0.5, ω=1, and β=2. (A) Optimal stimulus calculated for the unconstrained case (blue), umax=0.35 (red), and umax=0.2 (green). (B) Time series traces of the phases θ1 (blue) and θ2 (red) of two neurons, where both have the optimal stimulus applied with umax=0.35. (C) Phase difference ϕ between two neurons over multiple stimuli. Each stimulus u* is triggered when θ1=0. (D) Log of the difference ϕ over multiple stimuli. (E) Approximation to the Lyapunov exponent Λfit found from the slopes of the fits to log(ϕ) versus t (red) and the Lyapunov exponent Λcalc (blue), for different values of umax. (F) Effect of the constraint on the energy used for the optimal stimulus, here computed for one cycle of the stimulus. For panels (B–F), the initial conditions are θ1=0 and θ2=0.1.

To investigate the efficacy of desynchronization between Neurons 1 and 2, we ran simulations of the phases of Neurons 1 and 2 over a full cycle of the control stimulus, with initial conditions θ1=0 and θ2=0.1. Figure 2B shows the time series traces of these two neurons for the Sinusoidal PRC. We see growing desynchronization over one cycle of the control stimulus, as measured by the phase difference between the two traces.

Next, we apply successive control stimuli to pairs of neurons with the event-based approach described above. We compute the phase difference ϕ between the pair of neurons, which is observed to grow exponentially over multiple cycles of the control stimulus; see Figure 2C. This is also evident in Figure 2D, where we observe that log(ϕ) grows approximately linearly with t. To quantify this, we estimate the Lyapunov exponent based on the line of best fit to log(ϕ) versus t over multiple event-triggered control stimuli. We used the first half of the time interval for these fits, since these traces saturate in the later part of the time interval as the small-ϕ approximation used to obtain Equation 3 no longer holds. These estimated Lyapunov exponents Λfit are plotted for varying values of the constraint in Figure 2E, along with estimates Λcalc obtained by plugging the computed stimulus into Equation 3 and numerically evaluating the integral over one cycle of the stimulus according to Equation 14:

\[
\Lambda_{\text{calc}} \approx \frac{1}{T} \int_0^T Z'(\theta(s))\,u^*(s)\,ds. \tag{14}
\]

Table 1 compares Λfit and Λcalc for several values of umax; good agreement is found between these approaches. Finally, Figure 2F shows the energy

\[
\int_0^T u^*(t)^2\,dt
\]

used over one cycle of the control stimulus for a range of values of umax.
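Both quantities are simple quadratures once a stimulus is in hand. As an illustration for the sinusoidal PRC, we use the weak-input approximation θ(s) ≈ ωs and the approximate stimulus u*(t) ≈ βZ'(ωt)/2 (discussed later in this section); for this special case the integrals evaluate in closed form to Λcalc = βZd²/4 and energy = π(βZd/2)², which the numerics should reproduce.

```python
import numpy as np

Zd, beta, omega = 0.5, 2.0, 1.0
T = 2*np.pi/omega
s = np.linspace(0.0, T, 20001)
u = 0.5*beta*Zd*np.cos(omega*s)   # u*(t) ≈ βZ'(ωt)/2 (weak-input approx.)
Zp = Zd*np.cos(omega*s)           # Z'(θ(s)) with θ(s) ≈ ωs
lam_calc = float(np.mean(Zp*u))   # (1/T)∫ Z'(θ(s)) u*(s) ds,  Eq. 14
energy = float(np.mean(u**2)*T)   # ∫ u*(t)² dt over one cycle
```

With Zd = 0.5 and β = 2 this gives Λcalc ≈ 0.125 and energy ≈ π/4.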

Table 1

Table 1. For the Sinusoidal PRC, comparison of the Lyapunov exponent Λfit estimated from slope of the line of best fit for log(ϕ) versus t with the Lyapunov exponent Λcalc calculated from the integral formulation.

Similarly, Figure 3 shows results for the SNIPER PRC with Zd=0.5, β=2, and ω=1, corresponding to T=2π, and Table 2 compares Λfit and Λcalc for several values of umax. Moreover, Figure 4 shows results for the Hodgkin-Huxley PRC with β=2 and numerically obtained period T=14.56; Table 3 compares the Lyapunov exponent estimates. Finally, Figure 5 shows results for the Reduced Hodgkin-Huxley PRC with β=9 and numerically obtained period T=11.85; Table 4 compares the Lyapunov exponent estimates. In all of these examples, the optimal input gives exponential divergence of the phases of the neurons. As umax becomes smaller, the Lyapunov exponent becomes smaller while staying positive, and the energy associated with the input stimulus is reduced. We observe that the numerically calculated optimal inputs without the magnitude constraint resemble Z′(θ) for each PRC. This was first noticed in Wilson and Moehlis (2014b), where it was attributed to the numerical observations that the optimal input is weak enough that θ ≈ ωt, and βZ′(θ) dominates λZ(θ) in Equation 10, so u*(t) ≈ βZ′(ωt)/2. This approximation is explored in more detail in Moehlis et al. (2025).

Figure 3

Figure 3. Results for SNIPER PRC with Zd=0.5, ω=1, and β=2. (A) Optimal stimulus calculated for the unconstrained case (blue), umax=0.35 (red), and umax=0.2 (green). (B) Time series traces of the phases θ1 (blue) and θ2 (red) of two neurons, where both have the optimal stimulus applied with umax=0.35. (C) Phase difference ϕ between two neurons over multiple stimuli. Each stimulus u* is triggered when θ1=0. (D) Log of the difference ϕ over multiple inputs. (E) Approximation to the Lyapunov exponent Λfit found from the slopes of the fits to log(ϕ) versus t (red) and the Lyapunov exponent Λcalc (blue), for different values of umax. (F) Effect of the constraint on the energy used for the optimal stimulus, here computed for one cycle of the stimulus. For panels (B–F), the initial conditions are θ1=0 and θ2=0.1.

Table 2

Table 2. For the SNIPER PRC, comparison of the Lyapunov exponent Λfit estimated from slope of the line of best fit for log(ϕ) versus t with the Lyapunov exponent Λcalc calculated from the integral formulation.

Figure 4

Figure 4. Results for Hodgkin-Huxley PRC with β=2 and T=14.56. (A) Optimal stimulus calculated for the unconstrained case (blue), umax=0.3 (red), and umax=0.2 (green). (B) Time series traces of the phases θ1 (blue) and θ2 (red) of two neurons, where both have the optimal stimulus applied with umax=0.3. (C) Phase difference ϕ between two neurons over multiple stimuli. Each stimulus u* is triggered when θ1=0. (D) Log of the difference ϕ over multiple stimuli. (E) Approximation to the Lyapunov exponent Λfit found from the slopes of the fits to log(ϕ) versus t (red) and the Lyapunov exponent Λcalc (blue), for different values of umax. (F) Effect of the constraint on the energy used for the optimal stimulus, here computed for one cycle of the stimulus. For panels (B–F), the initial conditions are θ1=0 and θ2=0.1.

Table 3

Table 3. For the Hodgkin-Huxley PRC, comparison of the Lyapunov exponent Λfit estimated from slope of the line of best fit for log(ϕ) versus t with the Lyapunov exponent Λcalc calculated from the integral formulation.

Figure 5

Figure 5. Results for Reduced Hodgkin-Huxley PRC with β=9 and T=11.85. (A) Optimal stimulus calculated for the unconstrained case (blue), umax=1.5 (red), and umax=0.5 (green). (B) Time series traces of the phases θ1 (blue) and θ2 (red) of two neurons with initial conditions θ1=0 and θ2=0.1, where both have the optimal stimulus applied with umax=1.5. (C) Phase difference ϕ between two neurons over multiple stimuli. Each stimulus u* is triggered when θ1=0. (D) Log of the difference ϕ over multiple inputs. (E) Approximation to the Lyapunov exponent Λfit found from the slopes of the fits to log(ϕ) versus t (red) and the Lyapunov exponent Λcalc (blue), for different values of umax. (F) Effect of the constraint on the energy used for the optimal stimulus, here computed for one cycle of the stimulus. For panels (C–F), the initial conditions are θ1=0 and θ2=0.001.

Table 4

Table 4. For the Reduced Hodgkin-Huxley PRC, comparison of the Lyapunov exponent Λfit estimated from slope of the line of best fit for log(ϕ) versus t with the Lyapunov exponent Λcalc calculated from the integral formulation.

Here we make the new observation that in some cases the magnitude-constrained optimal input resembles the unconstrained input simply "chopped off" at the constraint, i.e., uchop ≡ min(umax, max(−umax, ũ(t))), where ũ is the optimal input found by solving Equations 7, 8 using Equation 10, i.e., without any magnitude constraint. That said, we note that uchop is not necessarily a good approximation to the optimal input. Solutions to the two-point boundary value problem have the property of taking a neuron's phase from θ=0 to θ=ωt1 in the time t1. This must be the case both for the unconstrained optimal input ũ(t) and for the constrained optimal input u*(t). Because uchop differs from ũ(t) during the time intervals in which the constraint is active, but is otherwise the same, we do not expect it to exactly take θ from 0 to ωt1 in the time t1. However, numerically we find for some examples that u*(t) looks very similar to uchop. This appears to be because the product Z(θ)u(t) is very small in these examples. But it is clear from Figure 5 that for the constraint umax=0.5 the optimal u*(t) can be significantly different from uchop(t).
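The "chopped" input is just a pointwise clip of the unconstrained optimum at ±umax. In the sketch below, the stand-in for ũ(t) is the weak-input approximation βZ'(ωt)/2 ≈ 0.5 cos t for the sinusoidal PRC; only the clipping operation itself is the point here.

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 1001)
u_tilde = 0.5*np.cos(t)                    # stand-in for the unconstrained ũ(t)
umax = 0.35
u_chop = np.clip(u_tilde, -umax, umax)     # uchop = min(umax, max(-umax, ũ(t)))
```

Wherever |ũ(t)| ≤ umax the two inputs agree exactly; elsewhere uchop sits on the constraint boundary.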

When β is large, the Lyapunov exponent term in Equation 4 dominates the energy usage term. Therefore, in the limit of large β the optimal inputs approach what is known as bang-bang control, as seen in Figure 6. In bang-bang control, the controller alternates between maximum and minimum inputs which switch at optimal times (Kirk, 1998). In particular, the control is driven by the sign of Z′(θ): if Z′ > 0 we take u = umax, and if Z′ < 0 we take u = −umax. Thus, at every time the integrand in Equation 3 is maximized subject to the umax constraint, so the value of the Lyapunov exponent is maximized. The unconstrained stimulus magnitudes rise with increasing β, so in the large-β limit the magnitude constraint becomes more important. Figure 6 shows the approach to bang-bang control for increasing β (from left to right panel for each row) for the optimal input for each PRC.
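Under the approximation θ ≈ ωt for the sinusoidal PRC, the bang-bang input pointwise maximizes Z'(θ)u subject to |u| ≤ umax, so its Lyapunov integral beats that of any other admissible input; the competing clipped input below is an illustrative choice.

```python
import numpy as np

Zd, umax, T = 0.5, 0.35, 2*np.pi
t = np.linspace(0.0, T, 4001)
Zp = Zd*np.cos(t)                             # Z'(θ) with θ ≈ ωt, ω = 1
u_bb = umax*np.sign(Zp)                       # bang-bang input
u_alt = np.clip(0.5*np.cos(t), -umax, umax)   # a competing admissible input
lam_bb  = float(np.mean(Zp*u_bb))             # ≈ (1/T)∫ Z'(θ)u dt, Eq. 3
lam_alt = float(np.mean(Zp*u_alt))
```

For this PRC the bang-bang Lyapunov exponent evaluates in closed form to 2·Zd·umax/π.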

Figure 6

Figure 6. Approaching Bang-bang control in the large β limit. (A) Results for Sinusoidal PRC with Zd=0.5 and ω=1, for β=5 and (B) β=20. Optimal stimulus calculated for umax=0.5 (blue), umax=0.35 (red), and umax=0.2 (green). (C) Results for SNIPER PRC with Zd=0.5 and ω=1, for β=5 and (D) β=20. Optimal stimulus calculated for umax=0.5 (blue), umax=0.35 (red), and umax=0.2 (green). (E) Results for Hodgkin-Huxley PRC with β=5 and (F) with β=20. Optimal stimulus calculated for umax=0.4 (blue), umax=0.3 (red), and umax=0.2 (green); here T=14.56. (G) Results for Reduced Hodgkin-Huxley PRC with β=20 and (H) with β=40. Optimal stimulus calculated for umax=2.5 (blue), umax=1.5 (red), and umax=0.5 (green); here T=11.85.

3.2 Results for population-level simulations of neurons

While the results in the previous section illustrate that stimuli with and without magnitude constraints can give positive Lyapunov exponents for the phase difference between pairs of neural oscillators, we are also interested in how such inputs perform for a larger population of coupled neural oscillators. In this section, we consider a population of Reduced Hodgkin-Huxley neurons with all-to-all electrotonic coupling and independent additive noise for each neuron. The governing equations are:

$$\dot{V}_i = f_V(V_i, n_i) + \frac{1}{N}\sum_{j=1}^{N} \alpha\,(V_j - V_i) + u(t) + \eta_i(t), \qquad \dot{n}_i = f_n(V_i, n_i),$$

where N=100 is the number of neurons, u(t) is the common input to all neurons, α=0.04 is the coupling strength, and ηi(t) = √(2D) 𝒩(0,1) is intrinsic noise, modeled as zero-mean Gaussian white noise with variance 2D, where D=0.7. These values are chosen so that there is a balance between the synchronizing influence of the coupling and the desynchronizing influence of the noise. Expressions for fV and fn are given in the Supplementary Appendix. We suppose that at t=0 all neurons have the same Vi and ni, corresponding to a neuron at the peak of its action potential. Each neuron receives the same input u(t) but a different realization of the noise ηi(t).
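An Euler-Maruyama discretization of these governing equations might look as follows; `f_V` and `f_n` stand in for the Reduced Hodgkin-Huxley right-hand sides given in the Supplementary Appendix, and the time step `dt` is our assumption, not a value from the paper:

```python
import numpy as np

def simulate(f_V, f_n, u, V0, n0, N=100, alpha=0.04, D=0.7,
             dt=0.01, t_end=350.0, seed=0):
    """Euler-Maruyama sketch of the all-to-all coupled, noisy population.
    f_V, f_n: single-neuron right-hand sides; u: common input u(t)."""
    rng = np.random.default_rng(seed)
    steps = int(t_end / dt)
    V = np.full(N, V0)                      # all neurons start identical
    n = np.full(N, n0)                      # (peak of the action potential)
    V_hist = np.empty((steps, N))
    for k in range(steps):
        coupling = alpha * (V.mean() - V)   # (1/N) sum_j alpha (V_j - V_i)
        eta = np.sqrt(2 * D / dt) * rng.standard_normal(N)  # white noise, variance 2D
        V_new = V + dt * (f_V(V, n) + coupling + u(k * dt) + eta)
        n = n + dt * f_n(V, n)
        V = V_new
        V_hist[k] = V
    return V_hist
```

Note that the all-to-all coupling sum collapses to `alpha * (V.mean() - V)`, so each step costs O(N) rather than O(N²).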

To test our magnitude-constrained optimal inputs u*(t), we use an event-based control scheme similar to Danzl et al. (2009), Nabi et al. (2013), Wilson and Moehlis (2014b), and Rajabi et al. (2025). In particular, when the average voltage V̄ of the neurons crosses a threshold, we input one cycle of the pre-computed optimal stimulus, which has a desynchronizing influence. Another control input occurs if the previous input has finished and the average voltage again crosses the threshold, for example, due to the synchronizing influence of the coupling. According to this control logic, each simulation generates a control input u(t) for the full 350 msec time window. Figures 7–9 respectively show results for population-level simulations for the optimal unconstrained input, the optimal constrained input with umax=1.5, and the optimal constrained input with umax=0.5. Comparing the results using event-based control with the network’s behavior without control, it is apparent that all of these stimuli are able to keep the neural population desynchronized. We can interpret these results as follows: if the population is too synchronized (its average voltage goes above the control activation threshold), a cycle of input is applied. If this does not sufficiently desynchronize the population, another cycle of input is applied. If the population is sufficiently desynchronized, no input is needed; however, the coupling eventually leads to a level of synchronization which is above the control activation threshold, triggering another cycle of input. For umax=0.5, it is apparent that more cycles of the stimulus are needed to achieve and maintain desynchronized dynamics; see Figure 9.
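The event-based control logic described above can be sketched as follows; the function name and the dict-based state are our own scaffolding, with the threshold value taken from the V̄ = 30 mV activation line shown in Figures 7–9 (for simplicity this sketch re-triggers whenever the mean voltage is above threshold and no cycle is playing, rather than detecting a strict crossing):

```python
def event_based_input(V_bar, t, state, u_star, T_cycle, threshold=30.0):
    """Event-based control sketch: when the mean voltage V_bar exceeds the
    activation threshold and no stimulus cycle is playing, start one full
    cycle of the precomputed optimal input u_star; otherwise output zero.
    `state` is a dict holding the start time of the current cycle (None if idle)."""
    if state["t_start"] is not None and t - state["t_start"] >= T_cycle:
        state["t_start"] = None               # previous cycle has finished
    if state["t_start"] is None and V_bar > threshold:
        state["t_start"] = t                  # threshold exceeded: trigger a cycle
    if state["t_start"] is None:
        return 0.0
    return u_star(t - state["t_start"])       # play the cycle from its start
```

Calling this at each integration step yields a piecewise input: full precomputed cycles while synchronization is high, and zero input while the population stays desynchronized.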

Figure 7. Results for Population-Level Simulations with Unconstrained Input. Here α=0.04 and D=0.7. The top panel shows the network’s behavior without control, with synchronized dynamics. The second panel illustrates the same network with event-based control using the optimal u(t) without any magnitude constraints, where the red trace shows the mean voltage, and the horizontal green dotted line shows the control activation threshold (V̄=30 mV). The third panel shows the control input. The bottom panel is a raster plot of the spike times.

Figure 8. Results for Population-Level Simulations with Constrained Input at umax=1.5. Here α=0.04 and D=0.7. The top panel shows the network’s behavior without control, with synchronized dynamics. The second panel illustrates the same network with event-based control using the optimal u(t) with umax=1.5, where the red trace shows the mean voltage, and the horizontal green dotted line shows the control activation threshold (V̄=30 mV). The third panel shows the control input. The bottom panel is a raster plot of the spike times.

Figure 9. Results for Population-Level Simulations with Constrained Input at umax=0.5. Here α=0.04 and D=0.7. The top panel shows the network’s behavior without control, with synchronized dynamics. The second panel illustrates the same network with event-based control using the optimal u(t) with umax=0.5, where the red trace shows the mean voltage, and the horizontal green dotted line shows the control activation threshold (V̄=30 mV). The third panel shows the control input. The bottom panel is a raster plot of the spike times.

This motivates us to better understand how the average amount of energy required to keep the neural population desynchronized with this event-based scheme depends on the magnitude constraint. To investigate this, we consider 100 different population-level simulations for different values of umax. For each simulation, the neurons start out completely synchronized. Results are shown in Figure 10 for the average energy

$$\left\langle \int_0^{350} u(t)^2 \, dt \right\rangle,$$

where ⟨·⟩ denotes the average over the 100 simulations and the integration is over the first 350 msec. The error bars represent the standard deviation over the 100 population-level simulations; the variability is due to the different realizations of the noise in each simulation. We see that the average energy needed to desynchronize the population increases monotonically with umax, until the constraint no longer has any effect on the computed input.
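The averaged energy and its error bars can be estimated as sketched below, assuming the control inputs are sampled on a uniform time grid (the helper name and Riemann-sum quadrature are our choices):

```python
import numpy as np

def mean_energy(u_trials, t):
    """Average energy <int u(t)^2 dt> across simulations.
    u_trials: array of shape (n_sims, n_times) of sampled control inputs;
    t: uniform time grid (msec). Returns (mean, standard deviation)."""
    dt = t[1] - t[0]                              # uniform grid assumed
    energies = np.sum(u_trials**2, axis=1) * dt   # Riemann-sum estimate per simulation
    return energies.mean(), energies.std()
```

The standard deviation across simulations is what the error bars in Figure 10 represent, since each simulation sees a different noise realization.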

Figure 10. Energy used in population-level simulations for 350 msec for different values of umax, averaged over 100 simulations. Here α=0.04 and D=0.7.

We used the average voltage as a measure of synchronization because it is easily defined and is expected to be related to the local field potential, which can be measured experimentally. However, a more common measure of synchronization is the Kuramoto order parameter (Kuramoto, 1984), whose amplitude is

$$R \equiv \left| \frac{1}{N} \sum_{j=1}^{N} e^{i\theta_j} \right|, \tag{15}$$

where N is the number of neurons and θj ∈ [0, 2π) is the phase of the jth neuron in the sense of isochrons (Winfree, 1967). In particular, the phase is defined at all points in the basin of attraction of the periodic orbit; this is important because calculating the Kuramoto order parameter requires knowing the phase even when noise, coupling, and/or a control input takes the state of the neuron off the periodic orbit. We define θ=0 (equivalent to θ=2π) to be the phase at which the neuron spikes, and parametrize the isochrons so that θ̇=2π/T in the absence of noise, coupling, and control input, where T is the period of the neuron. We can estimate the phase of a neuron by taking its current state (Vi, ni) as the initial condition at t=0 for the Reduced Hodgkin-Huxley equations for a single neuron in the absence of noise, coupling, and control input. We integrate these equations forward in time until a voltage spike occurs at time ts. Because the spike corresponds to θ=2π and the phase advances according to θ̇=2π/T, the phase corresponding to the state (Vi, ni) is given by Equation 16:

$$\theta = 2\pi\left(1 - \frac{t_s}{T}\right). \tag{16}$$

To obtain the order parameter R at a given time, we estimate the phases of all N=100 neurons at that time, and use these in Equation 15. Higher values of R correspond to greater synchronization.
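Equations 15, 16 translate directly into code; a minimal sketch, where the spike times t_s below are illustrative (in practice they come from integrating the single-neuron model forward from each state (Vi, ni)):

```python
import numpy as np

def phase_from_state(t_s, T):
    """Equation 16: phase of a state whose next spike, found by integrating the
    noise-free, uncoupled, uncontrolled single-neuron model forward, is at time t_s."""
    return 2 * np.pi * (1 - t_s / T)

def order_parameter(phases):
    """Equation 15: amplitude R of the Kuramoto order parameter (1 = fully synchronized)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

T = 11.85   # period of the Reduced Hodgkin-Huxley neuron, as in Figure 6
sync = [phase_from_state(t_s, T) for t_s in (1.0, 1.1, 0.9)]      # illustrative spike times
print(order_parameter(sync))                                       # near 1: synchronized
print(order_parameter(np.linspace(0, 2 * np.pi, 100, endpoint=False)))  # near 0: spread out
```

Evaluating `order_parameter` on the estimated phases of all N=100 neurons at a given time yields the R values plotted in Figure 11.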

Figure 11 shows the order parameter calculated at 10 msec intervals for the simulations shown in Figure 7 (no constraint on the magnitude of u), Figure 8 (umax=1.5), and Figure 9 (umax=0.5). We see that, broadly speaking, the control inputs tend to reduce R, and that R increases when there are no control inputs, due to the synchronizing effect of the coupling. The results for the unconstrained case and for umax=1.5 are quite similar except at later times; that R increases more rapidly in the unconstrained case there reflects the noise-induced variability between simulations. More interestingly, R tends to be higher for umax=0.5: because the control input is more constrained, a single cycle of input has a smaller effect on the order parameter.

Figure 11. The magnitude of the Kuramoto order parameter for different constraints on the magnitude of the control input, calculated at 10 msec intervals for the results shown in Figures 79. Higher values of R correspond to greater synchronization.

4 Discussion

Motivated by deep brain stimulation treatment of neurological disorders including Parkinson’s disease, there has been much recent interest in using control theory to design optimal input stimuli which desynchronize neural populations. One such approach used chaotic desynchronization to achieve this goal in an energy-optimal fashion (Wilson and Moehlis, 2014b). In this paper, we generalized this optimal chaotic desynchronization methodology by including a magnitude constraint on the input. In particular, we showed how to calculate the optimal stimuli which satisfy such a constraint, and demonstrated that these inputs lead to exponential desynchronization of pairs of neurons and effective desynchronization of populations of coupled, noisy neurons. This approach is based on a phase reduction of the dynamics of a neuron, with the phase response curve quantifying the effect of an external input on a neuron’s phase. Given a neuron’s phase response curve, we used methods from optimal control to calculate inputs that are optimally desynchronizing while minimizing energy utilization. The addition of a magnitude constraint allows the design of optimal stimulation inputs with a maximum amplitude that can be customized based on biological and electronic considerations. Interestingly, while these constrained inputs use less energy, they still achieve population-level desynchronization.

We note an extension of the current paper which could be of interest in future work: incorporating both magnitude and charge-balance constraints on the control stimulus. This follows from the observation that non-charge-balanced stimuli, such as those considered in this paper, can cause harmful Faradaic reactions that may damage the DBS electrode or neural tissue (Merrill et al., 2005), which has motivated the use of a charge-balance constraint in optimal control design (Nabi and Moehlis, 2009; Wilson and Moehlis, 2014b). However, this presents additional challenges because it increases the dimension of the two-point boundary value problem which must be solved numerically. Our approach could be applied to other types of neurons, including those in common neurostimulation targets such as the subthalamic nucleus or the thalamus, and even to periodically bursting neurons, as long as the phase response curve can be determined. If the phase response curve cannot be obtained from electrophysiological measurements (Netoff et al., 2012), and there is no accurate mathematical model which would allow numerical techniques to be used (Ermentrout, 2002; Monga et al., 2019), an approach such as that described in Wilson and Moehlis (2015b), which estimates the phase response curve from aggregate measurements such as the local field potential, could be used. Moreover, we expect that similar population-level control results would be found for other types of coupling, such as synaptic and/or heterogeneous coupling, provided that the coupling strength is not too strong.

We imagine that the results from this paper can be useful to the neuroscience community in cases where there are biological or electronic hardware considerations which limit the allowed input magnitude for a stimulus. With deep brain stimulation becoming an increasingly adopted therapeutic technique for treatment of neurological disorders, this research extends ongoing research efforts to theoretically inform the optimal design of deep brain stimulation protocols.

Data availability statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://github.com/michaelzimet/MgOptChaoticDesync (DOI: 10.5281/zenodo.15595745).

Author contributions

MZ: Software, Writing – review and editing, Methodology, Investigation, Writing – original draft, Conceptualization. FR: Conceptualization, Writing – review and editing, Software. JM: Writing – review and editing, Conceptualization, Methodology, Supervision, Software.

Funding

The author(s) declare that no financial support was received for the research and/or publication of this article.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declare that no Generative AI was used in the creation of this manuscript.

Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnetp.2025.1646391/full#supplementary-material

References

Abouzeid, A., and Ermentrout, B. (2009). Type-II phase resetting curve is optimal for stochastic synchrony. Phys. Rev. E 80, 011911. doi:10.1103/PhysRevE.80.011911

Adamchic, I., Hauptmann, C., Barnikol, U. B., Pawelczyk, N., Popovych, O., Barnikol, T. T., et al. (2014). Coordinated reset neuromodulation for Parkinson’s disease: proof-of-concept study. Mov. Disord. 29, 1679–1684. doi:10.1002/mds.25923

Asl, M. M., Vahabie, A.-H., Valizadeh, A., and Tass, P. A. (2022). Spike-timing-dependent plasticity mediated by dopamine and its role in Parkinson’s disease pathophysiology. Front. Netw. Physiology 2, 817524. doi:10.3389/fnetp.2022.817524

Brown, E., Moehlis, J., and Holmes, P. (2004). On the phase reduction and response dynamics of neural oscillator populations. Neural Comput. 16, 673–715. doi:10.1162/089976604322860668

Cagnan, H., Pedrosa, D., Little, S., Pogosyan, A., Cheeran, B., Aziz, T., et al. (2017). Stimulating at the right time: phase-specific deep brain stimulation. Brain 140, 132–145. doi:10.1093/brain/aww286

Cagnan, H., Denison, T., McIntyre, C., and Brown, P. (2019). Emerging technologies for improved deep brain stimulation. Nat. Biotechnol. 37, 1024–1033. doi:10.1038/s41587-019-0244-6

Danzl, P., Hespanha, J., and Moehlis, J. (2009). Event-based minimum-time control of oscillatory neuron models: phase randomization, maximal spike rate increase, and desynchronization. Biol. Cybern. 101, 387–399. doi:10.1007/s00422-009-0344-3

Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Comput. 8, 979–1001. doi:10.1162/neco.1996.8.5.979

Ermentrout, B. (2002). Simulating, analyzing, and animating dynamical systems: a guide to XPPAUT for researchers and students. Philadelphia: Society for Industrial and Applied Mathematics.

Frey, J., Cagle, J., Johnson, K. A., Wong, J. K., Hilliard, J. D., Butson, C. R., et al. (2022). Past, present, and future of deep brain stimulation: hardware, software, imaging, physiology and novel approaches. Front. Neurol. 13, 825178. doi:10.3389/fneur.2022.825178

Guckenheimer, J., and Holmes, P. J. (1983). Nonlinear oscillations, dynamical systems and bifurcations of vector fields. New York: Springer-Verlag.

Hammond, C., Bergman, H., and Brown, P. (2007). Pathological synchronization in Parkinson’s disease: networks, models and treatments. Trends Neurosci. 30, 357–364. doi:10.1016/j.tins.2007.05.004

Hansel, D., Mato, G., and Meunier, C. (1995). Synchrony in excitatory neural networks. Neural Comp. 7, 307–337. doi:10.1162/neco.1995.7.2.307

Hodgkin, A. L., and Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. 117, 500–544. doi:10.1113/jphysiol.1952.sp004764

Holt, A. B., Wilson, D., Shinn, M., Moehlis, J., and Netoff, T. I. (2016). Phasic burst stimulation: a closed-loop approach to tuning deep brain stimulation parameters for Parkinson’s disease. PLoS Comput. Biol. 12, e1005011. doi:10.1371/journal.pcbi.1005011

Keener, J., and Sneyd, J. (1998). Mathematical physiology. New York: Springer.

Khaledi-Nasab, A., Kromer, J. A., and Tass, P. A. (2022). Long-lasting desynchronization of plastic neuronal networks by double-random coordinated reset stimulation. Front. Netw. Physiology 2, 864859. doi:10.3389/fnetp.2022.864859

Kirk, D. (1998). Optimal control theory. New York: Dover Publications.

Krauss, J. K., Lipsman, N., Aziz, T., Boutet, A., Brown, P., Chang, J. W., et al. (2021). Technology of deep brain stimulation: current status and future directions. Nat. Rev. Neurol. 17, 75–87. doi:10.1038/s41582-020-00426-z

Kubota, S., and Rubin, J. E. (2018). Numerical optimization of coordinated reset stimulation for desynchronizing neuronal network dynamics. J. Comput. Neurosci. 45, 45–58. doi:10.1007/s10827-018-0690-z

Kuramoto, Y. (1984). Chemical oscillations, waves, and turbulence. Berlin: Springer.

Lozano, A. M., and Lipsman, N. (2013). Probing and regulating dysfunctional circuits using deep brain stimulation. Neuron 77, 406–424. doi:10.1016/j.neuron.2013.01.020

Lücken, L., Yanchuk, S., Popovych, O. V., and Tass, P. A. (2013). Desynchronization boost by non-uniform coordinated reset stimulation in ensembles of pulse-coupled neurons. Front. Comput. Neurosci. 7, 63. doi:10.3389/fncom.2013.00063

Lysyansky, B., Popovych, O. V., and Tass, P. A. (2011). Desynchronizing anti-resonance effect of m: n on-off coordinated reset stimulation. J. Neural Eng. 8, 036019. doi:10.1088/1741-2560/8/3/036019

Manos, T., Diaz-Pier, S., and Tass, P. A. (2021). Long-term desynchronization by coordinated reset stimulation in a neural network model with synaptic and structural plasticity. Front. Physiology 12, 716556. doi:10.3389/fphys.2021.716556

Matchen, T., and Moehlis, J. (2018). Phase model-based neuron stabilization into arbitrary clusters. J. Comput. Neurosci. 44, 363–378. doi:10.1007/s10827-018-0683-y

Matchen, T., and Moehlis, J. (2021). Leveraging deep learning to control neural oscillators. Biol. Cybern. 115, 219–235. doi:10.1007/s00422-021-00874-w

Merrill, D., Bikson, M., and Jefferys, J. (2005). Electrical stimulation of excitable tissue: design of efficacious and safe protocols. J. Neurosci. Methods 141, 171–198. doi:10.1016/j.jneumeth.2004.10.020

Moehlis, J. (2006). Canards for a reduction of the Hodgkin-Huxley equations. J. Math. Biol. 52, 141–153. doi:10.1007/s00285-005-0347-1

Moehlis, J., Zimet, M., and Rajabi, F. (2025). Nearly optimal chaotic desynchronization of neural oscillators.

Monga, B., and Moehlis, J. (2019). Phase distribution control of a population of oscillators. Phys. D. 398, 115–129. doi:10.1016/j.physd.2019.06.001

Monga, B., Froyland, G., and Moehlis, J. (2018). “Synchronizing and desynchronizing neural populations through phase distribution control,” in 2018 Annual American Control Conference (ACC), 2808–2813. doi:10.23919/ACC.2018.8431114

Monga, B., Wilson, D., Matchen, T., and Moehlis, J. (2019). Phase reduction and phase-based optimal control for biological systems: a tutorial. Biol. Cybern. 113, 11–46. doi:10.1007/s00422-018-0780-z

Nabi, A., and Moehlis, J. (2009). “Charge-balanced optimal inputs for phase models of spiking neurons,” in Proceedings of the ASME 2009 Dynamic Systems and Control Conference, 685–687. doi:10.1115/DSCC2009-2541

Nabi, A., Mirzadeh, M., Gibou, F., and Moehlis, J. (2013). Minimum energy desynchronizing control for coupled neurons. J. Comput. Neurosci. 34, 259–271. doi:10.1007/s10827-012-0419-3

Najera, R. A., Mahavadi, A. K., Khan, A. U., Boddeti, U., Bene, V. A. D., Walker, H. C., et al. (2023). Alternative patterns of deep brain stimulation in neurologic and neuropsychiatric disorders. Front. Neuroinform. 17, 1156818. doi:10.3389/fninf.2023.1156818

Netoff, T., Schwemmer, M. A., and Lewis, T. J. (2012). “Experimentally estimating phase response curves of neurons: theoretical and practical issues,” in Phase response curves in neuroscience. Editors N. W. Schultheiss, A. A. Prinz, and R. J. Butera (New York, NY: Springer), 95–129.

Neumann, W.-J., Horn, A., and Kühn, A. A. (2023). Insights and opportunities for deep brain stimulation as a brain circuit intervention. Trends Neurosci. 46, 472–487. doi:10.1016/j.tins.2023.03.009

Oehrn, C. R., Cernera, S., Hammer, L. H., Shcherbakova, M., Yao, J., Hahn, A., et al. (2024). Chronic adaptive deep brain stimulation versus conventional stimulation in Parkinson’s disease: a blinded randomized feasibility trial. Nat. Med. 30, 3345–3356. doi:10.1038/s41591-024-03196-z

Popovych, O. V., and Tass, P. A. (2014). Control of abnormal synchronization in neurological disorders. Front. Neurology 5, 268. doi:10.3389/fneur.2014.00268

Popovych, O. V., Hauptmann, C., and Tass, P. A. (2006). Control of neuronal synchrony by nonlinear delayed feedback. Biol. Cybern. 95, 69–85. doi:10.1007/s00422-006-0066-8

Popovych, O. V., Lysyansky, B., Rosenblum, M., Pikovsky, A., and Tass, P. A. (2017). Pulsatile desynchronizing delayed feedback for closed-loop deep brain stimulation. PLoS One 12, e0173363. doi:10.1371/journal.pone.0173363

Qin, Y., Nobili, A. M., Bassett, D. S., and Pasqualetti, F. (2023). Vibrational stabilization of cluster synchronization in oscillator networks. IEEE Open J. Control Syst. 2, 439–453. doi:10.1109/OJCSYS.2023.3331195

Rajabi, F., Gibou, F., and Moehlis, J. (2025). Optimal control for stochastic neural oscillators. Biol. Cybern. 119, 9. doi:10.1007/s00422-025-01007-3

Rosenblum, M. G., and Pikovsky, A. S. (2004a). Controlling synchronization in an ensemble of globally coupled oscillators. Phys. Rev. Lett. 92, 114102. doi:10.1103/PhysRevLett.92.114102

Rosenblum, M. G., and Pikovsky, A. S. (2004b). Delayed feedback control of collective synchrony: an approach to suppression of pathological brain rhythms. Phys. Rev. E 70, 041904. doi:10.1103/PhysRevE.70.041904

Rubchinsky, L. L., Park, C., and Worth, R. M. (2012). Intermittent neural synchronization in Parkinson’s disease. Nonlinear Dyn. 68, 329–346. doi:10.1007/s11071-011-0223-z

Sandoval-Pistorius, S. S., Hacker, M. L., Waters, A. C., Wang, J., Provenza, N. R., de Hemptinne, C., et al. (2023). Advances in deep brain stimulation: from mechanisms to applications. J. Neurosci. 43, 7575–7586. doi:10.1523/JNEUROSCI.1427-23.2023

Santaniello, S., McCarthy, M. M., Montgomery, Jr., E. B., Gale, J. T., Kopell, N., and Sarma, S. V. (2015). Therapeutic mechanisms of high-frequency stimulation in Parkinson’s disease and neural restoration via loop-based reinforcement. Proc. Natl. Acad. Sci. 112, E586–E595. doi:10.1073/pnas.1406549111

Spiliotis, K., Starke, J., Franz, D., Richter, A., and Köhling, R. (2022). Deep brain stimulation for movement disorder treatment: exploring frequency-dependent efficacy in a computational network model. Biol. Cybern. 116, 93–116. doi:10.1007/s00422-021-00909-2

Tass, P. A. (2003). A model of desynchronizing deep brain stimulation with a demand-controlled coordinated reset of neural subpopulations. Biol. Cybern. 89, 81–88. doi:10.1007/s00422-003-0425-7

Tass, P. A. (2006). Phase resetting in medicine and biology: stochastic modelling and data analysis. Berlin: Springer.

Vu, M., Singhal, B., Zeng, S., and Li, J.-S. (2024). Data-driven control of oscillator networks with population-level measurement. Chaos 34, 033138. doi:10.1063/5.0191851

Wilson, D. (2020). Optimal open-loop desynchronization of neural oscillator populations. J. Math. Biol. 81, 25–64. doi:10.1007/s00285-020-01501-1

Wilson, D., and Moehlis, J. (2014a). Locally optimal extracellular stimulation for chaotic desynchronization of neural populations. J. Comput. Neurosci. 37, 243–257. doi:10.1007/s10827-014-0499-3

Wilson, D., and Moehlis, J. (2014b). Optimal chaotic desynchronization for neural populations. SIAM J. Appl. Dyn. Syst. 13, 276–305. doi:10.1137/120901702

Wilson, D., and Moehlis, J. (2015a). Clustered desynchronization from high-frequency deep brain stimulation. PLoS Comput. Biol. 11, e1004673. doi:10.1371/journal.pcbi.1004673

Wilson, D., and Moehlis, J. (2015b). Determining individual phase response curves from aggregate population data. Phys. Rev. E 92, 022902. doi:10.1103/PhysRevE.92.022902

Wilson, D., and Moehlis, J. (2022). Recent advances in the analysis and control of large populations of neural oscillators. Annu. Rev. Control 54, 327–351. doi:10.1016/j.arcontrol.2022.05.002

Wilson, C. J., Beverlin II, B., and Netoff, T. (2011). Chaotic desynchronization as the therapeutic mechanism of deep brain stimulation. Front. Syst. Neurosci. 5, 50. doi:10.3389/fnsys.2011.00050

Winfree, A. (1967). Biological rhythms and the behavior of populations of coupled oscillators. J. Theor. Biol. 16, 15–42. doi:10.1016/0022-5193(67)90051-3

Keywords: optimal control, desynchronization, deep brain stimulation, Lyapunov exponent, network physiology

Citation: Zimet M, Rajabi F and Moehlis J (2025) Magnitude-constrained optimal chaotic desynchronization of neural populations. Front. Netw. Physiol. 5:1646391. doi: 10.3389/fnetp.2025.1646391

Received: 13 June 2025; Accepted: 29 September 2025;
Published: 21 October 2025.

Edited by:

Eckehard Schöll, Technical University of Berlin, Germany

Reviewed by:

Alexander Neiman, Ohio University, United States
Louis M Pecora, Naval Research Laboratory, United States

Copyright © 2025 Zimet, Rajabi and Moehlis. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jeff Moehlis, moehlis@ucsb.edu
