Response theory: a trajectory-based approach

We collect recent results on deriving useful response relations also for nonequilibrium systems. The approach is based on dynamical ensembles, determined by an action on trajectory space. (Anti)symmetry under time-reversal separates two complementary contributions in the response, one entropic, the other frenetic. Under time-reversal invariance of the unperturbed reference process, only the entropic term is present in the response, giving the standard fluctuation-dissipation relations in equilibrium. For nonequilibrium reference ensembles, the frenetic term contributes essentially and is responsible for new phenomena. We discuss modifications in the Sutherland-Einstein relation, the occurrence of negative differential mobilities and the saturation of response. We also indicate how the Einstein relation between noise and friction gets violated for probes coupled to a nonequilibrium environment. We end with some discussion of the situation for quantum phenomena, but the bulk of the text concerns classical mesoscopic (open) systems. The many simple examples are meant to make these notes pedagogical, as an introduction to an important area of research in nonequilibrium statistical mechanics.

To know a system operationally is to be able to predict its response to a stimulus.
Conversely, we learn about a system by observing its response. In many ways and in all sciences, that is the very ground for doing experiments where we interfere with the system's condition. In psychology for example, subjects are often tested for their reaction to external stimuli. Conclusions are then formulated about susceptibility or vulnerability.
In other domains, from sociology to climate science, we speak of the impact of events or measures, and/or of the resilience of the system of interest; see e.g. [1]. For biological processes, adaptation (i.e., proper response) to changes in the environment is a matter of survival. How robust are food webs or other (economic) networks over which supply and demand move? On micro-scales, mechanotransduction makes cells respond biochemically to mechanical stimuli. All of these areas are of immense interest and even importance today.
In physics, response has long been associated with transport phenomena.
The transport of particles, energy, volume or momentum is a central subject in all of physics. Pushing, driving, stimulating or exciting a system in one or the other way, leads to displacements in physical quantities. The amount and nature of any displacement and how it depends on the original condition is the subject of response theory. Transport coefficients such as conductivities and mobilities, viscosities and elasticity moduli, have therefore been studied often in the context of response theory.
Over time however, a more general framework has emerged, beginning with linear response theory around equilibrium. It is the context of so-called fluctuation-dissipation relations.
The terminology hints at the nature of the result, at least for equilibrium systems: response got connected with fluctuating quantities, in some cases expressing dissipation or diffusion of quantities like energy, position or velocity. As a consequence, response theory also played a role in summarizing or establishing irreversible behavior on macroscopic scales starting from reversible microscopic laws; see e.g. [2].
Response theory for systems out-of-equilibrium is of more recent times. One major problem, even for the more restricted class of nonequilibrium processes considered here, is that the response is no longer describable in terms of thermodynamic variables like energies or entropy. Kinetics enters, and the steady condition is not characterized simply in terms of a few macroscopic quantities. Typically we do not know the stationary distribution, and yet we wish to formulate response in terms of observable quantities. That is the main aim of this paper: to explain an approach to response which is trajectory-based, meaning that ensembles are formulated on the space of allowed trajectories. The action or Lagrangian contains both thermodynamic and kinetic information about the process, and that gets reflected in response relations. The trajectory-based approach of the present paper, on micrometer scales, is compatible with the recent great progress in monitoring and manipulating mesoscopic trajectories of tagged particles. We have in mind fluorescence and fast-camera tracking, combined with optical manipulations and shaping of potentials and driving, e.g. via optical tweezers (1986) [3]. Such experimental tools make it possible to collect kinetic (and not only thermodynamic) information as well, which appears to be an unavoidable prerequisite for understanding nonequilibrium behavior.
From the conceptual point of view, we must prepare the scene and introduce structure in (nonequilibrium) response. From what follows below, the most important players to correlate with are excesses in entropy flux and frenesy. The latter concept is relatively new and requires examples and illustrations to understand its operational meaning. In particular, response measurements will give information about changes in dynamical activity and escape rates, which constitute the meaning of frenesy. We refer to recent monographs on frenesy for an update, [4,5]. In all, we seek expressions of response that are informative or operationally useful. Response theory indeed hopes to relate the stimulus with observable effects in the unperturbed system. The ambition is thus bigger than providing a Taylor expansion or some formal perturbation series in the amplitude of the stimulus. Understanding response means to identify mechanisms and specify observables that are relevant even independent of the detailed model, stimulating intuition and enabling one to reconstruct the response in terms of some more elementary considerations.
Response relations have been formulated for a very long time, and their contents never failed to impress. An early example has the typical setup drawn in Fig. 1. It concerns the second Thomson relation (1854) between the Seebeck and the Peltier coefficients. Their equality was understood to be a manifestation of time-reversal invariance in the 1931 work [6] of Lars Onsager. Such Onsager reciprocity relations, as indeed found in thermoelectric phenomena, are useful to decrease the number of unknown linear response coefficients. They can also be read off from the Green-Kubo relations that were derived a hundred years after the paper by Kelvin [7]. The general idea is that in linear response around equilibrium, the average ⟨J_i⟩_F of a current of type i (e.g. an electric current) is proportional to its equilibrium correlation with the excess entropy flux S = Σ_k F_k J_k, where F_k is the thermodynamic force of type k (e.g. giving the difference in temperature at opposite ends of the system). The linear response coefficients, proportional to ⟨J_i J_k⟩ with averages in the equilibrium ensemble, are clearly symmetric under exchanging i ↔ k (e.g. allowing to identify the Seebeck coefficient with the Peltier coefficient divided by temperature). (We ignore for the moment the issue of parity and generalized Casimir-Onsager reciprocity.) The intervention of the entropy flux, defined from a balance equation, was in essence the start of much of irreversible thermodynamics [8].
Another line of response theory started with the PhD work of Pierre Curie (1896) on the magnetic susceptibility of paramagnets. There, we do not deal with transport or with currents, but we look at the response of the magnetization. Curie derived that at high temperature the equilibrium magnetization m_h responds to a small external magnetic field h with susceptibility χ, for which m_h − m_0 = χ h, with χ ∼ 1/T. I.e., the magnetic susceptibility falls off with the inverse of the absolute temperature T (Curie's law). The structure of such relations has been clarified by the Gibbs formalism, where free energies govern responses via their derivatives. E.g., heat capacities are thereby related to variances in energy or enthalpy. Mixed derivatives give rise to an analogue of the Onsager reciprocity for linear transport coefficients, known as Betti-Maxwell reciprocity (in equilibrium elasticity theory).
Perhaps the best-known response formula however is the Sutherland-Einstein relation (1904-05), [9,10]. There, the mobility is proportional to the diffusion constant. It is a functional cornerstone of much of colloidal physics. We will see various elementary examples in Section II B. All of the above are called fluctuation-dissipation relations of the first type.
A further line of relations, following from response theory and called fluctuation-dissipation relations of the second type, has been opened by the Johnson-Nyquist formula. It gives an expression for the noise arising from the thermal agitation of the electrons in a resistor. As a consequence, a random voltage emerges which can be measured at the ends of the resistor (Johnson effect, 1926). Mathematically, that voltage can be described as the random voltage source

U^f_t = √(2 k_B T R) ξ_t    (2)

given in the Nyquist formula (1928), with R the resistance and ξ_t a standard white noise. The amplitude is of course very small. Consider then an RC-circuit in which that resistor is put in series with a capacitor of capacitance C and a battery with electromotive force E. Write U_t for the variable potential difference over the capacitor. Kirchhoff's second law gives

RC dU_t/dt = E − U_t + U^f_t    (3)

By inserting the white noise ξ_t following (2), we obtain the Langevin equation

dU_t/dt = (E − U_t)/(RC) + √(2 k_B T/(RC²)) ξ_t    (4)

With the battery removed, E = 0, the dynamics is reversible for the energy function H(U) = CU²/2. In particular, lim_{t↑∞} ⟨U_t²⟩ = k_B T/C, in accordance with the equipartition theorem. We can however also see from (4) how the potential changes when the battery is turned on or when E changes in time. That is again the subject of response theory, and the answer obviously depends on and should make use of the choice (2); we come back to the example at the end of Example II.4.

From the above (more historical) examples we already become aware of a possible connection between response and dissipation as expressed in fluctuation relations. That will be systematized in the following sections. In this respect it is useful to keep distinctions clear and to separate various questions. Terminology is not always helpful here, as terms such as fluctuation-dissipation relations, Einstein relation, response relation etc. are used in multiple meanings throughout the literature.
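As a minimal numerical sketch (illustrative parameter values, units with k_B = 1, not taken from the text), one can integrate the Langevin equation (4) for E = 0 by Euler-Maruyama and check the equipartition value ⟨U²⟩ = k_B T/C:

```python
import numpy as np

# Euler-Maruyama integration of the RC-circuit Langevin equation driven by
# Johnson-Nyquist noise: RC dU/dt = E - U + sqrt(2 kB T R) xi_t.
# Parameters are illustrative (kB = 1); battery switched off: E = 0.
rng = np.random.default_rng(0)
R, C, kBT, E = 1.0, 2.0, 1.0, 0.0
dt, nsteps = 2e-3, 1_000_000
xi = rng.normal(size=nsteps) * np.sqrt(2 * kBT * R * dt)  # noise increments
U = np.empty(nsteps)
u = 0.0
for i in range(nsteps):
    u += ((E - u) * dt + xi[i]) / (R * C)
    U[i] = u
var_U = U[nsteps // 2:].var()   # discard the transient, then measure
print(var_U, kBT / C)           # <U^2> should approach kB T / C
```

The run covers many relaxation times RC, so the empirical variance settles near the equipartition value k_B T/C.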

II. GENERAL QUESTION AND AMBITIONS
Response will be collected in a time-interval [0, t]. At negative times s ≤ 0 (all the way to time zero) the system of interest has been prepared in a reference condition. That can be many things, from a thermal equilibrium condition to a specific transient regime or, most often in this paper, a steady nonequilibrium reference. The idea is that at time zero, the system (in whatever prepared or reference condition) opens to a time-dependent stimulus.
That stimulus will be treated as a perturbation, and hence we speak of linear versus nonlinear response depending on the sought consequence of the (small) stimulus. Both the stimulus (or perturbation) and the observed quantity are allowed to be time-extensive over [0, t]. The goal of response theory is to describe and predict in a systematic and physical way the statistical response, preferably from observations that could be made in the initial (reference) condition. The word "statistical" refers to the fact that we deal with a reduced description, physically compatible with the microscopic laws but on a level where the hidden degrees of freedom have been integrated out (after some infinite volume limit, in weak coupling etc.) and provide "enough" noise for dissipative behavior. In that respect it is not necessarily the task of response theory to demonstrate dissipative behavior; rather, its validity will depend on it.
As is clear from scanning the vast literature on the subject, there are many different versions of response theory. Apart from standard treatments in text books such as [11][12][13][14][15], they include the papers [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31] to which we refer for other approaches and results. The originality of our approach is to start from dynamical ensembles on path-space. The action governing the weight of a trajectory gets a physical significance in its decomposition into a time-antisymmetric source (entropy flux) and a time-symmetric contribution (frenesy), which both change due to the perturbation. The merit of response theory is indeed not its formal appearance - in the end we are all doing Taylor expansions assuming (and sometimes proving) convergence of certain integrals. In particular, for nonequilibrium purposes, we emphasize the importance of the frenetic contribution in response; for further details and discussions, we refer to [32][33][34][35][36][37][38].
We end those verbosities by winding three final remarks around the main subject: Remark II.1. -The objection by Nico van Kampen (1971) against linear response theory and the derivation of (Green-)Kubo relations has been widely discussed. The original concerns were formulated in [39]. Multiple reactions and answers have been given. To summarize the situation, van Kampen criticised the microscopic approach via the Liouville equation (which one still often encounters in text books and reviews). Linearizing the microscopic theory is no justification of linear macroscopic equations (with currents proportional to forces). Moreover, microscopic dynamics can be very nonlinear in the sense of possessing strong dependence on initial conditions. Linear response on that micro-level would only hold for absurdly short times.
These objections are of course fully justified, but linear response need not proceed as naively as criticized by van Kampen. In a way, and in no contradiction with van Kampen's objection, linear response can only be expected to work well on scales of description where "noise has been effective" in making the reduced description sufficiently chaotic. Paradoxically, instabilities typically help to assure sufficient statistical mixing; see also [40]. In what we will discuss, the system is open and assumed to be described by a probability law on trajectories with an action which is sufficiently local in spacetime. A simple realization is given by Markov processes.
The physics that precedes that description is one of weak coupling with an infinite bath of components which evolve on a much faster time scale. The correct order of linear response is indeed to first take the thermodynamic limit and to focus on a reduced description which is sufficiently spacetime-mixing. Then, only afterwards, the limit of linear response can be taken. Linear response formulae will therefore not prove diffusive or dissipative behavior on meso- to macroscopic scales, but instead depend on it for their full justification.
Remark II.2. -The issue of causality and relaxation amounts to the question whether we should impose or rather derive the fact that the response happens after the stimulus. It would seem natural that no extra condition of causality is needed; the dynamics with its perturbation should take care of that. That is also the option we are taking. Nevertheless, the fact that it takes time for a perturbation at some fixed moment to relax away so that the system may return to its original condition, is deep and interesting even in classical physics.
Clearly, estimating relaxation times is not purely a question of thermodynamics. That the convergence is fast enough requires the absence of jamming and localization. Response theory indeed uses time-correlation functions, and their (sufficient) decay is an assumption or a result whose justification falls outside response theory altogether. Stability of (non)equilibria [41] is a subject which is clearly related to response theory, but the latter often presupposes the former.
Remark II.3. -Numerical work, and in particular equilibrium molecular dynamics, has been successfully used to compute transport coefficients from the Green-Kubo formulae. For nonequilibrium response relations, various new algorithms, in particular using thermostated dynamics, have been employed. Numerical methods and their physical motivation fall outside the scope of the present discussion, but we refer to the book [42] for more material and references. For nonequilibrium response, the search for efficient numerical algorithms to evaluate the FDR, such as the so-called zero-field (or field-free) algorithms, has played an important role; see [27,[43][44][45].
A. Plan of the paper

After presenting a number of well-known and more elementary examples, we introduce the main formal tool in Section III. Dynamical ensembles are presented with their action and its decomposition in time-symmetric and time-antisymmetric excesses. There will be plenty of examples to illustrate their nature for various types of Markov processes satisfying local detailed balance. As such however, dynamical ensembles may stand on their own and do not essentially depend on specifying the underlying dynamical equations. The response theory in Section IV depends mathematically solely on the action. Its decomposition becomes meaningful by giving rise to two major contributions to the response (entropic and frenetic).
The discussion of response around nonequilibria makes up the main part of the paper, but we also discuss a unifying view on response around equilibria. Apart from presenting various cases of response, we also explain the relation with local detailed balance and the different kinds of fluctuation-dissipation relations that exist. We often concentrate there on the Sutherland-Einstein relation and its possible violation. We discuss some experimental challenges.

Having stated that, there are naturally also many things which are not discussed explicitly in the present review. There is for example no discussion of nonequilibrium additions to viscosities and elastic moduli, hence not touching the subject of odd viscosity and elasticity [46][47][48][49]. We also spend very little time on aspects of heat conductivity and on the question of anomalous transport (in low dimensions), see e.g. [50] and references therein.
In particular, we do not address the question of integrability of (even) Kubo expressions for the linear transport, when there are more conserved quantities, when (almost) integrability obtains or in low dimensions. All of those are important topics of current research but here we have chosen to highlight only the most elementary structures in a pedagogical exposition for making the bridge to nonequilibrium response theory.

B. Elementary examples
Example II.4 (Langevin dynamics). Consider a small particle of mass m in a thermal environment at temperature T. At time s = 0 an external force field F_s is turned on. From then on, the dynamics is modeled with the perturbed Langevin evolution (in one dimension), with position q_s following dq_s/ds = v_s and velocity v_s changing with

m dv_s/ds = −γm v_s + F_s + √(2γm k_B T) ξ_s    (5)

where (here and later) (ξ_s)_s is a standard white noise process (dimension of time^{−1/2}, with mean zero and delta-time-correlated with unit variance). At time s = 0, the particle has a Maxwellian velocity distribution, with ⟨v⟩_eq = 0. The idea is that at times s > 0, F_s pushes the particle to move. The mobility is the response function R(τ) entering in the expected velocity

⟨v_t⟩ = ∫_0^t ds R(t − s) F_s

It shows how susceptible the particle is to the force F_s. Here we can compute everything, R(τ) = e^{−γτ}/m, which means that the mobility M = ∫_0^∞ dτ R(τ) = 1/(γm).
It is however physically and mathematically often useful to work in Fourier space. One easily computes the Fourier transform

R̂(ω) = ∫_0^∞ dτ e^{−iωτ} R(τ) = 1/(m(γ + iω))    (6)

On the other hand, without the forcing we have an equilibrium process, satisfying detailed balance with Maxwellian stationary distribution. There, the time-correlation is ⟨v_t v_0⟩_eq = (k_B T/m) e^{−γ|t|}, such that its Fourier transform equals

Ĝ(ω) = ∫_{−∞}^{+∞} dt e^{−iωt} ⟨v_t v_0⟩_eq = (k_B T/m) 2γ/(γ² + ω²)    (7)

which implies the equality (with β = 1/(k_B T))

R̂(ω) + R̂(−ω) = β Ĝ(ω)

call it a fluctuation-dissipation relation (FDR) of the first kind. In the present case it provides an easy example of the Sutherland-Einstein relation because the diffusion constant D is related to Ĝ: with q_0 = 0,

D = lim_{t↑∞} ⟨q_t²⟩/(2t) = ∫_0^∞ dt ⟨v_t v_0⟩_eq = Ĝ(0)/2    (8)

Combining (6), (7) and (8) we arrive indeed at

M = R̂(0) = β D    (9)

Note that this relation is exact for the Langevin dynamics (5), because of the linearity of the dynamics.
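The Sutherland-Einstein relation (9) is easily checked in simulation. The following sketch (illustrative parameters, Euler-Maruyama discretization) estimates D from the mean-square displacement of many independent realizations of the Langevin dynamics (5) and compares it with k_B T/(γm):

```python
import numpy as np

# Monte Carlo check of D = kB T/(gamma m) for the linear Langevin
# dynamics (5): dq = v dt, m dv = -gamma m v dt + sqrt(2 gamma m kB T) dW.
# Parameter values are illustrative, not taken from the text.
rng = np.random.default_rng(1)
m, gamma, kBT = 1.0, 2.0, 1.0
dt, nsteps, ntraj = 1e-2, 5000, 4000
v = rng.normal(scale=np.sqrt(kBT / m), size=ntraj)   # Maxwellian start
q = np.zeros(ntraj)
for _ in range(nsteps):
    q += v * dt
    v += -gamma * v * dt + np.sqrt(2 * gamma * kBT / m * dt) * rng.normal(size=ntraj)
t = nsteps * dt                     # t >> 1/gamma: diffusive regime
D_est = (q**2).mean() / (2 * t)     # <q_t^2>/(2t)
print(D_est, kBT / (gamma * m))     # the two numbers should agree
```

Because the dynamics is linear, the agreement is limited only by sampling noise and the finite time step.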
Looking back at the RC-circuit and (4), we see the same structure as in (5) with the identification γ = 1/(RC), F_s = E/R and m = C. In other words, it is the FDR of the second kind (also called the Einstein relation between noise and friction) that ensures (9): if the factor in front of the noise in (5) had been different, (9) would not obtain. In fact, the opposite is also true, in the sense that the FDR of the second kind can be derived from linear response theory (relations like (9)) around equilibrium for the bath. We will explain that in Section IV A 3. All such relations depend on microscopic reversibility, which is derived from the dynamical reversibility of Hamiltonian dynamics in the microcanonical ensemble.
These things will become more clear as we proceed; see Section III C.
Example II.5 (simple random walk). Consider a dilute suspension of colloids being driven in a tube or channel with a rough and irregular inner surface and filled with some viscous fluid in equilibrium at temperature T. We suppose that the tube is spatially periodic in one dimension with cells of size L. The driving is from a constant force F (on the colloids) pushing them, say, to the right. A picture of the situation in one cell (repeated periodically) is provided in Fig. 3. We want to model that dynamics and transport with a biased continuous-time random walk on the one-dimensional lattice. Each site x corresponds to a cell. Our mathematical model needs two parameters p and q, giving the transition rates to hop to the right, respectively to the left. We think of a local force around x making that possible, which works on the walker to make the transition to the next cell. That work is dissipated instantaneously into the thermal environment. The work done by the constant force over length L is dissipated as Joule heating in the fluid. The corresponding change in entropy in the bath is thus FL/T. From the condition of local detailed balance (to be recalled in Section III C) we put p/q = exp[FL/k_B T], which expresses that the ratio of forward to backward rates is given by the entropy flux to the environment per k_B. Writing ε = FL/(k_B T), we thus have

p = a(ε) e^{ε/2},  q = a(ε) e^{−ε/2}    (10)

where we inserted a kinetic parameter a(ε) = √(pq) > 0, possibly depending on the driving F, temperature T, cell length L and other things such as the geometry of the channel/tube.
To say it differently, we suppose the escape rate from each cell to be

p + q = 2 a(ε) cosh(ε/2)

It tells us how the average residence time ∼ 1/(p + q) in each cell of the channel depends on the force F. Now let us see about the motion.
The current (flux per particle from cell to cell) obviously equals

v_F = L(p − q) = 2L a(ε) sinh(ε/2)    (11)

Expanding around F = 0 gives for the linear term

v_F ≈ L a(0) ε = a(0) β F L²

Hence, the mobility is M = β a(0) L², the linear transport coefficient. That is again an instance of the Sutherland-Einstein relation since the diffusion constant (without force, ε = 0) here equals D = a(0) L². Note that the expression (11) is exact and can of course be evaluated to all orders in ε. The differential mobility dv_F/dF as a function of ε clearly picks up the dependence of the escape rate p + q on ε. In particular it is easy to see that this differential mobility can become negative at large enough values of ε, when p + q decreases with large ε. There is nothing surprising here, and we will see later how that conclusion can be turned into a constructive idea.
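The negative differential mobility is visible already in the exact expression (11) once a concrete kinetic prefactor is chosen. The sketch below uses the illustrative choice a(ε) = a_0 e^{−ε²/2} (any a decaying fast enough gives the same qualitative picture; this particular form is not fixed by the text) and evaluates dv_F/dε numerically:

```python
import numpy as np

# Exact current v(eps) = 2 L a(eps) sinh(eps/2) of Example II.5 with an
# illustrative kinetic prefactor a(eps) = a0 * exp(-eps**2/2).
L, a0 = 1.0, 1.0
def a(eps): return a0 * np.exp(-eps**2 / 2)
def v(eps): return 2 * L * a(eps) * np.sinh(eps / 2)

eps = np.linspace(0.0, 4.0, 401)
dv = np.gradient(v(eps), eps)   # differential mobility dv/deps
print(dv[0], dv[-1])
# near eps = 0 the slope is L*a(0) (linear, Sutherland-Einstein regime);
# at large eps the escape rate drops and dv/deps turns negative.
```

The current first grows with the bias, reaches a maximum, and then decays: pushing harder slows the walker down.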
Example II.6 (Periodic potential). Example II.4 can be extended to include a periodic potential. Then, a force is added to the Langevin equation that derives from a periodic potential U, making (in three dimensions now)

m d²r_s/ds² = −∇U(r_s) + f − mγ dr_s/ds + √(2mγ k_B T) ξ_s    (12)

where f is a constant force, to perturb the purely diffusive motion. For the response, there is the mobility (matrix) function M(t), measuring the expected displacement of the particle:

M_ij(t) = (1/t) ∂⟨r_t(i) − r_0(i)⟩_f/∂f_j |_{f=0}

The subscript f in the average refers to the dynamics with the extra force f, perturbing −∇U(r_t) → −∇U(r_t) + f. The mobility is the limit M_ij = lim_{t↑∞} M_ij(t), giving the linear change in the stationary velocity by the addition of a small constant force.
The subscripts give the components of the corresponding vectors.
The diffusion (matrix) function D(t) at finite time t is defined as

D_ij(t) = (1/(2t)) ⟨r_t(i) − r_0(i); r_t(j) − r_0(j)⟩

That is again in the equilibrium process, with f = 0. The right-hand side is the covariance: in general, for observables A and B we write

⟨A; B⟩ = ⟨AB⟩ − ⟨A⟩⟨B⟩

The diffusion matrix is the limit D_ij = lim_{t↑∞} D_ij(t), as we expect the (co)variance of the displacement of the particle to be linear in time for t ≫ 1/γ.
Exact computations are tedious now. Yet we will see in Section IV A 2 why (also for the dynamics (12)) we have the standard Sutherland-Einstein relation M ij = D ij /(k B T ). Note however that in contrast with the case where U = constant, the mobility no longer equals δ ij /(γm). For example, the mobility decreases with the amplitude of the conservative force as the particle needs to escape potential wells to have a non-zero velocity.
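That last remark can be made quantitative with the Lifson-Jackson formula for overdamped diffusion in a one-dimensional periodic potential, a standard result not derived in this text: D_eff = D_0 / (⟨e^{U/k_B T}⟩ ⟨e^{−U/k_B T}⟩), with averages over one period. Together with the Sutherland-Einstein relation it shows the mobility dropping with the potential amplitude (illustrative numbers):

```python
import numpy as np

# Lifson-Jackson formula for overdamped diffusion in U(x) = A cos(2 pi x/L):
# D_eff = D0 / (<exp(U/kBT)> <exp(-U/kBT)>), period averages.
# By Sutherland-Einstein the mobility is reduced by the same factor.
kBT, D0, L = 1.0, 1.0, 1.0
x = np.linspace(0.0, L, 10_000, endpoint=False)
Ds = []
for A in (0.0, 1.0, 2.0):
    U = A * np.cos(2 * np.pi * x / L)
    Ds.append(D0 / (np.exp(U / kBT).mean() * np.exp(-U / kBT).mean()))
print(Ds)   # decreasing with the amplitude A
```

For U = A cos(2πx/L) the period averages are modified Bessel functions, D_eff = D_0/I_0(A/k_B T)², so already A = 2 k_B T suppresses the diffusion (and hence the mobility) by roughly a factor of five.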

III. DYNAMICAL ENSEMBLES
Equilibrium statistical mechanics is centered around an object which is often called the Hamiltonian. It specifies the interaction potential between the components. Given such an energy function on some effective scale of description, the ensemble gets fixed by specifying the constraints or by giving intensive variables such as temperature and chemical potential.
The resulting Boltzmann-Gibbs probability laws give the equilibrium distributions on configuration or phase space. Under conditions like translation-invariance, they are solution of the Gibbs variational principle for a suitable free energy functional.
There is no strict analogue for nonequilibrium systems, at least not reaching the power and the glory of the Gibbs formalism. While in some rare cases of nonequilibrium systems we have partial information about the stationary (single-time) distribution for a given dynamics, there is no overarching principle to specify it physically. The reason is probably that kinetic (non-thermodynamic) features cannot be well represented (locally) at a fixed time.
The situation appears to be more promising on trajectory space. Such an option was already chosen in the work of Onsager and Machlup [51], for Gaussian processes showing small fluctuations around hydrodynamical behavior for relaxation to equilibrium. It was also the starting point of [52] for studying steady nonequilibrium. We then want to find the physically correct (relative) weights of trajectories, as traditionally given in terms of an action and a Lagrangian. We will see below how to construct the action for Markov processes. Yet, and even more importantly, we hope to understand operationally what contributes to the action by using it.
The idea is to consider on the level of description of interest a family of possible/realizable trajectories ω. They are realized by continuous time processes for systems in contact with possibly various but well-separated equilibrium reservoirs. We open the time-window [0, t] to write ω = (x s , s ∈ [0, t]) for a trajectory. The "state" x s at time s can be a manybody mesoscopic condition, e.g, giving the chemomechanical configuration of a collection of molecular motors or the positions of colloids or the displacements and velocities for a crystal of oscillators 1 . Most often, the space of trajectories (path-space) must be restricted mathematically to have some regularities and for sure, it is an infinite-dimensional space.
Yet, we ignore the mathematically more precise formulation, which is trivial enough, and we outline the formal structure only, choosing also the simplest notation. In that spirit we write the probability of a trajectory as

P[ω] = e^{−A(ω)} P_ref[ω]    (16)

where P_ref = Prob_ref denotes a reference ensemble (probability) and A is called the action. We obviously want to use that the action A, as a function of the trajectories, is (quasi-)local in spacetime. E.g. for Markov processes, A will be given by a time-integral of single- or double-time events. We did not specify here the initial conditions (at time 0), but the idea is that we want A only to depend on the dynamics, not on the initial conditions 2.
In other words, in (16) we let P(x_0) = P_ref(x_0), coinciding at time zero. Below we give examples to illustrate that structure; Section III B is devoted to it. To start immediately however, we go back to Example II.5.
Example III.1 (simple random walk, continued). What weight P[ω] to give to a trajectory ω of a continuous-time random walker? As in Example II.5, we take the transition rates p = a(ε) e^{ε/2} and q = a(ε) e^{−ε/2}. A trajectory has periods of waiting separated by jump times. The waiting times are distributed exponentially with constant rate p + q, wherever the walker resides at that moment. It will contribute an overall factor. To concentrate on the jumping, we suppose the trajectory has N_+ steps forward and N_− steps backward during [0, t]. Then,

P[ω] ∝ p^{N_+} q^{N_−} = a(ε)^{N(ω)} e^{ε J(ω)/2}    (17)

where the second equality takes the notation of Example II.5: J(ω) = N_+ − N_− is the particle current and N(ω) = N_+ + N_− is the total number of (unoriented) jumps (dynamical activity). Hence, taking as reference the process with ε = 0 in (16), we have

A(ω) = −N(ω) log[a(ε)/a(0)] − (ε/2) J(ω)    (18)

up to irrelevant (since constant) terms.

1 We will write x for a general state, possibly including many-body positions, velocities or spins. We use q or r when we explicitly address the positions of particles, and v for velocities.
2 We do not consider in the present review the case of comparing two different initial conditions, or the relaxation from perturbing the stationary distribution as initial condition. In such cases, a procedure following the Agarwal-method is possible, [33,53].
Let us see what we can learn from just that expression. Take e.g. −log[a(ε)/a(0)] ∼ ε². Then, for large ε, trajectories ω having small N(ω) will be preferred. Therefore, as ε grows larger, the dynamical activity gets reduced and hence the current will also decrease. It will possibly die. That is the same conclusion as from the considerations in Example II.5. Trapping far-from-equilibrium can be induced by pushing too much; see also [54,55]. On the other hand, for small ε (in linear order around zero bias) we can as well forget the influence of the dynamical activity, and the linear response regime may be called purely dissipative: we could as well take

P[ω] = e^{(ε/2) J(ω)} P_ref[ω]

instead of (16)-(18), when asking for linear response around the reference ε = 0.
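The dynamical ensemble (16)-(18) can be put to work numerically: sampling trajectories of the reference walk (ε = 0) and reweighting them with e^{−A} reproduces expectations of the driven walk. The sketch below does this for the current, taking a(ε) = a constant for simplicity (illustrative parameters); the constant escape-rate difference then enters the action as the deterministic term 2a(cosh(ε/2) − 1)t:

```python
import numpy as np

# Reweighting reference trajectories (rates a, a) by e^{-A} to obtain
# the driven average <J>_eps (rates p = a e^{eps/2}, q = a e^{-eps/2}).
rng = np.random.default_rng(2)
a, eps, t, nsamp = 1.0, 0.6, 5.0, 200_000
Nplus = rng.poisson(a * t, size=nsamp)    # forward jumps under reference
Nminus = rng.poisson(a * t, size=nsamp)   # backward jumps under reference
J = Nplus - Nminus                        # time-antisymmetric current
# action for constant a: A = -(eps/2) J + 2 a (cosh(eps/2) - 1) t
w = np.exp(eps / 2 * J - 2 * a * (np.cosh(eps / 2) - 1) * t)
J_driven = (J * w).mean()                 # reweighted <J>_eps
exact = 2 * a * t * np.sinh(eps / 2)      # = t (p - q), exact driven value
print(J_driven, exact)
```

The weights are normalized on average (⟨e^{−A}⟩_ref = 1), so no extra normalization is needed; the reweighted current matches the exact driven value t(p − q) within sampling error.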

A. Decomposition from time-symmetry
In the generality in which we work at this point, there is only one but rather relevant symmetry transformation to decompose the action A in (16). We consider the involution θ on trajectories ω, by which

(θω)_s = π x_{t−s},  0 ≤ s ≤ t

The kinematical time-reversal π is an involution on the state space which flips the odd degrees of freedom (such as velocities) present in the trajectory. We assume here that θω is an allowed trajectory whenever ω is (assumption of dynamical reversibility). Note that we also time-reverse external (time-dependent) protocols, if any, in the same manner.
We now decompose the action according to that symmetry,

A = D − S/2,  with  S(ω) = A(θω) − A(ω),  D(ω) = (1/2)[A(ω) + A(θω)]    (21)

The reason for the factor 1/2 in front of S will become clearer later 3. The main point is that under the condition of local detailed balance (below), S(ω) is the change of entropy (per k_B) in the environment as caused and determined by the system trajectory ω. We will therefore refer to S (anti-symmetric under time-reversal θ) as the entropic part. The time-symmetric part D is referred to as the frenetic part. Note that both D and S refer to excesses with respect to the reference ensemble; they specify how entropic and frenetic parts change. A more informal observation may be that our Lagrangian approach [52,56], where we give weights to trajectories with the decomposition of the action A = D − S/2, suggests to think of D as the analogue of time-integrated kinetic energy and of S as the analogue of time-integrated potential energy. In that respect, we have that extensivity in time will mostly be guaranteed only for D.
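The definitions in (21) can be made concrete on a toy trajectory. The following sketch (with an invented action of the same form as (18); the numbers are purely illustrative) checks that S = A∘θ − A is antisymmetric, D = (A + A∘θ)/2 is symmetric, and A = D − S/2 holds identically:

```python
import numpy as np

# Toy trajectory: a sequence of +-1 jumps; time reversal theta reverses
# the order and flips the sign of each jump.
rng = np.random.default_rng(3)
omega = rng.choice([-1, 1], size=20)        # jump sequence
theta = lambda w: -w[::-1]                  # time reversal on jumps
# invented action with a frenetic (even) and an entropic (odd) piece,
# mimicking (18): A = -c N - (eps/2) J with N = len(w), J = sum(w)
eps, c = 0.7, 0.2
A = lambda w: -c * len(w) - (eps / 2) * np.sum(w)
S = A(theta(omega)) - A(omega)              # time-antisymmetric part
D = 0.5 * (A(omega) + A(theta(omega)))      # time-symmetric part
assert np.isclose(A(omega), D - S / 2)      # A = D - S/2 identically
assert np.isclose(S, eps * np.sum(omega))   # here S = eps * J
print(S, D)
```

Note that D picks up only the activity term (even in the current), while S picks up only the driving term, exactly the entropic/frenetic split described above.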

B. Examples
The writing of (18) already gives an example of the decomposition (21): the dynamical activity N (ω) is clearly time-symmetric, and the particle current J(ω) is time-antisymmetric.
Indeed, J(ω) is the entropy flux per k B released in the viscous environment. We give some other examples illustrating the decomposition.
Example III.2 (Markov jump processes). We denote the transition rate for a jump x → y by

k(x, y) = a(x, y) e^{s(x,y)/2}, (22)

taking a parametrization with symmetric activity parameters a(x, y) = a(y, x) and antisymmetric driving s(x, y) = −s(y, x). Under local detailed balance, also discussed in the next section, the s(x, y) get the interpretation of giving the (discrete) change of entropy per k_B in the equilibrium bath with which energy, volume or particles are exchanged during the system transition x → y. Here, an environment is imagined consisting of spatially well-separated equilibrium baths, each with fast relaxation. Trajectories are piecewise constant and consist of "waiting" times and "jumping" events. During a jump, the system exchanges "stuff" with one of the baths.
Local detailed balance thus amounts here to being able to identify

S(ω) = Σ_τ s(x_{τ−}, x_τ) (24)

with the path-wise total entropy flux (per k_B) in the environment. In (24) we sum over the jump times in the (system) trajectory ω = (x_τ, 0 ≤ τ ≤ t), and x_{τ−} is the state just before the jump to the state x_τ at time τ. In other words, we assume in such models that we can read the changes of the entropy in the reservoirs in terms of system trajectories.
The corresponding action with respect to a reference Poisson process reads A(ω) = ∫₀ᵗ dτ ξ(x_τ) − Σ_τ log[k(x_{τ−}, x_τ)/a₀], with escape rates ξ(x) = Σ_y k(x, y), where the sum is again over the jump times in ω and a₀ is a reference rate.
Let us finally turn to (21). The frenesy associated to the path ω is D(ω) = ∫₀ᵗ dτ ξ(x_τ) − Σ_τ log[a(x_{τ−}, x_τ)/a₀]. That makes the time-symmetric contribution in the decomposition (21). When a(x, y) ≡ a is constant, Act(ω) is proportional to the dynamical activity (time-symmetric traffic, i.e., the total number of jumps).

Example III.3 (Overdamped diffusion). We can take the diffusive limit of the previous example. A Brownian particle has position r_t = (r_t(1), r_t(2), r_t(3)) ∈ R³ with motion following

ṙ_t = χ F(r_t) + √(2 k_B T χ) ξ_t. (28)

The mobility χ is a positive 3 × 3−matrix that for simplicity we choose not to depend on r here. It implies that in the frenesy, only the escape rates will change when we change F with respect to a reference choice. We put F = g + h f, where f and g are vector functions. The constant h is a parameter and h = 0 gives the reference dynamics. We want the excess frenesy and entropy flux per k_B for h ≠ 0, as defined from (16) and (21). We refer to [4,38,57] for detailed calculations. Mathematical understanding follows from the Cameron-Martin and Girsanov theorems for the change of measure (via Radon-Nikodym derivative); cf [58]. We can also remember the trick that ξ_s, s ∈ [0, t], is (formally) a stationary Gaussian process whose weights carry over to the trajectory via the quadratic form exp[−(1/2) ∫₀ᵗ ds |ξ_s|²], with ξ_s expressed from (28) in terms of the path. To obtain the action A, that must be integrated over time s ∈ [0, t], after which we must take the difference between the expressions for F = g and for F = g + h f.
At the same time we remember here that the Itô-integral is not time-symmetric, but the Stratonovich-integral is. There is the relation, for general smooth (vector) functions G,

∫₀ᵗ G(r_s) ∘ dr_s = ∫₀ᵗ G(r_s) · dr_s + k_B T ∫₀ᵗ ds (χ∇) · G(r_s),

that connects for (28) the Stratonovich-integral (left-hand side) to the Itô-integral (first term on the right-hand side).
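That relation can be checked on a discretized reference path (a sketch under assumptions not in the text: hypothetical parameters k_B T = χ = 1, reference drift g = 0, and G = ∇V for V(q) = q²/2, for which the Stratonovich midpoint rule telescopes exactly):

```python
import random, math
random.seed(2)

# Discretized overdamped reference path r_{i+1} = r_i + sqrt(2*kT*chi*dt)*xi
# with standard normals xi; compare Stratonovich (midpoint) and Ito (pre-point)
# discretizations of the integral of G(r) = r over the path.
kT, chi, dt, n = 1.0, 1.0, 1e-4, 20000      # total time t = n*dt = 2.0
r = [0.0]
for _ in range(n):
    r.append(r[-1] + math.sqrt(2 * kT * chi * dt) * random.gauss(0, 1))

strat = sum(0.5 * (r[i] + r[i + 1]) * (r[i + 1] - r[i]) for i in range(n))
ito   = sum(r[i] * (r[i + 1] - r[i]) for i in range(n))

# Stratonovich integral of V' telescopes to V(r_t) - V(r_0) exactly:
assert abs(strat - (r[-1] ** 2 - r[0] ** 2) / 2) < 1e-9
# the Ito correction fluctuates around kT*chi*t = 2.0:
correction = strat - ito
```

The correction term is the discrete counterpart of k_B T ∫ (χ∇)·G ds; here (χ∇)·G = 1, so it concentrates on k_B T χ t.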
Whatever method one prefers, the result for (28) is

D(ω) = (βh/4) ∫₀ᵗ ds [ h f·χf + 2 f·χg + 2 k_B T (χ∇)·f ](r_s), (30)

S(ω) = βh ∫₀ᵗ f(r_s) · ∘dr_s (31)

in (21). Note that the highest order in the excess parameter h appears in the frenetic part.
Indeed, frenesy will matter more at larger excesses.
When f = ∇V is conservative, then the second and third term in D (the linear part of the frenesy) add up to become proportional to the time-integral of the backward generator L acting on V,

(βh/2) ∫₀ᵗ ds (LV)(r_s),

for the backward generator Lu = ∇u · χg + k_B T (χ∇) · ∇u (on a function u) of the reference dynamics. On the other hand, the entropy flux (31) becomes a time-difference, S(ω) = βh [V(r_t) − V(r_0)]. We can also specify to the case where g = −∇U and f is the nonconservative (or rotational) part of the force F. The reference dynamics (h = 0) satisfies the condition of detailed balance (time-reversibility). The excess frenesy (30) now equals D(ω) = (βh/4) ∫₀ᵗ ds [ h f·χf − 2 f·χ∇U + 2 k_B T (χ∇)·f ](r_s). The entropy flux per k_B becomes time-extensive, being β times the work done by the nonconservative force as given in (31). It is the Joule-heating divided by k_B T.
Example III.4 (Underdamped diffusion). The Langevin dynamics for a particle with mass m, position q_s and velocity v_s reads in one-dimensional notation as

q̇_s = v_s,   m v̇_s = F(q_s) + ε f(q_s) − mγ v_s + √(2D) ξ_s,

where we added a perturbation f of strength ε to the reference force F. Here, γ is the constant friction and ξ_s is standard white noise, as always. The strength D = mγ k_B T > 0 governs the variance of that noise. The action in (16) is taken for force F + εf with reference at ε = 0. The decomposition (21) here employs the velocity-flip in the time-reversal. The result gives, [37,59],

S(ω) = βε ∫₀ᵗ ds f(q_s) v_s,   D(ω) = (βε/(4mγ)) ∫₀ᵗ ds [ ε f²(q_s) − 2 f(q_s)(m v̇_s − F(q_s)) ]. (34)

As before, S equals the work done by the nonconservative force f, times β. The frenesy D represents the kinetics. Note also that in the last two (linear in ε) terms of (34) we find the combination F(q_s) − m v̇_s, related to the thermostating forces of the unperturbed dynamics. Consider next a time-dependence in the force F governed by an external protocol with parameter λ_s at time s. The reference process we choose here for applying (16) is taken for F = 0. The time-reversal must include the protocol; we reverse it as (θλ)_s = λ_{t−s}.
We find for (21) the entropic part

S(ω) = β ∫₀ᵗ ds F(q_s, λ_s) v_s, (35)

which is the time-integrated power divided by temperature, instantly dissipated as Joule heat in the environment. The frenesy D = (Aθ + A)/2 as in (34) equals

D(ω) = (β/(4mγ)) ∫₀ᵗ ds F²(q_s, λ_s) − (β/(2γ)) ∫₀ᵗ F(q_s, λ_s) ∘ dv_s,

where the first term refers to an escape rate and the second term (with Stratonovich integral) to the dynamical activity (having the acceleration v̇_s).
Other examples can be added; heat conduction networks are treated in [4,60]. More examples are collected in [34,57].

C. Local detailed balance
The decomposition of Section III A is especially useful when there is a physical meaning to S and D as excesses with respect to the reference ensemble. The previous examples have shown that S and D may indeed come with such a physical meaning. The time-symmetric part D is the frenesy, collecting both the undirected traffic and the quiescence in the trajectory: too much waiting is punished when the escape rates are high, and undirected traffic (also called dynamical activity) is stimulated when the time-symmetric activation part exceeds that of the reference ensemble. That was already clear in the example (18).
We will learn more about the role of D in the following section. Under local detailed balance, the time-antisymmetric part of the action is the entropy flux into the equilibrium environment. Hence, for the conditional probabilities,

Prob[ω] / Prob[θω] = e^{S(ω)}. (38)

The logarithm of the ratio of transition rates is given by the change of entropy. A particularly relevant reduced description is to take mesoscopic variables for a subsystem and a thermodynamic description for its environment (consisting of equilibrium baths). Then, under a weak coupling assumption, the above identities propagate on the level of the subsystem, [64]. In summary, working under the condition of local detailed balance implies that we assume that the time-antisymmetric term S in (21) gives the time-integrated entropy flux per k_B in excess with respect to the reference ensemble 5 . To be sure, the probabilities "Prob" in (38) really refer to the same process or ensemble, i.e., starting from the same initial distribution at time zero and generated with the same dynamics.
There are various reformulations of that, and also various more microscopic foundations which are known as fluctuation theorems [66][67][68][69]; we refer to [52,56,57,64,[70][71][72] for some of the original papers making the connection between the source term of time-reversal breaking and entropy.
As a final word of warning, we emphasize that local detailed balance need not hold in all physical cases. For example, if a system is directly coupled to a nonequilibrium bath, or if the coupling with or between equilibrium reservoirs is too strong, local detailed balance will fail. We give two examples (and their response relations) in Section IV B 2.

IV. RESPONSE RELATIONS
We come to the questions of Section II. Recall the situation pictured in Fig. 4.
In the present section we use dynamical ensembles to obtain response relations. That is a different approach than imitating classically the formalities of quantum mechanics.

5 Remember that the operation θ of time-reversal is supposed to work on all dynamical variables including the protocol. Even though that protocol is fixed, its time-reversed version is to be taken in the denominator of (38).
6 Integration over trajectories is a mathematical subject we are not touching here; in line with a more common physics notation, we can also write P 0 (dω) = P 0 (ω) dω.
Remember that the right-hand side is an average in the reference ensemble. To show the order of perturbation we write ΔD = ε D′₀ + (ε²/2) D″₀ + . . . , ΔS = ε S′₀ + (ε²/2) S″₀ + . . . , where the primes denote derivatives with respect to ε, and ε is the strength (overall amplitude) of the considered perturbation. The rest is straightforward; we expand the exponential in (40), which to second order in ε turns into (42); to linear order,

⟨O⟩^ε − ⟨O⟩₀ = ε ⟨O(ω) [S′₀(ω)/2 − D′₀(ω)]⟩₀ + O(ε²).

To indicate the strength of the perturbation we sometimes write a superscript ε on the expectations, ⟨·⟩ = ⟨·⟩^ε. For time-dependent perturbations the same logic applies. For what is next, we divide into various cases to estimate the relevant terms in the decomposition. We start with the linear response around equilibrium.

A. Linear response around equilibrium
Linear response takes the first order in the response formula of (42). We get

⟨O⟩^ε − ⟨O⟩₀ = ε ⟨O(ω) [−D′₀(ω) + ½ S′₀(ω)]⟩₀. (43)

Remember that D′₀, S′₀ are the first derivatives evaluated at ε = 0. Note that, if we would have O(ω) = g(x₀), only depending on the initial time through some arbitrary function g, then ⟨O [−D′₀ + ½ S′₀]⟩₀ = 0 by the normalization ⟨e^{−A} | x₀⟩₀ = 1, as it should, because ⟨g(x₀)⟩^ε = ⟨g(x₀)⟩₀ and A only depends on the dynamics. Such arguments take care of causality: the response to later perturbations must equal zero.
Let us now focus on reference processes which are time-reversal invariant: ⟨O(θω)⟩₀ = ⟨O(ω)⟩₀, or P₀(ω) = P₀(θω). That is the case of reference equilibria, where we write expectations ⟨·⟩_eq = ⟨·⟩_ref = ⟨·⟩₀. Linear response around equilibrium has been developed since the 1950s into a systematic theory, [11-15, 73, 74]. We refer to [75] for a review in the case of interacting particle systems.
For observables O that are antisymmetric under time-reversal, O(θω) = −O(ω), the frenetic term drops out, ⟨O D′₀⟩_eq = 0, and (43) reduces to

⟨O⟩^ε = (ε/2) ⟨O(ω) S′₀(ω)⟩_eq, (44)

which is nonzero because S′₀(θω) = −S′₀(ω) is also anti-symmetric. This formula is generally true for linear response around equilibrium for odd observables and will be applied for state functions (as in the Kubo formula next) and for currents (in the Green-Kubo relations further down). It is physically useful because of the ready interpretation of S′₀ as the (linear) excess (time-integrated) entropy flux due to the perturbation, following local detailed balance. In particular we have that always

⟨S′₀⟩^ε = (ε/2) ⟨(S′₀)²⟩_eq, (45)

which says that in linear order the expected dissipation in the perturbed condition equals ε/2 times the equilibrium variance of that flux, and is hence nonnegative. That explains somewhat the origin of the terminology for the relation (44) as fluctuation-dissipation relation (of the first kind). The reason why the time-symmetric frenesy D′₀ is unseen in the linear response of (antisymmetric) currents J is that, to linear order in ε, field-reversal is equivalent with time-reversal. To say it with a formula, we can as well use (19) in linear response. Such equivalence is of course not true in general farther away from equilibrium, except in very rare cases; for such a rare case we refer to [76]. That leaves the time-symmetric observables, which are interesting for currents that are even under time-reversal, as happens for the momentum current (e.g. generated by shear). Another example for jump processes is the number N(ω) of jumps (dynamical activity) in [0, t] as in (17)-(18). Here we have that always ⟨D′₀⟩^ε − ⟨D′₀⟩_eq = −ε ⟨(D′₀)²⟩_eq, which is the analogue of (45). For example, again looking at (18), the expected change in the number of steps, ⟨N⟩^ε − ⟨N⟩_eq, for a random walker always has the same sign as a′(0) for small ε.

Kubo formula
We can specify the result (44) further by taking O(ω) = f(x_t) − f(π x₀) for a function f on states; that observable is antisymmetric under θ. We then go for single-time observations. Remember here that π is the kinematical time-reversal (like flipping the velocities if any). In that case, the left-hand side says

⟨O⟩^ε = ⟨f(x_t)⟩^ε − ⟨f(π x₀)⟩^ε = ⟨f(x_t)⟩^ε − ⟨f(x₀)⟩_eq,

where the last equality uses that we have equilibrium (full time-reversal invariance) at time zero. For the right-hand side of (44), (ε/2)⟨O S′₀⟩_eq = ε ⟨f(x_t) S′₀⟩_eq, where we used that ⟨f(π x₀) S′₀⟩_eq = −⟨f(x_t) S′₀⟩_eq. Hence, in linear response around equilibrium,

⟨f(x_t)⟩^ε − ⟨f(x₀)⟩_eq = ε ⟨f(x_t) S′₀(ω)⟩_eq (47)

for all functions f. This response relation has followed straightforwardly from the assumption of time-reversal invariance in the equilibrium (reference) ensemble, where S = ε S′₀ + O(ε²) is the antisymmetric part in the action A of (21) or of (39) under time-reversal, following (38). The final step for recognizing the Kubo formula in (47) is to specify the perturbation as a time-dependent potential −h_s V added to the energy, with small amplitude h_s switched on after time zero, while the work done on the thermal bath equals the dissipated heat. Therefore, applying the Clausius relation to the thermal equilibrium reservoir, the entropy change in the environment per k_B is

S(ω) = β ∫₀ᵗ h_s dV(x_s) (48)

as a function of the system position-trajectory q_s, s ≤ t. The correlation in (47) becomes β ∫₀ᵗ ds h_s (d/ds) ⟨V(x_s) f(x_t)⟩_eq. Concluding, we find that the linear response function for observing f at time t with perturbation at time s < t is

R(t, s) = β (d/ds) ⟨V(x_s) f(x_t)⟩_eq, (50)

which is the Kubo formula [33,74]. Very little algebra has been used to derive it; yet the derivation is physically cogent.
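The integrated Kubo formula can be tested numerically on a two-state master equation (a hypothetical minimal example, not from the text: β = 1, energies E = (0, 1), a symmetric rate parametrization, and the step perturbation E → E − hV switched on at time zero, for which (47)-(48) give the correlation expression below):

```python
import math

# Two-state system x in {0,1}; detailed-balance rates k(x,y) ~ exp(-beta*(E_y-E_x)/2).
# Check:  <f(x_t)>_h - <f>_eq  ≈  h*beta*( <f(x_t)V(x_t)>_eq - <f(x_t)V(x_0)>_eq )
# for f = V = indicator of state 1 (hypothetical parameters).
beta, nu, h, dt, tmax = 1.0, 1.0, 0.01, 1e-4, 1.0
E, V = [0.0, 1.0], [0.0, 1.0]

def evolve(p, energies, t):
    """Euler integration of the master equation with the given energies."""
    k01 = nu * math.exp(-beta * (energies[1] - energies[0]) / 2)
    k10 = nu * math.exp(-beta * (energies[0] - energies[1]) / 2)
    for _ in range(round(t / dt)):
        flow = dt * (k01 * p[0] - k10 * p[1])
        p = [p[0] - flow, p[1] + flow]
    return p

Z = math.exp(-beta * E[0]) + math.exp(-beta * E[1])
peq = [math.exp(-beta * E[0]) / Z, math.exp(-beta * E[1]) / Z]

# perturbed evolution, started from the unperturbed equilibrium
Eh = [E[0] - h * V[0], E[1] - h * V[1]]
lhs = evolve(peq[:], Eh, tmax)[1] - peq[1]

# equilibrium correlations of the unperturbed dynamics
p1_from0 = evolve([1.0, 0.0], E, tmax)[1]   # P(x_t = 1 | x_0 = 0)
p1_from1 = evolve([0.0, 1.0], E, tmax)[1]   # P(x_t = 1 | x_0 = 1)
corr_t0 = peq[0] * V[0] * p1_from0 + peq[1] * V[1] * p1_from1
rhs = h * beta * (peq[1] * V[1] - corr_t0)

assert abs(lhs - rhs) / rhs < 0.05   # agreement up to O(h^2)
```

Only the entropic (antisymmetric) part enters; how the activity parameters depend on h is irrelevant to this linear order, as the derivation of (47) promises.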
There are of course other possibilities for the entropy flux (48). For example, in an underdamped dynamics we may have

S(ω) = β ∫₀ᵗ ds h_s V′(q_s) q̇_s (51)

as the time-integrated dissipated power over thermal energy (instead of (48)). That leads however to exactly the same Kubo formula (50) when using that q̇_s = v_s.
We emphasize that we have not used any specific dynamical evolution except for the assumptions (48) or (51) on the entropy flux, i.e., the condition of local detailed balance. It means that we imagine the nonequilibrium process to proceed as if locally each transition or each local change in the state (in energy, particle number, volume or momentum) is in contact with one well-defined equilibrium reservoir, for which the condition of detailed balance (38) applies.

Green-Kubo and Sutherland-Einstein formula
Another instance of (44) is to take O(ω) = J_i(ω), an antisymmetric current of some type i (particles, energy, mass, ...). We follow again the condition of local detailed balance (Section III C) whereby, when thermodynamic forces F_k are exerted, then S = Σ_k J_k(ω) F_k. As a consequence we have

⟨J_i⟩ = (1/2) Σ_k F_k ⟨J_i J_k⟩_eq, (52)

which are the Green-Kubo relations announced in (1). A detailed modeling of some thermo-electric phenomena as introduced along the cartoon of Fig. 1 and following local detailed balance is exposed in [77].
Green-Kubo relations connect transport coefficients with fluctuation properties ⟨J_i J_k⟩_eq in the equilibrium system. Quite generally, in equilibrium, the latter can be rewritten in terms of the diffusion matrix, where we use the notation from Example II.6. To understand its origin, we can use (52) or directly derive it from (44). Taking a colloid suspended in a fluid at rest, we apply an external field E. The entropy flux per k_B caused by dissipating the work done by the force is equal to S(ω) = β E · (r_t − r_0). As observable O we take the displacement r_t − r_0 and apply (44):

⟨r_t − r_0⟩^E = (β/2) ⟨(r_t − r_0) E · (r_t − r_0)⟩_eq.

Dividing by time t and taking derivatives with respect to the force components E(i) yields (53), if the infinite-time integrals make sense.
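The same logic can be sampled directly for a free overdamped colloid (a Monte-Carlo sketch with hypothetical parameters; for a free particle the response is linear in E at any field strength, so no smallness of E is needed):

```python
import random, math
random.seed(7)

# Overdamped colloid with mobility chi at temperature T (k_B = 1, beta = 1/T).
# Compare the drift under a field E with (beta/2) * <(r_t - r_0)^2>_eq / t * E,
# i.e., mobility = beta * diffusion (hypothetical parameters).
chi, T, E, dt, n, N = 1.0, 1.0, 1.0, 0.01, 100, 4000
beta, t = 1.0 / T, 100 * 0.01

def displacement(force):
    r = 0.0
    for _ in range(n):
        r += chi * force * dt + math.sqrt(2 * T * chi * dt) * random.gauss(0, 1)
    return r

drift = sum(displacement(E) for _ in range(N)) / N           # <r_t - r_0>_E
msd   = sum(displacement(0.0) ** 2 for _ in range(N)) / N    # <(r_t - r_0)^2>_eq

mobility  = drift / (E * t)    # ~ chi
diffusion = msd / (2 * t)      # ~ T * chi
assert abs(mobility - beta * diffusion) < 0.15
```

Both estimates converge to χ = 1 here; around nonequilibrium this proportionality fails, which is the subject of Section IV B.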
Remark IV.1. There often remains the question whether all this is restricted to stochastic dynamics. The answer starts from noting that in the appropriate (e.g. weak coupling) regime of reduced descriptions, the dynamics is indeed stochastic when considering the reduced trajectories only. Obviously, the same result will be reached when doing the Hamiltonian dynamics in the bigger microscopic system, when the reduced dynamics made any sense to start with. Deviations will be observable (experimentally) due to realistic couplings, finite time-scale differences or absence of thermodynamic limits etc.
In other words, whenever we see ⟨·⟩_eq we better take an average over the microcanonical ensemble with suitable constraints of energy, volume, etc., when we can. Another consideration is the effectiveness of simulations, which may be better for deterministic dynamics.
Note however that at any rate we must somehow circumvent the van Kampen objection in Remark II.1 and take a statistical approach, meaning to observe the appropriate physically coarse-grained observables.

Fluctuation-dissipation relations of first and second kind
The terminology of fluctuation-dissipation relations (FDRs) is not always very precise. To be more specific, let us consider a probe trajectory (Y_s, s ≤ t) up to time t as a perturbation from the case where the probe has always been at rest at its present position Y_t. For the equilibrium bath coupled to the probe, that means (for (16)) that we have the reference ensemble for the bath with the probe at rest (at position Y_t at time t) and the perturbed bath ensemble where the probe moves away from Y_t for times s < t:

P(ω | Y_s, s ≤ t) = e^{−A(ω)} P(ω | Y_t).

Here P(ω | Y_s, s ≤ t) is the probability of a bath-trajectory ω conditioned on a(n arbitrary) probe trajectory (Y_s)_t, while the probability in the right-hand side is the reference probability on bath trajectories.
The difference between the two ensembles originates physically from the coupling between probe and bath. We assume for simplicity that the probe position Y only enters via an interaction potential U(Y, q) = Σ_{i=1}^N u(Y − q(i)) with the various (N) bath particles at positions q(i). At time s ≤ t, the interaction with a bath particle (at generic position q) is thus of the form 8

u(Y_s − q) ≃ u(Y_t − q) + h_s V(q)

to linear order in Y_s − Y_t. In the last equality, we rewrote the coupling to make the link with the notation of the response theory above: h_s = Y_s − Y_t is a time-dependent amplitude and V(q) = u′(Y_t − q) for s ≤ t and at fixed Y_t. In other words, the effect of the probe motion on the bath is to provide a time-dependent perturbation with potential V, much the same way as treated in the Kubo formula of Section IV A 1.
Let us next find the relevant bath observable for which we need to see the influence of that perturbation. That has of course everything to do with the probe dynamics: the total force of the bath-particles on the probe (all at time t) decomposes as

F_t = ∫ dω P(ω | Y_s, s ≤ t) F(Y_t, ω_t) + ζ_t, (55)

where the fluctuation term ζ_t has mean zero for every probe trajectory (Y_s, s ≤ t). For the conditional average we use the Kubo formula (50); working it out gives (56), a systematic force plus a friction term with memory, where the average ⟨·⟩_{Y_t} with the probe at rest in Y_t is taken over the stationary bath particles. The identity (56) follows from the entropy flux as time-integrated dissipated power by the probe on the bath. Since the bath is supposed in thermal equilibrium, we indeed only need the entropic contribution for calculating the response. The last term in the first line of (56) is the noise introduced in (55), given in zero order by its stationary equilibrium statistics. As a summary, the induced force on the probe at time t consists of a systematic force, a friction force with memory kernel γ, and the noise ζ, which in linear order around the equilibrium bath satisfies the Einstein relation (57),

⟨ζ_t ζ_{t′}⟩ = k_B T γ(t − t′).

We conclude therefore that it is the entropic term in the action that produces the Einstein relation between the noise kernel and the friction memory. We do not elaborate here on the collective effect of the large number N of bath particles, which would have to be combined with a weak coupling limit; cf. the van Hove limit [12,15,80]. That would simplify the expressions more, producing e.g. Gaussian white noise and a delta-correlated memory kernel in the friction.

B. Linear response around nonequilibrium
We move to the situation where the system's condition was prepared in steady nonequilibrium (until time zero). Note that we do not require a close-to-equilibrium regime; the perturbation is small but the reference condition can be far out-of-equilibrium. The formalism applies generally, but for the interpretation we stick to the regime where we have local detailed balance; see Section III C. We still have (43) for perturbations around nonequilibrium, but we must include the frenetic contribution even in linear order. Taking as observable O(ω) = f(x_t),

⟨f(x_t)⟩^ε − ⟨f(x_t)⟩₀ = (ε/2) ⟨f(x_t) S′₀(ω)⟩₀ − ε ⟨f(x_t) D′₀(ω)⟩₀. (59)

The first term on the right-hand side gives the Kubo formula (50) for linear response around equilibrium. Indeed, time-reversal invariance in equilibrium implies ⟨D′₀(ω) − S′₀(ω)/2⟩_eq = 0 because of the normalization ⟨e^{−A}⟩_eq = 1 for whatever initial condition. The correction to the linear response in equilibrium is (obviously) additive. By writing (59) with the frenetic term absorbed into a prefactor β(·), we obtain what may be called an effective inverse temperature when compared to (49)-(50). That is one way for an effective temperature to appear, obviously depending on the observable f; see e.g. [81][82][83]. For example, if ⟨f(x_t) D′₀(ω)⟩₀ = (1/4) ⟨f(x_t) S′₀(ω)⟩₀, then the effective temperature T_eff = 2T is twice the thermodynamic surrounding temperature. We see that in this context, using effective temperatures is a rather drastic multiplicative abbreviation of taking into account the frenetic contribution.
The last term in (59) can also be used as indicator of violation of the FDR of the first kind.
Or, the difference between the left-hand side and the first term on the right-hand side gives an estimate of the nonequilibrium nature of the reference process. To make that into a more physical prescription we take the freedom to subtract a suitable reference term, arriving at (60)-(61). Note that in equilibrium the last line (61) vanishes because of the Kubo formula (50).

Moreover when f is odd (like a velocity) in the sense that
is symmetric under time-reversal, then the right-hand side of the first line (60) also vanishes in equilibrium. In other words, the left-hand side of (60) then measures the violation of the Kubo formula (FDR of the first kind). Now take f(x) = v to get the corresponding velocity versions of (60)-(61). In the underdamped regime, see Example III.4, we can use the expression for the excess entropy flux given there; on the other hand, for the excess frenesy we use (34). Hence, for all times t, we obtain (64). Again, the right-hand side vanishes in equilibrium by the Kubo relation (50). The left-hand side gives a time-integration of delayed power-dissipation. For short times t = ds, we see that the frenesy contributes −F(q_s) ds + m dv_s = −mγ v_s ds + √(2D) ξ_s (multiplied with β/(mγ)), representing the thermostating forces of the unperturbed dynamics. Together, (64) gives a reordering of the linear response around a NESS where the violation of the FDR of the first kind is measured (via the left-hand side) in terms of dissipation. Similar expressions can be obtained by time-modulating the constant ε → ε cos νs, so that we enter Fourier-space. We can also take the limit t ↑ ∞. The left-hand side then becomes the expectation of the rate of energy dissipation J₀, and we arrive at the Harada-Sasa equality [84], in their notation. The "tilde" denotes Fourier-transform and R̃_S(ν) is the real part of the transform, C denotes the velocity correlation function and R is the change of velocity caused by a constant external force.
After these generalities it is time to get more specific examples. As for experiments, we refer to [85] where a driven Brownian particle in a toroidal optical trap is studied for the linear response of its potential energy. The frenetic contribution to the response is separately measurable. It shows the experimental feasibility of the entropic-frenetic dichotomy at least for nonequilibrium micron-sized systems with a small number of degrees of freedom immersed in simple fluids. For an example with many nonequilibrium degrees of freedom we present a theoretical model as illustration:

Example IV.2 (Coupled oscillators). We put a one-dimensional oscillator (q_i, p_i) at sites i = 1, . . . , n with energy U = Σ_{i} φ(q_{i+1} − q_i), where for example φ(q) = q²/2 + q⁴/4. We keep q₀ = q_{n+1} = 0 as boundary conditions. The dynamics adds damping, driving and white noise ξ_s(i) to every oscillator,

q̇_i = p_i,   ṗ_i = −∂U/∂q_i + F_i − γ_i p_i + √(2D) ξ_s(i).

The nonequilibrium resides in the nonconservative forcing F_i and/or in the presence of multiple temperatures T_i = D/(γ_i k_B). A sketch of the situation is depicted in Fig. 5.
The excess frenesy (in linear order) is computed as in Example III.4. As a result (needing some more calculation) we end up with a linear response formula for the positions, in which one observes the spacetime reciprocity j ↔ k, s ↔ t. In Fig. 6 we see the susceptibility χ_{jk}(t − s) as a function of time for different values of the damping γ_j. It appears that the limit of vanishing bulk thermal noise continues to make sense for the response, [86]. That example thus stands for the study of longitudinal waves in heat-conducting strings.
Example IV.3 (Linear response of jump processes). We revisit the Markov jump processes of Example III.2, with the parametrization (22); see also [87]. We take a perturbation s(x, y) → s(x, y) + ε s₁(x, y), a(x, y) → a(x, y) + ε a₁(x, y) to linear order in ε. Then, the excess entropy flux and frenesy equal

S′₀(ω) = Σ_τ s₁(x_{τ−}, x_τ),   D′₀(ω) = ∫₀ᵗ dτ ∂_ε ξ_ε(x_τ)|_{ε=0} − Σ_τ (a₁/a)(x_{τ−}, x_τ),

with the sums over the jump times; see [87].
Example (II.5) is the simplest illustration of the above 9 , where we perturb around a fixed (large) value of ε. The current appears in (11) and its derivative equals

d⟨J⟩/dε = 2 a′(ε) sinh(ε/2) + a(ε) cosh(ε/2). (71)

The derivative a′(ε) only contributes for ε ≠ 0. The negativity, a′(ε)/a(ε) < −1/2 for large ε, implies a negative differential conductivity. The same can be concluded from taking the derivative of (18), which reproduces (71) with ⟨J; J⟩₀ ≈ ⟨N; J⟩₀ ≈ t a(ε) e^{ε/2}.
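The derivative of the current can be checked against a finite difference, again with the hypothetical activity a(ε) = e^{−ε²} (so a′/a = −2ε, which drops below −1/2 already at ε = 1/4):

```python
import math

# J(eps) = 2*a(eps)*sinh(eps/2) for the biased walker, with the hypothetical
# choice a(eps) = exp(-eps**2); compare the analytic derivative with a central
# finite difference and observe where it turns negative.
def a(eps):
    return math.exp(-eps ** 2)

def J(eps):
    return 2 * a(eps) * math.sinh(eps / 2)

def dJ(eps):
    # a'(eps) = -2*eps*a(eps)
    return 2 * (-2 * eps) * a(eps) * math.sinh(eps / 2) + a(eps) * math.cosh(eps / 2)

d = 1e-6
for eps in (0.1, 0.5, 2.0):
    assert abs(dJ(eps) - (J(eps + d) - J(eps - d)) / (2 * d)) < 1e-6

assert dJ(0.1) > 0      # dissipative regime: current grows with the bias
assert dJ(2.0) < 0      # negative differential conductivity
```

The sign change of dJ is entirely due to the a′ term, i.e., to the frenetic contribution.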
Such a simple scenario with the crucial role of the frenetic contribution gets realized in more examples, including responses to temperature and chemical affinities; see [54,55,[88][89][90][91][92]]. To pick one, in [92] one sees modifier activation-inhibition switching in enzyme kinetics. A more abstract scenario (going beyond the case of Markov jump processes) goes as follows: taking the observable O = S′₀ (typically proportional to a current), linear order response gives

d⟨S′₀⟩/dε = (1/2) ⟨S′₀; S′₀⟩₀ − ⟨S′₀; D′₀⟩₀.

With the possible abuse of notation, ε there stands for the nonequilibrium driving, and we perturb ε → ε + dε.
In contrast with (45), a positive correlation between the linear excesses in entropy flux and in frenesy in the original dynamics yields a negative frenetic contribution. In and close to equilibrium, d⟨S′₀⟩/dε ≥ 0 always. Two necessary conditions for a negative susceptibility of the observable S′₀ are: (1) one needs to be sufficiently far from equilibrium, and (2) one needs a positive correlation ⟨S′₀; D′₀⟩₀ > 0 in the original process. More generally, it is the frenetic contribution that can make currents saturate and provide homeostatic effects far enough from equilibrium.
We also recall an application of the Cramér-Rao bound, which enables one to give a general bound on response functions. That was exploited in the Dechant-Sasa inequality; see [93,94] for details. Naturally, the (unperturbed) expectation ⟨A⟩₀ is related to the frenesy.
As a final remark, nonequilibrium linear response as formalized above can also be used for an expansion of the stationary distribution around a reference nonequilibrium. In particular we mention the work of Komatsu and Nakagawa in [95] for characterizing nonequilibrium stationary distributions. A similar analysis followed in [59,96]. Work remains to be done towards applications on population selection and the understanding of relations with interdisciplinary aspects having to do with trophic levels in foodwebs or with the appearance of homeostasis in biological conditions, to mention just two.

Modified (Sutherland-)Einstein relations
Around nonequilibrium, the FDR of the first kind (between mobility and diffusion) is violated, and the Sutherland-Einstein relation must be corrected with a frenetic contribution.
We refer to the constructions in [97][98][99] for more introduction and examples.
In general, we take a particle of mass m in a heat bath according to the Langevin dynamics for the position r_t and the velocity v_t,

ṙ_t = v_t,   m v̇_t = F(r_t) − mγ v_t + √(2mγ k_B T) ξ_t.

We get out of equilibrium when the force F is not derived from a (periodic) potential; it can be arbitrarily large. We have no confining potential and no global bias, meaning that the steady (net) velocity is zero. The easiest is to work with a spatially periodic force field F which adds vortices in its rotational component, e.g. a lattice of convective cells as in Fig. 7. The vector ξ_t is standard Gaussian white noise.
When the system is not in equilibrium and we search for an expression for the mobility (13), we can use (43) or (59), where the perturbation changes F(r_t) → F(r_t) + E. We look at the linear response in E. Frenetic terms show up, so that the mobility and diffusion constants (15) are no longer proportional. See [97] for a detailed derivation of the following result: the nonequilibrium modification of the Sutherland-Einstein relation is given by (74) (notation from (13)-(15)). The frenetic contribution gives a spacetime correlation between applied forcing and displacement (last expectation in the right-hand side of (74)). Quite generally, the diffusion is much more sensitive to the strength of the force than is the mobility. The deviation with respect to the Sutherland-Einstein relation is second order in the nonequilibrium driving. We refer also to [100] for further analysis and phenomenology, including the occurrence of negative mobilities.
The formula (74) is again similar to a Harada-Sasa equality (see (64) and formula 22 in [84]). It also invites some inverse problem. In the paper [99] the theory of linear response around nonequilibria is used to probe active forces in living cells: by measuring the force, one obtains the correlation between force and displacement which is exactly the frenetic part in (74).
To understand the modifications to the Einstein relation (FDR of the second kind) we must revisit the calculations in Section IV A 3. The logic remains the same but we must add the frenetic contribution to (56). It means that the induced friction gets a modification (and is no longer purely dissipative into the environment) because of the nonequilibrium nature of the bath. For details we refer to [101][102][103][104], where [103] also discussed the possible changes in the noise statistics related to the nonequilibrium bath.

Active particles: NO local detailed balance
To show how the formalities proceed even in the absence of local detailed balance, we give here the example of linear response for an active particle system. See for example [105] for a general review on active particles.
We start by illustrating the situation in the case of an active Ornstein-Uhlenbeck (AOU) particle [106]. Linear response for AOU particles has been the subject of various papers already, including [107,108].
Consider a particle in one dimension in a potential V and with position q_s driven by a noise v_s which, while mean-zero Gaussian, is not white. In fact, its covariance decays exponentially, ⟨v_s v_{s′}⟩ ∝ e^{−|s−s′|/τ}. The time-constant τ measures the persistence time in the process v_s. The perturbation is then applied as an external field with amplitude E to the particle motion. Even though the model does not satisfy local detailed balance (of Section III C), we can still apply the same response formulae if we identify the action in (16) to apply (40). In a formal sense, the probability of a trajectory ω of positions q_s, s ∈ [−∞, +∞], is proportional to exp[−I(ω)], upon substituting the noise in terms of the trajectory. As usual we put h_s for the time-dependent amplitude of the perturbation and find the action (78) with kernel K_s := h_s − τ² ḧ_s. Concerning the nature of the stochastic integral (78) it is interesting to remark that there is no difference here between the Itô and the Stratonovich convention. For the first term in the integral of (78) we can write the integral discretized as a sum, where the difference between consecutive times s is of order δ. For the time-symmetric part Iθ + I (and also time-reversing the perturbation), we see a contribution which tends to zero as δ ↓ 0. There is indeed no short-time diffusion, and the behavior of q_s is ballistic for every τ > 0. The excess frenesy as induced by the perturbation to linear order is therefore obtained from that time-symmetric part. On the other hand, the time-antisymmetric part of the action involves the integral of K_s against q̇_s. In the passive case where K(s) = μ h_s/R, local detailed balance would impose μE²R = T to be the temperature and (81) would represent the entropy flux per k_B. In the active case, we can only consider R E² as a measure of the strength of dynamical activity delivered by the Ornstein-Uhlenbeck noise. There is however no physical identification of Aθ − A with the (excess) entropy flux due to the perturbation.
Nevertheless, the formula of response to linear order holds unchanged for functions f and with K_s = h_s − τ²ḧ_s. That second term, proportional to the persistence time, brings a double time-derivative to act on the expectation, which of course also depends on τ.
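The absence of short-time diffusion can be illustrated in a minimal simulation sketch of a free AOU particle. We assume here the common parametrization q̇_s = v_s, with v_s a stationary Ornstein-Uhlenbeck velocity of persistence time τ and stationary variance ⟨v²⟩ = D/τ (the paper's amplitudes E and R are not reproduced); the mean-square displacement is then ballistic, ≈ (D/τ)t², for t ≪ τ, and diffusive, ≈ 2Dt, for t ≫ τ.

```python
import numpy as np

def aou_msd(D=1.0, tau=1.0, dt=0.01, n_steps=2000, n_traj=4000, seed=1):
    """Free AOU particles q' = v, with v a stationary Ornstein-Uhlenbeck
    process of persistence time tau and variance D/tau.
    Returns times and the empirical mean-square displacement."""
    rng = np.random.default_rng(seed)
    # exact update coefficients for the OU velocity
    a = np.exp(-dt / tau)
    b = np.sqrt((D / tau) * (1.0 - a**2))
    v = rng.normal(0.0, np.sqrt(D / tau), size=n_traj)  # stationary start
    q = np.zeros(n_traj)
    msd = np.empty(n_steps)
    for k in range(n_steps):
        q += v * dt                 # no white noise acts on q directly
        v = a * v + b * rng.normal(size=n_traj)
        msd[k] = np.mean(q**2)
    t = dt * np.arange(1, n_steps + 1)
    return t, msd

t, msd = aou_msd()
# short times (t << tau): ballistic, msd ~ (D/tau) t^2
# long times  (t >> tau): diffusive, msd ~ 2 D t
```

The exact mean-square displacement for this model is 2D[t − τ(1 − e^{−t/τ})], which interpolates between the two regimes.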
A second example of an active particle model is the well-known run-and-tumble process, also called the Kac or telegraph process, where the particle moves on the real line with positions q_s driven by a dichotomous noise σ_s = ±1; see (83). Again, there is no local detailed balance, and no Einstein relation except in the limit a ↑ ∞, where the noise becomes statistically indistinguishable from white noise. That is a finite-temperature (T-)generalization of the usual run-and-tumble process introduced in [109]; see also [110]. The spatial density ρ = ρ(q, t) satisfies the Smoluchowski equation (84), a thermal telegraph equation, whose derivation is given in [109].
We start the process at q = 0 with equal probability of having σ_0 = 1 or σ_0 = −1. We find ⟨q_t²⟩ for large t by multiplying equation (84) by q² and integrating, from which the diffusion constant follows (see also [111]). Note that there is diffusion already at zero temperature T = 0.
To get the mean velocity v = lim_{t→∞} ⟨q_t⟩/t resulting from the application of an extra external field, we add that field to the drift σ_s c in (83). We easily find that v equals the applied field, and the mobility is thus M = 1. As a consequence, D ≠ MT and the Sutherland-Einstein relation is broken. See [109] for more discussion. The Sutherland-Einstein relation has been discussed as well for active systems, with a possible interpretation in terms of an effective temperature, in [112,113].
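The broken Sutherland-Einstein relation can be checked numerically. The sketch below assumes a common parametrization (our notation, not necessarily the paper's): dq = (cσ_s + ε) dt + √(2T) dW with flip rate λ for σ_s = ±1, so that ⟨σ_s σ_0⟩ = e^{−2λ|s|} and D = T + c²/(2λ), while the mobility stays M = 1.

```python
import numpy as np

def run_and_tumble(c=1.0, lam=1.0, T=0.5, eps=0.0, dt=0.01,
                   n_steps=2000, n_traj=4000, seed=2):
    """Run-and-tumble particle with thermal noise:
    dq = (c*sigma + eps) dt + sqrt(2T) dW, sigma = +/-1 flipping at rate lam."""
    rng = np.random.default_rng(seed)
    sigma = rng.choice([-1.0, 1.0], size=n_traj)   # equal probability at t = 0
    q = np.zeros(n_traj)
    for _ in range(n_steps):
        q += (c * sigma + eps) * dt + np.sqrt(2 * T * dt) * rng.normal(size=n_traj)
        flips = rng.random(n_traj) < lam * dt      # tumbling events
        sigma[flips] *= -1
    return q

t_final = 2000 * 0.01   # = 20
q0 = run_and_tumble(eps=0.0)
D_est = np.var(q0) / (2 * t_final)     # ~ T + c^2/(2*lam) = 1.0
q1 = run_and_tumble(eps=0.1)
M_est = np.mean(q1) / (0.1 * t_final)  # ~ 1
# D_est ~ 1.0 while M_est * T = 0.5: the Sutherland-Einstein relation fails
```

Note that D_est stays away from M_est·T even at small T, in line with the remark that there is diffusion already at T = 0.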

Open problems
We mention a couple of natural open problems related to response around nonequilibria.
1. Incomparable trajectories: a perturbation may change the very set of possible trajectories, in which case the direct comparison of path weights fails and the method of [33] can be used. In the same paper [33], and via the same method, a linear response for dynamical systems is illustrated. A third, related case is that of changes in geometry and topology. Nonequilibrium may be a topological effect, as e.g. allowing circuits is essential for breaking detailed balance. Again, changes in the network architecture or topology may give rise to incomparable trajectories.
In general, stochastic regularization is a good method to deal with this pragmatically, if linear response makes sense at all. That is illustrated in Example IV.2 and in Fig. 5 for a chain of oscillators where the dynamics becomes Hamiltonian in the bulk.
2. Many-body physics: We have emphasized from the start that response expansions must be useful. That means also that the observables appearing in the expectations of linear or nonlinear response should be measurable. Today, much progress has been made in following trajectories of individual particles. The many-body case is, however, still very challenging. There seems to be no good escape here; frenesy is necessary in response around nonequilibria and involves many-body kinetics. Other identities avoid the details of the response but still give useful relations. We have in mind for example the Harada-Sasa equality discussed above, where the energy dissipation is obtained from experimentally accessible quantities alone, without knowing every detail of the system. Again, physical coarse-graining towards more reduced descriptions appears a good option; see e.g. [131].

3. No local detailed balance:
We have supposed throughout that we work under the condition of local detailed balance. That is not a strict mathematical prerequisite, but it is essential for the physical interpretations. In Section IV B 2 we have seen the examples of linear response for active Ornstein-Uhlenbeck and run-and-tumble processes. Those were the easy cases, however. Extensions of the FDR of the first and the second kind for active systems which are in direct contact with nonequilibrium degrees of freedom therefore remain to be explored. We have seen how the Einstein relation between noise and friction gets modified for probes coupled to nonequilibrium reservoirs, but much needs to be clarified here for benchmarking a physically motivated active Brownian motion. Active systems as we encounter them in biological processes break the FDR, and we wish to construct the response from the tools of the present paper. See e.g. [120] for such a challenge.
4. Quantum nonequilibrium: Linear response around quantum nonequilibria faces various problems. To start, we lack good modeling of quantum nonequilibrium processes¹¹. Quantum open dynamics is usually treated in the weak-coupling limit; see [124] for an exciting possibility based on the quantum FDR. We continue this discussion in Section V.
5. Ageing and glassy systems: This review does not deal explicitly with response theory in disordered and glassy systems [125]. That is unfortunate, as one of the main driving forces for the development of response theory out of equilibrium has indeed been the physics of glassy systems; see e.g. [44,126]. The focus of this review is much more on response around steady behavior, while glasses refer to a transient, albeit very long-lived, condition. The methods of the previous Sections remain valid, but the nonequilibrium sits entirely in the nature of the condition, with a dynamics that is otherwise undriven and satisfies detailed balance with respect to an asymptotic equilibrium.
While the physics is clearly much more complicated than what has been presented in the majority of examples so far, there is a further good reason why it should appear as an (advanced) application of response theory in a trajectory-based approach. Today, there is a growing trend to emphasize the kinetics of glassy behavior instead of the thermodynamics of metastability. The general idea is that many-body interactions create kinetic constraints for the evolution and relaxation to equilibrium. That is exactly in line with the frenetic aspects we have been emphasizing: relaxation requires the possibility of traffic between mesoscopic conditions. We have seen examples of particle transport where the current gets strongly diminished when pushing harder, as the frenesy takes over as the main component in the action. Similarly, glassy phases and transitions have been considered as manifestations of jamming and of transitions in dynamical activity [89,127].
6. Applications and experiments: While we have tried to emphasize the importance of the frenetic contribution to response, there are clearly many more applications and insights to be gained; see also [5]. One possible avenue is to understand better what determines the scale of susceptibilities: how sensing works, in other words. It would for example be interesting to understand the validity of the Weber-Fechner law (1834) from psychophysics, which states that the relationship between stimulus and perception is logarithmic; see e.g. [128].
We see also that weak susceptibility of certain observables (homeostasis) would follow from near orthogonality of the observable O and the excess action, O ⊥ −D_0 + (1/2)S_0, in the sense of a vanishing right-hand side in (43). Such points of zero susceptibility are reached when moving from a regime of positive to one of negative susceptibility.
At the same time, experiments on measuring the role of frenesy are still limited.
Trajectory-based response is feasible with the newest tools of tracking and data selection. We hope more of that can be used for understanding nonequilibrium response.

C. Nonlinear response around equilibrium
One may wonder whether the (mutilated) ensemble (19) or just the fluctuation identity (38) would suffice to continue response theory to second order. It was explained in [129] why that does not work: knowing only (38) does not determine the response beyond linear order. The question of nonlinear response around equilibrium has of course been considered in many important papers. We mention [126] for the context of disordered systems, where it enables the measurement of a correlation length, and [127], where the frenetic term plays a central role.
Section IV A can be continued from (42). We start again with the equilibrium reference with expectations ⟨·⟩_eq. We suppose that S = S_0, meaning that the entropy flux determines the order of the perturbation, e.g. from adding external fields or potentials as perturbations.
Using (42) with a state function O(ω) = f(x_t) and applying formula (87), we get the next order beyond the traditional Kubo formula (50), in (88). We have used again that ⟨f(πx_0)⟩_eq = ⟨f(x_0)⟩_eq = ⟨f(x_t)⟩_eq. The result (88) is valid for general time-dependent perturbation protocols as well; see [36].
To start the discussion of the next section, it is interesting to observe that perturbations which are thermodynamically equivalent (having the same S_0) still yield different responses. That is due to the frenetic contribution (different D_0). Sensing beyond close-to-equilibrium is a kinetic effect; see Fig. 9.

Feeling kinetics
Suppose we have a gas in a volume V which is open to the exchange of particles with a chemical bath at temperature T and chemical potential µ. The gas finds itself in thermal and chemical equilibrium at fixed volume, chemical potential and temperature. Of course, the number N(t) of particles at time t is variable. The density ⟨N⟩_eq/V is constant and determined by the environment (µ, T). That is the preparation at time zero. Let us then change the chemical potential from µ to µ + δ at fixed T, for some small δ. In time the gas will relax to the new equilibrium at (µ + δ, T), with an evolution of the density through the expected particle number ⟨N(t)⟩. Its change in time is given by response theory. In the linear regime, from (47), we obtain the FDR of the first kind (linear in small δ). Here, ⟨·⟩_eq is the expectation in the original equilibrium process at (µ, T), and J(s) is the net current of particles entering the environment at time s. We only used (47) and a general thermodynamic description in terms of particle number, entropy flux and the relevant intensive variables. The expectation takes care of the rest; the expectation in the right-hand side only depends on the original chemical potential.
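The structure of this FDR of the first kind can be made concrete in the simplest relaxational setting. As a stand-in for the particle-exchange setup (which we do not model here), take an overdamped particle dx = −γx dt + √(2T) dW perturbed by a small constant force δ at time zero: the linear-response prediction built from the equilibrium autocorrelation C(t) = (T/γ)e^{−γt} alone, via the susceptibility χ(t) = β[C(0) − C(t)], coincides with the exact relaxation of ⟨x_t⟩.

```python
import numpy as np

# FDR of the first kind for an Ornstein-Uhlenbeck particle:
# dx = -gamma*x dt + sqrt(2T) dW, with a small force delta switched on at t = 0.
gamma, T, delta = 0.7, 1.3, 1e-2
beta = 1.0 / T

t = np.linspace(0.0, 10.0, 501)

# equilibrium autocorrelation of the unperturbed process
C = (T / gamma) * np.exp(-gamma * t)

# FDR prediction: susceptibility chi(t) = beta * (C(0) - C(t))
response_fdr = delta * beta * (C[0] - C)

# exact response of the mean: <x_t> = (delta/gamma)(1 - exp(-gamma t))
response_exact = (delta / gamma) * (1.0 - np.exp(-gamma * t))

# for this linear model the two expressions agree identically
```

For this Gaussian model the agreement is exact at all δ; in general the FDR only captures the linear order.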
That situation changes at second order around equilibrium, as seen from (88). We sketched the general scenario in Fig. 9. The frenetic contribution enters, and the exit and entrance rates of the particles now matter. The response has become sensitive to kinetic information beyond the change in (thermodynamic) chemical potential. There are indeed different kinetic ways to increase the bath chemical potential, and the difference will be picked up by the time-dependence of ⟨N(t)⟩ − ⟨N⟩_eq at second order around equilibrium (δ²). As first explored in [36], the total exchange activity (between the system and the reservoir) enters, which is a time-symmetric traffic.

Experimental challenges
Second-order response around equilibrium was first explored in [130] for a colloidal particle in an anharmonic potential. There, the technique to measure the trajectory of the particle is known as total internal reflection microscopy. The perturbation is an optical force on the particle.
In [131] the problem of coarse-graining is investigated. A trajectory-based response theory for a dense suspension is obviously challenging. As we saw before, also in Section IV B 3, getting "enough" kinetic information to evaluate the frenetic contribution is problematic in many-body systems. Such coarse-graining aspects can also be studied in simulations and numerical work.

V. QUANTUM CASE
The formalism of linear response theory as developed in the 1960s largely followed that of perturbation theory in quantum mechanics. We repeat the main steps of that formalism, limiting ourselves to finite systems. Mathematically rigorous generalizations to spatially-extended systems, to ground states in particular, and to the description of linear response in the thermodynamic limit are obviously important, but today seem restricted to systems showing a mass gap uniformly in the volume; see e.g. [132].

One starts with a Hamiltonian H(t) = H_0 − h_t B, a time-dependent perturbation of H_0 with small field h_s coupled to an observable B. The initial density matrix ρ_0 is invariant for the unperturbed evolution U_0: U_0(s) ρ_0 U_0*(s) = ρ_0. A first-order calculation, with B_0(u) := U_0*(u) B U_0(u), gives all we need to calculate the density matrix ρ(t), t > 0, to first order in h_s. We obtain the perturbed expectations from ⟨A⟩(t) = Tr[ρ(t) A] for observables A. Writing A_0(t) := U_0*(t) A U_0(t), we conclude that the response function is given by R(t, s) = (i/ℏ) Tr(ρ_0 [A_0(t), B_0(s)]) for t ≥ s > 0. That also works for ground states ρ_0 = |0⟩⟨0| (the projector on the nondegenerate ground state of H_0). Obviously, by the stationarity of ρ_0, the response only depends on the time-difference τ = t − s > 0.
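The first-order formula can be checked directly on a small matrix example. The sketch below assumes the standard Kubo convention H(t) = H_0 − h B with a small constant field h switched on at t = 0 and ℏ = 1, so that R(τ) = i Tr(ρ_0 [A_0(τ), B]); all matrices are toy choices.

```python
import numpy as np

# Toy check of quantum linear response: H = H0 - h*B for t > 0, hbar = 1.
H0 = np.array([[1.0, 0.3], [0.3, -1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])    # perturbing observable (sigma_x)
A = np.array([[1.0, 0.0], [0.0, -1.0]])   # measured observable (sigma_z)
beta, h, t_end = 1.0, 1e-3, 2.0

E, V = np.linalg.eigh(H0)
rho0 = V @ np.diag(np.exp(-beta * E)) @ V.conj().T
rho0 /= np.trace(rho0)                    # thermal initial state, invariant for U0

def heisenberg(O, tau):
    """O_0(tau) = U0*(tau) O U0(tau), via the eigendecomposition of H0."""
    U = V @ np.diag(np.exp(-1j * E * tau)) @ V.conj().T
    return U.conj().T @ O @ U

# response function R(tau) = i Tr(rho0 [A_0(tau), B]) on a time grid
taus = np.linspace(0.0, t_end, 2001)
R = np.array([(1j * np.trace(rho0 @ (heisenberg(A, s) @ B
                                     - B @ heisenberg(A, s)))).real
              for s in taus])
dtau = taus[1] - taus[0]
lin_resp = h * dtau * (R.sum() - 0.5 * (R[0] + R[-1]))  # trapezoid integral

# exact evolution with the constant perturbation switched on
Ep, Vp = np.linalg.eigh(H0 - h * B)
Up = Vp @ np.diag(np.exp(-1j * Ep * t_end)) @ Vp.conj().T
exact = np.trace(Up @ rho0 @ Up.conj().T @ A).real - np.trace(rho0 @ A).real

# exact and lin_resp agree up to O(h^2)
```

The residual difference between `exact` and `lin_resp` is of order h², which is the content of the first-order expansion above.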
To reach the quantum fluctuation-dissipation theorem one must use that ρ_0 is the thermal equilibrium state for H_0. At this point one can use the Kubo-Martin-Schwinger (KMS) condition for the equilibrium density ρ_0 = ρ_eq = exp(−βH_0)/Z, with Tr[ρ_eq A] = ⟨A⟩_eq, which says that ⟨A B_0(t)⟩_eq = ⟨B_0(t − iβℏ) A⟩_eq. That basically uses analyticity in a complex-time domain, where B_0(−iℏs) = e^{sH_0} B e^{−sH_0}.
We thus have a relation in which we can put ⟨A⟩ = ⟨B⟩ = 0 without loss of generality. Assuming that the decay in time t is sufficiently fast, we define the Fourier transform G̃_AB(ν) = ∫ dt e^{iνt} G_AB(t), where ν is the frequency conjugate to time. Since G_AB(t) ∈ ℝ, we have G̃_AB(ν)* = G̃_AB(−ν) = G̃_BA(ν), where the second equality follows from the cyclicity of the trace, making G_AB(t) = G_BA(−t).
In particular, G_AA(t) is positive-definite, meaning that Σ_{i,j=1}^n c̄_i c_j G_AA(t_i − t_j) ≥ 0 for all coefficients c_i ∈ ℂ. That can be shown by using G_AA(t_i − t_j) = (1/2) Tr[ρ_0 (A_0(t_i)A_0(t_j) + A_0(t_j)A_0(t_i))], and it implies that G̃_AA(ν) ≥ 0 is real and positive.
A final calculation from (91) leads to the fluctuation-dissipation theorem in the form (95). That is the better-known quantum version of the Kubo relation (50) (obtained by replacing tanh(βℏν/2) by βℏν/2).
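The tanh(βℏν/2) factor can be verified on any finite system by exact diagonalization: in the eigenbasis of H_0, the spectral weight of the dissipative (antisymmetric) part at transition frequency ν = E_n − E_m is (ρ_m − ρ_n)|A_{mn}|², the symmetrized correlation carries (ρ_m + ρ_n)|A_{mn}|², and their ratio is exactly tanh(βν/2). Here is a check on a toy 4-level system (ℏ = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 4, 0.8

# random Hermitian Hamiltonian and observable (toy system, hbar = 1)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2
O = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = (O + O.conj().T) / 2

E, V = np.linalg.eigh(H)
p = np.exp(-beta * E)
p /= p.sum()                 # Gibbs weights rho_m
A = V.conj().T @ O @ V       # observable in the energy eigenbasis

# For each transition m -> n with frequency nu = E_n - E_m, compare the
# dissipative weight (p_m - p_n)|A_mn|^2 with tanh(beta*nu/2) times the
# symmetrized-correlation weight (p_m + p_n)|A_mn|^2.
max_dev = 0.0
for m in range(n):
    for k in range(n):
        if m == k:
            continue
        nu = E[k] - E[m]
        w2 = abs(A[m, k])**2
        dev = abs((p[m] - p[k]) * w2
                  - np.tanh(beta * nu / 2) * (p[m] + p[k]) * w2)
        max_dev = max(max_dev, dev)
# max_dev vanishes up to rounding error
```

The identity holds transition by transition because (e^{−βE_m} − e^{−βE_n})/(e^{−βE_m} + e^{−βE_n}) = tanh(β(E_n − E_m)/2).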
When A = B, it is the imaginary part of the response function that relates to dissipation. If indeed we consider E(t) = Tr(ρ(t)H(t)) and take h_s = Re(h_0 e^{−iνs}), A = B, then the left-hand side of the resulting identity is the change of energy over one period. That dissipation is connected to fluctuations via the right-hand side of (95). In general one can also find the real part of the response by using the so-called Kramers-Kronig relations, where the integrals are principal values.
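A minimal numerical illustration of the Kramers-Kronig relations: for the causal toy kernel R(τ) = e^{−γτ}θ(τ) (an assumed example, not the paper's), the transform is χ(ν) = 1/(γ − iν), and the real part at any frequency is recovered from a principal-value integral over the imaginary part.

```python
import numpy as np

# Kramers-Kronig check for the causal kernel R(tau) = exp(-gamma*tau), tau > 0:
# chi(nu) = 1/(gamma - i*nu), so Re chi = gamma/(gamma^2 + nu^2)
#                             and Im chi = nu/(gamma^2 + nu^2).
gamma, nu0 = 1.0, 0.7

def im_chi(nu):
    return nu / (gamma**2 + nu**2)

# Principal value of (1/pi) * Int d(nu') Im chi(nu') / (nu' - nu0):
# a grid symmetric about nu0, offset by half a step, lets the pole
# at nu' = nu0 cancel pairwise.
h = 0.01
k = np.arange(-40000, 40000)           # covers nu' in nu0 +/- 400
nup = nu0 + (k + 0.5) * h
pv = np.sum(im_chi(nup) / (nup - nu0)) * h / np.pi

re_exact = gamma / (gamma**2 + nu0**2)
# pv reproduces re_exact up to truncation of the tails
```

The half-step-offset grid is a simple way to handle the principal value; the remaining error comes from cutting the integration at |ν' − ν₀| = 400.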
Let us add that we can get rid of the "Imaginary", say in (95), and go back to the time domain by taking convolutions.
The quantum version of the Sutherland-Einstein relation is readily obtained from (95).
The mean-square displacement is written using anti-commutators, where we insert (93) for G(t) = (1/2)⟨{X_0, X_t}⟩_eq. Following [124], with (95) that implies that the diffusive behavior can be deduced from (97). For the time-dependent response function we use (91) (zero for τ < 0). In the long-time, classical regime we must take βℏ ≪ 1/γ, with 1/γ the relaxation time, so that R(τ) → µ as τ ↑ ∞, with µ the mobility. Then (97) yields µ = βD, as in the classical Sutherland-Einstein relation; see Example II.4. In the long-time quantum regime, where we consider relaxation times shorter than βℏ, other (intrinsically quantum) behavior may arise, as studied in [124].
The reason for recalling the above is not only completeness. The calculations above give the standard approach to the FDR of the first kind. Note the difference in approach with all that went before. An extension to quantum nonequilibrium dynamics is therefore not straightforward; it asks, to begin with, for a notion of (quantum) traffic or dynamical activity, even to start in the semiclassical realm [133].
Ideas of unravelling of trajectories [134,135] or of classical representations of spin density evolutions [136,137] go in that same direction.
On the other hand, much of today's research activity in quantum nonequilibrium physics uses either the Schwinger-Keldysh nonequilibrium Green function technique [121,122] or the Feynman-Vernon influence functional approach [123]. The calculations using time-dependent nonequilibrium Green functions are rather complicated, however, and we fail to see a powerful conceptual framework there. The Feynman-Vernon approach is useful for deriving (certain) master equations for the reduced density matrix, with most emphasis on bosonic (thermal) environments.

VI. CONCLUSIONS AND OUTLOOK
The tools for observing and manipulating mesoscopic kinetics have been growing sensationally. We are therefore hopeful that a response theory based on checking trajectories is