Entropy bound for time reversal markers

We derive a bound on entropy production in terms of the mean of normalizable path-antisymmetric observables. The optimal observable for this bound is shown to be the signum of entropy production, which is often easier to determine or estimate than entropy production itself. It can be preserved under coarse graining by use of a simple path grouping algorithm. We demonstrate this relation and its properties using a driven network on a ring, for which the bound saturates at short times for any driving strength. This work may open a way to systematic coarse graining of entropy production.

Another way of detecting broken detailed balance is via entropy production, which has been found to obey a variety of theorems, including the fluctuation theorems [28-31]. A number of important relations have been found that bound entropy production, such as the thermodynamic uncertainty relation (TUR) [32-39]. The TUR bounds the mean and variance of currents by entropy production, or vice versa. It has been extended and refined, including to path-antisymmetric observables (FTUR) or to more general path weights [40-43]. Little is, however, known about how such bounds behave under coarse graining.
In this manuscript we derive an entropy bound in terms of the mean of path-antisymmetric observables, based on an integrated fluctuation theorem. In contrast to the TUR and FTUR, it does not involve the variance of the observable. We determine the optimal observable, i.e., the observable that maximizes the bound, to be the signum of entropy production, so that a relation between entropy production and its sign emerges. As this relation saturates for a binary process at short times, we argue that no better relation between entropy production and its sign can exist with the same range of validity. The sign of entropy production, and hence the bound for entropy production, can be preserved under coarse graining with a simple path grouping rule. We apply these results to a discrete network on a ring. For this network, the signum of entropy production is coarse grained, under preservation of the bound, to the signum of the traveled distance, demonstrating how a bound for microscopic entropy production is obtained from a macroscopic observable. Under such coarse graining, entropy production can at most decrease to the original bound.

II. SETUP AND FLUCTUATION THEOREM
Consider a path observable O[ω], with path ω in phase space and path probability p[ω], and the average formally given by the sum over paths [44],

⟨O⟩ = Σ_ω p[ω] O[ω]. (1)

To construct a marker for path reversal, the sum is reordered [45],

⟨O⟩ = Σ_ω p[θω] O[θω]. (2)

We introduced notation for path reversal, θω, including reversal of time as well as kinematic reversal of momenta [46]. Validity of Eq. (2) requires the sum over paths to include θω for any included ω. Adding a zero yields

⟨O⟩ = Σ_ω p[ω] (O_s[ω] + O_a[ω]) = ⟨O_s⟩ + ⟨O_a⟩, (3)

with the symmetric and antisymmetric parts O_{s,a}[ω] = (O[ω] ± O[θω])/2, where the first term equals ⟨O_s⟩. Under detailed balance, p[θω] = p[ω], the antisymmetric part averages to zero,

⟨O_a⟩ = (1/2) Σ_ω (p[ω] − p[θω]) O[ω] = 0. (4)
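This vanishing of ⟨O_a⟩ under detailed balance can be illustrated with a minimal numerical sketch. Python is used here, with paths represented by indices and the reversal θ modeled as the pairing i ↔ M−1−i; this pairing is our illustrative construction, not the paper's phase-space reversal.

```python
import numpy as np

rng = np.random.default_rng(3)

M = 8                          # number of paths (even, so i != theta(i) for all i)
theta = np.arange(M)[::-1]     # path reversal modeled as the pairing i <-> M-1-i

O = rng.random(M)
Oa = (O - O[theta]) / 2        # antisymmetric part O_a[omega] = (O[omega] - O[theta omega]) / 2

# detailed balance: p[theta omega] = p[omega]
p = rng.random(M)
p = (p + p[theta]) / 2         # symmetrize under reversal
p /= p.sum()                   # normalize

# the reversal marker averages to zero, Eq. (4)
assert np.isclose(np.sum(p * Oa), 0.0)
```

For generic (driven) path probabilities the same average is nonzero, which is the marker property used below.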
Violations of Eq. (4) thus indicate the breakage of detailed balance [47, 48]. While the symmetric part O_s does not appear in the final result, Eq. (12) below, it is useful for the derivation to start with finite O_s.
To quantify the path reversal properties of cases that break detailed balance, we introduce the stochastic change in entropy, defined as the log ratio of path probabilities [49-51],

s[ω] = log( p[ω] / p[θω] ). (5)

For simplicity, we will in the following refer to s as the entropy production, despite some caveats regarding this term [52]. Inserting s into Eq. (2) yields

⟨O⟩ = Σ_ω p[ω] e^{−s[ω]} O[θω]. (6)

Reordering the terms, i.e., applying this identity to the conjugate observable O*[ω] = O[θω], yields a fluctuation theorem including O and s,

⟨O e^{−s}⟩ = ⟨O*⟩. (7)

Eq. (7) may be found equivalently from the so-called strong detailed fluctuation theorem [45], and has been stated in similar form [47].
III. ENTROPY BOUND

Equation (7) can be used to find bounds for s, and we therefore restrict to positive observables, O[ω] ≥ 0. This allows Jensen's inequality [53, 54] to be applied for the average ⟨O e^{−s}⟩ / ⟨O⟩, to obtain from Eq. (7),

⟨O*⟩ / ⟨O⟩ = ⟨O e^{−s}⟩ / ⟨O⟩ ≥ e^{−⟨Os⟩/⟨O⟩}. (8)

As expected from Jensen's inequality, Eq. (8) saturates for small s, as seen by expanding it in this limit,

⟨O*⟩ / ⟨O⟩ = 1 − ⟨Os⟩/⟨O⟩ + O(s²). (9)

Taking the logarithm of Eq. (8) yields a lower bound for the correlation ⟨Os⟩,

⟨Os⟩ ≥ ⟨O⟩ log( ⟨O⟩ / ⟨O*⟩ ). (10)

Because the conjugate observable O*[ω] = O[θω] is also positive, Eq. (10) is also valid for O*. The bounds for ⟨Os⟩ and ⟨O*s⟩ may thus be added to arrive at a bound for ⟨sO_s⟩, the correlation of s and the symmetric part O_s,

2⟨sO_s⟩ = ⟨Os⟩ + ⟨O*s⟩ ≥ ⟨O⟩ log( ⟨O⟩ / ⟨O*⟩ ) + ⟨O*⟩ log( ⟨O*⟩ / ⟨O⟩ ). (11)

Notably, when adding Eq. (10) for ⟨Os⟩ and ⟨O*s⟩, the linear term in Eq. (9) drops out, so that Eq. (11) does in general not saturate for small s. As will be discussed below, it saturates for a binary process at short times, for any value of s.
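The chain from the fluctuation theorem to the bound of Eq. (10) can be checked numerically on a toy path ensemble. The sketch below assumes the integrated fluctuation theorem in the form ⟨O e^{−s}⟩ = ⟨O*⟩ with O*[ω] = O[θω], and models θ as an index pairing (an illustrative construction, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8
theta = np.arange(M)[::-1]        # illustrative path reversal: i <-> M-1-i
p = rng.random(M)
p /= p.sum()                      # generic path probabilities, detailed balance broken

O = rng.random(M) + 0.1           # positive path observable
Ostar = O[theta]                  # conjugate observable O*[omega] = O[theta omega]
s = np.log(p / p[theta])          # entropy production, log ratio of path probabilities

# integrated fluctuation theorem (assumed form): <O exp(-s)> = <O*>
lhs = np.sum(p * O * np.exp(-s))
rhs = np.sum(p * Ostar)
assert np.isclose(lhs, rhs)

# bound Eq. (10): <O s> >= <O> log(<O> / <O*>)
meanO = np.sum(p * O)
assert np.sum(p * O * s) >= meanO * np.log(meanO / rhs) - 1e-12
```

The first identity holds exactly by construction of s; the second follows from Jensen's inequality and holds for any positive O.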
A straightforward way to extract a bound for ⟨s⟩ from Eq. (11) is by considering O_s = 1, i.e., path independent. In order for O = 1 + O_a to be positive, the antisymmetric part must obey |O_a[ω]| ≤ 1. With ⟨O⟩ = 1 + ⟨O_a⟩ and ⟨O*⟩ = 1 − ⟨O_a⟩, Eq. (11) turns into

⟨s⟩ ≥ ⟨O_a⟩ log[ (1 + ⟨O_a⟩) / (1 − ⟨O_a⟩) ]. (12)

Eq. (12) is a main result of this manuscript, a bound for entropy production ⟨s⟩ in terms of the average of an antisymmetric observable O_a. This relation is thus fundamentally different from uncertainty relations, which bound entropy production in terms of mean and variance [41].

Figure 1. Bound of Eq. (12) in terms of the antisymmetric observable ⟨O_a⟩, with the accessible area marked in green. For ⟨O_a⟩ → ±1, the bound diverges logarithmically.
The condition |O_a[ω]| ≤ 1 may seem to be a strong restriction on the validity of Eq. (12). However, a bound between ⟨s⟩ and ⟨O_a⟩ can only be useful if O_a is normalizable, i.e., if a maximum value max_ω |O_a[ω]| exists, in which case O_a can be rescaled by this maximum. Eq. (12) is thus applicable for any normalizable antisymmetric observable. We also note that the right hand side of Eq. (12) is non-negative, so that any nonzero ⟨O_a⟩ yields a positive bound for ⟨s⟩.
Eq. (12) can be read in two ways: (i) A given ⟨s⟩ yields a bound for how far the mean of (any) O_a can deviate from zero. Using, e.g., a time interval from −t_0 to t_0, O_a can be the moment in time at which a certain event occurs, which is then bounded by ⟨s⟩ via Eq. (12). This will be investigated in future work. (ii) A given nonvanishing value of ⟨O_a⟩ yields a lower bound for entropy production. We analyze this below.
The form of Eq. (12) is illustrated in Fig. 1. For small ⟨O_a⟩, the bound grows quadratically in ⟨O_a⟩, while it diverges logarithmically for ⟨O_a⟩ → 1.
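These two limits can be verified with a few lines of Python. The sketch assumes the right hand side of Eq. (12) in the form ⟨O_a⟩ log[(1+⟨O_a⟩)/(1−⟨O_a⟩)], consistent with the quadratic growth and logarithmic divergence described here.

```python
import numpy as np

def entropy_bound(oa):
    """Right hand side of Eq. (12), assumed as <O_a> log[(1+<O_a>)/(1-<O_a>)]."""
    oa = np.asarray(oa, dtype=float)
    return oa * np.log((1 + oa) / (1 - oa))

assert np.isclose(entropy_bound(1e-4), 2e-8, rtol=1e-3)    # quadratic, ~ 2 <O_a>^2
assert np.isclose(entropy_bound(0.5), entropy_bound(-0.5)) # even in <O_a>
assert entropy_bound(0.999999) > 10                        # logarithmic divergence
```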

IV. OPTIMAL OBSERVABLE: SIGNUM OF ENTROPY
Eq. (12), as mentioned, is valid for any normalizable antisymmetric observable, and, naturally, the observable that maximizes its right hand side yields the best estimate for ⟨s⟩. Which observable is it? Answering this question has been found to be non-trivial for entropy bounds [56, 57], while it has a clear answer for Eq. (12). To see this, rewrite [58]

⟨O_a⟩ = Σ_ω p[ω] O_a[ω] = (1/2) Σ_ω (p[ω] − p[θω]) O_a[ω] ≤ (1/2) Σ_ω |p[ω] − p[θω]|. (13)

In the second step, we used the antisymmetry of O_a. The inequality in the last step of Eq. (13) follows by noting that the sum is maximized if O_a[ω] = sign(p[ω] − p[θω]) = sign(s[ω]). This is the definition of O_a = sign(s) [59]. As the right hand side of Eq. (12) is a monotonically growing function of |⟨O_a⟩| (compare Fig. 1), O_a = sign(s) yields the optimal bound for ⟨s⟩ from Eq. (12). To emphasize this, we write explicitly

⟨s⟩ ≥ ⟨sign(s)⟩ log[ (1 + ⟨sign(s)⟩) / (1 − ⟨sign(s)⟩) ] ≥ ⟨O_a⟩ log[ (1 + ⟨O_a⟩) / (1 − ⟨O_a⟩) ]. (14)

The first inequality of Eq. (14) bounds ⟨s⟩ by ⟨sign(s)⟩.
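The optimality of sign(s) can be illustrated by brute force: on a random path ensemble, no normalizable antisymmetric observable exceeds ⟨sign(s)⟩. A sketch, with θ again modeled as an index pairing (our illustrative construction):

```python
import numpy as np

rng = np.random.default_rng(1)

M = 10
theta = np.arange(M)[::-1]          # illustrative path reversal: i <-> M-1-i
p = rng.random(M)
p /= p.sum()
s = np.log(p / p[theta])            # entropy production per path

mean_opt = np.sum(p * np.sign(s))   # <sign(s)>, the claimed optimum

# no other normalizable antisymmetric observable gives a larger mean
for _ in range(200):
    x = rng.uniform(-1, 1, M)
    oa = (x - x[theta]) / 2         # antisymmetric, |oa| <= 1 by construction
    assert np.sum(p * oa) <= mean_opt + 1e-12

assert mean_opt >= 0                # consistent with <sign(s)> >= 0
```

Since the right hand side of Eq. (12) grows monotonically in |⟨O_a⟩|, maximizing ⟨O_a⟩ maximizes the bound.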
Eq. (13) shows that ⟨sign(s)⟩ ≥ 0, and that ⟨sign(s)⟩ = 0 only if ⟨s⟩ = 0, i.e., Eq. (14) yields a finite bound for any finite ⟨s⟩. The second inequality of Eq. (14) restates that O_a = sign(s) yields the optimal bound, so that any other O_a lies below it.

V. COARSE GRAINING
A bound of ⟨s⟩ in terms of ⟨sign(s)⟩ is fundamentally interesting, and it is also useful, as, e.g., ⟨sign(s)⟩ has beneficial properties under coarse graining. Therefore consider coarse grained paths Ω with probabilities P(Ω) = Σ_{ω∈Ω} p(ω), and coarse grained entropy production S = log[ P(Ω) / P(θΩ) ]. Naturally, O_a = sign(S) fulfills Eq. (13), so that, for any grouping of paths,

0 ≤ ⟨sign(S)⟩ ≤ ⟨sign(s)⟩. (15)

Coarse graining thus leads, in general, to a decrease of ⟨sign(s)⟩, reminiscent of the finding that ⟨s⟩ also decreases under coarse graining [60]. Notably, grouping paths according to the sign of s, i.e., with sign(s[ω]) identical for all ω ∈ Ω, preserves its mean, ⟨sign(S_o)⟩ = ⟨sign(s)⟩. Under this "optimal" (index o) coarse graining, the bound provided by sign(s) is invariant, so that the macroscopic ⟨sign(S_o)⟩ yields the same bound as the microscopic ⟨sign(s)⟩. Furthermore, as the bound must hold for s and S_o alike, the coarse grained entropy production S_o never falls below the original, microscopic bound. This algorithm thus provides a controlled coarse graining of entropy production, which is especially useful if the bound from sign(s) is close to s.
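The grouping rule can be sketched numerically: collecting all paths with s > 0 into Ω_o (and their reverses into θΩ_o) preserves ⟨sign(s)⟩, and, when no s = 0 paths occur, the resulting ⟨S_o⟩ even saturates the bound. A hedged illustration, with θ modeled as an index pairing:

```python
import numpy as np

rng = np.random.default_rng(2)

M = 12
theta = np.arange(M)[::-1]              # illustrative path reversal: i <-> M-1-i
p = rng.random(M)
p /= p.sum()
s = np.log(p / p[theta])                # generic p: no s == 0 paths occur

# optimal grouping: Omega_o collects all paths with s > 0
P_plus = p[s > 0].sum()                 # P(Omega_o)
P_minus = p[s < 0].sum()                # P(theta Omega_o)
S_o = np.log(P_plus / P_minus)          # coarse grained entropy production

# <sign(S_o)> = <sign(s)>: the marker survives coarse graining
assert np.isclose((P_plus - P_minus) * np.sign(S_o), np.sum(p * np.sign(s)))

# with no s == 0 paths, <S_o> saturates the bound from sign(s)
x = P_plus - P_minus
assert np.isclose(P_plus * S_o - P_minus * S_o, x * np.log((1 + x) / (1 - x)))
```

The saturation in the last line relies on P(Ω_o) + P(θΩ_o) = 1, i.e., on the absence of paths with s = 0; this is the same mechanism behind the odd-N saturation in the ring example below.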

VI. EXAMPLE: NETWORK ON A RING
To display this in an example, consider a network on a ring, where every state is connected to two neighbors (see inset sketch of Fig. 2). In every discrete time step, a particle jumps to the left (right) with probability p (q). For q ≠ p, the system violates detailed balance and shows a directed flow. After N steps, the probability of finding a specific path with n_L steps to the left is p[ω] = (1/L) p^{n_L} q^{N−n_L}, with L the number of states in the network, so that n_L follows a binomial distribution. With it, entropy production after N steps is given by

s[ω] = d log(p/q), (16)

with the net displacement d = n_L − n_R and n_R = N − n_L the number of steps to the right, so that the mean reads

⟨s⟩ = N (p − q) log(p/q). (17)

Optimal coarse graining can be performed here in a straightforward manner: Because s = d log(p/q), the sign of entropy production equals the sign of d (for p > q), i.e., sign(d[ω]) = sign(s[ω]). This system thus allows coarse graining towards measurement of the net displacement d, under preservation of the bound. We may expect that d is easier to measure than s.
Having established that Eq. (12) is maximal for O_a = sign(d[ω]), we can test the quality of the estimate for ⟨s⟩ provided by it. For N = 1, ⟨sign(d)⟩ = p − q, and

⟨s⟩ = (p − q) log(p/q) = ⟨sign(d)⟩ log[ (1 + ⟨sign(d)⟩) / (1 − ⟨sign(d)⟩) ]. (18)

For N = 1, the bound of Eq. (12) thus meets entropy production exactly, for any p and q, i.e., arbitrarily far from equilibrium. This is the above mentioned case of a binary process, where a particle either jumps right or left.
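These statements can be checked by exact enumeration of the binomial path weights. The sketch below assumes the bound of Eq. (12) in the form x log[(1+x)/(1−x)] with x = ⟨sign(d)⟩, and uses p = 0.57 as in Fig. 2.

```python
import numpy as np
from math import comb, log

p, q = 0.57, 0.43    # left/right jump probabilities, p + q = 1

def mean_sign_d(N):
    """<sign(d)> for N steps, d = n_L - n_R, by exact binomial enumeration."""
    return sum(comb(N, nL) * p**nL * q**(N - nL) * np.sign(2 * nL - N)
               for nL in range(N + 1))

def bound(x):
    return x * log((1 + x) / (1 - x))   # assumed right hand side of Eq. (12)

# N = 1: the bound meets <s> = (p - q) log(p/q) exactly
assert np.isclose(bound(mean_sign_d(1)), (p - q) * log(p / q))

# N > 1: the bound stays at or below <s> = N (p - q) log(p/q)
for N in range(2, 30):
    assert bound(mean_sign_d(N)) <= N * (p - q) * log(p / q) + 1e-12
```

The N = 1 equality follows from p + q = 1, since then (1 + p − q)/(1 − p + q) = p/q.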
Figure 2 shows ⟨s⟩ and the bound of Eq. (12) as a function of N. For N > 1, the bound grows sublinearly in N for an intermediate range, and thus falls below the value of ⟨s⟩. For N ≫ 1, it approaches a linear asymptote, which can be found via a large deviation principle. We find for p > 1/2 and N → ∞ [61],

⟨sign(d)⟩ ≈ 1 − c_N (4pq)^{N/2}, (19)

with a subexponential prefactor c_N, i.e., ⟨sign(d)⟩ approaches unity exponentially fast with N. Because of this, the bound for s of Eq. (12) grows linearly in N, and plugging Eq. (19) into Eq. (12) yields (N/2) log[1/(4pq)], shown as a gray line in the graph. The ratio between this large-N asymptote and ⟨s⟩ of Eq. (17) varies between 1/2 for p → 1 and 1/4 for p → 1/2. The coarse graining groups paths according to their displacement, i.e., Ω for d > 0 and θΩ for d < 0. This way, the coarse grained entropy ⟨S_o⟩ can be determined, which is also shown in Fig. 2. The curve demonstrates that it, as expected, stays above the bound. As only two coarse grained paths with finite S_o exist, it is here found from

⟨S_o⟩ = [P(d > 0) − P(d < 0)] log[ P(d > 0) / P(d < 0) ] = ⟨sign(d)⟩ log[ (1 + ⟨sign(d)⟩) / (1 − ⟨sign(d)⟩) ]. (20)

Notably, the bound of Eq. (12) is saturated with respect to S_o for odd N, as indicated. For N even, paths with zero entropy production exist, the second equality of Eq. (20) is not valid, and Eq. (12) lies below ⟨S_o⟩. For large N, these differences vanish, so that the bound of Eq. (12) and ⟨S_o⟩ share the same asymptote. In this system, entropy production may thus be coarse grained with a maximal loss of a factor between 2 and 4, depending on p, using the optimal algorithm. According to Eq. (13), any other normalizable antisymmetric observable should yield a lower bound, which we exemplify by using O_a = tanh(d/2). Indeed, it lies lower, but approaches the optimal bound for large N, because then a typical trajectory shows d ≫ 1, so that tanh(d/2) becomes equivalent to sign(d).
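The odd/even distinction can likewise be verified by exact enumeration. The sketch assumes Eq. (20) in the form ⟨S_o⟩ = [P(d>0) − P(d<0)] log[P(d>0)/P(d<0)] and the bound of Eq. (12) as x log[(1+x)/(1−x)] with x = ⟨sign(d)⟩.

```python
import numpy as np
from math import comb, log

p, q = 0.57, 0.43

def probs(N):
    """P(d > 0) and P(d < 0) for N steps, d = 2 n_L - N (binomial weights)."""
    w = [comb(N, nL) * p**nL * q**(N - nL) for nL in range(N + 1)]
    Pp = sum(w[nL] for nL in range(N + 1) if 2 * nL > N)
    Pm = sum(w[nL] for nL in range(N + 1) if 2 * nL < N)
    return Pp, Pm

for N in (1, 3, 5, 7):          # odd N: no d == 0 paths, so Pp + Pm == 1
    Pp, Pm = probs(N)
    So = (Pp - Pm) * log(Pp / Pm)            # Eq. (20), first equality
    x = Pp - Pm                              # equals <sign(d)>
    assert np.isclose(So, x * log((1 + x) / (1 - x)))   # bound saturates

for N in (2, 4, 6):             # even N: d == 0 paths carry weight, Pp + Pm < 1
    Pp, Pm = probs(N)
    x = Pp - Pm
    assert (Pp - Pm) * log(Pp / Pm) > x * log((1 + x) / (1 - x)) + 1e-12
```

For even N the weight of d = 0 paths pushes (1+x)/(1−x) below P(d>0)/P(d<0), which is why ⟨S_o⟩ then strictly exceeds the bound.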
While the optimal coarse graining is possible in an exact manner in this model, we expect approximate preservation of sign(s) to be possible in more complicated systems, which will be investigated in future work.

VII. DISCUSSION
Entropy production is bounded by the mean of normalizable antisymmetric observables. The optimal observable is identified to be the signum of entropy production, so that we obtain a bound between entropy production and its sign, sign(s). The latter may often be estimated from simple observables, like, here, the displacement on a ring. For the investigated network, ⟨sign(s)⟩ approaches unity exponentially fast with the number of steps, so that the bound grows with the expected linear dependence. Grouping paths according to sign(s) yields a coarse graining algorithm that preserves sign(s) and the bound. The presented analysis is not restricted to specific dynamics, and future work may investigate applications to quantum mechanics [62-64].

Figure 2. Network on a ring model: Entropy production ⟨s⟩ as a function of N, Eq. (17), for p = 0.57, versus the bound obtained from Eq. (12) for various observables such as sign(s), sign(d) and tanh(d/2), using numerical simulations. The gray dotted line is the asymptotic limit of large N. The graph also shows the optimally coarse grained entropy ⟨S_o⟩.