Infinite Series of Singularities in the Product of Correlated Random Matrices

We consider the product of a large number of 2 × 2 matrices of two types, chosen randomly with correlations: at any round, the transition probabilities for the matrix type depend on the choice made at the previous round. Previously, a functional equation has been derived to calculate such a random product of matrices. Here, we identify the phase structure of the problem and give exact expressions for the transition points separating the "localized" and "ergodic" regimes. We demonstrate that the latter regime develops through the formation of an infinite series of singularities in the steady-state distribution of vectors that results from the action of the random product of matrices on an initial vector.


INTRODUCTION
Products of random matrices are one of the most important subjects in the statistical physics of disordered systems [1][2][3][4][5][6][7][8], with a large number of interdisciplinary applications [5]. Products drawn from continuous distributions of random matrices have been intensively studied in condensed matter physics, see, e.g., references [7,8], while biological applications typically deal with discrete sets of matrices [9][10][11][12][13][14][15]. Another difference is that in condensed-matter problems the choice of matrices in random products is usually an independent random process, while for interdisciplinary applications one considers correlations between the choices of matrices at different rounds.
In this paper, we treat the random choice of the matrix type as a Markov process. Let us take a reference vector and consider how it changes after multiplying it by a random product of n matrices of two types. The vector's norm after n steps is characterized by the Lyapunov exponent of the product of matrices, see reference [16]. In references [17,18] we gave a semi-analytical solution for the probability distribution of the resulting vector: we derived a system of functional equations and expressed the maximum Lyapunov exponent via the steady-state solution of this system.
Previously, a closed functional equation had been obtained in reference [19] for the one-dimensional Ising model in a random magnetic field. Similar functional equations had also been derived in reference [20] to describe the transmission of electrons through a chain of disordered random scatterers. The main difference between our model and those of reference [20] is that here we consider a correlated choice of matrices, such that the probability of choosing the next matrix depends on the choice at the last step ("dynamically correlated disorder"), whereas those references studied the case of an uncorrelated choice of predefined (static) disorder. Here we identify the phase structure of the correlated random product of matrices and derive exact analytical expressions for the transition points between different phases of this random product.
Let us now set the stage and specify the model. Consider two 2 × 2 matrices M_0 and M_1. In this paper, we focus on the case of real-valued matrices and vectors. Assume that at the start we have M_1. We define the transition probabilities q_ij to choose the matrix of type i = 0, 1 when at the previous round we had the matrix of type j = 0, 1. The balance condition for the transition probabilities reads: q_00 + q_01 = 1, q_10 + q_11 = 1.
We choose the vector |z_0⟩ ≡ (x_0, y_0) at the start and denote the sequence of vectors in the course of the evolution as |z_n⟩ = (x_n, y_n). We then formulate the following iteration rule to deduce |z_n⟩ from the value at the previous round: |z_n⟩ = M_{i_n} |z_{n−1}⟩, for the i_n = 0 or i_n = 1 choice of the matrix type at the n-th round.
It is convenient to introduce the following representation of the vector |z_n⟩ in terms of polar coordinates: |z_n⟩ = e^{v_n} (cos α_n, sin α_n), where α is the angle of the vector and v parameterizes its norm. We assume that the norm of |z_n⟩ grows exponentially with n, v_n ≈ R n for n → ∞. Then R is identified as the maximum Lyapunov exponent of the matrix product. The main idea of the derivation of R is to consider a correlated random matrix product as a random walk on a two-channel one-dimensional chain.
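As an illustration, R can also be estimated by direct simulation of the correlated product. The sketch below uses hypothetical matrices and switching probabilities (not those of the paper), tracking the logarithm of the vector's norm with periodic renormalization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) non-commuting 2x2 matrices -- not the paper's.
M = [np.array([[1.5, 0.3], [0.0, 0.7]]),
     np.array([[0.8, 0.0], [0.4, 1.6]])]

q01, q10 = 0.1, 0.1          # switching probabilities (here symmetric)
n_steps = 200_000

z = np.array([1.0, 0.0])     # initial vector |z_0>
i = 1                        # start with M_1, as in the text
log_norm = 0.0
for _ in range(n_steps):
    # Markov switching: jump to the other matrix type with probability q_ij
    if i == 0:
        i = 1 if rng.random() < q10 else 0
    else:
        i = 0 if rng.random() < q01 else 1
    z = M[i] @ z
    # renormalize to avoid overflow, accumulating the log of the norm
    nrm = np.linalg.norm(z)
    log_norm += np.log(nrm)
    z /= nrm

R = log_norm / n_steps       # estimate of the maximum Lyapunov exponent
print(R)
```

The estimate converges as 1/sqrt(n_steps); for these matrices the growth is dominated by the larger eigenvalues, so R is positive.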
In addition to the Lyapunov exponent, the random product of matrices is characterized by the distribution of the angles α_n after the n-th step (and in the steady state at n → ∞). Below, we analyze this distribution and identify the phase transition points (depending on the switching probabilities q_01 and q_10) between the phase where the angles in the steady state remain localized and the phase where they cover the whole phase space.
The structure of the paper is as follows: in section 2, we briefly overview the main analytical results of reference [17], which will serve as the basis for the following analysis. In section 3.1, we perform a numerical analysis for the case of non-singular matrices. We discuss the phenomenon of emergence of "reflected singularities" in the distribution of angles in section 3.2. In section 3.3, we consider the case of singular matrices. Finally, in section 4, we discuss the obtained results and present the outlook for further studies.

KNOWN RESULTS ON THE FUNCTIONAL EQUATION
To describe the random process, we consider an ensemble of states. We should then look at the probability distribution of the vector z_n or, equivalently, of α_n and v_n. Introducing the master equation for ρ_i(n, α) (where i = 0, 1), describing the distribution of α when the matrix M_i is applied at round n, we derive the following system of functional equations [17]:

ρ_i(n + 1, α) = Σ_{j=0,1} q_ij f_i'(α) ρ_j(n, f_i(α)),    (4)

where we define the functions f_i as the angle maps induced by the inverse matrices,

|f_i(α)⟩ ∝ M_i^{−1} |α⟩,

and where |α⟩ = (cos α, sin α). Assuming a steady-state distribution ρ_i(α), we get for such a distribution the set of functional equations

ρ_i(α) = Σ_{j=0,1} q_ij f_i'(α) ρ_j(f_i(α)),    (7)

and calculate the Lyapunov index as [17]

R = Σ_{i=0,1} ∫ dα g_i(α) ρ_i(α),    (8)
where the function g_i(α) is given by g_i(α) = −ln |M_i^{−1} |α⟩|. For the case of singular matrices, we use an alternative system of equations,

f̃_i'(α) ρ_i(f̃_i(α)) = Σ_{j=0,1} q_ij ρ_j(α),    (10)

where the functions f̃_i are defined similarly to f_i, with the matrices M_i^{−1} replaced by M_i: |f̃_i(α)⟩ ∝ M_i |α⟩. Similarly, we introduce the function g̃_i(α) = ln |M_i |α⟩| that determines the Lyapunov exponent, as in Equation (8).
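A minimal sketch of the two angle maps, assuming a hypothetical invertible matrix in place of M_0; it only illustrates that f_i and f̃_i, defined via M_i^{−1} and M_i respectively, are mutually inverse on the circle of directions:

```python
import numpy as np

# Hypothetical invertible matrix standing in for M_0 (the paper's matrices are not reproduced here).
M0 = np.array([[1.5, 0.3],
               [0.2, 0.7]])

def angle_map(M, alpha):
    """Angle of M |alpha>, with |alpha> = (cos alpha, sin alpha)."""
    x, y = M @ np.array([np.cos(alpha), np.sin(alpha)])
    return np.arctan2(y, x)

f0_tilde = lambda a: angle_map(M0, a)                 # \tilde f_0: via M_0
f0       = lambda a: angle_map(np.linalg.inv(M0), a)  # f_0: via M_0^{-1}

# The two maps are mutually inverse on directions (angles taken mod pi):
a = 0.4
b = f0(f0_tilde(a))
d = min((b - a) % np.pi, np.pi - (b - a) % np.pi)
print(d)  # ~ 0: the same direction is recovered
```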
When deriving Equations (4) and (10), we have assumed a continuous distribution in some interval (which can be the case, e.g., for a continuous distribution of the initial vector z_0), and an asymptotically stable distribution for the steady state.

The Phase Transitions for Non-singular Matrices
For non-zero transition probabilities, let us first consider the symmetric case q_01 = q_10 ≡ q. Let the maximal eigenvalue of the first matrix correspond to the eigenvector with angle α_0, and that of the second matrix to the eigenvector with angle α_1; we also denote α_0 = 2πX_0 and α_1 = 2πX_1. For the zero transition rate q = 0, we have the steady-state distributions ρ_0(α) = δ(α − α_0), ρ_1(α) = δ(α − α_1). The main question we are going to answer is: how do these angular delta-distributions evolve when the transition rate is non-zero? We start by numerically solving the functional equations for the steady-state angular distributions in the case of non-singular matrices, taking for our numerics two fixed non-commuting matrices M_0 and M_1. For q ≪ 1, we observe that ρ_0(α) has a maximum at α_0 and ρ_1(α) has a maximum at α_1, i.e., the delta-distributions for q = 0 are only slightly broadened and remain localized. For our choice of matrices, the positions of the maxima are at X_0 = 0.231 and X_1 = 0.785. With increasing transition rate q, we encounter different situations. Let us look at the distribution ρ_1(α). For small transition rates q, we have a peak (Figure 1); this corresponds to the first phase. The peak disappears as q increases (Figure 2), and we enter the second phase (see Figure 3), where we get a wider distribution for ρ_1. Thus, we observe numerically a transition between phases where the angular distributions are "localized" and "de-localized" in the phase space.
We can identify the transition point between the first and second phases. Let us assume a power-law singularity of the distribution ρ_1(α) near the point α_1,

ρ_1(α) ≈ K_1 |α − α_1|^{−γ},    (16)

with γ → 1 and K_1 → 0 for q → 0. In this limit the distribution becomes a delta-distribution, as it should. For small transition rates and almost diagonal matrices, an exact solution with such a scaling singularity has been derived in reference [11], which supports our ansatz. Moreover, this scaling ansatz allows for a correct solution of our functional equation near the singularity point.
Substituting this ansatz into the second (i = 1) equation of Equation (7) and matching the leading singular terms at α → α_1, we get

1 = (1 − q) [f_1'(α_1)]^{1−γ},    (20)

and find γ from this equation. The transition point from the behavior of Figure 1 to that of Figure 2 is at γ = 0; Equation (20) has a solution with γ > 0 for q < 1 − 1/f_1'(α_1). In our case, the transition point for ρ_1 is at q ≈ 0.02, and Figures 1, 2 agree well with this finding. Similarly, we can assume a singular behavior for the distribution ρ_0 near α_0 and derive the analogous expression for its critical index, 1 = (1 − q) [f_0'(α_0)]^{1−γ}. The corresponding transition point is at q = 0.62.

The Reflected Singularities
In Figure 1, we have single peaks for the distributions (within the accuracy of our numerics). In Figures 2, 3, we see several peaks for ρ_1, and many peaks in Figures 4, 5. Further, in Figure 5 we observe five high and eight low peaks in ρ_1. Some of the peaks have the same locations as peaks in ρ_0, while other peaks have independent locations.
How can we explain this phenomenon? Here, it is convenient to use the representation of the functional equations for the distributions that we derived for the singular matrices (it is also applicable to non-singular matrices). Let us look at the distributions near the point α_1 in the equation for ρ_1 and near the point α_0 in the equation for ρ_0. From these equations, we get a new singularity near the point α_01 = f̃_0(α_1) for ρ_0(α), as well as a new singularity near the point α_10 = f̃_1(α_0) for ρ_1(α). In fact, iterating this analysis, we find singularities at any of the points

α_00..1 = f̃_0(. . . f̃_0(f̃_0(α_1)))    (26)

for ρ_0(α), and singularities near the points

α_11..0 = f̃_1(. . . f̃_1(f̃_1(α_0)))    (27)

for ρ_1(α). Consider the case of the power-law singularity addressed above. We assume

ρ_0(α) ∼ K_01 |α − α_01|^{−γ},  ρ_1(α) ∼ K_10 |α − α_10|^{−γ}.

Then we can express K_01 and K_10 via K_0 and K_1.
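The locations of the reflected singularities in Equations (26) and (27) can be computed by iterating the angle maps; the sketch below uses hypothetical stand-ins for M_0 and M_1:

```python
import numpy as np

# Hypothetical non-commuting matrices (placeholders for the paper's M_0, M_1).
M0 = np.array([[1.5, 0.3], [0.2, 0.7]])
M1 = np.array([[0.8, 0.1], [0.4, 1.6]])

def angle_map(M, alpha):
    # \tilde f_i: the angle of M_i |alpha>, taken mod pi (directions, not signed vectors)
    x, y = M @ np.array([np.cos(alpha), np.sin(alpha)])
    return np.arctan2(y, x) % np.pi

def principal_angle(M):
    # the angle alpha_i of the eigenvector with the maximal eigenvalue of M
    w, v = np.linalg.eig(M)
    vec = v[:, np.argmax(w.real)].real
    return np.arctan2(vec[1], vec[0]) % np.pi

alpha0, alpha1 = principal_angle(M0), principal_angle(M1)

# First generations of reflected singularities, Equations (26)-(27):
alpha_01  = angle_map(M0, alpha1)    # new singularity of rho_0
alpha_10  = angle_map(M1, alpha0)    # new singularity of rho_1
alpha_110 = angle_map(M1, alpha_10)  # next generation for rho_1
print(alpha0, alpha1, alpha_01, alpha_10, alpha_110)
```

Note that α_1 is a fixed point of the map f̃_1, since the principal eigendirection of M_1 is preserved by M_1 itself.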
In Figure 2, the distribution ρ_1 extends just between α_0 and α_10. Actually, we have peaks for any mixed choice of the functions f̃_0, f̃_1 in Equations (26) and (27); only their heights are suppressed by higher powers of q. Let us find the reflected singularity near the point α_10. Substituting ρ_1(α) ∼ K_10 |α − α_10|^{−γ} into Equation (10) near α = α_0, we get

K_10 = q K_0 [f̃_1'(α_0)]^{γ−1}.

In the same way, we get K_01 = q K_1 [f̃_0'(α_1)]^{γ−1}. The distribution ρ_1 remains extended between α_0 and α_10 even after the transition point (see Figure 3).
Then we observe copies of these singularities at the next-generation points, e.g., α_110 = f̃_1(α_10) for ρ_1 and α_001 = f̃_0(α_01) for ρ_0. A further increase of the transition rate brings in new groups of peaks (see Figures 4, 5); we can identify the locations of the first and second high peaks, as well as the last peak in Figure 5. We have also done numerical calculations with a different matrix M_0, obtaining qualitatively similar results.

Singular Matrices
We have done numerics for a choice in which one of the two matrices is singular; this case is described by Equation (10). A singular matrix maps every vector onto a single fixed direction, so the corresponding angle map f̃_0(α) is constant and its derivative f̃_0'(α) vanishes identically (see Figure 6). As a result, there is no transition in the behavior of ρ_1(α).
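For illustration, with a hypothetical rank-1 matrix the angle map f̃_0 is indeed constant, so its derivative vanishes identically:

```python
import numpy as np

# A hypothetical rank-1 (singular) matrix: every vector is mapped onto the
# fixed direction (1, 2), so the angle map \tilde f_0 is constant.
M0 = np.outer(np.array([1.0, 2.0]), np.array([0.5, 0.3]))

def angle_map(M, alpha):
    x, y = M @ np.array([np.cos(alpha), np.sin(alpha)])
    return np.arctan2(y, x) % np.pi

alphas = np.linspace(0.1, 1.4, 7)
images = np.array([angle_map(M0, a) for a in alphas])
print(images)  # all entries equal the angle of the direction (1, 2)
```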

DISCUSSION
We have considered the correlated random product of two distinct 2 × 2 matrices, establishing a rich phase structure of the problem. We have identified the transition points between the regimes of "localized" and "extended" behavior of the steady-state distribution of vectors emerging after multiplying an initial vector by an infinite random product of matrices.
Important characteristics of the system are the values α_0 and α_1 of the steady-state vector angle α in the pure case with zero transition rates, corresponding to the maximum eigenvalues of the two matrices involved. For very small transition rates, we have two singular power-law distributions, each representing a broadened delta-function around the steady-state values α_0, α_1. We assumed a scaling singularity, as was rigorously derived in reference [10], where an exact solution for small transition rates was obtained for nearly diagonal matrices. Here, we have assumed such singular behavior for the general case of matrices M_0 and M_1 that do not commute with each other.
Introducing ansatz (16), it is straightforward to identify the critical indices for the angular distributions. While in the related evolution model with a fluctuating fitness landscape (which randomly chooses one of two landscapes) the steady-state distributions of α, ρ_0 and ρ_1, are located in the interval [α_0, α_1], here the distributions are non-zero outside of this interval (see Figure 1), with singularities at α_0 and α_1.
Upon increasing the transition rate, we enter the phase where ρ_1 is smooth, without any singularity (see Figure 2). We have identified the transition point between the first ("localized") and second ("extended") phases. With a further increase of the switching probability, we obtain a series of new peaks, one of them near the point α_0 (Figure 2).
What we have observed, besides the singularities at the points α_0, α_1, is an infinite sequence of singularities originating from the two original ones. We refer to these emergent singularities as reflected singularities. Their existence is a general property of functional equations of the type determining the steady-state distributions in the present problem. We have identified the locations of the new singularities, as well as their scaling properties.
We note that the existence of the reflected singularities is a general property of the functional equation, which should be valid in the general case (even when the broadened delta-function is not described by a scaling singularity at small q). We have also performed numerical calculations for the case when the second matrix is singular. For the chosen singular matrix, there is no transition to the second phase.
We have considered only the case of symmetric transition probabilities. For the asymmetric case, the reflected singularities should again exist, only the phase structure will become more complicated. We can also apply our functional-equation method to solve exactly the general case of a product of diagonal matrices.
Here the distribution could also depend on the starting choice of the matrix. We hope that the results reported in this paper will further stimulate advances in this interesting area of statistical physics, with implications for various fields from condensed-matter physics to biological evolution.
Let us briefly describe our simulation code. We first build the functions f and g and identify the fixed points f_0(α_0) = α_0, f_1(α_1) = α_1. We consider 500 discrete values of α between α_0 and α_1, and 20,000 samples of 1,000 iterations each. During each iteration, we randomly change the type of the matrix, depending on the current type.
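The procedure described above can be sketched as follows. The matrices here are hypothetical stand-ins, and the sample and iteration counts are reduced from the paper's 20,000 × 1,000 for a quick run:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the paper's matrices.
M = [np.array([[1.5, 0.3], [0.2, 0.7]]),
     np.array([[0.8, 0.1], [0.4, 1.6]])]
q = 0.1                        # symmetric switching probability q01 = q10 = q

# reduced from the paper's 20,000 samples x 1,000 iterations for speed
n_samples, n_iter, n_bins = 2_000, 200, 500
counts = np.zeros((2, n_bins))  # histograms of alpha for rho_0 and rho_1

for _ in range(n_samples):
    i = 1                      # start with M_1
    z = np.array([1.0, 0.0])
    for _ in range(n_iter):
        if rng.random() < q:   # switch the matrix type with probability q
            i = 1 - i
        z = M[i] @ z
        z /= np.linalg.norm(z)  # keep only the direction (the angle alpha)
    alpha = np.arctan2(z[1], z[0]) % np.pi
    counts[i, int(alpha / np.pi * n_bins) % n_bins] += 1

rho0, rho1 = counts[0], counts[1]  # unnormalized steady-state histograms
print(rho0.sum(), rho1.sum())
```

Normalizing each histogram by its total count and the bin width yields estimates of the steady-state densities ρ_0(α), ρ_1(α).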
We are grateful to I. V. Gornyi for interesting discussions and critical remarks on the manuscript, as well as for drawing our attention to reference [20].

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS
The work was planned by DS. RP and DS performed the calculations and wrote the article.

FUNDING
The work was carried out with the financial support of the Russian Science Foundation, grant 19-11-00008, at the Russian Transport University.

ACKNOWLEDGMENTS
We thank Igor Gornyi for the discussions.