Small deviation properties concerning arrays of non-homogeneous Markov information sources

In this study, we first define the logarithmic likelihood ratio as a measure between arbitrary generalized information sources and non-homogeneous Markov sources, and then establish small deviation theorems, strong limit theorems, and the asymptotic equipartition property for a class of generalized information sources. The present results generalize some existing ones.

However, most of the aforementioned results do not consider arrays of information sources, which play a significant role in information science. In recent work, [30] explored conditions for the almost sure convergence of double arrays of random variables and the corresponding SLLNs, and [31] established several kinds of convergence for rowwise negatively correlated arrays of random variables under certain conditions; more related studies can be found in their references. The limit behavior, the AEP, and the small deviation properties of arrays of information sources therefore motivated this study. In line with [3], [30], and [31], this paper first introduces the logarithmic likelihood ratio as a measure between arbitrary generalized information sources and non-homogeneous Markov sources, and then establishes small deviation theorems and strong limit theorems for a class of generalized information sources. The results generalize some existing ones.
The rest of the paper is arranged as follows. Section 2 (Preliminaries) introduces notation and establishes definitions and lemmas. Section 3 states the main results, presenting the strong limit behavior and strong deviation properties of non-homogeneous Markov sources.

Preliminaries
In this section, we first introduce some notation and then define the generalized divergence rate distance of an arbitrary measure $\mu$ with respect to the Markov measure $\tilde{\mu}$. Throughout this section, the probability space $(\Omega, \mathcal{F}, \mu)$ explored in our main results is fixed. Let $\xi = \{\xi_i^{(n)}, v_n \le i \le u_n\}_{n \in \mathbb{N}^+}$ be a generalized information source, where $\{\xi_i^{(n)}, v_n \le i \le u_n\}$ is an array of nonnegative integer-valued random variables taking values in the $(u_n - v_n + 1)$-fold Cartesian product $\mathcal{X}_{v_n} \times \mathcal{X}_{v_n+1} \times \cdots \times \mathcal{X}_{u_n}$ of arbitrary discrete source alphabets $\mathcal{X}_n$ ($\mathcal{X}_n = \{s_{n1}, s_{n2}, \ldots\}$, $n \in \mathbb{N}^+$), with the joint distribution
$$ p^{(n)}(x_{v_n}, \ldots, x_{u_n}) = \mu\big(\xi_{v_n}^{(n)} = x_{v_n}, \ldots, \xi_{u_n}^{(n)} = x_{u_n}\big) > 0, \quad x_i \in \mathcal{X}_i,\ v_n \le i \le u_n,\ n \in \mathbb{N}^+, $$
where $\{(v_n, u_n): v_n, u_n \in \mathbb{Z}, -\infty < v_n < u_n < +\infty\}$, $\mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$, and $\mathbb{N}^+ = \{1, 2, \ldots\}$. For an arbitrary information source $\xi$, define
$$ f_n(\omega) = -\frac{1}{u_n - v_n + 1} \log p^{(n)}\big(\xi_{v_n}^{(n)}, \ldots, \xi_{u_n}^{(n)}\big), $$
which is called the entropy density of $\xi_{v_n}^{(n)}, \ldots, \xi_{u_n}^{(n)}$. Suppose that $\tilde{\mu}$ is a non-homogeneous Markov information source; then there exist an initial distribution $q^{(n)}(1), q^{(n)}(2), \ldots, q^{(n)}(n)$ with $q^{(n)}(i) > 0$, $v_n \le i \le u_n$, and transition probability densities $p_i^{(n)}(x, y)$, $v_n \le i \le u_n$, called the $i$th-step transition probability densities, such that
$$ \tilde{\mu}\big(\xi_{v_n}^{(n)} = x_{v_n}, \ldots, \xi_{u_n}^{(n)} = x_{u_n}\big) = q^{(n)}(x_{v_n}) \prod_{i = v_n + 1}^{u_n} p_i^{(n)}(x_{i-1}, x_i). $$
Define the logarithmic likelihood ratio
$$ h_n(\omega) = \frac{1}{u_n - v_n + 1} \log \frac{\mu\big(\xi_{v_n}^{(n)}, \ldots, \xi_{u_n}^{(n)}\big)}{\tilde{\mu}\big(\xi_{v_n}^{(n)}, \ldots, \xi_{u_n}^{(n)}\big)} \qquad \text{and} \qquad H(\mu \| \tilde{\mu}) = \limsup_{n \to \infty} h_n(\omega). $$
Here, $H(\mu \| \tilde{\mu})$ is called the generalized divergence rate distance of the arbitrary measure $\mu$ with respect to the Markov measure $\tilde{\mu}$.
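Since these definitions are purely formal, a small numerical sketch may help fix ideas. The following Python snippet (with a hypothetical two-symbol alphabet, initial distribution, and transition matrices chosen only for illustration, not taken from the paper) computes the entropy density $f_n(\omega)$ and the normalized log-likelihood ratio $h_n(\omega)$ along one sampled path:

```python
# Minimal numerical sketch (illustrative only): entropy density f_n and
# log-likelihood ratio h_n for a toy two-symbol source. The alphabet,
# path length, and transition matrices below are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # number of steps, playing the role of u_n - v_n

# Hypothetical non-homogeneous Markov measure mu~: the i-th step
# transition matrix drifts slowly with i.
def p_tilde(i):
    eps = 0.1 * np.sin(i / 500.0)
    return np.array([[0.7 + eps, 0.3 - eps],
                     [0.4 - eps, 0.6 + eps]])

# Sample a path under mu~ (initial distribution q = (0.5, 0.5)).
x = np.empty(N + 1, dtype=int)
x[0] = rng.choice(2, p=[0.5, 0.5])
for i in range(1, N + 1):
    x[i] = rng.choice(2, p=p_tilde(i)[x[i - 1]])

# A second (homogeneous) Markov measure mu playing the role of the
# "arbitrary" source being compared against mu~.
P_mu = np.array([[0.65, 0.35], [0.45, 0.55]])

log_mu = np.log(0.5) + sum(np.log(P_mu[x[i - 1], x[i]]) for i in range(1, N + 1))
log_mu_tilde = np.log(0.5) + sum(np.log(p_tilde(i)[x[i - 1], x[i]])
                                 for i in range(1, N + 1))

f_n = -log_mu / (N + 1)                  # entropy density at this sample point
h_n = (log_mu - log_mu_tilde) / (N + 1)  # normalized log-likelihood ratio
print(f"f_n = {f_n:.4f}, h_n = {h_n:.4f}")
```

Along a long path, $h_n$ behaves like a per-symbol divergence between the two measures, which is why its superior limit serves as a distance-like quantity.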
Throughout, log denotes the logarithm operator, and we adopt the convention $0 \log 0 = 0$, which is justified since $x \log x \to 0$ as $x \to 0^+$. The proof of Lemma 2.1 can be found in [27] and is omitted here.
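As a quick sanity check on this convention, one can watch $x \log x$ vanish numerically as $x \downarrow 0$ (a throwaway snippet, not part of the paper's development):

```python
# x * log(x) -> 0 as x -> 0+, justifying the convention 0 log 0 = 0.
import math

for x in (1e-1, 1e-3, 1e-6, 1e-12):
    print(f"x = {x:.0e}, x*log(x) = {x * math.log(x):+.3e}")
```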

Main results and proofs
In this section, we first derive the strong deviation theorem (Theorem 3.1) for a sequence of measurable functions defined on $\mathbb{N}^2$ under certain conditions. Then, by considering the special case $c = 0$ in Theorem 3.1, we derive the strong law of large numbers for strongly ergodic information sources (Theorem 3.2). Finally, we obtain the small deviation behavior (Theorem 3.3) and the asymptotic property of the entropy density $f_n(\omega)$ (Corollary 3.1).
Theorem 3.1: Let $f_n(\omega)$ and $H(\mu \| \tilde{\mu})$ be as given in Definition 2.1, let $f_i^{(n)}(x, y)$ be a sequence of measurable functions defined on $\mathbb{N}^2$, and let
$$ D(c) = \{\omega : H(\mu \| \tilde{\mu}) \le c\}. \quad (3.1) $$
Suppose that there exists $\alpha > 0$ such that condition (3.2) holds for each $v_n \le i \le u_n$ and the moment condition (3.3) holds for arbitrary $v_n \le i \le u_n$, where $t \in (0, \alpha)$. Then, in the case $0 \le c \le t^2 H_t(\alpha, \tau)$,
$$ \limsup_{n \to \infty} \frac{1}{u_n - v_n + 1} \sum_{i = v_n}^{u_n} \Big[ f_i^{(n)}\big(\xi_{i-1}^{(n)}, \xi_i^{(n)}\big) - E_{\tilde{\mu}}\Big( f_i^{(n)}\big(\xi_{i-1}^{(n)}, \xi_i^{(n)}\big) \,\Big|\, \xi_{i-1}^{(n)} \Big) \Big] \le 2\sqrt{c\, H_t(\alpha, \tau)} \quad \mu\text{-a.s. on } D(c). $$
Note: In Eq. 3.1 of Theorem 3.1, $D(c)$ is the set on which the generalized divergence rate distance of the arbitrary measure $\mu$ with respect to the Markov measure $\tilde{\mu}$ is at most $c$; it measures the difference between arbitrary generalized information sources and non-homogeneous Markov sources. In the remainder, we omit $\omega$ from the notation for simplicity. Equation 3.2 states the restriction on the array, and Eq. 3.3 gives the moment condition on the conditional mathematical expectation of the array.
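To preview where the threshold $0 \le c \le t^2 H_t(\alpha, \tau)$ and the bound $2\sqrt{c\, H_t(\alpha, \tau)}$ come from in the proof below: the function $g(\lambda) = \lambda H_t(\alpha, \tau) + c/\lambda$ is minimized at $\lambda^* = \sqrt{c / H_t(\alpha, \tau)}$, where it takes the value $2\sqrt{c\, H_t(\alpha, \tau)}$, and $\lambda^* \le t$ holds precisely when $c \le t^2 H_t(\alpha, \tau)$. A quick numerical confirmation (with placeholder values of $H_t(\alpha, \tau)$, $c$, and $t$ chosen only for illustration):

```python
# Illustrative check of the optimization behind Theorem 3.1's bound.
import numpy as np

H, c, t = 2.0, 0.5, 1.0           # placeholders; here c <= t**2 * H
lam = np.linspace(1e-4, t, 100_000)
g = lam * H + c / lam             # g(lambda) = lambda*H + c/lambda

lam_star = np.sqrt(c / H)         # analytic minimizer of g on (0, t]
print(f"numeric  min g = {g.min():.6f} at lambda  = {lam[g.argmin()]:.4f}")
print(f"analytic min g = {2 * np.sqrt(c * H):.6f} at lambda* = {lam_star:.4f}")
assert lam_star <= t              # equivalent to c <= t**2 * H
```

Both values agree (2.0 at $\lambda = 0.5$ for these placeholders), and the feasibility of $\lambda^*$ within $(0, t]$ is exactly the stated restriction on $c$.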
Proof: Let $\lambda$ be a constant to be chosen later, and note that the maximum of $x^2 e^{-hx}$ over $x \ge 0$ is $4e^{-2}/h^2$ for $h > 0$. Hereafter, we restrict the analysis to $0 < \lambda < t$ and $0 < t < \alpha$. According to the inequalities $2 - 1/x \le 1 + \log x \le x$ ($x > 0$) and $e^x - 1 \le x + \frac{x^2}{2} e^{|x|}$, the properties of the superior limit, and Eq. 3.4, we obtain Eq. 3.13. Considering $0 < \lambda < t < \alpha$ and $0 \le c \le t^2 H_t(\alpha, \tau)$, with Eqs 3.2 and 3.13 we have
$$ \limsup_{n \to \infty} \frac{1}{u_n - v_n + 1} \sum_{i = v_n}^{u_n} \Big[ f_i^{(n)}\big(\xi_{i-1}^{(n)}, \xi_i^{(n)}\big) - E_{\tilde{\mu}}\Big( f_i^{(n)}\big(\xi_{i-1}^{(n)}, \xi_i^{(n)}\big) \,\Big|\, \xi_{i-1}^{(n)} \Big) \Big] \le \lambda H_t(\alpha, \tau) + \frac{c}{\lambda} \quad \mu\text{-a.s. on } D(c). $$
Defining the function $g(\lambda) = \lambda H_t(\alpha, \tau) + c/\lambda$ on $(0, t]$ and using $0 \le c \le t^2 H_t(\alpha, \tau)$, we can see that $g$ attains its minimum $2\sqrt{c\, H_t(\alpha, \tau)}$ at $\lambda = \sqrt{c / H_t(\alpha, \tau)} \le t$. Considering $0 \le c \le t^2 H_t(\alpha, \tau)$, it follows that
$$ \limsup_{n \to \infty} \frac{1}{u_n - v_n + 1} \sum_{i = v_n}^{u_n} \Big[ f_i^{(n)} - E_{\tilde{\mu}}\big( f_i^{(n)} \mid \xi_{i-1}^{(n)} \big) \Big] \le 2\sqrt{c\, H_t(\alpha, \tau)} \quad \mu\text{-a.s. on } D(c). $$
In particular, when $c = 0$,
$$ \limsup_{n \to \infty} \frac{1}{u_n - v_n + 1} \sum_{i = v_n}^{u_n} \Big[ f_i^{(n)} - E_{\tilde{\mu}}\big( f_i^{(n)} \mid \xi_{i-1}^{(n)} \big) \Big] \le 0 \quad \mu\text{-a.s. on } D(0). $$
The proof is completed. □

In the following content, we assume that $P$ is a strongly ergodic matrix and that the vector $\pi$ is the unique invariant measure determined by $P$. Since $e^{\alpha |x|}$ is convex, according to Jensen's inequality for conditional expectations we have $e^{\alpha |E_{\tilde{\mu}}(f \mid \mathcal{G})|} \le E_{\tilde{\mu}}\big( e^{\alpha |f|} \mid \mathcal{G} \big)$, and hence, taking expectations, we arrive at
$$ E_{\tilde{\mu}}\, e^{\alpha \left| E_{\tilde{\mu}}(f \mid \mathcal{G}) \right|} \le E_{\tilde{\mu}}\, e^{\alpha |f|}. $$
It is easy to verify that $g(x) = x^2 e^{\alpha |x|}$ is a convex function. With Eq. 3.15, we have