
ORIGINAL RESEARCH article

Front. Phys., 17 March 2023
Sec. Statistical and Computational Physics
Volume 11 - 2023 | https://doi.org/10.3389/fphy.2023.1156610

Small deviation properties concerning arrays of non-homogeneous Markov information sources

Ximei Qin1, Zhaobiao Rui1*, Shu Chen1,2, Weicai Peng1
  • 1School of Mathematics and Big Data, Chaohu University, Hefei, China
  • 2School of Microelectronics and Data Science, Anhui University of Technology, Ma’anshan, China

In this study, we first define the logarithmic likelihood ratio as a measure between arbitrary generalized information sources and non-homogeneous Markov sources and then establish small deviation theorems, strong limit theorems, and the asymptotic equipartition property for a class of generalized information sources. The present outcomes generalize some existing results.

1 Introduction

In information theory, the asymptotic equipartition property (abbreviated as AEP) is a property of random sources and the basis of the typical-set concept used in data compression theory. The AEP asserts the convergence of certain functionals of random processes in some mode of convergence, such as L1 convergence, convergence in probability, or almost sure convergence. In some circumstances, it is also known as the Shannon–McMillan–Breiman theorem or the entropy ergodic theorem. Small deviation properties are strong limit theorems (i.e., theorems in the sense of almost everywhere convergence) expressed by inequalities for information sources, and they usually generalize the strong law of large numbers (SLLN). In this paper, following the research in [1–3], we mainly consider the strong limit properties and small deviation properties of the generalized information entropy of a type of array of non-homogeneous Markov chains, which is an important issue in the study of limit theory.

In 1948, Shannon first explored the AEP of i.i.d. sequences (i.e., independent and identically distributed sequences) and established the entropy ergodic theorem in the sense of convergence in probability (see [4]). Then, in the 1950s, McMillan and Breiman established the AEP for certain types of information sources in the sense of L1 and almost everywhere (abbreviated as a.e.) convergence, respectively (see [5, 6]). In 1961, Chung relaxed the conditions and found that the AEP still holds for random sources equipped with a countable state space (see [7]). From the 1970s to the early stages of the 21st century, the AEP for various general stochastic processes was investigated in many studies, such as [8–13]. Recently, many scholars, such as Yang (e.g., [14–17]), Shi (e.g., [3, 18–21]), Huang [22, 23], and Peng [24–26], by generalizing the methods proposed by [27], [11], and Wang [2, 28, 29], studied the AEP and the limit properties (including the AEP and SLLNs) of several types of Markov chains (homogeneous and non-homogeneous; with finite and infinite state spaces; indexed by the set of positive integers and by trees).
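The classical i.i.d. case of the AEP can be illustrated numerically: for a Bernoulli(p) source, the entropy density −(1/n) log p(X₁, …, Xₙ) concentrates around the Shannon entropy H(p) as n grows. The following minimal sketch (not part of the original analysis; the source parameter and sample size are chosen arbitrarily) checks this by simulation:

```python
import math
import random

def entropy(p):
    """Shannon entropy (in nats) of a Bernoulli(p) source."""
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def entropy_density(sample, p):
    """f_n = -(1/n) log p(x_1, ..., x_n) for an i.i.d. Bernoulli(p) source."""
    n = len(sample)
    log_prob = sum(math.log(p if x == 1 else 1 - p) for x in sample)
    return -log_prob / n

random.seed(0)
p = 0.3
sample = [1 if random.random() < p else 0 for _ in range(200_000)]
f_n = entropy_density(sample, p)
# By the AEP, f_n should be close to H(p) for large n.
assert abs(f_n - entropy(p)) < 0.01
```

The assertion passes because, by the strong law of large numbers, the sample average of −log p(Xᵢ) converges to its mean H(p); this is exactly the i.i.d. special case of the entropy ergodic theorem discussed above.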

However, most of the aforementioned results do not consider arrays of information sources, which play significant roles in information science. In recent works, [30] explored conditions for the almost sure convergence of double arrays of random variables together with the corresponding SLLNs, and [31] established several kinds of convergence for arrays of rowwise negatively dependent random variables under certain conditions. More related studies can be found in their references. Therefore, the limit behavior, the AEP, and the small deviation properties of arrays of information sources aroused our interest. This paper, in line with [3], [30], and [31], first introduces the logarithmic likelihood ratio as a measure between arbitrary generalized information sources and non-homogeneous Markov sources and then establishes small deviation theorems and strong limit theorems for a class of generalized information sources. The outcomes generalize some existing results.

The rest of the content is arranged as follows: Section 2, the preliminaries, gives some notations and establishes some definitions and lemmas. Section 3 states the main results and presents the strong limit behaviors and strong deviation properties of non-homogeneous Markov sources.

2 Preliminaries

In this section, we first introduce some notation and then define the generalized divergence rate distance of an arbitrary measure μ with respect to the Markov measure μ̃. Throughout the rest of this section, the probability space (Ω, F, μ) underlying our main results is fixed. Let ξ = {ξ_i^{(n)}, v_n ≤ i ≤ u_n}_{n∈N₊} be a general information source, where {ξ_i^{(n)}, v_n ≤ i ≤ u_n} is an array of non-negative integer-valued random variables over the (u_n − v_n + 1)-fold Cartesian product X_{v_n} × X_{v_n+1} × ⋯ × X_{u_n} of an arbitrary discrete source alphabet X (X = {s_{n1}, s_{n2}, …}, n ∈ N₊) with the distribution

\[
p^{(n)}\big(x_{v_n}^{(n)},\ldots,x_{u_n}^{(n)}\big)
=\mu\big(\xi_{v_n}^{(n)}=x_{v_n}^{(n)},\,\xi_{v_n+1}^{(n)}=x_{v_n+1}^{(n)},\ldots,\xi_{u_n}^{(n)}=x_{u_n}^{(n)}\big)>0,
\]

where x_i^{(n)} ∈ X, v_n ≤ i ≤ u_n, n ∈ N₊, and {(v_n, u_n) : v_n, u_n ∈ Z, −∞ < v_n < u_n < +∞}, Z = {…, −2, −1, 0, 1, 2, …}, N₊ = {1, 2, …}.

For an arbitrary information source ξ = {ξ^{(n)} = (ξ_{v_n}^{(n)}, …, ξ_{u_n}^{(n)})}_{n∈N₊}, denote p^{(n)}(x_{v_n}^{(n)}, …, x_{u_n}^{(n)}) = μ(ξ_{v_n}^{(n)} = x_{v_n}^{(n)}, …, ξ_{u_n}^{(n)} = x_{u_n}^{(n)}). Let

\[
f_n(\omega)=-\frac{1}{n}\log p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big),
\]

which is called the entropy density of p^{(n)}(ξ_{v_n}^{(n)}, …, ξ_{u_n}^{(n)}).

Suppose that μ̃ is a non-homogeneous Markov information source; then there exist an initial distribution (q^{(n)}(1), q^{(n)}(2), …) with q^{(n)}(i) > 0 and transition probabilities p_i^{(n)}(x, y), v_n ≤ i ≤ u_n, where p_i^{(n)}(x, y) is called the ith step transition probability, such that

\[
q^{(n)}\big(x_{v_n}^{(n)},\ldots,x_{u_n}^{(n)}\big)
=q^{(n)}\big(x_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}p_i^{(n)}\big(x_{i-1}^{(n)},x_i^{(n)}\big),
\qquad x_i^{(n)}\in X,\ v_n\le i\le u_n,\ n\in N_+,
\]

and

\[
-\frac{1}{n}\log q^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)
=-\frac{1}{n}\Big[\log q^{(n)}\big(\xi_{v_n}^{(n)}\big)
+\sum_{i=v_n+1}^{u_n}\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\Big].
\]
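As a quick sanity check of this factorization and its logarithmic decomposition, consider a toy two-state non-homogeneous chain; the step-dependent transition probabilities below are invented purely for illustration:

```python
import math

def p_i(i, x, y):
    # Hypothetical step-dependent transition matrix; each row sums to 1 for every i.
    a = 0.5 + 0.4 * math.sin(i) ** 2
    return [[a, 1.0 - a], [1.0 - a, a]][x][y]

q0 = [0.6, 0.4]                 # initial distribution q^{(n)}(.)
path = [0, 1, 1, 0, 1]          # a sample path x_{v_n}, ..., x_{u_n}

# Path probability via the Markov factorization q^{(n)}(x_{v_n}) * prod p_i(x_{i-1}, x_i)
prob = q0[path[0]]
for i in range(1, len(path)):
    prob *= p_i(i, path[i - 1], path[i])

# The same quantity via the logarithmic decomposition
log_prob = math.log(q0[path[0]]) + sum(
    math.log(p_i(i, path[i - 1], path[i])) for i in range(1, len(path)))

assert abs(math.log(prob) - log_prob) < 1e-12
```

Both computations agree, which is exactly the decomposition of the log-probability displayed above.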

Definition 2.1. Define

\[
H(\mu\,\|\,\tilde{\mu})\triangleq
-\liminf_{n\to\infty}\frac{1}{n}\log
\frac{q^{(n)}\big(\xi_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}
{p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)}.
\]

Here, H(μ‖μ̃) is called the generalized divergence rate distance of the arbitrary measure μ with respect to the Markov measure μ̃.

We use log to denote the logarithm operator and adopt the convention 0 log 0 = 0, which is justified since x log x → 0 as x → 0⁺.

Lemma 2.1. [27] Let {ξ_n}_{n∈N₊} be a sequence of non-negative random variables with E[ξ_n] ≤ 1. Then

\[
\limsup_{n\to\infty}\frac{1}{n}\log\xi_n\le 0\quad a.s.
\]

The proof of Lemma 2.1 can be found in [27] and is omitted here.
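For completeness, we recall the standard argument, which combines Markov's inequality with the Borel–Cantelli lemma: for any ε > 0,

\[
\mu\big(\xi_n\ge e^{n\varepsilon}\big)\le e^{-n\varepsilon}E[\xi_n]\le e^{-n\varepsilon},
\qquad \sum_{n=1}^{\infty}e^{-n\varepsilon}<\infty,
\]

so, almost surely, ξ_n < e^{nε} for all sufficiently large n; hence lim sup_n (1/n) log ξ_n ≤ ε a.s., and letting ε ↓ 0 along a countable sequence yields the claim.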

3 Main results and proofs

In this section, we first derive the strong deviation theorem (Theorem 3.1) for a sequence of measurable functions defined on N² under certain conditions. Then, by considering the special case c = 0 in Theorem 3.1, we derive the strong law of large numbers for strongly ergodic information sources (Theorem 3.2). Finally, we obtain the small deviation behavior (Theorem 3.3) and the asymptotic property of the entropy density f_n(ω) (Corollary 3.1).

Theorem 3.1. Let f_n(ω) and H(μ‖μ̃) be as given in Definition 2.1, let f_i^{(n)}(x, y) be a sequence of measurable functions defined on N², and set ξ̃_{(i−1,i)}^{(n)} = f_i^{(n)}(ξ_{i−1}^{(n)}, ξ_i^{(n)}). Let c ≥ 0 be a real-valued constant and

\[
D(\omega)=\big\{\omega:\ H(\mu\,\|\,\tilde{\mu})\le c\big\}.\tag{3.1}
\]

Suppose that there exists α > 0 such that for each v_n ≤ i ≤ u_n,

\[
E_{\tilde{\mu}}\,e^{\alpha\big|f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\big|}<\infty,\tag{3.2}
\]

and for arbitrary v_n ≤ i ≤ u_n,

\[
b_\alpha=E_{\tilde{\mu}}\Big[\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\big)^2
e^{\alpha\big|f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\big|}\,\Big|\,\xi_{i-1}^{(n)}=k\Big]\le\tau.\tag{3.3}
\]

Let

\[
H_t(\alpha,\tau)=2\tau e^{-2}(\alpha-t)^{-2},\tag{3.4}
\]

where t ∈ (0, α). Then, in the case of 0 ≤ c ≤ t²H_t(α, τ), it can be found that

\[
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\le 2\sqrt{cH_t(\alpha,\tau)}\quad \text{a.s. on } D(\omega)\tag{3.5}
\]

and

\[
\liminf_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\ge-2\sqrt{cH_t(\alpha,\tau)}\quad \text{a.s. on } D(\omega).\tag{3.6}
\]

Note: In Eq. 3.1 of Theorem 3.1, D(ω) specifies the range of the generalized divergence rate distance of the arbitrary measure μ with respect to the Markov measure μ̃; it measures the difference between arbitrary generalized information sources and non-homogeneous Markov sources. In the rest of the content, we often omit ω for notational simplicity. Equation 3.2 states that the array f_i^{(n)}(ξ_{i−1}^{(n)}, ξ_i^{(n)}) is integrable in the exponential sense, and Eq. 3.3 gives the moment condition on the conditional mathematical expectation of the array.

Proof. Let λ be a real constant and

\[
g^{(n)}\big(x_{v_n}^{(n)},\ldots,x_{u_n}^{(n)}\big)
=q^{(n)}\big(x_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}
\frac{e^{\lambda f_i^{(n)}\big(x_{i-1}^{(n)},x_i^{(n)}\big)}\,p_i^{(n)}\big(x_{i-1}^{(n)},x_i^{(n)}\big)}
{E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}\,\big|\,\xi_{i-1}^{(n)}=x_{i-1}^{(n)}\big)}.
\]

Let

\[
\Lambda_n(\lambda,\omega)
=\frac{g^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)}{p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)},\tag{3.7}
\]

then, summing out the coordinates one at a time,

\[
E\big[\Lambda_n(\lambda,\omega)\big]
=\sum_{x_{v_n}^{(n)}}\cdots\sum_{x_{u_n}^{(n)}}
q^{(n)}\big(x_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}
\frac{e^{\lambda f_i^{(n)}\big(x_{i-1}^{(n)},x_i^{(n)}\big)}\,p_i^{(n)}\big(x_{i-1}^{(n)},x_i^{(n)}\big)}
{E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}\,\big|\,\xi_{i-1}^{(n)}=x_{i-1}^{(n)}\big)}
=\sum_{x_{v_n}^{(n)}}q^{(n)}\big(x_{v_n}^{(n)}\big)=1,
\]

since the innermost sum satisfies ∑_{x_{u_n}^{(n)}} e^{λf_{u_n}^{(n)}(x_{u_n−1}^{(n)}, x_{u_n}^{(n)})} p_{u_n}^{(n)}(x_{u_n−1}^{(n)}, x_{u_n}^{(n)}) = E_μ̃(e^{λf_{u_n}^{(n)}(ξ_{u_n−1}^{(n)}, ξ_{u_n}^{(n)})} | ξ_{u_n−1}^{(n)} = x_{u_n−1}^{(n)}), which cancels the corresponding denominator, and the coordinates x_{u_n−1}^{(n)}, …, x_{v_n+1}^{(n)} are eliminated in the same way.

Combining Lemma 2.1, we can obtain

\[
\limsup_{n\to\infty}\frac{1}{n}\log\Lambda_n(\lambda,\omega)\le 0\quad a.s.\tag{3.8}
\]

With Eqs 3.1, 3.7, we have

\[
\frac{1}{n}\log\Lambda_n(\lambda,\omega)
=\frac{1}{n}\sum_{i=v_n+1}^{u_n}\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-\frac{1}{n}\sum_{i=v_n+1}^{u_n}\log E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}\,\big|\,\xi_{i-1}^{(n)}\big)
+\frac{1}{n}\log\frac{q^{(n)}\big(\xi_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}{p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)}.\tag{3.9}
\]

With Eqs 3.8, 3.9, we have

\[
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-\log E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\le-\liminf_{n\to\infty}\frac{1}{n}\log\frac{q^{(n)}\big(\xi_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}{p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)}
=H(\mu\,\|\,\tilde{\mu}).\tag{3.10}
\]

Hence, with Eq. 3.10, we have

\[
\limsup_{n\to\infty}\frac{\lambda}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\le\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[\log E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}\,\big|\,\xi_{i-1}^{(n)}\big)
-E_{\tilde{\mu}}\big(\lambda f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
+H(\mu\,\|\,\tilde{\mu}).\tag{3.11}
\]

Note that the maximum of x²e^{−hx} on [0, ∞) is 4e^{−2}h^{−2} (h > 0), attained at x = 2/h. Hereafter, we restrict the analysis to 0 < λ < t and 0 < t < α. According to the inequalities 2 − 1/x ≤ 1 + log x ≤ x and e^x − 1 ≤ x + (x²/2)e^{|x|} (x > 0), the properties of the superior limit, and Eq. 3.4, we have

\[
\begin{aligned}
&\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[\log E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})}\,\big|\,\xi_{i-1}^{(n)}\big)
-E_{\tilde{\mu}}\big(\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})\,\big|\,\xi_{i-1}^{(n)}\big)\Big]\\
&\le\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})}\,\big|\,\xi_{i-1}^{(n)}\big)-1
-E_{\tilde{\mu}}\big(\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})\,\big|\,\xi_{i-1}^{(n)}\big)\Big]\\
&=\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\big(e^{\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})}-1-\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})\,\big|\,\xi_{i-1}^{(n)}\big)\\
&\le\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big(\frac{\lambda^2}{2}\big(f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})\big)^2
e^{|\lambda f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})|}\,\Big|\,\xi_{i-1}^{(n)}\Big)\\
&=\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}\frac{\lambda^2}{2}
E_{\tilde{\mu}}\Big(e^{\alpha|f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})|}
\big(f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})\big)^2
e^{(\lambda-\alpha)|f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})|}\,\Big|\,\xi_{i-1}^{(n)}\Big)\\
&\le\frac{\lambda^2}{2}\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big(4e^{-2}(\alpha-\lambda)^{-2}
e^{\alpha|f_i^{(n)}(\xi_{i-1}^{(n)},\xi_i^{(n)})|}\,\Big|\,\xi_{i-1}^{(n)}\Big)\\
&\le\lambda^2H_t(\alpha,\tau)\quad a.s.
\end{aligned}\tag{3.12}
\]

From Eqs 3.11, 3.12, we can obtain

\[
\limsup_{n\to\infty}\frac{\lambda}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\le\lambda^2H_t(\alpha,\tau)+H(\mu\,\|\,\tilde{\mu})\quad a.s.\tag{3.13}
\]

Considering 0 < λ < t < α and 0 ≤ c ≤ t²H_t(α, τ), with Eqs 3.1, 3.13, on D(ω) we have

\[
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\le\lambda H_t(\alpha,\tau)+\frac{H(\mu\,\|\,\tilde{\mu})}{\lambda}
\le\lambda H_t(\alpha,\tau)+\frac{c}{\lambda}.
\]

Defining the function g(λ) = λH_t(α, τ) + c/λ on (0, t) and using 0 ≤ c ≤ t²H_t(α, τ), so that the stationary point √(c/H_t(α, τ)) lies in (0, t], we can arrive at

\[
\inf_{\lambda\in(0,t)}g(\lambda)=g\Big(\sqrt{\tfrac{c}{H_t(\alpha,\tau)}}\Big)=2\sqrt{cH_t(\alpha,\tau)}.
\]
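This minimization can be sanity-checked numerically; the constants below are arbitrary illustrative values satisfying 0 ≤ c ≤ t²H (here H stands for H_t(α, τ)):

```python
import math

def g(lam, H, c):
    # g(lambda) = lambda * H_t(alpha, tau) + c / lambda, the bound minimized in the proof
    return lam * H + c / lam

H, c, t = 2.0, 0.5, 0.9            # illustrative values with c <= t^2 * H
lam_star = math.sqrt(c / H)        # stationary point of g
grid = [g(1e-6 + k * (t - 2e-6) / 10_000, H, c) for k in range(10_001)]
# The grid minimum should match the closed form 2 * sqrt(c * H) up to grid resolution.
assert abs(min(grid) - 2 * math.sqrt(c * H)) < 1e-3
assert lam_star <= t               # guaranteed by c <= t^2 * H
```

The grid minimum matches the closed form g(√(c/H)) = 2√(cH), and the stationary point stays inside (0, t], which is exactly why the restriction c ≤ t²H_t(α, τ) is imposed.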

Considering 0 ≤ c ≤ t²H_t(α, τ), it can be found that

\[
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\le 2\sqrt{cH_t(\alpha,\tau)}\quad a.s.
\]

Similarly, supposing −α < −t < λ < 0, we have

\[
\liminf_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]
\ge-2\sqrt{cH_t(\alpha,\tau)}\quad a.s.
\]

In particular, when c = 0,

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]=0\quad a.s.
\]

The proof is completed. □

In the following content, we assume that P = (P(k, j)) is a strongly ergodic transition matrix and that the vector π = (π_1, π_2, …) is the unique invariant probability distribution determined by P, i.e., πP = π.
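Strong ergodicity means that the h-step transition probabilities converge, uniformly in the starting state, to the invariant distribution π. A minimal numerical sketch with a hypothetical two-state matrix (illustrative only):

```python
# Numerical illustration of strong ergodicity: the rows of P^h approach pi.
P = [[0.9, 0.1], [0.2, 0.8]]

def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ph = P
for _ in range(49):                # compute P^50
    Ph = mat_mul(Ph, P)
pi = (2 / 3, 1 / 3)                # stationary distribution: pi = pi P for this P
total_variation = max(sum(abs(Ph[l][k] - pi[k]) for k in range(2))
                      for l in range(2))
assert total_variation < 1e-6
```

For this P, π = (2/3, 1/3) solves πP = π, and the worst-row total-variation distance of P⁵⁰ from π is already negligible, which is the uniform convergence the proofs below rely on.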

Theorem 3.2. Suppose that the conditions of Theorem 3.1 hold. If

\[
\sup_k\sum_j\big|\tilde{\xi}_{(k,j)}^{(n)}\big|\,p_i^{(n)}(k,j)<\infty,\tag{3.14}
\]

if for any positive integer k,

\[
C_\alpha(i)=E_{\tilde{\mu}}\Big[\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\big)^2
e^{\alpha\big|f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\big|}\,\Big|\,\xi_{i-1}^{(n)}=k\Big]\le\tau,\tag{3.15}
\]

and if for v_n ≤ i ≤ u_n,

\[
E_{\tilde{\mu}}\,e^{\alpha\big|f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\big|}<\infty,\tag{3.16}
\]

then

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
=\sum_k\pi_k\sum_j\tilde{\xi}_{(k,j)}^{(n)}\,P(k,j)\quad a.s.
\]

Proof. According to Theorem 3.1, under the condition c = 0, we obtain

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\big|\,\xi_{i-1}^{(n)}\big)\Big]=0\quad a.s.\tag{3.17}
\]

With Eqs 3.14, 3.16, for arbitrary k we have ∑_j |ξ̃_{(k,j)}^{(n)}| p_i^{(n)}(k, j) < ∞; moreover, for v_n ≤ i ≤ u_n, we have E_μ̃(|f_i^{(n)}(ξ_{i−1}^{(n)}, ξ_i^{(n)})| | ξ_{i−1}^{(n)}) < ∞ and

\[
\lim_{n\to\infty}\frac{1}{n}E_{\tilde{\mu}}\big(f_{i+1}^{(n)}\big(\xi_i^{(n)},\xi_{i+1}^{(n)}\big)\,\big|\,\xi_i^{(n)}\big)=0,\tag{3.18}
\]
\[
\lim_{n\to\infty}\frac{1}{n}E_{\tilde{\mu}}\big(f_{v_n+1}^{(n)}\big(\xi_{v_n}^{(n)},\xi_{v_n+1}^{(n)}\big)\,\big|\,\xi_{v_n}^{(n)}\big)=0.\tag{3.19}
\]

With Eqs 3.17–3.19, we can arrive at

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[f_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\Big]=0\quad a.s.\tag{3.20}
\]

Since e^{α|x|} is convex, according to Jensen's inequality for conditional expectations, we arrive at

\[
E_{\tilde{\mu}}\,e^{\alpha\big|E_{\tilde{\mu}}\big(f_{i+1}^{(n)}\big(\xi_i^{(n)},\xi_{i+1}^{(n)}\big)\,\big|\,\xi_i^{(n)}\big)\big|}
\le E_{\tilde{\mu}}\Big[E_{\tilde{\mu}}\Big(e^{\alpha\big|f_{i+1}^{(n)}\big(\xi_i^{(n)},\xi_{i+1}^{(n)}\big)\big|}\,\Big|\,\xi_i^{(n)}\Big)\Big]
=E_{\tilde{\mu}}\,e^{\alpha\big|f_{i+1}^{(n)}\big(\xi_i^{(n)},\xi_{i+1}^{(n)}\big)\big|}<\infty.
\]

It is easy to see that g(x) = x²e^{α|x|} is a convex function. With Eq. 3.15, we have

\[
\begin{aligned}
&\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big[\big(E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\big)^2
e^{\alpha\big|E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\big|}\,\Big|\,\xi_{i-1}^{(n)}\Big]
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big[g\big(E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\big)\,\Big|\,\xi_{i-1}^{(n)}\Big]\\
&\le\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big[E_{\tilde{\mu}}\big(g\big(\tilde{\xi}_{(i,i+1)}^{(n)}\big)\,\big|\,\xi_i^{(n)}\big)\,\Big|\,\xi_{i-1}^{(n)}\Big]
=\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big[g\big(\tilde{\xi}_{(i,i+1)}^{(n)}\big)\,\Big|\,\xi_{i-1}^{(n)}\Big]\\
&=\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big[\big(\tilde{\xi}_{(i,i+1)}^{(n)}\big)^2
e^{\alpha\big|\tilde{\xi}_{(i,i+1)}^{(n)}\big|}\,\Big|\,\xi_{i-1}^{(n)}\Big]\le\tau.
\end{aligned}
\]

Hence, the array E_μ̃(ξ̃_{(i,i+1)}^{(n)} | ξ_i^{(n)}) satisfies the conditions of Eqs 3.2, 3.3, and applying Theorem 3.1 with c = 0 again gives

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\Big(E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\,\Big|\,\xi_{i-1}^{(n)}\Big)\Big]=0\quad a.s.
\]

Then, by the Markov property of μ̃ and the tower property of conditional expectation,

\[
E_{\tilde{\mu}}\Big(E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\,\Big|\,\xi_{i-1}^{(n)}\Big)
=E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_{i-1}^{(n)}\big).
\]

Therefore,

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_{i-1}^{(n)}\big)\Big]=0\quad a.s.
\]

With Eqs 3.18, 3.19, we can find

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[E_{\tilde{\mu}}\big(\tilde{\xi}_{(i,i+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)
-E_{\tilde{\mu}}\big(\tilde{\xi}_{(i+1,i+2)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\Big]=0\quad a.s.\tag{3.21}
\]

With Eqs 3.20, 3.21, we have

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[\tilde{\xi}_{(i-1,i)}^{(n)}
-E_{\tilde{\mu}}\big(\tilde{\xi}_{(i+1,i+2)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\Big]=0\quad a.s.
\]

For any positive integer h, proceeding by induction, we arrive at

\[
\lim_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[\tilde{\xi}_{(i-1,i)}^{(n)}
-E_{\tilde{\mu}}\big(\tilde{\xi}_{(i+h,i+h+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)\Big]=0\quad a.s.
\]

With the strong ergodicity of P and the invariance of π, writing P^{(h)}(l, k) for the h-step transition probability determined by P, we can find

\[
\begin{aligned}
&\bigg|\frac{1}{n}\sum_{i=v_n+1}^{u_n}E_{\tilde{\mu}}\big(\tilde{\xi}_{(i+h,i+h+1)}^{(n)}\,\big|\,\xi_i^{(n)}\big)
-\sum_k\pi_k\sum_j\tilde{\xi}_{(k,j)}^{(n)}P(k,j)\bigg|\\
&=\bigg|\frac{1}{n}\sum_{i=v_n+1}^{u_n}\sum_k\sum_j
\tilde{\xi}_{(k,j)}^{(n)}P(k,j)\Big[P^{(h)}\big(\xi_i^{(n)},k\big)-\pi_k\Big]\bigg|\\
&\le\sup_k\sum_j\big|\tilde{\xi}_{(k,j)}^{(n)}\big|P(k,j)\cdot
\sup_l\sum_k\big|P^{(h)}(l,k)-\pi_k\big|\longrightarrow 0\quad(h\to\infty).
\end{aligned}
\]

The proof is completed. □

Theorem 3.3. Let f_n(ω) and H(μ‖μ̃) be as given in Definition 2.1, and suppose the source alphabet has N states. Let 0 < t < 1 and

\[
H_t=2Ne^{-2}(1-t)^{-2}.
\]

Assume that 0 < c < t²H_t and that j (1 ≤ j ≤ N) is a constant; then we have

\[
\limsup_{n\to\infty}\Big[f_n(\omega)-\frac{1}{n}\sum_{i=v_n+1}^{u_n}H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]
\le 2\sqrt{cH_t}\quad \text{a.s. on } D(\omega)
\]

and

\[
\liminf_{n\to\infty}\Big[f_n(\omega)-\frac{1}{n}\sum_{i=v_n+1}^{u_n}H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]
\ge-2\sqrt{cH_t}-c\quad \text{a.s. on } D(\omega).
\]

Proof. Under the conditions of Theorem 3.1, let ξ̃_{(x,y)}^{(n)} = f_i^{(n)}(x, y) = −log p_i^{(n)}(x, y) and α = 1, and

\[
E_{\tilde{\mu}}\Big(e^{\big|\tilde{\xi}_{(i-1,i)}^{(n)}\big|}\,\Big|\,\xi_{i-1}^{(n)}=k\Big)
=\sum_{y=1}^{N}e^{\big|\log p_i^{(n)}(k,y)\big|}\,p_i^{(n)}(k,y)
=\sum_{y=1}^{N}\frac{p_i^{(n)}(k,y)}{p_i^{(n)}(k,y)}=N,
\]

then

\[
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
E_{\tilde{\mu}}\Big(e^{\big|\tilde{\xi}_{(i-1,i)}^{(n)}\big|}\,\Big|\,\xi_{i-1}^{(n)}=k\Big)\le N.
\]
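The computation above rests on the identity e^{|log p|} = 1/p for 0 < p ≤ 1, so the weighted sum collapses to the number of states N; a quick numerical check with a hypothetical transition row:

```python
import math

row = [0.1, 0.25, 0.4, 0.25]   # hypothetical transition row p_i(k, .), N = 4
value = sum(math.exp(abs(math.log(p))) * p for p in row)
# e^{|log p|} * p = (1/p) * p = 1 for each entry, so the sum equals N = len(row).
assert abs(value - len(row)) < 1e-12
```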

Supposing 1 ≤ j ≤ N is a constant, we have

\[
E_{\tilde{\mu}}\Big(-\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\,\Big|\,\xi_{i-1}^{(n)}\Big)
=-\sum_{y=1}^{N}p_i^{(n)}\big(\xi_{i-1}^{(n)},y\big)\log p_i^{(n)}\big(\xi_{i-1}^{(n)},y\big)
\triangleq H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big).
\]

Applying Theorem 3.1 with 0 < c < t²H_t, we have

\[
\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[-\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]\le 2\sqrt{cH_t}\quad a.s.
\]

and

\[
\liminf_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[-\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)
-H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]\ge-2\sqrt{cH_t}\quad a.s.
\]

With Eqs 3.5, 3.11, we have

\[
\begin{aligned}
&\limsup_{n\to\infty}\Big[f_n(\omega)-\frac{1}{n}\sum_{i=v_n+1}^{u_n}H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]\\
&\le\limsup_{n\to\infty}\Big[f_n(\omega)+\frac{1}{n}\sum_{i=v_n+1}^{u_n}\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\Big]
+\limsup_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[-\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)-H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]\\
&\le-\liminf_{n\to\infty}\frac{1}{n}\log
\frac{p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)}
{q^{(n)}\big(\xi_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}
+2\sqrt{cH_t}
\le 2\sqrt{cH_t}\quad a.s.,
\end{aligned}
\]

where the last inequality holds by Lemma 2.1: since E_μ[q^{(n)}(ξ_{v_n}^{(n)}) ∏_{i=v_n+1}^{u_n} p_i^{(n)}(ξ_{i−1}^{(n)}, ξ_i^{(n)}) / p^{(n)}(ξ_{v_n}^{(n)}, …, ξ_{u_n}^{(n)})] = 1, the lim inf above is non-negative almost surely.

With Eqs 3.6, 3.11, we have

\[
\begin{aligned}
&\liminf_{n\to\infty}\Big[f_n(\omega)-\frac{1}{n}\sum_{i=v_n+1}^{u_n}H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]\\
&\ge\liminf_{n\to\infty}\Big[f_n(\omega)+\frac{1}{n}\sum_{i=v_n+1}^{u_n}\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)\Big]
+\liminf_{n\to\infty}\frac{1}{n}\sum_{i=v_n+1}^{u_n}
\Big[-\log p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)-H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]\\
&\ge-\limsup_{n\to\infty}\frac{1}{n}\log
\frac{p^{(n)}\big(\xi_{v_n}^{(n)},\ldots,\xi_{u_n}^{(n)}\big)}
{q^{(n)}\big(\xi_{v_n}^{(n)}\big)\prod_{i=v_n+1}^{u_n}p_i^{(n)}\big(\xi_{i-1}^{(n)},\xi_i^{(n)}\big)}
-2\sqrt{cH_t}
=-H(\mu\,\|\,\tilde{\mu})-2\sqrt{cH_t}
\ge-2\sqrt{cH_t}-c\quad a.s.
\end{aligned}
\]

The proof is completed. □

Corollary 3.1. Suppose that the conditions of Theorem 3.3 hold with c = 0; then

\[
\lim_{n\to\infty}\Big[f_n(\omega)-\frac{1}{n}\sum_{i=v_n+1}^{u_n}H\big(p_i^{(n)}\big(\xi_{i-1}^{(n)},j\big)\big)\Big]=0\quad a.s.
\]

Proof. This conclusion, a strong limit theorem for the entropy density, follows immediately from Theorem 3.3 by taking c = 0. □

We point out that Corollary 3.1 implies that our main outcomes generalize some known results, such as those of Liu and Yang [12].

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author contributions

Conceptualization: XQ, ZR, and WP; methodology: XQ; software: ZR; validation: XQ and SC; writing—original draft preparation: XQ; visualization: ZR. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the University Key Project of the Natural Science Foundation of Anhui Province (grant nos KJ2021A1032, KJ2019A0683, and KJ2021A1031), Key Project of the Natural Science Foundation of Chaohu University (grant no. XLZ-202201), Key Construction Discipline of Chaohu University (grant no. kj22zdjsxk01, kj22yjzx05, and kj22xjzz01), Anhui Province Social Science Innovation Development Research Project (grant no. 2021CX077), and University Outstanding Young Talents Project of Anhui Province (grant no. gxyq2021018).

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Peng W. Conditional entropy, entropy density, and strong law of large numbers for generalized controlled tree-indexed Markov chains. Commun Statistics-Theory Methods (2017) 46(23):11880–91. doi:10.1080/03610926.2017.1285935

2. Wang Z, Yang W. Markov approximation and the generalized entropy ergodic theorem for non-null stationary process. Proc Math Sci (2020) 130(1):13. doi:10.1007/s12044-019-0542-4

3. Shi Z, Wang Z, Zhong P, Fan Y. The generalized entropy ergodic theorem for nonhomogeneous bifurcating Markov chains indexed by a binary tree. J Theor Probab (2021) 35:1367–90. doi:10.1007/s10959-021-01117-1

4. Shannon C. A mathematical theory of communication. Bell Syst Tech J (1948) 27:379–423. doi:10.1002/j.1538-7305.1948.tb01338.x

5. McMillan B. The basic theorems of information theory. Ann Math Stat (1953) 24:196–219. doi:10.1214/aoms/1177729028

6. Breiman L. The individual ergodic theorem of information theory. Ann Math Stat (1957) 28:809–11. doi:10.1214/aoms/1177706899

7. Chung K. A note on the ergodic theorem of information theory. Ann Math Stat (1961) 32(2):612–4. doi:10.1214/aoms/1177705069

8. Kieffer J. A simple proof of the Moy-Perez generalization of the Shannon-McMillan theorem. Pac J. Math. (1974) 51:203–6. doi:10.2140/pjm.1974.51.203

9. Barron A. The strong ergodic theorem for densities: Generalized Shannon-McMillan-Breiman theorem. Ann Probab (1985) 13:1292–303.

10. Han T. Information-spectrum methods in information theory. New York: Springer (1990).

11. Liu W. Relative entropy densities and a class of limit theorems of the sequence of m-valued random variables. Ann Probab (1990) 1990:829–39.

12. Liu W, Yang W. The Markov approximation of the sequences of N-valued random variables and a class of small deviation theorems. Stochastic Process Appl (2000) 89:117–30.

13. Liu W. The strong deviation theorems and analytical methods (in Chinese). Beijing: Science Press (2003).

14. Yang W, Liu W. Strong law of large numbers and Shannon-McMillan theorem for Markov chain fields on trees. IEEE Trans Inf Theor (2002) 48(1):313–8. doi:10.1109/18.971762

15. Yang W, Ye Z. The asymptotic equipartition property for nonhomogeneous Markov chains indexed by a homogeneous tree. IEEE Trans Inf Theor (2007) 53(9):3275–80. doi:10.1109/tit.2007.903134

16. Yang W. A class of strong deviation theorems for the random fields associated with nonhomogeneous Markov chains indexed by a Bethe tree. Stochastic Anal Appl (2012) 30(2):220–37. doi:10.1080/07362994.2012.649619

17. Yang W, Zhao Y, Pan H. Strong laws of large numbers for asymptotic even–odd Markov chains indexed by a homogeneous tree. J Math Anal Appl (2014) 410(1):179–89. doi:10.1016/j.jmaa.2013.08.009

18. Shi Z, Ji J, Yang W. A class of small deviation theorem for the sequences of countable state random variables with respect to homogeneous Markov chains. Commun Stat (2016) 46:6823–30. doi:10.1080/03610926.2015.1137594

19. Shi Z, Bao D, Fan Y, Wu B. The asymptotic equipartition property of Markov chains in single infinite markovian environment on countable state space. Stochastics Int J Probab Stochastic Process (2019) 91:945–57. doi:10.1080/17442508.2019.1567730

20. Shi Z, Ding C. A class of small deviation theorems for functionals of random fields on a tree with uniformly bounded degree in random environment. Probab Eng Informational Sci (2020) 36:169–83. doi:10.1017/s026996482000042x

21. Zhao M, Shi Z, Yang W, Wang . A class of strong deviation theorems for the sequence of real valued random variables with respect to continuous-state non-homogeneous Markov chains. Commun Statistics-Theory Methods (2021) 50(23):5475–87. doi:10.1080/03610926.2020.1734838

22. Huang H, Yang W. Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree. Sci China Ser A: Math (2008) 51(2):195–202. doi:10.1007/s11425-008-0015-1

23. Huang H. The generalized entropy ergodic theorem for nonhomogeneous Markov chains indexed by a homogeneous tree. Probab Eng Informational Sci (2020) 34(2):221–34. doi:10.1017/s0269964818000554

24. Peng W, Yang W, Wang B. A class of small deviation theorems for functionals of random fields on a homogeneous tree. J Math Anal Appl (2010) 361(2):293–301. doi:10.1016/j.jmaa.2009.06.079

25. Peng W, Yang W, Shi Z. Strong law of large numbers for Markov chains indexed by spherically symmetric trees. Probab Eng Informational Sci, (2015) 29, 473–81. doi:10.1017/s0269964815000108

26. Peng W, Xi X. Shannon-McMillan theorem and strong law of large numbers for Markov chains indexed by generalized spherically symmetric trees. Commun Statistics-Theory Methods (2021) 2021:1–12. doi:10.1080/03610926.2021.1955385

27. Algoet P, Cover T. A sandwich proof of the Shannon-McMillan-Breiman theorem. Ann Probab (1988) 16(2):899–909.

28. Wang Z. A kind of asymptotic properties of moving averages for Markov chains in markovian environments. Commun Statistics-Theory Methods (2017) 46:10926–40. doi:10.1080/03610926.2016.1252404

29. Wang Z, Yang W. The generalized entropy ergodic theorem for nonhomogeneous Markov chains. Theor Probab (2017) 29:761–75. doi:10.1007/s10959-015-0597-9

30. Chen M, Zhu Y, Niu X, Peng W, Wang Z. On almost sure convergence for double arrays of dependent random variables. Commun Statistics-Theory Methods (2022) 2002:1–15. doi:10.1080/03610926.2022.2095403

31. Wang M, Wang M, Wang R, Wang X. Complete convergence and complete/ moment convergence for arrays of rowwise negatively dependent random variables under sub–linear expectations. J Math inequalities (2022) 16(4):1347–70. doi:10.7153/jmi-2022-16-89

Keywords: non-homogeneous Markov chains, generalized information sources, small deviation properties, general relative entropy, asymptotic equipartition property

Citation: Qin X, Rui Z, Chen S and Peng W (2023) Small deviation properties concerning arrays of non-homogeneous Markov information sources. Front. Phys. 11:1156610. doi: 10.3389/fphy.2023.1156610

Received: 01 February 2023; Accepted: 27 February 2023;
Published: 17 March 2023.

Edited by:

Song Zheng, Zhejiang University of Finance and Economics, China

Reviewed by:

Weigang Sun, Hangzhou Dianzi University, China
Zhiyan Shi, Jiangsu University, China
Huilin Huang, Wenzhou University, China

Copyright © 2023 Qin, Rui, Chen and Peng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zhaobiao Rui, ruizhaobiao@chu.edu.cn
