
ORIGINAL RESEARCH article

Front. Big Data, 03 January 2022
Sec. Data Science
Volume 4 - 2021 | https://doi.org/10.3389/fdata.2021.763925

The Bayes Estimators of the Variance and Scale Parameters of the Normal Model With a Known Mean for the Conjugate and Noninformative Priors Under Stein’s Loss

Ying-Ying Zhang1,2*, Teng-Zhong Rong1,2 and Man-Man Li1,2
  • 1Department of Statistics and Actuarial Science, College of Mathematics and Statistics, Chongqing University, Chongqing, China
  • 2Chongqing Key Laboratory of Analytic Mathematics and Applications, Chongqing University, Chongqing, China

For the normal model with a known mean, the Bayes estimation of the variance parameter under the conjugate prior is studied in Lehmann and Casella (1998) and Mao and Tang (2012). However, they only calculate the Bayes estimator with respect to a conjugate prior under the squared error loss function. Zhang (2017) calculates the Bayes estimator of the variance parameter of the normal model with a known mean with respect to the conjugate prior under Stein’s loss function which penalizes gross overestimation and gross underestimation equally, and the corresponding Posterior Expected Stein’s Loss (PESL). Motivated by their works, we have calculated the Bayes estimators of the variance parameter with respect to the noninformative (Jeffreys’s, reference, and matching) priors under Stein’s loss function, and the corresponding PESLs. Moreover, we have calculated the Bayes estimators of the scale parameter with respect to the conjugate and noninformative priors under Stein’s loss function, and the corresponding PESLs. The quantities (prior, posterior, three posterior expectations, two Bayes estimators, and two PESLs) and expressions of the variance and scale parameters of the model for the conjugate and noninformative priors are summarized in two tables. After that, the numerical simulations are carried out to exemplify the theoretical findings. Finally, we calculate the Bayes estimators and the PESLs of the variance and scale parameters of the S&P 500 monthly simple returns for the conjugate and noninformative priors.

1 Introduction

There are four basic elements in Bayesian decision theory and specifically in Bayesian point estimation: The data, the model, the prior, and the loss function. In this paper, we are interested in the data from the normal model with a known mean, with respect to the conjugate and noninformative (Jeffreys’s, reference, and matching) priors, under Stein’s and the squared error loss functions. We will analytically calculate the Bayes estimators of the variance and scale parameters of the normal model with a known mean, with respect to the conjugate and noninformative priors under Stein’s and the squared error loss functions.

The squared error loss function has been used by many authors for the problem of estimating the variance, $\sigma^2$, based on a random sample from a normal distribution (see for instance (Maatta and Casella, 1990)). As pointed out by (Casella and Berger, 2002), the squared error loss function penalizes overestimation and underestimation equally, which is fine for a location parameter with parameter space $\Theta=(-\infty,\infty)$. For a variance or scale parameter, the parameter space is $\Theta=(0,\infty)$, where 0 is a natural lower bound and the estimation problem is not symmetric. In these cases, we should not choose the squared error loss function, but rather a loss function which penalizes gross overestimation and gross underestimation equally, that is, one under which an action $a$ incurs an infinite loss as it tends to 0 or $\infty$. Stein's loss function has this property, and thus it is recommended for the positive restricted parameter space $\Theta=(0,\infty)$ by many authors (see for example (James and Stein, 1961; Petropoulos and Kourouklis, 2005; Oono and Shinozaki, 2006; Bobotas and Kourouklis, 2010; Zhang, 2017; Xie et al., 2018; Zhang et al., 2019; Sun et al., 2021)). In the normal model with a known mean μ, our parameters of interest are $\theta=\sigma^2$ (a variance parameter) and θ = σ (a scale parameter). Therefore, we select Stein's loss function.

The motivation and contributions of our paper are summarized as follows. For the normal model with a known mean μ, the Bayes estimation of the variance parameter $\theta=\sigma^2$ under the conjugate prior, which is an Inverse Gamma distribution, is studied in Example 4.2.5 (p.236) of (Lehmann and Casella, 1998) and Example 1.3.5 (p.15) of (Mao and Tang, 2012). However, they only calculate the Bayes estimator with respect to a conjugate prior under the squared error loss. (Zhang, 2017) calculates the Bayes estimator of the variance parameter $\theta=\sigma^2$ of the normal model with a known mean with respect to the conjugate prior under Stein's loss function, which penalizes gross overestimation and gross underestimation equally, and the corresponding Posterior Expected Stein's Loss (PESL). Motivated by the works of (Lehmann and Casella, 1998; Mao and Tang, 2012; Zhang, 2017), we calculate the Bayes estimators of the variance and scale parameters of the normal model with a known mean for the conjugate and noninformative priors under Stein's loss function. Specifically, in this paper we have calculated the Bayes estimators of the variance parameter $\theta=\sigma^2$ with respect to the noninformative (Jeffreys's, reference, and matching) priors under Stein's loss function, and the corresponding Posterior Expected Stein's Losses (PESLs). Moreover, we have calculated the Bayes estimators of the scale parameter θ = σ with respect to the conjugate and noninformative priors under Stein's loss function, and the corresponding PESLs. For more literature on Bayesian estimation and inference, we refer readers to (Sindhu and Aslam, 2013a; Sindhu and Aslam, 2013b; Sindhu et al., 2013; Sindhu et al., 2016a; Sindhu et al., 2016b; Sindhu et al., 2016c; Sindhu et al., 2017; Sindhu et al., 2018; Sindhu and Hussain, 2018).

The rest of the paper is organized as follows. In Section 2, we analytically calculate the Bayes estimators of the variance and scale parameters of the normal model with a known mean, with respect to the conjugate and noninformative priors under Stein's loss function, and the corresponding PESLs. We also analytically calculate the Bayes estimators under the squared error loss function, and the corresponding PESLs. The quantities (prior, posterior, three posterior expectations, two Bayes estimators, and two PESLs) and expressions of the variance and scale parameters for the conjugate and noninformative priors are summarized in two tables. Section 3 reports extensive numerical simulation results for the combination of the noninformative prior and the scale parameter, which support the theoretical studies of two inequalities of the Bayes estimators and the PESLs, and the finding that the PESLs depend only on the number of observations, but not on the mean or the sample. In Section 4, we calculate the Bayes estimators and the PESLs of the variance and scale parameters of the S&P 500 monthly simple returns for the conjugate and noninformative priors. Some conclusions and discussions are provided in Section 5.

2 Bayes Estimator, PESL, IRSL, and BRSL

In this section, we will analytically calculate the Bayes estimator $\delta_s^{\pi,\theta}(x)$ of the variance parameter $\theta=\sigma^2\in\Theta=(0,\infty)$ under Stein's loss function, the PESL at $\delta_s^{\pi,\theta}(x)$, $\mathrm{PESL}_s^{\pi,\theta}(x)$, and the Integrated Risk under Stein's Loss (IRSL) at $\delta_s^{\pi,\theta}$, $\mathrm{IRSL}_s^{\pi,\theta}=\mathrm{BRSL}^{\pi,\theta}$, which is also the Bayes Risk under Stein's Loss (BRSL) for $(\pi,\theta)$. See (Robert, 2007) for the definitions of the posterior expected loss, the integrated risk, and the Bayes risk. We will also analytically calculate the Bayes estimator $\delta_s^{\pi,\sigma}(x)$ of the scale parameter $\sigma\in\Theta=(0,\infty)$ under Stein's loss function, the PESL at $\delta_s^{\pi,\sigma}(x)$, $\mathrm{PESL}_s^{\pi,\sigma}(x)$, and the IRSL at $\delta_s^{\pi,\sigma}$, $\mathrm{IRSL}_s^{\pi,\sigma}=\mathrm{BRSL}^{\pi,\sigma}$, which is also the BRSL for $(\pi,\sigma)$.

Suppose that we observe $X_1, X_2, \ldots, X_n$ from the hierarchical normal model with a mixing variance parameter $\theta=\sigma^2$:

$$X_i\mid\theta \overset{\text{iid}}{\sim} N(\mu,\theta),\quad i=1,2,\ldots,n,\qquad \theta\sim\pi(\theta),\tag{1}$$

where $-\infty<\mu<\infty$ is a known constant, $N(\mu,\theta)$ is the normal distribution with a known mean μ and an unknown variance θ, and $\pi(\theta)$ is the prior distribution of θ. For the normal model with a known mean μ, the Bayes estimation of the variance parameter $\theta=\sigma^2$ under the conjugate prior, which is an Inverse Gamma distribution, is studied in Example 4.2.5 (p.236) of (Lehmann and Casella, 1998) and Example 1.3.5 (p.15) of (Mao and Tang, 2012). However, they only calculate the Bayes estimator with respect to a conjugate prior under the squared error loss. (Zhang, 2017) calculates the Bayes estimator of the variance parameter $\theta=\sigma^2$ with respect to the conjugate prior under Stein's loss function, and the corresponding PESL. Motivated by the works of (Lehmann and Casella, 1998; Mao and Tang, 2012; Zhang, 2017), we want to calculate the Bayes estimators of the variance parameter of the normal model with a known mean for the noninformative (Jeffreys's, reference, and matching) priors under Stein's loss function. The usual Bayes estimator with respect to a prior $\pi(\theta)$ is $\delta_2^{\pi,\theta}(x)=E(\theta\mid x)$, calculated under the squared error loss function. As pointed out in the introduction, we should instead calculate and use the Bayes estimator of the variance parameter θ with respect to a prior $\pi(\theta)$ under Stein's loss function, that is, $\delta_s^{\pi,\theta}(x)$.

Alternatively, we may be interested in the scale parameter θ = σ. Motivated by the works of (Lehmann and Casella, 1998; Mao and Tang, 2012; Zhang, 2017), we also want to calculate the Bayes estimators of the scale parameter θ = σ with respect to the conjugate and noninformative priors under Stein's loss function, and the corresponding PESLs. Suppose that we observe $X_1, X_2, \ldots, X_n$ from the hierarchical normal model with a mixing scale parameter θ = σ:

$$X_i\mid\sigma \overset{\text{iid}}{\sim} N(\mu,\sigma^2),\quad i=1,2,\ldots,n,\qquad \sigma\sim\pi(\sigma),\tag{2}$$

where $-\infty<\mu<\infty$ is a known constant, $N(\mu,\sigma^2)$ is the normal distribution with a known mean μ and an unknown variance $\sigma^2$, and $\pi(\sigma)$ is the prior distribution of σ. The usual Bayes estimator with respect to a prior $\pi(\sigma)$ is $\delta_2^{\pi,\sigma}(x)=E(\sigma\mid x)$, calculated under the squared error loss function. As pointed out in the introduction, we should instead calculate and use the Bayes estimator of the scale parameter σ with respect to a prior $\pi(\sigma)$ under Stein's loss function, that is, $\delta_s^{\pi,\sigma}(x)$.

Now let us explain why we choose Stein's loss function on $\Theta=(0,\infty)$. Stein's loss function is given by

$$L_s(\theta,a)=\frac{a}{\theta}-\log\frac{a}{\theta}-1,\tag{3}$$

where θ > 0 is the unknown parameter of interest and a is an action or estimator. The squared error loss function is given by

$$L_2(\theta,a)=(a-\theta)^2.\tag{4}$$

The asymmetric Linear Exponential (LINEX) loss function (Varian, 1975; Zellner, 1986; Robert, 2007) is given by

$$L_L(\theta,a)=e^{c(a-\theta)}-c(a-\theta)-1,\tag{5}$$

where $c\neq 0$ serves to determine its shape. In particular, when $c>0$ the LINEX loss function tends to $\infty$ exponentially as $a\to\infty$, while when $c<0$ it tends to $\infty$ linearly. Note that on the positive restricted parameter space $\Theta=(0,\infty)$, Stein's loss function penalizes gross overestimation and gross underestimation equally, that is, an action $a$ incurs an infinite loss as it tends to 0 or $\infty$. In contrast, the squared error loss function does not penalize gross overestimation and gross underestimation equally, as an action $a$ incurs a finite loss (in fact $\theta^2$) as it tends to 0 and an infinite loss as it tends to $\infty$. Similarly, the LINEX loss functions also do not penalize gross overestimation and gross underestimation equally, as an action $a$ incurs a finite loss (in fact $e^{-c\theta}+c\theta-1$) as it tends to 0 and an infinite loss as it tends to $\infty$. Figure 1 shows the four loss functions on $\Theta=(0,\infty)$ when θ = 2.

FIGURE 1. The four loss functions on $\Theta=(0,\infty)$ when θ = 2.
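For readers who wish to reproduce the comparison in Figure 1, the following base-R sketch evaluates and plots the four losses at θ = 2 (the variable names are ours for illustration):

theta <- 2
a <- seq(0.01, 8, by = 0.01)
stein <- a / theta - log(a / theta) - 1                    # Stein's loss, Eq. 3
sqerr <- (a - theta)^2                                     # squared error loss, Eq. 4
linex_pos <- exp(1 * (a - theta)) - 1 * (a - theta) - 1    # LINEX loss, Eq. 5, c = 1
linex_neg <- exp(-1 * (a - theta)) + 1 * (a - theta) - 1   # LINEX loss, Eq. 5, c = -1
matplot(a, cbind(stein, sqerr, linex_pos, linex_neg), type = "l",
        lty = 1, col = 1:4, xlab = "a", ylab = "loss")
legend("top", c("Stein", "squared error", "LINEX (c = 1)", "LINEX (c = -1)"),
       lty = 1, col = 1:4)
# Only Stein's loss diverges at both ends: it behaves like -log(a) as a -> 0+
# and grows linearly as a -> infinity.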

As pointed out by (Zhang, 2017), the Bayes estimator

$$\delta_s^{\pi,\theta}(x)=\frac{1}{E\left(\frac{1}{\theta}\,\middle|\,x\right)}$$

minimizes the PESL, that is,

$$\delta_s^{\pi,\theta}(x)=\mathop{\arg\min}_{a\in\mathcal{A}} E\left[L_s(\theta,a)\mid x\right],$$

where $\mathcal{A}=\{a(x): a(x)>0\}$ is an action space, $a=a(x)>0$ is an action (estimator), which is a function only of $x$, $L_s(\theta,a)$ given by (Eq. 3) is Stein's loss function, and $\theta>0$ is the unknown parameter of interest. Note that Stein's loss function has the nice property that it penalizes gross overestimation and gross underestimation equally, that is, an action $a$ incurs an infinite loss as it tends to 0 or $\infty$. Moreover, note that θ may be the variance parameter $\sigma^2$ or the scale parameter σ.

The usual Bayes estimator of θ is $\delta_2^{\pi,\theta}(x)=E(\theta\mid x)$, which minimizes the Posterior Expected Squared Error Loss. It is interesting to note that

$$\delta_s^{\pi,\theta}(x)\leq\delta_2^{\pi,\theta}(x),\tag{6}$$

whose proof exploits Jensen's inequality and can be found in (Zhang, 2017). Note that the inequality (Eq. 6) is a special case of an inequality in (Zhang et al., 2018). As calculated in (Zhang, 2017), the PESL at $\delta_s^{\pi,\theta}(x)=\left[E(\theta^{-1}\mid x)\right]^{-1}$ is

$$\mathrm{PESL}_s^{\pi,\theta}(x)=E\left[L_s(\theta,a)\mid x\right]\Big|_{a=\frac{1}{E(1/\theta\mid x)}}=\log E\left(\frac{1}{\theta}\,\middle|\,x\right)+E(\log\theta\mid x),$$

and the PESL at $\delta_2^{\pi,\theta}(x)=E(\theta\mid x)$ is

$$\mathrm{PESL}_2^{\pi,\theta}(x)=E\left[L_s(\theta,a)\mid x\right]\Big|_{a=E(\theta\mid x)}=E(\theta\mid x)E\left(\frac{1}{\theta}\,\middle|\,x\right)-\log E(\theta\mid x)+E(\log\theta\mid x)-1.$$

As observed in (Zhang, 2017),

$$\mathrm{PESL}_s^{\pi,\theta}(x)\leq \mathrm{PESL}_2^{\pi,\theta}(x),\tag{7}$$

which is a direct consequence of the general methodology for finding a Bayes estimator, or of the fact that $\delta_s^{\pi,\theta}(x)$ minimizes the PESL. The numerical simulations will exemplify (Eqs 6, 7) later. Note that the calculations of $\delta_s^{\pi,\theta}(x)$, $\delta_2^{\pi,\theta}(x)$, $\mathrm{PESL}_s^{\pi,\theta}(x)$, and $\mathrm{PESL}_2^{\pi,\theta}(x)$ depend only on the three posterior expectations $E(\theta\mid x)$, $E(\theta^{-1}\mid x)$, and $E(\log\theta\mid x)$.
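Since all four quantities are functions of the three posterior expectations alone, they can be computed by a single helper. The following R sketch (the function name bayes_quantities is ours for illustration) encodes the four formulas above:

# E1 = E(theta|x), E2 = E(1/theta|x), E3 = E(log theta|x)
bayes_quantities <- function(E1, E2, E3) {
  list(delta_s = 1 / E2,                      # Bayes estimator under Stein's loss
       delta_2 = E1,                          # Bayes estimator under squared error loss
       PESL_s  = log(E2) + E3,                # PESL at delta_s
       PESL_2  = E1 * E2 - log(E1) + E3 - 1)  # PESL at delta_2
}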

2.1 Conjugate Prior

The problem of finding the Bayes estimator under a conjugate prior is a standard problem that is treated in almost every text on Mathematical Statistics.

The quantities and expressions of the variance and scale parameters of the normal models (Eqs 1, 2) with a known mean μ for the conjugate prior are summarized in Table 1. In the table, α > 0 and β > 0 are known constants,

$$\alpha^{*}=\alpha+\frac{n}{2},\qquad \beta^{*}=\left[\frac{1}{\beta}+\frac{1}{2}\sum_{i=1}^{n}(x_i-\mu)^2\right]^{-1},$$

$$\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}=\frac{d}{dz}\log\Gamma(z)=\operatorname{digamma}(z)$$

is the digamma function, and $\Gamma(z)$ is the gamma function. In the R software (R Core Team, 2021), the function digamma(z) calculates $\psi(z)$. The quantities and expressions of the variance parameter $\theta=\sigma^2$ for the conjugate prior are calculated in and quoted from (Zhang, 2017). The calculations of the quantities and expressions of the scale parameter θ = σ for the conjugate prior can be found in the Supplementary Material. We remark that the calculations of the quantities and expressions in Table 1 are not trivial, especially that of $E^{\pi_c}(\log\theta\mid x)$.

TABLE 1. The quantities and expressions for the conjugate prior.
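As a concrete illustration of the conjugate update summarized in Table 1, the following R sketch (the function name conjugate_update is ours for illustration) computes $\alpha^*$ and $\beta^*$ from a sample x:

conjugate_update <- function(x, mu, alpha, beta) {
  n <- length(x)
  c(alpha.star = alpha + n / 2,                       # alpha* = alpha + n/2
    beta.star  = 1 / (1 / beta + sum((x - mu)^2) / 2))# beta* = [1/beta + sum((x-mu)^2)/2]^(-1)
}
# Example: posterior hyperparameters for 10 N(0, 1) observations with alpha = beta = 1
set.seed(1)
conjugate_update(rnorm(10), mu = 0, alpha = 1, beta = 1)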

2.2 Noninformative Priors

Famous noninformative priors include the Jeffreys's (Jeffreys, 1961), reference (Bernardo, 1979; Berger and Bernardo, 1992), and matching (Tibshirani, 1989; Datta and Mukerjee, 2004) priors. See also (Berger, 2006; Berger et al., 2015) and the references therein.

The Jeffreys’s noninformative prior for θ = σ2 is

$$\pi_J(\theta)\propto\frac{1}{\theta}\quad\text{or}\quad \pi_J(\sigma^2)\propto\frac{1}{\sigma^2}.$$

See Part I (p.66) of (Chen, 2014), where μ is assumed known in the normal model $N(\mu,\theta)$. The Jeffreys's noninformative prior for θ = σ is

$$\pi_J(\sigma)\propto\frac{1}{\sigma}.$$

See Example 3.5.6 (p.131) of (Robert, 2007), where μ is assumed known in the normal model $N(\mu,\sigma^2)$.

Since μ is assumed known in the normal models, there is only one unknown parameter. Therefore, the reference prior is equal to the Jeffreys's prior, and the matching prior is also equal to the Jeffreys's prior (see pp.130–131 of (Ghosh et al., 2006)). In summary, when μ is assumed known in the normal models, the three noninformative priors are equal, that is,

$$\pi_n(\theta)=\pi_J(\theta)=\pi_R(\theta)=\pi_M(\theta)\propto\frac{1}{\theta}$$

and

$$\pi_n(\sigma)=\pi_J(\sigma)=\pi_R(\sigma)=\pi_M(\sigma)\propto\frac{1}{\sigma},$$

where πn stands for the noninformative prior.

Note that as in many statistics textbooks, the probability density function (pdf) of $\theta\sim IG(\alpha,\beta)$ is given by

$$f_\theta(\theta\mid\alpha,\beta)=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\left(\frac{1}{\theta}\right)^{\alpha+1}\exp\left(-\frac{1}{\beta\theta}\right),\quad \theta>0,\ \alpha>0,\ \beta>0.$$

The conjugate prior of the scale parameter θ = σ is a Square Root of the Inverse Gamma (SRIG) distribution that we define below.

DEFINITION 1 Let $\theta=\sigma^2\sim IG(\alpha,\beta)$ with α > 0 and β > 0. Then $\sigma=\sqrt{\theta}\sim SRIG(\alpha,\beta)$ and the pdf of σ is given by

$$f_\sigma(\sigma\mid\alpha,\beta)=\frac{2}{\Gamma(\alpha)\beta^{\alpha}}\left(\frac{1}{\sigma}\right)^{2\alpha+1}\exp\left(-\frac{1}{\beta\sigma^2}\right),\quad \sigma>0,\ \alpha>0,\ \beta>0.$$

Definition 1 gives the definition of the SRIG distribution, which is the conjugate prior of the scale parameter θ = σ of the normal distribution. Because the SRIG distribution cannot be found in standard textbooks, we give its definition here. Moreover, Definition 1 is reasonable, since

$$f_\sigma(\sigma\mid\alpha,\beta)=f_\theta(\theta\mid\alpha,\beta)\left|\frac{\partial\theta}{\partial\sigma}\right|=\frac{1}{\Gamma(\alpha)\beta^{\alpha}}\left(\frac{1}{\sigma^2}\right)^{\alpha+1}\exp\left(-\frac{1}{\beta\sigma^2}\right)\cdot 2\sigma=\frac{2}{\Gamma(\alpha)\beta^{\alpha}}\left(\frac{1}{\sigma}\right)^{2\alpha+1}\exp\left(-\frac{1}{\beta\sigma^2}\right).$$

We have the following proposition, which gives the three expectations of the $SRIG(\alpha,\beta)$ distribution. The calculations needed in the proposition can be found in the Supplementary Material. We remark that the calculations of $E(\sigma)$ and $E(\sigma^{-1})$ are straightforward, utilizing the simple transformation $\theta=\sigma^2$ and the integration of an $IG(\alpha,\beta)$ density. However, the calculation of $E(\log\sigma)$ is more delicate: it proceeds by first applying the transformation $y=1/(\beta\sigma^2)$ and then interchanging the order of integration and differentiation.

PROPOSITION 1 Let $\sigma=\sqrt{\theta}\sim SRIG(\alpha,\beta)$ with α > 0 and β > 0. Then

$$E(\sigma)=\frac{\Gamma\left(\alpha-\frac12\right)}{\Gamma(\alpha)\beta^{1/2}},\ \text{for }\alpha>\frac12\text{ and }\beta>0;\qquad E\left(\frac{1}{\sigma}\right)=\frac{\Gamma\left(\alpha+\frac12\right)\beta^{1/2}}{\Gamma(\alpha)},\ \text{for }\alpha>0\text{ and }\beta>0;\qquad E(\log\sigma)=-\frac12\log\beta-\frac12\psi(\alpha),\ \text{for }\alpha>0\text{ and }\beta>0.$$
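Proposition 1 can be checked by Monte Carlo, since $\sigma=(1/G)^{1/2}$ with $G\sim G(\alpha,\beta)$ (Gamma with scale β) is an $SRIG(\alpha,\beta)$ draw (cf. the sampling algorithm in Section 3). A sketch in R, with arbitrary values α = 3 and β = 2:

set.seed(1)
alpha <- 3; beta <- 2; k <- 1e6
sigma <- sqrt(1 / rgamma(k, shape = alpha, scale = beta))   # SRIG(alpha, beta) draws
c(mc = mean(sigma),      exact = gamma(alpha - 1/2) / (gamma(alpha) * sqrt(beta)))
c(mc = mean(1 / sigma),  exact = gamma(alpha + 1/2) * sqrt(beta) / gamma(alpha))
c(mc = mean(log(sigma)), exact = -log(beta) / 2 - digamma(alpha) / 2)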

The relationship between the two distributions $IG(\alpha,\beta)$ and $SRIG(\alpha,\beta)$ is given in the following proposition, whose proof can be found in the Supplementary Material. We remark that the proof of the proposition is straightforward, utilizing the monotone transformations $\theta=\sigma^2$ and $\sigma=\sqrt{\theta}$.

PROPOSITION 2 $\theta=\sigma^2\sim IG(\alpha,\beta)$ if and only if $\sigma=\sqrt{\theta}\sim SRIG(\alpha,\beta)$, where α > 0 and β > 0.

The posterior distributions of θ and σ for the noninformative priors are given in the following theorem, whose proof can be found in the Supplementary Material.

THEOREM 1 Let $X\mid\theta\sim N(\mu,\theta)$ and $X\mid\sigma\sim N(\mu,\sigma^2)$, where μ is known and $\theta=\sigma^2$ is unknown, $\pi(\theta)\propto 1/\theta$, and $\pi(\sigma)\propto 1/\sigma$. Then

$$\pi(\theta\mid x)\sim IG(\tilde\alpha,\tilde\beta)\quad\text{and}\quad \pi(\sigma\mid x)\sim SRIG(\tilde\alpha,\tilde\beta),$$

where

$$\tilde\alpha=\frac{n}{2}\quad\text{and}\quad \tilde\beta=\frac{2}{\sum_{i=1}^{n}(x_i-\mu)^2}.\tag{8}$$

We have the following two remarks for Theorem 1.

Remark 1 Let $\theta=\sigma^2$. In the derivation of $\pi(\sigma\mid x)$, if we derive it in this way,

$$f_\sigma(\sigma)=\pi(\sigma\mid x)\propto\left(\frac{1}{\sigma}\right)^{n+1}\exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\right\}=\left(\frac{1}{\sigma^2}\right)^{\frac{n+1}{2}}\exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\right\}=\left(\frac{1}{\theta}\right)^{\frac{n+1}{2}}\exp\left\{-\frac{1}{2\theta}\sum_{i=1}^{n}(x_i-\mu)^2\right\}=f_\theta(\theta)\sim IG(\tilde\alpha_1,\tilde\beta),$$

where

$$\tilde\alpha_1=\frac{n-1}{2}\quad\text{and}\quad \tilde\beta=\frac{2}{\sum_{i=1}^{n}(x_i-\mu)^2},$$

then by Proposition 2, $f_\sigma(\sigma)=\pi(\sigma\mid x)\sim SRIG(\tilde\alpha_1,\tilde\beta)$, which is different from $SRIG(\tilde\alpha,\tilde\beta)$. In fact, the above practice is equivalent to deriving the pdf of θ from the pdf of σ by setting $f_\theta(\theta)=f_\sigma(\sigma)$, ignoring the Jacobian term $\partial\sigma/\partial\theta$, which is obviously wrong. Therefore, the above derivation, a pitfall for incautious users, is wrong. ‖
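The pitfall in Remark 1 can also be seen numerically: draws of $\sigma=\sqrt{\theta}$ with $\theta\sim IG(\tilde\alpha,\tilde\beta)$ match the $SRIG(\tilde\alpha,\tilde\beta)$ density, not the $SRIG(\tilde\alpha_1,\tilde\beta)$ density. A sketch in R, with arbitrary values α̃ = 4 and β̃ = 1:

dsrig <- function(s, a, b)   # SRIG(a, b) density from Definition 1
  2 / (gamma(a) * b^a) * s^(-(2 * a + 1)) * exp(-1 / (b * s^2))
set.seed(2)
atilde <- 4; btilde <- 1
sigma <- sqrt(1 / rgamma(1e5, shape = atilde, scale = btilde))
hist(sigma, breaks = 100, freq = FALSE)
curve(dsrig(x, atilde, btilde), add = TRUE)                  # correct: fits the histogram
curve(dsrig(x, atilde - 1/2, btilde), add = TRUE, lty = 2)   # the wrong density from Remark 1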

Remark 2 The two posterior distributions in Theorem 1, $\pi(\theta\mid x)\sim IG(\tilde\alpha,\tilde\beta)$ and $\pi(\sigma\mid x)\sim SRIG(\tilde\alpha,\tilde\beta)$, follow Proposition 2 by accident. We have

$$f_\theta(\theta)=\pi(\theta\mid x)\propto f(x\mid\theta)\pi(\theta)\propto f(x\mid\theta)\frac{1}{\theta}\sim IG(\tilde\alpha,\tilde\beta)$$

and

$$f_\sigma(\sigma)=\pi(\sigma\mid x)\propto f(x\mid\sigma)\pi(\sigma)\propto f(x\mid\theta)\frac{1}{\sigma}\sim SRIG(\tilde\alpha,\tilde\beta).$$

Note that $\sigma=\sqrt{\theta}$, and thus

$$f_\sigma(\sigma)\left|\frac{\partial\sigma}{\partial\theta}\right|\propto f(x\mid\theta)\frac{1}{\sigma}\cdot\frac{1}{2\sqrt{\theta}}=f(x\mid\theta)\frac{1}{\sqrt{\theta}}\cdot\frac{1}{2\sqrt{\theta}}=f(x\mid\theta)\frac{1}{2\theta}\propto f_\theta(\theta),\tag{9}$$

which is the reason why $\pi(\theta\mid x)=f_\theta(\theta)$ and $\pi(\sigma\mid x)=f_\sigma(\sigma)$ follow Proposition 2. Note that the posterior distributions depend on the prior distributions. If the prior distributions $\pi(\theta)$ and $\pi(\sigma)$ are selected different from $1/\theta$ and $1/\sigma$, then the relationship (Eq. 9) may not be satisfied, and thus $\pi(\theta\mid x)$ and $\pi(\sigma\mid x)$ may not follow Proposition 2. ‖

2.2.1 The Quantities and Expressions of the Variance Parameter

In this subsubsection, we will calculate the expressions of the quantities (three posterior expectations, two Bayes estimators, and two PESLs) of the variance parameter $\theta=\sigma^2$.

Now we calculate the three expectations $E(\theta\mid x)$, $E(\theta^{-1}\mid x)$, and $E(\log\theta\mid x)$ for the variance parameter $\theta=\sigma^2$. By Theorem 1, $\pi(\theta\mid x)\sim IG(\tilde\alpha,\tilde\beta)$, and thus

$$E(\theta\mid x)=\frac{1}{(\tilde\alpha-1)\tilde\beta},\ \tilde\alpha>1,\quad\text{and}\quad E\left(\frac{1}{\theta}\,\middle|\,x\right)=\tilde\alpha\tilde\beta.$$

From (Zhang, 2017), we know that

$$E(\log\theta\mid x)=-\log\tilde\beta-\psi(\tilde\alpha).$$

It is easy to see that, for $\tilde\alpha>1$,

$$\delta_s^{\pi,\theta}(x)=\frac{1}{E\left(\frac{1}{\theta}\mid x\right)}=\frac{1}{\tilde\alpha\tilde\beta}<\frac{1}{(\tilde\alpha-1)\tilde\beta}=E(\theta\mid x)=\delta_2^{\pi,\theta}(x),$$

which exemplifies (Eq. 6). From (Zhang, 2017), we find that

$$\mathrm{PESL}_s^{\pi,\theta}(x)=\log\tilde\alpha-\psi(\tilde\alpha),\ \text{for }\tilde\alpha>0,$$

and

$$\mathrm{PESL}_2^{\pi,\theta}(x)=\frac{1}{\tilde\alpha-1}+\log(\tilde\alpha-1)-\psi(\tilde\alpha),\ \text{for }\tilde\alpha>1.$$

It can be directly proved that $\mathrm{PESL}_s^{\pi,\theta}(x)\leq \mathrm{PESL}_2^{\pi,\theta}(x)$ for $\tilde\alpha>1$, which exemplifies (Eq. 7); its proof, which exploits the Taylor series expansion of $e^x$, can be found in the Supplementary Material. Note that $\mathrm{PESL}_s^{\pi,\theta}(x)$ and $\mathrm{PESL}_2^{\pi,\theta}(x)$ depend only on $\tilde\alpha=n/2$. Therefore, they depend only on $n$, but not on μ and $x$. Numerical simulations will exemplify this result.
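Since both PESLs are functions of $\tilde\alpha=n/2$ alone, the inequality can also be scanned numerically over a grid of $\tilde\alpha>1$; a short check in R:

atilde <- seq(1.01, 50, by = 0.01)
PESL_s <- log(atilde) - digamma(atilde)                          # PESL at delta_s
PESL_2 <- 1 / (atilde - 1) + log(atilde - 1) - digamma(atilde)   # PESL at delta_2
all(PESL_s <= PESL_2)  # TRUE: Eq. 7 holds on the whole grid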

The IRSL at $\delta_s^{\pi,\theta}$ or the BRSL for $\theta=\sigma^2$ is (similar to (Robert, 2007))

$$\begin{aligned}
\mathrm{IRSL}_s^{\pi,\theta}=\mathrm{BRSL}^{\pi,\theta}&=r\left(\pi,\delta_s^{\pi,\theta}\right)=E^{\pi}\left[R\left(\theta,\delta_s^{\pi,\theta}\right)\right]=\int_{\Theta}R\left(\theta,\delta_s^{\pi,\theta}\right)\pi(\theta)\,d\theta\\
&=\int_{\Theta}\int_{\mathcal{X}}L_s\left(\theta,\delta_s^{\pi,\theta}(x)\right)f(x\mid\theta)\,dx\,\pi(\theta)\,d\theta
=\int_{\mathcal{X}}\int_{\Theta}L_s\left(\theta,\delta_s^{\pi,\theta}(x)\right)f(x\mid\theta)\pi(\theta)\,d\theta\,dx\\
&=\int_{\mathcal{X}}\int_{\Theta}L_s\left(\theta,\delta_s^{\pi,\theta}(x)\right)\pi(\theta\mid x)\,d\theta\,m^{\pi,\theta}(x)\,dx
=\int_{\mathcal{X}}\mathrm{PESL}_s^{\pi,\theta}(x)\,m^{\pi,\theta}(x)\,dx\\
&=\int_{\mathcal{X}}\left[\log\tilde\alpha-\psi(\tilde\alpha)\right]m^{\pi,\theta}(x)\,dx
=\log\tilde\alpha-\psi(\tilde\alpha)=\mathrm{PESL}_s^{\pi,\theta}(x),
\end{aligned}$$

since α̃ does not depend on x, where

$$m^{\pi,\theta}(x)=\int_{0}^{\infty}f(x\mid\theta)\pi(\theta)\,d\theta$$

is the marginal density of x with prior πθ.

2.2.2 The Quantities and Expressions of the Scale Parameter

In this subsubsection, we will calculate the expressions of the quantities (three posterior expectations, two Bayes estimators, and two PESLs) of the scale parameter θ = σ.

Now let us calculate $\delta_s^{\pi,\sigma}(x)$, $\delta_2^{\pi,\sigma}(x)$, $\mathrm{PESL}_s^{\pi,\sigma}(x)$, and $\mathrm{PESL}_2^{\pi,\sigma}(x)$ for the scale parameter σ. To calculate these quantities, we need the three expectations $E(\sigma\mid x)$, $E(\sigma^{-1}\mid x)$, and $E(\log\sigma\mid x)$. Since $\pi(\sigma\mid x)\sim SRIG(\tilde\alpha,\tilde\beta)$ by Theorem 1, from Proposition 1 we have

$$E(\sigma\mid x)=\frac{\Gamma\left(\tilde\alpha-\frac12\right)}{\Gamma(\tilde\alpha)\tilde\beta^{1/2}},\ \text{for }\tilde\alpha>\frac12\text{ and }\tilde\beta>0,\tag{10}$$
$$E\left(\frac{1}{\sigma}\,\middle|\,x\right)=\frac{\Gamma\left(\tilde\alpha+\frac12\right)\tilde\beta^{1/2}}{\Gamma(\tilde\alpha)},\ \text{for }\tilde\alpha>0\text{ and }\tilde\beta>0,\tag{11}$$
$$E(\log\sigma\mid x)=-\frac12\log\tilde\beta-\frac12\psi(\tilde\alpha),\ \text{for }\tilde\alpha>0\text{ and }\tilde\beta>0.\tag{12}$$

It can be proved that, for $\tilde\alpha>\frac12$,

$$\delta_s^{\pi,\sigma}(x)=\frac{1}{E\left(\frac{1}{\sigma}\mid x\right)}=\frac{\Gamma(\tilde\alpha)}{\Gamma\left(\tilde\alpha+\frac12\right)\tilde\beta^{1/2}}<\frac{\Gamma\left(\tilde\alpha-\frac12\right)}{\Gamma(\tilde\alpha)\tilde\beta^{1/2}}=E(\sigma\mid x)=\delta_2^{\pi,\sigma}(x),$$

which exemplifies (Eq. 6); the proof, which exploits the positivity of $\psi'(x)$, can be found in the Supplementary Material.

Now we calculate $\mathrm{PESL}_s^{\pi,\sigma}(x)$ and $\mathrm{PESL}_2^{\pi,\sigma}(x)$ for the scale parameter σ. From (Zhang, 2017), we know that the PESL at $\delta_s^{\pi,\sigma}(x)=\left[E(\sigma^{-1}\mid x)\right]^{-1}$ is

$$\mathrm{PESL}_s^{\pi,\sigma}(x)=E\left[L_s(\sigma,a)\mid x\right]\Big|_{a=\frac{1}{E(1/\sigma\mid x)}}=\log E\left(\frac{1}{\sigma}\,\middle|\,x\right)+E(\log\sigma\mid x),$$

and the PESL at $\delta_2^{\pi,\sigma}(x)=E(\sigma\mid x)$ is

$$\mathrm{PESL}_2^{\pi,\sigma}(x)=E\left[L_s(\sigma,a)\mid x\right]\Big|_{a=E(\sigma\mid x)}=E(\sigma\mid x)E\left(\frac{1}{\sigma}\,\middle|\,x\right)-1-\log E(\sigma\mid x)+E(\log\sigma\mid x).$$

Substituting (Eqs 10–12) into the above expressions, we obtain

$$\mathrm{PESL}_s^{\pi,\sigma}(x)=\log\frac{\Gamma\left(\tilde\alpha+\frac12\right)\tilde\beta^{1/2}}{\Gamma(\tilde\alpha)}-\frac12\log\tilde\beta-\frac12\psi(\tilde\alpha)=\log\Gamma\left(\tilde\alpha+\frac12\right)-\log\Gamma(\tilde\alpha)-\frac12\psi(\tilde\alpha),$$

for α̃>0 and β̃>0, and

$$\begin{aligned}
\mathrm{PESL}_2^{\pi,\sigma}(x)&=\frac{\Gamma\left(\tilde\alpha-\frac12\right)}{\Gamma(\tilde\alpha)\tilde\beta^{1/2}}\cdot\frac{\Gamma\left(\tilde\alpha+\frac12\right)\tilde\beta^{1/2}}{\Gamma(\tilde\alpha)}-1-\log\frac{\Gamma\left(\tilde\alpha-\frac12\right)}{\Gamma(\tilde\alpha)\tilde\beta^{1/2}}-\frac12\log\tilde\beta-\frac12\psi(\tilde\alpha)\\
&=\frac{\Gamma\left(\tilde\alpha-\frac12\right)\Gamma\left(\tilde\alpha+\frac12\right)}{\Gamma^{2}(\tilde\alpha)}-1-\log\Gamma\left(\tilde\alpha-\frac12\right)+\log\Gamma(\tilde\alpha)-\frac12\psi(\tilde\alpha),
\end{aligned}$$

for $\tilde\alpha>\frac12$ and $\tilde\beta>0$. It can be directly proved that $\mathrm{PESL}_s^{\pi,\sigma}(x)\leq\mathrm{PESL}_2^{\pi,\sigma}(x)$ for $\tilde\alpha>\frac12$ and $\tilde\beta>0$, which exemplifies (Eq. 7); its proof, which exploits the Taylor series expansion of $\log u$ for $u$ near 1, can be found in the Supplementary Material. Note that $\mathrm{PESL}_s^{\pi,\sigma}(x)$ and $\mathrm{PESL}_2^{\pi,\sigma}(x)$ depend only on $\tilde\alpha=n/2$. Therefore, they depend only on $n$, but not on μ and $x$. Numerical simulations will exemplify this result.

The IRSL at $\delta_s^{\pi,\sigma}$ or the BRSL for θ = σ is (similar to (Robert, 2007))

$$\begin{aligned}
\mathrm{IRSL}_s^{\pi,\sigma}=\mathrm{BRSL}^{\pi,\sigma}&=r\left(\pi,\delta_s^{\pi,\sigma}\right)=E^{\pi}\left[R\left(\sigma,\delta_s^{\pi,\sigma}\right)\right]=\int_{\Sigma}R\left(\sigma,\delta_s^{\pi,\sigma}\right)\pi(\sigma)\,d\sigma\\
&=\int_{\Sigma}\int_{\mathcal{X}}L_s\left(\sigma,\delta_s^{\pi,\sigma}(x)\right)f(x\mid\sigma)\,dx\,\pi(\sigma)\,d\sigma
=\int_{\mathcal{X}}\int_{\Sigma}L_s\left(\sigma,\delta_s^{\pi,\sigma}(x)\right)f(x\mid\sigma)\pi(\sigma)\,d\sigma\,dx\\
&=\int_{\mathcal{X}}\int_{\Sigma}L_s\left(\sigma,\delta_s^{\pi,\sigma}(x)\right)\pi(\sigma\mid x)\,d\sigma\,m^{\pi,\sigma}(x)\,dx
=\int_{\mathcal{X}}\mathrm{PESL}_s^{\pi,\sigma}(x)\,m^{\pi,\sigma}(x)\,dx\\
&=\int_{\mathcal{X}}\left[\log\Gamma\left(\tilde\alpha+\tfrac12\right)-\log\Gamma(\tilde\alpha)-\tfrac12\psi(\tilde\alpha)\right]m^{\pi,\sigma}(x)\,dx\\
&=\log\Gamma\left(\tilde\alpha+\tfrac12\right)-\log\Gamma(\tilde\alpha)-\tfrac12\psi(\tilde\alpha)=\mathrm{PESL}_s^{\pi,\sigma}(x),
\end{aligned}$$

since α̃ does not depend on x, where

$$m^{\pi,\sigma}(x)=\int_{0}^{\infty}f(x\mid\sigma)\pi(\sigma)\,d\sigma$$

is the marginal density of x with prior πσ.

The quantities and expressions of the variance and scale parameters for the noninformative priors are summarized in Table 2. In the table, $\tilde\alpha$ and $\tilde\beta$ are given by (Eq. 8).

TABLE 2. The quantities and expressions for the noninformative priors.

From Tables 1, 2, we find that there are four combinations of the expressions of the quantities: conjugate prior and variance parameter, conjugate prior and scale parameter, noninformative prior and variance parameter, and noninformative prior and scale parameter. The forms of the expressions of the quantities are the same for the variance parameter under the conjugate and noninformative priors, since they have the same Inverse Gamma posterior distributions. Similarly, the forms of the expressions of the quantities are the same for the scale parameter under the conjugate and noninformative priors, since they have the same Square Root of the Inverse Gamma posterior distributions.

The inequalities (Eqs 6, 7) appear in Tables 1, 2. In fact, there are eight inequalities in Tables 1, 2, four in each table. Since the forms of the expressions of the quantities are the same in Tables 1, 2, differing only in the parameters, there are actually four distinct inequalities, namely those in Table 2. One of the four inequalities about the Bayes estimators is obvious, and the proofs of the other three can be found in the Supplementary Material.

3 Numerical Simulations

In this section, we will numerically exemplify the theoretical studies of (Eqs 6, 7), and the fact that the PESLs depend only on $n$, but not on μ and $x$. The numerical simulation results are similar for the four combinations of the expressions of the quantities, and thus we only present the results for the combination of the noninformative prior and the scale parameter.

First, we fix μ = 0 and n = 10, and assume that σ = 1 is drawn from the improper prior distribution. After that, we draw a random sample

x = rnorm(n = n, mean = μ, sd = σ)

from $N(\mu,\sigma^2)$.

To generate a random sample $\sigma=(\sigma_1,\ldots,\sigma_k)$ with k = 1000 from

$$\pi_n(\sigma\mid x)=SRIG(\tilde\alpha,\tilde\beta),$$

we will adopt the following algorithm. First, compute α̃ and β̃ from (Eq. 8). Second, generate a random sample

G = rgamma(n = k, shape = α̃, scale = β̃) ∼ G(α̃, β̃).

Third, compute

IG = 1/G ∼ IG(α̃, β̃).

Fourth, compute

σ = √IG ∼ SRIG(α̃, β̃).

Hence, σ is a random sample from the SRIG(α̃, β̃) distribution. Figure 2 shows the histogram of σ|x and the density estimation curve of πn(σ|x). It is under πn(σ|x) that we find $\delta_s^{\pi_n,\sigma}(x)$ to minimize the PESL. From the figure, we see that the SRIG(α̃, β̃) distribution is peaked on the left, skewed to the right, and continuous.
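Putting the four steps together, the whole simulation reads as the following minimal R sketch (μ = 0, n = 10, and k = 1000 as above; the seed is arbitrary):

set.seed(1)
mu <- 0; n <- 10; k <- 1000
x <- rnorm(n = n, mean = mu, sd = 1)      # sample from N(mu, sigma^2) with sigma = 1
atilde <- n / 2                           # Eq. 8
btilde <- 2 / sum((x - mu)^2)             # Eq. 8
G <- rgamma(n = k, shape = atilde, scale = btilde)
sigma <- sqrt(1 / G)                      # SRIG(atilde, btilde) sample
hist(sigma, breaks = 30, freq = FALSE)    # cf. Figure 2
lines(density(sigma))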

FIGURE 2. The histogram of σ|x and the density estimation curve of πn(σ|x).

The Bayes estimators ($\delta_s^{\pi_n,\sigma}(x)$ and $\delta_2^{\pi_n,\sigma}(x)$) and the PESLs ($\mathrm{PESL}_s^{\pi_n,\sigma}(x)$ and $\mathrm{PESL}_2^{\pi_n,\sigma}(x)$) are computed by the following algorithm. First, compute $\tilde\alpha$ and $\tilde\beta$ from (Eq. 8). Second, compute

$$E_1=E(\sigma\mid x)=\frac{\Gamma\left(\tilde\alpha-\frac12\right)}{\Gamma(\tilde\alpha)\tilde\beta^{1/2}},\qquad E_2=E\left(\frac{1}{\sigma}\,\middle|\,x\right)=\frac{\Gamma\left(\tilde\alpha+\frac12\right)\tilde\beta^{1/2}}{\Gamma(\tilde\alpha)},\qquad E_3=E(\log\sigma\mid x)=-\frac12\log\tilde\beta-\frac12\psi(\tilde\alpha).$$

Third, compute

$$\delta_s^{\pi_n,\sigma}(x)=\frac{1}{E_2},\qquad \delta_2^{\pi_n,\sigma}(x)=E_1,\qquad \mathrm{PESL}_s^{\pi_n,\sigma}(x)=\log E_2+E_3,\qquad \mathrm{PESL}_2^{\pi_n,\sigma}(x)=E_1 E_2-\log E_1+E_3-1.$$
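Continuing the simulation sketch above (which defined atilde and btilde), the algorithm reads in R as follows; the numerical values reported below depend on the simulated sample:

E1 <- gamma(atilde - 1/2) / (gamma(atilde) * sqrt(btilde))   # E(sigma|x), Eq. 10
E2 <- gamma(atilde + 1/2) * sqrt(btilde) / gamma(atilde)     # E(1/sigma|x), Eq. 11
E3 <- -log(btilde) / 2 - digamma(atilde) / 2                 # E(log sigma|x), Eq. 12
c(delta_s = 1 / E2, delta_2 = E1)
c(PESL_s = log(E2) + E3, PESL_2 = E1 * E2 - log(E1) + E3 - 1)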

Numerical results show that

$$\delta_s^{\pi_n,\sigma}(x)=0.7712483<0.8152161=\delta_2^{\pi_n,\sigma}(x)$$

and

$$\mathrm{PESL}_s^{\pi_n,\sigma}(x)=0.0267013<0.02826706=\mathrm{PESL}_2^{\pi_n,\sigma}(x),$$

which exemplify the theoretical studies of (6) and (7).

In Figure 3, we fix μ = 0 and n = 10, but allow the seed number to change from 1 to 10 (i.e., we change x). From the figure we see that the estimators and PESLs are functions of x. We see from the left plot of the figure that the estimators depend on x in an unpredictable manner, and $\delta_s^{\pi_n,\sigma}(x)$ is uniformly smaller than $\delta_2^{\pi_n,\sigma}(x)$, so (Eq. 6) is exemplified. The two Bayes estimators are distinguishable since we fix n = 10 to be a small number. The right plot of the figure exhibits that the PESLs do not depend on x, and $\mathrm{PESL}_s^{\pi_n,\sigma}(x)$ is uniformly smaller than $\mathrm{PESL}_2^{\pi_n,\sigma}(x)$, so (Eq. 7) is exemplified.

FIGURE 3. The estimators are functions of x (left) and the PESLs are also functions of x (right).

Now we allow one of the two parameters μ and n to change, holding the other fixed. Moreover, we also assume that the sample x is fixed, as is the case for real data. Figure 4 shows the estimators and PESLs as functions of μ and n. We see from the left plots of the figure that the estimators depend on μ and n, and (Eq. 6) is exemplified. More specifically, the estimators are first decreasing and then increasing functions of μ, attaining the minimum when μ = 0. However, the estimators fluctuate around some value as n increases. The right plots of the figure exhibit that the PESLs depend only on n, but not on μ, and (Eq. 7) is exemplified. More specifically, the PESLs are decreasing functions of n. Furthermore, the two PESLs as functions of n are indistinguishable, as they are very close. In summary, the results of the figure exemplify the theoretical studies of (Eqs 6, 7).

FIGURE 4. Left: The estimators as functions of μ and n. Right: The PESLs as functions of μ and n.

Since the estimators $\delta_s^{\pi_n,\sigma}(x)$ and $\delta_2^{\pi_n,\sigma}(x)$ and the PESLs $\mathrm{PESL}_s^{\pi_n,\sigma}(x)$ and $\mathrm{PESL}_2^{\pi_n,\sigma}(x)$ depend on $\tilde\alpha$ and $\tilde\beta$, where $\tilde\alpha>1/2$ and $\tilde\beta>0$, we can plot the surfaces of the estimators and the PESLs on the domain $(\tilde\alpha,\tilde\beta)\in(0.5,10]\times(0,10]=D$ via the R function persp3d() in the R package rgl (see (Adler and Murdoch, 2017; Zhang et al., 2017; Zhang et al., 2019; Sun et al., 2021)). We remark that the R function persp() in the R package graphics cannot add another surface to an existing surface, but persp3d() can. Moreover, persp3d() allows one to rotate the perspective plots of the surface as desired. Figure 5 plots the surfaces of the estimators and the PESLs, and the surfaces of the difference of the estimators and the difference of the PESLs. From the left two plots of the figure, we see that $\delta_s^{\pi_n,\sigma}(x)<\delta_2^{\pi_n,\sigma}(x)$ for all $(\tilde\alpha,\tilde\beta)$ on D, which exemplifies (Eq. 6). From the right two plots of the figure, we see that $\mathrm{PESL}_s^{\pi_n,\sigma}(x)<\mathrm{PESL}_2^{\pi_n,\sigma}(x)$ for all $(\tilde\alpha,\tilde\beta)$ on D, which exemplifies (Eq. 7). In summary, the results of the figure exemplify the theoretical studies of (Eqs 6, 7).

FIGURE 5. The domain for $(\tilde\alpha,\tilde\beta)$ is D = (0.5, 10] × (0, 10] for all the plots; a stands for $\tilde\alpha$ and b for $\tilde\beta$ in the axes. The red surface is for $\delta_2^{\pi_n,\sigma}(x)$ and the blue surface is for $\delta_s^{\pi_n,\sigma}(x)$ in the upper two plots. (upper left) The estimators as functions of $\tilde\alpha$ and $\tilde\beta$; $\delta_s^{\pi_n,\sigma}(x)<\delta_2^{\pi_n,\sigma}(x)$ for all $(\tilde\alpha,\tilde\beta)$ on D. (upper right) The PESLs as functions of $\tilde\alpha$ and $\tilde\beta$; $\mathrm{PESL}_s^{\pi_n,\sigma}(x)<\mathrm{PESL}_2^{\pi_n,\sigma}(x)$ for all $(\tilde\alpha,\tilde\beta)$ on D. (lower left) The surface of $\delta_2^{\pi_n,\sigma}(x)-\delta_s^{\pi_n,\sigma}(x)$, which is positive for all $(\tilde\alpha,\tilde\beta)$ on D. (lower right) The surface of $\mathrm{PESL}_2^{\pi_n,\sigma}(x)-\mathrm{PESL}_s^{\pi_n,\sigma}(x)$, which is also positive for all $(\tilde\alpha,\tilde\beta)$ on D.

4 A Real Data Example

In this section, we exploit data from finance. The R package quantmod (Ryan and Ulrich, 2017) is exploited to download the data ^GSPC (the S&P 500) from 2020-04-24 to 2021-07-02 from finance.yahoo.com. It is commonly believed that the monthly simple returns of index data or stock data are normally distributed, and it is simple to check that the S&P 500 monthly simple returns follow the normal model. Usually, the data from real examples can be regarded as iid from the normal model with an unknown mean μ. However, the mean μ could be estimated from prior or historical information; alternatively, it could be estimated by the sample mean. Therefore, for simplicity, we assume that the mean μ is known. Assume that

$$\mu=\bar{x},\qquad \alpha=1,\qquad \beta=1$$

for the S&P 500 monthly simple returns.
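A sketch of the data preparation in R, assuming the quantmod workflow described above (the downloaded values, and hence all derived quantities, depend on the data source):

library(quantmod)
getSymbols("^GSPC", src = "yahoo", from = "2020-04-24", to = "2021-07-02")
r <- as.numeric(monthlyReturn(Cl(GSPC), type = "arithmetic"))  # monthly simple returns
n <- length(r); mu <- mean(r)                   # mu = xbar
alpha <- 1; beta <- 1                           # conjugate prior hyperparameters
atilde <- n / 2; btilde <- 2 / sum((r - mu)^2)  # noninformative posterior, Eq. 8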

The Bayes estimators and the PESLs of the variance and scale parameters of the S&P 500 monthly simple returns for the conjugate and noninformative priors are summarized in Table 3. From the table, we observe the following facts.

• The two inequalities (Eqs 6, 7) are exemplified.

• Given the prior (conjugate or noninformative), the Bayes estimators are similar across different loss functions (Stein’s or squared error).

• Given the loss function, the Bayes estimators are quite different across different priors. Therefore, the prior has a larger influence than the loss function in calculating the Bayes estimators.

TABLE 3. The Bayes estimators and the PESLs of the S&P 500 monthly simple returns.

More results (the data of the S&P 500 monthly simple returns, the plot of the S&P 500 monthly close prices, the plot of the S&P 500 monthly simple returns, the histogram of the S&P 500 monthly simple returns) for the real data example can be found in the Supplementary Material due to space limitations.

5 Conclusions and Discussions

For the variance ($\theta=\sigma^2$) and scale (θ = σ) parameters of the normal model with a known mean μ, we recommend and analytically calculate the Bayes estimators, $\delta_s^{\pi,\theta}(x)$, with respect to the conjugate and noninformative (Jeffreys's, reference, and matching) priors under Stein's loss function, which penalizes gross overestimation and gross underestimation equally. These estimators minimize the PESLs. We also analytically calculate the Bayes estimators, $\delta_2^{\pi,\theta}(x)=E(\theta\mid x)$, with respect to the conjugate and noninformative priors under the squared error loss function, and the corresponding PESLs. The quantities ($\pi(\theta)$, $\pi(\theta\mid x)$, $E^{\pi}(\theta\mid x)$, $E^{\pi}(\theta^{-1}\mid x)$, $E^{\pi}(\log\theta\mid x)$, $\delta_s^{\pi,\theta}(x)$, $\delta_2^{\pi,\theta}(x)$, $\mathrm{PESL}_s^{\pi,\theta}(x)$, and $\mathrm{PESL}_2^{\pi,\theta}(x)$) and expressions of the variance and scale parameters for the conjugate and noninformative priors are summarized in Tables 1, 2, respectively. Note that $E^{\pi}(\log\theta\mid x)$, which is essential for the calculation of $\mathrm{PESL}_s^{\pi,\theta}(x)$ and $\mathrm{PESL}_2^{\pi,\theta}(x)$, depends on the digamma function.

Proposition 1 gives the three expectations of the $SRIG(\alpha,\beta)$ distribution. Moreover, Proposition 2 gives the relationship between the two distributions $IG(\alpha,\beta)$ and $SRIG(\alpha,\beta)$.

For the conjugate and noninformative priors, the posterior distribution of $\theta=\sigma^2$, $\pi(\theta\mid x)$, follows an Inverse Gamma distribution, and the posterior distribution of σ, $\pi(\sigma\mid x)$, follows an SRIG distribution, which is defined in Definition 1.

We find that the IRSL at $\delta_s^{\pi,\theta}$ or the BRSL for $\theta=\sigma^2$ is

$$\mathrm{PESL}_s^{\pi,\theta}(x)=\log\tilde\alpha-\psi(\tilde\alpha).$$

In addition, the IRSL at $\delta_s^{\pi,\sigma}$ or the BRSL for θ = σ is

$$\mathrm{PESL}_s^{\pi,\sigma}(x)=\log\Gamma\left(\tilde\alpha+\frac12\right)-\log\Gamma(\tilde\alpha)-\frac12\psi(\tilde\alpha).$$

The numerical simulations of the combination of the noninformative prior and the scale parameter exemplify the theoretical studies of (Eqs 6, 7), and that the PESLs depend only on n, but do not depend on μ and x. Moreover, in the real data example, we have calculated the Bayes estimators and the PESLs of the variance and scale parameters of the S&P 500 monthly simple returns for the conjugate and noninformative priors.

In the frequentist paradigm, if $\hat\sigma$ is the Maximum Likelihood Estimator (MLE) of σ, then $\hat\sigma^2$ is the MLE of $\sigma^2$. In the Bayesian paradigm, by contrast, we usually should estimate the variance parameter $\sigma^2$ and the scale parameter σ separately. In Table 2, we find that

$$\delta_s^{\pi_n,\sigma^2}(x)=\frac{1}{\tilde\alpha\tilde\beta}\quad\text{and}\quad \delta_s^{\pi_n,\sigma}(x)=\frac{\Gamma(\tilde\alpha)}{\Gamma\left(\tilde\alpha+\frac12\right)\tilde\beta^{1/2}}.$$

It is easy to see that

$$\delta_s^{\pi_n,\sigma^2}(x)\neq\left[\delta_s^{\pi_n,\sigma}(x)\right]^2.$$

Similarly,

$$\delta_2^{\pi_n,\sigma^2}(x)\neq\left[\delta_2^{\pi_n,\sigma}(x)\right]^2.$$
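For instance, with the arbitrary values α̃ = 5 and β̃ = 0.1, a quick R computation illustrates the gap:

atilde <- 5; btilde <- 0.1
delta_s_var   <- 1 / (atilde * btilde)                                 # Bayes estimator of sigma^2
delta_s_scale <- gamma(atilde) / (gamma(atilde + 1/2) * sqrt(btilde))  # Bayes estimator of sigma
c(delta_s_var, delta_s_scale^2)  # 2 vs. about 2.10: not equal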

When there is no prior information about the unknown parameter of interest, we prefer the noninformative prior, as the hyperparameters α and β are somewhat arbitrary for the conjugate prior.

We remark that the Bayes estimator under Stein’s loss function is more appropriate than that under the squared error loss function, not because the former is smaller, but because Stein’s loss function which penalizes gross overestimation and gross underestimation equally is more appropriate for the positive restricted parameter.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Author Contributions

This work was carried out in collaboration among all authors. Author YYZ wrote the first draft of the article. Author TZR did literature searches and revised the article. Author MML revised the article. All authors read and approved the final article.

Funding

The research was supported by the Ministry of Education (MOE) project of Humanities and Social Sciences on the west and the border area (20XJC910001), the National Social Science Fund of China (21XTJ001), the National Natural Science Foundation of China (12001068; 72071019), and the Fundamental Research Funds for the Central Universities (2020CDJQY-Z001; 2021CDJQY-047).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

The authors are extremely grateful to the editor, the guest associate editor, and the reviewers for their insightful comments that led to significant improvement of the article.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fdata.2021.763925/full#supplementary-material

References

Adler, D., and Murdoch, D. (2017). Rgl: 3D Visualization Using OpenGL. R package version 0.98.1.

Berger, J. O., and Bernardo, J. M. (1992). On the Development of the Reference Prior Method. Bayesian Statistics 4. London: Oxford University Press.

Berger, J. O., Bernardo, J. M., and Sun, D. C. (2015). Overall Objective Priors. Bayesian Anal. 10, 189–221. doi:10.1214/14-ba915

Berger, J. O. (2006). The Case for Objective Bayesian Analysis. Bayesian Anal. 1, 385–402. doi:10.1214/06-ba115

Bernardo, J. M. (1979). Reference Posterior Distributions for Bayesian Inference. J. R. Stat. Soc. Ser. B (Methodological) 41, 113–128. doi:10.1111/j.2517-6161.1979.tb01066.x

Bobotas, P., and Kourouklis, S. (2010). On the Estimation of a Normal Precision and a Normal Variance Ratio. Stat. Methodol. 7, 445–463. doi:10.1016/j.stamet.2010.01.001

Casella, G., and Berger, R. L. (2002). Statistical Inference. 2nd edition. USA: Duxbury.

Chen, M. H. (2014). Bayesian Statistics Lecture. Changchun, China: Statistics Graduate Summer School, School of Mathematics and Statistics, Northeast Normal University.

Datta, G. S., and Mukerjee, R. (2004). Probability Matching Priors: Higher Order Asymptotics. New York: Springer.

Ghosh, J. K., Delampady, M., and Samanta, T. (2006). An Introduction to Bayesian Analysis. New York: Springer.

James, W., and Stein, C. (1961). Estimation with Quadratic Loss. Proc. Fourth Berkeley Symp. Math. Stat. Probab. 1, 361–380.

Jeffreys, H. (1961). Theory of Probability. 3rd edition. Oxford: Clarendon Press.

Lehmann, E. L., and Casella, G. (1998). Theory of Point Estimation. 2nd edition. New York: Springer.

Maatta, J. M., and Casella, G. (1990). Developments in Decision-Theoretic Variance Estimation. Stat. Sci. 5, 90–120. doi:10.1214/ss/1177012263

Mao, S. S., and Tang, Y. C. (2012). Bayesian Statistics. 2nd edition. Beijing: China Statistics Press.

Oono, Y., and Shinozaki, N. (2006). On a Class of Improved Estimators of Variance and Estimation under Order Restriction. J. Stat. Plann. Inference 136, 2584–2605. doi:10.1016/j.jspi.2004.10.023

Petropoulos, C., and Kourouklis, S. (2005). Estimation of a Scale Parameter in Mixture Models with Unknown Location. J. Stat. Plann. Inference 128, 191–218. doi:10.1016/j.jspi.2003.09.028

R Core Team (2021). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.

Robert, C. P. (2007). The Bayesian Choice: From Decision-Theoretic Motivations to Computational Implementation. 2nd paperback edition. New York: Springer.

Ryan, J. A., and Ulrich, J. M. (2017). Quantmod: Quantitative Financial Modelling Framework. R package version 0.4-10.

Sindhu, T. N., and Aslam, M. (2013). Bayesian Estimation on the Proportional Inverse Weibull Distribution under Different Loss Functions. Adv. Agric. Sci. Eng. Res. 3, 641–655.

Sindhu, T. N., Aslam, M., and Hussain, Z. (2016). Bayesian Estimation on the Generalized Logistic Distribution under Left Type-II Censoring. Thailand Statistician 14, 181–195.

Sindhu, T. N., and Aslam, M. (2013). Objective Bayesian Analysis for the Gompertz Distribution under Doubly Type II Censored Data. Scientific J. Rev. 2, 194–208.

Sindhu, T. N., and Hussain, Z. (2018). Mixture of Two Generalized Inverted Exponential Distributions with Censored Sample: Properties and Estimation. Stat. Applicata-Italian J. Appl. Stat. 30, 373–391.

Sindhu, T. N., Saleem, M., and Aslam, M. (2013). Bayesian Estimation for Topp Leone Distribution under Trimmed Samples. J. Basic Appl. Scientific Res. 3, 347–360.

Sindhu, T. N., Aslam, M., and Hussain, Z. (2016). A Simulation Study of Parameters for the Censored Shifted Gompertz Mixture Distribution: A Bayesian Approach. J. Stat. Manage. Syst. 19, 423–450. doi:10.1080/09720510.2015.1103462

Sindhu, T. N., Feroze, N., and Aslam, M. (2017). A Class of Improved Informative Priors for Bayesian Analysis of Two-Component Mixture of Failure Time Distributions from Doubly Censored Data. J. Stat. Manage. Syst. 20, 871–900. doi:10.1080/09720510.2015.1121597

Sindhu, T. N., Khan, H. M., Hussain, Z., and Al-Zahrani, B. (2018). Bayesian Inference from the Mixture of Half-Normal Distributions under Censoring. J. Natn. Sci. Found. Sri Lanka 46, 587–600. doi:10.4038/jnsfsr.v46i4.8633

Sindhu, T. N., Riaz, M., Aslam, M., and Ahmed, Z. (2016). Bayes Estimation of Gumbel Mixture Models with Industrial Applications. Trans. Inst. Meas. Control. 38, 201–214. doi:10.1177/0142331215578690

Sun, J., Zhang, Y.-Y., and Sun, Y. (2021). The Empirical Bayes Estimators of the Rate Parameter of the Inverse Gamma Distribution with a Conjugate Inverse Gamma Prior under Stein's Loss Function. J. Stat. Comput. Simulation 91, 1504–1523. doi:10.1080/00949655.2020.1858299

Tibshirani, R. (1989). Noninformative Priors for One Parameter of Many. Biometrika 76, 604–608. doi:10.1093/biomet/76.3.604

Varian, H. R. (1975). “A Bayesian Approach to Real Estate Assessment,” in Studies in Bayesian Econometrics and Statistics. Editors S. E. Fienberg, and A. Zellner (Amsterdam: North Holland), 195–208.

Xie, Y.-H., Song, W.-H., Zhou, M.-Q., and Zhang, Y.-Y. (2018). The Bayes Posterior Estimator of the Variance Parameter of the Normal Distribution with a Normal-Inverse-Gamma Prior Under Stein’s Loss. Chin. J. Appl. Probab. Stat. 34, 551–564.

Zellner, A. (1986). Bayesian Estimation and Prediction Using Asymmetric Loss Functions. J. Am. Stat. Assoc. 81, 446–451. doi:10.1080/01621459.1986.10478289

Zhang, Y.-Y. (2017). The Bayes Rule of the Variance Parameter of the Hierarchical Normal and Inverse Gamma Model under Stein's Loss. Commun. Stat. - Theor. Methods 46, 7125–7133. doi:10.1080/03610926.2016.1148733

Zhang, Y.-Y., Wang, Z.-Y., Duan, Z.-M., and Mi, W. (2019). The Empirical Bayes Estimators of the Parameter of the Poisson Distribution with a Conjugate Gamma Prior under Stein's Loss Function. J. Stat. Comput. Simulation 89, 3061–3074. doi:10.1080/00949655.2019.1652606

Zhang, Y.-Y., Xie, Y.-H., Song, W.-H., and Zhou, M.-Q. (2018). Three Strings of Inequalities Among Six Bayes Estimators. Commun. Stat. - Theor. Methods 47, 1953–1961. doi:10.1080/03610926.2017.1335411

Zhang, Y.-Y., Zhou, M.-Q., Xie, Y.-H., and Song, W.-H. (2017). The Bayes Rule of the Parameter in (0,1) under the Power-Log Loss Function with an Application to the Beta-Binomial Model. J. Stat. Comput. Simulation 87, 2724–2737. doi:10.1080/00949655.2017.1343332

Keywords: Bayes estimator, variance and scale parameters, normal model, conjugate and noninformative priors, Stein’s loss

Citation: Zhang Y-Y, Rong T-Z and Li M-M (2022) The Bayes Estimators of the Variance and Scale Parameters of the Normal Model With a Known Mean for the Conjugate and Noninformative Priors Under Stein’s Loss. Front. Big Data 4:763925. doi: 10.3389/fdata.2021.763925

Received: 24 August 2021; Accepted: 01 November 2021;
Published: 03 January 2022.

Edited by:

Niansheng Tang, Yunnan University, China

Reviewed by:

Guikai Hu, East China University of Technology, China
Akio Namba, Kobe University, Japan
Tabassum Sindhu, Quaid-i-Azam University, Pakistan

Copyright © 2022 Zhang, Rong and Li. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ying-Ying Zhang, robertzhangyying@qq.com
