
ORIGINAL RESEARCH article

Front. Appl. Math. Stat., 12 January 2026

Sec. Statistics and Probability

Volume 11 - 2025 | https://doi.org/10.3389/fams.2025.1709968

A new approach for shrinkage estimators of the multivariate normal mean vector under the balanced loss criterion

  • 1Department of Mathematics, College of Science and Arts Onaizah, Qassim University, Buraydah, Saudi Arabia
  • 2Department of Mathematics, University of Sciences and Technology, Oran, Algeria
  • 3Laboratory of Stochastic Models, Statistics and Applications, Department of Biology, University of Mascara, University Tahar Moulay of Saïda, Mascara, Algeria

In this study, we investigate the issue of estimating the mean vector of a multivariate normal distribution. We introduce two new families of shrinkage estimators derived from both the maximum likelihood estimator and the James-Stein estimator. To evaluate their performance, we employ the risk function associated with the balanced loss criterion. Using this criterion, we establish that these estimators consistently outperform the positive part of the James-Stein estimator. Furthermore, we show that the estimators from the second family exhibit better performance than those from the first. Finally, we conclude with simulation studies that confirm our theoretical findings.

1 Introduction

Since Stein's study [1], estimating the mean of a multivariate normal distribution (MND) has remained a pivotal problem in statistics, underpinning many fundamental results in both the theory and practice of statistical inference. Stein [1] showed that the maximum likelihood estimator (MLE) of the mean vector $\theta=(\theta_1,\theta_2,\dots,\theta_q)^{T}$ of the random vector $Z=(Z_1,Z_2,\dots,Z_q)^{T}\sim N_q(\theta,I_q)$ is minimax, and that it is admissible for q ≤ 2. Namely, for q ≤ 2 there is no other estimator that uniformly dominates it under the quadratic loss function (QLF). However, in higher dimensions, specifically for q ≥ 3, Stein [1] and James and Stein [2] established that the MLE is inadmissible and can be improved by the so-called James-Stein estimator (JSE). To construct this estimator, the authors applied a uniform reduction to the components of the MLE, multiplying each component by the same value defined via a shrinkage function of the form $\phi(Z)=1-(q-2)/\|Z\|^{2}$. This is why the JSE is considered one of the most common shrinkage estimators in statistical analysis.

Numerous studies have focused on developing new shrinkage estimators that enhance the performance of both the MLE and the JSE. Notable contributions in this direction include the studies by Lindley [3], Bhattacharya [4], Berger [5], Stein [6], Arnold [7], Norouzirad and Arashi [8], Kashani et al. [9], and Benkhaled and Hamdaoui [10]. Additionally, several researchers have explored shrinkage estimators within a Bayesian framework, such as Strawderman [11], Lindley [12], Efron and Morris [13], Hamdaoui et al. [14], and Alahmadi et al. [15].

When shrinkage estimators were first introduced, some researchers pointed out a key limitation: the shrinkage factor may assume negative values. When this happens, it fails to perform its intended role of shrinking the components of the MLE toward zero. To address this issue, and under the QLF, Baranchik [16] proposed the positive part of the James-Stein estimator (PPJSE), given by $\Lambda(Z)=\left(1-(q-2)/\|Z\|^{2}\right)^{+}Z$, where $\left(1-(q-2)/\|Z\|^{2}\right)^{+}$ denotes the maximum of $0$ and $1-(q-2)/\|Z\|^{2}$. This construction ensures that the new shrinkage factor is always non-negative. Moreover, Baranchik proved that the PPJSE uniformly improves upon the JSE under the QLF. In Hamdaoui and Benmansour [17], the authors showed through simulations that, under the QLF, the improvement of the PPJSE over the JSE is very significant. Still under the QLF, Hamdaoui [18] also proposed new classes of shrinkage estimators derived from the MLE that improve upon the PPJSE.
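To make the estimators discussed above concrete, the following short Python sketch (our illustration, not code from the paper; all function names are ours) implements the MLE, the JSE, and the PPJSE for $Z\sim N_q(\theta,I_q)$ and compares their empirical quadratic risks by Monte Carlo.

```python
# Illustrative sketch (not the authors' code): MLE, JSE, and PPJSE for
# Z ~ N_q(theta, I_q), compared by Monte Carlo under quadratic loss.
import numpy as np

def jse(z):
    """James-Stein estimator: multiply z by the factor 1 - (q - 2)/||z||^2."""
    q = z.shape[-1]
    return (1.0 - (q - 2) / np.sum(z ** 2, axis=-1, keepdims=True)) * z

def ppjse(z):
    """Positive-part James-Stein estimator: the shrinkage factor is clipped at 0."""
    q = z.shape[-1]
    factor = 1.0 - (q - 2) / np.sum(z ** 2, axis=-1, keepdims=True)
    return np.maximum(factor, 0.0) * z

rng = np.random.default_rng(0)
q, n = 10, 20000
theta = np.zeros(q)                      # a point where shrinkage helps most
z = rng.standard_normal((n, q)) + theta  # n independent draws of Z
risk_mle = np.mean(np.sum((z - theta) ** 2, axis=1))        # theory: q
risk_js = np.mean(np.sum((jse(z) - theta) ** 2, axis=1))
risk_ppjs = np.mean(np.sum((ppjse(z) - theta) ** 2, axis=1))
```

At $\theta=0$ the per-sample loss of the PPJSE never exceeds that of the JSE, so the empirical risks reproduce the ordering PPJSE ≤ JSE < MLE described in the text.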

Among the studies that have marked a major development in the field of estimating the mean of an MND, we find that of Zellner [19]. He focused on estimating the multivariate normal mean under the balanced loss function (BLF), which generalizes the QLF. Since then, several studies have been published in this direction, for example, those of Sanjari Farsipour and Asgharzadeh [20], Selahattin and Issam [21], Nimet and Selahattin [22], Karamikabir and Afshari [23], Lahoucine et al. [24], Karamikabir et al. [25], and Benkhaled et al. [26].

In this study, we extend the results of Hamdaoui [18] by using the BLF rather than the QLF. Specifically, we consider the model $Z=(Z_1,Z_2,\dots,Z_q)^{T}\sim N_q(\theta,\tau^{2}I_q)$, where the parameter $\tau^{2}$ is known. The main objective is to estimate the vector $\theta=(\theta_1,\dots,\theta_q)^{T}$ using shrinkage estimators, which are derived from the MLE. The performance of each estimator is then evaluated through the risk function associated with the BLF. We organize this study as follows. We recall some essential results in Section 2. In Section 3, we present a class of inadmissible shrinkage estimators and establish the necessary and sufficient conditions for the shrinkage function to enhance the performance of the PPJSE. Within this framework, we identify the optimal estimator in the proposed class. Section 4 extends this approach by constructing a new class of estimators, deriving the corresponding necessary and sufficient conditions on the shrinkage function to improve upon the optimal estimator obtained in Section 3, and then determining the best estimator in this class. We conclude this paper with a simulation study to validate and illustrate the theoretical findings.

2 Preliminaries

In this study, we deal with the model $Z=(Z_1,Z_2,\dots,Z_q)^{T}\sim N_q(\theta,\tau^{2}I_q)$, where the parameter $\tau^{2}$ is known. Without loss of generality, assume that $\tau^{2}=1$; the aim is to estimate the mean vector $\theta=(\theta_1,\dots,\theta_q)^{T}$ by new shrinkage estimators derived from both the MLE and the PPJSE. To measure the quality and performance of the estimators under consideration, we use the risk function associated with the BLF given in Hamdaoui et al. [27]: for any estimator $\Lambda$ of $\theta$,

$L_\omega(\Lambda,\theta)=\omega\|\Lambda-\Lambda_0\|^{2}+(1-\omega)\|\Lambda-\theta\|^{2},\quad 0\le\omega<1,$    (1)

where $\Lambda_0$ is the target estimator (in this study, $\Lambda_0$ is the MLE), $\omega$ is the weight given to the proximity of $\Lambda$ to $\Lambda_0$, and $1-\omega$ is the relative weight given to the precision of estimation. We define the risk function relative to this BLF as follows:

$R_\omega(\Lambda,\theta)=E_\theta\left(L_\omega(\Lambda,\theta)\right).$
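As a quick illustration of Equation 1 and of this risk (our sketch; the function names are ours), the balanced loss can be evaluated by Monte Carlo. For the MLE $\Lambda_0=Z$ the balanced loss reduces to $(1-\omega)\|Z-\theta\|^{2}$, so its empirical risk should be close to $(1-\omega)q$.

```python
# Monte Carlo evaluation of the balanced-loss risk of Equation 1 (our sketch).
import numpy as np

def balanced_loss(est, z, theta, omega):
    """L_omega = omega*||est - z||^2 + (1 - omega)*||est - theta||^2, target Lambda_0 = z."""
    return (omega * np.sum((est - z) ** 2, axis=-1)
            + (1 - omega) * np.sum((est - theta) ** 2, axis=-1))

def mc_risk(estimator, theta, omega, n=50000, seed=1):
    """Empirical R_omega(Lambda, theta) under Z ~ N_q(theta, I_q)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, theta.size)) + theta
    return balanced_loss(estimator(z), z, theta, omega).mean()

q, omega = 8, 0.3
theta = np.zeros(q)
risk_mle = mc_risk(lambda z: z, theta, omega)  # theory: (1 - omega) * q = 5.6
```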

In relation to the BLF mentioned above, the MLE associated with our model is $\Lambda_0:=Z$, and a simple calculation given on pages 713-714 of Hamdaoui et al. [28] shows that its risk equals $(1-\omega)q$. Furthermore, the well-known estimator that improves upon the MLE is the so-called JSE, which is expressed as

$\Lambda_{js}(Z)=\left(1-\frac{c}{\|Z\|^{2}}\right)Z,$    (2)

with c = (1−ω)(q−2). It is easy to demonstrate that the difference in risk between the JSE and the MLE is

$D_{\omega,js}(Z)=R_\omega(\Lambda_{js}(Z),\theta)-R_\omega(Z,\theta)=-(1-\omega)^{2}(q-2)^{2}E\left(\frac{1}{\|Z\|^{2}}\right).$    (3)

Moreover, the classical estimator that improves upon the JSE is the PPJSE, expressed as follows:

$\Lambda_{js}^{+}(Z)=\left(1-\frac{c}{\|Z\|^{2}}\right)^{+}Z=\left(1-\frac{c}{\|Z\|^{2}}\right)I_{c/\|Z\|^{2}\le 1}\,Z,$    (4)

where $\left(1-\frac{c}{\|Z\|^{2}}\right)^{+}=\max\left(0,\,1-\frac{c}{\|Z\|^{2}}\right)$, and $I_{c/\|Z\|^{2}\le 1}$ is the indicator function of the set $A=\left\{c/\|Z\|^{2}\le 1\right\}$, defined as

$I_A(z)=\begin{cases}1, & \text{if } z\in A,\\ 0, & \text{otherwise.}\end{cases}$

From Casella and Hwang [29], and Hamdaoui et al. [27], we can deduce that the difference in risk between this estimator and the JSE is expressed as

$D_{\omega,js,js^{+}}(Z)=R_\omega(\Lambda_{js}^{+}(Z),\theta)-R_\omega(\Lambda_{js}(Z),\theta)=E\left[\left(\|Z\|^{2}+\frac{(1-\omega)^{2}(q-2)^{2}}{\|Z\|^{2}}-2(1-\omega)q\right)I_{c/\|Z\|^{2}\ge 1}\right].$    (5)

Hamdaoui et al. [27] also demonstrated that, under the BLF provided in Equation 1, $\Lambda_{js}^{+}(Z)$ improves upon $\Lambda_{js}(Z)$.
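This ordering can be observed numerically. The sketch below (our code, not the authors'; the point $\theta$ and the constants are our choices) compares the empirical balanced-loss risks of the MLE, the JSE of Equation 2 with $c=(1-\omega)(q-2)$, and the PPJSE of Equation 4 at a point with small $\|\theta\|^{2}$, where shrinkage matters most.

```python
# Monte Carlo check (our sketch) that, under the balanced loss with
# c = (1 - omega)(q - 2), the PPJSE of Equation 4 improves on the JSE of Equation 2.
import numpy as np

def blf_risk(est, z, theta, omega):
    return np.mean(omega * np.sum((est - z) ** 2, axis=1)
                   + (1 - omega) * np.sum((est - theta) ** 2, axis=1))

rng = np.random.default_rng(2)
q, omega, n = 10, 0.2, 40000
c = (1 - omega) * (q - 2)
theta = np.full(q, 0.3)                     # a point with small ||theta||^2
z = rng.standard_normal((n, q)) + theta
s = np.sum(z ** 2, axis=1, keepdims=True)   # ||Z||^2
js = (1.0 - c / s) * z                      # Equation 2
js_plus = np.maximum(1.0 - c / s, 0.0) * z  # Equation 4
r_mle = blf_risk(z, z, theta, omega)
r_js = blf_risk(js, z, theta, omega)
r_jsp = blf_risk(js_plus, z, theta, omega)
```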

3 Inadmissible shrinkage estimators dominating the PPJSE

In this part, we introduce a new class of shrinkage estimators of the mean vector $\theta=(\theta_1,\dots,\theta_q)^{T}$, derived from both the MLE and the JSE, and establish that they outperform the PPJSE under the BLF provided in Equation 1.

Consider the estimators

$\Lambda_{k,js^{+},2}(Z)=\Lambda_{js}^{+}(Z)+k\left(\frac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}\,Z,$    (6)

where k is a positive real parameter.

Proposition 3.1. Under the BLF provided in Equation 1, the difference in risk between the estimators Λk,js+,2(Z) given in Equation 6 and Λjs+(Z) defined in Equation 4 can be expressed as,

$D_{\omega,k,js^{+}}=k^{2}E\left(\frac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-4k(1-\omega)E\left(\frac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right).$

Proof Relative to the BLF defined in Equation 1, and using the fact that

$R_\omega(\Lambda_{js}^{+}(Z),\theta)=\omega E\left(\|\Lambda_{js}^{+}(Z)-Z\|^{2}\right)+(1-\omega)E\left(\|\Lambda_{js}^{+}(Z)-\theta\|^{2}\right),$

the difference in risk between the estimators Λk,js+,2(Z) and Λjs+(Z) can be written as

$D_{\omega,k,js^{+}}=R_\omega(\Lambda_{k,js^{+},2}(Z),\theta)-R_\omega(\Lambda_{js}^{+}(Z),\theta)$
$=\omega E\left(\left\|\Lambda_{js}^{+}(Z)+k\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}Z-Z\right\|^{2}\right)+(1-\omega)E\left(\left\|\Lambda_{js}^{+}(Z)+k\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}Z-\theta\right\|^{2}\right)-R_\omega(\Lambda_{js}^{+}(Z),\theta)$
$=k^{2}E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)+2\omega k\,E\left(\left\langle \Lambda_{js}^{+}(Z)-Z,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)+2(1-\omega)k\,E\left(\left\langle \Lambda_{js}^{+}(Z)-\theta,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right).$

The last equality follows directly from the definitions of the Euclidean norm and its associated inner product in ℝq.

As

$E\left(\left\langle \Lambda_{js}^{+}(Z)-\theta,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=E\left(\left\langle \Lambda_{js}^{+}(Z)-Z,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)+E\left(\left\langle Z-\theta,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right),$

it follows that the difference in risk between the estimators $\Lambda_{k,js^{+},2}(Z)$ and $\Lambda_{js}^{+}(Z)$ is

$D_{\omega,k,js^{+}}=R_\omega(\Lambda_{k,js^{+},2}(Z),\theta)-R_\omega(\Lambda_{js}^{+}(Z),\theta)=k^{2}E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)+2k\,E\left(\left\langle \Lambda_{js}^{+}(Z)-Z,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)+2(1-\omega)k\,E\left(\left\langle Z-\theta,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right).$    (7)

Equation 4 leads to

$E\left(\left\langle \Lambda_{js}^{+}(Z)-Z,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=E\left(\left\langle -\tfrac{c}{\|Z\|^{2}}I_{c/\|Z\|^{2}\le 1}Z,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=-c\,E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right).$    (8)

By using Lemma 2.1 of Shao and Strawderman [30], we deduce that

$E\left(\left\langle Z-\theta,\ \tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=(q-4)\,E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right).$    (9)

Then, according to Equations 7-9, we obtain the desired result.

The following theorem establishes a necessary and sufficient condition for the estimator $\Lambda_{k,js^{+},2}(Z)$ to improve upon $\Lambda_{js}^{+}(Z)$, and gives the optimal value of the parameter k that minimizes the risk function $R_\omega(\Lambda_{k,js^{+},2}(Z),\theta)$.

Theorem 3.2. Let q > 4. Under the BLF provided in Equation 1,

i) Λk,js+,2(Z) outperforms Λjs+(Z) if and only if

$0\le k\le 4(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}.$

ii) The optimal value of the parameter k, which minimizes the risk function Rω(Λk,js+,2(Z),θ), is

$\hat{k}=2(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}.$

Proof i) From Proposition 3.1, a necessary and sufficient condition for the estimator $\Lambda_{k,js^{+},2}(Z)$ to improve upon $\Lambda_{js}^{+}(Z)$ is

$D_{\omega,k,js^{+}}=k^{2}E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-4k(1-\omega)E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)\le 0.$

This means that $\Lambda_{k,js^{+},2}(Z)$ improves upon $\Lambda_{js}^{+}(Z)$ if and only if the polynomial $P_2(k)=k^{2}E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-4k(1-\omega)E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)$ in the variable k is non-positive. Namely, our problem is to determine the values of k for which this polynomial is non-positive.

As

$P_2(k)=k\left[k\,E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-4(1-\omega)E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)\right]$

and the fact that the expectations $E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)$ and $E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)$ are positive, we deduce immediately that the polynomial $P_2(k)$ is non-positive if and only if

$0\le k\le 4(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}.$

Then we achieve the desired result.

ii) Using the previous Proposition, the explicit formula for the risk function of the estimator Λk,js+,2(Z) is written as,

$R_\omega(\Lambda_{k,js^{+},2}(Z),\theta)=R_\omega(\Lambda_{js}^{+}(Z),\theta)+k^{2}E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-4k(1-\omega)E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right).$

Since the risk function $R_\omega(\Lambda_{k,js^{+},2}(Z),\theta)$ is convex with respect to the variable k, we can conclude that the optimal value of k which minimizes it is

$\hat{k}=2(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}.$    (10)

If we replace k by k^ in Equation 6, we get the best estimator in the class of the estimators Λk,js+,2(Z), which is defined as follows:

$\Lambda_{\hat{k},js^{+},2}(Z)=\Lambda_{js}^{+}(Z)+\hat{k}\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}Z=\Lambda_{js}^{+}(Z)+2(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}Z.$    (11)

Furthermore, from Proposition 3.1, we immediately deduce that the difference in risk between Λk^,js+,2(Z) and Λjs+(Z), is shown below:

$D_{\omega,\hat{k},js^{+}}=R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)-R_\omega(\Lambda_{js}^{+}(Z),\theta)=-4(1-\omega)^{2}\frac{\left[E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)\right]^{2}}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}\le 0,$    (12)

and this confirms the domination of the estimator Λk^,js+,2(Z) over Λjs+(Z).

4 The effectiveness of new classes of estimators extracted from the PPJSE

The results established in the previous section indicate that adding the term $k\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}Z$ to the estimator $\Lambda_{js}^{+}(Z)$ leads to the estimator $\Lambda_{k,js^{+},2}(Z)$, which dominates $\Lambda_{js}^{+}(Z)$. This observation motivates us to apply the same process to improve the new estimator $\Lambda_{k,js^{+},2}(Z)$. Next, we incorporate a term of the form $l\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}I_{c/\|Z\|^{2}\le 1}Z$, where l is a positive real parameter, into the estimator $\Lambda_{\hat{k},js^{+},2}(Z)$. We then follow a similar method to improve the resulting estimator by successively adding terms of the form $b\left(\tfrac{1}{\|Z\|^{2}}\right)^{p}I_{c/\|Z\|^{2}\le 1}Z$, where p is an integer parameter and b is a positive real constant. This layered approach produces a hierarchy of estimators whose shrinkage factors are polynomials in the variable $\tfrac{1}{\|Z\|^{2}}I_{c/\|Z\|^{2}\le 1}$; as the degree of the polynomial increases, the risk of the constructed estimator decreases, leading to a better estimator at each step.
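The layered construction just described can be written generically: each estimator in the hierarchy applies a shrinkage factor that is a polynomial in $1/\|Z\|^{2}$ on the set $\{c/\|Z\|^{2}\le 1\}$. A minimal sketch (our function and variable names; with no extra coefficients it reduces to the PPJSE):

```python
# Generic polynomial shrinkage estimator (our sketch of the layered construction):
# (1 - c/s + a_2/s^2 + a_3/s^3 + ...) * I(c/s <= 1) * z, with s = ||z||^2.
import numpy as np

def poly_shrinkage(z, coeffs, omega):
    """coeffs = [a_2, a_3, ...] are the added layer coefficients (e.g. k_hat, l_hat)."""
    q = z.shape[-1]
    c = (1 - omega) * (q - 2)
    s = np.sum(z ** 2, axis=-1, keepdims=True)
    factor = 1.0 - c / s
    for p, a in enumerate(coeffs, start=2):
        factor = factor + a / s ** p
    return factor * (c / s <= 1.0) * z

rng = np.random.default_rng(4)
z = rng.standard_normal((100, 10))
base = poly_shrinkage(z, [], 0.3)   # no layers: this is exactly the PPJSE
```

With `coeffs=[k_hat]` one recovers the estimator of Equation 6, and with `coeffs=[k_hat, l_hat]` the estimator of Equation 13 below.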

Now, consider the estimator

$\Lambda_{l,\hat{k},js^{+},3}(Z)=\Lambda_{\hat{k},js^{+},2}(Z)+l\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}I_{c/\|Z\|^{2}\le 1}Z=\left(1-\frac{c}{\|Z\|^{2}}+\hat{k}\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}+l\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}\right)I_{c/\|Z\|^{2}\le 1}Z,$    (13)

where $\hat{k}$ is defined in Equation 10 and the parameter l is a positive real constant.

Proposition 4.1. Relative to the BLF defined in Equation 1, the difference in risk between the estimators Λl,k^,js+,3(Z) given in Equation 13 and Λk^,js+,2(Z) defined by Equation 11 is

$D_{\omega,l,\hat{k},js^{+}}=R_\omega(\Lambda_{l,\hat{k},js^{+},3}(Z),\theta)-R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)=l^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)+4l(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)-8l(1-\omega)E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right).$

Proof Based on the BLF defined in Equation 1, the risk function of the estimator Λl,k^,js+,3(Z) provided in Equation 13 is equal to

$R_\omega(\Lambda_{l,\hat{k},js^{+},3}(Z),\theta)=\omega E\left(\left\|\Lambda_{\hat{k},js^{+},2}(Z)+l\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}I_{c/\|Z\|^{2}\le 1}Z-Z\right\|^{2}\right)+(1-\omega)E\left(\left\|\Lambda_{\hat{k},js^{+},2}(Z)+l\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}I_{c/\|Z\|^{2}\le 1}Z-\theta\right\|^{2}\right)$
$=R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)+l^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)+2l\omega\,E\left(\left\langle \Lambda_{\hat{k},js^{+},2}(Z)-Z,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)+2l(1-\omega)\,E\left(\left\langle \Lambda_{\hat{k},js^{+},2}(Z)-\theta,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right).$

As,

$E\left(\left\langle \Lambda_{\hat{k},js^{+},2}(Z)-\theta,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=E\left(\left\langle \Lambda_{\hat{k},js^{+},2}(Z)-Z,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)+E\left(\left\langle Z-\theta,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right),$

it follows that the difference in risk between the estimators $\Lambda_{l,\hat{k},js^{+},3}(Z)$ given in Equation 13 and $\Lambda_{\hat{k},js^{+},2}(Z)$ can be written as

$D_{\omega,l,\hat{k},js^{+}}=R_\omega(\Lambda_{l,\hat{k},js^{+},3}(Z),\theta)-R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)=l^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)+2l\,E\left(\left\langle \Lambda_{\hat{k},js^{+},2}(Z)-Z,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)+2l(1-\omega)\,E\left(\left\langle Z-\theta,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right).$    (14)

From Equation 13 and by a simple calculation, we obtain

$E\left(\left\langle \Lambda_{\hat{k},js^{+},2}(Z)-Z,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=E\left(-\tfrac{c}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}+\hat{k}\,\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)=-(q-2)(1-\omega)E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)+\hat{k}\,E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right).$    (15)

In addition, Lemma 2.1 of Shao and Strawderman [30] leads to

$E\left(\left\langle Z-\theta,\ \tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}Z\right\rangle\right)=(q-6)\,E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right).$    (16)

Combining Equations 14-16 yields the required result.

Theorem 4.2. Let q > 6. Under the BLF defined in Equation 1,

i) the estimator Λl,k^,js+,3(Z) dominates Λk^,js+,2(Z) if and only if

$0\le l\le \frac{4(1-\omega)\left[2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)\right]}{E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)}.$

ii) The optimal value of the parameter l, which minimizes the risk function Rω(Λl,k^,js+,3(Z),θ), is

$\hat{l}=\frac{2(1-\omega)\left[2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)\right]}{E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)}.$

Proof i) From Proposition 4.1, a necessary and sufficient condition for $\Lambda_{l,\hat{k},js^{+},3}(Z)$ to dominate $\Lambda_{\hat{k},js^{+},2}(Z)$ is

$D_{\omega,l,\hat{k},js^{+}}=l^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)+4l(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)-8l(1-\omega)E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)\le 0.$

Namely,

$l\left\{l-\frac{4(1-\omega)\left[2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)\right]}{E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)}\right\}\le 0.$    (17)

To solve this inequality, we need to study the sign of the quantity

$Q_1:=\frac{2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)}.$

Since the expectations $E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)$, $E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)$, $E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)$, and $E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)$ are positive, and since the quantity

$Q_2:=2\left[E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)\right]^{2}-E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)$

is also positive, we deduce immediately that the quantity $Q_1$ is positive. Consequently, the solution of the inequality in Equation 17 is

$l\in\left[0,\ 4(1-\omega)Q_1\right],$

which allows us to conclude that a necessary and sufficient condition for which the estimator Λl,k^,js+,3(Z) dominates Λk^,js+,2(Z) is

$0\le l\le \frac{4(1-\omega)\left[2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)\right]}{E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)}.$

ii) Using the previous proposition, the risk function of Λl,k^,js+,3(Z) is,

$R_\omega(\Lambda_{l,\hat{k},js^{+},3}(Z),\theta)=R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)+l^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)+4l(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)-8l(1-\omega)E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)$
$=R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)+l^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)-4l(1-\omega)\left[2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)\right].$

Since the risk function $R_\omega(\Lambda_{l,\hat{k},js^{+},3}(Z),\theta)$ is convex with respect to the variable l, the optimal value of l which minimizes it is

$\hat{l}=\frac{2(1-\omega)\left[2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)\right]}{E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)}.$    (18)

If we replace l by l^ in Equation 13, we obtain the estimator defined as,

$\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)=\Lambda_{\hat{k},js^{+},2}(Z)+\hat{l}\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}I_{c/\|Z\|^{2}\le 1}Z=\left(1-\frac{c}{\|Z\|^{2}}+\hat{k}\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}+\hat{l}\left(\tfrac{1}{\|Z\|^{2}}\right)^{3}\right)I_{c/\|Z\|^{2}\le 1}Z.$    (19)

Consequently, the difference in risk between the estimators $\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)$ and $\Lambda_{\hat{k},js^{+},2}(Z)$ is

$D_{\omega,\hat{l},\hat{k},js^{+}}=R_\omega(\Lambda_{\hat{l},\hat{k},js^{+},3}(Z),\theta)-R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)=-4(1-\omega)^{2}\frac{A^{2}}{B}\le 0,$    (20)

with

$A=2E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)$

and

$B=E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right).$

This confirms the outperformance of the estimator Λl^,k^,js+,3(Z) over Λk^,js+,2(Z).

5 Numerical results

From Propositions 3.1 and 4.1 and Equation 3, we deduce that the ratio of the difference in risk between the estimators $\Lambda_{\hat{k},js^{+},2}(Z)$ and $\Lambda_{js}^{+}(Z)$ to the risk of the JSE, and the ratio of the difference in risk between the estimators $\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)$ and $\Lambda_{\hat{k},js^{+},2}(Z)$ to the risk of the JSE, are respectively given by the following:

$\frac{D_{\omega,\hat{k},js^{+}}}{R_\omega(\Lambda_{js}(Z),\theta)}=\frac{R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)-R_\omega(\Lambda_{js}^{+}(Z),\theta)}{R_\omega(\Lambda_{js}(Z),\theta)}=\frac{\hat{k}^{2}E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)-4\hat{k}(1-\omega)E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{(1-\omega)q-(1-\omega)^{2}(q-2)^{2}E\left(\tfrac{1}{\|Z\|^{2}}\right)},$    (21)

and

$\frac{D_{\omega,\hat{l},\hat{k},js^{+}}}{R_\omega(\Lambda_{js}(Z),\theta)}=\frac{R_\omega(\Lambda_{\hat{l},\hat{k},js^{+},3}(Z),\theta)-R_\omega(\Lambda_{\hat{k},js^{+},2}(Z),\theta)}{R_\omega(\Lambda_{js}(Z),\theta)}=\frac{\hat{l}^{2}E\left(\tfrac{1}{\|Z\|^{10}}I_{c/\|Z\|^{2}\le 1}\right)+4\hat{l}(1-\omega)\frac{E\left(\tfrac{1}{\|Z\|^{4}}I_{c/\|Z\|^{2}\le 1}\right)}{E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}E\left(\tfrac{1}{\|Z\|^{8}}I_{c/\|Z\|^{2}\le 1}\right)-8\hat{l}(1-\omega)E\left(\tfrac{1}{\|Z\|^{6}}I_{c/\|Z\|^{2}\le 1}\right)}{(1-\omega)q-(1-\omega)^{2}(q-2)^{2}E\left(\tfrac{1}{\|Z\|^{2}}\right)},$    (22)

with $\hat{k}$ and $\hat{l}$ given in Equations 10 and 18, respectively.

In this part, we present the functions $f_1(d)=D_{\omega,\hat{k},js^{+}}/R_\omega(\Lambda_{js}(Z),\theta)$ and $f_2(d)=D_{\omega,\hat{l},\hat{k},js^{+}}/R_\omega(\Lambda_{js}(Z),\theta)$, defined respectively by Equations 21 and 22, as functions of $d=\|\theta\|_2^{2}$ for selected values of q and ω.

Figures 1, 2 indicate that the difference in risk between the estimators $\Lambda_{\hat{k},js^{+},2}(Z)$ and $\Lambda_{js}^{+}(Z)$ relative to the risk of the JSE, and the difference in risk between the estimators $\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)$ and $\Lambda_{\hat{k},js^{+},2}(Z)$ relative to the risk of the JSE, are negative for q = (10; 14; 20; 24) and ω = (0.05; 0.25; 0.5; 0.7; 0.9). These findings confirm the overall superiority of $\Lambda_{\hat{k},js^{+},2}(Z)$ over the PPJSE $\Lambda_{js}^{+}(Z)$, and of $\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)$ over $\Lambda_{\hat{k},js^{+},2}(Z)$. Moreover, the benefit of $\Lambda_{\hat{k},js^{+},2}(Z)$ over the PPJSE and of $\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)$ over $\Lambda_{\hat{k},js^{+},2}(Z)$ is significant when $d=\|\theta\|_2^{2}$ is small and ω is close to zero. This advantage diminishes gradually as ω approaches 1 and $d=\|\theta\|_2^{2}$ becomes large.
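The behavior of $f_1(d)$ can be reproduced approximately with a few lines of code (ours, not the authors'): the sketch below plugs Monte Carlo estimates of the expectations into $\hat{k}$ and into Equation 21 on a small grid of $d$, and the ratio stays negative throughout, as in Figure 1.

```python
# Monte Carlo sketch (ours) of f1(d) from Equation 21 on a grid of d = ||theta||^2.
import numpy as np

def f1(d, q=10, omega=0.25, n=40000, seed=6):
    rng = np.random.default_rng(seed)
    c = (1 - omega) * (q - 2)
    theta = np.zeros(q)
    theta[0] = np.sqrt(d)                 # any theta with ||theta||^2 = d
    z = rng.standard_normal((n, q)) + theta
    s = np.sum(z ** 2, axis=1)
    ind = (c / s <= 1.0)
    e4 = np.mean(ind / s ** 2)            # E(||Z||^-4 I)
    e6 = np.mean(ind / s ** 3)            # E(||Z||^-6 I)
    k_hat = 2 * (1 - omega) * e4 / e6     # Equation 10
    num = k_hat ** 2 * e6 - 4 * k_hat * (1 - omega) * e4   # numerator of Eq. 21
    den = (1 - omega) * q - (1 - omega) ** 2 * (q - 2) ** 2 * np.mean(1.0 / s)
    return num / den

vals = [f1(d) for d in (0.5, 2.0, 8.0, 32.0)]   # f1 should stay negative
```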

Figure 1

Figure 1. Curve of $f_1(d)$ as a function of $d=\|\theta\|_2^{2}$ for q = (10; 14; 20; 24) and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (a) q = 10 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (b) q = 14 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (c) q = 20 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (d) q = 24 and ω = (0.05; 0.25; 0.5; 0.7; 0.9).

Figure 2

Figure 2. Curve of $f_2(d)$ as a function of $d=\|\theta\|_2^{2}$ for q = (10; 14; 20; 24) and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (a) q = 10 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (b) q = 14 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (c) q = 20 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (d) q = 24 and ω = (0.05; 0.25; 0.5; 0.7; 0.9).

6 Conclusion

In this study, we examined the problem of estimating the mean vector θ of the random vector $Z\sim N_q(\theta,\tau^{2}I_q)$. The risk function based on the BLF was used as the criterion to evaluate the performance of the estimators under consideration. We proposed a new class of estimators of the form $\Lambda_{k,js^{+},2}(Z)=\Lambda_{js}^{+}(Z)+k\left(\tfrac{1}{\|Z\|^{2}}\right)^{2}I_{c/\|Z\|^{2}\le 1}Z$ and established a necessary and sufficient condition on the parameter k for $\Lambda_{k,js^{+},2}(Z)$ to dominate the PPJSE $\Lambda_{js}^{+}(Z)$. Furthermore, we extended this idea by constructing polynomial-type estimators in the variable $\tfrac{1}{\|Z\|^{2}}I_{c/\|Z\|^{2}\le 1}$, obtained by recursively adding terms of the form $\gamma\left(\tfrac{1}{\|Z\|^{2}}\right)^{m}I_{c/\|Z\|^{2}\le 1}Z$. At each step, these estimators improved upon the previous ones, leading to a sequence of polynomial estimators. We showed that increasing the degree of the polynomial allows us to construct better estimators, with the limitation that the dimension of the parameter space must be sufficiently large to satisfy the domination conditions. However, this also complicates the computation of the corresponding risks, making it more challenging to determine sufficient domination criteria. We supported these theoretical findings by simulation studies, in which we demonstrated the overall superiority of $\Lambda_{\hat{k},js^{+},2}(Z)$ over the PPJSE $\Lambda_{js}^{+}(Z)$, and of $\Lambda_{\hat{l},\hat{k},js^{+},3}(Z)$ over $\Lambda_{\hat{k},js^{+},2}(Z)$, for selected values of q and ω.

A natural direction for future research is to study this trade-off further and identify the optimal polynomial degree that yields the best possible estimator. As a continuation of this study, one may explore related extensions by analyzing estimators of the form $\Lambda_{js}^{+}(Z)+\gamma\left(\tfrac{1}{\|Z\|^{2}}\right)^{s}I_{c/\|Z\|^{2}\le 1}Z$, where s is a positive real parameter, within the context of the general balanced loss function $L_\omega^{\varphi}(\Lambda,\theta)=\omega\,\varphi\left(\|\Lambda-\Lambda_0\|^{2}\right)+(1-\omega)\,\varphi\left(\|\Lambda-\theta\|^{2}\right)$, $0\le\omega<1$, where $\varphi(\cdot)$ represents a general positive function. Moreover, this line of research may be further developed in a Bayesian decision-theoretic setting.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

NA: Investigation, Methodology, Visualization, Writing – original draft, Resources, Supervision. AH: Investigation, Methodology, Visualization, Writing – original draft. AB: Investigation, Methodology, Validation, Visualization, Writing – original draft.

Funding

The author(s) declared that financial support was not received for this work and/or its publication.

Conflict of interest

The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Generative AI statement

The author(s) declared that generative AI was not used in the creation of this manuscript.


Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Stein C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1. Berkeley, CA: University of California Press (1956). p. 197–206. doi: 10.1525/9780520313880-018

2. James W, Stein C. Estimation with quadratic loss. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1. Berkeley, CA: University of California Press (1961). p. 361–79.

3. Lindley D. Discussion on Professor Stein's paper. J R Stat Soc Ser B Stat Methodol. (1962) 24:285–7. doi: 10.1111/j.2517-6161.1962.tb00459.x

4. Bhattacharya PK. Estimating the mean of a multivariate normal population with general quadratic loss function. Ann Math Stat. (1966) 37:1819–24. doi: 10.1214/aoms/1177699174

5. Berger J. Admissible minimax estimation of a multivariate normal mean with arbitrary quadratic loss. Ann Statist. (1976) 4:223–6. doi: 10.1214/aos/1176343356

6. Stein C. Estimation of the mean of a multivariate normal distribution. Ann Statist. (1981) 9:1135–51. doi: 10.1214/aos/1176345632

7. Arnold FS. The Theory of Linear Models and Multivariate Analysis. New York, NY: John Wiley and Sons (1981). p. 9–10.

8. Norouzirad M, Arashi M. Preliminary test and Stein-type shrinkage ridge estimators in robust regression. Statist Papers. (2019) 60:1849–82. doi: 10.1007/s00362-017-0899-3

9. Kashani M, Rabiei MR, Arashi M. An integrated shrinkage strategy for improving efficiency in fuzzy regression modeling. Soft Comput. (2021) 25:8095–107. doi: 10.1007/s00500-021-05690-9

10. Benkhaled A, Hamdaoui A. General classes of shrinkage estimators for the multivariate normal mean with unknown variance: minimaxity and limit of risks ratios. Kragujevac J Math. (2022) 46:193–213. doi: 10.46793/KgJMat2202.193B

11. Strawderman WE. Proper Bayes minimax estimators of the multivariate normal mean. Ann Math Statist. (1971) 42:385–8. doi: 10.1214/aoms/1177693528

12. Lindley D, Smith AFM. Bayes estimates for the linear model (with discussion). J Roy Statist Soc. (1972) 34:1–41. doi: 10.1111/j.2517-6161.1972.tb00885.x

13. Efron B, Morris CN. Stein's estimation rule and its competitors: an empirical Bayes approach. J Amer Statist Assoc. (1973) 68:117–30. doi: 10.1080/01621459.1973.10481350

14. Hamdaoui A, Benkhaled A, Mezouar N. Minimaxity and limits of risks ratios of shrinkage estimators of a multivariate normal mean in the Bayesian case. Stat Optim Inf Comput. (2020) 8:507–20. doi: 10.19139/soic-2310-5070-735

15. Alahmadi A, Benkhaled A, Almutiry W. On the effectiveness of the new estimators obtained from the Bayes estimator. AIMS Math. (2025) 10:5762–84. doi: 10.3934/math.2025265

16. Baranchik AJ. Multiple Regression and Estimation of the Mean of a Multivariate Normal Distribution. Technical Report. Stanford, CA: Stanford University (1964). p. 51.

17. Hamdaoui A, Benmansour D. Asymptotic properties of risks ratios of shrinkage estimators. Hacet J Math Stat. (2015) 44:1181–95.

18. Hamdaoui A. On shrinkage estimators improving the positive part of James-Stein estimator. Demonstratio Math. (2021) 54:462–73. doi: 10.1515/dema-2021-0038

19. Zellner A. Bayesian and non-Bayesian estimation using balanced loss functions. In: Berger JO, Gupta S, editors. Statistical Decision Theory and Methods. New York, NY: Springer (1994). p. 337–90. doi: 10.1007/978-1-4612-2618-5_28

20. Sanjari Farsipour N, Asgharzadeh A. Estimation of a normal mean relative to balanced loss functions. Statist Papers. (2004) 45:279–86. doi: 10.1007/BF02777228

21. Selahattin K, Issam D. The optimal extended balanced loss function estimators. J Comput Appl Math. (2019) 345:86–98. doi: 10.1016/j.cam.2018.06.021

22. Nimet O, Selahattin K. Risk performance of some shrinkage estimators. Commun Stat Simul Comput. (2019) 50:323–42. doi: 10.1080/03610918.2018.1554116

23. Karamikabir H, Afshari M. Generalized Bayesian shrinkage and wavelet estimation of location parameter for spherical distribution under balanced-type loss: minimaxity and admissibility. J Multivariate Anal. (2020) 177:110–20. doi: 10.1016/j.jmva.2019.104583

24. Lahoucine H, Eric M, Idir O. On shrinkage estimation of a spherically symmetric distribution for balanced loss function. J Multivariate Anal. (2021) 186:1–11. doi: 10.1016/j.jmva.2021.104794

25. Karamikabir H, Afshari M, Lak F. Wavelet threshold based on Stein's unbiased risk estimators of restricted location parameter in multivariate normal. J Appl Stat. (2021) 48:1712–29. doi: 10.1080/02664763.2020.1772209

26. Benkhaled A, Hamdaoui A, Almutiry W, Alshahrani M, Terbeche M. A study of minimax shrinkage estimators dominating the James-Stein estimator under the balanced loss function. Open Math. (2022) 20:1–11. doi: 10.1515/math-2022-0008

27. Hamdaoui A, Almutiry W, Terbeche M, Benkhaled A. Comparison of risk ratios of shrinkage estimators in high dimensions. Mathematics. (2021) 52:1–14. doi: 10.3390/math10010052

28. Hamdaoui A, Terbeche M, Benkhaled A. On shrinkage estimators improving the James-Stein estimator under balanced loss function. Pak J Stat Oper Res. (2021) 17:711–27. doi: 10.18187/pjsor.v17i3.3663

29. Casella G, Hwang JT. Limit expressions for the risk of James-Stein estimators. Canad J Statist. (1982) 4:305–9. doi: 10.2307/3556196

30. Shao P, Strawderman WE. Improving on the James-Stein positive-part estimator of the multivariate normal mean vector for the case of common unknown variances. Ann Statist. (1994) 22:1517–39. doi: 10.1214/aos/1176325640

Keywords: balanced loss function, multivariate normal distribution, positive part of James-Stein estimator, risk function, shrinkage estimator

Citation: ALoraini NM, Hamdaoui A and Benkhaled A (2026) A new approach for shrinkage estimators of the multivariate normal mean vector under the balanced loss criterion. Front. Appl. Math. Stat. 11:1709968. doi: 10.3389/fams.2025.1709968

Received: 21 September 2025; Revised: 27 November 2025;
Accepted: 29 November 2025; Published: 12 January 2026.

Edited by:

Ronald Wesonga, Sultan Qaboos University, Oman

Reviewed by:

Iman Al Hasani, Sultan Qaboos University, Oman
Mohd Alodat, Sultan Qaboos University, Oman

Copyright © 2026 ALoraini, Hamdaoui and Benkhaled. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Najla M. ALoraini, arienie@qu.edu.sa
