- 1Department of Mathematics, College of Science and Arts Onaizah, Qassim University, Buraydah, Saudi Arabia
- 2Department of Mathematics, University of Sciences and Technology, Oran, Algeria
- 3Laboratory of Stochastic Models, Statistics and Applications, Department of Biology, University of Mascara, University Tahar Moulay of Saïda, Mascara, Algeria
In this study, we investigate the issue of estimating the mean vector of a multivariate normal distribution. We introduce two new families of shrinkage estimators derived from both the maximum likelihood estimator and the James-Stein estimator. To evaluate their performance, we employ the risk function associated with the balanced loss criterion. Using this criterion, we establish that these estimators consistently outperform the positive part of the James-Stein estimator. Furthermore, we show that the estimators from the second family exhibit better performance than those from the first. Finally, we conclude with simulation studies that confirm our theoretical findings.
1 Introduction
Since Stein's study [1], estimating the mean of a multivariate normal distribution (MND) has remained a pivotal problem in statistics, underpinning many fundamental results in both the theory and practice of statistical inference. Stein [1] showed that the maximum likelihood estimator (MLE) of the mean vector of a normally distributed random vector is minimax, and that it is admissible for q ≤ 2; that is, no other estimator uniformly dominates it under the quadratic loss function (QLF). However, in large dimensions, specifically for q ≥ 3, Stein [1] and James and Stein [2] established that the MLE is inadmissible and can be improved by the so-called James-Stein estimator (JSE). To construct this estimator, the authors employed a uniform reduction strategy for the components of the MLE, multiplying each component by the same value defined via a shrinkage function of the form ϕ(Z) = 1 − (q−2)/||Z||². This is why the JSE is considered one of the most common shrinkage estimators in statistical analysis.
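Stein's phenomenon is easy to check numerically. The following Monte Carlo sketch compares the quadratic-loss risks of the MLE and the JSE for Z ~ N_q(θ, I_q); the dimension q, sample size, seed, and the choice θ = 0 (where the James-Stein gain is largest) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Monte Carlo check of Stein's phenomenon: for Z ~ N_q(theta, I_q) with q >= 3,
# the James-Stein estimator beats the MLE under quadratic loss.
rng = np.random.default_rng(0)
q, n_rep = 10, 20000
theta = np.zeros(q)                              # illustrative choice of mean

Z = rng.normal(theta, 1.0, size=(n_rep, q))      # n_rep draws of Z ~ N_q(theta, I_q)
sq = np.sum(Z ** 2, axis=1)                      # ||Z||^2 for each replication
jse = (1.0 - (q - 2) / sq)[:, None] * Z          # phi(Z) * Z, uniform shrinkage

risk_mle = np.mean(np.sum((Z - theta) ** 2, axis=1))    # theory: exactly q
risk_jse = np.mean(np.sum((jse - theta) ** 2, axis=1))  # theory at theta = 0: 2
print(risk_mle, risk_jse)
```

At θ = 0 the exact JSE risk is q − (q−2)²E[1/χ²_q] = 2, far below the constant MLE risk q, which the empirical averages reproduce.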
Numerous studies have focused on developing new shrinkage estimators that enhance the performance of both the MLE and the JSE. Notable contributions in this direction include the studies by Lindley [3], Bhattacharya [4], Berger [5], Stein [6], Arnold [7], Norouzirad and Arashi [8], Kashani et al. [9], and Benkhaled and Hamdaoui [10]. Additionally, several researchers have explored shrinkage estimators within a Bayesian framework, such as Strawderman [11], Lindley [12], Efron and Morris [13], Hamdaoui et al. [14], and Alahmadi et al. [15].
When shrinkage estimators were first introduced, some researchers pointed out a key limitation: the shrinkage factor may take negative values, in which case it fails to perform its intended role of shrinking the components of the MLE toward zero. To address this issue, and under the QLF, Baranchik [16] proposed the positive part of the James-Stein estimator (PPJSE), given by Λ(Z) = (1 − (q−2)/||Z||²)⁺ Z, where (1 − (q−2)/||Z||²)⁺ denotes the maximum of 0 and 1 − (q−2)/||Z||². This construction ensures that the new shrinkage factor is always non-negative. Moreover, Baranchik proved that the PPJSE uniformly improves upon the JSE under the QLF. In the simulation part of Hamdaoui and Benmansour [17], the authors showed that under the QLF the improvement of the PPJSE over the JSE is very significant. Still under the QLF, Hamdaoui [18] also suggested new classes of shrinkage estimators derived from the MLE that improve upon the PPJSE.
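The defect of the raw factor and the effect of the positive-part correction can be illustrated directly. In the sketch below, θ and the seed are illustrative assumptions; θ is taken close to the origin so that the event ||Z||² < q − 2, where the raw factor goes negative and reverses the sign of every component, occurs with non-negligible probability.

```python
import numpy as np

# Sketch of the positive-part correction under quadratic loss:
# clipping the James-Stein factor at zero removes the sign-reversal defect.
rng = np.random.default_rng(1)
q, n_rep = 10, 20000
theta = np.full(q, 0.3)                      # illustrative mean near the origin

Z = rng.normal(theta, 1.0, size=(n_rep, q))
sq = np.sum(Z ** 2, axis=1)
phi = 1.0 - (q - 2) / sq                     # raw factor, can be negative
jse = phi[:, None] * Z
ppjse = np.maximum(phi, 0.0)[:, None] * Z    # (.)^+ applied to the factor

frac_negative = np.mean(phi < 0.0)           # how often the clipping acts
risk_jse = np.mean(np.sum((jse - theta) ** 2, axis=1))
risk_ppjse = np.mean(np.sum((ppjse - theta) ** 2, axis=1))
print(frac_negative, risk_jse, risk_ppjse)
```

The empirical risks reproduce Baranchik's result: the PPJSE never does worse than the JSE, and the gain is visible precisely because the clipping event occurs on a sizeable fraction of samples.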
Among the studies that have marked a major development in the field of estimating the mean of an MND, we find that of Zellner [19]. He focused on estimating the multivariate normal mean under the balanced loss function (BLF), which generalizes the QLF. Since then, several studies have been published in this direction, for example, those of Sanjari Farsipour and Asgharzadeh [20], Selahattin and Issam [21], Nimet and Selahattin [22], Karamikabir and Afshari [23], Lahoucine et al. [24], Karamikabir et al. [25], and Benkhaled et al. [26].
In this study, we extend the results of Hamdaoui [18] by using the BLF rather than the QLF. Specifically, we consider the model Z ∼ N_q(θ, τ²I_q), where the parameter τ² is known. The main objective is to estimate the mean vector θ using shrinkage estimators derived from the MLE. The performance of each estimator is then evaluated through the risk function associated with the BLF. We organize this study as follows. We recall some essential results in Section 2. In Section 3, we present a class of inadmissible shrinkage estimators and establish the necessary and sufficient conditions for the shrinkage function to enhance the performance of the PPJSE. Within this framework, we identify the optimal estimator in the proposed class. Section 4 extends this approach by constructing a new class of estimators, deriving the corresponding necessary and sufficient conditions on the shrinkage function to improve upon the optimal estimator obtained in Section 3, and then determining the best estimator in this class. We conclude this paper with a simulation study to validate and illustrate the theoretical findings.
2 Preliminaries
In this study, we deal with the model Z ∼ N_q(θ, τ²I_q), where the parameter τ² is known. Without loss of generality, assume that τ² = 1; the aim is to estimate the mean vector θ by new shrinkage estimators derived from both the MLE and the PPJSE. To measure the quality and the performance of the estimators treated below, we use the risk function associated with the BLF given in Hamdaoui et al. [27]: for any estimator Λ of θ,

L_ω(Λ, θ) = ω||Λ − Λ₀||² + (1 − ω)||Λ − θ||²,     (1)

where Λ₀ is the target estimator (in this study, Λ₀ is the MLE), ω ∈ [0, 1) is the weight given to the proximity of Λ to Λ₀, and 1 − ω is the relative weight given to the precision of estimation. We define the risk function relative to this BLF as follows:

R_ω(Λ, θ) = E[ω||Λ − Λ₀||² + (1 − ω)||Λ − θ||²].     (2)
In relation to the BLF mentioned above, the MLE associated with our model is Λ₀ := Z, and from a direct computation given on pages 713–714 of Hamdaoui et al. [28], the value of its risk function equals (1 − ω)q. Furthermore, the well-known estimator that improves upon the MLE is the so-called JSE, which is expressed as

Λ^js(Z) = (1 − c/||Z||²) Z,

with c = (1 − ω)(q − 2). It is easy to show that the difference in risk between the JSE and the MLE is

R_ω(Λ^js, θ) − R_ω(Λ₀, θ) = −(1 − ω)²(q − 2)² E[1/||Z||²] < 0.     (3)
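Both balanced-loss facts of this section can be checked by simulation: the MLE has risk (1 − ω)q, and the JSE with shrinkage constant c = (1 − ω)(q − 2) lowers it by (1 − ω)²(q − 2)²E[1/||Z||²]. In the sketch below, q, ω, θ, and the seed are illustrative choices, not values from the paper.

```python
import numpy as np

# Monte Carlo check of two balanced-loss identities:
#   R(MLE) = (1 - w) * q
#   R(JSE) - R(MLE) = -(1 - w)^2 (q - 2)^2 E[1/||Z||^2], with c = (1 - w)(q - 2)
rng = np.random.default_rng(2)
q, n_rep, w = 12, 200_000, 0.3          # w plays the role of omega
theta = np.full(q, 0.5)                  # illustrative mean vector
c = (1.0 - w) * (q - 2)

Z = rng.normal(theta, 1.0, size=(n_rep, q))
sq = np.sum(Z ** 2, axis=1)

def blf_risk(est):
    # empirical E[ w*||est - Z||^2 + (1 - w)*||est - theta||^2 ]
    return np.mean(w * np.sum((est - Z) ** 2, axis=1)
                   + (1.0 - w) * np.sum((est - theta) ** 2, axis=1))

risk_mle = blf_risk(Z)                   # theory: (1 - w) * q = 8.4
jse = (1.0 - c / sq)[:, None] * Z
delta_mc = blf_risk(jse) - risk_mle      # Monte Carlo risk difference
delta_theory = -(1.0 - w) ** 2 * (q - 2) ** 2 * np.mean(1.0 / sq)
print(risk_mle, delta_mc, delta_theory)
```

The two estimates of the risk difference agree up to Monte Carlo error, and both are negative, as the dominance result requires.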
Moreover, the classical estimator that ameliorates the JSE is the PPJSE, expressed as follows:

Λ^js+(Z) = (1 − c/||Z||²)⁺ Z,     (4)

where (x)⁺ = max(x, 0) = x·1_{[0,+∞)}(x), and 1_A is the indicator function of the set A, defined as 1_A(x) = 1 if x ∈ A and 1_A(x) = 0 otherwise.
From Casella and Hwang [29], and Hamdaoui et al. [27], we can deduce that the difference in risk between this estimator and the JSE is expressed as
Hamdaoui et al. [27] also demonstrated that, under the BLF provided in Equation 1, Λ^js+(Z) improves upon Λ^js(Z).
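This balanced-loss dominance is also easy to observe numerically. The sketch below uses illustrative values of q, ω, and the seed, and takes θ = 0, where the clipping region ||Z||² < c is hit often, so the gain of the positive part is clearly visible.

```python
import numpy as np

# Empirical confirmation that, under the balanced loss with weight w,
# the positive-part estimator improves upon the James-Stein estimator.
rng = np.random.default_rng(3)
q, n_rep, w = 10, 100_000, 0.25
theta = np.zeros(q)                          # illustrative choice near max gain
c = (1.0 - w) * (q - 2)

Z = rng.normal(theta, 1.0, size=(n_rep, q))
sq = np.sum(Z ** 2, axis=1)
phi = 1.0 - c / sq
jse = phi[:, None] * Z
ppjse = np.clip(phi, 0.0, None)[:, None] * Z  # positive-part factor

def blf_risk(est):
    # empirical E[ w*||est - Z||^2 + (1 - w)*||est - theta||^2 ]
    return np.mean(w * np.sum((est - Z) ** 2, axis=1)
                   + (1.0 - w) * np.sum((est - theta) ** 2, axis=1))

risk_jse, risk_ppjse = blf_risk(jse), blf_risk(ppjse)
print(risk_jse, risk_ppjse)
```

At θ = 0 every sample with a negative factor is improved pointwise by the clipping, so the empirical balanced-loss risk of the PPJSE falls strictly below that of the JSE.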
3 Inadmissible shrinkage estimators dominating the PPJSE
In this part, we introduce a new class of shrinkage estimators of the mean vector, derived from both the MLE and the JSE, and study how they outperform the PPJSE under the BLF provided in Equation 1.
Consider the estimators
with k a real positive parameter.
Proposition 3.1. Under the BLF provided in Equation 1, the difference in risk between the estimators given in Equation 6 and the PPJSE defined in Equation 4 can be expressed as,
Proof Relative to the BLF defined in Equation 1, and using the fact that
the difference in risk between the two estimators can be written as
The last equality follows directly from the definitions of the Euclidean norm and its associated inner product in ℝq.
As
Then, the difference in risk between the two estimators is
Equation 4 leads to
And by using Lemma 2.1 of Shao and Strawderman [30], we deduce that
Then, according to Equations 7–9, we obtain the desired result.
The following theorem establishes a necessary and sufficient condition for the proposed estimator to improve upon the PPJSE, and gives the optimal value of the parameter k, which minimizes the risk function.
Theorem 3.2. Let q > 4. Under the BLF provided in Equation 1,
i) the estimator given in Equation 6 outperforms the PPJSE if and only if
ii) The optimal value of the parameter k, which minimizes the risk function is
Proof i) From Proposition 3.1, a necessary and sufficient condition for the proposed estimator to improve upon the PPJSE is
This means that the improvement holds if and only if the polynomial P2(k) in the variable k is negative. Namely, our problem is to determine the values of k for which this polynomial is negative.
As
and since the expectations involved are positive, we deduce immediately that the polynomial P2(k) in the variable k is negative if and only if
Then we achieve the desired result.
ii) Using the previous Proposition, the explicit formula for the risk function of the estimator is written as,
Since the risk function is convex with respect to the variable k, we conclude that the optimal value of k minimizing it is
If we replace k by its optimal value in Equation 6, we obtain the best estimator in the proposed class, which is defined as follows:
Furthermore, from Proposition 3.1, we immediately deduce that the difference in risk between and is shown below:
and this confirms the domination of the estimator over
4 The effectiveness of new classes of estimators extracted from the PPJSE
The results established in the previous section indicate that adding a corrective term to the PPJSE leads us to construct an estimator that dominates it. This observation motivates us to adopt the same process to improve the new estimator obtained in Section 3. Next, we incorporate a further corrective term, governed by a real positive parameter l, into that estimator. We then follow a similar method to improve the resulting estimator by successively adding corrective terms of the same type, indexed by an integer parameter p and a positive real constant b. This layered approach produces a hierarchy of estimators with a polynomial structure in the shrinkage variable; as the degree of the polynomial increases, the risk of the constructed estimator decreases, leading to the best estimator.
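The layering idea can be sketched generically. The code below is purely illustrative and does NOT reproduce the authors' Equations 6, 13, or 17, whose exact correction terms are defined in the paper: it simply compares a degree-1 positive-part factor with a hypothetical degree-2 factor in the variable c/||Z||², with a hypothetical small coefficient k, under the balanced loss.

```python
import numpy as np

# Illustrative (hypothetical) polynomial shrinkage factors, truncated at zero:
#   phi1(u) = (1 - u)^+            -- the positive-part (PPJSE) factor
#   phi2(u) = (1 - u + k*u^2)^+    -- a hypothetical degree-2 layer
# where u = c/||Z||^2. The coefficient k is an assumption for illustration.
rng = np.random.default_rng(4)
q, n_rep, w = 10, 100_000, 0.25
theta = np.zeros(q)
c = (1.0 - w) * (q - 2)
k = 0.05

Z = rng.normal(theta, 1.0, size=(n_rep, q))
sq = np.sum(Z ** 2, axis=1)
u = c / sq
phi1 = np.clip(1.0 - u, 0.0, None)
phi2 = np.clip(1.0 - u + k * u ** 2, 0.0, None)

def blf_risk(factor):
    est = factor[:, None] * Z
    # empirical E[ w*||est - Z||^2 + (1 - w)*||est - theta||^2 ]
    return np.mean(w * np.sum((est - Z) ** 2, axis=1)
                   + (1.0 - w) * np.sum((est - theta) ** 2, axis=1))

risk1, risk2 = blf_risk(phi1), blf_risk(phi2)
print(risk1, risk2)
```

Scanning the coefficient of the added term over a grid, exactly as the theorems do analytically for k and l, is how one would locate the risk-minimizing layer in practice.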
Now, consider the estimator
where is defined in Equation 10 and the parameter l is a real positive constant.
Proposition 4.1. Relative to the BLF defined in Equation 1, the difference in risk between the estimator given in Equation 13 and the estimator defined by Equation 11 is
Proof Based on the BLF defined in Equation 1, the risk function of the estimator provided in Equation 13 is equal to
As,
Then, the difference in risk between the estimator given in Equation 13 and the one defined in Equation 11 can be written as,
From Equation 13 and by a simple calculation, we obtain
In addition, Lemma 2.1 of Shao and Strawderman [30] leads to
We use Equations 14–16 together, and the required result is obtained.
Theorem 4.2. Let q > 6. Under the BLF defined in Equation 1,
i) the estimator given in Equation 13 dominates the one defined in Equation 11 if and only if
ii) The optimal value of the parameter l, which minimizes the risk function is
Proof i) From Proposition 4.1, a necessary and sufficient condition for the proposed estimator to dominate the previous one is
Namely,
To solve this inequality, we need to study the sign of the quantity
As the corresponding expectations are positive, and the quantity displayed above is also positive, we deduce immediately that the quantity Q1 is positive. Consequently, the solution of the preceding inequality is
which allows us to conclude that a necessary and sufficient condition for the proposed estimator to dominate is
ii) Using the previous proposition, the risk function of is,
As the risk function is convex with respect to the variable l, the optimal value of l minimizing it is
If we replace l by its optimal value in Equation 13, we obtain the estimator defined as,
Consequently, the difference in risk between the estimators and is
with
and
This confirms the outperformance of the estimator over
5 Numerical results
From Proposition 3.1, Proposition 4.1, and Equation 3, we can deduce that the ratio of the difference in risk between the first pair of estimators to the risk of the JSE, and the ratio of the difference in risk between the second pair of estimators to the risk of the JSE, are respectively given by the following:
and
with and given respectively in Equations 10, 18.
In this part, we present the functions f1(d) and f2(d), defined respectively by Equations 21, 22, as functions of d for selected values of q and ω.
Figures 1, 2 indicate that the difference in risk between the first pair of estimators relative to the risk of the JSE, and the difference in risk between the second pair of estimators relative to the risk of the JSE, are negative for q = (10; 14; 20; 24) and ω = (0.05; 0.25; 0.5; 0.7; 0.9). These findings confirm the overall superiority of the first proposed estimator over the PPJSE, and of the second over the first. Moreover, the benefit of the proposed estimators is significant when d is small and ω is close to zero. This advantage diminishes gradually as ω approaches 1 and d becomes large.
Figure 1. Curve of f1(d) as function of for q = (10; 14; 20; 24) and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (a) q = 10 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (b) q = 14 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (c) q = 20 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (d) q = 24 and ω = (0.05; 0.25; 0.5; 0.7; 0.9).
Figure 2. Curve of f2(d) as function of for q = (10; 14; 20; 24) and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (a) q = 10 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (b) q = 14 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (c) q = 20 and ω = (0.05; 0.25; 0.5; 0.7; 0.9). (d) q = 24 and ω = (0.05; 0.25; 0.5; 0.7; 0.9).
6 Conclusion
In this study, we examined the problem of estimating the mean vector θ of a multivariate normal random vector. The risk function based on the BLF was used as the criterion to evaluate the performance of the estimators under consideration. We proposed a new class of estimators and established a necessary and sufficient condition on the parameter k ensuring that the proposed estimator dominates the PPJSE. Furthermore, we extended this idea by constructing polynomial-type estimators in the shrinkage variable, obtained by recursively adding corrective terms. At each step, these estimators improved upon the previous ones, leading to a sequence of polynomial estimators. We showed that increasing the degree of the polynomial allows us to construct better estimators, provided that the dimension of the parameter space is large enough to satisfy the domination conditions. However, this also complicates the computation of the corresponding risks, making it more challenging to determine sufficient domination criteria. We supported these theoretical findings with simulation studies, in which we demonstrated the overall superiority of the first proposed estimator over the PPJSE, and of the second over the first, for selected values of q and ω.
A natural direction for future research is to study this trade-off further and identify the optimal polynomial degree that yields the best possible estimator. As a continuation of this study, one may explore related extensions by analyzing estimators of the form , where s is a real positive parameter, within the context of the general balanced loss function , where φ(.) represents a general positive function. Moreover, this line of research may be further developed in a Bayesian decision-theoretic setting.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
NA: Investigation, Methodology, Visualization, Writing – original draft, Resources, Supervision. AH: Investigation, Methodology, Visualization, Writing – original draft. AB: Investigation, Methodology, Validation, Visualization, Writing – original draft.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Generative AI statement
The author(s) declared that generative AI was not used in the creation of this manuscript.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
References
1. Stein C. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1. Berkeley, CA: University of California Press (1956). p. 197–206. doi: 10.1525/9780520313880-018
2. James W, Stein C. Estimation with quadratic loss. In: Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1. Berkeley, CA: University of California Press (1961). p. 361–79.
3. Lindley D. Discussion on professor Stein's paper. J R Stat Soc Ser B Stat Methodol. (1962) 24:285–7. doi: 10.1111/j.2517-6161.1962.tb00459.x
4. Bhattacharya PK. Estimating the mean of a multivariate normal population with general quadratic loss function. Ann Math Stat. (1966) 37:1819–24. doi: 10.1214/aoms/1177699174
5. Berger J. Admissible minimax estimation of a multivariate normal mean with arbitrary quadratic loss. Ann Statist. (1976) 4:223–6. doi: 10.1214/aos/1176343356
6. Stein C. Estimation of the mean of a multivariate normal distribution. Ann Statist. (1981) 9:1135–51. doi: 10.1214/aos/1176345632
7. Arnold FS. The Theory of Linear Models and Multivariate Analysis. New York, NY: John Wiley and Sons (1981). p. 9–10.
8. Norouzirad M, Arashi M. Preliminary test and Stein-type shrinkage ridge estimators in robust regression. Statist Papers. (2019) 60:1849–82. doi: 10.1007/s00362-017-0899-3
9. Kashani M, Rabiei MR, Arashi M. An integrated shrinkage strategy for improving efficiency in fuzzy regression modeling. Soft Comput. (2021) 25:8095–107. doi: 10.1007/s00500-021-05690-9
10. Benkhaled A, Hamdaoui A. General classes of shrinkage estimators for the multivariate normal mean with unknown variance: minimaxity and limit of risks ratios. Kragujevac J Math. (2022) 46:193–213. doi: 10.46793/KgJMat2202.193B
11. Strawderman WE. Proper Bayes minimax estimators of the multivariate normal mean. Ann Math Statist. (1971) 42:385–8. doi: 10.1214/aoms/1177693528
12. Lindley D, Smith AFM. Bayes estimates for the linear model (with discussion). J Roy Statist Soc. (1972) 34:1–41. doi: 10.1111/j.2517-6161.1972.tb00885.x
13. Efron B, Morris CN. Stein's estimation rule and its competitors: an empirical Bayes approach. J Amer Statist Assoc. (1973) 68:117–30. doi: 10.1080/01621459.1973.10481350
14. Hamdaoui A, Benkhaled A, Mezouar N. Minimaxity and limits of risks ratios of shrinkage estimators of a multivariate normal mean in the Bayesian case. Stat Optim Inf Comput. (2020) 8:507–20. doi: 10.19139/soic-2310-5070-735
15. Alahmadi A, Benkhaled A, Almutiry W. On the effectiveness of the new estimators obtained from the Bayes estimator. AIMS Math. (2025) 10:5762–84. doi: 10.3934/math.2025265
16. Baranchik AJ. Multiple Regression and Estimation of the Mean of a Multivariate Normal Distribution. Technical Report. Stanford, CA: Stanford University (1964). p. 51.
17. Hamdaoui A, Benmansour D. Asymptotic properties of risks ratios of shrinkage estimators. Hacet J Math Stat. (2015) 44:1181–95.
18. Hamdaoui A. On shrinkage estimators improving the positive part of James-Stein estimator. Demonstratio Math. (2021) 54:462–73. doi: 10.1515/dema-2021-0038
19. Zellner A. Bayesian and non-Bayesian estimation using balanced loss functions. In: Berger JO, Gupta S, editors. Statistical Decision Theory and Methods. New York, NY: Springer (1994). p. 337–90. doi: 10.1007/978-1-4612-2618-5_28
20. Sanjari Farsipour N, Asgharzadeh A. Estimation of a normal mean relative to balanced loss functions. Statist Papers. (2004) 45:279–86. doi: 10.1007/BF02777228
21. Selahattin K, Issam D. The optimal extended balanced loss function estimators. J Comput Appl Math. (2019) 345:86–98. doi: 10.1016/j.cam.2018.06.021
22. Nimet O, Selahattin K. Risk performance of some shrinkage estimators. Commun Stat Simul Comput. (2019) 50:323–42. doi: 10.1080/03610918.2018.1554116
23. Karamikabir H, Afshari M. Generalized Bayesian shrinkage and wavelet estimation of location parameter for spherical distribution under balanced-type loss: minimaxity and admissibility. J Multivariate Anal. (2020) 177:110–20. doi: 10.1016/j.jmva.2019.104583
24. Lahoucine H, Eric M, Idir O. On shrinkage estimation of a spherically symmetric distribution for balanced loss function. J Multivariate Anal. (2021) 186:1–11. doi: 10.1016/j.jmva.2021.104794
25. Karamikabir H, Afshari M, Lak F. Wavelet threshold based on Stein's unbiased risk estimators of restricted location parameter in multivariate normal. J Appl Stat. (2021) 48:1712–29. doi: 10.1080/02664763.2020.1772209
26. Benkhaled A, Hamdaoui A, Almutiry W, Alshahrani M, Terbeche M. A study of minimax shrinkage estimators dominating the James-Stein estimator under the balanced loss function. Open Math. (2022) 20:1–11. doi: 10.1515/math-2022-0008
27. Hamdaoui A, Almutiry W, Terbeche M, Benkhaled A. Comparison of risk ratios of shrinkage estimators in high dimensions. Mathematics. (2021) 52:1–14. doi: 10.3390/math10010052
28. Hamdaoui A, Terbeche M, Benkhaled A. On shrinkage estimators improving the James-Stein estimator under balanced loss function. Pak J Stat Oper Res. (2021) 17:711–27. doi: 10.18187/pjsor.v17i3.3663
29. Casella G, Hwang JT. Limit expressions for the risk of James-Stein estimators. Canad J Statist. (1982) 4:305–9. doi: 10.2307/3556196
Keywords: balanced loss function, multivariate normal distribution, positive part of James-Stein estimator, risk function, shrinkage estimator
Citation: ALoraini NM, Hamdaoui A and Benkhaled A (2026) A new approach for shrinkage estimators of the multivariate normal mean vector under the balanced loss criterion. Front. Appl. Math. Stat. 11:1709968. doi: 10.3389/fams.2025.1709968
Received: 21 September 2025; Revised: 27 November 2025;
Accepted: 29 November 2025; Published: 12 January 2026.
Edited by:
Ronald Wesonga, Sultan Qaboos University, Oman
Reviewed by:
Iman Al Hasani, Sultan Qaboos University, Oman
Mohd Alodat, Sultan Qaboos University, Oman
Copyright © 2026 ALoraini, Hamdaoui and Benkhaled. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Najla M. ALoraini, arienie@qu.edu.sa
Najla M. ALoraini1*