
Minimax Estimation of the Scale Parameter of Inverse Rayleigh Distribution under Symmetric and Asymmetric Loss Functions

Proloy Banerjee* and Shreya Bhunia

Department of Mathematics and Statistics, Aliah University, Kolkata, India proloy.stat@gmail.com and shreyabhunia.stat@gmail.com

Corresponding author E-mail: proloy.stat@gmail.com

Abstract

In this article, minimax estimation of the scale parameter λ of the inverse Rayleigh distribution is performed under symmetric (QLF) and asymmetric (SLELF and GELF) loss functions by applying Lehmann's theorem (1950). An extended Jeffrey's prior and a gamma prior are assumed to derive the minimax estimators under each of the considered loss functions. An extensive simulation study is carried out to compare the performance of the minimax estimators with the maximum likelihood estimator (MLE), which is traditionally used as a classical estimator, on the basis of biases and mean squared errors (MSE). The obtained results suggest that under the assumption of the extended Jeffrey's prior, minimax estimators with positive c values are superior to the MLE. Moreover, it is found that in most of the cases the minimax estimator under the quadratic loss function (QLF) performs satisfactorily under the assumption of the gamma prior.

Keywords: Minimax estimator, squared log error loss function, quadratic loss function, general entropy loss function, extended Jeffrey's prior, risk function

1. Introduction

Minimax estimation is a Bayesian estimation approach in statistical inference, introduced by Wald [1] in connection with the concepts of game theory. It brings a different dimension to statistical estimation and improves the point estimation process. In recent years, a vast amount of research has been devoted to studying the minimax estimators of some well known distributions. Roy et al. [4] developed the minimax estimation of the scale parameter of the Weibull distribution using quadratic and MLINEX loss functions. The minimax estimator of the scale parameter of the Rayleigh distribution under the quadratic loss function was investigated in [5]. Li [3] discussed the minimax estimation of the parameter of the Maxwell distribution under different loss functions considering a non-informative quasi-prior density. The problem of finding the minimax estimator of the scale parameter in a class of lifetime distributions under different loss functions is discussed in [2].

The fundamental difference between the classical and minimax estimation approaches is that in classical estimation the parameter is assumed to be a fixed point, whereas in minimax estimation the parameter of interest is considered to be a random variable. The most important element in the minimax approach is the specification of a distribution function on the parameter space, called the prior distribution. In addition to the prior distribution, the assumed loss functions also have a significant impact on the minimax estimator for a given model. Recently, inverted versions of standard probability distributions have received considerable attention from many researchers, including [6], [7], [8]. In this study, our concern is to derive the minimax estimator of the unknown

scale parameter λ of the inverse Rayleigh distribution, which has the following probability density function

f(x; λ) = (2λ/x³) exp(-λ/x²),  x > 0, λ > 0.  (1)

For modeling lifetime data, the inverse Rayleigh (IR) distribution, a special case of the inverse Weibull (IW) distribution, has many applications in reliability studies. Voda [9] discussed some statistical properties of the IR distribution, such as the maximum likelihood estimator and confidence intervals. Bayes estimators for the parameter of the inverse Rayleigh distribution under squared error and zero-one loss functions based on lower record values were studied by Soliman et al. [10]. Bayesian estimation of the parameter and reliability function of an inverse Rayleigh distribution under symmetric and asymmetric linear exponential loss functions using a non-informative prior has been carried out in [11].

The aim of this article is to make a comparison between the maximum likelihood estimator (MLE) and the minimax estimators of the scale parameter of the inverse Rayleigh distribution under three different loss functions: the quadratic loss function, which is symmetric in nature, and two asymmetric loss functions, namely the squared log error and general entropy loss functions. As prior knowledge of the unknown scale parameter λ, we consider both a non-informative and an informative prior. In the non-informative case our choice is the extended Jeffrey's prior, which is a generalization of Jeffrey's prior, and as informative prior a gamma prior is chosen, which is also conjugate in structure. The Bayes estimates of λ as well as the risk functions are derived under the mentioned loss functions, and further, by applying Lehmann's theorem, it is shown that the obtained estimators are also the minimax estimators.

The rest of the article is structured in the following manner. In section 2, the maximum likelihood estimator for the scale parameter λ is derived. In section 3, we discuss the prior and posterior distributions of λ by considering the non-informative and informative priors respectively. Bayes estimators under the quadratic loss (QLF), squared log error loss (SLELF) and general entropy loss (GELF) functions for the scale parameter of the inverse Rayleigh distribution are developed in section 4. In section 5, minimax estimators under the different loss functions are discussed. An extensive simulation study for different parameter choices is performed and results are presented in section 6. Finally, in section 7 the conclusion of the paper is provided.

2. Maximum Likelihood Estimation

Several desirable properties of a good estimator, such as consistency, asymptotic efficiency and the invariance property, are satisfied by the maximum likelihood estimator. This makes the MLE one of the most frequently used techniques for parameter estimation. Let x₁, x₂, …, xₙ be a random sample of size n from the density function (1). Then the likelihood function is given by

L(xᵢ; λ) = (2λ)ⁿ ∏_{i=1}^n (1/xᵢ³) exp(-λ Σ_{i=1}^n 1/xᵢ²).  (2)

Taking logarithm, the log-likelihood function becomes

ln L(xᵢ; λ) = n ln 2 + n ln λ + Σ_{i=1}^n ln(1/xᵢ³) - λ Σ_{i=1}^n 1/xᵢ².

Now, by differentiating the above equation with respect to λ and equating it to zero, we obtain the MLE of λ as

λ̂_MLE = n / Σ_{i=1}^n (1/xᵢ²).  (3)
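As a quick numerical illustration of the closed form (3) (a sketch with a made-up sample, not data from the paper):

```python
import numpy as np

# Hypothetical sample, assumed to come from an inverse Rayleigh model
x = np.array([0.9, 1.2, 0.7, 1.5, 1.1])

# Closed-form MLE of the scale parameter: lambda_hat = n / sum(1/x_i^2)
n = len(x)
T = np.sum(1.0 / x**2)     # sufficient statistic sum of 1/x_i^2
lam_mle = n / T
print(round(lam_mle, 3))
```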

3. Prior and Posterior density function of Scale parameter λ

Specification of a prior distribution over the parameter space is a substantial part of deriving the posterior probability distribution under the Bayesian paradigm. The posterior distribution is defined as proportional to the likelihood function of the data multiplied by the prior information on the parameter(s), and it is used for further inference and prediction. In the literature there is no prescription for the choice of prior from which one could conclude the superiority of one prior over the others. Generally, the selection of prior(s) is based on one's subjective knowledge and beliefs. However, it is preferable to use an informative prior when sufficient information about the parameter(s) of interest is available; otherwise it is better to use a non-informative prior [12]. Here we consider both types of prior distributions for estimating the unknown scale parameter λ.

3.1. Posterior distribution under the assumption of extended Jeffrey's prior

The extended Jeffrey's prior was proposed by Al-Kutobi [13] and is given as

π(λ) ∝ [I(λ)]^c;  c ∈ ℝ⁺,

where I(λ) = -nE[∂² ln f(x; λ)/∂λ²] is the Fisher information. From the probability model (1) we find I(λ) = n/λ², and therefore the extended Jeffrey's prior becomes

π₁(λ) ∝ (1/λ²)^c.  (4)

The prior distribution (4) and the likelihood function (2) are combined to get the posterior distribution of λ, which is given by

π(λ|X) = ((Σ_{i=1}^n 1/xᵢ²)^{n-2c+1} / Γ(n - 2c + 1)) λ^{n-2c} exp(-λ Σ_{i=1}^n 1/xᵢ²).

Therefore, the distribution of λ|X can be written as G(n - 2c + 1, Σ_{i=1}^n 1/xᵢ²).

Remark 1. The extended Jeffrey's prior is a generalized version of many non-informative priors. We get Jeffrey's prior if we set c = 1/2, and it reduces to Hartigan's prior when c = 3/2.

3.2. Posterior distribution under the assumption of Gamma prior

The gamma distribution with known hyperparameters α and β is considered here as an informative prior for the parameter λ. For the inverse Rayleigh distribution, the gamma prior is also the conjugate prior, as the posterior distribution belongs to the gamma family. For λ ~ Gamma(α, β) the prior density becomes

π₂(λ) = (β^α / Γ(α)) λ^{α-1} e^{-βλ};  λ > 0, α > 0, β > 0.  (5)

Now, combining the prior distribution (5) and the likelihood function (2), the posterior distribution of λ takes the form

π₂(λ|X) = λ^{n+α-1} exp(-λ[Σ_{i=1}^n 1/xᵢ² + β]) / ∫₀^∞ λ^{n+α-1} exp(-λ[Σ_{i=1}^n 1/xᵢ² + β]) dλ

= ((Σ_{i=1}^n 1/xᵢ² + β)^{n+α} / Γ(n + α)) λ^{n+α-1} exp(-λ(Σ_{i=1}^n 1/xᵢ² + β)).

Therefore, the distribution of λ|X can be written as G(n + α, Σ_{i=1}^n 1/xᵢ² + β).
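Since both posteriors are gamma distributions, their summaries can be read off directly from scipy.stats.gamma. A minimal sketch (the sample and the values of c, α, β are our own illustrative choices, not from the paper):

```python
import numpy as np
from scipy.stats import gamma

x = np.array([0.9, 1.2, 0.7, 1.5, 1.1])   # hypothetical sample
n = len(x)
T = np.sum(1.0 / x**2)                    # sum of 1/x_i^2

# Extended Jeffrey's prior: lambda | X ~ Gamma(shape = n - 2c + 1, rate = T)
c = 0.5
post_jeffreys = gamma(a=n - 2*c + 1, scale=1.0/T)

# Gamma(alpha, beta) prior: lambda | X ~ Gamma(shape = n + alpha, rate = T + beta)
alpha, beta = 0.5, 0.5
post_gamma = gamma(a=n + alpha, scale=1.0/(T + beta))

print(post_jeffreys.mean(), post_gamma.mean())
```

Note that scipy parameterizes the gamma by a scale, so the rate parameters above enter as their reciprocals.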

4. Bayes Estimation of Scale parameter λ under different Loss functions

The selection of an appropriate loss function L(λ̂, λ) is a major aspect of Bayesian inference for the estimation of an unknown parameter. Most research work on point estimation and prediction uses the squared error loss, due to its elegant statistical properties and mathematical simplicity: it is symmetric in nature and assigns equal importance to overestimation and underestimation of the parameter. In many practical situations, however, the loss is not symmetric and the use of the squared error loss function (SELF) is inappropriate. Basu and Ebrahimi [14] pointed out that overestimation and underestimation can have different consequences. Thus, in order to make the statistical inference more practical and applicable, asymmetric loss functions are often used. In the present study we consider both symmetric and asymmetric loss functions to derive the Bayes estimate of λ.

4.1. Estimation under Quadratic loss function

Here we consider the quadratic loss function (QLF) for obtaining the Bayes estimate under both the non-informative and informative priors. It is well known that SELF is useful for the estimation of a location parameter, but for a scale parameter a modified form of this loss, known as QLF, is preferable; it is defined as follows [15]:

L₁(λ̂, λ) = ((λ̂ - λ)/λ)²,

which is a non-negative, symmetric and continuous loss function. The risk function under QLF is denoted by R_QLF(λ̂, λ) and is defined as

R_QLF(λ̂, λ) = E[L₁(λ̂, λ)] = 1 - 2λ̂ E(λ⁻¹|X) + λ̂² E(λ⁻²|X).  (6)

By differentiating the above risk function with respect to λ̂ and equating it to zero, we get the Bayes estimate for which the risk is minimized. Hence under QLF the Bayes estimate of λ takes the form

λ̂_QLF = E(λ⁻¹|X) / E(λ⁻²|X).  (7)

Now, based on the extended Jeffrey's prior we have

E(λ⁻¹|X) = (1/(n - 2c)) Σ_{i=1}^n 1/xᵢ²  and

E(λ⁻²|X) = (Γ(n - 2c - 1)/Γ(n - 2c + 1)) (Σ_{i=1}^n 1/xᵢ²)².

Therefore, by putting these values in (7), we obtain the Bayes estimate of λ under QLF based on the extended Jeffrey's prior as

λ̂_QLF1 = (n - 2c - 1) / Σ_{i=1}^n (1/xᵢ²).  (8)

Similarly, based on the assumption of the gamma prior we have

E(λ⁻¹|X) = (1/(n + α - 1)) (Σ_{i=1}^n 1/xᵢ² + β)  and

E(λ⁻²|X) = (Γ(n + α - 2)/Γ(n + α)) (Σ_{i=1}^n 1/xᵢ² + β)².

After putting these values in (7), we find the Bayes estimate of λ under QLF based on the gamma prior as

λ̂_QLF2 = (n + α - 2) / (Σ_{i=1}^n 1/xᵢ² + β).  (9)
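The closed forms (8) and (9) translate directly into code. A sketch (the function names and trial values are ours, purely for illustration):

```python
import numpy as np

def qlf_jeffreys(x, c):
    """Minimax estimate (8) under QLF with the extended Jeffrey's prior."""
    x = np.asarray(x)
    n, T = len(x), np.sum(1.0 / x**2)
    return (n - 2*c - 1) / T

def qlf_gamma(x, alpha, beta):
    """Minimax estimate (9) under QLF with the Gamma(alpha, beta) prior."""
    x = np.asarray(x)
    n, T = len(x), np.sum(1.0 / x**2)
    return (n + alpha - 2) / (T + beta)

x = [0.9, 1.2, 0.7, 1.5, 1.1]   # hypothetical sample
print(qlf_jeffreys(x, c=0.5), qlf_gamma(x, alpha=0.5, beta=0.5))
```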

4.2. Estimation under Squared log error loss function

In order to obtain the Bayes estimate of λ, we consider the squared log error loss function (SLELF), which was proposed by Brown [16] and is defined as

L₂(λ̂, λ) = (ln λ̂ - ln λ)² = (ln(λ̂/λ))²,

where both λ̂ and λ are positive. This is a balanced loss function, with L₂(λ̂, λ) → ∞ as λ̂ → 0 or ∞. A balanced loss function considers both estimation error and goodness of fit, while an unbalanced loss function considers only estimation error [18]. This loss is asymmetric [17]: it is convex for λ̂/λ ≤ e and concave otherwise, but its risk function attains a minimum at λ̂_SLELF.

The risk function under SLELF is denoted by R_SLELF(λ̂, λ) and expressed as

R_SLELF(λ̂, λ) = E[L₂(λ̂, λ)] = (ln λ̂)² - 2 ln λ̂ E(ln λ|X) + E((ln λ)²|X).  (10)

Now, by differentiating the risk function with respect to λ̂ and equating it to zero, we find the Bayes estimate for which the above risk is minimized. Hence under SLELF, the Bayes estimate of λ has the expression

λ̂_SLELF = exp[E(ln λ|X)].  (11)

So, we calculate E(ln λ|X) by using the posterior densities derived under the extended Jeffrey's prior and the gamma prior respectively.

Hence, under the assumption of the extended Jeffrey's prior,

E(ln λ|X) = ψ(n - 2c + 1) - ln(Σ_{i=1}^n 1/xᵢ²),  (12)

E((ln λ)²|X) = Γ''(n - 2c + 1)/Γ(n - 2c + 1) - 2ψ(n - 2c + 1) ln(Σ_{i=1}^n 1/xᵢ²) + (ln(Σ_{i=1}^n 1/xᵢ²))²,  (13)

where ψ(n - 2c + 1) = Γ'(n - 2c + 1)/Γ(n - 2c + 1) is the digamma function. Similarly, under the gamma prior, the expressions are

E(ln λ|X) = ψ(n + α) - ln(Σ_{i=1}^n 1/xᵢ² + β),  (14)

E((ln λ)²|X) = Γ''(n + α)/Γ(n + α) - 2ψ(n + α) ln(Σ_{i=1}^n 1/xᵢ² + β) + (ln(Σ_{i=1}^n 1/xᵢ² + β))²,  (15)

where ψ(n + α) = Γ'(n + α)/Γ(n + α) is the digamma function.

Therefore, to obtain the Bayes estimate of the parameter λ under SLELF based on both prior assumptions, we substitute the expressions (12) and (14) respectively in (11). After simplification, we get

λ̂_SLELF1 = e^{ψ(n-2c+1)} / Σ_{i=1}^n (1/xᵢ²)  (16)  and

λ̂_SLELF2 = e^{ψ(n+α)} / (Σ_{i=1}^n 1/xᵢ² + β).  (17)
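Estimates (16) and (17) only require the digamma function, available in scipy.special. A sketch under the same made-up sample assumption as before:

```python
import numpy as np
from scipy.special import digamma

def slelf_jeffreys(x, c):
    """Minimax estimate (16) under SLELF with the extended Jeffrey's prior."""
    x = np.asarray(x)
    n, T = len(x), np.sum(1.0 / x**2)
    return np.exp(digamma(n - 2*c + 1)) / T

def slelf_gamma(x, alpha, beta):
    """Minimax estimate (17) under SLELF with the Gamma(alpha, beta) prior."""
    x = np.asarray(x)
    n, T = len(x), np.sum(1.0 / x**2)
    return np.exp(digamma(n + alpha)) / (T + beta)

x = [0.9, 1.2, 0.7, 1.5, 1.1]   # hypothetical sample
print(slelf_jeffreys(x, c=0.5), slelf_gamma(x, alpha=0.5, beta=0.5))
```

Both functions implement exp(E(ln λ|X)) for the corresponding gamma posterior.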

4.3. Estimation under General entropy loss function

Another well known asymmetric loss function is the general entropy loss function (GELF), proposed by Calabria and Pulcini [19]. Several authors, e.g. [20], [21], refer to this loss as the modified linear exponential (MLINEX) loss function; it is defined as

L₃(λ̂, λ) = w[(λ̂/λ)^γ - γ ln(λ̂/λ) - 1];  w > 0, γ ≠ 0.

The constant γ involved in the loss function is the shape parameter and indicates the deviation from symmetry. Clearly, when γ = 1 this loss reduces to the entropy loss function, which has also been used by several authors, e.g. [22], [23]. Dey [11] mentioned that if (λ̂ - λ) is used in place of ln(λ̂/λ), i.e. ln λ̂ - ln λ, the linear exponential (LINEX) loss function proposed by Zellner [24] is obtained.

Now, considering GELF, the expression for the risk function, denoted R_GELF(λ̂, λ), is given below:

R_GELF(λ̂, λ) = E[L₃(λ̂, λ)] = w λ̂^γ E(λ^{-γ}|X) - wγ ln λ̂ + wγ E(ln λ|X) - w.  (18)

To minimize the risk function, we differentiate the above equation with respect to λ̂ and equate it to zero. After simplification, we have

λ̂_GELF = [E(λ^{-γ}|X)]^{-1/γ}.  (19)

Now, we solve the above expression under the extended Jeffrey's prior and the gamma prior respectively. Under the extended Jeffrey's prior,

E(λ^{-γ}|X) = (Γ(n - 2c - γ + 1)/Γ(n - 2c + 1)) (Σ_{i=1}^n 1/xᵢ²)^γ,

and under the gamma prior,

E(λ^{-γ}|X) = (Γ(n + α - γ)/Γ(n + α)) (Σ_{i=1}^n 1/xᵢ² + β)^γ.

After substituting the values of E(λ^{-γ}|X) in (19), we have the following Bayes estimators under the non-informative and informative priors respectively:

λ̂_GELF1 = (Γ(n - 2c - γ + 1)/Γ(n - 2c + 1))^{-1/γ} · 1/Σ_{i=1}^n (1/xᵢ²),  (20)

λ̂_GELF2 = (Γ(n + α - γ)/Γ(n + α))^{-1/γ} · 1/(Σ_{i=1}^n 1/xᵢ² + β).  (21)
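The gamma-function ratios in (20) and (21) are best evaluated on the log scale via gammaln to avoid overflow for large n. A sketch (function names and values are illustrative):

```python
import numpy as np
from scipy.special import gammaln

def gelf_jeffreys(x, c, gam):
    """Minimax estimate (20) under GELF with the extended Jeffrey's prior."""
    x = np.asarray(x)
    n, T = len(x), np.sum(1.0 / x**2)
    log_ratio = gammaln(n - 2*c - gam + 1) - gammaln(n - 2*c + 1)
    return np.exp(-log_ratio / gam) / T

def gelf_gamma(x, alpha, beta, gam):
    """Minimax estimate (21) under GELF with the Gamma(alpha, beta) prior."""
    x = np.asarray(x)
    n, T = len(x), np.sum(1.0 / x**2)
    log_ratio = gammaln(n + alpha - gam) - gammaln(n + alpha)
    return np.exp(-log_ratio / gam) / (T + beta)

x = [0.9, 1.2, 0.7, 1.5, 1.1]   # hypothetical sample
print(gelf_jeffreys(x, c=0.5, gam=1.0))
```

For γ = 1 and the extended Jeffrey's prior, the ratio collapses to 1/(n - 2c), so the estimate reduces to (n - 2c)/Σ 1/xᵢ².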

5. Minimax Estimators

In this section, we derive the minimax estimators of the scale parameter λ under the symmetric (QLF) and asymmetric (SLELF and GELF) loss functions. The Bayes estimators are derived first, and then the minimax estimators are obtained by applying Lehmann's theorem, which can be stated as follows.

Theorem 1 (Lehmann). Let τ = {F_θ; θ ∈ Θ} be a family of distribution functions and D a class of estimators of θ. Suppose d* ∈ D is a Bayes estimator against a prior distribution ξ*(θ) on the parameter space Θ and the risk function R(d*, θ) is constant on Θ; then d* is a minimax estimator of θ.

The task is therefore to check whether the risk functions developed in section 4 are constant for the corresponding Bayes estimators. If the risk functions are constant, then by Lehmann's theorem the respective Bayes estimators are minimax estimators.

First of all, to verify Lehmann's theorem we consider the quadratic loss function. The risk function (6) is evaluated at the Bayes estimators (8) and (9) for the non-informative and informative priors respectively. The risk function R_QLF(λ, λ̂) for the estimator λ̂_QLF1 becomes

R_QLF(λ, λ̂_QLF1) = 1 - 2 λ̂_QLF1 E(λ⁻¹|X) + λ̂²_QLF1 E(λ⁻²|X)

= 1 - 2 · ((n - 2c - 1)/Σ_{i=1}^n 1/xᵢ²) · ((1/(n - 2c)) Σ_{i=1}^n 1/xᵢ²) + ((n - 2c - 1)/Σ_{i=1}^n 1/xᵢ²)² · (Γ(n - 2c - 1)/Γ(n - 2c + 1)) (Σ_{i=1}^n 1/xᵢ²)²

= 1 - 2(n - 2c - 1)/(n - 2c) + (n - 2c - 1)/(n - 2c)

= 1/(n - 2c),

which is a constant, and

R_QLF(λ, λ̂_QLF2) = 1 - 2 λ̂_QLF2 E(λ⁻¹|X) + λ̂²_QLF2 E(λ⁻²|X)

= 1 - 2 · ((n + α - 2)/(Σ_{i=1}^n 1/xᵢ² + β)) · ((1/(n + α - 1))(Σ_{i=1}^n 1/xᵢ² + β)) + ((n + α - 2)/(Σ_{i=1}^n 1/xᵢ² + β))² · (Γ(n + α - 2)/Γ(n + α)) (Σ_{i=1}^n 1/xᵢ² + β)²

= 1 - 2(n + α - 2)/(n + α - 1) + (n + α - 2)/(n + α - 1)

= 1/(n + α - 1),

which is also a constant.

Therefore, as per Lehmann's theorem, λ̂_QLF1 and λ̂_QLF2 are the minimax estimators of the scale parameter λ under the quadratic loss function for the extended Jeffrey's prior and the gamma prior respectively.


Next, for SLELF, we use the Bayes estimators (16) and (17) in (10) to obtain the risk functions corresponding to λ̂_SLELF1 and λ̂_SLELF2 under the non-informative and informative priors respectively. Writing T = Σ_{i=1}^n 1/xᵢ² for brevity,

R_SLELF(λ, λ̂_SLELF1) = (ln λ̂_SLELF1)² - 2 ln λ̂_SLELF1 E(ln λ|X) + E((ln λ)²|X)

= (ψ(n - 2c + 1) - ln T)² - 2(ψ(n - 2c + 1) - ln T)² + [Γ''(n - 2c + 1)/Γ(n - 2c + 1) - 2ψ(n - 2c + 1) ln T + (ln T)²]

= Γ''(n - 2c + 1)/Γ(n - 2c + 1) - (ψ(n - 2c + 1))²,

which is a constant, and

R_SLELF(λ, λ̂_SLELF2) = (ln λ̂_SLELF2)² - 2 ln λ̂_SLELF2 E(ln λ|X) + E((ln λ)²|X)

= (ψ(n + α) - ln(T + β))² - 2(ψ(n + α) - ln(T + β))² + [Γ''(n + α)/Γ(n + α) - 2ψ(n + α) ln(T + β) + (ln(T + β))²]

= Γ''(n + α)/Γ(n + α) - (ψ(n + α))²,

which is also a constant. Therefore, according to Lehmann's theorem, the Bayes estimators λ̂_SLELF1 and λ̂_SLELF2 are the minimax estimators under SLELF.

Finally, we calculate the risk functions R_GELF(λ, λ̂) for the Bayes estimators λ̂_GELF1 and λ̂_GELF2 respectively. Again writing T = Σ_{i=1}^n 1/xᵢ²,

R_GELF(λ, λ̂_GELF1) = w λ̂^γ_GELF1 E(λ^{-γ}|X) - wγ ln λ̂_GELF1 + wγ E(ln λ|X) - w

= w (Γ(n - 2c - γ + 1)/Γ(n - 2c + 1))^{-1} T^{-γ} · (Γ(n - 2c - γ + 1)/Γ(n - 2c + 1)) T^γ - wγ[-(1/γ) ln(Γ(n - 2c - γ + 1)/Γ(n - 2c + 1)) - ln T] + wγ[ψ(n - 2c + 1) - ln T] - w

= w + w ln(Γ(n - 2c - γ + 1)/Γ(n - 2c + 1)) + wγ ln T + wγ ψ(n - 2c + 1) - wγ ln T - w

= w ln(Γ(n - 2c - γ + 1)/Γ(n - 2c + 1)) + wγ ψ(n - 2c + 1),

which is a constant, and

R_GELF(λ, λ̂_GELF2) = w λ̂^γ_GELF2 E(λ^{-γ}|X) - wγ ln λ̂_GELF2 + wγ E(ln λ|X) - w

= w - wγ[-(1/γ) ln(Γ(n + α - γ)/Γ(n + α)) - ln(T + β)] + wγ[ψ(n + α) - ln(T + β)] - w

= w ln(Γ(n + α - γ)/Γ(n + α)) + wγ ψ(n + α),

which is also a constant.

Therefore, according to Lehmann's theorem, both the Bayes estimators λ̂_GELF1 and λ̂_GELF2 are the minimax estimators of λ under the extended Jeffrey's prior and the gamma prior respectively. Having derived the minimax estimators under the various loss functions, we compare their performances numerically in the next section.
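Lehmann's theorem hinges on the risk being constant over the parameter space. As a quick Monte Carlo sanity check (a sketch; the settings are our own), the frequentist risk of λ̂_QLF1 can be estimated at two different values of λ and compared, using the fact that if X follows (1) then 1/X² is exponential with rate λ:

```python
import numpy as np

rng = np.random.default_rng(42)

def empirical_qlf_risk(lam, n, c, reps=20000):
    """Monte Carlo estimate of E[((lam_hat - lam)/lam)^2] for the
    QLF minimax estimator (8); simulates y_i = 1/x_i^2 ~ Exp(lam)."""
    y = rng.exponential(scale=1.0/lam, size=(reps, n))
    lam_hat = (n - 2*c - 1) / y.sum(axis=1)
    return np.mean(((lam_hat - lam) / lam)**2)

# The estimated risk should be (approximately) the same at both lambdas
r1 = empirical_qlf_risk(0.75, n=25, c=0.5)
r2 = empirical_qlf_risk(3.00, n=25, c=0.5)
print(r1, r2)
```

Because the loss is scale-invariant in λ̂/λ, the two estimates agree up to Monte Carlo noise, illustrating the constant-risk property that the theorem requires.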


6. Simulation Study

In this section, numerical comparisons between the minimax estimators and the maximum likelihood estimator are conducted through an extensive Monte Carlo simulation study. The performance of the estimators is evaluated on the basis of bias and mean squared error (MSE). The initial choices of the scale parameter are λ = 0.75 and 1. We generate random samples of sizes n = 10, 25, 50, 75, 100 from (1) by the inverse transformation method and replicate the process K = 10,000 times. Based on these replicated samples, the bias and MSE of the estimators are calculated by the following formulae:

Table 1: Estimated values, Bias and MSE of different estimators under the extended Jeffrey's prior when λ = 0.75.

sample            c=-1                  c=0.5                 c=1                   c=1.5
sizes(n) criteria λ̂MLE λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF

Estimate 0.833 0.916 1.042 1.000 0.667 0.792 0.750 0.583 0.709 0.667 0.500 0.625 0.583
10 Bias 0.083 0.166 0.292 0.250 -0.083 0.042 0.000 -0.167 -0.041 -0.083 -0.250 -0.125 -0.167
MSE 0.093 0.132 0.220 0.187 0.062 0.080 0.070 0.070 0.064 0.062 0.094 0.064 0.070

Estimate 0.779 0.810 0.857 0.841 0.717 0.764 0.748 0.686 0.732 0.717 0.654 0.701 0.686

25 Bias 0.029 0.060 0.107 0.091 -0.033 0.014 -0.002 -0.064 -0.018 -0.033 -0.096 -0.049 -0.064

MSE 0.027 0.032 0.043 0.039 0.023 0.025 0.024 0.024 0.023 0.023 0.027 0.023 0.024

Estimate 0.763 0.778 0.801 0.794 0.733 0.756 0.748 0.717 0.740 0.733 0.702 0.725 0.717

50 Bias 0.013 0.028 0.051 0.044 -0.017 0.006 -0.002 -0.033 -0.010 -0.017 -0.048 -0.025 -0.033

MSE 0.012 0.013 0.016 0.015 0.011 0.012 0.012 0.012 0.011 0.011 0.013 0.012 0.012

Estimate 0.758 0.768 0.783 0.778 0.738 0.753 0.748 0.727 0.743 0.738 0.717 0.733 0.727

75 Bias 0.008 0.018 0.033 0.028 -0.012 0.003 -0.002 -0.023 -0.007 -0.012 -0.033 -0.017 -0.023

MSE 0.008 0.008 0.010 0.009 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008

Estimate 0.756 0.764 0.775 0.771 0.741 0.752 0.748 0.733 0.745 0.741 0.726 0.737 0.733

100 Bias 0.006 0.014 0.025 0.021 -0.009 0.002 -0.002 -0.017 -0.005 -0.009 -0.024 -0.013 -0.017

MSE 0.006 0.006 0.007 0.007 0.006 0.006 0.006 0.006 0.006 0.006 0.006 0.006 0.006

Bias(λ̂) = (1/K) Σ_{i=1}^K (λ̂ᵢ - λ)  and  MSE(λ̂) = (1/K) Σ_{i=1}^K (λ̂ᵢ - λ)².
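The inverse-transform sampling step and the bias/MSE formulae above can be sketched as follows for the MLE (λ = 0.75, n = 25 mirror one cell of Table 1; exact numbers vary with the random seed):

```python
import numpy as np

rng = np.random.default_rng(1)

def rinv_rayleigh(lam, size):
    """Inverse-transform sampling: F(x) = exp(-lam/x^2), so
    x = sqrt(lam / (-log(U))) for U ~ Uniform(0, 1)."""
    u = rng.uniform(size=size)
    return np.sqrt(lam / (-np.log(u)))

lam_true, n, K = 0.75, 25, 10000
samples = rinv_rayleigh(lam_true, size=(K, n))    # K replicated samples
lam_mle = n / np.sum(1.0 / samples**2, axis=1)    # MLE (3) per replicate

bias = np.mean(lam_mle - lam_true)
mse = np.mean((lam_mle - lam_true)**2)
print(round(bias, 3), round(mse, 3))
```

The results should land close to the corresponding n = 25 MLE entries of Table 1 (bias ≈ 0.029, MSE ≈ 0.027).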

In the case of classical estimation, λ̂_MLE is easily obtained K times from expression (3) for each chosen λ and sample size. In the Bayesian setup, to obtain the minimax estimators of λ we consider the three loss functions QLF, SLELF and GELF. For GELF, the value of the shape parameter is fixed at γ = 1. Under the assumption of the extended Jeffrey's prior we choose different values of c, namely c = -1, 0.5, 1, 1.5. Note that when c = 0.5 the extended Jeffrey's prior simplifies to Jeffrey's prior, and for c = 1.5 it reduces to Hartigan's prior. Also, in this empirical study, the hyperparameter choices are (α, β) = (0.5, 0.5), (0.5, 5.0), (1.0, 0.25) and (5.0, 5.0) under the assumption of the gamma prior. For every combination of (α, β), we calculate the minimax estimators of λ under the three loss functions. Finally, the average minimax and MLE estimates with their corresponding biases

and MSE values are summarized in Tables 1-2 and 3-4 under the extended Jeffrey's prior and gamma prior respectively.

Figure 1: MSEs of MLE and minimax estimators under extended Jeffrey's prior with different values of c when λ = 0.75; panels: (a) c = -1, (b) c = 0.5, (c) c = 1, (d) c = 1.5.

In certain cases, a graphical display conveys the information better. The aim is to display the comparative findings graphically in order to provide a comprehensive evaluation of the estimators based on the biases and MSEs reported in the tables. The MSE values are plotted on the vertical axis against increasing sample size on the horizontal axis. For illustration we provide the graphs only for λ = 0.75, under the different conditions for both the extended Jeffrey's prior and the gamma prior. The observations obtained from the simulation results are listed below.

1. When c = -1, the MLE clearly appears to be better than all the minimax estimators under the three loss functions.

Figure 2: MSEs of MLE and minimax estimators under gamma prior with different values of hyperparameters when λ = 0.75; panels: (a) (α, β) = (0.5, 0.5), (b) (α, β) = (0.5, 5.0), (c) (α, β) = (1.0, 0.25), (d) (α, β) = (5.0, 5.0).

2. Under Jeffrey's prior (c = 0.5), the minimax estimator under QLF has the smallest MSE value.

3. When c = 1, the minimax estimator under GELF performs better than the other estimators.

4. Under Hartigan's prior (c = 1.5), the minimax estimator under SLELF has the smallest MSE compared to the other estimators. Also, the MSEs of the MLE and of the minimax estimator under QLF coincide.

5. It is found from Tables 1 and 2 that Hartigan's prior and Jeffrey's prior give nearly identical results when the sample size n > 50.

6. Under the gamma prior, it is observed that in most cases the minimax estimator under QLF performs better than the other estimators.

Table 2: Estimated values, Bias and MSE of different estimators under the extended Jeffrey's prior when λ = 1.

sample            c=-1                  c=0.5                 c=1                   c=1.5
sizes(n) criteria λ̂MLE λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF

Estimate 1.111 1.222 1.389 1.333 0.889 1.056 1.000 0.778 0.945 0.889 0.667 0.834 0.778
10 Bias 0.111 0.222 0.389 0.333 -0.111 0.056 0.000 -0.222 -0.055 -0.111 -0.333 -0.166 -0.222
MSE 0.166 0.236 0.392 0.333 0.111 0.142 0.125 0.125 0.114 0.111 0.167 0.114 0.125

Estimate 1.039 1.080 1.143 1.122 0.956 1.018 0.997 0.914 0.977 0.956 0.873 0.935 0.914
25 Bias 0.039 0.080 0.143 0.122 -0.044 0.018 -0.003 -0.086 -0.023 -0.044 -0.127 -0.065 -0.086
MSE 0.048 0.056 0.076 0.069 0.041 0.045 0.042 0.043 0.041 0.041 0.049 0.042 0.043

Estimate 1.018 1.038 1.068 1.058 0.977 1.007 0.997 0.956 0.987 0.977 0.936 0.967 0.956
50 Bias 0.018 0.038 0.068 0.058 -0.023 0.007 -0.003 -0.044 -0.013 -0.023 -0.064 -0.033 -0.044
MSE 0.022 0.024 0.028 0.027 0.020 0.021 0.021 0.021 0.020 0.020 0.022 0.021 0.021

Estimate 1.010 1.024 1.044 1.037 0.983 1.004 0.997 0.970 0.990 0.983 0.956 0.977 0.970
75 Bias 0.010 0.024 0.044 0.037 -0.017 0.004 -0.003 -0.030 -0.010 -0.017 -0.044 -0.023 -0.030
MSE 0.014 0.015 0.017 0.016 0.014 0.014 0.014 0.014 0.014 0.014 0.015 0.014 0.014

Estimate 1.008 1.018 1.033 1.028 0.988 1.003 0.998 0.978 0.993 0.988 0.968 0.983 0.978
100 Bias 0.008 0.018 0.033 0.028 -0.012 0.003 -0.002 -0.022 -0.007 -0.012 -0.032 -0.017 -0.022
MSE 0.010 0.011 0.012 0.012 0.010 0.010 0.010 0.010 0.010 0.010 0.011 0.010 0.010

Table 3: Estimated values, Bias and MSE of different estimators under the gamma prior when λ = 0.75.

sample            (0.5, 0.5)            (0.5, 5.0)            (1.0, 0.25)           (5.0, 5.0)
sizes(n) criteria λ̂MLE λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF

Estimate 0.833 0.677 0.796 0.756 0.488 0.575 0.546 0.733 0.855 0.814 0.747 0.833 0.804
10 Bias 0.083 -0.073 0.046 0.006 -0.262 -0.175 -0.204 -0.017 0.105 0.064 -0.003 0.083 0.054
MSE 0.093 0.056 0.073 0.064 0.081 0.049 0.058 0.064 0.097 0.082 0.030 0.044 0.038

Estimate 0.779 0.721 0.767 0.751 0.631 0.671 0.657 0.742 0.788 0.773 0.751 0.792 0.778
25 Bias 0.029 -0.029 0.017 0.001 -0.119 -0.079 -0.093 -0.008 0.038 0.023 0.001 0.042 0.028
MSE 0.027 0.022 0.025 0.023 0.027 0.020 0.022 0.023 0.028 0.026 0.018 0.021 0.020

Estimate 0.763 0.735 0.757 0.750 0.687 0.708 0.701 0.745 0.768 0.760 0.751 0.772 0.765
50 Bias 0.013 -0.015 0.007 0.000 -0.063 -0.042 -0.049 -0.005 0.018 0.010 0.001 0.022 0.015
MSE 0.012 0.011 0.012 0.012 0.012 0.011 0.011 0.011 0.012 0.012 0.010 0.011 0.011

Estimate 0.758 0.739 0.754 0.749 0.706 0.721 0.716 0.746 0.761 0.756 0.750 0.764 0.759
75 Bias 0.008 -0.011 0.004 -0.001 -0.044 -0.029 -0.034 -0.004 0.011 0.006 0.000 0.014 0.009
MSE 0.008 0.008 0.008 0.008 0.008 0.007 0.008 0.008 0.008 0.008 0.007 0.007 0.007

Estimate 0.756 0.742 0.753 0.749 0.717 0.728 0.725 0.747 0.758 0.755 0.750 0.761 0.757
100 Bias 0.006 -0.008 0.003 -0.001 -0.033 -0.022 -0.025 -0.003 0.008 0.005 0.000 0.011 0.007
MSE 0.006 0.006 0.006 0.006 0.006 0.005 0.006 0.006 0.006 0.006 0.005 0.006 0.005

7. The minimax estimators under the gamma prior have smaller MSE values than those under the extended Jeffrey's prior.

8. The bias of λ̂ decreases with increasing sample size for all the estimators.

9. The bias and MSE of all the estimators of λ increase as the true scale parameter increases.

10. In all cases the MSE of the estimators is reduced with increasing sample size, which verifies the consistency of all the estimators. Further, for large sample sizes, they all converge to almost the same MSE value.

Table 4: Estimated values, Bias and MSE of different estimators under the gamma prior when λ = 1.

sample            (0.5, 0.5)            (0.5, 5.0)            (1.0, 0.25)           (5.0, 5.0)
sizes(n) criteria λ̂MLE λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF λ̂QLF λ̂SLELF λ̂GELF

Estimate 1.111 0.889 1.046 0.994 0.592 0.696 0.661 0.970 1.132 1.077 0.905 1.010 0.975
10 Bias 0.111 -0.111 0.046 -0.006 -0.408 -0.304 -0.339 -0.030 0.132 0.077 -0.095 0.010 -0.025
MSE 0.166 0.097 0.120 0.106 0.182 0.114 0.134 0.110 0.165 0.140 0.045 0.045 0.043

Estimate 1.039 0.956 1.017 0.996 0.804 0.855 0.838 0.987 1.048 1.028 0.958 1.009 0.992
25 Bias 0.039 -0.044 0.017 -0.004 -0.196 -0.145 -0.162 -0.013 0.048 0.028 -0.042 0.009 -0.008
MSE 0.048 0.039 0.042 0.041 0.057 0.042 0.046 0.041 0.048 0.045 0.028 0.029 0.028

Estimate 1.018 0.977 1.007 0.997 0.894 0.922 0.913 0.992 1.022 1.012 0.977 1.005 0.996
50 Bias 0.018 -0.023 0.007 -0.003 -0.106 -0.078 -0.087 -0.008 0.022 0.012 -0.023 0.005 -0.004
MSE 0.022 0.020 0.021 0.020 0.025 0.020 0.022 0.020 0.022 0.021 0.017 0.017 0.017

Estimate 1.010 0.983 1.004 0.997 0.927 0.946 0.940 0.993 1.014 1.007 0.984 1.003 0.996
75 Bias 0.010 -0.017 0.004 -0.003 -0.073 -0.054 -0.060 -0.007 0.014 0.007 -0.016 0.003 -0.004
MSE 0.014 0.013 0.014 0.014 0.016 0.014 0.014 0.014 0.014 0.014 0.012 0.012 0.012

Estimate 1.008 0.988 1.003 0.998 0.945 0.959 0.954 0.995 1.010 1.005 0.988 1.002 0.998
100 Bias 0.008 -0.012 0.003 -0.002 -0.055 -0.041 -0.046 -0.005 0.010 0.005 -0.012 0.002 -0.002
MSE 0.010 0.010 0.010 0.010 0.011 0.010 0.010 0.010 0.010 0.010 0.009 0.009 0.009

7. Conclusion

In this article, a comparison has been made between the minimax estimators and the maximum likelihood estimator of the scale parameter λ of the inverse Rayleigh distribution. To obtain the minimax estimators of λ, we consider the extended Jeffrey's prior and the gamma prior under a symmetric (QLF) and two asymmetric (SLELF and GELF) loss functions. An extensive simulation study is performed to investigate the performance of the MLE as well as the minimax estimators in terms of bias and MSE. The simulation results show that, in large samples, the MLE and the minimax estimators under the different loss functions have approximately the same MSE values.
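The structure of such a simulation study can be reproduced in outline. The sketch below is illustrative rather than the paper's exact code: it assumes the parameterization f(x; λ) = (2λ/x³)exp(−λ/x²), under which Y = 1/X² is exponential with rate λ and T = Σ 1/Xᵢ² is sufficient, and it uses the gamma-prior Bayes estimator (n + a − 2)/(T + b) under quadratic loss as a stand-in for the minimax expressions derived in the paper; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_inverse_rayleigh(lam, n, rng):
    # If Y = 1/X^2 ~ Exp(rate = lam), then X follows the inverse Rayleigh
    # density f(x) = (2*lam/x^3) * exp(-lam/x^2), x > 0.
    y = rng.exponential(scale=1.0 / lam, size=n)
    return 1.0 / np.sqrt(y)

def mle(x):
    # T = sum(1/X_i^2) is sufficient; the MLE is n / T.
    return len(x) / np.sum(1.0 / x**2)

def bayes_qlf_gamma(x, a, b):
    # Illustrative Bayes estimator under quadratic loss ((d - lam)/lam)^2
    # with a Gamma(a, b) prior: the posterior is Gamma(n + a, T + b),
    # giving E[1/lam] / E[1/lam^2] = (n + a - 2) / (T + b).
    n, T = len(x), np.sum(1.0 / x**2)
    return (n + a - 2) / (T + b)

def bias_mse(estimator, lam, n, reps, rng, **kw):
    # Monte Carlo bias and MSE over repeated samples.
    est = np.array([estimator(sample_inverse_rayleigh(lam, n, rng), **kw)
                    for _ in range(reps)])
    return est.mean() - lam, np.mean((est - lam) ** 2)

lam, n, reps = 1.0, 50, 5000
b_mle, m_mle = bias_mse(mle, lam, n, reps, rng)
b_qlf, m_qlf = bias_mse(bayes_qlf_gamma, lam, n, reps, rng, a=0.5, b=0.5)
print(f"MLE : bias={b_mle:+.3f}  MSE={m_mle:.4f}")
print(f"QLF : bias={b_qlf:+.3f}  MSE={m_qlf:.4f}")
```

Repeating this over the grid of sample sizes and hyperparameter pairs used in the tables reproduces the qualitative pattern reported above: both bias and MSE shrink as n grows.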

In the case of the extended Jeffrey's prior, when c is negative (i.e., c = -1), the maximum likelihood estimator (MLE) performs better than the minimax estimators under all the considered loss functions. However, when c takes positive values, the minimax estimators are more efficient than the classical MLE.
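The effect of c can be illustrated numerically. The sketch below assumes the extended Jeffrey's prior π(λ) ∝ [I(λ)]^c ∝ λ^(−2c), which combines with the likelihood λ^n e^(−λT) to give a Gamma(n − 2c + 1, T) posterior; the quadratic-loss form (n − 2c − 1)/T used here is an illustrative reconstruction under those assumptions, not necessarily the paper's exact minimax expression.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, n, reps = 1.0, 25, 20000

# T = sum(1/X_i^2) ~ Gamma(n, rate = lam) for inverse Rayleigh samples,
# so it suffices to simulate T directly (common draws for all estimators).
T = rng.gamma(shape=n, scale=1.0 / lam, size=reps)

def mse(est):
    return np.mean((est - lam) ** 2)

results = {"MLE": mse(n / T)}
for c in (-1.0, 0.0, 0.5, 1.0):
    # Illustrative quadratic-loss estimator under the extended Jeffrey's
    # prior: posterior Gamma(n - 2c + 1, T) gives (n - 2c - 1) / T.
    results[f"c={c}"] = mse((n - 2 * c - 1) / T)

for name, m in results.items():
    print(f"{name:>6}: MSE = {m:.4f}")
```

Under this reconstruction the c = -1 estimator inflates the multiplier of 1/T beyond the MSE-optimal value, while positive c shrinks it toward that optimum, which is consistent with the ordering reported above.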

Comparing the MLE with the minimax estimators under the gamma prior, it has been observed that the minimax estimators perform better for all choices of hyperparameters. It is also noted that the minimax estimators under the gamma prior have smaller MSE than those under the extended Jeffrey's prior; hence, choosing an informative prior is preferable to a non-informative one here. Finally, increasing the sample size produces a noticeable decrease in MSE for all choices of parameter values, which establishes that all the estimators are consistent.

References

[1] Wald, A. Statistical decision functions, Wiley, 1950.

[2] Podder, C. K. (2020). Minimax Estimation of the Scale Parameter in a Class of Life-Time Distributions for Different Loss Functions. International Journal of Statistical Sciences, 20(2):85-98.

[3] Li, L. (2016). Minimax estimation of the parameter of Maxwell distribution under different loss functions. American Journal of Theoretical and Applied Statistics, 5(4):202-207.

[4] Roy, M. K., Podder, C. K. and Bhuiyan, K. J. (2002). Minimax estimation of the scale parameter of the Weibull distribution for quadratic and MLINEX loss functions. Jahangirnagar University Journal of Science, 25:277-285.

[5] Dey, S. (2008). Minimax estimation of the parameter of the Rayleigh distribution under quadratic loss function. Data Science Journal, 7:23-30.

[6] Ahmad, A., Ain, S. Q. Ul., Tripathi, R. and Ahmad, A. (2021). Inverse Weibull-Rayleigh Distribution Characterisation with Applications Related Cancer Data. Reliability: Theory & Applications, 16(4(65)):364-382.

[7] Banerjee, P. and Bhunia, S. (2022). Exponential Transformed Inverse Rayleigh Distribution: Statistical Properties and Different Methods of Estimation. Austrian Journal of Statistics, 51(4):60-75.

[8] Bhunia, S. and Banerjee, P. (2022). Some Properties and Different Estimation Methods for Inverse A (a) Distribution with an Application to Tongue Cancer Data. Reliability: Theory & Applications, 17(1(67)):251-266.

[9] Voda, V. Gh. (1972). On the inverse Rayleigh distributed random variable. Reports of Statistical Application Research, 19:13-21.

[10] Soliman, A., Amin, E. A. and Abd-El Aziz, A. A. (2010). Estimation and prediction from inverse Rayleigh distribution based on lower record values. Applied Mathematical Sciences, 4(62):3057-3066.

[11] Dey, S. (2012). Bayesian estimation of the parameter and reliability function of an inverse Rayleigh distribution. Malaysian Journal of Mathematical Sciences, 6(1):113-124.

[12] Banerjee, P. and Seal, B. (2021). Partial Bayes Estimation of Two Parameter Gamma Distribution under Non-Informative Prior. Statistics, Optimization & Information Computing, http://www.iapress.org/index.php/soic/article/view/1110.

[13] Al-Kutobi, H. S. On comparison estimation procedures for parameter and survival function exponential distribution using simulation, Baghdad University, College of Education, Iraq, 2005.

[14] Basu, A. P. and Ebrahimi, N. (1991). Bayesian approach to life testing and reliability estimation using asymmetric loss function. Journal of Statistical Planning and Inference, 29(1-2):21-31.

[15] Singh, S. K., Singh, U. and Kumar, D. (2011). Bayesian estimation of the exponentiated gamma parameter and reliability function under asymmetric loss function. REVSTAT-Statistical Journal, 9(3):247-260.

[16] Brown, L. (1968). Inadmissibility of the usual estimators of scale parameters in problems with unknown location and scale parameters. The Annals of Mathematical Statistics, 39(1):29-48.

[17] Kiapour, A. and Nematollahi, N. (2011). Robust Bayesian prediction and estimation under a squared log error loss function. Statistics & Probability Letters, 81(11):1717-1724.

[18] Dey, S. and Maiti, S. S. (2011). Bayesian Inference on the Shape Parameter and Future Observation of Exponentiated Family of Distributions. Journal of Probability and Statistics, 2011.

[19] Calabria, R. and Pulcini, G. (1996). Point estimation under asymmetric loss functions for left-truncated exponential samples. Communications in Statistics - Theory and Methods, 25(3):585-600.

[20] Podder, C. K. and Roy, M. K. (2003). Bayesian estimation of the parameter of Maxwell distribution under MLINEX loss function. Journal of Statistical Studies, 23:11-16.

[21] Li, L. (2013). Minimax Estimation of Parameter of Generalized Exponential Distribution under Square Log Error Loss and MLINEX Loss Functions. Research Journal of Mathematics and Statistics, 5(3):24-27.

[22] Dey, D. K., Ghosh, M. and Srinivasan, C. (1986). Simultaneous estimation of parameters under entropy loss. Journal of Statistical Planning and Inference, 15:347-363.

[23] Dey, D. K. et al. (1992). On comparison of estimators in a generalized life model. Microelectronics Reliability, 32(1-2):207-221.

[24] Zellner, A. (1986). Bayesian estimation and prediction using asymmetric loss functions. Journal of the American Statistical Association, 81(394):446-451.
