
NEW EXTENSION OF INVERTED MODIFIED LINDLEY DISTRIBUTION WITH APPLICATIONS

Devendra Kumar^a, Anju Goyal^b, P. Pareek^c and M. Saha^a

^a Department of Statistics, Faculty of Mathematical Sciences, University of Delhi, India
^b Department of Statistics, Panjab University, Chandigarh, India
^c Department of Statistics, Central University of Rajasthan, Kishangarh, India
Corresponding author: [email protected]

Abstract

In this article, we propose a new two-parameter distribution called the power inverted modified Lindley (PIML) distribution. The main objective is to introduce an extension of the inverted modified Lindley distribution as an alternative to the inverted exponential, inverted gamma and inverted modified Lindley distributions. The proposed distribution is more flexible than the above-mentioned distributions in terms of its hazard rate function. For the estimation of the proposed model, we first use the maximum likelihood (ML) estimator and parametric bootstrap confidence intervals, viz. the standard bootstrap, percentile bootstrap, bias-corrected percentile bootstrap (BCPB) and bias-corrected accelerated bootstrap (BCAB), from the classical point of view, as well as Bayesian estimation under different loss functions (squared error loss function and modified squared error loss function) and the Bayes credible interval, to estimate the model parameters based on order statistics. A simulation study is carried out to check the efficiency of the classical and the Bayes estimators in terms of mean squared errors and posterior risks, respectively. Two real-life data sets have been analyzed for order statistics to demonstrate how the proposed methods may work in practice.

Keywords: Inverted modified Lindley distribution, moments, maximum likelihood estimator, order statistics, bootstrap confidence intervals, Bayes estimators.

1. INTRODUCTION

The inverted modified Lindley (IML) distribution is a well-known one-parameter distribution for modeling lifetime data, which was introduced by [5] as a mixture of the inverted exponential and inverted gamma distributions with mixing proportion η/(1 + η), following the spirit of the Lindley distribution originally proposed to illustrate the difference between fiducial and posterior distributions. [5] pointed out that the IML distribution outperforms the classical inverse Lindley distribution for some real data sets. They studied many properties of this distribution, such as moments and inverse moments, and also noted down the first four moments of this distribution. However, the IML distribution does not provide a reasonable parametric fit for modeling phenomena with non-monotone failure rates, such as upside-down bathtub failure rates, which are common in reliability and biological studies. For example, such failure rate curves can be observed in the course of a disease whose mortality reaches a peak after some finite period and then declines gradually.

Several generalizations of the Lindley distribution have been proposed in the literature: [18] studied the generalized Lindley distribution, [3] proposed an extended Lindley distribution, [10] proposed the power Lindley distribution, [2] introduced the exponentiated power Lindley distribution, [4] proposed the exponential Poisson Lindley distribution, [1] proposed a new weighted Lindley distribution, [12] proposed the wrapped Lindley distribution, [7] proposed the alpha power transformed inverse Lindley distribution, [8] proposed the alpha-power transformed Lindley distribution, and [6] proposed a new modified Lindley distribution without considering any special function or additional parameters. Recently, [13] introduced the power modified Lindley (PML) distribution and showed that it provides a better fit than the Lindley, Weibull, gamma, generalized exponential (GE) and power Lindley (PL) distributions and is suitable for modeling constant, increasing, decreasing and unimodal hazard rate functions.

Many researchers have considered the inverted modified Lindley (IML) distribution in their studies. For example, [14] studied the moments of order statistics and the estimation of the parameters by the maximum likelihood method, and [15] established relations for the moments of generalized order statistics and proposed estimation procedures under complete and censored data. This study presents a one-parameter extension of the IML distribution of [5]. The presented distribution shows flexible shapes of the density and hazard functions and gives better fits than some well-known lifetime distributions, such as the inverted modified Lindley, modified Lindley and Lindley distributions. In this article, we propose a two-parameter distribution, referred to as the power inverted modified Lindley (PIML) distribution, constructed by an idea similar to [18] as a linear combination of inverted power exponential and inverted power gamma distributions. We are motivated to introduce the PIML distribution because (i) it contains several of the aforementioned lifetime models as special cases; (ii) it is capable of modelling monotonically increasing and decreasing hazard rates; (iii) it can be viewed as a suitable model for fitting skewed data that may not be properly fitted by other common distributions and can be used in a variety of areas such as public health, biomedical studies, environmental studies, industrial reliability and survival analysis; and (iv) three real-life data applications show that it compares well with other competing lifetime distributions in modelling lifetime data.

The objective of this paper is threefold. First, we obtain the estimates of the model parameters by the maximum likelihood method of estimation; the performance of the MLEs is assessed in terms of their mean squared errors (MSEs) based on simulated samples of different sizes through a simulation study. The second objective is to obtain four bootstrap confidence intervals (BCIs) of the model parameters based on the MLEs; the performance of the BCIs is assessed in terms of their estimated coverage probabilities (CPs) and average widths (AWs). The third objective is to obtain Bayes estimates (BEs) of the model parameters under four (symmetric as well as asymmetric) loss functions.

The rest of the paper is organized as follows. In Section 2, we describe the proposed PIML model. Section 3 deals with some statistical and mathematical properties of the PIML distribution. Section 4 describes the MLEs and the BCIs, namely the standard bootstrap (SB), percentile bootstrap (PB), bias-corrected percentile bootstrap (BCPB) and bias-corrected accelerated bootstrap (BCAB), based on the MLEs; we also derive the Bayes estimators of the model parameters under four loss functions. In Section 5, a Monte Carlo simulation study is carried out to assess the performance of the above-cited classical and Bayes estimators in terms of their MSEs and posterior risks, and the performance of the different BCIs and Bayes credible intervals in terms of coverage probabilities (CPs) and average widths (AWs). For illustrative purposes, three real data sets are analyzed in Section 6. Finally, concluding remarks are given in Section 7.

2. Model description

The one-parameter inverted modified Lindley (IML) distribution proposed by [5] has cumulative distribution function (CDF)

F(y) = ( 1 + (η/((1+η) y)) e^{−η/y} ) e^{−η/y},   y > 0, η > 0.

Now, we introduce a skewness parameter into the inverted modified Lindley distribution using a similar idea to [9], [10], [16] and [13], i.e., we set X = Y^{1/τ}, τ > 0, to obtain the power inverted modified Lindley (PIML) distribution. The CDF of the two-parameter PIML distribution is given by

F(x) = ( 1 + (η/((1+η) x^τ)) e^{−η/x^τ} ) e^{−η/x^τ},   x > 0, η > 0, τ > 0,   (1)

and the corresponding probability density function (PDF) is given by

f(x) = (η τ/((1+η) x^{τ+1})) e^{−2η/x^τ} [ (1+η) e^{η/x^τ} + 2η/x^τ − 1 ],   x > 0, η > 0, τ > 0.   (2)

The corresponding survival function for a specified value X = x is obtained as

S(x) = 1 − F(x) = 1 − ( 1 + (η/((1+η) x^τ)) e^{−η/x^τ} ) e^{−η/x^τ},   x > 0, η > 0, τ > 0.   (3)

Thus, we can also express the corresponding hazard rate function (HRF) for a specified X = x as

h(x) = f(x)/S(x) = [ (η τ/((1+η) x^{τ+1})) e^{−2η/x^τ} ( (1+η) e^{η/x^τ} + 2η/x^τ − 1 ) ] / [ 1 − ( 1 + (η/((1+η) x^τ)) e^{−η/x^τ} ) e^{−η/x^τ} ],   x > 0, η > 0, τ > 0.   (4)

Figure 1: PDF and HRF of the PIML distribution.

Figure 1 shows that, for the considered parameter values, the PDF of the PIML distribution is right skewed and the HRF first increases and then decreases. The corresponding cumulative hazard rate function is defined by

C(x) = − log S(x) = − log{ 1 − ( 1 + (η/((1+η) x^τ)) e^{−η/x^τ} ) e^{−η/x^τ} },   x > 0, η > 0, τ > 0.   (5)

When τ = 1, the PIML distribution reduces to the IML distribution. An advantage of the definition of f(x) is that it can be written as a linear combination of well-established PDFs as

f(x) = f_1(x) + (1/(2(1+η))) ( f_2(x) − f_3(x) ),   (6)

where f_1(x) corresponds to an inverted (power) exponential density with parameters (η, τ), f_2(x) to an inverted (power) gamma density with parameters (2, 2η), and f_3(x) to an inverted (power) exponential density with parameters (2η, τ):

f_1(x) = ητ x^{−(τ+1)} e^{−η/x^τ},   f_2(x) = (2η)² τ x^{−(2τ+1)} e^{−2η/x^τ},   f_3(x) = 2ητ x^{−(τ+1)} e^{−2η/x^τ}.
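To make the model description concrete, the following minimal Python sketch implements the CDF, PDF and HRF in (1)-(4) and draws samples by numerically inverting the CDF. The function names, the root-finding bracket and the parameter values are illustrative choices and not part of the paper.

```python
# A minimal numerical sketch of the PIML model in (1)-(4): CDF, PDF, HRF and
# sampling by inversion of the CDF.  Names and bracket are illustrative.
import numpy as np
from scipy.optimize import brentq

def piml_cdf(x, eta, tau):
    # F(x) = (1 + eta/((1+eta) x^tau) e^{-eta/x^tau}) e^{-eta/x^tau}
    z = eta / x**tau
    return (1.0 + z / (1.0 + eta) * np.exp(-z)) * np.exp(-z)

def piml_pdf(x, eta, tau):
    # pdf (2), with e^{-2z}[(1+eta)e^z + 2z - 1] rewritten as (1+eta)e^{-z} + (2z-1)e^{-2z}
    z = eta / x**tau
    return eta * tau / ((1.0 + eta) * x**(tau + 1)) * ((1.0 + eta) * np.exp(-z) + (2.0 * z - 1.0) * np.exp(-2.0 * z))

def piml_hrf(x, eta, tau):
    # hazard rate h(x) = f(x) / (1 - F(x)) as in (4)
    return piml_pdf(x, eta, tau) / (1.0 - piml_cdf(x, eta, tau))

def piml_rvs(eta, tau, size, seed=None):
    # inversion method: solve F(x) = u for each uniform draw u
    rng = np.random.default_rng(seed)
    return np.array([brentq(lambda x: piml_cdf(x, eta, tau) - u, 1e-8, 1e8)
                     for u in rng.uniform(size=size)])

x = piml_rvs(eta=2.0, tau=2.0, size=5, seed=1)
print(x, piml_pdf(x, 2.0, 2.0), piml_hrf(x, 2.0, 2.0))
```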

3. Statistical and mathematical properties of PIML distribution

Here, we discuss and derive several mathematical and statistical properties, which are given in the following subsections.

3.1. Moments and moment generating function

Let X be a random variable from the PIML distribution with PDF given in (2); then its rth moment (about the origin) is given by

μ′_r = ∫_0^∞ x^r f(x) dx = ∫_0^∞ x^r (ητ/((1+η) x^{τ+1})) e^{−2η/x^τ} [ (1+η) e^{η/x^τ} + 2η/x^τ − 1 ] dx
     = ητ ∫_0^∞ x^{r−τ−1} e^{−η/x^τ} dx + (η/(1+η)) [ 2ητ ∫_0^∞ x^{r−2τ−1} e^{−2η/x^τ} dx − τ ∫_0^∞ x^{r−τ−1} e^{−2η/x^τ} dx ]
     = η^{r/τ} Γ(1 − r/τ) [ 1 − r 2^{r/τ−1} / (τ (1+η)) ],   r < τ.   (7)

Also, the first four inverse moments are given by

E(X^{−1}) = (1/η^{1/τ}) Γ(1 + 1/τ) [ 1 + (1/τ) / ( 2^{1/τ+1} (1+η) ) ],

E(X^{−2}) = (1/η^{2/τ}) Γ(1 + 2/τ) [ 1 + (2/τ) / ( 2^{2/τ+1} (1+η) ) ],

E(X^{−3}) = (1/η^{3/τ}) Γ(1 + 3/τ) [ 1 + (3/τ) / ( 2^{3/τ+1} (1+η) ) ],

E(X^{−4}) = (1/η^{4/τ}) Γ(1 + 4/τ) [ 1 + (4/τ) / ( 2^{4/τ+1} (1+η) ) ].

Table 1 presents the numerical values of these inverse moments for various values of τ and η.

The moment generating function of the PIML distribution can be expressed as the series

M_X(t) = Σ_{p=0}^∞ (t^p/p!) E(X^p) = Σ_{p=0}^∞ (t^p/p!) η^{p/τ} Γ(1 − p/τ) [ 1 − p 2^{p/τ−1} / (τ (1+η)) ].

The characteristic function of the PIML distribution, Φ(t) = E(e^{itX}), and the cumulant generating function of X, K(t) = log Φ(t), are given by

Φ(t) = Σ_{p=0}^∞ ((it)^p/p!) η^{p/τ} Γ(1 − p/τ) [ 1 − p 2^{p/τ−1} / (τ (1+η)) ]

and

K(t) = log Φ(t) = log( Σ_{p=0}^∞ ((it)^p/p!) η^{p/τ} Γ(1 − p/τ) [ 1 − p 2^{p/τ−1} / (τ (1+η)) ] ).
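As a quick numerical cross-check of the inverse-moment expression above, the following sketch compares the closed form with direct numerical integration of x^{−s} f(x); with τ = 2 and η = 1 it reproduces the "Exact" entries of the corresponding row of Table 1. The helper names are illustrative.

```python
# Hedged check: E(X^{-s}) = eta^{-s/tau} Gamma(1 + s/tau) (1 + s/(2^{s/tau+1} tau (1+eta)))
# versus direct numerical integration of x^{-s} f(x).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def piml_pdf(x, eta, tau):
    z = eta / x**tau
    # pdf (2) rewritten as (1+eta)e^{-z} + (2z-1)e^{-2z} to avoid overflow for small x
    return eta * tau / ((1.0 + eta) * x**(tau + 1)) * ((1.0 + eta) * np.exp(-z) + (2.0 * z - 1.0) * np.exp(-2.0 * z))

def inv_moment_exact(s, eta, tau):
    return eta**(-s / tau) * gamma(1.0 + s / tau) * (1.0 + s / (2.0**(s / tau + 1.0) * tau * (1.0 + eta)))

def inv_moment_numeric(s, eta, tau):
    return quad(lambda x: x**(-s) * piml_pdf(x, eta, tau), 0.0, np.inf)[0]

for s in (1, 2, 3, 4):
    print(s, round(inv_moment_exact(s, 1.0, 2.0), 4), round(inv_moment_numeric(s, 1.0, 2.0), 4))
```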

Table 1: Numerical values of the inverse moments of the PIML distribution for different values of the parameters τ and η.

τ   η     E(X^{−1})           E(X^{−2})           E(X^{−3})           E(X^{−4})
          Sim.      Exact     Sim.      Exact     Sim.      Exact     Sim.      Exact

2 0.1 3.2638 3.2529 12.3493 12.2727 52.6211 52.1709 248.0657 245.4545

1 0.9622 0.9646 1.1202 1.1250 1.4967 1.5056 2.2327 2.2500

2 0.6637 0.6636 0.5413 0.5417 0.5104 0.5115 0.5392 0.5417

3 0.5332 0.5343 0.3526 0.3542 0.2712 0.2728 0.2347 0.2361

4 0.4585 0.4588 0.2621 0.2625 0.1746 0.1750 0.1309 0.1313

5 0.4074 0.4080 0.2076 0.2083 0.1235 0.1242 0.0829 0.0833

10 0.2843 0.2848 0.1020 0.1023 0.0429 0.0431 0.0203 0.0205

15 0.2312 0.2314 0.0677 0.0677 0.0232 0.0233 0.0090 0.0090

30 0.1631 0.1627 0.0338 0.0336 0.0082 0.0082 0.0023 0.0022

3 0.1 2.1535 2.1552 4.9804 4.9901 12.2332 12.2727 31.6766 31.8211

1 0.9525 0.9520 0.9985 0.9975 1.1270 1.1250 1.3522 1.3481

2 0.7387 0.7400 0.6071 0.6085 0.5407 0.5417 0.5138 0.5142

3 0.6388 0.6396 0.4553 0.4568 0.3523 0.3542 0.2911 0.2934

4 0.5778 0.5774 0.3738 0.3733 0.2630 0.2625 0.1979 0.1974

5 0.5332 0.5337 0.3190 0.3195 0.2079 0.2083 0.1451 0.1454

10 0.4196 0.4195 0.1983 0.1982 0.1024 0.1023 0.0567 0.0566

15 0.3648 0.3651 0.1501 0.1504 0.0676 0.0677 0.0326 0.0327

30 0.2890 0.2886 0.0944 0.0941 0.0337 0.0336 0.0130 0.0129

3.2. Conditional moments, mean deviation, mean residual life and Bonferroni and Lorenz curves

For the PIML distribution, the conditional moments E[X^n | X > t] can be written as E[X^n | X > t] = (1/S(t)) ψ_n(t), where

ψ_n(t) = ∫_t^∞ x^n f(x) dx = ∫_t^∞ x^n (ητ/((1+η) x^{τ+1})) e^{−2η/x^τ} [ (1+η) e^{η/x^τ} + 2η/x^τ − 1 ] dx
       = ητ ∫_t^∞ x^{n−τ−1} e^{−η/x^τ} dx + (η/(1+η)) [ 2ητ ∫_t^∞ x^{n−2τ−1} e^{−2η/x^τ} dx − τ ∫_t^∞ x^{n−τ−1} e^{−2η/x^τ} dx ]
       = η^{n/τ} γ(1 − n/τ, η/t^τ) + ((2η)^{n/τ}/(2(1+η))) [ γ(2 − n/τ, 2η/t^τ) − γ(1 − n/τ, 2η/t^τ) ],   n < τ,   (8)

where γ(a, z) = ∫_0^z u^{a−1} e^{−u} du denotes the lower incomplete gamma function.

The mean residual life (MRL) function follows from the first conditional moment as

m(t) = E[X | X > t] − t = ψ_1(t)/S(t) − t,

where ψ_1(t) is obtained from (8) with n = 1.

If we denote the mean by μ = E(X) and the median by M, then the mean deviations about the mean and about the median can be written as

δ_μ = E|X − μ| = 2μ F(μ) − 2μ + 2 ∫_μ^∞ x f(x) dx = 2μ F(μ) − 2μ + 2 ψ_1(μ)

and

δ_M = E|X − M| = 2M F(M) − M − μ + 2 ∫_M^∞ x f(x) dx = 2M F(M) − M − μ + 2 ψ_1(M),

respectively, where ψ_1(μ) and ψ_1(M) are obtained from (8) with n = 1, and F(μ) and F(M) are easily calculated from (1).

The Bonferroni and Lorenz curves are defined as

B(P) = (1/(Pμ)) ∫_0^Q x f(x) dx   and   L(P) = (1/μ) ∫_0^Q x f(x) dx,

respectively, where Q = F^{−1}(P). The Bonferroni and Gini indices are defined by

B = 1 − ∫_0^1 B(P) dP   and   G = 1 − 2 ∫_0^1 L(P) dP,

respectively. If X has the PDF in (2), then, proceeding as in (8) with n = 1 and with the integral taken over (0, Q), the Bonferroni curve of the PIML distribution is obtained as

B(P) = (1/(Pμ)) { η^{1/τ} Γ(1 − 1/τ, η/Q^τ) + ((2η)^{1/τ}/(2(1+η))) [ Γ(2 − 1/τ, 2η/Q^τ) − Γ(1 − 1/τ, 2η/Q^τ) ] },   (9)

where Γ(a, z) = ∫_z^∞ u^{a−1} e^{−u} du denotes the upper incomplete gamma function, and the Lorenz curve is L(P) = P B(P).

3.3. Entropy

If X is a continuous random variable with probability density function f(·), then the Rényi entropy is defined as

R_r = (1/(1−r)) log( ∫_0^∞ f^r(x) dx ),   r ≠ 1, r > 0.

For the PIML distribution, applying the generalized binomial theorem to the factor [ (1+η) e^{η/x^τ} + 2η/x^τ − 1 ]^r and integrating term by term gives

R_r = (1/(1−r)) log( τ^{r−1} η^{(1−r)/τ} Σ_{i=0}^∞ Σ_{j=0}^i C(r, i) C(i, j) (−1)^{i−j} 2^j Γ( j + r + (r−1)/τ ) / [ (1+η)^i (r+i)^{ j + r − (1−r)/τ } ] ),   (10)

where C(·, ·) denotes the (generalized) binomial coefficient. The r-entropy, say I_r(X), is defined in terms of the same integral ∫_0^∞ f^r(x) dx, r ≠ 1, r > 0, and therefore also follows from the expansion used in (10).

3.4. Stress-strength reliability

The stress-strength reliability for independent PIML random variables X_1 ~ PIML(τ_1, η_1) and X_2 ~ PIML(τ_2, η_2) is

R = P(X_2 < X_1) = ∫_0^∞ F_2(x) f_1(x) dx = 1 − ∫_0^∞ F_1(x) f_2(x) dx,

where F_2 and f_1 denote the CDF of X_2 and the PDF of X_1, respectively. Substituting (1) and (2) and expanding the exponential factors e^{−η_2/x^{τ_2}} and e^{−2η_2/x^{τ_2}} in power series, the integral can be evaluated term by term, which expresses R as an infinite series of gamma functions in (τ_1, η_1, τ_2, η_2).
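Since the series form of R is lengthy, the following hedged sketch cross-checks the defining integral by one-dimensional quadrature and by plain Monte Carlo using the inversion sampler of Section 2; the parameter values are illustrative only.

```python
# Monte Carlo and quadrature cross-check of R = P(X2 < X1) for
# X1 ~ PIML(tau1, eta1), X2 ~ PIML(tau2, eta2); a sketch, not the paper's series.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def piml_cdf(x, eta, tau):
    z = eta / x**tau
    return (1.0 + z / (1.0 + eta) * np.exp(-z)) * np.exp(-z)

def piml_pdf(x, eta, tau):
    z = eta / x**tau
    return eta * tau / ((1.0 + eta) * x**(tau + 1)) * ((1.0 + eta) * np.exp(-z) + (2.0 * z - 1.0) * np.exp(-2.0 * z))

def piml_rvs(eta, tau, size, rng):
    return np.array([brentq(lambda x: piml_cdf(x, eta, tau) - u, 1e-8, 1e8)
                     for u in rng.uniform(size=size)])

tau1, eta1, tau2, eta2 = 1.5, 2.0, 1.5, 1.0          # illustrative values
R_int = quad(lambda x: piml_cdf(x, eta2, tau2) * piml_pdf(x, eta1, tau1), 0.0, np.inf)[0]

rng = np.random.default_rng(2023)
x1 = piml_rvs(eta1, tau1, 5000, rng)
x2 = piml_rvs(eta2, tau2, 5000, rng)
print("R by integration :", R_int)
print("R by Monte Carlo :", np.mean(x2 < x1))
```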

3.5. Order statistics

Let X_1, X_2, ..., X_n be a random sample of size n from the PIML distribution and X_{(1)}, X_{(2)}, ..., X_{(n)} be the corresponding order statistics. The probability density function of the rth order statistic is given by

f_{r:n}(x) = ( n! / ((r−1)! (n−r)!) ) [F(x)]^{r−1} [1 − F(x)]^{n−r} f(x).

For the PIML distribution, expanding [1 − F(x)]^{n−r} binomially and then [F(x)]^{r+i−1} with the help of (1), the PDF of the rth order statistic is obtained as

f_{r:n}(x) = ( n! τ / ((r−1)! (n−r)!) ) Σ_{i=0}^{n−r} Σ_{j=0}^{r+i−1} C(n−r, i) C(r+i−1, j) (−1)^i (η/(1+η))^{j+1} x^{−τ(j+1)−1} e^{−(r+i+j+1) η / x^τ} [ (1+η) e^{η/x^τ} + 2η/x^τ − 1 ],

where C(·, ·) denotes the binomial coefficient. The rth ordered moment is obtained, for τ > 1, as

μ_{r:n} = ∫_0^∞ x f_{r:n}(x) dx = ( n! / ((r−1)! (n−r)!) ) Σ_{i=0}^{n−r} Σ_{j=0}^{r+i−1} C(n−r, i) C(r+i−1, j) (−1)^i (η/(1+η))^{j+1}
          × { (1+η) [ (r+i+j) η ]^{1/τ − j − 1} Γ(j + 1 − 1/τ) + 2η [ (r+i+j+1) η ]^{1/τ − j − 2} Γ(j + 2 − 1/τ) − [ (r+i+j+1) η ]^{1/τ − j − 1} Γ(j + 1 − 1/τ) }.

4. Parametric estimation of the parameters of PIML distribution

In this section, we derive the classical and Bayesian point and interval estimators of the model parameters.

4.1. Classical estimation

Let x_1, x_2, ..., x_n be a random sample of size n from the PIML distribution. Then the likelihood function is given by

L = ∏_{i=1}^n f(x_i) = ( τ^n η^n / (1+η)^n ) e^{−2η Σ_{i=1}^n x_i^{−τ}} ∏_{i=1}^n [ (1+η) e^{η/x_i^τ} + 2η/x_i^τ − 1 ] ∏_{i=1}^n x_i^{−(τ+1)}.

The corresponding log-likelihood function is

ln L = n ln τ + n ln η − n ln(1+η) − 2η Σ_{i=1}^n x_i^{−τ} + Σ_{i=1}^n ln[ (1+η) e^{η/x_i^τ} + 2η/x_i^τ − 1 ] − (τ+1) Σ_{i=1}^n ln x_i.

The maximum likelihood estimates of η and τ can be obtained by solving the following non-linear equations:

∂ ln L/∂η = n/(η(1+η)) − 2 Σ_{i=1}^n x_i^{−τ} + Σ_{i=1}^n [ (x_i^τ + 1 + η) e^{η/x_i^τ} + 2 ] / [ x_i^τ ( (1+η) e^{η/x_i^τ} − 1 ) + 2η ] = 0,

∂ ln L/∂τ = n/τ + 2η Σ_{i=1}^n x_i^{−τ} ln x_i − η Σ_{i=1}^n [ x_i^{−τ} ln(x_i) ( (1+η) e^{η/x_i^τ} + 2 ) ] / [ (1+η) e^{η/x_i^τ} + 2η/x_i^τ − 1 ] − Σ_{i=1}^n ln x_i = 0.

Since these equations cannot be solved in closed form, non-linear optimization methods such as the quasi-Newton algorithm can be used to obtain the MLEs of τ and η, denoted by τ̂_mle and η̂_mle. To estimate τ and η we use two methods of estimation, namely the maximum likelihood method and the Bayesian method; the Bayesian method is discussed in Subsection 4.2.
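The following sketch carries out this numerical maximization with a derivative-free optimizer applied to Data set I of Section 6. The optimizer, the starting values and the overflow-safe rewriting of the log terms are illustrative choices, and the output should land close to the PIML estimates reported in Table 5.

```python
# A sketch of ML estimation for (tau, eta) by direct numerical minimization of
# the negative log-likelihood given above; settings are illustrative only.
import numpy as np
from scipy.optimize import minimize

def negloglik(theta, x):
    tau, eta = theta
    if tau <= 0 or eta <= 0:
        return np.inf
    z = eta / x**tau
    # ln[(1+eta)e^z + 2z - 1] written as z + ln[(1+eta) + (2z-1)e^{-z}] to avoid overflow
    log_term = z + np.log((1.0 + eta) + (2.0 * z - 1.0) * np.exp(-z))
    return -(len(x) * (np.log(tau) + np.log(eta) - np.log1p(eta))
             - 2.0 * eta * np.sum(x**(-tau))
             + np.sum(log_term)
             - (tau + 1.0) * np.sum(np.log(x)))

data1 = np.array([4.2, 1.12, 1.39, 2, 3.99, 2.15, 1.74, 5.81, 1.7, 0.5, 0.99, 11.5,
                  5.12, 0.9, 1.99, 6.24, 2.6, 3, 12.2, 7.36, 4.75, 11.59, 8.69, 9.8,
                  1.85, 1.99, 1.35, 10, 0.65, 1.45])
fit = minimize(negloglik, x0=[1.0, 1.0], args=(data1,), method="Nelder-Mead")
tau_mle, eta_mle = fit.x
print(tau_mle, eta_mle, fit.fun)   # fit.fun is the minimized -ln L
```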

Bootstrap confidence interval

Here, we describe the construction of confidence intervals based on the bootstrap method. We consider four bootstrap CIs: (i) standard bootstrap (SB), (ii) percentile bootstrap (PB), (iii) bias-corrected percentile bootstrap (BCPB), and (iv) bias-corrected accelerated bootstrap (BCAB). The algorithm for constructing the bootstrap CIs based on the method of maximum likelihood is as follows.

1. Let (X1, X2, ..., Xn) be a random sample of size n drawn from PIML(η, τ) and compute the MLEs (η̂_mle, τ̂_mle) of (η, τ). A bootstrap sample of size n, denoted by (X1*, X2*, ..., Xn*), is obtained from the original sample by placing mass 1/n at each point.

2. Compute the MLEs (η̂*_mle, τ̂*_mle) of (η, τ) from the bootstrap sample. The M-th bootstrap estimates of (η, τ) are computed as

η̂*_mle^{(M)} = η̂_mle( X1*^{(M)}, X2*^{(M)}, ..., Xn*^{(M)} ),
τ̂*_mle^{(M)} = τ̂_mle( X1*^{(M)}, X2*^{(M)}, ..., Xn*^{(M)} ).

3. There are n^n possible re-samples in total. Repeating step 2 R times and arranging the R values of η̂*_mle and τ̂*_mle from smallest to largest constitutes the empirical bootstrap distributions

{ η̂*_mle^{(I)}; I = 1(1)R },   { τ̂*_mle^{(I)}; I = 1(1)R }.

SB

Let

η̄*_mle = (1/R) Σ_{I=1}^R η̂*_mle^{(I)},   s(η̂*_mle) = √{ (1/(R−1)) Σ_{I=1}^R ( η̂*_mle^{(I)} − η̄*_mle )² },

τ̄*_mle = (1/R) Σ_{I=1}^R τ̂*_mle^{(I)},   s(τ̂*_mle) = √{ (1/(R−1)) Σ_{I=1}^R ( τ̂*_mle^{(I)} − τ̄*_mle )² },

be the sample means and standard deviations of { η̂*_mle^{(I)}; I = 1(1)R } and { τ̂*_mle^{(I)}; I = 1(1)R }, respectively. Then the 100(1 − γ)% SB confidence intervals of (η, τ) are given as

( η̄*_mle − z_{γ/2} s(η̂*_mle),  η̄*_mle + z_{γ/2} s(η̂*_mle) ),

( τ̄*_mle − z_{γ/2} s(τ̂*_mle),  τ̄*_mle + z_{γ/2} s(τ̂*_mle) ),

where z_{γ/2} is the upper (γ/2)-th point of the standard normal distribution.

PB

Let η̂*_mle^{(ξ)} and τ̂*_mle^{(ξ)} be the ξ-th percentiles of { η̂*_mle^{(I)}; I = 1(1)R } and { τ̂*_mle^{(I)}; I = 1(1)R }, respectively. Then the 100(1 − γ)% PB confidence intervals of (τ, η) are given as

( τ̂*_mle^{(R·γ/2)},  τ̂*_mle^{(R·(1−γ/2))} )   and   ( η̂*_mle^{(R·γ/2)},  η̂*_mle^{(R·(1−γ/2))} ),

respectively.

To compare the different confidence intervals, we consider their estimated average widths (AWs) and coverage probabilities (CPs) over the K simulation replications, given as

AW(τ) = (1/K) Σ_{i=1}^K (U_i − L_i)   and   CP(τ) = (1/K) · #{ i : L_i ≤ τ ≤ U_i },

AW(η) = (1/K) Σ_{i=1}^K (U_i − L_i)   and   CP(η) = (1/K) · #{ i : L_i ≤ η ≤ U_i },

where L_i and U_i denote the lower and upper confidence limits obtained in the i-th replication.
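A minimal sketch of the SB and PB intervals described above is given next: the data are resampled with replacement and the log-likelihood is re-maximized for each bootstrap sample. The number of resamples R, the level γ and the optimizer settings are illustrative, and the BCPB and BCAB corrections are not shown.

```python
# Sketch of the SB and PB bootstrap intervals for the PIML parameters.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def negloglik(theta, x):
    tau, eta = theta
    if tau <= 0 or eta <= 0:
        return np.inf
    z = eta / x**tau
    return -(len(x) * (np.log(tau) + np.log(eta) - np.log1p(eta))
             - 2.0 * eta * np.sum(x**(-tau))
             + np.sum(z + np.log((1.0 + eta) + (2.0 * z - 1.0) * np.exp(-z)))
             - (tau + 1.0) * np.sum(np.log(x)))

def mle(x):
    return minimize(negloglik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead").x

def bootstrap_cis(x, R=1000, gamma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    boot = np.array([mle(rng.choice(x, size=len(x), replace=True)) for _ in range(R)])
    z_crit = norm.ppf(1.0 - gamma / 2.0)
    out = {}
    for j, name in enumerate(("tau", "eta")):
        mean, sd = boot[:, j].mean(), boot[:, j].std(ddof=1)
        out[name] = {
            "SB": (mean - z_crit * sd, mean + z_crit * sd),                           # standard bootstrap
            "PB": tuple(np.quantile(boot[:, j], [gamma / 2.0, 1.0 - gamma / 2.0])),   # percentile bootstrap
        }
    return out

# usage: bootstrap_cis(np.asarray(data), R=1000)   # data: observed sample
```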

4.2. Bayesian estimation

As a powerful and valid alternative to classical estimation, the Bayesian approach provides a procedure to combine the observed information with prior knowledge. For the purpose of the Bayesian analysis, we assume independent gamma priors:

τ ~ Gamma(τ0, τ1),   η ~ Gamma(η0, η1).

We now consider several (symmetric and asymmetric) loss functions (LS), namely SELF, WSELF, MSELF and PLF. These loss functions, with the corresponding Bayes estimators (BS) and posterior risks (PR), are provided in Table 2.

Table 2: Loss functions with the corresponding Bayes estimators (BS) and posterior risks (PR).

Loss function L(ϑ, d)              BS of parameter ϑ             PR of parameter ϑ
SELF:  L(ϑ, d) = (ϑ − d)²          E(ϑ | x)                      Var(ϑ | x)
WSELF: L(ϑ, d) = (ϑ − d)²/ϑ        (E(ϑ⁻¹ | x))⁻¹                E(ϑ | x) − (E(ϑ⁻¹ | x))⁻¹
MSELF: L(ϑ, d) = (1 − d/ϑ)²        E(ϑ⁻¹ | x)/E(ϑ⁻² | x)         1 − (E(ϑ⁻¹ | x))²/E(ϑ⁻² | x)
PLF:   L(ϑ, d) = (ϑ − d)²/d        √(E(ϑ² | x))                  2( √(E(ϑ² | x)) − E(ϑ | x) )
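Once posterior draws of a parameter are available (for example from the sampler described in the next subsection), the Bayes estimators of Table 2 reduce to simple posterior expectations, as the following small sketch illustrates; the gamma draws used in the example are purely illustrative.

```python
# Bayes estimators of Table 2 computed from a set of posterior draws.
import numpy as np

def bayes_estimates(draws):
    draws = np.asarray(draws, dtype=float)
    return {
        "SELF":  draws.mean(),                                    # E(theta | x)
        "WSELF": 1.0 / np.mean(1.0 / draws),                      # (E(theta^{-1} | x))^{-1}
        "MSELF": np.mean(1.0 / draws) / np.mean(1.0 / draws**2),  # E(theta^{-1}|x) / E(theta^{-2}|x)
        "PLF":   np.sqrt(np.mean(draws**2)),                      # sqrt(E(theta^2 | x))
    }

print(bayes_estimates(np.random.default_rng(1).gamma(shape=2.0, scale=1.0, size=5000)))
```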

Posterior distributions

The joint prior distribution of the parameters τ and η under the independent prior distributions

τ ~ Gamma(τ0, τ1),   η ~ Gamma(η0, η1),

is given as

π(τ, η) = ( τ1^{τ0} η1^{η0} / (Γ(τ0) Γ(η0)) ) τ^{τ0−1} η^{η0−1} e^{−(τ1 τ + η1 η)},   (11)

where all the hyper-parameters τ0, τ1, η0 and η1 are positive. Now, let

Z(τ, η) = τ^{τ0−1} η^{η0−1} e^{−(τ1 τ + η1 η)},   τ > 0, η > 0;

then the joint posterior distribution is proportional to the joint prior π(τ, η) times the likelihood L(data):

π*(τ, η | data) ∝ π(τ, η) L(data).   (12)

In the case of the PIML distribution, the exact joint posterior PDF of the parameters τ and η is given by

π*(τ, η | x) = C L(x; Υ) Z(τ, η),   (13)

where

L(x; Υ) = ( τ^n η^n / (1+η)^n ) e^{−2η Σ_{i=1}^n x_i^{−τ}} ∏_{i=1}^n [ (1+η) e^{η/x_i^τ} + 2η/x_i^τ − 1 ] ∏_{i=1}^n x_i^{−(τ+1)},   (14)

Υ = (τ, η), and C is the normalizing constant given by

C^{−1} = ∫_0^∞ ∫_0^∞ L(x; Υ) Z(τ, η) dη dτ.

Consequently, the marginal posterior PDF of the elements of the vector Υ = (Υ1, Υ2) = (τ, η) is given by

π(Υ_i | x) = ∫_0^∞ π*(Υ | x) dΥ_j,   (15)

where i, j = 1, 2, i ≠ j, and Υ_i is the ith element of the parameter vector Υ.

Generating posterior samples

Let f(x | ν) be a general PDF indexed by the parameter vector ν = (ν1, ν2, ..., νp). Given a sample x and an initial parameter vector ν^{(0)} = (ν1^{(0)}, ν2^{(0)}, ..., νp^{(0)}), the Gibbs sampler produces the values of each iteration in p steps by drawing a new value for each parameter from its full conditional PDF. In symbols, the steps of iteration l are as follows:

• Set the current parameter vector (ν1^{(l−1)}, ν2^{(l−1)}, ..., νp^{(l−1)}).
• Draw ν1^{(l)} from π(ν1 | ν2^{(l−1)}, ..., νp^{(l−1)}, x).
• Draw ν2^{(l)} from π(ν2 | ν1^{(l)}, ν3^{(l−1)}, ..., νp^{(l−1)}, x); and so on down to
• Draw νp^{(l)} from π(νp | ν1^{(l)}, ..., ν_{p−1}^{(l)}, x).

Making use of the above algorithm, the posterior samples of the parameters τ and η of the PIML distribution are generated from the full conditional posterior PDFs

π(τ | η, x) ∝ τ^{τ0+n−1} e^{−τ1 τ} e^{−2η Σ_{i=1}^n x_i^{−τ}} ∏_{i=1}^n x_i^{−(τ+1)} ∏_{i=1}^n [ (1+η) e^{η/x_i^τ} + 2η/x_i^τ − 1 ]

and

π(η | τ, x) ∝ ( η^{η0+n−1} e^{−η1 η} / (1+η)^n ) e^{−2η Σ_{i=1}^n x_i^{−τ}} ∏_{i=1}^n [ (1+η) e^{η/x_i^τ} + 2η/x_i^τ − 1 ],

respectively.
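Because neither full conditional has a standard form, sampling in practice requires Metropolis steps inside the Gibbs scan (the paper used OpenBUGS). The following hedged Metropolis-within-Gibbs sketch targets the same posterior; the proposal scale, hyper-parameter values and iteration counts are illustrative choices only.

```python
# A hedged Metropolis-within-Gibbs sketch for drawing (tau, eta) from the posterior.
import numpy as np

def log_post(tau, eta, x, t0=2.0, t1=1.0, e0=4.0, e1=2.0):
    # PIML log-likelihood plus independent Gamma(t0, t1) and Gamma(e0, e1) log-priors
    if tau <= 0.0 or eta <= 0.0:
        return -np.inf
    z = eta / x**tau
    loglik = (len(x) * (np.log(tau) + np.log(eta) - np.log1p(eta))
              - 2.0 * eta * np.sum(x**(-tau))
              + np.sum(z + np.log((1.0 + eta) + (2.0 * z - 1.0) * np.exp(-z)))
              - (tau + 1.0) * np.sum(np.log(x)))
    logprior = (t0 - 1.0) * np.log(tau) - t1 * tau + (e0 - 1.0) * np.log(eta) - e1 * eta
    return loglik + logprior

def mwg_sampler(x, n_iter=11000, burn=1000, step=0.15, seed=0):
    rng = np.random.default_rng(seed)
    tau, eta = 1.0, 1.0
    draws = np.empty((n_iter, 2))
    for k in range(n_iter):
        for j in range(2):                                   # update tau, then eta
            prop = [tau, eta]
            prop[j] *= np.exp(step * rng.normal())           # random walk on the log scale
            log_acc = (log_post(prop[0], prop[1], x) - log_post(tau, eta, x)
                       + np.log(prop[j]) - np.log([tau, eta][j]))   # log-scale Jacobian
            if np.log(rng.uniform()) < log_acc:
                tau, eta = prop
        draws[k] = tau, eta
    return draws[burn:]

# usage: draws = mwg_sampler(np.asarray(data)); summaries from draws[:, 0] and draws[:, 1]
```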

5. Comparison via Monte-Carlo Simulation

Here, we have carried out a Monte Carlo simulation study to compare the performance of the classical and the Bayesian methods of estimation of the parameters (τ, η) of the PIML distribution. The performance of the estimates (classical as well as Bayes) is compared in terms of their MSEs and posterior risks, respectively. Also, we have obtained four BCIs, namely SB, PB, BCPB and BCAB, and highest posterior density (HPD) credible intervals, and compared them in terms of their AWs and CPs. For the simulation study, we have considered the sample sizes n = 20, 30, 50, 100 and (τ, η) = (0.5, 2.0), (1.0, 2.0), (0.5, 3.0), (1.0, 3.0), (2.0, 2.0), respectively. For each design, R = 1,000 bootstrap samples, each of size n, are drawn from the original sample, and the procedure is replicated K = 1,000 times.

This section presents Monte Carlo simulation results to assess the performance of the MLEs discussed in the previous section. First, we generate samples of different sizes n from (1) using the inversion method. We compute the mean squared errors (MSEs) and biases of the MLEs of the parameters based on N = 10,000 iterations. The results are summarized in Table 3 for some selected parameter values and several sample sizes n. The results in Table 3 indicate that the MSEs and biases of the MLEs decrease as the sample size n increases, so the MLEs of the parameters are consistent.

5.1. Simulation results using mean squared errors, Bayes risks and nominal coverage probability as the criterion.

This section is devoted to the posterior risk values of the Bayes estimators under different loss functions, computed by Monte Carlo simulation. We generated samples of sizes n = 20, 30, 50, 100 from the PIML distribution for the true parameter values (i) (τ, η) = (2, 0.5) and (ii) (τ, η) = (1, 2). Table 4 reports the posterior risk values of the Bayes estimators under the prior distributions defined in (11) and the loss functions shown in Table 2. These results are obtained by taking the hyper-parameter values (τ0, τ1) = (2, 1), (η0, η1) = (4, 2) for case (i) and (τ0, τ1) = (10, 1), (η0, η1) = (1, 2) for case (ii), based on 10,000 MCMC replicates with a burn-in of 1,000 in the OpenBUGS software. It is evident from Table 4 that the posterior risk decreases with increasing sample size n, which confirms the consistency property. We also observe that, as n increases, the Bayes estimate of τ based on the KL loss function performs better than the other Bayes estimates, whereas the Bayes estimate of η based on the PL loss function performs better than the other loss functions as n decreases.


Table 3: AE, MSE, AW and CP of BCIs of the model parameters τ and η by using MLE.

n    τ    τ̂_mle: AE   MSE      SB: AW   CP      PB: AW   CP      η    η̂_mle: AE   MSE      SB: AW   CP      PB: AW   CP

20 0.5 0.64620 0.0397 0.590470 0.443 0.59545 0.64700 2 2.62260 0.62260 3.10983 0.11200 3.38438 1.00000

30 0.5 0.63180 0.0285 0.447900 0.335 0.45046 0.50600 2 2.57140 0.44800 2.14232 0.01400 2.20525 0.51400

50 0.5 0.61780 0.0197 0.327200 0.199 0.32802 0.28500 2 2.55280 0.36780 1.47214 0.00100 1.49041 0.00000

100 0.5 0.61200 0.0153 0.225480 0.033 0.22584 0.04300 2 2.57160 0.35310 0.96979 0.00000 0.97314 0.00000

20 1 1.29500 0.1555 1.170150 0.446 1.17866 0.67000 2 2.60850 0.59370 3.15528 0.11100 3.45266 1.00000

30 1 1.26660 0.1148 0.881140 0.364 0.88604 0.52000 2 2.57540 0.46550 2.14217 0.00600 2.21428 0.51100

50 1 1.23550 0.0803 0.659850 0.183 0.66214 0.25000 2 2.54390 0.36330 1.47791 0.00000 1.48917 0.00500

100 1 1.22060 0.0605 0.449170 0.026 0.45009 0.03800 2 2.55780 0.33290 0.97038 0.00000 0.97416 0.00000

20 0.5 0.56860 0.0163 0.481160 0.808 0.48511 0.92200 3 3.67660 1.48850 5.83612 0.81000 6.66360 1.00000

30 0.5 0.55520 0.0105 0.363930 0.775 0.36614 0.88700 3 3.52840 0.85260 3.63643 0.76200 3.80360 1.00000

50 0.5 0.54990 0.0068 0.266610 0.717 0.26766 0.80200 3 3.43990 0.45950 2.38948 0.68000 2.43261 0.96200

100 0.5 0.53920 0.0034 0.180740 0.581 0.18113 0.64600 3 3.33840 0.21570 1.48682 0.46700 1.49615 0.71100

20 1 1.13460 0.0661 0.967300 0.787 0.97562 0.92300 3 3.66990 1.62120 5.79667 0.80300 6.73772 1.00000

30 1 1.11470 0.0405 0.724170 0.791 0.72811 0.89100 3 3.51640 0.78410 3.65657 0.76300 3.82785 1.00000

50 1 1.09950 0.0258 0.530040 0.742 0.53161 0.82200 3 3.41530 0.41190 2.36034 0.69100 2.39949 0.96600

100 1 1.07630 0.0137 0.360450 0.604 0.36163 0.66900 3 3.33670 0.21360 1.48351 0.46600 1.49077 0.71700

20 2 2.57430 0.6227 2.326790 0.482 2.34762 0.67400 2 2.63590 0.65470 3.13810 0.10500 3.43835 1.00000

30 2 2.52970 0.4622 1.795760 0.337 1.80633 0.47500 2 2.58900 0.47190 2.13065 0.01400 2.19423 0.51300

50 2 2.47730 0.3309 1.317320 0.176 1.32084 0.26200 2 2.56400 0.38580 1.47180 0.00000 3.74158 0.00400

100 2 2.45010 0.2468 0.894830 0.033 0.89621 0.05100 2 2.56410 0.34630 0.96020 0.00000 0.96310 0.00000


Table 4: Posterior risk values of Bayesian estimators under different loss functions based on simulated data for different sample sizes.

n     Loss function   (τ, η) = (2, 0.5)          (τ, η) = (1, 2)
                      τ          η               τ          η
20    SELF            0.183806   0.009075        0.042515   0.154721
      WSELF           0.071691   0.026926        0.030225   0.077895
      MSELF           0.030278   0.098003        0.022891   0.043705
      PLF             0.068412   0.025424        0.029432   0.075577
30    SELF            0.065705   0.007869        0.026909   0.085396
      WSELF           0.033654   0.017964        0.021658   0.046136
      MSELF           0.018129   0.046110        0.018359   0.026846
      PLF             0.032622   0.017514        0.021233   0.045574
50    SELF            0.035856   0.007735        0.014231   0.051113
      WSELF           0.020319   0.011983        0.011751   0.027568
      MSELF           0.011905   0.019580        0.009982   0.015532
      PLF             0.020002   0.011839        0.011603   0.027314
100   SELF            0.024259   0.003219        0.010833   0.029405
      WSELF           0.011909   0.006025        0.008129   0.015245
      MSELF           0.005946   0.011653        0.006210   0.008089
      PLF             0.011784   0.005991        0.008063   0.015200

6. Applications

In this section, we examine the versatility of the PIML model in comparison with the inverted modified Lindley (IML), modified Lindley (ML) and Lindley (L) distributions using three real data sets presented below, which are available in [5]. The box plots of the considered data sets are displayed in Figure 2. To check the suitability of the proposed model for the considered data sets, goodness-of-fit statistics are considered. Here, we have used the package fitdistrplus of the open-source R software (see Ihaka and Gentleman (1996)) for the goodness-of-fit tests. The unknown parameters are estimated by the maximum likelihood estimation (MLE) method, and the negative log-likelihood evaluated at the MLEs (−ln L), the values of the Akaike information criterion (AIC) and Bayesian information criterion (BIC), the Kolmogorov-Smirnov (K-S) statistic with the corresponding p-value, and the Anderson-Darling (AD) and Cramér-von Mises (CM) statistics are compared with those of the IML, ML and L distributions and reported in Table 5.

Data set I: This data set has been analyzed by [19]. The Open University (1993), which relates to the prices of 31 children's wooden toys on sale in a Suffolk craft shop in April 1991, is the source of the first data set. The data are: 4.2, 1.12, 1.39, 2, 3.99, 2.15, 1.74, 5.81, 1.7, 0.5, 0.99, 11.5, 5.12, 0.9, 1.99, 6.24, 2.6, 3, 12.2, 7.36, 4.75, 11.59, 8.69, 9.8, 1.85, 1.99, 1.35, 10, 0.65, 1.45.

Data set II: The second data set, obtained from [17], consists of the time intervals between failures for repairable items: 1.43, 0.11, 0.71, 0.77, 2.63, 1.49, 3.46, 2.46, 0.59, 0.74, 1.23, 0.94, 4.36, 0.40, 1.74, 4.73, 2.23, 0.45, 0.70, 1.06, 1.46, 0.30, 1.82, 2.37, 0.63, 1.23, 1.24, 1.97, 1.86, 1.17.

Data set III: The third data set consists of 30 observations of March precipitation (in inches) for Minneapolis/St. Paul, reported by [11]: 0.77, 1.74, 0.81, 1.2, 1.95, 1.2, 0.47, 1.43, 3.37, 2.2, 3, 3.09, 1.51, 2.1, 0.52, 1.62, 1.31, 0.32, 0.59, 0.81, 2.81, 1.87, 1.18, 1.35, 4.75, 2.48, 0.96, 1.89, 0.9, 2.05.
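For illustration, the following sketch re-derives the Table 5 summary measures for the PIML fit to Data set I from the reported MLEs; scipy's Kolmogorov-Smirnov test is used as a substitute for the R fitdistrplus workflow of the paper, so the K-S statistic and p-value should only be close to, not identical with, the tabulated values.

```python
# Recomputing AIC, BIC and the K-S statistic for the PIML fit to Data set I
# from the MLEs reported in Table 5; an illustrative cross-check only.
import numpy as np
from scipy.stats import kstest

data1 = np.array([4.2, 1.12, 1.39, 2, 3.99, 2.15, 1.74, 5.81, 1.7, 0.5, 0.99, 11.5,
                  5.12, 0.9, 1.99, 6.24, 2.6, 3, 12.2, 7.36, 4.75, 11.59, 8.69, 9.8,
                  1.85, 1.99, 1.35, 10, 0.65, 1.45])

def piml_cdf(x, eta, tau):
    z = eta / x**tau
    return (1.0 + z / (1.0 + eta) * np.exp(-z)) * np.exp(-z)

tau_hat, eta_hat, negll = 1.093, 2.233, 73.011                    # Table 5, Data set I
aic = 2 * 2 + 2 * negll                                           # 2k + 2(-ln L) = 150.02
bic = 2 * np.log(len(data1)) + 2 * negll                          # k ln n + 2(-ln L) = 152.82
ks = kstest(data1, lambda x: piml_cdf(x, eta_hat, tau_hat))
print(aic, bic, ks.statistic, ks.pvalue)                          # KS values should be near Table 5
```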

Figure 2: Box plots of the considered data sets I, II and III [5].

Table 5: The model fitting summary of the considered data sets I, II and III.

Distribution   n   (τ̂, η̂)   −ln L   AIC   BIC   K-S statistic   p-value   AD   CM

Data Set I

PIML 30 (1.093,2.233) 73.011 150.023 152.825 0.1017 0.9154 0.4138 0.0546

IML 30 (2.1537) 73.187 148.375 149.776 0.1225 0.7589 0.4082 0.0487

ML 30 (0.2825) 73.00 148.000 149.4016 0.18521 0.2548 0.9004 0.1556

L 30 (0.3999) 73.232 148.464 149.865 0.1832 0.2661 0.8631 0.1478

Data Set II

PIML 30 (0.955,0.941) 45.227 94.454 97.257 0.12767 0.7124 0.9657 0.1387

IML 30 (0.9201) 45.301 92.603 94.004 0.1404 0.5951 0.9454 0.1405

ML 30 (0.7302) 40.749 83.499 84.901 0.0979 0.9355 0.4283 0.0629

L 30 (0.9767) 41.537 85.0740 86.4752 0.1278 0.7108 0.7125 0.1111

Data Set III

PIML 30 (1.362,1.222) 41.608 87.216 90.018 0.1392 0.6058 0.6605 0.0985

IML 30 (1.2473) 43.868 89.736 91.137 0.1974 0.1925 1.391 0.217

ML 30 (0.6644) 41.945 85.889 87.291 0.1566 0.4532 1.1278 0.1723

L 30 (0.9096) 43.1437 88.2874 89.6886 0.1882 0.2383 1.5908 0.2618

Table 6: Widths of BCIs of τ and η for the considered data sets I, II and III.

                        τ                                                      η
Data set   PB: L, U, W                   SB: L, U, W                   PB: L, U, W                   SB: L, U, W
I          0.97156, 1.74531, 0.77375     0.91424, 1.71277, 0.79853     2.14218, 3.78504, 1.64285     1.94844, 3.58389, 1.63544
II         1.23733, 3.70488, 2.46755     0.80434, 3.48136, 2.67702     1.42575, 2.00554, 0.57978     1.39314, 1.99245, 0.59931
III        1.54637, 3.66720, 2.12083     1.26633, 3.39761, 2.13127     1.59218, 2.38249, 0.79030     1.52205, 2.32971, 0.80765


Table 7: Bayes estimates of τ and η for the considered data sets I, II and III.

                   τ (Bayes estimate)                               η (Bayes estimate)
Data set   SELF       WSELF      MSELF      PLF          SELF       WSELF       MSELF      PLF
I          0.024264   0.022644   0.015249   0.022129     0.104306   0.049351    0.023251   0.048547
II         0.012078   0.022422   0.022593   0.012792     0.021774   0.024988    0.049351   0.021587
III        0.029582   0.013285   0.018129   0.021859     0.033645   0.0218064   0.022685   0.026307


The MLEs of the parameters are given in Table 5. The widths of the BCIs and the Bayes estimates of the model parameters are reported in Tables 6 and 7, respectively.

7. Concluding Remarks

In this article, we have proposed a new probability distribution, namely the PIML distribution, obtained by extending the IML distribution. Several statistical characteristics have been derived. Maximum likelihood estimates of the model parameters, bootstrap confidence intervals from the classical point of view, and Bayes estimates have been obtained. The consistency of the point and interval estimates has been shown through the simulation study in terms of mean squared errors, average widths and corresponding coverage probabilities. With the lowest values of the AIC, BIC, AD, CM and K-S statistics and the highest K-S p-values among all the competitive models, viz. L, ML and IML, the PIML distribution has been chosen as the best-fitting model for the three considered data sets.

References

[1] Asgharzadeh, A., Bakouch, H. S., Nadarajah, S. and Shara, F. (2016). A New Weighted Lindley Distribution with Application. Brazilian Journal of Probability and Statistics, 30, 1-27.

[2] Ashour SK, Eltehiwy MA (2015) Exponentiated power Lindley distribution. J Adv Res 6, 895-905.

[3] Bakouch, H., Al-Zahrani, B., Al-Shomrani, A., Marchi, V. and Louzada, F. (2012). An extended Lindley distribution. J Korean Stat Soc, 41(1), 75-85.

[4] Barreto-Souza, W. and Bakouch, H. S. (2013). A new lifetime model with decreasing failure rate. Statistics, 47, 465-476.

[5] Chesneau, C., Tomy, L., Gillariose, J. and Jamal, F. (2020). The inverted modified Lindley distribution, J Stat Theor Pract, 14(3), 1-17.

[6] Chesneau, C., Tomy, L., and Gillariose, J. (2021). A New Modified Lindley Distribution with Properties and Applications. Journal of Statistics Management and System, 24, 1383-1403.

[7] Dey, S., Nassar, M. and Kumar, D. (2019a). Alpha power transformed inverse Lindley distribution: A distribution with an upside-down bathtub-shaped hazard function, Journal of Computational and Applied Mathematics, 348 (2019) 130-145.

[8] Dey, S., Gosh, I., and Kumar, D. (2019b). Alpha-power transformed Lindley distribution: Properties and associated inference with application to earthquake data, Ann. Data. Sci., 6(4), 623-650

[9] Gupta, R. D. and Kundu, D. (2009). Introduction of shape/skewness parameter(s) in a probability distribution, Comput. Stat. 7, 153-171.

[10] Ghitany ME, Al-Mutairi DK, Balakrishnan N, Al-Enezi LJ (2013) Power Lindley distribution and associated inference. Comput Stat Data Anal 6, 20-33.

[11] Hinkley D (1977) On quick choice of power transformations. Appl Stat 26, 67-69

[12] Joshi, S. and Jose, K. K. (2018). Wrapped Lindley Distribution. Communications in Statistics - Theory and Methods, 47, 1013-1021.

[13] Kharazmi, O. Kumar, D. and Dey, S. (2023). Power modified Lindley distribution: Properties, classical and Bayesian estimation and regression model with applications, Austrian Journal of Statistics, 52, 71-95.

[14] Kumar, D., Yadav, P. and Kumar, J. (2022a). Classical inferences of order statistics for inverted modified Lindley distribution with applications, Strength of Materials, 55, 441-455.

[15] Kumar, D., Nassar, M., Dey, S. and Diyali, B. (2022b). Analysis of an inverted modified Lindley distribution using dual generalized order statistics, Strength of Materials, 54, 889-904.

[16] Kumar, D. and Kumar, V. (2022c). An extension of exponentiated gamma distribution: A new regression model with application, Lobachevskii Journal of Mathematics, 43, 2525-2543.

[17] Murthy DNP, Xie M, Jiang R (2004). Weibull models. Wiley series in probability and statistics, Wiley, Hoboken.

[18] Nadarajah, S., Bakouch, H. S. and Tahmasbi, R. (2011). A Generalized Lindley Distribution. Sankhya B, 73, 331-359.

[19] Shafei S, Darijani S, Saboori H (2016) Inverse Weibull power series distributions: properties and applications. J Stat Comput Simul 86(6):1069-1094.
