
ESTIMATION OF PARAMETERS FOR KUMARASWAMY EXPONENTIAL DISTRIBUTION BASED ON PROGRESSIVE TYPE-I INTERVAL CENSORED SAMPLE

*Manoj Chacko and Shilpa S Dev

Department of Statistics, University of Kerala, Trivandrum-695581, India *[email protected]; [email protected]

Abstract

In this paper, we consider the problem of estimation of the parameters of the Kumaraswamy exponential distribution using progressive type-I interval censored data. The maximum likelihood estimators (MLEs) of the parameters are obtained. As there are no closed-form solutions for the MLEs, we implement the Expectation-Maximization (EM) algorithm for their computation. Bayes estimators are also obtained using different loss functions, namely the squared error loss function and the LINEX loss function. For the Bayesian estimation, Lindley's approximation method has been applied. To evaluate the performance of the various estimators developed, we conduct an extensive simulation study. The different estimators and censoring schemes are compared based on average bias and mean squared error. A real data set is also taken into consideration for illustration.

Keywords: Maximum likelihood estimate, EM algorithm, Bayesian inference, Lindley's approximation

1. Introduction

In life testing experiments and survival analysis, test units may leave the experiment before failure due to time restrictions, budget constraints or accidental breakage. A censored sample refers to data gathered in such situations, which may therefore be incomplete. Over the last few decades, a number of censoring methodologies have been developed for analysing such situations. In the existing literature, the two commonly used traditional censoring schemes are type-I and type-II censoring, in which the experiment is terminated after a prescribed time point and after a prescribed number of failures, respectively. However, neither of these two censoring strategies permits the experimenter to remove live units from the experiment prior to its termination. To allow the removal of units during the experiment, the idea of progressive censoring was developed by [7]. It is further observed that, in many practical situations, it is not possible for the experimenter to observe the life test units continuously and record the precise failure times. For example, in medical and clinical trials, specific information regarding the survival time of a patient receiving a particular treatment may not be available. In such cases, the failure times are only observed to fall in intervals, which is known as interval censoring. However, interval censoring alone does not allow removal of units during the experiment. The concept of progressive type-I interval censoring, incorporating the principles of type-I, progressive and interval censoring schemes, was introduced by [2]. In this type of censoring, items can be withdrawn at prescheduled inspection time points.

The progressive type-I interval censored sample is gathered in the following manner. Assume that n units are placed on a life test at time t_0 = 0. Units are inspected at m pre-specified times t_1 < t_2 < ... < t_m, with t_m being the scheduled end of the experiment. At the ith inspection time t_i, i = 1, ..., m, the number X_i of failures within (t_{i-1}, t_i] is recorded and R_i surviving units are randomly removed from the life test. Since the number of surviving units at each time t_1, ..., t_m is a random variable, the numbers of removals R_1, ..., R_m can be specified as percentages of the remaining surviving units: with pre-specified proportions q_1, ..., q_{m-1} and q_m = 1, R_i = ⌊q_i × (number of surviving units at time t_i)⌋ units are removed from the life test at time t_i, where ⌊w⌋ denotes the largest integer less than or equal to w. Alternatively, R_1, R_2, ..., R_m can be pre-specified non-negative integers, in which case R_i^obs = min(R_i, number of surviving units at time t_i), i = 1, 2, ..., m-1, and R_m^obs = number of surviving units at time t_m. Data observed under this censoring scheme can be represented as (X_i, R_i, t_i), i = 1, ..., m. If F(x; θ) is the cumulative distribution function (cdf) of the population from which the progressive type-I interval censored sample is taken, then the likelihood function of θ can be constructed as follows (see [2]):

$$L(\theta) \propto \prod_{i=1}^{m}\left[F(t_i;\theta) - F(t_{i-1};\theta)\right]^{X_i}\left[1 - F(t_i;\theta)\right]^{R_i}, \qquad (1)$$

where t_0 = 0.
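The following is a minimal R sketch of this sampling mechanism, assuming a hypothetical lifetime simulator rdist(n) (any parametric model can be supplied, e.g. the KE sampler given later in this section) and pre-specified removal proportions q_1, ..., q_m with q_m = 1.

```r
gen_pic1 <- function(n, t, q, rdist) {
  m <- length(t)
  alive <- rdist(n)                               # latent lifetimes of the n units
  X <- R <- integer(m)
  lower <- 0
  for (i in 1:m) {
    X[i]  <- sum(alive > lower & alive <= t[i])   # failures in (t_{i-1}, t_i]
    alive <- alive[alive > t[i]]                  # units still surviving at t_i
    R[i]  <- floor(q[i] * length(alive))          # planned removals at t_i
    if (R[i] > 0) alive <- alive[-sample.int(length(alive), R[i])]
    lower <- t[i]
  }
  list(X = X, R = R, t = t)                       # the observed (X_i, R_i, t_i)
}
```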

In the recent past, several authors have studied progressive type-I interval censored sampling schemes under various circumstances. The maximum likelihood estimates of the parameters of the exponentiated Weibull family and their asymptotic variances were obtained by [4]. Optimally spaced inspection times for the log-normal distribution were determined by [12], while different estimation methods based on progressive type-I interval censoring were considered for the Weibull distribution by [17] and for the generalized exponential distribution by [6]. Statistical inference under this censoring scheme for the inverse Weibull distribution was further discussed by [19], and Bayesian inference for the Dagum distribution was discussed by [3]. In this paper, we consider a progressive type-I interval censored sample taken from a Kumaraswamy exponential (KE) distribution with probability density function (pdf) given by

$$f(x) = \beta\lambda\, e^{-x}\left(1-e^{-x}\right)^{\beta-1}\left(1-\left(1-e^{-x}\right)^{\beta}\right)^{\lambda-1}, \quad x > 0. \qquad (2)$$

The cdf corresponding to the above pdf is given by

$$F(x) = 1 - \left(1-\left(1-e^{-x}\right)^{\beta}\right)^{\lambda}, \quad x > 0, \qquad (3)$$

where β > 0 and λ > 0 are two shape parameters. Throughout the paper, we use the notation KE(β, λ) to denote the Kumaraswamy exponential distribution with shape parameters β and λ. The KE distribution is a generalisation of the exponential distribution that was developed as a model for problems in environmental studies and survival analysis. Several studies on the Kumaraswamy distribution and its generalisations have been published in recent years. An exponentiated Kumaraswamy distribution and its properties were considered and discussed by [11]. The Kumaraswamy linear exponential distribution with four parameters was introduced by [9], who also derived some of its mathematical properties. Maximum likelihood estimation of the unknown parameters of the Kumaraswamy exponential distribution was considered by [1]. The exponentiated Kumaraswamy exponential distribution and its characterization properties were introduced by [18]. The estimation of parameters of the Kumaraswamy exponential distribution under a progressive type-II censoring scheme was considered by [5].
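For reference, the following is a small R sketch of the density, distribution, quantile and random-generation functions implied by (2) and (3); the names dke, pke, qke and rke are our own, and rke can be plugged into the generator gen_pic1 sketched earlier in this section.

```r
dke <- function(x, beta, lam)                       # pdf in (2)
  beta * lam * exp(-x) * (1 - exp(-x))^(beta - 1) *
    (1 - (1 - exp(-x))^beta)^(lam - 1)
pke <- function(x, beta, lam)                       # cdf in (3)
  1 - (1 - (1 - exp(-x))^beta)^lam
qke <- function(p, beta, lam)                       # quantile: inverse of (3)
  -log(1 - (1 - (1 - p)^(1 / lam))^(1 / beta))
rke <- function(n, beta, lam) qke(runif(n), beta, lam)   # inverse-cdf sampler
```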

The structure of this paper is outlined as follows. The maximum likelihood estimators of the KE(β, λ) parameters are obtained in Section 2; in the same section, estimators are also obtained using the EM algorithm. In Section 3, Bayes estimates of β and λ are obtained for different loss functions, namely the squared error and LINEX loss functions, and Lindley's approximation method is used to evaluate these Bayes estimates. In Section 4, a simulation study is carried out to analyse the properties of the various estimators developed in this paper. In Section 5, a real data set is considered for illustration. Finally, in Section 6, we present some concluding remarks.

2. Maximum Likelihood Estimation

Let (X_i, R_i, t_i), i = 1, 2, ..., m, be a progressively type-I interval censored sample taken from the KE(β, λ) distribution with pdf defined in (2). Then, using (1), the likelihood function is given by

$$L(\beta,\lambda) \propto \prod_{i=1}^{m}\left[\left(1-\left(1-e^{-t_{i-1}}\right)^{\beta}\right)^{\lambda} - \left(1-\left(1-e^{-t_i}\right)^{\beta}\right)^{\lambda}\right]^{X_i}\left[\left(1-\left(1-e^{-t_i}\right)^{\beta}\right)^{\lambda}\right]^{R_i}. \qquad (4)$$

Then the log-likelihood function is given by

$$l(\beta,\lambda) = \ln L(\beta,\lambda) = \sum_{i=1}^{m} X_i \ln\left[\left(1-\left(1-e^{-t_{i-1}}\right)^{\beta}\right)^{\lambda} - \left(1-\left(1-e^{-t_i}\right)^{\beta}\right)^{\lambda}\right] + \lambda\sum_{i=1}^{m} R_i \ln\left[1-\left(1-e^{-t_i}\right)^{\beta}\right]. \qquad (5)$$

The MLEs of β and λ are the solutions of the following normal equations:

$$\sum_{i=1}^{m} \frac{R_i\,\lambda\left(1-z_i^{\beta}\right)^{\lambda-1} z_i^{\beta}\ln z_i}{\left(1-z_i^{\beta}\right)^{\lambda}} = -\sum_{i=1}^{m} X_i\,\frac{\lambda\left(1-z_i^{\beta}\right)^{\lambda-1} z_i^{\beta}\ln z_i - \lambda\left(1-z_{i-1}^{\beta}\right)^{\lambda-1} z_{i-1}^{\beta}\ln z_{i-1}}{\left(1-z_i^{\beta}\right)^{\lambda}-\left(1-z_{i-1}^{\beta}\right)^{\lambda}} \qquad (6)$$

and

$$\sum_{i=1}^{m} \frac{R_i\left(1-z_i^{\beta}\right)^{\lambda}\ln\left(1-z_i^{\beta}\right)}{\left(1-z_i^{\beta}\right)^{\lambda}} = -\sum_{i=1}^{m} X_i\,\frac{\left(1-z_i^{\beta}\right)^{\lambda}\ln\left(1-z_i^{\beta}\right) - \left(1-z_{i-1}^{\beta}\right)^{\lambda}\ln\left(1-z_{i-1}^{\beta}\right)}{\left(1-z_i^{\beta}\right)^{\lambda}-\left(1-z_{i-1}^{\beta}\right)^{\lambda}}, \qquad (7)$$

where $z_i = 1-e^{-t_i}$.

As the above equations have no closed-form solutions, the MLEs can be obtained through iterative numerical methods such as the Newton-Raphson method. Since the MLEs have to be obtained numerically, the EM algorithm is used in the following subsection to find the MLEs of β and λ.
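As an alternative to hand-coded Newton-Raphson iterations, the log-likelihood (5) can also be maximized directly with a general-purpose optimizer. A minimal R sketch is given below; the function and argument names are illustrative.

```r
loglik_ke <- function(par, X, R, t) {
  beta <- par[1]; lam <- par[2]
  if (beta <= 0 || lam <= 0) return(-Inf)
  S <- (1 - (1 - exp(-c(0, t)))^beta)^lam      # survival function at 0, t_1, ..., t_m
  sum(X * log(S[-length(S)] - S[-1])) + sum(R * log(S[-1]))
}

fit_mle_ke <- function(X, R, t, start = c(1, 1))
  optim(start, loglik_ke, X = X, R = R, t = t,
        control = list(fnscale = -1))           # fnscale = -1 turns optim into a maximizer
```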

2.1. EM Algorithm

The Expectation-Maximization (EM) algorithm is a broadly applicable method for the iterative computation of maximum likelihood estimates and is useful in a variety of incomplete-data scenarios where methods such as the Newton-Raphson method may prove more difficult. Each iteration of the algorithm consists of two steps, the expectation step (E-step) and the maximisation step (M-step), hence the name EM algorithm; a detailed development can be found in [8]. The EM algorithm for finding the MLEs of the parameters of the two-parameter Kumaraswamy exponential distribution is as follows.

Let ψ_{ij}, j = 1, 2, ..., X_i, be the survival times of the units that failed within the subinterval (t_{i-1}, t_i], and let ψ*_{ij}, j = 1, 2, ..., R_i, be the survival times of the units withdrawn at t_i, for i = 1, 2, ..., m. Then the log-likelihood function ln(L_c) based on the lifetimes of all n items (the complete sample) from the two-parameter KE(β, λ) distribution is given by

$$\ln(L_c) = \sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\ln f(\psi_{ij};\beta,\lambda) + \sum_{j=1}^{R_i}\ln f(\psi^{*}_{ij};\beta,\lambda)\right],$$

that is,

$$\begin{aligned}\ln(L_c) = {} & \left[\ln\beta + \ln\lambda\right]\sum_{i=1}^{m}\left[X_i + R_i\right] - \sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\psi_{ij} + \sum_{j=1}^{R_i}\psi^{*}_{ij}\right] \\ & + (\beta-1)\sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\ln\left(1-e^{-\psi_{ij}}\right) + \sum_{j=1}^{R_i}\ln\left(1-e^{-\psi^{*}_{ij}}\right)\right] \\ & + (\lambda-1)\sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\ln\left(1-\left(1-e^{-\psi_{ij}}\right)^{\beta}\right) + \sum_{j=1}^{R_i}\ln\left(1-\left(1-e^{-\psi^{*}_{ij}}\right)^{\beta}\right)\right], \end{aligned} \qquad (8)$$

where $\sum_{i=1}^{m}(X_i + R_i) = n$.

Taking the derivatives of (8) with respect to β and λ and equating them to zero, the following normal equations are obtained:

$$\frac{n}{\beta} = (\lambda-1)\sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\frac{\left(1-e^{-\psi_{ij}}\right)^{\beta}\ln\left(1-e^{-\psi_{ij}}\right)}{1-\left(1-e^{-\psi_{ij}}\right)^{\beta}} + \sum_{j=1}^{R_i}\frac{\left(1-e^{-\psi^{*}_{ij}}\right)^{\beta}\ln\left(1-e^{-\psi^{*}_{ij}}\right)}{1-\left(1-e^{-\psi^{*}_{ij}}\right)^{\beta}}\right] - \sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\ln\left(1-e^{-\psi_{ij}}\right) + \sum_{j=1}^{R_i}\ln\left(1-e^{-\psi^{*}_{ij}}\right)\right] \qquad (9)$$

and

$$\lambda = \frac{-n}{\sum_{i=1}^{m}\left[\sum_{j=1}^{X_i}\ln\left(1-\left(1-e^{-\psi_{ij}}\right)^{\beta}\right) + \sum_{j=1}^{R_i}\ln\left(1-\left(1-e^{-\psi^{*}_{ij}}\right)^{\beta}\right)\right]}. \qquad (10)$$

The lifetimes of the X_i failures in the ith interval (t_{i-1}, t_i] are independent and follow a Kumaraswamy exponential distribution doubly truncated on the left at t_{i-1} and on the right at t_i, while the lifetimes of the R_i censored items at time t_i are independent and follow a Kumaraswamy exponential distribution truncated on the left at t_i, i = 1, 2, ..., m.

For the EM algorithm, the following expected values of a Kumaraswamy exponential random variable Y, doubly truncated to [a, b) with 0 ≤ a < b ≤ ∞, are needed:

$$E_{\beta,\lambda}\left[\ln\left(1-e^{-Y}\right)\,\middle|\,Y\in[a,b)\right] = \frac{\int_a^b \ln\left(1-e^{-y}\right) f(y;\beta,\lambda)\,dy}{F(b;\beta,\lambda)-F(a;\beta,\lambda)},$$

$$E_{\beta,\lambda}\left[\ln\left(1-\left(1-e^{-Y}\right)^{\beta}\right)\,\middle|\,Y\in[a,b)\right] = \frac{\int_a^b \ln\left[1-\left(1-e^{-y}\right)^{\beta}\right] f(y;\beta,\lambda)\,dy}{F(b;\beta,\lambda)-F(a;\beta,\lambda)},$$

and

$$E_{\beta,\lambda}\left[\frac{\left(1-e^{-Y}\right)^{\beta}\ln\left(1-e^{-Y}\right)}{1-\left(1-e^{-Y}\right)^{\beta}}\,\middle|\,Y\in[a,b)\right] = \frac{\int_a^b \frac{\left(1-e^{-y}\right)^{\beta}\ln\left(1-e^{-y}\right)}{1-\left(1-e^{-y}\right)^{\beta}}\, f(y;\beta,\lambda)\,dy}{F(b;\beta,\lambda)-F(a;\beta,\lambda)}.$$

The iterative process that defines the EM algorithm is as follows.

Step 1: Choose starting values of β and λ, say β^(0) and λ^(0), and set k = 0.

Step 2: In the (k + 1)th iteration, the E-step computes the following conditional expectations for i = 1, 2, ..., m:

$$E_{1i} = E_{\beta^{(k)},\lambda^{(k)}}\left[\ln\left(1-e^{-Y}\right)\,\middle|\,Y\in[t_{i-1},t_i)\right], \qquad E_{2i} = E_{\beta^{(k)},\lambda^{(k)}}\left[\ln\left(1-e^{-Y}\right)\,\middle|\,Y\in[t_i,\infty)\right],$$

$$E_{3i} = E_{\beta^{(k)},\lambda^{(k)}}\left[\ln\left(1-\left(1-e^{-Y}\right)^{\beta^{(k)}}\right)\,\middle|\,Y\in[t_{i-1},t_i)\right], \qquad E_{4i} = E_{\beta^{(k)},\lambda^{(k)}}\left[\ln\left(1-\left(1-e^{-Y}\right)^{\beta^{(k)}}\right)\,\middle|\,Y\in[t_i,\infty)\right],$$

$$E_{5i} = E_{\beta^{(k)},\lambda^{(k)}}\left[\frac{\left(1-e^{-Y}\right)^{\beta^{(k)}}\ln\left(1-e^{-Y}\right)}{1-\left(1-e^{-Y}\right)^{\beta^{(k)}}}\,\middle|\,Y\in[t_{i-1},t_i)\right] \quad \text{and} \quad E_{6i} = E_{\beta^{(k)},\lambda^{(k)}}\left[\frac{\left(1-e^{-Y}\right)^{\beta^{(k)}}\ln\left(1-e^{-Y}\right)}{1-\left(1-e^{-Y}\right)^{\beta^{(k)}}}\,\middle|\,Y\in[t_i,\infty)\right].$$

Then the likelihood equations (9) and (10) are respectively given by

$$\frac{n}{\beta} = (\lambda-1)\sum_{i=1}^{m}\left[X_i E_{5i} + R_i E_{6i}\right] - \sum_{i=1}^{m}\left[X_i E_{1i} + R_i E_{2i}\right] \qquad (11)$$

and

$$\lambda = \frac{-n}{\sum_{i=1}^{m}\left[X_i E_{3i} + R_i E_{4i}\right]}. \qquad (12)$$

Step 3: The M-step solves equations (11) and (12) to obtain the next values of β and λ, namely β^(k+1) and λ^(k+1), as follows:

$$\lambda^{(k+1)} = \frac{-n}{\sum_{i=1}^{m}\left[X_i E_{3i} + R_i E_{4i}\right]}$$

and

$$\beta^{(k+1)} = \frac{n}{\left(\lambda^{(k+1)}-1\right)\sum_{i=1}^{m}\left[X_i E_{5i} + R_i E_{6i}\right] - \sum_{i=1}^{m}\left[X_i E_{1i} + R_i E_{2i}\right]}.$$

Step 4: Check for convergence. If convergence has occurred, the current β^(k+1) and λ^(k+1) are the approximate maximum likelihood estimates of β and λ via the EM algorithm; otherwise, set k = k + 1 and go to Step 2.
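A minimal R sketch of this EM iteration is given below. The conditional expectations in the E-step are evaluated by one-dimensional numerical integration, the dke()/pke() helpers sketched in Section 1 are assumed, the infinite upper limits are truncated at a large value for numerical stability, and all names and tolerances are illustrative assumptions.

```r
trunc_mean <- function(g, lo, hi, beta, lam) {    # E[g(Y) | Y in [lo, hi)]
  if (!is.finite(hi)) hi <- lo + 30               # truncate the infinite tail (negligible mass beyond)
  num <- integrate(function(y) g(y) * dke(y, beta, lam), lo, hi)$value
  num / (pke(hi, beta, lam) - pke(lo, beta, lam))
}

em_ke <- function(X, R, t, beta0, lam0, tol = 1e-6, maxit = 500) {
  m <- length(X); n <- sum(X + R); tt <- c(0, t)
  beta <- beta0; lam <- lam0
  for (k in 1:maxit) {
    g1 <- function(y) log(1 - exp(-y))
    g3 <- function(y) log(1 - (1 - exp(-y))^beta)
    g5 <- function(y) (1 - exp(-y))^beta * log(1 - exp(-y)) /
                      (1 - (1 - exp(-y))^beta)
    E1 <- E2 <- E3 <- E4 <- E5 <- E6 <- numeric(m)
    for (i in 1:m) {                              # E-step (Step 2)
      E1[i] <- trunc_mean(g1, tt[i], tt[i + 1], beta, lam)
      E2[i] <- trunc_mean(g1, tt[i + 1], Inf,   beta, lam)
      E3[i] <- trunc_mean(g3, tt[i], tt[i + 1], beta, lam)
      E4[i] <- trunc_mean(g3, tt[i + 1], Inf,   beta, lam)
      E5[i] <- trunc_mean(g5, tt[i], tt[i + 1], beta, lam)
      E6[i] <- trunc_mean(g5, tt[i + 1], Inf,   beta, lam)
    }
    lam_new  <- -n / sum(X * E3 + R * E4)                         # M-step, equation (12)
    beta_new <- n / ((lam_new - 1) * sum(X * E5 + R * E6) -
                     sum(X * E1 + R * E2))                        # M-step, equation (11)
    if (max(abs(c(beta_new - beta, lam_new - lam))) < tol) break  # Step 4
    beta <- beta_new; lam <- lam_new
  }
  c(beta = beta_new, lambda = lam_new)
}
```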

3. Bayesian Estimation

In this section, Bayes estimators of the parameters of KE(β, λ) are obtained under both symmetric and asymmetric loss functions.

The squared error loss function is symmetric and is defined as

$$L_1(\delta,\hat\delta) = (\hat\delta-\delta)^2,$$

where δ̂ is an estimate of the parameter δ.

The LINEX loss function is an asymmetric loss function, defined as

$$L_2(\delta,\hat\delta) \propto e^{h(\hat\delta-\delta)} - h(\hat\delta-\delta) - 1, \quad h \neq 0.$$
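For intuition, a tiny R illustration of the two loss functions (with h = 1 for the LINEX loss): the squared error penalizes over- and under-estimation symmetrically, while the LINEX loss does not.

```r
sel   <- function(est, true) (est - true)^2
linex <- function(est, true, h = 1) exp(h * (est - true)) - h * (est - true) - 1
sel(c(0.8, 1.2), 1)     # 0.04 0.04          (symmetric penalty)
linex(c(0.8, 1.2), 1)   # ~0.0187 ~0.0214    (asymmetric penalty)
```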


We assume independent gamma prior distributions for β and λ, given by

$$\pi_1(\beta\mid a,b) \propto \beta^{a-1} e^{-b\beta}, \quad \beta>0,\ a>0,\ b>0,$$

and

$$\pi_2(\lambda\mid c,d) \propto \lambda^{c-1} e^{-d\lambda}, \quad \lambda>0,\ c>0,\ d>0.$$

The hyper-parameters a, b, c and d represent the prior knowledge about the unknown parameters. The joint prior distribution of β and λ is of the form

$$\pi(\beta,\lambda) \propto \beta^{a-1} e^{-b\beta}\,\lambda^{c-1} e^{-d\lambda}, \quad \beta>0,\ \lambda>0. \qquad (13)$$

Then the posterior density of (β, λ) is given by

$$\pi^{*}(\beta,\lambda\mid x) = \frac{L(\beta,\lambda)\,\pi(\beta,\lambda)}{\int_0^{\infty}\int_0^{\infty} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}. \qquad (14)$$

The Bayes estimates of β and λ under the loss function L₁ are respectively obtained as

$$\hat{\beta}_{SB} = E(\beta\mid x) = \frac{\int_0^{\infty}\int_0^{\infty} \beta\, L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}{\int_0^{\infty}\int_0^{\infty} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta} \qquad (15)$$

and

$$\hat{\lambda}_{SB} = E(\lambda\mid x) = \frac{\int_0^{\infty}\int_0^{\infty} \lambda\, L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}{\int_0^{\infty}\int_0^{\infty} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}. \qquad (16)$$

The Bayes estimate of β under the loss function L₂ is obtained as

$$\hat{\beta}_{LB} = -\frac{1}{h}\log E\!\left(e^{-h\beta}\mid x\right), \quad h \neq 0,$$

where

$$E\!\left(e^{-h\beta}\mid x\right) = \frac{\int_0^{\infty}\int_0^{\infty} e^{-h\beta} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}{\int_0^{\infty}\int_0^{\infty} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}. \qquad (17)$$

Similarly, the Bayes estimate of λ under the loss function L₂ is obtained as

$$\hat{\lambda}_{LB} = -\frac{1}{h}\log E\!\left(e^{-h\lambda}\mid x\right), \quad h \neq 0,$$

where

$$E\!\left(e^{-h\lambda}\mid x\right) = \frac{\int_0^{\infty}\int_0^{\infty} e^{-h\lambda} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}{\int_0^{\infty}\int_0^{\infty} L(\beta,\lambda)\,\pi(\beta,\lambda)\,d\lambda\, d\beta}. \qquad (18)$$

The ratios of integrals in equations (15), (16), (17) and (18) cannot be obtained in closed form. Thus, the approximation method of [13] for evaluating the ratio of two integrals has been used; it has been adopted by several researchers, such as [10] and [5], to obtain approximate Bayes estimates.
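As an independent cross-check (not the Lindley approximation used in this paper), the same ratios can be approximated by brute-force quadrature over a two-dimensional grid. The sketch below does this for (15)-(18), re-using the loglik_ke helper from Section 2; the grid ranges and the hyper-parameter argument names (cc and d standing for c and d in the text) are chosen purely for illustration.

```r
bayes_grid <- function(X, R, t, a, b, cc, d, h = 1,
                       bg = seq(0.01, 6, length.out = 300),
                       lg = seq(0.01, 6, length.out = 300)) {
  lp <- outer(bg, lg, Vectorize(function(be, la)            # log posterior kernel
    loglik_ke(c(be, la), X, R, t) +
      (a - 1) * log(be) - b * be + (cc - 1) * log(la) - d * la))
  w <- exp(lp - max(lp)); w <- w / sum(w)                   # normalized grid weights
  B <- matrix(bg, length(bg), length(lg))                   # beta value in each cell
  L <- matrix(lg, length(bg), length(lg), byrow = TRUE)     # lambda value in each cell
  list(beta_SB   = sum(B * w),                              # (15)
       lambda_SB = sum(L * w),                              # (16)
       beta_LB   = -log(sum(exp(-h * B) * w)) / h,          # (17)
       lambda_LB = -log(sum(exp(-h * L) * w)) / h)          # (18)
}
```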


3.1. Lindley approximation method

Since all the estimates obtained above have the form of a ratio of two integrals, we use Lindley's approximation method to evaluate them numerically. Since the Bayes estimates of β and λ depend on such a ratio, we define

$$I(x) = \frac{\int_0^{\infty}\int_0^{\infty} u(\beta,\lambda)\, e^{l(\beta,\lambda\mid x)+\rho(\beta,\lambda)}\, d\beta\, d\lambda}{\int_0^{\infty}\int_0^{\infty} e^{l(\beta,\lambda\mid x)+\rho(\beta,\lambda)}\, d\beta\, d\lambda}, \qquad (19)$$

where u(β, λ) is a function of β and λ only, l(β, λ | x) = log L(β, λ | x) and ρ(β, λ) = log π(β, λ). Then, by Lindley's method, I(x) can be approximated as

$$\begin{aligned} I(x) \approx\ & u(\hat\beta,\hat\lambda) + \tfrac{1}{2}\big[(u_{\beta\beta}+2u_{\beta}\rho_{\beta})\sigma_{\beta\beta} + (u_{\lambda\beta}+2u_{\lambda}\rho_{\beta})\sigma_{\lambda\beta} + (u_{\beta\lambda}+2u_{\beta}\rho_{\lambda})\sigma_{\beta\lambda} + (u_{\lambda\lambda}+2u_{\lambda}\rho_{\lambda})\sigma_{\lambda\lambda}\big] \\ & + \tfrac{1}{2}\big[(u_{\beta}\sigma_{\beta\beta}+u_{\lambda}\sigma_{\beta\lambda})(l_{\beta\beta\beta}\sigma_{\beta\beta}+l_{\beta\lambda\beta}\sigma_{\beta\lambda}+l_{\lambda\beta\beta}\sigma_{\lambda\beta}+l_{\lambda\lambda\beta}\sigma_{\lambda\lambda}) \\ &\quad\ + (u_{\beta}\sigma_{\lambda\beta}+u_{\lambda}\sigma_{\lambda\lambda})(l_{\lambda\beta\beta}\sigma_{\beta\beta}+l_{\beta\lambda\lambda}\sigma_{\beta\lambda}+l_{\lambda\beta\lambda}\sigma_{\lambda\beta}+l_{\lambda\lambda\lambda}\sigma_{\lambda\lambda})\big], \end{aligned} \qquad (20)$$

where β̂ and λ̂ are the MLEs of β and λ, respectively, and all quantities on the right-hand side are evaluated at (β̂, λ̂). Here u_{ββ} denotes the second derivative of u(β, λ) with respect to β, and similarly for the other subscripted quantities. The remaining expressions are given by

$$l_{\beta\beta} = \frac{\partial^2 l(\beta,\lambda)}{\partial\beta^2} = \sum_{i=1}^{m}\left\{ X_i\left[\frac{\frac{\partial^2 F_i}{\partial\beta^2}-\frac{\partial^2 F_{i-1}}{\partial\beta^2}}{F_i-F_{i-1}} - \frac{\left(\frac{\partial F_i}{\partial\beta}-\frac{\partial F_{i-1}}{\partial\beta}\right)^2}{(F_i-F_{i-1})^2}\right] - R_i\left[\frac{\frac{\partial^2 F_i}{\partial\beta^2}}{1-F_i} + \frac{\left(\frac{\partial F_i}{\partial\beta}\right)^2}{(1-F_i)^2}\right]\right\},$$

$$l_{\lambda\lambda} = \frac{\partial^2 l(\beta,\lambda)}{\partial\lambda^2} = \sum_{i=1}^{m}\left\{ X_i\left[\frac{\frac{\partial^2 F_i}{\partial\lambda^2}-\frac{\partial^2 F_{i-1}}{\partial\lambda^2}}{F_i-F_{i-1}} - \frac{\left(\frac{\partial F_i}{\partial\lambda}-\frac{\partial F_{i-1}}{\partial\lambda}\right)^2}{(F_i-F_{i-1})^2}\right] - R_i\left[\frac{\frac{\partial^2 F_i}{\partial\lambda^2}}{1-F_i} + \frac{\left(\frac{\partial F_i}{\partial\lambda}\right)^2}{(1-F_i)^2}\right]\right\},$$

$$l_{\beta\lambda} = \frac{\partial^2 l(\beta,\lambda)}{\partial\beta\,\partial\lambda} = \sum_{i=1}^{m}\left\{ X_i\left[\frac{\frac{\partial^2 F_i}{\partial\beta\partial\lambda}-\frac{\partial^2 F_{i-1}}{\partial\beta\partial\lambda}}{F_i-F_{i-1}} - \frac{\left(\frac{\partial F_i}{\partial\beta}-\frac{\partial F_{i-1}}{\partial\beta}\right)\left(\frac{\partial F_i}{\partial\lambda}-\frac{\partial F_{i-1}}{\partial\lambda}\right)}{(F_i-F_{i-1})^2}\right] - R_i\left[\frac{\frac{\partial^2 F_i}{\partial\beta\partial\lambda}}{1-F_i} + \frac{\frac{\partial F_i}{\partial\beta}\frac{\partial F_i}{\partial\lambda}}{(1-F_i)^2}\right]\right\} \qquad (21)$$

and

$$l_{\lambda\beta} = \frac{\partial^2 l(\beta,\lambda)}{\partial\lambda\,\partial\beta}, \qquad (22)$$

which has the same expression as (21). Here F_i denotes F(t_i; β, λ), i = 1, ..., m, with F_0 = 0. From equations (21) and (22), we have $l_{\beta\lambda} = l_{\lambda\beta}$.

The third-order derivatives are

$$l_{\beta\beta\beta} = \frac{\partial^3 l(\beta,\lambda)}{\partial\beta^3} = \sum_{i=1}^{m}\left\{ X_i\left[\frac{\frac{\partial^3 F_i}{\partial\beta^3}-\frac{\partial^3 F_{i-1}}{\partial\beta^3}}{F_i-F_{i-1}} - \frac{3\left(\frac{\partial^2 F_i}{\partial\beta^2}-\frac{\partial^2 F_{i-1}}{\partial\beta^2}\right)\left(\frac{\partial F_i}{\partial\beta}-\frac{\partial F_{i-1}}{\partial\beta}\right)}{(F_i-F_{i-1})^2} + \frac{2\left(\frac{\partial F_i}{\partial\beta}-\frac{\partial F_{i-1}}{\partial\beta}\right)^3}{(F_i-F_{i-1})^3}\right] - R_i\left[\frac{\frac{\partial^3 F_i}{\partial\beta^3}}{1-F_i} + \frac{3\frac{\partial^2 F_i}{\partial\beta^2}\frac{\partial F_i}{\partial\beta}}{(1-F_i)^2} + \frac{2\left(\frac{\partial F_i}{\partial\beta}\right)^3}{(1-F_i)^3}\right]\right\},$$

$$l_{\lambda\lambda\lambda} = \frac{\partial^3 l(\beta,\lambda)}{\partial\lambda^3} = \sum_{i=1}^{m}\left\{ X_i\left[\frac{\frac{\partial^3 F_i}{\partial\lambda^3}-\frac{\partial^3 F_{i-1}}{\partial\lambda^3}}{F_i-F_{i-1}} - \frac{3\left(\frac{\partial^2 F_i}{\partial\lambda^2}-\frac{\partial^2 F_{i-1}}{\partial\lambda^2}\right)\left(\frac{\partial F_i}{\partial\lambda}-\frac{\partial F_{i-1}}{\partial\lambda}\right)}{(F_i-F_{i-1})^2} + \frac{2\left(\frac{\partial F_i}{\partial\lambda}-\frac{\partial F_{i-1}}{\partial\lambda}\right)^3}{(F_i-F_{i-1})^3}\right] - R_i\left[\frac{\frac{\partial^3 F_i}{\partial\lambda^3}}{1-F_i} + \frac{3\frac{\partial^2 F_i}{\partial\lambda^2}\frac{\partial F_i}{\partial\lambda}}{(1-F_i)^2} + \frac{2\left(\frac{\partial F_i}{\partial\lambda}\right)^3}{(1-F_i)^3}\right]\right\},$$

$$l_{\lambda\beta\beta} = \frac{\partial^3 l(\beta,\lambda)}{\partial\lambda\,\partial\beta^2} = \sum_{i=1}^{m}\left\{ X_i\left[\frac{\frac{\partial^3 F_i}{\partial\lambda\partial\beta^2}-\frac{\partial^3 F_{i-1}}{\partial\lambda\partial\beta^2}}{F_i-F_{i-1}} - \frac{\left(\frac{\partial^2 F_i}{\partial\beta^2}-\frac{\partial^2 F_{i-1}}{\partial\beta^2}\right)\left(\frac{\partial F_i}{\partial\lambda}-\frac{\partial F_{i-1}}{\partial\lambda}\right) + 2\left(\frac{\partial^2 F_i}{\partial\beta\partial\lambda}-\frac{\partial^2 F_{i-1}}{\partial\beta\partial\lambda}\right)\left(\frac{\partial F_i}{\partial\beta}-\frac{\partial F_{i-1}}{\partial\beta}\right)}{(F_i-F_{i-1})^2} + \frac{2\left(\frac{\partial F_i}{\partial\beta}-\frac{\partial F_{i-1}}{\partial\beta}\right)^2\left(\frac{\partial F_i}{\partial\lambda}-\frac{\partial F_{i-1}}{\partial\lambda}\right)}{(F_i-F_{i-1})^3}\right] - R_i\left[\frac{\frac{\partial^3 F_i}{\partial\lambda\partial\beta^2}}{1-F_i} + \frac{\frac{\partial^2 F_i}{\partial\beta^2}\frac{\partial F_i}{\partial\lambda} + 2\frac{\partial^2 F_i}{\partial\beta\partial\lambda}\frac{\partial F_i}{\partial\beta}}{(1-F_i)^2} + \frac{2\left(\frac{\partial F_i}{\partial\beta}\right)^2\frac{\partial F_i}{\partial\lambda}}{(1-F_i)^3}\right]\right\},$$

and $l_{\beta\lambda\lambda} = \partial^3 l(\beta,\lambda)/\partial\beta\,\partial\lambda^2$ is obtained from the expression for $l_{\lambda\beta\beta}$ by interchanging the roles of β and λ. By symmetry of the mixed partial derivatives, $l_{\beta\lambda\beta}=l_{\beta\beta\lambda}=l_{\lambda\beta\beta}$ and $l_{\lambda\beta\lambda}=l_{\lambda\lambda\beta}=l_{\beta\lambda\lambda}$, which are the quantities appearing in (20).

Let $u_i = 1 - e^{-t_i}$. Then

$$\frac{\partial F_i}{\partial\beta} = \lambda\left(1-u_i^{\beta}\right)^{\lambda-1} u_i^{\beta}\ln u_i, \qquad \frac{\partial F_i}{\partial\lambda} = -\left(1-u_i^{\beta}\right)^{\lambda}\ln\left(1-u_i^{\beta}\right),$$

$$\frac{\partial^2 F_i}{\partial\beta^2} = \lambda\left(\ln u_i\right)^2\left[\left(1-u_i^{\beta}\right)^{\lambda-1}u_i^{\beta} - (\lambda-1)\left(1-u_i^{\beta}\right)^{\lambda-2}u_i^{2\beta}\right],$$

$$\frac{\partial^2 F_i}{\partial\lambda^2} = -\left(1-u_i^{\beta}\right)^{\lambda}\left[\ln\left(1-u_i^{\beta}\right)\right]^2, \qquad \frac{\partial^2 F_i}{\partial\beta\,\partial\lambda} = \left(1-u_i^{\beta}\right)^{\lambda-1} u_i^{\beta}\ln u_i\left[1+\lambda\ln\left(1-u_i^{\beta}\right)\right],$$

$$\frac{\partial^3 F_i}{\partial\beta^3} = \lambda\left(\ln u_i\right)^3\left[\left(1-u_i^{\beta}\right)^{\lambda-1}u_i^{\beta} - 3(\lambda-1)\left(1-u_i^{\beta}\right)^{\lambda-2}u_i^{2\beta} + (\lambda-1)(\lambda-2)\left(1-u_i^{\beta}\right)^{\lambda-3}u_i^{3\beta}\right],$$

$$\frac{\partial^3 F_i}{\partial\lambda^3} = -\left(1-u_i^{\beta}\right)^{\lambda}\left[\ln\left(1-u_i^{\beta}\right)\right]^3,$$

$$\frac{\partial^3 F_i}{\partial\lambda\,\partial\beta^2} = \left(\ln u_i\right)^2\left[\left(1-u_i^{\beta}\right)^{\lambda-1}u_i^{\beta}\left(1+\lambda\ln\left(1-u_i^{\beta}\right)\right) - \left(1-u_i^{\beta}\right)^{\lambda-2}u_i^{2\beta}\left(2\lambda-1+\lambda(\lambda-1)\ln\left(1-u_i^{\beta}\right)\right)\right],$$

$$\frac{\partial^3 F_i}{\partial\beta\,\partial\lambda^2} = \left(1-u_i^{\beta}\right)^{\lambda-1} u_i^{\beta}\ln u_i\,\ln\left(1-u_i^{\beta}\right)\left[2+\lambda\ln\left(1-u_i^{\beta}\right)\right].$$

Also, from (13),

$$\rho(\beta,\lambda) = \text{constant} + (a-1)\log\beta - b\beta + (c-1)\log\lambda - d\lambda.$$

Thus,

$$\rho_{\beta} = \frac{a-1}{\beta} - b \qquad \text{and} \qquad \rho_{\lambda} = \frac{c-1}{\lambda} - d. \qquad (23)$$

Here,

$$\begin{pmatrix}\sigma_{\beta\beta} & \sigma_{\beta\lambda}\\ \sigma_{\lambda\beta} & \sigma_{\lambda\lambda}\end{pmatrix} = \begin{pmatrix} -l_{\beta\beta} & -l_{\beta\lambda}\\ -l_{\lambda\beta} & -l_{\lambda\lambda}\end{pmatrix}^{-1},$$

evaluated at (β̂, λ̂).

We now determine the approximate Bayes estimates of β and λ under the various loss functions using the above expressions. First, we derive the Bayes estimates under the squared error loss function L₁. For estimating β, we take u(β, λ) = β, so that u_β = 1 and u_ββ = u_λ = u_λλ = u_βλ = u_λβ = 0. Then the Bayes estimate of β under the loss function L₁ is obtained as

$$\hat{\beta}_{SB} = \hat\beta + 0.5\Big[2\hat\rho_{\beta}\hat\sigma_{\beta\beta} + 2\hat\rho_{\lambda}\hat\sigma_{\beta\lambda} + \hat\sigma_{\beta\beta}\big(\hat l_{\beta\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\beta}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\beta}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\beta}\hat\sigma_{\lambda\lambda}\big) + \hat\sigma_{\lambda\beta}\big(\hat l_{\lambda\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\lambda}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\lambda}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\lambda}\hat\sigma_{\lambda\lambda}\big)\Big].$$

To estimate λ, we take u(β, λ) = λ, so that u_λ = 1 and u_β = u_ββ = u_λλ = u_λβ = u_βλ = 0. Then the Bayes estimate of λ under the loss function L₁ can be determined as

$$\hat{\lambda}_{SB} = \hat\lambda + 0.5\Big[2\hat\rho_{\beta}\hat\sigma_{\lambda\beta} + 2\hat\rho_{\lambda}\hat\sigma_{\lambda\lambda} + \hat\sigma_{\beta\lambda}\big(\hat l_{\beta\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\beta}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\beta}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\beta}\hat\sigma_{\lambda\lambda}\big) + \hat\sigma_{\lambda\lambda}\big(\hat l_{\lambda\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\lambda}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\lambda}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\lambda}\hat\sigma_{\lambda\lambda}\big)\Big].$$

Now, we obtain the Bayes estimates of β and λ under the LINEX loss function L₂. For estimating β, we take u(β, λ) = e^{-hβ}, so that u_β = -h e^{-hβ}, u_ββ = h² e^{-hβ} and u_λ = u_λλ = u_λβ = u_βλ = 0. Therefore, the Bayes estimate of β under the loss function L₂ is obtained as

$$\hat{\beta}_{LB} = -\frac{1}{h}\log\left[E\!\left(e^{-h\beta}\mid x\right)\right], \qquad (24)$$

where

$$E\!\left(e^{-h\beta}\mid x\right) = e^{-h\hat\beta} + 0.5\Big[\hat u_{\beta\beta}\hat\sigma_{\beta\beta} + 2\hat u_{\beta}\big(\hat\rho_{\beta}\hat\sigma_{\beta\beta}+\hat\rho_{\lambda}\hat\sigma_{\beta\lambda}\big) + \hat u_{\beta}\hat\sigma_{\beta\beta}\big(\hat l_{\beta\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\beta}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\beta}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\beta}\hat\sigma_{\lambda\lambda}\big) + \hat u_{\beta}\hat\sigma_{\lambda\beta}\big(\hat l_{\lambda\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\lambda}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\lambda}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\lambda}\hat\sigma_{\lambda\lambda}\big)\Big]. \qquad (25)$$

To estimate λ, we take u(β, λ) = e^{-hλ}, so that u_λ = -h e^{-hλ}, u_λλ = h² e^{-hλ} and u_β = u_ββ = u_λβ = u_βλ = 0. Therefore, the Bayes estimate of λ under the loss function L₂ is obtained as

$$\hat{\lambda}_{LB} = -\frac{1}{h}\log\left[E\!\left(e^{-h\lambda}\mid x\right)\right], \qquad (26)$$

where

$$E\!\left(e^{-h\lambda}\mid x\right) = e^{-h\hat\lambda} + 0.5\Big[\hat u_{\lambda\lambda}\hat\sigma_{\lambda\lambda} + 2\hat u_{\lambda}\big(\hat\rho_{\beta}\hat\sigma_{\lambda\beta}+\hat\rho_{\lambda}\hat\sigma_{\lambda\lambda}\big) + \hat u_{\lambda}\hat\sigma_{\beta\lambda}\big(\hat l_{\beta\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\beta}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\beta}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\beta}\hat\sigma_{\lambda\lambda}\big) + \hat u_{\lambda}\hat\sigma_{\lambda\lambda}\big(\hat l_{\lambda\beta\beta}\hat\sigma_{\beta\beta}+\hat l_{\beta\lambda\lambda}\hat\sigma_{\beta\lambda}+\hat l_{\lambda\beta\lambda}\hat\sigma_{\lambda\beta}+\hat l_{\lambda\lambda\lambda}\hat\sigma_{\lambda\lambda}\big)\Big].$$

4. Simulation Study

A simulation study is carried out in this section to investigate the behaviour of the proposed estimation methods for the KE distribution. Five different censoring schemes are used to generate progressive type-I interval censored data from the KE distribution, and all of the estimation techniques described above are compared. The simulations are carried out in R. The different censoring schemes used to compare the performance of the estimation procedures are given in the following table.

Scheme   n     m    q_i
1        75    10   (0.25, 0.25, 0.25, 0.25, 0.5, 0.5, 0.5, 0.5, 0.5, 1)
2        75    12   (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)
3        100   15   (0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 1)
4        100   20   (0.25, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)
5        100   25   (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)

Here, for censoring scheme 1, the first few intervals have lighter censoring and the remaining intervals heavier censoring. Schemes 2 and 5 represent conventional interval censoring, in which no removals are made prior to the end of the experiment, scheme 3 is the reverse of scheme 1, and in scheme 4 censoring occurs only at the beginning and at the end. For the various combinations of n, m and censoring schemes, the performance of each estimator is compared numerically in terms of its bias and mean squared error (MSE). The bias and MSE of the MLEs and of the estimates obtained using the EM algorithm are given in Table 1.
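A skeleton of one cell of this study, assuming the gen_pic1, rke and fit_mle_ke sketches from the earlier sections; the scheme, inspection times and parameter values passed in are illustrative only.

```r
sim_cell <- function(nsim, n, times, q, beta0, lam0) {
  est <- t(replicate(nsim, {
    dat <- gen_pic1(n, times, q, function(k) rke(k, beta0, lam0))
    fit_mle_ke(dat$X, dat$R, dat$t)$par
  }))
  truth <- matrix(c(beta0, lam0), nsim, 2, byrow = TRUE)
  rbind(bias = colMeans(est) - c(beta0, lam0),
        mse  = colMeans((est - truth)^2))
}
## e.g. a scheme-2-like setting with (beta, lambda) = (1.25, 1.5) and
## (assumed) equally spaced inspection times:
## sim_cell(500, 75, times = seq(0.25, 3, length.out = 12),
##          q = c(rep(0, 11), 1), 1.25, 1.5)
```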

For Bayes estimation, we considered both informative and non-informative priors for the unknown parameters. For the informative case, we consider two priors, Prior 1 and Prior 2. The hyper-parameters for Prior 1 and Prior 2 are chosen in such a way that the mean of the prior distribution equals the parameter value while the variance of the prior distribution is high (Prior 1) or low (Prior 2); a small sketch of this elicitation is given after the table below. The values of the hyper-parameters considered for the different choices of the parameters β and λ are given below.

Parameter (β or λ)   Prior     Hyper-parameters
                               a (or c)   b (or d)
1.25                 Prior 1   1.25       1
                     Prior 2   2.5        2
1.5                  Prior 1   1.5        1
                     Prior 2   3          2
1.75                 Prior 1   1.75       1
                     Prior 2   3.5        2
2                    Prior 1   1          0.5
                     Prior 2   4          2
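A small sketch of this elicitation rule: a gamma(a, b) prior has mean a/b and variance a/b², so fixing the prior mean at the parameter value and choosing the rate b controls how diffuse the prior is. The calls below reproduce two rows of the table.

```r
gamma_hyper <- function(prior_mean, rate)
  c(a = prior_mean * rate, b = rate, prior_var = prior_mean / rate)
gamma_hyper(1.25, 1)   # Prior 1 for beta (or lambda) = 1.25: a = 1.25, b = 1
gamma_hyper(1.25, 2)   # Prior 2 for beta (or lambda) = 1.25: a = 2.5,  b = 2
```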


We use h = 1 to evaluate the Bayes estimators under the LINEX loss function L₂. In each case, the bias and MSE are assessed on the basis of 500 simulation replications, and the simulation study is repeated for various values of β and λ. The bias and MSE of the estimates of β for both informative and non-informative priors are given in Table 2, and Table 3 provides the bias and MSE of the estimates of λ for both informative and non-informative priors.

The tabulated values show that all of the estimates improve with a larger value of n. From Table 1, in terms of bias and MSE, we find that the estimates based on censoring schemes 2 and 5 give the best estimates of β and λ, followed by those based on scheme 4. For the maximum likelihood estimates in Table 1, the MSE of the estimates decreases as n increases, as expected. Moreover, the bias and MSE of the estimates of β and λ obtained via the EM algorithm are smaller than those of the corresponding MLEs. The Bayes estimators based on informative priors also perform much better than the MLEs in terms of bias and MSE: from Tables 1, 2 and 3, it is clear that the bias and MSE of the Bayes estimators under informative priors are smaller than those of the MLEs.

As expected, the Bayes estimators based on informative priors perform much better than the Bayes estimators based on the non-informative prior in terms of bias and MSE. From Tables 2 and 3, one can see that, for both β and λ, the estimators based on informative priors perform better than those based on non-informative priors in terms of bias and MSE. Also, among the Bayes estimators of β, the estimator under the LINEX loss function performs better. Similarly, compared with the squared error loss function, the estimators of λ under the LINEX loss function have the smallest bias and MSE.

Table 1: Bias and MSE of the estimates of β and λ under different censoring schemes for different values of β and λ

                                     β                                       λ
(β, λ)        n    m   c.s.   MLE                EM                 MLE                EM
                              Bias     MSE       Bias     MSE       Bias     MSE       Bias     MSE
(1.25, 1.5)   75   10   1     -0.3509  0.1807    -0.3055  0.0956    -0.4086  0.3751    -0.3131  0.3318
              75   12   2     -0.2609  0.1073    -0.1063  0.0268    -0.2298  0.1260    -0.1192  0.1057
              100  15   3     -0.3948  0.2090    -0.3129  0.0985    -0.6539  0.7903    -0.2564  0.7338
              100  20   4     -0.2811  0.1276    -0.2259  0.0631    -0.2734  0.2140    -0.1871  0.2175
              100  25   5     -0.2725  0.1174    -0.2195  0.0495    -0.2567  0.1782    -0.1691  0.1555
(1.75, 2)     75   10   1     -0.3487  0.1360    -0.3160  0.0772    -0.1784  0.2405    -0.1686  0.1903
              75   12   2     -0.0542  0.0394    -0.0528  0.0901    -0.1275  0.1113    -0.1136  0.0511
              100  15   3     -0.3526  0.1757    -0.3016  0.0912    -0.3358  0.3049    -0.2373  0.2762
              100  20   4     -0.1386  0.0538    -0.1029  0.0558    -0.1628  0.1496    -0.1421  0.1040
              100  25   5     -0.0982  0.0501    -0.0898  0.0242    -0.1422  0.1386    -0.1315  0.0757

Table 2: Bias and MSE of the Bayes estimates of β under the squared error and LINEX loss functions for informative (Prior 1, Prior 2) and non-informative priors under the different censoring schemes.

Table 3: Bias and MSE of the Bayes estimates of λ under the squared error and LINEX loss functions for informative (Prior 1, Prior 2) and non-informative priors under the different censoring schemes.

5. Illustrations using real data

In this section, a real-life data set is utilised to demonstrate the inference methods proposed in this paper. The data were previously studied by [14] and [15].

The data give the running and failure times for a sample of devices from a field-tracking study of a larger system. The failure times are:

2.75, 0.13, 1.47, 0.23, 1.81, 0.30, 0.65, 0.10, 3.00, 1.73, 1.06, 3.00, 3.00, 2.12, 3.00, 3.00, 3.00, 0.02, 2.61, 2.93, 0.88, 2.47, 0.28, 1.43, 3.00, 0.23, 3.00, 0.80, 2.45, 2.66.

These data were also considered previously by [16] and fitted with the KE(β, λ) distribution. For evaluating the goodness of fit, they used the Anderson-Darling test; the Anderson-Darling test statistic is 2.00757 with an associated p-value of 0.0913729. Based on the aforementioned estimation procedures, we have obtained the estimates of β and λ, which are included in Table 4.
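A hedged R sketch of this goodness-of-fit check: fit the KE model to the complete failure-time data by maximum likelihood and apply the Anderson-Darling test with the fitted cdf. The goftest package is assumed to be available, dke and pke are the helpers sketched in Section 1, and the resulting statistic may differ slightly from the value quoted from [16] depending on the fitting details.

```r
library(goftest)
x <- c(2.75, 0.13, 1.47, 0.23, 1.81, 0.30, 0.65, 0.10, 3.00, 1.73, 1.06, 3.00,
       3.00, 2.12, 3.00, 3.00, 3.00, 0.02, 2.61, 2.93, 0.88, 2.47, 0.28, 1.43,
       3.00, 0.23, 3.00, 0.80, 2.45, 2.66)
nll <- function(par) {                     # complete-data negative log-likelihood
  if (any(par <= 0)) return(Inf)
  -sum(log(dke(x, par[1], par[2])))
}
fit <- optim(c(1, 1), nll)
ad.test(x, null = pke, beta = fit$par[1], lam = fit$par[2])   # Anderson-Darling test
```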

Table 4: Estimates of β and λ for the real data

                                                                            Bayes
n    m    Censoring scheme                             Parameter   MLE      EM       SE       LINEX
30   5    q = (0.25, 0.25, 0.5, 0.5, 1)                β           1.2857   1.5756   1.5875   1.5173
          X_i = (8, 3, 4, 1, 4)                        λ           0.5192   0.5739   0.5324   0.5338
          R_i = (2, 1, 3, 4, 0)
30   7    q = (0.5, 0, 0, 0, 0, 0, 1)                  β           1.1000   1.3050   0.9374   0.9214
          X_i = (7, 3, 3, 2, 3, 5, 0)                  λ           0.5355   0.5784   0.5875   0.5866
          R_i = (3, 0, 0, 0, 0, 4, 0)
30   12   q = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)     β           0.8453   1.1082   0.7944   0.7866
          X_i = (5, 2, 1, 2, 1, 2, 1, 1, 1, 2, 3, 4)   λ           0.4445   0.4961   0.4869   0.4346
          R_i = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5)

6. Conclusion

In this paper, we considered the problem of estimating the parameters of the Kumaraswamy exponential distribution based on a progressive type-I interval censored sample. The maximum likelihood estimators of the parameters β and λ were obtained. Since the MLEs of the unknown parameters do not admit closed forms, we employed the EM algorithm. Bayes estimators were also obtained under different loss functions, namely the squared error and LINEX loss functions, and Lindley's approximation method was applied to evaluate them. Based on the simulation study, we draw the following conclusions. The performance of the EM algorithm was quite satisfactory. In addition, it was found that, for both β and λ, the bias and MSE of the Bayes estimators under an informative prior are smaller than those of the MLEs. The informative priors performed better than the non-informative prior for both β and λ in terms of bias and MSE. For both β and λ, the Bayes estimators under the LINEX loss function perform better with regard to bias and MSE. The estimation methods employed in this paper were also illustrated using a real data set.

References

[1] Adepoju, K. and Chukwu, O. (2015). Maximum likelihood estimation of the Kumaraswamy exponential distribution with applications. Journal of Modern Applied Statistical Methods, 14(1):18.

[2] Aggarwala, R. (2001). Progressive interval censoring: some mathematical results with applications to inference. Communications in Statistics-Theory and Methods, 30(8- 9):1921-1935.

[3] Alotaibi, R., Rezk, H., Dey, S., and Okasha, H. (2021). Bayesian estimation for Dagum distribution based on progressive type-I interval censoring. PLoS ONE, 16(6):e0252556.

[4] Ashour, S. and Afify, W. (2007). Statistical analysis of exponentiated Weibull family under type-I progressive interval censoring with random removals. Journal of Applied Sciences Research, 3(12):1851-1863.

[5] Chacko, M. and Mohan, R. (2017). Estimation of parameters of Kumaraswamy-exponential distribution under progressive type-II censoring. Journal of Statistical Computation and Simulation, 87(10):1951-1963.

[6] Chen, D. and Lio, Y. (2010). Parameter estimations for generalized exponential distribution under progressive type-I interval censoring. Computational Statistics & Data Analysis, 54(6):1581-1591.

[7] Cohen, A. C. (1963). Progressively censored samples in life testing. Technometrics, 5(3):327-339.

[8] Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22.

[9] Elbatal, I. (2013). Kumaraswamy linear exponential distribution. Pioneer J Theor Appl Statist, 5:59-73.

[10] Kundu, D. and Gupta, R. D. (2008). Generalized exponential distribution: Bayesian estimations. Computational Statistics & Data Analysis, 52(4):1873-1883.

[11] Lemonte, A. J., Barreto-Souza, W., and Cordeiro, G. M. (2013). The exponentiated Kumaraswamy distribution and its log-transform. Brazilian Journal of Probability and Statistics, 27(1), 31-53.

[12] Lin, C.-T., Wu, S. J., and Balakrishnan, N. (2009). Planning life tests with progressively type-I interval censored data from the lognormal distribution. Journal of Statistical Planning and Inference, 139(1):54-61.

[13] Lindley, D. V. (1980). Approximate Bayesian methods. Trabajos de Estadística y de Investigación Operativa, 31:223-245.

[14] Meeker, W. and Escobar, L. (1998). Statistical Methods for Reliability Data. John Wiley & Sons, New York.

[15] Merovci, F. and Elbatal, I. (2015). Weibull Rayleigh distribution: Theory and applications.

Appl. Math. Inf. Sci, 9(5):1-11.

[16] Mohan, R. and Chacko, M. (2021). Estimation of parameters of Kumaraswamy-exponential distribution based on adaptive type-II progressive censored schemes. Journal of Statistical Computation and Simulation, 91(1):81-107.

[17] Ng, H. K. T. and Wang, Z. (2009). Statistical estimation for the parameters of Weibull distribution based on progressively type-I interval censored sample. Journal of Statistical Computation and Simulation, 79(2):145-159.

[18] Rodrigues, J. and Silva, A. (2015). The exponentiated Kumaraswamy-exponential distribution. British Journal of Applied Science & Technology, 10(1):12.

[19] Singh, S. and Tripathi, Y. M. (2018). Estimating the parameters of an inverse Weibull distribution under progressive type-I interval censoring. Statistical Papers, 59:21-56.
