Estimation and Prediction for Exponentiated Exponential Distribution under Generalised Progressive Hybrid
Censoring
Aakriti Pandey1, A. Kaushik*2, & S. K. Singh3
1,2,3 Department of Statistics, Institute of Science, Banaras Hindu University,
Varanasi, India-221005.
Email: 1akritibhu@gmail.com, 2arundevkauhsik@gmail.com, 3singhsk64@gmail.com; *Corresponding author
Abstract
In this article, we propose estimators for the parameters of the exponentiated exponential distribution under the generalized progressive hybrid censoring scheme, obtained through different methods of estimation, namely maximum likelihood, maximum product spacing, bootstrap and Bayesian. Asymptotic confidence, bootstrap and HPD intervals have also been computed. Moreover, stress-strength reliability estimation is discussed. The performance of the estimators has been studied in terms of their MSEs. Bayesian prediction of future observations has also been attempted. To illustrate the proposed methodology, a real data set is taken into account.
Keywords: Exponentiated Exponential distribution, generalized progressive hybrid censoring scheme, Stress-Strength Reliability, Bayes estimates, Bayesian prediction.
1. Introduction
The progressive hybrid censoring schemes have gained considerable attention in the past few years. [1] and [2] considered the progressive hybrid type-I (PHT-I) censoring scheme. In PHT-I, n units are put on test under a progressive censoring scheme (R_1, R_2, ..., R_m), and the termination time of the experiment is fixed as T* = min{X_{m:m:n}, T} (X_{m:m:n} denotes the m-th failure time), where T ∈ (0, ∞) and 1 < m < n are prefixed constants. In PHT-I, the test duration can never exceed T, which facilitates a reduction in the time and cost of experimentation. A problem arises when the unknown average lifetime is larger than the stopping time, in which case fewer than m failures are observed. This ultimately reduces the efficiency of inference based on the censored data.
[2] further proposed the progressive hybrid type-II (PHT-II) censoring scheme. In PHT-II, the experiment is terminated at time T* = max{X_{m:m:n}, T}, which ensures at least m failures (see [2]). When X_{m:m:n} > T, the experiment is terminated at the m-th failure, with withdrawals occurring at each failure according to the pre-specified progressive scheme (R_1, R_2, ..., R_m), which may significantly increase the termination time. On the other hand, when X_{m:m:n} < T, we observe failures up to time T. The termination time in this censoring scheme is unknown to the experimenter. From the above discussion, we note that PHT-I censoring keeps the termination time of the experiment below a prefixed value at the cost of efficiency, whereas PHT-II censoring ensures efficiency above a prefixed level at the cost of the termination time. Therefore, a censoring scheme controlling termination time and efficiency simultaneously was needed. Keeping this point in mind, [3] introduced a censoring scheme called the generalized progressive hybrid (GPH) censoring scheme.
This paper envisages the exponentiated exponential distribution (EED), introduced by [4], as a lifetime distribution. Many authors have studied the EED; see [5], [6] and [7]. The probability density function of the EED is given by
f(x \mid \alpha, \beta) = \alpha \beta e^{-\beta x} (1 - e^{-\beta x})^{\alpha - 1}; \quad x > 0, \ \alpha, \beta > 0, \tag{1}

where α and β are the shape and scale parameters, respectively, and the cumulative distribution function is given by

F(x \mid \alpha, \beta) = (1 - e^{-\beta x})^{\alpha}. \tag{2}
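For the numerical illustrations later in the paper it is convenient to have (1) and (2) available as functions; a minimal R sketch (the names deed, peed and qeed are ours, not from the paper):

```r
deed <- function(x, a, b) a * b * exp(-b * x) * (1 - exp(-b * x))^(a - 1)  # pdf (1)
peed <- function(x, a, b) (1 - exp(-b * x))^a                              # cdf (2)
qeed <- function(u, a, b) -log(1 - u^(1 / a)) / b                          # inverse of (2)
```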
The demand for predicting a future sample on the basis of current information has been burgeoning, since prediction finds application in a wide range of activities, including science, engineering, the social sciences and other applied areas. Predictive inference allows us to infer about future lifetimes using observed data. Many authors have taken up this impetus and made successful efforts to address the problem of Bayesian prediction of future observations based on various types of censored data from different lifetime models. A large body of literature is available on the prediction problem for censored as well as complete data (see [8], [9], [10], [11], [12]).
The paper is organized as follows. Section 2 describes the censoring scheme in detail. Section 3 provides estimates through different methods of estimation, i.e. maximum likelihood estimates, estimates through the maximum product spacing method, bootstrap estimates and Bayes estimates. Section 4 is devoted to the Bayesian prediction of future observations based on GPH censored data, in which both one-sample and two-sample prediction are attempted; it also contains stress-strength reliability estimation. In Section 5, a real data set is considered to demonstrate the applicability of the methodology. In Section 6, a simulation study is conducted. Finally, conclusions are summarized in Section 7.
2. The Censoring Scheme
Let the experiment begin with n units. The lifetimes of the sample units X_1, X_2, ..., X_n are supposed to be independent and identically distributed random variables from a distribution with cumulative distribution function (cdf) F(·) and probability density function (pdf) f(·). We have prefixed integers k, m ∈ {1, 2, ..., n} such that k < m. Let R_i denote the number of units randomly removed from the experiment at the i-th failure, obeying the condition ∑_{i=1}^{m} R_i + m = n. The test is terminated at the stopping time T* = max{X_{k:m:n}, min{X_{m:m:n}, T}}. It may be noted that this scheme guarantees a minimum of k failures. Let D be the number of observed failures up to time T. Then the observed data fall into one of the following three cases under this scheme:
Case-I:   X_{1:m:n}, X_{2:m:n}, ..., X_{k:m:n},   if T < X_{k:m:n};
Case-II:  X_{1:m:n}, X_{2:m:n}, ..., X_{D:m:n},   if X_{k:m:n} < T < X_{m:m:n};
Case-III: X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n},   if X_{k:m:n} < X_{m:m:n} < T.
A schematic representation of this censoring scheme is given in Figure 1. Note that for Case-I, X_{k+1:m:n}, ..., X_{m:m:n} are not observed; likewise, for Case-II, X_{D+1:m:n}, ..., X_{m:m:n} are not observed. Given a generalised progressive hybrid censored sample, the likelihood functions for Case-I, Case-II and Case-III, denoted by L_I(α, β), L_II(α, β) and L_III(α, β), are given below:
Case-I:   L_I(\alpha, \beta) = K_1 \prod_{j=1}^{k-1} f(x_{j:m:n}) [1 - F(x_{j:m:n})]^{R_j} \, f(x_{k:m:n}) [1 - F(x_{k:m:n})]^{R_k^*},

Case-II:  L_{II}(\alpha, \beta) = K_2 \prod_{j=1}^{D} f(x_{j:m:n}) [1 - F(x_{j:m:n})]^{R_j} \, [1 - F(T)]^{R_{D+1}^*}, \tag{3}

Case-III: L_{III}(\alpha, \beta) = K_3 \prod_{j=1}^{m} f(x_{j:m:n}) [1 - F(x_{j:m:n})]^{R_j},
Figure 1: Schematic representation of the generalised progressive hybrid censoring scheme.
where K_1 = \prod_{j=1}^{k} \left[\sum_{i=j}^{m} (R_i + 1)\right], K_2 = \prod_{j=1}^{D} \left[\sum_{i=j}^{m} (R_i + 1)\right], K_3 = \prod_{j=1}^{m} \left[\sum_{i=j}^{m} (R_i + 1)\right], R_k^* = n - k - \sum_{i=1}^{k-1} R_i and R_{D+1}^* = n - D - \sum_{i=1}^{D} R_i.
3. Estimation
3.1. Classical Estimation

3.1.1 Maximum Likelihood Estimation
Maximum likelihood estimation is one of the most felicitous methods in the classical paradigm for obtaining estimates of the parameters of a proposed distribution. In this section, we find the MLEs of α and β of the considered distribution. The MLEs \hat\alpha and \hat\beta of α and β, respectively, can be obtained by maximising the likelihood function. Using equations (1) and (2), the likelihood can be written as
L(\alpha, \beta) \propto (\alpha\beta)^J \prod_{j=1}^{J} (1 - e^{-\beta x_j})^{\alpha - 1} e^{-\beta x_j} [1 - (1 - e^{-\beta x_j})^{\alpha}]^{R_j} \times W(\alpha, \beta), \tag{4}

where W(\alpha, \beta) = 1 if J = k or m (Case-I and Case-III), and W(\alpha, \beta) = [1 - (1 - e^{-\beta T})^{\alpha}]^{R_{D+1}^*} if J = D (Case-II),
and hence the log-likelihood will be

l(\alpha, \beta) \propto J \ln(\alpha\beta) + (\alpha - 1) \sum_{j=1}^{J} \ln(1 - e^{-\beta x_j}) + \sum_{j=1}^{J} R_j \ln[1 - (1 - e^{-\beta x_j})^{\alpha}] - \beta \sum_{j=1}^{J} x_j + \ln W(\alpha, \beta). \tag{5}
Differentiating it with respect to the parameters α and β, respectively, we get

\frac{\partial l(\alpha, \beta)}{\partial \alpha} = \frac{J}{\alpha} + \sum_{j=1}^{J} \ln(1 - e^{-\beta x_j}) - \sum_{j=1}^{J} \frac{R_j (1 - e^{-\beta x_j})^{\alpha} \ln(1 - e^{-\beta x_j})}{1 - (1 - e^{-\beta x_j})^{\alpha}} + \frac{\partial \ln W(\alpha, \beta)}{\partial \alpha}, \tag{6}

\frac{\partial l(\alpha, \beta)}{\partial \beta} = \frac{J}{\beta} + (\alpha - 1) \sum_{j=1}^{J} \frac{x_j e^{-\beta x_j}}{1 - e^{-\beta x_j}} - \sum_{j=1}^{J} \frac{R_j \alpha (1 - e^{-\beta x_j})^{\alpha - 1} x_j e^{-\beta x_j}}{1 - (1 - e^{-\beta x_j})^{\alpha}} - \sum_{j=1}^{J} x_j + \frac{\partial \ln W(\alpha, \beta)}{\partial \beta}, \tag{7}

where \frac{\partial \ln W(\alpha, \beta)}{\partial \alpha} = -\frac{R_{D+1}^* (1 - e^{-\beta T})^{\alpha} \ln(1 - e^{-\beta T})}{1 - (1 - e^{-\beta T})^{\alpha}} and \frac{\partial \ln W(\alpha, \beta)}{\partial \beta} = -\frac{R_{D+1}^* \alpha (1 - e^{-\beta T})^{\alpha - 1} T e^{-\beta T}}{1 - (1 - e^{-\beta T})^{\alpha}} if J = D, and both are zero otherwise. The MLEs of α and β can be obtained by solving the likelihood equations (6) and (7) simultaneously. It may be noted, however, that explicit solutions of these equations are difficult to find. Therefore, we propose the use of a numerical method to solve the two nonlinear equations.
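In practice one can equivalently maximise the log-likelihood (5) itself rather than solving the score equations. A minimal R sketch for a Case-II sample, where x, R, T0 and Rstar (the observed failures, the removal vector, the threshold T and R*_{D+1}) are placeholders supplied by the user; for Case-I and Case-III the Rstar term is simply dropped, since W(α, β) = 1:

```r
negll <- function(par, x, R, T0, Rstar) {     # negative of the log-likelihood (5)
  a <- par[1]; b <- par[2]
  if (a <= 0 || b <= 0) return(1e10)          # keep the search in the parameter space
  G <- 1 - exp(-b * x)
  -(length(x) * log(a * b) + (a - 1) * sum(log(G)) +
      sum(R * log(1 - G^a)) - b * sum(x) +
      Rstar * log(1 - (1 - exp(-b * T0))^a))  # the ln W(alpha, beta) term
}
fit <- optim(c(1, 0.01), negll, x = x, R = R, T0 = T0, Rstar = Rstar,
             hessian = TRUE)
mle <- fit$par                                # (alpha-hat, beta-hat)
se  <- sqrt(diag(solve(fit$hessian)))         # SEs from the observed information
```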
3.1.2 Maximum Product Spacing Method
It has been underscored that for small samples the MPS method often performs better than the MLE, while the asymptotic behaviour of the two methods is the same. In this section, the method of product of spacings is proposed for point estimation of the parameters of the EED under the GPH censoring scheme. The product spacing, denoted by G, is defined as the product of the probabilities of an observation lying in the intervals induced by the sample observations, and the MPS estimates are the values of the parameters that maximize G. The expressions for G and the corresponding equations whose solutions provide the MPS estimates are given below. For Case I and Case III:
G \propto \prod_{i=1}^{J+1} [F(x_i) - F(x_{i-1})] \prod_{i=1}^{J} [1 - F(x_i)]^{R_i}
  \propto \prod_{i=1}^{J+1} [(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}] \prod_{i=1}^{J} [1 - (1 - e^{-\beta x_i})^{\alpha}]^{R_i}.

The logarithm of G is

\log G \propto \sum_{i=1}^{J+1} \log[(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}] + \sum_{i=1}^{J} R_i \log[1 - (1 - e^{-\beta x_i})^{\alpha}]. \tag{8}

The partial derivatives with respect to the parameters, when equated to zero, give the following:

\frac{\partial \log G}{\partial \alpha} = \sum_{i=1}^{J+1} \frac{(1 - e^{-\beta x_i})^{\alpha} \log(1 - e^{-\beta x_i}) - (1 - e^{-\beta x_{i-1}})^{\alpha} \log(1 - e^{-\beta x_{i-1}})}{(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}} - \sum_{i=1}^{J} R_i \frac{(1 - e^{-\beta x_i})^{\alpha} \log(1 - e^{-\beta x_i})}{1 - (1 - e^{-\beta x_i})^{\alpha}} = 0, \tag{9}

\frac{\partial \log G}{\partial \beta} = \sum_{i=1}^{J+1} \frac{\alpha(1 - e^{-\beta x_i})^{\alpha-1} e^{-\beta x_i} x_i - \alpha(1 - e^{-\beta x_{i-1}})^{\alpha-1} e^{-\beta x_{i-1}} x_{i-1}}{(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}} - \sum_{i=1}^{J} R_i \frac{\alpha(1 - e^{-\beta x_i})^{\alpha-1} e^{-\beta x_i} x_i}{1 - (1 - e^{-\beta x_i})^{\alpha}} = 0. \tag{10}
For Case II:

G \propto \prod_{i=1}^{D} [F(x_i) - F(x_{i-1})] \, [F(T) - F(x_D)] \, [1 - F(T)] \prod_{i=1}^{D} [1 - F(x_i)]^{R_i} [1 - F(T)]^{R_{D+1}^*}
  \propto \prod_{i=1}^{D} [(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}] \, [(1 - e^{-\beta T})^{\alpha} - (1 - e^{-\beta x_D})^{\alpha}] \, [1 - (1 - e^{-\beta T})^{\alpha}] \prod_{i=1}^{D} [1 - (1 - e^{-\beta x_i})^{\alpha}]^{R_i} [1 - (1 - e^{-\beta T})^{\alpha}]^{R_{D+1}^*}

and

\log G \propto \sum_{i=1}^{D} \log[(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}] + \log[(1 - e^{-\beta T})^{\alpha} - (1 - e^{-\beta x_D})^{\alpha}] + \log[1 - (1 - e^{-\beta T})^{\alpha}] + \sum_{i=1}^{D} R_i \log[1 - (1 - e^{-\beta x_i})^{\alpha}] + R_{D+1}^* \log[1 - (1 - e^{-\beta T})^{\alpha}].

The resulting equations are

\frac{\partial \log G}{\partial \alpha} = \sum_{i=1}^{D} \frac{(1 - e^{-\beta x_i})^{\alpha} \log(1 - e^{-\beta x_i}) - (1 - e^{-\beta x_{i-1}})^{\alpha} \log(1 - e^{-\beta x_{i-1}})}{(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}} + \frac{(1 - e^{-\beta T})^{\alpha} \log(1 - e^{-\beta T}) - (1 - e^{-\beta x_D})^{\alpha} \log(1 - e^{-\beta x_D})}{(1 - e^{-\beta T})^{\alpha} - (1 - e^{-\beta x_D})^{\alpha}} - \frac{(1 - e^{-\beta T})^{\alpha} \log(1 - e^{-\beta T})}{1 - (1 - e^{-\beta T})^{\alpha}} - \sum_{i=1}^{D} R_i \frac{(1 - e^{-\beta x_i})^{\alpha} \log(1 - e^{-\beta x_i})}{1 - (1 - e^{-\beta x_i})^{\alpha}} - R_{D+1}^* \frac{(1 - e^{-\beta T})^{\alpha} \log(1 - e^{-\beta T})}{1 - (1 - e^{-\beta T})^{\alpha}} = 0, \tag{11}

\frac{\partial \log G}{\partial \beta} = \sum_{i=1}^{D} \frac{\alpha(1 - e^{-\beta x_i})^{\alpha-1} e^{-\beta x_i} x_i - \alpha(1 - e^{-\beta x_{i-1}})^{\alpha-1} e^{-\beta x_{i-1}} x_{i-1}}{(1 - e^{-\beta x_i})^{\alpha} - (1 - e^{-\beta x_{i-1}})^{\alpha}} + \frac{\alpha(1 - e^{-\beta T})^{\alpha-1} e^{-\beta T} T - \alpha(1 - e^{-\beta x_D})^{\alpha-1} e^{-\beta x_D} x_D}{(1 - e^{-\beta T})^{\alpha} - (1 - e^{-\beta x_D})^{\alpha}} - \frac{\alpha(1 - e^{-\beta T})^{\alpha-1} e^{-\beta T} T}{1 - (1 - e^{-\beta T})^{\alpha}} - \sum_{i=1}^{D} R_i \frac{\alpha(1 - e^{-\beta x_i})^{\alpha-1} e^{-\beta x_i} x_i}{1 - (1 - e^{-\beta x_i})^{\alpha}} - R_{D+1}^* \frac{\alpha(1 - e^{-\beta T})^{\alpha-1} e^{-\beta T} T}{1 - (1 - e^{-\beta T})^{\alpha}} = 0. \tag{12}
As with the likelihood equations, the MPS equations are nonlinear; thus an approach similar to that proposed for the MLE is used here as well.
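Concretely, the spacing criterion (8) can be maximised directly instead of solving (9)-(10); a minimal sketch for Case I/III, with the same placeholder names as before:

```r
neglogG <- function(par, x, R) {              # negative of log G in equation (8)
  a <- par[1]; b <- par[2]
  if (a <= 0 || b <= 0) return(1e10)
  Fx <- (1 - exp(-b * x))^a                   # EED cdf at the ordered observations
  sp <- c(Fx, 1) - c(0, Fx)                   # the J + 1 spacings F(x_i) - F(x_{i-1})
  if (any(sp <= 0)) return(1e10)              # guard against ties and underflow
  -(sum(log(sp)) + sum(R * log(1 - Fx)))
}
mps <- optim(c(1, 0.01), neglogG, x = x, R = R)$par
```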
3.1.3 Bootstrap Estimates
The asymptotic confidence intervals are generally not expected to perform well when the effective sample size is small. A key provision that addresses this problem is a resampling technique such as the bootstrap, which typically gives more accurate approximate confidence intervals. Two methods are commonly available in the literature for finding bootstrap confidence intervals for the parameters of interest: the percentile bootstrap (Boot-p) interval given by [13], and the Student's t bootstrap (Boot-t) interval suggested by [14]. The generation algorithm for GPH censored samples is discussed in Section 6. We propose the following algorithm, suggested by [15], to generate parametric bootstrap samples.

Algorithm:

Step 1. Compute \hat\alpha and \hat\beta, the ML estimates of the parameters α and β, based on the GPH censored sample x = (x_{1:J:n}, x_{2:J:n}, ..., x_{J:J:n}).

Step 2. Generate a bootstrap GPH censored sample x* = (x*_{1:J:n}, x*_{2:J:n}, ..., x*_{J:J:n}) from the EED with parameters \hat\alpha and \hat\beta using the algorithm given in Section 6. From these data, compute the bootstrap estimates, say \hat\alpha* and \hat\beta*.

Step 3. Repeat Step 2 B times to obtain bootstrap samples of \hat\alpha and \hat\beta, namely (\hat\alpha*_1, \hat\alpha*_2, ..., \hat\alpha*_B) and (\hat\beta*_1, \hat\beta*_2, ..., \hat\beta*_B).
From the bootstrap samples generated by the above algorithm, the bootstrap confidence intervals for the parameters α and β can be obtained by the Boot-p method as follows. Let G(z) = Prob(\hat\alpha* ≤ z) be the cdf of \hat\alpha*, and define \hat\alpha_Boot(z) = G^{-1}(z) for given z. The 100(1 - ζ)% bootstrap percentile interval for α is then (\hat\alpha_Boot(ζ/2), \hat\alpha_Boot(1 - ζ/2)), i.e. the ζ/2 and (1 - ζ/2) quantiles of the bootstrap sample \hat\alpha*_1, \hat\alpha*_2, ..., \hat\alpha*_B. In a similar fashion we can obtain the bootstrap percentile interval for β.
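A compact sketch of the Boot-p computation, where gen_gph() is the GPH sample generator sketched in Section 6 and fit_mle() is a hypothetical wrapper that refits the MLE of Section 3.1.1 to a bootstrap sample and returns c(alpha, beta); both names are ours:

```r
B <- 1000
boot <- replicate(B, fit_mle(gen_gph(n, k, m, T0, R, mle[1], mle[2])))  # Steps 2-3
quantile(boot[1, ], c(0.025, 0.975))          # 95% Boot-p interval for alpha
quantile(boot[2, ], c(0.025, 0.975))          # 95% Boot-p interval for beta
```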
3.2. Bayesian Estimation
This section describes the method for obtaining Bayes estimates of the parameters α and β based on GPH censored data with prefixed removals. To obtain the Bayes estimators, we assume that the parameters α and β are random variables, independently distributed with prior distributions

g_1(\alpha) = \frac{\lambda_1^{\nu_1}}{\Gamma(\nu_1)} e^{-\lambda_1 \alpha} \alpha^{\nu_1 - 1}; \quad 0 < \alpha < \infty, \ \lambda_1 > 0, \ \nu_1 > 0, \tag{13}

g_2(\beta) = \frac{\lambda_2^{\nu_2}}{\Gamma(\nu_2)} e^{-\lambda_2 \beta} \beta^{\nu_2 - 1}; \quad 0 < \beta < \infty, \ \lambda_2 > 0, \ \nu_2 > 0, \tag{14}
respectively. Combining the priors given by (13) and (14) with the likelihood function (4), we obtain the joint posterior density function of α and β as π(α, β | x, R) = J_1/J_0, where

J_1 = \frac{\lambda_1^{\nu_1} \lambda_2^{\nu_2}}{\Gamma(\nu_1)\Gamma(\nu_2)} e^{-(\lambda_1\alpha + \lambda_2\beta)} \alpha^{J+\nu_1-1} \beta^{J+\nu_2-1} \prod_{j=1}^{J} (1 - e^{-\beta x_j})^{\alpha-1} [1 - (1 - e^{-\beta x_j})^{\alpha}]^{R_j} e^{-\beta x_j} \, W(\alpha, \beta) \tag{15}
and J_0 = \int_0^\infty \int_0^\infty J_1 \, d\alpha \, d\beta. The Bayes estimators \hat\alpha_B and \hat\beta_B of α and β under the squared error loss function (SELF) can be obtained as

\hat\alpha_B = \int_0^\infty \int_0^\infty \alpha \, \pi(\alpha, \beta \mid x, R) \, d\alpha \, d\beta, \qquad \hat\beta_B = \int_0^\infty \int_0^\infty \beta \, \pi(\alpha, \beta \mid x, R) \, d\alpha \, d\beta. \tag{16}

The integrals involved in equation (16) cannot be simplified into any standard form, so we use an MCMC technique, namely Metropolis-Hastings within Gibbs sampling, to obtain the estimates. The full conditional posterior distributions of the parameters α and β are
\pi_1(\alpha \mid \beta, x, R) \propto e^{-\lambda_1\alpha} \, \alpha^{J+\nu_1-1} \prod_{j=1}^{J} (1 - e^{-\beta x_j})^{\alpha-1} [1 - (1 - e^{-\beta x_j})^{\alpha}]^{R_j} \, W(\alpha, \beta), \tag{17}

\pi_2(\beta \mid \alpha, x, R) \propto e^{-\lambda_2\beta} \, \beta^{J+\nu_2-1} \prod_{j=1}^{J} (1 - e^{-\beta x_j})^{\alpha-1} [1 - (1 - e^{-\beta x_j})^{\alpha}]^{R_j} e^{-\beta x_j} \, W(\alpha, \beta). \tag{18}
The algorithm consists of the following steps:

1. Set i = 1 and take initial guesses of α and β, say α_0 and β_0.
2. Using the Metropolis algorithm, generate (α_i, β_i) from the full conditionals π_1(α | β_{i-1}, x, R) and π_2(β | α_i, x, R).
3. Repeat Step 2 N times to obtain (α_1, β_1), (α_2, β_2), ..., (α_N, β_N).
4. Obtain the Bayes estimates of α and β under SELF as \hat E(\alpha \mid data) = \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} \alpha_i and \hat E(\beta \mid data) = \frac{1}{N - N_0} \sum_{i=N_0+1}^{N} \beta_i, where N_0 is the burn-in period.
5. The HPD credible intervals for α and β can be obtained using the algorithm given by [16]. Let the ordered MCMC samples be (α_{[1]}, α_{[2]}, ..., α_{[N]}) and (β_{[1]}, β_{[2]}, ..., β_{[N]}). The candidate 100(1 - ζ)% credible intervals are (α_{[1]}, α_{[⌊N(1-ζ)⌋]}), ..., (α_{[⌊Nζ⌋]}, α_{[N]}), and similarly for β, where ⌊x⌋ denotes the greatest integer less than or equal to x. The HPD credible interval of α (and likewise of β) is the interval with the shortest length.
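A minimal sketch of this Metropolis-within-Gibbs sampler and of the step-5 HPD search, reusing negll() from the sketch in Section 3.1.1; the gamma hyper-parameters l1, v1, l2, v2 are placeholders (small values give a nearly non-informative prior):

```r
log_post <- function(a, b) {                  # log of (15), up to a constant
  if (a <= 0 || b <= 0) return(-Inf)
  (v1 - 1) * log(a) + (v2 - 1) * log(b) - l1 * a - l2 * b -
    negll(c(a, b), x, R, T0, Rstar)           # log-likelihood part
}
N <- 10000; ch <- matrix(NA, N, 2); cur <- mle
for (i in 1:N) {
  for (j in 1:2) {                            # Metropolis step for each parameter
    prop <- cur; prop[j] <- rnorm(1, cur[j], se[j])   # random-walk proposal
    if (log(runif(1)) < log_post(prop[1], prop[2]) -
                        log_post(cur[1], cur[2])) cur <- prop
  }
  ch[i, ] <- cur
}
post <- ch[-(1:2000), ]                       # discard the burn-in N0 = 2000
colMeans(post)                                # Bayes estimates under SELF
hpd <- function(s, cred = 0.95) {             # shortest interval among candidates
  s <- sort(s); w <- floor(cred * length(s))
  j <- which.min(s[(w + 1):length(s)] - s[seq_len(length(s) - w)])
  c(s[j], s[j + w])
}
hpd(post[, 1]); hpd(post[, 2])                # HPD intervals for alpha and beta
```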
4. Bayesian Prediction
4.1. One Sample Prediction
Bayes prediction of an unknown observation belonging to a future sample, based on the current information available to us, is a useful tool for assessing the lifetime of unobserved data. One-sample prediction allows us to predict the lifetimes of future ordered observations that may not be observed due to censoring. In this section, we derive the predictive posteriors of the future observations from the EED using the informative sample observed under the GPH censoring scheme.
Let X_{(1)}, X_{(2)}, ..., X_{(J)} be the ordered observed sample and y_{(1:r_i)}, y_{(2:r_i)}, ..., y_{(r_i:r_i)} be the future ordered sample from the same parent population, where s = 1, 2, ..., r_i and i = 1, 2, ..., J (J = k for Case-I, J = D for Case-II, and J = m for Case-III). From [17], the conditional PDF of y_{(s:r_i)} given X_{(i)} is obtained as
f(y_{(s:r_i)} \mid x_{(i)}) =
  \frac{(n-i)!}{(s-1)!(n-i-s)!} \frac{[1 - F(y_{(s:r_i)})]^{n-i-s}}{[1 - F(x_{(i)})]^{n-i}} [F(y_{(s:r_i)}) - F(x_{(i)})]^{s-1} f(y_{(s:r_i)}), for Case-I and Case-III,
  \frac{(n-i)!}{(s-1)!(n-i-s)!} \frac{[1 - F(y_{(s:r_i)})]^{n-i-s}}{[1 - F(T)]^{n-i}} [F(y_{(s:r_i)}) - F(T)]^{s-1} f(y_{(s:r_i)}), for Case-II. \tag{19}

After substituting the pdf and cdf from equations (1) and (2), we get

f(y_{(s:r_i)} \mid x_{(i)}) =
  \frac{(n-i)!}{(s-1)!(n-i-s)!} \frac{[1 - (1 - e^{-\beta y_{(s:r_i)}})^{\alpha}]^{n-i-s}}{[1 - (1 - e^{-\beta x_{(i)}})^{\alpha}]^{n-i}} [(1 - e^{-\beta y_{(s:r_i)}})^{\alpha} - (1 - e^{-\beta x_{(i)}})^{\alpha}]^{s-1} \, \alpha\beta e^{-\beta y_{(s:r_i)}} (1 - e^{-\beta y_{(s:r_i)}})^{\alpha-1}, for Case-I and Case-III,
  \frac{(n-i)!}{(s-1)!(n-i-s)!} \frac{[1 - (1 - e^{-\beta y_{(s:r_i)}})^{\alpha}]^{n-i-s}}{[1 - (1 - e^{-\beta T})^{\alpha}]^{n-i}} [(1 - e^{-\beta y_{(s:r_i)}})^{\alpha} - (1 - e^{-\beta T})^{\alpha}]^{s-1} \, \alpha\beta e^{-\beta y_{(s:r_i)}} (1 - e^{-\beta y_{(s:r_i)}})^{\alpha-1}, for Case-II. \tag{20}
Then, the predictive posterior density of the future observations under the GPH censoring scheme can be obtained as

f_1(y_{(s:r_i)} \mid x) = \int_0^\infty \int_0^\infty f(y_{(s:r_i)} \mid \alpha, \beta, x) \, \pi(\alpha, \beta \mid x) \, d\alpha \, d\beta. \tag{21}
Since the integrals involved in this expression cannot be simplified to closed form, numerical methods are used.
The MCMC sample {(α_i, β_i), i = 1, 2, ..., M} obtained from π(α, β | x) using the Gibbs algorithm can be utilized to obtain a consistent estimate of f_1(y_{(s:r_i)} | x) as

\hat f_1(y_{(s:r_i)} \mid x) = \frac{1}{M - M_0} \sum_{i=M_0+1}^{M} f(y_{(s:r_i)} \mid \alpha_i, \beta_i, x), \tag{22}
where M_0 denotes the burn-in period of the Markov chain. Furthermore, the survival function of the future sample can be obtained as

S_{y_{(s:r_i)}}(T) = 1 - \int_{x_{(J)}}^{T} \hat f_1(y_{(s:r_i)} \mid x) \, dy_{(s:r_i)} = 1 - \int_{x_{(J)}}^{T} \int_0^\infty \int_0^\infty f(y_{(s:r_i)} \mid \alpha, \beta, x) \, \pi(\alpha, \beta \mid x) \, d\alpha \, d\beta \, dy_{(s:r_i)}. \tag{23}
Moreover, the two-sided 100(1 - δ)% prediction interval (L_{s:r_i}, U_{s:r_i}) for y_{(s:r_i)} can be obtained by solving the two equations P(y_{(s:r_i)} > U_{s:r_i} | x) = δ/2 and P(y_{(s:r_i)} > L_{s:r_i} | x) = 1 - δ/2. The interval must be obtained by an iterative method, as these equations cannot be solved analytically.
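Equation (22) is just a posterior average of the conditional density (20), so it is straightforward to evaluate from the MCMC output. A Case-II sketch, where i, s, n and T0 are placeholders and post holds the retained draws from the sampler of Section 3.2 (the density is evaluated at points y > T):

```r
f_cond <- function(y, a, b, i, s, n, T0) {    # conditional density (20), Case-II
  Fy <- (1 - exp(-b * y))^a
  FT <- (1 - exp(-b * T0))^a
  exp(lfactorial(n - i) - lfactorial(s - 1) - lfactorial(n - i - s)) *
    (1 - Fy)^(n - i - s) * (Fy - FT)^(s - 1) / (1 - FT)^(n - i) *
    a * b * exp(-b * y) * (1 - exp(-b * y))^(a - 1)
}
f1_hat <- function(y)                         # the MCMC estimate (22)
  mean(apply(post, 1, function(p) f_cond(y, p[1], p[2], i, s, n, T0)))
```

The prediction bounds can then be located by integrating f1_hat() with integrate() and solving the two survival-level equations with uniroot().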
4.2. Two Sample Prediction
There may occur a situation where the distribution of the k-th order statistic of a future sample is independent of the informative sample, i.e. f(y_{(k)} | α, β, x) is the same as f(y_{(k)} | α, β). This is the two-sample prediction problem: the experimenter is interested in the k-th failure time of a future sample of size N following the same lifetime distribution. The PDF of the k-th order statistic is given by (see [17]):
p(y_{(k)} \mid \alpha, \beta) = \frac{N!}{(k-1)!(N-k)!} [F(y_{(k)})]^{k-1} [1 - F(y_{(k)})]^{N-k} f(y_{(k)}). \tag{24}
After substituting the pdf and cdf from equations (1) and (2), we get

p(y_{(k)} \mid \alpha, \beta) = \frac{N!}{(k-1)!(N-k)!} (1 - e^{-\beta y_{(k)}})^{\alpha(k-1)} [1 - (1 - e^{-\beta y_{(k)}})^{\alpha}]^{N-k} \, \alpha\beta e^{-\beta y_{(k)}} (1 - e^{-\beta y_{(k)}})^{\alpha-1}. \tag{25}

The predictive posterior density of the future observations under the GPH censoring scheme is given by

p_1(y_{(k)} \mid x) = \int_0^\infty \int_0^\infty p(y_{(k)} \mid \alpha, \beta) \, \pi(\alpha, \beta \mid x) \, d\alpha \, d\beta. \tag{26}
Since the above equation cannot be solved analytically, we use the MCMC method along with the Gibbs algorithm; a consistent estimator of p_1(y_{(k)} | x) is given by
\hat p_1(y_{(k)} \mid x) = \frac{1}{M - M_0} \sum_{i=M_0+1}^{M} p(y_{(k)} \mid \alpha_i, \beta_i), \tag{27}
where M_0 is the burn-in period. The survival function of the future sample can be defined as

S_{y_{(k)}}(T) = 1 - \int_0^{T} \hat p_1(y_{(k)} \mid x) \, dy_{(k)} = 1 - \int_0^{T} \int_0^\infty \int_0^\infty p(y_{(k)} \mid \alpha, \beta) \, \pi(\alpha, \beta \mid x) \, d\alpha \, d\beta \, dy_{(k)}. \tag{28}
The two-sided 100(1 - δ)% prediction interval (L_k, U_k) for y_{(k)} can be obtained by solving the equations P(y_{(k)} > U_k | x) = δ/2 and P(y_{(k)} > L_k | x) = 1 - δ/2; again, an iterative method is required since these equations cannot be solved analytically.
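Because (26) depends on the data only through the posterior draws, the k-th future order statistic can also be predicted purely by simulation: since the EED quantile function is monotone, the k-th order statistic of a future sample equals qeed() (from the sketch in Section 1) applied to the k-th uniform order statistic. A sketch with placeholder future-sample size N_f and rank k_f:

```r
yk <- apply(post, 1, function(p) qeed(sort(runif(N_f))[k_f], p[1], p[2]))
quantile(yk, c(0.025, 0.975))                 # 95% two-sample prediction interval
```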
4.3. Stress Strength Reliability
The problem of stress-strength reliability estimation is not new; rather, it has a long history. The term stress-strength was first introduced by [18]. Since then, a lot of work, both parametric and non-parametric in nature, has been done in this direction; one may look at some recent works such as [19] and [20]. This section discusses the inferential procedure for the stress-strength reliability R = P(X < Y), when X and Y are independent and follow the EED with parameters (α_1, β_1) and (α_2, β_2), respectively. In a reliability study, let Y denote the strength of the unit and X denote the magnitude of the stress applied to the unit by the operating environment. A unit functions well if its strength is greater than the stress imposed on it. The stress-strength reliability R of the system is defined as
R = \Pr[X < Y] = \int_0^\infty \int_x^\infty f_X(x) f_Y(y) \, dy \, dx = \int_0^\infty f_X(x) [1 - F_Y(x)] \, dx
  = \int_0^\infty \alpha_1 \beta_1 e^{-\beta_1 x} (1 - e^{-\beta_1 x})^{\alpha_1 - 1} [1 - (1 - e^{-\beta_2 x})^{\alpha_2}] \, dx \tag{29}
  = 1 - \alpha_1 \sum_{j=0}^{\infty} \binom{\alpha_2}{j} (-1)^j B\!\left(\alpha_1, \frac{j\beta_2}{\beta_1} + 1\right),

where B(·, ·) denotes the beta function. In the special case β_1 = β_2, this reduces to R = α_2/(α_1 + α_2). Further, one can compute the estimate of R by using the methods discussed in Section 3.
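Rather than truncating the infinite series in (29), R can be evaluated by one-dimensional numerical integration; a minimal sketch:

```r
ss_rel <- function(a1, b1, a2, b2)            # R = P(X < Y) for independent EED X, Y
  integrate(function(x) a1 * b1 * exp(-b1 * x) * (1 - exp(-b1 * x))^(a1 - 1) *
              (1 - (1 - exp(-b2 * x))^a2), 0, Inf)$value
ss_rel(1, 0.5, 2, 0.5)    # beta1 = beta2: gives alpha2/(alpha1 + alpha2) = 2/3
```

Plugging in the estimates of (α_1, β_1, α_2, β_2) obtained by any of the methods of Section 3 then yields the corresponding estimate of R.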
5. Real Data Illustration
This section deals with the real-life applicability of the proposed methodology. The data set comprises the survival times of two groups of patients suffering from head and neck cancer, reported by [21]. The first group of patients was treated with radiotherapy, whereas the second group was treated with both radiotherapy and chemotherapy. The data sets are as follows:
Data-1 (X): 6.53, 7, 10.42, 14.48, 16.10, 22.70, 34, 41.55, 42, 45.28, 49.40, 53.62, 63, 64, 83, 84, 91, 108, 112, 129, 133, 133, 139, 140, 140, 146, 149, 154, 157, 160, 160, 165, 146, 149, 154, 157, 160, 160, 165, 173, 176, 218, 225, 241, 248, 273, 277, 297, 405, 417, 420, 440, 523, 583, 594, 1101, 1146, 1417.
Data-2 (Y): 12.20, 23.56, 23.74, 25.87, 31.98, 37, 41.35, 47.38, 55.46, 58.36, 63.47, 68.46, 78.26, 74.47, 81, 43, 84, 92, 94, 110, 112, 119, 127, 130, 133, 140, 146, 155, 159, 173, 179, 194, 195, 209, 249, 281, 319, 339, 432, 469, 519, 633, 725, 817, 1776.
Before advancing further, we first check the validity of the EED for the above data sets. The summary of the data fit is quoted here:

Data Set   p-value   KS-distance   LogL        AIC        BIC        α̂_ML     SE(α̂_ML)   β̂_ML     SE(β̂_ML)
RT         0.0646    0.1720        -372.3767   748.7535   752.8743   1.0636   0.1851     0.0046   0.0007
RT+CT      0.2505    0.1498        -281.9551   567.9101   571.4785   1.0730   0.2178     0.0047   0.0009
So, it is clear that the EED fits the above two data sets. For illustrating the proposed methodology, we have generated censored data for prefixed m, k, T and numbers of removals. We have considered different removal patterns by fixing the values of R_1, R_2, ..., R_m for a set of values of m, k and T. The schemes considered are as follows (a small code sketch constructing these removal vectors follows the list):
S_{m:n}(1): All the removals are at the last failure, i.e. R_m = n - m.
S_{m:n}(2): All the removals are at the first failure, i.e. R_1 = n - m.
S_{m:n}(3): The removals are at the first and last failures, i.e. R_1 = R_m = (n - m)/2.
S_{m:n}(4): The removals are at the middle failures, i.e. R_{m/2} = R_{m/2+1} = (n - m)/2.
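The four patterns translate directly into removal vectors; a minimal sketch (assuming n - m is even for S(3) and S(4), and m is even for S(4)):

```r
scheme <- function(n, m, s) {
  R <- rep(0, m)
  if (s == 1) R[m] <- n - m                        # S(1): all at the last failure
  else if (s == 2) R[1] <- n - m                   # S(2): all at the first failure
  else if (s == 3) R[c(1, m)] <- (n - m) / 2       # S(3): first and last failures
  else R[c(m / 2, m / 2 + 1)] <- (n - m) / 2       # S(4): the two middle failures
  R
}
scheme(30, 22, 4)   # gives (0*10, 4*2, 0*10), the pattern used in Table 5
```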
Figure 2: Contour plot for parameters α (left) and β (right) based on generated censored dataset 2.4.
The generated censored datasets thus obtained are given in Table 1. As mentioned earlier, the likelihood equations, the product spacing equations and the posterior integrals do not have explicit solutions; therefore, a numerical approach implemented in the R software is employed. Basically, the optim() function is used here to find the ML and MPS estimates of the parameters. We have used the contour plot shown in Figure 2 to provide initial guesses to the optim() function; for more details the reader may see [22]. Using large sample theory, the asymptotic confidence intervals for α and β are also computed, with the variances of the estimates evaluated from the inverse of the estimated Fisher information matrix. The point and asymptotic confidence interval estimates thus obtained are summarized in Table 2.
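With the hessian = TRUE fit from the optim() sketch in Section 3.1.1 (which defines mle and se), the asymptotic interval is immediate; a minimal sketch:

```r
cbind(lower = mle - qnorm(0.975) * se,        # 95% asymptotic CIs for alpha, beta
      upper = mle + qnorm(0.975) * se)
```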
Table 1: The censored datasets generated from real datasets by considering different choices ofk, m, T and removal patterns
k m T Scheme Generated data-points Data Name
Data-1(RT)
25 40 600 s(4) (6.53, 7,10.42,14.48,16.1, 22.7, 34, 41.55, 42, 45.28, 49.4, 53.62, 63, 64, 83, 84, 91, 108, 112, 129, 133, 140, 140, 146, 154, 160, 160, 160, 165, 218, 225, 241, 248, 273, 417, 523, 594) 1.1
25 40 600 s(3) (6.53, 7,10.42,14.48,16.1, 22.7, 41.55, 42, 45.28, 53.62, 63, 64, 84, 91, 108, 112, 129, 133, 133, 140, 146, 146, 149, 154, 154, 157, 157, 160,160,160,160,165,165,173,176, 218, 225, 241, 248, 273) 1.2
25 40 600 S(2) (6.53,10.42,14.48,16.1, 22.7, 41.55, 42, 45.28, 49.4, 63, 64, 84, 91, 112, 133, 139, 140, 146, 146, 149, 157, 157, 160, 160, 160, 165, 173, 218, 225, 241, 248, 273, 277, 297, 405, 420, 583, 594) 1.3
25 40 600 S(1) (6.53, 7, 10.42, 14.48, 16.1, 22.7, 34, 41.55, 42, 45.28, 49.4, 53.62, 63, 64, 83, 84, 91, 108, 112, 129, 133, 133, 139, 140, 140, 146, 146, 149, 149,154,154,157,157,160,160,160,160,165,165,173) 1.4
Data-2(RT+CT)
20 30 600 s(4) (12.2, 23.56, 23.74, 25.87, 31.98, 37, 41.35, 43, 47.38, 55.46, 58.36, 63.47, 68.46, 74.47, 78.26, 84, 92, 119, 127, 133, 140, 146, 173, 179, 194, 195, 209, 281, 339, 519) 2.1
20 30 600 S(3) (12.2, 23.56, 23.74, 25.87, 31.98, 37, 43, 47.38, 55.46, 58.36, 63.47, 68.46, 74.47, 81, 92, 94, 110, 112, 119, 127, 130, 133, 140, 146, 159, 179, 194, 195, 209, 249) 2.2
20 30 600 s(2) (12.2, 23.56, 23.74, 25.87, 37, 43, 63.47, 68.46, 74.47, 78.26, 81, 84, 92, 110, 112, 119, 127, 133, 146, 155, 159, 173, 179, 194, 209, 249, 319, 469, 519) 2.3
20 30 600 s(1) (12.2, 23.56, 23.74, 25.87, 31.98, 37, 41.35, 43, 47.38, 55.46, 58.36, 63.47, 68.46, 74.47, 78.26, 81, 84, 92, 94,110, 112,119,127,130,133, 140,146,155,159,173) 2.4
Figure 3: Iteration and density plots of the MCMC samples for parameters α (left) and β (right) for dataset 2.4.
To compute Bayes estimates for considered dataset, we have used MCMC technique discussed in Section 3.2. Following [23], we ran three MCMC chains with initial values selected as MLE, MLE - (asymptotic standard deviation) and MLE + (asymptotic standard deviation), respectively.
Table 2: ML, MPS, Bayes and bootstrap estimates of the parameters, with their respective 95% CIs (within brackets), for the generated censored datasets

Dataset   α̂_ML                   α̂_MP                   α̂_B                    α̂_Boot
1.1       0.9116(0.5719,1.2443)  0.8971(0.5889,1.2346)  0.8955(0.5912,1.2341)  0.9003(0.5892,1.2800)
1.2       1.3696(0.8197,1.9074)  1.2563(0.8332,1.9063)  1.2674(0.8572,1.9057)  1.3119(0.8329,2.0743)
1.3       1.0873(0.6322,1.5072)  1.0305(0.6688,1.5061)  1.2249(0.6933,1.4851)  1.0344(0.6688,1.5021)
1.4       1.5528(0.8724,2.1812)  1.4249(0.9307,2.1751)  1.4378(0.9512,2.1749)  1.4689(0.9355,2.2880)
2.1       1.6421(0.8167,2.4140)  1.5305(0.8729,2.4106)  1.4925(0.8728,2.1817)  1.5491(0.7875,2.4105)
2.2       2.3914(1.1093,3.5832)  2.2012(1.2013,3.5812)  2.4258(1.2016,3.2355)  2.1641(1.1022,3.5816)
2.3       1.3544(0.6903,1.9820)  1.3471(0.7367,1.9713)  1.4143(0.7366,1.8731)  1.3384(0.6989,1.9716)
2.4       1.8013(0.8838,2.7044)  1.6441(0.9086,2.6934)  1.7765(0.9084,2.6250)  1.9375(0.9082,2.6942)

Dataset   β̂_ML                   β̂_MP                   β̂_B                    β̂_Boot
1.1       0.0032(0.0001,0.0123)  0.0032(0.0010,0.0044)  0.0033(0.0013,0.0045)  0.0031(0.0001,0.0115)
1.2       0.0069(0.0034,0.0121)  0.0068(0.0040,0.0094)  0.0067(0.0036,0.0093)  0.0069(0.0033,0.0116)
1.3       0.0047(0.0017,0.0163)  0.0047(0.0017,0.0065)  0.0049(0.0021,0.0065)  0.0049(0.0018,0.0155)
1.4       0.0078(0.0042,0.0148)  0.0078(0.0048,0.0108)  0.0077(0.0041,0.0107)  0.0081(0.0038,0.0145)
2.1       0.0084(0.0047,0.0176)  0.0081(0.0046,0.0120)  0.0091(0.0051,0.0120)  0.0082(0.0045,0.0169)
2.2       0.0124(0.0072,0.0195)  0.0116(0.0064,0.0171)  0.0114(0.0067,0.0171)  0.0115(0.0056,0.0181)
2.3       0.0065(0.0031,0.0134)  0.0065(0.0031,0.0093)  0.0068(0.0032,0.0093)  0.0071(0.0021,0.0126)
2.4       0.0094(0.0040,0.0210)  0.0093(0.0048,0.0134)  0.0101(0.0043,0.0135)  0.0095(0.0042,0.0219)
Figure 4: Density plots of the one-sample predicted order statistics with their respective 95% prediction intervals.
Figure 3 shows the iteration and density plots of the samples generated from the posterior distribution using the MCMC technique. From this figure, we see that all three chains have converged and are well mixed. It is further noted that the posterior of α is approximately symmetric, while the posterior of β is left-skewed. Utilizing these MCMC samples, we computed the Bayes estimates following the method discussed in Section 3.2. The ML, MPS, Bayes and bootstrap estimates of α are denoted by α̂_ML, α̂_MP, α̂_B and α̂_Boot, respectively; similarly, those of β are denoted by β̂_ML, β̂_MP, β̂_B and β̂_Boot. The point and HPD interval estimates thus obtained are summarized in Table 2. In Table 3, we provide the ML, MPS, Bayes and bootstrap estimates of the stress-strength reliability for various combinations of the censored real datasets.
One- and two-sample predictive densities, along with prediction intervals for the future observations, are presented in Figures 4 and 5, respectively. From Figure 4, it is observed that the proposed predictive intervals for the ordered observations contain the observed sample values, which supports the applicability of the prediction techniques to real problems.
Figure 5: Density plots of the two-sample predicted order statistics with their respective 95% prediction intervals. The intervals recovered from the first panel (two-sample future prediction for Dataset-2.3) are Y[2]: (0.0885, 31.8225), Y[4]: (0.5265, 50.5970), Y[6]: (0.8087, 64.3023), Y[8]: (4.6441, 77.0188), Y[10]: (11.8215, 95.9900), Y[20]: (27.0634, 159.4856); those from the second panel are Y[2]: (0.1328, 40.0638), Y[4]: (1.0142, 61.2504), Y[6]: (4.6168, 77.3147), Y[8]: (11.9088, 96.7540), Y[10]: (15.8471, 110.7657), Y[20]: (46.9672, 197.8473).
Table 3: ML, MPS, Bayes and bootstrap estimates of the reliability R = P[X < Y], with their respective 95% CIs (within brackets), for the generated censored datasets

x-data   y-data   MLE   MPS   Bayes   Bootstrap
1.1 2.1 0.3878(0.1787,0.5288) 0.3654(0.1924,0.4988) 0.4060(0.1824,0.5277) 0.3813(0.1705,0.5355)
2.2 0.3526(0.1606,0.5766) 0.3301(0.1708,0.5361) 0.3328(0.1736,0.5652) 0.3377(0.1530,0.5715)
2.3 0.4149(0.2216,0.5831) 0.4046(0.2226,0.5489) 0.4084(0.2640,0.5470) 0.4071(0.2203,0.5587)
2.4 0.3765(0.2414,0.5330) 0.3479(0.2480,0.5074) 0.3707(0.2725,0.5292) 0.3613(0.2388,0.5508)
1.2 2.1 0.4814(0.2659,0.6414) 0.4356(0.2925,0.6119) 0.4864(0.2869,0.5833) 0.5050(0.2615,0.6333)
2.2 0.4389(0.2341,0.6136) 0.3978(0.2566,0.6091) 0.4169(0.2526,0.5616) 0.4512(0.2279,0.6179)
2.3 0.5116(0.3238,0.6443) 0.4713(0.3539,0.6079) 0.4821(0.3366,0.6337) 0.5153(0.3193,0.6693)
2.4 0.4683(0.3082,0.6499) 0.4289(0.3259,0.6055) 0.4426(0.3256,0.5865) 0.4846(0.3013,0.6371)
1.3 2.1 0.4359(0.2384,0.5971) 0.4134(0.2535,0.5661) 0.4201(0.2575,0.5614) 0.4424(0.2502,0.5871)
2.2 0.3963(0.1921,0.6534) 0.3776(0.2043,0.6376) 0.4115(0.2257,0.6047) 0.3915(0.1931,0.6517)
2.3 0.4646(0.2676,0.6356) 0.4570(0.2764,0.5951) 0.5066(0.3044,0.6284) 0.4526(0.2723,0.6223)
2.4 0.4234(0.2299,0.6457) 0.3895(0.2320,0.5831) 0.4316(0.2562,0.6262) 0.4115(0.2274,0.6610)
1.4 2.1 0.4913(0.2661,0.6446) 0.4572(0.2873,0.5917) 0.4515(0.2743,0.6207) 0.4774(0.2763,0.6586)
2.2 0.4477(0.2258,0.6491) 0.4205(0.2358,0.6083) 0.4763(0.2534,0.6061) 0.4638(0.2280,0.6809)
2.3 0.5221(0.3798,0.6829) 0.4964(0.3917,0.6572) 0.4962(0.4234,0.6533) 0.4970(0.3979,0.6771)
2.4 0.4778(0.2264,0.7030) 0.4654(0.2479,0.6595) 0.4398(0.2321,0.7011) 0.4642(0.2253,0.7258)
6. Simulation Study
A simulation study is conducted here to assess the performance of the estimators of the parameters α and β, in terms of their MSEs, under the considered censoring scheme. It is to be mentioned here that exact expressions for the MSEs cannot be obtained because the estimators are not available in explicit form. Therefore, the MSEs of the estimators are estimated on the basis of a simulation study with 5,000 samples. It may be noted that the MSEs of the estimators depend on the values of n, k, m, T, α and β, and hence various choices have been made to study their effects. To generate a GPH censored sample from the considered distribution, see [24]; a sketch follows.
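For reference, a sketch of one standard way to generate such a sample: draw a progressively type-II censored uniform sample via the Balakrishnan-Aggarwala transformation, map it through the EED quantile function qeed() of Section 1, and truncate according to the three GPH cases (the name gen_gph is ours):

```r
gen_gph <- function(n, k, m, T0, R, a, b) {
  stopifnot(sum(R) + m == n, k <= m)
  W <- runif(m)
  V <- W^(1 / (seq_len(m) + cumsum(rev(R))))  # V_i = W_i^{1/(i + R_m + ... + R_{m-i+1})}
  X <- qeed(1 - cumprod(rev(V)), a, b)        # progressive type-II EED sample
  if (X[k] > T0)       list(case = "I",   x = X[1:k])             # T < X_(k)
  else if (X[m] > T0)  list(case = "II",  x = X[1:sum(X <= T0)])  # X_(k) <= T < X_(m)
  else                 list(case = "III", x = X)                  # X_(m) <= T
}
```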
Here, we considered a number of values for the sample size n. For an informative prior, the hyper-parameters are chosen on the basis of the information possessed by the experimenter. In most cases, the experimenter has a notion of the expected value of a parameter and can associate a degree of belief with this value. In other words, the experimenter can specify the prior mean and prior variance for the parameters. The prior mean reflects the experimenter's belief about the parameter in the form of its expected value, and the prior variance reflects the confidence in this expected value. Keeping this point in mind, we have chosen the hyper-parameters in such a way that the prior mean equals the true value of the parameter, and the belief in the prior mean is either strong or weak, i.e. the prior variance is small or large, respectively; for details see [25]. The average ML, MPS, Bayes and bootstrap estimates of the parameters with corresponding MSEs are given in Tables 4 and 5. The bias and MSEs of the ML, MPS, Bayes and bootstrap estimates of the reliability function are given in Table 6.
Table 4: Average estimate and MSE (within bracket) of different estimators of the parameters for varying n and m

n   m   Scheme  α̂_ML            α̂_MP            α̂_B             β̂_ML            β̂_MP            β̂_B
90  80  S(1)    2.0683(0.1029)  2.0159(0.1025)  2.0686(0.0930)  2.0430(0.0583)  1.9429(0.0577)  2.0516(0.0522)
90  80  S(2)    2.0659(0.1029)  1.9642(0.1026)  2.0753(0.0928)  2.0514(0.0581)  1.9653(0.0575)  2.0457(0.0521)
90  80  S(3)    2.0615(0.1013)  2.0304(0.1008)  2.0572(0.0911)  2.0229(0.0562)  1.9410(0.0562)  2.0213(0.0506)
90  80  S(4)    2.0572(0.1147)  2.0421(0.1132)  2.0529(0.1027)  2.0499(0.0586)  1.9735(0.0582)  2.0524(0.0527)
90  60  S(1)    2.0716(0.1157)  2.0028(0.1146)  2.0665(0.1041)  2.0496(0.0599)  2.0306(0.0591)  2.0474(0.0536)
90  60  S(2)    2.0533(0.1136)  2.0070(0.1139)  2.0597(0.1025)  2.0423(0.0578)  1.9675(0.0580)  2.0427(0.0522)
90  60  S(3)    2.0493(0.1116)  1.9896(0.1120)  2.0566(0.1007)  2.0215(0.0579)  1.9366(0.0577)  2.0391(0.0520)
90  60  S(4)    2.0437(0.1205)  1.9697(0.1197)  2.0608(0.1081)  2.0347(0.0620)  1.9715(0.0616)  2.0378(0.0557)
90  30  S(1)    2.0427(0.1213)  2.0220(0.1212)  2.0542(0.1094)  2.0419(0.0633)  2.0297(0.0633)  2.0290(0.0571)
90  30  S(2)    2.0526(0.1190)  2.0442(0.1184)  2.0556(0.1066)  2.0381(0.0620)  1.9808(0.0615)  2.0414(0.0556)
90  30  S(3)    2.0597(0.1253)  1.9608(0.1238)  2.0517(0.1124)  2.0275(0.0592)  1.9911(0.0592)  2.0227(0.0534)
90  30  S(4)    2.0652(0.1288)  2.0557(0.1279)  2.0645(0.1159)  2.0332(0.0621)  1.9379(0.0619)  2.0346(0.0561)
70  60  S(1)    2.0788(0.1344)  2.0575(0.1334)  2.0653(0.1205)  2.0415(0.0862)  1.9671(0.0861)  2.0381(0.0779)
70  60  S(2)    2.0941(0.1319)  2.0183(0.1311)  2.0906(0.1187)  2.0525(0.0852)  1.9892(0.0846)  2.0499(0.0766)
70  60  S(3)    2.0459(0.1311)  1.9820(0.1299)  2.0457(0.1176)  2.0458(0.0855)  1.9620(0.0848)  2.0502(0.0768)
70  60  S(4)    2.0779(0.1334)  2.0621(0.1332)  2.0878(0.1202)  2.0640(0.0856)  1.9870(0.0852)  2.0706(0.0773)
70  36  S(1)    2.0863(0.1379)  2.0374(0.1373)  2.0739(0.1238)  2.0470(0.0885)  1.9673(0.0886)  2.0562(0.0798)
70  36  S(2)    2.0689(0.1368)  2.0609(0.1361)  2.0622(0.1233)  2.0355(0.0874)  1.9803(0.0870)  2.0365(0.0790)
70  36  S(3)    2.0585(0.1399)  1.9804(0.1399)  2.0544(0.1260)  2.0425(0.0868)  1.9478(0.0870)  2.0445(0.0783)
70  36  S(4)    2.0668(0.1344)  2.0540(0.1332)  2.0738(0.1205)  2.0424(0.0864)  2.0032(0.0863)  2.0434(0.0777)
70  26  S(1)    2.0771(0.1433)  1.9734(0.1414)  2.0694(0.1284)  2.0490(0.0895)  1.9615(0.0889)  2.0585(0.0806)
70  26  S(2)    2.0553(0.1459)  2.0013(0.1454)  2.0697(0.1308)  2.0335(0.0889)  1.9611(0.0885)  2.0396(0.0803)
70  26  S(3)    2.0675(0.1478)  2.0608(0.1462)  2.0672(0.1324)  2.0371(0.0880)  1.9804(0.0875)  2.0302(0.0792)
70  26  S(4)    2.0822(0.1544)  2.0206(0.1539)  2.0721(0.1391)  2.0393(0.0901)  1.9362(0.0892)  2.0475(0.0809)
60  50  S(1)    2.1001(0.1649)  2.0472(0.1631)  2.0961(0.1479)  2.0675(0.0938)  2.0571(0.0941)  2.0819(0.0846)
60  50  S(2)    2.0895(0.1657)  2.0075(0.1641)  2.0825(0.1488)  2.0349(0.0946)  1.9453(0.0941)  2.0524(0.0847)
60  50  S(3)    2.1037(0.1668)  2.0851(0.1670)  2.1047(0.1505)  2.0471(0.0941)  1.9920(0.0936)  2.0618(0.0845)
60  50  S(4)    2.1348(0.1621)  2.0864(0.1602)  2.1254(0.1452)  2.0286(0.0944)  1.9679(0.0939)  2.0310(0.0846)
60  30  S(1)    2.1312(0.1714)  2.1120(0.1699)  2.1313(0.1542)  2.0852(0.0970)  2.0561(0.0963)  2.0740(0.0868)
60  30  S(2)    2.1003(0.1721)  2.0910(0.1718)  2.0911(0.1550)  2.0643(0.1087)  1.9843(0.1084)  2.0733(0.0976)
60  30  S(3)    2.0848(0.1877)  1.9918(0.1862)  2.0852(0.1691)  2.0417(0.0969)  2.0095(0.0963)  2.0417(0.0869)
60  30  S(4)    2.0903(0.1707)  2.0684(0.1693)  2.0889(0.1531)  2.0698(0.0964)  2.0126(0.0965)  2.0588(0.0868)
60  20  S(1)    2.0772(0.1917)  1.9855(0.1908)  2.0894(0.1728)  2.0512(0.1004)  1.9888(0.1001)  2.0654(0.0907)
60  20  S(2)    2.0738(0.1928)  2.0258(0.1910)  2.0671(0.1735)  2.0656(0.1136)  2.0601(0.1128)  2.0578(0.1018)
60  20  S(3)    2.0822(0.1991)  2.0537(0.1981)  2.0764(0.1792)  2.0573(0.1026)  1.9807(0.1023)  2.0581(0.0927)
60  20  S(4)    2.0578(0.1989)  1.9652(0.1975)  2.0492(0.1783)  2.0851(0.1135)  2.0743(0.1128)  2.0853(0.1019)
In Table 4, we have computed the average ML, MPS and Bayes estimates of the considered parameters and their corresponding MSEs for different choices of n, m and censoring schemes, with fixed parameter values α = 0.5 and β = 0.5 and T = 10. Here, three choices of n are considered, i.e. n = 60 (small), 70 (moderate) and 90 (large). The value of k is set to 50% of n. From this table, we can note that, in general, the MSEs decrease as n or m increases in all the considered cases. It can also be seen that the MSE of the MLE is larger than that of the corresponding MPS and Bayes estimates in all cases, but the difference between the MSEs of the Bayes and ML estimates decreases as n increases. Further, the MSE of the Bayes estimate is the smallest among all the considered estimators. For a small number of removals, i.e. for large m, the MSEs of both parameter estimates are smaller under the removal pattern S_{m:n}(2) than under S_{m:n}(1), and the MSEs under S_{m:n}(3) are observed to be smaller than those for S_{m:n}(4). For a large number of removals, the MSEs of both parameter estimates under the removal pattern S_{m:n}(1) are smaller than those under S_{m:n}(2), and the MSEs under S_{m:n}(4) are observed to be smaller than those for S_{m:n}(3); i.e. the trend shows a reversal from the small-number-of-removals case.
Table 5 shows the performance of the estimates (ML, MPS and Bayes) in terms of MSEs for varying parameters α and β under the GPH censoring scheme for various choices of n and fixed
Table 5: Average estimate and MSE (within bracket) of different estimators of the parameters for varying α and β

Setting 1: n = 30, k = 15, m = 22, T = 10, R = (0*10, 4*2, 0*10)
α    β    α̂_ML            α̂_MP            α̂_B             β̂_ML            β̂_MP            β̂_B
0.5  0.5  0.5403(0.0107)  0.5348(0.0103)  0.5377(0.0095)  0.5576(0.0278)  0.5511(0.0261)  0.5542(0.0256)
0.5  1    0.5421(0.0108)  0.5365(0.0098)  0.5370(0.0100)  1.1394(0.1274)  1.1275(0.1232)  1.1365(0.1209)
0.5  2    0.5391(0.0110)  0.5336(0.0103)  0.5363(0.0099)  2.2996(0.5353)  2.2757(0.5192)  2.3057(0.5080)
1    0.5  1.0861(0.0558)  1.0748(0.0532)  1.0935(0.0526)  0.5476(0.0180)  0.5416(0.0171)  0.5493(0.0171)
1    1    1.0786(0.0572)  1.0673(0.0547)  1.0867(0.0540)  1.0979(0.0736)  1.0859(0.0705)  1.1050(0.0691)
1    2    1.1054(0.0637)  1.0940(0.0618)  1.1054(0.0597)  2.1994(0.3117)  2.1769(0.3018)  2.1887(0.2957)
2    0.5  2.2099(0.2964)  2.1874(0.2868)  2.2053(0.2812)  0.5402(0.0125)  0.5339(0.0119)  0.5384(0.0116)
2    1    2.1867(0.3161)  2.1644(0.3061)  2.1925(0.2993)  1.0775(0.0509)  1.0666(0.0489)  1.0775(0.0474)
2    2    2.2146(0.3599)  2.1921(0.3485)  2.2202(0.3413)  2.1547(0.2041)  2.1330(0.1976)  2.1725(0.1934)

Setting 2: n = 50, k = 25, m = 36, T = 10, R = (0*17, 7*2, 0*17)
0.5  0.5  0.5151(0.0058)  0.5092(0.0048)  0.5151(0.0049)  0.5404(0.0214)  0.5344(0.0203)  0.5416(0.0196)
0.5  1    0.5247(0.0067)  0.5189(0.0060)  0.5251(0.0061)  1.0995(0.0801)  1.0878(0.0774)  1.1063(0.0754)
0.5  2    0.5267(0.0068)  0.5213(0.0061)  0.5243(0.0055)  2.1800(0.3354)  2.1573(0.3245)  2.1724(0.3184)
1    0.5  1.0615(0.0358)  1.0504(0.0346)  1.0690(0.0337)  0.5350(0.0116)  0.5292(0.0110)  0.5326(0.0101)
1    1    1.0605(0.0362)  1.0493(0.0342)  1.0659(0.0337)  1.0738(0.0474)  1.0629(0.0455)  1.0715(0.0450)
1    2    1.0651(0.0379)  1.0539(0.0362)  1.0750(0.0357)  2.1568(0.1843)  2.1344(0.178)   2.1626(0.1743)
2    0.5  2.1991(0.1615)  2.1763(0.1560)  2.1869(0.1529)  0.5335(0.0082)  0.5280(0.0076)  0.5341(0.0068)
2    1    2.1376(0.1696)  2.1159(0.1641)  2.1168(0.1608)  1.0350(0.0269)  1.0240(0.0259)  1.0341(0.0251)
2    2    2.1527(0.1754)  2.1310(0.1697)  2.1387(0.1663)  2.1274(0.1095)  2.1058(0.1061)  2.1468(0.1033)

Setting 3: n = 100, k = 50, m = 72, T = 10, R = (0*35, 14*2, 0*35)
0.5  0.5  0.5094(0.0027)  0.5040(0.0025)  0.5081(0.0020)  0.5183(0.0081)  0.5129(0.0077)  0.5176(0.0069)
0.5  1    0.5058(0.0028)  0.4998(0.0025)  0.508(0.0021)   1.0371(0.0337)  1.0263(0.0319)  1.0391(0.0320)
0.5  2    0.5134(0.0028)  0.5075(0.0026)  0.5107(0.0019)  2.0918(0.1205)  2.0706(0.1164)  2.0924(0.1139)
1    0.5  1.0293(0.0145)  1.0186(0.0137)  1.0383(0.0131)  0.5131(0.0045)  0.5075(0.0042)  0.5126(0.0038)
1    1    1.0324(0.0151)  1.0215(0.0141)  1.0365(0.0141)  1.0438(0.0203)  1.0332(0.0197)  1.0369(0.0191)
1    2    1.0242(0.0159)  1.0131(0.0153)  1.0159(0.0150)  2.0526(0.0775)  2.0316(0.0746)  2.0599(0.0735)
2    0.5  2.0463(0.0779)  2.0255(0.0750)  2.0663(0.0738)  0.5101(0.0032)  0.5041(0.0027)  0.5111(0.0023)
2    1    2.0972(0.0798)  2.0755(0.0771)  2.0777(0.0752)  1.0324(0.0143)  1.0221(0.0137)  1.0413(0.0136)
2    2    2.0750(0.0808)  2.0542(0.0778)  2.0663(0.0767)  2.0466(0.0525)  2.0256(0.0507)  2.0666(0.0499)
Table 6: Bias and MSE (within bracket) of different estimators of R = P[X < Y] when α = 0.5, β = 0.5, T = 10; k_1 and k_2 are half of the sample sizes n_1 and n_2, and m_1 and m_2 are 80% of the sample sizes n_1 and n_2. All entries of the table are multiplied by 10^3.

Scheme  n_1  n_2  MLE             MPS             Bootstrap       Bayes (large var.)  Bayes (non-inf.)  Bayes (small var.)
S(2)    30   30   6.9337(4.8475)  6.2974(4.2886)  6.8986(4.7014)  5.8717(4.0551)      5.9223(4.1511)    5.6398(3.0416)
S(2)    30   50   6.0457(4.2175)  5.4149(3.8425)  6.0143(4.3102)  5.2663(3.7421)      5.2601(3.7564)    5.0293(2.6965)
S(2)    30   100  5.5791(4.0310)  4.9611(3.5106)  5.6192(3.9752)  4.9102(3.3407)      5.0089(3.3857)    4.6493(2.4892)
S(2)    50   50   5.7328(4.1779)  5.1995(3.6288)  5.9919(4.1521)  5.0901(3.4534)      4.9623(3.6513)    4.7896(2.5583)
S(2)    50   100  5.4295(3.6835)  4.8097(3.3036)  5.2663(3.6921)  4.6123(3.1861)      4.5537(3.2653)    4.4485(2.3702)
S(2)    100  100  4.8737(3.3677)  4.3531(3.0344)  4.7554(3.4227)  4.1374(2.8558)      4.2813(2.8962)    4.0495(2.1783)
S(3)    30   30   6.9518(4.8499)  6.0947(4.2810)  6.7646(4.8703)  5.7705(4.1187)      5.9781(4.0731)    5.6565(2.9414)
S(3)    30   50   6.2685(4.3231)  5.6613(3.9404)  6.1842(4.2264)  5.2618(3.5897)      5.2718(3.6786)    4.9798(2.6695)
S(3)    30   100  5.6802(3.9142)  5.0137(3.6531)  5.5496(3.9760)  4.8688(3.2799)      4.8871(3.3660)    4.5926(2.5324)
S(3)    50   50   5.7935(4.1735)  5.2874(3.7631)  5.7932(4.1630)  4.9381(3.5414)      5.1437(3.5796)    4.8483(2.6166)
S(3)    50   100  5.4147(3.6682)  4.9194(3.3341)  5.3210(3.7371)  4.5276(3.2698)      4.6319(3.2168)    4.3927(2.3942)
S(3)    100  100  4.9886(3.4919)  4.4981(3.0521)  4.9136(3.4366)  4.2458(2.8857)      4.2665(2.9377)    3.9843(2.1317)
S(1)    30   30   8.3766(5.8594)  7.3233(5.2685)  8.3944(5.8504)  6.9325(4.8481)      7.0932(5.0225)    6.5129(3.6627)
S(1)    30   50   7.3513(5.2099)  6.7839(4.6143)  7.3860(5.0897)  6.2203(4.2930)      6.4552(4.3801)    5.9409(3.2971)
S(1)    30   100  6.7621(4.8604)  6.0695(4.3071)  6.7758(4.8412)  5.7347(3.9563)      5.8965(4.0652)    5.4157(2.9415)
S(1)    50   50   6.9321(5.0217)  6.4680(4.5044)  7.0883(4.9558)  6.0779(4.1896)      6.0609(4.2478)    5.6684(3.1288)
S(1)    50   100  6.2776(4.5562)  5.9264(4.1362)  6.5143(4.5732)  5.4045(3.8858)      5.6245(3.8524)    5.1871(2.8698)
S(1)    100  100  5.8052(4.1824)  5.1675(3.6788)  5.8722(4.0758)  4.9535(3.5410)      5.0076(3.5632)    4.6606(2.5188)
S(4)    30   30   7.6202(5.3718)  6.8561(4.8040)  7.3301(5.3458)  6.5334(4.4716)      6.5969(4.5147)    6.1735(3.3876)
S(4)    30   50   6.8518(4.7686)  6.0057(4.2925)  6.8758(4.7493)  5.6725(4.0401)      5.8644(4.1988)    5.5385(2.9678)
S(4)    30   100  6.1996(4.2686)  5.6710(3.8474)  6.2351(4.2745)  5.3898(3.7433)      5.4337(3.7927)    5.1184(2.7952)
S(4)    50   50   6.3107(4.4244)  5.6776(4.0887)  6.4848(4.4322)  5.3416(3.8225)      5.6016(3.8429)    5.3026(2.8149)
S(4)    50   100  6.0107(4.1180)  5.2660(3.6801)  5.8027(4.0877)  5.1367(3.5053)      5.1041(3.5395)    4.7381(2.5381)
S(4)    100  100  5.3390(3.6663)  4.8599(3.3559)  5.4634(3.7267)  4.6548(3.1825)      4.5542(3.1891)    4.3599(2.4226)
T = 10. The values of k and m are set to 50% and 72% of n, respectively. The removal pattern is taken as S_{m:n}(4), i.e. R_{m/2} = R_{m/2+1} = (n - m)/2. From this table, we can conclude that for fixed α, as β increases, the MSE of the estimate of β increases. Similarly, for fixed β, as α increases, the MSE of the estimate of α increases. For fixed α and β, as n increases (with fixed proportions for k and m, and fixed T), the MSEs of both parameter estimates decrease.
In Table 6, we present the bias and MSE of the different estimators (ML, MPS, Bayes and bootstrap) of R = P[X < Y] when α = 0.5, β = 0.5, T = 10, k_1 and k_2 are half of the sample sizes n_1 and n_2, and m_1 and m_2 are set to 80% of the sample sizes n_1 and n_2, respectively. From this table, it can easily be seen that as n_1 or n_2 increases, the bias and MSE decrease for all the estimators and all the considered censoring schemes. Further, the MSE of the Bayes estimate is the smallest among all the considered estimators.
7. Conclusion
The article considers the problem of estimation and prediction for the exponentiated exponential distribution from a generalised progressive hybrid censored sample. It is clear from the above discussion that the proposed estimation procedures under the GPH censoring scheme can easily be implemented for specific choices of T and m. The MPS procedure provides more precise estimates than those obtained from the maximum likelihood and bootstrap procedures. The Bayesian procedure delivers more accurate and precise estimates of the parameters even when a vague prior is used. The HPD intervals for the parameters have also been obtained, and it is verified that the width of the HPD interval is smaller than that of the asymptotic and bootstrap confidence intervals. Therefore, we may conclude that the use of the HPD interval in the considered situation can safely be recommended. Moreover, Bayesian prediction of unknown future observations has wide applicability in different areas of applied statistics, and the Bayesian approach using the MCMC method can be effectively used to solve prediction problems. Finally, we conclude that the discussed methodology can be used extensively in the various scientific disciplines where such life-tests are needed.
References
[1] Kundu, D. and Joarder, A. (2006). Analysis of Type-II progressively hybrid censored data. Computational Statistics & Data Analysis, 50(10):2509-2528.
[2] Childs, A., Chandrasekar, B., and Balakrishnan, N. (2008). Exact likelihood inference for an exponential parameter under progressive hybrid censoring schemes. In Statistical Models and Methods for Biomedical and Technical Systems, pages 319-330. Springer.
[3] Cho, Y., Sun, H., and Lee, K. (2015). Exact likelihood inference for an exponential parameter under generalized progressive hybrid censoring scheme. Statistical Methodology, 23:18-34.
[4] Gupta, R. D. and Kundu, D. (1999). Theory & methods: Generalized exponential distributions. Australian & New Zealand Journal of Statistics, 41(2):173-188.
[5] Gupta, R. D. and Kundu, D. (2001a). Exponentiated exponential family: an alternative to gamma and Weibull distributions. Biometrical Journal, 43(1):117-130.
[6] Gupta, R. D. and Kundu, D. (2001b). Generalized exponential distribution: different method of estimations. Journal of Statistical Computation and Simulation, 69(4):315-337.
[7] Singh, S. K. (2011). Estimation of parameters and reliability function of exponentiated exponential distribution: Bayesian approach under general entropy loss function. Pakistan Journal of Statistics and Operation Research, 7(2).
[8] Al-Hussaini, E. K. (1999). Predicting observables from a general class of distributions. Journal of Statistical Planning and Inference, 79(1):79-91.
[9] Al-Hussaini, E. K. (2001). On Bayes prediction of future median. Communications in Statistics - Theory and Methods, 30(7):1395-1410.
[10] Pradhan, B. and Kundu, D. (2011). Bayes estimation and prediction of the two-parameter gamma distribution. Journal of Statistical Computation and Simulation, 81(9):1187-1198.
[11] Kundu, D. and Raqab, M. Z. (2012). Bayesian inference and prediction of order statistics for a Type-II censored Weibull distribution. Journal of Statistical Planning and Inference, 142(1):41-47.
[12] Kundu, D. and Howlader, H. (2010). Bayesian inference and prediction of the inverse Weibull distribution for Type-II censored data. Computational Statistics & Data Analysis, 54(6):1547-1558.
[13] Efron, B. (1982). The Jackknife, the Bootstrap, and Other Resampling Plans, volume 38. SIAM.
[14] Hall, P. (1988). Theoretical comparison of bootstrap confidence intervals. Annals of Statistics, 16:927-953.
[15] Efron, B. and Tibshirani, R. J. (1994). An Introduction to the Bootstrap. CRC Press.
[16] Chen, M.-H. and Shao, Q.-M. (1999). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics, 8(1):69-92.
[17] David, H. A. and Nagaraja, H. (2003). Order Statistics. Wiley Series in Probability and Statistics.
[18] Church, J. D. and Harris, B. (1970). The estimation of reliability from stress-strength relationships. Technometrics, 12(1):49-54.
[19] Sharma, V. K., Singh, S. K., Singh, U., and Agiwal, V. (2015). The inverse Lindley distribution: a stress-strength reliability model with application to head and neck cancer data. Journal of Industrial and Production Engineering, 32(3):162-173.
[20] Al-Mutairi, D. K., Ghitany, M. E., and Kundu, D. (2013). Inferences on stress-strength reliability from Lindley distributions. Communications in Statistics - Theory and Methods, 42(8):1443-1463.
[21] Efron, B. (1988). Logistic regression, survival analysis, and the Kaplan-Meier curve. Journal of the American Statistical Association, 83(402):414-425.
[22] Kaushik, A., Pandey, A., Singh, U., and Singh, S. K. (2017). Bayesian estimation of the parameters of exponentiated exponential distribution under progressive interval type-I censoring scheme with binomial removals. Austrian Journal of Statistics, 46(2):43-47.
[23] Robert, C. P. (2015). The Metropolis-Hastings algorithm. arXiv:1504.01896 [stat.CO].
[24] Pandey, A., Kaushik, A., Singh, S. K., and Singh, U. (2021). On the estimation problems for exponentiated exponential distribution under generalized progressive hybrid censoring. Austrian Journal of Statistics, 50(1):24-40.
[25] Singh, S. K., Singh, U., and Kumar, D. (2011). Estimation of parameters and reliability function of exponentiated exponential distribution: Bayesian approach under general entropy loss function. Pakistan Journal of Statistics and Operation Research, VII(2):199-216.