Censoring and reliability inferences for power Lindley distribution with application on hematologic malignancies data
Abbas Pak1, Mohamed E. Ghitany2
1Department of Computer Sciences, Shahrekord University, Shahrekord, Iran
2Department of Statistics and Operations Research, Faculty of Science, Kuwait University, Kuwait
[email protected], [email protected]
Abstract
In this paper, using progressively type II censored samples, we discuss estimation of the parameters of a power Lindley model. Maximum likelihood estimates (MLEs) and approximate confidence intervals of the unknown parameters are obtained. Then, under the squared error loss function, the Bayes estimates of the parameters are derived. Because the Bayes estimates do not have closed forms, we use Tierney and Kadane's technique to calculate approximate Bayes estimates. Further, the results are extended to the stress-strength reliability parameter involving two power Lindley distributions. The ML estimate of the stress-strength parameter and its approximate confidence interval are obtained. Then, the Bayes estimate and highest posterior density credible interval of this parameter are obtained by using a Markov Chain Monte Carlo method. To evaluate the performances of the maximum likelihood and Bayes estimators, simulation studies are conducted, and two examples of real data sets are provided to illustrate the procedures.
Keywords: Power Lindley model, progressive type II censoring, Bayesian approach, Maximum likelihood method, Stress-strength reliability
1. Introduction
A random variable (r.v.) X follows the power Lindley model with parameters γ and δ, denoted by PL(γ, δ), if its probability density function (p.d.f.) and survival function are given by

f(x; \gamma, \delta) = \frac{\gamma \delta^2}{\delta + 1} (1 + x^{\gamma}) x^{\gamma - 1} e^{-\delta x^{\gamma}}, \quad x > 0, \ \gamma, \delta > 0,   (1)

and

S(x; \gamma, \delta) = \left(1 + \frac{\delta}{\delta + 1} x^{\gamma}\right) e^{-\delta x^{\gamma}}, \quad x > 0, \ \gamma, \delta > 0,   (2)
respectively. This model was introduced by Ghitany et al. [10] as a new distribution useful for analyzing lifetime data. They studied the statistical properties and maximum likelihood estimation (MLE) of the power Lindley model on the basis of complete random samples. However, in many life testing and reliability studies, the experiment may be terminated before the failure of all items. Hence, the available observations are called censored samples. By censoring, the test time can be reduced and some experimental units are kept for future use. In the conventional type I and type II censoring schemes, removing items at stages other than the terminal stage of the test is not allowed. Therefore, a more flexible scheme, called progressively type II censoring (PTII), is described in the literature as follows. Suppose that a sample of n items is placed on a life test. When the first item fails (at time x_(1)), U_1 items are withdrawn from the surviving n − 1 items. At the second failure (x_(2)), U_2 items of the n − 2 − U_1 surviving items are withdrawn. This procedure continues until the time of the d-th failure (x_(d)), at which the remaining U_d = n − d − (U_1 + U_2 + ... + U_{d−1}) surviving items are removed. Note that the censoring numbers U_i, i = 1, ..., d, are determined before the beginning of the study. When d = n and U_1 = U_2 = ... = U_d = 0, the complete sample of size n is observed. Also, if U_1 = U_2 = ... = U_{d−1} = 0 and U_d = n − d, the ordinary type II censored sample of size d is observed.
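To fix ideas, the density (1), the survival function (2) and the PTII censoring mechanism described above can be mimicked directly in R. The following sketch is ours (the function names dpl, spl, rpl and rpl_ptii are not from any package); it uses the gamma-mixture representation of the Lindley distribution to simulate the latent lifetimes of all n items and then withdraws U_i surviving items at the i-th observed failure.

```r
# power Lindley density (1) and survival function (2)
dpl <- function(x, gamma, delta)
  gamma * delta^2 / (delta + 1) * (1 + x^gamma) * x^(gamma - 1) * exp(-delta * x^gamma)
spl <- function(x, gamma, delta)
  (1 + delta / (delta + 1) * x^gamma) * exp(-delta * x^gamma)

# random generation: X = Z^(1/gamma), where Z is a mixture of gamma(1) and gamma(2) variables
rpl <- function(n, gamma, delta) {
  z <- ifelse(runif(n) < delta / (delta + 1),
              rgamma(n, shape = 1, rate = delta), rgamma(n, shape = 2, rate = delta))
  z^(1 / gamma)
}

# PTII censored sample: observe d failures, withdrawing U[i] survivors at the i-th failure
rpl_ptii <- function(n, d, U, gamma, delta) {
  stopifnot(length(U) == d, n == d + sum(U))
  alive <- sort(rpl(n, gamma, delta))
  obs <- numeric(d)
  for (i in 1:d) {
    obs[i] <- alive[1]; alive <- alive[-1]
    if (U[i] > 0) alive <- alive[-sample(length(alive), U[i])]
  }
  obs
}
```

With d = n and U = (0, ..., 0) this reduces to an ordinary complete sample.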
There is a large amount of literature about the estimation of lifetime model parameters using the PTII censoring scheme. Krishna and Kumar [16] studied estimation of reliability characteristics in the Lindley model. Bayesian analysis for the Rayleigh distribution under the PTII scheme is discussed by Lee et al. [19]. Pradhan and Kundu [21] addressed statistical inference for the generalized exponential model in the presence of PTII censored data. Balakrishnan [3] presented inferential approaches for different lifetime models based on the above PTII censoring scheme. Ghitany et al. [11] applied the ML procedure to derive the estimates of the Gompertz model parameters by using complete and PTII censored data. Kim and Han [13] provided different inference procedures for the Rayleigh distribution parameter by using a progressively censored sample.
The interest of this paper is to provide classical and Bayesian inferences for the parameters of the power Lindley distribution by using a PTII censored sample. We first describe the construction of the likelihood function based on a PTII censored sample from the power Lindley distribution. Then, the ML estimates of the parameters and their approximate confidence intervals (CI) are obtained. Considering the squared error loss function and using gamma priors for the parameters, an expression is provided for the Bayesian estimate of any function of the parameters. Since this expression cannot be simplified to a closed form, we employ Tierney and Kadane's procedure to obtain the approximate Bayes estimates.
Moreover, the above estimation techniques based on the PTII censoring scheme can be naturally extended to inferences about the stress-strength model. This model has attracted the attention of statisticians for many years due to its applicability in diverse areas such as medicine, engineering, and quality control, among others. In reliability studies with strength X and stress Y, the parameter R = P(X > Y) measures the reliability of a system ([15]). It is used in biometrical research for comparison of two quantities obtained from practical experiments. There is a large amount of literature about the estimation of R using different approaches and distributional assumptions on (X, Y). Estimation of R in models with correlated stress and strength is conducted in [4]. Hanagal [12] derived the maximum likelihood estimate of the stress-strength parameter R in a bivariate Pareto model. Inference for the stress-strength model in a generalized exponential setting is studied by Kundu and Gupta [18]. Pak et al. [20] used fuzzy set theory to derive inferences on the parameter R when the observations of the strength and stress are imprecise quantities. Statistical estimation of R for the exponential model is discussed by Krishnamoorthy et al. [17]. Inference on the reliability in multicomponent models when the stress and strength have Weibull distributions is considered by Kizilaslan and Nadar [14]. Eryilmaz [6] computed the reliability of coherent structures in multivariate stress-strength models.
Recently, Ghitany et al. [9] developed inference procedures for the stress-strength power Lindley model when the complete information about all experimental units is available. However, in practice, we may deal with censored data sets in which the failures of some items are not observed. For example, assume that the random variables X and Y describe the treatment effects of two new drugs and the quantity of interest is R = P(X > Y). In such situations, censored samples from both treatment groups are observed, rather than complete samples. Other examples include comparison of carbon fiber strengths at different gauge lengths and comparison of the concentration of sulphur dioxide at a given location in two different years. In this study, we obtain Bayesian and classical estimates of the reliability R by using PTII censored samples from the stress and strength populations. We first determine the ML estimate of the reliability parameter and its asymptotic confidence interval. Then, we use a Markov Chain Monte Carlo (MCMC) procedure to obtain the Bayes estimate and highest posterior density (HPD) credible interval of the parameter R.
The layout of this paper is as follows. Section 2 concerns inference procedures for the power Lindley model based on a PTII censored sample. In Section 3, statistical inferences for the reliability parameter R are discussed. To evaluate the performances of the proposed estimators, simulation studies are conducted in Section 4. In Section 5, a real data set from Ebrahimi [7] is analysed to demonstrate the application of the PTII censoring scheme. Then, to illustrate the estimation procedures for the stress-strength model, we present an example involving two real data sets. Finally, some comments and conclusions are made in Section 6.
2. Inference for progressively censored data
2.1. Maximum likelihood estimation
Assume that n independent components are put on a life testing experiment with lifetimes following the power Lindley model. Before the commencement of the experiment, the number of observed failures d < n is specified and the censoring scheme (U_1, ..., U_d), with U_i ≥ 0, is determined. Then, based on a PTII censored sample denoted by x = (x_(1), ..., x_(d)), the likelihood function of γ and δ can be expressed as
L(\gamma, \delta) = K \prod_{i=1}^{d} f(x_{(i)}; \gamma, \delta)\, [S(x_{(i)}; \gamma, \delta)]^{U_i}
 = K \frac{\gamma^{d} \delta^{2d}}{(\delta+1)^{d}}\, e^{-\delta \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i)} \prod_{i=1}^{d} (1 + x_{(i)}^{\gamma})\, x_{(i)}^{\gamma-1} \left(1 + \frac{\delta}{\delta+1} x_{(i)}^{\gamma}\right)^{U_i},   (3)
where K = n(n − U_1 − 1) \cdots (n − U_1 − \cdots − U_{d-1} − d + 1). Therefore, the corresponding log-likelihood function of the parameters becomes
\ell(\gamma, \delta) = \log K + d \log \gamma + 2d \log \delta - d \log(\delta+1) - \delta \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d} \big[\log(1 + x_{(i)}^{\gamma}) + (\gamma-1) \log x_{(i)}\big] + \sum_{i=1}^{d} U_i \log\left(1 + \frac{\delta}{\delta+1} x_{(i)}^{\gamma}\right).   (4)

The MLEs of the parameters γ and δ, say \hat{\gamma} and \hat{\delta}, are the solutions of the nonlinear equations
\frac{\partial \ell}{\partial \gamma} = \frac{d}{\gamma} + \sum_{i=1}^{d} \log x_{(i)} - \delta \sum_{i=1}^{d} x_{(i)}^{\gamma} \log x_{(i)} (1+U_i) + \sum_{i=1}^{d} \frac{x_{(i)}^{\gamma} \log x_{(i)}}{1 + x_{(i)}^{\gamma}} + \sum_{i=1}^{d} U_i \frac{\delta x_{(i)}^{\gamma} \log x_{(i)}}{\delta + 1 + \delta x_{(i)}^{\gamma}} = 0,   (5)

\frac{\partial \ell}{\partial \delta} = \frac{2d}{\delta} - \frac{d}{\delta+1} - \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d} U_i \frac{x_{(i)}^{\gamma}}{(\delta+1)^2 + \delta(\delta+1) x_{(i)}^{\gamma}} = 0.   (6)
Note that there are no explicit solutions for the above system of equations, so nonlinear numerical techniques are required to calculate the MLEs. In a similar problem, Valiollahi et al. [25] used the EM algorithm to obtain the ML estimates of the parameters. Here, in the real data applications and simulation studies described later on, we employ the nlm function in the R statistical software ([22]) to compute the MLEs.
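As an illustration of this step, a minimal R sketch could maximize (4) numerically with nlm; here x denotes the vector of observed PTII failure times, U the censoring scheme, and the parameters are optimized on the log scale to keep them positive (the additive constant log K is dropped).

```r
# negative log-likelihood (4) without the constant log K; par = (log gamma, log delta)
negloglik_pl <- function(par, x, U) {
  g <- exp(par[1]); del <- exp(par[2]); d <- length(x); xg <- x^g
  -(d * (log(g) + 2 * log(del) - log(del + 1)) -
      del * sum(xg * (1 + U)) +
      sum(log(1 + xg) + (g - 1) * log(x)) +
      sum(U * log(1 + del / (del + 1) * xg)))
}

fit <- nlm(negloglik_pl, p = c(0, 0), x = x, U = U)
mle <- exp(fit$estimate)            # (gamma_hat, delta_hat)
```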
Once the ML estimates of γ and δ are obtained, we can apply the asymptotic normality of the MLEs to compute approximate CIs for the parameters. The observed variance-covariance matrix for the MLEs of the parameters is

\begin{pmatrix} -\frac{\partial^2 \ell(\gamma,\delta)}{\partial \gamma^2} & -\frac{\partial^2 \ell(\gamma,\delta)}{\partial \gamma \partial \delta} \\ -\frac{\partial^2 \ell(\gamma,\delta)}{\partial \gamma \partial \delta} & -\frac{\partial^2 \ell(\gamma,\delta)}{\partial \delta^2} \end{pmatrix}^{-1}_{(\gamma=\hat{\gamma},\,\delta=\hat{\delta})} = \begin{pmatrix} \hat{\sigma}_{11}(\hat{\gamma},\hat{\delta}) & \hat{\sigma}_{12}(\hat{\gamma},\hat{\delta}) \\ \hat{\sigma}_{12}(\hat{\gamma},\hat{\delta}) & \hat{\sigma}_{22}(\hat{\gamma},\hat{\delta}) \end{pmatrix},   (7)
where

\frac{\partial^2 \ell}{\partial \gamma^2} = -\frac{d}{\gamma^2} - \delta \sum_{i=1}^{d} x_{(i)}^{\gamma} (\log x_{(i)})^2 (1+U_i) + \sum_{i=1}^{d} \left[ \frac{x_{(i)}^{\gamma} (\log x_{(i)})^2}{(1 + x_{(i)}^{\gamma})^2} + U_i \frac{\delta x_{(i)}^{\gamma} (\log x_{(i)})^2}{(\delta+1)\big(1 + \frac{\delta}{\delta+1} x_{(i)}^{\gamma}\big)^2} \right],   (8)

\frac{\partial^2 \ell}{\partial \gamma \partial \delta} = -\sum_{i=1}^{d} x_{(i)}^{\gamma} \log x_{(i)} (1+U_i) + \sum_{i=1}^{d} U_i \frac{x_{(i)}^{\gamma} \log x_{(i)}}{(\delta + 1 + \delta x_{(i)}^{\gamma})^2},   (9)

\frac{\partial^2 \ell}{\partial \delta^2} = -\frac{2d}{\delta^2} + \frac{d}{(\delta+1)^2} - \sum_{i=1}^{d} U_i x_{(i)}^{\gamma} \frac{2(\delta+1) + (2\delta+1) x_{(i)}^{\gamma}}{\big[(\delta+1)^2 + \delta(\delta+1) x_{(i)}^{\gamma}\big]^2}.   (10)
Thus, by using the delta method and the inverse logarithmic transformation (see [10]), the 100(1 − α)% CIs for the parameters γ and δ are derived, respectively, as

(e^{L_1}, e^{U_1}) \quad \text{and} \quad (e^{L_2}, e^{U_2}),   (11)

where

(L_1, U_1) = \log \hat{\gamma} \pm z_{\alpha/2} \frac{\sqrt{\hat{\sigma}_{11}(\hat{\gamma}, \hat{\delta})}}{\hat{\gamma}},   (12)

(L_2, U_2) = \log \hat{\delta} \pm z_{\alpha/2} \frac{\sqrt{\hat{\sigma}_{22}(\hat{\gamma}, \hat{\delta})}}{\hat{\delta}},   (13)

in which z_{\alpha/2} is the upper α/2 quantile of the standard normal distribution.
2.2. Bayesian analysis
In the Bayesian setting, the observer combines subjective opinion based on insight or experience with the available observations to obtain balanced estimates and to update them as more information and data become accessible. In this section we obtain the Bayes estimates of the unknown parameters assuming that γ and δ are independent r.v.s with gamma prior densities
\pi_1(\gamma; a_1, b_1) \propto \gamma^{a_1 - 1} e^{-\gamma b_1}, \quad \gamma > 0,
\pi_2(\delta; a_2, b_2) \propto \delta^{a_2 - 1} e^{-\delta b_2}, \quad \delta > 0,   (14)
where the hyperparameters a_i, b_i, i = 1, 2, are positive. By combining (3) with (14), the joint density function of (γ, δ) and the data x = (x_(1), ..., x_(d)) becomes
\pi_3(\gamma, \delta, x) \propto \frac{\gamma^{d + a_1 - 1} e^{-\gamma b_1}\, \delta^{2d + a_2 - 1}}{(\delta+1)^{d}}\, e^{-\delta \left(b_2 + \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i)\right)} \prod_{i=1}^{d} (1 + x_{(i)}^{\gamma})\, x_{(i)}^{\gamma - 1} \left(1 + \frac{\delta}{\delta+1} x_{(i)}^{\gamma}\right)^{U_i}.   (15)
Thus, we can write the posterior density function of γ and δ as

\pi^*(\gamma, \delta \mid x) = \frac{\pi_3(\gamma, \delta, x)}{\int_0^{\infty}\int_0^{\infty} \pi_3(\gamma, \delta, x)\, d\gamma\, d\delta}.   (16)
Now, assuming the squared error loss function, the Bayes estimate of a function h(γ, δ) of the parameters is obtained as

E(h(\gamma, \delta) \mid x) = \int_0^{\infty}\int_0^{\infty} h(\gamma, \delta)\, \pi^*(\gamma, \delta \mid x)\, d\gamma\, d\delta.   (17)
Since the posterior density function \pi^*(\gamma, \delta \mid x) has a complex form, deriving a closed form for the Bayes estimate of h(γ, δ) is difficult. Therefore, in the following, the approximate Bayes estimates are calculated using Tierney and Kadane's procedure. Setting

F(\gamma, \delta) = \frac{1}{n} \ln \pi_3(\gamma, \delta, x) \quad \text{and} \quad F^*(\gamma, \delta) = F(\gamma, \delta) + \frac{1}{n} \ln h(\gamma, \delta),
the expression in (17) can be rewritten as

E(h(\gamma, \delta) \mid x) = \frac{\int_0^{\infty}\int_0^{\infty} e^{n F^*(\gamma, \delta)}\, d\gamma\, d\delta}{\int_0^{\infty}\int_0^{\infty} e^{n F(\gamma, \delta)}\, d\gamma\, d\delta}.   (18)

Following Tierney and Kadane [24], equation (18) can be approximated as

\hat{h}_{BT}(\gamma, \delta) = \left[\frac{\det \Sigma^*}{\det \Sigma}\right]^{1/2} \exp\{ n [F^*(\gamma^*, \delta^*) - F(\bar{\gamma}, \bar{\delta})] \},   (19)

where (γ*, δ*) and (\bar{\gamma}, \bar{\delta}) maximize F*(γ, δ) and F(γ, δ), respectively, and Σ* and Σ are minus the inverse Hessians of F*(γ, δ) and F(γ, δ) at (γ*, δ*) and (\bar{\gamma}, \bar{\delta}), respectively. In our case,
F(\gamma, \delta) = \frac{1}{n} \Big\{ c + (d + a_1 - 1)\log\gamma - \gamma b_1 + (2d + a_2 - 1)\log\delta - \delta b_2 - d\log(\delta+1) - \delta \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d} \big[\log(1 + x_{(i)}^{\gamma}) + (\gamma - 1)\log x_{(i)}\big] + \sum_{i=1}^{d} U_i \log\Big(1 + \frac{\delta}{\delta+1} x_{(i)}^{\gamma}\Big) \Big\},   (20)
where c does not depend on γ and δ. Therefore, (\bar{\gamma}, \bar{\delta}) can be derived from the equations

\frac{\partial F(\gamma, \delta)}{\partial \gamma} = \frac{1}{n}\Big\{ \frac{d + a_1 - 1}{\gamma} - b_1 + \sum_{i=1}^{d}\log x_{(i)} - \delta\sum_{i=1}^{d} x_{(i)}^{\gamma}\log x_{(i)}(1+U_i) + \sum_{i=1}^{d}\frac{x_{(i)}^{\gamma}\log x_{(i)}}{1 + x_{(i)}^{\gamma}} + \sum_{i=1}^{d} U_i \frac{\delta x_{(i)}^{\gamma}\log x_{(i)}}{\delta + 1 + \delta x_{(i)}^{\gamma}} \Big\} = 0,

\frac{\partial F(\gamma, \delta)}{\partial \delta} = \frac{1}{n}\Big\{ \frac{2d + a_2 - 1}{\delta} - \frac{d}{\delta+1} - b_2 - \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d} U_i \frac{x_{(i)}^{\gamma}}{(\delta+1)^2 + \delta(\delta+1) x_{(i)}^{\gamma}} \Big\} = 0.
Then, by using the second order derivatives of F(γ, δ), the determinant of the negative of the inverse Hessian of F(γ, δ) at (\bar{\gamma}, \bar{\delta}) is given by \det \Sigma = (F_{11} F_{22} - F_{12}^2)^{-1}, where

F_{11} = \frac{1}{n}\Big\{ -\frac{d + a_1 - 1}{\bar{\gamma}^2} - \bar{\delta}\sum_{i=1}^{d} x_{(i)}^{\bar{\gamma}}(\log x_{(i)})^2(1+U_i) + \sum_{i=1}^{d}\Big[ \frac{x_{(i)}^{\bar{\gamma}}(\log x_{(i)})^2}{(1 + x_{(i)}^{\bar{\gamma}})^2} + U_i \frac{\bar{\delta} x_{(i)}^{\bar{\gamma}}(\log x_{(i)})^2}{(\bar{\delta}+1)\big(1 + \frac{\bar{\delta}}{\bar{\delta}+1} x_{(i)}^{\bar{\gamma}}\big)^2} \Big] \Big\},

F_{12} = \frac{1}{n}\Big\{ -\sum_{i=1}^{d} x_{(i)}^{\bar{\gamma}}\log x_{(i)}(1+U_i) + \sum_{i=1}^{d} U_i \frac{x_{(i)}^{\bar{\gamma}}\log x_{(i)}}{(\bar{\delta} + 1 + \bar{\delta} x_{(i)}^{\bar{\gamma}})^2} \Big\},

F_{22} = \frac{1}{n}\Big\{ -\frac{2d + a_2 - 1}{\bar{\delta}^2} + \frac{d}{(\bar{\delta}+1)^2} - \sum_{i=1}^{d} U_i x_{(i)}^{\bar{\gamma}} \frac{2(\bar{\delta}+1) + (2\bar{\delta}+1)x_{(i)}^{\bar{\gamma}}}{\big[(\bar{\delta}+1)^2 + \bar{\delta}(\bar{\delta}+1)x_{(i)}^{\bar{\gamma}}\big]^2} \Big\}.
Now, for computing the estimate of γ under the squared error loss function, let h(γ, δ) = γ. Thus, we have

F_1^*(\gamma, \delta) = \frac{1}{n}\Big\{ c + (d + a_1)\log\gamma - \gamma b_1 + (2d + a_2 - 1)\log\delta - \delta b_2 - d\log(\delta+1) - \delta\sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d}\big[\log(1 + x_{(i)}^{\gamma}) + (\gamma-1)\log x_{(i)}\big] + \sum_{i=1}^{d} U_i \log\Big(1 + \frac{\delta}{\delta+1} x_{(i)}^{\gamma}\Big) \Big\},   (21)
and (γ*, δ*) are computed from the following system of equations:

\frac{\partial F_1^*(\gamma, \delta)}{\partial \gamma} = \frac{1}{n}\Big\{ \frac{d + a_1}{\gamma} - b_1 + \sum_{i=1}^{d}\log x_{(i)} - \delta\sum_{i=1}^{d} x_{(i)}^{\gamma}\log x_{(i)}(1+U_i) + \sum_{i=1}^{d}\frac{x_{(i)}^{\gamma}\log x_{(i)}}{1 + x_{(i)}^{\gamma}} + \sum_{i=1}^{d} U_i\frac{\delta x_{(i)}^{\gamma}\log x_{(i)}}{\delta + 1 + \delta x_{(i)}^{\gamma}} \Big\} = 0,

\frac{\partial F_1^*(\gamma, \delta)}{\partial \delta} = \frac{1}{n}\Big\{ \frac{2d + a_2 - 1}{\delta} - \frac{d}{\delta+1} - b_2 - \sum_{i=1}^{d} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d} U_i\frac{x_{(i)}^{\gamma}}{(\delta+1)^2 + \delta(\delta+1)x_{(i)}^{\gamma}} \Big\} = 0.
Moreover, calculating the second order derivatives of F_1^*(γ, δ) at (γ*, δ*), we obtain

F_{11}^* = \frac{1}{n}\Big\{ -\frac{d + a_1}{(\gamma^*)^2} - \delta^*\sum_{i=1}^{d} x_{(i)}^{\gamma^*}(\log x_{(i)})^2(1+U_i) + \sum_{i=1}^{d}\Big[\frac{x_{(i)}^{\gamma^*}(\log x_{(i)})^2}{(1 + x_{(i)}^{\gamma^*})^2} + U_i\frac{\delta^* x_{(i)}^{\gamma^*}(\log x_{(i)})^2}{(\delta^*+1)\big(1 + \frac{\delta^*}{\delta^*+1}x_{(i)}^{\gamma^*}\big)^2}\Big] \Big\},

F_{12}^* = \frac{1}{n}\Big\{ -\sum_{i=1}^{d} x_{(i)}^{\gamma^*}\log x_{(i)}(1+U_i) + \sum_{i=1}^{d} U_i\frac{x_{(i)}^{\gamma^*}\log x_{(i)}}{(\delta^*+1+\delta^* x_{(i)}^{\gamma^*})^2} \Big\},

F_{22}^* = \frac{1}{n}\Big\{ -\frac{2d + a_2 - 1}{(\delta^*)^2} + \frac{d}{(\delta^*+1)^2} - \sum_{i=1}^{d} U_i x_{(i)}^{\gamma^*}\frac{2(\delta^*+1)+(2\delta^*+1)x_{(i)}^{\gamma^*}}{\big[(\delta^*+1)^2 + \delta^*(\delta^*+1)x_{(i)}^{\gamma^*}\big]^2} \Big\},

and hence \det \Sigma_1^* = (F_{11}^* F_{22}^* - (F_{12}^*)^2)^{-1}. Therefore, the Bayes estimate of γ becomes

\hat{\gamma}_{BT} = \left[\frac{\det \Sigma_1^*}{\det \Sigma}\right]^{1/2} \exp\{ n[F_1^*(\gamma^*, \delta^*) - F(\bar{\gamma}, \bar{\delta})] \}.   (22)
Following the same arguments with h(γ, δ) = δ in F*(γ, δ), \hat{\delta}_{BT} can then be obtained straightforwardly.
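For completeness, a numerical version of this Tierney-Kadane step is sketched below under the same gamma priors; it reuses negloglik_pl0 from the sketch in Section 2.1, assumes the hyper-parameters a1, b1, a2, b2 and the data x, U are in the workspace, and starts both optimizations at the MLE. The scaling factor 1/n cancels in (19), so the unscaled log-posterior can be used directly.

```r
log_post <- function(th) {                 # log pi_3(gamma, delta, x) of (15), up to a constant
  (a1 - 1) * log(th[1]) - b1 * th[1] + (a2 - 1) * log(th[2]) - b2 * th[2] -
    negloglik_pl0(th, x, U)
}

tk_estimate <- function(hfun, start) {     # Tierney-Kadane approximation (19) of E[h | x]
  f  <- function(th) if (any(th <= 0)) 1e10 else -log_post(th)
  fs <- function(th) if (any(th <= 0)) 1e10 else -(log_post(th) + log(hfun(th)))
  o  <- optim(start, f,  hessian = TRUE)
  os <- optim(start, fs, hessian = TRUE)
  sqrt(det(o$hessian) / det(os$hessian)) * exp(o$value - os$value)
}

gamma_BT <- tk_estimate(function(th) th[1], start = mle)
delta_BT <- tk_estimate(function(th) th[2], start = mle)
```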
3. Inference for the stress-strength reliability
3.1. MLE of R
Suppose that X and Y are the strength and stress random variables which are independently distributed as PL(γ, δ) and PL(γ, η), respectively. Our quantity of interest is the parameter R = P(X > Y), which is given by (see [10])

R = \frac{\eta^2}{\eta+1}\left( \frac{2\delta + 1}{(\delta+1)(\delta+\eta)^2} + \frac{1}{\delta+\eta} + \frac{2\delta}{(\delta+1)(\delta+\eta)^3} \right).   (23)
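As a quick check, (23) can be evaluated directly; with the parameter values used in the simulation study below it reproduces R = 0.5 and R = 0.9182 (a small sketch, with our own function name R_pl):

```r
# stress-strength reliability (23) for X ~ PL(gamma, delta), Y ~ PL(gamma, eta)
R_pl <- function(delta, eta) {
  eta^2 / (eta + 1) * ((2 * delta + 1) / ((delta + 1) * (delta + eta)^2) +
                       1 / (delta + eta) +
                       2 * delta / ((delta + 1) * (delta + eta)^3))
}
R_pl(1, 1)      # 0.5
R_pl(0.2, 1)    # 0.9182
```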
In order to compute the maximum likelihood estimate of the parameter R, we need the MLEs of γ, δ and η. Let x = (x_(1), ..., x_(d_1)) be a PTII censored sample from PL(γ, δ) based on the censoring scheme (U_1, ..., U_{d_1}), and let y = (y_(1), ..., y_(d_2)) be a PTII censored sample from PL(γ, η) based on the censoring scheme (V_1, ..., V_{d_2}). Then, the log-likelihood function of the parameters γ, δ and η (ignoring the constant terms) becomes
L(\gamma, \delta, \eta; x, y) = (d_1 + d_2)\log\gamma + d_1\log\Big(\frac{\delta^2}{\delta+1}\Big) - \delta\sum_{i=1}^{d_1} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d_1}\big[\log(1 + x_{(i)}^{\gamma}) + (\gamma-1)\log x_{(i)}\big] + \sum_{i=1}^{d_1} U_i\log\Big(1 + \frac{\delta}{\delta+1}x_{(i)}^{\gamma}\Big) + d_2\log\Big(\frac{\eta^2}{\eta+1}\Big) - \eta\sum_{j=1}^{d_2} y_{(j)}^{\gamma}(1+V_j) + \sum_{j=1}^{d_2}\big[\log(1 + y_{(j)}^{\gamma}) + (\gamma-1)\log y_{(j)}\big] + \sum_{j=1}^{d_2} V_j\log\Big(1 + \frac{\eta}{\eta+1}y_{(j)}^{\gamma}\Big).   (24)
The ML estimates of the parameters γ, δ and η, say \hat{\gamma}, \hat{\delta} and \hat{\eta}, are computed from the system of equations

\frac{\partial L}{\partial \gamma} = \frac{d_1 + d_2}{\gamma} + \sum_{i=1}^{d_1}\log x_{(i)} - \delta\sum_{i=1}^{d_1} x_{(i)}^{\gamma}\log x_{(i)}(1+U_i) + \sum_{i=1}^{d_1}\frac{x_{(i)}^{\gamma}\log x_{(i)}}{1 + x_{(i)}^{\gamma}} + \sum_{i=1}^{d_1} U_i\frac{\delta x_{(i)}^{\gamma}\log x_{(i)}}{\delta+1+\delta x_{(i)}^{\gamma}} + \sum_{j=1}^{d_2}\log y_{(j)} - \eta\sum_{j=1}^{d_2} y_{(j)}^{\gamma}\log y_{(j)}(1+V_j) + \sum_{j=1}^{d_2}\frac{y_{(j)}^{\gamma}\log y_{(j)}}{1 + y_{(j)}^{\gamma}} + \sum_{j=1}^{d_2} V_j\frac{\eta y_{(j)}^{\gamma}\log y_{(j)}}{\eta+1+\eta y_{(j)}^{\gamma}} = 0,   (25)

\frac{\partial L}{\partial \delta} = \frac{2d_1}{\delta} - \frac{d_1}{\delta+1} - \sum_{i=1}^{d_1} x_{(i)}^{\gamma}(1+U_i) + \sum_{i=1}^{d_1} U_i\frac{x_{(i)}^{\gamma}}{(\delta+1)^2 + \delta(\delta+1)x_{(i)}^{\gamma}} = 0,   (26)

\frac{\partial L}{\partial \eta} = \frac{2d_2}{\eta} - \frac{d_2}{\eta+1} - \sum_{j=1}^{d_2} y_{(j)}^{\gamma}(1+V_j) + \sum_{j=1}^{d_2} V_j\frac{y_{(j)}^{\gamma}}{(\eta+1)^2 + \eta(\eta+1)y_{(j)}^{\gamma}} = 0.   (27)
Then, by using the invariance property of the MLEs, the maximum likelihood estimate of R = R(δ, η) is obtained as R(\hat{\delta}, \hat{\eta}). Moreover, from the asymptotic normality of the MLEs (see [23]), \hat{R} is asymptotically normal with mean R and asymptotic variance

\sigma_R^2 = \Big(\frac{\partial R}{\partial \delta}\Big)^2 T_{11} + \Big(\frac{\partial R}{\partial \eta}\Big)^2 T_{22} + 2\Big(\frac{\partial R}{\partial \delta}\Big)\Big(\frac{\partial R}{\partial \eta}\Big) T_{12},   (28)

where

\frac{\partial R}{\partial \delta} = \frac{-\delta\eta^2\big[\delta^3 + 2\delta^2(\eta+3) + \delta(\eta+2)(\eta+6) + 2(\eta^2+3\eta+3)\big]}{(\delta+1)^2(\eta+1)(\delta+\eta)^4},

\frac{\partial R}{\partial \eta} = \frac{\delta^2\eta\big[6 + \delta^2(\eta+2) + 2\delta(\eta+1)(\eta+3) + \eta(\eta^2+6\eta+12)\big]}{(\delta+1)(\eta+1)^2(\delta+\eta)^4},
and T_{ij}, i, j = 1, 2, 3, are the elements of the negative of the inverse of the matrix

\begin{pmatrix} \frac{\partial^2 L}{\partial\delta^2} & \frac{\partial^2 L}{\partial\delta\,\partial\eta} & \frac{\partial^2 L}{\partial\delta\,\partial\gamma} \\ \frac{\partial^2 L}{\partial\eta\,\partial\delta} & \frac{\partial^2 L}{\partial\eta^2} & \frac{\partial^2 L}{\partial\eta\,\partial\gamma} \\ \frac{\partial^2 L}{\partial\gamma\,\partial\delta} & \frac{\partial^2 L}{\partial\gamma\,\partial\eta} & \frac{\partial^2 L}{\partial\gamma^2} \end{pmatrix}.
Now, by using (24), we obtain

\frac{\partial^2 L}{\partial\gamma^2} = -\frac{d_1 + d_2}{\gamma^2} - \delta\sum_{i=1}^{d_1} x_{(i)}^{\gamma}(\log x_{(i)})^2(1+U_i) + \sum_{i=1}^{d_1}\Big[\frac{x_{(i)}^{\gamma}(\log x_{(i)})^2}{(1 + x_{(i)}^{\gamma})^2} + U_i\frac{\delta x_{(i)}^{\gamma}(\log x_{(i)})^2}{(\delta+1)\big(1 + \frac{\delta}{\delta+1}x_{(i)}^{\gamma}\big)^2}\Big] - \eta\sum_{j=1}^{d_2} y_{(j)}^{\gamma}(\log y_{(j)})^2(1+V_j) + \sum_{j=1}^{d_2}\Big[\frac{y_{(j)}^{\gamma}(\log y_{(j)})^2}{(1 + y_{(j)}^{\gamma})^2} + V_j\frac{\eta y_{(j)}^{\gamma}(\log y_{(j)})^2}{(\eta+1)\big(1 + \frac{\eta}{\eta+1}y_{(j)}^{\gamma}\big)^2}\Big],

\frac{\partial^2 L}{\partial\delta^2} = -\frac{2d_1}{\delta^2} + \frac{d_1}{(\delta+1)^2} - \sum_{i=1}^{d_1} U_i x_{(i)}^{\gamma}\frac{2(\delta+1) + (2\delta+1)x_{(i)}^{\gamma}}{\big[(\delta+1)^2 + \delta(\delta+1)x_{(i)}^{\gamma}\big]^2},

\frac{\partial^2 L}{\partial\eta^2} = -\frac{2d_2}{\eta^2} + \frac{d_2}{(\eta+1)^2} - \sum_{j=1}^{d_2} V_j y_{(j)}^{\gamma}\frac{2(\eta+1) + (2\eta+1)y_{(j)}^{\gamma}}{\big[(\eta+1)^2 + \eta(\eta+1)y_{(j)}^{\gamma}\big]^2},

\frac{\partial^2 L}{\partial\gamma\,\partial\delta} = -\sum_{i=1}^{d_1} x_{(i)}^{\gamma}\log x_{(i)}(1+U_i) + \sum_{i=1}^{d_1} U_i\frac{x_{(i)}^{\gamma}\log x_{(i)}}{(\delta+1+\delta x_{(i)}^{\gamma})^2},

\frac{\partial^2 L}{\partial\gamma\,\partial\eta} = -\sum_{j=1}^{d_2} y_{(j)}^{\gamma}\log y_{(j)}(1+V_j) + \sum_{j=1}^{d_2} V_j\frac{y_{(j)}^{\gamma}\log y_{(j)}}{(\eta+1+\eta y_{(j)}^{\gamma})^2},

\frac{\partial^2 L}{\partial\delta\,\partial\eta} = \frac{\partial^2 L}{\partial\eta\,\partial\delta} = 0.
Thus, the 100(1 − α)% asymptotic CI of the reliability R can be derived as

\left(\frac{e^{L}}{1 + e^{L}},\ \frac{e^{U}}{1 + e^{U}}\right),   (29)

where

(L, U) = \log\Big(\frac{\hat{R}}{1 - \hat{R}}\Big) \pm z_{\alpha/2}\,\frac{\hat{\sigma}_R}{\hat{R}(1 - \hat{R})}.   (30)
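In code, this logit-based interval takes only a few lines; R_hat and sd_R below stand for the MLE R̂ and the estimated standard error σ̂_R obtained from (28), both assumed to have been computed already.

```r
# 95% asymptotic CI (29)-(30) for R on the logit scale
z    <- qnorm(0.975)
LU   <- log(R_hat / (1 - R_hat)) + c(-1, 1) * z * sd_R / (R_hat * (1 - R_hat))
ci_R <- exp(LU) / (1 + exp(LU))
```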
Table 1: Different estimates of the parameter γ for various sample sizes when (γ, δ) = (2, 1).
n d Scheme MLE Bayes Confidence interval
AV MSE AV MSE AL CP
20 12 (0,...,0,8) 2.2134 0.3406 2.2186 0.3619 2.2976 0.9261
(8,0...,0) 2.2463 0.5812 2.2509 0.5875 2.8365 0.9232
(0,8,0,...,0) 2.2377 0.5685 2.2311 0.5713 2.8121 0.9238
20 15 (0,...,0,5) 2.1023 0.2818 2.1058 0.2831 1.8658 0.9317
(5,0...,0) 2.2339 0.3416 2.2354 0.3427 2.5813 0.9306
(0,5,0,...,0) 2.1961 0.3225 2.1975 0.3240 2.5762 0.9311
20 18 (0,...,0,2) 2.0761 0.1938 2.0782 0.1947 1.5696 0.9359
(2,0...,0) 2.1874 0.2773 2.1876 0.2785 2.3375 0.9346
(0,2,0,...,0) 2.1325 0.2619 2.1338 0.2623 2.3129 0.9352
30 15 (0,...,0,15) 2.0830 0.2310 2.0861 0.2341 1.7589 0.9317
(15,0...,0) 2.2116 0.3341 2.2174 0.3352 2.3436 0.9302
(0,15,0,...,0) 2.1078 0.3196 2.1083 0.3197 2.3379 0.9305
30 20 (0,...,0,10) 2.0322 0.1875 2.0328 0.1878 1.6136 0.9321
(10,0...,0) 2.1371 0.2918 2.1395 0.2925 2.3355 0.9308
(0,10,0,...,0) 2.0916 0.2641 2.0937 0.2644 2.3278 0.9311
30 25 (0,...,0,5) 2.0208 0.1234 2.0214 0.1238 1.3373 0.9432
(5,0...,0) 2.0864 0.2175 2.0873 0.2189 2.1897 0.9409
(0,5,0,...,0) 2.0738 0.1983 2.0749 0.1984 2.1736 0.9414
50 30 (0,...,0,20) 2.0192 0.1185 2.0205 0.1187 1.3118 0.9373
(20,0...,0) 2.0775 0.1931 2.0782 0.1946 2.1671 0.9358
(0,20,0,...,0) 2.0368 0.1857 2.0391 0.1874 2.1503 0.9360
50 35 (0,...,0,15) 2.0143 0.0902 2.0151 0.0908 1.1579 0.9461
(15,0...,0) 2.0560 0.1428 2.0568 0.1434 2.1486 0.9432
(0,15,0,...,0) 2.0229 0.1297 2.0247 0.1302 2.1338 0.9438
50 45 (0,...,0,5) 2.0113 0.0606 2.0128 0.0618 0.9576 0.9467
(5,0...,0) 2.0416 0.1089 2.0431 0.1097 1.1945 0.9440
(0,5,0,...,0) 2.0177 0.0926 2.0190 0.0934 1.1871 0.9443
3.2. Bayes estimate of R
This section focuses on Bayesian estimation of the reliability parameter R, as well as the corresponding HPD credible interval, when γ and δ are assigned the gamma priors with p.d.f.s given by (14) and η is taken to be independent of γ and δ with the prior

\pi(\eta; a_3, b_3) \propto \eta^{a_3 - 1} e^{-\eta b_3}, \quad \eta > 0, \ a_3 > 0, \ b_3 > 0.   (31)
First, by using (14), (24) and (31), the joint density function of γ, δ, η and the data can be written as

\pi_4(\gamma, \delta, \eta; x, y) \propto \frac{\gamma^{d_1+d_2+a_1-1} e^{-\gamma b_1}\, \delta^{2d_1+a_2-1}}{(\delta+1)^{d_1}}\, e^{-\delta\big(b_2 + \sum_{i=1}^{d_1} x_{(i)}^{\gamma}(1+U_i)\big)}\, \frac{\eta^{2d_2+a_3-1}}{(\eta+1)^{d_2}}\, e^{-\eta\big(b_3 + \sum_{j=1}^{d_2} y_{(j)}^{\gamma}(1+V_j)\big)} \prod_{i=1}^{d_1}(1 + x_{(i)}^{\gamma})\, x_{(i)}^{\gamma-1}\Big(1 + \frac{\delta}{\delta+1}x_{(i)}^{\gamma}\Big)^{U_i} \prod_{j=1}^{d_2}(1 + y_{(j)}^{\gamma})\, y_{(j)}^{\gamma-1}\Big(1 + \frac{\eta}{\eta+1}y_{(j)}^{\gamma}\Big)^{V_j}.   (32)
Table 2: Different estimates of the parameter γ for various sample sizes when (γ, δ) = (2, 0.5).
n d Scheme MLE Bayes Confidence interval
AV MSE AV MSE AL CP
20 12 (0,...,0,8) 2.0431 0.2782 2.0438 0.2795 1.9530 0.9212
(8,0...,0) 2.1251 0.4137 2.1279 0.4163 2.2571 0.9207
(0,8,0,...,0) 2.0983 0.4062 2.1016 0.4078 2.2429 0.9210
20 15 (0,...,0,5) 1.9698 0.2119 1.6759 0.2127 1.7483 0.9326
(5,0...,0) 2.1073 0.3376 2.1079 0.3378 2.23116 0.9311
(0,5,0,...,0) 2.0891 0.3198 2.0893 0.3214 2.2164 0.9315
20 18 (0,...,0,2) 2.0116 0.1490 2.0129 0.1493 1.4368 0.9369
(2,0...,0) 2.0852 0.2618 2.0873 0.2637 2.2103 0.9347
(0,2,0,...,0) 2.0717 0.2560 2.0729 0.2584 2.1852 0.9353
30 15 (0,...,0,15) 1.9774 0.2057 1.9762 0.2069 1.6771 0.9312
(15,0...,0) 2.0965 0.3284 2.0988 0.3287 2.2196 0.9303
(0,15,0,...,0) 2.0827 0.3095 2.0844 0.3116 2.1975 0.9309
30 20 (0,...,0,10) 1.9813 0.1371 1.9803 0.1378 1.4097 0.9322
(10,0...,0) 2.0817 0.2841 2.0835 0.2867 2.1678 0.9310
(0,10,0,...,0) 2.0736 0.2537 2.0740 0.2558 2.1513 0.9314
30 25 (0,...,0,5) 1.9837 0.0970 1.9821 0.0973 1.2195 0.9438
(5,0...,0) 2.0705 0.2118 2.0723 0.2140 2.1431 0.9413
(0,5,0,...,0) 2.0591 0.1956 2.0595 0.1973 2.1108 0.9420
50 30 (0,...,0,20) 1.9792 0.1088 1.9766 0.1096 1.1414 0.9315
(20,0...,0) 2.0633 0.1837 2.0635 0.1845 2.1570 0.9306
(0,20,0,...,0) 2.0485 0.1791 2.0492 0.1793 2.1206 0.9311
50 35 (0,...,0,15) 1.9839 0.0837 1.9826 0.0846 1.0388 0.9349
(15,0...,0) 2.0518 0.1398 2.0540 0.1403 2.1148 0.9328
(0,15,0,...,0) 2.0409 0.1134 2.0418 0.1149 2.1953 0.9340
50 45 (0,...,0,5) 1.9915 0.0519 1.9913 0.0528 0.8762 0.9418
(5,0...,0) 2.0478 0.1034 2.0496 0.1047 1.1826 0.9411
(0,5,0,...,0) 2.0362 0.0892 2.0368 0.0907 1.1644 0.9417
Thus, the Bayes estimate of the reliability parameter under the squared error loss function becomes

\hat{R}_{SE} = E(R \mid x, y) = \frac{\int_0^{\infty}\int_0^{\infty}\int_0^{\infty} R\, \pi_4(\gamma, \delta, \eta; x, y)\, d\gamma\, d\delta\, d\eta}{\int_0^{\infty}\int_0^{\infty}\int_0^{\infty} \pi_4(\gamma, \delta, \eta; x, y)\, d\gamma\, d\delta\, d\eta}.   (33)
It is observed that the Bayes estimate of R involves the ratio of two integrals for which simplified closed forms cannot be obtained. Therefore, in the following, we adopt the Gibbs sampling method to extract random samples from the conditional densities of the parameters and use them to compute the Bayes estimate and HPD credible interval of R.
From (32), the conditional posterior densities of γ, δ and η can be extracted, respectively, as

\pi_1^*(\gamma \mid \delta, \eta, x, y) \propto \pi_1(\gamma; d_1+d_2+a_1, b_1)\, e^{-\delta\sum_{i=1}^{d_1} x_{(i)}^{\gamma}(1+U_i)}\, e^{-\eta\sum_{j=1}^{d_2} y_{(j)}^{\gamma}(1+V_j)} \prod_{i=1}^{d_1}(1 + x_{(i)}^{\gamma})\, x_{(i)}^{\gamma-1}\Big(1 + \frac{\delta}{\delta+1}x_{(i)}^{\gamma}\Big)^{U_i} \prod_{j=1}^{d_2}(1 + y_{(j)}^{\gamma})\, y_{(j)}^{\gamma-1}\Big(1 + \frac{\eta}{\eta+1}y_{(j)}^{\gamma}\Big)^{V_j},   (34)
Table 3: Different estimates of the parameter δ for various sample sizes when (γ, δ) = (2, 1).
n d Scheme MLE Bayes Confidence interval
AV MSE AV MSE AL CP
20 12 (0,...,0,8) 0.9925 0.0793 0.9914 0.0824 0.9167 0.9241
(8,0...,0) 0.9813 0.1137 0.9802 0.1141 0.9814 0.9225
(0,8,0,...,0) 0.9841 0.1064 0.9819 0.1097 0.9732 0.9228
20 15 (0,...,0,5) 0.9947 0.0533 0.9923 0.0554 0.8280 0.9316
(5,0...,0) 0.6832 0.1085 0.9815 0.1087 0.9328 0.9305
(0,5,0,...,0) 0.9866 0.0936 0.9860 0.0952 0.9215 0.9311
20 18 (0,...,0,2) 0.9965 0.0455 0.9957 0.0463 0.8045 0.9374
(2,0...,0) 0.9873 0.0872 0.9864 0.0879 0.8906 0.9357
(0,2,0,...,0) 0.9911 0.0810 0.9897 0.0831 0.8755 0.9362
30 15 (0,...,0,15) 0.9951 0.0475 0.9930 0.0478 0.8113 0.9385
(15,0...,0) 0.9856 0.0914 0.9852 0.0922 0.8842 0.9339
(0,15,0,...,0) 0.9892 0.0851 0.9903 0.0858 0.8731 0.9350
30 20 (0,...,0,10) 0.9960 0.0307 0.9938 0.0319 0.6693 0.9498
(10,0...,0) 0.9893 0.0836 0.9861 0.0874 0.8371 0.9347
(0,10,0,...,0) 0.9907 0.0768 0.9905 0.0791 0.8219 0.9358
30 25 (0,...,0,5) 0.9978 0.0276 0.9956 0.0280 0.6523 0.9415
(5,0...,0) 0.9915 0.0711 0.9807 0.0725 0.8112 0.9383
(0,5,0,...,0) 0.9921 0.0547 0.9913 0.0569 0.7863 0.9392
50 30 (0,...,0,20) 0.9960 0.0211 0.9952 0.0216 0.5222 0.9403
(20,0...,0) 0.9904 0.0766 0.9883 0.0790 0.7460 0.9376
(0,20,0,...,0) 0.9914 0.0631 0.9809 0.0657 0.7291 0.9380
50 35 (0,...,0,15) 0.9973 0.0157 0.9934 0.0168 0.5063 0.9417
(15,0...,0) 0.9920 0.0519 0.9913 0.0523 0.7186 0.9391
(0,15,0,...,0) 0.9928 0.0469 0.9917 0.0475 0.7033 0.9394
50 45 (0,...,0,5) 0.9982 0.0150 0.9975 0.0152 0.5003 0.9438
(5,0...,0) 0.9937 0.0471 0.9924 0.0485 0.6719 0.9407
(0,5,0,...,0) 0.9946 0.0338 0.9937 0.0346 0.6548 0.9411
\pi_2^*(\delta \mid \gamma, x, y) \propto \pi_2\Big(\delta;\, 2d_1 + a_2,\, b_2 + \sum_{i=1}^{d_1} x_{(i)}^{\gamma}(1+U_i)\Big)\, \frac{1}{(\delta+1)^{d_1}} \prod_{i=1}^{d_1}\Big(1 + \frac{\delta}{\delta+1}x_{(i)}^{\gamma}\Big)^{U_i}   (35)

and

\pi_3^*(\eta \mid \gamma, x, y) \propto \pi_2\Big(\eta;\, 2d_2 + a_3,\, b_3 + \sum_{j=1}^{d_2} y_{(j)}^{\gamma}(1+V_j)\Big)\, \frac{1}{(\eta+1)^{d_2}} \prod_{j=1}^{d_2}\Big(1 + \frac{\eta}{\eta+1}y_{(j)}^{\gamma}\Big)^{V_j}.   (36)
Since the conditional densities in (34)-(36) do not correspond to well-known distributions, direct sampling from them is not possible. A posterior density function can be approximated by a normal distribution if the density is unimodal and roughly symmetric (see Gelman et al. [8]). In our case, we observed that the plots of the posterior densities of γ, δ and η resemble a normal distribution (not reported here). Therefore, in the following algorithm, we employ the Metropolis-Hastings (M-H) technique with normal proposal distributions to generate samples from the conditional densities.
1) Set the initial values of the parameters (γ^(0), δ^(0), η^(0)) and set l = 1.
2) Considering the proposal distribution q(γ) = N(γ^(l−1), T_33) for the M-H method, generate γ^(l) from π_1^*(γ | δ^(l−1), η^(l−1), x, y).
3) Generate δ^(l) from π_2^*(δ | γ^(l), x, y) using the M-H method with the proposal distribution q(δ) = N(δ^(l−1), T_11).
Table 4: Different estimates of the parameter δ for various sample sizes when (γ, δ) = (2, 0.5).
n d Scheme MLE Bayes Confidence interval
AV MSE AV MSE AL CP
20 12 (0,...,0,8) 0.4810 0.0217 0.4782 0.0209 0.5393 0.9221
(8,0...,0) 0.4729 0.0346 0.4711 0.0369 0.6748 0.9216
(0,8,0,...,0) 0.4765 0.0317 0.4726 0.0323 0.6513 0.9220
20 15 (0,...,0,5) 0.4862 0.0188 0.4855 0.0195 0.5364 0.9287
(5,0...,0) 0.4793 0.0309 0.4764 0.0341 0.6472 0.9254
(0,5,0,...,0) 0.4850 0.0274 0.4819 0.0280 0.6391 0.9263
20 18 (0,...,0,2) 0.5037 0.0163 0.5052 0.0169 0.5327 0.9328
(2,0...,0) 0.4866 0.0250 0.4861 0.0278 0.6118 0.9308
(0,2,0,...,0) 0.4907 0.0239 0.4892 0.0254 0.5975 0.9315
30 15 (0,...,0,15) 0.4895 0.0126 0.4873 0.0149 0.5281 0.9321
(15,0...,0) 0.4811 0.0287 0.4806 0.0293 0.6255 0.9296
(0,15,0,...,0) 0.4829 0.0241 0.4814 0.0248 0.6194 0.9307
30 20 (0,...,0,10) 0.4936 0.0117 0.4917 0.0146 0.4737 0.9346
(10,0...,0) 0.4874 0.0216 0.4860 0.0235 0.5914 0.9312
(0,10,0,...,0) 0.4891 0.0194 0.4879 0.0206 0.5726 0.9317
30 25 (0,...,0,5) 0.5044 0.0105 0.5091 0.0109 0.4419 0.9385
(5,0...,0) 0.4917 0.0183 0.4913 0.0187 0.5137 0.9357
(0,5,0,...,0) 0.4926 0.0168 0.4922 0.0175 0.4975 0.9363
50 30 (0,...,0,20) 0.5080 0.0107 0.5103 0.0119 0.3529 0.9377
(20,0...,0) 0.4855 0.0175 0.4854 0.0196 0.4816 0.9328
(0,20,0,...,0) 0.4902 0.0159 0.4896 0.0171 0.4589 0.9336
50 35 (0,...,0,15) 0.4958 0.0090 0.4947 0.0093 0.3455 0.9389
(15,0...,0) 0.5123 0.0144 0.5128 0.0177 0.4258 0.9352
(0,15,0,...,0) 0.4920 0.0123 0.4917 0.0140 0.4177 0.9364
50 45 (0,...,0,5) 0.5033 0.0067 0.5046 0.0069 0.3398 0.9422
(5,0...,0) 0.4923 0.0130 0.4912 0.0138 0.3941 0.9395
(0,5,0,...,0) 0.4937 0.0108 0.4936 0.0114 0.3892 0.9413
4) Generate η^(l) from π_3^*(η | γ^(l), x, y) using the M-H method with the proposal distribution q(η) = N(η^(l−1), T_22).
5) Compute R^(l) from (23) and set l = l + 1.
6) Repeat Steps 2-5 M times to obtain R^(l) for l = 1, ..., M.
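A minimal sketch of this sampler (ours, not the authors' exact code) is given below. It works with the unnormalized joint posterior (32), so each Metropolis-Hastings ratio equals the corresponding conditional ratio, and it assumes the PTII samples x, y with schemes U, V, the hyper-parameters a1, b1, a2, b2, a3, b3, the starting values start = (γ̂, δ̂, η̂) and the proposal standard deviations s (e.g. the square roots of T_33, T_11, T_22) are available, together with the function R_pl from Section 3.1.

```r
log_joint <- function(th) {                      # log pi_4 in (32), up to a constant
  if (any(th <= 0)) return(-Inf)
  g <- th[1]; del <- th[2]; eta <- th[3]
  xg <- x^g; yg <- y^g; d1 <- length(x); d2 <- length(y)
  (d1 + d2 + a1 - 1) * log(g) - b1 * g +
    (2 * d1 + a2 - 1) * log(del) - d1 * log(del + 1) - del * (b2 + sum(xg * (1 + U))) +
    (2 * d2 + a3 - 1) * log(eta) - d2 * log(eta + 1) - eta * (b3 + sum(yg * (1 + V))) +
    sum(log(1 + xg) + (g - 1) * log(x) + U * log(1 + del / (del + 1) * xg)) +
    sum(log(1 + yg) + (g - 1) * log(y) + V * log(1 + eta / (eta + 1) * yg))
}

M <- 75000
draws <- matrix(NA, M, 3)
cur <- start
for (l in 1:M) {
  for (k in 1:3) {                               # update gamma, delta, eta in turn (M-H steps)
    prop <- cur; prop[k] <- rnorm(1, cur[k], s[k])
    if (log(runif(1)) < log_joint(prop) - log_joint(cur)) cur <- prop
  }
  draws[l, ] <- cur
}

keep    <- seq(25001, M, by = 10)                # burn-in and thinning as described in Section 4
R_draws <- R_pl(draws[keep, 2], draws[keep, 3])  # reliability (23) at each retained draw
R_bayes <- mean(R_draws)                         # Bayes estimate (37)
```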
By using the random samples generated from the above Gibbs technique, the approximate Bayes estimate of the reliability parameter R under the squared error loss function becomes

\hat{R} = \frac{1}{M}\sum_{l=1}^{M} R^{(l)}.   (37)
Also, let R_{(1)} \le \ldots \le R_{(M)} be the ordered values of R^{(l)}, l = 1, ..., M. The HPD credible interval of R is derived by selecting the interval with the shortest length among the following 100(1 − α)% credible intervals of R:

(R_{(1)}, R_{((1-\alpha)M)}), \ldots, (R_{(\alpha M)}, R_{(M)}).
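A short helper for this step, operating on the retained draws from the sketch above, could be:

```r
# shortest interval among the candidate 100(1 - alpha)% credible intervals built from sorted draws
hpd_interval <- function(r, alpha = 0.05) {
  r <- sort(r); M <- length(r); k <- floor((1 - alpha) * M)
  j <- which.min(r[(k + 1):M] - r[1:(M - k)])    # lower end-point of the shortest interval
  c(r[j], r[j + k])
}
hpd_interval(R_draws)
```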
4. Simulation study
To evaluate the behaviour of the proposed estimators for various sample sizes, we performed extensive Monte Carlo simulations. The performance of the competitive estimates has been
Table 5: Different estimates of the stress-strength parameter R for various sample sizes when (γ, δ, η) = (2, 1, 1).
n1, n2 d1, d2 Scheme MLE Bayes CI CRI
AV MSE AV MSE AL CP AL CP
"20 12 (0,...,0,8) 0.4976 0.0127 0.4978 0.0136 0.3704 0.9318 0.3648 0.9312
(8,0...,0) 0.4943 0.0156 0.4918 0.0178 0.3775 0.9302 0.3754 0.9267
(0,8,0,...,0) 0.4961 0.0139 0.4937 0.0141 0.3716 0.9305 0.3690 0.9274
20 15 (0,...,0,5) 0.4986 0.0100 0.4952 0.0119 0.3352 0.9337 0.3325 0.9320
(5,0...,0) 0.4967 0.0137 0.4938 0.0155 0.3419 0.9316 0.3408 0.9308
(0,5,0,...,0) 0.4981 0.0120 0.4945 0.0127 0.3369 0.9317 0.3347 0.9314
20 18 (0,...,0,2) 0.4988 0.0084 0.4973 0.0089 0.3221 0.9352 0.3146 0.9342
(2,0...,0) 0.4970 0.0116 0.4977 0.0128 0.3297 0.9328 0.3275 0.9326
(0,2,0,...,0) 0.4985 0.0092 0.4961 0.0090 0.3228 0.9336 0.3218 0.9331
30 15 (0,...,0,15) 0.4978 0.0099 0.4986 0.0095 0.3478 0.9386 0.3421 0.9359
(15,0...,0) 0.4955 0.0125 0.4942 0.0144 0.3507 0.9352 0.3472 0.9338
(0,15,0,...,0) 0.4971 0.0108 0.4938 0.0107 0.3483 0.9358 0.3440 0.9340
30 20 (0,...,0,10) 0.4983 0.0073 0.4967 0.0076 0.3362 0.9407 0.3292 0.9380
(10,0...,0) 0.4970 0.0107 0.4953 0.0093 0.3419 0.9365 0.3378 0.9347
(0,10,0,...,0) 0.4978 0.0085 0.4972 0.0086 0.3393 0.9374 0.3314 0.9356
30 25 (0,...,0,5) 0.4992 0.0052 0.4993 0.0058 0.3047 0.9421 0.2982 0.9417
(5,0...,0) 0.4982 0.0083 0.4971 0.0083 0.3120 0.9397 0.3102 0.9403
(0,5,0,...,0) 0.4991 0.0054 0.4980 0.0051 0.3059 0.9409 0.3041 0.9407
50 30 (0,...,0,20) 0.4983 0.0039 0.4972 0.0032 0.2841 0.9441 0.2776 0.9417
(20,0...,0) 0.4962 0.0071 0.4966 0.0083 0.2875 0.9423 0.2856 0.9403
(0,20,0,...,0) 0.4981 0.0044 0.4982 0.0049 0.2849 0.9428 0.2814 0.9407
50 35 (0,...,0,15) 0.4991 0.0035 0.4963 0.0031 0.2621 0.9447 0.2605 0.9426
(15,0...,0) 0.4965 0.0067 0.4974 0.0069 0.2689 0.9430 0.2657 0.9411
(0,15,0,...,0) 0.4987 0.0036 0.4983 0.0036 0.2643 0.9432 0.2641 0.9414
50 45 (0,...,0,5) 0.4994 0.0026 0.4992 0.0025 0.2385 0.9468 0.2332 0.9461
(5,0...,0) 0.4982 0.0029 0.4970 0.0033 0.2384 0.9439 0.2370 0.9425
(0,5,0,...,0) 0.4993 0.0026 0.4985 0.0028 0.2366 0.9446 0.2351 0.9426
compared in terms of their average values (AV) and mean squared errors (MSE). In addition, the confidence intervals (CI) and HPD credible intervals (CRI) are compared on the basis of their average lengths and coverage percentages. The calculations are conducted using R 2.14.0 which is a common software package for statistical computing.
First, in order to compare the maximum likelihood and Bayesian procedures developed in Section 2, we have considered two sets of parameter values, (γ, δ) = (2, 1) and (2, 0.5), and three sampling schemes:

I: (U_1, ..., U_d) = (0, ..., 0, n − d),
II: (U_1, ..., U_d) = (n − d, 0, ..., 0),
III: (U_1, ..., U_d) = (0, n − d, 0, ..., 0).

In each case, random samples are generated from the PL model by employing the method of Balakrishnan and Sandhu [2] (a sketch of this step is given below), and the ML estimates of the unknown parameters are obtained from the system of equations (5) and (6). To obtain the Bayes estimates of γ and δ using Tierney and Kadane's approach, we take all hyper-parameter values equal to 0.001, as suggested by Congdon [5]. Tables 1-4 present the AVs and MSEs of the estimates obtained from 10000 replications.
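A sketch of this generation step, using the uniform-spacings form of the Balakrishnan and Sandhu [2] algorithm and inverting the PL distribution function numerically with uniroot, could read as follows (the function name rpl_ptii_bs is ours):

```r
# PTII censored sample of size d from PL(gamma, delta) under scheme U (Balakrishnan-Sandhu [2])
rpl_ptii_bs <- function(d, U, gamma, delta) {
  W <- runif(d)
  V <- W^(1 / (seq_len(d) + cumsum(rev(U))))  # V_i = W_i^{1/(i + U_d + ... + U_{d-i+1})}
  p <- 1 - cumprod(rev(V))                    # PTII censored uniform order statistics
  Fpl <- function(x) 1 - (1 + delta / (delta + 1) * x^gamma) * exp(-delta * x^gamma)
  sapply(p, function(u) uniroot(function(x) Fpl(x) - u, c(1e-8, 1e8))$root)
}

# e.g. scheme I with n = 20, d = 12: U = (0, ..., 0, 8)
x <- rpl_ptii_bs(d = 12, U = c(rep(0, 11), 8), gamma = 2, delta = 1)
```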
Further, for the generated samples, we have derived 95% confidence intervals and counted the ones that cover the correct value of a specific parameter. The number of such intervals divided by 10000 is reported as estimated coverage probabilities. For different sample sizes, the average
Table 6: Different estimates of the stress-strength parameter R for various sample sizes when (γ, δ, η) = (2, 0.2, 1).
n1, n2 d1, d2 Scheme MLE Bayes CI CRI
AV MSE AV MSE AL CP AL CP
"20 12 (0,...,0,8) 0.9260 0.0022 0.9244 0.0028 0.1728 0.9340 0.1676 0.9319
(8,0...,0) 0.9289 0.0038 0.9273 0.0046 0.1756 0.9316 0.1732 0.9288
(0,8,0,...,0) 0.9276 0.0025 0.9265 0.0027 0.1737 0.9332 0.1719 0.9294
20 15 (0,...,0,5) 0.9243 0.0015 0.9221 0.0019 0.1641 0.9381 0.1528 0.9347
(5,0...,0) 0.9277 0.0032 0.9289 0.0031 0.1692 0.9350 0.1655 0.9326
(0,5,0,...,0) 0.9253 0.0016 0.9255 0.0018 0.1650 0.9357 0.1637 0.9331
20 18 (0,...,0,2) 0.9222 0.0013 0.9230 0.0013 0.1519 0.9408 0.1492 0.9390
(2,0...,0) 0.9227 0.0023 0.9241 0.0027 0.1563 0.9389 0.1535 0.9358
(0,2,0,...,0) 0.9225 0.0014 0.9247 0.0014 0.1527 0.9394 0.1508 0.9362
30 15 (0,...,0,15) 0.9239 0.0017 0.9245 0.0016 0.1567 0.9412 0.1432 0.9386
(15,0...,0) 0.9275 0.0026 0.9291 0.0032 0.1590 0.9390 0.1565 0.9379
(0,15,0,...,0) 0.9264 0.0017 0.9258 0.0018 0.1574 0.9408 0.1546 0.9381
30 20 (0,...,0,10) 0.9227 0.0013 0.9174 0.0014 0.1431 0.9433 0.1327 0.9412
(10,0...,0) 0.9261 0.0019 0.9266 0.0023 0.1466 0.9419 0.1449 0.9389
(0,10,0,...,0) 0.9239 0.0014 0.9231 0.0014 0.1439 0.9423 0.1435 0.9403
30 25 (0,...,0,5) 0.918 0.0010 0.9207 0.0011 0.1256 0.9472 0.1240 0.9435
(5,0...,0) 0.9203 0.0015 0.9225 0.0017 0.1278 0.9439 0.1269 0.9422
(0,5,0,...,0) 0.9196 0.0011 0.9216 0.0012 0.1263 0.9446 0.1247 0.9427
50 30 (0,...,0,20) 0.9216 0.0009 0.9227 0.0010 0.1065 0.9419 0.1027 0.940
(20,0...,0) 0.9241 0.0018 0.9233 0.0016 0.1093 0.9403 0.1064 0.9356
(0,20,0,...,0) 0.9223 0.0013 0.9229 0.0014 0.1076 0.9407 0.1056 0.9378
50 35 (0,...,0,15) 0.9204 0.0008 0.9219 0.0009 0.1008 0.9430 0.0958 0.9416
(15,0...,0) 0.9232 0.0011 0.9258 0.0013 0.1034 0.9412 0.1017 0.9405
(0,15,0,...,0) 0.9217 0.0009 0.9213 0.0011 0.1016 0.9414 0.1005 0.9411
50 45 (0,...,0,5) 0.9179 0.0005 0.9275 0.0006 0.0978 0.9487 0.0923 0.9473
(5,0...,0) 0.9192 0.0006 0.9210 0.0008 0.0991 0.9461 0.0975 0.9448
(0,5,0,...,0) 0.9206 0.0006 0.9208 0.0006 0.0980 0.9464 0.0962 0.9457
lengths (AL) and coverage probabilities (CP) of the CIs are also provided in Tables 1-4.
It is observed from Tables 1-4 that, for each censoring scheme, the estimates computed from larger sample sizes have smaller MSEs, as expected. The estimates of the parameters computed using the Bayesian procedure and the MLEs yield similar results. Therefore, in this case, the maximum likelihood method is preferred, since its computations are simpler than those of Tierney and Kadane's technique. It can further be observed that the asymptotic results for the MLEs perform satisfactorily, and in most of the cases the CPs are close to the predetermined nominal level. Comparing the three censoring schemes, we observe that the estimates computed under the first sampling scheme, corresponding to the well-known type II censored sampling, perform best, followed by schemes III and II, respectively.
Next, to assess the accuracy of the inferential procedures for the reliability parameter R, we generate PTII censored samples from the PL distribution by considering two sets of values for the parameters γ, δ and η, namely (γ, δ, η) = (2, 1, 1) and (2, 0.2, 1). With these choices of the parameter values, the true value of the reliability R becomes 0.5 and 0.9182, respectively. We first obtain the ML estimates of the unknown parameters by using the log-likelihood function (24) and use them to compute the MLE of the reliability R from expression (23). Also, by using relation (29), we construct 95% confidence intervals of R, and the ALs and CPs computed over 10000 replications are reported in Tables 5 and 6.
Moreover, we derive the approximate Bayes estimate and HPD credible interval of the
Table 7: Point and interval estimations of the parameters γ and δ under different progressive type II censoring schemes for example 1.
d Scheme MLE Bayes CI
51 (0*51) γ 0.9467 0.9319 (0.7618,1.1317)
 δ 0.0093 0.0128 (0.0039,0.0196)
40 (0*39,11) γ 1.0275 1.0007 (0.8027,1.3152)
 δ 0.0062 0.0079 (0.0014,0.0260)
40 (0*34,1*5,6) γ 0.9996 0.9671 (0.7785,1.2835)
 δ 0.0066 0.0084 (0.0016,0.0272)
40 (0*34,2*5,1) γ 0.9773 0.9519 (0.7592,1.2581)
 δ 0.0069 0.0087 (0.0017,0.0283)
30 (0*29,21) γ 1.0348 0.9927 (0.7684,1.3935)
 δ 0.0059 0.0085 (0.0011,0.0316)
30 (0*22,2*7,7) γ 0.9982 0.9571 (0.7767,1.3524)
 δ 0.0060 0.0083 (0.0012,0.0310)
30 (0*19,1*10,11) γ 1.0197 0.9773 (0.7539,1.3794)
 δ 0.0056 0.0081 (0.0010,0.0307)
parameter R by applying the Gibbs sampling technique. To this end, a Markov chain of size 75000 is generated and the first 25000 observations are removed to eliminate the effect of the starting values. In order to reduce the dependence among the generated samples, we take every 10th sampled value, which results in a final chain of size 5000. To investigate the convergence of the MCMC samples, we have used the idea of Gelman et al. [8] and computed the scale reduction factor estimate \sqrt{\widehat{Var}(\Lambda)/W}, in which Λ is the estimand of interest and \widehat{Var}(\Lambda) = (n - 1)W/n + Z/n, where n is the number of iterations of each chain, and W and Z are the within- and between-sequence variances, respectively. It is observed that the value of the scale factor is less than 1.1, which is an acceptable value for convergence of the MCMC chains. Finally, the means of the simulated samples are recorded as the Bayes estimates of the parameter R. The AVs and MSEs of the Bayes estimates obtained from 10000 replications, as well as the 95% credible intervals, are tabulated in Tables 5 and 6.
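For reference, the scale reduction factor described here can be computed from parallel chains stored column-wise in a matrix (a small sketch using the within/between variances W and Z of the text):

```r
# Gelman-Rubin scale reduction factor for draws stored with one chain per column
scale_reduction <- function(chains) {
  n <- nrow(chains)
  W <- mean(apply(chains, 2, var))        # within-sequence variance
  Z <- n * var(colMeans(chains))          # between-sequence variance
  sqrt(((n - 1) * W / n + Z / n) / W)     # values below about 1.1 indicate convergence
}
```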
It is found that the classical and Bayesian point estimates of R behave in a similar manner. The MSEs of all the estimates decrease as d1 and d2 increase. Also, the MSEs for the extreme value 0.9182 of R are smaller than in the case where R = 0.5. It is seen that the credible intervals of the parameter R attain slightly smaller CPs than the approximate CIs, and the lengths of all confidence and credible intervals decrease as the observed sample sizes increase.
5. Data Analysis
To illustrate the estimation procedures presented in this paper, two examples based on real-life
data sets are provided.
Example 1: The following data set reports the times (in days) from remission to relapse for 51 patients with acute nonlymphoblastic leukaemia ([7]).
304, 273, 955, 642, 239, 269, 230, 534, 197, 1160, 24, 697, 57, 395, 284, 64, 209, 90, 82, 89, 111, 117, 128, 143, 148,152,166, 171, 186,191, 223, 247, 254, 258, 264, 270, 332, 393, 487, 510, 516, 518, 518, 608, 46, 57, 304, 341, 294, 65, 90.
[?] provided various methods of estimation for these data, considering that they are drawn from a PL distribution. Here, assuming different PTII samples of sizes d = 30, 40, 51 from these data, we compute the parameter estimates using the ML and Bayesian procedures. First, we use the nlm function in the R statistical package to determine the MLEs of γ and δ. Then, assuming that the hyper-parameters take the values a1 = b1 = a2 = b2 = 2, the Bayes estimates of the parameters
Table 8: Point and interval estimations of the parameter R under different progressive type II censoring schemes for example 2.
d1, d2 Scheme MLE Bayes CI CRI
69, 65 (0*69), (0*65) 0.6388 0.6355 (0.5536,0.7240) (0.5393,0.6387)
50, 50 (0*49,19), (0*49,15) 0.6213 0.6188 (0.5377,0.7642) (0.5114,0.7313)
69, 50 (0*69), (0*49,15) 0.6293 0.6350 (0.5228,0.6943) (0.5099,0.6265)
50, 65 (0*49,19), (0*65) 0.6264 0.6260 (0.5371,0.7165) (0.5268,0.6543)
69, 50 (0*69), (0*39,1*10,5) 0.5781 0.5743 (0.4952,0.7329) (0.4628,0.6755)
50, 65 (0*39,1*10,9), (0*65) 0.6684 0.6672 (0.5618,0.7807) (0.5724,0.7639)
50, 50 (0*39,1*10,9), (0*39,1*10,5) 0.6140 0.6092 (0.4931,0.7556) (0.5044,0.7103)
50, 50 (0*44,2*5,9), (0*44,2*5,5) 0.6117 0.6104 (0.5137,0.7613) (0.4988,0.7151)
50, 65 (0*44,2*5,9), (0*65) 0.6717 0.6695 (0.5280,0.7259) (0.5734,0.7621)
40, 40 (0*39,29), (0*39,25) 0.6248 0.6196 (0.4763,0.7314) (0.4992,0.7441)
40, 40 (0*29,1*10,19), (0*29,1*10,15) 0.6204 0.6173 (0.4933,0.7295) (0.5033,0.7354)
40, 40 (0*29,2*10,9), (0*29,2*10,5) 0.6171 0.6147 (0.4719,0.7136) (0.4958,0.7280)
40, 40 (0*39,29), (0*29,1*10,15) 0.5834 0.5781 (0.4406,0.6929) (0.4581,0.6994)
40, 40 (0*39,29), (0*29,2*10,5) 0.5478 0.5438 (0.4572,0.7079) (0.4244,0.6716)
are obtained by applying Tierney and Kadane's method described in section 2. The respective estimates of the parameters along with 95% CIs are tabulated in Table 7.
Example 2: In this example we consider two data sets reported in [1] on the failure stresses of single carbon fibers of lengths 20mm and 50mm, as follows:
Data set 1: (20mm, (n = 69)) 1.312,1.314,1.479,1.552,1.700,1.803,1.861,1.865,1.944,1.958,1.966, 1.997, 2.006, 2.021, 2.027, 2.055, 2.063, 2.098, 2.140, 2.179, 2.224, 2.240, 2.253, 2.270, 2.272, 2.274, 2.301, 2.301, 2.359, 2.382, 2.382, 2.426, 2.434, 2.435, 2.478, 2.490, 2.511, 2.514, 2.535, 2.554, 2.566, 2.570, 2.586, 2.629, 2.633, 2.642, 2.648, 2.684, 2.697, 2.726, 2.770, 2.773, 2.800, 2.809, 2.818, 2.821, 2.848, 2.880, 2.954, 3.012, 3.067, 3.084, 3.090, 3.096, 3.128, 3.233, 3.433, 3.585, 3.585. Data set 2: (50mm, (k = 65)) 1.339, 1.434,1.549, 1.574, 1.589,1.613, 1.746, 1.753,1.764, 1.807, 1.812, 1.840, 1.852, 1.852, 1.862, 1.864, 1.931, 1.952, 1.974, 2.019, 2.051,2.055, 2.058, 2.088, 2.125, 2.162, 2.171, 2.172, 2.18, 2.194, 2.211, 2.270, 2.272, 2.280, 2.299, 2.308, 2.335, 2.349, 2.356, 2.386, 2.390, 2.410, 2.430, 2.431, 2.458, 2.471, 2.497, 2.514, 2.558, 2.577, 2.593, 2.601, 2.604, 2.620, 2.633, 2.670, 2.682, 2.699, 2.705, 2.735, 2.785, 3.020, 3.042, 3.116, 3.174.
Ghitany et al. [9] showed that the PL(γ, δ) model fits data sets 1 and 2 very well and computed the MLE of the reliability parameter R by using the complete samples. Now, we obtain the Bayes and ML estimates of R by using different censoring schemes. To analyze the data under the Bayesian perspective, all the hyper-parameters are taken to be 0.001. At first, samples of 70,000 realizations are generated from the posterior densities in (34)-(36) and, to diminish the effect of the initial samples, the first 20000 realizations are deleted. Then, one observation in every 5 iterations is saved to break the autocorrelation between the generated samples. For the first sampling scheme, the plot of the simulated values of R and its histogram are given in Fig. 1, which shows the convergence of the Gibbs algorithm. Table 8 reports the different estimates of R as well as the 95% confidence and credible intervals. It is observed that the Bayesian and ML estimates of the parameters are about the same; however, the widths of the CRIs are somewhat shorter than those of the CIs.
Figure 1: Simulated values of R and histogram of R.
6. Conclusions
In this paper, we have used maximum likelihood and Bayesian procedures for estimating the unknown parameters of the two-parameter PL model based on the PTII censoring scheme. The MLEs and asymptotic CIs for the parameters of interest are computed. Since the Bayes estimates of the involved parameters could not be obtained analytically, we have employed an approximation technique to derive them. Further, we have developed inferential procedures for the stress-strength reliability parameter R based on PTII censored samples. ML and Bayes point estimates of the parameter R, along with its classical and Bayesian interval estimates, are derived. In order to assess the accuracy of the various approaches, Monte Carlo simulations are conducted. It is found that, on the basis of non-informative priors, the Bayes and ML estimates have similar performances. Also, by increasing the sample sizes, the expected improvements in the performances of all estimators are observed. It must be pointed out that the Bayesian methods based on Tierney and Kadane's approximation and MCMC procedures require more expensive computations than the maximum likelihood method. However, by employing informative priors (not reported here), the Bayesian approach produces estimates with better performances.
References
[1] Bader, M.G. and Priest, A.M. (1982). Statistical aspects of fibre and bundle strength in hybrid composites. In: Hayashi, T., Kawata, K., and Umekawa, S., eds. Progress in Science and Engineering Composites. Vol. 4th International Conference on Composite Materials (ICCM-IV), Tokyo, Japan, 1129-1136.
[2] Balakrishnan, N. and Sandhu, R.A. (1995). A simple algorithm for generating progressively Type-II censored samples. The American Statistician, 49(2), 229-230.
[3] Balakrishnan, N. (2007). Progressive censoring methodology: an appraisal. Test, 16(2), 211-296.
[4] Balakrishnan, N. and Lai, C.D., (2009). Continuous Bivariate Distributions. 2nd. Springer, New York.
[5] Congdon, P. (2001). Bayesian Statistical Modeling, John Wiley and Sons, West Sussex, England.
[6] Eryilmaz, S., (2008). Multivariate stress-strength reliability model and its evaluation for coherent structures, Journal of Multivariate Analysis, 99, 1878-1887.
[7] Ebrahimi, N. (1991). On estimating change point in a mean residual life function. Sankhya A, 53(2), 206-219.
[8] Gelman, A., Carlin, J.B., Stern, H.S. and Rubin, D.B., (2003). Bayesian Data Analysis, 2nd ed., Chapman Hall, London, U.K..
[9] Ghitany, M.E., Al-Mutairi, D.K. and Aboukhamseen, S.M. (2015). Estimation of the Reliability of a Stress-Strength System from Power Lindley Distributions. Communications in Statistics-Simulation and Computation, 44(1), 118-136.
[10] Ghitany, M.E., Al-Mutairi, D.K., Balakrishnan, N. and Al-Enezi, L.J. (2013). Power Lindley distribution and associated inference. Computational Statistics and Data Analysis, 64, 20-33.
[11] Ghitany, M.E., Alqallaf, F. and Balakrishnan, N. (2014). On the maximum likelihood estimation of the parameters of Gompertz distribution based on complete and progressively Type-II censored samples. Journal of Statistical Computation and Simulation, 84(8), 1803-1812.
[12] Hanagal, D.D., (1997). Note on estimation of reliability under bivariate Pareto stress-strength model. Statistical Papers, 38, 453-459.
[13] Kim, C. and Han, K., (2009). Estimation of the scale parameter of the Rayleigh distribution under general progressive censoring. Journal of the Korean Statistical Society, 38, 239-246.
[14] Kizilaslan, F., Nadar, M., (2015). Classical and Bayesian estimation of reliability in multi-component stress-strength model based on Weibull distribution. Revista Colombiana de Estadistica, 38(2), 467-484.
[15] Kotz, S., Lumelskii, Y., Pensky, M., (2003). The Stress-Strength Model and its Generalizations: Theory and Applications. Singapore: World scientific.
[16] Krishna, H. and Kumar, K. (2011). Reliability estimation in Lindley distribution with progressively type-II right censored sample. Mathematics and Computers in Simulation, 82, 281-294.
[17] Krishnamoorthy, K., Mukherjee, S. and Guo, H., (2007). Inference on reliability in two-parameter exponential stress-strength model. Metrika, 65(3), 261-273.
[18] Kundu, D., Gupta, R.D., (2005). Estimation of P(Y < X) for generalized exponential distributions. Metrika, 61(3), 291-308.
[19] Lee, W., Wu, J., Hong, M., Lin, L. and Chan, R. (2011). Assessing the lifetime performance index of Rayleigh products based on the Bayesian estimation under progressive type II right censored samples. Journal of Computational and Applied Mathematics, 235, 1676-1688.
[20] Pak, A., Parham, G.H., Saraj, M., (2014). Inferences on the Competing Risk Reliability Problem for Exponential Distribution Based on Fuzzy Data. IEEE Transactions on Reliability, 63(1), 1-10.
[21] Pradhan, B. and Kundu, D. (2009). On progressively censored generalized exponential distribution. Test, 18, 497-515.
[22] R Development Core Team, (2011). A Language and Environment for Statistical Computing: R Foundation for Statistical Computing, Vienna, Austria.
[23] Rao, C.R., (1965). Linear Statistical Inference and Its Applications, John Wiley and Sons, New York.
[24] Tierney, L. and Kadane, J.B., (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81, 82-86.
[25] Valiollahi, R., Raqab, M.Z., Asgharzadeh, A. and Alqallaf, F.A., (2018). Estimation and prediction for power Lindley distribution under progressively type II right censored samples. Mathematics and Computers in Simulation, 149(C), 32-47.