Length Biased Exponential Distribution as a Reliability Model: a Bayesian Approach
Jismi Mathew & Sebastian George
Department of Statistics, Vimala College (Autonomous), Thrissur, Kerala, India
St. Thomas College, Pala, Kerala, India
[email protected]
Abstract
In this paper, properties of the Length biased Exponential distribution are derived via a Bayesian approach under various loss functions. These include the Bayes estimators and posterior risks used in the simulation study. The comparison is based on the performance of the Bayes estimates of the parameter under different loss functions with respect to the posterior risk. The reliability characteristics of this distribution are also obtained.
Keywords: Bayes estimator, Failure rate function, Length biased exponential distribution, Posterior risk, Reliability, Survival analysis.
1 Introduction
Weighted distributions take into account the method of ascertainment, by adjusting the probabilities of the actual occurrence of events to arrive at a specification of the probabilities of those events as observed and recorded [4]. To introduce the concept of a weighted distribution, suppose that X is a random variable (r.v.) with natural probability density function (PDF) g(x|θ), where the natural parameter θ ∈ Θ (Θ is the parameter space). A weighted distribution with kernel g(x|θ) and weight function w(x, β) is defined as
\[
f(x \mid \theta, \beta) = \frac{w(x, \beta)\, g(x \mid \theta)}{C(\theta, \beta)} \tag{1}
\]
where w(x, β) > 0 and C(θ, β) = E[w(X, β)]. When X is a non-negative random variable and w(x, β) = x, the resulting weighted distribution is known as the length-biased distribution.
The study is organized as follows. In Section 2 we consider the derivation of the posterior distribution using non-informative and informative priors. In Section 3 we derive the Bayes estimators and the corresponding posterior risks under various loss functions for the different priors. A simulation study of the Bayes estimators and their posterior risks is performed in Section 4. In Section 5 we derive a characterization property of the LBE distribution. The reliability characteristics of the distribution are obtained in Section 6, and finally Section 7 presents the conclusion.
2 Length biased Exponential distribution
A random variable X is said to possess a Length biased Exponential distribution if it has the following probability density function,
\[
f(x \mid \theta) = \frac{x}{\theta^{2}}\, e^{-x/\theta}, \qquad x, \theta > 0 \tag{2}
\]
and the cumulative distribution function(CDF) is,
\[
F(x \mid \theta) = 1 - \left(1 + \frac{x}{\theta}\right) e^{-x/\theta}, \qquad x, \theta > 0 \tag{3}
\]
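As a quick check (not spelled out explicitly in the text), the density (2) follows from the weighted-distribution definition (1) by taking the exponential kernel g(x|θ) = (1/θ)e^{-x/θ} and the weight w(x) = x, for which C(θ) = E(X) = θ:
\[
f(x \mid \theta) = \frac{x\, g(x \mid \theta)}{E(X)} = \frac{x}{\theta}\cdot\frac{1}{\theta}\, e^{-x/\theta} = \frac{x}{\theta^{2}}\, e^{-x/\theta}, \qquad x, \theta > 0,
\]
and integrating this density from 0 to x gives the CDF (3).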
The likelihood function for a random sample x1, x2, ..., xn drawn from the Length biased Exponential distribution is:
\[
L(x, \theta) = \prod_{i=1}^{n} \frac{x_i}{\theta^{2}}\, e^{-x_i/\theta} = \frac{\prod_{i=1}^{n} x_i}{\theta^{2n}}\, e^{-\sum_{i=1}^{n} x_i/\theta}, \qquad x, \theta > 0 \tag{4}
\]
The posterior distribution combines the probabilistic information about the parameter contained in the prior distribution with the sample information contained in the likelihood function. The likelihood principle suggests that the information about the parameter depends only on its posterior distribution. In this section, we use the Length biased Exponential model as the sampling distribution, with the Jeffreys prior as a non-informative prior and the inverse gamma distribution as a conjugate prior, for the derivation of the posterior distribution.
2.1 Posterior Distribution Using the Jeffreys Prior
The posterior distribution based on the Jeffreys prior (JP) may be used as a standard or a reference for the class of posterior distributions obtained from other priors. [3] proposed a formal rule for obtaining a non-informative prior: if θ is a k-vector valued parameter, then the JP of θ is p(θ) ∝ √det I(θ), where I(θ) is the k × k Fisher information matrix whose (i, j)th element is I_ij(θ) = -E[∂² log f(X|θ) / ∂θ_i ∂θ_j], i, j = 1, 2, ..., k. The Fisher information matrix is not directly related to the notion of lack of information; the connection comes from the role of the Fisher matrix in asymptotic theory. Jeffreys non-informative priors based on the Fisher information matrix often lead to a family of improper priors. The Jeffreys prior of the parameter θ is:
\[
p(\theta) \propto \sqrt{I(\theta)}, \qquad I(\theta) = -E\!\left[\frac{\partial^{2} \log f(X \mid \theta)}{\partial \theta^{2}}\right] = \frac{2}{\theta^{2}},
\qquad\text{so that}\qquad
p(\theta) \propto \frac{1}{\theta}, \quad \theta > 0,
\]
and the posterior distribution using Jeffreys Prior is
\[
p(\theta \mid x) = \frac{\left(\sum_{i=1}^{n} x_i\right)^{2n}}{\Gamma(2n)}\, \frac{e^{-\sum_{i=1}^{n} x_i/\theta}}{\theta^{2n+1}}, \qquad \theta > 0 \tag{5}
\]
2.2 Posterior distribution using conjugate prior
If the prior is selected so that the prior and the posterior belong to the same family of distributions, then the prior is called a conjugate prior (CP); for more discussion of conjugate priors see [1]. We assume the natural conjugate prior for θ to be the inverse gamma distribution, defined by
\[
p(\theta) = \frac{a^{b}}{\Gamma(b)}\, \frac{e^{-a/\theta}}{\theta^{\,b+1}}, \qquad \theta > 0, \; a, b > 0. \tag{6}
\]
The posterior distribution using the inverse gamma prior is
\[
p(\theta \mid x) = \frac{\left(a + \sum_{i=1}^{n} x_i\right)^{b+2n}}{\Gamma(b+2n)}\, \frac{e^{-\left(a + \sum_{i=1}^{n} x_i\right)/\theta}}{\theta^{\,b+2n+1}}, \qquad \theta > 0 \tag{7}
\]
Lemma: For the posterior distribution (7), we have
1. E(θ|x) = (a + Σ_{i=1}^n x_i) / (b + 2n - 1)

2. E(θ²|x) = (a + Σ_{i=1}^n x_i)² / [(b + 2n - 1)(b + 2n - 2)]

3. E(θ⁻¹|x) = (b + 2n) / (a + Σ_{i=1}^n x_i)

4. E(θ⁻²|x) = (b + 2n)(b + 2n + 1) / (a + Σ_{i=1}^n x_i)²
Proof: The results can be simply derived from the definition
\[
E(\theta^{k} \mid x) = \int_{0}^{\infty} \theta^{k}\, p(\theta \mid x)\, d\theta \tag{8}
\]
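For example, filling in the gamma integral for the first result (the remaining moments follow in the same way), write c = a + Σ_{i=1}^n x_i and k = b + 2n; then
\[
E(\theta \mid x) = \int_{0}^{\infty} \theta\, \frac{c^{k}}{\Gamma(k)}\, \frac{e^{-c/\theta}}{\theta^{\,k+1}}\, d\theta
= \frac{c^{k}}{\Gamma(k)} \int_{0}^{\infty} \theta^{-k}\, e^{-c/\theta}\, d\theta
= \frac{c^{k}}{\Gamma(k)}\cdot \frac{\Gamma(k-1)}{c^{\,k-1}}
= \frac{a + \sum_{i=1}^{n} x_i}{b + 2n - 1}.
\]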
Using the Lemma, the Bayes estimators of the parameter θ can be obtained directly.
3 Bayes estimators and Posterior risk under different loss functions
This section gives the derivation of the Bayes estimators under different loss functions and their respective posterior risks. The results are compared for the Jeffreys prior and the conjugate prior. The Bayes estimators are determined under the squared error loss function (SELF), the weighted squared error loss function (WSELF), the modified squared error loss function (MSELF), the precautionary loss function (PLF) and the K-loss function (KLF). Table 1 shows the Bayes estimators and their posterior risks under the various loss functions, and Tables 2 and 3 give the Bayes estimators of θ under the different loss functions, along with their posterior risks, using the Jeffreys and conjugate priors respectively.
Table 1: Bayes Estimator and Posterior Risk under different Loss Functions.
Loss Function | Bayes Estimator | Posterior Risk
L1 = SELF = (θ - d)² | E(θ|x) | Var(θ|x)
L2 = WSELF = (θ - d)²/θ | (E(θ⁻¹|x))⁻¹ | E(θ|x) - (E(θ⁻¹|x))⁻¹
L3 = MSELF = (1 - d/θ)² | E(θ⁻¹|x)/E(θ⁻²|x) | 1 - (E(θ⁻¹|x))²/E(θ⁻²|x)
L4 = PLF = (θ - d)²/d | √E(θ²|x) | 2(√E(θ²|x) - E(θ|x))
L5 = KLF = (√(d/θ) - √(θ/d))² | √(E(θ|x)/E(θ⁻¹|x)) | 2(√(E(θ|x)E(θ⁻¹|x)) - 1)
Table 2: Bayes Estimator and Posterior Risk under different Loss Functions using Jeffreys prior.
Loss Function | Bayes Estimator | Posterior Risk
L1 | Σx_i/(2n-1) | (Σx_i)²/[(2n-1)²(2n-2)]
L2 | Σx_i/(2n) | Σx_i/[(2n-1)2n]
L3 | Σx_i/(2n+1) | (Σx_i+2n)(Σx_i-2n)/[2n(2n+1)]
L4 | Σx_i/√((2n-1)(2n-2)) | 2Σx_i(√(2n-1)-√(2n-2))/[(2n-1)√(2n-2)]
L5 | Σx_i/√((2n-1)2n) | 2(√(2n/(2n-1))-1)

(Here Σx_i denotes the sample total Σ_{i=1}^n x_i.)
Table 3: Bayes Estimator and Posterior Risk under different Loss Functions using Conjugate prior.
Loss Function | Bayes Estimator | Posterior Risk
L1 | (a+Σx_i)/(b+2n-1) | (a+Σx_i)²/[(b+2n-1)²(b+2n-2)]
L2 | (a+Σx_i)/(b+2n) | (a+Σx_i)/[(b+2n-1)(b+2n)]
L3 | (a+Σx_i)/(b+2n+1) | [(a+Σx_i)+(b+2n)][(a+Σx_i)-(b+2n)]/[(b+2n)(b+2n+1)]
L4 | (a+Σx_i)/√((b+2n-1)(b+2n-2)) | 2(a+Σx_i)(√(b+2n-1)-√(b+2n-2))/[(b+2n-1)√(b+2n-2)]
L5 | (a+Σx_i)/√((b+2n-1)(b+2n)) | 2(√((b+2n)/(b+2n-1))-1)
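For completeness, the following minimal R sketch (our own illustration, not the authors' code) evaluates the Table 1 estimators and posterior risks from the posterior moments in the Lemma; setting a = b = 0 recovers the Jeffreys-prior expressions of Table 2, while nonzero a and b give the conjugate-prior expressions of Table 3.

bayes_estimates <- function(x, a = 0, b = 0) {
  n <- length(x)
  s <- a + sum(x)                        # a + sum of x_i
  k <- b + 2 * n                         # b + 2n
  m1  <- s / (k - 1)                     # E(theta | x)
  m2  <- s^2 / ((k - 1) * (k - 2))       # E(theta^2 | x)
  mi1 <- k / s                           # E(theta^-1 | x)
  mi2 <- k * (k + 1) / s^2               # E(theta^-2 | x)
  list(
    SELF  = c(estimate = m1,             risk = m2 - m1^2),
    WSELF = c(estimate = 1 / mi1,        risk = m1 - 1 / mi1),
    MSELF = c(estimate = mi1 / mi2,      risk = 1 - mi1^2 / mi2),
    PLF   = c(estimate = sqrt(m2),       risk = 2 * (sqrt(m2) - m1)),
    KLF   = c(estimate = sqrt(m1 / mi1), risk = 2 * (sqrt(m1 * mi1) - 1))
  )
}

# Example (hypothetical data): compare the two priors on the same sample
# x <- c(1.2, 0.8, 2.1, 1.7, 0.9)
# bayes_estimates(x)                  # Jeffreys prior (a = b = 0)
# bayes_estimates(x, a = 3, b = 2)    # conjugate prior used in the simulation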
4 Simulation of Bayes Estimates and Posterior Risk
The inverse cumulative distribution function (CDF) method is commonly used for generating random variates. For an arbitrary CDF, define F⁻¹(u) = min{x : F(x) ≥ u}. The inverse CDF method cannot be applied directly to the Length biased Exponential distribution because a closed-form expression for its quantile function is not available. Here, we use Newton's method to compute the quantile function numerically. The following algorithm from [6] is used for this purpose:
Algorithm
1. Set n, θ and an initial value x0.
2. Generate U ~ Uniform(0, 1).
3. Update x0 using Newton's formula, xa = x0 - R(x0, θ), with R(x0, θ) = (F(x0; θ) - U) / f(x0; θ),
where f(·) and F(·) are the PDF and CDF of the Length biased Exponential distribution, respectively.
4. If |x0 - xa| < ε (ε > 0, a very small tolerance limit), store x = xa as a sample from the Length biased Exponential distribution.
5. If |x0 - xa| ≥ ε, then set x0 = xa and go to step 3.
6. Repeat steps 2-5, n times, to obtain x1, x2, ..., xn.
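The following is a minimal R sketch of the algorithm above (our own illustration; the study itself uses the R codes of [5]). The function name rlbe, the starting value x0 = θ and the guard against non-positive iterates are our choices, not part of the original algorithm.

rlbe <- function(n, theta, tol = 1e-8, max_iter = 100) {
  pdf <- function(x) (x / theta^2) * exp(-x / theta)        # density (2)
  cdf <- function(x) 1 - (1 + x / theta) * exp(-x / theta)  # CDF (3)
  out <- numeric(n)
  for (i in 1:n) {
    u  <- runif(1)                         # step 2: U ~ Uniform(0, 1)
    x0 <- theta                            # step 1: initial value (our choice)
    for (iter in 1:max_iter) {
      xa <- x0 - (cdf(x0) - u) / pdf(x0)   # step 3: Newton update
      if (xa <= 0) xa <- x0 / 2            # guard: keep the iterate positive
      if (abs(x0 - xa) < tol) break        # step 4: tolerance reached
      x0 <- xa                             # step 5: iterate again
    }
    out[i] <- xa
  }
  out
}

# Example: the mean of the LBE distribution is 2 * theta, so this should be near 4
# set.seed(1); mean(rlbe(10000, theta = 2))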
We use the R codes given in [5] for the algorithm. On the basis of the simulated samples, the Bayes estimators of θ are compared in terms of their average risks. The comparisons are based on ten thousand samples drawn from the Length biased Exponential distribution. The performance of the estimators of the Length biased Exponential parameter is studied for different sets of values of θ and n. Under the conjugate prior, we take a = 3 and b = 2, so that the prior mean is 1. From the simulation results (Tables 4-8), we reach the following conclusions.
As the sample size increases, the posterior risk decreases, and as the parameter value increases, the posterior risk also increases. As far as prior selection is concerned, the CP has smaller posterior risk than the JP. As far as the choice of loss function is concerned, one can observe that the symmetric loss function has smaller posterior risk than the asymmetric loss functions; in particular, the PLF has greater posterior risk than the other loss functions. The Bayes estimator obtained under SELF overestimates the parameter when the Jeffreys prior is used for θ. The Bayes estimator obtained under WSELF underestimates the parameter when the conjugate prior is assumed for θ. The Bayes estimator obtained under MSELF underestimates the parameter under both the Jeffreys and the conjugate prior. The Bayes estimator obtained under PLF overestimates the parameter when the Jeffreys prior is used for θ. The Bayes estimator obtained under KLF overestimates the parameter under the Jeffreys prior but underestimates it under the conjugate prior.
Table 4: Bayes Estimator and Posterior Risk under SELF.
θ 1 1.5 2 2.5 3
n JP
10 1.050839(0.057050) 1.577861(0.128166) 2.105983(0.240905) 2.621651(0.361315) 3.151374 (0.523698)
20 1.025913(0.026958) 1.537422(0.0617144) 2.047911(0.108105) 2.565001(0.168103) 3.075444 (0.245407)
30 1.016123(0.017494) 1.523956(0.038925) 2.035946(0.073026) 2.546169(0.112006) 3.046986(0.159117)
40 1.013658(0.012998) 1.516555(0.029366) 2.026467(0.052290) 2.531208(0.081310) 3.040588(0.118794)
50 1.009951(0.010188) 1.516402(0.023346) 2.021150(0.040501) 2.521196(0.064712) 3.037411(0.094524)
n CP
10 1.001784(0.041110) 1.452043(0.093017) 1.910218(0.172565) 2.364622(0.277175) 2.805224(0.411331)
20 0.9992899(0.022447) 1.476889(0.051386) 1.955403(0.091665) 2.424122(0.146191) 2.909636(0.212392)
30 1.000618(0.015602) 1.484401(0.036184) 1.970960(0.063898) 2.447151(0.099446) 2.937984(0.145412)
40 1.002039(0.012052) 1.491129(0.027227) 1.971900(0.047652) 2.462013(0.075992) 2.950605(0.108099)
50 0.9996826(0.009717) 1.489947(0.021964) 1.978285(0.038388) 2.472500(0.060986) 2.956754(0.088349)
Table 5: Bayes Estimator and Posterior Risk under WSELF.
θ 1 1.5 2 2.5 3
n JP
10 0.998860(0.050155) 1.500322(0.110952) 2.000823 (0.201593) 2.492451(0.304005) 2.996755(0.448970)
20 1.001486(0.025353) 1.499948(0.056375) 2.006655(0.101651) 2.49867(0.154002) 2.996174(0.225370)
30 1.000853(0.016561) 1.497054(0.036835) 1.998304(0.067076) 2.496211(0.104501) 2.999777(0.151519)
40 1.000404(0.012403) 1.49859(0.027182) 1.998168(0.050678) 2.504209(0.080532) 2.99885(0.115215)
50 1.001162(0.009948) 1.50144(0.022596) 1.996856(0.039692) 2.503607(0.063678) 3.000517(0.090971)
n CP
10 0.957114(0.039384) 1.396833(0.096399) 1.834240(0.181456) 2.263538(0.293588) 2.692223(0.439493)
20 0.974198(0.022048) 1.442250(0.051543) 1.908784(0.095777) 2.368487(0.153005) 2.831670(0.223166)
30 0.984746(0.015216) 1.458896(0.035855) 1.938499(0.065078) 2.409828(0.101718) 2.884782(0.151596)
40 0.988603(0.011672) 1.470956(0.027086) 1.94954(0.048403) 2.429858(0.076752) 2.911890(0.110910)
50 0.987665(0.011788) 1.475537(0.022217) 1.960522(0.038718) 2.446095(0.062097) 2.933175(0.091132)
Table 6: Bayes Estimator and Posterior Risk under MSELF.
θ 1 1.5 2 2.5 3
n JP
10 0.9538054(0.047443) 1.427201(0.106207) 1.912552(0.190068) 2.384794(0.298815) 2.852106(0.429349)
20 0.975606(0.024253) 1.45857(0.055761) 1.952824(0.096199) 2.441506(0.152615) 2.932368(0.221503)
30 0.983787(0.016673) 1.473163(0.036916) 1.969054(0.066145) 2.45936(0.102367) 2.953521(0.144153)
40 0.9877532(0.012546) 1.480203(0.027746) 1.971907(0.049682) 2.474515(0.076571) 2.967487(0.113820)
50 0.9905735(0.010151) 1.485611(0.022328) 1.981082(0.038633) 2.475351(0.061634) 2.973353(0.090230)
n CP
10 0.915634(0.042617) 1.336096(0.105570) 1.752156(0.198349) 2.165991(0.326690) 2.583195(0.484528)
20 0.952708(0.022514) 1.408673(0.054173) 1.862613(0.099694) 2.313020(0.1618012) 2.770039(0.237690)
30 0.970615(0.016079) 1.439304(0.036717) 1.906157(0.067063) 2.379814(0.105828) 2.835735(0.157739)
40 0.976914(0.011966) 1.434702(0.036722) 1.925851(0.050425) 2.398621(0.083745) 2.839411(0.156844)
50 0.981895(0.009633) 1.460042(0.022305) 1.943517(0.040534) 2.422901(0.063676) 2.905176(0.090337)
Table 7: Bayes Estimator and Posterior Risk under PLF.
θ 1 1.5 2 2.5 3
n JP
10 1.080143(0.064980) 1.626697(0.153575) 2.155806(0.254323) 2.700976(0.405576) 3.238162(0.066076)
20 1.040659(0.028565) 1.555354(0.063628) 2.083347(0.115160) 2.597052(0.174577) 3.116272(0.254988)
30 1.024338(0.018080) 1.541264(0.041020) 2.051707(0.071975) 2.562563(0.114311) 3.079783(0.162872)
40 1.019863(0.013347) 1.528745(0.030176) 2.038116(0.054966) 2.547378(0.083676) 3.052076(0.119659)
50 1.014274(0.010459) 1.524923(0.024347) 2.031945(0.043019) 2.537732(0.066076) 3.043194(0.094411)
n CP
10 1.026494(0.044506) 1.460923(0.094313) 1.913013(0.176025) 2.366208(0.274394) 2.825168(0.403514)
20 1.013772(0.024099) 1.478809(0.051262) 1.958041(0.093682) 2.432739(0.143970) 2.908967(0.212538)
30 1.008354(0.016062) 1.481696(0.034607) 1.966843(0.062606) 2.450746(0.098866) 2.936851(0.146527)
40 1.005892(0.011971) 1.489377(0.026776) 1.974448(0.048959) 2.46400(0.076449) 2.954996 (0.108458)
50 1.005354(0.009836) 1.491817(0.021753) 1.980016(0.038384) 2.470013(0.060978) 2.952238(0.089848)
Table 8: Bayes Estimator and Posterior Risk under KLF.
θ 1 1.5 2 2.5 3
n JP
10 1.025750(0.052648) 1.542011(0.122813) 2.051425(0.215322) 2.560207(0.333654) 3.088071(0.496273)
20 1.013489(0.025740) 1.518582(0.058053) 2.030651(0.104347) 2.535404(0.164987) 3.037618(0.232088)
30 1.010831(0.017366) 1.510478(0.037983) 2.018387(0.068729) 2.518961(0.103003) 3.023398(0.152086)
40 1.007564(0.012665) 1.51111(0.028126) 2.011791(0.049675) 2.511053(0.080001) 3.011605(0.113605)
50 1.005230(0.010113) 1.505956(0.023395) 2.012489(0.040545) 2.514406(0.062844) 3.014673(0.092787)
n CP
10 0.979370(0.038938) 1.417620(0.093463) 1.868669(0.177487) 2.309896(0.284355) 2.753182(0.415612)
20 0.987159(0.021805) 1.460179(0.051668) 1.938010(0.092811) 2.393906(0.148467) 2.870227(0.214058)
30 0.989702(0.015561) 1.474116(0.035223) 1.954692(0.062533) 2.433272(0.098115) 2.910851 (0.145990)
40 0.992784(0.011793) 1.480878(0.026994) 1.960131(0.047240) 2.450976(0.076274) 2.933232 (0.109091)
50 0.993924(0.009522) 1.483697(0.021450) 1.971755(0.038569) 2.454816(0.061290) 2.944691(0.088662)
5 Characterization Property
Result: The length biased exponential distribution and the distribution of X1 + X2 are the same if and only if X1 and X2 are independent and identically distributed exponential random variables with parameter θ.
Proof: Necessary Part
Suppose X1, X2 ~ iid exponential(θ) with g(x|θ) = (1/θ)e^{-x/θ}. Then
\[
f(x \mid \theta) = \frac{x\, g(x \mid \theta)}{E(X)} = \frac{x}{\theta^{2}}\, e^{-x/\theta},
\]
which is the pdf of Y = X1 + X2.
Sufficiency Part
Suppose that the length biased distribution of X and the distribution of Y = X1 + X2 are the same. The characteristic function of the length biased distribution can be obtained as follows. Let ψ(t) be the characteristic function of X.
\[
\psi(t) = \int_{-\infty}^{\infty} e^{itx} g(x)\, dx
\;\Longrightarrow\;
\psi'(t) = i \int_{0}^{\infty} e^{itx}\, x\, g(x)\, dx
\;\Longrightarrow\;
\frac{\psi'(t)}{i\mu} = \int_{0}^{\infty} e^{itx} f(x)\, dx,
\]
provided g(x) is a density with support (0, ∞), where μ = E(X) and f(x) = x g(x)/μ is the length biased density.
Hence ψ_l(t) = ψ'(t)/(iμ) is the characteristic function of the length biased distribution. Now, under the assumption of the sufficiency part,
\[
\psi_{X_1 + X_2}(t) = [\psi(t)]^{2}, \qquad \text{so} \qquad \frac{\psi'(t)}{i\mu} = [\psi(t)]^{2} \;\Longrightarrow\; y' - i\mu\, y^{2} = 0, \quad y = \psi(t).
\]
Solving this differential equation with y(0) = 1, we get the solution y = ψ(t) = (1 - iμt)⁻¹, which is the characteristic function of an exponential distribution; hence X1 and X2 are exponential. This completes the proof.
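As a check of the necessary part via characteristic functions (our own verification): the density (2) is a gamma(2, θ) density, and the sum of two independent exponential(θ) variables is also gamma(2, θ), since
\[
\psi_{X_1 + X_2}(t) = \left[\psi_{X_1}(t)\right]^{2} = \left(1 - i\theta t\right)^{-2}
\qquad \text{and} \qquad
\int_{0}^{\infty} e^{itx}\, \frac{x}{\theta^{2}}\, e^{-x/\theta}\, dx = \left(1 - i\theta t\right)^{-2}.
\]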
6 Reliability characteristics of Length Biased Exponential distribution
In this section, we consider Length Biased Exponential distribution as a lifetime model and study some reliability characteristics. The reliability function of the Length Biased Exponential distribution is given by,
\[
R(t) = \bar{F}(t) = P(X > t) = 1 - F(t) = \int_{t}^{\infty} \frac{x}{\theta^{2}}\, e^{-x/\theta}\, dx = \left(1 + \frac{t}{\theta}\right) e^{-t/\theta}, \qquad t, \theta > 0 \tag{9}
\]
The mean residual life function (MRLF) is given by,
\[
S(x) = \frac{\theta(2\theta + x)}{\theta + x}, \qquad x > 0 \tag{10}
\]
The hazard rate function is given by,
\[
h(x) = \frac{x}{\theta(\theta + x)}, \qquad x, \theta > 0 \tag{11}
\]
The cumulative hazard function is given by,
\[
H(x) = -\log \bar{F}(x) = -\log R(x) = -\log\!\left[\left(1 + \frac{x}{\theta}\right) e^{-x/\theta}\right], \qquad x, \theta > 0 \tag{12}
\]
The conditional survival function is given by,
\[
R(x \mid t) = \frac{R(x + t)}{R(t)} = \left(1 + \frac{x}{\theta + t}\right) e^{-x/\theta}, \qquad \theta, x, t > 0, \; R(\cdot) > 0 \tag{13}
\]
The failure rate average (FRA) is given by,
\[
\mathrm{FRA}(x) = \frac{H(x)}{x} = \frac{-\log\!\left[\left(1 + \frac{x}{\theta}\right) e^{-x/\theta}\right]}{x}, \qquad x > 0 \tag{14}
\]
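The reliability quantities (9)-(14) are straightforward to evaluate numerically; the following minimal R sketch (our own illustration) collects them in one function.

lbe_reliability <- function(x, theta) {
  R <- (1 + x / theta) * exp(-x / theta)                   # reliability (9)
  list(
    reliability = R,
    mrl         = theta * (2 * theta + x) / (theta + x),   # mean residual life (10)
    hazard      = x / (theta * (theta + x)),               # hazard rate (11)
    cum_hazard  = -log(R),                                 # cumulative hazard (12)
    fra         = -log(R) / x                              # failure rate average (14)
  )
}

# Example: hazard rate at x = 1, 2, 3 for theta = 2 (increasing in x, i.e., IFR)
# lbe_reliability(c(1, 2, 3), theta = 2)$hazard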
In this case R(x|t) ≤ R(x) for all x, t ≥ 0, and hence, by [2], we can conclude that the distribution of X belongs to the new better than used (NBU) class. The reversed hazard function of the LBE distribution is given by,
\[
r(t) = \frac{f(t)}{F(t)} = \frac{t\, e^{-t/\theta}}{\theta^{2}\left[1 - \left(1 + \dfrac{t}{\theta}\right) e^{-t/\theta}\right]}, \qquad t, \theta > 0 \tag{15}
\]
It has been noted that the reversed hazard function, like the hazard rate function, uniquely determines the corresponding probability density function. Figure 1 shows the reversed hazard function of the LBE distribution for various values of the parameter. The hazard rate function h(t) of the LBE distribution is increasing, so the distribution has an increasing failure rate (IFR).
Figure 1: Reversed Hazard Function
7 Conclusion
We have considered the Bayesian analysis of the Length biased Exponential model under informative and non-informative priors and different loss functions. Based on the posterior distributions, we conclude that the informative prior (CP) has smaller posterior risk than the non-informative prior (JP). As far as the selection of the loss function is concerned, one can observe that the KLF is more suitable than the other asymmetric loss functions. We also conclude that the posterior risk decreases as the sample size increases.
References
[1] Bansal, A. K. (2007). Bayesian Parametric Inference. Alpha Science International Limited.
[2] Gupta, R. C. and Gupta, R. D. (2007). Proportional reversed hazard model and its applications. Journal of Statistical Planning and Inference, 137:3525-3536.
[3] Jeffreys, H. (1964). Theory of Probability, third edition. Oxford University Press.
[4] Patil, G. P. (2002). Weighted distributions. Encyclopedia of Environmetrics, 4:2369-2377.
[5] Sharma, V. K., Singh, S. K., Singh, U. and Merovci, F. (2016). The generalized inverse Lindley distribution: A new inverse statistical model for the study of upside-down bathtub data. Communications in Statistics - Theory and Methods, 45:5709-5729.
[6] Sharma, V. K., Dey, S., Singh, S. K. and Manzoor, U. (2018). On length and area biased Maxwell distributions. Communications in Statistics - Simulation and Computation, 47:1506-1528.
Received: July 04, 2020 Accepted: September 10, 2020