STATISTICAL PROPERTIES AND APPLICATION OF A TRANSFORMED LIFETIME DISTRIBUTION: INVERSE MUTH DISTRIBUTION

Agni Saroj1, Prashant K. Sonker2 and Mukesh Kumar*3

1,2Department of Statistics, Banaras Hindu University, Varanasi, 221005, India. 3Department of Statistics, MMV, Banaras Hindu University, Varanasi 221005, India. E-mail: [email protected], [email protected], *[email protected]
*Corresponding Author

Abstract

In this paper we propose a transformed distribution called the inverse Muth (IM) distribution. Expressions for its probability density function (pdf), cumulative distribution function (cdf), reliability function and hazard function are given. Statistical properties such as the quantile function, moments, skewness and kurtosis are derived. Maximum likelihood estimation (MLE) and maximum product spacing estimation (MPSE) are used to estimate the parameters. The IM distribution is positively skewed and its hazard rate has an upside-down bathtub (UBT) shape. An important finding of the study is that the moments of the IM distribution do not exist. A real dataset (the active repair times for an airborne communication transceiver) is used for illustration, after taking a natural extension of the IM distribution. It is expected that the proposed model will be useful as a lifetime model in the field of reliability.

Keywords: Inverse Muth distribution, quantile function, maximum likelihood estimation, maximum product spacing estimation, real data analysis.

1. Introduction

In the statistical literature there exist many distributions that are useful in various fields of science. Statistical distributions describe the probabilistic behavior of random phenomena and play an important role in the analysis of different types of data arising from various fields.

In the field of reliability, various lifetime distributions have been derived that are preferred in reliability analysis and lifetime investigation, see Martz & Waller [1]; the observed failure rate may be increasing, decreasing or bathtub shaped. Some distributions (Maxwell, normal, Gompertz, etc.) have only an increasing failure rate, whereas the gamma, Weibull and other distributions allow increasing, decreasing as well as constant failure rates. In many situations the failure rate increases consistently and, after reaching a peak, starts to decrease, as discussed in Bennett [2] and Langlands et al. [3]. Such a failure rate is called a UBT failure rate, see Sharma et al. [4]. The Muth distribution, defined for a continuous random variable, was introduced by Muth [5] in 1977 for reliability analysis. Let a random variable Y follow the Muth distribution with shape parameter α; its pdf is defined as
f(y; α) = (e^(αy) − α) exp{αy − (1/α)(e^(αy) − 1)},   y > 0, α ∈ (0,1]        (1)

f(y; α) = 0,   otherwise
The cdf is given by,
F(y; α) = 1 − exp{αy − (1/α)(e^(αy) − 1)},   y > 0, α ∈ (0,1]        (2)
Muth [5] mainly focused on distributions with strictly positive memory. The basic statistical properties of the Muth distribution are discussed by Jodra et al. [6]. The reliability function and hazard function are given, respectively, by
R(t) = P[Y > t] = exp{αt − (1/α)(e^(αt) − 1)},   t > 0, α ∈ (0,1]        (3)

h(t) = f(t)/R(t) = [(e^(αt) − α) exp{αt − (1/α)(e^(αt) − 1)}] / exp{αt − (1/α)(e^(αt) − 1)} = e^(αt) − α,   t > 0, α ∈ (0,1]        (4)
The pdf, cdf, reliability and hazard functions of the Muth distribution are plotted in Figure 1 for different values of the parameter α.
Figure 1: pdf, cdf, reliability and hazard functions of Muth Distribution.
A natural extension, obtained by adding a scale parameter, is also considered in Jodra et al. [6] and named the scaled Muth distribution. A transformed distribution called the power Muth (PM) distribution was proposed by Jodra et al. [7]. The exponentiated PM distribution and the inverse PM distribution were proposed by Irshad et al. [8] and Chesneau & Agiwal [9], respectively. Some other literature on the Muth distribution is discussed in Almarashi & Elgarhy [10], Al-Babtain et al. [11] and Bicer et al. [12]. In Figure 1, the hazard rate of the Muth distribution is increasing. Sharma et al. [4] explained that a UBT-shaped failure rate often arises when the inverse transformation of a usual distribution is taken; in the case of the inverse PM distribution the hazard rate is indeed found to be UBT shaped. In this article we propose a transformed distribution termed the IM distribution. The rest of the article is arranged as follows. In Section 2, statistical properties of the proposed distribution are discussed. In Section 3, we obtain the estimates of the parameter α using MLE and MPSE. In Section 4, we give the expressions for the asymptotic confidence intervals in the cases of MLE and MPSE. In Section 5, a scale transformation of the IM distribution is taken and its parameters are estimated. In Section 6, a simulation study is carried out to compute the estimates of the parameters for the IM and scaled inverse Muth (SIM) distributions, respectively. In Section 7, a real data analysis is performed to show the applicability of the SIM distribution. Finally, the conclusions of this article are given in Section 8.
2. Inverse Muth Distribution
Let Y be a random variable following the Muth distribution with pdf in equation (1) and cdf in equation (2). On taking the inverse transformation X = 1/Y, the pdf of the IM distribution is obtained as
f(x; α) = (1/x²)(e^(α/x) − α) exp{α/x − (1/α)(e^(α/x) − 1)},   x > 0, α ∈ (0,1]        (5)

f(x; α) = 0,   otherwise
The cdf is given by,
F(x; α) = exp{α/x − (1/α)(e^(α/x) − 1)},   x > 0, α ∈ (0,1]        (6)

Some statistical properties of the IM distribution are discussed below.
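The pdf (5) and cdf (6) are simple to code. The following Python sketch (the paper's own computations use R; the function names here are ours) checks numerically that differentiating the cdf recovers the pdf and that the cdf increases from 0 to 1:

```python
import math

def im_pdf(x, a):
    # Inverse Muth pdf, equation (5); a is the shape parameter, a in (0, 1].
    return (math.exp(a / x) - a) / x**2 * math.exp(a / x - (math.exp(a / x) - 1.0) / a)

def im_cdf(x, a):
    # Inverse Muth cdf, equation (6).
    return math.exp(a / x - (math.exp(a / x) - 1.0) / a)

a, x, h = 0.5, 1.2, 1e-6
deriv = (im_cdf(x + h, a) - im_cdf(x - h, a)) / (2.0 * h)   # numerical F'(x)
print(abs(deriv - im_pdf(x, a)))        # should be close to 0
print(im_cdf(1e-3, a), im_cdf(1e8, a))  # limits: close to 0 and close to 1
```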
2.1. Reliability and Hazard Function of IM Distribution
The importance of any lifetime distribution rests on its reliability and hazard rate. Using equations (5) and (6), the reliability and hazard functions of the IM distribution are obtained as
R(t) = 1 − exp{α/t − (1/α)(e^(α/t) − 1)},   t > 0, α ∈ (0,1]        (7)

h(t) = f(t)/R(t) = [(e^(α/t) − α) exp{α/t − (1/α)(e^(α/t) − 1)}] / [t² (1 − exp{α/t − (1/α)(e^(α/t) − 1)})],   t > 0, α ∈ (0,1]        (8)
Equations (7) and (8) give the reliability and hazard functions, respectively; their graphical representation is given in Figure 2. We observe in Figure 2 that the hazard rate has a UBT shape, and that as the value of the parameter α increases, the peak of the hazard rate also increases.
Figure 2: pdf, cdf, reliability and hazard functions of the IM distribution.
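The UBT claim can be checked numerically. The sketch below (Python; our own illustration, with equations (5) and (7) re-coded inline) scans the hazard (8) on a grid and verifies that it rises to an interior peak and then falls:

```python
import math

def im_hazard(t, a):
    # h(t) = f(t) / R(t), equations (5), (7) and (8).
    g = a / t - (math.exp(a / t) - 1.0) / a          # exponent of the cdf (6)
    f = (math.exp(a / t) - a) / t**2 * math.exp(g)   # pdf (5)
    r = 1.0 - math.exp(g)                            # reliability (7)
    return f / r

a = 0.5
ts = [0.05 * i for i in range(1, 401)]               # grid over (0, 20]
hs = [im_hazard(t, a) for t in ts]
k = hs.index(max(hs))
print(ts[k])                                         # location of the hazard peak
assert 0 < k < len(hs) - 1 and hs[0] < hs[k] > hs[-1]   # interior max => UBT shape
```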
2.2. Quantile Function
The quantile function of a cdf F_X(x) is defined as

Q_X(u) = inf{x ∈ R : F_X(x) ≥ u},   0 < u < 1        (9)

If F_X(x) is continuous and strictly increasing, the quantile function of X reduces to the ordinary inverse

Q_X(u) = F_X⁻¹(u),   0 < u < 1        (10)
To find the quantile function of the IM distribution, we solve F(x; α) = u, x > 0, with respect to x for any α ∈ (0,1] and u ∈ (0,1), i.e.
u = exp{α/x − (1/α)(e^(α/x) − 1)}

Taking logarithms and rearranging,

log(u) − α/x − 1/α = −(1/α) e^(α/x)        (11)
Multiplying both sides of equation (11) by exp{log(u) − α/x − 1/α}, we get

(log(u) − α/x − 1/α) exp{log(u) − α/x − 1/α} = −(u/α) e^(−1/α)        (12)
To solve equation (12) we use the Lambert W function, which has applications in computer algebra systems and in mathematics, see Corless et al. [13]. The Lambert W function is defined as the solution of
W(z) exp(W(z)) = z        (13)

where z is a complex number. If z is real with z ≥ −e⁻¹, then W(z) is real valued and has two possible real branches. The branch taking values in (−∞, −1] is called the negative branch and is denoted W₋₁(z); it is defined for −e⁻¹ ≤ z < 0. The branch taking values in [−1, ∞) is called the principal branch and is denoted W₀(z); it is defined for z ≥ −e⁻¹. We shall use the negative branch, which satisfies the following properties: W₋₁(−e⁻¹) = −1, W₋₁(z) is decreasing as z increases to 0, and W₋₁(z) tends to −∞ as z tends to 0⁻, see Jodra [14].
By using equations (12) and (13), we see that (log(u) − α/x − 1/α) is a Lambert W function of the real argument −(u/α) e^(−1/α). This yields the explicit expression for Q_X in terms of the Lambert W function:

x = α² / (α log(u) − α W(−(u/α) e^(−1/α)) − 1)        (14)

This gives the quantile function of the IM distribution. For any α ∈ (0,1], x > 0 and u ∈ (0,1) it can be verified that

log(u) − α/x − 1/α ≤ −1

and it can also be checked that

−(u/α) e^(−1/α) ∈ (−e⁻¹, 0)

Therefore, using the negative branch of the Lambert W function, the quantile function of the IM distribution is

x_u = α² / (α log(u) − α W₋₁(−(u/α) e^(−1/α)) − 1)        (15)

where x_u gives the uth quantile of the IM distribution.
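In practice, equation (15) only needs a routine for the negative branch of the Lambert W function. The paper uses lambertWm1() from the R package lamW; the Python sketch below (our names) instead implements W₋₁ by bisection, using the monotonicity of w·e^w on (−∞, −1], and checks the round trip F(x_u) = u:

```python
import math

def lambert_wm1(z):
    # Negative branch of the Lambert W function, valid for -1/e <= z < 0.
    # w * exp(w) is monotone decreasing in w on (-inf, -1], so bisection works.
    lo, hi = -700.0, -1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def im_quantile(u, a):
    # Quantile function of the IM distribution, equation (15).
    w = lambert_wm1(-(u / a) * math.exp(-1.0 / a))
    return a * a / (a * math.log(u) - a * w - 1.0)

def im_cdf(x, a):
    # cdf, equation (6), for the round-trip check.
    return math.exp(a / x - (math.exp(a / x) - 1.0) / a)

a = 0.5
for u in (0.1, 0.25, 0.5, 0.75, 0.9):
    xu = im_quantile(u, a)
    print(u, xu)
    assert abs(im_cdf(xu, a) - u) < 1e-9
```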
2.3. Moments of the IM distribution
Let X be a random variable following the IM distribution with pdf in equation (5); then the kth raw moment is defined as

μ'_k = ∫_0^∞ x^k f(x; α) dx = ∫_0^∞ x^(k−2) (e^(α/x) − α) exp{α/x − (1/α)(e^(α/x) − 1)} dx

Split the integral at a point a > 0 as I = μ'_k = I₁ + I₂, where

I₁ = ∫_0^a x^(k−2) (e^(α/x) − α) exp{α/x − (1/α)(e^(α/x) − 1)} dx

I₂ = ∫_a^∞ x^(k−2) (e^(α/x) − α) exp{α/x − (1/α)(e^(α/x) − 1)} dx

Now proceed with the integral I₂.
To check the convergence or divergence of the integral I₂ we use the limit comparison test, which states that if

1. f(x) and g(x) > 0 on [a, ∞),
2. f(x) and g(x) are both continuous on [a, ∞), and
3. lim_{x→∞} f(x)/g(x) = L > 0, where L is some finite positive number,

then ∫_a^∞ f(x) dx and ∫_a^∞ g(x) dx either both converge or both diverge. For I₂, let

f₁(x) = x^(k−2) (e^(α/x) − α) exp{α/x − (1/α)(e^(α/x) − 1)}   and   g₁(x) = x^(k−2)

Both f₁(x) and g₁(x) are positive and continuous on [a, ∞) for k = 1, 2, 3, .... Now,

lim_{x→∞} f₁(x)/g₁(x) = lim_{x→∞} (e^(α/x) − α) exp{α/x − (1/α)(e^(α/x) − 1)} = (1 − α) e⁰ = 1 − α

which is finite and strictly positive for α ∈ (0,1). Further,

∫_a^∞ g₁(x) dx = ∫_a^∞ x^(k−2) dx = ∫_a^∞ (1/x^(2−k)) dx

The integral ∫_a^∞ x^(−n) dx is convergent if n > 1 and divergent for n ≤ 1, so ∫_a^∞ x^(k−2) dx is convergent only if 2 − k > 1, i.e. k < 1. But k ≥ 1 (k = 1, 2, 3, ...), so ∫_a^∞ g₁(x) dx is divergent for all k ≥ 1, and by the limit comparison test ∫_a^∞ f₁(x) dx, that is I₂, is divergent for every k ≥ 1. Finally, for I = I₁ + I₂, I is convergent iff both I₁ and I₂ are convergent; if either of I₁ and I₂ is divergent then I is divergent. Thus the integral I is divergent, and hence the moments of the IM distribution do not exist.
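The divergence can also be seen numerically: by the limit above, the integrand of the first raw moment, x·f(x), behaves like (1 − α)/x in the tail, so partial integrals grow without bound, like the harmonic integral. A small Python illustration (our own, for α = 0.5):

```python
import math

def im_pdf(x, a):
    # Inverse Muth pdf, equation (5).
    return (math.exp(a / x) - a) / x**2 * math.exp(a / x - (math.exp(a / x) - 1.0) / a)

a = 0.5
# Tail behaviour: x^2 f(x) -> (1 - a), i.e. x f(x) ~ (1 - a) / x.
for x in (1e3, 1e5, 1e7):
    print(x, x * x * im_pdf(x, a))   # approaches 1 - a = 0.5

# Partial integral of x f(x) over [e^9, e^16] via the substitution t = log(x):
# the integrand becomes x^2 f(x) dt ~ (1 - a) dt, so the value grows linearly in t.
dt, t, total = 0.001, 9.0, 0.0
while t < 16.0:
    xm = math.exp(t + 0.5 * dt)      # midpoint rule in t
    total += xm * xm * im_pdf(xm, a) * dt
    t += dt
print(total)   # roughly (1 - a) * (16 - 9) = 3.5; each extra unit of t adds ~0.5
```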
2.4. Measures of Skewness and Kurtosis
In the above section we found that the moments of the IM distribution do not exist, so we cannot obtain Pearson's moment-based measures of skewness and kurtosis. However, using the quantile function it is possible to obtain Galton's measure of skewness and Moors' measure of kurtosis, as mentioned in Gilchrist [15]. These measures are defined as:
G(α) = [x_{3/4}(α) + x_{1/4}(α) − 2x_{1/2}(α)] / [x_{3/4}(α) − x_{1/4}(α)]        (16)

K(α) = [x_{7/8}(α) − x_{5/8}(α) + x_{3/8}(α) − x_{1/8}(α)] / [x_{3/4}(α) − x_{1/4}(α)]        (17)
where x_{i/4}, i = 1, 2, 3 denote the quartiles and x_{i/8}, i = 1, 2, ..., 7 denote the octiles of the distribution. Galton's measure of skewness G(·) lies in (−1, 1): if G(·) > 0 the distribution is positively (right) skewed, if G(·) < 0 it is negatively skewed, and for a perfectly symmetrical distribution G(·) = 0. Galton's measure of skewness G(α) and Moors' measure of kurtosis K(α) for the IM distribution are calculated at different values of α in Table 1. From Table 1 we observe that the skewness is greater than zero for all considered values of the parameter; thus the IM distribution is positively (right) skewed.

Table 1: Skewness and kurtosis of the IM distribution
a Skewness Kurtosis
0.1 0.4759 2.1413
0.2 0.4741 2.1385
0.3 0.4695 2.1301
0.4 0.4607 2.1108
0.5 0.4465 2.0733
0.6 0.4264 2.0109
0.7 0.4008 1.9207
0.8 0.3710 1.8080
0.9 0.3388 1.6861
1.0 0.3060 1.5698
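Equations (16) and (17) need only the quantile function (15), so the entries of Table 1 can be reproduced directly. A self-contained Python sketch (our own; W₋₁ again implemented by bisection rather than via the R lamW package):

```python
import math

def lambert_wm1(z):
    # Negative branch of the Lambert W function on [-1/e, 0), by bisection.
    lo, hi = -700.0, -1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def im_quantile(u, a):
    # Quantile function of the IM distribution, equation (15).
    w = lambert_wm1(-(u / a) * math.exp(-1.0 / a))
    return a * a / (a * math.log(u) - a * w - 1.0)

def galton_skewness(a):
    # Equation (16): quartile-based measure of skewness.
    q1, q2, q3 = (im_quantile(u, a) for u in (0.25, 0.5, 0.75))
    return (q3 + q1 - 2.0 * q2) / (q3 - q1)

def moors_kurtosis(a):
    # Equation (17): octile-based measure of kurtosis.
    x = {i: im_quantile(i / 8.0, a) for i in (1, 2, 3, 5, 6, 7)}
    return (x[7] - x[5] + x[3] - x[1]) / (x[6] - x[2])

for a in (0.1, 0.5, 1.0):
    print(a, round(galton_skewness(a), 4), round(moors_kurtosis(a), 4))
# The a = 1.0 row should be close to the last row of Table 1: G ~ 0.306, K ~ 1.570.
```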
3. Parameter Estimation
3.1. Maximum likelihood estimation
Let x1, x2, ..., xn be a random sample of size n from the IM distribution with unknown parameter α and pdf in equation (5). The likelihood function for the sample x1, x2, ..., xn is
L(x; α) = ∏_{i=1}^n (1/x_i²) (e^(α/x_i) − α) exp{α/x_i − (1/α)(e^(α/x_i) − 1)}        (18)

log L(x; α) = −2 Σ_{i=1}^n log(x_i) + Σ_{i=1}^n log(e^(α/x_i) − α) + Σ_{i=1}^n [α/x_i − (1/α)(e^(α/x_i) − 1)]        (19)
The MLE is the value of the unknown parameter α which maximizes equation (18). To obtain the estimated value of α, we take the partial derivative of equation (19) with respect to α and equate it to zero, i.e. (∂/∂α) log L(x; α) = 0:

Σ_{i=1}^n [(e^(α/x_i)/x_i − 1) / (e^(α/x_i) − α)] + Σ_{i=1}^n 1/x_i + (1/α²) Σ_{i=1}^n e^(α/x_i) − (1/α) Σ_{i=1}^n e^(α/x_i)/x_i − n/α² = 0        (20)
Equation (20) must now be solved to obtain α̂_ml, and to check that this solution maximizes equation (18) the following condition has to be satisfied:

[∂²/∂α² log L(x; α)]_{α = α̂_ml} < 0        (21)

where α̂_ml is the estimated value of α obtained from equation (20). Equation (20) is not in closed form, so it cannot be solved analytically; the Newton-Raphson iteration method is used to obtain a numerical solution for α.
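Equation (20) has no closed-form solution, but the log-likelihood (19) is easy to maximize numerically over the admissible range α ∈ (0, 1]. The Python sketch below is our own construction: a plain grid search stands in for Newton-Raphson, and a deterministic stratified "sample" built from the quantile function (15) replaces random sampling so the run is reproducible. It recovers a true value α = 0.6:

```python
import math

def lambert_wm1(z):
    # Negative branch of the Lambert W function on [-1/e, 0), by bisection:
    # w * exp(w) is monotone decreasing in w on (-inf, -1].
    lo, hi = -700.0, -1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def im_quantile(u, a):
    # Quantile function of the IM distribution, equation (15).
    w = lambert_wm1(-(u / a) * math.exp(-1.0 / a))
    return a * a / (a * math.log(u) - a * w - 1.0)

def log_lik(a, xs):
    # Log-likelihood of the IM distribution, equation (19).
    return sum(-2.0 * math.log(x) + math.log(math.exp(a / x) - a)
               + a / x - (math.exp(a / x) - 1.0) / a for x in xs)

true_a = 0.6
xs = [im_quantile((i + 0.5) / 300, true_a) for i in range(300)]  # stratified "sample"

grid = [0.002 * j for j in range(1, 501)]          # candidate values in (0, 1]
a_ml = max(grid, key=lambda a: log_lik(a, xs))
print(a_ml)                                        # should be close to 0.6
```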
3.2. Maximum product spacing
The maximum product spacing estimation (MPSE) method is an alternative to MLE proposed by Cheng & Amin [16] and Ranneby [17]. MLE may perform poorly, or fail, when three or more parameters are involved, as remarked in Cheng & Traylor [18], and it does not perform satisfactorily for heavy-tailed distributions, as discussed in Pitman [19]. Let x1, x2, ..., xn be a random sample of size n drawn from the IM distribution with cdf in equation (6).
Let x_{i:n} be the ith order statistic. The spacing function D_i is defined as

D_i = F(x_{i:n}; α) − F(x_{(i−1):n}; α)        (22)

where for x_0 and x_{n+1} we set F(x_0; α) = 0 and F(x_{n+1}; α) = 1, respectively. At i = 1,

D_1 = F(x_{1:n}; α) = exp{α/x_1 − (1/α)(e^(α/x_1) − 1)}        (23)

At i = n + 1,

D_{n+1} = 1 − F(x_{n:n}; α) = 1 − exp{α/x_n − (1/α)(e^(α/x_n) − 1)}        (24)

For i = 2, 3, ..., n the expression is

D_i = exp{α/x_i − (1/α)(e^(α/x_i) − 1)} − exp{α/x_{i−1} − (1/α)(e^(α/x_{i−1}) − 1)}        (25)
Then the product of the spacing functions is defined as

S = ∏_{i=1}^{n+1} D_i        (26)
The MPSE is the value of α which maximizes the product spacing function given in equation (26). Taking the log of both sides of equation (26),

log(S) = Σ_{i=1}^{n+1} log(D_i) = log(D_1) + log(D_{n+1}) + Σ_{i=2}^n log(D_i)

log(S) = [α/x_1 − (1/α)(e^(α/x_1) − 1)] + log[1 − exp{α/x_n − (1/α)(e^(α/x_n) − 1)}] + Σ_{i=2}^n log[exp{α/x_i − (1/α)(e^(α/x_i) − 1)} − exp{α/x_{i−1} − (1/α)(e^(α/x_{i−1}) − 1)}]        (27)
To find the estimated value of α which maximizes equation (26) we use numerical optimization. Differentiating equation (27) with respect to α and equating to zero,

∂ log(S)/∂α = 0        (28)

Solving this equation gives the estimated value α̂_mp, which must satisfy the maximization condition

[∂²/∂α² log(S)]_{α = α̂_mp} < 0        (29)

The expressions in equations (27) and (28) are not in closed form and are not easy to solve analytically. Therefore, to find the value of α that maximizes the product spacing function in equation (26) subject to condition (29), we use a numerical method to solve equation (28).
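The same numerical approach used for the MLE works for the MPSE: evaluate log S from equation (27) on the ordered sample and maximize over α. A Python sketch (our own construction, with a reproducible stratified sample built from the quantile function (15)):

```python
import math

def lambert_wm1(z):
    # Negative branch of the Lambert W function on [-1/e, 0), by bisection.
    lo, hi = -700.0, -1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def im_quantile(u, a):
    # Quantile function, equation (15).
    w = lambert_wm1(-(u / a) * math.exp(-1.0 / a))
    return a * a / (a * math.log(u) - a * w - 1.0)

def im_cdf(x, a):
    # cdf, equation (6).
    return math.exp(a / x - (math.exp(a / x) - 1.0) / a)

def log_spacing(a, xs):
    # log S, equation (27); xs must be sorted, with F(x_0) = 0 and F(x_{n+1}) = 1.
    fs = [0.0] + [im_cdf(x, a) for x in xs] + [1.0]
    return sum(math.log(fs[i] - fs[i - 1]) for i in range(1, len(fs)))

true_a = 0.6
xs = sorted(im_quantile((i + 0.5) / 300, true_a) for i in range(300))

grid = [0.002 * j for j in range(1, 501)]
a_mp = max(grid, key=lambda a: log_spacing(a, xs))
print(a_mp)   # should be close to 0.6
```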
4. Asymptotic Confidence Interval
We have obtained both the MLE and the MPSE of the parameter, and neither is in explicit form, so the exact distributions of the estimators are quite difficult to obtain. Cheng & Amin [16], Ghosh & Jammalamadaka [20], Anatolyev & Kosenok [21] and Singh et al. [22] have used the MPSE method in their papers and explained that the MPSE method is asymptotically equivalent to the MLE method. Using the concepts of large sample theory, we may write the asymptotic distribution of the estimators as

(θ̂ − θ) ∼ N(0, I⁻¹(θ))        (30)

where θ̂ is the estimate of the parameter, θ is the true value of the parameter, and I⁻¹(θ) is the inverse of the Fisher information matrix.
For m parameters θ1, θ2, θ3, ..., θm involved in a distribution, the m × m Fisher information matrix is defined as

I(θ) = [ I_{1,1}  I_{1,2}  ...  I_{1,m}
         I_{2,1}  I_{2,2}  ...  I_{2,m}
         ...
         I_{m,1}  I_{m,2}  ...  I_{m,m} ]

where I_{i,j} = −E(∂² log L / ∂θ_i ∂θ_j), i, j = 1, 2, 3, ..., m. The estimated variance of θ̂_i is given by

Var(θ̂_i) = (Î⁻¹)_{i,i},   where Î_{i,j} = −E(∂² log L / ∂θ_i ∂θ_j)|_{θ = θ̂}        (31)
This is the diagonal element of the inverse of the Fisher information matrix. Therefore, the two-sided 100(1 − α*)% confidence interval for θ is

θ̂ ± Z_{α*/2} √Var(θ̂)        (32)
where α* is the level of significance and Z_{α*/2} is the upper (α*/2) point of the standard normal distribution. For the IM distribution, the asymptotic confidence interval based on the MLE is defined as:
α̂_ml ± Z_{α*/2} √Var(α̂_ml)        (33)

In the case of the MPSE it is defined as:

α̂_mp ± Z_{α*/2} √Var(α̂_mp)        (34)
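For the single-parameter IM distribution the Fisher information matrix reduces to a scalar, and the observed information −∂² log L/∂α² at α̂_ml can be approximated by a central second difference. A Python sketch of the resulting interval (32)-(33) (our own construction, on a reproducible stratified sample built from the quantile function (15); the interval is clamped to (0, 1], in the spirit of the 0.0000*/1.0000* convention used in the simulation tables):

```python
import math

def lambert_wm1(z):
    # Negative branch of the Lambert W function on [-1/e, 0), by bisection.
    lo, hi = -700.0, -1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def im_quantile(u, a):
    # Quantile function, equation (15).
    w = lambert_wm1(-(u / a) * math.exp(-1.0 / a))
    return a * a / (a * math.log(u) - a * w - 1.0)

def log_lik(a, xs):
    # Log-likelihood, equation (19).
    return sum(-2.0 * math.log(x) + math.log(math.exp(a / x) - a)
               + a / x - (math.exp(a / x) - 1.0) / a for x in xs)

true_a = 0.6
xs = [im_quantile((i + 0.5) / 300, true_a) for i in range(300)]
grid = [0.002 * j for j in range(1, 501)]
a_ml = max(grid, key=lambda a: log_lik(a, xs))

h = 0.01
# Observed information: minus the second derivative of log L at the MLE.
info = -(log_lik(a_ml + h, xs) - 2.0 * log_lik(a_ml, xs) + log_lik(a_ml - h, xs)) / h**2
se = math.sqrt(1.0 / info)
lower = max(a_ml - 1.96 * se, 0.0)   # clamp to the admissible range,
upper = min(a_ml + 1.96 * se, 1.0)   # cf. the 0.0000* / 1.0000* convention
print(round(lower, 4), round(upper, 4))
```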
5. Scale transformation of IM distribution
We take a natural extension of the random variable by including a scale parameter, say β > 0. The scale transformation is taken as Z = βX. Then the cdf of Z is given by
F_Z(z) = exp{αβ/z − (1/α)(e^(αβ/z) − 1)},   α ∈ (0,1], β > 0        (35)

and the pdf is given by

f_Z(z; α, β) = (β/z²)(e^(αβ/z) − α) exp{αβ/z − (1/α)(e^(αβ/z) − 1)},   α ∈ (0,1], β > 0        (36)
Since the distribution of Z is obtained by a scale transformation of X, which follows the IM distribution with parameter α, the new distribution of Z is called the scaled inverse Muth (SIM) distribution. Because Z arises from the IM distribution through the addition of a scale parameter β, the SIM distribution shares properties with the IM distribution; in particular, its moments also do not exist. The quantile function of the SIM distribution is defined as:
Q_Z(u; α, β) = β · Q_X(u; α),   0 < u < 1

where Q_X(u; α) is the quantile function of the IM distribution, so that

z_u = β α² / (α log(u) − α W₋₁(−(u/α) e^(−1/α)) − 1)        (37)
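Since Z = βX, the SIM quantile (37) is just β times the IM quantile (15), and F_Z(z) = F_X(z/β). A short Python check (our names; W₋₁ by bisection) that equation (37) inverts the SIM cdf (35):

```python
import math

def lambert_wm1(z):
    # Negative branch of the Lambert W function on [-1/e, 0), by bisection.
    lo, hi = -700.0, -1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sim_cdf(z, a, b):
    # SIM cdf, equation (35): F_Z(z) = F_X(z / b).
    s = a * b / z
    return math.exp(s - (math.exp(s) - 1.0) / a)

def sim_quantile(u, a, b):
    # SIM quantile, equation (37): b times the IM quantile (15).
    w = lambert_wm1(-(u / a) * math.exp(-1.0 / a))
    return b * a * a / (a * math.log(u) - a * w - 1.0)

a, b = 0.3, 2.0
for u in (0.1, 0.5, 0.9):
    zu = sim_quantile(u, a, b)
    print(u, zu)
    assert abs(sim_cdf(zu, a, b) - u) < 1e-9
```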
6. Simulation study
We now give a numerical illustration of the results based on a simulation study. We calculated the estimates of the parameters, their bias and confidence limits, based on random samples generated from the IM distribution. The MLE and MPS methods of estimation are compared through the MSE of the parameter estimates; a smaller MSE indicates a more efficient method of estimation. We generated 10000 random samples of different sizes, found the estimates for each sample, and calculated their MSE and bias using the formulas:

MSE = (1/N) Σ_{i=1}^N (α̂_i − α)²   and   bias = (1/N) Σ_{i=1}^N (α̂_i − α),   where N = 10000
R code is used for all the numerical computation. To compute the numerical values, we first generated a uniform random sample u = u1, u2, u3, ..., un of size n and then generated random samples from both distributions by evaluating their quantile functions at the uniform sample: for each value u_i we get x_i. In equations (15) and (37), W₋₁(·) is the Lambert W function, which is calculated by the "lambertWm1()" command from the package "lamW" in R, Adler [23].
6.1. Simulation study for IM distribution
To generate the random samples from the IM distribution we used the quantile function in equation (15), with sample sizes n = 15, 25, 50, 75, 100, 125 for each true value of the parameter α = 0.3, 0.5, 0.7. Table 2 gives the average values of the MLE and MPSE of the parameter α along with their respective MSEs, average bias, average length of the confidence interval (CI) and the averages of the upper limit (UL) and lower limit (LL) of the CI for α = 0.3, 0.5 and 0.7. The findings based on Table 2 are as follows: for both methods of estimation, the MSE decreases as the sample size increases. For small values of the shape parameter α, MPSE has smaller MSE than MLE only for small samples; for large samples, MLE has smaller MSE than MPSE. From Table 2 it is also observed that for large values of α within its range α ∈ (0,1], MLE has smaller MSE than MPSE for all sample sizes. The bias of the MLE is positive for each value of the parameter, while that of the MPSE is mostly negative. As usual, the average length of the CI decreases as the sample size increases for both MLE and MPSE. In Table 2 the LL or UL of the CI sometimes falls outside the range α ∈ (0,1], but the IM distribution is defined only for α ∈ (0,1]; in such cases we report 0.0000* for LL < 0 and 1.0000* for UL > 1.
6.2. Simulation study for SIM distribution
To generate the random samples from the SIM distribution we used the quantile function in equation (37), with sample sizes n = 15, 25, 50, 75, 100, 125 for different values of the shape parameter α and scale parameter β. The average values of the MLE and MPSE of α and β, along with their respective MSEs, average bias, average CI length and the averages of the UL and LL of the CI, are presented in Tables 3-7. From these tables we observe that the MSE of the estimates of both α and β decreases as the sample size increases, for MLE as well as MPSE. For fixed β and small α, MPSE gives smaller MSE than MLE, indicating that MPSE gives better estimates than MLE there. For large values of α ∈ (0,1] at the same β, MLE gives smaller MSE than MPSE for all sample sizes. The length of the CI decreases as the sample size increases for both MLE and MPSE. MLE has mostly positive bias whereas MPSE has mostly negative bias. The symbols 0.0000* and 1.0000* are defined as in Section 6.1.
Table 2: MLE and MPS estimates for α = 0.3, 0.5 and 0.7
n MLE MPS
Est. bias MSE CI Est. bias MSE CI
LL UL length LL UL length
a= 0.3 15 0.3954 0.0954 0.0497 0.0000* 0.8375 0.8375 0.2827 -0.0173 0.0323 0.0000* 0.7870 0.7870
25 0.3544 0.0544 0.0307 0.0069 0.7019 0.6950 0.2661 -0.0339 0.0251 0.0000* 0.6493 0.6493
50 0.3227 0.0227 0.0155 0.0776 0.5679 0.4903 0.2557 -0.0443 0.0152 0.0000* 0.5192 0.5192
75 0.3149 0.0149 0.0105 0.1161 0.5136 0.3976 0.2624 -0.0376 0.0105 0.0531 0.4716 0.4185
100 0.3108 0.0108 0.0078 0.1405 0.4811 0.3406 0.2662 -0.0338 0.0087 0.0885 0.4438 0.3554
125 0.3097 0.0097 0.0060 0.1584 0.4610 0.3026 0.2755 -0.0245 0.0065 0.1194 0.4317 0.3123
a=0.5 15 0.5441 0.0441 0.0371 0.1149 0.9732 0.8583 0.3996 -0.1004 0.0454 0.0000* 0.9135 0.9135
25 0.5329 0.0329 0.0261 0.2046 0.8612 0.6566 0.4250 -0.0750 0.0321 0.0541 0.7960 0.7419
50 0.5157 0.0157 0.0133 0.2884 0.7429 0.4545 0.4464 -0.0536 0.0168 0.2025 0.6904 0.4879
75 0.5098 0.0098 0.0090 0.3252 0.6943 0.3691 0.4545 -0.0455 0.0110 0.2615 0.6476 0.3861
100 0.5099 0.0099 0.0068 0.3515 0.6683 0.3168 0.4654 -0.0346 0.0082 0.3015 0.6293 0.3278
125 0.5094 0.0094 0.0054 0.3679 0.6509 0.2830 0.4678 -0.0322 0.0061 0.3219 0.6137 0.2918
a = 0.7 15 0.7007 0.0007 0.0253 0.3139 1.0000* 0.6861 0.5485 -0.1515 0.0527 0.0980 0.9990 0.9010
25 0.7083 0.0083 0.0189 0.4124 1.0000* 0.5876 0.6048 -0.0952 0.0307 0.2790 0.9307 0.6518
50 0.7097 0.0097 0.0110 0.5028 0.9165 0.4137 0.6435 -0.0565 0.0143 0.4256 0.8614 0.4357
75 0.7079 0.0079 0.0076 0.5389 0.8769 0.3380 0.6558 -0.0442 0.0098 0.4802 0.8313 0.3511
100 0.7079 0.0079 0.0056 0.5614 0.8543 0.2928 0.6678 -0.0322 0.0065 0.5173 0.8183 0.3011
125 0.7041 0.0041 0.0042 0.5731 0.8350 0.2618 0.6732 -0.0268 0.0051 0.5392 0.8073 0.2681
Est.: Estimate; MSE: Mean Square Error; CI: Confidence interval; UL: Upper limit; LL: Lower limit.
Table 3: MLE and MPS estimates for α = 0.3 and β = 2
n MLE MPS
Est. bias MSE CI Est. bias MSE CI
UL LL length UL LL length
a = 0.3 15 0.4302 0.1302 0.0606 0.8548 0.0056 0.8492 0.3325 0.0325 0.0386 0.8613 0.0000* 0.8613
25 0.3856 0.0856 0.0369 0.7171 0.0540 0.6630 0.3128 0.0128 0.0268 0.7014 0.0000* 0.7014
50 0.3422 0.0422 0.0181 0.5782 0.1063 0.4719 0.2886 -0.0114 0.0156 0.5496 0.0276 0.5220
75 0.3254 0.0254 0.0113 0.5184 0.1324 0.3861 0.2887 -0.0113 0.0105 0.4960 0.0815 0.4146
100 0.3219 0.0219 0.0085 0.4888 0.1549 0.3339 0.2774 -0.0226 0.0075 0.4552 0.0997 0.3555
125 0.3224 0.0224 0.0077 0.4716 0.1733 0.2983 0.2937 -0.0063 0.0070 0.4500 0.1373 0.3127
β=2 15 2.0570 0.0570 0.1831 2.7382 1.3758 1.3624 1.9418 -0.0582 0.1756 2.6697 1.2139 1.4558
25 2.0256 0.0256 0.1004 2.5621 1.4890 1.0730 1.9670 -0.0330 0.0881 2.5401 1.3940 1.1460
50 2.0114 0.0114 0.0499 2.3998 1.6229 0.7769 1.9697 -0.0303 0.0437 2.3773 1.5621 0.8152
75 2.0082 0.0082 0.0328 2.3284 1.6879 0.6405 1.9694 -0.0306 0.0304 2.2993 1.6395 0.6598
100 2.0112 0.0112 0.0243 2.2892 1.7332 0.5560 1.9750 -0.0250 0.0247 2.2636 1.6864 0.5771
125 1.9968 -0.0032 0.0185 2.2433 1.7503 0.4929 1.9732 -0.0268 0.0193 2.2262 1.7203 0.5058
Table 4: MLE and MPS estimates for α = 0.5 and β = 2
n MLE MPS
Est. bias MSE CI Est. bias MSE CI
UL LL length UL LL length
a = 0.5 15 0.5675 0.0675 0.0432 0.9683 0.1668 0.8015 0.4415 -0.0585 0.0417 0.9266 0.0000* 0.9266
25 0.5524 0.0524 0.0317 0.8630 0.2417 0.6213 0.4630 -0.0370 0.0288 0.8150 0.1110 0.7040
50 0.5315 0.0315 0.0172 0.7520 0.3110 0.4411 0.4776 -0.0224 0.0162 0.7138 0.2413 0.4724
75 0.5163 0.0163 0.0108 0.6971 0.3354 0.3617 0.4740 -0.0260 0.0109 0.6641 0.2839 0.3802
100 0.5171 0.0171 0.0087 0.6736 0.3606 0.3130 0.4833 -0.0167 0.0084 0.6459 0.3207 0.3252
125 0.5123 0.0123 0.0070 0.6525 0.3721 0.2804 0.4833 -0.0167 0.0065 0.6280 0.3387 0.2893
β=2 15 2.0673 0.0673 0.1349 2.6636 1.4710 1.1926 1.9923 -0.0077 0.1134 2.6630 1.3216 1.3414
25 2.0262 0.0262 0.0766 2.4799 1.5725 0.9074 1.9826 -0.0174 0.0797 2.4791 1.4860 0.9931
50 2.0042 0.0042 0.0347 2.3231 1.6854 0.6377 1.9838 -0.0162 0.0332 2.3209 1.6466 0.6743
75 2.0093 0.0093 0.0236 2.2724 1.7463 0.5261 1.9884 -0.0116 0.0231 2.2624 1.7144 0.5480
100 2.0038 0.0038 0.0172 2.2303 1.7774 0.4529 1.9930 -0.0070 0.0180 2.2275 1.7584 0.4691
125 2.0014 0.0014 0.0143 2.2043 1.7985 0.4058 1.9917 -0.0083 0.0170 2.2007 1.7826 0.4180
Table 5: MLE and MPS estimates for α = 0.7 and β = 2
n MLE MPS
Est. bias MSE CI Est. bias MSE CI
UL LL length UL LL length
a=0.7 15 0.7055 0.0055 0.0289 1.0000* 0.3277 0.6723 0.5808 -0.1192 0.0437 1.0000* 0.1437 0.8563
25 0.7167 0.0167 0.0218 1.0000 0.4270 0.5730 0.6253 -0.0747 0.0269 0.9437 0.3068 0.6369
50 0.7187 0.0187 0.0142 0.9237 0.5138 0.4099 0.6600 -0.0400 0.0149 0.8762 0.4438 0.4324
75 0.7118 0.0118 0.0097 0.8795 0.5440 0.3355 0.6717 -0.0283 0.0100 0.8457 0.4977 0.3480
100 0.7111 0.0111 0.0077 0.8566 0.5656 0.2910 0.6786 -0.0214 0.0080 0.8283 0.5289 0.2994
125 0.7089 0.0089 0.0059 0.8393 0.5784 0.2609 0.6795 -0.0205 0.0061 0.8131 0.5458 0.2673
β=2 15 2.0806 0.0806 0.1020 2.6046 1.5566 1.0480 2.0263 0.0263 0.0918 2.6177 1.4349 1.1828
25 2.0385 0.0385 0.0566 2.4285 1.6486 0.7799 2.0085 0.0085 0.0578 2.4347 1.5824 0.8523
50 2.0078 0.0078 0.0257 2.2763 1.7392 0.5370 1.9905 -0.0095 0.0235 2.2740 1.7071 0.5669
75 2.0117 0.0117 0.0185 2.2317 1.7916 0.4401 2.0021 0.0021 0.0181 2.2307 1.7736 0.4571
100 2.0037 0.0037 0.0132 2.1932 1.8141 0.3790 1.9980 -0.0020 0.0135 2.1935 1.8024 0.3911
125 2.0063 0.0063 0.0106 2.1761 1.8366 0.3395 2.0005 0.0005 0.0119 2.1750 1.8260 0.3490
Table 6: MLE and MPS estimates for α = 0.3 and β = 5
n MLE MPS
Est. bias MSE CI Est. bias MSE CI
UL LL length UL LL length
a=0.3 15 0.4282 0.1282 0.0589 0.8552 0.0012 0.8540 0.3231 0.0231 0.0375 0.8626 0.0000* 0.8626
25 0.3846 0.0846 0.0383 0.7175 0.0517 0.6658 0.2996 -0.0004 0.0282 0.6955 0.0000* 0.6955
50 0.3452 0.0452 0.0188 0.5815 0.1090 0.4725 0.2840 -0.0160 0.0169 0.5469 0.0211 0.5258
75 0.3359 0.0359 0.0119 0.5284 0.1434 0.3850 0.2880 -0.0120 0.0117 0.4960 0.0801 0.4159
100 0.3265 0.0265 0.0085 0.4933 0.1597 0.3336 0.2967 -0.0033 0.0089 0.4732 0.1202 0.3530
125 0.3161 0.0161 0.0061 0.4657 0.1666 0.2991 0.2896 -0.0104 0.0070 0.4464 0.1328 0.3136
β=5 15 5.0498 0.0498 0.9296 6.7910 3.3085 3.4825 4.7917 -0.2083 0.8598 6.6624 2.9209 3.7415
25 5.0434 0.0434 0.6235 6.4279 3.6589 2.7690 4.8389 -0.1611 0.5573 6.3059 3.3719 2.9340
50 5.0169 0.0169 0.3575 6.0060 4.0279 1.9781 4.8789 -0.1211 0.3539 5.9168 3.8409 2.0759
75 5.0488 0.0488 0.2548 5.8586 4.2390 1.6196 4.9499 -0.0501 0.2534 5.7944 4.1053 1.6891
100 5.0276 0.0276 0.2150 5.7291 4.3261 1.4030 4.9380 -0.0620 0.2150 5.6565 4.2196 1.4369
125 5.0530 0.0530 0.1917 5.6877 4.4183 1.2694 4.9720 -0.0280 0.1821 5.6200 4.3240 1.2960
Table 7: MLE and MPS estimates for α = 0.3 and β = 10
n MLE MPS
Est. bias MSE CI Est. bias MSE CI
UL LL length UL LL length
α=0.3 15 0.4349 0.1349 0.0623 0.8606 0.0092 0.8514 0.3290 0.0290 0.0377 0.8659 0.0000* 0.8659
25 0.3855 0.0855 0.0389 0.7182 0.0528 0.6654 0.3053 0.0053 0.0284 0.6994 0.0000* 0.6994
50 0.3503 0.0503 0.0197 0.5860 0.1146 0.4714 0.2899 -0.0101 0.0173 0.5521 0.0277 0.5244
75 0.3424 0.0424 0.0121 0.5344 0.1504 0.3840 0.2998 -0.0002 0.0120 0.5065 0.0931 0.4134
100 0.3273 0.0273 0.0086 0.4941 0.1605 0.3336 0.3002 0.0002 0.0093 0.4765 0.1240 0.3525
125 0.3237 0.0237 0.0070 0.4728 0.1745 0.2983 0.2916 -0.0084 0.0062 0.4482 0.1350 0.3132
β=10 15 9.9635 -0.0365 3.4750 13.3824 6.5447 6.8377 9.4550 -0.5450 3.4249 13.1285 5.7816 7.3469
25 10.0247 0.0247 2.6036 12.7796 7.2697 5.5099 9.5618 -0.4382 2.4351 12.4525 6.6711 5.7814
50 9.9386 -0.0614 1.5441 11.8934 7.9838 3.9096 9.6508 -0.3492 1.5991 11.6994 7.6022 4.0972
75 10.0207 0.0207 1.1687 11.6192 8.4223 3.1969 9.7578 -0.2422 1.2321 11.4079 8.1076 3.3003
100 10.0328 0.0328 1.0610 11.4341 8.6315 2.8026 9.7869 -0.2131 1.0286 11.2072 8.3667 2.8405
125 10.0279 0.0279 0.8687 11.2806 8.7752 2.5054 9.8581 -0.1419 0.8287 11.1387 8.5774 2.5613
7. Real data analysis
Real data have been used to show the applicability of the SIM distribution; the results show that this model is more appropriate than some other fitted models for these data. The data represent the active repair times (in hours) for an airborne communication transceiver, given in Jorgensen [24]. The data are given below:
0.50 0.60 0.60 0.70 0.70 0.70 0.80 0.80
1.00 1.00 1.00 1.00 1.10 1.30 1.50 1.50
1.50 1.50 2.00 2.00 2.20 2.50 2.70 3.00
3.00 3.30 4.00 4.00 4.50 4.70 5.00 5.40
5.40 7.00 7.50 8.80 9.00 10.20 22.00 24.50
To assess the fit of the proposed model to the above real data we used the Kolmogorov-Smirnov (K-S) test. To compare the models we used the negative log-likelihood value −log L(α̂_ml, β̂_ml), the Akaike information criterion, defined by AIC = −2 log(L) + 2q, and the Bayesian information criterion, defined by BIC = −2 log(L) + q · log(n), where α̂_ml and β̂_ml are the MLEs of the parameters α and β, q is the number of parameters and n is the sample size. The best-fitting distribution is the one with the lowest values of −log(L), AIC and BIC.
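The comparison criteria can be recomputed directly from the reported −log L values with q parameters and n = 40 observations. A small Python sketch (our own; note that the recomputed BIC for the SIMD row is 186.0442):

```python
import math

# (-log L, q) for each model as reported in Table 8, with n = 40 observations.
models = {
    "SIMD": (89.3332, 2),
    "EPLD": (90.2861, 3),
    "PLD": (95.9427, 2),
    "GLD": (97.9107, 2),
}
n = 40
for name, (nll, q) in models.items():
    aic = 2.0 * nll + 2.0 * q              # AIC = -2 log L + 2q
    bic = 2.0 * nll + q * math.log(n)      # BIC = -2 log L + q log n
    print(name, round(aic, 4), round(bic, 4))
```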
From Table 8 it is seen that the SIM distribution gives the best fit among the compared distributions. The MLEs of the parameters of the SIM and the other distributions are given in Table 9. Figure 3 shows the empirical cdf and the fitted cdf plots for the SIM and the other distributions.
Table 8: Comparison criterion values for different distributions.
Model AIC BIC -log(L) k-s statistic p-value
SIMD (x; α, β) 182.6664 186.0442 89.3332 0.0869 0.9231
EPLD (x; α, β, θ) 186.5721 191.6387 90.2861 0.0909 0.8627
PLD (x; β, θ) 195.8854 199.2631 95.9427 0.1346 0.4637
GLD (x; α, θ) 199.8218 203.1995 97.9107 0.1660 0.2201
SIMD:Scaled inverse Muth distribution; EPLD: Exponentiated power Lindley distribution; PLD: Power Lindley distribution; GLD: Generalized Lindley distribution.
Table 9: MLE for the parameters of different distributions.
Model θ β α
SIMD (x; α, β) - 1.5464 0.2630
EPLD (x; α, β, θ) 3.5472 0.2901 30.8299
PLD (x; β, θ) 0.5867 0.7988 -
GLD (x; α, θ) 0.3588 - 0.7460
Figure 3: Empirical cdf and fitted cdf plot.
Acknowledgement
The authors are deeply indebted to the Editor-in-Chief (Rykov Vladimir) and the anonymous referees for their valuable suggestions to improve the quality of the original manuscript. Agni Saroj acknowledges research fellowship (373/NFSCJUNE2019) from UGC, New Delhi.
References
[1] Martz, H. F., & Waller, R. (1982). Bayesian Reliability Analysis. John Wiley & Sons, New York.
[2] Bennett, S. (1983). Log-logistic regression models for survival data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 32(2), 165-171.
[3] Langlands, A. O., Pocock, S. J., Kerr, G. R., & Gore, S. M. (1979). Long-term survival of patients with breast cancer: a study of the curability of the disease. Br med J, 2(6200), 1247-1251.
[4] Sharma, V. K., Singh, S. K., & Singh, U. (2014). A new upside-down bathtub shaped hazard rate model for survival data analysis. Applied Mathematics and Computation, 239, 242-253.
[5] Muth, E. J. (1977). Reliability models with positive memory derived from the mean residual life function. Theory and applications of reliability, 2, 401-436.
[6] Jodra, P., Jimenez-Gamero, M. D., & Alba-Fernandez, M. V. (2015). On the Muth distribution. Mathematical Modelling and Analysis, 20(3), 291-310.
[7] Jodra, P., Gomez, H. W., Jimenez-Gamero, M. D., & Alba-Fernandez, M. V. (2017). The power Muth distribution. Mathematical Modelling and Analysis, 22(2), 186-201.
[8] Irshad, M. R., Maya, R., & Krishna, A. (2021). Exponentiated Power Muth Distribution and Associated Inference. Journal of the Indian Society for Probability and Statistics, 22(2), 265-302.
[9] Chesneau, C., & Agiwal, V. (2021). Statistical theory and practice of the inverse power Muth distribution. Journal of Computational Mathematics and Data Science, 1, 100004.
[10] Almarashi, A. M., & Elgarhy, M. (2018). A new muth generated family of distributions with applications. J. Nonlinear Sci. Appl, 11, 1171-1184.
[11] Al-Babtain, A. A., Elbatal, I., Chesneau, C., & Jamal, F. (2020). The transmuted Muth generated class of distributions with applications. Symmetry, 12(10), 1677.
[12] Bicer, C., Bakouch, H. S., & Bicer, H. D. (2021). Inference on Parameters of a Geometric Process with Scaled Muth Distribution. Fluctuation and Noise Letters, 20(01), 2150006.
[13] Corless, R. M., Gonnet, G. H., Hare, D. E., Jeffrey, D. J., & Knuth, D. E. (1996). On the Lambert W function. Advances in Computational Mathematics, 5(1), 329-359.
[14] Jodra, P. (2010). Computer generation of random variables with Lindley or Poisson-Lindley distribution via the Lambert W function. Mathematics and Computers in Simulation, 81(4), 851-859.
[15] Gilchrist, W. (2000). Statistical modelling with quantile functions. Chapman and Hall/CRC.
[16] Cheng, R. C. H., & Amin, N. A. K. (1983). Estimating parameters in continuous univari-ate distributions with a shifted origin. Journal of the Royal Statistical Society: Series B (Methodological), 45(3), 394-403.
[17] Ranneby, B. (1984). The maximum spacing method. An estimation method related to the maximum likelihood method. Scandinavian Journal of Statistics, 93-112.
[18] Cheng, R. C. H., & Traylor, L. (1995). Non-regular maximum likelihood problems. Journal of the Royal Statistical Society: Series B (Methodological), 57(1), 3-24.
[19] Pitman, E. J. (1979). Some basic theory for statistical inference (vol. 7). Chapman and Hall, London.
[20] Ghosh, K., & Jammalamadaka, S. R. (2001). A general estimation method using spacings. Journal of Statistical Planning and Inference, 93(1-2), 71-82.
[21] Anatolyev, S., & Kosenok, G. (2005). An alternative to maximum likelihood based on spacings. Econometric Theory, 21(2), 472-476.
[22] Singh, U., Singh, S. K., & Singh, R. K. (2014). A comparative study of traditional estimation methods and maximum product spacings method in generalized inverted exponential distribution. Journal of Statistics Applications & Probability, 3(2), 153.
[23] Adler, A. (2017). lamW: Lambert-W function, R package version 1.3.0.
[24] Jorgensen, B. (1982). Statistical properties of the generalized inverse Gaussian distribution (Vol. 9). Springer.