Bayesian and Non-Bayesian Inference of Exponentiated Moment Exponential Distribution with Progressive
Censored Samples
Amal S. Hassan 1, Samah A. Atia 2, Hiba Z. Muhammed 3
1. Department of Mathematical Statistics, Cairo University, Faculty of Graduate Studies for Statistical Research, Egypt, Email: amal52 [email protected]
2. Department of Mathematical Statistics, Cairo University, Faculty of Graduate Studies for Statistical Research, Egypt, Email: [email protected]
3. Department of Mathematical Statistics, Cairo University, Faculty of Graduate Studies for Statistical Research, Egypt, Email: [email protected]
Abstract
In this paper, a progressive type-II censoring scheme is used to estimate the parameters, reliability and hazard rate functions of the exponentiated moment exponential distribution. The maximum likelihood and Bayesian techniques are used to obtain the proposed estimators. Gamma (informative) and non-informative priors are considered under the squared error loss function to produce the Bayesian estimators. The highest posterior density interval estimates and the 95% approximate confidence intervals, along with coverage probabilities, are calculated. To evaluate the effectiveness of the estimates produced by the Metropolis-Hastings sampling algorithm, we provide a numerical study. According to the study's findings, the Bayes estimates under informative priors are typically more accurate than the other estimates.
Key Words: Exponentiated moment exponential, gamma prior, credible interval, Metropolis-Hastings, progressive censoring
1. Introduction
Censoring is widely used in reliability data analysis and other practical life-testing investigations. It arises when exact failure times are observed for only a subset of the test units used in an experiment, so the experimenter frequently has to work with incomplete data. Typical censoring schemes include type I censoring (T1C) and type II censoring (T2C). A major drawback of the T1C and T2C methods is that surviving units can only be removed at the conclusion of the experiment. In a more flexible censoring technique known as progressive censoring (PC), units are designated to be removed from the test at times other than the eventual termination point, and the remaining units continue under observation. For more details, see Balakrishnan [1].
Progressive T2C (PT2C) is the major topic of this work. Assume that n identical items are placed on test and that the PC scheme R is pre-fixed so that, after the first failure, R₁ surviving items are removed from the remaining (n − 1) live items; after the second failure, R₂ surviving items are removed from the remaining (n − R₁ − 2) live items; and so on. The procedure continues until, at the mth failure, all remaining R_m = n − m − R₁ − ⋯ − R_{m−1} items are removed (see Hofmann et al. [2]). Therefore, a PT2C procedure is specified by m and R₁, R₂, …, R_m such that $\sum_{i=1}^{m} R_i + m = n$. Note that, if R₁ = R₂ = ⋯ = R_m = 0, then PT2C reduces to complete sampling, and if R₁ = R₂ = ⋯ = R_{m−1} = 0 and R_m = n − m, then PT2C yields the T2C scheme (see Krishna and Kumar [3]).
Based on a PT2C sample, the likelihood function of the random variable X (Balakrishnan and Aggarwala [4]) is given as follows:

$$ L = C \prod_{i=1}^{m} f\left(x_{(i)}\right)\left[1-F\left(x_{(i)}\right)\right]^{R_{i}}, \qquad (1) $$

where $C = n(n-R_{1}-1)(n-R_{1}-R_{2}-2)\cdots(n-R_{1}-R_{2}-\cdots-R_{m-1}-m+1)$. Some important literature regarding the
estimation studies under PT2C scheme can be found in Wu [5] , Ng [6], Dey et al. [7], Hassan et al. [8], EL-Sagheer [9], Noor et al. [10], Alshenawy et al. [11], and Shrahili et al. [12].
Moment distributions are essential in probability theory and in several economic, reliability, and biological studies, as well as other areas of mathematics and statistics. The moment exponential (ME) distribution was suggested by Dara and Ahmad [13], who studied some of its fundamental features. The version of the ME distribution that includes an additional shape parameter is known as the exponentiated ME (EME) distribution, and it is frequently employed in reliability research. Hasnain et al. [14] derived several features of the EME distribution, including a conditional-based characterisation, explored maximum likelihood (ML) estimators, and fitted it to real data sets. Compared to the ME distribution and the exponentiated exponential (EE) distribution, the EME distribution is more adaptable when fitting data. As described by Hasnain et al. [14], the cumulative distribution function (CDF) of the EME distribution is
$$ F(x;\alpha ,\beta )=\left[1-\psi \right]^{\alpha }; \qquad x,\beta ,\alpha >0, \qquad (2) $$

where $\psi =(1+x\beta ^{-1})e^{-x/\beta }$, β is a scale parameter and α is a shape parameter. The probability density function (PDF) of the EME distribution is

$$ f(x;\alpha ,\beta )=\alpha \beta ^{-2}x\,e^{-x/\beta }\left[1-\psi \right]^{\alpha -1}; \qquad x,\beta ,\alpha >0. \qquad (3) $$
For β = 1, the CDF (2) gives the CDF of the one-parameter EE distribution (Gupta and Kundu [15]). Also, for α = 1, the CDF (2) gives the CDF of the ME distribution. The reliability function (RF) and hazard rate function (HRF) related to (3) are defined as:

$$ R(x)=1-\left[1-\psi \right]^{\alpha }, \qquad h(x)=\frac{\alpha \beta ^{-2}x\,e^{-x/\beta }\left[1-\psi \right]^{\alpha -1}}{1-\left[1-\psi \right]^{\alpha }}. $$
Plots of the PDF and HRF of the EME distribution are displayed in Figure 1. It is evident that different parameter values result in varied shapes of the EME PDF. The distribution is positively skewed to the right and uni-modal. It is also clear that the HRF of the EME distribution has an increasing trend.
Figure 1: PDF and HRF plots of the EME distribution for selected parameter values (β, α), e.g. (1, 1.2), (1.5, 1.2), (2, 1.6), (2.5, 1.8) and (3, 2)
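For reference, a minimal R sketch of the EME density, CDF, reliability and hazard functions in (2)-(3), together with a numerical quantile function, is given below. The function names (deme, peme, qeme, etc.) are our own and not part of any package; the quantile is obtained by numerically inverting (2) with uniroot().

```r
# EME distribution with shape alpha and scale beta; psi(x) = (1 + x/beta) * exp(-x/beta)
psi_fun <- function(x, beta) (1 + x / beta) * exp(-x / beta)

# PDF (3), CDF (2), reliability R(x) and hazard rate h(x)
deme <- function(x, alpha, beta)
  alpha * beta^(-2) * x * exp(-x / beta) * (1 - psi_fun(x, beta))^(alpha - 1)
peme <- function(x, alpha, beta) (1 - psi_fun(x, beta))^alpha
reme <- function(x, alpha, beta) 1 - peme(x, alpha, beta)
heme <- function(x, alpha, beta) deme(x, alpha, beta) / reme(x, alpha, beta)

# Numerical quantile function (inverse CDF), used later to simulate PT2C samples
qeme <- function(u, alpha, beta)
  sapply(u, function(ui)
    uniroot(function(x) peme(x, alpha, beta) - ui,
            lower = 1e-10, upper = 1e3 * beta)$root)

# Example: PDF and HRF curves for one of the settings shown in Figure 1
curve(deme(x, alpha = 1.2, beta = 1), from = 0.01, to = 10, ylab = "PDF")
curve(heme(x, alpha = 1.2, beta = 1), from = 0.01, to = 10, ylab = "HRF")
```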
Different approaches to estimating the PDF and CDF of the EME distribution were provided by Tripathi et al. [16]. The ML and Bayesian techniques developed by Fatima and Ahmad [17] have been taken into consideration when discussing the parameter estimators of the EME distribution. Akhter et al. [18] provided explicit algebraic equations that are generated from the EME distribution for both single and product moments of order statistics. Additionally, they used a full sample as well as a T2C sample to identify the best linear unbiased estimators based on these moments. Some generalizations of EME distribution may be found in Iqbal et al. [19], Ahmadini et al. [20] and Shrahili et al. [21].
The RF, HRF, and parameter estimators of the EME distribution obtained via ML and Bayesian techniques are addressed in the current study. Both the Bayesian credible intervals (BCIs) and the approximate confidence intervals (ACIs) are built using the PT2C data. The paper is organised as follows. Section 2 deals with the ML estimators and the ACIs of the parameters, RF and HRF. Section 3 explores Bayesian estimation under informative (IF) and non-informative (NIF) priors. Sections 4 and 5, respectively, provide a numerical illustrative study and a conclusion.
2. Maximum Likelihood Procedure
Here, using PT2C data, we obtain the ML estimators of the parameters, RF, and HRF of the EME distribution. In addition, the ACIs for the RF, HRF and the parameters β and α are constructed. Let x₍₁₎, x₍₂₎, …, x₍ₘ₎ be the observed PT2C sample drawn from the EME distribution. Based on (1), the likelihood function of the EME distribution takes the following form:

$$ L(\underline{x}\mid \beta ,\alpha )\propto \prod_{i=1}^{m}\alpha \beta ^{-2}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left[1-\left(1-\psi _{i}\right)^{\alpha }\right]^{R_{i}}, \qquad (4) $$

where $\psi _{i}=(1+x_{i}\beta ^{-1})e^{-x_{i}/\beta }$ and we write $x_{(i)}=x_{i}$ for simplicity. The logarithm of (4), say $\ell =\log L(\underline{x}\mid \beta ,\alpha )$, becomes:

$$ \ell \propto m\ln \alpha -2m\ln \beta +\sum_{i=1}^{m}\ln x_{i}-\frac{1}{\beta }\sum_{i=1}^{m}x_{i}+(\alpha -1)\sum_{i=1}^{m}\ln \left(1-\psi _{i}\right)+\sum_{i=1}^{m}R_{i}\ln \left[1-\left(1-\psi _{i}\right)^{\alpha }\right]. \qquad (5) $$
The first derivatives of (5) with respect to β and α are given by:

$$ \frac{\partial \ell }{\partial \beta }=-\frac{2m}{\beta }+\frac{1}{\beta ^{2}}\sum_{i=1}^{m}x_{i}-(\alpha -1)\sum_{i=1}^{m}\frac{v_{i}}{1-\psi _{i}}+\alpha \sum_{i=1}^{m}\frac{R_{i}\left(1-\psi _{i}\right)^{\alpha -1}v_{i}}{1-\left(1-\psi _{i}\right)^{\alpha }}, $$

$$ \frac{\partial \ell }{\partial \alpha }=\frac{m}{\alpha }+\sum_{i=1}^{m}\ln \left(1-\psi _{i}\right)-\sum_{i=1}^{m}\frac{R_{i}\left(1-\psi _{i}\right)^{\alpha }\ln \left(1-\psi _{i}\right)}{1-\left(1-\psi _{i}\right)^{\alpha }}, $$

where $v_{i}=\partial \psi _{i}/\partial \beta =x_{i}^{2}\beta ^{-3}e^{-x_{i}/\beta }$. The ML estimators $\hat{\beta}$ and $\hat{\alpha}$ are the simultaneous solutions of $\partial \ell /\partial \beta \mid _{\beta =\hat{\beta}}=0$ and $\partial \ell /\partial \alpha \mid _{\alpha =\hat{\alpha}}=0$. A numerical iterative approach may be used to calculate the estimators of β and α for specified values of (m, R, x). Additionally, the invariance property of the ML method is used to evaluate $\hat{R}(x)$ and $\hat{h}(x)$ as follows:

$$ \hat{R}(x)=1-\left[1-\hat{\psi}\right]^{\hat{\alpha}}, \qquad \hat{h}(x)=\frac{\hat{\alpha}\hat{\beta}^{-2}x\,e^{-x/\hat{\beta}}\left[1-\hat{\psi}\right]^{\hat{\alpha}-1}}{1-\left[1-\hat{\psi}\right]^{\hat{\alpha}}}, \qquad \hat{\psi}=(1+x\hat{\beta}^{-1})e^{-x/\hat{\beta}}. $$
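Since the likelihood equations have no closed-form solution, β̂ and α̂ are obtained numerically. A minimal R sketch is given below; it assumes the observed PT2C failure times are stored in a vector xs and the removal numbers in a vector R (both our own variable names), and maximises (5) with optim().

```r
# Negative log-likelihood from Eq. (5) for PT2C data (xs = failure times, R = removals)
negloglik <- function(par, xs, R) {
  beta <- par[1]; alpha <- par[2]
  if (beta <= 0 || alpha <= 0) return(1e10)        # keep the optimiser in the valid region
  psi <- (1 + xs / beta) * exp(-xs / beta)
  ll <- length(xs) * log(alpha) - 2 * length(xs) * log(beta) + sum(log(xs)) -
        sum(xs) / beta + (alpha - 1) * sum(log(1 - psi)) +
        sum(R * log(1 - (1 - psi)^alpha))
  -ll
}

# ML estimates; hessian = TRUE also returns the observed information matrix
fit <- optim(par = c(1, 1), fn = negloglik, xs = xs, R = R, hessian = TRUE)
beta_hat <- fit$par[1]; alpha_hat <- fit$par[2]

# Invariance: MLEs of R(x) and h(x) at the mission time x = 0.8 used in Section 4
x0 <- 0.8
psi0 <- (1 + x0 / beta_hat) * exp(-x0 / beta_hat)
R_hat <- 1 - (1 - psi0)^alpha_hat
h_hat <- alpha_hat * beta_hat^(-2) * x0 * exp(-x0 / beta_hat) * (1 - psi0)^(alpha_hat - 1) / R_hat
```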
In addition, we obtain the observed information matrix, say $I(\hat{\alpha},\hat{\beta})$, to build the ACIs. The asymptotic multivariate normal distribution $N_{2}\left(0,I^{-1}(\hat{\alpha},\hat{\beta})\right)$ is used to create the ACIs for the parameters β and α under the usual regularity requirements. Based on the asymptotic normality of the ML estimators, the two-sided 100(1 − ε)% ACIs for β and α are

$$ \hat{\beta}\pm Z_{\varepsilon /2}\sqrt{\mathrm{var}(\hat{\beta})} \qquad \text{and} \qquad \hat{\alpha}\pm Z_{\varepsilon /2}\sqrt{\mathrm{var}(\hat{\alpha})}, $$

with interval length AIL = upper bound − lower bound, where $Z_{\varepsilon /2}$ is the upper ε/2 percentile of the standard normal distribution. Once more, an R-based numerical method is used to obtain the variance-covariance matrix. Also, the 100(1 − ε)% ACIs for R(x) and h(x) are given by $\hat{R}(x)\pm Z_{\varepsilon /2}\sqrt{\mathrm{var}(\hat{R}(x))}$ and $\hat{h}(x)\pm Z_{\varepsilon /2}\sqrt{\mathrm{var}(\hat{h}(x))}$.
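Continuing the sketch above (and assuming the fit object produced by optim() with hessian = TRUE), the ACIs and their lengths can be computed as follows; variances of R̂(x) and ĥ(x) would be obtained analogously via the delta method.

```r
# Variance-covariance matrix: inverse of the observed information (Hessian of -log L)
vcov_hat <- solve(fit$hessian)
se <- sqrt(diag(vcov_hat))
z <- qnorm(0.975)                                  # two-sided 95% interval (eps = 0.05)

ci_beta  <- beta_hat  + c(-1, 1) * z * se[1]       # ACI for beta
ci_alpha <- alpha_hat + c(-1, 1) * z * se[2]       # ACI for alpha
AIL_beta  <- diff(ci_beta)                         # interval lengths
AIL_alpha <- diff(ci_alpha)
```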
3. Bayesian Estimators
Here, the Bayesian estimators of the parameters, RF and HRF of the EME distribution are obtained under the squared error (SE) loss function in the case of IF and NIF priors. Firstly, consider β and α to have gamma prior distributions with parameters (a, b) and (c, d), respectively. Assuming that β and α are independently distributed, their joint prior distribution is given by:

$$ g_{1,2}(\beta ,\alpha )=\frac{b^{a}d^{c}}{\Gamma (a)\Gamma (c)}\,\beta ^{a-1}\alpha ^{c-1}e^{-b\beta -d\alpha }, \qquad \beta ,\alpha >0, $$

where a, b, c and d are chosen to reflect the prior knowledge about the unknown parameters (the criterion used to select the hyper-parameter values is discussed in Section 3.1). The joint posterior distribution of the parameters β and α is defined as:
$$ \pi _{2}(\beta ,\alpha \mid \underline{x})=k_{1}^{-1}\,\beta ^{a-2m-1}\alpha ^{c+m-1}e^{-b\beta -d\alpha }\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}, $$

where

$$ k_{1}=\int_{0}^{\infty }\!\!\int_{0}^{\infty }\beta ^{a-2m-1}\alpha ^{c+m-1}e^{-b\beta -d\alpha }\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}d\beta \,d\alpha . $$

Hence, the marginal posterior distributions of β and α take the following forms:

$$ \pi (\beta \mid \underline{x})=\int_{0}^{\infty }\pi _{2}(\beta ,\alpha \mid \underline{x})\,d\alpha , \qquad \pi (\alpha \mid \underline{x})=\int_{0}^{\infty }\pi _{2}(\beta ,\alpha \mid \underline{x})\,d\beta . $$

The Bayesian estimators of β and α under SE loss, expressed by $\tilde{\beta}$ and $\tilde{\alpha}$, are the posterior means:

$$ \tilde{\beta}=k_{1}^{-1}\int_{0}^{\infty }\!\!\int_{0}^{\infty }\beta ^{a-2m}\alpha ^{c+m-1}e^{-b\beta -d\alpha }\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}d\beta \,d\alpha , $$

$$ \tilde{\alpha}=k_{1}^{-1}\int_{0}^{\infty }\!\!\int_{0}^{\infty }\beta ^{a-2m-1}\alpha ^{c+m}e^{-b\beta -d\alpha }\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}d\beta \,d\alpha . $$

The Bayesian estimators of R(x) and h(x) are the corresponding posterior expectations:

$$ \tilde{R}(x)=\int_{0}^{\infty }\!\!\int_{0}^{\infty }\left(1-\left[1-(1+x\beta ^{-1})e^{-x/\beta }\right]^{\alpha }\right)\pi _{2}(\beta ,\alpha \mid \underline{x})\,d\alpha \,d\beta , \qquad (6) $$

$$ \tilde{h}(x)=\int_{0}^{\infty }\!\!\int_{0}^{\infty }\frac{\alpha \beta ^{-2}x\,e^{-x/\beta }\left[1-(1+x\beta ^{-1})e^{-x/\beta }\right]^{\alpha -1}}{1-\left[1-(1+x\beta ^{-1})e^{-x/\beta }\right]^{\alpha }}\;\pi _{2}(\beta ,\alpha \mid \underline{x})\,d\alpha \,d\beta . \qquad (7) $$

The above Bayesian estimators $\tilde{\beta}$, $\tilde{\alpha}$, $\tilde{R}(x)$ and $\tilde{h}(x)$ are not in closed form but can be evaluated numerically for given values of a, b, c, d, n, m, x and R.
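For the numerical evaluation by MCMC described below and in Section 4, the key ingredient is the unnormalised log-posterior under the gamma priors. A minimal R sketch (our own illustrative code; xs and R hold the PT2C data, and a, b, c, d are the hyper-parameters) is:

```r
# Unnormalised log-posterior of (beta, alpha) under gamma(a, b) and gamma(c, d) priors;
# additive constants (e.g. sum(log(xs))) are omitted since MH only needs ratios
log_post_IF <- function(beta, alpha, xs, R, a, b, c, d) {
  if (beta <= 0 || alpha <= 0) return(-Inf)
  psi <- (1 + xs / beta) * exp(-xs / beta)
  loglik <- length(xs) * log(alpha) - 2 * length(xs) * log(beta) - sum(xs) / beta +
            (alpha - 1) * sum(log(1 - psi)) + sum(R * log(1 - (1 - psi)^alpha))
  logprior <- (a - 1) * log(beta) - b * beta + (c - 1) * log(alpha) - d * alpha
  loglik + logprior
}
```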
Secondly, assume non-informative (NIF) priors for the parameters β and α, denoted by g₁(β) and g₂(α), each taken proportional to the reciprocal of the parameter. Assuming independence of the priors, the joint prior of β and α is

$$ g_{1,2}(\beta ,\alpha )\propto (\alpha \beta )^{-1}, \qquad \beta ,\alpha >0. $$

The joint posterior density of β and α given the data x is then

$$ \tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})=k_{2}^{-1}\,\beta ^{-2m-1}\alpha ^{m-1}\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}, $$

where

$$ k_{2}=\int_{0}^{\infty }\!\!\int_{0}^{\infty }\beta ^{-2m-1}\alpha ^{m-1}\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}d\beta \,d\alpha . $$

Hence, the marginal posterior distributions of β and α take the forms $g_{1}(\beta \mid \underline{x})=\int_{0}^{\infty }\tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})\,d\alpha$ and $g_{2}(\alpha \mid \underline{x})=\int_{0}^{\infty }\tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})\,d\beta$. The Bayesian estimators of β and α under SE loss, denoted by $\breve{\beta}$ and $\breve{\alpha}$, are obtained as follows:

$$ \breve{\beta}=E(\beta \mid \underline{x})=k_{2}^{-1}\int_{0}^{\infty }\!\!\int_{0}^{\infty }\beta ^{-2m}\alpha ^{m-1}\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}d\beta \,d\alpha , \qquad (8) $$

$$ \breve{\alpha}=E(\alpha \mid \underline{x})=k_{2}^{-1}\int_{0}^{\infty }\!\!\int_{0}^{\infty }\beta ^{-2m-1}\alpha ^{m}\prod_{i=1}^{m}\left(1-\psi _{i}\right)^{\alpha -1}x_{i}\,e^{-x_{i}/\beta }\left\{1-\left(1-\psi _{i}\right)^{\alpha }\right\}^{R_{i}}d\beta \,d\alpha . \qquad (9) $$

The Bayesian estimators of R(x) and h(x) are given by:

$$ \breve{R}(x)=\int_{0}^{\infty }\!\!\int_{0}^{\infty }\left(1-\left[1-(1+x\beta ^{-1})e^{-x/\beta }\right]^{\alpha }\right)\tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})\,d\alpha \,d\beta , \qquad (10) $$

$$ \breve{h}(x)=\int_{0}^{\infty }\!\!\int_{0}^{\infty }\frac{\alpha \beta ^{-2}x\,e^{-x/\beta }\left[1-(1+x\beta ^{-1})e^{-x/\beta }\right]^{\alpha -1}}{1-\left[1-(1+x\beta ^{-1})e^{-x/\beta }\right]^{\alpha }}\;\tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})\,d\alpha \,d\beta . \qquad (11) $$

The above Bayes estimates $\breve{\beta}$, $\breve{\alpha}$, $\breve{R}(x)$ and $\breve{h}(x)$ are assessed numerically for the given values of n, m, x and R. Integrals (8)-(11) are very hard to solve analytically, so the Metropolis-Hastings (MH) algorithm will be used to evaluate them.
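A minimal random-walk Metropolis-Hastings sketch in R is given below (our own illustrative implementation, not the authors' code). The argument log_post is any unnormalised log-posterior, e.g. log_post_IF from the previous sketch for the gamma priors, or the analogous function with log-prior −log(αβ) for the NIF priors. Averaging the retained draws (and the values of R(x) and h(x) evaluated at each draw) gives the Bayes estimates under SE loss.

```r
# Random-walk Metropolis-Hastings for (beta, alpha); log_post(beta, alpha) is unnormalised
mh_sample <- function(log_post, start, n_iter = 11000, burn = 1000, sd_prop = c(0.05, 0.1)) {
  draws <- matrix(NA_real_, n_iter, 2, dimnames = list(NULL, c("beta", "alpha")))
  cur <- start
  cur_lp <- log_post(cur[1], cur[2])
  for (t in seq_len(n_iter)) {
    prop <- rnorm(2, mean = cur, sd = sd_prop)        # symmetric normal proposal
    prop_lp <- log_post(prop[1], prop[2])
    if (log(runif(1)) < prop_lp - cur_lp) {           # accept/reject step
      cur <- prop; cur_lp <- prop_lp
    }
    draws[t, ] <- cur
  }
  draws[-seq_len(burn), ]                             # discard burn-in
}

# Hypothetical usage: posterior means = Bayes estimates under SE loss
# chain <- mh_sample(function(bet, alp) log_post_IF(bet, alp, xs, R, a, b, c, d),
#                    start = c(beta_hat, alpha_hat))
# colMeans(chain)
```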
3.1 Hyper-Parameter Elicitation
This sub-section handles the elicitation of the hyper-parameter values in the case of IF priors. The hyper-parameters are obtained from ML estimates of β and α by equating the mean and variance of $\hat{\beta}^{j}$ and $\hat{\alpha}^{j}$, j = 1, 2, …, N, with the mean and variance of the corresponding gamma priors, where N is the number of samples available from the EME distribution. Thus,

$$ \frac{a}{b}=\frac{1}{N}\sum_{j=1}^{N}\hat{\beta}^{j}, \qquad \frac{a}{b^{2}}=\frac{1}{N-1}\sum_{j=1}^{N}\left(\hat{\beta}^{j}-\frac{1}{N}\sum_{j=1}^{N}\hat{\beta}^{j}\right)^{2}, $$

$$ \frac{c}{d}=\frac{1}{N}\sum_{j=1}^{N}\hat{\alpha}^{j}, \qquad \frac{c}{d^{2}}=\frac{1}{N-1}\sum_{j=1}^{N}\left(\hat{\alpha}^{j}-\frac{1}{N}\sum_{j=1}^{N}\hat{\alpha}^{j}\right)^{2}. $$

Hence, the estimated hyper-parameters are obtained as follows:

$$ a=\frac{\left(\frac{1}{N}\sum_{j=1}^{N}\hat{\beta}^{j}\right)^{2}}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\hat{\beta}^{j}-\frac{1}{N}\sum_{j=1}^{N}\hat{\beta}^{j}\right)^{2}}, \qquad b=\frac{\frac{1}{N}\sum_{j=1}^{N}\hat{\beta}^{j}}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\hat{\beta}^{j}-\frac{1}{N}\sum_{j=1}^{N}\hat{\beta}^{j}\right)^{2}}, $$

and similarly

$$ c=\frac{\left(\frac{1}{N}\sum_{j=1}^{N}\hat{\alpha}^{j}\right)^{2}}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\hat{\alpha}^{j}-\frac{1}{N}\sum_{j=1}^{N}\hat{\alpha}^{j}\right)^{2}}, \qquad d=\frac{\frac{1}{N}\sum_{j=1}^{N}\hat{\alpha}^{j}}{\frac{1}{N-1}\sum_{j=1}^{N}\left(\hat{\alpha}^{j}-\frac{1}{N}\sum_{j=1}^{N}\hat{\alpha}^{j}\right)^{2}}. $$

For more information, see Dey and Pradhan [23].
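In R, this moment-matching step might look like the short sketch below (our own code), where beta_ml and alpha_ml are assumed to be vectors holding the N ML estimates β̂ʲ and α̂ʲ.

```r
# Match the gamma prior mean and variance to those of the ML estimates
elicit_gamma <- function(est) {
  m <- mean(est); v <- var(est)       # var() uses the 1/(N - 1) divisor, as above
  c(shape = m^2 / v, rate = m / v)
}
ab <- elicit_gamma(beta_ml)           # hyper-parameters (a, b) for the prior of beta
cd <- elicit_gamma(alpha_ml)          # hyper-parameters (c, d) for the prior of alpha
```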
3.2 Bayesian Credible Intervals
Furthermore, the BCIs of β and α, with limits (L, U) chosen such that the posterior coverage is 95%, are obtained under the IF and NIF priors as follows. Under the IF (gamma) priors,

$$ \int_{L}^{U}\!\!\int_{0}^{\infty }\pi _{2}(\beta ,\alpha \mid \underline{x})\,d\alpha \,d\beta =0.95, \qquad (12) $$

$$ \int_{L}^{U}\!\!\int_{0}^{\infty }\pi _{2}(\beta ,\alpha \mid \underline{x})\,d\beta \,d\alpha =0.95, \qquad (13) $$

for β and α, respectively, and under the NIF priors,

$$ \int_{L}^{U}\!\!\int_{0}^{\infty }\tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})\,d\alpha \,d\beta =0.95, \qquad (14) $$

$$ \int_{L}^{U}\!\!\int_{0}^{\infty }\tilde{\pi}_{2}(\beta ,\alpha \mid \underline{x})\,d\beta \,d\alpha =0.95. \qquad (15) $$

Equations (12)-(15) are very hard to solve analytically, so the MH algorithm will be used to evaluate them. Similarly, the BCIs of R(x) and h(x) provided in (6) and (7) under the IF priors, and of R(x) and h(x) provided in (10) and (11) under the NIF priors, are obtained using the above procedure.
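In practice, (12)-(15) are approximated from the MCMC output: given the retained draws of a quantity (β, α, R(x) or h(x)), the 95% HPD/credible limits are the shortest interval containing 95% of the draws. A minimal R sketch (our own code) is:

```r
# Empirical 95% HPD interval from a vector of posterior draws
hpd <- function(draws, prob = 0.95) {
  s <- sort(draws)
  n <- length(s)
  k <- ceiling(prob * n)                       # number of draws inside the interval
  widths <- s[k:n] - s[1:(n - k + 1)]          # widths of all intervals of k consecutive draws
  i <- which.min(widths)                       # shortest one
  c(lower = s[i], upper = s[i + k - 1])
}
# Hypothetical usage: hpd(chain[, "beta"]); hpd(chain[, "alpha"])
```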
4. Numerical Illustration
To determine ML estimates (MLEs) and Bayesian estimates (BEs) for the parameters, RF and HRF under the PT2C scheme, a simulation study was conducted. Different sample sizes (n), effective failure sizes (m), and parameter values are taken into consideration. The R 3.6.1 software is used to carry out the following stages.
1. Random samples X₁, X₂, …, X_n are generated from the EME distribution under PT2C using the technique provided by Balakrishnan and Sandhu [22], which includes the following steps (an R sketch of this algorithm is given after this list):
i. Generate m independent and identically distributed (iid) random numbers W₁, W₂, …, W_m from the uniform distribution U(0, 1).
ii. Set $V_{i}=W_{i}^{1/\left(i+R_{m}+R_{m-1}+\cdots +R_{m-i+1}\right)}$ for i = 1, 2, …, m.
iii. Set $U_{i}=1-V_{m}V_{m-1}\cdots V_{m-i+1}$ for i = 1, 2, …, m. Then U₁, U₂, …, U_m is the PT2C sample from the U(0, 1) distribution.
iv. Finally, set $X_{i}=F^{-1}(U_{i})$ for i = 1, 2, …, m, where F⁻¹(·) is the inverse CDF of the EME distribution under consideration. Then X₁, X₂, …, X_m is the required PT2C sample from the EME distribution with censoring scheme R = (R₁, R₂, …, R_m).
2. Three different sampling schemes are considered as follows:
Scheme I: R₁ = R₂ = ⋯ = R_{m−1} = 0 and R_m = n − m (T2C),
Scheme II: R₁ = n − m and R₂ = R₃ = ⋯ = R_m = 0, and
Scheme III: R₁ = R₂ = (n − m)/2 and R₃ = R₄ = ⋯ = R_m = 0.
3. The parameters β and α are chosen with the values: Case 1: β = 0.5, α = 1.5 and Case 2: β = 0.5, α = 3.
4. With the mission time x = 0.8, the number of stages m, and the censoring strategy R = (R₁, R₂, …, R_m), various sample sizes of n = 50, 100, and 150 are chosen. The method described in Section 3.1 (Dey and Pradhan [23]) is used to choose the hyper-parameters for the gamma priors.
5. To create samples from the posterior distributions, the MH approach is applied.
6. The biases, mean squared errors (MSEs), average interval lengths (AILs), and CPs for the MLEs and BEs are computed for various sample sizes, with 1000 repeated samples.
7. A portion of the results, which are numerically lengthy, is shown in Tables 1-3 for the MLEs and the BEs under IF priors.
Figures 2-8 provide examples from the investigation.
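The sample-generation algorithm in step 1 might be coded as in the following R sketch (our own illustrative implementation; the function name rpt2c_eme is not from the paper). The EME quantile function is inverted numerically, and n = m + ΣRᵢ is implied by the chosen scheme.

```r
# One PT2C sample of size m from EME(alpha, beta) with removal scheme R = (R1, ..., Rm)
rpt2c_eme <- function(m, R, alpha, beta) {
  W <- runif(m)                                  # step i
  V <- W^(1 / (1:m + cumsum(rev(R))))            # step ii: V_i = W_i^{1/(i + R_m + ... + R_{m-i+1})}
  U <- 1 - cumprod(rev(V))                       # step iii: U_i = 1 - V_m V_{m-1} ... V_{m-i+1}
  Fx <- function(x) (1 - (1 + x / beta) * exp(-x / beta))^alpha
  sapply(U, function(u)                          # step iv: X_i = F^{-1}(U_i), solved with uniroot
    uniroot(function(x) Fx(x) - u, lower = 1e-10, upper = 1e3 * beta)$root)
}

# Example: Scheme II with n = 50, m = 20 (R1 = n - m, other Ri = 0), Case 1 parameters
x_pt2c <- rpt2c_eme(m = 20, R = c(30, rep(0, 19)), alpha = 1.5, beta = 0.5)
```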
Regarding the behaviour of the various estimates, the following findings are observed.
✓ All the precision measures for the MLEs and BEs tend to decrease as the sample size n and the number of stages m increase, in the majority of cases. Increasing n and m also improves the CPs of the HRF estimates.
✓ Figure 2 shows that, in Case 1, the MSEs of the Bayes estimates of α and β attain the smallest values across all schemes, while the MSEs of the ML estimates of α and β attain the largest values across all schemes.
Figure 2: MSEs for α and β estimates in Case 1 for all values of m
✓ Figure 3 demonstrates that, in Case 2, the MSEs of the Bayes estimates of α and β obtain the lowest values among all schemes, whereas the MSEs of the ML estimates obtain the highest values within all schemes.
Figure 3: MSEs for α and β estimates in Case 2 for all values of m
✓ Regarding Case 1, Figure 4 shows that the MSEs of the Bayes estimates of R(x) and h(x) take the smallest values in all schemes, while the MSEs of the ML estimates of R(x) and h(x) take the largest values.
Figure 4: MSEs for RF and HRF estimates in Case 1 for all values of m
✓ The MSEs of R(x) and h(x), in all schemes, obtain the least values, as shown in Figure 5.
Figure 5: MSEs for RF and HRF in Case 2 for all values of m
✓ In most cases, it is possible to conclude that the MSEs of the estimates of the population parameters employing IF priors take the lowest values.
✓ The widths of the BCIs via IF priors are shorter than those of the intervals for the MLEs and the BEs under NIF priors in Case 1 (β = 0.5, α = 1.5).
✓ The CPs for the BEs under IF priors are higher than those for the MLEs and the BEs under NIF priors.
In Figure 6, for the NIF priors, history plots for various estimates of β and α are displayed. The plots of the parameter chains resemble a horizontal band without any discernible long upward or downward trends, which is evidence of convergence.
(a) β and α at n = 100, m = 50 for β = 0.5, α = 1.5
(b) β and α at n = 100, m = 50 for β = 0.5, α = 3
Figure 6: Different BEs for β and α under NIF priors
In Figure 7, for the IF priors, history plots for various estimates of β and α are shown. The plots of the chains for the parameters resemble a horizontal band without any significant long-term upward or downward trends, which are signs of convergence.
(a) β and α at n = 100, m = 50 for β = 0.5, α = 1.5
(b) β and α at n = 100, m = 50 for β = 0.5, α = 3
Figure 7: Different Bayesian estimates for β and α under gamma priors
Table 1: MLEs and associated measures for α, β, R(x) and h(x) in Case 1
Scheme I
n m Estimate Mean Bias MSE AIL CP
b 0.490 0.010 0.017 0.511 95.6
20 a 1.780 0.280 0.562 2.728 95.8
R(x) 0.842 0.169 0.029 0.299 95.0
50 h(x) 0.627 0.366 0.134 0.758 95.0
b 0.485 0.015 0.010 0.383 96.6
30 a 1.734 0.234 0.471 2.531 96.2
R(x) 0.790 0.117 0.014 0.483 96.7
h(x) 0.729 0.264 0.070 1.118 96.7
b 0.477 0.023 0.020 0.550 95.2
20 a 1.810 0.310 0.560 2.670 95.6
R(x) 0.909 0.237 0.056 0.184 95.0
h(x) 0.470 0.523 0.273 0.737 95.0
b 0.492 0.008 0.006 0.297 96.9
100 50 a 1.626 0.126 0.159 1.484 95.5
R(x) 0.772 0.100 0.010 0.469 96.0
h(x) 0.765 0.228 0.052 1.031 96.0
b 0.495 0.005 0.004 0.237 96.5
70 a 1.582 0.082 0.098 1.184 96.7
R(x) 0.638 0.035 0.001 0.640 97.1
h(x) 0.978 0.015 0.000 1.108 97.1
b 0.490 0.010 0.007 0.324 96.1
50 a 1.615 0.115 0.138 1.384 95.8
R(x) 0.835 0.162 0.026 0.306 96.0
h(x) 0.676 0.317 0.100 0.767 96.0
b 0.491 0.009 0.004 0.258 96.8
70 a 1.599 0.099 0.103 1.198 95.7
R(x) 0.826 0.153 0.023 0.393 97.1
150 h(x) 0.686 0.308 0.095 0.964 97.1
b 0.495 0.005 0.003 0.202 96.7
100 a 1.562 0.062 0.070 1.011 95.3
R(x) 0.685 0.012 0.000 0.592 97.0
h(x) 0.915 0.078 0.006 1.103 97.0
b 0.498 0.002 0.002 0.165 96.8
130 a 1.530 0.030 0.047 0.838 96.3
R(x) 0.583 0.089 0.008 0.756 96.9
h(x) 1.046 0.053 0.003 1.173 96.9
Continued Table 1
Scheme II
n m Estimate Mean Bias MSE AIL CP
b 0.488 0.012 0.010 0.385 96.5
20 a 1.673 0.173 0.314 2.090 96.8
R(x) 0.533 0.139 0.019 0.906 95.0
50 h( x) 1.092 0.099 0.010 1.491 95.0
b 0.494 0.006 0.007 0.335 95.8
30 a 1.652 0.152 0.263 1.921 95.2
R(x) 0.577 0.096 0.009 0.898 96.7
h( x) 1.026 0.033 0.001 1.380 96.7
b 0.498 0.002 0.010 0.385 95.9
20 a 1.615 0.115 0.214 1.759 96.5
R(x) 0.491 0.181 0.033 0.893 95.0
h( x) 1.133 0.140 0.020 1.393 95.0
b 0.497 0.003 0.004 0.257 96.2
100 50 a 1.585 0.085 0.123 1.336 94.8
R(x) 0.512 0.160 0.026 0.967 96.0
h( x) 1.110 0.117 0.014 1.573 96.0
b 0.497 0.003 0.000 0.085 97.0
70 a 1.559 0.059 0.004 0.084 96.8
R(x) 0.549 0.123 0.015 0.938 100.0
h( x) 1.057 0.064 0.004 1.462 100.0
b 0.490 0.010 0.004 0.251 96.5
50 a 1.600 0.100 0.111 1.247 96.3
R(x) 0.499 0.173 0.030 0.859 96.0
h( x) 1.169 0.176 0.031 1.156 96.0
b 0.497 0.003 0.003 0.210 96.2
70 a 1.557 0.057 0.079 1.081 95.8
R(x) 0.569 0.104 0.011 0.913 97.1
150 h( x) 1.043 0.050 0.002 1.312 97.1
b 0.500 0.000 0.002 0.183 97.0
100 a 1.538 0.038 0.059 0.939 96.5
R(x) 0.472 0.201 0.040 0.949 97.0
h( x) 1.171 0.178 0.032 1.425 97.0
b 0.498 0.002 0.002 0.160 97.1
a 1.536 0.036 0.044 0.809 96.0
130 R(x) 0.465 0.208 0.043 0.912 96.9
h( x) 1.193 0.200 0.040 1.280 96.9
Continued Table 1
Scheme III
n m Estimate Mean Bias MSE AIL CP
b 0.483 0.017 0.010 0.392 96.1
20 a 1.693 0.193 0.319 2.081 95.9
R(x) 0.566 0.107 0.011 0.938 95.0
50 h( x) 1.057 0.064 0.004 1.502 95.0
b 0.495 0.005 0.007 0.332 96.3
30 a 1.645 0.145 0.254 1.892 95.9
R(x) 0.580 0.093 0.009 0.958 96.7
h( x) 0.988 0.006 0.000 1.626 96.7
b 0.490 0.010 0.010 0.381 96.3
20 a 1.617 0.117 0.191 1.653 95.9
R(x) 0.603 0.069 0.005 0.952 95.0
h( x) 0.958 0.035 0.001 1.614 95.0
100 b 0.496 0.004 0.004 0.253 96.4
50 a 1.588 0.088 0.123 1.333 95.4
R(x) 0.559 0.114 0.013 0.919 96.0
h( x) 1.069 0.075 0.006 1.371 96.0
b 0.494 0.006 0.003 0.209 96.8
70 a 1.581 0.081 0.093 1.151 95.6
R(x) 0.538 0.134 0.018 0.869 97.1
h( x) 1.100 0.106 0.011 1.227 97.1
b 0.498 0.002 0.004 0.247 96.6
50 a 1.560 0.060 0.092 1.166 96.8
R(x) 0.462 0.210 0.044 0.932 96.0
h( x) 1.173 0.180 0.032 1.444 96.0
b 0.500 0.000 0.003 0.216 96.8
70 a 1.546 0.046 0.071 1.030 96.3
R(x) 0.536 0.136 0.019 0.947 97.1
150 h( x) 1.090 0.097 0.009 1.471 97.1
b 0.496 0.004 0.002 0.185 97.2
100 a 1.556 0.056 0.064 0.967 95.4
R(x) 0.544 0.129 0.017 0.937 97.0
h( x) 1.081 0.088 0.008 1.423 97.0
b 0.497 0.003 0.002 0.160 96.3
130 a 1.542 0.042 0.046 0.828 95.7
R(x) 0.485 0.188 0.035 0.960 96.9
h( x) 1.153 0.160 0.025 1.560 96.9
Table 2: Bayes estimates and associated measures for α, β, R(x) and h(x) in Case 1 using IF priors
Scheme I
N m Estimate Mean Bias MSE CIL CP
b 0.489 0.011 0.001 0.082 97.4
20 a 1.779 0.279 0.078 0.082 97.0
R(x) 0.841 0.169 0.028 0.308 100.0
50 h( x) 0.629 0.364 0.132 0.787 100.0
b 0.484 0.016 0.001 0.080 97.0
30 a 1.734 0.234 0.055 0.082 98.5
R(x) 0.789 0.116 0.014 0.483 96.7
h( x) 0.732 0.261 0.068 1.100 100.0
b 0.477 0.023 0.001 0.084 96.7
20 a 1.809 0.309 0.096 0.080 96.7
R(x) 0.909 0.237 0.056 0.186 100.0
h( x) 0.471 0.522 0.273 0.745 100.0
b 0.492 0.008 0.001 0.080 97.2
100 50 a 1.624 0.124 0.016 0.077 98.1
R(x) 0.714 0.041 0.002 0.452 96.0
h( x) 0.895 0.098 0.010 0.922 100.0
b 0.495 0.005 0.000 0.077 97.6
a 1.580 0.080 0.007 0.084 97.4
70 R(x) 0.638 0.035 0.001 0.640 97.1
h( x) 0.977 0.016 0.000 1.099 98.6
b 0.491 0.009 0.001 0.081 98.9
50 a 1.619 0.119 0.015 0.082 96.4
R(x) 0.793 0.120 0.014 0.281 96.0
h( x) 0.779 0.214 0.046 0.598 100.0
b 0.490 0.010 0.001 0.081 98.0
a 1.599 0.099 0.010 0.085 98.2
70 R(x) 0.825 0.153 0.023 0.374 100.0
150 h( x) 0.688 0.305 0.093 0.884 100.0
b 0.496 0.004 0.000 0.078 96.9
100 a 1.549 0.049 0.003 0.084 96.8
R(x) 0.697 0.024 0.001 0.571 99.0
h( x) 0.899 0.094 0.009 1.114 100.0
b 0.496 0.004 0.000 0.076 97.5
130 a 1.547 0.047 0.003 0.084 97.3
R(x) 0.567 0.105 0.011 0.799 96.2
h( x) 1.067 0.074 0.006 1.196 99.2
Continued Table 2
Scheme II
n m Estimate Mean Bias MSE CIL CP
ß 0.486 0.014 0.001 0.080 97.4
20 a 1.673 0.173 0.030 0.085 97.5
R (x) 0.532 0.140 0.020 0.965 100.0
50 h( x) 1.097 0.104 0.011 1.595 100.0
ß 0.494 0.006 0.000 0.080 96.5
30 a 1.651 0.151 0.023 0.090 97.5
R (x) 0.577 0.095 0.009 0.891 96.7
h( x) 1.024 0.031 0.001 1.308 100.0
ß 0.496 0.004 0.000 0.082 97.1
20 a 1.615 0.115 0.014 0.083 97.1
R (x) 0.490 0.182 0.033 0.991 100.0
h( x) 1.137 0.144 0.021 1.704 100.0
ß 0.493 0.007 0.000 0.083 98.8
100 50 a 1.590 0.090 0.009 0.080 97.8
R (x) 0.522 0.151 0.023 0.922 96.0
h( x) 1.118 0.125 0.016 1.333 100.0
ß 0.497 0.003 0.000 0.078 97.2
a 1.557 0.057 0.004 0.083 98.1
70 R (x) 0.549 0.123 0.015 0.938 100.0
h( x) 1.057 0.064 0.004 1.461 100.0
ß 0.495 0.005 0.000 0.079 96.8
50 a 1.574 0.074 0.006 0.082 96.6
R (x) 0.505 0.167 0.028 0.958 100.0
h( x) 1.117 0.124 0.015 1.555 98.0
ß 0.496 0.004 0.000 0.082 97.7
a 1.555 0.055 0.004 0.085 96.9
70 R (x) 0.567 0.105 0.011 0.911 98.6
150 h( x) 1.046 0.053 0.003 1.319 97.1
ß 0.498 0.002 0.000 0.077 98.2
100 a 1.547 0.047 0.003 0.085 97.2
R (x) 0.511 0.162 0.026 0.940 96.0
h( x) 1.125 0.132 0.018 1.370 100.0
ß 0.497 0.003 0.000 0.075 97.7
130 a 1.540 0.040 0.002 0.082 97.2
R (x) 0.486 0.186 0.035 0.925 98.5
h( x) 1.164 0.170 0.029 1.330 96.9
Continued Table 2
Scheme III
n m Estimate Mean Bias MSE CIL CP
b 0.483 0.017 0.001 0.081 97.1
20 a 1.693 0.193 0.038 0.088 98.2
R(x) 0.565 0.107 0.012 0.968 100.0
50 h(x) 1.059 0.066 0.004 1.590 100.0
b 0.494 0.006 0.000 0.083 98.9
30 a 1.644 0.144 0.021 0.080 97.4
R(x) 0.579 0.094 0.009 0.959 100.0
h(x) 0.991 0.003 0.000 1.626 100.0
b 0.488 0.012 0.001 0.079 96.4
20 a 1.617 0.117 0.014 0.084 97.1
R(x) 0.602 0.070 0.005 0.973 100.0
h(x) 0.962 0.031 0.001 1.672 100.0
b 0.498 0.002 0.000 0.079 96.8
100 50 a 1.562 0.062 0.004 0.084 96.9
R(x) 0.457 0.215 0.046 0.927 98.0
h(x) 1.189 0.196 0.038 1.427 98.0
b 0.494 0.006 0.000 0.077 97.1
a 1.580 0.080 0.007 0.089 97.3
70 R(x) 0.538 0.135 0.018 0.865 100.0
h(x) 1.102 0.109 0.012 1.254 97.1
b 0.497 0.003 0.000 0.079 96.7
50 a 1.559 0.059 0.004 0.082 97.3
R(x) 0.472 0.200 0.040 0.912 96.0
h(x) 1.172 0.179 0.032 1.271 100.0
b 0.499 0.001 0.000 0.079 98.0
a 1.547 0.047 0.003 0.084 96.8
70 R(x) 0.535 0.137 0.019 0.928 100.0
150 h(x) 1.094 0.101 0.010 1.487 97.1
b 0.495 0.005 0.000 0.076 97.6
100 a 1.552 0.052 0.003 0.082 98.0
R(x) 0.515 0.158 0.025 0.887 99.0
h(x) 1.127 0.134 0.018 1.416 97.0
b 0.497 0.003 0.000 0.076 98.3
130 a 1.534 0.034 0.002 0.085 97.1
R(x) 0.489 0.183 0.034 0.950 100.0
h(x) 1.148 0.154 0.024 1.438 100.0
5. Discussion and Summary
This study uses maximum likelihood and Bayesian techniques to obtain the parameter, reliability function and hazard rate function estimators of the EME distribution under PT2C schemes. Gamma and non-informative priors are taken into account under the squared error loss function to construct the Bayesian estimators. On the basis of the IF and NIF priors, approximate confidence intervals as well as Bayesian credible intervals are derived. A simulation study is conducted to compare the effectiveness of the estimates. According to the numerical illustration, the Bayesian estimates using the gamma priors are generally more accurate than the MLEs. When compared with the other schemes, Scheme I yields the highest MSEs, while Scheme III yields the lowest MSEs for each estimate. Comparatively speaking, the Bayesian estimates using gamma priors have the highest coverage probabilities.
References
[1] Balakrishnan, N. (2007). Progressive censoring methodology: an appraisal (with discussion). Test 16, 211-296.
[2] Hofmann, G., Cramer, E., Balakrishnan, N. and Kunert, G. (2005). An asymptotic approach to progressive censoring. Journal of Statistical Planning and Inference, 130(1), 207-227.
[3] Krishna, H., and Kumar, K. (2011). Reliability estimation in Lindley distribution with progressively type II right censored sample. Mathematics and Computers in Simulation, 82(2), 281-294.
[4] Balakrishnan, N., and Aggarwala, R. (2000). Progressive Censoring Theory, Methods and Applications. Birkhauser Boston, MA.
[5] Wu, S. J. (2002). Estimation of the parameters of the Weibull distribution with progressively censored data. Journal of the Japan Statistical Society, 32(2), 155-163.
[6] Ng, H.K.T. (2005). Parameter estimation for a modified Weibull distribution, for progressively type-II censored samples. IEEE Transactions on Reliability, 54(3), 374-380.
[7] Dey, S., Singh, S., Tripathi, Y.M., and Asgharzadeh, A. (2016). Estimation and prediction for a progressively censored generalized inverted exponential distribution. Statistical Methodology, 32, 185-202.
[8] Hassan, A.S., Abd-Alla, M. and El-Elaa, H.G.A. (2017). Estimation in step stress partially accelerated life test for exponentiated Pareto distribution under progressive censoring with random removal. Journal of Advances in Mathematics and Computer Science, 25(1), 1-16.
[9] EL-Sagheer, R.M. (2019). Estimating the parameters of Kumaraswamy distribution using progressively censored data. Journal of Testing and Evaluation, 47(2). https://doi.org/10.1520/JTE20150393
[10] Noor, F., Sajid, A., Ghazal, M., Khan, I., Zaman, M., and Baig, I. (2020). Bayesian estimation of Rayleigh distribution in the presence of outliers using progressive censoring. Hacettepe Journal of Mathematics & Statistics, 49(6), 2119-2133.
[11] Alshenawy, R., Al-Alwan, A., Almetwally, E.M., Afify, A.Z., and Almongy, H.M. (2020). Progressive Type-II censoring schemes of extended odd Weibull exponential distribution with applications in medicine and engineering. Mathematics, 8(10), 1679. https://doi.org/10.3390/math8101679
[12] Shrahili, M., El-Saeed, A.R., Hassan, A.S., Elbatal, I., and Elgarhy, M. (2022). Estimation of entropy for log-logistic distribution under progressive Type II censoring. Journal of Nanomaterials. https://doi.org/10.1155/2022/2739606
[13] Dara, S.T. and Ahmad, M. (2012). Recent Advances in Moment Distributions and their Hazard Rate. Ph.D. Thesis, National College of Business Administration and Economics, Lahore, Pakistan.
[14] Hasnain, S.A., Iqbal, Z., and Ahmad, M. (2015). On exponentiated moment exponential distribution. Pakistan Journal of Statistics, 31(2), 267-280.
[15] Gupta, R.D. and Kundu, D. (1999). Generalized exponential distribution. Australian & New Zealand Journal of Statistics, 41(2), 173-188.
[16] Tripathi, Y.M., Kayal, T., and Dey, S. (2017). Estimation of the PDF and the CDF of exponentiated moment exponential distribution. International Journal of System Assurance Engineering and Management, 8(2), 1282-1296.
[17] Fatima, K. and Ahmad, S.P. (2018). Bayesian approach in estimation of shape parameter of the exponentiated moment exponential distribution. Journal of Statistical Theory and Applications, 17(2), 359-374.
[18] Akhter, Z., MirMostafaee, S. M.T.K. and Ormoz, E. (2022). On the order statistics of exponentiated moment exponential distribution and associated inference. Journal of Statistical Computation and Simulation, 92(6), 1322-1346, DOI: 10.1080/00949655.2021.1991927.
[19] Iqbal, Z. and Hasnain, S.A., Salman, M., Ahmad, M. and Hamedani, G. (2014). Generalized exponentiated moment exponential distribution. Pakistan Journal of Statistics, 30(4), 537-554.
[20] Ahmadini, A.A.H., Hassan, A.S., Mohamed, R.E., Alshqaq, S.S. and Nagy, H.F. (2021). A new four-parameter moment exponential model with applications to lifetime data. Intelligent Automation & Soft Computing, 29(1), 131-146.
[21] Shrahili, M., Hassan, A.S., Almetwally, E.M., Ghorbal, A.B. and Elbatal, I. (2022). Alpha power moment exponential model with application to biomedical science. Scientific Programming. https://doi.org/10.1155/2022/6897405
[22] Balakrishnan, N. and Sandhu, R.A. (1995). A simple simulation algorithm for generating progressively type II censored samples. The American Statistician, 49(2), 229-230.
[23] Dey, S., Pradhan, B. (2014). Generalized inverted exponential distribution under hybrid censoring. Statistical Methodology. 18, 101-114.