BAYESIAN ANALYSIS OF EXTENDED MAXWELL-BOLTZMANN DISTRIBUTION USING SIMULATED AND REAL-LIFE DATA SETS.
Nuzhat Ahad •
University of Kashmir, Srinagar, India [email protected]
S.P.Ahmad •
University of Kashmir, Srinagar, India [email protected]
J.A.Reshi* •
Govt. Degree College Pulwama, India [email protected]
Abstract
The objective of this study is to estimate the scale parameter of the 2kth order weighted Maxwell-Boltzmann distribution (KWMBD) using Bayesian techniques. Several prior assumptions are considered, namely the extension of Jeffrey's prior, Hartigan's prior, the inverse-gamma prior and the inverse-exponential prior, together with different loss functions: the squared error loss function (SELF), the precautionary loss function (PLF), Al-Bayyati's loss function (ALBF), and Stein's loss function (SLF). The maximum likelihood estimator (MLE) is also obtained. We compare the performance of the MLE and of the Bayes estimators under each prior and its associated loss functions, and demonstrate the effectiveness of Bayesian estimation through simulation studies and the analysis of real-life datasets.
Keywords: 2Kth Order Weighted Maxwell-Boltzmann Distribution, Prior Distribution, Loss Function and Bayesian estimation.
1. Introduction
The Maxwell-Boltzmann distribution characterizes the probability distribution of speeds of particles in a gas at various temperatures. It provides a statistical framework for understanding the distribution of kinetic energies among particles, which makes it vital for modeling physical systems and predicting their behavior. Because of its practical significance, scientists and engineers study the Maxwell-Boltzmann distribution to gain a deeper understanding of various scientific phenomena and to build precise models of complex systems. Tyagi and Bhattacharya [15] were the first to explore the Maxwell distribution as a lifetime model, and introduced Bayesian and minimum variance unbiased estimation methods for its parameters and reliability function. Chaturvedi and Rani [6] derived classical and Bayesian estimators for the Maxwell distribution by extending it with an additional parameter. Various statisticians and mathematicians have carried out Bayesian analyses of the Maxwell-Boltzmann distribution using different loss functions and prior distributions; see Spiring and Yeung [14], Rasheed [11], Reshi [13], and Ahmad and Tripathi [1].
The 2kth order weighted Maxwell-Boltzmann distribution (KWMBD) is a flexible continuous univariate probability distribution suitable for modelling datasets with decreasing-increasing, bathtub, increasing and constant behaviour. The probability density function (pdf) of the KWMBD is given by:

f(x) = \frac{x^{2(k+1)} \alpha^{-(3+2k)} e^{-x^2/2\alpha^2}}{2^{k+1/2}\, \Gamma\left(k+\frac{3}{2}\right)}, \qquad x > 0,\ \alpha > 0,\ k \in \mathbb{R}.   (1)
The corresponding cumulative distribution function (cdf) of the KWMBD is given by:

F(x) = 1 - \frac{\Gamma\left(k+\frac{3}{2},\, \frac{x^2}{2\alpha^2}\right)}{\Gamma\left(k+\frac{3}{2}\right)}, \qquad x > 0,\ \alpha > 0,\ k \in \mathbb{R},   (2)

where \Gamma(\cdot,\cdot) denotes the upper incomplete gamma function.
Figure 1: Probability density plot and cumulative distribution plot of KWMBD for different combinations of parameters.
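As a concrete check on eqs. (1) and (2), the density and distribution function can be evaluated numerically. The following Python sketch is illustrative only (the function names are ours, not from the paper) and uses only the standard library, computing the regularized incomplete gamma function by its power series:

```python
import math

def kwmbd_pdf(x, alpha, k):
    # pdf of the 2kth order weighted Maxwell-Boltzmann distribution, eq. (1)
    norm = 2.0 ** (k + 0.5) * math.gamma(k + 1.5)
    return (x ** (2 * (k + 1)) * alpha ** (-(3 + 2 * k))
            * math.exp(-x * x / (2 * alpha * alpha)) / norm)

def _reg_lower_gamma(s, z, terms=300):
    # regularized lower incomplete gamma P(s, z) via the power series
    # gamma(s, z) = z^s e^{-z} * sum_{n>=0} z^n / (s (s+1) ... (s+n))
    total, term = 0.0, 1.0 / s
    for n in range(1, terms):
        total += term
        term *= z / (s + n)
    return total * z ** s * math.exp(-z) / math.gamma(s)

def kwmbd_cdf(x, alpha, k):
    # cdf of KWMBD, eq. (2): F(x) = 1 - Gamma(k+3/2, x^2/(2 alpha^2)) / Gamma(k+3/2)
    return _reg_lower_gamma(k + 1.5, x * x / (2 * alpha * alpha))
```

For k = 0 the pdf reduces to the classical Maxwell-Boltzmann density \sqrt{2/\pi}\, x^2 e^{-x^2/2\alpha^2} / \alpha^3, which provides a quick sanity check.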
2. Methodological Procedure
The Bayesian approach utilizes prior beliefs, observed data, and a loss function to make decisions in a structured manner, and is considered more reliable than the classical approach for estimating distribution parameters, especially when the prior distribution accurately represents the parameter's random behavior. In Bayesian analysis, parameters are treated as uncertain variables, allowing prior knowledge to be incorporated into the analysis. This prior information is typically described using a probability distribution known as the prior distribution. Friesl and Hurt [7] noted that Bayesian theory is a viable approach for incorporating prior information into the model, potentially improving the inference process and reflecting the parameter's behavior. However, there are no strict rules for choosing one prior over another; frequently, prior distributions are selected based on an individual's subjective knowledge and beliefs. When sufficient information about the parameter is available, informative priors are preferred; otherwise, non-informative priors, such as the uniform prior, are used. Aslam [4] demonstrated the application of the prior predictive distribution for determining the prior density. In this study, we assume that the parameter \alpha follows an extension of Jeffrey's prior, proposed by Al-Kutobi [3], and that \alpha^2 follows an inverse-gamma prior; these are given below.
2.1. Extension of Jeffrey's prior
The prior known as the extension of Jeffrey's prior is given by:

g(\alpha) \propto [I(\alpha)]^{c_1}, \qquad c_1 \in \mathbb{R}^+,

where I(\alpha) = -nE\left[\frac{\partial^2}{\partial \alpha^2} \log f(x)\right] is the Fisher information. The resulting extension of Jeffrey's prior for the KWMBD is:

g(\alpha) \propto \left(\frac{1}{\alpha^2}\right)^{c_1}, \qquad c_1 \in \mathbb{R}^+.   (3)
2.2. Inverse-gamma prior
Assuming the parameter \alpha^2 follows an inverse-gamma(\beta, \lambda) distribution, its density is given by:

g(\alpha^2) = \frac{\lambda^{\beta}}{\Gamma(\beta)}\, (\alpha^2)^{-\beta-1}\, e^{-\lambda/\alpha^2}, \qquad \alpha^2 > 0,\ \beta, \lambda > 0.   (4)
2.3. Loss functions
The idea of loss functions was first introduced by Laplace and later, during the mid-20th century, reintroduced by Weiss [16]. A loss function serves as a measure of the discrepancy between an estimate and the true value of a parameter. Decisions in Bayesian inference depend on both the experimental data and the loss function, and the interplay between the loss function and the posterior distribution is significant. The choice of a loss function depends on the specific characteristics of the data and the goals of the analysis. Han [9] pointed out that, in Bayesian analysis, choosing the right loss function and prior distribution is essential for making accurate statistical inferences. The Bayes estimator is directly impacted by the choice of loss function, while the prior density function may be affected by hyperparameters. Various symmetric and asymmetric loss functions have been demonstrated to be effective in research conducted by Zellner [17], Reshi [12], and Ahmad [2], among others. In this study, we employ the squared error, precautionary, Al-Bayyati's, and Stein's loss functions to compare the Bayes estimators. They are given below.
2.3.1. Squared error loss function
The squared error loss function is given by:
l_{sq}(\hat{\alpha}, \alpha) = c(\hat{\alpha} - \alpha)^2, \qquad c \in \mathbb{R}^+.   (5)
2.3.2. Precautionary loss function
The Precautionary loss function is given by:
l_{pr}(\hat{\alpha}, \alpha) = \frac{c(\hat{\alpha} - \alpha)^2}{\hat{\alpha}}.   (6)
2.3.3. Al-Bayyati's loss function
The Al-Bayyati's loss function is given by:
l_{Al}(\hat{\alpha}, \alpha) = \alpha^{c_2}(\hat{\alpha} - \alpha)^2, \qquad c_2 \in \mathbb{R}^+.   (7)
(RT&A, No 2 (78), Volume 19, June 2024)
2.3.4. Stein's loss function
The Stein's loss function is given by:

l_{St}(\hat{\alpha}, \alpha) = \frac{\hat{\alpha}}{\alpha} - \log\left(\frac{\hat{\alpha}}{\alpha}\right) - 1.   (8)
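The four loss functions (5)-(8) are simple enough to state as code. The following Python sketch is illustrative (function names and default constants are ours) and evaluates each loss for a candidate estimate and a true parameter value:

```python
import math

def self_loss(a_hat, a, c=1.0):
    # squared error loss, eq. (5); symmetric in (a_hat - a)
    return c * (a_hat - a) ** 2

def plf_loss(a_hat, a, c=1.0):
    # precautionary loss, eq. (6); penalizes underestimation more heavily
    return c * (a_hat - a) ** 2 / a_hat

def albf_loss(a_hat, a, c2=1.0):
    # Al-Bayyati's loss, eq. (7); squared error weighted by a**c2
    return a ** c2 * (a_hat - a) ** 2

def slf_loss(a_hat, a):
    # Stein's loss, eq. (8); asymmetric, zero exactly when a_hat == a
    return a_hat / a - math.log(a_hat / a) - 1
```

Note that SELF and ALBF are symmetric in the sign of the error, while PLF and SLF are asymmetric, which is what drives the different Bayes estimators derived below.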
3. Parametric Estimation of KWMBD
In this section, we discuss various estimation methods for the KWMBD.
3.1. Maximum Likelihood Estimation
Let x_1, x_2, x_3, \ldots, x_n be a random sample of size n from the 2kth order weighted Maxwell-Boltzmann distribution. The maximum likelihood estimator (MLE) of \alpha is:

\hat{\alpha}_{mle} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2}{n(2k+3)}}.   (9)
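Since eq. (9) is in closed form, the MLE is a one-line computation. An illustrative Python sketch (the helper name is ours):

```python
import math

def kwmbd_mle(data, k):
    # MLE of alpha from eq. (9): alpha_hat = sqrt( sum(x_i^2) / (n * (2k + 3)) )
    n = len(data)
    return math.sqrt(sum(x * x for x in data) / (n * (2 * k + 3)))
```

The estimator scales linearly with the data, as a scale-parameter estimator should: doubling every observation doubles the estimate.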
3.2. Bayes Estimator under Extension of Jeffrey's Prior
The joint probability density function of x given \alpha is:

L(x|\alpha) = \frac{\prod_{i=1}^{n} x_i^{2(k+1)}\, \alpha^{-n(3+2k)}\, e^{-\sum_{i=1}^{n} x_i^2 / 2\alpha^2}}{\left(2^{k+1/2}\, \Gamma\left(k+\frac{3}{2}\right)\right)^n}.   (10)
The posterior probability density function of \alpha given the data x is:

\pi_1(\alpha|x) \propto L(x|\alpha)\, g(\alpha) \propto \alpha^{-n(3+2k)-2c_1}\, e^{-\sum_{i=1}^{n} x_i^2 / 2\alpha^2},

so that

\pi_1(\alpha|x) = K\, \alpha^{-n(3+2k)-2c_1}\, e^{-\sum_{i=1}^{n} x_i^2 / 2\alpha^2},

where K is the normalizing constant, independent of \alpha, given by:

K^{-1} = \int_0^{\infty} \alpha^{-n(3+2k)-2c_1}\, e^{-\sum_{i=1}^{n} x_i^2 / 2\alpha^2}\, d\alpha = \frac{1}{2}\left(\frac{\sum_{i=1}^{n} x_i^2}{2}\right)^{-\frac{n(3+2k)+2c_1-1}{2}} \Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right).

Therefore, the posterior probability density function is:

\pi_1(\alpha|x) = \frac{2\left(\frac{\sum_{i=1}^{n} x_i^2}{2}\right)^{\frac{n(3+2k)+2c_1-1}{2}}}{\Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right)}\, \alpha^{-n(3+2k)-2c_1}\, e^{-\sum_{i=1}^{n} x_i^2 / 2\alpha^2}.   (11)
3.2.1. Bayes estimator under squared error loss function
The risk function under SELF is given by:

R_{(sq,ej)}(\hat{\alpha}) = \int_0^{\infty} c(\hat{\alpha} - \alpha)^2\, \pi_1(\alpha|x)\, d\alpha = c\hat{\alpha}^2 + \frac{c\sum_{i=1}^{n} x_i^2}{n(3+2k)+2c_1-3} - 2c\hat{\alpha}\sqrt{\frac{\sum_{i=1}^{n} x_i^2}{2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1-2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right)}.   (12)

Now, the Bayes estimator is obtained by solving \frac{\partial R_{(sq,ej)}(\hat{\alpha})}{\partial \hat{\alpha}} = 0, and is given by:

\hat{\alpha}_{(sq,ej)} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2}{2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1-2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right)}.   (13)
3.2.2. Bayes estimator under precautionary loss function
The risk function under PLF is given by:

R_{(pre,ej)}(\hat{\alpha}) = \int_0^{\infty} \frac{c(\hat{\alpha} - \alpha)^2}{\hat{\alpha}}\, \pi_1(\alpha|x)\, d\alpha = c\hat{\alpha} + \frac{c}{\hat{\alpha}}\, \frac{\sum_{i=1}^{n} x_i^2}{n(3+2k)+2c_1-3} - 2c\sqrt{\frac{\sum_{i=1}^{n} x_i^2}{2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1-2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right)}.   (14)

Now, the Bayes estimator is obtained by solving \frac{\partial R_{(pre,ej)}(\hat{\alpha})}{\partial \hat{\alpha}} = 0, and is given by:

\hat{\alpha}_{(pre,ej)} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2}{n(3+2k)+2c_1-3}}.   (15)
3.2.3. Bayes estimator under Al-Bayyati's loss function
The risk function under Al-Bayyati's loss function is given by:

R_{(alb,ej)}(\hat{\alpha}) = \int_0^{\infty} \alpha^{c_2}(\hat{\alpha} - \alpha)^2\, \pi_1(\alpha|x)\, d\alpha = E(\alpha^{c_2})\left[\hat{\alpha}^2 + \frac{\sum_{i=1}^{n} x_i^2}{n(3+2k)+2c_1-c_2-3} - 2\hat{\alpha}\sqrt{\frac{\sum_{i=1}^{n} x_i^2}{2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1-c_2-2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1-c_2-1}{2}\right)}\right],   (16)

where the factor E(\alpha^{c_2}) does not involve \hat{\alpha}. Now, the Bayes estimator is obtained by solving \frac{\partial R_{(alb,ej)}(\hat{\alpha})}{\partial \hat{\alpha}} = 0, and is given by:

\hat{\alpha}_{(alb,ej)} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2}{2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1-c_2-2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1-c_2-1}{2}\right)}.   (17)
3.2.4. Bayes estimator under Stein's loss function
The risk function under SLF is given by:

R_{(ste,ej)}(\hat{\alpha}) = \int_0^{\infty} \left(\frac{\hat{\alpha}}{\alpha} - \log\left(\frac{\hat{\alpha}}{\alpha}\right) - 1\right) \pi_1(\alpha|x)\, d\alpha = \frac{\hat{\alpha}}{\sqrt{\sum_{i=1}^{n} x_i^2 / 2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right)} - \log(\hat{\alpha}) - m - 1,   (18)

where m = -E(\log \alpha) is a constant independent of \hat{\alpha}. Now, the Bayes estimator is obtained by solving \frac{\partial R_{(ste,ej)}(\hat{\alpha})}{\partial \hat{\alpha}} = 0, and is given by:

\hat{\alpha}_{(ste,ej)} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2}{2}}\, \frac{\Gamma\left(\frac{n(3+2k)+2c_1-1}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2c_1}{2}\right)}.   (19)
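Collecting eqs. (13), (15), (17), and (19): the four Bayes estimates under the extension of Jeffrey's prior differ only in which gamma-function ratio multiplies \sqrt{\sum x_i^2 / 2}. A hedged Python sketch (the function name and the abbreviation m = n(3+2k)+2c_1 are ours):

```python
import math

def bayes_ej(data, k, c1, c2):
    # Bayes estimates of alpha under the extension of Jeffrey's prior:
    # SELF -> eq. (13), PLF -> eq. (15), ALBF -> eq. (17), SLF -> eq. (19)
    T = sum(x * x for x in data)           # sum of squared observations
    m = len(data) * (3 + 2 * k) + 2 * c1   # m = n(3+2k) + 2*c1
    g = math.gamma
    r = math.sqrt(T / 2)
    return {
        "self": r * g((m - 2) / 2) / g((m - 1) / 2),            # eq. (13)
        "plf":  math.sqrt(T / (m - 3)),                          # eq. (15)
        "albf": r * g((m - c2 - 2) / 2) / g((m - c2 - 1) / 2),   # eq. (17)
        "slf":  r * g((m - 1) / 2) / g(m / 2),                   # eq. (19)
    }
```

With c_1 = 3/2 these reduce to the Hartigan's-prior column of Table 1; for instance the PLF estimate becomes \sqrt{\sum x_i^2 / n(3+2k)}. By Jensen's inequality the SLF, SELF and PLF estimates are always ordered smallest to largest, which matches the pattern visible in the simulation tables.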
3.3. Bayes Estimator under Inverse-Gamma Prior
The joint probability density function of x given \alpha^2 is:

L(x|\alpha^2) = \frac{\prod_{i=1}^{n} x_i^{2(k+1)}\, (\alpha^2)^{-\frac{n(3+2k)}{2}}\, e^{-\sum_{i=1}^{n} x_i^2 / 2\alpha^2}}{\left(2^{k+1/2}\, \Gamma\left(k+\frac{3}{2}\right)\right)^n}.   (20)

The posterior probability density function of \alpha^2 given the data x is:

\pi_2(\alpha^2|x) \propto L(x|\alpha^2)\, g(\alpha^2) \propto (\alpha^2)^{-\frac{n(3+2k)+2\beta+2}{2}}\, e^{-\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)/\alpha^2},

so that

\pi_2(\alpha^2|x) = K\, (\alpha^2)^{-\frac{n(3+2k)+2\beta+2}{2}}\, e^{-\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)/\alpha^2},

where K is the normalizing constant, independent of \alpha^2, given by:

K^{-1} = \int_0^{\infty} (\alpha^2)^{-\frac{n(3+2k)+2\beta+2}{2}}\, e^{-\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)/\alpha^2}\, d(\alpha^2) = \frac{\Gamma\left(\frac{n(3+2k)+2\beta}{2}\right)}{\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^{\frac{n(3+2k)+2\beta}{2}}}.

Therefore, the posterior probability density function is:

\pi_2(\alpha^2|x) = \frac{\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^{\frac{n(3+2k)+2\beta}{2}}}{\Gamma\left(\frac{n(3+2k)+2\beta}{2}\right)}\, (\alpha^2)^{-\frac{n(3+2k)+2\beta+2}{2}}\, e^{-\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)/\alpha^2},   (21)

which is an inverse-gamma distribution with shape \frac{n(3+2k)+2\beta}{2} and scale \frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda.
3.3.1. Bayes estimator under squared error loss function
The risk function under SELF is given by:

R_{(sq,igp)}(\hat{\alpha}^2) = \int_0^{\infty} c(\hat{\alpha}^2 - \alpha^2)^2\, \pi_2(\alpha^2|x)\, d(\alpha^2) = c(\hat{\alpha}^2)^2 + \frac{4c\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^2}{(n(3+2k)+2\beta-2)(n(3+2k)+2\beta-4)} - \frac{4c\hat{\alpha}^2\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)}{n(3+2k)+2\beta-2}.   (22)

Now, the Bayes estimator is obtained by solving \frac{\partial R_{(sq,igp)}(\hat{\alpha}^2)}{\partial \hat{\alpha}^2} = 0, and is given by:

\hat{\alpha}^2_{(sq,igp)} = \frac{2\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)}{n(3+2k)+2\beta-2}.   (23)
3.3.2. Bayes estimator under precautionary loss function
The risk function under PLF is given by:

R_{(pre,igp)}(\hat{\alpha}^2) = \int_0^{\infty} \frac{c(\hat{\alpha}^2 - \alpha^2)^2}{\hat{\alpha}^2}\, \pi_2(\alpha^2|x)\, d(\alpha^2) = c\hat{\alpha}^2 + \frac{4c\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^2}{\hat{\alpha}^2\, (n(3+2k)+2\beta-2)(n(3+2k)+2\beta-4)} - \frac{4c\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)}{n(3+2k)+2\beta-2}.   (24)

Now, the Bayes estimator is obtained by solving \frac{\partial R_{(pre,igp)}(\hat{\alpha}^2)}{\partial \hat{\alpha}^2} = 0, and is given by:

\hat{\alpha}^2_{(pre,igp)} = \frac{2\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)}{\sqrt{(n(3+2k)+2\beta-2)(n(3+2k)+2\beta-4)}}.   (25)
3.3.3. Bayes estimator under Al-Bayyati's loss function
The risk function under Al-Bayyati's loss function is given by:

R_{(alb,igp)}(\hat{\alpha}^2) = \int_0^{\infty} (\alpha^2)^{c_2}(\hat{\alpha}^2 - \alpha^2)^2\, \pi_2(\alpha^2|x)\, d(\alpha^2)
= (\hat{\alpha}^2)^2 \left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^{c_2} \frac{\Gamma\left(\frac{n(3+2k)+2\beta-2c_2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2\beta}{2}\right)} + \left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^{c_2+2} \frac{\Gamma\left(\frac{n(3+2k)+2\beta-2c_2-4}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2\beta}{2}\right)} - 2\hat{\alpha}^2 \left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)^{c_2+1} \frac{\Gamma\left(\frac{n(3+2k)+2\beta-2c_2-2}{2}\right)}{\Gamma\left(\frac{n(3+2k)+2\beta}{2}\right)}.   (26)

Now, the Bayes estimator is obtained by solving \frac{\partial R_{(alb,igp)}(\hat{\alpha}^2)}{\partial \hat{\alpha}^2} = 0, and is given by:

\hat{\alpha}^2_{(alb,igp)} = \frac{2\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)}{n(3+2k)+2\beta-2c_2-2}.   (27)
3.3.4. Bayes estimator under Stein's loss function
The risk function under SLF is given by:

R_{(ste,igp)}(\hat{\alpha}^2) = \int_0^{\infty} \left(\frac{\hat{\alpha}^2}{\alpha^2} - \log\left(\frac{\hat{\alpha}^2}{\alpha^2}\right) - 1\right) \pi_2(\alpha^2|x)\, d(\alpha^2) = \hat{\alpha}^2\, \frac{n(3+2k)+2\beta}{2\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)} - \log(\hat{\alpha}^2) - m - 1,   (28)

where m = -E(\log \alpha^2) is a constant independent of \hat{\alpha}^2. Now, the Bayes estimator is obtained by solving \frac{\partial R_{(ste,igp)}(\hat{\alpha}^2)}{\partial \hat{\alpha}^2} = 0, and is given by:

\hat{\alpha}^2_{(ste,igp)} = \frac{2\left(\frac{\sum_{i=1}^{n} x_i^2}{2} + \lambda\right)}{n(3+2k)+2\beta}.   (29)
Table 1: Bayes estimators under Hartigan's prior (extension of Jeffrey's prior with c_1 = 3/2) and different loss functions.

Loss function: Bayes estimator
Squared-error: \hat{\alpha} = \sqrt{\frac{\sum x_i^2}{2}}\, \Gamma\left(\frac{n(3+2k)+1}{2}\right) \Big/ \Gamma\left(\frac{n(3+2k)+2}{2}\right)
Precautionary: \hat{\alpha} = \sqrt{\frac{\sum x_i^2}{n(3+2k)}}
Al-Bayyati's: \hat{\alpha} = \sqrt{\frac{\sum x_i^2}{2}}\, \Gamma\left(\frac{n(3+2k)-c_2+1}{2}\right) \Big/ \Gamma\left(\frac{n(3+2k)-c_2+2}{2}\right)
Stein's: \hat{\alpha} = \sqrt{\frac{\sum x_i^2}{2}}\, \Gamma\left(\frac{n(3+2k)+2}{2}\right) \Big/ \Gamma\left(\frac{n(3+2k)+3}{2}\right)
Table 2: Bayes estimators under the inverse-exponential prior (inverse-gamma prior with \beta = 1) and different loss functions.

Loss function: Bayes estimator
Squared-error: \hat{\alpha}^2 = 2\left(\frac{\sum x_i^2}{2} + \lambda\right) \Big/ (n(3+2k))
Precautionary: \hat{\alpha}^2 = 2\left(\frac{\sum x_i^2}{2} + \lambda\right) \Big/ \sqrt{n(3+2k)\,(n(3+2k)-2)}
Al-Bayyati's: \hat{\alpha}^2 = 2\left(\frac{\sum x_i^2}{2} + \lambda\right) \Big/ (n(3+2k)-2c_2)
Stein's: \hat{\alpha}^2 = 2\left(\frac{\sum x_i^2}{2} + \lambda\right) \Big/ (n(3+2k)+2)
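The inverse-gamma-prior estimates of \alpha^2 in eqs. (23), (25), (27), and (29) share the kernel 2(\sum x_i^2/2 + \lambda) and differ only in the denominator; the inverse-exponential case of Table 2 is simply \beta = 1. An illustrative Python sketch (names are ours):

```python
import math

def bayes_igp(data, k, beta, lam, c2):
    # Bayes estimates of alpha^2 under the inverse-gamma(beta, lambda) prior:
    # SELF -> eq. (23), PLF -> eq. (25), ALBF -> eq. (27), SLF -> eq. (29)
    b = sum(x * x for x in data) / 2 + lam   # posterior scale
    m = len(data) * (3 + 2 * k) + 2 * beta   # twice the posterior shape
    return {
        "self": 2 * b / (m - 2),                        # eq. (23)
        "plf":  2 * b / math.sqrt((m - 2) * (m - 4)),   # eq. (25)
        "albf": 2 * b / (m - 2 * c2 - 2),               # eq. (27)
        "slf":  2 * b / m,                              # eq. (29)
    }
```

Passing beta=1 reproduces the entries of Table 2, and taking the square root of each value gives the corresponding estimate of \alpha used in the comparisons below.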
3.4. Simulation Study
We conducted simulation studies using R software, generating samples of sizes n = 10, 50, and 100 to observe the effect of small, medium, and large samples on the estimators of the scale parameter \alpha of the 2kth order weighted Maxwell-Boltzmann distribution. Each process is replicated 500 times to examine the performance of the MLEs and the Bayes estimators under different priors (the extension of Jeffrey's prior, Hartigan's prior, the inverse-gamma prior, and the inverse-exponential prior) and different loss functions, in terms of average estimates, biases, variances, and mean squared errors, for different parameter combinations. The results are presented in the tables below:
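The study itself used R, but the replication scheme can be sketched in any language once one notes a sampling fact implicit in the normalizing constant of eq. (1): if G ~ Gamma(k + 3/2, 1), then \alpha\sqrt{2G} has the KWMBD density. The following Python sketch (our reconstruction, shown for the MLE column; seed and helper names are assumptions) mirrors the 500-replicate design:

```python
import math
import random
import statistics

def rkwmbd(n, alpha, k, rng):
    # draw n variates: if G ~ Gamma(k + 3/2, 1), then alpha * sqrt(2G) ~ KWMBD(alpha, k)
    return [alpha * math.sqrt(2.0 * rng.gammavariate(k + 1.5, 1.0)) for _ in range(n)]

def simulate_mle(alpha, k, n, reps=500, seed=1):
    # average estimate, bias, variance and MSE of the MLE (eq. 9) over `reps` replications
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        x = rkwmbd(n, alpha, k, rng)
        ests.append(math.sqrt(sum(v * v for v in x) / (n * (2 * k + 3))))
    avg = statistics.fmean(ests)
    var = statistics.pvariance(ests, mu=avg)
    bias = avg - alpha
    return {"estimate": avg, "bias": bias, "variance": var, "mse": var + bias ** 2}
```

The Bayes columns are obtained the same way by replacing the MLE formula with the estimators of Sections 3.2-3.3. At \alpha = 3, k = -0.5, n = 100 the variance of the MLE is roughly \alpha^2 / (2n(2k+3)) ≈ 0.0225, consistent in magnitude with the tabulated values.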
Table 3: Average estimate, Bias, Variance and Mean Squared Error under Extension of Jeffrey's prior.
n α k c₁ c₂ Criterion α̂_mle α̂_sq α̂_pre α̂_alb α̂_ste
10 3 -0.5 2 5 Estimate 2.97912 2.87293 2.90732 3.27915 2.80839
Bias -0.02088 -0.12707 -0.09268 0.27915 -0.19161
Variance 0.23825 0.22157 0.22691 0.28866 0.21173
MSE 0.23869 0.23772 0.23772 0.36658 0.24844
50 3 -0.5 2 5 Estimate 3.00890 2.98656 2.99396 3.06295 2.97196
Bias 0.00890 -0.01344 -0.00604 0.06295 -0.02804
Variance 0.04693 0.04624 0.04647 0.04864 0.04579
MSE 0.04701 0.04642 0.04642 0.05260 0.04658
100 3 -0.5 2 5 Estimate 3.00411 2.99291 2.99663 3.03075 2.98551
Bias 0.00411 -0.00709 -0.00337 0.03075 -0.01449
Variance 0.02164 0.02148 0.02153 0.02203 0.02137
MSE 0.02166 0.02153 0.02153 0.02297 0.02158
10 4 0.1 1.2 3 Estimate 3.96513 3.94644 3.97758 4.14350 3.88675
Bias -0.03487 -0.05356 -0.02242 0.14350 -0.11325
Variance 0.25397 0.25158 0.25557 0.27733 0.24403
MSE 0.25518 0.25445 0.25445 0.29792 0.25685
50 4 0.1 1.2 3 Estimate 4.00312 3.99937 4.00563 4.03732 3.98695
Bias 0.00312 -0.00063 0.00563 0.03732 -0.01305
Variance 0.05165 0.05155 0.05172 0.05254 0.05123
MSE 0.05166 0.05155 0.05155 0.05393 0.05140
100 4 0.1 1.2 3 Estimate 3.99978 3.99790 4.00103 4.01676 3.99168
Bias -0.00022 -0.00210 0.00103 0.01676 -0.00832
Variance 0.02381 0.02379 0.02382 0.02401 0.02371
MSE 0.02381 0.02379 0.02379 0.02429 0.02378
Table 4: Average estimate, Bias, Variance and Mean Squared Error under Hartigan's prior.
n α k c₁ c₂ Criterion α̂_mle α̂_sq α̂_pre α̂_alb α̂_ste
10 3 -0.5 1.5 5 Estimate 2.98117 2.94416 2.98117 3.38551 2.87491
Bias -0.01883 -0.05584 -0.01883 0.38551 -0.12509
Variance 0.20672 0.20162 0.20672 0.26660 0.19225
MSE 0.20708 0.20474 0.20474 0.41521 0.20789
50 3 -0.5 1.5 5 Estimate 2.99573 2.98825 2.99573 3.06548 2.97350
Bias -0.00427 -0.01175 -0.00427 0.06548 -0.02650
Variance 0.04357 0.04335 0.04357 0.04562 0.04292
MSE 0.04359 0.04349 0.04349 0.04991 0.04363
100 3 -0.5 1.5 5 Estimate 2.99912 2.99537 2.99912 3.03344 2.98793
Bias -0.00088 -0.00463 -0.00088 0.03344 -0.01207
Variance 0.02168 0.02163 0.02168 0.02218 0.02152
MSE 0.02168 0.02165 0.02165 0.02330 0.02167
10 4 0.1 1.5 3 Estimate 3.96310 3.93226 3.96310 4.12731 3.87314
Bias -0.03690 -0.06774 -0.03690 0.12731 -0.12686
Variance 0.25001 0.24614 0.25001 0.27116 0.23879
MSE 0.25137 0.25073 0.25073 0.28737 0.25489
50 4 0.1 1.5 3 Estimate 3.99271 3.98647 3.99271 4.02426 3.97411
Bias -0.00729 -0.01353 -0.00729 0.02426 -0.02589
Variance 0.04954 0.04939 0.04954 0.05033 0.04908
MSE 0.04959 0.04957 0.04957 0.05092 0.04975
100 4 0.1 1.5 3 Estimate 4.00132 3.99819 4.00132 4.01704 3.99197
Bias 0.00132 -0.00181 0.00132 0.01704 -0.00803
Variance 0.02222 0.02219 0.02222 0.02240 0.02212
MSE 0.02222 0.02219 0.02219 0.02269 0.02218
α̂_mle = estimate under maximum likelihood estimation; α̂_sq = Bayes estimate under the squared error loss function; α̂_pre = Bayes estimate under the precautionary loss function; α̂_alb = Bayes estimate under Al-Bayyati's loss function; α̂_ste = Bayes estimate under Stein's loss function.
Table 5: Average estimate, Bias, Variance and Mean Squared Error under Inverse-Gamma prior.
n α k β λ c₂ Criterion α̂_mle α̂_sq α̂_pre α̂_alb α̂_ste
10 3 -0.5 1.5 3.5 5 Estimate 2.96815 2.95500 3.02987 4.08292 2.82360
Bias -0.03185 -0.04500 0.02987 1.08292 -0.17640
Variance 0.22183 0.20296 0.21337 0.38746 0.18531
MSE 0.22284 0.20498 0.20498 1.56018 0.21642
50 3 -0.5 1.5 3.5 5 Estimate 3.01586 3.01248 3.02758 3.17368 2.98309
Bias 0.01586 0.01248 0.02758 0.17368 -0.01691
Variance 0.04434 0.04357 0.04400 0.04835 0.04272
MSE 0.04459 0.04372 0.04372 0.07852 0.04301
100 3 -0.5 1.5 3.5 5 Estimate 3.00761 3.00593 3.01346 3.08362 2.99109
Bias 0.00761 0.00593 0.01346 0.08362 -0.00891
Variance 0.02039 0.02021 0.02031 0.02127 0.02001
MSE 0.02045 0.02024 0.02024 0.02826 0.02009
10 4 0.1 1.2 3 3 Estimate 3.96991 3.96910 4.03283 4.39706 3.85199
Bias -0.03009 -0.03090 0.03283 0.39706 -0.14801
Variance 0.24080 0.23493 0.24254 0.28833 0.22127
MSE 0.24171 0.23589 0.23589 0.44598 0.24318
50 4 0.1 1.2 3 3 Estimate 3.97652 3.97628 3.98878 4.05281 3.95172
Bias -0.02348 -0.02372 -0.01122 0.05281 -0.04828
Variance 0.05037 0.05013 0.05044 0.05207 0.04951
MSE 0.05092 0.05069 0.05069 0.05486 0.05184
100 4 0.1 1.2 3 3 Estimate 4.00210 4.00195 4.00822 4.03995 3.98952
Bias 0.00210 0.00195 0.00822 0.03995 -0.01048
Variance 0.02531 0.02525 0.02533 0.02573 0.02509
MSE 0.02531 0.02525 0.02525 0.02733 0.02520
Table 6: Average estimate, Bias, Variance and Mean Squared Error under Inverse-Exponential prior.
n α k β λ c₂ Criterion α̂_mle α̂_sq α̂_pre α̂_alb α̂_ste
10 3 -0.5 1 3.5 5 Estimate 2.93546 2.99603 3.07599 4.23702 2.85660
Bias -0.06454 -0.00397 0.07599 1.23702 -0.14340
Variance 0.22744 0.21819 0.23000 0.43639 0.19836
MSE 0.23161 0.21821 0.21821 1.96661 0.21892
50 3 -0.5 1 3.5 5 Estimate 2.98571 2.99747 3.01265 3.15961 2.96794
Bias -0.01429 -0.00253 0.01265 0.15961 -0.03206
Variance 0.04084 0.04052 0.04093 0.04502 0.03973
MSE 0.04105 0.04053 0.04053 0.07050 0.04076
100 3 -0.5 1 3.5 5 Estimate 2.99384 2.9997 3.00724 3.07762 2.98481
Bias -0.00616 -0.0003 0.00724 0.07762 -0.01519
Variance 0.02369 0.0236 0.02372 0.02484 0.02336
MSE 0.02373 0.0236 0.02360 0.03087 0.02360
10 4 0.1 1 3 3 Estimate 3.96977 3.99370 4.05866 4.43061 3.87445
Bias -0.03023 -0.00630 0.05866 0.43061 -0.12555
Variance 0.25840 0.25532 0.26369 0.31424 0.24030
MSE 0.25931 0.25536 0.25536 0.49966 0.25606
50 4 0.1 1 3 3 Estimate 3.99208 3.99679 4.00938 4.07391 3.97204
Bias -0.00792 -0.00321 0.00938 0.07391 -0.02796
Variance 0.05112 0.05100 0.05132 0.05298 0.05037
MSE 0.05118 0.05101 0.05101 0.05845 0.05115
100 4 0.1 1 3 3 Estimate 3.99707 3.99942 4.00569 4.03745 3.98698
Bias -0.00293 -0.00058 0.00569 0.03745 -0.01302
Variance 0.02562 0.02559 0.02567 0.02608 0.02543
MSE 0.02563 0.02559 0.02559 0.02749 0.02560
From the results of simulation Tables 3, 4, 5, and 6, the following conclusions are drawn regarding the performance and behavior of the estimators under the different priors:
• The performance of both the Bayes estimators and the MLE improves as the sample size increases.
• Bayesian estimation with the squared error and precautionary loss functions outperforms MLE, while MLE outperforms Bayesian estimation with Al-Bayyati's and Stein's loss functions.
• In terms of MSE, Bayesian estimation under the precautionary loss function and the squared error loss function gives smaller MSEs than the other loss functions.
3.5. Fitting of real-life data sets
For illustrative purposes, we analyze three different real datasets. Dataset I consists of tensile strength measurements (in GPa) of 69 carbon fibers tested under tension at gauge lengths of 20 mm, initially reported by Bader and Priest [5]. Dataset II consists of failure times (in hours) from an accelerated life test on 59 conductors, first reported by Johnston [10]. Dataset III comprises the times between arrivals of 25 customers at a facility, first reported by Grubbs [8]. Our objective is to evaluate and contrast the performance of KWMBD estimates obtained by MLE and by Bayesian estimation.
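The AIC and BIC columns in Tables 7, 9, and 11 follow from the maximized log-likelihood of eq. (1). A hedged Python sketch (our helper names, assuming a single fitted parameter p = 1 with k held fixed):

```python
import math

def kwmbd_loglik(data, alpha, k):
    # log-likelihood of eq. (1) at scale alpha with shape k fixed
    n = len(data)
    const = n * ((k + 0.5) * math.log(2.0) + math.lgamma(k + 1.5))
    return (2 * (k + 1) * sum(math.log(x) for x in data)
            - n * (3 + 2 * k) * math.log(alpha)
            - sum(x * x for x in data) / (2 * alpha ** 2)
            - const)

def aic_bic(data, alpha, k, p=1):
    # AIC = 2p - 2 logL, BIC = p log(n) - 2 logL, for p fitted parameters
    ll = kwmbd_loglik(data, alpha, k)
    return 2 * p - 2 * ll, p * math.log(len(data)) - 2 * ll
```

Plugging each estimate (MLE or posterior-based) into the log-likelihood gives the corresponding AIC and BIC; smaller values indicate a better fit.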
Table 7: Average estimate, mean squared error, AIC and BIC for the posterior distribution under different priors for dataset I.
Criterion MLE Ex-Jeffrey's Prior Hartigan's Prior I-Gamma Prior I-Exponential Prior
Estimate 2.5001 2.4390 2.4911 2.4144 2.4819
MSE 0.2440 0.2418 0.24320 0.2430 0.2426
AIC 228.6145 197.1729 198.5274 196.5776 198.2806
BIC 230.8486 199.4070 200.7615 198.8117 200.5147
Table 8: Estimates and MSE for Extension of Jeffrey's and Inverse-Gamma Priors with different loss functions for dataset I.
α̂_mle Prior α̂_sq α̂_pre α̂_alb α̂_ste
Estimate MSE Estimate MSE Estimate MSE Estimate MSE Estimate MSE
2.5001 0.2440 EX-Jeffrey's Prior 2.4390 0.2418 2.4475 0.2417 2.4911 0.2432 2.4224 0.2424
I-Gamma Prior 2.4391 0.2418 2.456 0.2416 2.5460 0.2506 2.4064 0.2436
Table 9: Average estimate, mean squared error, AIC and BIC for the posterior distribution under different priors for dataset II.
Criterion MLE Ex-Jeffrey's Prior Hartigan's Prior I-Gamma Prior I-Exponential Prior
Estimate 7.16117 6.957312 7.129853 6.855565 7.077978
MSE 2.59377 2.561495 2.583413 2.576478 2.570564
AIC 319.9468 246.6048 247.7058 246.1869 247.3245
BIC 322.0243 248.6823 249.7833 248.2644 249.4021
Table 10: Estimates and MSE for Extension of Jeffrey's and Inverse-Gamma Priors with different loss functions for dataset II.
α̂_mle Prior α̂_sq α̂_pre α̂_alb α̂_ste
Estimate MSE Estimate MSE Estimate MSE Estimate MSE Estimate MSE
7.1612 2.5938 EX-Jeffrey's Prior 6.9577 2.5615 6.9858 2.5610 7.2538 2.6359 6.9027 2.5670
I-Gamma Prior 6.9370 2.5628 6.9931 2.5612 7.5631 2.9010 6.8294 2.5837
Table 11: Average estimate, mean squared error, AIC and BIC for the posterior distribution under different priors for dataset III.
Criterion MLE Ex-Jeffrey's Prior Hartigan's Prior I-Gamma Prior I-Exponential Prior
Estimate 4.0405 3.9242 4.0003 3.8025 3.9433
MSE 0.6053 0.6015 0.6010 0.6264 0.6003
AIC 108.1082 85.9577 86.3621 85.4566 86.0532
BIC 109.3270 87.1765 87.5810 86.6755 87.2721
Table 12: Estimates and MSE for the extension of Jeffrey's and inverse-gamma priors with different loss functions for dataset III.
α̂_mle Prior α̂_sq α̂_pre α̂_alb α̂_ste
Estimate MSE Estimate MSE Estimate MSE Estimate MSE Estimate MSE
4.0405 0.6054 EX-Jeffrey's Prior 3.9242 0.6015 3.9621 0.5998 4.0811 0.6131 3.8522 0.6126
I-Gamma Prior 3.9070 0.6032 3.9829 0.6001 4.2331 0.6713 3.7699 0.6381
The results of Tables 7, 8, 9, 10, 11 and 12 demonstrate that the estimation of the KWMBD parameter under both priors (the extension of Jeffrey's prior and the inverse-gamma prior) with the precautionary loss function is better than under the other three loss functions considered, and better than MLE, owing to its lower mean squared error (MSE).
4. Conclusion
We compared estimation methods for the scale parameter \alpha of the 2kth order weighted Maxwell-Boltzmann distribution, utilizing both maximum likelihood estimation (MLE) and Bayesian estimation under various loss functions and prior distributions. The comparison is based on simulated data and real-life datasets. The simulation results reveal that as the sample size increases, the MSE decreases, and that Bayesian estimation with the squared error and precautionary loss functions outperforms MLE. Furthermore, the results from the real-life datasets demonstrate that estimation of the KWMBD parameter under both prior distributions with the precautionary loss function yields better performance, with smaller MSE, than the other estimators.
Conflict of interest: The authors confirm that they have no conflicts of interest to disclose regarding the publication of this paper.
References
[1] A. Ahmad and R. Tripathi. Bayesian estimation of weighted inverse maxwell distribution under different loss functions. Earthline Journal of Mathematical Sciences, 8(1):189-203, 2022.
[2] A. Ahmed, S. Ahmad, and J. Reshi. Bayesian analysis of rayleigh distribution. International Journal of Scientific and Research Publications, 3(10):1-9, 2013.
[3] H. Al-Kutobi. On comparison estimation procedures for parameter and survival function exponential distribution using simulation. PhD thesis, Baghdad University, College of Education (Ibn Al-Haitham), 2005.
[4] M. Aslam. An application of prior predictive distribution to elicit the prior density. Journal of Statistical Theory and applications, 2(1):70-83, 2003.
[5] M. Bader and A. Priest. Statistical aspects of fibre and bundle strength in hybrid composites. Progress in Science and Engineering of Composites, pages 1129-1136, 1982.
[6] A. Chaturvedi and U. Rani. Classical and bayesian reliability estimation of the generalized maxwell failure distribution. Journal of Statistical Research, 32(1):113-120, 1998.
[7] M. Friesl and J. Hurt. On bayesian estimation in an exponential distribution under random censorship. Kybernetika, 43(1):45-60, 2007.
[8] F. E. Grubbs. Approximate fiducial bounds on reliability for the two parameter negative exponential distribution. Technometrics, 13(4):873-876, 1971.
[9] M. Han. E-bayesian estimation and its e-mse under the scaled squared error loss function, for exponential distribution as example. Communications in Statistics-Simulation and Computation, 48(6):1880-1890, 2019.
[10] G. Johnston. Statistical models and methods for lifetime data, 2003.
[11] H. Rasheed. Minimax estimation of the parameter of the maxwell distribution under quadratic loss function. Journal of Al-Rafidain University College For Sciences (Print ISSN: 1681-6870, Online ISSN: 2790-2293), (1):43-56, 2013.
[12] J. Reshi, A. Ahmed, and K. Mir. Some important statistical properties, information measures and estimations of size biased generalized gamma distribution. Journal of Reliability and Statistical Studies, pages 161-179, 2014.
[13] J. A. Reshi, B. A. Para, and S. A. Bhat. Parameter estimation of weighted maxwell-boltzmann distribution using simulated and real life data sets. In Adaptive Filtering-Recent Advances and Practical Implementation. IntechOpen, 2021.
[14] F. A. Spiring and A. S. Yeung. A general class of loss functions with industrial applications. Journal of Quality Technology, 30(2):152-162, 1998.
[15] R. Tyagi and S. Bhattacharya. Bayes estimation of the Maxwell's velocity distribution function. Statistica, 29(4):563-567, 1989.
[16] L. Weiss. Introduction to wald (1949) statistical decision functions. In Breakthroughs in Statistics: Foundations and Basic Theory, pages 335-341. Springer, 1992.
[17] A. Zellner. Bayesian estimation and prediction using asymmetric loss functions. Journal of the American Statistical Association, 81(394):446-451,1986.