
THE EXPECTED FISHER INFORMATION MATRIX OF POISSON HALF LOGISTIC MODEL

Ibrahim Abdullahi1, Aminu Suleiman Mohammed2* and Sani Musa3

1Department of Mathematics, Yusuf Maitama Sule University, Kano, Nigeria.

2Department of Statistics, Ahmadu Bello University, Zaria, Nigeria. 3Department of Mathematics and Computer Science, Sule Lamido University, Kafin-Hausa, Jigawa, Nigeria

[email protected] [email protected] [email protected]

*Corresponding author. Email: [email protected]

Abstract

This study delves into the computation and evaluation of the expected Fisher information matrix within the context of the Poisson-type I half logistic (PHL) distribution. Leveraging confidence intervals and their associated coverage probabilities, our investigation studies the performance of the information matrix under maximum likelihood estimation of the parameters. Our results unveiled a consistent trend: as the sample size expanded, the length of the confidence interval decreased, and the coverage probability of the 95% asymptotic confidence interval aligned with the expected nominal size. This serves as a testament to the accuracy and robustness of the information matrix's performance within the PHL distribution framework. The approach is also tested using a real data set.

Keywords: Poisson half logistic, Maximum likelihood estimation, Fisher information matrix, confidence interval.

1. Introduction

The information matrix in maximum likelihood estimation is crucial, as it quantifies the precision of parameter estimates and aids in the construction of confidence intervals. It is the variance-covariance matrix of the score function, and its inverse describes the asymptotic variance-covariance matrix of the maximum likelihood estimator.

Confidence intervals, a key component of statistical inferences, leverage the information matrix to quantify the uncertainty surrounding parameter estimates. They offer a range of plausible values for the parameters, enhancing the interpretability and reliability of statistical analyses. In essence, the information matrix and confidence intervals together form integral tools for understanding the robustness and precision of maximum likelihood estimates in statistical inference.

In engineering, the information matrix is crucial for assessing the precision of parameter estimates in various models. For example, in structural engineering, when estimating parameters related to material properties or structural components, the information matrix helps engineers understand how well their estimates capture the underlying characteristics of the system. This is vital for designing structures with optimal safety margins. In social science, the information matrix is essential for understanding the reliability of parameter estimates in models describing human behavior or societal trends. For instance, in economics, when estimating the coefficients of a model describing consumer behavior, the information matrix helps economists gauge the precision of their estimates, informing policy decisions.

There have been several contributions in the literature regarding the applications of the information matrix. For example, [1] provided a discussion on deriving the information matrix for a logistic distribution. [2] derived the asymptotic expansions of the information matrix test statistic. The small-sample performance of the information matrix test was discussed by [3]. The performance evaluation of track fusion with the information matrix filter was studied by [4]. An approximate Fisher information matrix was used by [5] to characterize the training of deep neural networks. The Fisher information matrix in gravitational-wave data analysis was extended by [6]. General expressions for the quantum Fisher information matrix, with applications to discrete quantum imaging, were provided by [7].

The rest of the paper is organized as follows: Section 2 discusses the Fisher information matrix; Section 3 derives the expected Fisher information matrix of the Poisson-type I half logistic (PHL) distribution and presents simulation studies with a real data example; Section 4 gives the conclusions.

2. On the Fisher information matrix

This section elucidates the significance of Fisher information by delving into fundamental concepts in statistics, including unbiased estimators, the information inequality (Cramér-Rao inequality), and the asymptotic normality of maximum likelihood estimation (MLE). For details, see [8].

2.1. Asymptotic characteristics of MLE

To comprehend the significance of Fisher information, this section elucidates fundamental statistics concepts, including unbiased estimators, the Cramer-Rao inequality (information inequality), and the asymptotic normality of MLE.

Estimation in statistics involves mapping observed data to real values, and various methods exist for doing so. Let $\hat{\theta}(x)$ be an estimator, where $x$ represents the observed data. For instance, a constant function that maps to a specific value, irrespective of the observed data, can serve as an estimator. Evaluation of estimation methods is therefore crucial.

The Mean Square Error (MSE) is one criterion for evaluation. Assuming a random variable $x$ is generated by a distribution with probability density function $p(x \mid \theta^{*})$ and true parameter value $\theta^{*}$, the MSE is defined as

$$\mathrm{MSE} = E\left[\left(\hat{\theta}(x) - \theta^{*}\right)^{2}\right] \qquad (1)$$

where $E(X) = \int_{x} x\, p(x)\, dx$ and $\mathrm{Var}(X) = \int_{x} \left(x - E(x)\right)^{2} p(x)\, dx$. The MSE can be decomposed into the variance of the estimator and the square of the bias between the expectation of the estimator and the true parameter value.

Focusing on unbiased estimators, those with zero bias ($E(\hat{\theta}(x) - \theta^{*}) = 0$), the variance of the estimator becomes the quantity of interest. The Cramér-Rao inequality establishes a lower bound for the variance of an unbiased estimator $\hat{\theta}$:

$$\mathrm{Var}(\hat{\theta}) \geq \frac{1}{F(\theta^{*})} \qquad (2)$$

where $F(\theta^{*})$ is the Fisher information. An efficient estimator achieves this lower bound. For a unidimensional parameter, the inverse of the Fisher information thus bounds the variance of any unbiased estimator from below.
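As a standard example, for $n$ i.i.d. observations from $N(\theta, \sigma^{2})$ with $\sigma^{2}$ known, the Fisher information is $F(\theta) = n/\sigma^{2}$, and the sample mean attains the bound:

$$\mathrm{Var}(\bar{x}) = \frac{\sigma^{2}}{n} = \frac{1}{F(\theta^{*})},$$

so $\bar{x}$ is an efficient estimator of the mean.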

Returning to the maximum likelihood (ML) estimator: ML exhibits desirable properties, including asymptotic normality and asymptotic efficiency. Under regularity conditions, the ML estimator $\hat{\theta}$ satisfies

$$\hat{\theta} \xrightarrow{d} N\left(\theta^{*},\, F(\theta^{*})^{-1}\right) \qquad (3)$$

This implies that the ML estimator asymptotically follows a normal distribution with the mean being the true parameter value and the variance (covariance matrix) being the inverse of the Fisher information.

In summary, the asymptotic efficiency of the ML estimator, characterized by the variance being the inverse of the Fisher information, makes it the best choice from the Mean Square Error perspective, given the restriction to unbiased estimators. This property, known as asymptotic efficiency, allows psychology researchers to optimize not only the estimation method but also experimental design and stimuli for variance reduction (increasing Fisher information) in their studies.

2.2 Definition of the Fisher Information Matrix

Let $\theta = (\theta_{1}, \ldots, \theta_{k})$ represent the $k$-dimensional parameters. The Fisher information matrix for the $i$-th participant (or trial) concerning the parameter $\theta$ is defined as

$$F_{i}(\theta) = E\left[\left(\frac{\partial}{\partial\theta}\log L(\theta \mid y_{i})\right)\left(\frac{\partial}{\partial\theta}\log L(\theta \mid y_{i})\right)^{T}\right]$$

where $\frac{\partial}{\partial\theta}\log L(\theta \mid y_{i})$ is a $k \times 1$ column vector and $T$ denotes the transpose operation. In other words, $F_{i}(\theta)$ is a $k \times k$ matrix. The $(m, n)$ element of the Fisher information matrix is given by

$$F_{i}(\theta)_{(m,n)} = E\left[\frac{\partial}{\partial\theta_{m}}\log L(\theta \mid y_{i})\ \frac{\partial}{\partial\theta_{n}}\log L(\theta \mid y_{i})\right] \qquad (4)$$

The expectation is over the dependent variables $y_{i}$, assuming the model $P(y_{i} \mid \theta)$ is true. The Fisher information depends on the parameter values $\theta$ and on the stimuli (as well as on the model). Although the stimuli symbol is conventionally omitted from the Fisher information matrix notation, it is important to note that the Fisher information depends on the experimental design and stimuli.

Additionally, when the true model is known, the following equation holds:

$$F_{i}(\theta) = E\left[\left(\frac{\partial}{\partial\theta}\log L(\theta \mid y_{i})\right)\left(\frac{\partial}{\partial\theta}\log L(\theta \mid y_{i})\right)^{T}\right] = -E\left[\frac{\partial^{2}}{\partial\theta\,\partial\theta^{T}}\log L(\theta \mid y_{i})\right] \qquad (5)$$

This means that researchers can calculate the Fisher information matrix using either the square of the score function or the second derivatives of the log-likelihood function. The choice between methods depends on the characteristics of the models. In the definition using second derivatives, the (m, n) element of the Fisher information matrix is given by

$$F_{i}(\theta)_{(m,n)} = -E\left[\frac{\partial^{2}}{\partial\theta_{m}\,\partial\theta_{n}}\log L(\theta \mid y_{i})\right] \qquad (6)$$
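To make the equivalence in (5) concrete, here is a minimal R sketch (our illustration, not from the paper) that checks it by Monte Carlo for a one-parameter exponential likelihood, where both sides are available in closed form:

```r
# Monte Carlo check of Eq. (5) for an exponential model with rate theta:
# log L(theta | y) = log(theta) - theta * y, so the score is 1/theta - y
# and the second derivative is the constant -1/theta^2.
set.seed(1)
theta <- 2
y <- rexp(1e6, rate = theta)

score <- 1 / theta - y
mean(score^2)   # E[(d/dtheta log L)^2]   ~ 1/theta^2 = 0.25
1 / theta^2     # -E[d^2/dtheta^2 log L]  =  1/theta^2 = 0.25
```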

3. The Fisher Information Matrix of PHL

Here, we derive the expected Fisher information matrix of the Poisson half logistic distribution and apply it to study the confidence intervals of the maximum likelihood estimators using simulation studies and a real data example.

3.1 On the PHL

The Poisson half logistic (PHL) distribution was introduced by [9] by compounding the half logistic (HL) and Poisson distributions, and was applied to right-censored data in [9]. The probability density function and cumulative distribution function are given, respectively, by

$$f(x) = \frac{2\alpha\lambda\, e^{-\alpha x}\, e^{\lambda\left(\frac{1-e^{-\alpha x}}{1+e^{-\alpha x}}\right)}}{\left(e^{\lambda}-1\right)\left(1+e^{-\alpha x}\right)^{2}} \qquad (7)$$

and

$$F(x) = \frac{e^{\lambda\left(\frac{1-e^{-\alpha x}}{1+e^{-\alpha x}}\right)} - 1}{e^{\lambda}-1} \qquad (8)$$

where $\alpha > 0$ and $\lambda \in \mathbb{R} \setminus \{0\}$.

The quantile function of the PHL distribution can be used to generate random data distributed according to PHL$(\alpha, \lambda)$: if $u$ is an observation from the uniform (0, 1) distribution, then

$$X = -\frac{1}{\alpha}\left\{\ln\left[1 - \frac{\ln\left(u\left(e^{\lambda}-1\right)+1\right)}{\lambda}\right] - \ln\left[1 + \frac{\ln\left(u\left(e^{\lambda}-1\right)+1\right)}{\lambda}\right]\right\} \qquad (9)$$

is a random variable distributed PHL.
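For illustration, a minimal R sketch of the PHL density (7), distribution function (8), and the quantile-based generator (9) follows; the function names dphl, pphl, qphl and rphl are our own:

```r
# PHL(alpha, lambda) helper functions based on Eqs. (7)-(9).
dphl <- function(x, alpha, lambda) {
  u <- exp(-alpha * x)                     # e^{-alpha x}
  G <- (1 - u) / (1 + u)                   # half-logistic cdf
  2 * alpha * lambda * u * exp(lambda * G) /
    ((exp(lambda) - 1) * (1 + u)^2)
}
pphl <- function(x, alpha, lambda) {
  u <- exp(-alpha * x)
  (exp(lambda * (1 - u) / (1 + u)) - 1) / (exp(lambda) - 1)
}
qphl <- function(p, alpha, lambda) {       # Eq. (9)
  t <- log(p * (exp(lambda) - 1) + 1) / lambda
  -(log(1 - t) - log(1 + t)) / alpha
}
rphl <- function(n, alpha, lambda) qphl(runif(n), alpha, lambda)

# sanity check: the quantile function inverts the cdf
pphl(qphl(c(0.1, 0.5, 0.9), 1, 1), 1, 1)   # should return 0.1 0.5 0.9
```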

There have been several contributions regarding the HL distribution; see, for example, the complementary Poisson generalized half logistic [10], the generalized half-logistic Poisson [11], an extension of the generalized half logistic [12], estimation of the reliability of a stress-strength system from the Poisson half logistic distribution [13], the type I half-logistic family [14], and the new extended cosine generalized half logistic [15]; for more details see [16].

3.2 The expected Fisher Information Matrix of PHL

The maximum likelihood estimates of $\alpha$ and $\lambda$ can be obtained numerically by simultaneously solving (11) and (12), set equal to zero, using mathematical packages such as nlminb in the R software. Let the parameter vector be $\Theta = (\alpha, \lambda)$; then the total log-likelihood function of the PHL is given by

$$\log(l(\alpha,\lambda)) = n\log 2 + n\log\lambda + n\log\alpha - \alpha\sum_{i=1}^{n} x_{i} - n\log\left(e^{\lambda}-1\right) - 2\sum_{i=1}^{n}\log\left(1+e^{-\alpha x_{i}}\right) + \lambda\sum_{i=1}^{n}\frac{1-e^{-\alpha x_{i}}}{1+e^{-\alpha x_{i}}} \qquad (10)$$

The first partial derivatives of $\log(l(\alpha,\lambda))$, that is, $\partial\log l(\alpha,\lambda)/\partial\alpha$ and $\partial\log l(\alpha,\lambda)/\partial\lambda$, are computed as

$$\frac{\partial\log(l(\alpha,\lambda))}{\partial\alpha} = \frac{n}{\alpha} - \sum_{i=1}^{n} x_{i} + 2\sum_{i=1}^{n}\frac{x_{i}\, e^{-\alpha x_{i}}}{1+e^{-\alpha x_{i}}} + 2\lambda\sum_{i=1}^{n}\frac{x_{i}\, e^{-\alpha x_{i}}}{\left(1+e^{-\alpha x_{i}}\right)^{2}} \qquad (11)$$

$$\frac{\partial\log(l(\alpha,\lambda))}{\partial\lambda} = \frac{n}{\lambda} - \frac{n\, e^{\lambda}}{e^{\lambda}-1} + \sum_{i=1}^{n}\frac{1-e^{-\alpha x_{i}}}{1+e^{-\alpha x_{i}}} \qquad (12)$$
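A hedged R sketch of this estimation step follows (our code, not the authors'; it reuses rphl from the sketch in Section 3.1): it implements the log-likelihood (10), supplies the analytic gradient (11)-(12), and maximizes with nlminb as suggested above, restricting $\lambda > 0$ for simplicity.

```r
# Maximum likelihood for PHL via nlminb, using Eqs. (10)-(12).
nll <- function(par, x) {                  # negative log-likelihood, Eq. (10)
  a <- par[1]; l <- par[2]; n <- length(x); u <- exp(-a * x)
  -(n * log(2) + n * log(l) + n * log(a) - a * sum(x) -
      n * log(exp(l) - 1) - 2 * sum(log(1 + u)) +
      l * sum((1 - u) / (1 + u)))
}
ngr <- function(par, x) {                  # negative gradient, Eqs. (11)-(12)
  a <- par[1]; l <- par[2]; n <- length(x); u <- exp(-a * x)
  da <- n / a - sum(x) + 2 * sum(x * u / (1 + u)) +
        2 * l * sum(x * u / (1 + u)^2)
  dl <- n / l - n * exp(l) / (exp(l) - 1) + sum((1 - u) / (1 + u))
  -c(da, dl)
}
x <- rphl(200, 1, 1)                       # simulated data, earlier sketch
fit <- nlminb(c(0.5, 0.5), nll, gradient = ngr, x = x,
              lower = c(1e-6, 1e-6))       # lambda > 0 for simplicity
fit$par                                    # MLEs of (alpha, lambda)
```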

For very large samples, we apply the usual approximation that the MLEs of the PHL parameters are approximately bivariate normal with mean $(\alpha, \lambda)$ and variance-covariance matrix $I^{-1}(\alpha, \lambda)$, where $I(\alpha, \lambda)$ is the expected information matrix. Alternatively, we can use the observed information, $-J(\hat{\alpha}, \hat{\lambda})$ evaluated at the MLEs $\hat{\alpha}$ and $\hat{\lambda}$, to construct the asymptotic variance-covariance matrix of the MLEs, where

$$J(\alpha,\lambda) = \frac{\partial^{2}\log(l(\alpha,\lambda))}{\partial(\alpha,\lambda)\,\partial(\alpha,\lambda)^{T}} = \begin{pmatrix} \dfrac{\partial^{2}\log l}{\partial\alpha^{2}} & \dfrac{\partial^{2}\log l}{\partial\alpha\,\partial\lambda} \\[2mm] \dfrac{\partial^{2}\log l}{\partial\alpha\,\partial\lambda} & \dfrac{\partial^{2}\log l}{\partial\lambda^{2}} \end{pmatrix} \qquad (13)$$

Thus, we compute the elements of $J(\alpha, \lambda)$ as

$$\frac{\partial^{2}\log(l(\alpha,\lambda))}{\partial\lambda^{2}} = -\frac{n}{\lambda^{2}} + \frac{n\, e^{\lambda}}{\left(e^{\lambda}-1\right)^{2}}$$

$$\frac{\partial^{2}\log(l(\alpha,\lambda))}{\partial\alpha\,\partial\lambda} = 2\sum_{i=1}^{n}\frac{x_{i}\, e^{-\alpha x_{i}}}{\left(1+e^{-\alpha x_{i}}\right)^{2}}$$

$$\frac{\partial^{2}\log(l(\alpha,\lambda))}{\partial\alpha^{2}} = -\frac{n}{\alpha^{2}} - 2\sum_{i=1}^{n}\frac{x_{i}^{2}\, e^{-\alpha x_{i}}}{\left(1+e^{-\alpha x_{i}}\right)^{2}} - 2\lambda\sum_{i=1}^{n}\frac{x_{i}^{2}\, e^{-\alpha x_{i}}}{\left(1+e^{-\alpha x_{i}}\right)^{3}} + 2\lambda\sum_{i=1}^{n}\frac{x_{i}^{2}\, e^{-2\alpha x_{i}}}{\left(1+e^{-\alpha x_{i}}\right)^{3}}$$

To construct the asymptotic distribution of the maximum likelihood estimates, we consider the following Lemma and Theorem.

Lemma 1. For $k \in \mathbb{R}$ and $q, \theta \in \mathbb{N}$, let

$$g(k, q, \theta) = \int_{0}^{\infty} \frac{2\alpha\lambda\, x^{q}\, e^{-\alpha(k+1)x}\, e^{\lambda\left(\frac{1-e^{-\alpha x}}{1+e^{-\alpha x}}\right)}}{\left(e^{\lambda}-1\right)\left(1+e^{-\alpha x}\right)^{\theta+2}}\, dx \qquad (14)$$

Then,

$$g(k, q, \theta) = \frac{2(-1)^{q}}{\left(e^{\lambda}-1\right)\alpha^{q}} \sum_{j=0}^{\infty}\sum_{m=0}^{\infty} \binom{-(\theta+j+2)}{m} \frac{\lambda^{j+1}}{j!}\, B_{p}\left(j+1,\ k+m+1\right) \qquad (15)$$

where $B_{p}(\cdot\,,\cdot)$ denotes the $q$-th order partial derivative of the beta function with respect to its second argument $p$.

Proof. Expand the exponential expression $e^{\lambda\left(\frac{1-e^{-\alpha x}}{1+e^{-\alpha x}}\right)}$ as a power series in $\lambda$, then apply the generalized binomial expansion to $\left(1+e^{-\alpha x}\right)^{-(\theta+j+2)}$. Finally, the substitution $u = 1-e^{-\alpha x}$ reduces the integral to derivatives of the beta function.
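The series in (15) can be cross-checked numerically; the sketch below (ours) evaluates $g(k, q, \theta)$ directly from the integral (14) by quadrature:

```r
# Numerical evaluation of g(k, q, theta) from Eq. (14).
g_num <- function(k, q, theta, alpha, lambda) {
  integrand <- function(x) {
    u <- exp(-alpha * x)
    2 * alpha * lambda * x^q * exp(-alpha * (k + 1) * x) *
      exp(lambda * (1 - u) / (1 + u)) /
      ((exp(lambda) - 1) * (1 + u)^(theta + 2))
  }
  integrate(integrand, 0, Inf)$value
}
g_num(1, 1, 2, alpha = 1, lambda = 1)   # g(1,1,2) as used in Eq. (17)
```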

Theorem 1. The maximum likelihood estimators $(\hat{\alpha}_{n}, \hat{\lambda}_{n})$ of $(\alpha, \lambda)$ are consistent estimators, and $\sqrt{n}\left(\hat{\alpha}_{n}-\alpha,\ \hat{\lambda}_{n}-\lambda\right)$ is asymptotically normal with mean vector $\mathbf{0}$ and variance-covariance matrix $I^{-1}$, where $I = -\frac{1}{n} E\left(\frac{\partial^{2}\log l}{\partial(\alpha,\lambda)\,\partial(\alpha,\lambda)^{T}}\right)$ and the elements of the Fisher information matrix $I$ are

$$-\frac{1}{n}E\left(\frac{\partial^{2}\log l(\alpha,\lambda)}{\partial\lambda^{2}}\right) = \frac{1}{\lambda^{2}} - \frac{e^{\lambda}}{\left(e^{\lambda}-1\right)^{2}} \qquad (16)$$

$$-\frac{1}{n}E\left(\frac{\partial^{2}\log l(\alpha,\lambda)}{\partial\alpha\,\partial\lambda}\right) = -2g(1,1,2) \qquad (17)$$

$$-\frac{1}{n}E\left(\frac{\partial^{2}\log l(\alpha,\lambda)}{\partial\alpha^{2}}\right) = \frac{1}{\alpha^{2}} + 2g(1,2,2) + 2\lambda g(1,2,3) - 2\lambda g(2,2,3) \qquad (18)$$

where $g(\cdot,\cdot,\cdot)$ is given in Lemma 1.

For $r = 1, 2$, let $\hat{\Theta} = (\hat{\alpha}, \hat{\lambda})^{T}$ be the estimate of $\Theta$, and $\hat{\theta}_{r}$ be the $r$-th component of $\hat{\Theta}$. Then, a $100(1-\epsilon)\%$ asymptotic confidence interval for $\theta_{r}$ is given by

$$ACI_{r} = \left(\hat{\theta}_{r} - z_{1-\epsilon/2}\sqrt{\hat{I}^{rr}},\ \hat{\theta}_{r} + z_{1-\epsilon/2}\sqrt{\hat{I}^{rr}}\right) \qquad (19)$$

where $\hat{I}^{rr}$ is the $(r, r)$ diagonal element of $I^{-1}$ evaluated at $\hat{\Theta}$, and $z_{1-\epsilon/2}$ is the $1-\epsilon/2$ quantile of the standard normal distribution.
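Putting (16)-(19) together, the following R sketch (ours; it reuses g_num from the previous sketch) assembles the per-observation expected information matrix, inverts it, and forms the asymptotic intervals:

```r
# Expected information matrix of PHL, Eqs. (16)-(18), and ACIs, Eq. (19).
expected_info <- function(alpha, lambda) {
  I_aa <- 1 / alpha^2 + 2 * g_num(1, 2, 2, alpha, lambda) +
          2 * lambda * g_num(1, 2, 3, alpha, lambda) -
          2 * lambda * g_num(2, 2, 3, alpha, lambda)
  I_al <- -2 * g_num(1, 1, 2, alpha, lambda)
  I_ll <- 1 / lambda^2 - exp(lambda) / (exp(lambda) - 1)^2
  matrix(c(I_aa, I_al, I_al, I_ll), nrow = 2)
}
aci <- function(est, n, eps = 0.05) {      # est = parameter estimates
  V <- solve(n * expected_info(est[1], est[2]))  # asymptotic covariance
  z <- qnorm(1 - eps / 2)
  cbind(lower = est - z * sqrt(diag(V)),
        upper = est + z * sqrt(diag(V)))
}
aci(c(1, 1), n = 100)   # 95% ACIs evaluated at (alpha, lambda) = (1, 1)
```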

3.3 Simulation study

In this part, we evaluate the information matrix through the confidence interval and its coverage probability, via the performance of the maximum likelihood estimates in a simulation study. We generate 10,000 samples from the PHL$(\alpha, \lambda)$, each of sample sizes $n = 20, 50, \ldots, 300$, using some selected values of $\alpha > 0$ and $\lambda \in \mathbb{R} \setminus \{0\}$.

The resulting simulations are displayed in Figures 1, 2, 3 and 4. The results show that the method of maximum likelihood performed consistently: the length of the confidence interval decreases as the sample size increases, and the coverage probability (CP) of the 95% asymptotic confidence interval is within the nominal size, showing how accurately the information matrix performs. The simulation algorithm is given below, followed by a condensed R sketch of the procedure.

1. Choose the sample size n, replication number M,

2. Choose the values of the parameters $\alpha$ and $\lambda$,

3. Generate random $u_{i} \sim$ Uniform(0, 1), $i = 1, 2, \ldots, n$,

4. Generate random $X_{i}$, $i = 1, 2, \ldots, n$, from (9),

5. Calculate the MLEs from the simulated data,

6. Compute the expected information matrix

7. Compute the 95% asymptotic confidence interval for $\Theta = (\alpha, \lambda)$ using

$$ACI_{r} = \left(\hat{\theta}_{r} - z_{0.975}\sqrt{\hat{I}^{rr}},\ \hat{\theta}_{r} + z_{0.975}\sqrt{\hat{I}^{rr}}\right), \quad r = 1, 2,$$

8. Compute the length of the ACI,

9. Repeat steps 3-8 M times,

10. Compute the average ACI length and coverage probability (CP).

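The condensed R sketch below (ours) implements steps 1-10; for brevity it uses the observed information (the numerical Hessian of the negative log-likelihood) in place of the expected matrix, and it assumes the estimates stay in the interior of the parameter space:

```r
# Coverage-probability simulation for PHL (steps 1-10 above).
set.seed(123)
alpha <- 1; lambda <- 1; n <- 100; M <- 1000          # steps 1-2
qphl <- function(p, a, l) {                           # Eq. (9)
  t <- log(p * (exp(l) - 1) + 1) / l
  -(log(1 - t) - log(1 + t)) / a
}
nll <- function(par, x) {                             # Eq. (10), negated
  a <- par[1]; l <- par[2]; u <- exp(-a * x); n <- length(x)
  -(n * log(2) + n * log(l) + n * log(a) - a * sum(x) -
      n * log(exp(l) - 1) - 2 * sum(log(1 + u)) +
      l * sum((1 - u) / (1 + u)))
}
cover <- len <- matrix(NA, M, 2)
for (m in 1:M) {
  x <- qphl(runif(n), alpha, lambda)                  # steps 3-4
  fit <- optim(c(0.5, 0.5), nll, x = x, method = "L-BFGS-B",
               lower = c(1e-6, 1e-6), upper = c(Inf, 50),  # step 5
               hessian = TRUE)                        # guard exp(lambda)
  se <- sqrt(diag(solve(fit$hessian)))                # step 6 (observed info)
  lo <- fit$par - 1.96 * se                           # step 7
  hi <- fit$par + 1.96 * se
  len[m, ]   <- hi - lo                               # step 8
  cover[m, ] <- lo <= c(alpha, lambda) & c(alpha, lambda) <= hi
}
colMeans(len)     # average ACI length                # step 10
colMeans(cover)   # coverage probability (CP)
```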

Figure 1: Plots of the estimated average length of ACI and CP for the simulated data for $\alpha = 1.0$ and $\lambda = 1.0$

Figure 2: Plots of the estimated average length of ACI and CP for the simulated data for $\alpha = 2.5$ and $\lambda = 2.5$

Figure 3: Plots of the estimated average length of ACI and CP for the simulated data for $\alpha = 0.8$ and $\lambda = 0.9$

Figure 4: Plots of the estimated average length of ACI and CP for the simulated data for $\alpha = 0.5$ and $\lambda = 3.5$

3.4 Application to Real Dataset

This subsection illustrates the use of the PHL expected information matrix to obtain confidence intervals for the parameters using a real data set to which the PHL gives a good fit by the Kolmogorov-Smirnov (KS) test statistic. The data set, given by [17], consists of 100 observations on the breaking stress of carbon fibres (in GPa): 3.7, 2.74, 2.73, 2.5, 3.6, 3.11, 3.27, 2.87, 1.47, 3.11, 4.42, 2.41, 3.19, 3.22, 1.69, 3.28, 3.09, 1.87, 3.15, 4.9, 3.75, 2.43, 2.95, 2.97, 3.39, 2.96, 2.53, 2.67, 2.93, 3.22, 3.39, 2.81, 4.2, 3.33, 2.55, 3.31, 3.31, 2.85, 2.56, 3.56, 3.15, 2.35, 2.55, 2.59, 2.38, 2.81, 2.77, 2.17, 2.83, 1.92, 1.41, 3.68, 2.97, 1.36, 0.98, 2.76, 4.91, 3.68, 1.84, 1.59, 3.19, 1.57, 0.81, 5.56, 1.73, 1.59, 2, 1.22, 1.12, 1.71, 2.17, 1.17, 5.08, 2.48, 1.18, 3.51, 2.17, 1.69, 1.25, 4.38, 1.84, 0.39, 3.68, 2.48, 0.85, 1.61, 2.79, 4.7, 2.03, 1.8, 1.57, 1.08, 2.03, 1.61, 2.12, 1.89, 2.88, 2.82, 2.05, 3.65.

We estimated the parameters by maximum likelihood and assessed the goodness of fit by the KS test, obtaining $\hat{\alpha} = 1.2004$ and $\hat{\lambda} = 7.2595$, with KS = 0.0823. The asymptotic Fisher information matrix and the asymptotic confidence intervals are computed to verify the performance of the derived information matrix: $ACI_{\alpha} = (1.0391, 1.3618)$ and $ACI_{\lambda} = (4.8419, 9.6772)$. The confidence intervals are quite tight, indicating the accuracy of the computed information matrix. Figure 5 shows the plot of the fitted PHL density and cumulative distribution function, confirming the good fit.

$$I = \begin{pmatrix} 401.9967 & -21.3447 \\ -21.3447 & 1.7906 \end{pmatrix} \quad \text{and} \quad I^{-1} = \begin{pmatrix} 0.00677717 & 0.08078816 \\ 0.08078816 & 1.52152823 \end{pmatrix}$$
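For reproducibility, here is a hedged, self-contained R sketch (ours, not the authors' code); it should approximately recover the reported estimates and KS statistic:

```r
# Fit PHL to the carbon-fibre data and compute the KS statistic.
x <- c(3.7, 2.74, 2.73, 2.5, 3.6, 3.11, 3.27, 2.87, 1.47, 3.11, 4.42, 2.41,
       3.19, 3.22, 1.69, 3.28, 3.09, 1.87, 3.15, 4.9, 3.75, 2.43, 2.95,
       2.97, 3.39, 2.96, 2.53, 2.67, 2.93, 3.22, 3.39, 2.81, 4.2, 3.33,
       2.55, 3.31, 3.31, 2.85, 2.56, 3.56, 3.15, 2.35, 2.55, 2.59, 2.38,
       2.81, 2.77, 2.17, 2.83, 1.92, 1.41, 3.68, 2.97, 1.36, 0.98, 2.76,
       4.91, 3.68, 1.84, 1.59, 3.19, 1.57, 0.81, 5.56, 1.73, 1.59, 2,
       1.22, 1.12, 1.71, 2.17, 1.17, 5.08, 2.48, 1.18, 3.51, 2.17, 1.69,
       1.25, 4.38, 1.84, 0.39, 3.68, 2.48, 0.85, 1.61, 2.79, 4.7, 2.03,
       1.8, 1.57, 1.08, 2.03, 1.61, 2.12, 1.89, 2.88, 2.82, 2.05, 3.65)
pphl <- function(q, a, l) {                # cdf, Eq. (8)
  u <- exp(-a * q)
  (exp(l * (1 - u) / (1 + u)) - 1) / (exp(l) - 1)
}
nll <- function(par) {                     # negative log-likelihood, Eq. (10)
  a <- par[1]; l <- par[2]; n <- length(x); u <- exp(-a * x)
  -(n * log(2) + n * log(l) + n * log(a) - a * sum(x) -
      n * log(exp(l) - 1) - 2 * sum(log(1 + u)) +
      l * sum((1 - u) / (1 + u)))
}
fit <- nlminb(c(1, 5), nll, lower = c(1e-6, 1e-6))
fit$par                                    # reported: 1.2004, 7.2595
ks.test(x, function(q) pphl(q, fit$par[1], fit$par[2]))$statistic  # ~0.0823
```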

Figure 5: Plots of the estimated density and cumulative distribution function of PHL

4. Conclusion

In this study, we computed and assessed the expected information matrix for the PHL distribution, utilizing confidence intervals and their coverage probabilities. Our findings underscore the consistent performance of the maximum likelihood method: as the sample size increased, the confidence interval length consistently decreased. Moreover, the coverage probability (CP) of the 95% asymptotic confidence interval remained within the expected nominal size, affirming the accuracy and reliability of the information matrix's performance.

Acknowledgments: We would like to thank the editor and the referees for their useful comments, which improved the paper.

References

[1] Decani, J. S., & Stine, R. A. (1986). A note on deriving the information matrix for a logistic distribution. The American Statistician, 40(3), 220-222.

[2] Chesher, A., & Spady, R. (1991). Asymptotic expansions of the information matrix test statistic. Econometrica: Journal of the Econometric Society, 787-815.

[3] Orme, C. (1990). The small-sample performance of the information-matrix test. Journal of Econometrics, 46(3), 309-331.

[4] Chang, K. C., Zhi, T., & Saha, R. K. (2002). Performance evaluation of track fusion with information matrix filter. IEEE Transactions on Aerospace and Electronic Systems, 38(2), 455-466.

[5] Liao, Z., Drummond, T., Reid, I., & Carneiro, G. (2018). Approximate Fisher information matrix to characterize the training of deep neural networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(1), 15-26.

[6] Wang, Z., Liu, C., Zhao, J., & Shao, L. (2022). Extending the Fisher information matrix in gravitational-wave data analysis. The Astrophysical Journal, 932(2), 102.

[7] Fiderer, L. J., Tufarelli, T., Piano, S., & Adesso, G. (2021). General expressions for the quantum Fisher information matrix with applications to discrete quantum imaging. PRX Quantum, 2(2), 020308.

[8] Miura, K. (2011). An introduction to maximum likelihood estimation and information geometry. Interdisciplinary Information Sciences, 17(3), 155-174.

[9] Abdel-Hamid, A. H. (2016). Properties, estimations and predictions for a Poisson-half-logistic distribution based on progressively type-II censored samples. Applied Mathematical Modelling, 40(15-16), 7164-7181.

[10] Muhammad, M., & Liu, L. (2021). A new three parameter lifetime model: The complementary Poisson generalized half logistic distribution. IEEE Access, 9, 60089-60107.


[11] Muhammad, M. (2017). Generalized half-logistic Poisson distributions. Communications for Statistical Applications and Methods, 24(4), 353-365.

[12] Muhammad, M., & Liu, L. (2019). A new extension of the generalized half logistic distribution with applications to real data. Entropy, 21(4), 339.

[13] Muhammad, I., Wang, X., Li, C., Yan, M., & Chang, M. (2020). Estimation of the reliability of a stress-strength system from Poisson half logistic distribution. Entropy, 22(11), 1307.

[14] Cordeiro, G. M., Alizadeh, M., & Diniz Marinho, P. R. (2016). The type I half-logistic family of distributions. Journal of Statistical Computation and Simulation, 86(4), 707-728.

[15] Muhammad, M., Bantan, R. A., Liu, L., Chesneau, C., Tahir, M. H., Jamal, F., & Elgarhy, M. (2021). A new extended cosine-G distributions for lifetime studies. Mathematics, 9(21), 2758.

[16] Muhammad, M., Tahir, M. H., Liu, L., Jamal, F., Chesneau, C., & Abba, B. (2023). On the Type-I Half-logistic Distribution and Related Contributions: A Review. Austrian Journal of Statistics, 52(5), 34-62.

[17] Nichols, M. D., & Padgett, W. J. (2006). A bootstrap control chart for Weibull percentiles. Quality and Reliability Engineering International, 22(2), 141-151.
