SOME PROPERTIES OF TSALLIS ENTROPY BASED ON A DOUBLY TRUNCATED (INTERVAL) RANDOM VARIABLE
S. Jalayeri a, G.R. Mohtashami Borzadaran a*, M. Khorashadizadeh b
a Ferdowsi University of Mashhad, Iran
b University of Birjand, Iran, m.khorashadizadeh@birjand.ac.ir
Abstract
In this paper, we first study the doubly truncated (interval) Tsallis entropy and propose the doubly truncated (interval) cumulative residual Tsallis entropy (ICRT), which extends both the cumulative residual Tsallis entropy (CRT) defined by Sati and Gupta and the dynamic CRT defined by Kumar. We investigate some properties and characterizations of this measure, such as its relations with the doubly truncated Shannon entropy, the mean residual (past) life, and the hazard rate (or reversed hazard rate). The twin measure, the doubly truncated (interval) cumulative past Tsallis entropy (ICPT), is also introduced, and some of its properties are studied. Moreover, their monotonicity and the related aging classes of distributions are discussed, and upper (lower) bounds for them are obtained. Finally, we propose four nonparametric estimators and compare their performance using simulated data. Based on the best-performing estimator, a real data set is additionally examined.
Keywords: Doubly truncated (interval) Tsallis entropy, Doubly truncated (interval) cumulative residual Tsallis entropy (ICRT), Doubly truncated (interval) cumulative past Tsallis entropy (ICPT), Hazard rate, Reversed hazard rate, Mean residual life, Mean past life, Nonparametric estimators
1. Introduction
The notion of entropy, later generalized to information theory and statistical mechanics, was initially created by physicists in the area of equilibrium thermodynamics. The most famous entropy measure is due to [22]; it plays an essential role in measuring the average uncertainty of a random variable and serves as an index of dispersion, volatility, or uncertainty associated with a random variable X. Here and throughout this paper, X is an absolutely continuous nonnegative random variable with probability density function (pdf) $f(x)$ and survival function $\bar F(x) = P(X > x)$. The average amount of uncertainty associated with X, as given by the Shannon entropy, is
$$H(X) = -\int_0^\infty f(x)\,\ln f(x)\,dx.$$
In certain situations, however, the Shannon entropy is not suitable and generalized forms become important. Several generalized entropy measures are available in the literature; they have many useful properties, such as smoothness and a large dynamic range under certain conditions, which give them greater flexibility in practice. One prevalent generalization is the Tsallis entropy, introduced by [24] as a generalization of the Boltzmann-Gibbs entropy. In statistical mechanics, Tsallis entropy gives a much broader view of how disorder emerges in macroscopic systems. For a continuous nonnegative random variable X, the Tsallis entropy is defined as
$$T_\alpha(X) = \frac{1}{\alpha-1}\left(1 - \int_0^\infty f^\alpha(x)\,dx\right), \tag{1}$$
where $0 < \alpha \ne 1$. Clearly, when $\alpha \to 1$, we have $T_\alpha(X) \to H(X)$. Tsallis exploited its nonextensive features, and it has found increasingly extensive applications in science and technology. This entropy measure is more flexible because of the parameter $\alpha$, which widens its scope of application. Tsallis entropy preserves many significant characteristics of Shannon entropy except for the additivity property. From the year 2000 onwards, an increasingly wide spectrum of natural, artificial, and socially complex systems have been identified that verify the predictions and conclusions derived from this nonadditive entropy. Extensive or nonextensive statistical mechanics derives from the additivity or nonadditivity of the corresponding entropy measure. The Tsallis entropy is broadly used in physics, for example to examine the distribution characterizing the motion of cold atoms in dissipative optical lattices [9], and in signal processing [23]. More properties and applications of Tsallis entropy are given in [24, 25].
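As a quick numerical sanity check of this limit (my own sketch, not part of the original paper, assuming NumPy and SciPy), one can evaluate $T_\alpha$ for the standard exponential density, for which $T_\alpha(X) = 1/\alpha$ and $H(X) = 1$:

```python
# Minimal numerical sketch: T_alpha(X) -> H(X) as alpha -> 1
# for the standard exponential density f(x) = exp(-x).
import numpy as np
from scipy.integrate import quad

def shannon_entropy(f, upper=50.0):
    # H(X) = -int_0^inf f(x) ln f(x) dx  (upper limit truncated for quadrature)
    return -quad(lambda x: f(x) * np.log(f(x)), 0.0, upper)[0]

def tsallis_entropy(f, alpha, upper=50.0):
    # T_alpha(X) = (1 - int_0^inf f(x)^alpha dx) / (alpha - 1), alpha != 1
    return (1.0 - quad(lambda x: f(x) ** alpha, 0.0, upper)[0]) / (alpha - 1.0)

f = lambda x: np.exp(-x)
print(shannon_entropy(f))                 # 1.0 for Exp(1)
for a in (0.9, 0.99, 1.01, 1.1):
    print(a, tsallis_entropy(f, a))       # equals 1/alpha, hence -> 1 as alpha -> 1
```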
Measures based on the residual lifetime random variable $X_t = (X - t \mid X > t)$ play an essential role in many fields, including reliability theory, survival analysis, and information theory. Accordingly, [10, 6] defined the residual Tsallis entropy (RT) based on $X_t$ by
$$RT(X; t) = \frac{1}{\alpha-1}\left(1 - \int_t^\infty \left(\frac{f(x)}{\bar F(t)}\right)^\alpha dx\right).$$
RT basically measures the expected uncertainty contained in the remaining lifetime of a component. It is clear that $RT(X; 0) = T_\alpha(X)$. Later, [10, 4] introduced an entropy-based measure of uncertainty for past lifetime distributions, called the past Tsallis entropy (PT). PT captures the uncertainty of the idle time of a component or system through the past lifetime random variable $X^*_t = (t - X \mid X \le t)$ and is given by
$$PT(X; t) = \frac{1}{\alpha-1}\left(1 - \int_0^t \left(\frac{f(x)}{F(t)}\right)^\alpha dx\right),$$
and also $PT(X; \infty) = T_\alpha(X)$.
Recently, many researchers have advanced new measures of uncertainty to overcome the limitations of traditional entropy measures and increase the applicability of information measures in diverse areas of science and engineering. With this motivation, [18] studied an alternative to the Shannon differential entropy. The cumulative residual entropy (CRE) is obtained by replacing the pdf $f(x)$ in $H(X)$ with the survival function $\bar F(x) = P(X > x)$:
$$CRE(X) = -\int_0^\infty \bar F(x)\,\ln \bar F(x)\,dx.$$
The CRE is regarded as more stable, since the distribution function is more regular than the pdf, and it possesses useful mathematical properties and special applications. It is also easily computable, always nonnegative, and well defined in both the continuous and discrete cases. Additionally, it remains well defined even when the pdf does not exist.
In information theory, numerous attempts have been made by researchers, and an eminent amount of work has been done, from both theoretical and applied points of view, to study and extend the notion of CRE. Motivated by the extensive applicability of $H(X)$, a cumulative version of (1), studied by [19], is defined as the cumulative residual Tsallis entropy (CRT):
$$CRT(X) = \frac{1}{\alpha-1}\left(1 - \int_0^\infty \bar F^\alpha(x)\,dx\right).$$
Although [19] stated that $CRT(X)$ tends to $CRE(X)$ as $\alpha \to 1$, where $CRE(X) = -\int_0^\infty \bar F(x)\ln \bar F(x)\,dx$ as defined by [18], [16] showed with a counterexample that this is not true. The cumulative past Tsallis entropy (CPT) has also been introduced and studied by [16] as follows:
$$CPT(X) = \frac{1}{\alpha-1}\left(1 - \int_0^\infty F^\alpha(x)\,dx\right).$$
[19] gave the dynamic version of the cumulative residual Tsallis entropy (DCRT), which is the CRT of the residual random variable $X_t$:
$$DCRT(X; t) = \frac{1}{\alpha-1}\left(1 - \int_t^\infty \left(\frac{\bar F(x)}{\bar F(t)}\right)^\alpha dx\right),$$
and $DCRT(X; 0) = CRT(X)$. Furthermore, [8] studied many properties of DCRT, and [16] introduced the dynamic version of the cumulative past Tsallis entropy (DCPT):
$$DCPT(X; t) = \frac{1}{\alpha-1}\left(1 - \int_0^t \left(\frac{F(x)}{F(t)}\right)^\alpha dx\right),$$
and $DCPT(X; \infty) = CPT(X)$. In many situations we only possess information between two time points, so statistical measures (particularly in information theory and reliability) must be examined for doubly truncated random variables. For instance, in reliability, if X denotes the lifetime of a unit, then the random variable $X_{t_1,t_2} = (X - t_1 \mid t_1 < X < t_2)$ is known as the doubly truncated residual lifetime. Note that the well-known random variable $X_t = (X - t \mid X > t)$ is the particular case of $X_{t_1,t_2}$ as $t_2$ tends to $\infty$. The doubly truncated past lifetime is the random variable $X^*_{t_1,t_2} = (t_2 - X \mid t_1 < X < t_2)$, which reduces to the past lifetime random variable $X^*_{t_2}$ when $t_1 = 0$. Another generalization of Tsallis entropy is based on a doubly truncated (interval) random variable [13] and reads as follows:
$$T_\alpha(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{f(x)}{F(t_2) - F(t_1)}\right)^\alpha dx\right), \tag{2}$$
where $(t_1, t_2) \in D = \{(t_1, t_2) : F(t_1) < F(t_2)\}$. Here $T_\alpha(X; 0, \infty)$ is the Tsallis entropy $T_\alpha(X)$, $T_\alpha(X; t_1, \infty)$ is the residual entropy $RT(X; t_1)$, and $T_\alpha(X; 0, t_2)$ is the past entropy $PT(X; t_2)$. Also, as $\alpha \to 1$,
$$T_\alpha(X; t_1, t_2) \to H(X; t_1, t_2) = -\int_{t_1}^{t_2} \frac{f(x)}{F(t_2)-F(t_1)}\,\ln\!\left(\frac{f(x)}{F(t_2)-F(t_1)}\right) dx.$$
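For later reference, (2) can be evaluated by direct quadrature. The following sketch is my own (assuming SciPy; `interval_tsallis` is a hypothetical helper name), not code from the paper:

```python
# Quadrature sketch of the doubly truncated Tsallis entropy in Eq. (2).
# Assumes the pdf and cdf are supplied by the caller and (t1, t2) lies in D.
import numpy as np
from scipy.integrate import quad

def interval_tsallis(pdf, cdf, t1, t2, alpha):
    """T_alpha(X; t1, t2) of Eq. (2) for an absolutely continuous distribution."""
    denom = cdf(t2) - cdf(t1)                       # F(t2) - F(t1) > 0 on D
    integrand = lambda x: (pdf(x) / denom) ** alpha
    return (1.0 - quad(integrand, t1, t2)[0]) / (alpha - 1.0)

# Example: Exp(1); matches the closed form given in Example 2 below.
pdf = lambda x: np.exp(-x)
cdf = lambda x: 1.0 - np.exp(-x)
print(interval_tsallis(pdf, cdf, 0.5, 2.0, alpha=2.0))
```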
Distribution function estimation is not only an interesting problem in itself; it also emerges naturally in practical problems of many scientific fields, including seismology, hydrology, and the environmental sciences. In these disciplines, numerous methodologies based on nonparametric ideas have appeared for attacking statistical problems. With this motivation, the performance of four nonparametric estimators of ICPT is compared, and a real-life data set is analyzed using the best-performing estimator.
In this paper, some properties of $T_\alpha(X; t_1, t_2)$ are presented. Additionally, we discuss the doubly truncated (interval) cumulative residual Tsallis entropy (ICRT) and the doubly truncated (interval) cumulative past Tsallis entropy (ICPT), which generalize the preceding notions. Some properties of ICRT and ICPT and their relationships with reliability measures, including the hazard rate (or reversed hazard rate) and the mean residual life (or mean past life), are studied. Finally, we consider four empirical and kernel-based estimators, compare their behavior on simulated data, and study a real data set from environmental monitoring.
2. Doubly truncated Tsallis entropy
In this section, we present some properties and characterization results for $T_\alpha(X; t_1, t_2)$. First, a bound for $T_\alpha(X; t_1, t_2)$, regarded as a function of $t_2$ for any fixed $t_1$, is obtained in the next theorem; [13] proved a similar result with respect to $t_1$ for any fixed $t_2$. It should also be noted that [11] introduced the generalized failure rates (GFRs) for doubly truncated random variables by
$$h_1(t_1, t_2) = \lim_{h \to 0^+} \frac{P(t_1 < X \le t_1 + h \mid t_1 < X < t_2)}{h} = \frac{f(t_1)}{F(t_2) - F(t_1)} \tag{3}$$
and
$$h_2(t_1, t_2) = \lim_{h \to 0^+} \frac{P(t_2 < X \le t_2 + h \mid t_1 < X < t_2)}{h} = \frac{f(t_2)}{F(t_2) - F(t_1)}, \tag{4}$$
where their relationships with $m(t_1, t_2) = E(X \mid t_1 < X < t_2) = \int_{t_1}^{t_2} x\,\frac{f(x)}{F(t_2)-F(t_1)}\,dx$ for $(t_1, t_2) \in D$ are as follows:
$$h_1(t_1, t_2) = \frac{\frac{\partial m(t_1,t_2)}{\partial t_1}}{m(t_1, t_2) - t_1}, \tag{5}$$
$$h_2(t_1, t_2) = \frac{\frac{\partial m(t_1,t_2)}{\partial t_2}}{t_2 - m(t_1, t_2)}. \tag{6}$$
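The identities (5) and (6) are easy to check numerically. The sketch below is my own (assuming SciPy), comparing the direct GFRs of Exp(1) with the right-hand sides of (5) and (6) via central finite differences:

```python
# Numerical sanity check of the GFR identities (5) and (6) for Exp(1).
import numpy as np
from scipy.integrate import quad

pdf = lambda x: np.exp(-x)
cdf = lambda x: 1.0 - np.exp(-x)

def m(t1, t2):
    # m(t1, t2) = E(X | t1 < X < t2)
    return quad(lambda x: x * pdf(x), t1, t2)[0] / (cdf(t2) - cdf(t1))

h1 = lambda t1, t2: pdf(t1) / (cdf(t2) - cdf(t1))   # Eq. (3)
h2 = lambda t1, t2: pdf(t2) / (cdf(t2) - cdf(t1))   # Eq. (4)

t1, t2, eps = 0.5, 2.0, 1e-6
dm_dt1 = (m(t1 + eps, t2) - m(t1 - eps, t2)) / (2 * eps)
dm_dt2 = (m(t1, t2 + eps) - m(t1, t2 - eps)) / (2 * eps)
print(h1(t1, t2), dm_dt1 / (m(t1, t2) - t1))        # Eq. (5): should agree
print(h2(t1, t2), dm_dt2 / (t2 - m(t1, t2)))        # Eq. (6): should agree
```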
An upper bound for $T_\alpha(X; t_1, t_2)$ under the increasing doubly truncated (interval) Tsallis entropy property is obtained in the next theorem.
Theorem 1. The random variable X has the increasing doubly truncated (interval) Tsallis entropy property (with respect to $t_2$) if and only if the following inequality is satisfied for all $(t_1, t_2) \in D$ and $0 < \alpha \ne 1$:
$$T_\alpha(X; t_1, t_2) \le \frac{1}{\alpha-1}\left(1 - \frac{1}{\alpha}\left(\frac{\partial m(t_1,t_2)/\partial t_2}{t_2 - m(t_1, t_2)}\right)^{\alpha-1}\right).$$
Proof. By differentiating $T_\alpha(X; t_1, t_2)$ in (2) with respect to $t_2$, we have
$$\frac{\partial T_\alpha(X; t_1, t_2)}{\partial t_2} = \frac{-1}{\alpha-1}\, h_2^\alpha(t_1, t_2) + \frac{\alpha}{\alpha-1}\, h_2(t_1, t_2)\bigl(1 - (\alpha-1)\,T_\alpha(X; t_1, t_2)\bigr)$$
$$= h_2(t_1, t_2)\left(\frac{-1}{\alpha-1}\, h_2^{\alpha-1}(t_1, t_2) + \frac{\alpha}{\alpha-1}\bigl(1 - (\alpha-1)\,T_\alpha(X; t_1, t_2)\bigr)\right).$$
Since $h_2(t_1, t_2) > 0$, the condition $\partial T_\alpha(X; t_1, t_2)/\partial t_2 \ge 0$ is equivalent, after rearranging, to
$$T_\alpha(X; t_1, t_2) \le \frac{1}{\alpha-1}\left(1 - \frac{1}{\alpha}\, h_2^{\alpha-1}(t_1, t_2)\right);$$
note that the factor $\frac{1}{\alpha-1}$ and the final division by $\alpha - 1$ flip the inequality twice when $0 < \alpha < 1$, so the direction is the same for both ranges of $\alpha$. Substituting (6) for $h_2(t_1, t_2)$ completes the proof. ■
We now study the effect of an increasing transformation on $T_\alpha(\cdot\,; t_1, t_2)$.
Lemma 1. Let X be a nonnegative continuous random variable with cumulative distribution function (cdf) F, and let $Y = \phi(X)$, where $\phi(\cdot)$ is a strictly increasing differentiable function. Then
$$T_\alpha(Y; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{\max\{0,\,\phi^{-1}(t_1)\}}^{\phi^{-1}(t_2)}\left(\frac{f(x)}{F(\phi^{-1}(t_2)) - F(\phi^{-1}(t_1))}\right)^\alpha \frac{dx}{(\phi'(x))^{\alpha-1}}\right).$$
If $Z = aX + b$, with $a > 0$ and $b \ge 0$, so that $F_{aX+b}(z) = F_X\!\left(\frac{z-b}{a}\right)$, then
$$T_\alpha(Z; t_1, t_2) = \frac{a^{\alpha-1} - 1}{a^{\alpha-1}(\alpha-1)} + \frac{1}{a^{\alpha-1}}\, T_\alpha\!\left(X; \frac{t_1 - b}{a}, \frac{t_2 - b}{a}\right).$$
The following propositions provide an identity and inequalities for the doubly truncated (interval) Tsallis entropy.
Proposition 1. Let X be a random variable with support in $[0, r]$, $r > 0$, and symmetric with respect to $\frac{r}{2}$; that is, $F(x) = \bar F(r - x)$ for $0 \le x \le r$. Then
$$T_\alpha(X; t_1, t_2) = T_\alpha(X; r - t_2, r - t_1), \qquad 0 < t_1, t_2 < r.$$
Proof. We have
$$T_\alpha(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{f(x)}{F(t_2)-F(t_1)}\right)^\alpha dx\right) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{f(r-x)}{F(r-t_1)-F(r-t_2)}\right)^\alpha dx\right)$$
$$= \frac{1}{\alpha-1}\left(1 - \int_{r-t_2}^{r-t_1}\left(\frac{f(y)}{F(r-t_1)-F(r-t_2)}\right)^\alpha dy\right) = T_\alpha(X; r - t_2, r - t_1). \qquad \blacksquare$$
Example 1. If X is uniformly distributed on $[0, r]$, then for $0 < t_1, t_2 < r$ we have $T_\alpha(X; t_1, t_2) = \frac{1}{\alpha-1}\bigl(1 - (t_2 - t_1)^{1-\alpha}\bigr)$, which is in agreement with Proposition 1.
Proposition 2. Let X be a nonnegative and absolutely continuous random variable. Then for $\alpha > 1$ ($0 < \alpha < 1$),
$$1 - (t_2 - t_1) \le T_\alpha(X; t_1, t_2) \le (t_2 - t_1) - 1. \tag{7}$$
Proof. The upper and lower bounds in (7) follow from the well-known inequality $\ln x \le x - 1$, $x > 0$, applied with $x = \frac{f(x)}{F(t_2)-F(t_1)}$, for which $x^{\alpha-1} > 0$ for $\alpha > 1$ ($0 < \alpha < 1$), together with $H(X; t_1, t_2) \le (t_2 - t_1) - 1$ [15]. This completes the proof. ■
Proposition 3. Let X be a nonnegative and absolutely continuous random variable with cdf $F(x)$ and pdf $f(x)$. If $f(x)$ is decreasing in x, then for $0 < \alpha < 1$ ($\alpha > 1$),
$$\frac{1 - h_1^\alpha(t_1, t_2)(t_2 - t_1)}{\alpha - 1} \ge (\le)\; T_\alpha(X; t_1, t_2) \ge (\le)\; \frac{1 - h_2^\alpha(t_1, t_2)(t_2 - t_1)}{\alpha - 1},$$
where $h_1(t_1, t_2)$ and $h_2(t_1, t_2)$ are defined in (3) and (4).
Proof. Let $f(x)$ be decreasing in x. Then for $t_1 \le x \le t_2$,
$$\frac{f(t_1)}{F(t_2)-F(t_1)} \ge \frac{f(x)}{F(t_2)-F(t_1)} \ge \frac{f(t_2)}{F(t_2)-F(t_1)}.$$
So
$$\int_{t_1}^{t_2}\left(\frac{f(t_1)}{F(t_2)-F(t_1)}\right)^\alpha dx \ge \int_{t_1}^{t_2}\left(\frac{f(x)}{F(t_2)-F(t_1)}\right)^\alpha dx \ge \int_{t_1}^{t_2}\left(\frac{f(t_2)}{F(t_2)-F(t_1)}\right)^\alpha dx.$$
Then
$$1 - h_1^\alpha(t_1, t_2)(t_2 - t_1) \le 1 - \int_{t_1}^{t_2}\left(\frac{f(x)}{F(t_2)-F(t_1)}\right)^\alpha dx \le 1 - h_2^\alpha(t_1, t_2)(t_2 - t_1).$$
Dividing by $\alpha - 1$ for $0 < \alpha < 1$ ($\alpha > 1$) completes the proof. ■
Example 2. Let X be a nonnegative and absolutely continuous random variable with cdf $F(x) = 1 - e^{-x}$ and pdf $f(x) = e^{-x}$. Then
$$T_\alpha(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \frac{e^{-\alpha t_1} - e^{-\alpha t_2}}{\alpha\,(e^{-t_1} - e^{-t_2})^\alpha}\right)$$
for all $\alpha > 1$ ($0 < \alpha < 1$) and $t_1 < t_2$, which is in agreement with Propositions 2 and 3. For increasing $f(x)$, the above proposition can be proved similarly.
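The bounds of Proposition 3 can be illustrated numerically; the following sketch (mine, reusing `interval_tsallis` from the Introduction) evaluates both bounds and the exact value for Exp(1), whose pdf is decreasing:

```python
# Numerical illustration of the Proposition 3 bounds for Exp(1), 0 < alpha < 1.
import numpy as np

pdf = lambda x: np.exp(-x)
cdf = lambda x: 1.0 - np.exp(-x)
t1, t2, alpha = 0.5, 2.0, 0.2

dF = cdf(t2) - cdf(t1)
h1, h2 = pdf(t1) / dF, pdf(t2) / dF                  # GFRs (3) and (4)
upper = (1 - h1 ** alpha * (t2 - t1)) / (alpha - 1)  # left-hand bound
lower = (1 - h2 ** alpha * (t2 - t1)) / (alpha - 1)  # right-hand bound
print(lower, interval_tsallis(pdf, cdf, t1, t2, alpha), upper)
```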
3. Interval cumulative residual and past Tsallis entropy
Let X be an absolutely continuous random variable and let $D = \{(x, y) : F(x) < F(y)\}$. Then we define the ICPT and ICRT functions, respectively, as follows:
$$ICPT(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{F(x)}{F(t_2) - F(t_1)}\right)^\alpha dx\right) \tag{8}$$
and
$$ICRT(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1) - \bar F(t_2)}\right)^\alpha dx\right), \tag{9}$$
where $(t_1, t_2) \in D$. It is clear that $ICRT(X; 0, \infty)$ is $CRT(X)$ and $ICRT(X; t_1, \infty)$ is $DCRT(X; t_1)$; also, $ICPT(X; 0, \infty)$ is $CPT(X)$ and $ICPT(X; 0, t_2)$ is $DCPT(X; t_2)$. Classes of life distributions find applications in various areas, including reliability, engineering, biological science, maintenance, and biometrics. Hence, statisticians and reliability analysts are interested in modeling survival data and in classifications of life distributions based on notions of aging; see, for instance, [15, 1, 26]. The corresponding aging classes are defined as follows.
Definition 1. Consider the random variable X.
• X is said to have the decreasing interval cumulative residual Tsallis entropy (DICRT) property if and only if, for any fixed $t_2$, $ICRT(X; t_1, t_2)$ is decreasing with respect to $t_1$.
• X is said to have the increasing interval cumulative past Tsallis entropy (IICPT) property if and only if, for any fixed $t_1$, $ICPT(X; t_1, t_2)$ is increasing with respect to $t_2$.
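For later numerical checks, (8) and (9) can be evaluated by quadrature; the sketch below is my own (assuming SciPy; `icpt` and `icrt` are hypothetical helper names):

```python
# Quadrature sketches of ICPT (Eq. (8)) and ICRT (Eq. (9)).
import numpy as np
from scipy.integrate import quad

def icpt(cdf, t1, t2, alpha):
    denom = cdf(t2) - cdf(t1)                       # F(t2) - F(t1)
    g = lambda x: (cdf(x) / denom) ** alpha
    return (1.0 - quad(g, t1, t2)[0]) / (alpha - 1.0)

def icrt(cdf, t1, t2, alpha):
    sf = lambda x: 1.0 - cdf(x)                     # survival function
    denom = sf(t1) - sf(t2)
    g = lambda x: (sf(x) / denom) ** alpha
    return (1.0 - quad(g, t1, t2)[0]) / (alpha - 1.0)

# Uniform(0, 1): reproduces the closed forms of Examples 3 and 4 below (beta = 1).
u_cdf = lambda x: np.clip(x, 0.0, 1.0)
print(icrt(u_cdf, 0.2, 0.8, alpha=2.0), icpt(u_cdf, 0.2, 0.8, alpha=2.0))
```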
An upper bound for $ICRT(X; t_1, t_2)$ under the decreasing (increasing) ICRT property is obtained in the next theorem.
Theorem 2. The random variable X has the decreasing (increasing) ICRT property if and only if the following inequality is satisfied for all $(t_1, t_2) \in D$ and $0 < \alpha < 1$ ($\alpha > 1$):
$$ICRT(X; t_1, t_2) \le (\ge)\; \frac{1}{\alpha-1}\left(1 - \frac{1}{\alpha}\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha \left(\frac{1 + \frac{\partial \mu(t_1,t_2)}{\partial t_1}}{\mu(t_1, t_2)}\right)^{\alpha-1}\right).$$
Proof. By differentiating $ICRT(X; t_1, t_2)$ in (9) with respect to $t_1$, we have
$$\frac{\partial ICRT(X; t_1, t_2)}{\partial t_1} = \frac{1}{\alpha-1}\left(\left(\frac{\bar F(t_1)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha - \frac{\alpha f(t_1)}{\bar F(t_1)-\bar F(t_2)}\int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha dx\right)$$
$$= \frac{1}{\alpha-1}\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha h_1^\alpha(t_1, t_2) - \frac{\alpha}{\alpha-1}\, h_1(t_1, t_2)\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr).$$
By the definition of the GFRs in (3) and (4), their relationships with $\mu(t_1, t_2) = E(X - t_1 \mid t_1 < X < t_2)$ and $\mu^*(t_1, t_2) = E(t_2 - X \mid t_1 < X < t_2)$ are, respectively,
$$h_1(t_1, t_2) = \frac{1 + \frac{\partial \mu(t_1,t_2)}{\partial t_1}}{\mu(t_1, t_2)}, \tag{10}$$
$$h_2(t_1, t_2) = \frac{1 - \frac{\partial \mu^*(t_1,t_2)}{\partial t_2}}{\mu^*(t_1, t_2)}. \tag{11}$$
So the sign condition on $\partial ICRT(X; t_1, t_2)/\partial t_1$, after a suitable substitution of (10) and simplification, is equivalent to
$$ICRT(X; t_1, t_2) \le (\ge)\; \frac{1}{\alpha-1}\left(1 - \frac{1}{\alpha}\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha h_1^{\alpha-1}(t_1, t_2)\right). \qquad \blacksquare$$
Example 3. Let X be uniformly distributed on $(0, \beta)$, $\beta > 0$. Then it is easily verified that
$$ICRT(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \frac{(\beta - t_1)^{\alpha+1} - (\beta - t_2)^{\alpha+1}}{(t_2 - t_1)^\alpha (1 + \alpha)}\right), \qquad \mu(t_1, t_2) = \frac{t_2 - t_1}{2}.$$
The derivative of ICRT with respect to $t_1$ is negative for all $(t_1, t_2) \in D$, which shows that the uniform distribution has the DICRT property, and Theorem 2 is satisfied.
As the following theorem indicates, no nonnegative random variable has increasing ICRT (IICRT) over the whole domain $[0, \infty)$.
Theorem 3. If X is a nonnegative nondegenerate random variable, then $ICRT(X; t_1, t_2)$ cannot be an increasing function with respect to $t_1$ for any fixed $t_2$.
Proof. First note that, using L'Hôpital's rule,
$$\lim_{t_1 \to t_2} ICRT(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \lim_{t_1 \to t_2}\int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha dx\right)$$
$$= \frac{1}{\alpha-1}\left(1 - \lim_{t_1 \to t_2}\frac{\int_{t_1}^{t_2}\bar F^\alpha(x)\,dx}{\bigl(\bar F(t_1)-\bar F(t_2)\bigr)^\alpha}\right) = \frac{1}{\alpha-1}\left(1 - \lim_{t_1 \to t_2}\frac{\bar F^\alpha(t_1)}{\alpha f(t_1)\bigl(\bar F(t_1)-\bar F(t_2)\bigr)^{\alpha-1}}\right) = -\infty.$$
Now suppose, on the contrary, that $ICRT(X; t_1, t_2)$ is increasing in $t_1$. Then for all $t_1 < t_2$, $ICRT(X; t_1, t_2) \le ICRT(X; t_2, t_2) = -\infty$, which contradicts the fact that $ICRT(X; t_1, t_2)$ is finite for all $(t_1, t_2) \in D$. ■
In the following proposition, we obtain a lower bound for $E(\mu(X) \mid t_1 < X < t_2)$, where $\mu(x) = \int_x^\infty \frac{\bar F(t)}{\bar F(x)}\,dt$ is the mean residual life.
Proposition 4. Suppose that F is an absolutely continuous distribution function with $ICRT(X; t_1, t_2) < \infty$. Then, for $0 < \alpha < 1$,
$$E(\mu(X) \mid t_1 < X < t_2) \ge (\alpha - 1)\,ICRT(X; t_1, t_2) - 1.$$
Proof. By using $E(\mu(X) \mid t_1 < X < t_2) \ge ICRE(X; t_1, t_2)$ [5], where
$$ICRE(X; t_1, t_2) = -\int_{t_1}^{t_2} \frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\,\log\!\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)dx,$$
together with the well-known inequality $\log x \le x - 1$ for $0 < \alpha < 1$, we obtain
$$ICRE(X; t_1, t_2) \ge -\int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha dx = (\alpha - 1)\,ICRT(X; t_1, t_2) - 1,$$
which completes the proof. ■
The following theorem addresses the question of when the interval entropy uniquely determines the distribution function.
Theorem 4. Let X be a nonnegative and continuous random variable and let $ICRT(X; t_1, t_2)$ be increasing with respect to $t_1$ and decreasing with respect to $t_2$. Then $ICRT(X; t_1, t_2)$ uniquely determines $F(x)$.
Proof. By differentiating $ICRT(X; t_1, t_2)$ with respect to $t_j$ ($j = 1, 2$), we have
$$\frac{\partial ICRT(X; t_1, t_2)}{\partial t_2} = \frac{1}{\alpha-1}\left(-\left(\frac{\bar F(t_2)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha + \frac{\alpha f(t_2)}{\bar F(t_1)-\bar F(t_2)}\int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha dx\right)$$
$$= \frac{-1}{\alpha-1}\left(\frac{\bar F(t_2)}{f(t_2)}\right)^\alpha h_2^\alpha(t_1, t_2) + \frac{\alpha}{\alpha-1}\, h_2(t_1, t_2)\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)$$
$$= h_2(t_1, t_2)\left(\frac{-1}{\alpha-1}\left(\frac{\bar F(t_2)}{f(t_2)}\right)^\alpha h_2^{\alpha-1}(t_1, t_2) + \frac{\alpha}{\alpha-1}\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\right)$$
and
$$\frac{\partial ICRT(X; t_1, t_2)}{\partial t_1} = h_1(t_1, t_2)\left(\frac{1}{\alpha-1}\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha h_1^{\alpha-1}(t_1, t_2) - \frac{\alpha}{\alpha-1}\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\right).$$
Thus, for fixed $t_2$ and arbitrary $t_1$, $h_1(t_1, t_2)$ is a positive solution of the equation
$$g(x) = \frac{x}{\alpha-1}\left(\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha x^{\alpha-1} - \alpha\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\right) - \frac{\partial ICRT(X; t_1, t_2)}{\partial t_1} = 0. \tag{12}$$
Similarly, for fixed $t_1$ and arbitrary $t_2$, $h_2(t_1, t_2)$ is a positive solution of
$$\gamma(y) = \frac{y}{\alpha-1}\left(\left(\frac{\bar F(t_2)}{f(t_2)}\right)^\alpha y^{\alpha-1} - \alpha\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\right) + \frac{\partial ICRT(X; t_1, t_2)}{\partial t_2} = 0. \tag{13}$$
By differentiating g and $\gamma$ with respect to x and y, we get
$$g'(x) = \frac{\alpha}{\alpha-1}\left(\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha x^{\alpha-1} - \bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\right)$$
and
$$\gamma'(y) = \frac{\alpha}{\alpha-1}\left(\left(\frac{\bar F(t_2)}{f(t_2)}\right)^\alpha y^{\alpha-1} - \bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\right).$$
Furthermore, the second-order derivatives of g and $\gamma$, namely $\alpha\left(\frac{\bar F(t_1)}{f(t_1)}\right)^\alpha x^{\alpha-2} > 0$ and $\alpha\left(\frac{\bar F(t_2)}{f(t_2)}\right)^\alpha y^{\alpha-2} > 0$, are positive, so g and $\gamma$ attain their minima at
$$x^* = \left(\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\left(\frac{f(t_1)}{\bar F(t_1)}\right)^\alpha\right)^{\frac{1}{\alpha-1}}, \qquad y^* = \left(\bigl(1 - (\alpha-1)\,ICRT(X; t_1, t_2)\bigr)\left(\frac{f(t_2)}{\bar F(t_2)}\right)^\alpha\right)^{\frac{1}{\alpha-1}},$$
respectively. In addition,
$$g(0) = -\frac{\partial ICRT(X; t_1, t_2)}{\partial t_1} < 0, \quad g(\infty) = \infty, \qquad \gamma(0) = \frac{\partial ICRT(X; t_1, t_2)}{\partial t_2} < 0, \quad \gamma(\infty) = \infty.$$
So both g and $\gamma$ first decrease and then increase, which implies that equations (12) and (13) have unique positive roots $h_1(t_1, t_2)$ and $h_2(t_1, t_2)$, respectively. Hence $ICRT(X; t_1, t_2)$ uniquely determines the GFRs and, by Remark 3.1 of [14], the distribution function. ■
Similar to Theorems 2, 3, and 4 and Proposition 4, we have the following results:
• The random variable X has the decreasing (increasing) ICPT property if and only if the following inequality is satisfied for all $(t_1, t_2) \in D$ and $0 < \alpha < 1$ ($\alpha > 1$):
$$ICPT(X; t_1, t_2) \le (\ge)\; \frac{1}{\alpha-1}\left(1 - \frac{1}{\alpha}\left(\frac{F(t_2)}{f(t_2)}\right)^\alpha \left(\frac{1 - \frac{\partial \mu^*(t_1,t_2)}{\partial t_2}}{\mu^*(t_1, t_2)}\right)^{\alpha-1}\right).$$
• If X is a nonnegative nondegenerate random variable, then $ICPT(X; t_1, t_2)$ cannot be a decreasing function with respect to $t_2$ for any fixed $t_1$.
• Suppose that F is an absolutely continuous distribution function with $ICPT(X; t_1, t_2) < \infty$. Then
$$E(\mu^*(X) \mid t_1 < X < t_2) \ge (\alpha - 1)\,ICPT(X; t_1, t_2) - 1,$$
where $\mu^*(x)$ denotes the mean past life.
• Let X be a nonnegative and continuous random variable and let $ICPT(X; t_1, t_2)$ be increasing with respect to $t_1$ and decreasing with respect to $t_2$. Then $ICPT(X; t_1, t_2)$ uniquely determines $F(x)$.
Example 4. Let X be uniformly distributed on $(0, \beta)$, $\beta > 0$. Then it is easily verified that
$$ICPT(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \frac{t_2^{\alpha+1} - t_1^{\alpha+1}}{(t_2 - t_1)^\alpha (1 + \alpha)}\right), \qquad \mu^*(t_1, t_2) = \frac{t_2 - t_1}{2}.$$
Since the ICPT is increasing with respect to $t_2$, X has the IICPT property. The following lemma is proved by the same approach as Lemma 1.
Lemma 2. Let X be a nonnegative continuous random variable with cdf F, and let $Y = \phi(X)$, where $\phi(\cdot)$ is a strictly increasing differentiable function. Then
$$ICRT(Y; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{\max\{0,\,\phi^{-1}(t_1)\}}^{\phi^{-1}(t_2)}\left(\frac{\bar F(x)}{\bar F(\phi^{-1}(t_1)) - \bar F(\phi^{-1}(t_2))}\right)^\alpha \phi'(x)\,dx\right).$$
Proposition 5. If $Z = aX + b$, with $a > 0$ and $b \ge 0$, so that $F_{aX+b}(z) = F_X\!\left(\frac{z-b}{a}\right)$, then
$$ICRT(Z; t_1, t_2) = \frac{1 - a}{\alpha-1} + a\,ICRT\!\left(X; \frac{t_1 - b}{a}, \frac{t_2 - b}{a}\right).$$
There is an identity for the doubly truncated (interval) CRT in the following theorem.
Theorem 5. Let X be a random variable with support in $[0, r]$ and symmetric with respect to $\frac{r}{2}$, that is, $F(x) = \bar F(r - x)$ for $0 \le x \le r$. Then
$$ICRT(X; t_1, t_2) = ICPT(X; r - t_2, r - t_1), \qquad 0 < t_1, t_2 < r.$$
Proof. The theorem follows from the chain of equalities
$$ICRT(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha dx\right) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{F(r-x)}{F(r-t_1)-F(r-t_2)}\right)^\alpha dx\right)$$
$$= \frac{1}{\alpha-1}\left(1 - \int_{r-t_2}^{r-t_1}\left(\frac{F(y)}{F(r-t_1)-F(r-t_2)}\right)^\alpha dy\right) = ICPT(X; r - t_2, r - t_1). \qquad \blacksquare$$
Example 5. If X is uniformly distributed on $[0, r]$, then for $0 < t_1, t_2 < r$ we have
$$ICRT(X; t_1, t_2) = ICPT(X; r - t_2, r - t_1) = \frac{1}{\alpha-1}\left(1 - \frac{(r - t_1)^{\alpha+1} - (r - t_2)^{\alpha+1}}{(t_2 - t_1)^\alpha (1 + \alpha)}\right),$$
which is in agreement with Theorem 5.
Similar to Lemma 2, Proposition 5, and Theorem 5, we have the following results:
• Let X be a nonnegative continuous random variable with cdf F, and let $Y = \phi(X)$, where $\phi(\cdot)$ is a strictly increasing differentiable function. Then
$$ICPT(Y; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{\max\{0,\,\phi^{-1}(t_1)\}}^{\phi^{-1}(t_2)}\left(\frac{F(x)}{F(\phi^{-1}(t_2)) - F(\phi^{-1}(t_1))}\right)^\alpha \phi'(x)\,dx\right).$$
• If $Z = aX + b$, with $a > 0$ and $b \ge 0$, so that $F_{aX+b}(z) = F_X\!\left(\frac{z-b}{a}\right)$, then
$$ICPT(Z; t_1, t_2) = \frac{1 - a}{\alpha-1} + a\,ICPT\!\left(X; \frac{t_1 - b}{a}, \frac{t_2 - b}{a}\right).$$
• Let X be a random variable with support in $[0, r]$ and symmetric with respect to $\frac{r}{2}$, that is, $F(x) = \bar F(r - x)$ for $0 \le x \le r$. Then
$$ICPT(X; t_1, t_2) = ICRT(X; r - t_2, r - t_1), \qquad 0 < t_1, t_2 < r.$$
Example 6. If X is uniformly distributed on $[0, r]$, then for $0 < t_1, t_2 < r$ we have
$$ICPT(X; t_1, t_2) = ICRT(X; r - t_2, r - t_1) = \frac{1}{\alpha-1}\left(1 - \frac{t_2^{\alpha+1} - t_1^{\alpha+1}}{(t_2 - t_1)^\alpha (1 + \alpha)}\right),$$
which is in agreement with the third result listed above.
Let X and Y be two random variables, with the distribution function and density function of X denoted by $F(t)$ and $f(t)$ and those of Y by $G(t)$ and $g(t)$, respectively. We now compare X and Y on the basis of the doubly truncated (interval) cumulative residual and past Tsallis entropies. We first need the following definitions, which can be found in [20].
Definition 2. X is said to be less than or equal to Y in the likelihood ratio ordering if $\frac{f(x)}{g(x)}$ is decreasing in $x > 0$. We write $X \le_{lr} Y$.
Definition 3. X is said to be less than or equal to Y in the usual stochastic ordering if $\bar F(x) \le \bar G(x)$ for all $x > 0$. We write $X \le_{st} Y$.
Navarro and Rubio [12] showed that $X \le_{lr} Y$ if and only if $[X - t_1 \mid t_1 < X < t_2] \le_{st} [Y - t_1 \mid t_1 < Y < t_2]$ whenever $t_1 < t_2$. We now compare the two random variables X and Y on the basis of the (interval) CRT and (interval) CPT under these orderings.
Theorem 6. Let X and Y be two nonnegative absolutely continuous random variables with survival functions $\bar F(x)$ and $\bar G(x)$, respectively. If $X \le (\ge)_{lr} Y$ for all $t_1, t_2 > 0$, then $ICRT(X; t_1, t_2) \le (\ge)\, ICRT(Y; t_1, t_2)$ for $0 < \alpha < 1$; otherwise, for $\alpha > 1$, $ICRT(X; t_1, t_2) \ge (\le)\, ICRT(Y; t_1, t_2)$.
Proof. The assumption $X \le (\ge)_{lr} Y$ implies that $\bar F_{X_{t_1,t_2}}(x) \le (\ge)\; \bar G_{Y_{t_1,t_2}}(x)$, from which
$$\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha \le (\ge)\; \left(\frac{\bar G(x)}{\bar G(t_1)-\bar G(t_2)}\right)^\alpha,$$
and hence
$$1 - \int_{t_1}^{t_2}\left(\frac{\bar F(x)}{\bar F(t_1)-\bar F(t_2)}\right)^\alpha dx \ge (\le)\; 1 - \int_{t_1}^{t_2}\left(\frac{\bar G(x)}{\bar G(t_1)-\bar G(t_2)}\right)^\alpha dx.$$
For $\alpha > 1$, dividing both sides by $\alpha - 1 > 0$ gives
$$ICRT(X; t_1, t_2) \ge (\le)\; ICRT(Y; t_1, t_2),$$
while for $0 < \alpha < 1$, dividing by $\alpha - 1 < 0$ reverses the inequality:
$$ICRT(X; t_1, t_2) \le (\ge)\; ICRT(Y; t_1, t_2). \qquad \blacksquare$$
Theorem 7. Let X and Y be two nonnegative absolutely continuous random variables with cdfs $F(x)$ and $G(x)$, respectively. If $X \le_{st} Y$ for all $t_1, t_2 > 0$, then $ICPT(X; t_1, t_2) \ge ICPT(Y; t_1, t_2)$ for $0 < \alpha < 1$; otherwise, for $\alpha > 1$, $ICPT(X; t_1, t_2) \le ICPT(Y; t_1, t_2)$.
Proof. The assumption $X \le_{st} Y$ implies that $F_{X^*_{t_1,t_2}}(x) \ge G_{Y^*_{t_1,t_2}}(x)$, from which
$$\left(\frac{F(x)}{F(t_2)-F(t_1)}\right)^\alpha \ge \left(\frac{G(x)}{G(t_2)-G(t_1)}\right)^\alpha,$$
and hence
$$1 - \int_{t_1}^{t_2}\left(\frac{F(x)}{F(t_2)-F(t_1)}\right)^\alpha dx \le 1 - \int_{t_1}^{t_2}\left(\frac{G(x)}{G(t_2)-G(t_1)}\right)^\alpha dx.$$
For $\alpha > 1$, dividing both sides by $\alpha - 1 > 0$ gives $ICPT(X; t_1, t_2) \le ICPT(Y; t_1, t_2)$; for $0 < \alpha < 1$, dividing by $\alpha - 1 < 0$ gives $ICPT(X; t_1, t_2) \ge ICPT(Y; t_1, t_2)$. ■

Example 7. Let
$$\bar F(x) = \begin{cases} \left(\dfrac{x_0}{x}\right)^{\beta_1}, & x \ge x_0, \\[4pt] 1, & x < x_0, \end{cases} \qquad \bar G(x) = \begin{cases} \left(\dfrac{x_0}{x}\right)^{\beta_2}, & x \ge x_0, \\[4pt] 1, & x < x_0; \end{cases}$$
that is, X and Y have Pareto distributions with parameters $\beta_1$ and $\beta_2$, respectively. If $\beta_1 \ge \beta_2 > 0$, then $f(x)/g(x)$ is decreasing, so $X \le_{lr} Y$, and hence, for $\alpha > 1$, $ICRT(X; t_1, t_2) \ge ICRT(Y; t_1, t_2)$. Also, the assumptions of Theorem 6 hold, and therefore $[X - t_1 \mid t_1 < X < t_2] \le_{st} [Y - t_1 \mid t_1 < Y < t_2]$ whenever $t_1 < t_2$.
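A numerical check of Theorem 6 for the Pareto pair of Example 7 (my sketch, reusing `icrt` from Section 3; the values $x_0 = 1$, $\beta_1 = 3$, $\beta_2 = 2$ are illustrative choices, not from the paper):

```python
# Numerical check of Theorem 6 for the Pareto pair of Example 7.
x0 = 1.0
cdf_X = lambda x: 1.0 - (x0 / max(x, x0)) ** 3.0    # beta1 = 3
cdf_Y = lambda x: 1.0 - (x0 / max(x, x0)) ** 2.0    # beta2 = 2

t1, t2, alpha = 1.5, 4.0, 2.0                       # alpha > 1
# beta1 >= beta2 gives X <=_lr Y, so ICRT(X) >= ICRT(Y) is expected here.
print(icrt(cdf_X, t1, t2, alpha), icrt(cdf_Y, t1, t2, alpha))
```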
4. Empirical estimation of ICPT
By utilizing various empirical estimators of the cdf, we suggest four non-parametric estimators ICPT(X; ti, t2) and also compare the implementation of the proposed estimators. For an actual-life fact set, we study the monotonicity of ICPT based totally on its kernel-smoothed estimator .
First, we introduce four nonparametric estimators, by mentioning the name ICPT1 (X; tr, t2), ICPT2(X; tr, t2), ICPT3(X; tr, t2) and ICPT4(X; tr, t2), of ICPT through utilizing empirical distribution function, mean empirical distribution function, median empirical distribution function, and kernel-smoothed function and their implementation by the Monte-Carlo simulation. Let Xi, X2, . . . , Xn be an independent and identically distributed random sample dra wn from a population having distribution function F(x) and survival function ^(x). Now, the first nonparametric estimator of ICPT1 (X; tr, t2) may be written as
$$\widehat{ICPT}_1(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \int_{t_1}^{t_2}\left(\frac{F_n^{(1)}(x)}{F_n^{(1)}(t_2) - F_n^{(1)}(t_1)}\right)^\alpha dx\right)$$
for $0 < \alpha \ne 1$, where $F_n^{(1)}(x) = \frac{1}{n}\sum_{i=1}^{n} I(X_i \le x)$, $x \in \mathbb{R}$, is the empirical distribution function and
$$I(X_i \le x) = \begin{cases} 1 & \text{if } X_i \le x, \\ 0 & \text{otherwise,} \end{cases}$$
is the indicator function of the event $X_i \le x$. Let $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ be the order statistics of the random sample. Considering the sample values lying between $t_1$ and $t_2$, so that $t_1 \le x_{(j)} \le x_{(j+1)} \le \cdots \le x_{(k)} \le t_2$, we obtain
$$\widehat{ICPT}_1(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \sum_{i=j}^{k-1}\bigl(x_{(i+1)} - x_{(i)}\bigr)\left(\frac{F_n^{(1)}(x_{(i)})}{F_n^{(1)}(t_2) - F_n^{(1)}(t_1)}\right)^\alpha\right). \tag{14}$$
The second estimator, $\widehat{ICPT}_2(X; t_1, t_2)$, is obtained by replacing the empirical distribution function in (14) with the mean empirical distribution function $F_n^{(2)}(x)$:
$$\widehat{ICPT}_2(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \frac{1}{\bigl(F_n^{(2)}(t_2) - F_n^{(2)}(t_1)\bigr)^\alpha}\sum_{i=j}^{k-1}\bigl(x_{(i+1)} - x_{(i)}\bigr)\bigl(F_n^{(2)}(x_{(i)})\bigr)^\alpha\right), \tag{15}$$
where the mean empirical distribution function is defined as
$$F_n^{(2)}(x) = \frac{1}{n+1}\sum_{i=1}^{n} I(X_i \le x), \qquad x \in \mathbb{R}.$$
The third nonparametric estimator, $\widehat{ICPT}_3(X; t_1, t_2)$, is obtained by using the median empirical distribution function in (14):
$$\widehat{ICPT}_3(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \sum_{i=j}^{k-1}\bigl(x_{(i+1)} - x_{(i)}\bigr)\left(\frac{F_n^{(3)}(x_{(i)})}{F_n^{(3)}(t_2) - F_n^{(3)}(t_1)}\right)^\alpha\right), \tag{16}$$
where $F_n^{(3)}(x) = \dfrac{\sum_{i=1}^{n} I(X_i \le x) - 0.3}{n + 0.4}$, $x \in \mathbb{R}$, is the median empirical distribution function, so that $F_n^{(3)}(x_{(i)}) = \frac{i - 0.3}{n + 0.4}$.
The fourth estimator is defined by using the kernel-smoothed estimator $F_n^{(4)}(x)$ of the distribution function in (14):
$$\widehat{ICPT}_4(X; t_1, t_2) = \frac{1}{\alpha-1}\left(1 - \sum_{i=j}^{k-1}\bigl(x_{(i+1)} - x_{(i)}\bigr)\left(\frac{F_n^{(4)}(x_{(i)})}{F_n^{(4)}(t_2) - F_n^{(4)}(t_1)}\right)^\alpha\right), \tag{17}$$
where the kernel-smoothed estimator of the distribution function is defined as
$$F_n^{(4)}(x) = \frac{1}{n}\sum_{i=1}^{n} L\!\left(\frac{x - X_i}{h}\right),$$
$L$ is the distribution function of a positive kernel $K$, that is, $L(u) = \int_{-\infty}^{u} K(t)\,dt$, and $h$ is the bandwidth parameter. Here we use the normal kernel $K(u) = \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{u^2}{2}\right)$.
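The four estimators can be coded directly from (14)-(17). The following Python sketch is mine, not the authors' code: the helper `_sum_form`, the names `icpt1`-`icpt4`, and the default Silverman bandwidth for the kernel estimator are all assumptions, since the paper does not specify an implementation or a bandwidth rule.

```python
# Sketch of the four nonparametric ICPT estimators, Eqs. (14)-(17).
import numpy as np
from scipy.stats import norm

def _sum_form(F, xs, t1, t2, alpha):
    """Common Riemann-sum form of Eq. (14) for a vectorized cdf estimate F."""
    xs = np.sort(np.asarray(xs, dtype=float))
    inside = xs[(xs >= t1) & (xs <= t2)]
    if inside.size < 2:
        raise ValueError("need at least two observations inside (t1, t2)")
    Ft1, Ft2 = F(np.array([t1, t2]))
    widths = np.diff(inside)                        # x_(i+1) - x_(i)
    vals = (F(inside[:-1]) / (Ft2 - Ft1)) ** alpha
    return (1.0 - np.sum(widths * vals)) / (alpha - 1.0)

def icpt1(xs, t1, t2, alpha):                       # empirical cdf, Eq. (14)
    s, n = np.sort(xs), len(xs)
    return _sum_form(lambda t: np.searchsorted(s, t, side="right") / n,
                     xs, t1, t2, alpha)

def icpt2(xs, t1, t2, alpha):                       # mean empirical cdf, Eq. (15)
    s, n = np.sort(xs), len(xs)
    return _sum_form(lambda t: np.searchsorted(s, t, side="right") / (n + 1),
                     xs, t1, t2, alpha)

def icpt3(xs, t1, t2, alpha):                       # median empirical cdf, Eq. (16)
    s, n = np.sort(xs), len(xs)
    return _sum_form(lambda t: (np.searchsorted(s, t, side="right") - 0.3) / (n + 0.4),
                     xs, t1, t2, alpha)

def icpt4(xs, t1, t2, alpha, h=None):               # kernel-smoothed cdf, Eq. (17)
    xs = np.asarray(xs, dtype=float)
    if h is None:                                   # assumed: Silverman's rule of thumb
        h = 1.06 * xs.std(ddof=1) * len(xs) ** (-0.2)
    F = lambda t: norm.cdf((np.asarray(t)[:, None] - xs) / h).mean(axis=1)
    return _sum_form(F, xs, t1, t2, alpha)
```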
5. Simulation
It is widely recognized that a smoothed estimator performs better than a nonsmoothed one. To demonstrate the effectiveness of the empirical and kernel estimators, a Monte Carlo simulation study is carried out. The estimated values are computed from 1000 samples of size n (n = 30, 35, 40, 50, 60) simulated from the Exp(0.5) (exponential) distribution, for different truncation limits and α = 0.2, 3.5. The bias and mean square error (MSE) are also calculated. Tables 1 and 2 present the exact value, bias, and MSE of the proposed estimators of ICPT. The MSE of the estimators for truncation limit (0.2, 4) and α = 0.2, 3.5 is also displayed in Figure 1 for increasing sample size.
In nearly all cases, $\widehat{ICPT}_4(X; t_1, t_2)$ defined by (17) performs considerably better, with lower MSE, than the other estimators given in (14), (15), and (16). Furthermore, for α = 0.2, $\widehat{ICPT}_1(X; t_1, t_2)$ produces better results than $\widehat{ICPT}_3(X; t_1, t_2)$, while $\widehat{ICPT}_2(X; t_1, t_2)$ yields poor estimates, as its MSE is higher than that of the other estimators. Also, for α = 3.5, there is only a slight difference among the first, second, and third estimators, and the fourth estimator is significantly better. As expected, Tables 1 and 2 show that ICPT, as a measure of uncertainty, declines for a shrinking interval. Generally, we conclude that the kernel-smoothed estimator gives better estimates of ICPT than the other proposed estimators in terms of the MSE. Also, the MSE of the proposed estimators decreases with increasing sample size, reflecting the dependence of the MSE of the empirical estimators on the sample size.
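A minimal Monte Carlo sketch of this experiment, reusing the `icpt1`-`icpt4` sketches above; reading Exp(0.5) as rate 0.5 (mean 2) is my assumption, as is the seed:

```python
# Monte Carlo bias/MSE sketch for the four ICPT estimators.
import numpy as np

rng = np.random.default_rng(1)

def mc_bias_mse(estimator, exact, n, t1, t2, alpha, reps=1000):
    est = []
    for _ in range(reps):
        xs = rng.exponential(scale=2.0, size=n)     # Exp(0.5) read as rate 0.5 (assumed)
        try:
            est.append(estimator(xs, t1, t2, alpha))
        except ValueError:                          # too few points in (t1, t2)
            continue
    est = np.asarray(est)
    return est.mean() - exact, np.mean((est - exact) ** 2)

# Truncation limits (0.2, 4) and alpha = 3.5; exact value taken from Table 1.
for est in (icpt1, icpt2, icpt3, icpt4):
    print(est.__name__, mc_bias_mse(est, exact=-0.53930, n=30,
                                    t1=0.2, t2=4.0, alpha=3.5))
```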
Table 1: Bias and MSE of $\widehat{ICPT}_1(X; t_1, t_2)$, $\widehat{ICPT}_2(X; t_1, t_2)$, $\widehat{ICPT}_3(X; t_1, t_2)$, and $\widehat{ICPT}_4(X; t_1, t_2)$ for α = 3.5 and different truncation limits (n = 30, 35, 40, 50, 60).

(t1, t2)    n   Exact value   Bias1/MSE1        Bias2/MSE2        Bias3/MSE3        Bias4/MSE4
(0.1,4.5)  30   -0.50468      0.54832/0.32573   0.49378/0.28433   0.51636/0.30243   0.20438/0.17744
           35                 0.54983/0.32429   0.50497/0.28408   0.52796/0.30195   0.23830/0.16294
           40                 0.55001/0.32373   0.50943/0.28386   0.52995/0.29999   0.28788/0.16066
           50                 0.55571/0.32236   0.51445/0.28234   0.53018/0.29798   0.35550/0.17280
           60                 0.55720/0.32132   0.52253/0.28551   0.53218/0.29726   0.38246/0.17472
(0.2,4)    30   -0.53930      0.52237/0.33138   0.43733/0.27303   0.47260/0.29833   0.14750/0.25057
           35                 0.53415/0.32091   0.45608/0.26707   0.48867/0.28337   0.22450/0.21577
           40                 0.53631/0.31804   0.47334/0.26538   0.49120/0.27857   0.26394/0.20544
           50                 0.53835/0.31399   0.48180/0.26242   0.49168/0.27427   0.32690/0.19580
           60                 0.54001/0.31037   0.48255/0.25854   0.49838/0.27144   0.36282/0.17503
(0.3,3.9)  30   -0.71898      0.66016/0.59085   0.58444/0.46060   0.60264/0.49963   0.28883/0.38595
           35                 0.67913/0.52852   0.60971/0.44713   0.61856/0.46831   0.37049/0.37528
           40                 0.68008/0.50869   0.61383/0.43901   0.63064/0.45793   0.42447/0.34087
           50                 0.68175/0.49875   0.62145/0.43391   0.64163/0.45061   0.49303/0.33731
           60                 0.68447/0.49486   0.62355/0.42886   0.64253/0.44791   0.53037/0.33580
The nonparametric estimators of the distribution function are occasionally considered as plotting positions, because they supply the ordinate values in plotting the distribution function.
Table 2: Bias and MSE of $\widehat{ICPT}_1(X; t_1, t_2)$, $\widehat{ICPT}_2(X; t_1, t_2)$, $\widehat{ICPT}_3(X; t_1, t_2)$, and $\widehat{ICPT}_4(X; t_1, t_2)$ for α = 0.2 and different truncation limits (n = 30, 35, 40, 50, 60).

(t1, t2)    n   Exact value   Bias1/MSE1         Bias2/MSE2         Bias3/MSE3         Bias4/MSE4
(0.1,4.5)  30   3.81827       0.01443/0.62585    0.20562/0.85397    0.15292/0.84328    0.31217/0.46912
           35                 -0.08361/0.40997   0.10770/0.63329    0.03216/0.50496    0.19145/0.31041
           40                 -0.13264/0.38650   0.01795/0.46097    0.00052/0.41305    0.12438/0.27354
           50                 -0.2265/0.21496    -0.12186/0.27259   -0.16442/0.23459   0.00011/0.15006
           60                 -0.26605/0.17343   -0.19538/0.20058   -0.20950/0.18536   -0.10140/0.11435
(0.2,4)    30   3.19537       0.0668/0.45989     0.21781/0.60482    0.13608/0.55571    0.33128/0.50470
           35                 -0.02364/0.25232   0.091067/0.42669   0.07336/0.3542     0.17299/0.21602
           40                 -0.04258/0.23084   0.02478/0.31252    0.00027/0.26733    0.11015/0.19296
           50                 -0.14634/0.12285   -0.05928/0.15386   -0.07653/0.13229   0.01410/0.11127
           60                 -0.15977/0.09937   -0.10696/0.13898   -0.14774/0.10505   -0.053750/0.07863
(0.3,3.9)  30   3.04031       0.06254/0.52412    0.15421/0.42303    0.10033/0.45151    0.21734/0.35308
           35                 0.03341/0.27368    0.04638/0.23737    0.03507/0.26058    0.14486/0.22552
           40                 -0.09178/0.21888   0.02148/0.15290    0.01607/0.18283    0.07261/0.20835
           50                 -0.15733/0.17706   -0.10894/0.11774   -0.11561/0.13794   0.04314/0.09471
           60                 -0.18802/0.10640   -0.15383/0.09217   -0.15181/0.10345   -0.10009/0.07318
Figure 1: MSE of the four estimators against sample size for the fixed truncation limit (0.2, 4): (I) α = 0.2; (II) α = 3.5.
6. Real data
In this section, a real-life data set is examined to illustrate the applicability and usefulness of the best-performing estimator of ICPT. For this purpose, we consider the vinyl chloride data set obtained from clean upgradient groundwater monitoring wells [2]. Vinyl chloride is an unstable organic compound. In environmental investigations this compound is of extraordinary significance, because it is both anthropogenic and carcinogenic. Nonetheless, low levels of this compound are detected in many background monitoring wells; such low-level detections in clean upgradient background monitoring wells are attributed to cross-contamination from air or gas or from the analytical system itself. The data set (in µg/L) is: 5.1, 1.2, 1.3, 0.6, 0.5, 2.4, 0.5, 1.1, 8.0, 0.8, 0.4, 0.6, 0.9, 0.4, 2.0, 0.5, 5.3, 3.2, 2.7, 2.9, 2.5, 2.3, 1.0, 0.2, 0.1, 0.1, 1.8, 0.9, 2.0, 4.0, 6.8, 1.2, 0.4, 0.2. It was fitted with an exponential distribution by [21], who reported that it follows Exp(0.5320814). To examine the behavior of the ICPT, we computed estimates $\widehat{ICPT}_4(X; t_1, t_2)$ using the best-performing estimator for different truncation limits and α = 0.2, 3.5, as shown in Table 3. The estimated values are found to be decreasing in $t_1$ and increasing in $t_2$ for α = 0.2, 3.5. So, as the fourth estimator of ICPT (the doubly truncated CPT) increases (decreases), the dispersion of the vinyl chloride measurements from clean upgradient groundwater monitoring wells increases (decreases). As expected, for $0 < \alpha \ne 1$, ICPT is an increasing function of the interval. It is worth noting that this behavior agrees with the monotonicity of $ICPT(X; t_1, t_2)$ for Exp(0.5320814) and α = 0.2, 3.5.
Table 3: Kernel estimates of $\widehat{ICPT}_4(X; t_1, t_2)$ for the vinyl chloride data for different truncation limits (t1, t2) and α = 0.2, 3.5.

α\(t1, t2)   (0.4,2.9)   (0.6,2.9)   (0.8,2.9)   (1,2.9)     (0.2,1.8)   (0.2,2)     (0.2,2.4)   (0.2,2.8)
0.2          2.035697    1.84515     1.658554    1.47661     1.108958    1.073237    1.576643    1.94168
3.5          -0.419520   -0.508537   -0.641847   -0.919710   -1.837256   -1.312966   -0.754232   -0.437168
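For illustration, the kernel estimate can be recomputed on the data with the `icpt4` sketch from Section 4; exact agreement with Table 3 depends on the bandwidth, which the paper does not report:

```python
# Kernel-smoothed ICPT estimates for the vinyl chloride data.
import numpy as np

vinyl = np.array([5.1, 1.2, 1.3, 0.6, 0.5, 2.4, 0.5, 1.1, 8.0, 0.8, 0.4, 0.6,
                  0.9, 0.4, 2.0, 0.5, 5.3, 3.2, 2.7, 2.9, 2.5, 2.3, 1.0, 0.2,
                  0.1, 0.1, 1.8, 0.9, 2.0, 4.0, 6.8, 1.2, 0.4, 0.2])

for t1, t2 in [(0.4, 2.9), (0.6, 2.9), (0.8, 2.9), (1.0, 2.9)]:
    for alpha in (0.2, 3.5):
        print((t1, t2), alpha, icpt4(vinyl, t1, t2, alpha))
```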
7. Conclusion
In information theory and in reliability, several uncertainty measures play a central role. In this paper, we first studied the notion of doubly truncated (interval) Tsallis entropy and proposed the doubly truncated (interval) cumulative residual Tsallis entropy (ICRT) and the doubly truncated (interval) cumulative past Tsallis entropy (ICPT), studying some of their properties and their relations with the hazard rate (reversed hazard rate) and the mean residual (past) life. We also introduced ordering classes for ICRT and ICPT and gave some characterizations. Finally, we proposed four nonparametric estimators and compared their performance on simulated data; based on the best-performing estimator, a real data set was additionally examined.
References
[1] Barlow, R. E. and Proschan, F. (1975). Statistical Theory of Reliability and Life Testing: Probability Models. Florida State University, Tallahassee.
[2] Bhaumik, D. K., Kapur, K., and Gibbons, R. D. (2009). Testing parameters of a gamma distribution for small samples. Technometrics, 51(3):326-334.
[3] Ebrahimi, N. (1996). How to measure uncertainty in the residual life time distribution. Sankhyā: The Indian Journal of Statistics, Series A, 48-56.
[4] Gupta, R. D. and Nanda, A. K. (2002). α- and β-entropies and relative entropies of distributions. Journal of Statistical Theory and Applications, 1(3):177-190.
[5] Khorashadizadeh, M., Rezaei Roknabadi, A. H. and Mohtashami Borzadaran, G. R. (2013). Doubly truncated (interval) cumulative residual and past entropy. Statistics & Probability Letters, 83(5):1464-1471.
[6] Kumar, V. and Taneja, H. C. (2011). A generalized entropy-based residual lifetime distributions. International Journal of Biomathematics, 4(02):171-184.
[7] Kundu, C. and Singh, S. (2020). On generalized interval entropy. Communications in Statistics - Theory and Methods, 49(8):1989-2007.
[8] Kumar, V. (2017). Characterization results based on dynamic Tsallis cumulative residual entropy. Communications in Statistics - Theory and Methods, 46(17):8343-8354.
[9] Lutz, E. (2003). Anomalous diffusion and Tsallis statistics in an optical lattice. Physical Review A, 67(5):051402.
[10] Nanda, A. K. and Paul, P. (2006). Some results on generalized residual entropy. Information Sciences, 176(1):27-47.
[11] Navarro, J. and Ruiz, J. M. (1996). Failure-rate functions for doubly-truncated random variables. IEEE Transactions on Reliability, 45(4):685-690.
[12] Navarro, J. and Rubio, R. (2011). A note on necessary and sufficient conditions for ordering properties of coherent systems with exchangeable components. Naval Research Logistics (NRL), 58(5):478-489.
[13] Nourbakhsh, M. and Yari, G. (2014). Doubly truncated generalized entropy. In Proceedings of the 1st International Electronic Conference on Entropy and its Applications, 3-21 November 2014.
[14] Misagh, F. (2012). Some properties of interval entropy function and their applications. World Applied Sciences Journal, 20(12):1666-1671.
[15] Moharana, R. and Kayal, S. (2020). Properties of Shannon entropy for double truncated random variables and its applications. Journal of Statistical Theory and Applications, 19(2):261-273.
[16] Mohamed, M. S. (2020). On cumulative Tsallis entropy and its dynamic past version. Indian Journal of Pure and Applied Mathematics, 51(4):1903-1917.
[17] Moharana, R. and Kayal, S. (2019). On shift-dependent generalized entropies for doubly truncated random variable. Journal of Statistics and Management Systems, 22(5):923-942.
[18] Rao, M., Chen, Y., Vemuri, B. C. and Wang, F. (2004). Cumulative residual entropy: a new measure of information. IEEE Transactions on Information Theory, 50(6):1220-1228.
[19] Sati, M. M. and Gupta, N. (2015). Some characterization results on dynamic cumulative residual Tsallis entropy. Journal of Probability and Statistics, 8 pages, 287-294.
[20] Shaked, M. and Shanthikumar, J. G. (Eds.) (2007). Stochastic Orders. Springer, New York.
[21] Shanker, R., Hagos, F., and Sujatha, S. (2015). On modeling of lifetimes data using exponential and Lindley distributions. Biometrics & Biostatistics International Journal, 2(5):1-9.
[22] Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423.
[23] Tong, S., Bezerianos, A., Paul, J., Zhu, Y. and Thakor, N. (2002). Nonextensive entropy measure of EEG following brain injury from cardiac arrest. Physica A: Statistical Mechanics and its Applications, 305(3-4):619-628.
[24] Tsallis, C. (1988). Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52(1):479-487.
[25] Tsallis, C. and Brigatti, E. (2004). Nonextensive statistical mechanics: A brief introduction. Continuum Mechanics and Thermodynamics, 16(3):223-235.
[26] Zacks, S. (1992). Introduction to Reliability Analysis: Probability Models and Methods. Springer-Verlag, New York.