
Sharma-Mittal Entropy Properties on Generalized (k) Record Values

Jerin Paul¹ and P. Yageen Thomas²

¹Department of Statistics, Vimala College (Autonomous), Thrissur, India
²Department of Statistics, University of Kerala, Kariavattom, Trivandrum, India
jerinstat@gmail.com & yageenthomas@gmail.com

Abstract

In this paper, we derive the Sharma-Mittal entropy of generalized (k) record values and analyse some of its important properties. We establish some bounds for the Sharma-Mittal entropy of generalized (k) record values. We generate a characterization result based on the properties of the Sharma-Mittal entropy of generalized (k) record values for the exponential distribution. We further establish some distribution-free properties of the Sharma-Mittal divergence information between the distribution of a generalized (k) record value and the parent distribution. We extend the concept of Sharma-Mittal entropy to the concomitants of generalized (k) record values arising from a Farlie-Gumbel-Morgenstern (FGM) bivariate distribution. Also, we consider the residual Sharma-Mittal entropy and use it to describe some properties of generalized (k) record values.

Keywords: Generalized (k) record values, Sharma-Mittal entropy, Maximum entropy principle, Characterization, Concomitants of generalized (k) record values, Residual Sharma-Mittal entropy

1. Introduction

In equilibrium thermodynamics, physicists originally developed the notion of entropy, which was later extended through the development of statistical mechanics. Shannon [30] introduced a generalization of the Boltzmann-Gibbs entropy, which later became known as Shannon entropy or the Shannon information measure. Shannon entropy represents an absolute limit on the best possible lossless compression of any communication. More generally, the concept of entropy is a measure of the uncertainty associated with a random variable. For a continuous random variable X with probability density function (pdf) f, the Shannon entropy is defined by

H(X) = -∫_{-∞}^{∞} f(x) log f(x) dx.    (1)

In the continuous case, H(X) is also referred to as the differential entropy. It is known that H(X) measures the uniformity of f. If H(X_1) > H(X_2) for two random variables with pdfs f_1 and f_2 respectively, then it is more difficult to predict outcomes of X_1 than to predict outcomes of X_2 [see, 37]. One main drawback of H(X) is that for some probability distributions it may be negative, and then it is no longer an uncertainty measure. This drawback is removed in generalized entropies such as the Renyi entropy [29], the Tsallis entropy [36] and so on.
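The sign issue is easy to exhibit numerically. The following sketch (an illustration of ours, not from the paper) computes the differential entropy (1) of a uniform density on (0, 1/2) by trapezoidal quadrature; the exact value is -log 2 < 0.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule: sum of (y_i + y_{i+1})/2 * dx."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def shannon_entropy(f, xs):
    """H(X) = -integral of f(x) log f(x) dx over the grid xs, as in (1)."""
    fx = np.maximum(f(xs), 1e-300)          # guard against log(0)
    return -trapezoid(fx * np.log(fx), xs)

# Uniform on (0, 1/2): f(x) = 2, so H(X) = -log 2 < 0,
# showing that differential entropy can be negative.
xs = np.linspace(1e-6, 0.5 - 1e-6, 200_000)
H = shannon_entropy(lambda x: np.full_like(x, 2.0), xs)
print(H)  # close to -log 2 ≈ -0.6931
```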

Subsequently, Sharma and Mittal [31] introduced a two-parameter entropy measure H_{α,β}(X) of a random variable X with pdf f as

H_{α,β}(X) = 1/(1-β) [ ( ∫_{-∞}^{∞} {f(x)}^α dx )^{(1-β)/(1-α)} - 1 ],    (2)

with α, β > 0, α ≠ 1, β ≠ 1 and α ≠ β. If we take the limit β → 1 in (2), the Sharma-Mittal entropy becomes the Renyi entropy [29], which is given by

H_{α,1}(X) = 1/(1-α) log ∫_{-∞}^{∞} {f(x)}^α dx.    (3)

If we take the limit β → α in (2), the resulting expression is the Tsallis entropy [36], which is given by

H_{α,α}(X) = 1/(1-α) [ ∫_{-∞}^{∞} {f(x)}^α dx - 1 ].    (4)

In the limiting case when both parameters approach 1, we recover the ordinary Shannon entropy [30] as given in (1).
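These limiting relations can be checked numerically. In the sketch below (an illustrative check of ours, not part of the paper) X is standard exponential, for which ∫ {f(x)}^α dx = 1/α; hence the Renyi entropy of order α = 2 is log 2 and the Tsallis entropy is 1/α = 1/2. Evaluating (2) with β close to 1 and with β close to α reproduces these values.

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def sharma_mittal(f, xs, alpha, beta):
    """H_{alpha,beta}(X) of eq. (2), by quadrature on the grid xs."""
    integral = trapezoid(f(xs) ** alpha, xs)
    return (integral ** ((1 - beta) / (1 - alpha)) - 1) / (1 - beta)

f = lambda x: np.exp(-x)                    # standard exponential pdf
xs = np.linspace(0.0, 60.0, 400_000)
alpha = 2.0

renyi_limit = sharma_mittal(f, xs, alpha, 1.0 - 1e-6)      # beta -> 1, eq. (3)
tsallis_limit = sharma_mittal(f, xs, alpha, alpha - 1e-9)  # beta -> alpha, eq. (4)
print(renyi_limit, tsallis_limit)   # ≈ log 2 ≈ 0.6931 and 1/alpha = 0.5
```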

One may observe several applications of Sharma-Mittal entropy in the available literature. Frank and Daffertshofer [10] established the relation between anomalous diffusion processes and Sharma-Mittal entropy. Masi [17] explained how this entropy measure unifies the Renyi and Tsallis entropies. For more details on the applications of this entropy see Akturk et al. [4] and Kosztołowicz and Lewandowska [14]. Nielsen and Nock [21] obtained a closed-form formula for the Sharma-Mittal entropy of any distribution belonging to the exponential family of distributions.

Successive extremes occurring in a sequence of independent and identically distributed (iid) random variables were called by Chandler [8] the record values of the sequence. Properties of record statistics arising from a distribution help to understand the intrinsic properties of the parent distribution as well. A limitation one encounters in dealing with statistical inference problems based on classical record values is their limited occurrence, as the expected value of the inter-arrival times of records is infinite [see, 11]. Also, the occurrence of an outlier in a sequence of random variables arrests the subsequent realization of record values. However, one may observe that generally the kth record values, as introduced by Dziubdziela and Kopocinski [9], occur more frequently than the classical records. The reason for this is that the generation of the sequence of upper (k) records makes k - 1 of the upper extreme values (outliers) of the sequence incapable of occurring in the constructed record sequence. A similar property holds for the generated sequence of lower (k) record values as well. Suppose {X_n} is a sequence of iid random variables. Then, for a positive integer k ≥ 1, the sequence of upper kth record times {T_{U(n,k)}, n ≥ 1} is defined as [see, 20, p. 82]:

T_{U(1,k)} = k,

and, for n ≥ 1,

T_{U(n+1,k)} = min{ j : j > T_{U(n,k)}, X_j > X_{T_{U(n,k)}-k+1:T_{U(n,k)}} },

where X_{i:m} denotes the ith order statistic in a sample of size m. Now if we write

X_{U(n,k)} = X_{T_{U(n,k)}-k+1:T_{U(n,k)}}, for n = 1, 2, ...,

then {X_{U(n,k)}} is known as the sequence of kth upper record values. In a similar manner we can define the sequence {X_{L(n,k)}} of kth lower record values as well. It is to be noted that the kth member of the sequence of classical record values is also called the kth record value, which conflicts with the kth record values as defined in [9]. Pointing out this conflict in usage, and since the definition generates the classical record values for k = 1, Minimol and Thomas [18, 19], Paul [22], Paul and Thomas [23, 24, 25] and Thomas and Paul [34, 35] have called the kth record values of Dziubdziela and Kopocinski [9] the generalized (k) record values. Agreeing with the contention of the above authors, we also call the kth record values of [9] generalized (k) record values throughout this paper.

Suppose {X_i, i ≥ 1} is a sequence of random variables with absolutely continuous cdf F(x) and pdf f(x). Let {X_{U(n,k)}} be the sequence of generalized upper (k) record values (GURV's) generated from the sequence {X_i}. Then the pdf f_{X_{U(n,k)}}(x) of X_{U(n,k)} is given by [see, 6]

f_{X_{U(n,k)}}(x) = (k^n/(n-1)!) [-ln{1 - F(x)}]^{n-1} [1 - F(x)]^{k-1} f(x), -∞ < x < ∞, n = 1, 2, ....    (5)

In a similar manner we can define generalized lower (k) record values (GLRV's) as well. If we write X_{L(n,k)} to denote the nth GLRV, then the pdf f_{X_{L(n,k)}}(x) of X_{L(n,k)} is given by [see, 28]

f_{X_{L(n,k)}}(x) = (k^n/(n-1)!) [-ln{F(x)}]^{n-1} [F(x)]^{k-1} f(x), -∞ < x < ∞, n = 1, 2, ....    (6)

Generalized (k) record values arise naturally in problems such as industrial stress testing, meteorological analysis, hydrology, sporting and athletic events, stock markets and seismology. Anderson et al. [5] have attributed some connection between record statistics and the strain released in earthquakes. Majumdar and Ziff [16] have detailed the involvement of record theory in its multiple applications in spin glasses, adaptive processes and evolutionary models of biological populations. See also Sibani and Henrik [33] for some record dynamics arising in physical systems. For more details on applications of record values see Arnold et al. [6], Nevzorov [20] and the references therein.
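The defining recursion for the upper (k) record sequence amounts to tracking the successive values of the kth largest observation, which a size-k min-heap does directly. The sketch below (our illustration; the choices n = 4, k = 3 and the Exp(1) parent are arbitrary) simulates X_{U(n,k)} this way. By (5), for an Exp(1) parent X_{U(n,k)} has a gamma law with shape n and rate k, so the sample mean should be close to n/k.

```python
import heapq, random

def kth_upper_records(stream, k, n):
    """First n generalized upper (k) record values of `stream`:
    the successive values taken by the k-th largest observation."""
    it = iter(stream)
    heap = [next(it) for _ in range(k)]   # min-heap holding the top-k so far
    heapq.heapify(heap)
    records = [heap[0]]                   # X_{U(1,k)} = X_{1:k} at time T = k
    while len(records) < n:
        x = next(it)
        if x > heap[0]:                   # observation beats the k-th largest
            heapq.heapreplace(heap, x)    # it enters the current top-k
            records.append(heap[0])       # new k-th largest = new record value
    return records

random.seed(7)
n, k, reps = 4, 3, 20_000
exp_stream = lambda: iter(lambda: random.expovariate(1.0), None)
est = sum(kth_upper_records(exp_stream(), k, n)[-1] for _ in range(reps)) / reps
print(est)   # sample mean of X_{U(4,3)}; theory gives n/k = 4/3
```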

Of late, several articles have been published on various information measures associated with record values. Baratpour et al. [7] studied some information properties of records based on Shannon entropy. Abbasnejad and Arghami [1] studied the Renyi entropy properties of records and compared the same information with that of the iid observations. Baratpour et al. [7], Ahmadi and Fashandi [2] and Paul and Thomas [23, 24, 26, 27] have obtained characterization results based on the Shannon, Renyi, Tsallis and Mathai-Haubold entropies of record values. Shannon information in k-records was studied by Madadi and Tata [15].

The rest of this paper is organized as follows. In Section 2, we express the Sharma-Mittal entropy of the nth generalized upper (k) record arising from an arbitrary distribution in terms of the Sharma-Mittal entropy of the nth generalized upper (k) record arising from a standard exponential distribution. Section 3 provides bounds for the Sharma-Mittal entropy of generalized (k) records. Section 4 characterizes the exponential distribution by maximizing the Sharma-Mittal entropy of generalized (k) record values arising from a specified class of distributions. Section 5 contains expressions for some measures associated with Sharma-Mittal entropy on generalized (k) records and concomitants of generalized (k) records. In Subsection 5.1, it is shown that the Sharma-Mittal divergence information between a generalized (k) record value and the parent distribution is distribution-free. Subsection 5.2 contains the representation of the Sharma-Mittal entropy of concomitants of generalized (k) record values arising from the FGM family of bivariate distributions. In Subsection 5.3, we provide an expression for the residual Sharma-Mittal entropy of the nth generalized upper (k) record arising from an arbitrary distribution in terms of the corresponding expression for the nth generalized upper (k) record arising from a standard uniform distribution.

2. Sharma-Mittal Entropy of Generalized (k) Record Values

J. Paul, P. Y. Thomas. SM ENTROPY PROPERTIES ON GRVS. RT&A, No 1 (67), Volume 17, March 2022.

In this section, we describe some properties of the Sharma-Mittal entropy of generalized (k) record values. In the following theorem, we express the Sharma-Mittal entropy of the nth generalized upper (k) record arising from an arbitrary distribution in terms of the Sharma-Mittal entropy of the nth generalized upper (k) record arising from the standard exponential distribution. In this theorem and in the remaining part of this paper we use the notation G(a, b) to denote the well known gamma distribution with pdf

g_{a,b}(x) = (a^b/Γ(b)) e^{-ax} x^{b-1}, a > 0, b > 0, x > 0.

Theorem 1. Let {X_i, i ≥ 1} be a sequence of iid continuous random variables from a distribution with cdf F(x), pdf f(x) and quantile function F^{-1}(·). Let {X_{U(n,k)}} be the associated sequence of generalized upper (k) record values. Then the Sharma-Mittal entropy of X_{U(n,k)} can be expressed as

H_{α,β}(X_{U(n,k)}) = 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ({Γ(n)}^α [(k-1)α+1]^{(n-1)α+1}) E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(1 - e^{-U}))}^{α-1} ] )^{(1-β)/(1-α)} - 1 ],    (7)

where U is a random variable with the G((k-1)α+1, (n-1)α+1) distribution.

Proof. The Sharma-Mittal entropy of the nth generalized upper (k) record value is given by

H_{α,β}(X_{U(n,k)}) = 1/(1-β) [ ( ∫_{-∞}^{∞} ( (k^n {-log(1-F(x))}^{n-1} [1-F(x)]^{k-1} f(x)) / Γ(n) )^α dx )^{(1-β)/(1-α)} - 1 ].

On putting u = -log[1 - F(x)], x = F^{-1}(1 - e^{-u}) and du = (f(x)/(1-F(x))) dx, we get

H_{α,β}(X_{U(n,k)}) = 1/(1-β) [ ( ∫_0^∞ (k^{nα} e^{-u[(k-1)α+1]} u^{(n-1)α}) / {Γ(n)}^α {f(F^{-1}(1 - e^{-u}))}^{α-1} du )^{(1-β)/(1-α)} - 1 ]

= 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ([(k-1)α+1]^{(n-1)α+1} {Γ(n)}^α) ∫_0^∞ ([(k-1)α+1]^{(n-1)α+1} / Γ((n-1)α+1)) e^{-u[(k-1)α+1]} u^{(n-1)α} {f(F^{-1}(1 - e^{-u}))}^{α-1} du )^{(1-β)/(1-α)} - 1 ]

= 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ([(k-1)α+1]^{(n-1)α+1} {Γ(n)}^α) E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(1 - e^{-U}))}^{α-1} ] )^{(1-β)/(1-α)} - 1 ].    (8)

■

We now state the following theorem without proof, as its proof is similar to that of Theorem 1.

Theorem 2. Let {X_i, i ≥ 1} be a sequence of iid continuous random variables with common cdf F(x), pdf f(x) and quantile function F^{-1}(·). Let {X_{L(n,k)}} be the associated sequence of generalized lower (k) record values. Then the Sharma-Mittal entropy of X_{L(n,k)} can be expressed as

H_{α,β}(X_{L(n,k)}) = 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ([(k-1)α+1]^{(n-1)α+1} {Γ(n)}^α) E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(e^{-U}))}^{α-1} ] )^{(1-β)/(1-α)} - 1 ],    (9)

where U is a random variable with the G((k-1)α+1, (n-1)α+1) distribution.

The following is a corollary to Theorem 1.

Corollary 1. Let {X_i, i ≥ 1} be a sequence of iid continuous random variables arising from the standard exponential distribution. Let {X_{U(n,k)}} be the associated sequence of generalized upper (k) record values. Then the Sharma-Mittal entropy of X_{U(n,k)} can be expressed as

H_{α,β}(X_{U(n,k)}) = 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ({Γ(n)}^α [kα]^{(n-1)α+1}) )^{(1-β)/(1-α)} - 1 ].    (10)

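The closed form (10) can be cross-checked against direct quadrature of the record pdf. Below is an illustrative check of ours (the parameter choices α = 2, β = 3, n = 2, k = 3 are arbitrary); for a standard exponential parent the record pdf (5) reduces to the gamma density with shape n and rate k.

```python
import numpy as np
from math import gamma

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

alpha, beta, n, k = 2.0, 3.0, 2, 3      # illustrative parameters
expo = (1 - beta) / (1 - alpha)

# Closed form (10).
inner = k ** (n * alpha) * gamma((n - 1) * alpha + 1) / (
    gamma(n) ** alpha * (k * alpha) ** ((n - 1) * alpha + 1))
closed = (inner**expo - 1) / (1 - beta)

# Direct quadrature: for a standard exponential parent, (5) becomes
# the Gamma(shape n, rate k) pdf.
x = np.linspace(1e-9, 30.0, 400_000)
f_rec = k**n * x ** (n - 1) * np.exp(-k * x) / gamma(n)
direct = (trapezoid(f_rec**alpha, x) ** expo - 1) / (1 - beta)

print(closed, direct)   # both ≈ 0.21875 for these parameters
```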

The following theorem follows from Theorems 1 and 2 as a consequence of Corollary 1.

Theorem 3. Let {X_i, i ≥ 1} be a sequence of iid continuous random variables having a common cdf F(x), pdf f(x) and quantile function F^{-1}(·). Let {X_{U(n,k)}} and {X_{L(n,k)}} be the associated sequences of generalized upper and lower (k) record values respectively. Then the Sharma-Mittal entropies of X_{U(n,k)} and X_{L(n,k)} can be expressed as

H_{α,β}(X_{U(n,k)}) = ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(1 - e^{-U}))}^{α-1} ] ]^{(1-β)/(1-α)} - 1/(1-β),    (11)

and

H_{α,β}(X_{L(n,k)}) = ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(e^{-U}))}^{α-1} ] ]^{(1-β)/(1-α)} - 1/(1-β),    (12)

where X*_{U(n,k)} denotes the nth generalized upper (k) record value arising from the standard exponential distribution and U is a random variable with the G((k-1)α+1, (n-1)α+1) distribution.

3. Bounds for Sharma-Mittal Entropy of Generalized (k) Record Values

Baratpour et al. [7] and Abbasnejad and Arghami [1] have obtained bounds for the Shannon entropy of records and the Renyi entropy of records respectively. In this section, we use the relation (7) to derive some bounds on the Sharma-Mittal entropy of generalized upper (k) record values.

Theorem 4. If X has pdf f(x) and the Sharma-Mittal entropy H_{α,β}(X_{U(n,k)}) of X_{U(n,k)} arising from f(x) is such that H_{α,β}(X_{U(n,k)}) < ∞, then we have

(a) for α > 1 and 0 < β < 1,

H_{α,β}(X_{U(n,k)}) ≤ ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} B_n S_α(f) ]^{(1-β)/(1-α)} - 1/(1-β), and

(b) for 0 < α < 1 and β > 1,

H_{α,β}(X_{U(n,k)}) ≥ ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} B_n S_α(f) ]^{(1-β)/(1-α)} - 1/(1-β),

where

(i) X*_{U(n,k)} denotes the nth generalized upper (k) record value arising from the standard exponential distribution,

(ii) B_n = ((k-1)α+1) e^{-(n-1)α} [(n-1)α]^{(n-1)α} / Γ((n-1)α+1), and

(iii) S_α(f) = ∫_{-∞}^{∞} λ_F(x) {f(x)}^{α-1} dx, where λ_F(x) = f(x)/(1-F(x)) is the hazard function of X.

Proof. By Theorem 3, the Sharma-Mittal entropy of the nth generalized upper (k) record value is given by

H_{α,β}(X_{U(n,k)}) = ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(1 - e^{-U}))}^{α-1} ] ]^{(1-β)/(1-α)} - 1/(1-β),

where g_{(k-1)α+1,(n-1)α+1} is the pdf of the G((k-1)α+1, (n-1)α+1) distribution. Since the mode of this distribution is m_n = (n-1)α/((k-1)α+1), we have

g_{(k-1)α+1,(n-1)α+1}(m_n) = ((k-1)α+1) e^{-(n-1)α} [(n-1)α]^{(n-1)α} / Γ((n-1)α+1) = B_n,

and hence g_{(k-1)α+1,(n-1)α+1}(u) ≤ B_n for all u > 0. Now, for α > 1 and 0 < β < 1,

H_{α,β}(X_{U(n,k)}) = ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} ∫_0^∞ g_{(k-1)α+1,(n-1)α+1}(u) {f(F^{-1}(1 - e^{-u}))}^{α-1} du ]^{(1-β)/(1-α)} - 1/(1-β)

≤ ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} B_n ∫_0^∞ {f(F^{-1}(1 - e^{-u}))}^{α-1} du ]^{(1-β)/(1-α)} - 1/(1-β)

= ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} B_n ∫_{-∞}^{∞} λ_F(y) {f(y)}^{α-1} dy ]^{(1-β)/(1-α)} - 1/(1-β)

= ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} B_n S_α(f) ]^{(1-β)/(1-α)} - 1/(1-β),

where the second-to-last step uses the substitution y = F^{-1}(1 - e^{-u}), so that du = λ_F(y) dy. For 0 < α < 1 and β > 1 the proof is similar. ■

4. Characterization Property by the Sharma-Mittal Entropy of Generalized (k) Record Values

Sometimes the uncertainty prevailing in the system under study is so large that we are curious to know the type of distribution that governs the system. That is, in such a system, we look for a distribution that possesses maximum entropy, as suggested in Jaynes [12]. This section derives the exponential distribution as the distribution that maximizes the Sharma-Mittal entropy of record values under some information constraints. Let C be the class of all distributions with cdf F(x) over the support set R⁺ with F(0) = 0 such that

(i) λ_F(x; θ) = a(θ) b(x), and

(ii) b(x) ≤ M, where M is a positive real constant, b(x) = B'(x), and b(x) and a(θ) are non-negative functions of x and θ respectively.

Now we prove the following theorem.

Theorem 5. Under the conditions described above, the Sharma-Mittal entropy H_{α,β}(X_{U(n,k)}) arising from the distribution F(x) is maximum in C if and only if F(x; θ) = 1 - e^{-Ma(θ)x}.

Proof. Let X_{U(n,k)} be the nth generalized upper (k) record value arising from the cdf F(x; θ) ∈ C. Then by (7) we have

H_{α,β}(X_{U(n,k)}) = ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} E_{g_{(k-1)α+1,(n-1)α+1}} [ {f(F^{-1}(1 - e^{-U}))}^{α-1} ] ]^{(1-β)/(1-α)} - 1/(1-β)

= ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( kα/((k-1)α+1) )^{(n-1)α+1} ([(k-1)α+1]^{(n-1)α+1} / Γ((n-1)α+1)) ∫_0^∞ e^{-u[(k-1)α+1]} u^{(n-1)α} {f(F^{-1}(1 - e^{-u}))}^{α-1} du ]^{(1-β)/(1-α)} - 1/(1-β).

Under condition (i), the cumulative hazard is -log(1 - F(x; θ)) = a(θ)B(x), so that F^{-1}(1 - e^{-u}) = B^{-1}(u/a(θ)) and f(F^{-1}(1 - e^{-u})) = a(θ) b(B^{-1}(u/a(θ))) e^{-u}. Hence

H_{α,β}(X_{U(n,k)}) = ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( (kα)^{(n-1)α+1} / Γ((n-1)α+1) ) ∫_0^∞ e^{-ukα} u^{(n-1)α} [a(θ)]^{α-1} b^{α-1}(B^{-1}(u/a(θ))) du ]^{(1-β)/(1-α)} - 1/(1-β).    (13)

Noting that b(x) ≤ M, we have

H_{α,β}(X_{U(n,k)}) ≤ ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) [ ( [a(θ)M]^{α-1} (kα)^{(n-1)α+1} / Γ((n-1)α+1) ) ∫_0^∞ e^{-ukα} u^{(n-1)α} du ]^{(1-β)/(1-α)} - 1/(1-β)    (14)

= ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) { [a(θ)M]^{α-1} }^{(1-β)/(1-α)} - 1/(1-β),

which, on substituting (10), gives

H_{α,β}(X_{U(n,k)}) ≤ 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ({Γ(n)}^α [kα]^{(n-1)α+1}) {a(θ)M}^{α-1} )^{(1-β)/(1-α)} - 1 ].    (15)

This proves the necessary part of the theorem.

Conversely, suppose the nth generalized upper (k) record value arises from F(x; θ) = 1 - e^{-Ma(θ)x}, which is exponential with rate Ma(θ). A direct computation then gives

H_{α,β}(X_{U(n,k)}) = 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1)) / ({Γ(n)}^α [kα]^{(n-1)α+1}) {Ma(θ)}^{α-1} )^{(1-β)/(1-α)} - 1 ].    (16)

Note that the Sharma-Mittal entropy of the nth generalized upper (k) record value X_{U(n,k)} arising from any distribution satisfying conditions (i) and (ii) obeys the inequality (15). As (16) equals the right-hand side of (15), it follows that the exponential distribution attains the maximum Sharma-Mittal entropy in the class C. ■

5. Some Properties of Sharma-Mittal Entropy on Generalized (k) Record Values

This section provides exact expressions for the Sharma-Mittal divergence measure on generalized (k) record values. Further in this section, we derive expressions for Sharma-Mittal entropy of concomitants of generalized upper and lower (k) record values arising from the Farlie-Gumbel-Morgenstern family. In the last part of this section, we derive an expression for residual Sharma-Mittal entropy of generalized upper (k) record values arising from an arbitrary distribution.

5.1. Sharma-Mittal Divergence Measure on Generalized (k) Record Values

Sharma and Mittal [32] introduced a two-parameter divergence measure, the Sharma-Mittal divergence D_{α,β}(f : g) between two distributions with pdfs f(x) and g(x), defined by

D_{α,β}(f : g) = 1/(β-1) [ ( ∫ ( f(x)/g(x) )^{α-1} f(x) dx )^{(1-β)/(1-α)} - 1 ], for all α > 0, α ≠ 1 ≠ β.    (17)

Akturk et al. [3] showed that most of the widely used divergence measures, such as the Renyi, Tsallis, Bhattacharya and Kullback-Leibler divergences, are special cases of the Sharma-Mittal divergence measure.

In this section we study the Sharma-Mittal divergence between the probability distribution of nth generalized upper (k) record value and the parent distribution from which it arises.

Theorem 6. The Sharma-Mittal divergence between the nth generalized upper (k) record and the parent distribution is given by

D_{α,β}(f_{U(n,k)} : f) = 1/(β-1) [ ( (k^{nα} Γ((n-1)α+1)) / ({Γ(n)}^α [(k-1)α+1]^{(n-1)α+1}) )^{(1-β)/(1-α)} - 1 ].    (18)

Proof. The Sharma-Mittal divergence between the nth generalized upper (k) record and the parent distribution is given by

D_{α,β}(f_{U(n,k)} : f) = 1/(β-1) [ ( ∫_{-∞}^{∞} ( (k^n {-log[1-F(x)]}^{n-1} [1-F(x)]^{k-1}) / Γ(n) )^{α-1} f_{X_{U(n,k)}}(x) dx )^{(1-β)/(1-α)} - 1 ].

On putting u = -log[1 - F(x)], so that x = F^{-1}(1 - e^{-u}) and du = (f(x)/(1-F(x))) dx, we get

D_{α,β}(f_{U(n,k)} : f) = 1/(β-1) [ ( ∫_0^∞ (k^{nα} e^{-u[(k-1)α+1]} u^{(n-1)α}) / {Γ(n)}^α du )^{(1-β)/(1-α)} - 1 ]    (19)

= 1/(β-1) [ ( (k^{nα} Γ((n-1)α+1)) / ({Γ(n)}^α [(k-1)α+1]^{(n-1)α+1}) )^{(1-β)/(1-α)} - 1 ].

Hence the theorem. ■

Note 1. The Sharma-Mittal divergence between the nth generalized upper (k) record and the parent distribution can also be represented, on comparing (18) with (10), as

D_{α,β}(f_{U(n,k)} : f) = 1/(1-β) - ( H_{α,β}(X*_{U(n,k)}) + 1/(1-β) ) ( kα/((k-1)α+1) )^{((n-1)α+1)(1-β)/(1-α)},    (20)

where X*_{U(n,k)} denotes the nth generalized upper (k) record value arising from the standard exponential distribution.

Remark 1. The Sharma-Mittal divergence between the nth generalized upper (k) record value X_{U(n,k)} and the parent distribution, as given by (18) and (20), does not involve the parent distribution F; this establishes that the divergence is a distribution-free information measure.

5.2. Sharma-Mittal Entropy of Concomitants of Generalized (k) Records from the Farlie-Gumbel-Morgenstern (FGM) Family of Distributions

Let X and Y be two random variables with cdfs F_X(x) and F_Y(y), corresponding pdfs f_X(x) and f_Y(y), and joint cdf F(x, y) given by [see, 13]

F(x, y) = F_X(x) F_Y(y) {1 + γ (1 - F_X(x))(1 - F_Y(y))}, -1 ≤ γ ≤ 1,    (21)

where γ is known as the association parameter. The family of distributions having cdfs of the above form is called the Farlie-Gumbel-Morgenstern (FGM) family of distributions. Obviously (21) includes the case of independence when γ = 0. The joint pdf corresponding to the cdf defined in (21) is given by

f(x, y) = f_X(x) f_Y(y) {1 + γ (1 - 2F_X(x))(1 - 2F_Y(y))}, -1 ≤ γ ≤ 1.    (22)

Let (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) be two-dimensional random vectors with the common bivariate distribution function F(x, y) given in (21). If we construct the sequence of GURV's {X_{U(n,k)}} from the marginal sequence {X_i}, then the Y-value occurring in an ordered pair whose X observation equals X_{U(n,k)} is called the concomitant of the nth generalized upper (k) record value, and is denoted by Y_{U[n,k]}. Similarly, the concomitant Y_{L[n,k]} of the nth GLRV X_{L(n,k)} can be defined. The pdf of Y_{U[n,k]} is denoted by f_{Y_{U[n,k]}} and is given by

f_{Y_{U[n,k]}}(y) = ∫_{-∞}^{∞} f_{Y|X}(y|x) f_{X_{U(n,k)}}(x) dx = f_Y(y) {1 - γ_n (1 - 2F_Y(y))},    (23)

where γ_n = [1 - 2(k/(k+1))^n] γ.

Using (2) and (23), we can represent the Sharma-Mittal entropy of the concomitant of the nth generalized upper (k) record value as follows:

H_{α,β}(Y_{U[n,k]}) = 1/(1-β) [ ( ∫_{-∞}^{∞} ( f_Y(y) {1 - γ_n(1 - 2F_Y(y))} )^α dy )^{(1-β)/(1-α)} - 1 ]

= 1/(1-β) [ ( ∫_{-∞}^{∞} {f_Y(y)}^α {1 - γ_n(1 - 2F_Y(y))}^α dy )^{(1-β)/(1-α)} - 1 ].

On putting F_Y(y) = u, y = F_Y^{-1}(u) and f_Y(y) dy = du, we get

H_{α,β}(Y_{U[n,k]}) = 1/(1-β) [ ( ∫_0^1 {f_Y(F_Y^{-1}(u))}^{α-1} {1 - γ_n(1 - 2u)}^α du )^{(1-β)/(1-α)} - 1 ]

= 1/(1-β) [ ( E_U [ {f_Y(F_Y^{-1}(U))}^{α-1} {1 - γ_n(1 - 2U)}^α ] )^{(1-β)/(1-α)} - 1 ],

where U is a uniformly distributed random variable over (0, 1). Similarly, the Sharma-Mittal entropy of the concomitant of the nth generalized lower (k) record can be represented by

H_{α,β}(Y_{L[n,k]}) = 1/(1-β) [ ( E_U [ {f_Y(F_Y^{-1}(1 - U))}^{α-1} {1 + γ_n(1 - 2U)}^α ] )^{(1-β)/(1-α)} - 1 ].
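The concomitant density (23) can be checked by simulation. The sketch below is an illustrative Monte Carlo check of ours (the parameters γ = 1 and n = k = 2 are arbitrary): it samples FGM pairs with uniform marginals by inverting the conditional cdf F_{Y|X}(v|u) = v + γ(1-2u)(v - v²), tracks the kth largest X with a min-heap, and compares the sample mean of the concomitant with the value implied by (23), namely E[Y_{U[n,k]}] = 1/2 + γ_n/6 for a uniform Y marginal.

```python
import heapq, random

random.seed(11)
GAMMA = 1.0                 # FGM association parameter gamma (illustrative)
n, k, reps = 2, 2, 20_000

def fgm_pair():
    """One (X, Y) pair with U(0,1) marginals and FGM dependence (21), by
    inverting the conditional cdf v + a*(v - v^2), a = GAMMA*(1 - 2x)."""
    x, w = random.random(), random.random()
    a = GAMMA * (1 - 2 * x)
    y = 2 * w / ((1 + a) + ((1 + a) ** 2 - 4 * a * w) ** 0.5)
    return x, y

def concomitant_of_nth_k_record():
    heap = [fgm_pair() for _ in range(k)]    # min-heap on the X coordinate
    heapq.heapify(heap)
    records = 1                              # X_{U(1,k)} observed at time k
    while records < n:
        p = fgm_pair()
        if p[0] > heap[0][0]:                # a new (k) record occurs
            heapq.heapreplace(heap, p)
            records += 1
    return heap[0][1]                        # Y paired with the record X

est = sum(concomitant_of_nth_k_record() for _ in range(reps)) / reps
# From (23): E[Y_{U[n,k]}] = 1/2 + gamma_n/6, gamma_n = (1 - 2(k/(k+1))**n)*GAMMA,
# i.e. 1/2 + 1/54 ≈ 0.5185 for n = k = 2, GAMMA = 1.
print(est)
```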

5.3. The Residual Sharma-Mittal Entropy of Generalized (k) Record Values

Suppose X represents the lifetime of a unit with pdf f(·); then H_{α,β}(X) as defined in (2) is useful for measuring the associated uncertainty. Suppose a component is known to have survived up to an age t. In that case, information about the remaining lifetime is an important characteristic required for data analysis in areas such as reliability, survival studies, economics and business. For the analysis of the uncertainty about the remaining lifetime of the unit, we consider the residual Sharma-Mittal entropy, defined by

H_{α,β}(X; t) = 1/(1-β) [ ( ∫_t^∞ { f(x)/F̄(t) }^α dx )^{(1-β)/(1-α)} - 1 ],    (24)

where F̄(t) = 1 - F(t). H_{α,β}(X; t) measures the expected uncertainty contained in the conditional density of X - t given X > t. In this section we derive a closed-form representation for the residual Sharma-Mittal entropy of record values in terms of the residual Sharma-Mittal entropy of the uniform distribution over [0, 1]. The survival function of the nth generalized upper (k) record, denoted by F̄_{X_{U(n,k)}}(x), is given by

F̄_{X_{U(n,k)}}(x) = Σ_{j=0}^{n-1} ( [-k log F̄(x)]^j / j! ) [F̄(x)]^k = Γ(n; -k log F̄(x)) / Γ(n),    (25)

where Γ(a; x) denotes the upper incomplete gamma function, defined by

Γ(a; x) = ∫_x^∞ e^{-u} u^{a-1} du, a, x > 0.

Lemma 1. Let Z_{U(n,k)} denote the nth generalized upper (k) record value from a sequence of observations from U(0, 1). Then

H_{α,β}(Z_{U(n,k)}; t) = 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1; -[(k-1)α+1] log(1-t))) / ([(k-1)α+1]^{(n-1)α+1} {Γ(n; -k log(1-t))}^α) )^{(1-β)/(1-α)} - 1 ].    (26)

Proof. By (5), (24) and (25), the residual Sharma-Mittal entropy of Z_{U(n,k)} is given by

H_{α,β}(Z_{U(n,k)}; t) = 1/(1-β) [ ( ∫_t^1 ( (k^n [-log(1-x)]^{n-1} [1-x]^{k-1}) / Γ(n; -k log(1-t)) )^α dx )^{(1-β)/(1-α)} - 1 ].

On putting u = -log(1 - x), x = 1 - e^{-u} and dx = e^{-u} du, we get

H_{α,β}(Z_{U(n,k)}; t) = 1/(1-β) [ ( (k^{nα} / {Γ(n; -k log(1-t))}^α) ∫_{-log(1-t)}^∞ u^{(n-1)α} e^{-u[(k-1)α+1]} du )^{(1-β)/(1-α)} - 1 ].

Now, on the transformation v = u[(k-1)α+1],

H_{α,β}(Z_{U(n,k)}; t) = 1/(1-β) [ ( (k^{nα} / ([(k-1)α+1]^{(n-1)α+1} {Γ(n; -k log(1-t))}^α)) ∫_{-[(k-1)α+1] log(1-t)}^∞ v^{(n-1)α} e^{-v} dv )^{(1-β)/(1-α)} - 1 ]

= 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1; -[(k-1)α+1] log(1-t))) / ([(k-1)α+1]^{(n-1)α+1} {Γ(n; -k log(1-t))}^α) )^{(1-β)/(1-α)} - 1 ].

Hence the lemma. ■
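The closed form (26) can be cross-checked against direct quadrature of (24). The sketch below is an illustrative check of ours (the parameters α = 2, β = 1/2, n = k = 2 and t = 0.3 are arbitrary), with the upper incomplete gamma function also evaluated numerically.

```python
import numpy as np
from math import log

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def upper_inc_gamma(a, x0, hi=80.0, m=400_000):
    """Gamma(a; x0) = integral from x0 to infinity of e^{-u} u^{a-1} du."""
    u = np.linspace(x0, hi, m)
    return trapezoid(np.exp(-u) * u ** (a - 1), u)

alpha, beta, n, k, t = 2.0, 0.5, 2, 2, 0.3   # illustrative parameters
expo = (1 - beta) / (1 - alpha)
c = (k - 1) * alpha + 1

# Direct: residual entropy (24) of the record pdf (5) for a U(0,1) parent.
# (Here Gamma(n) = 1 since n = 2, so it is omitted from pdf and survival.)
surv = upper_inc_gamma(n, -k * log(1 - t))          # survival (25) at t
x = np.linspace(t, 1 - 1e-9, 400_000)
fz = k**n * (-np.log(1 - x)) ** (n - 1) * (1 - x) ** (k - 1)
direct = (trapezoid((fz / surv) ** alpha, x) ** expo - 1) / (1 - beta)

# Closed form (26).
num = k ** (n * alpha) * upper_inc_gamma((n - 1) * alpha + 1, -c * log(1 - t))
closed = ((num / (c ** ((n - 1) * alpha + 1) * surv**alpha)) ** expo - 1) / (1 - beta)

print(direct, closed)   # the two agree
```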

Theorem 7. The residual Sharma-Mittal entropy of X_{U(n,k)} arising from an arbitrary distribution can be written in terms of the residual Sharma-Mittal entropy of Z_{U(n,k)} as

H_{α,β}(X_{U(n,k)}; t) = ( H_{α,β}(Z_{U(n,k)}; F(t)) + 1/(1-β) ) ( E_V [ {f(F^{-1}(1 - e^{-V/[(k-1)α+1]}))}^{α-1} ] )^{(1-β)/(1-α)} - 1/(1-β),    (27)

where V is a random variable having the gamma distribution with shape parameter (n-1)α+1 and unit scale, left-truncated at -[(k-1)α+1] log(1 - F(t)).

Proof. The residual Sharma-Mittal entropy of X_{U(n,k)} is given by

H_{α,β}(X_{U(n,k)}; t) = 1/(1-β) [ ( ∫_t^∞ ( (k^n [-log(1-F(x))]^{n-1} [1-F(x)]^{k-1} f(x)) / Γ(n; -k log(1-F(t))) )^α dx )^{(1-β)/(1-α)} - 1 ].

On putting u = -log[1 - F(x)], x = F^{-1}(1 - e^{-u}) and du = (f(x)/(1-F(x))) dx, we get

H_{α,β}(X_{U(n,k)}; t) = 1/(1-β) [ ( (k^{nα} / {Γ(n; -k log(1-F(t)))}^α) ∫_{-log(1-F(t))}^∞ u^{(n-1)α} e^{-u[(k-1)α+1]} {f(F^{-1}(1 - e^{-u}))}^{α-1} du )^{(1-β)/(1-α)} - 1 ].

Now, on the transformation v = u[(k-1)α+1],

H_{α,β}(X_{U(n,k)}; t) = 1/(1-β) [ ( (k^{nα} / ([(k-1)α+1]^{(n-1)α+1} {Γ(n; -k log(1-F(t)))}^α)) ∫_{-[(k-1)α+1] log(1-F(t))}^∞ v^{(n-1)α} e^{-v} {f(F^{-1}(1 - e^{-v/[(k-1)α+1]}))}^{α-1} dv )^{(1-β)/(1-α)} - 1 ]    (28)

= 1/(1-β) [ ( (k^{nα} Γ((n-1)α+1; -[(k-1)α+1] log(1-F(t)))) / ([(k-1)α+1]^{(n-1)α+1} {Γ(n; -k log(1-F(t)))}^α) E_V [ {f(F^{-1}(1 - e^{-V/[(k-1)α+1]}))}^{α-1} ] )^{(1-β)/(1-α)} - 1 ]

= ( H_{α,β}(Z_{U(n,k)}; F(t)) + 1/(1-β) ) ( E_V [ {f(F^{-1}(1 - e^{-V/[(k-1)α+1]}))}^{α-1} ] )^{(1-β)/(1-α)} - 1/(1-β).    (29)

Hence the theorem. ■

References

[1] Abbasnejad, M. and Arghami, N. R. (2011). Renyi entropy properties of records. Journal of Statistical Planning and Inference, 141:2312-2320.

[2] Ahmadi, J. and Fashandi, M. (2012). Characterizations of symmetric distributions based on Renyi entropy. Statistics & Probability Letters, 82:798-804.

[3] Akturk, E., Bagci, G., and Sever, R. (2007). Is Sharma-Mittal entropy really a step beyond Tsallis and Renyi entropies? arXiv preprint cond-mat/0703277.

[4] Akturk, O. U., Akturk, E., and Tomak, M. (2008). Can Sobolev inequality be written for Sharma-Mittal entropy? International Journal of Theoretical Physics, 47:3310-3320.

[5] Anderson, P. E., Jensen, H. P., Oliveira, L. P., and Sibani, P. (2004). Evolution in complex systems. COMPLEXITY, 10:49-56.

[6] Arnold, B. C., Balakrishnan, N., and Nagaraja, H. N. (1998). Records. John Wiley and Sons, New York.

[7] Baratpour, S., Ahmadi, J., and Arghami, N. R. (2007). Entropy properties of record statistics. Statistical Papers, 48:197-213.

[8] Chandler, K. N. (1952). The distribution and frequency of record values. Journal of the Royal Statistical Society. Series B, 14:220-228.

[9] Dziubdziela, W. and Kopocinski, B. (1976). Limiting properties of the kth record values. Zastos. Mat., 15:187-190.

[10] Frank, T. and Daffertshofer, A. (2000). Exact time-dependent solutions of the Renyi Fokker-Planck equation and the Fokker-Planck equations related to the entropies proposed by Sharma and Mittal. Physica A: Statistical Mechanics and its Applications, 285:351-366.

[11] Glick, N. (1978). Breaking records and breaking boards. American Mathematical Monthly, 85:2-26.

[12] Jaynes, E. T. (1957). Information theory and statistical mechanics II. Physical review, 108:171.

[13] Johnson, N. L., Kotz, S., and Balakrishnan, N. (2002). Continuous Multivariate Distributions, Models and Applications, volume 1. John Wiley & Sons, New York.

[14] Kosztolowicz, T. and Lewandowska, K. D. (2012). First-passage time for subdiffusion: The nonadditive entropy approach versus the fractional model. Physical Review E, 86:021108.

[15] Madadi, M. and Tata, M. (2014). Shannon information in k-records. Communications in Statistics-Theory and Methods, 43:3286-3301.

[16] Majumdar, S. N. and Ziff, R. M. (2008). Universal record statistics of random walks and Lévy flights. Physical review letters, 101:050601.

[17] Masi, M. (2005). A step beyond Tsallis and Rényi entropies. Physics Letters A, 338:217-224.

[18] Minimol, S. and Thomas, P. Y. (2013). On some properties of Makeham distribution using generalized record values and its characterizations. Brazilian Journal of Probability and Statistics, 27:487-501.

[19] Minimol, S. and Thomas, P. Y. (2014). On characterization of Gompertz distribution by

generalized record values. Journal of Statistical Theory and Applications, 13:38-45.

[20] Nevzorov, V. B. (2001). Records: Mathematical Theory. Translation of Mathematical Monographs, vol. 194. American Mathematical Society, Providence, RI, USA.

[21] Nielsen, F. and Nock, R. (2012). A closed-form expression for the Sharma-Mittal entropy of exponential families. Journal of Physics A: Mathematical and Theoretical, 45:1-8.

[22] Paul, J. (2014). On generalized lower(k)record values arising from power function distribution. Jonrnal of the Kerala Statistical Association, 25:49-64.

[23] Paul, J. and Thomas, P. Y. (2013). On a property of generalized record values arising from exponential distribution. Indian Association for Productivity, Quality and Reliability Transactions, 38:19-27.

[24] Paul, J. and Thomas, P. Y. (2014). On Tsallis entropy of generalized(k)record values. In Proceedings of seminar on Process Capability Studies With Special Emphasis on Com- putational Techniques & Recent Trends in Statistics, pages 1-14. Nirmala Academic and Research Publications (NARP).

[25] Paul, J. and Thomas, P. Y. (2015a). On generalized upper(k)record values from Weibull distribution. Statistica, 75:313-330.

[26] Paul, J. and Thomas, P. Y. (2015b). Tsallis entropy properties of record values. Calcutta Statistical Association Bulletin, 67:47-60.

[27] Paul, J. and Thomas, P. Y. (2019). On some properties of Mathai-Haubold entropy of record values. Journal of the Indian Society for Probability and Statistics, 20(1):31-49.

[28] Pawlas, P. and Szynal, D. (1998). Relations for single and product moments of kth record values from exponential and Gumbel distributions. J. Appl. Statist. Sci., 7:53-62.

[29] Rényi, A. (1961). On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematics, Statistics and Probability 1960, pages 547-561, University of California Press, Berkeley.

[30] Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27:379-423.

[31] Sharma, B. D. and Mittal, D. P. (1975). New nonadditive measures of entropy for discrete probability distributions. J. Math. Sci, 10:28-40.

[32] Sharma, B. D. and Mittal, D. P. (1977). New non-additive measures of relative information. Journal of Combinatorics Information & System Sciences, 2:122-132.

[33] Sibani, P. and Henrik, J. J. (2009). Record statistics and dynamics. In Meyers, R. A., editor, Encyclopaedia of Complexity and Systems Science, pages 7583-7591. Springer Science+Business Media, LLC., New York, USA.

[34] Thomas, P. Y. and Paul, J. (2014). On generalized lower (k) record values from the Fréchet distribution. Journal of the Japan Statistical Society, 44:157-178.

[35] Thomas, P. Y. and Paul, J. (2019). On diagnostic devices for proposing half-logistic and inverse half-logistic models using generalized (k) record values. Communications in Statistics-Theory and Methods, 48(5):1073-1091.

[36] Tsallis, C. (1988). Possible generalization of Boltzmann-Gibbs statistics. Journal of statistical physics, 52:479-487.

[37] Zarezadeh, S. and Asadi, M. (2010). Results on residual Rényi entropy of order statistics and record values. Information Sciences, 180:4195-4206.
