
UDC 519.24

Local Asymptotic Normality of Family of Distributions from Incomplete Observations

Abdurahim A. Abdushukurov* Nargiza S. Nurmuhamedova

National University of Uzbekistan, VUZgorodok, Tashkent, 100174, Uzbekistan

Received 27.12.2013, received in revised form 10.01.2014, accepted 20.02.2014 In this paper we prove the property of local asymptotic normality of the likelihood ratio statistics in the competing risks model under random censoring by non-observation intervals.

Keywords: competing risks, random censoring, likelihood ratio, local asymptotic normality.

Introduction

The likelihood ratio statistic (LRS) plays an important role in decision theory. For example, when testing a simple hypothesis H_0 against a composite alternative H_1 with an unspecified distribution law, criteria based on the LRS are, according to the Neyman-Pearson lemma, uniformly most powerful for any sample size n (see [1,2]). Interesting examples arise when the alternative H_1 depends on n and is close to H_0, i.e. H_1 = H_{1n} → H_0 as n → ∞. In such cases the asymptotic properties of the LRS become transparent, which is useful for estimation theory and hypothesis testing. Among these properties is the local asymptotic normality (LAN) of the LRS. A number of papers are devoted to investigations of the LAN for the LRS and its applications in statistics. The most remarkable works are [2-5], which show that the LAN allows the development of asymptotic theory for most maximum likelihood and Bayesian type estimators and establishes contiguity properties of families of probability distributions. In [6-11] the LAN property of the LRS in the competing risks model (CRM) under random censoring of observations on the right and on both sides was established. The present paper investigates the LAN for the LRS in the CRM under random censoring by non-observation intervals.

1. Competing risks model under random censoring by non-observation intervals

In the CRM one investigates a random variable (r.v.) X with values in a measurable space (X, B) together with events (A^{(1)}, ..., A^{(k)}) forming a complete group, where k is fixed. In practice the r.v. X is, typically, the survival or reliability time of some object (an individual, a physical system) exposed to k competing risks and failing on occurrence of one of the events {A^{(i)}, i = 1, ..., k}. The pairs {(X, A^{(i)}), i = 1, ..., k} describe the time and the cause of the failure (see more about the CRM in [6,12,13]). During the experiment, under homogeneous conditions, the ensemble (X, A^{(1)}, ..., A^{(k)}) is observed repeatedly, and we obtain a sequence {(X_j, A_j^{(1)}, ..., A_j^{(k)}), j ≥ 1}. Let δ_j^{(i)} = I(A_j^{(i)}) be the indicator of the event A_j^{(i)}. Every vector Z_j = (X_j, δ_j^{(1)}, ..., δ_j^{(k)}) induces

*a_abdushukurov@rambler.ru © Siberian Federal University. All rights reserved

a statistical model with sample space Y = X × {0,1}^{(k)} = X × {0,1} × ... × {0,1} and the σ-algebra C of sets of the form B × D_1 × ... × D_k, where B ∈ B and D_i ⊂ {0,1}, i = 1, ..., k. We suppose that the distribution of the vector Z_j on (Y, C) depends on an unknown parameter θ = (θ_1, ..., θ_s) ∈ Θ:

Q_θ^*(B × D_1 × ... × D_k) = P_θ(X ∈ B, δ^{(1)} ∈ D_1, ..., δ^{(k)} ∈ D_k),   (1)

where Θ is an open set in R^s. Let the distribution (1) be absolutely continuous with respect to the σ-finite measure μ = λ × ε_1 × ... × ε_k, where λ is the Lebesgue measure on R and the ε_i are counting measures concentrated at the points y^{(i)} ∈ {0,1}, i = 1, ..., k. In what follows we consider a statistical scheme in which the sample element (X_j, A_j^{(1)}, ..., A_j^{(k)}) is not observable if the r.v. X_j falls in the interval [Y_{1j}, Y_{2j}], where {(Y_{1j}, Y_{2j}), j ≥ 1} is a sequence of independent and identically distributed (i.i.d.) random vectors with an unknown distribution G(u,v), (u,v) ∈ R² (possibly implicitly depending on θ). Here the samples (X_j, A_j^{(1)}, ..., A_j^{(k)}) and the pairs (Y_{1j}, Y_{2j}) are assumed to be independent, and P_θ(Y_{1j} < Y_{2j}) = 1 for every j ≥ 1. This scheme models experiments in which the observation of object j with lifetime X_j may be interrupted at a random moment Y_{1j} and resumed at a random moment Y_{2j}. We call such a statistical model the CRM under random censoring by non-observation intervals. In this case, instead of the events (A_j^{(1)}, ..., A_j^{(k)}) we observe the events (D_j^{(0)}, D_j^{(1)}, ..., D_j^{(k)}), where D_j^{(0)} = {ω : Y_{1j}(ω) ≤ X_j(ω) ≤ Y_{2j}(ω)} and

D_j^{(i)} = A_j^{(i)} ∩ ({ω : X_j(ω) < Y_{1j}(ω)} ∪ {ω : X_j(ω) > Y_{2j}(ω)}), i = 1, ..., k. Let Δ_j^{(i)} = I(D_j^{(i)}), i = 0, 1, ..., k, and w_j = ε_{1j} + ε_{2j}, where ε_{1j} = I(X_j < Y_{1j}) and ε_{2j} = I(X_j > Y_{2j}). It is obvious that Δ_j^{(0)} = 1 − w_j and Δ_j^{(i)} = w_j δ_j^{(i)}. In the CRM we are interested in the properties of the pairs {(X_j, Δ_j^{(i)}), i = 1, ..., k}; therefore we consider the subdistributions

Q_{iθ}(B) = Q_θ^*(B × {0} × ... × {0} × {1} × {0} × ... × {0}), i = 1, ..., k,   (2)

obtained from (1) by taking D_i = {1} and D_l = {0} for l ≠ i, l = 1, ..., k (the {1} stands in the i-th position). Let Q_θ(B) = Σ_{i=1}^{k} Q_{iθ}(B). By h^{(i)} and h we denote the densities of the subdistributions Q_{iθ} and Q_θ:

Q_{iθ}(B) = ∫_B h^{(i)}(x; θ) λ(dx), i = 1, ..., k,   Q_θ(B) = ∫_B h(x; θ) λ(dx), B ∈ B,   (3)

where h = h^{(1)} + ... + h^{(k)}. For B = (−∞, x] we put Q_{iθ}((−∞, x]) = H^{(i)}(x; θ), i = 1, ..., k, and Q_θ((−∞, x]) = H(x; θ). Now we define the cumulative hazard functions (c.h.f.) of the pairs (X, A^{(i)}):

Λ^{(i)}(x; θ) = ∫_{−∞}^{x} lim_{Δ↓0} (1/Δ) P_θ(t < X ≤ t + Δ, A^{(i)} | X > t) λ(dt) = ∫_{−∞}^{x} dH^{(i)}(t; θ) / (1 − H(t; θ)), i = 1, ..., k, x ∈ R¹.   (4)

Then the c.h.f. corresponding to the r.v. X is Λ(x; θ) = Σ_{i=1}^{k} Λ^{(i)}(x; θ). In the CRM the exponential hazard functionals F^{(i)}(x; θ) = 1 − exp{−Λ^{(i)}(x; θ)}, i = 1, ..., k, describe the distribution of the pairs (X, A^{(i)}) in terms of the i-th risk. In view of the equality Λ(x; θ) = −log(1 − H(x; θ)), we have

1 − H(x; θ) = P_θ(X > x) = ∏_{i=1}^{k} (1 − F^{(i)}(x; θ)).   (5)

Define the densities f^{(i)}(x; θ) = (d/dx) F^{(i)}(x; θ), i = 1, ..., k. Then the intensity of the i-th risk is f^{(i)}/(1 − F^{(i)}). On the other hand, by formulas (3)-(5), for every (x; θ) ∈ R¹ × Θ and i = 1, ..., k we have

f^{(i)}(x; θ) / (1 − F^{(i)}(x; θ)) = h^{(i)}(x; θ) / (1 − H(x; θ)),

i.e.

h^{(i)}(x; θ) = f^{(i)}(x; θ) ∏_{j=1, j≠i}^{k} (1 − F^{(j)}(x; θ)).   (6)
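Relations (5) and (6) can be checked concretely: assuming, purely for illustration, k = 2 exponential risks with F^{(i)}(x; θ) = 1 − e^{−θ_i x}, one gets h^{(i)}(x; θ) = θ_i e^{−(θ_1+θ_2)x} and 1 − H(x; θ) = e^{−(θ_1+θ_2)x}. The sketch below verifies this numerically at one point (the rates and the point x are arbitrary choices, not from the paper).

```python
import math

theta = (0.5, 1.3)        # illustrative rates of two exponential risks
x = 0.7

F = [1 - math.exp(-t * x) for t in theta]      # F^{(i)}(x)
f = [t * math.exp(-t * x) for t in theta]      # densities f^{(i)}(x)

# (6): h^{(i)} = f^{(i)} * prod_{j != i} (1 - F^{(j)})
h = [f[0] * (1 - F[1]), f[1] * (1 - F[0])]

# closed form for exponential risks: h^{(i)}(x) = theta_i * exp(-(theta_1+theta_2)*x)
total = sum(theta)
assert all(abs(hi - t * math.exp(-total * x)) < 1e-12 for hi, t in zip(h, theta))

# (5): 1 - H(x) = (1 - F^{(1)}(x)) * (1 - F^{(2)}(x)) = exp(-(theta_1+theta_2)*x)
one_minus_H = (1 - F[0]) * (1 - F[1])
assert abs(one_minus_H - math.exp(-total * x)) < 1e-12
print("relations (5) and (6) verified at x =", x)
```

Note also that h^{(1)} + h^{(2)} = (θ_1 + θ_2) e^{−(θ_1+θ_2)x} = h(x; θ), as the decomposition of Q_θ requires.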

Assume that at the n-th stage of the experiment the sample Z^{(n)} = (Z_1, ..., Z_n) is available for observation, where Z_j = w_j X_j + (1 − w_j)[Y_{1j}, Y_{2j}]; this means that every observable Z_j is either the r.v. X_j (when w_j = 1) or the interval [Y_{1j}, Y_{2j}] (when w_j = 0). Denote by p(z; θ) the density of one observation, without the multipliers depending on the unknown nuisance distribution G. Then, according to (6), we have the following "truncated" likelihood function of the sample Z^{(n)}:

p_n(Z^{(n)}; θ) = ∏_{m=1}^{n} p(Z_m; θ) = ∏_{m=1}^{n} { [ ∏_{i=1}^{k} ( f^{(i)}(X_m; θ) ∏_{j≠i} (1 − F^{(j)}(X_m; θ)) )^{δ_m^{(i)}} ]^{w_m} × [H(Y_{2m}; θ) − H(Y_{1m}; θ)]^{1−w_m} } =

= ∏_{m=1}^{n} { [ ∏_{i=1}^{k} ( h^{(i)}(X_m; θ) )^{δ_m^{(i)}} ]^{w_m} × [H(Y_{2m}; θ) − H(Y_{1m}; θ)]^{1−w_m} }.   (7)

Let, for every u ∈ R^s, θ + n^{−1/2} u = Φ_n(u; θ) ∈ Θ, and let Q_θ^{(n)} be the distribution induced by the sample Z^{(n)}. Then the LRS of the model is

L_{n,θ}(u) = dQ_{Φ_n(u;θ)}^{(n)}(Z^{(n)}) / dQ_θ^{(n)}(Z^{(n)}) = p_n(Z^{(n)}; Φ_n(u; θ)) / p_n(Z^{(n)}; θ) =

= ∏_{m=1}^{n} { [ ∏_{i=1}^{k} ( h^{(i)}(X_m; Φ_n(u; θ)) / h^{(i)}(X_m; θ) )^{δ_m^{(i)}} ]^{w_m} [ (H(Y_{2m}; Φ_n(u; θ)) − H(Y_{1m}; Φ_n(u; θ))) / (H(Y_{2m}; θ) − H(Y_{1m}; θ)) ]^{1−w_m} }.   (8)

Put λ_{n,θ}(u) = log L_{n,θ}(u). We shall now study the properties of the random function λ_{n,θ}(u).
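The observation scheme of this section can be simulated directly. In the sketch below the exponential lifetimes, the multinomial cause assignment and the uniform construction of the non-observation intervals are illustrative assumptions (the model itself leaves these distributions unspecified); only the censoring mechanism follows the definitions above.

```python
import numpy as np

def simulate_crm(n, lam=(0.5, 1.0), seed=0):
    """Simulate the CRM under random censoring by non-observation intervals.

    Lifetimes X_j are exponential with rate sum(lam), and the failure cause is
    risk i with probability lam[i]/sum(lam); the interval [Y1, Y2] is built
    from uniforms so that P(Y1 < Y2) = 1 (all illustrative choices).
    """
    rng = np.random.default_rng(seed)
    lam = np.asarray(lam, dtype=float)
    x = rng.exponential(1.0 / lam.sum(), size=n)
    cause = rng.choice(len(lam), size=n, p=lam / lam.sum())
    y1 = rng.uniform(0.0, 1.0, size=n)
    y2 = y1 + rng.uniform(0.0, 1.0, size=n)
    w = (x < y1) | (x > y2)     # w_j = 1: X_j observed; w_j = 0: only [Y1, Y2] seen
    return x, cause, y1, y2, w

x, cause, y1, y2, w = simulate_crm(10_000)
print("fraction of fully observed lifetimes:", w.mean())
```

Each simulated record is either (X_j, cause_j) when w_j = 1 or the pair (Y_{1j}, Y_{2j}) when w_j = 0, exactly as in the definition of Z_j.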

2. Local asymptotic normality

k

Let N(i) _ {x : h(i)(x; в) > 0} and N _ p| N(i). We need some regularity conditions:

i=1

(C1) The supports {N(i), i _ 1,k} are independent of в and N _ 0;

: exist the der

dmh(i)(x; в)

dmh(i) (x; в)

(C2) There exist the derivatives-----0/vm ’—, m _ 1, 2; i _ 1,..., k; j _ 1,..., s, for all в G 0;

двт

(C3)

двт

p,(dx) < то, m _ 1, 2; i _ 1,..., k; j _ 1,..., s for all в G 0;

(C4) There are finite integrals Гг(*)(в) _ 1, ..., s and в G 0;

d d

— log h(i)(X; в) — log h(i)(X; в)

for all l, j

— C50

(C5) The matrix IX (0)

IX (°)

l,j= 1,s

E IgV)

i=i

l,j=1,s

J2 I(i) (0) is positively

iНе можете найти то, что вам нужно? Попробуйте сервис подбора литературы.

i=1

defined for all 0 e 0.

I^{(i)}(θ) is, obviously, the Fisher information matrix for the pair (X, δ^{(i)}), and I_X(θ) is the Fisher information matrix for the r.v. X. Let

S_n(Z^{(n)}; θ) = ∂ log p_n(Z^{(n)}; θ)/∂θ = Σ_{j=1}^{n} l_θ(X_j, Y_{1j}, Y_{2j}, w_j),

where

l_θ(x, y_1, y_2, w) = w Σ_{i=1}^{k} δ^{(i)} ∂ log h^{(i)}(x; θ)/∂θ + (1 − w) ∂ log(H(y_2; θ) − H(y_1; θ))/∂θ.

We note that J(θ) = J_1(θ) + J_2(θ), where

J_1(θ) = Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} [ ∫_{−∞}^{y_1} (∂ log h^{(i)}(x; θ)/∂θ)(∂ log h^{(i)}(x; θ)/∂θ)^T dH^{(i)}(x; θ) + ∫_{y_2}^{∞} (∂ log h^{(i)}(x; θ)/∂θ)(∂ log h^{(i)}(x; θ)/∂θ)^T dH^{(i)}(x; θ) ] dG(y_1, y_2),

J_2(θ) = ∫_{−∞}^{∞} ∫_{y_1}^{∞} (∂ log(H(y_2; θ) − H(y_1; θ))/∂θ)(∂ log(H(y_2; θ) − H(y_1; θ))/∂θ)^T (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2).

Let (u; v) denote the scalar product of vectors u, v ∈ R^s. The following theorem asserts the LAN of the LRS.

Theorem 2.1. Let the regularity conditions (C1)-(C5) hold and det{J(θ)} ≠ 0. Then for the LRS L_{n,θ}(u) we have the representation

L_{n,θ}(u) = exp{ n^{−1/2} Σ_{j=1}^{n} (l_θ(X_j, Y_{1j}, Y_{2j}, w_j); u) − (1/2)(J(θ)u; u) + R_n(u; θ) },   (9)

where for all u ∈ R^s

R_n(u; θ) → 0 in Q_θ^{(n)}-probability as n → ∞,   (10)

and

L( n^{−1/2} Σ_{j=1}^{n} l_θ(X_j, Y_{1j}, Y_{2j}, w_j) | Q_θ^{(n)} ) ⟹ N_s(0; J(θ)).   (11)

It follows from (9) that the LRS L_{n,θ}(u) is approximated by the density of an exponential family, and λ_{n,θ}(u) is asymptotically s-dimensional normal. For the proof of Theorem 2.1 we need the following lemmas.
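To make the limit (11) concrete, the following Monte Carlo sketch takes a one-dimensional θ with a single exponential risk and a fixed non-observation interval (a, b), i.e. a degenerate G; all of these are illustrative assumptions. It checks numerically that the per-observation score l_θ has mean near 0 and variance near J(θ), so that the normalized sum n^{−1/2} Σ_j l_θ(X_j, Y_{1j}, Y_{2j}, w_j) is approximately N(0, J(θ)).

```python
import numpy as np

# illustrative one-parameter model: h(x; theta) = theta * exp(-theta * x),
# H(x; theta) = 1 - exp(-theta * x), fixed interval (a, b)
theta, a, b = 1.0, 0.4, 1.1
rng = np.random.default_rng(1)
n = 200_000

x = rng.exponential(1.0 / theta, size=n)
w = (x < a) | (x > b)                              # observation indicator

p0 = np.exp(-theta * a) - np.exp(-theta * b)       # P(w = 0) = H(b) - H(a)
dlog_int = (-a * np.exp(-theta * a) + b * np.exp(-theta * b)) / p0

# per-observation score l_theta: d/dtheta log h(X) if observed,
# d/dtheta log(H(b) - H(a)) otherwise
score = np.where(w, 1.0 / theta - x, dlog_int)

# J(theta): numerical integration of (d/dtheta log h)^2 dH over the observed
# region, plus the censored-part contribution
grid = np.linspace(0.0, 40.0, 400_001)
dx = grid[1] - grid[0]
dens = theta * np.exp(-theta * grid)
mask = (grid < a) | (grid > b)
J = np.sum((1.0 / theta - grid) ** 2 * dens * mask) * dx + p0 * dlog_int ** 2

print("mean of score (should be near 0):", score.mean())
print("variance of score vs J(theta):", score.var(), J)
```

The near-zero mean illustrates Lemma 3.2 below, and the agreement of the empirical variance with J(θ) illustrates Lemma 3.3; the CLT then yields (11).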

Let {l_i(x) = (l_{i1}(x), ..., l_{is}(x)), i = 1, ..., k} and l_0(y_1, y_2) = (l_{01}(y_1, y_2), ..., l_{0s}(y_1, y_2)) be vector-valued functions, possibly depending on θ, and let

l(x, y_1, y_2, w) = w Σ_{i=1}^{k} δ^{(i)} l_i(x) + (1 − w) l_0(y_1, y_2).

Lemma 3.1. Suppose the following conditions hold:

(A) E_θ[ δ^{(i)} |l_{ij}(X)| ] < ∞ for all i = 1, ..., k; j = 1, ..., s and θ ∈ Θ;

(B) E_θ[ (1 − w) |l_{0j}(Y_1, Y_2)| ] < ∞ for all j = 1, ..., s and θ ∈ Θ.

Then for any θ ∈ Θ

E_θ l(X, Y_1, Y_2, w) = ∫_{−∞}^{∞} ∫_{y_1}^{∞} Σ_{i=1}^{k} ( ∫_{−∞}^{y_1} l_i(x) dH^{(i)}(x; θ) + ∫_{y_2}^{∞} l_i(x) dH^{(i)}(x; θ) ) dG(y_1, y_2) + ∫_{−∞}^{∞} ∫_{y_1}^{∞} l_0(y_1, y_2) (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2).   (12)
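Identity (12) can be checked numerically in a simple special case: k = 1, a single exponential risk with rate θ (an illustrative choice), l_1(x) = x, l_0(y_1, y_2) = y_2 − y_1, and G degenerate at a fixed pair (a, b). Both dH-integrals then have closed forms.

```python
import numpy as np

theta, a, b = 0.8, 0.5, 1.5      # illustrative rate and fixed interval (degenerate G)
rng = np.random.default_rng(2)
x = rng.exponential(1.0 / theta, size=500_000)
w = (x < a) | (x > b)

# left-hand side of (12) by Monte Carlo, with l_1(x) = x, l_0(y1, y2) = y2 - y1
lhs = np.mean(np.where(w, x, b - a))

# right-hand side of (12): closed forms of the dH-integrals for the
# exponential density theta * exp(-theta * x)
p0 = np.exp(-theta * a) - np.exp(-theta * b)               # H(b) - H(a)
int_left = (1 - np.exp(-theta * a)) / theta - a * np.exp(-theta * a)
int_right = (b + 1.0 / theta) * np.exp(-theta * b)
rhs = int_left + int_right + (b - a) * p0

print("Monte Carlo lhs:", lhs, " closed-form rhs:", rhs)
```

The two values agree up to Monte Carlo error, illustrating how (12) splits the expectation over the events {w = 1} and {w = 0}.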

Lemma 3.2. Let the regularity conditions (C1)-(C4) hold. Then

E_θ[ ∂ log p_n(Z^{(n)}; θ)/∂θ ] = 0 for all θ ∈ Θ.   (13)

Lemma 3.3. Under the conditions (C1)-(C4), for all θ ∈ Θ,

E_θ[ (∂ log p_n(Z^{(n)}; θ)/∂θ)(∂ log p_n(Z^{(n)}; θ)/∂θ)^T ] = −E_θ || ∂² log p_n(Z^{(n)}; θ)/∂θ_j ∂θ_l ||_{j,l=1,...,s} = n J(θ).   (14)

Put g^{(i)}(x; θ) = (h^{(i)}(x; θ))^{1/2}, q(y_1, y_2; θ) = (H(y_2; θ) − H(y_1; θ))^{1/2},

ξ_{ni}(x; u) = g^{(i)}(x; θ + u) / g^{(i)}(x; θ) − 1 and η_n(y_1, y_2; u) = q(y_1, y_2; θ + u) / q(y_1, y_2; θ) − 1.

Lemma 3.4. Let the regularity conditions (C1)-(C5) hold. Then, as |u| → 0, for every ε > 0 we have:

E_θ[ w Σ_{i=1}^{k} δ^{(i)} ξ_{ni}²(X; u) ] − (1/4)(J_1(θ)u; u) = o(|u|²),   (15)

E_θ[ (1 − w) η_n²(Y_1, Y_2; u) ] − (1/4)(J_2(θ)u; u) = o(|u|²),   (16)

E_θ[ w Σ_{i=1}^{k} δ^{(i)} ( ξ_{ni}(X; u) − (1/2)(u; ∂ log h^{(i)}(X; θ)/∂θ) )² ] = o(|u|²),   (17)

E_θ[ (1 − w) ( η_n(Y_1, Y_2; u) − (1/2)(u; ∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ) )² ] = o(|u|²),   (18)

P_θ( w | Σ_{i=1}^{k} δ^{(i)} ξ_{ni}(X; u) | > ε ) = o(|u|²),   (19)

P_θ( |(1 − w) η_n(Y_1, Y_2; u)| > ε ) = o(|u|²),   (20)

E_θ[ w Σ_{i=1}^{k} δ^{(i)} ξ_{ni}(X; u) ] + (1/8)(J_1(θ)u; u) = o(|u|²),   (21)

E_θ[ (1 − w) η_n(Y_1, Y_2; u) ] + (1/8)(J_2(θ)u; u) = o(|u|²).   (22)

Proof of Theorem 2.1. From (8) we have

L_{n,θ}(u) = exp{ λ_{n,θ}^{(1)}(u) + λ_{n,θ}^{(2)}(u) },   (23)

where

λ_{n,θ}^{(1)}(u) = Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} log( h^{(i)}(X_j; θ + n^{−1/2}u) / h^{(i)}(X_j; θ) ) = 2 Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} log( 1 + ξ_{ni}(X_j; n^{−1/2}u) ),   (24)

λ_{n,θ}^{(2)}(u) = Σ_{j=1}^{n} (1 − w_j) log( (H(Y_{2j}; θ + n^{−1/2}u) − H(Y_{1j}; θ + n^{−1/2}u)) / (H(Y_{2j}; θ) − H(Y_{1j}; θ)) ) = 2 Σ_{j=1}^{n} (1 − w_j) log( 1 + η_n(Y_{1j}, Y_{2j}; n^{−1/2}u) ),   (25)

and

λ_{n,θ}^{(1)}(u) + λ_{n,θ}^{(2)}(u) = λ_{n,θ}(u).   (26)

Define the events A_n = { max_{1≤j≤n} max_{1≤i≤k} |ξ_{ni}(X_j; n^{−1/2}u)| ≤ ε } and B_n = { max_{1≤j≤n} |η_n(Y_{1j}, Y_{2j}; n^{−1/2}u)| ≤ ε }, 0 < ε < 1. On these events, by Taylor's formula, for some coefficients |α_{jn}^{(i)}| ≤ 1 and |β_{jn}| ≤ 1 we have

λ_{n,θ}^{(1)}(u) = 2 Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) − Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}²(X_j; n^{−1/2}u) + Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} α_{jn}^{(i)} ξ_{ni}³(X_j; n^{−1/2}u)   (27)

and

λ_{n,θ}^{(2)}(u) = 2 Σ_{j=1}^{n} (1 − w_j) η_n(Y_{1j}, Y_{2j}; n^{−1/2}u) − Σ_{j=1}^{n} (1 − w_j) η_n²(Y_{1j}, Y_{2j}; n^{−1/2}u) + Σ_{j=1}^{n} (1 − w_j) β_{jn} η_n³(Y_{1j}, Y_{2j}; n^{−1/2}u).   (28)

To prove the theorem it is enough to show the following convergences as n → ∞ (here Ā_n and B̄_n denote the complements of A_n and B_n):

P_θ(Ā_n) → 0,   P_θ(B̄_n) → 0,   (29)

P_θ( | 2 Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) − n^{−1/2} Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} (∂ log h^{(i)}(X_j; θ)/∂θ; u) + (1/4)(J_1(θ)u; u) | > ε ) → 0,   (30)

P_θ( | 2 Σ_{j=1}^{n} (1 − w_j) η_n(Y_{1j}, Y_{2j}; n^{−1/2}u) − n^{−1/2} Σ_{j=1}^{n} (1 − w_j) (∂ log(H(Y_{2j}; θ) − H(Y_{1j}; θ))/∂θ; u) + (1/4)(J_2(θ)u; u) | > ε ) → 0,   (31)

P_θ( | Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}²(X_j; n^{−1/2}u) − (1/4)(J_1(θ)u; u) | > ε ) → 0,   (32)

P_θ( | Σ_{j=1}^{n} (1 − w_j) η_n²(Y_{1j}, Y_{2j}; n^{−1/2}u) − (1/4)(J_2(θ)u; u) | > ε ) → 0,   (33)

P_θ( Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} |ξ_{ni}(X_j; n^{−1/2}u)|³ > ε ) → 0,   (34)

P_θ( Σ_{j=1}^{n} (1 − w_j) |η_n(Y_{1j}, Y_{2j}; n^{−1/2}u)|³ > ε ) → 0.   (35)

From (19), as n → ∞,

P_θ(Ā_n) ≤ Σ_{j=1}^{n} P_θ( | Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) | > ε ) = n P_θ( | Σ_{i=1}^{k} δ^{(i)} ξ_{ni}(X; n^{−1/2}u) | > ε ) = n · o(n^{−1}) = o(1).

The second convergence in (29) can be proved in the same way using (20). According to (17) and Markov's inequality, for n → ∞ we have

P_θ( | Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) − (1/2) n^{−1/2} Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} (u; ∂ log h^{(i)}(X_j; θ)/∂θ) | > ε ) ≤

≤ (1/ε²) E_θ[ ( Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} ( ξ_{ni}(X_j; n^{−1/2}u) − (1/2)(n^{−1/2}u; ∂ log h^{(i)}(X_j; θ)/∂θ) ) )² ] = o(1).   (36)

On the other hand, according to Lemma 3.1 and the law of large numbers, as n → ∞,

(1/(4n)) Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} (u; ∂ log h^{(i)}(X_j; θ)/∂θ)² → (1/4)(J_1(θ)u; u) in P_θ-probability.   (37)

From (36) and (37) we obtain (32). In the same way (33) is proved. From (29) and (32) we obtain (34), and in the same way we prove (35). Let us now prove (30) and (31). According to (21), as n → ∞,

E_θ[ w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) ] = −(1/(8n))(J_1(θ)u; u) + o(n^{−1}).   (38)

Consequently, as n → ∞, the convergence (30) is equivalent to

Δ_n(u) = P_θ( | 2 Σ_{j=1}^{n} [ w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) − E_θ( w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) ) ] − n^{−1/2} Σ_{j=1}^{n} w_j Σ_{i=1}^{k} δ_j^{(i)} (∂ log h^{(i)}(X_j; θ)/∂θ; u) | > ε ) → 0.

Now, since the summands are independent, from Markov's inequality we have

Δ_n(u) ≤ (4/ε²) Σ_{j=1}^{n} E_θ[ ( w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) − E_θ[ w_j Σ_{i=1}^{k} δ_j^{(i)} ξ_{ni}(X_j; n^{−1/2}u) ] − (1/2) n^{−1/2} w_j Σ_{i=1}^{k} δ_j^{(i)} (∂ log h^{(i)}(X_j; θ)/∂θ; u) )² ] ≤

≤ (8n/ε²) { E_θ[ w Σ_{i=1}^{k} δ^{(i)} ( ξ_{ni}(X; n^{−1/2}u) − (1/2)(n^{−1/2}u; ∂ log h^{(i)}(X; θ)/∂θ) )² ] + ( E_θ[ w Σ_{i=1}^{k} δ^{(i)} ξ_{ni}(X; n^{−1/2}u) ] )² }.   (39)

The first summand in braces is o(n^{−1}) as n → ∞ because of (51) from the proof of Lemma 3.4 (see Section 3), and the second is O(n^{−2}) by (38); hence Δ_n(u) → 0, and (30) is fulfilled. In the same way we prove (31). Now, (10) follows from (29)-(35), and (11) follows from the central limit theorem applied to the i.i.d. summands l_θ(X_j, Y_{1j}, Y_{2j}, w_j), which have zero mean (Lemma 3.2) and covariance matrix J(θ) (Lemma 3.3). The theorem is proved.

3. The proofs of Lemmas 3.1-3.4

Proof of Lemma 3.1. It is easy to see that, for all θ ∈ Θ and m = 1, ..., s, under the conditions of the lemma

E_θ[ |l_m(X, Y_1, Y_2, w)| ] ≤ k max_{1≤i≤k} { E_θ[ δ^{(i)} |l_{im}(X)| ] } + E_θ[ (1 − w) |l_{0m}(Y_1, Y_2)| ] < ∞.

Compute the expectation separately on the events {w = 1} and {w = 0}. We have

E_θ[ w l(X, Y_1, Y_2, w) ] = E_θ{ Σ_{i=1}^{k} ( E_θ[ δ^{(i)} l_i(X) I(X < Y_1) | Y_1 ] + E_θ[ δ^{(i)} l_i(X) I(X > Y_2) | Y_2 ] ) I(Y_1 < Y_2) } =

= ∫_{−∞}^{∞} ∫_{y_1}^{∞} Σ_{i=1}^{k} ( ∫_{−∞}^{y_1} l_i(x) dH^{(i)}(x; θ) + ∫_{y_2}^{∞} l_i(x) dH^{(i)}(x; θ) ) dG(y_1, y_2),

and also

E_θ[ (1 − w) l(X, Y_1, Y_2, w) ] = E_θ{ l_0(Y_1, Y_2) E_θ[ I(Y_1 ≤ X ≤ Y_2) | (Y_1, Y_2) ] I(Y_1 < Y_2) } = E_θ{ l_0(Y_1, Y_2) (H(Y_2; θ) − H(Y_1; θ)) I(Y_1 < Y_2) } =

= ∫_{−∞}^{∞} ∫_{y_1}^{∞} l_0(y_1, y_2) (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2).

Adding these formulas we obtain (12).

Proof of Lemma 3.2. We have

∂ log p_n(Z^{(n)}; θ)/∂θ = Σ_{j=1}^{n} l_θ(X_j, Y_{1j}, Y_{2j}, w_j),

where l_θ(X_j, Y_{1j}, Y_{2j}, w_j) = w_j Σ_{i=1}^{k} δ_j^{(i)} l_{iθ}(X_j) + (1 − w_j) l_{0θ}(Y_{1j}, Y_{2j}) is a vector-function of the form considered in Lemma 3.1, with

l_{iθ}(x) = ∂ log h^{(i)}(x; θ)/∂θ,   l_{0θ}(y_1, y_2) = ∂ log(H(y_2; θ) − H(y_1; θ))/∂θ.

Conditions (A) and (B) of Lemma 3.1 hold, because

E_θ[ δ^{(i)} |l_{iθj}(X)| ] ≤ ∫_{−∞}^{∞} | ∂h^{(i)}(x; θ)/∂θ_j | μ(dx) < ∞ for all θ ∈ Θ,   (40)

and

E_θ[ (1 − w) |l_{0θj}(Y_1, Y_2)| ] = ∫_{−∞}^{∞} ∫_{y_1}^{∞} | ∂(H(y_2; θ) − H(y_1; θ))/∂θ_j | dG(y_1, y_2) = ∫_{−∞}^{∞} ∫_{y_1}^{∞} | ∫_{y_1}^{y_2} (∂h(x; θ)/∂θ_j) λ(dx) | dG(y_1, y_2) ≤

≤ ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{y_1}^{y_2} (∂ log h(x; θ)/∂θ_j)² dH(x; θ) )^{1/2} [H(y_2; θ) − H(y_1; θ)]^{1/2} dG(y_1, y_2) ≤ ((I_X(θ))_{jj})^{1/2} ∫_{−∞}^{∞} ∫_{y_1}^{∞} [H(y_2; θ) − H(y_1; θ)]^{1/2} dG(y_1, y_2) < ∞;   (41)

here we use the regularity conditions (C1)-(C4), formula (40) and the Cauchy-Bunyakovsky-Schwarz inequality. Thus, by (40) and (41) the expectation in (13) exists, and

E_θ[ ∂ log p_n(Z^{(n)}; θ)/∂θ ] = n E_θ l_θ(X, Y_1, Y_2, w).   (42)

By Lemma 3.1, for any θ ∈ Θ we have

E_θ l_θ(X, Y_1, Y_2, w) = ∫_{−∞}^{∞} ∫_{y_1}^{∞} Σ_{i=1}^{k} ( ∫_{−∞}^{y_1} (∂ log h^{(i)}(x; θ)/∂θ) dH^{(i)}(x; θ) + ∫_{y_2}^{∞} (∂ log h^{(i)}(x; θ)/∂θ) dH^{(i)}(x; θ) ) dG(y_1, y_2) +

+ ∫_{−∞}^{∞} ∫_{y_1}^{∞} (∂ log(H(y_2; θ) − H(y_1; θ))/∂θ) (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2) =

= ∫_{−∞}^{∞} ∫_{y_1}^{∞} [ Σ_{i=1}^{k} ( ∫_{−∞}^{y_1} (∂h^{(i)}(x; θ)/∂θ) μ(dx) + ∫_{y_2}^{∞} (∂h^{(i)}(x; θ)/∂θ) μ(dx) ) + ∂(H(y_2; θ) − H(y_1; θ))/∂θ ] dG(y_1, y_2) =

= ∫_{−∞}^{∞} ∫_{y_1}^{∞} [ −∂(H(y_2; θ) − H(y_1; θ))/∂θ + ∂(H(y_2; θ) − H(y_1; θ))/∂θ ] dG(y_1, y_2) = 0,   (43)

since Σ_{i=1}^{k} ( ∫_{−∞}^{y_1} + ∫_{y_2}^{∞} ) (∂h^{(i)}(x; θ)/∂θ) μ(dx) = (∂/∂θ)[ H(y_1; θ) + 1 − H(y_2; θ) ]. Now, (13) follows from (42) and (43).

Proof of Lemma 3.3. Since l_θ · l_θ^T = || l_{θj} l_{θl} ||_{j,l=1,...,s}, where

l_{θj}(X, Y_1, Y_2, w) l_{θl}(X, Y_1, Y_2, w) = w Σ_{i=1}^{k} δ^{(i)} (∂ log h^{(i)}(X; θ)/∂θ_j)(∂ log h^{(i)}(X; θ)/∂θ_l) + (1 − w) (∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ_j)(∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ_l)

(here w² = w, (1 − w)² = 1 − w and δ^{(i)} δ^{(l)} = 0 for i ≠ l), then by Lemmas 3.1 and 3.2

E_θ[ (∂ log p_n(Z^{(n)}; θ)/∂θ)(∂ log p_n(Z^{(n)}; θ)/∂θ)^T ] = n J(θ),

where J(θ) is the matrix with the elements

J_{jl}(θ) = Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{−∞}^{y_1} (∂ log h^{(i)}(x; θ)/∂θ_j)(∂ log h^{(i)}(x; θ)/∂θ_l) dH^{(i)}(x; θ) + ∫_{y_2}^{∞} (∂ log h^{(i)}(x; θ)/∂θ_j)(∂ log h^{(i)}(x; θ)/∂θ_l) dH^{(i)}(x; θ) ) dG(y_1, y_2) +

+ ∫_{−∞}^{∞} ∫_{y_1}^{∞} (∂ log(H(y_2; θ) − H(y_1; θ))/∂θ_j)(∂ log(H(y_2; θ) − H(y_1; θ))/∂θ_l) (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2) = J_{jl}^{(1)}(θ) + J_{jl}^{(2)}(θ).   (44)

The first summand in (44) is estimated for all θ ∈ Θ as

| J_{jl}^{(1)}(θ) | ≤ 2 Σ_{i=1}^{k} ∫ |∂ log h^{(i)}(x; θ)/∂θ_j| |∂ log h^{(i)}(x; θ)/∂θ_l| dH^{(i)}(x; θ) ≤ 2 Σ_{i=1}^{k} [ I_{jj}^{(i)}(θ) I_{ll}^{(i)}(θ) ]^{1/2} < ∞,   (45)

where we use condition (C4) and the Cauchy-Bunyakovsky-Schwarz inequality. Similarly, for all θ ∈ Θ and j, l = 1, ..., s,

| J_{jl}^{(2)}(θ) | ≤ ∫_{−∞}^{∞} ∫_{y_1}^{∞} |∂(H(y_2; θ) − H(y_1; θ))/∂θ_j| |∂(H(y_2; θ) − H(y_1; θ))/∂θ_l| (H(y_2; θ) − H(y_1; θ))^{−1} dG(y_1, y_2) ≤

≤ ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{y_1}^{y_2} (∂ log h(x; θ)/∂θ_j)² dH(x; θ) )^{1/2} ( ∫_{y_1}^{y_2} (∂ log h(x; θ)/∂θ_l)² dH(x; θ) )^{1/2} dG(y_1, y_2) ≤ [ (I_X(θ))_{jj} (I_X(θ))_{ll} ]^{1/2} < ∞,   (46)

by the same argument as in (41). The expressions (45) and (46) imply the existence of J_{jl}(θ). To prove (14), note first that for all θ ∈ Θ and j, l = 1, ..., s,

E_θ[ ∂² log p_n(Z^{(n)}; θ)/∂θ_j ∂θ_l ] = n E_θ[ ∂ l_{θj}(X, Y_1, Y_2, w)/∂θ_l ],   (47)

where, using the identity ∂² log h/∂θ_j ∂θ_l = (∂²h/∂θ_j ∂θ_l)/h − (∂ log h/∂θ_j)(∂ log h/∂θ_l) (and its analogue for H(y_2; θ) − H(y_1; θ)) together with Lemma 3.1, we have the chain of equalities

E_θ[ ∂ l_{θj}(X, Y_1, Y_2, w)/∂θ_l ] = Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{−∞}^{y_1} (∂² h^{(i)}(x; θ)/∂θ_j ∂θ_l) μ(dx) + ∫_{y_2}^{∞} (∂² h^{(i)}(x; θ)/∂θ_j ∂θ_l) μ(dx) ) dG(y_1, y_2) +

+ ∫_{−∞}^{∞} ∫_{y_1}^{∞} (∂²(H(y_2; θ) − H(y_1; θ))/∂θ_j ∂θ_l) dG(y_1, y_2) − J_{jl}(θ) = −J_{jl}(θ),   (48)

since, exactly as in (43), the sum of the second-derivative terms equals ∫∫ (∂²/∂θ_j ∂θ_l)[ (H(y_1; θ) + 1 − H(y_2; θ)) + (H(y_2; θ) − H(y_1; θ)) ] dG(y_1, y_2) = 0. The equality (14) follows from (47) and (48).

Proof of Lemma 3.4. Under the regularity conditions of the lemma,

Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{Γ_1} h^{(i)}(x; θ + u) μ(dx) + ∫_{Γ_2} h^{(i)}(x; θ + u) μ(dx) ) dG(y_1, y_2) −

− Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{Γ_1} h^{(i)}(x; θ) μ(dx) + ∫_{Γ_2} h^{(i)}(x; θ) μ(dx) ) dG(y_1, y_2) = (1/2)(ν_1^{(2)} u; u) = o(|u|²),   (49)

where Γ_1 = N^{(i)} ∩ (−∞, y_1), Γ_2 = N^{(i)} ∩ (y_2, ∞),

ν_1^{(2)} = ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{Γ_1} (∂² h(x; θ*)/∂θ²) μ(dx) + ∫_{Γ_2} (∂² h(x; θ*)/∂θ²) μ(dx) ) dG(y_1, y_2),

and θ* is between θ and θ + u. It is easy to verify that, for |u| → 0,

( (h^{(i)}(x; θ + u))^{1/2} − (h^{(i)}(x; θ))^{1/2} ) / (h^{(i)}(x; θ))^{1/2} − (1/2)(u; ∂ log h^{(i)}(x; θ)/∂θ) = o(|u|).   (50)

Therefore, when |u| → 0,

E_θ[ w Σ_{i=1}^{k} δ^{(i)} ( ξ_{ni}(X; u) − (1/2)(u; ∂ log h^{(i)}(X; θ)/∂θ) )² ] = o(|u|²).   (51)

Similarly, we have

∫_{−∞}^{∞} ∫_{y_1}^{∞} (H(y_2; θ + u) − H(y_1; θ + u)) dG(y_1, y_2) − ∫_{−∞}^{∞} ∫_{y_1}^{∞} (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2) = (1/2)(ν_2^{(2)} u; u) = o(|u|²),   (52)

where

ν_2^{(2)} = ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∂²(H(y_2; θ*) − H(y_1; θ*))/∂θ² ) dG(y_1, y_2);

also

( (H(y_2; θ + u) − H(y_1; θ + u))^{1/2} − (H(y_2; θ) − H(y_1; θ))^{1/2} ) / (H(y_2; θ) − H(y_1; θ))^{1/2} − (1/2)( u; ∂ log(H(y_2; θ) − H(y_1; θ))/∂θ ) = o(|u|),   (53)

and hence

E_θ[ (1 − w) ( η_n(Y_1, Y_2; u) − (1/2)( u; ∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ ) )² ] = o(|u|²).   (54)

By (49),

E_θ[ w Σ_{i=1}^{k} δ^{(i)} ξ_{ni}²(X; u) ] = Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{Γ_1} ( (h^{(i)}(x; θ + u))^{1/2} − (h^{(i)}(x; θ))^{1/2} )² μ(dx) + ∫_{Γ_2} ( (h^{(i)}(x; θ + u))^{1/2} − (h^{(i)}(x; θ))^{1/2} )² μ(dx) ) dG(y_1, y_2) =

= 2 ∫_{−∞}^{∞} ∫_{y_1}^{∞} [ 1 − (H(y_2; θ) − H(y_1; θ)) ] dG(y_1, y_2) − 2 Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{Γ_1} ( h^{(i)}(x; θ + u) h^{(i)}(x; θ) )^{1/2} μ(dx) + ∫_{Γ_2} ( h^{(i)}(x; θ + u) h^{(i)}(x; θ) )^{1/2} μ(dx) ) dG(y_1, y_2) + o(|u|²) =

= −2 Σ_{i=1}^{k} ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( ∫_{−∞}^{y_1} ξ_{ni}(x; u) dH^{(i)}(x; θ) + ∫_{y_2}^{∞} ξ_{ni}(x; u) dH^{(i)}(x; θ) ) dG(y_1, y_2) + o(|u|²) = −2 E_θ[ w Σ_{i=1}^{k} δ^{(i)} ξ_{ni}(X; u) ] + o(|u|²).   (55)

Now, (15) and (17) follow from (51), and (16) and (18) follow from (54). From (15) and (55) we obtain (21). On the other hand, by (52),

E_θ[ (1 − w) η_n²(Y_1, Y_2; u) ] = ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( (H(y_2; θ + u) − H(y_1; θ + u))^{1/2} − (H(y_2; θ) − H(y_1; θ))^{1/2} )² dG(y_1, y_2) =

= 2 ∫_{−∞}^{∞} ∫_{y_1}^{∞} (H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2) − 2 ∫_{−∞}^{∞} ∫_{y_1}^{∞} ( η_n(y_1, y_2; u) + 1 )(H(y_2; θ) − H(y_1; θ)) dG(y_1, y_2) + o(|u|²) =

= −2 E_θ[ (1 − w) η_n(Y_1, Y_2; u) ] + o(|u|²).   (56)

Now (22) follows from (16) and (56). In order to establish (19) and (20), note that from (51) and (54), respectively, we have

E_θ[ w Σ_{i=1}^{k} δ^{(i)} ( ξ_{ni}(X; u) − (1/2)(u; ∂ log h^{(i)}(X; θ)/∂θ) )² ] = o(|u|²),   (57)

E_θ[ (1 − w) ( η_n(Y_1, Y_2; u) − (1/2)( u; ∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ ) )² ] = o(|u|²).   (58)

By (57) and Chebyshev's inequality,

P_θ( w | Σ_{i=1}^{k} δ^{(i)} ξ_{ni}(X; u) | > ε ) ≤ P_θ( w | Σ_{i=1}^{k} δ^{(i)} ( ξ_{ni}(X; u) − (1/2)(u; ∂ log h^{(i)}(X; θ)/∂θ) ) | > ε/2 ) + P_θ( (w/2) | Σ_{i=1}^{k} δ^{(i)} (u; ∂ log h^{(i)}(X; θ)/∂θ) | > ε/2 ),   (59)

where the first summand on the right-hand side of (59) is o(|u|²) by (57), and the second one is also of the order o(|u|²): it is bounded by (1/ε²) E_θ[ w Σ_{i=1}^{k} δ^{(i)} (u; ∂ log h^{(i)}(X; θ)/∂θ)² I( |(u; ∂ log h^{(i)}(X; θ)/∂θ)| > ε ) ] = o(|u|²), thanks to the convergence of the integrals defining J_1(θ). Quite similarly, using (58), we have

P_θ( |(1 − w) η_n(Y_1, Y_2; u)| > ε ) ≤ P_θ( (1 − w) | η_n(Y_1, Y_2; u) − (1/2)( u; ∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ ) | > ε/2 ) + P_θ( ((1 − w)/2) |( u; ∂ log(H(Y_2; θ) − H(Y_1; θ))/∂θ )| > ε/2 ) = o(|u|²).   (60)

Now, (19) and (20) follow from (59) and (60), respectively.

References

[1] E.L.Lehmann, J.P.Romano, Testing Statistical Hypotheses, Springer, New York, 2008.

[2] G.Roussas, Contiguity of Probability Measures: Some Applications in Statistics, Cambridge University Press, Cambridge, 1972.

[3] I.A.Ibragimov, R.Z.Khas'minskii, Asymptotic Theory of Estimation, Nauka, Moscow, 1979 (in Russian).

[4] J.Hajek, Local asymptotic minimax and admissibility in estimation, Proc. Sixth Berkeley Symp. on Math. Statist. and Prob., 1(1972), 175-194.

[5] L. Le Cam, Locally asymptotically normal families of distributions, Univ. Calif. Publ. Statist., 3(1960), 37-98.

[6] A.A.Abdushukurov, Estimators of unknown distributions from incomplete observations and its properties, LAMBERT Academic Publishing, 2011 (in Russian).

[7] A.A.Abdushukurov, N.S.Nurmuhamedova, Approximation of the likelihood ratio statistics in competing risks model under random censorship from both sides, ACTA NUUz, 4(2011), 162-172 (in Russian).

[8] A.A.Abdushukurov, N.S.Nurmuhamedova, Asymptotics of the likelihood ratio statistics in competing risks model under multiple right censorship on the right, In: Statistical Methods of estimation and Hypothesis Testing, Perm, Perm Gos. Univ., (2012), no. 21, 4-15 (in Russian).

[9] A.A.Abdushukurov, N.S.Nurmuhamedova, Local asymptotic normality in the competing risks model, Uzbekskii Matematicheskii Zhurnal, (2012), no. 2, 5-12 (in Russian).

[10] A.A.Abdushukurov, N.S.Nurmuhamedova, Local asymptotic normality of statistical experiments, LAMBERT Academic Publishing, 2012 (in Russian).

[11] A.A.Abdushukurov, N.S.Nurmuhamedova, Local approximate normality of likelihood ratio statistics in competing risks model under random censorship from both sides, Far East Journal of Theoretical Statistics, 42(2013), no. 2, 107-122.

[12] M.D.Burke, S.Csorgo, L.Horvath, Strong approximations of some biometric estimates under random censorship, Z. Wahrscheinlich. verw. Gebiete, 56(1981), 87-112.

[13] N.Langberg, F.Proschan, A.J.Quinzi, Converting dependent models into independent ones, preserving essential features, Ann. Probab., 6(1978), 174-181.
