
ВЕСТНИК ТОМСКОГО ГОСУДАРСТВЕННОГО УНИВЕРСИТЕТА. Математика и механика. 2008. № 2(3)

УДК 519.2

O. Arkoun, S. Pergamenchtchikov (Université de Rouen, France)

NONPARAMETRIC ESTIMATION FOR AN AUTOREGRESSIVE MODEL

The paper deals with the nonparametric estimation problem at a given fixed point for an autoregressive model with noise of unknown distribution. Modifications of kernel estimates are proposed. Asymptotic minimax and efficiency properties of the proposed estimators are shown.

Key words: asymptotic efficiency, kernel estimates, minimax, nonparametric autoregression.

AMS (2000) Subject Classification: primary 62G07, 62G08; secondary 62G20.

1. Introduction

We consider the following nonparametric autoregressive model

$$y_k = S(x_k)\,y_{k-1} + \xi_k, \qquad 1 \le k \le n, \eqno(1.1)$$

where $S(\cdot)$ is an unknown $\mathbb{R}\to\mathbb{R}$ function, $x_k = k/n$, $y_0$ is a constant and the noise random variables $(\xi_k)_{1\le k\le n}$ are i.i.d. with $\mathbf{E}\,\xi_k = 0$ and $\mathbf{E}\,\xi_k^2 = 1$.
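To fix ideas, here is a minimal simulation sketch of the model (1.1). The particular coefficient function and the Gaussian noise are illustrative choices only (any function from the stability set introduced in Section 2 and any admissible noise density would do); the helper name `simulate_ar` is ours, not the paper's.

```python
import numpy as np

def simulate_ar(n, S, y0=0.0, rng=None):
    """Simulate y_k = S(x_k) y_{k-1} + xi_k with x_k = k/n and i.i.d. noise, E xi = 0, E xi^2 = 1."""
    rng = np.random.default_rng(rng)
    y = np.empty(n + 1)
    y[0] = y0
    xi = rng.standard_normal(n)
    for k in range(1, n + 1):
        y[k] = S(k / n) * y[k - 1] + xi[k - 1]
    return y

# Illustrative coefficient function with sup|S| <= 1 - eps (here eps = 0.5)
S = lambda x: 0.5 * np.sin(np.pi * x)
y = simulate_ar(2000, S, rng=0)
```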

The model (1.1) is a generalization of autoregressive processes of the first order. In [4] the process (1.1) is considered with the function S having a parametric form. Moreover, the paper [5] studies spectral properties of the stationary process (1.1) with the nonparametric function S.

This paper deals with nonparametric estimation of the autoregression coefficient function $S$ at a given point $z_0$, when the smoothness of $S$ is known. For this problem we make use of the following modified kernel estimator

$$\tilde S_n(z_0) = \frac{1}{A_n}\sum_{k=1}^{n} Q(u_k)\,y_{k-1}\,y_k\,\mathbf{1}_{(A_n \ge d)}, \eqno(1.2)$$

where $Q(\cdot)$ is a kernel function,

$$A_n = \sum_{k=1}^{n} Q(u_k)\,y_{k-1}^2 \quad\text{with}\quad u_k = \frac{x_k - z_0}{h},$$

and $d$ and $h$ are some positive parameters.
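A direct transcription of the estimator (1.2) may look as follows; this is only a sketch under our own naming, with the rectangular kernel used as a default (the paper itself leaves $Q$ generic at this point).

```python
import numpy as np

def kernel_estimate(y, z0, h, d, Q=lambda u: (np.abs(u) <= 1.0).astype(float)):
    """Modified kernel estimator (1.2); returns 0 when A_n stays below the threshold d."""
    n = len(y) - 1                          # observations y_0, ..., y_n
    u = (np.arange(1, n + 1) / n - z0) / h  # u_k = (x_k - z0) / h
    w = Q(u)
    A_n = np.sum(w * y[:-1] ** 2)           # A_n = sum_k Q(u_k) y_{k-1}^2
    if A_n < d:                             # indicator 1(A_n >= d)
        return 0.0
    return np.sum(w * y[:-1] * y[1:]) / A_n
```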

First we assume that the unknown function $S$ belongs to the stable local Hölder class at the point $z_0$ with a known regularity $1 < \beta < 2$. This class will be defined below.

We find an asymptotic (as $n \to \infty$) positive lower bound for the minimax risk with the normalizing coefficient

$$\varphi_n = n^{\frac{\beta}{2\beta+1}}. \eqno(1.3)$$

To obtain this convergence rate we set in (1.2)

$$h = n^{-\frac{1}{2\beta+1}} \quad\text{and}\quad d = K_n\,n h, \eqno(1.4)$$

where $K_n > 0$,

$$\lim_{n\to\infty} K_n = 0 \quad\text{and}\quad \lim_{n\to\infty} \frac{h^{\beta}}{K_n} = 0. \eqno(1.5)$$

As to the kernel function, we assume that

$$\int_{-1}^{1} Q(z)\,dz > 0 \quad\text{and}\quad \int_{-1}^{1} z\,Q(z)\,dz = 0. \eqno(1.6)$$
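With (1.4)-(1.6) the tuning is fully explicit once $\beta$ is known. The sketch below (an illustration, not the paper's code) picks $h$ and $d$ for a given $\beta$; the choice $K_n = 1/\ln n$ is just one example of a sequence vanishing slowly enough for (1.5), and the default rectangular kernel of the previous sketch satisfies (1.6).

```python
import numpy as np

def tuning(n, beta):
    """Bandwidth and threshold from (1.4) with the illustrative choice K_n = 1/ln(n)."""
    h = n ** (-1.0 / (2.0 * beta + 1.0))
    K_n = 1.0 / np.log(n)
    d = K_n * n * h
    return h, d

# Example: estimate S(z0) from the simulated path of the first sketch
n, beta, z0 = 2000, 1.5, 0.5
h, d = tuning(n, beta)
print(kernel_estimate(y, z0, h, d), S(z0))
```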

In this paper we show that the estimator (1.2) with the parameters (1.4)-(1.6) is asymptotically minimax, i.e. we show that the asymptotic upper bound for the minimax risk with respect to the stable local Hölder class is finite.

Next we study sharp asymptotic properties of the minimax estimators (1.2).

To this end, similarly to [1], we introduce the weak stable local Hölder class. In this case we find a positive constant giving the exact asymptotic lower bound for the minimax risk with the normalizing coefficient (1.3). Moreover, we show that for the estimator (1.2) with the parameters (1.4)-(1.5) and the indicator kernel $Q = \mathbf{1}_{[-1,1]}$ the asymptotic upper bound of the minimax risk coincides with this constant, i.e. in this case such estimators are asymptotically efficient. In [9], Belitser considers the above model under Lipschitz conditions. The author proposes a recursive estimator and considers the estimation problem at a fixed point. For the quadratic risk, Belitser establishes a convergence rate without showing its optimality. Moulines et al. in [10] show that the convergence rate is optimal for the quadratic risk by using a recursive method for an autoregressive model of order $d$. We note that in our paper we establish an optimal convergence rate, but the risk considered is different from the one used in [10], and our assumptions are weaker than those of [10].

The paper is organized as follows. In the next section we give the main results. In Section 3 we find asymptotic lower bounds for the minimax risks. Section 4 is devoted to upper bounds. The Appendix contains some technical results.

2. Main results

First of all we assume that the noise in the model (1.1), i.e. the i.i.d. random variables $(\xi_k)_{1\le k\le n}$, has a density $p$ (with respect to the Lebesgue measure) from the functional class $\mathcal{P}$ defined as

$$\mathcal{P} := \Big\{ p \ge 0 :\ \int_{-\infty}^{+\infty} p(x)\,dx = 1,\ \int_{-\infty}^{+\infty} x\,p(x)\,dx = 0,\ \int_{-\infty}^{+\infty} x^2 p(x)\,dx = 1 \ \text{and}\ \int_{-\infty}^{+\infty} |x|^4 p(x)\,dx \le \sigma^* \Big\} \eqno(2.1)$$

with $\sigma^* > 3$. Note that the $(0,1)$-Gaussian density belongs to $\mathcal{P}$, since its fourth moment equals $3 < \sigma^*$. In the sequel we denote this density by $p_0$.

The problem is to estimate the function $S(\cdot)$ at a fixed point $z_0 \in (0,1)$, i.e. the value $S(z_0)$. For this problem we make use of the risk proposed in [1]. Namely, for any estimate $\tilde S_n = \tilde S_n(z_0)$ (i.e. any function measurable with respect to the observations $(y_k)_{1\le k\le n}$) we set

$$\mathcal{R}_n(\tilde S_n, S) = \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,|\tilde S_n(z_0) - S(z_0)|, \eqno(2.2)$$

where $\mathbf{E}_{S,p}$ is the expectation taken with respect to the distribution $\mathbf{P}_{S,p}$ of the vector $(y_1,\dots,y_n)$ in (1.1) corresponding to the function $S$ and the density $p$ from $\mathcal{P}$.

To obtain a stable (uniformly with respect to the function $S$) model (1.1), we assume (see [4] and [5]) that for some fixed $0 < \varepsilon < 1$ the unknown function $S$ belongs to the stability set

$$\Gamma_\varepsilon = \{ S \in C_1[0,1] :\ \|S\| \le 1 - \varepsilon \}, \eqno(2.3)$$

where $\|S\| = \sup_{0\le x\le 1} |S(x)|$. Here $C_1[0,1]$ is the Banach space of continuously differentiable $[0,1]\to\mathbb{R}$ functions.

For fixed constants $K > 0$ and $0 < \alpha < 1$ we define the corresponding stable local Hölder class at the point $z_0$ as

$$H^{(\beta)}(z_0, K, \varepsilon) = \{ S \in \Gamma_\varepsilon :\ \|\dot S\| \le K \ \text{and}\ \Omega^*(z_0, S) \le K \} \eqno(2.4)$$

with $\beta = 1 + \alpha$ and

$$\Omega^*(z_0, S) = \sup_{x\in[0,1]} \frac{|\dot S(x) - \dot S(z_0)|}{|x - z_0|^{\alpha}}.$$

First we show that the sequence (1.3) gives the optimal convergence rate for the functions $S$ from $H^{(\beta)}(z_0, K, \varepsilon)$. We start with a lower bound.

Theorem 2.1. For any $K > 0$ and $0 < \varepsilon < 1$

$$\liminf_{n\to\infty}\ \inf_{\tilde S_n}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)} \varphi_n\,\mathcal{R}_n(\tilde S_n, S) > 0, \eqno(2.5)$$

where the infimum is taken over all estimators $\tilde S_n$.

Now we obtain an upper bound for the kernel estimator (1.2).

Theorem 2.2. For any $K > 0$ and $0 < \varepsilon < 1$ the kernel estimator (1.2) with the parameters (1.4)-(1.6) satisfies the following inequality

$$\limsup_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)} \varphi_n\,\mathcal{R}_n(\tilde S_n, S) < \infty. \eqno(2.6)$$

Theorems 2.1 and 2.2 imply that the sequence (1.3) is the optimal (minimax) convergence rate for any stable local Hölder class of regularity $\beta$, i.e. the estimator (1.2) with the parameters (1.4)-(1.6) is minimax with respect to the functional class (2.4).

Now we study some efficiency properties of the minimax estimators (1.2). To this end, similarly to [1], we make use of the family of weak stable local Hölder classes at the point $z_0$, i.e. for any $\delta > 0$ we set

$$U_{n,\delta}^{(\beta)}(z_0, \varepsilon) = \{ S \in \Gamma_\varepsilon :\ \|\dot S\| \le \delta^{-1}\ \text{and}\ |\Omega_h(z_0, S)| \le \delta\,h^{\beta} \}, \eqno(2.7)$$

where

$$\Omega_h(z_0, S) = \int_{-1}^{1} \big( S(z_0 + uh) - S(z_0) \big)\,du$$

and $h$ is given in (1.4).

Moreover, we set

$$\tau(S) = 1 - S^2(z_0). \eqno(2.8)$$
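The quantity $\tau(S)$ has a simple interpretation that may help the reader (this heuristic is ours and is not used in the proofs): freezing the coefficient at $z_0$, the observations behave locally like a stationary first-order autoregression whose variance is the reciprocal of $\tau(S)$.

```latex
% Heuristic only (not part of the paper's proofs): freeze the coefficient at z_0.
% For a stationary AR(1) with coefficient a = S(z_0), |a| \le 1 - \varepsilon, and unit
% noise variance, the stationary variance v solves v = a^2 v + 1, so that
\[
  \operatorname{Var}(y_k) \;=\; \frac{1}{1 - S^2(z_0)} \;=\; \frac{1}{\tau(S)} .
\]
% Thus \tau(S) is the reciprocal of the local variance of the observations, which is why
% it appears in the normalization \tau^{-1/2}(S) of the minimax risks below.
```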

With the help of this function we describe the sharp lower bound for the minimax risks in this case.

Theorem 2.3. For any $\delta > 0$ and $0 < \varepsilon < 1$

$$\liminf_{n\to\infty}\ \inf_{\tilde S_n}\ \sup_{S\in U_{n,\delta}^{(\beta)}(z_0,\varepsilon)} \tau^{-1/2}(S)\,\varphi_n\,\mathcal{R}_n(\tilde S_n, S) \ \ge\ \mathbf{E}\,|\eta|, \eqno(2.9)$$

where $\eta$ is a Gaussian random variable with the parameters $(0,1/2)$.

Theorem 2.4. The estimator (1.2) with the parameters (1.4)-(1.5) and the indicator kernel $Q(z) = \mathbf{1}_{(|z|\le 1)}$ satisfies the following inequality

$$\lim_{\delta\to 0}\ \limsup_{n\to\infty}\ \sup_{S\in U_{n,\delta}^{(\beta)}(z_0,\varepsilon)} \tau^{-1/2}(S)\,\varphi_n\,\mathcal{R}_n(\tilde S_n, S) \ \le\ \mathbf{E}\,|\eta|,$$

where $\eta$ is a Gaussian random variable with the parameters $(0,1/2)$.

Theorems 2.3 and 2.4 imply that the estimator (1.2), (1.4) - (1.5) with the indicator kernel is asymptotically efficient.
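The efficiency statement can be checked numerically: for the indicator kernel, the normalized risk $\tau^{-1/2}(S)\,\varphi_n\,\mathbf{E}\,|\tilde S_n(z_0) - S(z_0)|$ should approach $\mathbf{E}\,|\eta| = 1/\sqrt{\pi} \approx 0.564$ as $n$ grows. The Monte Carlo sketch below is ours (it reuses the hypothetical helpers `simulate_ar`, `kernel_estimate` and `tuning` from the earlier sketches and Gaussian noise) and is only an illustration of the theorems, not part of the paper.

```python
import numpy as np

def normalized_risk(n, beta, z0, S, n_rep=500, seed=0):
    """Monte Carlo value of tau^{-1/2}(S) * phi_n * E|S_n(z0) - S(z0)| for the indicator kernel."""
    rng = np.random.default_rng(seed)
    h, d = tuning(n, beta)
    phi_n = n ** (beta / (2.0 * beta + 1.0))
    tau = 1.0 - S(z0) ** 2
    errors = []
    for _ in range(n_rep):
        y = simulate_ar(n, S, rng=rng)
        errors.append(abs(kernel_estimate(y, z0, h, d) - S(z0)))
    return phi_n * np.mean(errors) / np.sqrt(tau)

S = lambda x: 0.5 * np.sin(np.pi * x)
for n in (500, 2000, 8000):
    print(n, normalized_risk(n, beta=1.5, z0=0.5, S=S))
# The printed values should drift toward E|eta| = 1/sqrt(pi) ~ 0.564 as n increases.
```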

Remark 2.1. One can show (see [1]) that for any $0 < \delta < 1$ and $n \ge 1$

$$H^{(\beta)}(z_0, \delta, \varepsilon) \subset U_{n,\delta}^{(\beta)}(z_0, \varepsilon).$$

This means that the "natural" normalizing coefficient for the functional class (2.7) is the sequence (1.3). Theorems 2.3 and 2.4 extend the usual Hölder approach for pointwise estimation while keeping the minimax convergence rate (1.3).

3. Lower bounds

3.1. Proof of Theorem 2.1. Note that to prove (2.5) it suffices to show that

$$\liminf_{n\to\infty}\ \inf_{\tilde S_n}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)} \mathbf{E}_{S,p_0}\,\psi_n(\tilde S_n, S) > 0, \eqno(3.1)$$

where

$$\psi_n(\tilde S_n, S) = \varphi_n\,|\tilde S_n(z_0) - S(z_0)|.$$

We make use of the method proposed by Ibragimov and Hasminskii in [7] to obtain a lower bound in the density estimation problem. First we choose the corresponding parametric family in $H^{(\beta)}(z_0, K, \varepsilon)$. Let $V$ be a two times continuously differentiable function such that $\int_{-1}^{1} V(z)\,dz > 0$ and $V(z) = 0$ for any $|z| \ge 1$. We set

$$S_u(x) = \frac{u}{\varphi_n}\,V\Big(\frac{x - z_0}{h}\Big), \eqno(3.2)$$

where $\varphi_n$ and $h$ are defined in (1.3) and (1.4).

It is easy to see that for any $z_0 - h \le x \le z_0 + h$

$$|\dot S_u(x) - \dot S_u(z_0)| = \frac{|u|}{\varphi_n h}\,\Big| \dot V\Big(\frac{x - z_0}{h}\Big) - \dot V(0) \Big| \le \frac{|u|\,V^*}{\varphi_n h^2}\,|x - z_0| \le |u|\,V^*\,|x - z_0|^{\alpha},$$

where $V^* = \max_{|z|\le 1} |\ddot V(z)|$ (in the last step we used $\varphi_n h^2 = h^{1-\alpha}$ and $|x - z_0| \le h$). Therefore, for all $0 \le |u| \le u^* = K/V^*$ we obtain that

$$\sup_{z_0 - h \le x \le z_0 + h} \frac{|\dot S_u(x) - \dot S_u(z_0)|}{|x - z_0|^{\alpha}} \le K.$$

Moreover, by the definition (3.2), $\dot S_u(x) = \dot S_u(z_0 + h) = 0$ for all $x \ge z_0 + h$ and $\dot S_u(x) = \dot S_u(z_0 - h) = 0$ for all $x \le z_0 - h$. Therefore, the last inequality implies that

$$\sup_{|u| \le u^*} \Omega^*(z_0, S_u) \le K,$$

where the function $\Omega^*(z_0, S)$ is defined in (2.4).

This means that there exists $n_{K,\varepsilon} > 0$ such that $S_u \in H^{(\beta)}(z_0, K, \varepsilon)$ for all $|u| \le u^*$ and $n \ge n_{K,\varepsilon}$. Therefore, for all $n \ge n_{K,\varepsilon}$ and for any estimator $\tilde S_n$ we bound the supremum in (3.1) from below as

$$\sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)} \mathbf{E}_{S,p_0}\,\psi_n(\tilde S_n, S) \ \ge\ \sup_{|u|\le u^*} \mathbf{E}_{S_u,p_0}\,\psi_n(\tilde S_n, S_u) \ \ge\ \frac{1}{2b}\int_{-b}^{b} \mathbf{E}_{S_u,p_0}\,\psi_n(\tilde S_n, S_u)\,du \eqno(3.3)$$

for any $0 < b < u^*$.

Notice that for any $S$ the measure $\mathbf{P}_{S,p_0}$ is equivalent to the measure $\mathbf{P}_{0,p_0}$, where $\mathbf{P}_{0,p_0}$ is the distribution of the vector $(y_1,\dots,y_n)$ in (1.1) corresponding to the function $S = 0$ and the Gaussian $(0,1)$ noise density $p_0$, i.e. the random variables $(y_1,\dots,y_n)$ are i.i.d. $\mathcal{N}(0,1)$ with respect to the measure $\mathbf{P}_{0,p_0}$. In the sequel we denote $\mathbf{P}_{0,p_0}$ by $\mathbf{P}$. It is easy to see that in this case the Radon-Nikodym derivative can be written as

$$\rho_n(u) := \frac{d\mathbf{P}_{S_u,p_0}}{d\mathbf{P}}(y_1,\dots,y_n) = \exp\Big\{ u\,\sqrt{\varsigma_n}\,\eta_n - \frac{u^2}{2}\,\varsigma_n \Big\}$$

with

$$\varsigma_n = \frac{1}{\varphi_n^2}\sum_{k=1}^{n} V^2(u_k)\,y_{k-1}^2 \quad\text{and}\quad \eta_n = \frac{1}{\varphi_n\sqrt{\varsigma_n}}\sum_{k=1}^{n} V(u_k)\,y_{k-1}\,\xi_k.$$

By the law of large numbers we obtain

$$\mathbf{P}\text{-}\lim_{n\to\infty} \varsigma_n = \mathbf{P}\text{-}\lim_{n\to\infty} \frac{1}{nh}\sum_{k=k_*}^{k^*} V^2(u_k)\,y_{k-1}^2 = \int_{-1}^{1} V^2(w)\,dw = \sigma^2,$$

where

$$k_* = [n z_0 - n h] + 1 \quad\text{and}\quad k^* = [n z_0 + n h]. \eqno(3.4)$$

Here $[a]$ denotes the integer part of $a$.

Moreover, by the central limit theorem for martingales (see [2] and [3]), it is easy to see that under the measure $\mathbf{P}$

$$\eta_n \Longrightarrow \mathcal{N}(0,1) \quad\text{as}\quad n \to \infty.$$

Therefore we can represent the Radon-Nikodym density in the following asymptotic form

$$\rho_n(u) = \exp\Big\{ u\,\sigma\,\eta_n - \frac{u^2 \sigma^2}{2} + r_n \Big\},$$

where $\mathbf{P}\text{-}\lim_{n\to\infty} r_n = 0$.

This means that in this case the family of Radon-Nikodym densities $(\rho_n(u))_{n\ge 1}$ satisfies the L.A.N. property and we can make use of the method of Theorem 12.1 in [7] to obtain the following inequality

$$\liminf_{n\to\infty}\ \frac{1}{2b}\int_{-b}^{b} \mathbf{E}_{S_u,p_0}\,\psi_n(\tilde S_n, S_u)\,du \ \ge\ I(b,\sigma), \eqno(3.5)$$

where

$$I(b,\sigma) = \frac{\max\big(1,\ b - 4\sqrt{b}\,\big)}{b}\,\frac{\sigma}{\sqrt{2\pi}} \int_{-b}^{b} |u|\,e^{-\frac{\sigma^2 u^2}{2}}\,du$$

and $0 < b < u^*$. Therefore, the inequalities (3.3) and (3.5) imply (3.1). Hence Theorem 2.1. ■

3.2. Proof of Theorem 2.3

First, similarly to the proof of Theorem 2.1, we choose the corresponding parametric functional family $S_{u,\nu}(\cdot)$ in the form (3.2) with the function $V = V_\nu$ defined as

$$V_\nu(x) = \nu^{-1}\int_{-1}^{1} Q_\nu(u)\,g\Big(\frac{x - u}{\nu}\Big)\,du,$$

where $Q_\nu(u) = \mathbf{1}_{\{|u| \le 1-2\nu\}} + 2\cdot\mathbf{1}_{\{1-2\nu < |u| \le 1-\nu\}}$ with $0 < \nu < 1/4$, and $g$ is some even nonnegative infinitely differentiable function such that $g(z) = 0$ for $|z| \ge 1$ and $\int_{-1}^{1} g(z)\,dz = 1$. One can show (see [1]) that for any $b > 0$, $0 < \delta < 1$ and $0 < \nu < 1/4$ there exists $n_* = n_*(b,\delta,\nu) > 0$ such that for all $|u| \le b$ and $n \ge n_*$

$$S_{u,\nu} \in U_{n,\delta}^{(\beta)}(z_0, \varepsilon).$$

Therefore, in this case for any $n \ge n_*$

$$\varphi_n \sup_{S\in U_{n,\delta}^{(\beta)}(z_0,\varepsilon)} \tau^{-1/2}(S)\,\mathcal{R}_n(\tilde S_n, S) \ \ge\ \sup_{S\in U_{n,\delta}^{(\beta)}(z_0,\varepsilon)} \tau^{-1/2}(S)\,\mathbf{E}_{S,p_0}\,\psi_n(\tilde S_n, S) \ \ge\ \tau_*(n,b)\,\frac{1}{2b}\int_{-b}^{b} \mathbf{E}_{S_{u,\nu},p_0}\,\psi_n(\tilde S_n, S_{u,\nu})\,du,$$

where $\tau_*(n,b) = \inf_{|u|\le b} \tau^{-1/2}(S_{u,\nu})$.

The definitions (2.8) and (3.2) imply that for any $b > 0$

$$\lim_{n\to\infty}\ \sup_{|u| \le b} |\tau(S_{u,\nu}) - 1| = 0.$$

Therefore, in the same way as in the proof of Theorem 2.1, we obtain that for any $b > 0$ and $0 < \nu < 1/4$

$$\liminf_{n\to\infty}\ \inf_{\tilde S_n}\ \sup_{S\in U_{n,\delta}^{(\beta)}(z_0,\varepsilon)} \tau^{-1/2}(S)\,\varphi_n\,\mathcal{R}_n(\tilde S_n, S) \ \ge\ I(b,\sigma_\nu), \eqno(3.6)$$

where the function $I(b,\sigma_\nu)$ is defined in (3.5) with $\sigma_\nu^2 = \int_{-1}^{1} V_\nu^2(u)\,du$. It is easy to check that $\sigma_\nu^2 \to 2$ as $\nu \to 0$. Letting $b \to \infty$ and $\nu \to 0$ in (3.6) yields the inequality (2.9). Hence Theorem 2.3. ■

4. Upper bounds

4.1. Proof of Theorem 2.2

First of all we set

$$\tilde A_n = \frac{A_n}{\varphi_n^2} \quad\text{and}\quad \hat A_n = \tilde A_n^{-1}\,\mathbf{1}_{(\tilde A_n \ge K_n)}. \eqno(4.1)$$

Now from (1.2) we represent the estimation error as

$$\tilde S_n(z_0) - S(z_0) = -S(z_0)\,\mathbf{1}_{(\tilde A_n < K_n)} + \frac{1}{\varphi_n}\,\hat A_n\,Z_n + \frac{1}{\varphi_n}\,\hat A_n\,B_n \eqno(4.2)$$

with

$$Z_n = \frac{1}{\varphi_n}\sum_{k=1}^{n} Q(u_k)\,y_{k-1}\,\xi_k \quad\text{and}\quad B_n = \frac{1}{\varphi_n}\sum_{k=1}^{n} Q(u_k)\,\big(S(x_k) - S(z_0)\big)\,y_{k-1}^2.$$

Note that the first term on the right-hand side of (4.2) is studied in Lemma A.3. To estimate the second term we make use of Lemma A.2, which implies directly

$$\limsup_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,Z_n^2 < \infty$$

and, therefore, by (A.8) we obtain

$$\limsup_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,|\hat A_n\,Z_n| < \infty.$$

Let us now estimate the last term on the right-hand side of (4.2). To this end we need to show that

$$\limsup_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,B_n^2 < \infty. \eqno(4.3)$$

Indeed, putting $r_k = S(x_k) - S(z_0) - \dot S(z_0)(x_k - z_0)$, by the Taylor formula we represent $B_n$ as

$$B_n = \frac{h}{\varphi_n}\,\dot S(z_0)\,\bar B_n + \frac{1}{\varphi_n}\,\hat B_n,$$

where $\bar B_n = \sum_{k=1}^{n} Q(u_k)\,u_k\,y_{k-1}^2$ and $\hat B_n = \sum_{k=1}^{n} Q(u_k)\,r_k\,y_{k-1}^2$. We remind that by the condition (1.6) $\int_{-1}^{1} u\,Q(u)\,du = 0$. Therefore, through Lemma A.2 we obtain

$$\lim_{n\to\infty}\ \frac{h^2}{\varphi_n^2}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,\bar B_n^2 = 0.$$

Moreover, for any function $S \in H^{(\beta)}(z_0, K, \varepsilon)$ and for $k_* \le k \le k^*$ ($k_*$ and $k^*$ are given in (3.4))

$$|r_k| = \Big| \int_{z_0}^{x_k} \big(\dot S(u) - \dot S(z_0)\big)\,du \Big| \le K\,|x_k - z_0|^{\beta} \le K\,h^{\beta},$$

i.e. $|\hat B_n| \le K\,h^{\beta}\,A_n$. Therefore, by Lemma A.2

$$\limsup_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}}\ \frac{1}{\varphi_n^2}\,\mathbf{E}_{S,p}\,\hat B_n^2 < \infty.$$

This implies (4.3). Hence Theorem 2.2. ■

4.2. Proof of Theorem 2.4

Similarly to Lemma A.2 from [1], by making use of Lemma A.1 and Lemma A.2 we can show that

$$\sqrt{\frac{\tau(S)}{2}}\;Z_n \Longrightarrow \mathcal{N}(0,1) \quad\text{as}\quad n \to \infty$$

uniformly in $S \in \Gamma_\varepsilon$ and $p \in \mathcal{P}$. Therefore, by Lemma A.2 we obtain that, uniformly in $S \in \Gamma_\varepsilon$ and $p \in \mathcal{P}$,

$$\tau^{-1/2}(S)\,\hat A_n\,Z_n \Longrightarrow \mathcal{N}(0,1/2) \quad\text{as}\quad n \to \infty.$$

Moreover, by applying the Burkholder inequality and Lemma A.2 to the martingale $Z_n$ we deduce that

$$\limsup_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,Z_n^4 < \infty.$$

Therefore, the inequality (A.8) implies that the sequence $(\hat A_n Z_n)_{n\ge 1}$ is uniformly integrable. This means that

$$\lim_{n\to\infty}\ \sup_{S\in H^{(\beta)}(z_0,K,\varepsilon)}\ \sup_{p\in\mathcal{P}} \Big| \tau^{-1/2}(S)\,\mathbf{E}_{S,p}\,|\hat A_n Z_n| - \mathbf{E}\,|\eta| \Big| = 0,$$

where $\eta$ is a Gaussian random variable with the parameters $(0,1/2)$. Now to finish this proof we have to show that

$$\lim_{\delta\to 0}\ \limsup_{n\to\infty}\ \sup_{S\in U_{n,\delta}^{(\beta)}(z_0,\varepsilon)}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,B_n^2 = 0. \eqno(4.4)$$

Indeed, by setting $f_S(u) = S(z_0 + hu) - S(z_0)$ we rewrite $B_n$ as

$$B_n = \frac{1}{\varphi_n}\sum_{k=k_*}^{k^*} f_S(u_k)\,y_{k-1}^2 = \varphi_n\,G_n(f_S, S) + \frac{\varphi_n}{\tau(S)}\,\Omega_h(z_0, S), \eqno(4.5)$$

where

$$G_n(f, S) = \frac{1}{nh}\sum_{k=k_*}^{k^*} f(u_k)\,y_{k-1}^2 - \frac{1}{\tau(S)}\int_{-1}^{1} f(u)\,du$$

and $\Omega_h(z_0, S)$ is defined in (2.7). The definition (2.8) implies that for any $S \in \Gamma_\varepsilon$

$$\varepsilon^2 \le \tau(S) \le 1. \eqno(4.6)$$

From here, by the definition (2.7), we obtain that for any $S \in U_{n,\delta}^{(\beta)}(z_0, \varepsilon)$

$$|B_n| \le \varphi_n\,|G_n(f_S, S)| + \frac{\delta}{\varepsilon^2},$$

since $\varphi_n h^{\beta} = 1$.

Moreover, for any $S \in U_{n,\delta}^{(\beta)}(z_0, \varepsilon)$ the function $f_S$ satisfies the inequality

$$\|f_S\| + \|\dot f_S\| \le \delta^{-1} h.$$

We note also that $\varphi_n h^2 \to 0$ as $n \to \infty$. Therefore, by making use of Lemma A.2 with $R = h/\delta$ we obtain (4.4). Hence Theorem 2.4. ■

5. Appendix

In this section we study distribution properties of the process (1.1).

Lemma A.1. For any $0 < \varepsilon < 1$ the random variables (1.1) satisfy the following moment inequality:

$$m^* := \sup_{n\ge 1}\ \sup_{0\le k\le n}\ \sup_{S\in\Gamma_\varepsilon}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,y_k^4 < \infty. \eqno(A.1)$$

Proof. One can deduce from (1.1) with $S \in \Gamma_\varepsilon$ that for all $1 \le k \le n$

$$y_k^4 \le \Big( (1-\varepsilon)^k\,|y_0| + \sum_{j=1}^{k} (1-\varepsilon)^{k-j}\,|\xi_j| \Big)^4 \le 8\,|y_0|^4 + 8\,\Big( \sum_{j=1}^{k} (1-\varepsilon)^{k-j}\,|\xi_j| \Big)^4.$$

Moreover, by the Hölder inequality with $q = 4/3$ and $p = 4$,

$$y_k^4 \le 8\,|y_0|^4 + \frac{8}{\varepsilon^3}\sum_{j=1}^{k} (1-\varepsilon)^{k-j}\,\xi_j^4.$$

Therefore, for any $p \in \mathcal{P}$

$$\mathbf{E}_{S,p}\,y_k^4 \le 8\,|y_0|^4 + \frac{8\,\sigma^*}{\varepsilon^4}.$$

Hence Lemma A.1. ■

Now for any $K > 0$ and $0 < \varepsilon < 1$ we set

$$\Theta_{K,\varepsilon} = \{ S \in \Gamma_\varepsilon :\ \|\dot S\| \le K \}. \eqno(A.2)$$

Lemma A.2. Let the function $f$ be two times continuously differentiable on $[-1,1]$ and such that $f(u) = 0$ for $|u| > 1$. Then

$$\limsup_{n\to\infty}\ \sup_{R>0}\ \frac{1}{(Rh)^2}\ \sup_{\|f\|_* \le R}\ \sup_{S\in\Theta_{K,\varepsilon}}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,G_n^2(f, S) < \infty, \eqno(A.3)$$

where $\|f\|_* = \|f\| + \|\dot f\|$ and $G_n(f, S)$ is defined in (4.5).

Proof. First of all, note that

$$\sum_{k=1}^{n} f(u_k)\,y_{k-1}^2 = T_n + a_n, \eqno(A.4)$$

where

$$T_n = \sum_{k=k_*}^{k^*} f(u_k)\,y_k^2 \quad\text{and}\quad a_n = \sum_{k=k_*}^{k^*} \big(f(u_k) - f(u_{k-1})\big)\,y_{k-1}^2 - f(u_{k^*})\,y_{k^*}^2$$

with $k_*$ and $k^*$ defined in (3.4). Moreover, from the model (1.1) we find

$$T_n = I_n(f) + \sum_{k=k_*}^{k^*} f(u_k)\,S^2(x_k)\,y_{k-1}^2 + M_n,$$

where

$$I_n(f) = \sum_{k=k_*}^{k^*} f(u_k) \quad\text{and}\quad M_n = \sum_{k=k_*}^{k^*} f(u_k)\,\big( 2 S(x_k)\,y_{k-1}\,\xi_k + \tilde\eta_k \big)$$

with $\tilde\eta_k = \xi_k^2 - 1$. By setting

$$C_n = \sum_{k=k_*}^{k^*} \big(S^2(x_k) - S^2(z_0)\big)\,f(u_k)\,y_{k-1}^2 \quad\text{and}\quad D_n = \sum_{k=k_*}^{k^*} f(u_k)\,\big(y_{k-1}^2 - y_k^2\big)$$

we get

$$\frac{1}{\varphi_n^2}\,T_n = \frac{1}{\tau(S)}\,\frac{I_n(f)}{\varphi_n^2} + \frac{1}{\tau(S)}\,\frac{\mathcal{A}_n}{\varphi_n^2} \eqno(A.5)$$

with $\mathcal{A}_n = M_n + C_n + S^2(z_0)\,D_n$. Moreover, taking into account that $\varphi_n^2 = nh$, that $u_k - u_{k-1} = 1/(nh)$ and that $\|f\|_* \le R$, we obtain, by comparing the sum $I_n(f)$ with the corresponding integral,

$$\Big| \frac{1}{nh}\sum_{k=k_*}^{k^*} f(u_k) - \int_{-1}^{1} f(t)\,dt \Big| \le \frac{2R}{nh}.$$

Taking this into account in (A.5), together with the lower bound for $\tau(S)$ given in (4.6), we find that

$$\Big| \frac{T_n}{nh} - \frac{1}{\tau(S)}\int_{-1}^{1} f(t)\,dt \Big| \le \frac{1}{\varepsilon^2}\Big( \frac{2R}{nh} + \frac{|M_n|}{nh} + \frac{|C_n|}{nh} + \frac{|D_n|}{nh} \Big). \eqno(A.6)$$

Note that the sequence $(M_n)_{n\ge 1}$ is a square integrable martingale. Therefore,

$$\mathbf{E}_{S,p}\,\frac{M_n^2}{(nh)^2} = \frac{1}{(nh)^2}\,\mathbf{E}_{S,p}\sum_{k=k_*}^{k^*} f^2(u_k)\,\big( 2 S(x_k)\,y_{k-1}\,\xi_k + \tilde\eta_k \big)^2 \le \frac{4 R^2 (m^* + \sigma^*)}{nh},$$

where $m^*$ is given in (A.1). Moreover, taking into account that $|S(x_k) - S(z_0)| \le K\,|x_k - z_0|$ for any $S \in \Theta_{K,\varepsilon}$ and that $k^* - k_* \le 2nh$, we obtain that

$$\mathbf{E}_{S,p}\,\frac{C_n^2}{(nh)^2} \le \frac{k^* - k_*}{(nh)^2}\sum_{k=k_*}^{k^*} |S^2(x_k) - S^2(z_0)|^2\,f^2(u_k)\,\mathbf{E}_{S,p}\,y_{k-1}^4 \le 16\,R^2 K^2 m^* h^2.$$

Let us now consider the last term on the right-hand side of the inequality (A.6). To this end we make use of the summation by parts formula, i.e. we represent $D_n$ as

$$D_n = \sum_{k=k_*}^{k^*} \big(f(u_k) - f(u_{k-1})\big)\,y_{k-1}^2 + f(u_{k_*-1})\,y_{k_*-1}^2 - f(u_{k^*})\,y_{k^*}^2.$$

Therefore, taking into account that $\|f\|_* \le R$, we obtain that

$$\mathbf{E}_{S,p}\,D_n^2 \le 3 R^2\,\mathbf{E}_{S,p}\Big( \Big( \frac{1}{nh}\sum_{k=k_*}^{k^*} y_{k-1}^2 \Big)^2 + y_{k^*}^4 + y_{k_*-1}^4 \Big).$$

In the same way we estimate the second term $a_n$ on the right-hand side of (A.4). Hence Lemma A.2. ■

Lemma A.3. Under the conditions (1.4) and (1.5)

$$\lim_{n\to\infty}\ \varphi_n\ \sup_{S\in\Theta_{K,\varepsilon}}\ \sup_{p\in\mathcal{P}} \mathbf{P}_{S,p}\big(\tilde A_n < K_n\big) = 0 \eqno(A.7)$$

and

$$\limsup_{n\to\infty}\ \sup_{S\in\Theta_{K,\varepsilon}}\ \sup_{p\in\mathcal{P}} \mathbf{E}_{S,p}\,\hat A_n^2 < \infty. \eqno(A.8)$$

Proof. It is easy to see that the inequality (A.7) follows directly from Lemma A.2. By making use of Lemma A.2 with the condition (1.5) we obtain the inequality (A.8). ■

REFERENCES

1. Galtchouk L., Pergamenchtchikov S. Asymptotically efficient estimates for nonparametric regression models // Statist. Probab. Lett. 2006. V. 76. No. 8. P. 852 - 860.

2. Helland I.S. Central limit theorems for martingales with discrete or continuous time // Scand. J. Statist. 1982. V. 9. No. 2. P. 79 - 94.

3. Rebolledo R. Central limit theorems for local martingales // Z. Wahrsch. Verw. Gebiete. 1980. V. 51. No. 3. P. 269 - 286.

4. Dahlhaus R. On the Kullback-Leibler information divergence of locally stationary processes // Stochastic Process. Appl. 1996. V. 62. No. 1. P. 139 - 168.

5. Dahlhaus R. Maximum likelihood estimation and model selection for locally stationary processes // J. Nonparametr. Statist. 1996. V. 6. No. 2 - 3. P. 171 - 191.

6. Shiryaev A.N. Probability. Second Edition. Springer, 1992.

7. Ibragimov I.A. and Hasminskii R.Z. Statistical Estimation: Asymptotic Theory. Berlin, New York: Springer, 1981.

8. Belitser E. Local minimax pointwise estimation of a multivariate density // Statistica Neerlandica. 2000. V. 54. No. 3. P. 351 - 365.

9. Belitser E. Recursive estimation of a drifted autoregressive parameter // The Annals of Statistics. 2000. V. 26. No. 3. P. 860 - 870.

10. Moulines et al. On recursive estimation for time varying autoregressive processes // The Annals of Statistics. 2005. V. 33. No. 6. P. 2610 - 2654.



The article was accepted for publication on 04.06.2008.
