
ВЕСТНИК ТОМСКОГО ГОСУДАРСТВЕННОГО УНИВЕРСИТЕТА

2014. Control, Computer Engineering and Informatics (Управление, вычислительная техника и информатика). No. 1 (26)

УДК 519.233.22

T.V. Dogadova, V.A. Vasiliev GUARANTEED PARAMETER ESTIMATION OF STOCHASTIC LINEAR REGRESSION BY SAMPLE OF FIXED SIZE

A method of parameter estimation for multivariate linear regression from a sample of fixed size is proposed. The method makes it possible to obtain parameter estimators with guaranteed accuracy in the mean square sense. Truncated sequential estimators for the ARARCH(1,1), AR(1) and AR(2) models are constructed and investigated. Asymptotic efficiency of the parameter estimator of AR(1) with unknown noise variance is established.

Keywords: parameter estimation; autoregressive process; ARARCH model; truncated sequential estimators; guaranteed accuracy.

The modern evolution of mathematical statistics has turned to the development of data processing methods for dependent samples of finite size. One such possibility is given by the well-known sequential estimation method, which has been successfully applied to parametric and non-parametric problems. This approach was first proposed for a scheme of independent observations in [1]. The idea was then applied to parameter estimation problems for dynamic systems in many papers and books (see [2-7, 12, 13, 15] among others). In particular, sequential estimators of the parameter of AR(1) with unknown noise variance were proposed in [15].

To obtain sequential estimators with an arbitrary accuracy one needs a sample of unbounded size. In practice, however, the observation time of a system is usually not only finite but fixed. One possibility for finding estimators with guaranteed accuracy of inference from a sample of fixed size is provided by the truncated sequential estimation approach. This method was developed in [8, 9] and elsewhere for parameter estimation problems in discrete-time dynamic models. In those papers, estimators of dynamic system parameters with known noise variance were constructed from samples of fixed size. Another, but very similar, approach was proposed in [10, 11].

It is known that nonlinear stochastic systems are widely used for describing real processes in economics, engineering, medicine, etc. For simple models, for example a scalar first-order autoregression with discrete or continuous time, a one-step sequential estimation procedure [2-5] can be constructed. In these cases one-step sequential estimators turn out to be least squares estimators calculated at a special stopping time. These estimators are unbiased and simple to investigate. For more complicated models, such as autoregressive processes of higher order and multidimensional regression processes, a two-step sequential estimation procedure [4-7] can be applied. At the same time, there is a class of multidimensional models for which a one-step procedure for estimating the unknown parameters can be constructed [2, 5, 12]. In this paper we consider models of this type. A truncated sequential parameter estimation procedure for a general regression model is constructed. As examples, we analyze the scalar processes ARARCH(1,1) and AR(1) and a two-dimensional autoregression of a special type.

1. General regression model

Let $(\Omega, F, P)$ be an arbitrary but fixed probability space with a filtration $\mathbb{F}=\{F_n\}_{n\ge 0}$. It is supposed that the observable $p$-dimensional process $\{x(n)\}$ satisfies the equation

$$x(n)=A(n-1)\lambda+B(n-1)\xi(n),\quad n\ge 1, \qquad (1)$$

where $A(n)$, $B(n)$ are $F_n$-adapted observable matrices of sizes $p\times q$ and $p\times m$ respectively. Elements of these matrices may depend on realizations of the process $(x(n))$. The noises $\xi(n)$ form a sequence of $F_n$-adapted independent identically distributed (i.i.d.) random vectors with $E\xi(n)=0$, $E\xi(n)\xi'(n)=I$; $\lambda=(\lambda_1,\dots,\lambda_q)'$ is the vector of unknown parameters. Here and below the prime denotes transposition.

The purpose is to construct a truncated sequential estimator of the parameter $\theta=a'\lambda$, where $a$ is a given constant vector.

To construct the estimation procedure we introduce the pseudoinverse matrices $A^+(n)=[A'(n)A(n)]^{-1}A'(n)$ (all the inverse matrices $[A'(n)A(n)]^{-1}$ are assumed to be defined almost surely (a.s.)). Moreover, the $F_n$-adapted matrices $\Sigma(n):=B(n)B'(n)$ are supposed to be known or uniformly bounded in the sense of quadratic forms for all $n\ge 0$:

$$\Sigma(n)\le\Sigma\quad\text{a.s.} \qquad (2)$$

Truncated sequential estimators of $\theta$ will be constructed on the basis of the weighted least squares (LS) estimator

$$\tilde\theta_N=\frac{\sum_{n=1}^N c(n)\,a'A^+(n-1)x(n)}{\sum_{n=1}^N c(n)},$$

where $c(n)=w(n)\tilde c(n-1)$, $\tilde c(n)=\{a'A^+(n)\Sigma^+(n)(A^+(n))'a\}^{-1}$, $\Sigma^+(n)=\Sigma(n)$ if $\Sigma(n)$ is known and $I$ otherwise; $w(n)$ is some non-negative weight function satisfying the inequalities $w(n)\le 1$, $n\ge 1$. According to (1), the deviation of the estimator $\tilde\theta_N$ has the form

$$\tilde\theta_N-\theta=\frac{\sum_{n=1}^N c(n)\,a'A^+(n-1)\zeta(n)}{\sum_{n=1}^N c(n)}, \qquad (3)$$

where $\zeta(n)=B(n-1)\xi(n)$.

Define the truncated sequential estimator $\theta_{H,N}$ of the parameter $\theta$ from a sample of size $N$ as

$$\theta_{H,N}=\frac1H\sum_{n=1}^{\tau_{H,N}}\beta_n c(n)\,a'A^+(n-1)x(n)\cdot\chi\Big(\sum_{n=1}^N c(n)\ge H\Big), \qquad (4)$$

where the stopping time

$$\tau_{H,N}=\begin{cases}\inf\big\{k\in[1,N]:\ \sum_{n=1}^k c(n)\ge H\big\}, & \sum_{n=1}^N c(n)\ge H,\\[2pt] N, & \sum_{n=1}^N c(n)<H,\end{cases} \qquad (5)$$

and the weights

$$\beta_n=\begin{cases}1, & n<\tau_{H,N},\\ 1, & n=\tau_{H,N},\ \sum_{n=1}^N c(n)<H,\\ \alpha_H, & n=\tau_{H,N},\ \sum_{n=1}^N c(n)\ge H,\end{cases}\qquad \alpha_H=\frac{H-\sum_{n=1}^{\tau_{H,N}-1}c(n)}{c(\tau_{H,N})}.$$

Define $\delta_{H,N}=P_\lambda\big(\sum_{n=1}^N c(n)<H\big)$ and $\sigma^2=1$ if the process $(B(n))$ is observable and $\|\Sigma\|$ otherwise. We denote by $E_\lambda$ the expectation under the distribution $P_\lambda$ with the given parameter $\lambda$.

Theorem 1. Assume that for the process (1) the matrix functions $A(n)$ and $B(n)$ are such that condition (2) is fulfilled and $E_\lambda c(n)<\infty$, $n\in[1,N]$. Then for every $N\ge 1$ and $H>0$ the estimator $\theta_{H,N}$ defined in (4) possesses the following property:

$$E_\lambda(\theta_{H,N}-\theta)^2\le\frac{\sigma^2}{H}+\theta^2\,\delta_{H,N}.$$

Proof. To prove the theorem we find, using (3), the deviation of the truncated sequential estimator (4):

$$\theta_{H,N}-\theta=\frac1H\sum_{n=1}^{\tau_{H,N}}\beta_n c(n)\,a'A^+(n-1)\zeta(n)\cdot\chi\Big(\sum_{n=1}^N c(n)\ge H\Big)-\theta\cdot\chi\Big(\sum_{n=1}^N c(n)<H\Big).$$

Estimate the mean square deviation of $\theta_{H,N}$. The second moment of the first summand can be estimated similarly to, e.g., [3], taking into account the definitions of $\tau_{H,N}$ and $c(n)$ and the property $\beta_n\le 1$:

$$E_\lambda(\theta_{H,N}-\theta)^2\le\frac1{H^2}\,E_\lambda\sum_{n=1}^{\tau_{H,N}}\beta_n^2c^2(n)\,a'A^+(n-1)\zeta(n)\zeta'(n)(A^+(n-1))'a+\theta^2\delta_{H,N}$$
$$\le\frac{\sigma^2}{H^2}\,E_\lambda\sum_{n=1}^{\tau_{H,N}}\beta_n^2c^2(n)\,a'A^+(n-1)\Sigma^+(n-1)(A^+(n-1))'a+\theta^2\delta_{H,N}$$
$$\le\frac{\sigma^2}{H^2}\,E_\lambda\sum_{n=1}^{\tau_{H,N}}\beta_nc(n)+\theta^2\delta_{H,N}\le\frac{\sigma^2}{H}+\theta^2\delta_{H,N}.$$

Theorem 1 is proved. We now apply the general estimation algorithm to the following problems.
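As a numerical illustration (ours, not part of the original paper), the plan (4), (5) can be sketched in a few lines of code. The following Python fragment implements the stopping time (5), the weights $\beta_n$ and the estimator (4) for the simplest scalar case $p=q=m=1$, $A(n)=x_n$, $B(n)=\sigma$, $a=1$, $w(n)=1$, so that $c(n)=x_{n-1}^2/\sigma^2$; all function names are our own.

```python
import numpy as np

def truncated_estimate(x, H, sigma2=1.0):
    """Truncated sequential estimator (4)-(5) for the scalar model
    x(n) = lam*x(n-1) + sigma*xi(n) with a = 1 and w(n) = 1, so that
    c(n) = x(n-1)^2/sigma2 and c(n)*a'A+(n-1)x(n) = x(n-1)*x(n)/sigma2."""
    c = x[:-1] ** 2 / sigma2            # c(1), ..., c(N); x[0] is the initial value
    S = np.cumsum(c)
    if S[-1] < H:                       # threshold not reached: indicator is 0
        return 0.0
    tau = int(np.argmax(S >= H)) + 1    # stopping time (5)
    beta = np.ones(tau)                 # weights beta_n; the last one is alpha_H
    beta[-1] = (H - (S[tau - 2] if tau > 1 else 0.0)) / c[tau - 1]
    terms = x[:tau] * x[1:tau + 1] / sigma2
    return float(np.dot(beta, terms)) / H   # estimator (4)

# AR(1) trajectory with lam = 0.5 and standard Gaussian noise
rng = np.random.default_rng(0)
lam, N = 0.5, 2000
x = np.zeros(N + 1)
for n in range(1, N + 1):
    x[n] = lam * x[n - 1] + rng.standard_normal()
est = truncated_estimate(x, H=0.5 * N)  # by Theorem 1, MSE is at most sigma^2/H + theta^2*delta
print(round(est, 3))
```

By Theorem 1 the mean square error of the computed value is bounded by $\sigma^2/H$ plus the (here negligible) truncation term $\theta^2\delta_{H,N}$.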


2. ARARCH(1,1)

Let $\{x_n\}$ be a scalar first-order ARARCH process:

$$x_n=\lambda x_{n-1}+\sqrt{\sigma_0^2+\sigma_1^2x_{n-1}^2}\,\xi_n,\quad n\ge 1, \qquad (6)$$

with the initial zero-mean variable $x_0$ having a finite eighth moment; $\{\xi_n\}$ is a sequence of i.i.d. zero-mean random variables whose density is an even function, non-increasing in the modulus of its argument, with $E\xi_n^2=1$; in addition, $x_0$ and $\{\xi_n\}$ are mutually independent. The parameters $\sigma_0^2$ and $\sigma_1^2$ are supposed to be known and $\sigma_0^2>0$.

Note that the volatility coefficients $B(n)=\sqrt{\sigma_0^2+\sigma_1^2x_n^2}$ in (6) are observable.

Put in the truncated sequential plan (4), (5) the weights $w(n)=1$ and the threshold $H=H_N=\beta\mu_1 N$, where $\beta\in(0,1)$ and the constant $\mu_1$ (depending on $\sigma_1$ and the parameter bound $L$ below) is specified in [9]. Then the estimator (4) and the stopping time (5) in this case are defined as follows:

$$\lambda_N=\frac1{H_N}\sum_{n=1}^{\tau_N}\beta_n\frac{x_nx_{n-1}}{\sigma_0^2+\sigma_1^2x_{n-1}^2}\cdot\chi\Big(\sum_{n=1}^N\frac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}\ge H_N\Big), \qquad (7)$$

$$\tau_N=\begin{cases}\inf\Big\{k\in[1,N]:\ \sum_{n=1}^k\dfrac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}\ge H_N\Big\}, & \sum_{n=1}^N\dfrac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}\ge H_N,\\[6pt] N, & \sum_{n=1}^N\dfrac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}<H_N,\end{cases}$$

where the weights $\beta_n$ are defined as

$$\beta_n=\begin{cases}1, & 1\le n<\tau_N,\\ 1, & n=\tau_N,\ \sum_{n=1}^N\dfrac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}<H_N,\\[4pt] \alpha_N, & n=\tau_N,\ \sum_{n=1}^N\dfrac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}\ge H_N,\end{cases}$$

$$\alpha_N=\Big(H_N-\sum_{n=1}^{\tau_N-1}\frac{x_{n-1}^2}{\sigma_0^2+\sigma_1^2x_{n-1}^2}\Big)\cdot\frac{\sigma_0^2+\sigma_1^2x_{\tau_N-1}^2}{x_{\tau_N-1}^2}.$$

Theorem 2. Assume model (6). Then for every $0<L<\infty$ there exists a number $\rho=\rho(L)$ such that the estimator (7) satisfies the property

$$\sup_{|\lambda|\le L}E_\lambda(\lambda_N-\lambda)^2\le\frac{\rho}{N}.$$

The proof of Theorem 2 is almost the same as in [9] (see Corollary 2 and Section 4) for the autoregression with a drifting parameter. The exact expressions for the numbers $\beta$ and $\rho$ can be found in [9, Section 4] as well.
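The plan (7) admits a quick numerical check. The sketch below is our illustration only: the threshold factor is chosen ad hoc rather than through the exact constants $\beta$ and $\mu_1$ of [9], and the parameter values are arbitrary.

```python
import numpy as np

# Simulate ARARCH(1,1) (6): x_n = lam*x_{n-1} + sqrt(s0^2 + s1^2*x_{n-1}^2)*xi_n
rng = np.random.default_rng(1)
lam, s0, s1, N = 0.3, 1.0, 0.5, 4000
x = np.zeros(N + 1)
for n in range(1, N + 1):
    vol = np.sqrt(s0**2 + s1**2 * x[n - 1]**2)   # observable volatility B(n-1)
    x[n] = lam * x[n - 1] + vol * rng.standard_normal()

c = x[:-1]**2 / (s0**2 + s1**2 * x[:-1]**2)      # c(n), bounded by 1/s1^2
S = np.cumsum(c)
H = 0.2 * N                                      # threshold H_N ~ N (ad hoc factor)
if S[-1] >= H:
    tau = int(np.argmax(S >= H)) + 1             # stopping time tau_N
    beta = np.ones(tau)                          # beta_n; the last one is alpha_N
    beta[-1] = (H - (S[tau - 2] if tau > 1 else 0.0)) / c[tau - 1]
    terms = x[:tau] * x[1:tau + 1] / (s0**2 + s1**2 * x[:tau]**2)
    lam_hat = float(np.dot(beta, terms)) / H     # estimator (7)
else:
    lam_hat = 0.0
print(round(lam_hat, 3))
```

The stopping rule and weights are exactly those of the general plan (4), (5) specialized to $c(n)=x_{n-1}^2/(\sigma_0^2+\sigma_1^2x_{n-1}^2)$.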

3. Optimal parameter estimation of AR(1)

Let $\{x_n\}_{n\ge 0}$ be a scalar autoregressive process:

$$x_n=\lambda x_{n-1}+\sigma\xi_n, \qquad (8)$$

with the initial zero-mean variable $x_0$ having a finite eighth moment; $\{\xi_n\}$ is a sequence of i.i.d. random variables with $E\xi_n=0$, $E\xi_n^2=1$ and $E\xi_n^8<\infty$; in addition, $x_0$ and $\{\xi_n\}$ are mutually independent. The process (8) is assumed to be stable, i.e. $|\lambda|<1$.

The main problem is to estimate the parameter $\lambda$ with accuracy guaranteed in the mean square sense when $\sigma$ is unknown.

In this section we consider two variants of estimators: one with a known principal term of the mean square error (having a simpler structure), and one optimal in the asymptotic minimax sense. The first estimator will be used in Section 3b as the pilot estimator in the construction of the optimal one.

In both cases we construct a truncated sequential estimator on the basis of the LS estimator

$$\lambda_N^*=\sum_{n=1}^Nx_nx_{n-1}\Big/\sum_{n=1}^Nx_{n-1}^2.$$

n=1 / n=1

3a. Adaptive truncated sequential parameter estimation of AR(1)

Define in (4), (5) the threshold $H_N=h\,\hat\sigma_m^2\,N$, where $\hat\sigma_m^2$ is the pilot variance estimator introduced below and $m=m(N)$ is a sequence of integer numbers satisfying the following.

Assumption 1.

a) $m(N)=o(N)$, $m(N)\to\infty$ as $N\to\infty$;

b) $\dfrac{\log m(N)}{m(N)}\sqrt N\to 0$ as $N\to\infty$;

c) for some $\delta\in(0,1)$ the inequality $\dfrac{m(N)}{N-m(N)}\le\delta$ is fulfilled, and the number $h\in\big(0,(\sqrt2-1)^2(1+\delta)^{-1}\big)$.

Define the pilot LS-type estimator of the variance $\sigma^2$ as follows:

$$\hat\sigma_m^2=\frac1m\sum_{n=1}^m[x_n-\hat\lambda_mx_{n-1}]^2,$$

where

$$\hat\lambda_m=\mathrm{proj}_{[-1,1]}\lambda_m^*\cdot\chi\Big(\sum_{n=1}^mx_{n-1}^2\ge\frac m{\log m}\Big).$$

Analogously to [11] it can be shown that

$$E_\mu(\hat\sigma_m^2-\sigma^2)^4\le\frac{C(\log m)^2}{m^2}, \qquad (9)$$

where $\mu=(\lambda,\sigma^2)$.

Define in the general estimation procedure (4), (5) the weight functions

$$w(n)=\begin{cases}0, & 1\le n\le m,\\ \chi\big(\hat\sigma_m^2>(\log m)^{-1}\big), & m<n\le N;\end{cases} \qquad (10)$$

the truncated stopping time

$$\tau_N=\begin{cases}\inf\Big\{k\in(m,N]:\ \sum_{n=m+1}^kx_{n-1}^2\ge H_N\Big\}, & \sum_{n=m+1}^Nx_{n-1}^2\ge H_N,\\[2pt] N, & \sum_{n=m+1}^Nx_{n-1}^2<H_N;\end{cases} \qquad (11)$$

and the weight functions

$$\beta_n=\begin{cases}1, & m<n<\tau_N,\\ 1, & n=\tau_N,\ \sum_{n=m+1}^Nx_{n-1}^2<H_N,\\ \alpha_N, & n=\tau_N,\ \sum_{n=m+1}^Nx_{n-1}^2\ge H_N,\end{cases}\qquad \alpha_N=\frac{H_N-\sum_{n=m+1}^{\tau_N-1}x_{n-1}^2}{x_{\tau_N-1}^2}. \qquad (12)$$

Then the truncated sequential estimator (4) has the form

$$\hat\lambda_N=\frac1{H_N}\sum_{n=m+1}^{\tau_N}\beta_nx_nx_{n-1}\cdot\chi\Big(\sum_{n=m+1}^Nx_{n-1}^2\ge H_N,\ \hat\sigma_m^2>(\log m)^{-1}\Big). \qquad (13)$$

Denote, for every $N$ such that $\log m(N)>\sigma^{-2}$, the function

$$S_N=\frac{C^{1/4}(\log m)^{3/2}}{hN\sqrt m}+\frac{2C_{21}}{N^2}+\frac{2C_{22}(\log m)^2}{m^2}+\frac{2C(\log m)^2}{m^2\big(\sigma^2-(\log m)^{-1}\big)^4},$$

where

$$C_{21}=\frac{8(1+\delta)^2B_4\,E_\mu(\xi_1^2-1)^4}{\big(1-h(1+\delta)/C_0\big)^4},\qquad C_{22}=\frac{h^4C(1+\delta)^6}{\big[\sigma^2\big(C_0-h(1+\delta)\big)\big]^4},$$

$C_0=(\sqrt2-1)^2$ and $B_4$ is the coefficient from the Burkholder inequality.

According to Assumption 1, $S_N=o\big(\frac1N\big)$ as $N\to\infty$.

The following theorem contains the main result of the section.

Theorem 3. Assume the model (8) with the parameter $|\lambda|<1$. Then the truncated sequential estimator (13) has the following properties:

1) $E_\mu(\hat\lambda_N-\lambda)^2\le\dfrac1{hN}+S_N$;

2) if, in addition, the noises and $x_0$ have moments of order $8s$ for some positive integer $s$, then there exist numbers $C(s)$ such that, as $N\to\infty$,

$$E_\mu(\hat\lambda_N-\lambda)^{2s}\le\frac{C(s)}{N^s}.$$

Proof. The proof of the first assertion of the theorem is based on the following representation of the estimator's deviation:

$$\hat\lambda_N-\lambda=\frac{\sigma}{H_N}\sum_{n=m+1}^{\tau_N}\beta_nx_{n-1}\xi_n\cdot\chi\Big(\sum_{n=m+1}^Nx_{n-1}^2\ge H_N,\ \hat\sigma_m^2>(\log m)^{-1}\Big)$$
$$-\ \lambda\cdot\chi\Big(\sum_{n=m+1}^Nx_{n-1}^2<H_N,\ \hat\sigma_m^2>(\log m)^{-1}\Big)-\lambda\cdot\chi\big(\hat\sigma_m^2\le(\log m)^{-1}\big)=I_1+I_2+I_3. \qquad (14)$$

According to this formula we have

$$E_\mu(\hat\lambda_N-\lambda)^2\le E_\mu I_1^2+2E_\mu I_2^2+2E_\mu I_3^2. \qquad (15)$$

Consider separately the summands in (15). By the definition of the stopping time $\tau_N$, using (9) and the technique proposed in [8, 9, 15], we can estimate

$$E_\mu I_1^2\le\sigma^2E_\mu\frac1{H_N^2}E_\mu\Big(\sum_{n=m+1}^{\tau_N}\beta_n^2x_{n-1}^2\Big|F_m\Big)\cdot\chi\big(\hat\sigma_m^2>(\log m)^{-1}\big)\le\sigma^2E_\mu\frac1{H_N}\,\chi\big(\hat\sigma_m^2>(\log m)^{-1}\big)$$
$$=\frac1{hN}E_\mu\frac{\sigma^2}{\hat\sigma_m^2}\,\chi\big(\hat\sigma_m^2>(\log m)^{-1}\big)\le\frac1{hN}+\frac{\log m}{hN}E_\mu\big|\hat\sigma_m^2-\sigma^2\big|$$
$$\le\frac1{hN}+\frac{\log m}{hN}\Big(E_\mu(\hat\sigma_m^2-\sigma^2)^4\Big)^{1/4}\le\frac1{hN}+\frac{C^{1/4}(\log m)^{3/2}}{hN\sqrt m}. \qquad (16)$$

Estimate $E_\mu I_2^2$ from above:

$$E_\mu I_2^2\le\lambda^2P_\mu\Big(\sum_{n=m+1}^Nx_{n-1}^2<H_N\Big).$$

Define the number $C_0[\lambda]=\big(\sqrt{1+\lambda^2}-|\lambda|\big)^2$. According to Lemma 2 in [9], for every $k>m$ we have

$$\sum_{n=m+1}^kx_n^2\ge C_0(\lambda)\,\sigma^2\sum_{n=m+1}^k\xi_n^2. \qquad (17)$$

Note that in the stable case $|\lambda|<1$

$$C_0(\lambda)\ge(\sqrt2-1)^2=C_0.$$

Using this formula and the Chebyshev inequality we get

$$P_\mu\Big(\sum_{n=m+1}^Nx_{n-1}^2<H_N\Big)\le P_\mu\Big(C_0\sigma^2\sum_{n=m+1}^{N-1}\xi_n^2<h\hat\sigma_m^2N\Big)$$
$$\le P_\mu\Big(\frac1{N-m}\sum_{n=m+1}^{N-1}\big(\xi_n^2-1\big)<\frac{hN}{C_0(N-m)}-1+\frac{hN(\hat\sigma_m^2-\sigma^2)}{C_0\sigma^2(N-m)}\Big).$$

By Assumption 1, $N/(N-m)\le 1+\delta$ and $h(1+\delta)<C_0$, so the non-random part of the threshold on the right-hand side is negative and bounded away from zero. Splitting the probability into two parts and applying the Chebyshev inequality with fourth moments, the Burkholder inequality and (9), we obtain

$$P_\mu\Big(\sum_{n=m+1}^Nx_{n-1}^2<H_N\Big)\le\frac{C_{21}}{N^2}+\frac{C_{22}(\log m)^2}{m^2}. \qquad (18)$$

In the last inequality we used (9).

Let us estimate the third summand in (15):

$$E_\mu I_3^2\le\lambda^2P_\mu\big(\hat\sigma_m^2\le(\log m)^{-1}\big).$$

Using (9) and the Chebyshev inequality, for $N$ large enough we get

$$P_\mu\big(\hat\sigma_m^2\le(\log m)^{-1}\big)=P_\mu\big(\sigma^2-\hat\sigma_m^2\ge\sigma^2-(\log m)^{-1}\big)\le\frac{C(\log m)^2}{m^2\big(\sigma^2-(\log m)^{-1}\big)^4}. \qquad (19)$$

The first assertion of Theorem 3 follows from the obtained inequalities (15), (16), (18), (19). The second assertion can be proved analogously to the first one. Theorem 3 is proved.

Similar results for the sequential estimators of $\lambda$ were presented in [15].
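The adaptive procedure of Section 3a can be sketched numerically as follows. This is our illustrative simplification: the pilot estimate of $\lambda$ below is the plain projected LS estimator (without the indicator correction used in the text), and the constants $h$, $m(N)$ are chosen ad hoc within the ranges allowed by Assumption 1.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, sigma, N = 0.5, 2.0, 4000            # sigma^2 is unknown to the procedure
m = int(N ** 0.75)                        # m(N) = o(N), m(N) -> infinity
x = np.zeros(N + 1)
for n in range(1, N + 1):
    x[n] = lam * x[n - 1] + sigma * rng.standard_normal()

# pilot LS estimate of lambda from the first m observations, projected onto [-1, 1]
lam_m = np.clip(np.dot(x[1:m + 1], x[:m]) / np.dot(x[:m], x[:m]), -1.0, 1.0)
sig2_m = np.mean((x[1:m + 1] - lam_m * x[:m]) ** 2)  # pilot variance estimator

h = 0.1                                   # below (sqrt(2)-1)^2 / (1 + delta)
H = h * sig2_m * N                        # adaptive threshold H_N = h*sigma_m^2*N
c = x[m:N] ** 2                           # c(n) = x_{n-1}^2 for n = m+1, ..., N
S = np.cumsum(c)
if S[-1] >= H and sig2_m > 1.0 / np.log(m):
    tau = int(np.argmax(S >= H)) + 1
    beta = np.ones(tau)
    beta[-1] = (H - (S[tau - 2] if tau > 1 else 0.0)) / c[tau - 1]
    lam_hat = float(np.dot(beta, x[m:m + tau] * x[m + 1:m + tau + 1])) / H
else:
    lam_hat = 0.0                         # truncation: estimator (13) vanishes
print(round(lam_hat, 3))
```

The guaranteed bound of Theorem 3 then applies on the event where the indicator in (13) equals one.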

3b. Efficiency of $\hat\lambda_N$

In this section we consider a slightly more complicated modification of the estimator (13) and prove its optimality in the sense of a risk function defined below.

Put in the definition (13) the threshold

$$H_N=h_N\frac{\hat\sigma_m^2}{1-\tilde\lambda_m^2}\,(N-m),\qquad h_N=1-(\log N)^{-1},$$

where $m=m(N)$ is a sequence of integer numbers satisfying the following.

Assumption 2.

a) $m(N)=o(N)$, $m(N)\to\infty$ as $N\to\infty$;

b) $\dfrac{\log m(N)}{m(N)}\sqrt N\log^2N\to 0$ as $N\to\infty$;

c) for some $\delta\in(0,1)$, $\dfrac{m(N)}{N-m(N)}\le\delta$.

Here the pilot estimators of the variance $\sigma^2$ and of the parameter $\lambda$ are defined as follows:

$$\hat\sigma_m^2=\frac1m\sum_{n=1}^m[x_n-\tilde\lambda_mx_{n-1}]^2$$

and, for some $r\in(0,1)$,

$$\tilde\lambda_m=\mathrm{proj}_{[-r,r]}\hat\lambda_m,$$

where the estimator $\hat\lambda_m$ is defined in (13) with $H_N$ from Section 3a. Similarly to (9) the following inequalities can be proved:

$$E_\mu(\tilde\lambda_m-\lambda)^4\le\frac{C(\log m)^2}{m^2},\qquad E_\mu(\hat\sigma_m^2-\sigma^2)^4\le\frac{C(\log m)^2}{m^2}. \qquad (20)$$

The estimator $\hat\lambda_N$, the stopping time $\tau_N$ and the weights $w(n)$, $\beta_n$ are defined in (10)-(13) with the $H_N$ introduced above.

To prove the optimality of the estimator $\hat\lambda_N$ we first establish the inequality

$$\varlimsup_{N\to\infty}N\,E_\mu(\hat\lambda_N-\lambda)^2\le 1-\lambda^2. \qquad (21)$$

Using the representation (14) for the deviation of the estimator $\hat\lambda_N$, we estimate the second moments of the summands on the right-hand side of (14).

Similarly to (16) we have

$$E_\mu I_1^2\le\sigma^2E_\mu\frac1{H_N^2}E_\mu\Big(\sum_{n=m+1}^{\tau_N}\beta_n^2x_{n-1}^2\Big|F_m\Big)\cdot\chi\big(\hat\sigma_m^2>(\log m)^{-1}\big)\le h_N^{-2}\frac1{N-m}E_\mu\frac{\sigma^2(1-\tilde\lambda_m^2)}{\hat\sigma_m^2}\,\chi\big(\hat\sigma_m^2>(\log m)^{-1}\big)$$
$$\le h_N^{-2}\frac{1-\lambda^2}{N-m}+h_N^{-2}\frac{C(\log m)^{3/2}}{(N-m)\sqrt m}+h_N^{-2}\frac{C}{N-m}\Big(E_\mu(\tilde\lambda_m-\lambda)^2\Big)^{1/2}=\frac{1-\lambda^2}{N}+o\Big(\frac1N\Big),\qquad N\to\infty. \qquad (22)$$

Here we used (20), the relation $h_N^{-2}=1+o(1)$ and property b) of Assumption 2.

Estimate $E_\mu I_2^2$ from above:

$$E_\mu I_2^2\le\lambda^2P_\mu\Big(\sum_{n=m+1}^Nx_{n-1}^2<H_N\Big).$$

Notice that, with $H_N$ as determined in this section, the method for estimating this probability used in Section 3a cannot be applied. Instead we use a representation of the probability argument obtained by applying the Ito-type formula to the process $\{x_n^2\}$:

$$\frac1{N-m}\sum_{n=m+1}^Nx_{n-1}^2=\frac{\sigma^2}{1-\lambda^2}+\frac{x_m^2-x_N^2}{(1-\lambda^2)(N-m)}+\frac1{1-\lambda^2}\Big(\frac{2\lambda\sigma}{N-m}\sum_{n=m+1}^Nx_{n-1}\xi_n+\frac{\sigma^2}{N-m}\sum_{n=m+1}^N(\xi_n^2-1)\Big).$$

Using this formula and the Chebyshev inequality we get

$$P_\mu\Big(\sum_{n=m+1}^Nx_{n-1}^2<H_N\Big)=P_\mu\Big(\frac1{N-m}\sum_{n=m+1}^Nx_{n-1}^2<h_N\frac{\hat\sigma_m^2}{1-\tilde\lambda_m^2}\Big)$$
$$\le\frac{C}{(1-h_N)^4(N-m)^4}\,E_\mu\Big(x_N^2+x_m^2+2|\lambda|\sigma\Big|\sum_{n=m+1}^Nx_{n-1}\xi_n\Big|+\sigma^2\Big|\sum_{n=m+1}^N(\xi_n^2-1)\Big|\Big)^4+C\,E_\mu\big(\hat\sigma_m^2-\sigma^2\big)^4+C\,E_\mu\big(\tilde\lambda_m-\lambda\big)^4$$
$$=O\Big(\frac{(\log N)^4}{(N-m)^2}+\frac{(\log N)^4(\log m)^2}{m^2}\Big)=o\Big(\frac1N\Big). \qquad (23)$$

The last equality holds due to (20), the second assertion of Theorem 3, property b) of Assumption 2 and the obvious inequality $\sup_{n\ge 0}E_\mu x_n^8<\infty$, which is fulfilled for the stable process (8).

Then the inequality (21) follows from (14), (19), (22) and (23).

From (21) it follows that the truncated estimator $(\hat\lambda_N)_{N\ge 1}$ is optimal (see [11, 13]) in the asymptotic minimax sense:

$$\lim_{N\to\infty}R_{r,N}(\hat\lambda_N)=\lim_{N\to\infty}\inf_{\bar\lambda_N}R_{r,N}(\bar\lambda_N)=1,$$

where

$$R_{r,N}(\bar\lambda_N)=\sup_{f\in\mathcal P}\ \sup_{|\lambda|\le 1-r}I(\lambda,f)\,N\,E_\mu(\bar\lambda_N-\lambda)^2$$

and the infimum is taken over the class of all (non-randomized) estimators $\bar\lambda_N$ of the parameter $\lambda$. Here $\mathcal P$ is the class of all densities $f(\cdot)$ of the noises $\{\xi_n\}$ having finite second moments and finite Fisher information, $I(\lambda,f)=\dfrac{I(f)}{1-\lambda^2}$ $\big(=(1-\lambda^2)^{-1}$ in the case of Gaussian densities $f(\cdot)\big)$,

$$I(f)=\int\Big(\frac{f'(x)}{f(x)}\Big)^2f(x)\,dx.$$

4. AR(2)

Consider the two-dimensional autoregressive process of a special type

$$x_1(n)=\lambda_1x_1(n-1)+\lambda_2x_2(n-1)+\xi_1(n),$$
$$x_2(n)=\lambda_2x_1(n-1)-\lambda_1x_2(n-1)+\xi_2(n), \qquad (24)$$

where the parameter $\lambda=(\lambda_1,\lambda_2)'$ to be estimated belongs to the whole plane $R^2$, and the noises $\xi(n)=(\xi_1(n),\xi_2(n))'$ form a sequence of i.i.d. zero-mean random vectors, independent of $x(0)=(x_1(0),x_2(0))'$, with $E\xi(n)=0$, $E\xi(n)\xi'(n)=I$, $E\|\xi(1)\|^8<\infty$, as well as $Ex(0)=0$ and $E\|x(0)\|^8<\infty$.

Note that a parameter estimation problem for a similar two-dimensional stochastic continuous-time system was considered in [2].

The main aim of this section is to estimate with guaranteed accuracy the parameter $\theta=a'\lambda$, where the given vector $a$ is such that $\|a\|=1$.

For the definition of the truncated sequential estimators we will use the following representation of (24):

$$x(n)=A(n-1)\lambda+\xi(n),\qquad A(n)=\begin{pmatrix}x_1(n)&x_2(n)\\-x_2(n)&x_1(n)\end{pmatrix}.$$

According to the general notation of Section 1, in this case we have $B(n)=I$ and hence $\Sigma=I$. Besides, $A^+(n)=\|x(n)\|^{-2}A'(n)$, and choosing in the definition of $c(n)$ the weights $w(n)=1$ we obtain $c(n)=\|x(n-1)\|^2$. Then, according to (4), (5), the truncated sequential estimation plan for $\theta$ has the form

$$\theta_{H,N}=\frac1H\sum_{n=1}^{\tau_{H,N}}\beta_n\,a'A'(n-1)x(n)\cdot\chi\Big(\sum_{n=1}^N\|x(n-1)\|^2\ge H\Big),$$

$$\tau_{H,N}=\begin{cases}\inf\Big\{k\in[1,N]:\ \sum_{n=1}^k\|x(n-1)\|^2\ge H\Big\}, & \sum_{n=1}^N\|x(n-1)\|^2\ge H,\\[2pt] N, & \sum_{n=1}^N\|x(n-1)\|^2<H,\end{cases}$$

where

$$\beta_n=\begin{cases}1, & n<\tau_{H,N},\\ 1, & n=\tau_{H,N},\ \sum_{n=1}^N\|x(n-1)\|^2<H,\\ \alpha_H, & n=\tau_{H,N},\ \sum_{n=1}^N\|x(n-1)\|^2\ge H,\end{cases}$$

with $\alpha_H$ defined as in Section 1.

According to Theorem 1,

$$E_\lambda(\theta_{H,N}-\theta)^2\le\frac1H+\theta^2\,P_\lambda\Big(\sum_{n=1}^N\|x(n-1)\|^2<H\Big). \qquad (25)$$

To estimate the second summand on the right-hand side of the last inequality we will use the equality

$$\|x(n)\|^2=\|\Lambda x(n-1)\|^2+2x'(n-1)\Lambda\xi(n)+\|\xi(n)\|^2,\quad n\ge 1, \qquad (26)$$

with

$$\Lambda=\begin{pmatrix}\lambda_1&\lambda_2\\\lambda_2&-\lambda_1\end{pmatrix},$$

which can be easily obtained from the following representation of (24):

$$x(n)=\Lambda x(n-1)+\xi(n),\quad n\ge 1;$$

note that $\|\Lambda x\|^2=\|\lambda\|^2\|x\|^2$. Using (26), similarly to (17) we get the inequality

$$\sum_{n=1}^N\|x(n-1)\|^2\ge C_0(\lambda)\sum_{n=1}^N\|\xi(n)\|^2,\qquad C_0(\lambda)=\Big(\sqrt{1+2\|\lambda\|^2}-\sqrt2\,\|\lambda\|\Big)^2,$$

and $C_0(\lambda)\ge C^*$, $C^*=\big(\sqrt{1+2L^2}-\sqrt2\,L\big)^2$, when $\|\lambda\|\le L$.

Suppose $H=H_N=2hC^*N$, $h\in(0,1)$. Then

$$P_\lambda\Big(\sum_{n=1}^N\|x(n-1)\|^2<H_N\Big)\le P\Big(C^*\sum_{n=1}^N\|\xi(n)\|^2<H_N\Big)\le P\Big(\Big|\frac1N\sum_{n=1}^N\big(\|\xi(n)\|^2-2\big)\Big|\ge 2(1-h)\Big)\le\frac{C}{N^2},$$

where $C$ is a positive number.

From (25) and the last inequality it follows that the estimator $\theta_N=\theta_{H_N,N}$ with $H=H_N$ satisfies, for every $0<L<\infty$, the inequality

$$\sup_{\|\lambda\|\le L}E_\lambda(\theta_N-\theta)^2\le\frac1{2hC^*N}+\frac{L^2C}{N^2}.$$
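The two-dimensional plan can also be sketched in code. The following is our illustration only; the parameter values, the a priori bound $L$ and the factor $h$ are chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(2)
l1, l2, N = 0.4, 0.3, 4000
a = np.array([1.0, 0.0])                  # estimate theta = a'lambda = lambda_1
Lam = np.array([[l1, l2], [l2, -l1]])     # matrix Lambda of system (24)
x = np.zeros((N + 1, 2))
for n in range(1, N + 1):
    x[n] = Lam @ x[n - 1] + rng.standard_normal(2)

c = np.sum(x[:-1] ** 2, axis=1)           # c(n) = ||x(n-1)||^2
S = np.cumsum(c)
L = 1.0                                   # a priori bound ||lambda|| <= L
Cstar = (np.sqrt(1 + 2 * L**2) - np.sqrt(2) * L) ** 2
H = 2 * 0.5 * Cstar * N                   # H_N = 2hC*N with h = 0.5
if S[-1] >= H:
    tau = int(np.argmax(S >= H)) + 1
    beta = np.ones(tau)
    beta[-1] = (H - (S[tau - 2] if tau > 1 else 0.0)) / c[tau - 1]
    # c(n) * a'A+(n-1)x(n) = a'A'(n-1)x(n), with A(n) = [[x1, x2], [-x2, x1]]
    terms = np.array([a @ np.array([[x[n - 1, 0], -x[n - 1, 1]],
                                    [x[n - 1, 1],  x[n - 1, 0]]]) @ x[n]
                      for n in range(1, tau + 1)])
    theta_hat = float(np.dot(beta, terms)) / H
else:
    theta_hat = 0.0
print(round(theta_hat, 3))
```

Here the identity $a'A'(n-1)x(n)=\lambda_1\|x(n-1)\|^2+\text{noise}$ for $a=(1,0)'$ explains why the weighted sum divided by $H$ estimates $\lambda_1$.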

5. Simulations

To confirm the convergence of the constructed estimators we simulated the estimator $\hat\lambda_N$ defined in (13) with $w(n)=1$ and $H_N=h\cdot N$ in the case of known $\sigma^2=1$:

$$\hat\lambda_N=\frac1{H_N}\sum_{n=1}^{\tau_N}\beta_nx_nx_{n-1}\cdot\chi\Big(\sum_{n=1}^Nx_{n-1}^2\ge H_N\Big). \qquad (27)$$

For this purpose we used the software package MATLAB. In Tables 1 and 2 the average

$$\bar\lambda(N)=\frac1{100}\sum_{k=1}^{100}\hat\lambda_N(k)$$

of the estimators (27) over the realizations $x^{(k)}=(x_n^{(k)})$, $k=1,\dots,100$, of the process (8) and their quality characteristics

$$S_\lambda(N)=\frac1{100}\sum_{k=1}^{100}\big(\hat\lambda_N(k)-\lambda\big)^2$$

for different $N$ are given.

Table 1. Estimation of the parameter λ with h = 0.2

           N = 100             N = 200             N = 500
   λ     λ̄(N)     S_λ(N)     λ̄(N)     S_λ(N)     λ̄(N)     S_λ(N)
  0.2    0.2031   0.0395     0.2111   0.0240     0.2041   0.0090
 -0.2   -0.1755   0.0521     0.0092   0.0257    -0.1973   0.0092
  0.9    0.8836   0.0426     0.8678   0.0252     0.8967   0.0066
 -0.9   -0.8874   0.0407    -0.9082   0.0222    -0.8943   0.0114
  1      0.9841   0.0514     0.9722   0.0164     1.0013   0.0091
 -1     -0.9730   0.0395    -0.9942   0.0162    -0.9993   0.0104
  4      4.0107   0.0166     4.0183   0.0074     4.0087   0.0026
 -4     -4.0060   0.0228    -3.9987   0.0071    -4.0008   0.0050

Table 2. Estimation of the parameter λ with h = 0.6

           N = 100             N = 200             N = 500
   λ     λ̄(N)     S_λ(N)     λ̄(N)     S_λ(N)     λ̄(N)     S_λ(N)
  0.2    0.2253   0.0149     0.1991   0.0090     0.2001   0.0029
 -0.2   -0.2126   0.0141    -0.1945   0.0090    -0.2004   0.0029
  0.9    0.8874   0.0145     0.8872   0.0067     0.8945   0.0027
 -0.9   -0.8997   0.0127    -0.9015   0.0054    -0.9012   0.0037
  1      1.0085   0.0123     0.9898   0.0077     0.9967   0.0038
 -1     -0.9732   0.0171    -1.0044   0.0051    -0.9893   0.0033
  4      3.9996   0.0047     3.9986   0.0027     4.0057   0.0014
 -4     -3.9947   0.0068    -4.0038   0.0034    -4.0040   0.0015

The simulation results show that the deviation decreases as the sample size grows, i.e. the estimator's value approaches the true value of the parameter. This confirms that the presented estimation procedures are quite effective. Moreover, the considered estimator works in the unstable case as well.
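The Monte Carlo experiment of this section is easy to reproduce. The sketch below is ours, written in Python rather than MATLAB; the seed and the chosen row of Table 2 ($\lambda=0.9$, $h=0.6$, $N=500$) are arbitrary.

```python
import numpy as np

def lam_hat_27(x, H):
    """Estimator (27): truncated plan with w(n) = 1 and known sigma^2 = 1."""
    c = x[:-1] ** 2
    S = np.cumsum(c)
    if S[-1] < H:
        return 0.0
    tau = int(np.argmax(S >= H)) + 1
    beta = np.ones(tau)
    beta[-1] = (H - (S[tau - 2] if tau > 1 else 0.0)) / c[tau - 1]
    return float(np.dot(beta, x[:tau] * x[1:tau + 1])) / H

rng = np.random.default_rng(3)
lam, N, h, K = 0.9, 500, 0.6, 100         # one cell of Table 2, 100 realizations
ests = np.empty(K)
for k in range(K):
    x = np.zeros(N + 1)
    for n in range(1, N + 1):
        x[n] = lam * x[n - 1] + rng.standard_normal()
    ests[k] = lam_hat_27(x, h * N)
# average of the estimates and the empirical quality characteristic S_lambda(N)
print(round(float(ests.mean()), 4), round(float(((ests - lam) ** 2).mean()), 4))
```

The empirical mean square error should be of the order $1/(hN)$, in agreement with Theorem 3 and with the magnitudes reported in Table 2.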

Summary

In this paper the guaranteed parameter estimation problem for a general multivariate regression model is solved. The truncated sequential estimator of the dynamic parameter is constructed from a sample of fixed size. At the same time, this method makes it possible to obtain estimators with a given mean square accuracy. Three examples are given. In the first example the parameter estimation problem for ARARCH(1,1) is considered; the unknown parameter is supposed to belong to the whole real line. The presented estimator has a given mean square accuracy and the same rate of convergence as the least squares estimator in the stable case. Similar results were obtained for the stable AR(1) and for an AR(2) model of a special type. Asymptotic optimality in the minimax sense of the truncated sequential estimator for AR(1) is proved.

Results of simulation confirm the efficiency of the presented estimation procedure.

Truncated sequential estimators can be successfully used, similarly to sequential estimators (see, e.g., [14]), as pilot estimators in various adaptive procedures (prediction, control, filtering, etc.).

REFERENCES

1. Wald A. Sequential Analysis. N. Y.: Wiley. (1947).

2. Liptser R.Sh., Shiryaev A.N. Statistics of Random Processes. 1: General Theory. N.Y.: Springer-Verlag (1977); 2: Applications. N.Y.: Springer-Verlag (1978).

3. Borisov V.Z., Konev V.V. On sequential estimation of parameters in discrete-time processes. Automation and Remote Control (1977).

4. Konev V.V. Sequential parameter estimation of stochastic dynamical systems. Tomsk: Tomsk Univ. Press. (1985).

5. Dobrovidov A.V., Koshkin G.M., Vasiliev V.A. Non-parametric state space models. Heber City, UT. USA: Kendrick Press.

(2012).

6. Galtchouk L., Konev V. On sequential estimation of parameters in semimartingale regression models with continuous time parameter. Annals of Statistics (2001).

7. Kuechler U., Vasiliev V. On guaranteed parameter estimation of a multiparameter linear regression process. Automatica,

Journal of IFAC, Elsevier. No. 46 (4). P. 637-646. (2010).

8. Fourdrinier D., Konev V., Pergamenshchikov S. Truncated sequential estimation of the parameter of a first order autoregressive process with dependent noises. Mathematical Methods of Statistics (2009).

9. Konev V.V., Pergamenshchikov S.M. Truncated sequential estimation of the parameters in random regression. Sequential

Analysis. V. 9. Issue 1. P. 19-41. (1990).

10. Vasiliev V.A. One investigation method of ratio type estimators. Preprint 5 of the Math. Inst. of Humboldt University, Berlin. P. 1-15. (2012); http://www2.mathematik.hu-berlin.de/publ/pre/2012/p-list-12.html

11. Vasiliev V.A. A truncated estimation method with guaranteed accuracy. Annals of the Institute of Statistical Mathematics, V. 66. Issue 1. P. 141-163. (2014).

12. Vorobeichikov S.E., Konev V.V. On sequential identification of stochastic systems. Technical Cybernetics. No. 4. P. 176-182 (1980).

13. Shiryaev A.N., Spokoiny V.G. Statistical experiments and decisions. Asymptotic theory. Singapore: World Scientific, (2000).


14. Kuechler U., Vasiliev V. On a certainty equivalence design of continuous-time stochastic systems. SIAM Journal of Control and Optimization. V. 51. No. 2. P. 938-964. (2013).

15. Dmitrienko A., Konev V., Pergamenshchikov S. Sequential generalized least squares estimator for an autoregressive parameter. Sequential Analysis. V. 16. Issue 1. P. 25-46. (1997).

Dogadova Tatyana V. E-mail: aurora1900@mail.ru

Vasiliev Vyacheslav A. E-mail: vas@mail.tsu.ru

Tomsk State University. Received 10 January 2014.
