TOMSK STATE UNIVERSITY JOURNAL OF CONTROL AND COMPUTER SCIENCE
2015 No. 3 (32)
UDC 519.233.22
DOI 10.17223/19988605/32/4
M.I. Kusainov
RISK EFFICIENCY OF ADAPTIVE ONE-STEP PREDICTION OF AUTOREGRESSION WITH PARAMETER DRIFT
A scalar stable autoregressive process whose dynamics parameter is corrupted by additive noise is studied. The model parameters are assumed to be unknown. Truncated estimators of the parameters are used to build adaptive one-step predictors. The associated problem is to minimize a risk function of a special form, defined to account for the sample size and the sample mean of the squared prediction error. A sequential procedure is introduced to achieve the minimal risk.
Keywords: adaptive predictors; asymptotic risk efficiency; optimal sample size; scalar autoregression; stopping time; truncated parameter estimators.
When studying dynamic systems, identification is often a major problem to consider. A system's model is assumed to incorporate unknown parameters, the estimation of which is vital for subsequent research. Classic methods, such as maximum likelihood estimation, least squares fitting, etc., have well-known asymptotic properties. However, infinite samples do not occur in reality, and efforts are being made to establish non-asymptotic properties of estimators. For example, one may consider confidence regions built from a finite number of measurements (see [1, 2] among others).
Another approach is offered by methods of sequential analysis. Among them is the sequential estimation method (see, e.g., [3-12]), which guarantees accuracy for samples of random and finite, albeit unbounded, size. Its idea was further developed into the truncated sequential estimation method (see, e.g., [12-15]), which utilizes samples of bounded random size.
Recently, the truncated estimation method was suggested in [16] as a modification of the truncated sequential estimation method. Truncated estimators were constructed for ratio-type functionals and need only samples of fixed non-random size to achieve guaranteed accuracy in the sense of the $L_{2m}$-norm, $m \ge 1$.
One possible use for the estimated parameters is to predict future values of the modeled random process from existing observations. In order to control both the quality of predictions and the required sample size, a loss function depending on the two is introduced. A risk efficiency problem arises, in which the expected loss is minimized by choosing a certain duration of observations. Similar problems for autoregressive processes were examined in [17] and [18], where least squares estimators and sequential estimators of the unknown parameters were used.
In this paper we consider a scalar stable AR(1) process with parameter drift and construct real-time predictors based upon the truncated estimators of the parameters. All the parameters are assumed to be unknown. A similar model was studied in [14], where a sequential approach was applied to it for the first time, to good effect. The resulting estimators were shown to have preassigned mean square accuracy and asymptotic normality uniform in the parameter. Those estimators, however, had a different form than the ones used here, since some of the parameters were assumed known. We solve the optimization problem associated with a loss function of a special form. The proposed procedure is shown to be asymptotically risk efficient as the cost of prediction error tends to infinity. Simulation results confirm the theory but are not included for editorial reasons.
The scalar case without the drift was considered in [19], multivariate AR(1) in [20] and ARMA(1,1) in [21].
1. Problem statement
Consider the stable scalar autoregressive process satisfying the equation
$$x_k = \lambda_{k-1} x_{k-1} + \xi_k, \quad k \ge 1, \qquad (1)$$
where
$$\lambda_k = \lambda + \eta_k, \quad k \ge 0,$$
the parameter $\lambda$ is unknown, the condition $Ex_0^2 < \infty$ holds, and $(\xi_k)$ and $(\eta_k)$ are sequences of independent identically distributed (i.i.d.) zero-mean random variables, independent of each other, with finite variances $\sigma_\xi^2 = E\xi_1^2$, $\sigma_\eta^2 = E\eta_1^2$. In addition, to guarantee the stability of the process (1) we assume the following:
$$\lambda^2 + \sigma_\eta^2 < 1. \qquad (2)$$
It is known that the optimal in the mean square sense one-step predictor is the conditional expectation of the process with respect to its past, i.e.
$$x_k^{opt} = \lambda x_{k-1}, \quad k \ge 1.$$
Therefore, one needs an estimator $\hat\lambda_k$ of the unknown parameter $\lambda$ to construct the adaptive predictors of the form
$$\hat x_k = \hat\lambda_{k-1} x_{k-1}, \quad k \ge 1. \qquad (3)$$
Write the corresponding prediction errors
$$e_k = x_k - \hat x_k = (\lambda - \hat\lambda_{k-1})x_{k-1} + \eta_{k-1}x_{k-1} + \xi_k.$$
Let $\bar e_n^2$ denote the sample mean of the squared prediction errors:
$$\bar e_n^2 = \frac{1}{n}\sum_{k=1}^n e_k^2. \qquad (4)$$
Define the loss function
$$L_n = \frac{A}{n}\bar e_n^2 + n.$$
The parameter $A$ ($>0$) can be interpreted as the cost of prediction error. The corresponding risk function is
$$R_n = E_\theta L_n = \frac{A}{n}E_\theta\bar e_n^2 + n, \qquad (5)$$
where $E_\theta$ denotes expectation under the distribution $P_\theta$ with the given parameter $\theta = (\lambda, \sigma_\xi^2, \sigma_\eta^2)$. Define
$$\Theta = \left\{\theta:\ \lambda^2 + \sigma_\eta^2 < 1,\ \sigma_\xi^2 < \infty\right\},$$
the stability parameter region of the process. The main aim is to minimize the risk $R_n$ over the sample size $n$.
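For illustration, a minimal simulation sketch of the model (1)-(2). The Gaussian noise distributions and the particular values of $\lambda$, $\sigma_\xi$, $\sigma_\eta$ are assumptions made for the example only; the model itself requires nothing beyond i.i.d. zero-mean noises with the stated moments.

```python
import numpy as np

def simulate_ar_drift(lam=0.5, sigma_xi=1.0, sigma_eta=0.3, n=1000,
                      x0=0.0, seed=0):
    """Simulate x_k = (lam + eta_{k-1}) * x_{k-1} + xi_k, k = 1..n,
    under the stability condition (2): lam^2 + sigma_eta^2 < 1."""
    assert lam**2 + sigma_eta**2 < 1, "stability condition (2) violated"
    rng = np.random.default_rng(seed)
    eta = rng.normal(0.0, sigma_eta, size=n)  # parameter noise eta_0..eta_{n-1}
    xi = rng.normal(0.0, sigma_xi, size=n)    # additive noise xi_1..xi_n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        x[k] = (lam + eta[k - 1]) * x[k - 1] + xi[k - 1]
    return x
```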
2. Main result
To solve the stated problem we use the truncated estimation method introduced in [16]. This method makes it possible to obtain ratio-type estimators with guaranteed accuracy using a sample of fixed size. According to the method, the truncated estimator of the autoregressive parameter $\lambda$ is based on a ratio-type estimator, of the least squares type in this case,
$$\tilde\lambda_k = \frac{\sum_{i=1}^k x_{i-1}x_i}{\sum_{i=1}^k x_{i-1}^2}, \quad k \ge 1, \qquad (6)$$
and has the form
$$\hat\lambda_k = \tilde\lambda_k\,\chi(A_k \ge H_k), \quad k \ge 1, \qquad (7)$$
where $A_k = \frac{1}{k}\sum_{i=1}^k x_{i-1}^2$, the notation $\chi(B)$ means the indicator function of the set $B$, and
$$H_k = \log^{-1/2}(k+1). \qquad (8)$$
It should be noted that, according to [16], $H_k$ can be taken to be any decreasing, slowly varying positive function.
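As a sketch, the estimators (6)-(8) in code; the function below assumes the observations $x_0,\dots,x_k$ are held in a NumPy array.

```python
import numpy as np

def truncated_estimator(x):
    """Truncated estimator (7): the LS-type ratio (6) is kept only when
    A_k >= H_k, with the threshold H_k = log^{-1/2}(k+1) of (8);
    otherwise the estimate is set to zero."""
    k = len(x) - 1                        # x = (x_0, ..., x_k), k >= 1
    den = np.dot(x[:-1], x[:-1])          # sum_{i=1}^k x_{i-1}^2
    A_k = den / k                         # sample second moment
    H_k = 1.0 / np.sqrt(np.log(k + 1.0))  # slowly decreasing threshold (8)
    if A_k < H_k:
        return 0.0                        # truncation region
    return np.dot(x[:-1], x[1:]) / den    # LS-type estimate (6)
```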
A model similar to (1) was studied in [14], but the variances of the noises $\xi_i$ and $\eta_i$, $i \ge 1$, were assumed to be known. This information allowed the construction of true sequential least squares estimators. It is absent in our case, however, forcing one to use estimators of the form (7).
Rewrite the formulae (3)-(5), taking the truncated estimators (7) as $\hat\lambda_k$:
$$\hat x_k = \hat\lambda_{k-1}x_{k-1}, \qquad (9)$$
$$e_k = (\lambda - \hat\lambda_{k-1})x_{k-1} + \eta_{k-1}x_{k-1} + \xi_k, \qquad \bar e_n^2 = \frac{1}{n}\sum_{k=1}^n e_k^2,$$
$$R_n = \frac{A}{n}E_\theta\bar e_n^2 + n. \qquad (10)$$
To minimize the risk $R_n$ we rewrite the risk function (10) using the definition of $\bar e_n^2$:
$$R_n = \frac{A}{n}\left[\sigma_\xi^2 + \sigma_\eta^2\sigma_x^2 + D_n\right] + n, \qquad (11)$$
where
$$\sigma_x^2 = \frac{\sigma_\xi^2}{1-(\lambda^2+\sigma_\eta^2)}, \qquad D_n = \frac{1}{n}\sum_{k=1}^n E_\theta\left(x_k^{opt}-\hat x_k\right)^2 + \frac{\sigma_\eta^2}{n}\sum_{k=1}^n\left(\sigma_{x,k-1}^2 - \sigma_x^2\right), \qquad \sigma_{x,k}^2 = E_\theta x_k^2.$$
From here on, $C$ denotes non-negative constants whose exact values are not critical. Further, we show that
$$D_n = o(1) \ \text{ as } n\to\infty. \qquad (12)$$
To this end we establish the two following estimates:
$$\sum_{k\ge1}\left|\sigma_{x,k-1}^2 - \sigma_x^2\right| \le C, \qquad (13)$$
$$\sum_{k=1}^n E_\theta\left(x_k^{opt}-\hat x_k\right)^2 \le C\log^2 n. \qquad (14)$$
In order to prove (13), write the solution of equation (1) as
$$x_k = \sum_{i=0}^{k-1}\xi_{k-i}\prod_{j=1}^{i}(\lambda+\eta_{k-j}) + x_0\prod_{j=1}^{k}(\lambda+\eta_{k-j}).$$
Squaring and taking the expectation of both sides, we use the independence and zero mean of $\xi_i$ and $\eta_i$ to obtain
$$\sigma_{x,k}^2 = \sigma_\xi^2\sum_{i=0}^{k-1}\prod_{j=1}^{i}E(\lambda+\eta_{k-j})^2 + Ex_0^2\prod_{j=1}^{k}E(\lambda+\eta_{k-j})^2 = \sigma_\xi^2\sum_{i=0}^{k-1}(\lambda^2+\sigma_\eta^2)^i + Ex_0^2(\lambda^2+\sigma_\eta^2)^k.$$
Using (2) one gets
$$\sigma_\xi^2\sum_{i=0}^{k-1}(\lambda^2+\sigma_\eta^2)^i = \frac{\sigma_\xi^2}{1-(\lambda^2+\sigma_\eta^2)}\left(1-(\lambda^2+\sigma_\eta^2)^k\right) = \sigma_x^2\left(1-(\lambda^2+\sigma_\eta^2)^k\right)$$
and thus
$$\sigma_{x,k}^2 - \sigma_x^2 = -\sigma_x^2(\lambda^2+\sigma_\eta^2)^k + Ex_0^2(\lambda^2+\sigma_\eta^2)^k.$$
From (2) it follows that $\sum_{k\ge1}\left|\sigma_{x,k-1}^2-\sigma_x^2\right| \le C$, hence (13).
To prove (14), rewrite its left-hand side:
$$\sum_{k=1}^n E_\theta\left(x_k^{opt}-\hat x_k\right)^2 = \sum_{k=1}^n E_\theta\left(\hat\lambda_{k-1}-\lambda\right)^2 x_{k-1}^2. \qquad (15)$$
We now establish the properties of the estimators $\hat\lambda_k$. Define $k_0 = \max\left\{1, \left[e^{(\sigma_x^2)^{-2}}\right]_1\right\}$, where $[a]_1$ denotes the integer part of $a$.
Lemma 1. Assume the model (1) and let, for some integer $m \ge 1$, the conditions
$$E\xi_1^{4m}<\infty, \quad Ex_0^{4m}<\infty, \quad E(\lambda+\eta_1)^{4m}<1 \qquad (16)$$
be true. Then the truncated estimators $\hat\lambda_k$ satisfy
(i) for $1 \le k < k_0$
$$E_\theta(\hat\lambda_k-\lambda)^{2m} \le C; \qquad (17)$$
(ii) for $k \ge k_0$
$$E_\theta(\hat\lambda_k-\lambda)^{2m} \le \frac{C\log^m k}{k^m}. \qquad (18)$$
The proof of Lemma 1 is presented in Section 3.
Remark 1. The parameter $H_k$ can be taken to be a constant $H$, provided that $H\in(0,\sigma_x^2)$. The condition (2) then guarantees $H<\lim_{k\to\infty}A_k$, considering
$$\lim_{k\to\infty}A_k = \frac{\sigma_\xi^2}{1-(\lambda^2+\sigma_\eta^2)} = \sigma_x^2 \quad P_\theta\text{-a.s.}, \qquad (19)$$
which follows from the ergodicity of the process $(x_k)_{k\ge0}$ (see, e.g., [7]). Then the estimators $\hat\lambda_k$ would satisfy
$$E_\theta(\hat\lambda_k-\lambda)^{2m} \le \frac{C}{k^m}$$
for every $k\ge1$, which can be proved similarly to Theorem 2 of [16]. This, though, requires knowledge of $\sigma_x^2$.
The Cauchy-Schwarz-Bunyakovsky inequality, (15) and (18) yield (14). The relation (12) follows directly from (13), (14).
Denote
$$\sigma^2 = \sigma_\xi^2 + \sigma_\eta^2\sigma_x^2. \qquad (20)$$
In view of (12), we minimize the principal term of $R_n$, analogously to [17]:
$$R_n \approx \frac{A}{n}\sigma^2 + n \to \min_n,$$
to get the optimal sample size
$$n_A^0 = A^{1/2}\sigma \qquad (21)$$
and the corresponding approximate minimal risk value
$$R_{n_A^0} = 2A^{1/2}\sigma + O(\log^2 A) \ \text{ as } A\to\infty. \qquad (22)$$
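Treating $n$ as a continuous variable makes the minimization explicit (an elementary calculus step, recorded here for completeness):
$$\frac{d}{dn}\left(\frac{A\sigma^2}{n}+n\right) = -\frac{A\sigma^2}{n^2}+1 = 0 \iff n = A^{1/2}\sigma = n_A^0, \qquad \frac{A\sigma^2}{n_A^0}+n_A^0 = 2A^{1/2}\sigma.$$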
Similarly to [17, 18, 20], we introduce the stopping time $T_A$ as an estimator of $n_A^0$, replacing $\sigma$ in its definition with the estimator $\hat\sigma_n$:
$$T_A = \inf\left\{n\ge n_A:\ n\ge A^{1/2}\hat\sigma_n\right\}, \qquad (23)$$
where $n_A$ is the initial sample size, depending on $A$ and specified below (see Theorem 1), and
$$\hat\sigma_n^2 = \frac{1}{n}\sum_{k=1}^n\left(x_k-\hat\lambda_n x_{k-1}\right)^2. \qquad (24)$$
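A sketch of the resulting sequential procedure, combining (7), (23) and (24). It assumes a pre-recorded trajectory $x_0, x_1, \dots$ long enough to reach the stop (in practice observations would arrive one at a time); the inline estimator is the truncated estimator (7).

```python
import numpy as np

def stopping_time(x, A, n_A):
    """Sequential rule (23)-(24): starting from the initial sample size
    n_A >= 1, observe until n >= sqrt(A) * sigma_hat_n."""
    for n in range(n_A, len(x)):
        xs = x[: n + 1]                  # observations x_0, ..., x_n
        den = np.dot(xs[:-1], xs[:-1])   # sum_{k=1}^n x_{k-1}^2
        # truncated estimator (7) built from the first n observations
        lam_hat = (np.dot(xs[:-1], xs[1:]) / den
                   if den / n >= 1.0 / np.sqrt(np.log(n + 1.0)) else 0.0)
        resid = xs[1:] - lam_hat * xs[:-1]       # x_k - lam_hat_n * x_{k-1}
        sigma_hat = np.sqrt(np.mean(resid**2))   # estimator (24)
        if n >= np.sqrt(A) * sigma_hat:          # stopping rule (23)
            return n, lam_hat
    raise RuntimeError("trajectory too short to reach T_A")
```

By Theorem 1 below, the returned stopping time should be close to $n_A^0 = A^{1/2}\sigma$ when the cost $A$ is large.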
We formulate a theorem establishing the asymptotic equivalence of $T_A$ and $n_A^0$ in the sense of almost sure convergence and convergence in mean (see (27) and (28) below, respectively), as well as the optimality of the adaptive prediction procedure in the sense of the equivalence of $R_{n_A^0}$ and the modified risk
$$R_A = E_\theta L_{T_A} = A\,E_\theta\frac{\bar e_{T_A}^2}{T_A} + E_\theta T_A, \qquad (25)$$
see (29).
Theorem 1. Assume that
$$E\xi_1^{16}<\infty, \quad Ex_0^{16}<\infty, \quad E(\lambda+\eta_1)^{16}<1 \qquad (26)$$
and that $n_A$ in (23) is such that
$$n_A \ge \max\left\{k_0,\ A^r\log^2A\right\}, \qquad n_A\cdot A^{-1/2}\xrightarrow[A\to\infty]{}0,$$
with $r\in(2/5,\ 1/2)$. Let the predictors $\hat x_k$ be defined by (9) and the risk functions by (5) and (25). Then for every $\theta\in\Theta$ with $\sigma^2>0$
$$\frac{T_A}{n_A^0}\xrightarrow[A\to\infty]{}1 \quad P_\theta\text{-a.s.}, \qquad (27)$$
$$\frac{E_\theta T_A}{n_A^0}\xrightarrow[A\to\infty]{}1, \qquad (28)$$
$$\frac{R_A}{R_{n_A^0}}\xrightarrow[A\to\infty]{}1. \qquad (29)$$
The proof of Theorem 1 is presented in Section 3.
3. Proofs
3.1. Proof of Lemma 1
It can be shown (see, e.g., [22], Lemma 1) that to guarantee $E_\theta x_k^{2m}\le C$, $k, m\ge1$, the following conditions suffice:
$$E\xi_1^{2m}<\infty, \quad Ex_0^{2m}<\infty, \quad E(\lambda+\eta_1)^{2m}<1.$$
Thus, from the conditions of Lemma 1 on the noise moments, for $\theta\in\Theta$ it follows that
$$\sup_{k\ge0}E_\theta x_k^{4m}\le C. \qquad (30)$$
By the definition (7) of the truncated estimators $\hat\lambda_k$, their deviation has the form
$$\hat\lambda_k-\lambda = (\tilde\lambda_k-\lambda)\,\chi(A_k\ge H_k) - \lambda\,\chi(A_k<H_k). \qquad (31)$$
The definitions (6) and (1) yield
$$\tilde\lambda_k-\lambda = \frac{\sum_{i=1}^k x_{i-1}\xi_i + \sum_{i=1}^k\eta_{i-1}x_{i-1}^2}{\sum_{i=1}^k x_{i-1}^2}.$$
Hence, from (31),
$$(\hat\lambda_k-\lambda)^{2m} = \frac{\left(\sum_{i=1}^k x_{i-1}\xi_i + \sum_{i=1}^k\eta_{i-1}x_{i-1}^2\right)^{2m}}{\left(\sum_{i=1}^k x_{i-1}^2\right)^{2m}}\,\chi(A_k\ge H_k) + \lambda^{2m}\chi(A_k<H_k).$$
From the definition of $A_k$ (see (7)) and $H_k$ (8) it follows that
$$E_\theta(\hat\lambda_k-\lambda)^{2m} \le \frac{\log^m(k+1)}{k^{2m}}\,E_\theta\left(\sum_{i=1}^k x_{i-1}\xi_i + \sum_{i=1}^k\eta_{i-1}x_{i-1}^2\right)^{2m} + \lambda^{2m}P_\theta(A_k<H_k). \qquad (32)$$
It can be easily shown that the sums $\sum_{i=1}^k x_{i-1}\xi_i$ and $\sum_{i=1}^k\eta_{i-1}x_{i-1}^2$ for $k\ge1$ form martingales. Thus, by the Burkholder inequality, the Hölder inequality and (30), similarly to [16], Section 5.2, we get
$$\frac{\log^m(k+1)}{k^{2m}}E_\theta\left(\sum_{i=1}^k x_{i-1}\xi_i+\sum_{i=1}^k\eta_{i-1}x_{i-1}^2\right)^{2m} \le \frac{2^{2m-1}\log^m(k+1)}{k^{2m}}\left[E_\theta\left(\sum_{i=1}^k x_{i-1}\xi_i\right)^{2m} + E_\theta\left(\sum_{i=1}^k\eta_{i-1}x_{i-1}^2\right)^{2m}\right] \le$$
$$\le \frac{C\log^m(k+1)}{k^{2m}}\left[E_\theta\left(\sum_{i=1}^k x_{i-1}^2\xi_i^2\right)^m + E_\theta\left(\sum_{i=1}^k\eta_{i-1}^2x_{i-1}^4\right)^m\right] \le \frac{C\log^m k}{k^m}. \qquad (33)$$
The first assertion (17) of Lemma 1 follows from (32) and (33).
For the second summand in (32), using the Chebyshev inequality, for $k\ge k_0$ one has
$$\lambda^{2m}P_\theta(A_k<H_k) \le C\,P_\theta\left(\left|A_k-\sigma_x^2\right|>\sigma_x^2-H_k\right) \le C\,\frac{E_\theta\left(A_k-\sigma_x^2\right)^{2m}}{\left(\sigma_x^2-H_k\right)^{2m}}. \qquad (34)$$
Note that for $k\ge k_0$
$$H_k = \frac{1}{\sqrt{\log(k+1)}} < \frac{1}{\sqrt{(\sigma_x^2)^{-2}}} = \sigma_x^2,$$
and hence the difference $\sigma_x^2-H_k > 0$.
To estimate (34) we rewrite $\sigma_x^2$ as
$$\sigma_x^2 = \frac{\sigma_\xi^2(1-\lambda^2)}{(1-\lambda^2)\left(1-(\lambda^2+\sigma_\eta^2)\right)} = \frac{\sigma_\xi^2}{1-\lambda^2} + \frac{\sigma_\eta^2\sigma_\xi^2}{(1-\lambda^2)\left(1-(\lambda^2+\sigma_\eta^2)\right)} = \frac{1}{1-\lambda^2}\left(\sigma_\xi^2+\sigma_\eta^2\sigma_x^2\right).$$
Using this and the definition of the process (1), one can write $A_k-\sigma_x^2$ as follows:
$$A_k-\sigma_x^2 = \frac{1}{1-\lambda^2}\left[\frac{x_0^2-x_k^2}{k} + \frac{2\lambda}{k}\sum_{i=1}^k\eta_{i-1}x_{i-1}^2 + \frac{2}{k}\sum_{i=1}^k\lambda_{i-1}\xi_i x_{i-1} + \frac{1}{k}\sum_{i=1}^k\left(\xi_i^2-\sigma_\xi^2\right) + \frac{1}{k}\sum_{i=1}^k\left(\eta_{i-1}^2x_{i-1}^2-\sigma_\eta^2\sigma_{x,i-1}^2\right) + \frac{1}{k}\sum_{i=1}^k\sigma_\eta^2\left(\sigma_{x,i-1}^2-\sigma_x^2\right)\right].$$
Then (34), the Burkholder inequality, (30) and (13) yield
$$\lambda^{2m}P_\theta(A_k<H_k) \le \frac{C}{k^m}.$$
Together with (33) this proves the second assertion of Lemma 1.
3.2. Proof of Theorem 1
The conditions (26) on the noise moments yield, for $\theta\in\Theta$,
$$\sup_{k\ge0}E_\theta x_k^{16}\le C. \qquad (35)$$
Note that if the distribution of $\eta_1$ is symmetric, the condition $E(\lambda+\eta_1)^{16}<1$ reduces to
$$\lambda^{16} + 120\left(\lambda^{14}\sigma_\eta^2+\lambda^2\sigma_\eta^{14}\right) + 1820\left(\lambda^{12}\sigma_\eta^4+\lambda^4\sigma_\eta^{12}\right) + 8008\left(\lambda^{10}\sigma_\eta^6+\lambda^6\sigma_\eta^{10}\right) + 12870\,\lambda^8\sigma_\eta^8 + \sigma_\eta^{16} < 1,$$
where $\sigma_\eta^{2m} = E\eta_1^{2m}$, $m=\overline{2,8}$.
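The numerical coefficients above are the even-order binomial coefficients $\binom{16}{2j}$; a one-line check (the commented list is the exact output):

```python
from math import comb

# Coefficients of lam^(16 - 2j) * E(eta_1^(2j)) in E(lam + eta_1)^16
# when the odd moments of eta_1 vanish:
print([comb(16, 2 * j) for j in range(9)])
# -> [1, 120, 1820, 8008, 12870, 8008, 1820, 120, 1]
```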
Rewrite formula (24) for $\hat\sigma_n^2$ using the definition of the process (1):
$$\hat\sigma_n^2 = \frac{1}{n}\sum_{k=1}^n\left(\xi_k + \eta_{k-1}x_{k-1} + (\lambda-\hat\lambda_n)x_{k-1}\right)^2 = \frac{1}{n}\sum_{k=1}^n\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2\right) + W_n + V_n, \qquad (36)$$
where
$$W_n = (\lambda-\hat\lambda_n)^2\frac{1}{n}\sum_{k=1}^n x_{k-1}^2, \qquad V_n = \frac{2}{n}\sum_{k=1}^n\xi_k\eta_{k-1}x_{k-1} + \frac{2}{n}\sum_{k=1}^n(\lambda-\hat\lambda_n)x_{k-1}\left(\xi_k+\eta_{k-1}x_{k-1}\right).$$
Analogously to [17], we show that
$$\hat\sigma_n^2 \xrightarrow[n\to\infty]{} \sigma^2 \quad P_\theta\text{-a.s.} \qquad (37)$$
Consider $W_n$. Using the properties (18) and the Chebyshev inequality, for any $\varepsilon>0$ we get
$$P_\theta\left(\left|\hat\lambda_n-\lambda\right|>\varepsilon\right) \le \frac{1}{\varepsilon^4}E_\theta(\hat\lambda_n-\lambda)^4 \le Cn^{-2}\log^2 n.$$
From the Borel-Cantelli lemma it follows that
$$\hat\lambda_n \xrightarrow[n\to\infty]{} \lambda \quad P_\theta\text{-a.s.}$$
Together with (19) and (35) this yields
$$W_n \xrightarrow[n\to\infty]{} 0 \quad P_\theta\text{-a.s.} \qquad (38)$$
Similar arguments can be used to show
$$V_n \xrightarrow[n\to\infty]{} 0 \quad P_\theta\text{-a.s.} \qquad (39)$$
At the same time, the strong law of large numbers, (13) and the Borel-Cantelli lemma yield
$$\frac{1}{n}\sum_{k=1}^n\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2\right) \xrightarrow[n\to\infty]{} \sigma^2 \quad P_\theta\text{-a.s.} \qquad (40)$$
Then (37) follows from the representation (36) and (38)-(40).
From the definition (23) of $T_A$ it follows that $T_A\to\infty$ as $A\to\infty$ with $P_\theta$-probability one. Therefore, by (37), we have $\hat\sigma_{T_A}^2\to\sigma^2$ $P_\theta$-a.s., and hence
$$\frac{T_A}{n_A^0}\xrightarrow[A\to\infty]{}1 \quad P_\theta\text{-a.s.},$$
which proves (27).
To prove (28) we introduce, for any positive $A$, the auxiliary sequence of numbers
$$\gamma_{A,n} = \frac{n^2A^{-1}}{2\log A}, \quad n\ge1,$$
and denote
$$m_n = \frac{1}{n}\sum_{k=1}^n\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2-\sigma^2\right).$$
Observe that $m_n$ can be represented as a sum of two martingale averages and a sequence decaying to zero as $O(n^{-1})$:
$$m_n = \frac{1}{n}\sum_{k=1}^n\left(\xi_k^2-\sigma_\xi^2\right)+\frac{1}{n}\sum_{k=1}^n\left(\eta_{k-1}^2x_{k-1}^2-\sigma_\eta^2\sigma_{x,k-1}^2\right)+\frac{1}{n}\sum_{k=1}^n\sigma_\eta^2\left(\sigma_{x,k-1}^2-\sigma_x^2\right).$$
By the definition of $T_A$ and (36) we have
$$E_\theta T_A \le n_A + \sum_{n\ge n_A}P_\theta\left(n^2A^{-1} < \frac{1}{n}\sum_{k=1}^n\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2\right)+W_n+V_n\right) \le$$
$$\le n_A + \sum_{n\ge n_A}\left\{P_\theta\left(n^2A^{-1}<\sigma^2+2\gamma_{A,n}\right) + P_\theta\left(|V_n|>\gamma_{A,n}/2\right) + P_\theta\left(W_n>\gamma_{A,n}/2\right) + P_\theta\left(|m_n|>\gamma_{A,n}\right)\right\}. \qquad (41)$$
It can be shown, analogously to [19], that for $n\ge n_A$
$$P_\theta\left(|V_n|>\gamma_{A,n}/2\right) + P_\theta\left(W_n>\gamma_{A,n}/2\right) + P_\theta\left(|m_n|>\gamma_{A,n}\right) \le C\gamma_{A,n}^{-2}n^{-1} = 4CA^2\log^2A\cdot n^{-5},$$
and at the same time
$$\frac{n_A+\sum_{n\ge n_A}P_\theta\left(n^2A^{-1}<\sigma^2+2\gamma_{A,n}\right)}{A^{1/2}\sigma}\xrightarrow[A\to\infty]{}1. \qquad (42)$$
Therefore, by the assumptions on $n_A$,
$$A^{-1/2}\sum_{n\ge n_A}\left\{P_\theta\left(|V_n|>\gamma_{A,n}/2\right)+P_\theta\left(W_n>\gamma_{A,n}/2\right)+P_\theta\left(|m_n|>\gamma_{A,n}\right)\right\} \le$$
$$\le CA^{3/2}\log^2A\sum_{n\ge n_A}n^{-5} \le CA^{3/2}\log^2A\cdot n_A^{-4} \le CA^{3/2-4r}\log^{-6}A \xrightarrow[A\to\infty]{} 0. \qquad (43)$$
Then from (41)-(43) it follows that
$$\varlimsup_{A\to\infty}\frac{E_\theta T_A}{A^{1/2}\sigma} \le 1. \qquad (44)$$
Analogously it can be shown that
$$\varliminf_{A\to\infty}\frac{E_\theta T_A}{A^{1/2}\sigma} \ge 1, \qquad (45)$$
and thus, in view of (44), the assertion (28) holds.
To prove (29) we need the following properties:
$$P_\theta(T_A<N') = O(A^{-1}), \qquad P_\theta(T_A>N'') = O(A^{-1}), \qquad (46)$$
where $N' = \left[(\sigma-\varepsilon)A^{1/2}\right]_1$, $N'' = \left[(\sigma+\varepsilon)A^{1/2}\right]_1+1$, $0<\varepsilon<\sigma$; they can be established similarly to (4.31) of [19].
Rewrite the left-hand side of (29) using (22) and (25):
$$\frac{R_A}{R_{n_A^0}} = \frac{A\,E_\theta\frac{1}{T_A}\bar e_{T_A}^2 + E_\theta T_A}{2A^{1/2}\sigma + O(\log^2 A)}.$$
From (28) and (46) it follows that to prove (29) it suffices to show the convergence
$$A^{1/2}E_\theta\frac{\bar e_{T_A}^2}{T_A\,\sigma} \xrightarrow[A\to\infty]{} 1. \qquad (47)$$
To this end we show that
$$A^{1/2}E_\theta\frac{\bar e_{T_A}^2}{T_A\,\sigma}\,\chi(T_A<N') \xrightarrow[A\to\infty]{} 0, \qquad A^{1/2}E_\theta\frac{\bar e_{T_A}^2}{T_A\,\sigma}\,\chi(T_A>N'') \xrightarrow[A\to\infty]{} 0, \qquad (48)$$
$$A^{1/2}E_\theta\frac{\bar e_{T_A}^2}{T_A\,\sigma}\,\chi(N'\le T_A\le N'') \xrightarrow[A\to\infty]{} 1. \qquad (49)$$
All three relations are proved analogously to (4.39)-(4.41) of [19] using the definition of $\bar e_{T_A}^2$. E.g., for (49) we write
$$A^{1/2}E_\theta\frac{1}{T_A\sigma}\bar e_{T_A}^2\,\chi(N'\le T_A\le N'') = A^{1/2}E_\theta\frac{1}{T_A^2\sigma}\sum_{k=1}^{T_A}\left(\lambda-\hat\lambda_{k-1}\right)^2x_{k-1}^2\,\chi(N'\le T_A\le N'')\;+$$
$$+\;2A^{1/2}E_\theta\frac{1}{T_A^2\sigma}\sum_{k=1}^{T_A}\left(\xi_k\eta_{k-1}x_{k-1} - (\hat\lambda_{k-1}-\lambda)\xi_kx_{k-1} - (\hat\lambda_{k-1}-\lambda)\eta_{k-1}x_{k-1}^2\right)\chi(N'\le T_A\le N'')\;+$$
$$+\;A^{1/2}E_\theta\frac{1}{T_A^2\sigma}\sum_{k=1}^{T_A}\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2\right)\chi(N'\le T_A\le N''). \qquad (50)$$
By the definitions of $N'$ and $N''$, the Cauchy-Schwarz-Bunyakovsky inequality and Lemma 1, for the first summand one gets
$$A^{1/2}E_\theta\frac{1}{T_A^2\sigma}\sum_{k=1}^{T_A}\left(\lambda-\hat\lambda_{k-1}\right)^2x_{k-1}^2\,\chi(N'\le T_A\le N'') \le CA^{-1/2}\sum_{k=1}^{N''}E_\theta\left(\lambda-\hat\lambda_{k-1}\right)^2x_{k-1}^2 \le CA^{-1/2}\log^2A \xrightarrow[A\to\infty]{} 0.$$
For the second summand of (50), Doob's maximal inequality for martingales (see, e.g., [3]) and the Cauchy-Schwarz-Bunyakovsky inequality can be used to show
$$2A^{1/2}E_\theta\frac{1}{T_A^2\sigma}\sum_{k=1}^{T_A}\left(\xi_k\eta_{k-1}x_{k-1}-(\hat\lambda_{k-1}-\lambda)\xi_kx_{k-1}-(\hat\lambda_{k-1}-\lambda)\eta_{k-1}x_{k-1}^2\right)\chi(N'\le T_A\le N'') \le CA^{-1/4}\xrightarrow[A\to\infty]{}0.$$
Rewrite the last summand of (50):
$$A^{1/2}E_\theta\frac{1}{T_A^2\sigma}\sum_{k=1}^{T_A}\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2\right)\chi(N'\le T_A\le N'') = A^{1/2}E_\theta\frac{1}{T_A\sigma}m_{T_A}\,\chi(N'\le T_A\le N'') + A^{1/2}\sigma\,E_\theta\frac{1}{T_A}\chi(N'\le T_A\le N'').$$
We show that the first summand converges to 0 and the second one converges to 1. By (13), Doob's maximal inequality and the Cauchy-Schwarz-Bunyakovsky inequality,
$$A^{1/2}E_\theta\frac{1}{T_A\sigma}\left|m_{T_A}\right|\chi(N'\le T_A\le N'') \le \frac{CA^{1/2}}{(N')^2}\left(E_\theta\max_{1\le n\le N''}\left(\sum_{k=1}^n\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2-\left(\sigma_\xi^2+\sigma_\eta^2\sigma_{x,k-1}^2\right)\right)\right)^2\right)^{1/2} \le$$
$$\le CA^{-1/2}\left(\sum_{k=1}^{N''}E_\theta\left(\xi_k^2+\eta_{k-1}^2x_{k-1}^2-\left(\sigma_\xi^2+\sigma_\eta^2\sigma_{x,k-1}^2\right)\right)^2\right)^{1/2} \le CA^{-1/4}\xrightarrow[A\to\infty]{}0.$$
The convergence of the second summand to 1 follows from the almost sure convergence (27), the boundedness of the family $\left\{A^{1/2}\,T_A^{-1}\chi(N'\le T_A\le N'')\right\}_{A\ge1}$ and the bounded convergence theorem.
The assertion (29) follows from (48) and (49).
Summary
The problem of constructing optimal predictions for the values of a scalar stable autoregressive process with parameter drift is considered. The predictors are built on the basis of truncated estimators, which are shown to have prescribed mean-square accuracy on samples of fixed size. The optimal sample size is established both theoretically and as a stopping time defined on the observations. The asymptotic equivalence of the two is proved in the sense of almost sure convergence and convergence in mean, as well as the asymptotic equivalence of the corresponding risk functions.
Acknowledgements
The author wishes to thank his scientific adviser, Professor V.A. Vasiliev, for valuable comments and ideas regarding the research theme.
REFERENCES
1. Weyer E., Campi M. Non-Asymptotic Confidence Ellipsoids for the Least-Squares Estimate // Automatica. 2002. V. 38, Is. 9. P. 1539-1547.
2. Weyer E., Campi M. Guaranteed Non-Asymptotic Confidence Regions in System Identification // Automatica. 2005. V. 41, Is. 10.
P. 1751-1764.
3. Liptser R., Shiryaev A. Statistics of Random Processes. N.Y. : Springer, 1977.
4. Konev V., Vorobeichikov S. On Sequential Identification of Stochastic Systems // Izvestia of USSR Academy of Sciences, Technical
Cybernetics. 1980. Is. 4. P. 176-182.
5. Konev V. Sequential Parameter Estimation of Stochastic Dynamical Systems. Tomsk University Press, Tomsk, 1985.
6. Vasiliev V., Konev V. The Sequential Parameter Identification of the Dynamic Systems in the Presence of Multiplicative and Additive
Noises in Observations // Automation and Remote Control. 1985. V. 46. P. 706-716.
7. Konev V., Pergamenshchikov S. On the Duration of Sequential Estimation of Parameters of Stochastic Processes in Discrete Time //
Stochastics. 1986. V. 18, Is. 2. P. 133-154.
8. Galtchouk L., Konev V. On Sequential Estimation of Parameters in Semimartingale Regression Models with Continuous Time Param-
eter // Annals of Statistics. 2001. V. 29, Is. 5. P. 1508-1536.
9. Kuchler U., Vasiliev V. On Sequential Parameter Estimation for Some Linear Stochastic Differential Equations with Time Delay //
Sequential Analysis. 2001. V. 20, Is. 3. P. 117-146.
10. Kuchler U., Vasiliev V. On Guaranteed Parameter Estimation of a Multiparameter Linear Regression Process // Automatica. 2010. V. 46, Is. 4. P. 637-646.
11. Malyarenko A., Vasiliev V. On Parameter Estimation of Partly Observed Bilinear Discrete-Time Stochastic Systems // Metrika.
2012. V. 75, Is. 3. P. 403-424.
12. Politis D., Vasiliev V. Sequential Kernel Estimation of a Multivariate Regression Function // Proceedings of IX International Conference ’System Identification and Control Problems’. Moscow, 2012. P. 996-1009.
13. Konev V., Pergamenshchikov S. Truncated Sequential Estimation of the Parameters in Random Regression // Sequential Analysis. 1990. V. 9, Is. 1. P. 19-41.
14. Konev V., Pergamenshchikov S. On Truncated Sequential Estimation of the Drifting Parameter Mean in the First Order Autoregressive Models // Sequential Analysis. 1990. V. 9, Is. 2. P. 193-216.
15. Fourdrinier D., Konev V., Pergamenshchikov S. Truncated Sequential Estimation of the Parameter of a First Order Autoregressive Process with Dependent Noises // Mathematical Methods of Statistics. 2008. V. 18, Is. 1. P. 43-58.
16. Vasiliev V. A Truncated Estimation Method with Guaranteed Accuracy // Annals of Institute of Statistical Mathematics. 2014. V. 66. P. 141-163.
17. Sriram T. Sequential Estimation of the Autoregressive Parameter in a First Order Autoregressive Process // Sequential Analysis. 1988. V. 7, Is. 1. P. 53-74.
18. Konev V., Lai T. Estimators with Prescribed Precision in Stochastic Regression Models // Sequential Analysis. 1995. V. 14, Is. 3. P. 179-192.
19. Vasiliev V., Kusainov M. Asymptotic Risk-Efficiency of One-Step Predictors of a Stable AR(1) // Proceedings of XII All-Russian Conference on Control Problems. Moscow, 2014. P. 2619-2627.
20. Kusainov M., Vasiliev V. On Optimal Adaptive Prediction of Multivariate Autoregression // Sequential Analysis. 2015. V. 34, Is. 2. P. 211-234.
21. Kusainov M. On Optimal Adaptive Prediction of Multivariate ARMA(1,1) Process // Вестник Томского государственного университета. Управление, вычислительная техника и информатика. 2015. № 1(30). P. 44-57.
22. Malyarenko A. Estimating the Generalized Autoregression Model Parameters for Unknown Noise Distribution // Automation and Remote Control. 2012. V. 71, Is. 2. P. 291-302.
Kusainov Marat Islambekovich. E-mail: rjrltsk@gmail.com Tomsk State University, Tomsk, Russian Federation
Received 22 April 2015.
Kusainov Marat Islambekovich (Tomsk State University, Russian Federation).
Risk efficiency of adaptive one-step predictors for autoregression with a noisy parameter.
Keywords: adaptive predictors; asymptotic risk efficiency; optimal sample size; scalar autoregression; stopping time; truncated estimation.
DOI 10.17223/19988605/32/4
A scalar stable autoregressive process with additive noise in the dynamics parameter is studied. The model parameters are assumed to be unknown. Truncated parameter estimates are used to construct adaptive one-step predictors. The associated problem is to minimize a risk function of a special form that accounts for the sample size and the sample mean of the squared prediction error. A sequential procedure is introduced to achieve the minimal risk.
REFERENCES
1. Weyer, E. & Campi, M. (2002) Non-Asymptotic Confidence Ellipsoids for the Least-Squares Estimate. Automatica. 38(9). pp. 1539-
1547. DOI: 10.1109/CDC.2000.914211
2. Weyer, E. & Campi, M. (2005) Guaranteed Non-Asymptotic Confidence Regions in System Identification. Automatica. 41 (10).
pp. 1751-1764.
3. Liptser, R. & Shiryaev, A. (1977) Statistics of Random Processes. New York: Springer.
4. Konev, V. & Vorobeichikov, S. (1980) On Sequential Identification of Stochastic Systems. Izvestia of USSR Academy of Sciences,
Technical Cybernetics. 4. pp. 176-182.
5. Konev, V. (1985) Sequential Parameter Estimation of Stochastic Dynamical Systems. Tomsk: Tomsk State University Press.
6. Vasiliev, V. & Konev, V. (1985) The Sequential Parameter Identification of the Dynamic Systems in the Presence of Multiplicative
and Additive Noises in Observations. Automation and Remote Control. 46. pp. 706-716. (In Russian).
7. Konev, V. & Pergamenshchikov, S. (1986) On the Duration of Sequential Estimation of Parameters of Stochastic Processes in Dis-
crete Time. Stochastics. 18 (2). pp. 133-154. DOI: 10.1080/17442508608833405
8. Galtchouk, L. & Konev, V. (2001) On sequential estimation of parameters in semimartingale regression models with continuous time
parameter. Annals of Statistics. 29 (5). pp. 1508-1536. DOI: 10.1214/aos/1013203463
9. Kuchler, U. & Vasiliev, V. (2001) On Sequential Parameter Estimation for Some Linear Stochastic Differential Equations with Time
Delay. Sequential Analysis. 20 (3). pp. 117-146. DOI: 10.1081/SQA-100106052
10. Kuchler, U. & Vasiliev, V. (2010) On Guaranteed Parameter Estimation of a Multiparameter Linear Regression Process. Automatica. 46 (4). pp. 637-646. DOI: 10.1016/j.automatica.2010.01.003
11. Malyarenko, A. & Vasiliev, V. (2012) On Parameter Estimation of Partly Observed Bilinear Discrete-Time Stochastic Systems. Metrika. 75 (3). pp. 403-424. DOI: 10.1007/s00184-010-0333-5
12. Politis, D. & Vasiliev, V. (2012) Sequential Kernel Estimation of a Multivariate Regression Function. Proceedings of IX International Conference ’System Identification and Control Problems’. Moscow. pp. 996-1009.
13. Konev, V. & Pergamenshchikov, S. (1990) Truncated Sequential Estimation of the Parameters in Random Regression. Sequential Analysis. 9 (1). pp. 19-41. DOI: 10.1080/07474949008836194
14. Konev, V. & Pergamenshchikov, S. (1990) On Truncated Sequential Estimation of the Drifting Parameter Mean in the First Order Autoregressive Models. Sequential Analysis. 9 (2). pp. 193-216. DOI: 10.1080/07474949008836205
15. Fourdrinier, D., Konev, V. & Pergamenshchikov, S. (2008) Truncated Sequential Estimation of the Parameter of a First Order Autoregressive Process with Dependent Noises. Mathematical Methods of Statistics. 18 (1). pp. 43-58.
DOI: 10.3103/S1066530709010037
16. Vasiliev, V. (2014) A Truncated Estimation Method with Guaranteed Accuracy. Annals of Institute of Statistical Mathematics. 66. pp. 141-163. DOI: 10.1007/s10463-013-0409-x.
17. Sriram, T.(1988) Sequential Estimation of the Autoregressive Parameter in a First Order Autoregressive Process. Sequential Analysis. 7 (1). pp. 53-74. DOI: 10.1080/07474948808836142
18. Konev, V. & Lai, T. (1995) Estimators with Prescribed Precision in Stochastic Regression Models. Sequential Analysis. 14 (3). pp. 179-192. DOI: 10.1080/07474949508836330
19. Vasiliev, V. & Kusainov, M. (2014) Asymptotic Risk-Efficiency of One-Step Predictors of a Stable AR(1). Proceedings of XII All-Russian Conference on Control Problems. Moscow. pp. 2619-2627. (In Russian).
20. Kusainov, M. & Vasiliev, V. (2015) On Optimal Adaptive Prediction of Multivariate Autoregression. Sequential Analysis. 34 (2). pp. 211-234. DOI: 10.1080/07474946.2015.1030977
21. Kusainov, M. (2015) On Optimal Adaptive Prediction of Multivariate ARMA (1,1) Process. Vestnik Tomskogo gosudarstvennogo universiteta. Upravlenie, vychislitel'naya tekhnika i informatika - Tomsk State University Journal of Control and Computer Science. 1(30). pp. 44-57. (In Russian).
22. Malyarenko, A. (2012) Estimating the Generalized Autoregression Model Parameters for Unknown Noise Distribution. Automation and Remote Control. 71 (2). pp. 291-302.