DOI: 10.17516/1997-1397-2020-13-4-480-491
УДК 519.21
Rate of the Almost Sure Convergence of a Generalized Regression Estimate Based on Truncated and Functional Data
Halima Boudada
University Freres Mentouri Constantine 1, Algeria

Sara Leulmi
LAMASD Laboratory, University Freres Mentouri Constantine 1, Algeria

Soumia Kharfouchi
University Salah Boubnider Constantine 3, Algeria
Received 06.02.2020, received in revised form 25.04.2020, accepted 26.05.2020

Abstract. In this paper, a nonparametric estimation of a generalized regression function is proposed. The real response random variable (r.v.) is subject to left-truncation by another r.v., while the covariate takes its values in an infinite-dimensional space. Under standard assumptions, the pointwise and the uniform almost sure convergence of the proposed estimator are established.
Keywords: functional data, truncated data, almost sure convergence, local linear estimator.

Citation: H. Boudada, S. Leulmi, S. Kharfouchi, Rate of the Almost Sure Convergence of a Generalized Regression Estimate Based on Truncated and Functional Data, J. Sib. Fed. Univ. Math. Phys., 2020, 13(4), 480-491. DOI: 10.17516/1997-1397-2020-13-4-480-491.
1. Introduction and preliminaries
The investigation of the link between a scalar variable of interest Y and a functional covariate X has been among the most studied problems in nonparametric statistics over the last two decades. We mention [1], who proposed a new version of the estimator of the regression operator m(x) = E(Y/X = x) in the case of independent and identically distributed (i.i.d.) observations and studied its almost complete convergence. They used the so-called local linear method.
In the case of complete data, many works followed this method. For example, in [9] the uniform almost complete convergence of the local linear conditional quantile estimator was established, while in [8] the case of a generalized regression function with functional dependent data was considered. The asymptotic normality of the local linear estimator of the conditional density for functional time series data was studied in [12], and both the pointwise and the uniform almost complete convergence of a generalized regression estimate were investigated in [7]. All these studies were carried out in the case of complete data; however, in practice, one or more truncation variables may interfere with the variable of interest and prevent its observation in a complete manner. In this truncation setting, one can find many works, such as that
of [5], where a kernel conditional quantile estimator was proposed and its strong uniform almost sure convergence established. Similarly, [2] studied the almost complete convergence rate and the asymptotic normality of a family of nonparametric estimators for the ψ-regression model. But, as far as we know, the local linear method has not been investigated for truncated data.
Hence, our goal is to propose a generalized regression estimator when the response variable is subject to left-truncation, and to establish both its pointwise and its uniform almost sure convergence.
To this end, this article is organized as follows. In Section 2, we recall some basic facts about the left-truncation model and construct our local linear estimator. Section 3 is devoted to proving its pointwise almost sure convergence. Finally, its uniform convergence is established in Section 4.
To make things easier for readers, we recall the definition of the almost complete convergence:
Let $(W_n)_{n\in\mathbb{N}^*}$ be a sequence of real random variables (r.r.v.). We say that $(W_n)_{n\in\mathbb{N}^*}$ converges almost completely to some r.r.v. $W$, and we write $W_n \xrightarrow{a.co.} W$, if and only if, for all $\varepsilon > 0$,
$$\sum_{n=1}^{\infty} P\big(|W_n - W| > \varepsilon\big) < \infty.$$
Moreover, let $(v_n)_{n\in\mathbb{N}^*}$ be a sequence of positive real numbers going to zero; we say that the rate of the almost complete convergence of $(W_n)_{n\in\mathbb{N}^*}$ to $W$ is of order $(v_n)$, and we write $W_n - W = O_{a.co.}(v_n)$, if and only if
$$\exists\,\varepsilon_0 > 0 \;\text{ such that }\; \sum_{n=1}^{\infty} P\big(|W_n - W| > \varepsilon_0\, v_n\big) < \infty.$$
It is clear, from the Borel-Cantelli lemma, that this convergence is stronger than the almost sure one (a.s.).
2. Estimation
Let $(X_i, Y_i)$, $i = 1, \ldots, N$, be $N$ independent and identically distributed couples distributed as $(X, Y)$, which takes its values in $\mathcal{F} \times \mathbb{R}$, where $\mathcal{F}$ is a semi-metric space endowed with a semi-metric $d$. The unknown distribution function (d.f.) of $Y$ is denoted by $F$.
Let $T$ be another r.v. with unknown d.f. $G$, and let $(T_i)_{i=1,\ldots,N}$ be a sample of i.i.d. random variables distributed as $T$. $T$ is supposed independent of $(X, Y)$. $N$ is unknown but deterministic. In the left-truncation model, the lifetime $Y_i$ and the truncation r.v. $T_i$ are both observable only when $Y_i \geq T_i$. We denote by $(Y_i, T_i)$, $i = 1, 2, \ldots, n$ ($n \leq N$) the actually observed sample, whose size $n$, as a consequence of truncation, is a binomial r.v. with parameters $N$ and $\mu := P(Y \geq T)$. It is clear that if $\mu = 0$ no data can be observed; therefore, we suppose throughout this article that $\mu > 0$.
By the strong law of large numbers, we have
$$\mu_n := \frac{n}{N} \longrightarrow \mu, \quad P\text{-a.s.}$$
We point out that if the original data $(Y_i, T_i)$, $i = 1, 2, \ldots, N$, are i.i.d., then the observed data $(Y_i, T_i)$, $i = 1, 2, \ldots, n$, are still i.i.d. (see [6]).
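To fix ideas, the observation scheme can be simulated as follows. This is our own minimal sketch, not from the paper; the Gaussian lifetime law and the exponential truncation law are arbitrary illustrative choices.

```python
# Minimal sketch of the left-truncation observation scheme (illustrative laws).
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                          # latent sample size (unknown in practice)
Y_all = rng.normal(loc=2.0, scale=1.0, size=N)    # lifetimes, d.f. F
T_all = rng.exponential(scale=1.0, size=N)        # truncation variables, d.f. G

observed = Y_all >= T_all                         # (Y_i, T_i) is observed only when Y_i >= T_i
Y, T = Y_all[observed], T_all[observed]
n = Y.size                                        # a Binomial(N, mu) r.v. with mu = P(Y >= T)
print(f"observed n = {n} of N = {N}; n/N = {n / N:.3f} estimates mu")
```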
Under the random left-truncation model, following [10], the d.f.s of $Y$ and $T$ are expressed respectively as
$$F^*(y) = \mu^{-1}\int_{-\infty}^{y} G(u)\,dF(u) \quad\text{and}\quad G^*(t) = \mu^{-1}\int_{-\infty}^{\infty} G(t \wedge u)\,dF(u),$$
where $t \wedge u = \min(t, u)$, and they are estimated by their empirical estimators
$$F_n^*(y) = n^{-1}\sum_{i=1}^{n}\mathbf{1}_{\{Y_i \leq y\}} \quad\text{and}\quad G_n^*(t) = n^{-1}\sum_{i=1}^{n}\mathbf{1}_{\{T_i \leq t\}}.$$
Define
$$C(y) := G^*(y) - F^*(y) = \mu^{-1}\,G(y)\,\big(1 - F(y)\big).$$
The empirical estimator of $C(y)$ is defined by
$$C_n(y) = n^{-1}\sum_{i=1}^{n}\mathbf{1}_{\{T_i \leq y \leq Y_i\}}.$$
The nonparametric maximum likelihood estimators of $F$ and $G$ are given respectively by
$$F_n(y) = 1 - \prod_{i:\,Y_i \leq y}\left[\frac{n\,C_n(Y_i) - 1}{n\,C_n(Y_i)}\right] \quad\text{and}\quad G_n(y) = \prod_{i:\,T_i > y}\left[\frac{n\,C_n(T_i) - 1}{n\,C_n(T_i)}\right].$$
According to [4], $\mu$ can be estimated by
$$\mu_n = C_n^{-1}(y)\,G_n(y)\,\big(1 - F_n(y)\big),$$
which is independent of $y$.
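For concreteness, here is a direct (unoptimized) transcription of $C_n$, the product-limit estimators and $\mu_n$. It is a hedged sketch of ours, assuming `Y` and `T` are numpy arrays holding the observed (truncated) sample, with no ties among the observed values.

```python
import numpy as np

def C_n(y, Y, T):
    """C_n(y) = n^{-1} * #{i : T_i <= y <= Y_i}."""
    return np.mean((T <= y) & (y <= Y))

def F_n(y, Y, T):
    """Product-limit (Lynden-Bell-type) estimator of F; product over {i : Y_i <= y}."""
    n = Y.size
    c = n * np.array([C_n(Yi, Y, T) for Yi in Y[Y <= y]])
    return 1.0 - np.prod((c - 1.0) / c)   # empty product = 1, so F_n = 0 below the data

def G_n(y, Y, T):
    """Product-limit estimator of G; product over {i : T_i > y}."""
    n = Y.size
    c = n * np.array([C_n(Ti, Y, T) for Ti in T[T > y]])
    return np.prod((c - 1.0) / c)

def mu_n(y, Y, T):
    """Estimator of mu; its value does not depend on y as long as C_n(y) > 0."""
    return G_n(y, Y, T) * (1.0 - F_n(y, Y, T)) / C_n(y, Y, T)
```

Note that for observed data $T_i \leq Y_i$, so $C_n(Y_i) \geq 1/n$ and $C_n(T_i) \geq 1/n$; the factors above are therefore well defined.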
Our results will be stated with respect to the conditional probability $\mathbf{P}(\cdot)$ related to the $n$-sample, instead of the probability measure $P(\cdot)$ related to the $N$-sample. We denote by $\mathbf{E}$ and $E$ the respective expectation operators of $\mathbf{P}(\cdot)$ and $P(\cdot)$.
For any d.f. $L$, let $a_L = \inf\{y : L(y) > 0\}$ and $b_L = \sup\{y : L(y) < 1\}$ be its two endpoints. The asymptotic properties of $F_n$, $G_n$ and $\mu_n$ are obtained only if $a_G \leq a_F$ and $b_G \leq b_F$. We take two real numbers $c$ and $d$ such that $[c,d] \subset [a_F, b_F]$; this inclusion will be used for the uniform consistency of the d.f. $G(\cdot)$ of the truncation r.v. $T$, which is stated over a compact set (see Remark 6 in [11]).
Hence, based on the idea of the Nadaraya-Watson kernel smoother, the estimator of the generalized regression function $m_\varphi(x)$, defined for all $x \in \mathcal{F}$ by $m_\varphi(x) = \mathbf{E}(\varphi(Y)/X = x)$, where $\varphi$ is a known real-valued Borel function, is
$$\widetilde{m}_\varphi(x) = \frac{\sum_{i=1}^{n}\varphi(Y_i)\,K\big(h^{-1}d(X_i,x)\big)\,G_n^{-1}(Y_i)}{\sum_{i=1}^{n} K\big(h^{-1}d(X_i,x)\big)\,G_n^{-1}(Y_i)},$$
where $K$ is a standard univariate kernel function and the bandwidth $h := h_n$ is a sequence of strictly positive real numbers which plays the role of a smoothing parameter.
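As an illustration of how $\widetilde{m}_\varphi$ can be evaluated in practice, here is a short sketch of ours. The helper names `quad_kernel`, `l2_semimetric` and `Gn_vals` are our own assumptions: the curves $X_i$ are stored as rows of a numpy array, $d$ is the $L^2$ semi-metric, $K$ is a quadratic kernel (both arbitrary choices), and the values $G_n(Y_i)$ are precomputed, e.g. with the product-limit estimator above.

```python
import numpy as np

def quad_kernel(u):
    """Quadratic kernel supported on [0, 1] (illustrative choice of K)."""
    return 1.5 * (1.0 - u**2) * ((u >= 0.0) & (u <= 1.0))

def l2_semimetric(u, v):
    """L2 distance between two discretized curves (illustrative semi-metric d)."""
    return np.sqrt(np.mean((u - v) ** 2))

def m_tilde(x, X, Y, Gn_vals, h, phi=lambda y: y):
    """Kernel estimator of m_phi(x) with 1/G_n(Y_i) weights; the sums are
    restricted to the indices i with G_n(Y_i) != 0."""
    dist = np.array([l2_semimetric(Xi, x) for Xi in X])
    keep = Gn_vals > 0
    w = quad_kernel(dist[keep] / h) / Gn_vals[keep]
    den = w.sum()
    return (w * phi(Y[keep])).sum() / den if den > 0 else 0.0
```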
Note that all the sums containing $G_n^{-1}(Y_i)$ are taken over the indices $i$ such that $G_n(Y_i) \neq 0$. Following [1] and [7], the local linear estimator of $m_\varphi$ in the case of truncated data is obtained as the solution in $a$ of the following minimization problem:
$$\min_{(a,b)}\sum_{i=1}^{n}\big(\varphi(Y_i) - a - b\,\beta(X_i,x)\big)^2\,K\big(h^{-1}d(X_i,x)\big)\,G_n^{-1}(Y_i),$$
where $\beta(\cdot,\cdot)$ is a known operator from $\mathcal{F}\times\mathcal{F}$ into $\mathbb{R}$ such that, $\forall x \in \mathcal{F}$, $\beta(x,x) = 0$. By a simple calculation, one can derive the following explicit estimator:
$$\widehat{m}_\varphi(x) = \frac{\sum_{i\neq j} W_{ij}(x)\,\varphi(Y_j)}{\sum_{i\neq j} W_{ij}(x)} \qquad \left(\text{with the convention } \frac{0}{0} := 0\right),$$
where
$$W_{ij}(x) = A_{ij}(x)\,G_n^{-1}(Y_i)\,G_n^{-1}(Y_j),$$
with
$$A_{ij}(x) := \beta(X_i,x)\,\big(\beta(X_i,x) - \beta(X_j,x)\big)\,K\big(h^{-1}d(X_i,x)\big)\,K\big(h^{-1}d(X_j,x)\big).$$
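The explicit form above translates directly into code. The sketch below (our illustration, reusing the hypothetical helpers `quad_kernel`, `l2_semimetric` and `Gn_vals` from the previous block) builds the matrix $A_{ij}$, forms the weights $W_{ij}$, zeroes the diagonal to enforce the sum over $i \neq j$, and applies the $0/0 = 0$ convention. The default `beta` equal to the semi-metric is only a crude choice satisfying $\beta(x,x) = 0$; in practice $\beta$ is a problem-specific, typically signed, locating function.

```python
import numpy as np

def m_hat(x, X, Y, Gn_vals, h, phi=lambda y: y, beta=None):
    """Local linear estimator: sum_{i!=j} W_ij(x) phi(Y_j) / sum_{i!=j} W_ij(x)."""
    if beta is None:
        beta = l2_semimetric            # crude illustrative default with beta(x, x) = 0
    b = np.array([beta(Xi, x) for Xi in X])                           # beta(X_i, x)
    k = quad_kernel(np.array([l2_semimetric(Xi, x) for Xi in X]) / h)  # K(h^{-1} d(X_i, x))
    g = np.where(Gn_vals > 0, 1.0 / np.maximum(Gn_vals, 1e-12), 0.0)  # G_n^{-1}, 0 if G_n = 0
    # A_ij = beta_i (beta_i - beta_j) K_i K_j  and  W_ij = A_ij G_n^{-1}(Y_i) G_n^{-1}(Y_j)
    A = b[:, None] * (b[:, None] - b[None, :]) * k[:, None] * k[None, :]
    W = A * g[:, None] * g[None, :]
    np.fill_diagonal(W, 0.0)                                          # keep only i != j
    den = W.sum()
    return (W * phi(Y)[None, :]).sum() / den if den != 0 else 0.0     # convention 0/0 = 0
```

When no pair of curves falls into the ball $B(x,h)$, all weights vanish and the convention returns 0; choosing $h$ over a grid by cross-validation is customary.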
3. Pointwise almost sure convergence
For any positive real $h$, let $B(x,h) := \{y \in \mathcal{F} : d(x,y) \leq h\}$ be the closed ball in $\mathcal{F}$ of center $x$ and radius $h$, and set $\phi_x(h,h') := P\big(h < d(x,X) < h'\big)$ and $\phi_x(h) := \phi_x(0,h)$.
To establish the asymptotic behaviour of our estimator $\widehat{m}_\varphi(x)$ at a fixed point $x$ in $\mathcal{F}$, we use the following assumptions:

(H1) For any $h > 0$, $\phi_x(h) > 0$.

(H2) There exists $b > 0$ such that, for all $x_1, x_2 \in B(x,h)$, $|m_\varphi(x_1) - m_\varphi(x_2)| \leq C_x\, d^b(x_1,x_2)$, where $C_x$ is a positive constant depending on $x$.

(H3) The function $\beta(\cdot,\cdot)$ is such that
$$\exists\, 0 < M_1 < M_2,\ \forall x' \in \mathcal{F}:\quad M_1\, d(x,x') \leq |\beta(x,x')| \leq M_2\, d(x,x').$$

(H4) The kernel $K$ is a positive and differentiable function on its support $[0,1]$.

(H5) The bandwidth $h$ satisfies $\lim_{n\to\infty} h = 0$ and $\lim_{n\to\infty} \dfrac{\ln n}{n\,\phi_x(h)} = 0$.

(H6) There exists an integer $n_0$ such that
$$\forall n > n_0,\quad \frac{1}{\phi_x(h)}\int_0^1 \phi_x(zh,h)\,\frac{d}{dz}\big(z^2 K(z)\big)\,dz > C > 0.$$

(H7) $h \displaystyle\int_{B(x,h)} \beta(u,x)\,dP_X(u) = o\left(\int_{B(x,h)} \beta^2(u,x)\,dP_X(u)\right)$, where $dP_X$ is the distribution of $X$.

(H8) For all $m \geq 2$, $a_m : x \mapsto \mathbf{E}(|\varphi^m(Y)|/X = x)$ is a continuous operator on $\mathcal{F}$.
Remark 3.1. Hypotheses (H1)-(H5) are standard in the nonparametric functional regression setting. The remaining hypotheses have already been used in the literature: we refer to [11] for (H6) and (H7), and to [7] for (H8).
Theorem 3.1. Assume that assumptions (H1)-(H8) are satisfied; then
$$\widehat{m}_\varphi(x) - m_\varphi(x) = O(h^b) + O_{a.s.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right).$$
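To read the rate, note that the first term is a bias of order $h^b$ while the second is a dispersion term driven by the small-ball probability $\phi_x(h)$. As a purely illustrative computation of ours (not a statement of the paper), if one assumes a fractal-type behaviour $\phi_x(h) \asymp C_x h^{\tau}$ for some $\tau > 0$, the two terms can be balanced:

```latex
% Balancing bias and dispersion under the assumed behaviour \phi_x(h) \asymp C_x h^{\tau}:
\[
  h^{b} \asymp \sqrt{\frac{\ln n}{n\,h^{\tau}}}
  \;\Longrightarrow\;
  h \asymp \left(\frac{\ln n}{n}\right)^{\frac{1}{2b+\tau}}
  \;\Longrightarrow\;
  \widehat{m}_{\varphi}(x) - m_{\varphi}(x)
  = O_{a.s.}\!\left(\left(\frac{\ln n}{n}\right)^{\frac{b}{2b+\tau}}\right).
\]
```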
To prove our theorem, we need to define the following pseudo-estimators:
$$\widehat{r}_l(x) = \frac{\mu_n^2}{n(n-1)\,E(A_{12}(x))}\sum_{i\neq j} G_n^{-1}(Y_i)\,G_n^{-1}(Y_j)\,A_{ij}(x)\,\varphi^l(Y_j)$$
and
$$\widehat{m}_l(x) = \frac{\mu^2}{n(n-1)\,E(A_{12}(x))}\sum_{i\neq j} G^{-1}(Y_i)\,G^{-1}(Y_j)\,A_{ij}(x)\,\varphi^l(Y_j), \quad\text{for } l = 0, 1,$$
so that $\widehat{m}_\varphi(x) = \widehat{r}_1(x)/\widehat{r}_0(x)$, the normalizing factors cancelling in the ratio. Consider the following decomposition:
$$\widehat{m}_\varphi(x) - m_\varphi(x) = \frac{\widehat{r}_1(x)}{\widehat{r}_0(x)} - m_\varphi(x) = \frac{1}{\widehat{r}_0(x)}\big\{\widehat{r}_1(x) - \widehat{m}_1(x)\big\} + \frac{1}{\widehat{r}_0(x)}\big\{\widehat{m}_1(x) - E(\widehat{m}_1(x))\big\} + \frac{1}{\widehat{r}_0(x)}\big\{E(\widehat{m}_1(x)) - m_\varphi(x)\big\} +$$
$$+ \frac{m_\varphi(x)}{\widehat{r}_0(x)}\Big\{\big(\widehat{m}_0(x) - \widehat{r}_0(x)\big) + \big(E(\widehat{m}_0(x)) - \widehat{m}_0(x)\big) + \big(1 - E(\widehat{m}_0(x))\big)\Big\}. \quad (1)$$
Moreover, we note, for any $x \in \mathcal{F}$ and for all $i = 1, \ldots, n$,
$$K_i(x) := K\big(h^{-1} d(X_i,x)\big) \quad\text{and}\quad \beta_i(x) := \beta(X_i,x).$$
To make things easier, we introduce the following lemmas.

Lemma 1. Under assumptions (H1)-(H8), we have
$$|\widehat{r}_l(x) - \widehat{m}_l(x)| = O_{a.s.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right).$$
Proof. For $l = 0, 1$,
$$|\widehat{r}_l(x) - \widehat{m}_l(x)| = \frac{1}{n(n-1)\,E(A_{12}(x))}\left|\mu_n^2\sum_{i\neq j}\frac{A_{ij}(x)\,\varphi^l(Y_j)}{G_n(Y_i)\,G_n(Y_j)} - \mu^2\sum_{i\neq j}\frac{A_{ij}(x)\,\varphi^l(Y_j)}{G(Y_i)\,G(Y_j)}\right| \leq$$
$$\leq \left(\frac{\mu_n^2\,\sup_{y \geq a_F}\big|G_n^2(y) - G^2(y)\big|}{G^2(a_F)\,G_n^2(a_F)} + \frac{\big|\mu_n^2 - \mu^2\big|}{G^2(a_F)}\right)\,\frac{\sum_{i\neq j} A_{ij}(x)\,\big|\varphi^l(Y_j)\big|}{n(n-1)\,E(A_{12}(x))}.$$
From Theorem 3.2 of [4] we have $|\mu_n - \mu| = O_{a.s.}(n^{-1/2})$, while Remark 6 of [11] gives $\sup_{y}|G_n(y) - G(y)| = O_{a.s.}(n^{-1/2})$; both terms are therefore negligible with respect to $O\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right)$. The rest of the proof is completed as in [7]. Thus we have
$$|\widehat{r}_l(x) - \widehat{m}_l(x)| = O_{a.s.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right). \qquad\square$$
Lemma 2. Under assumptions (H1), (H2) and (H4), we obtain
$$|E(\widehat{m}_1(x)) - m_\varphi(x)| = O(h^b).$$
Proof. We have
$$E(\widehat{m}_1(x)) = E\left(\frac{\mu^2}{n(n-1)\,E(A_{12}(x))}\sum_{i\neq j} G^{-1}(Y_i)\,G^{-1}(Y_j)\,A_{ij}(x)\,\varphi(Y_j)\right) = \frac{\mu^2}{E(A_{12}(x))}\,E\left(\frac{A_{12}(x)\,\varphi(Y_2)}{G(Y_1)\,G(Y_2)}\right) = \frac{1}{E(A_{12}(x))}\,E\big(A_{12}(x)\,m_\varphi(X_2)\big).$$
So we can write, under assumption (H4),
$$|m_\varphi(x) - E(\widehat{m}_1(x))| = \frac{1}{|E(A_{12}(x))|}\,\Big|E\Big(A_{12}(x)\,\big(m_\varphi(x) - m_\varphi(X_2)\big)\Big)\Big| \leq \sup_{x' \in B(x,h)} |m_\varphi(x) - m_\varphi(x')|.$$
Using (H2), we obtain $|E(\widehat{m}_1(x)) - m_\varphi(x)| = O(h^b)$. $\square$
Lemma 3. i) Under assumptions (H1)-(H8), we get
$$\widehat{m}_1(x) - E(\widehat{m}_1(x)) = O_{a.co.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right).$$
ii) Under assumptions (H1), (H3)-(H7), we obtain
$$\widehat{m}_0(x) - 1 = O_{a.co.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right)$$
and
$$\exists\,\delta > 0 \text{ such that } \sum_{n=1}^{\infty} P\big(\widehat{m}_0(x) < \delta\big) < \infty.$$
Proof. Remark that, for $l = 0, 1$,
$$\widehat{m}_l(x) = Q(x)\big[M_{2,l}(x)\,M_{4,0}(x) - M_{3,l}(x)\,M_{3,0}(x)\big], \quad (2)$$
where, for $p = 2, 3, 4$ and $l = 0, 1$,
$$Q(x) = \frac{\mu^2\, n^2\, h^2\, \phi_x^2(h)}{n(n-1)\,E(A_{12}(x))} \quad (3)$$
and
$$M_{p,l}(x) = \frac{1}{n\,\phi_x(h)}\sum_{i=1}^{n}\frac{K_i(x)\,\beta_i^{p-2}(x)\,\varphi^l(Y_i)}{h^{p-2}\,G(Y_i)}. \quad (4)$$
So, we have
$$\widehat{m}_l(x) - E(\widehat{m}_l(x)) = Q(x)\big\{M_{2,l}(x)M_{4,0}(x) - E\big(M_{2,l}(x)M_{4,0}(x)\big)\big\} - Q(x)\big\{M_{3,l}(x)M_{3,0}(x) - E\big(M_{3,l}(x)M_{3,0}(x)\big)\big\}.$$
Notice that $Q(x) = O(1)$; see the proof of Lemma 4.4 in [1]. We need to prove that, for $p = 2, 3, 4$ and $l = 0, 1$,
$$E(M_{p,l}(x)) = O(1), \qquad M_{p,l}(x) - E(M_{p,l}(x)) = O_{a.co.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right),$$
$$E(M_{2,l}(x))\,E(M_{4,0}(x)) - E\big(M_{2,l}(x)M_{4,0}(x)\big) = O\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right),$$
$$E(M_{3,l}(x))\,E(M_{3,0}(x)) - E\big(M_{3,l}(x)M_{3,0}(x)\big) = O\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right).$$
Using assumptions (H1)-(H4), and the identity $\mathbf{E}\big(\psi(X_1,Y_1)/G(Y_1)\big) = \mu^{-1}E\big(\psi(X_1,Y_1)\big)$ valid for any integrable $\psi$ under the left-truncation model, we easily obtain, for $p = 2, 3, 4$ and $l = 0, 1$,
$$E(M_{p,l}(x)) = E\left(\frac{1}{n\,\phi_x(h)}\sum_{i=1}^{n}\frac{K_i(x)\,\beta_i^{p-2}(x)\,\varphi^l(Y_i)}{h^{p-2}\,G(Y_i)}\right) = \mu^{-1}\,h^{2-p}\,\phi_x^{-1}(h)\,E\big(K_1(x)\,\beta_1^{p-2}(x)\,m_\varphi^l(X_1)\big),$$
where $m_\varphi^l(x) = E(\varphi^l(Y)/X = x)$. Lemma A.1 (i) in [1] and condition (H2) allow us to get
$$E(M_{p,l}(x)) = O(1). \quad (5)$$
Treatment of the term $M_{p,l}(x) - E(M_{p,l}(x))$. We put
$$M_{p,l}(x) - E(M_{p,l}(x)) = \frac{1}{n}\sum_{i=1}^{n} Z_i^{(p,l)}(x),$$
where
$$Z_i^{(p,l)}(x) = \frac{1}{h^{p-2}\,\phi_x(h)}\left[\frac{K_i(x)\,\beta_i^{p-2}(x)\,\varphi^l(Y_i)}{G(Y_i)} - E\left(\frac{K_i(x)\,\beta_i^{p-2}(x)\,\varphi^l(Y_i)}{G(Y_i)}\right)\right].$$
The main point is to evaluate asymptotically the $m$th-order moment of the r.r.v. $Z_i^{(p,l)}(x)$. By using Lemma A.1 (i) in [1], we have
$$E\Big|Z_i^{(p,l)}(x)\Big|^m = h^{(2-p)m}\,\phi_x^{-m}(h)\,E\left[\sum_{k=0}^{m} C_m^k\,(-1)^{m-k}\left(\frac{K_i(x)\,\beta_i^{p-2}(x)\,\varphi^l(Y_i)}{G(Y_i)}\right)^{k}\left(E\,\frac{K_i(x)\,\beta_i^{p-2}(x)\,\varphi^l(Y_i)}{G(Y_i)}\right)^{m-k}\right] = O\big(\phi_x^{-m+1}(h)\big).$$
Finally, it suffices to apply Corollary A.8 (ii) in [3] with $a_n^2 = \phi_x^{-1}(h)$ to get, for $p \in \{2,3,4\}$ and $l \in \{0,1\}$,
$$M_{p,l}(x) - E(M_{p,l}(x)) = O_{a.co.}\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right).$$
Turning to the term $E(M_{2,l}(x))E(M_{4,0}(x)) - E(M_{2,l}(x)M_{4,0}(x))$, we have
$$E(M_{2,l}(x))\,E(M_{4,0}(x)) - E\big(M_{2,l}(x)M_{4,0}(x)\big) = n^{-1}h^{-2}\phi_x^{-2}(h)\,E\big(K_1(x)\,\beta_1^2(x)\big)\,E\big(K_1(x)\,\varphi^l(Y_1)\big) + O\big((n\,\phi_x(h))^{-1}\big);$$
by using similar arguments as previously, we get
$$E(M_{2,l}(x))\,E(M_{4,0}(x)) - E\big(M_{2,l}(x)M_{4,0}(x)\big) = O\big((n\,\phi_x(h))^{-1}\big),$$
which is, under (H5), negligible with respect to $O\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right)$. By similar arguments, one can prove that
$$E(M_{3,l}(x))\,E(M_{3,0}(x)) - E\big(M_{3,l}(x)M_{3,0}(x)\big) = O\left(\sqrt{\frac{\ln n}{n\,\phi_x(h)}}\right).$$
For the second part of the lemma, it is easy to check that $E(\widehat{m}_0(x)) = 1$, which leads to the last result.

Theorem 3.1 is proved. $\square$
4. Uniform almost sure convergence
In this section, we investigate the uniform almost sure convergence of $\widehat{m}_\varphi$ on some subset $S_{\mathcal{F}}$ of $\mathcal{F}$ such that $S_{\mathcal{F}} \subset \bigcup_{k=1}^{d_n} B(x_k, r_n)$, where $x_k \in S_{\mathcal{F}}$ and $r_n$ (respectively $d_n$) is a sequence of positive real numbers (respectively of positive integers). For this, we need the following assumptions.
(U1) There exist a differentiable function $\phi$ and strictly positive constants $C$, $C_1$ and $C_2$ such that
$$\forall x \in S_{\mathcal{F}},\ \forall h > 0:\quad 0 < C_1\,\phi(h) \leq \phi_x(h) \leq C_2\,\phi(h) < \infty$$
and
$$\exists\,\eta_0 > 0,\ \forall \eta < \eta_0:\quad \phi'(\eta) < C,$$
where $\phi'$ denotes the first derivative of $\phi$, with $\phi(0) = 0$.

(U2) The generalized regression function $m_\varphi$ satisfies
$$\exists\,C > 0,\ \exists\,b > 0,\ \forall x \in S_{\mathcal{F}},\ \forall x' \in B(x,h):\quad |m_\varphi(x) - m_\varphi(x')| \leq C\,d^b(x,x').$$
(U3) The function $\beta(\cdot,\cdot)$ satisfies (H3) uniformly in $x$, together with the following Lipschitz condition:
$$\exists\,C > 0,\ \forall x_1 \in S_{\mathcal{F}},\ \forall x_2 \in S_{\mathcal{F}},\ \forall x \in \mathcal{F}:\quad |\beta(x,x_1) - \beta(x,x_2)| \leq C\,d(x_1,x_2).$$
(U4) The kernel K fulfils (H4) and is Lipschitzian on [0,1].
(U5) $\lim_{n\to\infty} h = 0$ and, for $r_n = O\left(\dfrac{\ln n}{n}\right)$, we have, for $n$ large enough,
$$\frac{(\ln n)^2}{n\,\phi(h)} < \ln d_n < \frac{n\,\phi(h)}{\ln n}$$
and
$$\sum_{n=1}^{\infty} n^{(3\beta+1)/2}\, d_n^{\,1-\beta} < \infty \quad\text{for some } \beta > 1.$$
(U6) The bandwidth $h$ satisfies: there exist $n_0 \in \mathbb{N}$ and $C > 0$ such that
$$\forall n > n_0,\ \forall x \in S_{\mathcal{F}}:\quad \frac{1}{\phi_x(h)}\int_0^1 \phi_x(zh,h)\,\frac{d}{dz}\big(z^2 K(z)\big)\,dz > C > 0$$
and
$$h\int_{B(x,h)} \beta(u,x)\,dP_X(u) = o\left(\int_{B(x,h)} \beta^2(u,x)\,dP_X(u)\right)$$
uniformly in $x$.
(U7) There exists $C > 0$ such that, for all $m \geq 2$: $\mathbf{E}(|\varphi^m(Y)|/X = x) \leq \nu_m(x) < C < \infty$, with $\nu_m(\cdot)$ continuous on $S_{\mathcal{F}}$.
Remark 4.1. These hypotheses are the uniform versions of the conditions assumed in the pointwise case and have already been used in the literature (see [7]).
Theorem 4.1. Under assumptions (U1)-(U7), we have
$$\sup_{x\in S_{\mathcal{F}}}\big|\widehat{m}_\varphi(x) - m_\varphi(x)\big| = O(h^b) + O_{a.s.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right).$$
The proof of Theorem 4.1 is based on the same decomposition (1) and on the following lemmas.

Lemma 4. Under assumptions (U1)-(U7), we get
$$\sup_{x\in S_{\mathcal{F}}}\big|\widehat{r}_l(x) - \widehat{m}_l(x)\big| = O_{a.s.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right).$$
Proof. By following the same steps as in the proof of Lemma 1 and using Lemma 2.2 in [7], we get the result. $\square$
Lemma 5. Under assumptions (U1), (U2) and (U4), we obtain
$$\sup_{x\in S_{\mathcal{F}}}\big|E(\widehat{m}_1(x)) - m_\varphi(x)\big| = O(h^b).$$
Proof. The proof of Lemma 5 is similar to that of Lemma 2. $\square$
Lemma 6. i) Under assumptions (U1)-(U7), we have
$$\sup_{x\in S_{\mathcal{F}}}\big|\widehat{m}_1(x) - E(\widehat{m}_1(x))\big| = O_{a.co.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right).$$
ii) If assumptions (U1), (U3)-(U6) are satisfied, we get
$$\sup_{x\in S_{\mathcal{F}}}\big|\widehat{m}_0(x) - 1\big| = O_{a.co.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right)$$
and
$$\exists\,\delta > 0 \text{ such that } \sum_{n=1}^{\infty} P\left(\inf_{x\in S_{\mathcal{F}}}\widehat{m}_0(x) < \delta\right) < \infty.$$
Proof. By considering the same decompositions and notations (2)-(5), following the same steps as in the proof of Lemma 3 and using Lemma 6 (i) in [7] instead of Lemma A.1 (i) in [1], we get, under assumptions (U1)-(U4) and (U6),
$$\sup_{x\in S_{\mathcal{F}}} Q(x) = O(1) \quad\text{and}\quad \sup_{x\in S_{\mathcal{F}}} E(M_{p,l}(x)) = O(1), \quad\text{for } p = 2, 3, 4 \text{ and } l = 0, 1,$$
and
$$\sup_{x\in S_{\mathcal{F}}}\big|E(M_{2,l}(x))E(M_{4,0}(x)) - E\big(M_{2,l}(x)M_{4,0}(x)\big)\big| = O\big((n\,\phi(h))^{-1}\big),$$
$$\sup_{x\in S_{\mathcal{F}}}\big|E(M_{3,l}(x))E(M_{3,0}(x)) - E\big(M_{3,l}(x)M_{3,0}(x)\big)\big| = O\big((n\,\phi(h))^{-1}\big),$$
which is, using hypothesis (U5), negligible with respect to $O\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right)$.
Now we prove that
$$\sup_{x\in S_{\mathcal{F}}}\big|M_{p,l}(x) - E(M_{p,l}(x))\big| = O_{a.co.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right).$$
To this end, we need the following decomposition. Let $j(x) = \arg\min_{j\in\{1,2,\ldots,d_n\}} d(x,x_j)$; we have
$$\sup_{x\in S_{\mathcal{F}}}\big|M_{p,l}(x) - E(M_{p,l}(x))\big| \leq \underbrace{\sup_{x\in S_{\mathcal{F}}}\big|M_{p,l}(x) - M_{p,l}(x_{j(x)})\big|}_{D_1^{p,l}} + \underbrace{\sup_{x\in S_{\mathcal{F}}}\big|M_{p,l}(x_{j(x)}) - E\big(M_{p,l}(x_{j(x)})\big)\big|}_{D_2^{p,l}} + \underbrace{\sup_{x\in S_{\mathcal{F}}}\big|E\big(M_{p,l}(x_{j(x)})\big) - E(M_{p,l}(x))\big|}_{D_3^{p,l}}.$$
Using (U1), (U3) and (U4), we get
$$D_1^{p,l} \leq \frac{C}{n}\,\sup_{x\in S_{\mathcal{F}}}\sum_{i=1}^{n} Z_i, \quad\text{where}\quad Z_i := \frac{r_n}{h\,\phi(h)}\,\big|\varphi^l(Y_i)\big|\,\mathbf{1}_{B(x,h)\cup B(x_{j(x)},h)}(X_i).$$
Assumption (U7) allows us to write
$$E|Z_i|^m \leq \frac{C\,r_n^m}{h^m\,\phi^{m-1}(h)}. \quad (6)$$
Using Corollary A.8 (ii) in [3] with $a_n^2 = \dfrac{r_n}{h\,\phi(h)}$, we get
$$\frac{1}{n}\sum_{i=1}^{n} Z_i = E(Z_1) + O_{a.co.}\left(\sqrt{\frac{r_n\,\ln n}{n\,h\,\phi(h)}}\right).$$
Applying (6) again (for $m = 1$), one gets
$$D_1^{p,l} = O\left(\frac{r_n}{h}\right) + O_{a.co.}\left(\sqrt{\frac{r_n\,\ln n}{n\,h\,\phi(h)}}\right).$$
Combining this last result with assumption (U5) and the second part of assumption (U1), we obtain
$$D_1^{p,l} = O_{a.co.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right). \quad (7)$$
For the term $D_3^{p,l}$, since
$$D_3^{p,l} \leq E\left(\sup_{x\in S_{\mathcal{F}}}\big|M_{p,l}(x) - M_{p,l}(x_{j(x)})\big|\right),$$
the same arguments as for $D_1^{p,l}$ yield
$$D_3^{p,l} = O_{a.co.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right). \quad (8)$$
And finally, for the term $D_2^{p,l}$: for all $\eta > 0$,
$$P\left(D_2^{p,l} > \eta\,\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right) = P\left(\max_{j\in\{1,\ldots,d_n\}}\big|M_{p,l}(x_j) - E(M_{p,l}(x_j))\big| > \eta\,\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right) \leq$$
$$\leq d_n\,\max_{j\in\{1,\ldots,d_n\}} P\left(\big|M_{p,l}(x_j) - E(M_{p,l}(x_j))\big| > \eta\,\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right).$$
Taking, for $p = 2, 3, 4$,
$$Y_i^{p,l} = \frac{1}{h^{p-2}\,\phi_x(h)}\left[\frac{K_i(x_{j(x)})\,\beta_i^{p-2}(x_{j(x)})\,\varphi^l(Y_i)}{G(Y_i)} - E\left(\frac{K_i(x_{j(x)})\,\beta_i^{p-2}(x_{j(x)})\,\varphi^l(Y_i)}{G(Y_i)}\right)\right],$$
and using the binomial theorem and hypotheses (U1), (U2) and (U7), we obtain, for $p = 2, 3, 4$,
$$E\big|Y_i^{p,l}\big|^m = O\big(\phi^{-m+1}(h)\big).$$
So we can apply a Bernstein-type inequality, as done in Corollary A.8 (i) in [3], to obtain
$$P\left(\frac{1}{n}\left|\sum_{i=1}^{n} Y_i^{p,l}\right| > \eta\,\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right) \leq 2\,\exp\big(-C\,\eta^2\,\ln d_n\big).$$
Thus, by choosing $\eta$ such that $C\eta^2 = \beta$, we get
$$P\left(D_2^{p,l} > \eta\,\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right) \leq C\,d_n^{\,1-\beta}.$$
Then hypothesis (U5) allows us to write
$$D_2^{p,l} = O_{a.co.}\left(\sqrt{\frac{\ln d_n}{n\,\phi(h)}}\right). \quad (9)$$
Finally, the result of Lemma 6 follows from relations (7), (8) and (9).

The second part of Lemma 6 can be deduced directly from the proof of the first one, using the fact that $E(\widehat{m}_0(x)) = 1$. For the last part, it is straightforward that
$$\inf_{x\in S_{\mathcal{F}}}\widehat{m}_0(x) \leq \frac{1}{2} \;\Longrightarrow\; \exists\,x \in S_{\mathcal{F}} \text{ such that } 1 - \widehat{m}_0(x) \geq \frac{1}{2} \;\Longrightarrow\; \sup_{x\in S_{\mathcal{F}}}\big|1 - \widehat{m}_0(x)\big| \geq \frac{1}{2},$$
so that
$$\sum_{n=1}^{\infty} P\left(\inf_{x\in S_{\mathcal{F}}}\widehat{m}_0(x) < \frac{1}{2}\right) < \infty. \qquad\square$$
Theorem 4.1 is proved.
The authors would like to thank the Editor and the anonymous reviewer for their valuable comments.
References
[1] J.Barrientos-Marin, F.Ferraty, P.Vieu, Locally modelled regression and functional data, Journal of Nonparametric Statistics, 22(2010), no. 5, 617-632. DOI: 10.1080/10485250903089930
[2] S.Derrar, A.Laksaci, E.Ould Said, On the nonparametric estimation of the functional ψ-regression for a random left-truncation model, Journal of Statistical Theory and Practice, 9(2015), no. 4, 823-849. DOI: 10.1080/15598608.2015.1032455
[3] F.Ferraty, P.Vieu, Nonparametric functional data analysis: theory and practice, Springer Science & Business Media, 2006.
[4] S.He, G.L.Yang, Estimation of the truncation probability in the random truncation model, The Annals of Statistics, 26(1998), no. 3, 1011-1027.
[5] N.Helal, E.Ould-Said, Kernel conditional quantile estimator under left truncation for functional regressors, Opuscula Mathematica, 36(2016), no. 1, 25-48.
[6] M.Lemdani, E.Ould-Said, Asymptotic behavior of the hazard rate kernel estimator under truncated and censored data, Communications in Statistics - Theory and Methods, 36(2007), no. 1, 155-173.
[7] S.Leulmi, F.Messaci, Journal of Siberian Federal University. Mathematics and Physics, 12(2019), no. 3, 379-391. DOI: 10.17516/1997-1397-2019-12-3-379-391
[8] S.Leulmi, F.Messaci, Local linear estimation of a generalized regression function with functional dependent data, Communications in Statistics - Theory and Methods, 47(2018), no. 23, 5795-5811.
[9] F.Messaci, N.Nemouchi, I.Ouassou, M.Rachdi, Local polynomial modelling of the conditional quantile for functional data, Statistical Methods & Applications, 24(2015), no. 4, 597-622. DOI: 10.1007/s10260-015-0296-9
[10] W.Stute, Almost sure representations of the product-limit estimator for truncated data, The Annals of Statistics, 21(1993), no. 1, 146-156.
[11] M.Woodroofe, Estimating a distribution function with truncated data, The Annals of Statistics, 13(1985), no. 1, 163-177.
[12] X.Xiong, P.Zhou, C.Ailian, Asymptotic normality of the local linear estimation of the conditional density for functional time-series data, Communications in Statistics - Theory and Methods, 47(2018), no. 14, 3418-3440.