Recursive computation of the invariant measure of a stochastic differential equation driven by a Lévy process
arXiv:math/0509712v1 [math.PR] 30 Sep 2005
Fabien Panloup*
February 8, 2008
Abstract. We investigate some recursive procedures based on an exact or "approximate" Euler scheme with decreasing step, with a view to the computation of invariant measures of solutions to S.D.E.s driven by a Lévy process. Our results are valid for a large class of S.D.E.s that can be driven by Lévy processes with few moments or can have a weakly mean-reverting drift, and they allow us to recover the a.s. C.L.T. for stable processes. Keywords: stochastic differential equation; Lévy process; invariant distribution; Euler scheme; almost sure central limit theorem.
1 Introduction
The purpose of this paper is the computation of the invariant measure $\nu$ of a stochastic differential equation driven by a Lévy process. Practically, the aim of this work is to construct a sequence of simulatable empirical measures $(\bar\nu_n)_{n\ge1}$ such that $\bar\nu_n(\omega,f)\to\nu(f)$ a.s. for a range of functions $f$ containing the bounded continuous functions, by exploiting the ergodic properties of the S.D.E. under Lyapunov assumptions. This work is motivated by two aspects. On the one hand, we want to extend some methods investigated for Brownian diffusions (see below) to S.D.E.s with jumps having a Markovian structure. Indeed, we know that solutions to homogeneous S.D.E.s driven by Lévy processes are Markov and Feller processes under classical conditions (see Theorem 1), and that this property is not preserved if the driving process is a more general semimartingale (see [10]). On the other hand, the second interest is to propose a way to simulate invariant measures of dynamical systems which are commonly used in modelling. In fact, there are many situations where a Brownian diffusion is not satisfactory because, for instance, the noise is discontinuous or too intense. Consider an example coming from the modelling of combined fragmentation and coalescence phenomena: polymer formation at a critical temperature. One then observes that molecules break up and recombine simultaneously. This phenomenon is modelled by a random process called an E.F.C. (Exchangeable Fragmentation-Coalescence) process. In [3], Berestycki shows that, under appropriate conditions, the dust generated by this process is a solution to an S.D.E. with mean-reversion and purely discontinuous Lévy noise. We will return to this example in Section 8.
* Laboratoire de Probabilités et Modèles Aléatoires, Université Paris 6, UMR 7599, bureau 4D1, 175, rue du Chevaleret, F-75013 Paris, France. panloup@ccr.jussieu.fr
For other situations of modelling by S.D.E.s driven by Lévy processes, we refer to Barndorff-Nielsen ([2]) for examples in financial modelling (where one can notably see the practical interest of the invariant measure of Lévy-driven Ornstein-Uhlenbeck processes), Protter-Talay ([16]) (for financial examples and a telephone noise model), and Deng ([7]), who models the spot prices of electricity by a mean-reverting Brownian diffusion perturbed by a compound Poisson noise. As mentioned before, this problem has already been investigated in the case of Brownian diffusions. In [20], Talay approximates $\nu(f)$ by $\frac1N\sum_{n=1}^N f(\bar X^h_n)$ where $(\bar X^h_n)_{n\ge1}$ is an Euler scheme with constant step $h$. The drawbacks of this algorithm are that it is not recursive and that it needs some ellipticity assumptions to be efficient. Our approach is therefore inspired by a second work, due to Lamberton and Pagès (see [11] and [12]), where the sequence of empirical measures is constructed from an Euler scheme with decreasing step. Denoting by $(\bar X_k)_{k\ge1}$ this Euler scheme and by $(\eta_k)_{k\ge1}$ a sequence of weights such that $H_n=\sum_{k=1}^n\eta_k\to+\infty$ as $n\to+\infty$, they show, under Lyapunov assumptions on the coefficients and mild conditions on steps and weights that we do not recall here, that
$$\frac1{H_n}\sum_{k=1}^n\eta_k f(\bar X_{k-1})\xrightarrow[n\to+\infty]{}\nu(f)\qquad\text{a.s.},$$
where $\nu$ is the invariant measure of the diffusion and $f$ is a continuous function with polynomial growth (see [11] and [12] for more details). One of the difficulties encountered in generalizing this kind of result to S.D.E.s driven by Lévy processes is the simulation of the Euler scheme, because the increments of the jump component of a Lévy process can be simulated exactly only in particular cases. However, when "the Lévy measure is simulatable" (in a sense that we will define later), we can approximate the jump component by a simulatable compound Poisson process obtained by truncation of the small jumps (it is also possible to stop this process at its first jump time, see Section 2.2). Note that when the truncation is small, the simulation of the increments of the compound Poisson process can be time-consuming (because of the intensity), but, thanks to the decreasing step, it is possible to adjust steps and truncation so that the cost of the simulation remains reasonable. Our main result is the convergence of the sequence of empirical measures under Lyapunov assumptions in the two above situations.
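As a concrete illustration of this truncation idea, the following sketch simulates one increment of the compound Poisson process that keeps only the jumps of size in $(u,c]$. The Lévy measure $\pi(dy)=|y|^{-1-\alpha}\,dy$ restricted to $0<|y|\le c$, the threshold `u` and all names are our own illustrative assumptions, not the paper's; by symmetry of this measure, the compensating drift of the retained jumps vanishes.

```python
import math
import random

def truncated_cp_increment(gamma, u, c, alpha, rng=None):
    """One increment, over a time step gamma, of the compound Poisson process
    keeping only the jumps of size in (u, c] of the symmetric Levy measure
    pi(dy) = |y|^(-1-alpha) dy (illustrative choice; the compensation term
    is zero here by symmetry)."""
    rng = rng or random.Random(0)
    lam = 2.0 * (u ** -alpha - c ** -alpha) / alpha  # intensity pi(u < |y| <= c)
    # draw the number of jumps K ~ Poisson(lam * gamma) (Knuth's product method)
    L, k, prod = math.exp(-lam * gamma), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            break
        k += 1
    total = 0.0
    for _ in range(k):
        # inverse transform for the magnitude density ~ y^(-1-alpha) on (u, c]
        v = rng.random()
        mag = (u ** -alpha - v * (u ** -alpha - c ** -alpha)) ** (-1.0 / alpha)
        total += mag if rng.random() < 0.5 else -mag
    return total
```

As the text observes, the expected number of jumps per step is $\pi(u<|y|\le c)\,\gamma$, so a decreasing step allows the threshold $u$ to shrink while keeping the cost per step bounded.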
2 Setting and background
1) The stochastic differential equation. Given a positive number $c$ and a Lévy measure $\pi$ on $\mathbb R^l$, we denote by $(X_t)_{t\ge0}$ a càdlàg process solution to the S.D.E.
$$dX_t=b_c(X_{t^-})\,dt+\sigma(X_{t^-})\,dW_t+\kappa(X_{t^-})\,dZ^c_t$$
where $b_c:\mathbb R^d\to\mathbb R^d$ is a continuous vector field, $\sigma$ and $\kappa$ are continuous on $\mathbb R^d$ with values in $\mathbb M_{d,l}$ (the set of $d\times l$ matrices), $(W_t)_{t\ge0}$ is an $l$-dimensional Brownian motion and $(Z^c_t)_{t\ge0}$ is a purely discontinuous Lévy process independent of $(W_t)_{t\ge0}$ with characteristic function
$$\mathbb E\{e^{i\langle u,Z^c_t\rangle}\}=\exp\Big(t\int\big(e^{i\langle u,y\rangle}-1-i\langle u,y\rangle 1_{\{|y|\le c\}}\big)\,\pi(dy)\Big)\qquad\forall t\ge0.$$
Defining $n_t(\omega,dy)=\sum_{0<s\le t}1_{\{\Delta Z^c_s\in dy\}}$, we recall that $Z^c_t=Y^c_t+N^c_t$, where $(Y^c_t)_{t\ge0}$ and $(N^c_t)_{t\ge0}$ are two independent Lévy processes such that
$$Y^c_t=\int_{\{|y|\le c\}}y\,\big(n_t(\cdot,dy)-t\,\pi(dy)\big)\qquad\text{and}\qquad N^c_t=\sum_{0<s\le t}\Delta Z^c_s\,1_{\{|\Delta Z^c_s|>c\}}.$$
$(N^c_t)_{t\ge0}$ is a compound Poisson process with intensity $\lambda=\pi(|y|>c)$ and jump law $\mu(dy)=1_{\{|y|>c\}}\pi(dy)/\pi(|y|>c)$.

Remark 1. In most papers dealing with these processes, the S.D.E. is written $dX_t=f(X_{t^-})\,dL_t$ where $(L_t)_{t\ge0}$ is a Lévy process. According to the Lévy-Khintchine decomposition, our formulation is a little more general, but its main interest for us is to separate the parts of the Lévy process, because they act differently on the dynamical system. First, we isolate the drift part from the others because it usually produces the mean-reverting effect. The two other parts are then considered as noises, and we distinguish them because they do not have the same behavior. With the above formulation, the S.D.E. does not admit a unique representation because of the truncation function (except if $\pi$ is symmetric). However, in many situations, we can rewrite the S.D.E. with a formulation which does not depend on $c$. This is the case if $(N^c_t)_{t\ge0}$ is integrable (then we can compensate for the big jumps, i.e. set $c=+\infty$) or if $(Y^c_t)_{t\ge0}$ has locally bounded variations (then we are not compelled to compensate the small jumps, i.e. it is possible to set $c=0$).

To this end, we introduce two assumptions $(\mathbf H^1_p)$ (resp. $(\mathbf H^2_q)$) on the moments of the Lévy measure, which we are going to use for rewriting the S.D.E. Let $p\in\mathbb R_+$ and $q\in[0,1]$:
$$(\mathbf H^1_p):\ \int_{|y|>c}|y|^{2p}\,\pi(dy)<+\infty,\qquad(\mathbf H^2_q):\ \int_{|y|\le c}|y|^{2q}\,\pi(dy)<+\infty.$$
These assumptions are independent of the truncation function. They provide the following indications on the moments of $(Z^c_t)_{t\ge0}$ and on the local behavior of the small jumps. We have
$$(\mathbf H^1_p)\iff\mathbb E\{|Z^c_t|^{2p}\}<+\infty\ \ \forall t\ge0\qquad\text{and}\qquad(\mathbf H^2_q)\iff\mathbb E\Big\{\sum_{0<s\le t}|\Delta Z^c_s|^{2q}1_{\{|\Delta Z^c_s|\le c\}}\Big\}<+\infty\ \ \forall t\ge0.$$
In particular, if $p\ge1/2$, $Z^c_t$ is integrable, and if $q\le1/2$, $(Z^c_t)_{t\ge0}$ has locally integrable variations. Accordingly, we henceforth adopt the following representation for the S.D.E.:
$$dX_t=b(X_{t^-})\,dt+\sigma(X_{t^-})\,dW_t+\kappa(X_{t^-})\,dZ_t\qquad(1)$$
where the definitions of $b$ and $Z$ depend on $p$ and $q$: consider $p\ge0$ and $q\in[0,1]$ such that $(\mathbf H^1_p)$ and $(\mathbf H^2_q)$ are satisfied. Then:

Case 1: $p>1/2$,
$$b(x)=b_c(x)+\kappa(x)\int_{\{|y|>c\}}y\,\pi(dy)=b_\infty(x),\qquad Z_t=Z^\infty_t=Y_t+N_t\quad\text{with}\quad Y_t=Y^c_t\ \text{ and }\ N_t=N^c_t-t\int_{\{|y|>c\}}y\,\pi(dy).$$

Case 2: $0\le p,q\le1/2$,
$$b(x)=b_c(x)-\kappa(x)\int_{\{|y|\le c\}}y\,\pi(dy)=b_0(x),\qquad Z_t=Z^0_t=Y_t+N_t\quad\text{with}\quad Y_t=Y^c_t+t\int_{\{|y|\le c\}}y\,\pi(dy)\ \text{ and }\ N_t=N^c_t.$$

Case 3: $p\le1/2<q$,
$$b(x)=b_c(x),\qquad Z_t=Z^c_t=Y_t+N_t\quad\text{with}\quad Y_t=Y^c_t\ \text{ and }\ N_t=N^c_t.$$
Remark 2. In cases 1 and 2, the representation is truly independent of $c$, whereas in case 3 we adopt the convention which seems the most natural. For technical reasons, in the critical case $p=1/2$, we choose the notation associated with case 3. Case 1 contains, for example, compound Poisson processes with a moment strictly greater than 1 and one-dimensional stable processes with index $\alpha\in(1,2)$; case 2 contains compound Poisson processes with moment smaller than 1 and one-dimensional stable processes with index $\alpha\in(0,1)$. Finally, case 3 contains, for example, the one-dimensional stable processes with index $\alpha=1$.

We recall a result of existence and uniqueness (see e.g. [15]).

Theorem 1. Assume that $b$, $\sigma$ and $\kappa$ are locally Lipschitz functions with sublinear growth. Then, weak uniqueness holds for the S.D.E. (1). On the other hand, if $(\Omega,\mathcal F,(\mathcal F_t),\mathbb P)$ is a filtered probability space satisfying the usual conditions and $X_0$ is a random variable on $(\Omega,\mathcal F,\mathbb P)$ with values in $\mathbb R^d$, then for any $(\mathcal F_t)$-Brownian motion $(W_t)_{t\ge0}$ and any $(\mathcal F_t)$-adapted $(Z_t)_{t\ge0}$ defined as previously, the S.D.E. (1) admits a unique càdlàg solution $(X_t)_{t\ge0}$ with initial condition $X_0$. Moreover, $(X_t)_{t\ge0}$ is a Feller Markov process. The infinitesimal generator of $(X_t)_{t\ge0}$ is then defined for all $f\in C^2_K$ by
$$\mathcal Af(x)=(\nabla f\cdot b_c)(x)+\frac12\mathrm{Tr}\big(\sigma^*D^2f\,\sigma\big)(x)+\int\pi(dy)\,\big(f(x+\kappa(x)y)-f(x)-(\nabla f\cdot\kappa(x)y)1_{\{|y|\le c\}}\big).\qquad(2)$$

2) Exact and approximate Euler schemes. From now on, we tackle the principal issue of this paper: the computation of invariant measures for S.D.E.s driven by a Lévy process. We want to construct a sequence of empirical measures based on the Euler scheme with decreasing step. Our first task is thus to build this Euler scheme, but, compared with Brownian diffusions, a new problem appears: the simulation of the jumps. Indeed, we can exactly simulate the increments of the jump part of a Lévy process only in some particular cases, such as stable or compound Poisson processes. In the other cases, we have to approximate the increments of $Z$ by simulatable random variables. To this end, we introduce a sequence of positive numbers $(u_n)_{n\ge1}$ such that $u_n<c$ and $u_n\to0$ as $n\to+\infty$, and $(Y^{c,n})_{n\ge1}$, a sequence of càdlàg processes defined by
$$Y^{c,n}_t=\sum_{0<s\le t}\Delta Y^c_s\,1_{\{\Delta Y^c_s\in\{u_n<|y|\le c\}\}}-t\int_{u_n<|y|\le c}y\,\pi(dy)\qquad\forall t\ge0.$$
$Y^{c,n}$ is a compensated compound Poisson process with parameters $\lambda^c_n=\pi(u_n<|y|\le c)$ and $\mu^c_n=1_{\{u_n<|y|\le c\}}\frac{\pi(dx)}{\pi(u_n<|y|\le c)}$. We have $Y^{c,n}\xrightarrow[n\to+\infty]{}Y^c$ locally uniformly in $L^2$. We then call $(Y^n)$ and $(Z^n)$ the sequences of càdlàg processes defined by $Z^n=Y^n+N$, where, for all $t\ge0$,
$$Y^n_t=\begin{cases}Y^{c,n}_t&\text{in cases 1 and 3,}\\[2pt] Y^{c,n}_t+t\displaystyle\int_{u_n<|y|\le c}y\,\pi(dy)&\text{in case 2.}\end{cases}$$

Remark 3. We notice that $Z^n$ is the sum of a drift term and a compound Poisson process with parameters $\lambda_n=\pi(|y|>u_n)$ and $\mu_n=1_{\{|y|>u_n\}}\frac{\pi(dx)}{\pi(|y|>u_n)}$. Then, the increments of $Z^n$ are simulatable if $\lambda_n$ and the coefficient associated with the drift term can be calculated, and if $\mu_n$ is simulatable for all $n\in\mathbb N^*$. Classical methods of calculation or approximation of integrals, together with the rejection method, allow a large range of Lévy processes to satisfy these conditions. If there exists $(u_n)_{n\ge1}$ such that $Z^n$ is simulatable for all $n\in\mathbb N^*$, we say that the Lévy measure is simulatable.
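The recipe of Remark 3 can be sketched as follows. The helper names, the tempered Lévy measure $\pi(dy)=e^{-|y|}|y|^{-1-\alpha}\,dy$ and the parameter values are our own illustrative assumptions, not the paper's: the increment is a drift term plus a compound Poisson sum with intensity $\lambda_n$ and jump law $\mu_n$, the latter drawn by the rejection method mentioned above.

```python
import math
import random

def zbar_increment(gamma, lam, sample_mu, drift=0.0, rng=None):
    """One increment of Z^n over a step gamma: drift * gamma plus a compound
    Poisson sum with intensity lam = pi(|y| > u_n) and jump law mu_n given by
    the callable sample_mu (hypothetical helper, for illustration only)."""
    rng = rng or random.Random(0)
    L, k, prod = math.exp(-lam * gamma), 0, 1.0
    while True:                      # K ~ Poisson(lam * gamma)
        prod *= rng.random()
        if prod <= L:
            break
        k += 1
    return drift * gamma + sum(sample_mu(rng) for _ in range(k))

def sample_mu_rejection(rng, u=0.1, alpha=1.2):
    """Rejection sampling of mu_n for the illustrative tempered measure
    pi(dy) = e^{-|y|} |y|^(-1-alpha) dy on |y| > u: propose a magnitude from
    the explicit Pareto-type law ~ y^(-1-alpha) on (u, inf) and accept it
    with probability e^{-y} <= 1."""
    while True:
        mag = u * rng.random() ** (-1.0 / alpha)   # Pareto magnitude on (u, inf)
        if rng.random() < math.exp(-mag):
            return mag if rng.random() < 0.5 else -mag
```

For one step, `zbar_increment(gamma_n, lam_n, sample_mu_rejection)` plays the role of $\bar Z^B_n$; for scheme (C) below, one would instead keep only the first jump occurring before time $\gamma_n$.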
We are now able to define the exact Euler scheme (scheme (A)), which is used when the increments of $Z$ are exactly simulatable, and the approximate Euler schemes (schemes (B) and (C)), which are used when the Lévy measure is simulatable. Let $x\in\mathbb R^d$ and consider the Euler schemes below:
$$\bar X_0=x,\qquad \bar X_{n+1}=\bar X_n+\gamma_{n+1}b(\bar X_n)+\sqrt{\gamma_{n+1}}\,\sigma(\bar X_n)U_{n+1}+\kappa(\bar X_n)\bar Z_{n+1}\qquad\text{(A)}$$
$$\bar X^B_0=x,\qquad \bar X^B_{n+1}=\bar X^B_n+\gamma_{n+1}b(\bar X^B_n)+\sqrt{\gamma_{n+1}}\,\sigma(\bar X^B_n)U_{n+1}+\kappa(\bar X^B_n)\bar Z^B_{n+1}\qquad\text{(B)}$$
$$\bar X^C_0=x,\qquad \bar X^C_{n+1}=\bar X^C_n+\gamma_{n+1}b(\bar X^C_n)+\sqrt{\gamma_{n+1}}\,\sigma(\bar X^C_n)U_{n+1}+\kappa(\bar X^C_n)\bar Z^C_{n+1}\qquad\text{(C)}$$
where, in each of the schemes, $(\gamma_n)_{n\in\mathbb N}$ is a decreasing sequence of positive numbers satisfying
$$\lim_{n\to+\infty}\gamma_n=0,\qquad\sum_{n\ge1}\gamma_n=+\infty,$$
$(U_n)_{n\in\mathbb N}$ is a sequence of i.i.d. square-integrable random variables with values in $\mathbb R^l$ such that $\mathbb E U_1=0$ and $\Sigma_{U_1}=I_d$, and, lastly, $(\bar Z_n)_{n\in\mathbb N}$, $(\bar Z^B_n)_{n\in\mathbb N}$ and $(\bar Z^C_n)_{n\in\mathbb N}$ are sequences, independent of $(U_n)_{n\in\mathbb N}$, of independent random variables with values in $\mathbb R^l$ such that
$$\bar Z_n\overset{\mathcal L}{=}Z_{\gamma_n},\qquad \bar Z^B_n\overset{\mathcal L}{=}Z^n_{\gamma_n}\qquad\text{and}\qquad \bar Z^C_n\overset{\mathcal L}{=}Z^n_{\gamma_n\wedge T^n}$$
with $T^n=\inf\{s>0,\ |\Delta Z^n_s|>0\}$. Based on these schemes, we define sequences of weighted empirical measures by
$$\forall n\in\mathbb N^*,\qquad \bar\nu_n=\frac1{H_n}\sum_{k=1}^n\eta_k\,\delta_{\bar X_{k-1}},\qquad \bar\nu^B_n=\frac1{H_n}\sum_{k=1}^n\eta_k\,\delta_{\bar X^B_{k-1}}\qquad\text{and}\qquad \bar\nu^C_n=\frac1{H_n}\sum_{k=1}^n\eta_k\,\delta_{\bar X^C_{k-1}},\qquad(3)$$
where $(\eta_k)_{k\in\mathbb N}$ is a sequence of positive numbers satisfying $\sum_{n\ge1}\eta_n=+\infty$ and $H_n=\sum_{k=1}^n\eta_k$.
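The construction above can be sketched in a few lines. The code below is our own illustration (the drift, the jump law and all names are assumptions, not the paper's): it implements scheme (A) for a one-dimensional S.D.E. whose Lévy driver is a compound Poisson process, hence exactly simulatable, and returns $\bar\nu_n(f)$.

```python
import math
import random

def scheme_A_nu(b, sigma, z_increment, x0, n, f,
                gamma=lambda k: k ** -0.5, eta=None, rng=None):
    """Scheme (A) in dimension 1 (illustrative sketch): Euler scheme with
    decreasing step gamma_k and weighted empirical measure
    nu_n(f) = (1/H_n) * sum_k eta_k * f(X_{k-1}); by default eta_k = gamma_k."""
    rng = rng or random.Random(0)
    eta = eta or gamma
    x, num, H = x0, 0.0, 0.0
    for k in range(1, n + 1):
        num += eta(k) * f(x)        # the measure weights the point X_{k-1}
        H += eta(k)
        g = gamma(k)
        u = rng.gauss(0.0, 1.0)     # Gaussian innovation U_k
        x = x + g * b(x) + math.sqrt(g) * sigma(x) * u + z_increment(g, rng)
    return num / H

def cp_increment(g, rng, lam=1.0):
    """Exact draw of Z_g for a compound Poisson Z with intensity lam
    and N(0,1) jumps (an exactly simulatable driver)."""
    L, k, prod = math.exp(-lam * g), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= L:
            break
        k += 1
    return sum(rng.gauss(0.0, 1.0) for _ in range(k))
```

For instance, with $b(x)=-x$, $\sigma=0$ and `cp_increment`, the returned value approximates $\nu(f)$ for the mean-reverting dynamics $dX_t=-X_t\,dt+dZ_t$.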
Remark 4. In scheme (B), we replace the increments of $Z$ by increments of $Z^n$, and in scheme (C) we stop $Z^n$ at its first jump time, because we notice that, asymptotically, under conditions relating $(u_n)_{n\ge0}$ and $(\gamma_n)_{n\in\mathbb N^*}$, only the first jump of $Z^n$ matters for the convergence of the empirical measures. Note that it is possible to improve the approximation of $Z_t$ by replacing the truncated jumps with a Brownian term (see [1] in the one-dimensional case). We will take an interest in this approximation in a future paper, where we study the rate of convergence of the above empirical measures, because it generally improves the rate of scheme (B).

3) About continuous-time ergodicity results. Our aim is to show that the above weighted empirical measures converge to the invariant measure; but before stating the corresponding results, and in order to explain what kind of approach we are going to use, we give a quick review of the various methods which yield the corresponding continuous-time result, namely: if $\nu$ is the invariant measure of a semigroup $(P_t)_{t\ge0}$,
$$\nu_t(\omega,f)=\frac1t\int_0^t f(X^x_s)\,ds\xrightarrow[t\to+\infty]{}\nu(f)\qquad\text{a.s.}\quad\forall x\in\mathbb R^d,\qquad(4)$$
for every $f$ in a range of functions containing $C_b(\mathbb R^d)$. Under a Has'minskii assumption, it is well known that if a Feller Markov process satisfies an irreducibility assumption ($\exists\,t_0>0$ such that $P_{t_0}(x,dy)=p_{t_0}(x,y)\,dy$ and, $dy$-a.s., $\forall x\in\mathbb R^d$, $p_{t_0}(x,y)>0$), existence and uniqueness of the invariant measure are ensured and (4) holds for every locally bounded function $f\in L^1(\nu)$ (see [13]). Likewise, for a Feller Markov process satisfying the same
Has'minskii assumption and the following asymptotic flatness assumption: there exist $p>0$, $x_0\in\mathbb R^d$ and $\alpha_0>0$ such that for any $x\in\mathbb R^d$,
$$\sup_{t\in\mathbb R_+}\mathbb E\{|X^{x_0}_t|^p\}<+\infty\qquad\text{and}\qquad\mathbb E\{|X^x_t-X^{x_0}_t|^p\}\le e^{-\alpha_0t}\,|x-x_0|^p,$$
the same result holds for any continuous function $f$ such that $f(x)\le C(|x|^p+1)$. Yet, these results have two drawbacks for us. On the one hand, the proof depends strongly on the homogeneity of the Markov process and is therefore not transposable to an Euler scheme with decreasing step, which is not a homogeneous Markov chain. On the other hand, irreducibility (resp. asymptotic flatness) is generally obtained under constraining regularity (resp. mean-reverting) conditions. That is why we choose to draw inspiration from another approach, which is based on martingale methods. In [13], this kind of result is stated for Brownian diffusions (Theorem 2), but we are able to extend it to S.D.E.s driven by a Lévy process (see [14]).
3 Main results
1) Notations. Throughout this paper, we use the notation $|\cdot|$ for the Euclidean norm on $\mathbb R^d$. For any $d\times l$ matrix $M$, we define $\|M\|=\sup_{\{|x|\le1\}}|Mx|/|x|$. For a symmetric $d\times d$ matrix $M$, we set $\bar\lambda_M=\max(0,\lambda_1,\dots,\lambda_d)$, where $\lambda_1,\dots,\lambda_d$ are the eigenvalues of $M$. One notices that for every $x\in\mathbb R^d$:
$$Mx^{\otimes2}=x^*Mx\le\bar\lambda_M|x|^2.\qquad(5)$$
We denote by $C_b(\mathbb R^d)$ (resp. $C_0(\mathbb R^d)$) the set of bounded continuous functions on $\mathbb R^d$ with values in $\mathbb R$ (resp. the set of continuous functions that go to 0 at infinity) and by $C^2_K(\mathbb R^d)$ the set of $C^2$ functions on $\mathbb R^d$ with values in $\mathbb R$ and compact support. Finally, we say that $f$ is a $p$-Hölder function on $E$ with values in $F$ (where $E$ and $F$ are normed linear spaces) if $[f]_p=\sup_{x,y\in E}\|f(x)-f(y)\|_F/\|x-y\|^p_E<+\infty$.

2) Assumptions. Let $V:\mathbb R^d\to\mathbb R^*_+$ be a $C^2$ Lyapunov function ($\lim_{|x|\to+\infty}V(x)=+\infty$) such that $|\nabla V|\le C\sqrt V$ and $D^2V$ is bounded. We set $v:=\min V$ and, for $p>0$, we define
$$\lambda_p:=\sup_{x\in\mathbb R^d}\bar\lambda_{\big(D^2V+(p-1)\frac{\nabla V\otimes\nabla V}{V}\big)}(x)\qquad\text{and}\qquad h_p:=\begin{cases}[V^p]_{2p}&\text{if }p\in(0,1/2],\\[2pt]\tfrac12\,[V^{p-1}\nabla V]_{2p-1}&\text{if }p\in(1/2,1].\end{cases}$$
$\lambda_p$ and $h_p$ are finite under the assumptions on $V$. Let $a\in(0,1]$, $p>0$ and $q\in[0,1]$ be parameters relating, respectively, to the mean-reverting intensity, the moment of the Lévy process and the behavior of the small jumps. We introduce assumptions $(\mathbf S_{a,p,q})$ (growth assumption) and $(\mathbf R_{a,p,q})$ (mean-reverting assumption):

Assumption $(\mathbf S_{a,p,q})$:
$$|b|^2\le CV^a,\qquad\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^{2(p\vee q)}\le\begin{cases}CV^{a+p-1}&\text{if }p<1,\\ CV^a&\text{if }p\ge1.\end{cases}$$

Assumption $(\mathbf R_{a,p,q})$: there exist $\beta\in\mathbb R$ and $\alpha>0$ such that
$$(\nabla V\,|\,b)+h_p\Big(\int|y|^{2p}\,\pi(dy)\Big)\frac{\|\kappa\|^{2p}}{V^{1-p}}\,1_{\{q\le p\}}\le\beta-\alpha V^a\qquad\text{if }p<1,$$
$$(\nabla V\,|\,b)+c_p\lambda_p\Big(\mathrm{Tr}(\sigma\sigma^*)+\int|y|^2\,\pi(dy)\,\|\kappa\|^2\Big)+d_p\int|y|^{2p}\,\pi(dy)\,\frac{\|\kappa\|^{2p}}{V^{p-1}}\le\beta-\alpha V^a\qquad\text{if }p\ge1,$$
where $c_p=2^{2(p-1)\wedge1}$ and $d_p=0$ (resp. $[\sqrt V]^{2(p-1)}_1$) if $p=1$ (resp. $p>1$).

Remark 5. An important point of our work has been to obtain independence between the assumptions and the truncation function, because we recall that $c$ is not a structural parameter of the process. This is one of the reasons for which we introduced three cases. In fact, in cases 1 and 2, we notice that the assumptions are independent of $c$ by construction. We have the same property in case 3 ($p\le1/2<q$) because, under $(\mathbf S_{a,p,q})$, $(\mathbf R_{a,p,q})$ is satisfied for any $c>0$ if and only if it is satisfied for one $c$. This is due to the fact that, if $p\le1/2$ and $q>1/2$, the condition $\|\kappa\|^{2q}=O(V^{p+a-1})$ is too constraining for $\kappa$ to produce a sufficiently strong mean-reversion to ensure $(\mathbf R_{a,p,q})$ (see the end of the proof of Proposition 2 in the case $p<1$ for more details).

3) Main results. Our main results are Theorems 2 and 3. In Theorem 2, we obtain a result under uncomplicated conditions on steps and weights. In Theorem 3, we show that, under more constraining conditions on steps and weights, we can relax the assumptions on the coefficients of the S.D.E. The second theorem can be very useful when the Lévy process has few moments.
Theorem 2. Let $a\in(0,1]$, $p>0$ and $q\in[0,1]$. Assume that $(\mathbf H^1_p)$, $(\mathbf H^2_q)$, $(\mathbf R_{a,p,q})$ and $(\mathbf S_{a,p,q})$ are satisfied, that $\mathbb E\{|U_1|^{2(p\vee1)}\}<+\infty$ and that the sequence $(\eta_n/\gamma_n)_{n\ge1}$ is nonincreasing. Then:

(1) If $p/2+a-1>0$, the sequence $(\bar\nu_n)_{n\ge1}$ is almost surely tight. Moreover, if $\kappa(x)=o(|x|)$ as $|x|\to+\infty$ and $\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^{2q}\le CV^{\frac p2+a-1}$, then every weak limit of this sequence is an invariant probability for the S.D.E. (1). In particular, if $(X_t)_{t\ge0}$ admits a unique invariant probability $\nu$, then for every continuous function $f$ such that $f=O(V^{\frac p2+a-1})$, $\lim_{n\to\infty}\bar\nu_n(f)=\nu(f)$.

(2) The same result holds for $(\bar\nu^B_n)_{n\ge1}$, and, under the additional condition
$$\pi(|y|>u_n)\,\gamma_n\xrightarrow[n\to+\infty]{}0,\qquad(6)$$
the same result holds for $(\bar\nu^C_n)_{n\ge1}$.

Remark 6. Let us consider the implications of the distribution of the Lévy measure on the above theorem. Clearly, the moment of the Lévy process is the principal structural constraint because it imposes $p$; but, once $p$ is fixed, we notice that the behavior of the small jumps has an important influence on the conditions which ensure that every weak limit of the sequence of empirical measures is invariant. This influence is natural because, for instance, the error induced by the discretization is easier to control for a compound Poisson process than for a process which does not have locally bounded variations.

Remark 7. Theorem 2 applied to scheme (B) does not require condition (6). However, for simulation purposes, we also have to add a supplementary condition. Indeed, we see that $\pi(|y|>u_n)\gamma_n$ is the parameter of the Poisson law underlying $\bar Z^B_n$. Then, when we run the procedure, we must impose a condition such as
$$\sup_{n\in\mathbb N}\pi(|y|>u_n)\,\gamma_n<M<+\infty.$$

Below, we give some examples where the above theorem applies. In the first one, the conditions on the coefficients are adapted to the case $q=1$ (the worst case). In the second one, we choose to take $b_c=\sigma=0$ in order to recall that an S.D.E. driven by a noncentered, purely discontinuous Lévy process can generate a sufficiently strong mean-reversion to satisfy some ergodic properties. In both situations, we take $V(x)=1+|x|^2$.
Example 1. Assume that assumption $(\mathbf H^1_p)$ (with $p>0$) is satisfied and that $b(x)=-\rho(x)\,x/|x|+T(x)$ with $C_1|x|^\varepsilon\le\rho(x)\le C_2|x|^\varepsilon$ ($0<C_1\le C_2$) and $(T(x)\,|\,x)=0$ for $|x|$ sufficiently large. Assume that $|T|^2\le CV^{\frac{\varepsilon+1}2}$ and
$$\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^2\le CV^{\frac{p+\varepsilon-1}2}\quad\text{if }p<2,\qquad\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^2=o\big(V^{\frac{\varepsilon+1}2}\big)\quad\text{if }p\ge2.$$
Then, if $(1-p)\vee(-1)<\varepsilon\le1$ and $(\eta_n/\gamma_n)_{n\in\mathbb N^*}$ is nonincreasing, Theorem 2 can be applied.

Example 2. Assume that assumption $(\mathbf H^1_p)$ is satisfied with $p>1/2$ and consider the following S.D.E.: $dX_t=\kappa(X_{t^-})\,dZ^c_t=-\kappa(X_{t^-})\int_{\{|y|>c\}}y\,\pi(dy)\,dt+\kappa(X_{t^-})\,dZ_t$. In order to simplify, we suppose that we are in the one-dimensional case. We define $h=\int_{\{|y|>c\}}y\,\pi(dy)$. Assume that $h\ne0$ and that $\kappa(x)=-\mathrm{sgn}(h)\,\rho(x)\,x/|x|$ with $C_1|x|^\varepsilon\le\rho(x)\le C_2|x|^\varepsilon$ for $|x|$ sufficiently large. Then, the assumptions of Theorem 2 are fulfilled if $(\mathbf H^1_p)$ and $(\mathbf H^2_q)$ are satisfied with $p\in(1/2,1)$, $q<1/2$ and $\varepsilon\in\big(\frac{1-p}{1-2q},1\big)$, or $p\ge1$, $q\le1/2$ and $\varepsilon\in(-1\vee(1-p),1)$, or $p>1$, $q>1/2$ and $\varepsilon\in\big(-1\vee(1-p),1\wedge\frac{p-1}{2q-1}\big)$.
The interest of Theorem 2 is that it is easy to use. For example, in scheme (A), we only have to take a nonincreasing sequence $(\gamma_n)_{n\in\mathbb N}$ converging to 0 with infinite sum, and $\eta_n=\gamma_n$. The next theorem (Theorem 3) is more constraining on steps and weights, but it becomes necessary if the coefficients do not satisfy the assumptions associated with Theorem 2, or if we want to obtain convergence for "less integrable" functions.
Theorem 3. Let $a\in(0,1]$, $p>0$ and $q\in[0,1]$. Assume that $(\mathbf H^1_p)$, $(\mathbf H^2_q)$, $(\mathbf R_{a,p,q})$ and $(\mathbf S_{a,p,q})$ are satisfied, that $\mathbb E\{|U_1|^{2(p\vee1)}\}<+\infty$ and that the sequence $(\eta_n/\gamma_n)_{n\ge1}$ is nonincreasing. Then:

(1) Consider $s\in(1,2]$ satisfying
$$s>\frac{2p}{2p+2(a-1)}\quad\text{if }\tfrac12<p\le1,\qquad s>\frac{2p}{2p+a-1}\quad\text{if }p\ge1.\qquad(7)$$
If $p/s+a-1>0$, there exist explicit sequences $(\gamma_n)_{n\ge1}$ and $(\eta_n)_{n\ge1}$ such that the sequence $(\bar\nu_n)_{n\ge1}$ is almost surely tight. Moreover, if $\kappa(x)=o(|x|)$ as $|x|\to+\infty$ and $\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^{2q}\le CV^{\frac ps+a-1}$, then every weak limit of this sequence is an invariant probability for the S.D.E. (1). In particular, if $(X_t)_{t\ge0}$ admits a unique invariant probability $\nu$, then for every continuous function $f$ such that $f=O(V^{\frac ps+a-1})$, $\lim_{n\to\infty}\bar\nu_n(f)=\nu(f)$.

(2) The same result holds for $(\bar\nu^B_n)_{n\ge1}$, and, under the additional condition
$$\pi(|y|>u_n)\,\gamma_n\xrightarrow[n\to+\infty]{}0,\qquad(8)$$
the same result holds for $(\bar\nu^C_n)_{n\ge1}$.

Remark 8. The family of explicit sequences $(\eta_n)_{n\ge1}$ and $(\gamma_n)_{n\ge1}$ satisfying sufficient conditions for the above theorem is given in the following section (see Proposition 1 for general conditions, Remark 9 for conditions adapted to polynomial steps and weights, and Example 3 for the particular case where the mean-reversion has linear growth). In the following example, we consider the same kind of S.D.E. as in Example 1 in the particular case $a=1$. We give conditions on $\sigma$ and $\kappa$ and explicit polynomial weights and steps such that the above theorem is valid.
Example 3. Assume that $(\mathbf H^1_p)$ and $(\mathbf H^2_q)$ are satisfied with $p>0$ and $q\in[0,1]$, and consider $\gamma_n=Cn^{-r_1}$, $\eta_n=Cn^{-r_2}$ ($r_1\le r_2$) and $s\in(1,2]$ with
$$0<r_1<2\Big(1-\frac1s\Big)\ \text{ and }\ r_2<1,\qquad\text{or}\qquad 0<r_1\le2\Big(1-\frac1s\Big)\ \text{ and }\ r_2=1.$$
Then, if $b$ is defined as in Example 1 with $\varepsilon=1$ and
$$|T|^2\le CV,\qquad\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^{2q}\le CV^{\frac ps}\qquad\text{and}\qquad\mathrm{Tr}(\sigma\sigma^*)+\|\kappa\|^2=o(V),$$
Theorem 3 is valid (under less constraining assumptions on $\sigma$ and $\kappa$ than in Example 1).

4) Organization of the proof of Theorems 2 and 3. First, we prove Theorems 2 and 3 for scheme (A). In Section 4, we show that $(\bar\nu_n)_{n\in\mathbb N^*}$ is a.s. tight and, in Section 5, we show that every weak limit of $(\bar\nu_n)_{n\in\mathbb N^*}$ is invariant for the S.D.E. (1). We then observe that Theorems 2 and 3 follow by combining Propositions 1 and 3. Lastly, in Section 6, we point out the main differences in the proof of the main theorems for schemes (B) and (C).
4 A.s. tightness of $(\bar\nu_n(\omega,dx))_{n\in\mathbb N^*}$
The main result of this section is Proposition 1. Before stating it, we introduce the function $f_{a,p}$ defined for all $s\in(1,2]$ by
$$f_{a,p}(s)=\begin{cases}s\,\dfrac{p+a-1}{p}&\text{if }s\ge2p,\\[4pt] s\,\dfrac{2p+a-1}{2p}&\text{if }s\le2p.\end{cases}\qquad(9)$$
Suppose $p+a-1>0$. Then $s\mapsto f_{a,p}(s)$ is a nondecreasing function which satisfies $f_{1,p}(s)=s$ for all $p>0$ and $f_{a,p}(2)=2$. We notice that $f_{a,p}(s)>1$ if and only if $s$ satisfies assumption (7).
Proposition 1. Let $a\in(0,1]$, $p>0$ and $q\in(0,1]$. Assume that $(\mathbf H^1_p)$, $(\mathbf H^2_q)$, $(\mathbf R_{a,p,q})$ and $(\mathbf S_{a,p,q})$ are satisfied. If $\mathbb E\{|U_1|^{2(p\vee1)}\}<+\infty$ and $(\eta_n/\gamma_n)_{n\ge1}$ is nonincreasing, then:

(1) $\sup_{n\ge1}\bar\nu_n(V^{\frac p2+a-1})<+\infty$ a.s. Consequently, if $\frac p2+a-1>0$, the sequence $(\bar\nu_n)_{n\in\mathbb N}$ is a.s. tight.

(2) Let $s\in(1,2)$ satisfy assumption (7) and consider $(\eta_n)_{n\ge1}$ and $(\gamma_n)_{n\ge1}$ such that
$$\frac1{\gamma_n}\Big(\frac{\eta_n}{H_n\sqrt{\gamma_n}}\Big)^{f_{a,p}(s)}\ \text{is nonincreasing}\qquad\text{and}\qquad\sum_{n\ge1}\Big(\frac{\eta_n}{H_n\sqrt{\gamma_n}}\Big)^{f_{a,p}(s)}<+\infty.\qquad(10)$$
Then $\sup_{n\ge1}\bar\nu_n(V^{\frac ps+a-1})<+\infty$ a.s., and the sequence $(\bar\nu_n)_{n\in\mathbb N}$ is a.s. tight as soon as $p/s+a-1>0$.

Remark 9. If $\gamma_n=Cn^{-r_1}$ and $\eta_n=Cn^{-r_2}$ with $r_1\le r_2$, then assumption (10) is fulfilled in the following cases:
$$0<r_1<2\Big(1-\frac1{f_{a,p}(s)}\Big)\ \text{ and }\ r_2<1,\qquad\text{or}\qquad 0<r_1\le2\Big(1-\frac1{f_{a,p}(s)}\Big)\ \text{ and }\ r_2=1.\qquad(11)$$
The rest of this section is devoted to the proof of Proposition 1. We establish a recursive control of the Euler scheme (Proposition 2), notably thanks to Lemma 1. We then develop technical arguments close to [12] and prove Proposition 1 in Subsection 4.4.
4.1 Control of the moments of the jump components
In order to control the noise induced by the jump part, we establish a lemma in which we study the behavior of the moments of $Z$ near 0. This control is fundamental for Proposition 2 and has a direct implication on the quality of assumption $(\mathbf R_{a,p,q})$.

Lemma 1. (i) Let $p>0$ be such that $\int_{|y|>c}|y|^{2p}\,\pi(dy)<+\infty$. Then,
$$\mathbb E\{|N^c_t|^{2p}\}=t\int_{|y|>c}|y|^{2p}\,\pi(dy)+\delta_c(t)\,t^2\qquad\text{if }p>0,$$
where $\delta_c$ is locally bounded, and
$$\mathbb E\Big\{\Big|N^c_t-t\int_{|y|>c}y\,\pi(dy)\Big|^{2p}\Big\}\le t\int_{|y|>c}|y|^{2p}\,\pi(dy)+C_{T,p,c}\,t^{2(p\vee1)}\qquad\text{if }p>1/2.$$

(ii) Let $q\in[0,1]$ be such that $\int_{|y|\le c}|y|^{2q}\,\pi(dy)<+\infty$. Then,
$$\mathbb E\Big\{\Big|Y^c_t+t\int_{|y|\le c}y\,\pi(dy)\Big|^{2q}\Big\}\le t\int_{|y|\le c}|y|^{2q}\,\pi(dy)\qquad\text{if }q\le1/2,$$
$$\mathbb E\{|Y^c_t|^{2q}\}\le C_q\,t\int_{|y|\le c}|y|^{2q}\,\pi(dy)+C_{T,q,c}\,t^2\qquad\text{if }1/2<q\le1.$$

(iii) If $\int_{|y|>c}|y|^{2p}\,\pi(dy)<+\infty$ with $p\ge1$, then, if $p=1$,
$$\mathbb E\{|Z_t|^2\}=t\int|y|^2\,\pi(dy),$$
and, if $p\ge1$, for any $\varepsilon>0$ there exists $C_\varepsilon>0$ such that for any $t\le T$,
$$\mathbb E\{|Z_t|^{2p}\}\le t\Big(\int|y|^{2p}\,\pi(dy)+\varepsilon\Big)+C_\varepsilon\,t^{p\wedge2}.$$
Remark 10. The behavior of the moments of the jump component is quite different from that of a Brownian motion. In particular, an important point is that one cannot find $\alpha>1$ and $p>0$ such that $\mathbb E\{|Z_t|^{2p}\}=O(t^\alpha)$ (an equality which does hold for the Brownian motion). Indeed, if such an equality were valid, the Kolmogorov criterion would show that $Z$ is continuous. That is why, in $(\mathbf R_{a,p,q})$ with $p>1$, we have a (parasitic) component of order $2p$ coming from the jump part.

Proof. (i) $(N^c_t)_{t\ge0}$ is a compound Poisson process with parameters $\lambda=\pi(|y|>c)$ and $\mu(dy)=1_{\{|y|>c\}}\frac{\pi(dy)}{\pi(|y|>c)}$. We then write $N^c_t=\sum_{n\ge1}R_n1_{\{T_n\le t\}}$, where $(R_n)_{n\ge1}$ is a sequence of i.i.d. random variables with law $\mu$ and $(T_n)_{n\in\mathbb N}$ is the sequence of jump times of a Poisson process with intensity $\lambda$, independent of $(R_n)_{n\ge1}$. We have
$$\mathbb E\{|N^c_t|^{2p}\}=\sum_{n\ge1}\mathbb E\Big\{\Big|\sum_{i=1}^nR_i\Big|^{2p}\Big\}\,e^{-\lambda t}\frac{(\lambda t)^n}{n!}=\lambda t\,\mathbb E\{|R_1|^{2p}\}\,F_\lambda(t),$$
where
$$F_\lambda(t)=e^{-\lambda t}\sum_{n\ge0}\frac{\mathbb E\{|\sum_{i=1}^{n+1}R_i|^{2p}\}}{\mathbb E\{|R_1|^{2p}\}}\,\frac{(\lambda t)^n}{(n+1)!}.$$
Using the inequality
$$\Big|\sum_{i=1}^na_i\Big|^{2p}\le n^{(2p)\vee1-1}\sum_{i=1}^n|a_i|^{2p}\qquad\forall(a_i)_{i=1}^n\in\mathbb R^n,$$
we obtain
$$\frac{\mathbb E\{|\sum_{i=1}^{n+1}R_i|^{2p}\}}{(n+1)!\,\mathbb E\{|R_1|^{2p}\}}\le\frac{(n+1)^{(2p-1)_+}}{n!}.$$
We deduce that $F_\lambda$ is an analytic function on $\mathbb R$ satisfying $F_\lambda(0)=1$ and, therefore, $F_\lambda(t)=1+t\delta_c(t)$ with $|\delta_c(t)|\le C(T,p,c,\lambda)$ for all $t\in[0,T]$. The first equality follows by noticing that $\mathbb E\{|R_1|^{2p}\}=\frac1\lambda\int_{\{|y|>c\}}|y|^{2p}\,\pi(dy)$. The second inequality of (i) is obtained by using the following elementary inequality:
$$(u+v)^\alpha\le u^\alpha+\alpha2^{\alpha-1}(u^{\alpha-1}v+v^\alpha)\qquad\forall u,v\in\mathbb R_+,\ \forall\alpha\ge1.\qquad(12)$$
(ii) If $\int_{|y|\le c}|y|^{2q}\,\pi(dy)<+\infty$ with $q\le1/2$, then $Y^c$ has locally bounded variations. By the elementary inequality $|u+v|^\alpha\le|u|^\alpha+|v|^\alpha$ for $\alpha\in(0,1]$ and the compensation formula, we obtain
$$\mathbb E\Big\{\Big|Y^c_t+t\int_{|y|\le c}y\,\pi(dy)\Big|^{2q}\Big\}=\mathbb E\Big\{\Big|\sum_{0<s\le t}\Delta Y^c_s\Big|^{2q}\Big\}\le\mathbb E\Big\{\sum_{0<s\le t}|\Delta Y^c_s|^{2q}\Big\}=t\int_{|y|\le c}|y|^{2q}\,\pi(dy).$$
Let now $q\in(1/2,1]$. As $Y^c$ is a martingale, we derive from the Burkholder-Davis-Gundy inequality
$$\mathbb E\{|Y^c_t|^{2q}\}\le C_q\,\mathbb E\Big\{\Big(\sum_{0<s\le t}|\Delta Y^c_s|^2\Big)^q\Big\}.$$
The second inequality then follows by using $|u+v|^\alpha\le|u|^\alpha+|v|^\alpha$ for $\alpha\in(0,1]$ and the compensation formula.

(iii) $M$, defined by $M_t=|Z_t|^2-t\int|y|^2\,\pi(dy)$ for any $t$, is a martingale; then, in particular,
$$\mathbb E\{|Z_t|^2\}=t\int|y|^2\,\pi(dy).\qquad(13)$$
If $p>1$, consider $\varphi_p:x\mapsto|x|^{2p}$. By the Itô formula, we obtain
$$\mathbb E\{|Y^c_{t\wedge\tau_n}|^{2p}\}=\mathbb E\Big\{\sum_{0<s\le t\wedge\tau_n}\big(|Y^c_s|^{2p}-|Y^c_{s^-}|^{2p}-(\nabla\varphi_p(Y^c_{s^-})\,|\,\Delta Y^c_s)\big)\Big\},$$
where $(\tau_n)_{n\ge1}$ is a sequence of stopping times such that $\tau_n\to+\infty$ as $n\to+\infty$. We have
$$\frac{\partial^2\varphi_p}{\partial x_j\partial x_i}(x)=2p\big(|x|^{2(p-1)}1_{\{i=j\}}+2(p-1)x_jx_i|x|^{2(p-2)}\big),$$
hence
$$\|D^2\varphi_p(x)\|_\infty=\max_{i,j\in\{1,\dots,d\}}\Big|\frac{\partial^2\varphi_p}{\partial x_j\partial x_i}(x)\Big|\le C_p|x|^{2(p-1)}.$$
Consequently, by the compensation formula, the inequality $|u+v|^\alpha\le2^{\alpha\vee1-1}(|u|^\alpha+|v|^\alpha)$ and the independence of $\Delta Y^c_s$ and $Y^c_{s^-}$, we obtain
$$\mathbb E\{|Y^c_{t\wedge\tau_n}|^{2p}\}\le C\,\mathbb E\Big\{\sum_{0<s\le t}\sup_{\theta\in[0,1]}|Y^c_{s^-}+\theta\Delta Y^c_s|^{2(p-1)}|\Delta Y^c_s|^2\Big\}$$
$$\le C\Big(t\int_{|y|\le c}|y|^{2p}\,\pi(dy)+\int_{|y|\le c}|y|^2\,\pi(dy)\int_0^t\mathbb E\{|Y^c_s|^{2(p-1)}\}\,ds\Big)\le C\,t\int_{|y|\le c}|y|^{2p}\,\pi(dy)+C\int_0^t\mathbb E\{|Y^c_s|^{2(p-1)}\}\,ds.$$
Thanks to the Fatou lemma, we deduce
$$\mathbb E\{|Y^c_t|^{2p}\}\le C_p\,t\int_{|y|\le c}|y|^{2p}\,\pi(dy)+C\int_0^t\mathbb E\{|Y^c_s|^{2(p-1)}\}\,ds\qquad\forall t\ge0,\ \forall p>1.\qquad(14)$$
If $p\in[1,2]$, we derive from the Jensen inequality
$$\mathbb E\{|Y^c_s|^{2(p-1)}\}\le\big(\mathbb E\{|Y^c_s|^2\}\big)^{p-1}\le s^{p-1}\Big(\int_{|y|\le c}|y|^2\,\pi(dy)\Big)^{p-1}.$$
Then, by induction on $k:=\inf\{l\ge0,\ l\ge p-1\}$, we obtain
$$\mathbb E\{|Y^c_t|^{2p}\}\le C_p\,t\int_{|y|\le c}|y|^{2p}\,\pi(dy)+C_{p,c}\,t^{p\wedge2}.$$
As $\int_{|y|\le c}|y|^{2p}\,\pi(dy)\to0$ when $c\to0$, we can choose $c_\varepsilon>0$ such that $C_p\int_{|y|\le c_\varepsilon}|y|^{2p}\,\pi(dy)\le\varepsilon$; then, using (12) and the independence between $Y^{c_\varepsilon}$ and $N^{c_\varepsilon}$, we deduce
$$\mathbb E\{|Z_t|^{2p}\}\le\mathbb E\Big\{\Big|N^{c_\varepsilon}_t-t\int_{|y|>c_\varepsilon}y\,\pi(dy)\Big|^{2p}\Big\}+C\Big(\mathbb E\Big\{\Big|N^{c_\varepsilon}_t-t\int_{|y|>c_\varepsilon}y\,\pi(dy)\Big|\Big\}\,\mathbb E\{|Y^{c_\varepsilon}_t|^{2p-1}\}+\mathbb E\{|Y^{c_\varepsilon}_t|^{2p}\}\Big).$$
Adjusting $\varepsilon$, the second inequality of (iii) follows from the second inequality of (i). □
4.2 A recursive stability relation
We first state a technical lemma useful for Proposition 2.

Lemma 2. a) If $p\in[0,1/2]$, $V^p$ is $\alpha$-Hölder for any $\alpha\in[2p,1]$ and, for all $p\in(0,1]$, $W=p(\nabla V)V^{p-1}$ is $\alpha$-Hölder for any positive $\alpha\in[2p-1,1]$.
b) Let $x,y\in\mathbb R^d$ and $\xi\in[x,x+y]$. If $p\le1$,
$$D^2(V^p)(\xi)\,y^{\otimes2}\le p\,v^{p-1}\lambda_p\,|y|^2.$$
If, moreover, $|y|\le\varepsilon\frac{\sqrt{V(x)}}{[\sqrt V]_1}$ with $\varepsilon\in[0,1)$, then
$$D^2(V^p)(\xi)\,y^{\otimes2}\le\lambda_p(1-\varepsilon)^{2(p-1)}V^{p-1}(x)\,|y|^2.$$
If $p>1$,
$$D^2(V^p)(\xi)\,y^{\otimes2}\le p\lambda_p2^{2(p-1)\vee1-1}\big(V^{p-1}(x)+[\sqrt V]_1^{2(p-1)}|y|^{2(p-1)}\big)|y|^2.$$

Proof. Consider a continuous function $f:\mathbb R^d\to\mathbb R$ such that $f^{\frac1\alpha}$ is Lipschitz. Then $f$ is $\alpha$-Hölder. Using this argument, a) follows. Let us now consider the inequalities in b). We have
$$D^2(V^p)=pV^{p-1}D^2V+p(p-1)V^{p-2}\nabla V\otimes\nabla V=pV^{p-1}\Big(D^2V+(p-1)\frac{\nabla V\otimes\nabla V}{V}\Big).\qquad(15)$$
As $V^{p-1}$ is bounded if $p\le1$, we derive the first inequality from relation (5). For the second one, we consider $\xi=x+\theta y$ with $\theta\in[0,1]$ and $|y|\le\varepsilon\frac{\sqrt{V(x)}}{[\sqrt V]_1}$. As $\sqrt V$ is a Lipschitz function,
$$\sqrt{V(\xi)}\ge\sqrt{V(x)}-[\sqrt V]_1|y|\ge(1-\varepsilon)\sqrt{V(x)}\quad\Longrightarrow\quad V^{p-1}(\xi)\le(1-\varepsilon)^{2(p-1)}V^{p-1}(x).$$
If $p>1$,
$$\sqrt{V(\xi)}\le\sqrt{V(x)}+[\sqrt V]_1|y|\quad\Longrightarrow\quad V^{p-1}(\xi)\le\big(\sqrt{V(x)}+[\sqrt V]_1|y|\big)^{2(p-1)}.$$
The result follows from the inequality $|u+v|^\alpha\le2^{\alpha\vee1-1}(|u|^\alpha+|v|^\alpha)$ for all $u,v\ge0$. □
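Identity (15), which drives the proof above, can be sanity-checked numerically in dimension 1 for the prototype $V(x)=1+x^2$. The finite-difference helper below is our own illustration and is not part of the paper.

```python
def second_derivative_fd(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def rhs_identity_15(x, p):
    """Right-hand side of (15) in dimension 1 for V(x) = 1 + x^2:
    p * V^{p-1} * (V'' + (p - 1) * (V')^2 / V), with V' = 2x and V'' = 2."""
    V = 1.0 + x * x
    return p * V ** (p - 1) * (2.0 + (p - 1) * (2.0 * x) ** 2 / V)

# compare (V^p)'' computed by finite differences with the closed form of (15)
for p in (0.3, 0.5, 1.0, 1.7):
    for x in (-1.2, 0.0, 0.4, 2.5):
        fd = second_derivative_fd(lambda t: (1.0 + t * t) ** p, x)
        assert abs(fd - rhs_identity_15(x, p)) < 1e-4
```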
Proposition 2. Let $p > 0$. Assume $(\mathbf{H}^1_p)$, $(\mathbf{H}^2_q)$, $(\mathbf{R}_{a,p,q})$ and $(\mathbf{S}_{a,p,q})$. Then, if $\mathbb{E}\{|U_1|^{2(p\vee 1)}\} < +\infty$, there exist $n_0 \in \mathbb{N}$, $\alpha' > 0$ and $\beta' \in \mathbb{R}$ such that for every $n \ge n_0$,
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\bar X_n)\big(\beta' - \alpha'V^a(\bar X_n)\big). \tag{16} \]

For this proof, one has to treat the cases $p < 1$ and $p \ge 1$ separately. We detail the case $p < 1$; when $p \ge 1$, we only sketch the argument, which is close to Lemma 3 of [12]. Note also that in order to show (16), it suffices to show that there exist $\tilde\beta$, $\tilde\alpha$ and a function $h$ such that
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\bar X_n)\big(\tilde\beta - \tilde\alpha V^a(\bar X_n)\big) + h(\gamma_{n+1})V^{a+p-1}(\bar X_n), \]
where $h(\gamma_{n+1}) \le \tilde\alpha/2$ for $n$ sufficiently large. Setting $\alpha' = \tilde\alpha/2$ and $\beta' = \tilde\beta$, we then obtain (16).

Proof of Proposition 2 when $p < 1$. First, we consider cases 1 and 2 and decompose $\Delta\bar X_{n+1} = \bar X_{n+1} - \bar X_n$ as follows:
\[ \bar X_{n+1,k} = \bar X_n + \sum_{i=1}^{k}\Delta\bar X_{n+1,i}, \qquad k = 1,2,3, \tag{17} \]
with
\[ \Delta\bar X_{n+1,1} = \gamma_{n+1}b(\bar X_n), \qquad \Delta\bar X_{n+1,2} = \sqrt{\gamma_{n+1}}\,\sigma(\bar X_n)U_{n+1}, \qquad \Delta\bar X_{n+1,3} = \kappa(\bar X_n)\bar Z_{n+1}. \]
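In simulation terms, the decomposition (17) is simply the three sub-steps of one update of the scheme. A minimal one-dimensional sketch of a single step (the concrete coefficients and sampled increments below are our own illustrative assumptions, not the paper's):

```python
import numpy as np

def euler_step(x, gamma, b, sigma, kappa, u, z):
    """One step of the jump Euler scheme, split as in (17):
    drift part, Gaussian-like part, then jump part."""
    x1 = x + gamma * b(x)                    # Delta X_{n+1,1}
    x2 = x1 + np.sqrt(gamma) * sigma(x) * u  # Delta X_{n+1,2}
    x3 = x2 + kappa(x) * z                   # Delta X_{n+1,3}
    return x3

# toy mean-reverting dynamics
x_next = euler_step(x=1.0, gamma=0.1,
                    b=lambda x: -x,        # drift
                    sigma=lambda x: 1.0,   # diffusion coefficient
                    kappa=lambda x: 0.5,   # jump coefficient
                    u=0.3,                 # normalized innovation U_{n+1}
                    z=-0.2)                # simulated jump increment Z_{n+1}
```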
According to this decomposition, we establish the three following steps:

i) There exists $n_1 \in \mathbb{N}$ such that for every $n \ge n_1$,
\[ \mathbb{E}\{V^p(\bar X_{n+1,1}) - V^p(\bar X_n)/\mathcal{F}_n\} \le p\gamma_{n+1}V^{p-1}(\bar X_n)(\nabla V\,|\,b)(\bar X_n) + C\gamma_{n+1}^2V^{a+p-1}(\bar X_n). \tag{18} \]

ii) For every $\varepsilon > 0$, there exists $n_{2,\varepsilon} \in \mathbb{N}$ such that for every $n \ge n_{2,\varepsilon}$,
\[ \mathbb{E}\{V^p(\bar X_{n+1,2}) - V^p(\bar X_{n+1,1})/\mathcal{F}_n\} \le \varepsilon\gamma_{n+1}V^{a+p-1}(\bar X_n) + C_\varepsilon\gamma_{n+1}. \tag{19} \]

iii) For every $\varepsilon > 0$, there exists $n_{3,\varepsilon}$ such that for every $n \ge n_{3,\varepsilon}$,
\[ \mathbb{E}\{V^p(\bar X_{n+1,3}) - V^p(\bar X_{n+1,2})/\mathcal{F}_n\} \le \begin{cases} p h_p\gamma_{n+1}\|\kappa(\bar X_n)\|^{2p}\int\|y\|^{2p}\pi(dy) + \varepsilon\gamma_{n+1}V^{a+p-1}(\bar X_n) + C\gamma_{n+1}^2 & \text{if } q \le p, \\ \varepsilon\gamma_{n+1}V^{a+p-1}(\bar X_n) + C\gamma_{n+1}^2 & \text{if } q > p. \end{cases} \tag{20} \]

i) We derive from the Taylor formula that
\[ \mathbb{E}\{V^p(\bar X_{n+1,1}) - V^p(\bar X_n)/\mathcal{F}_n\} = p\gamma_{n+1}V^{p-1}(\bar X_n)(\nabla V\,|\,b)(\bar X_n) + \tfrac12\,\mathbb{E}\{D^2(V^p)(\xi^1_{n+1})(\Delta\bar X_{n+1,1})^{\otimes 2}/\mathcal{F}_n\}, \]
where $\xi^1_{n+1} \in [\bar X_n, \bar X_n + \gamma_{n+1}b(\bar X_n)]$. Set $x = \bar X_n$ and $y = \gamma_{n+1}b(\bar X_n)$. As $\gamma_n \to 0$ and $|b| \le C\sqrt V$, there exists $n_1 \in \mathbb{N}$ such that for $n \ge n_1$, $\|y\| \le \frac{\sqrt{V(x)}}{2[\sqrt V]_1}$ a.s. Then i) follows from the second inequality of Lemma 2 b), applied with $\varepsilon = 1/2$.
ii) $U_{n+1}$ is independent of $\mathcal{F}_n$ and $\mathbb{E}\{U_{n+1}\} = 0$. Then, by the Taylor formula, we obtain
\[ \mathbb{E}\{V^p(\bar X_{n+1,2}) - V^p(\bar X_{n+1,1})/\mathcal{F}_n\} = \tfrac12\,\mathbb{E}\{D^2(V^p)(\xi^2_{n+1})(\Delta\bar X_{n+1,2})^{\otimes 2}/\mathcal{F}_n\} \]
with $\xi^2_{n+1} \in [\bar X_{n+1,1}, \bar X_{n+1,2}]$. Set $y = \sqrt{\gamma_{n+1}}\,\sigma(x)U_{n+1}$. As $\|\sigma(x)\| \le C_\sigma\sqrt{V(x)}$, the conditions of the second inequality of Lemma 2 are satisfied if $\|U_{n+1}\| \le \rho_{n+1} := \big(2C_\sigma[\sqrt V]_1\sqrt{\gamma_{n+1}}\big)^{-1}$. Therefore,
\[ \mathbb{E}\{D^2(V^p)(\xi^2_{n+1})(\Delta\bar X_{n+1,2})^{\otimes 2}\mathbf 1_{\{\|U_{n+1}\|\le\rho_{n+1}\}}/\mathcal{F}_n\} \le C\gamma_{n+1}V^{p-1}(\bar X_n)\,\mathrm{Tr}(\sigma\sigma^*)(\bar X_n) \le C\gamma_{n+1}V^{a+2(p-1)}(\bar X_n), \]
according to assumption $(\mathbf{S}_{a,p,q})$. By the first inequality of Lemma 2, we also have
\[ \mathbb{E}\{D^2(V^p)(\xi^2_{n+1})(\Delta\bar X_{n+1,2})^{\otimes 2}\mathbf 1_{\{\|U_{n+1}\|>\rho_{n+1}\}}/\mathcal{F}_n\} \le C\gamma_{n+1}V^{p+a-1}(\bar X_n)\,\mathbb{E}\{\|U_{n+1}\|^2\mathbf 1_{\{\|U_{n+1}\|>\rho_{n+1}\}}\}. \]
Now, consider $\varepsilon > 0$. On the one hand, there exists $C_\varepsilon > 0$ such that $V^{a+2(p-1)} \le \varepsilon V^{a+p-1} + C_\varepsilon$; on the other hand, there exists $n_{2,\varepsilon} \in \mathbb{N}$ such that for every $n \ge n_{2,\varepsilon}$, $\mathbb{E}\{\|U_{n+1}\|^2\mathbf 1_{\{\|U_{n+1}\|>\rho_{n+1}\}}\} \le \varepsilon$, because $\rho_n \to +\infty$ and $U_1 \in L^2$. ii) follows.

iii) Let us first consider case 1, where $p \in (1/2,1]$ and $q \in (0,1]$. From the Taylor formula, there exist $\xi^3_{n+1} \in [\bar X_{n+1,2},\,\bar X_{n+1,2}+\kappa(\bar X_n)\bar Y_{n+1}]$ and $\xi^4_{n+1} \in [\bar X_{n+1,2}+\kappa(\bar X_n)\bar Y_{n+1},\,\bar X_{n+1}]$ such that
\[ V^p(\bar X_{n+1}) - V^p(\bar X_{n+1,2}) = p\big(W(\bar X_{n+1,2})\,|\,\kappa(\bar X_n)\bar Y_{n+1}\big) + p\big(W(\xi^3_{n+1}) - W(\bar X_{n+1,2})\,|\,\kappa(\bar X_n)\bar Y_{n+1}\big) \]
\[ + p\big(W(\bar X_{n+1,2}+\kappa(\bar X_n)\bar Y_{n+1})\,|\,\kappa(\bar X_n)\bar N_{n+1}\big) + p\big(W(\xi^4_{n+1}) - W(\bar X_{n+1,2}+\kappa(\bar X_n)\bar Y_{n+1})\,|\,\kappa(\bar X_n)\bar N_{n+1}\big), \]
with $W = V^{p-1}\nabla V$. $\bar Y_{n+1}$, $\bar N_{n+1}$ and $\sigma(\mathcal{F}_n, U_{n+1})$ are independent and, in this case, $\bar Y_{n+1}$ and $\bar N_{n+1}$ have mean 0. Therefore, we derive from the first assertion of Lemma 2 that
\[ \mathbb{E}\{V^p(\bar X_{n+1}) - V^p(\bar X_{n+1,2})/\mathcal{F}_n\} \le p[W]_{2(p\vee q)-1}\|\kappa(\bar X_n)\|^{2(p\vee q)}\mathbb{E}\{\|\bar Y_{n+1}\|^{2(p\vee q)}\} + p h_p\|\kappa(\bar X_n)\|^{2p}\mathbb{E}\{\|\bar N_{n+1}\|^{2p}\}, \]
with $h_p = [W]_{2p-1}$. By Lemma 1 (inequalities (i).2 and (ii).2) and assumption $(\mathbf{S}_{a,p,q})$, we obtain
\[ \mathbb{E}\{V^p(\bar X_{n+1}) - V^p(\bar X_{n+1,2})/\mathcal{F}_n\} \le p h_p\gamma_{n+1}\|\kappa(\bar X_n)\|^{2p}\int_{\|y\|>c}\|y\|^{2p}\pi(dy) + C_{p,q}\Big(\int_{\|y\|\le c}\|y\|^{2(p\vee q)}\pi(dy) + \delta_c(\gamma_{n+1})\gamma_{n+1}^{2p-1}\Big)\gamma_{n+1}V^{a+p-1}(\bar X_n). \]
By construction, this inequality is valid for every $c > 0$. As $\int_{\|y\|\le c}\|y\|^{2(p\vee q)}\pi(dy) \to 0$ when $c \to 0$ under $(\mathbf{H}^2_q)$, and $\gamma_n \to 0$, there exist $c_\varepsilon > 0$ and $n_{3,\varepsilon} \in \mathbb{N}$ such that for every $n \ge n_{3,\varepsilon}$,
\[ C_{p,q}\Big(\int_{\|y\|\le c_\varepsilon}\|y\|^{2(p\vee q)}\pi(dy) + \delta_{c_\varepsilon}(\gamma_{n+1})\gamma_{n+1}^{2p-1}\Big) \le \varepsilon. \]
Then iii) follows when $q \le p$. If $q > p$, we have $\|\kappa\|^{2p} = o(V^{a+p-1})$ as $\|x\| \to +\infty$. Therefore, for every $\varepsilon > 0$, $\|\kappa\|^{2p} \le \varepsilon V^{a+p-1} + C_\varepsilon$ and the result follows as well.
We now prove iii) in case 2 (where $p \le 1/2$ and $q \le 1/2$). From Lemma 2, $V^p$ is $\alpha$-Hölder for every $\alpha \in [2p,1]$. Therefore,
\[ \mathbb{E}\{V^p(\bar X_{n+1}) - V^p(\bar X_{n+1,2})/\mathcal{F}_n\} \le [V^p]_{2(p\vee q)}\|\kappa(\bar X_n)\|^{2(p\vee q)}\mathbb{E}\{\|\bar Y_{n+1}\|^{2(p\vee q)}\} + p h_p\|\kappa(\bar X_n)\|^{2p}\mathbb{E}\{\|\bar N_{n+1}\|^{2p}\}, \]
where $h_p = [V^p]_{2p}/p$. Using inequalities (i).1 and (ii).1 of Lemma 1, iii) follows along the same lines as in case 1.

Summing (18), (19) and (20), we obtain: for every $\varepsilon > 0$, there exists $n_\varepsilon \in \mathbb{N}$ such that for every $n \ge n_\varepsilon$,
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\nabla V\,|\,b)(\bar X_n) + \mathbf 1_{q\le p}\,p h_p\gamma_{n+1}\|\kappa(\bar X_n)\|^{2p}\int\|y\|^{2p}\pi(dy) + \gamma_{n+1}\big(\varepsilon V^{p+a-1}(\bar X_n) + C_\varepsilon\big). \]
We derive from $(\mathbf{R}_{a,p,q})$ that
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\bar X_n)\big(\beta - \alpha V^a(\bar X_n)\big) + \gamma_{n+1}\big(\varepsilon V^{p+a-1}(\bar X_n) + C_\varepsilon\big), \]
and the remark at the beginning of the proof allows us to conclude.

The proof is slightly different in case 3 because the decomposition introduced at the beginning of the proof now depends on the truncation. That is why we set here, for $0 < c' \le c$,
\[ \Delta\bar X^{c'}_{n+1,1} = \gamma_{n+1}b^{c'}(\bar X_n), \qquad \Delta\bar X^{c'}_{n+1,3} = \kappa(\bar X_n)\Big(\bar Z_{n+1} + \gamma_{n+1}\int_{c'<\|y\|\le c}y\,\pi(dy)\Big), \]
and $\Delta\bar X^{c'}_{n+1,2} = \Delta\bar X_{n+1,2}$. We still have $\Delta\bar X_{n+1} = \sum_{i=1}^3\Delta\bar X^{c'}_{n+1,i}$. Using Lemmas 1 and 2, one checks that for every $\varepsilon > 0$, it is possible to choose $c'_\varepsilon$ sufficiently small so that i), ii) and iii) are valid with the associated decomposition, and finally we obtain
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\nabla V\,|\,b^{c'_\varepsilon})(\bar X_n) + \gamma_{n+1}\big(\varepsilon V^{p+a-1}(\bar X_n) + C_\varepsilon\big) \]
\[ \le V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\nabla V\,|\,b^{c})(\bar X_n) + \gamma_{n+1}\varepsilon V^{p+a-1}(\bar X_n) + \gamma_{n+1}C_\varepsilon\big(1 + \|\kappa(\bar X_n)\|\big), \]
because $b^{c'_\varepsilon} = b^c - \kappa(x)\int_{c'_\varepsilon<\|y\|\le c}y\,\pi(dy)$. Now, under $(\mathbf{S}_{a,p,q})$, we have $\|\kappa(x)\| \le CV^{\frac{p+a-1}{2q}}(x) = o(V^{p+a-1}(x))$ as $\|x\|\to+\infty$, because $q > 1/2$. Using $(\mathbf{R}_{a,p,q})$ and the remark at the beginning of the proof, the result follows in case 3 as well.

Proof of Proposition 2 when $p \ge 1$. Thanks to the second-order Taylor formula, we derive from the independence and centering properties that
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} = V^p(\bar X_n) + p\gamma_{n+1}V^{p-1}(\bar X_n)(\nabla V\,|\,b)(\bar X_n) + \tfrac12\,\mathbb{E}\{D^2(V^p)(\xi_{n+1})\Delta\bar X_{n+1}^{\otimes 2}/\mathcal{F}_n\}, \tag{21} \]
where $\xi_{n+1} \in [\bar X_n, \bar X_{n+1}]$. From Lemma 2, we have
\[ \big|D^2(V^p)(\xi_{n+1})\Delta\bar X_{n+1}^{\otimes 2}\big| \le p\lambda_p 2^{(2(p-1)\vee 1)-1}\Big(V^{p-1}(\bar X_n)\|\Delta\bar X_{n+1}\|^2 + [\sqrt V]_1^{2(p-1)}\|\Delta\bar X_{n+1}\|^{2p}\mathbf 1_{p>1}\Big). \]
Then, using Lemma 1 (iii), independence properties and assumption $(\mathbf{S}_{a,p,q})$, we show that
\[ \mathbb{E}\{V^{p-1}(\bar X_n)\|\Delta\bar X_{n+1}\|^2/\mathcal{F}_n\} \le \gamma_{n+1}V^{p-1}(\bar X_n)\Big(\mathrm{Tr}(\sigma\sigma^*)(\bar X_n) + \|\kappa(\bar X_n)\|^2\int\|y\|^2\pi(dy)\Big) + C\gamma_{n+1}^2V^{a+p-1}(\bar X_n). \tag{22} \]
Next, by the inequality
\[ \forall u,v \ge 0,\ \forall \alpha > 1,\quad (u+v)^\alpha \le u^\alpha + \alpha 2^{\alpha-1}\big(u^{\alpha-1}v + v^\alpha\big), \]
we deduce
\[ \|\Delta\bar X_{n+1}\|^{2p} \le \|\kappa(\bar X_n)\|^{2p}\|\bar Z_{n+1}\|^{2p} + C_p\Big(\|\kappa(\bar X_n)\|^{2p-1}\|\bar Z_{n+1}\|^{2p-1}\sqrt{\gamma_{n+1}}\,V^{\frac a2}(\bar X_n)\big(1+\|U_{n+1}\|\big) + \gamma_{n+1}^pV^{ap}(\bar X_n)\big(1+\|U_{n+1}\|^{2p}\big)\Big), \]
and we derive from Lemma 1 (iii) and assumption $(\mathbf{S}_{a,p,q})$ that
\[ \mathbb{E}\{\|\Delta\bar X_{n+1}\|^{2p}/\mathcal{F}_n\} \le \gamma_{n+1}\|\kappa(\bar X_n)\|^{2p}\int\|y\|^{2p}\pi(dy) + \gamma_{n+1}\big(\varepsilon + C_\varepsilon\gamma_{n+1}^{\frac12\wedge(p-1)}\big)V^{ap}(\bar X_n). \tag{23} \]
Summing (22) and (23), we obtain a control of $\mathbb{E}\{D^2(V^p)(\xi_{n+1})\Delta\bar X_{n+1}^{\otimes 2}/\mathcal{F}_n\}$, and we deduce from (21) the following inequality:
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + C\gamma_{n+1}\big(\varepsilon + C_\varepsilon\gamma_{n+1}^{(p-\frac12)\wedge 1}\big)V^{a+p-1}(\bar X_n) \]
\[ + p\gamma_{n+1}V^{p-1}(\bar X_n)\Big((\nabla V\,|\,b)(\bar X_n) + c_p\lambda_p\Big(\mathrm{Tr}(\sigma\sigma^*)(\bar X_n) + a_2\|\kappa(\bar X_n)\|^2 + a_{2p}\frac{\|\kappa(\bar X_n)\|^{2p}}{V^{p-1}(\bar X_n)}\Big)\Big) \]
(with $a_r = \int\|y\|^r\pi(dy)$). Recognizing assumption $(\mathbf{R}_{a,p,q})$, we choose $\varepsilon$ sufficiently small in order to use the remark at the beginning of the proof and conclude. $\square$
4.3 Some technical consequences of Proposition 2
At this stage, we want to derive $L^p$-control results for $(V(\bar X_n))_{n\in\mathbb{N}}$ from inequality (16). When $a = 1$, it is easy to show that $(V(\bar X_n))_{n\in\mathbb{N}}$ is bounded in $L^p$, but when $a < 1$ we are not able to prove the same result. However, Lamberton and Pagès ([12]) showed that this gap can be compensated by a weaker result (see Lemma 4 of [12]). We take up these ideas again, with a slightly different approach showing that a supermartingale property can be derived from (16).

Lemma 3. Let $\Phi, \Psi : \mathbb{R}^d \to \mathbb{R}_+$ be such that $\Phi(\bar X_n), \Psi(\bar X_n) \in L^1(\mathbb{P})$ for every $n \in \mathbb{N}$, and assume that there exist $n_0 \in \mathbb{N}$, $\beta > 0$ and $\alpha > 0$ such that for every $n \ge n_0$,
\[ \mathbb{E}\{\Phi(\bar X_{n+1})/\mathcal{F}_n\} \le \Phi(\bar X_n) + \gamma_{n+1}\big(\beta - \alpha\Psi(\bar X_n)\big). \]
Consider a nonincreasing sequence $(\theta_n)_{n\in\mathbb{N}}$ of nonnegative numbers with $\sum_{n\ge1}\theta_n\gamma_n < \infty$. Then, $(S_n)_{n\ge n_0}$ defined by
\[ S_n = \theta_n\Phi(\bar X_n) + \alpha\sum_{k=1}^{n}\theta_k\gamma_k\Psi(\bar X_{k-1}) + \beta\sum_{k>n}\theta_k\gamma_k \]
is a nonnegative supermartingale, and therefore
\[ \sum_{n\ge1}\theta_n\gamma_n\mathbb{E}\{\Psi(\bar X_{n-1})\} < +\infty \qquad\text{and}\qquad \mathbb{E}\{\Phi(\bar X_n)\} = O\Big(\frac1{\theta_n}\Big)\ \text{as } n \to +\infty. \]
Proof. Denote by $(M_n)_{n\ge1}$ the martingale increments defined by $M_n = \Phi(\bar X_n) - \mathbb{E}\{\Phi(\bar X_n)/\mathcal{F}_{n-1}\}$. We have, for $n \ge n_0$,
\[ \theta_{n+1}\Phi(\bar X_{n+1}) \le \theta_{n+1}\big(\Phi(\bar X_{n+1}) - \mathbb{E}\{\Phi(\bar X_{n+1})/\mathcal{F}_n\}\big) + \theta_{n+1}\mathbb{E}\{\Phi(\bar X_{n+1})/\mathcal{F}_n\} \]
\[ \le \theta_{n+1}M_{n+1} + \theta_{n+1}\big(\Phi(\bar X_n) + \gamma_{n+1}(\beta - \alpha\Psi(\bar X_n))\big). \]
As $(\theta_n)_{n\in\mathbb{N}}$ is nonincreasing, we can write
\[ \theta_{n+1}\Phi(\bar X_{n+1}) + \alpha\sum_{k=1}^{n+1}\theta_k\gamma_k\Psi(\bar X_{k-1}) + \beta\sum_{k>n+1}\theta_k\gamma_k \le \theta_{n+1}M_{n+1} + \theta_n\Phi(\bar X_n) + \alpha\sum_{k=1}^{n}\theta_k\gamma_k\Psi(\bar X_{k-1}) + \beta\sum_{k>n}\theta_k\gamma_k, \]
so that
\[ \mathbb{E}\{S_{n+1}/\mathcal{F}_n\} \le S_n \qquad \forall n \ge n_0. \]
On the other hand, $S_{n_0} \in L^1$; therefore $(S_n)_{n\ge n_0}$ is a nonnegative supermartingale and the lemma follows. $\square$
Corollary 1. Let $a \in (0,1]$, $p > 0$ and $q \in [0,1]$. Assume $(\mathbf{H}^1_p)$, $(\mathbf{H}^2_q)$, $(\mathbf{R}_{a,p,q})$, $(\mathbf{S}_{a,p,q})$ and $\mathbb{E}\{|U_1|^{2(p\vee1)}\} < +\infty$. Then:

(i) For every nonincreasing sequence $(\theta_n)_{n\in\mathbb{N}}$ of nonnegative numbers such that $\sum_{n\ge1}\theta_n\gamma_n < \infty$,
\[ \sum_{n\ge1}\theta_n\gamma_n\mathbb{E}\{V^{p+a-1}(\bar X_{n-1})\} < \infty \qquad\text{and}\qquad \mathbb{E}\{V^p(\bar X_n)\} = O\Big(\frac1{\theta_n}\Big)\ \text{as } n\to+\infty. \]

(ii) If $(\frac{\eta_n}{\gamma_n})_{n\in\mathbb{N}}$ is nonincreasing,
\[ \sum_{n\ge1}\Big(\frac{\eta_n}{H_n\gamma_n}\Big)^2\mathbb{E}\Big\{\big|V^{\frac p2}(\bar X_n) - V^{\frac p2}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^2\Big\} < +\infty. \tag{24} \]
Moreover, if conditions (7) and (10) are satisfied for $s \in (1,2)$,
\[ \sum_{n\ge1}\Big(\frac{\eta_n}{H_n\gamma_n}\Big)^{f_{a,p}(s)}\mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^{f_{a,p}(s)}\Big\} < +\infty. \tag{25} \]
Remark 11. The additional assumptions on steps, weights and $s$ appear in this corollary because we prove it using the Chow theorem; $f_{a,p}(s)$ plays the role of the Chow exponent. We then need condition (7) because $f_{a,p}(s)$ must be strictly greater than 1.

Proof. (i) It suffices to check the assumptions of Lemma 3 with $\Phi = V^p$ and $\Psi = V^{a+p-1}$. On the one hand, we obtain by induction that $\Phi(\bar X_n), \Psi(\bar X_n) \in L^1(\mathbb{P})$, because $b$, $\sigma$ and $\kappa$ are sublinear, $\mathbb{E}\{|U_{n+1}|^{2p}\} < +\infty$ and $\mathbb{E}\{|\bar Z_{n+1}|^{2p}\} < +\infty$. On the other hand, we have by Proposition 2 that
\[ \mathbb{E}\{V^p(\bar X_{n+1})/\mathcal{F}_n\} \le V^p(\bar X_n) + \gamma_{n+1}V^{p-1}(\bar X_n)\big(\beta' - \alpha'V^a(\bar X_n)\big) \qquad \forall n \ge n_0 \ge 1. \]
Now, we may assume $\beta' \ge 0$. As $a > 0$ and $V(x)\to+\infty$ when $\|x\|\to\infty$, there exists $C \in \mathbb{R}$ such that $V^{p-1} \le \frac{\alpha'}{2\beta'}V^{p+a-1} + C$. We deduce that
\[ \mathbb{E}\{V^p(\bar X_n)/\mathcal{F}_{n-1}\} \le V^p(\bar X_{n-1}) + \gamma_n\Big(\beta'C - \frac{\alpha'}2V^{p+a-1}(\bar X_{n-1})\Big) \tag{26} \]
and conclude the first part by Lemma 3.

(ii) Let us begin the second part with two remarks. On the one hand, we notice that (24) is a particular case of (25), because $f_{a,p}(2) = 2$ and (10) is automatically satisfied in this case. Indeed, $(\frac1{\gamma_n}(\frac{\eta_n}{H_n\sqrt{\gamma_n}})^2)_{n}$ is nonincreasing because $(\eta_n/\gamma_n)_{n\in\mathbb{N}}$ is, and
\[ \sum_{n\ge1}\Big(\frac{\eta_n}{H_n\sqrt{\gamma_n}}\Big)^2 \le \frac{\eta_1}{\gamma_1}\sum_{n\ge1}\frac{\eta_n}{H_n^2} \le C\int_1^{+\infty}\frac{dt}{t^2} < \infty. \]
Hence we only prove (25). On the other hand, we observe that it suffices to show that
\[ \mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^{f_{a,p}(s)}\Big\} \le C\gamma_n^{\frac{f_{a,p}(s)}2}\mathbb{E}\{V^{p+a-1}(\bar X_{n-1})\}. \tag{27} \]
Indeed, if (27) is satisfied,
\[ \sum_{n\ge1}\Big(\frac{\eta_n}{H_n\gamma_n}\Big)^{f_{a,p}(s)}\mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^{f_{a,p}(s)}\Big\} \le C\sum_{n\ge1}\Big(\frac{\eta_n}{H_n\sqrt{\gamma_n}}\Big)^{f_{a,p}(s)}\mathbb{E}\{V^{p+a-1}(\bar X_{n-1})\}, \tag{28} \]
and the right-hand sum is finite because assumption (10) corresponds to the assumptions associated with (i). It therefore remains to prove (27).

Assume first that $p/s \le 1/2$. In this case, $f_{a,p}(s) = s$ and $V^{\frac ps}$ is $(2p/s)$-Hölder (see Lemma 2). If $q/s \le 1/2$, $V^{\frac ps}$ is also $(2q/s)$-Hölder. We have
\[ \mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^s/\mathcal{F}_{n-1}\Big\} \le C\,\mathbb{E}\Big\{\big|\big(V^{\frac ps-1}\nabla V\big)(\xi_n)\,\sqrt{\gamma_n}\,\sigma(\bar X_{n-1})U_n\big|^s/\mathcal{F}_{n-1}\Big\} \]
\[ + C\,\mathbb{E}\{\|\kappa(\bar X_{n-1})\bar N_n\|^{2p}/\mathcal{F}_{n-1}\} + C\,\mathbb{E}\{\|\kappa(\bar X_{n-1})\bar Y_n\|^{2q}/\mathcal{F}_{n-1}\}, \]
where $\xi_n \in [\bar X_{n,1}, \bar X_{n,2}]$. According to Lemma 1, we have under $(\mathbf{H}^1_p)$ and $(\mathbf{H}^2_q)$,
\[ \mathbb{E}\{\|\bar N_n\|^{2p}\} + \mathbb{E}\{\|\bar Y_n\|^{2q}\} = O(\gamma_n). \]
Then, thanks to assumption $(\mathbf{S}_{a,p,q})$ and using the fact that $\nabla(V^{\frac ps})$ is bounded, we infer
\[ \mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^s/\mathcal{F}_{n-1}\Big\} \le C\gamma_n^{\frac s2}V^{a+p-1}(\bar X_{n-1}) \tag{29} \]
and recognize (27). If $q/s > 1/2$, we have
\[ \mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^s/\mathcal{F}_{n-1}\Big\} \le C\,\mathbb{E}\{\|\kappa(\bar X_{n-1})\bar N_n\|^{2p}/\mathcal{F}_{n-1}\} \]
\[ + C\,\mathbb{E}\Big\{\big|\big(V^{\frac ps-1}\nabla V\big)(\xi_n)\big|^s\,\big\|\sqrt{\gamma_n}\,\sigma(\bar X_{n-1})U_n + \kappa(\bar X_{n-1})\bar Y_n\big\|^s/\mathcal{F}_{n-1}\Big\}, \]
where $\xi_n \in [\bar X_{n,1},\,\bar X_{n,1}+\sqrt{\gamma_n}\,\sigma(\bar X_{n-1})U_n+\kappa(\bar X_{n-1})\bar Y_n]$. Now, as $\mathbb{E}\{\|\bar Y_n\|^2\} = O(\gamma_n)$, we have by the Jensen inequality
\[ \mathbb{E}\{\|\kappa(\bar X_{n-1})\bar Y_n\|^s/\mathcal{F}_{n-1}\} \le C\gamma_n^{\frac s2}\|\kappa(\bar X_{n-1})\|^s \le C\gamma_n^{\frac s2}V^{a+p-1}(\bar X_{n-1}), \]
because $s \le 2q$. (27) follows.

Assume now that $p/s > 1/2$. Applying the inequality
\[ \forall u,v \ge 0,\ \forall \alpha \ge 1,\quad |u^\alpha - v^\alpha| \le C_\alpha\big(|u-v|\,u^{\alpha-1} + |u-v|^\alpha\big) \tag{30} \]
with $u = \sqrt V(\bar X_n)$, $v = \sqrt V\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)$ and $\alpha = 2p/s$, we obtain
\[ \big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big| \le C\big|\sqrt V(\bar X_n) - \sqrt V\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|\,V^{\frac ps-\frac12}(\bar X_{n-1}) + C\big|\sqrt V(\bar X_n) - \sqrt V\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big|^{\frac{2p}s}. \]
We deduce from $(\mathbf{S}_{a,p,q})$ that
\[ \big\|\bar X_n - \big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big\| \le \begin{cases} CV^{\frac{a+p-1}{2p}}(\bar X_{n-1})\big(\sqrt{\gamma_n}\|U_n\| + \|\bar Z_n\|\big) & \text{if } p < 1, \\ CV^{\frac a2}(\bar X_{n-1})\big(\sqrt{\gamma_n}\|U_n\| + \|\bar Z_n\|\big) & \text{if } p \ge 1. \end{cases} \]
As $\sqrt V$ is Lipschitz, one finds
\[ \big|V^{\frac ps}(\bar X_n) - V^{\frac ps}\big(\bar X_{n-1}+\gamma_nb(\bar X_{n-1})\big)\big| \le CV^r(\bar X_{n-1})\Big(\sqrt{\gamma_n}\|U_n\| + \|\bar Z_n\| + \big(\sqrt{\gamma_n}\|U_n\| + \|\bar Z_n\|\big)^{\frac{2p}s}\Big) \]
where
\[ r = \begin{cases} \big(\frac ps + \frac{a-1}{2p}\big) \vee \frac{a+p-1}s & \text{if } p < 1, \\ \big(\frac ps + \frac{a-1}2\big) \vee \frac{ap}s & \text{if } p \ge 1. \end{cases} \]
On the one hand, we derive from Lemma 1 that $\mathbb{E}\{\|\bar Z_n\|^\alpha\} = O(\gamma_n^{\frac\alpha2\wedge1})$. Therefore, as $2p/s \ge 1$, we have
\[ \mathbb{E}\Big\{\Big(\sqrt{\gamma_n}\|U_n\| + \|\bar Z_n\| + \big(\sqrt{\gamma_n}\|U_n\| + \|\bar Z_n\|\big)^{\frac{2p}s}\Big)^{f_{a,p}(s)}\Big\} = O\big(\gamma_n^{\frac{f_{a,p}(s)}2}\big). \]
On the other hand, we notice that $r f_{a,p}(s) \le a+p-1$ ($f_{a,p}$ has been constructed in this way). Therefore, inequality (27) follows. $\square$
4.4 Proof of Proposition 1
Let $s \in (1,2]$. By a convexity argument (see Lemma 3 of [12]), one shows that if Proposition 2 is satisfied for $p$, then the same result is satisfied for every $\tilde p \in (0,p]$. In particular, there exist $n_0 \in \mathbb{N}$, $\tilde\alpha$, $\tilde\beta$ such that for every $k \ge n_0$,
\[ \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\} \le V^{\frac ps}(\bar X_{k-1}) + \gamma_kV^{\frac ps-1}(\bar X_{k-1})\big(\tilde\beta - \tilde\alpha V^a(\bar X_{k-1})\big). \]
As in the proof of Corollary 1 (i), we derive from the above inequality that there exists $\bar\beta \in \mathbb{R}$ such that for every $k \ge n_0$,
\[ V^{\frac ps+a-1}(\bar X_{k-1}) \le \frac{V^{\frac ps}(\bar X_{k-1}) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}}{\bar\alpha\gamma_k} + \bar\beta. \]
Then, in order to prove Proposition 1, we only have to show that
\[ \sup_{n\ge n_0+1}\frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_{k-1}) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}\Big) < \infty \quad\text{a.s.} \]
We decompose this sum as follows:
\[ \frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_{k-1}) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}\Big) = \frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_{k-1}) - V^{\frac ps}(\bar X_k)\Big) \tag{31} \]
\[ + \frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_k) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}\Big). \tag{32} \]
By the Abel transform,
\[ \frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_{k-1}) - V^{\frac ps}(\bar X_k)\Big) = \frac1{H_n}\Big(\frac{\eta_{n_0}}{\gamma_{n_0}}V^{\frac ps}(\bar X_{n_0}) - \frac{\eta_n}{\gamma_n}V^{\frac ps}(\bar X_n)\Big) + \frac1{H_n}\sum_{k=n_0+1}^{n}\Big(\frac{\eta_k}{\gamma_k} - \frac{\eta_{k-1}}{\gamma_{k-1}}\Big)V^{\frac ps}(\bar X_{k-1}) \le \frac1{H_n}\frac{\eta_{n_0}}{\gamma_{n_0}}V^{\frac ps}(\bar X_{n_0}), \]
because $(\frac{\eta_n}{\gamma_n})_{n\in\mathbb{N}}$ is nonincreasing. Then, as $H_n \to +\infty$,
\[ \sup_{n\ge n_0}\frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_{k-1}) - V^{\frac ps}(\bar X_k)\Big) < +\infty. \]
For (32), one considers the martingale $(M_n)_{n\in\mathbb{N}}$ defined by
\[ M_n = \sum_{k=1}^{n}\frac{\eta_k}{H_k\gamma_k}\Big(V^{\frac ps}(\bar X_k) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}\Big). \]
By the inequality $|u+v|^\alpha \le 2^{\alpha-1}(|u|^\alpha + |v|^\alpha)$ and the Jensen inequality,
\[ \mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_k) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}\big|^{f_{a,p}(s)}\Big\} \le C\,\mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_k) - V^{\frac ps}\big(\bar X_{k-1}+\gamma_kb(\bar X_{k-1})\big)\big|^{f_{a,p}(s)}\Big\} \]
\[ + C\,\mathbb{E}\Big\{\big|\mathbb{E}\big\{V^{\frac ps}\big(\bar X_{k-1}+\gamma_kb(\bar X_{k-1})\big) - V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\big\}\big|^{f_{a,p}(s)}\Big\} \le C\,\mathbb{E}\Big\{\big|V^{\frac ps}(\bar X_k) - V^{\frac ps}\big(\bar X_{k-1}+\gamma_kb(\bar X_{k-1})\big)\big|^{f_{a,p}(s)}\Big\}. \]
By Corollary 1, we deduce that $(M_n)_{n\in\mathbb{N}}$ is bounded in $L^{f_{a,p}(s)}$ (with $f_{a,p}(s) > 1$ by condition (7)). Therefore, thanks to the Chow theorem, $M_n \to M_\infty$ a.s., where $M_\infty$ is finite a.s. We deduce from the Kronecker lemma that
\[ \frac1{H_n}\sum_{k=n_0+1}^{n}\frac{\eta_k}{\gamma_k}\Big(V^{\frac ps}(\bar X_k) - \mathbb{E}\{V^{\frac ps}(\bar X_k)/\mathcal{F}_{k-1}\}\Big) \xrightarrow[n\to\infty]{} 0. \quad\square \]
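Proposition 1 is what makes the weighted empirical measures computable in practice. Here is a self-contained sketch of the procedure for a toy one-dimensional mean-reverting dynamics with compound-Poisson jumps, using steps $\gamma_k = k^{-1/3}$ and weights $\eta_k = \gamma_k$ (all concrete choices below are our own illustration, not the paper's):

```python
import numpy as np

def nu_bar(f, n, b, sigma, kappa, x0=0.0, lam=1.0, seed=1):
    """Approximate nu(f) by the weighted empirical measure
    nu_bar_n(f) = (1/H_n) * sum_k eta_k * f(X_{k-1})."""
    rng = np.random.default_rng(seed)
    x, acc, H = x0, 0.0, 0.0
    for k in range(1, n + 1):
        g = k ** (-1.0 / 3.0)      # decreasing step gamma_k
        acc += g * f(x)            # weight eta_k = gamma_k
        H += g
        # compound-Poisson jump increment over a step of length g
        z = rng.normal(0.0, 1.0, rng.poisson(lam * g)).sum()
        x += g * b(x) + np.sqrt(g) * sigma(x) * rng.normal() + kappa(x) * z
    return acc / H

# second moment under the invariant law of dX = -X dt + dW + 0.5 dZ
est = nu_bar(lambda x: x * x, 100_000,
             b=lambda x: -x, sigma=lambda x: 1.0, kappa=lambda x: 0.5)
```

For this linear toy dynamics the invariant second moment is available in closed form ($\frac12(1 + 0.25\,\lambda\,\mathbb E J^2) = 0.625$ with these parameters), which gives a convenient consistency check on the output.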
5 Identification of the weak limits of $(\bar\nu_n(\omega,dx))_{n\ge1}$
In this section, we show that every weak limit of $(\bar\nu_n(\omega,dx))_{n\ge1}$ is an invariant distribution for $(X_t)_{t\ge0}$. For this purpose, we deduce from the Echeverria–Weiss theorem (see [6], p. 238, and [11]) that if $\mathcal{A}(C^2_K(\mathbb{R}^d)) \subset C_0(\mathbb{R}^d)$, it suffices to show that for every weak limit $\nu$ of $(\bar\nu_n)_{n\ge1}$ and every $f \in C^2_K(\mathbb{R}^d)$, $\nu(\mathcal{A}f) = 0$. We note that $\mathcal{A}(C^2_K(\mathbb{R}^d)) \subset C_0(\mathbb{R}^d)$ if $\|\kappa(x)\| = o(\|x\|)$ as $\|x\|\to+\infty$ (and this condition cannot be improved in general). The main result of this section is then the following.

Proposition 3. Let $a \in (0,1]$, $p > 0$ and $q \in [0,1]$. Assume $(\mathbf{H}^1_p)$, $(\mathbf{H}^2_q)$, $(\mathbf{S}_{a,p,q})$, $\|\kappa(x)\| = o(\|x\|)$ as $\|x\|\to+\infty$, and that $(\eta_n/\gamma_n)_{n\ge1}$ is nonincreasing. If moreover
\[ \sup_{n\ge1}\frac1{H_n}\sum_{k=1}^{n}\eta_k\Big(\|\kappa(\bar X_{k-1})\|^{2q} + \mathrm{Tr}\,\sigma\sigma^*(\bar X_{k-1})\Big) < \infty \tag{33} \]
and
\[ \sum_{k\ge1}\frac{\eta_k^2}{H_k^2\gamma_k}\mathbb{E}\{V^{a+p-1}(\bar X_{k-1})\} < +\infty, \tag{34} \]
then
\[ \forall f \in C^2_K(\mathbb{R}^d),\ \text{a.s.},\quad \int \mathcal{A}f\,d\bar\nu_n \xrightarrow[n\to\infty]{} 0. \]
Consequently, every weak limit of $(\bar\nu_n(\omega,dx))_{n\ge1}$ is invariant for the S.D.E. (1).

Remark 12. Under the assumptions of Theorems 2 and 3, assumption (33) is automatically satisfied thanks to Proposition 1 and Corollary 1.
5.1 Proof of Proposition 3
The proof of Proposition 3 is built in two successive stages, which are stated as Propositions 4 and 5; the combination of these propositions implies Proposition 3. Before the first stage, we establish a lemma that will be necessary for its proof.

Lemma 4. Let $\Phi : \mathbb{R}^d \to \mathbb{R}^l$ be a continuous function with compact support, let $\Psi : \mathbb{R}^d \to \mathbb{R}_+$ be a locally bounded function, and let $(h^\theta_1)_{\theta\in[0,1]}$ and $(h^\theta_2)_{\theta\in[0,1]}$ be two families of functions defined on $\mathbb{R}^d\times\mathbb{R}_+$ with values in $\mathbb{R}^d$ satisfying the two following assumptions:

1. There exists $\delta_0 > 0$ such that for every $\gamma \in [0,\delta_0]$ and $\theta \in [0,1]$,
\[ \|h^\theta_1(x,\gamma)\| \xrightarrow[\|x\|\to+\infty]{} +\infty \qquad\text{and}\qquad \|h^\theta_2(x,\gamma)\| \xrightarrow[\|x\|\to+\infty]{} +\infty. \]

2. $\sup_{x\in K,\theta\in[0,1]}\|h^\theta_1(x,\gamma) - h^\theta_2(x,\gamma)\| \xrightarrow[\gamma\to0]{} 0$ for every compact set $K$.

Then, for every sequence $(x_k)_{k\in\mathbb{N}}$ of $\mathbb{R}^d$,
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_k\sup_{\theta\in[0,1]}\big\|\Phi(h^\theta_1(x_{k-1},\gamma_k)) - \Phi(h^\theta_2(x_{k-1},\gamma_k))\big\|\,\Psi(x_{k-1}) \xrightarrow[n\to+\infty]{} 0. \]

Proof. $\Phi$ has compact support; therefore, we derive from the first assumption that there exists $M_{\delta_0} > 0$ such that for every $\|x\| > M_{\delta_0}$, $\gamma \le \delta_0$ and $\theta \in [0,1]$,
\[ \Phi(h^\theta_1(x,\gamma)) = \Phi(h^\theta_2(x,\gamma)) = 0. \]
Consider $\rho \mapsto w(\rho,\Phi) = \sup\{\eta > 0,\ \sup_{\|x-y\|\le\eta}\|\Phi(x)-\Phi(y)\| \le \rho\}$. As $\Phi$ is uniformly continuous, $w(\rho,\Phi) > 0$ for every $\rho > 0$. Thanks to the second assumption, for every $\rho > 0$ there exists $\delta_\rho \le \delta_0$ such that for every $\gamma \le \delta_\rho$ and $\theta \in [0,1]$,
\[ \sup_{\|x\|\le M_{\delta_0}}\|h^\theta_1(x,\gamma) - h^\theta_2(x,\gamma)\| \le w(\rho,\Phi). \]
As $\gamma_k \to 0$, there exists $k_\rho \in \mathbb{N}$ such that $\gamma_k \le \delta_\rho$ for $k \ge k_\rho$. Using that $H_n \to +\infty$, we deduce
\[ \limsup_{n\to+\infty}\frac1{H_n}\sum_{k=1}^{n}\eta_k\sup_{\theta\in[0,1]}\big\|\Phi(h^\theta_1(x_{k-1},\gamma_k)) - \Phi(h^\theta_2(x_{k-1},\gamma_k))\big\|\,\Psi(x_{k-1}) \le \rho\sup_{\|x\|\le M_{\delta_0}}\Psi(x). \]
The result follows from the local boundedness of $\Psi$. $\square$

Proposition 4. Let $q \in [0,1]$ be such that $(\mathbf{H}^1_q)$ is satisfied. Assume that the assumptions of Proposition 3 are fulfilled. Then, a.s., for every $f \in C^2_K(\mathbb{R}^d)$,
\[ \lim_{n\to\infty}\frac1{H_n}\sum_{k=1}^{n}\eta_k\Big(\frac{\mathbb{E}\{f(\bar X_k)/\mathcal{F}_{k-1}\} - f(\bar X_{k-1})}{\gamma_k} - \mathcal{A}f(\bar X_{k-1})\Big) = 0. \tag{35} \]
Proof of Proposition 4 in case 3. Let $f \in C^2_K$ and decompose the infinitesimal generator into three parts, $\mathcal{A} = \mathcal{A}_1 + \mathcal{A}_2 + \mathcal{A}_3$, with
\[ \mathcal{A}_1f(x) = (\nabla f\,|\,b)(x), \qquad \mathcal{A}_2f(x) = \tfrac12\mathrm{Tr}(\sigma^*D^2f\sigma)(x), \]
and
\[ \mathcal{A}_3f(x) = \int\pi(dy)\Big(f(x+\kappa(x)y) - f(x) - (\nabla f(x)\,|\,\kappa(x)y)\mathbf 1_{\{\|y\|\le c\}}\Big). \]
With the notations of (17) (associated with case 3), we then decompose the proof in three steps:

Step 1.
\[ \frac{\mathbb{E}\{f(\bar X_{k,1})/\mathcal{F}_{k-1}\} - f(\bar X_{k-1})}{\gamma_k} = \mathcal{A}_1f(\bar X_{k-1}) + R_1(\gamma_k,\bar X_{k-1}) \quad\text{with}\quad \frac1{H_n}\sum_{k=1}^{n}\eta_k|R_1(\gamma_k,\bar X_{k-1})| \xrightarrow[n\to\infty]{} 0. \]

Step 2.
\[ \frac{\mathbb{E}\{f(\bar X_{k,2}) - f(\bar X_{k,1})/\mathcal{F}_{k-1}\}}{\gamma_k} = \mathcal{A}_2f(\bar X_{k-1}) + R_2(\gamma_k,\bar X_{k-1}) \quad\text{with}\quad \frac1{H_n}\sum_{k=1}^{n}\eta_k|R_2(\gamma_k,\bar X_{k-1})| \xrightarrow[n\to\infty]{} 0. \]

Step 3.
\[ \frac{\mathbb{E}\{f(\bar X_k) - f(\bar X_{k,2})/\mathcal{F}_{k-1}\}}{\gamma_k} = \mathcal{A}_3f(\bar X_{k-1}) + R_3(\gamma_k,\bar X_{k-1}) \quad\text{with}\quad \frac1{H_n}\sum_{k=1}^{n}\eta_k|R_3(\gamma_k,\bar X_{k-1})| \xrightarrow[n\to\infty]{} 0. \]

The combination of the three steps implies Proposition 4. We refer to Proposition 4 of [12] for steps 1 and 2 and only prove the last one. As $\bar X_{k-1}$ is $\mathcal{F}_{k-1}$-measurable and $\bar Z_k$, $U_k$ and $\mathcal{F}_{k-1}$ are independent, we have
\[ \mathbb{E}\{f(\bar X_{k,2} + \kappa(\bar X_{k-1})\bar Z_k)/\mathcal{F}_{k-1}\} = Q_{\gamma_k}f(\bar X_{k-1}) \quad\text{with}\quad Q_\gamma f(x) = \int_{\mathbb{R}^d}\mathbb{E}\{f(S(x,\gamma,u) + \kappa(x)Z_\gamma)\}\,\mathbb{P}_{U_1}(du), \]
where $S(x,\gamma,u) = x + \gamma b(x) + \sqrt\gamma\,\sigma(x)u$. Set $V_t = S(x,\gamma,u) + \kappa(x)Z_t$. Applying the Itô formula to $(f(V_t))_{t\ge0}$, we obtain
\[ f(V_t) = f(S(x,\gamma,u)) + \int_0^t\nabla f(V_{s^-})\kappa(x)\,dY_s + \sum_{0<s\le t}\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{s^-},\,x,\,\Delta Z_s\big), \tag{36} \]
where $\tilde H_f(z,x,y) = f(z+\kappa(x)y) - f(z) - (\nabla f(z)\,|\,\kappa(x)y)\mathbf 1_{\{\|y\|\le c\}}$. The process $(t,\omega)\mapsto\int_0^t\nabla f(V_{s^-})\kappa(x)\,dY_s$ is a martingale because $\nabla f$ is bounded. Thanks to the compensation formula and using a change of variable, we obtain
\[ \mathbb{E}\{f(S(x,\gamma,u)+\kappa(x)Z_\gamma)\} = f(S(x,\gamma,u)) + \gamma\,\mathbb{E}\Big\{\int_0^1dv\int\pi(dy)\,\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big)\Big\}. \tag{37} \]
Now, we notice that $\mathcal{A}_3f(x) = \int\pi(dy)\,\tilde H_f(x,x,y)$. Then,
\[ \frac{\mathbb{E}\{f(\bar X_k) - f(\bar X_{k,2})/\mathcal{F}_{k-1}\}}{\gamma_k} = \mathcal{A}_3f(\bar X_{k-1}) + R_3(\gamma_k,\bar X_{k-1}), \]
where
\[ R_3(\gamma,x) = \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1dv\int\pi(dy)\Big(\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\Big)\Big\}. \tag{38} \]
We denote by $R_{3,1}$ and $R_{3,2}$ the parts of $R_3$ associated with, respectively, the small and the big jumps of $Z$, i.e.
\[ R_{3,1}(\gamma,x,c) = \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1dv\int\pi(dy)\mathbf 1_{\{\|y\|\le c\}}\Big(\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\Big)\Big\}, \]
\[ R_{3,2}(\gamma,x,c) = \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1dv\int\pi(dy)\mathbf 1_{\{\|y\|>c\}}\Big(\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\Big)\Big\}. \]
We investigate successively $R_{3,1}$ and $R_{3,2}$. From the Taylor formula, we have for every $\|y\| \le c$,
\[ \big|\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\big| \le \tfrac12R^x(Z,\gamma,u,v,y)\,\|\kappa(x)y\|^2, \]
where
\[ R^x(Z,\gamma,u,v,y) = \sup_{\theta\in[0,1]}\big\|D^2f\big(S(x,\gamma,u)+\kappa(x)(Z_{v\gamma}+\theta y)\big) - D^2f\big(x+\theta\kappa(x)y\big)\big\|. \]
With the notations $\Phi = D^2f$, $\Psi(x) = \|\kappa(x)\|^2\|y\|^2$ and
\[ h^\theta_1(x,\gamma) = S(x,\gamma,u)+\kappa(x)(Z_{v\gamma}+\theta y), \qquad h^\theta_2(x,\gamma) = x+\theta\kappa(x)y, \]
we show that the assumptions of Lemma 4 are fulfilled with fixed $u$, $v$, $y$ and $\omega$. On the one hand, as $\|\kappa(x)\| = o(\|x\|)$, there exists a continuous function $\chi$ such that $\|\kappa(x)\| = \|x\|\chi(x)$ with $\chi(x)\to0$ as $\|x\|\to\infty$. Therefore, as $b$ and $\sigma$ are sublinear,
\[ \big\|S(x,\gamma,u)+\kappa(x)(Z_{v\gamma}+\theta y)\big\| \ge \|x\|\Big(1 - \gamma C_1 - \big(\|Z_{v\gamma}\|+\|y\|\big)\chi(x)\Big) - C_2, \qquad \big\|x+\theta\kappa(x)y\big\| \ge \|x\|\big(1 - \chi(x)\|y\|\big), \tag{39} \]
where $C_1, C_2 > 0$. Consider $\delta_0 > 0$ such that $1 - \delta_0C_1 > 0$. As $Z$ is locally bounded (because it is a càdlàg process) and $\chi(x)\to0$, for every $\gamma\le\delta_0$ and $\theta\in[0,1]$, a.s., $\|h^\theta_1(x,\gamma)\|\to+\infty$ and $\|h^\theta_2(x,\gamma)\|\to+\infty$ as $\|x\|\to\infty$. On the other hand, consider a compact set $K$ of $\mathbb{R}^d$. We have
\[ \sup_{x\in K,\theta\in[0,1]}\|h^\theta_1(x,\gamma) - h^\theta_2(x,\gamma)\| \le \sup_{x\in K}\big(\|b(x)\|+\|\sigma(x)\|+\|\kappa(x)\|\big)\big(\gamma+\sqrt\gamma\|u\|+\|Z_{v\gamma}\|\big) \xrightarrow[\gamma\to0]{} 0 \ \text{a.s.}, \]
because $b$, $\sigma$, $\kappa$ are locally bounded and $\lim_{t\to0}Z_t = 0$ a.s. Therefore, by Lemma 4, for any sequence $(x_k)_{k\in\mathbb{N}}$ of $\mathbb{R}^d$, for every $u\in\mathbb{R}^d$, $v\in[0,1]$ and $\|y\|\le c$,
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_k\big|\tilde H_f\big(S(x_{k-1},\gamma_k,u)+\kappa(x_{k-1})Z_{v\gamma_k},x_{k-1},y\big) - \tilde H_f(x_{k-1},x_{k-1},y)\big| \xrightarrow[n\to\infty]{} 0 \ \text{a.s.} \]
Now, as $\nabla f$ and $D^2f$ are bounded, we derive from the Taylor formula that
\[ \big|\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\big|\mathbf 1_{\{\|y\|\le c\}} \le \big(2\|\nabla f\|_\infty\|\kappa(x)\|\|y\|\big)\wedge\big(2\|D^2f\|_\infty\|\kappa(x)\|^2\|y\|^2\big). \]
Then, for every $q \in [1/2,1]$,
\[ \big|\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\big|\mathbf 1_{\{\|y\|\le c\}} \le C\|\kappa(x)\|^{2q}\|y\|^{2q}. \tag{40} \]
Therefore, by assumption $(\mathbf{H}^2_q)$ ($q > 1/2$ in case 3), we finally obtain by the Lebesgue theorem that
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_kR_{3,1}(\gamma_k,x_{k-1},c) \xrightarrow[n\to\infty]{} 0 \quad\text{if}\quad \sup_{n\in\mathbb{N}}\frac1{H_n}\sum_{k=1}^{n}\eta_k\|\kappa(x_{k-1})\|^{2q} < \infty. \]
Now, $\sup_{n}\frac1{H_n}\sum_{k=1}^{n}\eta_k\|\kappa(\bar X_{k-1})\|^{2q} < \infty$ a.s.; therefore
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_kR_{3,1}(\gamma_k,\bar X_{k-1},c) \xrightarrow[n\to\infty]{} 0 \quad\text{a.s.} \]
Let us now look into $R_{3,2}$. We have
\[ R_{3,2}(\gamma,x,c) = \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1dv\int_{\{\|y\|>c\}}\pi(dy)\Big(f\big(S(x,\gamma,u)+\kappa(x)(Z_{v\gamma}+y)\big) - f(x+\kappa(x)y)\Big)\Big\} \]
\[ + \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1dv\int_{\{\|y\|>c\}}\pi(dy)\Big(f(x) - f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma}\big)\Big)\Big\}. \]
One follows the same process as before. Using Lemma 4, one first shows that for any sequence $(x_k)_{k\in\mathbb{N}}$, every $y$ such that $\|y\| > c$, $v\in[0,1]$ and $u\in\mathbb{R}^d$, a.s.,
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_k\big|f\big(S(x_{k-1},\gamma_k,u)+\kappa(x_{k-1})(Z_{v\gamma_k}+y)\big) - f(x_{k-1}+\kappa(x_{k-1})y)\big| \xrightarrow[n\to\infty]{} 0 \tag{41} \]
and
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_k\big|f(x_{k-1}) - f\big(S(x_{k-1},\gamma_k,u)+\kappa(x_{k-1})Z_{v\gamma_k}\big)\big| \xrightarrow[n\to\infty]{} 0. \tag{42} \]
By the dominated convergence theorem (which can be applied without further conditions because $\pi(\|y\|>c)$ is finite and $f$ is bounded), we deduce that for any such sequence $(x_k)_{k\in\mathbb{N}}$,
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_kR_{3,2}(\gamma_k,x_{k-1},c) \xrightarrow[n\to\infty]{} 0 \quad\text{a.s.} \]
Step 3 is thus proved in case 3. Let us explain the main differences in cases 1 and 2.

Proof of Proposition 4 in cases 1 and 2. First, we examine case 2. We notice that in this case,
\[ \frac{\mathbb{E}\{f(\bar X_k) - f(\bar X_{k,2})/\mathcal{F}_{k-1}\}}{\gamma_k} = \mathcal{A}_3f(\bar X_{k-1}) + R_{3,2}(\gamma_k,\bar X_{k-1},0). \]
(41) and (42) are still valid with $c = 0$ but, as the Lévy measure is not finite (except if $q = 0$), we have to justify the use of the dominated convergence theorem. As $f$ is bounded and Lipschitz, it is $2q$-Hölder for every $q\in[0,1/2]$. Then,
\[ \big|f\big(S(x,\gamma,u)+\kappa(x)(Z_{v\gamma}+y)\big) - f\big(S(x,\gamma,u)+\kappa(x)Z_{v\gamma}\big)\big| \le C\|\kappa(x)\|^{2q}\|y\|^{2q} \]
and
\[ \big|f(x+\kappa(x)y) - f(x)\big| \le C\|\kappa(x)\|^{2q}\|y\|^{2q}. \]
Assumption (33) then allows us to conclude. In case 1, it suffices to reduce to case 3 if $q > 1/2$ and to case 2 if $q \le 1/2$. $\square$

Proposition 5. Let $a\in(0,1]$, $p>0$ and $q\in[0,1]$ be such that $(\mathbf{H}^1_p)$, $(\mathbf{H}^2_q)$ and $(\mathbf{S}_{a,p,q})$ are satisfied. Assume that the assumptions of Proposition 3 are fulfilled. Then,
\[ \lim_{n\to\infty}\frac1{H_n}\sum_{k=1}^{n}\frac{\eta_k}{\gamma_k}\mathbb{E}\{f(\bar X_k) - f(\bar X_{k-1})/\mathcal{F}_{k-1}\} = 0 \quad\text{a.s.} \]
Proof. We do not detail the proof of this proposition, which is an adaptation of Proposition 3 in [12]. $\square$
6 Proof of the main theorems for schemes (B) and (C)

The aim of this section is to bring out the main difficulties induced by the approximation of the jumps in schemes (B) and (C). Recall that for scheme (A), the main theorems (Theorems 2 and 3) were proved in two successive steps. First, we obtained tightness results (Proposition 1), and then we proved that every weak limit is invariant for $(X_t)_{t\ge0}$ (Proposition 3). We follow the same process for schemes (B) and (C) and successively check whether Proposition 1 and Proposition 3 remain valid.
6.1 A.s. tightness of $\bar\nu^B_n(\omega,dx)$ and $\bar\nu^C_n(\omega,dx)$
The tightness result associated with schemes (B) and (C) is strictly identical to Proposition 1 (in particular, assumption (6) is not necessary for tightness). Looking closely into the proof of this theorem (for scheme (A)), we see that the only properties of the jumps that we use are: the control of the moments of the components of the jumps (Lemma 1), which is fundamental for Proposition 2; the independence between the law of $Y$ and the truncation function; and the independence between $(\bar Y_n)_{n\in\mathbb{N}}$, $(\bar N_n)_{n\in\mathbb{N}}$ and $(U_n)_{n\in\mathbb{N}}$. We show in Lemma 5 that the moments of the components of the jumps associated with schemes (B) and (C) satisfy the same controls as in Lemma 1. Then, as scheme (B) satisfies the two independence properties, Proposition 1 follows in this case. In scheme (C), $(\bar Y^C_n)_{n\in\mathbb{N}}$ and $(\bar N^C_n)_{n\in\mathbb{N}}$ are not independent. This creates several technical difficulties in the proof of Proposition 2 in the case $p < 1$, but the course of the proof is the same. Hence, we only state the analogue of Lemma 1.

Lemma 5. Let $T_0$ be a positive number.

(i) Let $p > 0$ be such that $\int_{\|y\|>c}\|y\|^{2p}\pi(dy) < +\infty$. Then, for every $t \le T_0$,
\[ \mathbb{E}\{\|N^c_{t\wedge T^n}\|^{2p}\} \le t\int_{\|y\|>c}\|y\|^{2p}\pi(dy) \quad\text{if } p > 0, \]
\[ \mathbb{E}\Big\{\Big\|N^c_{t\wedge T^n} - (t\wedge T^n)\int_{\|y\|>c}y\,\pi(dy)\Big\|^{2p}\Big\} \le t\int_{\|y\|>c}\|y\|^{2p}\pi(dy) + C_{T_0,p,c}\,t^{2(p\wedge1)} \quad\text{if } p > 1/2. \]

(ii) Let $T$ be an $(\mathcal{F}_t)$-stopping time and $q\in[0,1]$ such that $\int_{\|y\|\le c}\|y\|^{2q}\pi(dy) < +\infty$. Then, for every $t \le T_0$,
\[ \mathbb{E}\Big\{\Big\|Y^{c,n}_{t\wedge T} + (t\wedge T)\int_{\|y\|\le c}y\,\pi(dy)\Big\|^{2q}\Big\} \le t\int_{\|y\|\le c}\|y\|^{2q}\pi(dy) \quad\text{if } q \le 1/2, \]
\[ \mathbb{E}\{\|Y^{c,n}_{t\wedge T}\|^{2q}\} \le C_q\,t\int_{\|y\|\le c}\|y\|^{2q}\pi(dy) \quad\text{if } 1/2 < q \le 1. \]

(iii) For every $(\mathcal{F}_t)$-stopping time $T$, $\mathbb{E}\{\|Z^n_{t\wedge T}\|^2\} \le t\int\|y\|^2\pi(dy)$. If $p \ge 1$, for every $\varepsilon > 0$, there exist $C_\varepsilon > 0$ and $n_0\in\mathbb{N}$ such that for every $t \le T_0$ and $n \ge n_0$,
\[ \mathbb{E}\{\|Z^n_t\|^{2p}\} \le t\Big(\int\|y\|^{2p}\pi(dy) + \varepsilon\Big) + C_\varepsilon t^{p\wedge2} \quad\text{and}\quad \mathbb{E}\{\|Z^n_{t\wedge T}\|^{2p}\} \le t\Big(\int\|y\|^{2p}\pi(dy) + \varepsilon\Big) + C_\varepsilon t^{p\wedge2}. \]

Proof. The proof is left to the reader.

Remark 13. The fact that the control is only valid for sufficiently large $n$ does not raise any problem, because inequality (16) only needs to hold for sufficiently large $n$.
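Schemes (B) and (C) rest on the fact that, over a step of length $\gamma$, the jumps of norm larger than a threshold $u$ form a compound-Poisson variable with intensity $\lambda_u = \pi(\|y\| > u)$. A minimal sketch for an illustrative symmetric measure $\pi(dy) = |y|^{-\alpha-1}dy$ on $\{|y| > u\}$ (the measure, the threshold and the sampler are our own choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def big_jump_increment(gamma, u=0.5, alpha=1.5):
    """Compound-Poisson increment of the jumps of norm > u over one step,
    for pi(dy) = |y|^{-alpha-1} dy restricted to |y| > u (symmetric)."""
    lam = 2.0 * u ** (-alpha) / alpha        # lam = pi(|y| > u)
    n_jumps = rng.poisson(lam * gamma)       # number of big jumps in the step
    if n_jumps == 0:
        return 0.0
    # inverse-transform sampling of the Pareto-type jump sizes
    sizes = u * rng.random(n_jumps) ** (-1.0 / alpha)
    signs = rng.choice([-1.0, 1.0], n_jumps)
    return float((signs * sizes).sum())

incs = [big_jump_increment(0.01) for _ in range(100_000)]
frac_zero = sum(i == 0.0 for i in incs) / len(incs)
```

With these parameters, the fraction of steps carrying no big jump should be close to $e^{-\lambda_u\gamma} \approx 0.963$, which is a quick check that the intensity is computed correctly.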
6.2 Identification of the limit of $(\bar\nu^B_n)_{n\in\mathbb{N}}$ and $(\bar\nu^C_n)_{n\in\mathbb{N}}$

The theorem obtained for $(\bar\nu^B_n)_{n\in\mathbb{N}}$ and $(\bar\nu^C_n)_{n\in\mathbb{N}}$ is strictly identical to Proposition 3, under the additional condition (6) for scheme (C). We recall that the proof of Proposition 3 is built in two steps, stated as Propositions 4 and 5. Proposition 5 is still valid without additional difficulties. On the other hand, the proof of the analogue of Proposition 4 contains some new difficulties, which the three following lemmas allow us to overcome. Lemma 6 justifies the legitimacy of schemes (B) and (C): indeed, it shows that the error produced by the approximation of the jump part has no consequence on the invariance of the limiting probability. Then, we state two technical lemmas: Lemma 7 ensures a control of $(Z^n)_{n\in\mathbb{N}}$ that is uniform in $n$, and Lemma 8 is a variant of Lemma 4. We denote by $\mathcal{A}^{k,B}$ and $\mathcal{A}^{k,C}$ the operators on $C^2_K$ with values in $C_b(\mathbb{R}^d,\mathbb{R})$ defined by
\[ \mathcal{A}^{k,B}f(x) = (\nabla f\,|\,b^c)(x) + \tfrac12\mathrm{Tr}(\sigma^*D^2f\sigma)(x) + \int_{\{\|y\|\ge u_k\}}\tilde H_f(x,x,y)\,\pi(dy), \]
\[ \mathcal{A}^{k,C}f(x) = (\nabla f\,|\,b^c)(x) + \tfrac12\mathrm{Tr}(\sigma^*D^2f\sigma)(x) + \frac{1-e^{-\pi(D_k)\gamma_k}}{\pi(D_k)\gamma_k}\int_{\{\|y\|\ge u_k\}}\tilde H_f(x,x,y)\,\pi(dy). \]
Lemma 6. Assume that $(\mathbf{H}^2_q)$ is satisfied and consider a sequence $(x_k)_{k\in\mathbb{N}}$ such that
\[ \sup_{n\ge1}\frac1{H_n}\sum_{k=1}^{n}\eta_k\|\kappa(x_{k-1})\|^{2q} < \infty. \tag{43} \]
Then, for every function $f\in C^2_K(\mathbb{R}^d,\mathbb{R})$,
\[ \lim_{n\to+\infty}\frac1{H_n}\sum_{k=1}^{n}\eta_k\big|\mathcal{A}f(x_{k-1}) - \mathcal{A}^{k,B}f(x_{k-1})\big| = 0, \]
and, if $\pi(D_n)\gamma_n \to 0$ as $n\to+\infty$,
\[ \lim_{n\to+\infty}\frac1{H_n}\sum_{k=1}^{n}\eta_k\big|\mathcal{A}f(x_{k-1}) - \mathcal{A}^{k,C}f(x_{k-1})\big| = 0. \]

Proof. We have $\mathcal{A}f(x) - \mathcal{A}^{k,B}f(x) = \int_{\{\|y\|<u_k\}}\tilde H_f(x,x,y)\,\pi(dy)$. When $q\ge1/2$, we deduce from the Taylor formula and the boundedness of $\nabla f$ and $D^2f$ that there exists $C_q > 0$ such that
\[ \big|\tilde H_f(x,x,y)\big|\mathbf 1_{\{\|y\|\le u_k\}} \le C_q\|\kappa(x)\|^{2q}\|y\|^{2q}\mathbf 1_{\{\|y\|\le u_k\}}. \]
When $q\le1/2$, as $f$ is $2q$-Hölder,
\[ \big|\tilde H_f(x,x,y)\big|\mathbf 1_{\{\|y\|\le u_k\}} \le [f]_{2q}\|\kappa(x)\|^{2q}\|y\|^{2q}\mathbf 1_{\{\|y\|\le u_k\}} + \sup_{x\in\mathrm{supp}f}\|\nabla f(x)\|\,\|\kappa(x)\|\,\|y\|\mathbf 1_{\{\|y\|\le u_k\}}. \]
Setting $v_{k,q} = \int_{\{\|y\|\le u_k\}}\|y\|^{2q}\pi(dy)$, we have
\[ \big|\mathcal{A}f(x_{k-1}) - \mathcal{A}^{k,B}f(x_{k-1})\big| \le \begin{cases} C\big(v_{k,q}\|\kappa(x_{k-1})\|^{2q} + v_{k,1/2}\big) & \text{if } q\le1/2, \\ C\,v_{k,q}\|\kappa(x_{k-1})\|^{2q} & \text{if } q\ge1/2. \end{cases} \]
As $v_{k,\alpha}\to0$ when $k\to\infty$ for every $\alpha\ge q$ under assumption $(\mathbf{H}^2_q)$, the first result follows from (43). One deduces the second one by noticing that
\[ \big|\mathcal{A}^{k,B}f(x) - \mathcal{A}^{k,C}f(x)\big| \le C\,\pi(D_k)\gamma_k\big(1+\|\kappa(x)\|^{2q}\big). \quad\square \]
Lemma 7. Consider $Z^n = Y^n + N$. One can build a sequence of càdlàg processes $(\tilde Z^n)_{n\in\mathbb{N}}$ and $\tilde Z$ such that $\tilde Z^n \stackrel{\mathcal{D}(\mathbb{R}_+,\mathbb{R}^d)}{=} Z^n$ and $\tilde Z \stackrel{\mathcal{D}(\mathbb{R}_+,\mathbb{R}^d)}{=} Z$, with
\[ \forall T>0,\quad \sup_{n\in\mathbb{N}}\sup_{0\le s\le T}\|\tilde Z^n_s\| < +\infty \qquad\text{and}\qquad \limsup_{n\to+\infty,\,\gamma\to0}\ \sup_{0\le s\le\gamma}\|\tilde Z^n_s\| = 0 \quad\text{a.s.} \tag{44} \]

Proof. $Z^n$ converges locally uniformly in $L^2$ towards $Z$. Then, it is clear that $Z^n$ converges in law towards $Z$ for the topology of uniform convergence, and therefore for the Skorokhod topology. Thanks to the Skorokhod representation theorem, there exist $(\tilde Z^n)_{n\in\mathbb{N}}$ and $\tilde Z$ with $\tilde Z^n \stackrel{\mathcal{D}(\mathbb{R}_+,\mathbb{R}^d)}{=} Z^n$ and $\tilde Z \stackrel{\mathcal{D}(\mathbb{R}_+,\mathbb{R}^d)}{=} Z$ such that $\tilde Z^n$ tends a.s. towards $\tilde Z$ for the Skorokhod topology. On the one hand, $f\mapsto\sup_{x\in[0,T]}\|f(x)\|$ is continuous for the Skorokhod topology, which ensures the first assertion. On the other hand, by definition of the Skorokhod topology, for every $T>0$ there exists a sequence $(\lambda_n)_{n\in\mathbb{N}}$ of increasing functions from $[0,T]$ to $[0,T]$ such that
\[ \sup_{0\le s\le T}|\lambda_n(s)-s| \xrightarrow[n\to+\infty]{} 0 \qquad\text{and}\qquad \sup_{0\le s\le T}\|\tilde Z^n(\lambda_n(s)) - \tilde Z_s\| \xrightarrow[n\to+\infty]{} 0. \]
As $\tilde Z$ is right-continuous and $\tilde Z_0 = 0$, for every $\rho>0$ there exists $\delta>0$ such that $\|\tilde Z_s\|\le\rho/2$ for every $s\in[0,2\delta]$. Then, there exists $n_0\in\mathbb{N}$ such that for every $n\ge n_0$,
\[ \sup_{s\in[0,2\delta]}\|\tilde Z^n(\lambda_n(s))\| \le \rho \implies \sup_{s\in[0,\inf_{n\ge n_0}\lambda_n(2\delta)]}\|\tilde Z^n_s\| \le \rho. \]
The result follows taking $n_1\ge n_0$ such that $\lambda_n(2\delta)\ge\delta$ for $n\ge n_1$. $\square$

Lemma 8. Let $\Phi:\mathbb{R}^d\to\mathbb{R}^l$ be a continuous function with compact support, let $\Psi:\mathbb{R}^d\to\mathbb{R}_+$ be a locally bounded function, and let $(h^{\theta,n}_1)_{\theta\in[0,1],n\in\mathbb{N}^*}$ and $(h^{\theta,n}_2)_{\theta\in[0,1],n\in\mathbb{N}^*}$ be families of functions on $\mathbb{R}^d\times\mathbb{R}_+$ with values in $\mathbb{R}^d$ satisfying the two following assumptions:

1. There exists $\delta_0>0$ such that for every $\gamma\in[0,\delta_0]$ and $\theta\in[0,1]$,
\[ \inf_{n\ge1}\|h^{\theta,n}_1(x,\gamma)\| \xrightarrow[\|x\|\to+\infty]{} +\infty \qquad\text{and}\qquad \inf_{n\ge1}\|h^{\theta,n}_2(x,\gamma)\| \xrightarrow[\|x\|\to+\infty]{} +\infty. \]

2. $\sup_{x\in K,\theta\in[0,1]}\|h^{\theta,n}_1(x,\gamma) - h^{\theta,n}_2(x,\gamma)\| \xrightarrow[\gamma\to0,\,n\to+\infty]{} 0$ for every compact set $K$.

Then, for every sequence $(x_k)_{k\in\mathbb{N}}$ in $\mathbb{R}^d$,
\[ \frac1{H_n}\sum_{k=1}^{n}\eta_k\sup_{\theta\in[0,1]}\big\|\Phi(h^{\theta,k}_1(x_{k-1},\gamma_k)) - \Phi(h^{\theta,k}_2(x_{k-1},\gamma_k))\big\|\,\Psi(x_{k-1}) \xrightarrow[n\to+\infty]{} 0. \]
Idea of the proof of Proposition 4 for schemes (B) and (C). We only explain why step 3 of the proof of Proposition 4 is still valid for schemes (B) and (C). We use the same process, but the technical arguments may differ. First, consider scheme (B). We have
\[ \frac{\mathbb{E}\big\{f(\bar X^B_k) - f(\bar X^B_{k,2})/\mathcal{F}^B_{k-1}\big\}}{\gamma_k} = \mathcal{A}^{k,B}_3f(\bar X^B_{k-1}) + R^{B,k}_3(\gamma_k,\bar X^B_{k-1}), \]
where
\[ R^{B,k}_3(\gamma,x) = \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1dv\int_{\{\|y\|>u_k\}}\pi(dy)\Big(\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z^k_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\Big)\Big\}. \tag{45} \]
Using Lemmas 7 and 8, we are able to show that $\frac1{H_n}\sum_{k=1}^{n}\eta_kR^{B,k}_3(\gamma_k,\bar X^B_{k-1}) \to 0$ a.s. as $n\to\infty$, and Lemma 6 then ensures that step 3 is valid for scheme (B). For scheme (C), using the stopping time theorem, we show that
\[ \frac{\mathbb{E}\big\{f(\bar X^C_{k,2}+\kappa(\bar X^C_{k-1})\bar Z^C_k) - f(\bar X^C_{k,2})/\mathcal{F}^C_{k-1}\big\}}{\gamma_k} = \mathcal{A}^{k,C}_3f(\bar X^C_{k-1}) + R^{C,k}_3(\gamma_k,\bar X^C_{k-1}), \]
where
\[ R^{C,k}_3(\gamma,x) = \int_{\mathbb{R}^d}\mathbb{P}_{U_1}(du)\,\mathbb{E}\Big\{\int_0^1\int_{D_k}\Big(\tilde H_f\big(S(x,\gamma,u)+\kappa(x)Z^k_{v\gamma},x,y\big) - \tilde H_f(x,x,y)\Big)e^{-\lambda_kv}\,\pi(dy)\,dv\Big\}, \]
with $\lambda_k = \pi(D_k)\gamma_k$. Then, by Lemma 6, it suffices to show that $\frac1{H_n}\sum_{k=1}^{n}\eta_kR^{C,k}_3(\gamma_k,\bar X^C_{k-1}) \to 0$, which is deduced from Lemmas 7 and 8.
7 An a.s. C.L.T. for non square-integrable random variables

The "classical" a.s. C.L.T. due to Brosamler ([5]) and Schatte ([18]) is the following result. Let $(U_n)_{n\in\mathbb{N}^*}$ be a sequence of i.i.d. random variables with values in $\mathbb{R}^d$ such that $\mathbb{E}\,U_1=0$ and $\Sigma_{U_1}=I_d$. Then, $\mathbb{P}$-a.s.,
$$\frac{1}{\ln n}\sum_{k=1}^n\frac1k\,\delta_{\frac{1}{\sqrt k}\sum_{i=1}^kU_i}\overset{(\mathbb{R}^d)}{\Longrightarrow}\mathcal N(0,I_d).$$
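This statement can be checked numerically. The following sketch (our code, with our own function and variable names) estimates, in dimension $d=1$, the log-averaged weighted empirical c.d.f. at $0$ for simulated $\mathcal N(0,1)$ variables; its a.s. limit is the $\mathcal N(0,1)$ c.d.f. at $0$, i.e. $1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_average_cdf_at(level=0.0, n=200_000):
    """Weighted empirical measure (1/ln n) sum_k (1/k) delta_{S_k/sqrt(k)}
    evaluated on the half-line (-inf, level]; by the a.s. C.L.T. it
    approaches the N(0,1) c.d.f. at `level` for almost every sample path."""
    u = rng.standard_normal(n)            # i.i.d., mean 0, variance 1
    s = np.cumsum(u)                      # partial sums S_k
    k = np.arange(1, n + 1)
    x = s / np.sqrt(k)                    # normalized partial sums
    w = 1.0 / k
    w /= w.sum()                          # weights 1/k, normalized by ~ln n
    return float(np.sum(w * (x <= level)))

print(log_average_cdf_at(0.0))            # limit is 1/2
```

Convergence is slow (the effective averaging scale is $\ln n$), so a single path only lands in a loose neighbourhood of $1/2$.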
This result is naturally associated with the Central Limit Theorem, which expresses the fact that every square-integrable random variable is in the domain of normal attraction of the normal law. In the case where one no longer has square-integrability, Berkes, Horvath and Khoshnevisan ([4]) obtained an extension of this result associated with the non square-integrable attractive laws, which are the stable laws (with index $\alpha\in(0,2)$). We are going to show that theorem 2 allows us to recover this extension.

Denote by $(Z^{\alpha,c}_t)_{t\ge0}$ a symmetrical one-dimensional $\alpha$-stable process such that the characteristic function $\phi$ of $Z^{\alpha,c}_1$ satisfies $\phi(u)=e^{-\rho|u|^{\alpha}}$ where $\rho=2c\int_0^{+\infty}y^{-\alpha}\sin y\,dy$. Consider a sequence of symmetrical i.i.d. r.v. $(V_n)_{n\in\mathbb{N}}$ with
$$\mathbb{P}(V_1\ge x)=\frac{c}{\alpha x^{\alpha}}+\delta(x)\,x^{-\alpha}(\ln x)^{-\gamma},\qquad x>0,\ \gamma>0,\ \delta(x)\xrightarrow[x\to+\infty]{}0.\tag{46}$$
Under these conditions, we know that
$$\frac{V_1+\dots+V_n}{n^{\frac1\alpha}}\overset{(\mathbb{R})}{\Longrightarrow}Z^{\alpha,c}_1$$
(see Gnedenko-Kolmogorov, [8]). We have the following result:
Theorem 4. Let $(\eta_k)_{k\in\mathbb{N}^*}$ be a nonincreasing sequence with infinite sum such that $(k\eta_k)_{k\in\mathbb{N}^*}$ is nonincreasing, and set $\mu=\mathcal L(Z^{\alpha,c}_1)$. Then, if $\gamma>\frac1\alpha$,
$$\frac{1}{H_n}\sum_{k=1}^n\eta_k\,\delta_{\frac{V_1+\dots+V_k}{k^{1/\alpha}}}\overset{(\mathbb{R})}{\Longrightarrow}\mu\quad\text{a.s.}$$
and in particular,
$$\frac{1}{\ln n}\sum_{k=1}^n\frac1k\,\delta_{\frac{V_1+\dots+V_k}{k^{1/\alpha}}}\overset{(\mathbb{R})}{\Longrightarrow}\mu\quad\text{a.s.}$$
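Theorem 4 can be illustrated numerically in the exactly stable case $\alpha=1$ (our own sketch, not part of the proof): for standard Cauchy variables, $(V_1+\dots+V_k)/k$ is again standard Cauchy by stability, and the weighted empirical c.d.f. with $\eta_k=1/k$ should approach the Cauchy c.d.f., which equals $3/4$ at the point $1$:

```python
import numpy as np

rng = np.random.default_rng(1)

def stable_log_average_cdf(level=1.0, n=200_000, alpha=1.0):
    """Weighted empirical measure (1/H_n) sum_k eta_k delta_{(V_1+..+V_k)/k^{1/alpha}}
    with eta_k = 1/k, for standard Cauchy V_i (alpha = 1); its a.s. limit is
    the standard Cauchy law, whose c.d.f. at 1 is 3/4."""
    v = rng.standard_cauchy(n)
    s = np.cumsum(v)
    k = np.arange(1, n + 1)
    x = s / k ** (1.0 / alpha)            # normalized partial sums
    w = 1.0 / k
    w /= w.sum()                          # eta_k = 1/k, normalized by H_n
    return float(np.sum(w * (x <= level)))

print(stable_log_average_cdf())           # limit is 3/4
```

As in the Gaussian case, a single path converges at a logarithmic rate, so only loose agreement is expected.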
In order to prove this theorem, we first need an almost sure invariance principle due to Stout ([19]).

Proposition 6. Let $(V_n)_{n\ge1}$ and $(\widetilde Z_n)_{n\ge1}$ be sequences of i.i.d. random variables such that $\widetilde Z_1\overset{(\mathbb{R})}{=}Z^{\alpha,c}_1$ and $V_1$ is defined as above. Then, if $\gamma>\frac1\alpha$, there exist a probability space $(\Omega,\mathcal F,\mathbb P)$ and sequences $(\bar V_n)_{n\ge1}$ and $(\bar Z_n)_{n\ge1}$ with $\bar V_1\overset{\mathcal L}{=}V_1$ and $\bar Z_1\overset{\mathcal L}{=}\widetilde Z_1$ such that
$$\Big|\sum_{i=1}^n\bar V_i-\sum_{i=1}^n\bar Z_i\Big|\underset{n\to+\infty}{=}o\big(n^{\frac1\alpha}(\ln n)^{-\rho}\big)\qquad\text{with }0<\rho<\gamma-\frac1\alpha.$$
Proof of theorem 4. First, we assume that $V_1\overset{\mathcal L}{=}\widetilde Z_1=Z^{\alpha,c}_1$. Consider
$$S_0=0\qquad\text{and}\qquad S_n=\frac{\widetilde Z_1+\dots+\widetilde Z_n}{n^{\frac1\alpha}}\quad\forall n\ge1.$$
We notice that $S_{n+1}=S_n-\frac1\alpha\gamma_{n+1}S_n+\gamma_{n+1}^{\frac1\alpha}\widetilde Z_{n+1}+R_{n+1}$ with $\gamma_n=\frac1n$ and $R_{n+1}=O\big(\frac{S_n}{n^2}\big)$. The idea of the proof is to compare $(S_n)_{n\ge0}$ with the (exact) Euler scheme (A) with initial value 0 associated with the S.D.E. $(E_{\alpha,c})$ defined by $dX_t=-\frac1\alpha X_{t^-}\,dt+dZ^{\alpha,c}_t$. As $(Z^{\alpha,c}_t)_{t\ge0}$ is a self-similar process with index $\frac1\alpha$, its Euler scheme can be written
$$\bar X_0=0\qquad\text{and}\qquad\bar X_{n+1}=\bar X_n-\frac1\alpha\gamma_{n+1}\bar X_n+\gamma_{n+1}^{\frac1\alpha}\widetilde Z_{n+1}.$$
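The remainder term above can be checked numerically (our own verification, with $\alpha=1.5$ chosen arbitrarily): writing $S_{n+1}=S_n(n/(n+1))^{1/\alpha}+\gamma_{n+1}^{1/\alpha}\widetilde Z_{n+1}$ exactly, one gets $R_{n+1}=r_n\,S_n$ with $r_n=(n/(n+1))^{1/\alpha}-1+\frac{1}{\alpha(n+1)}$, so the claim $R_{n+1}=O(S_n/n^2)$ amounts to the boundedness of $n^2 r_n$:

```python
import numpy as np

# R_{n+1} = r_n * S_n, where r_n is the second-order Taylor remainder of
# (n/(n+1))^{1/alpha} around 1 - 1/(alpha*(n+1)); boundedness of n^2 * r_n
# is exactly the claim R_{n+1} = O(S_n / n^2).
alpha = 1.5
n = np.arange(1.0, 100_001.0)
r = (n / (n + 1.0)) ** (1.0 / alpha) - 1.0 + 1.0 / (alpha * (n + 1.0))
bound = float(np.max(n ** 2 * np.abs(r)))
print(bound)   # stays bounded; n^2 |r_n| tends to |(1/alpha)(1/alpha - 1)|/2
```

Note that for $\alpha=1$ the remainder vanishes identically, since the partial-sum recursion then coincides exactly with the Euler recursion.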
Assumptions of theorem 2 are clearly fulfilled with $a=1$ for any $p\in(0,\alpha/2)$ and $q\in(\alpha/2,1)$. Now, we know that $(E_{\alpha,c})$ admits a unique invariant measure $\mu$ such that $\mu=\mathcal L(Z^{\alpha,c}_1)$ (see [17], p. 188); therefore, by theorem 2,
$$\frac{1}{H_n}\sum_{k=1}^n\eta_k\,\delta_{\bar X_{k-1}}\overset{(\mathbb{R})}{\Longrightarrow}\mu\quad\text{a.s.}\tag{47}$$
Then, consider $\Delta_n=S_n-\bar X_n$. We have $\Delta_0=0$ and
$$\Delta_{n+1}=\Big(1-\frac{1}{\alpha(n+1)}\Big)\Delta_n+R_{n+1}.$$
Setting $k_0=\inf\{k\ge0,\ k-\frac1\alpha>0\}$, we deduce that for every $n\ge k_0+1$,
$$\Delta_n=\frac{1}{c_n}\sum_{k=k_0+1}^nc_kR_k\qquad\text{with}\qquad c_n=\Big(\prod_{k=k_0+1}^n\Big(1-\frac{1}{\alpha k}\Big)\Big)^{-1}.$$
We show that $\Delta_n\xrightarrow[n\to+\infty]{}0$ a.s. We notice that
$$c_n=\exp\Big(-\sum_{k=k_0+1}^n\ln\Big(1-\frac{1}{\alpha k}\Big)\Big)=\exp\Big(\frac1\alpha\sum_{k=k_0+1}^n\frac1k+O\Big(\frac{1}{k^2}\Big)\Big)\underset{n\to+\infty}{\sim}C'\,n^{\frac1\alpha}.$$
On the one hand, if $\alpha>1$, $\widetilde Z_1$ is integrable; therefore,
$$\mathbb{E}\{|R_k|\}\le\frac{C}{k^{\frac1\alpha+2}}\,k\,\mathbb{E}\{|\widetilde Z_1|\}\le\frac{C}{k^{1+\frac1\alpha}}\quad\Longrightarrow\quad\sum_{k\ge1}\mathbb{E}\{|R_k|\}\le C\sum_{k\ge1}\frac{1}{k^{1+\frac1\alpha}}<+\infty.$$
We deduce that $\sum_{k\ge1}|R_k|<+\infty$ a.s. and, as $c_n\xrightarrow[n\to+\infty]{}+\infty$, we derive from the Kronecker lemma that
$$\frac{1}{c_n}\sum_{k=k_0+1}^nc_kR_k\xrightarrow[n\to+\infty]{}0\quad\Longrightarrow\quad\Delta_n\xrightarrow[n\to+\infty]{}0\quad\text{a.s. if }\alpha>1.$$
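The Kronecker lemma invoked here states that if $\sum_kR_k$ converges and $c_n$ is nondecreasing with $c_n\to+\infty$, then $\frac{1}{c_n}\sum_{k\le n}c_kR_k\to0$. A toy numerical check (our deterministic example, not the $R_k$ of the proof):

```python
import numpy as np

# Kronecker's lemma: if sum_k R_k converges and (c_k) is nondecreasing with
# c_n -> +infinity, then (1/c_n) * sum_{k<=n} c_k R_k -> 0.
n = 200_000
k = np.arange(1, n + 1)
R = (-1.0) ** k / k            # alternating harmonic series: sum converges
c = np.sqrt(k)                 # c_n -> +infinity
vals = np.abs(np.cumsum(c * R) / c)
print(vals[-1])                # tends to 0 as n grows
```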
On the other hand, if $\alpha\le1$, $\widetilde Z_1$ has a moment of order $\theta$ for every $\theta<\alpha$. It follows:
$$\mathbb{E}\{|R_n|^{\theta}\}\le\frac{C}{n^{2\theta+\frac\theta\alpha}}\,n\,\mathbb{E}\{|\widetilde Z_1|^{\theta}\}=\frac{C'}{n^{\theta(2+\frac1\alpha)-1}}.$$
Therefore, if $\theta$ satisfies $\theta(2+\frac1\alpha)-1>1$, i.e. if $\frac{2\alpha}{2\alpha+1}<\theta<\alpha$, we have $\sum_{k\ge1}\mathbb{E}\{|R_k|^{\theta}\}<+\infty$, which implies that
$$\frac{1}{c_n^{\theta}}\sum_{k=k_0+1}^nc_k^{\theta}|R_k|^{\theta}\xrightarrow[n\to+\infty]{}0\quad\text{a.s.}$$
Now, as $\theta\le1$,
$$\Big|\frac{1}{c_n}\sum_{k=k_0+1}^nc_kR_k\Big|^{\theta}\le\frac{1}{c_n^{\theta}}\sum_{k=k_0+1}^nc_k^{\theta}|R_k|^{\theta}.$$
It follows that $\Delta_n\xrightarrow[n\to+\infty]{}0$ a.s. We derive from (47) that
$$\frac{1}{H_n}\sum_{k=1}^n\eta_k\,\delta_{\frac{\widetilde Z_1+\dots+\widetilde Z_k}{k^{1/\alpha}}}\overset{(\mathbb{R})}{\Longrightarrow}\mu\quad\text{a.s.}\tag{48}$$
Now, consider a sequence $(V_n)_{n\ge0}$ of i.i.d. symmetrical random variables satisfying (46). As (48) holds almost surely and $(\widetilde Z_n)_{n\ge1}$ is a sequence of i.i.d. random variables, we notice that (48) is also valid for every sequence $(\bar Z_n)_{n\ge1}$ of i.i.d. random variables such that $\bar Z_1\overset{\mathcal L}{=}\widetilde Z_1$. By proposition 6, we obtain that there exists a sequence of i.i.d. random variables $(\bar V_n)_{n\ge0}$ such that $\bar V_1\overset{\mathcal L}{=}V_1$ and
$$\frac{1}{H_n}\sum_{k=1}^n\eta_k\,\delta_{\frac{\bar V_1+\dots+\bar V_k}{k^{1/\alpha}}}\overset{(\mathbb{R})}{\Longrightarrow}\mu\quad\text{a.s.}\tag{49}$$
As $(V_n)_{n\ge1}$ and $(\bar V_n)_{n\ge1}$ are sequences of i.i.d. random variables such that $V_1\overset{\mathcal L}{=}\bar V_1$, (49) also holds for $(V_n)_{n\ge1}$.
8 Simulations
Example 1. Denote by $(Z_t)_{t\ge0}$ a Cauchy process with parameter 1 (with Lévy measure defined by $\pi(dy)=dy/y^2$) and consider the O.U. process solution to $dX_t=-X_{t^-}\,dt+dZ_t$, corresponding to $(E_{1,1})$ defined in the previous section. The unique invariant measure of $(X_t)_{t\ge0}$ is the Cauchy law (see [17], p. 188) and the assumptions of theorem 2 are fulfilled with $V(x)=1+x^2$, $a=1$ and every $p\in(0,1/2)$ and $q\in(1/2,1)$. Therefore,
$$\bar\nu_n(f),\ \bar\nu^B_n(f),\ \bar\nu^C_n(f)\xrightarrow[n\to+\infty]{}\int\frac{f(x)}{\pi(1+x^2)}\,dx\quad\text{a.s.}$$
for every $f$ such that $f=O(|x|^{\frac12-\epsilon})$ for every $\epsilon>0$. In the following figures, one compares the theoretical density of the invariant measure with the density obtained by convolution of each of the empirical measures with a Gaussian kernel, for $N=5\cdot10^4$. We choose $\eta_n=\gamma_n=1/\sqrt n$ and $u_n=\sqrt{\gamma_n}$ (in order to have $\pi(D_n)\gamma_n\to0$), and $t$ indicates the CPU time.
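A compact sketch of scheme (A) for this example (our code; we use numpy's standard Cauchy process, whose Lévy measure is $dy/(\pi y^2)$, a constant multiple of the parametrization above, and whose invariant law for this O.U. dynamics is the standard Cauchy law):

```python
import numpy as np

rng = np.random.default_rng(3)

def scheme_A_cauchy_ou(n=200_000, level=1.0):
    """Exact Euler scheme (A) with decreasing steps gamma_k = 1/sqrt(k) for
    dX_t = -X_{t-} dt + dZ_t, Z a standard Cauchy process; the weighted
    empirical c.d.f. approximates the Cauchy c.d.f.
    (1/2 + arctan(level)/pi, i.e. 3/4 at level = 1)."""
    k = np.arange(1, n + 1)
    gam = 1.0 / np.sqrt(k)
    # a Cauchy increment over a window of length gamma is distributed as
    # gamma * (standard Cauchy), by self-similarity with index 1
    dz = gam * rng.standard_cauchy(n)
    xs = np.empty(n)
    x = 0.0
    for i in range(n):
        x = x - gam[i] * x + dz[i]       # Euler step of the O.U. dynamics
        xs[i] = x
    w = gam / gam.sum()                  # eta_k = gamma_k, normalized by H_n
    return float(np.sum(w * (xs <= level)))

print(scheme_A_cauchy_ou())              # limit is 3/4
```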
[Figures 1-3: approximated density vs. theoretical density on [-20, 20]. Figure 1: Sch. (A), t = 12.5; Figure 2: Sch. (B), t = 16.6; Figure 3: Sch. (C), t = 16.4.]
We observe that the best rate is obtained for the exact Euler scheme. In order to have a more precise idea of the differences between the three Euler schemes, let us observe $n\mapsto\bar\nu_n(f)$ where $f(x)=|x|^{0.4}$, for several choices of polynomial steps. We set $\gamma_n=\eta_n=1/n^{\theta}$ and $u_n=\sqrt{\gamma_n}$ for scheme (B) (resp. $\beta=0.5$ for scheme (C)). We are able to show that this choice of truncation gives the best rate (see [14]). We notice that, among the tested steps, the best rate seems to be obtained for $\theta=0.3$. Notably, for schemes (B) and (C), we see that if the step is slow, the stabilization is slow, and if the step is too fast ($\theta=0.1$), there are not sufficient variations to correct the error.

Example 2. Now, we study the following S.D.E.:
$$dX_t=(1-X_{t^-})\,dt-X_{t^-}\,dZ_t$$
[Figures 4-6: $n\mapsto\bar\nu_n(f)$ for the steps $\theta\in\{0.1,0.3,0.5,0.7\}$, $n$ = iterations $\times10^3$. Figure 4: Scheme (A); Figure 5: Scheme (B); Figure 6: Scheme (C).]
where $(Z_t)_{t\ge0}$ is a drift-free subordinator with Lévy measure $\pi$ defined by
$$\pi(dy)=\frac{f_{\frac32,\frac12}(y)}{y^2}\,dy,$$
where $f_{a,b}$ is the density of the $\beta(a,b)$-distribution. This S.D.E. models the dust generated by a particular E.F.C. process (see Introduction) whose sudden dislocations do not create dust, having parameters (according to the notations of [3]):
$$c_k=0,\qquad c_e=1,\qquad\nu_{\mathrm{coag}}(dy)=f_{\frac32,\frac12}(y)\,dy.$$
We notice that $(\mathbf S_{1,1,\frac12})$ and $(\mathbf R_{1,1,\frac12})$ are satisfied with $V(x)=1+x^2$, but the problem is that we do not have $\kappa(x)=o(x)$. However, as $\operatorname{supp}\pi$ is restricted to $[0,1]$ without singularities at 0 or 1, we are able to show that the assumption $\kappa(x)=o(x)$ is not necessary in this case. Let us then represent the computation of the invariant measure obtained for schemes (B) and (C). (We are not able to simulate scheme (A) in that case.)
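For this example, a rough sketch in the spirit of scheme (B) can be written explicitly, since $f_{\frac32,\frac12}(y)=\frac{2}{\pi}\sqrt{y}/\sqrt{1-y}$ gives $\pi(dy)=\frac{2}{\pi}y^{-3/2}(1-y)^{-1/2}dy$, for which $\pi([u,1])=\frac{4}{\pi}\sqrt{(1-u)/u}$ and a jump conditioned on $[u,1]$ can be sampled exactly as $Y=1/(1+\frac{1-u}{u}U^2)$ with $U$ uniform. All tuning choices below ($\theta=0.3$, $u_k=\sqrt{\gamma_k}$, no compensation of the discarded small jumps) are ours, not the paper's. As a rough sanity check (our computation), the first moment of the invariant law solves $1=(1+m)\,\mathbb E[X]$ with $m=\int y\,\pi(dy)=2$, i.e. $\mathbb E[X]=1/3$; the simulated weighted mean should land near that value, somewhat above it because the truncation removes part of the jump drift:

```python
import numpy as np

rng = np.random.default_rng(4)

# pi(dy) = f_{3/2,1/2}(y)/y^2 dy = (2/pi) y^{-3/2} (1-y)^{-1/2} dy on (0,1)
def tail_mass(u):
    """pi([u, 1]) in closed form."""
    return (4.0 / np.pi) * np.sqrt((1.0 - u) / u)

def sample_jump(u):
    """Exact sample of a jump conditioned to lie in [u, 1] (inverse transform)."""
    return 1.0 / (1.0 + (1.0 - u) / u * rng.random() ** 2)

def dust_weighted_mean(n=60_000, theta=0.3):
    """Approximate Euler scheme with decreasing steps gamma_k = k^{-theta} for
    dX_t = (1 - X_{t-}) dt - X_{t-} dZ_t, where jumps of Z below
    u_k = sqrt(gamma_k) are discarded; returns the weighted empirical mean
    with weights eta_k = gamma_k."""
    x, wsum, acc = 0.5, 0.0, 0.0
    for k in range(1, n + 1):
        gamma = k ** -theta
        u = np.sqrt(gamma)                       # truncation level u_k
        n_jumps = rng.poisson(tail_mass(u) * gamma)
        dz = sum(sample_jump(u) for _ in range(n_jumps))
        x = x + gamma * (1.0 - x) - x * dz       # one Euler step
        wsum += gamma
        acc += gamma * x
    return acc / wsum

print(dust_weighted_mean())
```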
[Figure 7: Approximated density for schemes (B) and (C) on [0, 1], N = 10^6.]
References

[1] Asmussen S., Rosinski J. (2001), Approximations of small jumps of Lévy processes with a view towards simulation, J. of Applied Probability, 38, pp. 482-493.
[2] Barndorff-Nielsen O., Mikosch T., Resnick S. (2001), Lévy Processes and Applications, Birkhäuser.
[3] Berestycki J. (2004), Exchangeable Fragmentation-Coalescence Processes and their Equilibrium Measures, Elec. Jour. of Prob., 9, pp. 770-824.
[4] Berkes I., Horvath L., Khoshnevisan D. (1998), Logarithmic averages of stable random variables are asymptotically normal, Stoch. Processes and Appl., 77, pp. 35-51.
[5] Brosamler G. (1988), An almost everywhere central limit theorem, Math. Proc. Cambridge Phil. Soc., 104, pp. 561-574.
[6] Ethier S., Kurtz T. (1986), Markov Processes, Characterization and Convergence, Wiley Series in Probability and Mathematical Statistics, Wiley, New York.
[7] Deng S. (2000), Pricing electricity derivatives under alternative stochastic spot price models, Proc. of the 33rd Hawaii Conf. on Syst. Sc.
[8] Gnedenko B., Kolmogorov A. (1954), Limit Distributions for Sums of Independent Random Variables, Addison-Wesley, Cambridge, MA.
[9] Heyde C.C. (1969), A note concerning behavior of iterated logarithm type, Proc. Amer. Math. Soc., 23, pp. 85-90.
[10] Jacod J., Protter P. (1991), Une remarque sur les équations différentielles à solutions markoviennes, Séminaire de Probabilités XXV, Lecture Notes in Math. 1485, pp. 138-139, Springer, Berlin.
[11] Lamberton D., Pagès G. (2002), Recursive computation of the invariant distribution of a diffusion, Bernoulli, 8, pp. 367-405.
[12] Lamberton D., Pagès G. (2003), Recursive computation of the invariant distribution of a diffusion: the case of a weakly mean reverting drift, Stoch. Dynamics, 4, pp. 435-451.
[13] Pagès G. (2001), Sur quelques algorithmes récursifs pour les probabilités numériques, ESAIM: P.S., 5, pp. 141-170.
[14] Panloup F., Thesis, in progress.
[15] Protter P. (1990), Stochastic Integration and Differential Equations, Springer.
[16] Protter P., Talay D. (1997), The Euler scheme for Lévy driven stochastic differential equations, Ann. Prob., 25, pp. 393-423.
[17] Sato K. (1999), Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, Cambridge.
[18] Schatte P. (1988), On strong versions of the central limit theorem, Math. Nachr., 137, pp. 249-256.
[19] Stout W.F. (1979), Almost sure invariance principles when $EX_1^2=\infty$, Z. Wahr. verw. Geb., 49, pp. 23-32.
[20] Talay D. (1990), Second order discretization schemes of stochastic differential systems for the computation of the invariant law, Stoch. Stoch. Rep., 29(1), pp. 13-36.