
Semilinear stochastic PDEs driven by additive Wiener noise

2.3. Weak and strong convergence


This section contains the main result of the chapter and its proof. Theorem 2.3.7 states a weak error estimate for abstractly defined approximations of quantities of the form

$$\mathbb{E}[\Phi(X)] = \mathbb{E}\Bigl[\prod_{i=1}^{K}\varphi_i\Bigl(\int_0^T X(t)\,\mathrm{d}\nu^i_t\Bigr)\Bigr]$$

for $(\nu_i)_{i=1}^K \subset \mathcal{M}_T$, $(\varphi_i)_{i=1}^K \subset \mathcal{G}^{2,m}_{\mathrm{p}}(H,\mathbf{R})$, $m > 2$, and $X$ being the solution to (2.1.2). Theorem 2.3.2 provides a strong error estimate for approximations of $X$. For parabolic problems, weak convergence, more precisely, convergence of approximations of $\mathbb{E}[\varphi(X(t))]$ for fixed $t \in [0,T]$, has been considered in [2], and for Volterra equations in Subsection 1.5.7, but only in the linear case $f = 0$. The rate of convergence for approximations of $\mathbb{E}[\Phi(X)]$ is twice the strong rate, as expected. We begin by presenting a family of abstractly defined approximations.

2.3.1. Approximation. Assume that (2.1.1) and (2.1.3)–(2.1.5) hold. Suppose that $(V_h)_{h\in(0,1)}$ is a family of finite-dimensional subspaces of $H$, let $P_h\colon H\to V_h$ denote the orthogonal projector, and let, for a family of constants $(R_s)_{s\in[0,1]}$, the corresponding error operators $(F_{h,\Delta t})_{h,\Delta t\in(0,1)}$, given by $F^n_{h,\Delta t} = B^n_{h,\Delta t} - S(t_n)$ for $n = 0,\dots,N$, satisfy the smooth data error estimate

where $(e^{-tA})_{t\ge0}$ and $(e^{-tA_h})_{t\ge0}$ are the analytic semigroups generated by $-A$ and $-A_h$, respectively. As before, we introduce the piecewise continuous operator function $\tilde F_{h,\Delta t}\colon[0,T]\to\mathcal{L}(H)$ given by $\tilde F_{h,\Delta t}(t) = B^n_{h,\Delta t} - S(t)$ for $t\in(t_{n-1},t_n]$ and $n = 1,\dots,N$, with $\tilde F_{h,\Delta t}(0) = F^0_{h,\Delta t}$. By (2.1.1) and (2.3.2) the family $(\tilde F_{h,\Delta t}(t))_{t\in[0,T]}$ satisfies for $t\in(0,T]$ the bound

$$\bigl\|A^{\frac{s}{2}}\tilde F_{h,\Delta t}(t)\bigr\|_{\mathcal{L}(H)} \le R_s\,t^{-\frac{\sigma+s}{2}}\bigl(h^{\frac{\sigma}{\rho}}+\Delta t^{\frac{\sigma}{2}}\bigr), \quad 0\le\sigma\le2,\ 0\le s\le 1-\sigma/2. \tag{2.3.5}$$

The discrete and continuous stochastic convolutions are defined by

$$W^S(t) = \int_0^t S(t-s)\,\mathrm{d}W(s), \quad t\in[0,T]; \qquad W^{B^n_{h,\Delta t}} = \sum_{j=0}^{n-1}\int_{t_j}^{t_{j+1}} B^{n-j}_{h,\Delta t}\,\mathrm{d}W(t), \quad n = 1,\dots,N.$$

We now define approximations of equation (2.1.2). For $h,\Delta t\in(0,1)$, let $(X^n_{h,\Delta t})_{n=0}^N$ be given by

$$X^n_{h,\Delta t} = B^n_{h,\Delta t}x_0 + \Delta t\sum_{j=0}^{n-1}B^{n-j}_{h,\Delta t}f(X^j_{h,\Delta t}) + W^{B^n_{h,\Delta t}}, \quad n = 1,\dots,N, \tag{2.3.6}$$

and $X^0_{h,\Delta t} = B^0_{h,\Delta t}x_0$.

2.3.2. Strong convergence. Boundedness in the $L^p(\Omega;H)$-sense of the approximate family $(X^n_{h,\Delta t})_{n=0}^N$ is stated in the next proposition. For a proof in the parabolic case, i.e., for $\rho = 1$, see [2, Proposition 3.15]. The general case is proved in the same way but using the different smoothing property in (2.3.1).

Proposition 2.3.1. Let the setting of Section 2.3.1 hold. For $p\ge2$ it holds that

$$\sup_{h,\Delta t\in(0,1)}\ \max_{n\in\{0,\dots,N\}}\bigl\|X^n_{h,\Delta t}\bigr\|_{L^p(\Omega;H)} < \infty.$$

We next prove strong convergence. This is interesting in itself, but it is also used in our proof of the Malliavin regularity of $X$ in Proposition 2.3.4.

Theorem 2.3.2. Let the setting of Section 2.3.1 hold, let $X$ be the solution of (2.1.2) and let $(X_{h,\Delta t})_{h,\Delta t\in(0,1)}$ be given by (2.3.6). For $\gamma\in[0,\beta)$, $p\in[2,\infty)$, there exists $C>0$ such that

$$\max_{n\in\{0,\dots,N\}}\bigl\|X^n_{h,\Delta t}-X(t_n)\bigr\|_{L^p(\Omega;H)} \le C\bigl(h^{\gamma}+\Delta t^{\frac{\rho\gamma}{2}}\bigr), \quad h,\Delta t\in(0,1).$$

Proof. For $n = 0$ the estimate holds by (2.3.2). For $n\ge1$, we take the difference of (2.1.2) and (2.3.6) to obtain the equation for the error:

$$\begin{aligned} X^n_{h,\Delta t}-X(t_n) &= \bigl(B^n_{h,\Delta t}-S(t_n)\bigr)x_0 + \sum_{j=0}^{n-1}\int_{t_j}^{t_{j+1}}\bigl(B^{n-j}_{h,\Delta t}-S(t_n-t)\bigr)f(X(t))\,\mathrm{d}t \\ &\quad + \sum_{j=0}^{n-1}\int_{t_j}^{t_{j+1}}B^{n-j}_{h,\Delta t}\bigl(f(X^j_{h,\Delta t})-f(X(t))\bigr)\,\mathrm{d}t + W^{B^n_{h,\Delta t}} - W^S(t_n). \end{aligned} \tag{2.3.7}$$


The deterministic nature of the first two terms allows us to obtain twice the rate of convergence compared to the other terms. This will be used later in the proof of Lemma 2.3.6. By Proposition 2.2.1, (2.1.4), and (2.3.5), the first term is bounded as in (2.3.8), and using (2.1.4), (2.1.6), (2.3.1), and Proposition 2.2.2 yields the bound (2.3.9) for the second term.

For the error of the stochastic convolution we write the difference in the form

$$W^{B^n_{h,\Delta t}} - W^S(t_n) = \int_0^{t_n}\tilde F_{h,\Delta t}(t_n-t)\,\mathrm{d}W(t). \tag{2.3.10}$$

By (2.1.7) and (2.3.5) with $\sigma = \gamma\rho$ and $s = 1-\beta\rho$, we obtain the estimate

$$\begin{aligned} \bigl\|W^{B^n_{h,\Delta t}} - W^S(t_n)\bigr\|_{L^p(\Omega;H)} &\le \Bigl(\frac{p(p-1)}{2}\int_0^{t_n}\bigl\|A^{\frac{\beta\rho-1}{2}}\bigr\|^2_{\mathcal{L}^0_2}\,\bigl\|A^{\frac{1-\beta\rho}{2}}\tilde F_{h,\Delta t}(t)\bigr\|^2_{\mathcal{L}(H)}\,\mathrm{d}t\Bigr)^{\frac12} \\ &\le C R_{1-\beta\rho}\Bigl(\int_0^{t_n}t^{\rho(\beta-\gamma)-1}\,\mathrm{d}t\Bigr)^{\frac12}\bigl(h^{\gamma}+\Delta t^{\frac{\rho\gamma}{2}}\bigr) \le C\bigl(h^{\gamma}+\Delta t^{\frac{\rho\gamma}{2}}\bigr). \end{aligned}$$

Collecting the estimates yields that, for all $n = 0,\dots,N$,

$$\bigl\|X^n_{h,\Delta t}-X(t_n)\bigr\|_{L^p(\Omega;H)} \le C\Bigl(h^{\gamma}+\Delta t^{\frac{\rho\gamma}{2}} + \Delta t\sum_{j=0}^{n-1}\bigl\|X^j_{h,\Delta t}-X(t_j)\bigr\|_{L^p(\Omega;H)}\Bigr).$$

The proof is concluded by using Gronwall’s lemma.
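The Gronwall step can be illustrated numerically: a sequence satisfying $e_n \le C(\varepsilon + \Delta t\sum_{j<n}e_j)$ obeys $e_n \le C\varepsilon(1+C\Delta t)^n \le C\varepsilon e^{CT}$. A minimal sketch in Python, where the constants $C$ and $\varepsilon$ are illustrative choices, not values from the text:

```python
import math

def worst_case(eps, C, dt, N):
    """Sequence satisfying e_n = C*(eps + dt*sum_{j<n} e_j) with equality."""
    e = [C * eps]  # n = 0
    for _ in range(N):
        e.append(C * (eps + dt * sum(e)))
    return e

C, eps, T, N = 2.0, 1e-3, 1.0, 100
dt = T / N
e = worst_case(eps, C, dt, N)

# Discrete Gronwall bound: e_n <= C*eps*(1 + C*dt)**n <= C*eps*exp(C*T)
bound = C * eps * math.exp(C * T)
assert all(en <= bound for en in e)
```

For this worst-case sequence one in fact has $e_n = C\varepsilon(1+C\Delta t)^n$ exactly, so the exponential bound is sharp up to the factor $(1+C\Delta t)^N \le e^{CT}$.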

2.3.3. Regularity and weak convergence. Here we state and prove our main result on weak convergence. It is based on a strong error estimate in the $\mathcal{M}^{1,p}(H)$ norm combined with boundedness of $X$ and $X_{h,\Delta t}$ in $\mathcal{M}^{1,p,q}(H)$ for suitable $p, q$. The methodology was introduced in [2], but here we exploit it further in a more general setting. We begin by proving the Malliavin differentiability of $X_{h,\Delta t}$.

Proposition 2.3.3. Let the setting of Section 2.3.1 hold, and let $X_{h,\Delta t}$ be given by (2.3.6). For $p\in[2,\infty)$, $q\in[2,\frac{2}{1-\rho\beta})$, it holds that

$$\sup_{h,\Delta t\in(0,1)}\ \max_{n\in\{0,\dots,N\}}\Bigl(\bigl\|X^n_{h,\Delta t}\bigr\|_{\mathcal{M}^{1,p,q}(H)} + \bigl\|X^n_{h,\Delta t}\bigr\|_{\mathcal{M}^{1,\infty,q}(H)}\Bigr) < \infty.$$

Sketch of proof. Note first that $DX^0_{h,\Delta t} = 0$, as $X^0_{h,\Delta t}$ is deterministic. Therefore, it follows inductively that $X^j_{h,\Delta t}$, $j = 0,\dots,N$, are differentiable and the derivative satisfies the equation

$$D_rX^n_{h,\Delta t} = \Delta t\sum_{j=0}^{n-1}B^{n-j}_{h,\Delta t}f'(X^j_{h,\Delta t})D_rX^j_{h,\Delta t} + \sum_{j=0}^{n-1}\chi_{[t_j,t_{j+1})}(r)B^{n-j}_{h,\Delta t}.$$

The proof is performed by straightforward analysis of this equation using the discrete Gronwall lemma; see [2, Proposition 3.16] for details in the parabolic case $\rho = 1$. The general case is treated analogously.

The Malliavin regularity of $X$ is next obtained by a limiting procedure.

Proposition 2.3.4. Let the setting of Section 2.3.1 hold and let $X$ be the solution to (2.1.2). For $p\in[2,\infty)$, $q\in[2,\frac{2}{1-\rho\beta})$, it holds that $X\in C((0,T),\mathcal{M}^{1,p,q}(H))$. Furthermore, we have that

$$\sup_{t\in[0,T]}\bigl\|X(t)\bigr\|_{\mathcal{M}^{1,\infty,q}(H)} < \infty.$$

Proof. Let $\tilde X_{h,\Delta t}(t) = X^n_{h,\Delta t}$ for $t\in[t_n,t_{n+1})$, $n = 0,\dots,N-1$, $h,\Delta t\in(0,1)$. By Proposition 2.3.3 it holds, in particular, that the family $(\tilde X_{h,\Delta t})_{h,\Delta t\in(0,1)}$ is bounded in the Hilbert space $\mathcal{X} = L^2((0,T);\mathcal{M}^{1,2,2}(H))$. From Proposition 2.2.2 and Theorem 2.3.2 it follows that $\tilde X_{h,\Delta t}\to X$ as $h,\Delta t\to0$ in the Hilbert space $\mathcal{Y} = L^2((0,T);L^2(\Omega;H))$. Therefore, by Lemma 2.1.1, $X\in\mathcal{X} = L^2((0,T);\mathcal{M}^{1,2,2}(H))$.


By [41, Lemma 3.6] it holds that also $\int_0^{\cdot}S(\cdot-s)f(X(s))\,\mathrm{d}s\in L^2((0,T);\mathcal{M}^{1,2,2}(H))$ with

$$D_r\int_0^tS(t-s)f(X(s))\,\mathrm{d}s = \int_r^tS(t-s)f'(X(s))D_rX(s)\,\mathrm{d}s, \quad 0\le r\le t\le T,$$

and $\int_0^{\cdot}S(\cdot-s)\,\mathrm{d}W(s)\in L^2((0,T);\mathcal{M}^{1,2,2}(H))$ with $D_r\int_0^tS(t-s)\,\mathrm{d}W(s) = S(t-r)$ for $0\le r\le t\le T$. We remark that [41, Lemma 3.6] is formulated for semigroups, but the semigroup property is not used in the proof. We have thus proved that we can differentiate the equation for $X$ term by term, and obtain the equation

$$D_rX(t) = \begin{cases} S(t-r)+\displaystyle\int_r^tS(t-s)f'(X(s))D_rX(s)\,\mathrm{d}s, & t\in(r,T],\\ 0, & t\in[0,r]. \end{cases}$$

A straightforward analysis of this equation, by a Gronwall argument as in the proof of [2, Proposition 3.10], completes the proof.

In the proof of [2, Lemma 4.6], which is the analogue of Lemma 2.3.6 below, a bound

$$\bigl\|A_h^{-\frac{\delta}{2}}P_hx\bigr\| \le \bigl\|A_h^{-\frac{\delta}{2}}P_hA^{\frac{\delta}{2}}\bigr\|\,\bigl\|A^{-\frac{\delta}{2}}x\bigr\| \le C\bigl\|A^{-\frac{\delta}{2}}x\bigr\| \tag{2.3.11}$$

was used in the special case $\delta = 1$. This estimate is true for all $\delta\in[0,1]$ for both the finite element method and for spectral approximation. For $\delta > 1$ it holds only for spectral approximation. As we need $\delta\in[0,2/\rho)$ we cannot rely on (2.3.11). In [103, Lemma 5.3] it is shown that for finite element discretizations and for $\delta = 0,1,2$ it holds that

$$\bigl\|A_h^{-\frac{\delta}{2}}P_hx\bigr\| \le C\bigl(\bigl\|A^{-\frac{\delta}{2}}x\bigr\| + h^{\delta}\|x\|\bigr), \quad x\in H.$$

The next lemma is a generalization of this result, assuming the availability of a non-smooth data estimate of the form (2.3.4). It will be used in the proof of Lemma 2.3.6 below with $\mathcal{X} = \mathcal{M}^{1,p}(H)$ for a certain $p$. By using it we do not have to rely on (2.3.11), and in this way we may include finite element discretizations under the same generality as spectral approximations.

Lemma 2.3.5. Let the setting of Section 2.3.1 hold and let $\mathcal{X}$ be a Banach space such that the embedding $L^2(\Omega;H)\subset\mathcal{X}$ is continuous. For $\kappa\in[0,2)$, $\sigma\in[0,\kappa)$, there exists $C>0$ such that for $Y\in L^2(\Omega;H)$ it holds that

$$\bigl\|A_h^{-\frac{\kappa}{2}}P_hY\bigr\|_{\mathcal{X}} \le \bigl\|A^{-\frac{\kappa}{2}}Y\bigr\|_{\mathcal{X}} + Ch^{\sigma}\bigl\|Y\bigr\|_{L^2(\Omega;H)}, \quad h\in(0,1).$$

Proof. By the continuous embedding $L^2(\Omega;H)\subset\mathcal{X}$ we get that

$$\bigl\|A_h^{-\frac{\kappa}{2}}P_hY\bigr\|_{\mathcal{X}} \le \bigl\|A^{-\frac{\kappa}{2}}Y\bigr\|_{\mathcal{X}} + \bigl\|\bigl(A_h^{-\frac{\kappa}{2}}P_h - A^{-\frac{\kappa}{2}}\bigr)Y\bigr\|_{\mathcal{X}} \le \bigl\|A^{-\frac{\kappa}{2}}Y\bigr\|_{\mathcal{X}} + C\bigl\|A_h^{-\frac{\kappa}{2}}P_h - A^{-\frac{\kappa}{2}}\bigr\|_{\mathcal{L}(H)}\bigl\|Y\bigr\|_{L^2(\Omega;H)}.$$

By [87, Chapter 2, (6.9)] we have that

$$A_h^{-\frac{\kappa}{2}}P_h - A^{-\frac{\kappa}{2}} = \frac{1}{\Gamma(\kappa/2)}\int_0^{\infty}t^{\frac{\kappa}{2}-1}\bigl(e^{-tA_h}P_h - e^{-tA}\bigr)\,\mathrm{d}t.$$

Therefore, by (2.3.4), the operator norm of the difference is bounded by $Ch^{\sigma}$, which completes the proof.

The next result is a strong error estimate in the $\mathcal{M}^{1,p}(H)$ norm. Together with the regularity stated in Propositions 2.3.3 and 2.3.4 it is the key to the proof of Theorem 2.3.7 below on weak convergence.

Lemma 2.3.6. Let the setting of Section 2.3.1 hold, let $X$ be the solution of (2.1.2), and let $X_{h,\Delta t}$ be given by (2.3.6). For $\gamma\in[0,\beta)$, $p = \frac{2}{1-\rho\gamma}$, there exists $C > 0$ such that

Proof. The proof is similar to that of Theorem 2.3.2. First note that we have the continuous embeddings $H\subset L^p(\Omega;H)\subset L^2(\Omega;H)\subset\mathcal{M}^{1,p}(H)$. The first two terms were already estimated as desired in (2.3.8) and (2.3.9). Choose $\kappa$ so that $\max(\delta,2\gamma) < \kappa < 2/\rho$, where $\delta$ is the parameter in (2.1.4), so that $\rho\kappa < 2$.

For the first term we get a bound by (2.1.4) and Propositions 2.2.1 and 2.3.1. By Propositions 2.2.2, 2.3.3 and 2.3.4, we conclude a corresponding bound for $A^{-\frac{\kappa}{2}}\bigl(f(X^j_{h,\Delta t})-f(X(t))\bigr)$.

By (2.3.10), (2.1.8), and (2.3.5), with $s = 1-\beta\rho$, $\sigma = 2\gamma\rho$, and since $p = \frac{2}{1-\rho\gamma}$, the error of the stochastic convolution is bounded accordingly.

Lemma 2.1.2 finishes the proof.

We next state our main result on weak convergence. Note that this is a more general type of weak convergence result than the ones in the previous chapter, which concern the convergence of $|\mathbb{E}[\varphi(X^n_{h,\Delta t})-\varphi(X(t_n))]|$ for fixed $t_n\in[0,T]$. This is a special case of the following theorem.

Theorem 2.3.7. Let $X$ be the solution of (2.1.2) and let $X_{h,\Delta t}$ be given by (2.3.6).

Proof. We start by observing that by (2.1.6) we have the required representation of the weak error; here we use the convention that an empty product equals 1. Propositions 2.3.3 and 2.3.4 ensure the boundedness needed to conclude. This completes the proof.

Finally, we formulate a corollary of Theorem 2.3.7 that can be used to prove convergence of covariances and higher order statistics of approximate solutions. We demonstrate this for covariances; higher order statistics can be treated in a similar way.

Corollary 2.3.8. Let $X$ be the solution of (2.1.2) and let $X_{h,\Delta t}$ be given by (2.3.6). Let $\tilde X_{h,\Delta t}(t) = X^n_{h,\Delta t}$ for $t\in[t_n,t_{n+1})$, $n\in\{0,\dots,N-1\}$, and $\tilde X_{h,\Delta t}(T) = X^N_{h,\Delta t}$. In particular, for $\phi_1,\phi_2\in H$ and $t_1,t_2\in(0,T]$, it holds that

$$\Bigl|\operatorname{Cov}\bigl(\langle\tilde X_{h,\Delta t}(t_1),\phi_1\rangle,\langle\tilde X_{h,\Delta t}(t_2),\phi_2\rangle\bigr) - \operatorname{Cov}\bigl(\langle X(t_1),\phi_1\rangle,\langle X(t_2),\phi_2\rangle\bigr)\Bigr| \le C\bigl(h^{2\gamma}+\Delta t^{\rho\gamma}\bigr), \quad h,\Delta t\in(0,1).$$

Proof. The first statement follows from Theorem 2.3.7 by setting $\varphi_i = \langle\phi_i,\cdot\rangle$, $\nu_i = \delta_{t_i}$, $i\in\{1,\dots,K\}$, where $\delta_{t_i}$ is the Dirac measure concentrated at $t_i$. The second is a consequence of the first and the fact that

$$\begin{aligned} &\operatorname{Cov}\bigl(\langle\tilde X_{h,\Delta t}(t_1),\phi_1\rangle,\langle\tilde X_{h,\Delta t}(t_2),\phi_2\rangle\bigr) - \operatorname{Cov}\bigl(\langle X(t_1),\phi_1\rangle,\langle X(t_2),\phi_2\rangle\bigr) \\ &\quad = \mathbb{E}\bigl[\langle\tilde X_{h,\Delta t}(t_1),\phi_1\rangle\langle\tilde X_{h,\Delta t}(t_2),\phi_2\rangle\bigr] - \mathbb{E}\bigl[\langle X(t_1),\phi_1\rangle\langle X(t_2),\phi_2\rangle\bigr] \\ &\qquad - \mathbb{E}\bigl[\langle\tilde X_{h,\Delta t}(t_1),\phi_1\rangle - \langle X(t_1),\phi_1\rangle\bigr]\,\mathbb{E}\bigl[\langle X(t_2),\phi_2\rangle\bigr] \\ &\qquad - \mathbb{E}\bigl[\langle\tilde X_{h,\Delta t}(t_1),\phi_1\rangle\bigr]\,\mathbb{E}\bigl[\langle\tilde X_{h,\Delta t}(t_2),\phi_2\rangle - \langle X(t_2),\phi_2\rangle\bigr]. \end{aligned}$$
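The covariance decomposition used in this last step can be checked numerically, with empirical means playing the role of expectations. Here `a1`, `b1` stand in for the approximate functionals $\langle\tilde X_{h,\Delta t}(t_1),\phi_1\rangle$, $\langle\tilde X_{h,\Delta t}(t_2),\phi_2\rangle$ and `a2`, `b2` for the exact ones; the data is synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic samples standing in for the four scalar functionals
a1, b1, a2, b2 = rng.standard_normal((4, 1000))

def cov(u, v):
    # covariance with respect to the empirical measure
    return np.mean(u * v) - np.mean(u) * np.mean(v)

lhs = cov(a1, b1) - cov(a2, b2)
rhs = (np.mean(a1 * b1) - np.mean(a2 * b2)
       - np.mean(a1 - a2) * np.mean(b2)
       - np.mean(a1) * (np.mean(b1) - np.mean(b2)))
assert abs(lhs - rhs) < 1e-12
```

The identity is purely algebraic, so it holds for any samples up to floating-point rounding.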

2.4. Examples

In this section we consider two different types of equations and write them in the abstract form of Section 2.1. We verify the abstract assumptions in both cases. Numerical approximations by the finite element method and suitable time discretization schemes are proved to satisfy the assumptions of Section 2.3. We start with parabolic stochastic partial differential equations and continue with Volterra equations in a separate subsection.

2.4.1. Stochastic parabolic partial differential equations. Let $\mathcal{D}\subset\mathbf{R}^d$ for $d = 1,2,3$ be a convex polygonal domain. Let $\Delta = \sum_{i=1}^d\frac{\partial^2}{\partial x_i^2}$ be the Laplace operator and $g\in\mathcal{G}^2_{\mathrm{b}}(\mathbf{R},\mathbf{R})$. We consider the stochastic partial differential equation:

$$\begin{aligned} \dot u(t,x) &= \Delta u(t,x) + g(u(t,x)) + \dot\eta(t,x), && (t,x)\in(0,T]\times\mathcal{D},\\ u(t,x) &= 0, && (t,x)\in(0,T]\times\partial\mathcal{D},\\ u(0,x) &= u_0(x), && x\in\mathcal{D}. \end{aligned} \tag{2.4.1}$$

The noise $\dot\eta$ is not well defined as a function, as it is written, but makes sense as a random measure. We will study this equation in the abstract framework of Section 2.1. Let $H = L^2(\mathcal{D})$ and let $A\colon\mathcal{D}(A)\subset H\to H$ be given by $A = -\Delta$ with $\mathcal{D}(A) = H^1_0(\mathcal{D})\cap H^2(\mathcal{D})$. Let $(S(t))_{t\in[0,T]}$ denote the analytic semigroup $S(t) = e^{-tA}$ of bounded linear operators generated by $-A$. Assumption 2.1.1 is satisfied with $\rho = 1$, as is easily seen by a spectral argument. The drift $f\colon H\to H$ is the Nemytskii operator determined by the action $(f(r))(x) = g(r(x))$, $x\in\mathcal{D}$, $r\in H$. Condition (2.1.4) for $f$ is verified in [106] for $\delta = \frac{d}{2}+\varepsilon$. With these definitions we may write (2.4.1) in the abstract Itô form

$$\mathrm{d}X(t) + AX(t)\,\mathrm{d}t = f(X(t))\,\mathrm{d}t + \mathrm{d}W(t), \quad t\in(0,T]; \qquad X(0) = x_0,$$

with mild solution given by (2.1.2).

Let $(\mathcal{T}_h)_{h\in(0,1)}$ denote a family of regular triangulations of $\mathcal{D}$, where $h$ denotes the maximal mesh size. Let $(V_h)_{h\in(0,1)}$ be the finite element spaces of continuous piecewise linear functions with respect to $(\mathcal{T}_h)_{h\in(0,1)}$ and $P_h\colon H\to V_h$ be the orthogonal projector. Let the operators $A_h\colon V_h\to V_h$ be defined by (1.5.52).


Remark 2.4.1. If the domain $\mathcal{D}$ is such that the pairs of eigenvalues and eigenfunctions $(\lambda_n,e_n)_{n\in\mathbf{N}}$ of $A$ are known, e.g., $\mathcal{D} = [0,1]^d$, then instead of a finite element discretization one can consider a spectral Galerkin approximation. Let the eigenvalues be ordered increasingly, so that $\lambda_n\le\lambda_{n+1}$ for every $n\in\mathbf{N}$. Further, let $h = \lambda_{N+1}^{-\frac12}$ and $V_h = \operatorname{span}\{e_n : n\le N\}$. By $P_h\colon H\to V_h$ we denote the orthogonal projector and we define $A_h = AP_h = P_hA = P_hAP_h$.

We discretize in time by a semi-implicit Euler–Maruyama method. By defining $B^1_{h,\Delta t} = (I+\Delta tA_h)^{-1}P_h$ and $B^n_{h,\Delta t} = (B^1_{h,\Delta t})^nP_h$ for $n\ge1$, the discrete solutions $(X^n_{h,\Delta t})_{n=0}^N$ are recursively given by

$$X^n_{h,\Delta t} = B^1_{h,\Delta t}X^{n-1}_{h,\Delta t} + \Delta t\,B^1_{h,\Delta t}f(X^{n-1}_{h,\Delta t}) + \int_{t_{n-1}}^{t_n}B^1_{h,\Delta t}\,\mathrm{d}W(s), \quad n = 1,\dots,N, \qquad X^0_{h,\Delta t} = P_hx_0.$$

Iterating the scheme gives the discrete variation of constants formula (2.3.6). For both the finite element and the spectral approximation the assumptions (2.3.1)–(2.3.4) are valid; see, for example, [103].
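As a concrete illustration, the semi-implicit Euler–Maruyama step can be sketched for (2.4.1) with $d = 1$ and $\mathcal{D} = (0,1)$, using the spectral Galerkin approximation of Remark 2.4.1 so that $A_h$ is diagonal with $\lambda_n = (n\pi)^2$, $e_n = \sqrt2\sin(n\pi x)$. The nonlinearity, the noise spectrum $q_n$, the initial value, and all numerical parameters below are illustrative assumptions, not taken from the text; the nonlinearity is evaluated pseudo-spectrally on a grid:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 64                      # number of spectral modes = interior grid points
N = 200                     # time steps
T = 1.0
dt = T / N
n = np.arange(1, M + 1)
lam = (np.pi * n) ** 2      # eigenvalues of A = -d^2/dx^2 on (0,1), Dirichlet BC
x = n / (M + 1)             # interior grid points
S = np.sqrt(2) * np.sin(np.pi * np.outer(x, n))   # e_n evaluated on the grid

g = np.tanh                 # illustrative Lipschitz nonlinearity (assumption)
q = lam ** -1.1             # illustrative trace-class noise spectrum (assumption)

# coefficients of the initial value x_0(x) = x(1-x), computed by quadrature
X = S.T @ (x * (1 - x)) / (M + 1)

step = 1.0 / (1.0 + dt * lam)        # diagonal (I + dt*A_h)^(-1)
for _ in range(N):
    u = S @ X                         # current solution on the grid
    f_coef = S.T @ g(u) / (M + 1)     # project the Nemytskii nonlinearity
    dW = np.sqrt(dt * q) * rng.standard_normal(M)
    X = step * (X + dt * f_coef + dW)

u_T = S @ X   # approximate solution at time T on the grid
```

Each pass of the loop performs one step $X^n = B^1_{h,\Delta t}(X^{n-1} + \Delta t\,f(X^{n-1}) + \Delta W^n)$ in coefficient space, where the inverse $(I+\Delta tA_h)^{-1}$ is just a componentwise division thanks to the diagonal $A_h$.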

2.4.2. Stochastic Volterra integro-differential equations. Consider the semilinear stochastic Volterra type equation

$$\begin{aligned} \dot u(t,x) &= \int_0^t b(t-s)\Delta u(s,x)\,\mathrm{d}s + g(u(t,x)) + \dot\eta(t,x), && (t,x)\in(0,T]\times\mathcal{D},\\ u(t,x) &= 0, && (t,x)\in(0,T]\times\partial\mathcal{D},\\ u(0,x) &= u_0(x), && x\in\mathcal{D}. \end{aligned} \tag{2.4.2}$$

Suppose that $b$ satisfies Assumptions 1.5.1 and 1.5.30. We write (2.4.2) in the abstract Itô form

$$\mathrm{d}X(t) + \Bigl(\int_0^t b(t-s)AX(s)\,\mathrm{d}s\Bigr)\mathrm{d}t = f(X(t))\,\mathrm{d}t + \mathrm{d}W(t), \quad t\in(0,T]; \qquad X(0) = x_0, \tag{2.4.3}$$

with $A, f, W, x_0$ as in Subsection 2.4.1. Here one needs $\delta = \frac{d}{2}+\varepsilon < \frac{2}{\rho}$; this requires $\rho < \frac43$ and $\varepsilon$ small in the case $d = 3$, but causes no restrictions in the cases $d = 1,2$. Under the above assumptions there exists a resolvent family of operators $(S(t))_{t\in[0,T]}$ defined by (1.5.7), and the mild solution of (2.4.3) is given by (2.1.2). By Propositions 1.5.6 and 1.5.9 condition (2.1.1) holds for $S$. Thus, the setting of Section 2.1 is applicable.

Using the notation from Subsections 1.5.6 and 1.5.7 we now turn our attention to the numerical approximation via the semilinear variant of (1.5.61):

$$X^n_{h,\Delta t} - X^{n-1}_{h,\Delta t} + \Delta t\sum_{k=1}^n\omega_{n-k}A_hX^k_{h,\Delta t} = \Delta t\,P_hf(X^{n-1}_{h,\Delta t}) + \int_{t_{n-1}}^{t_n}P_h\,\mathrm{d}W(t), \quad n = 1,\dots,N, \qquad X^0_{h,\Delta t} = P_hx_0.$$
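A per-step implementation of this implicit scheme can be sketched as follows. Since the weights $\omega_k$ from (1.5.62) are not reproduced here, we substitute simple quadrature weights for the illustrative kernel $b(t) = t^{\rho-2}/\Gamma(\rho-1)$, use a diagonal (spectral) $A_h$, and apply the nonlinearity componentwise for brevity. All of these choices are assumptions for illustration, not the construction from the text:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

rho = 1.5                                   # illustrative kernel parameter (assumption)
M, N, T = 32, 100, 1.0
dt = T / N
lam = (np.pi * np.arange(1, M + 1)) ** 2    # diagonal A_h (spectral approximation)

# Illustrative quadrature weights (NOT the weights from (1.5.62)):
# dt*omega_k = int_{t_k}^{t_{k+1}} b(s) ds for b(t) = t^(rho-2)/Gamma(rho-1).
k = np.arange(N + 1) * dt
omega = (k[1:] ** (rho - 1) - k[:-1] ** (rho - 1)) / (math.gamma(rho) * dt)

g = np.tanh                                 # illustrative Lipschitz nonlinearity
X = np.zeros((N + 1, M))
X[0] = 1.0 / lam                            # illustrative smooth initial data

for n in range(1, N + 1):
    # explicit part of the memory term: dt * sum_{j=1}^{n-1} omega_{n-j} A_h X^j
    hist = dt * sum(omega[n - j] * lam * X[j] for j in range(1, n))
    dW = math.sqrt(dt) * lam ** (-0.6) * rng.standard_normal(M)
    # solve (I + dt*omega_0*A_h) X^n = X^{n-1} - hist + dt*g(X^{n-1}) + dW
    X[n] = (X[n - 1] - hist + dt * g(X[n - 1]) + dW) / (1.0 + dt * omega[0] * lam)
```

Only the $k = n$ term of the discrete convolution is treated implicitly, so each step requires one solve with $I + \Delta t\,\omega_0 A_h$ (a componentwise division in the diagonal case), while the history enters explicitly through the accumulated sum.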

It is possible to write $(X^n_{h,\Delta t})_{n=0}^N$ as a variation of constants formula (2.3.6) with $B^n_{h,\Delta t}$ defined by (1.5.62). The stability (2.3.1) holds by (1.5.71)–(1.5.73) and interpolation. The smooth data error estimate (2.3.2) was proved in Lemma 1.5.27. It remains to verify (2.3.3). By (1.5.74) there exists $\tilde C$ so that

$$\bigl\|F^n_{h,\Delta t}\bigr\|_{\mathcal{L}(H)} \le \tilde C\,t_n^{-\frac{\delta}{2}}\bigl(h^{\frac{\delta}{\rho}}+\Delta t^{\frac{\delta}{2}}\bigr), \quad 0\le\delta\le2,\ n = 1,\dots,N.$$

Let $0\le\delta\le2$. Interpolation with $0\le s\le1$ yields

$$\begin{aligned} \bigl\|A^{\frac{s}{2}}F^n_{h,\Delta t}\bigr\| &\le \bigl\|F^n_{h,\Delta t}\bigr\|^{1-s}\bigl\|A^{\frac12}F^n_{h,\Delta t}\bigr\|^{s} \le \bigl\|F^n_{h,\Delta t}\bigr\|^{1-s}\Bigl(\bigl\|A^{\frac12}S(t_n)\bigr\| + \bigl\|A^{\frac12}B^n_{h,\Delta t}\bigr\|\Bigr)^{s} \\ &\le \Bigl(\tilde C\,t_n^{-\frac{\delta}{2}}\bigl(h^{\frac{\delta}{\rho}}+\Delta t^{\frac{\delta}{2}}\bigr)\Bigr)^{1-s}\Bigl(2L_{\frac12}\,t_n^{-\frac12}\Bigr)^{s} \le \tilde C^{1-s}\bigl(2L_{\frac12}\bigr)^{s}\,t_n^{-\frac{\delta(1-s)+s}{2}}\Bigl(h^{\frac{\delta(1-s)}{\rho}}+\Delta t^{\frac{\delta(1-s)}{2}}\Bigr). \end{aligned}$$

Setting $\sigma = \delta(1-s)$ and $R_s = \tilde C^{1-s}(2L_{\frac12})^{s}$ yields the estimate

$$\bigl\|A^{\frac{s}{2}}F^n_{h,\Delta t}\bigr\| \le R_s\,t_n^{-\frac{\sigma+s}{2}}\bigl(h^{\frac{\sigma}{\rho}}+\Delta t^{\frac{\sigma}{2}}\bigr), \quad 0\le\sigma\le2,\ 0\le s\le1-\frac{\sigma}{2},$$

for $n = 1,\dots,N$. Therefore (2.3.3) holds.

CHAPTER 3

Linear stochastic PDEs driven by additive Lévy