Intermittency for the stochastic heat equation with Lévy noise

arXiv:1707.04895v2 [math.PR] 29 Jun 2018

Carsten Chong Péter Kevei

Abstract

We investigate the moment asymptotics of the solution to the stochastic heat equation driven by a (d+1)-dimensional Lévy space–time white noise. Unlike the case of Gaussian noise, the solution typically has no finite moments of order 1 + 2/d or higher. Intermittency of order p, that is, the exponential growth of the pth moment as time tends to infinity, is established in dimension d = 1 for all values p ∈ (1,3), and in higher dimensions for some p ∈ (1, 1 + 2/d).

The proof relies on a new moment lower bound for stochastic integrals against compensated Poisson measures. The behavior of the intermittency exponents when p ↑ 1 + 2/d further indicates that intermittency in the presence of jumps is much stronger than in equations with Gaussian noise. The effect of other parameters like the diffusion constant or the noise intensity on intermittency will also be analyzed in detail.

AMS 2010 Subject Classifications: primary: 60H15, 37H15; secondary: 60G51, 35B40

Keywords: comparison principle; intermittency; intermittency fronts; Lévy noise; moment Lyapunov exponents; stochastic heat equation; stochastic PDE

Center for Mathematical Sciences, Technical University of Munich, Boltzmannstraße 3, 85748 Garching, Germany, e-mail: carsten.chong@tum.de

MTA-SZTE Analysis and Stochastics Research Group, Bolyai Institute, Aradi vértanúk tere 1, 6720 Szeged, Hungary, e-mail: kevei@math.u-szeged.hu


1 Introduction

We consider the stochastic heat equation on R^d given by

\partial_t Y(t,x) = \frac{\kappa}{2}\,\Delta Y(t,x) + \sigma(Y(t,x))\,\dot\Lambda(t,x), \quad (t,x)\in(0,\infty)\times\mathbb{R}^d, \qquad Y(0,\cdot) = f,    (1.1)

where κ ∈ (0,∞) is the diffusion constant, σ is a globally Lipschitz function and f is a bounded measurable function on R^d. The forcing term Λ̇, which acts in a multiplicative way on the right-hand side of (1.1), is a Lévy space–time white noise, that is, the distributional derivative of a Lévy sheet in d+1 parameters. More precisely, we assume that Λ takes the form

\Lambda(\mathrm{d}t,\mathrm{d}x) = b\,\mathrm{d}t\,\mathrm{d}x + \rho\,W(\mathrm{d}t,\mathrm{d}x) + \int_{\mathbb{R}} z\,(\mu-\nu)(\mathrm{d}t,\mathrm{d}x,\mathrm{d}z),    (1.2)

where b ∈ R is the mean of Λ, ρ ∈ R is the Gaussian part of Λ, W is a Gaussian space–time white noise (see [26]), µ is a Poisson measure on (0,∞) × R^d × R with intensity measure ν(dt, dx, dz) = dt dx λ(dz), and λ is a Lévy measure satisfying

\lambda(\{0\}) = 0 \quad\text{and}\quad \int_{\mathbb{R}} \big(1 \wedge |z|^2\big)\,\lambda(\mathrm{d}z) < \infty.

Under the assumption that there exists p ∈ [1, 1 + 2/d) with

m_\lambda(p) := \bigg( \int_{\mathbb{R}} |z|^p\,\lambda(\mathrm{d}z) \bigg)^{1/p} < \infty,    (1.3)

it is shown in [23] that (1.1) admits a unique mild solution Y satisfying

\sup_{(t,x)\in[0,T]\times\mathbb{R}^d} \|Y(t,x)\|_p = \sup_{(t,x)\in[0,T]\times\mathbb{R}^d} \mathbb{E}\big[|Y(t,x)|^p\big]^{1/p} < \infty    (1.4)

for all T ≥ 0. A mild solution to (1.1) is a predictable process Y satisfying the stochastic Volterra equation

Y(t,x) = Y_0(t,x) + \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\sigma(Y(s,y))\,\Lambda(\mathrm{d}s,\mathrm{d}y), \quad (t,x)\in(0,\infty)\times\mathbb{R}^d,    (1.5)

where

Y_0(t,x) := \int_{\mathbb{R}^d} g(t, x-y)\,f(y)\,\mathrm{d}y, \quad (t,x)\in(0,\infty)\times\mathbb{R}^d,    (1.6)

and

g(t,x) := g(\kappa;t,x) := \frac{1}{(2\pi\kappa t)^{d/2}}\, e^{-\frac{|x|^2}{2\kappa t}}, \quad (t,x)\in(0,\infty)\times\mathbb{R}^d,    (1.7)

is the heat kernel in dimension d. As proved in [11], condition (1.3) can be relaxed to include Lévy noises with bad moment properties such as α-stable noises, but in this paper, we will work with (1.3) as a standing assumption.
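For readers who want to experiment numerically, the following minimal Python sketch (ours, not part of the paper) runs a crude explicit finite-difference approximation of (1.1) in dimension d = 1, driven by a compensated standard Poisson noise (λ = δ_1, b = 0, ρ = 0) with σ(x) = x. All grid sizes and parameter values are arbitrary illustrative choices, and the periodic boundary is only a stand-in for the whole line R.

# Illustrative sketch only: explicit Euler/finite-difference scheme for (1.1) in d = 1
# with compensated Poisson noise (lambda = delta_1, b = 0, rho = 0) and sigma(x) = x.
# All parameters are arbitrary; the periodic boundary is a crude stand-in for R.
import numpy as np

rng = np.random.default_rng(0)

kappa = 1.0                 # diffusion constant
L, T = 10.0, 1.0            # spatial period and time horizon
nx, nt = 200, 20_000        # grid resolution
dx, dt = L / nx, T / nt
assert 0.5 * kappa * dt / dx**2 <= 0.5, "explicit scheme stability condition"

rate = 1.0                  # jump intensity per unit space-time volume
Y = np.ones(nx)             # flat initial condition f = 1

for _ in range(nt):
    lap = (np.roll(Y, 1) - 2.0 * Y + np.roll(Y, -1)) / dx**2      # periodic Laplacian
    # compensated Poisson mass of the noise on each space-time cell (mean zero)
    noise = rng.poisson(rate * dt * dx, size=nx) - rate * dt * dx
    # the noise acts as a measure, so a cell's mass enters the density with a 1/dx factor
    Y = Y + 0.5 * kappa * lap * dt + Y * noise / dx

print("t =", T, " max:", Y.max(), " mean:", Y.mean())

The rare cells that receive a jump are multiplied by a factor of order 1/dx, which is the discrete shadow of the tall, narrow peaks that drive the intermittency phenomenon discussed below.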

Our goal is to investigate the behavior of the moments of the solution Y as time tends to infinity. In particular, we are interested in conditions under which the solution Y to (1.5) exhibits the phenomenon of intermittency. The following definition follows [7], Definition III.1.1, [12], Equations (1.6) and (1.7), and [20], Definition 7.5.


Definition 1.1 Let Y be the mild solution to (1.5) and p ∈ (0,∞).

(1) Y is said to be weakly intermittent of order p if

0 < \gamma(p) \le \overline{\gamma}(p) < \infty,    (1.8)

where the lower and upper moment Lyapunov exponents γ(p) and γ̄(p) are defined as

\gamma(p) := \liminf_{t\to\infty} \frac{1}{t} \inf_{x\in\mathbb{R}^d} \log \mathbb{E}\big[|Y(t,x)|^p\big] \quad\text{and}\quad \overline{\gamma}(p) := \limsup_{t\to\infty} \frac{1}{t} \sup_{x\in\mathbb{R}^d} \log \mathbb{E}\big[|Y(t,x)|^p\big].    (1.9)

(2) Y is said to have a linear intermittency front of order p if

0 < \lambda(p) \le \overline{\lambda}(p) < \infty,    (1.10)

where the lower and upper intermittency fronts λ(p) and λ̄(p) are defined as

\lambda(p) := \sup\Big\{ \alpha > 0 : \limsup_{t\to\infty} \frac{1}{t} \sup_{|x|\ge \alpha t} \log \mathbb{E}\big[|Y(t,x)|^p\big] > 0 \Big\},

\overline{\lambda}(p) := \inf\Big\{ \alpha > 0 : \limsup_{t\to\infty} \frac{1}{t} \sup_{|x|\ge \alpha t} \log \mathbb{E}\big[|Y(t,x)|^p\big] < 0 \Big\},    (1.11)

with the convention that sup ∅ := 0 and inf ∅ := +∞.
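As a toy illustration of Definition 1.1 (ours, not from the paper), drop the spatial variable altogether and consider X_t = 2^{N_t} e^{-t} for a standard Poisson process N, which solves dX_t = X_{t−} d(N_t − t). Then E[X_t] ≡ X_0, while E[X_t^p] = X_0^p e^{t(2^p − 1 − p)}, so the moment Lyapunov exponent 2^p − 1 − p vanishes at p = 1 and is strictly positive for every p > 1; this is exactly the kind of exponential moment growth that Definition 1.1 quantifies. A short numerical check:

# Toy check (ours): moment Lyapunov exponents of X_t = 2^{N_t} e^{-t}, N_t ~ Poisson(t),
# which solves dX = X_- d(N - t).  Exact value: (1/t) log E[X_t^p] = 2^p - 1 - p.
import numpy as np
from math import lgamma, log

def moment_lyapunov(p, t, kmax=600):
    """(1/t) * log E[X_t^p], computed from the truncated Poisson series."""
    k = np.arange(kmax)
    log_pmf = -t + k * log(t) - np.array([lgamma(i + 1.0) for i in k])   # log P(N_t = k)
    log_moment = -p * t + np.logaddexp.reduce(log_pmf + p * k * log(2.0))
    return log_moment / t

for p in [1.0, 1.5, 2.0, 2.5]:
    print(f"p = {p}: numerics {moment_lyapunov(p, t=50.0):+.4f}   exact {2**p - 1 - p:+.4f}")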

For important classes of random fields, the purely moment-based notion of weak intermittency in (1.8) translates into an interesting path property called physical intermittency: with high probability, the random field exhibits an extreme mass concentration at large times, in the sense that it almost vanishes on R^d except for exponentially small areas where it develops a whole cascade of exponentially sized peaks. We refer to [4, Section 2.4] for a precise statement.

Similarly, if the initial condition f decays at infinity (in this case we cannot expect (1.8) to hold because of the lack of uniformity in the spatial variable), the property (1.10) would indicate that intermittency peaks, originating from the initial mass around the origin, spread in space at a (quasi-)linear speed.

Review of literature

The intermittency problem has been investigated by many authors in various situations. For example, [7] is a classical reference for intermittency in the parabolic Anderson model (PAM) on Z^d, which is the discrete-space analogue of (1.1) with

\sigma(x) = \sigma_0\, x, \quad x \in \mathbb{R},    (1.12)

for some σ_0 > 0. For the stochastic heat equation, and in particular the continuous PAM driven by a Gaussian space–time white noise, this is analyzed in all its facets in [4, 9, 12, 17, 18], just to name a few. We also refer to [20] for a good overview of the subject.

When it comes to stochastic PDEs with non-Gaussian noise, there is much less literature on this topic. Apart from work on the discrete PAM (see [1, 13] and the references therein), we are only aware of [3] that considers the intermittency problem in continuous space and time. This article investigates the Lévy-driven stochastic wave equation in one spatial dimension, and shows that the solution is weakly intermittent of any order p ≥ 2 under natural assumptions. For the proof of the intermittency upper bounds, the authors employ predictable moment estimates for Poisson stochastic integrals, which are surveyed in [21] in detail. The proof of the lower bound, by contrast, relies on L^2-techniques, which are the same as in the Gaussian case treated in [14] or [20].

Summary of results

For the stochastic heat equation (1.1), however, there is an important difference that necessitates the development of new techniques for the intermittency analysis. Namely, as soon as Λ contains a non-Gaussian part, the solution to (1.1) will typically have finite moments only up to the order (1 + 2/d) − ǫ, even if Λ itself has moments of all orders or has bounded jump sizes like in the case of a standard Poisson noise, see Theorem 3.1. In particular, as soon as we are in dimension d ≥ 2, the solution has no finite second moment. This is in sharp contrast to the Gaussian case, where it is well known that the solution to the stochastic heat equation, if it exists, has finite moments of all orders.

Moreover, because we cannot in general expect the solution to be weakly intermittent of order 1, as a consequence of the comparison principle in Theorem 3.3, we are forced to consider moments of non-integer orders in the range (1, 1 + 2/d) ⊆ (1, 2). Therefore, well-established techniques for estimating integer moments of the solution (see [4, 9]) do not apply in this setting.

This problem can be remedied by an appropriate use of the Burkholder–Davis–Gundy (BDG) inequalities for verifying the intermittency upper bounds, see Theorem 2.4. However, for the corresponding lower bounds, the moment estimates that are available in the literature (including again the BDG inequalities, but also “predictable” versions thereof, see e.g. [21]) do not combine well with the recursive Volterra structure of (1.5). So although these estimates are sharp, we cannot apply them to produce the desired intermittency lower bounds. In order to circumvent this, we use decoupling techniques to establish an – up to our knowledge – new moment lower bound for Poisson stochastic integrals in Lemma 3.4, which we think is of independent interest. With this inequality we then prove the weak intermittency of (1.1) under quite general assumptions. More precisely, if Λ has mean zero, we show in Theorem 3.5 and Theorem 3.6 that we have pth order intermittency for all p ∈ (1,3) in dimension 1, and for some p ∈ (1, 1+2/d) in dimensions d ≥ 2. In the latter case, a small diffusion constant κ or a high noise intensity also leads to intermittency of any desired order.

Noises with positive or negative mean are treated in Theorems 3.10 and 3.12, respectively.

Moreover, the moment estimates in Lemma 3.4 also permit us to determine the asymptotics of the intermittency exponents as p → 1 + 2/d or κ → 0, see Theorem 4.1. The results suggest that intermittency in the Lévy case is much more pronounced than with Gaussian noise.

Our proofs further indicate that the principal source of intermittency is different between the jump and the Gaussian case. In fact, intermittency in the Gaussian case is caused by the slow decrease in time of the heat kernel, so peaks in the past are remembered for a long time and accumulate to new peaks in the future. By contrast, in the Lévy-driven equation, it is the singularity of the heat kernel at the origin that causes the high-order intermittent behavior of the solution. So here, for p close to 1 + 2/d, peaks of order p amplify over short time and hence generate even higher peaks. We refer to Remark 3.9 for details.


In the sequel, we will use the letter C to denote a constant whose value may change from line to line and does not depend on anything important in the given context. Sometimes, if we want to stress the dependence of the constant on an important parameter, say p, we will write C_p. Furthermore, for reasons of brevity, we write \int\!\!\int_a^b and \int\!\!\int\!\!\int_a^b for \int_a^b\int_{\mathbb{R}^d} and \int_a^b\int_{\mathbb{R}^d}\int_{\mathbb{R}}, respectively.

2 Intermittency upper bounds

We first investigate the upper indices γ̄(p) and λ̄(p), respectively. For a random field Φ(t,x), indexed by (t,x) ∈ (0,∞) × R^d, and exponents β ∈ R, c ∈ [0,∞) and p ∈ [1,∞), we use the notation

\|\Phi\|_{p,\beta,c} := \sup_{t\in(0,\infty)}\ \sup_{x\in\mathbb{R}^d} e^{-\beta t + c|x|}\, \|\Phi(t,x)\|_p,    (2.1)

and

(g \circledast \Phi)(t,x) = \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\Phi(s,y)\,\Lambda(\mathrm{d}s,\mathrm{d}y)    (2.2)

if Φ is predictable and the stochastic integral (2.2) exists for all (t,x) ∈ (0,∞) × R^d. The key ingredient for the intermittency upper bounds is the following L^p-estimate for stochastic convolutions.

The Gaussian case has been obtained in [12, Proposition 2.5] and [20, Proposition 5.2].

Proposition 2.1 (Weighted stochastic Young inequality) Let d ∈ N, 1 ≤ p < 1 + 2/d and assume that ρ = 0 if p < 2. For any c ≥ 0 and β > κc²d/2, we have

\|g \circledast \Phi\|_{p,\beta,c} \le C_{\beta,c}(\kappa,p)\, \|\Phi\|_{p,\beta,c}    (2.3)

with

C_{\beta,c}(\kappa,p) = C_p \left( \frac{2^d |b|}{\beta - \frac12\kappa c^2 d} + \frac{2^{\frac{d(3-p)}{2p}}\, \Gamma\big(1-\frac{d}{2}(p-1)\big)^{\frac1p}\, m_\lambda(p)}{p^{\frac{2+(2-p)d}{2p}}\, (\pi\kappa)^{\frac{d(p-1)}{2p}}\, \big(\beta-\frac12\kappa c^2 d\big)^{\frac{2-d(p-1)}{2p}}} + \frac{m_\lambda(2) + |\rho|}{\big(2\kappa(\beta-\frac12\kappa c^2)\big)^{\frac14}}\, \mathbf{1}_{\{d=1,\ p\ge 2\}} \right),    (2.4)

where C_p > 0 does not depend on Λ, κ, β, c or d, and it is bounded on [1 + ǫ, 1 + 2/d) for any ǫ > 0.

The assumption in Proposition 2.1 that ρ = 0 if p < 2 means that if d ≥ 2, then necessarily the Gaussian part vanishes because p < 1 + 2/d ≤ 2. This is reasonable since the stochastic heat equation (1.1) has no function-valued solution in general if d ≥ 2 and ρ > 0, see e.g. [20, Section 3.5]. Moreover, in dimension d = 1, we shall only consider the case p ≥ 2 if ρ > 0. The reason is that in the case of Gaussian noise, intermittency of order less than 2 is open, see the remark after Theorem 3.5.

Remark 2.2 The three terms in (2.4) illustrate in a nice way the different contributions of the noise to the size of g ⊛ Φ. The first part comes from the deterministic drift of the noise, the second summand is the L^p-contribution originating from the jumps, and the third term is the L^2-contribution of the jumps and the Gaussian part (if p ≥ 2). It is important to notice that a Gaussian noise alone has no extra L^p-contribution to C_{β,c}(κ,p) for p > 2, which reflects the equivalence of moments of the normal distribution. Furthermore, as p → 1 + 2/d, the second term explodes for all non-trivial Lévy measures λ, no matter how good their integrability properties are.

This is a first indication that the solution to a Lévy-driven stochastic heat equation (1.1) usually has no finite moments of order 1 + 2/d or higher. We confirm this rigorously in Theorem 3.1 below.
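To make Remark 2.2 concrete, one can plug numbers into the second summand of (2.4), with the unspecified constant C_p set to 1 purely for display. The following short Python sketch (ours, with arbitrary parameter choices) shows the blow-up as p approaches 1 + 2/d = 3 in dimension 1, driven by the factor Γ(1 − d(p−1)/2).

# Numerical illustration (ours) of Remark 2.2: the jump contribution to C_{beta,c}(kappa, p)
# in (2.4), with the generic constant C_p set to 1 for display.  It diverges as p -> 1 + 2/d
# because of the Gamma factor, no matter how small m_lambda(p) is.  Parameters are arbitrary.
from math import gamma, pi

def jump_term(p, d=1, kappa=1.0, beta=5.0, c=0.0, m_p=1.0):
    base = beta - 0.5 * kappa * c**2 * d
    num = 2 ** (d * (3 - p) / (2 * p)) * gamma(1 - d * (p - 1) / 2) ** (1 / p) * m_p
    den = (p ** ((2 + (2 - p) * d) / (2 * p))
           * (pi * kappa) ** (d * (p - 1) / (2 * p))
           * base ** ((2 - d * (p - 1)) / (2 * p)))
    return num / den

for p in [1.5, 2.0, 2.5, 2.9, 2.99, 2.999]:
    print(f"d=1, p={p}: jump contribution ~ {jump_term(p):.2f}")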

With the help of Proposition 2.1, we can extend the local moment bound (1.4) obtained in [23] to a global bound.

Proposition 2.3 Assume that f satisfies |f(x)| = O(e^{-c|x|}) as |x| → ∞ for some c ≥ 0 and that σ in (1.5) is Lipschitz continuous with

|\sigma(x) - \sigma(y)| \le L\,|x-y|, \quad x, y \in \mathbb{R},

for some L > 0, and also σ(0) = 0 if c > 0. Further suppose that Λ takes the form (1.2) and satisfies (1.3) for some 1 ≤ p < 1 + 2/d as well as ρ = 0 if p < 2. Then there exists a number β_0 > 0 such that the stochastic heat equation (1.5) has a unique mild solution Y (up to modifications) with ‖Y‖_{p,β,c} < ∞ for all β ≥ β_0.

We obtain as an immediate consequence upper bounds for the moments of the solution Y to the stochastic heat equation (1.5).

Theorem 2.4 (Intermittency upper bounds) Grant the assumptions and notations of Proposition 2.3.

(1) We have γ̄(p) < ∞.

(2) If c > 0 and σ(0) = 0, then λ̄(p) < ∞.

3 Intermittency lower bounds

3.1 High moments

One important difference between the stochastic heat equation with jump noise and with Gaussian noise is that the solution Y to (1.5) has no large moments, even in dimension d = 1 and no matter how good the integrability properties of the jumps are. In order to understand this, let us consider the situation where σ ≡ 1, f ≡ 0, and Λ is a standard Poisson random measure, that is, λ = δ_1, b = 1, and ρ = 0. Denoting by (S_i, Y_i) the space–time locations of the jumps of Λ, we have for (t,x) ∈ (0,∞) × R^d,

Y(t,x) = \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\Lambda(\mathrm{d}s,\mathrm{d}y) = \sum_{i=1}^{\infty} g(t-S_i, x-Y_i)\,\mathbf{1}_{\{S_i < t\}}.


If t > 1, conditionally on the event that at least one point falls into (t-1, t) × \prod_{i=1}^d (x_i - 1, x_i), we have

Y(t,x) \ge g(U, V) = \frac{1}{(2\pi\kappa U)^{d/2}}\, e^{-\frac{|V|^2}{2\kappa U}},

where U, V_1, \dots, V_d are independent and uniformly distributed on (0,1), and V = (V_1, \dots, V_d). Now

\mathbb{E}[g(U,V)^p] = \frac{1}{(2\pi\kappa)^{pd/2}} \int_0^1 u^{-\frac{pd}{2}} \bigg( \int_0^1 e^{-\frac{p v^2}{2\kappa u}}\,\mathrm{d}v \bigg)^{d} \mathrm{d}u = \frac{1}{(2\pi\kappa)^{pd/2}} \int_0^1 u^{\frac{d(1-p)}{2}} \bigg( \int_0^{1/\sqrt{u}} e^{-\frac{p y^2}{2\kappa}}\,\mathrm{d}y \bigg)^{d} \mathrm{d}u

\ge \frac{1}{(2\pi\kappa)^{pd/2}} \bigg( \int_0^1 e^{-\frac{p y^2}{2\kappa}}\,\mathrm{d}y \bigg)^{d} \int_0^1 u^{\frac{d(1-p)}{2}}\,\mathrm{d}u,

which is finite if and only if p < 1 + 2/d. So we conclude that

\mathbb{E}\big[|Y(t,x)|^{1+2/d}\big] = \infty

for all (t,x) ∈ (1,∞) × R^d, and, in fact, for all t > 0. It is not surprising that this holds in a much more general setting. The following result also answers an open problem posed in [3, Remark 1.5].

Its proof will be given after the proof of Theorem 3.6.
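Before stating the theorem, here is a small numerical sanity check of the computation above (ours, not from the paper; it uses scipy only for the error function, and all parameter values are arbitrary): the reduced integral for E[g(U,V)^p] in d = 1 stays moderate for small p but grows without bound as p approaches 1 + 2/d = 3.

# Sanity check (ours) of the heuristic computation above: quadrature approximation of
# E[g(U, V)^p] = (2 pi kappa)^(-pd/2) * int_0^1 u^{d(1-p)/2} (int_0^{1/sqrt(u)}
# exp(-p y^2/(2 kappa)) dy)^d du in d = 1, which blows up as p -> 1 + 2/d = 3.
import numpy as np
from math import pi, sqrt
from scipy.special import erf

def moment(p, d=1, kappa=1.0, n=200_000):
    u = (np.arange(n) + 0.5) / n                                 # midpoint rule on (0, 1)
    inner = sqrt(pi * kappa / (2 * p)) * erf(np.sqrt(p / (2 * kappa)) / np.sqrt(u))
    return (2 * pi * kappa) ** (-p * d / 2) * np.mean(u ** (d * (1 - p) / 2) * inner ** d)

for p in [1.5, 2.0, 2.5, 2.9, 2.99]:
    print(f"d=1, p={p}: E[g(U,V)^p] ~ {moment(p):.3f}")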

Theorem 3.1 (Non-existence of high moments) Consider the situation described in Proposition 2.3 and assume that λ ≢ 0. Furthermore, suppose that there exists (t_0, x_0) ∈ (0,∞) × R^d such that

\sigma(Y_0(t_0, x_0)) \ne 0,    (3.1)

where Y_0 is defined in (1.6). If Y denotes the unique mild solution to (1.1), then

\sup_{(t,x)\in[0,T]\times\mathbb{R}^d} \mathbb{E}\big[|Y(t,x)|^{1+2/d}\big] = +\infty    (3.2)

for all T > t_0.

Remark 3.2 The arguments presented in [4] linking the notion of weak intermittency as defined in Definition 1.1 with physical intermittency remain valid even if γ(p) = ∞ for large values of p, provided we have γ(p) ↑ ∞ as p ↑ p_max = inf{p > 0 : γ(p) = ∞} ≤ 1 + 2/d. Under mild assumptions, this is indeed the case as we will see in Theorem 4.1.

3.2 The martingale case

In this subsection, we assume that Λ has mean zero, that is, b = 0. As in the Gaussian case, we cannot hope for weak intermittency of order 1 in general. This is a consequence of the following comparison principle for the stochastic heat equation driven by a nonnegative pure-jump Lévy noise, whose proof we postpone to the end of Section 5.2. The Gaussian analogue was established in [22, Theorem 3.1].


Theorem 3.3 (Comparison principle) Let σ be a non-decreasing Lipschitz function and Λ be a Lévy noise as in (1.2) with b ∈ R, ρ = 0 and λ satisfying λ((−∞,0]) = 0 and m_λ(p) < ∞ for some p ∈ [1, 1+2/d). Assume that f_1 ≥ f_2 ≥ 0 are two bounded measurable initial conditions, and Y_1 and Y_2 the corresponding mild solutions to (1.1). There exist modifications of Y_1 and Y_2 such that, with probability 1, we have Y_1(t,x) ≥ Y_2(t,x) for all (t,x) ∈ [0,∞) × R^d.

In particular, if we have in addition that f is a bounded nonnegative function and 0 ≤ σ(x) ≤ Lx for some L > 0, then the mild solution Y to (1.1) has a nonnegative modification with

e^{(b\wedge 0)Lt} \int_{\mathbb{R}^d} g(t, x-y)\,f(y)\,\mathrm{d}y \;\le\; \mathbb{E}[|Y(t,x)|] = \mathbb{E}[Y(t,x)] \;\le\; e^{(b\vee 0)Lt} \int_{\mathbb{R}^d} g(t, x-y)\,f(y)\,\mathrm{d}y    (3.3)

for all (t,x) ∈ (0,∞) × R^d. So if b = 0, we have γ̄(1) = 0 if f is strictly positive on a set of positive Lebesgue measure; γ(1) = 0 if inf_{x∈R^d} f(x) > 0; λ̄(1) = 0 if f(x) = O(e^{-c|x|}) for some c > 0; and λ(1) = 0 by definition.
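To see the first-moment identity behind these statements directly (a sketch on our part, assuming enough integrability to interchange expectation and integration), take expectations in the mild equation (1.5): for b = 0 the noise is a mean-zero (compensated) martingale measure, so the stochastic integral has zero expectation and

\mathbb{E}[Y(t,x)] = Y_0(t,x) + \mathbb{E}\bigg[ \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\sigma(Y(s,y))\,\Lambda(\mathrm{d}s,\mathrm{d}y) \bigg] = Y_0(t,x) = \int_{\mathbb{R}^d} g(t, x-y)\,f(y)\,\mathrm{d}y,

which is bounded by the supremum of f and therefore cannot grow exponentially in t; this is consistent with (3.3) at b = 0 and explains why the martingale case forces us to look at exponents p > 1.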

Thus, we are left to consider exponents in the region p ∈ (1, 1+2/d). In dimension 1, we can use Itô's isometry to calculate second moments, and there are essentially no differences to the estimates (or exact formulae) obtained in the Gaussian case ([9, 12, 17]). However, for d ≥ 2, we cannot use Itô's isometry because p is strictly between 1 and 2. Instead, our main tools for proving intermittency in the regime p < 2 are the following moment lower bounds for stochastic integrals with respect to compensated Poisson random measures, which are of independent interest and complement existing sharp (but for our purposes not feasible) estimates in the literature (see [21]).

Lemma 3.4 Let (F_t)_{t≥0} be a filtration on the underlying probability space and N be an (F_t)_{t≥0}-Poisson random measure on [0,∞) × E, where E is a Polish space. Further suppose that m denotes the intensity measure of N, and H: Ω × [0,∞) × E → R is an (F_t)_{t≥0}-predictable process such that the process

t \mapsto \int_0^t\!\!\int_E H(s,x)\,\tilde N(\mathrm{d}s,\mathrm{d}x)

is a well-defined (F_t)_{t≥0}-local martingale, where \tilde N(\mathrm{d}t,\mathrm{d}x) := N(\mathrm{d}t,\mathrm{d}x) - m(\mathrm{d}t,\mathrm{d}x) is the compensation of N.

Then there exists for every p ∈ (1,2] a constant C_p > 0 that is independent of H and m such that

\mathbb{E}\Bigg[ \bigg| \int\!\!\int_{[0,\infty)\times E} H(t,x)\,\tilde N(\mathrm{d}t,\mathrm{d}x) \bigg|^p \Bigg] \;\ge\; C_p\, \frac{\int\!\!\int_{[0,\infty)\times E} \mathbb{E}\big[|H(t,x)|^p\big]\,m(\mathrm{d}t,\mathrm{d}x)}{\big(1 \vee m([0,\infty)\times E)\big)^{1-\frac{p}{2}}},    (3.4)

where ∞/∞ := 0. In particular, if the right-hand side of (3.4) is infinite, then also the left-hand side of (3.4) is infinite. Furthermore, for every p_0 ∈ (1,2], the constants C_p can be chosen to be bounded away from 0 for p ∈ [p_0, 2].
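As a quick plausibility check of (3.4) (ours, not from the paper), take H ≡ 1 and a finite deterministic total mass µ = m([0,∞) × E): the stochastic integral reduces to a centered Poisson random variable, and the right-hand side of (3.4) scales like µ^{p/2}, which is indeed the order of its pth absolute moment.

# Plausibility check (ours) of the lower bound (3.4) for H = 1 and finite total intensity
# mu = m([0, infty) x E): the stochastic integral is then Pois(mu) - mu, and (3.4) predicts
# E|Pois(mu) - mu|^p >= C_p * mu / (1 v mu)^(1 - p/2), i.e. a lower bound of order mu^(p/2).
import numpy as np

rng = np.random.default_rng(1)
p = 1.5
for mu in [1.0, 10.0, 100.0, 1000.0]:
    centred = rng.poisson(mu, size=200_000) - mu
    lhs = np.mean(np.abs(centred) ** p)
    rhs = mu / max(1.0, mu) ** (1 - p / 2)          # equals mu**(p/2) for mu >= 1
    print(f"mu = {mu:7.1f}:   E|.|^p = {lhs:10.2f}   mu^(p/2) = {rhs:10.2f}")
# the ratio of the two columns settles down to a constant, consistent with (3.4)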

We are now ready to state the intermittency lower bounds for (1.1) that complement the corresponding upper bounds in Theorem 2.4. We start with non-vanishing initial data.

Theorem 3.5 (Intermittency lower bounds – I) Let Y be the solution to (1.5) constructed under the assumptions of Proposition 2.3. Additionally assume that

L_f := \inf_{x\in\mathbb{R}^d} f(x) > 0 \quad\text{and}\quad L_\sigma := \inf_{x\in\mathbb{R}\setminus\{0\}} \frac{|\sigma(x)|}{|x|} > 0,    (3.5)

and that Λ has the properties

b = 0, \quad \lambda \not\equiv 0 \quad\text{and}\quad \int_{\mathbb{R}} |z|^{1+2/d}\,\mathbf{1}_{\{|z|>1\}}\,\lambda(\mathrm{d}z) < \infty.    (3.6)

Then the following statements are valid.

(1) There exists a value p_0 = p_0(Λ, κ, σ) ∈ [1, 1+2/d) such that we have γ(p) > 0 for all exponents p_0 < p < 1 + 2/d.

(2) For given p ∈ (1, 1+2/d), there exists κ_0 = κ_0(Λ, p, σ) ∈ (0,∞] such that γ(p) > 0 for all diffusion constants 0 < κ < κ_0.

(3) Given p ∈ (1, 1+2/d) and κ > 0, there exists L_0 = L_0(Λ, p, κ) ∈ [0,∞) such that γ(p) > 0 if σ has the property L_σ > L_0.

(4) In dimension d = 1, we can take p_0 = 1, κ_0 = ∞ and L_0 = 0.

To paraphrase, under the assumptions of Theorem 3.5, we have weak intermittency of order p for every p ∈ (1,3) in dimension 1, while for higher dimensions we have this if p is close enough to 1 + 2/d, or κ is small enough, or the size of σ (or equivalently, the noise intensity) is large enough. It remains an open question whether in dimension d ≥ 2, we always have intermittency of all orders p ∈ (1, 1+2/d). Also, in contrast to the jump case where we have an affirmative answer, it seems to be open whether the solution to (1.1) in d = 1 with Gaussian noise is weakly intermittent of order p ∈ (1,2).

For decaying initial conditions, we have the following counterpart for the indices λ(p).

Theorem 3.6 (Intermittency lower bounds – II) Let Y be the solution to (1.5) constructed in Proposition 2.3. Further assume that c > 0, L_σ > 0 (as defined in (3.5)), σ(0) = 0, that f is nonnegative and strictly positive on a set of positive Lebesgue measure, f(x) = O(e^{-c|x|}) as |x| → ∞, and that Λ satisfies (3.6).

(1) There exists a value p_1 = p_1(Λ, κ, σ) ∈ [1, 1+2/d) such that λ(p) > 0 for all p ∈ (p_1, 1+2/d).

(2) Given p ∈ (1, 1+2/d), there exists κ_1 = κ_1(Λ, p, σ) ∈ (0,∞] such that λ(p) > 0 for all 0 < κ < κ_1.

(3) Given p ∈ (1, 1+2/d) and κ > 0, there exists L_1 = L_1(Λ, p, κ) ∈ [0,∞) such that λ(p) > 0 for all σ satisfying L_σ > L_1.

(4) In d = 1, we can take p_1 = 1, κ_1 = ∞ and L_1 = 0.

Remark 3.7 If d = 1, m_λ(2) < ∞ and we consider the indices γ(2), γ̄(2), λ(2) and λ̄(2), there is – thanks to Itô's isometry – absolutely no difference between a Lévy and a Gaussian noise if we replace σ by √v σ, where v = ρ² + m_λ(2)² is the variance of Λ. For example, the explicit formulae derived in [9] immediately extend to the Lévy case.


Remark 3.8 In [9], the authors consider the stochastic heat equation with a measure-valued (e.g., a Dirac delta) initial condition. Their proof of the existence and uniqueness of solutions can be adapted to the Lévy setting by replacing L^2-estimates with L^p-type estimates from the BDG inequalities. Furthermore, since the heat operator smooths out a rough initial condition immediately, the intermittency properties of the solution will only depend on its decay and support properties. For example, Theorem 3.6 as well as Theorems 3.10(2), 3.12 and 4.1(2) continue to hold for the solution with a Dirac delta initial condition.

Remark 3.9 The intermittency of (1.1) with Gaussian noise is analytically due to the non-integrable tails of g² at t = +∞ (see [12, 17]). Translated into the picture of physical intermittency, this suggests that peaks in the past remain "visible" for a long time, and finally add up to new peaks. In the Lévy case, our proofs hint at the same phenomenon in dimension 1 for the intermittency islands of low order (i.e., p close to 1). However, regardless of dimension, peaks of orders close to 1 + 2/d, which are the dominating ones on a macroscopic level, arise from the singularity of the heat kernel at small times (this is further confirmed in the asymptotics we derive in Theorem 4.1).

It seems that high-order intermittency islands immediately trigger the formation of similar (or even larger) islands, leading to “clusterings” of peaks. It would be interesting for future research to specify and prove these heuristics.

3.3 Noise with positive or negative drift

In this section we consider the intermittency problem for (1.1) when the noise Λ has a non-zero mean. If Λ has a positive mean, that is, if b > 0, then under natural assumptions, the solution to (1.1) is even weakly intermittent of order 1 (and hence also of all orders p ∈ [1, 1+2/d)).

Theorem 3.10 (Intermittency for noises with positive drift) Suppose that Y is the solution to (1.1) constructed in Proposition 2.3 and assume that σ is a nonnegative Lipschitz continuous function with L_σ > 0 (as defined in (3.5)). Furthermore, if c = 0, suppose that L_f, as defined in (3.5), is strictly positive, while for c > 0, suppose that f is nonnegative and strictly positive on a set of positive Lebesgue measure. If b > 0, the following statements are valid.

(1) If c = 0, then γ(1) > 0.

(2) If c > 0, then λ(1) > 0.

If Λ has a negative drift, we restrict ourselves to the parabolic Anderson model where σ is given by (1.12). In this case, we can reformulate (1.1) as an equation driven by the martingale part of Λ only. In fact, decomposing Λ(dt,dx) = b dt dx + M(dt,dx), equation (1.1) can be written in the form

\partial_t Y(t,x) = \frac{\kappa}{2}\,\Delta Y(t,x) + b\sigma_0\, Y(t,x) + \sigma_0\, Y(t,x)\,\dot M(t,x), \quad (t,x)\in(0,\infty)\times\mathbb{R}^d, \qquad Y(0,\cdot) = f.    (3.7)

This is the d-dimensional stochastic cable equation driven by the zero-mean Lévy space–time white noise Ṁ. In a similar form, it has been studied in [26] for Gaussian driving noise in dimension d = 1. Its mild form is the same as in (1.5) but with g replaced by

\tilde g(t,x) = g(t,x)\, e^{b\sigma_0 t}, \quad (t,x)\in(0,\infty)\times\mathbb{R}^d.

Proposition 3.11 Under the assumptions of Proposition 2.3, there exists β_1 > 0 such that (3.7) has a unique mild solution Y satisfying ‖Y‖_{p,β,c} < ∞ for all β ≥ β_1. Furthermore, it is a modification of the unique mild solution to (1.1) constructed in Proposition 2.3.

We omit the proof since the existence and uniqueness result follows exactly as in the proof of Proposition 2.3. Moreover, the second statement holds because weak and mild solutions are equivalent in our present setting: the proof is the same as in [26, Theorem 3.2] for Gaussian M and d = 1.

Theorem 3.12 (Intermittency for noises with negative drift) Let Y be the mild solution to (1.1) as in Proposition 2.3. Suppose that b < 0, m_λ(1+2/d) < ∞ and that σ is given by (1.12) with σ_0 > 0. If c = 0, also assume that L_f > 0, and if c > 0, that f is nonnegative and positive on a set of positive Lebesgue measure.

(1) If λ ≢ 0, Theorem 3.5(1)–(3) and Theorem 3.6(1)–(3) continue to hold.

(2) Let a value p ∈ (1, 1+2/d) be given, with the restriction p ≥ 2 if ρ ≠ 0. Whenever κ or |b| is large enough, or σ_0 is small enough (each time keeping the other two variables fixed), we have γ(p) ≤ γ̄(p) < 0 and λ(p) = λ̄(p) = 0.

4 Asymptotics of intermittency exponents

As seen in the previous sections, the intermittency of the mild solution to (1.1) is stronger for higher values of p or smaller values of κ. In this section, we investigate the limiting behavior of γ(p), γ̄(p), λ(p) and λ̄(p) as

p \to 1 + \frac{2}{d} \quad\text{and}\quad \kappa \to 0.

In (4.2) and (4.4) below, one should keep in mind that, although not explicitly indicated in the notation, the indices γ(p) etc. also depend on κ.

Theorem 4.1 (Asymptotics of intermittency exponents) Consider a noise Λ with non-zero Lévy measure λ.

(1) Let c = 0 and grant the assumptions of Theorem 3.5, Theorem 3.10 or Theorem 3.12, depending on whether Λ has mean b = 0, b > 0 or b < 0. If b > 0 or b < 0, we also impose that σ is of the form (1.12). Then we have

\lim_{p\to 1+2/d} \frac{1+2/d-p}{\lvert\log(1+2/d-p)\rvert}\,\log\gamma(p) = \lim_{p\to 1+2/d} \frac{1+2/d-p}{\lvert\log(1+2/d-p)\rvert}\,\log\overline{\gamma}(p) = \frac{2}{d},    (4.1)

0 < \liminf_{\kappa\to 0}\ \kappa^{\frac{p-1}{1+2/d-p}}\,\gamma(p) \le \limsup_{\kappa\to 0}\ \kappa^{\frac{p-1}{1+2/d-p}}\,\overline{\gamma}(p) < \infty.    (4.2)


(2) Let c > 0 and grant the assumptions of Theorem 3.6, Theorem 3.10 or Theorem 3.12, depending on whether Λ has mean b = 0, b > 0 or b < 0. If b > 0 or b < 0, we also impose that σ is of the form (1.12). Then we have

\frac{1}{d} \le \liminf_{p\to 1+2/d} \frac{1+2/d-p}{\lvert\log(1+2/d-p)\rvert}\,\log\lambda(p) \le \limsup_{p\to 1+2/d} \frac{1+2/d-p}{\lvert\log(1+2/d-p)\rvert}\,\log\overline{\lambda}(p) \le \frac{2}{d}.    (4.3)

If in addition the initial condition decays superexponentially in the sense that |f(x)| = O(e^{-c|x|}) as |x| → ∞ for every c ≥ 0, then

0 < \liminf_{\kappa\to 0}\ \kappa^{-\frac{1+1/d-p}{1+2/d-p}}\,\lambda(p) \le \limsup_{\kappa\to 0}\ \kappa^{-\frac{1+1/d-p}{1+2/d-p}}\,\overline{\lambda}(p) < \infty.    (4.4)

Remark 4.2 (1) Equation (4.1) asserts that the moment Lyapunov exponents γ(p) and γ̄(p), which determine the exponential rates at which E[|Y(t,x)|^p] grows for t → ∞, themselves increase at a superexponential speed as p approaches 1 + 2/d. This is much faster than in the Gaussian case, where for the PAM (1.12) in d = 1 with constant f, [4, Theorem 2.6] and [20, Theorem 6.4] showed that the Lyapunov exponents have a cubic growth as n → ∞:

\gamma(n) = \overline{\gamma}(n) = \frac{\sigma_0^4}{4!\,\kappa}\, n(n^2-1), \quad n \in \mathbb{N}.    (4.5)

We conclude that the intermittent behavior of the stochastic heat equation with jumps is much stronger than with Gaussian noise.

(2) Similarly, (4.3) states that the velocity at which pth order intermittency peaks propagate in space grows superexponentially when p → 1 + 2/d. Again, this is on a much faster scale than in the Gaussian case, where the indices λ(p) and λ̄(p) typically only increase linearly in p: see [19, Proposition 3.11], where for the PAM (1.12) in d = 1 with compactly supported initial data f, the authors showed that

0 < \liminf_{n\to\infty} \frac{\lambda(n)}{n} \le \limsup_{n\to\infty} \frac{\overline{\lambda}(n)}{n} < \infty.    (4.6)

We also remark that in the jump case, the asymptotics of the exponents γ(p) and γ̄(p) as p → 1 + 2/d are similar to those of the exponents λ(p) and λ̄(p), in contrast to the Gaussian case, cf. (4.5) and (4.6).

(3) Regarding the asymptotics as κ → 0, a notable difference between jump and Gaussian noise is that in the former case, the rate at which γ(p) and γ̄(p) increase as κ → 0 explicitly depends on p, whereas in the latter case, at least for p ∈ N, it typically does not, see (4.5).

(4) Another interesting observation is that for jump noises, the asymptotics of λ(p) and λ̄(p) for κ → 0 exhibit a phase transition at p = 1 + 1/d. If p ∈ (1, 1+1/d), they decrease like κ^{(1+1/d-p)/(1+2/d-p)}; if p = 1 + 1/d, they are bounded away from zero and infinity in κ; and for p ∈ (1+1/d, 1+2/d), they increase like κ^{-(p-1-1/d)/(1+2/d-p)}. Intuitively speaking, this is because for small κ there are two effects that counteract each other: On the one hand, a small diffusion constant reduces the speed at which the initial mass at the origin can spread. On the other hand, if κ is small, once an intermittency peak is built up, it takes longer for the Laplace operator to smooth it out, which facilitates the development and transmission of further peaks. Thus, for small values of p, the first effect is dominant, while for large values of p, it is the second effect that wins. In the Gaussian case, the behavior is again different.

Here, for any p ∈ [2,∞), we have

0 < \liminf_{\kappa\to 0} \lambda(p) \le \limsup_{\kappa\to 0} \overline{\lambda}(p) < \infty.    (4.7)

The lower bound follows from [12, Theorem 1.3] together with the fact that λ(2) ≤ λ(p) for all p ≥ 2, while the upper bound follows as in the proof of Theorem 4.1 from the formula (2.4).

5 Proofs

5.1 Proofs for Section 2

Lemma 5.1 Define g_{\beta,c}(t,x) := g(t,x)\,e^{-\beta t + c|x|} for (t,x) ∈ (0,∞) × R^d. If 0 < p < 1 + 2/d, c ≥ 0 and β > κc²d/2, then

\int_0^\infty\!\!\int_{\mathbb{R}^d} g^p_{\beta,c}(t,x)\,\mathrm{d}t\,\mathrm{d}x \;\le\; \frac{2^{\frac{d}{2}(3-p)}\,\Gamma\big(1-\frac{d}{2}(p-1)\big)}{p^{1+d(1-\frac{p}{2})}\,(\pi\kappa)^{\frac{d}{2}(p-1)}\,\big(\beta-\frac{1}{2}\kappa c^2 d\big)^{1-\frac{d}{2}(p-1)}},

where Γ denotes the gamma function Γ(x) = \int_0^\infty t^{x-1} e^{-t}\,\mathrm{d}t.

Proof. If β > κc²d/2, then

\int_0^\infty\!\!\int_{\mathbb{R}^d} g^p_{\beta,c}(t,x)\,\mathrm{d}t\,\mathrm{d}x = \int_0^\infty \frac{e^{-p\beta t}}{p^{d/2}(2\pi\kappa t)^{\frac{d}{2}(p-1)}} \int_{\mathbb{R}^d} \frac{e^{-\frac{p}{2\kappa t}|x|^2}}{(2\pi\kappa t/p)^{d/2}}\, e^{pc|x|}\,\mathrm{d}x\,\mathrm{d}t

\le \int_0^\infty \frac{e^{-p\beta t}}{p^{d/2}(2\pi\kappa t)^{\frac{d}{2}(p-1)}} \left( \int_{\mathbb{R}} \frac{e^{-\frac{p}{2\kappa t}x^2}}{(2\pi\kappa t/p)^{1/2}}\, e^{pc|x|}\,\mathrm{d}x \right)^{d} \mathrm{d}t

\le \int_0^\infty \frac{2^d\, e^{-p\beta t}}{p^{d/2}(2\pi\kappa t)^{\frac{d}{2}(p-1)}} \left( \int_{\mathbb{R}} \frac{e^{-\frac{p}{2\kappa t}x^2}}{(2\pi\kappa t/p)^{1/2}}\, e^{pcx}\,\mathrm{d}x \right)^{d} \mathrm{d}t

= \int_0^\infty \frac{2^d\, e^{-p\beta t}}{p^{d/2}(2\pi\kappa t)^{\frac{d}{2}(p-1)}}\, e^{\frac{1}{2}d\kappa p c^2 t}\,\mathrm{d}t

= \frac{2^{\frac{d}{2}(3-p)}\,\Gamma\big(1-\frac{d}{2}(p-1)\big)}{p^{1+d(1-\frac{p}{2})}\,(\pi\kappa)^{\frac{d}{2}(p-1)}\,\big(\beta-\frac{1}{2}\kappa c^2 d\big)^{1-\frac{d}{2}(p-1)}}.


Proof of Proposition 2.1. We use the triangle inequality to split

\|(g \circledast \Phi)(t,x)\|_p \le |\rho| \left\| \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\Phi(s,y)\,W(\mathrm{d}s,\mathrm{d}y) \right\|_p + \left\| \int_0^t\!\!\int_{\mathbb{R}^d}\!\int_{\mathbb{R}} g(t-s, x-y)\,\Phi(s,y)\,z\,(\mu-\nu)(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z) \right\|_p + |b| \left\| \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\Phi(s,y)\,\mathrm{d}s\,\mathrm{d}y \right\|_p =: I_1(t,x) + I_2(t,x) + I_3(t,x)

into a Gaussian, a pure-jump and a drift part. Recall that I_1 vanishes for d ≥ 2. For d = 1 and p ∈ [2,3), we have from the BDG inequalities (see [15, Theorem VII.92]) together with Minkowski's integral inequality that

e^{-\beta t + c|x|}\, I_1(t,x) \le |\rho|\, C_p\, e^{-\beta t + c|x|} \left( \int_0^t\!\!\int_{\mathbb{R}} g^2(t-s, x-y)\,\|\Phi(s,y)\|_p^2\,\mathrm{d}s\,\mathrm{d}y \right)^{\frac12}

\le |\rho|\, C_p\, \|\Phi\|_{p,\beta,c} \left( \int_0^t\!\!\int_{\mathbb{R}} g^2(t-s, x-y)\, e^{-2\beta(t-s)+2c(|x|-|y|)}\,\mathrm{d}s\,\mathrm{d}y \right)^{\frac12}

\le |\rho|\, C_p\, \|\Phi\|_{p,\beta,c} \left( \int_0^\infty\!\!\int_{\mathbb{R}} g^2_{\beta,c}(s,y)\,\mathrm{d}s\,\mathrm{d}y \right)^{\frac12}.

So we deduce from Lemma 5.1 that

\sup_{(t,x)\in(0,\infty)\times\mathbb{R}} e^{-\beta t + c|x|}\, I_1(t,x) \le C_p\,|\rho|\, \frac{1}{\big(2\kappa(\beta-\frac12\kappa c^2)\big)^{\frac14}}\, \|\Phi\|_{p,\beta,c}.    (5.1)

In order to estimate I_3 we only need Minkowski's integral inequality and Lemma 5.1 to obtain

I_3(t,x) \le |b| \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\|\Phi(s,y)\|_p\,\mathrm{d}s\,\mathrm{d}y

\le |b|\, e^{\beta t - c|x|}\, \|\Phi\|_{p,\beta,c} \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\, e^{-\beta(t-s)+c(|x|-|y|)}\,\mathrm{d}s\,\mathrm{d}y

\le |b|\, e^{\beta t - c|x|}\, \|\Phi\|_{p,\beta,c} \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\, e^{-\beta(t-s)+c|x-y|}\,\mathrm{d}s\,\mathrm{d}y

\le \frac{2^d |b|}{\beta - \frac12\kappa c^2 d}\, e^{\beta t - c|x|}\, \|\Phi\|_{p,\beta,c}.    (5.2)

We turn to the estimation of I_2. If p ≤ 2, we use the BDG inequality to deduce

I_2(t,x)^p \le C_p^p\, \mathbb{E}\Bigg[ \bigg( \int_0^t\!\!\int_{\mathbb{R}^d}\!\int_{\mathbb{R}} |g(t-s, x-y)\,\Phi(s,y)\,z|^2\,\mu(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z) \bigg)^{\frac p2} \Bigg]

\le C_p^p \int_0^t\!\!\int_{\mathbb{R}^d}\!\int_{\mathbb{R}} g^p(t-s, x-y)\,\|\Phi(s,y)\|_p^p\,|z|^p\,\nu(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z)

\le C_p^p\, (m_\lambda(p))^p\, e^{p\beta t - pc|x|}\, \frac{2^{\frac{d}{2}(3-p)}\,\Gamma\big(1-\frac{d}{2}(p-1)\big)}{p^{1+d(1-\frac{p}{2})}\,(\pi\kappa)^{\frac{d}{2}(p-1)}\,\big(\beta-\frac12\kappa c^2 d\big)^{1-\frac{d}{2}(p-1)}}\, \|\Phi\|_{p,\beta,c}^p.


At the second inequality we used that \big(\sum_{i=1}^\infty a_i\big)^r \le \sum_{i=1}^\infty a_i^r for any r ∈ [0,1] and nonnegative numbers (a_i)_{i∈N}. If d = 1 and 2 < p < 3, we use [21, Theorem 1] with α = 2 to obtain

I_2(t,x)^p \le C_p^p \Bigg( \mathbb{E}\Bigg[ \bigg( \int_0^t\!\!\int_{\mathbb{R}}\!\int_{\mathbb{R}} |g(t-s, x-y)\,\Phi(s,y)\,z|^2\,\nu(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z) \bigg)^{\frac p2} \Bigg] + \int_0^t\!\!\int_{\mathbb{R}}\!\int_{\mathbb{R}} g^p(t-s, x-y)\,\|\Phi(s,y)\|_p^p\,|z|^p\,\nu(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z) \Bigg).    (5.3)

For the first term, again by Minkowski's integral inequality and Lemma 5.1, we have

\Bigg( \mathbb{E}\Bigg[ \bigg( \int_0^t\!\!\int_{\mathbb{R}}\!\int_{\mathbb{R}} |g(t-s, x-y)\,\Phi(s,y)\,z|^2\,\nu(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z) \bigg)^{\frac p2} \Bigg] \Bigg)^{\frac 1p} \le \frac{m_\lambda(2)\, e^{\beta t - c|x|}}{\big(2\kappa(\beta-\frac12\kappa c^2)\big)^{\frac14}}\, \|\Phi\|_{p,\beta,c},    (5.4)

while for the second term,

\bigg( \int_0^t\!\!\int_{\mathbb{R}}\!\int_{\mathbb{R}} g^p(t-s, x-y)\,\|\Phi(s,y)\|_p^p\,|z|^p\,\nu(\mathrm{d}s,\mathrm{d}y,\mathrm{d}z) \bigg)^{\frac 1p} \le \frac{2^{\frac{3-p}{2p}}\,\Gamma\big(\frac{3-p}{2}\big)^{\frac1p}\, m_\lambda(p)\, e^{\beta t - c|x|}}{p^{\frac{4-p}{2p}}\,(\pi\kappa)^{\frac{p-1}{2p}}\,\big(\beta-\frac12\kappa c^2\big)^{\frac{3-p}{2p}}}\, \|\Phi\|_{p,\beta,c}.    (5.5)

Substituting (5.4) and (5.5) back into (5.3), we obtain

e^{-\beta t + c|x|}\, I_2(t,x) \le C_p \left( \frac{m_\lambda(2)}{\big(2\kappa(\beta-\frac12\kappa c^2)\big)^{\frac14}} + \frac{2^{\frac{3-p}{2p}}\,\Gamma\big(\frac{3-p}{2}\big)^{\frac1p}\, m_\lambda(p)}{p^{\frac{4-p}{2p}}\,(\pi\kappa)^{\frac{p-1}{2p}}\,\big(\beta-\frac12\kappa c^2\big)^{\frac{3-p}{2p}}} \right) \|\Phi\|_{p,\beta,c}.    (5.6)

The statement now follows from inequalities (5.1), (5.2) and (5.6). Finally, since C_p comes from the BDG inequalities, it remains bounded on [1 + ǫ, 1 + 2/d).

Proof of Proposition 2.3. The proof combines Proposition 2.1 with arguments in [12, Theorem 1.1] (see also [20, Theorem 8.1]).

As usual, we consider the Picard iteration sequence Y^{(0)} = Y_0 and

Y^{(n)}(t,x) = Y_0(t,x) + \int_0^t\!\!\int_{\mathbb{R}^d} g(t-s, x-y)\,\sigma\big(Y^{(n-1)}(s,y)\big)\,\Lambda(\mathrm{d}s,\mathrm{d}y)

for n ∈ N, and define u^{(n)} = Y^{(n)} − Y^{(n−1)}. After possibly enlarging the value of L, we can assume that |σ(x)| ≤ L(1 + |x|) for all x ∈ R. Now let us choose β_0 > κc²d/2 large enough such that the factor C_{β,c}(κ,p) in front of ‖Φ‖_{p,β,c} on the right-hand side of (2.3) satisfies

C_{\beta,c}(\kappa,p) < \frac{1}{L} \quad \text{for all } \beta \ge \beta_0.    (5.7)

Using the Lipschitz property of σ, we obtain for all β ≥ β_0 and n ∈ N, as a consequence of Proposition 2.1,

\|u^{(n)}\|_{p,\beta,c} = \big\|g \circledast \big(\sigma(Y^{(n-1)}) - \sigma(Y^{(n-2)})\big)\big\|_{p,\beta,c} \le C_{\beta,c}(\kappa,p)\, \big\|\sigma(Y^{(n-1)}) - \sigma(Y^{(n-2)})\big\|_{p,\beta,c} \le q\, \|u^{(n-1)}\|_{p,\beta,c} \le \dots \le q^{n-1}\, \|u^{(1)}\|_{p,\beta,c},

where q := L\,C_{\beta,c}(\kappa,p) < 1 by (5.7).
