arXiv:1801.07931v3 [math.PR] 26 Aug 2020

On tail behaviour of stationary second-order Galton–Watson processes with immigration

Mátyás Barczy∗,⋄, Zsuzsanna Bősze∗∗, Gyula Pap∗∗∗

* MTA-SZTE Analysis and Stochastics Research Group, Bolyai Institute, University of Szeged, Aradi vértanúk tere 1, H-6720 Szeged, Hungary.

** Institute for Mathematical Stochastics, Georg-August-Universität Göttingen, Goldschmidtstr. 7, 37077 Göttingen, Germany.

*** Bolyai Institute, University of Szeged, Aradi vértanúk tere 1, H-6720 Szeged, Hungary.

e-mail: barczy@math.u-szeged.hu (M. Barczy), zsuzsanna.boesze@uni-goettingen.de (Zs. Bősze).

⋄ Corresponding author.

Abstract

A second-order Galton–Watson process with immigration can be represented as a coordinate process of a 2-type Galton–Watson process with immigration. Sufficient conditions are derived on the offspring and immigration distributions of a second-order Galton–Watson process with immigration under which the corresponding 2-type Galton–Watson process with immigration has a unique stationary distribution such that its common marginals are regularly varying. In the course of the proof, sufficient conditions are given under which the distribution of a second-order Galton–Watson process (without immigration) at any fixed time is regularly varying, provided that the initial sizes of the population are independent and regularly varying.

1 Introduction

Branching processes have been frequently used in biology, e.g., for modeling the spread of an infectious disease, for gene amplification and deamplification, or for modeling telomere shortening; see, e.g., Kimmel and Axelrod [18]. Higher-order Galton–Watson processes with immigration having finite second moment (also called Generalized Integer-valued AutoRegressive (GINAR) processes) were introduced by Latour [19, equation (1.1)]. Pénisson and Jacob [21] used higher-order Galton–Watson processes (without immigration) for studying the decay phase of an epidemic, and, as an application, they investigated the Bovine Spongiform Encephalopathy epidemic in Great Britain after the 1988 feed ban law. As a continuation, Pénisson [20] introduced estimators of the so-called infection parameter in the growth and decay phases of an epidemic.

2010 Mathematics Subject Classifications: 60J80, 60G70.

Key words and phrases: second-order Galton–Watson process with immigration, regularly varying distribution, tail behavior.

Supported by the Hungarian Croatian Intergovernmental S & T Cooperation Programme for 2017-2018 under Grant No. 16-1-2016-0027. Mátyás Barczy is supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences.

Recently, Kashikar and Deshmukh [16, 17] and Kashikar [15] used second-order Galton–Watson processes (without immigration) for modeling the swine flu data for Pune, India and La Gloria, Mexico. Kashikar and Deshmukh [16] also studied their basic probabilistic properties, such as a formula for their probability generating function, the probability of extinction, long-run behavior, and conditional least squares estimation of the offspring means. Higher-order Galton–Watson processes with immigration are special multi-type Galton–Watson processes with immigration; as an example of an application of such processes for modeling epidemics, we can mention Dénes et al. [7], where a 17-type Galton–Watson process with immigration was applied to describe the risk of a major epidemic in connection with the 2012 UEFA European Football Championship, which took place in Ukraine and Poland between 8 June and 1 July 2012.

Let Z₊, N, R, R₊, R₊₊, and R₋₋ denote the set of non-negative integers, positive integers, real numbers, non-negative real numbers, positive real numbers and negative real numbers, respectively. For functions f : R₊₊ → R₊₊ and g : R₊₊ → R₊₊, by the notation f(x) ∼ g(x), f(x) = o(g(x)) and f(x) = O(g(x)) as x → ∞, we mean that lim_{x→∞} f(x)/g(x) = 1, lim_{x→∞} f(x)/g(x) = 0 and limsup_{x→∞} f(x)/g(x) < ∞, respectively. The natural basis of R^d will be denoted by {e_1, …, e_d}. For x ∈ R, the integer part of x is denoted by ⌊x⌋. Every random variable will be defined on a probability space (Ω, A, P). Equality in distribution of random variables or stochastic processes is denoted by =𝒟.

First, we recall the Galton–Watson process with immigration, which assumes that an individual can reproduce only once during its lifetime, at age 1, and then it dies immediately. The initial population size at time 0 will be denoted by X_0. For each n ∈ N, the population consists of the offspring born at time n and the immigrants arriving at time n. For each n, i ∈ N, the number of offspring produced at time n by the ith individual of the (n−1)th generation will be denoted by ξ_{n,i}. The number of immigrants in the nth generation will be denoted by ε_n. Then, for the population size X_n of the nth generation, we have

(1.1)  X_n = ∑_{i=1}^{X_{n−1}} ξ_{n,i} + ε_n,  n ∈ N,

where ∑_{i=1}^0 := 0. Here {X_0, ξ_{n,i}, ε_n : n, i ∈ N} are supposed to be independent non-negative integer-valued random variables, and {ξ_{n,i} : n, i ∈ N} and {ε_n : n ∈ N} are each supposed to consist of identically distributed random variables. If ε_n = 0, n ∈ N, then we say that (X_n)_{n∈Z₊} is a Galton–Watson process (without immigration).

Next, we introduce the second-order Galton–Watson branching model with immigration.

In this model we suppose that an individual reproduces at age 1 and also at age 2, and then it dies immediately. For each n ∈ N, the population consists again of the offspring born at time n and the immigrants arriving at time n. For each n, i, j ∈ N, the number of offspring produced at time n by the ith individual of the (n−1)th generation and by the jth individual of the (n−2)nd generation will be denoted by ξ_{n,i} and η_{n,j}, respectively, and ε_n denotes the number of immigrants in the nth generation. Then, for the population size X_n of the nth generation, we have

(1.2)  X_n = ∑_{i=1}^{X_{n−1}} ξ_{n,i} + ∑_{j=1}^{X_{n−2}} η_{n,j} + ε_n,  n ∈ N,

where X_{−1} and X_0 are non-negative integer-valued random variables (the initial population sizes). Here {X_{−1}, X_0, ξ_{n,i}, η_{n,j}, ε_n : n, i, j ∈ N} are supposed to be non-negative integer-valued random variables such that {(X_{−1}, X_0), ξ_{n,i}, η_{n,j}, ε_n : n, i, j ∈ N} are independent, and {ξ_{n,i} : n, i ∈ N}, {η_{n,j} : n, j ∈ N} and {ε_n : n ∈ N} are each supposed to consist of identically distributed random variables. Note that the number of individuals alive at time n ∈ Z₊ is X_n + X_{n−1}, which can be larger than the population size X_n of the nth generation, since the individuals of the population at time n−1 are still alive at time n, because they can reproduce also at age 2. The stochastic process (X_n)_{n≥−1} given by (1.2) is called a second-order Galton–Watson process with immigration or a Generalized Integer-valued AutoRegressive process of order 2 (a GINAR(2) process), see, e.g., Latour [19].
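Recursion (1.2) is straightforward to simulate. The sketch below is a minimal illustration, assuming Bernoulli offspring distributions and a Poisson immigration distribution — illustrative choices only, not distributions prescribed by the paper:

```python
import random

def ginar2_step(x_prev, x_prev2, p_xi, p_eta, lam):
    """One step of recursion (1.2):
    X_n = sum_{i<=X_{n-1}} xi_{n,i} + sum_{j<=X_{n-2}} eta_{n,j} + eps_n."""
    offspring_age1 = sum(1 for _ in range(x_prev) if random.random() < p_xi)
    offspring_age2 = sum(1 for _ in range(x_prev2) if random.random() < p_eta)
    # Poisson(lam) immigrant count via Knuth's product method (dependency-free).
    threshold, k, prod = 2.718281828459045 ** (-lam), 0, random.random()
    while prod > threshold:
        k += 1
        prod *= random.random()
    return offspring_age1 + offspring_age2 + k

def simulate(n_steps, p_xi=0.3, p_eta=0.2, lam=1.0, seed=1):
    """Simulate X_0, X_1, ..., X_{n_steps} started from X_{-1} = X_0 = 0."""
    random.seed(seed)
    x_prev2, x_prev = 0, 0
    path = [x_prev]
    for _ in range(n_steps):
        x_prev2, x_prev = x_prev, ginar2_step(x_prev, x_prev2, p_xi, p_eta, lam)
        path.append(x_prev)
    return path

path = simulate(10_000)
# Here m_xi + m_eta = 0.5 < 1, so the process is stable; in the stationary
# regime E(X) = m_eps / (1 - m_xi - m_eta) = 1 / 0.5 = 2.
print(sum(path[1000:]) / len(path[1000:]))
```

The empirical long-run average should be close to 2, matching the stationary mean obtained from taking expectations in (1.2).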

Especially, if ξ1,1 and η1,1 are Bernoulli distributed random variables, then (Xn)n>−1 is also called an Integer-valued AutoRegressive process of order 2 (INAR(2) process), see, e.g., Du and Li [8]. If ε1 = 0, then we say that (Xn)n>−1 is a second-order Galton–Watson process without immigration, introduced and studied by Kashikar and Deshmukh [16] as well.

The process given in (1.2) with the special choice η1,1 = 0 gives back the process given in (1.1), which will be called a first-order Galton–Watson process with immigration to make a distinction.

For notational convenience, let ξ, η and ε be random variables such that ξ =𝒟 ξ_{1,1}, η =𝒟 η_{1,1} and ε =𝒟 ε_1, and put m_ξ := E(ξ) ∈ [0,∞], m_η := E(η) ∈ [0,∞] and m_ε := E(ε) ∈ [0,∞].

If (X_n)_{n∈Z₊} is a (first-order) Galton–Watson process with immigration such that m_ξ ∈ (0,1), P(ε = 0) < 1, and ∑_{j=1}^∞ P(ε = j) log(j) < ∞, then the Markov process (X_n)_{n∈Z₊} admits a unique stationary distribution µ, see, e.g., Quine [22]. If ε is regularly varying with index α ∈ R₊₊, i.e., P(ε > x) ∈ R₊₊ for all x ∈ R₊₊, and

lim_{x→∞} P(ε > qx)/P(ε > x) = q^{−α}  for all q ∈ R₊₊,

then, by Lemma E.5, ∑_{j=1}^∞ P(ε = j) log(j) < ∞. The content of Theorem 2.1.1 in Basrak et al. [3] is the following statement.

1.1 Theorem. Let (X_n)_{n∈Z₊} be a (first-order) Galton–Watson process with immigration such that m_ξ ∈ (0,1) and ε is regularly varying with index α ∈ (0,2). In case of α ∈ [1,2), assume additionally that E(ξ²) < ∞. Then the tail of the unique stationary distribution µ of (X_n)_{n∈Z₊} satisfies

µ((x,∞)) ∼ ∑_{i=0}^∞ m_ξ^{αi} P(ε > x) = (1/(1 − m_ξ^α)) P(ε > x)  as x → ∞,

and hence µ is also regularly varying with index α.

Note that in case of α = 1 and m_ε = ∞, Basrak et al. [3, Theorem 2.1.1] additionally assume that ε is consistently varying (or, in other words, intermediate varying), but this, in fact, already follows from the assumption that ε is regularly varying. Basrak et al. [3, Remark 2.2.2] derived the result of Theorem 1.1 also for α ∈ [2,3) under the additional assumption E(ξ³) < ∞ (an assumption not mentioned there explicitly), and they remark that the same applies to all α ∈ [3,∞) (possibly under an additional moment assumption E(ξ^{⌊α⌋+1}) < ∞).
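Since m_ξ ∈ (0,1), the constant in Theorem 1.1 is a convergent geometric series, ∑_{i=0}^∞ m_ξ^{αi} = 1/(1 − m_ξ^α). A quick numerical sketch (the parameter values are arbitrary illustrations):

```python
def theorem11_constant(m_xi, alpha, n_terms=10_000):
    """Partial sum of sum_{i>=0} m_xi**(alpha*i), the constant in Theorem 1.1."""
    return sum(m_xi ** (alpha * i) for i in range(n_terms))

m_xi, alpha = 0.6, 1.5    # illustrative values with m_xi in (0,1), alpha in (0,2)
series = theorem11_constant(m_xi, alpha)
closed_form = 1.0 / (1.0 - m_xi ** alpha)
print(series, closed_form)
```

The truncated series and the closed form agree to machine precision, since the neglected tail is geometrically small.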

In Barczy et al. [2] we study regularly varying non-stationary (first-order) Galton–Watson processes with immigration.

As the main result of the paper, in Theorem 2.1, in the same spirit as Theorem 1.1, we present sufficient conditions on the offspring and immigration distributions of a second-order Galton–Watson process with immigration under which its associated 2-type Galton–Watson process with immigration has a unique stationary distribution such that its common marginals are regularly varying. To the best of our knowledge, such a result has not been established so far; e.g., we could not find any reference addressing regularly varying GINAR(2) processes. Our result and the applied technique might be extended to a pth-order Galton–Watson branching process with immigration; however, such an extension is not immediate, since, for example, it is not clear what would replace the constant ∑_{i=0}^∞ m_i^α in Theorem 2.1. More generally, one can pose an open problem: under what conditions on the offspring and immigration distributions of a general p-type Galton–Watson branching process with immigration is its unique (p-dimensional) stationary distribution jointly regularly varying? We also note that there is a vast literature on the tail behavior of regularly varying time series (see, e.g., Hult and Samorodnitsky [12]); however, the available results do not seem to be applicable for describing the tail behavior of the stationary distribution of regularly varying branching processes. The link between GINAR and autoregressive processes is that their autocovariance functions are identical under finite second moment assumptions, but we cannot see that knowing the tail behaviour of a corresponding autoregressive process would imply anything for the tail behavior of a GINAR process. Further, in our situation the second moment is infinite, so the autocovariance function is not defined.

Very recently, Bősze and Pap [5] have studied regularly varying non-stationary second-order Galton–Watson processes with immigration. They have found some sufficient conditions on the initial, the offspring and the immigration distributions of a non-stationary second-order Galton–Watson process with immigration under which the distribution of the process in question is regularly varying at any fixed time. The results in Bősze and Pap [5] can be considered as extensions of the results in Barczy et al. [2] on not necessarily stationary (first-order) Galton–Watson processes with immigration. Concerning the results in Bősze and Pap [5] and in the present paper, there is no overlap; for more details, see Remark 2.2.

The paper is organized as follows. In Section 2, first, for a second-order Galton–Watson process with immigration, we give a representation of the unique stationary distribution and of its marginals, respectively, and then our main result, Theorem 2.1, is formulated. The rest of Section 2 is devoted to the proof of Theorem 2.1. In the course of the proof, we formulate an auxiliary result about the tail behaviour of a second-order Galton–Watson process (without immigration) with a regularly varying initial distribution at time 0 and with value 0 at time −1, see Proposition 2.3. We close the paper with seven appendices which are used throughout the proofs. In Appendix A, we recall a representation of a second-order Galton–Watson process without or with immigration as a (special) 2-type Galton–Watson process without or with immigration, respectively. In Appendix B, we derive an explicit formula for the expectation of a second-order Galton–Watson process with immigration at time n and describe its asymptotic behavior as n → ∞. Appendix C is about the existence and estimation of higher-order moments of a second-order Galton–Watson process (without immigration). In Appendix D, we recall a representation of the unique stationary distribution of a 2-type Galton–Watson process with immigration. In Appendix E, we collect several results on regularly varying functions and distributions, to name a few of them: the convolution property, Karamata's theorem and Potter's bounds. Appendix F is devoted to recalling and (re)proving a result on large deviations for sums of non-negative independent and identically distributed regularly varying random variables due to Tang and Yan [26, part (ii) of Theorem 1]. Finally, in Appendix G, we present a variant of Proposition 2.3, where the initial values X_{−1} and X_0 are independent and regularly varying, together with a second type of proof, see Proposition G.1.

2 Tail behavior of the marginals of the stationary distribution of second-order Galton–Watson processes with immigration

Let (X_n)_{n≥−1} be a second-order Galton–Watson process with immigration given in (1.2), and let us consider the Markov chain (Y_n)_{n∈Z₊} given by

Y_n := (Y_{n,1}, Y_{n,2})^⊤ := (X_n, X_{n−1})^⊤ = ∑_{i=1}^{Y_{n−1,1}} (ξ_{n,i}, 1)^⊤ + ∑_{j=1}^{Y_{n−1,2}} (η_{n,j}, 0)^⊤ + (ε_n, 0)^⊤,  n ∈ N,

which is a (special) 2-type Galton–Watson process with immigration, and (e_1^⊤ Y_k)_{k∈Z₊} = (X_k)_{k∈Z₊}, (e_2^⊤ Y_{k+1})_{k≥−1} = (X_k)_{k≥−1} (for more details, see Appendix A). If m_ξ ∈ R₊₊, m_η ∈ R₊₊, m_ξ + m_η < 1, P(ε = 0) < 1 and E(1{ε≠0} log(ε)) < ∞, then there exists a unique stationary distribution 𝛑 for (Y_n)_{n∈Z₊}, see Appendix D, since then the offspring mean matrix M_{ξ,η} is primitive due to the fact that

M_{ξ,η}² = [m_ξ, m_η; 1, 0]² = [m_ξ² + m_η, m_ξ m_η; m_ξ, m_η]

has all entries in R₊₊ (rows of the matrices are separated by semicolons).
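The primitivity of M_{ξ,η} can be checked directly: although M_{ξ,η} itself has a zero entry, its square has all entries positive whenever m_ξ, m_η ∈ R₊₊. A minimal sketch with illustrative values:

```python
def mat_mul_2x2(a, b):
    """2x2 matrix product, written out to keep the sketch dependency-free."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

m_xi, m_eta = 0.3, 0.2                 # illustrative positive means, m_xi + m_eta < 1
M = [[m_xi, m_eta], [1.0, 0.0]]        # offspring mean matrix M_{xi,eta}
M2 = mat_mul_2x2(M, M)
# M2 equals [[m_xi**2 + m_eta, m_xi * m_eta], [m_xi, m_eta]]: every entry is
# positive, so M_{xi,eta} is primitive although M itself contains a zero.
print(M2)
```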


Moreover, the stationary distribution 𝛑 of (Y_n)_{n∈Z₊} has a representation

𝛑 =𝒟 ∑_{i=0}^∞ 𝐕_i^{(i)}(ε_i),

where (𝐕_k^{(i)}(ε_i))_{k∈Z₊}, i ∈ Z₊, are independent copies of a (special) 2-type Galton–Watson process (𝐕_k(ε))_{k∈Z₊} (without immigration) with initial vector 𝐕_0(ε) = (ε, 0)^⊤ and with the same offspring distributions as (Y_k)_{k∈Z₊}, and the series ∑_{i=0}^∞ 𝐕_i^{(i)}(ε_i) converges with probability 1, see Appendix D. Using the considerations for the backward representation in Appendix A, we have (e_1^⊤ 𝐕_k(ε))_{k∈Z₊} = (V_k(ε))_{k∈Z₊} and (e_2^⊤ 𝐕_{k+1}(ε))_{k≥−1} = (V_k(ε))_{k≥−1}, where (V_k(ε))_{k≥−1} is a second-order Galton–Watson process (without immigration) with initial values V_0(ε) = ε and V_{−1}(ε) = 0, and with the same offspring distributions as (X_k)_{k≥−1}. Consequently, the marginals of the stationary distribution 𝛑 coincide with one and the same distribution π, which admits the representation

π =𝒟 ∑_{i=0}^∞ V_i^{(i)}(ε_i),

where (V_k^{(i)}(ε_i))_{k≥−1}, i ∈ Z₊, are independent copies of (V_k(ε))_{k≥−1}. This follows also from the fact that the stationary distribution 𝛑 is the limit in distribution of Y_n as n → ∞ and

Y_n = (X_n, X_{n−1})^⊤,  n ∈ Z₊,

thus the coordinates of Y_n converge in distribution to the same distribution π as n → ∞. Note that (X_n)_{n≥−1} is only a second-order Markov chain, but not a Markov chain. Moreover, (X_n)_{n≥−1} is strictly stationary if and only if the distribution of the initial population sizes (X_0, X_{−1}) coincides with the stationary distribution 𝛑 of the Markov chain (Y_k)_{k∈Z₊}. Indeed, if (X_0, X_{−1})^⊤ =𝒟 𝛑, then Y_0 =𝒟 𝛑, thus (Y_k)_{k∈Z₊} is strictly stationary, and hence for each n, m ∈ Z₊, (Y_0, …, Y_n) =𝒟 (Y_m, …, Y_{n+m}), yielding

(X_0, X_{−1}, X_1, X_0, …, X_n, X_{n−1}) =𝒟 (X_m, X_{m−1}, X_{m+1}, X_m, …, X_{n+m}, X_{n+m−1}).

Especially, (X_{−1}, X_0, X_1, …, X_n) =𝒟 (X_{m−1}, X_m, X_{m+1}, …, X_{n+m}), hence (X_n)_{n≥−1} is strictly stationary. Since (X_m, X_{m−1}, X_{m+1}, X_m, …, X_{n+m}, X_{n+m−1}) is a continuous function of (X_{m−1}, X_m, X_{m+1}, …, X_{n+m}), these considerations work backwards as well. Consequently, 𝛑 is the unique stationary distribution of the second-order Markov chain (X_n)_{n≥−1}.

2.1 Theorem. Let (X_n)_{n≥−1} be a second-order Galton–Watson process with immigration such that m_ξ ∈ R₊₊, m_η ∈ R₊₊, m_ξ + m_η < 1 and ε is regularly varying with index α ∈ (0,2). In case of α ∈ [1,2), assume additionally that E(ξ²) < ∞ and E(η²) < ∞. Then the tail of the marginals π of the unique stationary distribution 𝛑 of (X_n)_{n≥−1} satisfies

π((x,∞)) ∼ ∑_{i=0}^∞ m_i^α P(ε > x)  as x → ∞,

where m_0 := 1 and

(2.1)  m_k := (λ₊^{k+1} − λ₋^{k+1})/(λ₊ − λ₋),  λ₊ := (m_ξ + √(m_ξ² + 4m_η))/2,  λ₋ := (m_ξ − √(m_ξ² + 4m_η))/2

for k ∈ N. Consequently, π is also regularly varying with index α.

Note that λ₊ and λ₋ are the eigenvalues of the offspring mean matrix M_{ξ,η} given in (B.2), related to the recursive formula (B.1) for the expectations E(X_n), n ∈ N. For each k ∈ Z₊, the assumptions m_ξ ∈ R₊₊ and m_η ∈ R₊₊ imply m_k ∈ R₊₊. Further, by (B.4), for all k ∈ Z₊, we have m_k = E(V_{k,0}), where (V_{n,0})_{n≥−1} is a second-order Galton–Watson process (without immigration) with initial values V_{0,0} = 1 and V_{−1,0} = 0, and with the same offspring distributions as (X_n)_{n≥−1}. Consequently, the series ∑_{i=0}^∞ m_i^α appearing in Theorem 2.1 is convergent, since for each i ∈ N, we have m_i = E(V_{i,0}) ≤ λ₊^i < 1 by (B.5) and the assumption m_ξ + m_η < 1.
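Since λ₊ and λ₋ are the roots of λ² = m_ξ λ + m_η, the sequence (m_k) in (2.1) also satisfies the linear recursion m_k = m_ξ m_{k−1} + m_η m_{k−2} with m_0 = 1 and m_1 = m_ξ. A sketch comparing the closed form with this recursion (illustrative parameter values):

```python
import math

def m_closed(k, m_xi, m_eta):
    """m_k from (2.1), via the eigenvalues lambda_+ and lambda_- of M_{xi,eta}."""
    d = math.sqrt(m_xi ** 2 + 4.0 * m_eta)
    lam_plus, lam_minus = (m_xi + d) / 2.0, (m_xi - d) / 2.0
    return (lam_plus ** (k + 1) - lam_minus ** (k + 1)) / (lam_plus - lam_minus)

def m_recursive(k_max, m_xi, m_eta):
    """Same sequence via m_k = m_xi*m_{k-1} + m_eta*m_{k-2}, m_0 = 1, m_1 = m_xi."""
    m = [1.0, m_xi]
    for _ in range(2, k_max + 1):
        m.append(m_xi * m[-1] + m_eta * m[-2])
    return m

m_xi, m_eta, alpha = 0.3, 0.2, 1.5     # illustrative values, m_xi + m_eta < 1
ms = m_recursive(20, m_xi, m_eta)
# The closed form and the recursion agree, and since lambda_+ < 1 the series
# sum_i m_i**alpha of Theorem 2.1 converges (here we print a partial sum).
print(max(abs(ms[k] - m_closed(k, m_xi, m_eta)) for k in range(21)))
print(sum(m ** alpha for m in ms))
```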

We point out that in Theorem 2.1 only the regular variation of the marginals π of 𝛑 is proved; the question of the joint regular variation of 𝛑 remains open.

2.2 Remark. Note that there is no overlap between the results in the recent paper of Bősze and Pap [5] on non-stationary second-order Galton–Watson processes with immigration and in the present paper. In [5] the authors always suppose that the initial values X_0 and X_{−1} of a second-order Galton–Watson process with immigration (X_n)_{n≥−1} are independent, so in the results of [5] the distribution of (X_0, X_{−1}) cannot be chosen as the unique stationary distribution 𝛑, since the marginals of 𝛑 are not independent in general. ✷

For the proof of Theorem 2.1, we need an auxiliary result on the tail behaviour of second-order Galton–Watson processes (without immigration) having regularly varying initial distributions.

2.3 Proposition. Let (X_n)_{n≥−1} be a second-order Galton–Watson process (without immigration) such that X_0 is regularly varying with index β_0 ∈ R₊, X_{−1} = 0, m_ξ ∈ R₊₊ and m_η ∈ R₊. In case of β_0 ∈ [1,∞), assume additionally that there exists r ∈ (β_0,∞) with E(ξ^r) < ∞ and E(η^r) < ∞. Then for all n ∈ N,

P(X_n > x) ∼ m_n^{β_0} P(X_0 > x)  as x → ∞,

where m_i, i ∈ Z₊, are given in Theorem 2.1, and hence X_n is also regularly varying with index β_0 for each n ∈ N.

Proof of Proposition 2.3. Let us fix n ∈ N. In view of the additive property (A.4), it is sufficient to prove

P( ∑_{i=1}^{X_0} ζ_{i,0}(n) > x ) ∼ m_n^{β_0} P(X_0 > x)  as x → ∞.

This relation follows from Proposition E.13, since E(ζ_{1,0}(n)) = m_n ∈ R₊₊, n ∈ N, by (B.4). ✷

In Appendix G, we present a variant of Proposition 2.3, where the initial values X_{−1} and X_0 are independent and regularly varying, together with a second type of proof, see Proposition G.1.

Proof of Theorem 2.1. First, note that, by Lemma E.5, E(1{ε≠0} log(ε)) < ∞. We will use the ideas of the proof of Theorem 2.1.1 in Basrak et al. [3]. Due to the representation (A.4), for each i ∈ Z₊, we have

V_i^{(i)}(ε_i) =𝒟 ∑_{j=1}^{ε_i} ζ_{j,0}(i),

where {ε_i, ζ_{j,0}(i) : j ∈ N} are independent random variables such that {ζ_{j,0}(i) : j ∈ N} are independent copies of V_{i,0}, where (V_{k,0})_{k≥−1} is a second-order Galton–Watson process (without immigration) with initial values V_{0,0} = 1 and V_{−1,0} = 0, and with the same offspring distributions as (X_k)_{k≥−1}. For each i ∈ Z₊, by Proposition 2.3, we obtain P(V_i^{(i)}(ε_i) > x) ∼ m_i^α P(ε > x) as x → ∞, yielding that the random variables V_i^{(i)}(ε_i), i ∈ Z₊, are also regularly varying with index α. Since V_i^{(i)}(ε_i), i ∈ Z₊, are independent, for each n ∈ Z₊, by Lemma E.10, we have

(2.2)  P( ∑_{i=0}^n V_i^{(i)}(ε_i) > x ) ∼ ∑_{i=0}^n m_i^α P(ε > x)  as x → ∞,

and hence the random variables ∑_{i=0}^n V_i^{(i)}(ε_i), n ∈ Z₊, are also regularly varying with index α. For each n ∈ N, using that V_i^{(i)}(ε_i), i ∈ Z₊, are non-negative, we have

liminf_{x→∞} π((x,∞))/P(ε > x) = liminf_{x→∞} P( ∑_{i=0}^∞ V_i^{(i)}(ε_i) > x )/P(ε > x) ≥ liminf_{x→∞} P( ∑_{i=0}^n V_i^{(i)}(ε_i) > x )/P(ε > x) = ∑_{i=0}^n m_i^α,

hence, letting n → ∞, we obtain

(2.3)  liminf_{x→∞} π((x,∞))/P(ε > x) ≥ ∑_{i=0}^∞ m_i^α.

Moreover, for each n ∈ N and q ∈ (0,1), we have

limsup_{x→∞} π((x,∞))/P(ε > x) = limsup_{x→∞} P( ∑_{i=0}^{n−1} V_i^{(i)}(ε_i) + ∑_{i=n}^∞ V_i^{(i)}(ε_i) > x )/P(ε > x)
≤ limsup_{x→∞} [ P( ∑_{i=0}^{n−1} V_i^{(i)}(ε_i) > (1−q)x ) + P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > qx ) ]/P(ε > x) ≤ L_{1,n}(q) + L_{2,n}(q)

with

L_{1,n}(q) := limsup_{x→∞} P( ∑_{i=0}^{n−1} V_i^{(i)}(ε_i) > (1−q)x )/P(ε > x),  L_{2,n}(q) := limsup_{x→∞} P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > qx )/P(ε > x).

Since ε is regularly varying with index α, by (2.2), we obtain

L_{1,n}(q) = limsup_{x→∞} [ P( ∑_{i=0}^{n−1} V_i^{(i)}(ε_i) > (1−q)x )/P(ε > (1−q)x) ] · [ P(ε > (1−q)x)/P(ε > x) ] = (1−q)^{−α} ∑_{i=0}^{n−1} m_i^α

and

L_{2,n}(q) = limsup_{x→∞} [ P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > qx )/P(ε > qx) ] · [ P(ε > qx)/P(ε > x) ] = q^{−α} limsup_{x→∞} P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > qx )/P(ε > qx),

and hence

lim_{n→∞} L_{1,n}(q) = (1−q)^{−α} ∑_{i=0}^∞ m_i^α,  lim_{n→∞} L_{2,n}(q) = q^{−α} lim_{n→∞} limsup_{x→∞} P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > qx )/P(ε > qx).

The aim of the following discussion is to show

(2.4)  lim_{n→∞} limsup_{x→∞} P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > qx )/P(ε > qx) = 0,  q ∈ (0,1).

First, we consider the case α ∈ (0,1). For each x ∈ R₊₊, n ∈ N and δ ∈ (0,1), we have

P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > x )
= P( ∑_{i≥n} V_i^{(i)}(ε_i) > x, sup_{i≥n} ϱ^i ε_i > (1−δ)x ) + P( ∑_{i≥n} V_i^{(i)}(ε_i) > x, sup_{i≥n} ϱ^i ε_i ≤ (1−δ)x )
= P( ∑_{i≥n} V_i^{(i)}(ε_i) > x, sup_{i≥n} ϱ^i ε_i > (1−δ)x ) + P( ∑_{i≥n} V_i^{(i)}(ε_i) 1{ε_i ≤ (1−δ)ϱ^{−i}x} > x, sup_{i≥n} ϱ^i ε_i ≤ (1−δ)x )
≤ P( sup_{i≥n} ϱ^i ε_i > (1−δ)x ) + P( ∑_{i≥n} V_i^{(i)}(ε_i) 1{ε_i ≤ (1−δ)ϱ^{−i}x} > x )
=: P_{1,n}(x, δ) + P_{2,n}(x, δ),

where ϱ is given in (B.6). By the subadditivity of probability,

P_{1,n}(x, δ) ≤ ∑_{i≥n} P(ϱ^i ε_i > (1−δ)x) = ∑_{i≥n} P(ε > (1−δ)ϱ^{−i}x).

Using Potter's upper bound (see Lemma E.12), for δ ∈ (0, α/2), there exists x_0 ∈ R₊₊ such that

(2.5)  P(ε > (1−δ)ϱ^{−i}x)/P(ε > x) < (1+δ)[(1−δ)ϱ^{−i}]^{−α+δ} < (1+δ)[(1−δ)ϱ^{−i}]^{−α/2}

if x ∈ [x_0,∞) and (1−δ)ϱ^{−i} ∈ [1,∞), which holds for sufficiently large i ∈ N due to ϱ ∈ (0,1). Consequently, if δ ∈ (0, α/2), then

lim_{n→∞} limsup_{x→∞} P_{1,n}(x, δ)/P(ε > x) ≤ lim_{n→∞} ∑_{i≥n} (1+δ)[(1−δ)ϱ^{−i}]^{−α/2} = 0,

since ϱ^{α/2} < 1 (due to ϱ ∈ (0,1)) yields ∑_{i=0}^∞ (ϱ^{−i})^{−α/2} < ∞. Now we turn to prove that lim_{n→∞} limsup_{x→∞} P_{2,n}(x, δ)/P(ε > x) = 0. By Markov's inequality,

P_{2,n}(x, δ) ≤ (1/x) ∑_{i≥n} E( V_i^{(i)}(ε_i) 1{ε_i ≤ (1−δ)ϱ^{−i}x} ).

By the representation V_i^{(i)}(ε_i) =𝒟 ∑_{j=1}^{ε_i} ζ_{j,0}(i), we have

E( V_i^{(i)}(ε_i) 1{ε_i ≤ (1−δ)ϱ^{−i}x} ) = E( ∑_{j=1}^{ε_i} ζ_{j,0}(i) 1{ε_i ≤ (1−δ)ϱ^{−i}x} ) = E[ E( ∑_{j=1}^{ε_i} ζ_{j,0}(i) 1{ε_i ≤ (1−δ)ϱ^{−i}x} | ε_i ) ] = E( ∑_{j=1}^{ε_i} E(ζ_{1,0}(i)) 1{ε_i ≤ (1−δ)ϱ^{−i}x} ) = E(ζ_{1,0}(i)) E( ε_i 1{ε_i ≤ (1−δ)ϱ^{−i}x} ),

since {ζ_{j,0}(i) : j ∈ N} and ε_i are independent. Moreover,

E( ε_i 1{ε_i ≤ (1−δ)ϱ^{−i}x} ) = E( ε 1{ε ≤ (1−δ)ϱ^{−i}x} ) = ∫_0^∞ P( ε 1{ε ≤ (1−δ)ϱ^{−i}x} > t ) dt = ∫_0^{(1−δ)ϱ^{−i}x} P( t < ε ≤ (1−δ)ϱ^{−i}x ) dt ≤ ∫_0^{(1−δ)ϱ^{−i}x} P(ε > t) dt.

By Karamata's theorem (see Theorem E.11), we have

lim_{y→∞} [ ∫_0^y P(ε > t) dt ] / [ y P(ε > y) ] = 1/(1−α),

thus there exists y_0 ∈ R₊₊ such that

∫_0^y P(ε > t) dt ≤ 2y P(ε > y)/(1−α),  y ∈ [y_0,∞),

hence

∫_0^{(1−δ)ϱ^{−i}x} P(ε > t) dt ≤ 2(1−δ)ϱ^{−i} x P(ε > (1−δ)ϱ^{−i}x)/(1−α)

whenever (1−δ)ϱ^{−i}x ∈ [y_0,∞), which holds for i ≥ n with sufficiently large n ∈ N and x ∈ [(1−δ)^{−1}ϱ^n y_0, ∞) due to ϱ ∈ (0,1). Thus, for sufficiently large n ∈ N and x ∈ [(1−δ)^{−1}ϱ^n y_0, ∞), we obtain

P_{2,n}(x, δ)/P(ε > x) ≤ [1/(x P(ε > x))] ∑_{i≥n} E(ζ_{1,0}(i)) ∫_0^{(1−δ)ϱ^{−i}x} P(ε > t) dt ≤ [2(1−δ)/(1−α)] ∑_{i≥n} P(ε > (1−δ)ϱ^{−i}x)/P(ε > x),

since E(ζ_{1,0}(i)) ≤ ϱ^i, i ∈ Z₊, by (B.5) and ζ_{1,0}(0) = 1. Using (2.5), we get

P_{2,n}(x, δ)/P(ε > x) ≤ [2(1−δ)/(1−α)] ∑_{i≥n} (1+δ)[(1−δ)ϱ^{−i}]^{−α/2}

for δ ∈ (0, α/2), for sufficiently large n ∈ N and for all x ∈ [max(x_0, (1−δ)^{−1}ϱ^n y_0), ∞). Hence for δ ∈ (0, α/2) we have

lim_{n→∞} limsup_{x→∞} P_{2,n}(x, δ)/P(ε > x) ≤ lim_{n→∞} [2(1−δ²)/(1−α)] ∑_{i≥n} [(1−δ)ϱ^{−i}]^{−α/2} = 0,

where the last step follows from the fact that the series ∑_{i=0}^∞ (ϱ^i)^{α/2} is convergent, since ϱ ∈ (0,1). Consequently, due to the fact that P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > x ) ≤ P_{1,n}(x, δ) + P_{2,n}(x, δ), x ∈ R₊₊, n ∈ N, δ ∈ (0,1), we obtain (2.4), and we conclude lim_{n→∞} L_{2,n}(q) = 0 for all q ∈ (0,1). Thus we obtain

limsup_{x→∞} π((x,∞))/P(ε > x) ≤ lim_{n→∞} L_{1,n}(q) + lim_{n→∞} L_{2,n}(q) = (1−q)^{−α} ∑_{i=0}^∞ m_i^α

for all q ∈ (0,1). Letting q ↓ 0, this yields

limsup_{x→∞} π((x,∞))/P(ε > x) ≤ ∑_{i=0}^∞ m_i^α.

Taking into account (2.3), the proof of Theorem 2.1 is complete in case of α ∈ (0,1).
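The two analytic ingredients above — Potter's upper bound (2.5) and the Karamata estimate for ∫_0^y P(ε > t) dt — can be made concrete for the exact Pareto-type tail P(ε > t) = min(1, t^{−α}), an illustrative choice for which both quantities are available in closed form:

```python
def pareto_tail(t, alpha):
    """Exact Pareto-type tail P(eps > t) = min(1, t**(-alpha))."""
    return 1.0 if t <= 1.0 else t ** -alpha

def integrated_tail(y, alpha):
    """int_0^y P(eps > t) dt for the Pareto-type tail, alpha in (0,1), closed form."""
    return y if y <= 1.0 else 1.0 + (y ** (1.0 - alpha) - 1.0) / (1.0 - alpha)

alpha = 0.5
# Karamata: int_0^y P(eps > t) dt ~ y * P(eps > y) / (1 - alpha) as y -> infinity.
ratios = [integrated_tail(y, alpha) / (y * pareto_tail(y, alpha) / (1.0 - alpha))
          for y in (1e2, 1e4, 1e6)]
print(ratios)                      # approaches 1

# Potter-type bound: for the exact Pareto tail the ratio in (2.5) is a pure power,
# P(eps > c*x) / P(eps > x) = c**(-alpha) for x >= 1 and c >= 1.
x, c = 10.0, 4.0                   # c plays the role of (1 - delta) * rho**(-i)
potter_ratio = pareto_tail(c * x, alpha) / pareto_tail(x, alpha)
print(potter_ratio, c ** -alpha)
```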

Next, we consider the case α ∈ [1,2). Note that (2.4) is equivalent to

lim_{n→∞} limsup_{x→∞} P( ∑_{i=n}^∞ V_i^{(i)}(ε_i) > √x )/P(ε > √x) = lim_{n→∞} limsup_{x→∞} P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) )² > x )/P(ε² > x) = 0.

Repeating a similar argument as for α ∈ (0,1), we obtain

P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) )² > x )
= P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) )² > x, sup_{i≥n} ϱ^{2i} ε_i² > (1−δ)x ) + P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) )² > x, sup_{i≥n} ϱ^{2i} ε_i² ≤ (1−δ)x )
= P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) )² > x, sup_{i≥n} ϱ^{2i} ε_i² > (1−δ)x ) + P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} )² > x, sup_{i≥n} ϱ^{2i} ε_i² ≤ (1−δ)x )
≤ P( sup_{i≥n} ϱ^{2i} ε_i² > (1−δ)x ) + P( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} )² > x )
=: P_{1,n}(x, δ) + P_{2,n}(x, δ)

for each x ∈ R₊₊, n ∈ N and δ ∈ (0,1). By the subadditivity of probability,

P_{1,n}(x, δ) ≤ ∑_{i=n}^∞ P(ϱ^{2i} ε_i² > (1−δ)x) = ∑_{i=n}^∞ P(ε² > (1−δ)ϱ^{−2i}x)

for each x ∈ R₊₊, n ∈ N and δ ∈ (0,1). Since ε² is regularly varying with index α/2 (see Lemma E.3), using Potter's upper bound (see Lemma E.12), for δ ∈ (0, α/4), there exists x_0 ∈ R₊₊ such that

(2.6)  P(ε² > (1−δ)ϱ^{−2i}x)/P(ε² > x) < (1+δ)[(1−δ)ϱ^{−2i}]^{−α/2+δ} < (1+δ)[(1−δ)ϱ^{−2i}]^{−α/4}

if x ∈ [x_0,∞) and (1−δ)ϱ^{−2i} ∈ [1,∞), which holds for sufficiently large i ∈ N (due to ϱ ∈ (0,1)). Consequently, if δ ∈ (0, α/4), then

lim_{n→∞} limsup_{x→∞} P_{1,n}(x, δ)/P(ε² > x) ≤ lim_{n→∞} ∑_{i=n}^∞ (1+δ)[(1−δ)ϱ^{−2i}]^{−α/4} = 0,

since ϱ^{α/2} < 1 (due to ϱ ∈ (0,1)). By Markov's inequality, for x ∈ R₊₊, n ∈ N and δ ∈ (0,1), we have

P_{2,n}(x, δ)/P(ε² > x) ≤ [1/(x P(ε² > x))] E( ( ∑_{i=n}^∞ V_i^{(i)}(ε_i) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} )² )
= [1/(x P(ε² > x))] E( ∑_{i=n}^∞ V_i^{(i)}(ε_i)² 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} ) + [1/(x P(ε² > x))] E( ∑_{i,j=n, i≠j}^∞ V_i^{(i)}(ε_i) V_j^{(j)}(ε_j) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} 1{ε_j² ≤ (1−δ)ϱ^{−2j}x} )
=: J_{2,1,n}(x, δ) + J_{2,2,n}(x, δ)

for each x ∈ R₊₊, n ∈ N and δ ∈ (0,1). By Lemma C.2, (B.4) and (B.5) with X_0 = 1 and X_{−1} = 0, we have

E( V_i^{(i)}(n)² ) = E( ( ∑_{j=1}^n ζ_{j,0}(i) )² ) = ∑_{j=1}^n E( (ζ_{j,0}(i))² ) + ∑_{j,ℓ=1, j≠ℓ}^n E(ζ_{j,0}(i)) E(ζ_{ℓ,0}(i)) ≤ c_sub ∑_{j=1}^n ϱ^i + ∑_{j,ℓ=1, j≠ℓ}^n ϱ^i ϱ^i = c_sub n ϱ^i + (n² − n) ϱ^{2i} ≤ c_sub n ϱ^i + n² ϱ^{2i}

for i, n ∈ N. Hence, using that (ε_i, V_i^{(i)}(ε_i)) =𝒟 (ε_i, ∑_{j=1}^{ε_i} ζ_{j,0}(i)) and that ε_i and {ζ_{j,0}(i) : j ∈ N} are independent, we have

J_{2,1,n}(x, δ) = ∑_{i=n}^∞ E( V_i^{(i)}(ε_i)² 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} )/(x P(ε² > x))
= ∑_{i=n}^∞ E( ( ∑_{j=1}^{ε_i} ζ_{j,0}(i) )² 1{ε_i ≤ (1−δ)^{1/2} ϱ^{−i} x^{1/2}} )/(x P(ε² > x))
= ∑_{i=n}^∞ ∑_{0 ≤ ℓ ≤ (1−δ)^{1/2} ϱ^{−i} x^{1/2}} E( ( ∑_{j=1}^ℓ ζ_{j,0}(i) )² ) P(ε_i = ℓ)/(x P(ε² > x))
≤ ∑_{i=n}^∞ ∑_{0 ≤ ℓ ≤ (1−δ)^{1/2} ϱ^{−i} x^{1/2}} (c_sub ϱ^i ℓ + ϱ^{2i} ℓ²) P(ε = ℓ)/(x P(ε² > x))
= ∑_{i=n}^∞ c_sub ϱ^i E( ε 1{ε² ≤ (1−δ)ϱ^{−2i}x} )/(x P(ε² > x)) + ∑_{i=n}^∞ ϱ^{2i} E( ε² 1{ε² ≤ (1−δ)ϱ^{−2i}x} )/(x P(ε² > x))
=: J_{2,1,1,n}(x, δ) + J_{2,1,2,n}(x, δ).

Since ε² is regularly varying with index α/2 ∈ [1/2, 1) (see Lemma E.3), by Karamata's theorem (see Theorem E.11), we have

lim_{y→∞} [ ∫_0^y P(ε² > t) dt ] / [ y P(ε² > y) ] = 1/(1 − α/2),

thus there exists y_0 ∈ R₊₊ such that

∫_0^y P(ε² > t) dt ≤ 2y P(ε² > y)/(1 − α/2),  y ∈ [y_0,∞),

hence

E( ε² 1{ε² ≤ (1−δ)ϱ^{−2i}x} ) = ∫_0^∞ P( ε² 1{ε² ≤ (1−δ)ϱ^{−2i}x} > y ) dy = ∫_0^{(1−δ)ϱ^{−2i}x} P( y < ε² ≤ (1−δ)ϱ^{−2i}x ) dy ≤ ∫_0^{(1−δ)ϱ^{−2i}x} P(ε² > t) dt ≤ 2(1−δ)ϱ^{−2i} x P(ε² > (1−δ)ϱ^{−2i}x)/(1 − α/2)

whenever (1−δ)ϱ^{−2i}x ∈ [y_0,∞), which holds for i ≥ n with sufficiently large n ∈ N and x ∈ [(1−δ)^{−1}ϱ^{2n} y_0, ∞) due to ϱ ∈ (0,1). Thus for δ ∈ (0, α/4), for sufficiently large n ∈ N (satisfying (1−δ)ϱ^{−2n} ∈ (1,∞) as well) and for all x ∈ [max(x_0, (1−δ)^{−1}ϱ^{2n} y_0), ∞), using (2.6), we obtain

J_{2,1,2,n}(x, δ) ≤ [2(1−δ)/(1 − α/2)] ∑_{i=n}^∞ P(ε² > (1−δ)ϱ^{−2i}x)/P(ε² > x) ≤ [2(1−δ)/(1 − α/2)] ∑_{i=n}^∞ (1+δ)[(1−δ)ϱ^{−2i}]^{−α/4} = [2(1−δ²)/(1 − α/2)] ∑_{i=n}^∞ [(1−δ)ϱ^{−2i}]^{−α/4}.

Hence for δ ∈ (0, α/4), we have

lim_{n→∞} limsup_{x→∞} J_{2,1,2,n}(x, δ) ≤ [2(1−δ²)/(1 − α/2)] lim_{n→∞} ∑_{i=n}^∞ [(1−δ)ϱ^{−2i}]^{−α/4} = 0,

yielding lim_{n→∞} limsup_{x→∞} J_{2,1,2,n}(x, δ) = 0 for δ ∈ (0, α/4). Further, if α ∈ (1,2), or α = 1 and m_ε < ∞, we have

J_{2,1,1,n}(x, δ) ≤ c_sub ( ∑_{i=n}^∞ ϱ^i ) m_ε/(x P(ε² > x)),

and hence, using that lim_{x→∞} x P(ε² > x) = ∞ (see Lemma E.4),

lim_{n→∞} limsup_{x→∞} J_{2,1,1,n}(x, δ) ≤ c_sub m_ε ( lim_{n→∞} ∑_{i=n}^∞ ϱ^i ) limsup_{x→∞} [1/(x P(ε² > x))] = 0,

yielding lim_{n→∞} limsup_{x→∞} J_{2,1,1,n}(x, δ) = 0 for δ ∈ (0,1).

If α = 1 and m_ε = ∞, then we have

J_{2,1,1,n}(x, δ) = ∑_{i=n}^∞ c_sub ϱ^i E( ε 1{ε ≤ (1−δ)^{1/2} ϱ^{−i} x^{1/2}} )/(x P(ε² > x))

for x ∈ R₊₊, n ∈ N and δ ∈ (0,1). Note that

E( ε 1{ε ≤ y} ) ≤ ∫_0^∞ P( ε 1{ε ≤ y} > t ) dt = ∫_0^y P(t < ε ≤ y) dt ≤ ∫_0^y P(ε > t) dt =: L̃(y)

for y ∈ R₊. Because of α = 1, Proposition 1.5.9a in Bingham et al. [4] yields that L̃ is a slowly varying function (at infinity). By Potter's bounds (see Lemma E.12), for every δ ∈ R₊₊, there exists z_0 ∈ R₊₊ such that

L̃(y)/L̃(z) < (1+δ)(y/z)^δ  for z ≥ z_0 and y ≥ z.

Hence, for x ≥ z_0², we have

E( ε 1{ε ≤ (1−δ)^{1/2} ϱ^{−i} x^{1/2}} ) ≤ L̃( (1−δ)^{1/2} ϱ^{−i} x^{1/2} ) ≤ L̃(ϱ^{−i} x^{1/2}) ≤ (1+δ) ϱ^{−iδ} L̃(x^{1/2}),  i ≥ n,

where we also used that L̃ is monotone increasing. Using this, we conclude that for every δ ∈ R₊₊, there exists z_0 ∈ R₊₊ such that for x ≥ z_0², we have

J_{2,1,1,n}(x, δ) ≤ (1+δ) c_sub [ L̃(x^{1/2})/(x P(ε² > x)) ] ∑_{i=n}^∞ ϱ^{(1−δ)i}.

Here, since ϱ ∈ (0,1) and δ ∈ (0,1), we have lim_{n→∞} ∑_{i=n}^∞ ϱ^{(1−δ)i} = 0, and

L̃(√x)/(x P(ε² > x)) = [ L̃(√x)/x^{1/4} ] · [ 1/(x^{3/4} P(ε > √x)) ] → 0  as x → ∞,

by Lemma E.4, due to the fact that L̃ is slowly varying and the function R₊₊ ∋ x ↦ P(ε > √x) is regularly varying with index −1/2. Hence lim_{n→∞} limsup_{x→∞} J_{2,1,1,n}(x, δ) = 0 for δ ∈ (0,1) in case of α = 1 and m_ε = ∞.

Consequently, we have lim_{n→∞} limsup_{x→∞} J_{2,1,n}(x, δ) = 0 for δ ∈ (0, α/4).

Now we turn to prove lim_{n→∞} limsup_{x→∞} J_{2,2,n}(x, δ) = 0 for δ ∈ (0,1). Using that {(ε_i, V_i^{(i)}(ε_i)) : i ∈ N} are independent, we have

J_{2,2,n}(x, δ) ≤ [1/(x P(ε² > x))] ∑_{i,j=n, i≠j}^∞ E( V_i^{(i)}(ε_i) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} ) E( V_j^{(j)}(ε_j) 1{ε_j² ≤ (1−δ)ϱ^{−2j}x} ).

Here, using that (ε_i, V_i^{(i)}(ε_i)) =𝒟 (ε_i, ∑_{j=1}^{ε_i} ζ_{j,0}(i)), where ε_i and {ζ_{j,0}(i) : j ∈ N} are independent, and (B.5) with X_0 = 1 and X_{−1} = 0, we have

E( V_i^{(i)}(ε_i) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} ) = E( ( ∑_{j=1}^{ε_i} ζ_{j,0}(i) ) 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} ) = ∑_{ℓ=0}^{⌊(1−δ)^{1/2} ϱ^{−i} x^{1/2}⌋} E( ∑_{j=1}^ℓ ζ_{j,0}(i) ) P(ε_i = ℓ) ≤ ∑_{ℓ=0}^{⌊(1−δ)^{1/2} ϱ^{−i} x^{1/2}⌋} ℓ ϱ^i P(ε_i = ℓ) = ϱ^i E( ε_i 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} )

for x ∈ R₊₊ and δ ∈ (0,1). If α ∈ (1,2), or α = 1 and m_ε < ∞, then

J_{2,2,n}(x, δ) ≤ [1/(x P(ε² > x))] ∑_{i,j=n, i≠j}^∞ ϱ^{i+j} E( ε_i 1{ε_i² ≤ (1−δ)ϱ^{−2i}x} ) E( ε_j 1{ε_j² ≤ (1−δ)ϱ^{−2j}x} ) ≤ [m_ε²/(x P(ε² > x))] ∑_{i,j=n, i≠j}^∞ ϱ^{i+j} ≤ [m_ε²/(x P(ε² > x))] ( ∑_{i=n}^∞ ϱ^i )²

for x ∈ R₊₊ and δ ∈ (0,1), and then, by Lemma E.4,

lim_{n→∞} limsup_{x→∞} J_{2,2,n}(x, δ) ≤ m_ε² ( lim_{n→∞} ( ∑_{i=n}^∞ ϱ^i )² ) limsup_{x→∞} [1/(x P(ε² > x))] = m_ε² ( lim_{n→∞} ϱ^{2n}/(1−ϱ)² ) · 0 = 0,

yielding that lim_{n→∞} limsup_{x→∞} J_{2,2,n}(x, δ) = 0.

If α = 1 and m_ε = ∞, then we can apply the same argument as for J_{2,1,1,n}(x, δ). Namely,

J_{2,2,n}(x, δ) ≤ [(1+δ)²/(x P(ε² > x))] ∑_{i,j=n, i≠j}^∞ ϱ^{(1−δ)(i+j)} (L̃(x^{1/2}))² ≤ (1+δ)² [ (L̃(x^{1/2}))²/(x P(ε² > x)) ] ( ∑_{i=n}^∞ ϱ^{(1−δ)i} )²

for x ∈ R₊₊ and δ ∈ (0,1), where

(L̃(x^{1/2}))²/(x P(ε² > x)) = [ L̃(x^{1/2})/x^{1/8} ]² · [ 1/(x^{3/4} P(ε > √x)) ] → 0  as x → ∞,

yielding that lim_{n→∞} limsup_{x→∞} J_{2,2,n}(x, δ) = 0 for δ ∈ (0,1) in case of α = 1 and m_ε = ∞ as well.

Consequently, lim_{n→∞} limsup_{x→∞} P_{2,n}(x, δ)/P(ε² > x) = 0 for δ ∈ (0, α/4), yielding (2.4) in case of α ∈ [1,2) as well, and we conclude lim_{n→∞} L_{2,n}(q) = 0 for all q ∈ (0,1). The proof can be finished as in case of α ∈ (0,1). ✷
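The function L̃ appearing in the case α = 1, m_ε = ∞ can also be made concrete: for the illustrative tail P(ε > t) = min(1, 1/t) one gets L̃(y) = ∫_0^y P(ε > t) dt = 1 + log(y) for y ≥ 1, which is slowly varying, and a ratio of the kind L̃(√x)/x^{1/4} used in the proof indeed tends to 0:

```python
import math

def L_tilde(y):
    """L~(y) = int_0^y min(1, 1/t) dt = 1 + log(y) for y >= 1 (tail index alpha = 1)."""
    return y if y <= 1.0 else 1.0 + math.log(y)

# Slow variation: L~(c*y) / L~(y) -> 1 as y -> infinity for every fixed c > 0.
c = 100.0
sv_ratios = [L_tilde(c * y) / L_tilde(y) for y in (1e2, 1e6, 1e12)]
print(sv_ratios)

# Any positive power of x beats a slowly varying function, e.g. the factor
# L~(sqrt(x)) / x**(1/4) used against J_{2,1,1,n} tends to 0:
x = 1e16
print(L_tilde(math.sqrt(x)) / x ** 0.25)
```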

2.4 Remark. The statement of Theorem 2.1 remains true in the case when mξ ∈(0,1) and mη = 0. In this case we get the statement for classical Galton–Watson processes, see Theorem 2.1.1 in Basrak et al. [3] or Theorem 1.1. However, note that this is not a special case of Theorem 2.1, since in this case the mean matrix Mξ,η is not primitive. ✷

Appendices

A Representations of second-order Galton–Watson processes without or with immigration

First, we recall a representation of a second-order Galton–Watson process without or with immigration as a (special) 2-type Galton–Watson process without or with immigration, respectively. Let (X_n)_{n≥−1} be a second-order Galton–Watson process with immigration given in
