
Contents

1 Introduction
2 Basic definitions
2.1 Initial value problem
2.2 Operator semigroups
2.3 Operator splitting
3 Classical splittings
3.1 Linear problems
3.2 Nonlinear problems
3.2.1 Preliminary estimations
3.2.2 Convergence of sequential splitting
3.2.3 Convergence of additive splitting
3.2.4 Convergence of Marchuk–Strang type and symmetrically weighted splittings
3.2.5 Examples
3.2.6 Summary
3.3 Combined methods
3.3.1 Linear problems
3.3.2 Nonlinear problems
3.3.3 Numerical experiments
3.3.4 Summary
4 Iterative splittings
4.1 Iterative splitting for linear problems
4.1.1 The two-level method
4.1.2 The k-level method
4.1.3 The family of methods
4.1.4 Numerical experiments
4.1.5 Summary
4.2 Nonlinear problems
4.2.1 The waveform relaxation method for semi-linear partial differential equations
4.2.2 The iteration error
4.2.3 Reaction–diffusion and reaction–advection problems
4.2.4 The iteration error for reaction–diffusion problems in a discrete framework
4.2.5 Windowing
4.3 The waveform relaxation method combined with numerical methods
4.3.1 Convergence of the combined method
4.3.2 Reaction–diffusion and reaction–advection problems
4.4 Numerical experiments for the waveform relaxation method
4.4.1 The Fisher-equation
4.4.2 Autocatalysis
4.5 Summary

1 Introduction

Ordinary and partial differential equations appear in many areas of science and engineering as mathematical models of a large variety of natural phenomena. In fluid dynamics, air pollution transport, or reaction kinetics, for example, the main objective of mathematical research is to find the solution of the given system of equations. In many cases the evolution of a quantity in time is sought, where at a given point in time the quantity can be a real number, a set of numbers, or, in the most abstract form, a function of other variables, such as a distribution in space. Even in the simplest cases it can be impossible to find the symbolic solution of the given differential equation; therefore, methods that can provide an approximate solution are the focus of much scientific work. A large variety of numerical methods have been developed that perform satisfactorily for different types of problems; however, no universal numerical scheme exists that would generate an approximation of the required accuracy in reasonable time. Thus, to handle a complex system containing terms of different characteristics, one needs additional tools.

Operator splitting is a family of methods that are widely used to solve large systems of differential equations describing the evolution of the given variables in time. The splitting procedure replaces the original complex problem with a set of subproblems that are simpler from the point of view of numerical treatment. For the subproblems different, suitable numerical methods can be used; thus an approximation of prescribed accuracy can be produced in reasonable time for each subproblem. A solution for the original problem can then be derived from those of the subproblems.

Operator splitting was proposed for the first time in 1957: Bagrinovskii and Godunov [3] applied the method to partial differential equations. Yanenko [55] used it in the solution of the heat equation and Dyakonov [17] for a larger class of linear equations. Since then many scientific works have been devoted to examining the method from different aspects: Marchuk [46] and Strang [53], Csomós et al.

[15], and Faragó et al. [24] developed different versions of splitting; Bjørhus [8] gave general results for linear problems with unbounded operators; [22] studied the convergence of the method. In practice the subproblems are solved numerically, thus it is crucial to analyze the interaction between splitting and numerical schemes in these combined methods, see e.g. [14] and [4]. Applications of operator splitting methods can be found, among many others, in [1], [2], [30], [45] for reaction–diffusion problems, and in [57] for the more complex problem of modeling air pollution transport.

An important field of application is reaction–diffusion problems. Reaction–diffusion equations arise in many areas of the natural sciences. They are important in chemistry, for example in the description of pattern formation; in meteorology, in the modeling of air pollution; in chemical engineering, in the modeling of reactors; in the chemical industry, in the description of technological processes; in mechanical engineering design, in combustion processes and flame propagation; in biochemistry, in the study of metabolic processes; and in water-quality modeling. Since in these cases the number of unknown functions is large and the number of spatial dimensions can be two or three, one needs efficient approximation methods to solve these systems of partial differential equations.

There is a natural way to apply operator splitting to reaction–diffusion equations. The original problem can be split into two subproblems corresponding to the two physical processes modeled by the equation: reaction and diffusion. Thus one obtains a problem that is usually a system of ordinary differential equations, a kinetic differential equation, and a system of partial differential equations that describes the diffusion. These subproblems have different properties, and different methods can be used to solve them efficiently.

One possible approach to studying reaction–diffusion equations, which is commonly followed in the literature, is to apply a spatial discretization to the problem, thereby replacing a system of partial differential equations by a system of ordinary differential equations. Although the obtained problem has substantially different characteristics, we can fruitfully use the results from its study for the original continuous problem.
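To make this discretized setting concrete, the sketch below applies the method of lines to the one-dimensional model equation $u_t = u_{xx} + f(u)$ and exposes the two suboperators that a splitting method would treat separately. It is a hypothetical illustration only; the grid size, boundary conditions, and the logistic reaction term are chosen here for demonstration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical illustration: method-of-lines discretization of
#   u_t = u_xx + f(u)  on [0, 1] with homogeneous Dirichlet boundary values.
n = 100                      # number of interior grid points
h = 1.0 / (n + 1)            # mesh width
x = np.linspace(h, 1 - h, n)

def diffusion(u):
    """Second-order finite-difference Laplacian (the 'diffusion' suboperator)."""
    d = np.empty_like(u)
    d[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    d[0] = (-2 * u[0] + u[1]) / h**2        # left boundary value is 0
    d[-1] = (u[-2] - 2 * u[-1]) / h**2      # right boundary value is 0
    return d

def reaction(u):
    """Pointwise kinetic term (the 'reaction' suboperator), here a logistic law."""
    return u * (1.0 - u)

def full_rhs(u):
    """Right-hand side of the unsplit semi-discrete problem u' = A(u) + B(u)."""
    return diffusion(u) + reaction(u)

u0 = np.sin(np.pi * x)       # a smooth initial condition
print(full_rhs(u0)[:3])
```

A splitting method would then advance the semi-discrete system by alternating between the two suboperators defined above instead of integrating `full_rhs` directly.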

As with every approximation method, the purpose of operator splitting is to generate an approximate solution of the given problem with a prescribed accuracy. The convergence of the method is a fundamental question and is the focus of many research papers. A typical scenario is to show the consistency of the method, then to find conditions under which the method is stable, and finally to prove the convergence of the method based on the basic theorem "consistency + stability = convergence".

The two main parts of this thesis present results on classical splittings and on iterative splittings. In both cases, after the discussion of linear problems the nonlinear problems are investigated, and finally the results for the combination of splitting with numerical methods are presented. In every part the results are illustrated with examples, and the numerical results strongly support the theoretical results of that part.

In Chapter 2 the basic definitions are given that are discussed and investigated in this work. I present the concept of operator semigroups and collect the propositions that are the basic tools for the investigations in the last large part of my work, Section 4.2. The basic concept of operator splitting is introduced at the end of the chapter.

The convergence of operator splitting is proven in Sections 3.2 and 4.2 for nonlinear problems. A scenario different from the one mentioned above (consistency + stability = convergence) is followed here: I give explicit estimates of the global error, thus convergence is proven in a direct manner. In Section 3.2 operator splitting is applied to a very general problem, namely the acting operator is only locally Lipschitz continuous. This is an extension of the existing results in the literature, where at least multiple differentiability of the operator is assumed. Besides the strong restriction of multiple differentiability, the evaluation of the error bounds given in the literature is rather complicated in practice; in many cases it is impossible without knowing the exact solution of the given problem. The explicit error estimates presented in Section 3.2 allow quantitative error evaluations, and thus the choice of a suitable time step to satisfy the prescribed accuracy. This section is based on the work [34].

In Sections 3.3.2 and 4.1 the local order of splitting is investigated, which is a significant step towards proving consistency. Section 3.3.2 contains the results of [33] on the local error of combined methods applied to semi-linear problems. Similar calculations can be found in [29], although the nonlinearity there is very specific: a linear operator shifted by a known function. In [33] I consider the sum of a linear and a nonlinear operator of a much more general type, and I calculate the local order of 12 combined methods (three types of splittings combined with four different numerical schemes).


In Section 4.1 I present the possible extensions of iterative splitting to the case when the original problem is split into more than two subproblems. The local order is calculated for a large family of methods for bounded linear problems.

Section 4.2 discusses the waveform relaxation method. It can be introduced as a natural modification of iterative splitting; however, it evolved independently from it. The method is applied to a class of semi-linear problems which contains reaction–diffusion and reaction–advection equations. Besides the proof of convergence, I give explicit error estimates, consider the windowing technique, and discuss the combination of the method with numerical schemes applied to solve the subproblems. These results are published in [35].

Besides the wide area of continuous problems, the results of Sections 3.2 and 3.3.2 can be applied to discretized reaction–diffusion problems. The main strength of the results of Section 4.2 is that they can be applied directly to the continuous problem; however, the application to discretized problems is also presented in detail.


2 Basic definitions

2.1 Initial value problem

Definition 1. Let $X$ be a Banach space, $D \subset X$ a nonempty open set, $F: D \subset X \to X$ and $u_0 \in D(F)$. Consider the autonomous initial value problem
\[ u'(t) = F(u(t)), \qquad u(0) = u_0. \tag{1} \]

In this work the problem is assumed to have a solution on a finite time interval, $t \in [0,T]$, with some $T \in \mathbb{R}^+$. If $X = \mathbb{R}^n$, then (1) formulates a system of ordinary differential equations. One can describe a system of partial differential equations with $X = C^r(\mathbb{R}^d, \mathbb{R}^n)$. An important example is the family of reaction–diffusion equations $U_t(t,x,y,z) = \Delta U(t,x,y,z) + f(U(t,x,y,z))$, where $X = C^2(\mathbb{R}^3, \mathbb{R}^n)$, $u(t): (x,y,z) \mapsto U(t,x,y,z)$, and $f$ represents the chemical reactions, usually being a polynomial or a rational function.

Definition 2. The classical solution of (1) is the continuously differentiable function $u$ for which $u(t) \in D(F)$ and
\[ \lim_{h \to 0} \left( \frac{u(t+h) - u(t)}{h} - F(u(t)) \right) = 0 \]
for every $t \in (0,T)$, where the limit for $t = 0$ is to be understood as $\lim_{h \to 0^+} \frac{u(t+h)-u(t)}{h}$.

The classical result ([12], [44], Picard [51] and Lindelöf [43]) for ordinary differential equations says that if $F$ is a locally Lipschitz continuous function, then (1) has a unique solution.

Remark 1. If $F$ is a bounded linear operator on $X$, then the solution of (1) is $e^{tF}u_0 := \sum_{j=0}^{\infty} \frac{t^j}{j!} F^j u_0$.

In many practical applications the central issue is to find the solution of (1) or, due to the difficulty of this task, to provide a satisfactorily accurate approximate solution for it. Operator splitting is a family of methods that are used in the process of generating approximate solutions for (1). The main results of this thesis are related to the consistency and convergence of splitting methods applied to problem (1). The following definitions contain the basic concepts used in the analysis of approximation techniques.

Definition 3. A time-discretization method $M$ is a family of operators $M(\tau): X \to X$, where $\tau \in (0,T]$. An approximate solution $u_N$ obtained by the time-discretization method $M$ of (1) at time $t \in (0,T]$ is defined by
\[ u_N(t) = (M(t/N))^N(u_0), \qquad N \in \mathbb{N}. \]
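As a small illustration of Remark 1 and Definition 3 (a hypothetical example; the matrix and the step counts are chosen arbitrarily here), one can compare the exact solution $e^{tF}u_0$ of a bounded linear problem with the approximation $u_N(t) = (M(t/N))^N(u_0)$ generated by the explicit Euler method playing the role of $M$:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical bounded linear problem u'(t) = F u(t), u(0) = u0.
F = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
u0 = np.array([1.0, 1.0])
t = 1.0

exact = expm(t * F) @ u0           # Remark 1: u(t) = e^{tF} u0

def euler_step(tau):
    """One application of M(tau) for the explicit Euler method."""
    return lambda u: u + tau * (F @ u)

for N in (10, 100, 1000):
    M = euler_step(t / N)
    uN = u0.copy()
    for _ in range(N):              # u_N(t) = (M(t/N))^N (u0)
        uN = M(uN)
    print(N, np.linalg.norm(uN - exact))   # the error shrinks roughly like 1/N
```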


Definition 4. A time-discretization method $M$ is consistent if for the solution $u$ of (1)
\[ \lim_{\tau \to 0} \frac{\| u(t+\tau) - M(\tau)(u(t)) \|}{\tau} = 0 \]
uniformly for $t \in [0, T-\tau]$.

Definition 5. The order of consistency of a consistent time-discretization method $M$ is the largest $p \in \mathbb{N}$ for which there exist positive constants $c$ and $\tau^\star \in (0,T]$ such that for the solution $u$ of (1)
\[ \sup_{t \in [0, T-\tau]} \frac{\| u(t+\tau) - M(\tau)(u(t)) \|}{\tau} \le c\,\tau^p \]
holds for every $\tau \in (0, \tau^\star]$.

In the literature of operator splitting the local splitting error and the local order are often examined in order to have an a priori guess concerning the consistency.

Definition 6. The local error of a time-discretization method $M$ is $\| u(\tau) - M(\tau)(u_0) \|$, where $u$ is the solution of (1). The local order of the method is the largest $q \in \mathbb{N}$ for which there exist positive constants $c$ and $\tau^\star \in (0,T]$ such that
\[ \frac{\| u(\tau) - M(\tau)(u_0) \|}{\tau} \le c\,\tau^q \]
holds for every $\tau \in (0, \tau^\star]$.

Consequence 1. Consider a consistent splitting method (as a time-discretization method) of order $p$, and let $1 > \tau \in (0,T]$; then for the order $q$ of the local splitting error one has $q \ge p$.

Thus results on the local order provide an upper bound on the order of consistency; furthermore, the techniques of the analysis of the local order are often helpful in the study of consistency. I will study this property of splitting methods in Sections 3.3.2 and 4.1.
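A quick numerical way to observe the local order in practice (a hypothetical experiment, not taken from the thesis) is to evaluate the scaled local error $\|u(\tau) - M(\tau)(u_0)\|/\tau$ for a sequence of halved step sizes and read off the slope on a log–log scale; for the sequential splitting of a bounded linear problem the slope should approach $q = 1$:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical linear test problem u' = (A + B) u with known exact solution.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.5, 0.0], [0.0, -0.2]])
u0 = np.array([1.0, 0.0])

def sequential_step(tau, u):
    """One step of sequential splitting: solve with A first, then with B."""
    return expm(tau * B) @ (expm(tau * A) @ u)

taus = [0.1 / 2**k for k in range(6)]
errs = []
for tau in taus:
    exact = expm(tau * (A + B)) @ u0
    errs.append(np.linalg.norm(exact - sequential_step(tau, u0)) / tau)

# The observed slopes approximate the local order q (here q = 1 is expected).
slopes = np.diff(np.log(errs)) / np.diff(np.log(taus))
print(slopes)
```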

Definition 7. A time-discretization method $M$ is stable if there exist constants $K$ and $\tau^\star$ such that
\[ \| (M(\tau))^N \| \le K \]
holds for all $N \in \mathbb{N}$ and $\tau \in (0, \tau^\star]$ that satisfy $N\tau \le T$.

Definition 8. Let $u$ be the solution of (1). The global error of a time-discretization method is
\[ E_N(t) := \| u(t) - u_N(t) \| = \| u(t) - (M(t/N))^N(u_0) \|, \qquad N \in \mathbb{N}. \]


Definition 9. A time-discretization method is called convergent on $[0,T]$ if
\[ \lim_{N \to \infty} E_N(t) = 0 \quad \text{for every } t \in [0,T]. \]

Definition 10. The order of convergence of a time-discretization method is the largest $p \in \mathbb{N}$ for which
\[ \lim_{N \to \infty} N^{p-1} E_N(t) = 0 \quad \text{for every } t \in [0,T]. \]

Remark 2. In Section 4.2 the approximate solutions are constructed in a different manner, without time discretization; thus convergence there is to be understood as uniform convergence, that is,
\[ \lim_{m \to \infty} \max_{t \in (0,T]} \| u(t) - u_m(t) \| = 0. \]

2.2 Operator semigroups

An important set of problems of the form (1) can be investigated successfully with the concept of operator semigroup. Here only those basic definitions and relations are collected that are directly used in the results of this thesis. Operator semigroups are the primary topic of the works [18] and [50].

Definition 11. The set $\{S(t)\}_{t \ge 0}$ of bounded linear operators on a Banach space $X$ is called a strongly continuous semigroup or a $C_0$ semigroup if:

1. $S(0) = I$,

2. $S(t+s) = S(t)S(s)$ for all $t, s \ge 0$,

3. for every $x \in X$ the map $t \mapsto S(t)x$ is continuous from $\mathbb{R}^+$ into $X$.

A conventional notation for the semigroup is $(S(t))_{t \ge 0}$.

Proposition 1. For every strongly continuous semigroup $(S(t))_{t \ge 0}$ there exist constants $M \ge 1$ and $\omega \in \mathbb{R}$ such that for every $x \in X$
\[ \| S(t)x \| \le M e^{\omega t} \| x \| \quad \text{for all } t \ge 0. \]

Definition 12. Let $(S(t))_{t \ge 0}$ be a strongly continuous semigroup on a Banach space $X$ and let $D(A)$ be the subspace of $X$ defined by
\[ D(A) := \left\{ x \in X : \lim_{\tau \to 0} \frac{S(\tau)x - x}{\tau} \text{ exists} \right\}. \]
The operator $A: D(A) \subseteq X \to X$,
\[ Ax := \lim_{\tau \to 0} \frac{S(\tau)x - x}{\tau}, \]
is called the generator of the semigroup $(S(t))_{t \ge 0}$; in other words, $A$ generates the semigroup $(S(t))_{t \ge 0}$.

Proposition 2. If $A$ generates a strongly continuous semigroup $(S(t))_{t \ge 0}$, then

1. $A: D(A) \subseteq X \to X$ is a linear operator.

2. If $x \in D(A)$, then $S(t)x \in D(A)$ and
\[ \frac{d}{dt} S(t)x = S(t)Ax = A S(t)x \quad \text{for every } t \ge 0. \]

Proposition 3. If $A$ generates a strongly continuous semigroup $(S(t))_{t \ge 0}$, then for every $x \in D(A)$
\[ S(t)x - x = \int_0^t S(s)Ax \, ds. \]

Example 1. A bounded linear operator $A$ with $D(A) = X$ generates the strongly continuous semigroup $e^{tA} := \sum_{j=0}^{\infty} \frac{t^j}{j!} A^j$. In fact it is a uniformly continuous semigroup.

Example 2. Consider the first-order differential operator on $C_0(\mathbb{R}^d)$ corresponding to the continuously differentiable vector field $v: \mathbb{R}^d \to \mathbb{R}^d$,
\[ A f := \sum_{i=1}^{d} v_i \, \partial_i f \]
for $f \in C_c^1(\mathbb{R}^d) := \{ f \in C^1(\mathbb{R}^d) : f \text{ has compact support} \}$ and $x \in \mathbb{R}^d$. The operator $A$ generates a strongly continuous semigroup.

Example 3. Consider the second-order differential operator
\[ A f := \sum_{i=1}^{d} \partial_i^2 f \]
defined for every $f$ in the Schwartz space $\mathcal{S}(\mathbb{R}^d)$, the space of infinitely differentiable, rapidly decreasing functions; see [18, Section VI.5]. The operator $A$ generates a strongly continuous semigroup.

Proposition 4. If $F$ generates a strongly continuous semigroup $(S(t))_{t \ge 0}$, then the solution of (1) is $u(t) = S(t)u_0$ for every $t \in (0,T]$.


2.3 Operator splitting

The basic idea of splitting is to decompose the operator on the right-hand side of (1) into a sum of suboperators, and to formulate a set of subproblems corresponding to the suboperators. In some sense the suboperators are simpler, and the subproblems are easier to solve numerically, or in some cases even symbolically. The simplest type of operator splitting is the sequential splitting.

The method of sequential splitting is as follows. Let $N \in \mathbb{N}$, $\mathbb{N}_\tau := \{1, \ldots, N\}$, $\tau := T/N$, and consider the time division $\{0, \tau, 2\tau, \ldots, N\tau\} \subset [0,T]$. Suppose that $F$ is the sum of $A$ and $B$, and approximate the solution of (1) by solving the following pair of subproblems for $t \in [0,\tau]$:
\[ v_1'(t) = A(v_1(t)), \quad v_1(0) = u_0, \qquad w_1'(t) = B(w_1(t)), \quad w_1(0) = v_1(\tau). \]
Let the approximation of $u(\tau)$ be defined as $u_{\mathrm{spl}}^{(N)}(\tau) := w_1(\tau)$. Next solve the subproblems again with $t \in [\tau, 2\tau]$:
\[ v_2'(t) = A(v_2(t)), \quad v_2(\tau) = u_{\mathrm{spl}}^{(N)}(\tau), \qquad w_2'(t) = B(w_2(t)), \quad w_2(\tau) = v_2(2\tau). \]
We define the approximation of $u(2\tau)$ as $u_{\mathrm{spl}}^{(N)}(2\tau) := w_2(2\tau)$, and so on. Four different splitting schemes will be introduced in the next chapter; the lower index "spl" will be used as a general notation and will change according to the examined splitting scheme.

Definition 13. The method defined by solving the subproblems
\[
\begin{aligned}
v_i'(t) &= A(v_i(t)), & v_i((i-1)\tau) &= u_{\mathrm{seq}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, i\tau], \\
w_i'(t) &= B(w_i(t)), & w_i((i-1)\tau) &= v_i(i\tau), & t &\in [(i-1)\tau, i\tau], \\
u_{\mathrm{seq}}^{(N)}(0) &:= u_0, & u_{\mathrm{seq}}^{(N)}(i\tau) &:= w_i(i\tau) &&
\end{aligned}
\tag{2}
\]
successively for $i \in \mathbb{N}_\tau$ is called sequential splitting, referred to as "seq" in the following.

Sequential splitting can be defined in the reverse order as well:
\[
\begin{aligned}
w_i'(t) &= A(w_i(t)), & w_i((i-1)\tau) &= u_{\mathrm{seq}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, i\tau], \\
v_i'(t) &= B(v_i(t)), & v_i((i-1)\tau) &= w_i(i\tau), & t &\in [(i-1)\tau, i\tau], \\
u_{\mathrm{seq}}^{(N)}(0) &:= u_0, & u_{\mathrm{seq}}^{(N)}(i\tau) &:= v_i(i\tau), & i &\in \mathbb{N}_\tau.
\end{aligned}
\tag{3}
\]

The procedure provides a sequence of values $\{u_{\mathrm{seq}}^{(N)}(0) := u_0, u_{\mathrm{seq}}^{(N)}(\tau), \ldots, u_{\mathrm{seq}}^{(N)}(N\tau)\}$ in $X$. Thus the sequential splitting is a time-discretization method which provides an approximation for the solution of (1) on the discrete set $\{0, \tau, \ldots, N\tau\}$. The convergence of the method at an arbitrary time point $t^\star \in [0,T]$ can be investigated by defining the algorithm (2) on $[0, t^\star]$, with the time step $\tau = t^\star / N$.
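As an illustrative sketch of algorithm (2) (hypothetical operators and solver settings, not taken from the thesis), sequential splitting can be implemented by advancing the two subproblems alternately with any off-the-shelf ODE solver; here scipy's solve_ivp plays the role of the exact subflows:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sequential_splitting(A, B, u0, T, N):
    """Sequential splitting (2): on each subinterval solve u' = A(u) first,
    then u' = B(u) starting from the result of the first subproblem."""
    tau = T / N
    u = np.array(u0, dtype=float)
    for _ in range(N):
        v = solve_ivp(lambda t, y: A(y), (0.0, tau), u, rtol=1e-10).y[:, -1]
        u = solve_ivp(lambda t, y: B(y), (0.0, tau), v, rtol=1e-10).y[:, -1]
    return u

# Hypothetical decomposition of a small reaction-diffusion-like system.
A_op = lambda y: np.array([-2.0 * y[0] + y[1], y[0] - 2.0 * y[1]])  # "diffusion"
B_op = lambda y: y * (1.0 - y)                                       # "reaction"
print(sequential_splitting(A_op, B_op, [0.3, 0.6], T=1.0, N=20))
```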


Example 4. If $A$ and $B$ generate the $C_0$ semigroups $(Q(t))_{t \ge 0}$ and $(S(t))_{t \ge 0}$, then the approximate solution obtained by the sequential splitting (2) is
\[ u_{\mathrm{seq}}^{(N)}(i\tau) = (S(\tau)Q(\tau))^i u_0 \quad \text{for every } i \in \mathbb{N}_\tau, \]
and the splitting (3) gives
\[ u_{\mathrm{seq}}^{(N)}(i\tau) = (Q(\tau)S(\tau))^i u_0 \quad \text{for every } i \in \mathbb{N}_\tau. \]


3 Classical splittings

Another splitting scheme, named after Marchuk and Strang, who developed the new method independently, was first presented in the papers [46] and [53]. Consider (1) with $F = A + B$.

Definition 14. The algorithm defined as
\[
\begin{aligned}
v_i'(t) &= A(v_i(t)), & v_i((i-1)\tau) &= u_{\mathrm{MS}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, (i-1/2)\tau], \\
w_i'(t) &= B(w_i(t)), & w_i((i-1)\tau) &= v_i((i-1/2)\tau), & t &\in [(i-1)\tau, i\tau], \\
z_i'(t) &= A(z_i(t)), & z_i((i-1/2)\tau) &= w_i(i\tau), & t &\in [(i-1/2)\tau, i\tau], \\
u_{\mathrm{MS}}^{(N)}(0) &= u_0, & u_{\mathrm{MS}}^{(N)}(i\tau) &= z_i(i\tau), & i &\in \mathbb{N}_\tau
\end{aligned}
\tag{4}
\]
is called Marchuk–Strang type splitting.

The method of symmetrically weighted splitting was first mentioned in [52]. The first theoretical investigation of the method was done in [15], and later it was successfully applied, among others, in air pollution modeling [9].

Definition 15. The method of symmetrically weighted splitting is defined as
\[
\begin{aligned}
v_i'(t) &= A(v_i(t)), & v_i((i-1)\tau) &= u_{\mathrm{sw}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, i\tau], \\
w_i'(t) &= B(w_i(t)), & w_i((i-1)\tau) &= v_i(i\tau), & t &\in [(i-1)\tau, i\tau], \\
y_i'(t) &= B(y_i(t)), & y_i((i-1)\tau) &= u_{\mathrm{sw}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, i\tau], \\
z_i'(t) &= A(z_i(t)), & z_i((i-1)\tau) &= y_i(i\tau), & t &\in [(i-1)\tau, i\tau], \\
u_{\mathrm{sw}}^{(N)}(0) &:= u_0, & u_{\mathrm{sw}}^{(N)}(i\tau) &= \tfrac{1}{2} w_i(i\tau) + \tfrac{1}{2} z_i(i\tau), & i &\in \mathbb{N}_\tau.
\end{aligned}
\tag{5}
\]

Example 5. If $A$ and $B$ generate the $C_0$ semigroups $(Q(t))_{t \ge 0}$ and $(S(t))_{t \ge 0}$, then one time step with the Marchuk–Strang type splitting and with the symmetrically weighted splitting can be written as
\[ u_{\mathrm{MS}}^{(N)}(i\tau) = Q(\tau/2)\, S(\tau)\, Q(\tau/2)\, u_{\mathrm{MS}}^{(N)}((i-1)\tau) \]
and
\[ u_{\mathrm{sw}}^{(N)}(i\tau) = \left( \tfrac{1}{2}\, Q(\tau) S(\tau) + \tfrac{1}{2}\, S(\tau) Q(\tau) \right) u_{\mathrm{sw}}^{(N)}((i-1)\tau), \quad \text{for every } i \in \mathbb{N}_\tau, \]
respectively.

The sequential, the Marchuk–Strang type, and the symmetrically weighted splittings are called classical splittings. Observe that for these methods to be correctly defined the sets $D(A)$ and $D(B)$ need to obey some requirements, which have to be considered in the investigations. Because of the structural similarity, the additive splitting is also discussed in this chapter. The method was first introduced and tested in [27]. It was further examined for bounded linear problems in [24].


Definition 16. The method defined as
\[
\begin{aligned}
v_i'(t) &= A(v_i(t)), & v_i((i-1)\tau) &= u_{\mathrm{as}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, i\tau], \\
y_i'(t) &= B(y_i(t)), & y_i((i-1)\tau) &= u_{\mathrm{as}}^{(N)}((i-1)\tau), & t &\in [(i-1)\tau, i\tau], \\
u_{\mathrm{as}}^{(N)}(0) &:= u_0, & u_{\mathrm{as}}^{(N)}(i\tau) &:= v_i(i\tau) + y_i(i\tau) - u_{\mathrm{as}}^{(N)}((i-1)\tau), & i &\in \mathbb{N}_\tau
\end{aligned}
\tag{6}
\]
to approximate the solution of (1) is called additive splitting.

Example 6. In the case of $A$ and $B$ being generators of the $C_0$ semigroups $(Q(t))_{t \ge 0}$ and $(S(t))_{t \ge 0}$, one step with additive splitting can be written as
\[ u_{\mathrm{as}}^{(N)}(i\tau) = (Q(\tau) + S(\tau) - I)\, u_{\mathrm{as}}^{(N)}((i-1)\tau), \quad \text{for every } i \in \mathbb{N}_\tau, \]
where $I: X \to X$ is the identity operator.

3.1 Linear problems

The main results on operator splitting for linear problems are summarized briefly in this section. Consider (1) and suppose that $F$ is linear and can be decomposed as $F = A + B$, where $A$ and $B$ are linear as well. A fundamental work is [8], where splitting is examined for a more general problem, namely the right-hand side of (1) is extended with an inhomogeneous part, $u'(t) = F(u(t)) + f(t)$.

Here the results for the case $f \equiv 0$ are recalled.

Theorem 1 (Corollary 4.4 in Bjørhus [8]). Let $F$ (resp. $A$, $B$) be the infinitesimal generator of a $C_0$ semigroup $(U(t))_{t \ge 0}$ (resp. $(Q(t))_{t \ge 0}$, $(S(t))_{t \ge 0}$). Let $D(A) = D(B) = D(F)$ and $D(A^2) = D(B^2) = D(F^2)$ be satisfied and let $T > 0$. Then we have the approximation property
\[ \sup_{0 \le t \le T - \tau} \| Q(\tau) S(\tau) u(t) - U(\tau) u(t) \| \le \tau^2 C(T), \qquad 0 \le \tau \le T, \]
whenever $u_0 \in D(A^2)$, where $C(T)$ is a constant independent of $\tau$.

This theorem means that the sequential splitting is consistent of order one, under the given conditions.

In [23] the argumentation of [8] is followed and it is shown that the Marchuk–Strang type and the symmetrically weighted splittings are consistent of order two. They require $D_k := D(A^k) \cap D(B^k) \cap D((A+B)^k)$ to be dense in $X$ for $k = 1, 2, 3$ and $A^k|_{D_k}$, $B^k|_{D_k}$, $(A+B)^k|_{D_k}$ to be closed operators for $k = 1, 2, 3$.

Theorem 2 (Thm. 3.4 and 4.2 in Faragó and Havasi [23]). There exist constants $C_1(T)$ and $C_2(T)$, independent of $\tau$, such that
\[ \| Q(\tau/2)\, S(\tau)\, Q(\tau/2)\, u(t) - U(\tau) u(t) \| \le \tau^3 C_1(T), \]
\[ \left\| \left( \tfrac{1}{2}\, Q(\tau) S(\tau) + \tfrac{1}{2}\, S(\tau) Q(\tau) \right) u(t) - U(\tau) u(t) \right\| \le \tau^3 C_2(T), \]
for all $u_0 \in D$, where $D = \bigcap_{k=1}^{3} D_k$.

Consequence 2. If $A$ and $B$ are bounded linear operators on $X$, then the conditions of the above theorems are fulfilled; furthermore, stability is also guaranteed, since
\[
\begin{aligned}
\| (S(\tau) Q(\tau))^n \| &= \| (e^{\tau B} e^{\tau A})^n \| \le e^{n\tau \|A\|} e^{n\tau \|B\|} \le e^{T \|A\|} e^{T \|B\|}, \\
\| (Q(\tau/2)\, S(\tau)\, Q(\tau/2))^n \| &\le e^{(T/2) \|A\|} e^{T \|B\|} e^{(T/2) \|A\|},
\end{aligned}
\]
and
\[ \left\| \left( \frac{Q(\tau) S(\tau) + S(\tau) Q(\tau)}{2} \right)^n \right\| \le \| Q(\tau) \|^n \| S(\tau) \|^n \le e^{T(\|A\| + \|B\|)}. \]
Consequently, for bounded linear problems the classical splitting methods are convergent.
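As a hypothetical numerical check of this consequence (the matrices below are invented for illustration), the global errors of the four splittings applied to a bounded linear problem can be measured against the exact solution $e^{T(A+B)}u_0$; the observed orders should be close to one for the sequential and additive splittings and close to two for the Marchuk–Strang and symmetrically weighted splittings:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[-0.3, 0.1], [0.0, -0.5]])
u0 = np.array([1.0, 1.0])
T = 1.0
exact = expm(T * (A + B)) @ u0

def global_error(one_step, N):
    tau = T / N
    Qa, Sb = expm(tau * A), expm(tau * B)      # exact subflows
    Qh = expm(0.5 * tau * A)                   # half-step flow for Marchuk-Strang
    u = u0.copy()
    for _ in range(N):
        u = one_step(u, Qa, Sb, Qh)
    return np.linalg.norm(u - exact)

schemes = {
    "sequential": lambda u, Qa, Sb, Qh: Sb @ (Qa @ u),
    "Marchuk-Strang": lambda u, Qa, Sb, Qh: Qh @ (Sb @ (Qh @ u)),
    "sym. weighted": lambda u, Qa, Sb, Qh: 0.5 * (Qa @ (Sb @ u) + Sb @ (Qa @ u)),
    "additive": lambda u, Qa, Sb, Qh: Qa @ u + Sb @ u - u,
}
for name, step in schemes.items():
    e1, e2 = global_error(step, 64), global_error(step, 128)
    print(f"{name:15s} observed order ~ {np.log2(e1 / e2):.2f}")
```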

Theorem 3 (Theorem 2.1 in [24]). Let $A$ and $B$ be bounded linear operators on some Banach space $X$ and consider the Cauchy problem (1). Then the local order of the additive splitting (6) is 1.

The work [4] is also to be mentioned here as a detailed study on the convergence of classical splittings for linear problems.

3.2 Nonlinear problems

Nonlinear problems are usually discussed in the context of the Lie-operator formalism (e.g. [38], [22], [16], [32], [28]), which allows a brief formulation of the splitting error. Nevertheless, an actual representation of the Lie operator is complicated in practice. More importantly, this technique demands the suboperators to be infinitely many times differentiable, and the error estimations are expressed in terms of higher-order derivatives. Let us mention just three examples where a different approach is followed: in [5] the Schrödinger equation is investigated with the sum of the differential operator $i\Delta$ and a globally Lipschitz function $F$, for which $F(0) = 0$ holds and whose derivatives are bounded up to order four; in [53] nonlinear problems of a quite general form are briefly discussed, assuming multiple differentiability of the operators acting on the right-hand side; [41] also assumes multiple differentiability and uses series expansion for its analysis.

Besides the strong restriction of multiple differentiability, the evaluation of the error bounds given in the literature is rather complicated in practice; in many cases it is impossible without knowing the exact solution of the given problem.

The content of this section can be found in the work [34]. Operator splitting is analyzed in the general framework of Banach spaces and the acting operators are only assumed to be locally Lipschitz continuous. The convergence of the classical and additive splittings is proven in this very general case. On the one hand, the area of validity of the existing results for nonlinear problems is extended; on the other hand, the error bounds are expressed by the suboperators acting on the initial value, that is, in terms that are relatively easy to calculate in practice. Based on the results of [34], having two approximations of the unknown solution, an a posteriori estimate can be calculated for the Lipschitz constant. From that the global error can be estimated and a suitable time step can be chosen; thus one can ensure that the approximation fulfills the required accuracy. In the case of sequential splitting, based on the error estimates we propose an optimal decomposition which generates the lowest splitting error. This result is of a more heuristic nature and a deeper analysis would be necessary to clarify this question, although numerical tests on two relatively simple problems support our conjecture.
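To make the time-step selection concrete, the following sketch inverts a monotone increasing error-bound function by bisection; the bound used here is a deliberately simplified stand-in and not the estimate of [34], and the numerical values of $L$, $T$ and the norms are placeholders:

```python
import numpy as np

def choose_time_step(error_bound, tol, tau_max, iters=60):
    """Find the largest tau in (0, tau_max] with error_bound(tau) <= tol
    by bisection; error_bound is assumed continuous and increasing."""
    lo, hi = 0.0, tau_max
    if error_bound(tau_max) <= tol:
        return tau_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if error_bound(mid) <= tol:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical a posteriori data: Lipschitz estimate L and norms at the initial value.
L, T, nrm_fg, nrm_g = 2.0, 1.0, 1.5, 0.4

def bound(tau):
    """A simplified first-order model of a global error bound, C(T, L) * tau."""
    return np.exp(2 * L) * tau * (np.exp(2 * L * T) * nrm_fg + nrm_g)

print(choose_time_step(bound, tol=1e-3, tau_max=0.5))
```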

Operator splitting is frequently used in modeling large-scale air pollution transport [57], among many other areas of application, such as water-quality modeling [20] or reaction kinetics [37]. The results of the present work form the general theoretical base for the application of operator splitting in a large class of problems.

Consider (1) with $F = f + g$, where $D(f), D(g) \subset X$, and suppose that $D := D(f) \cap D(g)$ is a nonempty open connected set. Let $f: D(f) \to X$ and $g: D(g) \to X$ be continuous operators. Thus (1) can be written as
\[ u'(t) = (f+g)(u(t)), \qquad u(t_0) = u_0, \tag{7} \]
with $u_0 \in D$, $t_0 \in I$, and $I \subset \mathbb{R}$ being an open interval.

Remark 3. If $f$ and $g$ are Lipschitz-continuous operators on $D$, that is, there are constants $L_f, L_g$ such that for any $w, z \in D$ the relations
\[ \| f(w) - f(z) \| \le L_f \| w - z \| \quad \text{and} \quad \| g(w) - g(z) \| \le L_g \| w - z \| \]
hold, then (7) has a unique solution in $D$. In practice it is too strict to demand that the global Lipschitz property hold on $D$; therefore, besides the existence of the solution of (7), we only assume the operators $f$ and $g$ to be locally Lipschitz continuous.

The following analysis is built on two assumptions.

1. Let $t_0 = 0$ and suppose that (7) has a unique solution $[0,T] \ni t \mapsto u(t)$, with some $T > 0$.

2. Consider the following neighborhood of the unique solution $u$ of (7):
\[ S_r := \bigcup_{t \in [0,T]} S_r(t), \quad \text{where} \quad S_r(t) := \{ \eta : \| \eta - u(t) \| \le r \}, \quad t \in [0,T], \]
where $r$ is a given positive real number. Suppose that there exists $L$ such that for any $t \in [0,T]$ and any $w, z \in S_r(t)$ the relations $\| f(w) - f(z) \| \le L \| w - z \|$ and $\| g(w) - g(z) \| \le L \| w - z \|$ hold.
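In a discretized setting, a crude a posteriori estimate of such a local Lipschitz constant can be obtained by sampling point pairs in neighborhoods of a computed reference trajectory; the operator, radius, and trajectory below are placeholders chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_lipschitz_estimate(op, traj, r, samples=200):
    """Estimate max ||op(w) - op(z)|| / ||w - z|| over random pairs w, z
    drawn from neighborhoods of radius r (componentwise) around the
    points of a reference trajectory."""
    L_est = 0.0
    for u in traj:
        for _ in range(samples):
            w = u + r * rng.uniform(-1.0, 1.0, size=u.shape)
            z = u + r * rng.uniform(-1.0, 1.0, size=u.shape)
            dist = np.linalg.norm(w - z)
            if dist > 1e-12:
                L_est = max(L_est, np.linalg.norm(op(w) - op(z)) / dist)
    return L_est

# Hypothetical suboperator and a coarse reference trajectory.
g = lambda u: u * (1.0 - u)
trajectory = [np.array([0.2, 0.4]) * (1 + 0.1 * k) for k in range(5)]
print(local_lipschitz_estimate(g, trajectory, r=0.5))
```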

In practice the constant $L$ depends on $r$, as will be illustrated in the numerical examples. Here $L$ is chosen for the fixed value of $r$. $L$ also depends on $f$ and $g$, although for our theoretical results no real advantage is taken of this fact; $L$ can be considered as $\max\{L_f, L_g\}$. It is important to note that, given this assumption, the initial value problems
\[ v'(s) = f(v(s)), \quad v(0) = v_0 \qquad \text{and} \qquad w'(s) = g(w(s)), \quad w(0) = w_0 \]
have a unique solution in $S_r(t)$ for any $v_0, w_0 \in S_r(t)$. The solutions of the subproblems in (2) and (6) satisfy the relations
\[ v_i(t) = u_{\mathrm{seq}}^{(N)}((i-1)\tau) + \int_0^t f(v_i(s))\,ds, \qquad w_i(t) = v_i(\tau) + \int_0^t g(w_i(s))\,ds \]
and
\[ y_i(t) = u_{\mathrm{as}}^{(N)}((i-1)\tau) + \int_0^t g(y_i(s))\,ds, \qquad v_i(t) = u_{\mathrm{as}}^{(N)}((i-1)\tau) + \int_0^t f(v_i(s))\,ds, \]
respectively, for every $i \in \mathbb{N}_\tau$. Furthermore, the solution of (7) satisfies
\[ u(t) = u_0 + \int_0^t (f+g)(u(s))\,ds \quad \text{for every } t \in [0,T]. \tag{8} \]
Note that the operator $f+g$ is locally Lipschitz continuous in every $S_r(t)$ with the Lipschitz constant $2L$.

In the following sections the convergence of the splittings (2) and (6) is proven in a direct way, namely, for every $i \in \mathbb{N}_\tau$ an upper bound for the global error will be formulated as a function of $\tau$. Then the limit $N \to \infty$ can easily be obtained from $\tau \to 0$. Given the properties of $f$ and $g$ as above, the results derived for (2) can be obtained in a similar way for (3) as well.

3.2.1 Preliminary estimations

The main challenge of the splitting procedure is to remain in a sufficiently small neighborhood of the real solution, that is, to ensure $u_{\mathrm{spl}}^{(N)}(i\tau) \in S_r(i\tau)$. The error estimations, as functions of the time step $\tau$, are expressed under the assumption that every term lies inside $S_r$; however, the obtained error bounds lead out of this set. The following lemma forms the basis for showing the existence of a threshold $\tau^\star$ such that the error bounds are valid for every time step $\tau < \tau^\star$. On the other hand, it allows a constructive error estimation, that is, an appropriate time step $\tau$ can be calculated.

Lemma 1. Consider the continuous functions $a: \mathbb{R}_0^+ \to \mathbb{R}$ and $b: \mathbb{R}_0^+ \to \mathbb{R}$, where $b$ is strictly monotone increasing and unbounded. Assume that $a(t_0) \le b(t_0) < K$ with some $K \in \mathbb{R}$ and $t_0 \in \mathbb{R}_0^+$; furthermore, for all $t \ge t_0$ such that $a(t) \le K$ we have $a(t) \le b(t)$. Then
\[ a(t) \le K \quad \text{for every } t \le b^{-1}(K). \]

Obviously $a(t) \le K$ holds for any $t > t_0$ that is close enough to $t_0$; the purpose of this lemma is to show that it holds on $[t_0, b^{-1}(K)]$.


Proof. Indirectly: suppose that $\hat{t} < b^{-1}(K)$, where
\[ \hat{t} := \max\{\, t_K : t_K \le b^{-1}(K) \text{ and } a(t) \le K \ \forall t \in [t_0, t_K] \,\}. \]
Then $a(\hat{t}) < K$, since
\[ a(\hat{t}) \le b(\hat{t}) < b(b^{-1}(K)) = K. \]
Since $a$ is continuous, there exists $\tilde{t} \in (\hat{t}, b^{-1}(K))$ such that $a(t) \le \frac{a(\hat{t}) + K}{2} < K$ for every $t \in [\hat{t}, \tilde{t}]$. Thus we arrived at a contradiction with the definition of $\hat{t}$; consequently $\hat{t}$ is not less than $b^{-1}(K)$.

Lemma 2. Suppose that $h: D(h) \to X$ is Lipschitz continuous on $B_\rho(x_0) := \{ \eta : \| \eta - x_0 \| \le \rho \}$ with the Lipschitz constant $L_\rho$. Consider the initial value problem $x'(t) = h(x(t))$, $x(0) = x_0$. As long as $x(t) \in B_\rho(x_0)$, for $t \ge 0$ we have
\[ \| x(t) - x_0 \| \le \frac{e^{L_\rho t} - 1}{L_\rho}\, \| h(x_0) \|. \]

Proof. The relation
\[ \| x(t) - x_0 \| = \left\| x_0 + \int_0^t h(x(s))\,ds - x_0 \right\| \le \int_0^t \| h(x(s)) - h(x_0) \| + \| h(x_0) \| \,ds \le L_\rho \int_0^t \| x(s) - x_0 \| \,ds + t\, \| h(x_0) \| \]
is an integral inequality for the function $t \mapsto \| x(t) - x_0 \|$, for which Gronwall's lemma gives
\[ \| x(t) - x_0 \| \le \int_0^t s\, \| h(x_0) \|\, L_\rho\, e^{L_\rho(t-s)} \,ds + t\, \| h(x_0) \|. \]
By calculating the integral and some simplifications we get the statement.

Consequence 3. Apply Lemma 1 with $t_0 := 0$, $a(t) := \| x(t) - x_0 \|$, $b(t) := \frac{e^{L_\rho t} - 1}{L_\rho} \| h(x_0) \|$ and $K := \rho$. Then for any
\[ 0 \le t \le b^{-1}(\rho) = \frac{1}{L_\rho} \ln \frac{\rho L_\rho + \| h(x_0) \|}{\| h(x_0) \|} \]
the relation $\| x(t) - x_0 \| \le \frac{e^{L_\rho t} - 1}{L_\rho} \| h(x_0) \|$ holds.

Consequence 4. Applying Lemma 2 to (7) with $h := f+g$, $x_0 := u(t)$, $\rho := r$, $B_\rho(x_0) := S_r(t)$ and $L_\rho := L$, one gets
\[ \| u(t+s) - u(t) \| \le \frac{e^{2Ls} - 1}{2L}\, \| (f+g)(u(t)) \| \]
for any $s \in [0, t_r]$, with
\[ t_r := \frac{1}{2L} \ln \frac{2Lr + \| (f+g)(u(t)) \|}{\| (f+g)(u(t)) \|}. \]
Considering (2) and (6),
\[ \| v_i(t) - v_i(0) \| \le \frac{e^{Lt} - 1}{L}\, \| f(u_{\mathrm{spl}}^{(N)}((i-1)\tau)) \| \]
for any $t \in [0, t_v]$ and $i \in \mathbb{N}_\tau$, with
\[ t_v := \frac{1}{L} \ln \frac{Lr + \| f(u_{\mathrm{spl}}^{(N)}((i-1)\tau)) \|}{\| f(u_{\mathrm{spl}}^{(N)}((i-1)\tau)) \|}. \]

Lemma 3. Consider the solution $u$ of (7). Then for any $t \in [0,T]$
\[ \| (f+g)(u(t)) \| \le e^{2Lt}\, \| (f+g)(u_0) \| \]
and
\[ \| g(u(t)) \| \le \frac{e^{2Lt} - 1}{2}\, \| (f+g)(u_0) \| + \| g(u_0) \|. \]

Proof. Choose $t$ such that $S_r(0) \cap S_r(t) \neq \emptyset$ and $u(t) \notin S_r(0)$. Furthermore, let $\tilde{t} < t$ be such that $u(\tilde{t}) \in S_r(0) \cap S_r(t)$. Then
\[ \| (f+g)(u(t)) \| \le \| (f+g)(u(t)) - (f+g)(u(\tilde{t})) \| + \| (f+g)(u(\tilde{t})) \| \le 2L\, \| u(t) - u(\tilde{t}) \| + \| (f+g)(u(\tilde{t})) \|. \]
Applying Consequence 4 one obtains
\[ \| (f+g)(u(t)) \| \le 2L\, \frac{e^{2L(t-\tilde{t})} - 1}{2L}\, \| (f+g)(u(\tilde{t})) \| + \| (f+g)(u(\tilde{t})) \| \le e^{2L(t-\tilde{t})}\, \| (f+g)(u(\tilde{t})) \|. \]
With similar steps $\| (f+g)(u(\tilde{t})) \| \le e^{2L\tilde{t}}\, \| (f+g)(u_0) \|$ can be obtained, thus
\[ \| (f+g)(u(t)) \| \le e^{2L(t-\tilde{t})}\, \| (f+g)(u(\tilde{t})) \| \le e^{2Lt}\, \| (f+g)(u_0) \|. \]
The second statement can be derived analogously:
\[
\begin{aligned}
\| g(u(t)) \| &\le \| g(u(t)) - g(u(\tilde{t})) \| + \| g(u(\tilde{t})) \| \le L\, \| u(t) - u(\tilde{t}) \| + \| g(u(\tilde{t})) \| \le \\
&\le L\, \frac{e^{2L(t-\tilde{t})} - 1}{2L}\, \| (f+g)(u(\tilde{t})) \| + \| g(u(\tilde{t})) \| \le \\
&\le \frac{e^{2L(t-\tilde{t})} - 1}{2}\, e^{2L\tilde{t}}\, \| (f+g)(u_0) \| + \| g(u(\tilde{t})) - g(u_0) \| + \| g(u_0) \| \le \\
&\le \frac{e^{2L(t-\tilde{t})} - 1}{2}\, e^{2L\tilde{t}}\, \| (f+g)(u_0) \| + L\, \| u(\tilde{t}) - u_0 \| + \| g(u_0) \| \le \\
&\le \frac{e^{2L(t-\tilde{t})} - 1}{2}\, e^{2L\tilde{t}}\, \| (f+g)(u_0) \| + \frac{e^{2L\tilde{t}} - 1}{2}\, \| (f+g)(u_0) \| + \| g(u_0) \| = \\
&= \frac{e^{2Lt} - 1}{2}\, \| (f+g)(u_0) \| + \| g(u_0) \|.
\end{aligned}
\]
For any larger time $t$ the extension can be done by repeating the above argumentation. See Figure 1 for illustration.

[Figure 1: Illustration of Lemma 3. The sketch shows the neighborhoods $S_r(0)$ around $u_0$ and $S_r(t)$ around $u(t)$, both of radius $r$, with $u(\tilde{t})$ lying in their intersection.]

Finally, recall that the remainder of the Taylor series of the exponential function can be expressed as
\[ e^{at} - \sum_{i=0}^{n-1} \frac{(at)^i}{i!} = \frac{(at)^n}{n!}\, e^{a\hat{t}}, \quad \text{where } \hat{t} \in [0,t]. \]
Thus
\[ \frac{e^{at} - 1}{a} = t\, e^{a\hat{t}} \le t\, e^{at}. \]

3.2.2 Convergence of sequential splitting

First a threshold for the time step of splitting is defined for which it can be shown that the following error estimates are valid. After that the convergence of splitting follows.


Definition 17. Consider $t^\star \in (0,T]$ and the function
\[
\begin{aligned}
G_{\mathrm{seq}}(t) := e^{Lt}\Bigg( & e^{3Lt}\,\frac{Lt^2}{2}\Bigg( \frac{e^{2Lt^\star}-1}{e^{2Lt}-1}\,\|g(u_0)\| + \frac{Lt}{4}\left( t^\star e^{2Lt^\star} - t\,\frac{e^{2Lt^\star}-1}{e^{2Lt}-1} \right)\|(f+g)(u_0)\| \Bigg) + \\
&+ e^{Lt}\,\frac{Lt^2}{2}\left( \frac{e^{2Lt^\star}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) + t\left( \frac{e^{2Lt^\star}-1}{2}\,\|(f+g)(u_0)\| + \|f(u_0)\| + \|g(u_0)\| \right)\Bigg)
\end{aligned}
\]
for $t > 0$. As it can be shown to be strictly monotone increasing, one can introduce $\tau^\star := G_{\mathrm{seq}}^{-1}(r)$; obviously $G_{\mathrm{seq}}(t) \le r$ for every $t \in [0, \tau^\star]$.

It will turn out that an appropriate time step of the sequential splitting is any number $\tau \le \tau^\star$.

Theorem 4. Consider $u$, the solution of (7), and the sequential splitting (2), and let $0 < \tau \le \tau^\star$ be such that $t^\star/\tau =: N \in \mathbb{N}$. Then
\[ E_{\mathrm{seq}}^{(N)}(i\tau) \le e^{2L\tau}\left( \frac{L\tau^2}{2}\,\frac{e^{2Li\tau}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \frac{L\tau}{4}\left( i\tau\, e^{2Li\tau} - \tau\,\frac{e^{2Li\tau}-1}{e^{2L\tau}-1} \right)\|(f+g)(u_0)\|\right) \tag{9} \]
for every $i \in \{0\} \cup \mathbb{N}_\tau$.

Proof. By definition $E_{\mathrm{seq}}^{(N)}(0) = 0$. Suppose that the statement holds for $E_{\mathrm{seq}}^{(N)}((i-1)\tau)$. Consider
\[
\begin{aligned}
\| u((i-1)\tau + t) - w_i(t) \| &= \Big\| u((i-1)\tau) + \int_0^t (f+g)(u((i-1)\tau+s))\,ds - v_i(\tau) - \int_0^t g(w_i(s))\,ds \Big\| = \\
&= \Big\| u((i-1)\tau) + \int_0^t (f+g)(u((i-1)\tau+s))\,ds - u_{\mathrm{seq}}^{(N)}((i-1)\tau) - \int_0^\tau f(v_i(s))\,ds - \int_0^t g(w_i(s))\,ds \Big\| \le \\
&\le E_{\mathrm{seq}}^{(N)}((i-1)\tau) + \int_0^t \| f(u((i-1)\tau+s)) - f(v_i(s)) \| \,ds + \int_t^\tau \| f(v_i(s)) \| \,ds + \\
&\quad + \int_0^t \| g(u((i-1)\tau+s)) - g(w_i(s)) \| \,ds \le \\
&\le E_{\mathrm{seq}}^{(N)}((i-1)\tau) + L \int_0^t \| u((i-1)\tau+s) - v_i(s) \| \,ds + \\
&\quad + \int_t^\tau \| f(v_i(s)) \| \,ds + \int_0^t L\, \| u((i-1)\tau+s) - w_i(s) \| \,ds.
\end{aligned}
\tag{10}
\]
The last estimation requires $v_i(s)$ and $w_i(s)$ to be in $S_r(s)$, that is, $\| u(s) - v_i(s) \| \le r$ and $\| u(s) - w_i(s) \| \le r$ for $s \in [(i-1)\tau, t]$. Next we show that these requirements are fulfilled.


The first integrand $\| u(s) - v_i(s) \|$ in (10) is a continuous function of $s$ and $E_{\mathrm{seq}}^{(N)}((i-1)\tau) = \| u((i-1)\tau) - v_i((i-1)\tau) \| < r$; therefore, for $s \in [(i-1)\tau, i\tau]$ that is close enough to $(i-1)\tau$ we have $\| u(s) - v_i(s) \| \le r$. For simplicity let us change the variable $s$ to $t$; then
\[
\begin{aligned}
\| u(t) - v_i(t) \| &= \Big\| u((i-1)\tau) + \int_{(i-1)\tau}^{t} (f+g)(u(s))\,ds - u_{\mathrm{seq}}^{(N)}((i-1)\tau) - \int_{(i-1)\tau}^{t} f(v_i(s))\,ds \Big\| \le \\
&\le E_{\mathrm{seq}}^{(N)}((i-1)\tau) + \int_{(i-1)\tau}^{t} \| f(u(s)) - f(v_i(s)) \| \,ds + \int_{(i-1)\tau}^{t} \| g(u(s)) \| \,ds.
\end{aligned}
\]
Considering the Lipschitz property and applying Lemma 3 to the last integrand one gets
\[
\begin{aligned}
\| u(t) - v_i(t) \| &\le E_{\mathrm{seq}}^{(N)}((i-1)\tau) + \int_{(i-1)\tau}^{t} L\, \| u(s) - v_i(s) \| \,ds + \int_{(i-1)\tau}^{t} \left( \frac{e^{2Ls}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) ds \le \\
&\le E_{\mathrm{seq}}^{(N)}((i-1)\tau) + \int_{(i-1)\tau}^{t} L\, \| u(s) - v_i(s) \| \,ds + (t-(i-1)\tau)\left( \frac{e^{2Lt}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right).
\end{aligned}
\]
Applying Gronwall's inequality one gets
\[
\begin{aligned}
\| u(t) - v_i(t) \| &\le e^{L(t-(i-1)\tau)}\left( E_{\mathrm{seq}}^{(N)}((i-1)\tau) + (t-(i-1)\tau)\left( \frac{e^{2Lt}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) \right) \le \\
&\le e^{L(t-(i-1)\tau)}\Bigg( e^{2L\tau}\Bigg( \frac{L\tau^2}{2}\,\frac{e^{2L(i-1)\tau}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \\
&\qquad + \frac{L\tau}{4}\left( (i-1)\tau\, e^{2L(i-1)\tau} - \tau\,\frac{e^{2L(i-1)\tau}-1}{e^{2L\tau}-1} \right)\|(f+g)(u_0)\| \Bigg) + \\
&\qquad + (t-(i-1)\tau)\left( \frac{e^{2Lt}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) \Bigg).
\end{aligned}
\tag{11}
\]
Let us define the strictly monotone increasing function
\[
\begin{aligned}
b_v(t) := e^{L(t-(i-1)\tau)}\Bigg( e^{2L\tau}\Bigg( &\frac{L\tau^2}{2}\,\frac{e^{2Lt^\star}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \frac{L\tau}{4}\left( t^\star e^{2Lt^\star} - \tau\,\frac{e^{2Lt^\star}-1}{e^{2L\tau}-1} \right)\|(f+g)(u_0)\| \Bigg) + \\
&+ (t-(i-1)\tau)\left( \frac{e^{2Lt^\star}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) \Bigg);
\end{aligned}
\]


then $\| u(t) - v_i(t) \| \le b_v(t)$ holds for $t$ close enough to $(i-1)\tau$. Lemma 1 can be applied with $t_0 := (i-1)\tau$, $a(t) := \| u(t) - v_i(t) \|$, $b(t) := b_v(t)$ and $K := r$. From $b_v(i\tau) < G_{\mathrm{seq}}(\tau) \le r$ it follows that $\| u(t) - v_i(t) \| \le b_v(t)$ holds on $[(i-1)\tau, b_v^{-1}(r)]$; consequently it holds on $[(i-1)\tau, i\tau]$ for every $i \in \mathbb{N}_\tau$.

Since in (10) $\| u(s) - w_i(s) \|$ is a continuous function of $s$ and $\| u((i-1)\tau) - w_i((i-1)\tau) \| = \| u((i-1)\tau) - v_i(i\tau) \| \le b_v(i\tau) < r$, for $s \in [(i-1)\tau, i\tau]$ that is close enough to $(i-1)\tau$ we have $\| u(s) - w_i(s) \| \le r$; thus the Lipschitz condition holds and (10) is valid. Applying Gronwall's inequality to (10) one gets
\[ \| u(t) - w_i(t) \| \le e^{L(t-(i-1)\tau)}\left( E_{\mathrm{seq}}^{(N)}((i-1)\tau) + L \int_{(i-1)\tau}^{t} \| u(s) - v_i(s) \| \,ds + \int_{t}^{i\tau} \| f(v_i(s)) \| \,ds \right). \tag{12} \]
The last term can be estimated as
\[ \int_{t}^{i\tau} \| f(v_i(s)) \| \,ds \le \int_{t}^{i\tau} \| f(v_i(s)) - f(u(s)) \| + \| f(u(s)) \| \,ds \le \int_{t}^{i\tau} L\, \| v_i(s) - u(s) \| \,ds + \int_{t}^{i\tau} \| f(u(s)) \| \,ds, \]
thus (12) becomes
\[ \| u(t) - w_i(t) \| \le e^{L(t-(i-1)\tau)}\left( E_{\mathrm{seq}}^{(N)}((i-1)\tau) + L \int_{(i-1)\tau}^{i\tau} \| u(s) - v_i(s) \| \,ds + \int_{t}^{i\tau} \| f(u(s)) \| \,ds \right). \tag{13} \]
Integrate (11) to get the second term in the parentheses:
\[
\begin{aligned}
L \int_{(i-1)\tau}^{i\tau} \| u(s) - v_i(s) \| \,ds &\le L \int_{(i-1)\tau}^{i\tau} e^{L(s-(i-1)\tau)}\left( E_{\mathrm{seq}}^{(N)}((i-1)\tau) + (s-(i-1)\tau)\left( \frac{e^{2Ls}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right)\right) ds \le \\
&\le (e^{L\tau}-1)\,E_{\mathrm{seq}}^{(N)}((i-1)\tau) + L \int_{(i-1)\tau}^{i\tau} e^{L\tau}(s-(i-1)\tau)\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) ds = \\
&= (e^{L\tau}-1)\,E_{\mathrm{seq}}^{(N)}((i-1)\tau) + e^{L\tau}\,\frac{L\tau^2}{2}\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right).
\end{aligned}
\tag{14}
\]
Via Lemma 3 (which holds with $f$ in place of $g$ by the same argument) the third term in the parentheses becomes
\[ \int_{t}^{i\tau} \| f(u(s)) \| \,ds \le \int_{t}^{i\tau} \left( \frac{e^{2Ls}-1}{2}\,\|(f+g)(u_0)\| + \|f(u_0)\| \right) ds \le (i\tau - t)\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|f(u_0)\| \right). \]


Thus (13) becomes
\[
\begin{aligned}
\| u(t) - w_i(t) \| \le e^{L(t-(i-1)\tau)}\Bigg( & e^{L\tau}\, E_{\mathrm{seq}}^{(N)}((i-1)\tau) + e^{L\tau}\,\frac{L\tau^2}{2}\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) + \\
&+ (i\tau - t)\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|f(u_0)\| \right) \Bigg);
\end{aligned}
\]
substituting $E_{\mathrm{seq}}^{(N)}((i-1)\tau)$ and considering that $i\tau - t \le \tau$ for $t \in [(i-1)\tau, i\tau]$, one gets
\[
\begin{aligned}
\| u(t) - w_i(t) \| \le e^{L(t-(i-1)\tau)}\Bigg( & e^{3L\tau}\,\frac{L\tau^2}{2}\Bigg( \frac{e^{2L(i-1)\tau}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \\
&\qquad + \frac{L\tau}{4}\left( (i-1)\tau\, e^{2L(i-1)\tau} - \tau\,\frac{e^{2L(i-1)\tau}-1}{e^{2L\tau}-1} \right)\|(f+g)(u_0)\| \Bigg) + \\
&+ e^{L\tau}\,\frac{L\tau^2}{2}\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) + \tau\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|f(u_0)\| \right)\Bigg).
\end{aligned}
\]
Let us define the strictly monotone increasing function
\[
\begin{aligned}
b_w(t) := e^{L(t-(i-1)\tau)}\Bigg( & e^{3L\tau}\,\frac{L\tau^2}{2}\Bigg( \frac{e^{2Lt^\star}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \frac{L\tau}{4}\left( t^\star e^{2Lt^\star} - \tau\,\frac{e^{2Lt^\star}-1}{e^{2L\tau}-1} \right)\|(f+g)(u_0)\| \Bigg) + \\
&+ e^{L\tau}\,\frac{L\tau^2}{2}\left( \frac{e^{2Lt^\star}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right) + \tau\left( \frac{e^{2Lt^\star}-1}{2}\,\|(f+g)(u_0)\| + \|f(u_0)\| \right)\Bigg);
\end{aligned}
\]
then $\| u(t) - w_i(t) \| \le b_w(t)$, since $i\tau \le t^\star$ and
\[ i\, e^{2Li\tau} - \frac{e^{2Li\tau}-1}{e^{2L\tau}-1} = (e^{2L\tau}-1) \sum_{k=1}^{i} k\, e^{2L(k-1)\tau} \]
is increasing with $i$. Lemma 1 can be applied with $t_0 := (i-1)\tau$, $a(t) := \| u(t) - w_i(t) \|$, $b(t) := b_w(t)$ and $K := r$. Since $b_w(i\tau) < G_{\mathrm{seq}}(\tau) \le r$, the estimate $\| u(t) - w_i(t) \| \le b_w(t)$ holds on $[(i-1)\tau, b_w^{-1}(r)]$; consequently it holds on $[(i-1)\tau, i\tau]$ for every $i \in \mathbb{N}_\tau$.

Finally, for $E_{\mathrm{seq}}^{(N)}(i\tau)$ start from relation (12):
\[ E_{\mathrm{seq}}^{(N)}(i\tau) = \| u(i\tau) - w_i(i\tau) \| \le e^{L\tau}\left( E_{\mathrm{seq}}^{(N)}((i-1)\tau) + L \int_{(i-1)\tau}^{i\tau} \| u(s) - v_i(s) \| \,ds \right). \]
Using (14) one gets
\[ E_{\mathrm{seq}}^{(N)}(i\tau) \le e^{2L\tau}\left( E_{\mathrm{seq}}^{(N)}((i-1)\tau) + \frac{L\tau^2}{2}\left( \frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \|g(u_0)\| \right)\right). \]
Consider the assumption for $E_{\mathrm{seq}}^{(N)}((i-1)\tau)$:
\[
\begin{aligned}
E_{\mathrm{seq}}^{(N)}(i\tau) \le e^{2L\tau}\Bigg( &\frac{L\tau^2}{2}\, e^{2L\tau}\,\frac{e^{2L(i-1)\tau}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \\
&+ \frac{L\tau^2}{4}\, e^{2L\tau}\left( (i-1)\, e^{2L(i-1)\tau} - \frac{e^{2L(i-1)\tau}-1}{e^{2L\tau}-1}\right)\|(f+g)(u_0)\| + \\
&+ \frac{L\tau^2}{2}\,\frac{e^{2Li\tau}-1}{2}\,\|(f+g)(u_0)\| + \frac{L\tau^2}{2}\,\|g(u_0)\|\Bigg);
\end{aligned}
\]
the first and the last terms can be merged:
\[
\begin{aligned}
E_{\mathrm{seq}}^{(N)}(i\tau) &\le e^{2L\tau}\left( \frac{L\tau^2}{2}\,\frac{e^{2Li\tau}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \frac{L\tau^2}{4}\left( (i-1)\, e^{2Li\tau} - \frac{e^{2Li\tau}-e^{2L\tau}}{e^{2L\tau}-1} + e^{2Li\tau} - 1 \right)\|(f+g)(u_0)\|\right) = \\
&= e^{2L\tau}\left( \frac{L\tau^2}{2}\,\frac{e^{2Li\tau}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \frac{L\tau^2}{4}\left( i\, e^{2Li\tau} - \frac{e^{2Li\tau}-1}{e^{2L\tau}-1}\right)\|(f+g)(u_0)\|\right),
\end{aligned}
\]
which is the desired relation (9). Since $E_{\mathrm{seq}}^{(N)}(i\tau) < G_{\mathrm{seq}}(\tau) \le r$, it follows by induction that (9) holds for every $i \in \mathbb{N}_\tau$.

Theorem 5. Operator splitting (2) is convergent at every $t^\star \in [0,T]$ of order 1.

Proof. Since the right-hand side of (9) is an increasing function of $i$, the global error satisfies
\[ E_{\mathrm{seq}}^{(N)}(t^\star) \le e^{2L\tau}\left( \frac{L\tau^2}{2}\,\frac{e^{2Lt^\star}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \frac{L\tau}{4}\left( t^\star e^{2Lt^\star} - \tau\,\frac{e^{2Lt^\star}-1}{e^{2L\tau}-1}\right)\|(f+g)(u_0)\|\right). \]
The right-hand side is an increasing function of $t^\star$, thus
\[ E_{\mathrm{seq}}^{(N)}(t^\star) \le e^{2L\tau}\,\frac{L\tau}{4}\left( 2\tau\,\frac{e^{2LT}-1}{e^{2L\tau}-1}\,\|g(u_0)\| + \left( T e^{2LT} - \tau\,\frac{e^{2LT}-1}{e^{2L\tau}-1}\right)\|(f+g)(u_0)\|\right) \]
for every $t^\star \in [0,T]$. The limit of the right-hand side is zero as $\tau \to 0$, which implies that it is zero as $N \to \infty$. One can easily check that the limit is not zero when it is multiplied by $N$, since
\[ \lim_{\tau \to 0} \frac{\tau}{e^{2L\tau}-1} = \frac{1}{2L}. \]


Remark 4. The form of the global error bound (9) suggests that the decomposition with $g(u_0) = 0$ generates the smallest error, since the second term, containing $(f+g)(u_0)$, does not change. Thus an optimal splitting can be defined. However, (9) is only an upper estimate of the error, thus this observation is of a more heuristic nature. This question will be examined later in numerical tests.

Consider an arbitrary decomposition $f+g$ of the right-hand side of (7); then the decomposition $\tilde{f}+\tilde{g}$ with $\tilde{f}(v) := f(v) + g(u_0)$ and $\tilde{g}(w) := g(w) - g(u_0)$ has the above property. Refined error estimates can be derived with $L_f$ and $L_g$, taking the dependence of $L$ on the suboperators into account. A simple shift of the suboperators by $g(u_0)$, however, does not change the corresponding Lipschitz constants.
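A minimal sketch of this shift (with hypothetical suboperators chosen only to illustrate the construction) shows that $\tilde{g}(u_0) = 0$ while the sum $f + g$ is left unchanged:

```python
import numpy as np

# Hypothetical original decomposition of F = f + g for a two-component system.
f = lambda u: np.array([-u[0], -2.0 * u[1]])
g = lambda u: np.array([u[1] ** 2, 0.5 * u[0]])
u0 = np.array([1.0, 2.0])

# Shifted suboperators from Remark 4: g_tilde vanishes at u0, f + g is unchanged.
g_at_u0 = g(u0)
f_tilde = lambda u: f(u) + g_at_u0
g_tilde = lambda u: g(u) - g_at_u0

print(g_tilde(u0))                                    # -> [0. 0.]
print(f(u0) + g(u0), f_tilde(u0) + g_tilde(u0))       # identical right-hand sides
```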

3.2.3 Convergence of additive splitting

To show the convergence of additive splitting we follow the same scenario as in the previous section.

Definition 18. Consider $t^\star \in (0,T]$ and the strictly monotone increasing function
\[ G_{\mathrm{as}}(t) := e^{Lt}\left( e^{3Lt}\,\frac{Lt^2}{2}\,\frac{e^{2Lt^\star}-1}{e^{2Lt}-1} + t\,\frac{e^{2Lt^\star}+1}{2} \right)\big( \|f(u_0)\| + \|g(u_0)\| \big) \]
for $t > 0$. Let us define $\tau^\star := G_{\mathrm{as}}^{-1}(r)$.

Theorem 6. Consider $u$, the solution of (7), and the splitting (6). Then for $\tau$ such that $t^\star/\tau =: N \in \mathbb{N}$ and $0 < \tau \le \tau^\star$,
\[ E_{\mathrm{as}}^{(N)}(i\tau) \le e^{3L\tau}\,\frac{L\tau^2}{2}\,\frac{e^{2Li\tau}-1}{e^{2L\tau}-1}\,\big( \|f(u_0)\| + \|g(u_0)\| \big) \tag{15} \]
for every $i \in \{0\} \cup \mathbb{N}_\tau$.

Proof. By definition $E_{\mathrm{as}}^{(N)}(0) = 0$. Suppose that the statement holds for $E_{\mathrm{as}}^{(N)}((i-1)\tau)$. Then $E_{\mathrm{as}}^{(N)}((i-1)\tau) < r$, and consider
\[
\begin{aligned}
E_{\mathrm{as}}^{(N)}(i\tau) &= \| u(i\tau) - (v_i(i\tau) + y_i(i\tau) - u_{\mathrm{as}}^{(N)}((i-1)\tau)) \| = \\
&= \Big\| u((i-1)\tau) + \int_{(i-1)\tau}^{i\tau} (f+g)(u(s))\,ds - u_{\mathrm{as}}^{(N)}((i-1)\tau) - \int_{(i-1)\tau}^{i\tau} f(v_i(s))\,ds - \\
&\qquad - u_{\mathrm{as}}^{(N)}((i-1)\tau) - \int_{(i-1)\tau}^{i\tau} g(y_i(s))\,ds + u_{\mathrm{as}}^{(N)}((i-1)\tau) \Big\| \le \\
&\le E_{\mathrm{as}}^{(N)}((i-1)\tau) + \int_{(i-1)\tau}^{i\tau} \| f(u(s)) - f(v_i(s)) \| \,ds + \int_{(i-1)\tau}^{i\tau} \| g(u(s)) - g(y_i(s)) \| \,ds \le \\
&\le E_{\mathrm{as}}^{(N)}((i-1)\tau) + L \int_{(i-1)\tau}^{i\tau} \| u(s) - v_i(s) \| \,ds + L \int_{(i-1)\tau}^{i\tau} \| u(s) - y_i(s) \| \,ds.
\end{aligned}
\tag{16}
\]
