
http://jipam.vu.edu.au/

Volume 7, Issue 5, Article 158, 2006

MAXIMIZATION FOR INNER PRODUCTS UNDER QUASI-MONOTONE CONSTRAINTS

KENNETH S. BERENHAUT, JOHN D. FOLEY, AND DIPANKAR BANDYOPADHYAY

DEPARTMENT OF MATHEMATICS
WAKE FOREST UNIVERSITY
WINSTON-SALEM, NC 27106
berenhks@wfu.edu
URL: http://www.math.wfu.edu/Faculty/berenhaut.html

DEPARTMENT OF MATHEMATICS
WAKE FOREST UNIVERSITY
WINSTON-SALEM, NC 27106
folejd4@wfu.edu

DEPARTMENT OF BIOSTATISTICS, BIOINFORMATICS AND EPIDEMIOLOGY
MEDICAL UNIVERSITY OF SOUTH CAROLINA
CHARLESTON, SC 29425
bandyopd@musc.edu

Received 19 August, 2006; accepted 04 September, 2006. Communicated by L. Leindler.
The first author acknowledges financial support from a Sterge Faculty Fellowship.

ABSTRACT. This paper studies optimization for inner products of real vectors assuming monotonicity properties for the entries in one of the vectors. Resulting inequalities have been useful recently in bounding reciprocals of power series with rapidly decaying coefficients and in proving that all symmetric Toeplitz matrices generated by monotone convex sequences have off-diagonal decay preserved through triangular decompositions. An example of an application of the theory to global optimization for inner products is also provided.

Key words and phrases: Inner Products, Recurrence, Monotonicity, Discretization, Global Optimization.

2000 Mathematics Subject Classification. 15A63, 39A10, 26A48.

1. INTRODUCTION

This paper studies inequalities for inner products of real vectors assuming monotonicity and boundedness properties for the entries in one of the vectors. In particular, for $r \in (0,1]$, we consider inner products $p \cdot q$, for vectors $p = (p_1, p_2, \ldots, p_n)$ and $q = (q_1, q_2, \ldots, q_n)$, satisfying $p, q \in \mathbb{R}^n$, $p_i \in [A, B]$ for $1 \le i \le n$, and one of the following properties:

(1) ($r$-quasi-monotonicity) $p_{i+1} \ge r p_i$ for $1 \le i \le n-1$.


(2) ($r$-geometric monotonicity) $p_{i+1} \ge \frac{1}{r} p_i$ for $1 \le i \le n-1$.

(3) (monotonicity) $p_{i+1} \ge p_i$ for $1 \le i \le n-1$.

For discussion of various classes of sequences of monotone type, see for instance, Kijima [12], and Leindler [15, 14].

Our method involves, for each of the three cases mentioned, obtaining finite sets $\mathcal{P}_n = \mathcal{P}_n(A, B, r)$ such that
$$\min\{v \cdot q : v \in \mathcal{P}_n\} \le p \cdot q \le \max\{v \cdot q : v \in \mathcal{P}_n\},$$
for all $p$ satisfying the respective monotonicity assumption above.

The paper proceeds as follows. In Section 2, we consider obtaining the sets $\mathcal{P}_n$ corresponding to Property (1) above. An application to linear recurrences, which has been useful in the recent literature, is also given. In Section 3, we consider the case of $r$-geometric monotonicity. The paper includes examples which provide an application of the theory to global optimization for inner products, for a specific vector $q$.

2. THE CASE OF r-QUASI-MONOTONICITY

In this section we consider the assumption of $r$-quasi-monotonicity of the entries in $p = (p_1, p_2, \ldots, p_n)$ (as defined in (1), above), i.e.

(2.1) $p_{i+1} \ge r p_i$

for $1 \le i \le n-1$. The motivation for consideration of such a condition arose in a probability-related context of investigating a monotone sequence $\{q_i\}$ with a geometric bound, i.e.
$$q_i \le A r^i,$$
where $A > 0$ and $r < 1$ (see [2]). In this case the sequence $\{\varphi_i\}$ defined by $\varphi_i = q_i / r^i$ satisfies $0 \le \varphi_i \le A$, and
$$\varphi_i = \frac{q_i}{r^i} \ge \frac{q_{i+1}}{r^i} = \varphi_{i+1}\, r.$$

For a given vector $t = (t_0, t_1, t_2, \ldots, t_k)$ satisfying $t_0 \ge 0$, $t_i \ge 1$ for $1 \le i \le k$ and

(2.2) $\sum_i t_i = N$,

define the vector $v_t$ via

(2.3) $v_t \stackrel{\mathrm{def}}{=} A\big(\overbrace{0, 0, \ldots, 0}^{t_0};\ r^0, r^1, \ldots, r^{t_1-1};\ r^0, r^1, \ldots, r^{t_2-1};\ \ldots;\ r^0, r^1, \ldots, r^{t_k-1}\big).$

In addition, define the set of vectors

(2.4) $\mathcal{P}_N = \mathcal{P}_N(A, 0, r) = \{v_t : t \text{ satisfies (2.2)}\}.$
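For readers who wish to experiment with this discretization, the following Python sketch (our own illustration, not code from the paper; the helper names `compositions`, `v_t` and `enumerate_PN` are ours) generates the vectors $t$ satisfying (2.2) and the corresponding set $\mathcal{P}_N(A, 0, r)$ of (2.4).

```python
def compositions(N):
    """Yield t = (t_0, t_1, ..., t_k) with t_0 >= 0, t_i >= 1 for i >= 1,
    and t_0 + t_1 + ... + t_k = N, as required in (2.2)."""
    for t0 in range(N + 1):
        m = N - t0
        if m == 0:
            yield (t0,)          # k = 0: no geometric blocks at all
            continue
        for cuts in range(2 ** (m - 1)):   # each bit pattern = one composition of m
            parts, last = [], 0
            for j in range(1, m):
                if (cuts >> (j - 1)) & 1:
                    parts.append(j - last)
                    last = j
            parts.append(m - last)
            yield (t0, *parts)

def v_t(t, A, r):
    """The vector v_t of (2.3): t_0 zeros, then blocks A*r^0, ..., A*r^(t_i - 1)."""
    vec = [0.0] * t[0]
    for length in t[1:]:
        vec.extend(A * r ** j for j in range(length))
    return vec

def enumerate_PN(N, A, r):
    """The finite set P_N(A, 0, r) of (2.4), as a list of length-N vectors."""
    return [v_t(t, A, r) for t in compositions(N)]

# small demonstration: list a few members of P_4(1, 0, 0.5)
for vec in enumerate_PN(4, A=1.0, r=0.5)[:5]:
    print(vec)
```

There are $2^N$ admissible compositions $t$ (some of which may produce the same vector $v_t$), so the candidate set grows quickly with $N$ but remains finite, which is the point of the results below.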

We have the following result regarding inner products.

Theorem 2.1. Suppose that $p = (p_1, \ldots, p_n)$ and $q = (q_1, \ldots, q_n)$ are $n$-vectors where $p$ satisfies (2.1) for $1 \le i \le n-1$ and $0 \le p_i \le A$ for $1 \le i \le n$. We have

(2.5) $\min\{w \cdot q : w \in \mathcal{P}_n\} \le p \cdot q \le \max\{w \cdot q : w \in \mathcal{P}_n\},$

where $p \cdot q$ denotes the standard dot product $\sum_{i=1}^{n} p_i q_i$.

The value in Theorem 2.1 lies in the fact that, for any given $n$, $\mathcal{P}_n$ is a finite set.

For a vector $p = (p_1, p_2, \ldots, p_n)$, we will use the notation $p_{i,j}$ to indicate the vector consisting of the $i$th through $j$th entries in $p$, i.e.

(2.6) $p_{i,j} = (p_i, p_{i+1}, \ldots, p_j).$


Proof of Theorem 2.1. First, suppose $p \cdot q > 0$, and note that the lower bound in (2.5), for such vectors, follows from the fact that $v_t = \mathbf{0}$ for $t = (n, 0, \ldots, 0)$. We will obtain a sequence of vectors $\{\tilde{p}_i\}_{i=1}^{n+1}$ satisfying
$$0 \le p \cdot q = \tilde{p}_{n+1} \cdot q \le \tilde{p}_n \cdot q \le \cdots \le \tilde{p}_1 \cdot q,$$
such that $\tilde{p}_1 \in \mathcal{P}_n$.

In particular, consider the vectors $\tilde{p}_i = (\tilde{p}_i(1), \tilde{p}_i(2), \ldots, \tilde{p}_i(n)) \in \mathbb{R}^n$, $i = 1, \ldots, n+1$, defined recursively according to the following scheme.

(1) $\tilde{p}_{n+1} = p$.

(2) For $1 \le i \le n$, set $S_i = \{s : i+1 \le s \le n \text{ and } \tilde{p}_{i+1}(s) = A\}$, and $v_i = \min\big(S_i \cup \{n+1\}\big)$.

(3) For $1 \le i \le n$, define $\tilde{p}_i$ (a function of $\tilde{p}_{i+1}$) via

(2.7) $\tilde{p}_i = \big(\tilde{p}_{i+1}(1), \tilde{p}_{i+1}(2), \ldots, \tilde{p}_{i+1}(i-1),\ c_i\tilde{p}_{i+1}(i), c_i\tilde{p}_{i+1}(i+1), \ldots, c_i\tilde{p}_{i+1}(v_i-1),\ A,\ \tilde{p}_{i+1}(v_i+1), \ldots, \tilde{p}_{i+1}(n)\big) = (w_1^{i+1};\ c_i w_2^{i+1};\ w_3^{i+1}),$

say, where $c_i$ is given by

(2.8) $c_i = \begin{cases} \dfrac{A}{p_i}, & \text{if } w_2^{i+1} \cdot q_{i,v_i-1} > 0, \\ \dfrac{r\,p_{i-1}}{p_i}, & \text{if } w_2^{i+1} \cdot q_{i,v_i-1} \le 0 \text{ and } i > 1, \\ 0, & \text{otherwise.} \end{cases}$

Note that $\tilde{p}_{i+1} = (w_1^{i+1}, w_2^{i+1}, w_3^{i+1})$.

It is not difficult to verify by induction that the $w_j^{i+1}$, $j = 1, 2, 3$, are of the form

(2.9) $w_1^{i+1} = \tilde{p}_{i+1}^{1,i-1} = (p_1, p_2, \ldots, p_{i-1}),$

(2.10) $w_2^{i+1} = \tilde{p}_{i+1}^{i,v_i-1} = (p_i, r p_i, r^2 p_i, \ldots, r^{v_i-i-1} p_i),$

(2.11) $w_3^{i+1} = \tilde{p}_{i+1}^{v_i,n} \in \mathcal{P}_{n-v_i+1}.$

We have that (2.7) and (2.8) imply
$$\tilde{p}_i \cdot q - \tilde{p}_{i+1} \cdot q = (c_i - 1)\big(w_2^{i+1} \cdot q_{i,v_i-1}\big) \ge 0,$$
and, for $1 \le i \le n+1$,

(2.12) $\tilde{p}_i \in \big\{ \big(p_1, p_2, \ldots, p_{i-1}, r p_{i-1}, r^2 p_{i-1}, r^3 p_{i-1}, \ldots, r^{v_i-i} p_{i-1};\ w_3^{i+1}\big),\ \big(p_1, p_2, \ldots, p_{i-1}, A, rA, r^2 A, \ldots, r^{v_i-i-1} A;\ w_3^{i+1}\big) \big\}.$

Thus $v_{i-1} \in \{v_i, i\}$, and in particular, for $i = 2$, we have

(2.13) $\tilde{p}_2 \in \big\{ \big(p_1, r p_1, r^2 p_1, r^3 p_1, \ldots, r^{v_2-2} p_1;\ w_3^{3}\big),\ \big(p_1, A, rA, r^2 A, \ldots, r^{v_2-3} A;\ w_3^{3}\big) \big\}.$

The vector $\tilde{p}_1$ then satisfies

(2.14) $\tilde{p}_1 \in \big\{ \big(0, 0, \ldots, 0;\ w_3^{3}\big),\ \big(A, rA, r^2 A, r^3 A, \ldots, r^{v_2-2} A;\ w_3^{3}\big),\ \big(A, A, rA, r^2 A, \ldots, r^{v_2-3} A;\ w_3^{3}\big),\ \big(0, A, rA, r^2 A, \ldots, r^{v_2-3} A;\ w_3^{3}\big) \big\} \subset \mathcal{P}_n,$

and the theorem is proven in this case. The proof follows similarly if $p \cdot q \le 0$, and the proof of the theorem is complete.


Figure 2.1: The vector $q$ in (2.15). [Plot of the entries $q_i$ against $i$.]

The following example provides an application of Theorem 2.1 to global optimization for inner products, for a specific vector $q$.

Example 2.1. Consider the vector $q \in \mathbb{R}^{15}$ given by

(2.15) $q = \big(0.4361725,\ 0.6454718,\ 2.0226176,\ -4.1395363,\ 0.9749134,\ 4.3806500,\ -4.0035597,\ 0.6773984,\ -3.7420053,\ -2.7051776,\ 3.8209032,\ 0.6327872,\ 1.4719490,\ 1.2277661,\ 4.1026365\big).$

The entries in $q$ are depicted in Figure 2.1. Now, consider optimizing $p \cdot q$ over all $p = (p_1, p_2, \ldots, p_{15}) \in \mathbb{R}^{15}$ satisfying $0 \le p_i \le 1$ and (2.1) for some $0 < r \le 1$. Theorem 2.1 implies that we need only compute and compare inner products with $q$ over the finite set $\mathcal{P}_{15}(1, 0, r)$ as given in (2.4).

The results of the computations for $r \in \{0.1, 0.3, 0.7, 0.9\}$ are given in Figure 2.2.
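For completeness, here is a brute-force Python sketch of this computation (our own code, not the authors'; it repeats the composition-based enumeration sketched after (2.4) so that it runs on its own, and the entries of $q$ are those of (2.15)). Its printed extremes can be compared with the values reported in Figure 2.2.

```python
def compositions(N):
    # t = (t_0, ..., t_k) with t_0 >= 0, t_i >= 1 and sum(t) = N, as in (2.2)
    for t0 in range(N + 1):
        m = N - t0
        if m == 0:
            yield (t0,)
            continue
        for cuts in range(2 ** (m - 1)):
            parts, last = [], 0
            for j in range(1, m):
                if (cuts >> (j - 1)) & 1:
                    parts.append(j - last)
                    last = j
            parts.append(m - last)
            yield (t0, *parts)

def enumerate_PN(N, A, r):
    # the finite set P_N(A, 0, r) of (2.4)
    out = []
    for t in compositions(N):
        vec = [0.0] * t[0]
        for length in t[1:]:
            vec.extend(A * r ** j for j in range(length))
        out.append(vec)
    return out

# entries of q from (2.15)
q = [0.4361725, 0.6454718, 2.0226176, -4.1395363, 0.9749134,
     4.3806500, -4.0035597, 0.6773984, -3.7420053, -2.7051776,
     3.8209032, 0.6327872, 1.4719490, 1.2277661, 4.1026365]

for r in (0.1, 0.3, 0.7, 0.9):
    products = [sum(w_i * q_i for w_i, q_i in zip(w, q))
                for w in enumerate_PN(len(q), A=1.0, r=r)]
    print(f"r = {r}: min over P_15 = {min(products):.3f}, "
          f"max over P_15 = {max(products):.3f}")
```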

It is possible to apply Theorem 2.1 in sequence to obtain bounds for linear recurrences, as is shown by the following theorem.

Theorem 2.2. Suppose that $\{b_i\}$ and $\{\alpha_{i,j}\}$ satisfy

(2.16) $b_n = \sum_{k=0}^{n-1} (-\alpha_{n,k})\, b_k, \qquad n \ge 1,$

where $b_0 = 1$ and

(2.17) $\alpha_{n,k} \in [0, A],$

for $0 \le k \le n-1$ and $n \ge 1$, and

(2.18) $r\, \alpha_{n,k} \le \alpha_{n,k+1}.$

Then, there exist $\{b_i'\}$ and $\{\alpha_{i,j}'\}$ such that
$$|b_n| \le |b_n'|$$


Figure 2.2: Maximal and minimal values for inner products under the constraint in (2.1). [Eight panels plot the optimizing vectors $p = (p_i)$ against $i$ for $r \in \{0.1, 0.3, 0.7, 0.9\}$; the attained minima and maxima are $-13.991$ and $19.177$ for $r = 0.1$, $-12.437$ and $17.21$ for $r = 0.3$, $-7.343$ and $12.684$ for $r = 0.7$, and $-2.38$ and $11.256$ for $r = 0.9$.]

and

(2.19) $b_n' = \sum_{k=0}^{n-1} (-\alpha_{n,k}')\, b_k', \qquad n \ge 1,$

with each vector
$$\alpha_i' = (\alpha_{i,0}', \alpha_{i,1}', \ldots, \alpha_{i,i-1}') \in \mathcal{P}_i,$$
for $1 \le i \le n$, where $\mathcal{P}_i$ is as in (2.4).

In fact, there exists a set $\{\alpha_1', \alpha_2', \ldots, \alpha_n'\}$, with $\alpha_i' \in \mathcal{P}_i$, such that $b_i'$ is as large as possible (with its inherent sign) given $b_0', b_1', b_2', \ldots, b_{i-1}'$.
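As an illustration only (our own Python sketch, not code from the paper), the snippet below applies one natural reading of the theorem's last assertion: for a randomly generated coefficient array satisfying (2.17) and (2.18), each row is replaced by a vector from $\mathcal{P}_i$ chosen so that $|b_i'|$ is as large as possible given $b_0', \ldots, b_{i-1}'$, and the resulting $|b_n'|$ is compared with $|b_n|$.

```python
import random

def compositions(N):
    # t = (t_0, ..., t_k) with t_0 >= 0, t_i >= 1 and sum(t) = N, as in (2.2)
    for t0 in range(N + 1):
        m = N - t0
        if m == 0:
            yield (t0,)
            continue
        for cuts in range(2 ** (m - 1)):
            parts, last = [], 0
            for j in range(1, m):
                if (cuts >> (j - 1)) & 1:
                    parts.append(j - last)
                    last = j
            parts.append(m - last)
            yield (t0, *parts)

def enumerate_PN(N, A, r):
    # the finite set P_N(A, 0, r) of (2.4)
    out = []
    for t in compositions(N):
        vec = [0.0] * t[0]
        for length in t[1:]:
            vec.extend(A * r ** j for j in range(length))
        out.append(vec)
    return out

n, A, r = 8, 1.0, 0.5
random.seed(1)

# random coefficients with alpha_{i,k} in [0, A] and r * alpha_{i,k} <= alpha_{i,k+1},
# i.e. conditions (2.17) and (2.18)
alpha = {}
for i in range(1, n + 1):
    row = [random.uniform(0, A)]
    for _ in range(i - 1):
        row.append(random.uniform(r * row[-1], A))
    alpha[i] = row

# b_0 = 1 and b_i = -sum_k alpha_{i,k} b_k, as in (2.16)
b = [1.0]
for i in range(1, n + 1):
    b.append(-sum(alpha[i][k] * b[k] for k in range(i)))

# greedy bounding recurrence: choose alpha'_i in P_i making |b'_i| as large as possible
b_prime = [1.0]
for i in range(1, n + 1):
    candidates = [-sum(v[k] * b_prime[k] for k in range(i))
                  for v in enumerate_PN(i, A, r)]
    b_prime.append(max(candidates, key=abs))

print(f"|b_{n}| = {abs(b[n]):.4f}   |b'_{n}| = {abs(b_prime[n]):.4f}")
```

Theorem 2.2 guarantees that some choice of rows from the sets $\mathcal{P}_i$ achieves $|b_n| \le |b_n'|$; the greedy selection above is one plausible way of realizing it in practice.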

Remark 2.3. While Theorem 2.2 looks relatively simple, it has proven indispensable recently in two quite unrelated interesting contexts. The theorem was crucial in proving a recent optimal explicit form of Kendall's Renewal Theorem (see Berenhaut, Allen and Fraser [2]), stemming from bounds on reciprocals of power series with rapidly decaying coefficients. In a quite unrelated context, a simpler version of Theorem 2.2 was also employed in Berenhaut and Bandyopadhyay [3] in proving that all symmetric Toeplitz matrices generated by monotone convex sequences have off-diagonal decay preserved through triangular decompositions.


Proof of Theorem 2.2. The proof here involves applying Theorem 2.1 to successively "scale" the rows of the coefficient matrix
$$[-\alpha_{i,j}] = \begin{bmatrix} -\alpha_{1,0} & 0 & \cdots & 0 \\ -\alpha_{2,0} & -\alpha_{2,1} & \ddots & \vdots \\ \vdots & \vdots & \ddots & 0 \\ -\alpha_{n,0} & -\alpha_{n,1} & \cdots & -\alpha_{n,n-1} \end{bmatrix},$$
while not decreasing the value of $|b_n|$ at any step.

First, define the sequences
$$\bar{\alpha}_i = (\alpha_{i,0}, \ldots, \alpha_{i,i-1}) \quad \text{and} \quad b_{k,j} = (b_k, \ldots, b_j),$$
for $0 \le k \le j \le n-1$ and $1 \le i \le n$.

Suppose that $b_n > 0$. Expanding via (2.16), $b_n$ can be written as

(2.20) $b_n = C_1^0 b_0 + C_1^1 b_1,$

where $C_1^0$ and $C_1^1$ are constants which depend on $\{\alpha_{i,j}\}$. If $C_1^1 > 0$, then select $\bar{\alpha}_1' = (\alpha_{1,0}') \in \mathcal{P}_1$ so that $-\bar{\alpha}_1' \cdot b_{0,0}$ is maximal, via Theorem 2.1. Similarly, if $C_1^1 < 0$, select $\bar{\alpha}_1' = (\alpha_{1,0}') \in \mathcal{P}_1$ so that $-\bar{\alpha}_1' \cdot b_{0,0}$ is minimal. In either case, replacing $\alpha_{1,0}$ by $\alpha_{1,0}'$ in (2.16) will result in a larger (or equal) value for $C_1^1 b_1$, and in turn, referring to (2.20), a larger (or equal) value of $|b_n|$.

Now, suppose that the first through $(k-1)$th rows of the $\alpha$ matrix are of the form described in the theorem (i.e., resulting in maximal $b_i$ values for $1 \le i \le k-1$ with respect to the preceding $b_j$, $0 \le j \le i-1$), and express $b_n$ in the form

(2.21) $b_n = C_k^0 b_0 + C_k^1 b_1 + \cdots + C_k^k b_k,$

via (2.16). If $C_k^k \ge 0$, then select $\bar{\alpha}_k' \in \mathcal{P}_k$ so that $-\bar{\alpha}_k' \cdot b_{0,k-1}$ is maximal, via Theorem 2.1. Similarly, if $C_k^k < 0$, select $\bar{\alpha}_k' \in \mathcal{P}_k$ so that $-\bar{\alpha}_k' \cdot b_{0,k-1}$ is minimal. In either case, referring to (2.21), replacing the values in $\bar{\alpha}_k$ by those in $\bar{\alpha}_k'$ in (2.16) will not decrease the value of $|b_n|$. The result follows by induction for this case. The case $b_n < 0$ is similar and the theorem is proven.

For further results along these lines in the case $r = 1$ and $B = 0$, see [4].

Note that recurrences with varying or random coefficients have been studied by many previous authors. For a partial survey of such literature, see Viswanath [22] and [23], Viswanath and Trefethen [24], Embree and Trefethen [10], Wright and Trefethen [26], Mallik [16], Popenda [18], Kittappa [13], Odlyzko [17], Berenhaut and Goedhart [6, 7], Berenhaut and Morton [9], Berenhaut and Foley [5], and Stević [19, 20, 21] (and the references therein). For a comprehensive treatment of difference equations and inequalities, cf. Agarwal [1].

We now turn to consideration of the remaining cases of $r$-geometric decay and monotonicity mentioned in the introduction.

3. THE CASE OF r-GEOMETRIC MONOTONICITY

In this section we consider the assumption of $r$-geometric monotonicity of the entries in $p = (p_1, p_2, \ldots, p_n)$, i.e.
$$p_{i+1} \ge \frac{1}{r}\, p_i \quad \text{for } 1 \le i \le n-1.$$


First, for a given integer $0 \le t \le n$, define the vector $v_t$ via $v_0 = \mathbf{0}$ and
$$v_t \stackrel{\mathrm{def}}{=} \big(\overbrace{0, 0, \ldots, 0}^{n-t},\ A r^{t-1}, A r^{t-2}, \ldots, A r, A\big).$$
In addition, define the set of vectors

(3.1) $\mathcal{P}_n^2 = \mathcal{P}_n^2(A, 0, r) = \{v_t : 0 \le t \le n\}.$
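Unlike the quasi-monotone case, the discretization here is very small: $\mathcal{P}_n^2$ contains only $n + 1$ vectors. A minimal Python sketch of the construction in (3.1) (function name ours):

```python
def enumerate_Pn2(n, A, r):
    """The set P_n^2(A, 0, r) of (3.1): v_0 = 0 and, for t >= 1,
    v_t has n - t leading zeros followed by A r^{t-1}, ..., A r, A."""
    vectors = [[0.0] * n]                                   # v_0
    for t in range(1, n + 1):
        vectors.append([0.0] * (n - t)
                       + [A * r ** (t - 1 - j) for j in range(t)])
    return vectors

for v in enumerate_Pn2(4, A=1.0, r=0.5):
    print(v)
```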

Here, we have the following theorem.

Theorem 3.1. Suppose that $p = (p_1, \ldots, p_n)$ and $q = (q_1, \ldots, q_n)$ are $n$-vectors where $p$ satisfies

(3.2) $p_{i+1} \ge \frac{1}{r}\, p_i$

for $1 \le i \le n-1$, and $0 \le p_i \le A$ for $1 \le i \le n$. We have
$$\min\{w \cdot q : w \in \mathcal{P}_n^2\} \le p \cdot q \le \max\{w \cdot q : w \in \mathcal{P}_n^2\}.$$

Proof. First, suppose $p \cdot q > 0$, and note that the lower bound in (2.5) follows from the fact that $v_t = \mathbf{0}$ for $t = 0$. As in the proof of Theorem 2.1, we will, again, obtain a sequence of vectors $\{\tilde{p}_i\}_{i=1}^{n+1}$ satisfying
$$0 \le p \cdot q = \tilde{p}_{n+1} \cdot q \le \tilde{p}_n \cdot q \le \cdots \le \tilde{p}_1 \cdot q,$$
such that $\tilde{p}_1 \in \mathcal{P}_n^2$.

In particular, consider the vectors $\tilde{p}_i = (\tilde{p}_i(1), \tilde{p}_i(2), \ldots, \tilde{p}_i(n)) \in \mathbb{R}^n$, $i = 1, 2, \ldots, n+1$, defined recursively according to the following scheme.

(1) $\tilde{p}_{n+1} = p$.

(2) For $1 \le i \le n$, set $S_i = \{s : i+1 \le s \le n \text{ and } \tilde{p}_{i+1}(s) = A r^{n-s}\}$, and $v_i = \min\big(S_i \cup \{n+1\}\big)$.

(3) For $1 \le i \le n$, set

(3.3) $\tilde{p}_i = \big(\tilde{p}_{i+1}(1), \tilde{p}_{i+1}(2), \ldots, \tilde{p}_{i+1}(i-1),\ c_i\tilde{p}_{i+1}(i), c_i\tilde{p}_{i+1}(i+1), \ldots, c_i\tilde{p}_{i+1}(v_i-1),\ \tilde{p}_{i+1}(v_i), \tilde{p}_{i+1}(v_i+1), \ldots, \tilde{p}_{i+1}(n)\big) = (w_1^{i+1};\ c_i w_2^{i+1};\ w_3^{i+1}),$

where $c_i$ is given by

(3.4) $c_i = \begin{cases} \dfrac{A r^{n-i}}{p_i}, & \text{if } w_2^{i+1} \cdot q_{i,v_i-1} > 0, \\ \dfrac{p_{i-1}}{r\, p_i}, & \text{if } w_2^{i+1} \cdot q_{i,v_i-1} \le 0 \text{ and } i > 1, \\ 0, & \text{otherwise.} \end{cases}$

It is not difficult to verify by induction that the $w_j^{i+1}$, $j = 1, 2, 3$, are of the form

(3.5) $w_1^{i+1} = \tilde{p}_{i+1}^{1,i-1} = (p_1, p_2, \ldots, p_{i-1}),$

(3.6) $w_2^{i+1} = \tilde{p}_{i+1}^{i,v_i-1} = \Big(p_i, \frac{1}{r}\, p_i, \frac{1}{r^2}\, p_i, \ldots, \frac{1}{r^{v_i-i-1}}\, p_i\Big),$

(3.7) $w_3^{i+1} = \tilde{p}_{i+1}^{v_i,n} = (A r^{n-v_i}, A r^{n-v_i-1}, \ldots, A r, A) \in \mathcal{P}_{n-v_i+1}^2.$

Now, note that from (3.2) and the bound $p_n \le A$, we have that
$$p_i \le A r^{n-i}$$
for $1 \le i \le n$, and $p_{i-1}/r \le p_i$ for $2 \le i \le n$. Hence, (3.3) and (3.4) imply that
$$\tilde{p}_i \cdot q - \tilde{p}_{i+1} \cdot q = (c_i - 1)\big(w_2^{i+1} \cdot q_{i,v_i-1}\big) \ge 0,$$
and that

(3.8) $\tilde{p}_i \in \Big\{ \big(p_1, p_2, \ldots, p_{i-2}, p_{i-1}, \tfrac{1}{r} p_{i-1}, \tfrac{1}{r^2} p_{i-1}, \ldots, \tfrac{1}{r^{v_i-i}} p_{i-1},\ A r^{n-v_i}, A r^{n-(v_i+1)}, \ldots, A r, A\big),\ \big(p_1, p_2, \ldots, p_{i-1},\ A r^{n-i}, A r^{n-(i+1)}, \ldots, A r^{n-(v_i-1)}, A r^{n-v_i}, A r^{n-(v_i+1)}, \ldots, A r, A\big) \Big\}.$

Thus $v_{i-1} \in \{v_i, i\}$, and for $i = 2$, we have

(3.9) $\tilde{p}_2 \in \Big\{ \big(p_1, \tfrac{1}{r} p_1, \tfrac{1}{r^2} p_1, \ldots, \tfrac{1}{r^{v_2-2}} p_1,\ A r^{n-v_2}, A r^{n-(v_2+1)}, \ldots, A r, A\big),\ \big(p_1, A r^{n-2}, A r^{n-3}, \ldots, A r^2, A r, A\big) \Big\}.$

The vector $\tilde{p}_1$ then satisfies

(3.10) $\tilde{p}_1 \in \Big\{ \big(0, 0, \ldots, 0,\ A r^{n-v_2}, A r^{n-(v_2+1)}, \ldots, A r, A\big),\ \big(A r^{n-1}, A r^{n-2}, A r^{n-3}, \ldots, A r^2, A r, A\big),\ \big(0, A r^{n-2}, A r^{n-3}, \ldots, A r^2, A r, A\big) \Big\} \subset \mathcal{P}_n^2,$

and the theorem is proven in this case. The proof follows similarly if $p \cdot q \le 0$, and the proof of the theorem is complete.

Now, for a given integer $0 \le t \le n$, define the vector $v_t$ via
$$v_t \stackrel{\mathrm{def}}{=} \big(\overbrace{B, B, \ldots, B}^{n-t},\ \overbrace{A, A, \ldots, A}^{t}\big).$$
In addition, define the set of vectors

(3.11) $\mathcal{P}_n^3 = \mathcal{P}_n^3(A, B, 1) = \{v_t : 0 \le t \le n\}.$

For the case $r = 1$ in either (2.1) or (3.2), we can similarly prove the following result. For $B = 0$ the theorem follows directly from either Theorem 2.1 or Theorem 3.1 (see also Lemma 2.2 in [4]). For $0 < B < A$, the proof is similar to that for Theorems 2.1 and 3.1, and will be omitted.

Theorem 3.2 (Monotonicity). Suppose that $p = (p_1, \ldots, p_n)$ and $q = (q_1, \ldots, q_n)$ are $n$-vectors where $p$ satisfies
$$p_{i+1} \ge p_i$$
for $1 \le i \le n-1$, and $0 \le B \le p_i \le A$ for $1 \le i \le n$. We have
$$\min\{w \cdot q : w \in \mathcal{P}_n^3\} \le p \cdot q \le \max\{w \cdot q : w \in \mathcal{P}_n^3\}.$$
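In the monotone case the candidate set consists of the $n + 1$ step vectors of (3.11), so the corresponding search is immediate. A short Python sketch (our own, using an arbitrary sample $q$ for illustration):

```python
def enumerate_Pn3(n, A, B):
    # P_n^3(A, B, 1) of (3.11): n - t entries equal to B, then t entries equal to A
    return [[B] * (n - t) + [A] * t for t in range(n + 1)]

def monotone_extremes(q, A, B):
    # extremes of p.q over monotone p with B <= p_i <= A, following Theorem 3.2
    products = [sum(w_i * q_i for w_i, q_i in zip(w, q))
                for w in enumerate_Pn3(len(q), A, B)]
    return min(products), max(products)

# example with an arbitrary q and bounds B = 0, A = 1
print(monotone_extremes([1.0, -2.0, 0.5, 3.0], A=1.0, B=0.0))
```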

We conclude with a return to global optimization for inner products for the vector $q$ as given in Example 2.1.

Example 2.1 (revisited). Consider the vector $q \in \mathbb{R}^{15}$ as given in (2.15). The entries in $q$ are depicted in Figure 2.1. Now, consider optimizing $p \cdot q$ over all $p = (p_1, p_2, \ldots, p_{15}) \in \mathbb{R}^{15}$ satisfying $0 \le p_i \le 1$ and (3.2) for some $0 < r \le 1$. Theorem 3.1 implies that we need only check over the finite set $\mathcal{P}_{15}^2(1, 0, r)$ as given in (3.1). The results of the computations for $r \in \{0.1, 0.3, 0.7, 0.9\}$ are given in Figure 3.1.
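Since $\mathcal{P}_{15}^2(1, 0, r)$ contains only sixteen vectors, the revisited search is immediate. A Python sketch of it (our own code; $q$ is copied from (2.15)), whose output can be compared with Figure 3.1:

```python
q = [0.4361725, 0.6454718, 2.0226176, -4.1395363, 0.9749134,
     4.3806500, -4.0035597, 0.6773984, -3.7420053, -2.7051776,
     3.8209032, 0.6327872, 1.4719490, 1.2277661, 4.1026365]

def enumerate_Pn2(n, A, r):
    # P_n^2(A, 0, r) of (3.1): v_0 = 0 and v_t = (0, ..., 0, A r^{t-1}, ..., A r, A)
    vectors = [[0.0] * n]
    for t in range(1, n + 1):
        vectors.append([0.0] * (n - t)
                       + [A * r ** (t - 1 - j) for j in range(t)])
    return vectors

for r in (0.1, 0.3, 0.7, 0.9):
    products = [sum(w_i * q_i for w_i, q_i in zip(w, q))
                for w in enumerate_Pn2(len(q), A=1.0, r=r)]
    print(f"r = {r}: min = {min(products):.3f}, max = {max(products):.3f}")
```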


Figure 3.1: Maximal and minimal values for inner products under the constraint in (3.2). [Eight panels plot the optimizing vectors $p = (p_i)$ against $i$ for $r \in \{0.1, 0.3, 0.7, 0.9\}$; the attained minima and maxima are $4.102$ and $4.241$ for $r = 0.1$, $4.102$ and $4.651$ for $r = 0.3$, $4.102$ and $6.817$ for $r = 0.7$, and $4.102$ and $9.368$ for $r = 0.9$.]

REFERENCES

[1] R.P. AGARWAL, Difference Equations and Inequalities. Theory, Methods and Applications, Second edition (Revised and Expanded), Marcel Dekker, New York, (2000).

[2] K.S. BERENHAUT, E.E. ALLEN AND S.J. FRASER, Bounds on coefficients of inverses of formal power series with rapidly decaying coefficients, Discrete Dynamics in Nature and Society, in press.

[3] K.S. BERENHAUT AND D. BANDYOPADHYAY, Monotone convex sequences and Cholesky decomposition of symmetric Toeplitz matrices, Linear Algebra Appl., 403 (2005), 75–85.

[4] K.S. BERENHAUT AND P. T. FLETCHER. On inverses of triangular matrices with monotone entries, J. Inequal. Pure Appl. Math., 6(3) (2005), Art. 63.

[5] K.S. BERENHAUT AND J.D. FOLEY, Explicit bounds for multi-dimensional linear recurrences with restricted coefficients, in press, Journal of Mathematical Analysis and Applications (2005).

[6] K.S. BERENHAUT AND E.G. GOEDHART, Explicit bounds for second-order difference equations and a solution to a question of Stević, Journal of Mathematical Analysis and Applications, 305 (2005), 1–10.


[7] K.S. BERENHAUT AND E.G. GOEDHART, Second-order linear recurrences with restricted coefficients and the constant $(1/3)^{1/3}$, in press, Mathematical Inequalities & Applications, (2005).

[8] K.S. BERENHAUT AND R. LUND. Renewal convergence rates for DHR and NWU lifetimes, Probab. Engrg. Inform. Sci., 16(1) (2002), 67–84.

[9] K.S. BERENHAUT AND D.C. MORTON, Second order bounds for linear recurrences with negative coefficients, J. Comput. Appl. Math., 186(2) (2006), 504–522.

[10] M. EMBREE AND L.N. TREFETHEN, Growth and decay of random Fibonacci sequences, The Royal Society of London Proceedings, Series A, Mathematical, Physical and Engineering Sciences, 455 (1999), 2471–2485.

[11] W. FELLER, An Introduction to Probability Theory and its Applications, Volume I, 3rd Edition, John Wiley and Sons, New York (1968).

[12] M. KIJIMA, Markov processes for stochastic modeling, Stochastic Modeling Series, Chapman & Hall, London, 1997.

[13] R.K. KITTAPPA, A representation of the solution of the $n$th order linear difference equations with variable coefficients, Linear Algebra and Applications, 193 (1993), 211–222.

[14] L. LEINDLER, A new class of numerical sequences and its applications to sine and cosine series, Analysis Math., 28 (2002), 279–286.

[15] L. LEINDLER, A new extension of monotone sequences and its applications, J. Inequal. Pure Appl. Math., 7(1) (2006), Art. 39.

[16] R.K. MALLIK, On the solution of a linear homogeneous difference equation with varying coefficients, SIAM Journal on Mathematical Analysis, 31 (2000), 375–385.

[17] A.M. ODLYZKO, Asymptotic enumeration methods, in: Handbook of Combinatorics (R. Graham, M. Groetschel, and L. Lovasz, Editors), Volume II, Elsevier, Amsterdam, (1995), 1063–1229.

[18] J. POPENDA, One expression for the solutions of second order difference equations, Proceedings of the American Mathematical Society, 100 (1987), 870–893.

[19] S. STEVIĆ, Growth theorems for homogeneous second-order difference equations, ANZIAM J., 43(4) (2002), 559–566.

[20] S. STEVIĆ, Asymptotic behavior of second-order difference equations, ANZIAM J., 46(1) (2004), 157–170.

[21] S. STEVIĆ, Growth estimates for solutions of nonlinear second-order difference equations, ANZIAM J., 46(3) (2005), 459–468.

[22] D. VISWANATH, Lyapunov exponents from random Fibonacci sequences to the Lorenz equations, Ph.D. Thesis, Department of Computer Science, Cornell University, (1998).

[23] D. VISWANATH, Random Fibonacci sequences and the number 1.13198824..., Mathematics of Computation, 69 (2000), 1131–1155.

[24] D. VISWANATH AND L.N. TREFETHEN, Condition numbers of random triangular matrices, SIAM Journal on Matrix Analysis and Applications, 19 (1998), 564–581.

[25] H.S. WILF, Generatingfunctionology, Second Edition, Academic Press, Boston, (1994).

[26] T.G. WRIGHT AND L.N. TREFETHEN, Computing Lyapunov constants for random recurrences with smooth coefficients, Journal of Computational and Applied Mathematics, 132(2) (2001), 331–340.
