
Nonnegative Matrices

In document LINEAR EQUATIONS AND MATRIX ALGEBRA I (Pages 39-44)


A general quadratic surface in these coordinates would be represented as

(x′)ᵀAx = f(x′ᵢ, xᵢ) = constant. (1.12.29)

The expanded form of Eq. (1.12.29) is known as a bilinear form rather than a quadratic form. The normal to the surface is again given by

N = ∂f/∂x, (1.12.30)

and hence the principal axes x are given by

Ax = λx. (1.12.31)

The dual problem is then

xᵀAᵀx = f(xᵢ, x′ᵢ) = constant, (1.12.32)

with principal axes x given by

Aᵀx = λx, (1.12.33)

since the eigenvalues of the transpose matrix Aᵀ equal the eigenvalues of the matrix A. The eigenvectors of the matrix operator are the principal axes of the associated quadratic form. The principal axes of the surface are skewed in general. Nevertheless, if the eigenvectors are complete, the surface can be transformed to the form

λ₁x₁² + λ₂x₂² + ⋯ + λₙxₙ² = constant (1.12.34)

and

λ₁(x′₁)² + λ₂(x′₂)² + ⋯ + λₙ(x′ₙ)² = constant. (1.12.35)

In this case if a root is repeated, we may not be able to assume rotational symmetry. Instead the two eigenvectors may collapse into only one vector, since there is no orthogonality relationship between eigenvectors of a given skewed system.
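The claim above, that Aᵀ has the same eigenvalues as A, can be checked directly for a 2×2 matrix: transposition preserves both the trace and the determinant, and hence the characteristic polynomial t² − tr(A)·t + det(A). A minimal pure-Python sketch (the helper names `char_poly_2x2` and `transpose_2x2` and the example matrix are illustrative, not from the text):

```python
def char_poly_2x2(m):
    """Coefficients (1, -trace, det) of the characteristic polynomial
    t^2 - tr(m)*t + det(m) of a 2x2 matrix m."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return (1.0, -tr, det)

def transpose_2x2(m):
    """Transpose of a 2x2 matrix."""
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

# A nonsymmetric example: A and its transpose share trace and determinant,
# hence the same characteristic polynomial and the same eigenvalues.
A = [[1.0, 4.0], [2.0, 3.0]]
print(char_poly_2x2(A) == char_poly_2x2(transpose_2x2(A)))  # -> True
```

Note that only the eigenvalues coincide; as the text emphasizes, the eigenvectors of A and Aᵀ (the principal axes of the two dual surfaces) differ in general.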

1.13 Nonnegative Matrices

Of particular usefulness in the numerical solution of differential equations is the theory of nonnegative matrices. In this section we define several matrix properties and relate these properties to nonnegative matrices.

Frequently one is interested in estimating the largest eigenvalue of a matrix without actually solving the secular equation. A useful theorem is the Gerschgorin theorem, which states that the magnitude of the largest eigenvalue is equal to or less than the maximum value of the sum of the magnitudes of the elements in any row. That is, if A = [aᵢⱼ],

|λ_max| ≤ maxᵢ Σⱼ |aᵢⱼ|. (1.13.1)

The proof of this theorem is simple. Let λ be any eigenvalue of A and e the corresponding eigenvector. We then have

λeᵢ = Σⱼ aᵢⱼeⱼ, (1.13.2)

which is true for all i. Now choose the element of e of largest amplitude, say eₖ. Then we have

|λ| |eₖ| ≤ Σⱼ |aₖⱼ| |eⱼ| ≤ |eₖ| Σⱼ |aₖⱼ|. (1.13.3)

Consequently, the largest eigenvalue is bounded by Eq. (1.13.1).

Frequently the largest eigenvalue is called the spectral radius of a matrix, since all eigenvalues lie within or on a circle of radius |λ_max| in the complex plane. We shall denote the spectral radius of A as r(A). Gerschgorin's theorem is then

r(A) ≤ maxᵢ Σⱼ |aᵢⱼ|. (1.13.4)
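The bound (1.13.4) is easy to compute directly: it is just the largest row sum of magnitudes. A minimal pure-Python sketch (the helper name `gerschgorin_bound` and the example matrix are illustrative, not from the text):

```python
def gerschgorin_bound(a):
    """Gerschgorin upper bound on the spectral radius: the maximum over
    rows of the sum of the magnitudes of the row's elements."""
    return max(sum(abs(x) for x in row) for row in a)

# Symmetric 2x2 example whose eigenvalues are known in closed form:
# [[2, 1], [1, 2]] has eigenvalues 2 - 1 = 1 and 2 + 1 = 3.
A = [[2.0, 1.0], [1.0, 2.0]]
bound = gerschgorin_bound(A)  # both row sums are 3
print(bound)  # -> 3.0
```

For this example the bound is attained: the spectral radius is 3 and the largest row sum is also 3. In general the bound can be loose, since Eq. (1.13.3) discards the ratios |eⱼ|/|eₖ|.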

Any matrix A is said to be reducible if there exists a permutation transformation P, i.e., if the rows and columns can be permuted similarly, such that

PAPᵀ = [ A₁₁  A₁₂ ]
       [  0   A₂₂ ] ,  (1.13.5)

where the submatrices A₁₁, A₂₂ are square, but not necessarily of the same order. If no permutation transformation exists such that (1.13.5) is true, then A is called irreducible. The property of irreducibility implies a connectedness in the problem, as seen by the following example.

Consider a vector x and a reducible matrix A. The product Ax can be written

Ax = [ A₁₁  A₁₂ ] [ x₁ ]   [ A₁₁x₁ + A₁₂x₂ ]
     [  0   A₂₂ ] [ x₂ ] = [     A₂₂x₂     ] .  (1.13.6)

The result indicates that the transformation of the components of x₂ is independent of the components of x₁. The solution of the equation

Ax = y (1.13.7)

can be accomplished as two separate problems:

A₁₁x₁ + A₁₂x₂ = y₁, (1.13.8a)

A₂₂x₂ = y₂. (1.13.8b)

The values of x₂ are independent of x₁. Physically this implies that some portion of the solution is independent of certain other values of the solution. Such a case arises in multigroup approximations where the fast flux in the core is "disconnected" from the thermal flux in the reflector. On the other hand, if the matrix A is irreducible, then the components of the solution of Eq. (1.13.7) are related to and dependent upon one another.
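The two-stage solution of Eqs. (1.13.8) can be sketched with 1×1 "blocks," so that each sub-solve reduces to a scalar division: solve (1.13.8b) for x₂ first, then substitute into (1.13.8a). The scalar values below are an illustrative example, not from the text:

```python
# Two-stage solve of the block-triangular system of Eqs. (1.13.8):
#   a11*x1 + a12*x2 = y1,    a22*x2 = y2,
# with 1x1 "blocks" a11, a12, a22 mirroring the submatrices in the text.
a11, a12, a22 = 2.0, 1.0, 3.0
y1, y2 = 5.0, 6.0

x2 = y2 / a22                # Eq. (1.13.8b): x2 is independent of x1
x1 = (y1 - a12 * x2) / a11   # Eq. (1.13.8a): uses the already-known x2
print(x1, x2)  # -> 1.5 2.0
```

With genuine matrix blocks the divisions become solves with A₂₂ and A₁₁, but the order of the two stages is the same.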

A nonnegative matrix A is a matrix such that

A = [aᵢⱼ], (1.13.9)

and

aᵢⱼ ≥ 0, all i, j. (1.13.10)

We denote a nonnegative matrix A as A ≥ 0. Similarly, if

aᵢⱼ > 0, all i, j, (1.13.11)

then A is called a positive matrix, denoted A > 0. A very useful theorem regarding nonnegative matrices is the following. If A is nonnegative, then A has a nonnegative real eigenvalue, and the corresponding eigenvector has nonnegative components, not all zero. The proof of the theorem is involved (see Reference 7, pp. 66-68), and we offer a heuristic justification instead. Since A is nonnegative, the quadratic form associated with A represents an ellipsoid and must have a principal axis somewhere in the first quadrant. Since A is nonnegative, any vector with nonnegative components is transformed by A into a nonnegative vector; hence the eigenvalue is nonnegative.

A sharpened form of the above theorem is the following⁷: if A is a nonnegative irreducible matrix, then A has a positive real eigenvalue, and the corresponding eigenvector has positive components. To prove this we note first that A has an eigenvector x ≥ 0, x ≠ 0, by the previous theorem. If the corresponding eigenvalue is zero, then we have

Ax = λx = 0. (1.13.12)

Since x ≠ 0, A must have at least one column identically zero, which implies A is reducible, contrary to hypothesis. Therefore, λ ≠ 0.

⁷ From Reference 8. Some further results in this section are also from Reference 8, Chapter II.

Conversely, if the eigenvector has some zero components, then we have, after a permutation of rows of x and corresponding rows and columns of A,

x = [ x₁ ]
    [ 0  ] ,  (1.13.13)

with x₁ > 0, and

Ax = [ A₁₁  A₁₂ ] [ x₁ ]   [ A₁₁x₁ ]     [ x₁ ]
     [ A₂₁  A₂₂ ] [ 0  ] = [ A₂₁x₁ ] = λ [ 0  ] .  (1.13.14)

But then A₂₁x₁ = 0, and since A₂₁ ≥ 0 and x₁ > 0 this forces A₂₁ = 0, so again A is reducible, contrary to the hypothesis. Therefore, x > 0.

The above result is contained in a classical theorem by Perron and Frobenius, which can be stated: If A is a nonnegative irreducible matrix, then A has a positive simple real eigenvalue λ₀ equal to the spectral radius of A. The corresponding eigenvector has all positive components.
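The Perron-Frobenius behavior can be observed numerically: repeated multiplication by a nonnegative irreducible matrix drives a positive starting vector toward the positive eigenvector belonging to λ₀. A minimal pure-Python power-iteration sketch (the helper name `power_iteration` and the 2×2 example are illustrative, not from the text):

```python
def power_iteration(a, steps=200):
    """Estimate the dominant eigenvalue and eigenvector of a small square
    matrix by repeated multiplication and max-norm normalization."""
    n = len(a)
    x = [1.0] * n                      # positive starting vector
    lam = 0.0
    for _ in range(steps):
        y = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)   # max-norm estimate of lambda_0
        x = [v / lam for v in y]
    return lam, x

# Nonnegative irreducible example: [[1, 2], [3, 2]] has eigenvalues 4 and -1,
# so lambda_0 = 4 is simple, positive, and equals the spectral radius,
# and the iterate converges to the positive eigenvector (2/3, 1).
lam0, v = power_iteration([[1.0, 2.0], [3.0, 2.0]])
```

The iteration converges because λ₀ is simple and strictly dominant; this is also the basis of the iterative eigenvalue techniques used for the large matrices arising from differential equations.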

To prove that λ₀ equals the spectral radius of A, we consider a matrix B with 0 ≤ B and 0 ≤ bᵢⱼ ≤ aᵢⱼ, all i, j. Thus every element of B is nonnegative and equal to or less than the corresponding element of A. We denote the relationship as 0 ≤ B ≤ A. We have

Ax = λ₀x, (1.13.15)

where x has positive components. Similarly,

Aᵀy = λ₀y, (1.13.16)

where y has positive components. Now let

Bz = γz, (1.13.17)

where γ is any eigenvalue of B. We now show that |γ| ≤ λ₀, with γ = λ₀ for B = A, which proves that λ₀ equals the spectral radius.

From Eq. (1.13.17) we have

γzᵢ = Σⱼ bᵢⱼzⱼ, (1.13.18)

and hence

|γ| |zᵢ| ≤ Σⱼ bᵢⱼ |zⱼ| ≤ Σⱼ aᵢⱼ |zⱼ|, (1.13.19)

since all elements of A, B are nonnegative and bᵢⱼ ≤ aᵢⱼ. We multiply Eq. (1.13.19) by yᵢ and sum on i to obtain, using Eq. (1.13.16),

|γ| Σᵢ yᵢ |zᵢ| ≤ Σᵢ Σⱼ yᵢ aᵢⱼ |zⱼ| = λ₀ Σⱼ yⱼ |zⱼ|, (1.13.20)

hence, since y > 0,

|γ| ≤ λ₀. (1.13.21)

If γ = λ₀, then the equality holds in Eq. (1.13.19) and requires that

bᵢⱼ = aᵢⱼ, all i, j, (1.13.22)

and then B = A.

To prove that λ₀ is a simple root, we need only show that the determinant

P(λ) = |A − λI| (1.13.23)

has a zero of multiplicity one when λ = λ₀. If any polynomial P(λ) has a repeated root at λ₀, then

dP(λ₀)/dλ = 0. (1.13.24)

From Eq. (1.13.23) we readily see that the derivative of the secular polynomial can be written

dP(λ)/dλ = −Σᵢ |Mᵢᵢ − λI|, (1.13.25)

where Mᵢᵢ is the ith principal minor of A. From previous results we know 0 ≤ Mᵢᵢ ≤ A and hence

−|Mᵢᵢ − λ₀I| > 0 (all i). (1.13.26)

We then have

dP(λ₀)/dλ > 0, (1.13.27)

and hence λ₀ is a simple root.

Thus we have shown that λ₀ equals the spectral radius of A and, further, if any element of A increases, then the spectral radius increases. Having established the Perron-Frobenius theorem for nonnegative irreducible matrices, we may immediately sharpen the earlier theorem regarding nonnegative matrices in general. In particular, if A is a nonnegative reducible matrix, then A has a nonnegative real eigenvalue which equals the spectral radius of A, and as before the corresponding eigenvector has nonnegative components. To prove that the nonnegative eigenvalue is the spectral radius, we merely write A in reduced form

A = [ A₁₁  A₁₂ ]
    [  0   A₂₂ ] ,  (1.13.28)

and examine the matrices A₁₁, A₂₂. If they are also reducible, we continue the reduction until all diagonal submatrices are irreducible or null. If all the Aᵢᵢ = 0, then all the eigenvalues are zero. If any Aᵢᵢ ≠ 0, then the largest eigenvalue of the nonzero Aᵢᵢ determines the spectral radius. Also, for two matrices A, B such that 0 ≤ B ≤ A, it follows from the above that

r(B) ≤ r(A).
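The monotonicity r(B) ≤ r(A) can be checked on a small example. For a symmetric 2×2 matrix the spectral radius follows from the quadratic formula applied to the characteristic polynomial t² − tr·t + det; a minimal pure-Python sketch (the helper name `spectral_radius_2x2` and the matrices are illustrative, not from the text):

```python
import math

def spectral_radius_2x2(m):
    """Spectral radius of a symmetric 2x2 matrix via the quadratic formula
    applied to its characteristic polynomial t^2 - tr*t + det = 0."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)   # real for symmetric matrices
    return max(abs((tr + disc) / 2.0), abs((tr - disc) / 2.0))

# 0 <= B <= A elementwise: r(A) = 3 and r(B) = (3 + sqrt(5))/2 < 3.
A = [[2.0, 1.0], [1.0, 2.0]]
B = [[2.0, 1.0], [1.0, 1.0]]
print(spectral_radius_2x2(B) <= spectral_radius_2x2(A))  # -> True
```

Decreasing any single element strictly (here a₂₂ from 2 to 1) strictly decreases the spectral radius, in line with the remark above that increasing an element of A increases r(A).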

We shall have occasion to use these results in Chapters III and IV when we discuss the technique for solving simultaneous equations.
