
Definition 1.1. A (real) matrix is a rectangular array of real numbers (in square brackets). The numbers are called the entries of the matrix. A matrix is said to be an m×n matrix, or a matrix of size m×n (m, n ∈ ℕ), if the number of rows is m and the number of columns is n.

Example 1.2. The following object is a matrix:
$$\begin{bmatrix} 1 & 2 & 3 \\ -3 & -2 & \sqrt{2} \end{bmatrix};$$
it has two rows and three columns.

Definition 1.3. The set of all m×n matrices with real entries will be denoted by $\mathbb{R}^{m\times n}$. If $A \in \mathbb{R}^{m\times n}$ then A has m rows and n columns. The rows are numbered from the top down, and the columns are numbered from left to right.

Definition 1.4. The (i, j)-entry of the matrix $A \in \mathbb{R}^{m\times n}$ will be denoted by $a_{i,j}$ or $A_{i,j}$ ($1 \le i \le m$, $1 \le j \le n$).

Example 1.5. The matrix
$$\begin{bmatrix} 1 & 2 & 3 \\ -3 & -2 & \sqrt{2} \end{bmatrix}$$
has two rows and three columns; therefore, it is a 2×3 matrix. The (2, 3)-entry of the matrix is $\sqrt{2}$, that is, $\sqrt{2}$ is the third element in the second row.
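The following short NumPy sketch (not part of the notes; it assumes the numpy package) shows how the 1-based (i, j)-entry convention above maps to 0-based array indexing in code.

```python
import numpy as np

# The matrix of Example 1.5. Mathematical indices are 1-based,
# NumPy indices are 0-based, so the (2, 3)-entry is A[1, 2] in code.
A = np.array([[1.0, 2.0, 3.0],
              [-3.0, -2.0, np.sqrt(2.0)]])

print(A.shape)   # (2, 3): two rows, three columns
print(A[1, 2])   # the (2, 3)-entry, i.e. sqrt(2) ≈ 1.41421356...
```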

Definition 1.6. The extended form of $A \in \mathbb{R}^{m\times n}$ is
$$A = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{bmatrix}.$$

The matrix A has a more compact form: $[a_{i,j}]_{m\times n}$. In general, in a compact form of A the entries are described by a ‘simple’ formula.

Example 1.7. A compact form of the matrix
$$\begin{bmatrix} 2 & 5 & 10 & 17 \\ 5 & 8 & 13 & 20 \\ 10 & 13 & 18 & 25 \end{bmatrix}$$
is $[i^2+j^2]_{3\times 4}$, where the number of rows is 3, the number of columns is 4, and the (i, j)-entry of the matrix is $i^2+j^2$.

Definition 1.8. Two matrices A and B are equal (written A = B) if they have the same size (say m×n) and the corresponding entries are equal, that is, the equality $A_{i,j} = B_{i,j}$ holds for every index pair (i, j), where $1 \le i \le m$ and $1 \le j \le n$.

Example 1.9. The matrices
$$\begin{bmatrix} 2 & 5 & 10 & 17 \\ 5 & 8 & 13 & 20 \\ 10 & 13 & 18 & 25 \end{bmatrix} \quad\text{and}\quad [i^2+j^2]_{3\times 4}$$
are equal.

The first matrix is in extended form, while the second matrix is in compact form.
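As an illustration (not part of the notes; it assumes numpy), the compact form can be expanded into the extended form by simply evaluating the formula $i^2+j^2$ over the 1-based index ranges.

```python
import numpy as np

# Build the matrix of Examples 1.7 and 1.9 from its compact form [i^2 + j^2]_{3x4}.
# The indices i, j run from 1 to 3 and from 1 to 4, as in the notes.
M = np.array([[i**2 + j**2 for j in range(1, 5)] for i in range(1, 4)])

print(M)
# [[ 2  5 10 17]
#  [ 5  8 13 20]
#  [10 13 18 25]]
```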

Definition 1.10. Let $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{r\times s}$ be matrices. Their sum A + B is defined if and only if A and B have the same size. If m = r and n = s then $A + B \in \mathbb{R}^{m\times n}$ and
$$A + B = \begin{bmatrix} A_{1,1}+B_{1,1} & \cdots & A_{1,n}+B_{1,n} \\ \vdots & \ddots & \vdots \\ A_{m,1}+B_{m,1} & \cdots & A_{m,n}+B_{m,n} \end{bmatrix}.$$

Example 1.11. If
$$A = \begin{bmatrix} 11 & -21 \\ 7 & 14 \end{bmatrix}, \quad B = [ij - i + j]_{2\times 3}, \quad\text{and}\quad C = \begin{bmatrix} -11 & 21 \\ -7 & -13 \end{bmatrix}$$
then the sum A + B does not exist, while the sum of the matrices A and C does:
$$A + C = \begin{bmatrix} 11+(-11) & -21+21 \\ 7+(-7) & 14+(-13) \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}.$$

Theorem 1.12. If A, B, and C are matrices of the same size then

A + B = B + A, (commutative law)

A + (B + C) = (A + B) + C. (associative law)
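A small NumPy sketch (illustrative only, assuming numpy) of Definition 1.10, Example 1.11, and Theorem 1.12: addition requires equal sizes and is commutative.

```python
import numpy as np

# The matrices A, B, C of Example 1.11.
A = np.array([[11, -21],
              [ 7,  14]])
C = np.array([[-11,  21],
              [ -7, -13]])
B = np.array([[i*j - i + j for j in range(1, 4)] for i in range(1, 3)])  # 2x3

print(A + C)                          # [[0 0], [0 1]]
print(np.array_equal(A + C, C + A))   # True: commutative law
try:
    A + B                             # shapes (2, 2) and (2, 3) differ
except ValueError as err:
    print("A + B is not defined:", err)
```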

Definition 1.13. A matrix is called a zero matrix if all of its entries are 0. A zero matrix of size m×n will be denoted by $0_{m\times n}$. The negative of a matrix A, denoted by −A, is obtained from A by multiplying each entry of A by −1. The difference of the matrices A and B is defined as $A - B := A + (-B)$.

Theorem 1.14. For every m×n matrix A we have that

(a) $A + 0_{m\times n} = 0_{m\times n} + A = A$,

(b) $A + (-A) = (-A) + A = 0_{m\times n}$.

Definition 1.15. If $A = [a_{i,j}]_{m\times n}$ is an m×n matrix and λ is any real number then the scalar multiple λ·A of A is the m×n matrix $[\lambda a_{i,j}]_{m\times n}$.

Example 1.16. If
$$A = \begin{bmatrix} 11 & -21 & 6 \\ 7 & 14 & -8 \end{bmatrix}$$
and λ = −3 then
$$\lambda\cdot A = \begin{bmatrix} (-3)\cdot 11 & (-3)\cdot(-21) & (-3)\cdot 6 \\ (-3)\cdot 7 & (-3)\cdot 14 & (-3)\cdot(-8) \end{bmatrix} = \begin{bmatrix} -33 & 63 & -18 \\ -21 & -42 & 24 \end{bmatrix}.$$

Theorem 1.17. Let A be an m×n matrix and λ be a real number. Then

$0\cdot A = 0_{m\times n}$ and $\lambda\cdot 0_{m\times n} = 0_{m\times n}$.
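An illustrative NumPy check (not from the notes) of Definition 1.15 and Theorem 1.17, using the matrix of Example 1.16.

```python
import numpy as np

# Scalar multiplication acts entrywise.
A = np.array([[11, -21,  6],
              [ 7,  14, -8]])
lam = -3

print(lam * A)
# [[-33  63 -18]
#  [-21 -42  24]]
print(np.array_equal(0 * A, np.zeros((2, 3))))   # Theorem 1.17: 0*A = 0_{2x3}
```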

Theorem 1.18. Let A, B, and C be arbitrary m×n matrices and λ, µ be real scalars. Then

(1) A + B = B + A,
(2) (A + B) + C = A + (B + C),
(3) $A + 0_{m\times n} = 0_{m\times n} + A = A$,
(4) $A + (-A) = (-A) + A = 0_{m\times n}$,
(5) λ·(A + B) = λ·A + λ·B,
(6) (λ + µ)·A = λ·A + µ·A,
(7) (λµ)·A = λ·(µ·A),
(8) 1·A = A.

Definition 1.19. Let A be an m×n matrix. The transpose of A is the n×m matrix whose rows are just the columns of A, in the same order. The transpose of A will be denoted by $A^T$.

Example 1.20. If
$$A = \begin{bmatrix} 11 & -21 & 6 \\ 7 & 14 & -8 \end{bmatrix} \in \mathbb{R}^{2\times 3}$$
then its transpose is
$$A^T = \begin{bmatrix} 11 & 7 \\ -21 & 14 \\ 6 & -8 \end{bmatrix} \in \mathbb{R}^{3\times 2}.$$

Theorem 1.21. Let A and B be arbitrary m×n matrices and λ be any real number. Then

(a) $A^T$ is an n×m matrix,
(b) $(A^T)^T = A$,
(c) $(\lambda\cdot A)^T = \lambda\cdot A^T$,
(d) $(A + B)^T = A^T + B^T$.
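The following sketch (illustrative, assuming numpy; .T denotes the NumPy transpose) checks the statements of Theorem 1.21 on the matrix of Example 1.20 and on a second, arbitrarily chosen matrix B.

```python
import numpy as np

A = np.array([[11, -21,  6],
              [ 7,  14, -8]])
B = np.array([[ 1,  2,  3],
              [ 4,  5,  6]])          # illustrative choice, not from the notes

print(A.T.shape)                               # (3, 2): A^T is an n x m matrix
print(np.array_equal(A.T.T, A))                # (b) (A^T)^T = A
print(np.array_equal((-3 * A).T, -3 * A.T))    # (c) (lambda*A)^T = lambda*A^T
print(np.array_equal((A + B).T, A.T + B.T))    # (d) (A + B)^T = A^T + B^T
```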

Definition 1.22. A matrix A is said to be symmetric if $A^T = A$.

Theorem 1.23. The matrix $A \in \mathbb{R}^{m\times n}$ is symmetric if and only if m = n (that is, A is a square matrix) and $A_{i,j} = A_{j,i}$ for all indices i, j ($1 \le i, j \le n$).

Example 1.24. If
$$A = \begin{bmatrix} 11 & -21 \\ 7 & 14 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 11 & -21 \\ -21 & 15 \end{bmatrix}$$
then A is not symmetric (e.g., $A_{1,2} \ne A_{2,1}$), but B is symmetric.
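A minimal symmetry check in NumPy (illustrative; the helper name is_symmetric is not from the notes), applied to the matrices of Example 1.24.

```python
import numpy as np

A = np.array([[11, -21],
              [ 7,  14]])
B = np.array([[ 11, -21],
              [-21,  15]])

def is_symmetric(M: np.ndarray) -> bool:
    # A non-square matrix can never be symmetric (Theorem 1.23).
    return M.shape[0] == M.shape[1] and np.array_equal(M, M.T)

print(is_symmetric(A))   # False, since the (1, 2)- and (2, 1)-entries differ
print(is_symmetric(B))   # True
```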

Definition 1.25. Let A and B be matrices with $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{r\times s}$. The product of A and B is defined if and only if n = r. (To be continued.)

Definition 1.26. Let A and B be matrices with $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times s}$. Then the product of A and B is defined; the product AB of A and B is the m×s matrix whose (i, j)-entry is computed in the following way:

multiply each entry of row i of A by the corresponding entry of column j of B, and add the results.

This is the dot product of row i (of A) and column j (of B).

Let A and B be matrices of sizes m×n and r×s, respectively. Their product AB is defined only if n = r; if n = r, then AB is of size m×s, that is, $AB \in \mathbb{R}^{m\times s}$.
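In formula form, the rule of Definition 1.26 is $(AB)_{i,j} = \sum_{k=1}^{n} A_{i,k}B_{k,j}$. The sketch below (illustrative; the helper name matmul_by_definition and the use of numpy are assumptions) implements exactly this dot-product rule and compares it with NumPy's built-in @ product on the matrices A and B of Example 1.27 below.

```python
import numpy as np

def matmul_by_definition(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    # (AB)_{i,j} is the dot product of row i of A and column j of B.
    m, n = A.shape
    r, s = B.shape
    if n != r:                      # the product is defined only if n = r
        raise ValueError("AB is not defined: the inner sizes differ")
    AB = np.zeros((m, s))
    for i in range(m):
        for j in range(s):
            AB[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return AB

A = np.array([[2,  -3,  6],
              [3, -14, -8]])
B = np.array([[  2, -3],
              [  6,  3],
              [-14, -8]])

print(matmul_by_definition(A, B))                           # [[-98. -63.], [ 34.  13.]]
print(np.array_equal(matmul_by_definition(A, B), A @ B))    # True: matches NumPy's product
```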

Example 1.27. Let
$$A = \begin{bmatrix} 2 & -3 & 6 \\ 3 & -14 & -8 \end{bmatrix}, \quad B = \begin{bmatrix} 2 & -3 \\ 6 & 3 \\ -14 & -8 \end{bmatrix}, \quad\text{and}\quad C = \begin{bmatrix} 2 & -3 \\ 3 & -14 \end{bmatrix}.$$

Then we can form nine products with two factors with the aid of A, B, C:

A·A, A·B, A·C, B·A, B·B, B·C, C·A, C·B, C·C.

The products A·A, A·C, B·B, C·B do not exist, while the others are the following:

$$A\cdot B = \begin{bmatrix} 2\cdot 2 + (-3)\cdot 6 + 6\cdot(-14) & 2\cdot(-3) + (-3)\cdot 3 + 6\cdot(-8) \\ 3\cdot 2 + (-14)\cdot 6 + (-8)\cdot(-14) & 3\cdot(-3) + (-14)\cdot 3 + (-8)\cdot(-8) \end{bmatrix} = \begin{bmatrix} -98 & -63 \\ 34 & 13 \end{bmatrix},$$

$$B\cdot A = \begin{bmatrix} -5 & 36 & 36 \\ 21 & -60 & 12 \\ -52 & 154 & -20 \end{bmatrix}, \quad B\cdot C = \begin{bmatrix} -5 & 36 \\ 21 & -60 \\ -52 & 154 \end{bmatrix}, \quad C\cdot A = \begin{bmatrix} -5 & 36 & 36 \\ -36 & 187 & 130 \end{bmatrix}, \quad C\cdot C = \begin{bmatrix} -5 & 36 \\ -36 & 187 \end{bmatrix}.$$

The products A·B and B·A show that the matrix product is not commutative: A·B ≠ B·A.
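A quick NumPy check of Example 1.27 (illustrative; @ is NumPy's matrix product): A·B and B·A both exist but do not even have the same size.

```python
import numpy as np

A = np.array([[2,  -3,  6],
              [3, -14, -8]])
B = np.array([[  2, -3],
              [  6,  3],
              [-14, -8]])

print(A @ B)                          # [[-98 -63], [ 34  13]]
print(B @ A)                          # the 3x3 matrix computed in Example 1.27
print((A @ B).shape, (B @ A).shape)   # (2, 2) (3, 3): A*B and B*A cannot be equal
```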

Definition 1.28. Let A be an m×n matrix. The matrix A is said to be a square matrix if m = n. The main diagonal of a square matrix of size n×n consists of the elements $a_{i,i}$ (i = 1, . . . , n). The identity matrix is a square matrix with 1's on the main diagonal and 0's elsewhere. The identity matrix of size n×n will be denoted by $I_n$.

Theorem 1.29. Let A be an arbitrary matrix of size m×n. Then $I_m A = A$ and $A I_n = A$.

Theorem 1.30 (Associative law). Let A, B, and C be matrices such that AB and (AB)C exist. Then

(AB)C=A(BC).

Example 1.31. If A = then both of the products (AB)C and A(BC) exist: (AB)C =
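As a check of Theorem 1.30, the following sketch multiplies arbitrarily chosen integer matrices of compatible sizes (purely illustrative; these are not the matrices of Example 1.31).

```python
import numpy as np

# Arbitrarily chosen matrices with compatible sizes (2x3, 3x4, 4x2).
rng = np.random.default_rng(seed=0)
A = rng.integers(-5, 6, size=(2, 3))
B = rng.integers(-5, 6, size=(3, 4))
C = rng.integers(-5, 6, size=(4, 2))

print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True: (AB)C = A(BC)
```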

Theorem 1.32 (Distributive laws). Let A, B, and C be matrices such that the indicated operations can be performed. Then

A(B+C) =AB+AC and (B+C)A=BA+CA.

Example 1.33. Let A, B and C be the following matrices: A = Then the matrices A(B+C) and AB+AC exist; moreover, A(B+C) =
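Again purely as an illustration (the matrices below are chosen here and are not those of Example 1.33), a NumPy check of the distributive laws of Theorem 1.32.

```python
import numpy as np

# Square matrices so that both distributive laws can be checked.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, -1],
              [5,  2]])
C = np.array([[2, 2],
              [1, 0]])

print(np.array_equal(A @ (B + C), A @ B + A @ C))   # A(B + C) = AB + AC
print(np.array_equal((B + C) @ A, B @ A + C @ A))   # (B + C)A = BA + CA
```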

Theorem 1.34. Let A and B be matrices such that AB exists, and let λ be an arbitrary scalar. Then

(a) λ·(AB) = (λ·A)B = A(λ·B),
(b) $(AB)^T = B^T A^T$.

Warning!

If the order of the factors in a product of matrices is changed, the product may change (or may not exist).

Example 1.35. If A, B and C are the following matrices:
$$A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 0 \\ 1 & 1 \end{bmatrix},$$
then the matrices ABC, ACB, BAC, BCA, CAB and CBA are pairwise distinct 2×2 matrices:
$$ABC = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad ACB = \begin{bmatrix} 0 & 0 \\ -1 & 0 \end{bmatrix}, \quad BAC = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix},$$
$$BCA = \begin{bmatrix} 0 & -1 \\ 0 & 1 \end{bmatrix}, \quad CAB = \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix}, \quad CBA = \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}.$$
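The six orderings can be checked mechanically; the sketch below (illustrative, assuming numpy and the standard library's itertools) computes every ordering of the product of the matrices of Example 1.35 and confirms that they are pairwise distinct.

```python
import numpy as np
from itertools import permutations

A = np.array([[ 0, 1], [ 1, 1]])
B = np.array([[ 1, 0], [-1, 0]])
C = np.array([[-1, 0], [ 1, 1]])

named = {'A': A, 'B': B, 'C': C}
products = {x + y + z: named[x] @ named[y] @ named[z]
            for x, y, z in permutations('ABC')}

for name, P in products.items():
    print(name, P.tolist())

# Confirm that the six products are pairwise distinct.
keys = list(products)
print(all(not np.array_equal(products[p], products[q])
          for i, p in enumerate(keys) for q in keys[i + 1:]))   # True
```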

Remark 1.36. (a) For matrix multiplication the cancellation laws¹ do not hold, that is,

– the equality AB = AC does not imply B = C, and
– the equality BA = CA does not imply B = C.

(b) There are matrices $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{n\times p}$ such that $AB = 0_{m\times p}$, but $A \ne 0_{m\times n}$ and $B \ne 0_{n\times p}$.²
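A small illustration of both parts of Remark 1.36 (the matrices below are chosen here, not taken from the notes).

```python
import numpy as np

A = np.array([[1, 0],
              [1, 0]])
B = np.array([[0, 0],
              [1, 0]])
C = np.array([[0, 0],
              [2, 5]])

# (a) AB = AC although B != C: the cancellation law fails.
print(np.array_equal(A @ B, A @ C))   # True
print(np.array_equal(B, C))           # False
# (b) A != 0 and B != 0, yet their product is the zero matrix.
print(A @ B)                          # [[0 0], [0 0]]
```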

Definition 1.37. A square matrix is said to be diagonal if each entry outside the main diagonal is 0. A square matrix is said to be lower/upper triangular if all the entries above/below the main diagonal are 0.
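For illustration (assuming numpy), the built-in helpers np.diag, np.tril and np.triu produce matrices of exactly these three kinds.

```python
import numpy as np

D = np.diag([1, 2, 3])              # diagonal: zeros outside the main diagonal
M = np.arange(1, 10).reshape(3, 3)
L = np.tril(M)                      # lower triangular: zeros above the main diagonal
U = np.triu(M)                      # upper triangular: zeros below the main diagonal

print(D)
print(L)
print(U)
```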

¹For real numbers the cancellation law states that if ab = ac and a ≠ 0 then b = c (a, b, c ∈ ℝ).

In this case we have only one law. Why?

²For real numbers a and b we have that ab = 0 implies a = 0 or b = 0.
