Linear Algebra

Introduction

This material was prepared for the Linear Algebra course VEMKMA1143G at the University of Pannonia. The classes usually include students from the Faculty of Engineering and the Faculty of Economics; some of them study for a BSc degree, some for a Masters. This variety of background and motivation is a real challenge for the course instructor. Therefore, we collected the material in a very compact form. At the lectures and tutorials we cover parts of these notes with extended material and extra explanations, but the notes are meant to serve as the spine of the course material.

Linear Algebra is the branch of Mathematics concerning linear equations, linear maps and their representations through matrices and vector spaces. It also has practical applications in our modern daily life in various industries. We collected the fundamental material in eight Sections. The course builds on previous mathematical knowledge in Euclidean Geometry and Elementary Algebra; a little Group Theory is also very useful.

The Sections are short and include only the most fundamental results and facts, usually without proofs.

Each section also contains some examples with full solutions. Readers should work through these explanations; after that, we hope they can solve the exercises at the end of each section.

The workbook finishes with solutions to the exercises.

Veszprém, December 2018. János Barát

Associate Professor


1 Systems of linear equations

1.1 Two equations and two unknowns

In high school, we learnt how to solve a system of two linear equations in two unknowns. The main idea behind our method was the following. If we add two equations, that is, we add the left-hand sides and the right-hand sides of two equations, then we get another valid equation. Let us see the following example.

2x + 3y = 11 (1.1)

4x + 5y = 6 (1.2)

Here, we add (-2) times equation (1.1) to equation (1.2) to get

−y = −16 (1.3)

Now we can add 3 times equation (1.3) to equation (1.1) to get

2x = −37 (1.4)

Hence the final solution is x = −37/2 and y = 16.
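As a quick numerical check, here is a minimal sketch of our own (not part of the original notes); the numpy library is assumed to be available.

import numpy as np

# coefficient matrix and right-hand sides of equations (1.1)-(1.2)
A = np.array([[2.0, 3.0],
              [4.0, 5.0]])
b = np.array([11.0, 6.0])

x, y = np.linalg.solve(A, b)
print(x, y)   # expected: -18.5 (that is, -37/2) and 16.0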

What we did here can be generalised to any system of m linear equations in n unknowns. Before doing so, we recall that equations of the form ax + by = c are called linear, since the solution set of such an equation corresponds to a line in a Cartesian coordinate system. What we mainly expect is that two lines in the plane have a unique intersection point, just as happened in our first example. However, there are two other possibilities:

Example 1.1. Consider the system

2x + 3y = 6 (1.5)

4x + 6y = 12 (1.6)

It is apparent that the two equations carry the same information and the pair (x, (6 − 2x)/3) is a solution of the system for any real number x. Therefore, the system has an infinite number of solutions. Geometrically, the two lines corresponding to the two equations coincide in this case.

Example 1.2. Consider the system

2x + 3y = 6 (1.7)

4x + 6y = 11 (1.8)

Now multiply the first equation by 2, and we see that the two equations are contradictory. The system has no solution. Geometrically, the two lines corresponding to the two equations are parallel.

In what follows, we will see that this trichotomy applies to the general case (that is, when there are m equations and n unknowns).

1.2 Gauss-Jordan elimination

In this section, we describe a general method for finding all solutions to a system of m linear equations in n unknowns. First we look at the case m = 3, n = 3. We use the notation x1, x2, x3 for the variables.

Example 1.3. Solve the following system:

2x1 + 4x2 + 6x3 = 18

4x1 + 5x2 + 6x3 = 24 (1.9)

3x1 + x2 − 2x3 = 4


Solution: Our method will be to simplify the equations as we did in the previous section. We begin by dividing the first equation by 2. This gives us

x1 + 2x2 + 3x3 = 9

4x1 + 5x2 + 6x3 = 24 (1.10)

3x1 + x2 − 2x3 = 4

As we saw in the previous section, adding two equations together leads to a third, valid equation.

This equation may replace either of the two equations used to obtain it in the system. We begin the simplification of the system by multiplying the first equation by (−4) and adding it to the second equation.

This leads to

x1 + 2x2 + 3x3 = 9

−3x2 − 6x3 = −12 (1.11)

3x1 + x2 − 2x3 = 4

Now we multiply the first equation by (−3) and add it to the third equation.

x1 + 2x2 + 3x3 = 9

−3x2 − 6x3 = −12 (1.12)

−5x2 − 11x3 = −23

Note that in system (1.12) the variable x1 has been eliminated from the second and third equation.

Next we divide the second equation by −3.

x1 + 2x2 + 3x3 = 9

x2 + 2x3 = 4 (1.13)

−5x2 − 11x3 = −23

We multiply the second equation by (−2) and add it to the first and then multiply the second equation by 5 and add it to the third:

x1 − x3 = 1

x2 + 2x3 = 4

−x3 = −3

Now we simply multiply the third equation by (−1).

x1 − x3 = 1

x2 + 2x3 = 4

x3 = 3

Finally, we add the third equation to the first and then multiply the third equation by (−2) and add it to the second. We obtain a system that is equivalent to (1.9):

x1 = 4

x2 = −2

x3 = 3

(1.14)

This is the unique solution to the system. The method we used here is the Gauss-Jordan elimination.
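Again as an illustration only (our own sketch, not part of the original notes), the solution can be checked numerically; numpy is assumed to be available.

import numpy as np

A = np.array([[2.0, 4.0, 6.0],
              [4.0, 5.0, 6.0],
              [3.0, 1.0, -2.0]])
b = np.array([18.0, 24.0, 4.0])

print(np.linalg.solve(A, b))   # expected: [ 4. -2.  3.], matching (1.14)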

We introduce a notation that makes our life easier. A matrix is a rectangular array of numbers. We will study matrices in detail in Section 4. For instance, the coefficients of the variables in system (1.9) can be written as the entries of a matrix A, called the coefficient matrix of the system:


A = [ 2  4  6 ]
    [ 4  5  6 ]
    [ 3  1 −2 ]

We will repeatedly use three simple steps to achieve the reduced row echelon form (as in (1.2)) from the coefficient matrix. This was illustrated by an example at the beginning of this section. The three possible steps are the elementary row operations (a short computational sketch of these steps follows the list):

(i) Multiply a row by a nonzero number.

(ii) Add a multiple of one row to another row.

(iii) Interchange two rows.
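The following is a minimal sketch of Gauss-Jordan elimination built only from the three elementary row operations above. It is our own illustration: the function name gauss_jordan is an arbitrary choice and the numpy library is assumed to be available.

import numpy as np

def gauss_jordan(M, eps=1e-12):
    # reduce the augmented matrix [A | b] to reduced row echelon form
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols - 1):                 # the last column is the right-hand side
        # (iii) interchange rows: bring the largest entry into the pivot position
        pivot = max(range(pivot_row, rows), key=lambda r: abs(M[r, col]))
        if abs(M[pivot, col]) < eps:
            continue                            # no pivot in this column (free variable)
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]
        # (i) multiply the pivot row by a nonzero number to make the pivot equal to 1
        M[pivot_row] /= M[pivot_row, col]
        # (ii) add multiples of the pivot row to the other rows to clear the column
        for r in range(rows):
            if r != pivot_row:
                M[r] -= M[r, col] * M[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

aug = np.array([[2, 4, 6, 18],
                [4, 5, 6, 24],
                [3, 1, -2, 4]])
print(gauss_jordan(aug))   # the last column becomes [4, -2, 3], as in Example 1.3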

Example 1.4. The following series of matrices show a typical Gauss-Jordan elimination process.

[ 1  3  1   9 ]       [ 1  3  1   9 ]       [ 1  3  1   9 ]       [ 1  0 −2  −3 ]
[ 1  1 −1   1 ]   →   [ 0 −2 −2  −8 ]   →   [ 0 −2 −2  −8 ]   →   [ 0  1  1   4 ]
[ 3 11  5  35 ]       [ 0  2  2   8 ]       [ 0  0  0   0 ]       [ 0  0  0   0 ]

The first arrow hides the following two elementary row operations: we subtract row 1 from row 2 and we subtract 3 times row 1 from row 3. The second arrow means the following: we add row 2 to row 3. The third arrow corresponds to the following: we multiply row 2 by −1/2 (we can also say that we divided by −2), and after that we subtract 3 times row 2 from row 1.

At the end, we read off the following: x1 − 2x3 = −3 and x2 + x3 = 4. That is, we have the freedom to set the value of x3 (a free variable), and after that x1 and x2 are uniquely determined. Since we have an infinite number of choices for x3, there are infinitely many solutions to the system.

For instance, setting x3 = 4 gives us the solution x1 = 5, x2 = 0, x3 = 4. We can check the equations:

5 + 3·0 + 4 = 9,   5 + 0 − 4 = 1,   and   3·5 + 11·0 + 5·4 = 35.

Remark 1.5. If there are more variables than equations, then there will always be some free variables.

Remark 1.6. If we get a row in which all coefficients are 0 but the right-hand side is non-zero, then we have reached a contradiction. In that case, there is no solution to the system.

Exercises

Solve the following systems using the Gauss-Jordan elimination.

1.
3x1 + 3x2 + x3 = 5
2x1 + 3x2 + x3 = 1
2x1 + x2 + 3x3 = 11

2.
x1 + 3x2 + 5x3 + 7x4 = 12
3x1 + 5x2 + 7x3 + x4 = 0
5x1 + 7x2 + x3 + 3x4 = 4
7x1 + x2 + 3x3 + 5x4 = 16

3.
2x + y − z = 8
−3x − y + 2z = −11
−2x + y + 2z = −3

4.
2x + 4y − 6z = 18
4x + 5y + 6z = 24
2x + 7y + 12z = 40

5.
x1 + 2x2 − x3 + x4 = 7
3x1 + 6x2 − 3x3 + 3x4 = 21


2 Vectors

2.1 Addition and scalar multiples of vectors

We define an n-component row vector as an ordered set of n numbers written as (x1, x2, . . . , xn). Here xi ∈ R for all i. Similarly, an n-component column vector is an ordered set of n numbers written as

[ x1 ]
[ x2 ]
[ ... ]
[ xn ]

Here xi is again called the ith coordinate. If a vector has k coordinates, then we call it a k-vector. The natural notation is used for the all-zero vector (of any size): 0 = (0, 0, . . . , 0). The word ordered is important in the definition of a vector: the two vectors (1, 2) and (2, 1) are different. In this text, we denote vectors by boldface lower-case letters v, w, t, etc., or underlined lower-case letters v, u, x, etc. Two vectors a = (a1, . . . , ak) and b = (b1, . . . , bl) are equal if and only if they have the same number of components and the corresponding components are equal, that is, k = l and ai = bi for all i.

Addition. Let a = (a1, . . . , ak) and b = (b1, . . . , bk) be two vectors, then their sum is a+b = (a1+b1, . . . , ak+bk).

Example. (3,2,4) + (1,−1,−2) = (4,1,2).

Multiplication by a scalar. Let a = (a1, . . . , ak) be a vector and α ∈ R. Then the product αa is given by (αa1, . . . , αak).

We can combine the two operations.

Example 2.1. Let

a = [ 1 ]        b = [ −1 ]
    [ 2 ]            [  3 ]
    [ 3 ]            [  3 ]

Calculate 2a + 3b.

Solution:

2a + 3b = [ 2·1 + 3·(−1) ]   [ −1 ]
          [ 2·2 + 3·3    ] = [ 13 ]
          [ 2·3 + 3·3    ]   [ 15 ]

We can prove a number of facts regarding these operations:

Theorem 2.2. Let a, b, c be vectors of the same size and α, β be scalars. Then the following hold:

a + b = b + a. (Commutative law)

a + 0 = a.

0a = 0.

(a + b) + c = a + (b + c). (Associative law)

α(a + b) = αa + αb.

(α + β)a = αa + βa.

(αβ)a = α(βa).

Another notion we learnt in high school is the length of a vector in two dimensions. That arises as a direct application of the Pythagorean theorem. Now if x = (x1, x2, . . . , xn), then the length is √(x1^2 + · · · + xn^2), denoted by |x|.

Exercises

1. Let a = (8, 6, 8, 5) and b = (−5, −9, −1, 3). Calculate −4a + 7b and −7(a + b).

2. Let a = (4, 4, −7), b = (−8, 1, 4) and c = (3, 4, −5). Calculate a + b + 4c.

3. Let a = (−3, −4, 5, −9). What is the length of a?


2.2 Product of two vectors

Scalar product. Let a = (a1, a2, . . . , an) and b = (b1, b2, . . . , bn) be two vectors of the same size. The scalar product of a and b, denoted as a·b, is given by a·b = a1b1 + a2b2 + · · · + anbn. Note that the result of the scalar product is a number. There are alternate names for this product: inner product, dot product. Sometimes the vectors can be column vectors or one of each type. The important thing is that they have the same number of components.

Example 2.3. Let

a = [ 1 ]        b = [ −1 ]
    [ 2 ]            [  3 ]
    [ 3 ]            [  3 ]

Let us calculate a·b.

Solution: a·b = (1)(−1) + (2)(3) + (3)(3) = −1 + 6 + 9 = 14.

Example 2.4. Let

a = [ 4 ]
    [ 2 ]
    [ 1 ]
    [ 3 ]

and b = (2, −2, 1, 0). Calculate the dot product.

Solution: a·b = (4)(2) + (2)(−2) + (1)(1) + (3)(0) = 8 − 4 + 1 + 0 = 5.
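For illustration only (our own sketch, assuming numpy is available), the same dot product can be computed as follows.

import numpy as np

a = np.array([4, 2, 1, 3])
b = np.array([2, -2, 1, 0])

print(np.dot(a, b))   # 4*2 + 2*(-2) + 1*1 + 3*0 = 5
print(a @ b)          # the @ operator gives the same result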

Theorem 2.5. Let a, b and c be n-vectors and let α be a scalar. The following rules hold:

a·0 = 0.

a·b = b·a. (Commutative law)

a·(b + c) = a·b + a·c. (Distributive law)

(αa)·b = α(a·b).

Exercises

1. Calculate the scalar product of the two vectors:
(−2, −1, −5, 2) and (−4, −1, −4, 4);
(3, 8, 0) and (−8, 0, −7);
(8, 1, −3) and (3, −4, 7).

2. Calculate the scalar product of the two vectors:

[ x ]       [ y ]
[ y ]  and  [ z ]
[ z ]       [ x ]

3. Let a be a k-vector. Show that a·a ≥ 0.

4. Perform the indicated computations with

a = [  7 ]        b = [  1 ]        c = [  0 ]
    [ −3 ]            [  2 ]            [ −2 ]
    [ −8 ]            [ −6 ]            [  5 ]

2a·3b,   a·(b + c),   (2b)·(3c − 5a),   (a − c)·(3b − 4a)

Vector product. One suspects that there might be a meaningful way to associate a vector to a pair of vectors. The notion of a vector product exists in three dimensions. We postpone its definition to Section 5 and make use of it in Section 6.

2.3 Vectors in space

The most typical usage of vectors happens in three dimensions, in the real space where we live. It is natural that we can describe the exact place of a point-like object using three coordinates. For that, we usually use a Cartesian coordinate system with three axes. When we are given only two vectors, they always lie in a plane, although they have three coordinates. Therefore, we can use our knowledge of Euclidean Geometry. For instance, the law of cosines is a generalisation of the Pythagorean theorem: in a triangle with side lengths a, b and c, where the angle opposite to the side of length c is γ, the following holds:

c^2 = a^2 + b^2 − 2ab·cos γ.


Theorem 2.6. In three-dimensional space, the dot product of a and b can also be calculated as follows: a·b = |a||b| cos γ, where γ is the angle between the two vectors.

Example 2.7. Let a = (1, 0, 2) and b = (2, 5, 3) be two vectors in space.

a. Draw the two vectors in a 3-dimensional Cartesian coordinate system.

b. Calculate the vector 2a + 7b.

c. What is the length of the vectors a and b?

d. Determine the angle between a and b.

e. Give the opposite of a, a vector parallel to a and one perpendicular to a.

f. What is the unit vector parallel to a?

g. Calculate the vectors of length 3 and length 1/2 parallel to a.

Solution:

b. 2a + 7b = 2·(1, 0, 2) + 7·(2, 5, 3) = (2, 0, 4) + (14, 35, 21) = (16, 35, 25).

c. The length of a: |a| = √(a1^2 + a2^2 + a3^2) = √(1^2 + 0^2 + 2^2) = √5.

The length of b: |b| = √(b1^2 + b2^2 + b3^2) = √(2^2 + 5^2 + 3^2) = √38.

d. Let us denote by γ the angle between a and b. Now

cos γ = a·b / (|a||b|) = (1·2 + 0·5 + 2·3) / (√5·√38) = 8/√190.

Hence γ = arccos(8/√190).

e. The opposite of a is −a = (−1, 0, −2).

Any vector parallel to a is a scalar multiple of a. For instance, 3a = (3, 0, 6) or −0.7a = (−0.7, 0, −1.4).

One way of determining the vectors perpendicular to a is the following: any such vector x = (x1, x2, x3) has inner product 0 with a. Therefore, the following equation holds:

1·x1 + 0·x2 + 2·x3 = 0.

Since x2 disappears, we might choose x2 and one other coordinate of x arbitrarily and then calculate the third one using the above equation. Let x1 = 5 and x2 = 10. Now 1·5 + 0·10 + 2·x3 = 0. That is, x3 = −5/2 and (5, 10, −5/2) is a vector perpendicular to a.

f. The unit vector in the same direction as a is the following: u = a/|a| = (1/√5)(1, 0, 2) = (1/√5, 0, 2/√5).

g. We calculate this using the previous answer. The vector of length 3 parallel to a is 3·u = (3/√5, 0, 6/√5). Similarly, the vector of length 1/2 parallel to a is (1/2)·u = (1/(2√5), 0, 1/√5).
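As an illustration only (our own sketch, assuming numpy is available), the lengths, the angle and the unit vector from Example 2.7 can be computed as follows.

import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([2.0, 5.0, 3.0])

len_a = np.linalg.norm(a)             # sqrt(5)
len_b = np.linalg.norm(b)             # sqrt(38)
cos_gamma = a @ b / (len_a * len_b)   # 8 / sqrt(190)
gamma = np.arccos(cos_gamma)          # the angle in radians
u = a / len_a                         # unit vector parallel to a

print(len_a, len_b, gamma, u)
print(3 * u, 0.5 * u)                 # vectors of length 3 and 1/2 parallel to a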

Example 2.8. Let v = (3, −1, 2) and a = (1, 1, −2).

a/ Find the projection of v on a.

b/ Split the vector v into components parallel and perpendicular to a.

Solution:

a/ We have to use the following formula:

proj_a v = (v·a / |a|^2) a.

Now |a| = √(a1^2 + a2^2 + a3^2) = √(1^2 + 1^2 + (−2)^2) = √6 and v·a = 3·1 − 1·1 − 2·2 = −2.

Therefore, proj_a v = −(1/3)a = (−1/3, −1/3, 2/3).

b/ By definition, the component parallel to a is proj_a v, which we determined in the previous part. On the other hand, the component perpendicular to a is v − proj_a v = (3, −1, 2) − (−1/3, −1/3, 2/3) = (10/3, −2/3, 4/3).
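Again only as an illustration (our own sketch, assuming numpy), the projection of Example 2.8:

import numpy as np

v = np.array([3.0, -1.0, 2.0])
a = np.array([1.0, 1.0, -2.0])

proj = (v @ a) / (a @ a) * a   # (v.a / |a|^2) a = (-1/3, -1/3, 2/3)
perp = v - proj                # (10/3, -2/3, 4/3)

print(proj, perp)
print(perp @ a)                # 0 (up to rounding): the two components are orthogonal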

Exercises

1. Let u = (2, 3, −1) and v = (0, −1, 4) be two vectors in space.

(a) Draw the two vectors in a 3-dimensional Cartesian coordinate system.

(b) Calculate the vector 2v − 3u.

(c) What is the length of the vectors u and v?

(d) Determine the angle between u and v.

(e) Give the opposite of v, a vector parallel to v and one perpendicular to v.

(f) What is the unit vector parallel to v?

(g) Calculate the vectors of length 4 and length 1/3 parallel to v.

2. Let v = (4, 7, 9) and a = (2, −1, 3).

(a) Find the projection of v on a.

(b) Split the vector v into components parallel and perpendicular to a.


3 Linear independence, dimension in vector spaces

If a and b are two vectors that have the same number of components, then any vector of the form µa + λb is a linear combination of a and b, where µ and λ are real numbers. Notice that µ and λ can be zero as well. Therefore, all linear combinations of two vectors usually constitute a plane. The set of all linear combinations is also called the span.

Example 3.1. Let a = (−2, 3, −4) and b = (1, 1, 5). Can one write x = (5, 3, 1) or y = (−5, 5, −13) as a linear combination of a and b?

Solution: We write a linear combination as follows: αa + βb. This expression gives us the following equations for the coordinates of x = (5, 3, 1):

−2α + β = 5
3α + β = 3
−4α + 5β = 1

Solving the first two equations for α and β, we get α = −0.4 and β = 4.2. However, these values give a contradiction in the third equation. Therefore x does not belong to the span of a and b.

For y = (−5, 5, −13), we get the following system:

−2α + β = −5
3α + β = 5
−4α + 5β = −13

Solving this, we get α = 2 and β = −1. Therefore y = 2a − b.
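A small computational sketch of the same check (our own illustration, assuming numpy): we solve the first two coordinate equations for α and β, which is enough here because those two equations already determine them, and then test the third coordinate.

import numpy as np

a = np.array([-2.0, 3.0, -4.0])
b = np.array([1.0, 1.0, 5.0])

def in_span(target):
    # solve the 2x2 system given by the first two coordinates
    M = np.column_stack((a[:2], b[:2]))
    alpha, beta = np.linalg.solve(M, target[:2])
    # the vector lies in the span iff the third coordinate also matches
    return np.isclose(alpha * a[2] + beta * b[2], target[2]), (alpha, beta)

print(in_span(np.array([5.0, 3.0, 1.0])))      # (False, (-0.4, 4.2))
print(in_span(np.array([-5.0, 5.0, -13.0])))   # (True, (2.0, -1.0))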

We can generalise linear combinations to any number of vectors. Let v1, v2, . . . , vk be a set of vectors that have the same number of components. If α1, α2, . . . , αk are real numbers, then α1v1 + α2v2 + . . . + αkvk is a linear combination of v1, v2, . . . , vk. The set of all linear combinations of a set of vectors S is the span of S. Another very important concept is linear independence. We say that the set of vectors {v1, v2, . . . , vk} is linearly independent if the only solution to the equation α1v1 + α2v2 + . . . + αkvk = 0 is the trivial solution α1 = α2 = · · · = αk = 0. Equivalently, we could define linear dependence, and sometimes this is useful. We say that a non-zero vector x is linearly dependent on the vectors {v1, v2, . . . , vk} if the equation x = α1v1 + α2v2 + . . . + αkvk has a solution. This automatically means that at least one of the coefficients is non-zero.

If we have an arbitrary set of vectors, then the rank of the set is the maximum number of linearly independent vectors in the set. A set H of vectors forms a generating set of V if every vector v ∈ V can be written as a linear combination of some vectors in H; that is, H spans V. As we will see, our goal is to have larger and larger independent sets of vectors and, similarly, smaller and smaller sets of vectors generating the same set.

We are working with sets of vectors. Some of these collections of vectors form a closed, compact structure. We have already seen that certain calculations with 2- or 3-component vectors are convenient. We formalize this in the following fundamental notion.

Definition 3.2. A real vector space V is a set of vectors, together with two operations: addition and scalar multiplication, where the scalars are real numbers. The set of vectors must satisfy certain nice properties:

it is closed under addition and scalar multiplication;
addition is associative and commutative;
there is a zero vector (additive identity);
scalar multiplication is associative;
there is a multiplicative identity;
there are two distributive laws.

Example 3.3. Let V = {(1)}. That is a single vector with one component. Is it a vector space?

Solution: This is not a vector space, since it is not closed under addition: (1) + (1) = (2) ∉ V.

Example 3.4. Let V = {(0, 0)}. That is a single vector with two coordinates, each of them being 0.

Solution: This is a vector space! We can check all properties. For instance, (0, 0) + (0, 0) = (0, 0). Also for any real number α, we get α(0, 0) = (0, 0). There is a multiplicative identity, for instance 2.


Example 3.5. The set of points in R^2 that lie on a line passing through the origin constitutes a vector space.

Solution: We can check all properties. For instance, it is closed under scalar multiplication, since the scalar multiples of a vector are parallel to the original one. For the existence of the additive identity, it is important that the line goes through the origin. Points of other lines do not form a vector space.

Let H = {v1, v2, . . . , vk} be a set of n-vectors. If H is linearly independent and spans all vectors of a vector space V, then H is a basis of V. In most cases, we work in the vector space R^n, that is, the set of all real vectors with n components. This particular vector space has the following standard basis: (1, 0, . . . , 0), (0, 1, 0, . . . , 0), . . . , (0, 0, . . . , 0, 1). Here each vector contains n − 1 zeroes and one 1.

We denote the elements of the standard basis by ei, where the index i shows the position of the 1.

For instance, v = (a, b, c) ∈ R^3 can be written as the linear combination of e1, e2 and e3 as follows: v = a·e1 + b·e2 + c·e3.

Proposition 3.6. If {v1, v2, . . . , vk} is a basis of V, then every vector v ∈ V can be uniquely written as α1v1 + · · · + αkvk, where αi ∈ R.

If the vector space V has a finite basis, then the number of elements in the basis is the dimension of V. There exist vector spaces such that the number of elements in the basis is infinite; in that case the dimension is ∞. The vector space consisting of the sole zero vector has dimension 0.

Lemma 3.7 (Steinitz exchange lemma). If L = {v1, v2, . . . , vk} is a set of linearly independent vectors in the vector space V and G = {g1, g2, . . . , gn} spans V, then k ≤ n and, possibly after reordering the gi, the set {v1, v2, . . . , vk, gk+1, . . . , gn} spans V.

Proof idea: We use induction. The inductive step is the following: for any vector v ∈ L, there exists a vector g ∈ G such that (G \ {g}) ∪ {v} spans V.

Corollary 3.8.

(i) If L is a set of linearly independent vectors in a vector space V and G spans V, then for any v ∈ L, there exists a g ∈ G such that (L \ {v}) ∪ {g} is linearly independent. (elementary basis exchange)

(ii) If L is a set of linearly independent vectors, B a basis and G spans the vector space V, then |L| ≤ |B| ≤ |G|.

(iii) Any two bases of a fixed vector space V must have the same number of elements. This number is the dimension of V.

(iv) Any n independent vectors of a vector space of dimension n form a basis.

Let B = {b1, b2, . . . , bn} be a basis, and let v be an arbitrary vector in V. We know that v = α1b1 + · · · + αnbn for some α1, . . . , αn, the coefficients (or coordinates) with respect to the basis B.

After the previous lemma and its corollaries, we face the following question. What happens if we change a basis B1 to another basis B2? If we know the coefficients with respect to basis B1, how can we calculate the coefficients with respect to the new basis B2? This is given by an algorithm, the basis exchange process. Each step is an elementary basis exchange, whose calculations coincide with those of an elimination step of the Gauss-Jordan elimination. We illustrate this in the following example.

Example 3.9. Let a1 = (2, 3, 1, 4), a2 = (1, 1, 2, 2), a3 = (0, 0, 1, 1) and a4 = (3, −1, −2, −4). Do a1, a2, a3, a4 form a basis in R^4? If they do, then determine the coefficients of v = (9, 1, 2, −2) with respect to this basis.

Solution: We start with the canonical basis of R^4. In each step, we try to include a new vector from our set into the basis using the exchange lemma. The starting table looks like this:

basis a1 a2 a3 a4 v

e1 2 1 0 3 9

e2 3 1 0 −1 1

e3 1 2 1 −2 2

e4 4 2 1 −4 −2

In each step, we select a non-zero entry, preferably a 1, and perform an elementary exchange. We get the following series of tables.


basis   a1   a2   a3   a4     v
a2       2    1    0    3     9
e2       1    0    0   −4    −8
e3      −3    0    1   −8   −16
e4       0    0    1  −10   −20

→

basis   a1   a2   a3   a4     v
a2       0    1    0   11    25
a1       1    0    0   −4    −8
e3       0    0    1  −20   −40
e4       0    0    1  −10   −20

→

basis   a1   a2   a3   a4     v
a2       0    1    0   11    25
a1       1    0    0   −4    −8
a3       0    0    1  −20   −40
e4       0    0    0   10    20

→

basis   a1   a2   a3   a4     v
a2       0    1    0    0     3
a1       1    0    0    0     0
a3       0    0    1    0     0
a4       0    0    0    1     2

Since we were able to include all four vectors, the set {a1, a2, a3, a4} forms a basis in R^4. The last column of the table shows that v = 3a2 + 2a4.
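As a numerical cross-check (our own illustration, assuming numpy), the coordinates of v in the new basis can also be obtained by solving a linear system whose columns are the basis vectors.

import numpy as np

# columns are a1, a2, a3, a4
A = np.array([[2, 1, 0, 3],
              [3, 1, 0, -1],
              [1, 2, 1, -2],
              [4, 2, 1, -4]], dtype=float)
v = np.array([9, 1, 2, -2], dtype=float)

print(np.linalg.solve(A, v))   # expected: [0. 3. 0. 2.], i.e. v = 3*a2 + 2*a4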

We can use the basis exchange process for various other purposes. For instance, we can determine the rank of a set of vectors, S say. In each step, we try to include a vector of S. This process might stop before we empty S: rows filled with 0 cannot be used for further exchanges, and we always need a non-zero entry below the vector we plan to include. Therefore, if we can include at most k vectors, then the rank is k.

Example 3.10. Let a1 = (1, 1, 2), a2 = (2, 1, 0), a3 = (0, 1, 1), a4 = (8, 5, 4). We form two sets of vectors H1 = {a1, a2, a3}, H2 = {a1, a2, a4}. Determine whether H1 and H2 are linearly independent.

Solution: We include the vectors into the basis exchange table. We try to perform elementary basis exchanges to include the vectors of H1 or H2. We start with a1 and a2, and see whether the third exchange is possible.

basis a1 a2 a3 a4

e1 1 2 0 8

e2 1 1 1 5

e3 2 0 1 4

−→

basis a1 a2 a3 a4

e1 0 1 −1 3

a1 1 1 1 5

e3 0 −2 −1 −6

−→

basis a1 a2 a3 a4

a2 0 1 −1 3

a1 1 0 2 2

e3 0 0 −3 0

−→

basis a1 a2 a3 a4

a2 0 1 0 3

a1 1 0 0 2

a3 0 0 1 0

As we can read off, H1 is linearly independent. However, a4 = 2a1 + 3a2. Therefore, H2 is a set of linearly dependent vectors.

Example 3.11. Let a1 = (1, 0, 2), a2 = (2, 1, 5), a3 = (−1, −1, −3), a4 = (5, 2, 12), a5 = (4, 2, 10), and let H = {a1, a2, a3, a4, a5}.

Determine the rank of H.

Are there two linearly independent vectors in H? And two linearly dependent ones?

Can we add a new vector to H such that the rank increases?

Solution: The rank is given by the maximum number of vectors that can be included in the basis during the basis transformation. We start with the following table and perform elementary basis exchanges:

basis a1 a2 a3 a4 a5

e1 1 2 −1 5 4

e2 0 1 −1 2 2

e3 2 5 −3 12 10

−→

basis a1 a2 a3 a4 a5

a1 1 2 −1 5 4

e2 0 1 −1 2 2

e3 0 1 −1 2 2

−→

basis a1 a2 a3 a4 a5

a1 1 0 1 1 0

a2 0 1 −1 2 2

e3 0 0 0 0 0

We included two vectors of H, therefore the rank is 2.

Now {a1, a2} is a linearly independent set and {a2, a5} is a linearly dependent set.


Every vector that is not a linear combination of a1 and a2 increases the rank. One such vector is (0, 0, 1), since {a1, a2, e3} forms a basis.
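For illustration only (our own sketch, assuming numpy), the rank computations of Example 3.11:

import numpy as np

H = np.array([[1, 0, 2],
              [2, 1, 5],
              [-1, -1, -3],
              [5, 2, 12],
              [4, 2, 10]], dtype=float)   # one vector of H per row

print(np.linalg.matrix_rank(H))                            # 2
print(np.linalg.matrix_rank(np.vstack([H, [0, 0, 1]])))    # 3: adding (0,0,1) raises the rank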

In the typical vector space R^n, there might be a set V of vectors that is closed with respect to addition and scalar multiplication; that is, V itself satisfies the properties of a vector space. In that case we call V a subspace. We have already encountered this phenomenon in Example 3.5. Actually, in any dimension, the multiples of a fixed non-zero vector form a 1-dimensional subspace. The dimension of a subspace V is the number of elements in a basis of V. A vector space V is the direct sum of its two subspaces V1 and V2 if and only if every vector v of V can be uniquely written as the sum of two vectors v1 ∈ V1 and v2 ∈ V2. Necessarily, the intersection of V1 and V2 must contain only the zero vector of V. Also, the dimension n1 of V1 and n2 of V2 must satisfy n1 + n2 = n, where n is the dimension of V.

Example 3.12. Determine the dimension of the subspaces below and give a basis for each of them.

V1 = {λ1(1, −2, 3) + λ2(1, 0, 1) : λ1, λ2 ∈ R},   V2 = {λ(1, 0, 0) : λ ∈ R}.

Is it true that V1 ⊕ V2 = R^3? If so, split the vector x = (4, −2, 5) into components in V1 and V2.

Solution: The dimension of V1 is 2, and (1,−2,3),(1,0,1) is a basis. The dimension of V2 is 1, and (1,0,0) is a basis.

Now we have to check whether their union is independent. Write b1 = (1, −2, 3), b2 = (1, 0, 1) and b3 = (1, 0, 0). We include them in a basis exchange table and solve it by the standard method. We include x to determine the coefficients in the possible basis. Since e1 = b3, we can make this exchange for free.

basis   b1   b2   b3    x
b3       1    1    1    4
e2      −2    0    0   −2
e3       3    1    0    5

→

basis   b1   b2   b3    x
b3      −2    0    1   −1
e2      −2    0    0   −2
b2       3    1    0    5

→

basis   b1   b2   b3    x
b3       0    0    1    1
b1       1    0    0    1
b2       0    1    0    2

We found that indeed b1, b2, b3 is a basis and x = b1 + 2b2 + b3. Therefore the V1-component is b1 + 2b2 = (3, −2, 5) and the V2-component is b3 = (1, 0, 0).

Exercises

1. Let a = (2, −3), b = (0, 5). Can we get c = (−2, 23) as a linear combination of a and b?

2. Let a = (5, 4, −2, 3), b = (2, 0, −1, 5), c = (3, 0, 4, −6). Can we get x = (6, 4, 0, 19) as a linear combination of a, b, and c?

3. Let a = (−1, 2, 0), b = (3, 5, 2), and c = (−2, 1, 4). Let H = {a, b, c}. How can we get the zero vector of R^3 from the vectors of H? Is H linearly independent?

4. Let a1 = (1, 3, 2), a2 = (2, 1, 5), a3 = (3, 4, 2). Do a1, a2, a3 form a basis of R^3? If yes, calculate the coordinates of v = (14, 17, 18) with respect to this basis.

5. Let a = (2, 4, −8), b = (−5, −9, 18), c = (7, 2, −7). Let H = {a, b, c}. How can we get the zero vector using the elements of H? Is H linearly independent? Let x = (0, 1, −12) and y = (2, 2, 2). Can we get x or y as a linear combination of a and b?

6. Let H1 = {(1, 1, 1), (1, 1, 0)}, H2 = {(1, 1, 1), (1, 1, 0), (1, 0, 0)} and H3 = {(1, 1, 1), (1, 1, 0), (1, 0, 0), (0, 1, 1)}. Check the following features for each vector set: linear independence, basis, generating set.

7. Let a1 = (1, 2, 4), a2 = (−3, 1, 2), a3 = (−2, 3, 6), a4 = (−1, 5, 10), a5 = (4, 1, 2), and let H = {a1, a2, a3, a4, a5}. What is the rank of H? Add a non-zero vector to H without changing the rank.

8. Let V1 = {(t, t, t) ∈ R^3 : t ∈ R} and V2 = {λ1(1, 0, 2) + λ2(−1, 3, 0) : λ1, λ2 ∈ R}. Show that V1 ⊕ V2 = R^3. Split the vector v = (1, 10, 2) into components in V1 and V2.


4 Matrices

An m×n matrix A is a rectangular array of mn numbers arranged in m rows and n columns:

A = [ a11  a12  . . .  a1j  . . .  a1n ]
    [ a21  a22  . . .  a2j  . . .  a2n ]
    [  .    .           .           .  ]
    [ ai1  ai2  . . .  aij  . . .  ain ]
    [  .    .           .           .  ]
    [ am1  am2  . . .  amj  . . .  amn ]

We call the row vector (ai1, ai2, . . . , ain) row i and the column vector

[ a1j ]
[ a2j ]
[  .  ]
[ amj ]

column j. The ij-element or ijth component of A is aij. In short, we write A = (aij), and we might specify the range of indices: 1 ≤ i ≤ m and 1 ≤ j ≤ n. When m = n, the matrix is a square matrix. The entries of the form aii form the main diagonal of the matrix A = (aij).

Given a matrix A, we might interchange rows and columns to get the transpose of A. In notation, A^T = (aji) if A = (aij).

A square matrix is upper triangular if all its entries below the main diagonal are zero. It is lower triangular if all entries above the main diagonal are zero. A matrix is diagonal if all its non-zero entries lie in the main diagonal. In other words: A = (aij) is upper triangular if aij = 0 for i > j, lower triangular if aij = 0 for i < j, and diagonal if aij = 0 for i ≠ j.

Exercises

1. Calculate the entries of a 5×5 matrix such that the ij-element is i^2 − 3j.

2. Calculate the entries of a 6×6 matrix such that the ij-element is −3i − 4j.

3. Calculate the entries of a 6×6 matrix such that the ij-element is i + j (mod 6).

4. Determine the 3×4 matrix A such that aij = 3i − j. Give the transpose of A.

4.1 Matrix operations

Addition of Matrices. Let A and B be two matrices of the same size m×n. We define the m×n matrix A + B by adding the corresponding elements of A and B.

Example 4.1. The sum of two matrices.

[ 2  3 −1  4 ]   [  1  0  3 −3 ]   [ 3  3  2  1 ]
[ 5 −3  0  2 ] + [ −1  2  3 −1 ] = [ 4 −1  3  1 ]
[ 1  2  1  0 ]   [  2  0 −1  5 ]   [ 3  2  0  5 ]

Multiplication by a scalar. Let A = (aij) be an m×n matrix and β a real number (scalar). Then the m×n matrix βA is given such that the ij-element of βA is βaij.

Example 4.2. Let

A = [  4  0  2 ]        B = [  3  2  1 ]
    [ −1  3  1 ]            [ −1  0  2 ]

Let us calculate 3A − 2B.

Solution:

3A − 2B = 3 [  4  0  2 ] − 2 [  3  2  1 ] = [ 12  0  6 ] − [  6  4  2 ] = [  6 −4  4 ]
            [ −1  3  1 ]     [ −1  0  2 ]   [ −3  9  3 ]   [ −2  0  4 ]   [ −1  9 −1 ]


Exercises

1. Let

A = [ −3  6 −8 ]        B = [  1 −7  5 ]
    [ −6  4  1 ]            [ −3 −9 −5 ]
    [ −9  2  0 ]            [  4  2  4 ]

Calculate −9A + 6B.

2. Let

A = [  1  2  4 ]        B = [ 4  0  5 ]
    [ −7  3 −2 ]            [ 1 −3  6 ]

Calculate −2A + 3B.

4.2 Matrix multiplication

A matrix can be thought of as a collection of row vectors and similarly column vectors. Therefore, it comes as no surprise that matrix multiplication is an iterated scalar product of vectors.

Product of two matrices. Let A = (aij) be an m×n matrix, and let B = (bij) be an n×p matrix. The product of A and B is an m×p matrix C = (cij), where

cij = (ith row of A)·(jth column of B).

We can expand this to cij = ai1b1j + ai2b2j + · · · + ainbnj.

Notice that two matrices can be multiplied together only if the number of columns in the first matrix equals the number of rows in the second matrix.

Example 4.3. If

A = [  1  3 ]        B = [ 3 −2 ]
    [ −2  4 ]            [ 5  6 ]

calculate AB and BA.

Solution: Since A is a 2×2 matrix and so is B, their product C = AB is a 2×2 matrix. If C = (cij), then we calculate c11 as the dot product of the first row of A and the first column of B. Thus c11 = 3 + 15 = 18.

Similarly, c12 is the dot product of the first row of A and the second column of B. Thus c12 = −2 + 18 = 16.

Continuing, we find c21 = −6 + 20 = 14 and c22 = 4 + 24 = 28. Therefore,

AB = [ 18  16 ]
     [ 14  28 ]

Similarly, we calculate BA to get

[ 3 −2 ] [  1  3 ]   [  7   1 ]
[ 5  6 ] [ −2  4 ] = [ −7  39 ]

This shows the important fact that matrix products do not commute in general.
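The two products above are easy to reproduce numerically; here is a small illustration of our own, assuming numpy is available.

import numpy as np

A = np.array([[1, 3],
              [-2, 4]])
B = np.array([[3, -2],
              [5, 6]])

print(A @ B)   # [[18 16]
               #  [14 28]]
print(B @ A)   # [[ 7  1]
               #  [-7 39]]  -- a different matrix, so AB != BA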

Example 4.4. Let

A = [ 1  1  2  0 ]
    [ 3  1  2  1 ]
    [ 0  2  1  2 ]

Determine the rank of A.

Solution: The rank of a matrix equals the rank of the vector set formed by the columns of the matrix.

We use the basis transformation process to find the rank.

basis a1 a2 a3 a4

u1 1 1 2 0

u2 3 1 2 1

u3 0 2 1 2

−→

basis a1 a2 a3 a4

a1 1 1 2 0

u2 0 −2 −4 1

u3 0 2 1 2

−→

basis a1 a2 a3 a4

a1 1 1 2 0

a4 0 −2 −4 1

u3 0 6 9 0

Now we can include a2 in the basis, and the rank is 3.

In the class of square matrices, the following matrix plays an important role. Let In denote the n×n identity matrix, consisting of 1s in the main diagonal and 0 everywhere else. It has the property that InA = AIn = A for every n×n matrix A. In the class of n×n matrices, In plays a role similar to the one 1 plays in the class of rational numbers. Therefore, it is important to ask the following question.


Given an n×n matrix A, is there another n×n matrix B such that AB = BA = In? If the answer is yes, B is the inverse of A, denoted as A^(−1). However, this is not always the case; some matrices do not have an inverse. There are several methods to calculate the inverse. We use one here, based on the basis exchange process. The key idea is the following. In the starting table, we use the columns of A, and we extend this by the vectors of the standard basis. This second part looks exactly like In. Now we try to include all n column vectors in the basis by the basis exchange process. If we succeed, then the left part of the table is In, or it can be made so by interchanging the rows. In that case, the extended part of the table shows us the inverse of A. In this case the rank of A was n. However, if the rank of A is smaller, then we cannot include all vectors in the basis. Therefore, A does not have an inverse. It also means that the determinant of A is 0. Let us see a small example.

Example 4.5. Let

A = [  2 −3 ]
    [ −4  5 ]

Compute the inverse of A if it exists.

Solution: We use the basis transformation process.

basis    a1     a2     u1     u2
u1        2     −3      1      0
u2       −4      5      0      1

→

basis    a1     a2     u1     u2
a1        1   −3/2    1/2      0
u2        0     −1      2      1

→

basis    a1     a2     u1     u2
a1        1      0   −5/2   −3/2
a2        0      1     −2     −1

We can check the solution:

[ −5/2  −3/2 ] [  2 −3 ]   [ 1  0 ]
[  −2    −1  ] [ −4  5 ] = [ 0  1 ]
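For comparison only (our own sketch, assuming numpy), the built-in inverse gives the same matrix.

import numpy as np

A = np.array([[2.0, -3.0],
              [-4.0, 5.0]])

A_inv = np.linalg.inv(A)
print(A_inv)       # [[-2.5 -1.5]
                   #  [-2.  -1. ]]
print(A @ A_inv)   # the 2x2 identity matrix (up to rounding)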

Example 4.6. Let

A = [  1  2 ]
    [ −2 −4 ]

Compute the inverse of A if it exists.

Solution: We use the basis transformation process.

basis a1 a2 u1 u2

u1 1 2 1 0

u2 −2 −4 0 1

−→

basis a1 a2 u1 u2

a1 1 2 1 0

u2 0 0 2 1

This is as far as we can go. We deduce that the rank of A is 1, therefore A is not invertible.

Exercises

1. Compute the following:

[ 1  4 ] [ 3 −6 ] [  1  0 ]
[ 0  2 ] [ 2  4 ] [ −2  3 ]

2. Let

A = [  2 −3 −5 ]        B = [ −1  3  5 ]        C = [  2 −2 −4 ]
    [ −1  4  5 ]            [  1 −3 −5 ]            [ −1  3  4 ]
    [  1 −3 −4 ]            [ −1  3  5 ]            [  1 −2 −3 ]

Compute AB, AC and CA. Explain what you get.

3. Let

A = [ 1  2 ]        B = [  6  7  8  9 ]        C = [ −6 −5 −4 −3 ]
    [ 3  4 ]            [ 10 11 12 13 ]            [ −2 −1  0  1 ]
    [ 5  0 ]                                       [  2  3  4  5 ]

Calculate the following products, if they exist: AB, BA, CB^T, BC.


4. Let

A = [ 1 −1  0 ]        B = [ 2  4 −3 ]        C = [ −1  1 ]
    [ 2  0  3 ]            [ 1 −1  2 ]            [  0  3 ]
                           [ 3 −2  4 ]            [  2  2 ]
                                                  [  4 −1 ]

Calculate the following products, if they exist: AC, CA, A^T B^T, AB, C^T B, BA^T.

5. Let

A := [ 2 −5  4 ]        B := [  3  1  0 ]        C := [  2 ]
                             [ −2  2  5 ]              [ −4 ]
                             [  4  1 −3 ]              [  7 ]

Calculate the following products: A(BC) and (AB)C.

6. Let

A = [ 1 −3 ]        B = [ 2 −1  4 ]        C = [  0 −2  1 ]
    [ 0  2 ]            [ 3  1  5 ]            [  4  3  2 ]
                                               [ −5  0  6 ]

Calculate the following products: AB, BC, A(BC) and (AB)C.

7. Let

A = [ 1 −3  4 ]
    [ 2 −5  7 ]
    [ 0 −1  1 ]

Calculate A^(−1) if it exists.

8. Let

A = [ 3  2  1 ]
    [ 4  3  1 ]
    [ 3  4  1 ]

Calculate A^(−1) if it exists.

9. Let

A = [ 1  2  3 ]
    [ 0  1  4 ]
    [ 5  6  0 ]

If A is invertible, find the inverse!

10. Let

B := [  3  1  0 ]
     [ −2  2  5 ]
     [  4  1 −3 ]

Is B invertible? If yes, determine the inverse matrix!

11. Let

A := [ 1  0  0 ]        B := [ 3  1  0 ]
     [ 0  1  0 ]             [ 1 −1  2 ]
                             [ 1  1  1 ]

Is A^T or B invertible? If yes, determine the inverse matrix!


5 Determinants

Determinants were first used to determine the solution of a system of linear equations. However, we first learnt about matrices and now it is easier to imagine that the determinant is a number associated to a matrix. First of all, if the matrix is diagonal, then its determinant is the product of the elements in the main diagonal.

Example 5.1. Let

A = [ 3   0    0  ]
    [ 0  −2    0  ]
    [ 0   0   1/2 ]

then det(A) = −3.

Also, if the matrix is upper (or lower) triangular, then its determinant is the product of the elements in the main diagonal.

Example 5.2. Let

B = [  3    0    0  ]
    [ 12   −2    0  ]
    [  π   1/e  1/2 ]

then det(B) = −3.

To distinguish determinants from matrices, we use vertical lines around the array of numbers when we denote a determinant, as opposed to brackets in the case of matrices. However, the same elementary row operations that we used for matrices (in Section 1) can be used for calculating the determinant. The following rules apply.

(i) The determinant is unchanged if we add a multiple of a row (column) to another row (column).

(ii) We can factor out a common divisor from any row or column.

(iii) If we interchange two rows (columns), the sign of the determinant changes.

In what follows, we can use the following strategy to calculate a determinant: we use elementary row operations to transform our original matrix to a triangular matrix. At the end, we easily calculate the final determinant.

Example 5.3. Calculate the determinant

      | 1  3  5  2 |
|A| = | 0 −1  3  4 |
      | 2  1  9  6 |
      | 3  2  4  8 |

Solution: There is already a 0 in the first column, so it is simplest to reduce the other elements of the first column to 0. We continue aiming for an upper triangular matrix.

      | 1  3  5  2 |
|A| = | 0 −1  3  4 |
      | 2  1  9  6 |
      | 3  2  4  8 |

Multiply the first row by −2 and add it to the third row, and multiply the first row by −3 and add it to the fourth row. Then multiply the second row by −5 and −7 and add it to the third and fourth rows, respectively.

    | 1  3    5    2 |
  = | 0 −1    3    4 |
    | 0  0  −16  −18 |
    | 0  0  −32  −26 |

Subtract the third row twice from the fourth.

    | 1  3    5    2 |
  = | 0 −1    3    4 |
    | 0  0  −16  −18 |
    | 0  0    0   10 |

Now we have an upper triangular matrix and |A| = (−1)(−16)·10 = 160.
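A quick numerical check (our own illustration, assuming numpy is available):

import numpy as np

A = np.array([[1, 3, 5, 2],
              [0, -1, 3, 4],
              [2, 1, 9, 6],
              [3, 2, 4, 8]], dtype=float)

print(np.linalg.det(A))   # approximately 160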

Example 5.4. Calculate the determinant

      |  1 −2  3 −5  7 |
      |  2  0 −1 −5  6 |
|A| = |  4  7  3 −9  4 |
      |  3  1 −2 −2  3 |
      | −5 −1  3  7 −9 |


Solution: Adding row 2 and then row 4 to row 5, we obtain

      |  1 −2  3 −5  7 |
      |  2  0 −1 −5  6 |
|A| = |  4  7  3 −9  4 | = 0.
      |  3  1 −2 −2  3 |
      |  0  0  0  0  0 |

This illustrates that a little looking before doing all the computations can simplify matters considerably.

Example 5.5. Let

    | −3 −3  6 |      |  1  1 −2 |      |  1    1  −2 |      |  1    1   −2 |
C = | −5  7  0 | = −3 | −5  7  0 | = −3 | −5    7   0 | = −3 |  0   12  −10 | =
    | −5 −3 −3 |      | −5 −3 −3 |      |  0  −10  −3 |      |  0  −10   −3 |

     |  1    1   −2 |      |  1   1   −2 |
= −3 |  0    2  −13 | = −3 |  0   2  −13 | = (−3)·2·(−68) = 408.
     |  0  −10   −3 |      |  0   0  −68 |

Example 5.6.

|  2  −3    5 |
|  1   7    2 | = 0, since the third row is −2 times the first row.
| −4   6  −10 |

So far we calculated the determinant without actually defining it. We fill this gap now; it should be apparent why this delay was useful. A rook placement in a matrix of order n is a set of n entries, one from each row and column. Given a matrix A of order n, we define the determinant as a signed sum of the products of entries in all rook placements. The number of rook placements is n!, which grows very quickly with n. The sign of a product (in the evaluation of the determinant) is calculated as follows. To the entries of a rook placement, we associate a permutation. For instance, the rook placement {a11, a22, . . . , a88} corresponds to the permutation 123. . .8. This monotone increasing permutation always has positive sign. The rook placement {a13, a24, a31, a47, a55, a62, a78, a86} corresponds to the permutation 34175286, that is, we only list the second indices. Now we have to determine the number of inversions needed to transform 34175286 into 12345678. If this number is k, then the sign of the product is (−1)^k in the evaluation of the determinant. In our case the number of inversions is 9. Therefore, the product a13·a24·a31·a47·a55·a62·a78·a86 will have a negative sign in the summation.

Example 5.7. Let us calculate the following 2×2 determinant:

D = | a  b |
    | c  d |

Solution: The product of the entries in the main diagonal always has positive sign. The other diagonal corresponds to the permutation (21); therefore we need 1 inversion, hence that product has negative sign.

D = ad − bc.

Example 5.8. Let us calculate the following 3×3 determinant:

| a11  a12  a13 |
| a21  a22  a23 |
| a31  a32  a33 |

Solution: There are 6 rook placements, so the following products must be present in the evaluation of the determinant: a11·a22·a33, a11·a23·a32, a12·a21·a33, a12·a23·a31, a13·a22·a31, a13·a21·a32. Now the signs are respectively 1, −1, −1, 1, −1, 1. Therefore the determinant is: a11·a22·a33 + a12·a23·a31 + a13·a21·a32 − a11·a23·a32 − a12·a21·a33 − a13·a22·a31.
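The definition can be turned directly into a (slow, n!-term) computation. The sketch below is our own illustration; the function names perm_sign and det_by_definition are arbitrary choices, and only the Python standard library is used.

from itertools import permutations

def perm_sign(perm):
    # (-1)^k, where k is the number of inversions of the permutation
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_by_definition(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):      # one rook placement per permutation
        product = perm_sign(perm)
        for i in range(n):
            product *= A[i][perm[i]]         # the entry in row i and column perm(i)
        total += product
    return total

# For a 3x3 matrix this reproduces the formula of Example 5.8; for instance,
# with the matrix of Example 5.9 below:
A = [[3, 5, 2],
     [4, 2, 3],
     [-1, 2, 4]]
print(det_by_definition(A))   # -69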

There is one more method to calculate the determinant. The philosophy behind this method is the fact that smaller determinants are easier to calculate.

Let B = (bij) be an n×n matrix. The (n−1)×(n−1) matrix arising from B by deleting row i and column j is denoted by Bij; for instance, deleting row 2 and column 5 gives B25.

The expansion of det(B) in row i is the following summation:

det(B) = bi1·(−1)^(i+1)·|Bi1| + bi2·(−1)^(i+2)·|Bi2| + · · · + bin·(−1)^(i+n)·|Bin|.

This is also known as the Laplace expansion or expansion by cofactors in row i. Similarly, we can expand a determinant in a column.


Example 5.9. Let

A = [  3  5  2 ]
    [  4  2  3 ]
    [ −1  2  4 ]

Let us calculate the determinant by expanding in the second row or the third column.

Solution: det(A) = 4·(−1)^3·|A21| + 2·(−1)^4·|A22| + 3·(−1)^5·|A23|. Now

|A21| = | 5  2 | = 5·4 − 2·2 = 16,
        | 2  4 |

|A22| = |  3  2 | = 3·4 − (−1)·2 = 14,
        | −1  4 |

|A23| = |  3  5 | = 3·2 − (−1)·5 = 11.
        | −1  2 |

Therefore det(A) = −4·16 + 2·14 − 3·11 = −69.

Similarly, if we expand in the third column, say, we obtain:

|A13| = |  4  2 | = 4·2 − (−1)·2 = 10,        |A33| = | 3  5 | = 3·2 − 4·5 = −14.
        | −1  2 |                                     | 4  2 |

Therefore, det(A) = 2·10 − 3·11 + 4·(−14) = −69.
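The row expansion also lends itself to a short recursive sketch (our own illustration; the function name laplace_det is arbitrary and plain Python lists are used). It expands along the first row at every level.

def laplace_det(B):
    n = len(B)
    if n == 1:
        return B[0][0]
    total = 0
    for j in range(n):
        # minor B_{1j}: delete row 1 and column j
        minor = [row[:j] + row[j+1:] for row in B[1:]]
        total += B[0][j] * (-1) ** j * laplace_det(minor)
    return total

A = [[3, 5, 2],
     [4, 2, 3],
     [-1, 2, 4]]
print(laplace_det(A))   # -69, as in Example 5.9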

The following is known as the multiplicativity of the determinant.

Fact 5.10. det(AB) = det(A)·det(B).

Example 5.11. Verify the previous fact for

A = [ 1 −1  2 ]        B = [ 1 −2  3 ]
    [ 3  1  4 ]            [ 0 −1  4 ]
    [ 0 −2  5 ]            [ 2  0 −2 ]

Solution: We first calculate det(A) = 5 − 12 + 8 + 15 = 16 and det(B) = 2 − 16 + 6 = −8. After that we calculate

AB = [ 1 −1  2 ] [ 1 −2  3 ]   [  5  −1   −5 ]
     [ 3  1  4 ] [ 0 −1  4 ] = [ 11  −7    5 ]
     [ 0 −2  5 ] [ 2  0 −2 ]   [ 10   2  −18 ]

Now det(AB) = 630 − 110 − 50 − 350 − 50 − 198 = −128, which is really 16·(−8) = det(A)·det(B).

Exercises

1. What is the sign of the following products in the evaluation of a determinant of order 6?

a23·a31·a42·a56·a14·a65,   a32·a43·a14·a51·a66·a25

2. Using only the definition of the determinant, show that the determinant below is 0.

| α1  α2  α3  α4  α5 |
| β1  β2  β3  β4  β5 |
| a   b   0   0   0  |
| c   d   0   0   0  |
| e   f   0   0   0  |

3. Calculate the coefficients of x^3 and x^4 in the expression below, using the definition of the determinant.

       | 2x  x  1   2 |
f(x) = |  1  x  1  −1 |
       |  3  2  x   1 |
       |  1  1  1   x |

4. The 3×3 Vandermonde determinant is given by

     |  1     1     1   |
D3 = |  a1    a2    a3  |
     | a1^2  a2^2  a3^2 |

Show that D3 = (a2 − a1)(a3 − a2)(a3 − a1).

5. The 4×4 Vandermonde determinant is given by

     |  1     1     1     1   |
D4 = |  a1    a2    a3    a4  |
     | a1^2  a2^2  a3^2  a4^2 |
     | a1^3  a2^3  a3^3  a4^3 |

Show that D4 = (a2 − a1)(a3 − a2)(a3 − a1)(a4 − a1)(a4 − a2)(a4 − a3).


6. In each example below, evaluate the determinant using the methods of this chapter.

| 2 −1  3 |
| 4  0  6 |
| 5 −2  3 |,

|  3 −1  2  1 |
|  4  3  1 −2 |
| −1  0  2  3 |
|  6  2  5  2 |,

| 1  2  3  4 |
| 2  3  4  1 |
| 3  4  1  2 |
| 4  1  2  3 |,

| 3  1  1  1 |
| 1  3  1  1 |
| 1  1  3  1 |
| 1  1  1  3 |,

| 1  1   1   1 |
| 1  2   3   4 |
| 1  3   6  10 |
| 1  4  10  20 |,

|  2  5 −3 −2 |
| −2 −3  2 −5 |
|  1  3 −2  0 |
| −1 −6  4  0 |,

|  6  1 −9  1 |
| −8 −6  3  0 |
| −6 −8 −5  0 |
|  2  7  3  0 |.

7. Calculate the following determinant using row expansions:

| 1  0  0  1 |
| 1  a  0  0 |
| 1  1  b  0 |
| 1  0  1  c |

8. Calculate the following determinants using elementary row/column operations:

| 2  8  6  4 |
| 0  1  3  0 |
| 6  1  6  9 |
| 9  9  1  9 |,

|  1  0  3  2 |
|  2  1  5 −1 |
| −4  1  0  1 |
|  0  1  2  3 |,

|  2 −1  0  2 |
| −4  2 −9  3 |
|  2 −6  4 −2 |
|  1  3  2  2 |

9. Calculate the following determinant by expanding in the first column:

| a  1  1  1 |
| b  0  1  1 |
| c  1  0  1 |
| d  1  1  0 |

5.1 Cramer’s rule

We can write a system of n equations in n unknowns in the following concise form:

Ax = b.

Here A denotes an n×n matrix, and x and b are n-dimensional column vectors.

If det(A) ≠ 0, then the system has a unique solution given by Cramer's rule:

x1 = D1/det(A),   x2 = D2/det(A),   . . . ,   xn = Dn/det(A),

where Dj is the determinant of the matrix obtained by replacing the jth column of A by the vector b.

Example 5.12. Let us solve, using Cramer's rule, the following system:

2x1 + 4x2 + 6x3 = 18
4x1 + 5x2 + 6x3 = 24
3x1 + x2 − 2x3 = 4

Solution: First we calculate the determinant of the matrix formed by the coefficients of the equations.

    | 2  4   6 |
D = | 4  5   6 | = 6 ≠ 0,
    | 3  1  −2 |

so the system has a unique solution. Now replacing the first column, we get

     | 18  4   6 |
D1 = | 24  5   6 | = 24,
     |  4  1  −2 |

and similarly

     | 2  18   6 |                | 2  4  18 |
D2 = | 4  24   6 | = −12,    D3 = | 4  5  24 | = 18.
     | 3   4  −2 |                | 3  1   4 |

Therefore, x1 = D1/D = 24/6 = 4, x2 = D2/D = −12/6 = −2, and x3 = D3/D = 18/6 = 3. We can check the solution by substituting it into the equations. For instance, into the third: 3·4 − 2 − 2·3 = 4.
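Cramer's rule is also easy to carry out numerically; the following sketch is our own illustration and assumes numpy is available.

import numpy as np

A = np.array([[2, 4, 6],
              [4, 5, 6],
              [3, 1, -2]], dtype=float)
b = np.array([18, 24, 4], dtype=float)

D = np.linalg.det(A)                  # 6
x = []
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b                      # replace the j-th column by b
    x.append(np.linalg.det(Aj) / D)   # D_j / D

print(x)                              # approximately [4.0, -2.0, 3.0]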

Of course, we remember from Section 1 that some systems of linear equations might have no solution or infinitely many solutions. In Cramer's rule, these two events must be covered by the case where det(A) = 0. If det(A) = 0 but some Di ≠ 0, then the system has no solutions. Finally, if det(A) = D1 = · · · = Dn = 0, then the system typically has infinitely many solutions, as in the next example (although this case alone does not always guarantee it). In any case, Cramer's rule does not give a recipe for how to find them.

Example 5.13. Let us solve, using Cramer's rule, the following system:

x1 + x2 − x3 = 6
3x1 − 2x2 + 5x3 = 3
6x1 + x2 + 2x3 = 21

Solution: First, we calculate the determinant of the matrix formed by the coefficients of the equations.

    | 1   1  −1 |   | 1   0  0 |        | 1   0  0 |
D = | 3  −2   5 | = | 3  −5  8 | = 40 · | 3  −1  1 | = 0.
    | 6   1   2 |   | 6  −5  8 |        | 6  −1  1 |

We first subtracted column 1 from column 2 and added column 1 to column 3. Secondly, we factored out 5 from column 2 and 8 from column 3.

Now let us calculate the modified determinants:

     |  6   1  −1 |   | 6   1  −1 |
D1 = |  3  −2   5 | = | 3  −2   5 | = 0, where we subtract 3 times row 1 from row 3.
     | 21   1   2 |   | 3  −2   5 |

     | 1   6  −1 |   | 1  6  −1 |
D2 = | 3   3   5 | = | 3  3   5 | = 0, where we subtract 3 times row 1 from row 3.
     | 6  21   2 |   | 3  3   5 |

     | 1   1   6 |   | 1   1  6 |
D3 = | 3  −2   3 | = | 3  −2  3 | = 0.
     | 6   1  21 |   | 3  −2  3 |

We conclude that the system has an infinite number of solutions.

Example 5.14. Let us solve, using Cramer's rule, the following system:

x1 + x2 − x3 = 4
2x1 − 3x2 + x3 = −5
4x1 − x2 − x3 = −3

Solution: First, we calculate the determinant of the matrix formed by the coefficients of the equations.

    | 1   1  −1 |   | 1   1  −1 |
D = | 2  −3   1 | = | 0  −5   3 | = 0.
    | 4  −1  −1 |   | 0  −5   3 |

We subtracted multiples of row 1 from rows 2 and 3. Now we have to calculate the modified determinants:

     |  4   1  −1 |     | −1   1   4 |     | −1   1   4 |     | −1   1   4 |
D1 = | −5  −3   1 | = − |  1  −3  −5 | = − |  0  −2  −1 | = − |  0  −2  −1 | = 12.
     | −3  −1  −1 |     | −1  −1  −3 |     |  0  −4  −8 |     |  0   0  −6 |

This shows that the system has no solutions.

Exercises

Solve the following system of equations using Cramer’s rule.

1.
2x1 + x2 + x3 = 6
3x1 − 2x2 − 3x3 = 5
8x1 + 2x2 + 5x3 = 11

2.
2x1 + 5x2 − x3 = −1
4x1 + x2 + 3x3 = 3
−2x1 + 2x2 = 0

3.
x1 + 2x2 + 3x3 − 2x4 = 6
2x1 − x2 − 2x3 − 3x4 = 8
3x1 + 2x2 − x3 + 2x4 = 4
2x1 − 3x2 + 2x3 + x4 = −8
