Transcript of 004.AVI (Lecture 4)

By Eckhard M.S. Hitzer, University of Fukui, Prof. Ryosuke Nagaoka (University of the Air, Japan)

So please have a look at the screen. Here you see, for example, a vector x in R plus, and another element of R plus, which is y. On the left side we see the difference, and we see that the difference is not in R plus. Also, if we take scalar multiples of a vector (the yellow vector is a multiple of the red vector), we see that if the scalar is negative, the multiple is likewise not in R plus. A similar situation arises if we take the first quadrant in R two; then we can look, for example, at the multiples

... Prof. Nagaoka ...

and we see that negative multiples are also not in the first quadrant. And here we have the difference of the vector u and the vector v: here is the vector v, here the vector u, and the gray vector is the difference, which again is not in the first quadrant.
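For concreteness, a small worked example in the notation of the lecture (the specific numbers are illustrative, not taken from the applet):

\[
x = 2,\; y = 3 \in \mathbb{R}_+, \qquad x - y = -1 \notin \mathbb{R}_+,
\]
\[
u = (1,2),\; v = (2,1) \text{ lie in the first quadrant, but } u - v = (-1,1) \text{ does not.}
\]

So neither R plus nor the first quadrant is closed under subtraction or under multiplication by negative scalars.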

And now I want to explain to you the definition of linear independence. You have a set of vectors, and the set of vectors is linearly independent if, when we make a linear combination of them that gives the zero vector, all coefficients have to be zero, and there is no other solution than all coefficients being zero. Then we say that the set of vectors a one (a1) to a n (an) is a linearly independent set of vectors, or simply that the vectors are linearly independent.
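In symbols, the definition just stated reads (a standard formalization, added here for reference):

\[
\alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_n a_n = 0 \quad\Longrightarrow\quad \alpha_1 = \alpha_2 = \dots = \alpha_n = 0 .
\]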

Next let us see what linear dependence is. If we have a set of vectors again and they are linearly dependent, then there is a linear combination of them that gives the zero vector in which at least one of the coefficients is not equal to zero. Now if we have a set of linearly independent vectors and we make a linear combination of them, then the linear combination is unique. That is, if we try to produce the same result with another set of coefficients alpha one prime to alpha k prime, then the coefficients must be pairwise equal: alpha one equals alpha one prime, and so forth.
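The uniqueness claim follows in one step from the definition (this small derivation is added here; it is not in the spoken text): subtracting the two representations gives

\[
(\alpha_1 - \alpha_1') a_1 + \dots + (\alpha_k - \alpha_k') a_k = 0,
\]

and linear independence forces every coefficient \(\alpha_i - \alpha_i'\) to be zero, that is, \(\alpha_i = \alpha_i'\) for all \(i\).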

Next, if we have a set of vectors a one (a1) to a k (ak) and the set of vectors is linearly dependent, then at least one of these vectors is a linear combination of the other k minus one vectors.
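The standard argument, sketched here for completeness: if \(\alpha_1 a_1 + \dots + \alpha_k a_k = 0\) with some \(\alpha_j \neq 0\), we can solve for \(a_j\):

\[
a_j = -\frac{1}{\alpha_j} \sum_{i \neq j} \alpha_i a_i ,
\]

so \(a_j\) is a linear combination of the other \(k - 1\) vectors.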

I also want to illustrate this with a set of applets. Here we have three dimensional space and one vector, and linearly dependent on this one vector is the whole line formed by the scalar multiples of this vector. Then we have two vectors v one and v two, and together they span this plane here. Any multiples of them, and linear combinations of these multiples, lie in the plane.
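In symbols, the sets traced out in the applet are spans (a standard formalization, added here for reference):

\[
\operatorname{span}\{v_1\} = \{\, \alpha v_1 : \alpha \in \mathbb{R} \,\} \quad \text{(a line through the origin)},
\]
\[
\operatorname{span}\{v_1, v_2\} = \{\, \alpha v_1 + \beta v_2 : \alpha, \beta \in \mathbb{R} \,\} \quad \text{(a plane, when } v_1, v_2 \text{ are linearly independent)}.
\]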

Next, again we have two vectors v one and v two, and they define this plane by their span. A third vector can, for example, be a multiple of one of these vectors, or a linear combination of the two vectors; then the third vector is linearly dependent on the first two vectors. But if it points out of the plane, then it (the third vector) is linearly independent of the first two vectors.
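A concrete instance (illustrative numbers, not from the applet): take \(v_1 = (1,0,0)\) and \(v_2 = (0,1,0)\), which span the xy-plane. Then

\[
w = 2 v_1 + 3 v_2 = (2,3,0)
\]

lies in the plane and is linearly dependent on \(v_1, v_2\), while \(w = (0,0,1)\) points out of the plane and is linearly independent of them.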


Soli Deo Gloria. Created by Eckhard Hitzer (Fukui).
Last Modified 16 January 2004