
5 bonus points for cleanliness and organization.
1) (6) What single elementary row operation will create a 4 in the lower-left corner of the given matrix without creating any fractions in the first row? (The answer is not unique.)

[  2  2  2 ]
[ -3  1  0 ]

Answer: write both the operation and the matrix.

Operation: _________________

E = [ _  _  _ ]
    [ _  _  _ ]
2) (8) Determine whether each of the given matrices is an elementary matrix. Write YES or NO under each matrix.

a) [ _  _  _ ]     b) [ _  _  _ ]     c) [ _  _  _ ]
   [ 0  1  0 ]        [ 0 -1  0 ]        [ 0  1  0 ]
   [ 1  0  1 ]        [ 0  0  1 ]        [ 0  1  2 ]

d) [ _  0  0   0 ]   e) [ _  _  _ ]     f) [ _  _  _ ]
   [ 0  1  0   0 ]      [ 0  0  0 ]        [ 1  1  1 ]
   [ 0  0  4/7 0 ]      [ 0  1  1 ]        [ 0  0  1 ]
   [ 0  0  0   1 ]

g) [ _  _  _ ]     h) [ _  _  _ ]
   [ 0  1 -5 ]        [ 0 -1  0 ]
   [ 0  0  1 ]        [ 0  0  2 ]
3) (8) Find the inverses of the following matrices.

a) [ _  _  _ ]     b) [ _  _  _   ]     c) [ _  _  _ ]     d) [ _  _  _ ]
   [ 0  1 -2 ]        [ 0  3  0   ]        [ 0  1  0 ]        [ 0  2  0 ]
   [ 0  0  1 ]        [ 0  0  1/2 ]        [ 0  0  1 ]        [ 0  0  0 ]
4) (6) Which of the following 3 × 3 matrices are in row-echelon form or reduced row-echelon form? Note: write your choice (ref, rref, or neither) BELOW each matrix.

a) [ _  _  _ ]     b) [ _  _  _ ]     c) [ _  _  _ ]
   [ 0  0  0 ]        [ 0  1  0 ]        [ 0  1  1 ]
   [ 0  0  1 ]        [ 0  0  0 ]        [ 0  0  0 ]

d) [ _  _  _ ]     e) [ _  _  _ ]     f) [ _  _  _ ]
   [ 0  1  0 ]        [ 0  1  0 ]        [ 0  1  0 ]
   [ 0  6  0 ]        [ 0  0  1 ]        [ 0  0  1 ]
5) (16) Use Gauss-Jordan elimination to solve the system. Discuss the values of a and b for which the following system has
a) no solution,
b) a unique solution, or
c) infinitely many solutions.
d) Solve case c) parametrically.

{ ax + 2y = 2
{  x - 4y = b

Solution:
a)
b)
c)
d)
6) (8) Find the general solution in parametric form for the augmented matrix below. Identify the homogeneous solution (AX_h = 0) and the particular solution (AX_p = b):

[ _  _  1 -5 ]
[ 4 -1 -3  2 ]

Solution:

7) (5) For the matrix

A = [  2  _  0 -7 ]
    [  2  _  5  0 ]
    [ -1  1 -5  0 ]
    [  3 -4  8  4 ]

calculate tr(4A - 3 tr(-2A) · A).

Solution:
8) (5) For what value(s) of a is the matrix

A = [ a^2 - 3   a ]
    [ 7a - 13   a ]

invertible?

Solution:
9) (12) Given

A = [ _  _ ]
    [ 2 -5 ]

express A and A^-1 as products of elementary matrices (EM).

Solution: (Write out all the EMs; read the lecture notes.)
10) (10) Given

A^-1 = [  _  _ ]
       [ -1  2 ]

and the polynomial p(x) = x^3 - 2x + 4, find p(A^-1).

11) (8) Find all 2 × 2 diagonal matrices A that satisfy A^2 - 12A + 35I = 0.
12) (6) Find the standard matrix for the stated composition of linear operators on R^2: a rotation of 270° (counterclockwise), followed by a reflection about the line y = x.

13) (6)
a)
b)

14) (6) Solve for x′ and y′. What is the relation between x′, y′ and x, y?
Math 410, Linear Algebra, Fall 2020: Chapters 1.8 and 1.9

I) Chapter 1.8 Linear Transformations

1) Recall that a vector in R^n is called an ordered n-tuple. It is represented by a column, but for convenience we can write it in comma-delimited form (in physics this is sometimes called a contravariant vector):

x = [ x_1 ]
    [  :  ]
    [ x_n ]

or x = (x_1, x_2, ..., x_n).

The standard basis vectors associated with ordered n-tuples in R^3 are e_1, e_2, e_3.

Example 1: We write a vector in R^3 in terms of the standard basis as

x = (3, -2, 5) = 3 e_1 - 2 e_2 + 5 e_3 = 3 (1, 0, 0) - 2 (0, 1, 0) + 5 (0, 0, 1)

Recall that in Calculus 1 a function f(x) is a transformation from a value x in R to a value f(x) in R. We write

f : x → f(x)

In linear algebra we deal with the same kind of transformation on vectors in R^n:

T : R^n → R^m
2) If the transformation is given by a matrix A, we denote it T_A. The matrix transformation T_A transforms a vector x in R^n to a vector w in R^m. The easiest way to find the standard matrix of a transformation is to observe the matrix operations.

Properties: for a matrix transformation T_A, with u, v vectors and k a scalar:
a) T_A(0) = 0
b) T_A(ku) = k T_A(u)
c) T_A(u + v) = T_A(u) + T_A(v)
d) T_A(u - v) = T_A(u) - T_A(v)
The proofs are especially easy and straightforward. (Students: try c) and d).)

Example 2: Consider a system of linear equations as a matrix transformation T_A : R^4 → R^3. That is, we can write

T_A(X) = AX = [ 2 -3  1 -5 ] [  1 ]   [ 1 ]
              [ 4  1 -2  1 ] [ -3 ] = [ 3 ]
              [ 5 -1  4  0 ] [  0 ]   [ 8 ]
                             [  2 ]
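As a quick sanity check, the product in Example 2 can be verified with a few lines of plain Python; `mat_vec` is a small helper written here just for this check, not a library function:

```python
# Check of Example 2: T_A(X) = AX for the 3x4 matrix A and X = (1, -3, 0, 2).
def mat_vec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector (a list)."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[2, -3,  1, -5],
     [4,  1, -2,  1],
     [5, -1,  4,  0]]
X = [1, -3, 0, 2]

print(mat_vec(A, X))  # -> [1, 3, 8], matching the result in Example 2
```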
Example 3: Some special linear transformations.

The zero matrix transformation:

T_0(X) = 0X = 0

Example 4: The identity matrix transformation (linear):

T_I(X) = IX = X transforms a vector to itself.

3) It is also obvious that the properties of matrix multiplication carry over. Thus if

T_A(X) = T_B(X) for all X, that is AX = BX, then A = B.
4) Example 5: Find the standard matrix. If T_A : R^2 → R^4 and

T(x, y) = (2x - 3y, 3x, x + 2y, -2x + y)

(Note: I usually write T in this comma-delimited form instead of the column form T[x; y]; it is the same.)

Then the matrix representation is T(X) = AX, where

A = [  2 -3 ]
    [  3  0 ]
    [  1  2 ]
    [ -2  1 ]

Or, more slowly, we see

T(e_1) = T(1, 0) = (2, 3, 1, -2)  and  T(e_2) = T(0, 1) = (-3, 0, 2, 1)

so the columns of A are T(e_1) and T(e_2).
Example 6: Find the image of X = (-4, 5) under the transformation in Example 5:

T[ -4 ] = [  2 -3 ] [ -4 ]   [ -23 ]
 [  5 ]   [  3  0 ] [  5 ] = [ -12 ]
          [  1  2 ]          [   6 ]
          [ -2  1 ]          [  13 ]
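The image in Example 6 can be checked the same way, reusing the `mat_vec` helper defined above (repeated here so the snippet stands alone):

```python
# Check of Example 6: the image of X = (-4, 5) under T with standard matrix A.
def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[ 2, -3],
     [ 3,  0],
     [ 1,  2],
     [-2,  1]]

print(mat_vec(A, [-4, 5]))  # -> [-23, -12, 6, 13]
```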
Example 7: Find a transformation that satisfies: T is 2 × 2, T(1, 2) = (2, 3), and T(-2, 3) = (3, 4).

Think of this as

T_A = [ c1  c2 ]
      [ c3  c4 ]

We have

[ c1  c2 ] [ 1 ]   [ 2 ]         [ c1  c2 ] [ -2 ]   [ 3 ]
[ c3  c4 ] [ 2 ] = [ 3 ]   and   [ c3  c4 ] [  3 ] = [ 4 ]

so that

c1 + 2c2 = 2;  c3 + 2c4 = 3

and

-2c1 + 3c2 = 3;  -2c3 + 3c4 = 4

Now you can solve two 2 × 2 systems in the four unknowns, or write the single 4 × 4 system

[  1  2  0  0 ] [ c1 ]   [ 2 ]
[ -2  3  0  0 ] [ c2 ] = [ 3 ]
[  0  0  1  2 ] [ c3 ]   [ 3 ]
[  0  0 -2  3 ] [ c4 ]   [ 4 ]

You can solve this 4 × 4 system. Or, more simply, solve it as in Example 1.6: A = [T(e_1) | T(e_2)], as in Example 7 of Section 1.8 in the textbook. Here we have an augmented matrix for two 2 × 2 systems with the same coefficient matrix:

[  1  2 | 2  3 ]    [ 1  2 | 2  3  ]    [ 1  2 | 2  3    ]    [ 1  0 | 0  1/7  ]
[ -2  3 | 3  4 ] ⇒ [ 0  7 | 7  10 ] ⇒ [ 0  1 | 1  10/7 ] ⇒ [ 0  1 | 1  10/7 ]

We have c1 = 0, c2 = 1, c3 = 1/7, and c4 = 10/7. Therefore

A = [  0     1   ]
    [ 1/7  10/7 ]
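We can confirm that the matrix A found in Example 7 really sends the two given vectors to the required images; exact fractions avoid any floating-point doubt (`mat_vec` is again a local helper):

```python
# Example 7 check: A = [[0, 1], [1/7, 10/7]] maps (1,2) -> (2,3) and (-2,3) -> (3,4).
from fractions import Fraction as F

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[F(0),    F(1)],
     [F(1, 7), F(10, 7)]]

print(mat_vec(A, [1, 2]))   # prints [Fraction(2, 1), Fraction(3, 1)], i.e. (2, 3)
print(mat_vec(A, [-2, 3]))  # prints [Fraction(3, 1), Fraction(4, 1)], i.e. (3, 4)
```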
Some standard simple matrix transformations:

5) Reflection about the x-axis: T(x, y) = (x, -y), standard matrix
[ 1  0 ]
[ 0 -1 ]

6) Reflection about the y-axis: T(x, y) = (-x, y), standard matrix
[ -1  0 ]
[  0  1 ]

7) Reflection about the line y = x: T(x, y) = (y, x), standard matrix
[ 0  1 ]
[ 1  0 ]

8) Reflection about the xy-plane: T(x, y, z) = (x, y, -z), standard matrix
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0 -1 ]

9) Reflection about the xz-plane: T(x, y, z) = (x, -y, z), standard matrix
[ 1  0  0 ]
[ 0 -1  0 ]
[ 0  0  1 ]
The graph is similar to the above case.

10) Reflection about the yz-plane: T(x, y, z) = (-x, y, z), standard matrix
[ -1  0  0 ]
[  0  1  0 ]
[  0  0  1 ]
The graph is similar to the above case.

11) Orthogonal projection on the x-axis: T(x, y) = (x, 0), standard matrix
[ 1  0 ]
[ 0  0 ]

12) Orthogonal projection on the y-axis: T(x, y) = (0, y), standard matrix
[ 0  0 ]
[ 0  1 ]

13) Orthogonal projection on the xy-plane (similarly for the xz- and yz-planes): T(x, y, z) = (x, y, 0), standard matrix
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  0 ]

14) Rotation operators:

a) counterclockwise (positive) rotation by angle θ:

R_θ = [ cos θ  -sin θ ]
      [ sin θ   cos θ ]

b) clockwise (negative) rotation by angle θ:

R_(-θ) = [  cos θ  sin θ ]
         [ -sin θ  cos θ ]
Example 8: What are the domain and codomain of T_A if A has size 6 × 3? The domain is R^3 and the codomain is R^6.

Example 9: What are the domain and codomain of T_A if A is a 5 × 4 matrix (with symbolic entries a, b, c, d, e)? The domain is R^4 and the codomain is R^5.

Example 10: Use matrix multiplication to find the image of the vector (3, -4) when it is rotated about the origin through an angle of θ = 30° counterclockwise:

T[  3 ] = [ cos 30°  -sin 30° ] [  3 ] = [ (3√3)/2 + 2 ]
 [ -4 ]   [ sin 30°   cos 30° ] [ -4 ]   [ 3/2 - 2√3   ]
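Example 10 can be checked numerically against the exact values (3√3)/2 + 2 and 3/2 - 2√3:

```python
# Check of Example 10: rotate (3, -4) by 30 degrees counterclockwise.
import math

t = math.radians(30)
x, y = 3, -4
xp = math.cos(t) * x - math.sin(t) * y   # should equal 3*sqrt(3)/2 + 2
yp = math.sin(t) * x + math.cos(t) * y   # should equal 3/2 - 2*sqrt(3)

assert abs(xp - (3 * math.sqrt(3) / 2 + 2)) < 1e-12
assert abs(yp - (1.5 - 2 * math.sqrt(3))) < 1e-12
print(xp, yp)
```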
II) 1.9 Composition of Matrix Transformations

4) If T_A : R^m → R^n and T_B : R^n → R^k, then the composition T_B ∘ T_A : R^m → R^k is the matrix transformation whose standard matrix is the product BA, of size (k × n)(n × m) = k × m. We write

T_(BA)(X) = T_B(T_A(X)) = B(AX) = (BA)X = T_B ∘ T_A(X)

This is the composition transformation.
Example 11: Given the transformations

T_1 : R^3 → R^2 and T_2 : R^2 → R^3
T_1(x, y, z) = (3x - 2y - z, -2x + y + 2z)
T_2(x, y) = (2x + 5y, -2x + y, x + 3y)

find the standard matrices of T_2 ∘ T_1(X) and T_1 ∘ T_2(X).

We see that T_1 has standard matrix

A_1 = [  3 -2 -1 ]
      [ -2  1  2 ]

and T_2 has standard matrix

A_2 = [  2  5 ]
      [ -2  1 ]
      [  1  3 ]

So

T_2 ∘ T_1(X) = (A_2 A_1)X,   A_2 A_1 = [  2  5 ] [  3 -2 -1 ]   [ -4  1  8 ]
                                       [ -2  1 ] [ -2  1  2 ] = [ -8  5  4 ]
                                       [  1  3 ]                [ -3  1  5 ]

and

T_1 ∘ T_2(X) = (A_1 A_2)X,   A_1 A_2 = [  3 -2 -1 ] [  2  5 ]   [  9  10 ]
                                       [ -2  1  2 ] [ -2  1 ] = [ -4  -3 ]
                                                    [  1  3 ]
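The two products in Example 11 can be verified with a short matrix-multiplication helper (`mat_mul` is defined here for the check, not imported from a library); note they do not even have the same size:

```python
# Check of Example 11: the composite standard matrices A2*A1 (3x3) and A1*A2 (2x2).
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A1 = [[ 3, -2, -1],
      [-2,  1,  2]]        # standard matrix of T1 : R^3 -> R^2
A2 = [[ 2, 5],
      [-2, 1],
      [ 1, 3]]             # standard matrix of T2 : R^2 -> R^3

print(mat_mul(A2, A1))  # T2 o T1 -> [[-4, 1, 8], [-8, 5, 4], [-3, 1, 5]]
print(mat_mul(A1, A2))  # T1 o T2 -> [[9, 10], [-4, -3]]
```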
Note:
* In general matrix multiplication does not commute, so T_1 ∘ T_2 ≠ T_2 ∘ T_1.
* Rotation transformations do commute.

Example 12: Given the rotations

R_1 = [ cos 20°  -sin 20° ]   and   R_2 = [ cos 30°  -sin 30° ]
      [ sin 20°   cos 20° ]               [ sin 30°   cos 30° ]

we see by inspection that rotating a vector by 20° and then by 30° more is the same as rotating it by 30° and then by 20° more; the total is a 50° rotation:

[ cos 20°  -sin 20° ] [ cos 30°  -sin 30° ] = [ cos 30°  -sin 30° ] [ cos 20°  -sin 20° ]
[ sin 20°   cos 20° ] [ sin 30°   cos 30° ]   [ sin 30°   cos 30° ] [ sin 20°   cos 20° ]

[ cos 20°  -sin 20° ] [ cos 30°  -sin 30° ] = [ cos 50°  -sin 50° ]
[ sin 20°   cos 20° ] [ sin 30°   cos 30° ]   [ sin 50°   cos 50° ]
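Example 12 is easy to confirm numerically: both orders of the 20° and 30° products agree with the 50° rotation entry by entry (helper names `rot` and `mat_mul` are chosen here):

```python
# Check of Example 12: R(20) R(30) = R(30) R(20) = R(50), numerically.
import math

def rot(deg):
    t = math.radians(deg)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

P = mat_mul(rot(20), rot(30))
Q = mat_mul(rot(30), rot(20))
R = rot(50)
ok = all(abs(P[i][j] - R[i][j]) < 1e-12 and abs(Q[i][j] - R[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # -> True
```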
III) The Inverse of a Transformation Matrix

The inverse of T_A is defined as (T_A)^-1 = T_(A^-1). The inverse transformation is related to T_A by

T_A T_(A^-1) = T_(A^-1) T_A = I_n

Example 12: The inverse of R_θ is R_(-θ):

R_20° = [ cos 20°  -sin 20° ]    R_(-20°) = [ cos(-20°)  -sin(-20°) ] = [  cos 20°  sin 20° ]
        [ sin 20°   cos 20° ]               [ sin(-20°)   cos(-20°) ]   [ -sin 20°  cos 20° ]

R_20° R_(-20°) = [ cos 20°  -sin 20° ] [  cos 20°  sin 20° ] = [ 1  0 ]
                 [ sin 20°   cos 20° ] [ -sin 20°  cos 20° ]   [ 0  1 ]
Example 13: Find T^-1(w_1, w_2) if

5w_1 - 6w_2 = x_1
-3w_1 + 4w_2 = x_2

That is,

[  5 -6 ] [ w_1 ]   [ x_1 ]
[ -3  4 ] [ w_2 ] = [ x_2 ]

or AW = X. It is easy to solve for W if A is invertible, for then we have

A^-1 AW = A^-1 X ⇒ W = A^-1 X

The inverse of the 2 × 2 matrix A = [ 5 -6; -3 4 ] is

A^-1 = 1/(20 - 18) [ 4  6 ] = 1/2 [ 4  6 ]
                   [ 3  5 ]       [ 3  5 ]

Therefore

T^-1[ w_1 ] = 1/2 [ 4  6 ] [ w_1 ] = [ 2w_1 + 3w_2       ]
    [ w_2 ]       [ 3  5 ] [ w_2 ]   [ 3w_1/2 + 5w_2/2   ]

or simply

T^-1(w_1, w_2) = (2w_1 + 3w_2, 3w_1/2 + 5w_2/2)
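We can confirm that the formula found in Example 13 really inverts T by composing the two maps on a sample vector; exact fractions keep the check clean (the sample vector (7, -2) is an arbitrary choice):

```python
# Check of Example 13: T(w1, w2) = (5w1 - 6w2, -3w1 + 4w2) and the claimed inverse
# T^{-1}(w1, w2) = (2w1 + 3w2, 3w1/2 + 5w2/2) undo each other.
from fractions import Fraction as F

def T(w1, w2):
    return (5 * w1 - 6 * w2, -3 * w1 + 4 * w2)

def T_inv(w1, w2):
    return (2 * w1 + 3 * w2, F(3, 2) * w1 + F(5, 2) * w2)

w = (F(7), F(-2))
print(T_inv(*T(*w)) == w)  # -> True
```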
Math 410, Linear Algebra, Fall 2020
I)
Chapter 1.8 Linear Transformation
1)
Recall that a vector in R^n is called an ordered n-tuple. It is represented by a column, but for convenience we can also write it in comma-delimited form (in physics it is sometimes called a contravariant vector):

x = (x_1, x_2, ......, x_n)^T written as a column, or x = (x_1, x_2, ......, x_n)

The standard basis vectors associated with ordered n-tuples are e_1, ..., e_n; in R^3,
e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1)
Example 1:
We write a vector in R^3 in the standard basis as
x = (3, -2, 5) = 3(1, 0, 0)^T - 2(0, 1, 0)^T + 5(0, 0, 1)^T = 3e_1 - 2e_2 + 5e_3
Recall that in Calculus 1 a function f(x) is a transformation from a value x in R to a value f(x) in R. We write
f : x -> f(x)
In linear algebra we deal with the same kind of transformation for vectors:
T : R^n -> R^m
2)
If the transformation is given by a matrix A, we denote it T_A. The matrix transformation T_A transforms a vector x in R^n to a vector w in R^m.
The easiest way to find the standard matrix of a transformation is to observe the matrix operations.
Properties: For a matrix transformation T_A, with u, v vectors and k a scalar:
a) T_A(0) = 0
b) T_A(ku) = k T_A(u)
c) T_A(u + v) = T_A(u) + T_A(v)
d) T_A(u - v) = T_A(u) - T_A(v)
The proof is especially easy and straightforward. Students: try c) and d).
Example 2: Consider a matrix transformation T_A : R^4 -> R^3 given by a system of linear equations. That is, we can write
T_A(X) = AX = [2 -3 1 -5; 4 1 -2 1; 5 -1 4 0] (1, -3, 0, 2)^T = (1, 3, 8)^T
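The arithmetic in Example 2 can be checked with a short pure-Python sketch (the helper name `matvec` is ours, not from the notes):

```python
# Matrix-vector product w = A x, written out with plain lists.
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[2, -3, 1, -5],
     [4, 1, -2, 1],
     [5, -1, 4, 0]]
x = [1, -3, 0, 2]

print(matvec(A, x))  # [1, 3, 8], matching Example 2
```

Each entry of the result is the dot product of one row of A with X.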
Example 3:
Some special linear transformations.
Zero matrix transformation: T_0(X) = 0X = 0
Example 4:
Identity matrix transformation (linear): T_I(X) = IX = X transforms a vector to itself.
3)
It is also obvious that the properties of matrix multiplication carry over. Thus if T_A(X) = T_B(X) for all X, i.e. AX = BX for all X, then A = B.
4)
Find a standard matrix:
Example 5:
If T_A : R^2 -> R^4 and
T(x, y) = (2x - 3y, 3x, x + 2y, -2x + y)
(Note: I usually write this comma form instead of the column form T(x, y)^T = (2x - 3y, 3x, x + 2y, -2x + y)^T. It is the same.)
Then the matrix representation of T is T(X) = AX, where
A = [2 -3; 3 0; 1 2; -2 1]
Or, more slowly, we see
T(e_1) = T(1, 0) = (2, 3, 1, -2)
and
T(e_2) = T(0, 1) = (-3, 0, 2, 1)
and these are the columns of A.
Example 6: Find the image of X = (-4, 5) under the transformation in Example 5.
T(-4, 5) = [2 -3; 3 0; 1 2; -2 1] (-4, 5)^T = (-23, -12, 6, 13)^T
Example 7: Find a transformation that satisfies: T is 2×2, T(1, 2) = (2, 3) and T(-2, 3) = (3, 4).
Think of this as T_A = [c_1 c_2; c_3 c_4]. We have
[c_1 c_2; c_3 c_4] (1, 2)^T = (2, 3)^T  =>  c_1 + 2c_2 = 2 and c_3 + 2c_4 = 3
and
[c_1 c_2; c_3 c_4] (-2, 3)^T = (3, 4)^T  =>  -2c_1 + 3c_2 = 3 and -2c_3 + 3c_4 = 4
Now you can solve two 2×2 systems in the 4 unknowns, or write them together as one 4×4 system:
[1 2 0 0; -2 3 0 0; 0 0 1 2; 0 0 -2 3] (c_1, c_2, c_3, c_4)^T = (2, 3, 3, 4)^T
You can solve this 4×4 system. Or, more simply, we can solve it as in Example 1.6:
A = [T(e_1) | T(e_2)], as in Example 7 (1.8) in the textbook.
Here we have one augmented matrix for the two 2×2 systems, since they share the same coefficient matrix:
[1 2 | 2 3; -2 3 | 3 4] => [1 2 | 2 3; 0 7 | 7 10] => [1 2 | 2 3; 0 1 | 1 10/7] => [1 0 | 0 1/7; 0 1 | 1 10/7]
We have
c_1 = 0, c_2 = 1, c_3 = 1/7, c_4 = 10/7
Therefore
A = [0 1; 1/7 10/7]
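As a sanity check on Example 7, we can apply the matrix we found to both given inputs (the helper is our own; exact fractions avoid rounding):

```python
from fractions import Fraction as F

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Standard matrix found in Example 7.
A = [[F(0), F(1)],
     [F(1, 7), F(10, 7)]]

assert matvec(A, [1, 2]) == [2, 3]    # T(1, 2) = (2, 3)
assert matvec(A, [-2, 3]) == [3, 4]   # T(-2, 3) = (3, 4)
print("A reproduces both given images")
```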
Some standard simple matrix transformations:
5)
Reflection about the x-axis: T(x, y) = (x, -y), matrix [1 0; 0 -1]
6)
Reflection about the y-axis: T(x, y) = (-x, y), matrix [-1 0; 0 1]
7)
Reflection about the line y = x: T(x, y) = (y, x), matrix [0 1; 1 0]
8)
Reflection about the xy-plane: T(x, y, z) = (x, y, -z), matrix [1 0 0; 0 1 0; 0 0 -1]
9)
Reflection about the xz-plane: T(x, y, z) = (x, -y, z), matrix [1 0 0; 0 -1 0; 0 0 1]. The graph is similar to the case above.
10)
Reflection about the yz-plane: T(x, y, z) = (-x, y, z), matrix [-1 0 0; 0 1 0; 0 0 1]. The graph is similar to the case above.
11)
Orthogonal projection on the x-axis: T(x, y) = (x, 0), matrix [1 0; 0 0]
Orthogonal projection on the y-axis: T(x, y) = (0, y), matrix [0 0; 0 1]
12)
Orthogonal projection on the xy-plane (similarly for the xz- and yz-planes): T(x, y, z) = (x, y, 0), matrix [1 0 0; 0 1 0; 0 0 0]
13)
Rotation operators:
a) counter-clockwise (positive rotation) by angle θ: R_θ = [cos θ -sin θ; sin θ cos θ]
b) clockwise (negative rotation) by angle θ: R_{-θ} = [cos θ sin θ; -sin θ cos θ]
Example 8: What are the domain and co-domain of T_A if A has size 6 × 3?
The domain is R^3 and the co-domain is R^6.
Example 9: What are the domain and co-domain of T_A if A is the 5 × 4 matrix given in the notes (its entries include the parameters a, b, c, d, e)?
The domain is R^4 and the co-domain is R^5.
Example 10: Use matrix multiplication to find the image of the vector (3, -4) when it is rotated about the origin through an angle of θ = 30° counter-clockwise.
T(3, -4) = [cos 30° -sin 30°; sin 30° cos 30°] (3, -4)^T = (3√3/2 + 2, 3/2 - 2√3)^T
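Example 10 can be reproduced numerically; this sketch builds the 30° rotation matrix from `math.cos`/`math.sin` and compares against the closed form above (helper name ours):

```python
import math

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

t = math.radians(30)
R = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # counter-clockwise rotation by 30 degrees

w = matvec(R, [3, -4])
# Agrees with the closed form (3*sqrt(3)/2 + 2, 3/2 - 2*sqrt(3))
assert math.isclose(w[0], 3 * math.sqrt(3) / 2 + 2)
assert math.isclose(w[1], 3 / 2 - 2 * math.sqrt(3))
print(w)
```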
II)
1.9 Composition of matrix transformations
If T_A : R^m -> R^n and T_B : R^n -> R^k, then the composition T_B ∘ T_A : R^m -> R^k has standard matrix BA, of size (k × n)(n × m) = k × m.
We write
T_B ∘ T_A (X) = T_B(T_A(X)) = BAX
This is the composition transformation.
Example 11: Given the transformations
T_1 : R^3 -> R^2 and T_2 : R^2 -> R^3
T_1(x, y, z) = (3x - 2y - z, -2x + y + 2z)
T_2(x, y) = (2x + 5y, -2x + y, x + 3y)
find the standard matrices of T_2 ∘ T_1 (X) and T_1 ∘ T_2 (X).
We see that
T_1(x, y, z)^T = [3 -2 -1; -2 1 2] (x, y, z)^T  =>  A_1 = [3 -2 -1; -2 1 2]
and
T_2(x, y)^T = [2 5; -2 1; 1 3] (x, y)^T  =>  A_2 = [2 5; -2 1; 1 3]
T_2 ∘ T_1 (X) = A_2 A_1 X, where
A_2 A_1 = [2 5; -2 1; 1 3] [3 -2 -1; -2 1 2] = [-4 1 8; -8 5 4; -3 1 5]
and T_1 ∘ T_2 (X) = A_1 A_2 X, where
A_1 A_2 = [3 -2 -1; -2 1 2] [2 5; -2 1; 1 3] = [9 10; -4 -3]
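The two products in Example 11 can be verified with a small pure-Python multiplication (the helper `matmul` is ours); note the two answers do not even have the same size:

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A1 = [[3, -2, -1],
      [-2, 1, 2]]    # standard matrix of T1 (2x3)
A2 = [[2, 5],
      [-2, 1],
      [1, 3]]        # standard matrix of T2 (3x2)

print(matmul(A2, A1))  # T2 o T1: [[-4, 1, 8], [-8, 5, 4], [-3, 1, 5]]
print(matmul(A1, A2))  # T1 o T2: [[9, 10], [-4, -3]]
```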
Note:
* In general matrix multiplication does not commute, so T_1 ∘ T_2 ≠ T_2 ∘ T_1.
* Rotation transformations do commute.
Example 12: Given the rotations
R_1 = [cos 20° -sin 20°; sin 20° cos 20°] and R_2 = [cos 30° -sin 30°; sin 30° cos 30°]
we see by inspection that rotating a vector by 20° and then by 30° more is the same as rotating it by 30° and then by 20° more; the total is a 50° rotation:
R_1 R_2 = R_2 R_1 = [cos 50° -sin 50°; sin 50° cos 50°]
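A numerical check that the rotations in Example 12 commute and compose to a 50° rotation (helper names ours):

```python
import math

def rot(deg):
    """Counter-clockwise rotation matrix for an angle given in degrees."""
    t = math.radians(deg)
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = matmul(rot(20), rot(30))
Q = matmul(rot(30), rot(20))
R50 = rot(50)
assert all(math.isclose(P[i][j], Q[i][j]) and math.isclose(P[i][j], R50[i][j])
           for i in range(2) for j in range(2))
print("R20 R30 = R30 R20 = R50")
```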
III)
The inverse of a transformation matrix
The inverse of the transformation T_A is T_{A^{-1}}, the transformation given by the inverse matrix A^{-1}. They are related by
T_A T_{A^{-1}} = T_{A^{-1}} T_A = I_n
Example 12: The inverse of R_θ is R_{-θ}:
R_{20°} = [cos 20° -sin 20°; sin 20° cos 20°]
R_{-20°} = [cos(-20°) -sin(-20°); sin(-20°) cos(-20°)] = [cos 20° sin 20°; -sin 20° cos 20°]
and indeed
R_{20°} R_{-20°} = [cos 20° -sin 20°; sin 20° cos 20°] [cos 20° sin 20°; -sin 20° cos 20°] = [1 0; 0 1]
Example 13: Find T^{-1}(w_1, w_2) if T is given by the system
5w_1 - 6w_2 = x_1
-3w_1 + 4w_2 = x_2
That is,
[5 -6; -3 4] (w_1, w_2)^T = (x_1, x_2)^T, or AW = X.
It is easy to solve for W if A is invertible:
A^{-1} A W = A^{-1} X  =>  W = A^{-1} X
Using the inverse of a 2×2 matrix:
A = [5 -6; -3 4]  =>  A^{-1} = (1/(20 - 18)) [4 6; 3 5] = (1/2) [4 6; 3 5]
Therefore
T^{-1}(w_1, w_2) = (1/2) [4 6; 3 5] (w_1, w_2)^T = (2w_1 + 3w_2, 3w_1/2 + 5w_2/2)
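Example 13 can be double-checked by applying T and then T^{-1} to a sample vector (chosen arbitrarily by us; exact fractions keep the arithmetic clean):

```python
from fractions import Fraction as F

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[5, -6],
     [-3, 4]]                 # matrix of T from Example 13
Ainv = [[F(4, 2), F(6, 2)],
        [F(3, 2), F(5, 2)]]   # (1/2) * [4 6; 3 5]

w = [7, -1]                   # an arbitrary test vector
x = matvec(A, w)              # apply T: gives [41, -25]
assert matvec(Ainv, x) == w   # T^{-1} takes it back
print(x, "->", matvec(Ainv, x))
```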
Math 410, Linear Algebra
Fall 2020
LESSON 1
SYSTEM OF LINEAR EQUATIONS
I)
Matrices
We start with some of the basic notation for matrices.
An m × n matrix is a matrix with m rows and n columns, denoted A_{mn}. The pair m × n is often called the size or dimension of the matrix. (Historically matrices were called arrays of entries.)
The element a_ij is also called the (i, j) entry of the matrix A. The indices i and j indicate the ith row and jth column. A matrix in general can be presented as follows:
A_{mn} = [a_11 a_12 ...... a_1n; a_21 a_22 ...... a_2n; ......; a_m1 a_m2 ...... a_mn]

II)
A System of Linear Equations
Examples of how to use matrices in linear algebra:
Example 1:
A single linear equation
2x + 3y - z = 1    (1)
can be written as
[2 3 -1] (x, y, z)^T = [1]
The LHS (left-hand side) of this equation is a 1 × 3 matrix times a 3 × 1 column, and the RHS (right-hand side) is a 1 × 1 matrix.
This equation has 2 parts:
i) [2 3 -1] (x, y, z)^T = [0] is called the homogeneous equation associated with (1)
ii) [2 3 -1] (x, y, z)^T = [1] is called the non-homogeneous equation associated with (1)
Example 2:
A system of equations
2x + 3y - z = 1
x - 2y = 3
can be written as:
[2 3 -1 | 1; 1 -2 0 | 3]    (2)
The zero in the (2, 3) entry appears since there is no z term in the second equation, i.e. 0z.
This form of matrix is called the augmented matrix.
Again, the system has 2 parts:
i) [2 3 -1 | 0; 1 -2 0 | 0] is called the homogeneous system associated with (2)
ii) [2 3 -1 | 1; 1 -2 0 | 3] is called the non-homogeneous system associated with (2)

III)
Row echelon and reduced row echelon forms
1. To determine the solution to a system, we need to have the augmented matrix in a
form where the values of the variables or the relation between the variables can be
identified.
2. A matrix that has the following three properties is said to be in row echelon form (ref).
a. The first nonzero entry in a row is 1. This is called a leading 1.
b. Any rows that consist entirely of zeros are grouped together at the bottom
of the matrix.
c. In any two successive rows that have nonzero entries, the leading 1 in the
lower row occurs farther to the right than the leading 1 in the higher row.
Example 3: Row echelon forms (ref):
a) [1 1 2; 0 1 -1; 0 0 0]
b) [1 0 2; 0 1 -1; 0 0 1]
c) [1 0 0; 0 1 -1; 0 0 1]
3. A matrix is said to be in reduced row echelon form (rref) if it has the
following properties:
a. The first nonzero entry in a row is 1. This is called a leading 1.
b. Any rows that consist entirely of zeros are grouped together at the bottom
of the matrix.
c. In any two successive rows that have nonzero entries, the leading 1 in the
lower row occurs farther to the right than the leading 1 in the higher row.
d. Each column that contains a leading 1 has zeros everywhere else in that
column.
Example 4: Reduced row echelon forms (rref):
a) [1 0 0; 0 1 0; 0 0 0]
b) [1 0 0; 0 1 0; 0 0 1]
c) [1 0 0; 0 0 0; 0 0 0]
Example 5: The following matrices are in neither ref nor rref

a) [ 1 0 0 ]
   [ 0 0 0 ]
   [ 0 0 1 ]

b) [ 1 0 0 ]
   [ 0 1 0 ]
   [ 0 0 2 ]

c) [ 0 0 0 ]
   [ 0 1 0 ]
   [ 0 0 0 ]
4. In general, the variables in a linear system that correspond to the leading 1's in its
augmented matrix are called the leading variables, and the remaining variables
are called the free variables.
5. If a linear system has infinitely many solutions, then a set of parametric equations
from which all the solutions can be obtained is called a general solution of the
system.
6. Gauss-Jordan elimination is an algorithm for reducing a matrix to reduced row
echelon form. It has two parts: a forward phase, in which zeros are introduced
beneath the leading 1's, and a backward phase, in which zeros are introduced
above them. The top left nonzero entry is scaled to be 1; then, for each nonzero
entry below this 1, the opposite of that entry is multiplied by the row with the
leading 1 and added to the row containing the nonzero entry. The result is only
zeros below the leading 1. When a column consists of a leading 1 with zeros below,
we move on to the next column containing a pivot element and repeat the
process. When all of the leading 1's have zeros below them (the forward phase), the
process continues from right to left, zeroing out the entries above the leading
1's (the backward phase).
7. If only the forward phase is used, then the procedure produces a row echelon form
and is called Gaussian elimination.
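The two phases above can be sketched in code. The following is a minimal floating-point sketch (the function name `rref` and partial-pivot-free column scan are my own choices, not part of the notes):

```python
def rref(M):
    """Reduce a matrix (list of lists of floats) to reduced row echelon form
    using Gauss-Jordan elimination: the forward phase creates zeros below
    each leading 1, the backward loop creates zeros above it."""
    A = [row[:] for row in M]          # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pr is None:
            continue                               # no pivot in this column
        A[pivot_row], A[pr] = A[pr], A[pivot_row]  # ERO: interchange 2 rows
        p = A[pivot_row][col]
        A[pivot_row] = [x / p for x in A[pivot_row]]   # ERO: scale to a leading 1
        for r in range(rows):                      # zero out the rest of the column
            if r != pivot_row and A[r][col] != 0:
                f = A[r][col]                      # ERO: add a multiple of a row
                A[r] = [x - f * y for x, y in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

M = [[1.0, -2.0, 3.0],
     [2.0, 3.0, -1.0]]   # a 2x3 augmented matrix
print(rref(M))           # [[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]]
```

Stopping the loop after the elimination below each pivot (and skipping the rows above) would give Gaussian elimination and a ref instead.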
8. A system of m linear equations with n unknowns is said to be homogeneous if
the constant terms are all zero.
  Σ_{j=1}^n a_{1j} x_j = 0    (1)
  Σ_{j=1}^n a_{2j} x_j = 0    (2)
  ⋮
  Σ_{j=1}^n a_{mj} x_j = 0    (m)

The summation is a sum over the index j:

  Σ_{j=1}^n a_{1j} x_j = a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + ⋯ + a_{1n} x_n
9. Every homogeneous system of linear equations is consistent because all such
systems have the solution x1 = 0, x2 = 0, …, xn = 0. This solution is called the trivial
solution. If there are other solutions, they are called nontrivial solutions.
10. Every homogeneous linear system has the following two possibilities:
a) The system has only the trivial solution.
b) The system has infinitely many solutions (in addition to the trivial
solution).
Example 5: In practice, the augmented matrix is written without the vertical line.
The system with augmented matrix (without the vertical line)

  [ 1 −1  1  0 ]
  [ 2  1 −4  0 ]   possesses the trivial solution (0, 0, 0) AND nontrivial solutions such as (1, 2, 1),
  [ 1 −2  3  0 ]

but the system

  [ 1 −1  1  0 ]
  [ 2  1 −4  0 ]   possesses only the trivial solution (0, 0, 0).
  [ 1 −2 −1  0 ]
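A quick way to check which case we are in (a NumPy sketch, not part of the notes): a singular coefficient matrix, i.e. rank < n, means nontrivial solutions exist.

```python
import numpy as np

A1 = np.array([[1, -1,  1],
               [2,  1, -4],
               [1, -2,  3]])   # coefficient matrix of the first system
A2 = np.array([[1, -1,  1],
               [2,  1, -4],
               [1, -2, -1]])   # coefficient matrix of the second system

print(np.linalg.matrix_rank(A1))   # 2  -> rank < 3, nontrivial solutions exist
print(np.linalg.matrix_rank(A2))   # 3  -> only the trivial solution

# (1, 2, 1) is indeed a nontrivial solution of the first system:
print(A1 @ np.array([1, 2, 1]))    # [0 0 0]
```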
11. If a homogeneous linear system has n unknowns and the reduced row echelon
form of its augmented matrix has r nonzero rows, then the system has n − r free
variables.
Example 6: The augmented matrix (with the vertical line, for beginners).
The rref of a system with 5 unknowns is shown below; it has 3 nonzero rows,
so the solution has 5 − 3 = 2 free unknowns.

  [ 1 0 0 0 0 | 0 ]
  [ 0 1 0 0 0 | 0 ]
  [ 0 0 1 0 0 | 0 ]
  [ 0 0 0 0 0 | 0 ]
  [ 0 0 0 0 0 | 0 ]
12. A homogeneous linear system with more unknowns than equations must have at
least one free variable, and thus it has infinitely many solutions.
Example 7: A system of 3 equations and 4 unknowns:

  [  2  2 2 1 | 0 ]
  [  1 −1 3 1 | 0 ]   possesses infinitely many solutions.
  [ −2  2 0 1 | 0 ]

13. For large linear systems that
require a computer solution, it is generally more efficient to use Gaussian elimination
to reduce a matrix to row echelon form, and then solve for the variables by back
substitution: starting at the bottom row we solve for the
value of the variable, and move up the rows, solving for each variable by substituting
values as we determine them.
14. The reduced row echelon form (rref) of every matrix is unique, regardless of the
sequence of elementary row operations used to obtain it.
15. Row echelon forms (ref) of a matrix are not unique.
16. The reduced row echelon form and all row echelon forms of a matrix A have the
same number of zero rows, and the leading 1's always occur in the same positions.
These are called the pivot positions of A.
17. The columns containing the leading 1's are called the pivot columns of A, and
the rows containing the leading 1's are called the pivot rows of A.
18. When a computer implements row reduction algorithms, round-off errors can be
introduced. Algorithms that magnify such errors are called unstable, and
there are techniques for minimizing erroneous results due to such errors.
The following examples will further clarify the concepts:
IV)
Elementary row operation (ERO)
1. Multiply a row (an equation) by a nonzero constant
2. Interchange 2 rows (2 equations)
3. Add a constant times one row (one equation) to another
Example 8: Gaussian elimination:

  x − 2y = 3
  2x + 3y = −1

From the augmented matrix:

  [ 1 −2 |  3 ]
  [ 2  3 | −1 ]
    → (1)
  [ 1 −2 |  3 ]
  [ 0  7 | −7 ]
    → (2)
  [ 1 −2 |  3 ]
  [ 0  1 | −1 ]

Here we have 2 elementary row operations (EROs):
i)  ERO1: Row 2 replaced by Row 2 − 2 times Row 1:  R2 → R2 − 2R1
ii) ERO2: Row 2 replaced by (1/7) times Row 2:  R2 → (1/7)R2
After these 2 EROs we arrive at a ref. If we stop here, the procedure is called Gaussian elimination.
From the last matrix we see that y = −1. We can find x by back substitution:
solving x − 2y = 3, it is easy to see that x = 1. Thus the solution is (x, y) = (1, −1).
Example 9: Gauss-Jordan elimination:
We continue from the last matrix of Example 8 above:

  [ 1 −2 |  3 ]
  [ 0  1 | −1 ]
    → (3)
  [ 1  0 |  1 ]
  [ 0  1 | −1 ]

iii) ERO3: Row 1 replaced by Row 1 + 2 times Row 2:  R1 → R1 + 2R2
Now the last matrix is the rref of the system, and the solution is (1, −1).
Before we go further, pay attention to the facts:
a) Gaussian elimination leads to the row echelon form (ref), and
b) Gauss-Jordan elimination leads to the reduced row echelon form (rref).
V)
Back substitution and/or Gauss-Jordan:
Example 10:
Find the general solution of the system:

  x1 + 2x2 − x3 + 3x4 − 2x5 = 2    (1)
  2x1 + x2 + x3 + 5x4 + 2x5 = 1    (2)
  3x1 + 5x2 − 3x3 + 7x4 − x5 = 5   (3)

We know (by III-4) that the solution will have 2 free variables (5 unknowns and 3 equations).
So we will go ahead and let x4 = t and x5 = s:

  x1 + 2x2 − x3 + 3t − 2s = 2    (1)
  2x1 + x2 + x3 + 5t + 2s = 1    (2)
  3x1 + 5x2 − 3x3 + 7t − s = 5   (3)

Augmented matrix:

  [ 1 2 −1 3 −2 | 2 ]
  [ 2 1  1 5  2 | 1 ]   First, we perform 2 EROs,
  [ 3 5 −3 7 −1 | 5 ]

R2 → R2 − 2R1 and R3 → R3 − 3R1:

  [ 1  2 −1  3 −2 |  2 ]
  [ 0 −3  3 −1  6 | −3 ]
  [ 0 −1  0 −2  5 | −1 ]

We perform 2 more EROs, R2 ↔ R3 and R3 → R3 − 3R2:

  [ 1  2 −1  3 −2 |  2 ]      [ 1  2 −1  3 −2 |  2 ]
  [ 0 −1  0 −2  5 | −1 ]  →   [ 0 −1  0 −2  5 | −1 ]
  [ 0 −3  3 −1  6 | −3 ]      [ 0  0  3  5 −9 |  0 ]

We could perform back substitution from here, but we go further.
R1 → R1 + (1/3)R3 and then R1 → R1 + 2R2:

  [ 1  2  0 14/3 −5 |  2 ]      [ 1  0  0 2/3  5 |  0 ]
  [ 0 −1  0  −2   5 | −1 ]  →   [ 0 −1  0 −2   5 | −1 ]
  [ 0  0  3   5  −9 |  0 ]      [ 0  0  3  5  −9 |  0 ]

Finally R2 → −R2 and R3 → (1/3)R3:

  [ 1 0 0 2/3  5 | 0 ]
  [ 0 1 0  2  −5 | 1 ]
  [ 0 0 1 5/3 −3 | 0 ]

In this problem, 8 Gauss-Jordan EROs were used to reach the rref (the maximum for 3 rows is 9).
Therefore, we write the solution in terms of the free variables t and s:

  x1 = −(2/3)t − 5s
  x2 = 1 − 2t + 5s
  x3 = −(5/3)t + 3s

This is a parametric (2-parameter) solution of the system.
In vector form:

  [ x1 ]   [ 0 ]   [ −2/3 ]     [ −5 ]
  [ x2 ] = [ 1 ] + [  −2  ]t +  [  5 ]s
  [ x3 ]   [ 0 ]   [ −5/3 ]     [  3 ]

The first direction vector can be rescaled by 3 or −3 (absorbing the factor into the parameter):

  [ x1 ]   [ 0 ]   [ −2 ]     [ −5 ]
  [ x2 ] = [ 1 ] + [ −6 ]t +  [  5 ]s
  [ x3 ]   [ 0 ]   [ −5 ]     [  3 ]

This is a plane passing through the point (0, 1, 0) and containing the 2 vectors
(−2, −6, −5) and (−5, 5, 3). Later, in Chapter 4 (or in Calculus 3), we will examine this further.
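Arithmetic slips are easy to make in the backward phase, so it is worth checking a parametric solution numerically. This NumPy sketch (not part of the notes) verifies the parametric form x1 = −(2/3)t − 5s, x2 = 1 − 2t + 5s, x3 = −(5/3)t + 3s against the original three equations:

```python
import numpy as np

# coefficient matrix and right-hand side of Example 10
A = np.array([[1, 2, -1, 3, -2],
              [2, 1,  1, 5,  2],
              [3, 5, -3, 7, -1]], dtype=float)
b = np.array([2.0, 1.0, 5.0])

def solution(t, s):
    """Parametric solution (x1, ..., x5) as a function of the free variables."""
    x1 = -(2/3)*t - 5*s
    x2 = 1 - 2*t + 5*s
    x3 = -(5/3)*t + 3*s
    return np.array([x1, x2, x3, t, s])

for t, s in [(0, 0), (1, 0), (0, 1), (3, -2)]:
    assert np.allclose(A @ solution(t, s), b)
print("parametric solution satisfies the system")
```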
VI) Consistent and inconsistent systems:
In algebra and Calculus 1 we already studied the solutions of the algebraic equation ax = b.
To make it simple:
Example 11:
a) The equation ax = b gives x = b/a and is consistent if a ≠ 0
b) The equation 0x = 0 gives x = 0/0 and is consistent (any x is a solution)
c) The equation 0x = b gives x = b/0 and is inconsistent for any b ≠ 0
Cases a) and b) are consistent (there is at least one solution):
Case a) the solution is unique
Case b) the solution is not unique
Case c) the solution does not exist (DNE)
Example 12: Discuss the value(s) of a for which the solution
i)   DNE
ii)  is unique
iii) is not unique; solve this case.

  x − ay = 2
  2x + 3y = 6

The augmented matrix is

  [ 1 −a | 2 ]
  [ 2  3 | 6 ]    ERO: R2 → R2 − 2R1

Note: In practice we know that row 2 will be replaced by a new row 2, therefore we write
ERO1: R2 − 2R1 (meaning R2 → R2 − 2R1), then ERO2: R2 → R2/(3 + 2a) (valid when 3 + 2a ≠ 0):

  [ 1   −a   | 2 ]      [ 1 −a |     2       ]
  [ 0  3+2a  | 2 ]  →   [ 0  1 | 2/(3 + 2a) ]

Now back substitution (but we don't have to, since the solution for y is already):

  y = 2/(3 + 2a)

This tells us that we won't have case b) as explained in Example 11 b): the row
[ 0 0 | 0 ] never occurs, since the right-hand entry 2 is nonzero.
Therefore,
a) The solution is unique when a ≠ −3/2.
b) There is no solution when a = −3/2.
c) No; the "not unique" case does not occur.
VII) Matrices operations
Matrix addition. If A and B are matrices of the same size, then they can be added.
(This is similar to the restriction on adding vectors, namely, only vectors from the same
space R^n can be added; you cannot add a 2-vector to a 3-vector, for example.) If A = [aij]
and B = [bij] are both m x n matrices, then their sum, C = A + B, is also
an m x n matrix, and its entries are given by the formula
cij = aij + bij
Thus, to find the entries of A + B, simply add the corresponding entries of A and B.
Example 13: Consider the following matrices:
Which two can be added? What is their sum?
Since only matrices of the same size can be added, only the sum F + H is defined
(G cannot be added to either F or H). The sum of F and H is
Since addition of real numbers is commutative, it follows that addition of matrices (when
it is defined) is also commutative; that is, for any matrices A and B of the same
size, A + B will always equal B + A.
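The matrices F, G, and H of Example 13 were displayed in the original notes and are not reproduced here, so the following NumPy sketch uses made-up matrices (the names A and B are my own) to show entrywise addition and its commutativity:

```python
import numpy as np

# hypothetical same-size 2x2 matrices
A = np.array([[1, -2],
              [3,  0]])
B = np.array([[4,  1],
              [-1, 2]])

C = A + B                      # entrywise: c_ij = a_ij + b_ij
print(C)                       # [[ 5 -1]
                               #  [ 2  2]]
assert (A + B == B + A).all()  # matrix addition is commutative
```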
Example 14: If any matrix A is added to the zero matrix of the same size, the result is
clearly equal to A:
This is the matrix analog of the statement a + 0 = 0 + a = a, which expresses the fact that
the number 0 is the additive identity in the set of real numbers.
Example 15: Find the matrix B such that A + B = C, where
If
then the matrix equation A + B = C becomes
This is easy to see
b11 = 1, b12 = −1, b21 = −3 and b22 = −2
Therefore,

  B = [  1 −1 ]
      [ −3 −2 ]
This example motivates the definition of matrix subtraction: If A and B are matrices of
the same size, then the entries of A − B are found by simply subtracting the entries
of B from the corresponding entries of A. Since the equation A + B = C is equivalent
to B = C − A, employing matrix subtraction above would yield the same result:
VIII) Matrix multiplication.
1)
Scalar Multiplication
To multiply a matrix by a scalar k, multiply k with all entries of the matrix. If A = [aij] is a
matrix and k is a scalar, then kA = [k aij].
That is, the matrix kA is obtained by multiplying each entry of A by k.
Example 16: The scalar multiples 3A and −5A of a matrix A are obtained by multiplying
every entry of A by 3 or by −5. For a matrix A whose second row is [ −2 4 1 ], for instance,
the second row of 3A is [ −6 12 3 ] and the second row of −5A is [ 10 −20 −5 ].
Example 17: If A and B are matrices of the same size, then A − B = A + (−B), where
−B is the scalar multiple (−1)B. If
then
Example 18: If
then
But A + B is not possible (not compatible under addition, since they are not the same size).
2)
Matrix Multiplication: The matrices A and B can be multiplied if they
are compatible under the matrix multiplication rule:
Given A (m×n) and B (q×p), the product A × B is compatible if n = q, and the result is a matrix C (m×p), i.e.:

  A_{m×n} × B_{q×p} = C_{m×p}   (when n = q)

It is not necessary that B × A is also multiplication compatible.
The entries of C = AB can be found by the rule

  cij = (AB)ij = (row i of A) · (col j of B) = Σ_{k=1}^n aik bkj

For example, c35 = (AB)35 = dot product of (row 3 of A) and (col 5 of B).
Note that, in general,

  AB ≠ BA

Regard the m×n matrix A as composed of the row vectors r1, r2, …, rm from R^n and
the n×p matrix B as composed of the column vectors c1, c2, …, cp from R^n;
then the rule for computing the entries of the matrix product AB is ri · cj = (AB)ij, that is,
Example 19: Given the two matrices

  A = [  1 0 −3 ]        B = [  1  0  4 1 ]
      [ −2 4  1 ]            [ −2  3 −1 5 ]
                             [  0 −1  2 1 ]

determine which matrix product, AB or BA, is defined and evaluate it.
Note that A is 2 × 3 and B is 3 × 4. So AB is compatible but BA is not.
Therefore C = A × B is a 2 × 4 matrix.
Now cij = Σ_{k=1}^3 aik bkj:

  c11 = Σ_{k=1}^3 a1k bk1 = a11 b11 + a12 b21 + a13 b31 = 1 + 0 + 0 = 1   (dot product of row 1 and column 1)
  c12 = Σ_{k=1}^3 a1k bk2 = a11 b12 + a12 b22 + a13 b32 = 0 + 0 + 3 = 3   (dot product of row 1 and column 2)

And so on; continuing this way gives the remaining entries of C.
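The remaining entries can be checked by machine. A NumPy sketch using the matrices of Example 19 (the first row of B is reconstructed as [1 0 4 1], consistent with the c11 and c12 computations above):

```python
import numpy as np

A = np.array([[ 1, 0, -3],
              [-2, 4,  1]])
B = np.array([[ 1,  0,  4, 1],
              [-2,  3, -1, 5],
              [ 0, -1,  2, 1]])

C = A @ B            # a 2x3 times a 3x4 gives a 2x4
print(C.shape)       # (2, 4)
print(C[0, 0], C[0, 1])   # 1 3  -- matches c11 and c12 above
print(C)
```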
Example 20: If
and
compute the (3, 5) entry of the product CD.
First, note that since C is 4 x 5 and D is 5 x 6, the product CD is indeed defined, and its
size is 4 x 6. However, there is no need to compute all twenty-four entries of CD if only
one particular entry is desired. The (3, 5) entry of CD is the dot product of row 3 in C and
column 5 in D:
Example 21: We will see that AB ≠ BA in the following example:
Note that
while
Math 410, Linear Algebra
Fall 2020
Matrices operations
III) Properties of Matrices:
Properties of Matrix Arithmetic. For scalars a, b and matrices A, B, C,
the following rules of matrix arithmetic are valid, with the assumption that the
matrices are compatible under the addition and multiplication:
a. A + B = B + A                [Commutative law for matrix addition]
b. A + (B + C) = (A + B) + C    [Associative law for matrix addition]
c. A(BC) = (AB)C                [Associative law for matrix multiplication]
d. A(B + C) = AB + AC           [Left distributive law]
e. (B + C)A = BA + CA           [Right distributive law]
f. (a + b)C = aC + bC
g. (a − b)C = aC − bC
h. a(bC) = (ab)C
i. a(BC) = (aB)C = B(aC)
1)
There is an identity square matrix I (n×n) such that for any square matrix A (n×n),
IA = AI = A.
2)
If there exists a matrix B (n×n) such that

  BA = AB = I

then A and B are the inverses of each other; the inverse is denoted A⁻¹ (or B⁻¹).
Such matrices are called non-singular matrices.
If there is no such B, then A is called a singular matrix.
3)
If B and C are inverses of A, then B = C.
4)
For a 2×2 matrix

  A = [ a b ]
      [ c d ]

Note: It is also valid to use the parenthesis notation A = ( a b ; c d ).
The inverse of A, denoted A⁻¹, is given by

  A⁻¹ = 1/(ad − bc) [  d −b ]
                    [ −c  a ]

Example 10: A = [  1 3 ]   then   A⁻¹ = 1/(4 + 6) [ 4 −3 ] = 1/10 [ 4 −3 ]
                [ −2 4 ]                          [ 2  1 ]        [ 2  1 ]
5)
The inverse of a product:

  (AB)⁻¹ = B⁻¹ A⁻¹

6)
The inverse of the inverse of A is A itself:

  (A⁻¹)⁻¹ = A

7)
Exponents of a square matrix. For integers n, r, s:

  A⁻ⁿ = (A⁻¹)ⁿ = (Aⁿ)⁻¹
  Aʳ Aˢ = A^(r+s)
  (Aʳ)ˢ = A^(rs)
8)
Transpose properties:

  (Aᵀ)ᵀ = A
  (A ± B)ᵀ = Aᵀ ± Bᵀ
  (kA)ᵀ = k Aᵀ
  (AB)ᵀ = Bᵀ Aᵀ

9)
If A is an invertible matrix, then Aᵀ is also invertible and

  (Aᵀ)⁻¹ = (A⁻¹)ᵀ
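The reverse-order rules for transposes and inverses can be spot-checked numerically. A NumPy sketch (my own, not from the notes) on random invertible matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
B = rng.random((3, 3))

# transpose of a product reverses the order
assert np.allclose((A @ B).T, B.T @ A.T)

# inverse of a product also reverses the order (A and B invertible here)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
print("reverse-order rules verified")
```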
10)
Special identity (Pascal-triangle binomial coefficients): when A and B commute (AB = BA),

  (A + B)² = A² + 2AB + B²

(In general, (A + B)² = A² + AB + BA + B².)
Example 11:

  (2A)² = (A + A)² = A² + 2AA + A² = 4A²

(A always commutes with itself.) Therefore

  (kA)² = k² A²

Proof: left to the students (use the induction method).
11)
Polynomials of matrices:

  p(A) = an Aⁿ + a(n−1) A^(n−1) + … + a2 A² + a1 A + a0 I
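A minimal sketch (my own example, not from the notes) evaluating the matrix polynomial p(A) = A² − 3A + 2I by Horner's rule:

```python
import numpy as np

def poly_of_matrix(coeffs, A):
    """Evaluate p(A) = a_n A^n + ... + a_1 A + a_0 I by Horner's rule.
    coeffs = [a_n, ..., a_1, a_0]."""
    n = A.shape[0]
    P = np.zeros((n, n))
    for a in coeffs:
        P = P @ A + a * np.eye(n)   # note: the constant term multiplies I
    return P

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = poly_of_matrix([1, -3, 2], A)   # p(A) = A^2 - 3A + 2I
print(P)                            # [[0. 2.]
                                    #  [0. 2.]]
```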
Math 410, Linear Algebra
Fall 2020
Chapter 1.5: Elementary Matrices in Finding the Inverse Matrix A⁻¹
1) Elementary Matrices:
From previous sections, we know that there are only 3 types of elementary row
operations (EROs):
1)
Add to a row a multiple of another row: Rj → Rj + aRi
2)
Replace a row by a nonzero multiple of itself: Rj → aRj
3)
Interchange 2 rows: Rj ↔ Ri
Matrices obtained from one another by a sequence of EROs are called equivalent matrices.
Example 1: Given the matrix

  A = [  1 −1 ]
      [ −3 −2 ]

We wish to apply some EROs to reduce A into a ref or rref form:

  [  1 −1 ]      [ 1 −1 ]      [ 1 −1 ]      [ 1 0 ]
  [ −3 −2 ]  E1→ [ 0 −5 ]  E2→ [ 0  1 ]  E3→ [ 0 1 ]

In this demonstration we applied 3 EROs. The corresponding elementary matrices are
obtained by performing the same EROs on the identity matrix.
They are:

  E1 = [ 1 0 ]   which is R2 → R2 + 3R1
       [ 3 1 ]

  E2 = [ 1   0  ]   which is R2 → R2/(−5)
       [ 0 −1/5 ]

  E3 = [ 1 1 ]   which is R1 → R1 + R2
       [ 0 1 ]

In short, we have:

  [ 1 1 ] [ 1   0  ] [ 1 0 ] [  1 −1 ]   [ 1 0 ]
  [ 0 1 ] [ 0 −1/5 ] [ 3 1 ] [ −3 −2 ] = [ 0 1 ]

The sequence of EROs leads to

  E3 E2 E1 A = I
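Using the matrices of Example 1 (with the first row of A reconstructed as [1 −1]), the relation E3E2E1A = I can be verified numerically, and the product E3E2E1 is then A⁻¹:

```python
import numpy as np

A  = np.array([[ 1.0, -1.0],
               [-3.0, -2.0]])
E1 = np.array([[1.0,  0.0],
               [3.0,  1.0]])    # R2 -> R2 + 3*R1
E2 = np.array([[1.0,  0.0],
               [0.0, -0.2]])    # R2 -> R2 / (-5)
E3 = np.array([[1.0,  1.0],
               [0.0,  1.0]])    # R1 -> R1 + R2

assert np.allclose(E3 @ E2 @ E1 @ A, np.eye(2))

A_inv = E3 @ E2 @ E1            # hence the inverse of A
assert np.allclose(A_inv @ A, np.eye(2))
print(A_inv)
```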
2)
The inverses of elementary matrices
The inverses of elementary matrices are also elementary matrices, and for the 3 types
of EROs they can be found as follows:
1)
If E corresponds to Rj → Rj + aRi, then E⁻¹ corresponds to Rj → Rj − aRi
2)
If E corresponds to Rj → aRj, then E⁻¹ corresponds to Rj → (1/a)Rj
3)
If E corresponds to Rj ↔ Ri, then E⁻¹ = E (interchange the same 2 rows again)
Example 2:
a) E = [ 1 a ]   →   E⁻¹ = [ 1 −a ]
       [ 0 1 ]             [ 0  1 ]

b) E = [ 1 0 0 ]   →   E⁻¹ = [ 1 0  0  ]
       [ 0 1 0 ]             [ 0 1  0  ]
       [ 0 0 4 ]             [ 0 0 1/4 ]

c) E = [ 1 0 0 ]   →   E⁻¹ = [ 1  0 0 ]
       [ 0 1 0 ]             [ 0  1 0 ]
       [ 0 2 1 ]             [ 0 −2 1 ]
The decomposition of A and A⁻¹ into elementary matrices
From the EROs on a matrix A, we have

  E3 E2 E1 A = I

Now we apply the inverse elementary matrices on the left of this equation:

  E1⁻¹ E2⁻¹ E3⁻¹ E3 E2 E1 A = E1⁻¹ E2⁻¹ E3⁻¹ I
  E1⁻¹ E2⁻¹ E2 E1 A = E1⁻¹ E2⁻¹ E3⁻¹  →  E1⁻¹ E1 A = E1⁻¹ E2⁻¹ E3⁻¹  →  A = E1⁻¹ E2⁻¹ E3⁻¹

This is how we decompose the matrix A into a product of elementary matrices:

  A = E1⁻¹ E2⁻¹ E3⁻¹

Conversely, from E3 E2 E1 A = I, we multiply both sides (on the right) by the inverse of A.
We have:

  E3 E2 E1 A A⁻¹ = I A⁻¹  →  A⁻¹ = E3 E2 E1

In general, an invertible matrix can be decomposed into a product of k elementary matrices:

  A = E1⁻¹ E2⁻¹ E3⁻¹ ⋯ E(k−1)⁻¹ Ek⁻¹

Also, the inverse of the matrix A can be found as a product of the k elementary matrices:

  A⁻¹ = Ek E(k−1) ⋯ E2 E1
Example 3: Find the inverse of

  A = [ 1 2 6 ]
      [ 1 3 4 ]
      [ 1 1 2 ]

We will try to make A equivalent to the identity through EROs:

  [ 1 2 6 ]
  [ 1 3 4 ]
  [ 1 1 2 ]
    → (E1: R2 → R2 − R1)
  [ 1 2  6 ]
  [ 0 1 −2 ]
  [ 1 1  2 ]
    → (E2: R3 → R3 − R1)
  [ 1  2  6 ]
  [ 0  1 −2 ]
  [ 0 −1 −4 ]
    → (E3: R3 → R3 + R2)
  [ 1 2  6 ]
  [ 0 1 −2 ]
  [ 0 0 −6 ]
    → (E4: R3 → R3/(−6))
  [ 1 2  6 ]
  [ 0 1 −2 ]
  [ 0 0  1 ]
    → (E5: R2 → R2 + 2R3)
  [ 1 2 6 ]
  [ 0 1 0 ]
  [ 0 0 1 ]
    → (E6: R1 → R1 − 6R3)
  [ 1 2 0 ]
  [ 0 1 0 ]
  [ 0 0 1 ]
    → (E7: R1 → R1 − 2R2)
  [ 1 0 0 ]
  [ 0 1 0 ]
  [ 0 0 1 ]

Following the EROs we have:

  E1 = [  1 0 0 ]   E2 = [  1 0 0 ]   E3 = [ 1 0 0 ]   E4 = [ 1 0   0  ]
       [ −1 1 0 ]        [  0 1 0 ]        [ 0 1 0 ]        [ 0 1   0  ]
       [  0 0 1 ]        [ −1 0 1 ]        [ 0 1 1 ]        [ 0 0 −1/6 ]

  E5 = [ 1 0 0 ]   E6 = [ 1 0 −6 ]   E7 = [ 1 −2 0 ]
       [ 0 1 2 ]        [ 0 1  0 ]        [ 0  1 0 ]
       [ 0 0 1 ]        [ 0 0  1 ]        [ 0  0 1 ]

It follows that:

  E1⁻¹ = [ 1 0 0 ]   E2⁻¹ = [ 1 0 0 ]   E3⁻¹ = [ 1  0 0 ]   E4⁻¹ = [ 1 0  0 ]
         [ 1 1 0 ]          [ 0 1 0 ]          [ 0  1 0 ]          [ 0 1  0 ]
         [ 0 0 1 ]          [ 1 0 1 ]          [ 0 −1 1 ]          [ 0 0 −6 ]

  E5⁻¹ = [ 1 0  0 ]   E6⁻¹ = [ 1 0 6 ]   E7⁻¹ = [ 1 2 0 ]
         [ 0 1 −2 ]          [ 0 1 0 ]          [ 0 1 0 ]
         [ 0 0  1 ]          [ 0 0 1 ]          [ 0 0 1 ]

We have the decomposition

  A = E1⁻¹ E2⁻¹ E3⁻¹ E4⁻¹ E5⁻¹ E6⁻¹ E7⁻¹

and the inverse of A is

  A⁻¹ = E7 E6 E5 E4 E3 E2 E1
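Using the matrices of Example 3 (with the first row of A reconstructed as [1 2 6]), the product of elementary matrices can be checked against a direct numerical inverse:

```python
import numpy as np

A = np.array([[1.0, 2.0, 6.0],
              [1.0, 3.0, 4.0],
              [1.0, 1.0, 2.0]])

# build each elementary matrix by applying the ERO to the 3x3 identity
E1 = np.eye(3); E1[1, 0] = -1      # R2 -> R2 - R1
E2 = np.eye(3); E2[2, 0] = -1      # R3 -> R3 - R1
E3 = np.eye(3); E3[2, 1] = 1       # R3 -> R3 + R2
E4 = np.eye(3); E4[2, 2] = -1/6    # R3 -> R3 / (-6)
E5 = np.eye(3); E5[1, 2] = 2       # R2 -> R2 + 2*R3
E6 = np.eye(3); E6[0, 2] = -6      # R1 -> R1 - 6*R3
E7 = np.eye(3); E7[0, 1] = -2      # R1 -> R1 - 2*R2

A_inv = E7 @ E6 @ E5 @ E4 @ E3 @ E2 @ E1
assert np.allclose(A_inv @ A, np.eye(3))
assert np.allclose(A_inv, np.linalg.inv(A))
print("A^-1 = E7 E6 E5 E4 E3 E2 E1 verified")
```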
3)
The inverse of a 2×2 matrix

  A = [ a b ]
      [ c d ]

is given by

  A⁻¹ = 1/(ad − bc) [  d −b ]
                    [ −c  a ]

4)
Solve a linear system:
A system of equations can have no solution, a unique solution, or many solutions.
This coincides with the algebra of a single equation:
x = 1/0 has no solution; x = a/b is unique, for b ≠ 0; and x = 0/0 has many solutions.
In matrix algebra, "dividing" by a matrix means multiplying by its inverse. The following example will show this.
A system can be written in the form:

  AX = b

In Chapters 1.1 and 1.2 we examined the solution of the general system of linear
equations. Now we narrow to the case where the number of unknowns is the same as the
number of equations. This tells us that A is a square matrix:

  AX = b,   where A is n × n and X and b are n × 1 matrices.

The solution of this equation has two parts,

  X = X_H + X_p,

the homogeneous solution and a particular solution, satisfying

  A X_H = 0   and   A X_p = b

In general, AX = b has a unique solution if the inverse of A exists; then

  A⁻¹ A X = A⁻¹ b  ⇒  X = A⁻¹ b.

In this case, we only have the particular solution; the homogeneous solution is 0.
Example 4: Solve:

  2x + 4y = 10
  3x + 7y = 5

This can be written as AX = b, where

  A = [ 2 4 ],   X = ( x ),   b = ( 10 )
      [ 3 7 ]        ( y )        (  5 )

We have

  A⁻¹ = 1/(14 − 12) [  7 −4 ]
                    [ −3  2 ]

and X = A⁻¹ b:

  X = 1/2 [  7 −4 ] ( 10 ) = 1/2 (  70 − 20 ) = 1/2 (  50 ) = (  25 )
          [ −3  2 ] (  5 )       ( −30 + 10 )       ( −20 )   ( −10 )

Therefore x = 25 and y = −10.
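Example 4 can be checked in a couple of lines with NumPy (a sketch, not from the notes):

```python
import numpy as np

A = np.array([[2.0, 4.0],
              [3.0, 7.0]])
b = np.array([10.0, 5.0])

X = np.linalg.inv(A) @ b                      # X = A^-1 b, approximately [25, -10]
print(X)
assert np.allclose(X, np.linalg.solve(A, b))  # solve() is preferred in practice
```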
Math 410, Linear Algebra
Fall 2020
Elementary Matrices in Finding the Inverse Matrix A Ã¢Ë†â€™1
I) Elementary Matrices:
From previous sections, we know that there are only 3 types of elementary row operations (EROs):
1) Add to a row a multiple of another row: R_j → R_j + aR_i
2) Replace a row by a multiple of itself: R_j → aR_j
3) Interchange 2 rows: R_j ↔ R_i
Any matrices related by a sequence of EROs are equivalent matrices.
The 3 elementary matrices associated with the EROs are obtained from the identity matrix by the same row operations.
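Each ERO corresponds to an elementary matrix obtained by applying that operation to the identity; left-multiplying by it performs the same operation. A small NumPy sketch (the sample matrix A below is made up for illustration):

```python
import numpy as np

I = np.eye(3)

# Type 1: R2 -> R2 + 2*R1, applied to the identity
E1 = I.copy(); E1[1] += 2 * E1[0]
# Type 2: R3 -> 5*R3
E2 = I.copy(); E2[2] *= 5
# Type 3: interchange R1 and R2
E3 = I[[1, 0, 2]]

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

# E1 @ A equals A with the same row operation applied directly
B = A.copy(); B[1] += 2 * B[0]
print(np.allclose(E1 @ A, B))  # True
```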
Example 1
• A solution of a linear system in n unknowns can be written as an ordered n-tuple: (s1, s2, …, sn).
• The set of all ordered n-tuples of real numbers is denoted by the symbol R^n, and elements of R^n are called vectors.
• Vectors are denoted in boldface type, such as a, b, v, w, and x. The following are two common ways of writing vectors.
a. Comma-delimited form: (s1, s2, …, sn)
b. Column-vector form: (s1, s2, …, sn)^T
• The standard basis vectors for R^n are denoted by e1, e2, …, en:
e1 = (1, 0, 0, …, 0)^T, e2 = (0, 1, 0, …, 0)^T, …, en = (0, 0, 0, …, 1)^T
• For x = (x1, x2, …, xn), we can express x as x = x1e1 + x2e2 + ⋯ + xnen.
If A and B are matrices of the same size, then they can be added. (This is similar to vector addition: only vectors from the same space R^n can be added; you cannot add a 2-vector to a 3-vector, for example.) If A = [a_ij] and B = [b_ij] are both m × n matrices, then their sum, C = A + B, is also an m × n matrix, and its entries are given by the formula
c_ij = a_ij + b_ij
Thus, to find the entries of A + B, simply add the corresponding entries of A and B.
Example 1: Consider the following matrices:
Which two can be added? What is their sum?
Since only matrices of the same size can be added, only the sum F + H is defined
(G cannot be added to either F or H). The sum of F and H is
Since addition of real numbers is commutative, it follows that addition of matrices (when
it is defined) is also commutative; that is, for any matrices A and B of the same
size, A + B will always equal B + A.
Example 2: If any matrix A is added to the zero matrix of the same size, the result is
clearly equal to A:
This is the matrix analog of the statement a + 0 = 0 + a = a, which expresses the fact that
the number 0 is the additive identity in the set of real numbers.
Example 3: Find the matrix B such that A + B = C, where
If
B = [b11 b12; b21 b22],
then the matrix equation A + B = C gives, entry by entry,
b11 = 1, b12 = −1, b21 = −3 and b22 = −2
Therefore,
B = [1 −1; −3 −2]
This example motivates the definition of matrix subtraction: if A and B are matrices of the same size, then the entries of A − B are found by simply subtracting the entries of B from the corresponding entries of A. Since the equation A + B = C is equivalent to B = C − A, employing matrix subtraction would yield the same result:
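The subtraction B = C − A from Example 3 can be verified entrywise. The original A and C did not survive extraction, so the A below is hypothetical, with C chosen so that B comes out as in the example:

```python
import numpy as np

B_expected = np.array([[1., -1.],
                       [-3., -2.]])

# Hypothetical A (the example's own A is not reproduced here); C = A + B
A = np.array([[2., 3.],
              [0., 5.]])
C = A + B_expected

# Matrix subtraction is entrywise: (C - A)_ij = c_ij - a_ij
B = C - A
print(B)
```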
II) Matrix multiplication.
1) Scalar Multiplication
Multiplying a matrix by a scalar k means multiplying k with all entries of the matrix. If A = [a_ij] is a matrix and k is a scalar, then kA = [k a_ij]. That is, the matrix kA is obtained by multiplying each entry of A by k.
Example 4: The scalar multiples 3A and −5A are obtained by multiplying every entry of A by 3 or by −5. For instance, if the second row of A is (−2, 4, 1), then the second row of 3A is (−6, 12, 3) and the second row of −5A is (10, −20, −5).
Example 5: If A and B are matrices of the same size, then A Ã¢Ë†â€™ B = A + (Ã¢Ë†â€™ B), where
Ã¢Ë†â€™ B is the scalar multiple (Ã¢Ë†â€™1) B. If
then
Example 6: If
then
But A+B is not possible (not compatible under addition, since they are not the same size)
2) Matrix Multiplication: The matrices A and B can be multiplied if they are compatible under the matrix multiplication rule. Given A_{m×n} and B_{q×p}, the product A_{m×n} × B_{q×p} is compatible if n = q, and the result matrix C is m × p, i.e.:
A_{m×n} × B_{n×p} = C_{m×p}
It is not necessary that B × A is also multiplication compatible.
The entries of C = AB can be found by the rule
c_ij = (AB)_ij = (row i of A) · (col j of B) = Σ_{k=1}^{n} a_ik b_kj
For example,
c35 = (AB)_35 = dot product of (row 3 of A) · (col 5 of B)
Note that, in general,
AB ≠ BA
Think of the m × n matrix A as composed of the row vectors r1, r2, …, rm from R^n, and the n × p matrix B as composed of the column vectors c1, c2, …, cp from R^n; then the rule for computing the entries of the matrix product AB is r_i · c_j = (AB)_ij, that is,
Example 7: Given the two matrices
A = [1 0 −3; −2 4 1] and B = [1 0 4 1; −2 3 −1 5; 0 −1 2 1]
determine which matrix product, AB or BA, is defined and evaluate it.
Note that A is 2 × 3 and B is 3 × 4. So AB is compatible but BA is not. Therefore C = A × B is a 2 × 4 matrix, with entries
c_ij = Σ_{k=1}^{3} a_ik b_kj
c11 = Σ_{k=1}^{3} a_1k b_k1 = a11b11 + a12b21 + a13b31 = 1 + 0 + 0 = 1 (dot product of row 1 and column 1)
c12 = Σ_{k=1}^{3} a_1k b_k2 = a11b12 + a12b22 + a13b32 = 0 + 0 + 3 = 3 (dot product of row 1 and column 2)
And so on. Altogether we will have
C = AB = [1 3 −2 −2; −10 11 −10 19]
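The row-by-column rule of Example 7 can be checked with NumPy. B's top-left entry was lost in extraction; it is taken as 1 here so that c11 = 1, consistent with the computation above:

```python
import numpy as np

A = np.array([[1., 0., -3.],
              [-2., 4., 1.]])          # 2 x 3
B = np.array([[1., 0., 4., 1.],        # top-left entry reconstructed
              [-2., 3., -1., 5.],
              [0., -1., 2., 1.]])      # 3 x 4

C = A @ B                              # 2 x 4; B @ A is not defined
# c11 and c12 via the summation rule c_ij = sum_k a_ik * b_kj
c11 = sum(A[0, k] * B[k, 0] for k in range(3))
c12 = sum(A[0, k] * B[k, 1] for k in range(3))
print(C.shape, c11, c12)  # (2, 4) 1.0 3.0
```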
Example 8: If
and
compute the (3, 5) entry of the product CD.
First, note that since C is 4 × 5 and D is 5 × 6, the product CD is indeed defined, and its size is 4 × 6. However, there is no need to compute all twenty-four entries of CD if only one particular entry is desired. The (3, 5) entry of CD is the dot product of row 3 in C and column 5 in D:
Example 9: We will see that AB Ã¢â€°Â  BA in the following example:
Note that
while
Math 410, Linear Algebra, Fall 2020
Chapter 1.7: Diagonal, Triangular and Symmetric Matrices
Example 1: The product of a matrix A with its transpose is symmetric.
A = [2 1 −1 3; 1 4 0 1; 3 −2 1 4],  A^T = [2 1 3; 1 4 −2; −1 0 1; 3 1 4]
AA^T = [2 1 −1 3; 1 4 0 1; 3 −2 1 4] [2 1 3; 1 4 −2; −1 0 1; 3 1 4] = [15 9 15; 9 18 −1; 15 −1 30]
which is symmetric.
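That AA^T is symmetric can be confirmed numerically with the same A:

```python
import numpy as np

A = np.array([[2., 1., -1., 3.],
              [1., 4., 0., 1.],
              [3., -2., 1., 4.]])

M = A @ A.T
# A A^T is always symmetric: (A A^T)^T = (A^T)^T A^T = A A^T
print(np.allclose(M, M.T))  # True
print(M)
```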
Example 2:
A³ = [1/2 0 0; 0 1/3 0; 0 0 1/4]³ = [1/8 0 0; 0 1/27 0; 0 0 1/64]
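Powers of a diagonal matrix just raise each diagonal entry to that power; Example 2 checked in NumPy:

```python
import numpy as np

A = np.diag([1/2, 1/3, 1/4])

# A^3 is diagonal with entries (1/2)^3, (1/3)^3, (1/4)^3
A3 = np.linalg.matrix_power(A, 3)
print(np.allclose(A3, np.diag([1/8, 1/27, 1/64])))  # True
```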
Example 3: Determine whether the matrix A = [a_ij] is symmetric.
a) a_ij = 2i² + 2j³. Since a_ji = 2j² + 2i³ ≠ 2i² + 2j³, A is not symmetric.
b) a_ij = 2i + 2j. Since a_ji = 2j + 2i = 2i + 2j, A is symmetric.
Math 410, Linear Algebra
Dr. B Truong
Name: _______________________
CHAPTER 3 REVIEW
1) Equations of lines
a) Lines: In space, the vector equation of a line is given by
r = r0 + tv
where r0 is the starting vector (the point P0), v is the direction vector, and t is the parameter.
Example 1: Equation of the line passing through the point P0 = (2, −1, 2) and parallel to v = (1, 4, −2).
i) The vector equation: r = r0 + tv → (x, y, z)^T = (2, −1, 2)^T + t (1, 4, −2)^T
ii) The parametric equations:
x = 2 + t
y = −1 + 4t
z = 2 − 2t
iii) The symmetric equations:
(x − 2)/1 = (y + 1)/4 = (z − 2)/(−2)  (= t)
In general the symmetric equations of a line are given by
(x − x0)/a = (y − y0)/b = (z − z0)/c  (= t)
where P0 = (x0, y0, z0) is a point on the line and the vector v = (a, b, c) is parallel to the line.
Example 2: Find an equation for the line passing through A = (1, 3, −1) and B = (2, 0, 3).
a) First we find a vector parallel to the line, which is AB = B − A = (2 − 1, 0 − 3, 3 − (−1)) = (1, −3, 4).
b) Choose a point; here we choose A. The symmetric equations of the line are
(x − 1)/1 = (y − 3)/(−3) = (z − (−1))/4  (= t)
Vector form: r = r0 + tv → r = (x, y, z)^T = (1, 3, −1)^T + t (1, −3, 4)^T
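Example 2's three forms describe the same line; a quick numerical check that points generated by the vector form satisfy the symmetric equations:

```python
import numpy as np

r0 = np.array([1., 3., -1.])   # point A
v = np.array([1., -3., 4.])    # direction AB

for t in (-1.0, 0.5, 2.0):
    x, y, z = r0 + t * v
    # symmetric form: (x-1)/1 = (y-3)/(-3) = (z+1)/4 = t
    ratios = ((x - 1) / 1, (y - 3) / -3, (z + 1) / 4)
    assert np.allclose(ratios, t)
print("all points satisfy the symmetric equations")
```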
7) Equations of Planes (2- and 3-space):
One way to look at the plane: fix a normal vector n = (a, b, c). Then choose a point P0 = (x0, y0, z0) on the plane and a variable point P = (x, y, z) on the plane. We know n is orthogonal to P0P:
So, we have
n · P0P = 0
Therefore the equation of the plane is
(a, b, c) · (x − x0, y − y0, z − z0) = 0
a(x − x0) + b(y − y0) + c(z − z0) = 0
Example 4: Equation of the plane passing through the point P0 = (3, −1, 2)
and orthogonal to the line r = (1, 2, −2)^T + t (2, 1, 1)^T.
Pay attention to the fact that the vector parallel to the line is orthogonal to the plane; thus n = (2, 1, 1). The equation of the plane passing through the point P0 = (3, −1, 2) is
a(x − x0) + b(y − y0) + c(z − z0) = 0
2(x − 3) + 1(y − (−1)) + 1(z − 2) = 0, or
2x − 6 + y + 1 + z − 2 = 0 → 2x + y + z = 7
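Example 4 can be verified by computing the plane's constant from the point and normal:

```python
import numpy as np

n = np.array([2., 1., 1.])     # direction of the line = normal of the plane
P0 = np.array([3., -1., 2.])

# Plane 2x + y + z = d passes through P0, so d = n . P0
d = n @ P0
print(d)  # 7.0
```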
Example 5:
The lines r1 = (1, 2, −2)^T + t (2, 1, 1)^T and r2 = (2, 0, −1)^T + t (1, 2, −1)^T
are not parallel, since the vectors parallel to the lines,
v1 = (2, 1, 1) and v2 = (1, 2, −1),
are not parallel.
Example 6:
The planes 2x + 3y − 5z = 0 and 4x − 6y − 2z = 7 are perpendicular, because the normal vectors n1 = (2, 3, −5) and n2 = (4, −6, −2) are perpendicular:
n1 · n2 = (2, 3, −5) · (4, −6, −2) = 8 − 18 + 10 = 0
8) Distance from a point to the plane:
The distance from a point P1 to the plane is the length of the projection of the vector P0P1 onto the normal line of the plane, where P0 is any point chosen on the plane:
D = ‖Proj_n(P0P1)‖ = |n · P0P1| / ‖n‖
Example 7: Distance from P1 = (1, −2, −1) to the plane 2x − 5y + 3z = 6.
We know that n = (2, −5, 3). Choose a point on the plane, say P0 = (3, 0, 0); thus u = P1P0 = (2, 2, 1), and
D = ‖Proj_n u‖ = |n · u| / ‖n‖ = |(2, −5, 3) · (2, 2, 1)| / ‖(2, −5, 3)‖ = |4 − 10 + 3| / √(4 + 25 + 9) = 3/√38 = 3√38/38
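Example 7's distance can be checked via the projection formula (note ‖n‖ = √38 for n = (2, −5, 3)):

```python
import numpy as np

n = np.array([2., -5., 3.])     # normal of 2x - 5y + 3z = 6
P1 = np.array([1., -2., -1.])
P0 = np.array([3., 0., 0.])     # a point on the plane: 2*3 = 6

u = P1 - P0
D = abs(n @ u) / np.linalg.norm(n)
print(D)  # 3/sqrt(38) ~ 0.4867
```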
Example 8:
a) Find the distance between the point (4, −1) and the line 2x − y + 3 = 0.
According to the formula,
D = |ax0 + by0 + c| / √(a² + b²) = |2(4) + (−1)(−1) + 3| / √(2² + (−1)²) = 12/√5 = 12√5/5
b) Find the distance between the point (4, −1, 3) and the plane 2x − y + 3z + 6 = 0.
According to the formula,
D = |ax0 + by0 + cz0 + d| / √(a² + b² + c²) = |2(4) + (−1)(−1) + 3(3) + 6| / √(2² + (−1)² + 3²) = 24/√14 = 12√14/7
Example 9: In 2-dim: equation of the line segment from A(2, −1) to B(3, 5).
Let r = (x, y)^T, OA = r0 = (x0, y0)^T, OB = r1 = (x1, y1)^T.
The equation of the line is r − r0 = t(r1 − r0):
(x, y)^T − (2, −1)^T = t ((3, 5)^T − (2, −1)^T)
or
(x, y)^T = (2, −1)^T + t ((3, 5)^T − (2, −1)^T) = (2, −1)^T + t (1, 6)^T
For the segment from A to B, t runs over 0 ≤ t ≤ 1.
Example 10: Find the area of the triangle given by the vertices A(1, 3, 1), B(5, 2, −1) and C(4, 2, −3).
Find any 2 vectors connecting these points:
AC = C − A = (3, −1, 4), AB = B − A = (4, −1, −2)
Area = ½ ‖AC × AB‖ = ½ ‖det[i j k; 3 −1 4; 4 −1 −2]‖ = ½ ‖(2 + 4, 16 + 6, −3 + 4)‖ = ½ ‖(6, 22, 1)‖ = ½ √(36 + 484 + 1) = ½ √521
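Example 10's area via np.cross, using the vectors AC = (3, −1, 4) and AB = (4, −1, −2) as in the computation above:

```python
import numpy as np

AC = np.array([3., -1., 4.])
AB = np.array([4., -1., -2.])

cp = np.cross(AC, AB)            # (6, 22, 1)
area = 0.5 * np.linalg.norm(cp)  # half the parallelogram area
print(cp, area)
```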
Example 11: Find the volume of the parallelepiped formed by the 3 vectors u, v and w as its adjacent sides, where
u = (1, 1, 1); v = (3, −1, 4) and w = (4, −1, −2).
We use the triple scalar product:
V = |u · (v × w)| = |det[u1 u2 u3; v1 v2 v3; w1 w2 w3]| = |det[1 1 1; 3 −1 4; 4 −1 −2]|
Since v × w = (2 + 4, 16 + 6, −3 + 4) = (6, 22, 1), we get
V = |(1, 1, 1) · (6, 22, 1)| = |6 + 22 + 1| = 29
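Example 11's volume via the triple scalar product; the determinant route gives the same value:

```python
import numpy as np

u = np.array([1., 1., 1.])
v = np.array([3., -1., 4.])
w = np.array([4., -1., -2.])

V = abs(u @ np.cross(v, w))                      # |u . (v x w)|
V_det = abs(np.linalg.det(np.array([u, v, w])))  # same, as a determinant
print(V, V_det)  # both 29 (up to rounding)
```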
1) Orthogonality in R^n; equations of lines and planes in general
a) Vectors u and v are orthogonal if their dot product is 0:
u · v = 0 ⟺ u ⊥ v
The sign ⟺ indicates "iff" (if and only if), a both-ways implication.
b) The equation of the line (in 2-D) is
ax + by + c = 0    (1)
A vector orthogonal to it is n = (a, b).    (2)
Note that a vector parallel to it is v = (b, −a) or v = (−b, a).
c) The equation of the plane (in 3-D) is
ax + by + cz + d = 0    (3)
A vector orthogonal to it is n = (a, b, c).
2) Projection theorem: u decomposes into a component along v and a component orthogonal to v (v⊥):
Proj_v u = (u · v / ‖v‖²) v    (4)
is the vector component of u along v.
Proj_{v⊥} u = u − Proj_v u    (5)
is the vector component of u orthogonal to v (v⊥).
3) The matrix of the projection of a vector u onto a line which makes an angle θ with the positive x-axis is
P_θ = [cos²θ  sinθ cosθ; sinθ cosθ  sin²θ]    (6)
This formula can be derived more easily than in the text. Let v = (cosθ, sinθ) be the unit vector which makes an angle θ with the positive x-axis (the line here is the span of v, through the origin). Then the projection of any vector u on v is
Proj_v u = (u · v / ‖v‖²) v = (u · (cosθ, sinθ)) (cosθ, sinθ)^T, since ‖v‖² = 1
which is
(cosθ, sinθ)^T (cosθ  sinθ) u = P_θ u = [cos²θ  sinθ cosθ; sinθ cosθ  sin²θ] u
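The projection matrix P_θ can be sanity-checked numerically: it is idempotent (projecting twice changes nothing) and fixes vectors along v:

```python
import numpy as np

def proj_matrix(theta):
    # P = v v^T for the unit vector v = (cos t, sin t)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, s * c],
                     [s * c, s * s]])

P = proj_matrix(np.pi / 3)
v = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])

print(np.allclose(P @ P, P))   # True: P is idempotent
print(np.allclose(P @ v, v))   # True: v lies on the line, so it is fixed
```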
4) Let v = (cosθ, sinθ) be the unit vector which makes an angle θ with the positive x-axis. Then the reflection of any vector u over the line of v is H_θ u, where
H_θ = [cos 2θ  sin 2θ; sin 2θ  −cos 2θ]    (7)
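Similarly for the reflection matrix H_θ: reflecting twice is the identity, and vectors on the mirror line are fixed:

```python
import numpy as np

def reflection_matrix(theta):
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c2, s2],
                     [s2, -c2]])

H = reflection_matrix(0.7)
v = np.array([np.cos(0.7), np.sin(0.7)])

print(np.allclose(H @ H, np.eye(2)))  # True: H^2 = I
print(np.allclose(H @ v, v))          # True: the mirror line is fixed
```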
5) The distance D between a point and a line:
Let P0(x0, y0) and the line ℓ: ax + by + c = 0. Then
D = |ax0 + by0 + c| / √(a² + b²) = |u · n| / ‖n‖    (8)
where u is a vector from P0(x0, y0) to any point on the line and n is a vector orthogonal to the line.
6) The distance D between a point and a plane (π):
Let P0(x0, y0, z0) and the plane (π): ax + by + cz + d = 0. Let u be a vector from P0(x0, y0, z0) to any point on the plane and n a vector orthogonal to the plane. Then
D = |ax0 + by0 + cz0 + d| / √(a² + b² + c²) = |u · n| / ‖n‖    (8)
Equations of lines and planes in space
a) Equation of the line
r(t) = r0 + tv    (9)
This is called the parametric equation of a line passing through the point r0 and parallel to the vector v. Here we can consider r0 as the vector connecting the origin to the point P0(x0, y0, z0). If the line passes through the origin then r0 = 0 and
r(t) = tv    (9a)
b) Equation of the plane
r(t) = r0 + t1v1 + t2v2    (10)
This is called the parametric equation of a plane passing through the point r0 and parallel to the non-collinear vectors v1 and v2. Here again, we can consider r0 as the vector connecting the origin to the point P0(x0, y0, z0). If the plane passes through the origin then r0 = 0, and
r(t) = t1v1 + t2v2    (10b)
II) The cross product:
In vector study, the cross product of the 2 vectors u, v is a vector, and it is defined as
u × v = det[i j k; u1 u2 u3; v1 v2 v3] = (u2v3 − u3v2, u3v1 − u1v3, u1v2 − u2v1)    (11)
Main properties:
a) The cross product of 2 vectors u, v is a vector which is orthogonal to both.
b) u × v = −v × u
c) ‖u × v‖² = ‖u‖² ‖v‖² − (u · v)²  (Lagrange identity)
Geometric interpretation:
‖u × v‖ = ‖u‖ ‖v‖ sinθ = area of the parallelogram formed by the vectors u and v as its adjacent sides, where θ is the angle between them.
So basically, we can find the area of any triangle ABC: if θ is the angle at A then
Area ABC = ½ ‖AB‖ ‖AC‖ sinθ = ½ ‖AB × AC‖    (12)
The triple scalar product:
u · (v × w) = det[u1 u2 u3; v1 v2 v3; w1 w2 w3]    (13)
The absolute value of the triple scalar product is the volume of the parallelepiped formed by the 3 vectors u, v and w as its adjacent sides.
______________________________________________________________________________
Example 1:
a) The planes 2x + 3y − z = 0 and 4x + 6y − 2z = 7 are parallel, because the normal vectors n1 = (2, 3, −1) and n2 = (4, 6, −2) are parallel.
b) The planes 2x + 3y − z = 0 and 4x + 6y − 5z = 7 are not parallel, because the normal vectors n1 = (2, 3, −1) and n2 = (4, 6, −5) are not parallel.
c) The lines r1 = (1, 2, −2)^T + t (2, 1, 1)^T and r2 = (2, 0, −1)^T + t (1, 2, −1)^T are not parallel, since the vectors parallel to the lines, v1 = (2, 1, 1) and v2 = (1, 2, −1), are not parallel.
d) The planes 2x + 3y − 5z = 0 and 4x − 6y − 2z = 7 are perpendicular, because the normal vectors n1 = (2, 3, −5) and n2 = (4, −6, −2) are perpendicular:
n1 · n2 = (2, 3, −5) · (4, −6, −2) = 8 − 18 + 10 = 0
Example 2: u = (1, −2, −1), v = (2, −2, 1)
a) Find the vector component of u along v:
Proj_v u = (u · v / ‖v‖²) v = ((1)(2) + (−2)(−2) + (−1)(1)) / (4 + 4 + 1) · (2, −2, 1) = (5/9)(2, −2, 1)
b) Find the vector component of u orthogonal to v (v⊥):
Proj_{v⊥} u = u − Proj_v u = (1, −2, −1) − (5/9)(2, −2, 1) = ((9 − 10)/9, (−18 + 10)/9, (−9 − 5)/9) = (−1/9)(1, 8, 14)
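Example 2's decomposition checked numerically; the orthogonal component is indeed perpendicular to v:

```python
import numpy as np

u = np.array([1., -2., -1.])
v = np.array([2., -2., 1.])

proj = (u @ v) / (v @ v) * v   # (5/9)(2, -2, 1)
perp = u - proj                # (-1/9)(1, 8, 14)

print(np.allclose(proj, 5 / 9 * v))  # True
print(abs(perp @ v) < 1e-12)         # True: perp is orthogonal to v
```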
Example 3:
Proj_v u = (u · v / ‖v‖²) v
The vector w = u + v is the diagonal (AD) of a parallelogram in which u and v represent 2 adjacent sides. Furthermore, the other diagonal (BA) represents the difference vector u − v.
Let P(x1, y1) and Q(x2, y2) be points in a coordinate system; the differences of their coordinates are called the components of the vector PQ. The vector PQ (with its tail at P and its tip at Q) is
PQ = (x2 − x1, y2 − y1)    (1)
This also applies in 3 dimensions:
PQ = (x2 − x1, y2 − y1, z2 − z1)
The 2 vectors u and v are equal if their components are equal term by term.
A linear combination of the set of n vectors {v_j}_{j=1}^n is defined as
w = Σ_{j=1}^{n} C_j v_j = C1v1 + C2v2 + C3v3 + … + Cnvn    (2)
The norm ‖v‖ of a vector is the length (or the magnitude) of the vector, defined by
‖v‖ = √(v1² + v2² + … + vn²)    (3)
* The norm of a vector always satisfies:
a) ‖v‖ ≥ 0, with equality when v = 0
b) ‖kv‖ = |k| ‖v‖    (4)
* The unit vector of a vector v is
u = v / ‖v‖    (5)
The process of finding the unit vector of a vector v is called normalizing.
* The standard unit vectors in R^n (here, in R^3): a vector v is a linear combination of the standard unit vectors, with its components as coefficients.
Standard units: i = (1, 0, 0); j = (0, 1, 0); k = (0, 0, 1)
v = (x1, x2, x3) = x1 i + x2 j + x3 k = x1 (1, 0, 0) + x2 (0, 1, 0) + x3 (0, 0, 1)
In general, a vector in R^n can be expressed as
v = (x1, x2, …, xn) = x1 e1 + x2 e2 + x3 e3 + … + xn en
where
e1 = (1, 0, …, 0)^T, e2 = (0, 1, 0, …, 0)^T, …, en = (0, …, 0, 1)^T
(covariant vectors)
An important note here: "vector" is actually short for the covariant vector; it is a column vector, denoted e_j, while the contravariant vector e^j is written as a horizontal (row) vector, a transpose of e_j. Thus
e^j = (0, 0, 0, …, 1, 0, …, 0), a row vector, while e_j is the same entries written as a column.
But some textbooks do not pay attention to this and write vectors horizontally for convenience.
*
The distance between 2 vecto…