Mathematics for Management -- Supplementary Electronic Materials


Worked-Out Exercises: Linear Combinations of Vectors

The following exercises come with hints, solutions, and solution paths. Try to solve each exercise on your own first; if you need help, open the "Hint" section. To check your answer, open the "Solution" section, and to see the details or to compare with your own approach, open the "Solution Path" section.

 
Exercise 1: Linear dependence & independence
 
Decide whether the following vectors are linearly independent or linearly dependent:

a) \(\vec{v}_1 = (1,2)^T\) and \(\vec{v}_2 = (2,1)^T\)

b) \(\vec{v}_1 = (1,2,1,2)^T\), \(\vec{v}_2 = (-1,0,0,2)^T\) and \(\vec{v}_3 = (1,-1,0,0)^T\)

    Hint

  • A nonzero determinant of the matrix with the given vectors as columns, e.g. \(\det( \vec{v}_1 \, \vec{v}_2) \neq 0\) for two vectors in \(\mathbb{R}^2\), implies linear independence.
  • If the linear combination \(\lambda_1 \vec{v}_1 + \lambda_2 \vec{v}_2 + \ldots = \vec{0}\) admits only the trivial solution \(\lambda_1 = \lambda_2 = \ldots = 0\), the vectors are linearly independent.
  • The non-solvability of \(\lambda \vec{v}_1 = \vec{v}_2\) (collinearity condition) implies linear independence of two vectors.

    Solution

    a) The vectors are linearly independent.

    b) The three vectors are linearly independent.


    Solution Path

    a) Here, we have three ways to check for linear independence:

  • A nonzero determinant of the matrix with columns \(\vec{v}_1\) and \(\vec{v}_2\), i.e. \(\det( \vec{v}_1 \, \vec{v}_2) \neq 0\), implies linear independence.
  • If \(\lambda_1 \vec{v}_1 + \lambda_2 \vec{v}_2 = \vec{0}\) admits only the trivial solution \(\lambda_1 = \lambda_2 = 0\), the vectors are linearly independent.
  • The non-solvability of \(\lambda \vec{v}_1 = \vec{v}_2\) (collinearity condition) implies linear independence of two vectors.
    For the given two vectors, let us use the collinearity condition as a test for linear independence or linear dependence: \[ \lambda \begin{pmatrix} 1 \\ 2 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 2 \\ 1 \end{pmatrix} \qquad \Longrightarrow \quad \left\{ \begin{array}{l c l} \lambda \cdot 1 \, \, = \, \, 2 & \, \, \Rightarrow \, \, & \lambda \, \, = \, \, 2 \\ \lambda \cdot 2 \, \, = \, \, 1 & \, \, \Rightarrow \, \, & \lambda \, \, = \, \, \tfrac{1}{2} \end{array} \right. \] The two conditions contradict each other, so no such \(\lambda\) exists. Hence, the two vectors are not collinear (they do not lie on the same line) and are thus linearly independent.

    b) For the three vectors in \(\mathbb{R}^4\) we discuss the homogeneous linear system \[ \lambda_1 \begin{pmatrix} 1 \\ 2 \\ 1 \\ 2 \end{pmatrix} + \lambda_2 \begin{pmatrix} -1 \\ 0 \\ 0 \\ 2 \end{pmatrix} + \lambda_3 \begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \]

    The equivalent augmented matrix for the unknowns \(\lambda_1\), \(\lambda_2\), and \(\lambda_3\) reads as \begin{eqnarray*} \left(\begin{array}{c c c | c} 1 & -1 & 1 & 0 \\ 2 & 0 & -1 & 0 \\ 1 & 0 & 0 & 0 \\ 2 & 2 & 0 & 0 \end{array} \right) & \leadsto & \left(\begin{array}{c c c | c} 1 & 0 & 0 & 0 \\ 2 & 2 & 0 & 0 \\ 2 & 0 & -1 & 0 \\ 1 & -1 & 1 & 0 \end{array} \right) \, \, \leadsto \, \, \left(\begin{array}{c c c | c} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & -1 & 1 & 0 \end{array} \right) \\ & \leadsto & \left(\begin{array}{c c c | c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 1 & 0 \end{array} \right) \, \, \leadsto \, \, \left(\begin{array}{c c c | c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right) \end{eqnarray*}
    Row operations: 1. swap the rows; 2. (II) - 2x (I), (III) - 2x (I), (IV) - (I); 3. (II)/2 and (III)/(-1); 4. (IV) + (II) - (III).

    Hence, the homogeneous system has only the trivial solution \(\lambda_1 = \lambda_2 = \lambda_3 = 0\), and thus the three vectors are linearly independent.
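    As an optional cross-check (not part of the original exercise text), linear independence can also be verified numerically, e.g. in Python with the NumPy library: the vectors are linearly independent exactly when the matrix having them as columns has full column rank. A minimal sketch, assuming NumPy is installed:

        import numpy as np

        # a) the two vectors in R^2 as columns of a 2x2 matrix
        A = np.column_stack([(1, 2), (2, 1)])
        print(np.linalg.matrix_rank(A) == A.shape[1])   # True -> linearly independent

        # b) the three vectors in R^4 as columns of a 4x3 matrix
        B = np.column_stack([(1, 2, 1, 2), (-1, 0, 0, 2), (1, -1, 0, 0)])
        print(np.linalg.matrix_rank(B) == B.shape[1])   # True -> linearly independent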

     
    Exercise 2: Unique coordinate representation with the aid of a basis
     
    The vectors \(\vec{v}_{1}\), \(\vec{v}_{2}\) and \(\vec{v}_{3}\) are a basis of \(\mathbb{R}^{3}\). Find the unique coordinate triple \((\lambda_{1}, \lambda_{2}, \lambda_{3}) \in \mathbb{R}^{3}\) to represent \(\vec{v}_{4}\) as a linear combination of the basis vectors, i.e. \(\vec{v}_{4} = \sum_{i=1}^3 \lambda_i \vec{v}_i\). Consider \[ \vec{v}_{1} = \begin{pmatrix} 8 \\ 1 \\ 3 \end{pmatrix} \, , \, \vec{v}_{2} = \begin{pmatrix} 5 \\ 4 \\ 10 \end{pmatrix} \, , \, \vec{v}_{3} = \begin{pmatrix} 0 \\ 2 \\ 7 \end{pmatrix} \, , \quad \text{and} \quad \vec{v}_{4} = \begin{pmatrix} 3 \\ -1\\ 0 \end{pmatrix} \, . \]

    Hint

    If \(\vec{v}_{4}\) is a linear combination of \(\vec{v}_{1}\), \(\vec{v}_{2}\), and \(\vec{v}_{3}\), then the coefficients \(\lambda_{1}, \lambda_{2}, \lambda_{3}\) can be thought of as the entries of a vector \(\vec{x} = (\lambda_{1}, \lambda_{2}, \lambda_{3})^T\) that uniquely solves the system of linear equations \(A\vec{x} = \vec{v}_{4}\), where the columns of \(A\) are the vectors \(\vec{v}_{1}\), \(\vec{v}_{2}\), and \(\vec{v}_{3}\).


    Solution

    The unique coordinate triple is \[ \lambda_1 \, \, = \, \, 1 \, , \qquad \lambda_2 \, \, = \, \, -1 \, , \qquad \text{and} \qquad \lambda_3 \, \, = \, \, 1 \] such that \[ \begin{pmatrix} 8 \\ 1 \\ 3 \end{pmatrix} - \begin{pmatrix} 5 \\ 4 \\ 10 \end{pmatrix} + \begin{pmatrix} 0 \\ 2 \\ 7 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 8-5 \\ 1-4+2\\ 3-10+7 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 3 \\ -1\\ 0 \end{pmatrix} \]


    Solution Path

    First, we check that \(\vec{v}_{1}\), \(\vec{v}_{2}\) and \(\vec{v}_{3}\) indeed form a basis of \(\mathbb{R}^{3}\): \[ \det \begin{pmatrix} 8 & 5 & 0 \\ 1 & 4 & 2 \\ 3 & 10 & 7 \end{pmatrix} \, \, = \, \, 8 \cdot 4 \cdot 7 + 5 \cdot 2 \cdot 3 - 10 \cdot 2 \cdot 8 - 7 \cdot 1 \cdot 5 \, \, = \, \, 59 \, \, \neq \, \, 0 \, . \] (Rule of Sarrus; the products involving the zero entry vanish.) Next, the representation \(\sum_{i=1}^3 \lambda_i \vec{v}_i = \vec{v}_{4}\) leads to \[ \lambda_1 \begin{pmatrix} 8 \\ 1 \\ 3 \end{pmatrix} + \lambda_2 \begin{pmatrix} 5 \\ 4 \\ 10 \end{pmatrix} + \lambda_3 \begin{pmatrix} 0 \\ 2 \\ 7 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 3 \\ -1\\ 0 \end{pmatrix} \] and thus to the augmented matrix system for \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) \begin{eqnarray*} \left( \begin{array}{c c c | c} 8 & 5 & 0 & 3 \\ 1 & 4 & 2 & -1 \\ 3 & 10 & 7 & 0 \end{array} \right) & \leadsto & \left( \begin{array}{c c c | c} 1 & 4 & 2 & -1 \\ 0 & -2 & 1 & 3 \\ 0 & -27 & -16 & 11 \end{array} \right) \, \, \leadsto \, \, \left( \begin{array}{c c c | c} 1 & 4 & 2 & -1 \\ 0 & -2 & 1 & 3 \\ 0 & -59 & 0 & 59 \end{array} \right) \end{eqnarray*} Row operations: 1. reorder the rows to (II), (III), (I); then, with respect to the reordered rows, (II) - 3x (I) and (III) - 8x (I). 2. (III) + 16x (II).

    Back-substitution gives \(\lambda_2 = 59/(-59) = -1\) from the last row, then \(-2\lambda_2 + \lambda_3 = 3\), i.e. \(\lambda_3 = 3 + 2\lambda_2 = 1\), and finally \(\lambda_1 + 4\lambda_2 + 2\lambda_3 = -1\), i.e. \(\lambda_1 = -1 + 4 - 2 = 1\). Hence, the unique coordinate triple is \[ \lambda_1 \, \, = \, \, 1 \, , \qquad \lambda_2 \, \, = \, \, -1 \, , \qquad \text{and} \qquad \lambda_3 \, \, = \, \, 1 \] such that \[ \begin{pmatrix} 8 \\ 1 \\ 3 \end{pmatrix} - \begin{pmatrix} 5 \\ 4 \\ 10 \end{pmatrix} + \begin{pmatrix} 0 \\ 2 \\ 7 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 8-5 \\ 1-4+2\\ 3-10+7 \end{pmatrix} \, \, = \, \, \begin{pmatrix} 3 \\ -1\\ 0 \end{pmatrix} \]
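    As an optional numerical cross-check (a sketch assuming Python with NumPy, not part of the course text), one can solve the system \(A\vec{x} = \vec{v}_4\) directly, where the columns of \(A\) are the basis vectors:

        import numpy as np

        # columns of A are the basis vectors v1, v2, v3
        A = np.column_stack([(8, 1, 3), (5, 4, 10), (0, 2, 7)])
        v4 = np.array([3, -1, 0])

        print(np.linalg.det(A))        # approx. 59 (nonzero, so v1, v2, v3 form a basis)
        print(np.linalg.solve(A, v4))  # approx. [ 1. -1.  1.] = (lambda_1, lambda_2, lambda_3)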

     
    Exercise 3: Constructing a basis in 3-dim. space
     
    Find a vector \(\vec{v} \in \mathbb{R}^3\) such that the following three vectors form a basis of \(\mathbb{R}^3\) \[ \vec{u}_1 \, \, = \, \, \begin{pmatrix}1\\-1\\3\end{pmatrix} \, , \qquad \vec{u}_2 \, \, = \, \, \begin{pmatrix}-2\\2\\-5\end{pmatrix} \, , \qquad \text{and} \qquad \vec{v} \, . \]

    Hint

    We are given two linearly independent vectors in \(\mathbb{R}^3\). With the aid of the cross product, we can compute a third vector perpendicular to these two (and therefore linearly independent of them). Hence, computing \(\vec{v} = \vec{u}_1 \times \vec{u}_2\) solves the problem.


    Solution

    \(\vec{v}= \begin{pmatrix} -1 \\ -1 \\ 0 \end{pmatrix}\)


    Solution Path

    We see that \(\vec{u}_1\) and \(\vec{u}_2\) are not collinear, i.e. there is no \(\lambda\) with \(\lambda \vec{u}_1 = \vec{u}_2\) (the first two components would require \(\lambda = -2\), the third \(\lambda = -\tfrac{5}{3}\)), and they are thus linearly independent. We obtain \(\vec{v}\) by computing the vector product of \(\vec{u}_1\) and \(\vec{u}_2\) (by this construction, \(\vec{v}\) is linearly independent of both \(\vec{u}_1\) and \(\vec{u}_2\)): \[ \vec{v} \, \, = \, \, \begin{pmatrix}1\\-1\\3\end{pmatrix} \times \begin{pmatrix}-2\\2\\-5\end{pmatrix} \, \, = \, \, \begin{pmatrix} 5 - 6 \\ -6 - (-5) \\ 2 - 2 \end{pmatrix} \, \, = \, \, \begin{pmatrix} -1 \\ -1 \\ 0 \end{pmatrix} \]
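    Optionally, the cross product and the basis property can be checked numerically (a sketch assuming Python with NumPy, not part of the course text):

        import numpy as np

        u1 = np.array([1, -1, 3])
        u2 = np.array([-2, 2, -5])

        v = np.cross(u1, u2)
        print(v)  # [-1 -1  0]

        # a nonzero determinant confirms that u1, u2, v form a basis of R^3
        print(np.linalg.det(np.column_stack([u1, u2, v])))  # approx. 2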

     
    Exercise 4: Cosine of the angle between two vectors
     
    Find the value of the cosine of the angle between the vectors \[ \vec{u}_1 \, \, = \, \, \begin{pmatrix}1\\-1\\3\end{pmatrix} \qquad \text{and} \qquad\vec{u}_2 \, \, = \, \, \begin{pmatrix}-2\\2\\-5\end{pmatrix} \, . \]

    Hint

    Apply the formula \[ \cos\left( \alpha(\vec{u}_1, \vec{u}_2) \right) \, \, = \, \, \frac{\langle \vec{u}_1 , \vec{u}_2 \rangle}{\| \vec{u}_1 \| \cdot \| \vec{u}_2 \|} \, , \qquad \text{with \(0 \leq \alpha(\vec{u}_1, \vec{u}_2) \leq \pi\)} \, . \]


    Solution

    \(\cos\left( \alpha(\vec{u}_1, \vec{u}_2) \right) \approx -0.9972414\)


    Solution Path

    To apply the formula \[ \cos\left( \alpha(\vec{u}_1, \vec{u}_2) \right) \, \, = \, \, \frac{\langle \vec{u}_1 , \vec{u}_2 \rangle}{\| \vec{u}_1 \| \cdot \| \vec{u}_2 \|} \, , \qquad \text{with \(0 \leq \alpha(\vec{u}_1, \vec{u}_2) \leq \pi\)} \, , \] we require \[ \langle \vec{u}_1 , \vec{u}_2 \rangle \, , \quad \| \vec{u}_1 \| \, \, = \, \, \sqrt{\langle \vec{u}_1 , \vec{u}_1 \rangle} \, , \quad \text{and} \quad \| \vec{u}_2 \| \, \, = \, \, \sqrt{\langle \vec{u}_2 , \vec{u}_2 \rangle} \, . \] We have \[ \langle \vec{u}_1 , \vec{u}_2 \rangle \, \, = \, \, \left\langle \begin{pmatrix}1\\-1\\3\end{pmatrix} , \begin{pmatrix}-2\\2\\-5\end{pmatrix} \right\rangle \, \, = \, \, 1 \cdot (-2) + (-1) \cdot 2 + 3 \cdot (-5) \, \, = \, \, -19 \] as well as \begin{eqnarray*} \| \vec{u}_1 \| & = & \sqrt{\langle \vec{u}_1 , \vec{u}_1 \rangle} \, \, = \, \, \sqrt{1^2+(-1)^2+3^2} \, \, = \, \, \sqrt{11} \\ \| \vec{u}_2 \| & = & \sqrt{\langle \vec{u}_2 , \vec{u}_2 \rangle} \, \, = \, \, \sqrt{(-2)^2+2^2+(-5)^2} \, \, = \, \, \sqrt{33} \end{eqnarray*} such that \[ \cos\left( \alpha(\vec{u}_1, \vec{u}_2) \right) \, \, = \, \, \frac{\langle \vec{u}_1 , \vec{u}_2 \rangle}{\| \vec{u}_1 \| \cdot \| \vec{u}_2 \|} \, \, = \, \, \frac{-19}{\sqrt{11} \cdot \sqrt{33}} \, \, \approx \, \, -0.9972414 \]
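    An optional numerical check of the cosine formula (a sketch assuming Python with NumPy, not part of the course text):

        import numpy as np

        u1 = np.array([1, -1, 3])
        u2 = np.array([-2, 2, -5])

        # scalar product divided by the product of the norms
        cos_alpha = u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2))
        print(cos_alpha)  # approx. -0.9972414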

     
    Exercise 5: Scalar product
     
    Compute the value of the following scalar product: \[ \left\langle \begin{pmatrix} 2 \\ 3 \\ -1 \end{pmatrix} , \begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix} \right\rangle \, . \]

    Solution

    \(17\)


    Solution Path

    We have \[ \left\langle \begin{pmatrix} 2 \\ 3 \\ -1 \end{pmatrix} , \begin{pmatrix} 4 \\ 5 \\ 6 \end{pmatrix} \right\rangle \, \, = \, \, 2 \cdot 4 + 3 \cdot 5 + (-1) \cdot 6 \, \, = \, \, 8 + 15 - 6 \, \, = \, \, 17 \]
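    An optional one-line numerical check (assuming Python with NumPy, not part of the course text):

        import numpy as np

        # scalar (dot) product of the two given vectors
        print(np.array([2, 3, -1]) @ np.array([4, 5, 6]))  # 17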


    Copyright Kutaisi International University — All Rights Reserved — Last Modified: 12/10/2022