# Linear continuation – Serlo


The principle of linear continuation states that every linear map is uniquely determined by the images of the basis vectors. It provides an alternative way to characterize a linear map.

## Motivation

So far, we have mostly specified linear maps by saying where each vector of a vector space ${\displaystyle V}$ is mapped. Those are a lot of vectors, e.g. infinitely many for ${\displaystyle V=\mathbb {R} ^{n}}$. Is there a way to specify the map with fewer vectors, perhaps even finitely many?

For every vector ${\displaystyle v\in V}$ of our starting vector space we have to provide the information to which vector of the target vector space it should be mapped. Every such vector can be represented within a basis: If ${\displaystyle V}$ is a ${\displaystyle K}$-vector space with basis ${\displaystyle \{b_{1},\dots ,b_{n}\}}$ and ${\displaystyle v\in V}$, then there are unique coefficients ${\displaystyle \lambda _{1},\dots ,\lambda _{n}\in K}$ such that ${\displaystyle v=\sum _{i=1}^{n}\lambda _{i}b_{i}}$ holds.

Now, consider a linear map ${\displaystyle f:V\to W}$ into another ${\displaystyle K}$-vector space ${\displaystyle W}$. The basis vectors of ${\displaystyle V}$ then have images ${\displaystyle f(b_{1})=:w_{1},\dots ,f(b_{n})=:w_{n}\in W}$. Now, an important trick follows: we can use these images ${\displaystyle w_{1},\dots ,w_{n}}$ as building blocks to construct ${\displaystyle f(v)}$: by linearity (= additivity + homogeneity) of ${\displaystyle f}$, we have that:

{\displaystyle {\begin{aligned}f(v)&=f(\sum _{i=1}^{n}\lambda _{i}b_{i})\\[0.3em]&{\color {OliveGreen}\left\downarrow \ f{\text{ is additive}}\right.}\\[0.3em]&=\sum _{i=1}^{n}f(\lambda _{i}b_{i})\\[0.3em]&{\color {OliveGreen}\left\downarrow \ f{\text{ is homogeneous}}\right.}\\[0.3em]&=\sum _{i=1}^{n}\lambda _{i}f(b_{i})\\[0.3em]&{\color {OliveGreen}\left\downarrow \ f(b_{i})=w_{i}\right.}\\[0.3em]&=\sum _{i=1}^{n}\lambda _{i}w_{i}\end{aligned}}}

This is amazing: For any ${\displaystyle v\in V}$, the image ${\displaystyle f(v)}$ can be reconstructed using ${\displaystyle w_{1},\dots ,w_{n}}$. That means the information about how the (often infinitely many) vectors ${\displaystyle v\in V}$ are mapped by ${\displaystyle f}$ can be condensed into specifying only ${\displaystyle n}$ vectors! For a linear map ${\displaystyle f:\mathbb {R} ^{3}\to \mathbb {R} ^{3}}$, knowing three vectors ${\displaystyle w_{1},w_{2},w_{3}}$ already suffices to know the images of all infinitely many vectors.
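
This reconstruction recipe can be sketched numerically. The following snippet is a minimal sketch assuming numpy; the basis vectors and images are made-up example values. It determines the coefficients ${\displaystyle \lambda _{i}}$ by solving a linear system and then recombines the prescribed images:

```python
import numpy as np

# Hypothetical example data: a basis b1, b2 of R^2 and prescribed images w1, w2.
B = np.column_stack([(1.0, 2.0), (0.0, 1.0)])  # columns are b1 = (1,2)^T, b2 = (0,1)^T
W = np.column_stack([(1.0, 0.0), (1.0, 1.0)])  # columns are w1 = f(b1), w2 = f(b2)

def f(v):
    # Unique coefficients lam with v = lam[0]*b1 + lam[1]*b2 ...
    lam = np.linalg.solve(B, np.asarray(v, dtype=float))
    # ... recombined with the same coefficients: f(v) = lam[0]*w1 + lam[1]*w2.
    return W @ lam
```

Knowing only the two vectors ${\displaystyle w_{1},w_{2}}$ thus already determines ${\displaystyle f(v)}$ for every ${\displaystyle v\in \mathbb {R} ^{2}}$.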

The following theorem assures mathematically that this reconstruction works for any finite dimensional vector space:

## Principle of linear continuation

Theorem (Linear continuation)

Let ${\displaystyle K}$ be a field, ${\displaystyle V}$ and ${\displaystyle W}$ two ${\displaystyle K}$-vector spaces and ${\displaystyle \lbrace b_{1},\dots ,b_{n}\rbrace }$ a basis of ${\displaystyle V}$. Further, let ${\displaystyle w_{1},\dots ,w_{n}\in W}$ be any vectors from ${\displaystyle W}$. Then, there exists exactly one linear map ${\displaystyle f:V\rightarrow W}$ with ${\displaystyle f(b_{i})=w_{i}}$ for all ${\displaystyle i\in \{1,\dots ,n\}}$.

How to get to the proof?

First we have to find and define a suitable map ${\displaystyle f}$. This map is essentially given in the "motivation" section. But is it really mathematically well-defined?

Once we have chosen a map, we should check that it is indeed linear and satisfies the requirement ${\displaystyle f(b_{i})=w_{i}}$. Thus a suitable map exists.

Finally we have to show that the map with these properties is uniquely determined. To do this, we assume that there is another map with the same properties. Then we have to show that this map is identical to ${\displaystyle f}$.

Proof

Let ${\displaystyle v\in V}$. Since ${\displaystyle b_{1},\dots ,b_{n}}$ form a basis of ${\displaystyle V}$, there are unique coefficients ${\displaystyle \lambda _{1},\dots ,\lambda _{n}\in K}$ such that ${\displaystyle v=\sum _{i=1}^{n}\lambda _{i}b_{i}}$. Now we set

${\displaystyle f(v)=f\left(\sum _{i=1}^{n}\lambda _{i}\cdot b_{i}\right):=\sum _{i=1}^{n}\lambda _{i}\cdot w_{i}}$

Because the coefficients ${\displaystyle \lambda _{i}}$ are uniquely determined, the map ${\displaystyle f}$ is well-defined.

Further, it follows immediately that ${\displaystyle f}$ satisfies the requirement ${\displaystyle f(b_{i})=w_{i}}$ for every ${\displaystyle i\in \{1,\dots ,n\}}$, because for every ${\displaystyle i}$ we have that:

${\displaystyle f(b_{i})=f(1_{K}\cdot b_{i}+\sum _{j\neq i}0_{K}\cdot b_{j})=1_{K}\cdot w_{i}+\sum _{j\neq i}0_{K}\cdot w_{j}=w_{i}}$

Now we show that ${\displaystyle f}$ is linear. For this, let ${\displaystyle v,v'\in V}$ with ${\displaystyle v=\sum _{i=1}^{n}\lambda _{i}b_{i}}$ and ${\displaystyle v'=\sum _{i=1}^{n}\mu _{i}b_{i}}$ as well as ${\displaystyle \mu \in K}$. Then:

{\displaystyle {\begin{aligned}f(v+v')&=f(\sum _{i=1}^{n}\lambda _{i}b_{i}+\sum _{i=1}^{n}\mu _{i}b_{i})\\[0.3em]&=f(\sum _{i=1}^{n}(\lambda _{i}+\mu _{i})b_{i})\\[0.3em]&=\sum _{i=1}^{n}(\lambda _{i}+\mu _{i})w_{i}\\[0.3em]&=\sum _{i=1}^{n}\lambda _{i}w_{i}+\sum _{i=1}^{n}\mu _{i}w_{i}\\[0.3em]&=f(v)+f(v')\end{aligned}}}

Next, we show homogeneity:

{\displaystyle {\begin{aligned}f(\mu \cdot v)&=f(\mu \sum _{i=1}^{n}\lambda _{i}b_{i})\\[0.3em]&=f(\sum _{i=1}^{n}\mu \lambda _{i}b_{i})\\[0.3em]&=\sum _{i=1}^{n}\mu \lambda _{i}w_{i}\\[0.3em]&=\mu \sum _{i=1}^{n}\lambda _{i}w_{i}\\[0.3em]&=\mu \cdot f(v)\end{aligned}}}

Finally we want to show that ${\displaystyle f}$ is uniquely determined by the properties of being linear and mapping, for every ${\displaystyle i\in \{1,\dots ,n\}}$, the basis vector ${\displaystyle b_{i}}$ to ${\displaystyle w_{i}}$. To do this, suppose there is a second map ${\displaystyle g:V\rightarrow W}$ with exactly these two properties. We then have to show that ${\displaystyle f=g}$. To this end, let ${\displaystyle v=\sum _{i=1}^{n}\lambda _{i}b_{i}\in V}$ be arbitrary. Then:

{\displaystyle {\begin{aligned}g(v)=&g(\sum _{i=1}^{n}\lambda _{i}b_{i})\\[0.3em]&{\color {OliveGreen}\downarrow {g{\text{ is linear}}}}\\[0.3em]=&\sum _{i=1}^{n}\lambda _{i}g(b_{i})\\[0.3em]&{\color {OliveGreen}\downarrow g(b_{i})=w_{i}}\\[0.3em]=&\sum _{i=1}^{n}\lambda _{i}w_{i}=f(v)\end{aligned}}}

We have shown that ${\displaystyle f}$ and ${\displaystyle g}$ take the same value for every vector ${\displaystyle v\in V}$. So both maps are the same and we are done with the proof of uniqueness.

Hint

In the premise on the principle of linear continuation, a basis ${\displaystyle \{b_{1},\dots ,b_{n}\}}$ of ${\displaystyle V}$ occurs. That is, ${\displaystyle V}$ must be finite-dimensional. However, ${\displaystyle W}$ might be infinite-dimensional.

Actually, the statement also holds for ${\displaystyle V}$ being infinite-dimensional. The proof works similarly to the one above.

## Examples

### Example 1

Example

We consider the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} ^{2}}$ with the basis ${\displaystyle \{b_{1},b_{2}\}}$ where ${\displaystyle b_{1}:=(2,1)^{T}}$ and ${\displaystyle b_{2}:=(1,1)^{T}}$. It can easily be checked that this is a basis (you may take a moment to think about why). Let ${\displaystyle w_{1}:=(1,3)^{T}}$ and ${\displaystyle w_{2}:=(2,2)^{T}}$ be two vectors. By the theorem above, there exists a unique linear map ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ given by ${\displaystyle f(b_{1})=w_{1}}$ and ${\displaystyle f(b_{2})=w_{2}}$. What is the image under ${\displaystyle f}$ of a general vector ${\displaystyle (x,y)^{T}\in \mathbb {R} ^{2}}$?

We proceed as in the theorem on the principle of linear continuation: let ${\displaystyle (x,y)^{T}}$ be a vector in ${\displaystyle \mathbb {R} ^{2}}$. First, we represent ${\displaystyle (x,y)^{T}}$ as a linear combination of basis vectors ${\displaystyle b_{1},b_{2}}$. So we determine ${\displaystyle \lambda _{1},\lambda _{2}\in \mathbb {R} }$ such that ${\displaystyle (x,y)^{T}=\lambda _{1}\cdot b_{1}+\lambda _{2}\cdot b_{2}}$. They are given by:

${\displaystyle {\begin{pmatrix}x\\y\end{pmatrix}}=\lambda _{1}\cdot b_{1}+\lambda _{2}\cdot b_{2}=\lambda _{1}\cdot {\begin{pmatrix}2\\1\end{pmatrix}}+\lambda _{2}\cdot {\begin{pmatrix}1\\1\end{pmatrix}}={\begin{pmatrix}2\lambda _{1}+\lambda _{2}\\\lambda _{1}+\lambda _{2}\end{pmatrix}}}$

So we need to solve the system of equations

{\displaystyle {\begin{aligned}x&=2\lambda _{1}+\lambda _{2}\\y&=\lambda _{1}+\lambda _{2}\end{aligned}}}

for ${\displaystyle \lambda _{1}}$ and ${\displaystyle \lambda _{2}}$. Subtracting the second equation from the first, we obtain ${\displaystyle x-y=\lambda _{1}}$. To get ${\displaystyle \lambda _{2}}$, we substitute this result into the second equation:

${\displaystyle y=\lambda _{1}+\lambda _{2}=(x-y)+\lambda _{2}}$

If we resolve for ${\displaystyle \lambda _{2}}$, we get ${\displaystyle \lambda _{2}=2y-x}$. Consequently, the linear combination we are looking for is ${\displaystyle (x,y)^{T}=(x-y)\cdot b_{1}+(2y-x)\cdot b_{2}}$.

By the proof of the theorem above, we know how ${\displaystyle f}$ acts on ${\displaystyle (x,y)^{T}}$:

{\displaystyle {\begin{aligned}f{\begin{pmatrix}x\\y\end{pmatrix}}&=f{\bigg (}(x-y)\cdot b_{1}+(2y-x)\cdot b_{2}{\bigg )}=(x-y)\cdot w_{1}+(2y-x)\cdot w_{2}\\[0.5em]&=(x-y){\begin{pmatrix}1\\3\end{pmatrix}}+(2y-x){\begin{pmatrix}2\\2\end{pmatrix}}={\begin{pmatrix}x-y+2(2y-x)\\3(x-y)+2(2y-x)\end{pmatrix}}\\[0.5em]&={\begin{pmatrix}-x+3y\\x+y\end{pmatrix}}\end{aligned}}}

So ${\displaystyle f}$ is given in general by

{\displaystyle {\begin{aligned}f:\mathbb {R} ^{2}&\to \mathbb {R} ^{2}\\[0.3em]{\begin{pmatrix}x\\y\end{pmatrix}}&\mapsto {\begin{pmatrix}-x+3y\\x+y\end{pmatrix}}\end{aligned}}}
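
As a quick sanity check (a minimal sketch in Python, assuming numpy), one can verify that the closed formula derived above indeed sends ${\displaystyle b_{1}}$ to ${\displaystyle w_{1}}$ and ${\displaystyle b_{2}}$ to ${\displaystyle w_{2}}$:

```python
import numpy as np

def f(x, y):
    # Closed form derived above: f(x, y)^T = (-x + 3y, x + y)^T
    return np.array([-x + 3*y, x + y])

image_b1 = f(2, 1)  # b1 = (2,1)^T should be mapped to w1 = (1,3)^T
image_b2 = f(1, 1)  # b2 = (1,1)^T should be mapped to w2 = (2,2)^T
```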

### Example 2

Example

We consider the map ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ with ${\displaystyle f{\begin{pmatrix}v_{1}\\v_{2}\end{pmatrix}}={\begin{pmatrix}v_{1}+v_{2}\\3v_{2}\end{pmatrix}}}$.

As basis of ${\displaystyle \mathbb {R} ^{2}}$ we choose ${\displaystyle \lbrace b_{1}:=(2,0)^{T},b_{2}:=(1,1)^{T}\rbrace }$. Then

${\displaystyle f{\begin{pmatrix}2\\0\end{pmatrix}}={\begin{pmatrix}2\\0\end{pmatrix}}{\text{ and }}f{\begin{pmatrix}1\\1\end{pmatrix}}={\begin{pmatrix}2\\3\end{pmatrix}}}$

So we could also specify the linear map ${\displaystyle f}$ by requiring that it maps ${\displaystyle b_{1}}$ to ${\displaystyle (2,0)^{T}}$ and ${\displaystyle b_{2}}$ to ${\displaystyle (2,3)^{T}}$. This only requires fixing two vectors.
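
Conversely, reading off the two required images from the formula for ${\displaystyle f}$ is a one-line computation (a minimal sketch in plain Python):

```python
def f(v1, v2):
    # The map from above: f(v1, v2)^T = (v1 + v2, 3*v2)^T
    return (v1 + v2, 3 * v2)

image_b1 = f(2, 0)  # image of b1 = (2,0)^T
image_b2 = f(1, 1)  # image of b2 = (1,1)^T
```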

### Example 3

Example

Is there a linear map ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ with ${\displaystyle f{\begin{pmatrix}1\\1\end{pmatrix}}={\begin{pmatrix}1\\1\end{pmatrix}}}$ and ${\displaystyle f{\begin{pmatrix}2\\2\end{pmatrix}}={\begin{pmatrix}3\\1\end{pmatrix}}}$?

If such a map existed, then we would have:

${\displaystyle {\begin{pmatrix}3\\1\end{pmatrix}}=f{\begin{pmatrix}2\\2\end{pmatrix}}=f\left(2\cdot {\begin{pmatrix}1\\1\end{pmatrix}}\right)=2\cdot f{\begin{pmatrix}1\\1\end{pmatrix}}=2\cdot {\begin{pmatrix}1\\1\end{pmatrix}}={\begin{pmatrix}2\\2\end{pmatrix}}}$

This is a contradiction. Hence such a linear map ${\displaystyle f}$ cannot exist.

Question: A linear map ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ should be determined by specifying the images of exactly 2 vectors, and we did specify 2 vectors. So why is there a contradiction?

The vectors ${\displaystyle {\begin{pmatrix}1\\1\end{pmatrix}}}$ and ${\displaystyle {\begin{pmatrix}2\\2\end{pmatrix}}}$ are linearly dependent, but the function values we assigned to them are not multiples of each other. This is where the contradiction comes from. However, this does not contradict the theorem of linear continuation, since there the function values are prescribed on a basis.
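
The obstruction can also be checked numerically (a minimal sketch assuming numpy): the prescribed inputs are dependent, but the prescribed outputs do not satisfy the relation that linearity would force.

```python
import numpy as np

v1 = np.array([1.0, 1.0])
v2 = np.array([2.0, 2.0])
target1 = np.array([1.0, 1.0])  # demanded value f(v1)
target2 = np.array([3.0, 1.0])  # demanded value f(v2)

# v2 = 2*v1, so any linear f must satisfy f(v2) = 2*f(v1) = (2,2)^T.
inputs_dependent = np.allclose(v2, 2 * v1)
targets_consistent = np.allclose(target2, 2 * target1)  # (3,1)^T vs (2,2)^T
```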

## Properties of the linear continuation

In the following, ${\displaystyle V}$ and ${\displaystyle W}$ are two ${\displaystyle K}$-vector spaces, ${\displaystyle \{b_{1},\ldots ,b_{n}\}}$ is a basis of ${\displaystyle V}$ and ${\displaystyle w_{1},\ldots ,w_{n}\in W}$ are vectors in ${\displaystyle W}$. Let ${\displaystyle f:V\to W}$ be a linear map with ${\displaystyle f(b_{i})=w_{i}}$ for all ${\displaystyle i\in \{1,\ldots ,n\}}$. By the above theorem, such a linear map exists and is unique.

Theorem (Properties of the linear continuation)

${\displaystyle f(V)=\operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$

In particular we have that ${\displaystyle f}$ is surjective if and only if ${\displaystyle \lbrace w_{1},\dots ,w_{n}\rbrace }$ is a generator of ${\displaystyle W}$.

How to get to the proof?

We establish the first statement by showing equality of sets. That is, we prove that ${\displaystyle f(V)\subseteq \operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$ and ${\displaystyle f(V)\supseteq \operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$ hold.

For the first inclusion we consider an element ${\displaystyle w\in f(V)}$. So there exists a ${\displaystyle v\in V}$ such that ${\displaystyle f(v)=w}$ holds. We can write this ${\displaystyle v}$ as a linear combination of the basis elements ${\displaystyle b_{1},\ldots ,b_{n}}$ of ${\displaystyle V}$. Together with the linearity of ${\displaystyle f}$ it can then be shown that we may also write ${\displaystyle w}$ as a linear combination of ${\displaystyle w_{1},\dots ,w_{n}}$.

For the other inclusion "${\displaystyle f(V)\supseteq \operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$" we now consider a ${\displaystyle w\in \operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$. Then we can write ${\displaystyle w}$ as a linear combination of the ${\displaystyle w_{i}}$. Since ${\displaystyle w_{i}=f(b_{i})}$ holds, ${\displaystyle w}$ is representable as a linear combination of ${\displaystyle f(b_{1}),\ldots ,f(b_{n})}$. And since ${\displaystyle f}$ is linear, we can now show that ${\displaystyle w}$ lies in ${\displaystyle f(V)}$.

Using the following statements, we can then easily prove that ${\displaystyle f}$ is surjective exactly if ${\displaystyle \lbrace w_{1},\dots ,w_{n}\rbrace }$ is a generator of ${\displaystyle W}$:

• ${\displaystyle f}$ is surjective if and only if ${\displaystyle W=f(V)}$ holds.
• ${\displaystyle \lbrace w_{1},\dots ,w_{n}\rbrace }$ is a generator of ${\displaystyle W}$ if and only if ${\displaystyle W=\operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$ holds.
• ${\displaystyle f(V)=\operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$ (our already proved statement).

Proof

Proof step: ${\displaystyle f(V)=\operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$

${\displaystyle \subseteq }$: Let ${\displaystyle w\in f(V)}$. Then there is a ${\displaystyle v\in V}$ with ${\displaystyle f(v)=w}$. Since ${\displaystyle b_{1},\dots ,b_{n}}$ is a basis of ${\displaystyle V}$, there are coefficients ${\displaystyle \lambda _{1},\dots ,\lambda _{n}\in K}$ such that ${\displaystyle v=\sum _{i=1}^{n}\lambda _{i}b_{i}}$. Now we have:

{\displaystyle {\begin{aligned}w=f(v)=f(\sum _{i=1}^{n}\lambda _{i}b_{i})=\sum _{i=1}^{n}\lambda _{i}f(b_{i})=\sum _{i=1}^{n}\lambda _{i}w_{i}\end{aligned}}}

i.e., we managed to write ${\displaystyle w}$ as a linear combination of the ${\displaystyle w_{i}}$, so ${\displaystyle w\in \operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$.

${\displaystyle \supseteq }$: Let ${\displaystyle w\in \operatorname {span} \left(w_{1},\dots ,w_{n}\right)}$, then there are coefficients ${\displaystyle \lambda _{1},\dots ,\lambda _{n}\in K}$ such that ${\displaystyle w=\sum _{i=1}^{n}\lambda _{i}w_{i}}$. By definition of ${\displaystyle f}$ we have:

{\displaystyle {\begin{aligned}w=\sum _{i=1}^{n}\lambda _{i}w_{i}=\sum _{i=1}^{n}\lambda _{i}f(b_{i})=f(\sum _{i=1}^{n}\lambda _{i}b_{i})\in f(V)\end{aligned}}}

In particular, this implies the second statement:

Proof step: ${\displaystyle f}$ is surjective if and only if ${\displaystyle \lbrace w_{1},\dots ,w_{n}\rbrace }$ is a generator of ${\displaystyle W}$.

If ${\displaystyle f}$ is surjective, then:

${\displaystyle W=f(V)=\operatorname {span} (w_{1},...,w_{n})}$ (according to the statement above).

Therefore, ${\displaystyle \lbrace w_{1},...,w_{n}\rbrace }$ is a generator of ${\displaystyle W}$.

Conversely, if ${\displaystyle \lbrace w_{1},...,w_{n}\rbrace }$ is a generator, then we have that ${\displaystyle f(V)=\operatorname {span} (w_{1},...,w_{n})=W}$, and ${\displaystyle f}$ is surjective.
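
For maps into ${\displaystyle \mathbb {R} ^{m}}$, the surjectivity criterion turns into a rank computation: ${\displaystyle f}$ is surjective exactly when the matrix whose columns are ${\displaystyle w_{1},\dots ,w_{n}}$ has rank ${\displaystyle m}$. A minimal sketch assuming numpy, with the independent pair taken from Example 1 and a made-up dependent pair:

```python
import numpy as np

# Images of a basis, taken from Example 1: w1 = (1,3)^T, w2 = (2,2)^T.
W_indep = np.column_stack([(1.0, 3.0), (2.0, 2.0)])
# f(V) = span(w1, w2) = R^2 iff the column matrix has rank 2.
surjective = np.linalg.matrix_rank(W_indep) == 2

# A made-up dependent pair: here the image is only the line spanned by (1,1)^T.
W_dep = np.column_stack([(1.0, 1.0), (2.0, 2.0)])
not_surjective = np.linalg.matrix_rank(W_dep) < 2
```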

Theorem (Injective maps send bases to linearly independent vectors)

${\displaystyle f}$ is injective if and only if ${\displaystyle \lbrace w_{1},\dots ,w_{n}\rbrace }$ is linearly independent.

How to get to the proof? (Injective maps send bases to linearly independent vectors)

For equivalence, we need to show two implications. In the proof of "${\displaystyle \Rightarrow }$" we want to show that the vectors ${\displaystyle w_{1},\dots ,w_{n}}$ are linearly independent if ${\displaystyle f}$ is injective. We assume that ${\displaystyle f}$ is injective and consider the zero vector as a linear combination of ${\displaystyle w_{1},\dots ,w_{n}}$, i.e. ${\displaystyle 0_{_{W}}=\mu _{1}w_{1}+\cdots +\mu _{n}w_{n}}$ with ${\displaystyle \mu _{1},\ldots ,\mu _{n}\in K}$. We now want to prove that all coefficients ${\displaystyle \mu _{i}}$ vanish. If we replace in our linear combination each ${\displaystyle w_{i}}$ with the respective ${\displaystyle f(b_{i})}$ and use the linearity of ${\displaystyle f}$, we get

${\displaystyle 0_{_{W}}=f(\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n})}$.

We know that ${\displaystyle f(0_{_{V}})=0_{_{W}}}$ because ${\displaystyle f}$ is linear. So

${\displaystyle f(0_{_{V}})=f(\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n})}$.

Using injectivity of ${\displaystyle f}$, it follows that ${\displaystyle 0_{_{V}}=\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n}}$. Since the basis ${\displaystyle b_{1},\dots ,b_{n}}$ is linearly independent, we have ${\displaystyle \mu _{i}=0}$ for all ${\displaystyle i=1,\ldots ,n}$.

In the proof of "${\displaystyle \Leftarrow }$", our goal is to show that ${\displaystyle f}$ is injective if ${\displaystyle w_{1},\dots ,w_{n}}$ are linearly independent. To do this, we consider two vectors ${\displaystyle v,{\tilde {v}}\in V}$ with ${\displaystyle f(v)=f({\tilde {v}})}$. We want to show that ${\displaystyle v={\tilde {v}}}$. Since ${\displaystyle b_{1},\dots ,b_{n}}$ forms a basis of ${\displaystyle V}$, we can represent ${\displaystyle v}$ and ${\displaystyle {\tilde {v}}}$ as a linear combination of them:

${\displaystyle v=\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n}\,}$ and ${\displaystyle \,{\tilde {v}}={\tilde {\mu }}_{1}\cdot b_{1}+\cdots +{\tilde {\mu }}_{n}\cdot b_{n}\,}$ with ${\displaystyle \,\mu _{1},\ldots ,\mu _{n},{\tilde {\mu }}_{1},\ldots ,{\tilde {\mu }}_{n}\in K}$

To prove ${\displaystyle v={\tilde {v}}}$, it is enough to show that ${\displaystyle \mu _{i}={\tilde {\mu }}_{i}}$ for ${\displaystyle i=1,\ldots ,n}$ holds. With ${\displaystyle f(v)=f({\tilde {v}})}$ and the linearity of ${\displaystyle f}$ we get

${\displaystyle \mu _{1}\cdot f(b_{1})+\cdots +\mu _{n}\cdot f(b_{n})={\tilde {\mu }}_{1}\cdot f(b_{1})+\cdots +{\tilde {\mu }}_{n}\cdot f(b_{n})}$

Because of ${\displaystyle f(b_{i})=w_{i}}$ we get the representation

${\displaystyle \mu _{1}\cdot w_{1}+\cdots +\mu _{n}\cdot w_{n}={\tilde {\mu }}_{1}\cdot w_{1}+\cdots +{\tilde {\mu }}_{n}\cdot w_{n}}$

Because of the linear independence of ${\displaystyle w_{1},\dots ,w_{n}}$ their linear combinations are unique and one has ${\displaystyle \mu _{i}={\tilde {\mu }}_{i}}$ for all ${\displaystyle i=1,\ldots ,n}$.

Proof (Injective maps send bases to linearly independent vectors)

We need to establish two directions.

Proof step: If ${\displaystyle f}$ is injective, then the vectors ${\displaystyle w_{1},w_{2},\ldots ,w_{n}}$ are linearly independent.

Let ${\displaystyle \mu _{1},\mu _{2},\ldots ,\mu _{n}\in K}$ and let

${\displaystyle 0_{W}=\mu _{1}\cdot w_{1}+\cdots +\mu _{n}\cdot w_{n}=\mu _{1}\cdot f(b_{1})+\cdots +\mu _{n}\cdot f(b_{n})=f(\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n})}$

For any linear mapping, it is also true that ${\displaystyle f(0_{V})=0_{W}}$. Since ${\displaystyle f}$ is injective, we have

${\displaystyle \mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n}=0_{V}}$

Further, since ${\displaystyle \lbrace b_{1},b_{2},\ldots ,b_{n}\rbrace }$ is a basis of ${\displaystyle V}$:

${\displaystyle \mu _{1}=\mu _{2}=\cdots =\mu _{n}=0_{K}}$

Thus, the vectors ${\displaystyle w_{1},w_{2},\ldots ,w_{n}}$ are linearly independent.

Proof step: If the vectors ${\displaystyle w_{1},w_{2},\ldots ,w_{n}}$ are linearly independent, then ${\displaystyle f}$ is injective.

Let ${\displaystyle v,v^{*}\in V}$ with ${\displaystyle f(v)=f(v^{*})}$. Then, there are some ${\displaystyle \mu _{1},\mu _{2},\ldots ,\mu _{n},{\mu _{1}}^{*},{\mu _{2}}^{*},\ldots ,{\mu _{n}}^{*}\in K}$ with ${\displaystyle v=\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n}}$ and ${\displaystyle v^{*}={\mu _{1}}^{*}\cdot b_{1}+\cdots +{\mu _{n}}^{*}\cdot b_{n}}$. We have that:

{\displaystyle {\begin{aligned}\mu _{1}\cdot w_{1}+\cdots +\mu _{n}\cdot w_{n}&=\mu _{1}\cdot f(b_{1})+\cdots +\mu _{n}\cdot f(b_{n})\\[0.3em]&=\ f(\mu _{1}\cdot b_{1}+\cdots +\mu _{n}\cdot b_{n})\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{since }}f(v)=f(v^{*})\right.}\\[0.3em]&=\ f({\mu _{1}}^{*}\cdot b_{1}+\cdots +{\mu _{n}}^{*}\cdot b_{n})\\[0.3em]&=\ {\mu _{1}}^{*}\cdot f(b_{1})+\cdots +{\mu _{n}}^{*}\cdot f(b_{n})\\[0.3em]&=\ {\mu _{1}}^{*}\cdot w_{1}+\cdots +{\mu _{n}}^{*}\cdot w_{n}\end{aligned}}}

Since ${\displaystyle w_{1},w_{2},\ldots ,w_{n}}$ are linearly independent, the representation is unique, so ${\displaystyle \mu _{i}={\mu _{i}}^{*}}$ for ${\displaystyle i=1,2,\ldots ,n}$ and hence ${\displaystyle v=v^{*}}$. Thus ${\displaystyle f}$ is injective.

Theorem (Bijective maps send bases to bases)

${\displaystyle f}$ is bijective if and only if ${\displaystyle \lbrace w_{1},\dots ,w_{n}\rbrace }$ is a basis of ${\displaystyle W}$.

How to get to the proof? (Bijective maps send bases to bases)

We simply combine the statements of the last two theorems.

Proof (Bijective maps send bases to bases)

Proof step: If ${\displaystyle f}$ is bijective, then ${\displaystyle \{w_{1},\dots ,w_{n}\}}$ is a basis of ${\displaystyle W}$.

Since ${\displaystyle f}$ is bijective, it is both injective and surjective. Therefore, according to the last two theorems, ${\displaystyle \lbrace w_{1},...,w_{n}\rbrace }$ is a linearly independent generator of ${\displaystyle W}$, and a linearly independent generator is always a basis.

Proof step: If ${\displaystyle \{w_{1},\dots ,w_{n}\}}$ is a basis of ${\displaystyle W}$, then ${\displaystyle f}$ is bijective.

Suppose ${\displaystyle \lbrace w_{1},...,w_{n}\rbrace }$ is a basis, so in particular it is linearly independent and a generator. Then, by the last two theorems, ${\displaystyle f}$ is injective and surjective, and hence bijective.

## Exercises

Exercise (Linear maps under some conditions)

Let ${\displaystyle u=(1,0,-1)^{T},\,v=(0,1,2)^{T}}$ and ${\displaystyle w=(1,2,3)^{T}}$. Is there an ${\displaystyle \mathbb {R} }$-linear map ${\displaystyle f:\mathbb {R} ^{3}\to \mathbb {R} ^{2}}$ that satisfies ${\displaystyle f(u)=(0,1)^{T},\,f(v)=(1,-1)^{T},\,f(w)=(2,1)^{T}}$?

How to get to the proof? (Linear maps under some conditions)

First you should check whether the vectors ${\displaystyle u,v,w}$ are linearly independent. If this is the case, ${\displaystyle \{u,v,w\}}$ is a basis of ${\displaystyle \mathbb {R} ^{3}}$ because of ${\displaystyle \operatorname {dim} (\mathbb {R} ^{3})=3}$, and the existence of such a linear map ${\displaystyle f}$ would follow by the principle of linear continuation. So let ${\displaystyle \lambda _{1},\lambda _{2},\lambda _{3}\in \mathbb {R} }$ with:

${\displaystyle \lambda _{1}u+\lambda _{2}v+\lambda _{3}w={\begin{pmatrix}\lambda _{1}+\lambda _{3}\\\lambda _{2}+2\lambda _{3}\\-\lambda _{1}+2\lambda _{2}+3\lambda _{3}\end{pmatrix}}={\begin{pmatrix}0\\0\\0\end{pmatrix}}.}$

From the first two rows we get ${\displaystyle \lambda _{1}=-\lambda _{3},\,\lambda _{2}=-2\lambda _{3}}$, and so ${\displaystyle 2\lambda _{1}=\lambda _{2}}$ must be fulfilled. However, this system has more than just the "trivial" solution ${\displaystyle \lambda _{1}=\lambda _{2}=\lambda _{3}=0}$: the equation above is satisfied, for instance, for ${\displaystyle \lambda _{1}=1,\,\lambda _{2}=2,\,\lambda _{3}=-1}$. Thus, one obtains

{\displaystyle {\begin{aligned}u+2v=w.\end{aligned}}}

For such a map ${\displaystyle f}$, the relation ${\displaystyle f(u)+2f(v)=f(w)}$ would then have to hold, which is a contradiction to

{\displaystyle {\begin{aligned}f(u)+2f(v)=(2,-1)^{T},\quad f(w)=(2,1)^{T}\end{aligned}}}

Solution (Linear maps under some conditions)

Let us first assume that such a linear map ${\displaystyle f}$ exists. By the following calculation

{\displaystyle {\begin{aligned}u+2v={\begin{pmatrix}1\\0\\-1\end{pmatrix}}+{\begin{pmatrix}0\\2\\4\end{pmatrix}}={\begin{pmatrix}1\\2\\3\end{pmatrix}}=w\end{aligned}}}

we see that ${\displaystyle f(u)+2f(v)=f(w)}$ should hold. But this is a contradiction to the other conditions, because those would imply

{\displaystyle {\begin{aligned}f(u)+2f(v)=(0,1)^{T}+2(1,-1)^{T}=(2,-1)^{T}\neq (2,1)^{T}=f(w)\end{aligned}}}

So there is no such ${\displaystyle f}$.
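
The two computations in the solution can be replayed numerically (a minimal sketch assuming numpy):

```python
import numpy as np

u = np.array([1.0, 0.0, -1.0])
v = np.array([0.0, 1.0, 2.0])
w = np.array([1.0, 2.0, 3.0])

fu = np.array([0.0, 1.0])   # demanded value f(u)
fv = np.array([1.0, -1.0])  # demanded value f(v)
fw = np.array([2.0, 1.0])   # demanded value f(w)

# The inputs satisfy u + 2v = w, so linearity would force f(u) + 2 f(v) = f(w).
relation_holds = np.allclose(u + 2*v, w)
images_consistent = np.allclose(fu + 2*fv, fw)  # (2,-1)^T vs (2,1)^T: inconsistent
```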