# Linear map – Serlo


Linear maps are special maps between vector spaces that are compatible with the vector space structure. They are one of the most important concepts of linear algebra and have numerous applications in science and technology.

## Motivation

### What makes linear maps special

We have learned about the structure of vector spaces and studied various properties of them. Now we want to consider not only isolated vector spaces, but also maps between them. Some of these maps fit well with the underlying vector space structure and are therefore called linear maps or vector space homomorphisms. They are a generalization of linear functions through the origin in one dimension, whose graphs are lines (hence the name).

It is a typical approach in algebra to study maps that preserve the structure of an algebraic object, such as a vector space. For many algebraic objects such as groups, rings or fields, one often studies the corresponding structure-preserving maps between the respective algebraic structures - group homomorphisms, ring homomorphisms and field homomorphisms. For vector spaces, the structure-preserving maps are the linear maps (= vector space homomorphisms).

So let ${\displaystyle V}$ and ${\displaystyle W}$ be two vector spaces. When is a map ${\displaystyle f:V\to W}$ structure-preserving, i.e., well compatible with the underlying vector space structures in ${\displaystyle V}$ and ${\displaystyle W}$? For this, let's recall what the vector space structure is all about: it provides two operations:

• Addition of vectors: two vectors can be added, in a similar way to how numbers are added.
• Scalar multiplication: a vector can be scaled by a factor (which is an element of the field). That means: compressed, stretched or mirrored.

### Compatibility with addition

Let's start with the addition of vectors: when is a function ${\displaystyle f:V\to W}$ compatible with the additions ${\displaystyle +_{_{V}}}$ and ${\displaystyle +_{_{W}}}$ on the respective vector spaces ${\displaystyle V}$ and ${\displaystyle W}$? The most natural definition is the following:

A map is compatible with the addition if a sum is preserved by the map. Meaning, if ${\displaystyle v_{3}=v_{1}+_{_{V}}v_{2}}$ is a sum within the vector space ${\displaystyle V}$, then the images of ${\displaystyle v_{1}}$, ${\displaystyle v_{2}}$ and ${\displaystyle v_{3}}$ , which are situated in vector space ${\displaystyle W}$, also form a corresponding sum: ${\displaystyle f(v_{3})=f(v_{1})+_{_{W}}f(v_{2})}$

Thus, a map compatible with addition satisfies for all ${\displaystyle v_{1},v_{2},v_{3}\in V}$ the implication:

${\displaystyle v_{3}=v_{1}+_{_{V}}v_{2}\implies f(v_{3})=f(v_{1})+_{_{W}}f(v_{2})}$

This implication can be summarized in one equation by substituting the premise ${\displaystyle v_{3}=v_{1}+_{_{V}}v_{2}}$ into the second equation. It thus suffices to require for all ${\displaystyle v_{1},v_{2}\in V}$ that:

${\displaystyle f(v_{1}+_{_{V}}v_{2})=f(v_{1})+_{_{W}}f(v_{2})}$

This equation describes the first characteristic property of the linear map, namely "being compatible with vector addition". We can visualize it well for maps ${\displaystyle \mathbb {R} ^{2}\to \mathbb {R} ^{2}}$. A map is compatible with addition if and only if the triangle given by the vectors ${\displaystyle v_{1}}$, ${\displaystyle v_{2}}$ and ${\displaystyle v_{3}=v_{1}+_{_{V}}v_{2}}$ is preserved under applying the map. That means, the three vectors ${\displaystyle f(v_{1})}$, ${\displaystyle f(v_{2})}$ and ${\displaystyle f(v_{3})=f(v_{1}+_{_{V}}v_{2})}$ also have to form a triangle:

If ${\displaystyle f}$ is not compatible with addition, there are vectors ${\displaystyle v_{1}}$ and ${\displaystyle v_{2}}$ with ${\displaystyle f(v_{1}+_{_{V}}v_{2})\neq f(v_{1})+_{_{W}}f(v_{2})}$. The triangle generated by ${\displaystyle v_{1}}$, ${\displaystyle v_{2}}$ and ${\displaystyle v_{3}=v_{1}+_{_{V}}v_{2}}$ is then not preserved, because the triangle side ${\displaystyle v_{1}+_{_{V}}v_{2}}$ of the initial triangle is not mapped to the triangle side ${\displaystyle f(v_{1})+_{_{W}}f(v_{2})}$ in the target space:

### Compatibility with scalar multiplication

Analogously, we can naturally define that a map ${\displaystyle f:V\to W}$ is compatible with scalar multiplication if and only if scalings are preserved by the map. So it should hold for all ${\displaystyle w,v\in V}$ and for all scalars ${\displaystyle \lambda \in K}$ that

${\displaystyle w=\lambda \cdot _{_{V}}v\implies f(w)=\lambda \cdot _{_{W}}f(v)}$

Note that ${\displaystyle \lambda }$ is a scalar and not a vector, and thus is not changed by the map under consideration. In other words, it can be "pulled out of the bracket". This move is only allowed if both vector spaces have the same underlying field: both the domain ${\displaystyle V}$ and the codomain ${\displaystyle W}$ must be vector spaces over the same field ${\displaystyle K}$.

Linear maps thus preserve scalings. From ${\displaystyle w=\lambda v}$ one may conclude ${\displaystyle f(w)=\lambda f(v)}$. For the case where ${\displaystyle f(v)\neq 0}$, straight lines of the form ${\displaystyle \{\lambda v:\lambda \in \mathbb {R} \}}$ are mapped to the straight line ${\displaystyle \{\lambda f(v):\lambda \in \mathbb {R} \}}$. The above implication can be summarized in an equation. For all ${\displaystyle v\in V}$ and ${\displaystyle \lambda \in K}$, we require that:

${\displaystyle f(\lambda \cdot _{_{V}}v)=\lambda \cdot _{_{W}}f(v)}$

For maps ${\displaystyle \mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ this means that a scaled vector ${\displaystyle \lambda \cdot _{_{V}}v}$ is mapped to the correspondingly scaled version ${\displaystyle \lambda \cdot _{_{W}}f(v)}$ of the image vector:

If a map is not compatible with scalar multiplication, there is a vector ${\displaystyle v}$ and a scaling factor ${\displaystyle \lambda }$ such that ${\displaystyle f(\lambda \cdot _{_{V}}v)\neq \lambda \cdot _{_{W}}f(v)}$:

### Recap

A linear map is a special map between vector spaces that is compatible with the structure of the underlying vector spaces. In particular, this means that a linear map ${\displaystyle f:V\to W}$ has the following two characteristic properties:

• compatibility with addition: ${\displaystyle \forall v_{1},v_{2}\in V:\,f(v_{1}+_{_{V}}v_{2})=f(v_{1})+_{_{W}}f(v_{2})}$.
• compatibility with scalar multiplication: ${\displaystyle \forall v\in V,\lambda \in K:\,f(\lambda \cdot _{_{V}}v)=\lambda \cdot _{_{W}}f(v)}$

The compatibility with addition is called additivity and the compatibility with scalar multiplication is called homogeneity.
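Both conditions can be checked numerically on sample vectors. The following Python sketch (the helper names are ours, not from the article) tests two candidate maps on ${\displaystyle \mathbb {R} ^{2}}$, one linear and one not:

```python
# Check the two linearity conditions (additivity, homogeneity) on sample
# vectors in R^2, represented as tuples. Integer arithmetic keeps the
# equality checks exact.

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(lam, v):
    return (lam * v[0], lam * v[1])

def is_additive(f, v, w):
    # f(v + w) == f(v) + f(w)
    return f(add(v, w)) == add(f(v), f(w))

def is_homogeneous(f, lam, v):
    # f(lam * v) == lam * f(v)
    return f(scale(lam, v)) == scale(lam, f(v))

# A linear map: stretch by 2 in x-direction.
stretch = lambda v: (2 * v[0], v[1])
# A non-linear map: squaring each coordinate.
square = lambda v: (v[0] ** 2, v[1] ** 2)

assert is_additive(stretch, (1, 2), (3, 4))
assert is_homogeneous(stretch, 5, (1, 2))
assert not is_additive(square, (1, 2), (3, 4))
```

Of course, passing such spot checks does not prove linearity; it only fails maps that are visibly non-linear, while a proof must cover all vectors and scalars.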

## Definition

Definition (Linear map)

Let ${\displaystyle \color {Orange}V}$ and ${\displaystyle \color {Purple}W}$ be vector spaces over the same field ${\displaystyle K}$. Let ${\displaystyle {\color {Orange}+_{{}_{V}}}\colon {\color {Orange}V}\times {\color {Orange}V}\to {\color {Orange}V}}$ and ${\displaystyle {\color {Purple}+_{{}_{W}}}\colon {\color {Purple}W}\times {\color {Purple}W}\to {\color {Purple}W}}$ be the respective inner operations. Further, let ${\displaystyle {\color {Orange}\cdot _{{}_{V}}}\colon K\times {\color {Orange}V}\to {\color {Orange}V}}$ and ${\displaystyle {\color {Purple}\cdot _{{}_{W}}}\colon K\times {\color {Purple}W}\to {\color {Purple}W}}$ be the scalar multiplications.

Now let ${\displaystyle f\colon {\color {Orange}V}\to {\color {Purple}W}}$ be a map between these vector spaces. We call ${\displaystyle f}$ a linear map from ${\displaystyle {\color {Orange}V}}$ to ${\displaystyle {\color {Purple}W}}$ if the following two properties are satisfied:

1. additivity: For all ${\displaystyle v_{1},v_{2}\in V}$ we have that
${\displaystyle f\left(v_{1}{\color {Orange}+_{{}_{V}}}v_{2}\right)=f(v_{1}){\color {Purple}+_{{}_{W}}}f(v_{2})}$
2. homogeneity: For all ${\displaystyle v\in V}$ and ${\displaystyle \lambda \in K}$ we have that
${\displaystyle f(\lambda {\color {Orange}\cdot _{{}_{V}}}v)=\lambda {\color {Purple}\cdot _{{}_{W}}}f(v)}$

Hint

If it's clear from the context, in the future we'll also just write "${\displaystyle +}$" instead of ${\displaystyle {\color {Orange}+_{{}_{V}}}}$ and ${\displaystyle {\color {Purple}+_{{}_{W}}}}$. Similarly, "${\displaystyle \cdot }$" is often used instead of ${\displaystyle {\color {Orange}\cdot _{{}_{V}}}}$ and ${\displaystyle {\color {Purple}\cdot _{{}_{W}}}}$. Sometimes the dot for scalar multiplication is completely omitted.

Hint

In the literature, the term vector space homomorphism or homomorphism for short is also used as a synonym for the term linear map. The ancient Greek word homós stands for equal, morphé stands for shape. Literally translated, a vector space homomorphism is a map between vector spaces, which leaves the "shape" of the vector spaces invariant.

## Explanation of the definition

The characteristic equations of the linear map are ${\displaystyle f(v_{1}+v_{2})=f(v_{1})+f(v_{2})}$ and ${\displaystyle f(\lambda \cdot v)=\lambda \cdot f(v)}$. What do these two properties intuitively mean? According to the additivity property, it doesn't matter whether you first add ${\displaystyle v_{1}}$ and ${\displaystyle v_{2}}$ and then map them, or whether you first map both vectors and then add them. Both ways lead to the same result:

${\displaystyle {\color {OliveGreen}\underbrace {f({\color {Blue}\underbrace {v_{1}+v_{2}} _{\text{addition}}})} _{\text{mapping }}}={\color {Blue}\underbrace {{\color {OliveGreen}\underbrace {f(v_{1})} _{\text{mapping }}}+{\color {OliveGreen}\underbrace {f(v_{2})} _{\text{mapping }}}} _{\text{addition}}}}$

What does the homogeneity property mean? Regardless of whether you first scale ${\displaystyle v}$ by ${\displaystyle \lambda }$ and then map it or first map the vector and then scale it by ${\displaystyle \lambda }$, the result is the same:

${\displaystyle {\color {OliveGreen}\underbrace {f({\color {Blue}\underbrace {\lambda \cdot v} _{\begin{array}{c}{\text{scalar multiplication }}\end{array}}})} _{\begin{array}{c}{\text{mapping }}\end{array}}}={\color {Blue}\underbrace {\lambda \cdot {\color {OliveGreen}\underbrace {f(v)} _{\text{mapping}}}} _{\text{scalar multiplication}}}}$

The characteristic properties of linear maps mean that the orders of function mapping and vector space operations do not matter.

## Characterization: linear combinations are mapped to linear combinations

Besides the defining property that linear maps get along well with the underlying vector space structure, linear maps can also be characterized by the following property:

Linear maps are precisely those maps that map linear combinations to linear combinations.

This is an important property because linear combinations are used to define important structures on vector spaces, such as linear independence or generating sets. Also the definition of a basis relies on the notion of linear combination. The connection to linear combinations can be seen by looking at the two characteristic equations of linear maps:

{\displaystyle {\begin{aligned}f(v_{1}+v_{2})&=f(v_{1})+f(v_{2})\\[0.5em]f(\lambda \cdot v)&=\lambda \cdot f(v)\end{aligned}}}

We can apply the two formulas above step-by-step to a linear combination like ${\displaystyle 3\cdot u+5\cdot w-2\cdot z}$ for vectors ${\displaystyle u,w}$ and ${\displaystyle z}$ from ${\displaystyle V}$ . This allows us to "get the linear combination out of the bracket":

{\displaystyle {\begin{aligned}&f(3\cdot u+5\cdot w-2\cdot z)\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{additivity of }}f\right.}\\[0.3em]=&f(3\cdot u)+f(5\cdot w-2\cdot z)\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{additivity of }}f\right.}\\[0.3em]=&f(3\cdot u)+f(5\cdot w)+f(-2\cdot z)\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{homogeneity of }}f\right.}\\[0.3em]=&3\cdot f(u)+5\cdot f(w)-2\cdot f(z)\end{aligned}}}

The linear combination ${\displaystyle 3\cdot u+5\cdot w-2\cdot z}$ is mapped by ${\displaystyle f}$ to ${\displaystyle 3\cdot f(u)+5\cdot f(w)-2\cdot f(z)}$ and thus keeps its structure. The situation is similar for other linear combinations. For by the property ${\displaystyle f(v_{1}+v_{2})=f(v_{1})+f(v_{2})}$ sums "can be pulled out of the bracket" and by the property ${\displaystyle f(\lambda \cdot v)=\lambda \cdot f(v)}$ scalar multiplications "can be pulled out of the bracket". We thus obtain the following alternative characterization of the linear map: linear combinations are mapped to linear combinations.
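The "pulling the linear combination out of the bracket" step can be spot-checked in code. Below, the stretch by 2 in ${\displaystyle x}$-direction (a concrete linear map, our choice for illustration) is applied to the linear combination ${\displaystyle 3\cdot u+5\cdot w-2\cdot z}$:

```python
# Verify that a linear map sends the linear combination 3u + 5w - 2z
# to 3 f(u) + 5 f(w) - 2 f(z), for sample vectors in R^2.

def f(v):
    # stretch by 2 in the x-direction: a linear map R^2 -> R^2
    return (2 * v[0], v[1])

def comb(coeffs, vectors):
    # form sum_i coeffs[i] * vectors[i], componentwise
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors))
                 for i in range(2))

u, w, z = (1, 0), (0, 1), (2, 3)
lhs = f(comb([3, 5, -2], [u, w, z]))          # map the combination
rhs = comb([3, 5, -2], [f(u), f(w), f(z)])    # combine the images
assert lhs == rhs
```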

## Examples

### Stretch in ${\displaystyle x}$-direction

Our first example is a stretch by the factor ${\displaystyle \beta }$ in ${\displaystyle x}$-direction in the plane ${\displaystyle \mathbb {R} ^{2}}$. Here, every vector ${\displaystyle a=(a_{x},a_{y})^{T}\in \mathbb {R} ^{2}}$ is mapped to ${\displaystyle f(a)=(\beta a_{x},a_{y})^{T}}$. The following figure shows this map for ${\displaystyle \beta =2}$. The ${\displaystyle y}$-coordinate remains the same and the ${\displaystyle x}$-coordinate is doubled:

Now let's see if this map is compatible with addition. So let's take two vectors ${\displaystyle a}$ and ${\displaystyle b}$, sum them ${\displaystyle a+b}$ and then stretch them in ${\displaystyle x}$-direction. The result is the same as if we first stretch both vectors in ${\displaystyle x}$-direction and then add them:

This can also be shown mathematically. Our map is the function ${\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} ^{2},\ f\left((x,y)^{T}\right)=(\beta x,y)^{T}}$. We can now check the property ${\displaystyle f(a+b)=f(a)+f(b)}$:

{\displaystyle {\begin{aligned}f(a+b)&=f\left({\begin{pmatrix}a_{x}\\a_{y}\end{pmatrix}}+{\begin{pmatrix}b_{x}\\b_{y}\end{pmatrix}}\right)\\[0.5em]&=f\left({\begin{pmatrix}a_{x}+b_{x}\\a_{y}+b_{y}\end{pmatrix}}\right)\\[0.5em]&={\begin{pmatrix}\beta (a_{x}+b_{x})\\a_{y}+b_{y}\end{pmatrix}}\\[0.5em]&={\begin{pmatrix}\beta a_{x}+\beta b_{x}\\a_{y}+b_{y}\end{pmatrix}}\\[0.5em]&={\begin{pmatrix}\beta a_{x}\\a_{y}\end{pmatrix}}+{\begin{pmatrix}\beta b_{x}\\b_{y}\end{pmatrix}}\\[0.5em]&=f\left({\begin{pmatrix}a_{x}\\a_{y}\end{pmatrix}}\right)+f\left({\begin{pmatrix}b_{x}\\b_{y}\end{pmatrix}}\right)\\[0.5em]&=f(a)+f(b)\\[1em]\end{aligned}}}

Now let's check the compatibility with scalar multiplication. The following figure shows that it doesn't matter if the vector ${\displaystyle a}$ is first scaled by a factor of ${\displaystyle \lambda }$ and then stretched in ${\displaystyle x}$-direction or first stretched in ${\displaystyle x}$-direction and then scaled by ${\displaystyle \lambda }$:

This can also be shown formally: For ${\displaystyle a\in \mathbb {R} ^{2}}$ and ${\displaystyle \lambda \in \mathbb {R} }$ we have that

{\displaystyle {\begin{aligned}f(\lambda a)&=f\left(\lambda {\begin{pmatrix}a_{x}\\a_{y}\end{pmatrix}}\right)=f\left({\begin{pmatrix}\lambda a_{x}\\\lambda a_{y}\end{pmatrix}}\right)\\[0.5em]&={\begin{pmatrix}\beta (\lambda a_{x})\\\lambda a_{y}\end{pmatrix}}={\begin{pmatrix}\lambda \beta a_{x}\\\lambda a_{y}\end{pmatrix}}\\[0.5em]&=\lambda {\begin{pmatrix}\beta a_{x}\\a_{y}\end{pmatrix}}=\lambda f\left({\begin{pmatrix}a_{x}\\a_{y}\end{pmatrix}}\right)\\[0.5em]&=\lambda f(a).\end{aligned}}}

So our ${\displaystyle f}$ is a linear map.

### Rotations

In the following, we consider a rotation ${\displaystyle D_{\alpha }}$ of the plane by the angle ${\displaystyle \alpha }$ (measured counter-clockwise) with the origin as center of rotation. Thus, it is a map ${\displaystyle D_{\alpha }:\mathbb {R} ^{2}\to \mathbb {R} ^{2}}$ that assigns to every vector ${\displaystyle v\in \mathbb {R} ^{2}}$ the vector ${\displaystyle D_{\alpha }(v)\in \mathbb {R} ^{2}}$ obtained by rotating ${\displaystyle v}$ by the angle ${\displaystyle \alpha }$:

Rotating a vector ${\displaystyle v\in \mathbb {R} ^{2}}$ by the angle ${\displaystyle \alpha }$

Let us now convince ourselves that ${\displaystyle D_{\alpha }}$ is indeed a linear map. To do this, we need to show that:

1. ${\displaystyle D_{\alpha }}$ is additive: for all ${\displaystyle v,w\in \mathbb {R} ^{2}}$, we have ${\displaystyle D_{\alpha }(v+w)=D_{\alpha }(v)+D_{\alpha }(w)}$.
2. ${\displaystyle D_{\alpha }}$ is homogeneous: for all ${\displaystyle v\in \mathbb {R} ^{2}}$ and ${\displaystyle \lambda \in \mathbb {R} }$ we have ${\displaystyle D_{\alpha }(\lambda \cdot v)=\lambda \cdot D_{\alpha }(v)}$.

First, we check additivity, that is, the equation ${\displaystyle D_{\alpha }(v+w)=D_{\alpha }(v)+D_{\alpha }(w)}$. If we add two vectors ${\displaystyle v,w\in \mathbb {R} ^{2}}$ and then rotate their sum ${\displaystyle v+w}$ by the angle ${\displaystyle \alpha }$, the same vector should come out, as if we first rotate the vectors by the angle ${\displaystyle \alpha }$ and then add the rotated vectors ${\displaystyle D_{\alpha }(v)}$ and ${\displaystyle D_{\alpha }(w)}$. This can be visualized by the following two videos:

Now we come to homogeneity: ${\displaystyle D_{\alpha }(\lambda \cdot v)=\lambda \cdot D_{\alpha }(v)}$. If we first stretch a vector ${\displaystyle v\in \mathbb {R} ^{2}}$ by a factor ${\displaystyle \lambda \in \mathbb {R} }$ and then rotate the result ${\displaystyle \lambda \cdot v}$ by the angle ${\displaystyle \alpha }$, we should get the same vector as if we first rotate ${\displaystyle v}$ by the angle ${\displaystyle \alpha }$ and then scale the result ${\displaystyle D_{\alpha }(v)}$ by the factor ${\displaystyle \lambda }$. This is again visualized by two videos:

Thus, rotations in ${\displaystyle \mathbb {R} ^{2}}$ are indeed linear maps.
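The article verifies this visually; numerically, the same check can be done with the standard coordinate formula for a rotation (a fact we supply here, not derived in this article): ${\displaystyle D_{\alpha }(x,y)=(x\cos \alpha -y\sin \alpha ,\,x\sin \alpha +y\cos \alpha )}$. A sketch, with equality tested up to floating-point tolerance:

```python
import math

# Rotation D_alpha of the plane, written via the standard rotation
# formula; we check additivity and homogeneity numerically.

def rotate(alpha, v):
    c, s = math.cos(alpha), math.sin(alpha)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def close(v, w, eps=1e-12):
    # componentwise comparison with floating-point tolerance
    return abs(v[0] - w[0]) < eps and abs(v[1] - w[1]) < eps

alpha = math.radians(30)
v, w, lam = (1.0, 2.0), (-3.0, 0.5), 2.5

# additivity: rotating the sum equals summing the rotated vectors
lhs = rotate(alpha, (v[0] + w[0], v[1] + w[1]))
rhs = (rotate(alpha, v)[0] + rotate(alpha, w)[0],
       rotate(alpha, v)[1] + rotate(alpha, w)[1])
assert close(lhs, rhs)

# homogeneity: rotating a scaled vector equals scaling the rotated vector
assert close(rotate(alpha, (lam * v[0], lam * v[1])),
             (lam * rotate(alpha, v)[0], lam * rotate(alpha, v)[1]))
```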

### Linear maps between vector spaces of different dimension

An example of a linear map between two vector spaces with different dimensions is the following projection of the space ${\displaystyle \mathbb {R} ^{3}}$ onto the plane ${\displaystyle \mathbb {R} ^{2}}$:

${\displaystyle f\colon \mathbb {R} ^{3}\to \mathbb {R} ^{2};\quad {\begin{pmatrix}x\\y\\z\end{pmatrix}}\mapsto {\begin{pmatrix}x\\y\end{pmatrix}}}$

We now check whether the vector addition is preserved. That means, for vectors ${\displaystyle a,b\in \mathbb {R} ^{3}}$ we need that

${\displaystyle f(a+b)=f(a)+f(b)}$

This can be verified directly:

{\displaystyle {\begin{aligned}f(a+b)&=f\left({\begin{pmatrix}a_{x}\\a_{y}\\a_{z}\end{pmatrix}}+{\begin{pmatrix}b_{x}\\b_{y}\\b_{z}\end{pmatrix}}\right)=f\left({\begin{pmatrix}a_{x}+b_{x}\\a_{y}+b_{y}\\a_{z}+b_{z}\end{pmatrix}}\right)\\[0.5em]&={\begin{pmatrix}a_{x}+b_{x}\\a_{y}+b_{y}\end{pmatrix}}={\begin{pmatrix}a_{x}\\a_{y}\end{pmatrix}}+{\begin{pmatrix}b_{x}\\b_{y}\end{pmatrix}}\\[0.5em]&=f\left({\begin{pmatrix}a_{x}\\a_{y}\\a_{z}\end{pmatrix}}\right)+f\left({\begin{pmatrix}b_{x}\\b_{y}\\b_{z}\end{pmatrix}}\right)=f(a)+f(b).\end{aligned}}}

Now we check homogeneity. For all ${\displaystyle \lambda \in \mathbb {R} }$ and ${\displaystyle a\in \mathbb {R} ^{3}}$ we need:

${\displaystyle f(\lambda \cdot a)=\lambda \cdot f(a).}$

We have that

{\displaystyle {\begin{aligned}f(\lambda \cdot a)&=f\left(\lambda \cdot {\begin{pmatrix}a_{x}\\a_{y}\\a_{z}\end{pmatrix}}\right)=f\left({\begin{pmatrix}\lambda a_{x}\\\lambda a_{y}\\\lambda a_{z}\end{pmatrix}}\right)\\[0.5em]&={\begin{pmatrix}\lambda a_{x}\\\lambda a_{y}\end{pmatrix}}=\lambda \cdot {\begin{pmatrix}a_{x}\\a_{y}\end{pmatrix}}=\lambda \cdot f\left({\begin{pmatrix}a_{x}\\a_{y}\\a_{z}\end{pmatrix}}\right)\\[0.5em]&=\lambda \cdot f(a).\end{aligned}}}

So the projection ${\displaystyle f}$ is a linear map.
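The projection is simple enough to verify in a few lines of code as well. This sketch mirrors the two computations above on integer sample vectors, so the comparisons are exact:

```python
# The projection (x, y, z) -> (x, y) from R^3 onto R^2, checked against
# both linearity conditions on sample vectors.

def proj(v):
    x, y, z = v
    return (x, y)

a, b, lam = (1, 2, 3), (4, 5, 6), 7

# additivity: project the sum vs. sum the projections
s = tuple(a[i] + b[i] for i in range(3))
assert proj(s) == tuple(proj(a)[i] + proj(b)[i] for i in range(2))

# homogeneity: project the scaled vector vs. scale the projection
assert proj(tuple(lam * a[i] for i in range(3))) == \
       tuple(lam * proj(a)[i] for i in range(2))
```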

### A non-linear map

Next, we investigate some examples of non-linear maps. It is easy to come up with such maps: basically any function on ${\displaystyle \mathbb {R} }$ whose graph is not a line through the origin is a non-linear map. So "most maps are non-linear".

Of course, there are also examples for non-linear maps on ${\displaystyle \mathbb {R} ^{2}}$. For instance, consider the norm mapping on the plane which assigns the length to every vector:

${\displaystyle \|\cdot \|_{2}\,\colon \,\mathbb {R} ^{2}\to \mathbb {R} ;\quad {\begin{pmatrix}x\\y\end{pmatrix}}\mapsto {\sqrt {x^{2}+y^{2}}}}$

This map is not a linear map, because it does not preserve either vector addition or scalar multiplication. We show this by a counterexample:

Consider the two vectors ${\displaystyle (1,0)^{T}}$ and ${\displaystyle (0,1)^{T}\in \mathbb {R} ^{2}}$. If we add the vectors first and map them (determine their length) afterwards, we get

${\displaystyle \left\|{\begin{pmatrix}1\\0\end{pmatrix}}+{\begin{pmatrix}0\\1\end{pmatrix}}\right\|_{2}=\left\|{\begin{pmatrix}1\\1\end{pmatrix}}\right\|_{2}={\sqrt {1^{2}+1^{2}}}={\sqrt {2}}.}$

Now we determine the lengths of the vectors first and then add the results:

${\displaystyle \left\|{\begin{pmatrix}1\\0\end{pmatrix}}\right\|_{2}+\left\|{\begin{pmatrix}0\\1\end{pmatrix}}\right\|_{2}={\sqrt {1^{2}+0^{2}}}+{\sqrt {0^{2}+1^{2}}}={\sqrt {1}}+{\sqrt {1}}=2}$

Thus we have that

${\displaystyle \left\|{\begin{pmatrix}1\\0\end{pmatrix}}+{\begin{pmatrix}0\\1\end{pmatrix}}\right\|_{2}\neq \left\|{\begin{pmatrix}1\\0\end{pmatrix}}\right\|_{2}+\left\|{\begin{pmatrix}0\\1\end{pmatrix}}\right\|_{2}}$

This shows that the norm mapping is not additive. Finding a counterexample to one property (either additivity or homogeneity) already proves that the norm mapping is not linear.

Alternatively, we could have shown that the norm mapping is not homogeneous:

${\displaystyle \left\|(-1)\cdot {\begin{pmatrix}1\\0\end{pmatrix}}\right\|_{2}=\left\|{\begin{pmatrix}-1\\0\end{pmatrix}}\right\|_{2}={\sqrt {(-1)^{2}+0^{2}}}=1\neq -1=(-1)\cdot \left\|{\begin{pmatrix}1\\0\end{pmatrix}}\right\|_{2}.}$
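Both counterexamples from the text can be reproduced directly in code:

```python
import math

# The Euclidean norm ||.||_2 on R^2 violates both linearity conditions;
# this reproduces the two counterexamples from the text.

def norm2(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

e1, e2 = (1, 0), (0, 1)

# additivity fails: ||e1 + e2|| = sqrt(2), but ||e1|| + ||e2|| = 2
assert abs(norm2((1, 1)) - math.sqrt(2)) < 1e-12
assert norm2(e1) + norm2(e2) == 2
assert norm2((1, 1)) != norm2(e1) + norm2(e2)

# homogeneity fails: ||(-1) * e1|| = 1, but (-1) * ||e1|| = -1
assert norm2((-1, 0)) == 1
assert (-1) * norm2(e1) == -1
```

A single such counterexample suffices: linearity is a statement about all vectors and scalars, so one failing instance disproves it.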

### Applied examples

Linear maps are used in almost all technological fields. Here is just a very tiny collection of some examples:

1. In order to make predictions or control machines, complicated functions are often approximated by linear ones (regression), mainly because linear maps are easy to handle.
2. The best-known case where linear maps make our lives easier is computer graphics: any scaling of a photo or graphic is a linear map. Even converting between different screen resolutions amounts to a linear map.
3. Search engines use page ranks of websites to sort their search results. Our "Serlo" page also gets a ranking this way. To determine the page rank, a so-called Markov chain is used, which is a somewhat more sophisticated linear map.

## Linear maps preserve structure

Main article: Properties of linear maps

A linear map, also called vector space homomorphism, preserves the structure of the vector space. This is shown in the following properties of a linear mapping ${\displaystyle f:V\to W}$:

• The zero vector is mapped to the zero vector: ${\displaystyle f(0)=0}$.
• Inverses are mapped to inverses: ${\displaystyle f(-v)=-f(v)}$.
• Linear combinations are mapped to linear combinations.
• Compositions of linear maps are again linear
• Images of subspaces are subspaces
• The image of a span is the span of the individual image vectors: ${\displaystyle f(\operatorname {span} (M))=\operatorname {span} (f(M))}$ (${\displaystyle M\subseteq V}$ is supposed to be an arbitrary set)

## Relation to linear functions and affine maps

Linear functions in one dimension take the form ${\displaystyle f(x)=mx+t}$ with ${\displaystyle m,t\in \mathbb {R} }$. They are only linear maps in some cases, namely for ${\displaystyle t=0}$. As an example, for ${\displaystyle m=1}$ and ${\displaystyle t=2}$:

${\displaystyle f(x+y)=x+y+2\neq x+y+2+2=f(x)+f(y)}$

A map of this form is in fact linear if and only if ${\displaystyle t=0}$, i.e., if it takes the form ${\displaystyle f(x)=mx}$ with ${\displaystyle m\in \mathbb {R} }$. The functions of the form ${\displaystyle f(x)=mx+t}$ are called affine-linear maps or simply affine maps: they are the sum of a linear map and a constant translational term ${\displaystyle t}$. Every linear map is affine-linear, but not the other way round!

However, affine maps still map straight lines to straight lines and preserve parallel lines and ratios of distances.

We can always decompose an affine map ${\displaystyle x\mapsto A(x)}$ into a linear map ${\displaystyle x\mapsto L(x)}$ and a translation ${\displaystyle x\mapsto x+t}$, so that ${\displaystyle A(x)=L(x)+t}$. Because the translations ${\displaystyle x\mapsto x+t}$ are easy to describe, the linear part is usually the more interesting one. In the theory, we therefore only look at the linear part.
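This decomposition, and the failure of additivity for ${\displaystyle t\neq 0}$, can be sketched in one dimension (the concrete values of ${\displaystyle m}$ and ${\displaystyle t}$ are our choice for illustration):

```python
# Decompose the affine map A(x) = m*x + t into its linear part L(x) = m*x
# plus the translation by t; A is linear exactly when t == 0.

m, t = 3, 2

def A(x):
    return m * x + t

def L(x):
    return m * x

# decomposition A = L + t holds at every point
for x in (-1, 0, 2, 5):
    assert A(x) == L(x) + t

# A fails additivity when t != 0 ...
assert A(1 + 1) != A(1) + A(1)
# ... while its linear part satisfies it
assert L(1 + 1) == L(1) + L(1)
```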

## Exercises

### The identity is a linear map

Exercise (The identity is a linear map)

Let ${\displaystyle V}$ be a ${\displaystyle K}$-vector space. Prove that the identity ${\displaystyle \operatorname {id} :V\to V}$ with ${\displaystyle \operatorname {id} (v)=v}$ is a linear map.

Proof (The identity is a linear map)

The identity is additive: Let ${\displaystyle v,w\in V}$, then

${\displaystyle \operatorname {id} (v+w)=v+w=\operatorname {id} (v)+\operatorname {id} (w)}$

The identity is homogeneous: Let ${\displaystyle \lambda \in K}$ and ${\displaystyle v\in V}$, then

${\displaystyle \operatorname {id} (\lambda \cdot v)=\lambda \cdot v=\lambda \cdot \operatorname {id} (v)}$

### The map to zero is a linear map

Exercise (The map to zero is a linear map)

Let ${\displaystyle V,W}$ be two ${\displaystyle K}$-vector spaces. Show that the map to zero ${\displaystyle f:V\to W}$, which maps all vectors ${\displaystyle v\in V}$ to the zero vector ${\displaystyle 0_{{}_{W}}}$, is linear.

Proof (The map to zero is a linear map)

${\displaystyle f}$ is additive: let ${\displaystyle v_{1},v_{2}}$ be vectors in ${\displaystyle V}$. Then

${\displaystyle f(v_{1}+v_{2})=0_{{}_{W}}=0_{{}_{W}}+0_{{}_{W}}=f(v_{1})+f(v_{2})}$

${\displaystyle f}$ is homogeneous: Let ${\displaystyle v\in V}$ and let ${\displaystyle \lambda \in K}$. Then

${\displaystyle f(\lambda \cdot v)=0_{{}_{W}}=\lambda \cdot 0_{{}_{W}}=\lambda \cdot f(v)}$

Thus, the map to zero is linear.

### Linear maps on the real numbers

Exercise (Linear maps on the real numbers)

Let ${\displaystyle g:\mathbb {R} \to \mathbb {R} ,\,x\mapsto m\cdot x+t}$ with ${\displaystyle m,t\in \mathbb {R} }$. Show that ${\displaystyle g}$ is a linear map, if and only if ${\displaystyle t=0}$.

Solution (Linear maps on the real numbers)

Let first ${\displaystyle g}$ be a linear map. Since linear maps map the origin to the origin, ${\displaystyle g(0)=0}$ must hold. Now ${\displaystyle g(0)=t}$ and so ${\displaystyle t=0}$ must hold.

Let now ${\displaystyle t=0}$. We show that ${\displaystyle g:\mathbb {R} \to \mathbb {R} ,x\mapsto m\cdot x}$ is linear:

Let ${\displaystyle x}$ and ${\displaystyle y}$ be any two real numbers. We have that
{\displaystyle {\begin{aligned}&g(x+y)\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{definition of }}g\right.}\\[0.3em]=&m\cdot (x+y)\\[0.3em]=&m\cdot x+m\cdot y\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{definition of }}g\right.}\\[0.3em]=&g(x)+g(y)\end{aligned}}}
Let ${\displaystyle x}$ and ${\displaystyle \lambda }$ be two real numbers. We have that
{\displaystyle {\begin{aligned}&g(\lambda \cdot x)\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{definition of }}g\right.}\\[0.3em]=&m\cdot (\lambda \cdot x)\\[0.3em]=&m\cdot \lambda \cdot x\\[0.3em]=&\lambda \cdot (m\cdot x)\\[0.3em]&{\color {OliveGreen}\left\downarrow \ {\text{definition of }}g\right.}\\[0.3em]=&\lambda \cdot g(x)\end{aligned}}}
So ${\displaystyle g}$ is a linear map, if and only if ${\displaystyle t=0}$.