# Subspace – Serlo

In this article we consider subspaces of a vector space. A subspace is a subset of a vector space which is itself a vector space.

A subset ${\displaystyle U}$ of the vector space ${\displaystyle V}$ can be identified as a subspace (i.e., it is again a vector space) if and only if the following three properties are satisfied:

• ${\displaystyle 0_{V}\in U}$.
• For all ${\displaystyle v,u\in U}$ we have that ${\displaystyle v+u\in U}$.
• For all ${\displaystyle u\in U}$ and for all ${\displaystyle \lambda \in K}$ we have that ${\displaystyle \lambda \cdot u\in U}$.

This equivalence is called the subspace criterion. If one of these conditions does not hold, then ${\displaystyle U}$ is not a subspace.
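The three conditions lend themselves to a numerical spot-check. The following Python sketch (the helper `violates_subspace_criterion` is hypothetical, not part of the article) tests random combinations of sample vectors; a finite check like this can refute the criterion for a subset, but never prove it:

```python
import random

def violates_subspace_criterion(contains, samples, trials=100):
    """Spot-check the three subspace conditions for a subset U of R^n.

    contains: membership predicate for U.
    samples:  finitely many known elements of U to combine.
    Returns a description of a violated condition with a witness vector,
    or None if no violation was found among the random trials.
    """
    n = len(samples[0])
    zero = tuple(0.0 for _ in range(n))
    if not contains(zero):                        # condition 1: 0 in U
        return ("zero vector missing", zero)
    for _ in range(trials):
        u, v = random.choice(samples), random.choice(samples)
        s = tuple(a + b for a, b in zip(u, v))
        if not contains(s):                       # condition 2: closure under +
            return ("not closed under addition", s)
        lam = random.uniform(-10.0, 10.0)
        m = tuple(lam * a for a in u)
        if not contains(m):                       # condition 3: closure under scalars
            return ("not closed under scalar multiplication", m)
    return None

# the line U = {(x, 2x)} passes all spot checks ...
line = lambda p: abs(p[1] - 2.0 * p[0]) < 1e-9
print(violates_subspace_criterion(line, [(1.0, 2.0), (-3.0, -6.0)]))

# ... while the integer lattice Z^2 fails closure under scalar multiplication
lattice = lambda p: all(abs(c - round(c)) < 1e-9 for c in p)
print(violates_subspace_criterion(lattice, [(1.0, 0.0), (0.0, 1.0)]))
```

Both example subsets reappear later in this article: the line turns out to be a subspace, the lattice does not.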

## Motivation

As we have already seen in connection with general algebraic structures like groups or fields, sub-structures (like sub-groups or sub-fields) play a major role in mathematics. To repeat: Substructures are (small) subsets of a (large) original structure, which allow for the same computations as the original sets. For example, considering the algebraic structure "group", a subgroup is a subset of a group, which is itself a group. For instance, the set of integer numbers with addition ${\displaystyle (\mathbb {Z} ,+)}$ can be seen as a subgroup of the set of rational numbers ${\displaystyle (\mathbb {Q} ,+)}$, which is again a subgroup of the real numbers ${\displaystyle (\mathbb {R} ,+)}$. In the same way, for the algebraic structure "field", a subfield is a subset of a field, which itself is a field.

In linear algebra we consider a new algebraic structure: the "vector space". As before, we can study the corresponding substructure: a sub-vector space, or simply subspace, is a subset of a vector space which is again a vector space.

## Definition of a subspace

Definition (subspace)

Let ${\displaystyle (V,+,\cdot )}$ be a ${\displaystyle K}$-vector space. Consider a subset ${\displaystyle U\subseteq V}$ with the constrained operations ${\displaystyle +_{U}:U\times U\to U,(u_{1},u_{2})\mapsto u_{1}+u_{2}}$ and ${\displaystyle \cdot _{U}:K\times U\to U,(\lambda ,u)\mapsto \lambda \cdot u}$. Then, this subset is called a subspace of ${\displaystyle V}$ if ${\displaystyle (U,+_{U},\cdot _{U})}$ is itself a ${\displaystyle K}$-vector space.

Hint

In this definition, a little detail is often taken for granted: one needs that the constrained operations ${\displaystyle +_{U}\colon U\times U\to U}$ and ${\displaystyle \cdot _{U}\colon K\times U\to U}$ again take values only in ${\displaystyle U}$. That is, for a subspace ${\displaystyle U\subseteq V}$ it is required that for ${\displaystyle u_{1},u_{2}\in U}$ and ${\displaystyle \lambda \in K}$ also ${\displaystyle u_{1}+u_{2}\in U\subseteq V}$ and ${\displaystyle \lambda \cdot u_{1}\in U\subseteq V}$ hold. This is not automatic for an arbitrary subset of ${\displaystyle V}$; see the counterexamples below.

Hint

You might recall the notion of a subgroup. We can also think of every vector space ${\displaystyle (V,+,\cdot )}$ as an abelian group ${\displaystyle (V,+)}$. Now if ${\displaystyle (U,+_{U},\cdot _{U})}$ is a subspace of ${\displaystyle (V,+,\cdot )}$, then ${\displaystyle (U,+_{U})}$ forms a subgroup of ${\displaystyle (V,+)}$.

## Subspace criterion

### Derivation of the criterion

How do we find out if a subset ${\displaystyle U}$ of a vector space ${\displaystyle V}$ is a subspace? For ${\displaystyle U}$ to be a vector space, all vector space axioms for ${\displaystyle U}$ must be fulfilled. Let's first use an example to see how this works.

#### Checking the vector space axioms for an example

We consider the subset ${\displaystyle U=\{(x,2x)^{T}\,\mid \,x\in \mathbb {R} \}}$ of the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle V=\mathbb {R} ^{2}}$. Visually, this subset is a line. We want to find out whether ${\displaystyle U}$ is a subspace of ${\displaystyle V=\mathbb {R} ^{2}}$. By definition, we need to show that the set ${\displaystyle U}$ together with the operations ${\displaystyle +_{U}}$ and ${\displaystyle \cdot _{U}}$ satisfies all vector space axioms. We proceed as in the article Proofs for vector spaces. That means we prove that the vector addition and the scalar multiplication are well defined and that the eight axioms hold.

First we have to show that the two operations are well-defined. The crucial point here is whether we really "land in the subspace again" under addition and scalar multiplication. More precisely, the addition ${\displaystyle +}$ in ${\displaystyle V}$ is a map ${\displaystyle V\times V\to V}$ with ${\displaystyle (u,v)\mapsto u+v}$. Our new addition arises by first restricting the domain of definition ${\displaystyle V\times V}$ to the subset ${\displaystyle U\times U}$. We get a map ${\displaystyle +'\colon U\times U\to V}$. So the range of values remains the same for now. However, to make the set ${\displaystyle U}$ a vector space, we need a map ${\displaystyle +'\colon U\times U\to U}$, so after addition, we "land again in the subspace". So, is the image of ${\displaystyle +'}$ contained in ${\displaystyle U}$? What we would like to show is:

For all ${\displaystyle u,u'\in U}$ we have that also ${\displaystyle u+u'\in U}$.

This property is also called completeness under addition.

Analogously one can derive a criterion for the well-definedness of the scalar multiplication:

For all ${\displaystyle u\in U}$ and ${\displaystyle \lambda \in K}$ we have that also ${\displaystyle \lambda u\in U}$.

This property is called completeness under scalar multiplication.

We now check both properties in our concrete example:

First, the addition. Let ${\displaystyle u,u'\in U}$. That is, there exist ${\displaystyle x,y\in \mathbb {R} }$ such that ${\displaystyle u=(x,2x)^{T}}$ and ${\displaystyle u'=(y,2y)^{T}}$. Then ${\displaystyle u+u'=(x+y,2(x+y))^{T}}$. If we set ${\displaystyle z:=x+y}$, we have that ${\displaystyle u+u'=(z,2z)^{T}}$. So ${\displaystyle u+u'\in U}$.

Now the scalar multiplication. Let ${\displaystyle u=(x,2x)^{T}\in U}$ as before, and let ${\displaystyle \lambda \in \mathbb {R} }$. Then we have ${\displaystyle \lambda \cdot u=(\lambda x,2\lambda x)^{T}}$. If we set ${\displaystyle z:=\lambda x}$, we have that ${\displaystyle \lambda \cdot u=(z,2z)^{T}}$. So ${\displaystyle \lambda \cdot u\in U}$.

Hence, the completeness under addition and scalar multiplication are indeed valid. So the vector space operations are well defined. We note that here we have worked very concretely with the definition of the set ${\displaystyle U}$. More specifically, we have used that every element of ${\displaystyle U}$ is of the form ${\displaystyle (x,2x)^{T}}$.
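The two closure computations can be mirrored with sample numbers in a few lines of Python (illustrative only; the algebraic argument above is the actual proof):

```python
# membership in U = {(x, 2x)^T : x in R}: second coordinate is twice the first
in_U = lambda p: abs(p[1] - 2.0 * p[0]) < 1e-12

u, v = (1.5, 3.0), (-2.0, -4.0)      # u = (x, 2x), v = (y, 2y)
s = (u[0] + v[0], u[1] + v[1])       # (z, 2z) with z = x + y
m = (7.0 * u[0], 7.0 * u[1])         # (z, 2z) with z = 7x
assert in_U(s) and in_U(m)           # both results land in U again
```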

Now we check the eight vector space axioms.

First, the four axioms for addition:

Associative law of addition: Let ${\displaystyle u,v,w\in U}$. We must show that ${\displaystyle u+_{U}(v+_{U}w)=(u+_{U}v)+_{U}w}$. Since ${\displaystyle +_{U}}$ is the constrained version of ${\displaystyle +}$, we must show ${\displaystyle u+(v+w)=(u+v)+w}$. This follows from the associative law for the vector space ${\displaystyle V}$. Note that since ${\displaystyle U\subseteq V}$, we also have ${\displaystyle u,v,w\in V}$, so the relations in ${\displaystyle V}$ are indeed valid.

The commutative law for addition in ${\displaystyle U}$ can be traced back to the commutative law in ${\displaystyle V}$.

Existence of a neutral element: We must show that an element ${\displaystyle 0_{_{U}}\in U}$ exists such that ${\displaystyle 0_{_{U}}+_{U}u=u+_{U}0_{_{U}}=u}$ for all ${\displaystyle u\in U}$. Since ${\displaystyle V}$ is a vector space, it contains a zero vector with ${\displaystyle 0+v=v+0=v}$ for all ${\displaystyle v\in V}$. In particular, this holds for all ${\displaystyle u\in U}$. Since addition in ${\displaystyle U}$ is only the restriction of addition in ${\displaystyle V}$, it suffices to show that ${\displaystyle 0\in U}$, for then we can set ${\displaystyle 0_{_{U}}=0}$. The element ${\displaystyle 0\in V=\mathbb {R} ^{2}}$ is precisely the vector ${\displaystyle (0,0)^{T}}$. This can be written as ${\displaystyle (0,2\cdot 0)^{T}}$ and thus lies in ${\displaystyle U}$. So there is indeed a neutral element of addition within ${\displaystyle U}$.

Existence of additive inverse in ${\displaystyle U}$: Let ${\displaystyle u\in U}$. We must show that there exists a ${\displaystyle u'\in U}$, such that ${\displaystyle u+_{U}u'=0}$. We know that ${\displaystyle u+(-u)=0}$ holds in ${\displaystyle V}$. So if we can show ${\displaystyle -u\in U}$, we are done: then we can choose ${\displaystyle u'=-u}$. We know that ${\displaystyle -u=(-1)\cdot u\in U}$ holds. Furthermore, we have already shown that ${\displaystyle U}$ is complete under scalar multiplication, so indeed ${\displaystyle -u\in U}$ follows.

The four axioms of scalar multiplication can also be traced back to the corresponding properties of ${\displaystyle V}$. This works similarly to the first two axioms of addition: we use that all relevant equations hold analogously in ${\displaystyle V}$ if one expresses the operations in ${\displaystyle U}$ by those in ${\displaystyle V}$. So we indeed have a subspace.

In order to show that the operations ${\displaystyle +_{U}}$ and ${\displaystyle \cdot _{U}}$ are well-defined, we needed to establish the properties of completeness formulated above. For this we have worked closely with the definition of ${\displaystyle U}$. Furthermore, for the third axiom of addition, we had to show that the neutral element of addition in ${\displaystyle V}$ is also an element of ${\displaystyle U}$. Again, we worked concretely with the definition of ${\displaystyle U}$. The axiom for the existence of the inverse with respect to addition could be traced back to the completeness of scalar multiplication. For all other axioms we could use that the analogous axioms in ${\displaystyle V}$ hold.

So, in total, we actually only needed to show three things:

• The completeness of ${\displaystyle U}$ with respect to addition.
• The completeness of ${\displaystyle U}$ with respect to scalar multiplication.
• ${\displaystyle 0\in U}$

For these we had to work with the definition of ${\displaystyle U}$ and ${\displaystyle V}$. The above arguments that these three properties suffice should be true for every vector space ${\displaystyle V}$ and all subsets ${\displaystyle U}$ of ${\displaystyle V}$. So, in the general case, it should be enough to prove these three properties (and it actually is).

But first, we demonstrate that the three rules are necessary. That is, we show that none of the three rules can be omitted. For this we give subsets of ${\displaystyle V=\mathbb {R} ^{2}}$, each of which violates exactly one of the three rules and is indeed not a subspace.

#### Counterexample: The empty set

We first consider the empty set ${\displaystyle U=\emptyset }$. This is of course a subset of ${\displaystyle \mathbb {R} ^{2}}$.

Let us check completeness with respect to addition: ${\displaystyle \forall u,u'\in U:u+u'\in U}$. This condition is satisfied, since a universally quantified statement about the elements of the empty set is vacuously true. In the same way, the completeness of scalar multiplication is satisfied.

However, the third rule is violated: ${\displaystyle 0\notin U}$. This is simply because the empty set contains no elements by definition. So the property ${\displaystyle 0\in U}$ cannot in general be derived from the completeness of addition and of scalar multiplication.

Now, ${\displaystyle U}$ is not a vector space, because ${\displaystyle U}$ contains no element at all, in particular no neutral element of addition. Accordingly, ${\displaystyle U}$ cannot be a subspace. Be aware that the property ${\displaystyle 0\in U}$ cannot in general be derived from the completeness properties and must, in principle, be checked separately. (However, this step is often considered obvious.)

#### Counterexample: Vectors with integer entries

Our second example shows that scalar multiplication is actually needed: take the set of integer vectors ${\displaystyle U:=\mathbb {Z} ^{2}}$. If we identify the vectors with points in ${\displaystyle \mathbb {R} ^{2}}$, we get some kind of "lattice":

This set is obviously a subset of ${\displaystyle \mathbb {R} ^{2}}$ and again the question arises whether it is a subspace. In contrast to the first example, the zero vector ${\displaystyle 0=(0,0)^{T}}$ is now contained in ${\displaystyle U=\mathbb {Z} ^{2}}$. Completeness with respect to addition also holds: the sum of two vectors from ${\displaystyle \mathbb {Z} ^{2}}$ is again in ${\displaystyle \mathbb {Z} ^{2}}$.

Nevertheless, the ${\displaystyle \mathbb {Z} ^{2}}$ is not a subspace of ${\displaystyle \mathbb {R} ^{2}}$, because ${\displaystyle \mathbb {Z} ^{2}}$ is not complete with respect to scalar multiplication. For instance, ${\displaystyle v=(1,0)^{T}\in \mathbb {Z} ^{2}}$ and ${\textstyle \lambda ={\frac {1}{2}}\in \mathbb {R} }$, but ${\textstyle \lambda \cdot v=\left({\frac {1}{2}},0\right)^{T}}$ is not contained in ${\displaystyle \mathbb {Z} ^{2}}$. Thus ${\displaystyle \mathbb {Z} ^{2}}$ does not satisfy all vector space axioms and is therefore not a subspace.
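The failing condition can be reproduced in a few lines of Python (an illustrative check, not part of the formal argument):

```python
# Z^2 contains the zero vector and is closed under addition ...
in_Z2 = lambda p: all(float(c).is_integer() for c in p)
v, w = (1.0, 0.0), (2.0, -3.0)
assert in_Z2((0.0, 0.0))
assert in_Z2((v[0] + w[0], v[1] + w[1]))

# ... but scaling by the real number 1/2 leaves the lattice
half_v = (0.5 * v[0], 0.5 * v[1])
assert not in_Z2(half_v)          # (0.5, 0.0) is not in Z^2
```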

The completeness with respect to scalar multiplication can also not be derived from the other two properties. If we want to prove that ${\displaystyle U}$ is a subspace, we must always show that for every ${\displaystyle u\in U}$ and for every scalar ${\displaystyle \lambda \in K}$, we also have ${\displaystyle \lambda \cdot u\in U}$.

#### Counterexample: A cross of coordinate axes

We have already seen in the two examples above that every subspace must contain the zero vector and must be complete under scalar multiplication. Finally, we look at a third and last example, which satisfies these two conditions but still does not satisfy all vector space axioms. For this we choose the axis cross, the union of the two lines through the origin ${\displaystyle G:=\{(0,t)^{T}:t\in \mathbb {R} \}}$ and ${\displaystyle H:=\{(t,0)^{T}:t\in \mathbb {R} \}}$. So we consider the subset ${\displaystyle U:=G\cup H\subseteq \mathbb {R} ^{2}}$. Illustrated in the plane, the set looks like an infinite "cross":

Is ${\displaystyle U}$ a subspace? Obviously, the zero vector ${\displaystyle 0}$ is contained in ${\displaystyle U}$. Moreover, for any ${\displaystyle v\in U}$ and ${\displaystyle \lambda \in \mathbb {R} }$, also ${\displaystyle \lambda v}$ is an element of ${\displaystyle U}$: scaling a vector on one of the two axes keeps it on that axis. Thus ${\displaystyle U}$ is complete under scalar multiplication. Nevertheless, ${\displaystyle U}$ is not a subspace. To see this, we choose the vectors ${\displaystyle v_{1}=(1,0)^{T}}$ and ${\displaystyle v_{2}=(0,1)^{T}}$. Then we have ${\displaystyle v_{1},v_{2}\in U}$, but for the sum we have ${\displaystyle v_{1}+v_{2}=(1,1)^{T}\notin U}$.
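This counterexample can be checked mechanically in Python (illustrative only):

```python
# the axis cross: points with at least one zero coordinate
in_cross = lambda p: p[0] == 0.0 or p[1] == 0.0

v1, v2 = (1.0, 0.0), (0.0, 1.0)
assert in_cross(v1) and in_cross(v2)
assert in_cross((3.0 * v1[0], 3.0 * v1[1]))    # scaling stays on an axis
s = (v1[0] + v2[0], v1[1] + v2[1])
assert not in_cross(s)                         # (1, 1) lies off the cross
```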

So the completeness under vector addition cannot be derived from the other two properties. That means we always have to check completeness under addition when proving that ${\displaystyle U}$ is a subspace.

### Statement and proof of the criterion

Working through an example, we have seen that a subset ${\displaystyle U}$ of ${\displaystyle V}$ should be a subspace if it satisfies the following three properties:

• completeness with respect to addition,
• completeness with respect to scalar multiplication, and
• ${\displaystyle 0\in U}$.

We have also seen examples of subsets ${\displaystyle U}$ of ${\displaystyle V}$ in which exactly one of these properties fails and which indeed do not form a subspace of ${\displaystyle V}$. So we conjecture that these three properties are necessary and sufficient for a subset to be a subspace. This is the subspace criterion, which we will now prove.

Theorem (subspace criterion)

A subset ${\displaystyle U}$ of a ${\displaystyle K}$-vector space ${\displaystyle V}$ with vector addition ${\displaystyle +:V\times V\to V}$ and scalar multiplication ${\displaystyle \cdot :K\times V\to V}$ is a subspace if and only if the following three conditions hold:

1. ${\displaystyle 0_{V}\in U}$.
2. For all ${\displaystyle v,u\in U}$ we have that ${\displaystyle v+u\in U}$.
3. For all ${\displaystyle u\in U}$ and for all ${\displaystyle \lambda \in K}$ we have that ${\displaystyle \lambda \cdot u\in U}$.

In other words: A subset of a vector space is a subspace if and only if it contains the zero element and is complete with respect to vector addition and scalar multiplication.

Proof (subspace criterion)

The theorem contains an "if and only if", which means that we have to show two implications. One direction is: every subspace satisfies conditions 1, 2 and 3. The other direction can be formulated as follows: any subset of the vector space that satisfies conditions 1, 2 and 3 must be a subspace.

Proof step: Every subspace satisfies the conditions 1, 2 and 3.

Let ${\displaystyle U}$ be any subspace of ${\displaystyle V}$. Then, by definition, ${\displaystyle U}$ is also a vector space. Thus all axioms from the definition of a vector space hold for ${\displaystyle U}$.

In particular, this also means that the operations ${\displaystyle +:U\times U\to U}$ and ${\displaystyle \cdot :K\times U\to U}$ restricted to ${\displaystyle U}$ are well-defined. But this is just another formulation of conditions 2 and 3.

Moreover, since ${\displaystyle U}$ together with ${\displaystyle +}$ forms an abelian group, ${\displaystyle U}$ contains a neutral element, and this neutral element coincides with ${\displaystyle 0_{V}}$: adding the additive inverse (in ${\displaystyle V}$) of ${\displaystyle 0_{U}}$ to both sides of ${\displaystyle 0_{U}+0_{U}=0_{U}}$ yields ${\displaystyle 0_{U}=0_{V}}$. So condition 1 holds as well.

Proof step: If conditions 1, 2 and 3 are satisfied, the subset must be a subspace.

Let ${\displaystyle U\subseteq V}$ be a subset of ${\displaystyle V}$ for which the three conditions hold. We need to show that ${\displaystyle U}$ is a vector space. To do this, we show that ${\displaystyle U}$ satisfies all properties from the definition of a vector space.

From condition 1, it follows that ${\displaystyle U}$ is a non-empty set. From conditions 2 and 3 we can deduce that the operations of ${\displaystyle V}$ restricted to ${\displaystyle U}$, namely ${\displaystyle +:U\times U\to U}$ and ${\displaystyle \cdot :K\times U\to U}$, are well-defined. So we have a non-empty set ${\displaystyle U}$ with an operation ${\displaystyle +}$ (vector addition) and an operation ${\displaystyle \cdot }$ (scalar multiplication).

Now we have to prove that ${\displaystyle U}$ together with ${\displaystyle +}$ forms an abelian group and the axioms of scalar multiplication hold. We may use that ${\displaystyle V}$ is a vector space. From this, by ${\displaystyle U\subseteq V}$ we can conclude:

• associative law: For all ${\displaystyle v,w,z\in U}$ we have that: ${\displaystyle v+(w+z)=(v+w)+z}$
• commutative law: For all ${\displaystyle v,w\in U}$ we have that: ${\displaystyle v+w=w+v}$
• scalar distributive law: For all ${\displaystyle \lambda ,\mu \in K}$ and all ${\displaystyle v\in U}$ we have that: ${\displaystyle (\lambda +\mu )\cdot v=(\lambda \cdot v)+(\mu \cdot v)}$
• vectorial distributive law: For all ${\displaystyle \lambda \in K}$ and all ${\displaystyle v,w\in U}$ we have that: ${\displaystyle \lambda \cdot (v+w)=(\lambda \cdot v)+(\lambda \cdot w)}$
• associative law for scalars: For all ${\displaystyle \lambda ,\mu \in K}$ and all ${\displaystyle v\in U}$ we have that: ${\displaystyle (\lambda \cdot \mu )\cdot v=\lambda \cdot (\mu \cdot v)}$
• Neutral element of scalar multiplication: For all ${\displaystyle v\in U}$ and for ${\displaystyle 1\in K}$ (the neutral element of multiplication in ${\displaystyle K}$) we have that: ${\displaystyle 1\cdot v=v}$. The 1 is also called neutral element of scalar multiplication.

It remains to establish the axioms on the existence of a neutral element and of inverse elements. The former is given by condition 1. So it suffices to show that for every ${\displaystyle u\in U}$ there exists a ${\displaystyle {\tilde {u}}\in U}$ such that ${\displaystyle u+{\tilde {u}}=0_{V}}$. But for any ${\displaystyle u\in U}$, condition 3 implies

${\displaystyle -u=(-1)\cdot u\in U}$

So ${\displaystyle {\tilde {u}}:=(-1)\cdot u}$ is the inverse element to ${\displaystyle u}$ and contained in ${\displaystyle U}$.

Hint

Instead of ${\displaystyle 0_{V}\in U}$, some mathematical texts require ${\displaystyle U\neq \emptyset }$. Both requirements are equivalent (given the other two conditions 2 and 3): if there is a ${\displaystyle v\in U}$, then because of the completeness of scalar multiplication also ${\displaystyle 0_{V}=0_{K}\cdot v}$ must be contained in ${\displaystyle U}$.

Hint

Another equivalent formulation of the criterion is:

A non-empty subset ${\displaystyle U\subseteq V}$ is a subspace if and only if for any two vectors ${\displaystyle u,v\in U}$, every linear combination ${\displaystyle \lambda u+\mu v}$ with arbitrary ${\displaystyle \lambda ,\mu \in K}$ lies in ${\displaystyle U}$.

It is easy to convince ourselves of the equivalence of both formulations:

Since ${\displaystyle U}$ is not empty, there exists some ${\displaystyle u\in U}$, and therefore also ${\displaystyle 0_{K}\cdot u+0_{K}\cdot u=0_{V}}$ lies in ${\displaystyle U}$ (point 1). With ${\displaystyle u}$, also ${\displaystyle \lambda \cdot u=\lambda \cdot u+0_{K}\cdot u}$ is contained in ${\displaystyle U}$ (point 3). Finally, with ${\displaystyle u,v\in U}$, also ${\displaystyle u+v=1_{K}\cdot u+1_{K}\cdot v}$ is in ${\displaystyle U}$ (point 2).

Conversely, by condition 1, ${\displaystyle U}$ is not empty. Further, if it contains elements ${\displaystyle u}$ and ${\displaystyle v}$, then by condition 3, also ${\displaystyle \lambda \cdot u}$ and ${\displaystyle \mu \cdot v}$ lie in ${\displaystyle U}$, and from condition 2 we finally get ${\displaystyle \lambda \cdot u+\mu \cdot v\in U}$.
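For a concrete subset, the combined condition can be spot-checked in Python. Here we reuse the line ${\displaystyle y=2x}$ as a sample subspace of ${\displaystyle \mathbb {R} ^{2}}$ (illustrative only):

```python
# sample subset: the line y = 2x, a subspace of R^2
in_U = lambda p: abs(p[1] - 2.0 * p[0]) < 1e-9

u, v = (1.0, 2.0), (-2.0, -4.0)      # two elements of U
lam, mu = 3.5, -0.25                 # arbitrary scalars
comb = (lam * u[0] + mu * v[0], lam * u[1] + mu * v[1])
assert in_U(comb)                    # the linear combination stays in U
```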

## How to prove that a set is a subspace

### General proof structure

Before we examine the procedure in more detail with an example, it is useful to understand the general proof structure. How can we show that a set ${\displaystyle U}$ is a subspace of a ${\displaystyle K}$-vector space ${\displaystyle V}$? We can use the subspace criterion that we just established. In order for us to use the criterion we must first check the preconditions. The theorem requires that ${\displaystyle U\subseteq V}$. Then, to show that ${\displaystyle U}$ is a subspace, we need to check the three properties from the criterion. So, in total, we need to show the following four statements:

1. ${\displaystyle U\subseteq V}$
2. ${\displaystyle 0\in U}$.
3. For all ${\displaystyle v,u\in U}$ we have that ${\displaystyle v+u\in U}$.
4. For all ${\displaystyle u\in U}$ and for all ${\displaystyle \lambda \in K}$ we have that ${\displaystyle \lambda \cdot u\in U}$.

Hint

We can also replace the second statement "${\displaystyle 0\in U}$" by "${\displaystyle U\neq \emptyset }$". Given conditions 3 and 4, the two statements are equivalent.

What do proofs of these statements look like? The general structure is as follows:

1. Proof of "${\displaystyle U\subseteq V}$": Let ${\displaystyle u\in U}$. Then, we have that ${\displaystyle u\in V}$, since ...
2. Proof of "${\displaystyle 0\in U}$": Let ${\displaystyle 0\in V}$ be the zero vector. Then, we have that ${\displaystyle 0\in U}$, since ...
3. Proof of "${\displaystyle \forall \ v,u\in U:\ v+u\in U}$": Let ${\displaystyle u,v\in U}$ be arbitrary. We have that ... and hence ${\displaystyle u+v\in U}$.
4. Proof of "${\displaystyle \forall \ u\in U\ \forall \lambda \in K:\ \lambda \cdot u\in U}$": Let ${\displaystyle \lambda \in K}$ and ${\displaystyle u\in U}$ be arbitrary. Since ... we know that ${\displaystyle \lambda \cdot u\in U}$.

### Finding a proof idea

We consider an easy example problem, in order to get an idea for the proof:

Exercise

Let ${\displaystyle u\in \mathbb {R} ^{2}}$ and ${\displaystyle U:=\{\lambda \cdot u|\ \lambda \in \mathbb {R} \}}$. Show that: ${\displaystyle U}$ is a subspace of the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} ^{2}}$.

We want to apply the subspace criterion to ${\displaystyle U}$. To do this, we check the premises of the theorem according to the above scheme.

• ${\displaystyle U\subseteq \mathbb {R} ^{2}}$: let ${\displaystyle v\in U}$. By definition of ${\displaystyle U}$, there is some ${\displaystyle \lambda \in \mathbb {R} }$ with ${\displaystyle v=\lambda \cdot u}$. Since ${\displaystyle \mathbb {R} ^{2}}$ is a vector space, it follows that ${\displaystyle v=\lambda \cdot u\in \mathbb {R} ^{2}}$.
• ${\displaystyle 0\in U}$: We have seen in "Vector space: properties" that for every vector ${\displaystyle v\in \mathbb {R} ^{2}}$ we have ${\displaystyle 0\cdot v=0}$. So we also have ${\displaystyle 0\cdot u=0}$. Thus we get ${\displaystyle 0\in U}$.
• completeness of addition: Let ${\displaystyle v,w\in U}$. By definition of ${\displaystyle U}$, there exist ${\displaystyle \lambda ,\mu \in \mathbb {R} }$ with ${\displaystyle v=\lambda \cdot u}$ and ${\displaystyle w=\mu \cdot u}$. Since ${\displaystyle v,w\in \mathbb {R} ^{2}}$, we can add them: ${\displaystyle v+w=\lambda \cdot u+\mu \cdot u=(\lambda +\mu )\cdot u}$. Because of ${\displaystyle \lambda +\mu \in \mathbb {R} }$ we finally obtain ${\displaystyle v+w\in U}$.
• completeness of scalar multiplication: Let ${\displaystyle v\in U}$ and let ${\displaystyle \mu \in \mathbb {R} }$. By definition of ${\displaystyle U}$ there is a ${\displaystyle \lambda \in \mathbb {R} }$ with ${\displaystyle v=\lambda \cdot u}$. Since ${\displaystyle v\in \mathbb {R} ^{2}}$ we can multiply it with ${\displaystyle \mu }$: ${\displaystyle \mu \cdot v=\mu \cdot (\lambda \cdot u)=(\mu \cdot \lambda )\cdot u}$. Because of ${\displaystyle \mu \cdot \lambda \in \mathbb {R} }$ it follows that ${\displaystyle \mu \cdot v\in U}$.

This shows that all conditions hold, so by the subspace criterion, ${\displaystyle U}$ is indeed a subspace of ${\displaystyle \mathbb {R} ^{2}}$.
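The four checks above can be mirrored numerically in Python. The membership test below uses a determinant: a vector ${\displaystyle p}$ is a multiple of ${\displaystyle u}$ exactly if ${\displaystyle p_{1}u_{2}-p_{2}u_{1}=0}$, assuming ${\displaystyle u\neq 0}$ (for ${\displaystyle u=0}$, ${\displaystyle U}$ is the zero space). This is an illustrative sketch, not the proof itself:

```python
u = (1.0, 2.0)                                   # a fixed nonzero u in R^2
# p is a multiple of u iff the 2x2 determinant det(p, u) vanishes (u != 0)
in_U = lambda p: abs(p[0] * u[1] - p[1] * u[0]) < 1e-9

assert in_U((0.0, 0.0))                          # zero vector is in U
v = (3.0 * u[0], 3.0 * u[1])                     # v = 3u in U
w = (-1.5 * u[0], -1.5 * u[1])                   # w = -1.5u in U
assert in_U((v[0] + w[0], v[1] + w[1]))          # closure under addition
assert in_U((4.2 * v[0], 4.2 * v[1]))            # closure under scaling
```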

### Writing down the proof

Now we write down the proof in a condensed form:

Proof

We check the premises of the theorem according to the above scheme.

• ${\displaystyle U\subseteq \mathbb {R} ^{2}}$: let ${\displaystyle v\in U}$. Then there exists a ${\displaystyle \lambda \in \mathbb {R} }$ with ${\displaystyle v=\lambda \cdot u}$. Hence, ${\displaystyle v\in \mathbb {R} ^{2}}$.
• ${\displaystyle 0\in U}$: because of ${\displaystyle 0\cdot u=0}$, we have ${\displaystyle 0\in U}$.
• completeness of addition: let ${\displaystyle v,w\in U}$. Then there exist ${\displaystyle \lambda ,\mu \in \mathbb {R} }$ with ${\displaystyle v=\lambda \cdot u}$ and ${\displaystyle w=\mu \cdot u}$. We calculate: ${\displaystyle v+w=\lambda \cdot u+\mu \cdot u=(\lambda +\mu )\cdot u\in U}$.
• completeness of scalar multiplication: let ${\displaystyle v\in U}$ and let ${\displaystyle \mu \in \mathbb {R} }$. Then there is a ${\displaystyle \lambda \in \mathbb {R} }$ with ${\displaystyle v=\lambda \cdot u}$. We calculate ${\displaystyle \mu \cdot v=\mu \cdot (\lambda \cdot u)=(\mu \cdot \lambda )\cdot u\in U}$.

This shows that all the preconditions hold. So it follows from the subspace criterion that ${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{2}}$.

## Examples and counterexamples for subspaces

### Examples

In the following, we will look at first examples to consolidate our idea of subspaces and to avoid misinterpretations. We will also use the subspace criterion.

#### Trivial subspaces

In every ${\displaystyle K}$-vector space ${\displaystyle V}$, there are two "trivial subspaces":

Example (Trivial subspaces)

Let ${\displaystyle V}$ be a vector space. First, the zero vector space ${\displaystyle \{0_{_{V}}\}}$ is always a subspace of ${\displaystyle V}$. This is a vector space and since ${\displaystyle 0_{_{V}}\in V}$, we also have ${\displaystyle \{0_{_{V}}\}\subseteq V}$.

On the other hand, the whole vector space ${\displaystyle V}$ is also a subspace of ${\displaystyle V}$. After all, ${\displaystyle V\subseteq V}$ and ${\displaystyle V}$ is a vector space.

Since ${\displaystyle \{0_{_{V}}\}}$ and ${\displaystyle V}$ are always subspaces for every vector space ${\displaystyle V}$, they are called trivial subspaces.

The following example with ${\displaystyle V=\mathbb {R} }$ shows that sometimes only the trivial subspaces exist:

Example (subspaces of ${\displaystyle \mathbb {R} }$)

${\displaystyle \mathbb {R} }$ as an ${\displaystyle \mathbb {R} }$-vector space has only the trivial subspaces ${\displaystyle \lbrace 0\rbrace }$ and ${\displaystyle \mathbb {R} }$, as one can easily verify:

Let ${\displaystyle U\subseteq \mathbb {R} }$ be a subspace with ${\displaystyle U\neq \{0\}}$. We want to show ${\displaystyle U=\mathbb {R} }$. Since ${\displaystyle U\neq \{0\}}$, there exists a real number ${\displaystyle 0\neq u\in U}$. Because ${\displaystyle U}$ is complete under scalar multiplication, we have that for all ${\displaystyle a\in \mathbb {R} }$ it holds that ${\displaystyle a={\frac {a}{u}}\cdot u\in U}$. Hence, ${\displaystyle U=\mathbb {R} }$.

Warning

Although ${\displaystyle \mathbb {Q} }$ is a subfield of ${\displaystyle \mathbb {R} }$, the set ${\displaystyle \mathbb {Q} }$ is not a subspace of the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} }$. For instance, for ${\displaystyle {\tfrac {1}{2}}\in \mathbb {Q} }$, the scalar multiple ${\displaystyle \pi \cdot {\tfrac {1}{2}}\notin \mathbb {Q} }$.

#### Line through the origin

In this example, we consider a straight line ${\displaystyle U}$ in ${\displaystyle \mathbb {R} ^{2}}$ that passes through the origin. Let the equation of the line be given by ${\displaystyle y=2x}$. So we can write down the straight line as a set of points:

${\displaystyle U=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}y=2x\right\}=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}2x-y=0\right\}.}$

Exercise

Show that ${\displaystyle U}$ is a subspace of the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} ^{2}}$.

Proof

We want to use the subspace criterion above. Because of ${\displaystyle 0=2\cdot 0}$ we have that ${\displaystyle (0,0)^{T}\in U}$. Let ${\displaystyle (x,y)^{T},(x',y')^{T}\in U}$, i.e. ${\displaystyle y=2x}$ and ${\displaystyle y'=2x'}$. From this we obtain ${\displaystyle y+y'=2(x+x')}$ and hence ${\displaystyle (x,y)^{T}+(x',y')^{T}=(x+x',y+y')^{T}\in U}$. Let ${\displaystyle (x,y)^{T}\in U}$ and ${\displaystyle \lambda \in \mathbb {R} }$. Since ${\displaystyle y=2x}$ we have that also ${\displaystyle \lambda y=2\lambda x}$ and hence ${\displaystyle \lambda \cdot (x,y)^{T}=(\lambda x,\lambda y)^{T}\in U}$. So we have shown that all the conditions of the subspace criterion are satisfied. Thus, ${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{2}}$.

Alternative proof

We can also see in another way that ${\displaystyle U}$ is a subspace. To do this, we consider the characterization of ${\displaystyle U}$:

${\displaystyle U=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}y=2x\right\}=\left\{{\begin{pmatrix}x\\2x\end{pmatrix}}{\bigg |}x\in \mathbb {R} \right\}=\left\{x\cdot {\begin{pmatrix}1\\2\end{pmatrix}}{\bigg |}x\in \mathbb {R} \right\}.}$

Now we recall the section "How to prove that a set is a subspace", where we saw that such subsets form subspaces. These subsets were of the form ${\displaystyle U:=\{x\cdot u|x\in \mathbb {R} \}}$ with some ${\displaystyle u\in \mathbb {R} ^{2}}$. In this example ${\displaystyle u=(1,2)^{T}}$.

#### A subspace of ${\displaystyle \mathbb {R} ^{3}}$

In the following exercise we consider a plane in ${\displaystyle \mathbb {R} ^{3}}$ which passes through ${\displaystyle 0}$. We show that this plane always forms a subspace of ${\displaystyle \mathbb {R} ^{3}}$.

Exercise (plane within ${\displaystyle \mathbb {R} ^{3}}$)

Let ${\displaystyle U=\left\{(u_{1},u_{2},u_{3})^{T}\in \mathbb {R} ^{3}\mid u_{1}-u_{2}-u_{3}=0\right\}\subseteq \mathbb {R} ^{3}}$. Show that ${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{3}}$.

Proof (plane within ${\displaystyle \mathbb {R} ^{3}}$)

For the proof we have to show that the subspace criterion is satisfied. We divide this proof into three steps:

Proof step: ${\displaystyle 0\in U}$

We have that ${\displaystyle 0=(0,0,0)^{T}\in U}$, since ${\displaystyle 0-0-0=0}$.

Proof step: ${\displaystyle U}$ is closed under addition

Consider two vectors ${\displaystyle a=(a_{1},a_{2},a_{3})^{T}}$ and ${\displaystyle b=(b_{1},b_{2},b_{3})^{T}}$ from ${\displaystyle U}$. By definition of ${\displaystyle U}$, we have that ${\displaystyle a_{1}-a_{2}-a_{3}=0}$ and ${\displaystyle b_{1}-b_{2}-b_{3}=0}$. Now, under addition,

${\displaystyle a+b={\begin{pmatrix}a_{1}+b_{1}\\a_{2}+b_{2}\\a_{3}+b_{3}\end{pmatrix}}}$

So

{\displaystyle {\begin{aligned}&(a_{1}+b_{1})-(a_{2}+b_{2})-(a_{3}+b_{3})\\[0.3em]=&\underbrace {a_{1}-a_{2}-a_{3}} _{=0}+\underbrace {b_{1}-b_{2}-b_{3}} _{=0}=0\end{aligned}}}

Hence ${\displaystyle a+b\in U}$, which establishes closure under addition.

Proof step: ${\displaystyle U}$ is closed under scalar multiplication

Let ${\displaystyle \lambda \in \mathbb {R} }$ and let ${\displaystyle a=(a_{1},a_{2},a_{3})^{T}\in U}$. So we have that ${\displaystyle a_{1}-a_{2}-a_{3}=0}$. Consequently,

${\displaystyle \lambda \cdot a=\lambda \cdot {\begin{pmatrix}a_{1}\\a_{2}\\a_{3}\end{pmatrix}}={\begin{pmatrix}\lambda \cdot a_{1}\\\lambda \cdot a_{2}\\\lambda \cdot a_{3}\end{pmatrix}}}$

So we have:

{\displaystyle {\begin{aligned}\lambda \cdot a_{1}-\lambda \cdot a_{2}-\lambda \cdot a_{3}&=\lambda \cdot (\underbrace {a_{1}-a_{2}-a_{3}} _{=0})\\[0.3em]&=\lambda \cdot 0=0\end{aligned}}}

Thus ${\displaystyle \lambda \cdot a\in U}$ and hence, ${\displaystyle U}$ is also closed under scalar multiplication.

We have proved the conditions of the subspace criterion and thus shown that ${\displaystyle U}$ is a subspace.

#### A subspace of the polynomials

Let us now turn to a slightly more abstract example, namely the polynomial vector space. We show that the subset of polynomials of degree less than or equal to ${\displaystyle n}$ is a subspace:

Theorem (polynomials of degree ${\displaystyle \leq n}$)

Let ${\displaystyle n\in \mathbb {N} _{0}}$. Then ${\displaystyle K[X]_{\leq n}:=\lbrace f\in K[X]{\big |}\operatorname {deg} (f)\leq n\rbrace \subseteq K[X]}$ is a subspace of the vector space of polynomials ${\displaystyle K[X]}$.

Proof (polynomials of degree ${\displaystyle \leq n}$)

We must show that the three conditions of the subspace criterion hold:

Proof step: ${\displaystyle 0\in K[X]_{\leq n}}$

We have ${\displaystyle \operatorname {deg} (0)=-\infty \leq n}$, so ${\displaystyle 0\in K[X]_{\leq n}}$.

Proof step: ${\displaystyle K[X]_{\leq n}}$ is closed under addition

Let ${\displaystyle f,g\in K[X]_{\leq n}}$. Then ${\displaystyle \operatorname {deg} (f)\leq n}$ and ${\displaystyle \operatorname {deg} (g)\leq n}$. Thus, we can find ${\displaystyle f_{i},g_{i}\in K}$ with ${\displaystyle 0\leq i\leq n}$, such that ${\displaystyle f=\sum _{i=0}^{n}f_{i}X^{i}}$ and ${\displaystyle g=\sum _{i=0}^{n}g_{i}X^{i}}$. So we arrive at ${\displaystyle f+g=\sum _{i=0}^{n}(f_{i}+g_{i})X^{i}}$, which means that ${\displaystyle \operatorname {deg} (f+g)\leq n}$. So indeed, ${\displaystyle f+g\in K[X]_{\leq n}}$.

Proof step: ${\displaystyle K[X]_{\leq n}}$ is closed under scalar multiplication

Let ${\displaystyle f\in K[X]_{\leq n}}$ and ${\displaystyle \lambda \in K}$. Then ${\displaystyle \operatorname {deg} (f)\leq n}$. Thus, we can find ${\displaystyle f_{i}\in K}$ with ${\displaystyle 0\leq i\leq n}$, such that ${\displaystyle f=\sum _{i=0}^{n}f_{i}X^{i}}$. So we arrive at ${\displaystyle \lambda \cdot f=\sum _{i=0}^{n}(\lambda f_{i})X^{i}}$, which implies ${\displaystyle \operatorname {deg} (\lambda \cdot f)\leq n}$. So indeed, ${\displaystyle \lambda \cdot f\in K[X]_{\leq n}}$.

Now all three conditions of the subspace criterion are fulfilled, and hence ${\displaystyle K[X]_{\leq n}\subseteq K[X]}$ is indeed a subspace.
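The closure arguments above can be mirrored in a small numeric sketch. We assume (as an illustrative representation, not from the text) that a real polynomial of degree at most ${\displaystyle n}$ is stored as its coefficient list `[f_0, ..., f_n]`; the helper names `poly_add` and `poly_scale` are likewise illustrative.

```python
# Sketch over the reals: sum and scalar multiple of coefficient lists of
# length n + 1 again have length n + 1, mirroring the closure proof.

n = 3

def poly_add(f, g):
    # (f + g)_i = f_i + g_i, still a list of length n + 1
    return [fi + gi for fi, gi in zip(f, g)]

def poly_scale(lam, f):
    # (lam * f)_i = lam * f_i, still a list of length n + 1
    return [lam * fi for fi in f]

f = [1, 0, 2, 5]   # 1 + 2x^2 + 5x^3
g = [0, 4, 0, -5]  # 4x - 5x^3

h = poly_add(f, g)        # the degree of the sum may even drop below n
assert len(h) == n + 1    # but it never exceeds n
assert len(poly_scale(7, f)) == n + 1
```

Note that in the example the leading coefficients cancel, so the sum has degree 1 even though both summands have degree 3; this is why the degree condition must be an inequality.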

### Counterexamples

We have already seen above three examples of subsets of ${\displaystyle \mathbb {R} ^{2}}$ which do not form a subspace. For a better understanding we now also consider counterexamples in other vector spaces.

#### Line that does NOT pass through the origin

Example

We have considered this straight line in the examples:
${\displaystyle U=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}2x-y=0\right\},}$

which describes a subspace of ${\displaystyle \mathbb {R} ^{2}}$. Now we shift it upwards by one and obtain the following set:

${\displaystyle G:=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}2x-y+1=0\right\}.}$

The set ${\displaystyle G}$ is not a subspace of ${\displaystyle \mathbb {R} ^{2}}$: it does not contain the zero vector, since ${\displaystyle 2\cdot 0-0+1=1\neq 0}$.

#### Bounded subset of ${\displaystyle \mathbb {R} ^{3}}$

Example

We consider the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} ^{3}}$. Let

${\displaystyle W:=\{(x,y,z)^{T}\in \mathbb {R} ^{3}\mid x^{2}+y^{2}+z^{2}\leq 5\}\subseteq \mathbb {R} ^{3}.}$

${\displaystyle W}$ is not a subspace of ${\displaystyle \mathbb {R} ^{3}}$. This is because ${\displaystyle W}$ is not closed under scalar multiplication. We know ${\displaystyle (1,0,0)^{T}\in W}$. But ${\displaystyle 3\cdot (1,0,0)^{T}=(3,0,0)^{T}\notin W}$, because ${\displaystyle 3^{2}+0^{2}+0^{2}=9>5}$.

Alternatively, we can show that ${\displaystyle W}$ is not closed under addition. For example, ${\displaystyle (2,0,0)^{T}\in W}$ and ${\displaystyle (0,2,0)^{T}\in W}$, but ${\displaystyle (2,0,0)^{T}+(0,2,0)^{T}=(2,2,0)^{T}\notin W}$ because ${\displaystyle 2^{2}+2^{2}+0^{2}=8>5}$.
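Both counterexamples can be replayed numerically. In this sketch, `in_W` is an illustrative membership predicate for the ball of radius ${\displaystyle {\sqrt {5}}}$.

```python
# The ball W = {(x,y,z) : x^2 + y^2 + z^2 <= 5} is not closed under
# scaling or addition; we reproduce the two counterexamples from the text.

def in_W(v):
    x, y, z = v
    return x**2 + y**2 + z**2 <= 5

assert in_W((1, 0, 0))
assert not in_W((3, 0, 0))           # 3 * (1,0,0) leaves W

assert in_W((2, 0, 0)) and in_W((0, 2, 0))
s = (2 + 0, 0 + 2, 0 + 0)
assert not in_W(s)                   # the sum (2,2,0) leaves W as well
```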

#### Graph of a non-linear function

Example

Now, consider the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} ^{3}}$. Let

${\displaystyle G=\{(x,y,xy)^{T}\in \mathbb {R} ^{3}\mid x,y\in \mathbb {R} \}.}$

The set ${\displaystyle G}$ is not a subspace of ${\displaystyle \mathbb {R} ^{3}}$, because it is not closed under addition. To see this, we consider the two elements ${\displaystyle (1,1,1)^{T},(2,0,0)^{T}\in G}$. Then, we have ${\displaystyle (1,1,1)^{T}+(2,0,0)^{T}=(3,1,1)^{T}\notin G}$, since ${\displaystyle 3\cdot 1=3\neq 1}$. Intuitively, the set ${\displaystyle G}$ fails to be a subspace because it is a curved surface, not a plane through the origin.

#### Polynomials of a fixed degree do not form a subspace

Example

As a more abstract example, consider now the polynomial vector space ${\displaystyle K[X]}$ for any field ${\displaystyle K}$. Let

${\displaystyle M=\{p\in K[X]\mid \deg(p)=5\}.}$

We show that ${\displaystyle M}$ does not form a subspace of ${\displaystyle K[X]}$, since it is not closed under addition. To see this, consider the two elements ${\displaystyle X^{5}+1,-X^{5}\in M}$. Then, we have ${\displaystyle (X^{5}+1)+(-X^{5})=1\notin M}$, since ${\displaystyle \deg(1)=0\neq 5}$.

## Other criteria for subspaces

We will now learn about three criteria that make proofs easier in many cases. For this we will anticipate and use the notion of a linear map.

### Kernel of a linear map

In the examples for subspaces, we considered the following sets:

{\displaystyle {\begin{aligned}U_{1}&=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}\,2x-y=0\right\}\subseteq \mathbb {R} ^{2},\\[0.5em]U_{2}&=\left\{{\begin{pmatrix}a\\b\\c\end{pmatrix}}\in \mathbb {R} ^{3}{\bigg |}\,a-b-c=0\right\}\subseteq \mathbb {R} ^{3}\end{aligned}}}

We proved above that ${\displaystyle U_{1}}$ and ${\displaystyle U_{2}}$ are subspaces of ${\displaystyle \mathbb {R} ^{2}}$ and ${\displaystyle \mathbb {R} ^{3}}$, respectively. The two sets are defined according to the same principle: each contains all vectors that satisfy a certain condition. The conditions are

{\displaystyle {\begin{aligned}2x-y=0{\text{ and }}a-b-c=0\end{aligned}}.}

These look very similar. Both conditions tell us that some expression in ${\displaystyle x}$ and ${\displaystyle y}$ or in ${\displaystyle a,b}$ and ${\displaystyle c}$ should be zero. This expression is linear in ${\displaystyle x,y}$ and ${\displaystyle a,b,c}$, respectively. That is, both formulas can also be written down as linear maps:

{\displaystyle {\begin{aligned}f&\colon \mathbb {R} ^{2}\to \mathbb {R} ;\quad (x,y)^{T}\mapsto 2x-y\\[0.5em]g&\colon \mathbb {R} ^{3}\to \mathbb {R} ;\quad (a,b,c)^{T}\mapsto a-b-c\end{aligned}}}

With these, we can rewrite our subspaces as

{\displaystyle {\begin{aligned}U_{1}&=\left\{{\begin{pmatrix}x\\y\end{pmatrix}}\in \mathbb {R} ^{2}{\bigg |}\,f{\begin{pmatrix}x\\y\end{pmatrix}}=0\right\}\subseteq \mathbb {R} ^{2},\\[0.5em]U_{2}&=\left\{{\begin{pmatrix}a\\b\\c\end{pmatrix}}\in \mathbb {R} ^{3}{\bigg |}\,g{\begin{pmatrix}a\\b\\c\end{pmatrix}}=0\right\}\subseteq \mathbb {R} ^{3}\end{aligned}}}

Thus ${\displaystyle U_{1}}$, as well as ${\displaystyle U_{2}}$, is the kernel of a linear map. One can show, in general, that the kernel of a linear map is always a subspace.
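As a quick numerical illustration, we can encode the maps ${\displaystyle f}$ and ${\displaystyle g}$ from above as plain functions; kernel membership then simply means "the function evaluates to zero". This is a sketch, not a general kernel computation.

```python
# U1 = ker f and U2 = ker g, checked on a few sample vectors.

def f(v):          # f(x, y) = 2x - y
    x, y = v
    return 2 * x - y

def g(v):          # g(a, b, c) = a - b - c
    a, b, c = v
    return a - b - c

# vectors on the line y = 2x lie in ker f
assert f((1, 2)) == 0 and f((-4, -8)) == 0
# vectors in the plane a - b - c = 0 lie in ker g
assert g((3, 1, 2)) == 0 and g((0, 0, 0)) == 0
# and the kernel is closed under the vector space operations, e.g. the sum
# of the two sample vectors above stays in ker f:
assert f((1 + (-4), 2 + (-8))) == 0
```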

### Image of a linear map

Just as with the kernel, we can show in general that the image of a linear map is always a subspace. This sometimes allows us to find simpler proofs that a given set is a subspace.

Example (Linear map on the polynomials of degree smaller or equal to 1)

We consider an example of a linear map defined on ${\displaystyle V:=\mathbb {R} [X]_{\leq 1}}$, the vector space of real polynomials of degree less than or equal to 1, and mapping into ${\displaystyle \mathbb {R} ^{3}}$.

We define a map that assigns to a polynomial in the vector space ${\displaystyle V}$ the vector of its function values at the positions ${\displaystyle 0}$, ${\displaystyle 2}$ and ${\displaystyle 4}$. So we define the linear map ${\displaystyle f:V\to \mathbb {R} ^{3}}$ by

${\displaystyle f(p)={\begin{pmatrix}p(0)\\p(2)\\p(4)\end{pmatrix}}}$

We recall how the addition and scalar multiplication of polynomials is defined: For two polynomials ${\displaystyle p_{1},p_{2}\in V}$ the sum ${\displaystyle p_{1}+p_{2}}$ is defined via ${\displaystyle (p_{1}+p_{2})(x)=p_{1}(x)+p_{2}(x)}$ for all ${\displaystyle x}$. For a scalar ${\displaystyle \lambda \in \mathbb {R} }$ and a polynomial ${\displaystyle p\in V}$ the scalar multiplication ${\displaystyle \lambda \cdot p}$ is given by ${\displaystyle (\lambda \cdot p)(x)=\lambda \cdot p(x)}$.

First, we prove that ${\displaystyle f}$ is indeed a linear map. For this we have to prove additivity and homogeneity. So we choose ${\displaystyle p_{1},p_{2}\in V}$ and ${\displaystyle \lambda \in \mathbb {R} }$.

Additivity:

{\displaystyle {\begin{aligned}f(p_{1}+p_{2})&={\begin{pmatrix}(p_{1}+p_{2})(0)\\(p_{1}+p_{2})(2)\\(p_{1}+p_{2})(4)\end{pmatrix}}={\begin{pmatrix}p_{1}(0)+p_{2}(0)\\p_{1}(2)+p_{2}(2)\\p_{1}(4)+p_{2}(4)\end{pmatrix}}\\[0.3em]&={\begin{pmatrix}p_{1}(0)\\p_{1}(2)\\p_{1}(4)\end{pmatrix}}+{\begin{pmatrix}p_{2}(0)\\p_{2}(2)\\p_{2}(4)\end{pmatrix}}=f(p_{1})+f(p_{2}).\end{aligned}}}

Homogeneity:

{\displaystyle {\begin{aligned}f(\lambda p)&={\begin{pmatrix}(\lambda p)(0)\\(\lambda p)(2)\\(\lambda p)(4)\end{pmatrix}}={\begin{pmatrix}\lambda \cdot p(0)\\\lambda \cdot p(2)\\\lambda \cdot p(4)\end{pmatrix}}=\lambda \cdot {\begin{pmatrix}p(0)\\p(2)\\p(4)\end{pmatrix}}=\lambda \cdot f(p).\end{aligned}}}

Thus we know that ${\displaystyle U=\{(p(0),p(2),p(4))^{T}\mid p\in V\}}$, being the image of the linear map ${\displaystyle f}$, is a subspace of ${\displaystyle \mathbb {R} ^{3}}$.

If we look at the calculation again, we realize that the ${\displaystyle x}$-values ${\displaystyle 0,2}$ and ${\displaystyle 4}$ did not matter at all. We could have chosen other ones, as well. The proof goes the same way for the statement:

Let ${\displaystyle r,s,t\in \mathbb {R} }$ (not necessarily distinct). The map ${\displaystyle f:V\to \mathbb {R} ^{3}}$, ${\displaystyle f(p)=(p(r),p(s),p(t))^{T}}$ is linear and the image of ${\displaystyle f}$ is a subspace of ${\displaystyle \mathbb {R} ^{3}}$.

We know that ${\displaystyle U=f(V)\subseteq \mathbb {R} ^{3}}$ is a subspace. We can also find an explicit representation for ${\displaystyle U}$: a polynomial ${\displaystyle p\in V}$ has the form ${\displaystyle p(x)=ax+b}$ for some ${\displaystyle a,b\in \mathbb {R} }$. Moreover, ${\displaystyle p(0)=b}$, ${\displaystyle p(2)=2a+b}$ and ${\displaystyle p(4)=4a+b}$.

The subspace ${\displaystyle U}$ thus has the form

${\displaystyle U=\left\{{\begin{pmatrix}b\\2a+b\\4a+b\end{pmatrix}}{\Bigg |}a,b\in \mathbb {R} \right\}=\left\{a\cdot {\begin{pmatrix}0\\2\\4\end{pmatrix}}+b\cdot {\begin{pmatrix}1\\1\\1\end{pmatrix}}{\Bigg |}a,b\in \mathbb {R} \right\}.}$

So it is a plane in ${\displaystyle \mathbb {R} ^{3}}$.
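The explicit representation can be cross-checked numerically: for ${\displaystyle p(x)=ax+b}$ we compare ${\displaystyle f(p)=(p(0),p(2),p(4))^{T}}$ with the combination ${\displaystyle a\cdot (0,2,4)^{T}+b\cdot (1,1,1)^{T}}$ derived above. The helper names in this sketch are illustrative.

```python
# Check that evaluating p(x) = a*x + b at 0, 2, 4 gives the same vector
# as the plane parametrization a*(0,2,4) + b*(1,1,1).

def f_of_p(a, b):
    p = lambda x: a * x + b
    return (p(0), p(2), p(4))

def plane_point(a, b):
    return (a * 0 + b * 1, a * 2 + b * 1, a * 4 + b * 1)

for a in (-1, 0, 2):
    for b in (-3, 0, 5):
        assert f_of_p(a, b) == plane_point(a, b)
```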

### Span of vectors

We will later prove a general theorem that the span of any subset ${\displaystyle M\subseteq V}$ is a subspace of ${\displaystyle V}$.

This allows us to shorten one of the proofs above:

We proved above that for ${\displaystyle u\in \mathbb {R} ^{2}}$ and ${\displaystyle U:=\{\lambda \cdot u|\ \lambda \in \mathbb {R} \}}$ the set ${\displaystyle U}$ is a subspace of the ${\displaystyle \mathbb {R} }$-vector space ${\displaystyle \mathbb {R} ^{2}}$.

The set ${\displaystyle U}$ is exactly the span of the set ${\displaystyle M=\{u\}}$ in the vector space ${\displaystyle V=\mathbb {R} ^{2}}$. The span of ${\displaystyle M}$ consists of all linear combinations of elements from ${\displaystyle M}$. In our case, these are exactly the multiples of ${\displaystyle u}$. Therefore ${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{2}}$.

## Exercises

Exercise (A subspace of ${\displaystyle \mathbb {R} ^{2}}$?)

Is ${\displaystyle U=\left\{x={\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}\in \mathbb {R} ^{2}\mid x_{1}^{2}=x_{2}^{2}\right\}}$ a subspace of ${\displaystyle \mathbb {R} ^{2}}$?

How to get to the proof? (A subspace of ${\displaystyle \mathbb {R} ^{2}}$?)

The equation ${\displaystyle x_{1}^{2}=x_{2}^{2}}$ is equivalent to ${\displaystyle x_{1}=x_{2}\vee x_{1}=-x_{2}}$. These equations describe the two diagonals of ${\displaystyle \mathbb {R} ^{2}}$. This set is just a "45-degree-rotated" version of the coordinate axes and hence NOT a subspace. Again, the set is closed under scalar multiplication, but not closed under addition. We can take one vector from each of the two diagonals to find a counterexample.

Solution (A subspace of ${\displaystyle \mathbb {R} ^{2}}$?)

The set ${\displaystyle U}$ is not a subspace, because ${\displaystyle u_{1}=(1,1)^{T}}$ and ${\displaystyle u_{2}=(-1,1)^{T}}$ are in ${\displaystyle U}$. But ${\displaystyle u_{1}+u_{2}=(0,2)^{T}}$ is not in ${\displaystyle U}$.
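The counterexample from the solution can be replayed in a short sketch; `in_U` is an illustrative membership predicate for the union of the two diagonals.

```python
# The set {(x1, x2) : x1^2 = x2^2} fails closure under addition.

def in_U(v):
    x1, x2 = v
    return x1**2 == x2**2

u1 = (1, 1)
u2 = (-1, 1)
assert in_U(u1) and in_U(u2)

s = (u1[0] + u2[0], u1[1] + u2[1])   # the sum (0, 2)
assert not in_U(s)                   # ... lies on neither diagonal

# scalar multiples, by contrast, stay in U:
assert in_U((3 * u1[0], 3 * u1[1]))
```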

Exercise (A subspace of ${\displaystyle \mathbb {R} ^{2}}$? (Part II))

Is ${\displaystyle U=\left\{x={\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}\in \mathbb {R} ^{2}\mid x_{1}^{3}=x_{2}^{3}\right\}}$ a subspace of ${\displaystyle \mathbb {R} ^{2}}$?

How to get to the proof? (A subspace of ${\displaystyle \mathbb {R} ^{2}}$? (Part II))

After the last exercise we are somewhat wary that powers in the conditions can be critical, as they "curve" surfaces. But for real numbers, ${\displaystyle x_{1}^{3}=x_{2}^{3}}$ is equivalent to ${\displaystyle x_{1}=x_{2}}$ (our surface is flat), and the elements of ${\displaystyle U}$ are thus exactly the multiples of the vector ${\displaystyle {\begin{pmatrix}1\\1\end{pmatrix}}}$.

Solution (A subspace of ${\displaystyle \mathbb {R} ^{2}}$? (Part II))

${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{2}}$. Since ${\displaystyle x_{1}^{3}=x_{2}^{3}\iff x_{1}=x_{2}}$ we have

${\displaystyle U=\left\{x={\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}\mid x_{1}=x_{2}\right\}=\left\{t\cdot {\begin{pmatrix}1\\1\end{pmatrix}}{\bigg |}\,t\in \mathbb {R} \right\}.}$

According to an example from above, ${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{2}}$.

Exercise (A subspace of ${\displaystyle \mathbb {R} ^{3}}$?)

Is ${\displaystyle U=\left\{x={\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\in \mathbb {R} ^{3}{\bigg |}x=\alpha \cdot {\begin{pmatrix}1\\2\\3\end{pmatrix}}+\beta \cdot {\begin{pmatrix}2\\3\\1\end{pmatrix}},\alpha ,\beta \in \mathbb {R} ,x_{2}=0\right\}}$ a subspace of ${\displaystyle V=\mathbb {R} ^{3}}$?

How to get to the proof? (A subspace of ${\displaystyle \mathbb {R} ^{3}}$?)

We check the subspace criterion, as in the examples above.

Solution (A subspace of ${\displaystyle \mathbb {R} ^{3}}$?)

Since we consider a subset of ${\displaystyle V=\mathbb {R} ^{3}}$, we have ${\displaystyle U\subseteq V}$.

The zero vector ${\displaystyle 0}$ satisfies both conditions: For ${\displaystyle \alpha =\beta =0}$ we have ${\displaystyle x=0\cdot {\begin{pmatrix}1\\2\\3\end{pmatrix}}+0\cdot {\begin{pmatrix}2\\3\\1\end{pmatrix}}=0}$, and the second component of the zero vector is zero. So ${\displaystyle U}$ is not empty.

Now we have to show that for ${\displaystyle u,v\in U}$ also ${\displaystyle u+v\in U}$ holds. To do this, we again check both conditions separately.

If ${\displaystyle u=\alpha _{1}\cdot (1,2,3)^{T}+\beta _{1}\cdot (2,3,1)^{T}}$ and ${\displaystyle v=\alpha _{2}\cdot (1,2,3)^{T}+\beta _{2}\cdot (2,3,1)^{T}}$, then ${\displaystyle u+v=(\alpha _{1}+\alpha _{2})\cdot (1,2,3)^{T}+(\beta _{1}+\beta _{2})\cdot (2,3,1)^{T}}$. So ${\displaystyle u+v}$ again has the required form.

If the second components of ${\displaystyle u}$ and ${\displaystyle v}$ are zero, then this is also true for the second component of ${\displaystyle u+v}$.

Now the last thing we have to prove is that with ${\displaystyle u\in U}$ and ${\displaystyle \lambda \in \mathbb {R} }$ we also have ${\displaystyle \lambda \cdot u\in U}$.

We see that ${\displaystyle \lambda \cdot u=(\lambda \cdot \alpha _{1})\cdot (1,2,3)^{T}+(\lambda \cdot \beta _{1})\cdot (2,3,1)^{T}}$, so it again has the required form. Since the second component of ${\displaystyle u}$ is zero, the second component of ${\displaystyle \lambda \cdot u}$ is zero as well.

Thus ${\displaystyle U}$ is indeed a subspace of ${\displaystyle V}$.

Solution (A subspace of ${\displaystyle \mathbb {R} ^{3}}$? (Alternative solution))

Later, we will have more abstract methods available, with which the subspace proof can be carried out more easily.

We see that ${\displaystyle U=U_{1}\cap U_{2}}$, where ${\displaystyle U_{1}}$ is the span of ${\displaystyle (1,2,3)^{T}}$ and ${\displaystyle (2,3,1)^{T}}$. The span is always a subspace.

${\displaystyle U_{2}}$ is the kernel of the linear map ${\displaystyle f:\mathbb {R} ^{3}\to \mathbb {R} }$, ${\displaystyle f((x_{1},x_{2},x_{3})^{T})=x_{2}}$. The kernel of a linear map is also always a subspace.

In the next section we prove that the intersection of two subspaces is again a subspace, so we can conclude that ${\displaystyle U}$ is a subspace of ${\displaystyle \mathbb {R} ^{3}}$.

Exercise (A subspace of ${\displaystyle K^{3}}$?)

Let ${\displaystyle K}$ be any field and ${\displaystyle V=K^{3}}$. Let ${\displaystyle c\in K}$. We define ${\displaystyle U=\{\,(x_{1},x_{2},x_{3})\in K^{3}\,\vert \,x_{1}+x_{2}+x_{3}=c\,\}}$. What conditions must ${\displaystyle c}$ satisfy for ${\displaystyle U}$ to be a subspace of ${\displaystyle V}$?

Solution (A subspace of ${\displaystyle K^{3}}$?)

We first assume that ${\displaystyle U}$ is a subspace of ${\displaystyle V}$. Then ${\displaystyle (0,0,0)\in U}$ must hold. But according to the definition of ${\displaystyle U}$, this means ${\displaystyle 0+0+0=c}$. So ${\displaystyle U}$ can be a subspace only if ${\displaystyle c=0}$.

We now want to check whether ${\displaystyle U}$ is indeed a subspace in the case ${\displaystyle c=0}$. For this, we use the subspace criterion and assume ${\displaystyle c=0}$:

For ${\displaystyle (x_{1},x_{2},x_{3})=(0,0,0)}$ we have that ${\displaystyle x_{1}+x_{2}+x_{3}=0+0+0=0=c}$. Hence, ${\displaystyle 0_{V}\in U}$.

If ${\displaystyle x=(x_{1},x_{2},x_{3})}$ and ${\displaystyle y=(y_{1},y_{2},y_{3})}$ are in ${\displaystyle U}$, then we have that ${\displaystyle x_{1}+x_{2}+x_{3}=c=0}$ and ${\displaystyle y_{1}+y_{2}+y_{3}=c=0}$. Thus ${\displaystyle (x_{1}+y_{1})+(x_{2}+y_{2})+(x_{3}+y_{3})=0}$, so ${\displaystyle x+y\in U}$.

If ${\displaystyle x=(x_{1},x_{2},x_{3})\in U}$, then we have that ${\displaystyle x_{1}+x_{2}+x_{3}=c=0}$. For every ${\displaystyle \lambda \in K}$ we have that also ${\displaystyle \lambda x_{1}+\lambda x_{2}+\lambda x_{3}=\lambda (x_{1}+x_{2}+x_{3})=0}$, so ${\displaystyle \lambda x\in U}$.

Thus the conditions of the subspace criterion hold and ${\displaystyle U}$ is a subspace of ${\displaystyle V}$ for ${\displaystyle c=0}$.
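The case distinction on ${\displaystyle c}$ can also be illustrated numerically. In this sketch, `in_U` is an illustrative membership predicate for the set ${\displaystyle x_{1}+x_{2}+x_{3}=c}$ over the reals.

```python
# The set {x : x1 + x2 + x3 = c} contains the zero vector exactly when c = 0;
# for c = 0 we also spot-check closure under addition and scaling.

def in_U(v, c):
    return sum(v) == c

assert in_U((0, 0, 0), 0)        # c = 0: zero vector lies in U
assert not in_U((0, 0, 0), 1)    # c = 1: zero vector missing, no subspace

samples = [(1, -1, 0), (2, 3, -5)]
for v in samples:
    assert in_U(v, 0)

v, w = samples
assert in_U(tuple(vi + wi for vi, wi in zip(v, w)), 0)   # closed under +
assert in_U(tuple(4 * vi for vi in v), 0)                # closed under scaling
```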