Sum of subspaces – Serlo

In this article, we define the sum of two subspaces. This sum will again be a subspace, containing the two initial subspaces. We can think of the sum as a structure-preserving union.

What is the sum of subspaces?

Consider two subspaces $U$ and $W$ of a vector space $V$. Now we want to combine these subspaces into a larger subspace that contains $U$ and $W$. A first approach could be to consider the union $U \cup W$. However, we have already seen in the article on union and intersection of vector spaces that the union is in general not a subspace.

Why is that the case? For $u \in U$ and $w \in W$, the vector $u + w$ is not always in $U \cup W$, as you can see from the following example: if $U$ is the $x$-axis and $W$ is the $y$-axis in $\mathbb{R}^2$, then $(1,0)^T \in U$ and $(0,1)^T \in W$, but the sum $(1,0)^T + (0,1)^T = (1,1)^T$ lies on neither line and is therefore not in $U \cup W$.

Union of two lines in two-dimensional space

In order to solve the problem, we add all sums of the form $u + w$ with $u \in U$ and $w \in W$ to the union of the two subspaces $U$ and $W$. That means, we consider $(U \cup W) \cup \{u + w \mid u \in U,\ w \in W\}$. This expression still seems very complicated, but we can simplify it to $\{u + w \mid u \in U,\ w \in W\}$.

Question: Why is $(U \cup W) \cup \{u + w \mid u \in U,\ w \in W\} = \{u + w \mid u \in U,\ w \in W\}$?

Since $U$ and $W$ are subspaces, the zero vector $0$ is contained in both subspaces. Therefore, the following applies to all $u \in U$:

$u = u + 0 \in \{u + w \mid u \in U,\ w \in W\}.$

Therefore, $U \subseteq \{u + w \mid u \in U,\ w \in W\}$. Analogously, we get $W \subseteq \{u + w \mid u \in U,\ w \in W\}$.

We call this set the sum of $U$ and $W$, because it consists of the sums of vectors from $U$ and $W$. Later we will show that this is a subspace.
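As a quick numerical illustration (our own sketch, not part of the article; the helper `in_span` and the chosen lines are illustrative), one can check that a sum of vectors from the $x$-axis and the $y$-axis leaves their union, while every vector of $\mathbb{R}^2$ is such a sum:

```python
import numpy as np

def in_span(generators, v, tol=1e-10):
    """Check whether v lies in the span of the given generator vectors."""
    A = np.column_stack(generators)
    rank_A = np.linalg.matrix_rank(A, tol)
    rank_Av = np.linalg.matrix_rank(np.column_stack([A, v]), tol)
    return rank_A == rank_Av

# illustrative choice: U = x-axis and W = y-axis in R^2
u = np.array([1.0, 0.0])   # a vector of U
w = np.array([0.0, 1.0])   # a vector of W

# u + w lies neither in U nor in W, so the union U ∪ W is not closed under addition
print(in_span([u], u + w), in_span([w], u + w))   # False False

# but every vector of R^2 can be written as a sum of a vector from U and one from W
v = np.array([3.0, -2.0])
print(np.allclose(v, v[0] * u + v[1] * w))        # True
```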

Definition

Definition (Sum of two subspaces)

Let $U$ and $W$ be two subspaces of a $K$-vector space $V$. Then we define the sum of $U$ and $W$ as

$U + W := \{u + w \mid u \in U,\ w \in W\}.$

The sum is a subspace

We still have to prove that $U + W$ is a subspace.

Theorem (The sum is a subspace)

The sum $U + W = \{u + w \mid u \in U,\ w \in W\}$ is a subspace of $V$.

How to get to the proof? (The sum is a subspace)

We need to check the subspace criterion. To do so, we utilise the fact that all vectors $v \in U + W$ can be written as $v = u + w$ with $u \in U$ and $w \in W$. We can then trace the conditions of the subspace criterion back to the respective properties of $U$ and $W$.

Proof (The sum is a subspace)

Proof step: $0 \in U + W$

Since $U$ and $W$ are subspaces, we have $0 \in U$ and $0 \in W$. Thus, $0 = 0 + 0 \in U + W$.

Proof step: $U + W$ is closed with respect to addition

Let $v_1, v_2 \in U + W$. We must show that $v_1 + v_2 \in U + W$. According to the definition of $U + W$, there exist $u_1, u_2 \in U$ and $w_1, w_2 \in W$, such that $v_1 = u_1 + w_1$ and $v_2 = u_2 + w_2$. We know that $U$ and $W$ are subspaces and therefore closed with respect to addition. Hence,

$v_1 + v_2 = (u_1 + w_1) + (u_2 + w_2) = \underbrace{(u_1 + u_2)}_{\in U} + \underbrace{(w_1 + w_2)}_{\in W} \in U + W.$

Proof step: $U + W$ is closed with respect to scalar multiplication

Let $v \in U + W$ and $\lambda \in K$. We must show that $\lambda v \in U + W$. According to the definition of $U + W$, there exist $u \in U$ and $w \in W$, such that $v = u + w$. Since $U$ and $W$ are closed with respect to scalar multiplication, we have

$\lambda v = \lambda (u + w) = \underbrace{\lambda u}_{\in U} + \underbrace{\lambda w}_{\in W} \in U + W.$
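As a small computational side note (our own sketch with an assumed choice of $U$ and $W$ in $\mathbb{R}^2$, not part of the proof), the decompositions used in the two proof steps can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative choice: U = x-axis and W = the diagonal line in R^2
u_dir = np.array([1.0, 0.0])
w_dir = np.array([1.0, 1.0])

# two elements of U + W, each remembered together with its decomposition
u1, u2 = rng.normal() * u_dir, rng.normal() * u_dir   # vectors in U
w1, w2 = rng.normal() * w_dir, rng.normal() * w_dir   # vectors in W
v1, v2 = u1 + w1, u2 + w2                             # vectors in U + W

# the proof's decomposition of the sum: u1 + u2 lies in U, w1 + w2 lies in W
assert np.allclose(v1 + v2, (u1 + u2) + (w1 + w2))

# and of a scalar multiple: lam * u1 lies in U, lam * w1 lies in W
lam = 2.5
assert np.allclose(lam * v1, lam * u1 + lam * w1)

print("decompositions check out")
```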

Examples

Sum of two lines in ℝ²

The lines $g_1$ and $g_2$

We consider the following two lines in $\mathbb{R}^2$:

$g_1 := \{\lambda \cdot (1, 0)^T \mid \lambda \in \mathbb{R}\} \quad \text{and} \quad g_2 := \{\mu \cdot (1, 1)^T \mid \mu \in \mathbb{R}\}$

So $g_1$ is the $x$-axis and $g_2$ is the line that runs through the origin and the point $(1, 1)^T$. What is the sum $g_1 + g_2$?

Using the definition we can calculate a convenient set description for $g_1 + g_2$:

$g_1 + g_2 = \{\lambda \cdot (1, 0)^T + \mu \cdot (1, 1)^T \mid \lambda, \mu \in \mathbb{R}\} = \{(\lambda + \mu,\ \mu)^T \mid \lambda, \mu \in \mathbb{R}\}$

We can write each vector in $\mathbb{R}^2$ in this form with matching $\lambda, \mu \in \mathbb{R}$. Specifically, for each vector $(a, b)^T \in \mathbb{R}^2$ we can find scalars $\lambda$ and $\mu$ such that $(a, b)^T = \lambda \cdot (1, 0)^T + \mu \cdot (1, 1)^T$, namely $\mu = b$ and $\lambda = a - b$. Therefore, $g_1 + g_2 = \mathbb{R}^2$ holds.

Intuitively, you can immediately see that $g_1 + g_2 = \mathbb{R}^2$. This is because $g_1 + g_2$ is a subspace of $\mathbb{R}^2$, which contains the straight lines $g_1$ and $g_2$. The only subspaces of $\mathbb{R}^2$ are the null space, lines that run through the origin and $\mathbb{R}^2$ itself. As the straight lines $g_1$ and $g_2$ do not coincide but are different, $g_1 + g_2$ cannot be a line. Therefore, we must have $g_1 + g_2 = \mathbb{R}^2$.
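The same conclusion can be cross-checked numerically; the following sketch assumes the direction vectors $(1,0)^T$ and $(1,1)^T$ chosen above: two non-parallel directions in $\mathbb{R}^2$ have rank $2$, so their spans add up to the whole plane.

```python
import numpy as np

# direction vectors of g1 and g2 as chosen in this example
d1 = np.array([1.0, 0.0])
d2 = np.array([1.0, 1.0])

# g1 + g2 is spanned by d1 and d2; rank 2 means the sum is all of R^2
print(np.linalg.matrix_rank(np.column_stack([d1, d2])))  # 2
```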

Sum of two lines in ℝ³

The lines $g_1$ and $g_2$

Consider the following lines in $\mathbb{R}^3$:

$g_1 := \{\lambda \cdot (1, 0, 0)^T \mid \lambda \in \mathbb{R}\} \quad \text{and} \quad g_2 := \{\mu \cdot (0, 1, 0)^T \mid \mu \in \mathbb{R}\}$

Here $g_1$ is the line in $\mathbb{R}^3$ that runs through the origin and the point $(1, 0, 0)^T$ and $g_2$ is the line that runs through the origin and $(0, 1, 0)^T$. We want to determine the sum $g_1 + g_2$. Using the definition, we obtain

$g_1 + g_2 = \{\lambda \cdot (1, 0, 0)^T + \mu \cdot (0, 1, 0)^T \mid \lambda, \mu \in \mathbb{R}\} = \{(\lambda, \mu, 0)^T \mid \lambda, \mu \in \mathbb{R}\}.$

So $g_1 + g_2$ is the plane that is spanned by the vectors $(1, 0, 0)^T$ and $(0, 1, 0)^T$, namely the $x$-$y$-plane.
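Again a brief numerical cross-check, assuming the direction vectors chosen in this example: the two directions span a two-dimensional subspace of $\mathbb{R}^3$, and a vector lies in $g_1 + g_2$ exactly when it is a combination of them.

```python
import numpy as np

# direction vectors of g1 and g2 as chosen in this example
A = np.column_stack([np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])])

print(np.linalg.matrix_rank(A))  # 2: the sum g1 + g2 is a plane in R^3

# a vector with vanishing third component is a combination of the two directions ...
v_in = np.array([3.0, -1.0, 0.0])
coeffs, *_ = np.linalg.lstsq(A, v_in, rcond=None)
print(np.allclose(A @ coeffs, v_in))   # True

# ... while a vector with a z-component is not
v_out = np.array([0.0, 0.0, 1.0])
coeffs, *_ = np.linalg.lstsq(A, v_out, rcond=None)
print(np.allclose(A @ coeffs, v_out))  # False
```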

Sum of two planes in ℝ³

The planes $E_1$ and $E_2$

Consider the following two planes in $\mathbb{R}^3$:

$E_1 := \{(x, y, 0)^T \mid x, y \in \mathbb{R}\} \quad \text{and} \quad E_2 := \{(0, y, z)^T \mid y, z \in \mathbb{R}\}$

The planes are not equal. We can see this, for example, from the fact that the vector $(1, 0, 0)^T$ lies in $E_1$, but not in $E_2$. Therefore, the two planes should intuitively span the entire space $\mathbb{R}^3$. So we can initially assume that $E_1 + E_2 = \mathbb{R}^3$.

We now try to prove this assumption. To do so, we have to show that each vector $v \in \mathbb{R}^3$ lies in the sum $E_1 + E_2$. We must therefore find vectors $v_1 \in E_1$ and $v_2 \in E_2$ such that $v = v_1 + v_2$. Then $v \in E_1 + E_2$ applies. Here we can use the definitions of $E_1$ and $E_2$: Each vector $v_1 \in E_1$ can be written as $v_1 = (x_1, y_1, 0)^T$ with $x_1, y_1 \in \mathbb{R}$. Similarly, each vector $v_2 \in E_2$ can be written as $v_2 = (0, y_2, z_2)^T$ with $y_2, z_2 \in \mathbb{R}$. So for the vector $v = (a, b, c)^T \in \mathbb{R}^3$ we want to find numbers $x_1, y_1, y_2, z_2 \in \mathbb{R}$ satisfying

$(a, b, c)^T = (x_1, y_1, 0)^T + (0, y_2, z_2)^T.$

We can re-write this as

$(a, b, c)^T = (x_1,\ y_1 + y_2,\ z_2)^T.$

How can we choose $x_1, y_1, y_2, z_2$ such that the above equation is satisfied? For instance,

$x_1 = a, \quad y_1 = b, \quad y_2 = 0, \quad z_2 = c$

will do this job.

To summarise, the following applies to any vector $(a, b, c)^T \in \mathbb{R}^3$:

$(a, b, c)^T = \underbrace{(a, b, 0)^T}_{\in E_1} + \underbrace{(0, 0, c)^T}_{\in E_2} \in E_1 + E_2.$

Therefore, $E_1 + E_2 = \mathbb{R}^3$ indeed holds, i.e. the two planes together span the entire space $\mathbb{R}^3$.
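The decomposition found above can also be written as a tiny program (our own sketch, using the planes assumed in this example):

```python
import numpy as np

def decompose(v):
    """Split v in R^3 into a part in E1 (the x-y-plane) and a part in E2 (the y-z-plane)."""
    a, b, c = v
    v1 = np.array([a, b, 0.0])    # lies in E1
    v2 = np.array([0.0, 0.0, c])  # lies in E2
    return v1, v2

v = np.array([4.0, -7.0, 2.5])
v1, v2 = decompose(v)
print(np.allclose(v, v1 + v2))  # True: v1 + v2 recovers v, so v is in E1 + E2
```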

Absorption property of the sum

The line $g$ and the plane $E$

We have already looked at a few examples of sums in the spaces $\mathbb{R}^2$ and $\mathbb{R}^3$. Now let's look at another example in $\mathbb{R}^3$. Let

$g := \{\lambda \cdot (1, 1, 0)^T \mid \lambda \in \mathbb{R}\} \quad \text{and} \quad E := \{(x, y, 0)^T \mid x, y \in \mathbb{R}\}.$

Then $g$ is the line that runs through the origin and through the point $(1, 1, 0)^T$. The subspace $E$ is the $x$-$y$-plane.

What is the sum $g + E$ of these subspaces? The line $g$ lies in the $x$-$y$-plane, i.e. in $E$. The sum is intuitively the subspace built from $g$ and $E$ together. Since $g$ is already contained in $E$, the sum should simply be $E$, i.e. $g + E = E$. This is indeed the case, as the exercise below shows.

Intuitively, this should also apply more generally: Let $U$ and $W$ be two subspaces of an arbitrary vector space $V$. If $U$ lies in $W$, i.e. $U \subseteq W$, then the sum should simply result in $W$, that is $U + W = W$. This is called the absorption property, as $U$ is absorbed by $W$ when taking the sum. We prove it in the following exercise.

Exercise (Absorption property of the sum)

Let $V$ be a $K$-vector space and let $U$ and $W$ be two subspaces of $V$. Show: Whenever $U \subseteq W$, it follows that $U + W = W$.

Solution (Absorption property of the sum)

We assume that $U \subseteq W$ applies and prove that $U + W = W$. To show this equality, we prove the two inclusions $W \subseteq U + W$ and $U + W \subseteq W$.

Proof step: $W \subseteq U + W$

Let $w \in W$. Then,

$w = \underbrace{0}_{\in U} + \underbrace{w}_{\in W} \in U + W.$

Proof step: $U + W \subseteq W$

Let $v \in U + W$. Then there are vectors $u \in U$ and $w \in W$, such that $v = u + w$. Since $U \subseteq W$ we have $u \in W$. We know that $W$ is a subspace and therefore closed under addition. Furthermore, $w \in W$. Thus we get $v = u + w \in W$.

Hint

From the absorption property, we conclude $U + U = U$ for any subspace $U$. This is because every subspace is contained within itself, i.e., $U \subseteq U$.
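Numerically, absorption simply says that adding the generators of a contained subspace does not enlarge the span. A minimal sketch, assuming the line $g$ and the plane $E$ from the example above:

```python
import numpy as np

# E = x-y-plane in R^3 and g = a line inside it (the choice from the example above)
E_gens = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
g_gens = [np.array([1.0, 1.0, 0.0])]

dim_E = np.linalg.matrix_rank(np.column_stack(E_gens))
dim_g_plus_E = np.linalg.matrix_rank(np.column_stack(g_gens + E_gens))

print(dim_E, dim_g_plus_E)  # 2 2: adding g's generator changes nothing, so g + E = E
```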

Alternative definitions

Using the intersection

We have constructed a subspace $U + W$ of $V$, which contains the two subspaces $U$ and $W$. Since we have included only "necessary" vectors in our construction of $U + W$, this sum should be the smallest subspace that contains both $U$ and $W$.

We can also describe the smallest subspace containing $U$ and $W$ differently: We first consider all subspaces that contain $U$ and $W$ and then take the intersection of these subspaces. This intersection still contains $U$ and $W$ and is also a subspace, since the intersection of any number of subspaces is again a subspace. Intuitively, there should be no smaller subspace with this property. Thus, we also obtain the smallest subspace that contains both $U$ and $W$. According to these considerations, it should therefore be the case that $U + W$ is equal to the intersection of all subspaces containing $U$ and $W$. We now want to prove this:

Theorem (Definition of the sum over the intersection of subspaces)

Let $V$ be a vector space, as well as $U$ and $W$ two subspaces of $V$. Then the following holds for $U + W$:

$U + W = \bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X$

Proof (Definition of the sum over the intersection of subspaces)

We prove the two inclusions $\bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X \subseteq U + W$ and $U + W \subseteq \bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X$.

Proof step: $\bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X \subseteq U + W$

It is sufficient to show that $U + W$ is a subspace that contains $U \cup W$. Then $U + W$ is one of the subspaces $X$ over which the intersection is taken, and it follows from the definition of the intersection that

$\bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X \subseteq U + W.$

We have already shown that $U + W$ is a subspace. We first show that $U$ is contained in $U + W$. Then, $W$ being contained in $U + W$ will follow analogously. So let $u \in U$. Since $W$ is a subspace, $0 \in W$. Therefore, $u = u + 0 \in U + W$.

Proof step: $U + W \subseteq \bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X$

We must show that every subspace of $V$ that contains both $U$ and $W$ must also contain $U + W$.

Let $X$ be such a subspace. Let $v \in U + W$. Then there exist $u \in U$ and $w \in W$ with $v = u + w$.

In particular, $u, w \in X$ applies. Since $X$ is a subspace, $v = u + w \in X$ holds.

We have thus shown: $U + W \subseteq \bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X$.

This gives us the first of two alternative definitions:

Definition (Sum of subspaces via the intersection)

Let $V$ be a vector space, as well as $U$ and $W$ two subspaces of $V$. Then the sum of $U$ and $W$ is given by

$U + W = \bigcap_{\substack{X \subseteq V \text{ subspace},\\ U \cup W \subseteq X}} X.$

Using the span

We can describe the smallest subspace containing $U$ and $W$ in yet a third way. In the article "span", we saw that for a given subset $M$ of $V$, the span of $M$ is the smallest subspace containing $M$. Therefore, $\operatorname{span}(U \cup W)$ is the smallest subspace that contains $U$ and $W$. So it must also be equal to the sum $U + W$.

Theorem (Definition via the span)

Let $V$ be a vector space, as well as $U$ and $W$ two subspaces of $V$. Then,

$U + W = \operatorname{span}(U \cup W).$

Proof (Definition via the span)

We show the two inclusions $U + W \subseteq \operatorname{span}(U \cup W)$ and $\operatorname{span}(U \cup W) \subseteq U + W$.

Proof step: $U + W \subseteq \operatorname{span}(U \cup W)$

Let $v \in U + W$. Then there exist $u \in U$ and $w \in W$ with $v = u + w$. Because the span of $U \cup W$ consists of all linear combinations of vectors from $U$ and $W$, we indeed have $v = u + w \in \operatorname{span}(U \cup W)$.

Proof step: $\operatorname{span}(U \cup W) \subseteq U + W$

We have seen that $\operatorname{span}(U \cup W)$ is the smallest subspace that contains $U \cup W$. Since $U + W$ is a subspace of $V$ that contains $U \cup W$, we finally obtain $\operatorname{span}(U \cup W) \subseteq U + W$.
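In coordinates, this characterization is what makes the sum easy to compute: if $U$ and $W$ are given by finitely many generators, then $U + W$ is spanned by the combined list of generators. A minimal sketch (the function name `in_sum` and the example vectors are our own choices):

```python
import numpy as np

def in_sum(gens_U, gens_W, v, tol=1e-10):
    """Check whether v lies in U + W = span of the combined generators."""
    A = np.column_stack(gens_U + gens_W)
    rank_A = np.linalg.matrix_rank(A, tol)
    rank_Av = np.linalg.matrix_rank(np.column_stack([A, v]), tol)
    return rank_A == rank_Av

# hypothetical example: U spanned by (1,0,0), W spanned by (0,1,1) in R^3
U = [np.array([1.0, 0.0, 0.0])]
W = [np.array([0.0, 1.0, 1.0])]

print(in_sum(U, W, np.array([2.0, 3.0, 3.0])))  # True:  2*(1,0,0) + 3*(0,1,1)
print(in_sum(U, W, np.array([0.0, 1.0, 0.0])))  # False: not in the plane U + W
```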

Dimension formula

Now that we know what the sum of two subspaces $U$ and $W$ of a vector space $V$ is, we can ask ourselves how large the sum is. The sum of subspaces is the vector space analogue of the union of sets. For two finite sets $A$ and $B$, the union $A \cup B$ has at most $|A| + |B|$ elements. If $A$ and $B$ share elements, i.e. have a non-empty intersection, then $A \cup B$ has fewer than $|A| + |B|$ elements, because otherwise we would count the elements from $A \cap B$ twice. This gives us the formula

$|A \cup B| = |A| + |B| - |A \cap B|.$

In order to transfer this formula to vector spaces, we need the correct concept of the size of a vector space, i.e. the vector space analogue of the cardinality of a set. This is exactly the idea of the dimension of a vector space. Therefore, if an analogous formula holds for vector spaces, the following should be true:

$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W).$

If $\dim(U + W)$ is finite, we can convert this formula to a formula for $\dim(U \cap W)$, namely

$\dim(U \cap W) = \dim(U) + \dim(W) - \dim(U + W).$

Before we prove our assumption, we will test it with a few examples:

The lines $g_1$ and $g_2$

Let us reconsider the two lines from the example above:

$g_1 = \{\lambda \cdot (1, 0)^T \mid \lambda \in \mathbb{R}\} \quad \text{and} \quad g_2 = \{\mu \cdot (1, 1)^T \mid \mu \in \mathbb{R}\}$

We have already calculated above that $g_1 + g_2 = \mathbb{R}^2$. This fits our assumption: $g_1 + g_2 = \mathbb{R}^2$ is two-dimensional, $g_1$ and $g_2$ are one-dimensional and the intersection $g_1 \cap g_2 = \{0\}$ is zero-dimensional, and indeed $2 = 1 + 1 - 0$.

The planes $E_1$ and $E_2$

Let us look again at the example above with the two planes:

$E_1 = \{(x, y, 0)^T \mid x, y \in \mathbb{R}\} \quad \text{and} \quad E_2 = \{(0, y, z)^T \mid y, z \in \mathbb{R}\}$

We have already calculated above that $E_1 + E_2 = \mathbb{R}^3$, and the figure shows that $E_1$ and $E_2$ intersect in a straight line, namely the $y$-axis. This means that the dimension of $E_1 + E_2$ is three, the dimensions of $E_1$ and $E_2$ are both two and the dimension of $E_1 \cap E_2$ is just one. Since $3 = 2 + 2 - 1$, the dimension formula also holds in this case.

As a final example, we consider the subspace $U = \{\lambda \cdot (1, 1)^T \mid \lambda \in \mathbb{R}\}$ in $\mathbb{R}^2$ and $W = \mathbb{R}^2$.

The subspace $U$ is a line through the origin, i.e. $\dim(U) = 1$, and we have $\dim(W) = \dim(\mathbb{R}^2) = 2$. Because $U \subseteq W$, the absorption property of the sum tells us that $U + W = W$. For the same reason, we have $U \cap W = U$. Thus,

$\dim(U + W) = \dim(W) = 2 = 1 + 2 - 1 = \dim(U) + \dim(W) - \dim(U \cap W).$

So the dimension formula is also valid in this case.
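Before we state and prove the theorem, here is a computational companion (our own sketch, not from the article): for subspaces given by generator lists, $\dim(U + W)$ is the rank of the combined generators, and a basis of $U \cap W$ can be read off from the null space of $[A \mid -B]$, so the formula can be verified numerically.

```python
import numpy as np

def subspace_dims(gens_U, gens_W, tol=1e-10):
    """Return (dim U, dim W, dim(U + W), dim(U ∩ W)) for subspaces given by generator lists."""
    A, B = np.column_stack(gens_U), np.column_stack(gens_W)
    dim_U = np.linalg.matrix_rank(A, tol)
    dim_W = np.linalg.matrix_rank(B, tol)
    dim_sum = np.linalg.matrix_rank(np.column_stack([A, B]), tol)
    # intersection: vectors A @ x with A @ x = B @ y, i.e. images of the null space of [A, -B]
    M = np.column_stack([A, -B])
    _, s, Vt = np.linalg.svd(M)
    null_rows = Vt[np.count_nonzero(s > tol):]           # rows spanning the null space of M
    inter_vecs = [A @ z[:A.shape[1]] for z in null_rows]
    dim_inter = np.linalg.matrix_rank(np.column_stack(inter_vecs), tol) if inter_vecs else 0
    return dim_U, dim_W, dim_sum, dim_inter

# the two planes from the example: E1 = x-y-plane, E2 = y-z-plane in R^3
E1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
E2 = [np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]

dim_U, dim_W, dim_sum, dim_inter = subspace_dims(E1, E2)
print(dim_U, dim_W, dim_sum, dim_inter)        # 2 2 3 1
print(dim_sum == dim_U + dim_W - dim_inter)    # True: the dimension formula
```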

Theorem (Dimension formula)

Let $V$ be a finite-dimensional $K$-vector space, as well as $U$ and $W$ two subspaces of $V$. Then,

$\dim(U + W) = \dim(U) + \dim(W) - \dim(U \cap W).$

How to get to the proof? (Dimension formula)

The motivation for our formula comes from the world of finite sets. Therefore, we would also like to trace the proof back to the case of (finite) sets. The structure of a finite-dimensional vector space can be reduced to a basis, which is indeed a finite set. The cardinality of a basis is exactly the dimension of the vector space, so we can trace the dimension formula back to a statement about the cardinalities of (finite) bases. To do so, we have to choose suitable bases $B_U$ of $U$, $B_W$ of $W$ and $B_{U \cap W}$ of $U \cap W$ for which $B_U \cap B_W = B_{U \cap W}$. In this case, we obtain from the number-of-elements formula for sets that $B_U \cup B_W$ has the desired size $\dim(U) + \dim(W) - \dim(U \cap W)$. Then we just have to prove that $B_U \cup B_W$ is a basis of $U + W$. We do this by reducing everything to the fact that $B_U$ and $B_W$ are already bases of $U$ and $W$.

To construct the desired bases $B_U$ and $B_W$, we use the basis completion theorem. With this we can extend a basis of $U \cap W$ to one of $U$ and one of $W$.

Proof (Dimension formula)

Let now $k := \dim(U \cap W)$ and let $\{v_1, \ldots, v_k\}$ be a basis of $U \cap W$. We can extend it to a basis $\{v_1, \ldots, v_k, u_1, \ldots, u_p\}$ of $U$, as well as to a basis $\{v_1, \ldots, v_k, w_1, \ldots, w_q\}$ of $W$. In particular, $\dim(U) = k + p$ and $\dim(W) = k + q$.

We now show that $B := \{v_1, \ldots, v_k, u_1, \ldots, u_p, w_1, \ldots, w_q\}$ is a basis of $U + W$.

Proof step: $B$ is a generating system of $U + W$

Since according to the previous theorem we have $U + W = \operatorname{span}(U \cup W)$, and since $\operatorname{span}(U \cup W) = \operatorname{span}(B)$ (every vector of $U$ and every vector of $W$ is a linear combination of vectors in $B$, and conversely $B \subseteq U \cup W$), we know that $B$ is a generating system of $U + W$.

Proof step: $B$ is linearly independent

Let $\alpha_1, \ldots, \alpha_k, \beta_1, \ldots, \beta_p, \gamma_1, \ldots, \gamma_q \in K$ such that

$\sum_{i=1}^{k} \alpha_i v_i + \sum_{j=1}^{p} \beta_j u_j + \sum_{l=1}^{q} \gamma_l w_l = 0.$

We can re-write this as

$\sum_{i=1}^{k} \alpha_i v_i + \sum_{j=1}^{p} \beta_j u_j = -\sum_{l=1}^{q} \gamma_l w_l.$

The left-hand side lies in $U$ and the right-hand side lies in $W$, so the vector $-\sum_{l=1}^{q} \gamma_l w_l$ lies in $U \cap W$. Since $\{v_1, \ldots, v_k\}$ is a basis of $U \cap W$, we can write the above element as a linear combination of these basis vectors: there are $\delta_1, \ldots, \delta_k \in K$ with

$-\sum_{l=1}^{q} \gamma_l w_l = \sum_{i=1}^{k} \delta_i v_i.$

This is equivalent to

$\sum_{i=1}^{k} \delta_i v_i + \sum_{l=1}^{q} \gamma_l w_l = 0.$

Since $\{v_1, \ldots, v_k, w_1, \ldots, w_q\}$ is a basis of $W$, it follows that $\delta_i = 0$ for all $i$ and $\gamma_l = 0$ for all $l$.

Plugging $\gamma_1 = \ldots = \gamma_q = 0$ into our first equation, we then get

$\sum_{i=1}^{k} \alpha_i v_i + \sum_{j=1}^{p} \beta_j u_j = 0.$

This is a linear combination of the basis vectors of $U$, so $\alpha_i = 0$ must also apply for all $i$ and $\beta_j = 0$ for all $j$. Hence $B$ is linearly independent.

Since $B$ is a basis of $U + W$, we have

$\dim(U + W) = k + p + q = (k + p) + (k + q) - k = \dim(U) + \dim(W) - \dim(U \cap W).$

Warning

The formula from the above theorem cannot be used for infinite-dimensional vector spaces. The reason is that there is no unique, meaningful way to subtract infinity from infinity. To illustrate this problem, consider two infinite sets $A$ and $B$ with $A \subseteq B$. Then $|A| = |B| = \infty$ and thus $|B| - |A| = \infty - \infty$, which makes no mathematical sense. The same can happen with vector spaces: For example, we can consider $U = W = V$ for an infinite-dimensional vector space $V$. Again, $\dim(U) = \dim(W) = \dim(U \cap W) = \infty$ and we have the meaningless expression $\infty - \infty$ on the right-hand side of the formula.

However, if we move the term with the intersection to the other side of the equation, then the formula makes sense also for infinite-dimensional vector spaces. This means that for any subspaces $U$ and $W$ of a vector space $V$, we have

$\dim(U + W) + \dim(U \cap W) = \dim(U) + \dim(W).$

For this formula to also make sense in infinite dimensions, we only require the convention $\infty + n = \infty + \infty = \infty$ for all $n \in \mathbb{N}_0$; with it, the equation above is a mathematically meaningful and true statement.