Isomorphisms – Serlo

Isomorphic Structures and Isomorphisms

Isomorphic Structures

We consider the vector space $\mathbb{R}_{\leq 2}[X]$ of polynomials of degree less than or equal to $2$, and we consider $\mathbb{R}^3$. Vectors in these spaces are in one-to-one correspondence, as we have already seen in the introduction article on vector spaces:

$$a_0 + a_1 X + a_2 X^2 \;\longleftrightarrow\; (a_0, a_1, a_2).$$

We also found that addition and scalar multiplication work the same way in both vector spaces:

$$(a_0 + a_1 X + a_2 X^2) + (b_0 + b_1 X + b_2 X^2) = (a_0 + b_0) + (a_1 + b_1) X + (a_2 + b_2) X^2$$

corresponds to

$$(a_0, a_1, a_2) + (b_0, b_1, b_2) = (a_0 + b_0, a_1 + b_1, a_2 + b_2),$$

and

$$\lambda \cdot (a_0 + a_1 X + a_2 X^2) = \lambda a_0 + \lambda a_1 X + \lambda a_2 X^2$$

corresponds to

$$\lambda \cdot (a_0, a_1, a_2) = (\lambda a_0, \lambda a_1, \lambda a_2).$$

In general, vector spaces can be thought of as sets with some structure. In our example, we can match the sets one to one, and the structures (i.e., addition and scalar multiplication) can be matched as well. So both vector spaces "essentially carry the same information", although they formally consist of different objects. In such a case, we call the two vector spaces isomorphic (to each other). The bijection which identifies the two vector spaces is then called an isomorphism.
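
For instance, a concrete check of this correspondence in our example:

$$(1 + 2X) + (3 - X + X^2) = 4 + X + X^2 \quad \longleftrightarrow \quad (1, 2, 0) + (3, -1, 1) = (4, 1, 1).$$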

We now derive the mathematical definition of the statement "two vector spaces $V$ and $W$ are isomorphic":

The identification of the sets is given by a bijective mapping $f: V \to W$. Preserving the structure means that addition and scalar multiplication are preserved when mapping back and forth with $f$ and $f^{-1}$. But "preserving addition and scalar multiplication" for a mapping between vector spaces is nothing else than "being linear". So we want $f$ and $f^{-1}$ to be linear.

Definition (Isomorphic)

The vector spaces $V$ and $W$ are isomorphic if there is a bijective map $f: V \to W$ between them such that $f$ and $f^{-1}$ are linear. We then write $V \cong W$.

Let us now return to our example from above. In this case, the identification map we are looking for from the definition would look like this:

$$f: \mathbb{R}_{\leq 2}[X] \to \mathbb{R}^3, \quad a_0 + a_1 X + a_2 X^2 \mapsto (a_0, a_1, a_2).$$
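
This $f$ is bijective, and its inverse simply reassembles a polynomial from its coefficients, so it is again linear:

$$f^{-1}: \mathbb{R}^3 \to \mathbb{R}_{\leq 2}[X], \quad (a_0, a_1, a_2) \mapsto a_0 + a_1 X + a_2 X^2.$$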

Isomorphism

We also want to give a name to the map introduced above:

Definition (Isomorphism)

An isomorphism between vector spaces $V$ and $W$ is a bijective map $f: V \to W$ such that $f$ and $f^{-1}$ are linear.

Alternative Derivation

Now let's look at the term "vector space" from a different point of view. We can also think of a vector space as a basis together with all linear combinations of that basis. So we can call two vector spaces "equal" if we can identify their bases one to one and the corresponding linear combinations are built in the same way. In other words, we are looking for a mapping that preserves both bases and linear combinations. What property must the mapping have in order to generate the same linear combinations? The answer is almost in the name: the mapping must be linear.

Let us now turn to the question of what property a linear map needs in order to map bases to bases. A basis is nothing else than a linearly independent generator. Thus, the map must preserve generators and linear independence. A linear map that preserves generators is called an epimorphism; these are exactly the surjective linear maps. A linear map that preserves linear independence is called a monomorphism; these are exactly the injective linear maps. So the map we are looking for is an epimorphism and a monomorphism at the same time: as a monomorphism it is injective, and as an epimorphism it is surjective. So overall we get a bijective linear map. This we again call an isomorphism. This gives us the alternative definition:

Definition (Alternative definition of isomorphic and isomorphism)

Two vector spaces $V$ and $W$ are isomorphic if there is a bijective linear map between them.

A map $f: V \to W$ is called an isomorphism if it is a bijective linear map.

Inverse Mappings of Linear Bijections are Linear

We have derived two descriptions of isomorphisms, and thus two different definitions. The first one seems to require more than the second one: in the first definition, an isomorphism $f$ must additionally satisfy that $f^{-1}$ is linear. Does this give us two different mathematical objects, or does linearity of $f$ already imply linearity of $f^{-1}$? According to our intuition, both definitions should define the same objects, so $f$ being linear should imply $f^{-1}$ being linear. And indeed, this is the case:

Theorem (The inverse map of a bijective linear map is again linear.)

Let $f: V \to W$ be a bijective linear map. Then the inverse mapping $f^{-1}: W \to V$ is also linear.

How to get to the proof? (The inverse map of a bijective linear map is again linear.)

We want to show that $f^{-1}$ is linear. For this, both $f^{-1}(u + v) = f^{-1}(u) + f^{-1}(v)$ and $f^{-1}(\lambda u) = \lambda f^{-1}(u)$ must hold for all vectors $u, v \in W$ and scalars $\lambda$.

We have given that $f$ is linear and bijective with inverse map $f^{-1}$. How can we use this to show the linearity of $f^{-1}$? Since $f^{-1}$ is the inverse map of $f$, we have:

$$f \circ f^{-1} = \operatorname{id}_W \quad \text{and} \quad f^{-1} \circ f = \operatorname{id}_V.$$

Together with the linearity of $f$, this gives us:

$$f^{-1}(u + v) = f^{-1}\big(f(f^{-1}(u)) + f(f^{-1}(v))\big) = f^{-1}\big(f(f^{-1}(u) + f^{-1}(v))\big) = f^{-1}(u) + f^{-1}(v).$$

In the same way we can proceed for the homogeneity $f^{-1}(\lambda u) = \lambda f^{-1}(u)$.

Proof (The inverse map of a bijective linear map is again linear.)

For the inverse $f^{-1}$ of $f$, it holds that:

$$f \circ f^{-1} = \operatorname{id}_W \quad \text{and} \quad f^{-1} \circ f = \operatorname{id}_V.$$

So for every vector $v \in V$ and every vector $w \in W$:

$$f^{-1}(f(v)) = v \quad \text{and} \quad f(f^{-1}(w)) = w.$$

Proof step: $f^{-1}$ is additive.

Let $u, v \in W$ be two vectors. Then we have:

$$f^{-1}(u + v) = f^{-1}\big(f(f^{-1}(u)) + f(f^{-1}(v))\big) = f^{-1}\big(f(f^{-1}(u) + f^{-1}(v))\big) = f^{-1}(u) + f^{-1}(v).$$

Thus the inverse function $f^{-1}$ is additive.

Proof step: $f^{-1}$ is homogeneous.

Let $w \in W$ be a vector and $\lambda$ a scalar. Then we have that

$$f^{-1}(\lambda w) = f^{-1}\big(\lambda f(f^{-1}(w))\big) = f^{-1}\big(f(\lambda f^{-1}(w))\big) = \lambda f^{-1}(w).$$

Thus the inverse function $f^{-1}$ is homogeneous.

Thus we have shown that the inverse $f^{-1}$ is also linear.
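
As a sanity check with a concrete map (chosen here for illustration), consider the bijective linear map

$$f: \mathbb{R}^2 \to \mathbb{R}^2, \quad (x, y) \mapsto (2x, \, x + y).$$

Its inverse is

$$f^{-1}: \mathbb{R}^2 \to \mathbb{R}^2, \quad (u, v) \mapsto \left(\tfrac{u}{2}, \, v - \tfrac{u}{2}\right),$$

which is indeed linear again.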

Classifying Isomorphic Structures

Bijections of Bases Generate an Isomorphism

In the alternative derivation, we used the intuition that an isomorphism is a linear map that "preserves bases". This means that bases are sent to bases and linear combinations are preserved. So, describing it a bit more formally, we considered a linear map $f: V \to W$ that maps a basis of $V$ to a basis of $W$.

We already know the following: if $f: V \to W$ is a linear map between two vector spaces and $f$ is an isomorphism, then $f$ maps bases of $V$ to bases of $W$.

But we don't know yet whether a linear map that sends a basis to a basis is already an isomorphism. This statement indeed turns out to be true.

Theorem

Let $K$ be a field, $V$ and $W$ two $K$-vector spaces, $B$ a basis of $V$ and $f: V \to W$ a linear map.

Then $f$ is an isomorphism if and only if $B$ is mapped by $f$ to a basis $f(B)$ of $W$.

Proof

Proof step: "$\Rightarrow$"

Let $f$ be an isomorphism. Then $f$ is by definition both a monomorphism and an epimorphism.

We want to show that $f$ preserves bases. That is, the image $f(B)$ of $B$ under $f$ is a linearly independent generator of $W$.

Proof step: $f(B)$ is linearly independent

We know from the article on monomorphisms that those preserve linear independence. The set $B$ is a basis and thus linearly independent. So its image $f(B)$ under $f$ is also linearly independent.

Proof step: $f(B)$ is a generator of $W$

We know from the article on epimorphisms that those preserve generators. The set $B$ is a basis and hence a generator of $V$. So its image $f(B)$ under $f$ is a generator of $W$.

Proof step: "$\Leftarrow$"

Suppose $f$ maps the basis $B$ to a basis $f(B)$ of $W$.

Proof step: Injectivity

Since $f$ maps the linearly independent set $B$ to the linearly independent set $f(B)$, $f$ preserves linear independence. From the article on monomorphisms we know that $f$ must thus be injective.

Proof step: Surjectivity

$f$ maps the basis $B$ (which is in particular a generator of $V$) to the basis $f(B)$ (which is a generator of $W$). From the article on epimorphisms we know that $f$ must thus be surjective.

$f$ is linear by assumption. Together with injectivity and surjectivity, it follows that $f$ is an isomorphism.

Theorem

Let $V$ and $W$ be two $K$-vector spaces with bases $B$ of $V$ and $C$ of $W$. Let further $b: B \to C$ be a bijective mapping. Then there is exactly one isomorphism $f: V \to W$ with $f|_B = b$.

Proof

From the article about linear continuation we know that we can find a unique linear map $f: V \to W$ with $f(v) = b(v)$ for all $v \in B$. Thus, as required, $f|_B = b$.

We still have to show that the mapping $f$ is an isomorphism. By the previous theorem, we must show that $f$ maps a basis of $V$ to a basis of $W$. Now we have constructed $f$ exactly such that $f(B) = b(B) = C$: since $b$ is bijective, $f$ maps the basis $B$ onto the basis $C$. So $f$ is an isomorphism.

If we are given a bijection $b: B \to C$ between bases, then there is a nice description of the inverse of $f$: we know that $f^{-1}$ is characterized by the conditions $f^{-1} \circ f = \operatorname{id}_V$ and $f \circ f^{-1} = \operatorname{id}_W$. Further, the principle of linear continuation tells us that we only need to know $f^{-1}$ on a basis of $W$ to describe it completely. Now we have already chosen the basis $C$ of $W$. That is, we are interested in $f^{-1}(w)$ for $w \in C$. Because $b$ is bijective, there is exactly one $v \in B$ with $b(v) = w$. Therefore, we get $f^{-1}(w) = f^{-1}(f(v)) = v$ from the above conditions. Now how can we describe this element more precisely? $v$ is the unique preimage of $w$ under $b$, so $f^{-1}(w) = b^{-1}(w)$. In other words, $f^{-1}$ is the linear map induced by $b^{-1}: C \to B$ from $W$ to $V$.
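
As a small illustration (with spaces and bases chosen here for concreteness): take $V = \mathbb{R}^2$ with basis $B = \{e_1, e_2\}$, $W = \mathbb{R}_{\leq 1}[X]$ with basis $C = \{1, X\}$, and the bijection $b(e_1) = 1$, $b(e_2) = X$. Linear continuation of $b$ and of $b^{-1}$ gives

$$f: \mathbb{R}^2 \to \mathbb{R}_{\leq 1}[X], \; (a_0, a_1) \mapsto a_0 + a_1 X \qquad \text{and} \qquad f^{-1}: \mathbb{R}_{\leq 1}[X] \to \mathbb{R}^2, \; a_0 + a_1 X \mapsto (a_0, a_1).$$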

Classification of Finite Dimensional Vector Spaces

When are two finite-dimensional vector spaces isomorphic? If $V$ and $W$ are finite-dimensional vector spaces, then we have bases $B$ of $V$ and $C$ of $W$. From the previous theorem we know that an isomorphism is uniquely determined by a bijection between the bases. When do we find a bijection between these two sets? Exactly when they have the same size, i.e. $|B| = |C|$. Or in other words, if $V$ and $W$ have the same dimension:

Theorem (Finite dimensional vector spaces with the same dimension are isomorphic)

Let $V, W$ be finite-dimensional $K$-vector spaces. Then:

$$V \cong W \iff \dim(V) = \dim(W).$$

Proof (Finite dimensional vector spaces with the same dimension are isomorphic)

Proof step: "$\Leftarrow$"

Let $\dim(V) = \dim(W)$.

Two vector spaces are called isomorphic if there exists an isomorphism between them. We know that an isomorphism between vector spaces exists if we can find a bijective mapping between bases of them. Since $\dim(V) = \dim(W)$, a basis of $V$ and a basis of $W$ have the same size, so we find a bijective mapping between them. Thus, there exists an isomorphism between $V$ and $W$.

Thus, $V$ and $W$ are isomorphic.

Proof step: "$\Rightarrow$"

Let $V \cong W$.

Let $f: V \to W$ be an isomorphism between $V$ and $W$, and let $B$ be a basis of $V$. We know that an isomorphism maps bases to bases. That is, $f(B)$ is a basis of $W$. In particular, since the mapping $f$ is an isomorphism, it is bijective. Thus $|B| = |f(B)|$.

This implies $\dim(V) = \dim(W)$.

We have shown that all $K$-vector spaces of dimension $n$ are isomorphic to each other. In particular, all such vector spaces are isomorphic to the vector space $K^n$. Because $K^n$ is a very concrete, well-describable model of a vector space, let us examine in more detail the isomorphism constructed in the last theorem.

Let $V$ be an $n$-dimensional $K$-vector space. We now follow the proof of the last theorem to understand the construction of the isomorphism $V \cong K^n$. We use that bases of $V$ and of $K^n$ have the same size. For the isomorphism, we construct a bijection between a basis of $K^n$ and a basis of $V$. The space $K^n$ has a kind of "standard basis", given by the canonical basis $\{e_1, \dots, e_n\}$ of unit vectors.

Following the proof of the last theorem, we see that we must choose a basis of $K^n$ and a basis of $V$. For $K^n$ we choose the standard basis $\{e_1, \dots, e_n\}$ and for $V$ we choose some basis $B$. Next, we need a bijection $b$ between the standard basis and the basis $B$. That is, we need to associate exactly one element of $B$ with each $e_i$. We can thus name the images of $b$ as $v_i := b(e_i)$. Because $b$ is bijective, we get $B = \{v_1, \dots, v_n\}$. In essence, we have used $b$ to number the elements of $B$. Mathematically, numbering the elements of $B$ is the same as giving a bijection from $\{1, \dots, n\}$ to $B$, since we can simply map $i$ to the $i$-th element of $B$.

The principle of linear continuation now provides us with an isomorphism $f: K^n \to V$. By linear continuation, this isomorphism sends the vector $(\lambda_1, \dots, \lambda_n) = \sum_{i=1}^{n} \lambda_i e_i$ to the element $\sum_{i=1}^{n} \lambda_i v_i$.

Now what about the map that sends $V$ to $K^n$, i.e., the inverse map $f^{-1}$ of $f$?

We have already computed above what the mapping $f^{-1}$ looks like in this case: $f^{-1}$ is just the mapping induced by $b^{-1}$ via the principle of linear continuation. That is, for basis vectors, we know that $f^{-1}$ maps $v_i$ to $e_i$. And where does it map a general vector $v \in V$? Here, we use the principle of linear continuation: we write $v = \sum_{i=1}^{n} \lambda_i v_i$ as a linear combination of our basis $B$. By linearity, the mapping $f^{-1}$ then sends $v$ to $\sum_{i=1}^{n} \lambda_i e_i = (\lambda_1, \dots, \lambda_n)$. In particular, the $\lambda_i$ describe where $v$ is located with respect to the basis vectors $v_i$. This is just like GPS coordinates, which tell you your position with respect to certain anchor points (there, the prime meridian and the equator). Therefore, we can say that $f^{-1}$ sends each vector to its coordinates with respect to the basis $B$.

Definition (Coordinate mapping)

Let $V$ be an $n$-dimensional $K$-vector space and $B = \{v_1, \dots, v_n\}$ a basis of $V$. We define the isomorphism $K_B: V \to K^n$ as the linear continuation of the following bijection between the basis $B$ and the standard basis of $K^n$:

$$v_i \mapsto e_i \quad \text{for } i \in \{1, \dots, n\}.$$

We call $K_B$ the coordinate mapping with respect to $B$.

We now want to investigate on which choices the construction of the coordinate mapping depends.

Example (Coordinate mapping between the vector space of real quadratic polynomials and $\mathbb{R}^3$)

We consider the two $\mathbb{R}$-vector spaces $\mathbb{R}^3$ and $\mathbb{R}_{\leq 2}[X]$, the space of real polynomials of degree $\leq 2$, with the basis $B = \{1, X, X^2\}$. The coordinate mapping then looks like this:

$$K_B: \mathbb{R}_{\leq 2}[X] \to \mathbb{R}^3, \quad a_0 + a_1 X + a_2 X^2 \mapsto (a_0, a_1, a_2).$$

The coordinate mapping depends on the choice of the basis: different bases yield different mappings.

Example (Different bases create different coordinate mappings)

We consider the following two bases of $\mathbb{R}^2$: $B_1 = \{(1, 0), (0, 1)\}$ and $B_2 = \{(1, 1), (0, 1)\}$.

For $B_1$ we have

$$(x, y) = x \cdot (1, 0) + y \cdot (0, 1).$$

So the coordinate mapping with respect to $B_1$ looks like this:

$$K_{B_1}: \mathbb{R}^2 \to \mathbb{R}^2, \quad (x, y) \mapsto (x, y).$$

For the basis $B_2$ we have

$$(x, y) = x \cdot (1, 1) + (y - x) \cdot (0, 1).$$

Thus, the coordinate mapping with respect to $B_2$ is

$$K_{B_2}: \mathbb{R}^2 \to \mathbb{R}^2, \quad (x, y) \mapsto (x, y - x).$$

These two mappings are not the same. For example,

$$K_{B_1}((1, 1)) = (1, 1) \neq (1, 0) = K_{B_2}((1, 1)).$$

Even if we only change the numbering of the elements of a basis, we already get different coordinate mappings.

Example (Different numberings of the basis result in different coordinate mappings)

We consider the standard basis $\{e_1, e_2\}$ of $\mathbb{R}^2$. Its elements can be numbered in two ways, giving the ordered tuples $(e_1, e_2)$ and $(e_2, e_1)$. We want to find out what the corresponding coordinate mappings $K_{(e_1, e_2)}$ and $K_{(e_2, e_1)}$ look like. For $K_{(e_1, e_2)}$ we already know this:

$$K_{(e_1, e_2)}: \mathbb{R}^2 \to \mathbb{R}^2, \quad (x, y) \mapsto (x, y).$$

For $(e_2, e_1)$ we have

$$(x, y) = y \cdot e_2 + x \cdot e_1.$$

The construction of the coordinate mapping thus provides us with the following description:

$$K_{(e_2, e_1)}: \mathbb{R}^2 \to \mathbb{R}^2, \quad (x, y) \mapsto (y, x).$$

These two mappings are different. For example,

$$K_{(e_1, e_2)}((1, 2)) = (1, 2) \neq (2, 1) = K_{(e_2, e_1)}((1, 2)).$$

In order to speak of the coordinate mapping, we must also specify the order of the basis elements. A basis for which we also specify the order of the basis elements is called an ordered basis.

Definition (Ordered basis)

Let $K$ be a field and $V$ a finite-dimensional $K$-vector space. Let $v_1, \dots, v_n \in V$. Then we call the tuple $B = (v_1, \dots, v_n)$ an ordered basis of $V$ if the set $\{v_1, \dots, v_n\}$ is a basis of $V$.

With this notion we can simplify the notation of the coordinate mapping: if $B = (v_1, \dots, v_n)$ is an ordered basis, we also denote the coordinate mapping by $K_B$.

We have now seen a whole class of isomorphisms from $V$ to $K^n$. Are there any other isomorphisms from $V$ to $K^n$, i.e., isomorphisms that are not coordinate mappings? In fact, every isomorphism from $V$ to $K^n$ is a coordinate mapping with respect to a suitable basis.

Theorem (All isomorphisms are coordinate mappings)

Let $f: V \to K^n$ be an isomorphism. Then there is exactly one ordered basis $B$ of $V$ such that $f = K_B$.

How to get to the proof? (All isomorphisms are coordinate mappings)

We have constructed the coordinate mapping as an inverse mapping: for it, we bijectively mapped the standard basis of $K^n$ to a basis of $V$. To reconstruct this basis, we need to consider the preimages of the standard basis under $f$. That is, we need $f^{-1}(e_1), \dots, f^{-1}(e_n)$, whose numbering also fixes an ordering. So we set $v_i := f^{-1}(e_i)$. Now, we know that $\{v_1, \dots, v_n\}$ is a basis because $f^{-1}$ is an isomorphism. Further, we have just above applied the principle of linear continuation backwards, which told us that all of $f^{-1}$ is induced by only the bijection $e_i \mapsto v_i$. Further above, we have also seen that $f$ is then induced by the inverse bijection $v_i \mapsto e_i$. But this gives exactly the coordinate mapping with respect to $B := (v_1, \dots, v_n)$.

Proof (All isomorphisms are coordinate mappings)

We define $v_i := f^{-1}(e_i)$ for $i \in \{1, \dots, n\}$. Then $\{v_1, \dots, v_n\}$ is the image of the standard basis under the mapping $f^{-1}$. Since $f^{-1}$ is an isomorphism, it maps bases to bases. Thus $\{v_1, \dots, v_n\}$ is a basis of $V$.

Define the ordered basis $B := (v_1, \dots, v_n)$. We now show $f = K_B$. For this it is sufficient to prove equality on the basis $\{v_1, \dots, v_n\}$, since $f$ and $K_B$ are linear. For any $i \in \{1, \dots, n\}$ it holds that

$$f(v_i) = f(f^{-1}(e_i)) = e_i = K_B(v_i).$$

So indeed, $f = K_B$.
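
As a small illustration (with the map chosen here for concreteness), consider the isomorphism

$$f: \mathbb{R}^2 \to \mathbb{R}^2, \quad (x, y) \mapsto (x + y, \, y).$$

The preimages of the standard basis are $v_1 = f^{-1}(e_1) = (1, 0)$ and $v_2 = f^{-1}(e_2) = (-1, 1)$. With $B = ((1, 0), (-1, 1))$ we have $(x, y) = (x + y) \cdot (1, 0) + y \cdot (-1, 1)$, so $K_B((x, y)) = (x + y, y) = f((x, y))$, as the theorem predicts.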

Examples of vector space isomorphisms

Example (Real polynomials of degree at most two and $\mathbb{R}^3$)

For three pairwise distinct real numbers $x_1, x_2, x_3$, we can establish an isomorphism between the space $\mathbb{R}_{\leq 2}[X]$ of polynomials of at most second degree and the space $\mathbb{R}^3$.

We define the mapping $f$ via

$$f: \mathbb{R}_{\leq 2}[X] \to \mathbb{R}^3, \quad p \mapsto (p(x_1), p(x_2), p(x_3)).$$

Claim: $f$ is an isomorphism.

For this, we need to prove three things:

  1. $f$ is a linear map
  2. $f$ is injective
  3. $f$ is surjective

Proof step: Linearity of $f$

Since $f(p)$ is defined for every polynomial $p$ and takes values in $\mathbb{R}^3$, $f$ is well-defined as a mapping.

So we still have to prove that for $p, q \in \mathbb{R}_{\leq 2}[X]$ and $\lambda \in \mathbb{R}$ it always holds that $f(p + q) = f(p) + f(q)$ and $f(\lambda p) = \lambda f(p)$.

This follows directly from the fact that polynomials are added and scaled pointwise.
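
Spelled out, the additivity check reads as follows (homogeneity works the same way):

$$f(p + q) = \big((p+q)(x_1), (p+q)(x_2), (p+q)(x_3)\big) = \big(p(x_1) + q(x_1), \, p(x_2) + q(x_2), \, p(x_3) + q(x_3)\big) = f(p) + f(q).$$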

Proof step: Injectivity of $f$

Let $p \in \mathbb{R}_{\leq 2}[X]$ with $f(p) = (0, 0, 0)$.

This means that the polynomial $p$ of degree at most two has three zeros: $x_1, x_2, x_3$. It follows (e.g., with polynomial division) that we can write $p$ as $p = q \cdot (X - x_1)(X - x_2)(X - x_3)$, where $q$ is again a polynomial (or a constant, i.e., a zero-degree polynomial). But because the degree of $p$ is at most two, $q$ must be constant and equal to $0$. Thus $p$ is the zero polynomial, i.e. the zero vector of the vector space $\mathbb{R}_{\leq 2}[X]$.

Now, since the kernel of $f$ consists only of the zero vector, $f$ is injective.

Proof step: Surjectivity of $f$

In proving this assertion, we use polynomial interpolation in the Lagrange form.

For this purpose we define three polynomials $L_1, L_2, L_3$ via

$$L_1 := \frac{(X - x_2)(X - x_3)}{(x_1 - x_2)(x_1 - x_3)}, \quad L_2 := \frac{(X - x_1)(X - x_3)}{(x_2 - x_1)(x_2 - x_3)}, \quad L_3 := \frac{(X - x_1)(X - x_2)}{(x_3 - x_1)(x_3 - x_2)}.$$

$L_1$ has zeros at $x_2$ and $x_3$, and the denominator equals the value of the numerator at the position $x_1$. Hence $L_1(x_1) = 1$, since numerator and denominator then contain the same number.

Quite analogously, $L_2(x_2) = 1$ and $L_2(x_1) = L_2(x_3) = 0$, as well as $L_3(x_3) = 1$ and $L_3(x_1) = L_3(x_2) = 0$.

Now, if we have any vector $(a_1, a_2, a_3) \in \mathbb{R}^3$, then we define the polynomial $p$ by

$$p := a_1 L_1 + a_2 L_2 + a_3 L_3.$$

Then $p(x_1) = a_1$, and analogously $p(x_2) = a_2$ as well as $p(x_3) = a_3$. Hence $f(p) = (a_1, a_2, a_3)$.

Thus we have shown that $f$ is surjective.

We also see that we can use this procedure for arbitrary degrees of polynomials and arbitrary evaluation points, as long as the number of points is equal to the maximum degree of the polynomials plus 1. We can also replace $\mathbb{R}$ everywhere by $\mathbb{Q}$ or $\mathbb{C}$ without the need to change anything in the proof.
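
For concreteness, with the (illustrative) choice of evaluation points $x_1 = 0$, $x_2 = 1$, $x_3 = 2$, the Lagrange polynomials become

$$L_1 = \frac{(X - 1)(X - 2)}{2}, \quad L_2 = -X(X - 2), \quad L_3 = \frac{X(X - 1)}{2},$$

and the vector $(1, 0, 3)$ has the preimage

$$p = 1 \cdot L_1 + 0 \cdot L_2 + 3 \cdot L_3 = 2X^2 - 3X + 1,$$

which indeed satisfies $p(0) = 1$, $p(1) = 0$ and $p(2) = 3$.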

Example (Convergent sequences modulo zero sequences)

We already know the vector space $c$ of convergent real sequences and the vector space $c_0$ of zero sequences. We also know that $c_0$ is a subspace of $c$. Therefore, we can form the factor space $c / c_0$.

In the following, we will show that $c / c_0 \cong \mathbb{R}$.

We define a mapping

$$f: \mathbb{R} \to c / c_0, \quad x \mapsto (x, x, x, \dots) + c_0.$$

So the image of $x$ under this map is the coset of the sequence which takes the constant value $x$. This sequence is convergent with limit $x$. We have to show that $f$ is linear and bijective.

Proof step: Linearity of $f$

We need to show additivity and homogeneity of $f$ in order to get linearity.

Proof step: Additivity of $f$

Let $x, y \in \mathbb{R}$. Then

$$f(x + y) = (x + y, x + y, \dots) + c_0 = \big((x, x, \dots) + c_0\big) + \big((y, y, \dots) + c_0\big) = f(x) + f(y).$$

Proof step: Homogeneity of $f$

Let $x \in \mathbb{R}$ and $\lambda \in \mathbb{R}$. We consider $x$ as a vector of the $\mathbb{R}$-vector space $\mathbb{R}$ and $\lambda$ as a scalar. Then

$$f(\lambda x) = (\lambda x, \lambda x, \dots) + c_0 = \lambda \cdot \big((x, x, \dots) + c_0\big) = \lambda \cdot f(x).$$

Proof step: Injectivity of $f$

We need to show that $\ker(f) = \{0\}$. So let $x \in \ker(f)$. That means $f(x) = (x, x, \dots) + c_0 = 0 + c_0$, i.e., the constant sequence $(x, x, \dots)$ is a zero sequence. Thus, we have $x = \lim_{n \to \infty} x = 0$.

Therefore $\ker(f) = \{0\}$, which establishes the assertion.

Proof step: Surjectivity of $f$

Let $(a_n)_{n \in \mathbb{N}} + c_0 \in c / c_0$ and let $a := \lim_{n \to \infty} a_n$. We set $x := a$. Then $(a_n)_{n \in \mathbb{N}} - (x, x, \dots)$ is a zero sequence.

Therefore $(a_n)_{n \in \mathbb{N}} + c_0 = (x, x, \dots) + c_0$, which implies $f(x) = (a_n)_{n \in \mathbb{N}} + c_0$. This establishes surjectivity.
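
For instance, as a concrete check: the sequence $\left(3 + \tfrac{1}{n}\right)_{n \in \mathbb{N}}$ converges to $3$, and indeed

$$\left(3 + \tfrac{1}{n}\right)_{n \in \mathbb{N}} + c_0 = (3, 3, 3, \dots) + c_0 = f(3),$$

since the difference $\left(\tfrac{1}{n}\right)_{n \in \mathbb{N}}$ is a zero sequence.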

Example (The isomorphism theorem)

One of the most important examples is the isomorphism between the image $\operatorname{im}(f)$ of a linear map $f: V \to W$ and the quotient space $V / \ker(f)$:

$$V / \ker(f) \cong \operatorname{im}(f).$$

All this is described in the article on the isomorphism theorem.


Exercises

Exercise (Complex vector spaces)

Let $V$ be a finite-dimensional $\mathbb{C}$-vector space. Show that $V \cong \mathbb{R}^{2 \dim_{\mathbb{C}}(V)}$ (interpreted as $\mathbb{R}$-vector spaces).

Solution (Complex vector spaces)

Set $n := \dim_{\mathbb{C}}(V)$. We choose a basis $\{v_1, \dots, v_n\}$ of $V$. Define $w_j := i \cdot v_j$ for all $j \in \{1, \dots, n\}$.

We have to show that $B := \{v_1, \dots, v_n, w_1, \dots, w_n\}$ is an $\mathbb{R}$-basis of $V$. Then $\dim_{\mathbb{R}}(V) = 2n$, and according to a theorem above, we have $V \cong \mathbb{R}^{2n}$ as $\mathbb{R}$-vector spaces.

We now show $\mathbb{R}$-linear independence.

Proof step: $B$ is $\mathbb{R}$-linearly independent

Let $\lambda_1, \dots, \lambda_n, \mu_1, \dots, \mu_n \in \mathbb{R}$ and assume that $\sum_{j=1}^{n} \lambda_j v_j + \sum_{j=1}^{n} \mu_j w_j = 0$. We substitute the definition $w_j = i \cdot v_j$, combine the sums and obtain $\sum_{j=1}^{n} (\lambda_j + i \mu_j) v_j = 0$. By the $\mathbb{C}$-linear independence of $\{v_1, \dots, v_n\}$ we obtain $\lambda_j + i \mu_j = 0$ for all $j$. Thus $\lambda_j = \mu_j = 0$ for all $j$. This establishes the $\mathbb{R}$-linear independence.

Now only one step is missing:

Proof step: $B$ is a generator with respect to $\mathbb{R}$

Let $v \in V$ be arbitrary.

Since $\{v_1, \dots, v_n\}$ is a $\mathbb{C}$-basis of $V$, we can find some $c_1, \dots, c_n \in \mathbb{C}$ such that $v = \sum_{j=1}^{n} c_j v_j$. We write $c_j = \lambda_j + i \mu_j$ with $\lambda_j, \mu_j \in \mathbb{R}$ for all $j$. Then we obtain

$$v = \sum_{j=1}^{n} (\lambda_j + i \mu_j) v_j = \sum_{j=1}^{n} \lambda_j v_j + \sum_{j=1}^{n} \mu_j w_j.$$

So $v$ is inside the $\mathbb{R}$-span of $B$. This establishes the assertion.
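
In the simplest case (chosen for illustration) $V = \mathbb{C}$, we have $n = 1$: the $\mathbb{C}$-basis $\{v_1\} = \{1\}$ yields $w_1 = i$, and the $\mathbb{R}$-basis $\{1, i\}$ gives the familiar identification

$$\mathbb{C} \cong \mathbb{R}^2, \quad a + b \, i \mapsto (a, b).$$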


Exercise (Isomorphic coordinate spaces)

Let $K$ be a field and consider $m, n \in \mathbb{N}$. Prove that $K^m \cong K^n$ holds if and only if $m = n$.

Solution (Isomorphic coordinate spaces)

We know that $\dim(K^n) = n$ for all $n \in \mathbb{N}$. We use the theorem above, which states that finite-dimensional vector spaces are isomorphic exactly if their dimensions coincide. So $K^m \cong K^n$ holds if and only if $m = n$.

Exercise (Isomorphism criteria for endomorphisms)

Let $K$ be a field, $V$ a finite-dimensional $K$-vector space and $f: V \to V$ a $K$-linear map. Prove that the following three statements are equivalent:

(i) $f$ is an isomorphism.

(ii) $f$ is injective.

(iii) $f$ is surjective.

(Note: For this exercise, it may be helpful to know the terms kernel and image of a linear map. Using the dimension theorem, this exercise becomes much easier. However, we give a solution here which works without the dimension theorem.)

Solution (Isomorphism criteria for endomorphisms)

(i) $\Rightarrow$ (ii) and (iii): According to the definition of an isomorphism, $f$ is bijective, i.e. injective and surjective. Therefore (ii) and (iii) hold.

(ii) $\Rightarrow$ (i): Let $f$ be an injective mapping. We need to show that $f$ is also surjective. The image $\operatorname{im}(f)$ of $f$ is a subspace of $V$. This can be verified by calculation. We now define a mapping that does the same thing as $f$, except that it will be surjective by definition. This mapping is defined as follows:

$$\tilde{f}: V \to \operatorname{im}(f), \quad v \mapsto f(v).$$

The surjectivity comes from the fact that every element $w \in \operatorname{im}(f)$ can be written as $w = f(v) = \tilde{f}(v)$ for a suitable $v \in V$. Moreover, the mapping $\tilde{f}$ is injective and linear, because $f$ already has these two properties. So $V$ and $\operatorname{im}(f)$ are isomorphic. Therefore, $V$ and $\operatorname{im}(f)$ have the same finite dimension. Since $\operatorname{im}(f)$ is a subspace of $V$, $\operatorname{im}(f) = V$ holds. This can be seen by choosing a basis of $\operatorname{im}(f)$, for instance the basis given by vectors $b_1, \dots, b_m$. These are also linearly independent in $V$, since $\operatorname{im}(f) \subseteq V$. And since $V$ and $\operatorname{im}(f)$ have the same dimension, the $b_1, \dots, b_m$ are also a basis of $V$. So the two vector spaces $\operatorname{im}(f)$ and $V$ must be the same, because both consist exactly of the $K$-linear combinations formed with $b_1, \dots, b_m$. Thus we have shown that $f$ is surjective.

(iii) $\Rightarrow$ (i): Now suppose $f$ is surjective. We need to show that $f$ is also injective. Let $\ker(f)$ be the kernel of the mapping $f$. You may convince yourself by calculation that this kernel is a subspace of $V$. Let $\{u_1, \dots, u_k\}$ be a basis of $\ker(f)$. We can complete this (small) basis to a (large) basis of $V$ by including additional vectors $v_{k+1}, \dots, v_n$. We will now show that $f(v_{k+1}), \dots, f(v_n)$ are linearly independent. So let coefficients $\lambda_{k+1}, \dots, \lambda_n \in K$ be given such that

$$\sum_{j=k+1}^{n} \lambda_j f(v_j) = 0.$$

By linearity of $f$ we conclude: $f\left(\sum_{j=k+1}^{n} \lambda_j v_j\right) = 0$. This means that the linear combination

$$\sum_{j=k+1}^{n} \lambda_j v_j$$

is in the kernel of $f$. But we already know a basis of $\ker(f)$. Therefore there are coefficients $\mu_1, \dots, \mu_k \in K$, such that

$$\sum_{j=k+1}^{n} \lambda_j v_j = \sum_{i=1}^{k} \mu_i u_i.$$

Because of the linear independence of $u_1, \dots, u_k, v_{k+1}, \dots, v_n$ it now follows that $\lambda_{k+1} = \dots = \lambda_n = 0$ (and likewise $\mu_1 = \dots = \mu_k = 0$). Therefore, the $f(v_{k+1}), \dots, f(v_n)$ are linearly independent. Next, we will show that these vectors also form a basis of $V$. To do this, we show that each vector in $V$ can be written as a linear combination of the $f(v_j)$. Let $w \in V$. Because of the surjectivity of $f$, there is a $v \in V$ with $f(v) = w$. Since the $u_1, \dots, u_k, v_{k+1}, \dots, v_n$ form a basis of $V$, there are coefficients $\alpha_1, \dots, \alpha_k, \lambda_{k+1}, \dots, \lambda_n \in K$ such that

$$v = \sum_{i=1}^{k} \alpha_i u_i + \sum_{j=k+1}^{n} \lambda_j v_j.$$

If we now apply $f$ to this equation, we get:

$$w = f(v) = \sum_{i=1}^{k} \alpha_i f(u_i) + \sum_{j=k+1}^{n} \lambda_j f(v_j).$$

Here we used the linearity of $f$. Since the first $k$ elements of our basis are in the kernel, their images satisfy $f(u_i) = 0$. So we get the desired representation of $w$:

$$w = \sum_{j=k+1}^{n} \lambda_j f(v_j).$$

Thus we have shown that $f(v_{k+1}), \dots, f(v_n)$ form a linearly independent generator, i.e. a basis, of $V$. Now if $k$ were not $0$, the two finite bases $\{u_1, \dots, u_k, v_{k+1}, \dots, v_n\}$ and $\{f(v_{k+1}), \dots, f(v_n)\}$ of $V$ would not contain equally many elements. This cannot be the case. Therefore, $k = 0$, so $\ker(f) = \{0\}$ is the trivial vector space and $f$ is indeed injective.
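
The finite-dimensionality assumption is essential here. As an illustration (a standard counterexample, sketched on the sequence space $K^{\mathbb{N}}$), consider the two shift maps

$$L: K^{\mathbb{N}} \to K^{\mathbb{N}}, \; (a_1, a_2, a_3, \dots) \mapsto (a_2, a_3, \dots) \qquad \text{and} \qquad R: K^{\mathbb{N}} \to K^{\mathbb{N}}, \; (a_1, a_2, \dots) \mapsto (0, a_1, a_2, \dots).$$

Both are linear; $L$ is surjective but not injective (it sends $(1, 0, 0, \dots)$ to the zero sequence), and $R$ is injective but not surjective (no sequence is mapped to $(1, 0, 0, \dots)$).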

Exercise (Function spaces)

Let $X$ be a finite set with $n$ elements and let $K$ be a field. We have seen that the set of functions from $X$ to $K$ forms a $K$-vector space, denoted by $\operatorname{Map}(X, K)$. Show that $\operatorname{Map}(X, K) \cong K^n$.

Solution (Function spaces)

We already know, according to a theorem above, that two finite-dimensional vector spaces are isomorphic exactly if they have the same dimension. So we just need to show that $\dim \operatorname{Map}(X, K) = n$ holds.

To show this, we first need a basis of $\operatorname{Map}(X, K)$. For this, let $x_1, \dots, x_n$ be the elements of the set $X$. We define functions $f_i: X \to K$ by

$$f_i(x_j) := \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$

We now show that the functions $f_1, \dots, f_n$ indeed form a basis of $\operatorname{Map}(X, K)$.

Proof step: $f_1, \dots, f_n$ are linearly independent

Let $\lambda_1, \dots, \lambda_n \in K$ with $\sum_{i=1}^{n} \lambda_i f_i$ being the zero function. If we apply this function to any $x_j$ with $1 \leq j \leq n$, then we obtain $0 = \sum_{i=1}^{n} \lambda_i f_i(x_j)$. By definition of the $f_i$ it follows that

$$0 = \sum_{i=1}^{n} \lambda_i f_i(x_j) = \lambda_j.$$

Since $j$ was arbitrary and $\lambda_j = 0$ must hold for all $j$, it follows that $\lambda_1 = \dots = \lambda_n = 0$. So we have shown that $f_1, \dots, f_n$ are linearly independent.

Proof step: $f_1, \dots, f_n$ generate $\operatorname{Map}(X, K)$

Let $g \in \operatorname{Map}(X, K)$ be arbitrary. We now want to write $g$ as a linear combination of $f_1, \dots, f_n$. For this we show $g = \sum_{i=1}^{n} g(x_i) f_i$, i.e., $g$ is a linear combination of the $f_i$ with coefficients $g(x_i)$. We now verify that both sides agree at every point $x_j$. Let $j$ be arbitrary. By definition of the $f_i$ we obtain:

$$\sum_{i=1}^{n} g(x_i) f_i(x_j) = g(x_j).$$

Since equality holds for all $x_j$, the two functions agree at every point and are therefore identical. So we have shown that $f_1, \dots, f_n$ generate $\operatorname{Map}(X, K)$.

Thus we have proved that $\{f_1, \dots, f_n\}$ is a basis of $\operatorname{Map}(X, K)$. Since this basis has $n$ elements, it follows that $\dim \operatorname{Map}(X, K) = n = \dim K^n$.
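
For instance, for a two-element set $X = \{x_1, x_2\}$ (an illustrative special case), the resulting isomorphism just records the table of values of a function:

$$\operatorname{Map}(X, K) \to K^2, \quad g \mapsto (g(x_1), g(x_2)).$$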

Hint

If $X$ is an infinite set, then $\operatorname{Map}(X, K)$ is infinite-dimensional. In the special case $X = \mathbb{N}$ and $K = \mathbb{R}$, the space $\operatorname{Map}(\mathbb{N}, \mathbb{R})$ is isomorphic to the sequence space $\mathbb{R}^{\mathbb{N}}$.