An endomorphism is a linear deformation of a vector space $V$. Formally, an endomorphism is a linear map that sends $V$ to itself, i.e., $f\colon V \to V$. A bijective endomorphism is called an automorphism. Intuitively, an automorphism is a linear deformation that can be undone.
We already know linear maps. These are mappings between two vector spaces which are compatible with the (linear) vector space structure. We now examine a few examples of linear maps that we have already learned about in previous articles.
We now consider a rotation by the angle $\alpha$ counter-clockwise, with the origin as the center of rotation. This is a mapping which assigns to each vector $v$ the vector rotated by the angle $\alpha$:
Rotation of a vector by the angle $\alpha$
In the introductory article on linear maps we have seen that rotations about the origin are linear. We can visualize this rotation, as in the first example, by applying the mapping to the checkerboard pattern. The individual squares then retain their shape, but they are rotated.
The map "presses" vectors onto the straight line .
You can easily check that this map is linear.
We also apply this linear map to the checkerboard pattern to visualize it.
The entire grid is "flattened" onto the straight line .
Linear deformations of an arbitrary vector space
In all the above examples, we were able to visualize the linear maps as distortions of the checkerboard pattern in $\mathbb{R}^2$. This was possible because all of the above functions map from $\mathbb{R}^2$ into itself. We can illustrate any such linear map as a deformation of a checkerboard. The deformation of the checkerboard shows us how the map acts on the standard basis vectors $e_1$ and $e_2$ of $\mathbb{R}^2$ and on integer multiples of them.
Any linear map $f\colon \mathbb{R}^2 \to \mathbb{R}^2$ is a linear deformation of the space $\mathbb{R}^2$. Let us generalize this idea to general vector spaces $V$. We can think of linear maps from $V$ to $V$ as linear deformations or transformations of the vector space $V$. In contrast, a linear map $g\colon V \to W$ is a transport of the vector space $V$ to $W$. We give a separate name to a linear map which deforms the vector space, i.e., which maps from $V$ to $V$: such a linear map is called an endomorphism. So endomorphisms are just those linear maps for which the domain of definition and the target space coincide.
In the examples above, we have seen that some deformations preserve the space while others "flatten" it in a certain sense. The mappings that preserve space can be undone. When the space is flattened, the map cannot be undone, because information is lost. For example, in the above linear map "projection onto a straight line", the information about the component of the original vector perpendicular to the line is lost. It is not possible to recover the original vector after applying the transformation. So there are deformations of space that can be undone, and some that cannot. A deformation can be undone exactly if the associated mapping is bijective, i.e., invertible. This gives us the definition of a reversible deformation of space, i.e., an invertible endomorphism. Such a mapping is called an automorphism.
Let $V$ be a $K$-vector space. A linear map $f\colon V \to V$ that maps $V$ to itself is called an endomorphism. We denote the set of all endomorphisms of $V$ by $\operatorname{End}(V)$.
We call a bijective endomorphism an automorphism. The set of all automorphisms of $V$ is denoted by $\operatorname{Aut}(V)$.
Every automorphism is a bijective linear map and therefore also an isomorphism. But not every isomorphism is an automorphism. This is because an isomorphism may also map between two different vector spaces.
We consider the linear map $f\colon \mathbb{R}^2 \to \mathbb{R}^2,\ (x,y) \mapsto (x,-y)$. Since it maps the vector space $\mathbb{R}^2$ into itself, $f$ is an endomorphism. The mapping preserves the $x$-component and sends $y$ to $-y$. Thus, we can think of $f$ as a reflection along the $x$-axis. We can undo a reflection by reflecting a second time. This means $f$ is its own inverse mapping. Formally, this is denoted $f^{-1} = f$ or $f \circ f = \operatorname{id}$. Such a mapping is also called "self-inverse". Because $f$ has an inverse, that is, it is invertible, it follows that $f$ is bijective. Thus, $f$ is also an automorphism.
Next we consider the endomorphism . This is a counter-clockwise rotation by degrees. To convince yourself that really is such a rotation, you may calculate how acts on the standard basis vectors and . If it is such a rotation on these two vectors, then by linearity, must be such a rotation everywhere. A short calculation gives , as well as , which is exactly the desired rotation. Again, we can easily specify an inverse by "turning back" or rotating clockwise by degrees. This rotation is given by . We briefly check that is indeed the inverse of :
Indeed, $g(f(v)) = v$ for all $v \in \mathbb{R}^2$, so $g \circ f = \operatorname{id}_{\mathbb{R}^2}$, and $f$ is also an automorphism in this example.
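The rotation and its inverse can also be sketched in code. Since the formulas above are given abstractly, the concrete definitions below, f(x, y) = (-y, x) for a quarter rotation counter-clockwise and g(x, y) = (y, -x) for the rotation back, are assumptions for illustration.

```python
# Assumed formulas (for illustration): f rotates a quarter turn
# counter-clockwise, g rotates the same amount clockwise.

def f(v):
    x, y = v
    return (-y, x)

def g(v):
    x, y = v
    return (y, -x)

# g undoes f and vice versa, so f is invertible, hence an automorphism.
for v in [(1, 0), (0, 1), (3, -2)]:
    assert g(f(v)) == v and f(g(v)) == v
```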
The following animation shows how this mapping deforms the plane.
The transformation looks reversible, that is, it looks like an automorphism.
We can check this by showing that is both injective and surjective.
To show injectivity, let us look at the kernel of $f$, i.e., the set $\{v \in \mathbb{R}^2 : f(v) = 0\}$. For a vector $v$ in the kernel, $f(v) = 0$ holds. From this we directly get $v_2 = 0$ and therefore also $v_1 = 0$. Thus, the kernel consists only of the zero vector and hence $f$ is injective.
To show surjectivity, we take any and find a suitable preimage. That means, we are looking for some with . It is directly clear that must hold. Furthermore, must be true. This can be transformed to . So is a preimage of . Since was arbitrary, is surjective.
Linear maps of the form $(x, y) \mapsto (x + \lambda y,\, y)$ with $\lambda \in \mathbb{R}$ are called shears. You can show as an exercise that a shear is always an automorphism, no matter what $\lambda$ is.
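A shear of this form can be undone by subtracting the sheared amount again. As a sketch, the following code uses the hypothetical choice λ = 2:

```python
lam = 2.0   # hypothetical choice of the scalar

def shear(v):
    x, y = v
    return (x + lam * y, y)

def unshear(v):
    x, y = v
    return (x - lam * y, y)   # the inverse shear

# The two maps undo each other, so this shear is an automorphism.
for v in [(1.0, 0.0), (0.0, 1.0), (2.5, -4.0)]:
    assert unshear(shear(v)) == v and shear(unshear(v)) == v
```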
Let us now consider the mapping $f\colon \mathbb{R}^2 \to \mathbb{R}^2,\ (x, y) \mapsto (x, 0)$. This is an endomorphism from $\mathbb{R}^2$ to $\mathbb{R}^2$ that maps every point of the plane to a point on the $x$-axis. So we can think of $f$ as "flattening" the 2-dimensional plane onto the 1-dimensional $x$-axis.
Since $f$ maps the points of $\mathbb{R}^2$ exclusively onto the $x$-axis, $f$ is not a surjective mapping. Nor is it injective, because for every point $w$ on the $x$-axis we can find different vectors $v \neq v'$ with $f(v) = f(v') = w$. So $f$ is not an automorphism.
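Both failures can be seen in a short sketch; the formula p(x, y) = (x, 0) matches the description "flattening onto the x-axis" above.

```python
def p(v):
    # project onto the x-axis: keep x, discard y
    x, y = v
    return (x, 0)

# Not injective: distinct vectors share an image.
assert p((1, 1)) == p((1, 2)) == (1, 0)

# Not surjective: every image has second component 0,
# so e.g. (0, 1) is never hit.
assert all(p(v)[1] == 0 for v in [(3, 5), (-2, 7)])
```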
Let us now consider an example in $\mathbb{R}^3$. For this we look at the linear map $f$ below. Because $f$ maps the vector space $\mathbb{R}^3$ back to $\mathbb{R}^3$, the mapping is an endomorphism.
We now want to check whether is also an automorphism. To do this, we need to check surjectivity and injectivity.
For injectivity, we consider the kernel of $f$, that is, the set of all vectors $v$ with $f(v) = 0$. From the defining equations we can directly read off conditions on the components of $v$ that are satisfied by more than just the zero vector. We thus see that the kernel of $f$ contains not only the zero vector. Hence, $f$ is not injective and therefore cannot be bijective. In particular, $f$ is not an automorphism.
Visually, compresses vectors onto the plane . Thus, information is lost. Given a vector , it is no longer possible to say in an unambiguous way from which vector it arose under the mapping , since there are very many ways to represent as the sum of two numbers . For example, .
There are also endomorphisms on vector spaces other than and . For instance, we may take an arbitrary field and consider, as a vector space over it, the sequence space
Now, we take the mapping
If we write out the first sequence members, the situation looks like this:
Thus, the mapping $f$ interchanges even and odd sequence members. We briefly justify why $f$ is linear. Addition and scalar multiplication in the sequence space are defined component-wise, i.e., for sequences $a$, $b$ and a scalar $\lambda$ we have
Since only swaps the order of the components, is linear. We can also check linearity of explicitly.
Question: How do you directly show that is linear?
We prove additivity and homogeneity of . Let and and . Then
So $f$ is an endomorphism of the sequence space. Is $f$ also an automorphism? To answer that, we need to verify that $f$ can be undone. The mapping $f$ swaps even and odd sequence members. So if we swap the sequence members back, $f$ is undone. This second swap can be done by simply applying $f$ again. As with the very first example, $f$ is self-inverse; in formulas, $f^{-1} = f$ or $f \circ f = \operatorname{id}$. Since $f$ is invertible, the mapping is bijective. So $f$ is an automorphism.
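We can model sequences as Python functions from indices 0, 1, 2, … to field elements and sketch the swap map; the pairing of neighbouring entries is taken from the verbal description above.

```python
def swap(a):
    # swap neighbouring sequence members: entries 0<->1, 2<->3, ...
    return lambda n: a(n + 1) if n % 2 == 0 else a(n - 1)

a = lambda n: n * n          # sample sequence 0, 1, 4, 9, ...
twice = swap(swap(a))

# Applying swap twice restores the original sequence: swap is self-inverse.
assert all(twice(n) == a(n) for n in range(10))
```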
In the article Vector space of a linear map we saw that the set of linear maps between two $K$-vector spaces $V$ and $W$ again forms a vector space. Since the endomorphisms of $V$ are exactly the linear maps from $V$ to $V$, the set of endomorphisms is also a vector space. That is, we can add endomorphisms of a vector space and multiply them by scalars. In particular, we can combine two endomorphisms $f$ and $g$ by addition and obtain an endomorphism $f + g$. This sum is given pointwise by $(f + g)(v) = f(v) +_V g(v)$,
where $+_V$ denotes addition in the vector space $V$.
Can $f$ and $g$ be combined in another way? Intuitively, $f$ and $g$ are two deformations of the vector space $V$. We can now deform the space with $f$ and then deform the result with $g$. This produces a new deformation of the vector space. That is, we again get an endomorphism of $V$. This resulting mapping, which corresponds to applying $f$ and $g$ one after the other, is called composition and denoted $g \circ f$. Thus, the composition of two endomorphisms is always an endomorphism. In summary, we can "combine" two endomorphisms $f$ and $g$ by forming the sum $f + g$ or the composition $g \circ f$.
Because we have composition as an operation in addition to addition, carries more structure than just vector space structure. We will prove later that the set of endomorphisms on forms a ring with these two operations. Here, the addition in the ring is the addition of the mappings and the multiplication in the ring is the composition of the mappings.
It is now an interesting question whether the ring $\operatorname{End}(V)$ has a unit element and is commutative. If, in addition, every nonzero element were invertible, we would even have a field and could build more vector spaces over it. Now, a unit element exists if there is a neutral element of multiplication, that is, if there is an $e$ such that $e \circ f = f$ and $f \circ e = f$ holds for all $f \in \operatorname{End}(V)$. We already know a mapping that satisfies this property: the identity. This is a linear map, so $\operatorname{id}_V \in \operatorname{End}(V)$ holds. So the ring $\operatorname{End}(V)$ has a unit element.
Is $\operatorname{End}(V)$ a commutative ring? To answer this, we need to check whether $f \circ g = g \circ f$ holds for all $f, g \in \operatorname{End}(V)$. To get an intuition whether the statement is true or false, we again consider examples in $\mathbb{R}^2$. Let $g$ be the projection onto the $x$-axis; that is, $g(x, y) = (x, 0)$ for $(x, y) \in \mathbb{R}^2$. Furthermore, let $h$ be a quarter rotation clockwise about the origin. We want to investigate whether $h \circ g = g \circ h$. What do the maps $h \circ g$ and $g \circ h$ do visually? The map $h \circ g$ first presses all of space onto the $x$-axis and then rotates it a quarter turn in a clockwise direction. So our result is on the $y$-axis.
The mapping $g \circ h$ first rotates the space a quarter turn clockwise and then presses everything onto the $x$-axis. So the result of the mapping lies on the $x$-axis.
Consequently, $h \circ g$ and $g \circ h$ are different mappings. Therefore, $\operatorname{End}(\mathbb{R}^2)$ is not a commutative ring. More generally, for any vector space $V$ with $\dim V \geq 2$, $\operatorname{End}(V)$ is not a commutative ring. We deal with this below in a corresponding exercise.
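This non-commutativity is easy to check in code; the concrete formulas g(x, y) = (x, 0) for the projection and h(x, y) = (y, -x) for a clockwise quarter rotation are assumptions for illustration.

```python
def g(v):
    x, y = v
    return (x, 0)        # projection onto the x-axis

def h(v):
    x, y = v
    return (y, -x)       # assumed: quarter rotation clockwise

v = (1, 2)
assert h(g(v)) == (0, -1)   # h∘g lands on the y-axis
assert g(h(v)) == (2, 0)    # g∘h lands on the x-axis
assert h(g(v)) != g(h(v))   # the two compositions differ
```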
As announced above, we now prove that is always a ring:
Theorem (Endomorphism ring)
Let $K$ be a field and $V$ be a $K$-vector space. Then the set of endomorphisms of $V$, together with addition and composition, forms a ring with unit.
For two endomorphisms , the endomorphisms and are defined by
Here, is the addition on .
Proof (Endomorphism ring)
In order that $(\operatorname{End}(V), +, \circ)$ form a ring with unit, the following properties must be satisfied:
The tuple $(\operatorname{End}(V), +)$ is an Abelian group.
Associative law of addition:
Commutative law of addition:
Existence of an additive neutral element:
Existence of additive inverse:
The tuple $(\operatorname{End}(V), \circ)$ is a monoid
Associative law of multiplication:
Existence of a multiplicative neutral element:
The distributive laws hold
Distributive law I:
Distributive law II:
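Before proving these laws, they can be sanity-checked pointwise; the following sketch uses three hypothetical endomorphisms of $\mathbb{R}^2$ together with pointwise addition and composition.

```python
def add(f, g):
    # pointwise sum of two endomorphisms
    return lambda v: tuple(a + b for a, b in zip(f(v), g(v)))

def comp(f, g):
    # composition: f after g
    return lambda v: f(g(v))

f = lambda v: (v[0] + v[1], v[1])   # a shear
g = lambda v: (-v[1], v[0])         # a rotation
h = lambda v: (v[0], 0)             # a projection

for v in [(1, 2), (-3, 5)]:
    # distributive law I: f∘(g + h) = f∘g + f∘h
    assert comp(f, add(g, h))(v) == add(comp(f, g), comp(f, h))(v)
    # distributive law II: (f + g)∘h = f∘h + g∘h
    assert comp(add(f, g), h)(v) == add(comp(f, h), comp(g, h))(v)
```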
Before we start with the proof, let's keep the following simple fact in mind:
Let $f, g \in \operatorname{End}(V)$ and $v \in V$.
The mappings $f$ and $g$ map elements of $V$ to elements of $V$.
Accordingly, $f(v)$ and $g(v)$ are elements of $V$.
By premise, $V$ is a vector space. Therefore, we can apply the computational rules that hold in the $K$-vector space $V$ to the elements $f(v)$ and $g(v)$.
Proof step: $(\operatorname{End}(V), +)$ is an Abelian group
Proof step: Associative law of addition
The associative law of addition reads:
We prove this equation by establishing, for all $f, g, h \in \operatorname{End}(V)$, the equality $((f + g) + h)(v) = (f + (g + h))(v)$ for each vector $v \in V$.
So let $f, g, h \in \operatorname{End}(V)$ and $v \in V$. Then
This shows the associative law of addition.
Proof step: Commutative law of addition
The commutative law of addition is:
We prove this by establishing that for all $f, g \in \operatorname{End}(V)$ we have $(f + g)(v) = (g + f)(v)$ for each $v \in V$.
So let $f, g \in \operatorname{End}(V)$ and $v \in V$. Then
This shows the commutativity of the addition.
Proof step: Existence of an additive neutral element
We need to show the following statement:
We prove this statement by establishing:
For this, let us choose the zero map $0\colon V \to V,\ v \mapsto 0$. We now show that the zero map is the neutral element of the addition. For this, let $f \in \operatorname{End}(V)$ and $v \in V$. Then
The additive neutral element here is therefore the zero mapping .
Proof step: Existence of additive inverses
We have to show the following statement:
This is done by establishing:
Since $V$ is a $K$-vector space, any vector $v \in V$ has an additive inverse, namely $-v$. Then $v + (-v) = 0$ holds. Therefore, for any endomorphism $f$, we can simply choose $(-f)(v) := -f(v)$. Now we still have to show that $f + (-f)$ is the zero map. For this, let $f \in \operatorname{End}(V)$ and $v \in V$. Then
So the additive inverse of a mapping is given by .
We have thus proved that $(\operatorname{End}(V), +)$ is an Abelian group.
We could have shown this statement differently. In the article Vector space of a linear map we considered the set of linear maps between two -vector spaces and . We call this set . We have seen that forms a -vector space. It is then true that . So is also a vector space and thus an Abelian group.
Proof step: $(\operatorname{End}(V), \circ)$ is a monoid
Proof step: Associative law of multiplication
The associative law of multiplication in is:
This is true because the composition of mappings is associative.
Proof step: Existence of a multiplicative neutral element
We have to establish the following statement:
This is proven by establishing the following statement:
We choose $\operatorname{id}_V$, the identity on $V$. We want to show that $\operatorname{id}_V$ is the neutral element of the multiplication. For this, let $f \in \operatorname{End}(V)$ and $v \in V$. Then
So the neutral element of the multiplication in $\operatorname{End}(V)$ is given by the identity on $V$, i.e., $\operatorname{id}_V$.
Proof step: Distributive laws
Proof step: Distributive law I
The distributive law I reads:
We prove this equation by establishing the corresponding pointwise equality for all $v \in V$. For this, let $f, g, h \in \operatorname{End}(V)$ and $v \in V$. Then
This establishes the distributive law I.
Proof step: Distributive law II
The distributive law II reads:
We prove this equation by establishing the corresponding pointwise equation for all $f, g, h \in \operatorname{End}(V)$ and all $v \in V$. So let $f, g, h \in \operatorname{End}(V)$ and $v \in V$. Then
Above we have already examined some examples of endomorphisms and automorphisms. We have seen that endomorphisms which "flatten" a vector space are not bijective and therefore not automorphisms. On the other hand, endomorphisms which do not "flatten" a vector space are indeed automorphisms.
Question: What does "not flattening" mean in mathematical language?
An endomorphism $f$ "does not flatten a vector space" if there is no vector $v \neq 0$ mapped to zero by $f$. That is, for all vectors $v \neq 0$, $f(v) \neq 0$ holds. Since $f$ is a linear map, this is exactly the case if $f$ is injective, i.e., if it is a monomorphism.
For endomorphisms of finite-dimensional vector spaces, being "non-flattening" is equivalent to being an automorphism: let $f$ be an endomorphism of an $n$-dimensional vector space $V$. If the mapping $f$ is an automorphism, then it is injective, so $f$ does not flatten $V$. Conversely, if we assume that $f$ does not flatten $V$, it follows that $f$ is injective. Thus, no information from $V$ is lost when mapping with $f$. From this, we can conclude that the image $f(V)$ is also $n$-dimensional. So $f(V) = V$ must hold. Thus, $f$ is also surjective and therefore an automorphism.
We have seen that an injective endomorphism of a finite-dimensional vector space is automatically surjective. Does the converse also hold? In other words: if $f$ is a surjective endomorphism of an $n$-dimensional vector space $V$, does it follow that $f$ is injective? If $f$ is surjective, then $f(V) = V$ and hence $\dim f(V) = n$ holds. Suppose $f$ is not injective. Then there is a vector $v \neq 0$ for which $f(v) = 0$. Thus, $f$ "flattens the direction" in which $v$ points. This means, when mapping by $f$, we lose at least one dimension of $V$. Consequently, we would have $\dim f(V) < n$. This is a contradiction to $\dim f(V) = n$. Therefore, $f$ must be injective. So if $f$ is surjective, then $f$ is also injective.
We show these statements again formally in the following theorem.
Theorem (Endomorphisms on finite-dimensional vector spaces)
Let $V$ be a finite-dimensional vector space and $f \in \operatorname{End}(V)$ be an endomorphism. Then, the following statements are equivalent:
$f$ is an isomorphism
$f$ is a monomorphism
$f$ is an epimorphism
Proof (Endomorphisms on finite-dimensional vector spaces)
We already know that for two finite-dimensional vector spaces $V$ and $W$ with $\dim V = \dim W$ and a linear map $f\colon V \to W$, the three statements
is an isomorphism
is a monomorphism
is an epimorphism
are equivalent. So for an endomorphism of a finite-dimensional vector space $V$ (the case $W = V$), the three statements must also be equivalent.
In the infinite-dimensional case, the above argument no longer works. In the finite-dimensional case we exploited that for an $n$-dimensional vector space $V$ and a subspace $U \subseteq V$, it already follows from $\dim U = n$ that $U = V$. Above, we used this with $U = f(V)$. However, in infinite-dimensional vector spaces this does not hold. Here, a paradoxical effect occurs: one can embed an infinite-dimensional subspace into another infinite-dimensional space of the same size without filling all of it (this is related to Hilbert's hotel paradox).
So for endomorphisms of an infinite-dimensional vector space , it does not hold that is surjective exactly when is injective. To understand this better, we now examine concrete counterexamples.
Example (An injective endomorphism that is not surjective)
You can easily verify that is linear. Why is injective? For with we have , so . Thus, follows and is injective.
Why is $f$ not surjective? To see this, we need to find a vector in the sequence space that is not "hit" by $f$. For instance, consider $w = (1, 0, 0, \dots)$. No matter which $v$ we choose, the first sequence member of $f(v)$ is equal to $0$. So $w$ is never hit by $f$. Therefore, $f$ is not surjective.
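Modelling sequences as functions from indices 0, 1, 2, … again, the map of this example can be sketched as a right shift; the concrete formula, prepending a zero as in the Hilbert-hotel picture, is an assumption for illustration.

```python
def right_shift(a):
    # prepend a zero: (a_1, a_2, ...) |-> (0, a_1, a_2, ...)
    return lambda n: 0 if n == 0 else a(n - 1)

a = lambda n: n + 1              # the sequence 1, 2, 3, ...
b = right_shift(a)               # the sequence 0, 1, 2, 3, ...

assert [b(n) for n in range(5)] == [0, 1, 2, 3, 4]
# Every image starts with 0, so (1, 0, 0, ...) has no preimage.
assert b(0) == 0
```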
The procedure in this example is often presented in the context of Hilbert's hotel paradox. There, one shifts all elements of the set $\mathbb{N}$ by $1$, using the mapping
This mapping is also injective, but just like $f$ it is not surjective. The difference is that $f$ is a linear map between vector spaces, while the shift is only a map on the set $\mathbb{N}$. So infinite-dimensional vector spaces show some of the same weird effects as infinitely large sets.
Example (A surjective endomorphism that is not injective)
We consider again the sequence space over . Now we define the endomorphism
So $f$ deletes the first member of each sequence. Again, one can easily verify that $f$ is linear.
First we verify that $f$ is surjective. For this, let $w$ be any sequence. We want to find a sequence $v$ for which $f(v) = w$ holds. This is true for $v = (0, w_1, w_2, \dots)$, and thus $f$ is surjective.
Why is $f$ not injective? To see this, we need to find some $v \neq v'$ with $f(v) = f(v')$. An example is (again) given by $v = (1, 0, 0, \dots)$ and $v' = (0, 0, 0, \dots)$. Then $f(v) = f(v') = (0, 0, \dots)$, but $v \neq v'$. Thus, $f$ is not injective.
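The left shift of this example can be sketched the same way, with sequences modelled as functions on the indices 0, 1, 2, …:

```python
def left_shift(a):
    # drop the first member: (a_1, a_2, a_3, ...) |-> (a_2, a_3, ...)
    return lambda n: a(n + 1)

# Surjective: prepending 0 to any target w yields a preimage.
w = lambda n: n * n
v = lambda n: 0 if n == 0 else w(n - 1)
assert all(left_shift(v)(n) == w(n) for n in range(10))

# Not injective: (1, 0, 0, ...) is nonzero but maps to the zero sequence.
e = lambda n: 1 if n == 0 else 0
assert all(left_shift(e)(n) == 0 for n in range(10))
```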
We know that the endomorphisms form a ring with unit. The automorphisms are exactly all invertible endomorphisms. In other words, the automorphisms of a vector space are exactly the multiplicatively invertible elements of the endomorphism ring. Recall that the multiplication in the endomorphism ring is just the composition of mappings. In the following theorem we show that $\operatorname{Aut}(V)$ is indeed a group with respect to this multiplication.
Theorem (Automorphism group)
Let $K$ be a field and $V$ a $K$-vector space.
The set forms a group with respect to composition .
Further, we have .
Proof (Automorphism group)
We need to show that
$\operatorname{Aut}(V)$ is closed with respect to $\circ$
The operation $\circ$ is associative on $\operatorname{Aut}(V)$
There is a neutral element with respect to $\circ$ in $\operatorname{Aut}(V)$
Each element in $\operatorname{Aut}(V)$ has a multiplicative inverse.
Proof step: Closedness with respect to
We prove that for all automorphisms and it also holds that .
For this, let $f, g \in \operatorname{Aut}(V)$. So $f$ and $g$ are also endomorphisms. Because $\operatorname{End}(V)$ forms a ring with multiplication $\circ$, it is closed with respect to $\circ$. Thus, $f \circ g \in \operatorname{End}(V)$ holds. So all we have to do is to justify that $f \circ g$ is bijective. Because $f$ and $g$ are bijective, they have inverses $f^{-1}$ and $g^{-1}$, respectively. We now show that $g^{-1} \circ f^{-1}$ is the inverse of $f \circ g$. It holds that
So $f \circ g$ has an inverse and is therefore bijective. Therefore, $f \circ g$ is an automorphism.
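The rule used here, that the composition is undone by applying the two inverses in reverse order, can be sketched on two hypothetical automorphisms of $\mathbb{R}^2$:

```python
rot     = lambda v: (-v[1], v[0])         # quarter rotation
rot_inv = lambda v: (v[1], -v[0])         # its inverse
sh      = lambda v: (v[0] + v[1], v[1])   # a shear
sh_inv  = lambda v: (v[0] - v[1], v[1])   # its inverse

comp = lambda p, q: (lambda v: p(q(v)))

fg     = comp(rot, sh)           # first shear, then rotate
fg_inv = comp(sh_inv, rot_inv)   # undo in the reverse order

for v in [(1, 0), (2, -3)]:
    assert fg_inv(fg(v)) == v and fg(fg_inv(v)) == v
```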
Proof step: Associativity of
We need to show that for all automorphisms and the following holds
Let and , as well as be any vector. Then
Since the mappings and coincide on all vectors , they are equal, i.e., .
Proof step: Existence of a neutral element
We need to find an element such that for all it is true that .
We choose $\operatorname{id}_V$. Then $\operatorname{id}_V(v) = v$ for all $v \in V$, so $\operatorname{id}_V$ is linear. Thus, $\operatorname{id}_V \in \operatorname{End}(V)$. Further, $\operatorname{id}_V$ is bijective, so $\operatorname{id}_V \in \operatorname{Aut}(V)$.
Let now . Then
So is the neutral element in .
Proof step: Existence of a multiplicative inverse
We prove that for every there exists an automorphism such that .
Let be an automorphism. Then is bijective and thus there exists an inverse mapping , for which . We only have to show that . What we know is that inverses of linear maps are again linear. Consequently, , as the inverse of the linear map , is also linear. Furthermore, is bijective because has an inverse . So holds, and thus is an inverse of in .
The automorphisms form a group, but are no longer a ring. This is because no longer has an additive structure: If we have two automorphisms and from a vector space , need not be an automorphism again. And indeed, there are counterexamples for this:
Example (Sum of automorphisms which is not an automorphism)
We consider the vector space and define the automorphisms
It is easy to show that and are linear and bijective, so . Now
Since this sum does not hit every vector of $\mathbb{R}^2$, it is not surjective. So $f + g$ is not bijective and therefore not an automorphism.
$\operatorname{Aut}(V)$ is never closed under addition, unless $V = \{0\}$.
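A minimal sketch of such a counterexample, assuming the pair f = id and g = -id on $\mathbb{R}^2$:

```python
f = lambda v: v                      # the identity, an automorphism
g = lambda v: (-v[0], -v[1])         # -id, also an automorphism

def add(p, q):
    # pointwise sum of two maps
    return lambda v: tuple(a + b for a, b in zip(p(v), q(v)))

s = add(f, g)                        # the sum f + g
assert s((3, -7)) == (0, 0)          # f + g is the zero map
# The zero map never hits (1, 0), so f + g is not surjective,
# hence not an automorphism.
```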
For vector spaces $V$ with $\dim V \geq 2$, the automorphism group is not commutative. As with the endomorphism ring, the reason is that the composition of mappings is not commutative. We demonstrate this non-commutativity in an exercise below.
Linearity can easily be verified. Since the domain and target space are the same, is therefore an endomorphism.
We now want to show that is bijective.
To do this, we must show that is injective and surjective.
We start with injectivity.
Let and with .
Then, for we have , which implies and thus .
This establishes injectivity.
Now we show surjectivity.
For this, let .
We define .
So is indeed surjective.
So we have shown that is an automorphism.
Exercise (Transformation in the space of Fibonacci sequences)
Let $K$ be a field and let $V$ be the vector space of Fibonacci sequences, i.e., of sequences $(a_n)_{n \in \mathbb{N}}$ in $K^{\mathbb{N}}$ satisfying $a_{n+2} = a_{n+1} + a_n$ for all $n$,
where $K^{\mathbb{N}}$ is the space of all sequences in $K$. Show:
$V$ is isomorphic to $K^2$.
There is an endomorphism $f \in \operatorname{End}(V)$ that swaps the first two entries of each sequence, that is, $f(a)_1 = a_2$ and $f(a)_2 = a_1$ holds for all $a \in V$.
is an automorphism.
Solution (Transformation in the space of Fibonacci sequences)
We show that
is an isomorphism. The linearity can be easily verified.
For injectivity we show that the kernel is trivial. So let $a \in V$ be a sequence that is mapped to zero, i.e., $a_1 = a_2 = 0$. We show $a_n = 0$ for all $n$ by induction. By assumption, the statement holds for $n = 1$ and $n = 2$. Now let $n \geq 2$ be fixed. To establish the induction step, we must show that $a_{n+1} = 0$. As an induction assumption, we can use that $a_m = 0$ for all $m \leq n$. By definition of the sequence it follows that $a_{n+1} = a_n + a_{n-1} = 0 + 0 = 0$, which establishes the induction step and completes the induction.
For surjectivity, we use that any sequence in $V$ can be defined by specifying the first two members of the sequence: let $(x, y) \in K^2$. We define $a_1 = x$, $a_2 = y$, and the remaining members inductively via the recurrence, as in the proof of injectivity. Then $a \in V$, and $a$ is mapped to $(x, y)$.
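The inverse construction used for surjectivity, rebuilding a sequence from its first two members via the recurrence, can be sketched as:

```python
def fib_seq(a1, a2, length=10):
    # rebuild the recurrence sequence from its first two members
    seq = [a1, a2]
    while len(seq) < length:
        seq.append(seq[-1] + seq[-2])   # a_{n+2} = a_{n+1} + a_n
    return seq

# The first two members determine the whole sequence:
assert fib_seq(1, 1) == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
# ... and the zero start gives the zero sequence (injectivity).
assert fib_seq(0, 0) == [0] * 10
```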
Proof step: There is an endomorphism that swaps the first two entries of each sequence.
We use the isomorphism from the first part of the exercise. Obviously
is linear and maps from to , so it is an endomorphism. Thus, is also linear as a concatenation of linear maps. Since maps from to , is an endomorphism, and by construction
for all . So swaps the first two entries of each sequence.
Proof step: is an automorphism.
We have to show that $f$ is an isomorphism. Since the map from the first part of the exercise is an isomorphism, $f$ is an isomorphism if and only if the corresponding endomorphism of $K^2$ is also an isomorphism. That endomorphism simply swaps the two components of a vector in $K^2$. So it maps the (ordered) basis $((1,0), (0,1))$ of $K^2$ to the basis $((0,1), (1,0))$. Since a linear map is bijective if and only if it maps bases to bases, it is an isomorphism.
Exercise (Shears are automorphisms)
Let $\lambda \in \mathbb{R}$ be a scalar. We consider the mapping $f\colon \mathbb{R}^2 \to \mathbb{R}^2,\ (x, y) \mapsto (x + \lambda y,\, y)$. Show that $f$ is an automorphism.
Solution (Shears are automorphisms)
The linearity of $f$ can easily be verified. Since $f$ maps $\mathbb{R}^2$ to itself, $f$ is an endomorphism and all that remains is to show bijectivity.
We prove injectivity by showing that the kernel is trivial. Let $v \in \mathbb{R}^2$ with $f(v) = 0$. We want to show $v = 0$. Since $f(v) = 0$ holds, we get $v_2 = 0$ from the second vector component. It now follows that $v_1 + \lambda v_2 = v_1$ and that $v_1 = 0$ holds. Thus, $f$ is injective.
Second, we need to show that $f$ is surjective. For this, let $w \in \mathbb{R}^2$. We must show that there exists a $v \in \mathbb{R}^2$ with $f(v) = w$. If we insert the definition of $f$, the vector we are looking for must satisfy $(v_1 + \lambda v_2,\, v_2) = (w_1, w_2)$. That is, $v_2 = w_2$ must hold. From this we get $v_1 + \lambda w_2 = w_1$, so $v_1 = w_1 - \lambda w_2$. If we set $v = (w_1 - \lambda w_2,\, w_2)$, the following is true
So is surjective.
Since is bijective and an endomorphism, it is also an automorphism.
Exercise (Non-commutativity in the endomorphism ring)
Let $V$ be an $n$-dimensional $K$-vector space with $n \geq 2$. Show: The endomorphism ring $\operatorname{End}(V)$ is not commutative.
Solution (Non-commutativity in the endomorphism ring)
Let $\{b_1, \dots, b_n\}$ be a basis of $V$, where by assumption $n \geq 2$ holds. We define two endomorphisms $f$ and $g$ that do not commute, using the principle of linear continuation, by specifying the images of the basis vectors: for $i \in \{1, \dots, n\}$ set
So the mapping $f$ swaps the first two basis vectors, while $g$ maps the first basis vector to the zero vector. In defining $f$ and $g$, we needed that there are at least two basis vectors. For $b_1$ we now have
The basis vector $b_2$ is an element of the basis of $V$, so it cannot be the zero vector. Therefore
So $g \circ f \neq f \circ g$ holds. Thus, the endomorphism ring is not commutative if $n \geq 2$.
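The construction of this solution can be sketched on $\mathbb{R}^2$ with the standard basis; the definitions of f and g on basis vectors are assumptions matching the description (f swaps e1 and e2, while g sends e1 to 0 and fixes e2).

```python
def f(v):
    x, y = v
    return (y, x)        # swap the two basis directions

def g(v):
    x, y = v
    return (0, y)        # send e1 to 0, fix e2

e1 = (1, 0)
assert g(f(e1)) == (0, 1)   # (g∘f)(e1) = g(e2) = e2
assert f(g(e1)) == (0, 0)   # (f∘g)(e1) = f(0) = 0
assert g(f(e1)) != f(g(e1))
```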
Exercise (Commutativity in the endomorphism ring)
Let $V$ be a one-dimensional $K$-vector space, i.e., $\dim V = 1$. Show that the endomorphism ring $\operatorname{End}(V)$ is commutative.
Solution (Commutativity in the endomorphism ring)
Let be a basis of and let be arbitrary. Endomorphisms are already uniquely determined by the images of the basis vectors. Since there is only one basis vector due to , we have and thus
for certain . Since , each is of the form for some . With linearity of and the commutativity of multiplication in , it follows that
Analogously one can show for any from which follows
Thus, for all we have
where in we have exploited the commutativity of multiplication in .
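In coordinates, the argument boils down to the following sketch: on a one-dimensional space, every endomorphism is multiplication by a scalar, and scalars commute. The values 3 and -2 are arbitrary illustrations.

```python
a, b = 3.0, -2.0
f = lambda v: a * v      # endomorphism: multiplication by a
g = lambda v: b * v      # endomorphism: multiplication by b

for v in [1.0, -4.5, 7.25]:
    assert f(g(v)) == g(f(v))   # f∘g = g∘f since a*b = b*a
```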
In the above two problems we saw that $\operatorname{End}(V)$ is commutative if $\dim V = 1$ and noncommutative if $\dim V \geq 2$. What is the situation for $\dim V = 0$? If $\dim V = 0$, then $V$ is the null vector space $\{0\}$. There is only one endomorphism of the null space: the null mapping. So $\operatorname{End}(V) = \{0\}$ holds. Since the ring consists of only one element, it is commutative.
This article is licensed under the free license CC-BY-SA 3.0. You may use, modify, and share it freely, as long as you name „Serlo" as the source and put your changes under the same CC-BY-SA 3.0 or a compatible license. On the page „Kopier uns!" we explain what you have to pay attention to when using our texts, pictures, or videos.