# Introduction: Vector space – Serlo


We already know the vector spaces ${\displaystyle \mathbb {R} ^{2}}$ and ${\displaystyle \mathbb {R} ^{3}}$ from school, where we encountered them in the form of coordinate systems. In mathematics, the concept of a vector space is much broader. In the following, we will develop the abstract mathematical concept of a vector space, starting from the vector spaces known from school. Vector spaces have wide applications in science, technology and data analysis.

## The vector space ${\displaystyle \mathbb {R} ^{n}}$


In ${\displaystyle \mathbb {R} ^{2}}$ and ${\displaystyle \mathbb {R} ^{3}}$ we know vectors in the form of points in the plane or in space. Sometimes we also encounter arrows as representatives of vectors in the coordinate system. Vectors can be described by two coordinates in ${\displaystyle \mathbb {R} ^{2}}$ and by three coordinates in ${\displaystyle \mathbb {R} ^{3}}$. For example, the vector ${\displaystyle v=(2,1)^{T}}$ can be represented as an arrow from the origin to the point ${\displaystyle (2,1)}$.

Often, however, three coordinates are not enough to represent all the desired information. This is shown by the following two examples:

Example (Radiosonde)

Suppose we launch a radiosonde (a balloon with a measuring device) to investigate the earth's atmosphere. Besides the position of the probe (three data points), we record further measured data, namely temperature and air pressure. The three coordinates of ${\displaystyle \mathbb {R} ^{3}}$ are already needed to represent the position of the probe, so for the remaining measured values we need two more coordinates. Assume the probe is located 20 m to the east, 30 m to the north and at a height of 15 m relative to the measuring station, and that its instruments currently show a temperature of 13 °C and an air pressure of 1 bar. To write down all recorded data at once, we use the vector ${\displaystyle a=(20,30,15,13,1)^{T}}$. Here, the superscript ${\displaystyle T}$ (for "transposed") allows the space-saving notation as a row vector. Written as a column vector, it reads

${\displaystyle a={\begin{pmatrix}20\\30\\15\\13\\1\end{pmatrix}}}$

Thus we are in ${\displaystyle \mathbb {R} ^{5}}$ instead of ${\displaystyle \mathbb {R} ^{3}}$, because we need five instead of three numbers to describe the vector.
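As an illustration, such a five-dimensional data vector can be sketched in a few lines of Python. The variable name `a` and the concrete values are simply taken from the radiosonde example above; this is only an illustrative sketch, not part of any measurement library.

```python
# A radiosonde snapshot as a vector in R^5 (values from the example above):
# (east offset in m, north offset in m, height in m, temperature in °C, pressure in bar)
a = (20.0, 30.0, 15.0, 13.0, 1.0)

# The dimension is just the number of independently chosen coordinates.
print(len(a))  # 5
```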

Example (Stocks)

We consider the stock values of 30 companies at a certain point in time. We can record these in a vector with 30 entries, where each entry stands for the value of the respective stock at that point in time. We obtain a vector in ${\displaystyle \mathbb {R} ^{30}}$ that describes the current state of the financial market. We can extend the 30 values further by including other stocks. Ultimately, we can choose any natural number ${\displaystyle n}$ as the dimension of our "stock vector". The current state of a stock market can thus be encoded by a vector of ${\displaystyle \mathbb {R} ^{n}}$ with ${\displaystyle n}$ entries.

We have seen from the examples that it can be useful to extend ${\displaystyle \mathbb {R} }$ to a general vector space ${\displaystyle \mathbb {R} ^{n}}$ by adding more dimensions. And there are many more examples! In the transition from ${\displaystyle \mathbb {R} ^{2}}$ to ${\displaystyle \mathbb {R} ^{3}}$ we can still vividly imagine that we increase the dimension by adding an independent direction. In higher dimensions we lack this geometric intuition. However, we can picture higher-dimensional vector spaces very well in the tuple notation: an additional dimension is obtained by adding another number. These numbers can all be chosen independently, and we call them coordinates.

## Generalization to ${\displaystyle K^{n}}$

So far we have created vector spaces by adding further dimensions to ${\displaystyle \mathbb {R} }$. Now we want to look at which properties of the real numbers are relevant for this and, based on this, generalize the vector space notion further. We are familiar with the rules of ${\displaystyle \mathbb {R} }$. We already know the vector addition and the scalar multiplication in ${\displaystyle \mathbb {R} ^{2}}$ and in ${\displaystyle \mathbb {R} ^{3}}$ and we can visualize these vividly.

In the same way, however, we can also calculate in higher dimensions. Thus the sum of the vectors ${\displaystyle v:=(-1,0,2,4)^{T}}$ and ${\displaystyle w:=(2,1,-1,0)^{T}\in \mathbb {R} ^{4}}$ is just given by summing up the entries:

${\displaystyle v+w={\begin{pmatrix}\color {red}{-1}\\0\\2\\4\end{pmatrix}}+{\begin{pmatrix}\color {red}{2}\\1\\-1\\0\end{pmatrix}}={\begin{pmatrix}\color {red}{-1+2}\\0+1\\2+(-1)\\4+0\end{pmatrix}}={\begin{pmatrix}\color {red}{1}\\1\\1\\4\end{pmatrix}}}$

The scalar multiplication of ${\displaystyle v:=\left(0,3,-1,{\tfrac {1}{2}}\right)^{T}\in \mathbb {R} ^{4}}$ with some ${\displaystyle \alpha :=2}$ is done by multiplying all entries separately:

${\displaystyle \alpha \cdot v=2\cdot {\begin{pmatrix}0\\3\\\color {OliveGreen}{-1}\\{\tfrac {1}{2}}\end{pmatrix}}={\begin{pmatrix}2\cdot 0\\2\cdot 3\\\color {OliveGreen}{2\cdot (-1)}\\2\cdot {\tfrac {1}{2}}\end{pmatrix}}={\begin{pmatrix}0\\6\\\color {OliveGreen}{-2}\\1\end{pmatrix}}}$

Just as in ${\displaystyle \mathbb {R} ^{4}}$ we can proceed in general in ${\displaystyle \mathbb {R} ^{n}}$. Let us now consider which properties of ${\displaystyle \mathbb {R} }$ guarantee that such computations with vectors in ${\displaystyle \mathbb {R} ^{n}}$ are possible. We see from the above examples that scalar multiplication and addition of vectors in each component correspond to multiplication and addition in ${\displaystyle \mathbb {R} }$, respectively. For instance, in the first component of the addition we compute ${\displaystyle \color {red}{-1+2=1}}$. Likewise, for the scalar multiplication in the third component we have ${\displaystyle \color {OliveGreen}{2\cdot (-1)=-2}}$.
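The componentwise rules described above can be sketched in a few lines of Python. The helper names `vec_add` and `scalar_mul` are ad-hoc choices for this illustration, not part of any library; vectors are represented as plain tuples.

```python
def vec_add(v, w):
    """Add two vectors of R^n componentwise."""
    assert len(v) == len(w), "vectors must have the same dimension"
    return tuple(x + y for x, y in zip(v, w))

def scalar_mul(a, v):
    """Scale every component of v by the scalar a."""
    return tuple(a * x for x in v)

v = (-1, 0, 2, 4)
w = (2, 1, -1, 0)
print(vec_add(v, w))                   # (1, 1, 1, 4)
print(scalar_mul(2, (0, 3, -1, 0.5)))  # (0, 6, -2, 1.0)
```

Each component is computed by one addition or multiplication in ${\displaystyle \mathbb {R} }$, which is exactly the reduction discussed in the text.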

So arithmetic in ${\displaystyle \mathbb {R} ^{n}}$ is traced back to addition and multiplication in ${\displaystyle \mathbb {R} }$. This opens another possibility for abstraction: a set in which one can add and multiply as in the real numbers is called a field (and ${\displaystyle \mathbb {R} }$ is such a field). It should therefore be sufficient that the numbers in the vector tuple come from a field. Thus we can form a vector space over any field ${\displaystyle K}$, for instance over the rational numbers ${\displaystyle \mathbb {Q} }$ or the complex numbers ${\displaystyle \mathbb {C} }$. Analogously to ${\displaystyle \mathbb {R} ^{n}}$, we start with the field ${\displaystyle K}$ and build up a vector space ${\displaystyle K^{n}}$ by adding further "independent directions".

Example (The vector space ${\displaystyle \mathbb {Q} ^{3}}$)

The vector space ${\displaystyle \mathbb {Q} ^{3}}$ is, like ${\displaystyle \mathbb {R} ^{3}}$, a set of tuples ${\displaystyle (a,b,c)^{T}}$, only that the entries are exclusively rational numbers from ${\displaystyle \mathbb {Q} }$ and not arbitrary real numbers from ${\displaystyle \mathbb {R} }$. Hence ${\displaystyle a,b,c\in \mathbb {Q} }$. Thus ${\displaystyle (1,2,3)^{T}}$ and ${\displaystyle \left(-1,{\tfrac {1}{2}},-{\tfrac {42}{23}}\right)^{T}}$ are vectors from ${\displaystyle \mathbb {Q} ^{3}}$. In contrast, ${\displaystyle (1,{\sqrt {2}},-3)^{T}}$ is not a vector from ${\displaystyle \mathbb {Q} ^{3}}$, because the second component ${\displaystyle {\sqrt {2}}}$ is an irrational number.
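A small Python sketch with exact rational arithmetic (via the standard `fractions` module) illustrates that computations in ${\displaystyle \mathbb {Q} ^{3}}$ never leave the rational numbers; the concrete vectors are taken from the example above.

```python
from fractions import Fraction

# A vector of Q^3: all three entries are exact rational numbers.
v = (Fraction(-1), Fraction(1, 2), Fraction(-42, 23))
w = (Fraction(1), Fraction(2), Fraction(3))

# Vector addition stays inside Q^3, because Q is closed under addition.
s = tuple(a + b for a, b in zip(v, w))
print(s)  # (Fraction(0, 1), Fraction(5, 2), Fraction(27, 23))
```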

## Relation to polynomials

Above we used vectors of ${\displaystyle \mathbb {R} ^{n}}$ in tuple notation to describe systems with ${\displaystyle n}$ units of information. We find the structure of computing with tuples elsewhere as well. Consider the polynomial of degree 2 (a quadratic polynomial) given by ${\displaystyle f(x)=7x^{2}+3x-2={\color {Red}7}\cdot x^{2}+{\color {OliveGreen}3}\cdot x^{1}+({\color {NavyBlue}-2})\cdot x^{0}}$. We always sort the summands such that the exponents are ordered descending from the degree of the polynomial (here 2) down to ${\displaystyle 0}$. In doing so, we note that this polynomial resembles the vector ${\displaystyle ({\color {Red}7},{\color {OliveGreen}3},{\color {NavyBlue}-2})^{T}}$: the first coefficient of the polynomial is in the first component of the vector, and so on. In effect, the vector encodes the polynomial.

We can observe the same similarity between addition and scalar multiplication of polynomials on the one hand and the associated operations of vectors on the other. Let us take the polynomials ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)=7x^{2}+3x-2}$ and ${\displaystyle g:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle g(x)=x^{2}+6}$ and the scalar ${\displaystyle \rho =-1}$. We can write the polynomials as tuples:

${\displaystyle {\begin{array}{cccc}f(x)&=&{\color {Red}7}x^{2}&+{\color {OliveGreen}3}x&+({\color {NavyBlue}-2})&\leftrightarrow &({\color {Red}7},{\color {OliveGreen}3},{\color {NavyBlue}-2})^{T}\\g(x)&=&{\color {Red}1}x^{2}&+{\color {OliveGreen}0}x&+{\color {NavyBlue}6}&\leftrightarrow &({\color {Red}1},{\color {OliveGreen}0},{\color {NavyBlue}6})^{T}\end{array}}}$

Now we calculate ${\displaystyle f(x)+g(x)}$ in both forms of representation:

${\displaystyle {\begin{array}{ccc}{\color {Red}7}x^{2}&+{\color {OliveGreen}3}x&+({\color {NavyBlue}-2})&\leftrightarrow &({\color {Red}7},{\color {OliveGreen}3},{\color {NavyBlue}-2})^{T}\\&+&&&+\\{\color {Red}1}x^{2}&+{\color {OliveGreen}0}x&+{\color {NavyBlue}6}&\leftrightarrow &({\color {Red}1},{\color {OliveGreen}0},{\color {NavyBlue}6})^{T}\\&=&&&=\\{\color {Red}8}x^{2}&+{\color {OliveGreen}3}x&+{\color {NavyBlue}4}&\leftrightarrow &({\color {Red}8},{\color {OliveGreen}3},{\color {NavyBlue}4})^{T}\end{array}}}$

Also the multiplication of ${\displaystyle f(x)}$ by the factor ${\displaystyle \rho }$ corresponds to the respective calculation with the associated vector tuples:

${\displaystyle {\begin{array}{ccc}&-1&&&-1\\&\cdot &&&\cdot \\{\color {Red}7}x^{2}&+{\color {OliveGreen}3}x&+({\color {NavyBlue}-2})&\leftrightarrow &({\color {Red}7},{\color {OliveGreen}3},{\color {NavyBlue}-2})^{T}\\&=&&&=\\{\color {Red}-7}x^{2}&+({\color {OliveGreen}-3})x&+{\color {NavyBlue}2}&\leftrightarrow &({\color {Red}-7},{\color {OliveGreen}-3},{\color {NavyBlue}2})^{T}\\\end{array}}}$

Every second degree polynomial can be uniquely represented by a three-dimensional vector in the way described. Conversely, every three-dimensional vector uniquely describes a second-degree polynomial. Thus we find a bijective map between the set of second degree polynomials and the ${\displaystyle \mathbb {R} ^{3}}$. Similarly, there exists a bijective map between third degree polynomials and the ${\displaystyle \mathbb {R} ^{4}}$ and in general between polynomials of ${\displaystyle n}$-th degree and the ${\displaystyle \mathbb {R} ^{n+1}}$.
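This correspondence can be checked in Python. In the sketch below, a quadratic polynomial is encoded as a coefficient tuple ordered from the highest power down to the constant term, as in the text; `eval_poly` is an ad-hoc helper name for this illustration.

```python
def eval_poly(c, x):
    """Evaluate the polynomial whose coefficients c are ordered from
    the highest power down to the constant term (Horner's scheme)."""
    result = 0
    for coeff in c:
        result = result * x + coeff
    return result

f = (7, 3, -2)   # f(x) = 7x^2 + 3x - 2
g = (1, 0, 6)    # g(x) = x^2 + 6

# Adding the coefficient vectors gives the coefficient vector of f + g.
h = tuple(a + b for a, b in zip(f, g))
print(h)  # (8, 3, 4), i.e. 8x^2 + 3x + 4

# The encoded sum evaluates like the sum of the polynomials, e.g. at x = 2.
print(eval_poly(h, 2) == eval_poly(f, 2) + eval_poly(g, 2))  # True
```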

So far we have allowed all real numbers as coefficients for polynomials. We can also consider polynomials whose coefficients are elements of ${\displaystyle \mathbb {Q} }$. Accordingly, the entries of the corresponding vector are rational numbers. Polynomials of degree ${\displaystyle n}$ with rational coefficients thus correspond to vectors from the vector space ${\displaystyle \mathbb {Q} ^{n+1}}$. In fact, instead of ${\displaystyle \mathbb {R} }$ or ${\displaystyle \mathbb {Q} }$, any field is allowed.

## General vector spaces in mathematics

We have found that we can calculate with polynomials of degree ${\displaystyle n}$ in the same way as with vectors of ${\displaystyle K^{n+1}}$. Thus, the set of polynomials of degree ${\displaystyle n}$ has a similar structure to ${\displaystyle K^{n+1}}$. However, when considering all polynomials, that is, polynomials of any degree, we reach the limits of the notion ${\displaystyle K^{n}}$: in this set, the polynomials can have arbitrarily large exponents:

${\displaystyle {\begin{aligned}p_{1}(x)&=x^{\color {OliveGreen}1}\\p_{2}(x)&=x^{\color {OliveGreen}2}+x^{1}\\p_{3}(x)&=x^{\color {OliveGreen}3}+x^{2}+x^{1}\\&\vdots \end{aligned}}}$

To describe this set by tuples, we would need infinitely many entries. The space of all polynomials has infinitely many dimensions, while in ${\displaystyle K^{n}}$ we are limited to ${\displaystyle n}$ dimensions. Thus the set of all polynomials cannot be expressed as a set ${\displaystyle K^{n}}$. Nevertheless, polynomials and tuples have a common structure, as we have already seen. This allows a further step of abstraction: by capturing this common structure in a definition, we can talk about tuples as well as polynomials and about other sets with this structure.
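One concrete way to handle polynomials of unbounded degree is to use coefficient lists of varying length, padding the shorter list with zeros before adding. The helper `add_polys` below is an illustrative sketch (coefficients ordered from the constant term upward), not part of any library.

```python
def add_polys(p, q):
    """Add two polynomials given as coefficient lists ordered from the
    constant term upward; the lists may have different lengths."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))  # pad the shorter list with zeros
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# p1(x) = x  and  p3(x) = x + x^2 + x^3  (from the list above)
p1 = [0, 1]
p3 = [0, 1, 1, 1]
print(add_polys(p1, p3))  # [0, 2, 1, 1], i.e. 2x + x^2 + x^3
```

Since no single length bounds all such lists, no fixed ${\displaystyle K^{n}}$ can hold every polynomial, which mirrors the dimension argument in the text.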

What is this common structure? The commonality of polynomials and of tuples is that they can be added and scaled and that both operations behave similarly on both sets. This is the common structure that vector spaces have: vectors are objects that can be added and scaled.

We have noted a structural difference between ${\displaystyle K^{n}}$ and the vector space of all polynomials. However, they have in common that their elements can be added and scaled. Thus it seems natural to consider this property of vectors as the defining property of every vector space.

Up to now we have not considered which calculation rules apply to the addition and scalar multiplication of vectors in general vector spaces. In ${\displaystyle \mathbb {R} }$ we have the associative, commutative and distributive laws, and we know neutral and inverse elements with respect to addition and multiplication. As we have seen above, arithmetic in ${\displaystyle \mathbb {R} ^{n}}$ can be traced back to arithmetic in ${\displaystyle \mathbb {R} }$. Accordingly, certain calculation rules of the real numbers transfer to the vector space ${\displaystyle \mathbb {R} ^{n}}$, and analogously those of every field ${\displaystyle K}$ to ${\displaystyle K^{n}}$.

## Deriving the definition of a vector space

The addition, scalar multiplication and all associated arithmetic laws provide the formal definition of the vector space. The starting point of our description of a vector space is a set ${\displaystyle V}$ containing all vectors of a vector space. In order for our vector space ${\displaystyle V}$ to contain at least one vector, we require that ${\displaystyle V}$ has to be non-empty. We have seen that the essential structure of a vector space is given by the arithmetic operations performed on it. So we need to formally describe addition and scalar multiplication on a vector space.

### The additive structure of a vector space

We have already required that a vector space ${\displaystyle V}$ should be a non-empty set. Now we define via axioms what properties its additive structure must have. First, we note that an addition of vectors is an inner operation ${\displaystyle \boxplus :V\times V\to V}$. So it is a map where two vectors are mapped to another vector. The function value is the sum of the two input vectors.

We denote this map with the symbol ${\displaystyle \boxplus }$. So ${\displaystyle \boxplus (v,w)}$ is the sum of the two vectors ${\displaystyle v}$ and ${\displaystyle w}$. The notation ${\displaystyle \boxplus (v,w)}$ is analogous to the notation ${\displaystyle f(v,w)}$, where instead of "${\displaystyle f}$" we write the symbol "${\displaystyle \boxplus }$". Instead of the notation ${\displaystyle \boxplus (v,w)}$ the so-called infix notation ${\displaystyle v\boxplus w}$ is usually used, which we want to use in the following.

We use here the operation sign "${\displaystyle \boxplus }$" to better distinguish the vector addition from the addition of numbers "${\displaystyle +}$", which we can at first consider independently. In most textbooks, the symbol "${\displaystyle +}$" is also used for vector addition; whether the addition of vectors or of numbers is meant must then be inferred from the respective context. For convenience, we will later also use the symbol "${\displaystyle +}$" instead of "${\displaystyle \boxplus }$".

To show that the set ${\displaystyle V}$ is provided with an operation "${\displaystyle \boxplus }$", we write ${\displaystyle (V,\boxplus )}$. However, in order for us to consider "${\displaystyle \boxplus }$" as an addition, this operation must satisfy certain characteristic properties that we already know from the addition of numbers. These are:

1. ${\displaystyle V}$ is closed under ${\displaystyle \boxplus }$. That means, the sum of two vectors again yields a well-defined vector:
${\displaystyle \forall v,w\in V:v\boxplus w\in V}$
2. The vector addition is commutative (${\displaystyle \boxplus }$ satisfies the commutative law):
${\displaystyle \forall v,w\in V:v\boxplus w=w\boxplus v}$
3. The vector addition is associative (${\displaystyle \boxplus }$ satisfies the associative law):
${\displaystyle \forall v,w,z\in V:(v\boxplus w)\boxplus z=v\boxplus (w\boxplus z)}$
4. The vector addition has a neutral element. This means that there is at least one vector ${\displaystyle e\in V}$ for which
${\displaystyle \forall v\in V:v\boxplus e=e\boxplus v=v}$

Later we will show that it already follows from the other axioms that every vector space has exactly one neutral element. This neutral element ${\displaystyle e}$ is called the zero vector. For the zero vector from the vector space ${\displaystyle V}$ we write "${\displaystyle 0_{V}}$". If it is clear which vector space the zero vector comes from, then we write down "${\displaystyle 0}$".

5. For every vector ${\displaystyle v}$ there exists at least one additive inverse element ${\displaystyle i\in V}$. For the vector ${\displaystyle i}$ inverse to ${\displaystyle v}$ we have that:
${\displaystyle v\boxplus i=i\boxplus v=e}$

This means that the addition of every vector with its (additive) inverse must yield the neutral element ${\displaystyle e}$ or, in other words, the zero vector. We will show later that the inverse vector ${\displaystyle i}$ is unique. So for every vector ${\displaystyle v}$ there is exactly one inverse vector ${\displaystyle i}$ to it. We call this vector inverse or negative to ${\displaystyle v}$ and usually write "${\displaystyle -v}$" for it.

A set with an operation satisfying the above five axioms is also called an abelian group.
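These axioms can at least be spot-checked numerically. The following Python sketch tests them on random integer 3-tuples; this is only a sanity check on samples, not a proof. Closure holds automatically here, since sums of integer tuples are again integer tuples, and the helper names `add` and `neg` are ad-hoc choices.

```python
import random

def add(v, w):
    """Componentwise addition of 3-tuples (vectors of R^3)."""
    return tuple(x + y for x, y in zip(v, w))

def neg(v):
    """Candidate additive inverse: negate every component."""
    return tuple(-x for x in v)

zero = (0, 0, 0)  # the zero vector, candidate neutral element

random.seed(0)  # reproducible samples
for _ in range(100):
    v, w, z = (tuple(random.randint(-9, 9) for _ in range(3)) for _ in range(3))
    assert add(v, w) == add(w, v)                  # commutative law
    assert add(add(v, w), z) == add(v, add(w, z))  # associative law
    assert add(v, zero) == v                       # neutral element
    assert add(v, neg(v)) == zero                  # inverse elements
print("all group axioms hold on the random samples")
```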

### The scalar multiplication

We have already defined which properties the addition of vectors must fulfil. The scalar multiplication of vectors is still missing. So that we can distinguish the scalar multiplication from the ordinary multiplication of numbers, we first use the symbol "${\displaystyle \boxdot }$" for it. In textbooks, the symbol "${\displaystyle \cdot }$" is used instead of "${\displaystyle \boxdot }$", or the dot is even omitted completely; which operation is meant then follows from the context. We will use this notation later. The scalar multiplication maps a number (a scaling factor) and a vector to another vector.

${\displaystyle {\color {OliveGreen}\underbrace {\rho } _{\text{scaling factor}}}\boxdot {\color {NavyBlue}\underbrace {v} _{\text{initial vector}}}={\color {NavyBlue}\underbrace {w} _{\text{scaled vector}}}}$

The notation ${\displaystyle \rho \boxdot v}$ means that ${\displaystyle v}$ is stretched (or compressed) by ${\displaystyle \rho }$. It is natural to take the scalar ${\displaystyle \rho \in \mathbb {R} }$. However, we can generalize this further: all sets in which one can add and multiply similarly to the real numbers come into question as the basic set of scaling factors. Such a set is called a field in mathematics.

The properties of the scalar multiplication "${\displaystyle \boxdot }$" are similar to those of the multiplication of numbers. We now want to define scalar multiplication formally by axioms. As with the addition, a non-empty set ${\displaystyle V}$ is the starting point of the definition. In addition, we need a field ${\displaystyle K}$. The scalar multiplication is an outer operation ${\displaystyle \boxdot :K\times V\rightarrow V}$ satisfying the following properties:

1. scalar distributive law:
${\displaystyle \forall \lambda ,\rho \in K\ \forall v\in V:(\lambda +\rho )\boxdot v=(\lambda \boxdot v)\boxplus (\rho \boxdot v)}$
2. vectorial distributive law:
${\displaystyle \forall \lambda \in K\ \forall v,w\in V:\lambda \boxdot (v\boxplus w)=(\lambda \boxdot v)\boxplus (\lambda \boxdot w)}$
3. associative law for scalars:
${\displaystyle \forall \lambda ,\rho \in K\ \forall v\in V:(\lambda \cdot \rho )\boxdot v=\lambda \boxdot (\rho \boxdot v)}$
4. Let ${\displaystyle 1\in K}$ be the neutral element of the multiplication in the field ${\displaystyle K}$. Then, ${\displaystyle 1}$ is also the neutral element of scalar multiplication:
${\displaystyle \forall v\in V:1\boxdot v=v}$

In order to be able to scale vectors, we also need a field ${\displaystyle K}$ in the definition of a vector space. This field contains the scaling factors. Therefore, vector spaces ${\displaystyle V}$ are always defined over a field ${\displaystyle K}$. We say "${\displaystyle V}$ is a vector space over ${\displaystyle K}$" or briefly "${\displaystyle V}$ is a ${\displaystyle K}$-vector space" to express that the scaling factors for ${\displaystyle V}$ come from ${\displaystyle K}$.

### Definition of a vector space

We can write down our considerations in a compressed way to get the formal definition of a vector space:

Definition (vector space)

Let ${\displaystyle V}$ be a non-empty set with an inner operation ${\displaystyle \boxplus :V\times V\to V}$ (of vector addition) and an outer operation ${\displaystyle \boxdot :K\times V\to V}$ (of scalar multiplication). The set ${\displaystyle V}$ with these two operations is called vector space over the field ${\displaystyle K}$ or alternatively ${\displaystyle K}$-vector space if the following axioms hold:

• ${\displaystyle V}$ together with the operation ${\displaystyle \boxplus }$ forms an abelian group. That is, the following axioms are satisfied:
1. associative law: For all ${\displaystyle v,w,z\in V}$ we have that: ${\displaystyle v\boxplus (w\boxplus z)=(v\boxplus w)\boxplus z}$.
2. commutative law: For all ${\displaystyle v,w\in V}$ we have that: ${\displaystyle v\boxplus w=w\boxplus v}$.
3. Existence of a neutral element: There is an element ${\displaystyle 0\in V}$ such that for all ${\displaystyle v\in V}$ we have that: ${\displaystyle v\boxplus 0=v}$. This vector ${\displaystyle 0}$ is called neutral element of addition or zero vector.
4. Existence of an inverse element: To every ${\displaystyle v\in V}$ there exists an element ${\displaystyle y\in V}$ such that we have ${\displaystyle v\boxplus y=0}$. The element ${\displaystyle y}$ is called inverse element to ${\displaystyle v}$. Instead of ${\displaystyle y}$ we also write ${\displaystyle -v}$.
• In addition, the following axioms of scalar multiplication ${\displaystyle \boxdot }$ must be satisfied:
1. Scalar distributive law: For all ${\displaystyle \lambda ,\rho \in K}$ and all ${\displaystyle v\in V}$ we have that: ${\displaystyle (\lambda +\rho )\boxdot v=(\lambda \boxdot v)\boxplus (\rho \boxdot v)}$.
2. Vector distributive law: For all ${\displaystyle \lambda \in K}$ and all ${\displaystyle v,w\in V}$ we have that: ${\displaystyle \lambda \boxdot (v\boxplus w)=(\lambda \boxdot v)\boxplus (\lambda \boxdot w)}$
3. Associative law for scalars: For all ${\displaystyle \lambda ,\rho \in K}$ and all ${\displaystyle v\in V}$ it holds that: ${\displaystyle (\lambda \cdot \rho )\boxdot v=\lambda \boxdot (\rho \boxdot v)}$.
4. Neutral element of scalar multiplication: For all ${\displaystyle v\in V}$ and for ${\displaystyle 1\in K}$ (the neutral element of multiplication in ${\displaystyle K}$) we have that: ${\displaystyle 1\boxdot v=v}$. The 1 is called neutral element of scalar multiplication.

Instead of "${\displaystyle V}$" one often writes "${\displaystyle (V,\boxplus ,\boxdot )}$". The latter notation makes clear that the set ${\displaystyle V}$ is equipped with the operations ${\displaystyle \boxplus }$ and ${\displaystyle \boxdot }$.
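As a sanity check, the four axioms of scalar multiplication can be tested numerically for the ${\displaystyle \mathbb {Q} }$-vector space ${\displaystyle \mathbb {Q} ^{4}}$, using exact rational arithmetic so that equality comparisons are reliable. This is only a spot check on random samples, not a proof, and the helper names `vadd` and `smul` are ad-hoc choices for this sketch.

```python
from fractions import Fraction
import random

def vadd(v, w):
    """Vector addition in Q^4, componentwise."""
    return tuple(a + b for a, b in zip(v, w))

def smul(s, v):
    """Scalar multiplication: scale every component of v by s."""
    return tuple(s * a for a in v)

random.seed(1)  # reproducible samples

def rand_scalar():
    """A random element of the field Q."""
    return Fraction(random.randint(-5, 5), random.randint(1, 5))

def rand_vec():
    """A random vector of Q^4."""
    return tuple(rand_scalar() for _ in range(4))

for _ in range(50):
    l, r = rand_scalar(), rand_scalar()
    v, w = rand_vec(), rand_vec()
    assert smul(l + r, v) == vadd(smul(l, v), smul(r, v))       # scalar distributive law
    assert smul(l, vadd(v, w)) == vadd(smul(l, v), smul(l, w))  # vector distributive law
    assert smul(l * r, v) == smul(l, smul(r, v))                # associative law for scalars
    assert smul(Fraction(1), v) == v                            # neutral element 1
print("all four scalar-multiplication axioms hold on the samples")
```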

Hint

We use the symbols "${\displaystyle \boxplus }$" and "${\displaystyle \boxdot }$" to distinguish them from addition "${\displaystyle +}$" and multiplication "${\displaystyle \cdot }$". In the literature this distinction is often not made and from the context it becomes clear whether for example "${\displaystyle +}$" means an addition of numbers or of vectors.