Sine and cosine – Serlo


In this chapter we introduce the two trigonometric functions sine and cosine. They are the most important trigonometric functions and are used in geometry and trigonometry for triangle calculations. Waves such as electromagnetic waves and harmonic oscillations can be described by sine and cosine functions, so they are also omnipresent in physics.

Definition via unit circle

There are several ways to define the sine and cosine. The most visually accessible one uses the unit circle. Here, a point ${\displaystyle P(x,y)}$ is considered that is located on a circle of radius ${\displaystyle r=1}$ around the origin. The line segment from the origin to ${\displaystyle P(x,y)}$ encloses the angle ${\displaystyle \theta }$ with the ${\displaystyle x}$-axis:

The angle ${\displaystyle \theta }$ uniquely determines where the point ${\displaystyle P(x,y)}$ is located. Thus the ${\displaystyle x}$-coordinate and the ${\displaystyle y}$-coordinate can be described by a function depending on ${\displaystyle \theta }$. We call these functions ${\displaystyle x(\theta )}$ and ${\displaystyle y(\theta )}$ the cosine function ${\displaystyle \cos(\theta )}$ and the sine function ${\displaystyle \sin(\theta )}$, respectively:

In the following we take ${\displaystyle x}$ as the angle and write ${\displaystyle \sin(x)}$ instead of ${\displaystyle \sin(\theta )}$ and ${\displaystyle \cos(x)}$ instead of ${\displaystyle \cos(\theta )}$. This results in the following definition:

Definition (Definition of sine and cosine on the unit circle)

Let ${\displaystyle P}$ be the point on the unit circle whose position vector with the horizontal coordinate axis encloses the angle ${\displaystyle x}$. The coordinates of ${\displaystyle P}$ are then defined as ${\displaystyle (\cos(x),\sin(x))}$. Here ${\displaystyle \cos(x)}$ is called the cosine of ${\displaystyle x}$ and ${\displaystyle \sin(x)}$ the sine of ${\displaystyle x}$.
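The unit-circle definition can be checked numerically. The following Python sketch (the helper name `point_on_unit_circle` is our own) computes the coordinates of ${\displaystyle P}$ and verifies that they satisfy ${\displaystyle \cos(x)^{2}+\sin(x)^{2}=1}$, as any point on the unit circle must:

```python
import math

# Sketch (helper name is our own): the point P on the unit circle whose
# position vector encloses the angle x (in radians) with the x-axis.
def point_on_unit_circle(x):
    return (math.cos(x), math.sin(x))

# At x = 0 the point is (1, 0); at x = pi/2 it is (0, 1).
assert math.isclose(point_on_unit_circle(0.0)[0], 1.0)
assert math.isclose(point_on_unit_circle(math.pi / 2)[1], 1.0)

# Because P lies on the unit circle, cos(x)^2 + sin(x)^2 = 1 for every x.
for x in (0.3, 1.0, 2.5):
    cx, sx = point_on_unit_circle(x)
    assert math.isclose(cx ** 2 + sx ** 2, 1.0)
```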

Graph of the sine and cosine function

The following animation shows how the graphs of the sine and cosine functions are constructed step by step:

This gives the following graph for the sine function:

For the cosine function we get:

Definition via exponential function

Representation by the complex exponential function

The sine and cosine function can also be defined as the sum of certain complex exponential functions. With this representation, properties of the sine and cosine can be demonstrated in a particularly elegant way.

Definition (Sine and cosine via complex exponential function)

We define the functions ${\displaystyle \sin }$ (sine) and ${\displaystyle \cos }$ (cosine) by

{\displaystyle {\begin{aligned}&\sin :\mathbb {R} \to \mathbb {R} :x\mapsto {\frac {1}{2\mathrm {i} }}\left(\mathrm {e} ^{\mathrm {i} x}-\mathrm {e} ^{-\mathrm {i} x}\right)\\[0.3em]&\cos :\mathbb {R} \to \mathbb {R} :x\mapsto {\frac {1}{2}}\left(\mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}\right)\end{aligned}}}

These functions are well-defined: for every real number ${\displaystyle x}$ the complex number ${\displaystyle \mathrm {e} ^{-\mathrm {i} x}}$ is the complex conjugate of ${\displaystyle \mathrm {e} ^{\mathrm {i} x}}$. Thus ${\displaystyle \mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}}$ is a real number, and we have ${\displaystyle \cos(x)={\tfrac {1}{2}}\left(\mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}\right)\in \mathbb {R} }$. Analogously, one can show that ${\displaystyle \sin(x)={\tfrac {1}{2\mathrm {i} }}\left(\mathrm {e} ^{\mathrm {i} x}-\mathrm {e} ^{-\mathrm {i} x}\right)\in \mathbb {R} }$.

Deriving the exponential definition

One can show that ${\displaystyle e^{\mathrm {i} \theta }}$ is the point on the unit circle whose position vector with the ${\displaystyle x}$ axis encloses the angle ${\displaystyle \theta }$:

The real part of the complex number ${\displaystyle e^{\mathrm {i} \theta }}$ is ${\displaystyle \cos(\theta )}$, and the imaginary part is ${\displaystyle \sin(\theta )}$. Hence ${\displaystyle e^{\mathrm {i} \theta }=\cos(\theta )+\sin(\theta )\mathrm {i} }$. For ${\displaystyle e^{-\mathrm {i} \theta }}$ we consider the angle ${\displaystyle -\theta }$; the point ${\displaystyle e^{-\mathrm {i} \theta }}$ is the reflection of ${\displaystyle e^{\mathrm {i} \theta }}$ across the real axis:

Thus the real part of ${\displaystyle e^{-\mathrm {i} \theta }}$ is the same as that of ${\displaystyle e^{\mathrm {i} \theta }}$, namely ${\displaystyle \cos(\theta )}$. The imaginary part, however, is that of ${\displaystyle e^{\mathrm {i} \theta }}$ multiplied by ${\displaystyle -1}$ and thus equal to ${\displaystyle -\sin(\theta )}$. We get ${\displaystyle e^{-\mathrm {i} \theta }=\cos(\theta )-\sin(\theta )\mathrm {i} }$. So we have:

{\displaystyle {\begin{aligned}e^{\mathrm {i} \theta }&=\cos(\theta )+\sin(\theta )\mathrm {i} \\[0.3em]e^{-\mathrm {i} \theta }&=\cos(\theta )-\sin(\theta )\mathrm {i} \end{aligned}}}

By adding both equations we get:

${\displaystyle e^{\mathrm {i} \theta }+e^{-\mathrm {i} \theta }=2\cos(\theta )\implies \cos(\theta )={\frac {1}{2}}\left(e^{\mathrm {i} \theta }+e^{-\mathrm {i} \theta }\right)}$

And by subtracting the two equations we get:

${\displaystyle e^{\mathrm {i} \theta }-e^{-\mathrm {i} \theta }=2\sin(\theta )\mathrm {i} \implies \sin(\theta )={\frac {1}{2\mathrm {i} }}\left(e^{\mathrm {i} \theta }-e^{-\mathrm {i} \theta }\right)}$

Thus we have derived the two definitions ${\displaystyle \sin(x)={\tfrac {1}{2\mathrm {i} }}\left(\mathrm {e} ^{\mathrm {i} x}-\mathrm {e} ^{-\mathrm {i} x}\right)}$ and ${\displaystyle \cos(x)={\tfrac {1}{2}}\left(\mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}\right)}$. This derivation is illustrated again in the following figure:
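This derivation can be spot-checked numerically with Python's `cmath` module; the sample angle below is arbitrary:

```python
import cmath
import math

theta = 1.2  # an arbitrary sample angle

# Euler's formula: e^{i*theta} lies on the unit circle at angle theta,
# so its real part is cos(theta) and its imaginary part is sin(theta).
z = cmath.exp(1j * theta)
assert math.isclose(z.real, math.cos(theta))
assert math.isclose(z.imag, math.sin(theta))

# The representations derived above by adding/subtracting the two equations:
cos_theta = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_theta = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2 * 1j)
assert math.isclose(cos_theta.real, math.cos(theta))
assert math.isclose(sin_theta.real, math.sin(theta))
```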

Series expansion of sine and cosine

Definition as a series

This animation illustrates the definition of the sine function by a series. The higher the number ${\displaystyle N}$, the more summands are used in the series definition. For ${\displaystyle N=2}$, for instance, in addition to the sine function the cubic polynomial ${\displaystyle \sum _{k=0}^{2-1}{\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}=x-{\tfrac {x^{3}}{6}}}$ is plotted as well.

Another mathematically precise definition that does not require geometric notions is the so-called series representation, in which the sine and cosine are defined as series. The series representation is less visual than the definition via the unit circle, but some properties of the sine and cosine can be proved more easily with it. It can also be used to extend the sine and cosine to the complex numbers.

Definition (sine and cosine)

We define the functions ${\displaystyle \sin }$ (sine) and ${\displaystyle \cos }$ (cosine) by

{\displaystyle {\begin{aligned}&\sin :\mathbb {R} \to \mathbb {R} ,x\mapsto \sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k+1)!}}x^{2k+1}\\[0.3em]&\cos :\mathbb {R} \to \mathbb {R} ,x\mapsto \sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k)!}}x^{2k}\end{aligned}}}
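A short numerical sketch (function names are our own) compares partial sums of these series with the library implementations of sine and cosine:

```python
import math

def sin_series(x, n_terms=20):
    # Partial sum of sum_{k>=0} (-1)^k / (2k+1)! * x^(2k+1)
    return sum((-1) ** k / math.factorial(2 * k + 1) * x ** (2 * k + 1)
               for k in range(n_terms))

def cos_series(x, n_terms=20):
    # Partial sum of sum_{k>=0} (-1)^k / (2k)! * x^(2k)
    return sum((-1) ** k / math.factorial(2 * k) * x ** (2 * k)
               for k in range(n_terms))

# With 20 summands the partial sums already agree with math.sin/math.cos
# to within floating-point precision for moderate arguments.
for x in (0.0, 1.0, -2.5):
    assert math.isclose(sin_series(x), math.sin(x), abs_tol=1e-12)
    assert math.isclose(cos_series(x), math.cos(x), abs_tol=1e-12)
```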

Well-definedness

We have to prove that our series representation of the sine and cosine function is well-defined. That is, we have to show that for all ${\displaystyle x\in \mathbb {R} }$ the series ${\displaystyle \sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}$ and ${\displaystyle \sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k)!}}x^{2k}}$ converge to a real number.

Theorem

For all real numbers ${\displaystyle x}$ the series ${\displaystyle \sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}$ and ${\displaystyle \sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k)!}}x^{2k}}$ converge.

Proof

We prove the theorem explicitly for the series ${\displaystyle \sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}$ of the sine function. The proof for the series of the cosine function can be done analogously. For ${\displaystyle x=0}$ we first find:

{\displaystyle {\begin{aligned}&\sum _{k=0}^{\infty }{{\frac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}\\[0.3em]&\ {\color {OliveGreen}\left\downarrow \ x=0\right.}\\[0.3em]=\ &\sum _{k=0}^{\infty }{{\frac {(-1)^{k}}{(2k+1)!}}0^{2k+1}}\\[0.3em]=\ &\sum _{k=0}^{\infty }{0}=0\end{aligned}}}

For ${\displaystyle x=0}$ the series therefore converges. For ${\displaystyle x\neq 0}$ we apply the ratio test. For this, let ${\displaystyle a_{k}:={\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}$ for all ${\displaystyle k\in \mathbb {N} _{0}}$, so that we have ${\displaystyle \sum _{k=0}^{\infty }{{\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}=\sum _{k=0}^{\infty }{a_{k}}}$. Then:

{\displaystyle {\begin{aligned}&\lim _{k\to \infty }{\left|{\frac {a_{k+1}}{a_{k}}}\right|}\\[0.3em]=\ &\lim _{k\to \infty }{\left|-{\frac {x^{2(k+1)+1}\cdot (2k+1)!}{x^{2k+1}\cdot (2(k+1)+1)!}}\right|}\\[0.3em]=\ &\lim _{k\to \infty }{\left|-{\frac {x^{2k+3}\cdot (2k+1)!}{x^{2k+1}\cdot (2k+3)!}}\right|}\\[0.3em]=\ &\lim _{k\to \infty }{\left|{\frac {x^{2}}{(2k+2)(2k+3)}}\right|}=0\end{aligned}}}

Since ${\displaystyle \lim _{k\to \infty }{\left|{\tfrac {a_{k+1}}{a_{k}}}\right|}=0}$ the series converges according to the ratio test.
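The behaviour of the quotients can be illustrated numerically; the helper `a` below implements the summands ${\displaystyle a_{k}}$ for a sample value of ${\displaystyle x}$:

```python
import math

x = 10.0  # even for a large argument the ratios eventually shrink

def a(k):
    # k-th summand of the sine series for the fixed x above
    return (-1) ** k / math.factorial(2 * k + 1) * x ** (2 * k + 1)

# The proof shows |a_{k+1}/a_k| = x^2 / ((2k+2)(2k+3)). For small k this
# ratio may exceed 1 (the terms first grow), but it tends to 0 as k grows.
for k in (0, 5, 50):
    ratio = abs(a(k + 1) / a(k))
    assert math.isclose(ratio, x ** 2 / ((2 * k + 2) * (2 * k + 3)))
```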

Equivalence of exponential and series definition

We have learned several definitions of the sine and cosine function. We have already established a connection between the exponential representation and the definition on the unit circle. Now we need to show that the exponential and series definitions are equivalent to each other.

Theorem

For all ${\displaystyle x\in \mathbb {R} }$ we have:

{\displaystyle {\begin{aligned}\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k+1)!}}x^{2k+1}&={\frac {1}{2\mathrm {i} }}\left(\mathrm {e} ^{\mathrm {i} x}-\mathrm {e} ^{-\mathrm {i} x}\right)\\[0.5em]\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k)!}}x^{2k}&={\frac {1}{2}}\left(\mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}\right)\end{aligned}}}

Thus it does not matter whether the sine or cosine function is defined via its series representation or via its exponential representation.

Proof

We already know from the chapter on the exponential function that it has the series representation ${\displaystyle \exp(x)=\sum _{k=0}^{\infty }{\tfrac {x^{k}}{k!}}}$. If we now substitute ${\displaystyle \mathrm {i} {\tilde {x}}}$ for ${\displaystyle x}$ in this series, we get:

{\displaystyle {\begin{aligned}\mathrm {e} ^{\mathrm {i} {\tilde {x}}}&=\sum _{k=0}^{\infty }{\frac {(\mathrm {i} {\tilde {x}})^{k}}{k!}}\\[0.5em]&\ {\color {OliveGreen}\left\downarrow \ {\text{absolute convergence}}\implies {\text{split the series}}\right.}\\[0.5em]&=\sum _{l=0}^{\infty }{\frac {(\mathrm {i} {\tilde {x}})^{2l}}{(2l)!}}+\sum _{l=0}^{\infty }{\frac {(\mathrm {i} {\tilde {x}})^{2l+1}}{(2l+1)!}}\\[0.5em]&\ {\color {OliveGreen}\left\downarrow \ \mathrm {i} ^{2l}=(-1)^{l}{\text{and }}\mathrm {i} ^{2l+1}=\mathrm {i} \cdot (-1)^{l}\right.}\\[0.5em]&=\sum _{l=0}^{\infty }{\frac {(-1)^{l}}{(2l)!}}{\tilde {x}}^{2l}+\mathrm {i} \sum _{l=0}^{\infty }{\frac {(-1)^{l}}{(2l+1)!}}{\tilde {x}}^{2l+1}\end{aligned}}}

Now we plug ${\displaystyle -\mathrm {i} {\tilde {x}}}$ into the series representation of the exponential function:

{\displaystyle {\begin{aligned}\mathrm {e} ^{-\mathrm {i} {\tilde {x}}}&=\sum _{k=0}^{\infty }{\frac {(-\mathrm {i} {\tilde {x}})^{k}}{k!}}\\[0.5em]&\ {\color {OliveGreen}\left\downarrow \ {\text{absolute convergence}}\implies {\text{split the series}}\right.}\\[0.5em]&=\sum _{l=0}^{\infty }{\frac {(-\mathrm {i} {\tilde {x}})^{2l}}{(2l)!}}+\sum _{l=0}^{\infty }{\frac {(-\mathrm {i} {\tilde {x}})^{2l+1}}{(2l+1)!}}\\[0.5em]&\ {\color {OliveGreen}\left\downarrow \ (-\mathrm {i} )^{2l}=(-1)^{l}{\text{and }}(-\mathrm {i} )^{2l+1}=-\mathrm {i} \cdot (-1)^{l}\right.}\\[0.5em]&=\sum _{l=0}^{\infty }{\frac {(-1)^{l}}{(2l)!}}{\tilde {x}}^{2l}-\mathrm {i} \sum _{l=0}^{\infty }{\frac {(-1)^{l}}{(2l+1)!}}{\tilde {x}}^{2l+1}\end{aligned}}}

If we write ${\displaystyle \alpha (x)=\sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k)!}}x^{2k}}$ and ${\displaystyle \beta (x)=\sum _{k=0}^{\infty }{\tfrac {(-1)^{k}}{(2k+1)!}}x^{2k+1}}$ then, renaming ${\displaystyle {\tilde {x}}}$ back to ${\displaystyle x}$, we have shown that

{\displaystyle {\begin{aligned}\mathrm {e} ^{\mathrm {i} x}&=\alpha (x)+\mathrm {i} \beta (x)\\\mathrm {e} ^{-\mathrm {i} x}&=\alpha (x)-\mathrm {i} \beta (x)\\\end{aligned}}}

For the difference, we get

${\displaystyle \mathrm {e} ^{\mathrm {i} x}-\mathrm {e} ^{-\mathrm {i} x}=\alpha (x)+\mathrm {i} \beta (x)-(\alpha (x)-\mathrm {i} \beta (x))=2\mathrm {i} \cdot \beta (x)}$

So:

${\displaystyle \sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k+1)!}}x^{2k+1}=\beta (x)={\tfrac {1}{2\mathrm {i} }}\left(\mathrm {e} ^{\mathrm {i} x}-\mathrm {e} ^{-\mathrm {i} x}\right)}$

And analogously:

${\displaystyle \mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}=\alpha (x)+\mathrm {i} \beta (x)+(\alpha (x)-\mathrm {i} \beta (x))=2\cdot \alpha (x)}$

Hence:

${\displaystyle \sum _{k=0}^{\infty }{\frac {(-1)^{k}}{(2k)!}}x^{2k}=\alpha (x)={\tfrac {1}{2}}\left(\mathrm {e} ^{\mathrm {i} x}+\mathrm {e} ^{-\mathrm {i} x}\right)}$
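The equivalence just proved can be spot-checked numerically; the following sketch (function names our own) evaluates both representations at a few sample points and confirms that they agree up to floating-point error:

```python
import cmath
import math

def sin_series(x, n_terms=25):
    # Partial sum of the series definition of sine
    return sum((-1) ** k / math.factorial(2 * k + 1) * x ** (2 * k + 1)
               for k in range(n_terms))

def cos_series(x, n_terms=25):
    # Partial sum of the series definition of cosine
    return sum((-1) ** k / math.factorial(2 * k) * x ** (2 * k)
               for k in range(n_terms))

# Compare with the exponential representations from the theorem.
for x in (0.0, 0.8, 2.0, -1.3):
    exp_sin = ((cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2 * 1j)).real
    exp_cos = ((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2).real
    assert math.isclose(sin_series(x), exp_sin, abs_tol=1e-12)
    assert math.isclose(cos_series(x), exp_cos, abs_tol=1e-12)
```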