Proving continuity – Serlo


Overview

There are several methods for proving continuity:

• Concatenation Theorems: If the function can be written as a concatenation of continuous functions, it's continuous by the Concatenation Theorems.
• Using the local nature of continuity: If a function looks like another well-known continuous function in a small neighborhood of a point, then it must also be continuous at this point.
• Considering the left- and right-sided limits: If one can show that the left- and right-sided limits of a function at some point both exist and equal the function value there, then the function is continuous at this point.
• Showing the sequence criterion: Using the sequence criterion means that the limit can be pulled into the function, i.e. we can consider the limit of the arguments. For a sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ of arguments with limit ${\displaystyle x_{0}}$, it must hold that ${\displaystyle \lim _{n\to \infty }f(x_{n})=f(x_{0})=f\left(\lim _{n\to \infty }x_{n}\right)}$.
• Showing the epsilon-delta criterion: For every ${\displaystyle \epsilon >0}$ it must be shown that there exists some ${\displaystyle \delta >0}$ such that for all arguments ${\displaystyle x}$ with a distance smaller than ${\displaystyle \delta }$ from the point ${\displaystyle x_{0}}$, the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ is satisfied.
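Before turning to rigorous proofs, the epsilon-delta condition can at least be probed numerically. The following sketch is an illustration only, not a proof; the helper name `check_eps_delta` and all sample values are our own choices. It samples points inside the δ-neighborhood of a point and tests the ε-inequality:

```python
import math

def check_eps_delta(f, x0, eps, delta, samples=1000):
    """Sample points with |x - x0| < delta and test |f(x) - f(x0)| < eps."""
    for i in range(1, samples + 1):
        # alternate between the left and right half of the neighborhood
        x = x0 + delta * (i / (samples + 1)) * (-1) ** i
        if abs(f(x) - f(x0)) >= eps:
            return False
    return True

# f(x) = sqrt(5 + x^2) passes the check at x0 = 1 with delta = eps/2
assert check_eps_delta(lambda x: math.sqrt(5 + x**2), 1.0, 0.1, 0.05)
```

A passing check cannot replace a proof; it merely fails to falsify continuity on the sampled points.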

Composition of continuous functions

Main article: Composition of continuous functions

General proof sketch

Following the concatenation theorems, every composition of continuous functions is again a continuous function. So if ${\displaystyle f:D\to \mathbb {R} }$ can be written as a concatenation of continuous functions, we can directly infer continuity of ${\displaystyle f}$. A corresponding proof could be of the following form:

Let ${\displaystyle f:\ldots }$ with ${\displaystyle f(x)=\ldots }$. The function ${\displaystyle f}$ is a concatenation of the following functions:

...List of continuous functions, which serve as building bricks for ${\displaystyle f}$ ...

Since ${\displaystyle f(x)=\ldots }$ (expression showing how ${\displaystyle f}$ is constructed out of those bricks), we know that ${\displaystyle f}$ is a concatenation of continuous functions and hence continuous as well.

In a lecture, this proof scheme may of course only be applied if the concatenation theorems have already been covered. In any case, it is a very efficient method to establish continuity of functions.
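For instance, the scheme can be instantiated for ${\displaystyle f(x)={\sqrt {5+x^{2}}}}$ (a sketch; the names ${\displaystyle q,t,w}$ for the building bricks are our own):

```latex
% building bricks, each continuous on its relevant domain:
%   q(x) = x^2,  t(y) = 5 + y,  w(z) = \sqrt{z}  (note 5 + x^2 \geq 5 > 0)
f(x) = \sqrt{5 + x^2} = (w \circ t \circ q)(x)
```

Since ${\displaystyle q}$, ${\displaystyle t}$ and ${\displaystyle w}$ are continuous, ${\displaystyle f}$ is a concatenation of continuous functions and hence continuous.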

Example Problem

Exercise (Epsilon-Delta proof for the continuity of the Square Root Function)

Show, using the epsilon-delta criterion, that the following function is continuous:

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,x\mapsto {\sqrt {5+x^{2}}}}$

How to get to the proof? (Epsilon-Delta proof for the continuity of the Square Root Function)

We need to show that for any given ${\displaystyle \epsilon >0}$, there is a ${\displaystyle \delta >0}$ such that all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-a|<\delta }$ satisfy the inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$. So let us take a look at the target inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$ and estimate the absolute value ${\displaystyle |f(x)-f(a)|}$ from above. We are able to control the term ${\displaystyle |x-a|}$. Therefore, we would like to get an upper bound for ${\displaystyle |f(x)-f(a)|}$ including the expression ${\displaystyle |x-a|}$. So we are looking for an inequality of the form

${\displaystyle |f(x)-f(a)|\leq K(x,a)\cdot |x-a|}$

Here, ${\displaystyle K(x,a)}$ is some expression depending on ${\displaystyle x}$ and ${\displaystyle a}$ . The second factor is smaller than ${\displaystyle \delta }$ and can be made arbitrarily small by a suitable choice of ${\displaystyle \delta }$ . Such a bound is constructed as follows:

{\displaystyle {\begin{aligned}|f(x)-f(a)|&=\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\text{expand with}}\left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|\right.}\\[0.3em]&={\frac {\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\cdot \left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}{\left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\geq 0\right.}\\[0.3em]&={\frac {\left|x^{2}-a^{2}\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\cdot |x-a|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ K(x,a):={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\right.}\\[0.3em]&=K(x,a)\cdot |x-a|\end{aligned}}}

Since ${\displaystyle |x-a|<\delta }$, we have:

${\displaystyle |f(x)-f(a)|=K(x,a)\cdot |x-a|<K(x,a)\cdot \delta }$

If we now choose ${\displaystyle \delta }$ small enough that ${\displaystyle K(x,a)\cdot \delta \leq \epsilon }$, then we obtain our target inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$. But ${\displaystyle K(x,a)}$ still depends on ${\displaystyle x}$, so ${\displaystyle \delta }$ would have to depend on ${\displaystyle x}$, too - and we required one choice of ${\displaystyle \delta }$ which is suitable for all ${\displaystyle x}$. Therefore, we need to get rid of the ${\displaystyle x}$-dependence. This is done by an estimate of the first factor of the form ${\displaystyle K(x,a)\leq {\tilde {K}}(a)}$:

{\displaystyle {\begin{aligned}K(x,a)&={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\text{triangle inequality}}\right.}\\[0.3em]&\leq {\frac {\left|x\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}+{\frac {\left|a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\frac {a}{b+c}}\leq {\frac {a}{b}}{\text{ for }}a,c\geq 0,b>0\right.}\\[0.3em]&\leq {\frac {\left|x\right|}{\sqrt {5+x^{2}}}}+{\frac {\left|a\right|}{\sqrt {5+a^{2}}}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\frac {|a|}{\sqrt {5+a^{2}}}}\leq 1{\text{, since }}{\sqrt {5+a^{2}}}\geq {\sqrt {a^{2}}}=|a|\right.}\\[0.3em]&\leq 2=:{\tilde {K}}(a)\end{aligned}}}

We even made ${\displaystyle {\tilde {K}}(a)}$ independent of ${\displaystyle a}$ , which would in fact not have been necessary. So we obtain the following inequality

${\displaystyle |f(x)-f(a)|\leq 2\cdot |x-a|<2\cdot \delta }$

We need the estimate ${\displaystyle 2\cdot \delta \leq \epsilon }$, in order to fulfill the target inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$ . The choice of ${\displaystyle \delta ={\tfrac {\epsilon }{2}}}$ is sufficient for that. So let us write down the proof:

Proof (Epsilon-Delta proof for the continuity of the Square Root Function)

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\sqrt {5+x^{2}}}}$. Let ${\displaystyle a\in \mathbb {R} }$ and an arbitrary ${\displaystyle \epsilon >0}$ be given. We choose ${\displaystyle \delta ={\tfrac {\epsilon }{2}}}$. For all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-a|<\delta }$ we have:

{\displaystyle {\begin{aligned}|f(x)-f(a)|&=\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\\[0.3em]&={\frac {\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\cdot \left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}{\left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}}\\[0.3em]&={\frac {\left|x^{2}-a^{2}\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\cdot |x-a|\\[0.3em]&\leq \left({\frac {\left|x\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}+{\frac {\left|a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\right)\cdot |x-a|\\[0.3em]&\leq \left({\frac {\left|x\right|}{\sqrt {5+x^{2}}}}+{\frac {\left|a\right|}{\sqrt {5+a^{2}}}}\right)\cdot |x-a|\\[0.3em]&\leq (1+1)\cdot |x-a|\\[0.3em]&\leq 2\cdot |x-a|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ |x-a|<\delta ={\frac {\epsilon }{2}}\right.}\\[0.3em]&<2\cdot {\frac {\epsilon }{2}}=\epsilon \end{aligned}}}

Hence, ${\displaystyle f}$ is a continuous function.
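The heart of the proof is the global estimate ${\displaystyle |f(x)-f(a)|\leq 2|x-a|}$. A quick numerical spot check of this bound (illustration only; the random sample points are our own choice):

```python
import math
import random

def f(x):
    # the function from the exercise
    return math.sqrt(5 + x**2)

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-100.0, 100.0)
    a = random.uniform(-100.0, 100.0)
    # the proof's estimate; a tiny tolerance absorbs floating-point error
    assert abs(f(x) - f(a)) <= 2 * abs(x - a) + 1e-9
```

In fact ${\displaystyle K(x,a)\leq 1}$ holds, since ${\displaystyle |x|+|a|\leq {\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}$, so the constant 2 is convenient rather than tight.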

Using the Local Nature of Continuity

If applicable, one can use the fact that continuity is a local property. Namely, if a function ${\displaystyle f}$ looks the same as another function ${\displaystyle g}$ around a certain point ${\displaystyle x}$, i.e. ${\displaystyle f(x)=g(x)}$ on some small neighborhood of ${\displaystyle x}$, and we know ${\displaystyle g}$ to be continuous, then ${\displaystyle f}$ must also be continuous at ${\displaystyle x}$. For example, consider the function ${\displaystyle f:\mathbb {R} \setminus \{0\}\to \mathbb {R} }$ with ${\displaystyle f(x)=1}$ for positive ${\displaystyle x}$ and ${\displaystyle f(x)=-1}$ for negative ${\displaystyle x}$. Now we choose an arbitrary positive number ${\displaystyle x_{0}}$. In a sufficiently small neighborhood of ${\displaystyle x_{0}}$, ${\displaystyle f}$ is constantly ${\displaystyle 1}$:

Since constant functions are continuous, ${\displaystyle f}$ is also continuous at the point ${\displaystyle x_{0}}$. Similarly one can show that ${\displaystyle f}$ is locally constant at negative numbers. Then ${\displaystyle f}$ is also continuous at negative numbers, and is therefore continuous on its entire domain ${\displaystyle \mathbb {R} \setminus \{0\}}$. In the proof we write:

"For every ${\displaystyle x_{0}\in \mathbb {R} \setminus \{0\}}$ there exists a neighborhood around ${\displaystyle x_{0}}$ where ${\displaystyle f}$ is either constantly ${\displaystyle 1}$ or constantly ${\displaystyle -1}$. Since constant functions are continuous, ${\displaystyle f}$ must also be continuous at the point ${\displaystyle x_{0}}$. Therefore, ${\displaystyle f}$ is continuous."

Such an argument can often be applied to functions that are defined by a case distinction. Our function ${\displaystyle f}$ is a good example of that. Explicitly, it is defined as:

${\displaystyle f:\mathbb {R} \setminus \{0\}\to \mathbb {R} :x\mapsto {\begin{cases}1&x>0\\-1&x<0\end{cases}}}$

However, the local nature of continuity cannot be used as an argument for every function defined by a case distinction! Let's consider the following function ${\displaystyle g}$:

${\displaystyle g:\mathbb {R} \to \mathbb {R} :x\mapsto {\begin{cases}x^{2}&x\geq 0\\|x|&x<0\end{cases}}}$

For all points that are not zero we can construct a proof like we've done earlier in this chapter to show that the function is continuous at these points. However, at the point ${\displaystyle x_{0}=0}$ we have to construct a different argument. For example, here we could consider the left- and right-sided limits.
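At ${\displaystyle x_{0}=0}$ the two branches of ${\displaystyle g}$ can be compared directly: from the right ${\displaystyle g(x)=x^{2}\to 0}$, from the left ${\displaystyle g(x)=|x|\to 0}$, and both one-sided limits agree with ${\displaystyle g(0)=0}$. A numerical sketch of this (illustration only; the sample points are our own choice):

```python
def g(x):
    # the case distinction from the text
    return x**2 if x >= 0 else abs(x)

# approach 0 from the right and from the left
right = [g(10.0**-k) for k in range(1, 8)]
left = [g(-10.0**-k) for k in range(1, 8)]

assert abs(right[-1] - g(0)) < 1e-6  # right-sided values tend to g(0) = 0
assert abs(left[-1] - g(0)) < 1e-6   # left-sided values tend to g(0) = 0
```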

Sequence Criterion

Main article: Sequential definition of continuity

Review: Sequence Criterion

Definition (Sequence criterion of continuity)

A function ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ is continuous if for all ${\displaystyle x\in D}$ and all sequences ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ with ${\displaystyle \forall n\in \mathbb {N} :x_{n}\in D}$ and ${\displaystyle \lim _{n\to \infty }x_{n}=x}$ we have:

${\displaystyle \lim _{n\to \infty }f(x_{n})=f\left(\lim _{n\to \infty }x_{n}\right)=f(x)}$

General Proof Structure

In order to prove continuity of a function at some ${\displaystyle x_{0}}$, we need to show that for each sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ of arguments with ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$, we have ${\displaystyle \lim _{n\to \infty }f(x_{n})=f(x_{0})}$. A proof of this could schematically look as follows:

Let ${\displaystyle f:\ldots }$ be a function defined by ${\displaystyle f(x)=\ldots }$ and ${\displaystyle x_{0}=\ldots }$. In addition, let ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ be any sequence of arguments satisfying ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$. Then, we have:

${\displaystyle \lim _{n\to \infty }f(x_{n})=\ldots =f(x_{0})}$

In order to prove continuity for the function ${\displaystyle f}$ (for all arguments in its domain of definition), we need to slightly adjust that scheme:

Let ${\displaystyle f:\ldots }$ be a function defined by ${\displaystyle f(x)=\ldots }$ and let ${\displaystyle x_{0}}$ be any element of the domain of definition for ${\displaystyle f}$. In addition, let ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ be any sequence of arguments satisfying ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$. Then, we have:

${\displaystyle \lim _{n\to \infty }f(x_{n})=\ldots =f(x_{0})}$

Example Problem

Show that the quadratic function ${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto x^{2}}$ is continuous.

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto x^{2}}$ and let ${\displaystyle x\in \mathbb {R} }$. Consider any sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ converging to ${\displaystyle x}$. We have

{\displaystyle {\begin{aligned}\lim _{n\to \infty }f(x_{n})&=\lim _{n\to \infty }x_{n}^{2}\\&=\lim _{n\to \infty }x_{n}\cdot x_{n}\\[0.5em]&{\color {OliveGreen}\left\downarrow \ \lim _{n\to \infty }a_{n}\cdot b_{n}=\lim _{n\to \infty }a_{n}\cdot \lim _{n\to \infty }b_{n}\right.}\\[0.5em]&=\left(\lim _{n\to \infty }x_{n}\right)\cdot \left(\lim _{n\to \infty }x_{n}\right)\\&{\color {OliveGreen}\left\downarrow \ {\text{assumption }}\lim _{n\to \infty }x_{n}=x\right.}\\[0.5em]&=x\cdot x=x^{2}=f(x)\end{aligned}}}

So we may pull the limit inside the function for the quadratic function, and hence it is continuous.
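The limit interchange can be illustrated with a concrete sequence of arguments (our own choice, purely for illustration), e.g. ${\displaystyle x_{n}=x+{\tfrac {(-1)^{n}}{n}}}$, which converges to ${\displaystyle x}$:

```python
def f(x):
    return x**2

x = 3.0
# a sequence of arguments converging to x = 3
xs = [x + (-1)**n / n for n in range(1, 10_001)]

# f(x_n) approaches f(x) = 9, as the sequence criterion predicts
assert abs(f(xs[-1]) - f(x)) < 1e-2
```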

Epsilon-Delta Criterion

Main article: Epsilon-delta definition of continuity

Review: Epsilon-Delta Criterion

Definition (Epsilon-Delta-definition of continuity)

A function ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ is continuous at ${\displaystyle x_{0}\in D}$, if and only if for any ${\displaystyle \epsilon >0}$ there is a ${\displaystyle \delta >0}$ , such that ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds for all ${\displaystyle x\in D}$ with ${\displaystyle |x-x_{0}|<\delta }$ . Written in mathematical symbols, that means ${\displaystyle f}$ is continuous at ${\displaystyle x_{0}\in D}$ if and only if

${\displaystyle \forall \epsilon >0\,\exists \delta >0\,\forall x\in D:|x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$

General Proof Structure

In mathematical quantifiers, the epsilon-delta definition of continuity of a function ${\displaystyle f}$ at the point ${\displaystyle x_{0}}$ reads:

${\displaystyle {\color {Red}\forall \epsilon >0}\ {\color {RedOrange}\exists \delta >0}\ {\color {OliveGreen}\forall x\in D:|x-x_{0}|<\delta }{\color {Blue}{}\implies |f(x)-f(x_{0})|<\epsilon }}$

This technical method of writing the claim specifies the general proof structure for proving continuity using the epsilon-delta criterion:

${\displaystyle {\begin{array}{l}{\color {Red}\underbrace {{\underset {}{}}{\text{Let }}\epsilon >0{\text{ be arbitrary.}}} _{\forall \epsilon >0}}\ {\color {RedOrange}\underbrace {{\underset {}{}}{\text{Choose }}\delta =\ldots {\text{ There exists such a }}\delta {\text{ because}}\ldots } _{\exists \delta >0}}\\{\color {OliveGreen}\underbrace {{\underset {}{}}{\text{Let }}x\in D{\text{ with }}|x-x_{0}|<\delta {\text{ be arbitrary.}}} _{\forall x\in D:|x-x_{0}|<\delta }}\ {\color {Blue}\underbrace {{\underset {}{}}{\text{We have: }}|f(x)-f(x_{0})|<\ldots <\epsilon } _{{}\implies |f(x)-f(x_{0})|<\epsilon }}\end{array}}}$

Example Problem and General Procedure

Exercise (Continuity of the quadratic function)

Prove that the function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)=x^{2}}$ is continuous.

How to get to the proof? (Continuity of the quadratic function)

For this proof, we need to show that the square function is continuous at any argument ${\displaystyle x_{0}\in \mathbb {R} }$ . Using the proof structure for the epsilon-delta criterion, we are given an arbitrary ${\displaystyle \epsilon >0}$ . Our job is to find a suitable ${\displaystyle \delta >0}$ , such that the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds for all ${\displaystyle |x-x_{0}|<\delta }$.

In order to find a suitable ${\displaystyle \delta }$ , we plug in the definition of the function ${\displaystyle f(x)=x^{2}}$ into the expression ${\displaystyle |f(x)-f(x_{0})|}$ which shall be smaller than ${\displaystyle \epsilon }$:

${\displaystyle |f(x)-f(x_{0})|=\left|x^{2}-x_{0}^{2}\right|}$

The expression ${\displaystyle |x-x_{0}|}$ may easily be controlled by ${\displaystyle \delta }$. Hence, it makes sense to construct an upper estimate for ${\displaystyle \left|x^{2}-x_{0}^{2}\right|}$ which includes ${\displaystyle |x-x_{0}|}$ and a constant. The factor ${\displaystyle |x-x_{0}|}$ appears if we perform a factorization using the third binomial formula:

${\displaystyle |x^{2}-x_{0}^{2}|=|x+x_{0}||x-x_{0}|}$

The requirement ${\displaystyle |x-x_{0}|<\delta }$ allows for an upper estimate of our expression:

${\displaystyle |x+x_{0}||x-x_{0}|<|x+x_{0}|\cdot \delta }$

The ${\displaystyle \delta }$ we are looking for may only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$. So the dependence on ${\displaystyle x}$ in the factor ${\displaystyle |x+x_{0}|\cdot \delta }$ is still a problem. We resolve it by a further upper estimate for the factor ${\displaystyle |x+x_{0}|}$. We will use a simple, but widely applied "trick" for that: an ${\displaystyle x_{0}}$ is subtracted and then added again at another place (so we are effectively adding a 0), such that the expression ${\displaystyle x-x_{0}}$ appears:

${\displaystyle |x+x_{0}|\cdot \delta =|x\underbrace {-x_{0}+x_{0}} _{=\ 0}+x_{0}|\cdot \delta =|x-x_{0}+2x_{0}|\cdot \delta }$

The absolute value ${\displaystyle |x-x_{0}|}$ is obtained using the triangle inequality. This absolute value ${\displaystyle |x-x_{0}|}$ is again bounded from above by ${\displaystyle \delta }$:

${\displaystyle |x-x_{0}+2x_{0}|\cdot \delta \leq (|x-x_{0}|+|2x_{0}|)\cdot \delta <(\delta +2|x_{0}|)\cdot \delta }$

So reshaping expressions and applying estimates, we obtain:

${\displaystyle |f(x)-f(x_{0})|<(\delta +2|x_{0}|)\cdot \delta }$

With this inequality in hand, we are almost done. If ${\displaystyle \delta }$ is chosen such that ${\displaystyle (\delta +2|x_{0}|)\cdot \delta \leq \epsilon }$, we will get the final inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$. Such a ${\displaystyle \delta }$ could in principle be found by solving the quadratic equation ${\displaystyle \delta ^{2}+2|x_{0}|\delta =\epsilon }$ for ${\displaystyle \delta }$. Or, even simpler, we may estimate ${\displaystyle (\delta +2|x_{0}|)\cdot \delta }$ from above. We use that we may freely impose additional conditions on ${\displaystyle \delta }$. If we, for instance, require ${\displaystyle \delta \leq 1}$, then ${\displaystyle \delta +2|x_{0}|\leq 1+2|x_{0}|}$, which simplifies things:

${\displaystyle |f(x)-f(x_{0})|<(\underbrace {\delta +2|x_{0}|} _{\leq 1+2|x_{0}|})\cdot \delta \leq (1+2|x_{0}|)\cdot \delta }$

So ${\displaystyle (1+2|x_{0}|)\cdot \delta \leq \epsilon }$ will also do the job. This inequality can be solved for ${\displaystyle \delta }$ to get the second condition on ${\displaystyle \delta }$ (the first one was ${\displaystyle \delta \leq 1}$):

${\displaystyle (1+2|x_{0}|)\cdot \delta \leq \epsilon \iff \delta \leq {\frac {\epsilon }{1+2|x_{0}|}}}$

So any ${\displaystyle \delta }$ fulfilling both conditions does the job: ${\displaystyle \delta \leq 1}$ and ${\displaystyle \delta \leq {\tfrac {\epsilon }{1+2|x_{0}|}}}$ have to hold. And indeed, both are true for ${\displaystyle \delta :=\min \left\{1,{\tfrac {\epsilon }{1+2|x_{0}|}}\right\}}$. This choice will be included in the final proof:

Proof (Continuity of the quadratic function)

Let ${\displaystyle x_{0}\in \mathbb {R} }$ and ${\displaystyle \epsilon >0}$ be arbitrary and ${\displaystyle \delta :=\min \left\{1,{\tfrac {\epsilon }{1+2|x_{0}|}}\right\}}$. If an argument ${\displaystyle x}$ fulfills ${\displaystyle \left|x-x_{0}\right|<\delta }$, then:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|x^{2}-x_{0}^{2}|\\[0.5em]&=|x+x_{0}||x-x_{0}|\\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<|x+x_{0}|\cdot \delta \\[0.5em]&=|x\underbrace {-x_{0}+x_{0}} _{=\ 0}+x_{0}|\cdot \delta \\[0.5em]&=|x-x_{0}+2x_{0}|\cdot \delta \\[0.5em]&\leq (|x-x_{0}|+2|x_{0}|)\cdot \delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<(\delta +2|x_{0}|)\cdot \delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ \delta \leq 1\right.}\\[0.5em]&\leq (1+2|x_{0}|)\cdot \delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ \delta \leq {\frac {\epsilon }{1+2|x_{0}|}}\right.}\\[0.5em]&\leq (1+2|x_{0}|)\cdot {\frac {\epsilon }{1+2|x_{0}|}}\\[0.5em]&=\epsilon \end{aligned}}}

This shows that the square function is continuous by the epsilon-delta criterion.
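The chosen ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{1+2|x_{0}|}}\right\}}$ can be stress-tested numerically across several points and tolerances (an illustration of the proof, not a replacement for it; the helper names are our own):

```python
def delta_for(eps, x0):
    # the delta from the proof
    return min(1.0, eps / (1 + 2 * abs(x0)))

def check(x0, eps, samples=1000):
    """Test |x^2 - x0^2| < eps on sample points with |x - x0| < delta."""
    d = delta_for(eps, x0)
    for i in range(1, samples + 1):
        x = x0 + d * (i / (samples + 1)) * (-1) ** i
        if abs(x**2 - x0**2) >= eps:
            return False
    return True

assert all(check(x0, eps)
           for x0 in (-10.0, 0.0, 0.5, 7.0)
           for eps in (0.01, 1.0, 100.0))
```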

Concatenated absolute function

Exercise (Example for a proof of continuity)

Prove that the following function is continuous at ${\displaystyle x_{0}=1}$:

${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto f(x)=5\left|x^{2}-2\right|+3}$

How to get to the proof? (Example for a proof of continuity)

We need to show that for each given ${\displaystyle \epsilon >0}$, there is a ${\displaystyle \delta >0}$, such that for all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-x_{0}|<\delta }$ the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds. In our case, ${\displaystyle x_{0}=1}$. So by choosing ${\displaystyle \delta }$ small enough, we may control the expression ${\displaystyle |x-1|<\delta }$. First, let us plug ${\displaystyle x_{0}=1}$ into ${\displaystyle |f(x)-f(x_{0})|}$ in order to simplify the inequality to be shown, ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|f(x)-f(1)|\\[0.5em]&=|5\cdot |x^{2}-2|+3-(5|1^{2}-2|+3)|\\[0.5em]&=|5\cdot |x^{2}-2|-5|\\[0.5em]&=5\cdot ||x^{2}-2|-1|\end{aligned}}}

The objective is to "produce" as many expressions ${\displaystyle |x-1|}$ as possible, since we can control ${\displaystyle |x-1|<\delta }$. It requires some experience with epsilon-delta proofs in order to "directly see" how this is achieved. First, we need to get rid of the nested absolute values. This is done using the inequality ${\displaystyle ||a|-|b||\leq |a-b|}$. For instance, we could use the following estimate:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.3em]&=5\cdot ||x^{2}-2|-1|\\[0.3em]&=5\cdot ||x^{2}-2|-|1||\\[0.3em]&\ {\color {Gray}\left\downarrow \ ||a|-|b||\leq |a-b|\right.}\\[0.3em]&\leq 5\cdot |x^{2}-2-1|\\[0.3em]&=5\cdot |x^{2}-3|\end{aligned}}}

However, this is a bad estimate as the expression ${\displaystyle |x^{2}-3|}$ no longer tends to 0 as ${\displaystyle x\to 1}$ . To resolve this problem, we use ${\displaystyle 1=|-1|}$ before applying the inequality ${\displaystyle ||a|-|b||\leq |a-b|}$ :

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.3em]&=5\cdot ||x^{2}-2|-1|\\[0.3em]&=5\cdot ||x^{2}-2|-|-1||\\[0.3em]&\quad {\color {Gray}\left\downarrow \ |a-b|\geq ||a|-|b||\right.}\\[0.3em]&\leq 5\cdot |x^{2}-2-(-1)|\\[0.3em]&=5\cdot |x^{2}-1|\end{aligned}}}

A factor of ${\displaystyle |x-1|}$ can be directly extracted out of this with the third binomial formula:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&\leq 5\cdot |x^{2}-1|\\[0.5em]&=5\cdot |x+1||x-1|\end{aligned}}}

And we can control it by ${\displaystyle |x-1|<\delta }$:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&\leq 5\cdot |x^{2}-1|\\[0.5em]&=5\cdot |x+1||x-1|\\[0.5em]&<5\cdot |x+1|\cdot \delta \end{aligned}}}

Now, the required ${\displaystyle \delta }$ must only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$. Therefore, we have to get rid of the ${\displaystyle x}$-dependence of ${\displaystyle 5|x+1|\cdot \delta }$. This can be done by finding an upper bound for ${\displaystyle 5|x+1|}$ which does not depend on ${\displaystyle x}$. As we are free to choose any ${\displaystyle \delta }$ for our proof, we may also impose any condition on it which helps us with the upper bound. In this case, ${\displaystyle \delta \leq 1}$ turns out to be quite useful. In fact, ${\displaystyle \delta \leq 2}$ or even higher bounds would do the job as well. What follows from this choice?

As before, we have ${\displaystyle |x-1|<\delta }$. As ${\displaystyle \delta \leq 1}$, we now have ${\displaystyle |x-1|<1}$, i.e. ${\displaystyle x\in [1-1;1+1]}$, so we obtain ${\displaystyle 1+x\in [1;3]}$ and ${\displaystyle |x+1|\leq 3}$. This is the upper bound we were looking for:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&<5\cdot |x+1|\cdot \delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x+1|\leq 3\right.}\\[0.5em]&\leq 15\cdot \delta \end{aligned}}}

As we would like to show ${\displaystyle |f(x)-f(1)|<\epsilon }$, we set ${\displaystyle \delta ={\tfrac {\epsilon }{15}}}$ and get that our final inequality holds:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&<15\cdot \delta \\[0.5em]&\leq 15\cdot {\frac {\epsilon }{15}}\\[0.5em]&\leq \epsilon .\end{aligned}}}

So if the two conditions for ${\displaystyle \delta }$ are satisfied, we get the final inequality. In fact, both conditions are satisfied for ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$. So let us summarize our ideas and write them down in a proof:

Proof (Example for a proof of continuity)

Let ${\displaystyle \epsilon >0}$ be arbitrary and let ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$. Further, let ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-1|<\delta }$. Then:

Step 1: ${\displaystyle |x+1|\leq 3}$

As ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$, there is ${\displaystyle \delta \leq 1}$. Hence ${\displaystyle |x-1|<1}$ and ${\displaystyle x\in [1-1;1+1]}$. It follows that ${\displaystyle 1+x\in [1;3]}$ and therefore ${\displaystyle |x+1|\leq 3}$.

Step 2: ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|f(x)-f(1)|\\[0.5em]&=|5\cdot |x^{2}-2|+3-(5|1^{2}-2|+3)|\\[0.5em]&=|5\cdot |x^{2}-2|-5|\\[0.5em]&=5\cdot ||x^{2}-2|-1|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ 1=|-1|\right.}\\[0.5em]&=5\cdot ||x^{2}-2|-|-1||\\[0.5em]&\quad {\color {Gray}\left\downarrow \ |a-b|\geq ||a|-|b||\right.}\\[0.5em]&\leq 5\cdot |x^{2}-2-(-1)|\\[0.5em]&=5\cdot |x^{2}-1|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ a^{2}-b^{2}=(a+b)(a-b)\right.}\\[0.5em]&=5\cdot (|x+1|\cdot \underbrace {|x-1|} _{<\delta })\\[0.5em]&<5\cdot |x+1|\cdot \delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x+1|\leq 3\right.}\\[0.5em]&\leq 15\delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ \delta \leq {\frac {\epsilon }{15}}\right.}\\[0.5em]&\leq 15\cdot {\frac {\epsilon }{15}}\\[0.5em]&=\epsilon \end{aligned}}}

Hence, the function is continuous at ${\displaystyle x_{0}=1}$ .
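Again, the concrete choice ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$ at ${\displaystyle x_{0}=1}$ can be probed numerically (illustration only; the helper names are our own):

```python
def f(x):
    # the function from the exercise
    return 5 * abs(x**2 - 2) + 3

def check(eps, samples=1000):
    """Test |f(x) - f(1)| < eps on sample points with |x - 1| < delta."""
    d = min(1.0, eps / 15)
    for i in range(1, samples + 1):
        x = 1.0 + d * (i / (samples + 1)) * (-1) ** i
        if abs(f(x) - f(1.0)) >= eps:
            return False
    return True

assert all(check(eps) for eps in (0.001, 0.1, 1.0, 50.0))
```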

Hyperbola

Exercise (Continuity of the hyperbolic function)

Prove that the function ${\displaystyle f:\mathbb {R} ^{+}\to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{x}}}$ is continuous.

How to get to the proof? (Continuity of the hyperbolic function)

The basic pattern for epsilon-delta proofs is applied here as well. We would like to show the implication ${\displaystyle |x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$. First, let us plug in what we know and reshape our terms a bit until a ${\displaystyle |x-x_{0}|}$ appears:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\left|{\frac {1}{x}}-{\frac {1}{x_{0}}}\right|\\[0.5em]&=\left|{\frac {x-x_{0}}{x\cdot x_{0}}}\right|\\[0.5em]&={\frac {|x-x_{0}|}{|x||x_{0}|}}\\[0.5em]&={\frac {1}{|x||x_{0}|}}\cdot |x-x_{0}|\end{aligned}}}

By assumption, there will be ${\displaystyle |x-x_{0}|<\delta }$, of which we can make use:

${\displaystyle |f(x)-f(x_{0})|=\ldots ={\frac {1}{|x||x_{0}|}}\cdot |x-x_{0}|<{\frac {1}{|x||x_{0}|}}\cdot \delta }$

The choice of ${\displaystyle \delta }$ may again only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$ so we need a smart estimate for ${\displaystyle {\tfrac {1}{|x|}}}$ in order to get rid of the ${\displaystyle x}$-dependence. To do so, we consider ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$.

Why was ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$ and not ${\displaystyle \delta \leq 1}$ chosen? The explanation is quite simple: We need a ${\displaystyle \delta }$-neighborhood inside the domain of definition of ${\displaystyle f}$. If we had simply chosen ${\displaystyle \delta \leq 1}$, we might have been kicked out of this domain. For instance, in case ${\displaystyle x_{0}={\tfrac {1}{2}}}$, the following problem appears:

The biggest ${\displaystyle x}$-value with ${\displaystyle \left|x-{\tfrac {1}{2}}\right|<1}$ is ${\displaystyle x_{\mathrm {max} }={\tfrac {3}{2}}}$ and the smallest one is ${\displaystyle x_{\mathrm {min} }=-{\tfrac {1}{2}}}$. However, ${\displaystyle x_{\mathrm {min} }}$ is not inside the domain of definition, as ${\displaystyle f:\mathbb {R} ^{+}\to \mathbb {R} }$. In particular, ${\displaystyle x=0}$ is an element of that interval, where ${\displaystyle f(x)={\tfrac {1}{x}}}$ cannot be defined at all.

A smarter choice for ${\displaystyle \delta }$, such that the ${\displaystyle \delta }$-neighborhood doesn't touch the ${\displaystyle y}$-axis is half of the distance to it, i.e. ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$. A third of this distance or other fractions smaller than 1 would also be possible: ${\displaystyle \delta \leq {\tfrac {x_{0}}{3}}}$, ${\displaystyle \delta <{\tfrac {x_{0}}{20}}}$ or ${\displaystyle \delta <{\tfrac {x_{0}}{5321}}}$.

As we chose ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$ and by ${\displaystyle |x-x_{0}|<\delta }$, we have ${\displaystyle x\in \left[{\tfrac {x_{0}}{2}};{\tfrac {3x_{0}}{2}}\right]}$. This allows for an upper bound ${\displaystyle {\tfrac {1}{|x|}}\leq {\tfrac {2}{|x_{0}|}}}$, and we may write:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots <{\frac {1}{|x||x_{0}|}}\cdot \delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ {\frac {1}{|x|}}\leq {\frac {2}{|x_{0}|}}\right.}\\[0.5em]&\leq {\frac {2}{|x_{0}|^{2}}}\cdot \delta \end{aligned}}}

So we get the estimate:

${\displaystyle |f(x)-f(x_{0})|\leq {\frac {2}{|x_{0}|^{2}}}\cdot \delta }$

Now, we want to prove ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ . Hence we choose ${\displaystyle \delta \leq {\tfrac {\epsilon \cdot |x_{0}|^{2}}{2}}}$. Plugging this in, our final inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ will be fulfilled.

So again, we imposed two conditions for ${\displaystyle \delta }$ : ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$ and ${\displaystyle \delta \leq {\tfrac {\epsilon \cdot |x_{0}|^{2}}{2}}}$. Both are fulfilled by ${\displaystyle \delta =\min \left\{{\tfrac {x_{0}}{2}},{\tfrac {\epsilon \cdot |x_{0}|^{2}}{2}}\right\}}$, which we will use in the proof:

Proof (Continuity of the hyperbolic function)

Let ${\displaystyle f:\mathbb {R} ^{+}\to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{x}}}$ and let ${\displaystyle x_{0}\in \mathbb {R} ^{+}}$ be arbitrary. Further, let ${\displaystyle \epsilon >0}$ be arbitrary. We choose ${\displaystyle \delta :=\min \left\{{\tfrac {x_{0}}{2}},{\tfrac {\epsilon \cdot |x_{0}|^{2}}{2}}\right\}}$. For all ${\displaystyle x\in \mathbb {R} ^{+}}$ with ${\displaystyle |x-x_{0}|<\delta }$ we have:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\left|{\frac {1}{x}}-{\frac {1}{x_{0}}}\right|\\[0.5em]&=\left|{\frac {x-x_{0}}{x\cdot x_{0}}}\right|\\[0.5em]&={\frac {|x-x_{0}|}{|x||x_{0}|}}\\[0.5em]&={\frac {1}{|x||x_{0}|}}|x-x_{0}|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ {\frac {1}{|x|}}\leq {\frac {2}{|x_{0}|}}\right.}\\[0.5em]&\leq {\frac {2}{|x_{0}|^{2}}}|x-x_{0}|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<{\frac {2\delta }{|x_{0}|^{2}}}\\[0.5em]&\quad {\color {Gray}\left\downarrow \ \delta \leq {\frac {\epsilon \cdot |x_{0}|^{2}}{2}}\right.}\\[0.5em]&\leq {\frac {2}{|x_{0}|^{2}}}\cdot {\frac {\epsilon \cdot |x_{0}|^{2}}{2}}\\[0.5em]&=\epsilon \end{aligned}}}

Hence, the function ${\displaystyle f}$ is continuous at ${\displaystyle x_{0}}$ . And as ${\displaystyle x_{0}}$ was chosen to be arbitrary, the whole function ${\displaystyle f}$ is continuous.
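The choice of δ above can be sanity-checked numerically. The following Python sketch (our addition, not part of the proof; the helper names `delta_for` and `check` are hypothetical) samples arguments within the δ-neighborhood and verifies the target inequality:

```python
# Numerical check of delta = min{x0/2, eps*x0^2/2} for f(x) = 1/x on R+.
# The helper names are ours.
def delta_for(x0: float, eps: float) -> float:
    return min(x0 / 2, eps * x0**2 / 2)

def check(x0: float, eps: float, samples: int = 1000) -> bool:
    f = lambda x: 1 / x
    d = delta_for(x0, eps)
    for i in range(1, samples):
        x = x0 - d + 2 * d * i / samples  # |x - x0| < d, and x > x0/2 > 0
        if abs(f(x) - f(x0)) >= eps:
            return False
    return True

assert check(1.0, 0.1) and check(0.5, 0.01)
```

Sampling cannot replace the proof, but it quickly catches a mis-derived δ.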

Concatenated square root function

Exercise (Epsilon-delta proof of continuity for a square root function)

Prove, using the epsilon-delta definition of continuity, that the following function is continuous:

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,x\mapsto {\sqrt {5+x^{2}}}}$

How to get to the proof? (Epsilon-delta proof of continuity for a square root function)

We have to show that for every ${\displaystyle \epsilon >0}$ there exists a ${\displaystyle \delta >0}$ such that all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-a|<\delta }$ satisfy the inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$. To this end, we first consider the target inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$ and cleverly bound the absolute value ${\displaystyle |f(x)-f(a)|}$ from above. Since we can control the term ${\displaystyle |x-a|}$, we bound ${\displaystyle |f(x)-f(a)|}$ from above in such a way that the factor ${\displaystyle |x-a|}$ appears. So we are looking for an inequality of the form

${\displaystyle |f(x)-f(a)|\leq K(x,a)\cdot |x-a|}$

Here, ${\displaystyle K(x,a)}$ is some term depending on ${\displaystyle x}$ and ${\displaystyle a}$. The second factor is smaller than ${\displaystyle \delta }$ and can therefore be made arbitrarily small by a suitable choice of ${\displaystyle \delta }$. One such estimate is the following:

{\displaystyle {\begin{aligned}|f(x)-f(a)|&=\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\text{expand by }}\left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|\right.}\\[0.3em]&={\frac {\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\cdot \left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}{\left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}>0\right.}\\[0.3em]&={\frac {\left|x^{2}-a^{2}\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\cdot |x-a|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ K(x,a):={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\right.}\\[0.3em]&=K(x,a)\cdot |x-a|\end{aligned}}}

Because of ${\displaystyle |x-a|<\delta }$ we have:

${\displaystyle |f(x)-f(a)|=K(x,a)\cdot |x-a|<K(x,a)\cdot \delta }$

If we choose ${\displaystyle \delta }$ so small that ${\displaystyle K(x,a)\cdot \delta \leq \epsilon }$, the target inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$ follows. However, ${\displaystyle K(x,a)}$ depends on ${\displaystyle x}$, this dependence would carry over to ${\displaystyle \delta }$, and we are not allowed to choose ${\displaystyle \delta }$ depending on ${\displaystyle x}$. So we have to eliminate the dependence of the first factor on ${\displaystyle x}$. We achieve this by bounding the first factor from above so that we obtain an inequality of the form ${\displaystyle K(x,a)\leq {\tilde {K}}(a)}$. One such estimate is:

{\displaystyle {\begin{aligned}K(x,a)&={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\text{triangle inequality}}\right.}\\[0.3em]&\leq {\frac {\left|x\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}+{\frac {\left|a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\frac {a}{b+c}}\leq {\frac {a}{b}}{\text{ for }}a,c\geq 0,b>0\right.}\\[0.3em]&\leq {\frac {\left|x\right|}{\sqrt {5+x^{2}}}}+{\frac {\left|a\right|}{\sqrt {5+a^{2}}}}\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\frac {|a|}{\sqrt {5+a^{2}}}}\leq 1{\text{, since }}{\sqrt {5+a^{2}}}\geq {\sqrt {a^{2}}}=|a|\right.}\\[0.3em]&\leq 2=:{\tilde {K}}(a)\end{aligned}}}

We have even made ${\displaystyle {\tilde {K}}(a)}$ independent of ${\displaystyle a}$, which was not strictly necessary. Thus we have the inequality

${\displaystyle |f(x)-f(a)|\leq 2\cdot |x-a|<2\cdot \delta }$

We now need the estimate ${\displaystyle 2\cdot \delta \leq \epsilon }$ so that the target inequality ${\displaystyle |f(x)-f(a)|<\epsilon }$ is satisfied. The choice ${\displaystyle \delta ={\tfrac {\epsilon }{2}}}$ is sufficient for this.

Proof (Epsilon-delta proof of continuity for a square root function)

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\sqrt {5+x^{2}}}}$. Let ${\displaystyle a\in \mathbb {R} }$ and ${\displaystyle \epsilon >0}$ be arbitrary. We choose ${\displaystyle \delta ={\tfrac {\epsilon }{2}}}$. For all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-a|<\delta }$ we have:

{\displaystyle {\begin{aligned}|f(x)-f(a)|&=\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\\[0.3em]&={\frac {\left|{\sqrt {5+x^{2}}}-{\sqrt {5+a^{2}}}\right|\cdot \left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}{\left|{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}\right|}}\\[0.3em]&={\frac {\left|x^{2}-a^{2}\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\\[0.3em]&={\frac {\left|x+a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\cdot |x-a|\\[0.3em]&\leq \left({\frac {\left|x\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}+{\frac {\left|a\right|}{{\sqrt {5+x^{2}}}+{\sqrt {5+a^{2}}}}}\right)\cdot |x-a|\\[0.3em]&\leq \left({\frac {\left|x\right|}{\sqrt {5+x^{2}}}}+{\frac {\left|a\right|}{\sqrt {5+a^{2}}}}\right)\cdot |x-a|\\[0.3em]&\leq (1+1)\cdot |x-a|\\[0.3em]&\leq 2\cdot |x-a|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ |x-a|<\delta ={\frac {\epsilon }{2}}\right.}\\[0.3em]&<2\cdot {\frac {\epsilon }{2}}=\epsilon \end{aligned}}}

Hence, ${\displaystyle f}$ is a continuous function.
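The derived choice δ = ε/2 can also be probed numerically. The following Python sketch is our addition, not part of the proof; the helper name `check` is hypothetical:

```python
import math

def check(a: float, eps: float, samples: int = 1000) -> bool:
    """Sample arguments x with |x - a| < delta and verify |f(x) - f(a)| < eps."""
    f = lambda x: math.sqrt(5 + x**2)
    delta = eps / 2  # the choice derived above
    for i in range(1, samples):
        x = a - delta + 2 * delta * i / samples
        if abs(f(x) - f(a)) >= eps:
            return False
    return True

assert check(0.0, 0.1) and check(-7.0, 0.01) and check(100.0, 1e-4)
```

Note that the same δ works for every a, reflecting that the bound 2|x−a| is independent of a.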

Discontinuity of the topological sine function

Exercise (Discontinuity of the topological sine function)

Prove the discontinuity at ${\displaystyle x_{0}=0}$ for the topological sine function:

${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto {\begin{cases}\sin \left({\frac {1}{x}}\right)&x\neq 0\\0&x=0\end{cases}}}$

How to get to the proof? (Discontinuity of the topological sine function)

In this exercise, discontinuity has to be shown for a given function. This is done by negating the epsilon-delta criterion: our objective is to find an ${\displaystyle \epsilon >0}$ such that for every ${\displaystyle \delta >0}$ there is an ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-x_{0}|<\delta }$ and ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$. Here, ${\displaystyle x}$ may be chosen depending on ${\displaystyle \delta }$, while ${\displaystyle \epsilon }$ has to be the same for all ${\displaystyle \delta >0}$. For a solution, we may proceed as follows:

Step 1: Simplify the target inequality

First, we may simplify the two inequalities which have to be fulfilled by plugging in ${\displaystyle x_{0}=0}$ and ${\displaystyle f(x_{0})=0}$. We therefore get: ${\displaystyle |x|<\delta }$ and ${\displaystyle |f(x)|\geq \epsilon }$.

Step 2: Choose a suitable ${\displaystyle \epsilon >0}$

We consider the graph of the function ${\displaystyle f}$. It will help us find the building blocks for our proof:

We need to find an ${\displaystyle \epsilon >0}$, such that there are arguments in each arbitrarily narrow interval ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )=(-\delta ,\delta )}$ whose function values have a distance larger than ${\displaystyle \epsilon }$ from ${\displaystyle f(x_{0})=f(0)=0}$. Visually, no matter how small the width ${\displaystyle 2\delta }$ of the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle is chosen, there will always be some points of the graph below or above it.

Taking a look at the graph, we see that our function oscillates between ${\displaystyle -1}$ and ${\displaystyle 1}$ . Hence, ${\displaystyle \epsilon \leq 1}$ may be useful. In that case, there will be function values with ${\displaystyle |f(x)|\geq 1}$ in every arbitrarily small neighborhood around ${\displaystyle x_{0}=0}$. We choose ${\displaystyle \epsilon ={\tfrac {1}{2}}}$. This is visualized in the following figure:

After ${\displaystyle \epsilon }$ has been chosen, an arbitrary ${\displaystyle \delta >0}$ will be assumed, for which we need to find a suitable ${\displaystyle x}$. This is what we will do next.

Step 3: Choice of a suitable ${\displaystyle x\in \mathbb {R} }$

We just set ${\displaystyle \epsilon ={\tfrac {1}{2}}}$. Therefore, ${\displaystyle \left|\sin \left({\tfrac {1}{x}}\right)\right|\geq {\tfrac {1}{2}}}$ has to hold. So it would be nice to choose an ${\displaystyle x}$ with ${\displaystyle \sin \left({\tfrac {1}{x}}\right)=1}$. Now, ${\displaystyle \sin(a)=1}$ holds exactly for ${\displaystyle a={\tfrac {\pi }{2}}+2k\pi }$ with ${\displaystyle k\in \mathbb {Z} }$. The condition on ${\displaystyle x}$ for the function value to equal 1 is therefore:

${\displaystyle {\frac {1}{x}}={\frac {\pi }{2}}+2k\pi \iff x={\frac {1}{{\frac {\pi }{2}}+2k\pi }}}$

So we have found several ${\displaystyle x}$ with ${\displaystyle |f(x)|\geq \epsilon }$. Now we only need to select one among them which satisfies ${\displaystyle |x|<\delta }$ for the given ${\displaystyle \delta }$. These ${\displaystyle x}$ depend on ${\displaystyle k}$, so we have to select a suitable ${\displaystyle k\in \mathbb {Z} }$ for which ${\displaystyle |x|<\delta }$. To do so, let us plug ${\displaystyle x={\tfrac {1}{{\tfrac {\pi }{2}}+2k\pi }}}$ into this inequality and solve it for ${\displaystyle k}$:

{\displaystyle {\begin{aligned}\left|x\right|<\delta &\iff \left|{\frac {1}{{\frac {\pi }{2}}+2k\pi }}\right|<\delta \\[0.5em]&\iff {\frac {1}{{\frac {\pi }{2}}+2k\pi }}<\delta \\[0.5em]&\iff 2k\pi +{\frac {\pi }{2}}>{\frac {1}{\delta }}\\[0.5em]&\iff 2k\pi >{\frac {1}{\delta }}-{\frac {\pi }{2}}\\[0.5em]&\iff k>{\frac {1}{2\pi }}\left({\frac {1}{\delta }}-{\frac {\pi }{2}}\right)\end{aligned}}}

So the condition on ${\displaystyle k}$ is ${\displaystyle k>{\tfrac {1}{2\pi }}\left({\tfrac {1}{\delta }}-{\tfrac {\pi }{2}}\right)}$. If we choose any natural number ${\displaystyle k}$ above this threshold, then ${\displaystyle |x|<\delta }$ will be fulfilled. Such a ${\displaystyle k}$ has to exist by the Archimedean axiom (for instance, round the right-hand expression up to the next natural number). So let us choose such a ${\displaystyle k}$ and define ${\displaystyle x={\tfrac {1}{{\tfrac {\pi }{2}}+2k\pi }}}$. This gives us both ${\displaystyle |x|<\delta }$ and ${\displaystyle |f(x)|\geq \epsilon }$. So we have gathered all the building blocks, which we will now assemble into the final proof:

Proof (Discontinuity of the topological sine function)

Choose ${\displaystyle \epsilon ={\tfrac {1}{2}}}$ and let ${\displaystyle \delta >0}$ be arbitrary. Choose a natural number ${\displaystyle k}$ with ${\displaystyle k>{\tfrac {1}{2\pi }}\left({\tfrac {1}{\delta }}-{\tfrac {\pi }{2}}\right)}$. Such a natural number ${\displaystyle k}$ has to exist by Archimedes' axiom. Further, let ${\displaystyle x={\tfrac {1}{{\frac {\pi }{2}}+2k\pi }}}$. Then:

{\displaystyle {\begin{aligned}k>{\frac {1}{2\pi }}\left({\frac {1}{\delta }}-{\frac {\pi }{2}}\right)&\implies 2k\pi >{\frac {1}{\delta }}-{\frac {\pi }{2}}\\[0.5em]&\implies 2k\pi +{\frac {\pi }{2}}>{\frac {1}{\delta }}\\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ {\frac {1}{a}}>{\frac {1}{b}}>0\iff 0<b<a\right.}\\[0.5em]&\implies \left|{\frac {1}{{\frac {\pi }{2}}+2k\pi }}\right|<\delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ x={\frac {1}{{\frac {\pi }{2}}+2k\pi }}\right.}\\[0.5em]&\implies \left|x\right|<\delta \end{aligned}}}

{\displaystyle {\begin{aligned}\left|f(x)-f(0)\right|&=\left|\sin \left({\frac {1}{x}}\right)-0\right|\\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ x={\frac {1}{{\frac {\pi }{2}}+2k\pi }}\right.}\\[0.5em]&=\left|\sin \left({\frac {\pi }{2}}+2k\pi \right)\right|\\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ \sin \left({\frac {\pi }{2}}+2k\pi \right)=1\right.}\\[0.5em]&=\left|1\right|\geq {\frac {1}{2}}=\epsilon \end{aligned}}}

Hence, the function is discontinuous at ${\displaystyle x_{0}=0}$.
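The witness construction in this proof lends itself to a numerical spot check. The sketch below is our addition; the helper name `witness` is hypothetical, and `k` is chosen exactly as in the proof:

```python
import math

# Numerical check of the proof: for any delta > 0, the witness
# x = 1/(pi/2 + 2*k*pi) with k above the derived threshold satisfies
# |x| < delta and |sin(1/x)| >= epsilon = 1/2.
eps = 0.5

def witness(delta: float) -> float:
    threshold = (1 / delta - math.pi / 2) / (2 * math.pi)
    k = max(0, math.ceil(threshold))
    if k <= threshold:  # ensure the strict inequality k > threshold
        k += 1
    return 1 / (math.pi / 2 + 2 * k * math.pi)

for delta in (1.0, 0.1, 1e-3, 1e-6):
    x = witness(delta)
    assert abs(x) < delta and abs(math.sin(1 / x)) >= eps
```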

Example Problems

Sequence Criterion: Absolute Value Function

Exercise (Continuity of the absolute value function)

Prove continuity for the absolute value function.

Proof (Continuity of the absolute value function)

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)=|x|}$ be the absolute value function. Let ${\displaystyle x_{0}\in \mathbb {R} }$ be a real number and ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ a sequence with ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$. In the chapter „Grenzwertsätze: Grenzwert von Folgen berechnen“ we have proven the absolute value rule, stating that ${\displaystyle \lim _{n\to \infty }|x_{n}|=|x_{0}|}$ whenever ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$. Hence:

${\displaystyle \lim _{n\to \infty }f(x_{n})=\lim _{n\to \infty }|x_{n}|=|x_{0}|=f(x_{0})}$

This proves continuity of the absolute value function ${\displaystyle f}$ by the sequence criterion.
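The sequence criterion can be illustrated numerically: for any sequence of arguments converging to x₀, the absolute values converge to |x₀|. The following sketch is our addition; the helper name `abs_limit_demo` and the particular test sequence are hypothetical:

```python
# Numerical illustration of the sequence criterion for f(x) = |x|:
# for x_n -> x0 we expect |x_n| -> |x0|.
def abs_limit_demo(x0: float, n_terms: int = 10000) -> float:
    xs = [x0 + (-1) ** n / n for n in range(1, n_terms + 1)]  # x_n -> x0
    return abs(xs[-1])  # last sampled value of |x_n|

assert abs(abs_limit_demo(-2.0) - abs(-2.0)) < 1e-3
```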

Discontinuity of the topological sine function

Exercise (Discontinuity of the topological sine function)

Prove discontinuity of the following function:

${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto {\begin{cases}\sin \left({\frac {1}{x}}\right)&x\neq 0\\0&x=0\end{cases}}}$

How to get to the proof? (Discontinuity of the topological sine function)

Discontinuity of ${\displaystyle f}$ means that this function has at least one argument where ${\displaystyle f}$ is discontinuous. For each ${\displaystyle x\neq 0}$, ${\displaystyle f}$ is equal to the function ${\displaystyle \sin \left({\tfrac {1}{x}}\right)}$ in a sufficiently small neighbourhood of ${\displaystyle x}$. Since ${\displaystyle \sin \left({\tfrac {1}{x}}\right)}$ is just a composition of continuous functions, it is continuous itself, and therefore ${\displaystyle f}$ must be continuous at all ${\displaystyle x\neq 0}$ as well. So we know that the discontinuity can only be located at ${\displaystyle x=0}$.

In order to prove that ${\displaystyle f}$ is discontinuous at ${\displaystyle x=0}$, we need to find a sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ with ${\displaystyle \lim _{n\to \infty }x_{n}=0}$ but ${\displaystyle \lim _{n\to \infty }f(x_{n})\neq f(0)}$. To find such a sequence, let us take a look at the graph of the function ${\displaystyle f}$:

In this figure, we recognize that ${\displaystyle f}$ takes any value between ${\displaystyle -1}$ and ${\displaystyle 1}$ infinitely often in the vicinity of ${\displaystyle x=0}$. So, for instance, we may just choose ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ such that ${\displaystyle f(x_{n})}$ is always ${\displaystyle 1}$. This guarantees that ${\displaystyle \lim _{n\to \infty }f(x_{n})=1\neq 0=f(0)}$. Actually, any other constant value ${\displaystyle f(x_{n})\neq 0}$ between ${\displaystyle -1}$ and ${\displaystyle 1}$ would do the job as well, but we need to make a specific choice for ${\displaystyle x_{n}}$, and ${\displaystyle f(x_{n})=1}$ is a very simple one. In addition, we will choose ${\displaystyle x_{n}}$ to converge to zero from above.

The following figure also contains the sequence elements ${\displaystyle x_{n}}$ beside our function ${\displaystyle f}$. We may clearly see that for ${\displaystyle x_{n}\to 0}$ the sequence of function values converges to ${\displaystyle 1}$ , which is different from the function value ${\displaystyle f(0)=0}$ :

But what are the exact values of these ${\displaystyle x_{n}}$ for which we would like to have ${\displaystyle f(x_{n})=1}$? To answer this question, let us solve the equation ${\displaystyle f(x)=1}$ for ${\displaystyle x}$:

{\displaystyle {\begin{aligned}{\begin{array}{rrrl}&&f(x)&=1\\[0.5em]{\overset {f(0)\neq 1}{\iff {}}}&&\sin \left({\frac {1}{x}}\right)&=1\\[0.5em]\iff {}&\exists k\in \mathbb {Z} :&{\frac {1}{x}}&={\frac {\pi }{2}}+2k\pi \\[0.5em]\iff {}&\exists k\in \mathbb {Z} :&x&={\frac {1}{{\frac {\pi }{2}}+2k\pi }}\end{array}}\end{aligned}}}

So for each ${\displaystyle x={\tfrac {1}{{\frac {\pi }{2}}+2k\pi }}}$ with ${\displaystyle k\in \mathbb {Z} }$ , we have ${\displaystyle f(x)=1}$. In order to get positive ${\displaystyle x_{n}}$ converging to zero from above, we may for instance choose ${\displaystyle x_{n}={\tfrac {1}{{\frac {\pi }{2}}+2n\pi }}}$. In that case:

${\displaystyle \lim _{n\to \infty }x_{n}=\lim _{n\to \infty }{\frac {1}{{\frac {\pi }{2}}+2n\pi }}=0}$

And we have seen that ${\displaystyle \lim _{n\to \infty }f(x_{n})=1\neq 0=f(0)}$. So we have found a sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ which proves discontinuity of ${\displaystyle f}$ at ${\displaystyle x=0}$.

Proof (Discontinuity of the topological sine function)

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)=\sin \left({\tfrac {1}{x}}\right)}$ for ${\displaystyle x\neq 0}$ and ${\displaystyle f(0)=0}$. We consider the sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ defined by ${\displaystyle x_{n}={\tfrac {1}{{\frac {\pi }{2}}+2n\pi }}}$. For this sequence:

${\displaystyle \lim _{n\to \infty }x_{n}=\lim _{n\to \infty }{\frac {1}{{\frac {\pi }{2}}+2n\pi }}=0}$

Moreover, we have:

{\displaystyle {\begin{aligned}\lim _{n\to \infty }f(x_{n})&=\lim _{n\to \infty }f\left({\frac {1}{{\frac {\pi }{2}}+2n\pi }}\right)\\[0.5em]&=\lim _{n\to \infty }\sin \left({\frac {1}{\frac {1}{{\frac {\pi }{2}}+2n\pi }}}\right)\\[0.5em]&=\lim _{n\to \infty }\sin \left({\frac {\pi }{2}}+2n\pi \right)\\[0.5em]&=\lim _{n\to \infty }1=1\end{aligned}}}

Hence, ${\displaystyle \lim _{n\to \infty }f(x_{n})=1\neq 0=f(0)}$ although ${\displaystyle \lim _{n\to \infty }x_{n}=0}$ . This proves that ${\displaystyle f}$ is discontinuous at ${\displaystyle x=0}$ and therefore it is a discontinuous function.
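The sequence from this proof can be evaluated directly. The following sketch is our addition; it computes the first terms of xₙ = 1/(π/2 + 2nπ) and confirms the two limits used above:

```python
import math

# The sequence x_n = 1/(pi/2 + 2*n*pi) from the proof: it tends to 0,
# while f(x_n) = sin(1/x_n) stays equal to 1 (up to floating-point error).
xs = [1 / (math.pi / 2 + 2 * n * math.pi) for n in range(1, 2001)]
fs = [math.sin(1 / x) for x in xs]

assert xs[-1] < 1e-3                        # x_n -> 0
assert all(abs(v - 1) < 1e-9 for v in fs)   # f(x_n) = 1 != 0 = f(0)
```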

Epsilon-Delta Criterion: Linear Function

Exercise (Continuity of a linear function)

Prove that a linear function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{3}}x}$ is continuous.

How to get to the proof? (Continuity of a linear function)

Graph of the function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{3}}x}$. Considering the graph, we see that this function is continuous everywhere.

To actually prove continuity of ${\displaystyle f}$ , we need to check continuity at any argument ${\displaystyle x_{0}\in \mathbb {R} }$ . So let ${\displaystyle x_{0}}$ be an arbitrary real number. Now, choose any arbitrary maximal error ${\displaystyle \epsilon >0}$ . Our task is now to find a sufficiently small ${\displaystyle \delta >0}$ , such that ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ for all arguments ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta }$ . Let us take a closer look at the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ :

{\displaystyle {\begin{aligned}{\begin{array}{rrl}&|f(x)-f(x_{0})|&<\epsilon \\[0.5em]\iff &\left|{\frac {1}{3}}x-{\frac {1}{3}}x_{0}\right|&<\epsilon \\[0.5em]\iff &\left|{\frac {1}{3}}(x-x_{0})\right|&<\epsilon \\[0.5em]\iff &{\frac {1}{3}}|x-x_{0}|&<\epsilon \end{array}}\end{aligned}}}

That means, ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ has to be fulfilled for all ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta }$ . How to choose ${\displaystyle \delta }$ , such that ${\displaystyle |x-x_{0}|<\delta }$ implies ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ ?

We use that the inequality ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ contains the distance ${\displaystyle |x-x_{0}|}$. As ${\displaystyle |x-x_{0}|<\delta }$, we know that this distance is smaller than ${\displaystyle \delta }$. Plugging this into the term ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|}$ yields:

${\displaystyle {\frac {1}{3}}|x-x_{0}|{\stackrel {|x-x_{0}|<\delta }{<}}{\frac {1}{3}}\delta }$

If ${\displaystyle \delta }$ is now chosen such that ${\displaystyle {\tfrac {1}{3}}\delta \leq \epsilon }$ , then ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<{\tfrac {1}{3}}\delta }$ will yield the inequality ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ which we wanted to show. The smallness condition for ${\displaystyle \delta }$ can now simply be found by resolving ${\displaystyle {\tfrac {1}{3}}\delta \leq \epsilon }$ for ${\displaystyle \delta }$:

${\displaystyle {\tfrac {1}{3}}\delta \leq \epsilon \iff \delta \leq 3\epsilon }$

Any ${\displaystyle \delta }$ satisfying ${\displaystyle 0<\delta \leq 3\epsilon }$ could be used for the proof. For instance, we may use ${\displaystyle \delta =3\epsilon }$. As we now found a suitable ${\displaystyle \delta }$, we can finally conduct the proof:

Proof (Continuity of a linear function)

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{3}}x}$ and let ${\displaystyle x_{0}\in \mathbb {R} }$ be arbitrary. In addition, consider any ${\displaystyle \epsilon >0}$ to be given. We choose ${\displaystyle \delta =3\epsilon }$. Let ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-x_{0}|<\delta }$. Then:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\left|{\frac {1}{3}}x-{\frac {1}{3}}x_{0}\right|\\[0.5em]&=\left|{\frac {1}{3}}(x-x_{0})\right|\\[0.5em]&={\frac {1}{3}}|x-x_{0}|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<{\frac {1}{3}}\delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ \delta =3\epsilon \right.}\\[0.5em]&={\frac {1}{3}}(3\epsilon )=\epsilon \end{aligned}}}

This shows ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$, and establishes continuity of ${\displaystyle f}$ at ${\displaystyle x_{0}}$ by means of the epsilon-delta criterion. Since ${\displaystyle x_{0}\in \mathbb {R} }$ was chosen to be arbitrary, we also know that the entire function ${\displaystyle f}$ is continuous.
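Again the derived δ = 3ε can be verified by sampling. The sketch below is our addition, not part of the proof; the helper name `check` is hypothetical:

```python
# Numerical check of the derived choice delta = 3*eps for f(x) = x/3.
def check(x0: float, eps: float, samples: int = 1000) -> bool:
    f = lambda x: x / 3
    delta = 3 * eps
    for i in range(1, samples):
        x = x0 - delta + 2 * delta * i / samples  # |x - x0| < delta
        if abs(f(x) - f(x0)) >= eps:
            return False
    return True

assert check(0.0, 1.0) and check(-4.0, 1e-3)
```

The check succeeds for every x₀, mirroring the fact that δ here does not depend on x₀ (the function is even uniformly continuous, though the proof does not need this).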

Epsilon-Delta Criterion: Concatenated Absolute Value Function

Exercise (Example for a proof of continuity)

Prove that the following function is continuous at ${\displaystyle x_{0}=1}$:

${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto f(x)=5\left|x^{2}-2\right|+3}$

How to get to the proof? (Example for a proof of continuity)

We need to show that for each given ${\displaystyle \epsilon >0}$, there is a ${\displaystyle \delta >0}$ such that for all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-x_{0}|<\delta }$ the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds. In our case, ${\displaystyle x_{0}=1}$. So by choosing ${\displaystyle \delta }$ small enough, we may control the expression ${\displaystyle |x-1|}$. First, let us plug ${\displaystyle x_{0}=1}$ into ${\displaystyle |f(x)-f(x_{0})|}$ in order to simplify the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ to be shown:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|f(x)-f(1)|\\[0.5em]&=|5\cdot |x^{2}-2|+3-(5|1^{2}-2|+3)|\\[0.5em]&=|5\cdot |x^{2}-2|-5|\\[0.5em]&=5\cdot ||x^{2}-2|-1|\end{aligned}}}

The objective is to "produce" as many expressions ${\displaystyle |x-1|}$ as possible, since we can control ${\displaystyle |x-1|<\delta }$. It requires some experience with epsilon-delta proofs in order to "directly see" how this is achieved. First, we need to get rid of the double absolute value. This is done using the inequality ${\displaystyle ||a|-|b||\leq |a-b|}$. For instance, we could use the following estimate:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.3em]&=5\cdot ||x^{2}-2|-1|\\[0.3em]&=5\cdot ||x^{2}-2|-|1||\\[0.3em]&\ {\color {Gray}\left\downarrow \ ||a|-|b||\leq |a-b|\right.}\\[0.3em]&\leq 5\cdot |x^{2}-2-1|\\[0.3em]&=5\cdot |x^{2}-3|\end{aligned}}}

However, this is a bad estimate as the expression ${\displaystyle |x^{2}-3|}$ no longer tends to 0 as ${\displaystyle x\to 1}$ . To resolve this problem, we use ${\displaystyle 1=|-1|}$ before applying the inequality ${\displaystyle ||a|-|b||\leq |a-b|}$ :

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.3em]&=5\cdot ||x^{2}-2|-1|\\[0.3em]&=5\cdot ||x^{2}-2|-|-1||\\[0.3em]&\quad {\color {Gray}\left\downarrow \ |a-b|\geq ||a|-|b||\right.}\\[0.3em]&\leq 5\cdot |x^{2}-2-(-1)|\\[0.3em]&=5\cdot |x^{2}-1|\end{aligned}}}

A factor of ${\displaystyle |x-1|}$ can be directly extracted from this with the third binomial formula:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&\leq 5\cdot |x^{2}-1|\\[0.5em]&=5\cdot |x+1||x-1|\end{aligned}}}

And we can control it by ${\displaystyle |x-1|<\delta }$:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&\leq 5\cdot |x^{2}-1|\\[0.5em]&=5\cdot |x+1||x-1|\\[0.5em]&<5\cdot |x+1|\cdot \delta \end{aligned}}}
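The estimate chain derived so far can be probed numerically before δ is even chosen. The following sketch is our addition; the helper name `estimate_holds` is hypothetical, and the sampling range is an arbitrary assumption:

```python
import random

# Numerical check of the estimate derived above:
# |f(x) - f(1)| = 5*||x^2 - 2| - 1|  <=  5*|x + 1|*|x - 1|.
f = lambda x: 5 * abs(x**2 - 2) + 3

def estimate_holds(x: float) -> bool:
    return abs(f(x) - f(1)) <= 5 * abs(x + 1) * abs(x - 1) + 1e-12

random.seed(0)
assert all(estimate_holds(random.uniform(-3, 3)) for _ in range(10000))
```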

Now, the required ${\displaystyle \delta }$ must only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$ . Therefore, we have to get rid of the