# Epsilon-delta definition of continuity – Serlo


Alongside the sequence criterion, the epsilon-delta criterion is another way to define the continuity of functions. This criterion describes the characteristic feature of continuous functions that arbitrarily small changes of the function value can be guaranteed by sufficiently small changes of the argument.

## Motivation

At the beginning of this chapter, we learned that continuity of a function may intuitively be thought of as the absence of jumps. So if the function is continuous at an argument, the function values change arbitrarily little when we wiggle the argument by a sufficiently small amount. That is, ${\displaystyle f(x)\approx f(x_{0})}$ for ${\displaystyle x}$ in the vicinity of ${\displaystyle x_{0}}$. The function values ${\displaystyle f(x)}$ may therefore be used to approximate ${\displaystyle f(x_{0})}$.

### Continuity when approximating function values

If a function has no jumps, we may approximate its function values by other nearby values. For this approximation, and hence also for proofs of continuity, we will use the epsilon-delta criterion for continuity. So what does such an approximation look like in a practical situation?

Suppose we conduct an experiment that includes measuring the air temperature as a function of time. Let ${\displaystyle f}$ be the function describing the temperature, so ${\displaystyle f(x)}$ is the temperature at time ${\displaystyle x}$. Now suppose there is a technical problem, so we have no data for ${\displaystyle f(x_{0})}$ - or we simply did not measure ${\displaystyle f}$ at exactly this point in time. However, we would like to approximate the function value ${\displaystyle f(x_{0})}$ as precisely as we can:

Suppose a technical issue prevented the measurement of ${\displaystyle f(x_{0})}$. Since the temperature changes continuously in time - and in particular there is no jump at ${\displaystyle x_{0}}$ - we may instead use a temperature value measured at a time close to ${\displaystyle x_{0}}$. So let us approximate the value ${\displaystyle f(x_{0})}$ by taking a temperature ${\displaystyle f(x)}$ with ${\displaystyle x}$ close to ${\displaystyle x_{0}}$. That means ${\displaystyle f(x)}$ is an approximation for ${\displaystyle f(x_{0})}$. How close must ${\displaystyle x}$ come to ${\displaystyle x_{0}}$ in order to obtain a given approximation precision?

Suppose that for the evaluation of the temperature at a later time ${\displaystyle x}$, the maximal error shall be ${\displaystyle \epsilon =0.1\ \mathrm {^{\circ }C} }$. So considering the following figure, the measured temperature should be in the grey region. These are all temperatures with values between ${\displaystyle f(x_{0})-\epsilon }$ and ${\displaystyle f(x_{0})+\epsilon }$, i.e. inside the open interval ${\displaystyle (f(x_{0})-\epsilon ,f(x_{0})+\epsilon )}$:

In this graphic, we can see that there is a region around ${\displaystyle x_{0}}$ where the function values differ by less than ${\displaystyle \epsilon }$ from ${\displaystyle f(x_{0})}$. So in fact, there is a time difference ${\displaystyle \delta }$ such that for all arguments inside the interval ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ highlighted in grey, the function values lie within ${\displaystyle \epsilon }$ of ${\displaystyle f(x_{0})}$:

Therefore, we may indeed approximate the missing data point ${\displaystyle f(x_{0})}$ sufficiently well (meaning with a maximal error of ${\displaystyle \epsilon }$). This is done by taking a time ${\displaystyle x}$ differing from ${\displaystyle x_{0}}$ by less than ${\displaystyle \delta }$; then the error of ${\displaystyle f(x)}$ in approximating ${\displaystyle f(x_{0})}$ will be smaller than the desired maximal error ${\displaystyle \epsilon }$. So ${\displaystyle f(x)}$ serves as the approximation for ${\displaystyle f(x_{0})}$.

Conclusion: There is a ${\displaystyle \delta >0}$ such that the difference ${\displaystyle |f(x)-f(x_{0})|}$ is smaller than ${\displaystyle \epsilon }$ for all ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|}$ smaller than ${\displaystyle \delta }$, i.e. ${\displaystyle |x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$
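This conclusion can be checked numerically. The sketch below uses a hypothetical smooth temperature model (an assumption for illustration; the real measured curve is of course unknown) and tests the implication on a fine grid of times around ${\displaystyle x_{0}}$:

```python
import math

# Hypothetical smooth temperature model (an assumption for illustration):
# time in hours, temperature in degrees Celsius, no jumps anywhere.
def temperature(x):
    return 20.0 + 2.0 * math.sin(x / 3.0)

x0 = 5.0       # the time at which the measurement is missing
eps = 0.1      # required maximal error in the temperature
delta = 0.1    # candidate time window; sufficient here, since the slope is at most 2/3

# Check the implication |x - x0| < delta  =>  |f(x) - f(x0)| < eps
# on a fine grid of times inside (x0 - delta, x0 + delta).
for i in range(1000):
    x = x0 - delta + 2 * delta * (i + 0.5) / 1000
    assert abs(temperature(x) - temperature(x0)) < eps
```

A finite grid cannot prove the implication for all ${\displaystyle x}$; it only illustrates it.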

### Increasing approximation precision

What happens if we need to know the temperature value to a higher precision due to increased requirements in the evaluation of the experiment? For instance, if the required maximal temperature error is set to ${\displaystyle \epsilon _{2}=0.05\ \mathrm {^{\circ }C} }$ instead of ${\displaystyle \epsilon =0.1\ \mathrm {^{\circ }C} }$?

In that case, there is an interval around ${\displaystyle x_{0}}$ where function values do not deviate by more than ${\displaystyle \epsilon _{2}}$ from ${\displaystyle f(x_{0})}$. Mathematically speaking, there exists a ${\displaystyle \delta _{2}>0}$ such that ${\displaystyle f(x)}$ differs by at most ${\displaystyle \epsilon _{2}}$ from ${\displaystyle f(x_{0})}$ whenever ${\displaystyle |x-x_{0}|<\delta _{2}}$:

No matter how small we choose ${\displaystyle \epsilon }$, thanks to the continuous temperature dependence we may always find a ${\displaystyle \delta >0}$ such that ${\displaystyle f(x)}$ differs by at most ${\displaystyle \epsilon }$ from ${\displaystyle f(x_{0})}$ whenever ${\displaystyle x}$ is closer to ${\displaystyle x_{0}}$ than ${\displaystyle \delta }$. We keep in mind:

No matter which maximal error ${\displaystyle \epsilon >0}$ is required, there is always an interval ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ around ${\displaystyle x_{0}}$ with radius ${\displaystyle \delta >0}$, where all approximating function values ${\displaystyle f(x)}$ deviate by less than ${\displaystyle \epsilon }$ from the function value ${\displaystyle f(x_{0})}$ to be approximated.

This holds true since the function ${\displaystyle f}$ does not have a jump at ${\displaystyle x_{0}}$ - in other words, since ${\displaystyle f}$ is continuous at ${\displaystyle x_{0}}$. Conversely, from the above characteristic we may always infer that there is no jump in the graph of ${\displaystyle f}$ at ${\displaystyle x_{0}}$. Therefore, we may use it as a formal definition of continuity. As mathematicians frequently use the variables ${\displaystyle \epsilon }$ and ${\displaystyle \delta }$ when describing this characteristic, it is also called the epsilon-delta criterion for continuity.

### Epsilon-delta-criterion for continuity

Why does the epsilon-delta criterion hold if and only if the graph of the function does not have a jump at some argument (i.e. it is continuous there)? The temperature example allows us to intuitively verify that the epsilon-delta criterion is satisfied for continuous functions. But is the epsilon-delta criterion violated when a function has a jump at some argument? To answer this question, let us assume that the temperature as a function of time has a jump at some ${\displaystyle x_{0}}$:

Let ${\displaystyle \epsilon }$ be a given maximal error that is smaller than the jump height:

In that case, we cannot choose a ${\displaystyle \delta }$-interval ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ around ${\displaystyle x_{0}}$ where all function values deviate by less than ${\displaystyle \epsilon }$ from ${\displaystyle f(x_{0})}$. If we, for instance, choose the following ${\displaystyle \delta }$, then there certainly is an ${\displaystyle x}$ between ${\displaystyle x_{0}-\delta }$ and ${\displaystyle x_{0}+\delta }$ whose function value differs by at least ${\displaystyle \epsilon }$ from ${\displaystyle f(x_{0})}$:

When choosing a smaller ${\displaystyle \delta _{2}}$, we will find an ${\displaystyle x\in (x_{0}-\delta _{2},x_{0}+\delta _{2})}$ with ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$ as well:

No matter how small we choose ${\displaystyle \delta }$, there will always be an argument ${\displaystyle x}$ with a distance of less than ${\displaystyle \delta }$ to ${\displaystyle x_{0}}$ such that the function value ${\displaystyle f(x)}$ differs by at least ${\displaystyle \epsilon }$ from ${\displaystyle f(x_{0})}$. So we have seen in an intuitive example that the epsilon-delta criterion is not satisfied if the function has a jump. Therefore, the epsilon-delta criterion characterizes whether the graph of the function has a jump at the considered argument ${\displaystyle x_{0}}$ or not. That means we may consider it as a definition of continuity. Since this criterion only uses mathematically well-defined terms, it may serve not just as an intuitive, but also as a formal definition.

Hint

In the above example, we did not pay attention to some of the aspects we would have to consider when performing actual measurements. For example, we assumed perfect measurements without any errors. Of course, this is not the case in reality: every measurement at some time ${\displaystyle x}$ has an outcome differing from the actual value ${\displaystyle f(x)}$. In addition, we assumed instantaneous measurements at ${\displaystyle x}$, whereas in practice each recording of a value takes some time. These uncertainties have to be taken into account in real experiments.

## Definition

### Epsilon-Delta criterion for continuity

The ${\displaystyle \epsilon }$-${\displaystyle \delta }$ definition of continuity at an argument ${\displaystyle x_{0}}$ inside the domain of definition is the following:

Definition (Epsilon-Delta-definition of continuity)

A function ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ is continuous at ${\displaystyle x_{0}\in D}$, if and only if for any ${\displaystyle \epsilon >0}$ there is a ${\displaystyle \delta >0}$ , such that ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds for all ${\displaystyle x\in D}$ with ${\displaystyle |x-x_{0}|<\delta }$ . Written in mathematical symbols, that means ${\displaystyle f}$ is continuous at ${\displaystyle x_{0}\in D}$ if and only if

${\displaystyle \forall \epsilon >0\,\exists \delta >0\,\forall x\in D:|x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$

Explanation of the quantifier notation:

${\displaystyle {\begin{aligned}{\begin{array}{l}\underbrace {{\underset {}{}}\forall \epsilon >0} _{{\text{For all }}\epsilon >0}\underbrace {{\underset {}{}}\exists \delta >0} _{{\text{ there is a }}\delta >0}\underbrace {{\underset {}{}}\forall x\in D} _{{\text{, such that for all }}x\in D}\\[1em]\quad \underbrace {{\underset {}{}}|x-x_{0}|<\delta } _{{\text{ with distance to }}x_{0}{\text{ smaller than }}\delta }\underbrace {{\underset {}{}}\implies } _{\text{ it follows that}}\underbrace {{\underset {}{}}|f(x)-f(x_{0})|<\epsilon } _{{\text{ the distance of }}f(x){\text{ to }}f(x_{0}){\text{ is smaller than }}\epsilon }\end{array}}\end{aligned}}}$

The above definition describes continuity at a certain point (argument). An entire function ${\displaystyle f:D\to \mathbb {R} }$ is called continuous, when it is continuous - according to the epsilon-delta criterion - at each of its arguments in the domain of definition.
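The definition suggests a simple numerical sanity check. The following sketch (with the hypothetical helper name `eps_delta_holds`) samples arguments in the ${\displaystyle \delta }$-interval and tests the implication; a finite sample can refute the criterion but never prove it, so this is an illustration, not a proof:

```python
def eps_delta_holds(f, x0, eps, delta, n=10_000):
    """Sampling-based check of: |x - x0| < delta  =>  |f(x) - f(x0)| < eps.

    A finite sample can only refute the implication, never prove it,
    so a True result is evidence, not a proof.
    """
    for i in range(n):
        x = x0 - delta + 2 * delta * (i + 0.5) / n  # grid inside (x0 - delta, x0 + delta)
        if abs(f(x) - f(x0)) >= eps:
            return False
    return True

f = lambda x: x / 3  # the linear function from the examples below

assert eps_delta_holds(f, x0=1.0, eps=0.01, delta=0.03)     # delta = 3*eps works
assert not eps_delta_holds(f, x0=1.0, eps=0.01, delta=0.1)  # too large a delta fails
```

The grid points are placed strictly inside the open interval, matching the strict inequality ${\displaystyle |x-x_{0}|<\delta }$.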

### Derivation of the Epsilon-Delta criterion for discontinuity

We may also obtain a criterion for discontinuity by simply negating the above definition. Negating mathematical propositions has already been treated in the chapter „Aussagen negieren“. In doing so, a universal quantifier ${\displaystyle \forall }$ gets transformed into an existential quantifier ${\displaystyle \exists }$ and vice versa. Concerning the inner implication, we have to keep in mind that the negation of ${\displaystyle A\implies B}$ is equivalent to ${\displaystyle A\land \neg B}$. Negating the epsilon-delta criterion of continuity, we obtain:

${\displaystyle {\begin{aligned}{\begin{array}{rrrrrcr}&\neg {\Big (}\forall \epsilon >0\,&\exists \delta >0\,&\forall x\in D:&|x-x_{0}|<\delta &\implies &|f(x)-f(x_{0})|<\epsilon {\Big )}\\[0.5em]\iff &\exists \epsilon >0\,&\neg {\Big (}\exists \delta >0\,&\forall x\in D:&|x-x_{0}|<\delta &\implies &|f(x)-f(x_{0})|<\epsilon {\Big )}\\[0.5em]\iff &\exists \epsilon >0\,&\forall \delta >0\,&\neg {\Big (}\forall x\in D:&|x-x_{0}|<\delta &\implies &|f(x)-f(x_{0})|<\epsilon {\Big )}\\[0.5em]\iff &\exists \epsilon >0\,&\forall \delta >0\,&\exists x\in D:&\neg {\Big (}|x-x_{0}|<\delta &\implies &|f(x)-f(x_{0})|<\epsilon {\Big )}\\[0.5em]\iff &\exists \epsilon >0\,&\forall \delta >0\,&\exists x\in D:&|x-x_{0}|<\delta &\land &\neg {\Big (}|f(x)-f(x_{0})|<\epsilon {\Big )}\\[0.5em]\iff &\exists \epsilon >0\,&\forall \delta >0\,&\exists x\in D:&|x-x_{0}|<\delta &\land &|f(x)-f(x_{0})|\geq \epsilon \end{array}}\end{aligned}}}$

This gets us the negation of continuity (i.e. discontinuity):

${\displaystyle \exists \epsilon >0\,\forall \delta >0\,\exists x\in D:|x-x_{0}|<\delta \land |f(x)-f(x_{0})|\geq \epsilon }$

### Epsilon-Delta criterion for discontinuity

Definition (Epsilon-Delta definition of discontinuity)

A function ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ is discontinuous at ${\displaystyle x_{0}\in D}$ if and only if there is an ${\displaystyle \epsilon >0}$ such that for all ${\displaystyle \delta >0}$ there exists an ${\displaystyle x\in D}$ with ${\displaystyle |x-x_{0}|<\delta }$ and ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$. In mathematical symbols, ${\displaystyle f}$ is discontinuous at ${\displaystyle x_{0}\in D}$ if and only if

${\displaystyle \exists \epsilon >0\,\forall \delta >0\,\exists x\in D:|x-x_{0}|<\delta \land |f(x)-f(x_{0})|\geq \epsilon }$

Explanation of the quantifier notation:

${\displaystyle {\begin{aligned}{\begin{array}{l}\underbrace {{\underset {}{}}\exists \epsilon >0} _{{\text{There is an }}\epsilon >0,}\underbrace {{\underset {}{}}\forall \delta >0} _{{\text{ such that for all }}\delta >0}\underbrace {{\underset {}{}}\exists x\in D} _{{\text{ there is an }}x\in D}\\[1em]\quad \underbrace {{\underset {}{}}|x-x_{0}|<\delta } _{{\text{ with distance to }}x_{0}{\text{ smaller than }}\delta }\underbrace {{\underset {}{}}\land } _{\text{ and}}\underbrace {{\underset {}{}}|f(x)-f(x_{0})|\geq \epsilon } _{{\text{the distance of }}f(x){\text{ to }}f(x_{0}){\text{ is at least }}\epsilon }\end{array}}\end{aligned}}}$
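The negated criterion can likewise be explored numerically: for a fixed ${\displaystyle \epsilon }$ one searches, for each ${\displaystyle \delta }$, for a witness ${\displaystyle x}$ violating the implication. A minimal sketch, using the signum function discussed below as the example:

```python
def sgn(x):
    return 1 if x > 0 else (-1 if x < 0 else 0)

def find_violation(f, x0, eps, delta, n=1000):
    """Search a grid in (x0 - delta, x0 + delta) for an x with
    |x - x0| < delta but |f(x) - f(x0)| >= eps (a discontinuity witness)."""
    for i in range(n):
        x = x0 - delta + 2 * delta * (i + 0.5) / n
        if abs(f(x) - f(x0)) >= eps:
            return x
    return None

# For sgn at x0 = 0 with eps = 1/2, every delta admits a witness:
for delta in [1.0, 0.01, 1e-8]:
    assert find_violation(sgn, 0.0, 0.5, delta) is not None
```

That a witness exists for *every* ${\displaystyle \delta >0}$ is exactly what the quantifier ${\displaystyle \forall \delta >0}$ in the negated criterion demands.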

## Further explanations considering the Epsilon-Delta criterion

The inequality ${\displaystyle |x-x_{0}|<\delta }$ means that the distance between ${\displaystyle x}$ and ${\displaystyle x_{0}}$ is smaller than ${\displaystyle \delta }$. Analogously, ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ tells us that the distance between ${\displaystyle f(x)}$ and ${\displaystyle f(x_{0})}$ is smaller than ${\displaystyle \epsilon }$. Therefore, the implication ${\displaystyle |x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$ just says that whenever ${\displaystyle x}$ and ${\displaystyle x_{0}}$ are closer together than ${\displaystyle \delta }$, the distance between the function values ${\displaystyle f(x)}$ and ${\displaystyle f(x_{0})}$ is smaller than ${\displaystyle \epsilon }$. Thus we may interpret the epsilon-delta criterion in the following way:

No matter how small we set the maximal distance ${\displaystyle \epsilon }$ between the function values ${\displaystyle f(x)}$ and ${\displaystyle f(x_{0})}$, there will always be a ${\displaystyle \delta >0}$ such that ${\displaystyle f(x)}$ and ${\displaystyle f(x_{0})}$ are closer together than ${\displaystyle \epsilon }$ whenever ${\displaystyle x}$ is closer to ${\displaystyle x_{0}}$ than ${\displaystyle \delta }$.

For continuous functions, we can force the error in ${\displaystyle f(x)}$ to be lower than ${\displaystyle \epsilon }$ by keeping the error in the argument sufficiently small (smaller than ${\displaystyle \delta }$). Finding a ${\displaystyle \delta }$ means answering the question: how small does my initial error in the argument have to be in order to get a final error smaller than ${\displaystyle \epsilon }$? This becomes interesting when doing numerical calculations or measurements. Imagine you are measuring some ${\displaystyle x_{0}}$ and then using it to compute ${\displaystyle f(x_{0})}$, where ${\displaystyle f}$ is a continuous function. The epsilon-delta criterion allows you to find a maximal error ${\displaystyle \delta }$ in ${\displaystyle x}$ (i.e. ${\displaystyle |x-x_{0}|<\delta }$) which guarantees that the final error ${\displaystyle |f(x)-f(x_{0})|}$ will be smaller than ${\displaystyle \epsilon }$.

A ${\displaystyle \delta }$ can only be found if small changes around the argument ${\displaystyle x_{0}}$ also cause only small changes around the function value ${\displaystyle f(x_{0})}$. Hence, for functions continuous at ${\displaystyle x_{0}}$, the following has to hold:

${\displaystyle x\approx x_{0}\implies f(x)\approx f(x_{0})}$

I.e.: whenever ${\displaystyle x}$ is sufficiently close to ${\displaystyle x_{0}}$ , then ${\displaystyle f(x)}$ is approximately ${\displaystyle f(x_{0})}$. This may also be described using the notion of an ${\displaystyle \epsilon }$-neighborhood:

For every ${\displaystyle \epsilon }$-neighborhood ${\displaystyle (f(x_{0})-\epsilon ,f(x_{0})+\epsilon )}$ around ${\displaystyle f(x_{0})}$ - no matter how small it may be - there is always a ${\displaystyle \delta }$-neighborhood ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ around ${\displaystyle x_{0}}$ which is mapped entirely into the ${\displaystyle \epsilon }$-neighborhood.

In topology, this description using neighborhoods will be generalized to a topological definition of continuity.

## Visualization of the Epsilon-Delta criterion

### Description of continuity using the graph

The epsilon-delta criterion may nicely be visualized by taking a look at the graph of a function. Let's start by getting a picture of the implication ${\displaystyle |x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$. It means that the distance between ${\displaystyle f(x)}$ and ${\displaystyle f(x_{0})}$ is smaller than ${\displaystyle \epsilon }$ whenever ${\displaystyle x}$ is closer to ${\displaystyle x_{0}}$ than ${\displaystyle \delta }$. So for ${\displaystyle x\in (x_{0}-\delta ,x_{0}+\delta )}$, we have ${\displaystyle f(x)\in (f(x_{0})-\epsilon ,f(x_{0})+\epsilon )}$. Hence, the point ${\displaystyle (x,f(x))}$ has to be inside the rectangle ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )\times (f(x_{0})-\epsilon ,f(x_{0})+\epsilon )}$. This is a rectangle with width ${\displaystyle 2\delta }$ and height ${\displaystyle 2\epsilon }$ centered at ${\displaystyle (x_{0},f(x_{0}))}$:

We will call this the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle and only consider its interior; that means, the boundary does not belong to the rectangle. Following the epsilon-delta criterion, the implication ${\displaystyle |x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$ has to be fulfilled for all arguments ${\displaystyle x}$. Thus, the graph of ${\displaystyle f}$ restricted to arguments inside the interval ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ must lie entirely in the interior of the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle (marked green) and never in the area above or below it (marked red):

So graphically, we may describe the epsilon-delta criterion as follows:

For all rectangle heights ${\displaystyle \epsilon >0}$ , there is a sufficiently small rectangle width ${\displaystyle \delta >0}$, such that the graph of ${\displaystyle f}$ restricted to ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ (i.e. the width of the rectangle) is entirely inside the green interior of the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle, and never in the red above or below area.

### Example of a continuous function

For an example, consider the function ${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto {\tfrac {1}{3}}x}$. This function is continuous everywhere - and hence also at the argument ${\displaystyle x_{0}=1}$, where ${\displaystyle f(x_{0})=f(1)={\tfrac {1}{3}}\cdot 1={\tfrac {1}{3}}}$. At first, consider a maximal final error of ${\displaystyle \epsilon =1}$ around ${\displaystyle f(x_{0})}$. With ${\displaystyle \delta =2}$, we have found a ${\displaystyle \delta >0}$ such that the graph of ${\displaystyle f}$ restricted to ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ is entirely situated inside the interior of the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle:

And not only for ${\displaystyle \epsilon =1}$: for any ${\displaystyle \epsilon >0}$ we may find a ${\displaystyle \delta >0}$ such that the graph of ${\displaystyle f}$ restricted to ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ is situated entirely inside the respective ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle:

### Example for a discontinuous function

What happens if the function is discontinuous? Let's take the signum function ${\displaystyle \operatorname {sgn} }$, which is discontinuous at 0:

${\displaystyle \operatorname {sgn} :\mathbb {R} \to \mathbb {R} :x\mapsto {\begin{cases}1&x>0\\0&x=0\\-1&x<0\end{cases}}}$

And here is its graph:

The graph intuitively allows us to recognize that at ${\displaystyle x_{0}=0}$ there certainly is a discontinuity. And we may see this using the rectangle visualization as well. When choosing a rectangle height ${\displaystyle \epsilon }$ smaller than the jump height (i.e. ${\displaystyle \epsilon <1}$), then there is no ${\displaystyle \delta }$ such that the graph can be fitted entirely inside the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle. For instance, if ${\displaystyle \epsilon ={\tfrac {1}{2}}}$, then for any ${\displaystyle \delta }$ - no matter how small - there will always be function values above or below the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle. In fact, this applies to all values except for ${\displaystyle f(0)=0}$:

## Dependence of delta or epsilon choice

### Continuity

How does the choice of ${\displaystyle \delta >0}$ depend on ${\displaystyle x_{0}}$ and ${\displaystyle \epsilon }$? Suppose an arbitrary ${\displaystyle \epsilon >0}$ is given in order to check continuity of ${\displaystyle f}$. Now we need to find a rectangle width ${\displaystyle \delta >0}$ such that the restriction of the graph of ${\displaystyle f}$ to arguments inside the interval ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ fits entirely into the epsilon-tube ${\displaystyle (f(x_{0})-\epsilon ,f(x_{0})+\epsilon )}$. This of course requires choosing ${\displaystyle \delta }$ sufficiently small. When ${\displaystyle \delta }$ is too large, there may be an argument ${\displaystyle x}$ in ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ where ${\displaystyle f(x)}$ has escaped the tube, i.e. it has a distance to ${\displaystyle f(x_{0})}$ larger than ${\displaystyle \epsilon }$:

How small ${\displaystyle \delta }$ has to be chosen depends on three factors: the function ${\displaystyle f}$, the given ${\displaystyle \epsilon }$ and the argument ${\displaystyle x_{0}}$. Depending on the function's slope, a different ${\displaystyle \delta }$ must be chosen (steep functions require a smaller ${\displaystyle \delta }$). Furthermore, for a smaller ${\displaystyle \epsilon }$ we also have to choose a smaller ${\displaystyle \delta }$. The following diagrams illustrate this: here, a quadratic function is plotted, which is continuous at ${\displaystyle x_{0}=1}$. For a smaller ${\displaystyle \epsilon }$, we also need to choose a smaller ${\displaystyle \delta }$:

The choice of ${\displaystyle \delta }$ will depend on the argument ${\displaystyle x_{0}}$, as well. The more a function changes in the neighborhood of a certain point (i.e. it is steep around it), the smaller we have to choose ${\displaystyle \delta }$ . The following graphic demonstrates this: The ${\displaystyle \delta }$-value proposed there is sufficiently small at ${\displaystyle x_{0}}$ , but too large at ${\displaystyle x_{1}}$ :

In the vicinity of ${\displaystyle x_{1}}$ , the function ${\displaystyle f}$ has a higher slope compared to ${\displaystyle x_{0}}$. Hence, we need to choose a smaller ${\displaystyle \delta }$ at ${\displaystyle x_{1}}$ . Let us denote the ${\displaystyle \delta }$-values at ${\displaystyle x_{0}}$ and ${\displaystyle x_{1}}$ correspondingly by ${\displaystyle \delta _{0}}$ and ${\displaystyle \delta _{1}}$ - and choose ${\displaystyle \delta _{1}}$ to be smaller:

So, we have just seen that the choice of ${\displaystyle \delta }$ depends on the function ${\displaystyle f}$ to be considered, as well as the argument ${\displaystyle x_{0}}$ and the given ${\displaystyle \epsilon }$ .
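For the quadratic function mentioned above this dependence can be made explicit. Assuming ${\displaystyle f(x)=x^{2}}$ at an argument ${\displaystyle x_{0}>0}$ with a small ${\displaystyle \epsilon }$ (assumptions for this sketch), the worst case in ${\displaystyle (x_{0}-\delta ,x_{0}+\delta )}$ is the right endpoint, so the largest admissible ${\displaystyle \delta }$ solves ${\displaystyle (x_{0}+\delta )^{2}-x_{0}^{2}=\epsilon }$, giving ${\displaystyle \delta ={\sqrt {x_{0}^{2}+\epsilon }}-x_{0}}$. A short sketch confirming both dependencies:

```python
import math

# Largest admissible delta for f(x) = x^2 at x0 > 0 (assuming eps < x0^2,
# so the worst case is the right endpoint): (x0 + delta)^2 - x0^2 = eps.
def max_delta(x0, eps):
    return math.sqrt(x0**2 + eps) - x0

# A smaller eps forces a smaller delta (same argument x0 = 1):
assert max_delta(1.0, 0.05) < max_delta(1.0, 0.1)

# A steeper point (larger x0) forces a smaller delta (same eps = 0.1):
assert max_delta(2.0, 0.1) < max_delta(1.0, 0.1)
```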

### Discontinuity

For a discontinuity proof, the relations between the variables will interchange. This relates back to the interchange of the quantifiers under negation of propositions. In order to show discontinuity, we need to find an ${\displaystyle \epsilon >0}$ small enough, such that for no ${\displaystyle \delta >0}$ the graph of ${\displaystyle f}$ fits entirely into the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle. In particular, if the discontinuity is caused by a jump, then ${\displaystyle \epsilon }$ must be chosen smaller than the jump height. For ${\displaystyle \epsilon }$ too large, there might be a ${\displaystyle \delta }$, such that ${\displaystyle f}$ does fit into the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle:

Which ${\displaystyle \epsilon }$ has to be chosen again depends on the behavior of the function around ${\displaystyle x_{0}}$. After ${\displaystyle \epsilon }$ has been chosen, an arbitrary ${\displaystyle \delta >0}$ will be considered. Then, an ${\displaystyle x}$ between ${\displaystyle x_{0}-\delta }$ and ${\displaystyle x_{0}+\delta }$ has to be found such that ${\displaystyle f(x)}$ has a distance larger than (or equal to) ${\displaystyle \epsilon }$ to ${\displaystyle f(x_{0})}$. That means the point ${\displaystyle (x,f(x))}$ has to be situated above or below the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle. Which ${\displaystyle x}$ has to be chosen depends on a variety of parameters: the chosen ${\displaystyle \epsilon }$, the arbitrarily given ${\displaystyle \delta }$, the discontinuity and the behavior of the function around it.

## Example problems

### Continuity

Exercise (Continuity of a linear function)

Prove that the linear function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{3}}x}$ is continuous.

How to get to the proof? (Continuity of a linear function)

Graph of the function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{3}}x}$. Considering the graph, we see that this function is continuous everywhere.

To actually prove continuity of ${\displaystyle f}$, we need to check continuity at every argument ${\displaystyle x_{0}\in \mathbb {R} }$. So let ${\displaystyle x_{0}}$ be an arbitrary real number. Now choose any maximal error ${\displaystyle \epsilon >0}$. Our task is to find a sufficiently small ${\displaystyle \delta >0}$ such that ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ for all arguments ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta }$. Let us take a closer look at the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$:

${\displaystyle {\begin{aligned}{\begin{array}{rrl}&|f(x)-f(x_{0})|&<\epsilon \\[0.5em]\iff &\left|{\frac {1}{3}}x-{\frac {1}{3}}x_{0}\right|&<\epsilon \\[0.5em]\iff &\left|{\frac {1}{3}}(x-x_{0})\right|&<\epsilon \\[0.5em]\iff &{\frac {1}{3}}|x-x_{0}|&<\epsilon \end{array}}\end{aligned}}}$

That means, ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ has to be fulfilled for all ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta }$ . How to choose ${\displaystyle \delta }$ , such that ${\displaystyle |x-x_{0}|<\delta }$ implies ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ ?

We use that the inequality ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ contains the distance ${\displaystyle |x-x_{0}|}$. As ${\displaystyle |x-x_{0}|<\delta }$, we know that this distance is smaller than ${\displaystyle \delta }$. This can be plugged into the term ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|}$:

${\displaystyle {\frac {1}{3}}|x-x_{0}|{\stackrel {|x-x_{0}|<\delta }{<}}{\frac {1}{3}}\delta }$

If ${\displaystyle \delta }$ is now chosen such that ${\displaystyle {\tfrac {1}{3}}\delta \leq \epsilon }$ , then ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<{\tfrac {1}{3}}\delta }$ will yield the inequality ${\displaystyle {\tfrac {1}{3}}|x-x_{0}|<\epsilon }$ which we wanted to show. The smallness condition for ${\displaystyle \delta }$ can now simply be found by resolving ${\displaystyle {\tfrac {1}{3}}\delta \leq \epsilon }$ for ${\displaystyle \delta }$:

${\displaystyle {\tfrac {1}{3}}\delta \leq \epsilon \iff \delta \leq 3\epsilon }$

Any ${\displaystyle \delta }$ satisfying ${\displaystyle 0<\delta \leq 3\epsilon }$ could be used for the proof. For instance, we may use ${\displaystyle \delta =3\epsilon }$. As we now found a suitable ${\displaystyle \delta }$, we can finally conduct the proof:

Proof (Continuity of a linear function)

Let ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{3}}x}$ and let ${\displaystyle x_{0}\in \mathbb {R} }$ be arbitrary. In addition, consider any ${\displaystyle \epsilon >0}$ to be given. We choose ${\displaystyle \delta =3\epsilon }$. Let ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-x_{0}|<\delta }$. Then:

${\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\left|{\frac {1}{3}}x-{\frac {1}{3}}x_{0}\right|\\[0.5em]&=\left|{\frac {1}{3}}(x-x_{0})\right|\\[0.5em]&={\frac {1}{3}}|x-x_{0}|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<{\frac {1}{3}}\delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ \delta =3\epsilon \right.}\\[0.5em]&={\frac {1}{3}}(3\epsilon )=\epsilon \end{aligned}}}$

This shows ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$, and establishes continuity of ${\displaystyle f}$ at ${\displaystyle x_{0}}$ by means of the epsilon-delta criterion. Since ${\displaystyle x_{0}\in \mathbb {R} }$ was chosen to be arbitrary, we also know that the entire function ${\displaystyle f}$ is continuous.
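The proof's choice ${\displaystyle \delta =3\epsilon }$ can also be tested numerically. A minimal sketch (sampling-based, so an illustration rather than a proof):

```python
f = lambda x: x / 3  # the linear function from the proof

def worst_error(x0, eps, n=1000):
    """Largest sampled |f(x) - f(x0)| for x on a grid inside
    (x0 - delta, x0 + delta) with the proof's choice delta = 3*eps."""
    delta = 3 * eps
    return max(abs(f(x0 - delta + 2 * delta * (i + 0.5) / n) - f(x0))
               for i in range(n))

# The sampled error stays below eps for various arguments and tolerances:
for x0 in [-5.0, 0.0, 1.0, 100.0]:
    for eps in [1.0, 0.1, 1e-6]:
        assert worst_error(x0, eps) < eps
```

Note that the same ${\displaystyle \delta =3\epsilon }$ works for every ${\displaystyle x_{0}}$ here, because the slope of the linear function is the same everywhere.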

### Discontinuity

Exercise (Discontinuity of the signum function)

Prove that the signum function ${\displaystyle \operatorname {sgn} :\mathbb {R} \to \mathbb {R} }$ is discontinuous:

${\displaystyle \operatorname {sgn}(x)={\begin{cases}1&x>0\\0&x=0\\-1&x<0\end{cases}}}$

How to get to the proof? (Discontinuity of the signum function)

In order to prove discontinuity of the entire function, we just have to find one single argument where it is discontinuous. Considering the graph of ${\displaystyle \operatorname {sgn} }$, we can already guess which argument this may be:

The function has a jump at ${\displaystyle x_{0}=0}$, so we expect it to be discontinuous there. It remains to choose an ${\displaystyle \epsilon >0}$ for which no ${\displaystyle \delta >0}$ can make the function fit into the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle. This is done by setting ${\displaystyle \epsilon }$ smaller than the jump height ${\displaystyle 1}$ - for instance ${\displaystyle \epsilon ={\tfrac {1}{2}}}$. For that ${\displaystyle \epsilon }$, no matter how ${\displaystyle \delta >0}$ is given, there will be function values above or below the ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle.

So let ${\displaystyle \delta >0}$ be arbitrary. We need to show that there is an ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta }$ but ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$ . Let us take a look at the inequality ${\displaystyle |x-x_{0}|<\delta }$ :

${\displaystyle |x-x_{0}|<\delta {\stackrel {x_{0}=0}{\iff }}|x|<\delta }$

This inequality classifies all ${\displaystyle x}$ that can be used for the proof. The particular ${\displaystyle x}$ we choose has to fulfill ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$ :

{\displaystyle {\begin{aligned}{\begin{array}{rrl}&|f(x)-f(x_{0})|&\geq \epsilon \\[0.5em]\iff &|\operatorname {sgn} (x)-\operatorname {sgn} (x_{0})|&\geq {\frac {1}{2}}\\[0.5em]\iff &|\operatorname {sgn} (x)-\operatorname {sgn} (0)|&\geq {\frac {1}{2}}\\[0.5em]\iff &|\operatorname {sgn} (x)|&\geq {\frac {1}{2}}\end{array}}\end{aligned}}}

So our ${\displaystyle x}$ needs to fulfill both ${\displaystyle |x|<\delta }$ and ${\displaystyle |\operatorname {sgn} (x)|\geq {\tfrac {1}{2}}}$ . The second inequality ${\displaystyle |\operatorname {sgn} (x)|\geq {\tfrac {1}{2}}}$ may be achieved quite easily: For any ${\displaystyle x\neq 0}$ , the value ${\displaystyle \operatorname {sgn}(x)}$ is either ${\displaystyle 1}$ or ${\displaystyle -1}$. So ${\displaystyle x\neq 0}$ does always fulfill ${\displaystyle |\operatorname {sgn} (x)|=1\geq {\tfrac {1}{2}}}$.

Now we need to fulfill the first inequality ${\displaystyle |x|<\delta }$. From the second inequality, we have just concluded ${\displaystyle x\neq 0}$ . This is particularly true for all ${\displaystyle x}$ with ${\displaystyle 0<x<\delta }$ . Therefore, we choose ${\displaystyle x}$ to be somewhere between ${\displaystyle 0}$ and ${\displaystyle \delta }$ , for instance ${\displaystyle x={\tfrac {0+\delta }{2}}={\tfrac {\delta }{2}}}$.

The following figure shows that this is a sensible choice. The ${\displaystyle 2\epsilon }$-${\displaystyle 2\delta }$-rectangle with ${\displaystyle \epsilon ={\tfrac {1}{2}}}$ and ${\displaystyle \delta ={\tfrac {1}{2}}}$ is drawn here. All points above or below that rectangle are marked red. These are exactly all ${\displaystyle x}$ inside the interval ${\displaystyle (-\delta ,\delta )}$ excluding ${\displaystyle 0}$. Our chosen ${\displaystyle x={\tfrac {\delta }{2}}}$ (red dot) is situated directly in the middle of the red part of the graph above the rectangle:

So choosing ${\displaystyle x={\tfrac {\delta }{2}}}$ is enough to complete the proof:

Proof (Discontinuity of the signum function)

We set ${\displaystyle x_{0}=0}$ (this is where ${\displaystyle f}$ is discontinuous). In addition, we choose ${\displaystyle \epsilon ={\tfrac {1}{2}}}$. Let ${\displaystyle \delta >0}$ be arbitrary. For that given ${\displaystyle \delta }$, we choose ${\displaystyle x={\tfrac {\delta }{2}}}$. Now, on one hand there is:

{\displaystyle {\begin{aligned}{\begin{array}{rrl}&{\frac {1}{2}}&<1\\[0.5em]\implies &{\frac {1}{2}}\delta &<\delta \\[0.5em]\implies &\left|{\frac {1}{2}}\delta \right|&<\delta \\[0.5em]\implies &\left|{\frac {1}{2}}\delta -0\right|&<\delta \\[0.5em]\implies &|x-x_{0}|&<\delta \end{array}}\end{aligned}}}

But on the other hand:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|\operatorname {sgn} (x)-\operatorname {sgn} (x_{0})|\\[0.5em]&=\left|\operatorname {sgn} \left({\frac {\delta }{2}}\right)-\operatorname {sgn} (0)\right|\\[0.5em]&=\left|1-0\right|\\[0.5em]&=1\\[0.5em]&\geq {\frac {1}{2}}=\epsilon \end{aligned}}}

So indeed, ${\displaystyle \operatorname {sgn} }$ is discontinuous at ${\displaystyle x_{0}=0}$ . Hence, the function ${\displaystyle \operatorname {sgn} }$ is discontinuous itself.
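The argument can also be checked numerically. The following Python sketch (an added illustration, not part of the proof; the helper names are our own) confirms that for ${\displaystyle \epsilon ={\tfrac {1}{2}}}$ the witness ${\displaystyle x={\tfrac {\delta }{2}}}$ breaks the epsilon-delta condition at ${\displaystyle x_{0}=0}$ for every tested ${\displaystyle \delta }$:

```python
def sgn(x):
    """Signum function as defined in the exercise."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

def witness_violates(delta, eps=0.5, x0=0.0):
    """Check that x = delta/2 satisfies |x - x0| < delta
    while |sgn(x) - sgn(x0)| >= eps, as in the proof above."""
    x = delta / 2
    return abs(x - x0) < delta and abs(sgn(x) - sgn(x0)) >= eps

# No matter how small delta is chosen, the witness always works:
assert all(witness_violates(10.0 ** (-k)) for k in range(1, 12))
```

Since the witness succeeds for arbitrarily small ${\displaystyle \delta }$, no ${\displaystyle \delta }$ can rescue continuity at ${\displaystyle 0}$.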

## Relation to the sequence criterion

Now, we have two definitions of continuity: the epsilon-delta and the sequence criterion. In order to show that both definitions describe the same concept, we have to prove their equivalence. If the sequence criterion is fulfilled, it must imply that the epsilon-delta criterion holds and vice versa.

### Epsilon-delta criterion implies sequence criterion

Theorem (The epsilon-delta criterion implies the sequence criterion)

Let ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ be any function. If this function satisfies the epsilon-delta criterion at ${\displaystyle x_{0}\in D}$ , then the sequence criterion is fulfilled at ${\displaystyle x_{0}}$ , as well.

How to get to the proof? (The epsilon-delta criterion implies the sequence criterion)

Let us assume that the function ${\displaystyle f:D\to \mathbb {R} }$ satisfies the epsilon-delta criterion at ${\displaystyle x_{0}\in D}$ . That means:

For every ${\displaystyle \epsilon >0}$ , there is a ${\displaystyle \delta >0}$ such that ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ for all ${\displaystyle x\in D}$ with ${\displaystyle |x-x_{0}|<\delta }$ .

We now want to prove that the sequence criterion is satisfied, as well. So we have to show that for any sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ converging to ${\displaystyle x_{0}}$ , there also has to be ${\displaystyle \lim _{n\to \infty }f(x_{n})=f(x_{0})}$ . We therefore consider an arbitrary sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ in the domain, i.e. ${\displaystyle x_{n}\in D}$ , with ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$. Our job is to show that the sequence of function values ${\displaystyle \left(f(x_{n})\right)_{n\in \mathbb {N} }}$ converges to ${\displaystyle f(x_{0})}$ . So by the definition of convergence:

For any ${\displaystyle \epsilon >0}$ there has to be an ${\displaystyle N\in \mathbb {N} }$ such that ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$ for all ${\displaystyle n\geq N}$.

Let ${\displaystyle \epsilon >0}$ be arbitrary. We have to find a suitable ${\displaystyle N\in \mathbb {N} }$ with ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$ for all sequence elements beyond that ${\displaystyle N}$ , i.e. ${\displaystyle n\geq N}$ . The inequality ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$ seems familiar, recalling the epsilon-delta criterion. The only difference is that the argument ${\displaystyle x}$ is replaced by a sequence element ${\displaystyle x_{n}}$ - so we consider a special case for ${\displaystyle x}$ . Let us apply the epsilon-delta criterion to that special case, with our arbitrarily chosen ${\displaystyle \epsilon }$ being given:

There is a ${\displaystyle \delta >0}$, such that ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$ for all sequence elements ${\displaystyle x_{n}}$ fulfilling ${\displaystyle |x_{n}-x_{0}|<\delta }$ .

Our goal is coming closer. Whenever a sequence element ${\displaystyle x_{n}}$ is close to ${\displaystyle x_{0}}$ with ${\displaystyle |x_{n}-x_{0}|<\delta }$ , it will satisfy the inequality which we want to show, namely ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$. It remains to choose an ${\displaystyle N\in \mathbb {N} }$ such that this is the case for all sequence elements beyond ${\displaystyle x_{N}}$ . The convergence ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$ implies that ${\displaystyle |x_{n}-x_{0}|}$ gets arbitrarily small. So by the definition of convergence, we may find an ${\displaystyle {\tilde {N}}\in \mathbb {N} }$ with ${\displaystyle |x_{n}-x_{0}|<\delta }$ for all ${\displaystyle n\geq {\tilde {N}}}$ . This ${\displaystyle {\tilde {N}}}$ now plays the role of our ${\displaystyle N}$. If ${\displaystyle n\geq N={\tilde {N}}}$, it follows that ${\displaystyle |x_{n}-x_{0}|<\delta }$ and hence ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$ by the epsilon-delta criterion. In fact, any ${\displaystyle N\geq {\tilde {N}}}$ will do the job. We now conclude our considerations and write down the proof:

Proof (The epsilon-delta criterion implies the sequence criterion)

Let ${\displaystyle f:D\to \mathbb {R} }$ be a function satisfying the epsilon-delta criterion at ${\displaystyle x_{0}\in D}$ . Let ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ be a sequence inside the domain of definition, i.e. ${\displaystyle x_{n}\in D}$ for all ${\displaystyle n\in \mathbb {N} }$ , converging as ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$. We would like to show that for any given ${\displaystyle \epsilon >0}$ there exists an ${\displaystyle N\in \mathbb {N} }$ , such that ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$ holds for all ${\displaystyle n\geq N}$ .

So let ${\displaystyle \epsilon >0}$ be given. Following the epsilon-delta criterion, there is a ${\displaystyle \delta >0}$, with ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ for all ${\displaystyle x\in D}$ close to ${\displaystyle x_{0}}$ , i.e. ${\displaystyle |x-x_{0}|<\delta }$ . As ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ converges to ${\displaystyle x_{0}}$ , we may find an ${\displaystyle N\in \mathbb {N} }$ with ${\displaystyle |x_{n}-x_{0}|<\delta }$ for all ${\displaystyle n\geq N}$ .

Now, let ${\displaystyle n\geq N}$ be arbitrary. Hence, ${\displaystyle |x_{n}-x_{0}|<\delta }$. The epsilon-delta criterion now implies ${\displaystyle |f(x_{n})-f(x_{0})|<\epsilon }$. This proves ${\displaystyle \lim _{n\to \infty }f(x_{n})=f(x_{0})}$ and therefore establishes the sequence criterion.
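As a concrete illustration of this theorem (an added sketch under the assumptions ${\displaystyle f(x)=x^{2}}$ and ${\displaystyle x_{n}=x_{0}+{\tfrac {1}{n}}}$, not part of the proof), one can watch the function values approach ${\displaystyle f(x_{0})}$ numerically:

```python
def f(x):
    """Example of a continuous function: f(x) = x^2."""
    return x * x

x0 = 2.0
# Sequence of arguments x_n = x0 + 1/n converging to x0 ...
values = [f(x0 + 1.0 / n) for n in range(1, 10001)]

# ... whose function values f(x_n) converge to f(x0),
# as the sequence criterion asserts:
assert abs(values[-1] - f(x0)) < 1e-3
assert abs(values[9] - f(x0)) > abs(values[-1] - f(x0))  # error shrinks
```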

### Sequence criterion implies epsilon-delta criterion

Theorem (The sequence criterion implies the epsilon-delta criterion)

Let ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ be a function. If ${\displaystyle f}$ satisfies the sequence criterion at ${\displaystyle x_{0}\in D}$ , then the epsilon-delta criterion is fulfilled there, as well.

How to get to the proof? (The sequence criterion implies the epsilon-delta criterion)

We need to show that the following implication holds:

${\displaystyle f{\text{ satisfies the sequence criterion in }}x_{0}\in D\implies f{\text{ satisfies the epsilon-delta criterion in }}x_{0}\in D}$

This time, we do not show the implication directly, but using a contraposition. So we will prove the following implication (which is equivalent to the first one):

${\displaystyle \neg \left(f{\text{ satisfies the epsilon-delta criterion in }}x_{0}\in D\right)\implies \neg \left(f{\text{ satisfies the sequence criterion in }}x_{0}\in D\right)}$

Or in other words:

${\displaystyle f{\text{ violates the epsilon-delta criterion in }}x_{0}\in D\implies f{\text{ violates the sequence criterion in }}x_{0}\in D}$

So let ${\displaystyle f:D\to \mathbb {R} }$ be a function that violates the epsilon-delta criterion at ${\displaystyle x_{0}\in D}$ . Hence, ${\displaystyle f}$ fulfills the discontinuity version of the epsilon-delta criterion at ${\displaystyle x_{0}}$: We can find an ${\displaystyle \epsilon >0}$ , such that for any ${\displaystyle \delta >0}$ there is an ${\displaystyle x\in D}$ with ${\displaystyle |x-x_{0}|<\delta }$ but ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$ . It is our job now to prove that the sequence criterion is violated, as well. This requires choosing a sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ , converging as ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$ but ${\displaystyle \lim _{n\to \infty }f(x_{n})\neq f(x_{0})}$ .

This choice will be done exploiting the discontinuity version of the epsilon-delta criterion. That version provides us with an ${\displaystyle \epsilon >0}$ , where ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$ holds (so continuity is violated) for certain arguments ${\displaystyle x}$ . We will now construct our sequence exclusively out of those certain ${\displaystyle x}$ . This will automatically get us ${\displaystyle \lim _{n\to \infty }f(x_{n})\neq f(x_{0})}$.

So how to find a suitable sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$, converging to ${\displaystyle x_{0}}$ ? The answer is: by choosing a null sequence ${\displaystyle (\delta _{n})_{n\in \mathbb {N} }}$. Practically, this is done as follows: we set ${\displaystyle \delta _{n}={\tfrac {1}{n}}}$ . For any ${\displaystyle \delta _{n}}$ , we take one of the certain ${\displaystyle x}$ for ${\displaystyle \delta =\delta _{n}}$ as our argument ${\displaystyle x_{n}}$ . Then, ${\displaystyle |x_{n}-x_{0}|<\delta _{n}}$ but also ${\displaystyle |f(x_{n})-f(x_{0})|\geq \epsilon }$. These ${\displaystyle x_{n}}$ make up the desired sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$. On one hand, there is ${\displaystyle |x_{n}-x_{0}|<\delta _{n}}$ and as ${\displaystyle \lim _{n\to \infty }\delta _{n}=0}$ , the convergence ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$ holds. But on the other hand ${\displaystyle |f(x_{n})-f(x_{0})|\geq \epsilon }$ , so the sequence of function values ${\displaystyle \left(f(x_{n})\right)_{n\in \mathbb {N} }}$ does not converge to ${\displaystyle f(x_{0})}$ . Let us put these thoughts together in a single proof:

Proof (The sequence criterion implies the epsilon-delta criterion)

We establish the theorem by contraposition. It needs to be shown that a function ${\displaystyle f:D\to \mathbb {R} }$ violating the epsilon-delta criterion at ${\displaystyle x_{0}\in D}$ also violates the sequence criterion at ${\displaystyle x_{0}}$ . So let ${\displaystyle f:D\to \mathbb {R} }$ with ${\displaystyle D\subseteq \mathbb {R} }$ be a function violating the epsilon-delta criterion at ${\displaystyle x_{0}\in D}$ . Hence, there is an ${\displaystyle \epsilon >0}$ , such that for all ${\displaystyle \delta >0}$ an ${\displaystyle x\in D}$ exists with ${\displaystyle |x-x_{0}|<\delta }$ but ${\displaystyle |f(x)-f(x_{0})|\geq \epsilon }$ .

So for any ${\displaystyle \delta _{n}={\tfrac {1}{n}}}$ , there is an ${\displaystyle x_{n}\in D}$ with ${\displaystyle |x_{n}-x_{0}|<\delta _{n}}$ but ${\displaystyle |f(x_{n})-f(x_{0})|\geq \epsilon }$. The inequality ${\displaystyle |x_{n}-x_{0}|<\delta _{n}}$ can also be written as ${\displaystyle x_{0}-\delta _{n}<x_{n}<x_{0}+\delta _{n}}$. As ${\displaystyle \lim _{n\to \infty }\delta _{n}=0}$ , there is both ${\displaystyle \lim _{n\to \infty }x_{0}-\delta _{n}=x_{0}}$ and ${\displaystyle \lim _{n\to \infty }x_{0}+\delta _{n}=x_{0}}$. Thus, by the sandwich theorem, the sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ converges to ${\displaystyle x_{0}}$.

But since ${\displaystyle |f(x_{n})-f(x_{0})|\geq \epsilon }$ for all ${\displaystyle n\in \mathbb {N} }$ , the sequence ${\displaystyle \left(f(x_{n})\right)_{n\in \mathbb {N} }}$ can not converge to ${\displaystyle f(x_{0})}$ . Therefore, the sequence criterion is violated at ${\displaystyle x_{0}}$ for the function ${\displaystyle f}$ : We have found a sequence of arguments ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ with ${\displaystyle \lim _{n\to \infty }x_{n}=x_{0}}$ but ${\displaystyle \lim _{n\to \infty }f(x_{n})\neq f(x_{0})}$.
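The construction in this proof can be made concrete for the signum function at ${\displaystyle x_{0}=0}$ with ${\displaystyle \epsilon ={\tfrac {1}{2}}}$: for ${\displaystyle \delta _{n}={\tfrac {1}{n}}}$ we may pick ${\displaystyle x_{n}={\tfrac {\delta _{n}}{2}}}$ as the counterexample sequence. A short Python sketch (an added illustration, not part of the proof) verifies both required properties:

```python
def sgn(x):
    """Signum function, discontinuous at 0."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

eps, x0 = 0.5, 0.0
# x_n = delta_n / 2 with delta_n = 1/n, so |x_n - x0| < delta_n:
xs = [(1.0 / n) / 2 for n in range(1, 1001)]

# The arguments converge to x0 = 0 ...
assert all(abs(x - x0) < 1.0 / n for n, x in enumerate(xs, start=1))
assert xs[-1] < 1e-3
# ... but every function value stays at distance >= eps from f(x0):
assert all(abs(sgn(x) - sgn(x0)) >= eps for x in xs)
```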

## Exercises

### Square function

Exercise (Continuity of the square function)

Prove that the function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)=x^{2}}$ is continuous.

How to get to the proof? (Continuity of the square function)

For this proof, we need to show that the square function is continuous at any argument ${\displaystyle x_{0}\in \mathbb {R} }$ . Using the proof structure for the epsilon-delta criterion, we are given an arbitrary ${\displaystyle \epsilon >0}$ . Our job is to find a suitable ${\displaystyle \delta >0}$ , such that the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds for all ${\displaystyle |x-x_{0}|<\delta }$.

In order to find a suitable ${\displaystyle \delta }$ , we plug in the definition of the function ${\displaystyle f(x)=x^{2}}$ into the expression ${\displaystyle |f(x)-f(x_{0})|}$ which shall be smaller than ${\displaystyle \epsilon }$:

${\displaystyle |f(x)-f(x_{0})|=\left|x^{2}-x_{0}^{2}\right|}$

The expression ${\displaystyle |x-x_{0}|}$ may easily be controlled by ${\displaystyle \delta }$. Hence, it makes sense to construct an upper estimate for ${\displaystyle \left|x^{2}-x_{0}^{2}\right|}$ which includes ${\displaystyle |x-x_{0}|}$ and a constant. The factor ${\displaystyle |x-x_{0}|}$ appears if we perform a factorization using the third binomial formula:

${\displaystyle |x^{2}-x_{0}^{2}|=|x+x_{0}||x-x_{0}|}$

The requirement ${\displaystyle |x-x_{0}|<\delta }$ allows for an upper estimate of our expression:

${\displaystyle |x+x_{0}||x-x_{0}|<|x+x_{0}|\cdot \delta }$

The ${\displaystyle \delta }$ we are looking for may only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$ . So the dependence on ${\displaystyle x}$ in the factor ${\displaystyle |x+x_{0}|\cdot \delta }$ is still a problem. We resolve it by making a further upper estimate for the factor ${\displaystyle |x+x_{0}|}$ . We will use a simple, but widely applied "trick" for that: An ${\displaystyle x_{0}}$ is subtracted and then added again at another place (so we are effectively adding a 0), such that the expression ${\displaystyle x-x_{0}}$ appears:

${\displaystyle |x+x_{0}|\cdot \delta =|x\underbrace {-x_{0}+x_{0}} _{=\ 0}+x_{0}|\cdot \delta =|x-x_{0}+2x_{0}|\cdot \delta }$

The absolute value ${\displaystyle |x-x_{0}|}$ is then obtained using the triangle inequality and can again be bounded from above by ${\displaystyle \delta }$ :

${\displaystyle |x-x_{0}+2x_{0}|\cdot \delta \leq (|x-x_{0}|+|2x_{0}|)\cdot \delta <(\delta +2|x_{0}|)\cdot \delta }$

So reshaping expressions and applying estimates, we obtain:

${\displaystyle |f(x)-f(x_{0})|<(\delta +2|x_{0}|)\cdot \delta }$

With this inequality in hand, we are almost done. If ${\displaystyle \delta }$ is chosen in a way that ${\displaystyle (\delta +2|x_{0}|)\cdot \delta \leq \epsilon }$ , we will get the final inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ . Such a ${\displaystyle \delta }$ can be found by solving the quadratic equation ${\displaystyle \delta ^{2}+2|x_{0}|\delta =\epsilon }$ for ${\displaystyle \delta }$ . Or even simpler, we may estimate ${\displaystyle (\delta +2|x_{0}|)\cdot \delta }$ from above. We use that we may freely impose any condition on ${\displaystyle \delta }$ . If we, for instance, set ${\displaystyle \delta \leq 1}$, then ${\displaystyle \delta +2|x_{0}|\leq 1+2|x_{0}|}$ which simplifies things:

${\displaystyle |f(x)-f(x_{0})|<(\underbrace {\delta +2|x_{0}|} _{\leq 1+2|x_{0}|})\cdot \delta \leq (1+2|x_{0}|)\cdot \delta }$

So ${\displaystyle (1+2|x_{0}|)\cdot \delta \leq \epsilon }$ will also do the job. This inequality can be solved for ${\displaystyle \delta }$ to get the second condition on ${\displaystyle \delta }$ (the first one was ${\displaystyle \delta \leq 1}$):

${\displaystyle (1+2|x_{0}|)\cdot \delta \leq \epsilon \iff \delta \leq {\frac {\epsilon }{1+2|x_{0}|}}}$

So any ${\displaystyle \delta }$ fulfilling both conditions does the job: ${\displaystyle \delta \leq 1}$ and ${\displaystyle \delta \leq {\tfrac {\epsilon }{1+2|x_{0}|}}}$ have to hold. And indeed, both are true for ${\displaystyle \delta :=\min \left\{1,{\tfrac {\epsilon }{1+2|x_{0}|}}\right\}}$. This choice will be included into the final proof:

Proof (Continuity of the square function)

Let ${\displaystyle x_{0}\in \mathbb {R} }$ and ${\displaystyle \epsilon >0}$ be arbitrary, and set ${\displaystyle \delta :=\min \left\{1,{\tfrac {\epsilon }{1+2|x_{0}|}}\right\}}$. If an argument ${\displaystyle x}$ fulfills ${\displaystyle \left|x-x_{0}\right|<\delta }$ then:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|x^{2}-x_{0}^{2}|\\[0.5em]&=|x+x_{0}||x-x_{0}|\\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<|x+x_{0}|\cdot \delta \\[0.5em]&=|x\underbrace {-x_{0}+x_{0}} _{=\ 0}+x_{0}|\cdot \delta \\[0.5em]&=|x-x_{0}+2x_{0}|\cdot \delta \\[0.5em]&\leq (|x-x_{0}|+2|x_{0}|)\cdot \delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ |x-x_{0}|<\delta \right.}\\[0.5em]&<(\delta +2|x_{0}|)\cdot \delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ \delta \leq 1\right.}\\[0.5em]&\leq (1+2|x_{0}|)\cdot \delta \\[0.5em]&\quad {\color {OliveGreen}\left\downarrow \ \delta \leq {\frac {\epsilon }{1+2|x_{0}|}}\right.}\\[0.5em]&\leq (1+2|x_{0}|)\cdot {\frac {\epsilon }{1+2|x_{0}|}}\\[0.5em]&=\epsilon \end{aligned}}}

This shows that the square function is continuous by the epsilon-delta criterion.
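The chosen ${\displaystyle \delta }$ can also be sanity-checked numerically. The following Python sketch (an added illustration based on random sampling, which of course does not replace the proof) tests the implication ${\displaystyle |x-x_{0}|<\delta \implies |x^{2}-x_{0}^{2}|<\epsilon }$ for many random ${\displaystyle x_{0}}$ and ${\displaystyle \epsilon }$:

```python
import random

def delta_for(x0, eps):
    """The delta from the proof: min(1, eps / (1 + 2*|x0|))."""
    return min(1.0, eps / (1 + 2 * abs(x0)))

random.seed(0)
for _ in range(10000):
    x0 = random.uniform(-100, 100)
    eps = random.uniform(1e-6, 10)
    delta = delta_for(x0, eps)
    # Pick an x with |x - x0| < delta (0.999 keeps the inequality strict):
    x = x0 + random.uniform(-delta, delta) * 0.999
    assert abs(x * x - x0 * x0) < eps
```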

### Concatenated absolute function

Exercise (Example for a proof of continuity)

Prove that the following function is continuous at ${\displaystyle x_{0}=1}$:

${\displaystyle f:\mathbb {R} \to \mathbb {R} :x\mapsto f(x)=5\left|x^{2}-2\right|+3}$

How to get to the proof? (Example for a proof of continuity)

We need to show that for each given ${\displaystyle \epsilon >0}$ , there is a ${\displaystyle \delta >0}$ , such that for all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-x_{0}|<\delta }$ the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ holds. In our case, ${\displaystyle x_{0}=1}$. So by requiring ${\displaystyle |x-1|<\delta }$ for ${\displaystyle \delta }$ small enough, we may control the expression ${\displaystyle |x-1|}$ . First, let us plug ${\displaystyle x_{0}=1}$ into ${\displaystyle |f(x)-f(x_{0})|}$ in order to simplify the inequality ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ to be shown:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|f(x)-f(1)|\\[0.5em]&=|5\cdot |x^{2}-2|+3-(5|1^{2}-2|+3)|\\[0.5em]&=|5\cdot |x^{2}-2|-5|\\[0.5em]&=5\cdot ||x^{2}-2|-1|\end{aligned}}}

The objective is to "produce" as many expressions ${\displaystyle |x-1|}$ as possible, since we can control ${\displaystyle |x-1|<\delta }$. It requires some experience with epsilon-delta proofs in order to "directly see" how this is achieved. First, we need to get rid of the double absolute. This is done using the inequality ${\displaystyle ||a|-|b||\leq |a-b|}$ . For instance, we could use the following estimate:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.3em]&=5\cdot ||x^{2}-2|-1|\\[0.3em]&=5\cdot ||x^{2}-2|-|1||\\[0.3em]&\ {\color {Gray}\left\downarrow \ ||a|-|b||\leq |a-b|\right.}\\[0.3em]&\leq 5\cdot |x^{2}-2-1|\\[0.3em]&=5\cdot |x^{2}-3|\end{aligned}}}

However, this is a bad estimate as the expression ${\displaystyle |x^{2}-3|}$ no longer tends to 0 as ${\displaystyle x\to 1}$ . To resolve this problem, we use ${\displaystyle 1=|-1|}$ before applying the inequality ${\displaystyle ||a|-|b||\leq |a-b|}$ :

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.3em]&=5\cdot ||x^{2}-2|-1|\\[0.3em]&=5\cdot ||x^{2}-2|-|-1||\\[0.3em]&\quad {\color {Gray}\left\downarrow \ |a-b|\geq ||a|-|b||\right.}\\[0.3em]&\leq 5\cdot |x^{2}-2-(-1)|\\[0.3em]&=5\cdot |x^{2}-1|\end{aligned}}}

A factor of ${\displaystyle |x-1|}$ can be directly extracted out of this with the third binomial formula:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&\leq 5\cdot |x^{2}-1|\\[0.5em]&=5\cdot |x+1||x-1|\end{aligned}}}

And we can control it by ${\displaystyle |x-1|<\delta }$:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&\leq 5\cdot |x^{2}-1|\\[0.5em]&=5\cdot |x+1||x-1|\\[0.5em]&<5\cdot |x+1|\cdot \delta \end{aligned}}}

Now, the required ${\displaystyle \delta }$ must only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$ . Therefore, we have to get rid of the ${\displaystyle x}$-dependence of ${\displaystyle 5|x+1|\cdot \delta }$. This can be done by finding an upper bound for ${\displaystyle 5|x+1|}$ which does not depend on ${\displaystyle x}$. As we are free to choose any ${\displaystyle \delta }$ for our proof, we may also impose any condition on it which helps us with the upper bound. In this case, ${\displaystyle \delta \leq 1}$ turns out to be quite useful. In fact, ${\displaystyle \delta \leq 2}$ or an even higher bound would do this job, as well. What follows from this choice?

As before, there is ${\displaystyle |x-1|<\delta }$. As ${\displaystyle \delta \leq 1}$ , we now have ${\displaystyle |x-1|<1}$ , i.e. ${\displaystyle x\in (0;2)}$ . Hence, ${\displaystyle 1+x\in (1;3)}$ and ${\displaystyle |x+1|\leq 3}$. This is the upper bound we were looking for:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&<5\cdot |x+1|\cdot \delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x+1|\leq 3\right.}\\[0.5em]&\leq 15\cdot \delta \end{aligned}}}

As we would like to show ${\displaystyle |f(x)-f(1)|<\epsilon }$ , we set ${\displaystyle \delta ={\tfrac {\epsilon }{15}}}$ and get that our final inequality holds:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots \\[0.5em]&<15\cdot \delta \\[0.5em]&\leq 15\cdot {\frac {\epsilon }{15}}\\[0.5em]&=\epsilon .\end{aligned}}}

So if the two conditions for ${\displaystyle \delta }$ are satisfied, we get the final inequality. In fact, both conditions are satisfied for ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$. Let us now conclude our ideas and write them down in a proof:

Proof (Example for a proof of continuity)

Let ${\displaystyle \epsilon >0}$ be arbitrary and let ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$. Further, let ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-1|<\delta }$. Then:

Step 1: ${\displaystyle |x+1|\leq 3}$

As ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$ , there is ${\displaystyle \delta \leq 1}$. Hence ${\displaystyle |x-1|<1}$ and ${\displaystyle x\in (0;2)}$. It follows that ${\displaystyle 1+x\in (1;3)}$ and therefore ${\displaystyle |x+1|\leq 3}$.

Step 2: ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=|f(x)-f(1)|\\[0.5em]&=|5\cdot |x^{2}-2|+3-(5|1^{2}-2|+3)|\\[0.5em]&=|5\cdot |x^{2}-2|-5|\\[0.5em]&=5\cdot ||x^{2}-2|-1|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ 1=|-1|\right.}\\[0.5em]&=5\cdot ||x^{2}-2|-|-1||\\[0.5em]&\quad {\color {Gray}\left\downarrow \ |a-b|\geq ||a|-|b||\right.}\\[0.5em]&\leq 5\cdot |x^{2}-2-(-1)|\\[0.5em]&=5\cdot |x^{2}-1|\\[0.5em]&\quad {\color {Gray}\left\downarrow \ a^{2}-b^{2}=(a+b)(a-b)\right.}\\[0.5em]&=5\cdot (|x+1|\cdot \underbrace {|x-1|} _{<\delta })\\[0.5em]&<5\cdot |x+1|\cdot \delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ |x+1|\leq 3\right.}\\[0.5em]&\leq 15\delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ \delta \leq {\frac {\epsilon }{15}}\right.}\\[0.5em]&\leq 15\cdot {\frac {\epsilon }{15}}\\[0.5em]&=\epsilon \end{aligned}}}

Hence, the function is continuous at ${\displaystyle x_{0}=1}$ .
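Here, too, the choice ${\displaystyle \delta =\min \left\{1,{\tfrac {\epsilon }{15}}\right\}}$ can be probed numerically. The Python sketch below (an added illustration via random sampling, not a proof) checks that ${\displaystyle |x-1|<\delta }$ forces ${\displaystyle |f(x)-f(1)|<\epsilon }$:

```python
import random

def f(x):
    """The function from the exercise: f(x) = 5*|x^2 - 2| + 3."""
    return 5 * abs(x * x - 2) + 3

random.seed(0)
x0 = 1.0
for _ in range(10000):
    eps = random.uniform(1e-6, 20)
    delta = min(1.0, eps / 15)
    # Pick an x with |x - 1| < delta (0.999 keeps the inequality strict):
    x = x0 + random.uniform(-delta, delta) * 0.999
    assert abs(f(x) - f(x0)) < eps
```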

### Hyperbola

Exercise (Continuity of the hyperbolic function)

Prove that the function ${\displaystyle f:\mathbb {R} ^{+}\to \mathbb {R} }$ with ${\displaystyle f(x)={\tfrac {1}{x}}}$ is continuous.

How to get to the proof? (Continuity of the hyperbolic function)

The basic pattern for epsilon-delta proofs is applied here, as well. We would like to show the implication ${\displaystyle |x-x_{0}|<\delta \implies |f(x)-f(x_{0})|<\epsilon }$ . First, let us plug in what we know and reshape our terms a bit until a ${\displaystyle |x-x_{0}|}$ appears:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\left|{\frac {1}{x}}-{\frac {1}{x_{0}}}\right|\\[0.5em]&=\left|{\frac {x-x_{0}}{x\cdot x_{0}}}\right|\\[0.5em]&={\frac {|x-x_{0}|}{|x||x_{0}|}}\\[0.5em]&={\frac {1}{|x||x_{0}|}}\cdot |x-x_{0}|\end{aligned}}}

By assumption, there will be ${\displaystyle |x-x_{0}|<\delta }$, of which we can make use:

${\displaystyle |f(x)-f(x_{0})|=\ldots ={\frac {1}{|x||x_{0}|}}\cdot |x-x_{0}|<{\frac {1}{|x||x_{0}|}}\cdot \delta }$

The choice of ${\displaystyle \delta }$ may again only depend on ${\displaystyle \epsilon }$ and ${\displaystyle x_{0}}$ so we need a smart estimate for ${\displaystyle {\tfrac {1}{|x|}}}$ in order to get rid of the ${\displaystyle x}$-dependence. To do so, we consider ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$.

Why was ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$ and not ${\displaystyle \delta \leq 1}$ chosen? The explanation is quite simple: We need a ${\displaystyle \delta }$-neighborhood inside the domain of definition of ${\displaystyle f}$. If we had simply chosen ${\displaystyle \delta \leq 1}$ , we might have been kicked out of this domain. For instance, in case ${\displaystyle x_{0}={\tfrac {1}{2}}}$, the following problem appears:

The biggest ${\displaystyle x}$-value with ${\displaystyle \left|x-{\tfrac {1}{2}}\right|<1}$ is ${\displaystyle x_{\mathrm {max} }={\tfrac {3}{2}}}$ and the smallest one is ${\displaystyle x_{\mathrm {min} }=-{\tfrac {1}{2}}}$. However, ${\displaystyle x_{\mathrm {min} }}$ is not inside the domain of definition, as ${\displaystyle f:\mathbb {R} ^{+}\to \mathbb {R} }$. In particular, ${\displaystyle x=0}$ is an element of that interval, where ${\displaystyle f(x)={\tfrac {1}{x}}}$ cannot be defined at all.

A smarter choice for ${\displaystyle \delta }$, such that the ${\displaystyle \delta }$-neighborhood doesn't touch the ${\displaystyle y}$-axis is half of the distance to it, i.e. ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$. A third of this distance or other fractions smaller than 1 would also be possible: ${\displaystyle \delta \leq {\tfrac {x_{0}}{3}}}$, ${\displaystyle \delta <{\tfrac {x_{0}}{20}}}$ or ${\displaystyle \delta <{\tfrac {x_{0}}{5321}}}$.

As we chose ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$ and by ${\displaystyle |x-x_{0}|<\delta }$ , there is ${\displaystyle x\in \left[{\tfrac {x_{0}}{2}};{\tfrac {3x_{0}}{2}}\right]}$. This allows for an upper bound: ${\displaystyle {\tfrac {1}{|x|}}\leq {\tfrac {2}{|x_{0}|}}}$ and we may write:

{\displaystyle {\begin{aligned}|f(x)-f(x_{0})|&=\ldots <{\frac {1}{|x||x_{0}|}}\cdot \delta \\[0.5em]&\quad {\color {Gray}\left\downarrow \ {\frac {1}{|x|}}\leq {\frac {2}{|x_{0}|}}\right.}\\[0.5em]&\leq {\frac {2}{|x_{0}|^{2}}}\cdot \delta \end{aligned}}}

So we get the estimate:

${\displaystyle |f(x)-f(x_{0})|\leq {\frac {2}{|x_{0}|^{2}}}\cdot \delta }$
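This estimate can likewise be checked numerically. The Python sketch below (an added illustration via random sampling, not a proof; the helper name is our own) verifies that ${\displaystyle \delta \leq {\tfrac {x_{0}}{2}}}$ together with ${\displaystyle |x-x_{0}|<\delta }$ keeps ${\displaystyle x}$ inside the domain ${\displaystyle \mathbb {R} ^{+}}$ and forces ${\displaystyle \left|{\tfrac {1}{x}}-{\tfrac {1}{x_{0}}}\right|\leq {\tfrac {2}{x_{0}^{2}}}\cdot \delta }$:

```python
import random

def bound_holds(x0, delta, x):
    """Check the claimed estimate |1/x - 1/x0| <= (2/x0^2) * delta."""
    return abs(1 / x - 1 / x0) <= (2 / (x0 * x0)) * delta

random.seed(0)
for _ in range(10000):
    x0 = random.uniform(0.01, 100)
    delta = (x0 / 2) * random.uniform(0.001, 1.0)   # any delta <= x0/2
    # Pick an x with |x - x0| < delta (0.999 keeps the inequality strict):
    x = x0 + random.uniform(-delta, delta) * 0.999
    assert x > 0                                    # stays in the domain R+
    assert bound_holds(x0, delta, x)
```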

Now, we want to prove ${\displaystyle |f(x)-f(x_{0})|<\epsilon }$ . Hence we choose