# Exercises: Continuity – Serlo

## Lipschitz continuous functions are uniformly continuous

Exercise

Let ${\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} }$ be Lipschitz continuous with Lipschitz constant ${\displaystyle K\in \mathbb {R} ^{+}}$. That is,

${\displaystyle |f(x)-f(y)|\leq K\cdot |x-y|}$

for all ${\displaystyle x,y\in \mathbb {R} }$. Prove that ${\displaystyle f}$ is uniformly continuous.

How to get to the proof?

We need to show that for all ${\displaystyle \epsilon >0}$ there is a ${\displaystyle \delta >0}$ such that for all ${\displaystyle x,y\in \mathbb {R} }$ with ${\displaystyle |x-y|<\delta }$ we have ${\displaystyle |f(x)-f(y)|<\epsilon }$. By our assumption, we have

${\displaystyle |f(x)-f(y)|\leq K\cdot |x-y|<K\cdot \delta }$

In order for ${\displaystyle |f(x)-f(y)|<\epsilon }$ to hold, it suffices to have ${\displaystyle K\cdot \delta =\epsilon }$. We can reach this by taking ${\displaystyle \delta ={\tfrac {\epsilon }{K}}}$.

Proof

Let ${\displaystyle \epsilon >0}$ be arbitrary. We choose ${\displaystyle \delta ={\tfrac {\epsilon }{K}}}$. Then, for all ${\displaystyle x,y\in \mathbb {R} }$ with ${\displaystyle |x-y|<\delta ={\tfrac {\epsilon }{K}}}$:

${\displaystyle |f(x)-f(y)|\leq K\cdot |x-y|<K\cdot \delta =K\cdot {\tfrac {\epsilon }{K}}=\epsilon }$
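The choice ${\displaystyle \delta ={\tfrac {\epsilon }{K}}}$ can be illustrated numerically. The following sketch uses a hypothetical Lipschitz function ${\displaystyle f(x)=3\sin(x)}$ with constant ${\displaystyle K=3}$ (any Lipschitz function would do) and samples random pairs with ${\displaystyle |x-y|<\delta }$:

```python
import math
import random

# Hypothetical example: f(x) = 3*sin(x) is Lipschitz with constant K = 3,
# since |sin(x) - sin(y)| <= |x - y|.
K = 3.0
f = lambda x: K * math.sin(x)

eps = 0.1
delta = eps / K          # the delta from the proof

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = x + random.uniform(-delta, delta) * 0.999   # guarantees |x - y| < delta
    assert abs(f(x) - f(y)) <= K * abs(x - y)       # Lipschitz estimate
    assert abs(f(x) - f(y)) < eps                   # uniform continuity bound
```

Note that ${\displaystyle \delta }$ depends only on ${\displaystyle \epsilon }$, not on the sampled points, which is exactly what uniform continuity demands.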

## Continuity at the origin

Exercise

Prove that the following function is continuous at the origin ${\displaystyle x=0}$:

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,~f(x)={\begin{cases}K\cdot x\cdot \sin \left({\frac {1}{x}}\right),&x\neq 0\\0,&x=0\end{cases}}}$

with a real number ${\displaystyle K>0}$.

How to get to the proof?

In order to establish continuity at the origin ${\displaystyle x=0}$, we make use of the epsilon-delta criterion. That means, for all ${\displaystyle \epsilon >0}$ we have to find a ${\displaystyle \delta >0}$ such that the inequality ${\displaystyle |f(x)-f(0)|<\epsilon }$ holds for all ${\displaystyle x}$ with ${\displaystyle |x|<\delta }$. The question now is how to find a suitable ${\displaystyle \delta }$ for any given ${\displaystyle \epsilon >0}$. To answer this question, we take a look at the function ${\displaystyle f}$ around ${\displaystyle x=0}$:

Since ${\displaystyle \left\vert \sin \left({\frac {1}{x}}\right)\right\vert \leq 1}$ for ${\displaystyle x\neq 0}$, there is

${\displaystyle |f(x)-f(0)|=|f(x)|=\left\vert K\cdot x\cdot \sin \left({\frac {1}{x}}\right)\right\vert =K\cdot |x|\left\vert \sin \left({\frac {1}{x}}\right)\right\vert \leq K\cdot |x|}$

This inequality also holds at ${\displaystyle x=0}$, since ${\displaystyle |f(0)|=0}$. Visually, the inequality above means that the graph of the function ${\displaystyle f}$ fits inside the "double wedge" given by ${\displaystyle |f(x)|\leq K\cdot |x|}$, where ${\displaystyle K>0}$ tells us how much the wedge is "stretched" in ${\displaystyle y}$-direction. So if we choose ${\displaystyle K\cdot \delta =\epsilon \Leftrightarrow \delta ={\tfrac {\epsilon }{K}}}$, then at any ${\displaystyle x}$ with ${\displaystyle |x|<\delta }$, there is

${\displaystyle |f(x)-f(0)|\leq K\cdot |x-0|<K\cdot \delta =K\cdot {\tfrac {\epsilon }{K}}=\epsilon }$

This estimate is exactly what we need to carry out the proof (see below).

Note: We can also "move the double wedge" to any point ${\displaystyle (x_{0},f(x_{0}))}$ when investigating continuity at ${\displaystyle x_{0}}$. If the graph fits inside the "moved double wedge" ${\displaystyle |f(x)-f(x_{0})|\leq K\cdot |x-x_{0}|}$, then for any ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta ={\tfrac {\epsilon }{K}}}$, there is

${\displaystyle |f(x)-f(x_{0})|\leq K\cdot |x-x_{0}|<K\cdot \delta =K\cdot {\tfrac {\epsilon }{K}}=\epsilon }$

So any function fitting in such a double wedge is continuous. The converse does not hold true: there are functions which do not fit into any double wedge ${\displaystyle |f(x)-f(x_{0})|\leq K\cdot |x-x_{0}|}$ around ${\displaystyle (x_{0},f(x_{0}))}$ but are still continuous at ${\displaystyle x_{0}}$. An example is the square root function ${\displaystyle g:[0,\infty )\to \mathbb {R} ,\quad g(x)={\sqrt {x}}}$ around ${\displaystyle x_{0}=0}$.

Proof

Let ${\displaystyle \epsilon >0}$ be arbitrary. We choose ${\displaystyle \delta ={\tfrac {\epsilon }{K}}}$. Let ${\displaystyle x}$ be any real number with ${\displaystyle 0<|x|<\delta }$ (for ${\displaystyle x=0}$ there is trivially ${\displaystyle |f(0)-f(0)|=0<\epsilon }$). Then:

{\displaystyle {\begin{aligned}\left|f(x)-f(0)\right|&=\left|K\cdot x\cdot \sin \left({\frac {1}{x}}\right)-0\right|\\[0.3em]&=K\cdot |x|\cdot \left|\sin \left({\frac {1}{x}}\right)\right|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ |\sin(\cdot )|\in [0,1]\right.}\\[0.3em]&\leq K\cdot |x|\\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\text{by assumption }}|x|<\delta \right.}\\[0.3em]&<K\cdot \delta \\[0.3em]&\quad {\color {Gray}\left\downarrow \ {\text{by choice }}\delta ={\tfrac {\epsilon }{K}}\right.}\\[0.3em]&=\epsilon \\[0.3em]\end{aligned}}}

This shows that ${\displaystyle f}$ is continuous at ${\displaystyle x=0}$ .
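As a numerical sanity check (with an arbitrary sample constant ${\displaystyle K=2.5}$), both the wedge bound ${\displaystyle |f(x)|\leq K\cdot |x|}$ and the choice ${\displaystyle \delta ={\tfrac {\epsilon }{K}}}$ can be tested on random points:

```python
import math
import random

K = 2.5   # arbitrary positive constant, chosen only for illustration
def f(x):
    return K * x * math.sin(1.0 / x) if x != 0 else 0.0

random.seed(0)
# Wedge bound |f(x)| <= K*|x| from the discussion above:
for _ in range(1000):
    x = random.uniform(-1.0, 1.0)
    assert abs(f(x)) <= K * abs(x) + 1e-12

# Epsilon-delta choice delta = eps/K at the origin:
eps = 0.01
delta = eps / K
for _ in range(1000):
    x = random.uniform(-delta, delta) * 0.999   # guarantees |x| < delta
    assert abs(f(x) - f(0)) < eps
```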

## Extreme value theorem

Exercise (Maximum and minimum of a function)

Prove that the function ${\displaystyle f(x)={\frac {6x^{2}+x}{x^{3}+x^{2}+x+1}}}$ defined on ${\displaystyle [1,\infty )}$ attains a maximum, but not a minimum.

Solution (Maximum and minimum of a function)

Proof step: ${\displaystyle f}$ attains a maximum

The function ${\displaystyle f}$ is continuous on ${\displaystyle [1,\infty )}$, since it is composed of continuous functions and the denominator is strictly positive (${\displaystyle >0}$). The numerator is also strictly positive, so ${\displaystyle f(x)>0}$ for ${\displaystyle x\geq 1}$, ${\displaystyle f(1)={\frac {7}{4}}>1}$, and

${\displaystyle \lim _{x\to \infty }f(x)=\lim _{x\to \infty }{\frac {6x^{2}+x}{x^{3}+x^{2}+x+1}}=\lim _{x\to \infty }{\frac {{\frac {6}{x}}+{\frac {1}{x^{2}}}}{1+{\frac {1}{x}}+{\frac {1}{x^{2}}}+{\frac {1}{x^{3}}}}}={\frac {0}{1}}=0}$

That means, there is an ${\displaystyle x_{0}>1}$ with ${\displaystyle f(x)<1}$ for all ${\displaystyle x\geq x_{0}}$. The extreme value theorem implies that ${\displaystyle f}$ attains a maximum on ${\displaystyle [1,x_{0}]}$. Since ${\displaystyle f(x)<1<f(1)}$ for ${\displaystyle x\geq x_{0}}$, this maximum is even global. However, computing the maximum explicitly would require us to solve ${\displaystyle f'(x)=0}$, which is quite a computational effort and can be even harder for other functions ${\displaystyle f}$. The extreme value theorem just allowed us to quickly show that there is a maximum, saving us the tedious solution of ${\displaystyle f'(x)=0}$.
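A quick numerical check (not a proof) supports this: evaluating ${\displaystyle f}$ on a grid shows a maximum above ${\displaystyle f(1)={\tfrac {7}{4}}}$, while the tail values drop below 1. The grid range and step size are arbitrary choices:

```python
def f(x):
    return (6 * x**2 + x) / (x**3 + x**2 + x + 1)

assert abs(f(1) - 7 / 4) < 1e-12       # value at the left endpoint

# Sample f on [1, 201]; the maximum is attained early, the tail decays to 0.
xs = [1 + k * 0.001 for k in range(200001)]
m = max(f(x) for x in xs)
assert m > f(1)                        # the maximum exceeds f(1) = 7/4
assert f(201) < 1                      # beyond some x0, f stays below 1
```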

Proof step: ${\displaystyle f}$ does not attain a minimum

We have ${\displaystyle f>0}$ on ${\displaystyle [1,\infty )}$. Indeed, ${\displaystyle f}$ approaches 0 (see the previous proof step), but it never attains 0. Hence there cannot be a minimum: if a minimum ${\displaystyle f(x_{\epsilon })=\epsilon >0}$ were attained, then by ${\displaystyle \lim _{x\to \infty }f(x)=0}$ there would be some ${\displaystyle x}$ with ${\displaystyle f(x)<{\tfrac {\epsilon }{2}}<\epsilon }$, which is a contradiction.

Exercise (How often is a value attained #1)

1. Show that there is no continuous function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ , which attains each of its function values exactly twice.
2. Is there a continuous function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ , which attains each of its function values exactly three times?

Exercise (How often is a value attained #2)

Let ${\displaystyle n\in \mathbb {N} }$ with ${\displaystyle n>1}$. Show: There is no continuous function ${\displaystyle f:[0,1]\to \mathbb {R} }$ , which attains each of its function values exactly ${\displaystyle n}$ times.

## Intermediate value theorem and zeros

Exercise (Zero of a function)

Prove that the function

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,\ f(x)=\sin(x)-e^{-x}}$

has exactly one zero inside the interval ${\displaystyle [0,{\tfrac {\pi }{2}}]}$.

Solution (Zero of a function)

Proof step: ${\displaystyle f}$ has at least one zero

${\displaystyle f}$ is continuous as it is composed of the continuous functions ${\displaystyle \sin }$ and ${\displaystyle \exp }$. In addition,

${\displaystyle f(0)=\sin(0)-e^{0}=-1<0}$

and

${\displaystyle f({\tfrac {\pi }{2}})=\sin({\tfrac {\pi }{2}})-e^{-{\tfrac {\pi }{2}}}=1-\underbrace {e^{-{\tfrac {\pi }{2}}}} _{<1}>0}$

By means of the intermediate value theorem, there must be an ${\displaystyle {\tilde {x}}\in [0,{\tfrac {\pi }{2}}]}$ with ${\displaystyle f({\tilde {x}})=0}$.

Proof step: ${\displaystyle f}$ has exactly one zero

${\displaystyle \sin }$ is strictly monotonously increasing on ${\displaystyle [0,{\tfrac {\pi }{2}}]}$. The function ${\displaystyle x\mapsto \exp(x)}$ is also strictly monotonously increasing, so ${\displaystyle x\mapsto \exp(-x)}$ is strictly decreasing and ${\displaystyle x\mapsto -\exp(-x)}$ is again strictly increasing. Hence ${\displaystyle f}$ is strictly monotonously increasing on ${\displaystyle [0,{\tfrac {\pi }{2}}]}$, and there can be only one zero ${\displaystyle {\tilde {x}}\in [0,{\tfrac {\pi }{2}}]}$ with ${\displaystyle f({\tilde {x}})=0}$: a function with two zeros ${\displaystyle f({\tilde {x}}_{1})=f({\tilde {x}}_{2})=0}$ is never strictly monotonously increasing (it would either have to stay constant or go "down again" between ${\displaystyle {\tilde {x}}_{1}}$ and ${\displaystyle {\tilde {x}}_{2}}$).
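The zero guaranteed by the intermediate value theorem can be located numerically with bisection, which mirrors the sign-change argument above:

```python
import math

f = lambda x: math.sin(x) - math.exp(-x)

lo, hi = 0.0, math.pi / 2
assert f(lo) < 0 < f(hi)       # the sign change from the proof

# Bisection: keep a sign change inside [lo, hi] and halve the interval.
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
assert abs(f(root)) < 1e-12    # numerical approximation of the unique zero
```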

Exercise (Solution of an equation)

Let ${\displaystyle a,b,c\in \mathbb {R} }$ with ${\displaystyle a<b<c}$. Prove that the equation

${\displaystyle {\frac {1}{x-a}}+{\frac {1}{x-b}}+{\frac {1}{x-c}}=2}$

has at least three solutions.

Solution (Solution of an equation)

It is a powerful trick in mathematics to transform the problem of finding solutions ${\displaystyle x}$ of ${\displaystyle f(x)=g(x)}$ into finding zeros of an auxiliary function ${\displaystyle h(x)=f(x)-g(x)}$ (if ${\displaystyle h(x)=0}$, then ${\displaystyle f(x)=g(x)}$ and vice versa). In our case, the continuous auxiliary function is

${\displaystyle h:\mathbb {R} \setminus \{a,b,c\}\to \mathbb {R} ,\ h(x)={\frac {1}{x-a}}+{\frac {1}{x-b}}+{\frac {1}{x-c}}-2}$

When approaching ${\displaystyle a}$ from the right and ${\displaystyle b}$ from the left, this function diverges:

${\displaystyle \lim _{x\to a+}h(x)=\infty }$

and

${\displaystyle \lim _{x\to b-}h(x)=-\infty }$

Therefore, there must be two arguments ${\displaystyle x_{1},x_{2}\in (a,b)}$ with ${\displaystyle h(x_{1})>0}$ and ${\displaystyle h(x_{2})<0}$ (${\displaystyle x_{1}}$ is close to ${\displaystyle a}$ and ${\displaystyle x_{2}}$ close to ${\displaystyle b}$). By the intermediate value theorem, there must hence be a zero ${\displaystyle {\tilde {x}}\in [x_{1},x_{2}]\subset (a,b)}$ with ${\displaystyle h({\tilde {x}})=0}$. This zero ${\displaystyle {\tilde {x}}}$ is one solution of the above equation.

The same argument works between ${\displaystyle b}$ and ${\displaystyle c}$. Since ${\displaystyle \lim _{x\to b+}h(x)=\infty }$ and ${\displaystyle \lim _{x\to c-}h(x)=-\infty }$, we can use the intermediate value theorem and get a zero ${\displaystyle {\tilde {y}}\in (b,c)}$ with ${\displaystyle h({\tilde {y}})=0}$. This is the second solution we have been looking for.

The third solution follows by a similar argument. There is ${\displaystyle \lim _{x\to c+}h(x)=\infty }$ and ${\displaystyle \lim _{x\to \infty }h(x)=-2<0}$. So the intermediate value theorem yields a ${\displaystyle {\tilde {z}}\in (c,\infty )}$ with ${\displaystyle h({\tilde {z}})=0}$. The equation therefore has at least three solutions.
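We can cross-check this with bisection for a hypothetical sample choice ${\displaystyle a=0}$, ${\displaystyle b=1}$, ${\displaystyle c=2}$ (any ${\displaystyle a<b<c}$ works); the three brackets mirror the three applications of the intermediate value theorem:

```python
def bisect(h, lo, hi, iters=80):
    """Bisection on a bracket with a sign change."""
    assert h(lo) * h(hi) < 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a, b, c = 0.0, 1.0, 2.0       # hypothetical sample values with a < b < c
h = lambda x: 1/(x - a) + 1/(x - b) + 1/(x - c) - 2

roots = [
    bisect(h, a + 1e-9, b - 1e-9),   # zero in (a, b)
    bisect(h, b + 1e-9, c - 1e-9),   # zero in (b, c)
    bisect(h, c + 1e-9, 100.0),      # zero in (c, infinity)
]
assert all(abs(h(r)) < 1e-6 for r in roots)
assert roots[0] < roots[1] < roots[2]   # three distinct solutions
```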

Exercise (Solution of an equation)

Let ${\displaystyle f:[0,1]\to \mathbb {R} }$ be continuous with ${\displaystyle f(0)=f(1)}$. Prove that there is a ${\displaystyle c\in [0,1]}$ with ${\displaystyle f(c)=f\left(c+{\tfrac {1}{2}}\right)}$.

Solution (Solution of an equation)

We consider the following auxiliary function:

${\displaystyle h:[0,{\tfrac {1}{2}}]\to \mathbb {R} ,\ h(x)=f(x+{\tfrac {1}{2}})-f(x)}$

Finding a ${\displaystyle c\in [0,1]}$ with ${\displaystyle f(c)=f\left(c+{\tfrac {1}{2}}\right)}$ now amounts to finding a zero of ${\displaystyle h}$. Since ${\displaystyle f}$ is continuous, so is ${\displaystyle h}$ . In addition, at the endpoints of the interval, there is

${\displaystyle h(0)=f({\tfrac {1}{2}})-f(0)}$

and

${\displaystyle h({\tfrac {1}{2}})=f(1)-f({\tfrac {1}{2}}){\overset {f(0)=f(1)}{=}}f(0)-f({\tfrac {1}{2}})=-(f({\tfrac {1}{2}})-f(0))=-h(0)}$

Case 1: ${\displaystyle h(0)=0}$

This means ${\displaystyle f({\tfrac {1}{2}})-f(0)=0}$, or equivalently

${\displaystyle f(0)=f({\tfrac {1}{2}})}$

So we have found a solution ${\displaystyle c=0}$ to ${\displaystyle f(c)=f\left(c+{\tfrac {1}{2}}\right)}$.

Case 2: ${\displaystyle h(0)\neq 0}$

We will first consider the case ${\displaystyle h(0)>0}$. Since ${\displaystyle h(0)>0}$ , there is ${\displaystyle h({\tfrac {1}{2}})=-h(0)<0}$. The intermediate value theorem now yields a ${\displaystyle c\in [0,{\tfrac {1}{2}}]\subset [0,1]}$ with

${\displaystyle h(c)=f(c+{\tfrac {1}{2}})-f(c)=0}$

This ${\displaystyle c}$ is a zero of ${\displaystyle h}$ and hence a desired solution for ${\displaystyle f(c)=f(c+{\tfrac {1}{2}})}$. The other case ${\displaystyle h(0)<0}$ can be treated using exactly the same arguments.

So for any choice of ${\displaystyle h(0)}$, there is a ${\displaystyle c\in [0,1]}$ with ${\displaystyle f(c)=f(c+{\tfrac {1}{2}})}$.
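The argument can be illustrated with a hypothetical continuous ${\displaystyle f}$ on ${\displaystyle [0,1]}$ satisfying ${\displaystyle f(0)=f(1)}$; bisection on ${\displaystyle h(x)=f(x+{\tfrac {1}{2}})-f(x)}$ then locates the promised ${\displaystyle c}$:

```python
import math

# Hypothetical example with f(0) = f(1):
f = lambda x: math.sin(2 * math.pi * x) + x * (1 - x)
assert abs(f(0) - f(1)) < 1e-12

h = lambda x: f(x + 0.5) - f(x)
lo, hi = 0.0, 0.5
assert h(lo) * h(hi) <= 0      # sign change, since h(1/2) = -h(0)

# Bisection on [0, 1/2]:
for _ in range(60):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
assert abs(f(c) - f(c + 0.5)) < 1e-9   # f(c) = f(c + 1/2) up to rounding
```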

Exercise (Existence of exactly one zero)

Let ${\displaystyle n\in \mathbb {N} }$ be a natural number. We define the function ${\displaystyle f_{n}:\mathbb {R} \to \mathbb {R} ,x\mapsto x^{n}+4n^{2}x-10}$. Prove that ${\displaystyle f_{n}}$ has exactly one positive zero.

Solution (Existence of exactly one zero)

We need to show two things: First, that a zero exists inside the interval ${\displaystyle ]0;+\infty [}$. Second, that there is indeed only one such zero.

The function ${\displaystyle f_{n}}$ is a polynomial function and hence continuous. At the left endpoint ${\displaystyle x=0}$, there is ${\displaystyle f_{n}(0)=-10<0}$, i.e. the graph of the function runs below the ${\displaystyle x}$-axis. At infinity, there is ${\displaystyle \lim _{x\to \infty }f_{n}(x)=+\infty }$, meaning that for large ${\displaystyle x}$, the graph runs above the ${\displaystyle x}$-axis. As ${\displaystyle f_{n}}$ is continuous, we can apply the intermediate value theorem and get a zero ${\displaystyle x_{1}\in ]0;+\infty [}$.

Now we need to show that there is at most one zero. Both ${\displaystyle x\mapsto x^{n}}$ and ${\displaystyle x\mapsto x}$ are strictly monotonously increasing functions for ${\displaystyle x>0}$. So we may expect that ${\displaystyle f_{n}}$ is also strictly monotonous there. We can prove this by taking the first derivative:

For ${\displaystyle x>0}$ there is: ${\displaystyle f_{n}'(x)=\underbrace {nx^{n-1}} _{>0}+\underbrace {4n^{2}} _{>0}>0}$.

One may show with a bit of effort that differentiable functions with positive derivative ${\displaystyle f_{n}'(x)>0}$ are strictly monotonous in the sense that ${\displaystyle f_{n}(y)>f_{n}(x)}$ for ${\displaystyle y>x}$. If there were two zeros ${\displaystyle x_{1}<x_{2}}$, we would have ${\displaystyle f_{n}(x_{2})=f_{n}(x_{1})=0}$ although ${\displaystyle x_{2}>x_{1}}$. This would contradict ${\displaystyle f_{n}}$ being strictly monotonous and is hence excluded. Therefore, ${\displaystyle f_{n}}$ can have at most one zero (as all strictly monotonous functions).

Note: We could also prove that ${\displaystyle f_{n}}$ has at most one zero, only using that ${\displaystyle f_{n}}$ is differentiable with ${\displaystyle f_{n}'(x)>0}$:

Assume that the function ${\displaystyle f_{n}}$ had two zeros ${\displaystyle x_{1},x_{2}\in ]0;+\infty [}$ with ${\displaystyle x_{1}<x_{2}}$. Since the function ${\displaystyle f_{n}}$ is differentiable and ${\displaystyle f_{n}(x_{1})=0=f_{n}(x_{2})}$, we may use Rolle's theorem and get that some ${\displaystyle \xi \in ]x_{1};x_{2}[}$ exists with ${\displaystyle f_{n}'(\xi )=0}$. But this is a contradiction to the first derivative of ${\displaystyle f_{n}}$ being strictly positive, ${\displaystyle f_{n}'(x)>0}$. So this is a second way to exclude the existence of two zeros.
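A numerical check for small ${\displaystyle n}$ (again via bisection, mirroring the sign change ${\displaystyle f_{n}(0)=-10<0}$ and ${\displaystyle f_{n}(x)\to \infty }$):

```python
def f_n(n, x):
    return x**n + 4 * n**2 * x - 10

def positive_zero(n):
    lo, hi = 0.0, 10.0     # f_n(0) = -10 < 0 and f_n(10) > 0 for all n >= 1
    for _ in range(200):
        mid = (lo + hi) / 2
        if f_n(n, mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in range(1, 6):
    z = positive_zero(n)
    assert abs(f_n(n, z)) < 1e-6
    # consistent with strict monotonicity: opposite signs around the zero
    assert f_n(n, z - 0.01) < 0 < f_n(n, z + 0.01)
```

For ${\displaystyle n=1}$ the equation ${\displaystyle x+4x-10=0}$ has the exact solution ${\displaystyle x=2}$, which the bisection reproduces.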

## Continuity of the inverse function

Exercise (Continuity of the inverse function 1)

Let ${\displaystyle f:(-1,1)\to \mathbb {R} }$ be defined by

${\displaystyle f(x)={\frac {2x}{1-x^{2}}}}$
1. Prove that ${\displaystyle f}$ is continuous, strictly monotonous and injective.
2. Prove that ${\displaystyle f}$ is surjective (so it is a one-to-one map from ${\displaystyle (-1,1)}$ onto ${\displaystyle \mathbb {R} }$ and an inverse function exists).
3. Why is the inverse function ${\displaystyle f^{-1}:\mathbb {R} \to (-1,1)}$ continuous, monotonously increasing and bijective? Explicitly determine ${\displaystyle f^{-1}}$.

Solution (Continuity of the inverse function 1)

Part 1: ${\displaystyle f}$ is continuous on ${\displaystyle (-1,1)}$ since it is the quotient of the continuous polynomials ${\displaystyle x\mapsto 2x}$ and ${\displaystyle x\mapsto 1-x^{2}}$. Note that ${\displaystyle 1-x^{2}\neq 0}$ for all ${\displaystyle x\in (-1,1)}$.

Let ${\displaystyle x,y\in (-1,1)}$ with ${\displaystyle x<y}$. Strict monotony follows since the derivative is positive on ${\displaystyle (-1,1)}$:

${\displaystyle f'(x)={\frac {2(1-x^{2})-2x\cdot (-2x)}{(1-x^{2})^{2}}}={\frac {2+2x^{2}}{(1-x^{2})^{2}}}>0}$

so ${\displaystyle f(x)<f(y)}$, i.e. ${\displaystyle f}$ is strictly monotonously increasing.

Therefore, ${\displaystyle f}$ is also injective.

Part 2: The function tends to ${\displaystyle \pm \infty }$ at the endpoints of the open interval ${\displaystyle (-1,1)}$ as follows:

${\displaystyle \lim _{x\to 1-}f(x)=\lim _{x\to 1-}{\frac {2x}{1-x^{2}}}=\lim _{x\to 1-}{\frac {2}{{\frac {1}{x}}-x}}{\overset {\frac {2}{0+}}{=}}\infty }$ and ${\displaystyle \lim _{x\to -1+}f(x)=\lim _{x\to -1+}{\frac {2x}{1-x^{2}}}=\lim _{x\to -1+}{\frac {2}{{\frac {1}{x}}-x}}{\overset {\frac {2}{0-}}{=}}-\infty }$

Since ${\displaystyle f}$ is continuous, the intermediate value theorem ensures that for each ${\displaystyle y\in \mathbb {R} }$ there is an ${\displaystyle x\in (-1,1)}$ mapped onto it: ${\displaystyle f(x)=y}$. Therefore, ${\displaystyle f}$ is also surjective: ${\displaystyle f[(-1,1)]=\mathbb {R} }$.

Part 3: Since ${\displaystyle f}$ is bijective, the inverse map exists and is bijective, as well:

${\displaystyle f^{-1}:\mathbb {R} \to (-1,1)}$

The theorem about continuity of the inverse function tells us that ${\displaystyle f^{-1}}$ is continuous and strictly monotonously increasing. Now, let us compute ${\displaystyle f^{-1}}$. That means, we need to bring ${\displaystyle f(x)=y}$ into the form ${\displaystyle x=f^{-1}(y)}$ - i.e. we need to get ${\displaystyle x}$ standing alone on the left side of the equation:

${\displaystyle y={\frac {2x}{1-x^{2}}}\Leftrightarrow y-yx^{2}=2x\Leftrightarrow yx^{2}+2x-y=0}$

Case 1: ${\displaystyle y=0}$

${\displaystyle 2x=0\iff x=0}$

Case 2: ${\displaystyle y\neq 0}$

We can use the quadratic solution formula in order to solve for ${\displaystyle x}$:

${\displaystyle yx^{2}+2x-y=0\iff x_{1/2}={\frac {-2\pm {\sqrt {4+4y^{2}}}}{2y}}={\frac {-1\pm {\sqrt {1+y^{2}}}}{y}}}$

Since ${\displaystyle x}$ must lie in ${\displaystyle (-1,1)}$ (and ${\displaystyle x}$ has the same sign as ${\displaystyle y}$), the only admissible solution is ${\displaystyle x={\frac {-1+{\sqrt {1+y^{2}}}}{y}}}$. Putting all together, the full definition of the inverse function reads

${\displaystyle f^{-1}(y)={\begin{cases}{\frac {-1+{\sqrt {1+y^{2}}}}{y}}&{\text{ for }}y\neq 0,\\0&{\text{ for }}y=0\end{cases}}}$

Hint

The distinction of two cases above is not very convenient. We can avoid it using a little trick: in the case ${\displaystyle y\neq 0}$, numerator and denominator of ${\displaystyle f^{-1}}$ can be expanded by a factor of ${\displaystyle (-1-{\sqrt {1+y^{2}}})}$:

${\displaystyle f^{-1}(y)={\frac {(-1+{\sqrt {1+y^{2}}})(-1-{\sqrt {1+y^{2}}})}{y(-1-{\sqrt {1+y^{2}}})}}={\frac {1-(1+y^{2})}{-y(1+{\sqrt {y^{2}+1}})}}={\frac {-y^{2}}{-y(1+{\sqrt {y^{2}+1}})}}={\frac {y}{1+{\sqrt {y^{2}+1}}}}}$

Plugging in ${\displaystyle y=0}$, we get that ${\displaystyle f^{-1}(0)={\tfrac {0}{1+{\sqrt {1+0^{2}}}}}={\tfrac {0}{2}}=0}$ is reproduced correctly. So we can use the expression above for all ${\displaystyle y\in \mathbb {R} }$ and avoid the case distinction.
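The case-free formula can be verified numerically: ${\displaystyle f^{-1}}$ should land in ${\displaystyle (-1,1)}$ and invert ${\displaystyle f}$. This is only a sanity check of the algebra above:

```python
import math

f = lambda x: 2 * x / (1 - x**2)
f_inv = lambda y: y / (1 + math.sqrt(y**2 + 1))   # case-free inverse from above

for y in [-100.0, -3.7, -1.0, 0.0, 0.5, 2.0, 42.0]:
    x = f_inv(y)
    assert -1 < x < 1                                # maps into (-1, 1)
    assert abs(f(x) - y) <= 1e-9 * max(1.0, abs(y))  # f(f_inv(y)) = y
```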

Exercise (Continuity of the inverse function 2)

Let

${\displaystyle g:[0,\infty )\to \mathbb {R} ,\ g(x)=\ln(x+e)+\cos \left({\frac {1}{\ln(x+e)}}\right)}$
1. Prove that ${\displaystyle g}$ is injective.
2. Determine the range ${\displaystyle g([0,\infty ))}$ of all attained values.
3. Why is the inverse function ${\displaystyle g^{-1}:g([0,\infty ))\to [0,\infty )}$ continuous?

Solution (Continuity of the inverse function 2)

Part 1:

${\displaystyle g}$ is continuous, as it is composed of the continuous functions ${\displaystyle x\mapsto x+e}$, ${\displaystyle x\mapsto {\frac {1}{x}}}$, ${\displaystyle x\mapsto \ln(x)}$ and ${\displaystyle x\mapsto \cos(x)}$ on ${\displaystyle [0,\infty )}$.

The logarithm is strictly monotonously increasing, and taking reciprocals reverses inequalities: for ${\displaystyle x,y\in [0,\infty )}$ with ${\displaystyle x<y}$, there is:

${\displaystyle x+e<y+e\quad {\overset {\ln {\text{ str. incr.}}}{\Longrightarrow }}\quad \ln(x+e)<\ln(y+e)\quad \Longrightarrow \quad {\frac {1}{\ln(x+e)}}>{\frac {1}{\ln(y+e)}}}$

Now, ${\displaystyle {\frac {1}{\ln(x+e)}}\in (0,1]}$ for ${\displaystyle x\in [0,\infty )}$. Since in addition, ${\displaystyle \cos }$ is strictly monotonously decreasing on ${\displaystyle (0,1]}$, we have

${\displaystyle \cos \left({\frac {1}{\ln(x+e)}}\right)<\cos \left({\frac {1}{\ln(y+e)}}\right)}$

So the ${\displaystyle \cos }$-term is also strictly monotonously increasing and so is ${\displaystyle g}$:

${\displaystyle \underbrace {\ln(x+e)+\cos \left({\frac {1}{\ln(x+e)}}\right)} _{=g(x)}<\underbrace {\ln(y+e)+\cos \left({\frac {1}{\ln(y+e)}}\right)} _{=g(y)}}$

Therefore, ${\displaystyle g}$ is also injective.

Part 2:

At the ends of the domain of definition, there is

${\displaystyle g(0)=\ln(e)+\cos \left({\frac {1}{\ln(e)}}\right)=1+\cos(1)}$

and

${\displaystyle \lim _{x\to \infty }\ln(x+e)=\infty {\text{ and }}\lim _{x\to \infty }{\frac {1}{\ln(x+e)}}=0}$

This implies

${\displaystyle \lim _{x\to \infty }g(x)=\lim _{x\to \infty }[\underbrace {\ln(x+e)} _{\to \infty }+\underbrace {\cos \left({\frac {1}{\ln(x+e)}}\right)} _{\to \cos(0)=1}]=\infty }$

${\displaystyle g}$ is continuous on the interval ${\displaystyle [0,\infty )}$. Hence, we can use a corollary of the intermediate value theorem, and get that ${\displaystyle g([0,\infty ))}$ is again an interval. Since ${\displaystyle g}$ is strictly monotonously increasing and ${\displaystyle \lim _{x\to \infty }g(x)=\infty }$ , we can conclude

${\displaystyle g([0,\infty ))=[g(0),\infty )=[1+\cos(1),\infty )}$
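Both the endpoint value and the strict monotonicity can be sampled numerically (a sanity check on a finite grid, not a proof):

```python
import math

g = lambda x: math.log(x + math.e) + math.cos(1.0 / math.log(x + math.e))

# Left endpoint of the range: g(0) = 1 + cos(1).
assert abs(g(0) - (1 + math.cos(1))) < 1e-12

# Strictly increasing on a sample grid of [0, 100):
xs = [k * 0.1 for k in range(1000)]
vals = [g(x) for x in xs]
assert all(a < b for a, b in zip(vals, vals[1:]))
```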

Part 3:

Since ${\displaystyle D=[0,\infty )}$ is an interval and ${\displaystyle g:[0,\infty )\to [1+\cos(1),\infty )}$ is bijective, we can use the theorem about continuity of the inverse function. It tells us that

${\displaystyle g^{-1}:[1+\cos(1),\infty )\to [0,\infty )}$

is indeed continuous.