# Derivative and local extrema – Serlo

In this chapter we will use derivatives to derive necessary and sufficient criteria for the existence of extrema. In calculus, one often uses the theorem that a function ${\displaystyle f:D\to \mathbb {R} }$ must necessarily satisfy ${\displaystyle f'({\tilde {x}})=0}$ in order to have a (local) extremum at ${\displaystyle {\tilde {x}}\in D}$. If the derivative function ${\displaystyle f'}$ in addition changes sign at ${\displaystyle {\tilde {x}}}$, then we have found an extremum. A sign change of the derivative is therefore a sufficient criterion for an extremum. We will now derive these statements and some further consequences mathematically and illustrate them with numerous examples. First, however, we will define precisely what an extremum is and what kinds of extrema there are.

## Types of extrema

A function ${\displaystyle f:D\to \mathbb {R} }$ can have two types of extremum: a maximum or a minimum. Each of these can in turn be local or global. As the names suggest, a local minimum is a value ${\displaystyle f({\tilde {x}})}$ which is "locally minimal", i.e. in a neighbourhood of ${\displaystyle {\tilde {x}}}$ we have ${\displaystyle f\geq f({\tilde {x}})}$. Mathematically: there is an interval ${\displaystyle ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )}$ around ${\displaystyle {\tilde {x}}}$ such that ${\displaystyle f(x)\geq f({\tilde {x}})}$ for all arguments ${\displaystyle x}$ which lie in ${\displaystyle ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )}$. A global maximum, on the other hand, is a value ${\displaystyle f({\hat {x}})}$ which is "maximal everywhere". That means, for all arguments ${\displaystyle x}$ from the domain of definition we must have ${\displaystyle f(x)\leq f({\hat {x}})}$. This intuitive idea is illustrated in the following figure:

For local extrema, a distinction is also made between strict and non-strict extrema. A strict local minimum, for example, is one that is only "strictly" attained at a single point. A non-strict extremum can be attained on an entire subinterval.

We now define the intuitively explained terms formally:

Definition (Extrema)

Let ${\displaystyle D\subseteq \mathbb {R} }$ and ${\displaystyle f:D\to \mathbb {R} }$ be a function. Then, ${\displaystyle f}$ at ${\displaystyle {\tilde {x}}\in D}$ has a

• local maximum or minimum, if there is an ${\displaystyle \epsilon >0}$, such that ${\displaystyle f({\tilde {x}})\geq f(x)}$ (or ${\displaystyle f({\tilde {x}})\leq f(x)}$) for all ${\displaystyle x\in D}$ with ${\displaystyle |x-{\tilde {x}}|<\epsilon }$ holds.
• strict local maximum or minimum, if there is an ${\displaystyle \epsilon >0}$ such that ${\displaystyle f({\tilde {x}})>f(x)}$ (or ${\displaystyle f({\tilde {x}})<f(x)}$) for all ${\displaystyle x\in D\setminus \{{\tilde {x}}\}}$ with ${\displaystyle |x-{\tilde {x}}|<\epsilon }$ holds.
• global maximum or minimum, if ${\displaystyle f({\tilde {x}})\geq f(x)}$ (or ${\displaystyle f({\tilde {x}})\leq f(x)}$) for all ${\displaystyle x\in D}$ holds.

Extremum is the umbrella term for a maximum or minimum. The point ${\displaystyle {\tilde {x}}\in D}$ is then called a maximum point or minimum point.

A local maximum/minimum is also sometimes referred to in the literature as relative maximum/minimum, and a strict maximum/minimum as isolated maximum/minimum. With this definition it is also clear that every global extremum is also a local one. Similarly, every strict local extremum is also a local extremum in the usual sense. In the following we want to determine some necessary and sufficient conditions for (strict) local extrema, using the derivative. Unfortunately our criteria are not sufficient to characterise global extrema. Those are a bit harder to catch!

Question: Consider the functions

{\displaystyle {\begin{aligned}f&:[-1,2]\to \mathbb {R} ,\ f(x)=x^{2}\\[1em]g&:\mathbb {R} \to \mathbb {R} ,\ g(x)=\operatorname {sgn}(x)={\begin{cases}-1&{\text{ for }}x<0,\\0&{\text{ for }}x=0,\\1&{\text{ for }}x>0\end{cases}}\end{aligned}}}

Are the following statements true or false?

1. ${\displaystyle f}$ has a local minimum at ${\displaystyle {\tilde {x}}_{1}=0}$.
2. ${\displaystyle f}$ has a strict local minimum at ${\displaystyle {\tilde {x}}_{1}=0}$.
3. ${\displaystyle f}$ has a global minimum at ${\displaystyle {\tilde {x}}_{1}=0}$.
4. ${\displaystyle f}$ has a global maximum at ${\displaystyle {\tilde {x}}_{2}=-1}$.
5. ${\displaystyle f}$ has a local maximum at ${\displaystyle {\tilde {x}}_{3}=2}$.
6. ${\displaystyle g}$ has a strict local maximum at ${\displaystyle {\hat {x}}_{1}=1}$.
7. ${\displaystyle g}$ has a local maximum at ${\displaystyle {\hat {x}}_{1}=1}$ .
8. ${\displaystyle g}$ has a global minimum at ${\displaystyle {\hat {x}}_{2}=-2}$.
9. ${\displaystyle g}$ has a local minimum at ${\displaystyle {\hat {x}}_{3}=0}$.

Solution:

1. True. Since for ${\displaystyle \epsilon =1}$ there is ${\displaystyle \underbrace {0} _{=f(0)}\leq \underbrace {x^{2}} _{=f(x)}}$ for all ${\displaystyle x\in [-1,2]}$ with ${\displaystyle \underbrace {|x|} _{=|x-{\tilde {x}}|}<\underbrace {1} _{=\epsilon }}$.
2. True. Since for ${\displaystyle \epsilon =1}$ there is ${\displaystyle 0<x^{2}}$ for all ${\displaystyle x\in [-1,2]\setminus \{0\}}$ with ${\displaystyle |x|<1}$.
3. True. Since for all ${\displaystyle x\in [-1,2]}$ there is ${\displaystyle 0\leq x^{2}}$.
4. False. Since e.g. for ${\displaystyle x=2}$ there is ${\displaystyle f(2)=4>1=f(-1)}$.
5. True. Since for all ${\displaystyle x\in [-1,2]}$ there is ${\displaystyle f(x)\leq 4=f(2)}$. So ${\displaystyle f}$ has at ${\displaystyle {\tilde {x}}_{3}=2}$ a global and hence also a local maximum.
6. False. Since for all ${\displaystyle 0<\epsilon <1}$ and all ${\displaystyle x\in (1-\epsilon ,1+\epsilon )}$ there is ${\displaystyle g(x)=1=g(1)}$.
7. True. Since for ${\displaystyle \epsilon ={\tfrac {1}{2}}}$ there is ${\displaystyle g(1)=1\geq 1=g(x)}$ for all ${\displaystyle x\in \mathbb {R} }$ with ${\displaystyle |x-1|<{\tfrac {1}{2}}}$.
8. True. Since for all ${\displaystyle x\in \mathbb {R} }$ there is ${\displaystyle g(x)\geq -1=g(-2)}$.
9. False. Since for all ${\displaystyle \epsilon >0}$ there is an ${\displaystyle x\in (-\epsilon ,0)}$ with ${\displaystyle g(x)=-1<0=g(0)}$.
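The statements above can also be spot-checked numerically. The following is a small sketch of our own (sampling grids are an arbitrary choice, and sampling is evidence, not a proof):

```python
# Spot-checking some of the quiz answers by sampling.
def f(x):          # f: [-1, 2] -> R, f(x) = x^2
    return x * x

def sgn(x):        # g = sign function
    return (x > 0) - (x < 0)

# 1./2.: strict local minimum of f at 0 (take eps = 1)
near_zero = [i / 100 for i in range(-99, 100) if i != 0]
assert all(f(0) < f(x) for x in near_zero)

# 4.: x = -1 is no global maximum, since f(2) > f(-1)
assert f(2) > f(-1)

# 7. but not 6.: g is constant 1 near x = 1, a non-strict local maximum
assert all(sgn(x) == sgn(1) for x in (0.6, 1.0, 1.4))

# 9.: no local minimum of g at 0; points left of 0 have smaller values
assert all(sgn(x) < sgn(0) for x in (-0.5, -0.01))
```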

## Necessary condition for extrema

### Theorem and proof

In order for a function to have a local extremum at a position within its domain of definition, the function must have a horizontal tangent there. This means that the derivative at this point must be zero. This is exactly what the following theorem says:

Theorem (Necessary condition for extrema)

Let ${\displaystyle a<b}$ with ${\displaystyle a,b\in \mathbb {R} }$. Let ${\displaystyle {\tilde {x}}\in (a,b)}$ and ${\displaystyle f:(a,b)\to \mathbb {R} }$ be differentiable at ${\displaystyle {\tilde {x}}}$. Let further ${\displaystyle {\tilde {x}}}$ be a local minimum (or maximum). Then ${\displaystyle f'({\tilde {x}})=0}$.

Proof (Necessary condition for extrema)

We consider the case where ${\displaystyle f}$ has a local minimum at ${\displaystyle {\tilde {x}}}$. The proof in the case of a local maximum is analogous. We want to show that

${\displaystyle f'({\tilde {x}})=\lim _{h\to 0}{\frac {f({\tilde {x}}+h)-f({\tilde {x}})}{h}}=0}$

Since ${\displaystyle f}$ is differentiable at ${\displaystyle {\tilde {x}}}$, there is

${\displaystyle \lim _{h\downarrow 0}{\frac {f({\tilde {x}}+h)-f({\tilde {x}})}{h}}=f'({\tilde {x}})=\lim _{h\uparrow 0}{\frac {f({\tilde {x}}+h)-f({\tilde {x}})}{h}}}$

Since ${\displaystyle f}$ has a local minimum at ${\displaystyle {\tilde {x}}}$ , there is an ${\displaystyle \epsilon >0}$, such that for all ${\displaystyle h\in (0,\epsilon )}$ there is

${\displaystyle f({\tilde {x}}+h)\geq f({\tilde {x}})\iff f({\tilde {x}}+h)-f({\tilde {x}})\geq 0}$

So also

${\displaystyle {\frac {f({\tilde {x}}+h)-f({\tilde {x}})}{h}}\geq 0}$

From the limit value rules follows

${\displaystyle f'({\tilde {x}})=\lim _{h\downarrow 0}{\frac {f({\tilde {x}}+h)-f({\tilde {x}})}{h}}\geq 0}$

On the other hand there is an ${\displaystyle {\tilde {\epsilon }}>0}$, such that for all ${\displaystyle h\in (-{\tilde {\epsilon }},0)}$ there is

${\displaystyle f({\tilde {x}}+h)\geq f({\tilde {x}})\iff f({\tilde {x}}+h)-f({\tilde {x}})\geq 0}$

From the limit value rules follows

${\displaystyle f'({\tilde {x}})=\lim _{h\uparrow 0}{\frac {f({\tilde {x}}+h)-f({\tilde {x}})}{h}}\leq 0}$

So ${\displaystyle f'({\tilde {x}})\leq 0\leq f'({\tilde {x}})}$ and therefore ${\displaystyle f'({\tilde {x}})=0}$.

### Examples

The function ${\displaystyle f:[-1,2]\to \mathbb {R} ,\ f(x)=x^{2}}$ has a local (and even global) minimum at ${\displaystyle {\tilde {x}}=0}$ . Since ${\displaystyle 0\in (-1,2)}$ , the necessary criterion yields ${\displaystyle f'(0)=2\cdot 0=0}$.

If there is an extremum at a boundary point, the condition at this point does not have to be fulfilled! For instance, ${\displaystyle {\hat {x}}=-1}$ is a local maximum. But there is ${\displaystyle f'(-1)=2\cdot (-1)=-2\neq 0}$.
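As a quick numerical illustration of both facts (the step size `h` is our own choice), a central difference quotient approximates the derivative of ${\displaystyle f(x)=x^{2}}$ at the interior minimum and at the boundary maximum:

```python
# Central difference approximation of f'(x) for f(x) = x^2 on [-1, 2]:
# the interior minimum at 0 has derivative 0, while the boundary
# maximum at -1 has f'(-1) = -2 != 0.
def f(x):
    return x * x

def diff_central(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

assert abs(diff_central(f, 0.0)) < 1e-9         # f'(0) = 0
assert abs(diff_central(f, -1.0) + 2.0) < 1e-6  # f'(-1) = -2
```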

Example (Cubic power functions)

The necessary condition is not sufficient. That means, it does not follow from ${\displaystyle f'({\tilde {x}})=0}$ that ${\displaystyle f}$ has an extremum at ${\displaystyle {\tilde {x}}}$. An example is the function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$ with ${\displaystyle f(x)=x^{3}}$. There is ${\displaystyle f'(x)=3x^{2}}$ and therefore ${\displaystyle f'(0)=0}$. But ${\displaystyle f}$ has no extremum at ${\displaystyle {\tilde {x}}=0}$, because for all ${\displaystyle x<0}$ there is ${\displaystyle f(x)<0}$ and for all ${\displaystyle x>0}$ there is ${\displaystyle f(x)>0}$. Such a zero of the derivative is also called a terrace point or saddle point.

Example (Sine function)

Of course, the condition ${\displaystyle f'({\tilde {x}})=0}$ can also be fulfilled at infinitely many points of a function. An example is the sine function ${\displaystyle f:\mathbb {R} \to \mathbb {R} ,f(x)=\sin(x)}$. There is

${\displaystyle f'(x)=\cos(x){\overset {!}{=}}0\iff x\in \{{\tfrac {\pi }{2}}+k\pi \mid k\in \mathbb {Z} \}=\{\ldots ,-{\tfrac {5\pi }{2}},-{\tfrac {3\pi }{2}},-{\tfrac {\pi }{2}},{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}},{\tfrac {5\pi }{2}},\ldots \}}$

Further there is ${\displaystyle |\sin(x)|\leq 1}$,

${\displaystyle \sin(x)=1\iff x\in \{{\tfrac {\pi }{2}}+2k\pi \mid k\in \mathbb {Z} \}=\{\ldots ,-{\tfrac {3\pi }{2}},{\tfrac {\pi }{2}},{\tfrac {5\pi }{2}},\ldots \}}$

and

${\displaystyle \sin(x)=-1\iff x\in \{-{\tfrac {\pi }{2}}+2k\pi \mid k\in \mathbb {Z} \}=\{\ldots ,-{\tfrac {5\pi }{2}},-{\tfrac {\pi }{2}},{\tfrac {3\pi }{2}},\ldots \}}$

Therefore ${\displaystyle \sin }$ has local maxima at ${\displaystyle {\tilde {x}}={\tfrac {\pi }{2}}+2k\pi }$ with ${\displaystyle k\in \mathbb {Z} }$ , and local minima at ${\displaystyle {\tilde {x}}=-{\tfrac {\pi }{2}}+2k\pi }$ with ${\displaystyle k\in \mathbb {Z} }$ .
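These critical points and extremal values can be spot-checked with a short script (a finite check over a few ${\displaystyle k}$, chosen by us; floating-point equality is tested up to a tolerance):

```python
import math

# Checking the critical points of sin: cos vanishes at pi/2 + k*pi,
# sin attains its maximum 1 at pi/2 + 2k*pi and its minimum -1 at
# -pi/2 + 2k*pi.
for k in range(-3, 4):
    assert abs(math.cos(math.pi / 2 + k * math.pi)) < 1e-12
    assert abs(math.sin(math.pi / 2 + 2 * k * math.pi) - 1) < 1e-12
    assert abs(math.sin(-math.pi / 2 + 2 * k * math.pi) + 1) < 1e-12
```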

Example (Exponential function)

Finally, the case ${\displaystyle f'(x)\neq 0}$ for all ${\displaystyle x}$ can occur. Let us consider the exponential function ${\displaystyle f:\mathbb {R} \to \mathbb {R} ,f(x)=\exp(x)}$. Then, there is for all ${\displaystyle x\in \mathbb {R} }$:

${\displaystyle f'(x)=\exp(x)>0}$

Since extrema at boundaries are also not possible, it follows from our criterion that the exponential function has no (local) extrema.

### Exercises

Exercise (Local extrema of functions)

Check if the following functions have local extrema:

1. ${\displaystyle f:\mathbb {R} \to \mathbb {R} ,\ f(x)=x^{4}}$
2. ${\displaystyle g:\mathbb {R} ^{+}\to \mathbb {R} ,\ g(x)=\ln(x)}$
3. ${\displaystyle h:\mathbb {R} \to \mathbb {R} ,\ h(x)=\cos(x)}$

Solution (Local extrema of functions)

Solution sub-exercise 1:

There is

${\displaystyle f'(x)=4x^{3}=0\iff x=0}$

Since furthermore ${\displaystyle f(x)=x^{4}=(x^{2})^{2}\geq 0}$ for all ${\displaystyle x\in \mathbb {R} }$, the point ${\displaystyle {\tilde {x}}=0}$ is a local (even global) minimum of ${\displaystyle f}$.

Solution sub-exercise 2:

Here, there is for all ${\displaystyle x\in \mathbb {R} ^{+}}$

${\displaystyle g'(x)={\frac {1}{x}}\neq 0}$

Hence, ${\displaystyle g}$ has no extrema.

Solution sub-exercise 3:

Finally there is

{\displaystyle {\begin{aligned}&h'(x)=-\sin(x){\overset {!}{=}}0\\\iff {}&x\in \{k\pi \mid k\in \mathbb {Z} \}=\{\ldots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\ldots \}\end{aligned}}}

Further there is ${\displaystyle |\cos(x)|\leq 1}$, as well as

{\displaystyle {\begin{aligned}&\cos(x)=1\\\iff {}&x\in \{2k\pi \mid k\in \mathbb {Z} \}=\{\ldots ,-2\pi ,0,2\pi ,\ldots \}\end{aligned}}}

and

{\displaystyle {\begin{aligned}&\cos(x)=-1\\\iff {}&x\in \{(2k-1)\pi \mid k\in \mathbb {Z} \}=\{\ldots ,-\pi ,\pi ,3\pi ,\ldots \}\end{aligned}}}

Therefore ${\displaystyle h}$ has local maxima at ${\displaystyle {\tilde {x}}=2k\pi }$, ${\displaystyle k\in \mathbb {Z} }$ , and local minima at ${\displaystyle {\tilde {x}}=(2k-1)\pi }$, ${\displaystyle k\in \mathbb {Z} }$.

### Application: Intermediate values for derivatives

We have already stated in the previous sections that the derivative function of a differentiable function does not necessarily have to be continuous. An example for this is the following function, which we have learned about in the chapter "derivatives of higher order":

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,\ f(x)={\begin{cases}x^{2}\sin({\tfrac {1}{x}})&{\text{ for }}x\neq 0,\\0&{\text{ for }}x=0.\end{cases}}}$

However, it can be shown that the derivative function always fulfils the intermediate value property. The reason why this is not a contradiction is that continuity is a stronger property than the intermediate value property. To prove this, we will use our necessary criterion from the previous theorem. This result is also known in the literature as "Darboux's theorem":

Theorem (Darboux's theorem)

Let ${\displaystyle f:[a,b]\to \mathbb {R} }$ be differentiable. Further let ${\displaystyle f'(a)<f'(b)}$ and ${\displaystyle c\in (f'(a),f'(b))}$. Then there is an ${\displaystyle x_{0}\in (a,b)}$ with ${\displaystyle f'(x_{0})=c}$.

Proof (Darboux's theorem)

We define the auxiliary function

${\displaystyle g:[a,b]\to \mathbb {R} ,\ g(x)=f(x)-cx}$

This function is differentiable with

${\displaystyle g'(x)=f'(x)-c}$

So there is ${\displaystyle g'(a)=f'(a)-c<0}$ and ${\displaystyle g'(b)=f'(b)-c>0}$. Hence

${\displaystyle g'(a)=\lim _{x\to a}{\frac {g(x)-g(a)}{x-a}}<0}$

Thus there is an ${\displaystyle x_{1}\in (a,b)}$ with

${\displaystyle {\frac {g(x_{1})-g(a)}{x_{1}-a}}<0}$

Since the denominator ${\displaystyle x_{1}-a}$ is positive, ${\displaystyle g(x_{1})-g(a)<0\iff g(x_{1})<g(a)}$ follows. Similarly, there is an ${\displaystyle x_{2}\in (a,b)}$ with ${\displaystyle g(x_{2})<g(b)}$. By the extreme value theorem, ${\displaystyle g}$ attains a minimum on ${\displaystyle [a,b]}$. Since we have shown that there are ${\displaystyle x_{1},x_{2}\in (a,b)}$ with ${\displaystyle g(x_{1})<g(a)}$ and ${\displaystyle g(x_{2})<g(b)}$, the minimum must lie in ${\displaystyle (a,b)}$. Let ${\displaystyle x_{0}\in (a,b)}$ be the minimum point. According to our necessary criterion for an extremum, the following must now hold

${\displaystyle g'(x_{0})=f'(x_{0})-c=0}$

From this follows ${\displaystyle f'(x_{0})=c}$.
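For a concrete sanity check of Darboux's theorem, take the toy example ${\displaystyle f(x)=x^{2}}$ on ${\displaystyle [-1,2]}$ (our own choice of ${\displaystyle f}$ and of the intermediate values ${\displaystyle c}$): here ${\displaystyle f'}$ is explicitly invertible, so the intermediate point can be written down directly.

```python
# Darboux's theorem illustrated with f(x) = x^2 on [a, b] = [-1, 2]:
# f'(a) = -2 < c < 4 = f'(b), and x0 = c/2 solves f'(x0) = 2*x0 = c
# with x0 strictly inside (a, b).
a, b = -1.0, 2.0
for c in (-1.9, -0.5, 0.0, 1.0, 3.9):
    x0 = c / 2.0
    assert a < x0 < b
    assert 2 * x0 == c
```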

## Sufficient condition: sign change

### Theorem

For many functions it can be tedious to determine whether ${\displaystyle f}$ has an extremum at ${\displaystyle {\tilde {x}}}$ using only the necessary condition ${\displaystyle f'({\tilde {x}})=0}$: one still has to prove that the function does not take larger or smaller values in a neighbourhood. Therefore we now look for sufficient conditions for an extremum, which save us this extra proof work. One possibility is to investigate ${\displaystyle f'}$ in a neighbourhood of the candidate point ${\displaystyle {\tilde {x}}}$. If the function increases to the left of ${\displaystyle {\tilde {x}}}$ and decreases to the right, then there is a maximum. If the function first decreases and then increases, there is a minimum.

Theorem (Sufficient condition for extrema by sign change of the derivative)

Let ${\displaystyle a<b}$ and ${\displaystyle f:(a,b)\to \mathbb {R} }$ be a differentiable function, and let ${\displaystyle f'({\tilde {x}})=0}$ for some ${\displaystyle {\tilde {x}}\in (a,b)}$. Then:

1. ${\displaystyle f}$ has a strict maximum at ${\displaystyle {\tilde {x}}}$, if there are ${\displaystyle \epsilon _{1},\epsilon _{2}>0}$ such that for all ${\displaystyle x\in ({\tilde {x}}-\epsilon _{1},{\tilde {x}})}$ there is ${\displaystyle f'(x)>0}$ and for all ${\displaystyle x\in ({\tilde {x}},{\tilde {x}}+\epsilon _{2})}$ there is ${\displaystyle f'(x)<0}$.
2. ${\displaystyle f}$ has a strict minimum at ${\displaystyle {\tilde {x}}}$, if there are ${\displaystyle \epsilon _{1},\epsilon _{2}>0}$ such that for all ${\displaystyle x\in ({\tilde {x}}-\epsilon _{1},{\tilde {x}})}$ there is ${\displaystyle f'(x)<0}$ and for all ${\displaystyle x\in ({\tilde {x}},{\tilde {x}}+\epsilon _{2})}$ there is ${\displaystyle f'(x)>0}$.

Proof (Sufficient condition for extrema by sign change of the derivative)

For the proof we use the mean value theorem:

Proof step: ${\displaystyle f}$ has a strict maximum at ${\displaystyle {\tilde {x}}}$

Let ${\displaystyle c\in ({\tilde {x}}-\epsilon _{1},{\tilde {x}})}$ be arbitrary. By the mean value theorem there is a ${\displaystyle y\in (c,{\tilde {x}})}$, such that ${\displaystyle {\frac {f({\tilde {x}})-f(c)}{{\tilde {x}}-c}}=f'(y)}$. Since according to our assumption ${\displaystyle f'(y)>0}$ and ${\displaystyle {\tilde {x}}-c>0}$, there is ${\displaystyle f({\tilde {x}})-f(c)>0}$ or ${\displaystyle f({\tilde {x}})>f(c)}$.

Furthermore, for all ${\displaystyle d\in ({\tilde {x}},{\tilde {x}}+\epsilon _{2})}$ there is a ${\displaystyle z\in ({\tilde {x}},d)}$, such that ${\displaystyle {\frac {f(d)-f({\tilde {x}})}{d-{\tilde {x}}}}=f'(z)}$. We know that ${\displaystyle f'(z)<0}$ and ${\displaystyle d-{\tilde {x}}>0}$. So we get ${\displaystyle f(d)-f({\tilde {x}})<0}$ or ${\displaystyle f({\tilde {x}})>f(d)}$.

If ${\displaystyle \epsilon =\min\{\epsilon _{1},\epsilon _{2}\}}$, then we have for all ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )}$ with ${\displaystyle x\neq {\tilde {x}}}$ that ${\displaystyle f({\tilde {x}})>f(x)}$. Thus ${\displaystyle f}$ in ${\displaystyle {\tilde {x}}}$ has a strict maximum.

Proof step: ${\displaystyle f}$ has a strict minimum at ${\displaystyle {\tilde {x}}}$

The proof is analogous to case 1: For all ${\displaystyle c\in ({\tilde {x}}-\epsilon _{1},{\tilde {x}})}$ there is according to the mean value theorem a ${\displaystyle y\in (c,{\tilde {x}})}$ with ${\displaystyle {\frac {f({\tilde {x}})-f(c)}{{\tilde {x}}-c}}=f'(y)}$. But there is ${\displaystyle f'(y)<0}$ and ${\displaystyle {\tilde {x}}-c>0}$ and thus we get ${\displaystyle f({\tilde {x}})-f(c)<0}$ or ${\displaystyle f({\tilde {x}})<f(c)}$.

There is also for all ${\displaystyle d\in ({\tilde {x}},{\tilde {x}}+\epsilon _{2})}$ a ${\displaystyle z\in ({\tilde {x}},d)}$ such that ${\displaystyle {\frac {f(d)-f({\tilde {x}})}{d-{\tilde {x}}}}=f'(z)}$. But because there is ${\displaystyle f'(z)>0}$ and ${\displaystyle d-{\tilde {x}}>0}$, we also have ${\displaystyle f(d)-f({\tilde {x}})>0}$ or ${\displaystyle f({\tilde {x}})<f(d)}$.

If ${\displaystyle \epsilon =\min\{\epsilon _{1},\epsilon _{2}\}}$, then it holds for all ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )}$ with ${\displaystyle x\neq {\tilde {x}}}$ that ${\displaystyle f({\tilde {x}})<f(x)}$. So ${\displaystyle f}$ has a strict minimum at ${\displaystyle {\tilde {x}}}$.

Alternative proof (Sufficient condition for extrema by sign change of the derivative)

Alternatively, the proposition can be proved with the monotony criterion. We show this only for the first statement; the second can be proved analogously. Because of ${\displaystyle f'(x)>0}$ for all ${\displaystyle x\in ({\tilde {x}}-\epsilon _{1},{\tilde {x}})}$, ${\displaystyle f}$ is strictly monotonously increasing on ${\displaystyle ({\tilde {x}}-\epsilon _{1},{\tilde {x}}]}$ according to the monotony criterion. For all ${\displaystyle x\in ({\tilde {x}}-\epsilon _{1},{\tilde {x}})}$ there is hence ${\displaystyle f(x)<f({\tilde {x}})}$.

In the same way it follows from ${\displaystyle f'(x)<0}$ for all ${\displaystyle x\in ({\tilde {x}},{\tilde {x}}+\epsilon _{2})}$ that ${\displaystyle f}$ is strictly monotonously decreasing on ${\displaystyle [{\tilde {x}},{\tilde {x}}+\epsilon _{2})}$. For all ${\displaystyle x\in ({\tilde {x}},{\tilde {x}}+\epsilon _{2})}$ there is hence ${\displaystyle f({\tilde {x}})>f(x)}$. With ${\displaystyle \epsilon =\min\{\epsilon _{1},\epsilon _{2}\}}$ we obtain ${\displaystyle f(x)<f({\tilde {x}})}$ for all ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )\setminus \{{\tilde {x}}\}}$. Thus ${\displaystyle {\tilde {x}}}$ is a strict local maximum of ${\displaystyle f}$.
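The sign-change test lends itself to a small numerical sketch: classify a critical point ${\displaystyle {\tilde {x}}}$ of ${\displaystyle f}$ by sampling ${\displaystyle f'}$ on both sides. Note that `eps` and the number of samples are heuristic choices of ours, so this is a plausibility check, not a proof.

```python
import math

# Heuristic sign-change test: sample f' on both sides of a critical
# point x0 and compare signs.
def classify(df, x0, eps=1e-3, samples=5):
    left = [df(x0 - eps * k / samples) for k in range(1, samples + 1)]
    right = [df(x0 + eps * k / samples) for k in range(1, samples + 1)]
    if all(v > 0 for v in left) and all(v < 0 for v in right):
        return "strict local maximum"
    if all(v < 0 for v in left) and all(v > 0 for v in right):
        return "strict local minimum"
    return "no decision"

assert classify(lambda x: 4 * x ** 3, 0.0) == "strict local minimum"  # x^4
assert classify(math.cos, math.pi / 2) == "strict local maximum"      # sin
assert classify(lambda x: 3 * x * x, 0.0) == "no decision"            # x^3
```

The last assertion mirrors the saddle point of ${\displaystyle x^{3}}$: the derivative is positive on both sides, so no sign change occurs and the test correctly refuses a verdict.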

Hint

If in the previous theorem only ${\displaystyle f'(x)\geq 0}$ or ${\displaystyle f'(x)\leq 0}$ applies, the statements are still valid. The only difference is that the extrema no longer have to be necessarily strict.

Warning

With the sufficient criterion only local extrema can be found. Whether these are also global, or whether there are global extrema at other places, must be examined separately.

### Example

Example (Extrema of polynomial functions)

We now consider the polynomial function ${\displaystyle g:(-2,0)\to \mathbb {R} }$ with ${\displaystyle g(x)=x^{3}-3x}$. To find the extreme points, we first differentiate ${\displaystyle g}$. There is

${\displaystyle g'(x)=3x^{2}-3=3(x-1)(x+1)}$

So the derivative on the interval ${\displaystyle (-2,0)}$ is zero only at the position ${\displaystyle {\tilde {x}}=-1}$. In our domain of definition ${\displaystyle (-2,0)}$ the factor ${\displaystyle (x-1)}$ is always negative. In the interval ${\displaystyle (-2,-1)}$ there is ${\displaystyle (x+1)<0}$. So we have ${\displaystyle g'(x)=3(x-1)(x+1)>0}$. In the interval ${\displaystyle (-1,0)}$ there is ${\displaystyle x+1>0}$ and so we get ${\displaystyle g'(x)=3(x-1)(x+1)<0}$.

According to our theorem ${\displaystyle g}$ has a strict local maximum at ${\displaystyle {\tilde {x}}=-1}$.

Question: Does the polynomial function ${\displaystyle h:(0,2)\to \mathbb {R} ,\ h(x)=x^{3}-3x}$ have an extremum?

Again there is

${\displaystyle h'(x)=3x^{2}-3=3(x-1)(x+1)}$

The derivative has in ${\displaystyle (0,2)}$ the single zero ${\displaystyle {\hat {x}}=1}$. In the domain of definition, ${\displaystyle x+1}$ is always positive. On ${\displaystyle (0,1)}$ there is ${\displaystyle x-1<0}$, and therefore ${\displaystyle h'(x)<0}$. On ${\displaystyle (1,2)}$ however ${\displaystyle x-1>0}$, and therefore ${\displaystyle h'(x)>0}$. Hence ${\displaystyle h}$ has a strict local minimum at ${\displaystyle {\hat {x}}=1}$.
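The interval-by-interval sign analysis for both ${\displaystyle g}$ and ${\displaystyle h}$ can be checked directly by evaluating the common derivative ${\displaystyle 3x^{2}-3}$ at a few sample points (the sample points are our own choice):

```python
# Sign check of the derivative 3(x-1)(x+1), shared by g and h,
# on the four subintervals used in the text.
def d(x):
    return 3 * x * x - 3

assert all(d(x) > 0 for x in (-1.9, -1.5, -1.1))  # g increases on (-2, -1)
assert all(d(x) < 0 for x in (-0.9, -0.5, -0.1))  # g decreases on (-1, 0)
assert all(d(x) < 0 for x in (0.1, 0.5, 0.9))     # h decreases on (0, 1)
assert all(d(x) > 0 for x in (1.1, 1.5, 1.9))     # h increases on (1, 2)
```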

### Exercises

Exercise (Extremum of a function)

Show that for ${\displaystyle n\in \mathbb {N} }$ the function

${\displaystyle f_{n}:\mathbb {R} _{0}^{+}\to \mathbb {R} ,\ f_{n}(x)=x^{n}e^{-x}}$

has a local maximum and a local minimum, which are a global maximum and a global minimum, respectively.

Solution (Extremum of a function)

Proof step: ${\displaystyle f_{n}}$ has a local maximum at ${\displaystyle {\tilde {x}}=n}$

${\displaystyle f_{n}}$ is differentiable on ${\displaystyle \mathbb {R} ^{+}}$ according to the product rule with

${\displaystyle f_{n}'(x)=nx^{n-1}e^{-x}+x^{n}e^{-x}(-1)=x^{n-1}e^{-x}(n-x)}$

According to the necessary criterion, for a maximum at ${\displaystyle {\tilde {x}}\in \mathbb {R} ^{+}}$ we need ${\displaystyle f_{n}'({\tilde {x}})=0}$. Now

${\displaystyle f_{n}'(x)=\underbrace {x^{n-1}e^{-x}} _{\neq 0}(n-x)=0\iff n-x=0\iff x=n}$

So ${\displaystyle {\tilde {x}}=n}$ is the only candidate for our local maximum on ${\displaystyle \mathbb {R} ^{+}}$. Furthermore

${\displaystyle f_{n}'(x)=\underbrace {x^{n-1}e^{-x}} _{>0}(n-x)={\begin{cases}>0&\iff n-x>0\iff x<n,\\<0&\iff n-x<0\iff x>n.\end{cases}}}$

So there is ${\displaystyle f_{n}'(x)>0}$ for all ${\displaystyle x\in (0,n)}$ and ${\displaystyle f_{n}'(x)<0}$ for all ${\displaystyle x\in (n,n+1)}$. According to the sufficient criterion ${\displaystyle {\tilde {x}}=n}$ is therefore a (strict) local maximum of ${\displaystyle f_{n}}$.

Proof step: ${\displaystyle f_{n}}$ has a global maximum at ${\displaystyle {\tilde {x}}=n}$

Since ${\displaystyle \mathbb {R} ^{+}}$ has no boundary points, according to part 1 only ${\displaystyle {\tilde {x}}=n}$ can be considered for a global maximum of ${\displaystyle f_{n}}$. We have to show ${\displaystyle f_{n}(x)\leq f_{n}(n)}$ for all ${\displaystyle x\in \mathbb {R} ^{+}}$. Because of ${\displaystyle f_{n}'(x)>0}$ for all ${\displaystyle x\in (0,n)}$, the function ${\displaystyle f_{n}}$ is strictly monotonously increasing according to the monotony criterion on ${\displaystyle (0,n]}$. Therefore there is for all ${\displaystyle x\in (0,n)}$

${\displaystyle f_{n}(x)<f_{n}(n)}$

Analogously it follows from ${\displaystyle f_{n}'(x)<0}$ for all ${\displaystyle x\in (n,\infty )}$ that ${\displaystyle f_{n}}$ is strictly monotonously decreasing on ${\displaystyle [n,\infty )}$ . So for all ${\displaystyle x\in (n,\infty )}$

${\displaystyle f_{n}(x)<f_{n}(n)}$

In total, there is ${\displaystyle f_{n}(n)\geq f_{n}(x)}$ for all ${\displaystyle x\in \mathbb {R} _{0}^{+}}$ (at ${\displaystyle x=0}$ there is ${\displaystyle f_{n}(0)=0\leq f_{n}(n)}$). Thus ${\displaystyle {\tilde {x}}=n}$ is a global maximum of ${\displaystyle f_{n}}$. It remains to justify that ${\displaystyle {\hat {x}}=0}$ is a global minimum of ${\displaystyle f_{n}}$.

Proof step: ${\displaystyle f_{n}}$ has a global minimum at ${\displaystyle {\hat {x}}=0}$

There is ${\displaystyle f_{n}(0)=0}$ and ${\displaystyle f_{n}(x)>0}$ for all ${\displaystyle x>0}$. So ${\displaystyle {\hat {x}}=0}$ a global and therefore also a local minimum of ${\displaystyle f_{n}}$.
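The two claims of the exercise, maximum at ${\displaystyle x=n}$ and minimum at ${\displaystyle x=0}$, can be spot-checked on a grid (grid spacing and range are our own sampling choices; this is evidence, not a proof):

```python
import math

# Spot check: f_n(x) = x^n * e^(-x) is largest at x = n and vanishes
# only at x = 0.
def f(n, x):
    return x ** n * math.exp(-x)

for n in (1, 2, 5):
    grid = [i / 10 for i in range(0, 301)]          # [0, 30] in steps of 0.1
    assert all(f(n, x) <= f(n, n) for x in grid)    # global max at x = n
    assert all(f(n, x) > 0 for x in grid if x > 0)  # positive away from 0
    assert f(n, 0) == 0                             # global min at x = 0
```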

### Conditions are not necessary

The condition in the previous theorem is a sufficient condition for the existence of an extremum. However, it is not a necessary condition: it is not true that an extremum exists exactly when one of the conditions in the previous theorem is fulfilled. The following example illustrates this.

Example

We consider the function

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,x\mapsto {\begin{cases}x^{2}\sin \left({\frac {1}{x}}\right)+2x^{2}&,x\neq 0\\0&,x=0\end{cases}}}$

We have already seen that the function

${\displaystyle g:\mathbb {R} \to \mathbb {R} ,x\mapsto {\begin{cases}x^{2}\sin \left({\frac {1}{x}}\right)&,x\neq 0\\0&,x=0\end{cases}}}$

is differentiable and

${\displaystyle g':\mathbb {R} \to \mathbb {R} ,x\mapsto {\begin{cases}2x\cdot \sin \left({\frac {1}{x}}\right)-\cos \left({\frac {1}{x}}\right)&,x\neq 0\\0&,x=0\end{cases}}}$

For all ${\displaystyle x\in \mathbb {R} }$ there is ${\displaystyle f(x)=g(x)+2x^{2}}$. Consequently, ${\displaystyle f}$ is differentiable with the derivative function

${\displaystyle f':\mathbb {R} \to \mathbb {R} ,x\mapsto {\begin{cases}2x\cdot \sin \left({\frac {1}{x}}\right)-\cos \left({\frac {1}{x}}\right)+4x&,x\neq 0\\0&,x=0\end{cases}}}$

For all ${\displaystyle x\neq 0}$ there is ${\displaystyle \sin \left({\tfrac {1}{x}}\right)\geq -1}$ and ${\displaystyle x^{2}>0}$. Thus ${\displaystyle x^{2}\sin \left({\tfrac {1}{x}}\right)\geq -x^{2}}$. Hence

${\displaystyle f(x)=x^{2}\sin \left({\tfrac {1}{x}}\right)+2x^{2}\geq -x^{2}+2x^{2}=x^{2}>0}$

There is ${\displaystyle f(0)=0}$ and therefore the function ${\displaystyle f}$ has a (global) minimum at the position ${\displaystyle x=0}$. Next we show that there is no ${\displaystyle b\in \mathbb {R} ^{+}}$ such that for all ${\displaystyle x\in (0,b)}$ the inequality ${\displaystyle f'(x)>0}$ is fulfilled. For this we construct a sequence ${\displaystyle (x_{n})_{n\in \mathbb {N} }}$ in ${\displaystyle \mathbb {R} ^{+}}$ which converges to ${\displaystyle 0}$ and has the property that ${\displaystyle f'(x_{n})\leq 0}$ for all ${\displaystyle n\in \mathbb {N} }$. We define for all ${\displaystyle n\in \mathbb {N} }$

${\displaystyle x_{n}:={\frac {1}{2\pi n}}}$

Let ${\displaystyle n\in \mathbb {N} }$. Then, there is

{\displaystyle {\begin{aligned}&f'(x_{n})\\[0.3em]=\ &2x_{n}\cdot \sin \left({\frac {1}{x_{n}}}\right)-\cos \left({\frac {1}{x_{n}}}\right)+4x_{n}\\[0.3em]=\ &2{\frac {1}{2\pi n}}\cdot \sin \left({\frac {1}{\tfrac {1}{2\pi n}}}\right)-\cos \left({\frac {1}{\tfrac {1}{2\pi n}}}\right)+4{\frac {1}{2\pi n}}\\[0.3em]=\ &{\frac {1}{\pi n}}\cdot \sin \left(2\pi n\right)-\cos \left(2\pi n\right)+4{\frac {1}{2\pi n}}\\[0.3em]=\ &{\frac {1}{\pi n}}\cdot 0-1+4{\frac {1}{2\pi n}}\\[0.3em]&{\color {Gray}\left\downarrow \ \pi >2\land n\geq 1\implies {\frac {1}{\pi }}<{\frac {1}{2}}\land {\frac {1}{n}}\leq 1\right.}\\[0.3em]\leq \ &0-1+4\cdot {\frac {1}{2\cdot 2\cdot 1}}\\[0.3em]=\ &0-1+1=0\end{aligned}}}

Since ${\displaystyle x_{n}\to 0}$, every interval ${\displaystyle (0,b)}$ contains some ${\displaystyle x_{n}}$ with ${\displaystyle f'(x_{n})\leq 0}$. So ${\displaystyle f'}$ does not change its sign at ${\displaystyle 0}$ in the sense of the theorem, although ${\displaystyle f}$ has a minimum there.
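The computation above can be confirmed numerically for many ${\displaystyle n}$ at once (the range of ${\displaystyle n}$ tested is our own choice):

```python
import math

# The derivative f'(x) = 2x*sin(1/x) - cos(1/x) + 4x is non-positive
# along the sequence x_n = 1/(2*pi*n), even though x_n -> 0 and f has
# its minimum at 0.
def df(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x) + 4 * x

for n in range(1, 50):
    xn = 1 / (2 * math.pi * n)
    assert df(xn) <= 0
```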

## Sufficient condition: sign of the second derivative

### Theorem

If ${\displaystyle f}$ is twice differentiable, we can also use the following sufficient criterion:

Theorem (Sufficient condition for extrema via second derivative)

Let ${\displaystyle f:(a,b)\to \mathbb {R} }$ be a twice differentiable function, and let ${\displaystyle f'({\tilde {x}})=0}$ hold for some ${\displaystyle {\tilde {x}}\in (a,b)}$. Then,

1. ${\displaystyle f}$ has a strict maximum at ${\displaystyle {\tilde {x}}}$, if ${\displaystyle f''({\tilde {x}})<0}$ holds.
2. ${\displaystyle f}$ has a strict minimum at ${\displaystyle {\tilde {x}}}$, if ${\displaystyle f''({\tilde {x}})>0}$ holds.

Proof (Sufficient condition for extrema via second derivative)

1st statement: ${\displaystyle f'({\tilde {x}})=0}$, ${\displaystyle f''({\tilde {x}})<0}$ ${\displaystyle \Longrightarrow }$ ${\displaystyle f}$ has a strict maximum at ${\displaystyle {\tilde {x}}}$

There is

${\displaystyle f''({\tilde {x}})=\lim _{x\to {\tilde {x}}}{\frac {f'(x)-f'({\tilde {x}})}{x-{\tilde {x}}}}<0}$

Therefore there is an ${\displaystyle \epsilon >0}$ such that for all ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )}$ with ${\displaystyle x\neq {\tilde {x}}}$ there is:

${\displaystyle {\frac {f'(x)-\overbrace {f'({\tilde {x}})} ^{=0}}{x-{\tilde {x}}}}={\frac {f'(x)}{x-{\tilde {x}}}}<0}$

If now ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}})}$, then because of ${\displaystyle x-{\tilde {x}}<0}$ we immediately get ${\displaystyle f'(x)>0}$. If, on the other hand, ${\displaystyle x\in ({\tilde {x}},{\tilde {x}}+\epsilon )}$, then because of ${\displaystyle x-{\tilde {x}}>0}$ it follows that ${\displaystyle f'(x)<0}$. According to the first sufficient criterion, ${\displaystyle {\tilde {x}}}$ is therefore a strict local maximum of ${\displaystyle f}$.

2nd statement: ${\displaystyle f'({\tilde {x}})=0}$, ${\displaystyle f''({\tilde {x}})>0}$ ${\displaystyle \Longrightarrow }$ ${\displaystyle f}$ has a strict minimum at ${\displaystyle {\tilde {x}}}$

There is

${\displaystyle f''({\tilde {x}})=\lim _{x\to {\tilde {x}}}{\frac {f'(x)-f'({\tilde {x}})}{x-{\tilde {x}}}}>0}$

Therefore there is an ${\displaystyle \epsilon >0}$ such that for all ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}}+\epsilon )}$ with ${\displaystyle x\neq {\tilde {x}}}$ there is:

${\displaystyle {\frac {f'(x)-f'({\tilde {x}})}{x-{\tilde {x}}}}={\frac {f'(x)}{x-{\tilde {x}}}}>0}$

If now ${\displaystyle x\in ({\tilde {x}}-\epsilon ,{\tilde {x}})}$, then because of ${\displaystyle x-{\tilde {x}}<0}$ the inequality ${\displaystyle f'(x)<0}$ holds. Furthermore, ${\displaystyle x-{\tilde {x}}>0}$ for ${\displaystyle x\in ({\tilde {x}},{\tilde {x}}+\epsilon )}$ implies that then ${\displaystyle f'(x)>0}$. According to the first sufficient criterion, ${\displaystyle {\tilde {x}}}$ is a strict local minimum of ${\displaystyle f}$.

Warning

This sufficient criterion is also not necessary. Since we had deduced it from the first criterion, it is even weaker than this one. An example is given by the function

${\displaystyle f:\mathbb {R} \to \mathbb {R} ,\ f(x)=x^{4}}$

As we saw above, ${\displaystyle f}$ has a strict local minimum at ${\displaystyle {\tilde {x}}=0}$. However, the second sufficient criterion is not applicable, since

${\displaystyle f''({\tilde {x}})=12{\tilde {x}}^{2}=12\cdot 0^{2}=0}$

This can be remedied by extending the second sufficient criterion, which we will discuss later.
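A quick numeric sketch (our own illustrative code, not part of the article) confirms the warning: for f(x) = x⁴ the second-derivative test is inconclusive at 0, even though 0 is a strict local minimum:

```python
# f(x) = x**4 has a strict local minimum at 0, yet its second derivative
# f''(x) = 12*x**2 vanishes there, so the second-derivative test says nothing.
def f(x):
    return x ** 4

def f2(x):
    return 12 * x ** 2  # second derivative of x**4

assert f2(0) == 0                        # criterion 2 is inconclusive at 0
xs = [x / 10 for x in range(-5, 6) if x != 0]
assert all(f(x) > f(0) for x in xs)      # yet 0 is a strict local minimum
```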

### Example and Exercise

Example (Checking polynomials for extrema)

We again look at the polynomial function ${\displaystyle g:(-2,0)\to \mathbb {R} }$ with ${\displaystyle g(x)=x^{3}-3x}$. As we already know,

${\displaystyle g'(x)=3x^{2}-3=3(x-1)(x+1)}$

Hence, on ${\displaystyle (-2,0)}$, we have ${\displaystyle g'(x)=0\iff x=-1}$. Further,

${\displaystyle g''(x)=6x}$

and therefore ${\displaystyle g''(-1)=-6<0}$. So ${\displaystyle g}$ has a strict local maximum at ${\displaystyle {\tilde {x}}=-1}$.
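The computation in this example is easy to verify numerically; here is a minimal sketch (our own illustrative code) with g' and g'' hard-coded:

```python
# Checks for g(x) = x**3 - 3*x on (-2, 0): the only critical point in the
# interval is x = -1, and the second derivative is negative there.
def g_prime(x):
    return 3 * x ** 2 - 3   # g'(x) = 3(x-1)(x+1)

def g_second(x):
    return 6 * x            # g''(x)

assert g_prime(-1) == 0     # critical point inside (-2, 0)
assert g_second(-1) == -6   # g''(-1) < 0: strict local maximum at x = -1
```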

Exercise (Determining extrema of a function)

Consider the function

${\displaystyle f:\left[-{\tfrac {1}{2}},\infty \right)\to \mathbb {R} ,\ f(x)={\tfrac {3}{2}}x^{2}-4x-{\tfrac {4}{x+1}}}$

Determine all local and global extrema of ${\displaystyle f}$.

Solution (Determining extrema of a function)

Proof step: Determining local extrema of ${\displaystyle f}$

${\displaystyle f}$ is differentiable on ${\displaystyle \left(-{\tfrac {1}{2}},\infty \right)}$ with

${\displaystyle f'(x)=3x-4+{\tfrac {4}{(x+1)^{2}}}}$

For local extrema in ${\displaystyle \left(-{\tfrac {1}{2}},\infty \right)}$ there must necessarily be ${\displaystyle f'(x)=0}$. Now

{\displaystyle {\begin{aligned}f'(x)=0&\iff 3x-4+{\tfrac {4}{(x+1)^{2}}}=0\\&\iff 3x(x+1)^{2}-4(x+1)^{2}+4=0\\&\iff 3x(x^{2}+2x+1)-4(x^{2}+2x+1)+4=0\\&\iff 3x^{3}+6x^{2}+3x-4x^{2}-8x-4+4=0\\&\iff 3x^{3}+2x^{2}-5x=0\\&\iff 3x\left(x^{2}+{\tfrac {2}{3}}x-{\tfrac {5}{3}}\right)=0\\&\iff 3x\left(x+{\tfrac {5}{3}}\right)(x-1)=0\end{aligned}}}

This equation is fulfilled on ${\displaystyle \left(-{\tfrac {1}{2}},\infty \right)}$ for ${\displaystyle x_{1}=0}$ and ${\displaystyle x_{2}=1}$. So ${\displaystyle x_{1}}$ and ${\displaystyle x_{2}}$ are candidates for local extrema. ${\displaystyle f}$ is also twice differentiable on ${\displaystyle \left(-{\tfrac {1}{2}},\infty \right)}$ with

${\displaystyle f''(x)=3-{\tfrac {8}{(x+1)^{3}}}}$

Hence

${\displaystyle f''(0)=3-{\tfrac {8}{1}}=3-8=-5<0}$

According to our second criterion, ${\displaystyle f}$ has a strict local maximum at ${\displaystyle x_{1}=0}$ . Furthermore

${\displaystyle f''(1)=3-{\tfrac {8}{2^{3}}}=3-1=2>0}$

So ${\displaystyle f}$ has a strict local minimum at ${\displaystyle x_{2}=1}$. Now we still have to examine the boundary point ${\displaystyle x_{3}=-{\tfrac {1}{2}}}$, because our criteria do not apply there! Since ${\displaystyle f}$ has a local maximum at ${\displaystyle x_{1}=0}$ and ${\displaystyle f'}$ has no further zeros on ${\displaystyle \left[-{\tfrac {1}{2}},0\right)}$, the function ${\displaystyle f}$ is strictly monotonically increasing on ${\displaystyle \left(-{\tfrac {1}{2}},0\right)}$. So ${\displaystyle f\left(-{\tfrac {1}{2}}\right)<f(x)}$ for all ${\displaystyle x\in \left(-{\tfrac {1}{2}},0\right)}$. Therefore ${\displaystyle f}$ has a strict local minimum at ${\displaystyle x_{3}=-{\tfrac {1}{2}}}$.

Proof step: Determining global extrema of ${\displaystyle f}$

From the first step of the proof, we obtain the following monotonicity table for ${\displaystyle f}$ ("smi" = strictly monotonically increasing, "smd" = strictly monotonically decreasing):

${\displaystyle {\begin{array}{|c|c|c|c|}\hline x\in &(-{\tfrac {1}{2}},0)&(0,1)&(1,\infty )\\\hline f&{\text{smi}}&{\text{smd}}&{\text{smi}}\\\hline \end{array}}}$

Furthermore, ${\displaystyle f(-{\tfrac {1}{2}})=-5{\tfrac {5}{8}}<-4{\tfrac {1}{2}}=f(1)}$. Thus ${\displaystyle f}$ attains its global minimum at ${\displaystyle x_{3}=-{\tfrac {1}{2}}}$. Finally,

{\displaystyle {\begin{aligned}\lim _{x\to \infty }f(x)&=\lim _{x\to \infty }{\tfrac {3}{2}}x^{2}-4x-{\tfrac {4}{x+1}}\\[0.3em]&=\lim _{x\to \infty }\underbrace {x^{2}} _{\to \infty }\cdot \underbrace {\left({\tfrac {3}{2}}-{\tfrac {4}{x}}-{\tfrac {4}{x^{2}(x+1)}}\right)} _{\to {\tfrac {3}{2}}}=\infty \end{aligned}}}

So ${\displaystyle f}$ is unbounded from above and therefore has no global maximum.
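The values computed in this solution can be double-checked numerically. The following is a small sketch (our own illustrative code) using the formulas for f, f' and f'' derived above:

```python
# Checks for f(x) = (3/2)x**2 - 4x - 4/(x+1) on [-1/2, infinity).
def f(x):
    return 1.5 * x ** 2 - 4 * x - 4 / (x + 1)

def f1(x):
    return 3 * x - 4 + 4 / (x + 1) ** 2   # f'(x)

def f2(x):
    return 3 - 8 / (x + 1) ** 3           # f''(x)

assert f1(0) == 0 and f1(1) == 0          # critical points x1 = 0, x2 = 1
assert f2(0) == -5 and f2(1) == 2         # strict max at 0, strict min at 1
assert f(-0.5) == -5.625                  # f(-1/2) = -5 5/8 ...
assert f(1) == -4.5                       # ... < f(1) = -4 1/2: global minimum
```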

## Extended sufficient criterion

The problem with functions like ${\displaystyle f(x)=x^{4}}$ is that ${\displaystyle f''(0)=12\cdot 0^{2}=0}$, so the second derivative vanishes and we cannot decide from it alone whether and what kind of extremum is present. If we differentiate ${\displaystyle f}$ two more times, we get ${\displaystyle f^{(4)}(0)=24>0}$. The question now is whether we can conclude from this, analogously to the second criterion, that ${\displaystyle f}$ has a strict local minimum at ${\displaystyle {\tilde {x}}=0}$.

The answer is "yes" - but we need to be careful. Let us look at the example ${\displaystyle g(x)=x^{3}}$. In contrast to ${\displaystyle f}$, it has no extremum at ${\displaystyle {\tilde {x}}=0}$, but a saddle point, even though the third derivative also satisfies ${\displaystyle g^{(3)}(0)=6>0}$. The difference is that here the smallest order of derivative that does not vanish equals ${\displaystyle 3}$ and is therefore odd. With ${\displaystyle f(x)=x^{4}}$, on the other hand, the smallest such order is ${\displaystyle 4}$, so it is even. We can generalize this to the following criterion:

Theorem (sufficient criterion 2b for local extrema)

Let ${\displaystyle f:(a,b)\to \mathbb {R} }$ be an ${\displaystyle n}$-times differentiable function (${\displaystyle n\in \mathbb {N} }$) such that ${\displaystyle f^{(n)}}$ is continuous at ${\displaystyle {\tilde {x}}\in (a,b)}$. Further, let

{\displaystyle {\begin{aligned}f'({\tilde {x}})&=f''({\tilde {x}})=\ldots =f^{(n-1)}({\tilde {x}})=0\\f^{(n)}({\tilde {x}})&\neq 0\end{aligned}}}

Then, there is:

• If ${\displaystyle n}$ is even, then ${\displaystyle f}$ has a strict local maximum at ${\displaystyle {\tilde {x}}}$ in the case ${\displaystyle f^{(n)}({\tilde {x}})<0}$, and a strict local minimum in the case ${\displaystyle f^{(n)}({\tilde {x}})>0}$.
• If ${\displaystyle n}$ is odd, then ${\displaystyle f}$ has a saddle point at ${\displaystyle {\tilde {x}}}$.
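The case distinction of this criterion can be phrased as a small decision procedure. The following helper is a hypothetical sketch of our own (not from the article): it takes the values f'(x̃), f''(x̃), … at the candidate point, where the last nonzero entry is the first non-vanishing derivative:

```python
def classify(derivs):
    """derivs[k] = value of the (k+1)-th derivative of f at the point.
    Returns the classification given by the extended sufficient criterion."""
    for k, d in enumerate(derivs):
        if d != 0:
            n = k + 1                      # order of first non-vanishing derivative
            if n == 1:
                return "no critical point"  # f'(x~) != 0: criterion not applicable
            if n % 2 == 1:
                return "saddle point"       # n odd
            return "maximum" if d < 0 else "minimum"  # n even: sign decides
    return "inconclusive"                   # all given derivatives vanish

# f(x) = x**4 at 0: f' = f'' = f''' = 0, f'''' = 24  -> strict local minimum
assert classify([0, 0, 0, 24]) == "minimum"
# g(x) = x**3 at 0: g' = g'' = 0, g''' = 6           -> saddle point
assert classify([0, 0, 6]) == "saddle point"
```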

Summary of proof (sufficient criterion 2b for local extrema)

For the proof we need the Taylor formula for ${\displaystyle f}$ up to order ${\displaystyle n-1}$ with the Lagrange remainder

${\displaystyle f(x)=f({\tilde {x}})+{\frac {f'({\tilde {x}})}{1!}}(x-{\tilde {x}})^{1}+{\frac {f''({\tilde {x}})}{2!}}(x-{\tilde {x}})^{2}+\ldots +{\frac {f^{(n-1)}({\tilde {x}})}{(n-1)!}}(x-{\tilde {x}})^{n-1}+{\frac {f^{(n)}(\xi )}{n!}}(x-{\tilde {x}})^{n}}$

Proof (sufficient criterion 2b for local extrema)

Proof step: ${\displaystyle f^{(n)}({\tilde {x}})<0}$ and ${\displaystyle n}$ even ${\displaystyle \Longrightarrow }$ ${\displaystyle f}$ has a strict local maximum at ${\displaystyle {\tilde {x}}}$

Since ${\displaystyle f^{(n)}}$ is continuous at ${\displaystyle {\tilde {x}}}$, there is a ${\displaystyle \delta >0}$ such that ${\displaystyle f^{(n)}(x)<0}$ for all ${\displaystyle x\in ({\tilde {x}}-\delta ,{\tilde {x}}+\delta )}$. According to Taylor's theorem, for every ${\displaystyle x\in ({\tilde {x}}-\delta ,{\tilde {x}}+\delta )}$ there is some ${\displaystyle \xi \in (x,{\tilde {x}})}$ (or ${\displaystyle \xi \in ({\tilde {x}},x)}$) with

${\displaystyle f(x)=f({\tilde {x}})+\sum _{k=1}^{n-1}{\frac {f^{(k)}({\tilde {x}})}{k!}}(x-{\tilde {x}})^{k}+{\frac {f^{(n)}(\xi )}{n!}}(x-{\tilde {x}})^{n}}$

Since ${\displaystyle f'({\tilde {x}})=f''({\tilde {x}})=\ldots =f^{(n-1)}({\tilde {x}})=0}$, it follows that

${\displaystyle f(x)=f({\tilde {x}})+\underbrace {\frac {f^{(n)}(\xi )}{n!}} _{<0}\underbrace {(x-{\tilde {x}})^{n}} _{\geq 0}\leq f({\tilde {x}})}$

If ${\displaystyle x\neq {\tilde {x}}}$, then ${\displaystyle (x-{\tilde {x}})^{n}>0}$, and so we even have ${\displaystyle f(x)<f({\tilde {x}})}$ for all ${\displaystyle x\in ({\tilde {x}}-\delta ,{\tilde {x}}+\delta )\setminus \{{\tilde {x}}\}}$. So ${\displaystyle f}$ has a strict local maximum at ${\displaystyle {\tilde {x}}}$. The proof that ${\displaystyle f}$ has a strict local minimum at ${\displaystyle {\tilde {x}}}$ if ${\displaystyle f^{(n)}({\tilde {x}})>0}$ is analogous.

Proof step: ${\displaystyle f^{(n)}({\tilde {x}})\neq 0}$ and ${\displaystyle n}$ odd ${\displaystyle \Longrightarrow }$ ${\displaystyle f}$ has a saddle point at ${\displaystyle {\tilde {x}}}$

As in the proof of part 1, since ${\displaystyle f'({\tilde {x}})=f''({\tilde {x}})=\ldots =f^{(n-1)}({\tilde {x}})=0}$, Taylor's theorem yields:

${\displaystyle f(x)=f({\tilde {x}})+{\frac {f^{(n)}(\xi )}{n!}}(x-{\tilde {x}})^{n}}$

for some ${\displaystyle \xi \in (x,{\tilde {x}})}$ (or ${\displaystyle \xi \in ({\tilde {x}},x)}$). But since ${\displaystyle n}$ is now odd, we have ${\displaystyle (x-{\tilde {x}})^{n}>0}$ if ${\displaystyle x>{\tilde {x}}}$, and ${\displaystyle (x-{\tilde {x}})^{n}<0}$ if ${\displaystyle x<{\tilde {x}}}$. If now ${\displaystyle f^{(n)}({\tilde {x}})>0}$, then, since by continuity ${\displaystyle f^{(n)}(\xi )>0}$ for ${\displaystyle x}$ close enough to ${\displaystyle {\tilde {x}}}$, we get ${\displaystyle f(x)>f({\tilde {x}})}$ for ${\displaystyle x>{\tilde {x}}}$ and ${\displaystyle f(x)<f({\tilde {x}})}$ for ${\displaystyle x<{\tilde {x}}}$. Conversely, if ${\displaystyle f^{(n)}({\tilde {x}})<0}$, the inequalities hold the other way round. In either case, ${\displaystyle {\tilde {x}}}$ is a saddle point.