Note that f(x) is defined for every real x, but it has no roots. That is, there is no x∗ such that f(x∗) = 0. Nonetheless, we can find an interval [a, b] such that f(a) < 0 < f(b): just choose a = −1, b = 1. Why can’t we use the intermediate value theorem to conclude that f has a zero in the interval [−1, 1]?

asked by User Once

1 Answer


Answer: Hello there!

Things that we know here:

f(x) is defined for every real x

f(a) < 0 < f(b), where we assume a = -1 and b = 1

and the problem asks: "Why can’t we use the intermediate value theorem to conclude that f has a zero in the interval [−1, 1]?"

The theorem says:

if f is continuous on the interval [a, b] and f(a) < u < f(b), then there exists a number c in the interval [a, b] such that f(c) = u

Notice that the theorem requires the function to be continuous on the interval. In this problem we are not told that f is continuous on [−1, 1], so we can't apply the theorem. In fact, since f changes sign on [−1, 1] but has no root, f cannot be continuous there: a continuous function with f(−1) < 0 < f(1) would be forced to take the value 0 somewhere in between.
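
For a concrete picture (the question as posted does not include the definition of f, so this particular function is only an illustrative stand-in, not the one from the original problem), consider

f(x) = −1 for x ≤ 0, and f(x) = 1 for x > 0.

This f is defined for every real x, it satisfies f(−1) = −1 < 0 < 1 = f(1), and yet it has no root, because it jumps from −1 to 1 at x = 0 without ever taking the value 0. That jump is exactly the failure of continuity that keeps the intermediate value theorem from applying.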

answered by User Dfa