Answer: Hello there!
Here is what we know:
f(x) is defined for every real x
f(a) < 0 < f(b), where in this problem a = −1 and b = 1
and the problem asks: "Why can’t we use the intermediate value theorem to conclude that f has a zero in the interval [−1, 1]?"
The theorem says:
If f is continuous on the interval [a, b] and f(a) < u < f(b), then there exists a number c in [a, b] such that f(c) = u.
Notice that the theorem requires f to be continuous on the interval. In this problem we only know that f is defined for every real x, and being defined is not the same as being continuous. Since we don't know whether f is continuous on [−1, 1], we can't apply the theorem.
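
To see why continuity really matters, here is a counterexample of my own (it is not part of the original problem): a step function that is defined for every real x and satisfies f(−1) < 0 < f(1), yet has no zero anywhere.

```latex
\[
f(x) =
\begin{cases}
-1 & \text{if } x \le 0, \\
\phantom{-}1 & \text{if } x > 0.
\end{cases}
\]
```

This f is defined for every real x and f(−1) = −1 < 0 < 1 = f(1), but f(c) ≠ 0 for every c in [−1, 1]: the function jumps over 0 at x = 0 instead of passing through it. Continuity is exactly the hypothesis that rules out such jumps.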