Answer:
See explanation below
Explanation:
In fact, the interval has to be closed and bounded (that is, of the form [a,b] with b>a). Otherwise, the statement is not necessarily true. For example, consider the function f(x) = -1/x defined on the open interval (0,1). f is continuous (a quotient of continuous functions with nonzero denominator), but it has no minimum value: it decreases without bound as x approaches 0.
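To make the counterexample precise, here is a short computation (the point x_0 is my own notation): every candidate for a minimizer is beaten by a point closer to 0.

```latex
% For any candidate x_0 in (0,1), the point x_0/2 also lies in (0,1)
% and gives a strictly smaller value of f:
f\!\left(\tfrac{x_0}{2}\right) = -\frac{2}{x_0} < -\frac{1}{x_0} = f(x_0),
\qquad \text{and indeed} \qquad
\inf_{x \in (0,1)} f(x) = \lim_{x \to 0^+} \left(-\frac{1}{x}\right) = -\infty .
```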
To show this result on the interval [a,b], the idea is the following:
We can use a previous theorem: if f is continuous on [a,b], there exists some number N such that N≤f(x) for all x∈[a,b] (that is, f is bounded below). Now we take the largest such N, that is, the largest number N satisfying N≤f(x) for all x∈[a,b]; this is known as the greatest lower bound, and it exists by the completeness of the real numbers.
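In symbols, a minimal sketch of this step (assuming the boundedness theorem just mentioned):

```latex
% The set of values of f on [a,b] is nonempty and bounded below,
% so it has a greatest lower bound, by the completeness of the real numbers:
N = \inf \{\, f(x) : x \in [a,b] \,\},
\qquad \text{so that} \qquad
N \le f(x) \ \text{for all } x \in [a,b].
```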
The number N is the candidate for the minimum value of f. Next, we have to show that there exists some p∈[a,b] such that f(p)=N. To do this, we must use the continuity of f on [a,b]. There are several ways to do it, usually requiring the epsilon-delta definition of continuity; one of them is sketched below.
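One standard way to finish this step is a contradiction argument using an auxiliary function g (my notation, not part of the original answer); a sketch, assuming the boundedness theorem is also available for upper bounds:

```latex
% Suppose, for contradiction, that f(x) > N for every x in [a,b]. Then
g(x) = \frac{1}{f(x) - N}
% is continuous and positive on [a,b], hence bounded above by some M > 0
% (boundedness theorem, upper-bound version). Rearranging g(x) <= M gives
f(x) - N \ge \frac{1}{M}
\quad \Longrightarrow \quad
f(x) \ge N + \frac{1}{M} \ \text{for all } x \in [a,b],
% which contradicts the fact that N is the *greatest* lower bound of the
% values of f. Hence f(p) = N for some p in [a,b], and N is the minimum value.
```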
This is just a description of the ideas involved; of course, a rigorous proof would need more technical detail, depending on the theorems you are allowed to use.