2 January, 07:19

Note that f(x) is defined for every real x, but it has no roots. That is, there is no x∗ such that f(x∗) = 0. Nonetheless, we can find an interval [a, b] such that f(a) < 0 < f(b): just choose a = -1, b = 1. Why can't we use the intermediate value theorem to conclude that f has a zero in the interval [-1, 1]?

Answers (1)
  1. 2 January, 10:16
    Answer: Hello there!

    Things that we know here:

    f(x) is defined for every real x

    f(a) < 0 < f(b), where we take a = -1 and b = 1

    and the problem asks: "Why can't we use the intermediate value theorem to conclude that f has a zero in the interval [-1, 1]?"

    The theorem says:

    if f is continuous on the interval [a, b], and f(a) < u < f(b), then there exists a number c in [a, b] such that f(c) = u

    Notice that the theorem requires the function to be continuous on the interval, and the problem never tells us that f(x) is continuous, so we can't apply the theorem. In fact, since f changes sign on [-1, 1] but has no root there, the contrapositive of the theorem tells us f must be discontinuous somewhere in [-1, 1].
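
    Since the question never gives a formula for f, here is a minimal Python sketch using an assumed step function (my choice of example, not from the problem) that matches all the stated facts: defined for every real x, f(-1) < 0 < f(1), yet no root. The jump at x = 0 is exactly the kind of discontinuity that blocks the intermediate value theorem:

    ```python
    # Assumed example: a step function that jumps from -1 to 1 at x = 0.
    # It is defined for every real x and satisfies f(-1) < 0 < f(1),
    # but it never equals 0 anywhere.
    def f(x):
        return -1.0 if x <= 0 else 1.0

    a, b = -1.0, 1.0
    print(f(a) < 0 < f(b))  # True: the sign change is there

    # Bisection relies on the IVT. It homes in on the jump at x = 0,
    # but f is never 0 there: the missing continuity hypothesis is
    # exactly what breaks the conclusion.
    for _ in range(50):
        m = (a + b) / 2
        if f(a) * f(m) < 0:
            b = m
        else:
            a = m
    print(a, b, f(a), f(b))  # interval shrinks around 0, yet f(a) = -1.0 and f(b) = 1.0 throughout
    ```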