Answer:
(1) The truth condition
(2) The belief condition
(3) The justification condition
Step-by-step explanation:
The Truth Condition
Most epistemologists have found it overwhelmingly plausible that what is false cannot be known. For example, Hillary Clinton did not win the 2016 US Presidential election. Consequently, nobody knows that Hillary Clinton won the election. One can only know things that are true.
Sometimes when people are very confident of something that turns out to be wrong, we use the word “knows” to describe their situation. Many people expected Clinton to win the election. Speaking loosely, one might even say that many people “knew” that Clinton would win the election—until she lost. Hazlett (2010) argues on the basis of data like this that “knows” is not a factive verb. Hazlett’s diagnosis is deeply controversial; most epistemologists will treat sentences like “I knew that Clinton was going to win” as a kind of exaggeration—as not literally true.
Something’s truth does not require that anyone can know or prove that it is true. Truth is a metaphysical, as opposed to epistemological, notion: truth is a matter of how things are, not how they can be shown to be. So when we say that only true things can be known, we’re not (yet) saying anything about how anyone can access the truth. As we’ll see, the other conditions have important roles to play here. Knowledge is a kind of relationship with the truth—to know something is to have a certain kind of access to a fact.
The Belief Condition
The belief condition is only slightly more controversial than the truth condition. The general idea behind the belief condition is that you can only know what you believe; failing to believe something precludes knowing it. “Belief” in the context of the JTB theory means full belief, or outright belief. In a weak sense, one might “believe” something by virtue of being pretty confident that it’s probably true—in this weak sense, someone who considered Clinton the favorite to win the election, even while recognizing a nontrivial possibility of her losing, might be said to have “believed” that Clinton would win. Outright belief is stronger (see, e.g., Fantl & McGrath 2009: 141; Nagel 2010: 413–414; Williamson 2005: 108; Gibbons 2013: 201). To believe outright that p, it is not enough to have fairly high confidence that p; outright belief is something closer to a commitment, to being sure.
Suppose Walter comes home after work to find out that his house has burned down. He says: “I don’t believe it”. Critics of the belief condition might argue that Walter knows that his house has burned down (he sees that it has), but, as his words indicate, he does not believe it. The standard response is that Walter’s avowal of disbelief is not literally true; what Walter wishes to convey by saying “I don’t believe it” is not that he really does not believe that his house has burned down, but rather that he finds it hard to come to terms with what he sees. If he genuinely didn’t believe it, some of his subsequent actions, such as phoning his insurance company, would be rather mysterious.
A more serious counterexample has been suggested by Colin Radford (1966). Suppose Albert is quizzed on English history. One of the questions is: “When did Queen Elizabeth die?” Albert doesn’t think he knows, but answers the question correctly. Moreover, he gives correct answers to many other questions to which he didn’t think he knew the answer. Radford takes such cases to show that Albert knows the answers even though he does not believe them, and hence that, contrary to the belief condition, knowledge without belief is possible.
The Justification Condition
Why is condition (3) necessary? Why not say that knowledge is true belief? The standard answer is that to identify knowledge with true belief would be implausible, because a belief might be true even though it is formed improperly. Suppose that William flips a coin and confidently believes—on no particular basis—that it will land tails. If by chance the coin does land tails, then William’s belief was true; but a lucky guess such as this one is not knowledge. For William to know, his belief must in some epistemic sense be proper or appropriate: it must be justified.