Recall premise 1 of SA: that I don't know that I have two hands unless I know that I am not a handless brain in a vat. G. E. Moore famously noted that this thought cuts both ways. One can just as well use it to argue that, since I do know that I have two hands, therefore I also know that I am not a brain in a vat.14 Neo-Mooreans follow Moore on this tack, but try to provide an account of knowledge to back it up. That is, they try to explain how one knows, in the typical case, that skeptical scenarios are false. In this sense, neo-Mooreans are involved in the same project as, and incur a burden analogous to, sensitivity theorists. They are in the business of providing an account of knowledge that can explain and support a theoretical response to the skeptical argument under consideration. Hence, like Moore, they deny premise 2 of SA: the claim that I don't know that skeptical possibilities are false.
One example of this neo-Moorean approach is provided by James Pryor, who offers an account of perceptual justification (and perceptual knowledge) on which one can be justified in believing that one has two hands without being antecedently justified in believing that skeptical scenarios are false. Rather, one can be justified in believing that one has two hands, and even know that one does, on the basis of one's perceptual experience alone, without further evidence about one's perceptual conditions, the reliability of one's experience, the reliability of one's perceptual powers, or the like. Having gained this sort of justification via perceptual experience, one can then go on to reason that various skeptical scenarios are false, mimicking Moore's reasoning above.16
1. safety theories
We said that S's belief that p is sensitive just in case it satisfies the following condition:
If p were false, S would not believe that p.
Equivalently, on the standard semantics for subjunctive conditionals: in the closest possible world where p is false, S does not believe that p.
Ernest Sosa has argued that a belief is better safe than sensitive, where S's belief that p is safe just in case it satisfies the following condition:
S would believe that p only if p were true.
Here Sosa means to propose an alternative necessary condition on knowledge. Accordingly, we have:
Safety. S knows that p only if: S would believe that p only if p were true.18
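The contrast between the two conditions can be made explicit in a standard symbolization (an illustrative gloss, not the author's own notation), writing 'Bp' for 'S believes that p' and '□→' for the subjunctive conditional:

```latex
% Sensitivity: if p were false, S would not believe that p
\text{Sensitivity:}\quad \neg p \;\Box\!\!\rightarrow\; \neg Bp
% Safety: S would believe that p only if p were true
\text{Safety:}\quad Bp \;\Box\!\!\rightarrow\; p
```

Safety has the syntactic form of sensitivity's contrapositive; but since subjunctive conditionals do not contrapose, the two conditions are not equivalent, and a belief can satisfy one without satisfying the other.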
The intuitive idea here is that, in cases of knowledge, one could not easily have been wrong. Alternatively, not easily would S believe that p when p is false. One's belief is therefore ‘safe’ in that sense. There is some controversy over how to best capture this intuitive idea, however. First, we can distinguish between a strong and a weak reading of the subjunctive conditional in Safety.
Strong Safety. S knows that p only if: In close possible worlds, always if S believes that p then p is true. (In close possible worlds, never does S believe that p and p is false.)
Weak Safety. S knows that p only if: In close possible worlds, usually if S believes that p then p is true. (In close possible worlds, almost never does S believe that p and p is false.)
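Read in possible-worlds terms, the two readings differ only in the strength of the quantifier over close worlds. A rough symbolization (again illustrative, with W_c the set of worlds close to actuality, B_w p for 'S believes that p in w', and p_w for 'p is true in w'):

```latex
\text{Strong Safety:}\quad \forall w \in W_c\, (B_w p \rightarrow p_w)
\text{Weak Safety:}\quad \text{for most } w \in W_c\, (B_w p \rightarrow p_w)
```

Here 'most' stands proxy for the informal 'usually'/'almost never' of the text, not for any precise measure over worlds.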
Plausibly, Strong Safety does no better with the counterexamples raised against sensitivity theories in Part I. Those examples were constructed so that there are a small number of not-p-worlds very close to the actual world, ensuring that the sensitivity condition is violated in cases that seem to be knowledge. Notice, however, that the condition expressed in Strong Safety is also violated in those examples. For example, there is a close world where S believes that the rookie misses, but it is false that the rookie misses. Since Sosa means to endorse a safety condition as an alternative to a sensitivity condition, it makes sense to interpret him as endorsing Weak Safety.19
Pritchard has recently argued for a position between Strong Safety and Weak Safety. The guiding idea is that knowledge is most threatened by error in the closest nearby worlds. Pritchard ties this idea to more general considerations about luck. In general, Pritchard argues, judgments about luck place more weight on those counterfactual events that are modally closest. For example, suppose that a sniper fires two shots, the first of which misses your head by inches and the second of which misses by yards. Intuitively, you are luckier to be missed by the first shot than to be missed by the second. Put differently, you more easily could have been hit by the first shot than by the second. Pritchard's idea is that knowledge is intolerant of luck in a similar way, and that this is what the safety condition should capture. Accordingly, Pritchard suggests the following:
P-Safety. S knows that p only if 1) in close possible worlds, usually if S believes that p then p is true, and 2) in the closest possible worlds, always if S believes that p then p is true.20
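On the same illustrative symbolization (with W_c the set of close worlds and W_cc ⊆ W_c the closest among them), P-Safety conjoins the weak quantifier over the close worlds with the strong quantifier over the closest:

```latex
\text{P-Safety:}\quad \text{for most } w \in W_c\, (B_w p \rightarrow p_w)\;\wedge\;\forall w \in W_{cc}\, (B_w p \rightarrow p_w)
```

The second conjunct is what registers Pritchard's thought that error in the modally closest worlds, like the sniper's first shot, weighs most heavily.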
We may now consider the safety theorist's approach to SA. As with other versions of the neo-Moorean approach, the strategy is to deny premise 2 of SA. That is, the strategy is to deny that I don't know that skeptical possibilities are false. First, consider my ordinary beliefs about the world, such as my belief that I have two hands. In normal environments, where no brains in vats or deceiving demons exist, many such beliefs will count as safe. For example, in normal environments where I believe that I have two hands, I would believe this only if it were true. But the same is true of my belief that I am not a handless brain in a vat. Since there are no close worlds where I am a handless brain in a vat, there are no close worlds where I believe that I am not but I am. The safety condition is satisfied. Assuming that the remaining conditions on knowledge are satisfied as well, a safety theory allows that I know that I am not a handless brain in a vat. Similar considerations will apply to other skeptical scenarios.
Is it fair to assume that the remaining conditions on knowledge are satisfied? An obvious worry is that, taken by itself, the safety condition is quite weak, and so it is no surprise that it is easily satisfied. To adequately determine the anti-skeptical force of the position, we need to know what conditions must be added to safety to get sufficient conditions for knowledge. One kind of case is especially relevant in this context: namely, cases where S believes a proposition that is true in all close worlds, and therefore satisfies the safety condition by default.
For example, suppose that S is severely color-blind, so that he is unable to discriminate green from non-green objects. Suppose also that S forms a perceptual belief that the frog he sees is green (and S has no other reason for believing that the frog is green). Finally, suppose that frogs are by nature green, due to some feature of frog DNA. Accordingly, frogs are green in all nearby possible worlds. Given that S is color-blind, S could easily be wrong about the colors of other objects in the environment – he could easily mistake a non-green object for a green object. But S could not easily be wrong that the frog is green, since (we are supposing) this is a stable fact about a natural kind.
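The frog case trades on the fact that safety, read as a quantified conditional over close worlds, is satisfied vacuously whenever p itself holds throughout those worlds, regardless of how S's belief is formed. Writing W_c for the set of close worlds, B_w p for 'S believes that p in w', and p_w for 'p is true in w':

```latex
\text{If } \forall w \in W_c\, (p_w), \text{ then trivially } \forall w \in W_c\, (B_w p \rightarrow p_w)
```

Since the frog is green in every close world, the conditional holds whatever S's color vision is like.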
S's belief that the frog is green is safe, but clearly S does not know that the frog is green. What more is needed? Sosa argues that we must add a broader cognitive ability, one that gives rise to the safety of the particular belief in question. According to Sosa, a belief's safety ‘must be fundamentally through the exercise of an intellectual virtue’, where an intellectual virtue is a reliable or trustworthy source of truth.21 In the frog case above, S lacks a broader perceptual ability (for discriminating green objects from non-green objects) to ground the safety of his belief, and this explains why S does not know.
Sosa's suggestion, then, is to add a virtue-theoretic condition to a safety condition. In general, knowledge is true safe belief grounded in a broader intellectual virtue or ability. Greco has argued that a safety condition falls out of the virtue-theoretic condition.22 Following Sosa, we may think of an intellectual virtue as a kind of ability – an ability to form true beliefs and avoid false beliefs within a relevant range and under relevant conditions. Visual perception, for example, is (very roughly) an ability to form true beliefs about the locations and orientations of mid-size physical objects, under conditions of good light, etc. But abilities in general are to be understood in modal terms.
To see the point, consider that one might have success in the actual world without ability. In short, one might be lucky. For example, I might successfully hit a baseball in the actual world, but only because, by good fortune, my bat is in the right place at the right time. Ability requires counterfactual success – one has ability only if one continues to hit the ball in worlds that are relevantly close. For example, in worlds where the ball comes in a little higher or a little faster, the player with ability adjusts her swing accordingly.
But now the same applies to intellectual abilities. The person with excellent perception forms true beliefs and avoids false beliefs in the actual world, but continues to do so in relevantly close worlds. And that entails that a safety condition will be satisfied by her perceptual beliefs: In relevantly close worlds, if S (perceptually) believes that p then p is true.23
Recall that one of the advantages claimed for contextualism is that it explains the pull of skeptical considerations and the appeal of skeptical arguments such as SA. In particular, contextualism explains the appeal of premise 2 of SA, the claim that I do not know that I am not a handless brain in a vat. Neo-Moorean accounts insist that premise 2 is false: in the typical case one does know that one is not a handless brain in a vat. But then how do neo-Moorean accounts, and safety accounts in particular, explain the appeal that skeptical arguments such as SA have in the first place? Sosa's explanation is that it is easy to confuse safety with sensitivity. Beliefs about skeptical possibilities are not sensitive. Knowledge requires safety. But since sensitivity and safety are easily confused, one might mistakenly think that one's (in fact safe) belief fails to satisfy a necessary condition on knowledge.24
Duncan Pritchard offers a different explanation of pro-skeptical intuitions.25 According to Pritchard, in typical cases we do know that skeptical possibilities are false, but claiming that we know violates pragmatic rules governing what is assertable in a conversational context. In particular, Pritchard invokes Grice's ‘conversational maxim of evidence’, which states that one's assertions should be supported by adequate evidence. Crucially, however, what counts as ‘adequate evidence’ changes with the ‘purpose or direction’ of the conversational context. This implies that what counts as assertable changes as well, with the result that knowledge is sometimes unassertable in a context. The application to skeptical considerations and skeptical arguments should be evident: in philosophical contexts where skeptical arguments and considerations are in play, the direction and purpose of those conversations make knowledge claims about the external world unassertable, since knowledge of the external world is exactly what is at issue. For example, in such contexts it is inappropriate to assert that I am not a handless brain in a vat, or that I know that I am not, even if both claims are literally true.
2. the objection to safety theories: that's too easy!
A number of objections have been raised against safety theories of knowledge, but here I will focus on a family of objections directed specifically at the safety theorist's neo-Moorean response to skepticism. The unifying theme of this family of objections is that the safety approach makes responding to skepticism too easy.
One way to understand this charge is that a safety approach ‘begs the question’ against skepticism in an inappropriate way. Specifically, the approach assumes that there is no close world where one is a brain in a vat or the victim of a deceiving demon, and so it assumes that one is not so victimized in the actual world. But this objection recalls the ‘natural but misguided’ objection that we saw raised against sensitivity theories in Part I. Our response there was to clarify the nature and purpose of a sensitivity theory (and now a safety theory) of knowledge. Such theories do not try to give accounts that would be persuasive in a debate with a committed skeptic. Rather, the idea is to give an account of knowledge that challenges something in the skeptical argument – that explains where the skeptical argument goes wrong, and thereby explains how knowledge is possible. The goal is not to offer something that is dialectically appropriate in a debate. It is to offer something theoretically adequate in an explanation.
Another version of the objection charges that safety theories beg the question in a different sense: they deny some essential component of the skeptical problematic. For example, it is sometimes claimed that the skeptic is working with an internalist conception of epistemic justification (where ‘epistemic justification’ names the sort of justification that knowledge requires). Insofar as safety theories adopt an externalist approach to justification, they deny an essential assumption of the skeptic's reasoning. This sort of objection is surely misguided, however, in that any anti-skeptical approach must deny something in the skeptical argument. The argument of SA, for example, is formally valid. Any approach that means to avoid its conclusion must deny at least one of the argument's premises.26 Moreover, safety theories do not merely deny some assumption of the skeptic's reasoning – they motivate that move with a theory of knowledge that explains why the premise in question is false. That sort of theoretical work is not ‘too easy’.
A third version of the objection does not claim that safety theories make the response to skepticism too easy. Rather, the charge is that safety theories make knowledge too easy. For example, safety theories make it possible to know the world through safe perception. There is typically no requirement that the perceiver herself can explain how she knows, or that she can otherwise reconstruct the knowledge-producing process or circumstances. Here we should heed an insight from James Van Cleve, however – that knowledge of the world is either ‘easy or impossible’.27 We can gloss Van Cleve's point this way: either knowledge of the world is impossible or near impossible, as skepticism claims, or it is widespread, as common sense claims. If the latter, then a sufficiently anti-skeptical account must explain not only how knowledge is possible, but how it is widespread. In other words, a sufficiently anti-skeptical account must explain how knowledge of the world could be easy. Understood in this light, it is a virtue rather than a vice of safety theories that they do just that.
Nevertheless, I want to argue that there is something right about the ‘that's too easy’ objection. The way to articulate that thought, however, is to find fault with SA itself, and with the way that argument articulates the skeptical problem. If safety-based responses to SA fail to adequately address the problem of skepticism, it is because SA does not capture the problem adequately in the first place. I will explore arguments to that effect in Parts III and IV.