(2) Discuss the following claim.
Is it true or false, and why?
‘Even if scepticism is right, and we lack much of the knowledge we
ordinarily take ourselves to have, it’s not too worrying. We can acknowledge that scepticism is right
but then just go on with our lives as before.’
I would say this claim about scepticism is absolutely true,
if by scepticism we mean Cartesian external world scepticism and Humean
inductive scepticism (i.e. the forms of scepticism practised in philosophy
classrooms). The reasons are, to me, pretty obvious. If one is too worried
about Cartesian scepticism about the external world then one is liable to be
diagnosed with psychosis and Solipsism Syndrome – so clearly, it is not sage
advice to let that most extreme form of scepticism overtake one’s mind. One has
no option but to go on with life as before, even if one thinks that scepticism
about the external world is strictly speaking irrefutable (as I do). David
Hume’s scepticism about induction and the reality of causation is arguably a
good reminder of the inherent fragility of human knowledge, but it is of no use
in any other way: no human has the ability to take scepticism about induction
seriously outside the philosophy classroom, and if you tried to frequently remind yourself of the strength of Hume’s
argument, that would presumably either drive you insane or perhaps make you a
total relativist (which, I will argue, is stupid in its own way). So yes, we
can indeed acknowledge that scepticism is right and then just go on with our
lives as before.
At the start of his Meditations on First Philosophy, Rene Descartes
describes (in the present tense) his decision to embark upon a process of
“doubting”. Very soon, this practice of “methodical doubt” leads him to rather
disturbing conclusions. The first of these is that he might be in a dream. As
he writes, since “on many occasions [he has] in sleep been deceived” by exactly
the “illusions” he sees in front of him, he cannot know that he is not sleeping
at that very moment (Descartes, 1641,
p. 1-7). While this thought is (of course) extremely disquieting, there is a
small consolation: even if the doubt is justified, he is “bound to confess that
there are at least some other objects yet more simple and more universal [than
those he sees in either the dreams or ‘real-life’], which are real and true”
(Descartes, p. 1-7).[1] However,
after some more meditation, he suddenly recognises that all of reality might be illusory. His new thought is that he could
be under the control of an evil demon who “has employed his whole energies in
deceiving me” (Descartes, p. 1-8). If this were true, it would mean that “the
heavens, the earth, colours, figures, sound, and all other external things are
nought but the illusions and dreams of which this genius has availed himself in
order to lay traps for my credulity” (Descartes, p. 1-8).
From “Meditation I” to “Meditation III”, Descartes does, of course, attempt to argue
his way out of this extreme doubt. In “Meditation II: Concerning the Nature of the
Human Mind”, soon after imagining the evil demon at the end of “Meditation I”, he supposedly
demonstrates that the existence of the Ego is apodictic by observing that he cannot
doubt that he is doubting.[2] In
“Meditation III: Concerning God, That He Exists”, he uses the Ego to provide a ‘proof’ that a benevolent God must
be real, and thus supposedly demonstrates that reality can’t possibly be an
illusion, since a benevolent God would not give him a false picture of reality
(Descartes, p. 1-13 to 1-19).
Unfortunately for Descartes, to
modern sensibilities, these arguments are deeply unsatisfactory. At the same
time, no strong arguments have been proposed to replace them. The end result is
that few contemporary philosophers think that one can declare the evil demon supposition truly impossible (even if
they dismiss the dream thought experiment).
One might think that a
philosopher could respond to this doubt by simply “lowering her credence
function” in the external world (or some other such stupid thing), and that
Cartesian scepticism could therefore be ‘implemented’ in real life to some
degree. However, the problem with even conceding the possibility that nothing is real except one’s own mind is that this concession itself has rather
paradoxical implications. If we imagine that all of reality is an illusion
created by an evil demon, it follows, for example, that Descartes and his Meditations must be an illusion too, as
must all of philosophy, and philosophers, and the arguments of philosophers,
and discourse in general. Just as bizarrely, since this kind of Cartesian doubt
is fundamentally subjectivist, each philosopher ought really to be addressing
the question in the first person – saying it is possible that “I am being
deceived by an evil demon”, rather than “Descartes shows that it is possible
that all of reality is an illusion except our minds”. And yet, if you do
address the question in the first person, then there’s no point even having a
conversation about Cartesian scepticism with other philosophers: I cannot know
for sure that these other philosophers even exist, and for all I know each of
them is entertaining exactly the same doubt about me.
It is easy to see why it would be psychologically unhealthy to spend too much
time worrying about this in real life.
It’s notable that Descartes never
imagines that the evil demon has tampered with the laws of logic themselves (he
doesn’t doubt modus ponens, for
example). The reasons for this are, of course, obvious – though not uninteresting,
since I think they point to the ultimate futility of extreme scepticism.
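To be concrete about the rule in question: modus ponens is the elementary inference $P,\ P \rightarrow Q \vdash Q$ – from $P$ and “if $P$ then $Q$”, conclude $Q$. If even this rule were open to the demon’s tampering, Descartes could not trust the very reasoning by which he conducts his methodical doubt, and the sceptical argument would undercut itself.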
The modern update of the
evil-demon scenario is the famous “Brain in a Vat” thought experiment,
and this – as far as I can tell – has exactly the same implications, and is
precisely as difficult to refute.[3] It
is also precisely as irrelevant to real life.
David Hume’s scepticism about
induction and causation seems, on the face of it, to be rather different from
the extreme doubt of Descartes. Instead of doubting everything, Hume argues
that one method of knowing we normally take for granted – inductive reasoning –
is actually not nearly so secure as it seems. One might almost be tempted to
say that this is something that we
could take seriously in our lives. However, as I will argue, I don’t think this
is so.
In Section IV of An Enquiry Concerning Human Understanding, Hume
draws a distinction between, on the one hand, deductive reasoning and
“relations of ideas” – i.e. the a priori truths
of mathematics and logic – and, on the other, inductive reasoning and “matters
of fact”, which are a posteriori (Hume,
1748, Section IV, Part I). He points out that, unlike the contrary of a
mathematical or logical proposition, “The contrary of every matter
of fact is still possible; because it can never imply a contradiction, and is
conceived by the mind with the same facility and distinctness, as if ever so
conformable to reality” (Hume, Section IV, Part I). Thus, “That the sun will
not rise to-morrow is no less intelligible a proposition, and implies no more
contradiction than the affirmation, that it will rise” (Hume, Section IV, Part
I). For Hume, these truths mean that we can’t really know anything we haven’t observed. There is always, as it were, a missing premise involved in inferring
the future from the past, and this remains true no matter how many times the
pattern has repeated itself.
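Laid out explicitly, the inference Hume has in mind runs something like this, with the missing premise in brackets:

1.) The sun has risen in every case so far observed.

2.) [Nature is uniform: the future will resemble the past.]

3.) Therefore, the sun will rise tomorrow.

Without premise 2.), the step from 1.) to 3.) is not deductively valid; and 2.) itself can only be supported by past experience, which is exactly the circle Hume exposes.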
The extreme, Locke-influenced empiricism of Hume’s approach led him to proclaim
that not even the relation of “cause and effect” is necessary. As he wrote, “the
knowledge of this relation is not, in any instance, attained by reasonings a
priori; but arises entirely from experience, when we find that any particular
objects are constantly conjoined with each other” (Hume, Section IV, Part I).
To Hume, this meant that the necessity we ascribe to causation is itself highly doubtful.
In
order to defeat Hume’s scepticism about induction, one does, of course, need
only one assumption: that nature is
uniform. Modern scientists would say that this assumption is clearly
correct, given that – ever since Newton – the physics profession has been able
to come up with laws that capture a deep reality with profound stability
(allowing one to make all sorts of novel predictions, like – recently – the
existence of gravitational waves). But Hume was himself writing after Newton,
and clearly didn’t think Newton’s law of universal gravitation falsified his point. Hume
would have claimed that, despite the explanatory and predictive successes of
Newton’s equations, there is still no way of knowing that when you wake up tomorrow
you won’t float to the ceiling of your room (and apples won’t fly up when you
“drop” them), because there is no law
guaranteeing that physical laws are fixed. No doubt all physicists would
dispute this – but that is only because they are assuming that the laws of
nature couldn’t themselves suddenly change (despite this being clearly logically possible).
Much as it is with Descartes, I certainly
think that Hume achieved an important philosophical milestone by adopting this
sceptical attitude towards a way of reasoning we take for granted every day. Much
as it is with Descartes, I think he’s right that no inductive reasoning is ever
absolutely certain in the way that we
can be absolutely certain that 2 and
2 makes 4. One difference with Cartesian scepticism is, I think, that there is
possibly one way of at least half-solving
Hume’s scepticism which doesn’t involve saying, “Oh well, I must get on with my
life”. This half-solution is to suggest that it’s just a basic conceptual mistake to hold inductive reasoning to the standards
of deductive reasoning. Can we not say that inductive and deductive reasoning
are just two different kinds of reasoning, and that both of them ought to be
judged on their own terms?
In
any case, even though it is perfectly true that inductive reasoning is only
justifiable by begging the question (“It’s worked in the past!”), it is not as
if deductive reasoning can justify itself either – except by appeal to our most
basic intuitions (“If A, then ¬¬A”). Justifying justification is evidently always
quite a task.
Another
nuance of scepticism about induction that distinguishes it from Cartesian
external world scepticism is that it seems like it cannot be an all-or-nothing thing (despite the
impression Hume presents). Indeed, one problem
with Hume’s writings on induction is that he fails to mention the feature of
induction that a “frequentist” would immediately point out: that some outcomes
are simply more probable than others, and
that this remains true even if you doubt that any inductive reasoning is
absolutely certain. One clearly doesn’t need Hume to be highly sceptical of
poorly founded inductive inferences like “My child seemed to develop autism after a vaccination,
therefore vaccines cause autism”, and yet one can clearly also have very high
confidence in any inductive claim that is based on well-tested scientific
theories and mountains of evidence (“Human beings and Neanderthals both evolved
from a species of hominid that we call Homo heidelbergensis”, “The universe is
approximately 13.8 billion years old”, “Human beings are warming the climate”, etc).
It certainly seems to make zero sense to say, “Hume is right, therefore all
inductive reasoning is unreliable”. Instead, if one accepts the Humean
position, one is forced to think of things on different levels of epistemology. The right way of thinking about
Hume seems to be to acknowledge that inductive reasoning is never absolutely
certain, but also that some kinds of inductive reasoning are extremely close to
absolutely certain and that other kinds are extremely far from absolutely
certain.
Hume’s
own ‘solution’ to this extreme scepticism was, of course, to ignore it except
when he was writing his books. In Section V of An Enquiry
Concerning Human Understanding, called “A Sceptical Solution of these
Doubts”, Hume says the following: “Nor need we fear that this
philosophy, while it endeavours to limit our enquiries to common life, should
ever undermine the reasonings of common life, and carry its doubts so far as to
destroy all action, as well as speculation. Nature will always maintain her
rights, and prevail in the end over any abstract reasoning whatsoever” (Hume, Section V, Part I). This captures the idea
that it is just not in human nature to avoid
believing in the reliability of such fundamental reasoning. Near the end of
Book I of A Treatise of Human Nature –
his first work, where he first explored this kind of scepticism – he describes himself
undergoing a kind of existential breakdown in response to the terrifying
implications of his results, but then remarks, “Most fortunately it happens,
that since reason is incapable of dispelling these clouds, nature herself
suffices to that purpose, and cures me of this philosophical melancholy and
delirium, either by relaxing this bent of mind, or by some avocation, and
lively impression of my senses, which obliterate all these chimeras. I dine, I
play a game of back-gammon, I converse, and am merry with my friends; and when
after three or four hours' amusement, I wou'd return to these speculations,
they appear so cold, and strain'd, and ridiculous, that I cannot find in my
heart to enter into them any farther” (Hume, 1739, p. 269).
To
me, this passage sums up how we should – and usually do – respond to scepticism:
namely, by treating it as an intellectual exercise that has no relevance to
real life at all (except, perhaps, when demonstrating the fascination of
philosophy to non-philosophers).
Bibliography
Descartes, R. (1641). Meditations on First Philosophy, in The Philosophical Works of Descartes (1911), trans. Elizabeth Haldane, Cambridge University Press.
Hedden, B. Lecture notes.
Hume, D. (1739). A Treatise of Human Nature, 1992 edition, Prometheus Books, New York.
Hume, D. (1748). An Enquiry Concerning Human Understanding, 1910 edition, Harvard Classics. Accessed from: <http://18th.eserver.org/hume-enquiry.html#5>

[1] Obviously, these “more universal” objects sound a lot like the Platonic forms.
[2] Of course, as many philosophers have pointed out, Descartes really only shows that he cannot doubt his consciousness, rather than showing that he cannot doubt the existence of a discrete entity called an Ego.
[3] One could perhaps make the argument that the Brain in a Vat thought experiment is ever-so-slightly more possible than the Evil Demon, since at least it’s kind of “naturalistic” (it’s unclear to me whether different levels of epistemology interact here).
(1) Suppose that you and your
friend (who you take to be just as rational as you) examine some evidence about
what killed the dinosaurs. You decide it’s 90% likely that the cause was a
meteorite, while your friend says he thinks it’s only 50% likely that the cause
was a meteorite. (i) Does your disagreement mean that one of you was less than
perfectly rational in evaluating the evidence? And (ii) what should you do once
you learn of your disagreement with your friend (should you drop your
confidence that the cause was a meteorite)? Does what you say about (i) affect
what you should say about (ii)?
On simple questions of fact or probability,
it is perfectly obvious that “uniquism” is the correct thesis about epistemic
consensus (ignoring issues of classic philosophical scepticism). That is, if
two fully rational agents are evaluating such questions with the same body of
evidence (and the amount of evidence is adequate to reach a verdict) they will necessarily reach the same verdict.
Nonetheless, as Thomas Kelly argues in the paper “Evidence can be Permissive” (2014),
the clash between uniquism and “permissivism” (the position that there are at least some questions or debates on
which fully rational agents can develop different doxastic attitudes towards
the same body of evidence) becomes a great deal trickier when the cases
themselves become more complex. Though a uniquist would, of course, want to put
this down to human failures, there is also a substantive epistemological reason
why disagreements necessarily become less clear-cut (less obviously a product
of one-sided irrationality) in empirical
cases of great complexity. This reason is that the role of seemingly non-objective
epistemic standards must grow greater
as the relevant hypotheses or theories are increasingly underdetermined by the available
evidence.
The case given
by this question is evidently an empirical case of considerable complexity –
one on which expert scientists do in fact disagree in real life – and one in
which epistemic standards of theoretical simplicity and ‘Jamesian trade-offs’
would, I believe, be crucial. At the same time, I am not sure that an ideally rational agent mightn’t be able
to work out, by means of some highly sophisticated philosophy of science, what
are objectively (or at least
“intersubjectively”) the best epistemic standards to apply to the development
of hypotheses for the extinction of the dinosaurs. Thus, if I am to interpret
the phrase “perfectly rational” in the question as meaning “ideally rational”, I am simply agnostic about
whether a disagreement of credence of this type (90% probability versus 50% probability
for the meteor hypothesis) could exist between two ideally rational agents. However,
I certainly believe that, if one takes a weaker, more ‘human’ sense of rational (meaning “reasonable, clear-eyed, rigorous”),
then the permissivist perspective is likely correct here: I don’t think a
disagreement of this degree on such a difficult question as the causation of an
event that happened 66 million years ago would (in itself) be enough to conclude
that one of us must be less than rational. As for how I should react to
learning of the difference in probability assignments between me and my friend,
my answer would simply be: it depends.
Like Jennifer Lackey, I don’t subscribe either to “conformism” or
“non-conformism” as absolute doctrines on how to resolve disagreement, and
believe instead that specific details matter. In this case, the relevant
details would be the opinions of the experts in the field, and various facts
about the different ways in which my friend and I evaluated the evidence. Overall,
I think the answers to both questions (our rationality in this situation, and
how we should resolve our disagreement) aren’t straightforward, and can’t be
answered in abstract, purely philosophical terms.
The question of
what killed the dinosaurs is clearly not an easy one to answer. To understand
the series of events that caused the mass extinction that took place on our
planet 66 million years ago, one has to successfully look into the ancient past
and successfully evaluate the unfolding of that past, all on the basis of some highly
fragmentary, degraded evidence from geology and palaeontology. The magnitude of
this challenge becomes clearer when one reflects on how difficult it is for us to
understand the dynamics of our planetary system today. For example, in order to assess the claim that human
activity is warming the climate, we currently feed mountains of data into
state-of-the-art mathematical models and, even then, projections as to future
warming and sea-level rises vary widely from expert to expert (despite these
experts’ nominal status as “epistemic peers”). Given these facts, it is not at
all surprising that there is also fairly wide expert disagreement in real life
over what “killed the dinosaurs” – probably more so than there is over the
effect of humans on the climate. There is, of course, a consensus that c. 66
million years ago, in a very short space of geological time, a mass extinction
event or series of events took place, and that a meteor strike or strikes had
(at the very least) something to do
with this cataclysm (“Cretaceous–Paleogene
extinction event”, Wikipedia, 24 June 2016). But there is no consensus
about most of the other details of the event or events: with regard to the
meteor hypothesis, there is dispute between the original single-impact
“Alvarez hypothesis” (focussing on Mexico’s Chicxulub crater) and the more recent “Multiple impact
hypothesis”; there is controversy over the importance of the “Deccan Traps” in
polluting the atmosphere, and over the mechanisms by which they could have had
this effect; and there is controversy between those who opt for the “multiple
causes” thesis, with a longer time-frame of extinction, and those who insist on
one sudden, catastrophic cause (usually the meteor strike) (“Cretaceous–Paleogene extinction event”, Wikipedia,
24 June 2016).
Early in his
paper “Evidence can be Permissive”, Thomas Kelly makes an important, if
obvious, point about how cases of this complexity affect the debate between
permissivism and uniquism. He points out that, if “uniquism” is right, it has
to be a fairly flexible thesis, since it can’t possibly be saying the same
thing for simple cases and for more complicated cases (like that of the
dinosaur-extinction) (Kelly, 2014: 2). Kelly uses the example of a moderate disagreement
over the future outcome of a US Presidential election to argue that there’s an
inherent fuzziness to disagreements
over complex cases. Kelly makes this argument by means of a thought experiment:
he imagines that the standard doxastic attitudes of Alpha Centaurians towards
such future events are, unlike our own doxastic attitudes, always ultra
fine-grained – for example, “believing to
degree .5436497 that the Democrat will win, or believing to degree .5122894
that it will rain tomorrow” (Kelly, 2014: 3). As he observes, it would seem
highly counterintuitive if uniquism were
true about doxastic attitudes as fine-grained as those. Importantly, since this
thought experiment shows that there must be some limit to the stringency of a
fully rational doxastic attitude, it suddenly becomes much harder to deny that
fully rational credences may vary within the specific, sensibly defended
parameters that Kelly allows in the Presidential-prediction case.
In my view, the
same kind of partial reductio would
apply in the case of disagreement over dinosaur-extinction. Clearly, it cannot
be the case that the only ideally rational attitude towards the hypothesis that
the meteor strike killed the dinosaurs is a probability assignment with several
decimal places. But, of course, just as
it did with Kelly, the major question still remains: just how narrow is the allowable range of probability
assessments? In my view, the answer has to do with epistemic standards.
Fortunately,
Kelly himself deals with exactly this issue of epistemic standards. After
finishing the Presidential-prediction example, Kelly introduces a relevant idea
from William James, a founding figure of both modern psychology and Pragmatism: that reaching
the ‘correct answer’ is not so simple a matter as “attaining truth and avoiding
error”, but involves an inherent trade-off between two opposing goals – “not
believing what is false”, and “believing what is true” (Kelly, 2014: 6). While
someone targeting the former goal (an ‘intellectual conservative’) will be
inclined to suspend judgment at the slightest doubt,[1]
someone targeting the latter (an ‘intellectual risk-taker’) will be willing to
place much more confidence in the hypothesis they think is true, even if there
is not yet overwhelming evidence for it. Kelly argues that both these epistemic
strategies, in moderate forms, are perfectly rational and acceptable, which
gives obvious support to a permissive view (Kelly, 2014: 8). I agree with this
claim of Kelly’s, and I believe it has direct implications for our case. If we
imagine, for example, that I am the intellectual risk-taker in the situation,
this gives us some clue as to how I might have diverged from my friend while
still remaining as rational as her. Perhaps I regarded the hypothesis that the
dinosaurs were wiped out by a single meteor strike as simple, parsimonious
and emotionally satisfying (aesthetic epistemic standards), and when I
discovered that significant evidence supports it, this lifted its standing well
beyond the other hypotheses. Meanwhile, perhaps my friend decided that she was
going to do her best to evaluate the evidence on its own merits, and when that
process was concluded, she just didn’t think there was enough to be very confident
about any hypothesis over any other.
As I suggested
in the introduction, I don’t really know if an ideally rational agent would be
an intellectual risk-taker in the dinosaur-extinction case or an intellectual
conservative. I’m not really sure of the epistemic status of epistemic
standards in general – to what extent the valuation of “simplicity” (or the use
of “Occam’s Razor”) represents an objective principle, and whether there is
anything like an objective truth as to how much of an intellectual risk-taker one
should be. I think it possible that an ideally rational agent might just have
some highly sophisticated philosophy of science that would allow her to decide
in a given case whether she should take an intellectual risk or be an
intellectual conservative – but I don’t really know. All in all, therefore, I
would prefer to remain agnostic on the question of whether either I or my
friend is necessarily being less than ideally rational in this case.
As I also
suggested in the introduction, however, I think on a more down-to-earth
conception of rationality, there is little reason to think that our gap in
probability assignments on such a difficult question would necessarily make one
of us irrational. As I’ve already made clear, I do think it’s possible at least
to conceive of a scenario in which we both reached very different probability
assignments on the hypothesis of the meteor strike simply because of different
Jamesian trade-offs (without either of us being too extreme in either our
risk-taking or intellectual conservatism). So, while in real life, I’m sure I
would be very tempted to call my friend “irrational” if I found out she had
such a significant disagreement with me (and would probably assume that she
didn’t pay enough attention to this or that bit of evidence), it seems to me
that from a third-person perspective, we could still both be rational with such
a disagreement (especially given the complexity of the case).
On the question of
how to deal with a disagreement of this kind, I opt for the “Justificationist”
position espoused by Jennifer Lackey in her paper “What’s the rational response
to everyday disagreements?”[2]. In
short, I believe that there is no absolute, invariant answer as to how we
should deal with disagreements among epistemic peers, and that instead,
context-dependent considerations are all important. In the dinosaur-extinction
case, it’s not hard to think of what might be counted as relevant
considerations. If I was directly aware of a key piece of evidence my friend
had somewhat overlooked (without her being irrational per se), then I’d be less
inclined to adjust my credence to match, but
(conversely), if I was aware that I’d lacked
the expertise to understand something that she did understand, I would think it
clearly rational to adjust my credence by some degree. Similarly, if I was
aware that my position was the one held by the majority of the experts in the
field, then I would not feel inclined to adjust my credence at all (and vice
versa).[3]
To answer the final sub-question: yes, what I said about my friend’s and my
rationality has affected what I said about how I should react to our
disagreement. If I had concluded that the disagreement would necessarily make
one of us irrational, my Justificationism would have led me to a more directly
conformist view in this case; I would have said that both my friend and I
should look for errors we might have made in our evaluation of the evidence,
and that we should both move closer in our probability assignments until such
time as we find evidence that one party made bigger mistakes than the other.
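To illustrate with the simplest possible arithmetic (a deliberately crude ‘equal weight’ averaging, offered as my own illustration rather than anything Lackey endorses): with my credence in the meteorite hypothesis at 0.9 and my friend’s at 0.5, a strictly conformist response would have each of us revise towards $(0.9 + 0.5)/2 = 0.7$ until we find evidence about who made the bigger mistakes.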
In summary, I think the following about the disagreement between my friend and
me over the extinction of the dinosaurs. In response to (i): given the
complexity of the dinosaur-extinction case and the significance of epistemic
standards, I don’t know for sure whether either I or my friend would
necessarily be less than fully rational, but I think it is clear that this
level of disagreement in itself isn’t enough to make either of us irrational in
a conventional sense. In response to (ii): I think that there is no absolute
answer as to what I should do upon learning of this disagreement (it depends).
And in response to the last sub-question, my answer is yes, because I am a
Justificationist about disagreement.
Very Brief Reference List:
Hedden, Brian. Lecture notes.
Kelly, Thomas (2014). “Evidence can be Permissive”, in Steup, Turri and Sosa (eds.), Contemporary Debates in Epistemology, Wiley-Blackwell: 298-311.
Lackey, Jennifer (2012). “What’s the rational response to everyday disagreements?”, in The Philosopher’s Magazine, 4th Quarter: 101-106.
“Cretaceous–Paleogene extinction event” (well-cited), Wikipedia, last updated 24 June 2016. Accessed 25 June 2016: <https://en.wikipedia.org/wiki/Cretaceous%E2%80%93Paleogene_extinction_event>

[1] Or perhaps maintain a ‘smeared’ probability distribution between multiple hypotheses until decisive evidence emerges.
[2] Clearly, though, I am extending its application beyond “everyday” disagreements.
[3] Deferring to the majority of experts is clearly a kind of “conformism”, but I wouldn’t necessarily recommend this kind of conformism if I was myself an expert in the extinction of the dinosaurs. So I’m not suggesting anything absolute by this.
(3) The claim Positive Introspection says
that if you know that P, then you are in a position to know that you know P.
The claim Negative Introspection says that if you don’t know that P, then you
are in a position to know that you don’t know P. Are either or both of these
claims true? Evaluate them using theories of knowledge covered in the course,
such as those of Descartes, Nozick, and Goldman.
I believe the claim “Positive
Introspection” is true for our most certain knowledge but can be denied
otherwise. I believe the claim “Negative Introspection” is completely false. Unlike
most of the philosophers who have attempted to circumscribe how much we can
really know (e.g. Descartes), or analyse ‘knowledge’ (e.g. Gettier, Nozick or
Goldman), I don’t think it makes much sense to treat knowledge as one monolith.
Instead, I think we ought to use the principle of Positive Introspection to distinguish
three types of knowledge: 1.) absolute, Cartesian
knowledge, which requires us not only to be in a position to know that we know,
but in a position to know that we know for
sure; 2.) certain, non-Cartesian knowledge,
which requires only the standard principle of Positive Introspection; and 3.) weak or incidental knowledge, which is justified true belief that we are
not in a position to know that we know. The only absolute Cartesian knowledge
is that one’s thoughts exist, because that is the only thing that one is in a
position to know that one knows for sure
(that one can’t rationally doubt). Certain non-Cartesian knowledge is all accessibilist internalist justified true
belief (justified true belief for which you can access some adequate
justification). Weak or incidental knowledge is all justified true belief that
qualifies under externalist and mentalist internalist lights but not
accessibilist internalist lights (justified true belief which you know without
any adequate, accessible internal justification).
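For clarity, the two principles can be written in the standard notation of epistemic logic, where $Kp$ abbreviates “it is known that $p$”:

Positive Introspection: $Kp \rightarrow KKp$
Negative Introspection: $\neg Kp \rightarrow K\neg Kp$

(Strictly, the question’s phrasing – being in a position to know – is slightly weaker than the bare $KKp$ and $K\neg Kp$ formulations, but nothing in this essay turns on that difference.)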
Rene Descartes was
one philosopher who clearly believed in the principle of Positive Introspection
and denied the principle of Negative Introspection. In The Meditations, Descartes famously begins his process of
“methodical doubt” because he wants to identify wholly solid foundations for
knowledge. As quickly becomes clear, Descartes believes that in order for the
foundations of knowledge to be wholly solid, they need to be wholly indubitable.
In Descartes’ view, if he reasons that he might
be in a dream as he stares at the room in front of him, then he cannot know that he is not (Descartes, 1641:
1-7). If he reasons that he might be
under the control of an evil demon “who has employed all his energies in deceiving
[him]”, then he cannot know that all of external reality – “the heavens, the
earth, colours, figures, sound, and all other external things” – is not just “the
illusions and dreams of which this genius has availed himself in order to lay
traps for my credulity” (Descartes, 1641: 1-8). Ultimately, and famously, the
terminus of his methodical doubt is doubt itself, and from this bedrock (one
can’t doubt that one is doubting) Descartes supposedly derives his first
absolute truth: the Ego exists.[1]
I wouldn’t want
to say that Descartes was necessarily wrong to conceive of knowledge in this
ultra-strict way, but it is nowadays evident that his foundationalist project
was doomed. I instead classify the kind of knowledge Descartes was searching
for – of which there is really only one example (one’s thoughts exist) – as its
own type of knowledge: absolute,
Cartesian knowledge. Absolute, Cartesian knowledge requires not only that
one is in a position to know that one knows P, but that one is in a position to
know that one knows P for sure. Notably,
when David Hume was questioning the ultimate validity of inductive reasoning
and the reality of causation, he was also adopting this conception of
knowledge. According to Hume, in order to know that the sun will rise tomorrow,
one has to know for sure that the
future is conformable to the past – but we don’t, so we don’t know that the sun
will rise tomorrow. And obviously, both Descartes and Hume denied the completely
false principle of Negative Introspection, since they both argued that people
don’t know plenty of things that they think they do know.
As I explained
in the introduction, I think there is another type of knowledge for which the
principle of Positive Introspection holds: certain,
non-Cartesian knowledge, which equates to accessibilist internalist justified
true belief (justified true belief for which you have some adequate, accessible
justification). The contemporary debate between so-called ‘internalists’ and
‘externalists’ traces back to Edmund Gettier’s hugely influential three-page
paper “Is Justified True Belief Knowledge?” In this short piece, Gettier imagined
a few thought experiments which cleverly drew attention to the flaws in the
traditional analysis of knowledge as mere “justified true belief”. As his cases
with “Jones” and “Smith” showed, it is possible to have a justified true belief
about some fact in a somewhat incidental fashion,
in such a way that it would seem perverse to use the word “knowledge” to
describe this justified true belief (it is possible to justifiably believe some
fact which turns out to be the right answer, but for the wrong reason)
(Gettier, 1963). Gettier’s observation spurred on fresh attempts to produce a
satisfactory account of knowledge, including the Causal Theory and Reliabilist account
of Alvin Goldman and the Tracking Theory of Robert Nozick. From these efforts
finally emerged the internalist/externalist debate about justification.
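For reference, the traditional analysis that Gettier attacked can be stated as follows: S knows that $p$ if and only if (i) $p$ is true, (ii) S believes that $p$, and (iii) S is justified in believing that $p$. Gettier’s cases are ones in which all three clauses are satisfied and yet, intuitively, S does not know.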
Internalists (best
represented by the Evidentialists Richard Feldman and Earl Conee) believe that
in order to know P, one must have internal, mental justification. However, it is
not quite as simple as that; internalists importantly diverge on the principle
of Positive Introspection. Whereas the accessibilists believe that to know P, one
must have access to the justification
for believing P (which directly equates to being in a position to know that one
knows P), the mentalists believe that this is not necessary, only that the
justification for P is somewhere in
one’s mind. Accessibilist internalism is usually regarded as an untenable position,
for reasons that Goldman highlights: that, any given moment, we seem to know all kinds of things (“personal
facts, facts that constitute common knowledge, facts in our areas of expertise,
and so on” (Conee and Feldman, 2004: 67)) which we nevertheless couldn’t really
justify (Goldman, 2004). One doesn’t even need to think of a clever counterexample
like Goldman’s “Sally”[2] to
cast serious doubt on accessibilist internalism as a general account of knowledge,
since anyone who relies on experts for information about anything (e.g. the
origins of the universe, particle physics, chemistry, history, climate change) will
have, at best, internal access only to a schematic or simplified justification
for their beliefs – and yet we clearly want to say that informed laypeople can know that the universe is 13.8 billion
years old, or that the earth is warming due to human activity. Even basic
perceptual knowledge would be hard to justify without a good understanding of the
human perceptual system (and most people do not have this).
However, I don’t
believe this is at all fatal to accessibilist internalism, as long as one
allows that there are two different
conceptions of knowledge in question here: the accessibilist internalists
are seeking an account of what I have called certain, non-Cartesian knowledge, whereas the mentalist
internalists and externalists are seeking a broader account – for the mentalist
internalists, it’s an account of both this kind of knowledge and “weak” knowledge, and for the
externalists, it’s an account of this kind of knowledge, weak knowledge and “incidental”
knowledge. In my view, the kind of knowledge which does obey the principle of
Positive Introspection – the kind of knowledge for which we can access an
adequate justification – is knowledge of a higher status than the knowledge for
which we don’t have this kind of justification. The two kinds thus deserve to
be separated.
But what do I
mean by “weak” knowledge and “incidental” knowledge? As I said in the
introduction, both weak knowledge and incidental knowledge represent justified
true belief for which we have no accessible justification. However, the reason
I have chosen two names is that they are subtly different. By weak knowledge, I
mean anything that a mentalist internalist would accept as knowledge but an
accessibilist internalist would not – in other words, most everyday knowledge of
things, forming part of one’s “web of belief”. On the other hand, by incidental
knowledge, I mean everything that an externalist would count as knowledge but
an internalist would not. According to Goldman’s “reliabilist” account of
knowledge (a quintessential externalist account), it doesn’t matter whether you
have forgotten how to justify one of your true beliefs (and there is no
justification in your head), as long as it was formed by a reliable belief-forming process (Goldman, 2004). It is this
kind of knowledge, formed in the correct way, but without any internal
justification, that I call “incidental”. I think the average person has a lot
of incidental knowledge: things they know because they read something somewhere
once, or because they heard something, or because they witnessed an incident
that they can partly remember. Clearly, this knowledge doesn’t have the same
status as the knowledge for which we can provide an adequate justification – but,
at the same time, I think it would be a perversion of language to say that it
isn’t knowledge.
In conclusion, I
believe that the principle of Positive Introspection is true with respect to
our most certain knowledge, and that the principle of Negative Introspection is
completely false. I have laid out in this essay a pluralistic picture of
knowledge, with the principle of Positive Introspection playing a crucial
demarcatory role. I think absolute, Cartesian
knowledge requires an extra-strong version of the principle of Positive
Introspection; I think certain,
non-Cartesian knowledge requires the standard version of the principle; and
I think weak and incidental knowledge doesn’t require it at all.
Reference List:
Conee, Earl and Feldman, Richard (2004). Evidentialism: Essays in Epistemology, Clarendon Press, Oxford.
Descartes, Rene (1641). Meditations on First Philosophy, in The Philosophical Works of Descartes (1911), trans. Elizabeth Haldane, Cambridge University Press.
Gettier, Edmund (1963). “Is Justified True Belief Knowledge?”, Analysis, 23: 121-123.
Goldman, Alvin (2004). Pathways to Knowledge: Public and Private, Oxford University Press.
Hedden, Brian. Lecture notes.
Hume, David (1739). A Treatise of Human Nature, 1992 edition, Prometheus Books, New York.
[1] Which is, of course, an invalid inference; one can’t go from mere thoughts to some kind of mysterious intangible entity.
[2] A woman who formed a justifiable belief about the health benefits of broccoli from a “New York Times science-section story” but later forgot her evidential source (Goldman, 2000: 10).