Can beliefs be justified by anything other than evidence that they are true? I think a lot of people would want to say “No” to that question at some point in their lives, myself included. Any other justification for beliefs seems somehow wrong and intellectually dishonest. But because
1) Beliefs have a causal connection with how we act,
2) Often we have to act under uncertainty about the actual state of affairs, and
3) When rational people decide how to act under uncertainty, they must weigh both the likelihood and the consequences of being right or wrong about the state of affairs,
it may be irrational to consider only the likelihood of being right or wrong and ignore the consequences.
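To see why, it can help to make the weighing explicit. Here is a minimal sketch in Python, with payoff numbers invented purely for illustration, of how the same confidence in a belief can make acting on it sensible in one situation and reckless in another:

```python
# Hypothetical numbers for illustration only; the point is the weighing,
# not the particular values.

def expected_value(p_right, payoff_if_right, payoff_if_wrong):
    """Expected payoff of acting on a belief that is true with probability p_right."""
    return p_right * payoff_if_right + (1 - p_right) * payoff_if_wrong

# The same 95% confidence that the belief is true in both situations...
low_stakes = expected_value(0.95, payoff_if_right=10, payoff_if_wrong=-10)
high_stakes = expected_value(0.95, payoff_if_right=10, payoff_if_wrong=-1000)

print(low_stakes)   #  9.0  -> acting on the belief looks rational
print(high_stakes)  # -40.5 -> acting on the very same belief looks reckless
```

The likelihood of being right is identical in both calculations; only the cost of being wrong changes, and that alone flips whether acting on the belief is rational.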
In this blog I would like to offer some of my thoughts on pragmatic encroachment. But first let’s start with some observations about the traditional definition of knowledge.
There are various ways that philosophers have tried to define what knowledge is. The most traditional is to say that a subject S knows a proposition P if and only if:
1) S believes P,
2) P is true,
and
3) S has sufficient reason for believing P
Now the third condition might be phrased differently. For example, it might be stated as “3) S is justified in believing P.” Or “3) S’s belief in P is properly warranted.” [1]
As it turns out, I think this third condition is ambiguous in a few respects. One way is that we often think someone might be “justified” in believing something even when we don’t think their justification is sufficient to call that belief “knowledge.” I might have believed the Seahawks would beat the Broncos in the Super Bowl. That belief might have been a “justified” belief based on different things I had learned about the two teams. Hence in that sense we can call it a “justified true belief.” We might say my belief was a rational belief. But I don’t think most people would say I “knew” the Seahawks would win the Super Bowl – at least not before the game started. So we can see there is “justified true belief” and there is “justified true belief.” The “justification” required for knowledge is greater than the “justification” needed to hold a mere “justified belief.”
Notice that this ambiguity remains regardless of whether we use the formulation of “justified” belief or “sufficient reason” or “proper warrant.” What is “sufficient reason” to rationally believe something is less than the “sufficient reason” required to know something.
The justification that yields knowledge is stronger than the justification that allows us to simply say we are justified in believing something. This raises a few questions:
1) How much justification do you need to “know” something?
2) How much, if any, justification do you need to be “justified in believing” something?
3) Is there any difference in the forms of justification that can relate to “knowledge” versus the forms of justification that can relate to mere “rational belief”?
I think those questions are a bit vague, and even if clarified, somewhat difficult to answer. But here are some thoughts. The justification for “knowledge” might require something close to 100% certainty. We might be inclined to say mere “rational belief” would require something like a preponderance of evidence – that is, that the belief is more likely true than not. But I think the cases presented by those who consider pragmatic encroachment show that “justification” (or “sufficient reason” or “proper warrant”) can get a bit more complicated than just looking at the certainty/probability that your belief is true.
Let’s consider DeRose’s “bank cases” as set forth and explained by Jeremy Fantl and Matthew McGrath in their paper “Pragmatic Encroachment”:
Some of our intuitions about specific cases seem to support the claim that knowledge can depend on practical factors. Consider DeRose’s famous (1992) “Bank Cases”:
‘Bank Case A (Low Stakes). My wife and I are driving home on a Friday afternoon. We plan to stop at the bank on the way home to deposit our paychecks. But as we drive past the bank, we notice that the lines inside are very long, as they often are on Friday afternoons. Although we generally like to deposit our paychecks as soon as possible, it is not especially important in this case that they be deposited right away, so I suggest that we drive straight home and deposit our paychecks on Saturday morning. My wife says, “Maybe the bank won’t be open tomorrow. Lots of banks are closed on Saturdays.” I reply, “No, I know it’ll be open. I was just there two weeks ago on Saturday. It’s open until noon.”
Bank Case B (High Stakes). My wife and I drive past the bank on a Friday afternoon, as in Case A, and notice the long lines. I again suggest that we deposit our paychecks on Saturday morning, explaining that I was at the bank on Saturday morning only two weeks ago and discovered that it was open until noon. But in this case, we have just written a very large and important check. If our paychecks are not deposited into our checking account before Monday morning, the important check we wrote will bounce, leaving us in a very bad situation. And, of course, the bank is not open on Sunday. My wife reminds me of these facts. She then says, “Banks do change their hours. Do you know the bank will be open tomorrow?” Remaining as confident as I was before that the bank will be open then, still, I reply, “Well, no. I’d better go in and make sure.” (913)’
It looks like Keith speaks truly in Case A in attributing knowledge to himself that the bank will be open tomorrow, while he also speaks truly in Case B in denying himself knowledge. The only thing that changes in the two cases is how important it is for Keith to be right about whether the bank will be open tomorrow. Therefore, it looks like how important it is for Keith to be right about whether the bank will be open tomorrow is relevant to whether Keith knows that the bank will be open tomorrow. And relevant in a clear way: holding fixed Keith’s evidence concerning whether the bank will be open tomorrow, whether he knows it will be open varies with variations in how important it is for him to be right about this.
But here we find some odd consequences. If this is the proper lesson to draw from the Bank Cases, it would appear to follow that two subjects can have the same evidence concerning whether the bank will be open tomorrow, even though one of them knows it’ll open tomorrow and the other doesn’t. … What makes the difference in knowledge has nothing to do with these traditional factors. In fact, one subject might have more evidence than another that the bank will be open tomorrow – be better informed, have done more checking, etc. – but because much more is at stake for the more well-informed subject, the more well-informed subject can fail to know that the bank will be open tomorrow while the less-informed subject knows that the bank will be open tomorrow. All this is hard to swallow.
I think these cases can illustrate a few different ambiguities about what it means to “know” something or be “justified” in believing something. The first ambiguity is the one I already mentioned. It seems to me that having gone to a bank a few weeks back and having it be open on a Saturday is pretty good justification for the belief that it will be open next Saturday. Is it certain enough that we would say we “know” it will be open this Saturday? I think so, but it’s getting pretty close and some might disagree. If he had gone there two years ago, we probably would say it’s not enough certainty to count as “knowing” whether it will be open this Saturday. So I think these examples are playing on that gray area of how much certainty we need before we call something knowledge. Accordingly, this example opens the door to looking at other ways Keith might or might not be “justified in believing” it is open on Saturday.
The bank cases clearly isolate a role for justification in our beliefs that deals not with the probability of our beliefs being true, but with the consequences of their being true or false. Let’s consider how that is working here.
First, it is not quite right to say these cases show that as “the stakes” increase, better evidence is required for knowledge. It’s not just that “the” stakes are increased, but only certain stakes. Specifically, the stakes are increased in such a way that if he acts on his belief and he is wrong, he will suffer greater consequences.
Consider Case C (another high-stakes case). This case is just like Case A as far as it goes. It is not the case that any important checks will bounce, as in Case B, and there is nothing else that would cause any urgency for Keith to deposit the check before Monday. But let’s add a few other facts that increase the stakes. Keith is on his way to a very important interview. He is sure he will get this job if he is on time, because a decision maker told him that everyone was so impressed with his credentials and past interview that, so long as he shows up on time for this interview, they will probably make him an offer. This would be the offer of a lifetime. And with parking and the odd traffic around the bank, he is not sure whether he will be on time for that interview if he stops to deposit the check.
It seems to me the stakes are just as high in Case C as they are in Case B. And I think we would still agree that Keith’s knowledge claim is just as valid as in Case A. So it’s not just that “the” stakes went up in Case B. The stakes went up in a way that made his being wrong in his belief yield harsh consequences. Case C increases the stakes concerning his belief as well, but it increases them in a way that reinforces acting on his belief. Could we still say he knows the bank will be open on Saturday due to his going a few weeks ago? What about two years ago?
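To put the three cases in rough decision-theoretic terms, here is a sketch, again with invented numbers, in which the probability that the bank is open stays fixed across all three cases, yet the rational act differs:

```python
# Invented costs for illustration. The evidence, and so the probability
# that the bank is open on Saturday, is the same in every case.
P_OPEN = 0.95

def expected_cost_of_driving_past(cost_if_open, cost_if_closed):
    return P_OPEN * cost_if_open + (1 - P_OPEN) * cost_if_closed

# (cost of driving past if the bank is open, cost if it turns out closed)
cases = {
    "A (low stakes)":         (0, 5),     # closed Saturday = minor hassle
    "B (check would bounce)": (0, 1000),  # closed Saturday = disaster
    "C (crucial interview)":  (0, 5),     # closed Saturday = still minor
}
# Cost of stopping now: small in A and B, huge in C (he may miss the interview).
cost_of_stopping = {
    "A (low stakes)": 2,
    "B (check would bounce)": 2,
    "C (crucial interview)": 500,
}

for name, (c_open, c_closed) in cases.items():
    drive_past = expected_cost_of_driving_past(c_open, c_closed)
    best = "drive past" if drive_past < cost_of_stopping[name] else "stop and check"
    print(f"Case {name}: drive past = {drive_past:.2f}, "
          f"stop = {cost_of_stopping[name]} -> {best}")
```

On these numbers the probability never moves; only the consequences do. In Case B they make checking rational, while in Case C they move in a way that reinforces acting on the belief.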
Rather than get bogged down in how much certainty we need for “knowledge,” I would rather explore how this second kind of “justification” works with our beliefs. The distinction is whether we are justified due to the probability of our belief being true or due to the consequences of its being true or false.
In an earlier blog I explained what a belief is so that we can perhaps better understand how beliefs might be “justified.” I accepted that “[a belief] is a disposition to respond in certain ways when the appropriate issue arises,” from W.V. Quine and J.S. Ullian’s book The Web of Belief. This description helps us make sense of the bank cases. Case B demands more “justification” to “respond” as if the bank will be open on Saturday. The way we would “respond” to the belief that the bank is open on Saturday is simply to drive past the bank on Friday night. But that response is less justified if there is some doubt about the bank being open and we risk having an important check bounce.
However, the “response” of driving past the bank is not less justified if the stakes are raised in such a way that supports driving past the bank. Should our “disposition to respond in certain ways” (i.e., our belief) be affected by the stakes we have for responding a certain way? I think it should. That is, I think our beliefs should be affected by the stakes we have for responding a certain way.
Some people will recoil from this. They will think our beliefs should only be affected by the probabilities that they are true. I think that view will usually work out OK for them. However, in certain circumstances this approach may lead to irrational behavior. But we are skipping ahead too fast. Let’s back up and think about a few things.
First, in Case C the inherent importance of holding a “true belief” seems to be overshadowed. Since there is no urgency to have the check deposited on Saturday, whether the belief “that the bank will be open on Saturday” is true seems relatively unimportant. Adding the fact that you might be late for a very important interview further decreases the concern over whether that belief is actually true or false. The consequences of your “responding a certain way” are determining your “disposition to respond in certain ways” as much as, if not more than, any inherent importance of holding true beliefs about bank hours. The probability that the bank will actually be open on Saturday becomes relatively less important in Case C, because the decision really hinges on the consequences of missing the interview.
When we look at the “justification for believing” that the bank is open on Saturday, in Cases A and C we tend to think he has more justification than in Case B. And clearly he does have more justification to be “disposed to respond” by driving past the bank.
In sum, I think these cases indicate that the probability of our beliefs being true is not the only consideration in whether we should hold them. In fact, I think we can see that in certain circumstances the probability of our beliefs being true can be relatively unimportant to whether we should hold them.
Now I think a lot of what I said depends on how we understand “belief.” Some might not agree with my analysis. They might say that the belief is not better justified depending on the consequences. Remember the definition: “[a belief] is a disposition to respond in certain ways when the appropriate issue arises.” (emphasis mine) They might say that the belief has the same justification regardless of the consequences, but that the “appropriate issues” change, leading Keith to drive past the bank in Cases A and C but not in Case B.
They might argue that the belief should not be held more or less strongly depending on the consequences, but that your actions should change as the consequences change. This seems a sensible way to view things. If we were a computer program or robot, that might be the best approach. But sometimes I think we know we should act a certain way but our doubts about probabilities prevent us from following through. But I wonder what people think of what I said so far, so I will end here.
[1] A philosopher named Gettier provided some important counterexamples to this definition, which ended up being the subject of other important philosophical developments on this topic. However, I don’t mean to address that now. The idea of knowledge being “justified true belief” remains a sort of default view and it’s good enough for our purposes.
My first response is that you seem to be getting at something that sounds pretty similar to Pascal’s wager.
“3) When rational people decide how to act under uncertainty, they must weigh both the likelihood and the consequences of being right or wrong about the state of affairs”
The consequence that I suspect you are getting at has something to do with what happens to us after we die, in relation to God’s judgment. Is this correct?
“But sometimes I think we know we should act a certain way but our doubts about probabilities prevent us from following through.”
It is really difficult to write about beliefs and knowledge without equivocating.
In this sentence, when you say ‘know,’ I think that you are hinting at a special type of knowledge, knowledge which comes by faith. Is this the case?
You defined knowledge as a “justified true belief,” but of course we can have “justified false beliefs” and be unaware that they are false.
Knowledge is hard to come by. It is likely that most, probably all, of our scientific facts and theories are approximations of the truth, so we most likely hold many justified false beliefs. This doesn’t stop us from claiming to have scientific knowledge.
A slight tangent here; I hope you don’t mind if I bring this up, and I won’t be all sad n stuff if you delete this portion, as it is not directly on topic. 😉 (I do think it is relevant to the larger conversation, though.)
It seems that part of the root of the theist’s concern with the atheistic world view is not that the atheist can’t give a good atheist account of ethics, but that the atheistic world view means that we don’t really matter. Nothing we do here really matters. Whereas under the theistic world view, everything matters: the theist is quite literally in an epic battle of cosmic proportion, every day.
The reason I think this is at the base of the theist’s concern with the atheistic world view is, first, that it’s disconcerting as hell to think that if the earth ceased to exist, no one would be left to care. When I deconverted, I felt very disturbed by the fact that I didn’t matter on a cosmic scale.
This idea of ‘nothing really matters’ also seems to come out in most theist vs. atheist conversations if the conversation goes on long enough, usually in the form of something like ‘Why do you even talk/think/write about this stuff? You are an atheist, so in the end, nothing matters!’
The reason that I bring this idea up, that of ‘nothing really matters,’ is that we are not dealing here with just beliefs, knowledge and evidence; we are dealing with complicated psychology, with people’s fears and hopes.
David W, thanks for your comments. I thought I had responded but just noticed that I had not.
I tend not to view faith as anything special. For the most part I think faith is belief and trust in God. I don’t think faith is epistemologically different from other beliefs.
I do not think false beliefs can be knowledge. If something is knowledge then it must be true. I think a lot of what we learn in science is actually true. But if not, it is not knowledge.
I agree with much of what you say and it is exactly the type of thing I would like to discuss on this blog. Great comment that really gets to the heart of things here.
I would say we are not just dealing with beliefs, knowledge, and evidence; we are also dealing with the consequences of believing one thing or another. Our fears and hopes can be relevant to what a rational person does. This goes back to one of the first blog posts and comments made by Olivia:
https://trueandreasonable.co/2014/01/14/dealing-with-uncertainty-in-a-rational-way/#comments
She asked:
“Do you think it possible to claim what is rational before you have made assumptions about what matters for that agent? If a person’s number one priority is living in line with the truth, then maybe they would be willing to have a lower chance of survival and that could arguably be a rational choice.”
I think we have to understand what our hopes and fears are to understand how to act rationally to achieve or avoid them.
My hope is that I live a life that is truly moral, to the extent true morality exists. My fear is that I will fail to live that life if true morality does exist. These hopes and fears drive my rational analysis.
This does not seem to be the view of many atheists – including your former professor. At times I think his goal, or what matters to him, is filling his head with beliefs that are more likely true than not, and expunging beliefs that do not hold that evidential quality.
If I thought this, then I would simply try to read lots of data that I knew had a high chance of being correct and try to memorize it. If I thought the phone book was highly accurate, then I would try to memorize that.
It’s not that I don’t think truth is ever important. It can be extremely important. But often it’s not that important. Did I hear a cricket chirp before 10:54 on Tuesday or after 10:54 on Tuesday? Do I need to know the truth? Well, if it were merely important that I fill my head with beliefs that were more likely true than not, and expunge any beliefs that didn’t fit that, then I might investigate this.
Considering hopes, fears, goals, and priorities is all part of being rational. I think this is often ignored.
Is it possible that real morals exist if there is no God and we were just created out of matter and energy colliding? I think it’s possible. But if that is how we were made, I do think it is impossible that we would have any way of reliably knowing what the real moral properties are.