
Skepticism and Guided Curiosity.

People who debunk false or nonsensical claims sometimes tend to specialise--e.g., concentrating on pseudohistorical claims such as the 9/11 conspiracy, paranormal claims such as those involving extraterrestrial UFOs, alternative health claims such as the canard that MMR vaccine causes autism, or the claims of particular religions. This is understandable, because the one thing that unites modern-day critics of these intellectual travesties is their commitment to thoroughness in researching and analysing claims before accepting them or debunking them. Of course, this is not to say that skeptics have always lived up to this ideal. But when one skeptic does not, fellow skeptics are generally just as critical of that lapse as they are towards any paranormalist.

But despite this tendency towards depth of knowledge, many skeptics also show a wide breadth. Many are quite well versed in what's wrong with many different pseudosciences--from different versions of alternative or complementary medicine to astrology to UFOs. Some are well versed in other areas too, such as religion, politics, or anything you may wish to discuss around the watercooler. As well as being able to discuss these topics, they can also give a coherent account of scientific reasoning and critical thinking. Is this because these people are polymaths? Well, some are, but I think that the main reason has to do with their curiosity.

This is not the way many people view skeptics. Many people see skeptics as, in the words of Spiro Agnew, "nattering nabobs of negativism." This is because skeptics reject quite a few popularly held beliefs. But skeptics hold a number of beliefs, albeit provisionally. Sit down with a group of people that includes at least one skeptic, and you will find a genuine interest in ideas. But more important, you will hear a lot of "Why is that?," "What's the evidence for that?," or "But what about..." They are curious, but they practise a special type of curiosity, which I shall call guided curiosity.

What am I on about here? Well, everyone holds that curiosity is a good thing; but if you think about it, unbridled curiosity can actually inhibit understanding. Take conspiracy theorists, for example. Being curious about how every girder split from its mountings in the World Trade Center on 9/11 wouldn't seem to be a bad thing. However, it's obvious that such detailed knowledge about an event such as this just isn't going to be available. Who would be wandering around the buildings as they were collapsing to gather it? And if anyone did, would they have survived long enough to tell us? And it begs the question about a conspiracy to assume that anyone was figuring these things out in advance in order to plan the attack. It's obvious that flying a plane into an iconic building will cause chaos; the perpetrators didn't need to know exactly how much or exactly how it would happen. So why should we be worried that we lack that information? But this lack of information is what starts some conspiracy theorists down the rabbit hole. Why don't we have it? Who is hiding it? Why? Religious believers fall into the same trap. What happened before the Big Bang? Obviously, since we have no answer at present, we must conclude that there must have been a God to cause it. Asking questions that admit of no answers, or improperly formulating them in a way that prevents a sensible answer from being given, is a sure-fire method of generating false, and sometimes ridiculous, beliefs. So there need to be limits on our curiosity.

I'm not suggesting limiting the range of one's curiosity to what is 'practical,' or of immediate interest, or to easily answerable questions. Rather, the point of limiting curiosity is this: The wider one draws the curiosity net, the more information one receives. But one will pick up more flotsam and jetsam as well. The point is to maximise knowledge while minimising the amount of nonsense or outright falsehood.

The method which does the most to achieve this is best stated by the eighteenth-century philosopher David Hume: "A wise man, therefore, proportions his belief to the evidence" (Hume, 1955). Let's refer to this as Hume's Rule. Sexism aside, the rule states that the stronger the evidence, the more confidence one should have in one's belief; and the opposite holds as well: less evidence should result in a weaker belief. And a corollary of this principle is that one should not have a belief at all until one has examined the evidence. Following Hume's Rule is the basis of guided curiosity. In what follows I shall give some examples from religion and paranormal belief to show how guided curiosity keeps us from falling for nonsense.

The consequences for religious belief of Hume's Rule are readily apparent. Very few religious believers, when pressed, will hold that the evidence for religious belief is very strong. This is where the argument typically takes a turn: to hold that there is, after all, an exception to Hume's Rule, which applies only to religion. Religious belief is grounded on faith, not reason or evidence; and Hume's Rule applies only to beliefs based on evidence. Faith is, of course, precisely belief in the absence of evidence, and is, according to the religionist, a virtue which not only elevates the religionist who possesses it, but shows the simple-mindedness and shallowness of thought and character of the atheist or agnostic who lacks it, and instead asks for evidence.

By the time the atheist has defended her character from the above charge, there probably won't be much time left to return to the question of why faith is appropriate only for religious claims. After all, faith has a companion in mundane affairs: gullibility, which is also belief in the absence of evidence, and which was decidedly not counted a virtue in those who invested in Bernie Madoff's Ponzi scheme. But if you do find the time to pursue this point with the religionist, it's very unlikely that you will receive an informed answer. Instead you will probably be told that the comparison is insulting, and the debate will end there. Perhaps it is; but the fact that a person may be insulted by being told that he has a long nose doesn't by itself prove that the claim is wrong.

Hume's Rule has another important implication for religion, atheism and agnosticism. To see it, let's introduce just a bit of probability theory. Since evidence is what makes a belief probable, a claim whose probability on the evidence is less than 0.5 (or 50%) should be disbelieved. This is because the probability of a belief p and the probability of its denial (Not-p) must add up to 1. The sentence "It will either rain or not rain on my house today" has a probability of 1, or, in other words, expresses a certainty (compare the Law of the Excluded Middle in logic). According to the app on my phone, the probability that it will rain here today is 40%, or 0.4. When a belief p has a probability of 0.4, its denial, Not-p, has a probability of 0.6 (1 minus 0.4). So I should believe that it won't rain today--but, applying Hume's Rule, my belief shouldn't be very strong, and I should be prepared to change it. Ditto my belief in a god, except that I assign the probability of there being a god to be much lower.
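
To make the arithmetic concrete, here is a minimal sketch (in Python, purely for illustration; the function names are my own) of the complement rule and the threshold reading of Hume's Rule, using the 0.4 rain forecast above:

```python
# Complement rule: the probabilities of p and of Not-p must sum to 1,
# so P(Not-p) = 1 - P(p).
def complement(prob_p):
    return 1.0 - prob_p

# Hume's Rule read as a threshold: believe when the evidence puts the
# probability above 0.5, disbelieve below it, suspend judgement at 0.5.
def humes_verdict(prob_p):
    if prob_p > 0.5:
        return "believe p"
    if prob_p < 0.5:
        return "disbelieve p (believe not-p)"
    return "suspend judgement"

rain = 0.4                    # the forecast: a 40% chance of rain
print(complement(rain))       # 0.6 -- the chance it stays dry
print(humes_verdict(rain))    # disbelieve p (believe not-p)
```

Note that a verdict of "believe" at 0.6 is a weak belief; the rule grades confidence continuously, and the 0.5 threshold only marks where belief tips over into disbelief.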

A claim with a probability of exactly 0.5 should be neither believed nor disbelieved. Thus, agnostics must be holding that the evidence for belief in a god is just as compelling as that for disbelief, or, in other words, a probability of 0.5 for each. A probability of less than 0.5 is grounds for atheism, for the reason just given. But most people who call themselves agnostics do not really believe that the probabilities are equal. They concede that the probability that there is a god is less than that of the belief that there isn't one, but they stick to the claim that we cannot be sure that a god doesn't exist. But this is just to miss the point of the basic axiom of probability theory given above. It is important to remember that denying the existence of something does not require evidence that the probability of its existence is 0. Most agnostics have no difficulty in dismissing the existence of Santa Claus or the Tooth Fairy, despite not having checked every claim of where Christmas presents or quarters under the pillow came from, and therefore not being in a position to say that the probability of their existence is 0.

It might be thought that the agnostic has an answer to this. Given that God is supposed to be transcendent, completely outside the realm of human experience, no evidence is possible for belief or disbelief, because there is no evidence at all. With no evidence leading us in either direction, suspension of belief (agnosticism) seems to be the only reasonable position. But this rebuttal isn't conclusive.

The best reason for suspending belief is that we are awaiting further evidence that might require us to change our minds. Now let's return to the agnostic's strong point that we are considering the transcendent, for which no empirical evidence can be found. If this is so, then there would be no reason to suspend belief pending further evidence--the supposition is just that there won't be any. Now, add this to another corollary of Hume's Rule: The onus of proof is always on the person who puts the idea forward. When the claim is presented without any evidence to support it, Hume's wise person would disbelieve it. This is because there are always more ways of getting something wrong than of getting it right. Take, for example, guessing the day of the week on which a total stranger was born. If you guess Thursday, you have one chance in seven of getting it right, and the smart money will be on you getting it wrong. So in the debate between the atheist and agnostic where both agree that there is no evidence available about the transcendent (literally the world for which no empirical evidence is possible), the onus of proof is on the believer, and the believer has none. Therefore, the claim should be disbelieved, and the atheist wins by default. In addition, remember that the theist claims to believe in some god or another--one with certain properties (even if she admits that there is no empirical evidence for those properties). But now go to a second theist, who believes in another god, with somewhat different properties. Both of these theists will be implicitly recognising the onus of proof, since they apply it to each other: the first will deny the second's god on the grounds that the evidence is insufficient, and vice versa. Repeat this a few thousand times, and we have the point made so well by Richard Dawkins (2006): "We are all atheists about most of the gods that humanity has ever believed in. Some of us just go one god further."
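
The "more ways of getting it wrong" point is just counting. A small sketch, assuming for simplicity that the candidate answers are mutually exclusive and equally unsupported (the helper function is mine, not anything from the text):

```python
from fractions import Fraction

# One right answer among seven days: the prior for any particular guess.
p_right = Fraction(1, 7)
p_wrong = 1 - p_right
print(p_right, p_wrong)          # 1/7 6/7 -- the smart money is on a miss

# The same counting applies to any set of mutually exclusive candidate
# claims, none of which has evidence behind it: the more candidates,
# the smaller the share any particular one can claim.
def prior_for_one_of(n_candidates):
    return Fraction(1, n_candidates)

print(prior_for_one_of(3000))    # 1/3000
```

On this reckoning, each theist's specific god is one candidate among the thousands humanity has proposed, which is the arithmetic behind Dawkins's quip.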

Pseudoscientific claims fudge the onus of proof too. In fact, they do so with such regularity that we might consider this error as one of the defining characteristics of pseudoscience. Take conspiracy theories for example. Why is all the evidence about how "they" killed Kennedy, or placed the explosives in the World Trade Center towers to supplement the work of the airplanes, completely hidden? That it is missing is just the evidence that skeptics are supposedly too dense to see; its absence shows how clever and powerful "they" are that "they" can hide it so well. You can diagnose the informal fallacy involved here as failing to respect the onus of proof; or you can equally well call it begging the question (appealing to the very claim you are attempting to prove as evidence for it). Or you can call it the argument from ignorance, which involves saying that because you cannot disprove my claim, I must be right--whether or not I have presented any evidence. But the main point is that they have managed the impossible feat of creating something out of nothing. This, by the way, is the thing about informal fallacies: they are like cockroaches, in that if you spot one you can be assured that there are a bunch more lurking where you can't see them.

A corollary of Hume's Rule is the requirement that we search not only for the evidence that supports our belief, but for that which goes against it. Looking only for the supporting evidence is confirmation bias. The religious, the conspiracy theory advocate and the paranormal believer are notorious for this cherry picking. The Christian apologist, searching for miracles, concentrates on the one little baby that survives the plane crash, but ignores the 200 others who perish. The believer in her own telepathic powers zeroes in on the few times she 'knows' what her friend will say next, while remaining blindly oblivious to the many more times she guesses wrong. The 9/11 conspiracy theorist pays attention to any problem, no matter how insignificant, in the received account of how the towers collapsed, while not being troubled at all by the fact that there is no evidence whatsoever of anyone planting any explosives in the buildings beforehand. The believer in any sort of complementary or alternative medicine will keep track of every cold that cures itself after a dose of echinacea while ignoring the ones that cure themselves without it. The graphologist (handwriting analyst) keeps track of every case where a handwriting sample shows large loops on the descenders of fs, gs, js, ps, qs and zs and its writer has a better than average libido, and ignores those with large libidos but with handwriting characteristics that cannot be described this way, as well as those whose handwriting has thin or small descenders but who are nevertheless quite libidinous. (If you think I'm making this up, check Paterson (1980: 11), and don't ask how she measured libidinousness.)
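
The cherry picking above amounts to reading only one cell of a two-by-two table. Here is a sketch for the echinacea example; the tallies are invented purely for illustration, chosen to show how a recovery rate can look impressive on its own while being identical to the rate without the remedy:

```python
# Hypothetical tallies. The believer records only the first cell and
# never makes the comparison the other three cells would force on her.
counts = {
    ("echinacea", "recovered"): 90,       # the only cell she keeps track of
    ("echinacea", "not recovered"): 10,
    ("no echinacea", "recovered"): 90,    # colds mostly cure themselves anyway
    ("no echinacea", "not recovered"): 10,
}

def recovery_rate(treatment):
    recovered = counts[(treatment, "recovered")]
    total = recovered + counts[(treatment, "not recovered")]
    return recovered / total

print(recovery_rate("echinacea"))      # 0.9
print(recovery_rate("no echinacea"))   # 0.9 -- identical, so no effect
```

The graphologist's error has the same shape: she tallies large descenders paired with large libidos and ignores the other three combinations.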

Let us look at one more rule of guided curiosity. It's not only important to pick up new information; it's also essential to compare that new information with what you already have. Is the new information consistent with what you already believe? If not, you have work to do to reconcile these beliefs. Perhaps you will have to reject the new information; perhaps you will have to modify it to a certain degree. Or maybe you will have to do one or both of these things to your present beliefs to achieve a fit. This shuffling process will always be with us, as long as we are gaining new information. (For a better account of this process, see Quine and Ullian, 1978.) If the new observation is consistent with your old beliefs, then you can ask how it enhances your understanding of what you already know. What implications are there from the new belief to other hitherto unsuspected claims? Do these implications suggest further claims which can be tested?

An example will show what I mean. Therapeutic Touch (TT) is a healing modality in which practitioners manipulate the "energy field" that surrounds the body without actually touching the body itself. (This has led me to describe TT as "neither therapy nor touch.") One shape of the energy field is supposedly healthy; other, abnormal shapes make us sick. The crucial thing to note about TT theory is that practitioners work on just this energy field, not on the body underneath it. So, let's take these claims at face value. If the healers can manipulate the field, they must be able to discern its presence somehow, without simply inferring it from the presence of a body. But if we could perceive its presence through sight, smell, touch or hearing (probably taste isn't an option here), then everyone would be aware of it. We are not; only TT practitioners are. Well, that must be because there's another sensory mechanism which not all of us have--only those who would be good therapeutic touch practitioners have it. The inference from the claim that the energy field can be worked with to the claim that practitioners must be able to recognise its presence is not one that TT believers often make. But it was made by a nine-year-old from Loveland, Colorado, Emily Rosa. And though no TT practitioner had thought to test the claim, young Ms. Rosa worked out how. For her science project she set up a solid barrier dividing a table in half, leaving just enough room for a TT practitioner to pass her hand underneath. On each trial the experimenter either held her own hand just below the practitioner's or did not, the choice being determined by a randomizer, and the practitioner had to say whether the hand was there. If the TT practitioner could do better than chance on a test designed to rule out the other sensory modalities, this would be evidence of an energy field that some gifted people could detect. Needless to say, Ms. Rosa's experiment did not confirm this hypothesis, but it did lead to her being the youngest person ever to be published in a top-rank medical journal (Rosa et al., 1998).
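
Why a chance performer cannot pass such a test can be seen with a short simulation. The trial count and probabilities below are illustrative, not the published figures from the study:

```python
import random

def simulate_test(n_trials, p_correct, seed=0):
    """Each trial, the practitioner calls the hand's position correctly
    with probability p_correct; p_correct = 0.5 models pure guessing."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p_correct for _ in range(n_trials))
    return hits / n_trials

# A guesser hovers around chance; a genuine detector of the field would not.
print(simulate_test(300, 0.5))   # close to 0.5
print(simulate_test(300, 0.9))   # close to 0.9
```

The randomizer is what does the work here: it rules out every sensory channel except the alleged energy sense, so a hit rate indistinguishable from 0.5 leaves the field with nothing to show for itself.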

There is another important implication of Hume's Rule. It tells us that belief should be based on the preponderance of evidence, or on the probability that the evidence confers on the belief. But the evidence for or against a belief continually shifts as more of it becomes available, and with it the probabilities fluctuate. Remembering this is how the skeptic following guided curiosity avoids dogmatism even when she has a fairly strongly held belief. She is always ready to modify her beliefs, and in some cases to switch from belief to disbelief or vice versa as the new evidence comes in. And it is also why a skeptic should not only state her beliefs, but state them along with her degree of confidence--her estimate of the likelihood that more evidence will require her to revise or abandon them. Or, better yet, always be prepared to state the belief along with the evidence for it. With these qualifications, there is no harm in provisionally stating a belief with a probability not much higher than 0.5, or a disbelief even when the probability is only a bit less than 0.5. Taking a belief seriously confers a benefit: once a belief is stated along with the evidence for it, it can be examined, and implications drawn from it, which in turn can lead to new understanding. Dismissing it because the probability is not much above 0.5 forgoes this possibility. The important thing to remember about dogmatism is that what is wrong with it is not the forceful stating of the belief, but concentrating on the belief rather than the evidence for it, or the unwillingness to budge from it when new evidence comes to light.
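
One standard way to formalise this continual revision--not invoked above, but very much in the spirit of Hume's Rule--is Bayes' theorem, which says exactly how a probability should shift when a new piece of evidence arrives. A minimal sketch, with made-up likelihoods:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of a hypothesis h after seeing evidence e:
    P(h|e) = P(e|h)P(h) / (P(e|h)P(h) + P(e|not-h)P(not-h))."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Start undecided at 0.5, then fold in each new piece of evidence in turn.
belief = 0.5
belief = bayes_update(belief, 0.8, 0.4)   # evidence that favours h
print(round(belief, 3))                   # 0.667
belief = bayes_update(belief, 0.2, 0.6)   # evidence that tells against h
print(round(belief, 3))                   # 0.4
```

The point of the sketch is only that confidence tracks the evidence in both directions: a belief that rises above 0.5 can sink back below it when the next observation cuts the other way.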

Some skeptics will be disappointed that I have gone all this way without mentioning one of the cardinal principles of skepticism, that there are times--quite a lot of them, actually--when you shouldn't express a belief at all; you should straightforwardly admit that you do not know. There are two advantages to this admission. The first is that it serves as a stimulus to curiosity: having admitted that you don't already know gives you a good reason to try to find out. Second, it prevents you from misleading others (and yourself). When they think you know and you don't, they might follow you when they shouldn't.

There are two situations where one should say that one doesn't know. The first is when this expression is simply a substitute for "I don't care." Here it is tantamount to admitting that you have very little evidence and are not prepared to gather any more. For example, just by reading the headlines and deciding that I had no interest in the articles they head, I couldn't help finding out that Prince Harry, the Duke of Sussex, and his wife, Meghan, recently had a son, whom they named "Archie." But his birthweight? I don't know; meaning "I don't care." After all, one cannot expect to have time to look into everything; one must prioritise.

The second situation occurs when the evidence you have at present is equally compelling for belief and disbelief, and it is possible to get more. In this situation it makes sense to suspend belief and wait for further evidence. But there's an important exception here: sometimes waiting isn't a viable option; circumstances require an immediate response. Fortunately, these special cases are quite rare, so withholding judgement while awaiting more evidence is an available option, and a good one.

Otto Neurath (1921) compared our belief system to a leaky ship at sea. We are continually replacing rotten planks with fresh ones, but are never able to replace the whole bottom at once, given that we wish to remain afloat. Thus we will never have a perfect set of beliefs; there will always be some false ones in there that we haven't found yet. The best we can hope for is gradual improvement. To continue his metaphor, when we find a particularly strong plank which doesn't fit very well with the old ones already in place, going to all the work of making it fit may result in a much less leaky boat overall. Similarly, encountering a new belief that is inconsistent with some old ones, but with a lot of evidence backing it up, may require the modification of several of the old beliefs at once. But the result may be a more coherent belief system overall--though never a perfect one. Non-skeptics may find this disconcerting. They are like sailors who aren't in it for the pleasure of sailing, but just want to get somewhere--anywhere--and go along for the ride. The true skeptic enjoys the sailing for its own sake.


Dawkins, Richard (2006) The God Delusion. Boston, Houghton Mifflin Company.

Hume, David (1955) "Of Miracles," in An Inquiry Concerning Human Understanding. Indianapolis, The Bobbs-Merrill Company Inc.

Paterson, Jane (1980) Know Yourself Through Your Handwriting. Montreal, Readers Digest Assn.

Quine, W.v.O. and J.S. Ullian (1978) The Web of Belief. New York, Random House.

Rosa, L., E. Rosa, L. Sarner, S. Barrett (1998) A Close Look at Therapeutic Touch. Journal of the American Medical Association, Vol. 279(13): 1005-1010.

Neurath, Otto (1921) Anti-Spengler. Munich, G.D.W. Callwey.

Dale Beyerstein retired as Chair of the Philosophy Department at Langara College in Vancouver. He was on the Executive Committee of the BC Civil Liberties association and one of the founders of the BC Skeptics.
COPYRIGHT 2019 Canadian Humanist Publications

Author: Beyerstein, Dale
Publication: Humanist Perspectives
Date: Dec 22, 2019
