Critical Rationalism Overview

Critical Rationalism (CR) is an epistemology developed by the 20th-century philosopher Karl Popper. “Epistemology” means the philosophy of knowledge. An epistemology is a philosophical framework to guide effective thinking, learning, and evaluation of ideas.

Reasonable epistemologies say what reason is and how it works. Epistemology is the most important intellectual field because reason is used in every other field. How do you figure out which ideas are good in physics, politics, poetry or psychology? You use the methods of reason! Most people don’t have a very complete, conscious understanding of their epistemology (how they think reason works), and haven’t studied the matter, which leaves them with an intellectual handicap in all fields.

Epistemology offers methods, not specific answers. It doesn’t tell you which theory of gravity is true; it tells you how to productively think and argue about gravity. It doesn’t give you a fish or tell you how to catch fish; instead, it tells you how to evaluate a debate over fishing techniques. Epistemology is about the correct methods of arguing, truth-seeking, deciding which ideas make sense, etc. Epistemology also tells you how to handle disagreements (which are common to every field).

CR is general purpose: it applies in all situations and with all types of ideas. It deals with arguments, explanations, emotions, aesthetics, whims, arithmetic – anything – not just science, observation, data or prediction. CR is itself a group of ideas, so it can be used to evaluate itself.

Fallibility

CR is fallibilist rather than authoritarian or skeptical. Fallibility means people are capable of making mistakes and it’s impossible to get a 100% guarantee that any idea is true (true = not a mistake). And mistakes are common, so we shouldn’t try to ignore fallibility. Fallibility is a routine issue, not a rare edge case.

It’s also impossible to (correctly) be 99% or even 1% sure of an idea’s truth. Some mistakes are unpredictable because they involve issues that no one has thought of yet. We can loosely, informally estimate how likely we are to be mistaken about something due to known types of mistakes, but we have no reliable way to make estimates about unknown mistakes (types of mistakes that we don’t already understand).

There are decisive logical arguments against attempts to achieve any infallibility (including partial or probabilistic infallibility).

Attempts to dispute fallibilism can be refuted by a regress argument. You make a claim. I ask how you guarantee the claim is correct (even a 1% guarantee). You make a second claim which gives some argument to guarantee the correctness of the first claim (probabilistically or not). No matter what you say, I ask how you guarantee the second claim is correct. So you make a third claim to defend the second claim. No matter what you say, I ask how you guarantee the correctness of the third claim. If you make a fourth claim, I ask you to defend that one. And so on. I can repeat this pattern infinitely. This is an old argument which no one has ever found a reasonable way around. You either have to give infinitely many new arguments, or you have to repeat an argument and therefore use circular logic. (Note: If your correctness claims are probabilistic, then the overall probability of correctness will approach zero as we repeat the regress. That’s an additional problem besides being unable to end the regress.)
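
To make the probabilistic note concrete, here’s a small Python sketch (the 99% figure for each claim is just an assumption chosen for illustration). Each claim is only guaranteed with some probability by the next claim, so the probability that the whole chain is correct shrinks toward zero as the regress grows.

```python
# Sketch of the probabilistic regress: if each claim is, say, 99% likely to be
# correct given the claim backing it, the probability that an entire chain of
# claims is correct shrinks toward zero as the chain grows.
# The 0.99 figure is an arbitrary assumption chosen for illustration.

def chain_probability(per_claim_probability: float, chain_length: int) -> float:
    """Probability that every claim in a chain of justifications is correct."""
    return per_claim_probability ** chain_length

for length in (1, 10, 100, 1000):
    print(length, chain_probability(0.99, length))
# 1    0.99
# 10   ~0.904
# 100  ~0.366
# 1000 ~0.00004
```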

CR’s response to the regress problem is to accept our fallibility and figure out how to deal with it. Don’t try to fight the regress problem (you can’t win); find a different approach which doesn’t rely on infallibilist guarantees. Unfortunately, since Aristotle, most philosophers have been fighting the regress problem and losing (and making increasingly complex arguments that confuse the issues, since they can’t win honestly).

Many philosophers think knowledge requires justification, truth and belief. They think that they need a guarantee of truth to have knowledge. So they have to either get around fallibility (and the regress problem) or accept that we don’t know anything (skepticism). Most people find skepticism unacceptable because we do know things – e.g. how to build working cars, computers and space shuttles. But there’s no way around fallibility, so the field of philosophy has had difficulties for thousands of years.

Philosophers have faced a problem: fallibility seems to be indisputable, but also seems to lead to skepticism. The CR way forward is to question the premises. CR solves the problem with a theory of fallible knowledge. You don’t need a guarantee (or probability) to have knowledge. The problem was due to an incorrect theory of knowledge and the perspective behind it.

I’ll explain how fallible knowledge can be knowledge after some further comments on the standard non-CR view.

Justification is the Major Error

The mainstream perspective is: after we come up with an idea, we should justify it. We don’t want bad ideas, and we know some of the ideas we come up with will be bad. So we try to argue for ideas to show they’re good rather than bad. We try to prove our idea or try to get a lesser approximation of proof. A new idea starts with no status (it’s a mere guess, hypothesis, speculation) and can become knowledge after being justified enough with arguments that favor it.

Justification is always provided by something, some kind of source of justification. The source of justification can be a person, a religious book, an argument, or something else. This is fundamentally authoritarian – it looks for sources of authority to provide justification. It’s saying if an idea comes from this source, or it’s endorsed by this source, then it’s a good idea. It’s judging ideas by which intellectual authorities back them. Ironically, it’s commonly the authority of reasoned argument that’s appealed to for justification (ironic because intellectual authority is contrary to reason). People say their rational arguments are the authority justifying their claims.

Authority is an irrational approach to truth-seeking. We should be evaluating what the idea actually says, not who or what endorses the idea.

And which sources have the authority to provide justification? The claim that a source has justifying authority is itself fallible, and will need to itself be justified by a prior justifying authority. But that prior justifying authority will also need to get its authority from some prior justification. This leads to a regress problem.

So the standard approach to epistemology is a search for authorities to justify ideas, rather than a search for good ideas – and that doesn’t work well.

Fallible Knowledge

CR says we don’t have to justify our beliefs; instead, we should use critical thinking to correct our mistakes. Rather than seeking justification, we should seek our errors so we can fix them.

Knowledge is good, useful ideas (as against bad ideas). Knowledge isn’t proof, certainty or guarantees. We may always make mistakes, but that doesn’t prevent us from learning new things, fixing some of our mistakes, and making progress.

When a new idea is proposed, don’t ask “How do you know it?” or demand proof or justification. Instead, consider if you see anything wrong with it. If you see nothing wrong with it, then it’s a good idea (knowledge), as far as you know.

Knowledge is always tentative – we may learn something new and change our mind in the future – but that doesn’t prevent it from being useful and effective (e.g. building a spacecraft that successfully reaches the moon). You don’t need justification or perfection to reach the moon; you just need to fix errors with your designs until they’re good enough to work. This approach avoids the regress problem and is compatible with fallibility.

The standard view said, “We may make mistakes. What should we do about that? Find a way to justify an idea as not being a mistake.” But that’s impossible.

CR says, “We may make mistakes. What should we do about that? Look for our mistakes and try to fix them. We may make mistakes while trying to correct our mistakes, so this is an endless process. But the more we fix mistakes, the more progress we make, and the better our ideas are.”

Tentative, fallible knowledge may sound limited, but that’s a mistaken perspective. Don’t be sad that we can’t have something more that’s logically impossible. Tentative, fallible knowledge is adequate for inventing cell phones and everything else that modern science offers. It’s all we’ve ever had in math, economics, psychology, art, architecture, medicine, law, and every other field – and we’ve successfully developed valuable knowledge in those fields.

CR calls tentative, fallible knowledge “conjectural knowledge” or just plain “knowledge”. CR says it’s the only kind of knowledge we have.

Guesses and Criticism

Our ideas are always fallible, tentative guesses with no special authority, status or justification. We learn by brainstorming guesses and using critical arguments to reject bad guesses. (This process is literally evolution, which is the only known answer to the very hard problem of how new knowledge can be created.) Criticism must explain why an idea doesn’t work (fails to accomplish the purpose it’s for), rather than demanding (impossible) positive justification or demanding authority. An idea without justification or authority may still be correct; a criticism is needed to point out that an idea is incorrect.
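
As a loose illustration (a toy sketch, not anything from CR or Popper), here’s what the guesses-and-criticism loop looks like as Python code. The problem, the guess generator, and the criticisms are made-up stand-ins; the point is the structure: brainstorm a guess, apply known criticisms, reject refuted guesses, and tentatively accept one that survives.

```python
import random

# Minimal sketch of the conjectures-and-refutations loop. The guess generator
# and criticisms below are toy stand-ins invented for illustration.

def generate_guess() -> int:
    """Brainstorm a candidate solution (here: just a random number)."""
    return random.randint(0, 100)

# A criticism explains why a guess fails at its purpose.
# Each returns an error message, or None if it finds no problem.
def too_small(guess: int):
    return "doesn't reach the required size" if guess < 50 else None

def not_even(guess: int):
    return "can't be split evenly in two" if guess % 2 else None

criticisms = [too_small, not_even]

def criticize(guess: int):
    """Return the first criticism that refutes the guess, or None if it survives."""
    for criticism in criticisms:
        error = criticism(guess)
        if error:
            return error
    return None

# Keep guessing until some guess survives every known criticism. A surviving
# guess is tentative knowledge: accepted for now, still open to refutation
# if a new criticism is thought of later.
while True:
    guess = generate_guess()
    error = criticize(guess)
    if error is None:
        print("tentatively accepted:", guess)
        break
    print("refuted:", guess, "-", error)
```

Note that the loop never proves the surviving guess correct; it has merely failed to find an error in it so far, and a new criticism could refute it later.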

This requires a mindset shift. Instead of asking for a “burden of proof” before paying any attention to an idea, you should get good at criticizing ideas. Learn a bunch of types of criticism so that if an idea is bad in a standard way – if it repeats any common, known error – then you can easily and quickly criticize it. You should learn principles of criticism that apply to broad categories, not criticisms of individual bad ideas or of small groups of bad ideas. If an idea doesn’t make any standard error, then it merits some attention. New ideas that don’t make any already-known errors are hard to come by.

How do you know which critical arguments are correct? That’s the wrong question because it seeks positive answers again instead of thinking critically. You can just guess which criticisms are correct or incorrect. The important thing is that critical arguments are simply ideas which are open to criticism just like any other idea. What if you miss something? Then you’ll be mistaken and hopefully figure it out later. You must accept your fallibility, perpetually work to find and correct errors, and still be aware that you are always making some other mistakes without realizing it. You can get clues about some important, relevant mistakes because problems come up in your life (indicating that you should direct more attention there and try to improve something).

CR recommends making bold, clear guesses which are easier to criticize, rather than hedging a lot to make criticism difficult. We learn more by facilitating criticism instead of trying to avoid it.

Buckets and Searchlights

CR says many people view minds like buckets (or sponges). Learning works by pouring water (knowledge) into a mind. The water/knowledge can come from a teacher or from nature (water/knowledge is poured into people through the senses).

The bucket theory of mind says learners are passive. They receive knowledge instead of actively creating knowledge.

CR proposes instead that learning is like a searchlight. We have to actively choose where to shine the light. Learners seek knowledge and can’t look everywhere, rather than passively taking in all the information/knowledge/water around them.

CR says that, when people learn, the learner does most of the work. Teachers play a smaller, secondary role.

We learn by conjectures and refutations (a.k.a. guesses and criticism) whether we’re making a new discovery or learning something that millions of people already know. Learning something new is the same process either way. From the learner’s perspective, in either case it’s a new idea to him.

Teachers and educational books have an important role but it’s more limited than commonly recognized. The student’s role is more primary. Teachers are helpers who can give tips and guidance, explain common mistakes, and share existing knowledge. It’s easier to guess an idea if someone is trying to tell it to you than if you have to make it up from scratch. But the learner still does have to figure it out. Someone’s speech (or words you read in a book) cannot directly put knowledge into your mind. You have to interpret the words, figure out what they mean, and figure out how to use them. You always have to think for yourself some.

Science and Evidence

CR gives some extra attention to science.

First, CR offers a theory of what science is: a scientific idea is one which could be contradicted by observation because it makes an empirical claim about reality.

Second, CR explains the role of evidence in science: evidence is used to refute incorrect hypotheses which are contradicted by observation. Evidence is not used to support hypotheses. There is evidence against but not evidence for. Evidence helps a theory when it refutes some of that theory’s rivals (alternative ideas that it was competing with).

Evidence is either compatible with a hypothesis or not. Logically (and given some context including background knowledge), evidence either contradicts an idea (incompatible) or does not contradict it (compatible). No amount of compatible evidence can justify a hypothesis because there are infinitely many mutually contradictory hypotheses which are also compatible with the same data. (Those rival hypotheses commonly have other problems, unrelated to the data, which we can criticize. For example, consider the set of all possible lists of predictions. It contains infinitely many hypotheses which match all past data and then make every possible prediction about the future. These can be criticized for being arbitrary claims without explanations or general principles, but they cannot be refuted just with data. Despite being bad ideas, they are just as “justified” by matching the data as good ideas are – which shows that this kind of “justification” is inadequate.)
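
Here’s a small Python sketch of that point (the data and the hypotheses are invented for illustration): several hypotheses all fit the same past data perfectly, yet they contradict each other about the future, so fitting the data can’t single out any one of them.

```python
# Sketch: many mutually contradictory hypotheses fit the same past data.
# The observations and hypotheses below are invented for illustration.

past_data = [(0, 0), (1, 2), (2, 4), (3, 6)]  # observed (x, y) pairs

def make_hypothesis(future_value: int):
    """A hypothesis that agrees with all past data, then makes an arbitrary
    prediction about the unobserved point x = 4."""
    def hypothesis(x: int) -> int:
        return 2 * x if x <= 3 else future_value
    return hypothesis

# Five rivals here; nothing stops us from generating endlessly many more.
hypotheses = [make_hypothesis(v) for v in range(5)]

for h in hypotheses:
    fits_past = all(h(x) == y for x, y in past_data)
    print("fits all past data:", fits_past, "| predicts y(4) =", h(4))

# Every hypothesis fits the data equally well, yet they contradict each other
# about the future, so fitting the data can't justify any particular one.
```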

These two big ideas about science are where CR has had the largest impact on mainstream thinking. Many people now see science as being about empirical claims which we then try to refute with evidence.

CR also explains that observation is selective and interpreted. We first need ideas to decide what to look at and which aspects of it to pay attention to. If someone asks you to “observe”, you have to ask them what to observe (unless you can guess what they mean from context). The world has more places to look, with more complexity, than we can pay attention to. So we have to do a targeted search according to some guesses about what would be productive to investigate. In particular, we often look for evidence that would contradict (not support) our hypotheses in order to test them and try to correct our errors.

We also need to use ideas to interpret our evidence. We don’t see puppies directly; we see photons, which we interpret as meaning there is a puppy over there. (We don’t exactly see the photons either; they hit our eye which sends an electrical signal to our brain, and the mental image in our brain is based on that signal.) Our interpretation is fallible – sometimes people are confused by mirrors, fog (you can mistakenly interpret whether you did or didn’t see a person in the fog), mirages (where blue light from the sky goes through the hotter air near the ground then up to your eyes, so you see blue below you and think you found an oasis), poor eyesight (some people need glasses but don’t know that yet), etc.


This has been a brief summary. To understand CR further, I recommend my list of Popper reading selections. Popper wrote a ton, so I wanted to help people find the highlights. CR has a lot to say, so this summary leaves out many things, including CR’s refutation of induction.