Fallibilism and Problem Solving with Meta Levels

Philosophers have many questions. Which ideas are true or false? Good or bad? What is reason and how do we think rationally? What is knowledge and how do we get it?

These questions are attempts to deal with human fallibility, which is our capacity to make mistakes. The history of the word “fallible” is related to falling, tripping, being deceived and failing. Those are all things that can happen to us, but don’t have to.

Western philosophical ideas were first developed in ancient Greece. The early Greeks distinguished between “epistēmē” (divine knowledge, truth) and “doxa” (fallible knowledge, guesswork, opinion). (I learned the most about this from Karl Popper, from whom I also learned a lot about fallibility.)

Epistēmē is the perfect, final truth known only to the gods, which can’t be improved. Doxa is accessible to humans and can be improved. Doxa may be similar to the truth, or truthlike, but it’s not the (perfect) truth.

From the word “epistēmē” comes the modern word “epistemology”, which is the philosophy of knowledge. Today, knowledge has been confused with epistēmē. A person who denies the attainability of epistēmē, as the early Greeks did, is now called a skeptic – a denier of all knowledge, not just divine, infallible or certain knowledge. A real skeptic is actually someone who denies the attainability of doxa too, while someone who only denies the attainability of epistēmē is a fallibilist.

A skeptic denies we can know anything. When people think fallibilists are skeptics, they are revealing that they believe doxa is not knowledge. So they are admitting their own skepticism of a type of knowledge. Fallibilists are skeptical of epistēmē, while many conventional thinkers are skeptical of doxa. Since epistēmē is impossible to get, denying doxa is a big issue that brings you close to skepticism. If you have no epistēmē and don’t think there is any other kind of knowledge, then you’ll think you have no knowledge! So people often try to convince themselves that they do have epistēmē. One of the main reasons they dislike fallibilists is that our challenges and critical arguments make it harder for them to tell themselves that they have epistēmē.

Epistēmē is infallible knowledge with a 100% guarantee against error. But humans only have doxa, which is fallible knowledge, or in other words knowledge that could be mistaken. It’s not “justified, true belief” (as most philosophers desire) because it might not be true.

Our culture has the anti-fallibilist idea that only epistēmē is knowledge, doxa isn’t. But people also frequently deny being infallibilists because they think they have only e.g. a 99% guarantee against error, not 100%. This attempt to solve the problem of fallibility with probability doesn’t really make sense.

True and false is a black and white distinction. But some ideas seem to fall somewhere in the middle: pretty good but not perfect. Hence Popper’s idea of “truthlike” ideas and getting “closer” to the truth. And hence also the confusion of the probability and goodness of an idea (it’s assumed that good ideas are more likely to be true, and that ideas which are likely to be true are good ideas). Basically people (incorrectly) think an idea with a 99% probability of being true is a 99th percentile idea in terms of quality.


We want the truth, but we’re fallible. So what do we do about errors? Seeking guarantees against error, even partial guarantees, doesn’t work.

CR (Critical Rationalism) refutes induction and justificationism. But it doesn’t emphasize their largest weakness: they have nothing to do with error correction. Foundationalism, too, doesn’t address error correction. Those strategies are more about error avoidance – do this and you won’t be wrong, or you’ll be less likely to be wrong. Instead of accepting fallibility and figuring out how to live with it, they try to avoid it. Because they inevitably fail to avoid fallibility – and that’s where all their effort went – they don’t work. Fallibility happens anyway and they have no idea how to handle it.

Truth is successful solutions to problems (in contexts), as against ideas that don’t work. Every idea has some sort of goal or purpose which it succeeds at or not (or maybe partially succeeds, so it’s partially true – we’ll deal with that later).

Good ideas have a high degree of knowledge. This concept admits we’re fallible and don’t know what the truth is, and may improve the idea later, but still distinguishes better and worse ideas.

Reason and rationality are about correcting errors.

Knowledge is error corrected information. The more error correction (quantity and effectiveness), the more knowledge. Knowledge and error correction are both also contextual.

The central issue, if we accept fallibility instead of trying to fight it, is how to error correct our ideas.

This leads to a key fact: error correction is digital, not analog. It corresponds to integers, not real numbers, which means error correction only works with qualitative differences, not quantitative differences. That’s because you can’t effectively error correct a matter of degree. (Error correction working better with digital systems is discussed in David Deutsch’s book The Beginning of Infinity.)

As a simplified model, think of error correction like rounding numbers. 1.2 rounds to 1 not to 2. This corrects the 0.2 error, which is how far away from the correct answer it was. This system, based on all true ideas being integers, allows all errors of less than 0.5 to be corrected.

If any real number can be the answer, how do you error correct 1.2? It could be exactly right. You have no way to know if it should be higher, lower or the same. This is why our computers are digital. Errors happen in computers, e.g. the voltage is higher or lower than intended, but computers can correct those errors by rounding to the closest valid value (of which there are just two, a high voltage and a low voltage).
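Here’s a minimal sketch of the rounding model in Python. The specific numbers (a 5 volt logic level, the example readings) are just illustrative assumptions, not details from any real hardware:

```python
# Toy model of digital error correction by rounding.
# Valid values are integers (or, for the binary case, just 0 and a high voltage).
# Any error smaller than half the gap between valid values gets corrected.

def correct_integer(x: float) -> int:
    """Round a noisy reading to the nearest integer (the nearest valid value)."""
    return round(x)

def correct_binary(voltage: float, high: float = 5.0) -> float:
    """Snap a noisy voltage to whichever of the two valid levels (0 or high) is closer."""
    return high if voltage > high / 2 else 0.0

print(correct_integer(1.2))  # 1 -- the 0.2 error is corrected
print(correct_integer(1.6))  # 2 -- if the true value was 1, the 0.6 error is too big to correct
print(correct_binary(4.3))   # 5.0 -- noise on the wire is discarded
print(correct_binary(0.8))   # 0.0

# With analog (real-numbered) values there's nothing to round to: a reading of 1.2
# might already be exactly right, or might need to be higher or lower. There's no
# way to tell, so the error can't be corrected.
```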


How do we seek the truth and learn? We’re fallible, capable of error. Progress is by error correction. Error correction requires seeking out mistakes and trying to come up with better ideas (brainstorming, criticism). We need mechanisms for:

  • Being told mistakes (Paths Forward)
  • Digital judgments of ideas
  • Anti-bias policies (bias is a type of error)
  • Understanding and managing error rate (not overreaching)
  • Automatization and practice (3 learning stages: correct once, correct many times, fast/automatic)
  • Unbounded/universal error correction
  • Meta ideas

A meta idea means an idea about an idea. If X is an idea, then the statement “X is an error” is an idea about another idea (X).

There are many useful meta ideas, like “Idea X plus the following modifications…” or “Idea X for the following limited purpose only…” or “Given we don’t know X, disagree about Y, and are stuck on Z … what should we do?” or “This disagreement (X) is at an impasse (Y).”

Meta levels are also useful. An idea (X) is level 0. A meta idea – an idea about X – is level 1. It’s 1 level removed from the original idea. We’ll call that idea X1. What is an idea about X1? It’s a meta meta idea. It’s a level 2 meta idea. We could call it X2. And then an idea about X2 would be a level 3 meta idea (a meta meta meta idea), which we might call X3, and so on.

What does this mean more concretely? We could run into a problem in the original discussion, and then talk about the problem itself and what to do about the problem. We’re now talking at meta level 1. While having that conversation, we might have another problem. If we discuss that problem, we’re now talking at meta level 2. We can repeat this and talk at higher meta levels instead of giving up or getting stuck. Switching to higher meta levels is one of the most important techniques in problem solving.
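As a toy model (the topic and the impasse descriptions below are hypothetical, and this is just a sketch of the idea, not a formalism), you can picture the discussion as a stack: each time you hit an impasse, you push a new meta level whose topic is the impasse at the level below.

```python
# Toy model of switching to higher meta levels when a discussion gets stuck.
# Level 0 is the original topic; level N+1 is a discussion about the problem at level N.

discussion_stack = ["X: which career should I pursue?"]  # level 0 (hypothetical topic)

def go_meta(stack, impasse_description):
    """Push a new meta level: a discussion about the impasse at the current top level."""
    level = len(stack)  # the new level number
    topic = (f"X{level}: we're stuck on level {level - 1} "
             f"({impasse_description}); what should we do about that?")
    stack.append(topic)
    return topic

go_meta(discussion_stack, "we disagree about what a good career is")
go_meta(discussion_stack, "we can't agree on how to handle that disagreement")

for level, topic in enumerate(discussion_stack):
    print(level, topic)

# Each level is a separate problem you could work on, and the problem at level N+1
# doesn't require first solving the problem at level N.
```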

Broadly, while we can solve problems and figure things out, we might not solve any particular problem soon. While we could learn any truth, we won’t necessarily learn it this century. It can take a long time to figure out a specific thing.

So problem solving requires having many options for which problem(s) to solve. If we had to solve specific problems, we might get stuck and fail for our entire lives. But if we have enough problems to choose from – preferably infinitely many choices – then we should be able to find at least one that we can solve (especially if the problems are significantly different instead of all being variations on the same theme).

How can we get many options for problems to solve? Meta levels. If we reach meta level 10, then we have at least 11 different problems we could solve: the issues at levels 0, 1, 2, 3, and so on up to 10.

Are these new problems, at higher meta levels, very useful? Yes, because they specifically don’t depend on solving the prior problem. When you discuss problem X, and what to do about it, you’re thinking about how to proceed, or what to do, without already knowing the answer to X. Broadly, you can take an unlimited number of things you’re stuck on and consider what to do given that situation. The answer to that doesn’t depend on solving any of the problems you’re stuck on. In this way, you can always generate a new problem that doesn’t rely on any of the problems you find hard, so you should always be able to get unstuck. (I’ve written other stuff about this kind of problem solving before.)

You can reach an impasse while trying to resolve an impasse, at which point you can discuss that problem situation itself and what to do about it. If you reach an impasse again, you can switch to a higher meta level. In general, the higher meta levels get less ambitious and therefore easier. Given we’re stuck on tons of stuff, what should we do? Something really easy, minimal and low-ambition!

There are some difficulties that I haven’t addressed here. One of the main issues is that people actively block or sabotage problem solving. That’s one of the main causes of failure. Meta levels don’t fix that. People can keep screwing things up at every meta level if they try to. More broadly, some problems are hard to contain. They can be hard to talk about while keeping them isolated as objects of discussion that don’t affect the discussion. Meta level problem solving is kinda like sticking problems in containment devices and then discussing what to do with the devices (and if you get stuck, just get an even bigger containment device and stick everything in there, and iterate with even bigger containment devices as needed), but certain problems can be hard to contain.


Evolution is the only known method of error correction at a low level; all other error correction is a layer of abstraction on top of evolution (a little like how a battalion is an abstraction on top of individual soldiers). We need step by step progress to avoid introducing too many new errors at once, to better narrow down which changes caused which problems, and to be better able to revert/undo changes. That’s like how evolution works with a pretty low mutation rate, so there’s mostly continuity over time. Replication with variation and selection means mostly replication with a little bit of variation. If there were tons of variation, it wouldn’t be replication anymore because it wouldn’t be like the previous generation.


All errors are correctable. If something isn’t correctable, it’s not an error since better is impossible.

All life is problem solving. That means there are things we can improve. We can build nicer homes and better machines. We can understand science and philosophy better. We could have better dinners, better books, better everything. Life should be a process of ongoing improvement. Life is about making progress.

But we make mistakes. Some of our solutions don’t work. Some of our ideas are wrong. That’s the hard part. Without mistakes, life would be easy; there’d be no violence, no suffering, no bad outcomes of any kind. (The laws of physics aren’t inherently evil, so it’s OK that we’re stuck with them.)

So we need to deal with mistakes. That’s the central problem of philosophy. But most effort has gone towards avoiding and preventing mistakes. That’s a mistake because mistakes happen anyway. We need to start by figuring out what to do about mistakes that happen. Consider questions like: How do you tell what’s a mistake? How do you fix a mistake?