Ideas should be judged as refuted or non-refuted.
What does this claim mean, why does it matter, and is it actually correct?
People commonly believe ideas start at a score of 0, and then have to reach 0.95 to meet the burden of proof. Details can vary, e.g. ideas might start with a higher score up to 0.5 at most. The score changes when we think about the idea, make arguments, gather and consider evidence, etc.
People disagree about some of the details, but Critical Fallibilism (CF) says something clearly different: ideas start at 1 (non-refuted) and can be lowered to 0 (refuted), but can’t go up. Ideas also can’t have any score in between 0 and 1. We can’t prove we’re right (or more likely to be right, or that our idea is better); we just guess ideas, look for errors, and try to eliminate bad ideas.
An alternative CF perspective, which is fine too, is that ideas start at “unknown” or “undecided” and then, when you first evaluate them, you choose 0 or 1. The important thing is you shouldn’t come up with an idea and then instantly act on it, believing it’s just as good as any other non-refuted idea. New ideas need to be critically considered before being used. They need to stay non-refuted after an appropriate amount of consideration, which is often just a few seconds for little things, but can be days for big decisions. It’s also important to CF that “undecided” is not a long term status. You don’t keep researching the idea indefinitely while calling it undecided. If an idea passes initial review, it’s non-refuted. Initial review has two basic phases. One phase is checking if you already know of a criticism that applies to the idea. And the other phase is trying to think of a new criticism that applies to the idea. In CF, initial review of an idea usually takes under 5 minutes.
Non-refuted does not mean you should act on an idea. It means that current knowledge doesn’t already refute the idea. Deciding what actions to take in your life is a related but somewhat different matter than making an epistemological evaluation. (The bar for action is actually having an idea that directly says “Take this action now” and that is non-refuted. But if the idea had not passed initial review yet, then it would be refuted, because we know we shouldn’t act on ideas that haven’t passed initial review, so it’d be wrong to demand immediate action. Even in a very rushed situation, an initial review is still done. It’s just done in a shortened way by your subconscious mind. Relying purely on intuitions from your subconscious is riskier and should be avoided when you aren’t in a big rush. 10 seconds of conscious thought is a lot more than 0 and will catch some additional errors.)
Our tool is refutation not support. Either an idea has been refuted or it hasn’t.
The number of refutations doesn’t directly matter. If an idea doesn’t work for one reason or 20 reasons, the result is the same: it doesn’t work. However, the number of refutations can make a difference when you try to modify the idea to create a variant idea without the errors. More errors tend to be harder to fix (but not necessarily; one important criticism can be harder to fix than 20 minor points).
The reason ideas can’t change from refuted to non-refuted is that if you change an idea to fix some problem, now you have a different idea. That’s a variant idea, not the same idea. The original idea without the fix is still refuted, and the new version was never refuted at any time because the criticism of the original idea does not apply to the new variant. People sometimes refer to many versions of an idea by the same name, e.g. “democracy” or “induction”. We can get away with this sometimes but it also leads to a lot of confusion. In some conversations, it helps to give each variant of an idea a different name. You can use descriptive names (“direct democracy”) or numbers (“induction-7”). We could specify dozens of different versions of direct democracy, and number them, but we usually don’t; we try to allocate precision where it’s actually useful, and most conversations don’t rely on precise nuances about democracy.
Note: If on average we’re highly precise about one thing per five conversations, and each conversation touches on 100 things, then we’d only use high precision with one thing in 500. 100 is a low estimate for the number of things per conversation since basically every word and phrase is a thing. The previous sentence touched on (with low precision) the quantity 100, the “is” relationship, the difference between “a” and “the”, lowness, estimates, numbers, things, “per”, conversations, basicness, what “every” means, words, and phrases – and those 13 things aren’t a complete list.
A refuted idea is one that we know won’t work. What does it mean to not work? It fails at a goal/purpose/objective. Ideas can only be judged in relation to a goal and a context (with all the background information that might be relevant). Context is “the circumstances that form the setting for an event, statement, or idea, and in terms of which it can be fully understood and assessed” (New Oxford Dictionary).
You can define “idea” in different ways. You could include the goal and even the context in the “idea”. But we’ll split those up. So we won’t judge ideas alone. We’ll judge groups of 3 things: an idea with a goal and a context (an IGC).
The same idea can work for one goal and fail at a different goal, and it’s up to you to consider which goals you have and don’t have. E.g. telling someone a complaint may fail at the goal of rational problem solving but succeed at the goal of starting a fight. Sometimes people actually do try to start fights on purpose (including without admitting they were doing that); that’s a real goal someone might have. Ideas can also succeed at dumb goals that no one would realistically have; that’s still success at some goals even if it isn’t useful to us. Normally we don’t consider goals unless at least one person considers them potentially worthwhile.
A critical argument can refute an idea for some goals but leave it non-refuted for some other goals.
An idea can also work in one context and fail in another. E.g. if I’m trying to throw a ball and hit something, a particular throw will work in Earth’s gravity but miss with Mars’ gravity. The context of which planet I’m on is relevant to my ideas about how to aim my throws. So criticism is contextual. A criticism might only apply on Earth but not Mars, or vice versa.
Context is basically any background information. It’s big because there are many, many pieces of context we could point out. Context can be pretty much the whole of reality. But usually most of it isn’t relevant. We should bring context up when we figure out how some particular piece of context matters (e.g. that gravity matters to throwing objects, so which planet we’re on is relevant, so we better take that into account when training pitchers for our inter-planetary baseball league). Often, a few of the most important parts of (what would be) context are mentioned in the idea or goal, so they’re no longer context, since they’re already in the idea or goal.
Many of our ideas, but not all, only need to work on Earth. And they only need to work in the society we live in today, or a similar society, but not any society. We generally try to only worry about relevant contexts.
An IGC is an (idea, goal, context) triple. A triple is a group of three things (like a pair, but for three instead of two). Any IGC is either refuted or non-refuted. You may object that an idea could also be too vague to judge, but I’d consider that refuted. If an idea is too vague to evaluate, believe or act on, then it’s bad! Ideas have to be clear in order to provide value.
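As a rough illustration (not part of CF's formal claims), an IGC can be pictured as an immutable triple with a strictly binary evaluation. All names here are hypothetical, and a criticism is modeled as a simple predicate on the IGC:

```python
from dataclasses import dataclass

# Hypothetical sketch: an IGC is a triple of idea, goal, and context.
# Its evaluation is binary: "refuted" or "non-refuted", nothing in between.
@dataclass(frozen=True)
class IGC:
    idea: str
    goal: str
    context: str

def evaluate(igc: IGC, criticisms) -> str:
    """Return 'refuted' if any criticism applies to this IGC, else 'non-refuted'.

    Each criticism is a predicate on the IGC. An idea too vague to judge
    would be handled by a criticism that applies to vague ideas.
    """
    for criticism in criticisms:
        if criticism(igc):
            return "refuted"
    return "non-refuted"

# Example criticism: the idea busts the budget stated in the goal.
over_budget = lambda igc: "costs 1,000,000" in igc.idea and "under 100k" in igc.goal

house = IGC(idea="Buy the house that costs 1,000,000",
            goal="Get a house for under 100k",
            context="Normal housing market")
print(evaluate(house, [over_budget]))  # refuted
```

The point of the sketch is the return type: a two-valued status, not a score between 0 and 1.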
Our evaluations represent our current knowledge. If something is non-refuted that just means the idea will accomplish the goal (in the context) as far as we know currently. We don’t know a reason it will fail. But it still might fail. There could be an error that we didn’t think of.
Refuted ideas can never be rescued or saved. If an idea was unable to defend against a criticism, you can’t fix that by thinking of something new. The something new was not in the idea. If you add something to an idea, or make any other changes, now you have a variant idea, which is a new idea that should receive its own evaluation. (When evaluating a variant, we should make sure to consider whatever criticism refuted the prior idea to see if it refutes the variant too. We should also watch out for strategies to immunize ideas against criticism by adding generic defensive strategies, e.g. adding “maybe” everywhere. Yes you’re right that “maybe” that idea will work if it isn’t literally impossible, but that isn’t useful.)
It’s possible to think in terms of modifying (changing) ideas and count variants as the new versions of the same idea instead of a new idea. You could look at epistemology that way and get right answers. But it makes things harder and less elegant. It treats two things as the same which are different. It allows refuted ideas to become non-refuted again due to being changed. And basically it’s the same as the issue of mutable state in programming, which functional programmers have explained should be avoided in general (it’s confusing and is one of the most common sources of bugs).
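The mutable-state analogy can be made concrete. A minimal sketch (with made-up idea names): instead of mutating an idea in place, a fix produces a new value with its own, separate evaluation, while the refuted original keeps its status:

```python
from dataclasses import dataclass, replace

# Sketch of the immutability analogy: variants are new values, not
# in-place mutations. The refuted original stays refuted.
@dataclass(frozen=True)
class Idea:
    name: str
    text: str

evaluations = {}  # name -> "refuted" or "non-refuted"

original = Idea(name="induction-6",
                text="Observed patterns prove general laws.")
evaluations[original.name] = "refuted"  # a criticism applied

# Fixing the problem yields a *different* idea with its own evaluation.
variant = replace(original, name="induction-7",
                  text="Observed patterns rule out some general laws.")
evaluations[variant.name] = "non-refuted"  # that criticism doesn't apply here

print(evaluations["induction-6"], evaluations["induction-7"])
```

Using `frozen=True` mirrors the functional-programming point in the text: nothing can silently change a refuted idea back into a non-refuted one; you can only create a new variant.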
What if we have a criticism of an idea but we’re unsure if it’s right? How do we handle uncertainty with a black-and-white thinking system? A little bit of the answer is we always deal with uncertainty and must use our judgment and make decisions anyway. And our decisions always involve accepting and acting on an IGC (idea, goal, context) while rejecting other IGCs. Having our intellectual judgments take the same form as our decision making is best. But the main answer is that judging gets way, way easier with clear goals. Instead of “I want a good house”, consider a goal like “I want a house that costs under 100k and no other considerations matter”. Given that goal, could you confidently evaluate whether a particular house passes or fails? Yes you could. Broadly, when in doubt, declare everything refuted for failing to help guide you adequately. If your ideas and goals aren’t good enough to enable clarity for you, then they’re bad. (What if you want to use some idea anyway because you think transitioning to clearer thinking will take too long? So better philosophy is a future aspiration but not something you’re ready for immediately. Well, then, you have your answer: you think that idea, which you want to use, will work. You have a judgment of it. If you didn’t think it’d work, you wouldn’t want to use it. So yes, you can aspire to improve your thinking, and in the meantime you can act on things you think will work.)
What if we come up with a criticism of a criticism? If the original IGC didn’t provide that counter-criticism, then it was inadequate. Treat it as refuted and propose a new IGC which is like the previous one, but with a variant idea that has a footnote explaining the potential criticism and counter-criticism.
Is everything instantly refuted the moment anyone comes up with any criticism? Not quite. First check if the IGC already addresses the criticism. This should be pretty quick and easy in general (if it’s not, then usually the IGC isn’t good enough, but you could also be dealing with a hard issue and a lot of complexity, in which case putting in more effort is reasonable). In other words, a new criticism (like any idea) has to be exposed to our library/archive of known criticism (which is part of our context). Take the ways of looking for errors that you already know (both generic and subject-specific) and try to answer the criticism.
Critics can’t just say “It’s bad!”; they have to give a reason an idea will fail at a goal, like “That house costs a million bucks, which is more than the goal budget.” A correct criticism tells you a reason an idea fails at a relevant goal, and also the criticism has to survive initial critical review. We’re not talking about unlimited creative, critical thought in the indefinite future; just some limited, standard thought to see if we already know a refutation (this usually takes under 5 minutes, and for tiny ideas it’s often just a few seconds).
If you do creative research, and come up with new knowledge, you’re actually changing the context! Creating new background knowledge doesn’t actually refute the old IGCs. What it can refute is the old I and G combined with the new C, which is useful. (This is related to Objectivism’s idea of contextual knowledge, which says ideas can remain non-refuted in a prior context even as you learn new things that refute them. Objectivism likes to view progress in terms of moving on to new, better knowledge without invalidating our old knowledge – it still had value and was useful in its context even if we know better today. That contrasts with Critical Rationalism’s view of progress as a succession of new problems and errors, and belief that all our ideas are flawed/imperfect. Despite the different emphasis, these views are actually basically compatible. Progress involves getting closer to perfection without reaching it, and you can look at that in terms of making improvements, correcting errors, or both.)
An IGC should be judged by what knowledge already exists. The goal when considering a criticism is not to innovate, merely to see if the criticism refutes the idea given existing understanding. You’re just trying to use what you already know to check if that existing knowledge already addresses the criticism or not. If the criticism is pre-refuted – addressed in advance by the IGC – then the IGC never becomes refuted by that criticism, since it preemptively addressed it. If the criticism is not pre-refuted then the IGC is refuted.
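The pre-refutation check described above can be sketched as a simple lookup: judge the criticism against what the IGC already addresses, with no innovation involved. This is an illustrative model with hypothetical names, not CF's official procedure:

```python
# Sketch: a criticism refutes an IGC unless the IGC already addresses
# (pre-refutes) that criticism. No new research happens at this step.
def judge(igc_answers, criticism):
    """igc_answers: the set of criticisms the IGC preemptively addresses.
    Returns the IGC's status with respect to the given criticism."""
    if criticism in igc_answers:
        return "non-refuted"  # the criticism was pre-refuted
    return "refuted"

answers = {"too expensive", "too slow"}
print(judge(answers, "too expensive"))  # non-refuted
print(judge(answers, "unsafe"))         # refuted
```

The design choice worth noting: the check uses only existing knowledge (the `answers` set), matching the text's point that judging an IGC is not a moment for creative research.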
When an IGC is refuted, people should consider variants on it. Can we come up with a similar IGC that is not refuted? Sometimes this is easy. That means, in retrospect, that the criticism was minor. Sometimes it’s hard or impossible. That means the criticism was severe. Minor and severe are not fundamental parts of epistemology. They’re just approximations that help connect this to people’s existing intuitions. They’re things people often estimate in advance. Those estimates may be OK but aren’t reliable and shouldn’t be used to judge IGCs or criticisms.
The standard view is that we use both decisive and degree arguments. Paraphrasing Richard Feynman: If it contradicts the evidence, it’s wrong. That’s an example of mainstream acceptance of some decisive refutation. Many people think we rule out some ideas using decisive arguments like contradicting evidence or internal logical contradiction, but there are many remaining ideas, which cannot be decisively refuted, which we evaluate using degree arguments. Decisive arguments are powerful when they work, but many ideas aren’t decisively refuted, so then we consider which ideas are better, which have more merit, etc.
Aside: Decisive positive arguments – proofs – have mixed mainstream acceptance. They’re particularly appealing to fans of positive arguments because they’re the most powerful positive arguments possible. But because those arguments are infallibilist, some people reject them or only accept them in special cases like math.
My proposal, in short, is to get better at decisive negative arguments and use them exclusively. They’re better than indecisive arguments. A lot of my opponents agree that decisive refutations are better, and just think they’re too hard to come by. Let’s improve our skill and see how far we can take decisive arguing. The assumption that we can’t get very far using only decisive criticism is under-explored. And it’s already well known that degree arguments are problematic. That’s part of what the problem of induction is about. And how do you objectively assign weights to different arguments? Big problem. No clear answers. Since that’s a mess, and has remained a mess despite massive attention to try to improve it, it’s worth considering alternative approaches.
This article will share some further, relevant thoughts but will not attempt to offer a complete solution. I think CF has a reasonably complete solution but it’s complicated and I explain parts of it in many articles.
Degree arguments deal with things like how much (to what degree) an argument or piece of evidence supports a claim. They deal with amounts of goodness, justification, authority, persuasiveness, certainty, plausibility, likeliness, strength, etc. Any or all of those. Even Karl Popper’s idea of which idea best survives criticism is a degree argument – it’s trying to judge the degree/amount of criticism survival of ideas. David Deutsch’s idea of hard to vary is also a degree approach: look at the degree/amount of hardness to vary. All these things try to evaluate ideas on a spectrum or continuum that’s similar to the real number line.
The common idea of stronger or weaker arguments, which Popper and Deutsch both used, is also part of the indecisive/partial/degree approach to arguing.
Degree arguments are vague, and they stem from vague goals. Clear goals define success and failure. Then ideas can be evaluated in a pass/fail way.
Instead of one vague goal, consider many goals. If you have trouble coming up with a single clear goal, you can brainstorm many goals, just like you brainstorm many ideas. If you have 10 ideas, 10 goals and 1 context, then there are 100 IGC triples (consider each idea with each goal and each context). There are also goal combinations. You can take any group of goals and evaluate whether ideas succeed at all goals in that group.
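The counting above can be sketched directly: 10 ideas, 10 goals and 1 context give 100 triples, and a goal group succeeds only when every goal in it is met. The names are placeholders:

```python
from itertools import product

# Sketch of the counting in the text: 10 ideas x 10 goals x 1 context
# yields 100 IGC triples (each idea paired with each goal and context).
ideas = [f"idea-{i}" for i in range(10)]
goals = [f"goal-{g}" for g in range(10)]
contexts = ["today's context"]

igcs = list(product(ideas, goals, contexts))
print(len(igcs))  # 100

# Goal combinations: an idea succeeds at a group of goals only if it
# succeeds at every goal in that group.
def succeeds_at_all(idea, goal_group, succeeds):
    return all(succeeds(idea, goal) for goal in goal_group)
```

`itertools.product` makes the combinatorics explicit: the number of IGCs is the product of the list sizes, which is why brainstorming more goals multiplies, rather than adds, the evaluations available.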
When we act, we should act on a non-refuted IGC. Why? Better something without a known error than with a known error. That’s what using our knowledge instead of ignoring it means.
Evaluating Many IGCs
Is 100+ IGCs a lot to evaluate? Yes, but there are some things that help.
First, you don’t need to evaluate all IGCs. Any non-refuted IGC can be acted on. If you think it’s good enough and want to take action instead of analyzing more, go ahead. The moment you find one non-refuted IGC, you can move on to action if you want to.
By the way, technically the decision to proceed to action is itself an idea which must be exposed to criticism. So it’s another IGC evaluation. And you could also criticize and question that evaluation. And so on. There are no bounds, provided by reality, on criticism and analysis. You can do it indefinitely and never act. But you can also choose to act. You get to choose how much to criticize and analyze and where to direct that scrutiny. There are ideas that can help guide those choices and if you decide they’re problematic you can reconsider them and seek a better way to decide.
Second, there are patterns in IGC evaluations. You may find that all ideas with property X fail at all goals with property Y. You can come up with an argument for that. Then you can quickly refute many IGCs: all you have to check is that the idea meets criterion X, the goal meets criterion Y, and there’s no counter-argument already, preemptively included in the IGC. Lots of criticism applies to whole groups/categories of things. We don’t have to evaluate ideas one by one. (If you find you’re evaluating IGCs one by one and they don’t seem very similar, you may have chosen the ideas and goals to each be representatives of a category. That cuts down on the number of IGCs to evaluate in the first place but also reduces the number you’ll be able to address at once using patterns.)
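The pattern idea can be sketched as one argument refuting a whole category at once: check property X on the idea, property Y on the goal, and the absence of a preemptive counter-argument. The predicates and example strings here are hypothetical:

```python
# Sketch of pattern-based refutation: one argument refutes every IGC
# whose idea has property X and whose goal has property Y, unless the
# IGC preemptively includes a counter-argument.
def pattern_refutes(igc, has_x, has_y, has_counter_argument):
    idea, goal, context = igc
    return has_x(idea) and has_y(goal) and not has_counter_argument(igc)

# Hypothetical pattern: ideas that rely on random guessing fail goals
# that require reliability, absent a counter-argument.
has_x = lambda idea: "guess randomly" in idea
has_y = lambda goal: "reliable" in goal
no_counter = lambda igc: False  # this IGC offers no counter-argument

igc = ("guess randomly each time", "get reliable results", "everyday life")
print(pattern_refutes(igc, has_x, has_y, no_counter))  # True
```

One predicate check per IGC replaces a fresh evaluation per IGC, which is the efficiency gain the paragraph describes.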
Third, your brain is a computer. A desktop computer can do billions of computations per second. You can and do evaluate huge numbers of IGCs. Most of that happens subconsciously.
Fourth, thinking can take some work. Evaluating specific, clear IGCs lets you make step by step progress. If this method actually works, that’s good, even if it takes significant effort.
Fifth, you can organize your search for an IGC to act on. A typical search pattern loosely organizes the goals by ambition: which seem easier to achieve? Then evaluate a few IGCs with easy goals. If they fail, look more thoroughly for any working solution to any of the easiest goals. If that fails too, look at some harder goals (in case you were wrong about which goals are easy or hard), and brainstorm new ideas and new, easier goals.
On the other hand, if some non-refuted IGCs are found, then consider some harder, more ambitious goals and check if any non-refuted IGCs can be found. Keep escalating to harder goals until you’re satisfied or you find some goals where the IGCs all fail.
The goal is to quickly figure out roughly the borderline of: what are the most ambitious goals that I have non-refuted IGCs for? You want to find a difficulty/ambition level for which you have working solutions, and a difficulty/ambition level for which you do not have working solutions. This gives you a decent idea of which goals are achievable with current solutions and what your options are. You can then pick an IGC to use or start developing new solutions that will enable succeeding at more or harder goals.
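The search for that borderline can be sketched as a loop over goals ordered from easiest to most ambitious, stopping when the IGCs start failing. The evaluator and goal names below are illustrative, not a real procedure from the article:

```python
# Sketch of the search pattern described above: order goals roughly by
# ambition (easiest first), then find the most ambitious goal that
# still has a non-refuted IGC.
def most_ambitious_achievable(goals_by_ambition, ideas, non_refuted):
    """Return the hardest goal for which some idea yields a non-refuted
    IGC, or None if even the easiest goal has no working solution."""
    best = None
    for goal in goals_by_ambition:
        if any(non_refuted(idea, goal) for idea in ideas):
            best = goal  # this difficulty level is achievable
        else:
            break  # found the borderline; stop escalating
    return best

goals = ["house under 500k", "house under 200k", "house under 100k"]
ideas = ["buy fixer-upper", "buy suburban house"]
# Hypothetical evaluator: suppose only the under-100k goal fails.
non_refuted = lambda idea, goal: goal != "house under 100k"
print(most_ambitious_achievable(goals, ideas, non_refuted))
```

The early `break` reflects the text's advice: you want a rough borderline quickly, not an exhaustive evaluation of every IGC.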
Goals can sometimes, but not always, be placed in a rough spectrum of how ambitious, hard and desirable they are. When you can’t rank the goals, just try evaluating a variety of IGCs to get a sample of results to get a feel for which goals are achievable or not with current solutions.
Degrees and Goals
Do goals have degrees of goodness or ambition? Are we moving degrees from ideas to goals? Although goal success or failure is binary, how desirable a goal is may be a matter of degree. How do we decide between goals?
Goals are judged using the same thinking process as other ideas. We can criticize our goals. We can consider what (other) goals each goal succeeds or fails at. Goals are evaluated in terms of other goals. Does this lead to an infinite regress? You can’t question or evaluate all goals at once. That wouldn’t work. But you can question any one goal, or group of goals, in terms of other goals. And, eventually, through many steps, you could end up critically considering and changing all of your goals. No goal is immune to consideration.
Degrees of goal ambition, difficulty or desirability do not play a fundamental role in epistemology. They’re loose, informal guesses (that should be exposed to criticism) that people can use to decide where to allocate effort. They aren’t evaluations of what’s true or false.
How do starting points work? You need to start with some goals. If you had none inborn, you’d have to make up some initial goals. They could be random. You just need something. One way to think of it is that you have the implied initial goal of being able to think. Since thinking requires goals, you need to get some initial goals that’ll enable thinking.
Will we get permanently stuck with some bad goals? That shouldn’t happen because any goal can be questioned, criticized, challenged and replaced. But what if your other goals allow no way to criticize a particular error, and no way to adopt a new goal that allows criticizing that error? Is it possible to design a system of goals that necessarily gets stuck? Maybe it’s possible as an abstract logic puzzle, but I don’t think it’s the situation any human beings are in. I think people have sufficiently varied goals and ideas, and willingness to try new things, to enable a jump to universality for what progress they could make. I don’t think it’s a close call.
The question is a bit like asking whether you could design computer software with major flaws/bugs which is impossible to fix with only piecemeal modifications and refactorings. The only possible way to fix it is by throwing out all the code and starting over from scratch. Maybe there is some way to do that with careful (perverse) design, but it’s at least really atypical. I don’t think any actual software, which was made to solve a real world problem, is like that. Of course, some software is difficult to improve. It can be messy, chaotic, confused, tangled, etc. Similarly, some people are in a difficult situation, with a tangled mess of confused goals and ideas, and it’s hard but possible for them to make major progress.
If people were going to get stuck because their initial goals formed a sort of closed system that didn’t allow unbounded improvement, it should have happened long ago, very near the beginning. We’ve already shown tons of flexibility to develop more advanced ideas and goals, like modern science. Are most people limited thinkers with bad, unfixable goals and it’s just a few geniuses who aren’t stuck? No. Some people are in hard situations with lots of static memes. People may be dishonest, second-handed, disorganized social climbers. It can be a hard situation to get out of. But why would it be literally impossible? There are no signs of impossibility or refutations of possibility. Some people do improve, dramatically, from bad situations. That happens sometimes. If someone wanted to argue for an “infinitely stuck” type viewpoint, they should present a model that explains who is stuck, why, what specifically is getting them stuck, how to tell who is really stuck and who just appears stuck but isn’t, etc. There are no serious, developed theories like that. Meanwhile, there are reasons to think people aren’t stuck, like understanding universality, jumps to universality, bounded and unbounded systems, etc. (David Deutsch’s book The Beginning of Infinity discusses some of that.)
There aren’t hard and fast rules for what to think or do. Epistemology isn’t a step by step guide to your whole life. It has some structure, steps, options and guidance. It has some dos, don’ts and tips. But it leaves room to think, judge, etc. You can decide your goals, choose how to allocate your attention, etc. Maybe epistemology could become more clearcut in the future. For now, “don’t ever act on refuted ideas / known errors” – and accompanying ideas like IGCs with clear goals that we can evaluate in a binary way – is actually a big improvement on the status quo.