Yes or No Philosophy

A “binary” issue is one with only two answers, e.g. yes or no. Epistemology is fundamentally binary. E.g. you can accept an idea, or not. You can reject an idea, or not. You can decide a criticism refutes an idea, or not. You can decide an idea solves a problem, or not. You can act on an idea, or not. You can choose something, or not.

The idea of supporting arguments is a mistake. The idea of strong or weak arguments is a mistake.

People commonly find binary judgements difficult or scary. They want to hedge or equivocate. That only makes things worse. Either you accept and act on an idea, or you don’t, and there’s no point in being vague about which idea you’re choosing and why. (If you accept and act on a compromise, you have accepted and acted on a different idea – any combination of ideas is an idea of its own.)

Uncertainty

Being wishy-washy doesn’t help with uncertainty or ignorance. Instead, incorporate the state of your knowledge into your thinking. Consider: “Given that I don’t know X, Y or Z, what should I do right now?” You can answer this without needing to know X, Y or Z (and you can include every relevant thing you don’t know on that list). Meanwhile, an action which relies on X being true and an action which relies on X being false should currently both be rejected, because you don’t know which is the case.

Note that “I am uncertain of X” is itself an idea, which you should either accept or reject (make a binary judgement). You may be fallible, but saying you’re uncertain whether you’re uncertain won’t get you anywhere. You have to live your life using your best judgement, including judgement of what you do and don’t know, and judgement about how to be open to new information and to changing your mind as appropriate.

Deciding what you should do or believe, given that you don’t know X, is contextual knowledge. The context is that you don’t know X, and your answer is appropriate to that context, but not to all contexts (if you find out an answer to X, or to anything else relevant, then you have a context that’s new in a relevant way). All knowledge is contextual knowledge (as Ayn Rand said). All thinking you ever do happens in a situation that may later change intellectually as you get new pieces of evidence and learn new ideas. Your situation (context) also changes in other ways: e.g. you create ideas about how to budget on a $30,000 yearly income, and later your income rises, so you update your budgeting strategy for that new context.

Tentativity

You may learn new things in the future and change your mind. In the meantime, you must act on the best ideas you have. Saying that you’re uncertain whether you have a door, because you may be confused, isn’t a productive way to think. Your knowledge may be fallible, but you should still take it seriously. Do your best and be willing to reconsider as appropriate, but don’t lose confidence just because you’re not omniscient. (That point about omniscience is from Ayn Rand.) Your fallibility is not an argument that your door doesn’t exist, and it shouldn’t be used as a universal source of doubt, nor as a biased source of doubt that’s selectively applied in some cases.

Fallibility says you are capable of error. That doesn’t imply any particular idea is an error. Fallibility isn’t a criticism of any specific claim, other than a claim to infallibility. Fallibility can also be used for criticizing ideas which are inadequately concerned with error correction, e.g. criticizing academia’s approach to discussion for inadequate interest in criticism.

There are cases where you have appropriate doubts, e.g. you don’t know whether your friend, who is running late, is going to show up within the next 15 minutes. In this case, you should accept the idea that you don’t know when your friend will show up (the claims that he will and that he won’t show up in the next 15 minutes are both bad ideas). You may have some further ideas about it, e.g. that if he was stuck in traffic then he will be here in the next 15 minutes, but if he was in a car accident then he won’t be. You can guess how likely each is in many ways, including by imagining how frequently each scenario would happen if the day were replayed trillions of times. This kind of statistical guess is valid and reasonable, but it is not a core part of epistemology. A statistical guess is an idea which receives a binary evaluation: you can either accept it as your fallible guess about what’s going on (and reject all ideas that contradict it), accept it as one of multiple open possibilities, or reject it.

How do you know which statistical guess to make? You use rational thinking (guesses and criticism) to consider the issues, consider your background knowledge (like the frequency of car accidents, and how statistics works) and consider which statistical guess to accept. A statistical guess is an idea like any other, dealt with using the same methods. It has no special role in epistemology. Usually we aren’t able to assign good numbers to these guesses, and that’s fine because we only care about broad categories relevant to action: Should I wait for my friend? Should I limit myself to easy-to-interrupt activities while waiting? To make decisions like that, you don’t need to know a specific percentage chance that your friend will show up soon.
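
To make the “replay the day trillions of times” idea concrete, here is a minimal sketch in Python. It’s my own illustration, not something from the article: the scenarios and weights are hypothetical placeholders, not claims about real traffic or accident rates. The point is only that the statistical guess feeds a binary, action-level decision rather than requiring a precise percentage.

```python
import random

def replayed_day():
    """One imagined replay of the day: does the friend arrive within 15 minutes?"""
    scenario = random.choices(
        ["stuck_in_traffic", "car_accident", "something_else"],
        weights=[0.90, 0.01, 0.09],  # hypothetical weights, not real statistics
    )[0]
    return scenario == "stuck_in_traffic"

def should_wait(trials=100_000):
    arrivals = sum(replayed_day() for _ in range(trials))
    # Only the broad, action-relevant category matters, not an exact percentage.
    if arrivals / trials > 0.5:
        return "keep waiting, but stick to easy-to-interrupt activities"
    return "stop waiting"

print(should_wait())
```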

Consider this example: My friend forgets his wallet when we go out to lunch. Will he pay me back $10 tomorrow? I expect he will. He might die first, or move to another country and never speak to me again, but I don’t bother to attach any number to these rare events. I have no reason to expect them to happen, so I don’t worry about them. They don’t need to play a role in my decision making. Minor uncertainty like that (what if my friend dies of a heart attack in the next 12 hours?) is irrelevant to most problems which don’t involve that level of precision. Solutions should have some level of fault tolerance so you don’t need to worry about faults significantly smaller than the tolerances. (When something huge is at stake, like a billion dollars or the lives of the men on a space shuttle, then you need a more precise solution with more elements to mitigate tiny risks.)

Non-Refuted

We should accept and act on non-refuted ideas. There’s no higher or better status an idea can have. Positive justification is an unattainable myth.

Why should we choose non-refuted ideas? Because they have no known errors and the only alternatives are refuted ideas: ideas that do have known errors. An idea that we don’t see anything wrong with is preferable to one that we do see something wrong with. (Karl Popper has written some similar arguments.) And, more specifically, we act on ideas to achieve goals. It never makes sense to act on an idea, to try to achieve a goal, while believing it won’t achieve that goal (due to a criticism). Errors are reasons that ideas fail at purposes/goals.

What if we have multiple, competing, non-refuted ideas to solve a problem? Then it doesn’t matter which you use; they’re all fine. You may change problems to a more ambitious one if you like (by adding extra requirements to your goal, you can rule out some solutions and then act on one that gives you something extra), but you can also just proceed with any solution and move on to thinking about something else.

What if you have two non-refuted ideas that contradict each other, each claims the other won’t work, and you’re unsure how to proceed? Then since neither can address the matter satisfactorily (and thus guide you about what to do), they are both refuted. Both are inadequate to guide you in how to address this problem. If an idea cannot point out any error in a contradictory rival claim, then there’s no reason to accept it over that rival (besides bias) – the knowledge in the idea is inadequate to distinguish what’s correct and you should conclude that you don’t know. Then your options are to solve a less ambitious problem (e.g. given you don’t know how to resolve the conflict between those two ideas, what should you do?) or to brainstorm new solutions to this problem (e.g. try to come up with improved variant ideas).
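
As an illustration only (the article doesn’t present this in code), here is a minimal Python sketch of the binary evaluation described above: an idea is either refuted for the problem at hand or it isn’t, and two contradictory survivors that can’t criticize each other leave you with “don’t know” rather than a winner. The Idea class, the example plans and the contradicts test are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    name: str
    criticisms: list = field(default_factory=list)  # known, unanswered criticisms

    def refuted(self) -> bool:
        # Binary: any unanswered criticism refutes the idea for this problem.
        return bool(self.criticisms)

def evaluate(ideas, contradicts):
    """Return non-refuted ideas, or "don't know" if contradictory survivors remain."""
    survivors = [i for i in ideas if not i.refuted()]
    for a in survivors:
        for b in survivors:
            if a is not b and contradicts(a, b):
                # Neither survivor can point out an error in the other, so neither
                # is adequate to guide action; reduce ambition or brainstorm variants.
                return "don't know"
    return survivors

# Hypothetical usage: one plan has a known criticism, the other doesn't.
plan_a = Idea("take the highway")
plan_b = Idea("take the back roads", criticisms=["that road is closed today"])
print(evaluate([plan_a, plan_b], contradicts=lambda a, b: False))  # prints the one non-refuted idea
```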

Meta Levels

“Given X, now what?” is a repeatable formula. Each use takes you to a meta-level of the issue. A typical use in epistemology is: “Given we don’t know the answer to the conflict between ideas A and B, what should we do or think?” The statement can also include some limited, inconclusive information you do know about the conflict, like, “Given we don’t know the answer to the conflict between ideas A and B, but we do know C and D, what should we do or think?”

You can take a difficult, open issue and treat it as part of the context (by accepting that its solution is unknown) and then ask what to do in that case. This simplifies things and makes your problem solving less ambitious. You can and should reduce your ambition as needed to enable you to succeed instead of fail. Solving what you can, and acting within the limits of your knowledge, skill and resources, is the best you can do. Being overly ambitious and failing won’t help anything. By asking progressively less ambitious questions as you get stuck or come under resource pressure (like running low on time), you can always find a solution: a way to act that you have no criticism of and are therefore satisfied with.
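
Here is a minimal sketch, again my own illustration in Python, of the “ask progressively less ambitious questions” loop. The problems, candidate answers and criticism function are hypothetical placeholders.

```python
def solve(problems_by_ambition, criticize):
    """Try problems from most to least ambitious; return the first non-refuted answer.

    criticize(answer) returns a criticism (string) or None if no criticism is known.
    """
    for problem in problems_by_ambition:
        for answer in problem["candidates"]:
            if criticize(answer) is None:
                return problem["statement"], answer
        # No non-refuted answer here: step down to a less ambitious, more
        # contextual problem ("Given X is unsolved, now what?").
    return None

# Hypothetical example: the ambitious problem can't be answered today,
# so the loop falls back to a contextual, less ambitious one.
problems = [
    {"statement": "Resolve the conflict between ideas A and B",
     "candidates": ["A", "B"]},
    {"statement": "Given I can't resolve A vs. B right now, what should I do?",
     "candidates": ["act in a way that works whether A or B turns out to be true"]},
]

def criticize(answer):
    return "contradicted by a non-refuted rival" if answer in ("A", "B") else None

print(solve(problems, criticize))
```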

This method allows people to always find win/win solutions (ideas about what to do that they have no criticism of) for how to act in their lives, and to do it within their time constraints (and other resource constraints). And it still works when other people are involved, as long as they are rational, know the method, and want a solution.

Note: I’ve just given a summary here, not a full explanation of the method. There’s more information in my common preference finding article and the articles linked from it. The articles often present explanations in terms of a conflict between two people, but the same methods apply to a conflict between two ideas within one person, or to a conflict involving more than two people.

All Criticisms Are Decisive

Either an idea does or doesn’t solve a problem (equivalently: accomplish its purpose). People miss this because they state problems vaguely, without clear criteria for what is and isn’t a solution. With improved problem statements, you’ll find that all criticisms are decisive or do nothing (there’s no in-between). A criticism either explains why an idea won’t achieve the success criteria it’s supposed to (so don’t use the idea, unless you refute the criticism), or it doesn’t explain that.

When you act, you pick an idea to accept and you reject the alternatives. Life involves binary choice. Your thinking should mirror this. Hedging won’t get you anywhere because you still have to act on some ideas and not others. When you act, you have some kind of plan, strategy or idea behind the action. If you have multiple ideas, then either they fit together as one big idea, one overall plan, or not (meaning you’re trying to act on contradictory ideas at the same time and will fail).

Confusion about this is common because of compromise ideas. What if there are two extreme ideas and you find a middle ground? Then you rejected both extreme ideas and accepted a third idea, which is a new and different idea (even though it shares some pieces with the rejected ideas). So, again, when you act on a compromise idea, you accept one idea about how to act and reject all the others. If the accepted idea is a complex, multi-part idea which contains some good aspects of rejected ideas, that doesn’t prevent it from being a single idea in its own right that you’re accepting and acting on, while the other versions of it and rivals are all rejected. For a given issue, you always have to pick something you accept and reject everything else.

If you accept three ideas about an issue (X, Y, Z), then you’re accepting the single idea “X, Y and Z”. Viewing them as separate issues is useful for some purposes. You can analyze them individually, or use just one of them in a new grouping (like “W and Z”). But they can also be viewed as one group – a single bigger idea – which is what you’re choosing, accepting or acting on.

Contextual Criticism

Criticism is contextual. A criticism points out why an idea fails to solve a problem (accomplish a purpose) or fails to solve a category of problems. The same idea might work in some other context – it could still potentially solve some other problem. Some criticisms are very specific (they say why the idea doesn’t work for this exact situation) and some point out broader problems (why it doesn’t work in lots of situations).

For example, if you’re writing something and I point out a typo then I’ve criticized the use of that writing in all the situations where you wouldn’t want a typo, but it’s still OK for more informal purposes. (And, of course, you can fix the typo and create a new and improved piece of writing. The edited version is a new potential solution despite being very similar to a previous idea. Small changes to ideas sometimes don’t matter much, but they can matter a lot when they’re specifically designed to fix a mistake.)

Whether a criticism matters depends on what your goal is. A criticism says why something won’t work in some way, but that may not matter to your goal. If you’re writing an academic paper, you should fix typos. But if you’re casually texting with someone, you don’t need to present as a flawless typist. Socially, perfect typing can be a negative that makes it look like you’re trying too hard.

Burden of Proof

Saying “prove it” is not a criticism. Demanding someone meet a “burden of proof” is not a criticism. A criticism is an explanation of why an idea doesn’t solve a problem (or category of problems). If you need a generic criticism of a vague idea, try: “Unless you say what a purpose of the idea is, and explain how it will succeed at that purpose, it’s not suitable for any purpose.” On the other hand, if the idea is clear and reasonable, but you’re uncertain because you don’t know how to prove it’ll definitely work, then you should look for reasons it’s false. If you can’t see anything at all wrong with it besides the lack of proof, then you shouldn’t reject it. This may be difficult at first if you aren’t very good at finding flaws in ideas and asking demanding questions about them, but it’s a skill worth developing.

While developing the skill of formulating issues as criticisms in words, I don’t think you should ignore common sense, tradition, convention or your intuitions. Ignoring those things is risky even if you don’t have inexplicit doubts. And remember that “I have inexplicit doubts” is a sentence you can say. A good solution will address that problem instead of ignoring it.

New Ideas

If you make a bunch of decisive criticisms, won’t you run out of ideas? No. If you make a small change to an idea, that’s a new, different idea. The old idea was refuted, but the new one may not be. You need to check if the criticism which refuted the old idea also refutes the new idea, or not.

Some criticisms rule out big categories of ideas so that many variant ideas will also be refuted. Those are the best and most valuable criticisms in general. Some criticisms only rule out a specific idea but wouldn’t apply to a similar idea. How do you know which type of criticism you have? You try to create a new idea which keeps the main points of the old idea but is no longer refuted. If you can’t do that, it shows the criticism was powerful and broad. If you can do it, then you have a new idea which you don’t know a refutation of, so that’s good. If you want, you can try to revise the criticism to be broader.
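
For illustration (again, a sketch of my own, not the article’s), here is one way to model the variant test in Python: take a refuted idea, generate variants that keep its main point, and check which of them the original criticism still reaches. The criticism and the variants are hypothetical.

```python
def criticism_applies(idea):
    # Hypothetical criticism: any plan that relies on the closed road fails.
    return "closed road" in idea

def variants(idea):
    # Small changes that try to keep the main point of the refuted idea.
    return [idea.replace("closed road", "detour"), idea + ", but leave an hour earlier"]

refuted_idea = "drive to town via the closed road"
survivors = [v for v in variants(refuted_idea) if not criticism_applies(v)]
print(survivors)
# If survivors is empty, the criticism is broad; if not, you have a new,
# currently non-refuted idea to consider.
```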

Conclusion

Clear, rational thinking involves decisive, binary judgements of ideas, not arguing for and evaluating ideas in terms of degrees or amounts of goodness.

When one idea is superior to another, that’s for a specific reason: it succeeds at a purpose that the other fails at. That’s the proper way to judge ideas. When you choose one idea over another, you should be able to say why in terms of your chosen idea being non-refuted for a purpose that the other idea is refuted for. Choose an idea according to a goal you have which it works for and which the alternative fails at. Just vaguely saying it seems better overall, or that it got more points in a scoring system, is bad reasoning. (It’s also fine to choose one idea over another just because you have to pick one, without thinking it’s better. You can think two ideas are tied and then choose one randomly or by intuition.)

A rational argument is a criticism which explains why an idea fails at a goal/purpose. It’s decisive, which means it contradicts the thing it criticizes, which means you can’t accept both the criticism and the criticized idea. You must take sides. If the criticism is right, the idea is wrong. If the idea is right, the criticism is wrong. They could both be wrong, but they can’t both be right. (You can also sometimes find a flaw in your background knowledge that means they didn’t contradict after all.) Arguments which don’t contradict anything – which merely say to raise or lower your opinion of something – are bad arguments.

Although it’s widespread, viewing arguments as strong, weak or medium strength is an error. And evaluating how good an idea is, rather than whether it succeeds or fails at one or more goals/purposes, is also an error.


Some of the ideas in this article were inspired by the philosophical writings of Karl Popper and Ayn Rand. My main point about all good arguments being decisive, and judging ideas as refuted or non-refuted rather than by how good they are (an amount or degree of goodness), is intended to be an improvement on Popper and Rand.

Learn more about Yes or No Philosophy.