Uncertainty and Binary Epistemology
The way to correct errors involves looking for errors – for causes of failure – and trying to fix them. Fixing means changing from failure to success. It does not involve increasing the goodness of factors. Most factors had no error anyway, so an increase won’t change an error to a non-error. Generally when something doesn’t work (at a goal we have), only a few factors are causing it to not work, so “improvements” to other factors are actually irrelevant (to our current, specific goal).
Besides correcting an error to make something work, you can also use an alternative. That deals with the error by avoiding it instead of fixing it. Often, doing something else is easier than trying to fix a bad idea.
Those are a few ideas from Critical Fallibilism (CF). People wonder how to reconcile them with the concept of uncertainty.
CF talks about things like solutions and errors. It talks about fixing ideas (including plans for action) to get from failure to success. But what if you aren’t sure what works or succeeds? How can an epistemology focused on decisive, binary judgments (like “error or not error” or “success or failure”) handle uncertainty?
What if an idea seems pretty good, or pretty likely to succeed? What if there’s a refutation of an idea, but also several arguments against that refutation, and arguments against those arguments, and even more layers of debate beyond that, and your overall conclusion is that you don’t know which arguments are right? Often you do have an opinion, based on your partial understanding, which is a lot better than a random or arbitrary guess, but which isn’t a clear, decisive conclusion. And what if a plan involves rolling dice, literally or metaphorically, so it only has a statistical probability of success?
There’s a conflict between mental models. Do you look for and fix errors? Or do you take ideas with some level of goodness and raise that level? Should your confidence/certainty be based on amount of goodness or on something else like breakpoints? Should epistemology focus on error correction or on increasing positives and reducing negatives? Should we primarily try to deal with discrete issues or try to optimize quantities?
Uncertainty, Degrees and Statistics
We have lots of uncertainty in life. Using degree judgments instead of binary judgments isn’t a solution to uncertainty. Your goals should sometimes have confidence/certainty requirements. E.g. I want to have a 93% chance to succeed at X. If something involves a gamble like dice rolls, then your goal should specify an acceptable chance of getting the desired outcome.
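To make that concrete, here’s a minimal sketch in Python (the dice gamble and the data layout are my own invention; the 93% requirement comes from the example above) of how a statistical chance can feed into a binary judgment: the plan either meets the chance requirement stated in the goal or it doesn’t.

```python
import random

# Hypothetical gamble: the plan succeeds if two dice total 5 or more.
# The goal itself states the acceptable chance (93%, as in the text above).
def estimate_chance(trials=100_000):
    wins = sum(1 for _ in range(trials)
               if random.randint(1, 6) + random.randint(1, 6) >= 5)
    return wins / trials

required_chance = 0.93            # part of the goal, not a score for the plan
estimated = estimate_chance()     # about 0.83 for this particular gamble
meets_goal = estimated >= required_chance   # the final judgment is binary
print(f"estimated chance: {estimated:.2f}, meets goal: {meets_goal}")
```

Here the plan fails the goal (roughly an 83% chance against a 93% requirement). Probability math gets used along the way, but the evaluation of the plan against the goal is still pass/fail.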
Most goals don’t talk about probability or anything similar, so it’s implied that you want to follow a plan that you have a reasonable, rational belief will work – not that you want an absolute guarantee. That means acting on plans with no known (decisive) errors – acting on plans where you don’t already know (to the best of your knowledge) that they will fail. It also means making a reasonable effort to find out about potential errors, rather than not knowing about any errors due to your ignorance.
If we judge some factor by degrees, say size (where more is better), we can also attach an uncertainty. E.g., with this plan, size will increase from 20 to at least 23, and I’m 90% confident of that. The size value and the uncertainty are different numbers. There’s an amount of the factor itself and also, separately, an amount of confidence. You could combine a binary judgment about size with a degree of uncertainty. You could say the plan will increase the size to be big enough for success, and you’re 90% confident that will work. If you want to incorporate confidence judgments into your thinking, they are a separate matter from how you evaluate factors themselves (by considering whether they pass breakpoints relevant to goals, or alternatively by weighting a bunch of factors to get a combined goodness score).
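Here’s one way that separation could be represented (a sketch with hypothetical numbers; the breakpoint of 22 and the data layout are my own illustration, not CF terminology):

```python
def evaluate_size(predicted_size, breakpoint, confidence):
    # Binary judgment of the factor: does it pass the breakpoint for the goal?
    passes = predicted_size >= breakpoint
    # The confidence is a separate judgment about the prediction,
    # not a goodness score for the factor itself.
    return {"passes_breakpoint": passes, "confidence": confidence}

# "size will increase from 20 to at least 23, and I'm 90% confident of that"
print(evaluate_size(predicted_size=23, breakpoint=22, confidence=0.90))
# -> {'passes_breakpoint': True, 'confidence': 0.9}
```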
This is like how you can measure something and say it’s 3 inches plus or minus 1 inch. The 1 inch error bar is the range you’re 95% confident the true value is within. The number 95% is assumed (implied as a default in our culture), but another number could be stated. 95% corresponds to roughly a two standard deviation range.
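As a rough worked example (using the standard 1.96 factor for a 95% interval under a normal distribution, which is where the “two standard deviations” rule of thumb comes from):

$$3\,\text{in} \pm 1\,\text{in} \;\approx\; \bar{x} \pm 1.96\,\sigma \quad\Rightarrow\quad \sigma \approx \frac{1\,\text{in}}{1.96} \approx 0.5\,\text{in}$$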
Uncertainty shouldn’t typically be dealt with by degree numbers either. What is the breakpoint for being confident or certain enough? What sort of certainty does your goal require? This is often better addressed with explanations and English reasoning, not math. And how do you figure out that you have specifically 90% certainty that your plan will work? Why not 80%? How do you measure? (There is no good way to determine confidence/certainty quantities.)
Outside of statistical scenarios, certainty numbers (also called “credences”) are basically made up. Instead, consider whether you know of any non-refuted criticism of an idea, or not. If you do, address the criticism instead of ignoring it. Saying you’re 90% confident (rather than 100%), because there’s a criticism, is a way of not thinking about that criticism and figuring out how to deal with the problem the criticism raises. Expressing lower confidence doesn’t do anything to address the problem raised by the criticism. It also doesn’t figure out that the criticism is mistaken and there actually is no problem. When dealing with ideas, using confidence numbers is commonly a way of not thinking things through.
Confidence amounts work better for some specific areas like measurement because measurement involves random variance that falls on a bell curve, and numbers are good for calculating and expressing things like the mean and standard deviation of a known statistical distribution. Confidence numbers can express how my uncertainty goes down if I get to measure something twenty times instead of just two times.
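A sketch of that point, assuming roughly normal measurement noise (the readings below are invented): the uncertainty of the average shrinks about like 1/√n, so twenty measurements give a noticeably tighter 95% interval than two.

```python
import statistics

def interval_95(measurements):
    # Normal approximation: mean plus/minus 1.96 standard errors of the mean.
    mean = statistics.mean(measurements)
    sem = statistics.stdev(measurements) / len(measurements) ** 0.5
    return mean - 1.96 * sem, mean + 1.96 * sem

two = [2.9, 3.2]                                    # invented inch readings
twenty = [3.0, 2.9, 3.1, 3.0, 2.8, 3.2, 3.0, 3.1, 2.9, 3.0,
          3.1, 2.9, 3.0, 3.2, 2.8, 3.0, 3.1, 2.9, 3.0, 3.0]
print(interval_95(two))      # wide interval
print(interval_95(twenty))   # much narrower interval
```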
So this stuff depends somewhat on your field. If you’re working on statistics or something where statistics directly applies (like a measurement tool) then using degree numbers for some things is good, but not the complete story. How certain are you that your statistical methods are correct? How certain are you about the foundations of mathematics? What is your certainty regarding the general theory of measurement itself? Those are not questions that can be productively addressed with a number like 99.8%. Instead, you should consider alternative views, criticisms, etc., and approach it like a regular debate.
CF’s important claim is that degree numbers are the wrong tool for epistemology – for evaluating ideas. And in the world today, overall, in general, degree numbers are overrated. But they’re certainly appropriate for many things like measurement and probability. The attempt to apply probability (of physical events) to probability of ideas (being true) is a mistake. Probability is part of physics and math, not philosophy.
It’s only probability in epistemology that CF objects to. Probability and confidence amounts are a good tool for dealing with dice rolls, gambling, random fluctuations, studies with random samples, measurement uncertainty or demographic data (e.g. black people being stopped by cops at higher rates, or wealthy people’s children being statistically more likely to get into prestigious universities). But they’re the wrong tool for evaluating ideas, debating ideas, reaching conclusions about which ideas to accept or act on, choosing between ideas, and making intellectual progress. Critical thinking should use criticisms – explanations of error – and responses. That type of reasoning doesn’t correspond to probabilities, degrees of belief, or amounts of goodness.
Regress
Can you get an infinite regress by looking at the uncertainty of your uncertainty of your uncertainty? If you do it all with numbers, and you reject infallibilism, then yes. Even if you’re an infallibilist and say the uncertainty is literally, exactly 0%, you could be asked the uncertainty of that claim (again 0%), and be asked again infinitely many more times.
This problem applies specifically to using uncertainties for ideas. If you are 80% certain of an idea, then that is itself another idea about the first idea. If ideas should be assigned (un)certainties, then the meta-idea (the judgment of the certainty of the first idea) should itself have a certainty. And when you make that evaluation, you have created a new idea, which should itself have an uncertainty. And so on, infinitely.
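A toy illustration of the regress (my own, not CF’s): if the policy is that every idea gets a certainty, then the certainty judgment is itself a new idea, which needs its own certainty judgment, and so on without end.

```python
def assign_certainty(idea, depth=0, max_depth=4):
    # The judgment about the idea is itself a new idea.
    judgment = f"'{idea}' has certainty 80%"
    print("  " * depth + judgment)
    # In principle there is no stopping point; max_depth just cuts the demo off.
    if depth < max_depth:
        assign_certainty(judgment, depth + 1, max_depth)

assign_certainty("this plan will work")
```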
The same problem does not occur when applying uncertainties or probabilities to something other than ideas. Suppose you think job applicants should be assigned probabilities saying how confident you are that they’ll perform well at the job. (I think a critical discussion using decisive arguments would be a better method.) So you decide Joe is 60% likely to be a good hire, or in other words you have 60% certainty that you should hire Joe. There’s no regress here. The idea of applying certainties to job candidates doesn’t imply that you should also apply certainties to your evaluations of job candidates. It’s only when your policy is to apply certainties to all ideas that each new judgment requires a new judgment which requires a new judgment and so on – because certainties are themselves ideas.
The way out of the regress, when evaluating ideas, is to stop using real numbers, amounts, degrees, quantities, etc. Suppose you evaluate something as appropriate for belief and action using some philosophical methods like critical discussion and having zero decisive refutations. That is not a degree of confidence which would have its own degree of confidence which would have its own degree of confidence. It’s a judgment that’s open to criticism. If you can find an issue that wasn’t taken into account, speak up. If no one can (who is willing to say so), then there is no (known) problem with proceeding. We don’t know how likely we are to be wrong. We don’t know what the uncertainty is as a quantity. We can only quantify uncertainties in special cases (like dice rolls and other issues where statistics apply). What we know is we don’t have a better option than an idea with no known refutations (explanations of errors). It’s the best we can do (right now).
In general, we should act when we have a criticism of deliberating more but no criticism of proceeding. Act when you have exactly one non-refuted option.
Action
Ultimately, action or belief acceptance (which are essentially the same thing), like anything, must be decided by considering whether it’s refuted or non-refuted. To decide that action is non-refuted, you must decide that non-action (continuing to research, debate, etc.) is refuted. If action and non-action are both non-refuted, then you have an inconsistent state. Neither idea was able to refute an open alternative, so neither can rightly claim to be correct. The rational view is that you don’t know the answer between the non-refuted alternatives; it’s not rational to arbitrarily favor one of them. In that case, you should instead conclude that you don’t know.
If two non-contradictory ideas are non-refuted, you can reach a conclusion: they’re both fine. But when two ideas contradict each other (in a relevant way) and aren’t refuted, then neither is adequate for you to figure out what to do. At least one of them must be wrong since they contradict, and you don’t know which one(s) are wrong.
In an unresolved contradiction, each idea lacks adequate knowledge and reasoning to say why it should be used over some alternative. If you can’t rule out X or Y (but you do rule out all other known alternatives), then your conclusion should be “X or Y”. The conclusions “X” and “Y” are both wrong and unreasonable. The “X” and “Y” ideas should both be rejected. To conclude “X” would be biased or arbitrary when there is no refutation of Y; and if Y is correct, then X is an error. In the future, if you come up with an unanswered refutation of Y, then you can conclude X2, which consists of the original X plus the new argument (you still wouldn’t conclude that X is right; what’s right is a new idea similar to X but better).
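A minimal sketch of the decision logic from the last two sections (hypothetical option names and data layout, not CF’s notation): options are judged refuted or non-refuted, acting requires exactly one non-refuted option, and multiple contradictory non-refuted options yield a disjunctive “X or Y” conclusion rather than a winner.

```python
def conclude(options):
    """options: dict mapping each option to its list of non-refuted criticisms."""
    non_refuted = [name for name, criticisms in options.items() if not criticisms]
    if len(non_refuted) == 1:
        return f"act on: {non_refuted[0]}"          # exactly one survivor
    if not non_refuted:
        return "no acceptable option known; rethink the problem or the goal"
    # Contradictory options, none refuted: the honest conclusion is a disjunction.
    return "undecided: " + " or ".join(non_refuted)

print(conclude({"X": [], "Y": []}))                          # -> undecided: X or Y
print(conclude({"X": [], "Y": ["unanswered refutation"]}))   # -> act on: X
```

Per the paragraph above, the conclusion in the second case is better thought of as X2 (the original X plus the new argument against Y); the sketch only shows which option survives criticism.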
Whenever you do act, you should know some reason that acting now is better than delaying action further (since those options contradict each other). You should have some idea of how much thought is appropriate to put into the issue you’re facing. The way to address uncertainties in the general case is to accept your fallibility and do your best with the resources (including knowledge) available to you. You can use statistics where appropriate but your choices always depend on some non-statistical, non-numerical judgment about whether it’s a good use of time and effort to ponder more or it’s better to act/conclude now.
You need policies for when to act/decide/conclude, when to be done with an intellectual issue. You do this all the time so the policies need to be simple and reusable (at least for most cases). And they need to themselves be open to criticism. Does that create a regress? No. Someone could propose an unbounded number of criticisms. But we don’t have to answer every criticism that could be proposed. To do our best, at most we need to answer every criticism anyone actually thinks of and does propose.
Even if someone is acting in bad faith, we can (worst case scenario) address all the criticisms they can think of and then be done even though there are many more logically possible criticisms. Can’t a person think of infinitely many criticisms? Yes, easily, by using patterns. But we can respond to a whole pattern or category at once. What makes it easy for him to come up with a long list of criticisms – their similarity – also lets us answer them as one list instead of as individuals. We can give a rebuttal based on what’s similar over the whole list instead of basing our comments on the individual differences, so then our answer applies to all of them. And if the person keeps making dumb infinitely long lists, we can meta-criticize his methodology.