Error Correction Policies Are Hard

Not everyone should have a public debate policy now.


Public, transparent error correction policies (a.k.a. rationality policies, paths forward policies or debate policies), involving being open to debate and critical discussion, are hard. The basic point of them is that if someone knows you're mistaken about something and is willing to tell you, you shouldn't ignore them and stay wrong (as people often do).

This article assumes prior familiarity with error correction and debate policies. For additional context beyond the above links, you may want to read My Experience with My Debate Policy, Paths Forward to Correct Errors, Using Intellectual Processes to Combat Bias, and some of the articles they link to.

You should expect that you're wrong about a lot of issues. People are fallible and flawed, and human knowledge in general could advance much further.

You should expect that some people could point out many of your errors if you got enough attention and they wanted to tell you. There are presumably some people out there who are smarter than you, who've read more books, who've got more education than you, and so on. And there are a lot of people who know a ton about one thing. Even some people who know much less than you overall will be able to correct you about something because they'll pick the one topic they're really good at to bring up. Even if you're right more than them overall, they will selectively bring up what they're best at and most confident about. And that's the expected, desired outcome, not some sort of unfair trick! And it can happen repeatedly. You'll get corrected about A by someone who has spent the last 20 years reading all the books about A, and then you'll get corrected about B by a professor who specializes in B, and then you'll get corrected about C by one of the few people in the world with work experience doing C, and then you'll get corrected about D by someone who has access to millions of dollars of laboratory equipment (or super computers) and uses it to prove you wrong.

People have leaked classified military secrets on internet forums to win debates. Your main protection against losing a lot of debates is being unpopular. If no one cares much, you can put up an error correction policy without receiving many corrections. If millions of people care a lot, and you make a lot of claims and are open to correction, then you will receive a lot of corrections where they're right and you're wrong. (You may also receive a lot more corrections where the person trying to correct you is mistaken.)

You need to like corrections or you shouldn't put up a public, transparent error correction policy. You shouldn't rely on being unpopular. That's basically bluffing: pretending you want something that you don't really want but don't expect to get. You'd be asking for corrections but secretly hoping to be mostly ignored.

I want to lose a debate. I want to find out, right now, something important that I'm wrong about, and change my mind for the better. I'd prefer this shortcut over maybe figuring it out on my own after a lot of time and effort. I'd rather know now than in 10 years. I have plenty of things to figure out on my own, and I'll never get to all of them, so I would like help. Some people don't want to have a debate policy because they don't want to lose debates. The main concern with a debate policy should be that you'll win your debates, spending time on them without getting the chance to lose and be corrected.

You need to see corrections as helpful not embarrassing. You need to find corrections exciting not draining. You need to actually value corrections, handle them well, and not be inclined to shoot the messenger. You should have a positive attitude towards people who correct you and treat them well so they feel rewarded not punished for correcting you. Otherwise this kind of policy isn't for you. If you're going to have negative emotions about people who successfully correct you, please don't post a policy or you'll be mistreating those people and basically betraying people who tried to help you.

You also need to avoid being overwhelmed by corrections. Don't put up this kind of policy as a beginner. First, learn most of what humanity already knows about your field, then put up a policy when corrections and new ideas are a lot harder to come by and more valuable to you. If you could easily learn something new just by opening any book about the topic, why ask the public for help? (If you've read some books, it's fine to not know everything that could be found in at least one obscure book. You should make reasonable efforts to search for information. It's impossibly hard to search every book or paper and find every useful, relevant idea that's been published.)

If you're basically going to say "I think I'm right and I'd really like to know if anyone has a correction and will debate me," then you should be an expert at your field. If you're a beginner, you should learn at your own pace, not at the pace that people contact you with corrections. By contrast, if you're an expert who is having a hard time learning more, because you believe you have to pioneer new ideas to make more progress (inventing new ideas can be slow and unreliable), then corrections and new ideas that you were unaware of would be really valuable and would significantly speed up your knowledge creation progress.

Who Needs Rational Debate Policies?

This particular type of rationality policy, where you're open to public debate and error correction, makes the most sense for experts who already share ideas in public. It's suitable for public intellectuals who think they have important ideas that they want to spread. It's suitable for people who have had debates and want more debates. It's irrational to try to spread ideas if you aren't open to debate and error correction. You shouldn't be a thought leader, who influences what other people believe, while ignoring criticism of your errors, because then you'll teach avoidable errors to many people. You shouldn't be a scientist in a top lab, privileged to use expensive equipment, who ignores criticism, because then you'll use the equipment on the wrong experiments, which is really wasteful and it would have been better if you had been open to criticism.

If you're a school teacher or professor and you teach standard, mainstream ideas out of a textbook, it'd be nice if you were open to error correction, but it's not that crucial. If you're a professor who develops new, original ideas and teaches those to students, it's really important that you be open to error correction because it's bad to teach unique, personal, idiosyncratic, non-mainstream, non-standard errors to students. It's reasonably understandable for popular errors to spread that our society considers standard knowledge, but if you're spreading errors that not many people believe, and you won't listen to criticism, then you're doing something bad.

The more you're trying to be an original or important thinker, and share ideas with others, the more you should be open to debate and error correction. I'm a great example of someone who should absolutely be open to debate given the essays I write which attempt to publicly share and teach my original, controversial ideas. I also claim my work is important and competitive with prestigious experts – I claim it's not amateur stuff – which is another reason I should be open to criticism.

A typical example of someone who should be open to error correction is Richard Dawkins. He's written books aimed at intelligent laymen not just other experts. He claims to have a lot of expertise. He claims to have some originality. He claims his ideas are very important. He makes controversial claims. He claims to know that a lot of other people are wrong about some of their beliefs. He's critical of others. He tries to get people to listen to him. He tries to be a thought leader: someone who takes a leadership role in thinking and gets others to follow along and learn from him.

There are many people where I think it'd be wonderful if they had an error correction policy enabling public debate and criticism. Dawkins is a great example of someone where it's such a perfect fit that not having one is actually problematic. He's actually such a great fit for that kind of policy that not inventing the concept of such a policy himself is questionable, given how smart he says he is and the problem situation he's in. Did he try to figure out how to be open to error corrections? I wouldn't make a very big deal out of any one individual not inventing ideas about rationality policies, but if you look at thousands of people like Dawkins, then I think none of them inventing it is really bad and indicates that many of them aren't as smart, wise and rational as they claim. A few of them could be amazing and not invent rationality policies, but if they were all amazing, or even half of them were amazing, then I think someone besides me would have already invented a debate policy or some other sort of solution to the same problem/need/goal.

If you get a lot of attention, and have a lot of resources, and you really want to be open to debate and receive helpful corrections, it makes sense to try to figure something out. Or if you think all the smart people are able to get past gatekeepers to contact you, and the general public is stupid and has no value to offer you, then maybe you shouldn't be making books, podcasts or essays aimed at the general public.

Handling Corrections

How hard is having a policy where you're open to error correction from the public? You'll need a lot of expertise at every field you talk about or you'll be flooded with corrections (if you get attention). Even experts should expect a lot of corrections if they get a lot of attention, so non-experts should expect the corrections to be overwhelming and should go learn to be an expert first (and they can have some other, more limited sort of policy in the meantime that doesn't promise open debate to a conclusion with anyone with a correction).

To avoid being overwhelmed by corrections, you'll need good organization and good reuse of ideas. And if you're really popular, you'll probably also need helpers, assistants or proxies, rather than handling everything personally.

You may also want some barriers to entry like requiring people read your book before contacting you or having a $100 price for submitting corrections. I think $100 is a reasonable amount for a famous intellectual that still leaves them decently open to corrections. It's an amount that's possible for a smart person who works a minimum wage job in the U.S. If anyone thinks some smart people who can't afford it are being ignored, they could pay the $100 for someone else who they think has something good to bring up. There are a lot of people who could pay $100 for someone else if they wanted to. There are downsides to paywalling access, like it makes it harder to get attention and will discourage people from sending you corrections, but it could really reduce the low quality correction attempts. It'd be reasonable to refund the $100 when you agree with a correction, or even to pay an error correction bounty prize, however that kind of policy could also lead to complaints from people who think they deserve a refund or prize that you didn't give them, so there's a potential hassle there.

To deal with many people trying to correct your errors, you may also need a debating style that is good at short, decisive wins. If you have rambling, friendly conversations that people enjoy, it'll encourage more corrections, including unserious ones. If you win debates in clear, fast ways, it'll discourage unserious people from wanting to talk with you. Another debating style aspect that can save a lot of time, and discourage most people from wanting to talk with you just to chat with a smart or prestigious person, is meta discussion. Examples of meta discussion that many people dislike include asking people what they did to reach the conclusion they hold and why they think that was adequate or asking if their claim is their original research or if someone else already wrote it down well. Note that any debate style that keeps some people away may also keep away some valuable corrections; there are tradeoffs.

Philosophical Expertise

To have a rational error correction policy, you'll want expertise at every field you talk about or which is important to your claims. That basically means you have to be an expert at some generic, hard-to-avoid topics like rationality, logic, critical thinking, epistemology, discussion methodology, and some other parts of philosophy.

A scientist who knows nothing about philosophy will have a hard time because there's no reasonable way to avoid discussing rationality. Someone can say "You think A because B. Whether you're correct depends on what is a good argument. I think you're making a mistake about how you evaluate arguments." That's a philosophical issue. Scientific expertise won't tell you how to address that potential correction. You could try to argue that such a discussion is unnecessary, but that is itself a philosophy topic, not a science topic, plus I think you'd be wrong and struggle to even make a plausible case.

Similarly, someone could bring up that you claim to know idea A, where A is something in your field, e.g. chemistry. Whether you're correct depends on what knowledge is and what the proper methods of acquiring and evaluating knowledge are. So you can be drawn into a discussion of epistemology even though you're a chemist.

You basically need expertise at all prerequisites for your field and your claims. And when there are multiple ways to reach a conclusion, using different prerequisites, it's generally good to be familiar with many ways, including all the standard, well-known ways. You should know every line of thinking about your topic that doesn't involve something really obscure or weird, plus more.

Few people have enough expertise for this. I consider this a huge practical problem for the world. Most "experts" lack the expertise to evaluate a lot of the premises they're basing their work on. Most experts are betting the productiveness of their careers on other people's work whose quality they don't know how to evaluate, and they often don't even know who those other people are.

A potential solution to not being an expert at the prerequisites and premises of your field is outsourcing or delegating. If a scientist uses a philosophical premise, and a critic questions it, then a philosopher could handle the debate from there. The scientist could pass the discussion on to the right kind of expert and let him defend the premise. This could work in theory but in practice it's generally not a good system today.

One of the reasons it doesn't work today is most philosophers are both bad at philosophy and closed to debate. What sometimes happens in practice is a scientist says "go debate a philosopher for that issue" and you say "which one?" and they won't answer and won't take any responsibility for the problem that they can't actually find any philosopher willing to defend their premise in debate. Or they might find a philosopher who does a bad job, and loses the debate (or quits to avoid losing), but then the scientist is unwilling to say "OK, well, that guy failed, so I need to find someone better or handle this myself." If you delegate philosophy debate to someone else, you should monitor it and make sure it's handled well. This is like how a manager in a business should monitor his employees and make sure they do a good job, and the manager should get things fixed when there are problems, e.g. by training the employee better, assigning someone else to the task, hiring someone better, or doing the task right himself.

Learners

Students, beginners, people with little public writing or videos, and people who don't think they have much expertise shouldn't try to emulate my rational debate policy. They shouldn't try to be open to unbounded Paths Forward and any error corrections from anyone. These types of policies were designed for expert public intellectuals and highly skilled thinkers. They're suitable primarily for people who are involved in making progress in fields, not for people who are trying to catch up to existing knowledge.

What sort of policy is reasonable for beginners? One with clear limitations and boundaries. For example, you might say you'll spend up to an hour a month considering your potential biases (without necessarily replying) if people point out ways they think you're biased. Many people don't want to be told about their biases, so this policy could make a difference to what feedback you get even though it doesn't offer debate or transparency, just because it communicates that you're interested in improving regarding biases you have. But, for a beginner, simply saying you'd like it if anyone could point out any of your biases would probably work as well or better than having a policy with no hard guarantees (and you shouldn't be offering any significant hard guarantees as a beginner).

Many policies suitable for beginners, like saying you'll sometimes reply to criticism but you offer no guarantees, are kind of pointless and don't really need to be policies: that's the same thing people would expect if you had no policy. If you want people to know that you like criticism more than most people, just say that; no policy is needed.

Discouraging Inappropriate Policies

I've discussed some ways rational error correction and debate policies are hard partly because I want to discourage people from putting up rational debate policies that don't make sense for them. If there's much risk they'll break their word and not follow their policy, they shouldn't post the policy. Breaking your word about your policy is a bad experience that I don't want people to go through. And that kind of behavior harms the reputation of rationality policies and gets people to distrust them. Making changes to policies frequently, or changing your mind and taking them down, also helps defeat the purpose of the policies and teaches the public not to trust them. Policies should be pretty stable over time. So I wanted people to be more aware of difficulties. I know people can do whatever they want and I can't stop people from putting up inappropriate policies. Some people are dishonest or irresponsible, but talking about difficulties with policies can reduce mistakes from reasonable, well-meaning people.

If you put up a policy without really carefully thinking it through and knowing what you're doing and being great at logic and debate, you may end up breaking your word. Also, if you're familiar with my philosophy work but haven't requested a debate with me, for whatever reason, then an open debate policy may not be right for you, since I could use it.

Remember, a public, written policy that makes promises is open to unlimited word-lawyering: you have to follow it exactly as written or you will have broken your word. This is dangerous unless you're great at writing precise, careful statements (similar skills to being a lawyer) and you're great at word-lawyering (otherwise you'll fail to predict what kind of word-lawyering some pedantic person might do to you). Video game experts find unintended strategies and glitches; if your policy allows for those, you could end up breaking your word; and if you aren't the kind of person who finds those things, you may not know how to avoid that happening. Hackers find unexpected and unintended behaviors in software and use them to break through security protections to take control of computers; even if you put a lot of protections in your policy, a smart person may find a way through that requires you to do something you didn't intend or don't want to do, and you will have broken your word unless you do it. You can't be expected to be absolutely perfect, and it may sometimes be reasonable to reject a claim that's blatantly against the spirit of your policy and provide clarification. But you have to actually be good at skills like logic, reading comprehension, and considering unintended implications or else this won't work out well for you and you won't be in a good position to judge what's reasonable to reject. What's reasonable is having a tiny bit of ambiguity that should be interpreted in line with the spirit of the policy, because it's impossible to choose words with exactly zero ambiguity. It's not reasonable to have a lot of ambiguity or write statements that clearly, logically and literally have meanings you didn't intend.

I've had people complain that my own debate policy is too weak. Why does it have any rules, restrictions or qualifiers? Why not just offer unlimited, unrestricted debate to anyone at all? Because that could easily be abused. Because that'd be promising to spend an unlimited amount of my time on any person who asks for it in bad faith. The people making those complaints clearly don't understand the potential for abuse and wouldn't know how to write their own policy in a way they could actually follow (if they ever got much attention). I think they're used to doing what they think is reasonable rather than following written rules, so they don't really understand that committing to written rules means actually promising to follow them even when you don't really like it or think it's unreasonable. It's your job to plan ahead and write your written rules with restrictions and limitations so they don't promise you'll do things you won't want to do or will think are unreasonable. People reading your policy should know in advance what to expect from you and shouldn't find that you break your word or dislike them when they try to use your policy.

Another possibility is to put up a policy which is clearly labelled as a draft and a work in progress, which may change at any moment and may not be followed. This could make sense if you're pretty close to being ready for a policy. Maybe you think you're ready now but you're not confident about every wording. But please don't put up a draft policy if you're nowhere near ready to have a policy; that's unnecessary and misleading. You could share a draft policy as an essay for feedback, even as a beginner, but don't post it as your actual (draft) policy that someone could use, that you're trying to follow, unless there's a realistic chance that you'll follow it.

The Limitations of Rationality Policies

If you have a policy and appropriately deal with all corrections people offer, does that mean you're now right about everything, or you're the smartest, or all your ideas are the best knowledge that humanity has as of today? No. Even if you're super famous and have a great policy, some corrections still won't reach you, even from people who speak the same language as you.

Even if you have a policy, you should do research and seek out good ideas and corrections instead of relying exclusively on your policy to bring them to you. Even if you're famous, don't think this sort of policy is so effective that it can fully replace all other types of error correction.

If you don't even have ten thousand fans (as I don't), then it's reasonable to try your best with a rationality policy, but it may not bring in a lot of corrections. But if you aren't getting much attention, then the policy doesn't take much work to follow, so it's fine. And the ratio of high quality correction attempts to low quality ones may be better for people with fewer fans.

I find having a policy saves time for me because I can direct people to it when they attempt unstructured or ambiguous debate with me that looks low value to me. Before I had a policy, my options were to continue the discussion just in case they had a good point or to end the discussion according to my best judgment and take some risk of missing out on a good point. As a fallibilist, I was quite persistent in discussions and didn't like to unilaterally end them. Directing people to my debate policy is a better option for me. It's a way to end the discussion while still allowing a way that I could be corrected if I'm wrong (they could still use my policy).

Judging that a conversation isn't worth my time to continue further has some risk. I could be wrong. I find most intellectuals are way too unconcerned with that risk. Having a debate policy is a safety mechanism to help with that risk. I can use my judgment, but my policy gives people a way to overrule my judgment and continue the conversation if it's important. But if they do that, then they have to follow the rules and restrictions in my policy, so that protects my time.

The Biggest Difficulty with Rationality Policies

Based on my experience and my philosophical analysis, there is a hard problem involved with rationality policies that I don't have a full solution for. They are still a work in progress that haven't been tested out by many people. I think I have some good ideas about rationality policies, but no one famous has used a policy with a huge audience yet. When these ideas become battle tested, they'll no doubt be improved.

The hard problem I know about is dishonesty. I talked about this earlier in terms of people who post a policy but then break their word. But there's another way dishonesty comes up. People who use your policy may break their word. If your policy puts any requirements on people who use it – if it has any rules at all about what happens when they use your policy – then people may not do what they agreed to do.

If your policy requires that someone read a book before asking for your attention, they might lie about having read it. If your policy requires someone write 20 blog posts before requesting your attention, they might lie about being a blogger. People can lie about their identity, their expertise, their debate history, and so on.

If someone agrees to any rules about how the debate or discussion will be organized, or any restrictions on how it ends, they might not follow that later. Things might start OK but then they might start breaking rules in the middle of your conversation or near the end. If they agree to participate in a post mortem after the main conversation finishes, they might not do it.

People often follow the rules they agreed to when they're happy and things are going well. But if they start losing in some way, or something doesn't go the way they hoped, then they may get upset and then start breaking their word when they're emotional. After losing a debate, they may no longer be in the mood for doing a post mortem about it because they're too emotional (and if they leave for a week to calm down, they probably won't come back and do the post mortem after that, because if they tried to then it'd bring the same emotions back up). Many people are emotionally fragile.

Although there is a misogynist idea that emotional fragility mostly applies to women, most of my personal experience with sore losers in debates and other emotional online behavior has been with men. Although not everyone sees it this way, anger is in fact an emotion (and is more common with men). Road rage is people, more often men, getting emotional. Lots of domestic abuse comes from the emotional fragility, not masculinity or strength, of men. Wanting to duel someone because you feel like they insulted your honor is a pretty male way of being emotional which, unfortunately, is how some people react to some intellectual criticism. I'd rather deal with a debate partner who cries like a girl than gets angry at me. I don't agree with all the gendered stereotypes about emotions, but I wanted to talk about them to emphasize that it's common for men to get emotional in arguments. Some people think of being emotional as a gendered issue but it applies to every demographic group. Some emotional men don't think of themselves as emotional, so they wouldn't think my warnings about emotions or dishonesty apply to them, and I wanted to argue with that a bit.

It's hard to design policies so there's no way for people to break their word to you that would matter. You could ask them to put $5,000 in escrow which you'll keep if they break their word (on your sole judgment, or using someone else you think is reasonable as the judge), but that has tradeoffs like dramatically reducing the amount of potential corrections you receive. And the people who most value their time are often already rich, so they'd actually need to ask for a lot more money for the money to matter much to them.

There are lots of protections against liars that you could try, but many have downsides and I don't know a great solution overall. For example, you could engage with people who get upvotes while interacting with your fan community (people with a positive community history are more trustworthy), but that might not work well because many fan communities downvote dissent or at least downvote outlier dissent, so people with particularly advanced corrections may be downvoted. Many great ideas have been rejected for decades before being accepted or still haven't been accepted to this day. For example, Mendelian genetic inheritance was unappreciated for more than 30 years even though Mendel shared the ideas with relevant people.

Conclusions

If public intellectuals had rational policies for receiving error corrections and being open to some debate about their claims, it could make the world significantly better. It could speed up scientific and philosophical progress. It could lead to better decision making and better allocation of billions of dollars of funding. Politicians are another important group who ought to be open to error correction and feedback, though perhaps in somewhat different ways that don't require them to have expert-level skill at multiple intellectual topics. Maybe a politician's team ought to have enough expertise that the politician can delegate debate about any relevant issue to a team member, at least if they're a successful and important enough politician to have a team of a dozen or more people.

Anyone can try to be a public intellectual on the internet and can post a rationality policy. But most people shouldn't and would just end up breaking their word. If a prestigious public intellectual breaks his word and loses reputation, that's actually useful: his fans ought to learn that he was overrated. But if unknown bloggers and YouTubers put up and then break policies, society doesn't benefit. What can you do if you aren't ready for a policy? What's actionable here? Please suggest having a policy to the public intellectuals you follow. You can also look for public intellectuals who deal with error correction and criticism better than most, and use these kinds of issues to help inform your evaluations of thinkers.