Error Correction Mechanisms and Parenting

Critical Fallibilism's (CF's) concept of error correction mechanisms helps people deal with fallibility. Common approaches to fallibility allow arbitrary judgments without transparency, thus enabling bias. At the other extreme is putting unbounded effort into handling fallibility: keep trying to reach mutual agreement or consent with critics until you succeed or they give up and stop engaging, but never, ever unilaterally quit a discussion where anyone disagrees with you (and therefore might conceivably be able to correct an error of yours). CF's idea of error correction mechanisms is designed to rationally deal with fallibility using limited time, energy and other resources.

This essay explains error correction mechanisms, discusses their application to public intellectuals and debates, then considers applying them to parenting (which provides a different perspective on the issue, one I think will be interesting even to philosophers who don't care about parenting). I theorized that using error correction mechanisms is a way to create a better successor to the parenting philosophy called Taking Children Seriously (TCS). TCS tried to apply Popperian epistemology to parenting, including viewing parents as fallible. Trying to take fallibilism seriously, TCS told parents to seek agreement and consent with their children without putting limits on the time, energy and other resources used.

Fallibility

How to deal with fallibility is a fundamental philosophical problem. A standard attitude (among people interested in rationality) is basically "try your best", which unfortunately allows for too much bias. If you're confident and wrong, as fallibilism says people sometimes are, then you're at risk of making important mistakes when being dismissive of critics. An alternative approach to fallibility is to require unlimited effort at staying open to error correction, but people can't really do this because they have a limited amount of effort to use.

So one approach requires zero effort and the other requires infinite effort. TCS told parents to keep trying to find common preferences with their children until they succeed, which is an unbounded effort type of approach. Advocating unbounded effort towards error correction is rare and I've only seen Karl Popper fans do it. I considered unbounded effort approaches interesting, despite their being both wrong and wildly impractical, because I think the mainstream approach is bad and we should be doing more to manage our fallibility.

What other options are there besides having no real burden to do anything or an unlimited burden? How do you deal with fallibility well using limited time, energy and other resources?

The mainstream approach can be seen as an honor system where it's generally considered rude to call people out and there is little attempt at external policing: people are just supposed to police their own decisions about what ideas and people to pay attention to or not. So one alternative using bounded effort is to have a culture of people encouraged to observe others and criticize their rationality whenever they see a potential mistake. This could be mean, biased, unpleasant and ineffective (it could have elements of a popularity contest or witch hunt), and it could silence people who would otherwise share ideas but who don't want to deal with this criticism. I can also imagine a hypothetical scenario with a very different society than ours where this works better.

For CF, I developed a different approach. You can use structured methods that provide backup plans for error correction in case your best judgment is incorrect. So first, people should try to be reasonable, rational, open to criticism, etc. They should expend resources (like time and effort) when they think it's a good idea. Then, second, CF says to have at least one backup plan. CF divides the solution up into multiple parts. The first part is similar to the mainstream approach, but then it's followed up with additional parts instead of with nothing or with unlimited effort.

Each backup plan should have a level of independence and autonomy – it's written down in advance and it can overrule your best judgment – so what happens isn't just determined by your (potentially arbitrary) judgment. It's important to have a strategy with at least two parts so you can have at least one part that uses your judgment and at least one part that restricts your judgment and is capable of overruling your judgment. Your judgment is important and shouldn't be left out, but it's also potentially mistaken and shouldn't be fully trusted.

If there is no mechanism that can overrule your judgment, then when you're wrong, biased, overconfident or irrational, you'll judge that you're right, be dismissive, and stay wrong. When we acknowledge we're fallible, we should want ways to correct errors even when our best judgment is mistaken. We want to avoid dismissing a critic who we think is dumb, when actually we're wrong and could have learned from him. Serious fallibilists see that as an important risk to mitigate. This is somewhat similar to how scientists use double blinding instead of just trusting themselves to be unbiased and reasonable.

Consider a case where you think a lead isn't promising, a critic sounds very wrong or incompetent, a debate sounds unproductive, etc. Then the standard view is you should say "no" to that opportunity and spend your energy elsewhere. But, being more concerned with fallibilism, CF asks what if you're wrong? If you refuse to listen to a critic, and actually you're in the wrong, then you'll stay wrong and you're the one sabotaging progress. Is there anything we can do to avoid that without an infinitely large burden? Yes! Can we do anything useful without even having a big burden? Yes!

For example, you can allocate one hour per month to analysis and reflection regarding people and ideas that you were dismissive of. You can then reach out to the person or research the idea if you change your mind about dismissing it. This is a reasonably small burden, and certainly not an infinitely large one. Is this effective? Somewhat. I think it'd be an improvement for many people.

That's just a simple example. There's a lot of room for innovation on error correction mechanisms since they're a new idea. I've developed more mechanisms in Paths Forward Summary and the essays it links to, including Using Intellectual Processes to Combat Bias.

One of my favorite ideas is having a debate policy so, if you decline a conversation, a critic still has the option to challenge you to a formal debate. The debate would follow rules you wrote down in advance specifying what debates you accept, from whom, with what debate terms. This also enables people to criticize your debate policy itself: if you're too dismissive or closed to debate, or you have biased debate terms, people will be able to clearly see that and comment on it. Sharing your debate policy transparently in writing also encourages you to follow the same policy for every critic, which helps avoid bias, and it lets your audience hold you accountable if you don't do what you said you'd do (at least for anything that happens in public; people can't monitor your private actions but critics can publicly challenge you).

Written policies plus transparency can be seen as a way to move away from an honor system and voluntarily invite being policed by others. Instead of a culture where all intellectuals are policed by default (a problematic idea mentioned earlier), this is an opt-in system. In friendlier terms, this is using people as accountability buddies rather than policemen. And instead of being policed about whatever the public chooses, you're policed about the specific policies you write down. What do you get in return for potentially receiving this criticism from others? First, you get the criticism: you should want to know if you're violating your own written policies (if you don't want to know that, don't write those policies down). Second, having these policies, having transparency and being open to criticism about them can improve your reputation and earn you a larger audience.

I've focused my thinking about error correction mechanisms mostly on public intellectuals and debates. Public intellectuals are already putting work into thinking and should already be trying to be rational, but I find they don't debate enough and aren't receptive enough to criticism. And I think if they improved their rationality, they could set a good example for others and it could improve society. On the other hand, if they're wrong and ignore criticism and stay wrong, they can spread bad ideas to many people. If they're wrong but no one knows about that error, they can also spread bad ideas, but I don't know how to avoid that. I'd at least like to stop the avoidable spread of bad ideas in cases where better ideas are already known.

My History

In the past, inspired by the philosophical ideas (but not personal example) of my former mentor David Deutsch (a Popperian and a founder of TCS), I tried to be a good fallibilist in the unbounded effort way. I didn't want to refuse debates. I didn't want to end discussions without mutual consent. I didn't want to unilaterally declare that someone who disagrees with me is wrong and then end the interaction, because what if actually I'm wrong? I didn't want to have an authoritarian attitude or risk shutting down a discussion where I might be in the wrong and in need of correction. This was OK for me for years because I found debate interesting and productive. But most debates gradually got more repetitive and less productive for me, so that's one of the reasons I created a debate policy.

Having a backup plan enables me to decline critical discussions and debates or end them without mutual consent. I use my best judgment but, if I'm wrong, I don't necessarily stay wrong. It's still possible for my error to be corrected via my backup plan (people can use my debate policy, which is capable of overruling my judgment about which debates to participate in). Having a backup plan enables me to protect my time and energy better, and say "no" more, without being irrational or sabotaging the means of error correction. See also My Experience with My Debate Policy.

Error Correction Mechanisms

Let's review some of the error correction mechanisms I developed. These are designed primarily to work well for public intellectuals and debates.

Create a written debate policy. The policy lets you decline debates, using your best judgment, without that being the final, ultimate end of the matter. If you're wrong, you can still be corrected because someone can challenge you to a debate, according to your policy, and your policy can overrule your judgment not to discuss or debate that issue. This enables you to use your judgment most of the time while still being open to error correction. This kind of policy works best for confident people who think they aren't making a lot of errors and who don't lose debates frequently. Beginners could get overwhelmed with error corrections, and end up in many long debates that they lose, if they got much attention. Beginners could be better off studying and learning at their own pace and only participating in debates when they think it will help them learn, not whenever someone sees they're making a mistake and challenges them.

One of the goals of debate policies and other error correction mechanisms is to operate outside of or differently than the social status hierarchy. A common issue is that people choose who to give attention and energy to based on social status or prestige, not based on idea quality or rationality. See also A Non-Status-Based Filter.

For debate policies, it's important to use a debate methodology which restricts people's ability to arbitrarily quit at any moment without mutual consent, but which also allows people to end discussions using a limited, reasonable amount of energy. Arbitrary, no-questions-asked quitting enables bias: people (including the person who offered the debate policy) could quit at any time to avoid discussing any idea they're biased against. There's also an issue of people quitting when they start losing the debate. A simple example of a medium-effort debate-ending methodology would be that if you end a debate without mutual agreement, then you must write a 500+ word statement about your conclusions for the debate's topic, write a 500+ word statement explaining why you're ending the debate, and answer 3 followup questions. That's a lot less than infinite effort but a lot more than zero. My Impasse Chains idea is a more advanced approach which has some upsides and downsides over this simple approach.

Spend time on critics on a regular basis. Intellectuals can have a written policy for how they'll spend some time regularly. People can think about critics and contrary ideas, answer questions, answer criticisms, debate, etc. To help combat bias, time should be spent in a mix of ways. E.g. address some famous critics, some criticisms you choose, some that get upvoted by your audience, and some at random. This limits your control over which disagreeing ideas you spend time on so that subconscious bias doesn't lead to avoiding the most challenging, productive criticisms that might actually correct your errors. See also Rationality Policies, Rationality Policies Tips and Hard and Soft Rationality Policies.
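The mixed-selection policy above can be sketched in code. This is a hypothetical illustration (the function and pool names are mine, not from the essay): only one pool is picked by your own judgment, and the random draw is outside your control, so subconscious bias can't steer the whole monthly mix.

```python
import random

def pick_criticisms(famous, self_chosen, upvoted, backlog, seed=None):
    """Pick this month's criticisms to address from several pools.

    famous: criticisms from well-known critics
    self_chosen: criticisms you selected yourself (your judgment)
    upvoted: criticisms your audience voted up
    backlog: everything else; one is drawn at random
    """
    rng = random.Random(seed)  # seed only for reproducible demos
    picks = [famous[0], self_chosen[0], upvoted[0]]  # one from each fixed pool
    picks.append(rng.choice(backlog))  # plus one you didn't get to choose
    return picks
```

The design point is that `self_chosen` is only one of four slots: even a biased selector can't avoid the upvoted and random criticisms.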

Create state-of-the-debate trees and invite contributions. Using idea trees to organize your ideas or debates can make it easier for people to see what you believe and where they disagree, and it can give people a simple, helpful, short way to contribute: by suggesting one additional node to add to one of your trees.

Have transparency. Transparency lets people call you out when you violate your policies or make other mistakes. It's also helpful to have a forum or comment section or something online where people can publicly comment on your work and behavior. See also Fallibilism, Bias, and the Rule of Law.

Do postmortems. Postmortems can help you find more errors after you find out about one error. You can also do postmortems for debates that you believe you won, and you may still find mistakes you made or expose your thinking about what happened to potential criticism from others.

Applications to Parenting

How can error correction mechanisms be applied to disagreements between parents and children? I'll give an example of a pretty straightforward application. While I think it could help some families, I also think it's flawed, incomplete, and won't work nearly as well for parenting as for public intellectuals. There's a lot more left to figure out for parenting. I think this analysis will be a useful comparison even for philosophers who aren't interested in parenting.

A parent could be open to debate, criticism and questions from his child on Sundays from noon to 4pm. This would be in addition to the parent using his best judgment at all other times and trying to be a reasonable person who explains issues, answers questions, teaches things, and listens to criticism and feedback. This gives the child a secondary, backup opportunity to revisit any issue where he felt inappropriately dismissed. Further, before making any big decision, the parent should tell his child about it, then wait until at least one Sunday passes before finalizing his decision. If there's time pressure, the parent can offer a debate opportunity sooner. If there's too much time pressure for that, the parent should be careful: people make a lot of mistakes while under extreme time pressure, which is why scammers often rush their victims.
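The waiting rule above is mechanical enough to sketch in code. This is a minimal illustration under my own assumptions (the essay doesn't specify whether an announcement made on a Sunday counts toward that same day's session; here I assume it waits for the following week's session):

```python
from datetime import date, timedelta

def earliest_finalize(announced: date) -> date:
    """Earliest date a big decision announced on `announced` may be finalized:
    at least one Sunday debate session must pass first."""
    # Days until the next Sunday strictly after the announcement
    # (weekday(): Monday=0 ... Sunday=6).
    days_ahead = (6 - announced.weekday()) % 7
    if days_ahead == 0:
        days_ahead = 7  # announced on a Sunday: wait for the next one
    next_sunday = announced + timedelta(days=days_ahead)
    return next_sunday + timedelta(days=1)  # finalize the day after the session
```

For example, a decision announced on Monday, 2024-01-01 couldn't be finalized before Monday, 2024-01-08, giving the child the Sunday, 2024-01-07 session to challenge it.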

Will this work well? Sometimes, yes. I do think it's a reasonably good idea which could help some families.

How does it compare to CF's ideas for public intellectuals?

If a public intellectual loses a debate, he might not admit it. That's OK. Debates are typically public, so some of his audience could notice, so something useful can happen for societal progress. There are other people evaluating public debates besides debate participants. If a parent loses a debate with his child and doesn't admit it, then what? I think basically if the parent is unreasonable, then this method won't fix things. Some parents are quite unreasonable, but others are reasonable about a fair number of topics, in which case this method can help on some topics even if the parent is unreasonable about other topics.

If a public intellectual violates clearly specified debate rules, methods or formats, then the audience will notice. E.g. a debater might leave after 10 minutes in a 30 minute debate, or might quit with no concluding statement in a debate where people agreed at the start that whoever quits first (without mutual consent) must write a concluding statement. That kind of behavior is unlikely to fool a debate audience. If a parent does these things, his child may have no real recourse even if he notices. The child could try to bring it up on the next Sunday debate session, and it's possible that would go well, but I wouldn't have high expectations. This won't help families where the child is too intimidated to challenge his parent, but it could help some of the better families.

Also, parents can design debate rules to be biased in their favor, explain the rules to their children incorrectly, change the rules if challenged and even gaslight their child about what the rules were (if the child isn't old enough to read or the rules aren't written down). Public intellectuals have limited ability to do those things if the rules are in writing on the internet and archived copies exist, since a lot of their audience is literate adults. And if a parent has biased rules, his child has little recourse, whereas if a public intellectual has biased rules his audience might lose respect for him.

If a public intellectual is open to debate from the public, then someone who is good at debate may challenge him. That's a big part of the point! There are some good debaters in the many millions of members of the public. Things should be set up to incentivize and encourage good debaters to come forward. One of the design goals of CF's approaches is that public intellectuals don't avoid all the good debaters, which is accomplished by not allowing arbitrary control over who is debated. Children are usually bad at debate, especially when young, so parents will usually win debates even when they're wrong. If the parent teaches the child how to debate, the parent will usually be better at debate because the child just knows a subset of what the parent knows about debate. Given years of practice and reading books and getting information from other sources, the child may surpass the parent at debate, but what happens when the child is young is probably more important. And if the child finally starts winning debates when he's 15, the parent might dislike it and stop debating. But some of the best parents might be OK with losing debates, or even like it, so this could at least help a few people.

If a public intellectual is being really unreasonable while going through the motions of having a debate policy, he may lose his audience. There are consequences. If a parent goes through the motions while being very unreasonable, then he can probably just get away with it. Nothing much has to change. The parent has all the power and the debate policy doesn't change that. And the debate policy stuff may serve to gaslight and manipulate the child, who is told he's irrational and lost all the debates instead of merely being told the more honest "because I'm the parent and I said so".

An idea to help parents do a better job of being open to Sunday debates and criticism from their children is having accountability buddies. Audiences help hold public intellectuals accountable. For privacy reasons, parents shouldn't share everything about their family issues online, but they may be able to share with a few trusted friends.

If you know another family that is also doing debate policies with their children, the other parents could be present for some of your debates and you could be present for some of their debates. Your kids could also potentially debate the other parents about some of your decisions, either without you present or in a joint session with all the parents and kids present. Even if it's not a mutual arrangement, some friends might be willing to help review and critique some of your debates with your children or talk with or debate your children about family issues. Some debates could be done in writing or recorded for later review. Even if you have no accountability buddies, reviewing your debates later to look for your own errors can be helpful (if you wait a month, you may be able to look at the arguments with fresh eyes, similar to editing an essay draft after a month break).

Your spouse can also be an accountability buddy, but they're more likely to share the same biases and errors as you than people from outside your family are. Hopefully you get along with your spouse well, have talked through many issues, and agree a lot. That's good for a marriage but bad for providing an independent perspective. Other people in your family (like your parent or sibling) could also be accountability buddies but could also have a very similar point of view to you. Friends may offer the most independent perspective available from trusted people. You could also hire a therapist, tutor, philosopher or life coach and pay them for confidentiality.

Will accountability buddies work well? Maybe. I can easily imagine it helping some people, so I think it's a decent idea. But I can also imagine people doing it in bad faith. And there tends to be a fair amount of agreement between adults against children. Most random adults from your city would probably support most of your parenting decisions (and I wouldn't expect the decisions they don't support to correspond very well to the ones you're mistaken about).

I also wrote about handling fallibility using limited effort, and about parenting, in Fundamental Philosophical Errors in Taking Children Seriously.

Conclusion

I discussed fallibility and the reason for having error correction mechanisms. They can help enable error correction using a medium amount of effort, instead of basically no effort or infinite effort. Error correction mechanisms also play the crucial role of being a backup plan which is capable of overruling your judgment (so that when your best judgment is wrong, which fallibilists expect to happen sometimes, that doesn't guarantee bad outcomes). I discussed specific error correction mechanisms designed primarily for public intellectuals and debates.

I proposed and analyzed a reasonably direct, straightforward application of debate policies to parenting. I think the resulting idea was worth coming up with and could help some of the more rational parents. But I don't think it's good enough to change society much. By contrast, in the case of public intellectuals, I think CF's error correction mechanism ideas are already good enough to potentially significantly improve society if they were in widespread use.

With parenting, the power imbalance, privacy and some other issues are difficult to deal with, but I do have some abstract, theoretical reasons to think good solutions for fallibilist parenting should be possible, somehow, using reasonable amounts of time, energy, money and other resources. Views of rationality oriented around fallibilism and error correction are not thoroughly explored yet so there's lots of room for people to develop new solutions for many topics.