Example Debate with AI Researcher

This is a fictional example of what a debate could look like if an AI researcher were willing to debate and were unusually honest. It’s meant to be more illustrative than realistic. It can be read on its own, but it’s also a follow-up to my article Error Correction and AI Alignment.

The main goal is to illustrate some of Critical Fallibilism’s ideas about rationality and openness to debate, which apply to all topics. Try to consider the general principles, not just the particular examples.


Elliot Temple: Are your beliefs about AI alignment premised on other ideas, in other fields, such as philosophy?

AI Researcher: Yes.

ET: Have you studied philosophy?

AR: No.

ET: Are you prepared to debate philosophy topics?

AR: No.

ET: Do you have someone else ready to tag in to debate philosophy topics for you?

AR: No.

ET: How did you determine you were right about those philosophy issues without understanding them enough to be able to debate them yourself?

AR: I trusted various secondary sources.

ET: Will any of the authors of those secondary sources join the debate?

AR: Not that I know of.

ET: Then is trusting them a good idea?

AR: What else should I do? Personally study every field in depth?

ET: Find some philosophers who will participate in debate and who agree with you, or if you can’t find any, then study it yourself or get one of your colleagues to. All your premises for your AI alignment conclusions need someone who can and will discuss them, answer questions about them, answer criticisms, teach others why you’re right, etc.

AR: That’s a lot to ask for.

ET: I think it’s pretty much the bare minimum for rationality. But, regardless, how do you know how big of an ask it is? Analyzing that involves either philosophy or philosophical premises. So you shouldn’t make claims about it, right?

AR: You’re not going to let me speak about anything?

ET: You don’t even claim to have a chain of competence from the basics up to any topic you want to speak about, without skipped steps where you trust others who won’t debate. You also said earlier that you aren’t prepared to debate these issues. I took that to mean you can’t debate by citing existing literature that you’ve read and understood and that argues your case for you. Either you aren’t familiar enough with the literature, don’t know how to do citation-heavy debating, or don’t believe literature with the arguments you’d need actually exists. Right?

AR: Can’t we just have a normal debate? You haven’t made any arguments about how safe you think AIs are. It’s all just meta discussion.

ET: I disagree with some of your premises which you won’t discuss, and you offer no alternative solutions. You have no way to get this disagreement resolved. There are, to the best of your knowledge, no people or literature on your side which can answer a critic like me. Right?

AR: No, I don’t agree at all, don’t put words in my mouth.

ET: Then can you name a person who will debate or cite a specific literature source that you endorse and take responsibility for (if it’s wrong, you’re wrong)?

AR: Can you give a specific issue that you want a counter-argument about?

ET: Sure. Karl Popper gave a refutation of induction. Broadly, lots of AI-related thinking is premised on induction being correct. So can you give a refutation of Popper’s refutation of induction?

AR: I don’t have that, but I don’t see why you expect anyone to waste their time on reading bad ideas like Popper’s. We’re busy doing AI research and developing our rationality.

ET: Popper was a moderately famous intellectual who wrote books and who disagreed with you. He was a professor with credentials. Who would you engage with? What criteria do you have for what ideas or people you’ll engage with in what ways?

AR: I try to engage with promising ideas.

ET: Will you write down objective criteria for which ideas qualify as “promising” and follow them at least part of the time?

AR: No, stop trying to control how I spend my time.

ET: That sounds like you’re avoiding transparency and anything resembling the rule of law in order to enable your biases.

AR: No, I won’t be biased because my intellectual community has lots of great anti-bias literature.

ET: Anti-bias literature that doesn’t recommend using objective criteria?

AR: Some books recommend some things like that.

ET: But you aren’t doing that. Anyway, I don’t see how you expect to reach a true conclusion in this discussion given your limitations and complaints.

AR: Debating is hard. I don’t expect to reach a conclusion with a picky person like you who doesn’t already share most of my premises.

ET: So you’re admitting you don’t even believe you have arguments that objectively should address my concerns and persuade me that you’re right?

AR: You’re impossible to deal with. I’m not going to waste my time continuing this.


Perhaps the reason many of them avoid debate is that they don’t want to create written records that look anything like this. There are upsides to avoiding these discussions if you have no rebuttal to Karl Popper and you’d rather work within your specialty than check your premises. (It’s not hard to find a rebuttal to Popper with a Google search, but citing it could bring you more trouble than saying nothing. What if the rebuttal contains egregious errors?)

If you think I’m wrong about the unwillingness of AI researchers (on both sides of the alignment and existential risk issue) to debate, please don’t just assert that the situation couldn’t really be that bad. Don’t just decide not to believe me. Don’t ask me for evidence that they won’t debate. Instead, find someone who will debate, or find several people who have posted written criteria and methodology for who, what, and how they will debate.

I’d be satisfied if they previously didn’t know they should write down debate policies but started doing so now that I’ve brought it up. I’ve made some attempts to raise the idea with their online communities, but I didn’t get good reactions. I could also be satisfied if they came up with, and followed, some alternative ways to address the same concerns, but I haven’t seen that.

I think there are thousands of people who could make worthwhile comments or criticisms if better debate and discussion methodologies gave them the opportunity. But instead those people are being ignored because they haven’t climbed a social status hierarchy to get attention. I think people’s willingness, or unwillingness, to debate me is representative of how they treat many other smart people. I work fine as an example, but the main point isn’t about me personally.

Read more about how I think about debate.