AI Alignment and Openness to Debate: A Twitter Example
I’ll analyze a real example to illustrate some Critical Fallibilist thinking about openness to debate.
I replied to Eliezer Yudkowsky (an AI alignment thought leader) on Twitter:
I'm an expert on Critical Rationalism. You ignored my criticism over 10 years ago. I don't think you'll debate me re AI risk now. You don't post criteria for who & what you will or won't debate. If someone else on your side will debate to try to reach a conclusion,send them to me
(The missing space in “conclusion,send” isn’t a typo. I ran into Twitter’s maximum message length limit. I think thought leaders are wrong to be using Twitter much, and this is one little example of how and why Twitter is bad.)
I got only one reply from 181 views. Yitz said:
Hey, I’m no expert, but I’m generally on Yudkowsky’s side on many (but not all) positions, and would be happy to chat/debate! If you’re interested, let me know (also we could chat either publicly or privately, whichever you’d prefer)
I replied:
Are you interested in extended discussion with the explicit goal of actually reaching conclusions? I'm sick of people who just quit after a few back-and-forths. And Twitter isn't suitable. Do you want to come to my Discourse forum or suggest another public venue?
Yitz replied:
I’d be happy to engage in longer discourse, assuming that the discussion is interesting enough! I’m [redacted] on Discord; feel free to contact me there
Now let’s analyze. I think many people reading this would think my request was met. I asked for someone else on Yudkowsky’s side to debate, and someone showed up.
However, I asked Yudkowsky himself to send me someone. He didn’t. What difference does it make? If he sends someone and I win the debate, that person can report back to him and say “Hey, I lost the debate. You better send someone else.” If that happened several times, Yudkowsky might then personally debate me or send a colleague he knows well (otherwise people would judge him negatively). When someone volunteers, Yudkowsky avoids any responsibility. Yitz is not Yudkowsky’s proxy, which means Yudkowsky won’t debate and has no proxy who will debate in his place.
Also, Yitz doesn’t claim expertise. I don’t mind having discussions with people like that, and I appreciate that he was willing to admit it. However, he’s basically saying he doesn’t think he knows enough to reach a conclusion about the issues. He doesn’t even claim that he actually has the knowledge to correct me and to win debates with people like me about AI alignment. (I interpret Yudkowsky as claiming he has the knowledge to win debates with anyone about AI alignment, though I haven’t seen him say it in those words.) In other words, Yitz isn’t in a position to debate me on a peer level, as a fellow intellectual who has confidence in a conclusion about this topic. I didn’t find any available peer-type debate with someone who thinks they know the right answers.
Also, Yitz seems to have misunderstood what I said, which is a negative sign about his ability to have a high-quality discussion. I suggested discussing on a forum or another public venue. In response, he gave me his Discord handle, which enables me to send him direct messages on a chat service.
Also, I asked for someone willing to debate “to try to reach a conclusion”. In his first message, Yitz didn’t engage with that part. When I brought it up again, Yitz framed his answer to sound kind of like “yes”, but it actually meant “no”.
Yitz indicated that he’ll quit the discussion if he loses interest. He said he’ll continue “assuming that the discussion is interesting enough”, which I read as interesting enough in his opinion. This is a subjective/arbitrary criterion for ending the discussion that would let him leave whenever he felt like it, including because of a bias he has or because he started to lose the debate.
This interestingness criterion is not used in serious peer debates. A public intellectual (such as Richard Dawkins) wouldn’t agree to debate a prominent opponent (like a religious leader) on a stage or web forum with the caveat that he’ll stop replying if he gets bored. Dawkins would instead debate with a time limit specified in advance, or perhaps he’d debate until he claims victory, admits defeat, or claims the other side is being too unreasonable to continue productively. I’m not a fan of time limits for debates because they tend to end debates before conclusions are reached, but at least they’re more objective than saying you got bored.
People debating taxes, global warming, or abortion, who claim to be experts with the right answers and who have public reputations, don’t just say “this isn’t interesting to me anymore” and stop in the middle. I understand that figuring out when and how to end debates is a hard problem, but ending when you get bored is a bad, atypical answer for serious debates. Ending when you feel like it is a common and reasonable approach for informal chats in Reddit comments or on Discord servers, though. So my point, again, is that Yitz isn’t offering a serious peer debate, which means no one on Yudkowsky’s side offered one.
Also, even if Yudkowsky doesn’t want to debate me, he or any of his colleagues could see the reasonableness of my suggestion to post public criteria for who and what they will debate. But they haven’t done that. I think the main reason they don’t post criteria is not ignorance. I don’t believe that, if only they saw my suggestion, they’d think it was a great idea and start doing it. I think they don’t want to commit themselves to some debates they might want to avoid, but they also don’t want to post criteria that exclude most debates because then they’d look closed to debate and would receive criticism. I think they want to maintain strategic ambiguity so they can claim to be open to debate without writing down what debates they are or aren’t open to.
Despite my negative comments above, I did contact Yitz on Discord. I try to be flexible about intellectual discussions and debates, and I think other intellectuals should be flexible too. The discussion is in progress and he gave me permission to treat his Discord messages as public.
So far, Yitz has taken a position that surprises me and that I doubt Yudkowsky would take. He claims that if a Popperian epistemology is true, that wouldn’t affect the AI alignment debate. He also said that if Bayesian epistemology is wrong – including induction, credences, and updating – that wouldn’t change his conclusions about AIs. I do expect him to continue the discussion later, and I’ll try to understand his reasoning.
For reference, I wrote Criticism of Eliezer Yudkowsky on Karl Popper (2009). That’s the old criticism I referred to in my tweet to Yudkowsky. I also wrote AGI Alignment and Karl Popper (2022), Error Correction and AI Alignment (2023), Less Wrong Lacks Representatives and Paths Forward (2017), and Open Letter to Machine Intelligence Research Institute (2017). I have a debate policy in which I publicly guarantee that I’ll debate if the stated conditions are met. My policy uses an impasse chain method for ending debates without mutual agreement, so that people (including me) can’t just quit arbitrarily when they’re biased or losing.