Meta Criticism and Unstated Premises

It’s often hard to engage with the literature on a topic because the literature is bad in meta ways. You can criticize the methodology and the glaring omissions, but it’s hard to write about the actual object-level topic.

For example, suppose some economics literature didn’t have a concept or definition of money, and didn’t cite any other work that covers that issue. Pointing out that the authors didn’t define “money” or analyze what money is – even though many of their arguments are premised on unstated assumptions about money – would not be writing about economics. That’s meta criticism that doesn’t give you the opportunity to do economic analysis.

Meta criticism is fine and valid, but most people seem to dislike it. They won’t take it seriously or reject the meta-criticized work on that basis. They want to hear topical, object-level arguments.

Also, you might want to think about economics, not meta criticism – you might simply be more interested in economics. Or you might like meta criticism, and do lots of it, but still want to do object-level criticism and analysis sometimes.

You might do meta criticism of many different fields and find the literature so bad that it’s hard to engage in object-level discussion. And if people aren’t responding to meta criticism, it gets tiresome to repeat the same points, so you’d rather engage with the object-level content of the fields. But if it’s all the equivalent of people making unstated assumptions about money, and premising their economics on those, then it’s very hard to engage with. You can try to guess and state their unstated assumptions and then criticize those, but they’ll deny it and say you’re making up stuff they don’t believe and didn’t say – while still not clearly stating their actual assumptions. You can also just ignore everyone else, write what you think money is, and then analyze economics using your own premises – which will probably get you ignored by people who assume your premises are wrong.

That’s a fictitious example about economists, but I think it’s representative. I find this kind of issue in many fields. Let’s look at a real example.

I looked at the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. I wanted to find literature explaining the (alleged) extinction-of-humanity-level danger posed by AI, which I could analyze and then either be persuaded by or rebut. Instead I found a book that doesn’t define “intelligence” and is premised on unstated, unanalyzed assumptions about the nature of intelligence. And I haven’t found any other literature, with similar conclusions, that is better to engage with.

The closest thing to a definition of intelligence in a book about superintelligence and artificial intelligence is this statement about general intelligence:

Machines matching humans in general intelligence—that is, possessing common sense and an effective ability to learn, reason, and plan to meet complex information-processing challenges across a wide range of natural and abstract domains—have been expected since the invention of computers in the 1940s.

This isn’t a suitable definition for judging which things are AIs and which aren’t. It defines “intelligence” in terms of “common sense”, which just makes things worse: “common sense” is left undefined, and it’s a particularly vague term. Learning, reasoning, planning, and meeting complex information-processing challenges are also left undefined. Defining intelligence in terms of complex, undefined terms isn’t useful. You have to define a complex term using simpler terms or give a lot of analysis and explanation. But instead of a section about what intelligence is that tries to explain how Bostrom thinks about it, we get this single sentence.

There’s also no statement of what non-general (special case) intelligence is.

There’s a statement defining superintelligence in terms of the undefined term “intellect”. That definition might be fine, or not, depending on what Bostrom means by intelligence. Some concepts of intelligence can take a “super” modifier in a straightforward, understandable way, but some can’t.

I struggle to comprehend how anyone thinks this is OK or fails to notice the problem. Why aren’t more readers noticing and complaining? My best guess is that they share many of the same unstated assumptions that Bostrom has. The issue stands out to me more because I think about intelligence differently.

Somehow, many people don’t seem to understand what premises are or why you have to actually examine and analyze them. Even if you agree with Bostrom about what intelligence is, you should still want him to state his view, so it can be analyzed and discussed, and so you have something to say to people who disagree and ask you to explain your position.

The problem is not just Bostrom. A lot of the literature in most fields is unsuitable for direct, non-meta engagement due to problems like this. It’s so broken that it’s hard to get past the meta criticisms to discuss anything else. This is a methodology problem: the author isn’t following the proper method of trying to understand and analyze high-relevance premises (nor is he citing sources that address those issues for him).

Note: I didn’t read much of Superintelligence; I did some skimming and searching. I did ask a fan of the book about this issue, and he was unable to point me to important, relevant text that I’d missed. Nor did he direct me to any other literature related to AI risk and AI alignment that does better. If you know of something better, please tell me.