Brandolini’s Law

I read a tweet:

Brandolini's Law (aka the Bullshit Asymmetry Principle): It takes a lot more energy to refute bullshit than to produce it. Hence, the world is full of unrefuted bullshit.

I considered and analyzed the tweeted idea. This article follows my thought processes as I figured things out, rather than being organized around my final conclusion.

Brandolini's Law is a common belief with a Wikipedia page, but I think it’s wrong. Criticism is broadly easier. That’s because one decisive error is all you need. Critics don’t have to comprehensively review and comment on everything (even if they did, that might make the energy costs roughly equal for creator and critic – I don’t see in principle why criticism would be harder).

I do see a practical reason why critics would feel busy and overworked. There are more people creating bullshit than criticizing it.

The main error may come from partial arguments. If you aren’t using decisive criticism, then it’s a lot of work to refute bad ideas. Say an idea starts at 0 points and someone comes up with 10 poor-quality arguments to get it up to 100 points. That’s 10 pieces of work, and refuting those arguments is 10 more pieces of work, which should be roughly equal work so far. But there’s another step.

In some views, after the 10 positive arguments and 10 negative arguments of equal quality, the score is back to 0, where it started. It has 10 arguments worth +10 each and 10 criticisms worth -10 each. The idea now has neutral status; it’s not refuted. To refute it we need e.g. 10 more criticisms to get it to a -100 score. If you view it that way, then criticism actually is more work (they did 10 positive arguments to meet the burden of proof to get the idea addressed; then you had to do 20 criticisms to refute it). But if 0 is the default score of all ideas, then it’s tied with infinitely many other ideas, and is only at the level of random noise. We shouldn’t need to criticize something to below random noise in order to reject or ignore it.
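To make that arithmetic concrete, here’s a minimal sketch of the weighing model in Python. The function name, point values, and printouts are my own illustration, not anything from the tweet or from Critical Fallibilism:

    # A sketch of the "weighing arguments" scoring model described above.
    # The names and numbers are illustrative assumptions only.

    def weighed_score(positive_args, negative_args, points_per_arg=10):
        """Score an idea by adding supporting arguments and subtracting criticisms."""
        return points_per_arg * (len(positive_args) - len(negative_args))

    pros = ["pro {}".format(i) for i in range(10)]  # 10 poor-quality supporting arguments
    cons = ["con {}".format(i) for i in range(10)]  # 10 criticisms of equal quality

    print(weighed_score(pros, []))    # 100: the idea looks strongly supported
    print(weighed_score(pros, cons))  # 0: neutral again, but not "refuted"
    print(weighed_score(pros, cons + ["con {}".format(i) for i in range(10, 20)]))  # -100

    # On this model the critic needs 20 criticisms to answer 10 arguments.
    # But 0 is also the score of every random, unargued idea, which is why
    # treating "score 0" as "still unrefuted" is the step being questioned.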

Looking at the issue more generally, if one counter-example is decisive, then you can refute tons of bullshit with one piece of data. That’s a huge asymmetry in favor of critics. This is part of why it’s important to focus on criticism and error correction in general: they’re the most efficient and effective.

We don’t always have clear counter-examples, though. So one problem comes from trying to refute stuff using techniques that aren’t as effective as counter-examples. That’s harder and more work. I think good approaches to criticism do better than that.

A proper target for criticism – something actually worth criticizing – is an idea that explains a way to achieve a goal/purpose. Ideas should not only assert something but should give some reasoning about why they’re good for some goal. An idea with no goal is pointless (a goal is a point, objective or purpose).

So the basic structure is goal/problem and idea/solution. Without that, the idea is ignorable since it isn’t even claiming to be good for something. An idea has to not only exist but tell you what it’s for or else you have no reason to use it for anything.

So you need a problem, a solution, and some reasoning to connect the two. That’s what rational ideas/proposals look like. There are three parts. But a criticism can pick any one part and point out a flaw. So criticism seems smaller and potentially easier.

And a criticism can work on a sub-part. If the reasoning connecting problem and solution (aka goal and idea) has 10 parts, you can refute any one part to break the connection. And you could take one part, analyze how it works at a lower level in terms of five sub-parts, and then refute one of those sub-parts. In other words, criticism can address only a small part of an idea – some sub-detail – and still be effective.
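As a loose illustration (my own sketch, with invented example content, not something from the article or from Critical Fallibilism), you can model a proposal as a problem, a solution, and a chain of reasoning steps, where a decisive criticism of any single step breaks the whole chain:

    # A loose sketch (my own illustration) of the three-part structure:
    # a proposal is a problem, a solution, and reasoning steps connecting
    # them. Refuting any single step breaks the connection.

    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        problem: str
        solution: str
        reasoning_steps: list = field(default_factory=list)
        refuted_steps: set = field(default_factory=set)

        def criticize(self, step_index):
            """Record a decisive criticism of one reasoning step."""
            self.refuted_steps.add(step_index)

        def stands(self):
            """The proposal stands only if every reasoning step survives."""
            return not self.refuted_steps

    idea = Proposal(
        problem="get to work on time",
        solution="leave home at 8am",
        reasoning_steps=["the drive takes 45 minutes",
                         "traffic is light at 8am",
                         "parking takes 10 minutes"],
    )
    idea.criticize(1)     # a flaw in one sub-part...
    print(idea.stands())  # False: ...is enough to break the chain

Refuting any other step, or a sub-part of a step analyzed at a lower level, would break the chain equally well.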

You don’t need to give negative arguments proportional to positive arguments. That’s the weighing arguments model that Critical Fallibilism refutes. Instead, you should be evaluating ideas in terms of goal success. Success or failure is a binary distinction. A single (relevant, important) error is all it takes for failure instead of success. And any error which does not lead to failure is not really an error since it’s compatible with success.

What if there are three issues, and any one in isolation doesn’t cause failure, but all three together do cause failure? Then their conjunction (combination) is an error that will cause failure if present, but any of them individually is not.
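Here’s a minimal sketch of that binary evaluation. The specific issues and the failure rule are invented purely for illustration:

    # A minimal sketch of binary (pass/fail) evaluation, where a
    # conjunction of individually non-fatal issues is itself the error
    # that causes failure. The issues and the rule here are assumptions
    # made up for this example.

    def causes_failure(issues):
        # Assumed rule: no single issue is fatal, but all three together are.
        return {"a", "b", "c"}.issubset(issues)

    def succeeds(issues):
        return not causes_failure(issues)

    print(succeeds({"a"}))            # True: one issue is compatible with success
    print(succeeds({"a", "b"}))       # True: still no failure
    print(succeeds({"a", "b", "c"}))  # False: the conjunction is the decisive error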

Lying and Lazy Made Up BS

To better understand what people meant, I looked at an article about Brandolini’s Law.

One of the intended meanings of Brandolini’s Law is that you can just lie, and looking up the truth is more work than making up a lie. But saying “Source?” is less work than making up a lie. I don’t need to look up the truth to reject an arbitrary assertion.

If someone is willing to lie in bad faith, they can create some work for others. If you don’t have the internet and they will fabricate quotes and lie about sources, then you’d have to go to the library to show they’re wrong. And how would you know which quotes to fact check if they only lied about some things? You might have to look up many things.

Besides the lying vs. library research example, the blog post also has a comic about the moon being made of cheese. Scientifically figuring out what the moon is actually made of is more work than saying cheese. But a critic doesn’t need to do that. A critic of the cheese theory can just ask for reasoning, sources, evidence, etc.

Unargued assertions don’t require any criticism other than “That is an unargued assertion”.

People need to give reasoning for their claims (roughly proportional to how big a claim it is – claim something more important or complex and you should put more effort into it). What if you think of a speculative hypothesis but it’s potentially a big deal? Does that require a ton of effort before you can speak? No. Just because X is a big claim that takes effort to properly make does not mean that “X is a speculative hypothesis that I think merits investigation” is a big claim. Proposing something as a hypothesis to consider is different than proposing it as true.

Claims with no reasoning are undifferentiated from picking random claims out of the set of all logically possible claims. They add no value over that so they aren’t even worth saying. People either need to differentiate their claim from garbage or put a footnote/cite to more information which does that – or at least respond to very short, low effort questions like “Reasons?” or “Source?”.

If you intuitively think a claim matters, that does differentiate it from garbage, but only barely. Don’t ignore your intuitions. But favorable intuition generally means the claim is worth thinking about for a few seconds or maybe even a few minutes (by the person with the intuition, not by other people who don’t have that intuition). Then you can see whether that thinking gets anywhere.

Note: People shouldn’t preemptively explain or footnote everything. If it’s a standard claim or it’s available in books then they might expect readers to already agree, already know where to look it up, or otherwise get the point or choose not to dispute it, even with very little information. If you think people won’t object then you can save time by explaining less. You always have to make choices about where to focus your explanatory effort. But this will go wrong sometimes, even if done in good faith, so you have to be prepared to give follow-up elaborations or refer people to sources that do that for you. What you must not do is say “figure it out yourself” and put a big burden on critics/doubters. It’s up to you to differentiate your claim from noise somehow, and if you won’t do that then no one should listen, and no complex or effortful criticism should be required. It’s fine to sometimes omit some details initially but then it’s also fine for someone to tell you which additional details they want.

Bullshit

I found another article about Brandolini’s Law that I had comments on.

This post claims that the meaning of the law hinges on the word “bullshit”, which actually does not refer to any kind of error, mistake, or falsehood. Instead:

bullshit is a statement made without regard to the truth and connotes overstatement, exaggeration or falsehood. Spewing bullshit, however, is not the same as lying; rather the bullshitter has no real knowledge or care as to whether what they are saying is truthful or not.

I think you can handle that stuff with replies like “Source?”, “Details?” or “Reasons?”

It’s only the malicious lies that are more trouble, e.g. when you’re asked for a source and you name a book you’ve never read and don’t believe contains the information you claim it does, just to pretend that you have a source. That kind of behavior sends people on wild goose chases. But it’s also socially punishable. When you check the book and page given, and the information isn’t there, then it’s pretty clear the person did something bad by lying to you. You and others can ignore them going forward. If they apologize you could perhaps give them another chance, but certainly anyone with a pattern of doing this can be caught and treated accordingly.

The world isn’t great at catching this stuff but that’s because people are lazy social climbers. They broadly think they have more to gain from positive assertions than from being a critic and fact checker, and they don’t want to investigate issues in detail. They rarely look up cites. So people get away with some lying about cites. But if anyone were actually doing the work to criticize stuff, and looking up some bogus cites, the bad actors would be caught and exposed and pay a price with their reputation.

The next paragraph I quote is given as an example of bullshit. He made it up with zero research. He doesn’t know if it’s true or not.

People in Canada are actually in favor of global warming because a warming planet will mean that their property values will increase dramatically as their arctic climate becomes more temperate. More of their country will be inhabitable and arable which will be an economic boon to them.

Now we’ll see what he says about this bullshit example:

Now, think about how much research it would take to disprove that statement. I’d have to find and dig into surveys of Canadian views of global warming. Not just whether they believe its happening or man-made, but also whether they welcome it. Even if I could find a survey addressing this point, there’s a pretty good argument that it’s skewed as I would think that even if some Canadians are in favor of a warming planet many probably wouldn’t say that out-loud to a someone taking a survey. So, my simple bit of bullshit would take quite a bit of effort to refute, whereas thinking of it and typing it out took no time or effort at all.

No way. All you have to reply is “Source? Reasoning?”

Is it poll data? You tell me the poll.

If it’s not poll data, what is it? Speculation based on a mental model of how people work? Unless my own mental model agrees and finds your conclusion plausible, I’ll ask for details of your mental model. Share it or your idea is too incomplete for me to get value from – you aren’t giving me a way to understand your idea enough to use it myself instead of just trusting your conclusion. Trusting conclusions is bad for various reasons including that I can’t adequately double check them for errors or modify them to improve them because I don’t know how they work. Connecting ideas to other ideas also doesn’t work well when I can’t break the conclusion down into parts and understand it.

Trusting conclusions that other people thought of is something to be careful with. I won’t just do it because you individually made up some bullshit. It takes more special circumstances. Like I know a bunch of scientists researched orbits and relativity and made GPS satellites. I trust that those work. Many smart people checked their work. And many, many people have actually used GPS navigation in their car successfully. And I can see on my phone where GPS thinks I am, on a map, and compare that to where I actually am (I can look at street signs or buildings in person), and in that way I can see that GPS works. And I can go to other locations and see that it also works there. And I have a smartphone and have personally tried the GPS. So I trust GPS even though I don’t know all the details. It doesn’t work great as an idea for me to build on, modify, or check for errors. But GPS is actionable for me in terms of some practical uses like phone maps. And details of GPS, orbits and satellites are publicly available in books and are taught at universities. So, if I cared more, I could learn more about it.

Another example of trusting someone else’s conclusion is using a car mechanic. Details about cars are public knowledge that I haven’t personally studied. So I have a car mechanic look at my car and evaluate what’s wrong. Then I probably trust his judgment and act on it by having him repair the problem. (If I have doubts I can get a second opinion or even start learning about cars myself.) The conclusion about the car’s problem is not the same as an idea that I understand myself, but it’s useful and actionable to me because the guy who does understand it is also available to use that understanding to repair my car. And a repaired car is something I can use myself to drive places even though I don’t know much about the engine. And the mechanic has been in business for years and repaired lots of cars successfully. And I know there is effective training where people can learn to be mechanics. I have many pieces of evidence related to this and it fits into my model of the world, and also basically everyone else agrees too.

These cases are wildly different from accepting some bullshit idea made up by one or a few people without understanding the details or reasoning. With the bullshit idea, I’m going to ask for details and reasoning.

People can trick you. You could ask a scientist for expert advice and believe he’s telling you stuff that’s in textbooks and is basically uncontroversial. But then he throws in some bullshit he made up. And some car mechanics tell you that your car has more problems than it does because they want to sell you more repair work. This kind of tricking is intentional dishonesty rather than just lazy bullshit, and it’s possible to get caught for it. If you get caught, you aren’t just wrong about some idea. You are caught being way worse than wrong – a liar. Whereas with low-effort bullshit, if you’re caught being wrong, it’s not as bad – though it’s still worse than an honest mistake: you can be revealed as a lazy thinker who throws out half-baked ideas overconfidently, which is worse than making a mistake while trying your best. However, the scientist who presents bullshit as consensus, or the car mechanic who lies about good parts being broken, is a cheat who is rightly seen very negatively, so that stuff is discouraged. Mere bullshit doesn’t need to be discouraged nearly so much because it’s not that bad: you can just say “Why?” instead of putting in more effort than the bullshitter.

Some people have vulnerable thinking methods that sometimes lead them to put in way more effort than the other guy, and this can be exploited by bullshitters. But that’s avoidable. The basic issue there is a big ingroup/outgroup distinction. People are tribalist, give huge benefit of the doubt to their tribe, and are resistant to considering ideas from other tribes. So a bullshitter in another tribe gets ignored but a bullshitter in their own tribe gets undue attention. This is a big problem but it’s not primarily about bullshit; it’s about ingroup/outgroup bias. People should learn to be more objective, critical thinkers instead of being super biased in favor of an ingroup. (Here’s another way to think of this: Basically no one is gullible to everything said by anyone. That wouldn’t work. People are often gullible, but it takes some degree of trust first, some rapport, some seeing a person as part of your own social circle instead of as an outsider.)

Contradictory Law

Also, Brandolini’s Law is contradicted by another law from the same list I got it from:

Hitchens' Razor: What can be asserted without evidence can be dismissed without evidence. If you make a claim, it's up to you to prove it, not to me to disprove it.

I don’t fully agree with that either, but it’s still notable that laws from the same worldview and author contradict each other.

The whole “burden of proof” idea focuses on positive justifications instead of critical thinking. Critical Fallibilism’s approach is about building up a set of powerful criticisms that refute all known bad ideas so that it’s hard to come up with new, bad ideas which aren’t already addressed by existing criticism. If you can come up with something that I don’t already know anything wrong with, that’s an important contribution – either it’s a good idea or it helps me find a weakness in my criticisms.

I want to learn to recognize dumb ideas and be able to point out what’s wrong with them. If you can come up with a dumb idea that I can’t recognize – that doesn’t fit any patterns and arguments that I already know – then that’s notable and worth considering.