Morality without Foundations
My morality dialog argues that many different moral goals (such as maximizing squirrels or minimizing carrots, in the whole universe, over all of time), when taken seriously, converge on the same intermediate goals, actions and values (like control over reality, error correction, science, rationality, intellectual honesty, free speech, free markets and peace).
It explains that the foundations of morality don’t matter much, as long as there is a strong, hard goal. “Maximize squirrels” works fine for developing a good moral system, but “increase the squirrel population by three” doesn’t work because it’s too easy: you can do all sorts of things wrong and still succeed at it. Vague moral ideas also don’t work well. A challenging moral ideal that pushes us to be effective works best.
As thought experiments, it’s worthwhile to consider a variety of different moral foundations, goals or ideals. What would life be like for individuals in that scenario? What would society do (with good leadership)? Some ideas keep coming up in every scenario. Those general-purpose moral ideas, which are effective for a wide variety of purposes, are the most interesting and valuable.
It’s good to practice thinking about the big picture and the long term. Many short-term actions that increase the number of squirrels are bad for maximizing that number in the long run. E.g. we don’t need more squirrels on Earth, nor do we need to like squirrels.
The article talks about how to make a civilization powerful. What should we do now if we want to get enough control over reality to, in the future, populate most planets with squirrels? We need to figure out how to gain a lot of knowledge and wealth, and how to avoid disasters like a meteor or pandemic wiping us out. And the answers would be the same if we later populated those planets with paperclips, dogs or computers instead of squirrels. We’d also reach the same answers if we wanted to make sure those planets never had any squirrels. It’s always things like peace, freedom and reason that work well for becoming powerful enough to achieve big goals.
The dialog also brings up the idea of only doing projects when they’re easy, which reduces the risk of errors. That was a precursor to my ideas about overreach and about focusing on doing “easy” things (easy meaning resource-efficient with a low error rate).
Note that we’d have plenty of time to change our minds about our moral goals while our civilization is getting more powerful. If squirrel maximization were the wrong goal, the result would still be fine as long as we have intellectual freedom and critical discussion. We’d figure out a better moral ideal long before we actually took the step of filling trillions of planets with squirrels. That’s another way in which moral foundations don’t matter much.
If this interests you, read my morality dialog.
If you like the dialog format, view a list of 21 dialogs I wrote around the same time (2006). I also wrote Pursuit of Happiness (2010) and Non-Consumption of Philosophy (2017) as dialogs.