Resolving Conflicting Ideas

I’ve written a lot about how to rationally resolve conflicts between ideas. It’s one of the most important issues in philosophy.

We have lots of ideas, and they disagree with, conflict with, and contradict each other, so what should we do about that? How do we evaluate the ideas and figure out what to accept or act on? How do we decide which ideas are good or bad, true or false, useful or ineffective? The goal includes being rational and objective about it, and taking into account our fallibility, rather than biasedly assuming our favored ideas are right.

This issue comes up when we ponder abstract ideas, make decisions, take actions (our actions are guided by ideas), have conflicts with other people (which always involve conflicting ideas), or have internal conflicts (which are between our ideas).

Rather than trying to pick winners and losers among ideas, Critical Fallibilism says we should find win/win solutions which address all the good points raised by all the conflicting ideas. To accomplish this, we can mentally model each conflicting idea as a person, imagine them discussing, and find a solution that satisfies each person. The people have to be reasonable and open to new ideas, though. Some ideas won’t work, and people can be satisfied by understanding why those ideas won’t work and why a different approach will get a result that is better according to their own values. In this mental model, you should think of yourself as a neutral arbiter or facilitator who guides the discussion and helps it along, rather than viewing yourself as an advocate for one of the ideas or for one side, faction or tribe.

This is basically the same process we should use when discussing disagreements with other human beings. There’s a lot of overlap between rationally approaching conflicting ideas within your own mind and rationally approaching disagreements with other people. That’s because the meaning of an idea doesn’t depend on whose brain it’s in, and rational analysis of ideas focuses on their meaning, content and structure.

Plus, when two people tell each other their conflicting ideas, each person then has both ideas in their own mind. So when talking with others, you actually do need to resolve a conflict of ideas within your own mind. And you should be objective about it rather than being a biased advocate of “your” idea against “their” idea.

How to resolve conflicts between ideas is basically the same issue as how to do problem solving. Problems involve conflicting ideas, e.g. ideas about what action to take next or which idea to believe. And wants or preferences are types of ideas. And when you have a conflict with reality (e.g. you want reality to be one way, but it’s a different way), that also involves conflicting ideas, because you always experience reality through your ideas about it, not directly. And if you want something that you think violates the laws of physics, then there’s a conflict between your want (an idea) and your ideas about the laws of physics. You can also have conflicting ideas about something that’s hard to get: should you stop wanting it or keep trying? And you can have conflicting ideas about time management: should you put more time (and/or other resources) into looking for a better solution or be satisfied with your best idea so far?

Here are some of my best articles to help explain these issues:

In We Can Always Act on Non-Criticized Ideas, I talk about how there are always win/win solutions available that none of our ideas have objections to (criticisms of).

In part 5 of my Aubrey de Grey discussion, I wrote “Think of yourself as the arbiter, and the conflicting ideas as the different sides in the arbitration. Your goal is not to pick a winner. That's what justificationism does. Your goal as arbiter, instead, is to resolve the conflict – help the sides figure out a win/win outcome.” Read the post for more explanation of the arbitration model. The whole discussion with de Grey is good, too.

In Treat Yourself Rationally, I talk about not assuming which side is wrong when you have internal conflicts. Don’t delegitimize parts of yourself as “hang ups” or “irrationalities” and then try to suppress or fight them. Use reason to truth-seek as always. (The same thing applies to disagreements with other people. Coming up with excuses for why disagreements (with yourself or others) shouldn’t be dealt with using rational arguments is one of the major forms of irrationality in the world.)

Coercion relates conflicting ideas to human suffering and provides context for the original problem-solving method I present in Avoiding Coercion. The method involves recursively backing off to less ambitious goals when you get stuck while facing time pressure. Rational thinking involves finding common preferences (non-coercive, win/win solutions) between your ideas. I also wrote Avoiding Coercion Clarification, and I connected coercion with rationality in Coercion and Critical Preferences.

Multiple Incompatible Unrefuted Conjectures uses the Avoiding Coercion method as a solution to an abstract philosophical problem about how to judge ideas. (Note: I was mistaken when I claimed that David Deutsch or his TCS philosophy already knew the problem-solving method in my Avoiding Coercion article.)

In Human Problems and Abstract Problems, I differentiate between something that’s difficult to accomplish (abstract problem) and something a person doesn’t like (human problem). By “TCS-coerce”, I mean coerce in the sense explained in my Coercion article, above. That use of the term “coercion” comes from the Taking Children Seriously philosophy.

Resolving Conflicts of Interest discusses people’s mistaken belief that losers (people who don’t get what they want) are a necessary part of life. If people’s interests inherently conflicted, then not everyone could get a good outcome. The idea of conflicts of interest has long been criticized by (classical) liberal political philosophers, and I have a section on it in my liberalism article. Just as there are no inherent conflicts of interest between individuals in a group (e.g. society), there are also no inherent conflicts of interest between ideas in a group (e.g. a mind). It’s logically the same issue. There is an objective truth about what’s good that all sides (whether they are ideas or are people) can be satisfied with.

In All Problems are Soluble, I discuss a person wanting to move an asteroid faster than the speed of light. He can solve that problem by changing his preference (stop wanting to do that). We can’t solve all problems just by changing our preferences, but we don’t have to prefer to violate the laws of physics nor want two contradictory things. When there’s a clear, objective reason that a preference is an error – when we’re actually wrong to want it – then changing our mind is the right approach to problem solving.

In Handling Information Overload, I explain how to deal with lots of information while facing time limits. You can choose an option (like careful review, skimming, reading only specific parts, or setting it aside for a week until you’re less busy) based on your schedule, what else you have to do, how important you guess the information is, etc. You may make mistakes doing this, as you may when doing anything. But the scenario “lots of available and potentially useful information, but not enough time to deal with it all” doesn’t pose an insoluble problem. A conflict between “I want to read all this information” and “I don’t have time” can be resolved.

In Can Win/Win Solutions Take Too Long?, I answer “no”.

My philosophical theory about resolving conflicting ideas involves using decisive criticism so that you don’t have contradictory, non-refuted ideas. I explain it in Yes or No Philosophy.