Introduction to Theory of Constraints

Eliyahu Goldratt's philosophy, Theory of Constraints (TOC), is about how to think. He talks about concepts like goals, focus, bottlenecks, local optima, excess capacity and conflict resolution.

Most TOC material is aimed at managing businesses, but Goldratt included other examples, like solving family conflicts, in his books. He said his goal was to teach the world to think well, and many of the concepts intentionally have wider applicability. Goldratt's book The Choice focuses more on philosophy than on business.

Goldratt studied physics in school, then worked in software, before focusing on business management. He thought the methods of the hard sciences should be used for most of life, and he applied them to business and philosophy. Goldratt sold over ten million books and his ideas are taught in business management schools. He wasn't just an intellectual and author: he created a consulting business and helped many businesses grow dramatically (he focused mainly on big improvements, not minor optimizations).

Along with Critical Rationalism, Theory of Constraints is one of the main philosophies that Critical Fallibilism builds on. This article will discuss The Goal then go over many more TOC ideas.

If you'd prefer to watch a summary video before reading this article, click here.

The Goal

Eli Goldratt’s first and most popular book, The Goal, teaches that we need to know what our goal is. Formulating our goals in words helps us consider and analyze them. That helps us take actions and make changes that will achieve our goals.

The Goal is a novel about a manager at a factory who is under pressure to make the factory perform better. Secondarily, there are scenes with his wife and children, so you can see how the ideas apply outside of business.

TOC advocates goals that involve a process of ongoing improvement so things can be good now and in the future. E.g. a company should aim to make money now and also in the future.

The Goal teaches five focusing steps. They’re a process for improving business performance. They try to focus your attention on the most important changes to make.

The Goal explains that most improvements aren’t very effective. They're optimizing local optima, but that doesn’t significantly help the big picture (the global optimum). We want success at our goal, e.g. making money, and the improvements we make are a means to that end. Success at our goal is called throughput, which means moving resources through a system to a goal at the end. An example of throughput is turning raw materials into intermediate components and then finished products in a factory (which are then sold to make money).

Improving a local optimum could be making one step in a factory go faster. In the small picture, looking at things narrowly, that looks like an improvement. But will that result in more products being produced? Probably not. Why? Because there are many steps, and the one you improved probably wasn’t the slowest step. The materials will still have to go through the slowest step before they’re done.

The key to focusing improvements in the right place, and making them globally effective, is to find bottlenecks. Bottlenecks are also called “constraints” or “limiting factors”. A bottleneck is the slow part of the system that other stuff has to wait on. Improving the bottleneck results in more throughput.

In general, there’s only one bottleneck. This is like a metal chain: a chain has only one weakest link. Making the other non-weakest links stronger won’t make the whole chain stronger. If the weakest link will break when the chain is pulled on with 5000 newtons of force, then increasing some other link’s capacity from 8000 to 8200 newtons won’t make the overall chain stronger. The limit for what the chain can hold will still be 5000 newtons. To increase the strength of the chain, you have to find and strengthen the weakest link.
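Here's a minimal sketch of the chain analogy in code, using the made-up newton figures above: the chain's strength is simply the minimum over its links, so strengthening any non-weakest link leaves it unchanged.

    # Chain strength equals the strength of the weakest link (illustrative numbers).
    links = [8000, 5000, 9000, 7500]  # newtons each link can withstand

    def chain_strength(links):
        return min(links)

    print(chain_strength(links))  # 5000

    links[0] = 8200   # strengthen a non-weakest link (8000 -> 8200)
    print(chain_strength(links))  # still 5000 -- the chain is no stronger

    links[1] = 6000   # strengthen the weakest link (5000 -> 6000)
    print(chain_strength(links))  # 6000 -- now the whole chain is stronger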

TOC says optimization away from the bottleneck is wasted. Don’t optimize non-constraints. Focus your optimization on constraints.

Focusing Steps

The five focusing steps are a way to approach any problem. Increasing production of a factory is just one example. The steps are listed below, with a small sketch after the list:

  1. Find the constraint.
  2. Optimize the constraint. (Make sure it’s used efficiently, e.g. if the bottleneck is a machine or tool, make sure it isn’t idle for lunch – not everyone should eat lunch at the same time.)
  3. Subordinate everything else to the constraint. (The constraint has priority, so don’t let other parts of the system cause problems for the constraint. Also don’t let the non-constraints produce more than the constraint can process.)
  4. Add more capacity at the constraint. (Optimize and subordinate first. Step 4 isn’t always needed. Be careful with turning something else into the constraint. There will always be a weakest link in the chain that you organize around. Where do you want it to be?)
  5. Check if the constraint moved. If it moved, go back through all the steps instead of acting on old policies and inertia.
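Here's a rough sketch of steps 1, 4 and 5 with invented workstation rates: throughput is set by the slowest station, adding capacity anywhere else doesn't raise it, and after you elevate the constraint it can move, so you re-check rather than running on old assumptions.

    # Invented hourly rates for a three-station line.
    rates = {"A": 45, "B": 30, "C": 40}

    def find_constraint(rates):
        return min(rates, key=rates.get)   # step 1: the slowest station limits the system

    def throughput(rates):
        return min(rates.values())

    print(find_constraint(rates), throughput(rates))  # B 30

    rates["C"] += 10                 # add capacity at a non-constraint...
    print(throughput(rates))         # ...still 30: no extra throughput

    rates["B"] += 20                 # step 4: add capacity at the constraint
    print(throughput(rates))         # 45

    print(find_constraint(rates))    # step 5: the constraint moved -- it's A now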

Excess Capacity or Balanced Plants

The Goal also discusses variance and statistical fluctuations. A workstation in a factory might process 30 parts per hour on average, but in some hours it only processes 20 parts and in other hours it processes 50. Combined with dependencies between workstations, the result is that a balanced plant is inefficient.

A balanced plant means that every workstation could have 100% utilization. It's designed to avoid idle time. In simple scenarios, that means each station has the same capacity, e.g. 30 parts per hour. If one station could do 40 parts per hour, the extra would be wasted because it'd only receive 30 parts per hour to work on from the previous station. Intuitively, this sounds to most people like a good, efficient idea. But, due to variance, it doesn’t work out mathematically or practically.

Suppose we have three workstations, A, B and C. They form an assembly line: raw materials are processed in A, and the outputs from A are then processed in B, and the outputs from B are processed in C, which produces finished products. There are dependencies: B depends on A, and C depends on B. It’s the combination of variance and dependencies that makes balanced plants work poorly.

Assume, for simplicity, that B always processes 30 parts per hour. What happens when A has positive variance? If A produces 50 parts this hour, that’s more than B can process. We’ll get a build-up of 20 extras in front of B. The plant floor is cluttered. Now we need storage space. If the variance is big enough, we’ll have to halt production on A until B catches up.

What if A processes too few parts? Then B will lose production. A will get 20 parts done in an hour, and B will finish those in 40 minutes, then spend 20 minutes idle. (For a simple mental model, you can assume the workstations work for one hour, then transfer what they made to the next workstation, then repeat. The conclusion wouldn’t change if parts were transferred immediately.)
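Here's a small simulation sketch of that mental model. The numbers (hourly capacities of 20, 30 or 40 parts, and 40 parts of floor space per station) are made up; the point is that every station averages 30 per hour, yet because each station can only work on what the previous one has already finished, the line's throughput comes out noticeably lower.

    import random

    random.seed(0)
    HOURS = 10_000
    FLOOR_SPACE = 40  # max parts that fit in front of each downstream station

    def hourly_capacity():
        return random.choice([20, 30, 40])  # every station averages 30 parts/hour

    wip_b = 0   # parts waiting in front of B
    wip_c = 0   # parts waiting in front of C
    finished = 0

    for _ in range(HOURS):
        # Each station works for an hour on what's already in front of it, then
        # passes its output downstream (halting early if the downstream floor is full).
        c_out = min(hourly_capacity(), wip_c)
        b_out = min(hourly_capacity(), wip_b, FLOOR_SPACE - (wip_c - c_out))
        a_out = min(hourly_capacity(), FLOOR_SPACE - (wip_b - b_out))
        wip_c += b_out - c_out
        wip_b += a_out - b_out
        finished += c_out

    print(finished / HOURS)  # roughly 26-27 parts/hour, not the 30 each station averages

If the stations each varied but never had to wait on one another, the average would come back to 30; it's the dependencies that turn ordinary variance into lost throughput and clutter.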

Suppose we don’t want to lose production on B. What can we do to protect against a shortage of incoming materials? We can create a buffer of extra parts in front of B. We can have parts in storage, so if A doesn’t finish enough parts this hour, B can keep working using stored parts. Alternatively, we could increase the production capacity of A. If A can do 100 parts per hour, it'll be rare that it doesn’t finish at least 30. But then we’ll average 70 extra parts per hour which can’t be processed by B. To avoid a huge pile of extra parts, A could make 30 parts then stop working for the rest of the hour so it doesn’t produce too much. That’s an unbalanced plant with excess capacity at workstation A. But that may actually be a good thing.

Suppose we keep a buffer in front of B. The bigger the buffer is, the safer we are against variance. But a bigger buffer means we need more storage space and we have more money tied up in parts. So there are tradeoffs. Let’s say we decide a good maximum buffer size is 100 parts. We have a balanced plant plus this buffer. What happens when A has positive variance and makes extra parts? If the buffer is full, then A will halt work. And what if A makes too few parts? We start using the buffer.

Let’s say we have a full buffer but then have a bad day at workstation A, which produces 50 fewer parts than average (190 parts instead of 240 for the 8-hour workday). Half the buffer is used up. What happens tomorrow? On average, A produces the same number of parts (240) that B uses each day. For the next few days, A has average days, and the buffer stays at 50 units. It isn’t replenished because A doesn’t produce any faster than B. The only way to replenish the buffer, with a balanced plant, is to get lucky and have some above-average productivity.

We’re under time pressure to replenish the buffer. We judged that a buffer of 100 parts is the right amount to protect us against the risk of losing production time on B. (It won’t give a 100% guarantee, but it provides a satisfactory amount of risk reduction.) But here we are operating for days with a buffer of only 50. That’s more risk than we wanted. If A has some more negative variance, we could lose production at workstation B that a full buffer would have prevented.

We need to replenish B's buffer back to 100 parts quickly, before we have more bad luck. To do that, A needs to produce parts faster, on average, than B uses them. E.g. A needs the capacity to produce 35 parts per hour. Then it can replenish the buffer at a rate of 5 per hour (on average). That’s an unbalanced plant because A has more capacity than B, which means that most of the time A will have to work below its maximum capacity (e.g. some people or machines will work slower than they could or take breaks).
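As a quick arithmetic sketch of that replenishment point (same made-up numbers): with no excess capacity the buffer can only be refilled by luck, while a little excess capacity refills it in a predictable number of hours.

    import math

    BUFFER_TARGET = 100
    B_RATE = 30            # B consumes 30 parts per hour

    def hours_to_refill(a_rate, current_buffer):
        surplus = a_rate - B_RATE      # how fast the buffer can grow, on average
        if surplus <= 0:
            return None                # balanced plant: only good luck refills it
        return math.ceil((BUFFER_TARGET - current_buffer) / surplus)

    print(hours_to_refill(30, 50))  # None -- A merely keeps pace with B
    print(hours_to_refill(35, 50))  # 10 hours of A running at full capacity
    print(hours_to_refill(50, 50))  # 3 hours -- more excess capacity, faster recovery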

What if we have a lot of excess capacity on A but no buffer in front of B? Would that work? It would mostly work but there are risks. What if something breaks and A produces 0 parts for a while? It’s safer to have some buffer. There’s a tradeoff. The more excess capacity A has, the less buffer we need for B. Or the larger the buffer for B, the less excess capacity A needs. We can look at how expensive the buffer is, and how expensive production capacity for A is, and choose how much of each to buy. Some of both is generally best, but we could focus mostly on one or the other depending on their costs.

So a balanced plant doesn’t work well. We need a buffer to deal with variance. And we need excess production capacity to be able to replenish the buffer.

Which workstations need buffers? Only the bottleneck. And which workstations need excess capacity? Every non-bottleneck.

Let’s now turn our attention to workstation C. If it has the same average capacity as B, then it will sometimes have a productive hour and get ahead. It will then run out of parts to work on and stop working. On the other hand, sometimes it’ll have a bad hour and get behind. Then parts will build up in front of it. Suppose, due to negative variance on C, we get 50 extra parts in front of C. What happens next? On average, B and C produce at the same rate, so we'll keep 50 parts in front of C indefinitely. We'll only clear away those parts with good luck at C. And we’re at risk of bad luck at C, in which case more parts would pile up. And with the backlog, we might have a late customer order. C needs to catch up, reasonably soon, by going faster than B. To do that, C needs excess capacity, e.g. the ability to process 35 parts per hour. That’ll let it recover from its own bad luck or handle B's good luck of producing extra parts. But having a capacity of 35 parts per hour also means that, on average, not all of C's production capacity will be used.

Note that it doesn't make very much difference if C has 5 parts per hour of extra capacity, or 4, or 6, or 20. These amounts all accomplish the same goal of letting C catch up pretty quickly. A principle here, which came up in Introduction to Critical Fallibilism, is that changes in excess capacity usually aren't important.

So far we analyzed variance for A or C. The situation is more difficult – so buffers and excess capacity are even more important – if B also has variance. And B does have variance. Everything has variance. Variance is an unavoidable part of life. You can reduce variance if you use automated machinery instead of people, but you can’t eliminate variance. Machines can break or malfunction. Lower variance production processes let you succeed with less excess capacity and smaller buffers, but you still need some, not a balanced plant.

Variance on B means that C can get behind while having an average hour because B has a good hour. That's a second way for parts to pile up in front of C.

A plant may have multiple bottlenecks if it has multiple production lines. It can also have a more complicated production line that isn’t a linear chain. E.g. there could be three workstations that make parts which feed into B, then B combines all those incoming parts, and then the output of B feeds into multiple later workstations. There could also be multiple B workstations, and they could have different production characteristics (e.g. one uses a fancy new machine, another uses an older machine, and a third uses hand tools). These details complicate the analysis, but the principles and conclusions remain similar.

A properly designed plant should have a known bottleneck with a buffer and have excess capacity for other workstations. Don’t aim for a balanced plant. You can’t avoid having a weakest link, and having a bunch of links that are tied (or close to tied) for weakest makes things more chaotic not more efficient. It’s better to know your limiting factor and plan around that instead of having it change frequently due to variance. You can change bottlenecks if you figure out a major improvement, but that should be uncommon so your production processes are reasonably stable.

Prices for capacity for different things tend to vary significantly, so choose something expensive for your bottleneck, then get plenty of excess capacity on most workstations since their capacity is cheap. This is related to decision making in general. Usually there are lots of factors that are easy to get plenty of, and only zero, one or a few factors which are hard to get enough of. With factories or life in general, if there are many factors which are hard to get enough of, what you're doing may be too difficult, and you should give some consideration to changing approaches.

Other TOC Books

Goldratt wrote reasonably short, readable books. I recommend reading The Goal first. I'll list some other books I'd recommend reading in a good reading order and give brief comments on what they're about. The books listed are novels except for The Choice. Besides these and other books, Goldratt also made video recordings of lectures.

It’s Not Luck covers problem solving with the method of finding a mistaken assumption behind a conflict instead of compromising. It features three different businesses and some family life examples. It discusses using trees and diagrams to aid thinking.

The Choice applies Goldratt’s thinking to life in general, not business. It focuses on real life discussions with his adult daughter. Big ideas include inherent simplicity, not compromising, not blaming others, no conflicts in reality, win/win solutions, people are good, and people have enough intelligence (don’t need to be a genius). It also has reports from Goldratt’s consulting business. This is Goldratt’s most philosophical book.

Critical Chain covers project management and moving most of the margin for error (buffer) from the individual tasks to the project as a whole. Statistical fluctuations even out better when they’re more aggregated.

Isn’t It Obvious? covers keeping inventory buffers at a more aggregated level (e.g. at a regional warehouse instead of at individual stores) and pulling inventory as needed (replenishment of sales) instead of pushing out months of inventory ahead of time based on unreliable forecasts.

Necessary but Not Sufficient is about an enterprise resource planning software company. It discusses the need to focus on key features that help businesses make more money and the need to make policy changes in order to benefit from new technologies.

TOC Idea Explanations

In this section, I'll explain some of my favorite TOC ideas.

Making Changes

TOC gives three steps for thinking about changes:

  1. What to change? (Pinpoint the core problems – Effect-Cause-Effect method)
  2. What to change to? (Construct simple, practical solutions – Evaporating Clouds method)
  3. How to cause the change? (Induce the appropriate people to figure out solutions themselves – Socratic method)

Effect-Cause-Effect involves finding a problem first. A problem, besides being something that you want to improve, is also an effect of some cause. Second, you try to guess what the cause may be. Third, for each cause you’re considering, you think of other effects that it would also cause. Fourth, you make predictions about those effects. Fifth, you check reality to see if the predictions are right.

For example, you may think the cause of low sales is an economic recession. If there’s a recession, what else would it cause? Other companies would also have low sales and there would be newspaper articles about the recession. You can predict those things, then check if you’re right.

Low sales could also be due to poor quality products. If that’s the case, it would probably cause customer complaints. You could predict that your company is receiving more complaints, then go check with the people who answer the phones, read the mail, or work in stores where they speak with customers.

Be careful because, as Critical Rationalism taught us, we need to consider rival theories. Don’t just build up support for your favorite idea by making some predictions that turn out correct. Also consider alternatives which contradict your beliefs. When you have two ideas that disagree, look for contradictory predictions. When you check predictions that disagree, at least one of the predictions will be wrong, so you'll be able to rule something out. That’s also the best way to do experimental tests in science.
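Here's a toy sketch of Effect-Cause-Effect combined with the rival-theories point. The hypothesized causes, predicted effects and observations are all invented for illustration: each candidate cause predicts extra effects, and checking reality rules out the causes whose predictions fail.

    # Hypothesized causes of low sales, each with additional effects it predicts (made up).
    hypotheses = {
        "economic recession": ["competitors' sales are down", "newspapers report a recession"],
        "poor product quality": ["customer complaints are up", "return rates are up"],
    }

    # What we actually observed when we checked reality (also made up).
    observations = {
        "competitors' sales are down": False,
        "newspapers report a recession": False,
        "customer complaints are up": True,
        "return rates are up": True,
    }

    def surviving_causes(hypotheses, observations):
        # Keep only causes whose predicted effects were all observed.
        return [cause for cause, effects in hypotheses.items()
                if all(observations.get(effect, False) for effect in effects)]

    print(surviving_causes(hypotheses, observations))  # ['poor product quality']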

Evaporating Clouds are a diagramming technique to resolve conflicts in a win/win way without compromising. The goal is to avoid win/lose outcomes, where someone gets a bad outcome, by instead seeking mutual benefit: outcomes that are good for everyone. This is achieved by solving the core conflict, which makes the problem “evaporate”. The method involves logically connecting both sides of a conflict to shared underlying values or goals, and then finding a mistaken assumption in the logic. A conflict shouldn’t logically come from a single starting point, so there must be a logical error that can be reconsidered – one of the connections has room for an alternative. This method is explained in It’s Not Luck and I explained it here.

The Socratic method means asking questions to help guide people to figure things out for themselves instead of telling them what to think. This helps people feel ownership for ideas and be able to work with and optimize them. When you tell people answers, they often say “yes” without a thorough enough understanding.

It’s important to make the right changes. We have limited ability to change and it costs resources to change. Get really clear on what the right change is before changing much. Consider small scale tests before making a big change.

Unrealistic Goals

A realistic goal is one that people see how to achieve without changing much. People will only change the rules, or make major changes, in pursuit of an unrealistic goal – something they can’t achieve with business-as-usual. So sometimes it’s important to aim for big wins which people initially see as unrealistic.

Limitations

Major progress comes from reducing limitations. E.g. technology lets us overcome limitations and solve more problems.

But there are (mostly unwritten) rules, policies, modes of behavior, etc., to cope with limitations. The rules were created in the past when we had to live with the limitation. If you remove a limitation, but leave the rules, then you’re acting like the limitation is still there. There’s little benefit to removing a limitation if you don’t also change the rules, behaviors and policies designed around it.

To prevent a limitation from affecting you, you have to remove it and remove the rules which cause people to act as if the limitation were still there. You need to remove the limitation and update your behavior for the new situation.

To benefit from new technology, identify what limitation it helps with. Then figure out what rules, policies and behaviors for dealing with that limitation exist. Then reconsider them.

Forecasts

You can’t predict the future very well, so find an approach where you don’t need to. This is part of the value of “just in time” production or a store having more inventory turns: less predicting the future.

Evaluate ongoing projects by actual consumption of safety buffers, not by forecasts.

Safety

You need safety (margins of error) to deal with statistical fluctuations. Fluctuations are common, e.g. a person who assembles 50 widgets per hour on average will actually do 60 in some hours and 40 in other hours. A programmer will sometimes take longer than expected to fix bugs or add features, but shorter than expected in other cases. Customers will buy more of a product than average on some days and less than average on other days.

Fluctuations should be dealt with at the highest, most aggregated level that you can. The more fluctuations you’re dealing with, the better the chance that there will be positive fluctuations to make up for some of the negative fluctuations. Having some fluctuations basically cancel each other out is far more efficient than dealing with every fluctuation individually.

For example, customer purchasing fluctuates less when you look at a month instead of a day because the slow days and busy days will probably partially average each other out over a month. Customer purchases also fluctuate less when you look at all stores instead of one store or when you look at all products instead of one product.
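Here's a small sketch of why aggregation smooths fluctuations, assuming independent random daily demand (the mean of 100 and standard deviation of 30 are made-up numbers): the relative fluctuation of a 30-day total is roughly the square root of 30 times smaller than the relative fluctuation of a single day.

    import random
    import statistics

    random.seed(0)

    def daily_demand():
        return random.gauss(100, 30)   # one store, one product, one day

    days = [daily_demand() for _ in range(10_000)]
    months = [sum(daily_demand() for _ in range(30)) for _ in range(10_000)]

    def relative_fluctuation(xs):
        return statistics.stdev(xs) / statistics.mean(xs)

    print(round(relative_fluctuation(days), 3))    # ~0.30
    print(round(relative_fluctuation(months), 3))  # ~0.055 -- about sqrt(30) times smaller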

So, as a first approximation, using a smaller number of safety buffers is preferable (as long as everything that needs safety has some). Make the safety buffers more global, less local. Try to have each buffer apply to a larger number of things rather than using more separate buffers.

In project management, put the main safety buffer at the end of the critical chain, so it’s for the whole project. Have people estimate the time needed for a 50% chance to finish their task on time, not a 90% chance. A 90% or higher chance of finishing on time is what people use when they're adding a margin of error to the time estimate for their individual task, and a 50% chance of finishing on time means no margin of error for that task.

If people do 90% confidence time estimates for every task to protect themselves from being late, then two thirds of the planned project time will be safety buffer, and a lot of that time is likely to be wasted. People rarely finish in under half the time they estimated. It saves time to have a buffer for the whole project rather than having buffers for the individual tasks. Additional buffers should be used where sub-projects feed into the critical chain.
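To put assumed numbers on that (they're mine, not from the book): suppose a project has 10 sequential tasks, each with a 50% estimate of 10 days, which people pad to 30 days for roughly 90% confidence. Sizing the single shared project buffer at half the chain is a common critical-chain rule of thumb, used here only for illustration.

    TASKS = 10
    EST_50 = 10   # days: a 50/50 estimate with no personal safety margin
    EST_90 = 30   # days: padded so the task is ~90% likely to finish on time

    padded_plan = TASKS * EST_90                 # every task carries its own buffer
    critical_chain_plan = TASKS * EST_50 * 1.5   # 50% estimates plus one shared project buffer

    print(padded_plan)          # 300 days -- two thirds of it (200 days) is scattered safety
    print(critical_chain_plan)  # 150 days -- a single 50-day buffer protects the whole project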

For production, put safety buffers at bottlenecks, not at each individual step.

For inventory management, put buffers in plant warehouses and only send inventory further along the chain (e.g. to regional warehouses or retail stores) as needed based on consumption.

You do need smaller buffers at other places, e.g. retail stores and regional warehouses need some buffer to account for the shipping time to get stuff there. But those buffers are not the main safety and should be kept small.

Safety and Error Correction

Critical Fallibilism has something to add about buffers. They're connected with the concept of error correction.

The concept of safety buffers assumes error correction is happening at a lower level. Problems come up and are dealt with, and we’re looking at the bigger picture, not the details of that error correction. E.g., if a machine breaks (error), it gets repaired or replaced (error correction). Workers who troubleshoot problems (do error correction) are assumed.

In general, error correction requires some resources – time, effort, etc. So you need some spare resources (safety, extra capacity, margin for error) in order for the error correction to happen without a disruption.

If there were no spare resources, then any resources used for error correction would lead to a resource shortage because they'd have to be taken away from something else that needed them. E.g. if workers didn't have a spare minute in their schedule, then every minute they spent troubleshooting anything would put the project behind schedule. And if there were no spare parts or tools, then any usage of parts or tools for solving a problem that came up would take those parts or tools away from somewhere else that they were needed.

Drum Buffer Rope

TOC talks metaphorically about “drum buffer rope”. What does that mean?

A drummer synchronizes many people like in a marching band. Another example is using a drum so that many rowers on a boat can pull their oars in sync.

A buffer provides safety resources because problems will come up. Originally a buffer was a physical object that could absorb a blow. That turned into a metaphor and now we see buffers more as an intellectual concept.

A rope can tie hikers together so they don’t spread out. In the hike in The Goal, they put the slowest kid in front and said no one is allowed to pass him, which achieved a similar result without tying any kids with ropes. Similarly, don’t release raw materials onto the factory floor, to be worked on, faster than the constraint (the slow guy or bottleneck work station) can keep up with.
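Here's a minimal sketch of the rope with invented numbers: in both runs the constraint averages 30 parts per hour, but when raw materials are released at the pace of a fast first workstation, work-in-process piles up without bound; when release is tied to the constraint (the rope), WIP stays at or below the planned buffer.

    import random

    random.seed(0)
    HOURS = 1000

    def constraint_work():
        return random.choice([20, 30, 40])   # the bottleneck averages 30 parts/hour

    # No rope: the fast first workstation dumps 45 parts/hour onto the floor.
    wip = 0
    for _ in range(HOURS):
        wip += 45
        wip -= min(constraint_work(), wip)
    print(wip)   # roughly 15,000 parts cluttering the floor

    # Rope: release just enough material to keep a planned 60-part buffer at the constraint.
    wip = 60
    for _ in range(HOURS):
        wip += 60 - wip                # release is tied to what the constraint consumed
        wip -= min(constraint_work(), wip)
    print(wip)   # between 20 and 40 -- never above the planned buffer

Throughput is the same in both runs, since the constraint is never starved; the rope just prevents the clutter.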

Local Optima

Lots of mistakes seem to make sense locally but don't work well in the bigger picture. This is the mistake of focusing on the trees but not considering the forest adequately. It means focusing on details without seeing a broader perspective that the details fit into.

Cost accounting is based on local optima. It's a bad approach for big, complex organizations that are trying to work as a synchronized team.

Cost accounting was understandable in the past when managers had less information available about other departments (and when they did have information, it was often weeks old). So they didn’t have much choice but to make locally optimal decisions (like reducing costs), because they didn’t really know what the rest of the business needed. Now we have much better data and communication using computers and other technology, so we can and should pay more attention to global optima.

We also have bigger companies now, which means one part of a company can be further removed from the global picture of the company, which makes avoiding local optima even more important.

Win/Win Solutions

Directly state a conflict. Then consider the assumptions behind each side of the conflict. At least one is wrong. There are no inherent conflicts of interest. Apparent conflicts of interest are caused by incorrect assumptions.

Also directly state the common purpose that both sides share. This helps figure out what assumption is wrong. There's a mistake somewhere, otherwise people with a common purpose wouldn't be in conflict.

Stop blaming other people and circumstances. This gets in the way of problem solving.

Stop compromising and thinking there are no solutions. Accepting compromises is a huge obstacle to finding great solutions.

Simplicity

People look for complex solutions to complex situations and fail. They should look for simple general principles and simple solutions.

There are only a few really important elements in complex systems. The Pareto Principle says 20% of the causes are responsible for 80% of the effects. But that’s for independent causes. When everything is connected with many dependencies, then less than 1% of stuff causes over 99% of outcomes. When you focus on the important less than 1%, then the situation is much simpler and you can find a simple solution.

Complexity can be measured in different ways, like a system's number of elements or its degrees of freedom. Which measure you use can result in significantly different answers for how complex a system is. In The Choice, Goldratt explains degrees of freedom as “What is the minimum number of points you have to touch in order to impact the whole system?”. He says they’re often a better way to look at complexity than counting the number of elements.

Chains

The performance of a chain is determined by its weakest link. Optimizing stronger links is irrelevant. There’s only one weak link, or at most a couple. Most links have plenty of excess capacity.

This is another reason why under 1% of stuff is important to focus on: the weak link(s) are what matter. (This applies when there are dependencies – when you’re dealing with a chain. If there were actually a bunch of unrelated things, then it wouldn’t be a chain with a weakest link, and more like 20% of stuff would be important.)

Smarts, Intuition and Verbalization

You’re smart enough (most people only use maybe 3% of their brain power) and already have a lot of good intuition. You have intuition about your life. If you’ve worked in a line of business for a while then you’ll have intuition about that too. If you’re pretty familiar with something, you already have lots of good intuitions about it.

Intuition is valuable but underrated by people trying to be “rational”. However, verbalizing ideas (putting them into spoken or written words) is also valuable and underrated. Verbalizing helps you understand what you mean better. You can have a good intuition first, and then verbalize it, rather than stopping at just having the intuition.

To be smart, what you need to do is use your capabilities in the right way. E.g. focus, look for simple explanations, and find wrong assumptions. Don’t try to create complex answers. Don’t optimize the over 99% of stuff that won’t have much impact. Don’t give up and compromise. Don’t make excuses like blaming others.

Emergencies

People use simple, intuitive solutions to handle emergencies. Whatever worked in an emergency has something good and powerful about it – it’s so good it solved an emergency that the normal system couldn't handle. So consider using that solution all the time, even when there is no emergency. You’ll have to consider what harm it will cause to use it on an ongoing basis, and how you can fix that. An emergency solution plus a few fixes can be a great approach.

Science

Figure out cause and effect, and one underlying cause, instead of dealing with symptoms. Physicists think reality is simple with just a few underlying laws. When things look too complex, it’s because we don’t understand them and don’t know what to focus on. We’ll be able to create unifying, simplifying theories when we learn more.

Think first, act second. You need to understand what you’re doing – figure out the cause and effect relationships and underlying simple cause – before you can act effectively.

No Conflicts In Reality

There are no conflicts in reality. If two measurements disagree significantly on the height of a building, you don’t compromise between them. If one measurement says 100 feet tall and another says 2000 feet tall, then at least one is mistaken. A compromise would be saying it’s the average of the measurements, 1050 feet, but that's a terrible guess at the real height since it isn’t near either of our measurements.

No one will accept that a building has two heights or two weights and that they're both right. No one will accept that the real height or weight is the average of the conflicting measurements. Scientists will measure again, check their measuring tools for errors, check their methods for errors, and even consider if they might misunderstand the concept of length itself. But they won’t accept a conflict and try to go forward by compromising. This scientific attitude should be used in all of life.

Goldratt talks about this most in The Choice. You can also read arguments against inherent conflicts of interest between people in my article on liberalism or in Ayn Rand’s article The “Conflicts” of Men’s Interests.

Resisting Change

People say that others are stubborn and resist change. But people don’t resist all change. They make many small changes and they make some big changes like marrying or having a child. They resist some changes because they're concerned about potential problems with those changes (even if they don't know how to verbalize the problems). If someone resists a change you propose, don’t conclude that the issue is “people resist change”.

People say “yes but…” to help you. They are telling you what problems they need a solution to in order to accept your change.

Critical Fallibilism adds to this by saying it’s crucial to solve conflicts in a decisive way (find the wrong assumption) instead of with compromises. If your method of dealing with objections is to compromise, then disagreement is dangerous (it leads to weakening good ideas), so “yes but” (a type of disagreement) is unwanted. But with proper problem solving, objections help you make a better solution that handles more potential problems, so hearing “yes but” is helpful.

Silver Bullets

Social sciences look for gradual progress, but hard sciences say “Give me a big enough lever and I’ll move the world”; they try for dramatic breakthroughs. TOC offers silver bullets (big wins from super effective solutions) to a world which doesn’t believe in them outside of the hard sciences.

Critical Fallibilism says that seeking silver bullets can be compatible with the gradualism advocated by Karl Popper's Critical Rationalism. How? In short, because TOC’s silver bullets are simple, individual steps, not complex mega-projects. Gradualism is about breaking things apart into incremental steps instead of introducing lots of complexity all at once, but it doesn’t necessarily require slowness or smallness.

Learn More

I've written blog posts about TOC, and I made a companion video for this essay: