Digital vs. Analog Thinking

David Deutsch wrote in The Beginning of Infinity:

Another thing that they [computers] have in common is that they are all digital: they operate on information in the form of discrete values of physical variables, such as electronic switches being on or off, or cogs being at one of ten positions. The alternative, ‘analogue’, computers, such as slide rules, which represent information as continuous physical variables, were once ubiquitous but are hardly ever used today. That is because a modern digital computer can be programmed to imitate any of them, and to outperform them in almost any application. The jump to universality in digital computers has left analogue computation behind. That was inevitable, because there is no such thing as a universal analogue computer.

That is because of the need for error correction: during lengthy computations, the accumulation of errors due to things like imperfectly constructed components, thermal fluctuations, and random outside influences makes analogue computers wander off the intended computational path. This may sound like a minor or parochial consideration. But it is quite the opposite. Without error-correction all information processing, and hence all knowledge-creation, is necessarily bounded. Error-correction is the beginning of infinity.

People normally try to evaluate ideas and arguments with real numbers (an analog approach, e.g. percentages, or talking about how good an idea is as a matter of degree on a spectrum of goodness). Having only two evaluations is digital. Having five or fifteen possible evaluations for ideas would also be digital, but no such theory has been developed in epistemology. Critical Fallibilism’s (CF’s) two possible evaluations (non-refuted or refuted) seem to be adequate (more aren’t needed; nothing’s missing from the theory).

What if you take real numbers and round them off to get integers? As long as they are in a finite range (e.g. from 0 to 100) we can map them to a finite number of integers. But this doesn’t give us a proper digital theory. You don’t get rid of all the problems of analog just by dropping all the decimal places. If you’re going to do that, the analog part is not helping and you should just make a fully digital approach. Rounding creates errors rather than fixing everything. For example, rounding changes the ratio between the evaluations of two ideas. If one idea has an evaluation of 1.1 and another has an evaluation of 6.6, then the ratio is 1:6. However, if we round them, we’ll get 1 and 7, for a ratio of 1:7.
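Here’s a small Python sketch of that ratio distortion (just an illustration; the numbers are the ones from the example above):

```python
# Rounding two analog evaluations distorts the ratio between them.
a, b = 1.1, 6.6                  # the evaluations from the example above
print(b / a)                     # ~6.0 -> the true ratio is about 1:6
ra, rb = round(a), round(b)      # drop the decimal places
print(ra, rb, rb / ra)           # 1 7 7.0 -> after rounding, the ratio is 1:7
```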

If 1 and 0 are the only valid outcomes, and you measure 0.9, then it looks like the real value was 1. You could guess that 1 is a more accurate answer than either 0.9 (which is invalid) or 0 (which is further away). But if 0.9 is a valid value, and you round to 1, then you’re making your answer less accurate.

Rounding to 0 and 1 to correct errors is basically how digital computer hardware works. Either there is an electric voltage across a component or there isn’t. The voltage should either be, let’s say, 0 or 100 volts. Those are the only valid voltage values. But, in the real world, voltages fluctuate. Electronics aren’t perfect. If the computer measures 97 or 106, it error corrects that to 100; it treats that as there being a voltage rather than no voltage. By contrast, a voltage of 3 is treated as zero, as no voltage. This way, small errors in the voltage can easily be corrected. Actually, even larger errors can be corrected, e.g. a reading of 60 volts, which is way off, can be corrected to 100.
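To make that concrete, here’s a minimal Python sketch of the same error correction, assuming the 0 and 100 volt levels from above (real hardware does this with circuitry, not software):

```python
# Snap a noisy analog voltage reading to the nearest valid digital level.
# The two valid levels (0 and 100 volts) are the example values from the text.
VALID_LEVELS = (0, 100)

def correct(measured_volts):
    """Error-correct by picking whichever valid level is closest."""
    return min(VALID_LEVELS, key=lambda level: abs(level - measured_volts))

print(correct(97))   # 100 -> small error corrected
print(correct(106))  # 100
print(correct(3))    # 0   -> treated as no voltage
print(correct(60))   # 100 -> even a large error gets corrected
```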

It’s more complicated with refuted and non-refuted ideas because then you’re dealing with ideas rather than numbers. You can’t measure refutedness like electric voltages, but the general concept (the advantage of digital over analog for error correction) still applies. With digital, you’re trying to pick between specific options that are significantly different from each other, and it’s possible and realistic to get exactly the right answer. Whereas with analog, you’re trying to pick the correct spot on the real number line, which is impossible to get exactly right; the best you can do is get near it. With digital, you know most values can’t possibly be correct (they aren’t one of the finite and often small number of valid answers, like just 1 or 0), so it helps you correct small errors. Whereas with analog, any answer could be correct. Even if the answer is limited to a range, there are still infinitely many answers in that range, or even in a small part of that range. No matter how much you zoom in and look at details, there are always infinitely many potential answers, so you never get any help excluding anything. (Not only are there infinitely many real numbers between 0 and 1, there are also infinitely many real numbers between 0.59999 and 0.6, and it’s actually the same infinity.)

Analog is like digital but with infinitely many answers. Error correction algorithms can’t deal with that. They can’t fix infinitely many potential errors to get correct answers that are infinitely precise. Digital doesn’t try to be infinitely precise, and actually our computers are binary – every piece of data we store is either a 0 or a 1. There are as few valid values as possible while still storing any data at all. (If there were only one valid value, 1, you couldn’t store any data. There’s actually a nice jump to universality there, for data representations, when you go from one valid value to two valid values. That jumps from approximately zero functionality straight to universality – to being able to store any data. There’s no middle ground. There’s no system which can store a lot of data but not all. For more about jumps to universality, see Deutsch’s The Beginning of Infinity.)
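As an aside, here’s a tiny Python sketch of that universality: with only the two valid values 0 and 1, you can represent any data (text, in this example) and get it back:

```python
# Encode text as 0s and 1s, then decode it again. Two valid values suffice
# to store any data; one valid value couldn't store anything.
text = "analog"
bits = "".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 011000010110111001100001...
decoded = bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8)).decode("utf-8")
print(decoded)  # analog
```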

With digital systems, fluctuations away from the correct value are OK as long as you don’t get halfway or more to a different valid value. But that raises a question. How can fluctuations even be possible in digital systems? If there are no values in the middle, how can anything fluctuate to them? The short answer is that we build digital systems on top of analog physical reality. We might design our system so that the only valid values are 0 inches or 5 inches, but lengths in between still exist. The situation is similar with many other measurable units.

Having some tolerance or margin of error to handle fluctuations is good. It allows some random errors (statistical fluctuations, a.k.a. variance or luck) to be harmless. With analog, all random fluctuations are harmful because there is no straightforward way to know if the pre-fluctuation or post-fluctuation value is better. With analog, every fluctuation changes one valid value into another valid value, and there are infinitely many more valid values between the original and the new one, too. Basically, highly effective (but not infallible) error correction for digital systems can be built into the system as an algorithm, but error correcting analog systems requires creative, intelligent critical thinking.
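Here’s a short Python sketch of that difference, using the 0 inch / 5 inch example from above (the fluctuation sizes are just illustrative assumptions):

```python
# Digital error correction: small random fluctuations are harmless because
# we snap back to the nearest valid value. With analog, the fluctuated value
# would just be accepted as the new value, error and all.
import random

VALID_INCHES = (0.0, 5.0)   # the only valid values in the example from the text

def snap(measured):
    return min(VALID_INCHES, key=lambda v: abs(v - measured))

true_value = 5.0
for _ in range(5):
    measured = true_value + random.uniform(-1.0, 1.0)  # well under half the gap
    print(round(measured, 3), "->", snap(measured))    # always recovers 5.0
```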

Digital is resilient to statistical fluctuations. Analog isn’t. That’s the issue with computers and data storage and other simple, measurable, physical stuff. With ideas, it’s also important to be able to get an exact answer instead of a real-numbered answer which is never infinitely precise nor exactly right. And it’s important to change your mind about ideas in discrete, meaningful chunks, not in infinitely small (or very, very small) ways where you have no good way to tell the difference between the old and new idea. How do you think critically about a difference in the 80th decimal place? How do you understand that? You don’t, so a digital approach is more suitable.

People think digitally in general. What do you want for dinner? People consider a list of e.g. ten options and choose one of those. That’s digital. Analog would mean that “meatloaf, mashed potatoes and gravy” would not be one option; it’d be infinitely many options. E.g. you could have 55.23234 grams of meatloaf, or 55.23234000000000001 grams of meatloaf, or any other amount. And you could have slightly different types of meatloaf – any change in chemical composition would count as a different dinner. If you view each of those as a different dinner option, how will you choose? And with analog thinking, there are infinitely many other similar issues, e.g. the strength of the flavor of the gravy could be measured by at least one real number (I imagine there are multiple dimensions to flavor, so you could have multiple real numbers). To make the dinner question manageable, one must approach it in a digital way. Any important differences between options can be addressed digitally, e.g. you could digitally differentiate between a small or large portion, or between six portion sizes, or between having leftovers or not.

When people deal with analog stuff, e.g. lengths or weights, they always round them. They don’t care about the fine analog details; they only care to identify the measurement as one from a finite set of possibilities, not an infinite set. Often people round much more than they have to. E.g. you could measure sixteenths of an inch if you wanted, but measuring to the half-inch is good enough so that’s all you do.
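In code, that kind of rounding to a chosen increment is trivial (the half-inch default is just the example from above):

```python
# Round a measurement to the nearest increment, e.g. the nearest half inch.
def round_to_increment(measurement, increment=0.5):
    return round(measurement / increment) * increment

print(round_to_increment(12.31))         # 12.5
print(round_to_increment(12.31, 1/16))   # 12.3125 -> sixteenths, if you wanted them
```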

Judging ideas is one of the areas where people actually try to use real numbers, which they often round to whole number percentages, which is still way too many valid values. The philosophy Objectivism does better because it has arbitrary/possible/probable/certain, which is only 4 valid evaluations instead of 101. (From 0 to 100 inclusive there are 101 integers, not 100.) Objectivism does also suggest it’s an analog continuum, though. But Objectivism, reasonably, tries to cope with the infinite complexity of a real-valued continuum by identifying a small number of specific cases that matter. Four is a good number of things to work with; 101 isn’t. Using four categories helps with error correction. If something is 1% below a typical probable thing, what is it? It’s probable. That 1% difference is just variation within the category.
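Here’s a hypothetical Python sketch of evaluating with four categories instead of 101 integers. The numeric cutoffs are purely illustrative assumptions of mine; Objectivism doesn’t define the categories numerically:

```python
# Map a 0-100 score to one of four categories (illustrative cutoffs only).
def categorize(score):
    if score == 0:
        return "arbitrary"
    elif score < 50:
        return "possible"
    elif score < 100:
        return "probable"
    else:
        return "certain"

# A 1% fluctuation below a typical 'probable' score stays in the same category.
print(categorize(75), categorize(74))   # probable probable
```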

Error is always possible regardless of the system. You could have a statistical fluctuation that’s big enough to get to the wrong category. E.g. something could be probable but, due to a large fluctuation, it looks like it’s very near possible, so you round it to possible. There’s always risk and fallibility, not perfectionist guarantees. And you can also mis-evaluate or misjudge anything and just be way off. Those errors have to be corrected by other means like creative, critical thinking. But correcting most errors in a simple way that works well but not perfectly is a huge help. That’s a great start on error correction. It works especially well with simple systems like voltages in CPUs where you don’t have to worry about e.g. intellectual bias. But even with human thinking, where systematic error and blindspots are issues, it’s still really valuable to have some simple, easy error correction methods that correct many errors. They don’t work on all types of errors, but they do work great on many errors, so you have fewer errors left to deal with by other methods that take more resources like time, attention, creativity, etc.


With analog rounding, you have a 50% chance to move further from the truth (the real value). Rounding to the nearest integer is arbitrary; round numbers aren’t favored in reality. If you round from 8.9 to 9, there’s a 50% chance the real value was below 8.9 and a 50% chance it was above 8.9 – and if it was below, rounding up moved you further from it. Even if you round in the correct direction, you could round too far or not far enough (maybe rounding to 10 would have been more accurate). It depends how you got the value, but in many cases you can expect the real value to be in a bell curve around the measured value. Since the bell curve is centered on the measured value, the probability of a higher or lower real value is equal. If you knew the standard deviation of the bell curve, you could judge how big a change rounding by 0.1 is (it could be 50 standard deviations or 0.001 standard deviations), which helps you know whether your rounding is badly breaking things or not. But you still wouldn’t know whether your measured value is too high or too low.
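You can simulate the basic point in Python (the bell-curve model is the one described above; the standard deviation is an arbitrary assumption):

```python
# Simulate: when you measure 8.9 and round up to 9, how often was the real
# value below 8.9 (so rounding moved further from it)?
import random

measured = 8.9
sigma = 0.5          # illustrative standard deviation for the bell curve
trials = 100_000
below = sum(random.gauss(measured, sigma) < measured for _ in range(trials))
print(below / trials)   # roughly 0.5
```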

With digital rounding, you can be wrong, but it’s not a 50% chance of going in the wrong direction, and you aren’t just making your answer less accurate by rounding. You’re rounding from an invalid value to a valid value, and you have a good chance to correct an error and get a more accurate (even perfectly accurate) answer.

Rounding can never reduce uncertainty. Rounding is only OK when the uncertainty is acceptable. When we measure distance, it’s generally OK to round by a small amount because, even if you round in the wrong direction, the result is still precise enough for the purpose.

Here are some quick everyday life examples of analog and digital. It’s good to learn to see these distinctions in the world around you. The direction a fan is pointing is analog. Fan speed, for many fans, is set to high, medium, low or off, so that’s digital, even if the fan just has a knob rather than a computer chip, screen and buttons. Steering wheels and gas pedals are analog. Car gears and radio stations are digital. Old radios had analog dials. Volume is an analog dial in some cars, but is digital in some newer cars, and volume is digital for iPhones (at least using the physical volume buttons and built-in software; maybe custom software can set the volume to a huge number of in-between values; I don’t know). Car RPM (revolutions per minute) is shown with an analog gauge, like the speedometer (some newer cars may give a digital number on a computer screen instead). Radio frequencies are analog, but we treat them as digital by convention and only broadcast on specific, discrete frequencies like 101.3 or 101.5 MHz. It’s the same with 2.4 GHz WiFi, which has 11 channels by convention. Scroll wheels, joysticks and mice send digital signals to the computer, but they’re approximately analog, e.g. we can push the joystick in a decent approximation of “any direction” to move our video game character around; it’s not like those old games where you could only move in 4 or 8 directions, which is clearly a digital approach to movement.