Error Correction Math and Types
In this article, I try to better think through how error correction works and what types there are. I explore several ideas.
There’s explanatory error correction. You explain an error and then come up with a different solution that no longer has that error. And there’s quantitative error correction – you’re off by an amount (and you can correct it, or at least reduce it, with techniques like measuring a quantity more times and using the average measurement).
Error Bars
If measuring inches with a precision of 1 inch, for objects known to have integer inch lengths, then rounding to the nearest inch removes measuring errors of less than half an inch. (That’s like in computers. They measure a voltage that should be either 0 or (e.g.) 5 volts – there are only two valid options, on or off. So rounding corrects errors as long as the errors aren’t too big, e.g. 4 or 6 volts could be rounded to the expected 5 volts.)
If every value is valid, then you’re stuck with whatever value you measure or calculate. If only some values are valid, e.g. only integers, then a real-valued measurement can be rounded and you can see the size of the error (distance from an integer). This works best if errors are almost always smaller than half the size of the gaps between valid values. If you’re often off by 100, or even just 0.8, then you can’t know that 1.23 should be 1.
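As a rough Python sketch of that rounding rule (the function name and cutoff are just my illustration), rounding removes errors as long as they stay under half the gap between valid values; I've used a cutoff slightly under half so borderline readings get flagged instead of silently rounded:

```python
def correct_to_integer(measurement, trust_limit=0.4):
    """Round to the nearest valid (integer) value. Readings that land close
    to the midpoint between two valid values get flagged rather than trusted."""
    nearest = round(measurement)
    error = abs(measurement - nearest)
    if error > trust_limit:
        raise ValueError(f"measurement {measurement} is too ambiguous to correct")
    return nearest

print(correct_to_integer(1.23))  # -> 1 (the 0.23 error is detected and removed)
print(correct_to_integer(4.9))   # -> 5 (like reading a noisy 5-volt signal as 'on')
```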
There appear to be two sorts of errors, explanatory/conceptual and numerical/statistical. I don’t know if that viewpoint is good. Statistical errors happen due to variance. Explanatory errors happen due to thinking errors. That makes some sense.
The standard way to deal with statistical variance is error bars. We’re 95% sure the real value is x±y. If it’s a bell curve, y is about double the standard deviation (1.96 standard deviations, more precisely).
Error bars let you write down how much error you might have, rather than correcting it. They document the error instead of removing or solving it. Therefore errors accumulate over time/steps. If you have multiple steps with error bars, the total error bar will grow.
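A minimal sketch of both points, assuming normally distributed (bell curve) measurement noise: about 95% of readings land within roughly two standard deviations, and worst-case error bars add up as you chain steps.

```python
import random

random.seed(0)
sigma = 1.0
readings = [random.gauss(0, sigma) for _ in range(100_000)]

# Roughly 95% of a bell curve falls within about 2 standard deviations.
share = sum(abs(r) <= 2 * sigma for r in readings) / len(readings)
print(f"share within 2 standard deviations: {share:.3f}")  # ~0.95

# Error bars document error rather than remove it, so they accumulate:
# summing 10 quantities that are each x±2σ gives a worst-case bar of ±20σ.
print(f"worst-case bar after 10 additions: ±{10 * 2 * sigma}")
```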
If you add two error bars, you get a larger range. If you multiply two error bars, it’s even worse. Adding the ranges 3-5 and 8-10 makes a range of 11-15, but if you multiply the ranges you get 24-50. Counting the whole numbers in each range, multiplication created a larger range (27 values) than addition did (5 values). And two narrow ranges (3 values each) multiplied to a range that isn’t narrow. Now imagine doing 50,000 multiplications, each with an error bar! The end result would have a huge error bar.
Silicon computers can do billions of computations per second today. Also, in a human life, you do many steps; you think about many things and connect them. Errors have to be corrected somehow, sometimes, instead of just labelled.
Put it another way: 4±1 ⋅ 9±1 = 37±13. Or is it? 37 is the midpoint of the range 24-50 but 36 is the product of the prior midpoints (4 and 9). If the result is 36 then the error bars have to be asymmetric (the correct answer could be up to 12 lower or 14 higher). I’ll just do it with symmetric error bars for simplicity.
Regardless, the error bar got way bigger. When you add, it’s straightforward and you get 13±2 (add the main numbers and add the variances). But with multiplication I think you multiply each number by the other number’s variance and add those to get total variance. The variance on the 9 represents additional 4’s that can be included or excluded.
4±1 ⋅ 9±3 = ? Here the variance of 3 on the 9 means you can end up 12 higher or lower. Actually it could be 15 higher, though, because the biggest variation is if the first number is really a 5 and you multiply 3 more 5’s. So maybe you multiply the variances with each other and add that to the variance in the result, twice. Twice is because we need to correct 3⋅4 with +3 and we also need to correct 9⋅1 with +3. Then for the main/base number, multiply normally and add the product of the variances to recenter. So 4±1 ⋅ 9±3 = 39±27. (That comes from 36+3 ± 12+3+9+3.) Is that right? Let’s see: the minimum is 3⋅6 and the max is 5⋅12, so 18 to 60. But I got 12 to 66. My variance is 6 too high, so I shouldn’t have used that correction. What’s going on? Oh, 3⋅4 and 1⋅9 already give the average variance, which is what I want. The true range sits higher above the original midpoint of 36 than below it, but I added 3 to the base number (moved the midpoint up to 39) to fix that. So the correct answer is: 39±21. So variances of 1 and 3 have turned into 21 after one multiplication.
So it’s just:
Addition with error bars:
a±b + c±d = (a+c) ± (b+d)
Multiplication with error bars:
a±b ⋅ c±d = (a⋅c + b⋅d) ± (a⋅d + b⋅c)
Note how each of the 4 products is used exactly once. Each of the two parts of the first number is multiplied with each of the two parts of the second number.
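Here’s a short Python sketch of those two rules (my own helper functions, assuming all values and error bars are positive), checked against the worked examples above:

```python
def add(a, b, c, d):
    """(a ± b) + (c ± d) -> add the base numbers, add the variances."""
    return (a + c, b + d)

def mul(a, b, c, d):
    """(a ± b) * (c ± d) -> each of the 4 products used exactly once.
    Assumes all values stay positive."""
    return (a * c + b * d, a * d + b * c)

print(add(4, 1, 9, 1))  # (13, 2)
print(mul(4, 1, 9, 1))  # (37, 13)
print(mul(4, 1, 9, 3))  # (39, 21) -- true range is 3*6=18 up to 5*12=60
```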
Subtraction with error bars:
a±b - c±d = (a-c) ± (b+d)
That’s easy since it’s the same as adding the opposite. The variance still adds. Just the base number ends up different.
Division with error bars:
a±b / c±d = (a⋅c + b⋅d)/(c^2 - d^2) ± (a⋅d + b⋅c)/(c^2 - d^2)
That’s the same as multiplying by the reciprocal: the reciprocal of c±d works out to c/(c^2 - d^2) ± d/(c^2 - d^2), and then you apply the multiplication formula. (This assumes the divisor’s whole range is positive, i.e. c > d.)
The variance on a shrinks because a is being divided (meaning scaled down proportionally, e.g. to half). Imagine you had 300±30 and you were dividing by 3 with no error bar on the 3. You can see how the error bar should end up smaller. Dividing by 3 on the base number gets us from 300 to 100. The error bar was 10% of the base value in each direction, and it should stay at 10%. The old range is 270-330. The new range should be 90-110 (the error bar shrinks to ±10), not 70-130 (which is what you’d get if you left the error bar at ±30).
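Continuing the sketch, the division rule amounts to dividing both parts of the multiplication rule by c^2 - d^2 (again assuming the divisor’s whole range stays positive):

```python
def div(a, b, c, d):
    """(a ± b) / (c ± d), done by multiplying by the reciprocal of c ± d.
    Assumes the divisor's whole range is positive (c > d >= 0)."""
    scale = c * c - d * d
    return ((a * c + b * d) / scale, (a * d + b * c) / scale)

print(div(300, 30, 3, 0))  # (100.0, 10.0) -- the 90-110 range from the example
print(div(8, 2, 2, 1))     # (6.0, 4.0)    -- true range is 6/3=2 up to 10/1=10
```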
With addition, variance scales linearly. If we can be off by 3 twice, it makes a potential error of 6. With multiplication, variance scales with our data. Instead of being off by 3, now we can be off by 3 times our data, since we’re multiplying with our data. So if a data value is 20, and we could be off by 3 times our data, that makes a variance of 60 (just looking at positive numbers and positive variance). If the data is a million, then we could be off by 3 million. When you stick to addition the variance could be small relative to your data, e.g. only a 2% error, but after one multiplication (with integer error bars) the error is large relative to your data (though the base value, after multiplying, is on the order of the square of your data, so it’s still bigger than the variance). With multiplication you’ll need really small error bars, e.g. ±0.01 on a value near 1, which would be a 1% error in either direction. The point is, addition error bars work as fixed amounts while multiplication error bars work more like a percentage of the values you’re working with, so they scale up much more.
Let’s try exponents:
a±b ^ c±d = ?
Let’s look at actual numbers. 3±1 ^ 4±2 is from 2^2 to 4^6. So from 4 to 4096. Middle is 2050 and variance is 2046. So the answer is 2050±2046. (Again I’m ignoring any kind of distribution with some errors being more common than others, and just finding the midpoint of the solution range. Quite possibly the most likely value is what you get if you assume the error bars are zero, which in this case is 3^4=81, which is not the midpoint of the possible solution range. So either you have to have unequal positive and negative error bars or change the base number to recenter it).
What’d I do to calculate the concrete case? I can take the same steps and write them out with variables:
Exponentiation with error bars:
((a-b)^(c-d) + (a+b)^(c+d))/2 ± (((a-b)^(c-d) + (a+b)^(c+d))/2 - (a-b)^(c-d))
Viewed another way it’s:
mid = (min+max)/2
Answer:
mid ± (mid - min).
This is generic and works for many things where the extremes come from the extremes of the inputs (not e.g. modulo, which is cyclic). You just calculate min and max separately. But it’s inelegant. I got nicer formulas before. Is there a shortcut? I don’t know.
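The min/max view is easy to write generically, at least as a sketch: evaluate the operation at the four corner combinations and take the midpoint and half-width. That’s only valid when the extremes really do occur at the corners (true for the positive-base exponent example; not for something cyclic like modulo).

```python
def corners(op, a, b, c, d):
    """Generic error-bar combination via the min/max of the four corner cases.
    Only valid when the operation's extremes occur at the corners."""
    results = [op(x, y) for x in (a - b, a + b) for y in (c - d, c + d)]
    lo, hi = min(results), max(results)
    return ((lo + hi) / 2, (hi - lo) / 2)

print(corners(lambda x, y: x ** y, 3, 1, 4, 2))  # (2050.0, 2046.0)
print(corners(lambda x, y: x * y, 4, 1, 9, 3))   # (39.0, 21.0) -- matches the formula
```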
I’m going to try modulos. FYI, the symbol for modulo, aka mod, is % and it basically means to divide and take the remainder as the answer. So 7%2 is 1 because there is 1 left over when you do 7/2.
a±b % c±d = ?
Mod is weird because it’s cyclic. Hmmm.
2-4 mod 1-3 would be 0 under mod 1 (everything mod 1 is 0), 0-1 under mod 2, and 0-2 under mod 3.
2-4 mod 11-13 would be 2-4 (the values are all smaller than the moduli, so they’re unchanged).
20-40 mod 11-13 would be 0-10 under mod 11, 0-11 under mod 12, and 0-12 under mod 13. It can be anything from 0 up to one less than the modulo number.
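Since mod is cyclic, the simplest thing I can do is brute-force the set of possible remainders rather than look for a tidy formula. A sketch (integer values only):

```python
def possible_remainders(lo, hi, m_lo, m_hi):
    """Every remainder you can get from any value in lo..hi mod any
    modulus in m_lo..m_hi."""
    return sorted({x % m for x in range(lo, hi + 1) for m in range(m_lo, m_hi + 1)})

print(possible_remainders(2, 4, 1, 3))      # [0, 1, 2]
print(possible_remainders(2, 4, 11, 13))    # [2, 3, 4]
print(possible_remainders(20, 40, 11, 13))  # [0, 1, 2, ..., 12]
```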
Summary: Error scales linearly with addition, scales up to be proportional to your original data with multiplication, and scales up a lot with exponentiation.
With multiplication, now you’re at the square of your data if you multiplied two pieces of data that are similar sizes, so the error may still be manageable relative to the large product you just got. If you multiply data by something small with variance, you can get a bad result though, like variance on the same scale as the result. Example: 100±1 ⋅ 2±3 = 203±302.
Multiplication comes out best if the variance on both factors is much lower than the factors themselves. For addition, you want variance much lower than the largest number. You can add 100±1 + 2±3 = 102±4 and the large variance on the second number being added doesn’t result in a large variance relative to the final sum.
What about variance on just one part?
a±b + c = (a+c) ± b
a±b ⋅ c = (a⋅c) ± (b⋅c)
It’s harmless to add a zero-variance number if all the numbers are positive. That can actually reduce the variance as a percentage. However, if one of the numbers is negative or you’re subtracting, then the variance as a percentage of the result can go up. The variance’s absolute size doesn’t change.
Multiplying by a large number adds lots of variance as an absolute value but won’t increase the percentage variance. But the variance can now be large relative to the original number which came from your problem domain, so it may matter to you. It depends on what the multiplication means conceptually.
With 10±1 ⋅ 10±1 the variance on each is 10% and after multiplying we get 81-121 aka 101±20 so the variance is ~20%. Does the variance as a percentage add?
10±5 ⋅ 100±15 = 1075±650 which is 60.5% variance, not 65%.
Looks like the variance percentages add minus a correction. If the relative variances are p and q (as fractions of the base numbers), the product’s relative variance works out to (p+q)/(1+p⋅q). If the variance percentages are small then they approximately add. When variance is bigger, the denominator gives a larger downward correction.
10±5 ⋅ 10±6 = 130±110 which is 85% variance (from 50% and 60% originally).
10±9 ⋅ 10±9 = 181±180 is nearly 100% variance (from 90% and 90% originally).
10±20 ⋅ 10±20 = 500±400. That is wrong. The range is -300 to 900, so the midpoint is 300 and the variance is 600: it should be 300±600. Crossing zero broke my formula! When doing the formula earlier, I assumed the low point comes from the minimum variance on both numbers, but that doesn’t always work when some terms are negative.
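A quick check of where it breaks: for multiplication, the true extremes always come from one of the four corner combinations (even with negatives), so brute-forcing the corners still gives the right range when the formula doesn’t.

```python
def mul_formula(a, b, c, d):
    """The earlier formula, which assumes everything is positive."""
    return (a * c + b * d, a * d + b * c)

def mul_corners(a, b, c, d):
    """Exact midpoint and half-width from the four corner products;
    still correct when a range crosses zero."""
    products = [x * y for x in (a - b, a + b) for y in (c - d, c + d)]
    lo, hi = min(products), max(products)
    return ((lo + hi) / 2, (hi - lo) / 2)

print(mul_formula(10, 20, 10, 20))  # (500, 400)     -- wrong once the ranges cross zero
print(mul_corners(10, 20, 10, 20))  # (300.0, 600.0) -- the real -300 to 900 range
```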
Other Errors
None of the math deals with qualitative errors. Variance and error bars are for quantitative errors. And the math I did ignores any possible unequal distribution of errors, as well as not dealing with negative numbers correctly.
Also, by the way, presumably other people have worked this stuff out before. I don’t know where to find it though (not that I looked much – working some of it out myself was a way to understand the issues more). And their goals were presumably different than mine so I wouldn’t expect to find this subject approached in the same way that I’m approaching it.
What other sorts of errors exist? A basic, standard error is putting something in the wrong box/category. E.g. I say some X are Y and you put my statement in the universal claim box as if I’d said “all”. Or you confuse a cat with a dog. Or you confuse whether bigger or smaller is better.
Some errors are non sequiturs. Let’s get Taco Bell food by wanting it, without delivery, without driving there, without walking there, without any way for the food to get from the restaurant to us, like the food will just teleport. There’s a gap in my plan where the outcome (having and eating the food) doesn’t follow from the earlier steps (wanting Taco Bell food). Similarly, people make arguments kind of like “Pizza is good because beavers make dams”, though real non sequiturs are usually more subtle.
Some errors are superficial approximations that fail to take a bunch of factors into account. I want low wage workers to make more money, so I make a law to tell people to pay them more. But I forget that they might not be hired, might be fired, might have their hours per week of work reduced, etc. So my explanation for how it’ll work is basically a non sequitur because I forgot about those factors. It’s like if I forgot about the “getting there” factor for Taco Bell, but the ignored factors are less obvious in this case. And whether the real or underlying cause of the unsatisfactory wages is supply and demand of labor, or the greed of businessmen, or schools doing a bad job at educating people to be productive, or inadequate capital accumulation per capita, or whatever else, I don’t actually have a plan to improve it.
In a simplified hierarchy of knowledge model, some errors come from wrong connections between ideas or between groups of ideas – e.g. ideas A and B do not combine to form C. Errors can also come from wrong premises. That’s it since if your premises are right, and your connections are right, then you shouldn’t get errors later. If you have an error, you made some wrong assumptions or you combined/integrated/applied some ideas incorrectly. You have a wrong idea somewhere. Where did it come from? Either you got the idea from an invalid step to build it from prior ideas or you built it from a prior wrong idea. There is a first wrong idea (at least one; it could be a tie) and either it’s a premise or else it was built from some prior ideas that aren’t wrong (in which case the building on those correct ideas must have been done incorrectly). Warning: I’m not saying this model is correct and is a good way to think about everything; it’s just a simplified model that has some value.
Lots of errors come from being unclear and vague, including unclear about what your premises/assumptions/starting-points are, unclear about how you’re combining them, and unclear about what problem(s) you’re trying to solve (unclear goals).
Also tons of errors come from fudging things, taking shortcuts, being approximate on purpose. You have to check that your approximations are valid under the constraints of the problem. And no it shouldn’t be “probably” valid with no guarantee, unless the goal is actually a probable not guaranteed solution and you have some actual way to measure the probability of an error. “It intuitively feels unlikely to be an error” is not a reasonable way to decide a shortcut is OK. That’s not a way to objectively constrain the chance or size of error.
Types of Errors
Let’s brainstorm some errors to try to identify types of errors better.
There are word, language and grammar errors.
There are math errors.
There are factual errors like getting dates wrong, giving credit for something to the wrong person, misstating a price, misstating the number of days in a month, misstating the current day of the week, misstating what the law says (typically without checking).
There are errors of trusting secondary sources and treating that information like it’s firsthand. People can make the error worse by communicating their sloppy conclusion without communicating the chain of thought that reached it and is needed to judge whether the conclusion is good. Sometimes people communicate that they got information from a solid, reliable source that readers wouldn’t doubt, when actually they got it in a way people would doubt.
Lying is a sort of error.
There are planning and execution errors. (Based on Goldratt.)
There are systemic errors and non-systemic errors. Systemic errors make it harder to fix non-systemic errors.
Some errors are part of a pattern and some aren’t. Some errors come from an error-factory or autopilot. Some errors are due to habit, intuition, emotion or your subconscious. Some errors are due to static memes.
There are traditional errors and your own errors when deviating from tradition.
There are errors with high reach to affect lots of your life and errors with low reach.
There are bounded and unbounded errors. Maybe an error is limited to one little concrete, or to a whole field, or not limited at all so it can apply to and ruin everything.
There are autonomous and non-autonomous errors. Some errors only work in context and are reliant on other ideas. Other errors stand on their own and can be causal actors by themselves and take on a life of their own.
There are errors related to social dynamics, facts or logic. Those aren’t complete categories but they’re often notable categories.
For action: there are actions that were done but shouldn’t have been, and actions that were not done but should have been. That’s two types of errors: omissions and extras. (Based on Goldratt.)
Applying Goldratt to planning: There are incomplete ideas and actively wrong ideas. There are ideas that are a subset of the right idea, and ideas which aren’t. We can look at candidate solution elements as either being elements of the right answer, or not, and we can also look at what elements of the right answer are missing. Is this useful? A difficulty is that usually there are multiple right answers.
There are intentional and unintentional errors. A common type of intentional error is when you think something is false, but say it anyway, to try to fit into the group. People commonly think this is mostly harmless but it’s actually corrupting.
Does lying to your parents/teachers/authorities to try to avoid trouble count as an error? You’re saying something false. You’re proclaiming a version of reality disconnected from actual reality. And there’s legitimately something bad going on. You may say the fault is in the other people and this risky, dangerous strategy is the best way to defend yourself. Also sometimes it’s a reasonable or even expected lie to deal with the authority being unreasonable, but other times the authority is being reasonable and the liar is in the wrong.
Overreaching
People’s main objection to not overreaching is that they accept error as part of life, don’t care, and won’t take error correction seriously as a goal. Although some errors aren’t important, the general attitude that errors aren’t important is a rationalization. If it were a serious, honest attitude, then they’d try to understand which errors are important and how to tell. They’d look for and work on some important errors. They’d try to deal with the matter appropriately instead of using unimportant errors as an excuse to avoid self-improvement. Many people want to broadly assume their errors don’t matter (since they don’t make errors they see as really unreasonable or bizarre) even as lots of negative things occur in their lives.
People don’t see it as productive to fix their little errors. They think there are too many and it’s hopeless/impossible, or that it’s just hair splitting. For lots of the errors, they admit they matter, but once you point one out they say “OK, oops” and maybe even “thanks for telling me” but then want to immediately move on. But then it happens again. And again. They don’t fix the underlying cause.
People need a process by which they fix errors, fix patterns of errors, automate the better ideas, and get to the point that it’s really easy to stop making those errors. The goal of problem solving and improvement is ease, not just being able to get the right answer when trying really hard. Making the right answer natural, intuitive and habitual is the goal.
If you get there, you’d rarely make known errors anymore. It could become infrequent enough to deal with every known error with conscious attention. Your errors could be under control enough to get individual attention. To get there, you should start with stuff that’s simple/basic/easy enough for your errors to be under control, and work your way up. Don’t jump in the deep end, make a million errors, and create a situation that’s out of control.
Besides known errors you can also be making unknown errors. Discovering those is reasonably uncommon, so it won’t be overwhelming to deal with. You won’t find out about too many too quickly. People already discovered most of the errors that are easy (for people in our society) to discover. Figuring out new and better criticism is kind of like making a new scientific discovery, which doesn’t happen every day.
Solution Spaces
Restrictions on a solution space are error detection mechanisms. Like “The answer can’t be between 5 and 50” means any value in that range is an error, so it helps you detect some errors.
When the answer must be a whole number, then any number with a decimal or fraction on the end can be immediately detected as an error. And in that scenario you can do simple error correction by rounding. Rounding to the nearest whole number at every step will prevent small errors from compounding into big errors. But rounding would not prevent compounding error if fractions could be correct answers; in that scenario, rounding would actually make things worse.
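A toy simulation of that point (my own example, assuming the correct running total is always a whole number and each step’s error stays under half): re-rounding at every step keeps the total exactly right, while carrying the raw noisy values lets the error drift.

```python
import random

random.seed(1)

true_total = 0
noisy_total = 0.0      # errors are only documented/carried, never corrected
corrected_total = 0    # re-round to the nearest whole number at every step

for _ in range(1000):
    step_error = random.uniform(-0.3, 0.3)  # always under half the gap of 1
    true_total += 1
    noisy_total += 1 + step_error
    corrected_total = round(corrected_total + 1 + step_error)

print(true_total, corrected_total)         # identical: per-step rounding removes each error
print(round(noisy_total - true_total, 2))  # drift from the accumulated, uncorrected errors
```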
Restricting/constraining to a small solution space, e.g. just yes or no, makes it trivial to detect most answers as errors. This lets you focus on analyzing the small number of remaining answers instead of having a million things to analyze. It helps you focus. So yes or no questions are generally easier to answer, reason about and get right. They’re also generally a bit smaller, less ambitious, and less valuable to answer. But using small steps is broadly good. It’s better to add up to big value with ten easy steps than one hard, risky step that you’re much more likely to get wrong. People often get stuck because they fail to break problems down into more manageable parts and instead try to do the whole thing at once.