Similarity and Contextual Conversion Between Dimensions
In Multi-Factor Decision Making Math, I discussed converting (measurements or judgments of) decision-making factors to other dimensions. I said that this broadly can’t be done and we need other approaches to decision making. However, I said, the narrower the context you care about, the more possible it is to do an approximate conversion. Why is that? Let’s look at some examples.
Army Examples
In a war, a machine gunner in one location could be about as effective as two machine gunners in a different location. This suggests a conversion between two dimensions: number of machine gunners in great locations and number of machine gunners in good locations.
You can imagine a particular battle in which the machine gun at the better location shoots 20 enemies. And if the same battle happened with the two guns at the other location, they would also shoot a combined total of 20 enemies. You could even imagine that the people shot are the same 20 enemies, though that’d be unusual. If they aren’t the same 20 enemies, then shooting them is not equally good. And if you’re shooting from a different spot and hit the same people, you’ll hit them in different places on their bodies and give them different wounds, so again it’s not really equal. The conversion which treats these things as equal is only approximate.
Suppose the two guns inflict identical injuries on identical troops at identical times during the battle, and we don’t care about the manpower or ammo usage (to fire two guns instead of one). It’s still going to be unequal. Being shot at from different locations will make the enemy troops take cover behind different obstacles. It’ll mean they are able to peek out and shoot in different ways. And two guns can keep up covering fire with no break to reload. Two guns would be shooting more bullets with a lower hit rate, which could be more effective as suppressing fire (because the enemy has to watch out for more bullets coming). The two guns could also be less effective as suppressing fire if they’re too far away. People might just ignore them and consider it bad luck if they get hit. You can imagine soldiers running past two distant machine guns, while not running past one closer gun, even if the actual casualties end up being the same either way.
Machine guns are used strategically. We put them at certain spots to shoot in certain directions at certain times in coordination with other stuff we’re doing. There are purposes they serve other than to shoot enemies to rack up total kill count. Even when two options are identical in some ways (e.g. number of enemies shot), they will have differences too, making those options not fully interchangeable.
Even in the ancient world, smaller armies won many battles against larger armies. Approximations like “one spearman is worth one spearman” were rather approximate. Each spearman was an individual. In organized phalanx combat they became closer to interchangeable, but they still weren’t the same, and individual differences mattered. E.g. I imagine some men held up their shields fine but weren’t very effective with their spears during phalanx combat, while other men struck an outsized number of killing or wounding blows. Another factor is that an individual or an entire group could get less sleep the night before the battle. And even if two groups of 100 shield-and-spearmen were equal, they have different commanders, so they are unequally valuable. The trait “fights under Spear Captain Pullo” is a difference from “fights under Spear Captain Vorenus” even if two soldiers are otherwise identical (including, implausibly, identical bodies and training).
But people do use approximations like these – e.g. that two units of 100 spearmen are about equal – and they do seem to work somewhat. Why do they work at all?
They partly work because there are many similarities in addition to the many differences. Sometimes the differences don’t matter much and the similarities do matter.
Put another way: not everything matters. There is excess capacity for a lot of stuff. You should focus on key issues. E.g. the key issue in one case might be controlling a path through a canyon that is 90 feet wide, which requires 25 men with shields in a line. To fight effectively there requires approximately 100 men so they can have 3 lines of 25 and rotate who is in front, plus have extra men to deal with casualties, act as reserves, etc. If that’s the key issue, then two different squads of 100 could both do the job. Two squads, despite many differences, might both succeed (or both fail) at the goal of holding the pass, and therefore be identical relative to that specific goal. In general, two options can be identical or very similar for a set of goals you have (or for one complex, multi-part goal), even though the options have identifiable differences (that are irrelevant to your goals).
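To make the arithmetic concrete, here’s a minimal sketch in Python. The specific numbers (frontage per man, reserve fraction) are assumptions I’m inventing to fit the example, not historical data.

```python
# Rough squad sizing for the canyon example. The per-man frontage
# and reserve fraction are invented for illustration.

CANYON_WIDTH_FT = 90
FT_PER_MAN = 3.6        # assumed shield frontage per man
LINES = 3               # rotating front lines
RESERVE_FRACTION = 0.3  # extra men for casualties and reserves

front_line = round(CANYON_WIDTH_FT / FT_PER_MAN)  # 25 men per line
core = front_line * LINES                         # 75 men
required = round(core * (1 + RESERVE_FRACTION))   # ~98, i.e. roughly 100

print(front_line, core, required)  # 25 75 98
```

Any squad at or above that rough threshold can hold the pass; headcount beyond the threshold is excess capacity.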
While holding the canyon path, two squads may have different morale and different leaders, but as long as they stand their ground and don’t let enemies past, they’re succeeding at the goal. They could kill many enemies, or none, and succeed at their job either way. Blocking off the path is what matters to the big picture of which army wins the battle. Getting extra kills in this pass is a non-bottleneck local optimum (which can actually be bad to pursue – if the only goal is to hold the pass, then it’s generally better to stay disciplined and safe with a shield wall than to take risks trying to inflict more blows). Killing more enemies seems nice to have – a positive thing – but it’s not important to the goal.
While it seems beneficial to get more kills (with no downsides to battle performance), it could actually be a bad thing. This helps illustrate the importance of focusing only on your goals. It could be the final battle of a civil war, so everyone you leave alive will rejoin your own society (just a few opposing leaders will have to be jailed or executed, but the common soldiers won’t be blamed). You could get a benefit from capturing more soldiers alive to enslave. It could be the final war against a nation that will soon be paying tribute, so it’s better if there are fewer angry brothers and widows, and more strong men to produce wealth. You could lose the war, and the victors might take more revenge on your army or country if they had higher losses. Getting more kills is beneficial only if it helps you with a goal such as winning the current battle or winning a future battle (sometimes battles are indecisive and you fight against the same army again in the near future).
For holding the pass, two groups of 100 spearmen can both have excess capacity: they can both succeed without it being a close call about whether they instead get a bad outcome like being slaughtered, being pushed back outside of the canyon, or retreating. The squads may have different skill and strength with spears, but both can hold the pass. They may have slightly different shields, shield-arms, and ability to form a perfect shield wall, but they both have enough shielding capacity to hold the pass. They even have excess capacity – more than necessary if everything goes smoothly – which is important. It means they’re pretty reliable. Even if some things go wrong, they’ll still succeed. They can handle some bad luck. A little bad luck (random variance) is normal, so good plans need to be able to deal with it. Things should have to go really, really wrong for them to fail.
The reason you can treat the two groups of spearmen as (approximately) the same is that they’re able to accomplish the same purpose (holding the pass) and that purpose is what you care about. The details of how well they do at other goals don’t matter much. In other words, we can convert from many factors in many dimensions to a single evaluation when that evaluation is about a well-defined goal (in a specified context) that limits what’s relevant. It can also work pretty well for a narrow range of goals and contexts. The broader the range of goals and contexts you want to take into account, the less well you can do even approximate conversions.
In short, everything is unique, but we can view things as similar when we have a particular goal in mind. Two different things can be similar by both succeeding or both failing at a particular goal. (What about similarity for multiple goals? Yeah that works too. Or you can view it as a single multi-part goal.)
You can’t approximate two things as the same when the differences between them cross a breakpoint relevant to a goal you have. That results in a different category of outcome. It’s common to use a single breakpoint that differentiates between success and failure, but you can also use multiple breakpoints to differentiate between several qualitatively different outcomes. (A breakpoint is a point on a spectrum where there’s a qualitative change rather than merely an incremental quantitative change. It’s often hard to pick an exact point so you can also have a margin of error around a breakpoint.) Because most factors have excess capacity, differences in those factors usually don’t matter. Excess capacity means having extra beyond a breakpoint, rather than being just barely past a breakpoint, so that most changes or differences won’t cross the breakpoint. That gives tolerance against random and non-random variance, including mistakes. It makes things more robust.
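Here’s a small Python sketch of the breakpoint idea. The thresholds and outcome labels are made up for illustration; the point is that quantities map to qualitative outcome categories, and extra quantity past a breakpoint (excess capacity) doesn’t change the category.

```python
# Breakpoints partition a quantitative spectrum into qualitative
# outcomes. Thresholds and labels are invented for illustration.

def outcome(quantity, breakpoints):
    """Return the label of the highest breakpoint the quantity reaches."""
    label = "below all breakpoints"
    for threshold, name in sorted(breakpoints):
        if quantity >= threshold:
            label = name
    return label

BREAKPOINTS = [
    (75, "barely holds the pass"),
    (100, "holds the pass with reserves"),
]

print(outcome(60, BREAKPOINTS))   # below all breakpoints
print(outcome(100, BREAKPOINTS))  # holds the pass with reserves
print(outcome(130, BREAKPOINTS))  # same category: the extra 30 men
                                  # are excess capacity
```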
Maximizing Profit Example
Even for key factors (the ones we focus most attention on, rather than secondary or non-bottleneck factors), there are always breakpoints and we don’t need strict, perfect maximization. A dollar less profit for Apple is close enough and not a concern, despite the widespread idea that Apple wants to maximize profits. If you literally want to maximize profits, then (everything else being equal) getting one dollar less of profit would be failure, since you didn’t get the maximum amount of profit that you could have. But Apple executives would view that single dollar more like random noise or a rounding error rather than as an important part of maximizing profit. That’s because it doesn’t cross any breakpoint they care about for their goals. The goal of maximizing profit implies that there are no breakpoints – every extra penny is a better outcome, not a tie with the previous outcome – which makes excess capacity impossible. But executives act instead like excess capacity is possible – an extra dollar is within some margin of error – which means they aren’t literally maximizing.
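As a toy contrast, with made-up numbers: literal maximization has no ties, while breakpoint thinking treats differences within a noise band as equal.

```python
# Literal profit maximization vs. breakpoint evaluation. The profit
# figures and the noise band are invented for illustration.

def strictly_better(profit_a, profit_b):
    return profit_a > profit_b  # every extra penny is a better outcome

def tie_within_noise(profit_a, profit_b, noise_band=1_000_000):
    return abs(profit_a - profit_b) < noise_band  # rounding-error band

a, b = 97_000_000_001, 97_000_000_000  # one dollar apart

print(strictly_better(a, b))   # True: a literal maximizer cares
print(tie_within_noise(a, b))  # True: executives treat it as a tie
```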
Beach Trip Example
A good approach is to figure out pass/fail breakpoints for all factors based on your goals. Then conversions between dimensions are OK – different things are similar enough – as long as no breakpoint is crossed. Different things are never the same, but they may succeed at the same list of breakpoints or binary goals, so they’re similar enough for you.
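Here’s a minimal Python sketch of that approach; the factor names and thresholds are invented. Two options count as interchangeable when no breakpoint separates them.

```python
# Sketch of the approach above: per-factor pass/fail breakpoints
# derived from goals. Factor names and thresholds are invented.

def passes(option, breakpoints):
    """Return the set of factors whose breakpoints the option passes."""
    return frozenset(
        factor for factor, minimum in breakpoints.items()
        if option.get(factor, 0) >= minimum
    )

def similar_enough(a, b, breakpoints):
    """Similar enough = no breakpoint is crossed between the options."""
    return passes(a, breakpoints) == passes(b, breakpoints)

BREAKPOINTS = {"men": 100, "shields": 100, "days_of_food": 3}

squad_a = {"men": 104, "shields": 110, "days_of_food": 5}
squad_b = {"men": 100, "shields": 102, "days_of_food": 4}

# Different on every dimension, but no difference crosses a
# breakpoint, so they get the same overall evaluation.
print(similar_enough(squad_a, squad_b, BREAKPOINTS))  # True
```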
When might two options be similar enough? One scenario is that they’re similar enough on all dimensions. For dimensions with huge amounts of excess capacity, the two options might not seem very similar, but they are similar in the sense of both succeeding at your goal (being on the right side of a breakpoint). Another scenario is that two things are able to get a similar result in a different way (which requires you to have a reasonably specific goal in mind). For example, you might want someone to help you, and four different methods could all work: asking very nicely and persuasively, calling in a favor he owes you, paying him, or blackmailing him. If you have a narrow enough goal (get his help) and don’t care about anything else, then those could all approximate to the same, since they all work for the goal. If we also take into account some standard life goals that most people have, that could rule out the blackmail because it fails at your goal of being a decent person, but the other three options could remain similar for all your goals despite clearly having some differences.
If you’re just thinking generically and don’t have goals, then you can’t say whether two things are adequately similar or not for conversion, or how much of X converts to Y. What makes things “the same” (for our purposes) is if they accomplish the same specific purposes. To judge that, we need to know what our purposes are. We can’t equate different things out of context because, depending on the context, the differences might matter. However, if there is a context including goals, we can equate two things as similar (enough).
Even if we don’t have exact goals, we could know the typical goals in a context. E.g. if the context is a future beach trip, we won’t know in advance exactly what activities we’ll do that day. But we know some activities we might do, like lying in the sun, swimming, surfing, beach volleyball, walking, watching waves, talking with friends, or building a sand castle. We could compare two things (e.g. swimsuits) as “very similar” for that context because they both are more than adequate for success at all of those goals. Once the trip is underway, we could narrow down which activities we’ll actually do, so even more things would count as similar (since they only have to get the same results for a smaller number of goals). Those similar swimsuits could easily be different for other goals while being similar for the beach trip, e.g. one might be far better for wearing in a fashion show (and therefore succeed at a specific goal or breakpoint the other fails at, such as getting a modeling job from an audience member). Or one might be modest enough that you’d be admitted to a particular restaurant while the other swimsuit wouldn’t get you in.
When we have a good idea of what sort of goals we might have (in other words, some context, like a future beach trip), then we can find some stuff that seems “very similar”. That means it can be treated as equivalent for most or all typical goals in that context. The more specific our context, the more we can know exactly what goals we care about instead of speculating. That means our answers can be more clearcut about what is similar enough, and less margin for error is needed.
Low Excess Capacity
What about scenarios where we don’t have a bunch of excess capacity on most factors? That often means setting up new systems or repairing non-functional systems rather than working with an already-functioning system (if it’s functioning, that is a sign it has excess capacity on most factors, otherwise random variance would cause it to frequently stop functioning).
If it’s the type of thing people have done before, e.g. making a new factory to produce products that are already being produced in other factories, then we’ll have a pretty good idea of what will work. We can consider using options X or Y and see that both should be OK and won’t break anything that we can predict. Or we can know that tolerances are really tight for a particular thing, so we’ll evaluate X and Y with high precision.
On the other hand, if we’re dealing with the unknown then we aren’t sure if X will work, also aren’t sure if Y will work, and shouldn’t assume the differences (whether tiny or large) between them won’t make a crucial difference. Sometimes even if two things seem very similar, one leads to success and the other doesn’t. When you don’t know which traits matter and where the breakpoints are, you can’t tell whether the seemingly minor differences between two similar things will matter (cross a breakpoint) or not. However, if we have a conceptual understanding of what’s going on, including our goal(s), then we might be able to rule out the possibility that two things have relevant differences.
How can we have intuitions about two things being similar when we’re going to use them in a new way involving significant unknowns? Our intuitions are based on their similarity for goals that we’re already familiar with. And that often works fine. New, unknown goals are often related to some past goals we’ve had. New goals often involve some incremental changes to old goals. In that case, there’s a good chance that two things that seem similar to us will stay similar given the new goals. However, if the new goals are very different than our past goals and experiences, then our current intuitions are much less reliable. For example, if there’s a huge nuclear war that kills 99% of human beings, then we’d be in a very different situation and adopt new goals. Many of our pre-existing intuitions would be inappropriate because our goals would be so different than before. Another example where we change context and goals enough to invalidate many intuitions is when traveling to outer space.
Induction
Any two things have infinitely many similarities and infinitely many differences. The inductivist idea that “the future resembles the past” is wrong because the future always both resembles the past and differs from the past. No matter what happens, many patterns continue and many are broken. The issue the inductivist slogan doesn’t address is which patterns will continue, or in other words in which respects the future will resemble the past. Inductivists often rely on their intuitions to tell them which differences matter, but when doing philosophy we should analyze and use explicit reasoning. When judging things as similar, we should consider contexts and goals, because no two things are universally similar (unless they’re strictly fungible).
So, with no context, you cannot call two things similar. They have both similarities and differences. It’s only by supplying some context that you can make judgments about which traits matter. The more specific the context, the better you can focus on the important traits. To declare two things similar, you can have a very specific context and find that the two things are similar for the key, relevant traits. Or you can have a somewhat specific context and find that two things share a lot of characteristics across the traits that might matter. Another way to view it is that, the more specific the context, the more able you are to determine which differences matter and are dealbreakers. And partly or fully knowing your goals is one of the most important things for making a context more specific.
Note that we almost always know some context. E.g. “On Earth” is a context. If we didn’t know that context – if we had to take into account scenarios with other planets, deep space, the center of a star, etc. – then it’d be much harder to say what things are similar enough, in what ways, to count as (approximately) the same. The wider the variety of scenarios you have to account for, the more similar things have to be to perform similarly across all those scenarios.
Frozen Dinners Example
The grocery store sells frozen dinners. Some of them have the same price (exactly), the same brand name and product name (exactly), the same box art (looks the same to me, but I could find differences with a microscope), and the same food inside (pretty similar on many dimensions). In a general societal context today, they’re interchangeable. In contextless analysis, they’re not interchangeable because they have many differences that could somehow matter. In other words, you could invent a scenario where the differences determine success or failure at your goal.
If no context is specified, then any scenario at all is relevant, including scenarios you come up with for the specific purpose of highlighting a difference. For example, for any difference between two things, you could invent a scenario where aliens come and agree not to destroy our planet if and only if we give them a very specific object as a gift. It just so happens one of the two objects we’re considering would meet the aliens’ criteria but the other wouldn’t. The objects would have extremely unequal value in that scenario. So unless we’re considering the objects within a narrow enough context to exclude that alien tribute scenario from consideration (or at least enough context to treat it as very unlikely, and therefore unimportant), we won’t safely be able to consider two objects similar enough. Similar analysis works for things other than physical objects, like ideas – they’re always different for some scenarios, and the more constraints we put on the context then the more we can approximately judge them as similar or different.
When we focus on life in our society today, that gives us a fair amount of context. We know what the common goals and practices are. We know people will want a dinner and roughly how big it should be in terms of volume and calories. We know what sort of nutrients it should have. And we can check and see that thousands of the “same” meal meet reasonable societal expectations for different dinner goals like vitamin A, vitamin C, calories, sweetness, saltiness, etc. The meals are measured to be the right weight within a margin of error, and they have the right amount of each ingredient within a margin of error, and the boxes have identical art and writing within a margin of error, and the packaging is adequately solid or air tight within a margin of error, and they’re currently at the right temperature within a margin of error, and so on. So for a standard context today, many different frozen packaged meals are the same. In other words, we evaluate many meals and convert from many different values on many different incommensurable dimensions to a single overall evaluation.
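Here’s a small Python sketch of that conversion; the spec numbers are invented, not real food-labeling tolerances. Many incommensurable measurements collapse into one pass/fail evaluation by checking each trait against a target and a margin of error.

```python
# Many incommensurable measurements -> one overall evaluation.
# The targets and margins of error are invented for illustration.

SPEC = {
    # trait: (target, margin of error)
    "weight_g": (340, 15),
    "calories": (500, 40),
    "vitamin_c_mg": (30, 10),
    "salt_g": (1.2, 0.3),
}

def interchangeable(meal, spec):
    """True if every measured trait is within its margin of error."""
    return all(
        abs(meal[trait] - target) <= margin
        for trait, (target, margin) in spec.items()
    )

meal_1 = {"weight_g": 344, "calories": 512, "vitamin_c_mg": 28, "salt_g": 1.1}
meal_2 = {"weight_g": 337, "calories": 488, "vitamin_c_mg": 33, "salt_g": 1.3}

print(interchangeable(meal_1, SPEC))  # True
print(interchangeable(meal_2, SPEC))  # True: the "same" meal for dinner goals
```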
Note that, in order to specify a margin of error, you need a context. A margin of error means that more or less of a particular factor is OK within some limits. That makes sense if you know a goal, such as the writing on the box being legible for any typical member of our society. But in another context with another goal, like the alien tribute, the relevant margins of error could be totally different. The aliens might care about the exact amount and shade of purple rather than whether it’s shaped like English letters. Margin of error for the “T” being imperfectly straight but still legible is totally different than margin of error for what ranges of light frequency responses (in other words, colors) the aliens would be satisfied by.
Change the context and what is similar enough (what conversions between dimensions work well enough) may change. You might want to do an art project that requires six carrot slices but find that some of the meals at the grocery store only have five. Five carrot slices is adequate for dinner but not for your art project.
You might want to split the meal between your two kids, so you want an even number of meatballs. Some meals have six meatballs and others have five. You don’t want to have to cut a meatball and get into an argument about which half is larger, so you’d prefer a meal with six meatballs. Or maybe you have six kids and want to give them one meatball each, so again random variance down to five meatballs would matter to you. Some other people might care a lot more about the mass of the meat rather than the number of meatballs. They’d prefer a meal that’s a little larger, but wouldn’t care at all if the same amount of meat is divided between five or six meatballs.
You might want to do a science experiment that requires exactly 500 calories worth of food (and the margin of error is only one calorie – you want between 499 and 501 calories). Then there are some frozen meals that would work but most wouldn’t (assuming the experiment requires a whole meal so you can’t add or remove some calories).
For general eating usage for people today, a meal doesn’t need to have an exact amount of calories. If it was really low you might be unsatisfied but many meals have excess capacity on calories to do their job and meet your expectations. You don’t try to optimize that. If you get hungry again you can just have a snack.
In the future, the price of beef might have gone up 1,000,000x (inflation-adjusted) while the price of potatoes went down 1,000,000x. If so, one of the meals with an extra gram of beef could be much more valuable than one with an extra gram of potato instead. So in that context they’d be very different, even though, today, I might cook and eat those meals without even noticing the difference.
If there’s a famine, people would start differentiating foods by total calories more than they do now. Some previously equivalent foods would become unequal because one, due to random variance, has 10% more calories. In a severe enough famine, it could matter that a meal has 1% more calories.
If you’re trying to collect cyanide to poison someone, apples with more and larger seeds have higher value. (You really want the most total extractable cyanide, which I would assume correlates with the number and size of seeds, though perhaps the variety of apple matters more.) So two apples with differences in the seeds aren’t interchangeable for you. But if you’re just going to eat the apple and throw away the core, then the seeds don’t matter and the apples are equal – at least equal enough. Differences in apple size are often within the margin of error of what you care about – in other words, many apples satisfy you because they have excess capacity on size and other traits, and you aren’t optimizing for more excess capacity. In other words, your goal is to eat and enjoy an apple and not feel unpleasantly hungry before dinner, and most differences between apples are not relevant to that goal. A few differences do matter for that, like apples that are tiny, huge or rotten. We tend to pay attention to factors that matter to our goals (like looking for apples that aren’t mushy, don’t have worms, and haven’t dried out – unless we’re intentionally eating dried apple slices which were dried in a safe, purposeful way). Other factors also exist and are relevant to a complete, philosophical analysis.
Conclusion
When making decisions or thinking about things, we consider many traits of things (of physical objects, ideas, anything). These traits are factors in our analysis. They are often in different dimensions which we can’t convert between, e.g. an object has a length and a temperature. And because two things aren’t identical, they have different values for some traits.
There is no general equivalence between a certain amount of length being worth a certain amount of temperature. We can’t say two things are approximately interchangeable if one is longer and the other is hotter (or colder). Speaking generally with no context specified, we also don’t know what magnitudes are important – e.g. important differences might be on a scale of nanometers or kilometers.
For a particular goal in a particular context, we can evaluate many things as similar enough. Realistic goals have breakpoints which we can achieve with an ample margin of error for most traits, and which could be a close call for a few key factors (or not – often every trait has excess capacity and our goal is easy).
In a wildly different context, two similar things could instead be significantly different (or two different things could be similar). When we view things as similar, that judgment is relative to some context and goal(s). The more specific the context, the better we’re able to treat two things as approximately the same, because the context helps us know what differences matter.
In a specific context, we can find that some temperature makes up for some length (e.g. there are multiple ways to succeed, and one relies on length and another on temperature), or that there’s enough margin of error that the length and temperature differences between these two objects don’t matter.
The reason two literally different things can count as the same is that they succeed at the same goal(s) that we have (in a range of contexts that we care about, which is most often related to the context we’re actually living in, but can be something else in a thought experiment).
Not every little bit of literal difference in traits matters, because there are breakpoints for how much is enough to achieve a particular outcome. A breakpoint is the difference between success and failure at some goal that we care about. Breakpoints are sparse because we only have a few goals, not all logically possible goals. Most values for most traits aren’t near any breakpoint related to a goal we care about. This lets many different things be equal for that trait (they succeed and fail at exactly the same breakpoints, despite having different quantities of that trait). Because it’s easy to be equal for one trait, things can realistically be equal for many traits (especially when they’re designed to be, on purpose, like when they’re all the “same” product produced at a factory).
We can’t convert between two qualitatively different traits in general, but given a context and goal we can do conversions (e.g. this bundle of traits and that bundle of traits both result in success at our goal, so they’re equal, which implies some conversion). Even different values of the same trait are always qualitatively different for some context and goal, even though they’re identical within a margin of error for some other context and goal.
This helps us understand that decision making factors must be analyzed in context and relative to goals, and there’s no such thing as generic, out-of-context similarity between different things. That’s contrary to the inductivist idea of the future being similar to the past and contrary to the widespread idea of judging how good things are in terms of a generic goodness dimension.