Mold-Finding Dog Training and Induction
I read an account of training a dog to find hidden mold problems in buildings. (It’s in Mold Controlled by John Banta.) Dogs have a far better sense of smell than we do, so they can sniff out mold effectively. In short, producing an exceptionally skilled dog takes a lot of rigorous training, and the same applies to other types of dog training.
The dog training story illustrates some misconceptions about induction. Induction says, basically, that finding patterns isn’t very hard: you look around, observe lots of data, find the patterns, and conclude they’re likely to hold in the future.
It’s true that it’s pretty easy to find patterns. The problem is there are too many patterns, and many of them won’t hold in the future. The set of all patterns logically compatible with a dataset is always huge (actually infinite), but most of those patterns aren't what people want to find.
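As a toy illustration (my example, not from the book), here’s how many distinct rules can fit the same finite dataset exactly while disagreeing about everything outside it:

```python
# Toy illustration: many distinct rules reproduce the same dataset exactly.
data = [(1, 2), (2, 4), (3, 6)]  # looks like "double the input"

def rule_a(x):
    """The intended pattern."""
    return 2 * x

def rule_b(x):
    """Agrees on every observed point, diverges everywhere else."""
    return 2 * x + (x - 1) * (x - 2) * (x - 3)

def rule_c(x, k=5):
    """One of infinitely many variants: any value of k works."""
    return 2 * x + k * (x - 1) * (x - 2) * (x - 3)

# All three rules fit every observed point perfectly...
assert all(rule_a(x) == rule_b(x) == rule_c(x) == y for x, y in data)
# ...but they disagree about unseen inputs:
print(rule_a(4), rule_b(4), rule_c(4))  # 8 14 38
```

Since `k` can be any number, the dataset is compatible with infinitely many rules; the data alone can’t pick one.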
People find induction plausible because their experience is that they find useful patterns at a reasonably high rate, rather than finding hundreds of useless patterns for each useful one. Why does that happen? Because they use intelligent (non-inductive) thinking to guide them. So induction is a loose approximation that relies on already being intelligent. It doesn’t work as an explanation for how intelligence works.
In dog training, you can’t explain what you mean to your dog. You can’t say “look for mold”. Instead, you can reward or punish your dog; that is, you can communicate whether it did a good job. In this training story, punishments weren’t used: the trainer gave a reward when the dog did the desired action and no reward when it did anything else.
So, from the dog’s perspective, it has a bunch of data and needs to find a pattern. The data specifies which actions were followed by rewards and which weren’t, along with context about the environment each action was performed in.
Inductivists might expect finding the right pattern to be easy. But dog training is slow and hard; the initial training took six months. Why? Because dogs pick up on the wrong patterns. They don’t and can’t use the kind of non-inductive intelligent thinking that lets humans figure out, with reasonable accuracy, which patterns are important. The data they’re given is compatible with many patterns other than “find mold”, and the way to train a dog is basically to provide thousands of data points until the dog can identify the intended pattern more accurately and rule out thousands of unwanted ones.
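That elimination process can be sketched as a toy simulation (my hypothetical model, not anything from the book): represent each candidate pattern as a set of cues the dog might think triggers the reward, and discard any candidate that mispredicts a training trial.

```python
import random
from itertools import combinations

random.seed(0)

# Hypothetical toy model: a candidate "pattern" is a set of cues the dog
# might believe triggers the reward. The intended pattern is {"mold"}.
cues = ["mold", "fridge", "alcohol", "usual_spot", "trainer_nearby"]
candidates = [set(c) for r in range(1, len(cues) + 1)
              for c in combinations(cues, r)]  # 31 candidates here

true_pattern = {"mold"}

def rewarded(present):
    """The trainer rewards exactly when mold is present."""
    return true_pattern <= present

remaining = list(candidates)
for trial in range(50):
    present = {c for c in cues if random.random() < 0.5}  # random scene
    outcome = rewarded(present)
    # keep only candidates whose prediction matched this trial's outcome
    remaining = [p for p in remaining if (p <= present) == outcome]

# The intended pattern always survives; the rest are gradually ruled out.
print(len(candidates), "candidates before,", len(remaining), "after")
```

With only five cues there are 31 candidates; a real environment offers vastly more cues and candidate patterns, which is one reason so many data points are needed.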
The dog learned all the common spots where the trainer hid mold in his home during training sessions. At the start of a session, the dog would run around to everywhere it had gotten a reward before, without using its nose. Why? It had identified a pattern correlating those locations with rewards.
The dog started finding refrigerators. Why? Because they contain smells that overlap with mold smells: any food made with fermentation has a mold-related smell. We see refrigerated food totally differently than hidden mold in a wall, but the dog saw the underlying similarity and didn’t know anything about human food culture.
Molds produce alcohols, so the dog started finding liquor cabinets, wine bottles, etc. The dog again failed to use the kind of “common sense” that humans use when allegedly doing induction so that they can quickly find useful patterns. The dog basically looked at the data more literally, more like a calculator or computer performing math on a dataset, because it doesn’t have human intelligence to guide it.
By the way, how can a dog find the right pattern with enough training? If it just had to search infinitely many patterns using some genetically built-in math algorithms, that’d be impossible; it’d basically run into the various reasons Karl Popper gave for why induction is impossible. The answer is that the dog doesn’t search through all logically possible patterns. It has built-in, genetic biases, predispositions, knowledge, etc. These provide some of the benefits of human intelligence that let humans appear to succeed at induction, but they aren’t as good. How was a dog’s inborn knowledge created? By evolution, which involves replication, variation and selection, not pattern matching or induction.
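That replication–variation–selection loop can be shown in a few lines (a minimal hypothetical toy, nothing like real biology): copy candidates with random changes, then keep the best.

```python
import random

random.seed(1)

TARGET = "sniff"  # stands in for some useful inborn behavior
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    """How many characters match the target behavior."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Variation: a copy with one random character changed."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(LETTERS) + s[i + 1:]

population = ["aaaaa"] * 20
for generation in range(2000):
    # replication with variation: each individual produces a mutated copy
    population += [mutate(p) for p in population]
    # selection: only the best 20 survive to the next generation
    population.sort(key=fitness, reverse=True)
    population = population[:20]
    if fitness(population[0]) == len(TARGET):
        break

print(population[0])
```

Note there’s no pattern matching over a dataset anywhere in the loop: knowledge accumulates purely through copying, random variation, and selective retention.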
Keeping the dog good at finding mold required ongoing training; the dog was never finished, and needed continual reinforcement. Possibly this is because daily life kept adding noisy data to the dog’s dataset, and the dog needed more useful data to keep the mold-finding pattern isolated and active. This could make sense on the assumption that dogs prioritize newer data over older data, basically weighting new data more heavily.
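That recency-weighting guess can be sketched in a few lines (my assumption about the mechanism; the book doesn’t spell one out): weight each data point by how recent it is, so the reward signal decays as unrelated experience accumulates, unless refresher training tops it up.

```python
def signal_strength(observations, decay=0.9):
    """Sum of reward observations, each weighted by decay**age (newest = age 0)."""
    return sum((decay ** age) * (1.0 if rewarded else 0.0)
               for age, rewarded in enumerate(reversed(observations)))

training = [True] * 10      # ten mold-reward data points
daily_noise = [False] * 30  # a month of unrelated experience

fresh = signal_strength(training)
stale = signal_strength(training + daily_noise)  # same rewards, just older
refreshed = signal_strength(training + daily_noise + [True] * 3)

print(round(fresh, 2), round(stale, 2), round(refreshed, 2))  # 6.51 0.28 2.91
```

On this model, the old training data never disappears; it just carries less and less weight, so occasional refresher sessions keep the pattern dominant.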
I like this story because it provides real-life examples of finding multiple unintended patterns in a dataset. (It’s sort of like the xkcd comic on Electoral Precedent, but those wrong patterns were chosen on purpose to be funny; he could have avoided finding them if he’d wanted to.) Getting a dog to find the pattern the trainer intends takes lots of work, rather than being fairly easy as inductivist thinking suggests it would be. The story may also give people some clues about how hard it is to communicate effectively with other people. When looking for mold in a building, you don’t normally tell people that cheese in the refrigerator doesn’t count; you just assume they already know that. Human communication relies far more on assumptions and prior knowledge, rather than on actually communicated information, than people realize.