The Representation Problem


In The Book of Why, Judea Pearl lays out what computer scientists call the representation problem by asking, “How do humans represent possible worlds in their minds and compute the closest one, when the number of possibilities is far beyond the capacity of the human brain?”
 
 
In the Marvel movie Infinity War, Dr. Strange looks forward in time to see all the possible outcomes of a coming conflict. He views 14,000,605 possible futures. But did Dr. Strange really look at all the possible futures out there? Fourteen million is a convenient big number for a movie, but how many possible outcomes are there for your commute home? How many people could change your commute in just the tiniest way? Is it really a different outcome if you hit a bug while driving, if you were stopped at 3 red lights instead of 4, or if you had to stop at a crosswalk for a pedestrian? The details and differences in the possible worlds of our commute home range from the minuscule to the enormous (the difference between rolling your window down and a meteor landing in the road in front of you). Certainly, all things considered, there are more than 14 million possible futures for your drive home.
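As a rough back-of-the-envelope illustration of that last claim (the choice of events and their independence are assumptions made purely for the arithmetic, not anything from Pearl or the film): if each small detail of the commute is treated as an independent yes/no event, just 24 of them already produce about 16.8 million combinations, more futures than Dr. Strange viewed.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not from the book):
# treat each small detail of a commute as an independent yes/no event and
# count how many combinations of those details exist.

DR_STRANGE_FUTURES = 14_000_605  # the number quoted in Infinity War

for n_events in (10, 20, 24, 30):
    n_futures = 2 ** n_events
    comparison = "more" if n_futures > DR_STRANGE_FUTURES else "fewer"
    print(f"{n_events} binary events -> {n_futures:,} possible futures "
          f"({comparison} than Dr. Strange viewed)")
```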
 
 
Somehow, we are able to live our lives and make decent predictions of the future despite the enormity of possible worlds ahead of us. Somehow we can represent possible worlds in our minds and determine which future world is closest to the reality we will experience. This ability allows us to plan for retirement, have kids, go to the movies, and cook dinner. If we could not do this, we could not drive down the street, could not walk to a neighbor’s house, and could not navigate a complex social world. But none of us are sitting in a green glow with our heads spinning in circles like Dr. Strange as we try to view all the possible worlds in front of us. What is happening in our minds to do this complex math?
 
 
Pearl argues that we solve this representation problem not through magical foresight, but through an intuitive understanding of causal structures. We can’t predict exactly what the stock market is going to do, whether a natural disaster is in our future, or precisely how another person will react to something we say, but we can get a pretty good handle on each of these areas thanks to causal reasoning.
 
 
We can throw out possible futures that have no causal structure related to the reality we inhabit. You don’t have to consider a world where Snorlax is blocking your way home, because your brain recognizes there is no causal plausibility to a Pokémon character sleeping in the road. Our brains easily discard the absurd possible futures and simultaneously recognize the causal pathways that could have major impacts on how we will live. This approach gradually narrows the possibilities down to a level of information that our brains (or our computers) can reasonably work through. We also know, without having to do the math, that rolling our window down or hitting a bug is not likely to start a causal pathway that materially changes the outcome of our commute home. The same goes for being stopped at a few more red lights or even stopping to pick up a burrito. Those possibilities exist, but they don’t materially change our lives, and so our brain can discard them from the calculation. This, Pearl would argue, is the kind of work our brains are doing to solve the representation problem.
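A toy sketch can make that two-step filter concrete. The futures and their labels below are entirely made up for illustration; this is a caricature of the idea described above, not a model of the brain or of Pearl's formalism.

```python
# Toy sketch (hypothetical data): prune possible futures the way the paragraph
# above describes -- first discard the causally implausible ones, then ignore
# the ones whose differences don't materially change the outcome.

futures = [
    {"event": "stopped at one extra red light",     "causally_plausible": True,  "materially_changes_outcome": False},
    {"event": "stop for a pedestrian at a crosswalk", "causally_plausible": True,  "materially_changes_outcome": False},
    {"event": "meteor lands in the road ahead",     "causally_plausible": True,  "materially_changes_outcome": True},
    {"event": "Snorlax is asleep across the road",  "causally_plausible": False, "materially_changes_outcome": True},
]

# Step 1: throw out futures with no causal connection to the world we inhabit.
plausible = [f for f in futures if f["causally_plausible"]]

# Step 2: of what remains, only reason carefully about futures that would
# materially change how the commute ends.
worth_planning_for = [f["event"] for f in plausible if f["materially_changes_outcome"]]

print("Worth planning for:", worth_planning_for)
# -> Worth planning for: ['meteor lands in the road ahead']
```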

Navigating Uncertainty with Nudges


In Risk Savvy, Gerd Gigerenzer makes a distinction between known risks and uncertainty. In a footnote for a figure, he writes, “In everyday language, we make a distinction between certainty and risk, but the terms risk and uncertainty are used mostly as synonyms. They aren’t. In a world of known risks, everything, including the probabilities, is known for certain. Here statistical thinking and logic are sufficient to make good decisions. In an uncertain world, not everything is known, and one cannot calculate the best option. Here, good rules of thumb and intuition are also required.” Gigerenzer’s distinction between risk and uncertainty is important. He demonstrates that people can manage decision-making when the risks are known and calculable, but that they need to rely on intuition and good judgement when dealing with uncertainty. One way to improve that judgement and intuition is to use nudges.

 

In the book Nudge, Cass Sunstein and Richard Thaler encourage choice architects to design systems and structures that help individuals make the best decision in a given situation, as defined by the chooser. Much of their argument is supported by research presented by Daniel Kahneman in Thinking, Fast and Slow, where Kahneman demonstrates how predictable biases and cognitive errors lead people to make decisions they likely wouldn’t make if they had clearer information, could free themselves from irrelevant biases, and could improve their statistical thinking. Gigerenzer’s quote supports Sunstein and Thaler’s nudges by building on Kahneman’s research. Distinguishing between risk and uncertainty helps us understand when to use nudges, and how aggressive our nudges may need to be.

 

Gigerenzer uses casino slot machines as an example of known risk, and stocks, romance, earthquakes, business, and health as examples of uncertainty. When we gamble, we can know the statistical chances that our bets will pay off and calculate optimal strategies (there is a reason the casino dealer stays on 17). We won’t know the outcome ahead of time, but we can precisely define the risk. The same cannot be said for picking the right stocks or the right romantic partner, or for creating business, earthquake preparedness, or health plans. We may know the five-year rate of return for a company’s stock, the divorce rate in our state, the average frequency and strength of earthquakes in our region, and how old our grandfather lived to be, but we cannot use this information alone to calculate the risk precisely. We don’t know exactly what business trends will arise in the future, we don’t know for sure whether we have a genetic disease that will strike us (or our romantic partner) down sooner than expected, and we can’t say for sure that a 7.0 earthquake is or is not possible next month.
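To make the “calculable risk” side of that contrast concrete, here is a minimal sketch. It uses a straight-up bet in European roulette rather than a slot machine (slot machine odds vary by machine, while roulette’s are fixed and public); the point is only that in the casino every number the calculation needs is known.

```python
# Minimal sketch of a "known risk" in Gigerenzer's sense: every probability and
# payout is known, so the expected value of the bet can be computed exactly.
# Example: a $1 straight-up bet on a single number in European roulette
# (37 pockets, pays 35-to-1 on a win).

p_win = 1 / 37          # one winning pocket out of 37
payout_if_win = 35      # profit in dollars if the number hits
loss_if_lose = 1        # the dollar staked

expected_value = p_win * payout_if_win - (1 - p_win) * loss_if_lose
print(f"Expected value per $1 bet: ${expected_value:.4f}")   # about -$0.027
```

No analogous arithmetic exists for the uncertain cases Gigerenzer lists: there is no known win probability to plug in for a marriage, a business plan, or next month’s earthquake.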

 

But nudges can help us with these decisions. We can use statistical information on business development and international stock returns to identify general rules of thumb for investing. We can listen to parents and elders and learn from their advice and mistakes when selecting a romantic partner, intuiting the traits that make a good (or bad) spouse. We can overengineer our bridges and skyscrapers by 10% to give us a little more assurance that they will survive a major and unexpected earthquake. Nudges are helpful because they augment our gut instincts and make the rules of thumb we rely on more concrete and visible.

 

Expecting everyone’s individual intuition and heuristics to be up to the task of navigating uncertainty is likely to lead to many poor choices. But if we pool the statistical information available, provide guides, communicate rules of thumb that have panned out for many people, and structure choices in ways that present this information clearly, then people can likely make marginally better decisions. My suggestion in this post is a nudge to use more nudges in moments of uncertainty. When certainty exists, or even when calculable risks exist, nudges may not be needed. But once we get beyond calculable risk, where we must rely on judgement and intuition, nudges are important tools to help people navigate uncertainty and improve their decision-making.