In The Book of Why, Judea Pearl lays out what computer scientists call the representation problem: "How do humans represent possible worlds in their minds and compute the closest one, when the number of possibilities is far beyond the capacity of the human brain?"
In the Marvel movie Infinity War, Dr. Strange looks forward in time to see all the possible outcomes of a coming conflict. He views 14,000,605 possible futures. But did Dr. Strange really look at all the possible futures out there? Fourteen million is a convenient big number to include in a movie, but how many possible outcomes are there for your commute home? How many people could change your commute in just the tiniest way? Is it really a different outcome if you hit a bug while driving, if you were stopped at 3 red lights instead of 4, or if you had to stop at a crosswalk for a pedestrian? The details and differences in the possible worlds of our commute home range from the minuscule to the enormous (the difference between rolling your window down and a meteor landing in the road in front of you). Certainly, all things considered, there are more than 14 million possible futures for your drive home.
Somehow, we are able to live our lives and make decent predictions of the future despite the enormity of possible worlds ahead of us. Somehow we can represent possible worlds in our minds and determine which future world is closest to the reality we will experience. This ability allows us to plan for retirement, have kids, go to the movies, and cook dinner. Without it, we could not drive down the street, walk to a neighbor's house, or navigate a complex social world. But none of us are sitting in a green glow with our heads spinning in circles like Dr. Strange as we try to view all the possible worlds in front of us. What is happening in our minds to do this complex math?
Pearl argues that we solve this representation problem not through magical foresight, but through an intuitive understanding of causal structures. We can’t predict exactly what the stock market is going to do, whether a natural disaster is in our future, or precisely how another person will react to something we say, but we can get a pretty good handle on each of these areas thanks to causal reasoning.
We can throw out possible futures that have no causal structure connecting them to the reality we inhabit. You don't have to consider a world where Snorlax is blocking your way home, because your brain recognizes there is no causally plausible path to a Pokémon character sleeping in the road. Our brain easily discards the absurd possible futures and simultaneously recognizes the causal pathways that could have major impacts on how we will live. This gradually narrows the possibilities down to an amount of information that our brain (or a computer) can reasonably work with. We also know, without having to do the math, that rolling our window down or hitting a bug is not likely to start a causal pathway that materially changes the outcome of our commute home. The same goes for being stopped at a few more red lights or even stopping to pick up a burrito. Those possibilities exist, but they don't materially change our lives, and so our brain can discard them from the calculation. This, Pearl would argue, is the kind of work our brains are doing to solve the representation problem.
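The pruning described above can be caricatured as a two-stage filter: first discard worlds with no plausible causal pathway from our reality, then discard worlds whose differences are causally inert. A minimal sketch, where the candidate worlds, plausibility flags, impact scores, and threshold are all invented for illustration:

```python
# Toy sketch of causal pruning over possible worlds of a commute home.
# Each candidate world carries a causal-plausibility flag and a rough
# estimate of how much its differences matter to the outcome.
# All values here are made up for illustration.

worlds = [
    {"name": "stopped at one extra red light", "plausible": True,  "impact": 0.01},
    {"name": "hit a bug on the windshield",    "plausible": True,  "impact": 0.0},
    {"name": "meteor lands in the road",       "plausible": True,  "impact": 0.99},
    {"name": "Snorlax blocks the road",        "plausible": False, "impact": 1.0},
]

IMPACT_THRESHOLD = 0.5  # arbitrary cutoff for "materially changes the commute"

def prune(candidates, threshold=IMPACT_THRESHOLD):
    # Stage 1: drop worlds with no causal pathway from the reality we inhabit.
    causally_possible = [w for w in candidates if w["plausible"]]
    # Stage 2: drop worlds whose differences don't materially change the outcome.
    return [w for w in causally_possible if w["impact"] >= threshold]

remaining = prune(worlds)
print([w["name"] for w in remaining])  # only the meteor scenario survives
```

The point of the sketch is only that most of the 14 million futures never need to be examined at all: Snorlax is rejected before any impact estimate is computed, and the bug and the red light are kept causally possible but dropped as immaterial.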