Culture, Physics, Noise, & Thrawn

I am a big fan of Timothy Zahn’s books about the Star Wars character Thrawn, but one critique I would offer is of the way Thrawn derives insights about entire populations from their artwork. It’s a fun part of the stories, and I don’t mind suspending disbelief as I jump into the fictional worlds that Zahn has helped create, but culture is too turbulent for the idea to really hold unless you work extra hard at that suspension. The reality is that culture is ever moving, shifting, and swirling, and drawing broad conclusions about anyone and anything from artwork is not a sound practice.
 
 
In the book Sapiens, Yuval Noah Harari demonstrates this by contrasting culture with physics. He writes, “every culture has its typical beliefs, norms, and values, but these are in constant flux. … Unlike the laws of physics, which are free of inconsistencies, every man-made order is packed with internal contradictions.” Whether it is our political beliefs, the larger influencing factors that shape our media and artwork, or our individual opinions and mood, there is a lot of noise that influences our cultural products. We all see the world through unique perspectives influenced by where we happen to be at any given moment, what our past experiences have been, and factors that we are not even aware of. Drawing a single conclusion about anything is hardly ever possible, even for ideas and memes that are shared throughout a culture.
 
 
It is not just Thrawn who draws large overarching conclusions about entire groups of people based on their cultural outputs. Thrawn works as a character because he does something we all do. It is easy to watch a sporting event where our favored team is losing and decide that the opposing team’s fans are savage animals. It is easy to see high school kids these days and decide that they are all degenerates based on the way a few of them dress and behave. It is easy to make broad assumptions and generalizations about people in another country after seeing a tourism advertisement. In each of these areas our own biases, the randomness of who we see and when, and even deliberate propaganda and framing influence the way we come to understand the world. But how people act and behave, how they dress, and what cultural outputs they create constantly change and are not the same between people, or even within the same individual over time. Unlike physics, the culture of a people is constantly ebbing and flowing, constantly up for interpretation and debate, and constantly influenced by outside forces and appropriation. In a way we are all Thrawn, making grand pronouncements about others without recognizing just how turbulent culture truly is and how much noise and variability is possible within a culture.

The Cigarette Wars

Recently I have been writing about Judea Pearl’s The Book of Why, in which Pearl asks whether our reliance on statistics and our adherence to the idea that correlation is not causation have gone too far in science. For most people, especially students getting into science and those who have studied politics, reiterating that correlation does not imply causation is important. There are plenty of ways to misinterpret data, and there is no shortage of individuals and interest groups who would love to have an official scientist improperly assign causation to a correlation for their own political and economic gain. However, Pearl uses the cigarette wars to show us that failing to acknowledge that correlations can imply causation can also be dangerous.
“The cigarette wars were science’s first confrontation with organized denialism, and no one was prepared,” writes Pearl. For decades there was ample evidence from different fields and different approaches linking cigarette smoking to cancer. However, it isn’t the case that every single person who smokes a cigarette gets cancer. We all know people who smoked for 30 years and still seem to have healthy lungs. Sadly, we also all know people who developed lung cancer but never smoked. The relationship between cigarettes and lung cancer is not a perfect one-to-one correspondence, and tobacco companies jumped on this fact.
For years, it was abundantly clear that smoking greatly increased the risk of lung cancer, but no one was willing to say that smoking caused lung cancer, because powerful interest groups aligned against the idea and conditioned policy-makers and the public to believe that in the case of smoking and lung cancer, correlation was not causation. The evidence was obvious, but it was built on statistical information, and the organized denial was stronger. Who was to say whether people more susceptible to lung cancer were also more susceptible to start smoking in the first place? Arguments such as these hindered people’s willingness to adopt the clear causal picture that cigarettes caused cancer. People hid behind the possibility that the overwhelming evidence was wrong.
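The tobacco industry’s favorite argument, sometimes called the constitutional hypothesis, is easy to illustrate with a toy simulation. Here is a quick sketch in Python (the numbers are entirely made up, for illustration only) of a world where smoking has no causal effect on cancer at all, yet a hidden trait still produces a strong smoking-cancer correlation:

```python
import random

random.seed(42)

def simulate_person():
    # Hidden trait: raises both the chance of smoking and the chance of
    # cancer. Smoking itself has no effect on cancer in this toy world.
    trait = random.random()
    smokes = random.random() < trait
    cancer = random.random() < trait * 0.3
    return smokes, cancer

people = [simulate_person() for _ in range(100_000)]

def cancer_rate(group):
    return sum(has_cancer for _, has_cancer in group) / len(group)

smokers = [p for p in people if p[0]]
nonsmokers = [p for p in people if not p[0]]

# Smokers show roughly double the cancer rate of non-smokers even though
# smoking does nothing in this model; the hidden trait creates the correlation.
print(f"cancer rate among smokers:     {cancer_rate(smokers):.3f}")
print(f"cancer rate among non-smokers: {cancer_rate(nonsmokers):.3f}")
```

The sketch only shows why the argument sounded plausible; Pearl’s point is that the convergence of evidence from many independent study designs could, and eventually did, rule this kind of hidden common cause out.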
Today we are in a similar situation with climate change and other issues. It is clear that statistics cannot give us a 100% certain answer to a causal question, and it is true that correlation is not necessarily a sign of causation, but at a certain point we have to accept that the evidence is overwhelming. We have to accept when causal models that are not 100% proven have overwhelming support. We have to be able to make decisions without being derailed by organized denialism that seizes on the fact that correlation does not imply causation just to create doubt and confusion. Pearl’s warning is that failing to think carefully about causality can have real consequences (lung cancer in the cigarette wars, and devastating climate impacts today), and that we should take those consequences seriously when we look at the statistics and data that help us understand our world.
Post Hoc Conclusions

Our minds see a lot of patterns that don’t exist. We make observations of randomness and find patterns that we assume to be based on a causal link when in reality no causal structure exists between our observations. This happens in 3-point shooting in basketball, in observations of bomb locations in WWII London, and in music streaming services. We are primed to see patterns and causes, and we can construct them even when we shouldn’t. One contributing factor is that we tend to draw post hoc conclusions, interpreting observations after the fact without first predicting what we would expect to see.

 

Using the WWII example, Cass Sunstein and Richard Thaler show in the book Nudge how Londoners misperceived patterns in the German bombing during the war. The bombing wasn’t precise, and there was no real pattern to where bombs actually exploded across the city. Nevertheless, people mistakenly saw a pattern in the random distribution of bombs. The authors describe the mistaken pattern identification by writing, “We often see patterns because we construct our informational tests only after looking at the evidence.”
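It is easy to demonstrate how pure randomness produces apparent clusters. Here is a quick sketch in Python (a toy grid, not the actual London data) that drops bombs uniformly at random and still produces sectors that look deliberately targeted:

```python
import random
from collections import Counter

random.seed(7)

# Toy example: 64 bombs dropped uniformly at random over a 4x4 grid of
# city sectors, so each sector "should" receive 4 hits on average.
hits = Counter()
for _ in range(64):
    sector = (random.randrange(4), random.randrange(4))
    hits[sector] += 1

for row in range(4):
    print(" ".join(f"{hits[(row, col)]:2d}" for col in range(4)))
# Typical runs show sectors with 0 or 1 hits right beside sectors with 7
# or 8, purely from chance; a post hoc observer can "explain" that spread.
```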

 

People could map where bombs fell, and then create explanations for what targets the Germans were aiming at, for why the Germans would target a certain part of the city, and for what strategic purpose the bombing was trying to accomplish. But these reasons are all post hoc constructions meant to satisfy a non-existent pattern that someone expected to find. We also see this in basketball, when a shooter makes a few baskets and is believed to have the hot hand or be on fire. In music streaming services, algorithms are actually tweaked to be less random, because listeners who hear two or more consecutive songs by the same band will assume the service isn’t randomizing the music, even though random chance will sometimes pick a string of songs from the same band or even from the same album.
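The streaming claim is easy to check with a simulation. Here is a quick sketch in Python (assuming a made-up library of 10 artists with 10 songs each) estimating how often an honest shuffle plays the same artist back to back:

```python
import random

random.seed(1)

# Made-up library: 10 artists with 10 songs each, 100 songs total.
library = [artist for artist in range(10) for _ in range(10)]

def has_back_to_back(playlist):
    # True if any two adjacent songs share an artist.
    return any(a == b for a, b in zip(playlist, playlist[1:]))

trials = 10_000
streaky = sum(
    has_back_to_back(random.sample(library, len(library)))
    for _ in range(trials)
)

# Nearly 100% of truly random shuffles contain at least one same-artist
# streak, which listeners then misread as evidence of a broken shuffle.
print(f"shuffles with a same-artist streak: {streaky / trials:.1%}")
```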

 

The examples I mentioned in the previous paragraph are harmless cognitive errors stemming from poorly constructed post hoc explanations of phenomena. However, post hoc conclusions based on non-existent patterns are important to consider because they can have real consequences in our lives and societies. If we are in a position to make important decisions for our families, our companies, or our communities, we should recognize that we possess the ability to be wildly wrong about observed patterns. It is important that we use better statistical techniques, or listen to the experts who can honestly employ them, to help us make decisions. We should not panic about meaningless stock market fluctuations, and we should not incarcerate people based on poor understandings of crime statistics. We should instead remember that our brains will look for patterns and find them even if they don’t actually exist. We should state our assumptions before we make observations, rather than drawing post hoc conclusions from poor justifications for the patterns we want to see.
Accepting Unsound Arguments

Motivated reasoning is a major problem for those of us who want to have beliefs that accurately reflect the world. To live is to have preferences about how the world operates and relates to our lives. We would prefer not to endure suffering and pain, and would rather have comfort, companionship, and prosperity. We would prefer the world to provide for us, and we would prefer not to be too heavily strained. From pure physical needs and preferences all the way through social and emotional needs and preferences, our experiences of the world are shaped by what we want and what we would like. This is why we cannot get away from our own opinions and individual preferences in life, and part of why motivated reasoning becomes the problem that it is.

 

In Thinking, Fast and Slow, Daniel Kahneman writes about how motivated reasoning works in our minds, in terms of the arguments we make to support the conclusions we believe in, or would like to believe in. He writes, “When people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound.”

 

We justify the conclusions we want to believe with any argument that seems plausible and fits them. Our preference for one conclusion leads us to bend the arguments in its favor. Rather than truly analyzing the arguments, we discount factors that don’t support what we want to believe, and we disregard arguments that come from people who are reaching an alternative conclusion. Our preferences take over, and the things we want become more important than reality. Motivated reasoning gives us a way to support what we want to believe by twisting the value we assign to different facts.

 

Even in our own minds, demonstrating that an argument in favor of our preferred conclusion is flawed is unlikely to make much of a difference. We will continue to hold on to the flawed argument, choosing to believe there is something true about it, even when we know it is unsound or that it contradicts other facts we would also have to accept in order to support our preferred conclusion.

 

This doesn’t make us humans look very good. We can’t reason our way to new beliefs, and we can’t rely on facts and data alone to change minds. In the end, if we want to change our thoughts and behavior, as well as those of others, we have to shape people’s preferences. Motivated reasoning can support conclusions that do not accurately reflect the world around us, so those of us who care about reality have to make trusting science and expertise more salient before we can expect people to accept arguments built on rational evidence. If we don’t think about how preference and motivated reasoning lead people to believe inaccurate claims, we will fail to address the preferences that support problematic policies, and we won’t be able to guide our world in a direction based on reason and sound conclusions.
Detecting Simple Relationships

System 1, in Daniel Kahneman’s picture of the mind, is the part of our brain that is always on. It is the automatic part of our brain that detects simple relationships in the world, makes quick assumptions and associations, and reacts to the world before we are even consciously aware of anything. It is contrasted with System 2, which is more methodical, can hold complex and competing information, and can draw rational conclusions from detailed information through energy-intensive thought processes.

 

According to Kahneman, we only engage System 2 when we really need to. Most of the time, System 1 does just fine and saves us a lot of energy. We don’t have to think critically about what to do when the stoplight changes from green to yellow to red. Our System 1 can develop an automatic response so that we let off the gas and come to a stop without having to consciously think about every action involved in slowing down at an intersection. However, System 1 has some very serious limitations.

 

“System 1 detects simple relations (they are all alike, the son is much taller than the father) and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information.”

 

When relationships start to get complicated, like, say, the link between human activity and long-term climate change, System 1 will let us down. It also fails us when we see someone who looks like they belong to the Hell’s Angels on a father-daughter date at an ice cream shop, when we see someone who looks like an NFL linebacker in a book club, or when we see a little old lady driving a big truck. System 1 makes assumptions about the world based on simple relationships, and it is easily surprised. It can’t handle unique situations and edge cases, and it can’t hold complicated statistical information about the multiple actors and factors that influence the outcome of events.
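Kahneman’s point about purely statistical information can be made concrete with a base-rate calculation. Here is a quick worked sketch in Python (every number is an illustrative assumption of mine, not a real statistic) applying Bayes’ rule to the linebacker-in-a-book-club example:

```python
# All numbers below are illustrative assumptions, not real statistics.
p_linebacker = 1 / 100_000   # assumed base rate of NFL linebackers among adults
p_other = 1 - p_linebacker

p_looks_given_lb = 0.90      # assumed: most linebackers fit the stereotype
p_looks_given_other = 0.005  # assumed: 1 in 200 other men fit it too

# Bayes' rule: P(linebacker | looks the part)
numerator = p_looks_given_lb * p_linebacker
posterior = numerator / (numerator + p_looks_given_other * p_other)

# Prints roughly 0.18%: despite the strong resemblance, the man at the
# book club is almost certainly not a linebacker, because the base rate
# is tiny. System 1 matches the stereotype; System 2 has to run the math.
print(f"P(linebacker | looks the part) = {posterior:.2%}")
```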

 

System 1 is our default, and we need to remember where its strengths and weaknesses lie. It can help us make quick decisions while driving or catching an apple falling off a counter, but it can’t help us determine whether a defendant in a criminal case is guilty. There are times when our intuitive assumptions and reactions are spot on, but there are a lot of times when they can lead us astray, especially in cases that involve more than simple relationships and violate our expectations.