Narrative Confidence

We like to believe that having more information will make us more confident in our decisions and opinions. The opposite, however, may be true. I have written in the past about a jam study in which participants who selected a jam from a handful of options were happier with their choice than participants who selected from several dozen. More information and more choices seem like they should make us happier and more confident in our decisions, yet it was those who chose from the small sample who were more satisfied.

We like simple stories. They are easy for our brain to construct a narrative around and easy for us to have confidence in. The stories we tell ourselves and the conclusions we reach are often simplistic, often built on incomplete information, and often lack the nuance that is necessary to truly reflect reality. Our brains don’t want to work too hard, and don’t want to hold conflicting information that forces an unpleasant compromise. We don’t want to constantly wonder if we made the right choice, if we should do something different, if we need to try another option. We just want to make a decision and have someone tell us it was a good decision, regardless of the actual outcome or impact on our lives, the lives of others, or the planet.

Daniel Kahneman writes about this in his book Thinking Fast and Slow. He describes a study (not the jam study) where participants were presented with either one side or two sides of an argument. They had to choose which side they agreed with and rate their confidence. “Participants who saw one-sided evidence were more confident of their judgments than those who saw both sides,” writes Kahneman. “This is just what you would expect if the confidence that people experience is determined by the coherence of the story they manage to construct from available information. It is the consistency of the information that matters for a good story, not its completeness. Indeed, you will often find that knowing little makes it easier to fit everything you know into a coherent pattern.”

Learning a lot and truly understanding any given issue is challenging because it means we must build a complex picture of the world. We can’t rely on simple arguments and outlooks on life when we start to get into the weeds of an issue or topic. We will see that admirable people have tragic flaws. We will see that policies which benefit us may exploit others. We will find that things we wish to be true about who we are and the world we live in are only semi-true. Ignorance is bliss in the sense that knowing only a little bit about the world will allow you to paint a picture that makes sense to you, but that picture won’t be accurate, and it won’t acknowledge the negative externalities that the story may create. Simplistic narratives may help us come together as sports fans, or as consumers, or as a nation, but we should all be worried about what happens when we have to accept the inaccuracies of our stories. How do we weave a complex narrative that will bring people across the world together in a meaningful and peaceful way without driving inequality and negative externalities? That is the challenge of the age, and unfortunately, the better we become at accurately depicting the world we inhabit, the less confident any of us will be about the conclusions and decisions for how we should move forward.
System 1 Success

“The measure of success for System 1 is the coherence of the story it manages to create.”

Daniel Kahneman writes that in his book Thinking Fast and Slow when discussing the quick conclusions of System 1, the part of our mental processing that is fast and intuitive and that operates on simple associations and heuristics.

System 1 stitches together a picture of the world and environment around us with incomplete information. It makes assumptions and quick estimates about what we are seeing and compiles a coherent story for us. And what is important for System 1 is that the story be coherent, not that the story be accurate.

System 2, the part of our brain which is more rational, calculating, and slower, is required for making detailed assessments of the information that System 1 takes in. But normally we don’t activate System 2 unless we really need to. If we judge that System 1 is making coherent connections and associations, then we don’t subject those connections to the attention and scrutiny of System 2.

It is important that we understand this about our minds. We can go about acting intuitively and believing that our simple narrative is correct, but we risk believing our own thoughts simply because they feel true, coherent, and in line with our past experiences. Our thoughts will necessarily be inadequate, however, to fully encompass the reality around us. Other people will have different backgrounds, different histories, and different narratives knitted together in their own minds. It’s important that we find a way to engage System 2 when the stakes are high, to make more thoughtful considerations than System 1 can generate. Simply because a narrative feels intuitively correct doesn’t mean that it accurately reflects the world around us, or that it will fit within the narrative frameworks other people construct.
First Impressions Matter

In Thinking Fast and Slow, Daniel Kahneman describes a research study that shows the power of the halo effect. The halo effect is the phenomenon where positive traits in a person outshine their negative traits or characteristics, or cause us to project additional positive traits onto them. For example, think of your favorite celebrity. You know they are good looking and talented at whatever they do, and you most likely also ascribe a number of positive traits to them that you don’t really have evidence for. You probably believe they have the same political beliefs as you, that they pay their taxes, and that they don’t litter. If you discovered that they didn’t, your brain would want to discredit that information, or you might face some cognitive dissonance as you square the negative characteristic with the fact that the person looks good and is talented.

The study Kahneman references shows the power of the halo effect by giving people six descriptions of a fictitious person. Some people were shown three positive characteristics followed by three negative traits. Another group was shown a different fictitious person with the same six traits, but listed in reverse, with the negative traits first followed by the positive. Kahneman writes, “The sequence in which we observe characteristics of a person is often determined by chance. Sequence matters, however, because the halo effect increases the weight of first impressions, sometimes to the point that subsequent information is mostly wasted.”

The study shows that first impressions matter a lot, even when we are not actually meeting someone in person. When the first thing we learn about a person is something positive, it can be easy to overlook negative traits that we discover later, and the same is true in reverse. This idea is part of what drove Malcolm Gladwell to write his new book Talking to Strangers. I have not read Gladwell’s book, but I have listened to him talk about it on several podcasts. He discusses the death of Sandra Bland, and the interaction she had with law enforcement that led to her arrest and subsequent suicide. First impressions matter, and the first impression she made on the police officer who pulled her over was negative, shaping the entire interaction between Sandra and the officer and ultimately leading to her arrest. Gladwell would also argue, I believe, that first impressions can be formed before you have even met someone, simply by absorbing racial or other stereotypes.

Gladwell also discusses Bernie Madoff in his book, a savvy con man who relied on the halo effect to swindle investors. He charmed people and seemed successful, so people who trusted him with investments had trouble seeing through the lies. They wanted to believe the positive traits they first observed in him, and any hints of fraud were easily missed or ignored.

The best we can hope for is awareness of the halo effect, and to remember how much our very first impressions can matter. How we put ourselves forward can shape the interactions we have with others. But we can also remember to give people a break, and give them second chances when our first impressions of them are not great. Remember to look beyond the first observed trait to see the whole picture of the other people in your life, and try to set up situations so that you don’t judge people immediately on their appearance, and can look further to know and understand them a little better.
Positive Test Strategies

A real danger for us, and one that I don’t know how to move beyond, is the positive test strategy: the search for evidence that confirms what we want to believe or what we think is true. When we already have an intuition about something, we look for examples that support our intuition. Looking for examples that don’t support our thought, or situations where our idea seems to fall short, is uncomfortable, and not something we are very good at. Positive test strategies are a form of motivated reasoning, where we find ways to justify what we want to believe, and find ways to align our beliefs with what happens to be best for us.

In Thinking Fast and Slow, Daniel Kahneman writes the following: “A deliberate search for confirming evidence, known as positive test strategy, is also how System 2 tests a hypothesis. Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold.”

In science, the best way to conduct a study is to try to refute the null hypothesis, rather than to try to directly confirm your own hypothesis. You observe a condition in the world, make an informed guess about why you observe what you do, and then formulate a null hypothesis before you begin any testing. Your null hypothesis says that actually nothing is happening here after all. So you might think that teenage drivers are more likely to get in car crashes at roundabouts than at regular intersections, or that crickets are more likely to eat a certain type of grass. Your null hypotheses are that teenagers do not crash at roundabouts more than at typical intersections, and that crickets don’t display a preference for one type of grass over another.

In your experimental study, instead of seeking out confirmation to show that teenagers crash more at roundabouts or that crickets prefer a certain grass, you test whether the data are inconsistent with there being no difference at all. In other words, you seek to disprove the null hypothesis (that there is no difference) rather than try to prove that something specific is happening. It is a subtle difference, but it is important. It’s also important to note that good science doesn’t seek to disprove the null hypothesis in a specific direction. Good science tries to avoid positive test strategies by showing that the nothing-to-see-here hypothesis is wrong and that there is something to see, but it could be in any direction. If scientists do want to provide more evidence that the effect runs in a given direction, they look for stronger evidence and a lower chance of random sampling error.
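
To make the distinction concrete, here is a minimal sketch of testing a “nothing is happening here” null hypothesis with a two-sided permutation test. The crash counts are entirely made up for illustration:

```python
import random
import statistics

def permutation_test(sample_a, sample_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test of the null hypothesis that
    both samples were drawn from the same distribution."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(n_permutations):
        # Relabel the data as if group membership didn't matter,
        # which is exactly what the null hypothesis claims.
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical monthly teen crash counts at roundabouts vs. regular intersections
roundabouts = [4, 5, 6, 5, 7, 6]
intersections = [2, 3, 2, 4, 3, 2]
p_value = permutation_test(roundabouts, intersections)
# A small p-value says the "no difference" story explains the data poorly.
```

Note that the test is two-sided: it counts differences that are surprisingly large in either direction, not just the direction we expected, which is the guard against positive test strategies described above.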

In our minds, however, we don’t often do this. We start to see a pattern of behavior or outcomes, and we start searching for explanations for what we see. We come up with a hypothesis, think of more things that would fit with our hypothesis, and find ways to explain how things align with it. In My Big Fat Greek Wedding, this is what the character Gus does when he tries to show that all words in the world are originally Greek.

Normally, we identify something that would be in our personal interest or would support our group identity in a way that helps raise our social status. From there, we begin to adopt hypotheses about how the world should operate that support what is in our personal interest. We then look for ways to test our hypotheses that would support them, and we avoid situations where they could be disproven. Finding things that support what we already want to believe is comforting and relatively easy compared to identifying a null hypothesis, testing it, and then examining the results without a predetermined outcome that we want to see.
Causal Versus Statistical Thinking

Humans are naturally causal thinkers. We observe things happening in the world and begin to construct causal explanations for them, asking what could have led to the observation we made. We attribute intention and desire to people and things, and work out a narrative that explains why things happened the way they did.

The problem, however, is that we are prone to lots of mistakes when we think in this way, especially when we start looking at situations that require statistical thinking. In his book Thinking Fast and Slow, Daniel Kahneman writes the following:

“The prominence of causal intuitions is a recurrent theme in this book because people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning. Statistical thinking derives conclusions about individual cases from properties of categories and ensembles. Unfortunately, System 1 does not have the capability for this mode of reasoning; System 2 can learn to think statistically, but few people receive the necessary training.”

System 1 is our fast brain. It works quickly to identify associations and patterns, but it doesn’t take in a comprehensive set of information and isn’t able to do much serious number crunching. System 2 is our slow brain, able to do the tough calculations, but limited to working on the set of data that System 1 is able to accumulate. System 2 is also only active for short periods of time, and only when we consciously make use of it.

This leads to our struggles with statistical thinking. We have to take in a wide range of possibilities, categories, and combinations. We have to make predictions and understand that in some set of instances we will see one outcome, but in another set of circumstances we may see a different outcome. Statistical thinking doesn’t pin down a concrete answer the way our causal thinking likes. As a result, we reach conclusions based on incomplete considerations, we ignore important pieces of information, and we assume that we are correct because our answer feels correct and satisfies some criteria. Thinking causally can be powerful and useful, but only if we understand the statistical dimensions at hand, and can fully think through the implications of the causal structures we are defining.
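
Kahneman’s book opens with an example of exactly this gap: Steve, a meek and tidy soul, feels like a librarian, but statistically he is far more likely to be a farmer because farmers vastly outnumber male librarians. The sketch below works that logic through Bayes’ rule; all of the numbers are assumptions for illustration, not figures from the book:

```python
# Statistical thinking: derive a conclusion about an individual case from
# the properties of categories (base rates), not just from how well a
# description "fits" the category.

# Assumed base rates: roughly 20 male farmers for every male librarian.
prior_librarian = 1 / 21
prior_farmer = 20 / 21

# Assume the meek-and-tidy description is four times as likely to apply
# to a librarian as to a farmer.
p_desc_given_librarian = 0.8
p_desc_given_farmer = 0.2

# Bayes' rule: the posterior is proportional to prior * likelihood.
joint_librarian = prior_librarian * p_desc_given_librarian
joint_farmer = prior_farmer * p_desc_given_farmer
posterior_librarian = joint_librarian / (joint_librarian + joint_farmer)

# Even with a description that fits librarians four times better,
# the base rates make "farmer" the far more probable answer.
```

With these assumed numbers the posterior probability that Steve is a librarian works out to 1/6: the intuitive, story-fitting answer and the statistical answer point in opposite directions.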
The Mental Scaffolding for Religious Belief

Yesterday’s post was about our mental tendency to see agency in the world where there is none. We attribute agency to inanimate objects, imbue them with emotions, attribute intentions, and ascribe goals to objects that don’t appear to have any capacity for conscious thought or awareness. From a young age, our minds are built to see causality in the world, and we attribute causal actions linked to preferred outcomes to people, animals, plants, cars, basketballs, hurricanes, computers, and more. This post takes an additional step, looking at how a mind that intuitively perceives causal actions all around it is primed for religious belief. There are structures in the mind that act as mental scaffolding for the construction of religious beliefs, and understanding these structures helps shed light on what is taking place inside the human mind.

In Thinking Fast and Slow, Daniel Kahneman writes the following:

“The psychologist Paul Bloom, writing in The Atlantic in 2005, presented the provocative claim that our inborn readiness to separate physical and intentional causality explains the near universality of religious beliefs. He observes that we perceive the world of objects as essentially separate from the world of minds, making it possible for us to envision soulless bodies and bodiless souls. The two modes of causation that we are set to perceive make it natural for us to accept the two central beliefs of many religions: an immaterial divinity is the ultimate cause of the physical world, and immortal souls temporarily control our bodies while we live and leave them behind as we die.”

From the time that we are small children, we experience desires for changes in the physical state around us. When we are tiny, we have no control over the world, but as we grow we develop the capacity to change the physical world to align with our desires. Infants who cannot directly change their environment express their discomfort by crying, and (hopefully) receive loving attention in return. From a young age, we begin to understand that expressing some sort of discomfort brings change and comfort from a being that is larger and more powerful than we are.

This is an idea I heard years ago on a podcast. I don’t remember what show it was, but the argument the guest presented was that humans have a capacity for imagining a higher being with greater power than our own because that is literally the case when we are born. From the womb onward, we experience larger individuals who provide for us, feed us, protect us, and talk down to us as if from above. In the womb we are literally within a protective world that nourishes our bodies and is ever present and ever powerful. We have an innate sense that there is something more than us, because we develop within another person, experiencing firsthand that we are part of something bigger. And when we are tiny and have no control over our world, someone else is there to protect and take care of us, and all we need to do to summon help is to cry out to the sky as we lie on our backs.

As we age, we learn to control our physical bodies with our mental thoughts and learn to use language to communicate our desires to other people. We don’t experience the build-up of action potentials between neurons prior to our decisions to do something. We experience only ourselves, acting in the world and mentally interpreting what is around us. We carry with us the innate sense that we are part of something bigger and that there is a protector out there who will come to us if we cry out toward the sky. We don’t experience the phenomenological reality of the universe; we experience the narrative that we develop in our minds beginning at a very young age.

My argument in this piece is that both Paul Bloom, as presented in Kahneman’s book, and the guest from the podcast are correct. The mind contains scaffolding for religious beliefs, which makes the idea that a larger deity exists and is the original causal factor of the universe feel so intuitive. Our brains are effectively primed to look for things that support the intuitive sense of our religions, even if there is no causal structure there, or if the causal structure can be explained in a more scientific and rational manner.
Seeing Agents

About halfway through my undergraduate degree, a key thought process in my brain began to change. It was an intentional change on my part, and one that took quite a lot of effort. After several years I was able to stop seeing agency in things that were not alive. I was able to get away from the mindset that everything happens for a reason, and I started to accept that some things are random, that some things are only imbued with meaning by me, and that potentially everything in the universe is the result of the physical laws of nature.

Today I don’t believe that the table at which I write has any emotional experience of me using it to type out a blog post. I don’t think my car actually knows whether I drive it today, and I don’t think that it has some preference deep inside to be driven. I don’t believe that the house I am about to move out of will actually be sad (or happy) to see me leave. But there was a time in my life when a piece of me may have believed such things. I certainly knew that houses, stuffed animals, and cars were not alive, but somewhere deep inside I was assigning agency to inanimate objects, imbuing them with emotions, thoughts, and desires of their own.

It was more than just cartoons that made me think the way I did about inanimate objects, and that is why it took several years late in my undergraduate degree to begin changing the way I thought about the world. I was seeing agents where there were none, and it was hard to remove agency from things that I had animated in my own mind. Research presented in Daniel Kahneman’s book Thinking Fast and Slow helps explain what was happening inside my mind:

“The perception of intention and emotion is irresistible; only people afflicted by autism do not experience it. All this is entirely in your mind of course. Your mind is ready and even eager to identify agents, assign them personality traits and specific intentions, and view their actions as expressing individual propensities. Here again, the evidence is that we are born prepared to make intentional attributions…” 

Kahneman describes a study in which participants watch geometric shapes chase each other around on a screen. People see random shapes and assign meaning, intention, and agency to the two-dimensional objects. We create a story that justifies the behavior we intuit from them and gives them life. Our minds are geared to see agents where there are none, probably to help us understand other people, to be able to reflect on our own emotions, and to become better social beings. Yuval Noah Harari, in his book Sapiens, discusses how the cognitive revolution may have brought about this ability by giving us the capacity for imagination, and the capacity to create narratives and stories that foster social cohesion and shared meaning.

Ultimately, it doesn’t matter much if you name your car and view it as having agency; you might even treat it better if you do. However, this tendency can spill over into other aspects of our lives in problematic ways. We can become too attached to material objects, unable to let go of clutter and stuff. As Kahneman continues, and as I’ll write about tomorrow, this is also likely part of why we so often see the world through religious eyes, and conflicting religious beliefs and values have certainly been at the root of much violence and death in human existence, even if religion has given us community and social mission. Seeing agents where they do not exist is an interesting part of our humanity. It can help us gel together, or it can serve as the basis for casting out others and bringing violence upon them. It’s not easy to overcome, but I think it is necessary if we are to have accurate beliefs about the world and advance as a global community.
Seeing Causality

In Thinking Fast and Slow, Daniel Kahneman describes how a Belgian psychologist changed the way we understand our thinking with regard to causality. The traditional view held that we make observations about the world and come to understand causality through repeated exposure to phenomenological events. As Kahneman writes, “[Albert] Michotte [1945] had a different idea: he argued that we see causality just as directly as we see color.”

The argument from Michotte is that causality is an integral part of the human psyche. We think and understand the world through a causal lens. From the time we are infants, we interpret the world causally and we can see and understand causal links and connections in the things that happen around us. It is not through repeated experience and exposure that we learn to view an event as having a cause or as being the cause of another event. It is something we have within us from the beginning.

“We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation.”

I try to remember this idea of our intuitive and automatic causal understanding of the world when I think about science and how I should relate to it. We go through a lot of effort to make sure that our scientific thinking is as clear as possible. We use randomized controlled trials (RCTs) to test the accuracy of our hypotheses, but sometimes an intensely rigorous scientific study isn’t necessary for us to change our behavior based on simple scientific exploration via normal causal thinking. There are times when we can trust our causal intuition without having to rely on an RCT for evidence. I don’t know where to draw the line between causal inferences that we can accept and those that need an RCT, but through honest self-awareness and reflection, we should be able to identify times when our causal interpretations demonstrate validity and are reasonably well insulated from our own self-interests.

The Don’t Panic Geocast has discussed two academic journal articles on the effectiveness of parachutes for preventing death when falling from an aircraft during the Fun Paper Friday segment of two episodes. The two papers, both published in the British Medical Journal, are satirical, but they demonstrate an important point. We don’t need to conduct an RCT to determine whether using a parachute when jumping from a plane will be more effective at helping us survive the fall than jumping without one. It is an extreme example, but it demonstrates that our minds can see and understand causality without always needing an experiment to confirm a causal link. In a more consequential example, we can trust our brains when they observe that smoking cigarettes has negative health consequences, including an increased likelihood of developing lung cancer. An RCT to determine the exact nature and frequency of cancer development in smokers would certainly be helpful in building our scientific knowledge, but the scientific consensus around smoking and cancer should have been accepted much more readily than it was. An RCT in this case would take years and would potentially be unethical or impossible. Tobacco companies obfuscated the science by taking advantage of the fact that such an RCT couldn’t be performed, and we failed to accept the causal link that our brains could see but could not prove as definitively as an RCT can. Nevertheless, we should have trusted our causal thinking brains and accepted the intuitive answer.

We can’t always trust the causal conclusions that our mind reaches, but there are times where we should acknowledge that our brains think causally, and accept that the causal links that we intuit are accurate.
A Capacity for Surprise

For much of our waking life we operate on System 1, or we at least allow System 1 to be in control of many of our actions, thoughts, and reactions to the world around us. We don’t normally have to think very hard about our commute to work, we can practically walk through the house in the early morning on our way to the coffee machine with our eyes closed, and we can nod to the Walmart greeter and completely forget them half a second after we have passed. Most of the time, the pattern of associated ideas from System 1 is great at getting us through the world, but occasionally, something happens that doesn’t fit the model. Occasionally, something reveals our capacity for surprise.

Seeing someone juggling in the middle of a shopping aisle in Walmart would be a surprise (although less of a surprise in a Walmart than in some other places). Stepping on a stray Lego is an unwelcome early morning pre-coffee surprise, as is an unexpected road closure on our commute. These are examples of large surprises in our daily routine, but we can also have very small surprises, like when someone tells us we will be meeting with Aaron to discuss our personal financial plan, and in walks Erin, surprising us by being a woman in a position we may have subconsciously associated with men.

“A capacity for surprise is an essential aspect of our mental life,” writes Daniel Kahneman in his book Thinking Fast and Slow, “and surprise itself is the most sensitive indication of how we understand our world and what we expect from it.”

Because so much of our lives is in the hands of System 1, we are frequently surprised. If we consciously think about the world and the vast array of possibilities at any moment, we might not be too surprised at any given outcome. We also would be paralyzed by trying to make predictions of a million different outcomes for the next five minutes. System 1 eases our cognitive load and sets us up for routine expectations based on the model of the world it has adapted from experience. Surprise occurs when something violates our model, and one of the best ways to understand what that model looks like is to look at the situations that surprise us.

Bias is revealed through surprise, when an associated pattern is interrupted by something that we were not expecting. The examples can be harmless, such as not expecting a friend to answer the phone sick, with a raspy and sleepy voice. But often our surprise can reveal more consequential biases, such as when we are surprised to see a person from a racial minority in a position of authority. It might not seem like much, but our surprise can convey a lot about what we expect and how we understand the world, and other people might pick up on that, even if we didn’t intend to convey an expectation about another person’s place in the world. We are constantly making predictions about what we will experience, and our capacity for surprise reveals the biases within those predictions, saying something meaningful about what our model of the world looks like.
Patterns of Associated Ideas

In Thinking Fast and Slow, Daniel Kahneman argues that our brains try to conserve energy by operating on what he calls System 1. The part of our brain that is intuitive, automatic, and makes quick assessments of the world is System 1. It doesn’t require intense focus, it quickly scans our environment, and it simply ignores stimuli that are not crucially important to our survival or the task at hand. System 1 is our low-power resting mode, saving energy so that when we need to, we can activate System 2 for more important mental tasks.

Without our conscious recognition, System 1 builds mental models of the world that shape the narrative we use to understand everything that happens around us. It develops simple associations and expectations for things like when we eat, what we expect people to look like, and how we expect the world to react when we move through it. Kahneman writes, “as these links are formed and strengthened, the pattern of associated ideas comes to represent the structure of events in your life, and determines your interpretations of the present as well as your expectations of the future.”

It isn’t uncommon for different people to watch the same TV show, read the same news article, or witness the same event and walk away with completely different interpretations. We might not like a TV show that everyone else loves. We might reach a vastly different conclusion from reading a news article about global warming, and we might interpret the actions or words of another person completely differently. Part of why we don’t all see things the same, Kahneman might argue, is that we have all trained our System 1 in unique ways. We have different patterns of associated ideas that we use to fit information into a comprehensive narrative.

If you never have interactions with people who are different than you are, then you might be surprised when people don’t behave the way you expect. When you have a limited background and experience, then your System 1 will develop a pattern of associated ideas that might not generalize to situations that are new for you. How you see and understand the world is in some ways automatic, determined by the pattern of associated ideas that your System 1 has built over the years. It is unique to you, and won’t fit perfectly with the associated ideas that other people develop.

We don’t have control over System 1. If we activate our System 2, we can start to influence what factors stand out to System 1, but under normal circumstances, System 1 will move along building the world that fits its experiences and expectations. This works if we want to move through the world on auto-pilot with few new experiences, but if we want to be more engaged in the world and want to better understand the variety of humanity that exists within it, our System 1 on its own will never be enough, and it will continually let us down.