Experiencing Versus Remembering

My last two posts have been about the difference between how we experience life and how we remember what happens in it. This is an important idea in Daniel Kahneman’s book Thinking Fast and Slow. Kahneman explains the ways in which our minds make predictable errors when thinking statistically, when trying to remember the past, and when making judgments about reality. Kahneman describes our mind as having two selves. He writes,

“The experiencing self is the one that answers the question: Does it hurt now? The remembering self is the one that answers the question: How was it on the whole? Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self.”

In my post about the Peak-End Rule I highlighted findings from Kahneman that show that the remembering self isn’t very good at making accurate judgments about a whole experience. It more or less averages the best (or worst) moment of an experience with its ending. The ups and downs throughout, and the actual average quality of the whole experience, aren’t that relevant to the way we think back on it.

Duration Neglect also demonstrates how the remembering self misjudges our experiences: the length of an experience barely factors into how we remember it. A long, mostly unpleasant experience with a mild ending can be remembered more fondly than a shorter experience that was better overall but ended badly.
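As a toy sketch of these two ideas (this is just the averaging rule described above, not Kahneman’s actual scoring model, and the ratings are made up), imagine rating each moment of an experience for discomfort and comparing what the two selves would report:

```python
# Toy model of the Peak-End Rule with Duration Neglect.
# Each experience is a list of moment-by-moment discomfort ratings
# (0 = fine, 10 = awful). The "remembering self" score is simply the
# average of the worst moment and the final moment; the length of the
# list never enters the formula (duration neglect).

def remembered_discomfort(moments):
    return (max(moments) + moments[-1]) / 2

def experienced_discomfort(moments):
    # The experiencing self lives through every moment.
    return sum(moments)

long_trial = [6, 6, 6, 6, 6, 6, 2]   # longer, but ends mildly
short_trial = [6, 6]                 # shorter, but ends at the peak

print(remembered_discomfort(long_trial))    # 4.0 -> remembered as less bad
print(remembered_discomfort(short_trial))   # 6.0
print(experienced_discomfort(long_trial))   # 38  -> yet far more total pain
print(experienced_discomfort(short_trial))  # 12
```

This mirrors the pattern in the cold-water experiments Kahneman describes: adding extra, milder moments of discomfort makes the memory better even though it makes the experience itself worse.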

When I think about the experiencing and remembering self, I try to remember that my remembering self cannot perfectly recall the reality of my experiences. My experiencing self is only alive in the present moment, so when I am experiencing something great, I try hard to focus on that moment rather than on manufacturing something to remember (this is the difference between sitting and watching a beautiful sunset versus trying to capture the perfect picture of the sunset for social media).

Keeping the distinction between the experiencing and remembering self in mind is also helpful for avoiding the frustration, guilt, and pressure that the remembering self heaps on you when you don’t feel as though you have done or accomplished enough. The remembering self is only one part of you, and its revisionist view of your history isn’t real. There is real value in finding a balance between living for the experiencing self and living with the knowledge of what fuels the remembering self. Tilting too far in either direction can leave us feeling frustrated and overwhelmed, or unaccomplished. Most of us want to live somewhere between the two extremes, giving a little to one self to prop up the other in different ways at different times of our lives.
Scared Before You Even Know It

In Thinking Fast and Slow, Daniel Kahneman demonstrates how quickly our minds react to potential dangers and threats by showing us two very simple pictures of eyes. The pictures are black squares with a little bit of white space that our brains immediately perceive as eyes, and beyond recognizing eyes, our brains also instantly read an emotional expression in them. They are similar to the simple eyes I sketched out here:

In my sketch, the eyes on the left are aggressive and threatening, and our brains will pick up on the threat they pose and we will have physiological responses before we can consciously think through the fact that those eyes are just a few lines drawn on paper. The same thing happens with the eyes on the right, which our brains recognize as anxious or worried. Our body will have a quick fear reaction, and our brain will be on guard in case there is something we need to be anxious or worried about as well.

Regarding a study that was conducted where subjects in a brain scanner were shown a threatening picture for less than 2/100 of a second, Kahneman writes, “Images of the brain showed an intense response of the amygdala to a threatening picture that the viewer did not recognize. The information about the threat probably traveled via a superfast neural channel that feeds directly into a part of the brain that processes emotions, bypassing the visual cortex that supports the conscious experience of seeing.” The study was designed so that the subjects were not consciously aware of having seen an image of threatening eyes, but nevertheless their brain perceived it and their body reacted accordingly.

The takeaway from this kind of research is that our environments matter and that our brains respond to more than what we are consciously aware of. Subtle cues and factors around us can shape the way we behave and feel about where we are and what is happening. We might not know why we feel threatened, and we might not even realize that we feel threatened, but our heart rate may be elevated, we might tense up, and we might become short and defensive in certain situations. When we think back on why we behaved a certain way, why we felt the way we did, and why we had the reactions we did, our brains won’t be able to recognize these subtle cues that never rose to the level of consciousness. We won’t be able to explain why we felt threatened; all we will be able to recall is the physiological response we had to the situation. We are influenced by far more than our conscious brain is aware of, and while our conscious brain doesn’t provide us with a perfect picture of reality, our subconscious still reacts to more of the world than we notice.
Base Rates

When we think about individual outcomes we usually think about independent causal structures. A car accident happened because a person was switching their Spotify playlist and accidentally ran a red light. A person stole from a grocery store because they had poor moral character that came from a poor cultural upbringing. A build-up of electrical potential from the friction of two air masses rushing past each other caused a lightning strike.

When we think about larger systems and structures we usually think about more interconnected and somewhat random outcomes that we don’t necessarily observe on a case by case basis, but instead think about in terms of likelihoods and of conditions which create the possibilities for a set of events and outcomes. Increasing technological capacity in smartphones combined with lagging technological capacity in vehicles created a tension for drivers who wanted to stream music while operating their vehicles, increasing the chances of a driver-error accident. A stronger US dollar made it more profitable for companies to employ workers in other countries, leading to a decline in manufacturing jobs in US cities and to people stealing food as they lost their paychecks. Earth’s axial tilt led to differences in the amount of solar energy that northern continental landmasses received, creating temperature and atmospheric gradients which produced lightning-bearing storms and increased the chances of lightning in a given region.

What I am trying to demonstrate in the two paragraphs above is a tension between thinking statistically versus thinking causally. It is easy to think causally on a case by case basis, and harder to move up the ladder to think about statistical likelihoods and larger outcomes over entire complex systems. Daniel Kahneman presents these two types of thought in his book Thinking Fast and Slow writing:

“Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be.”

It is more satisfying for us to assign agency to a single individual than to consider that individual’s actions as being part of a large and complex system that will statistically produce a certain number of outcomes that we observe. We like easy causes, and dislike thinking about statistical likelihoods of different events.

“Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available.
Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.”

The base rates that Kahneman describes can be thought of as the category or class to which we assign something. We can use different forms of base rates to support different views and opinions. Shifting from a statistical base rate to a causal base rate may change the way we think about whether a person is deserving of punishment, or aid, or indifference. It may change how we structure society, design roads, and conduct cost-benefit analyses for changing programs or technologies. Looking at the world through a limited causal base rate will give us a certain set of outcomes that might not generalize to the rest of the world, and might cause us to make erroneous judgments about the best ways to organize ourselves to achieve the outcomes we want for society.
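Kahneman’s well-known cab problem shows what it looks like to combine a statistical base rate with case-specific evidence correctly. In the standard version, 85% of a city’s cabs are Green and 15% are Blue, and a witness who is right 80% of the time says a cab involved in an accident was Blue. Intuition, leaning on the witness alone, says roughly 80% Blue; Bayes’ rule, which weighs the base rate, says otherwise:

```python
# Kahneman's cab problem: combine a statistical base rate with
# case-specific witness evidence using Bayes' rule.
p_blue = 0.15                 # statistical base rate: 15% of cabs are Blue
p_green = 0.85
p_say_blue_given_blue = 0.80  # the witness is correct 80% of the time
p_say_blue_given_green = 0.20

# Total probability the witness says "Blue" at all.
p_say_blue = p_blue * p_say_blue_given_blue + p_green * p_say_blue_given_green

# Probability the cab really was Blue, given the witness said "Blue".
p_blue_given_say_blue = (p_blue * p_say_blue_given_blue) / p_say_blue
print(round(p_blue_given_say_blue, 2))  # 0.41
```

Even with a fairly reliable witness, the correct answer is about 41%, not 80%: the statistical base rate drags the estimate down, which is exactly the information our intuitions tend to underweight or neglect.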
Fluency of Ideas

Our experiences and narratives are extremely important to consider when we make judgments about the world; however, we rarely think deeply about the reasons why we hold the beliefs we do. We rarely pause to consider whether our opinions are biased, whether our limited set of experiences shapes the narratives that play in our minds, or how all of this influences our entire outlook on life. Instead, we rely on the fluency of ideas to judge our thoughts and opinions as accurate.

In Thinking Fast and Slow Daniel Kahneman writes about ideas from legal scholar Cass Sunstein and economist Timur Kuran, explaining their view on fluency: “the importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind.” It is easy to characterize an entire group of people as hardworking, or lazy, or greedy, or funny based entirely on a single interaction with a single person from that group. We don’t pause to ask whether our interaction with one person is really a good reflection of all the people in that group; we instead allow the fluency of our past experiences to shape our opinions of everyone in it.

And our ideas, and the fluency with which they come to mind, don’t have to come from our own personal experience. If a claim is repeated often enough, we will have trouble distinguishing it from truth, even if it is absurd and has no connection to reality. The idea will come to mind more fluently, and consequently it will start to feel true. We don’t need direct experience with something if a great marketing campaign has lodged an opinion or slogan in our minds that we can quickly recall.

If we are in an important decision-making role, we have to recognize this fluency bias. The fluency of ideas can drive us toward conclusions that are not in our best interests. A clever marketing campaign, a trite saying repeated by salient public leaders, or a few extreme yet random personal experiences can bias our judgment. We have to find a way to step back, recognize the narrative at hand, and find reliable data to help us make better decisions; otherwise we might end up judging ideas and making decisions based on faulty reasoning.
As an addendum to this post (originally written on 10/04/2020), this morning I began The Better Angels of Our Nature: Why Violence Has Declined, by Steven Pinker. Early in the introduction, Pinker states that violence in almost all forms is decreasing, despite the fact that for many of us it feels as though violence is as front and center in our world as ever before. Pinker argues that our subjective experience of out-of-control violence is in some ways due to the fluency bias that Kahneman describes from Sunstein and Kuran. Pinker writes,

“No matter how small the percentage of violent deaths may be, in absolute numbers there will always be enough of them to fill the evening news, so people’s impressions of violence will be disconnected from the actual proportions.” 

The fluency effect causes an observation to feel correct, even if it is not reflective of actual trends or rates in reality.
Thinking About Who Deserves Credit for Good Teamwork

Yesterday I wrote about the Availability Heuristic, the term Daniel Kahneman uses in his book Thinking Fast and Slow to describe the ways in which our brains misjudge frequency, amount, and probability based on how easily an example of something comes to mind. In his book, Kahneman describes individuals being more likely to overestimate things like celebrity divorce rates if there was recently a high-profile and contentious celebrity divorce in the news. The easier it is for us to make an association or think of an example of a behavior or statistical outcome, the more we will overweight that thing in our mental models and expectations for the world.

Overestimating celebrity divorce rates isn’t a very big deal, but the availability heuristic can have a serious impact on our lives if we work as part of a team or if we are married and have a family. The availability heuristic can influence how we think about who deserves credit for good teamwork.

Whenever you are collaborating on a project, whether it is a college assignment, a proposal or set of training slides at work, or keeping the house clean on a regular basis, you are likely to overweight your own contributions relative to others’. You might be aware of someone who puts in a herculean effort and does far more than their own share, but if everyone is chugging along completing a roughly equivalent workload, you will see yourself as doing more than others. The reason is simple: you experience your own work firsthand. You only see everyone else’s handiwork once they have finished it and everyone has come back together. You suffer from availability bias because it is easier for you to recall the time and effort you put into the group collaboration than it is for you to recognize and understand how much work and effort others pitched in. Kahneman describes the result in his book: “you will occasionally do more than your share, but it is useful to know that you are likely to have that feeling even when each member of the team feels the same way.”

Even if everyone did an equal amount of work, everyone is likely to feel as though they contributed more than the others. As Kahneman writes, there is more than 100% of credit to go around when you consider how much each person thinks they contributed. In marriages, this is important to recognize and understand. Spouses often complain that one person is doing more than the other to keep the house running smoothly, but if they complain to their partner about the unfair division of household labor, they are likely to end up in an unproductive argument with each person upset that their partner doesn’t recognize how much they contribute and how hard they work. Both will end up feeling undervalued and attacked, which is certainly not where any couple wants to be.
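As a trivial sketch of that arithmetic (the names and percentages here are entirely hypothetical), imagine asking each member of a four-person team what share of the work they did:

```python
# Each teammate's honest self-estimate of their share of the project.
# Because everyone experiences their own effort firsthand and only sees
# the finished output of others, the self-estimates overlap.
self_estimates = {"Ana": 35, "Ben": 30, "Cho": 30, "Dev": 25}

total_claimed = sum(self_estimates.values())
print(total_claimed)  # 120: more than 100% of the credit is claimed
```

Each person can sincerely feel under-recognized even though, collectively, the team has claimed 120% of the credit; no amount of arguing over the "true" split will make everyone's self-estimate fit inside 100%.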

Managers must be aware of this and must find ways to encourage and celebrate the achievements of their team members while recognizing that each team member may feel that they are pulling more than their own weight. Letting everyone feel that they are doing more than their fair share is a good way to create unhelpful internal competition and factions within the workplace. No professional work team wants to end up like a college or high school project group, where one person pulls an all-nighter, overwriting everyone else’s work, and another seemingly disappears, emailing everyone at the last minute to ask them not to rat them out to the teacher.

Individually, we should acknowledge that other people are not going to see and understand how much effort we feel that we put into the projects we work on. Ultimately, at an individual level we have to be happy with team success over our individual success. We don’t need to receive a gold star for every little thing that we do, and if we value helping others succeed as much as we value our own success, we will be able to overcome the availability heuristic in this instance, and become a more productive team member, whether it is in volunteer projects, in the workplace, or at home with our families.
What You See Is All There Is

In Thinking Fast and Slow, Daniel Kahneman gives us the somewhat unwieldy acronym WYSIATI – what you see is all there is. The acronym describes a phenomenon that stems from how our brains work. System 1, the name that Kahneman gives to the part of our brain which is automatic, quick, and associative, can only take in so much information. It makes quick inferences about the world around it, and establishes a simple picture of the world for System 2, the thoughtful calculating part of our brain, to work with.

What you see is all there is means that we are limited by the observations and information that System 1 can take in. It doesn’t matter how good System 2 is at processing and making deep insights about the world if System 1 is passing along poor information. Garbage in, garbage out, as the computer science majors like to say.

Daniel Kahneman explains what this means for our day to day lives in detail in his book. He writes, “As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little.”

System 2 doesn’t recognize that System 1 hands it incomplete and limited information. It chugs along believing that the information handed off by System 1 is everything it needs to know. It doesn’t ask for more information; it just accepts that it has been handed a complete data set and begins to work. System 2 creates a solid narrative out of whatever information System 1 gives it, and only momentarily pauses if it notices an inconsistency in the story it is stitching together about the world. If it can make a coherent narrative, then it is happy and doesn’t feel a need to look for additional information. What you see is all there is: nothing appears to be missing.

But we know that we only take in a limited slice of the world. We can’t sense the Earth’s magnetic field, we can’t see in ultraviolet or infrared, and we have no way of knowing what is really happening in another person’s mind. When we read a long paper or finish a college course, we will remember some things, but not everything. Our mind can only hold so much information, and System 2 is limited to what can be observed and held. This should be a huge problem for our brain: we should recognize enormous blind spots and be paralyzed into inaction by a lack of information. But this isn’t what happens. We don’t even notice the blind spots. Instead we make a story from the information we collect, building a complete world that makes sense of that information, no matter how limited it is. What you see is all there is: we make the world work, but we do so with only a portion of what is really out there, and we don’t even notice that we do.
Detecting Simple Relationships

System 1, in Daniel Kahneman’s picture of the mind, is the part of our brain that is always on. It is the automatic part of our brain that detects simple relationships in the world, makes quick assumptions and associations, and reacts to the world before we are even consciously aware of anything. It is contrasted against System 2, which is more methodical, can hold complex and competing information, and can draw rational conclusions from detailed information through energy intensive thought processes.

According to Kahneman, we only engage System 2 when we really need to. Most of the time, System 1 does just fine and saves us a lot of energy. We don’t have to think critically about what to do when the stoplight changes from green to yellow to red. System 1 can develop an automatic response so that we let off the gas and come to a stop without consciously thinking through every action involved in slowing down at an intersection. However, System 1 has some very serious limitations.

“System 1 detects simple relations (they are all alike, the son is much taller than the father) and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information.”

When relationships start to get complicated, like, say, the link between human activities and long-term climate change, System 1 will let us down. It also fails us when we see someone who looks like they belong to the Hell’s Angels on a father-daughter date at an ice cream shop, someone who looks like an NFL linebacker in a book club, or a little old lady driving a big truck. System 1 makes assumptions about the world based on simple relationships and is easily surprised. It can’t handle unique and edge cases, and it can’t hold complicated statistical information about the multiple actors and factors that influence the outcome of events.

System 1 is our default, and we need to remember where its strengths and weaknesses are. It can help us make quick decisions while driving or catching an apple falling off a counter, but it can’t help us determine whether a defendant in a criminal case is guilty. There are times when our intuitive assumptions and reactions are spot on, but there are many times when they can lead us astray, especially in cases that go beyond simple relationships or violate our expectations.
Drug Policy as an Electoral Strategy

One of my big takeaways as a public policy student at the University of Nevada was that public policy is not detached from our values. We like to think that elected officials and public administration officials are able to look at the world rationally and make judgments based purely on empirical facts, but this is not the case. Our values seep into all of our judgments and influence what we find as good or bad evidence. A good example of this at the federal level is Richard Nixon’s drug policy.

Drug policy seems like an area where empiricism and facts would rule. It feels like an area where we could identify the harms of drug use, estimate the social costs of drugs, and set policy accordingly, but American history shows that is not the case. John Hudak examines this history in his book Marijuana: A Short History, and shows how Richard Nixon used propaganda related to drug use to fuel his electoral campaign.

Hudak writes, “In fact, crafting public opinion on drug use and crime was central to Richard Nixon’s electoral strategy: he recognized that if he could stoke fears among the public about the drug problem and then position himself as the individual most capable of fighting the war against drugs, he would benefit electorally. In many ways he was right.”

Even though we can track drug-related crimes, record drug overdose deaths, and estimate the social cost of drug use, our policies are driven more by fear and the desire to turn others into villains than by facts. Richard Nixon was clearly a master of understanding and manipulating public opinion, and he used this reality to his advantage. Rather than encouraging public opinion to reflect the realities of drug use, Nixon tied drug use to racial anxiety and resentment in a way that helped his own electoral fortunes. Public policy, Nixon demonstrated, is swayed not primarily by facts and logic, but by fear and irrationality.

For those of us who care about an issue and want to see responsible policy around it, we must understand that empiricism and facts are not the only forces behind public policy. Public policy reflects emotion, power, and influence, and is subject to framing by people whose motives are not always pure. Advocating for and supporting good public policy requires that we get beyond facts and figures and understand the frames being applied to the policy in question.

Different Angles

In his book How to Win Friends and Influence People, Dale Carnegie quotes Henry Ford on seeing things from another person’s perspective: “If there is any one secret of success, it lies in the ability to get the other person’s point of view and see things from that person’s angle as well as from your own.” 

I am fascinated by the mind, our perception of the universe, and how we interpret the information we take in to make decisions. There is a vast amount of data and information about the world, we each experience that information in different ways, and our brains literally construct different realities from the different timing and information we take in. There may be an objective reality underlying our experiences, but it is nearly impossible to pinpoint exactly what that reality is given all of our different perspectives.

What we can realize from the vast amount of data that is out there, and from our limited ability to take it all in and comprehend it, is that our individual understanding of the universe is woefully inadequate. We need the perspectives of others to really understand what is happening and to make sense of the universe. Everyone will see things slightly differently and understand the world in their own unique way.

Carnegie’s book addresses this point in the context of business. When we are trying to make a buck, we often become purely focused on ourselves, on what we want, on how we think it is best to accomplish our goals, and on the narrow set of things that we have identified as necessary steps to get from A to B.

However, we are always going to be working with others, and will need the help of other people and other companies to achieve our goals. We will have to coordinate, negotiate, and come to agreement on what actions we will all take, and we will all bring our own experiences and motivations to the table. If you approach business thinking purely about what you want and what your goals are, you won’t be able to do this successfully. You have to consider the perspectives of the other people that you work with or rely on to complete any given project.

Your employees’ motivations will be different from the motivations of the companies who partner with you. Your goal might be to become or remain the leader in a certain industry, but no one else cares whether you are the leader in your space. Everyone wants to achieve their own ends, and the power of adopting multiple perspectives is that it helps you see how each unique goal can align to compound efforts and returns. Remember, your mind is limited, and your individual perspective is not going to give you the insight you need to succeed in a complex world. Only by seeing the different angles from which other people approach a given problem or situation can you successfully coordinate with and motivate the team you will be working with.

Three Factors That Push In Favor of Religious Belief

In The Elephant in the Brain, Kevin Simler and Robin Hanson explore in great detail the idea that many of the ways we act and behave have little to do with our stated reasons for those actions and behaviors. The authors’ thesis is that self-interest dominates many of our decisions. They suggest that our beliefs, our social behaviors, and our interactions in the world reflect our self-interest, even if we don’t admit it. One area the authors examine through this lens is religious belief.

Simler and Hanson identify three factors that tend to push people toward belief, even though the factors have little to do with evidence for or a belief in a deity. They write:

“1) People who believe they risk punishment for disobeying God are more likely to behave well, relative to nonbelievers. 2) It’s therefore in everyone’s interest to convince others that they believe in God and in the dangers of disobedience. 3) Finally, as we saw …, one of the best ways to convince others of one’s belief is to actually believe it. This is how it ends up being in our best interests to believe in a god that we may not have good evidence for.”

The argument the authors put forward is that people believe that people of faith will be better people: less likely to commit crimes, more likely to hold themselves to high moral standards, and more likely to be honest and trustworthy allies. In order to be seen as a trustworthy and honest person, it becomes in one’s best interest to display religious faith and to convince other people that the belief is sincere and that one truly is an honest, trustworthy, and moral ally. These social factors don’t actually have to be related to the content of religious belief, but belief creates a structure that allows us to demonstrate these qualities.

These factors then push us toward belief. It is hard to constantly convince people that you are authentic, but it is not hard to simply adopt a belief, even if the foundation for that belief is shaky. The same thing occurs today with political beliefs about specific governmental decisions and interventions, with climate change denial, and with fad diets. We convince ourselves that we are doing something because it is correct, and we can then better defend decisions and actions that might really be signaling something else about ourselves.