Experiencing Versus Remembering

My last two posts have been about the difference between how we experience life and how we remember what happens in it. This is an important idea in Daniel Kahneman’s book Thinking Fast and Slow. Kahneman explains the ways in which our minds make predictable errors when thinking statistically, when trying to remember the past, and when making judgments about reality. Kahneman describes our mind as having two selves. He writes,

 

“The experiencing self is the one that answers the question: Does it hurt now? The remembering self is the one that answers the question: How was it on the whole? Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self.”

 

In my post about the Peak-End Rule I highlighted findings from Kahneman showing that the remembering self isn’t very good at making accurate judgments about a whole experience. It more or less averages the best (or worst) part of an experience with the ending. The ups and downs throughout, and the actual average quality overall, aren’t that relevant to the way we think back on an experience.

 

Duration Neglect also demonstrates how the remembering self misjudges our experiences. A long, monotonous experience with a positive ending can be remembered much more fondly than a short, generally positive experience with a bad ending.

 

When I think about the experiencing and remembering self, I try to remember that my remembering self is not able to perfectly recall the reality of my experiences. I try to remember that my experiencing self is only alive in the present moment, and when I am experiencing something great, I try hard to focus on that moment rather than on something I want to remember (this is the difference between sitting and watching a beautiful sunset versus trying to capture the perfect picture of the sunset for social media).

Keeping in mind the distinction between the experiencing and remembering self is helpful for avoiding the frustration, guilt, and pressure that the remembering self heaps on you when you don’t feel as though you have done enough or accomplished enough. The remembering self is only one part of you, and its revisionist view of your history isn’t real. There is real value in finding a balance between living for the experiencing self and living with the knowledge of what fuels the remembering self. Tilting too far either way can make us feel frustrated and overwhelmed, or unaccomplished. We all want to be somewhere between the two extremes, giving up a little of one self to prop up the other in different ways at different times of our lives.
Duration Neglect

My last post was about the Peak-End Rule, the way our brains remember events by subjectively rating them based on an average of the peak moment and the end. A great experience can be ruined by a poor ending, while a poor experience can be remembered more positively if it ends on a high note. Duration Neglect works alongside the Peak-End Rule to shape a subjective memory of an experience that doesn’t necessarily align with our actual experience of the event in the moment.

 

Regarding an experiment with individuals rating painful colonoscopies, Kahneman writes, “the duration of the procedure had no effect whatsoever on the ratings of total pain.”

 

Again, what mattered for individuals was the peak level of pain and the pain they experienced at the end of the procedure. Patients who had a short colonoscopy with a painful ending rated the entire experience as more painful than patients who had an equal peak of pain but a longer colonoscopy that ended on a less painful note. If two patients experience the same peak of pain, but one experiences it early rather than at the end, their subjective pain ratings will differ, even if the patient who had the peak at the end endured less total pain because their procedure was shorter.

 

What this means for gastroenterologists is that it is better for a procedure to go long than to be painful. We can tolerate pain as long as it is spaced out and as long as the ending is relatively better than the peak. A procedure that lasts 20 minutes with an average pain level of 4 is remembered as better than a 5-minute procedure with an average pain level of 6. The mind doesn’t remember how long the pain lasted; it only remembers how bad the pain was at the peak and at the end.
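The contrast between the experiencing self’s total pain and the remembering self’s peak-end rating can be sketched in a few lines of Python. The minute-by-minute pain profiles below are hypothetical illustrations chosen to match the numbers above, not data from Kahneman’s colonoscopy study:

```python
# Hypothetical minute-by-minute pain scores (0-10 scale) for two procedures.
long_mild = [4] * 20          # 20 minutes at a steady pain level of 4
short_sharp = [6] * 4 + [8]   # 5 minutes around level 6, with a painful ending

def total_pain(profile):
    # The experiencing self: pain summed over every moment.
    return sum(profile)

def remembered_pain(profile):
    # The remembering self, per the peak-end rule: the average of the
    # worst moment and the final moment, ignoring duration entirely.
    return (max(profile) + profile[-1]) / 2

print(total_pain(long_mild), remembered_pain(long_mild))      # 80 4.0
print(total_pain(short_sharp), remembered_pain(short_sharp))  # 32 8.0
```

In these made-up units the longer procedure involves far more total pain (80 versus 32), yet the peak-end rating remembers it as much less painful.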

 

We can translate this into our daily lives as well. If we know something unpleasant is coming, we can try to space it out and frontload the unpleasantness, knowing that a relatively better ending will lift the overall subjective feeling. And if we have something that is really positive, we can see that it truly is better to leave on a high note. Once we reach a peak in terms of positivity, additional good moments that fall short of that peak only diminish the overall rating of the experience. Adding more positive notes that don’t quite match the peak doesn’t improve the level that people will ascribe to the event when they think back on it.
The Peak-End Rule - Joe Abittan

Our experiencing self and our remembering self are not the same person. Daniel Kahneman shows this in his book Thinking Fast and Slow by gathering survey information from people during unpleasant events and then asking them to recall their subjective experience of the event later. The experiencing self and the remembering self rate the experiences differently.

 

We can see this in our own lives. During the day you may have had a frustrating project to work on, but when you lay down at night and reflect on the day, you might not remember the project being as bad as it felt in the moment. Alternatively, you might sit around all day binging a TV series and really enjoy a lazy relaxing day. However, you might remember the day much differently when you look back at it, no longer appreciating the experience but regretting it.

 

With our brain experiencing and remembering events differently, we are set up for some strange cognitive biases when we reflect on past events and think about how we should behave in the future. The Peak-End Rule is one bias that factors into how we remember events and can influence our future choices.

 

You might expect to rate a poor experience based on how bad the worst moment of the experience was. Say you had to go to a child’s gymnastics routine that you were really dreading. A certain part of the routine may have been all but unbearable to you, but suppose that at the end you found a $20 bill on your way back to the car. Your judgment of the event is going to be influenced by your good luck. Rather than basing your judgment of the show purely on that dreadful routine, or on an average of the whole evening, you are going to land somewhere between the worst moment and the happy moment when you found the $20. It’s not an average of the whole time, and it’s not really indicative of your actual experience. A random factor at the end shifted your perspective.

 

In his book, Kahneman defines the Peak-End Rule as “The global retrospective rating predicted by the average of the level of pain reported at the worst moment of the experience and at its end.” This definition comes after describing a study in which participants stuck their hands in icy cold water and subjectively judged the experience later.

 

The peak-end rule is not limited to painful and unpleasant experiences. Instead of a miserable experience, you could have a truly wonderful experience that ends up being remembered somewhat poorly because of a momentary blip at the end. Picture a concert that is great, but flops at the end when the speaker system fails. You won’t reflect back on the entirety of the experience as positively as you should simply because a single song at the end was ruined.

 

What we should remember from this is that endings matter a lot. Don’t end your meeting with the bad news; end it with the good news so that people walk out on a positive note. The ending of an experience weighs much more heavily than everything in the middle. The points that matter are the peak (either the best or worst part) and the ending. A great ending can buoy a poor experience, while a bad ending can tank a great one. For company meetings, job interviews, or performances, make sure you bring the ending to a high point to lift the overall level of the subjective experience.
Experienced Utility

In Thinking Fast and Slow, Daniel Kahneman presents an interesting situation. Imagine you need to receive a series of injections, and the pain of each injection is always the same. Suppose in one situation the series is 20 shots, and in another it is 6 shots. If you imagine yourself in each series, would you pay the same amount to have the total number of shots reduced by 2? In one situation you would go from 20 to 18, and in the other from 6 to 4.

 

Kahneman found that people were willing to pay more to reduce the injection load when the total number of shots in their series was 6 rather than 20. In Kahneman’s eyes, this thinking process is an error. He writes, “at least in some cases, experienced utility is the criterion by which a decision should be assessed. A decision maker who pays different amounts to achieve the same gain of experienced utility (or be spared the same loss) is making a mistake.”

 

Experienced utility is the overall happiness, usefulness, or enjoyment (more or less) that we get out of life, a product, or an experience. In the situation described above, each injection is equally painful. The first shot is not any worse than the second, the sixth, or the 15th. So whether you are getting 6 shots or 20 shots, eliminating two of them spares you the same amount of pain. In pure experienced utility terms, there is no difference between reducing the shot count from 20 to 18 or from 6 to 4. It is two fewer shots, and the same reduction in pain, in both instances.

 

But when we imagine ourselves in each situation, it is the low total shot count where we decide we would spend more to reduce the overall level of pain we experience. We are violating the terms of equal experienced utility and instead making a relative comparison. Two is one third of six, and reducing our pain by one third feels relatively much better than reducing it by one tenth, which is what we do when we move from 20 to 18.
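The arithmetic behind that relative comparison is simple enough to make explicit. This sketch just restates the shot counts from Kahneman’s example; the “utility” here is nothing more than a count of equally painful shots avoided:

```python
# Two fewer shots is the same absolute gain in experienced utility in
# both series, but a very different relative reduction.
def absolute_reduction(before, after):
    return before - after

def relative_reduction(before, after):
    return (before - after) / before

# Series of 20 shots reduced to 18, and series of 6 reduced to 4.
print(absolute_reduction(20, 18), relative_reduction(20, 18))  # 2 0.1
print(absolute_reduction(6, 4), relative_reduction(6, 4))      # 2 0.3333333333333333
```

The absolute gain is identical in both cases; only the relative framing differs, which is exactly the comparison our minds latch onto.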

 

This problem reminds me of sitting in first class on an airplane. Sitting on your own couch is much more enjoyable than sitting in first class. You have a larger TV to enjoy, you don’t have to pay extra for WiFi, and you have an entire kitchen and pantry of snacks available to you. But if someone asked you how much you would pay on any given day for the privilege of enjoying your living room, you would look at them and laugh.

 

But we are all willing to pay huge amounts in first class for smaller and less comfy chairs, for having to turn our phones off, and for overpriced alcoholic beverages. We are making a similar mistake in terms of experienced utility by making relative comparisons. First class is substantially better than coach, but much worse than our own living room. When we fail to recognize our experienced utility, and instead open ourselves up to paying for relative utility, we risk making inconsistent decisions and paying far more in some situations than we would dream of paying in others. The relative frame of reference that we adopt could be manipulated by actors for their own ends, convincing us to pay more for things than we would in another frame of reference.
Defaults Matter

I will discuss defaults in depth when I begin writing about Nudge by Cass Sunstein and Richard Thaler, but it is important to think about our responses to default choices in the context of Daniel Kahneman’s research in Thinking Fast and Slow. Kahneman argues that we can think of our brains as having two different operating systems. System 1 is the fast and automatic system. It scans the environment, takes in the salient information around us, filters out the unimportant information, and makes quick judgements without putting too much power into the thinking process. System 2 is where System 1 sends the more difficult problems that it can’t handle on its own. System 1 takes the information it can absorb, packages that information with a particular reference frame, and sends it to System 2 for slower, more energy intensive thought. And this is where the defaults matter.

 

System 1 will fall back on the default when System 2 doesn’t want to engage with a problem. Because System 2 is energy intensive, we only use it when we need to (like when we are cooking a new recipe, trying to complete our taxes, or trying to win Scrabble). For most decisions, we can just fall back on the default and be fine. Instead of making a tough decision, we can rely on simple standard choices without having to consider alternatives or justify why we made a particular choice. Kahneman shows how powerful the default can be by examining the rates at which people register to be organ donors in different states and countries. He writes, “The best single predictor of whether or not people will donate their organs is the designation of the default option that will be adopted without having to check a box.”

 

For most decisions and thoughts, System 1 scans the environment and makes a quick judgment as to whether we need to do anything. If it determines that there is a need for more comprehensive thought, it engages System 2, but it only packages the information it could take in during its quick scan. So while our System 2 is powerful and can work through lots of information, it can only work on the information from System 1’s quick scan. That quick scan includes the default option, but doesn’t include the various other options that were not immediately available. This can create anchoring effects and limit the categories we consider for possible alternatives to the default.

When someone yells an answer in Family Feud and everyone else comes up with similar answers in the same category, we are seeing people anchor to a default category for responses. When your company enrolls you in a 401(k) and automatically sets your contribution level, any change that you make is likely to be a small deviation from that preset level; you are not very likely to drop all the way to zero or make a huge departure from that default anchor. And if you have ever been stopped in freeway traffic and only after stopping realized that you could have taken numerous different routes to avoid the traffic jam, you have seen how limiting our lives can be when we stick to a simple default and fail to consider the other possibilities available to us.

 

The reason defaults matter so much is that we are lazy, that System 2 doesn’t do much work if it doesn’t have to, and that System 2 gets a limited set of information from System 1. Our perspectives, our opinions, and the world of possibilities available to us are anchored around the default. When I write about Nudge I will go more in depth on the importance of various defaults in different areas of our lives.
Frame Bound vs Reality Bound

My wife works with families of children with disabilities, and one of the things I learned from her is how to ask children to do something. When speaking with an adult, we often use softeners when requesting that the other person do something, but this doesn’t work with children. So while we may say to a colleague, a spouse, or a friend, “can you please XYZ,” or “let’s call it a night of bowling after this frame, OK?” these sentences don’t work with children. A child won’t quite grasp the way a softener like “OK” is used, and they won’t understand that while you have framed an instruction or request as a question, you are not actually asking a question or giving them a choice. If you frame an instruction as a choice, the child can reply with “no,” and then you as a parent are stuck fighting them.

 

What happens in this situation is that children reject the frame bounding that parents present them with. To get around it, parents need to be either more direct or more creative in how they tell their children to do things. You can create a new frame that your child can’t escape by saying, “It is time to get ready for dinner; you can either put away your toys, or you can go set the table.” You frame a choice for the child, and they get to choose which action they will take, but in reality both are things you want them to do (my wife says this also works with husbands, but I think the evidence is mixed).

 

In Thinking Fast and Slow, Daniel Kahneman writes, “Unless there is an obvious reason to do otherwise, most of us passively accept decision problems as they are framed and therefore rarely have an opportunity to discover the extent to which our preferences are frame-bound rather than reality-bound.”

 

The examples of talking to children versus talking to adults help demonstrate how we passively accept the framing for our decisions. We don’t often pause to reconsider whether we should really purchase an item on sale; the discount we are getting seems to outweigh the fact that we still face a cost when purchasing the item. Our thinking works this way in office settings, in politics, and on the weekends when we can’t decide whether to roll out of bed. The frame that is applied to our decisions becomes our reality, even if there are more possibilities out there than we realize.

 

A child rejecting the framing that a parent provides, or conversely a parent creating new frames to shape a child’s decisions and behaviors, demonstrates how easily we can fall into frame-bound thinking and how jarring it can be when reality intrudes on the frames we try to live within. Most of the time we accept the frames presented to us, but there can be huge costs if we just go along with the frames that advertisers, politicians, and other people want us to adopt.
Framing Costs and Losses

“Losses evoke stronger negative feelings than costs. Choices are not reality-bound because System 1 is not reality-bound,” writes Daniel Kahneman in Thinking Fast and Slow.

 

We do not like losses. The idea of a loss, of having the status quo changed in a negative way without it being our deliberate choice, is hard for us to accept or justify. Costs, on the other hand, we can accept much more readily, even if the only difference between a cost and a loss is the way we choose to describe it.

 

Kahneman shares an example in his book where he and Amos Tversky did just that, restructuring a gamble so that participants faced either a possible $5 loss or a $5 cost paid up front with a possibility of gaining nothing. The potential outcomes of the two gambles are exactly the same, but people interpret the gambles differently based on how the cost/loss is displayed. People are more likely to take a bet when it is posed as a cost rather than as a possible loss. System 1, the quick-thinking part of the brain, scans the two gambles and has an immediate emotional reaction to the idea of a loss, and that influences the ultimate decision and feeling regarding the two gambles. System 1 does not rationally calculate the two options to see that they are equivalent; it just acts on the intuition it experiences.
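That equivalence can be checked directly. The dollar amounts beyond the $5 in this sketch are from the commonly cited textbook version of the Kahneman and Tversky gamble, used here for illustration:

```python
from fractions import Fraction

# Framing A: accept a gamble with a 10% chance to WIN $95 and a 90%
# chance to LOSE $5.
frame_a = {95: Fraction(1, 10), -5: Fraction(9, 10)}

# Framing B: PAY a $5 cost for a ticket with a 10% chance to win $100
# and a 90% chance to win nothing. Net outcomes after the $5 cost:
frame_b = {100 - 5: Fraction(1, 10), 0 - 5: Fraction(9, 10)}

# The two framings produce identical outcome distributions...
assert frame_a == frame_b

# ...and therefore the same expected value, yet the 'cost' framing
# attracts more takers than the 'loss' framing.
expected_value = sum(outcome * p for outcome, p in frame_a.items())
print(float(expected_value))  # 5.0
```

Nothing in the math distinguishes the two framings; only the emotional weight of the word “lose” does.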

 

“People will more readily forgo a discount than pay a surcharge. The two may be economically equivalent, but they are not emotionally equivalent.”

 

Kahneman goes on to describe research from Richard Thaler, who studied credit card companies’ lobbying efforts to prevent gas stations from charging different rates for cash versus credit. When you pay with a card, the vendor pays a transaction processing fee to the credit card company. Gas stations charge more for credit card purchases because they have to pay a portion of all credit transactions on the back end. Credit card companies didn’t want gas stations to charge a credit card surcharge, effectively making it more expensive to buy gas with a card than with cash. Ultimately they couldn’t stop gas stations from charging different rates, but they did succeed in changing the framing around the different prices. Cash prices are listed as discounts, shifting the base rate to the credit price. As Kahneman writes, people will skip the extra effort that would garner the cash discount and pay with their cards. However, if people were directly told that there was a credit surcharge, that they had to pay more for the convenience of using their card, it is possible that more individuals would make the extra effort to pay with cash. How we frame a cost or a loss matters, especially because it can shift the baseline for consideration, making us see things as either costs or losses depending on the context, and potentially altering our behavior.
A Lack of Internal Consistency

Something I have been trying to keep in mind lately is that our internal beliefs are not as consistent as we might imagine. This is important right now because our recent presidential election has highlighted the divide between many Americans. In most of the circles I am a part of, people cannot imagine how anyone could vote for Donald Trump. Since they see President Trump as contemptible, it is hard for them to separate his negative qualities from the people who may vote for him. All negative aspects of Trump, and of the ideas that people see him as representing, are heaped onto his voters. The problem, however, is that none of us have enough internal consistency among our thoughts, ideas, opinions, and beliefs to justify characterizing as much as half the country as bigoted, uncaring, selfish, or really any other adjective (except maybe self-interested).

 

I have written a lot recently about the narratives we tell ourselves. It is problematic that the more simplistic a narrative, the more believable and accurate it feels to us. The world is incredibly complicated, and a simplistic story that seems to make sense of it all is almost certainly wrong. Given this, it is worth looking at our ideas and views and trying to identify areas where we have inconsistencies in our thoughts. This helps us tease apart our narratives and recognize where simplistic thinking is leading us to unfounded conclusions.

 

In Thinking Fast and Slow, Daniel Kahneman shows us how this inconsistency between our thoughts, beliefs, and behaviors can arise, using moral ambiguity as an example. He writes, “the beliefs that you endorse when you reflect about morality do not necessarily govern your emotional reactions, and the moral intuitions that come to your mind in different situations are not internally consistent.”

 

It is easy to adopt a moral position against some immoral behavior or attitude, but when we find ourselves in a situation where we are violating that moral position, we find ways to explain away the inconsistency without feeling that we have violated our initial moral stance. We rationalize why our moral beliefs don’t apply to us in a given situation, and we create a story in our minds where there is no inconsistency at all.

 

Once we know that we do this with our own beliefs toward moral behavior, we should recognize that we do this with every area of life. It is completely possible for us to think entirely contradictory things, but to explain away those contradictions in ways that make sense to us, even if it leaves us with incoherent beliefs. And if we do this ourselves, then we should recognize that other people do this as well. So when we see people voting for a candidate and can’t imagine how they could vote for such a candidate, we should assume that they are making internally inconsistent justifications for voting for that candidate. They are creating a narrative in their head where they are making the best possible decision. They may have truly detestable thoughts and opinions, but we should remember that in their minds they are justified and making rational choices.

 

Rather than simply hating people and heaping every negative quality we can onto them, we should pause and ask what factors might be leading them to justify contemptible behavior. We should look for internal inconsistencies and try to help people recognize these areas and move forward more comprehensively. We should see in the negativity of others something we have the same capacity for, and we should try to find more constructive ways to engage with them and help them shift the narrative that justifies their inconsistent thinking.
Stoicism in Thinking Fast and Slow

“We spend much of our day anticipating, and trying to avoid, the emotional pains we inflict on ourselves,” writes Daniel Kahneman in his book Thinking Fast and Slow. “How seriously should we take these intangible outcomes, the self-administered punishments (and occasional rewards) that we experience as we score our lives?”

 

Kahneman’s point is that emotions such as regret greatly influence the decisions we make. We are so afraid of loss that we go out of our way to minimize risk, to the point where we may be limiting ourselves so much that we experience costs that are actually greater than the potential loss we wanted to avoid. Kahneman is pointing to something that stoic thinkers, dating back to Marcus Aurelius and Seneca, addressed – our ability to be captured by our emotions and effectively held hostage by fears of the future and pain from the past.

 

In Letters from a Stoic, Seneca writes, “Why, indeed, is it necessary to summon trouble – which must be endured soon enough when it has once arrived, or to anticipate trouble and ruin the present through fear of the future? It is foolish to be unhappy now because you may be unhappy at some future time.” I think Kahneman would agree with Seneca’s mindset. In his book, Kahneman writes that we should accept some level of risk and some level of regret in our lives. We know we will face regret if we experience some type of failure. We can prepare for regret and accept it without ruining our lives by taking every possible precaution to avoid the potential for failure, pain, and loss. It is inevitable that we are going to lose loved ones and have unfortunate accidents. We can’t shield ourselves from every danger unless we want to completely withdraw from all that makes us human.

 

Ryan Holiday wrote about the importance of feeling and accepting our emotions in his book The Obstacle Is the Way. He wrote, “Real strength lies in the control or, as Nassim Taleb put it, the domestication of one’s emotions, not in pretending they don’t exist.” Kahneman would also agree with Holiday and Taleb. Econs, the term Kahneman and other economists use for theoretical humans who act purely rationally, are not pulled by emotions and cognitive biases. However, Econs are not human. We experience emotions when investments don’t pan out, when bets go the wrong way, and when we face multiple choices and are unsure whether we truly made the best decision. We have to live with our emotions and the weight of failure or poor investments. Somehow, we have to work with these emotions and learn to continue even though we know things can go wrong. Holiday would suggest that we must be present, acknowledge that things won’t always go well, and learn to recognize and express emotions in a healthy way when they don’t.

 

Kahneman continues, “Perhaps the most useful is to be explicit about the anticipation of regret. If you can remember when things go badly that you considered the possibility of regret carefully before deciding, you are likely to experience less of it.” In this way, our emotions can be tools that help us make more thoughtful decisions, rather than anchors we are tethered to and hopelessly unable to escape. A thoughtful consideration of emotions, a return to the present moment, and acceptance of the different emotions we may feel after a decision all help us live with some level of risk, some level of uncertainty, and some level of loss. These are ideas that stoic thinkers wrote about frequently, and they show up for Kahneman when he considers how we should live with our mental biases and cognitive errors.

A Factor for Paralysis in Regulation & Legislation

A common complaint today in the United States is that nothing gets done. We are frustrated by political leaders who can’t pass important legislation. We dislike how slow local governments are to update infrastructure, adopt new technologies, and make improvements in the places we live. Gridlock has become the norm, and the actions that governments take seem to be too little too late.

 

But is this criticism really fair? Is the problem slow governments, ineffectual legislators, and inept public officials? In Thinking Fast and Slow, Daniel Kahneman highlights a basic aspect of human psychology that might be one of the major contributing factors to the paralysis we see in governance today. It has nothing to do with the quality of officials and legislators; instead, it is all about the structures and systems of incentives that elected officials and policy actors respond to. The precautionary principle, a side effect of our general tendency toward loss aversion and our general stance against taboo tradeoffs, drives our paralysis, and it is a logical response to the structure of many of our governing institutions.

 

Governments are necessary parts of human society, helping us establish rules for how we will live, interact, and make decisions collectively. Governments make investments, determine safety and efficacy standards, and help allocate resources across populations. In each of these functions of governance there is a possibility for error, a possibility for failure, and risk involved in the decisions. This is where the precautionary principle comes in. Kahneman writes,

 

“In the regulatory context, the precautionary principle imposes the entire burden of proving safety on anyone who undertakes actions that might harm people or the environment. Multiple international bodies have specified that the absence of scientific evidence of potential damage is not sufficient justification for taking risks. … the precautionary principle is costly, and when interpreted strictly it can be paralyzing.”

 

When risk is involved in decision-making processes, elected officials and public leaders are held responsible for any bad outcomes that come to pass. There will always be a chance that a government investment fails, and no public official wants that failure to reflect poorly on their decision-making. There is always the risk that allocated resources could be misused, and it is often the official who approved the resource allocation (as well as the bad actor themselves) who faces consequences. When there is a deliberate decision to trade off some level of safety, or to accept an increase in risk in exchange for improved economic performance, faster traffic flows, or reduced government spending, public leaders and elected officials are the ones who look bad when something goes wrong.

 

The way our governance operates today encourages the precautionary principle. Risk is incredibly dangerous for public leaders, so the safer and more costly approach feels like the right choice in each individual decision. Over time, however, the costs add up, the paralysis becomes suffocating, and the public becomes dissatisfied and cynical. The answer might not be to completely cut out the regulation and safety apparatus of the government (that didn’t work well for President Trump, who eliminated the NSC directorate for global health security and biodefense). The answer will be new structures for governance, new ways to allow government to take risks, and new ways to understand the risks that we all take in our lives. None of these are easy or simple transitions, but they are likely what we need in order to survive in a more complex and turbulent world.