Depleting Self-Control

A theme that runs through a lot of the writing that I do, influenced by Stoic thinkers such as Marcus Aurelius and by modern academics and productivity experts like Cal Newport, is that we don’t have as much control over our lives as we generally believe. Writings from Aurelius show us how much happens beyond our control, and how important it is to be measured and moderate in our reactions to the world. Newport’s work shows how easily our brains become distracted and how limited they are at sustaining long-term focus. Fitting in with both lines of thought is research from Daniel Kahneman, particularly an idea he presents in his book Thinking Fast and Slow about our depleting self-control. His work as a whole shows us just how much of our world we misunderstand and how important the structures, systems, and institutions in our lives can be.

 

Regarding our ability for self-control, Kahneman writes, “an effort of will or self-control is tiring; if you have had to force yourself to do something, you are less willing or less able to exert self-control when the next challenge comes around. The phenomenon has been named ego depletion.”

 

Self-control is overrated. We think of ourselves and others as having far more self-control than is really possible. We are quick to judge others for failing to exercise self-control, and we can beat ourselves up mentally when we don’t seem to be able to muster the self-control needed to achieve our goals, stick to a diet, or hold to a resolution. But the work of Roy Baumeister that Kahneman’s quote describes shows us that self-control is limited, and that we can run out of self-control when we are overly taxed. Self-control is not an unlimited characteristic that reveals a deep truth about our personality.

 

It is easy to think up situations where you might have to restrain yourself from behaving rudely, indulging in vices, or shirking hard work. What is harder to immediately think of is how that initial act of self-control will influence the situations that follow. If you spend all day trying hard not to open Twitter while working, then you might give in to a post-work cookie. If you sat through an uncomfortable family dinner and restrained yourself from yelling at your relatives, then you might find it hard to hold back from speeding down the freeway on the drive home. We don’t like to think of ourselves as being so easily influenced by things that happened in the past, but we are unable to truly separate ourselves from the things that happen around us. As we exert effort via self-control in one situation, we lose some degree of our ability to exert self-control in other situations.

 

It is important that we keep Kahneman and Baumeister’s research in mind and think about how we set up our environment so that we are not fighting a self-control battle all day long. There are tools that will stop you from opening certain websites while you are supposed to be working. You might have to decide that you just won’t buy cookies, so that they are not in the house at 2 in the afternoon when your sweet tooth acts up. And you may need to just take an Uber to and from those tense family dinners. If we put it all on ourselves to have self-control, then we will probably fail, but if we set up our environment properly, and give up some of the idea of self-control, then we will probably be more successful in the long run.
Thinking Fast and Evolution

I have written in the past about how I probably put too much emphasis on evolutionary biology, especially where brains are concerned, going all the way back to when our human ancestors lived in small tribes as hunter-gatherers. Perhaps it is because I look for it more than others do, but I feel as though characteristics and traits that served us well during that time still influence much of how we behave and respond to the world today. Sometimes the effects are insignificant, but sometimes I believe they do matter, and sometimes I believe they drive negative outcomes or behaviors that are maladapted to today’s world.

 

As I have begun writing about Daniel Kahneman’s research as presented in his book Thinking Fast and Slow, I have generally given System 1, what Kahneman describes as the quick, automatic, and reactive part of our brain, a bad rap. But the reality is that it serves an important purpose, and likely served an especially important role over the course of human evolution, getting us to the place we are at today. Knowing that I tend to weigh our evolutionary past heavily (and perhaps too heavily), it is not surprising to me that I view System 1 as an important piece of how we got to where we are, even if System 1 is easy to pick on in our current world.

 

In his book, Kahneman writes, “Any task that requires you to keep several ideas in mind at the same time has the same hurried character. Unless you have the good fortune of a capacious working memory, you may be forced to work uncomfortably hard. The most effortful forms of slow thinking are those that require you to think fast.”

 

Anyone who has had to remember a couple of phone numbers without the benefit of being able to write them down or save them immediately, and anyone who has had to remember more words than Person, Woman, Man, Camera, TV, knows that we feel super rushed when we are suddenly given something important to hold in our working memory. We try to do what we can as quickly as possible to get the information out of our head, stored someplace other than our working memory. We feel rushed to complete the task to ease our cognitive load. Why would our brains work this way? Why would it be that we become so rushed when we have something meaningful that we need to hold in our mind?

 

The answer, as I view it, might go back to our hunter-gatherer ancestors. They mostly needed System 1. They had to react quickly to a noise that could be a dangerous predator. They had to move fast and on instinct to successfully take down dinner. There were not as many things that required deep focus, and the things that did were not dense academic journal articles, or spreadsheets, or PowerPoints, or a guy with a clipboard asking you to count backward from 200 by 13. You didn’t have to worry about pop-ups or advertisements when you were skinning an animal, grinding seeds, or doing some type of work with your hands in a non-digital world. You didn’t have phone numbers to remember, and you were not heading into a business meeting with four people you had just met, whose names you needed to memorize as quickly and fluidly as possible.

 

Slow thinking developed for people who had time for slow thinking. Fast thinking developed when survival was on the line. Today, the slow thinking might be more likely to help us survive than our fast thinking, presuming we don’t have dangerous drives to work each day and are eating safely prepared foods. Slow thinking is a greater advantage for us today, but we also live in a world where slow thinking is still difficult because we have packed more distractions into our environments. We have literally moved ourselves out of environments for which our brains are optimized by evolution, and this has created the challenges and conflicts we face with System 1 and System 2 in our daily lives and in the work we do.
Limited Effort for Focus and Deep Work

A little while back I wrote a blog post centered around a quote from Cal Newport, “You have a finite amount of willpower that becomes depleted as you use it.”

 

The idea is that our brains get tired, and as they get tired, they become worse at practicing self-control. When you are exhausted, when you have had to concentrate really hard on school work, a business presentation, or paperwork to ensure your child’s medical care is covered, your mind’s ability to focus becomes diminished. You have trouble staying away from that piece of cake in the fridge, trouble resisting a scroll through Facebook, and trouble being patient with a child or spouse when they try to talk to you.

 

In his book Thinking Fast and Slow, Daniel Kahneman writes something very similar to the quote from Newport, “self-control and deliberate thought apparently draw on the same limited budget of effort.” 

 

Our brains only have so much ability to do heavy-duty thinking. It is as if there is a set account for deep thinking, and as we think critically we slowly draw down the account until our brains are in the red. Using our brain for serious thoughts and calculations requires focus and self-control. However, our willpower is depleted as we use it, so as we focus for longer periods of time, our brains become worse at ensuring that we stay focused.

 

Kahneman suggests that this is part of why we spend most of our life operating on System 1, the automatic, quick, and lightweight thinking process of our lives. System 2 is the deliberate thought process that we engage to do math, to study a map to make sure we know where we are driving, and to listen seriously to a spouse or child and provide them with support. System 2 takes a lot of energy, and has a limited budget. System 1 runs on low-power mode, and that is why it is our default. It makes mistakes, is subject to biases, and doesn’t always answer the right questions, but at least it saves us energy and allows us to reserve the effort of attention for the most important tasks.

 

Kahneman and Newport would likely both agree that we should spend our System 2 budget deliberately. We should maximize the time we spend in deep work, and set ourselves up to do our best System 2 work when we need to. We can save System 1 for unimportant moments and tasks, and work with our brains so that we don’t force too much System 2 work into the times when our effort budget has been depleted.
Detecting Simple Relationships

System 1, in Daniel Kahneman’s picture of the mind, is the part of our brain that is always on. It is the automatic part of our brain that detects simple relationships in the world, makes quick assumptions and associations, and reacts to the world before we are even consciously aware of anything. It is contrasted against System 2, which is more methodical, can hold complex and competing information, and can draw rational conclusions from detailed information through energy intensive thought processes.

 

According to Kahneman, we only engage System 2 when we really need to. Most of the time, System 1 does just fine and saves us a lot of energy. We don’t need to think critically about what to do when the stoplight changes from green to yellow to red. Our System 1 can develop an automatic response so that we let off the gas and come to a stop without having to consciously think about every action involved in slowing down at an intersection. However, System 1 has some very serious limitations.

 

Kahneman writes, “System 1 detects simple relations (they are all alike, the son is much taller than the father) and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information.”

 

When relationships start to get complicated, like say the link between human activities and long term climate change, System 1 will let us down. It also fails us when we see someone who looks like they belong to the Hell’s Angels on a father-daughter date at an ice cream shop, when we see someone who looks like an NFL linebacker in a book club, or when we see a little old lady driving a big truck. System 1 makes assumptions about the world based on simple relationships, and is easily surprised. It can’t calculate unique and edge cases, and it can’t hold complicated statistical information about multiple actors and factors that influence the outcome of events.

 

System 1 is our default, and we need to remember where its strengths and weaknesses lie. It can help us make quick decisions while driving or catching an apple falling off a counter, but it can’t help us determine whether a defendant in a criminal case is guilty. There are times when our intuitive assumptions and reactions are spot on, but there are a lot of times when they can lead us astray, especially in cases that are not simple relationships and that violate our expectations.
Skill Versus Effort

In the world of sports, I have always enjoyed the saying that someone is so good at something they make it look easy. While I usually hear the saying in relation to physical activity, it also extends to other generally challenging activities – Kobe made the fadeaway jumper look easy, Tyler Cowen makes blogging look easy, and Roman Mars has made podcasting look (sound?) easy. But what is really happening when an expert makes something look easy? Daniel Kahneman argues that increased skill makes things look easy because skill decreases the effort needed to do the thing.

 

In Thinking Fast and Slow, Kahneman writes, “As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with an action changes as skill increases, with fewer brain regions involved. Talent has similar effects. Highly intelligent individuals need less effort to solve the same problems, as indicated by both pupil size and brain activity. A general law of least effort applies to cognitive as well as physical exertion.”

 

While I was at a UCLA summer basketball camp years ago, Sean Farnham told me a story about Kobe: he used to work out at the UC Irvine gym every morning. He drew such a big crowd to the gym that UC Irvine asked him to either stop coming or arrive at a different time. Kobe didn’t stop; he just changed his hours, working out at 4 or 5 a.m., before the gym would be packed. Farnham told me that Kobe brought a training entourage with him, so that when he passed out on the court from the physical exhaustion of working so hard, his staff could pull him to the side, get him some fluids, and help him get back out on the court until he passed out again.

 

Tyler Cowen writes every day. On his podcast and in other interviews, he has explained how writing every single day, even on Christmas and your birthday, is one of the most important things you can do if you want to be a good writer and clear thinker. Much of his writing never gets out into the public, but every day he puts in the effort and practice to build his skill.

 

Roman Mars loves radio, and his hit podcast 99% Invisible is onto episode 410. In a 2012 interview with Debbie Millman, Mars talked about learning to love radio early on and how he developed a passion for audio programming, even when no one was listening.

 

Kobe, Cowen, and Mars all practice a lot, and have developed a lot of skill from their practice. As Kahneman explains, their daily practice doesn’t just allow them to make things look easy. For those who practice as much as these three, things really are easier. Kobe’s muscle memory meant that he was more efficient in shooting a fadeaway jump shot, literally needing less energy and less mental focus to pull off a perfect swish. Cowen writes every day, so starting a piece of writing and beginning to put thoughts together probably requires less brain power for him. Similarly, Mars probably slips into his radio voice effortlessly, without consciously having to think about everything he is about to say, making the words, the voice, and the intonation flow more simply and naturally.

 

Kahneman and the three examples I shared show how important practice is for the things we want to do well. Consistent practice builds skill, and literally alters the brain, the neural pathways (via myelination), and the physical strength needed to perform a task. With practice, tasks really do become easier and more automatic.
Recognize Situations Where Mistakes Are Common

“Because System 1 operates automatically and cannot be turned off at will, errors of intuitive thought are often difficult to prevent,” writes Daniel Kahneman in Thinking Fast and Slow. System 1 is how Kahneman describes the intuitive, quick-reacting part of our brain that continually scans the environment and filters information going to System 2, the more thoughtful, deliberate, calculating, and rational part of our brain. Biases in human thought often originate with System 1. When System 1 misreads a situation, makes a judgment on a limited set of information, or inaccurately perceives something about the world, System 2 will be working on a poor data set and is likely to reach faulty conclusions.

 

Kahneman’s book focuses on common cognitive errors and biases, not in the hope that we can radically change our brains and no longer fall victim to prejudices, optical illusions, and cognitive fallacies, but in the hope that we can increase our awareness of how the brain and our thinking go off the rails, to help us marginally improve our thought processes and final conclusions. Kahneman writes, “The best we can do is a compromise: learn to recognize situations in which mistakes are likely and try harder to avoid significant mistakes when the stakes are high.”

 

If we are aware that we will make snap judgments the instant we see a person, before either of us has even spoken a single word, we can learn to adjust our behavior to prevent an instantaneous bias from coloring the entire interaction. If we know that we are making a crucial decision on how we are going to invest our finances for retirement, we can pause and use examples from Kahneman’s book to remember that we have a tendency to answer simpler questions, a tendency to favor things that are familiar, and a tendency to trust other people based on factors that don’t truly align with trustworthiness. Kahneman doesn’t think his book and his discussions of cognitive fallacies will make us experts in investing, but he does think that his research can help us understand the biases we might bring to an investment decision and improve the way we make some important decisions. Understanding how our biases may be impacting our decisions can help us improve them.

 

Self- and situational-awareness are crucial for accurately understanding the world and making good decisions based on sound predictions. It is important to know when you can trust an educated guess from yourself or others, and it is important to recognize when your confidence is unwarranted. It is important to know when your opinions carry weight, and when your direct observations might be incomplete and misleading. In most instances of our daily lives, the stakes are low and the cost of errors from cognitive biases is low, but in some situations, like serving on a jury, driving on the freeway, or choosing whether to hire someone, our (and other people’s) livelihoods could be on the line. We should honestly recognize the biases and limitations of the mind so we can better recognize situations where mistakes are common, and hopefully make fewer mistakes when it matters most.
Blind to Our Blindness

I remember the first time I watched the Gorilla Attentiveness Study, as a freshman in college, and to this day it is one of my favorite studies and examples of the ways in which our brains can let us down. Writing about the study in his book Thinking Fast and Slow, Daniel Kahneman states, “The gorilla study illustrates two important facts about our minds: we can be blind to the obvious, and we are also blind to our blindness.” Kahneman uses the study to show that we can’t always trust what we see, or what we experience in the world more broadly. Our minds are limited in what they take in, especially when we are engaged with one task and our mind is filtering out the other noise and extra information in our environment.

 

Kahneman uses the study to support two major ideas that he presents in his book. The first is that our brains can only operate on the information they take in. Most of the time, our general perception of the world is guided by System 1, the term Kahneman uses to describe the automatic, fast, and intuitive functioning of our brain. It is not literally a separate structure of the brain, but it does seem to be a system with specific functions that generally runs in the background as we go about our lives. That system filters out unimportant information in the world around us, like the feeling of our clothes on our skin, low-level traffic noise outside our office, or a bee buzzing at the edge of our peripheral vision outside a window. That data is ignored as unimportant, allowing us to instead engage System 2 on something more worthy of our attention.

 

System 2 is used by Kahneman to describe the attentive, energy-demanding, logical part of our brain. The modules in the brain that allow us to write blog posts, to count basketball passes, and to thread string through a needle comprise what Kahneman describes as System 2. However, System 2 can only focus on a limited number of things at one time. That is why we can’t write blog posts on a subway and why we miss the gorilla. We have to ignore the noise in order to focus on the important things. What is worse, however, is that System 2 is often dependent on information from System 1, and System 1 is subject to biases and blind spots and has a bad habit of using inferences to complete the full picture from a limited set of information. System 1’s biases feed directly into the intense focus and logical thinking of System 2, which in turn causes us to reach faulty conclusions. And because the inferences from System 1 are usually pretty good, and do an adequate job completing the picture, our faulty conclusions appear sound to us.

 

Kahneman writes that we are blind to the obvious, meaning that we often miss important, sometimes crucial information simply because we don’t look for it, don’t recognize it for what it is, or fill in gaps with intuition. Quite often we are not even aware of the things we are blind to; we are literally blind to our blind spots, making it harder to see how we could be wrong, where our cognitive biases and errors may be, and what could be done to make our thinking more accurate.

 

I try to remember this in my own life and to ask myself where I think I could be wrong. I try to be aware of instances where I am deliberately creating blind spots in my life, and I try at least marginally to push against such tendencies. It is important that we remember our biases and errors in thinking, and consider how our thinking is often built on blind spots and faulty conclusions. Doing so will help us be more generous when thinking of others, and will help us become better thinkers ourselves. It will help us pause when we reach a conclusion about an argument, think more broadly when we become upset, and shift away from System 1 biases to have more accurate and complete pictures of the world.
We Think of Ourselves as Rational

In Daniel Kahneman’s book Thinking Fast and Slow, Kahneman lays out two ideas for thinking about our thought processing, which he calls System 1 and System 2. System 1 is fast, automatic, often subconscious, and usually pretty accurate in terms of making quick judgments, assumptions, and estimations of the world. System 2 is where our heavy-duty thinking takes place. It is where we crunch through math problems, where the rational, problem-solving part of the brain is in action, and it’s the system that uses a lot of energy to help us remember important information and understand the world.

 

Despite the fact that we normally operate on System 1, that is not the part of our brain that we think of as ourselves. Kahneman writes, “When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do.” We believe ourselves to be rational agents, responding reasonably to the world around us. We see ourselves as free from bias, as logically coherent, and as considerate and understanding. Naturally, it is System 2 that we believe we spend most of our time with; however, this is not exactly the case.

 

A lot of our actions are influenced by factors that seem to play more at the System 1 level than the System 2 level. If you are extra tired, if you are hungry, or if you feel insulted by someone close to you, then you probably won’t be thinking as rationally and reasonably as you would expect. You are likely going to operate on System 1, making sometimes faulty assumptions on incomplete data about the world around you. If you are hungry or tired enough, you will effectively be operating on auto-pilot, letting System 1 take over as you move about the cabin.

 

Even though we often operate on System 1, we feel as though we operate on System 2 because the part of us that thinks back to how we behaved, the part of us required for serious reflection, is part of System 2. It is critical, thoughtful, and takes its time generating logically coherent answers. System 1 is quick and automatic, so we don’t even notice when it is in control. When we think about who we are, why we did something, and what kind of person we aspire to be, it is System 2 that is flying the plane, and it is System 2 that we become aware of, fooling ourselves into believing that System 2 is all we are, that System 2 is what is really in our head. We think of ourselves as rational, but that is only because our irrational System 1 can’t pause to reflect back on itself. We only see the rational part of ourselves, and it is comforting to believe that is really who we are.
Thinking Statistically

In Thinking Fast and Slow, Daniel Kahneman personifies two modes of thought as System 1 and System 2. System 1 is fast. It takes in information, processes it rapidly, and doesn’t always make us cognizant of the information we took in. It reacts to the world around us on an intuitive level, isn’t good at math, but is great at positioning us for catching a football.

 

System 2 is slow. It is deliberate, calculating, and uses a lot of energy to maintain. Because it requires so much energy, we don’t actually activate it very often, not unless we really need to. What is worse, System 2 can generally only operate on the information that System 1 takes in (unless we have a lot of time to pause specifically for information intake), meaning it often processes incomplete information.

 

System 1 and System 2 are important to keep in mind when we start to think statistically, something our minds are not good at. When we think back to the 2016 US Presidential election, we can see how hard statistical thinking is. Clinton was favored to win, but there was a statistical chance that Trump would win, which is what happened. The chance was small, but that didn’t mean the models were all wrong when he did win; it just means that the most likely forecasted event didn’t materialize. We had trouble thinking statistically about win percentages going into the election, and had trouble understanding an unlikely outcome after it happened.
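This is easy to see with a toy simulation (my own illustration, not any actual forecast model): give the underdog a 30 percent chance and the underdog still wins in nearly a third of simulated elections. An unlikely outcome occurring is not, by itself, evidence that the forecast was wrong.

```python
import random

random.seed(42)  # make the simulation repeatable

# Hypothetical forecast: the underdog wins 30% of the time.
UNDERDOG_WIN_PROB = 0.30
TRIALS = 100_000

# Each trial is one simulated election; count how often the underdog wins.
underdog_wins = sum(random.random() < UNDERDOG_WIN_PROB for _ in range(TRIALS))
print(f"Underdog won {underdog_wins / TRIALS:.1%} of simulated elections")
```

Run it and the share comes out very close to 30 percent, meaning the “unlikely” result shows up in tens of thousands of the simulated elections, which is exactly what a 30 percent forecast claims.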

 

“Why is it so difficult for us to think statistically?” Kahneman asks in his book. “We easily think associatively, we think metaphorically, we think causally, but statistics requires thinking about many things at once, which is something that System 1 is not designed to do.”

 

System 1 operates quickly and cheaply. It takes less energy and effort to run on System 1, but because it is subject to bias and because it makes judgments on incomplete information, it is not reliable for important decisions and calculations based on nuance. We have to engage System 2 to be great at thinking statistically, but statistical thinking still trips up System 2 because it is hard to think about multiple competing outcomes at the same time and weight them appropriately. In Risk Savvy, Gerd Gigerenzer shows that statistical thinking can be substantially improved and that we really can think statistically, but that we need some help from visual aids and tools so that our minds can grasp statistical concepts better. We have to help System 1 so that it can set up System 2 for success if we want to be good at thinking statistically.

 

From the framework that Kahneman lays out, with a quick-reacting System 1 running on power-save mode with limited processing power and a System 2 operating on the incomplete information aggregated by System 1, statistical thinking is nearly impossible. System 1 can’t bring in enough information for System 2 to analyze appropriately. As a result, we fall back on biases, or perhaps substitute an easier question for the challenging statistical question. Gigerenzer argues that we can think statistically, but that we need the appropriate framing and cues for System 1, so that System 2 can understand the number crunching and leg work that is needed. In the end, statistical thinking doesn’t happen quickly, and requires an ability to hold competing and conflicting information in the mind at the same time, making it hard for us to think statistically rather than anecdotally or metaphorically.
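One of the aids Gigerenzer champions is restating probabilities as natural frequencies: concrete counts of people instead of conditional probabilities. The numbers below are my own illustrative choices, not figures from Risk Savvy, but the sketch shows that the frequency framing is the same arithmetic as Bayes’ rule, just packaged in a form our minds can follow:

```python
# A screening-test example: base rate 1%, sensitivity 90%, false-positive rate 9%.
base_rate = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

# Bayes' rule with conditional probabilities -- the framing that trips people up.
p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
p_sick_given_positive = base_rate * sensitivity / p_positive

# The same arithmetic as natural frequencies: imagine 1,000 concrete people.
people = 1000
sick = people * base_rate                                      # 10 people
sick_and_positive = sick * sensitivity                         # 9 test positive
healthy_and_positive = (people - sick) * false_positive_rate   # ~89 test positive
frequency_answer = sick_and_positive / (sick_and_positive + healthy_and_positive)

# Both framings agree: only about 9% of the people who test positive are sick.
print(round(p_sick_given_positive, 3), round(frequency_answer, 3))
```

The frequency version asks System 1 to compare two piles of people, 9 versus roughly 89, which is exactly the kind of simple relation it detects well.
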
Answering the Easy Question

One of my favorite pieces from Daniel Kahneman’s book Thinking Fast and Slow was the research Kahneman presented on mental substitution. Our brains work very quickly, and we don’t always recognize the times when our thinking has moved in a direction we didn’t intend. Our thinking seems to flow logically and naturally from one thought to the next, and we don’t notice the times when our brains make logical errors or jumps that are less than rational. Mental substitution is a great example of this, and one that I know my brain engages in, but that I often have trouble seeing even when I know to look for it.

 

Kahneman writes, “When the question is difficult and a skilled solution is not available, intuition still has a shot: an answer may come to mind quickly – but it is not an answer to the original question.” 

 

The example that Kahneman uses is of a business executive making a decision on whether to invest in Ford. To make a smart decision, the executive has to know what trends in the auto industry look like and whether Ford is well positioned to adapt to changing economic, climate, and consumer realities. They need to know what Ford’s competition is doing and think about how Ford has performed relative to other automobile companies and how the automotive sector has performed relative to other industries. The decision requires thinking about a lot of factors, and the executive’s time is limited, along with the amount of information they can hold in their head, especially given the other responsibilities at home and in the office that the executive has to worry about.

 

To simplify the decision, the executive might choose to answer a simpler question, as Kahneman explains: “Do I like Ford cars?” If the executive grew up driving a Ford truck, if they really liked the 1965 Mustang, or if the only car crash they were ever involved in was when a person driving a Ford rear-ended them, their decision might be influenced by an intuitive sense of Ford cars and people who drive Fords. Also, if the executive has personally met someone on Ford’s executive team, they may be swayed by whether or not they liked the person they met. Instead of asking a large question about Ford the company, they might substitute an easier question about a single Ford executive team member.

 

“This is the essence of intuitive heuristics,” writes Kahneman, “when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.” 

 

Often, we already have a certain feeling in mind, and we switch the question being asked so that we can answer in line with our intuition. I grew up driving a Ford, so I might be inclined to favor investing in Ford. I might answer the question of investing in Ford before I am even asked the question, and then, instead of objectively setting out to review a lot of information, I might just cherry pick the information that supports my original inclination. I’m substituting the question at hand, and might even provide myself with plenty of information to support my choice, but it is likely biased and misguided information.

 

It is important to recognize when these prejudices and biases are influencing our decisions. By being aware of how we feel when asked a question, we can think critically to ask if we are being honest with the question that was asked of us. Are we truly answering the right question, or have we substituted for a question that is easier for us to answer?

 

In the question of cars and investments, the cost might not truly be a big deal (at least if you have a well-diversified portfolio in other respects), but if we are talking about public policy that could be influenced by racial prejudice or by how deserving we think another group of people is, then our biases could be very dangerous. If we think that a certain group of people is inherently greedy, lazy, or unintelligent, then we might substitute the question, “will this policy lead to the desired social outcome?” with the question, “do I like the group that stands to benefit the most from this policy?” The results of answering the wrong question could be disastrous, and could harm our society for years.