Scarcity & Short-Term Thinking

I find critiques of people living in poverty to be generally unfair and shallow. People with barely enough financial resources to get through the day are criticized for not making smart investments of their time and money, and for spending in ways that look irrational. But for low-income individuals who can’t seem to get ahead no matter what jobs they take, these critiques miss the reality of life at the poorest socioeconomic level.
I wrote recently about the costs of work, which are rarely factored into our easy critiques of the poor or unemployed. Much of America has inefficient, underinvested public transit. The time involved in catching a bus (or two) to get to work is huge compared with simply driving. Additionally, subways and other transit options can be dangerous (there is no shortage of YouTube videos of people having phones stolen on public transit). This means that owning and maintaining a car can be essential for holding a job, an expense that can make working prohibitively costly for those living in poverty.
The example of transportation is meant to demonstrate that not working can be the more rational choice for the poorest among us. Work involves extra stress and costs, and the individual might not even break even. There are many instances where the socially desirable thing becomes the irrational choice for those living in poverty. If we do not recognize this reality, we will unfairly criticize the choices and decisions of the poor.
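To make the break-even idea concrete, here is a rough sketch of the calculation; every number in it is an assumption I am making up for illustration, not data from any study.

```python
# Toy break-even check for taking a low-wage job.
# All figures are invented for illustration only.
hours_per_month = 160
hourly_wage = 9.00

gross_pay = hours_per_month * hourly_wage            # $1,440

# Assumed monthly costs that exist only because of the job.
car_payment_and_insurance = 450
gas_and_maintenance = 200
childcare = 600
work_clothes_and_meals = 120

cost_of_working = (car_payment_and_insurance + gas_and_maintenance
                   + childcare + work_clothes_and_meals)             # $1,370

print(gross_pay - cost_of_working)   # 70.0 -- a full month of work nets about $70
```

Under assumptions like these, a full month of work clears only about seventy dollars, before counting the added stress and lost time.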
In his book Evicted, Matthew Desmond writes about scarcity and short-term thinking, showing that they are linked and demonstrating how the link shapes the lives of those living in poverty: “research show[s] that under conditions of scarcity people prioritize the now and lose sight of the future, often at great cost.” People living in scarcity have trouble thinking ahead and planning for their future. When you don’t know where you will sleep, where your next meal will come from, or whether you will be able to afford the next basic necessity, it is hard to think ahead to everything you need to do for basic living in American society. Your decisions might not make sense to the outside world, but to you they make sense, because all you have is the present moment and no prospects for the future to plan around. Sudden windfalls may be spent irrationally, time may not be spent resourcefully, and tradeoffs that benefit the current moment at the expense of the future may seem like obvious choices if you live in constant scarcity.
Combined, the misperceptions about the cost of work and the psychological short-termism resulting from scarcity show that we have to approach poverty differently from how we approach lazy middle-class individuals. I think we design our programs for assisting those in poverty while picturing lazy middle-class people. We don’t think about individuals who are actually so poor that the costs of work most of us barely notice become crippling. We don’t consider how scarcity shapes the way people think, leading them to make poor decisions that seem obvious to critique from the outside. Deep poverty creates challenges and obstacles that are separate from the problem of freeloading, lazy middle-class children or trust fund babies. We have to recognize this if we are to actually improve the lives of the poorest among us and build a social and economic system that helps integrate them.
The Representation Problem

In The Book of Why, Judea Pearl lays out what computer scientists call the representation problem by writing, “How do humans represent possible worlds in their minds and compute the closest one, when the number of possibilities is far beyond the capacity of the human brain?”
 
 
In the Marvel movie Infinity War, Dr. Strange looks forward in time to see all the possible outcomes of a coming conflict. He looks at 14,000,605 possible futures. But did Dr. Strange really look at all the possible futures out there? 14 million is a convenient big number to include in a movie, but how many possible outcomes are there for your commute home? How many people could change your commute in just the tiniest way? Is it really a different outcome if you hit a bug while driving, if you were stopped at 3 red lights instead of 4, or if you had to stop at a crosswalk for a pedestrian? The details and differences in the possible worlds of our commute home range from the minuscule to the enormous (the difference between rolling your window down versus a meteor landing in the road in front of you). Certainly, with all things considered, there are more than 14 million possible futures for your drive home.
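To get a feel for how quickly the possibilities outrun 14 million, here is a back-of-the-envelope count; the branch numbers are assumptions I am making up, not anything from the movie or from Pearl.

```python
# Rough combinatorial count of futures for one commute (assumed numbers).
branch_points = 30        # small chance events: lights, pedestrians, lane changes
outcomes_per_point = 3    # assume each event can play out three different ways

possible_futures = outcomes_per_point ** branch_points
print(f"{possible_futures:,}")          # 205,891,132,094,649
print(possible_futures > 14_000_605)    # True -- dwarfs Dr. Strange's count
```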
 
 
Somehow, we are able to live our lives and make decent predictions of the future despite the enormity of possible worlds that exist ahead of us. Somehow we can represent possible worlds in our minds and determine which future world is closest to the reality we will experience. This ability allows us to plan for retirement, have kids, go to the movies, and cook dinner. If we could not do this, we could not drive down the street, could not walk to a neighbor’s house, and could not navigate a complex social world. But none of us are sitting in a green glow with our heads spinning in circles like Dr. Strange as we try to view all the possible worlds in front of us. What is happening in our minds to do this complex math?
 
 
Pearl argues that we solve this representation problem not through magical foresight, but through an intuitive understanding of causal structures. We can’t predict exactly what the stock market is going to do, whether a natural disaster is in our future, or precisely how another person will react to something we say, but we can get a pretty good handle on each of these areas thanks to causal reasoning.
 
 
We can throw out possible futures that have no causal structures related to the reality we inhabit. You don’t have to think of a world where Snorlax is blocking your way home, because your brain recognizes there is no causal plausibility of a Pokémon character sleeping in the road. Our brain easily discards the absurd possible futures and simultaneously recognizes the causal pathways that could have major impacts on how we will live. This gradually narrows the possibilities down to a number our brains (or computers) can reasonably work with. We also know, without having to do the math, that rolling our window down or hitting a bug is not likely to start a causal pathway that materially changes the outcome of our commute home. The same goes for being stopped at a few more red lights or even stopping to pick up a burrito. Those possibilities exist, but they don’t materially change our lives, so our brain can discard them from the calculation. This is the kind of work our brains are doing, Pearl would argue, to solve the representation problem.
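Here is a toy sketch of that pruning idea, entirely my own construction rather than anything Pearl gives; it just filters a handful of hypothetical futures down to the ones that are both causally plausible and materially different.

```python
# A toy model of pruning possible futures (illustrative only).
from dataclasses import dataclass

@dataclass
class Future:
    description: str
    causally_plausible: bool    # could this arise from the world as it actually is?
    materially_different: bool  # would it change the outcome we care about?

candidates = [
    Future("stopped at 3 red lights instead of 4", True, False),
    Future("hit a bug on the windshield", True, False),
    Future("major crash closes the freeway", True, True),
    Future("Snorlax is asleep in the road", False, True),
]

# Keep only the futures worth actually reasoning about.
worth_considering = [
    f for f in candidates
    if f.causally_plausible and f.materially_different
]

for f in worth_considering:
    print(f.description)   # only the freeway closure survives the pruning
```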

Thinking Conspiratorially Versus Evidence-Based Thinking

My last two posts have focused on conspiratorial thinking and whether it is an epistemic vice. Quassim Cassam, in Vices of the Mind, argues that we can only consider thinking conspiratorially to be a vice based on context. He means that whether conspiratorial thinking is a vice depends on whether there is reliable and accurate evidence to support the conspiratorial claim. Thinking conspiratorially is not an epistemic vice when we are correct and have solid evidence and rational justifications for it. Conversely, anti-conspiratorial thinking can be an epistemic vice if we ignore good evidence of a conspiracy in order to keep believing that everything is in order.
Many conspiracies are not based on reliable facts and information. They create causal links between disconnected events and fail to explain reality. Anti-conspiratorial thinking also creates a false picture of reality, but does so by ignoring causal links that actually do exist. As epistemic vices, both ways of thinking can be described by their consequences and by examining the patterns of thought that produce them.
However, that is not to say that conspiratorial thinking is only a vice in environments where conspiracies are rare, or that anti-conspiratorial thinking is only a vice in environments where conspiracies are common. Regarding this line of thought, Cassam writes, “Seductive as this line of thinking might seem, it isn’t correct. The obvious point to make is that conspiracy thinking can be vicious in a conspiracy-rich environment, just as anti-conspiracy thinking can be vicious in contexts in which conspiracies are rare.” The key, according to Cassam, is evidence-based thinking and whether our beliefs and opinions are justified, even if they turn out to be wrong in the end.
Cassam generally supports the principle of parsimony, the idea that the simplest explanation for a scenario is often the best and the one that you should assume to be correct. Based on the evidence available, we should look for the simplest and most direct path to explain reality. However, as Cassam continues, “the principle of parsimony is a blunt instrument when it comes to assessing the merits of a hypothesis in complex cases.” This means that we will still end up with epistemic vices related to conspiratorial thinking if we only look for the simplest explanation.
What Cassam’s points about conspiratorial thinking and parsimony get at is the importance of good evidence-based thinking. When we are trying to understand reality, we should be thinking about what evidence should exist for our claims, what evidence would be needed to support them, and what kinds of evidence would refute them. Evidence-based thinking helps us avoid the pitfalls of conspiratorial or anti-conspiratorial thinking, regardless of whether we live in a conspiracy-rich or conspiracy-poor environment. Accurately identifying or denying a conspiracy without any evidence, by assuming simple relationships, is ultimately not much better than basing beliefs on magic. What we need to do is adopt evidence-based thinking and better understand the causal structures that exist in the world. That is the only true way to avoid the epistemic vices related to conspiratorial thinking.
Knowledge and Perception

We often think that biases like prejudice are mean-spirited vices that cause people to lie and become hypocritical. The reality, according to Quassim Cassam, is that biases like prejudice run much deeper within our minds. Biases can become epistemic vices, inhibiting our ability to acquire and develop knowledge. They are more than just flaws that make us behave in ways we profess to be wrong. Biases can literally shape the reality of the world we live in by altering the way we understand ourselves and the people around us.
“What one sees,” Cassam writes in Vices of the Mind, “is affected by one’s beliefs and background assumptions. It isn’t just a matter of taking in what is in front of one’s eyes, and this creates an opening for vices like prejudice to obstruct the acquisition of knowledge by perception.”
I am currently reading Steven Pinker’s book Enlightenment Now, in which Pinker argues that humans strive toward rationality and that, at the end of the day, subjectivity is ultimately overruled by reason, rationality, and objectivity. I have long been a strong adherent of the Social Construction Framework and the belief that our worlds are created and shaped to a great degree by individual differences in perception. Pinker challenges that assumption, but framing his challenge through the lens of Cassam’s quote helps show how Pinker is ultimately correct.
Individual level biases shape our perception. Pinker describes a study where university students watching a sporting event literally see more fouls called against their team than the opponent, revealing the prejudicial vice that Cassam describes. Perception is altered by a prejudice against the team from the other school. Knowledge (in the study it is the accurate number of fouls for each team) is inhibited for the sports fans by their prejudice. The reality they live in is to some extent subjective and shaped by their prejudices and misperceptions.
But this doesn’t mean that knowledge about reality is inaccessible to humans at a larger scale. A neutral third party (or committee of officials) could watch the game and accurately identify the correct number of fouls for each side. The sports fans and other third parties may quibble about the exact final number, but with enough neutral observers we should be able to settle on a more accurate reality than if we left things to the biased sports fans. At the end of the day, rationality will win out through strength of numbers, and even the disgruntled sports fan will have to admit that the number of fouls they perceived was different from the more objective number of fouls agreed upon by the neutral third party members.
I think this is at the heart of the message from Cassam and the argument that I am currently reading from Pinker. My first reaction to Cassam’s quote is to say that our realities are shaped by biases and perceptions, and that we cannot trust our understanding of reality. However, objective reality (or something pretty close to it that enough non-biased people could reasonably describe) does seem to exist. Collectively, we can reach objective understandings and agreements as people recognize and overcome biases, and as the descriptions of the world offered by less biased individuals prove more accurate over the long run. The key is to recognize that epistemic vices shape our perception at a deep level, that they are more than just hypocritical behaviors, and that they literally shape the way we interpret reality. The more we try to overcome these vices of the mind, the more accurately we can describe the world, and the more our perception can align with reality.
Incentives for Environmentally Responsible Markets

When it comes to environmental issues, no single actor is completely to blame, which means no single actor can make the changes necessary to prevent catastrophic climate change. We can’t put all the weight on governments to change the course of our climate future, and we can’t place it all on individual actors either. We have to think about economies, polities, and incentive structures.

 

In their book Nudge, Richard Thaler and Cass Sunstein look at what this means for markets and regulation as we try to find sustainable paths. They write, “markets are a big part of this system, and for all their virtues, they face two problems that contribute to environmental problems. First, incentives are not properly aligned. If you engage in environmentally costly behavior next year, through consumption choices, you will probably pay nothing for the environmental harms that you inflict. This is what is often called a tragedy of the commons.”

 

One reason markets bear some of the blame and responsibility for the climate change crisis is because market incentives can produce externalities that are hard to correct. Climate change mitigation strategies, such as research and development of more fuel efficient vehicles and technologies, are expensive, and the costs of climate change are far off. Market actors, both consumers and producers, don’t have proper incentives to make the costly changes today that would reduce the future costs of continued climate change.
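A toy calculation can show the misaligned incentive; the numbers below are invented for illustration and are not from Thaler and Sunstein.

```python
# Toy tragedy-of-the-commons arithmetic (invented numbers).
actors = 1000
private_benefit = 10.0   # assumed gain to one actor from the cheap, polluting choice
total_harm = 50.0        # assumed environmental cost, spread across all actors

harm_borne_by_chooser = total_harm / actors          # 0.05 -- almost nothing
net_to_chooser = private_benefit - harm_borne_by_chooser
net_to_society = private_benefit - total_harm

print(net_to_chooser)    #  9.95 -> the individual comes out ahead
print(net_to_society)    # -40.0 -> everyone together comes out behind
```

Each actor’s private math says to make the polluting choice even though the group as a whole is worse off, which is the incentive misalignment the quote describes.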

 

A heavy-handed approach to our climate change crisis would be for governments to step in with dramatic regulation – eliminating fossil fuel vehicles, setting almost unattainably high energy efficiency standards for furnaces and dishwashers, and limiting air travel. Such an approach, however, might anger the population and ruin any support for climate mitigation measures, making the crisis even more dire. I don’t think many credible people really support heavy-handed government action, even if they do favor regulation that comes close to being as extreme as the examples I mentioned. Sunstein and Thaler’s suggestion of improved incentives to address market failures and change behaviors has advantages over heavy-handed regulation. The authors write, “incentive-based approaches are more efficient and more effective, and they also increase freedom of choice.”

 

To some extent, regulation looks at a problem and asks what the most effective way to stop it would be if everyone were acting rationally. An incentive-based approach asks what behaviors need to change, and what existing forces encourage the negative behaviors and discourage better ones. Taxes, independent certifications, and public shaming can be useful incentives to get individuals, groups, and companies to make changes. I predict that in 10-15 years, people who are not yet driving electric cars will start to be shamed for continuing to drive inefficient gas guzzlers (unfortunately, this probably means people with low incomes will be shamed for not being able to afford a new car). In the US, we have tried to introduce taxes on carbon output, but have not been successful. Taxing energy consumption in terms of carbon output changes the incentives companies have with regard to the negative environmental externalities from energy and resource consumption. And independent certification boards, like the one behind the EnergyStar label, can continue to play an important role in encouraging the development of more efficient appliances. The incentives approach might seem less direct, slower, and less certain to work, but in many areas, not just climate change, we need broad public support to make changes, especially when the up-front costs are high. This requires that we understand incentives and think about ways to change incentive structures. Nudges such as the ones I mentioned may work better than full government intervention when people are not acting fully rationally, which is usually the case for most of us. Nudges can get us to change behaviors while believing that we are making choices for ourselves, rather than having choices forced on us by an outside authority.
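As a sketch of how a carbon tax can flip the private calculation, consider the toy comparison below; the prices and emissions figures are assumptions I made up for illustration.

```python
# Toy example (invented numbers) of a carbon tax changing which option is cheapest.
options = {
    "coal power":  {"price_per_mwh": 60.0, "tons_co2_per_mwh": 1.0},
    "solar power": {"price_per_mwh": 75.0, "tons_co2_per_mwh": 0.0},
}

def cheapest(carbon_tax_per_ton: float) -> str:
    """Return the option with the lowest all-in cost under a given carbon tax."""
    return min(
        options,
        key=lambda name: options[name]["price_per_mwh"]
        + carbon_tax_per_ton * options[name]["tons_co2_per_mwh"],
    )

print(cheapest(carbon_tax_per_ton=0.0))    # coal power  -- the externality is ignored
print(cheapest(carbon_tax_per_ton=25.0))   # solar power -- the externality is priced in
```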
Why Terrorism Works

In the wake of terrorist attacks, deadly shootings, or bizarre accidents, I often find myself trying to talk down the threat and act as if my daily life shouldn’t be changed. I live in Reno, NV; my city has experienced school shootings, and my state experienced the deadliest mass shooting in the United States, but I personally have never been close to any of these extreme yet rare events. Nevertheless, despite efforts to talk down any risk, I do notice the fear that I feel following such events.

 

This fear is part of why terrorism works. Despite trying to rationally and logically talk myself through the aftermath and remind myself that I am in more danger on the freeway than I am near a school or at a concert, there is still some apprehension under the surface, no matter how cool I look on the outside. In Thinking Fast and Slow, Daniel Kahneman examines why we behave this way following such attacks. Terrorism, he writes, “induces an availability cascade. An extremely vivid image of death and damage, constantly reinforced by media attention and frequent conversations becomes highly accessible, especially if it is associated with a specific situation.”

 

Availability is more powerful in our minds than statistics. If we know that a given event is incredibly rare but have strong mental images of such an event, we will overweight the likelihood of that event occurring again. The more easily an idea or possibility comes to mind, the more likely it will feel to us. On the other hand, if we have trouble recalling instances of a rare outcome actually occurring, we will discount the possibility that it could occur. Terrorism succeeds because it shifts deadly events from feeling impossible to being easily accessible in the mind, so that they feel as though they could happen again at any time. If our brains were coldly rational, terrorism wouldn’t work as well as it does. As it is, however, our brains respond to powerful mental images and memories, and the ease with which those images and memories come to mind shapes what we expect and what we think is likely or possible.
More on Affect Heuristics

For me, one of the easiest heuristics to grasp among those Daniel Kahneman shares in his book Thinking Fast and Slow is the affect heuristic. It is a bias that I know I fall into all the time, one that has led me to buy particular brands of shoes, has influenced how I think about certain foods, and has shaped the way I think about people. In his book, Kahneman writes, “The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).”

 

The world is a complex and tricky place, and we can only focus a lot of attention in one direction at a time. For a lot of us, that means we are focused on getting kids ready for school, cooking dinner, or trying to keep the house clean. Trying to fully understand the benefits and drawbacks of a social media platform, a new traffic pattern, or how to invest in retirement may seem important, but it can be hard to find the time and mental energy to focus on a complex topic and organize our thoughts in a logical and coherent manner. Nevertheless, we are likely to be presented with situations where we have to make decisions about what level of social media is appropriate for our children, offer comments on new traffic patterns around the water cooler, or finally get around to setting up our retirement plan and deciding what to do with that old 401K from that job we left.

 

Without adequate time, energy, and attention to think through these difficult decisions, we have to make choices and are asked to have opinions on topics we are not very informed about. “The affect heuristic,” Kahneman writes, “simplifies our lives by creating a world that is much tidier than reality. Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy.” We substitute a simple question for the hard question that requires detailed thought: do I like social media? Did I feel that the new traffic pattern made my commute slower? Do I like the way my retirement savings advisor presented a new investment strategy? In each case, we rely on affect, our emotional reaction to something, and make decisions in line with our gut feelings. Of course my kid can use social media; I’m on it, I like it, and I want to see what they are posting. Ugh, that new traffic pattern is awful; what were they thinking putting that utility box where it blocks the view of the intersection? Obviously this is the best investment strategy for me; my advisor was able to explain it well, and I liked it when they told me I was making a smart decision.

 

We don’t notice when we default to the affect heuristic. It is hard to recognize that we have shifted away from making detailed calculations and are relying solely on intuitions about how something makes us feel. Rather than admitting that we buy Nike shoes because our favorite basketball player wears them and we want to be like LeBron, we create a story in our heads about the quality of the shoes, the innovative design, and the complementary colors. We fall back on a quick set of factors that gives the impression of a thoughtful decision. In a lot of situations, we probably can’t do much better than the affect heuristic, but it is worth considering whether our decisions are really being driven by affect. We might avoid buying things out of sheer brand loyalty, and we might be a little calmer and more reasonable in debates and arguments with friends and family when we realize we are acting on affect and not on reason.
Anchoring Effects

Anchoring effects were one of the psychological phenomena I found most interesting in Daniel Kahneman’s book Thinking Fast and Slow. In many situations, random numbers seem to influence the numbers we consciously think about, even when there is no reasonable connection between the random number we see and the number we are trying to produce for another purpose. As Kahneman writes about anchoring, “It occurs when people consider a particular value for an unknown quantity before estimating that quantity.”

 

Several examples of anchoring effects are given in the book. In one study, judges were asked to assess how much a nightclub should be fined for playing loud music long after the town’s quiet hours. Real-life judges who make these kinds of legal decisions were presented with the name of the club and information about the noise ordinance violation. The fictitious club was named after the fictitious street it was located on. In some instances the club’s name was something along the lines of 1500 First Street, and in others it was something like 10 First Street. Judges consistently assessed a higher fine against the club named 1500 First Street than against the club named 10 First Street. Seemingly random and unimportant information, the numbers in the street address in the club’s name, had a real impact on the fine judges on average thought the club should pay.

 

In other examples, Kahneman shows that we come up with different guesses about how old Gandhi was when he died depending on whether we are first asked if he was older than 35 or younger than 114. In another experiment, a random spin of a wheel influenced people’s guesses for the number of African nations in the UN. In all these examples, when we have to produce a number we don’t know, or one that involves subjective judgment, other random numbers can influence what number comes to mind.

 

This can have real consequences in our lives when we are looking to buy something or make a donation or investment. Retailers may present us with high anchors in an effort to prime us to accept a higher price than we would otherwise pay for an item. If you walk into a sunglass shop and see two prominently displayed pairs with very high prices, you might not be as surprised by the high prices listed on the other sunglasses, and you might even consider slightly lower prices a good deal.

 

It is probably safe to say that sales prices in stores, credit card interest rates, and investment management fees are carefully crafted with anchoring effects in mind. Retailers want you to believe that a high price on an item is a fair price, and could be higher if they were not willing to offer you a deal. Credit card companies and investment brokers want you to believe the interest rates and management fees they charge are small, and might try to prime you with large numbers relative to the rates they quote you. We probably can’t completely overcome the power of anchoring effects, but if we know what to look for, we might be better at taking a step back and analyzing the rates and costs a little more objectively. If nothing else, we can pause and doubt ourselves a little more when we are sure we have found a great deal on a new purchase or investment.
Accepting Unsound Arguments

Motivated reasoning is a major problem for those of us who want to have beliefs that accurately reflect the world. To live is to have preferences about how the world operates and relates to our lives. We would prefer not to endure suffering and pain, and would rather have comfort, companionship, and prosperity. We would prefer the world to provide for us, and we would prefer to not be too heavily strained. From pure physical needs and preferences all the way through social and emotional needs and preferences, our experiences of the world are shaped by what we want and what we would like. This is why we cannot get away from our own opinions and individual preferences in life, and part of why motivated reasoning becomes the problem that it is.

 

In Thinking Fast and Slow, Daniel Kahneman writes about how motivated reasoning works in our minds, in terms of the arguments we make to support the conclusions we believe in, or would like to believe in. He writes, “When people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound.”

 

We justify conclusions we would like to believe with any argument that seems plausible and points toward them. Our preference for one conclusion leads us to bend the arguments in its favor. Rather than truly analyzing the arguments, we discount factors that don’t support what we want to believe, and we disregard arguments that come from people reaching an alternative conclusion. Our preferences take over, and the things we want become more important than reality. Motivated reasoning gives us a way to support what we want to believe by twisting the value we assign to different facts.

 

Even in our own minds, demonstrating that an argument in favor of our preferred conclusion is flawed is unlikely to make much of a difference. We will continue to hold on to the flawed argument, choosing to believe there is something true about it, even when we know it is flawed or contradicts other facts we would also have to accept in order to support our preferred conclusion.

 

This doesn’t make us humans look very good. We can’t simply reason our way to new beliefs, and we can’t rely on facts and data alone to change minds. In the end, if we want to change our thoughts and behavior, as well as those of others, we have to shape people’s preferences. Motivated reasoning can support conclusions that do not accurately reflect the world around us, so those of us who care about reality have to make trusting science and expertise more salient before we can get people to accept arguments grounded in evidence. If we don’t think about how preferences and motivated reasoning lead people to believe inaccurate claims, we will fail to address the preferences that support problematic policies, and we won’t be able to guide our world in a direction based on reason and sound conclusions.
We Think of Ourselves as Rational

In his book Thinking Fast and Slow, Daniel Kahneman lays out two ways of thinking about our thought processing, which he calls System 1 and System 2. System 1 is fast, automatic, often subconscious, and usually pretty accurate when it comes to making quick judgments, assumptions, and estimations of the world. System 2 is where our heavy-duty thinking takes place. It is where we crunch through math problems, where the rational, problem-solving part of the brain is in action, and it’s the system that uses a lot of energy to help us remember important information and understand the world.

 

Despite the fact that we normally operate on System 1, that is not the part of our brain we think of as ourselves. Kahneman writes, “When we think of ourselves, we identify with System 2, the conscious, reasoning self that has beliefs, makes choices, and decides what to think about and what to do.” We believe ourselves to be rational agents, responding reasonably to the world around us. We see ourselves as free from bias, as logically coherent, and as considerate and understanding. Naturally, it is System 2 that we believe we spend most of our time with; however, this is not exactly the case.

 

A lot of our actions are influenced by factors that play out at the System 1 level rather than the System 2 level. If you are extra tired, if you are hungry, or if you feel insulted by someone close to you, then you probably won’t be thinking as rationally and reasonably as you would expect. You are likely to operate on System 1, making sometimes faulty assumptions from incomplete data about the world around you. If you are hungry or tired enough, you will effectively be operating on autopilot, letting System 1 take over as you move about the cabin.

 

Even though we often operate on System 1, we feel as though we operate on System 2 because the part of us that thinks back to how we behaved, the part of us required for serious reflection, is part of System 2. It is critical, thoughtful, and takes its time generating logically coherent answers. System 1 is quick and automatic, so we don’t even notice when it is in control. When we think about who we are, why we did something, and what kind of person we aspire to be, it is System 2 that is flying the plane, and it is System 2 that we become aware of, fooling ourselves into believing that System 2 is all we are, that System 2 is what is really in our head. We think of ourselves as rational, but that is only because our irrational System 1 can’t pause to reflect back on itself. We only see the rational part of ourselves, and it is comforting to believe that is really who we are.