Moral Vantage Points & Costs vs. Benefits

I am in favor of making as many rational choices as we can, but the reality is that we cannot take subjectivity out of our decision-making entirely. When we strive to be rational, we make decisions based on objective statistics and measurable data. We try to strip subjectivity from our measures so that our decisions can be fact-based. But an unavoidable problem is determining which facts and measures to use in our evaluations. At some point, we have to decide which factors are important and which are not.
 
 
This means that there will always be some subjectivity built into our systems. It also means we cannot avoid making decisions that are, at some level, political. No matter how much of a rational technocrat we strive to be, we are still making political and subjective judgments.
 
 
Steven Pinker has a sentence in his book The Better Angels of Our Nature which reflects this reality. Pinker writes, “A moral vantage point determines more than who benefits and who pays; it also determines how events are classified as benefits and costs to begin with.”
 
 
Pinker’s reflection is in line with what I described in the opening paragraph. Who gets to decide what is a cost and what is a benefit can shape the rational decision-making process and framework. If you believe that the most important factor in a decision is the total price paid and another person believes that the most important factor is whether access to the end product is equitable, then you may end up at an impasse that cannot be resolved rationally. Your two most important values may directly contradict each other and no amount of statistics is going to change how you understand costs and benefits in the situation.
 
 
I don’t think this means that rationality is doomed. I think it means we must be aware of the fact that rationality is bounded, that there are realms where we cannot be fully rational. We can still strive to be as rational as possible, but we need to acknowledge that how we view costs and benefits, how we view the most important or least important factors in a decision, will not be universal and cannot be entirely objective.
When to Stop Counting

Yesterday I wrote about the idea of scientific versus political numbers. Scientific numbers are those that we rely on for decision-making. They are not always better or more accurate than political numbers, but they are generally based on some sort of standardized methodology and have a concrete, agreed-upon backing. Political numbers are more or less guesstimates, or are drawn from sources that are not confirmed to be reliable. While they can end up being more accurate than scientific figures, they are harder to accept and justify in decision-making processes. In the end, the default is scientific numbers, but scientific numbers have a flaw that keeps them from ever becoming what they purport to be: at some point, someone has to decide when it is time to stop counting and move forward with a scientific number rather than fall back on a political one.
Christopher Jencks explores this idea in his book The Homeless by looking at a survey conducted by Martha Burt at the Urban Institute. Jencks writes, “Burt’s survey provides quite a good picture of the visible homeless. It does not tell us much about those who avoid shelters, soup kitchens, and the company of other homeless individuals. I doubt that such people are numerous, but I can see no way of proving this. It is hard enough finding the proverbial needle in a haystack. It is far harder to prove that a haystack contains no more needles.” Burt’s survey was good at identifying the visibly homeless, but at some point a decision was made to stop attempting to count the less visible. Stopping at a certain point is entirely reasonable; as Jencks notes, it is hard to prove there are no more needles left to find. But it means there will always be a measure of uncertainty in your counting and results. Your numbers will always come with a margin of error, because there is almost no way to be certain you didn’t miss something.
Where we choose to stop counting can influence whether we should consider our numbers scientific or political. I would argue that the decision about where to stop the count is itself both a scientific and a political decision. We can make a political decision to stop counting in a way that deliberately excludes hard-to-count populations. Alternatively, we can continue the search, expand the count, and change the end result. Choosing how scientifically accurate to be with our count is still, at some level, a political decision.
However, choosing to stop counting can also be a rational and economic decision. We may have limited funding and resources for our counting and be forced to stop at a reasonable point, one that still allows us to make scientifically appropriate estimates about the remaining uncounted population. Diminishing marginal returns also mean that at a certain point we are putting far more effort into the counting than the benefit of finding one more item is worth. Our numbers, then, can rest on scientific motivations, political motivations, or both. These are important considerations whether we are the ones counting or the ones studying the results. Where we choose to stop matters, because we likely cannot prove that we have found every needle in the haystack and that no more needles exist. No matter what, we have to face the reality that the numbers we get are not perfect, however scientific we try to make them.
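To make the diminishing returns concrete, here is a minimal simulation, with every number invented for illustration: a hypothetical population of 10,000, a fixed 30% chance of finding each remaining person per survey sweep, and an arbitrary cost per sweep. Each pass finds fewer new people, so the cost per new find climbs quickly while the total count never provably reaches the full population.

```python
import random

# Toy model, all numbers hypothetical: a true population of 10,000,
# where each survey sweep finds any still-uncounted person with a 30%
# chance. Diminishing returns appear because each sweep works on a
# smaller remaining pool.
random.seed(42)
TRUE_POPULATION = 10_000
FIND_PROBABILITY = 0.30
COST_PER_SWEEP = 50_000  # hypothetical dollars per sweep

uncounted = TRUE_POPULATION
counted = 0
for sweep in range(1, 9):
    newly_found = sum(random.random() < FIND_PROBABILITY for _ in range(uncounted))
    counted += newly_found
    uncounted -= newly_found
    cost_per_find = COST_PER_SWEEP / max(newly_found, 1)
    print(f"Sweep {sweep}: {newly_found:4d} new, {counted:5d} total, "
          f"${cost_per_find:,.0f} per new person found")
```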
Scarcity & Short-Term Thinking

I find critiques of people living in poverty to generally be unfair and shallow. People living in poverty, with barely enough financial resources to get through the day, are criticized for not making smart investments of their time and money and for spending in a seemingly irrational manner. But for low-income individuals who can’t seem to get ahead no matter what jobs they take, these critiques miss the reality of life at the poorest socioeconomic level.
I wrote recently about the costs of work, which are not often factored into our easy critiques of the poor or unemployed. Much of America has inefficient, underinvested public transit. The time involved in catching a bus (or two) to get to work is huge compared with simply driving. Additionally, subways and other transit can be dangerous (there is no shortage of YouTube videos of people having phones stolen on public transit). This means that owning and maintaining a car can be essential for holding a job, an expense that can make working prohibitive for those living in poverty.
The example of transportation to work is meant to demonstrate that not working can be a more rational choice for the poorest among us. Work involves extra stress and costs, and the individual might not break even, making unemployment the more rational choice. There are a lot of instances where the socially desirable thing becomes the irrational choice for those living in poverty. If we do not recognize this reality, then we will unfairly criticize the choices and decisions of the poor.
In his book Evicted, Matthew Desmond writes about scarcity and short-term thinking, showing that they are linked and demonstrating how this shapes the lives of those living in poverty: “research show[s] that under conditions of scarcity people prioritize the now and lose sight of the future, often at great cost.” People living in scarcity have trouble thinking ahead and planning for their future. When you don’t know where you will sleep, where your next meal will come from, or whether you will be able to afford the next basic necessity, it is hard to think ahead to everything you need to do for basic living in American society. Your decisions might not make sense to the outside world, but to you they make sense, because all you have is the present moment, with no prospects for the future to plan around. Sudden windfalls may be spent irrationally, time may not be spent resourcefully, and tradeoffs that benefit the current moment at the expense of the future may seem like obvious choices if you live in constant scarcity.
Combined, the misperceptions about the cost of work and the psychological short-termism produced by scarcity show that we have to approach poverty differently from how we approach lazy middle-class individuals. I think we design our programs for assisting those in poverty while picturing lazy middle-class people. We don’t think about individuals who are actually so poor that the costs of work most of us barely notice become crippling. We don’t consider how scarcity shapes the way people think, leading them to make poor decisions that seem obvious to critique from the outside. Deep poverty creates challenges and obstacles that are separate from the problem of freeloading middle-class children or trust-fund babies. We have to recognize this if we are to actually improve the lives of the poorest among us and create a better social and economic system to integrate those individuals.
The Representation Problem

In The Book of Why Judea Pearl lays out what computer scientists call the representation problem by writing, “How do humans represent possible worlds in their minds and compute the closest one, when the number of possibilities is far beyond the capacity of the human brain?”
In the Marvel movie Infinity War, Dr. Strange looks forward in time to see all the possible outcomes of a coming conflict. He looks at 14,000,605 possible futures. But did Dr. Strange really look at all the possible futures out there? 14 million is a convenient big number to include in a movie, but how many possible outcomes are there for your commute home? How many people could change your commute in just the tiniest way? Is it really a different outcome if you hit a bug while driving, if you were stopped at 3 red lights and not 4, or if you had to stop at a crosswalk for a pedestrian? The details and differences in the possible worlds of our commute home can range from the minuscule to the enormous (the difference between rolling your window down versus a meteor landing in the road in front of you). With all things considered, there are certainly more than 14 million possible futures for your drive home.
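A quick back-of-the-envelope calculation shows how easily the count outruns the movie’s figure. The choice of 25 events below is arbitrary; the point is only that independent yes/no events multiply.

```python
# 25 independent yes/no events along a commute (each light red or green,
# each crosswalk occupied or empty, a bug hit or missed) already yield
# 2^25 distinct outcomes. The number 25 is an arbitrary illustration;
# real commutes involve far more events, and few are merely binary.
binary_events = 25
outcomes = 2 ** binary_events
print(outcomes)               # 33554432
print(outcomes > 14_000_605)  # True
```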
Somehow, we are able to live our lives and make decent predictions of the future despite the enormity of possible worlds ahead of us. Somehow we can represent possible worlds in our minds and determine which future world is closest to the reality we will experience. This ability allows us to plan for retirement, have kids, go to the movies, and cook dinner. If we could not do this, we could not drive down the street, could not walk to a neighbor’s house, and could not navigate a complex social world. But none of us are sitting in a green glow with our heads spinning in circles like Dr. Strange as we try to view all the possible worlds in front of us. What is happening in our minds to do this complex math?
Pearl argues that we solve this representation problem not through magical foresight, but through an intuitive understanding of causal structures. We can’t predict exactly what the stock market is going to do, whether a natural disaster is in our future, or precisely how another person will react to something we say, but we can get a pretty good handle on each of these areas thanks to causal reasoning.
We can throw out possible futures that have no causal structures related to the reality we inhabit. You don’t have to think of a world where Snorlax is blocking your way home, because your brain recognizes there is no causal plausibility of a Pokémon character sleeping in the road. Our brain easily discards the absurd possible futures and simultaneously recognizes the causal pathways that could have major impacts on how we will live. This approach narrows the possibilities down to a level of information that our brain (or a computer) can reasonably work with. We also know, without having to do the math, that rolling our window down or hitting a bug is not likely to start a causal pathway that materially changes the outcome of our commute home. The same goes for being stopped at a few more red lights, or even stopping to pick up a burrito. Those possibilities exist, but they don’t materially change our lives, so our brain can discard them from the calculation. This is the kind of work our brains are doing, Pearl would argue, to solve the representation problem.
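As a caricature of this pruning, not an implementation of anything from The Book of Why, imagine tagging each candidate future with two judgments, causal plausibility and material impact, and keeping only the futures that pass both filters. The example futures and tags below are invented.

```python
# Invented candidate futures, each tagged (causally plausible?, materially
# different?). The brain's real judgments are far richer; this only shows
# the two-filter structure described above.
candidate_futures = [
    ("Snorlax blocks the road", False, True),
    ("a bug hits the windshield", True, False),
    ("stopped at one extra red light", True, False),
    ("a crash closes the freeway", True, True),
    ("heavy rain slows traffic", True, True),
]

worth_considering = [
    name for name, plausible, material in candidate_futures
    if plausible and material  # discard the impossible, then the immaterial
]
print(worth_considering)  # ['a crash closes the freeway', 'heavy rain slows traffic']
```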

Thinking Conspiratorially Versus Evidence-Based Thinking

My last two posts have focused on conspiratorial thinking and whether it is an epistemic vice. Quassim Cassam, in Vices of the Mind, argues that we can only consider thinking conspiratorially to be a vice based on context. He means that whether conspiratorial thinking is a vice depends on whether there is reliable and accurate evidence to support a conspiratorial claim. Thinking conspiratorially is not an epistemic vice when we are correct and have solid evidence and rational justification for it. Anti-conspiratorial thinking can itself be an epistemic vice if we ignore good evidence of a conspiracy in order to keep believing that everything is in order.
Many conspiracies are not based on reliable facts and information. They create causal links between disconnected events and fail to explain reality. Anti-conspiratorial thinking also creates a false picture of reality, but does so by ignoring causal links that actually do exist. As epistemic vices, both ways of thinking can be described consequentially and by examining the patterns of thought that contribute to the conspiratorial or anti-conspiratorial thinking.
However, that is not to say that conspiratorial thinking is a vice in non-conspiracy environments and that anti-conspiratorial thinking is a vice in high-conspiracy environments. Regarding this line of thought, Cassam writes, “Seductive as this line of thinking might seem, it isn’t correct. The obvious point to make is that conspiracy thinking can be vicious in a conspiracy-rich environment, just as anti-conspiracy thinking can be vicious in contexts in which conspiracies are rare.” The key, according to Cassam, is evidence-based thinking and whether we have justified beliefs and opinions, even if they turn out to be wrong in the end.
Cassam generally supports the principle of parsimony, the idea that the simplest explanation for a scenario is often the best and the one that you should assume to be correct. Based on the evidence available, we should look for the simplest and most direct path to explain reality. However, as Cassam continues, “the principle of parsimony is a blunt instrument when it comes to assessing the merits of a hypothesis in complex cases.” This means that we will still end up with epistemic vices related to conspiratorial thinking if we only look for the simplest explanation.
What Cassam’s quotes about conspiratorial thinking and parsimony get at is the importance of good evidence-based thinking. When we are trying to understand reality, we should be thinking about what evidence should exist for our claims, what evidence would be needed to support them, and what kinds of evidence would refute them. Evidence-based thinking helps us avoid the pitfalls of conspiratorial and anti-conspiratorial thinking alike, regardless of whether we live in conspiracy-rich or conspiracy-poor environments. Accurately identifying or denying a conspiracy without any evidence, based on assuming simple relationships, is ultimately not much better than making up beliefs based on magic. What we need to do is adopt evidence-based thinking and better understand the causal structures that exist in the world. That is the only true way to avoid the epistemic vices related to conspiratorial thinking.
Knowledge and Perception

We often think of biases like prejudice as mean-spirited vices that cause people to lie and become hypocritical. The reality, according to Quassim Cassam, is that biases like prejudice run much deeper within our minds. Biases can become epistemic vices, inhibiting our ability to acquire and develop knowledge. They are more than just tendencies that make us behave in ways we profess to be wrong. Biases can literally shape the reality of the world we live in by altering the way we understand ourselves and the people around us.
“What one sees,” Cassam writes in Vices of the Mind, “is affected by one’s beliefs and background assumptions. It isn’t just a matter of taking in what is in front of one’s eyes, and this creates an opening for vices like prejudice to obstruct the acquisition of knowledge by perception.”
I am currently reading Steven Pinker’s book Enlightenment Now, in which Pinker argues that humans strive toward rationality and that, at the end of the day, subjectivity is ultimately overruled by reason, rationality, and objectivity. I have long been a strong adherent of the Social Construction Framework and have believed that our worlds are created and influenced by individual differences in perception to a great degree. Pinker challenges that assumption, but framing his challenge through the lens of Cassam’s quote helps show how Pinker is ultimately correct.
Individual level biases shape our perception. Pinker describes a study where university students watching a sporting event literally see more fouls called against their team than the opponent, revealing the prejudicial vice that Cassam describes. Perception is altered by a prejudice against the team from the other school. Knowledge (in the study it is the accurate number of fouls for each team) is inhibited for the sports fans by their prejudice. The reality they live in is to some extent subjective and shaped by their prejudices and misperceptions.
But this doesn’t mean that knowledge about reality is inaccessible to humans at a larger scale. A neutral third party (or committee of officials) could watch the game and accurately identify the correct number of fouls for each side. The sports fans and other third parties may quibble about the exact final number, but with enough neutral observers we should be able to settle on a more accurate reality than if we left things to the biased sports fans. At the end of the day, rationality will win out through strength of numbers, and even the disgruntled sports fan will have to admit that the number of fouls they perceived was different from the more objective number of fouls agreed upon by the neutral third party members.
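A small simulation makes the aggregation point visible. The parameters are invented: assume 30 fouls actually occur, fans miss a quarter of the fouls called against their own team, and neutral observers make only small symmetric errors. Averaging washes out the neutral observers’ random errors but cannot remove the fans’ systematic bias.

```python
import random

# Toy model with invented parameters: 30 true fouls, biased fans who miss
# each foul against their team 25% of the time, and neutral observers who
# make small symmetric counting errors.
random.seed(0)
TRUE_FOULS = 30

def fan_count():
    # Systematic bias: each foul against the fan's team goes unseen 25% of the time.
    return sum(random.random() > 0.25 for _ in range(TRUE_FOULS))

def neutral_count():
    # Random, unbiased error: off by at most 2 in either direction.
    return TRUE_FOULS + random.choice([-2, -1, 0, 1, 2])

fan_avg = sum(fan_count() for _ in range(100)) / 100
neutral_avg = sum(neutral_count() for _ in range(100)) / 100
print(round(fan_avg, 1))      # about 22.5, systematically below 30
print(round(neutral_avg, 1))  # about 30, random errors cancel out
```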
I think this is at the heart of the message from Cassam and the argument that I am currently reading from Pinker. My first reaction to Cassam’s quote is to say that our realities are shaped by biases and perceptions, and that we cannot trust our understanding of reality. However, objective reality (or something pretty close to it that enough non-biased people could reasonably describe) does seem to exist. As collective humans, we can reach objective understandings and agreements as people recognize and overcome biases and as the descriptions of the world presented by non-biased individuals prove to be more accurate over the long run. The key is to recognize that epistemic vices shape our perception at a deep level, that they are more than just hypocritical behaviors and that they literally shape the way we interpret reality. The more we try to overcome these vices of the mind, the more accurately we can describe the world, and the more our perception can then align with reality.
Incentives for Environmentally Responsible Markets

When it comes to environmental issues, no single actor is completely to blame, and that means no single actor can make the necessary changes to prevent catastrophic climate change. This means we can’t put all the weight on governments to take actions to change the course of our climate future, and we can’t blame individual actors either. We have to think about economies, polities, and incentive structures.

 

In their book Nudge, Cass Sunstein and Richard Thaler look at what this means for markets and regulation as we try to find sustainable paths. They write, “markets are a big part of this system, and for all their virtues, they face two problems that contribute to environmental problems. First, incentives are not properly aligned. If you engage in environmentally costly behavior next year, through consumption choices, you will probably pay nothing for the environmental harms that you inflict. This is what is often called a tragedy of the commons.”

 

One reason markets bear some of the blame and responsibility for the climate change crisis is because market incentives can produce externalities that are hard to correct. Climate change mitigation strategies, such as research and development of more fuel efficient vehicles and technologies, are expensive, and the costs of climate change are far off. Market actors, both consumers and producers, don’t have proper incentives to make the costly changes today that would reduce the future costs of continued climate change.

 

A heavy-handed approach to our climate crisis would be for governments to step in with dramatic regulation – eliminating fossil fuel vehicles, setting almost unattainably high energy efficiency standards for furnaces and dishwashers, and limiting air travel. Such an approach, however, might anger the population and erode any support for climate mitigation measures, making the crisis even more dire. I don’t think many credible people support truly heavy-handed government action, even if they favor regulation that comes close to the extremes I mentioned. Sunstein and Thaler’s suggestion of improved incentives to address market failures and change behaviors has advantages over heavy-handed regulation. The authors write, “incentive-based approaches are more efficient and more effective, and they also increase freedom of choice.”

 

To some extent, regulation looks at a problem and asks what the most effective way to stop it would be if everyone were acting rationally. An incentives-based approach asks what behaviors need to change, and what existing forces encourage the negative behaviors and discourage better ones. Taxes, independent certifications, and public shaming can be useful incentives to get individuals, groups, and companies to make changes. I predict that in 10-15 years, people who are not yet driving electric cars will start to be shamed for continuing to drive inefficient gas guzzlers (unfortunately, this probably means people with low incomes will be shamed for not being able to afford a new car). In the US, we have tried to introduce taxes on carbon output, but have not been successful. Taxing energy consumption in terms of carbon output changes the incentives companies face with regard to the negative environmental externalities from energy and resource consumption. And independent certification boards, like the one behind the EnergyStar label, can continue to play an important role in encouraging the development of more efficient appliances. The incentives approach might seem less direct, slower, and less certain to work, but in many areas, not just climate change, we need broad public support to make changes, especially when the up-front costs are high. This requires that we understand incentives and think about ways to change incentive structures. Nudges such as the ones I mentioned may work better than full government intervention when people are not acting fully rationally, which is usually the case for most of us. Nudges can get us to change behaviors while believing that we are making choices for ourselves, rather than having choices forced on us by an outside authority.
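A back-of-the-envelope sketch shows how a carbon tax changes the calculus; every number here is hypothetical. Without the tax, a polluter compares only production costs; with a tax set near the social cost of carbon, the externality lands on the polluter’s own books.

```python
# Hypothetical numbers only: a Pigouvian carbon tax set equal to the
# assumed social harm of each ton of CO2.
CARBON_TAX_PER_TON = 100  # assumed social harm, in dollars per ton

def private_cost(production_cost, tons_co2, tax_per_ton=0):
    """Cost as the producer sees it, with an optional carbon tax."""
    return production_cost + tons_co2 * tax_per_ton

# Two hypothetical ways to make the same product.
dirty = {"production_cost": 1_000, "tons_co2": 10}
clean = {"production_cost": 1_500, "tons_co2": 2}

# Untaxed, the dirty process looks cheaper because the harm is external.
print(private_cost(**dirty), private_cost(**clean))  # 1000 1500

# Taxed, the externality is internalized and the clean process wins.
print(private_cost(**dirty, tax_per_ton=CARBON_TAX_PER_TON),
      private_cost(**clean, tax_per_ton=CARBON_TAX_PER_TON))  # 2000 1700
```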
Why Terrorism Works

In the wake of terrorist attacks, deadly shootings, or bizarre accidents, I often find myself trying to talk down the threat and act as if my daily life shouldn’t change. I live in Reno, NV; my city has experienced school shootings, and my state experienced the deadliest mass shooting in modern United States history, but I personally have never been close to any of these extreme yet rare events. Nevertheless, despite efforts to talk down any risk, I do notice the fear that I feel following such events.

 

This fear is part of why terrorism works. Despite trying to rationally and logically talk myself through the aftermath of an attack, and reminding myself that I am in more danger on the freeway than near a school or at a concert, there is still some apprehension under the surface, no matter how cool I look on the outside. In Thinking Fast and Slow, Daniel Kahneman examines why we behave this way following such attacks. Terrorism, he writes, “induces an availability cascade. An extremely vivid image of death and damage, constantly reinforced by media attention and frequent conversations becomes highly accessible, especially if it is associated with a specific situation.”

 

Availability is more powerful in our minds than statistics. If we know that a given event is incredibly rare but have strong mental images of such an event, we will overweight the likelihood of it occurring again. The more easily an idea or possibility comes to mind, the more likely it feels that it could happen. On the other hand, if we have trouble recalling instances of a rare outcome, we discount the possibility that it could occur. Terrorism succeeds because it shifts deadly events from feeling impossible to being easily accessible in the mind, making them feel as though they could happen again at any time. If our brains were coldly rational, terrorism wouldn’t work as well as it does. As it is, our brains respond to powerful mental images and memories, and the ease with which those images and memories come to mind shapes what we expect and what we think is likely or possible.
More on Affect Heuristics

For me, one of the easiest examples of the heuristics Daniel Kahneman shares in his book Thinking Fast and Slow is the affect heuristic. It is a bias I know I fall into all the time, one that has led me to buy particular brands of shoes, has influenced how I think about certain foods, and has shaped the way I think about people. In his book Kahneman writes, “The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).”

 

The world is a complex and tricky place, and we can only focus a lot of attention in one direction at a time. For a lot of us, that means we are focused on getting kids ready for school, cooking dinner, or trying to keep the house clean. Trying to fully understand the benefits and drawbacks of a social media platform, a new traffic pattern, or how to invest in retirement may seem important, but it can be hard to find the time and mental energy to focus on a complex topic and organize our thoughts in a logical and coherent manner. Nevertheless, we are likely to be presented with situations where we have to make decisions about what level of social media is appropriate for our children, offer comments on new traffic patterns around the water cooler, or finally get around to setting up our retirement plan and deciding what to do with that old 401K from that job we left.

 

Without adequate time, energy, and attention to think through these difficult decisions, we have to make choices and are asked to have opinions on topics we are not well informed about. “The affect heuristic,” Kahneman writes, “simplifies our lives by creating a world that is much tidier than reality. Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy.” We substitute the hard question that requires detailed thought with a simple one: Do I like social media? Did the new traffic pattern make my commute slower? Do I like the way my retirement advisor presented a new investment strategy? In each case, we rely on affect, our emotional reaction to something, and make decisions in line with our gut feelings. Of course my kid can use social media; I’m on it, I like it, and I want to see what they are posting. Ugh, that new traffic pattern is awful; what were they thinking putting that utility box where it blocks the view of the intersection? Obviously this is the best investment strategy for me; my advisor explained it well, and I liked it when they told me I was making a smart decision.

 

We don’t notice when we default to the affect heuristic. It is hard to recognize that we have shifted away from making detailed calculations and are relying solely on intuitions about how something makes us feel. Rather than admitting that we buy Nike shoes because our favorite basketball player wears them and we want to be like LeBron, we create a story in our heads about the quality of the shoes, the innovative design, and the complementary colors. We fall back on a quick set of factors that gives the impression of a thoughtful decision. In a lot of situations, we probably can’t do much better than the affect heuristic, but it is worth considering whether our decisions are really being driven by affect. We might avoid buying things out of pure brand loyalty, and we might be a little calmer and more reasonable in debates with friends and family when we realize we are acting on affect and not on reason.
Anchoring Effects

Anchoring effects were one of the psychological phenomena I found most interesting in Daniel Kahneman’s book Thinking Fast and Slow. In many situations, random numbers seem to influence other numbers that we consciously think about, even when there is no reasonable connection between the random number we see and the numbers we consciously use for another purpose. As Kahneman writes about the anchoring effect, “It occurs when people consider a particular value for an unknown quantity before estimating that quantity.”


Several examples of anchoring effects are given in the book. In one instance, judges were asked to assess how much a nightclub should be fined for playing loud music long after the town’s quiet hours. Real judges who make these legal decisions were presented with the name of the club and information about the violation of the noise ordinance. The fictitious club was named after the fictitious street it was located on. In some instances, the club name was something along the lines of 1500 First Street; in others, it was something like 10 First Street. Judges consistently assessed a higher fine against the club named 1500 First Street than against the club named 10 First Street. Seemingly random and unimportant information, the street-address numbers in the club’s name, had a real impact on the fine judges on average thought the club should pay.

 

In other examples of anchoring effects, Kahneman shows that we come up with different guesses of how old Gandhi was when he died depending on whether we are asked if he was older than 35 or younger than 114. In another experiment, a random spin of a wheel influenced people’s guesses of the number of African nations in the UN. In all these examples, when we have to produce a number we don’t know, or one open to subjective judgment, other random numbers can influence what comes to mind.
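One common textbook way to think about this, not a model taken from Kahneman’s book, is anchoring as insufficient adjustment: the reported estimate behaves like a weighted average of a private best guess and the anchor. The weight and the private guess below are arbitrary choices for illustration.

```python
# Toy anchor-and-adjust model with invented numbers: estimates drift 30%
# of the way toward whatever anchor was presented.
ANCHOR_WEIGHT = 0.3  # arbitrary illustrative weight

def anchored_estimate(private_guess, anchor, w=ANCHOR_WEIGHT):
    return (1 - w) * private_guess + w * anchor

# Same private belief about Gandhi's age at death, two different anchors.
print(anchored_estimate(private_guess=67, anchor=35))   # 57.4
print(anchored_estimate(private_guess=67, anchor=114))  # 81.1
```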

 

This can have real consequences in our lives when we are looking to buy something or make a donation or investment. Retailers may present us with high anchors in an effort to prime us to be willing to accept a higher price than we would otherwise pay for an item. If you walk into a sunglass shop and see two prominently displayed sunglasses with very high prices, you might not be as surprised by the high prices listed on other sunglasses, and might even consider slightly lower prices on other sunglasses as a good deal.

 

It is probably safe to say that sales prices in stores, credit card interest rates, and investment management fees are carefully crafted with anchoring effects in mind. Retailers want you to believe that a high price on an item is a fair price, and could be higher if they were not willing to offer you a deal. Credit card companies and investment brokers want you to believe the interest rates and management fees they charge are small, and might try to prime you with large numbers relative to the rates they quote you. We probably can’t completely overcome the power of anchoring effects, but if we know what to look for, we might be better at taking a step back and analyzing the rates and costs a little more objectively. If nothing else, we can pause and doubt ourselves a little more when we are sure we have found a great deal on a new purchase or investment.