Causal Illusions

In The Book of Why Judea Pearl writes, “our brains are not wired to do probability problems, but they are wired to do causal problems. And this causal wiring produces systematic probabilistic mistakes, like optical illusions.” This creates problems for us when data correlate even though no causal link connects the outcomes. According to Pearl, our causal thinking “neglects to account for the process by which observations are selected.” We don’t always realize that we are taking a sample, that our sample could be biased, and that structural factors independent of the phenomenon we are trying to observe could greatly impact the observations we actually make.
Pearl continues, “We live our lives as if the common cause principle were true. Whenever we see patterns, we look for a causal explanation. In fact, we hunger for an explanation, in terms of stable mechanisms that lie outside the data.” When we see a correlation, our brains instantly start looking for a causal mechanism that can explain the correlation and the data we see. We don’t often look at the data itself and ask whether some part of the data collection process led to the outcomes we observed. Instead, we assume the data is correct and that it reflects an outside, real-world phenomenon. This is the source of many of the causal illusions that Pearl describes in the book. Our minds are wired for causal thinking, and we will invent causality when we see patterns, even if there truly isn’t a causal structure linking the patterns we see.
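To make Pearl’s point about selection concrete, here is a minimal simulation (the trait names and threshold are invented for illustration) showing how the process that decides which cases we observe can manufacture a correlation between two completely independent variables:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Two traits generated independently -- no causal link between them.
trait_a = rng.normal(size=n)
trait_b = rng.normal(size=n)

# Selection step: we only "observe" cases whose combined score clears a bar,
# mimicking a sampling process that sits outside the phenomenon itself.
observed = (trait_a + trait_b) > 1.5

print("correlation in the full population:", round(np.corrcoef(trait_a, trait_b)[0, 1], 3))
print("correlation among observed cases:  ", round(np.corrcoef(trait_a[observed], trait_b[observed])[0, 1], 3))
```

In the full population the two traits are unrelated, but among the observed cases a clear negative correlation appears, and our causal wiring is primed to invent a story to explain it.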
It is in this spirit that we attribute negative personality traits to people who cut us off on the freeway. We assume they don’t like us, that they are terrible people, or that they are rushing to the hospital with a sick child, so that our being cut off has a satisfying causal explanation. When a particular type of car stands out and we start seeing that car everywhere, we mistake our increased attention for an actual increase and assume that there really are more of those cars on the road now. We assume that people find them more reliable or more appealing, and we treat those purchases as the causal mechanism that explains why we now see the cars everywhere. In both cases we are creating causal pathways in our minds that are little more than causal illusions, but we want to find a cause for everything, and we don’t always realize that we are doing so.
Procedure Over Performance

My wife works with families with children with disabilities for a state agency. She and I often have discussions about some of the administrative challenges and frustrations of her job, and about some of the creative ways that she and other members of her agency are able to bend the rules to meet the human needs of the job, even though their decisions occasionally step beyond management’s standard operating procedures. For my wife and her colleagues below the management level of the agency, helping families and doing what is best for children is the motivation for all of their decisions. For the management team within the agency, however, avoiding errors and blame often seems to be the more important goal.

 

This disconnect between agency functions, mission, and procedures is not unique to my wife’s state agency. It is a challenge that Max Weber wrote about in the late 1800s and early 1900s. Somewhere along the line, public agencies and private companies seem to forget their mission. Procedure becomes more important than performance, and services or products suffer.

 

Gerd Gigerenzer offers an explanation for why this happens in his book Risk Savvy. Negative error cultures likely contribute to people becoming more focused on procedure over performance, because following perfect procedure is safe, even if it isn’t always necessary and doesn’t always lead to the best outcomes. A failure to accept risk and errors, and a failure to discuss and learn from errors, leads people to avoid situations where they could be blamed for failure. Gigerenzer writes, “People need to be encouraged to talk about errors and take the responsibility in order to learn and achieve better overall performance.”

 

As companies and government agencies age, their workforces age. People become comfortable in their roles, they don’t want to have to look for a new job, they take out mortgages, have kids, and send them to college. People become more conservative and risk averse as they have more to lose, which means they are less likely to take risks in their careers, because they don’t want to lose the income that supports their lifestyles, retirements, or their kids’ college plans. Following procedure, like getting meaningless forms submitted on time and documenting conversations promptly, becomes more important than actually ensuring valuable services or products are provided to constituents and customers. Procedure prospers over performance, and the agency or company as a whole suffers. Positive error cultures, where it is okay to take reasonable risks and acceptable to discuss errors without fear of blame, are important for overcoming the stagnation that can arise when procedure becomes more important than the mission of the agency or company.
Positive Error Cultures

My last post was about negative error cultures and the harm they can create. Today is about the flip side, positive error cultures and how they can help encourage innovation, channel creativity, and help people learn to improve their decision-making. “On the other end of the spectrum,” writes Gerd Gigerenzer in Risk Savvy, “are positive error cultures, that make errors transparent, encourage good errors, and learn from bad errors to create a safe environment.”

 

No one likes to make errors. Whether it is a small error on our personal finances or a major error on the job, we would all rather hide our mistakes from others. In school we probably all had the experience of quickly stuffing away a test that received a bad grade so that no one could see how many questions we got wrong. Errors in life have the same feeling, but like mistakes on homework, reports, or tests, hiding our errors doesn’t help us learn for the future. In school, reviewing our mistakes and being willing to work through them helps us better understand the material and shows us where we need to study for the final exam. In life, learning from our mistakes helps us become better people, make smarter decisions, and be more prepared for future opportunities.

 

This is why positive error cultures are so important. If we are trying to do something new, innovative, and important, then we are probably going to be in a position where we will make mistakes. If we are new homeowners and don’t know exactly how to tackle a project, we will certainly err, but by learning from our mistakes, we can improve and better handle similar home improvement projects in the future. Hiding our errors will likely lead to greater costs down the road and leave us dependent on others to do costly work around the house. Business is the same way. If we want to earn a promotion or do something innovative to solve a new problem, we are going to make mistakes. Acknowledging where we were wrong and why we made an error helps us prepare for future challenges and opportunities. It helps us learn and grow rather than remaining stuck in one place, not solving any problems and not preparing for future opportunities.

 

80,000 Hours has a great “Our Mistakes” section on their website, demonstrating a positive error culture.
Negative Error Cultures

No matter how smart, observant, and rational we are, we will never have perfect information for all of the choices we make in our lives. There will always be decisions that we have to make based on a limited set of information, and when that happens, there will be a risk that we won’t make the right decision. In reality, there is risk in almost any decision we make, because there are very few choices where we have perfect information and fully understand all the potential consequences of our decisions and actions. This means the chance for errors is huge, and we will make many mistakes throughout our lives. How our cultures respond to these errors is important in determining how we move forward from them.

 

In Risk Savvy, Gerd Gigerenzer writes the following about negative error cultures:

 

“On the one end of the spectrum are negative error cultures. People living in such a culture fear to make errors of any kind, good or bad, and, if an error does occur, they do everything to hide it. Such a culture has little chance to learn from errors and discover new opportunities.”

 

None of us want to live in a world of errors, but the reality is that we spend a lot of our time engulfed by them. We don’t want to make mistakes on the job and potentially lose a raise, a promotion, or our employment altogether. Many people belong to religious social communities or live in families characterized by negative error cultures, where any social misstep feels like the end of the world and risks expulsion from the community or family. Additionally, our world of politics is typically a negative error culture, where one political slip-up is often enough to irrevocably damage an individual’s political career.

 

Gigerenzer encourages us to move away from negative error cultures because they stifle learning, reduce creativity, and fail to acknowledge the reality that our world is inherently a world of risk. We cannot avoid all risk because we cannot be all-knowing, and that means we will make mistakes. We can try to minimize the mistakes we make and their consequences, but we can only do so by acknowledging mistakes, owning up to them, learning, adapting, and improving our future decision-making.

 

Negative error cultures don’t expose mistakes and do not learn from them. They prevent individuals and organizations from finding the root cause of an error, and they don’t allow for changes and adaptation. What is worse, efforts to hide errors can lead to more errors. Describing hospitals with negative error cultures, Gigerenzer writes, “zero tolerance for talking about errors produces more errors and less patient safety.” Being afraid to ever make a mistake makes us less willing to innovate, to learn, and to improve the world around us. It isolates us and keeps us from improving and reducing risk for ourselves and others in the future. In the end, negative error cultures drive more of the thing they fear, reinforcing a vicious cycle of errors, secrecy, and more errors.
Inventing Excuses

With the start of the new year and the inauguration of a new president of the United States, many individuals and organizations are turning their eyes toward the future. Individuals are working on resolutions to make positive changes in their lives. Companies are making plans and strategy adjustments to fit economic and regulatory predictions. Political entities are charting a new course in anticipation of the political goals, agendas, and actions of the new administration and the new distribution of political power in the country. However, almost all of the predictions and forecasts of individuals, companies, and political parties will end up being wrong, or at least not completely correct.

 

Humans are not great forecasters. We rarely do better than just assuming that what happened today will continue to happen tomorrow. We might be able to predict a regression to the mean, but usually we are not great at predicting when a new trend will come along, when a current trend will end, or when some new event will shake everything up. But this doesn’t mean that we don’t try, and it doesn’t mean that we throw in the towel or shrug our shoulders when we get things wrong.

 

In Risk Savvy Gerd Gigerenzer writes, “an analysis of thousands of forecasts by political and economic experts revealed that they rarely did better than dilettantes or dart-throwing chimps. But what the experts were extremely talented at was inventing excuses for their errors.” It is remarkable how poor our forecasting can be, and even more remarkable how much attention we still pay to forecasts. At the start of the year we all want to know whether the economy will improve, what a political organization is going to focus on, and whether a company will finally produce a great new product. We tune in as experts give us their predictions, running through all the forces and pressures that will shape the economy, political future, and performance of companies. And even when the experts are wrong, we listen to them as they explain why their initial forecast made sense, and why they should still be listened to in the future.

 

A person who throws darts, flips a coin, or picks options out of a hat before making a big decision is likely to be just as right, or just as wrong, as the experts who recommend one decision over another. However, the coin flipper will have no excuse when they make a poor decision. The expert, on the other hand, will have no problem inventing excuses to explain away their culpability in poor decision-making. The smarter we are, the better we are at rationalizing our choices and inventing excuses, even for the choices that don’t go over so well.
Do People Make the Best Choices?

My wife works with families with children with disabilities, and for several years I worked in the healthcare space. A common idea between our two worlds was that the people being assisted are the experts on their own lives, and they know what is best for them. Parents are the experts for their children, and patients are the experts in their own health. Even if parents don’t know all the intervention strategies to help a child with disabilities, and even if patients don’t have an MD from Stanford, they are still the experts in their own lives and in what they and their families need.

 

But is this really true? In recent years there has been a bit of a customer service pushback in the world of business, a growing recognition that the customer isn’t always right. Additionally, research from the field of cognitive psychology, like much of the research from Daniel Kahneman’s book Thinking Fast and Slow that I wrote about, demonstrates that people can have huge blind spots in their own lives. People cannot always think rationally, in part because their brains are limited in their capacity to handle lots of information and because their brains are tempted to take easy shortcuts in decision-making that don’t always account for the true nature of reality. Add to Kahneman’s research the ideas put forth by Robin Hanson and Kevin Simler in The Elephant in the Brain, where the authors argue that our minds intentionally hide information from ourselves for political and personal advantage, and we can see that individuals can’t be trusted to always make the best decisions.

 

So while no one else may know a child as well as the child’s parents, and while no one knows your body and health as well as you do, your status as the expert of who you are doesn’t necessarily mean you are in the best position to always make choices and decisions that are in your own best interest. Biases, cognitive errors, and simple self-deception can lead you astray.

 

If you accept that you as an individual, and everyone else individually, cannot be trusted to always make the best choices, then it is reasonable to think that someone else can step in to help improve your decision-making in certain predictable instances where cognitive errors and biases can be anticipated. This is a key idea in the book Nudge by Cass Sunstein and Richard Thaler. In defending their ideas for libertarian paternalism, the authors write, “The false assumption is that almost all people, almost all of the time, make choices that are in their best interest or at the very least are better than the choices that would be made by someone else. We claim that this assumption is false – indeed, obviously false.”

 

In many ways, our country prefers to operate with markets shaping the main decisions and factors of our lives. We like to believe that we make the best choices for our lives, and that aggregating our choices into markets will allow us to minimize the costs of individual errors. The idea is that we will collectively make the right choices, driving society in the right direction and revealing the best option and decision for each individual without deliberate tinkering in the process. However, we have seen that markets don’t encourage us to save as much as we should, and markets can be susceptible to the same cognitive errors and biases that we as individuals all share. Markets, in other words, can be wrong just as we can.

 

Libertarian paternalism helps overcome the errors of markets by providing nudges to help people make better decisions. Setting up systems and structures that make saving for retirement easier helps correct a market failure. Outsourcing investment strategies, rather than each of us individually making stock trades, helps ensure that shared biases and panics don’t overwhelm the entire stock exchange. The reality is that we as individuals are not rational, but we can develop systems and structures that provide us with nudges to help us act more rationally, overcoming the reality that we don’t always make the choices that are in our best interest.
Intuitive Predictions and Intensity Matching

“Intuitive predictions are almost completely insensitive to the actual predictive quality of the evidence,” writes Daniel Kahneman in Thinking Fast and Slow. A lot of our thinking takes place in the part of our brain that is good at making quick connections, detecting patterns, and making fast judgments. The deeper and more thoughtful part of our brain only engages with the world when it really needs to, when we really need to do some critical thinking to sort out a math problem, write a blog post, or figure out how to grind down some grains to make bread. The result is that a lot of our thinking happens at a quick and intuitive level that is subject to biases and assumptions based on incomplete information. When we do finally turn our critical thinking brain to a problem, it is only operating with a limited set of information from the quick part of our brain, which scanned the environment and grabbed whatever information stood out.
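Kahneman’s suggested remedy, roughly paraphrased, is to anchor on a baseline and move toward the intuitive prediction only in proportion to how well the evidence actually predicts the outcome. A minimal sketch of that correction, with invented GPA numbers purely for illustration:

```python
def corrected_prediction(baseline: float, intuitive: float, correlation: float) -> float:
    """Move from the baseline toward the intuitive prediction in proportion
    to how well the evidence predicts the outcome (0 = not at all, 1 = perfectly)."""
    return baseline + correlation * (intuitive - baseline)

# Hypothetical example: predicting a student's GPA from a glowing interview.
# If interviews barely predict GPA (correlation ~0.2), the forecast should sit
# much closer to the average GPA than the interview impression suggests.
print(round(corrected_prediction(baseline=3.0, intuitive=3.9, correlation=0.2), 2))  # 3.18
```

If the evidence is only weakly predictive, the corrected forecast stays close to the baseline; only strongly predictive evidence justifies an extreme prediction.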

 

When we make a prediction without sitting down to do some math or weigh the relevant factors with pen and paper, our prediction will seem logical, but it will miss critical information. We will make connections between ideas and experiences that may not reflect the actual world. We will simplify the prediction by answering easy questions and substituting those answers for the more difficult question our prediction is actually trying to answer.

 

This year, as in 2016, we will see this in action. In 2016, for me and many of the people I know, it seemed as though very few people supported Donald Trump for president. I saw very few bumper stickers or yard signs for Trump, all the social media posts I saw highlighted his worst moments, and the news coverage I consumed described why he was unfit to be president. Naturally enough, I believed he would lose in a landslide. Of course, that did not happen. Intuitively I was sure that Clinton would win, and Kahneman’s research helps explain why I should have been more skeptical of my natural intuition.

 

Part of the problem was that my intuitive prediction was an exercise in intensity matching, and as Kahneman writes, “Intensity matching yields predictions that are as extreme as the evidence on which they are based.” All the information I saw highlighted how terrible Trump was. I didn’t see many people supporting Trump, and I didn’t see news stories justifying his candidacy. I didn’t see people in my immediate environment who strongly supported him, so my intuition was biased. It didn’t help that I didn’t do anything to seek out people who did support him or information outlets that posted articles or stories in support of him.

 

Kahneman’s writing aligns with my real-world experience. His studies of the brain and of our predictive machinery reveal biases and errors in our thinking. Our intuition is based on the limited set of information that the quick part of our brain can put together. When we do engage our deep thinking brain, it can still only operate on that limited information, so even when we think critically, we are likely to make mistakes because we can’t see the full picture, and biases in the information we absorb will predictably shape the direction of our miscalculations. What feels natural and obvious to us could be the result of faulty intensity matching and random chance in the environment around us.
Statistical Artifacts

When we have good graphs and statistical aids, thinking statistically can feel straightforward and intuitive. Clear charts can help us tell a story, can help us visualize trends and relationships, and can help us better conceptualize risk and probability. However, understanding data is hard, especially if the way that data is collected creates statistical artifacts.

 

Yesterday’s post was about extreme outcomes, and how it is the smallest counties in the United States where we see both the highest per capita instances of cancer and the lowest per capita instances of cancer. Small populations allow for large fluctuations in per capita cancer diagnoses, and thus extreme outcomes in cancer rates. We could graph the per capita rates, model them on a map of the United States, or present the data in unique ways, but all we would really be doing is creating a visual aid influenced by statistical artifacts from the samples we used. As Daniel Kahneman explains in his book Thinking Fast and Slow, “the differences between dense and rural counties do not really count as facts: they are what scientists call artifacts, observations that are produced entirely by some aspect of the method of research – in this case, by differences in sample size.”

 

Counties in the United States vary dramatically. Some are geographically huge, while others are quite small – Nevada is a large state with over 110,000 square miles of land but only 17 counties, compared to West Virginia with under 25,000 square miles of land and 55 counties. Across the US, some counties are exclusively within metropolitan areas, some are completely within suburbs, some are entirely rural with only a few hundred people, and some manage to incorporate major metros, expansive suburbs, and vast rural stretches (shoutout to Clark County, NV). Counties are convenient for collecting data, but they can cause problems when analyzing population trends across the country. The variations in size and other factors create the possibility for the extreme outcomes we see in things like cancer rates across counties. When smoothed out over larger populations, the disparities in cancer rates disappear.
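A quick simulation shows how this artifact arises. Assuming (with made-up numbers) that every county shares exactly the same underlying cancer rate, the smallest counties will still produce both the highest and the lowest observed per capita rates simply because of sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.002  # assume the same underlying cancer rate in every county

small_pops = rng.integers(500, 2_000, size=1_500)          # small, rural counties
large_pops = rng.integers(200_000, 2_000_000, size=1_500)  # large, urban counties

def observed_rates(populations):
    # Each county's observed cases are a binomial draw from the same true rate.
    return rng.binomial(populations, true_rate) / populations

small_rates = observed_rates(small_pops)
large_rates = observed_rates(large_pops)

print(f"small counties: min {small_rates.min():.5f}, max {small_rates.max():.5f}")
print(f"large counties: min {large_rates.min():.5f}, max {large_rates.max():.5f}")
```

The extreme values all come from the small-population counties, even though nothing about rural life changed the underlying rate.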

 

Most of us are not collecting lots of important data for analysis each day. Most of us probably don’t have to worry too much on a day-to-day basis about some important statistical sampling problem. But we should at least be aware of how complex information is, and how difficult it can be to display and share information in an accurate manner. We should turn to people like Tim Harford for help interpreting and understanding complex statistics when we can, and we should try to look for factors that might interfere with a convenient conclusion before we simply believe what we would like to believe about a set of data. Statistical artifacts can play a huge role in shaping the way we understand a particular phenomenon, and we shouldn’t jump to extreme conclusions based on poor data.
Pay and Chase

If you were setting up a healthcare plan for your employees, you would want to make sure that payments by the insurance plan were quick, so that your employees were not constantly bombarded by letters and phone calls from doctors’ offices asking when they would be paid by the insurance plan. You also would want the plan to have a system in place for catching fraudulent claims or errors in charges from hospitals and doctors’ offices. Both of these desires are reasonable, but in the real world they have created a system of perverse incentives that Dave Chase calls “Pay and Chase” in his book The Opioid Crisis Wake-Up Call. Here is how Chase describes it in his book:

 

“Another fee opportunity is so-called pay and chase programs, in which the insurance carrier doing your claims administration gets paid 30-40 percent for recovering fraudulent or duplicative claims. Thus, there is a perverse incentive to tacitly allow fraudulent and duplicative claims to be paid, get paid as the plan administrator, then get paid a second time for recovering the originally paid claim.”

 

Insurance companies administering health insurance don’t actually have an incentive to create tools to proactively stop fraud. They actually benefit when there is fraud, because they get a bonus when they spot the fraud and recoup the already paid fraudulent amount. As an employer partnered with the insurance company, you might be happy that claims are paid quickly so that your employees don’t have negative interactions with doctors about payment, but the way that many plans currently operate, you will end up paying a lot more overall when your plan pays for fraudulent claims and billing errors. You will pay for the fraud itself, and if you get any money back, it won’t be for the full amount that your claim administrator originally paid in the fraud or error – they will keep a cut.
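A back-of-the-envelope sketch, using invented numbers and a recovery fee in the 30-40 percent range Chase cites, shows why the employer still comes out behind even when the fraud is caught:

```python
# Hypothetical numbers for illustration only.
fraudulent_claim = 10_000    # bogus claim the plan administrator pays out
recovery_fee_rate = 0.35     # carrier's cut for recovering it (30-40% per Chase)

recovered = fraudulent_claim                  # assume the full amount is clawed back
carrier_fee = recovered * recovery_fee_rate   # fee charged for the "chase"
employer_net_loss = fraudulent_claim - (recovered - carrier_fee)

print(f"carrier's recovery fee: ${carrier_fee:,.0f}")
print(f"employer's net loss:    ${employer_net_loss:,.0f}")
```

Under these assumptions the carrier earns a fee on money it should never have paid out, while the employer absorbs the remainder of the loss.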

 

Chase continues, “Many of the fraud prevention tools used by claims administrators are laughably outdated and weak compared to what they are up against. Modern payment integrity solutions can stop fraud and duplicate claims, but aren’t being used by most self-insured companies’ claims administrators.”

 

Poor incentives and confusing systems have allowed this to occur. This is one example of how the systems around healthcare in the United States are not aligned with what we would all agree should be the number one focus: improving the health of Americans. Employers don’t want their employees to be angry, and plan administrators want to maximize profits. In the end, we all pay more as fraudsters find ways to get past the outdated fraud prevention systems of insurance companies and as those companies turn around and charge fees for catching the fraud and payment errors they didn’t prevent in the first place.

Motivated Reasoning – Arguments to Continue Believing As We Already Do

Recently I have been thinking a lot about the way we think. To each of us, it feels as though our thinking and our thought process is logical, that our assumptions about the world are sound and built on good evidence, and that we might have a few complex technical facts wrong, but our judgments are not influenced by bias or prejudice. We feel that we take into consideration wide ranges of data when making decisions, and we do not feel as though our decisions and opinions are influenced by meaningless information and chance.

 

However, science tells us that our brains often make mistakes, and that many of those mistakes are systematic. We also know people in our own lives who display wonderful thinking errors, such as close-mindedness, gullibility, and arrogance. We should be more ready to accept that our thinking isn’t any different from the thinking of people in scientific studies that show the brain’s capacity to veer off course, and that we aren’t really any different from the person we complain about for being biased or unfair in their thinking about something or someone we care about.

 

What can make this process hard is the mind itself. Our brains are experts at creating logical narratives, including about themselves. We are great at explaining why we did what we did, why we believe what we believe, and why our reasoning is correct. Scientists call this motivated reasoning.

 

Dale Carnegie has a great explanation of it in his book How to Win Friends and Influence People, “We like to continue to believe what we have been accustomed to accept as true, and the resentment aroused when doubt is cast upon any of our assumptions leads us to seek every manner of excuse for clinging to it. The result is that most of our so-called reasoning consists in finding arguments for going on believing as we already do.” 

 

Very often, when confronted with new information that doesn’t align with what we already believe, doesn’t align with our own self-interest, or challenges our identity in one way or another, we don’t update our thinking but instead explain away or ignore the new information. Even for very small things (Carnegie uses the pronunciation of Epictetus as an example), we may ignore convention and evidence and back our beliefs with outdated, out-of-context examples that seem to support us.

 

In my own life I try to remember this. Whether it is my rationalization of why it is OK that I went for a workout rather than doing the dishes, my self-talk about how great a new business idea is, or my justification for buying that sushi at the store when I was hungry while grocery shopping, I try to ask myself if my thoughts and decisions are influenced by motivated reasoning. This doesn’t always change my behavior, but it does help me recognize that I might be trying to fool myself. It helps me see that I am no better than anyone else when it comes to making up reasons to support all the things that I want. When I see this in other people, I am able to pull forward examples from my own life of doing the same thing, and I can approach them with more generosity and hopefully find a more constructive way of addressing their behavior and thought process. At an individual level this won’t change the world, but on the margins we should try to reduce our motivated reasoning, as hard as that may be, and slowly encourage those around us to do the same.