Rules of Thumb: Helpful, but Systematically Error Producing

The world throws a lot of complex problems at us. Even simple, mundane tasks and decisions hold a lot of complexity behind them. Deciding what time to wake up, finding the best way to hit the grocery store and the post office in a single trip, and judging how much is appropriate to pay for a loaf of bread all have incredibly complex mechanisms behind them. In figuring out when to wake up we have to consider how many hours of sleep we need, what activities we need to do in the morning, and how much time each of those activities will take while still leaving a cushion in case something runs long. In planning a shopping trip we are confronted with a version of the traveling salesman problem, a routing puzzle tied to P versus NP, one of the most vexing open problems in mathematics. And the price of bread was once the object of focus for teams of Soviet economists who could not pinpoint the price that would bring supply in line with the population's demand.
The brain handles all of these problems with relatively simple heuristics and rules of thumb, simplifying decisions so that we don't spend the whole night doing math to find the perfect alarm time, don't lose the entire day calculating the best route for our errands, and don't burn enormous brain power trying to set bread prices. We set a standard alarm time and make small adjustments, knowing we ought to leave the house ready for work by a certain time to reduce the risk of being late. We stick to main roads and travel familiar routes, eliminating the thousands of right-or-left-turn alternatives we could choose from. We rely on open markets to determine the price of bread without setting a universal standard.
Rules of thumb are necessary in a complex world, but that doesn't mean they are without their own downfalls. As Quassim Cassam writes in Vices of the Mind, echoing Daniel Kahneman's Thinking Fast and Slow, “We are hard-wired to use simple rules of thumb (‘heuristics’) to make judgements based on incomplete or ambiguous information, and while these rules of thumb are generally quite useful, they sometimes lead to systematic errors.” Useful but imperfect, rules of thumb can produce predictable, reliable errors and mistakes. Our thinking can be distracted by meaningless information, we can miss important factors, and we can fail to stay open to improvements or alternatives that would make our decision-making better.
What is important to recognize is that systematic and predictable errors from rules of thumb can be corrected. If we know where errors and mistakes are systematically likely to arise, then we can take steps to mitigate and reduce them. We can be confident in rules of thumb and heuristics that simplify decisions in positive ways while being skeptical of rules of thumb that we know are likely to produce errors, biases, and inaccurate judgements and assumptions. Companies, governments, and markets do this all the time, though not always in a neat step-by-step process (sometimes it is one step forward and two steps back), and over time it leads to progress. Embracing the usefulness of rules of thumb while acknowledging their shortcomings is a powerful way to improve decision-making while avoiding the cognitive downfalls of heuristics.

A Bias Toward Complexity

When making predictions or decisions in the real world, where there are many variables, high levels of uncertainty, and numerous alternatives to choose from, using a simple rule of thumb can be better than developing a complex predictive model. The intuition is that the more complex our model, the more accurately it will reflect the real complexity of the world and the better job it will do at making a prediction. If we can see that there are multiple variables, shouldn't our model capture the different alternatives for each of those variables? Wouldn't a simple rule of thumb necessarily flatten many of those alternatives, failing to take into consideration the different possibilities that exist? Shouldn't a more complex model be better than a simple heuristic?


The answer to these questions is no. We are biased toward complexity for numerous reasons: it feels important to build a model that tries to account for every possible alternative for each variable, we believe that more information is always good, and we want to impress people by showing how thoughtful and thorough we are. Creating a model that accounts for all the different possibilities fits those preexisting biases. The problem, however, is that as we make our model more complex, it becomes more unstable.


In Risk Savvy, Gerd Gigerenzer explains what happens with variance and our models by writing, “Unlike 1/N, complex methods use past observations to predict the future. These predictions will depend on the specific sample of observations it uses and may therefore be unstable. This instability (the variability of these predictions around their mean) is called variance. Thus, the more complex the method, the more factors need to be estimated, and the higher the amount of error due to variance.”  (Emphasis added by me – 1/N is an example of a simple heuristic that Gigerenzer explains in the book.)


Our bias toward complexity can make our models and predictions worse when there is high uncertainty, many alternatives, and relatively little data. In the opposite situation, where there is low uncertainty, few alternatives, and a plethora of data, we can use very complex models to make accurate predictions. But in the messy real world, like making stock market or March Madness predictions, we should rely on a simple rule of thumb. The more complex our model, the more opportunities there are to misestimate a given variable. Rather than one error being offset by the other estimates in the model, the added complexity creates more variance and a greater likelihood that our prediction will land further from reality than if we had flattened the variables with a simple heuristic.
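
To make the variance point concrete, here is a minimal simulation sketch in Python. It is not from Risk Savvy; the asset counts, return figures, and the particular "complex" method are invented for illustration. The 1/N heuristic ignores past data entirely, while the fitted method re-estimates its weights from every new sample of past returns, so its choices swing from sample to sample.

```python
# A toy illustration (not from Risk Savvy): compare the 1/N heuristic with a
# "complex" allocation that estimates weights from past returns. The point is
# that the complex method's weights (and therefore its predictions) swing with
# each sample of past data (high variance), while 1/N does not.
import numpy as np

rng = np.random.default_rng(0)
n_assets = 5
true_means = rng.normal(0.05, 0.02, n_assets)    # invented "true" expected returns

def one_over_n(_past):
    return np.full(n_assets, 1.0 / n_assets)      # ignore the data entirely

def fitted_weights(past):
    est = past.mean(axis=0)                       # estimate expected returns from the sample
    est = np.clip(est, 1e-6, None)                # crude guard against negative estimates
    return est / est.sum()                        # weight assets by estimated return

simple_w, complex_w = [], []
for _ in range(500):                              # 500 alternative "histories"
    past = rng.normal(true_means, 0.15, size=(24, n_assets))   # 24 months of noisy returns
    simple_w.append(one_over_n(past))
    complex_w.append(fitted_weights(past))

# Variability of the first asset's weight across histories: the complex
# method's choices are far less stable than the heuristic's.
print("1/N weight variance:   ", np.var([w[0] for w in simple_w]))
print("fitted weight variance:", np.var([w[0] for w in complex_w]))
```

With long histories and only a few stable alternatives the fitted method would eventually settle down and win, which mirrors the point above about when complex models are appropriate.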

A Mixture of Risks

In the book Risk Savvy, Gerd Gigerenzer explains the challenges we have with thinking statistically and how those difficulties can lead to poor decision-making. Humans have trouble holding lots of complex and conflicting information in mind. We don't do well with decisions involving risk, or with decisions where we cannot possibly know all the information relevant to the best choice. We prefer decisions involving fewer variables, where we can have more certainty about the risks and the potential outcomes. This leads to the substitution that Daniel Kahneman describes in his book Thinking Fast and Slow, where our minds swap an easier question for the difficult one without our noticing.


Unfortunately, this can have bad outcomes for our decision-making. Gigerenzer writes, “few situations in life allow us to calculate risk precisely. In most cases, the risks involved are a mixture of more or less well known ones.” Most of our decisions that involve risk have a mixture of different risks. They are complex decisions with tiers and potential cascades of risk based on the decisions we make along the way. Few of our decisions involve just one risk independent of others that we can know with certainty.


If we consider investing for retirement, we can see how complex decisions involving risk can be and how a mixture of risks runs through every choice we have to make. We can hoard money in a safe at home, reducing the risk of losing any of it, but we risk not having enough saved by the time we are ready to retire. We can invest our money, but then we have to decide whether to keep it in a bank account, put it in the stock market, or look to other investment vehicles. A bank account is unlikely to lose much money and is low risk, but it is also unlikely to grow our savings enough for retirement. Investing with a financial advisor takes on more risk: the risk that we are being scammed, the risk that the market tanks or our advisor makes bad investments on our behalf, and the risk that we won't have access to our money if we need it quickly in an emergency. Even the most certain option for our money, protecting it in a secret safe at home, still carries risks for the future. The option likely to provide the greatest return on our savings, investing in the stock market, has a mixture of risks attached to each decision we make after the initial decision to invest. There is no way we can calculate and fully comprehend every risk involved in such a decision.


Risk is complex, and we rarely deal with a single decision involving a single calculable risk at one time. Our brains are likely to flatten the decision by substituting simpler ones, eliminating some of the risks from consideration and letting our minds focus on fewer variables at a time. Nevertheless, the complex mixture of risks doesn't go away just because our brains pretend it isn't there.

Intelligence

“Intelligence is not an abstract number such as an IQ, but similar to a carpenter’s tacit knowledge about using appropriate tools,” writes Gerd Gigerenzer in his book Risk Savvy. “This is why the modern science of intelligence studies the adaptive toolbox that individuals, organizations, and cultures have at their disposal; that is, the evolved and learned rules that guide our deliberate and intuitive decisions.”


I like Gigerenzer's way of explaining intelligence. It is not simply a number or a ratio; it is our knowledge of and ability to understand our world. There are complex relationships between living creatures, physical matter, and information. Intelligence is an understanding of those relationships and an ability to navigate the complexity, uncertainty, and connections between everything in the world. Explicit rules, like mathematical formulas, help us understand some relationships, while statistical percentages help us understand others. Recognizing commonalities between different categories of things and identifying patterns help us understand these relationships and serve as the basis for our intelligence.


What is important to note is that our intelligence is built with concrete tools for some situations, like 2+2=4, and less concrete rules of thumb for others, like the golden rule: do to others what you would like others to do to you. Gigerenzer shows that our intelligence requires that we know more than one mathematical formula, and that we have more than one rule of thumb to help us approach and address complex relationships in the world. “Granted, one rule of thumb cannot possibly solve all problems; for that reason, our minds have learned a toolbox of rules. … these rules of thumb need to be used in an adaptive way.”


Whether it is interpreting statistical chance, judging the emotions of others, or making plans now that delay gratification until later, our rules of thumb don't have to be precise, but they do need to be flexible and adaptive given our current circumstances. 2+2 will always equal 4, but a smile from a family member might be a display of happiness or a nervous impulse and a silent plea for help in an awkward situation. It is our adaptive toolbox, our intelligence, that allows us to figure out what a smile means. Similarly, adaptive rules of thumb help us reduce complex interactions and questions to more manageable choices, shrinking the uncertainty about how much we need to save for retirement down to a rule of thumb that tells us to set aside a small but significant amount of each paycheck. Intelligence is not just about facts and complex math. It is about adaptable rules of thumb that help us make sense of complexity and uncertainty, and the more adaptive these rules of thumb are, the more our intelligence can help us in the complex world of today and into the uncertain future.

Probability is Multifaceted

For five years my wife and I lived in a house at the base of the lee side of a small mountain range in Northern Nevada. When a storm came through the area it had to make it over a couple of small mountain ranges and valleys before reaching our house, and as a result we experienced less precipitation than most people in the Reno/Sparks area. Now we live in a house higher up on a different mountain, more directly in the path of storms coming from the west. We receive snow at our house while my parents and family lower in the valley barely get any wind. At both houses we learned to adjust our expectations for precipitation relative to the probabilities reported by weather stations, which reference the airport on the valley floor. Our experiences with rain and snow at the two places are a useful demonstration that probability (in this case the probability of precipitation) is multifaceted, that multiple factors play a role in the probability of a given event at a given place and time.


In his book Risk Savvy, Gerd Gigerenzer writes, “Probability is not one of a kind; it was born with three faces: frequency, physical design, and degrees of belief.” Gigerenzer explains that frequency is about counting. To me, this is the most clearly understandable aspect of probability, and what we usually refer to when we discuss probability. On how many days does it usually rain in Reno each year? How frequently does a high school team from Northern Nevada win a state championship and how frequently does a team from Southern Nevada win a state championship? These types of questions simply require counting to give us a general probability of an event happening.


But probability is not just about counting and tallying events. Physical design plays a role as well. Our house on the lee side of a small mountain range was shielded from precipitation, so while it may have rained in the valley half a mile away, we didn't get any precipitation. Conversely, our current home is positioned to get more precipitation than the rest of the region. In high school sports, fewer kids live in Reno/Sparks than in the Las Vegas region, so in terms of physical design, state championships are likely to be more common for high schools in Southern Nevada. Additionally, there may be differences in the density of students at each school, meaning the north could have more schools per student than the south, also influencing the probability of a northern or southern school winning. Probability, Gigerenzer explains, can be impacted by the physical design of systems, potentially making the statistics and chances more complicated to understand.


Finally, degrees of belief play a role in how we comprehend probability. Gigerenzer states that degrees of belief include experience and personal impressions, which are very subjective. Trusting two eyewitnesses, Gigerenzer explains, rather than two people who heard about an event secondhand, can increase our sense that an unlikely story is probably accurate. Degrees of belief also show up in my experiences with rain at our two houses. I learned to discount the probability of rain at our first house and to increase my expectation of rain at the new one. If the meteorologist said there was a low chance of rain when we lived on the sheltered side of a hill, I didn't worry much about storm forecasts. At our new house, however, if there is any chance of precipitation from a storm coming in from the west, I will go pull anything out of the yard that I don't want to get wet, because I believe the chance that our specific neighborhood sees rain is higher than what the meteorologist predicted.
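
A tiny sketch can make the three faces easier to see side by side. All of the counts and the prior below are invented for illustration, and the Beta prior is just one standard way to put a number on a degree of belief; it is not something Gigerenzer prescribes.

```python
# A toy illustration (all counts invented) of the three faces of probability,
# using the rain example from this post.

# 1. Frequency: simple counting at the airport weather station.
airport_rainy, airport_days = 50, 365
print("airport frequency:", round(airport_rainy / airport_days, 3))

# 2. Physical design: the same counting at the sheltered lee-side house gives a
#    different answer, because the terrain filters out storms.
lee_rainy, lee_days = 30, 365
print("lee-side frequency:", round(lee_rainy / lee_days, 3))

# 3. Degree of belief: at the new windward house we only have 60 days of data,
#    so we blend the sparse count with a prior belief, Beta(3, 9) with mean 0.25,
#    that windward slopes catch more precipitation than the valley floor.
prior_a, prior_b = 3, 9
new_rainy, new_days = 12, 60
posterior_mean = (prior_a + new_rainy) / (prior_a + prior_b + new_days)
print("belief-adjusted estimate for the new house:", round(posterior_mean, 3))
```

The first two numbers come purely from counting, just in different physical settings; the third shifts with whatever prior experience we bring, which is exactly why two people can quote different probabilities for the same forecast.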


Probability, how we understand it, and how we consequently make decisions are complex, and Gigerenzer's explanation of the multiple facets of probability helps us better understand that complexity. Simply tallying outcomes and projecting them into the future often isn't enough to give us a good sense of the probability of a given outcome. We have to think about physical design, and we have to think about the personal experiences and subjective opinions that shape the probabilities people develop and express. Understanding probability requires holding a lot of information in our heads at one time, something humans are not great at doing, but something we can do better when we have good strategies for understanding complexity.

Avoiding Complex Decisions & Maintaining Agency

Two central ideas in the book Nudge by Cass Sunstein and Richard Thaler are that people don't like to make complex decisions and that people like to have agency. Unfortunately, these two ideas conflict with each other. If people don't like to make complex decisions, we should assume they would like experts and better decision-makers to make complex decisions on their behalf. But if people want agency in their lives, we should assume they don't want anyone making decisions for them. The solution, according to Sunstein and Thaler, is libertarian paternalism: establishing systems and structures that support complex decision-making, and designing choices to be clearer for individuals, with gentle nudges toward the decisions that lead to the outcomes the individual actually desires.


For Sunstein and Thaler, the important point is that libertarian paternalism, and nudges in general, maintain liberty. They write, “liberty is much greater when people are told, you can continue your behavior, so long as you pay for the social harm that it does, than when they are told, you must act exactly as the government says.” People resent being told what to do and losing agency. When people resist direct orders, the objective of the orders may fail completely, or violence could erupt. Neither outcome is what the government wanted when it issued the order.


The solution is part reframing and part redirecting personal responsibility for negative externalities. The approach favored by Sunstein and Thaler allows individuals to continue making bad or harmful choices as long as they recognize and accept the costs of those choices. This isn't appropriate in every situation (drinking and driving, for example), but it might be appropriate for issues like carbon taxes on corporations, cigarette taxes, or national park entrance fees. If we can pin the cost of externalities to specific individuals and behaviors, we can change the incentives people have for harmful or over-consumptive behaviors. To reach the change we want, we will have to get people to change their behavior and make complex decisions while maintaining a sense of agency as they act in ways that help us collectively reach the goals we set.
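
As a rough illustration of the "pay for the social harm" idea, here is a toy sketch. Every number in it is hypothetical and the three firms are invented; the point is only that attaching a price to the externality changes each actor's private calculation without ordering anyone to stop.

```python
# A toy sketch (all numbers invented) of pricing an externality instead of
# banning the behavior: each actor keeps the choice, but now weighs their
# private benefit against the social cost they must pay.
social_cost_per_ton = 50.0                   # hypothetical harm per ton emitted, in dollars

actors = {                                   # hypothetical private benefit per ton
    "firm_a": 80.0,                          # still worth emitting after the charge
    "firm_b": 45.0,                          # cheaper to cut back than to pay
    "firm_c": 20.0,
}

for name, benefit in actors.items():
    keeps_emitting = benefit > social_cost_per_ton
    print(name,
          "continues and pays" if keeps_emitting else "changes behavior",
          f"(net {benefit - social_cost_per_ton:+.0f} per ton if it continues)")
```

Actors whose private benefit still exceeds the priced harm carry on and pay for it; the rest change behavior on their own, which is the liberty-preserving part of the argument.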

Competing Biases

I am trying to remind myself that everyone, myself included, operates on a complex set of ideas, narratives, and beliefs that are sometimes coherent, but often conflicting. When I view my own beliefs, I am tempted to think of myself as rational and realistic. When I think of others who I disagree with, I am prone to viewing them in a simplistic frame that makes their arguments irrational and wrong. The reality is that all of our beliefs are less coherent and more complex than we typically think.


Daniel Kahneman’s book Thinking Fast and Slow has many examples of how complex and contradictory much of our thinking is, even if we don’t recognize it. One example is competing biases that manifest within us as individuals and can be seen in the organizations and larger groups that we form. We can be exaggeratedly optimistic and paralyzingly risk averse at the same time, and sometimes this tendency can actually be a good thing for us. “Exaggerated optimism protects individuals and organizations from the paralyzing effects of loss aversion; loss aversion protects them from the follies of overconfident optimism.”


On a first read, I would expect the outcome of what Kahneman describes to be gridlock. The optimist (or the optimistic part of our brain) wants to push forward with a big new idea and plan. Meanwhile, loss aversion halts decision-making and prevents new ideas from taking root. The reality, as I think Kahneman would explain, is not so much a conscious and deliberate gridlock as an unnoticed drift toward certain decisions. Optimism wins out, enthusiastically, when we see a safe bet or when a company sees an opportunity to capture rents. Loss aversion wins out when the bet isn't safe enough and we want to hoard what we already have. We don't even realize when we are making these decisions; they feel like obvious and clear directions, but in reality we are constantly being jostled between exaggerated optimism and loss aversion.


Kahneman shows that these two biases are not mutually exclusive even though they may conflict. We can act on both at the same time; we are not exclusively risk-seeking optimists or exclusively risk averse. When the situation calls for it, we apply the appropriate frame at an intuitive level. Kahneman's quote above shows that this can be advantageous for us, but throughout the book he also shows how biases in certain directions and situations can be costly for us over time as well.


We like simple and coherent narratives. We like thinking that we are one thing or another, that other people are either good or bad, right or wrong. The reality, however, is that we contain multitudes, act on competing and conflicting biases, and have more nuance and incongruity in our lives than we realize. This isn't necessarily a bad thing. We can still survive and prosper despite the complexity and incoherence of the beliefs we hold. Nevertheless, I think it is important that we acknowledge the reality we live within, rather than just believing the simple stories we like to tell ourselves.

Imagining Success Versus Anticipating Failure

I am guilty of not spending enough time planning what to do when things don't work out the way I want. I have written in the past about the importance of planning for failure and adversity, but like many others, I find it hard to sit down and think seriously about how my plans and projects might fail. Planning for resilience is incredibly important, but many of us never get around to it. Daniel Kahneman, in Thinking Fast and Slow, helps us understand why we fail to plan for failure.


He writes, “The successful execution of a plan is specific and easy to imagine when one tries to forecast the outcome of a project. In contrast, the alternative of failure is diffuse, because there are innumerable ways for things to go wrong.”


Recently, I have written a lot about the fact that our minds understand the world not by accumulating facts, understanding data, and analyzing nuanced information, but by constructing coherent narratives. The less we know and the more simplistic the information we work with, the more coherent our narratives of the world can be. When we face less uncertainty, our narrative flows more easily, feels more believable, and is more comforting to our minds. When we descend into the particulars, examine complexity, and weigh competing and conflicting information, we have to wrestle with substantial cognitive dissonance between that information and our prior beliefs, our desired outcomes, and our expectations. This is hard and uncomfortable work, and as Kahneman points out, it becomes a problem when we try to anticipate failures and roadblocks.


It is easy to project forward how our perfect plan will be executed. It is much harder to identify how different potential failure points can interact and bring the whole thing crashing down. For large and complex systems and processes, there can be so many obstacles that the exercise feels overwhelming and disconnected from reality. Nevertheless, it is important that we step outside our comfortable narrative of success and at least examine a few of the most likely mistakes and obstacles we can anticipate. Any time we spend planning ways to get the ship back on course will pay off in the future when things do go wrong. It's not easy, because the work is mentally taxing and nebulous, but if we can get ourselves to focus on the likelihood of failure rather than the certainty of success, we will have a better chance of getting where we want to be and overcoming the obstacles we will face along the way.

Subjective Gains and Losses

“Outcomes that are better than the reference points are gains. Below the reference point they are losses.”


Daniel Kahneman writes extensively in Thinking Fast and Slow about our subjective experiences of the world and about how those subjective experiences can have very serious consequences for our decisions, political stances, and beliefs about the world. One area he focuses on is reference points: our baseline beliefs and expectations about the world. As it turns out, our expectations can influence whether we think things are going well or going poorly, regardless of the actual outcomes. On top of that, we adjust our behavior based on what we expect those outcomes to be.


Kahneman continues, “When directly compared or weighted against each other, losses loom larger than gains. This asymmetry between the power of positive and negative expectations or experiences has an evolutionary history. Organisms that treat threats as more urgent than opportunities have a better chance to survive and reproduce.”


Without diving into the evolutionary psychology component of Kahneman's quote (something I normally would love to do), I want to focus on how complex our reality and decision-making become when we predict outcomes, shape our behavior in response to those predictions, and bias those predictions based on personal reference points.


In the United States, two major economic indicators used by banks, economists, and the media to judge whether we have a good or a poor economy are GDP growth and interest rates. Both measures are expressed as percentages, both have specific targets we have decided are good, and from both follow sets of decisions that we hope will move the numbers in the direction we want. What is interesting is that we hold reference points for these numbers, the percentages we believe reflect a strong and growing economy, and our subjective experience of the economy changes depending on how the outcomes compare to those reference points.


GDP growth of 1% is still growth in overall GDP, but to an economist that growth is abysmal, and action needs to be taken to get the growth rate closer to 3 or 4%. At the same time, if expectations for GDP growth were only 0.8% and we hit that same 1% outcome, we might be very happy. In both situations, our decisions and behaviors change based on the delta between our reference point and the final outcome. A gain can feel like a gain, but it can just as easily feel like a loss depending on where exactly we placed our reference point.
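
One common way to formalize this is a prospect-theory-style value function, in which outcomes are judged relative to a reference point and losses are weighted more heavily than gains. The sketch below uses the parameter estimates commonly cited from Tversky and Kahneman's 1992 paper (an exponent of 0.88 and a loss-aversion coefficient of 2.25); the GDP figures mirror the example above and are purely illustrative.

```python
# A minimal sketch of a prospect-theory-style value function, using commonly
# cited Tversky & Kahneman (1992) estimates: exponent 0.88, loss aversion 2.25.
# The same realized outcome feels like a gain or a loss depending on the
# reference point it is compared against.
ALPHA, LAMBDA = 0.88, 2.25

def subjective_value(outcome, reference):
    x = outcome - reference                      # gains and losses are relative
    if x >= 0:
        return x ** ALPHA                        # gains are valued directly
    return -LAMBDA * (-x) ** ALPHA               # losses loom roughly twice as large

gdp_growth = 1.0                                 # the realized outcome, in percent
print(subjective_value(gdp_growth, reference=0.8))   # modest expectation: feels like a gain
print(subjective_value(gdp_growth, reference=3.5))   # 3-4% target: feels like a steep loss
```

The same 1% growth scores as a small positive against a 0.8% expectation and as a large negative against a 3.5% target, which is the asymmetry the rest of this section leans on.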


Interest rates reflect similar dynamics, and they might be even more complicated because the competing interests and desires are clearer. Banks might want higher interest rates to earn more money, while people taking out loans love low rates. A 2% interest rate might feel like a huge loss to one party while simultaneously feeling like a gain to another.


This creates strange competitive dynamics, because our brains hate losses. We generally need an expected or realized gain to be about twice as large as a potential or realized loss before we will risk money or accept an outcome. If we have a certain reference point in mind for the outcome we want or would be happy with, we may need to see a large skew in the positive direction to feel happy, while even a minor loss will feel disastrous. (At this very moment in the United States this is what is taking place with the presidential election. Several journalists have noted that in December of 2019 the Democrats would have been thrilled with the election outcome we have today, but many adjusted their reference point to a Biden landslide, so a close win feels like a tragic loss, and somewhat like a win for Trump.)


Reference points feel like a simple idea, but what I hope this post shows is that they can be hugely consequential and incredibly complex, especially when multiple actors with multiple reference points interact on small and large issues. Choose your reference point carefully, try to recognize when you are operating with a particular reference point in mind, and be willing to adjust or discard it when necessary. Don't let a win get wiped away because it ended up being slightly smaller than your reference point expectation.

Understanding the Past

I am always fascinated by the idea, which continually proves true in my own life, that the more we learn about something, the more we realize how little we actually know about it. I am currently reading Yuval Noah Harari's book Sapiens: A Brief History of Humankind, and I am continually struck by how often Harari brings up events from mankind's history that I had never heard about. The more I learn about the past, or about any given subject, the more I realize how little knowledge I have ever had, and how limited, narrow, and sometimes flat-out inaccurate my understanding has been.


This is particularly important when it comes to how we think about the past. I believe very strongly that our reality and the worlds we live in and inhabit are mostly social constructions. The trees, houses, and roads are all real, but how we understand those physical objects, the spaces we operate in, and the uses we make of the material things in our world is shaped to an incredible degree by social constructions and by the relationships we build between ourselves and the world we inhabit. In order to understand these constructions, and in order to shape them for a future we want to live in (and are physiologically capable of living in), we need to understand the past and make predictions about the future with new social constructs that enable continued human flourishing.


To some extent, this feels easy and natural to us. We all have a story, and we learn and adopt family stories, national stories, and global stories about the grand arc of humanity. But while our stories seem to be shared, and while we seem to know where we are heading, we each operate on an individual understanding of the past and of where that past means we are (or should be) heading. As Daniel Kahneman writes in his book Thinking Fast and Slow, “we believe we understand the past, which implies that the future also should be knowable, but in fact we understand the past less than we believe we do.”


As I laid out at the start of this post, there is always more complexity and nuance to anything we study or are familiar with than we tend to realize. We can feel that we know something well only because we are ignorant of its nuance and complexity. When we start to really untangle something, whether it be nuclear physics, the history of the American Confederacy, or how our fruits and veggies get to the supermarket, we realize that we don't know or understand anything as well as we intuitively believe.


When we lack a deep and nuanced understanding of the past, either because we simply don't know about something or because we never received an accurate and detailed account of it, we are likely to misinterpret and misunderstand how we got to our current point. With a limited historical perspective, we will incorrectly assess where our best future lies. It is important that we recognize how limited our knowledge is and remember that those limits shape the extent to which we can make valid predictions about the future.