Rules of Thumb: Helpful, but Systematically Error Producing

The world throws a lot of complex problems at us. Even simple, mundane tasks and decisions hide a lot of complexity. Deciding what time to wake up, finding the best way to hit the grocery store and post office in a single trip, and knowing how much is appropriate to pay for a loaf of bread all rest on incredibly complex mechanisms. In figuring out when to wake up we have to consider how many hours of sleep we need, what activities we need to do in the morning, and how much time each of those activities will take while still leaving a cushion in case something runs long. In planning a shopping trip we are confronted with a version of the traveling salesman problem, a route-optimization puzzle tied to the famous P versus NP question, one of the most vexing open problems in mathematics and computer science. And the price of bread was once the object of focus for teams of Soviet economists who could not pinpoint the price for a loaf of bread that would create the right supply to match the population’s demand.
The brain handles all of these problems with relatively simple heuristics and rules of thumb, simplifying decisions so that we don’t spend the whole night doing math to find the perfect alarm time, don’t lose the entire day calculating the best route to run our errands, and don’t burn enormous brain power trying to set bread prices. We set a standard alarm time and make small adjustments, knowing we ought to leave the house ready for work by a certain time to reduce the risk of being late. We stick to main roads and travel familiar routes, eliminating the thousands of right- or left-turn alternatives we could choose from. We rely on open markets to determine the price of bread rather than setting a universal standard.
Rules of thumb are necessary in a complex world, but that doesn’t mean they are without their own downfalls. As Quassim Cassam writes in Vices of the Mind, echoing Daniel Kahneman’s Thinking, Fast and Slow, “We are hard-wired to use simple rules of thumb (‘heuristics’) to make judgements based on incomplete or ambiguous information, and while these rules of thumb are generally quite useful, they sometimes lead to systematic errors.” Rules of thumb are useful but inadequate: they can create predictable, reliable errors and mistakes. Our thinking can be distracted by meaningless information, we can miss important factors, and we can fail to be open to improvements or alternatives that would make our decision-making better.
What is important to recognize is that the systematic and predictable errors that come from rules of thumb can be corrected. If we know where errors and mistakes are likely to arise, then we can take steps to mitigate and reduce them. We can be confident in rules of thumb and heuristics that simplify decisions in positive ways while being skeptical of rules of thumb that we know are likely to produce errors, biases, and inaccurate judgements and assumptions. Companies, governments, and markets do this all the time, though not always in a neat, step-by-step process (sometimes it is one step forward and two steps back), leading to progress over time. Embracing the usefulness of rules of thumb while acknowledging their shortcomings is a powerful way to improve decision-making while avoiding the cognitive downfalls of heuristics.
Aspiration Rules

My last post was all about satisficing: making decisions based on alternatives that satisfy our wants and needs and are good enough, even if they are not the absolute best option. Satisficing contrasts with maximizing. When we maximize, we search for the single best alternative, the one from which no further gains can be squeezed. Maximizing is certainly a great goal in theory, but in practice it can be worse than satisficing. As Gerd Gigerenzer writes in Risk Savvy, “in an uncertain world, there is no way to find the best.” Satisficing and using aspiration rules, he argues, is the best way to make decisions and navigate our complex world.

 

“Studies indicate that people who rely on aspiration rules tend to be more optimistic and have higher self-esteem than maximizers. The latter excel in perfectionism, depression, and self-blame,” Gigerenzer writes. Aspiration rules differ from maximizing because the goal is not to find the absolute best alternative, but to find an alternative that meets basic, pre-defined, reasonable criteria. Gigerenzer uses the example of buying pants in his book. A maximizer may spend the entire day going from store to store, checking all their options, trying every pair of pants, and comparing prices at each store until they have found the absolute best pair available for the lowest cost and best fit. Yet at the end of the day they still won’t truly know that they found the best option; there will always be the possibility that they missed a store or a deal someplace else. In contrast to the maximizer, an aspirational shopper would go into a store looking for a certain style at a certain price. If they found a pair of pants that fit right and was within the right price range, they could be satisfied and make a purchase without having to check every store and without having to wonder whether they could have gotten a better deal elsewhere. They had basic aspirations that they could reasonably meet and be satisfied. The sketch below shows the difference in miniature.
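
To make the contrast concrete, here is a minimal sketch in Python, with entirely hypothetical pants data and scoring, of how the two strategies differ: the maximizer cannot decide until it has examined every option, while the aspiration rule stops at the first option that meets its pre-set criteria.

# Minimal sketch (hypothetical data and scoring) of maximizing vs. an aspiration rule.
pants = [
    {"store": "A", "style": "slim", "fit": 7, "price": 80},
    {"store": "B", "style": "slim", "fit": 8, "price": 45},
    {"store": "C", "style": "slim", "fit": 9, "price": 120},
]

def maximize(options):
    # Must examine every option before it can name the "best" fit-per-dollar pair.
    return max(options, key=lambda p: p["fit"] / p["price"])

def aspiration_rule(options, style="slim", min_fit=7, max_price=60):
    # Stops at the first option that clears the pre-defined aspirations.
    for p in options:
        if p["style"] == style and p["fit"] >= min_fit and p["price"] <= max_price:
            return p
    return None  # nothing met the aspiration level; relax the criteria and try again

print(maximize(pants)["store"])          # "B", but only after checking every store
print(aspiration_rule(pants)["store"])   # "B", found after checking just two options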

 

Maximizers set unrealistic goals and expectations for themselves while those using aspiration rules are able to set more reasonable, achievable goals. This demonstrates the power and utility of satisficing. Decisions have to be made, otherwise we will be wandering around without pants as we try to find the best possible deal. We will forego opportunities to get lunch, meet up with friends, and do whatever it is we need pants to go do. This idea is not limited to pants and individuals. Businesses, institutions, and nations all have to make decisions in complex environments. Maximizing can be a path toward paralysis, toward CYA behaviors (cover your ass), and toward long-term failure. Start-ups that can satisfice and make quick business decisions and changes can unseat the giant that attempts to maximize every decision. Nations focused on maximizing every public policy decision may never actually achieve anything, leading to civil unrest and a loss of support. Institutions that can’t satisfice also fail to meet their goals and missions. Allowing ourselves and our larger institutions to set aspiration rules and satisfice, all while working to incrementally improve with each step, is a good way to actually move toward progress, even if it doesn’t feel like we are getting the best deal in any given decision.

 

The aspiration rules we use can still be high, demanding great performance, and can drive us toward excellence. Another key difference between using aspiration rules and maximizing is that aspiration rules can be personalized and tailored to the realistic circumstances we find ourselves in. That means we can create SMART goals for ourselves by using aspiration rules. Specific, measurable, achievable, realistic, and time-bound goals have more in common with a satisficing mentality than with goals that align with maximizing strategies. Maximizing doesn’t recognize our constraints and challenges, and may leave us feeling inadequate when we don’t become president, don’t have a larger house than our neighbors, and aren’t famous celebrities. Aspiration rules, on the other hand, can help us set goals that we can realistically achieve within reasonable timeframes, helping us grow and actually reach our goals.
Satisficing

Satisficing gets a bad rap, but it isn’t actually that bad a way to make decisions, and it realistically accommodates the constraints and challenges that decision-makers face in the real world. None of us would like to admit when we are satisficing, but the reality is that we are happy to satisfice all the time, and we are often happy with the results.

 

In Risk Savvy, Gerd Gigerenzer recommends satisficing when trying to choose what to order at a restaurant. Regarding this strategy for ordering, he writes:

 

“Satisficing: This … means to choose the first option that is satisfactory; that is, good enough. You need the menu for this rule. First, you pick a category (say, fish). Then you read the first item in this category, and decide whether it is good enough. If yes, you close the menu and order that dish without reading any further.”

 

Satisficing works because we often have more possibilities than we have time to carefully weigh and consider. If you have never been to the Cheesecake Factory, reading each option on the menu for the first time would probably take you close to 30 minutes. If you are eating on your own and don’t have any time constraints, then sure, read the whole menu, but the staff will probably be annoyed with you. If you are out with friends or on a date, you probably don’t want to take 30 minutes to order, and you will feel pressured to make a choice relatively quickly without having full knowledge and information regarding all your options. Satisficing helps you make a selection that you can be relatively confident you will be happy with given some constraints on your decision-making.
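
For what it’s worth, the menu rule Gigerenzer describes is simple enough to write down as a few lines of Python. This is just an illustrative sketch: the menu and the “good enough” test (here, a price within budget) are made up.

# Rough sketch of the menu rule: pick a category, order the first item that is good enough.
menu = {
    "fish": [("grilled salmon", 24), ("fish tacos", 16), ("seared tuna", 28)],
    "pasta": [("carbonara", 18), ("pesto gnocchi", 17)],
}

def order(menu, category, budget):
    for dish, price in menu[category]:
        if price <= budget:   # the satisficing test: is this good enough?
            return dish       # stop reading and close the menu
    return None               # nothing satisfied; pick a different category

print(order(menu, "fish", budget=25))  # "grilled salmon", chosen without comparing the other fish dishes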

 

The term satisficing was coined by the Nobel Prize-winning political scientist and economist Herbert Simon, and I remember a professor of mine telling a story about Simon’s decision to remain at Carnegie Mellon University in Pittsburgh. When asked why he hadn’t taken a position at Harvard or another prestigious Ivy League school, Simon replied that his wife was happy in Pittsburgh, and while Carnegie Mellon wasn’t as renowned as Harvard, it was still a good school and still offered him enough of what he wanted to stay. In other words, Carnegie Mellon satisfied his basic needs and met his criteria in enough areas to make him happy, even though a school like Harvard would have maximized his prestige and influence. Simon was satisficing.

 

Without always recognizing it, we turn to satisficing for many of our decisions. We often can’t buy the perfect home (because of timing, price, and other bidders), so we satisfice and buy the first home we can get a good offer on that meets enough of our desires, even if it doesn’t fit them perfectly. The same goes for jobs, cars, where we are going to get take-out, what movie we want to rent, what new clothes to buy, and more. Carefully analyzing every potential decision we have to make can be frustrating and exhausting. We will constantly doubt whether we made the best choice, and we may be too paralyzed to make a decision in the first place. If we satisfice, however, we accept that we are not making the best choice but are instead making an adequate choice, one that satisfies the greatest number of our needs while simplifying the choice we have to make. We can live with what we get and move on without the constant doubt and loss of time that we might otherwise experience. Satisficing, despite getting a bad rap from those who favor strict rationality in all instances, is actually a pretty good decision-making heuristic.
Gut Decisions

“Although about half of professional decisions in large companies are gut decisions, it would probably not go over well if a manager publicly admitted, I had a hunch. In our society, intuition is suspicious. For that reason, managers typically hide their intuitions or have even stopped listening to them,” Gerd Gigerenzer writes in Risk Savvy.

 

The human mind evolved first in small tribal bands trying to survive in a dangerous world. As our tribes grew, our minds evolved to become more politically savvy, learning to intuitively hide our selfish ambitions and to appear honest and altruistic. This pushed our brains toward more complex activity that takes place outside our direct consciousness, hiding in gut feelings and intuitions. Today, however, we don’t trust those intuitions and gut decisions, even though they never left us.

 

We do have good reason to discount intuitions. Our minds did not evolve to serve us perfectly in a complex, data-rich world full of uncertainty. Our brains are plagued by motivated reasoning, biases, and cognitive limitations. Making gut decisions can leave us vulnerable to these mental challenges, which leads us to distrust our intuitions.

 

However, this doesn’t mean we have escaped gut decisions. Gerd Gigerenzer thinks that is actually a good thing, especially if we have developed years of insight and expertise through practice and real-life training. Gigerenzer argues that we still make many gut decisions in areas as diverse as vacation planning, daily exercise, and corporate strategy. We just don’t admit that we are making decisions based on intuition rather than careful statistical analysis. Taking it a step further, Gigerenzer suggests that most of the time we make a decision at a gut level and produce reasons after the fact. We rationalize and use motivated reasoning to explain why we made a decision, deceiving ourselves into believing that we always intended to do the rational calculation first and that we really hadn’t made up our mind until after we had done so.

 

Gigerenzer suggests that we acknowledge our gut decisions. Ignoring them and pretending they are not influential wastes our time and costs us money. An executive may have an intuitive sense of what to do in terms of a business decision, but may be reluctant to say they made a decision based on intuition. Instead, they spend time doing an analysis that didn’t need to be done. They create reasons to support their decision after the fact, again wasting time and energy that could go into implementing the decision that has already been made. Or an executive may bring in a consulting firm, hoping the firm will come up with the same answer that they got from their gut. Time and money are both wasted, and the decision-making and action-taking structures of the individual and organization are gummed up unnecessarily. Acknowledging gut decisions and moving forward more quickly, Gigerenzer seems to suggest, is better than rationalizing and finding support for gut decisions after the fact.
Who to Fear: Public vs Private Choice Architects

A question that Cass Sunstein and Richard Thaler raise in their book Nudge is whether we should worry more about public or private sector choice architects. A choice architect is anyone who influences the decision space of another individual or group. Your office’s HR person in charge of health benefits is a choice architect. The people at Twitter who decided to increase the character length of tweets are choice architects. The government bureaucrat who designs the form you use to register to vote is also a choice architect. The way each of these individuals or teams structures other people’s choices will influence the decisions and behaviors of the people in those choice settings.

 

In the United States, we often see a split between public and private that feels more concrete than the divide truly is. Often, we fall dramatically on one side of the imagined divide, either believing everything needs to be handled by businesses, or thinking that businesses are corrupt and self-interested and that government needs to step in to monitor almost all business actions. The reality is that businesses and government agencies overlap and intersect in many complex ways, and choice architects in both sectors influence the public and each other. Regardless of what you believe and which side you fall on, both kinds of choice architects need to be taken seriously.

 

“On the face of it, it is odd to say that the public architects are always more dangerous than the private ones. After all, managers in the public sector have to answer to voters, and managers in the private sector have as their mandate the job of maximizing profits and share prices, not consumer welfare.”

 

Sunstein and Thaler suggest that we should be concerned about private sector choice architects because they are ultimately accountable for company growth and shareholder value, rather than for what is in the best interest of individuals. When conflicts arise between what is best for people and what is best for a company’s bottom line, there could be pressure on the choice architect to use nudges to help the bottom line rather than to help people make the best decisions possible.

 

However, the public sector is not free from perverse incentives simply by being elected, being accountable to the public, or being free from profit motives. Sunstein and Thaler continue, “we agree that government officials, elected or otherwise, are often captured by private-sector interests whose representatives are seeking to nudge people in directions that will specifically promote their selfish goals.” The complex interplay of government and private companies means that even the public sector is not a space purely dedicated to public welfare. The general public doesn’t have the time, attention, energy, or financial resources to influence public sector choice architects in the ways that the private sector does. And if private sector interests shape choice structures through elected officials, they can create a sense of legitimacy for ultimately selfish decisions. Of course, public sector choice architects may also be more interested in keeping their jobs or winning reelection, promoting their own selfish goals for reasons of self-preservation.

 

We can’t think of either public sector or private sector actors as being more trustworthy or responsible than the other. Often, they overlap and influence each other, shifting the incentives and opinions of the public and of actors in both sectors simultaneously. Sunstein and Thaler suggest that this is a reason for maintaining as much freedom of choice as possible. The more people retain the ability to make their own choices, even if they are nudged, the more we can limit the impact of self-serving choice architects, whether they are in the public or private sector.
Selfish Choice Architects

“So let’s go on record,” write Cass Sunstein and Richard Thaler in their book Nudge, “as saying that choice architects in all walks of life have incentives to nudge people in directions that benefit the architects (or their employers) rather than the users.”

 

Choice architects are those who design, organize, or provide decision situations to individuals. Whether it is the person who determines the order for food on the buffet line, the human resources manager responsible for selecting health insurance plans for the company, or a bureaucrat who designs an enrollment form, anyone who influences a situation where a person makes a decision or a choice is architecting that choice. Their decisions will influence the way that people understand their choice and the choices that they actually make.

 

As the quote above notes, there can always be pressure for a choice architect to design the decision space in a way that advances their own desires or needs as opposed to advancing the best interest of the individual making a given choice. Grocery stores adjust their layouts with the hopes that displays, sales, or conveniently located candy will get customers to purchase things they otherwise wouldn’t purchase. A company could skimp on health benefits and present confusing plans to employees with hurdles preventing them from utilizing their benefits, saving the company money while still appearing to have generous benefits. A public agency could design a program that meets a political objective and makes the agency head look good, even if it gives up actual effectiveness in the process and doesn’t serve citizens well.

 

Nudges are useful, but they have the capacity to be nefarious. A buffet manager might want patrons to fill up on cheap salad and eat less steak, so that the buffet does better on its margins. Placing multiple cheap salads at the front of the line, and not allowing people to jump straight to the steak, is a way to nudge people toward eating cheaper food. Sunstein and Thaler acknowledge the dark side of nudges in their book and encourage anyone who is a choice architect to avoid it. Indulging that dark side, they warn, risks breeding cynicism and, in the long run, is likely to create problems when employee, customer, or citizen trust and buy-in are needed.

Avoiding Complex Decisions & Maintaining Agency

Two central ideas in the book Nudge by Cass Sunstein and Richard Thaler are that people don’t like to make complex decisions and that people like to have agency. Unfortunately, these two ideas conflict with each other. If people don’t like to make complex decisions, then we should assume they would like experts and better decision-makers to make complex decisions on their behalf. But if people want agency in their lives, we should assume they don’t want anyone making decisions for them. The solution, according to Sunstein and Thaler, is libertarian paternalism: establishing systems and structures that support complex decision-making and designing choices to be clearer for individuals, with gentle nudges toward the decisions that will lead to the outcomes the individual actually desires.

 

For Sunstein and Thaler, the important point is that libertarian paternalism, and nudges in general, maintain liberty. They write, “liberty is much greater when people are told, you can continue your behavior, so long as you pay for the social harm that it does, than when they are told, you must act exactly as the government says.” People resent being told what to do and losing agency. When people resist direct orders, the objective of the orders may fail completely, or violence could erupt. Neither outcome is what the government wanted when it issued the order.

 

The solution is part reframing and part redirecting personal responsibility for negative externalities. The approach favored by Sunstein and Thaler allows individuals to continue making bad or harmful choices as long as they recognize and accept the costs of those choices. This isn’t appropriate in all situations (like drinking and driving), but it might be appropriate with regard to issues like carbon taxes on corporations, cigarette taxes, or national park entrance fees.  If we are able to pin the cost of externalities to specific individuals and behaviors, we can change the incentives that people have for harmful or over-consumptive behaviors. To reach the change we want, we will have to get people to change their behavior, make complex decisions, and maintain a sense of agency as they act in ways that will help us as a collective reach the goals we set.
Collective Conservatism

Groupthink is one of the most dangerous phenomena our world faces today. Families, companies, and governments can all find themselves stuck in groupthink, unable to adapt to a world that no longer fits the model and expectations behind their traditional thinking. When everyone follows the same thought processes, and members of the group discount the same information while adopting a uniform perspective, the world of possibilities becomes limited.

 

In Nudge, authors Cass Sunstein and Richard Thaler write about a particular element that is common when groupthink takes hold: collective conservatism. While discussing groups that follow tradition, the authors write,

 

“We can see here why many groups fall prey to what is known as collective conservatism: the tendency of groups to stick to established patterns even as new needs arise. Once a practice (like wearing ties) has become established, it is likely to be perpetuated, even if there is no particular basis for it.”

 

In a family household, collective conservatism might take the form of a specific way to fold towels. Perhaps towels had to be folded a certain way to fit a space in a previous house, and the tradition has continued even though they no longer need to be folded just right for the space. Nothing is really lost by folding towels just so, but it might be time-consuming to make sure they are folded to fit a constraint that no longer exists.

 

Within companies and governments, however, collective conservatism can be more consequential than the time and effort involved in folding towels. A company that cannot adjust supply chains, cannot adjust its business model in response to competition, and cannot improve workspaces to meet new employee expectations is likely to be overtaken by a start-up more in tune with new social, technological, and cultural business trends. For a government, failures to adjust to technological change and shifting employee motivations are also risks, as are changes in international relations, social needs, and more. Being stuck in a mindset that cannot see these changes and cannot respond to them is dangerous because people’s actual lives, and the services and supports they need, could be in jeopardy. Collective conservatism feels safe to those who are in decision-making roles and who know what worked in the past. However, it is a form of groupthink that can lead to inept operations and strategies, which can be economically costly and have negative impacts on people’s real lives.
Framing and Nudges

“Framing works because people tend to be somewhat mindless, passive decision makers,” write Cass Sunstein and Richard Thaler in their book Nudge. “Their Reflective System does not do the work that would be required to check and see whether reframing the question would produce a different answer.”

 

Framing is an important rhetorical tool. We can frame things as gains or losses, reference numbers as percentages or as whole numbers, and compare phenomena to small classes or to larger populations. Framing can include elements of good or evil, morality or sin, responsibility toward one’s family or individual greed. Depending on what we want people to do or how we want them to behave, we can adjust the way we frame a situation or decision to influence people in certain directions. Framing is not 100% effective at making people do what we want, but it can be a helpful way to nudge people toward certain decisions.

 

Sunstein and Thaler present an example of using framing to nudge people to conserve energy. They write,

 

“Energy conservation is now receiving a lot of attention, so consider the following information campaigns: (a) If you use energy conservation methods, you will save $350 per year; (b) If you do not use energy conservation methods, you will lose $350 per year. It turns out that information campaign (b), framed in terms of losses, is far more effective than information campaign (a). If the government wants to encourage energy conservation, option (b) is a stronger nudge.”

 

It is not the case that everyone who sees a message touting the money saved by conserving energy will do nothing, while everyone who sees a message about the money they will lose takes action. Some people will be motivated to act by the prospect of saving $350 per year, and some people won’t be moved by the prospect of losing $350. However, on average, more people who see the loss-framed message will decide to take action. People tend to feel losses more strongly than they value equivalent gains, so framing energy conservation as preventing a loss will motivate more people than framing it as leading to a gain.

 

This small shift in framing changes how buying energy-efficient light bulbs or resealing windows is perceived, from a costly investment to a practical strategy for avoiding further losses. Framing in this example is a simple nudge: it isn’t a form of mind control, but it plays on existing human biases and encourages people to make decisions that are better for them individually and for society collectively. I would argue that framing is a necessary and unavoidable choice. Messages are necessarily context dependent, and trying to strip out any particular framing can make a message useless; at that point you might as well not have a message at all. Given that framing is unavoidable and that some outcomes are preferable to others, choice architects should think carefully about framing and employ frames in ways that encourage the best possible decisions for the most people.
Nudges for Unrealistic Optimism

Our society makes fun of the unrealistic optimist all the time, but the reality is that most of us are unreasonably optimistic in many aspects of our lives. We might not all believe that we are going to receive a financial windfall this month, that our favorite sports team will go from losing almost all their games last year to winning the championship this year, or that everyone in our family will suddenly be happy, but we still manage to be more optimistic about most things than is reasonable.

 

Most people believe they are better-than-average drivers, even though it is statistically impossible for nearly everyone to be right about that (at most half of us can be above the median). Most of us probably think we will get a promotion or raise sooner rather than later, and most of us probably think we will live to be 100 and won’t get cancer, go bald, or be in a serious car crash (after all, we are all above-average drivers, right?).

 

Our overconfidence is often necessary for daily life. If you are in sales, you need to be unrealistically optimistic that you are going to get a big sale, or you won’t continue to pick up the phone for cold calls. We would all prefer the surgeon who is more on the overconfident side than the surgeon who doubts their ability and asks us if we finalized our will before going into the operating room. And even just for going to the store, doing a favor for a neighbor, or paying for sports tickets, overconfidence is a feature, not a bug, of our thinking. But still, there are times where overconfidence can be a problem.

 

The year 2020 is an excellent example. If we all think, “I’m not going to catch COVID,” then we are less likely to take precautions and more likely to actually catch the disease. This is where helpful nudges can come into play.

 

In Nudge, Cass Sunstein and Richard Thaler write, “If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge. In fact, we have already mentioned one possibility: if people are reminded of a bad event, they may not continue to be so optimistic.”

 

Reminding people of others who have caught COVID might help encourage them to take appropriate safety precautions. Reminding a person who wants to trade stocks of their previous poor decisions might encourage them to make better investment choices rather than trying their hand at day trading. A quick pop-up from a website blocker might encourage someone not to risk checking social media while they are supposed to be working, saving them from the one time their supervisor walks by while they are scrolling through someone’s profile. Overconfidence may be necessary for us, but it can lead to risky behavior and can have serious downsides. If slight nudges can help steer people away from the catastrophic consequences of unrealistic optimism, then they should be employed.