Science, Money, & Human Activities


The world of science prides itself on objectivity. Our scientific measurements should be objective, free from bias, and repeatable by any person in any place. The conclusions of science should likewise be objective, clear, and understandable from the outside. We want science to be open, discussed, and the implications of results rigorously debated so that we can make new discoveries and develop new knowledge to help propel humanity forward.
 
 
“But science is not an enterprise that takes place on some superior moral or spiritual plane above the rest of human activity,” writes Yuval Noah Harari in his book Sapiens. Science may strive for objectivity and independence, but it still takes place in the human world and is conducted by humans. Additionally, “science is a very expensive affair … most scientific studies are funded because somebody believes they can help attain some political, economic, or religious goal,” continues Harari.
 
 
No matter how much objectivity and independence we try to imbue into science, human activities influence what, how, and when science is done. The first obstacle, as Harari notes, is money. Deciding to fund something always involves a political decision of some kind. Whether an individual or a collective is looking to fund something, there is always a choice between the ways the final dollars could be used. Funding could go toward a vaccine for a disease that predominantly impacts poor people in a country far away. Funding could go toward a scientific instrument that could help address climate change. Or funding could go toward a really cool laser that doesn’t have any immediate and obvious uses, but which would be really cool. Depending on political goals, individual donor desires, and a host of other factors, different science could be funded and conducted. The cost of science means that it will always in some ways be tied to human desires, which means biases will always creep into the equation.
 
 
It is important to note that science is built with certain elements to buffer the research, results, findings, and conclusions from bias. Peer review, for example, limits the publication of studies that are not done in good faith or that make invalid conclusions. But still, science takes place in society and culture and is conducted by humans. What those individual humans choose to study and how they understand the world will influence the ways in which they choose and design studies. This means that bias will still creep into science, in terms of determining what to study and how it will be studied. Early materials scientists working with plastics were enthusiastic about studies that developed new plastics with new uses, whereas today materials scientists may be more likely to study the harms of plastics and plastic waste. Both fields of research can produce new knowledge, but with very different consequences for the world, stemming from the different cultural biases of the human researchers.
 
 
This is not to say that science cannot be trusted and should not be supported by individuals and collectives. Science has improved living standards for humans across the globe and solved many human problems. We need to continue pushing forward with new science to continue to improve living standards, and possibly just to maintain existing living standards and expectations. Nevertheless, we do have to be honest and acknowledge that science does not exist in a magical space free from bias and other human failings.
Level Two Chaos & History


“It is an iron law of history that what looks inevitable in hindsight was far from obvious at the time,” writes Yuval Noah Harari in his book Sapiens. History seems pretty clear when we look backwards. It is certainly complex, but whether it is our own lives, a sports season, or the rise and fall of an empire, we generally do a pretty good job of creating a compelling narrative to explain how history unfolded and why certain events took place. But these narratives create what we call hindsight bias, where past events (in this case the course of human history) appear nearly deterministic. The reality is that small changes could shape the course of history in dramatic ways, and that the future is never clear at any point – as our current uncertainty about social media, climate change, and political polarization demonstrates. Harari continues, “in a few decades, people will look back and think that the answers to all of these questions were obvious,” but for us, the right answers are certainly far from obvious.
 
 
History, for human beings, is shaped by the innumerable decisions that we make every day. The course of history is not deterministic, but is instead chaotic. Harari argues that history is a level two chaotic system and writes, “chaotic systems come in two shapes. Level one chaos is chaos that does not react to predictions about it … level two chaos is chaos that reacts to predictions about it, and therefore can never be predicted accurately.”
 
 
The weather is a level one chaotic system because it doesn’t respond (on a day-to-day basis) to our predictions. Whether our computer models suggest a 1%, 45%, or 88% chance of rain on a given day doesn’t change what is going to happen in the atmosphere and whether it will or will not rain. Despite our sometimes magical thinking, scheduling a wedding or stating our hopes for or against certain weather patterns does not influence the actual weather.
 
 
History is not a level one chaotic system. History is shaped by elections, the general beliefs within the public, the actions that people take, and how key actors understand risk and probability. Our predictions can greatly influence all of these areas. Predicting a landslide victory in an election can demotivate the losing side of a political divide, possibly turning what could have been a marginal victory into the predicted landslide, a sort of self-fulfilling prophecy. Reporting that certain beliefs are becoming more or less prevalent among a population can influence the rate and direction of changing beliefs as people hear the predictions about belief trends (this seems to have happened as marijuana and gay marriage gained acceptance across the US). Our reactions to predictions can influence the final outcomes, contributing more uncertainty and chaos to the system.
 
 
We cannot predict exactly how people will react to predictions about the systems they participate in. This makes predictions and forecasts more challenging, since they have to incorporate different levels of response to various inputs. History cannot be thought of deterministically, because so many small factors could have changed the outcomes, and those small changes in turn would have influenced exactly what predictions were made, in turn influencing the reactions of the people involved in the various events of history. Our confidence in understanding history and why it played out as it did is not warranted; it is simply hindsight bias at work.
Accepting Imaginary Orders


“Most people do not wish to accept that the order governing their lives is imaginary, but in fact every person is born into a preexisting imagined order, and his or her desires are shaped from birth by its dominant myths,” writes Yuval Noah Harari in Sapiens. There is an incredibly wide range of possibilities for how we could live our lives. Throughout history, humans have lived in many different ways – in small tribal bands on tropical islands, in kingdoms ruled by divinely anointed tyrants, and in large cities across representative democracies. Truly thinking about why people have lived in such different ways and how it would be best for people to live today is difficult. It is much easier to accept the imagined order that directs modern society than to constantly question every decision and every possible way of life.

However, none of us want to appear as though we simply accept the way things are and only marginally change the world around us for a better fit. We want to believe that we have agency, that we chose to live in the world the way it is, and that the societies our families exist within are not organized in a random way, but are organized by rational and reasonable principles. We want to believe that we are constantly striving for a better way of living and that we have carefully thought through what needs to be done to reach the best social and economic order possible for humans.

There is substantial evidence to suggest that Harari is correct, and that we accept the myths we are born into rather than reach conclusions about the nature of reality and society after careful consideration and investigation. Most people adopt the religious or political beliefs of their family. This isn’t to discount people as unthinking or uncritical; instead, it demonstrates that there are many pressures on, and advantages to, maintaining beliefs that are consistent with one’s family. Additionally, terms such as conservative or liberal don’t have any stable meaning. People do not give consistent answers for what those terms mean, and it is easy to take those terms and demonstrate that many items within the platforms of Republicans, Democrats, and random people on the street seem to contradict the ideas of conservatism or liberalism.

It is much more plausible that people are signaling to the dominant group of their time and trying to fit in than that they are carefully thinking about the order that guides their societies and lives. The evidence does not suggest people are actively choosing how to live or what order to support based on careful judgment. It does happen, but most of us accept the myths we are born into and are ultimately shaped by those myths. Our lives are organized and our actions are mobilized by myths such as religious ideas, political systems, human rights, and other institutions that we may not even be aware of.

Causal Illusions - The Book of Why


In The Book of Why Judea Pearl writes, “our brains are not wired to do probability problems, but they are wired to do causal problems. And this causal wiring produces systematic probabilistic mistakes, like optical illusions.” This creates problems for us when no causal link exists and when data correlate without any causal connection between outcomes. According to Pearl, our causal thinking “neglects to account for the process by which observations are selected.” We don’t always realize that we are taking a sample, that our sample could be biased, and that structural factors independent of the phenomenon we are trying to observe could greatly impact the observations we actually make.
Pearl continues, “We live our lives as if the common cause principle were true. Whenever we see patterns, we look for a causal explanation. In fact, we hunger for an explanation, in terms of stable mechanisms that lie outside the data.” When we see a correlation, our brains instantly start looking for a causal mechanism that can explain the correlation and the data we see. We don’t often look at the data itself to ask if there was some type of process in the data collection that led to the outcomes we observed. Instead, we assume the data is correct and that it reflects an outside, real-world phenomenon. This is the source of many of the causal illusions that Pearl describes in the book. Our minds are wired for causal thinking, and we will invent causality when we see patterns, even if there truly isn’t a causal structure linking the patterns we see.
It is in this spirit that we attribute negative personality traits to people who cut us off on the freeway. We assume they don’t like us, that they are terrible people, or that they are rushing to the hospital with a sick child, so that our being cut off has a satisfying causal explanation. When a particular type of car stands out and we start seeing that car everywhere, we misattribute our increased attention to the type of car and assume that there really are more of those cars on the road now. We assume that people find them more reliable or more appealing and that people purposely bought those cars, a causal mechanism that explains why we now see them everywhere. In both of these cases we are creating causal pathways in our minds that are in reality little more than causal illusions, but we want to find a cause for everything, and we don’t always realize that we are doing so. It is important that we be aware of these causal illusions when making important decisions, that we think about how the data came to mind, and that we consider whether a causal illusion or cognitive error is at play.
Rules of Thumb: Helpful, but Systematically Error Producing


The world throws a lot of complex problems at us. Even simple and mundane tasks and decisions hold a lot of complexity behind them. Deciding what time to wake up, the best way to visit the grocery store and post office in a single trip, and how much is appropriate to pay for a loaf of bread all have incredibly complex mechanisms behind them. In figuring out when to wake up we have to consider how many hours of sleep we need, what activities we need to do in the morning, and how much time each of those activities will take while still providing a cushion in case something runs long. In planning a shopping trip we are confronted with a version of the traveling salesman problem, part of the class of problems at the heart of the famous P versus NP question. And the price of bread was once the object of focus for teams of Soviet economists who could not pinpoint the right price for a loaf of bread that would create the right supply to match the population’s demand.
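The errand-route problem really is an instance of the traveling salesman problem, and the combinatorial explosion is easy to see in a few lines. The sketch below is purely illustrative (the stops and coordinates are made up): brute force is fine for two errands, but the number of orderings grows factorially with the number of stops.

```python
from itertools import permutations
from math import dist, factorial

# Hypothetical coordinates for home plus two errand stops.
stops = {"home": (0, 0), "grocery": (3, 4), "post_office": (6, 0)}

def route_length(order):
    """Total distance of a round trip from home through the stops in order."""
    points = [stops["home"]] + [stops[s] for s in order] + [stops["home"]]
    return sum(dist(a, b) for a, b in zip(points, points[1:]))

errands = [s for s in stops if s != "home"]

# Brute force: check every possible ordering of the errands.
best = min(permutations(errands), key=route_length)
print(best, round(route_length(best), 2))

# With 2 errands there are only 2 orderings to check, but the count
# grows factorially: 10 errands already means 3,628,800 orderings.
print(factorial(10))  # 3628800
```

This is exactly why we fall back on rules of thumb like "stick to main roads": exhaustive optimization stops being feasible almost immediately.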
The brain handles all of these problems with relatively simple heuristics and rules of thumb, simplifying decisions so that we don’t waste the whole night doing math problems to find the perfect time to set an alarm, don’t miss the entire day trying to calculate the best route to run all our errands, and don’t waste tons of brain power trying to set bread prices. We set a standard alarm time and make small adjustments, knowing that we ought to leave the house ready for work by a certain time to reduce the risk of being late. We stick to main roads and travel similar routes to get where we need to go, eliminating the thousands of right- or left-turn alternatives we could choose from. We rely on open markets to determine the price of bread without setting a universal standard.
Rules of thumb are necessary in a complex world, but that doesn’t mean they are without their own downfalls. As Quassim Cassam writes in Vices of the Mind, echoing Daniel Kahneman’s Thinking, Fast and Slow, “We are hard-wired to use simple rules of thumb (‘heuristics’) to make judgements based on incomplete or ambiguous information, and while these rules of thumb are generally quite useful, they sometimes lead to systematic errors.” Useful but inadequate rules of thumb can create predictable and reliable errors and mistakes. Our thinking can be distracted by meaningless information, we can miss important factors, and we can fail to be open to improvements or alternatives that would make our decision-making better.
What is important to recognize is that systematic and predictable errors from rules of thumb can be corrected. If we know where errors and mistakes are systematically likely to arise, then we can take steps to mitigate and reduce those errors. We can be confident in rules of thumb and heuristics that simplify decisions in positive ways while being skeptical of rules of thumb that we know are likely to produce errors, biases, and inaccurate judgements and assumptions. Companies, governments, and markets do this all the time, though not always in a neat step-by-step process (sometimes there is one step forward and two steps backward), leading to progress over time. Embracing the usefulness of rules of thumb while acknowledging their shortcomings is a powerful way to improve decision-making while avoiding the cognitive downfalls of heuristics.
Aspiration Rules


My last post was all about satisficing: making decisions based on alternatives that satisfy our wants and needs and that are good enough, but may not be the absolute best option. Satisficing contrasts with the idea of maximizing. When we maximize, we search for the single best alternative, the one from which no additional improvement can be gained. Maximizing is certainly a great goal in theory, but in practice, maximizing can be worse than satisficing. As Gerd Gigerenzer writes in Risk Savvy, “in an uncertain world, there is no way to find the best.” Satisficing and using aspiration rules, he argues, is the best way to make decisions and navigate our complex world.

 

“Studies indicate that people who rely on aspiration rules tend to be more optimistic and have higher self-esteem than maximizers. The latter excel in perfectionism, depression, and self-blame,” Gigerenzer writes. Aspiration rules differ from maximizing because the goal is not to find the absolute best alternative, but to find an alternative that meets basic, pre-defined, and reasonable criteria. Gigerenzer uses the example of buying pants in his book. A maximizer may spend the entire day going from store to store, checking all their options, trying every pair of pants, and comparing prices at each store until they have found the absolute best pair available for the lowest cost and best fit. However, at the end of the day, they won’t truly know that they found the best option; there will always be the possibility that they missed a store or a deal someplace else. In contrast to a maximizer, an aspirational shopper would go into a store looking for a certain style at a certain price. If they found a pair of pants that fit right and was within the right price range, then they could be satisfied and make a purchase without having to check every store and without having to wonder if they could have gotten a better deal elsewhere. They had basic aspirations that they could reasonably meet to be satisfied.
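The pants example boils down to a simple stopping rule. Here is a minimal sketch (the stores, prices, and budget are all hypothetical): the search ends at the first option that meets the pre-set aspirations, rather than after every option has been compared.

```python
# Hypothetical inventory: (store, price, fits_well) in the order encountered.
pants = [
    ("Store A", 80, False),  # fits poorly
    ("Store A", 95, True),   # fits, but over budget
    ("Store B", 45, True),   # meets both aspirations -> buy here, stop looking
    ("Store C", 40, True),   # never even examined
]

def aspirational_shopper(options, max_price):
    """Return the first option meeting the pre-set aspirations, or None."""
    for store, price, fits in options:
        if fits and price <= max_price:
            return store, price  # good enough: stop searching immediately
    return None

print(aspirational_shopper(pants, max_price=60))  # ('Store B', 45)
```

Note that a maximizer scanning the whole list would have found the slightly cheaper pair at Store C, but only after visiting every store, and still without certainty that no better deal existed somewhere unvisited.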

 

Maximizers set unrealistic goals and expectations for themselves while those using aspiration rules are able to set more reasonable, achievable goals. This demonstrates the power and utility of satisficing. Decisions have to be made, otherwise we will be wandering around without pants as we try to find the best possible deal. We will forego opportunities to get lunch, meet up with friends, and do whatever it is we need pants to go do. This idea is not limited to pants and individuals. Businesses, institutions, and nations all have to make decisions in complex environments. Maximizing can be a path toward paralysis, toward CYA behaviors (cover your ass), and toward long-term failure. Start-ups that can satisfice and make quick business decisions and changes can unseat the giant that attempts to maximize every decision. Nations focused on maximizing every public policy decision may never actually achieve anything, leading to civil unrest and a loss of support. Institutions that can’t satisfice also fail to meet their goals and missions. Allowing ourselves and our larger institutions to set aspiration rules and satisfice, all while working to incrementally improve with each step, is a good way to actually move toward progress, even if it doesn’t feel like we are getting the best deal in any given decision.

 

The aspiration rules we use can still be high, demanding of great performance, and drive us toward excellence. Another key difference, however, between the use of aspiration rules and maximizing is that aspiration rules can be more personalized and tailored to the realistic circumstances that we find ourselves within. That means we can create SMART goals for ourselves by using aspiration rules. Specific, measurable, achievable, realistic, and time-bound goals have more in common with a satisficing mentality than goals that align with maximizing strategies. Maximizing doesn’t recognize our constraints and challenges, and may leave us feeling inadequate when we don’t become president, don’t have a larger house than our neighbors, and are not a famous celebrity. Aspiration rules on the other hand can help us set goals that we can realistically achieve within reasonable timeframes, helping us grow and actually reach our goals.
Satisficing


Satisficing gets a bad rap, but it isn’t actually that bad a way to make decisions, and it realistically accommodates the constraints and challenges that decision-makers in the real world face. None of us would like to admit when we are satisficing, but the reality is that we are happy to satisfice all the time, and we are often happy with the results.

 

In Risk Savvy, Gerd Gigerenzer recommends satisficing when trying to choose what to order at a restaurant. Regarding this strategy for ordering, he writes:

 

“Satisficing: This … means to choose the first option that is satisfactory; that is, good enough. You need the menu for this rule. First, you pick a category (say, fish). Then you read the first item in this category, and decide whether it is good enough. If yes, you close the menu and order that dish without reading any further.”

 

Satisficing works because we often have more possibilities than we have time to carefully weigh and consider. If you have never been to the Cheesecake Factory, reading each option on the menu for the first time would probably take you close to 30 minutes. If you are eating on your own and don’t have any time constraints, then sure, read the whole menu, but the staff will probably be annoyed with you. If you are out with friends or on a date, you probably don’t want to take 30 minutes to order, and you will feel pressured to make a choice relatively quickly without having full knowledge and information regarding all your options. Satisficing helps you make a selection that you can be relatively confident you will be happy with given some constraints on your decision-making.
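Gigerenzer's menu rule translates almost directly into code. The sketch below is only illustrative (the menu, category, and "good enough" test are invented): read down one category and stop at the first satisfactory dish.

```python
# Hypothetical menu: category -> list of dishes in printed order.
menu = {
    "fish": ["grilled salmon", "fish tacos", "seared ahi"],
    "pasta": ["carbonara", "pesto gnocchi"],
}

def satisfice(menu, category, good_enough):
    """Gigerenzer's rule: read down one chosen category and order the
    first dish that passes the good_enough test; never read further."""
    for dish in menu[category]:
        if good_enough(dish):
            return dish
    return None  # nothing satisfactory in this category

# The diner picks the fish category and accepts anything that isn't ahi.
choice = satisfice(menu, "fish", lambda dish: "ahi" not in dish)
print(choice)  # grilled salmon
```

The whole point is what the function never does: it never reads the rest of the menu, so there is nothing to second-guess afterward.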

 

The term satisficing was coined by the Nobel Prize-winning political scientist and economist Herbert Simon, and I remember hearing a story from a professor of mine about his decision to remain at Carnegie Mellon University in Pittsburgh. When asked why he hadn’t taken a position at Harvard or a more prestigious Ivy League school, Simon replied that his wife was happy in Pittsburgh, and while Carnegie Mellon wasn’t as renowned as Harvard, it was still a good school and still offered him enough of what he wanted to remain. In other words, Carnegie Mellon satisfied his basic needs and satisfied criteria in enough areas to make him happy, even though a school like Harvard would have maximized his prestige and influence. Simon was satisficing.

 

Without always recognizing it, we turn to satisficing for many of our decisions. We often can’t buy the perfect home (because of timing, price, and other bidders), so we satisfice and buy the first home we can get a good offer on that meets enough of our desires (but doesn’t fit all of them perfectly). The same goes for jobs, cars, where we are going to get take-out, what movie we want to rent, what new clothes to buy, and more. Carefully analyzing every potential decision we have to make can be frustrating and exhausting. We will constantly doubt whether we made the best choice, and we may be too paralyzed to even make a decision in the first place. If we satisfice, however, we accept that we are not making the best choice but are instead making an adequate choice that satisfies the greatest number of our needs while simplifying the choice we have to make. We can live with what we get and move on without the constant doubt and loss of time that we might otherwise experience. Satisficing, while getting a bad rap from those who favor rationality in all instances, is actually a pretty good decision-making heuristic.
Gut Decisions


“Although about half of professional decisions in large companies are gut decisions, it would probably not go over well if a manager publicly admitted, I had a hunch. In our society, intuition is suspicious. For that reason, managers typically hide their intuitions or have even stopped listening to them,” Gerd Gigerenzer writes in Risk Savvy.

 

The human mind evolved first in small tribal bands trying to survive in a dangerous world. As our tribes grew, our minds evolved to become more politically savvy, learning to intuitively hide our selfish ambitions and appear honest and altruistic. This pushed our brains toward more complex activity, which took place outside our direct consciousness, hiding in gut feelings and intuitions. Today, however, we don’t trust those intuitions and gut decisions, even though they never left us.

 

We do have good reason to discount intuitions. Our minds did not evolve to serve us perfectly in a complex, data-rich world full of uncertainty. Our brains are plagued by motivated reasoning, biases, and cognitive limitations. Making gut decisions can leave us vulnerable to these mental challenges, leading us to distrust our intuitions.

 

However, this doesn’t mean we have escaped gut decisions. Gerd Gigerenzer thinks that is actually a good thing, especially if we have developed years of insight and expertise through practice and real-life training. What Gigerenzer argues is that we still make many gut decisions in areas as diverse as vacation planning, daily exercise, and corporate strategy. We just don’t admit we are making decisions based on intuition rather than careful statistical analysis. Taking it a step further, Gigerenzer suggests that most of the time we make a decision at a gut level and produce reasons after the fact. We rationalize and use motivated reasoning to explain why we made a decision, and we try to deceive ourselves into believing that we always intended to do the rational calculation first, and that we really hadn’t made up our mind until after we had done so.

 

Gigerenzer suggests that we acknowledge our gut decisions. Ignoring them and pretending they are not influential wastes our time and costs us money. An executive may have an intuitive sense of what to do in terms of a business decision, but may be reluctant to say they made a decision based on intuition. Instead, they spend time doing an analysis that didn’t need to be done. They create reasons to support their decision after the fact, again wasting time and energy that could go into implementing the decision that has already been made. Or an executive may bring in a consulting firm, hoping the firm will come up with the same answer that they got from their gut. Time and money are both wasted, and the decision-making and action-taking structures of the individual and organization are gummed up unnecessarily. Acknowledging gut decisions and moving forward more quickly, Gigerenzer seems to suggest, is better than rationalizing and finding support for gut decisions after the fact.
Public vs Private Choice Architects - Joe Abittan

Who to Fear: Public vs Private Choice Architects

A question that Cass Sunstein and Richard Thaler raise in their book Nudge is whether we should worry more about public or private sector choice architects. A choice architect is anyone who influences the decision space of another individual or group. Your office’s HR person in charge of health benefits is a choice architect. The people at Twitter who decided to increase the character length of tweets are choice architects. The government bureaucrat who designs the form you use to register to vote is also a choice architect. The decisions that each individual or team makes around the choice structure for other people’s decisions will influence the decisions and behaviors of people in those choice settings.

 

In the United States, we often see a split between public and private that feels more concrete than the divide truly is. Often, we fall dramatically on one side of the imagined divide, either believing everything needs to be handled by businesses, or thinking that businesses are corrupt and self-interested and that government needs to step in to monitor almost all business actions. The reality is that businesses and government agencies overlap and intersect in many ways, and choice architects in both influence the public and each other in complex ways. Regardless of what you believe and what side you fall on, both sets of choice architects need to be taken seriously.

 

“On the face of it, it is odd to say that the public architects are always more dangerous than the private ones. After all, managers in the public sector have to answer to voters, and managers in the private sector have as their mandate the job of maximizing profits and share prices, not consumer welfare.”

 

Sunstein and Thaler suggest that we should be concerned about private sector choice architects because they are ultimately responsible for company growth and shareholder value, rather than for what is in the best interest of individuals. When conflicts arise between what is best for people and what is best for a company’s bottom line, there could be pressure on the choice architect to use nudges to help the bottom line rather than to help people make the best decisions possible.

 

However, the public sector is not free from perverse incentives simply by being elected, being accountable to the public, or being free from profit motives. Sunstein and Thaler continue, “we agree that government officials, elected or otherwise, are often captured by private-sector interests whose representatives are seeking to nudge people in directions that will specifically promote their selfish goals.” The complex interplay of government and private companies means that even the public sector is not a space purely dedicated to public welfare. The general public doesn’t have the time, attention, energy, or financial resources to influence public sector choice architects in the ways that the private sector does. And if private sector influences shape choice structures via public elected officials, they can create a sense of legitimacy for ultimately selfish decisions. Of course, public sector choice architects could be more interested in keeping their job or winning reelection, and may promote their own selfish goals for self-preservation reasons as well.

 

We can’t think of public sector or private sector actors as inherently more trustworthy or responsible than the other. Oftentimes, they overlap and influence each other, shifting the incentives and opinions of the public and of the actors within both sectors simultaneously. Sunstein and Thaler suggest that this is a reason for maintaining the maximal freedom of choice possible. The more people retain the ability to make their own choices, even if they are nudged, the more we can limit the impact of self-serving choice architects, whether they are in the public or private sectors.
Selfish Choice Architects


“So let’s go on record,” write Cass Sunstein and Richard Thaler in their book Nudge, “as saying that choice architects in all walks of life have incentives to nudge people in directions that benefit the architects (or their employers) rather than the users.”

 

Choice architects are those who design, organize, or provide decision situations to individuals. Whether it is the person who determines the order for food on the buffet line, the human resources manager responsible for selecting health insurance plans for the company, or a bureaucrat who designs an enrollment form, anyone who influences a situation where a person makes a decision or a choice is architecting that choice. Their decisions will influence the way that people understand their choice and the choices that they actually make.

 

As the quote above notes, there can always be pressure for a choice architect to design the decision space in a way that advances their own desires or needs as opposed to advancing the best interest of the individual making a given choice. Grocery stores adjust their layouts with the hopes that displays, sales, or conveniently located candy will get customers to purchase things they otherwise wouldn’t purchase. A company could skimp on health benefits and present confusing plans to employees with hurdles preventing them from utilizing their benefits, saving the company money while still appearing to have generous benefits. A public agency could design a program that meets a political objective and makes the agency head look good, even if it gives up actual effectiveness in the process and doesn’t serve citizens well.

 

Nudges are useful, but they have the capacity to be nefarious. A buffet manager might want patrons to fill up on cheap salad and eat less steak, so that the buffet does better on the margins. Placing multiple cheap salads at the front of the line, and not letting people jump right to the steak, is a way to nudge people toward eating cheaper food. Sunstein and Thaler acknowledge the dark side of nudges in their book and encourage anyone who is a choice architect to avoid it. Failing to do so, they warn, risks breeding cynicism and, in the long run, is likely to create problems when employee, customer, or citizen trust and buy-in are needed.