We Bet on Technology

I am currently reading Steven Pinker’s book Enlightenment Now, and he makes a good case for being optimistic about human progress. In an age when it is popular to write about human failures, whether it is wealthy but unhappy athletes wrecking their cars, the perilous state of democracy, or impending climate doom, the responsible message always seems to be a warning about how bad things are. But Pinker argues that things are not that bad and that they are getting better. Pinker’s writing directly contradicts some of my earlier reading, including the work of Gerd Gigerenzer, who argues that we unwisely bet on technology to save us when we should instead focus on improving statistical thinking and living with risk rather than hoping for a savior technology.
In Risk Savvy, Gigerenzer writes about the importance of statistical thinking and how we need it in order to successfully navigate an increasingly complex world. He argues that betting on technology will in some ways be a waste of money, and while I think he is correct in many ways, I think some parts of his message are wrong. He argues that instead of betting on technology, we need to develop improved statistical understandings of risk to help us better adapt to our world and make smarter decisions about how we use and prioritize resources and attention. He writes, “In the twenty-first century Western world, we can expect to live longer than ever, meaning that cancer will become more prevalent as well. We deal with cancer like we deal with other crises: We bet on technology. … As we have seen … early detection of cancer is also of very limited benefit: It saves none or few lives while harming many.”
Gigerenzer is correct to state that, to this point, broad cancer screening has been of questionable use. We identify a lot of cancers that people would likely live with and that are unlikely to cause serious metastatic or life-threatening disease. Treating cancers that won’t become problematic during the natural course of an individual’s life causes a lot of pain and suffering for no discernible benefit. But does this mean we shouldn’t bet on technology? I would argue that it does not, and that we can treat the current mistakes we make with cancer screening and early detection as lessons to help us get to a better technological cancer detection and treatment landscape. Many of the resources we direct toward cancer may be misplaced right now, but wise people like Gigerenzer can help redirect the technology to where it can be the most beneficial. We can learn from poor decisions around treatment and diagnosis, call out the actors who profit from misinformation, uncertainty, and fear, and build a new regime that harnesses technological progress in the most efficient and effective ways. As Pinker would argue, we bet on technology because it offers real promise of an improved world. It won’t be an immediate success, and it will have red herrings and loose ends, but incrementalism is a good way to move forward, even if it is slow and feels inadequate to meet the challenges we really face.
Ultimately, we should bet on technology and pursue progress to eliminate more suffering, improve knowledge and understanding, and better diagnose, treat, and understand cancer. Arguing that we haven’t done a good job so far, and that current technology and its uses haven’t had the life-saving impact we wish they had, is not a reason to abandon the pursuit. Improving our statistical thinking is critical, but betting on technology and improving statistical thinking go hand in hand and need to be developed together, without prioritizing one over the other.
Teaching Statistical Thinking

“Statistical thinking is the most useful branch of mathematics for life,” writes Gerd Gigerenzer in Risk Savvy, “and the one that children find most interesting.” I don’t have kids and I don’t teach or tutor children today, but I remember my own math classes, from elementary school lessons to AP Calculus in high school. Most of my math education was solving isolated equations and memorizing formulas, with an occasional word problem tossed in. While I was generally good at math, it was boring, and I, like others, questioned when I would ever use most of the math I was learning. Gerd Gigerenzer wants to change this, and he wants to do so in a way that focuses on teaching statistical thinking.
Gigerenzer continues, “teaching statistical thinking means giving people tools for problem solving in the real world. It should not be taught as pure mathematics. Instead of mechanically solving a dozen problems with the help of a particular formula, children and adolescents should be asked to find solutions to real-life problems.” 
We view statistics as incredibly complicated and too advanced for most children (and for most of us adults as well!). But if Gigerenzer’s assertion that statistical thinking and problem solving are what many children are most excited about is correct, then we should lean into teaching statistical thinking rather than hiding it away and saving it for advanced students. I found math classes to be alright, but I questioned how often I would need to use math, and that was before smartphones became ubiquitous. Today, most of the math I have to do professionally is calculated using a spreadsheet formula. I’m glad I understand the math and calculations behind the formulas I use in spreadsheets, but perhaps learning mathematical concepts within real-world examples would have been better than learning them in isolation through essentially rote memorization.
Engaging with what kids really find interesting will spur learning, and doing so with statistical thinking will do more than just help kids make smart decisions on the Las Vegas Strip. Improving statistical thinking will help people understand how to appropriately respond to future pandemics, how to plan for retirement, and how to think about risk in other health and safety contexts. Lots of mathematical concepts can be built into real-world lessons centered on statistical thinking, going beyond the memorization and plug-and-chug exercises that I grew up with.

Medical Progress

What does medical progress look like? To many, medical progress looks like new machines, artificial intelligence to read your medical reports and x-rays, or new pharmaceutical medications to solve all your ailments with a simple pill. However, much of medical progress might be improved communication, better management and operating procedures, and better understandings of statistics and risk. In the book Risk Savvy, Gerd Gigerenzer suggests that there is a huge opportunity for improving physician understanding of risk, improving communication around statistics, and building better processes related to risk, all of which would help spur real medical progress.

 

He writes, “Medical progress has become associated with better technologies, not with better doctors who understand these technologies.” Gigerenzer argues that there is currently an “unbelievable failure of medical schools to provide efficient training in risk literacy.” Much of the focus of medical schools and physician education is on memorizing facts about specific disease states, treatments, and how a healthy body should look. What is not focused on, in Gigerenzer’s 2014 argument, is how physicians understand the statistical results of empirical studies, how physicians interpret risk given a specific biological marker, and how physicians can communicate risk to patients in a way that adequately informs their healthcare decisions.

 

Our health is complex. We all have different genes, different family histories, different exposures to environmental hazards, and different lifestyles. These factors interact in many complex ways, and our health is often a downstream consequence of many fixed factors (like genetics) and many social determinants of health (like whether we have a safe park where we can walk, or whether we grew up in a house infested with mold). Understanding how all these factors interact and shape our current health is not easy.

 

Adding new technology to the mix can help us improve our treatments, our diagnoses, and our lifestyle or environment. However, simply layering new technology onto existing complexity is not enough to really improve our health. Medical progress requires better ways to use and understand the technology that we introduce; otherwise we are just adding more layers to the existing complexity. If physicians cannot understand the technology and the data that feed into it, cannot communicate about them, and cannot help people make reasonable decisions based on them, then we won’t see the medical progress we all hope for. It is important that physicians be able to understand the complexity, the risk, and the statistics involved so that patients can learn how to actually improve their behaviors and lifestyles, and so that societies can address social determinants of health to better everyone’s lives.
Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics, a professor of mine had us think about drug testing student athletes. We ran through a few scenarios where we calculated how many true positive and how many false positive test results we should expect if we oversaw a university program that drug tested student athletes on a regular basis. The results were surprising, a little confusing, and hard to understand.

 

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to produce more false positive tests than true positive tests. The big determining factors are the accuracy of the test (its sensitivity and its false positive rate) and the percentage of students actually using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does indeed use drugs. The dilemma we discussed arises when you have a test with some percentage of error and a large student athlete population with a minimal percentage of drug users. In that case you cannot be confident that a positive test result is accurate: you will receive a number of positive tests, but most of them will actually be false positives.
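To make the arithmetic concrete, here is a minimal Python sketch of the kind of exercise we did in class. The specific figures (1,000 athletes, 2% drug use, 95% sensitivity, 5% false positive rate) are hypothetical and chosen only for illustration; the course used its own numbers.

```python
# Hypothetical figures for illustration only.
athletes = 1000              # student athletes tested
prevalence = 0.02            # share who actually use illicit drugs
sensitivity = 0.95           # chance a user tests positive
false_positive_rate = 0.05   # chance a non-user tests positive

users = athletes * prevalence               # 20 athletes
non_users = athletes - users                # 980 athletes

true_positives = users * sensitivity                # 19 correct positives
false_positives = non_users * false_positive_rate   # 49 incorrect positives

prob_user_given_positive = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"P(user | positive test) ≈ {prob_user_given_positive:.0%}")  # about 28%
```

Even with a test that is right 95% of the time, under these assumptions only about 28% of the athletes who test positive are actually drug users.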

 

In class, our teacher walked us through this example verbally before creating some tables that we could use to multiply the percentages ourselves and see that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare event that you are testing for. Our teacher continued to explain that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests, as we are learning today). The challenge, as our professor explained, is that the math is complicated, and it is hard to explain to a person who just received a positive cancer test that they likely don’t have cancer, even though they just received a positive test. The statistics are hard to understand on their own.

 

However, Gerd Gigerenzer doesn’t think this problem is as limiting as my professor’s exercise made it seem. In Risk Savvy, Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to work through, Gigerenzer suggests can be digested in simple charts using natural frequencies. Natural frequencies are numbers we can actually understand and work with, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

 

Telling someone that the actual incidence of cancer in the population is only 1%, and that the chance of a false positive test is 9%, and then trying to convince them that they still likely don’t have cancer, is confusing. However, if you explain that for every 1,000 people who take a particular cancer test, only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear up. With the group of 10 people who do have cancer, you can explain that the test correctly identifies 9 out of 10 of them, providing 9 true positive results for every 1,000 tests (or adjust according to the population and the test’s sensitivity). The false positive number can then be explained by saying that of the 990 people who really don’t have cancer, the test will err and tell about 89 of them (9% in this case) that they do have cancer. So 89 individuals will receive false positives while only 9 people will receive true positives. Because 89 is much larger than 9, a positive test is far from a guarantee of actually having cancer; only about 9 of the 98 people who test positive really have the disease.
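Here is the same arithmetic as a short Python sketch, using the numbers from the example above (1% incidence, a test that catches 9 of 10 real cancers, and a 9% false positive rate):

```python
# Numbers taken from the example above.
people = 1000
have_cancer = people * 0.01         # 10 people
no_cancer = people - have_cancer    # 990 people

true_positives = have_cancer * 0.90     # 9 people correctly flagged
false_positives = no_cancer * 0.09      # about 89 people incorrectly flagged

chance_given_positive = true_positives / (true_positives + false_positives)
print(f"{false_positives:.0f} false positives vs. {true_positives:.0f} true positives")
print(f"P(cancer | positive test) ≈ {chance_given_positive:.0%}")  # roughly 9%
```

Working with counts of people out of 1,000 rather than layered percentages is exactly what makes the natural frequency framing easier to follow.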

 

Gigerenzer uses very helpful charts in his book to show us that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to get to better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed are better tools and aids to help with comprehension. Rather than developing paternalistic policies to nudge people toward certain behaviors that they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.
Dread Risks

Over the course of 2020 we watched COVID-19 shift from a dread risk to a less alarming risk. To some extent, COVID-19 became a mundane risk that we adjusted to and learned to live with. Our initial reactions to COVID-19, and our later reluctant but general acceptance of it, reveal interesting ways in which the mind works. Sudden and unexplained deaths and risks are terrifying, while continual risk is to some extent ignored, even when the dangers we ignore pose the greater threat.

 

In Risk Savvy Gerd Gigerenzer describes dread risks and our psychological reactions by writing, “low-probability events in which many people are suddenly killed trigger an unconscious psychological principle: If many people die at one point in time, react with fear and avoid that situation.” Dread risks are instances like terrorist attacks, sudden bridge collapses, and commercial food contamination events. A risk that we did not consider is thrust into our minds, and we react strongly by avoiding something we previously thought to be safe.

 

An unfortunate reality of dread risks is that they distract us and pull our energy and attention away from ongoing and more mundane risks. This has been a challenge as we try to keep people focused on limiting COVID-19 and not simply accepting deaths from the disease the way we accept deaths from car crashes, gun violence, and secondhand smoke exposure. Gigerenzer continues, “But when as many or more die distributed over time, such as in car and motorbike accidents, we are less likely to be afraid.” Dread risks trigger fears and responses that distributed risks don’t.

 

This psychological bias drove the United States into wars in Iraq and Afghanistan in the early 2000s, and we are still paying the price for those wars. The shift of COVID-19 in our collective consciousness from a dread risk to a distributed risk led to mass political rallies, unwise indoor gatherings, and other social and economic events where people contracted the disease and died, even though they should have known to be more cautious. Reacting appropriately to a dread risk is difficult, and giving distributed risks the attention and resources they deserve is also difficult. The end result is poor public policy, poor individual decision-making, and potentially lives lost because we fail to use resources in the ways that save the most people.
Stats and Messaging

In the past, I have encouraged attaching probabilities and statistical chances to the things we believe or to events we think may (or may not) occur. For example, say Steph Curry’s three-point shooting percentage is about 43%, and I am two Steph Currys confident that my running regimen will help me qualify for the Boston Marathon. One might also be two Steph Currys confident that leaving now will guarantee they are at the theater in time for the movie, or that most COVID-19 restrictions will be rescinded by August 2021, allowing people to go to movies again. However, the specific percentages that I am attaching in these examples may be meaningless, and may not really convey an important message for most people (myself included!). It turns out that modern-day statistics, and the messaging attached to them, are not well understood.

 

In his book Risk Savvy, Gerd Gigerenzer discusses the disconnect between stats and messaging, and the mistake most people make. The main problem with using statistics is that people don’t really know what the statistics mean in terms of actual outcomes. This was seen in the 2016 US presidential election, when sources like FiveThirtyEight gave Trump a 28.6% chance of winning, and again in 2020, when the election was closer than many predicted but still well within the forecasted range. In both instances, a Trump win was considered such a low-probability event that people dismissed it as a real possibility, only to be shocked when Trump did win in 2016 and performed better than many expected in 2020. People failed to fully appreciate that FiveThirtyEight’s prediction meant that Trump won in 28.6% of the 2016 election simulations, and that in 2020 many of their simulations predicted races both closer than and wider than the result we actually observed.

 

Regarding weather forecasting and statistical confusion, Gigerenzer writes, “New forecasting technology has enabled meteorologists to replace mere verbal statements of certainty (it will rain tomorrow) or chance (it is likely) with numerical precision. But greater precision has not led to greater understanding of what the message really is.” Gigerenzer explains that in the context of weather forecasts, people often fail to understand that a 30% chance of rain means that on 30% of days when the observed weather factors (temperature, humidity, wind speed, etc.) match the predicted conditions for that day, rain occurs. Or that models taking those weather factors into account simulated 100 days of weather with those conditions and included rain in 30 of them. What is missing, Gigerenzer explains, is the reference class. Telling people there is a 30% chance of rain could lead them to think that it will rain for 30% of the day, that 30% of the city they live in will be rained on, or perhaps to misunderstand the forecast in a completely unpredictable way.

 

Probabilities are hard for people to understand, especially when they are busy, have other things on their mind, and don’t know the reference class. Providing probabilities that don’t actually connect to a real reference class can be misleading and unhelpful. This is why my suggestion of tying beliefs and possible outcomes to a statistic might not actually be meaningful. If we don’t have a reasonable reference class and a way to understand it, then it doesn’t matter how many Steph Currys of confidence I attach to something. I think we should take statistics into consideration when making important decisions, and I think Gigerenzer would agree, but if we are going to communicate our decisions in terms of statistics, we need to clearly state and explain the reference classes and provide the appropriate tools to help people understand the stats and messaging.
Denominator Neglect

“The idea of denominator neglect helps explain why different ways of communicating risks vary so much in their effects,” writes Daniel Kahneman in Thinking Fast and Slow.

 

One thing we have seen in 2020 is how difficult it is to communicate and understand risk. Thinking about risk requires thinking statistically, and thinking statistically doesn’t come naturally to our brains. We are good at thinking in terms of anecdotes, and our brains like to identify patterns and potential causal connections between specific events. When our brains have to predict chance and deal with uncertainty, they easily get confused. They shift to solving easier problems rather than complex mathematical ones, substituting the answer to the easy problem without realizing it. Whether it is our risk of getting COVID or the probability we assigned to election outcomes before November 3rd, many of us have been thinking poorly about probability and chance this year.

 

Kahneman’s quote above highlights one example of how our thinking can go wrong when we have to think statistically. Our brains can be easily influenced by irrelevant numbers, and that can throw off our decision-making when it comes to dealing with uncertainty. To demonstrate denominator neglect, Kahneman presents two situations in his book. There are two large urns full of white and red marbles, and if you pull a red marble from an urn, you are a winner. The first urn has 10 marbles in it, 9 white and 1 red. The second urn has 100 marbles in it, 92 white and 8 red. Statistically, we should try our luck with the first urn, because 1 out of 10, or 10%, of its marbles are red, while in the second urn only 8% of the marbles are red.

 

When asked which urn they would want to select from, many people choose the second urn, demonstrating what Kahneman describes as denominator neglect. The chance of winning is lower with the second urn, but it contains more winning marbles, making it seem like the better option if you don’t slow down and engage your System 2 thinking processes. If you pause and think statistically, you can see that the first urn provides better odds, but if you are moving quickly your brain can be distracted by the larger number of winning marbles and lead you to the worse choice.
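A quick simulation makes the comparison concrete. This minimal sketch repeatedly draws a marble at random from each urn described above and reports how often each produces a winner:

```python
import random

# The two urns from Kahneman's example: 1 red in 10 vs. 8 red in 100.
urn_small = ["red"] * 1 + ["white"] * 9
urn_large = ["red"] * 8 + ["white"] * 92

trials = 100_000
wins_small = sum(random.choice(urn_small) == "red" for _ in range(trials))
wins_large = sum(random.choice(urn_large) == "red" for _ in range(trials))

print(f"Small urn win rate: {wins_small / trials:.1%}")  # about 10%
print(f"Large urn win rate: {wins_large / trials:.1%}")  # about 8%
```

The small urn wins about 10% of the time and the large urn about 8%, even though the large urn holds eight winning marbles.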

 

What is important to recognize is that we can be influenced by numbers that shouldn’t mean anything to us. The number of winning marbles shouldn’t matter, only the percent chance of winning should matter, but our brains get thrown off. The same thing happens when we see sale prices, think about the risk of a family gathering of 10 people during a global pandemic, or think about polling errors. I like to check The Nevada Independent’s COVID-19 tracking website, and I have noticed denominator neglect in how I think about the numbers they report. For a stretch, Nevada’s total number of cases was decreasing, but our test positivity rate was staying the same. Statistically, nothing was really changing about the state of the pandemic in Nevada; fewer tests were being completed and reported each day, so the overall number of positive cases was decreasing. If you scroll down the Nevada Independent website, you will get to a graph of the test positivity rate and see that things were staying the same. When looking at the decreasing number of positive tests reported, my brain was neglecting the denominator: the number of tests completed. The way I understood the pandemic was biased by the big headline number, and wasn’t really based on how many of the people tested did indeed have the virus. Thinking statistically provides a more accurate view of reality, but it can be hard to do, and it is tempting to look at just a single headline number.
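A small, hypothetical example shows how the headline number can mislead. The daily figures below are invented for illustration and are not The Nevada Independent’s actual data:

```python
# Hypothetical daily figures for illustration only.
day_1 = {"tests": 10_000, "positives": 1_000}
day_2 = {"tests": 6_000, "positives": 600}

for name, day in [("Day 1", day_1), ("Day 2", day_2)]:
    rate = day["positives"] / day["tests"]
    print(f"{name}: {day['positives']} positives, positivity rate {rate:.0%}")

# The headline count falls from 1,000 to 600, but the positivity rate stays
# at 10% -- the drop reflects fewer tests, not less virus.
```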
Decision Weights

On the heels of the 2020 election, I cannot decide if this post is timely or untimely. On the one hand, this post is about how we should think about unlikely events, and I will argue, based on a quote from Daniel Kahneman’s book Thinking Fast and Slow, that we overweight unlikely outcomes and should better align our expectations with realistic probabilities. On the other hand, the 2020 election was closer than many people expected, we almost saw some very unlikely outcomes materialize, and one can argue that a few unlikely outcomes really did come to pass. Ultimately, this post falls in a difficult space, arguing that we should discount unlikely outcomes more than we actually do, while acknowledging that sometimes very unlikely outcomes really do happen.

 

In Thinking Fast and Slow Kahneman writes, “The decision weights that people assign to outcomes are not identical to the probabilities of these outcomes, contrary to the expectation principle.”  This quote is referencing studies which showed that people are not good at conceptualizing chance outcomes at the far tails of a distribution. When the chance of something occurring gets below 10%, and especially when it pushes into the sub 5% range, we have trouble connecting that with real world expectations. Our behaviors seem to change when things move from 50-50 to 75-25 or even to 80-20, but we have trouble adjusting any further once the probabilities really stretch beyond that point.

 

Kahneman continues, “Improbable outcomes are overweighed – this is the possibility effect. Outcomes that are almost certain are underweighted relative to actual certainty. The expectation principle, by which values are weighted by their probability, is poor psychology.”

 

When something has only a 5% or lower chance of happening, we behave as though the probability of that occurrence is closer to, say, 25%. We know the likelihood is very low, but we act as if it is a bit higher than a single-digit percentage. Meanwhile, the nearly certain outcome of 95% or more is discounted beyond what it really should be. Very rare outcomes certainly do happen sometimes, but in our minds we have trouble conceptualizing them, and rather than keeping a perspective based on the actual probabilities and using rational decision weights, we overweight the improbable and underweight the nearly certain.
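One common way to formalize this pattern is the probability weighting function from Tversky and Kahneman’s cumulative prospect theory (1992). The sketch below uses their published curvature estimate for gains (about 0.61); it illustrates the general shape Kahneman describes rather than reproducing any specific table from Thinking Fast and Slow.

```python
# Probability weighting function from cumulative prospect theory (1992);
# gamma = 0.61 is Tversky and Kahneman's estimate for gains. Outputs are
# illustrative of the general shape, not exact psychological constants.
def decision_weight(p: float, gamma: float = 0.61) -> float:
    """Map an objective probability p onto a subjective decision weight."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in [0.01, 0.05, 0.25, 0.50, 0.75, 0.95, 0.99]:
    print(f"probability {p:.2f} -> decision weight {decision_weight(p):.2f}")

# Small probabilities are weighted up (0.01 comes out near 0.06, 0.05 near
# 0.13), while near-certain ones are pulled down (0.95 near 0.79, 0.99 near
# 0.91), matching the possibility and certainty effects described above.
```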

 

Our challenges with correctly weighting extremely certain or extremely unlikely events may have an evolutionary history. For our early ancestors, being completely sure of anything may have resulted in a few very unlikely deaths. Those who were a tad more cautious may have been less likely to run across the log that actually gave way into the whitewater rapids below. And our ancestors who reacted to the improbable as though it were a little more certain may have also been better at avoiding the lion on the one night the twig snapping outside the campground really was a lion. The ancestor who sat by the fire and said, “twigs snap every night, the chances that it actually is a lion this time have gotta be under 5%,” may not have lived long enough to pass his genes on to future generations. The reality is that in most situations for our early ancestors, being a little more cautious was probably advantageous. Today, being overly cautious and struggling to weight improbable or nearly certain outcomes can be costly: we over-purchase insurance, spend huge amounts to avoid the rare chance of losing a huge amount, and over-trust democratic institutions in the face of a coup attempt.
Avoiding Gambles

“Most people dislike risk (the chance of receiving the lowest possible outcome), and if they are offered a choice between a gamble and an amount equal to its expected value they will pick the sure thing,” writes Daniel Kahneman in Thinking Fast and Slow. I don’t want to get too far into expected value, but in my mind I think of it as a discount on the total value of the best outcome of a gamble, blended with the possibility of getting nothing. Rather than the expected value of a $100 bet being $100, the expected value will come in somewhere less than that, maybe around $50, $75, or $85 depending on whether the odds of winning the bet are so-so or pretty good. You will either win $100 or $0, not $50, $75, or $85, but the risk factor causes us to value the bet at less than the full amount up for grabs.
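As a rough sketch, the expected value of this kind of all-or-nothing bet is just the payout multiplied by the probability of winning. The win probabilities below are made up to reproduce the $50, $75, and $85 figures mentioned above:

```python
# Expected value of an all-or-nothing gamble: payout times win probability.
def expected_value(payout: float, probability_of_winning: float) -> float:
    return payout * probability_of_winning

for p in [0.5, 0.75, 0.85]:
    ev = expected_value(100, p)
    print(f"Win $100 with probability {p:.0%}: expected value = ${ev:.0f}")

# You only ever walk away with $100 or $0, but the expected value is the
# "fair price" of the gamble; a risk-averse person will accept a sure amount
# somewhat below that figure.
```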

 

What Kahneman describes in his book is an interesting phenomenon where people will mentally (or maybe subjectively is the better way to put it) calculate an expected value in their head when faced with a betting opportunity. If the expected value of the bet that people calculate for themselves is not much higher than a guaranteed option, people will pick the guaranteed option. The quote I used to open the post explains the phenomenon which you have probably seen if you have watched enough game show TV. As Kahneman continues, “In fact a risk-averse decision maker will choose a sure thing that is less than the expected value, in effect paying a premium to avoid the uncertainty.”

 

On game shows, people will frequently walk away from the big possibility of a payoff with a modest sum of cash if they are risk averse or if the odds seem really stacked against them. What is interesting is that we can study when people make the bet versus when they walk away, and observe patterns in our decision-making. We can predict the situations that drive people toward avoiding gambles and the situations that encourage them. If the certain outcome is pretty close to the expected outcome, people will pick the certain outcome. If there is no certain outcome, people usually need a potential reward of roughly two times what they might lose before they are comfortable with a bet. We might like to take chances and gamble from time to time, but we tend to be pretty risk averse: we prefer guaranteed outcomes, even at a slight cost relative to the expected value of a bet, rather than risking losing it all.
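The “roughly two times” pattern can be written as a simple rule of thumb. The threshold of 2.0 below is the approximate figure from the paragraph above, not a precise psychological constant, and the example gambles are hypothetical:

```python
# Rough rule of thumb: a mixed gamble tends to be accepted only when the
# potential gain is about twice the potential loss.
def likely_to_accept(potential_gain: float, potential_loss: float,
                     loss_aversion: float = 2.0) -> bool:
    return potential_gain >= loss_aversion * potential_loss

print(likely_to_accept(potential_gain=150, potential_loss=100))  # False
print(likely_to_accept(potential_gain=220, potential_loss=100))  # True
```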
Regression to the Mean Versus Causal Thinking

Regression to the mean, the idea that there is an average outcome that can be expected and that, over time, individual outliers will revert back toward that average, is a boring phenomenon on its own. If you think about it in the context of driving to work and counting your red lights, you can see why it is a rather boring idea. If you normally hit 5 red lights, and one day you manage to get to work with just a single red light, you probably expect that the following day you won’t have as much luck with the lights and will hit more reds than on your lucky one-red-light commute. Conversely, if you have a day where you manage to hit every possible red light, you would probably expect to have better traffic luck the next day and be somewhere closer to your average. This is regression to the mean. Having only one red, or hitting every red, one day doesn’t cause the next day’s traffic light stoppages to be any different, but you know you will probably have a more average count of reds versus greens: no causal explanation involved, just random traffic light luck.
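A short simulation captures the commute example. The setup below (ten lights, each red half the time, so about five reds on an average day) is invented for illustration, but it shows that extreme days tend to be followed by ordinary ones with no causal story required:

```python
import random

# Simulate 10,000 commutes past 10 lights, each red with probability 0.5.
random.seed(42)
days = [sum(random.random() < 0.5 for _ in range(10)) for _ in range(10_000)]

# Look at the day after an unusually lucky day (0 or 1 reds) and the day
# after an unusually unlucky day (9 or 10 reds).
after_lucky = [days[i + 1] for i in range(len(days) - 1) if days[i] <= 1]
after_unlucky = [days[i + 1] for i in range(len(days) - 1) if days[i] >= 9]

print(f"Overall average reds per day:         {sum(days) / len(days):.1f}")
print(f"Average the day after a lucky day:    {sum(after_lucky) / len(after_lucky):.1f}")
print(f"Average the day after an unlucky day: {sum(after_unlucky) / len(after_unlucky):.1f}")
# Both follow-up averages sit near the overall mean of about 5.
```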

 

But for some reason this idea is both fascinating and hard to grasp in other areas, especially if we think that we have some control of the outcome. In Thinking Fast and Slow, Daniel Kahneman helps explain why it is so difficult in some settings for us to accept regression to the mean, what is otherwise a rather boring concept. He writes,

 

“Our mind is strongly biased toward causal explanations and does not deal well with mere statistics. When our attention is called to an event, associative memory will look for its cause – more precisely, activation will automatically spread to any cause that is already stored in memory. Causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause.”

 

Unless you truly believe that there is a god of traffic lights who rules over your morning commute, you probably don’t assign any causal mechanism to your luck with red lights. But when you are considering how well a professional golfer played on the second day of a tournament compared to the first day, or when you are considering whether intelligent women marry equally intelligent men, you are likely to have some causal idea come to mind. The golfer was more or less complacent on the second day; the highly intelligent women have to settle for less intelligent men because the highly intelligent men don’t want an intellectual equal. These are examples that Kahneman uses in the book, and they present plausible causal mechanisms, but as Kahneman shows, the simpler, though more boring, answer is regression to the mean. A golfer who performs spectacularly on day one is likely to be less lucky on day two. A highly intelligent woman is likely to marry a man with intelligence closer to average simply by statistical chance.

 

When regression to the mean violates our causal expectations, it becomes an interesting and important concept. It reveals that our minds don’t simply observe an objective reality; they impose causal structures that fit preexisting narratives. Our causal conclusions can be quite inaccurate, especially if they are influenced by biases and prejudices that are unwarranted. If we keep regression to the mean in mind, we might lose some of our exciting narratives, but our thinking will be more sound and our judgments more clear.