
Risk Literacy and Emotional Stress

In Risk Savvy, Gerd Gigerenzer argues that better risk literacy could reduce emotional stress. To emphasize this point, he writes about parents who received false positive medical test results for their newborn children. The children had been screened for biochemical disorders, and the tests indicated that a disorder was present. Upon follow-up screenings and evaluations, however, the children were found to be perfectly healthy. Nevertheless, in the long run (four years later), parents who initially received a false positive test result were more likely than other parents to say that their children required extra parental care, that their children were more difficult, and that they had more dysfunctional relationships with their children.

 

Gigerenzer suggests that the survey results represent a direct parental response to receiving a false positive test when their child was a newborn. He argues that parents received the biochemical test results without being informed about the chance of false positives and without understanding how common false positives are, due to a general lack of risk literacy. Parents initially reacted strongly to the bad news, and somewhere in their minds, even after the test was shown to be a false positive, they never adjusted their thoughts and evaluations of their children; the false positive in some ways became a self-fulfilling prophecy.

 

Writing about Gigerenzer’s argument now, it feels more far-fetched than it did on an initial reading, but I think his general claim that risk literacy and emotional stress are tied together is probably accurate. Regarding the parents in the study, he writes, “risk literacy could have moderated emotional reactions to stress that harmed these parents’ relation to their child.” Gigerenzer suggests that parents had strong negative emotional reactions when their children received a false positive and that those initial reactions carried four years into the future. Had the doctors better explained the chance of a false positive and better communicated next steps with parents, the strong negative emotional reaction could have been avoided, and parents would not have spent four years believing their child was somehow more fragile or more needy than other children. I recognize that receiving a test result with a diagnosis no parent wants to hear is stressful, and I can see where better risk communication could reduce some of that stress, but I suspect there were other factors the study picked up on. I think the results, as Gigerenzer reported them, overhype the connection between risk literacy and emotional stress.

 

Nevertheless, risk literacy is important for all of us living in a complex and interconnected world. We are constantly presented with risks, and new risks can seemingly pop up anywhere at any time. Being able to decipher and understand risk is important so that we can adjust and modulate our activities and behaviors as our environment and circumstances change. Doing so successfully should reduce our stress, while struggling to comprehend risk and adjust our behaviors and beliefs is likely to increase emotional stress. When we don’t understand risks appropriately, we can become overly fearful, spend money on unnecessary insurance, and stress ourselves over incorrect information. Developing better charts, better communication tools, and better information about risk will help individuals improve their risk literacy, and will hopefully reduce risk by allowing individuals to successfully adjust to the risks they face.

Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics, a professor of mine had us think about drug testing student athletes. We ran through a few scenarios in which we calculated how many true positive and how many false positive test results we should expect if we oversaw a university program to drug test student athletes on a regular basis. The results were surprising, and a little confusing and hard to understand at first.

 

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to produce more false positive tests than true positive tests. The big determining factors are the accuracy of the test (its sensitivity and its false positive rate) and the percentage of students actually using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does indeed use drugs. The dilemma we discussed arises when you have a test with some percentage of error and a large student athlete population with a minimal percentage of drug users. In this situation you cannot be confident that a positive test result is accurate. You will receive a number of positive tests, but most of them will actually be false positives.
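
As a rough sketch of the arithmetic (the numbers below are my own hypothetical figures, not the ones from my course), a few lines of Python show how a rare behavior combined with an imperfect test produces more false positives than true positives:

```python
# Hypothetical drug-testing program for a large student athlete population.
population = 10_000         # student athletes tested
prevalence = 0.02           # assume 2% actually use illicit drugs
sensitivity = 0.95          # test flags 95% of actual users
false_positive_rate = 0.05  # test wrongly flags 5% of non-users

users = population * prevalence               # 200 students
non_users = population - users                # 9,800 students

true_positives = users * sensitivity                  # 190 correct flags
false_positives = non_users * false_positive_rate     # 490 wrong flags

print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
# Even with a fairly accurate test, most positive results are false
# because drug use is rare in this population.
```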

 

In class, our teacher walked us through this example verbally before creating some tables we could use to multiply the percentages ourselves and see that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare condition. Our teacher went on to explain that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including, as we are learning today, COVID-19 tests). The challenge, as our professor explained, is that the math is complicated, and it is hard to explain to a person who just received a positive cancer test that they likely don’t have cancer, even though they just received a positive test. The statistics are hard to understand on their own.

 

However, Gerd Gigerenzer doesn’t think this problem is as limiting as the exercise my professor walked us through made it seem. In Risk Savvy, Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to go through and discuss, Gigerenzer suggests, can be digested in simple charts using natural frequencies. Natural frequencies are counts we can actually understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

 

Telling someone that the incidence of cancer in the population is only 1%, that the test catches 9 out of 10 true cases, and that the chance of a false positive is 9%, and then trying to convince them that they still likely don’t have cancer, is confusing. However, if you explain that out of every 1,000 people who take a particular cancer test, only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear up. Of the 10 who do have cancer, the test correctly identifies about 9, providing 9 true positives for every 1,000 tests (adjust according to the prevalence and test sensitivity). The false positives can then be explained by saying that among the 990 people who really don’t have cancer, the test will err and tell roughly 89 of them (9% in this case) that they do have cancer. So 89 individuals will receive false positives while only 9 will receive true positives. Since 89 > 9, a positive result is far from a guarantee that a person actually has cancer; in this example, fewer than 1 in 10 people with a positive test actually have the disease.
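
Using the same figures from this example (1,000 people, 1% prevalence, 9 of 10 true cases detected, a 9% false positive rate), a short sketch makes the arithmetic explicit:

```python
# Natural frequency breakdown for 1,000 people taking the test,
# using the figures from the example above.
people = 1_000
prevalence = 0.01           # 1% actually have cancer
sensitivity = 0.90          # 9 of 10 true cases are detected
false_positive_rate = 0.09  # 9% of healthy people get a positive result

have_cancer = people * prevalence                # 10 people
healthy = people - have_cancer                   # 990 people

true_positives = have_cancer * sensitivity               # 9 people
false_positives = healthy * false_positive_rate          # ~89 people

positive_predictive_value = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.0f}")
print(f"False positives: {false_positives:.0f}")
print(f"Chance a positive result is real: {positive_predictive_value:.0%}")  # ~9%
```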

 

Gigerenzer uses very helpful charts in his book to show that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to reach better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed are better tools and aids to help with comprehension. Rather than developing paternalistic policies to nudge people toward certain behaviors that they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.

Procedure Over Performance

My wife works with families of children with disabilities for a state agency. She and I often have discussions about some of the administrative challenges and frustrations of her job, and some of the creative ways that she and other members of her agency are able to bend the rules to meet the human needs of the work, even though their decisions occasionally step outside management’s standard operating procedures. For my wife and her colleagues below the management level of the agency, helping families and doing what is best for children is the motivation for all of their decisions; for the management team within the agency, however, avoiding errors and blame often seems to be the more important goal.

 

This disconnect between agency functions, mission, and procedures is not unique to my wife’s state agency. It is a challenge that Max Weber wrote about in the late 1800s and early 1900s. Somewhere along the line, public agencies and private companies seem to forget their mission. Procedure becomes more important than performance, and services or products suffer.

 

Gerd Gigerenzer offers an explanation for why this happens in his book Risk Savvy. Negative error cultures likely contribute to people becoming more focused on procedure over performance, because following perfect procedure is safe, even if it isn’t always necessary and doesn’t always lead to the best outcomes. A failure to accept risk and errors, and a failure to discuss and learn from errors, leads people to avoid situations where they could be blamed for failure. Gigerenzer writes, “People need to be encouraged to talk about errors and take the responsibility in order to learn and achieve better overall performance.”

 

As companies and government agencies age, their workforce ages. People become comfortable in their roles, they don’t want to have to look for a new job, they take out mortgages, have kids, and send them to college. People become more conservative and risk averse as they have more to lose, and that means they are less likely to take risks in their careers, because they don’t want to jeopardize the income that supports their lifestyles, retirements, or their kids’ college plans. Following procedures, like getting meaningless forms submitted on time and documenting conversations promptly, becomes more important than actually ensuring valuable services or products are provided to constituents and customers. Procedure prospers over performance, and the agency or company as a whole suffers. Positive error cultures, where it is okay to take reasonable risks and acceptable to discuss errors without fear of blame, are important for overcoming the stagnation that can arise when procedure becomes more important than the mission of the agency or company.

Risk and Innovation

To be innovative is to make decisions, develop processes, and create things in new ways that improve over the status quo. Being innovative necessarily means doing things differently, and it requires stepping away from the proven path to do something new or unusual. Risk and innovation are tied together because you cannot venture into something new or stray from the tried and true without the possibility of making a mistake and being wrong. Therefore, appropriately managing and understanding risk is imperative for innovation.

 

In Risk Savvy Gerd Gigerenzer writes, “Risk aversion is closely tied to the anxiety of making errors. If you work in the middle management of a company, your life probably revolves around the fear of doing something wrong and being blamed for it. Such a climate is not a good one for innovation, because originality requires taking risks and making errors along the way. No risks, no errors, no innovation.” Risk aversion is a fundamental aspect of human psychology. Daniel Kahneman, in Thinking Fast and Slow, shows that we generally won’t accept a gamble unless the potential payoff is about two times greater than the potential loss. We go out of our way to avoid risk, because the possibility of losing something often looms larger than the excitement of a potential gain. Individuals and companies who want to be innovative have to find ways around risk aversion in order to create something new.
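
A minimal sketch of the loss-aversion idea Kahneman describes, assuming a coefficient of roughly 2 (the exact value varies by person and study, so treat the number as illustrative):

```python
# Rough model of loss aversion: a loss "feels" about twice as bad as an
# equivalent gain, so a 50/50 gamble only becomes attractive when the
# potential gain is roughly double the potential loss.
LOSS_AVERSION = 2.0  # approximate coefficient; varies across people and studies

def gamble_feels_worthwhile(potential_gain, potential_loss, p_win=0.5):
    """Return True if the psychological value of the gamble is positive."""
    felt_value = p_win * potential_gain - (1 - p_win) * LOSS_AVERSION * potential_loss
    return felt_value > 0

print(gamble_feels_worthwhile(150, 100))  # False: the gain isn't ~2x the loss
print(gamble_feels_worthwhile(250, 100))  # True: the gain exceeds ~2x the loss
```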

 

Gigerenzer’s example of middle management is excellent for thinking about innovation and why it is often smaller companies and start-ups that make innovative breakthroughs. It also helps explain why, in the United States, so many successful and innovative companies are started by immigrants or by the super-wealthy. Large established companies are likely to have employees who have been with the company for a long time and have become more risk averse. They have families, mortgages, and might be unsure they could find an equally attractive job elsewhere. Their incentives for innovation are diminished by their fear of loss if something were to go wrong and the blame were to fall on them. Better to stick with established methods and to maximize according to well-defined job evaluation statistics than to risk trying something new and uncharted. Start-ups, immigrants, and the super-wealthy don’t face the same constraining fears. New companies attract individuals who are less risk averse to begin with, and they don’t have established methods that everyone is comfortable sticking to. Immigrants are less likely to have the kinds of financial commitments that limit a person’s willingness to take risks, and the super-wealthy may have so many resources that the risks they face are small relative to their overall wealth. The middle class, like middle management, is stuck in a position where they feel they have too much to lose by trying to be innovative, and as a result they stick to known and measured paths that ultimately reduce risk and innovation.

A Mixture of Risks

In the book Risk Savvy, Gerd Gigerenzer explains the challenges we have with thinking statistically and how these difficulties can lead to poor decision-making. Humans have trouble holding lots of complex and conflicting information in mind. We don’t do well with decisions involving risk, or with decisions where we cannot possibly know all the information necessary for the best choice. We prefer decisions involving fewer variables, where we can have more certainty about the risks and the potential outcomes. This leads to the substitution effect that Daniel Kahneman describes in his book Thinking Fast and Slow, where our minds substitute an easier question for the difficult one without us noticing.

 

Unfortunately, this can have bad outcomes for our decision-making. Gigerenzer writes, “few situations in life allow us to calculate risk precisely. In most cases, the risks involved are a mixture of more or less well known ones.” Most of our decisions that involve risk have a mixture of different risks. They are complex decisions with tiers and potential cascades of risk based on the decisions we make along the way. Few of our decisions involve just one risk independent of others that we can know with certainty.

 

If we consider investing for retirement, we can see how complex decisions involving risk can be and how a mixture of risks is present across all the decisions we have to make. We can hoard money in a safe in our house, where we reduce the risk of losing any of it, but we risk being unable to save enough by the time we are ready to retire. We can invest our money, but we have to decide whether to keep it in a bank account, put it in the stock market, or look to other investment vehicles. Our bank is unlikely to lose much money, and is low risk, but it is also unlikely to help us grow our savings enough for retirement. Investing with a financial advisor takes on more risk, such as the risk that we are being scammed, the risk that the market tanks and our advisor made bad investments on our behalf, and the risk that we won’t have access to our money if we need it quickly in an emergency. What this shows is that even the most certain option for our money, protecting it in a secret safe at home, still carries risks for the future. The option most likely to provide the greatest return on our savings, investing in the stock market, has a mixture of risks associated with each investment decision we make after the initial decision to invest. There is no way we can calculate and fully comprehend every risk involved in such a decision.

 

Risk is complex, and we rarely deal with a single decision involving a single calculable risk at one time. Our brains are likely to flatten the decision by substituting simpler decisions, eliminating some of the risks from consideration and helping our minds focus on fewer variables at a time. Nevertheless, the complex mixture of risks doesn’t go away just because our brains pretend it isn’t there.

Navigating Uncertainty with Nudges

In Risk Savvy Gerd Gigerenzer makes a distinction between known risks and uncertainty. In a footnote for a figure, he writes, “In everyday language, we make a distinction between certainty and risk, but the terms risk and uncertainty are used mostly as synonyms. They aren’t. In a world of known risks, everything, including the probabilities, is known for certain. Here statistical thinking and logic are sufficient to make good decisions. In an uncertain world, not everything is known, and one cannot calculate the best option. Here, good rules of thumb and intuition are also required.” Gigerenzer’s distinction between risk and uncertainty is important. He demonstrates that people can manage decision-making when dealing with known risks, but that they need to rely on intuition and good judgement when dealing with uncertainty. One way to improve judgement and intuition is to use nudges.

 

In the book Nudge, Cass Sunstein and Richard Thaler encourage choice architects to design systems and structures that will help individuals make the best decision in a given situation, as defined by the chooser. Much of their argument is supported by research presented by Daniel Kahneman in Thinking Fast and Slow, where Kahneman demonstrates how predictable biases and cognitive errors can lead people to make decisions that they likely wouldn’t make if they had clearer information, could free themselves from irrelevant biases, and could improve their statistical thinking. Gigerenzer’s quote supports Sunstein and Thaler’s nudges by building on the research from Kahneman. Distinguishing between risk and uncertainty helps us understand when to use nudges, and how aggressive our nudges may need to be.

 

Gigerenzer uses casino slot machines as an example of known risk, and stocks, romance, earthquakes, business, and health as examples of uncertainty. When we are gambling, we can know the statistical chances that our bets will pay off and calculate optimal strategies (there is a reason the casino dealer stays on 17). We won’t know what the outcome will be ahead of time, but we can precisely define the risk. The same cannot be said for picking the right stocks or the right romantic partner, or for creating business, earthquake preparedness, or health plans. We may know the five-year rate of return for a company’s stock, the divorce rate in our state, the average frequency and strength of earthquakes in our region, and how old our grandfather lived to be, but we cannot use this information alone to calculate risk. We don’t know exactly what business trends will arise in the future, we don’t know for sure whether we have a genetic disease that will strike us (or our romantic partner) down sooner than expected, and we can’t say for sure that a 7.0 earthquake is or is not possible next month.
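
As a quick illustration of a known, calculable risk (a standard textbook example, not one taken from Risk Savvy), the expected loss on a single-number roulette bet can be worked out exactly:

```python
# Expected value of a $1 single-number bet on an American roulette wheel.
# There are 38 pockets (1-36, 0, 00); a winning number pays 35 to 1.
pockets = 38
payout = 35          # profit on a win, per $1 bet
p_win = 1 / pockets
p_lose = 1 - p_win

expected_value = p_win * payout - p_lose * 1
print(f"Expected value per $1 bet: ${expected_value:.4f}")  # about -$0.0526
# The risk is fully known: on average the bettor loses about 5.3 cents per dollar.
```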

 

But nudges can help us in these decisions. We can use statistical information for business development and international stock returns to identify general rules of thumb when investing. We can listen to parents and elders and learn from their advice and mistakes when selecting a romantic partner, intuiting the traits that make a good (or bad) spouse. We can overengineer our bridges and skyscrapers by 10% to give us a little more assurance that they can survive a major and unexpected earthquake. Nudges are helpful because they can augment our gut instincts and help bring visualizations to the rules of thumb that we might utilize.

 

Expecting everyone’s individual intuition and heuristics to be up to the task of navigating uncertainty is likely to lead to many poor choices. But if we pool the statistical information available, provide guides, communicate rules of thumb that have panned out for many people, and structure choices in ways that help present this information, then people can likely make marginally better decisions. My suggestion in this post is a nudge to use more nudges in moments of uncertainty. When certainty exists, or even when calculable risks exist, nudges may not be needed. However, once we get beyond calculable risk, where we must rely on judgement and intuition, nudges are important tools to help people navigate uncertainty and improve their decision-making.

Dread Risks

Over the course of 2020 we watched COVID-19 shift from a dread risk to a less alarming one. To some extent, COVID-19 became a mundane risk that we adjusted to and learned to live with. Our initial reactions to COVID-19, and our later discontented but general acceptance of it, reveal interesting ways in which the mind works. Sudden and unexplained deaths and risks are terrifying, while continual risk is to some extent ignored, even when the dangers we ignore pose the greater threat.

 

In Risk Savvy Gerd Gigerenzer describes dread risks and our psychological reactions by writing, “low-probability events in which many people are suddenly killed trigger an unconscious psychological principle: If many people die at one point in time, react with fear and avoid that situation.” Dread risks are instances like terrorist attacks, sudden bridge collapses, and commercial food contamination events. A risk that we did not consider is thrust into our minds, and we react strongly by avoiding something we previously thought to be safe.

 

An unfortunate reality of dread risks is that they distract us and pull our energy and attention away from ongoing and more mundane risks. This has been a challenge as we try to keep people focused on limiting COVID-19 and not simply accepting deaths from the disease the way we accept deaths from car crashes, gun violence, and secondhand smoke exposure. Gigerenzer continues, “But when as many or more die distributed over time, such as in car and motorbike accidents, we are less likely to be afraid.” Dread risks trigger fears and responses that distributed risks don’t.

 

This psychological bias drove the United States into wars in Iraq and Afghanistan in the early 2000s, and we are still paying the price for those wars. The shift of COVID-19 in our collective consciousness from a dread risk to a distributed risk led to mass political rallies, unwise indoor gatherings, and other social and economic events where people contracted the disease and died, even though they should have known to be more cautious. Reacting appropriately to a dread risk is difficult, and giving distributed risks the attention and resources they deserve is also difficult. The end result is poor public policy, poor individual decision-making, and potentially the loss of life as we fail to use resources in a way that saves the most lives.

Stats and Messaging

In the past, I have encouraged attaching probabilities and statistical chances to the things we believe or to events we think may (or may not) occur. For example, say Steph Curry’s three-point shooting percentage is about 43%, and I am two Steph Currys confident that my running regimen will help me qualify for the Boston Marathon. One might also be two Steph Currys confident that leaving now will guarantee they are at the theater in time for the movie, or that most COVID-19 restrictions will be rescinded by August 2021, allowing people to go to movies again. However, the specific percentages I am attaching in these examples may be meaningless, and may not really convey an important message for most people (myself included!). It turns out that modern-day statistics and the messaging attached to them are not well understood.

 

In his book Risk Savvy, Gerd Gigerenzer discusses the disconnect between stats and messaging, and the mistake most people make. The main problem with using statistics is that people don’t really know what the statistics mean in terms of actual outcomes. This was seen in the 2016 US presidential election, when sources like FiveThirtyEight gave Trump a 28.6% chance of winning, and again in 2020, when the election was closer than many predicted but still well within the forecasted range. In both instances, a Trump win was considered such a low-probability event that people dismissed it as a real possibility, only to be shocked when Trump did win in 2016 and performed better than many expected in 2020. People failed to fully appreciate that FiveThirtyEight’s prediction meant that Trump won in 28.6% of their 2016 election simulations, and that in 2020 many of their models produced races both closer than and wider than the result we actually observed.
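
One way to internalize what a 28.6% forecast means is to simulate it: across many simulated elections, the “unlikely” outcome still happens well over a quarter of the time. A quick sketch (the 28.6% figure is FiveThirtyEight’s published 2016 number; everything else here is illustrative):

```python
import random

# Simulate many elections in which one candidate has a 28.6% chance of winning.
random.seed(0)
p_win = 0.286
trials = 100_000

wins = sum(random.random() < p_win for _ in range(trials))
print(f"'Upset' outcome occurred in {wins / trials:.1%} of simulations")
# Roughly 29% of the time - slightly more likely than a fair coin landing
# heads twice in a row, which few people would treat as impossible.
```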

 

Regarding weather forecasting and statistical confusion, Gigerenzer writes, “New forecasting technology has enabled meteorologists to replace mere verbal statements of certainty (it will rain tomorrow) or chance (it is likely) with numerical precision. But greater precision has not led to greater understanding of what the message really is.” Gigerenzer explains that in the context of weather forecasts, people often fail to understand that a 30% chance of rain means that on 30% of days when the observed weather factors (temperature, humidity, wind speeds, etc.) match the conditions predicted for that day, rain occurs. Or that models taking those weather factors into account simulated 100 days of weather with those conditions and included rain on 30 of them. What is missing, Gigerenzer explains, is the reference class. Telling people there is a 30% chance of rain could lead them to think that it will rain for 30% of the day, that 30% of the city they live in will be rained on, or perhaps they will misunderstand the forecast in some completely unpredictable way.
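
A minimal sketch of the fix Gigerenzer points toward, stating the reference class explicitly alongside the number (the wording of the output is my own illustration, not his):

```python
def forecast_with_reference_class(chance_of_rain):
    """Express a rain probability as a natural frequency with an explicit reference class."""
    days = round(chance_of_rain * 100)
    return (f"Out of 100 days with weather conditions like those predicted for tomorrow, "
            f"it rains on about {days} of them.")

print(forecast_with_reference_class(0.30))
# Out of 100 days with weather conditions like those predicted for tomorrow,
# it rains on about 30 of them.
```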

 

Probabilities are hard for people to understand, especially when they are busy, have other things on their minds, and don’t know the reference class. Providing probabilities that don’t connect to a real reference class can be misleading and unhelpful. This is why my suggestion of tying beliefs and possible outcomes to a statistic might not actually be meaningful. If we don’t have a reasonable reference class and a way to understand it, then it doesn’t matter how many Steph Currys of confidence I attach to something. I think we should take statistics into consideration in important decision-making, and I think Gigerenzer would agree, but if we are going to communicate our decisions in terms of statistics, we need to clearly state and explain the reference classes and provide the appropriate tools to help people understand the stats and messaging.

Risk Literacy

In February of 2020 I finished a book called Risk Savvy by Gerd Gigerenzer. At the time I read the book, I could not have predicted that thinking about risk would come to dominate the remainder of the year. Throughout 2020 and into the start of 2021, humanity across the globe has demonstrated how poorly we think about and handle risk. The United States has clearly been worse than most countries, as we have failed to understand the risk of COVID-19, failed to grapple with the risk of crowds and appropriate uses of force, and failed to adequately assess the risk of a President living in a state of denial and delusion. As Gigerenzer writes on page 6 of his book, “Risk literacy is the basic knowledge required to deal with a modern technological society,” and in many ways, the United States and the rest of humanity have shown that risk literacy is deeply lacking.

 

Gigerenzer believes that we are smart, that we are resourceful, and that with proper aids and education, we can become risk literate. Whether we recognize it or not, we already calculate risk and make decisions based on it. Understanding risk can lead us to pack an umbrella and wear a waterproof windbreaker when the weather station forecasts rain. We can make sound investments without understanding every aspect of an investment, thanks to savings vehicles that help us better understand and calibrate risk. And we can decide to go to a movie or skip it based on aggregated reviews and ratings scores on Rotten Tomatoes.

 

At the same time, we have had trouble understanding our individual risks related to COVID-19, we have had trouble understanding the risks and benefits of wearing masks, and we have dismissed what seem like impossible possibilities until they happen to us personally, or happen in a dramatic way on TV. We are capable of making good decisions based on perceptions and understandings of risk, but we have also shown ourselves to be risk-illiterate.

 

It is clear that moving forward societies will have to do better to become risk literate. We will have to improve our ability to communicate risk, estimate risk, and take appropriate precautions or actions. We cannot live in a world free from risk, and new technologies, ecological pressures, and sociopolitical realities will change the risk calculations that everyone will have to make. Improving our risk literacy might mean that we don’t have over 400,000 people die during future respiratory pandemics. It might mean we have robust economic systems that don’t damage the planet. And it might mean we are able to live together peacefully with global superpowers competing economically. Failure to address risk and failure to improve risk literacy could lead to disaster in any one of those areas.

Nudges for Unrealistic Optimism

Our society makes fun of the unrealistic optimist all the time, but the reality is that most of us are unreasonably optimistic in many aspects of our life. We might not all believe that we are going to receive a financial windfall this month, that our favorite sports team will go from losing almost all their games last year to the championship this year, or that everyone in our family will suddenly be happy, but we still manage to be more optimistic about most things than is reasonable.

 

Most people believe they are better than average drivers, even though, by definition, only half the people in a population can be above the median. Most of us probably think we will get a promotion or raise sooner rather than later, and most of us probably think we will live to be 100 and won’t get cancer, go bald, or be in a serious car crash (after all, we are all above-average drivers, right?).

 

Our overconfidence is often necessary for daily life. If you are in sales, you need to be unrealistically optimistic that you are going to get a big sale, or you won’t continue to pick up the phone for cold calls. We would all prefer the surgeon who leans toward overconfidence to the surgeon who doubts their ability and asks whether we have finalized our will before going into the operating room. And even just for going to the store, doing a favor for a neighbor, or paying for sports tickets, overconfidence is a feature, not a bug, of our thinking. But still, there are times when overconfidence can be a problem.

 

2020 is an excellent example. If we all think, “I’m not going to catch COVID,” then we are less likely to take precautions and more likely to actually catch the disease. This is where helpful nudges can come into play.

 

In Nudge, Cass Sunstein and Richard Thaler write, “If people are running risks because of unrealistic optimism, they might be able to benefit from a nudge. In fact, we have already mentioned one possibility: if people are reminded of a bad event, they may not continue to be so optimistic.”

 

Reminding people of others who have caught COVID might help encourage them to take appropriate safety precautions. Reminding a person who wants to trade stocks of their previous poor decisions might encourage them to make better investment choices than trying their hand at day trading. A quick pop-up from a website blocker might encourage someone not to risk checking social media while they are supposed to be working, saving them from the one time their supervisor walks by while they are scrolling through someone’s profile. Overconfidence may be necessary for us, but it can lead to risky behavior and can have serious downsides. If slight nudges can help steer people away from the catastrophic consequences of unrealistic optimism, then they should be employed.