Risk Savvy Citizens

In Risk Savvy Gerd Gigerenzer argues for changes to the way that financial systems, healthcare systems, and discourse around public projects operate. He argues that we are too afraid of risk, that we allow large organizations to profit from misunderstandings of risk, and that our goals are often thwarted by poor conceptions of risk. Becoming risk savvy citizens, he argues, can help us improve our institutions and make real change to move forward in an uncertain world.

“The potential lies in courageous and risk savvy citizens,” writes Gigerenzer.

I think that Gigerenzer is correct to identify the importance of risk savvy citizens. We are more interconnected than we ever have been, and developing new innovations will require accepting new risks. Many of the institutions we have built today exist to minimize both risk and uncertainty, unintentionally limiting innovation. Moving forward, we will have to develop better relationships toward risk to accept and navigate uncertainty.

A population of risk savvy citizens can help reshape existing institutions, and will have to lean into risk to develop new institutions for the future. This idea was touched on in Bruce Katz and Jeremy Nowak’s book The New Localism, where they argue that we need new forms of public-private partnerships to manage investment, development, and public asset management. Public agencies, institutions we have relied upon, have trouble managing and accepting risk, even if they are composed of risk savvy citizens. The solution, Katz and Nowak might suggest, is to reshape institutions so that risk savvy citizens can truly act and respond in ways that successfully manage reasonable risks. Institutions matter, and they need to be reshaped and reformed if our courageous and risk savvy citizens are going to help change the world and solve some of the ills that Gigerenzer highlights.

Risk Literacy & Reduced Healthcare Costs

Gerd Gigerenzer argues that risk literacy and reduced healthcare costs go together in his book Risk Savvy. By increasing risk literacy we will help both doctors and patients better understand how behaviors contribute to overall health, how screenings may or may not reveal dangerous medical conditions, and whether medications will or will not make a difference for an individual’s long-term well-being. Having both doctors and patients better understand and better discuss the risks and benefits of procedures, drugs, and lifestyle changes can help us use our healthcare resources more wisely, ultimately bringing costs down.
Gigerenzer argues that much of the modern healthcare system, not just the US system but the global healthcare system, has been designed to sell more drugs and more technology. Increasing the number of people using medications, getting more doctors to order more tests with new high-tech diagnostic machines, and driving more procedures became more of a goal than actually helping to improve people’s health. Globally, health and the quality of healthcare have improved, but healthcare is often criticized as a low productivity sector, with relatively low gains in health or efficiency for the investments we make.
I don’t know that I am cynical enough to accept all of Gigerenzer’s argument at face value, but the story of opioids, the fact that we invest much larger sums of money in cancer research versus parasitic disease research, and the ubiquitous use of MRIs in our healthcare landscape do favor Gigerenzer’s argument. There hasn’t been as much focus on improving doctor and patient statistical reasoning, and we haven’t put forward the same effort and funding to remove lead from public parks compared to the funding put forward for cancer treatments. We see medicine as using fancy new technologies and drugs to treat diseases after they have popped up. We don’t see medicine as improving risk and health literacy or as helping improve the environment before people get sick.
This poor vision of healthcare that we have lived with for so long, Gigerenzer goes on to argue, has blinded us to the real possibilities within healthcare. Gigerenzer writes, “calls for better health care have been usually countered by claims that this implies one of two alternatives, which nobody wants: raising taxes or rationing care. I argue that there is a third option: by promoting health literacy of doctors and patients, we can get better care for less money.”
Improving risk and health literacy means that doctors can better understand and better communicate which medications, which tests, and which procedures are most likely to help patients. It will also help patients better understand why certain recommendations have been made and will help them push back against the feeling that they always need the newest drugs, the most cutting edge surgery, and the most expensive diagnostic screenings. Regardless of whether we raise taxes or try to ration care, we have to help people truly understand their options in new ways that incorporate tools to improve risk literacy and reduce healthcare costs. By better understanding the system, our own care, and our systemic health, we can better utilize our healthcare resources, and hopefully bring down costs by moving our spending into higher productivity healthcare spaces.
Believing We Are Well Informed

In his book Risk Savvy, Gerd Gigerenzer demonstrated that people often overestimate their level of knowledge about the benefits of cancer screenings. “A national telephone survey of U.S. adults,” he writes, “reported that the majority were extremely confident in their decision about prostate, colorectal, and breast screening, believed they were well informed, but could not correctly answer a single knowledge question.” I think this quote reveals something important about the way our minds work. We often believe we are well informed, but that belief and our confidence in our knowledge is often an illusion.
This is something I have been trying to work on. My initial reaction any time I hear a fact or a discussion about a topic is to position myself as a knowledgeable semi-expert on it. I have noticed that I do this with ideas and topics that I have really only heard once or twice on a commercial, or that I have seen in a headline, or that I once overheard someone talking about. I immediately feel like an expert even though my knowledge is often less than surface deep.
I think that what is happening in these situations is that I am substituting a different question for an honest assessment of my expertise or knowledge. Instead of asking what I actually know, I am answering the question can I recall a time when I thought about this thing. Mental substitution is common, but hard to actually detect. I suspect that the easier a topic comes to mind, even if it is a topic I don’t know anything about but have only heard the name of, the more likely I am to feel like I am an expert.
Gigerenzer’s quote shows that people will believe themselves to be well informed even if they cannot answer a basic knowledge question about the topic. Rather than substituting the question can I recall a time when I thought about this thing, patients may also be substituting another question. Instead of analyzing their confidence in their own decision regarding cancer screening, people may be substituting the question do I trust my doctor? Trust in a physician, even without any knowledge about the procedure, may be enough for people to feel extremely confident in their decisions. They don’t have to know a lot about their health or how a procedure is going to impact it, they just need to be confident that their physician does.
These types of substitutions are important for us to recognize. We should try to identify when we are falling victim to the availability bias and when we are substituting different questions that are easier for us to answer. In a well functioning and accurate healthcare setting these biases and cognitive errors may not harm us too much, but in a world of uncertainty, we stand to lose a lot when we fail to recognize how little we actually know. Being honest about our knowledge and thinking patterns can help us develop better systems and structures in our lives to improve and guide our decision-making.

Medical Progress

What does medical progress look like? To many, medical progress looks like new machines, artificial intelligence to read your medical reports and x-rays, or new pharmaceutical medications to solve all your ailments with a simple pill. However, much of medical progress might be improved communication, better management and operating procedures, and better understandings of statistics and risk. In the book Risk Savvy, Gerd Gigerenzer suggests that there is a huge opportunity to improve physicians’ understanding of risk, improve communication around statistics, and build better processes related to risk, all of which would help spur real medical progress.

 

He writes, “Medical progress has become associated with better technologies, not with better doctors who understand these technologies.” Gigerenzer argues that there is currently an “unbelievable failure of medical schools to provide efficient training in risk literacy.” Much of the focus of medical schools and physician education is on memorizing facts about specific disease states, treatments, and how a healthy body should look. What is not focused on, in Gigerenzer’s 2014 argument, is how physicians understand the statistical results from empirical studies, how physicians interpret risk given a specific biological marker, and how physicians can communicate risk to patients in a way that adequately informs their healthcare decisions.

 

Our health is complex. We all have different genes, different family histories, different exposures to environmental hazards, and different lifestyles. These factors interact in many complex ways, and our health is often a downstream consequence of many fixed factors (like genetics) and many social determinants of health (like whether we have a safe park where we can walk, or whether we grew up in a house infested with mold). Understanding how all these factors interact and shape our current health is not easy.

 

Adding new technology to the mix can help us improve our treatments, our diagnoses, and our lifestyle or environment. However, simply layering new technology onto existing complexity is not enough to really improve our health. Medical progress requires better ways to use and understand the technology that we introduce; otherwise, we are just adding layers to the existing complexity. If physicians cannot understand, cannot communicate, and cannot help people make reasonable decisions based on technology and the data that feeds into it, then we won’t see the medical progress we all hope for. It is important that physicians be able to understand the complexity, the risk, and the statistics involved so that patients can learn how to actually improve their behaviors and lifestyles and so that societies can address social determinants of health to better everyone’s lives.
Risk Literacy and Emotional Stress

In Risk Savvy Gerd Gigerenzer argues that better risk literacy could reduce emotional stress. To emphasize this point, Gigerenzer writes about parents who received false positive medical test results for their infant children. The children had been screened for biochemical disorders, and the tests indicated that they had a disorder. However, upon follow-up screenings and evaluations, the children were found to be perfectly healthy. Nevertheless, in the long run (four years later) parents who initially received a false positive test result were more likely than other parents to say that their children required extra parental care, that their children were more difficult, and that they had more dysfunctional relationships with their children.

 

Gigerenzer suggests that the survey results represent a direct parental response to initially receiving a false positive test when their child was a newborn. He argues that parents received the biochemical test results without being informed about the chance of false positives and without understanding the prevalence of false positives due to a general lack of risk literacy. Parents initially reacted strongly to the bad news of the test, and somewhere in their minds, even after the test was proven to be a false positive, they never adjusted their thoughts and evaluations of their children. The false positive test, in some ways, became a self-fulfilling prophecy.

 

Writing about Gigerenzer’s argument now, it feels more far-fetched than it did on an initial reading, but I think his general argument that risk literacy and emotional stress are tied together is probably accurate. Regarding the parents in the study, he writes, “risk literacy could have moderated emotional reactions to stress that harmed these parents’ relation to their child.” Gigerenzer suggests that parents had strong negative emotional reactions when their children received a false positive and that their initial reactions carried four years into the future. However, had the doctors better explained the chance of a false positive and better communicated next steps with parents, then the strong negative emotional reaction experienced by parents could have been avoided, and they would not have spent four years believing their child was in some ways more fragile or more needy than other children. I recognize that receiving a medical test with a diagnosis that no parent wants to hear is stressful, and I can see where better risk communication could reduce some of that stress, but I think there could have been other factors that the study picked up on. I think the results, as Gigerenzer reported them, overhype the connection between risk literacy and emotional stress.

 

Nevertheless, risk literacy is important for all of us living in a complex and interconnected world today. We are constantly presented with risks, and new risks can seemingly pop up anywhere at any time. Being able to decipher and understand risk is important so that we can adjust and modulate our activities and behaviors as our environment and circumstances change. Doing so successfully should reduce our stress, while struggling to comprehend risk and adjust behaviors and beliefs is likely to increase emotional stress. When we don’t understand risks appropriately, we can become overly fearful, we can spend money on unnecessary insurance, and we can stress ourselves over incorrect information. Developing better charts, better communicative tools, and better information about risk will help individuals improve their risk literacy, and will hopefully reduce stress by allowing individuals to successfully adjust to the risks they face.
Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics a professor of mine had us think about drug testing student athletes. We ran through a few scenarios where we calculated how many true positive test results and how many false positive test results we should expect if we oversaw a university program to drug test student athletes on a regular basis. The results were surprising, and a little confusing and hard to understand.

 

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to reveal more false positive tests than true positive tests. The big determining factors are the accuracy of the test (its sensitivity and specificity) and the percentage of students actually using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does indeed use drugs. The dilemma we discussed occurs if you have a test with some percentage of error and a large student athlete population with a minimal percentage of drug users. In this instance you cannot be confident that a positive test result is accurate. You will receive a number of positive tests, but most of the positive tests that you receive are actually false positives.

 

In class, our teacher walked us through this example verbally before creating some tables that we could use to multiply the percentages ourselves to see that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare event that you are testing for. Our teacher continued to explain that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests as we are learning today). The challenge, as our professor explained, is that the math is complicated and it is hard to explain to a person who just received a positive cancer test that they likely don’t have cancer, even though they just received a positive test. The statistics are hard to understand on their own.

 

However, Gerd Gigerenzer doesn’t think this needs to be as limiting a problem as my professor made it seem. In Risk Savvy Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to go through and discuss, Gigerenzer suggests can be digested in simple charts using natural frequencies. Natural frequencies are numbers we can actually understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

 

Telling someone that the actual incidence of cancer in the population is only 1%, that the chance of a false positive test is 9%, and that they therefore still likely don’t have cancer is confusing. However, if you explain to an individual that for every 1,000 people who take a particular cancer test, only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear up. Starting from those two groups, the 10 people who have cancer and the 990 who don’t, you can explain that the test correctly identifies 9 of the 10 who do have cancer, providing 9 true positive results for every 1,000 tests (or adjust according to the population and test sensitivity). The false positive number can then be explained by saying that for the 990 people who really don’t have cancer, the test will err and tell 89 of them (9% in this case) that they do have cancer. So, we see that 89 individuals will receive false positives while only 9 people will receive true positives. Of the 98 people who test positive, just 9 actually have cancer, so a positive result is far from a guarantee of having the disease.
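
To make the arithmetic concrete, here is a minimal sketch in Python of the natural frequency translation described above. It uses the same illustrative numbers from the example (1% prevalence, a test that catches 9 of every 10 real cases, and a 9% false positive rate); these are the example’s figures, not data from the book.

```python
# A natural frequency sketch of the cancer screening example above.
# Assumed figures (from the example): 1% prevalence, a test that catches
# 9 out of 10 real cases (90% sensitivity), a 9% false positive rate,
# and a reference group of 1,000 people.

def natural_frequencies(population=1000, prevalence=0.01,
                        sensitivity=0.90, false_positive_rate=0.09):
    """Translate probabilities into counts out of a reference population."""
    sick = round(population * prevalence)                    # 10 people have cancer
    healthy = population - sick                              # 990 people do not
    true_positives = round(sick * sensitivity)               # 9 of the 10 test positive
    false_positives = round(healthy * false_positive_rate)   # ~89 of the 990 test positive
    chance_if_positive = true_positives / (true_positives + false_positives)
    return sick, healthy, true_positives, false_positives, chance_if_positive

sick, healthy, tp, fp, ppv = natural_frequencies()
print(f"Out of 1,000 people: {sick} have cancer, {healthy} do not.")
print(f"Positive tests: {tp} true positives and {fp} false positives.")
print(f"Chance of cancer given a positive test: about {ppv:.0%}")
```

Running the sketch reproduces the counts above: 9 true positives, 89 false positives, and roughly a 9% chance of cancer given a positive test.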

 

Gigerenzer uses very helpful charts in his book to show us that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to get to better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed are better tools and aids to help with comprehension. Rather than developing paternalistic policies to nudge people toward certain behaviors that they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.
Procedure Over Performance

My wife works with families of children with disabilities for a state agency. She and I often have discussions about some of the administrative challenges and frustrations of her job, and about some of the creative ways that she and her colleagues are able to bend the rules to meet the human needs of the job, even though their decisions occasionally step beyond management’s standard operating procedures. For my wife and her colleagues below the management level of the agency, helping families and doing what is best for children is the motivation for all of their decisions. For the management team within the agency, however, avoiding errors and blame often seems to be the more important goal.

 

This disconnect between agency functions, mission, and procedures is not unique to my wife’s state agency. It is a challenge that Max Weber wrote about in the late 1800s and early 1900s. Somewhere along the line, public agencies and private companies seem to forget their mission. Procedure becomes more important than performance, and services or products suffer.

 

Gerd Gigerenzer offers an explanation for why this happens in his book Risk Savvy. Negative error cultures likely contribute to people becoming more focused on procedure over performance, because following perfect procedure is safe, even if it isn’t always necessary and doesn’t always lead to the best outcomes. A failure to accept risk and errors, and a failure to discuss and learn from errors, leads people to avoid situations where they could be blamed for failure. Gigerenzer writes, “People need to be encouraged to talk about errors and take the responsibility in order to learn and achieve better overall performance.”

 

As companies and government agencies age, their workforces age. People become comfortable in their roles, they don’t want to have to look for a new job, they take out mortgages, have kids, and send them to college. People become more conservative and risk averse as they have more to lose, and that means they are less likely to take risks in their careers, because they don’t want to lose the income that supports their lifestyles, retirements, or the college plans for their kids. Following procedures, like getting meaningless forms submitted on time and documenting conversations promptly, becomes more important than actually ensuring valuable services or products are provided to constituents and customers. Procedure prospers over performance, and the agency or company as a whole suffers. Positive error cultures, where it is ok to take reasonable risks and acceptable to discuss errors without fear of blame, are important for overcoming the stagnation that can arise when procedure becomes more important than the mission of the agency or company.
Risk and Innovation

To be innovative is to make decisions, develop processes, and create things in new ways that improve over the status quo. Being innovative is necessarily different, and requires stepping away from the proven path to do something new or unusual. Risk and innovation are tied together because you cannot venture into something new or stray from the tried and true without the possibility of making a mistake and being wrong. Therefore, appropriately managing and understanding risk is imperative for innovation.

 

In Risk Savvy Gerd Gigerenzer writes, “Risk aversion is closely tied to the anxiety of making errors. If you work in the middle management of a company, your life probably revolves around the fear of doing something wrong and being blamed for it. Such a climate is not a good one for innovation, because originality requires taking risks and making errors along the way. No risks, no errors, no innovation.” Risk aversion is a fundamental aspect of human psychology. Daniel Kahneman, in Thinking Fast and Slow, shows that we generally won’t accept a risky bet unless the potential payoff is about two times greater than the potential loss. We go out of our way to avoid risk, because the prospect of losing something often looms larger than the excitement of a potential gain. Individuals and companies who want to be innovative have to find ways around risk aversion in order to create something new.
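
As a rough illustration of that two-to-one threshold, here is a small sketch with an assumed loss aversion coefficient of 2, a value in line with the range Kahneman reports rather than a figure from Risk Savvy. It shows why a 50/50 gamble only starts to feel acceptable once the potential gain is about twice the potential loss.

```python
# A rough sketch of the two-to-one threshold described above. The loss
# aversion coefficient of 2 is an assumed value in line with the range
# Kahneman reports, not a number taken from Risk Savvy.

LOSS_AVERSION = 2.0  # losses are weighted about twice as heavily as gains

def gamble_feels_acceptable(gain, loss, p_win=0.5):
    """Return True if a gamble's psychologically weighted value is positive."""
    weighted_value = p_win * gain - (1 - p_win) * LOSS_AVERSION * loss
    return weighted_value > 0

# A coin flip to win $150 or lose $100 gets rejected despite its positive
# expected value; the gain has to approach twice the loss before it feels fair.
print(gamble_feels_acceptable(gain=150, loss=100))  # False
print(gamble_feels_acceptable(gain=210, loss=100))  # True
```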

 

Gigerenzer’s example of middle management is excellent for thinking about innovation and why it is often smaller companies and start-ups that make innovative breakthroughs. It also helps explain why in the United States so many successful and innovative companies are started by immigrants or by the super-wealthy. Large established companies are likely to have employees who have been with the company for a longer time and have become more risk averse. They have families, mortgages, and might be unsure they could find an equally attractive job elsewhere. Their incentives for innovation are diminished by their fear of loss if something were to go wrong and the blame were to fall on them. Better to stick with established methods and to maximize according to well defined job evaluation statistics than to risk trying something new and uncharted. Start-ups, immigrants, and the super-wealthy don’t have the same constraining fears. New companies attract individuals who are less risk averse to begin with, and they don’t have established methods that everyone is comfortable sticking to. Immigrants are not as likely to have the same financial resources that limit their willingness to take risks, and the super-wealthy may have so many resources that the risks they face are small relative to their overall wealth. The middle class, like middle management, is stuck in a position where people feel they have too much to lose in trying to be innovative, and as a result they stick to known and measured paths that ultimately reduce risk and innovation.
A Mixture of Risks

In the book Risk Savvy, Gerd Gigerenzer explains the challenges we have with thinking statistically and how these difficulties can lead to poor decision-making. Humans have trouble holding lots of complex and conflicting information. We don’t do well with decisions involving risk and decisions where we cannot possibly know all the relevant information necessary for the best decision. We prefer to make decisions involving fewer variables, where we can have more certainty about our risks and about the potential outcomes. This leads to the substitution effect that Daniel Kahneman describes in his book Thinking Fast and Slow, where our minds substitute an easier question for the difficult question without us noticing.

 

Unfortunately, this can have bad outcomes for our decision-making. Gigerenzer writes, “few situations in life allow us to calculate risk precisely. In most cases, the risks involved are a mixture of more or less well known ones.” Most of our decisions that involve risk have a mixture of different risks. They are complex decisions with tiers and potential cascades of risk based on the decisions we make along the way. Few of our decisions involve just one risk independent of others that we can know with certainty.

 

If we consider investing for retirement we can see how complex decisions involving risk can be and how a mixture of risks is present across all the decisions we have to make. We can hoard money in a safe in our house, where we reduce the risk of losing any of our money, but we risk being unable to save enough by the time we are ready to retire. We can invest our money, but then have to make decisions regarding whether we will keep it in a bank account, invest it in the stock market, or look to other investment vehicles. Our bank is unlikely to lose much money, and is low risk, but is also unlikely to help us increase the value of our savings enough for retirement. Investing with a financial advisor takes on more risk, such as the risk that we are being scammed, the risk that the market tanks and our advisor made bad investments on our behalf, and the risk that we won’t have access to our money if we were to need it quickly in case of an emergency. What this shows is that even the most certain option for our money, protecting it in a secret safe at home, still contains additional risks for the future. The option that is likely to provide us with the greatest return on our savings, investing in the stock market, has a mixture of risks associated with each investment decision we make after the initial decision to invest. There is no way we can calculate and fully comprehend every risk involved with such an investment decision.
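
To see how even this simplified trade-off resists a single clean calculation, here is an illustrative sketch comparing the two extremes described above. Every number in it (savings rate, inflation, average returns, volatility) is an assumption made up for illustration, not a figure from Risk Savvy.

```python
# Illustrative only: a tiny Monte Carlo sketch of the retirement example above.
# All numbers here are assumptions made up for illustration.
import random

YEARS = 30
SAVINGS_PER_YEAR = 10_000
INFLATION = 0.03      # assumed: cash at home loses ~3% of purchasing power per year
STOCK_MEAN = 0.07     # assumed average annual real return on stocks
STOCK_STDEV = 0.15    # assumed annual volatility of stock returns

def cash_outcome():
    """Hoard cash in a safe: no market risk, but purchasing power erodes."""
    total = 0.0
    for _ in range(YEARS):
        total = (total + SAVINGS_PER_YEAR) * (1 - INFLATION)
    return total

def stock_outcome():
    """Invest in stocks: higher expected value, but a wide spread of outcomes."""
    total = 0.0
    for _ in range(YEARS):
        total = (total + SAVINGS_PER_YEAR) * (1 + random.gauss(STOCK_MEAN, STOCK_STDEV))
    return total

trials = sorted(stock_outcome() for _ in range(10_000))
print(f"Cash in the safe (real value after {YEARS} years): {cash_outcome():,.0f}")
print(f"Stocks, median outcome:                            {trials[len(trials) // 2]:,.0f}")
print(f"Stocks, a bad (5th percentile) outcome:            {trials[len(trials) // 20]:,.0f}")
```

Even this toy model only captures two of the risks in the mixture; it says nothing about scams, emergencies, or the many other risks described above.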

 

Risk is complex, and we rarely deal with a single decision involving a single calculable risk at one time. Our brains are likely to flatten the decision by substituting simpler decisions, eliminating some of the risks from consideration and helping our mind focus on fewer variables at a time. Nevertheless, the complex mixture of risks doesn’t go away just because our brains pretend it isn’t there.
Navigating Uncertainty with Nudges

In Risk Savvy Gerd Gigerenzer makes a distinction between known risks and uncertainty. In a footnote to a figure, he writes, “In everyday language, we make a distinction between certainty and risk, but the terms risk and uncertainty are used mostly as synonyms. They aren’t. In a world of known risks, everything, including the probabilities, is known for certain. Here statistical thinking and logic are sufficient to make good decisions. In an uncertain world, not everything is known, and one cannot calculate the best option. Here, good rules of thumb and intuition are also required.” Gigerenzer’s distinction between risk and uncertainty is important. He demonstrates that people can manage decision-making when making risk based decisions, but that people need to rely on intuition and good judgement when dealing with uncertainty. One way to support that judgement and intuition is to use nudges.

 

In the book Nudge, Cass Sunstein and Richard Thaler encourage choice architects to design systems and structures that will help individuals make the best decision in a given situation as defined by the chooser. Much of their argument is supported by research presented by Daniel Kahneman in Thinking Fast and Slow, where Kahneman demonstrates how predictable biases and cognitive errors can lead people to make decisions that they likely wouldn’t make if they had clearer information, had the ability to free themselves from irrelevant biases, and could improve their statistical thinking. Gigerenzer’s quote supports Sunstein and Thaler’s nudges by building on the research from Kahneman. Distinguishing between risk and uncertainty helps us understand when to use nudges, and how aggressive our nudges may need to be.

 

Gigerenzer uses casino slot machines as an example of known risk, and uses stocks, romance, earthquakes, business, and health as examples of uncertainty. When we are gambling, we can know the statistical chances that our bets will pay off and calculate optimal strategies (there is a reason the casino dealer stands on 17). We won’t know what the outcome will be ahead of time, but we can precisely define the risk. The same cannot be said for picking the right stocks, the right romantic partner, or when creating business, earthquake preparedness, or health plans. We may know the five-year rate of return for a company’s stocks, the divorce rate in our state, the average frequency and strength of earthquakes in our region, and how old our grandfather lived to be, but we cannot use this information alone to calculate risk. We don’t know exactly what business trends will arise in the future, we don’t know for sure whether we have a genetic disease that will strike us (or our romantic partner) down sooner than expected, and we can’t say for sure that a 7.0 earthquake is or is not possible next month.
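
The slot machine case is worth making concrete, because it shows what a known risk means in practice: when every probability and payout is known, the best we can do is a straightforward calculation. Here is a minimal sketch with made-up reel probabilities and payouts (the numbers are illustrative, not from the book).

```python
# A minimal sketch of a "known risk" in Gigerenzer's sense: a toy slot machine
# whose reel probabilities and payouts are fully known. The specific symbols,
# probabilities, and payouts are made up for illustration.
from itertools import product

reel = {"cherry": 0.5, "bar": 0.4, "seven": 0.1}   # hypothetical symbol probabilities
payout = {"cherry": 2, "bar": 5, "seven": 50}      # hypothetical payout for three of a kind

expected_value = -1.0                              # start with the 1-unit bet as a loss
for symbols in product(reel, repeat=3):            # enumerate every possible spin
    p = 1.0
    for s in symbols:
        p *= reel[s]                               # probability of this exact combination
    if len(set(symbols)) == 1:                     # three matching symbols pay out
        expected_value += p * payout[symbols[0]]

print(f"Expected value of a 1-unit spin: {expected_value:.3f}")
```

No comparable calculation exists for the uncertain cases above; there is no table of probabilities for romance or next month’s earthquakes, which is exactly where rules of thumb and well designed nudges come in.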

 

But nudges can help us in these decisions. We can use statistical information for business development and international stock returns to identify general rules of thumb when investing. We can listen to parents and elders and learn from their advice and mistakes when selecting a romantic partner, intuiting the traits that make a good (or bad) spouse. We can overengineer our bridges and skyscrapers by 10% to give us a little more assurance that they can survive a major and unexpected earthquake. Nudges are helpful because they can augment our gut instincts and help bring visualizations to the rules of thumb that we might utilize.

 

Expecting everyone’s individual intuition and heuristics to be up to the task of navigating uncertainty is likely to lead to many poor choices. But, if we help pool the statistical information available, provide guides, communicate rules of thumb that have panned out for many people, and structure choices in ways that help present this information, then people can likely make marginally better decisions. My suggestion in this post is a nudge to use more nudges in moments of uncertainty. When certainty exists, or even when calculable risks exist, nudges may not be needed. However, once we get beyond calculable risk, where we must rely on judgement and intuition, nudges are important tools to help people navigate uncertainty and improve their decision-making.