Teaching Statistical Thinking

“Statistical thinking is the most useful branch of mathematics for life,” writes Gerd Gigerenzer in Risk Savvy, “and the one that children find most interesting.” I don’t have kids and I don’t teach or tutor children today, but I remember my own math classes, from elementary school lessons to AP Calculus in high school. Most of my math education was solving isolated equations and memorizing formulas, with an occasional word problem tossed in. While I was generally good at math, it was boring, and like others I questioned when I would ever use most of what I was learning. Gigerenzer wants to change this, and he wants to do so by focusing on teaching statistical thinking.
Gigerenzer continues, “teaching statistical thinking means giving people tools for problem solving in the real world. It should not be taught as pure mathematics. Instead of mechanically solving a dozen problems with the help of a particular formula, children and adolescents should be asked to find solutions to real-life problems.” 
We view statistics as incredibly complicated and too advanced for most children (and for most adults as well!). But if Gigerenzer is right that statistical thinking and problem solving are what many children find most exciting, then we should lean into teaching statistical thinking rather than hiding it away and saving it for advanced students. I questioned how often I would actually need the math I was taught, and that was before smartphones became ubiquitous. Today, most of the math I do professionally is calculated with a spreadsheet formula. I’m glad I understand the math behind the formulas I use, but perhaps learning mathematical concepts through real-world examples would have been better than learning them in isolation with rote memorization.
Engaging with what kids really find interesting will spur learning. And doing so with statistical thinking will do more than just help kids make smart decisions on the Las Vegas Strip. Improving statistical thinking will help people understand how to respond appropriately to future pandemics, how to plan for retirement, and how to think about risk in other health and safety contexts. Many mathematical concepts can be built into real-world lessons centered on statistical thinking, going beyond the memorization and plug-n-chug exercises that I grew up with.
Risk Savvy Citizens

In Risk Savvy Gerd Gigerenzer argues for changes to the way that financial systems, healthcare systems, and discourse around public projects operate. He argues that we are too afraid of risk, allow large organizations to profit from misunderstandings of risk, and that our goals are often thwarted by poor conceptions of risk. Becoming risk savvy citizens, he argues, can help us improve our institutions and make real change to move forward in an uncertain world.

“The potential lies in courageous and risk savvy citizens,” writes Gigerenzer.

I think that Gigerenzer is correct to identify the importance of risk savvy citizens. We are more interconnected than we have ever been, and developing new innovations will require taking new risks. Many of the institutions we have built exist to minimize both risk and uncertainty, unintentionally limiting innovation. Moving forward, we will have to develop better relationships with risk in order to accept and navigate uncertainty.

A population of risk savvy citizens can help reshape existing institutions, and will have to lean into risk to develop new institutions for the future. This idea is touched on in Bruce Katz and Jeremy Nowak’s book The New Localism, where they argue that we need new forms of public-private partnerships to manage investment, development, and public assets. Public agencies, institutions we have long relied upon, have trouble managing and accepting risk, even if they are composed of risk savvy citizens. The solution, Katz and Nowak might suggest, is to reshape institutions so that risk savvy citizens can act and respond in ways that successfully manage reasonable risks. Institutions matter, and they need to be reshaped and reformed if our courageous and risk savvy citizens are going to help change the world and solve some of the ills that Gigerenzer highlights.

Self-Interest & A Banking Moral Hazard

I have not really read into or studied the financial crisis of 2008, but I remember how furious so many people were at the time. There was an incredible amount of anger at big banks, especially when their executives began to receive massive bonuses while many people across the country lost their homes and struggled to rebound from the worst parts of the recession. The anger at banks spilled into the Occupy Wall Street movement, a protest that I still have only a hazy understanding of.
While I don’t understand the financial crisis that well, I do believe that I better understand self-interest, thanks to my own personal experience and constantly thinking about Robin Hanson and Kevin Simler’s book The Elephant in the Brain. The argument from Hanson and Simler is that most of us don’t actually have really strong beliefs about most aspects of the world. For most topics, the beliefs we have are usually subservient to our own self-interest, to the things we want that would give us more money, more prestige, and more social status. When you apply this filter retroactively to the financial crisis of 2008, some of the arguments shift, and I feel that I am able to better understand some of what took place in terms of rhetoric coming out of the crisis.
In Risk Savvy, published in 2014, Gerd Gigerenzer wrote about the big banks. He described the way bankers argued for limited regulation and state intervention, suggesting that a free market was necessary for a successful banking sector that could fund innovation and fuel the economy. However, banks realized that in the event of a major banking crisis, all banks would be in trouble, and dramatic government action would be needed to save the biggest banks and prevent a catastrophic collapse. “Profits are pocketed by executives, and losses are compensated by taxpayers. That is not exactly a free market – it’s a moral hazard,” writes Gigerenzer.
Banks, like the individuals who work for and comprise them, are self-interested. They don’t want to be regulated or have too many authorities limiting their business enterprises. At the same time, they don’t want to be held responsible for their actions. Banks took on increasingly risky and unsound loans, recognizing that if everyone engaged in the same harmful lending practices, it wouldn’t be just a single bank that went bust, but all of them. They argued for a free market before the crash because a free market with limited intervention was in their self-interest, not because they held high-minded ideological beliefs. After the crash, when all banks risked failure, the largest banks pleaded for bailouts, arguing that bailouts were necessary to prevent further economic disaster. Counter to their earlier free-market arguments, the banks favored the bailouts that were clearly in their self-interest during the crisis. Their high-minded free-market ideology was out the window.
Gigerenzer’s quote was meant to focus on the moral hazard of bailing out banks that take on too many risky loans. But for me, someone who doesn’t understand banking the way I understand healthcare or other political science topics, what is most obvious in his quote is the role of self-interest, and how we frame our arguments to hide the fact that we are acting on little more than self-interest. A moral hazard, where we benefit by pushing risk onto others, is just one example of how individual self-interest can become harmful when multiplied across society. The tragedy of the commons, bank runs, and social signaling are other examples where self-interest becomes problematic when scaled up to the societal level.
Risk Literacy & Reduced Healthcare Costs - Joe Abittan

Gerd Gigerenzer argues in his book Risk Savvy that risk literacy and reduced healthcare costs go together. By increasing risk literacy we can help both doctors and patients better understand how behaviors contribute to overall health, how screenings may or may not reveal dangerous medical conditions, and whether medications will or will not make a difference for an individual’s long-term well-being. Having doctors and patients better understand and discuss the risks and benefits of procedures, drugs, and lifestyle changes can help us use our healthcare resources more wisely, ultimately bringing costs down.
Gigerenzer argues that much of the modern healthcare system, not just the US system but the global healthcare system, has been designed to sell more drugs and more technology. Increasing the number of people using medications, getting more doctors to order more tests with new high-tech diagnostic machines, and driving more procedures became more of a goal than actually helping to improve people’s health. Globally, health and the quality of healthcare has improved, but healthcare is often criticized as a low productivity sector, with relatively low gains in health or efficiency for the investments we make.
I don’t know that I am cynical enough to accept all of Gigerenzer’s argument at face value, but the story of opioids, the fact that we invest much larger sums of money in cancer research versus parasitic disease research, and the ubiquitous use of MRIs in our healthcare landscape do favor Gigerenzer’s argument. There hasn’t been as much focus on improving doctor and patient statistical reasoning, and we haven’t put forward the same effort and funding to remove lead from public parks compared to the funding put forward for cancer treatments. We see medicine as treating diseases after they have popped up with fancy new technologies and drugs. We don’t see medicine as improving risk and health literacy or as helping improve the environment before people get sick.
This poor vision of healthcare that we have lived with for so long, Gigerenzer goes on to argue, has blinded us to the real possibilities within healthcare. Gigerenzer writes, “calls for better health care have been usually countered by claims that this implies one of two alternatives, which nobody wants: raising taxes or rationing care. I argue that there is a third option: by promoting health literacy of doctors and patients, we can get better care for less money.”
Improving risk and health literacy means that doctors can better understand and better communicate which medications, tests, and procedures are most likely to help patients. It will also help patients understand why certain recommendations have been made and will help them push back against the feeling that they always need the newest drugs, the most cutting-edge surgery, and the most expensive diagnostic screenings. Regardless of whether we raise taxes or try to ration care, we have to help people truly understand their options, using tools that improve risk literacy and reduce healthcare costs. By better understanding the system, our own care, and our collective health, we can make better use of our healthcare resources, and hopefully bring down costs by moving our spending into higher productivity healthcare spaces.
Believing We Are Well Informed

Believing We Are Well Informed

In his book Risk Savvy, Gerd Gigerenzer demonstrates that people often overestimate their knowledge about the benefits of cancer screening. “A national telephone survey of U.S. adults,” he writes, “reported that the majority were extremely confident in their decision about prostate, colorectal, and breast screening, believed they were well informed, but could not correctly answer a single knowledge question.” I think this quote reveals something important about the way our minds work. We often believe we are well informed, but that belief and our confidence in our knowledge is often an illusion.
This is something I have been trying to work on. My initial reaction when I hear a fact or a discussion about almost any topic is to position myself as a knowledgeable semi-expert on it. I have noticed that I do this with ideas and topics that I have only heard once or twice in a commercial, seen in a headline, or overheard someone talking about. I immediately feel like an expert even though my knowledge is less than surface deep.
I think that what is happening in these situations is that I am substituting an easier question for the real one. Instead of asking myself whether I actually understand the topic, I am answering the question, “Can I recall a time when I thought about this thing?” Mental substitution is common, but hard to detect in the moment. I suspect that the more easily a topic comes to mind, even if it is a topic I know nothing about beyond its name, the more likely I am to feel like an expert.
Gigerenzer’s quote shows that people will believe themselves to be well informed even if they cannot answer a basic knowledge question about the topic. Rather than substituting the question “Can I recall a time when I thought about this thing?”, patients may be substituting another question. Instead of evaluating their confidence in their own decision regarding cancer screening, people may be answering the question “Do I trust my doctor?” Trust in a physician, even without any knowledge about the procedure, may be enough for people to feel extremely confident in their decisions. They don’t have to know much about their health or how a procedure will affect it; they just need to be confident that their physician does.
These types of substitutions are important to recognize. We should try to identify when we are falling victim to the availability bias and when we are substituting different questions that are easier to answer. In a well-functioning and accurate healthcare setting these biases and cognitive errors may not harm us much, but in a world of uncertainty, we stand to lose a lot when we fail to recognize how little we actually know. Being honest about our knowledge and thinking patterns can help us develop better systems and structures in our lives to improve and guide our decision-making.
On The Opportunity To Profit From Uninformed Patients

The American medical system is in a difficult and dangerous place right now. Healthcare services have become incredibly expensive, and the entire system has become so complex that few people fully understand it and even fewer can successfully navigate it to get appropriate care they can reasonably afford. My experience is that many people don’t see value in much of the care they receive or in many of the actors connected with their care. They know they need insurance to afford their care, but they can’t really see what value their insurance provides – it often appears to be more of a frustration than something most people appreciate. The same can be true of primary care, anesthesiologists, and the variety of healthcare benefits that employers may offer to their employees. There seem to be lots of people ready to profit from healthcare, but not a lot of people ready to provide real value to the people who need it.
 
These sentiments are all generalizations, and of course many people really do see value in at least some of their healthcare and are grateful for the care they receive. However, the complexity, the lack of transparency, and the ever climbing costs of care have people questioning the entire system, especially at a moral and ethical level. I think a great deal of support for Medicare for All, or universal healthcare coverage, comes from people thinking that profit within medicine may be unethical and from a lack of trust that stems from an inability to see anything other than a profit motive in many healthcare actors and services.
 
Gerd Gigerenzer writes about this idea in his book Risk Savvy. The book doesn’t examine healthcare specifically, but it uses healthcare to show the importance of being risk literate in today’s complex world. Medical screening in particular is a good space to demonstrate the harms that can come from misinformed patients and doctors. A failure to understand and communicate risk can harm patients, and it can create perverse incentives for healthcare systems by providing them the opportunity to profit from uninformed patients. Gigerenzer quotes Dr. Otis Brawley, who had been director of the Georgia Cancer Center at Emory in Atlanta.
 
In the quote, Dr. Brawley discusses how Emory could have screened 1,000 men at a mall for prostate cancer and how the hospital could have made $4.9 million in billing for the tests. Additionally, the hospital would have profited from future services when the men returned for other, unrelated healthcare concerns as established patients. In Dr. Brawley’s experience, the hospital could tell him how much it could profit from the tests, but could not tell him whether screening 1,000 men early for prostate cancer would have actually saved any lives among them. Dr. Brawley knew that screening that many men would produce false positive tests, causing unnecessary stress and further diagnostic care for those false positives – again, medical care that Emory would profit from. The screenings would also identify men with prostate cancer that was unlikely to affect their future health, but that would nevertheless lead to treatment leaving the men impotent or potentially incontinent. The hospital would profit, but its patients would be worse off than if they had not been screened. Dr. Brawley’s experience was that the hospital could identify avenues for profit, but could not identify avenues to provide real value in the healthcare services it offered.
 
Gigerenzer found this deeply troubling. A failure to understand and communicate the risks of prostate cancer (which is more complex than I can write about here) presents an opportunity for healthcare providers to profit by pushing unnecessary screening and treatment onto patients. Gigerenzer also notes that profiting from uninformed patients is not limited to cancer screening. Doctors who are not risk literate cannot adequately explain the risks and benefits of treatment, and their patients cannot make the best decisions for themselves. This needs to change if hospitals want to keep the trust of their patients and avoid becoming hated entities that fail to demonstrate value. Otherwise, they will go the way of health insurance companies, with frustrated patients wanting to eliminate them altogether.
 
Wrapping up the quote from Dr. Brawley, Gigerenzer writes, “Profiting from uninformed patients is unethical. Medicine should not be a money game.” I believe that Gigerenzer and Dr. Brawley are right, and I think that all healthcare actors need to clearly demonstrate their value; otherwise any profits they earn will make them look like money-first rather than patient-first enterprises, frustrating the public and leading to distrust of the medical field. In the end, that is harmful for everyone involved. Demonstrating real value in healthcare is crucial, and profiting from uninformed patients will diminish the value provided, hurt trust, and make the entire healthcare system in our country even worse.

Risk Literacy Builds Trust

In his book Risk Savvy Gerd Gigerenzer writes about a private medical panel and lecture series that he participated in. Gigerenzer gave a presentation about the importance of risk literacy between doctors and their patients and how frequently both misinterpret medical statistics. Regarding the dangers this could pose for the medical industry, Gigerenzer wrote the following, recapping a discussion he had with the CEO of the organization hosting the lectures and panel:

“I asked the CEO whether his company would consider it an ethical responsibility to do something about this key problem. The CEO made it clear that his first responsibility is with the shareholders, not patients or doctors. I responded that the banks had also thought so before the subprime crisis. At some point in the future, patients will notice how often they are being misled instead of informed, just as bank customers eventually did. When this happens, the health industry may lose the trust of the public, as happened to the banking industry.”

I focus a lot on healthcare since that is the space where I started my career and where I focused most of my studies during graduate school. I think Gigerenzer is correct in noting that risk literacy builds trust, and that a lack of risk literacy can translate into a lack of trust. Patients trust doctors because health and medicine are complex, and doctors are viewed as learned individuals who can decipher that complexity to help others live well. However, modern medicine continues to move into more complex territory where statistics and risk play a more prominent role. Understanding genetic test results, knowing whether a given medicine will work for someone based on their microbiome, and using and interpreting AI tools all require proficient risk literacy. If doctors can’t build risk literacy skills, and if they cannot communicate risk to patients, then patients will feel misled, and the trust that doctors enjoy will slowly diminish.

Gigerenzer did not feel that his warning at the panel was well received. “The rest of the panel discussion was about business plans, which really captured the emotions of the health insurers and politicians present. Risk-literate doctors and patients are not part of the business.”

Healthcare has to be patient centered, not shareholder centered. If healthcare is not about patients, then the important but not visible and not always profitable work that is necessary to build risk literacy and build trust won’t take place. Eventually, patients will recognize when they are placed behind shareholders in terms of importance to a hospital, company, or healthcare system, and the results will not be good for their health or for the shareholders.

Medical Progress

What does medical progress look like? To many, medical progress looks like new machines, artificial intelligence to read your medical reports and x-rays, or new pharmaceutical medications to solve all your ailments with a simple pill. However, much of medical progress might be improved communication, better management and operating procedures, and better understandings of statistics and risk. In the book Risk Savvy, Gerd Gigerenzer suggests that there is a huge opportunity for improving physician understanding of risk, improved communication around statistics, and better processes related to risk that would help spur real medical progress.

He writes, “Medical progress has become associated with better technologies, not with better doctors who understand these technologies.” Gigerenzer argues that there is an “unbelievable failure of medical schools to provide efficient training in risk literacy.” Much of the focus of medical school and physician education is on memorizing facts about specific disease states, treatments, and how a healthy body should look. What is not focused on, in Gigerenzer’s 2014 argument, is how physicians should understand the statistical results of empirical studies, how they should interpret risk given a specific biological marker, and how they can communicate risk to patients in a way that adequately informs their healthcare decisions.

Our health is complex. We all have different genes, different family histories, different exposures to environmental hazards, and different lifestyles. These factors interact in many complex ways, and our health is often a downstream consequence of many fixed factors (like genetics) and many social determinants of health (like whether we have a safe park to walk in, or whether we grew up in a house infested with mold). Understanding how all these factors interact to shape our current health is not easy.

Adding new technology to the mix can help us improve our treatments, our diagnoses, and our lifestyle or environment. However, simply layering new technology onto existing complexity is not enough to really improve our health. Medical progress requires better ways to use and understand the technology that we introduce, otherwise we are adding layers to the existing complexity. If physicians cannot understand, cannot communicate, and cannot help people make reasonable decisions based on technology and the data that feeds into it, then we won’t see the medical progress we all hope for. It is important that physicians be able to understand the complexity, the risk, and the statistics involved so that patients can learn how to actually improve their behaviors and lifestyles and so that societies can address social determinants of health to better everyone’s lives.
Risk Literacy and Emotional Stress

In Risk Savvy Gerd Gigerenzer argues that better risk literacy could reduce emotional stress. To emphasize this point, Gigerenzer writes about parents who received false positive medical test results for their newborn children. The children had been screened for biochemical disorders, and the tests indicated that a disorder was present. However, upon follow-up screenings and evaluations, the children were found to be perfectly healthy. Nevertheless, in the long run (four years later) parents who initially received a false positive test result were more likely than other parents to say that their children required extra parental care, that their children were more difficult, and that they had more dysfunctional relationships with their children.

Gigerenzer suggests that the survey results represent a direct parental response to initially receiving a false positive test when their child was a newborn. He argues that parents received the biochemical test results without being informed about the chance of false positives and, due to a general lack of risk literacy, without understanding how common false positives are. Parents reacted strongly to the bad news of the initial test, and somewhere in their minds, even after the result was proven to be a false positive, they never adjusted their evaluations of their children, and the false positive test in some ways became a self-fulfilling prophecy.

Writing about Gigerenzer’s argument now, it feels more far-fetched than it did on an initial reading, but I think his general point that risk literacy and emotional stress are tied together is probably accurate. Regarding the parents in the study, he writes, “risk literacy could have moderated emotional reactions to stress that harmed these parents’ relation to their child.” Gigerenzer suggests that parents had strong negative emotional reactions when their children received a false positive and that those initial reactions carried four years into the future. Had the doctors better explained the chance of a false positive and better communicated next steps, the strong negative emotional reaction could have been avoided, and parents would not have spent four years believing their child was somehow more fragile or more needy than other children. I recognize that receiving a test result with a diagnosis no parent wants to hear is stressful, and I can see where better risk communication could reduce some of that stress, but I think there could have been other factors that the study picked up on. I think the results as Gigerenzer reported them overhype the connection between risk literacy and emotional stress.

Nevertheless, risk literacy is important for all of us living in a complex and interconnected world. We are constantly presented with risks, and new risks can seemingly pop up anywhere at any time. Being able to decipher and understand risk is important so that we can adjust and modulate our activities and behaviors as our environment and circumstances change. Doing so successfully should reduce our stress, while struggling to comprehend risk and adjust behaviors and beliefs is likely to increase emotional stress. When we don’t understand risks appropriately, we can become overly fearful, spend money on unnecessary insurance, and stress ourselves over incorrect information. Developing better charts, better communication tools, and better information about risk will help individuals improve their risk literacy, and will hopefully reduce stress by allowing them to adjust successfully to the risks they face.
Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics, a professor of mine had us think about drug testing student athletes. We ran through a few scenarios, calculating how many true positive and how many false positive test results we should expect if we oversaw a university program that drug tested student athletes on a regular basis. The results were surprising, and a little hard to understand.

As it turns out, if you have a large student athlete population and very few of those students actually use illicit drugs, then your testing program is likely to produce more false positive tests than true positives. The big determining factors are the accuracy of the test (its sensitivity, how often it correctly flags actual drug users, and its specificity, how often it correctly clears non-users) and the percentage of students using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does use drugs. The dilemma we discussed occurs when you have a test with some percentage of error and a large student athlete population with a minimal percentage of drug users. In that case you cannot be confident that a positive test result is accurate: you will receive a number of positive tests, but most of them will actually be false positives.
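The arithmetic behind this can be sketched in a few lines. The population size, base rate, sensitivity, and specificity below are illustrative assumptions on my part, not figures from the course or the book, but the pattern holds whenever the condition being tested for is rare:

```python
# Expected drug-test outcomes for a large population with few actual users.
# All numbers are illustrative assumptions, not figures from Risk Savvy.

population = 10_000   # student athletes tested
base_rate = 0.005     # assume 0.5% actually use illicit drugs
sensitivity = 0.95    # P(test positive | user)
specificity = 0.95    # P(test negative | non-user)

users = population * base_rate                   # ~50 actual users
non_users = population - users                   # ~9,950 non-users

true_positives = users * sensitivity             # ~47.5 expected
false_positives = non_users * (1 - specificity)  # ~497.5 expected

# Positive predictive value: chance that a positive result is a true positive.
ppv = true_positives / (true_positives + false_positives)
print(f"Expected true positives:  {true_positives:.0f}")
print(f"Expected false positives: {false_positives:.0f}")
print(f"Chance a positive is real: {ppv:.1%}")
```

With these numbers, fewer than one in ten positive results points to an actual drug user, because the 5% error rate applies to the huge pool of non-users while the 95% detection rate applies only to the small pool of users.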

In class, our professor walked us through this example verbally before creating tables we could use to multiply the percentages ourselves, showing that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare condition. He explained that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests, as we are learning today). The challenge, as our professor explained, is that the math is complicated, and it is hard to explain to a person who just received a positive cancer test that they likely don’t have cancer. The statistics are hard to understand on their own.

However, Gerd Gigerenzer doesn’t think this needs to be as limiting a problem as my professor’s course made it seem. In Risk Savvy Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to work through, Gigerenzer suggests, can be digested in simple charts using natural frequencies. Natural frequencies are counts of actual people that we can understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

Telling someone that the incidence of cancer in the population is only 1%, that the chance of a false positive test is 9%, and that they therefore still likely don’t have cancer is confusing. However, if you explain that for every 1,000 people who take a particular cancer test, only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear. Of the 10 who do have cancer, the test correctly identifies 9, giving 9 true positives for every 1,000 tests (the numbers adjust with the population and test sensitivity). The false positives can then be explained by saying that of the 990 people who really don’t have cancer, the test will err and tell 89 of them (9% in this case) that they do have cancer. So 89 individuals receive false positives while only 9 receive true positives. Since 89 > 9, a positive test still comes nowhere near guaranteeing that someone actually has cancer.
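The natural-frequency walk-through above is easy to verify with a short calculation, using the same numbers from the running example (1% prevalence, 90% sensitivity, 9% false positive rate, 1,000 people tested):

```python
# Natural-frequency version of the cancer screening example above.
# Figures come from the running example: 1% prevalence, 90% sensitivity,
# and a 9% false positive rate among 1,000 people tested.

people = 1_000
with_cancer = people // 100              # 10 people actually have cancer
without_cancer = people - with_cancer    # 990 do not

true_positives = round(with_cancer * 0.90)      # 9 of the 10 are detected
false_positives = round(without_cancer * 0.09)  # 89 healthy people test positive

total_positives = true_positives + false_positives  # 98 positive tests in all
chance_cancer = true_positives / total_positives

print(f"{true_positives} true positives vs {false_positives} false positives")
print(f"Chance of cancer given a positive test: {chance_cancer:.0%}")  # about 9%
```

Counting people instead of multiplying conditional probabilities is the whole trick: the same Bayesian result falls out of simple whole-number arithmetic that anyone can follow on a chart.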

Gigerenzer uses very helpful charts in his book to show that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to reach better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed is better tools and aids for comprehension. Rather than developing paternalistic policies to nudge people toward behaviors they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.