Vices and Personalities

In Vices of the Mind, Quassim Cassam argues that epistemic vices are different from personality traits. We can change our behaviors and escape epistemic vices in a way that we cannot change certain aspects of our personality and who we are. This means we can improve the way we think in order to become more rational and knowledgeable individuals.
“Wishful thinking is what a person does rather than what a person is like,” writes Cassam, illustrating the difference between a vice and a personality trait. We can be generally happy and optimistic people, or generally negative and pessimistic ones, and though I have not studied it, my understanding is that our genes can, to some extent, influence our general outlook and disposition toward life. Nevertheless, we can still engage in epistemic vices like wishful thinking whether we are normally more of an optimist or a pessimist. Recognizing that epistemic vices are more within our control than personality traits helps us see how we can adjust our thinking to improve our knowledge.
To me, the distinction is similar to the difference between the Spanish verbs estar and ser. Estar describes states of things that change: you would use it to say that you are happy today, that the house is in good condition, or that the vase is broken. Ser captures essential elements of something: you would use it to describe yourself as tall, to say that the house is large, or to describe a vase as blue.
We can generally be positive people, generally excited to talk to strangers, or we can prefer familiar routines rather than unknown situations. But regardless of these essential characteristics, there can be patterns of thinking that we engage in, like wishful thinking. Wishful thinking is a pattern of thought that assumes the best outcomes, discredits information that contradicts our hopes, and ignores the pursuit of additional information that might change our mind. It is a behavior that obstructs knowledge, and is also a behavior we can escape through practice and recognition.
The other epistemic vices that Cassam highlights are similar to wishful thinking. They are behaviors and patterns of thought that we generally have more control over than whether we have a sunny disposition toward life. Because they obstruct knowledge, they are behaviors we can and should strive to avoid in order to facilitate knowledge, improve our decision-making, and ultimately make better choices in life.
Systematically Obstructing Knowledge

The defining feature of epistemic vices, according to Quassim Cassam, is that they get in the way of knowledge. They inhibit the transmission of knowledge from one person to another, they prevent someone from acquiring knowledge, or they make it harder to retain and recall knowledge when needed. Importantly, epistemic vices don’t always obstruct knowledge, but they tend to do so systematically.
“There would be no justification for classifying closed-mindedness or arrogance as epistemic vices if they didn’t systematically get in the way of knowledge,” writes Cassam in Vices of the Mind. Cassam lays out his argument for striving against mental vices through a lens of consequentialism. Focusing on the outcomes of ways of thinking, Cassam argues that we should avoid mental vices because they lead to bad outcomes and limit knowledge in most cases.
Cassam notes that epistemic vices can turn out well for an individual in some cases. While not specifically mentioned by Cassam, we can use former President Donald Trump as an example. Cassam writes, “The point of distinguishing between systematically and invariably is to make room for the possibility that epistemic vices can have unexpected effects in particular cases.” Trump used a massive personal fortune, unabashed bravado, and a suite of mental vices to bully his way into the presidency. Mental vices such as arrogance, closed-mindedness, and prejudice became features of his presidency, not defects. However, while his epistemic vices helped propel him to the presidency, they clearly and systematically created chaos and problems once he was in office. In his arrogance he attempted to pressure the president of Ukraine for personal political gain, leading to an impeachment. His closed-mindedness and wishful thinking contributed to his second impeachment as he spread baseless lies about the election.
For most of us in most situations, these same mental vices will likely lead to failure and error rather than success. Arrogance is likely to prevent us from learning about areas where we could improve ourselves before an upcoming job interview. Closed-mindedness is likely to prevent us from gaining knowledge about saving money with solar panels or about a new restaurant we would really enjoy. Prejudice is likely to keep us from learning about new hobbies, pastimes, or opportunities for investment. These vices don’t always lead to failure or cut us off from important knowledge, as Trump demonstrated, but they are more likely to obstruct important knowledge than if we had pushed against them.
Consequentialism

In his book Vices of the Mind, Quassim Cassam argues that patterns of thoughts and mental habits that obstruct knowledge are essentially moral vices. Ways of thinking and mental habits that enhance the acquisition, retention, and transmission of knowledge, according to Cassam, are moral virtues. Cassam defends his argument largely through a consequentialist view.
Cassam is open about his consequentialist frame of reference. He writes:
“Obstructivism is a form of consequentialism. … Moral vices systematically produce bad states of affairs. … The point of systematically is to allow us to ascribe moral virtue in the actual world to people who, as a result of bad luck, aren’t able to produce good: if they possessed a character trait that systematically produces good in that context (though not in their particular case) they still have the relevant moral virtues.”
I think that this view of epistemic vices is helpful. I know for me that there are times when I fall into the epistemic vices that Cassam highlights, and they can often be comforting, make me feel good about myself, or just be distractions from an otherwise busy and confusing world. However, recognizing that these vices systematically lead to poorer outcomes can help me understand why I should stay away from them.
Epistemic vices like scrolling through Twitter to look at posts that bash someone you dislike are systematically likely to produce bad outcomes: wasting your time, making you more prone to distraction, and prejudicing you against people you don’t agree with. What you spend your mental energy on matters, and Twitter scrolling lets your mind indulge in shallow, quick thinking, closed-mindedness, and bias. It plays on confirmation bias, letting you see only posts that confirm what you believe or want to believe about a person or topic. It feels nice to pile on someone else, but you are reinforcing a limited perspective that might be wrong and rewarding your brain for being shallow and inconsiderate. In the moment it is rewarding, but in the long run it leads to worse thinking, shorter attention spans, and biased decision-making that is hard to escape once you have closed the Twitter tab. Consequentialism helps us see that the epistemic vices involved in Twitter scrolling, which feel harmless in the moment, are more likely to produce negative outcomes over time. This systematic nature, the consequences of indulging them, is what defines them as vices.
Consequentialism, Cassam’s argument shows, can be a useful way to think about how we should behave. People who try to do good but experience bad luck and don’t produce the same good outcomes as others can still be viewed as morally virtuous. Even though in their particular situation a good result did not occur, those who practice moral virtues can be praised for behaving in a way that is systematically more likely to produce good. Conversely, people who behave in ways that systematically produce negative outcomes can be deterred from their negative behavior through social taboos and norms, even if a poor behavior might provide them with an opportunity to succeed in the short term. It is hard to take absolute stances on any position, but consequentialism gives us a frame through which we can approach difficult decisions and uncertainty by recognizing where systematic patterns are likely to lead to desired or undesired outcomes for ourselves and our societies.
Knowledge and Perception

We often think of biases like prejudice as mean-spirited vices that cause people to lie and become hypocritical. The reality, according to Quassim Cassam, is that biases like prejudice run much deeper within our minds. Biases can become epistemic vices, inhibiting our ability to acquire and develop knowledge. They do more than make us behave in ways that we profess to be wrong. Biases can literally shape the reality of the world we live in by altering the way we understand ourselves and the people around us.
“What one sees,” Cassam writes in Vices of the Mind, “is affected by one’s beliefs and background assumptions. It isn’t just a matter of taking in what is in front of one’s eyes, and this creates an opening for vices like prejudice to obstruct the acquisition of knowledge by perception.”
I am currently reading Steven Pinker’s book Enlightenment Now, in which Pinker argues that humans strive toward rationality and that, at the end of the day, subjectivity is ultimately overruled by reason, rationality, and objectivity. I have long been a strong adherent of the Social Construction Framework and of the belief that our worlds are, to a great degree, created and influenced by individual differences in perception. Pinker challenges that assumption, but framing his challenge through the lens of Cassam’s quote helps show how Pinker is ultimately correct.
Individual level biases shape our perception. Pinker describes a study where university students watching a sporting event literally see more fouls called against their team than the opponent, revealing the prejudicial vice that Cassam describes. Perception is altered by a prejudice against the team from the other school. Knowledge (in the study it is the accurate number of fouls for each team) is inhibited for the sports fans by their prejudice. The reality they live in is to some extent subjective and shaped by their prejudices and misperceptions.
But this doesn’t mean that knowledge about reality is inaccessible to humans at a larger scale. A neutral third party (or committee of officials) could watch the game and accurately identify the correct number of fouls for each side. The sports fans and other third parties may quibble about the exact final number, but with enough neutral observers we should be able to settle on a more accurate reality than if we left things to the biased sports fans. At the end of the day, rationality will win out through strength of numbers, and even the disgruntled sports fan will have to admit that the number of fouls they perceived was different from the more objective number of fouls agreed upon by the neutral third party members.
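The aggregation logic here can be sketched in a few lines of Python. All the numbers below are invented for illustration, not taken from the study Pinker describes: the point is that a fan's bias pushes every count in the same direction, so averaging many fans never recovers the truth, while neutral observers err in both directions, so their average converges on the real foul count.

```python
import random

random.seed(42)

TRUE_FOULS = 20  # actual fouls by "our" team (made-up number)

def biased_fan(true_fouls, bias=0.3):
    """A fan who fails to register ~30% of their own team's fouls."""
    return sum(1 for _ in range(true_fouls) if random.random() > bias)

def neutral_observer(true_fouls, noise=0.1):
    """A neutral who sometimes misses a foul and sometimes
    double-counts one, with no systematic direction of error."""
    count = 0
    for _ in range(true_fouls):
        r = random.random()
        if r < noise / 2:
            continue       # missed the foul entirely
        elif r < noise:
            count += 2     # counted the same foul twice
        else:
            count += 1
    return count

fans = [biased_fan(TRUE_FOULS) for _ in range(1000)]
neutrals = [neutral_observer(TRUE_FOULS) for _ in range(1000)]

print(sum(fans) / len(fans))          # well below 20: bias does not average out
print(sum(neutrals) / len(neutrals))  # close to 20: symmetric noise does
```

The design choice mirrors the argument in the text: no single neutral observer is perfect, but because their errors are not systematic, strength of numbers pulls the committee's count toward the objective one.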
I think this is at the heart of the message from Cassam and the argument that I am currently reading from Pinker. My first reaction to Cassam’s quote is to say that our realities are shaped by biases and perceptions, and that we cannot trust our understanding of reality. However, objective reality (or something pretty close to it that enough non-biased people could reasonably describe) does seem to exist. As collective humans, we can reach objective understandings and agreements as people recognize and overcome biases, and as the descriptions of the world presented by non-biased individuals prove to be more accurate over the long run. The key is to recognize that epistemic vices shape our perception at a deep level, that they are more than just hypocritical behaviors, and that they literally shape the way we interpret reality. The more we try to overcome these vices of the mind, the more accurately we can describe the world, and the more our perception can then align with reality.
Epistemic Vices - Joe Abittan

Quassim Cassam’s book Vices of the Mind is all about epistemic vices. Epistemic vices are intentional and unintentional habits, behaviors, personality traits, and patterns of thought that hinder knowledge, information sharing, and accurate and adequate understandings of the world around us. Sometimes we intentionally deceive ourselves, sometimes we simply fail to recognize that we don’t have enough data to confidently state our beliefs, and sometimes we are intentionally deceived by others without recognizing it. When we fall into thinking habits and styles that limit our ability to think critically and rationally, we are indulging in epistemic vices, and the results can often be dangerous to ourselves and people impacted by our decisions.
“Knowledge is something that we can acquire, retain, and transmit. Put more simply, it is something that we can gain, keep, and share. So one way to see how epistemic vices get in the way of knowledge is to see how they obstruct the acquisition, retention, and transmission of knowledge,” Cassam writes.
A challenge that I have is living comfortably knowing that I have incomplete knowledge of everything, that the world is more complex than I can realize, and that even when doing my best I will still not know everything that another person does. This realization can be paralyzing, and I constantly feel inadequate because of it. However, Cassam’s quote provides a perspective of hope.
Knowledge is something we can always gain, retain, and transmit. We can improve all of those areas, gaining more knowledge, improving our retention and retrieval of knowledge, and doing better to transmit our knowledge. By recognizing and eliminating epistemic vices we can increase the knowledge that we have, use, and share, ultimately boosting our productivity and value to human society. Seeing knowledge as an iceberg that we can only access a tiny fraction of is paralyzing, but recognizing that knowledge is something we can improve our access to and use of is empowering. Cassam’s book is helpful in shining a light on epistemic vices so we can identify them, understand how they obstruct knowledge, and overcome our vices to improve our relationship with knowledge.
We Bet on Technology

I am currently reading Steven Pinker’s book Enlightenment Now, and he makes a good case for being optimistic about human progress. In an age when it is popular to write about human failures, whether it is wealthy but unhappy athletes wrecking their cars, the perilous state of democracy, or impending climate doom, the responsible message always seems to be a warning about how bad things are. But Pinker argues that things are not that bad, and that they are getting better. Pinker’s writing directly contradicts some of my earlier reading, including the writing of Gerd Gigerenzer, who argues that we unwisely bet on technology to save us when we should instead focus on improving statistical thinking and living with risk rather than hoping for a savior technology.
In Risk Savvy, Gigerenzer writes about the importance of statistical thinking and how we need it to successfully navigate an increasingly complex world. He argues that betting on technology will in some ways be a waste of money; while he is correct in many ways, I think some parts of his message are wrong. Instead of betting on technology, he argues, we need to develop improved statistical understandings of risk to help us better adapt to our world and make smarter decisions about how we use and prioritize resources and attention. He writes, “In the twenty-first century Western world, we can expect to live longer than ever, meaning that cancer will become more prevalent as well. We deal with cancer like we deal with other crises: We bet on technology. … As we have seen … early detection of cancer is also of very limited benefit: It saves none or few lives while harming many.”
Gigerenzer is correct to state that, to this point, broad cancer screening has been of questionable use. We identify a lot of cancers that people would likely live with and that are unlikely to cause serious metastatic or life-threatening disease. Treating cancers that won’t become problematic during the natural course of an individual’s life causes a lot of pain and suffering for no discernible benefit. But does this mean we shouldn’t bet on technology? I would argue that it does not, and that we can treat the current mistakes we make with cancer screening and early detection as lessons to help us get to a better technological cancer detection and treatment landscape. Much of our resources directed toward cancer may be misplaced right now, but wise people like Gigerenzer can help the technology be redirected to where it can be the most beneficial. We can learn from poor decisions around treatment and diagnosis, call out the actors who profit from misinformation, uncertainty, and fear, and build a new regime that harnesses technological progress in the most efficient and effective ways. As Pinker would argue, we bet on technology because it offers real promises of an improved world. It won’t be an immediate success, and it will have red herrings and loose ends, but incrementalism is a good way to move forward, even if it is slow and feels inadequate to meet the challenges we really face.
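Gigerenzer's point about the limited benefit of broad screening comes down to base rates. A minimal sketch, with invented numbers that are not taken from Risk Savvy, shows why most positive results from screening a low-prevalence disease are false alarms, which is exactly the over-diagnosis problem described above:

```python
# Natural-frequency sketch of screening a low-prevalence disease.
# All values are illustrative assumptions, not real clinical figures.

def positive_predictive_value(prevalence, sensitivity, false_positive_rate,
                              population=10_000):
    """Of everyone who screens positive, what fraction actually has the disease?"""
    sick = population * prevalence
    healthy = population - sick
    true_positives = sick * sensitivity            # sick people correctly flagged
    false_positives = healthy * false_positive_rate  # healthy people wrongly flagged
    return true_positives / (true_positives + false_positives)

# Assumed: 1% prevalence, 90% sensitivity, 9% false-positive rate.
ppv = positive_predictive_value(0.01, 0.9, 0.09)
print(f"{ppv:.0%}")  # 9%: roughly 1 in 11 positive results is a real case
```

Thinking in whole people out of 10,000 rather than in conditional probabilities is the kind of natural-frequency framing Gigerenzer advocates, and it makes the harm side of mass screening much easier to see.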
Ultimately, we should bet on technology and pursue progress to eliminate more suffering, improve knowledge and understanding, and better diagnose, treat, and understand cancer. Arguing that we haven’t done a good job so far, and that current technology and uses of technology haven’t had the life saving impact we wish they had is not a reason to abandon the pursuit. Improving our statistical thinking is critical, but betting on technology and improving statistical thinking go hand in hand and need to be developed together without prioritizing one over the other.
Teaching Statistical Thinking

“Statistical thinking is the most useful branch of mathematics for life,” writes Gerd Gigerenzer in Risk Savvy, “and the one that children find most interesting.” I don’t have kids and I don’t teach or tutor children today, but I remember my own math classes, from elementary school lessons to AP Calculus in high school. Most of my math education was solving isolated equations and memorizing formulas, with an occasional word problem tossed in. While I was generally good at math, it was boring, and I, like others, questioned when I would ever use most of the math I was learning. Gerd Gigerenzer wants to change this, and he wants to do so by focusing on teaching statistical thinking.
Gigerenzer continues, “teaching statistical thinking means giving people tools for problem solving in the real world. It should not be taught as pure mathematics. Instead of mechanically solving a dozen problems with the help of a particular formula, children and adolescents should be asked to find solutions to real-life problems.” 
We view statistics as incredibly complicated and too advanced for most children (and for most of us adults as well!). But if Gigerenzer is right that statistical thinking and problem solving are what many children find most exciting, then we should lean into teaching statistical thinking rather than hiding it away and saving it for advanced students. I found math classes to be alright, but I questioned how often I would need to use math, and that was before smartphones became ubiquitous. Today, most of the math I do professionally is calculated using a spreadsheet formula. I’m glad I understand the math behind the formulas I use in spreadsheets, but perhaps learning mathematical concepts through real-world examples would have been better than learning them in isolation with essentially rote memorization practice.
Engaging with what kids really find interesting will spur learning, and doing so with statistical thinking will do more than help kids make smart decisions on the Las Vegas Strip. Improving statistical thinking will help people understand how to appropriately respond to future pandemics, how to plan for retirement, and how to think about risk in other health and safety contexts. Lots of mathematical concepts can be built into real-world lessons on statistical thinking that go beyond the memorization and plug-n-chug lessons I grew up with.
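As a sketch of the kind of real-life problem Gigerenzer has in mind (the scenario and all numbers below are my own invention, not from Risk Savvy), even simple expected-value arithmetic turns an everyday purchase into a statistics lesson:

```python
# Classroom-style problem: is a $30 phone-insurance plan worth it if
# there is a 5% chance of needing a $200 repair? (Made-up numbers.)

def expected_cost(premium, failure_prob, repair_cost):
    """Compare the cost of insuring vs. the expected cost of going without."""
    insured = premium                       # pay the premium; repair is covered
    uninsured = failure_prob * repair_cost  # expected out-of-pocket repair cost
    return insured, uninsured

insured, uninsured = expected_cost(30, 0.05, 200)
print(insured, uninsured)  # 30 vs. 10.0: on average, skipping the plan wins
```

A lesson like this also opens the door to the harder, more interesting questions, such as why someone might still reasonably buy insurance when the worst case would be ruinous, which is statistical thinking about risk rather than rote formula practice.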
Risk Savvy Citizens

In Risk Savvy Gerd Gigerenzer argues for changes to the way that financial systems, healthcare systems, and discourse around public projects operate. He argues that we are too afraid of risk, allow large organizations to profit from misunderstandings of risk, and that our goals are often thwarted by poor conceptions of risk. Becoming risk savvy citizens, he argues, can help us improve our institutions and make real change to move forward in an uncertain world.

“The potential lies in courageous and risk savvy citizens,” writes Gigerenzer.

I think that Gigerenzer is correct to identify the importance of risk savvy citizens. We are more interconnected than we ever have been, and to develop new innovations will require new risks. Many of the institutions we have built today exist to minimize both risk and uncertainty, unintentionally limiting innovation. Moving forward, we will have to develop better relationships toward risk to accept and navigate uncertainty.

A population of risk savvy citizens can help reshape existing institutions, and will have to lean into risk to develop new institutions for the future. This idea was touched on in Bruce Katz and Jeremy Nowak’s book The New Localism where they argue that we need new forms of public private partnerships to manage investment, development, and public asset management. Public agencies, institutions we have relied upon, have trouble managing and accepting risk, even if they are comprised of risk savvy citizens. The solution, Katz and Nowak might suggest, is to reshape institutions so that risk savvy citizens can truly act and respond in ways that successfully manage reasonable risks. Institutions matter, and they need to be reshaped and reformed if our courageous and risk savvy citizens are going to help change the world and solve some of the ills that Gigerenzer highlights.

Self-Interest & A Banking Moral Hazard

I have not really read into or studied the financial crisis of 2008, but I remember how angry and furious so many people were at the time. There was an incredible amount of anger at big banks, especially when executives at big banks began to receive massive bonuses while many people in the country lost their homes and had trouble rebounding from the worst parts of the recession. The anger at banks spilled into the Occupy Wall Street movement, which is still a protest that I only have a hazy understanding of.
While I don’t understand the financial crisis that well, I do believe that I better understand self-interest, thanks to my own personal experience and constantly thinking about Robin Hanson and Kevin Simler’s book The Elephant in the Brain. The argument from Hanson and Simler is that most of us don’t actually have really strong beliefs about most aspects of the world. For most topics, the beliefs we have are usually subservient to our own self-interest, to the things we want that would give us more money, more prestige, and more social status. When you apply this filter retroactively to the financial crisis of 2008, some of the arguments shift, and I feel that I am able to better understand some of what took place in terms of rhetoric coming out of the crisis.
In Risk Savvy, published in 2014, Gerd Gigerenzer wrote about the big banks. He wrote about the way that bankers argued for limited regulation and intervention from states, suggesting that a free market was necessary for a successful banking sector that could fund innovation and fuel the economy. However, banks realized that in the event of a major banking crisis, all banks would be in trouble, and dramatic government action would be needed to save the biggest banks and prevent a catastrophic collapse. “Profits are pocketed by executives, and losses are compensated by taxpayers. That is not exactly a free market – it’s a moral hazard,” writes Gigerenzer.
Banks, like the individuals who work for and comprise them, are self-interested. They don’t want to be regulated and have authorities limiting their business enterprises. At the same time, they don’t want to be held responsible for their actions. Banks took on increasingly risky and unsound loans, recognizing that if everyone engaged in the same harmful lending practices, it wouldn’t just be a single bank that went bust, but all of them. They argued for a free market before the crash because a free market with limited intervention was in their self-interest, not because they held high-minded ideological beliefs. After the crash, when all banks risked failure, the largest banks pleaded for bailouts, arguing that they were necessary to prevent further economic disaster. Counter to their earlier free-market arguments, the banks favored bailouts that were clearly in their self-interest during the crisis. Their high-minded free-market ideology was out the window.
Gigerenzer’s quote was meant to focus on the moral hazard of bailing out banks that take on too many risky loans. But for me, someone who just doesn’t understand banking the way I do healthcare or other political science topics, what stands out in his quote is the role of self-interest, and how we frame our arguments to hide the ways we act on little more than self-interest. A moral hazard, where we benefit by pushing risk onto others, is just one example of how individual self-interest can be harmful when multiplied across society. The tragedy of the commons, bank runs, and social signaling are other examples where self-interest becomes problematic when scaled up to the societal level.
Risk Literacy and Reduced Healthcare Costs - Joe Abittan

Gerd Gigerenzer argues that risk literacy and reduced healthcare costs go together in his book Risk Savvy. By increasing risk literacy we will help both doctors and patients better understand how behaviors contribute to overall health, how screenings may or may not reveal dangerous medical conditions, and whether medications will or will not make a difference for an individual’s long-term well being. Having both doctors and patients better understand and better discuss the risks and benefits of procedures, drugs, and lifestyle changes can help us use our healthcare resources more wisely, ultimately bringing costs down.
Gigerenzer argues that much of the modern healthcare system, not just the US system but the global healthcare system, has been designed to sell more drugs and more technology. Increasing the number of people using medications, getting more doctors to order more tests with new high-tech diagnostic machines, and driving more procedures became more of a goal than actually helping to improve people’s health. Globally, health and the quality of healthcare has improved, but healthcare is often criticized as a low productivity sector, with relatively low gains in health or efficiency for the investments we make.
I don’t know that I am cynical enough to accept all of Gigerenzer’s argument at face value, but the story of opioids, the fact that we invest much larger sums of money in cancer research than in parasitic disease research, and the ubiquitous use of MRIs in our healthcare landscape all favor Gigerenzer’s argument. There hasn’t been much focus on improving doctors’ and patients’ statistical reasoning, and we haven’t matched the funding put forward for cancer treatments with comparable effort to remove lead from public parks. We see medicine as treating diseases after they have appeared, with fancy new technologies and drugs. We don’t see medicine as improving risk and health literacy, or as helping improve the environment before people get sick.
This poor vision of healthcare that we have lived with for so long, Gigerenzer goes on to argue, has blinded us to the real possibilities within healthcare. Gigerenzer writes, “calls for better health care have been usually countered by claims that this implies one of two alternatives, which nobody wants: raising taxes or rationing care. I argue that there is a third option: by promoting health literacy of doctors and patients, we can get better care for less money.”
Improving risk and health literacy means that doctors can better understand and better communicate which medications, which tests, and which procedures are most likely to help patients. It will also help patients better understand why certain recommendations have been made, and will help them push back against the feeling that they always need the newest drugs, the most cutting-edge surgery, and the most expensive diagnostic screenings. Regardless of whether we raise taxes or try to ration care, we have to help people truly understand their options in new ways that incorporate tools to improve risk literacy and reduce healthcare costs. By better understanding the system, our own care, and our systemic health, we can better utilize our healthcare resources, and hopefully bring down costs by moving our spending into higher-productivity healthcare spaces.