Epistemically Malevolent & Epistemically Insouciant

Over the last few years I have seen more and more news outlets and reporters saying that we now live in a post-truth society. The argument is that truth and accuracy no longer matter to many people, and that we live in a world where people simply believe what they want to believe, regardless of the evidence. The argument is supported by documented instances of fake news, by a former US president who didn’t seem to care what the truth was, and by politicians and everyday people professing clearly inaccurate beliefs as a type of loyalty test. This makes it difficult to communicate important information and to build a coherent narrative based on accurate details about the events of our lives.
Two concepts that Quassim Cassam discusses in his book Vices of the Mind can help us think about what it means to live in a post-truth society. Cassam writes, “one can be epistemically malevolent without being epistemically insouciant.” To me, a post-truth society depends on both malevolence and insouciance to exist, and it is helpful to see the distinction between these two postures toward knowledge.
To be epistemically malevolent means to deliberately attempt to hinder and limit knowledge. Cassam uses the example of tobacco companies misleading the public about the dangers of smoking: company executives intentionally worked to hide accurate scientific information and to confuse the public. In recent years we have seen epistemic malevolence in the form of fake news, misinformation, and disinformation intended to harm political opponents and discourage voter turnout for opposing political parties.
Epistemic insouciance doesn’t necessarily have a malicious intent behind it. Instead, it is characterized by an indifference to the accuracy of information. You don’t need an intentional desire to spread false information in order to be epistemically insouciant. However, this careless attitude toward accuracy is in some ways necessary for false information to take hold. Individuals who care whether their knowledge and statements are correct are less likely to be pulled in by the epistemically malevolent, and less likely to spread their messages. Someone who favors what the epistemically malevolent have to say, and who is unwilling to be critical of the message, is more likely to engage with such false messaging and to echo and spread malevolent lies. Even when an individual doesn’t intend to mislead, insouciance plays into malevolence.
This helps us see that our post-truth society will need to be addressed on two fronts. First, we need to understand why people are epistemically insouciant and find ways to encourage them to care more about the accuracy and factuality of their statements and beliefs. External nudges, social pressures, and other feedback should be developed to promote factual statements and to discourage epistemic insouciance. This is crucial to getting people to both recognize and denounce epistemic malevolence. Once people care about the accuracy of their beliefs and statements, we can increase the costs of deliberately spreading false information. As things stand now, epistemic insouciance encourages epistemic malevolence. Combating epistemic malevolence will require that we address epistemic insouciance first and then turn our attention to stopping the spread of deliberate falsehoods and fake news.

3 thoughts on “Epistemically Malevolent & Epistemically Insouciant”

  1. “External nudges, social pressures, and other feedback should be developed to promote factual statements and to hinder epistemic insouciance.”

    I agree with the spirit of this statement, but use of the word “nudge” risks unnecessarily tying your point to people with known track records for epistemic malevolence (e.g., Cass Sunstein). In other words, I agree with all of this to the extent that we want to rely on methods that improve the critical thinking of laypeople (e.g., work by Gerd Gigerenzer) and disagree with it to the extent that we need to rely on using cute social psych tricks to manipulate people into believing what [insert authority] wants them to believe. People should think vaccines are safe because they have access to an open/transparent discussion of the evidence, not because they heard some attractive authority figure tell them that they’re safe.

    It’s possible that we’re failing to appreciate how skilled people are at forming accurate beliefs because we assume the metric people are using to guide belief formation is “objective accuracy”. If we assume objective accuracy is always the only outcome being optimized, then yes, people are obviously stupid (this is the basic message of Tversky and Kahneman’s work). However, I think we as a society under-appreciate the extent to which people skillfully and intelligently shape their beliefs in order to optimize social outcomes. People don’t appear to be forming stupid beliefs because they are fed bad info. They generally seem to form stupid beliefs because of the favorable (social) consequences of those beliefs. Insofar as this framing is appropriate, it means that we should modify our norms so that achieving the things people most desire (social outcomes, like respect) requires the ability to prioritize the formation of objectively accurate beliefs. In sum, norms shouldn’t be structured to reward/punish people by checking their beliefs against some Official List of True Things. Norms should be structured to reward/punish people based on HOW they form, maintain, and justify their beliefs.

    (Of course, we already have experience with forming cultures that police the process of forming beliefs rather than the beliefs themselves: science. It isn’t obvious how to incorporate scientific thinking into the broader culture as a whole, but at least the philosophy of science provides a foundation to start with.)


    1. I don’t think it is fair to call Sunstein and Thaler epistemically malevolent. I agree that their idea of nudges can easily go too far, but I think they are necessary and unavoidable, and I’m ok having the sentence you referenced directly tie back to their work. The bottom line in terms of Sunstein and Thaler’s nudges is that being a choice architect is unavoidable, and choice architects have a responsibility to help people make the best decision possible, the decision they would make if they had perfect information and could decide rationally. There are simply too many decisions for everyone to make, and even if we try to make the best decisions ourselves and value our own freedom of decision-making, we will still need nudges in some situations.

      That being said, and perhaps this is a point of contradiction in my own thinking, I believe you are correct that people are more skilled at forming accurate beliefs than we give them credit for. After reading Kahneman I was pretty pessimistic about the future of human decision-making, especially since I had first read Robin Hanson’s The Elephant in the Brain, where he discusses the extent to which we deceive ourselves to look good in public. Gigerenzer is a useful pushback and deserves more attention than he gets. We really can think rationally and make good decisions, especially if we have clear information and visual aids. His argument is that a lot of the problem lies in the structures we have developed that shape HOW (as you mentioned) people form, maintain, and justify beliefs. If the problem is in the how, then I think we return to nudges as being important. (I do see how my argument is getting a bit circular.)

      I recently completed Sapiens by Yuval Noah Harari and I’m currently reading The WEIRDest People in the World by Joseph Henrich. I think one argument that both of them would make is that in the long run, cultures that are better able to utilize and act on true information will develop more stable and successful institutions. I personally feel that there is an urgent need for more scientific thinking, and even if we fail in the short term, there may still be hope that institutions built on better scientific thinking will succeed in the long run.


      1. Thanks for the thoughtful reply! I think we’re in agreement about your most important points (I also think we have a very similar taste in literature). We could argue about the extent to which Sunstein is dedicated to intellectual honesty or whether the elephant metaphor is justified by the research, but nothing seems to hang on this.

        My concern is about the extent to which beliefs (true or otherwise) are manufactured and distributed by authorities. It’s been a few years since I was deep in the persuasion literature, but I recall studies of social proof showing that people can be persuaded to drink less by advertising an in-group’s low rates of drinking. If you make it seem like someone’s friends rarely drink, that person becomes less likely to drink. What do we do with this information (assuming the effect replicates!)? I worry that many nudgers might be willing to lie and tell people that drinking among their peers is below the true value in an attempt to persuade them to drink less. The result is that people become misinformed, but this cost is often justified by the reduced alcohol use. One could argue that less drinking is invariably good, but is this really true? What are the potential long-term consequences of noble lies, and how many topics can experts be highly certain about?

        I’m skeptical of persuasive techniques that depend on persuaders to be the ones to form accurate/wise beliefs and who then use tricks to distribute those beliefs to targets. Yes, sometimes a default must be chosen (e.g., organ donor status), but I think for many important topics we have the time and space to rely on strategies that don’t depend on smart-and-benevolent authorities. The research saying that people are stupid just isn’t that strong and the research showing that people are fairly rational raises far too many exceptions to the cognitive miser narrative. So I guess I’m not against the topic of nudging as much as I’m against the Trojan Horse subset of nudges where nudgers deliver packaged beliefs to targets using psychological tricks rather than the other class of nudges where people are provided with the raw ingredients and left to form their own beliefs.

        BTW I just found your blog and after skimming it I realize this might be well-trodden territory. Sorry if you’ve already addressed these ideas in the past!

        (I haven’t read WEIRDest People yet, but it’s on my list and I’d be interested to hear what you think about it when you finish.)

