Slope is Agnostic to Cause and Effect

I like statistics. I like to think statistically, to recognize that outcomes have probabilities that can be influenced by other factors. I enjoy looking at best fit lines, spotting correlations between different variables, and seeing how trend lines change when you control for different variables. However, statistics and trend lines don’t actually tell us anything about causality.
In The Book of Why Judea Pearl writes, “the slope (after scaling) is the same no matter whether you plot X against Y or Y against X. In other words, the slope is completely agnostic as to cause and effect. One variable could cause the other, or they could both be effects of a third cause; for the purpose of prediction, it does not matter.”
In statistics we all know that correlation is not causation, but this quote helps us remember important information when we see a statistical analysis and a plot with a linear regression line running through it. The regression line is like the owl that Pearl describes earlier in the book. The owl can predict where a mouse is likely to be and which direction it will run, but the owl does not seem to know why a mouse is likely to be in a given location or why it is likely to run in one direction over another. It simply knows from experience and observation what a mouse is likely to do.
The regression line is a best fit for numerous observations, but it doesn’t tell us whether one variable causes another or whether both are influenced in a similar manner by another variable. The regression line knows where the mouse might be and where it might run, but it doesn’t know why.
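Pearl's point about the slope can be checked numerically: once both variables are standardized (z-scored), the least-squares slope of Y on X and the slope of X on Y are the same number, the Pearson correlation coefficient. A minimal sketch in Python, using invented data purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)  # y is generated from x, but the slope math cannot know that

def standardized_slope(a, b):
    """Least-squares slope predicting b from a, after z-scoring both variables."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()  # for z-scored data this equals Pearson's r

print(standardized_slope(x, y))  # slope of y on x
print(standardized_slope(y, x))  # slope of x on y: the same number
```

Even though `y` was built directly from `x`, the two standardized slopes come out identical, so nothing in the slope itself can distinguish cause from effect.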
In statistics courses we end at this point of correlation. We might look for other variables that are correlated or try to control for third variables to see if the relationship remains, but we never answer the question of causality; we never get to the why. Pearl thinks this is a limitation we do not need to put on ourselves. Humans, unlike owls, can understand causality; we can recognize the various reasons why a mouse might be hiding under a bush, and why it may choose to run in one direction rather than another. Correlations can help us start to see where relationships exist, but it is the ability of our mind to understand causal pathways that helps us determine causation.
Pearl argues that statisticians avoid these causal arguments out of caution, but that this only ends up creating more problems down the line. Important statistical research in areas of high interest or concern to lawmakers, business people, or the general public is carried beyond the cautious bounds that causality-averse statisticians place on their work. Showing correlations without making an effort to understand the causality behind them makes scientific work vulnerable to the epistemically malevolent who would like to use correlations to their own ends. While statisticians rigorously train themselves to understand that correlation is not causation, the general public and those struck with motivated reasoning don’t hold themselves to the same standard. Leaving statistical analysis at the level of correlation means that others can attribute the cause and effect of their choice to the data, and the proposed causal pathways can be wildly inaccurate and even dangerous. Pearl suggests that statisticians and researchers are thus obligated to do more with causal structures, to round off their work and better develop ideas of causation that can be defended once their work travels beyond the world of academic journals.
The Fundamental Nature of Cause and Effect

In my undergraduate and graduate studies I had a few statistics classes, and I remember the challenge of learning probability. Probability, odds, and statistics are not always easy to understand and interpret. Some concepts are pretty straightforward, while others seem to contradict what we would expect if we had not gone through the math and studied the concepts in depth. In contrast to the difficult and sometimes counterintuitive nature of statistics, we can think about causality, which is a challenging concept but, unlike statistics, one we are able to intuit from a very young age.
In The Book of Why Judea Pearl writes, “In both a cognitive and a philosophical sense, the idea of cause and effect is much more fundamental than probability. We begin learning causes and effects before we understand language and before we understand mathematics.”
As Pearl explains, we see causality naturally and experience causality as we move through our lives. From a young child who learns that crying brings attention to a nuclear physicist who learns what happens when two atoms collide at high energy levels, our minds are constantly looking at the world and looking for causes. This process begins with observing the phenomena around us and continues as we predict what outcomes will follow from certain inputs. Eventually, our minds reach a point where we can understand why our predictions are accurate or inaccurate, and we can imagine new ways to bring about certain outcomes. Even if we cannot explain all of this, we can still understand causation at a fundamental and intuitive level.
However, many of us deny that we can see and understand the world in a causal way. I am personally guilty of thinking in a purely statistical way and ignoring the causal. The classes I took in college helped me understand statistics and probability, but they also told me not to trust my intuitive causal thinking. Books like Kahneman’s Thinking, Fast and Slow cemented this mindset for me. Rationality, we believe, requires that we think statistically and discount our intuitions for fear of bias. Modern science says we can only trust evidence when it is backed by randomized controlled trials, and it directs us to think of the world through correlations and statistical relationships, not through a lens of causality.
Pearl pushes back against this notion. By arguing that causality is fundamental to the human mind, he implies that our causal reasoning can and should be trusted. Throughout the book he demonstrates that a purely statistical way of thinking leaves us short of the knowledge we really need to improve the world. He demonstrates that complex tactics to remove variables from equations in statistical methods are often unnecessary, and that we can accept the results of experiments and interventions even when they are not fully randomized controlled trials. For much of human history our causal thinking has led us astray, but Pearl argues that we have overcorrected in modern statistics and science, and that we need to return to our causal roots to move forward and solve problems that statistics tells us are impossible to solve.
Predictions & Explanations

The human mind has incredible predictive abilities, but our explanatory abilities are not always equally incredible. Prediction is relatively easy when compared to explanation. Animals can predict where a food source will be without being able to explain how it got there. For most of human history our ancestors could predict that the sun would rise the next day without having any way of explaining why it would rise. Computer programs today can predict our next move in chess, but few can explain their prediction or why we would make the choice they predicted.
As Judea Pearl writes in The Book of Why, “Good predictions need not have good explanations. The owl can be a good hunter without understanding why the rat always goes from point A to point B.” Prediction is possible with statistics and good observations. With a large enough database, we can make a prediction about what percentage of individuals will have negative reactions to medications, we can predict when a traffic jam will occur, and we can predict how an animal will behave. What is harder, according to Pearl, is moving to the stage where we describe why we observe the relationships that statistics reveal.
Statistics alone cannot tell us why particular patterns emerge. Statistics cannot identify causal structures. As a result, we continually tell ourselves that correlation is not causation and that we can only determine which relationships are truly causal through randomized controlled trials. Pearl would argue that this is incorrect, and that this idea results from the fact that statistics is trying to answer a completely different question than causation. Approaching statistical questions from a causal lens may lead to inaccurate interpretations of data or to “p-hacking,” an academic term for efforts to get the statistical results you wanted to see. The key is not hunting for causation within statistics, but understanding causation and supporting it through evidence uncovered via statistics.
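The hunting-for-results problem is easy to demonstrate with a toy simulation: run enough tests on pure noise and some of them will look "significant" by chance. The sample size, number of tests, and the approximate critical value below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_tests = 30, 1000

# 1,000 toy "studies", each correlating two completely unrelated noise variables
hits = 0
for _ in range(n_tests):
    x = rng.normal(size=n_samples)
    y = rng.normal(size=n_samples)
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > 0.361:  # approximate two-sided p < 0.05 critical value of r for n = 30
        hits += 1

print(hits)  # around 5% of the pure-noise tests clear the "significance" bar
```

None of these variables has anything to do with any other, yet roughly one in twenty comparisons passes the conventional threshold, which is exactly the opening a determined p-hacker exploits.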
Seeing the difference between causation and statistics is helpful when thinking about the world. Being stuck without a way to see and determine causation leads to situations like tobacco companies claiming that cigarettes don’t cause cancer or oil and gas companies claiming that humans don’t contribute to global warming. Causal thinking, however, utilizes our ability to develop explanations and applies those explanations to the world. Our ability to predict different outcomes based on different interventions helps us interpret and understand the data that the world produces. We may not see the exact picture in the data, but we can understand it and use it to help us make better decisions that will lead to more accurate causal understandings over time.
Tool Use and Causation

Judea Pearl’s book The Book of Why is all about causation. The reason human beings are able to produce vaccines, send rockets into space, and maintain green gardens is that we understand causation. We have an ability to observe events in the world, to intervene, and to predict how our interventions produce specific outcomes. This allows us to develop tools to achieve specific desired ends, and it is no small feat.
In the book Pearl describes three levels of causation, based on Alan Turing’s proposal to classify cognitive systems in terms of the queries they can answer. The three levels of causation are association, intervention, and counterfactuals. Pearl explains that many animals observe the world and detect patterns, but fewer animals use tools to intervene in the world. Fewer still, Pearl explains, possess the ability to actually develop and improve new tools. As he writes, “tool users do not necessarily possess a theory of their tool that tells them why it works and what to do when it doesn’t. For that, you need to have achieved a level of understanding that permits imagining. It was primarily this third level that prepared us for further revolutions in agriculture and science and led to a sudden and drastic change in our species’ impact on the planet.”
The theory of tool use that Pearl mentions in the quote is our ability to see and understand causation. We can observe that rocks can be used to cut plant fibers, and then we can identify the qualities in some rocks that make them better at cutting fibers than others. But to get to the point where we are sharpening an edge of a rock to make it even better at cutting fibers, we have to have a causal understanding of what allows the rock to cut and we need sufficient imagination to predict what would happen if the rock had a sharper edge. We have to imagine an outcome in a future world where something was different, and that something different caused a new outcome.
This point is small but actually quite profound. Our minds are able to conceptualize causality and build hypotheses about the world that we can test. This can improve our tool usage, improve the ways we act and behave, and allow us to achieve desired ends through study, prediction, imagination, and experimentation. The key, however, is that we have a theory of the tools and how they work, that we have an ability to intuit causation.
We hear all the time that correlation is not causation, and in our modern technological age we are looking to statistics to help us solve massive problems. However, as Pearl’s quote shows, data, statistics, and information are useless unless we have a theory of the tools we can use based on the knowledge we gain from them. We have to embrace causation and our ability to imagine and predict causal structures if we want to do anything with the data.
This all reminds me of the saying, when the only tool you have is a hammer, everything begins to look like a nail. This represents an inability to understand causality, a lack of imagination and predictive prowess. Statistics without a theory of causality, without an ability to use our power to identify and predict causation, is like the hammer and nail saying. It is useless and throws the same toolkit and approach at every problem. Statistics alone doesn’t build knowledge – you also need a theory of causation.
Pearl’s message throughout the book is that statistics (tool use) and causation are linked, that we need a theory and understanding of causation if we are going to do anything with data, statistics, and information. For years we have relied on statistical relationships to help us understand the world, but we have failed to apply the same rigorous study to causation, and that will make it difficult for us to use our new statistical power to achieve the ends that big data and statistical processing promise.
Hope in Big Data

Most of us probably don’t work with huge data sets, but all of us contribute to huge data sets. We know the world of big data is out there, and we know people are working with big data, but there are not many of us who truly know what it means and how we should think about any of it. In The Book of Why, Judea Pearl argues that even many of those doing research and running companies based on big data don’t fully understand what it all means.
Pearl is critical of researchers and entrepreneurs who lack causal understandings but pursue new knowledge and information by pulling correlations and statistics out of large data sets. Some companies are taking advantage of the fact that huge amounts of computing power can give us insights into data sets that we never before could have generated; however, these insights are not always as meaningful as we are led to believe.
Pearl writes, “The hope – and at present, it is usually a silent one – is that the data themselves will guide us to the right answers whenever causal questions come up.”
My last post was about the overuse of the phrase correlation is not causation. Finding correlations and relationships in data is meaningless if we don’t also have causal understandings in mind. This is the critique that Pearl makes with the quote above. If we don’t have a way of understanding basic causal structures, then the phrase is right; correlations don’t mean anything. Many companies and researchers are in a stage where they are finding correlations and unexpected statistical results in big data, but they lack the causal understandings to do anything meaningful with the data. In the world of public policy this feels like the saying a solution in search of a problem, or, in the world of healthcare, like a pay and chase scenario.
Pearl argues throughout the book that we are better at identifying causal structures than our statistics courses lead us to believe. He also argues that understanding causality is key to unlocking the potential of big data and actually getting something useful out of massive datasets. Without a grounding in causality, we are wasting our time with the statistical research we do. We are running around with solutions in the form of big data correlations that don’t have a causal underpinning. It is as if we are paying fraudulent claims, then chasing down some of the money we spent and congratulating ourselves on preventing fraud. The end result is a poor use of data that we prop up as a magnanimous solution.
Correlation and Causation

I have an XKCD comic taped to the door of my office. The comic is about the mantra of statistics, that correlation is not causation. I taped the comic to my office door because I loved learning statistics in graduate school and thinking deeply about associations and how mere correlations cannot be used to demonstrate that one thing causes another. Two events can correlate, but have nothing to do with each other, and a third thing may influence both, causing them to correlate without any causal link between the two things.
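The third-variable scenario from the comic is easy to simulate. In the sketch below, `a` and `b` never influence each other, yet they correlate strongly because both are driven by a shared cause `z`; all names and numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.normal(size=5000)            # hidden common cause
a = z + 0.5 * rng.normal(size=5000)  # a is z plus some independent noise
b = z + 0.5 * rng.normal(size=5000)  # b is z plus different independent noise

# a and b have no causal link, yet they are strongly correlated
r = np.corrcoef(a, b)[0, 1]
print(round(r, 2))  # roughly 0.8, since var(z) / (var(z) + noise variance) = 1 / 1.25
```

A regression of `a` on `b` would fit beautifully, but intervening on `b` would do nothing to `a`; only knowledge of the causal structure involving `z` reveals that.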
But Judea Pearl thinks that science and researchers have fallen into a trap laid out by statisticians and the infinitely repeated correlation does not imply causation mantra. Regarding this perspective of statistics he writes, “it tells us that correlation is not causation, but it does not tell us what causation is.”
Pearl seems to suggest in The Book of Why that there was a time when there was too much data, too much that humans didn’t know, and too many people ready to offer incomplete assessments based on anecdote and partial information. From this time sprouted the idea that correlation does not imply causation. We started to see that statistics could describe relationships and could be used to pull apart entangled causal webs, identifying each individual component and assessing its contribution to a given outcome. However, as his quote shows, this approach never actually answered what causation is. It never told us when we can know and ascertain that a causal structure and causal mechanism is in place.
“Over and over again,” writes Pearl, “in science and in business, we see situations where mere data aren’t enough.”
To demonstrate the shortcomings of our high regard for statistics and our mantra that correlation is not causation, Pearl walks us through the congressional testimonies and trials of big tobacco companies in the United States. The data told us there was a correlation between smoking and lung cancer. There was overwhelming statistical evidence that smoking was related or associated with lung cancer, but we couldn’t attain 100% certainty through statistics alone that smoking caused lung cancer. The companies themselves muddied the water with misleading studies and cherry-picked results. They hid behind a veil that said that correlation was not causation, and behind the confusion around causation that statistics could never fully clarify.
Failing to develop a real sense of causation, failing to move beyond big data, and failing to get beyond statistical correlations can have real harms. We need to be able to recognize causation, even without relying on randomized controlled trials, and we need to be able to make decisions to save lives. The lesson of the comic taped to my door is helpful when we are trying to be scientific and accurate in our thinking, but it can also lead us astray when we fail to trust a causal structure that we can see, but can’t definitively prove via statistics.
Talking About Causation

In The Book of Why Judea Pearl argues that humans are better at modeling, predicting, and identifying causation than we like to acknowledge. For Pearl, the idea that we can see direct causation and study it scientifically is not a radical and naïve belief, but a common sense and defensible observation about human pattern recognition and intuition of causal structures in the world. He argues that we are overly reliant on statistical methods and randomized controlled trials that suggest relationships, but never tell us exactly what causal mechanisms are at the heart of such relationships.
One of the greatest frustrations for Pearl is the limitations he feels have been placed around ideas and concepts of causality. For Pearl, there is a sense that certain research, certain ways of talking about causality, and certain approaches to solving problems are taboo, and that he and other causality pioneers are unable to talk in a way that might lead to new scientific breakthroughs. Regarding a theory of causation and the history of our study of causality, he writes, “they declared those questions off limits and turned to developing a thriving causality-free enterprise called statistics.”
Statistics doesn’t tell us a lot about causality. Statistical thinking is a difficult way for most people to think, and for non-statistically trained individuals it leads to frustrations. I remember around the time of the 2020 election that Nate Silver, a statistics wonk, posted a cartoon in which one person was trying to explain the statistical chance of an outcome to another person. The other person interpreted statistical chances as either 50-50 or all or nothing. They interpreted a low probability event as a certainty that something would not happen and a high probability event as a certainty that it would happen, while more middle ground probabilities were simply lumped in as 50-50 chances. Statistics helps us understand these probabilities in terms of the outcomes we see, but it doesn’t actually tell us anything about the why behind the statistical probabilities. That missing why, I think Pearl would argue, is part of where the confusion for the individual in the cartoon stems from.
Humans think causally, not statistically. However, our statistical studies and the accepted way of doing science pushes against our natural causal mindsets. This has helped us better understand the world in many ways, but Pearl argues that we have lost something along the way. He argues that we needed to be building better ways of thinking about causality and building models and theories of causality at the same time that we were building and improving our studies of statistics. Instead, statistics took over as the only responsible way to discuss relationships between events, with causality becoming taboo.
“When you prohibit speech,” Pearl writes, “you prohibit thought and stifle principles, methods, and tools.” Pearl argues that this is what is happening in terms of causal thinking relative to statistical thinking. I think he, and other academics who make similar speech prohibition arguments, are hyperbolic, but I think it is important to consider whether we are limiting speech and knowledge in an important way. In many studies, we cannot directly see the causal structure, and statistics does have ways of helping us better understand it, even if it cannot point to a causal element directly. Causal thinking alone can lead to errors in thinking, and can be hijacked by those who deliberately want to do harm by spreading lies and false information. Sometimes regressions and correlations hint at possible causal structures or completely eliminate others from consideration. The point is that statistics is still useful, but that it is something we should lean into as a tool to help us identify causality, not as the endpoint of research beyond which we cannot make any assumptions or conclusions.
Academics, such as Pearl and some genetic researchers, may want to push forward with ways of thinking that others consider taboo, and sometimes fail to adequately understand and address the concerns that individuals have about the fields. Addressing these areas requires tact and an ability to connect research in fields deemed off limits to the fields that are acceptable. Statistics and a turn away from a language of causality may have been a missed opportunity in scientific understanding, but it is important to recognize that human minds have posited impossible causal connections throughout history, and that we needed statistics to help demonstrate how impossible these causal chains were. If causality became taboo, it was at least partly because there were major epistemic problems in the field of causality. The time may have come for addressing causality more directly, but I am not convinced that Pearl is correct in arguing that there is a prohibition on speech around causality, at least not if the opportunity exists to tactfully and responsibly address causality as I think he does in his book.
A Vice Doom Loop

In Vices of the Mind Quassim Cassam asks if we can escape our epistemic vices. He takes a deep look at epistemic vices, how they impact our thinking and behavior, and asks if we are stuck with them forever, or if we can improve and overcome them. Unfortunately for those of us who wish to become more epistemically virtuous, Cassam has some bad news that comes in the form of a vice doom loop. He writes,
“One is unlikely to take paraphrasing exercises seriously unless one already has a degree of intellectual humility. If one has the requisite degree of humility then one isn’t intellectually arrogant. If one is intellectually arrogant then one probably won’t be humble enough to do the exercises. In the same way, the epistemically lazy may well be too lazy to do anything about their laziness, and the complacent too complacent to worry about being complacent. In all of these cases, the problem is that the project of undoing one’s character vices is virtue-dependent, and those who have the necessary epistemic virtues don’t have the epistemic vices.”
The epistemic vice doom loop stems from the fact that epistemic vices are self-reinforcing. They create the mental modes that reinforce vicious thinking. Escaping from epistemic vices, as Cassam explains, requires that we possess epistemic virtues, which, by definition, the epistemically vicious do not possess. Virtues take deliberate effort and practice to build and maintain. We need virtues to escape our vices, but our vices prevent us from developing such virtues, causing a further entrenchment of our vices.
So it seems as though epistemic vices are inescapable and that those with epistemic vices are stuck with them forever. Luckily, Cassam continues and explains that this is not the case. The world that Cassam’s quote lays out presents us with a false dichotomy. We are not either wholly epistemically vicious or wholly epistemically virtuous. We exist somewhere in the middle, with some degree of epistemic viciousness present in our thinking and behavior and some degree of epistemic virtuosity. This means that we can ultimately overcome our vices. We can become less epistemically insouciant, we can become less arrogant, and we can reduce our wishful thinking. The vice doom loop is escapable because few of us are entirely epistemically vicious; in at least some situations we are more epistemically virtuous, and we can learn from those situations and improve in others.
Epistemic Self-Improvement

Is epistemic self-improvement possible? That is, can we individually improve the ways we think to become more conducive to knowledge? If we can’t, does that mean we are stuck with epistemic vices, unable to improve our thinking to become epistemically virtuous?
These are important questions because they determine whether we can progress as a collective and overcome ways of thinking that hinder knowledge. Gullibility, arrogance, and closed-mindedness are a few epistemic vices that I have written about recently that demonstrate how hard epistemic self-improvement can be. If you are gullible it is hard to make a change on your own to be less easily fooled. If you are arrogant it is hard to be introspective in a way that allows you to see how your arrogance has limited your knowledge. And if you are closed-minded then it is unlikely you will see a need to expand your knowledge at all. So can we really improve ourselves to think better?
Quassim Cassam seems to believe that we can. He identifies ways in which people have improved their thinking over time and how humans within institutions have become more epistemically virtuous throughout our history. After running through some examples and support for epistemic self-improvement in Vices of the Mind, Cassam writes, “none of this proves that self-improvement in respect of thinking vices is possible, but if our thinking can’t be improved that would make it one of the few things that humans do that they can’t do better with practice and training.”
I am currently reading Joseph Henrich’s book The WEIRDest People in the World, in which he argues that human psychology both shapes and is shaped by institutions. I think he would agree with Cassam, arguing that individual self-improvement is possible and that it can contribute to a positive feedback loop where people improve their thinking, improving the institutions they are a part of, which feeds back into improved thinking. I agree with Cassam, and I would find it surprising if we couldn’t improve our thinking and become more epistemically virtuous by setting about to do so with practice. Viewing this idea through a Henrich lens also suggests that as we try to become more epistemically virtuous, we would shape institutions to better support us, giving us an extra hand from the outside to help improve our thinking. Individually we can become better thinkers, and that allows us to create better institutions that further support better thinking, creating a virtuous cycle of epistemic self-improvement. There are certainly many gears into which sand can be thrown during this process, but overall, it should leave us feeling more epistemically optimistic about humans and our societies.
Epistemic Optimists & Pessimists

A little while back I did a mini dive into cognitive psychology and behavioral economics by reading Thinking, Fast and Slow by Daniel Kahneman, Nudge by Sunstein and Thaler, Risk Savvy by Gerd Gigerenzer, Vices of the Mind by Quassim Cassam, and The Book of Why by Judea Pearl. Each of these authors asked questions about the ways we think and tried to explain why our thinking so often seems to go awry. Recognizing that it is a useful but insufficient dichotomy, each of these authors can be thought of as either an epistemic optimist or an epistemic pessimist.
In Vices of the Mind Cassam gives us the definitions of epistemic optimists and pessimists. He writes, “Optimism is the view that self-improvement is possible, and that there is often (though not always) something we can do about our epistemic vices, including many of our implicit biases.” The optimists, Cassam argues, believe that we can learn about our minds, our biases, and how our thinking works to make better decisions and improve our beliefs to foster knowledge. Cassam continues, “Pessimism is much more sceptical about the prospects of self-improvement or, at any rate, of lasting self-improvement. … For pessimists, the focus of inquiry shouldn’t be on overcoming our epistemic vices but on outsmarting them, that is, finding ways to work around them so as to reduce their ill effects.” With Cassam’s framework, I think it is possible to look at the way each author and researcher presents information in their books and to think of them as either optimists or pessimists.
Daniel Kahneman in Thinking, Fast and Slow wants to be an optimist, but ultimately he is a pessimist. He writes throughout the book that his own knowledge about biases, cognitive illusions, and thinking errors hardly helps him in his own life. He states that what he really hopes his book accomplishes is improved water-cooler talk and a better understanding of how the brain works, not necessarily better decision-making for those who read it. Similarly, Sunstein and Thaler are pessimists. They clearly believe that we can outsmart our epistemic vices, but not through our own actions; rather, through outside nudges that smarter people and responsible choice architects have designed for us. Neither Kahneman nor the Chicago economics pair believes we really have any ability to control and change our thinking independently.
Gigerenzer and Pearl are both optimists. While Gigerenzer believes that nudges can be helpful and encourages the development of aids to outsmart our epistemic vices, he also clearly believes that we can overcome them on our own simply through experience and practice. For Gigerenzer, achieving epistemic virtuosity is possible, even if it isn’t something you explicitly work toward. Pearl focuses on how human beings are able to interpret and understand causal structures in the real world, and he breaks from the fashionable viewpoint of most academics in saying that humans are actually very good at understanding, interpreting, and measuring causality. He is an epistemic optimist because he believes, and argues in his book, that we can improve our thinking, improve the ways we approach questions of causality, and improve our knowledge without having to rely on fancy tricks to outsmart epistemic vices. Both authors believe that growth and improved thinking are possible.
Cassam is harder to place, but I think he still is best thought of as an epistemic optimist. He believes that we are blameworthy for our epistemic vices and that they are indeed reprehensible. He also believes that we can improve our thinking and reach a more epistemically virtuous way of thinking if we are deliberate about addressing our epistemic vices. I don’t think that Cassam believes we have to outsmart our epistemic vices, only that we need to be able to recognize them and understand how to get beyond them, and I believe that he would argue that we can do so.
Ultimately, I think we should learn from Kahneman, Sunstein, and Thaler and be more thoughtful about our nudges as we look for ways to overcome the limitations of our minds. However, I do believe that learning about epistemic vices and taking steps to improve our thinking can help us grow and become more epistemically virtuous. Simple experience, as I think Gigerenzer would argue, will help us improve naturally, and deliberate and calibrated thought, as Pearl might argue, can help us see real and accurate causal structures in the world. I agree with Cassam that we are at least revision responsible for our epistemic vices, and that we can take steps to get beyond them, improving our thinking and becoming epistemically virtuous. In the end, I don’t think humanity is a helpless pool of irrationality that can only improve its thinking and decision-making through nudges. I think we can, and over time will, improve our statistical thinking and decision-making and limit cognitive errors and biases as individuals and as societies (then again, maybe it’s just the morning coffee talking).