Laboratory Proof

“If the standard of laboratory proof had been applied to scurvy,” writes Judea Pearl in The Book of Why, “then sailors would have continued dying right up until the 1930s, because until the discovery of vitamin C, there was no laboratory proof that citrus fruits prevented scurvy.” Pearl’s point is that insisting on definitive, exact proof of causality is not always for the greater good. Modern science sometimes spurns clear statistical relationships and evidence because statistical relationships alone cannot be counted on as concrete causal evidence. Withholding a clear answer because some marginal unknowns may still exist carries its own costs.
Sailors did not know why or how citrus fruits prevented scurvy, but observations demonstrated that citrus fruits managed to prevent scurvy. There was no clear understanding of what scurvy was or why citrus fruits were helpful, but it was commonly understood that a causal relationship existed. People acted on these observations and lives were saved.
In two episodes, the Don’t Panic Geocast has discussed journal articles in the British Medical Journal that make the same point as Pearl. As a critique of the insistence on randomized controlled trials, the articles highlight the troubling reality that there have never been any randomized controlled trials on the effectiveness of parachutes when jumping from airplanes. The articles are hilarious and clearly satirical, but they ultimately arrive at the same point Pearl makes in the quote above – laboratory proof is not always necessary, practical, or reasonable when lives are on the line.
Pearl argues that we can rely on our ability to identify causality even without laboratory proof when we have sufficient statistical analysis and an understanding of the relationships involved. Statisticians always tell us that correlation is not causation and that observational studies are not sufficient to determine causality, yet the citrus fruit and parachute examples show that this mindset is not always appropriate. Sometimes a more realistic, common-sense understanding of causation – even if supported only by correlational relationships and statistics – is more important than laboratory proof.
The Screening-Off Effect

Sometimes to our great benefit, and sometimes to our detriment, humans like to put things into categories – at least Western, Educated, Industrialized, Rich, Democratic (WEIRD) people do. We break things into component parts and sort each part into a category. We do this with things like planets, animals, and players within sports. We like established categories and dislike it when our categorizations change. This ability has greatly helped us in science and strategic planning, allowing our species to do incredible things and learn crucial lessons about the world. What is remarkable about this ability is how natural and easy it is for us, yet how hard it is to explain or program into a machine.
One component of this remarkable ability is referred to as the screening-off effect by Judea Pearl in The Book of Why. Pearl writes, “how do we decide which information to disregard, when every new piece of information changes the boundary between the relevant and the irrelevant? For humans, this understanding comes naturally. Even three-year-old toddlers understand the screening-off effect, though they don’t have a name for it. … But machines do not have this instinct, which is one reason that we equip them with causal diagrams.”
From a young age we know what information is the most important and what information we can ignore. We intuitively have a good sense for when we should seek out more information and when we have enough to make a decision (although sometimes we don’t follow this intuitive sense). We know there is always more information out there, but don’t have time to seek out every piece of information possible. Luckily, the screening-off effect helps us know when to stop and makes decision-making possible for us.
Beyond knowing when to stop, the screening-off effect helps us know when to ignore irrelevant information. The price of tea in China isn’t a relevant factor when deciding what time to wake up the next morning. We recognize that there are no meaningful causal pathways between the price of tea and the best time for us to wake up. This causal insight, however, doesn’t exist for machines that are programmed only with the specific statistics we build into them. For a machine to know it can ignore the price of tea in China, we have to give it a causal model in which that variable plays no role. The screening-off effect, Pearl explains, is part of what allows humans to think causally. In cutting-edge science there are many factors we wouldn’t think to screen out that may impact the results of experiments, but for the most part, we know what can be ignored and can look at the world around us through a causal lens because we know what is and is not important.
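To make screening-off concrete, here is a minimal simulation sketch (my own illustration, not an example from Pearl’s book) of a simple causal chain: A causes B, and B causes C. Unconditionally A and C are correlated, but once we condition on B, A carries no further information about C; B screens A off from C.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

a = rng.normal(size=n)                 # A: root cause
b = 2.0 * a + rng.normal(size=n)       # B: caused by A
c = 3.0 * b + rng.normal(size=n)       # C: caused by B; A affects C only through B

# Without conditioning, A and C are strongly correlated.
print("corr(A, C):           ", round(np.corrcoef(a, c)[0, 1], 3))

# "Condition on B" by keeping only samples where B sits in a narrow band.
band = np.abs(b - 1.0) < 0.05
print("corr(A, C | B near 1):", round(np.corrcoef(a[band], c[band])[0, 1], 3))  # near zero
```

Conditioning is approximated here by restricting attention to samples where B falls in a narrow band; a causal diagram encodes the same independence symbolically, which is part of why Pearl says we equip machines with causal diagrams.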
Slope is Agnostic to Cause and Effect

I like statistics. I like to think statistically, to recognize that there is a percent chance of one outcome and that it can be influenced by other factors. I enjoy looking at best-fit lines, seeing correlations between different variables, and seeing how trend lines change if you control for different variables. However, statistics and trend lines don’t actually tell us anything about causality.
In The Book of Why Judea Pearl writes, “the slope (after scaling) is the same no matter whether you plot X against Y or Y against X. In other words, the slope is completely agnostic as to cause and effect. One variable could cause the other, or they could both be effects of a third cause; for the purpose of prediction, it does not matter.”
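Pearl’s claim about the slope is easy to verify numerically. The sketch below is my own illustration with made-up numbers, not code from the book: after standardizing both variables, the least-squares slope of Y on X equals the slope of X on Y (both equal the correlation coefficient), so nothing about the slope reveals which variable is the cause.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)
y = 0.7 * x + rng.normal(size=10_000)   # in this toy example X happens to cause Y

def standardized_slope(u, v):
    """Least-squares slope of v regressed on u, after scaling both to unit variance."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    return np.polyfit(u, v, 1)[0]

print(standardized_slope(x, y))  # ~0.57: the correlation coefficient
print(standardized_slope(y, x))  # the same value; the slope is silent about causal direction
```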
In statistics we all know that correlation is not causation, but this quote helps us remember important information when we see a statistical analysis and a plot with a linear regression line running through it. The regression line is like the owl that Pearl describes earlier in the book. The owl is able to predict where a mouse is likely to be and which direction it will run, but the owl does not seem to know why a mouse is likely to be in a given location or why it is likely to run in one direction over another. It simply knows from experience and observation what a mouse is likely to do.
The regression line is a best fit for numerous observations, but it doesn’t tell us whether one variable causes another or whether both are influenced in a similar manner by another variable. The regression line knows where the mouse might be and where it might run, but it doesn’t know why.
In statistics courses we end at this point of correlation. We might look for other variables that are correlated or try to control for third variables to see if the relationship remains, but we never answer the question of causality; we never get to the why. Pearl thinks this is a limitation we do not need to place on ourselves. Humans, unlike owls, can understand causality. We can recognize the various reasons why a mouse might be hiding under a bush and why it may choose to run in one direction rather than another. Correlations can help us start to see where relationships exist, but it is the ability of our minds to understand causal pathways that helps us determine causation.
Pearl argues that statisticians avoid these causal arguments out of caution, but that this only creates more problems down the line. Important statistical research in areas of high interest or concern to lawmakers, businesspeople, or the general public is carried beyond the cautious bounds that causality-averse statisticians place on their work. Showing correlations without making an effort to understand the causality behind them leaves scientific work vulnerable to the epistemically malevolent who would like to use correlations to their own ends. While statisticians rigorously train themselves to understand that correlation is not causation, the general public and those struck with motivated reasoning don’t hold themselves to the same standard. Leaving statistical analysis at the level of correlation means that others can attribute the cause and effect of their choice to the data, and the proposed causal pathways can be wildly inaccurate and even dangerous. Pearl suggests that statisticians and researchers are thus obligated to do more with causal structures, to round out their work and better develop ideas of causation that can be defended once their work moves beyond the world of academic journals.
The Fundamental Nature of Cause and Effect

In my undergraduate and graduate studies I took a few statistics classes, and I remember the challenge of learning probability. Probability, odds, and statistics are not always easy to understand and interpret. Some concepts are pretty straightforward, and others seem to contradict what we would expect if we had not gone through the math and studied the concepts in depth. In contrast to the difficult and sometimes counterintuitive nature of statistics, consider causality, which is a challenging concept but, unlike statistics, is something we are able to intuit from a very young age.
In The Book of Why Judea Pearl writes, “In both a cognitive and a philosophical sense, the idea of cause and effect is much more fundamental than probability. We begin learning causes and effects before we understand language and before we understand mathematics.”
As Pearl explains, we see causality naturally and experience causality as we move through our lives. From a young child who learns that crying brings attention to a nuclear physicist who learns what happens when two atoms collide at high energy levels, our minds are constantly looking at the world and looking for causes. This learning begins with observing the phenomena around us and continues as we predict what outcomes will follow from certain inputs. Eventually, our minds reach a point where we can understand why our predictions are accurate or inaccurate, and we can imagine new ways to bring about certain outcomes. Even if we cannot explain all of this, we can still understand causation at a fundamental and intuitive level.
However, many of us deny that we can see and understand the world in a causal way. I am personally guilty of thinking in a purely statistical way and ignoring the causal. The classes I took in college helped me understand statistics and probability, but also told me not to trust my intuitive causal thinking. Books like Kahneman’s Thinking Fast and Slow cemented this mindset for me. Rationality, we believe, requires that we think statistically and discount our intuitions for fear of bias. Modern science says we can only trust evidence when it is backed by randomized controlled trials and directs us to think of the world through correlations and statistical relationships, not through a lens of causality.
Pearl pushes back against this notion. By arguing that causality is fundamental to the human mind, he implies that our causal reasoning can and should be trusted. Throughout the book he demonstrates that a purely statistical way of thinking leaves us short of the knowledge we really need to improve the world. He demonstrates that complex tactics to remove variables from equations in statistical methods are often unnecessary, and that we can accept the results of experiments and interventions even when they are not fully randomized controlled trials. For much of human history our causal thinking has led us astray, but I think Pearl argues that we have overcorrected in modern statistics and science, and that we need to return to our causal roots to move forward and solve problems that statistics tells us are impossible to solve.
Correlation and Causation

I have an XKCD comic taped to the door of my office. The comic is about the mantra of statistics: that correlation is not causation. I taped it to my door because I loved learning statistics in graduate school and thinking deeply about associations and how mere correlations cannot be used to demonstrate that one thing causes another. Two events can correlate yet have nothing to do with each other, and a third thing may influence both, producing a correlation without any causal link between them.
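A quick simulation makes the third-variable problem concrete. The scenario and variable names below are hypothetical, chosen only for illustration: a common cause (summer heat) drives both ice cream sales and drownings, so the two correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

summer_heat = rng.normal(size=n)                          # the hidden common cause
ice_cream_sales = 2.0 * summer_heat + rng.normal(size=n)  # driven by heat
drownings = 1.5 * summer_heat + rng.normal(size=n)        # also driven by heat

# A strong correlation appears even though neither variable causes the other.
print("corr(ice cream, drownings):", round(np.corrcoef(ice_cream_sales, drownings)[0, 1], 3))
```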
But Judea Pearl thinks that science and researchers have fallen into a trap laid out by statisticians and the infinitely repeated correlation does not imply causation mantra. Regarding this perspective of statistics he writes, “it tells us that correlation is not causation, but it does not tell us what causation is.”
Pearl seems to suggest in The Book of Why that there was a time when there was too much data, too much that humans didn’t know, and too many people ready to offer incomplete assessments based on anecdote and partial information. From that time sprouted the idea that correlation does not imply causation. We started to see that statistics could describe relationships and could be used to pull apart entangled causal webs, identifying each individual component and assessing its contribution to a given outcome. However, as his quote shows, this approach never actually answered what causation is. It never told us when we can know and ascertain that a causal structure and causal mechanism are in place.
“Over and over again,” writes Pearl, “in science and in business, we see situations where mere data aren’t enough.”
To demonstrate the shortcomings of our high regard for statistics and our mantra that correlation is not causation, Pearl walks us through the congressional testimony and trials of big tobacco companies in the United States. The data told us there was a correlation between smoking and lung cancer. There was overwhelming statistical evidence that smoking was associated with lung cancer, but through statistics alone we couldn’t attain 100 percent certainty that smoking caused it. The companies themselves muddied the waters with misleading studies and cherry-picked results. They hid behind the claim that correlation is not causation, and behind the confusion around causation that statistics could never fully clarify.
Failing to develop a real sense of causation, failing to move beyond big data, and failing to get beyond statistical correlations can cause real harm. We need to be able to recognize causation, even without relying on randomized controlled trials, and we need to be able to make decisions that save lives. The lesson of the comic taped to my door is helpful when we are trying to be scientific and accurate in our thinking, but it can also lead us astray when we fail to trust a causal structure that we can see but can’t definitively prove via statistics.
A Vice Doom Loop

In Vices of the Mind Quassim Cassam asks if we can escape our epistemic vices. He takes a deep look at epistemic vices, how they impact our thinking and behavior, and asks if we are stuck with them forever, or if we can improve and overcome them. Unfortunately for those of us who wish to become more epistemically virtuous, Cassam has some bad news that comes in the form of a vice doom loop. He writes,
“One is unlikely to take paraphrasing exercises seriously unless one already has a degree of intellectual humility. If one has the requisite degree of humility then one isn’t intellectually arrogant. If one is intellectually arrogant then one probably won’t be humble enough to do the exercises. In the same way, the epistemically lazy may well be too lazy to do anything about their laziness, and the complacent too complacent to worry about being complacent. In all of these cases, the problem is that the project of undoing one’s character vices is virtue-dependent, and those who have the necessary epistemic virtues don’t have the epistemic vices.”
The epistemic vice doom loop stems from the fact that epistemic vices are self-reinforcing. They create the mental modes that reinforce vicious thinking. Escaping from epistemic vices, as Cassam explains, requires that we possess epistemic virtues, which by default we do not possess. Virtues take deliberate effort and practice to build and maintain. We need virtues to escape our vices, but our vices prevent us from developing those virtues, which further entrenches the vices.
So it seems as though epistemic vices are inescapable and that those with epistemic vices are stuck with them forever. Luckily, Cassam continues and explains that this is not the case. The world that Cassam’s quote lays out presents us with a false dichotomy. We are not either wholly epistemically vicious or epistemically virtuous. We exist somewhere in the middle, with some degree of epistemic viciousness present in our thinking and behavior and some degree of epistemic virtuosity. This means that we can ultimately overcome our vices. We can become less epistemically insouciant, we can become less arrogant, and we can reduce our wishful thinking. The vice doom loop is escapable because few of us are entirely epistemically vicious, and at least in some situations we are more epistemically virtuous, and we can learn from those situations and improve in others.
Epistemic Optimists & Pessimists

A little while back I did a mini dive into cognitive psychology and behavioral economics by reading Thinking Fast and Slow by Daniel Kahneman, Nudge by Sunstein and Thaler, Risk Savvy by Gerd Gigerenzer, Vices of the Mind by Quassim Cassam, and The Book of Why by Judea Pearl. Each of these authors asked questions about the ways we think and tried to explain why our thinking so often seems to go awry. Recognizing that it is a useful but insufficient dichotomy, each of these authors can be thought of as either an epistemic optimist or an epistemic pessimist.
In Vices of the Mind Cassam gives us definitions for epistemic optimists and pessimists. He writes, “Optimism is the view that self-improvement is possible, and that there is often (though not always) something we can do about our epistemic vices, including many of our implicit biases.” The optimist, Cassam argues, believes that we can learn about our minds, our biases, and how our thinking works to make better decisions and improve our beliefs to foster knowledge. Cassam continues, “Pessimism is much more sceptical about the prospects of self-improvement or, at any rate, of lasting self-improvement. … For pessimists, the focus of inquiry shouldn’t be on overcoming our epistemic vices but on outsmarting them, that is, finding ways to work around them so as to reduce their ill effects.” With Cassam’s framework, I think it is possible to look at the way each author and researcher presents information in their books and to classify them as either optimists or pessimists.
Daniel Kahneman in Thinking Fast and Slow wants to be an optimist, but ultimately is a pessimist. He writes throughout the book that his own knowledge about biases, cognitive illusions, and thinking errors hardly helps him in his own life. He states that what he really hopes his book accomplishes is improved water-cooler talk and a better understanding of how the brain works, not necessarily better decision-making for those who read it. Similarly, Sunstein and Thaler are pessimists. They clearly believe that we can outsmart our epistemic vices, but not through our own actions – rather through outside nudges that smarter people and responsible choice architects design for us. Neither Kahneman nor Sunstein and Thaler believe we really have any ability to control and change our thinking independently.
Gigerenzer and Pearl are both optimists. While Gigerenzer believes that nudges can be helpful and encourages the development of aids to outsmart our epistemic vices, he also clearly believes that we can overcome them on our own simply through gaining experience and practice. For Gigerenzer, achieving epistemic virtuosity is possible, even if it isn’t something you explicitly work toward. Pearl focuses on how human beings are able to interpret and understand causal structures in the real world, and he breaks from the fashionable viewpoint of most academics in saying that humans are actually very good at understanding, interpreting, and measuring causality. He is an epistemic optimist because he believes, and argues in his book, that we can improve our thinking, improve the ways we approach questions of causality, and improve our knowledge without having to rely on fancy tricks to outsmart epistemic vices. Both authors believe that growth and improved thinking are possible.
Cassam is harder to place, but I think he still is best thought of as an epistemic optimist. He believes that we are blameworthy for our epistemic vices and that they are indeed reprehensible. He also believes that we can improve our thinking and reach a more epistemically virtuous way of thinking if we are deliberate about addressing our epistemic vices. I don’t think that Cassam believes we have to outsmart our epistemic vices, only that we need to be able to recognize them and understand how to get beyond them, and I believe that he would argue that we can do so.
Ultimately, I think that we should learn from Kahneman, Sunstein, and Thaler and be more thoughtful about our nudges as we look for ways to overcome the limitations of our minds. However, I do believe that learning about epistemic vices and taking steps to improve our thinking can help us grow and become more epistemically virtuous. Simple experience, as I think Gigerenzer would argue, will help us improve naturally, and deliberate and calibrated thought, as Pearl might argue, can help us clearly see real and accurate causal structures in the world. I agree with Cassam that we are at least revision responsible for our epistemic vices, and that we can take steps to get beyond them, improving our thinking and becoming epistemically virtuous. In the end, I don’t think humanity is a helpless pool of irrationality that can only improve its thinking and decision-making through nudges. I think we can, and over time will, improve our statistical thinking and decision-making and limit cognitive errors and biases as individuals and as societies (then again, maybe it’s just the morning coffee talking).
Transformational Insights

In my last post I wrote about self-deceptive rationalization. The idea was that even when trying to critically reflect on our lives and learn lessons from our experiences, we can err and end up entrenching problematic and inaccurate beliefs about ourselves and the world. I suggested that one potential way to be bumped out of the problem of inaccurate self-reflection was to gain a transformational insight from an external event. Wishful thinking might come to an end when you don’t get the promotion you were sure was coming your way. Gullibility can be ended after you have been swindled by a conman. The arrogant can learn their lesson after a painful divorce. However, Quassim Cassam, in his book Vices of the Mind, suggests that even transformational insights triggered by external events might not be enough to help us change our internal reflection.
In the book Cassam writes, “this leaves it open, however, whether self-knowledge by transformational insight is as vulnerable to the impact of epistemic vices as self-knowledge by active critical reflection. … Transformational insights are always a matter of interpretation.” Even external factors that have the potential to force us to recognize our epistemic vices may fail to do so. The wishful thinkers may continue on being wishful thinkers, believing they simply hit one blip in the road. The gullible may learn their lesson once, but need to learn it again and again in different contexts. And the arrogant may not be able to recognize how their arrogance played into a divorce, instead choosing to view themselves as unfortunate victims. Because transformational insights – outside shocks that force us to confront our epistemic vices – are always a matter of interpretation, they cannot be a reliable way to ensure we eliminate those vices.
Again, this seems to leave us in a place where we cannot overcome our epistemic vices without developing epistemic virtues. But this puts us back in a circular problem: if our epistemic vices prevent us from developing and cultivating epistemic virtues, and if we need epistemic virtues to overcome our epistemic vices, then how do we ever improve our thinking?
The answer for most of us is probably pretty boring and disappointing. Incrementally, as we gain new perspectives and more experience, we can hopefully come to distinguish between epistemic virtues and epistemic vices. Epistemic vices will systematically obstruct knowledge, leading to poorer decision-making and worse outcomes. As we seek more positive outcomes and better understanding of the world, we will slowly start to recognize epistemic vices and to see them in ourselves. Incrementally, we will become more virtuous.
This is not an exciting answer for anyone looking to make a dramatic change in their life, to achieve a New Year’s resolution, or to introduce new policy to save the world. It is, however, practical, and it should take some pressure off of us. We can work each day to be a little more self-aware, a little more epistemically virtuous, and a little better at cultivating knowledge. We can grow over time, without putting pressure on ourselves to be epistemically perfect all at once. After all, trying to do so might trigger self-deceptive rationalization, and our transformational insights are subject to interpretation, which could be wrong.
Beliefs Are Not Voluntary

One of the ideas that Quassim Cassam examines in his book Vices of the Mind is the idea of responsibility. Cassam recognizes two forms of responsibility in his book and examines those forms of responsibility through the lens of epistemic vices. The first form of responsibility is acquisition responsibility, or our responsibility for acquiring beliefs or developing ways of thinking, and the second form of responsibility is revision responsibility, or our responsibility for changing beliefs and ways of thinking that are shown to be harmful.
Within this context Cassam provides interesting insight about our beliefs. He writes, “If I raise my arm voluntarily, without being forced to raise it, then I am in this sense responsible for raising it. Notoriously, we lack voluntary control over our own beliefs. Belief is not voluntary.”
Cassam explains that if it is raining outside, we cannot help but believe that it is raining. We don’t have control over many of our beliefs; they are in some ways inescapable, almost forced on us by factors beyond our control. I think this is true for many of our beliefs, ranging from spiritual beliefs to objective beliefs about the world. As Cassam argues, we are not acquisition responsible for believing that we are individuals, that something is a certain color, or that our favorite sports team is going to have another dreadful season.
But we are revision responsible for our beliefs.
Cassam continues, “We do, however, have a different type of control over our own beliefs, namely, evaluative control, and this is sufficient for us to count as revision responsible for our beliefs.”
Cassam introduces ideas from Pamela Hieronymi to explain our evaluative control over our beliefs. Hieronymi argues that we can revise our beliefs when new information arises that challenges those beliefs. She uses the example of our beliefs for how long a commute will be and our shifting beliefs if we hear about heavy traffic. We might not be responsible for the initial beliefs that we develop, but we are responsible for changing those beliefs if they turn out to be incorrect. We can evaluate our beliefs, reflect on their accuracy, and make adjustments based on those evaluations.
It is important for us to make this distinction because it helps us think more carefully about how we assign blame for inaccurate beliefs. We cannot blame people for developing inaccurate beliefs, but we can blame them for failing to change those beliefs. We should not spend time criticizing people for developing racist beliefs, harmful spiritual beliefs, or wildly inaccurate beliefs about health, well-being, and social structures. What we should do is blame people for failing to recognize that their beliefs are wrong, and we should help people build evaluative capacities to better reflect on their own beliefs. This changes our stance from labeling people as racists, bigots, or jerks and instead puts the responsibility on us to foster a society of accurate self-reflection that pushes back against inaccurate beliefs. Labeling people blames them for acquiring vices, which is unreasonable, but fostering a culture that values accurate information will ease the transition to more accurate beliefs.

A Vaccine for Lies and Falsehoods

Vaccines are on everyone’s mind this year as we hope to move forward from the Coronavirus Pandemic, and I cannot help but think about today’s quote from Quassim Cassam’s book Vices of the Mind through a vaccine lens. While writing about ways to build and maintain epistemic virtues Cassam writes, “only the inculcation and cultivation of the ability to distinguish truth from lies can prevent our knowledge from being undermined by malevolent individuals and organizations that peddle falsehoods for their own political or economic ends.” In other words, there is no vaccine for lies and falsehoods, only the hard work of building the skills to recognize truth, narrative, and outright lies.
I am also reminded of a saying that Steven Pinker included in his book Enlightenment Now: “any jackass can knock down a barn, but it takes a carpenter to build one.” Pinker’s saying comes to mind because building knowledge is hard, while spreading falsehoods is easy. Epistemic vices are easy, but epistemic virtues are hard.
Anyone can be closed-minded, anyone can use lies to try to better their own position, and anyone can be tricked by wishful thinking. It takes effort and concentration to be open-minded yet not gullible, to identify and counter lies, and to create and transmit knowledge for use by other people. The vast knowledge bases that humanity has built have taken years to develop, to weed out inaccuracies, and to painstakingly home in on ever more precise and accurate understandings of the universe. All of this knowledge and information has taken incredible amounts of hard work by people dedicated to building it.
But any jackass can knock it all down. Anyone can come along and attack science, attack knowledge, spread misinformation and deliberately use disinformation to confuse and mislead people. Being an epistemic carpenter and building knowledge is hard, but being a conman and acting epistemically malevolent is easy.
The task for all of us is to think critically about our knowledge, about the systems and structures that have facilitated our knowledge growth and development as a species over time, and to do what we can to be more epistemically virtuous. Only by working hard to identify truth, to improve systems for creating accurate information, and to enhance knowledge highways that help people learn and transmit knowledge effectively can we continue to move forward. At any point we can choose to throw sand in the gears of knowledge, bringing the whole system down, or we can find ways to make it harder to gum up the knowledge machinery we have built. We must do the latter if we want to continue to grow, develop, and live peacefully rather than at the mercy of the epistemically malevolent. After all, there is no vaccine to cure us of lies and falsehoods.