Motivations and Results

Yesterday I wrote about Quassim Cassam’s suggestion that virtues are teleological and that, as a result, motivations are also teleological. However, that conclusion may not be correct, and it may not even be the argument that Cassam puts forward.
Cassam writes, “there is no reason to suppose that epistemic vices are rooted in a desire for ignorance. Epistemic vices may result in ignorance but that is not the same as being motivated by a desire for ignorance.” Cassam is maintaining a consequentialist view that epistemic vices systematically obstruct knowledge. It is a consequentialist argument in the sense that the outcome of particular behaviors and ways of thinking are likely to hinder knowledge, and we can understand those ways of thinking and behaviors as vices based on their consequences.
Cassam continues, “the closed-minded needn’t lack a healthy desire for knowledge but their approach to inquiry isn’t conducive to knowledge. There is a mismatch between what they seek – cognitive contact with reality – and how they go about achieving it.”
From this point it is hard to argue that motivations are also teleological and consequential. Limiting our thinking to just epistemic motivations, we can see that someone may not be motivated by trying to prove what they already believe is correct or motivated by a prejudice against certain information and opinions, yet can still end up obstructing knowledge, developing epistemic prejudices, or being closed-minded.
The idea of a thought bubble is a useful demonstration. Few of us would say that thought bubbles are good for us, and most of us would acknowledge that they obstruct knowledge by trapping us in an information ecosystem where everyone we know and interact with holds the same beliefs and views. But few of us ever really escape thought bubbles. We don’t necessarily aim to be closed-minded and choose to only surround ourselves with people who think the same as us, but our time, attention, and energy are limited. We cannot always seek out people beyond our workplaces, religious communities, or families to obtain views drastically different from our own. We only have so much time to watch the news, read books, and seek out information about the minimum wage, the causes of WWII, and new cancer therapies. Thought bubbles are an unavoidable outcome of the huge amount of information available and our limited ability to focus on and develop knowledge of any specific thing.
We may not be motivated to obstruct knowledge. We may truly be motivated by finding more knowledge, but environmental factors, other decisions we have made, and potentially just ignorance of how to improve our information ecosystem could prevent us from eliminating or avoiding an epistemic vice. Our motivations in these instances cannot be thought of teleologically. Judging them by the outcome alone misses many of the factors beyond our control that influenced where we ultimately ended up and whether we developed epistemic vices. Motivations that serve us well in some situations may turn out to feed epistemic vices that hinder knowledge in other situations. While the outcomes may end up similar, there does seem to be a real difference between making an error that hinders knowledge and deliberately hindering knowledge out of a motivation to hold on to power, prestige, influence, or prior beliefs.
Motivations, Virtues, & Vices

Virtues are teleological. At least, the argument that Quassim Cassam puts forward in his book Vices of the Mind relies on the suggestion that our virtues are defined by their actual outcomes and results in the real world. Cassam specifically looks at epistemic vices in the mind and demonstrates that epistemic vices systematically obstruct knowledge, whereas epistemic virtues systematically lead to an increase in knowledge. If the outcome of a particular way of thinking is more likely to increase the generation, transmission, and retention of knowledge, then it is a virtue, but if it is more likely to hinder one or more of those aspects of knowledge, it is a vice.
From this base, Cassam argues that our motivations are also teleological. Virtues and motivations are connected, and both are understood by the ways they actually shape and influence the world. Cassam writes, “every virtue can be defined in terms of particular motivation, and the goodness of the virtue is at least partly a function of the goodness of its particular motivation.”
I think this puts us in an interesting place when it comes to our motivations and whether we think we are virtuous or not. Initially, to me, motivations felt like they would be more deontological than teleological. As though motivations would be an intrinsic quality, defined as good on their own rather than in reference to their ends and the outcomes they produce. But on closer consideration I think that Cassam is correct. Certain motivations underpin certain behaviors, and behaviors can have systematic results in the real world, giving us a teleological view of our initial motivations.
As an example, Cassam quotes University of Oklahoma professor Linda Zagzebski by writing, “an open-minded person is motivated out of a delight in discovering new truths, a delight that is strong enough to outweigh the attachment to old beliefs.” In this example, motivations associated with discovering new truths, learning, and developing more accurate views of the world lead to the virtue of open-mindedness. These motivations, like the virtue they underlie, systematically lead to more knowledge, new discoveries, and ultimately better outcomes for the world. Conversely, a motivation to hold on to old beliefs, to not have to adjust one’s thinking and admit one may have been wrong, serves as a base for closed-mindedness. These motivations, along with the vice of being closed-minded, systematically inhibit knowledge, discovery, and progress. From this example, with the quote from Cassam in mind, we can see that virtues, vices, and motivations are teleological, capable of being understood as having consistent, if not invariable, positive or negative outcomes in our lives. Just as we can think of something as a virtue if it generally leads to positive outcomes, we can think of our motivations as being virtuous if they too lead to positive outcomes.
When we consider the motivations we have in our lives, and whether we have motivations to become virtuous people, we can think about whether our motivations will systematically lead to good outcomes for ourselves and our societies. It is possible to hold motivations that may be beneficial for us while producing negative externalities for society. We can examine our motivations just as we evaluate our virtues and vices, and try to shift toward more virtuous motivations to systematically increase the good we do and the knowledge we generate. Few of us are probably motivated to be closed-minded, arrogant, or to hold any other epistemic vice, but our motivations may lead to such vices, so it is important that we pick our motivations well based on the real-world outcomes they can inspire.
Systematically Obstructing Knowledge

The defining feature of epistemic vices, according to Quassim Cassam, is that they get in the way of knowledge. They inhibit the transmission of knowledge from one person to another, they prevent someone from acquiring knowledge, or they make it harder to retain and recall knowledge when needed. Importantly, epistemic vices don’t always obstruct knowledge, but they tend to do so systematically.
“There would be no justification for classifying closed-mindedness or arrogance as epistemic vices if they didn’t systematically get in the way of knowledge,” writes Cassam in Vices of the Mind. Cassam lays out his argument for striving against mental vices through a lens of consequentialism. Focusing on the outcomes of ways of thinking, Cassam argues that we should avoid mental vices because they lead to bad outcomes and limit knowledge in most cases.
Cassam notes that epistemic vices can turn out well for an individual in some cases. While not specifically mentioned by Cassam, we can use former President Donald Trump as an example. Cassam writes, “The point of distinguishing between systematically and invariably is to make room for the possibility that epistemic vices can have unexpected effects in particular cases.” Trump used a massive personal fortune, an unabashed bravado, and a suite of mental vices to bully his way into the presidency. His mental vices such as arrogance, closed-mindedness, and prejudice became features of his presidency, not defects. However, while his epistemic vices helped propel him to the presidency, they clearly and systematically created chaos and problems once he was in office. In his arrogance he attempted to pressure the president of Ukraine, leading to an impeachment. His closed-mindedness and wishful thinking contributed to his second impeachment as he spread baseless lies about the election.
For most of us in most situations, these same mental vices will likely lead to failure and errors rather than success. For most of us, arrogance is likely to prevent us from learning about areas where we could improve ourselves to perform better in upcoming job interviews. Closed-mindedness is likely to prevent us from gaining knowledge about saving money with solar panels or about a new restaurant that we would really enjoy. Prejudice is also likely to prevent us from learning about new hobbies, pastimes, or opportunities for investment. These vices don’t always lead to failure and limit important knowledge for us, as Trump demonstrated, but they are more likely to obstruct important knowledge than if we had pushed against them.
On Consequentialism

In his book Vices of the Mind, Quassim Cassam argues that patterns of thoughts and mental habits that obstruct knowledge are essentially moral vices. Ways of thinking and mental habits that enhance the acquisition, retention, and transmission of knowledge, according to Cassam, are moral virtues. Cassam defends his argument largely through a consequentialist view.
Cassam is open about his consequentialist frame of reference. He writes:
“Obstructivism is a form of consequentialism. … Moral vices systematically produce bad states of affairs. … The point of systematically is to allow us to ascribe moral virtue in the actual world to people who, as a result of bad luck, aren’t able to produce good: if they possessed a character trait that systematically produces good in that context (though not in their particular case) they still have the relevant moral virtues.”
I think that this view of epistemic vices is helpful. I know for me that there are times when I fall into the epistemic vices that Cassam highlights, and they can often be comforting, make me feel good about myself, or just be distractions from an otherwise busy and confusing world. However, recognizing that these vices systematically lead to poorer outcomes can help me understand why I should stay away from them.
Epistemic vices like scrolling through Twitter to look at posts that bash someone you dislike are systematically likely to produce bad outcomes by wasting your time, making you more prone to distractions, and prejudicing you against people you don’t agree with. What you spend your mental energy on matters, and in the case of Twitter scrolling, you are allowing your mind to indulge in shallow, quick thinking, closed-mindedness, and biases. It plays off confirmation bias, giving you the ability to see only posts that confirm what you believe or want to believe about a person or topic. It feels nice to bash someone else, but you are reinforcing a limited perspective that might be wrong and rewarding your brain for being shallow and inconsiderate. In the moment it is rewarding, but in the long run it will lead to worse thinking, shorter attention spans, and biased decision-making that is hard to escape once you have closed the Twitter tab. Consequentialism helps us see that the epistemic vices involved in Twitter scrolling, which feel harmless in the moment, are more likely to result in negative outcomes over time. The systematic nature of these epistemic vices, the consequences and outcomes of indulging them, is what defines them as vices.
Consequentialism, Cassam’s argument shows, can be a useful way to think about how we should behave. People who try to do good but experience bad luck and don’t produce the same good outcomes as others can still be viewed as morally virtuous. Even though in their particular situation a good result did not occur, those who practice moral virtues can be praised for behaving in a way that is systematically more likely to produce good. Conversely, people who behave in ways that systematically produce negative outcomes can be deterred from their negative behavior through social taboos and norms, even if a poor behavior might provide them with an opportunity to succeed in the short term. It is hard to take absolute stances on any position, but consequentialism gives us a frame through which we can approach difficult decisions and uncertainty by recognizing where systematic patterns are likely to lead to desired or undesired outcomes for ourselves and our societies.
Decision Weights

On the heels of the 2020 election, I cannot decide if this post is timely or untimely. On the one hand, this post is about how we should think about unlikely events, and I will argue, based on a quote from Daniel Kahneman’s book Thinking Fast and Slow, that we overweight unlikely outcomes and should better align our expectations with realistic probabilities. On the other hand, the 2020 election was closer than many people expected, we almost saw some very unlikely outcomes materialize, and one can argue that a few unlikely outcomes really did come to pass. Ultimately, this post falls in a difficult space, arguing that we should discount unlikely outcomes more than we actually do, while acknowledging that sometimes very unlikely outcomes really do happen.

In Thinking Fast and Slow Kahneman writes, “The decision weights that people assign to outcomes are not identical to the probabilities of these outcomes, contrary to the expectation principle.”  This quote is referencing studies which showed that people are not good at conceptualizing chance outcomes at the far tails of a distribution. When the chance of something occurring gets below 10%, and especially when it pushes into the sub 5% range, we have trouble connecting that with real world expectations. Our behaviors seem to change when things move from 50-50 to 75-25 or even to 80-20, but we have trouble adjusting any further once the probabilities really stretch beyond that point.

Kahneman continues, “Improbable outcomes are overweighted – this is the possibility effect. Outcomes that are almost certain are underweighted relative to actual certainty. The expectation principle, by which values are weighted by their probability, is poor psychology.”

When something has only a 5% or lower chance of happening, we behave as though the probability of that occurrence is closer to, say, 25%. We know the likelihood is very low, but we act as if it is meaningfully higher than a single-digit percentage. Meanwhile, the very certain, almost completely sure outcome of 95% or more is discounted beyond what it really should be. Very rare outcomes do sometimes happen, but in our minds we have trouble conceptualizing these incredibly rare outcomes, and rather than keeping a perspective based on the actual probabilities, by using rational decision weights, we overweight the improbable and underweight the certain.
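This pattern can be made concrete with the probability weighting function from Tversky and Kahneman’s cumulative prospect theory (a companion to the ideas in Thinking Fast and Slow, not a formula quoted in the book itself). The sketch below uses their commonly cited parameter γ ≈ 0.61 for gains; the exact numbers are illustrative, not measurements:

```python
# Probability weighting function from cumulative prospect theory
# (Tversky & Kahneman, 1992). With gamma < 1, small probabilities are
# overweighted and near-certain probabilities are underweighted.
def weight(p, gamma=0.61):
    """Decision weight assigned to an outcome with objective probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"objective p = {p:.2f}  ->  decision weight ~ {weight(p):.2f}")
```

With γ ≈ 0.61, a 5% chance gets a decision weight of roughly 0.13 and a 95% chance a weight of roughly 0.79, matching the overweighting of the improbable and underweighting of the near-certain that Kahneman describes.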

Our challenges with correctly weighting extremely certain or extremely unlikely events may have an evolutionary history. For our early ancestors, being completely sure of anything may occasionally have led to death from a very unlikely event. Those who were a tad more cautious may have been less likely to run across the log that actually gave way into the whitewater rapids below. And our ancestors who reacted to the improbable as though it were a little more certain may have been better at avoiding the lion the one time the twig snapping outside the campground really was a lion. The ancestor who sat by the fire and said, “twigs snap every night, the chances that it actually is a lion this time have gotta be under 5%,” may not have lived long enough to pass his genes on to future generations. The reality is that in most situations for our early ancestors, being a little more cautious was probably advantageous. Today, being overly cautious and struggling with improbable or nearly certain decision weights can be costly for us, in terms of over-purchasing insurance, spending huge amounts to avoid the rare chance of losing a huge amount, and over-trusting democratic institutions in the face of a coup attempt.
Availability Cascades

This morning, while reading Sapiens by Yuval Noah Harari, I came across an idea that was new to me. Harari writes, “Chaotic systems come in two shapes. Level one chaos is chaos that does not react to predictions about it. … Level two chaos is chaos that reacts to predictions about it.”  The idea is that chaotic systems, like societies and cultures, are distinct from chaotic systems like the weather. We can model the weather, and it won’t change based on what we forecast. When we model elections, on the other hand, there is a chance that people, and ultimately the outcome of the election, will be influenced by the predictions we make.  The chaos is responsive to the way we think about that chaos. A hurricane doesn’t care where we think it is going to make landfall, but voters in a state may care quite a bit and potentially change their behavior if they think their state could change the outcome of an election.

This ties in with the note from Daniel Kahneman’s book Thinking Fast and Slow which I had selected to write about today. Kahneman writes about availability cascades in his book, and they are a piece of the feedback mechanism described by Harari in level two chaos systems. Kahneman writes:

“An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action. On some occasions, a media story about a risk catches the attention of a segment of the public, which becomes aroused and worried.”

We can think about any action or event that people and governments might take as requiring a certain action potential in order to take place. A certain amount of energy, interest, and attention is required for social action to take place. The action potential can be small, such as a red light being enough of an impetus to cause multiple people to stop their cars at an intersection, or monumental, such as a major health crisis being necessary to spur emergency financial actions from the Federal Government. Availability cascades create a set of triggers which can enhance the energy, interest, and attention provided to certain events and bolster the likelihood of a public response.

2020 has been a series of extreme availability cascades. With a global pandemic, more people are watching the news more closely than before. This allows for the increased salience of incidents of police brutality, and increases the energy in the public response to such incidents. As a result, more attention has been paid to racial injustice, and large companies have begun to respond in new ways to issues of race and equality, again heightening the energy and interest of the public in demanding action regarding both racial justice and police policy. There are other ways that events could have played out, but availability cascades created feedback mechanisms within a level two chaotic system, opening certain avenues for public and societal action.

It is easy to look back and make assessments on what happened, but in the chaos of the moment it is hard to understand what is going on. Availability cascades help describe what we see, and help us think about what might be possible in the future.
Extreme Outcomes

Large sample sizes are important. At this moment, the world is racing as quickly as possible toward a vaccine to allow us to move forward from the COVID-19 Pandemic. People across the globe are anxious for a way to resume normal life and to reduce the risk of death from the new virus and disease. One thing standing in the way of the super quick solution that everyone wants is basic statistics. For any vaccine or treatment, we need a large sample size to be certain of the effects of anything we offer to people as a cure or for prevention of COVID-19. We want to make sure we don’t make decisions based on extreme outcomes, and that what we produce is safe and effective.

Statistics and probability are frequent parts of our lives, and many of us probably feel as though we have a basic and sufficient grasp of both. The reality, however, is that we are often terrible at thinking statistically. We are much better at thinking in narrative, and often we substitute a narrative interpretation for a statistical interpretation of the world without even recognizing it. It is easy to change our behavior based on anecdote and narrative, but not always so easy to change our behavior based on statistics. This is why we have the saying often attributed to Stalin: one death is a tragedy, a million deaths is a statistic.

The danger with anecdotal and narrative interpretations of the world is that they are drawn from small sample sizes. Daniel Kahneman explains the danger of small sample sizes in his book Thinking Fast and Slow, “extreme outcomes (both high and low) are more likely to be found in small than in large samples. This explanation is not causal.”

In his book, Kahneman explains that when you look at the counties in the United States with the highest rates of cancer, you find that some of the smallest counties in the nation have the highest rates. However, if you look at which counties have the lowest rates of cancer, you will also find that it is the smallest counties in the nation that have the lowest rates. While you could drive across the nation looking for explanations for the high and low cancer rates in rural and small counties, you likely wouldn’t find a compelling causal explanation. You might be able to string a narrative together, and if you try really hard you might start to see a causal chain, but your interpretation is likely to be biased and based on flimsy evidence. The fact that our small counties are the ones with the highest and lowest rates of cancer is an artifact of small sample sizes. When you have small sample sizes, as Kahneman explains, you are likely to see more extreme outcomes. A few chance events can dramatically change the rate of cancer per thousand residents when a county only has a few thousand residents. In larger, more populated counties, rates revert toward the mean, and a few extreme chance outcomes are unlikely to sway the overall statistics.
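Kahneman’s county example is easy to reproduce with a quick simulation. The numbers below are made up for illustration (a 1% underlying rate, counties of 500 and 20,000 residents); the point is only that an identical underlying risk produces far more extreme observed rates in the small counties:

```python
import random

random.seed(42)
TRUE_RATE = 0.01  # identical underlying cancer rate in every county

def observed_rate(population):
    # Count how many residents are affected purely by chance
    cases = sum(random.random() < TRUE_RATE for _ in range(population))
    return cases / population

small = [observed_rate(500) for _ in range(500)]     # 500 small counties
large = [observed_rate(20_000) for _ in range(500)]  # 500 large counties

print(f"small counties: lowest {min(small):.4f}, highest {max(small):.4f}")
print(f"large counties: lowest {min(large):.4f}, highest {max(large):.4f}")
```

Even though every simulated county shares the same 1% risk, the small counties produce both the lowest and the highest observed rates, exactly the small-sample artifact Kahneman describes.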

To prevent our decision-making from being overly influenced by extreme outcomes we have to move past our narrative and anecdotal thinking. To ensure that a vaccine for the coronavirus or a cure for COVID-19 is safe and effective, we must allow the statistics to play out. We have to have large sample sizes, so that we are not influenced by extreme outcomes, either positive or negative, that we see when a few patients are treated successfully. We need the data to ensure that the outcomes we see are statistically sound, and not an artifact of chance within a small sample.
Complex and Conflicting Thoughts on Personal Responsibility

I’m really hesitant to criticize others for not taking sufficient personal responsibility for the ways they live and the outcomes of their lives. A lot of factors influence whether you are economically successful or whether you are fit and healthy. Some things we seem to have a lot of control over, but many things are matters of chance and circumstance. Placing too much blame on the individual doesn’t seem fair, yet at the same time, there is clearly an element of personal responsibility involved. I’m not sure where I land on how we should think about this division.

What is clear, however, is that there can be negative consequences when we take away people’s agency in their decision-making and life outcomes, and that when we erode the authority of those who are reasonably critical of negative lifestyles and ways of thinking, we can put ourselves and our societies in vulnerable positions.

Sam Quinones writes about these tensions in his book Dreamland, highlighting how patient responsibility and physician authority devolved between the 1980s and the twenty-teens as a quick-fix, there’s-a-drug-for-that mindset took hold of the American healthcare system. He writes, “…patients were getting used to demanding drugs for treatment. They did not, however, have to accept the idea that they might, say, eat better and exercise more, and that this might help them lose weight and feel better. Doctors, of course, couldn’t insist. As the defenestration of the physician’s authority and clinical experience was under way, patients didn’t have to take accountability for their own behavior.”

I’m usually hesitant to say that the problem is people’s lack of accountability, because how often do we really control how much exercise we get, when many of us live in places where walking is difficult because our streets are not safe or not well designed for pedestrian use, or because half the year it is dark early and we get lots of snowfall? How often do we not know what kinds of exercises we should do, and how often are the people around us only critical of our current state rather than supportive and encouraging? How often have we had a bad break and poor advice on how to recover, leading only to a further defeat that deflates our sense of self-worth? In addition to all this, how often have we seen people use the personal responsibility argument in bad faith, to justify not helping others or to rationalize their greed or excessive self-aggrandizement?

But at the same time, as Quinones shows, responsibility is important. We need to think about what we can and should be doing to help improve our own lives, without hoping for an easy fix in the form of a miracle pill. We can’t just throw out the opinions of experts and devalue their authority because they are willing to say things that are discomforting for us, but are likely correct in terms of how we can make our lives better. Somehow we need to work together to build a society that recognizes the barriers and challenges that we face toward becoming the successful and healthy people that we want to be, but encourages us to still work hard and overcome obstacles by taking responsibility for our actions and (at least some percentage of) our outcomes. I don’t know what this looks like exactly, and I’m not sure where the line falls between personal responsibility and outside factors, but I am willing to have an honest discussion about it and about what it all means for how we relate to each other.

New Considerations for the Public vs Private Discussion

In the past I wrote about the importance of privacy in our politics from the point of view of Jonathan Rauch and his book Political Realism. We have almost no trust in government, and we frequently say things like, “sunshine is the best disinfectant,” but the reality is that politics is made much more complex when it is in the open. Difficult negotiations, compromises, and sacrifices are hard to do in open and public meetings, but can be a little easier when the cameras are turned off and political figures who disagree can have open and honest discussions without the fear of their own words and negotiations being used against them in the future.

In The New Localism, Bruce Katz and Jeremy Nowak acknowledge the difficulties faced by governments when open meeting laws force any discussion to be public. The laws come from a good place, but for local governments that need to move fast, make smart decisions, and negotiate with private and civic sectors to spur innovation and development, public meetings can lead to stagnation and gridlock. A solution proposed by Katz and Nowak is for local governments to authorize private corporations overseen by public bodies and boards to operate economic development areas and to take ownership of public asset management.

They describe how the city of Copenhagen has used this approach, “Copenhagen has found that by managing transactions through a publicly owned, privately driven corporation, operations run faster and more efficiently in comparison to how local government traditionally tackled public development projects.”

The private corporation running local development is publicly owned. It is still accountable to the local elected officials who are ultimately still accountable to the voters. But, the decisions are private, the finances are managed privately, and negotiations are not subject to public meeting laws. While the corporation has to demonstrate that it is acting in the public interest, free of corruption, it can engage with other public, private, and civic organizations in a more free and flexible manner to accomplish its goals. Leveraging the strengths of the private sector, publicly owned private corporations that put local assets to work can help drive change and innovation.

Calling directly back to Jonathan Rauch’s ideas, these corporations create space for negotiations that would be publicly damning for an elected official. They also prevent elected officials from having undue influence in development and public asset management, preventing them from stonewalling a project that might be overwhelmingly popular in general but unpopular with a narrow and vocal segment of their electorate. This prevents public officials from pursuing a good-sounding but ineffective use of public resources to signal loyalty or virtue to constituents. Removing transparency and making the system appear less democratic, as Rauch suggests, might just make the whole system operate more smoothly and work better in terms of the outcomes our cities actually need.

Sample Bias and Obliquity – Lessons from the Education Model

I studied political science for a master’s degree and focused generally on public health. A big challenge in both areas is that the people who end up participating in our studies or who are the targets of our interventions are often different in one way or another from the general population, which makes it hard to tell whether our study or intervention was meaningful. We might see a result and want to attribute it to a specific thing happening in society or something we introduced to a group, but it could just be that the people observed already had some particular quality that led to the outcome we saw. Our theory and our intervention may have just been a small thing on the side that didn’t really do what it looks like it did.

Another challenge in both areas is accomplishing our goals without being able to directly address our goals. We may want to do something like prevent drug overdose deaths, but public opinion won’t support safe injection sites, legal drug use, or free needles for drug addicts. We can work toward our goals, but we often have to do them in an oblique manner that purports to address one thing, while in the background really addressing another thing.

These experiences from my educational background come to mind when I think about the following quote from Kevin Simler and Robin Hanson in their book The Elephant in the Brain. Their example is about education, but it relates to what I discussed above because it shows how our current education system seems to be doing one thing, but really accomplishes another goal in an indirect way. It does so by taking qualities that people already have, and purporting to provide an intervention to enhance those qualities, but runs into the same selection bias I mentioned in my opening paragraph.

“Educated workers are generally better workers, but not necessarily because school made them better. Instead, a lot of the value of education lies in giving students a chance to advertise the attractive qualities they already have.”

Education can do a lot of things for us, but pinpointing exactly what it does is tricky because the people attracted to school are in some ways different from the population that is not attracted to higher education. It is hard to say whether the schooling is what made the big difference or whether the people who do well in school had other qualities that set the stage for the difference observed between those who do well at school and those who don’t. This doesn’t mean school is a waste or that we should invest in it less, but rather that we should consider a wider range of schooling options to allow people to demonstrate their unique qualities in different ways.

The other piece I like about the quote is the obliquity of schooling and education in our journey to tell others how amazing we are. It is hard to demonstrate one’s skills and qualities, but going through an obstacle course, such as college, is a good way to show our positive qualities and skills. Education is one obstacle we use to differentiate ourselves and advertise how capable we are in a socially acceptable manner. There is something to be learned when thinking through policy from the education example. Direct approaches to policy-making sometimes are impossible, but indirect routes can open doors if they make it seem as though another good is being pursued with the outcome we want to see occurring incidentally.