Markets & Political Bias

The United States loves free market capitalism. Almost any political action that would raise taxes, introduce tariffs on foreign goods, or regulate an industry is met with incredible pushback and extreme rhetoric. Most people don’t have a great sense of what communism or socialism really are, but those terms are used extensively whenever the government proposes a new regulation or program that might interfere with a market. Free market capitalism is the heart of the United States, at least in rhetoric.
 
 
However, as Yuval Noah Harari writes in his book Sapiens, “there is simply no such thing as a market free of all political bias.” Markets on their own are not perfect. Clickbait headlines and the designed obsolescence of smartphones are two frustrating examples of imperfect markets. In both instances, unequal information and misaligned financial incentives give producers a motivation to deliver sub-par products. These examples are relatively harmless, but they contribute to a larger problem within markets – a lack of trust between consumers and producers.
 
 
Harari continues, “the most important economic resource is trust in the future, and this resource is constantly threatened by thieves and charlatans. Markets by themselves offer no protection against fraud, theft, and violence.” Clickbait headlines have made me distrust internet links and headlines that sound juicy. I make it a point to never click on a Yahoo! article after being burned too many times by clickbait headlines when I was younger. I simply don’t trust what appears to be valuable information on the internet – a problem with larger spillover effects as our population comes to distrust any information. With smartphones, government regulation actually did play a role in changing the problem of designed obsolescence. Apple was deliberately slowing down older devices in what looked like an attempt to force users to buy newer ones – Apple claimed it had to slow phones down to prevent battery degradation and damage to the devices (eye roll). Once regulators got involved, the result was better performance from older smartphones over a longer lifetime.
 
 
As Harari puts it, “It is the job of political systems to ensure trust by legislating sanctions against cheats and to establish and support … the law.” We make investments and purchase goods when we can trust market actors and believe that our investments will pay off in the future. I continue to purchase Apple products because I have seen improvement in the problems of designed obsolescence and devices failing to function after just a year. Market intervention from the government helped stabilize the market and ensure that consumers had access to better products. However, I still don’t click on many articles, especially if the headline sounds like clickbait. I still don’t trust internet information, a space the government has done little to regulate. The market on its own hasn’t established that trust, and as a result I make deliberate attempts to avoid the market. In these two examples, free market capitalism actually seems to work a bit better with some regulation and intervention, something that seems to contradict the general idea surrounding markets in the United States.
Science, Money, & Human Activities

The world of science prides itself on objectivity. Our scientific measurements should be objective, free from bias, and repeatable by any person in any place. The conclusions of science should likewise be objective, clear, and understandable from the outside. We want science to be open and widely discussed, with the implications of results rigorously debated, so that we can make new discoveries and develop new knowledge to help propel humanity forward.
 
 
“But science is not an enterprise that takes place on some superior moral or spiritual plane above the rest of human activity,” writes Yuval Noah Harari in his book Sapiens. Science may strive for objectivity and independence, but it still takes place in the human world and is conducted by humans. Additionally, “science is a very expensive affair … most scientific studies are funded because somebody believes they can help attain some political, economic, or religious goal,” continues Harari.
 
 
No matter how much objectivity and independence we try to imbue into science, human activities influence what, how, and when science is done. The first obstacle, as Harari notes, is money. Deciding to fund something always involves some sort of political choice. Whether an individual or a collective is providing the money, there is always a choice about how the final dollars could be used. Funding could go to science that helps develop a vaccine for a disease that predominantly impacts poor people in a country far away. Funding could go to a scientific instrument that could help address climate change. Or funding could be used to build a really cool laser that doesn’t have any immediate and obvious uses. Depending on political goals, individual donor desires, and a host of other factors, different science could be funded and conducted. The cost of science means that it will always be tied in some way to human desires, which means biases will always creep into the equation.
 
 
It is important to note that science is built with certain elements to buffer the research, results, findings, and conclusions from bias. Peer review, for example, limits the publication of studies that are not done in good faith or that make invalid conclusions. But still, science takes place within society and culture and is conducted by humans. What those individual humans choose to study and how they understand the world will influence the ways in which they select and design studies. This means that bias will still creep into science, in terms of determining what to study and how it will be studied. Early materials scientists working with plastics were enthusiastic about studies that developed new plastics with new uses, whereas today materials scientists may be more likely to study the harms of plastics and plastic waste. Both fields of research can produce new knowledge, but with very different consequences for the world, stemming from different cultural biases among the human researchers.
 
 
This is not to say that science cannot be trusted and should not be supported by individuals and collectives. Science has improved living standards for humans across the globe and solved many human problems. We need to keep pushing forward with new science to continue improving living standards, and possibly just to maintain existing standards and expectations. Nevertheless, we have to be honest and acknowledge that science does not exist in a magical space free from bias and other human failings.
The Hindsight Fallacy & the How & Why of History

Looking forward and making predictions and judgments about what will happen in the future is incredibly difficult, and very few people can reliably make good predictions. But when we look to the past, almost all of us can describe what did happen. It is easy to look back and see how a series of events unfolded, to make connections between certain conditions and eventual outcomes, and to be confident that we understand why things unfolded as they did. But this confidence is misleading, even though it is something we can reliably expect from people.
 
 
The hindsight fallacy is the term that describes our overconfidence in explaining what happened in the past and determining which causal factors influenced the outcomes we observed. When the college football playoff is over this year, sports commentators will have a compelling narrative as to why the winning team was able to pull through. When the stock market makes a jump or dip in the next year, analysts will be able to look backward and connect the dots that caused the rise or fall of the market. Their explanations will be confident and narratively coherent, making the analysts and commentators sound like well-reasoned individuals.
 
 
However, “every point in history is a crossroads,” writes Yuval Noah Harari. Strange and unpredictable things could happen at any point in history, and the causal factors at work are hard to determine. It is worth remembering that the best social science studies return an R value of about 0.4 at most (the R value is a statistical measure of how well a model fits reality). This means that even the best social science studies we can conduct barely reflect the reality of the world. It is unlikely that any commentator, even a seasoned football announcer or stock market analyst, really understands causality well enough to be confident about what caused what, even in hindsight. Major shifts could happen because someone was in a bad mood. Unexpected windfalls could create new and somewhat random outcomes. Humans can think causally, and this helps us better understand the world, but we can also be overconfident in our causal reasoning.
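As a rough illustration of what an R value in that range implies, here is a minimal Python sketch with made-up data (the 0.4 effect size is the only number carried over from the paragraph above): a correlation of roughly 0.4 leaves the large majority of the variance in outcomes unexplained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictor and an outcome that it only weakly drives.
x = rng.normal(size=10_000)
outcome = 0.4 * x + rng.normal(scale=np.sqrt(1 - 0.4**2), size=10_000)

r = np.corrcoef(x, outcome)[0, 1]
print(f"R   ~ {r:.2f}")    # close to 0.4, like a strong social science finding
print(f"R^2 ~ {r**2:.2f}") # only about 16% of the variance is accounted for
```

Even a model at the top end of what social science typically achieves, in other words, leaves most of what actually happens to factors outside the model.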
 
 
Harari continues, “the better you know a particular historical period, the harder it becomes to explain why things happened one way and not another. Those who have only a superficial knowledge of a certain period tend to focus only on the possibility that was eventually realized.” What Harari is saying is that we can get very good at describing how things happened in the past, but not very good at explaining why. We can look at each step and each development that unfolded ahead of a terrorist attack, a win by an army, or, as Harari uses for demonstration in his book, the adoption of Christianity in the Roman Empire. But we can’t always explain the exact causal pathway of each step. If we could, then we could identify the specific historical crossroads where history took one path and not another, and make reasonable predictions about how the world would have looked had the alternative been the path history followed. But we really can’t do this. We can look back and identify factors that seemed important in a historical development, but we can’t always explain exactly why those factors were important in one situation rather than another. There is too much randomness, too much chance, and too much complexity for us to be confident in the causal pathways we see. We won’t stop thinking in a causal way, of course, but we should at least be more open to a wild range of possibilities and less confident in our assessments of history.
 
 
One of my favorite examples of the hindsight bias in action is in Good to Great by Jim Collins. In the book, published in 2001, Collins identifies 11 companies that had jumped from being good companies to great companies. One of the companies identified, Circuit City, was out of business before Collins published his subsequent book. Another, Wells Fargo, is now one of the most hated companies in the United States. A third, Fannie Mae, was at the center of the 2008 financial crisis, and a fourth, Gillette, was purchased by P&G and is no longer an independent entity. A quick search suggests that the companies in the Good to Great portfolio have underperformed the market since the book’s publication. It is likely that the success of the 11 companies included a substantial amount of randomness, which Collins and his team failed to incorporate in their analysis. Hindsight bias was at play both in the selection of the 11 companies and in the explanation for why they had such substantial growth in the period that Collins explored.
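A minimal simulation, using entirely invented numbers, of why a portfolio chosen by looking backward at winners tends to disappoint going forward: if measured performance is mostly luck, the firms that look “great” in one period regress toward the mean in the next.

```python
import numpy as np

rng = np.random.default_rng(42)
n_firms, luck_share = 1_000, 0.8  # assume 80% of measured performance is noise

skill = rng.normal(size=n_firms)
period1 = (1 - luck_share) * skill + luck_share * rng.normal(size=n_firms)
period2 = (1 - luck_share) * skill + luck_share * rng.normal(size=n_firms)

winners = np.argsort(period1)[-11:]  # the 11 best performers in hindsight
print(f"winners, period 1: {period1[winners].mean():.2f}")  # looks spectacular
print(f"winners, period 2: {period2[winners].mean():.2f}")  # far closer to average
```

Nothing in the sketch requires the winners to have been bad companies; selecting on a noisy outcome is enough to guarantee that the backward-looking story overstates what the same companies will do next.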
Pollution & Purity

“Throughout history, and in almost all societies, concepts of pollution and purity have played a leading role in enforcing social and political divisions and have been exploited by numerous ruling classes to maintain their privileges,” writes Yuval Noah Harari in his book Sapiens.
 
 
Humans are amazingly good at identifying the in-groups and out-groups to which they do or do not belong. We are also capable of incredible acts of in-group kindness and generosity, as well as out-group nastiness and unfairness. I think that JK Rowling’s Harry Potter books can be read as an example of our incredibly powerful in-group versus out-group nature, and within those stories we can also see how purity and pollution shape how we understand our in-groups relative to our out-groups.
 
 
In the books, an evil and powerful wizard, the son of a magical witch and a non-magical (muggle) man, attempts to rule the world on the premise that magical individuals are inherently superior to non-magical individuals. Purebloods, those whose family line is entirely magical, are seen as superior and more valuable. Half-bloods and mud-bloods, those whose family line includes muggles or is entirely muggle, are viewed by the evil wizard and his supporters (and tacitly accepted as lesser by many other characters) as somehow polluted, or at best diluted. That becomes the basis for out-group hostilities, biases, discrimination, and general nastiness.
 
 
Harry Potter may be fiction, but the books reflect real-world struggles that do take place between arbitrary groups and real-world discrimination that occurs based on appearance, social class, talent and skill, religion, and other factors. The United States was founded as a country that discriminated against black people because of the color of their skin and their seemingly savage or backward tribal lifestyles in Africa (and also because white plantation owners stood to benefit from the free labor and exploitation of captured Africans). People born to mixed-race parents were called mulattoes and were quite literally seen as less pure than people born to white parents. Ability, skill, and intelligence did not differ in any material way between black slaves and white slave owners, but in-group and out-group dynamics founded a country on an imagined hierarchy and real-world discrimination between white and black people – a hierarchy and system of discrimination that was legally upheld and perpetuated long after slavery ended.
 
 
Other countries have had similar challenges. In India, a caste system was built almost entirely on ideas of purity. A certain segment of the population was referred to (and still is, to some extent) as “untouchables” for fear of contamination. This group was (and still is) isolated and outcast within the larger society, and the results of that discrimination have later been used to justify the unequal treatment such people receive.
 
 
In some ways, fears of pollution and desires for purity are rooted in biology. Pigs can carry dangerous parasites that can infect humans. For early Jews, before sanitary cooking methods were developed, dietary restrictions possibly helped reduce parasite and disease transmission. Isolating sick individuals, people with sores, or people who were charged with handling dead bodies possibly helped reduce disease transmission among early humans. However, from these reasonable precautions came the biases, fears, and unjust discrimination that became part of our in-group and out-group dynamics and ultimately contributed to the ideas of pure ruling classes and polluted lower classes. Something that was biologically prudent took on a narrative that was exploited and abused over time for political ends.
 
 
When we sense ourselves being fearful of ideas of pollution, whether it is genetic, racial, sanitary, or other forms of pollution, we should try to be aware of our thoughts and feelings. We should try to recognize if we are simply acting out in-group versus out-group biases and prejudices, or if we do have real health and sanitary concerns. If the latter is the case, we should find ways to uphold health and safety while minimizing and reducing bias and discrimination as much as possible.
Survivorship Bias and Ancient Humans

Yuval Noah Harari writes almost romantically about ancient human foragers in his book Sapiens. Describing the difference in knowledge, skills, and abilities between modern humans and ancient hunter-gatherers, Harari is absolutely glowing in his descriptions of ancient humans. He praises them for their knowledge, self-awareness, and the connectedness between their bodies and the natural world, something he argues modern humans have lost.
He writes, “Foragers mastered not only the surrounding world of animals, plants and objects, but also the internal world of their own bodies and senses. They listened to the slightest movement in the grass to learn whether a snake might be lurking there. They carefully observed the foliage of trees in order to discover fruits, beehives, and bird nests. They moved with a minimum of effort and noise, and knew how to sit, walk, and run in the most agile and efficient manner. Varied and constant use of their bodies made them as fit as marathon runners.”
I think this paragraph is generally accurate, if a bit hyperbolic, but, troublingly, it is also subject to survivorship bias. The humans who lived and survived the longest in a dangerous wilderness environment were probably as fit as modern-day triathletes. They probably were more aware of seasonal changes and small details in nature that helped them find food and avoid predators. But I don’t see why we would extend those traits to all foragers. It is unlikely that every human was great at all of the skills Harari lays out, and it seems unlikely that all of them were agile, fit, super proto-Homo sapiens. Many probably fell short in a few areas, and if they fell too short in too many areas, they probably died, leaving us with the survivorship bias that Harari ends up with. Ultimately, this gives us an overly romanticized perspective of foraging humans.
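A minimal sketch of the statistical point, with all numbers invented: if fitness varies across a population and survival to old age depends on fitness, then the individuals we can still observe (or whose long lives leave the strongest traces) look far fitter, on average, than the full population ever was.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical "fitness" scores for a large population of foragers.
fitness = rng.normal(loc=50, scale=10, size=100_000)

# Assume the odds of surviving to old age rise with fitness.
survival_prob = 1 / (1 + np.exp(-(fitness - 55) / 5))
survived = rng.random(fitness.size) < survival_prob

print(f"average fitness, whole population: {fitness.mean():.1f}")
print(f"average fitness, survivors only:   {fitness[survived].mean():.1f}")
```

Judging the whole population by its survivors, as the glowing description above implicitly does, builds the selection effect directly into the conclusion.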
The Wood Age

An interesting bias in our understanding of the world and human history comes from materials science. When we look into the deep human past, we have very weak evidence on which to base our assumptions and theories. Before people wrote things down, nothing could preserve the history of a people or place. And even once people began to record things, it was still hard to save those recordings for the long term. We have some stone and clay tablets with inscriptions on them, some quipus (string counting devices), and some documents written on animal skin, but not a lot of well-preserved, documented writing from the earliest known humans to have settled into communities. Most of the artifacts we have from early humans are the stone tools they used, because stone preserves far better than other materials.
Yuval Noah Harari writes about the bias these stone tools create in his book Sapiens. “The common impression that pre-agricultural humans lived in an age of stone is a misconception based on this archaeological bias. The Stone Age should more accurately be called the Wood Age, because most of the tools used by ancient hunter-gatherers were made of wood.”
I think it is interesting to consider how our picture of ancient humans is biased by the materials of theirs that we can recover. The Stone Age is a common idea that appears in cartoons, advertisements, and stories. But it is understandable that early humans would have used wood more than stone for many tools. Wood is lighter, can be shaped more easily, and may be more available than stone in some parts of the world (though not here in Northern Nevada, where I live). The Stone Age bias is a simple bias that we never think about unless we are asked to stop and consider the way that materials and artifacts could shape our view of the past. It is an accepted story and idea that we share without even realizing that a bias is at work.
Causal Illusions

In The Book of Why, Judea Pearl writes, “our brains are not wired to do probability problems, but they are wired to do causal problems. And this causal wiring produces systematic probabilistic mistakes, like optical illusions.” This can create problems for us when no causal link exists and when data correlate without any causal connection between outcomes. According to Pearl, our causal thinking “neglects to account for the process by which observations are selected.” We don’t always realize that we are taking a sample, that our sample could be biased, and that structural factors independent of the phenomenon we are trying to observe could greatly impact the observations we actually make.
Pearl continues, “We live our lives as if the common cause principle were true. Whenever we see patterns, we look for a causal explanation. In fact, we hunger for an explanation, in terms of stable mechanisms that lie outside the data.” When we see a correlation, our brains instantly start looking for a causal mechanism that can explain the correlation and the data we see. We don’t often look at the data itself and ask whether some part of the data collection process led to the outcomes we observed. Instead, we assume the data is correct and that it reflects an outside, real-world phenomenon. This is the source of many of the causal illusions that Pearl describes in the book. Our minds are wired for causal thinking, and we will invent causality when we see patterns, even if there truly isn’t a causal structure linking the patterns we see.
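A minimal sketch of the selection point, using two made-up, deliberately unrelated traits: variables that are independent in the full population can appear strongly correlated (here, negatively) once we only look at cases that passed some selection filter, with no causal link anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two traits that are independent of one another by construction.
trait_a = rng.normal(size=100_000)
trait_b = rng.normal(size=100_000)

# Suppose we only ever "observe" cases where the combined score is high,
# e.g. items that were selected, published, or otherwise brought to our attention.
observed = (trait_a + trait_b) > 2

print(f"correlation, full population: {np.corrcoef(trait_a, trait_b)[0, 1]:+.2f}")
print(f"correlation, observed cases:  {np.corrcoef(trait_a[observed], trait_b[observed])[0, 1]:+.2f}")
```

The pattern in the observed cases invites a causal story, but it is produced entirely by how the observations were selected.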
It is in this spirit that we attribute negative personality traits to people who cut us off on the freeway. We assume they don’t like us, that they are terrible people, or that they are rushing to the hospital with a sick child, so that our being cut off has a satisfying causal explanation. When a particular type of car stands out and we start seeing that car everywhere, we misattribute our increased attention to the car and assume that there really are more of those cars on the road now. We assume that people find them more reliable or more appealing and purposely bought them, a causal mechanism to explain why we now see them everywhere. In both of these cases we are creating causal pathways in our minds that in reality are little more than causal illusions, but we want to find a cause for everything, and we don’t always realize that we are doing so. It is important to be aware of these illusions when making important decisions, to think about how the data came to mind, and to ask whether a causal illusion or cognitive error might be at play.
Bias Versus Discrimination

In The Book of Why, Judea Pearl writes about a distinction between bias and discrimination drawn from Peter Bickel, a statistician from UC Berkeley. Regarding sex bias and discrimination in graduate admissions, Bickel carefully distinguished between bias and discrimination in a way that I find interesting. Describing this distinction, Pearl writes the following:
“He [Bickel] carefully distinguishes between two terms, that in common English, are often taken as synonyms: bias and discrimination. He defines bias as a pattern of association between a particular decision and a particular sex of applicant. Note the words pattern and association. They tell us that bias is a phenomenon on rung one of the Ladder of Causation.”
Bias, Pearl explains using Bickel’s definition, is simply an observation. There is no causal mechanism at play when we are dealing with bias, which is why it sits on rung one of the Ladder of Causation. It is simply the recognition that there is a disparity, a trend, or some sort of pattern or association between two things.
Pearl continues, “on the other hand, he defines discrimination as the exercise of decision influenced by the sex of the applicant when that is immaterial to the qualification for entry. Words like exercise of decision, or influence and immaterial are redolent of causation, even if Bickel could not bring himself to utter that word in 1975. Discrimination, unlike bias, belongs on rung two or three of the Ladder of Causation.”
Discrimination is an intentional act. There is a clear causal pathway that we can posit between the outcome we observe and the actions or behaviors of individuals. In the case Bickel studied, sex disparities could be directly attributed to discrimination only if it could be proven that immaterial considerations, such as the sex of the applicant, were the basis for rejecting women (or maybe men). Discrimination does not happen all on its own; it happens because of something else. Bias can exist on its own. It can be caused by discrimination, but it could also be caused by larger structural factors that are not themselves actively making decisions to create the situation. Biases are results, patterns, and associations we can observe. Discrimination is deliberate behavior that generates, sustains, and reinforces biases.
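A minimal numeric sketch of how a rung-one bias can appear with no rung-two discrimination (the department names and numbers below are hypothetical, not Bickel’s actual data): each department admits men and women at identical rates, yet the aggregate admission rates differ because the sexes apply to different departments in different proportions.

```python
# Hypothetical data: (number of applicants, admission rate) per department and sex.
applications = {
    "dept_A": {"men": (800, 0.6), "women": (200, 0.6)},  # less selective department
    "dept_B": {"men": (200, 0.2), "women": (800, 0.2)},  # more selective department
}

for sex in ("men", "women"):
    applied = sum(applications[dept][sex][0] for dept in applications)
    admitted = sum(n * rate for n, rate in (applications[dept][sex] for dept in applications))
    print(f"{sex}: overall admission rate = {admitted / applied:.0%}")

# men: 52%, women: 28% -- an association (bias) exists in the aggregate
# even though no department's decisions were influenced by the applicant's sex.
```

The aggregate pattern is real and worth investigating, but on its own it sits on rung one; establishing discrimination requires showing that the applicant’s sex actually influenced the decisions.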
On Prejudice

In Vices of the Mind Quassim Cassam writes, “A prejudice isn’t just an attitude towards something, someone, or some group, but an attitude formed and sustained without any proper inquiry into the merits or demerits of the object of prejudice.”
Prejudices are pernicious, and in his book Cassam describes them as epistemic vices. They color our perceptions and opinions about people, places, and things before we have any good reason to hold such beliefs. They persist when we make no effort to investigate them, and they actively deter our discovery of new knowledge that would dismantle a prejudice. They are, in a sense, self-sustaining.
Prejudices obstruct knowledge by creating fear and negative associations with the people, places, and things we are prejudiced against. When we are in such a state, we feel no need, desire, or obligation to improve our point of view and possibly obtain knowledge that would change our mind. We actively avoid such information and discourage others from adopting points of view that would run against our existing prejudices.
I think that Cassam’s way of explaining prejudices is extremely valuable. When there is something we dislike, distrust, and are biased against, we should ask ourselves if our opinions are based on any reality or simply on unmerited existing feelings. Have we formed our opinions without any real inquiry into the merits or demerits of the person, place, or thing that we scorn?
It is important that we ask these questions honestly and with a real willingness to explore topics openly. It would be very easy for us to set out to confirm our existing biases, to seek out only examples that support our prejudice. But doing so would only further entrench our unfair priors and give us excuses for being so prejudiced. It would not count as proper inquiry into the merits or demerits of the objects of our prejudice.
We must recognize when we hold such negative opinions without cause. Anecdotal thinking, closed-mindedness, and biases can drive us to prejudice. These epistemic vices obstruct our knowledge, may lead us to share and spread misinformation, and can have harmful impacts on our lives and the lives of others. There is no true basis for such beliefs beyond our lack of reasonable information and, potentially, our intentional choices to avoid conflicting information, choices that only further entrench our prejudices.