Undeserving Poor

Our nation encourages us to see the outcomes of our lives as the product of our own doing. How hard we work, how much effort we make to learn and get ahead, and how well we make decisions determine whether we end up successful, poor, addicted to drugs, healthy, or happy. This is the narrative that drives our lives, and any failure within any area of life ultimately represents some type of personal or moral failure by us as individuals. But is this really an accurate way of looking at humans living within complex societies? Should everything be tied to this heightened sense of personal responsibility?
Matthew Desmond questions this idea throughout his book Evicted, but he also shows how dominant and entrenched it is. Even among our nation’s poorest, who have faced extreme difficulties and poverty, personal responsibility is still the driving narrative. Writing about individuals living in poverty in a trailer park, Desmond writes, “Evictions were deserved, understood to be the outcome of individual failure. They helped get rid of the riffraff, some said. No one thought the poor more undeserving than the poor themselves.” Even those living in the deepest poverty, those who have ostensibly failed the most within our capitalistic society, see each other as personal failures, not as victims of a system that was stacked against them. They don’t see themselves as swept up in a society that failed to provide enough support, guidance, and opportunity. They see only the bad choices that landed people in the trailer park and subsequently drove them out through eviction.
The reality is that as individuals we still exist within a society. We are still dependent on numerous social systems and institutions that shape the worlds we inhabit and the opportunities and possibilities available to us. Drug use, for example, seems like an individual decision; however, research on adverse childhood experiences and on the loss of meaning, social connection, and opportunity shows that there are social determinants driving drug use across communities. What seems like a decision based entirely on personal morality has numerous dimensions that cannot be explained by individual-level choices alone.
Desmond argues that evictions, too, are not something we should see as simply personal failures. There are numerous factors that can push an individual into a downward spiral ending in eviction, and numerous points where social systems and institutions seem designed to drive poor people toward failure. Blaming individuals for their own failure and subsequent eviction hides the ways in which we all share responsibility for a system that either lifts us all up or allows some of us to fail spectacularly. Focusing only on an individual’s poor decisions, and not seeing those decisions as symptoms of larger structural failures, means we can never address the root causes that push people toward failure, poverty, drug use, and eviction. It is easy to blame the individual, but it is inadequate.
On “The Media”

“The media” is a term frequently used to categorize journalists, newspapers, and broadcast news shows. We often use “the media” in a negative way, complaining that it covers events unfairly and in oversimplified ways. “The media” always seems to have an agenda, a narrative, and a specific concern plucked from the zeitgeist that will fade away without real resolution. But this idea is a bit misleading. Categorizing only news sources as “the media” misses a lot of the media consumption we engage in every day. It also lumps together news organizations and sources that have vastly different ways of operating, different profit motives, and different general beliefs. Even within a single news or media source there can be things that are terrible, things that are marvelous, and things that we barely notice.
Challenges with “the media” have existed as long as news and media themselves. Books, even works of fiction, have been burned and banned almost as long as books have existed, and people expressing views deemed heretical by churches or governments have met similar fates throughout human history. But “the media” has always been a lens through which we understand the world, past and present. Expanding our view of media to include books, movies, podcasts, and even TikTok videos shows how media can be both a cultural cornerstone of our highest values and a cesspool of rot.
In his George Herriman biography Krazy, author Michael Tisserand includes a quote from a critique written by Gilbert Seldes in the Pittsburgh Sun in the 1920s. Tisserand’s passage reads:
“In his initial appraisal of Krazy Kat [George Herriman’s celebrated comic strip], he wrote that the cult of the genius of the comic strip who has created the fantastic little monster is a growing one. He added if we have to condemn utterly the press which demoralizes all thought and makes ugly all things capable of beauty, we must still be gentle with it, because Krazy Kat, the invincible and joyous, is a creature of the press, inconceivable without its foundation of cheapness and stupidity. He is there to enliven and encourage and to give much delight.”
I really like this quote when viewed through the lens of “the media” I have been trying to lay out in this post, even though Seldes says “the press.” Categorizing “the media” as entirely worthless and negative, or alternatively as a cornerstone of democracy, paints news and information ecosystems with an overly broad brush. There are things we may hate about “the media,” but there are also things we may find invaluable and necessary. Thinking clearly about the media requires that we delve into the particulars: understand the profit motives, understand the competition, and understand the forces that drive the things we like and dislike.
Individually, we are probably powerless to change the course of “the media” or how we talk about it. However, we can think about the choices we make in relation to “the media” and to our friends, family, and colleagues. We can engage with meaningful and deep topics, or become enraged over shallow and meaningless ones. We can enjoy the cultural reflections of the shallow or we can criticize them. Ultimately, “the media” is a product of our humanity, and we can project onto it what we want, but we shouldn’t categorize the entire institution as wholly rotten or wholly democracy-saving. “The media” is complex, with multiple layers running through each interconnected element.
The Illusion of Free Will & Computer Software

In The Book of Why, Judea Pearl uses soccer as an analogy to demonstrate the usefulness of free will, even if it is only an illusion. Pearl argues that believing we have free will, even if it doesn’t exist as we believe it does, has been helpful for humans throughout our evolutionary history. Being able to communicate about our intentions, desires, and actions through a lens of free will, he argues, has helped our species develop agency, improve our existence, and survive.
Pearl also views the illusion of free will as a two-tiered system that helps our species survive by attributing responsibility to individuals. He communicates this idea through the language of computers: “when we start to adjust our own software, that is when we begin to take moral responsibility for our actions. This responsibility may be an illusion at the level of neural activation but not at the level of the self-awareness software.”
Pearl is arguing that our consciousness (software) is different from our neural activity (the brain’s equivalent of computer hardware). In this sense, Pearl views consciousness and free will as a dualist. There is the electrical activity of the brain, and there is the software (our thinking and self-awareness) running on top of it. While we might not be able to directly change our neural activity, which may be automatic and deterministic, the software packages it runs are not; they are, in a way, revisable, and we are responsible for those revisions. That is the view Pearl is advancing in this argument.
I think this idea is wrong. I understand the dualist view of consciousness and use that model most of the time when thinking about my thinking, but I don’t think it reflects reality. Throughout human history we have used technological analogies to explain the brain, always equating the brain and thinking to the best technologies of the day, and always finding some sort of duality in it. The brain was once viewed as hydraulic pumps and levers; today it is compared to computer hardware and software.
I don’t have a full rebuttal for Pearl. I recognize that our experience feels as though it is not deterministic, that there seems to be some role for free will and individual agency, but I can’t go as far as Pearl and actually assign responsibility for revisions to our consciousness. I agree with him that the illusion can be, and has been, useful, but I can’t help feeling that it is a mistake to equate the brain to a computer. I don’t truly believe that, even within the illusion of free will, we are entirely responsible for revising our consciousness (the software or operating system). Comparing us to a computer is misleading and gives people the wrong impression about the mind, and I’m sure that in the future our analogies will replace the hardware/software distinction with different and more complex technologies.
Complex Causation Continued

Our brains are good at interpreting and detecting causal structures, but often, the real causal structures at play are more complicated than what we can easily see. A causal chain may include a mediator, such as citrus fruit providing vitamin C to prevent scurvy. A causal chain may have a complex mediator interaction, as in the example of my last post where a drug leads to the body creating an enzyme that then works with the drug to be effective. Additionally, causal chains can be long-term affairs.
In The Book of Why, Judea Pearl discusses long-term causal chains, writing, “how can you sort out the causal effect of treatment when it may occur in many stages and the intermediate variables (which you might want to use as controls) depend on earlier stages of treatment?”
This is an important question within medicine and occupational safety. Pearl notes that factory workers are often exposed to chemicals over long periods, not just in a single instance. If it was repeated exposure that caused cancer or another disease, how do you pin that on the individual exposures themselves? Was the individual safe through 50 exposures, only to develop cancer upon the 51st? Long-term chemical exposure raising cancer risk seems obvious to us, but the actual causal mechanism in this situation is hazy.
The same applies in the other direction within medicine. Some cancer drugs or immune therapies work for a long time, stop working, or require changes in combination based on how the disease has progressed or what side effects have manifested. Additionally, as we have all learned over the past year with vaccines, some treatments work better with boosters or time-delayed components. Thinking about causality in these situations is difficult because the differing time scales and combinations make it hard to understand exactly what is affecting what, and when. I don’t have any deep answers to these questions; I simply highlight them to again demonstrate complex causation and how much work our minds must do to fully understand a causal chain.
Rules of Thumb: Helpful, but Systematically Error Producing

The world throws a lot of complex problems at us. Even simple and mundane tasks and decisions hold a lot of complexity behind them. Deciding what time to wake up, the best way to combine a grocery store and post office run into a single trip, and how much is appropriate to pay for a loaf of bread all have incredibly complex mechanisms behind them. In figuring out when to wake up we have to consider how many hours of sleep we need, what activities we need to do in the morning, and how much time each will take while still leaving a cushion in case something runs long. In planning a shopping trip we are confronted with a version of the traveling salesman problem, a notoriously hard routing problem tied to the famous P vs. NP question. And the price of bread was once the object of focus for teams of Soviet economists who could not pinpoint the right price to make supply match the population’s demand.
The brain handles all of these problems with relatively simple heuristics and rules of thumb, simplifying decisions so that we don’t waste the whole night doing math to find the perfect alarm time, don’t spend the entire day calculating the best route for our errands, and don’t burn brain power trying to set bread prices. We set a standard alarm time and make small adjustments, knowing we ought to leave the house ready for work by a certain time to reduce the risk of being late. We stick to main roads and travel familiar routes, eliminating the thousands of turn-by-turn alternatives we could choose from. We rely on open markets to determine the price of bread without setting a universal standard.
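The errand-routing point can be made concrete. A brute-force planner has to compare every ordering of stops, and with n stops there are n! orderings, which is what makes the problem explode and why "stick to main roads" is such a useful shortcut. A minimal sketch with made-up coordinates:

```python
from itertools import permutations

# Made-up straight-line coordinates (in miles); purely illustrative.
points = {"home": (0, 0), "grocery": (3, 4), "post_office": (6, 0)}

def dist(a, b):
    (x1, y1), (x2, y2) = points[a], points[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def route_length(order):
    # Round trip: leave home, visit each stop in the given order, return home.
    stops = ["home", *order, "home"]
    return sum(dist(stops[i], stops[i + 1]) for i in range(len(stops) - 1))

errands = ["grocery", "post_office"]
# Brute force: n errands means n! orderings to compare.
best = min(permutations(errands), key=route_length)
print(best, round(route_length(best), 2))  # 16.0 miles either way here
```

Two errands means only two orderings, but ten errands would mean 3,628,800, which is exactly the kind of computation our heuristics let us skip.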
Rules of thumb are necessary in a complex world, but that doesn’t mean they are without their own downfalls. As Quassim Cassam writes in Vices of the Mind, echoing Daniel Kahneman’s Thinking, Fast and Slow, “We are hard-wired to use simple rules of thumb (‘heuristics’) to make judgements based on incomplete or ambiguous information, and while these rules of thumb are generally quite useful, they sometimes lead to systematic errors.” Useful but inadequate rules of thumb can create predictable, reliable errors. Our thinking can be distracted by meaningless information, we can miss important factors, and we can fail to be open to improvements or alternatives that would make our decision-making better.
What is important to recognize is that systematic and predictable errors from rules of thumb can be corrected. If we know where errors are systematically likely to arise, we can take steps to mitigate and reduce them. We can be confident in rules of thumb that simplify decisions in positive ways while remaining skeptical of those we know are likely to produce errors, biases, and inaccurate judgements and assumptions. Companies, governments, and markets do this all the time, though not always in a neat step-by-step process (sometimes it is one step forward and two steps back), leading to progress over time. Embracing the usefulness of rules of thumb while acknowledging their shortcomings is a powerful way to improve decision-making while avoiding the cognitive downfalls of heuristics.
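A quick sketch of what correcting a systematic error looks like (the walking-time heuristic and the 20 percent figure are both invented for illustration): if a rule of thumb is known to run predictably low, a fixed adjustment recovers most of the accuracy without giving up the rule's simplicity.

```python
def heuristic_minutes(blocks):
    # Fast, simple rule of thumb: about a minute per block.
    # Suppose we have noticed it runs predictably low because it
    # ignores waiting at traffic lights.
    return blocks * 1.0

def corrected_minutes(blocks):
    # Because the error is systematic (always low, by roughly the same
    # proportion), a fixed adjustment removes most of it.
    return heuristic_minutes(blocks) * 1.2

print(round(heuristic_minutes(10), 1))   # 10.0
print(round(corrected_minutes(10), 1))   # 12.0
```

The point is not the numbers but the structure: a random error cannot be patched this way, a systematic one can.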
A Bias Toward Complexity

When making predictions or decisions in the real world, where there are many variables, high levels of uncertainty, and numerous alternatives to choose from, using a simple rule of thumb can be better than developing a complex predictive model. The intuitive sense is that the more complex our model, the more accurately it will reflect the real complexity of the world, and the better job it will do with making a prediction. If we can see that there are multiple variables, shouldn’t our model capture the different alternatives for each of them? Wouldn’t a simple rule of thumb necessarily flatten many of those alternatives, failing to take the different possibilities into consideration? Shouldn’t a more complex model beat a simple heuristic?

The answer to these questions is no. We are biased toward complexity for numerous reasons. It feels important to build a model that tries to account for every possible alternative for each variable, we believe that more information is always good, and we want to impress people by showing how thoughtful and considerate we are. Creating a model that accounts for all the different possibilities out there fits those preexisting biases. The problem, however, is that as we make our model more complex it becomes more unstable.

In Risk Savvy, Gerd Gigerenzer explains what happens with variance in our models: “Unlike 1/N, complex methods use past observations to predict the future. These predictions will depend on the specific sample of observations it uses and may therefore be unstable. This instability (the variability of these predictions around their mean) is called variance. Thus, the more complex the method, the more factors need to be estimated, and the higher the amount of error due to variance.” (Emphasis added; 1/N is an example of a simple heuristic that Gigerenzer explains in the book.)

Our bias toward complexity can make our models and predictions worse when there is high uncertainty, many alternatives, and relatively limited data. In the opposite situation, with low uncertainty, few alternatives, and a plethora of data, we can use very complex models to make accurate predictions. But when we are in the real world, making stock market or March Madness predictions, we should rely on a simple rule of thumb. The more complex our model, the more opportunities to misestimate a given variable, and each estimated factor adds variance. Rather than one error being offset by the rest of the model, the accumulated variance makes it more likely that our prediction lands further from reality than if we had flattened the variables with a simple heuristic.
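Gigerenzer's variance point can be made concrete with a toy simulation (my own construction, not an example from the book). Every asset below has the same true expected return, so any differences a fitting procedure finds in a small sample are pure noise. The 1/N rule ignores the sample and therefore has zero variance; the fitted weights swing from sample to sample.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

ASSETS = 3  # N assets; the 1/N rule just splits money equally among them

def one_over_n():
    # The simple heuristic: ignore the data entirely.
    return [1 / ASSETS] * ASSETS

def fitted_weights(history):
    # A "complex" method: weight each asset by its average past return.
    # (A deliberately crude stand-in for mean-variance optimization.)
    means = [sum(col) / len(col) for col in zip(*history)]
    shifted = [m - min(means) + 0.01 for m in means]  # keep weights positive
    total = sum(shifted)
    return [s / total for s in shifted]

def simulate_history(periods=12):
    # Every asset has the SAME true expected return; differences are noise.
    return [[random.gauss(0.05, 0.1) for _ in range(ASSETS)] for _ in range(periods)]

# Refit on 200 small samples and watch the first asset's fitted weight wander.
fitted_first = [fitted_weights(simulate_history())[0] for _ in range(200)]
spread = max(fitted_first) - min(fitted_first)

print("1/N weight for asset 1:", round(one_over_n()[0], 3))  # always 0.333
print("fitted weight for asset 1 ranges over:", round(spread, 2))
```

With far more periods per sample the fitted weights would settle down, which matches the claim above: complex methods win only when data are plentiful and uncertainty is low.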
A Mixture of Risks

In the book Risk Savvy, Gerd Gigerenzer explains the challenges we have with thinking statistically and how these difficulties can lead to poor decision-making. Humans have trouble holding lots of complex and conflicting information. We don’t do well with decisions involving risk, or decisions where we cannot possibly know all the information necessary for the best choice. We prefer decisions involving fewer variables, where we can have more certainty about the risks and potential outcomes. This leads to the substitution effect that Daniel Kahneman describes in his book Thinking, Fast and Slow, where our minds substitute an easier question for the difficult one without us noticing.

Unfortunately, this can have bad outcomes for our decision-making. Gigerenzer writes, “few situations in life allow us to calculate risk precisely. In most cases, the risks involved are a mixture of more or less well known ones.” Most of our decisions that involve risk have a mixture of different risks. They are complex decisions with tiers and potential cascades of risk based on the decisions we make along the way. Few of our decisions involve just one risk independent of others that we can know with certainty.

Considering investing for retirement shows how complex decisions involving risk can be, and how a mixture of risks runs through every decision we make. We can hoard money in a safe at home, where we reduce the risk of losing any of it, but we risk not having enough saved by the time we are ready to retire. We can invest our money, but then have to decide whether to keep it in a bank account, put it in the stock market, or look to other investment vehicles. Our bank is unlikely to lose much money and is low risk, but is also unlikely to grow our savings enough for retirement. Investing with a financial advisor takes on more risk: the risk that we are being scammed, the risk that the market tanks and our advisor made bad investments on our behalf, and the risk that we won’t have quick access to our money in an emergency. Even the most certain option, protecting our money in a secret safe at home, still carries risks for the future, and the option likely to provide the greatest return, investing in the stock market, has a mixture of risks attached to each decision we make after the initial decision to invest. There is no way to calculate and fully comprehend every risk involved in such a decision.
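The retirement example can be put in toy numeric form (every rate and dollar amount below is invented for illustration, not financial guidance). The safe has one certain outcome, the bank a nearly certain one, and the market a whole distribution of outcomes, which is the mixture of risks expressed as numbers:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

YEARS = 30
ANNUAL_SAVING = 10_000  # hypothetical dollars saved per year

def cash_in_safe():
    # No market risk, but no growth either (and inflation risk is ignored).
    return ANNUAL_SAVING * YEARS

def bank_account(rate=0.01):
    # Low risk, low return: deposits compound at a small fixed rate.
    balance = 0.0
    for _ in range(YEARS):
        balance = (balance + ANNUAL_SAVING) * (1 + rate)
    return balance

def stock_market(mean=0.07, stdev=0.15):
    # Market returns are a mixture: some years up, some sharply down.
    balance = 0.0
    for _ in range(YEARS):
        balance = (balance + ANNUAL_SAVING) * (1 + random.gauss(mean, stdev))
    return balance

runs = [stock_market() for _ in range(1000)]
print("safe:", cash_in_safe())
print("bank:", round(bank_account()))
print("stocks, best/worst of 1000 runs:", round(max(runs)), round(min(runs)))
```

The gap between the best and worst market runs is the point: choosing the higher expected return means accepting a spread of outcomes rather than a single number.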

Risk is complex, and we rarely deal with a single decision involving a single calculable risk at one time. Our brains are likely to flatten the decision by substituting simpler ones, eliminating some of the risks from consideration and focusing on fewer variables at a time. Nevertheless, the complex mixture of risks doesn’t go away just because our brains pretend it isn’t there.
Intelligence

“Intelligence is not an abstract number such as an IQ, but similar to a carpenter’s tacit knowledge about using appropriate tools,” writes Gerd Gigerenzer in his book Risk Savvy. “This is why the modern science of intelligence studies the adaptive toolbox that individuals, organizations, and cultures have at their disposal; that is, the evolved and learned rules that guide our deliberate and intuitive decisions.”

I like Gigerenzer’s way of explaining intelligence. It is not simply a number or a ratio; it is our knowledge of and ability to navigate the world. There are complex relationships between living creatures, physical matter, and information, and intelligence is an understanding of those relationships and an ability to navigate the complexity, uncertainty, and connections between everything in the world. Explicit rules, like mathematical formulas, help us understand some relationships, while statistical percentages help us understand others. Recognizing commonalities between different categories of things and identifying patterns helps us understand these relationships and serves as the basis for our intelligence.

What is important to note is that our intelligence is built with concrete tools for some situations, like 2+2=4, and less concrete rules of thumb for others, like the golden rule: do to others what you would like others to do to you. Gigerenzer shows that intelligence requires more than one mathematical formula and more than one rule of thumb for approaching complex relationships in the world. “Granted, one rule of thumb cannot possibly solve all problems; for that reason, our minds have learned a toolbox of rules. … these rules of thumb need to be used in an adaptive way.”

Whether it is interpreting statistical chance, judging the emotions of others, or making plans now that delay gratification until later, our rules of thumb don’t have to be precise, but they do need to be flexible and adaptive given our circumstances. 2+2 will always equal 4, but a smile from a family member might be a display of happiness or a nervous impulse and a silent plea for help in an awkward situation. It is our adaptive toolbox, our intelligence, that allows us to figure out what a smile means. Similarly, adaptive rules of thumb help us reduce complex questions to more manageable choices, shrinking the uncertainty of how much we need to save for retirement down to a rule of thumb that tells us to set aside a small but significant portion of each paycheck. Intelligence is not just about facts and complex math. It is about adaptable rules of thumb that help us make sense of complexity and uncertainty, and the more adaptive these rules of thumb are, the more our intelligence can help us in the complex world of today and into the uncertain future.
Probability is Multifaceted

For five years my wife and I lived in a house at the base of the lee side of a small mountain range in Northern Nevada. A storm coming through the area had to make it over a couple of small mountain ranges and valleys before reaching our house, and as a result we experienced less precipitation than most people in the Reno/Sparks area. Now we live in a house higher up on a different mountain, more directly in the path of storms coming from the west. We receive snow at our house while my parents and family lower in the valley barely get any wind. At both houses we have learned to adjust our expectations for precipitation relative to the probabilities reported by weather stations, which reference the airport on the valley floor. Our experience with rain and snow at the two houses is a useful demonstration that probability (in this case, the probability of precipitation) is multifaceted: multiple factors play a role in the probability of a given event at a given place and time.

In his book Risk Savvy, Gerd Gigerenzer writes, “Probability is not one of a kind; it was born with three faces: frequency, physical design, and degrees of belief.” Gigerenzer explains that frequency is about counting. To me, this is the most clearly understandable aspect of probability, and what we usually refer to when we discuss probability. On how many days does it usually rain in Reno each year? How frequently does a high school team from Northern Nevada win a state championship and how frequently does a team from Southern Nevada win a state championship? These types of questions simply require counting to give us a general probability of an event happening.
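The frequency face of probability really is just counting. A minimal sketch with an invented weather log (not real Reno data):

```python
# One entry per day of a hypothetical year; the counts are invented.
weather_log = ["dry"] * 315 + ["rain"] * 50

# Frequency-based probability: rainy days divided by total days.
p_rain = weather_log.count("rain") / len(weather_log)
print(f"P(rain on a given day) = {p_rain:.3f}")  # 50 / 365 ≈ 0.137
```

The same counting works for championships: tally the winners from each region over the years and divide.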

But probability is not just about counting and tallying events; physical design plays a role as well. Our house on the lee side of a small mountain range was shielded from precipitation, so while it may have rained in the valley half a mile away, we didn’t get any. Conversely, our current home is positioned to get more precipitation than the rest of the region. In high school sports, fewer kids live in Reno/Sparks than in the Las Vegas region, so in terms of physical design, state championships are likely to be more common for high schools in Southern Nevada. Additionally, there may be differences in the number of students per school, meaning the North could have more schools per student than the South, also influencing the probability of a northern or southern school winning. Probability, Gigerenzer explains, can be shaped by the physical design of systems, making the statistics and chances more complicated to understand.

Finally, degrees of belief play a role in how we comprehend probability. Gigerenzer states that degrees of belief include experience and personal impression, which are very subjective. Trusting two eyewitnesses rather than two people who heard about an event secondhand, Gigerenzer explains, can increase our sense that an unlikely story is accurate. Degrees of belief also show up in my experiences with rain at our two houses. I learned to discount the probability of rain at our first house and to increase my expectation of rain at our new one. If the meteorologist said there was a low chance of rain when we lived on the sheltered side of a hill, I didn’t worry much about storm forecasts. At our new house, however, if there is a chance of a storm coming from the west, I will certainly remove anything from the yard that I don’t want to get wet, because I believe the chance that our specific neighborhood sees rain is higher than what the meteorologist predicted.

Probability, and how we understand it and consequently make decisions, is complex, and Gigerenzer’s explanation of its multiple facets helps us better understand that complexity. Simply tallying outcomes and projecting them into the future often isn’t enough to have a good sense of the probability of a given outcome. We have to think about physical design, and we have to think about the personal experiences and subjective opinions that shape the probabilities people develop and express. Understanding probability requires holding a lot of information in our heads at one time, something humans are not great at, but something we can do better with good strategies for understanding complexity.

Avoiding Complex Decisions & Maintaining Agency

Two central ideas in the book Nudge by Cass Sunstein and Richard Thaler are that people don’t like to make complex decisions and that people like to have agency. Unfortunately, these two ideas conflict. If people don’t like making complex decisions, we should assume they would like experts and better decision-makers to make complex decisions on their behalf. But if people want agency in their lives, we should assume they don’t want anyone making decisions for them. The solution, according to Sunstein and Thaler, is libertarian paternalism: establishing systems and structures that support complex decision-making and designing choices to be clearer for individuals, with gentle nudges toward the decisions that lead to the outcomes individuals actually desire.

For Sunstein and Thaler, the important point is that libertarian paternalism, and nudges in general, maintain liberty. They write, “liberty is much greater when people are told, you can continue your behavior, so long as you pay for the social harm that it does, than when they are told, you must act exactly as the government says.” People resent being told what to do and losing agency. When people resist direct orders, the objective of the orders may fail completely, or violence could erupt. Neither outcome is what the government wanted with its direct order.

The solution is part reframing and part redirecting personal responsibility for negative externalities. The approach favored by Sunstein and Thaler allows individuals to continue making bad or harmful choices as long as they recognize and accept the costs of those choices. This isn’t appropriate in all situations (like drinking and driving), but it might be appropriate for issues like carbon taxes on corporations, cigarette taxes, or national park entrance fees. If we can pin the cost of externalities to specific individuals and behaviors, we can change the incentives behind harmful or over-consumptive behavior. To reach the change we want, we will have to get people to change their behavior, make complex decisions, and maintain a sense of agency as they act in ways that help us collectively reach the goals we set.
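The "pay for the harm you cause" logic reduces to a small calculation (all numbers below are invented for illustration). An individual weighs only private benefits and costs, so the harmful choice wins until the social harm is added to its price:

```python
# A toy externality-pricing sketch: people may keep choosing the harmful
# option, but only if they pay for the harm. All numbers are invented.

PRIVATE_BENEFIT = 120   # what the harmful choice is worth to the chooser
PRIVATE_COST = 100      # what it costs the chooser directly
SOCIAL_HARM = 40        # cost imposed on everyone else (the externality)

def chooses_harmful_option(tax):
    # The individual compares only their own benefit and costs;
    # the tax is the only way the social harm enters their decision.
    return PRIVATE_BENEFIT - PRIVATE_COST - tax > 0

print("no tax:", chooses_harmful_option(0))                    # True: harm ignored
print("harm priced in:", chooses_harmful_option(SOCIAL_HARM))  # False: behavior changes
```

The individual retains the choice throughout; the nudge works by moving the externality's cost into the private calculation rather than by issuing an order.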