Planning and Homelessness

Planning

Planning requires two things: agency, the belief that one can act on and influence the future world one inhabits, and an ability to look forward and make predictions about future outcomes. Making predictions about the future has its own requirements – stability and causal reasoning. Luckily, most of us have relatively stable lives, impressive causal reasoning abilities, and enough agency to influence future outcomes. But that doesn’t mean that planning is easy or that it is something we always do.
We may fail to plan for a number of reasons. Some of those reasons may come from a lack of agency, some may come from uncertainty about the future, and some reasons may be a simple failure to think ahead. When we don’t plan, we don’t think about what our lives may be like in the future, what we would like our lives to be like, and what causal structures exist to help us reach that desired future or avoid an undesirable future. However, sometimes a failure to plan can also be a defense mechanism.
“At the very heart of planning,” writes Elliot Liebow in Tell Them Who I Am, “is the assumption that one has the power to control or influence the future. If one is truly powerless to influence events, planning makes little sense.” Without agency, planning leads to disappointment. If you make plans, even simple plans, but you cannot possibly take the actions necessary to execute those plans, then you will necessarily be let down. The imagined future you tried to plan for will not occur. Your desired states will not materialize. Liebow continues, “in the extreme case planning [is] to be actively avoided, for down that road lay failure and disappointment and still further confirmation of one’s own impotence.”
When plans fail, it reflects either a lack of agency or an inability to predict the future. The failure of our plans means that we don’t control our surroundings, that we lack good causal reasoning skills, or that we do not have stable lives. None of these realities is comforting. The first reflects a lack of personal ability, the second a lack of mental capacity, and the third a dangerous and tumultuous life. Improving our lives requires an ability to plan and execute, and failing to do so reflects inward failures or inadequacies. Rather than risk failure, the defense mechanism is to not plan at all. Not planning means we can deny that we lack agency, that we lack causal reasoning skills, or that we have ended up in a place where our lives are unpredictable and beyond our control. If people want to be able to plan their lives, they need control, the ability to look ahead and predict desired outcomes, and some level of stability in their lives.
Pessimist, Optimist, or Just Mist? - George Herriman, Michael Tisserand, Joe Abittan


In his biography of George Herriman, author Michael Tisserand included numerous comics from Herriman to demonstrate his artistic skill, wit, and general approach to comics. One of the book’s chapters starts with the dialogue from one of Herriman’s Krazy Kat comics of April 23, 1921, and it stood out to me:
 
 
Ignatz: Now, “Krazy,” do you look upon the future as a pessimist, or an optimist?
Krazy: I look upon it just as mist–
 
 
I really enjoyed this line of dialogue when I first read it in Tisserand’s book, and still get a chuckle as I read it now. It is a witty pun, an accurate reflection of our predicament with looking toward the future, and feels entirely fresh 100 years after it was written.
 
 
I feel like I notice false dichotomies everywhere. It is easy to see the world in black or white and tempting to live in a world defined with dichotomies. They make our lives easier by slotting things into neat categories and helping us reduce the amount of thinking we have to do. Unfortunately, living a life that accepts false dichotomies is dangerous and deluded.
 
 
The false dichotomy the comic pulls apart is pessimism versus optimism. If pressed, most of us could say we lean more toward one or the other, but it is not very accurate to define ourselves as either. At any given time we may be more or less optimistic or pessimistic about any number of things, and our views on any of them could change at any moment. We may also be deeply pessimistic about one important area but very optimistic in another, with no clear reconciliation between the two feelings. For example, you could be very optimistic about the direction of the economy, but pessimistic about the long-term sustainability of current economic practices given climate change. In that situation it is hard to pin yourself down as either optimistic or pessimistic about the economy overall.
 
 
Some of us may try to avoid this false dichotomy with a trite response that we are neither an optimist nor a pessimist, but a realist (or nihilist, or some other -ist). This dodge acknowledges that the distinction between optimist and pessimist isn’t necessarily real, but fails to provide a legitimate alternative. Is there any exclusionary factor between being a realist and being an optimist or pessimist? A nihilist might be optimistic that society is going to collapse, even while feeling pessimistic about what will happen to them. My suspicion is that people who call themselves realists simply want to avoid looking like their optimism or pessimism is without merit, and to appear as if they base their outlook on data rather than vague feelings.
 
 
I think that Krazy in the comic is addressing the dichotomy in the most reasonable way possible, by acknowledging the difficulties of predicting the future and accepting that he is overwhelmed with the mist. His answer rejects the false dichotomy of optimism and pessimism and embraces the conflicting factors that might make us happy or sad, financially well off or ruined, or lead to any number of potential outcomes. Rather than trying to hold positive or negative views regarding our futures, the best thing to do is admit that we don’t really know what will happen, but to try to place ourselves in a position where we can have the best outcomes no matter what takes place, even if all we see is mist.
Dose-Response Curves


One limitation of linear regression models, explains Judea Pearl in his book The Book of Why, is that they are unable to accurately model interactions or relationships that are not linear. This lesson was hammered into my head by a statistics professor at the University of Nevada, Reno when discussing binomial variables. For variables with only two possible outcomes, such as yes or no, a linear regression model doesn’t work. When the Space Shuttle Challenger’s O-ring failed, it was because the team had run a linear regression model to determine a binomial variable: the O-ring fails or its integrity holds. However, there are other situations where a linear regression becomes problematic.
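A minimal sketch of the problem, with made-up numbers (not the real Challenger data): fit an ordinary least-squares line to a yes/no outcome and it will happily predict “probabilities” greater than one or less than zero.

```python
# Hypothetical data: did a seal fail (1) or hold (0) at a given temperature?
# These numbers are invented for illustration, not the actual Challenger data.
temps = [30, 40, 50, 55, 60, 65, 70, 75, 80]
fails = [1, 1, 1, 1, 0, 0, 0, 0, 0]

# Ordinary least-squares slope and intercept.
n = len(temps)
mean_t = sum(temps) / n
mean_f = sum(fails) / n
slope = sum((t - mean_t) * (f - mean_f) for t, f in zip(temps, fails)) \
        / sum((t - mean_t) ** 2 for t in temps)
intercept = mean_f - slope * mean_t

# The straight line predicts impossible "probabilities" at the extremes:
print(intercept + slope * 20)  # greater than 1
print(intercept + slope * 90)  # less than 0
```

A model designed for binary outcomes, such as logistic regression, avoids this by passing the line through a sigmoid so that every prediction stays between zero and one.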
 
 
In the book, Pearl writes, “linear models cannot represent dose-response curves that are not straight lines. They cannot represent threshold effects, such as a drug that has increasing effects up to a certain dosage and then no further effect.”
 
 
Linear models become problematic when the effect of a variable is not constant across doses. In the field I was trained in, political science, this isn’t a big deal. In my field, simply demonstrating a mostly consistent connection between, for example, ratings of trust in public institutions and receipt of GI benefits is usually sufficient. However, in fields like medicine or nuclear physics, it is important to recognize that a linear regression model might be ill-suited to the actual behavior of the variable.
 
 
A drug that is ineffective at small doses, becomes effective at moderate doses, but quickly becomes deadly at high doses shouldn’t be modeled with linear regression. The general public needs to be especially careful with this type of drug, since so many individuals approach medicine with an “if some is good then more is better” mindset. Within physics, as the Challenger example shows, the outcomes can be a matter of life and death. If a tire rubber holds its strength but fails beyond a given threshold, if a rubber seal fails at a low temperature, or if a nuclear cooling pool will flash boil at a certain temperature, then linear regression models will be inadequate for making predictions about the true nature of those variables.
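A sketch of the threshold effect Pearl describes, again with invented numbers: the true response rises with dose up to 5 units and then plateaus, but a straight-line fit keeps climbing past the threshold and underestimates the response near it.

```python
# Invented threshold dose-response: effect rises up to a dose of 5, then flattens.
doses = list(range(1, 11))             # doses 1 through 10
response = [min(d, 5) for d in doses]  # plateaus at 5

# Ordinary least-squares fit over the whole dose range.
n = len(doses)
mean_d = sum(doses) / n
mean_r = sum(response) / n
slope = sum((d - mean_d) * (r - mean_r) for d, r in zip(doses, response)) \
        / sum((d - mean_d) ** 2 for d in doses)
intercept = mean_r - slope * mean_d

# Past the threshold the line keeps rising while the true effect is flat:
print(intercept + slope * 10)  # above the true plateau value of 5
print(intercept + slope * 5)   # below the true value of 5 at the threshold
```

The line is wrong in both directions: it promises extra effect at high doses that never materializes, and understates the effect right where the drug works best.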
 
 
This is important to think about when we consider the way science is used in general discussion. We should recognize that people often assume a linear relationship based on an experimental study, and we should look for binomial variables or potential non-linear relationships when thinking about a study and its conclusions. Improving our thinking about linear regression and dose-response curves can help us be smarter about things that matter, like global pandemics, and even about more general discussions of what we think the government should or should not do.

Alternative, Nonexistent Worlds - Judea Pearl - The Book of Why - Joe Abittan


Judea Pearl’s The Book of Why hinges on a unique ability that human animals have. Our ability to imagine alternative, nonexistent worlds is what has set us on new pathways and allowed us to dominate the planet. We can think of what would happen if we acted in a certain manner, used a tool in a new way, or if two objects collided. We can visualize future outcomes of our actions and of the actions of other bodies, and predict what can be done to create desired future outcomes.
In the book he writes, “our ability to conceive of alternative, nonexistent worlds separated us from our protohuman ancestors and indeed from any other creature on the planet. Every other creature can see what is. Our gift, which may sometimes be a curse, is that we can see what might have been.”
Pearl argues that our ability to see different possibilities, to imagine new worlds, and to be able to predict actions and behaviors that would realize that imagined world is not something we should ignore. He argues that this ability allows us to move beyond correlations, beyond statistical regressions, and into a world where our causal thinking helps drive our advancement toward the worlds we want.
It is important to note that he is not advocating for holding a belief and setting out to prove it with data and science, but rather that we use data and science, combined with our ability to think causally, to better understand the world. We do not have to be stuck in a state where we understand statistical techniques but deny plausible causal pathways. We can identify and define causal pathways, even if we cannot fully define causal mechanisms. Our ability to reason through alternative, nonexistent worlds is what allows us to think causally and apply this causal reasoning to statistical relationships. Doing so, Pearl argues, will save lives, help propel technological innovation, and push science to new frontiers that improve life on our planet.
Slope is Agnostic to Cause and Effect


I like statistics. I like to think statistically, to recognize that there is a percent chance of one outcome that can be influenced by other factors. I enjoy looking at best fit lines, seeing that there are correlations between different variables, and seeing how trend-lines change if you control for different variables. However, statistics and trend lines don’t actually tell us anything about causality.
In The Book of Why Judea Pearl writes, “the slope (after scaling) is the same no matter whether you plot X against Y or Y against X. In other words, the slope is completely agnostic as to cause and effect. One variable could cause the other, or they could both be effects of a third cause; for the purpose of prediction, it does not matter.”
In statistics we all know that correlation is not causation, but this quote helps us remember important information when we see a statistical analysis and a plot with a linear regression line running through it. The regression line is like the owl that Pearl describes earlier in the book. The owl can predict where a mouse is likely to be and which direction it will run, but the owl does not seem to know why a mouse is likely to be in a given location or why it is likely to run in one direction over another. It simply knows from experience and observation what a mouse is likely to do.
The regression line is a best fit for numerous observations, but it doesn’t tell us whether one variable causes another or whether both are influenced in a similar manner by another variable. The regression line knows where the mouse might be and where it might run, but it doesn’t know why.
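Pearl’s point about the scaled slope is easy to verify: standardize both variables and the regression slope of Y on X equals the slope of X on Y (both equal the correlation coefficient). A quick check with arbitrary example numbers:

```python
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # arbitrary example values

def standardize(values):
    # Rescale to mean 0 and standard deviation 1.
    m = sum(values) / len(values)
    s = math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    return [(v - m) / s for v in values]

def ols_slope(u, v):
    # Least-squares slope of v regressed on u; means are zero after standardizing.
    return sum(a * b for a, b in zip(u, v)) / sum(a * a for a in u)

zx, zy = standardize(xs), standardize(ys)
print(round(ols_slope(zx, zy), 6) == round(ols_slope(zy, zx), 6))  # True
```

The direction of the regression, Y on X or X on Y, makes no difference to the slope, which is why the slope alone can never tell us which variable causes which.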
In statistics courses we end at this point of correlation. We might look for other variables that are correlated or try to control for third variables to see if the relationship remains, but we never answer the question of causality; we never get to the why. Pearl thinks this is a limitation we do not need to put on ourselves. Humans, unlike owls, can understand causality. We can recognize the various reasons why a mouse might be hiding under a bush, and why it may choose to run in one direction rather than another. Correlations can help us start to see where relationships exist, but it is the ability of our minds to understand causal pathways that helps us determine causation.
Pearl argues that statisticians avoid these causal arguments out of caution, but that this only creates more problems down the line. Important statistical research in areas of high interest to law-makers, business people, or the general public is carried beyond the cautious bounds that causality-averse statisticians place on their work. Showing correlations without making an effort to understand the causality behind them leaves scientific work vulnerable to the epistemically malevolent, who would like to use correlations to their own ends. While statisticians rigorously train themselves to understand that correlation is not causation, the general public and those struck with motivated reasoning don’t hold themselves to the same standard. Leaving statistical analysis at the level of correlation means that others can attribute the cause and effect of their choice to the data, and the proposed causal pathways can be wildly inaccurate and even dangerous. Pearl suggests that statisticians and researchers are thus obligated to do more with causal structures, to round off their work and better develop ideas of causation that can be defended once their work moves beyond the world of academic journals.
Predictions & Explanations


The human mind has incredible predictive abilities, but our explanatory abilities are not always equally incredible. Prediction is relatively easy compared to explanation. Animals can predict where a food source will be without being able to explain how it got there. For most of human history our ancestors could predict that the sun would rise the next day without having any way of explaining why. Computer programs today can predict our next move in chess, but few can explain their prediction or why we would make the predicted choice.
As Judea Pearl writes in The Book of Why, “Good predictions need not have good explanations. The owl can be a good hunter without understanding why the rat always goes from point A to point B.” Prediction is possible with statistics and good observations. With a large enough database, we can make a prediction about what percentage of individuals will have negative reactions to medications, we can predict when a traffic jam will occur, and we can predict how an animal will behave. What is harder, according to Pearl, is moving to the stage where we describe why we observe the relationships that statistics reveal.
Statistics alone cannot tell us why particular patterns emerge. Statistics cannot identify causal structures. As a result, we continually tell ourselves that correlation is not causation and that we can only determine which relationships are truly causal through randomized controlled trials. Pearl would argue that this is incorrect, and that this idea results from the fact that statistics is trying to answer a completely different question than causation. Approaching statistical questions from a causal lens may lead to inaccurate interpretations of data or to “p-hacking,” an academic term for efforts to get the statistical results you wanted to see. The key is not hunting for causation within statistics, but understanding causation and supporting it through evidence uncovered via statistics.
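The temptation can be simulated. In this sketch, hundreds of pairs of pure-noise variables are searched for a “relationship”; with enough comparisons, some pairs clear the usual significance cutoff by chance alone (0.444 is roughly the critical correlation for p < 0.05 with 20 observations).

```python
import math
import random

random.seed(0)  # fixed seed for reproducibility

def pearson_r(xs, ys):
    # Pearson correlation coefficient of two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 500 pairs of completely unrelated random series, 20 observations each.
hits = 0
for _ in range(500):
    xs = [random.gauss(0, 1) for _ in range(20)]
    ys = [random.gauss(0, 1) for _ in range(20)]
    if abs(pearson_r(xs, ys)) > 0.444:  # ~p < 0.05 cutoff at n = 20
        hits += 1

print(hits)  # spurious "significant" correlations found in pure noise
```

None of these variables has anything to do with any other, yet a researcher hunting for a publishable correlation would find some, which is exactly why a correlation needs a plausible causal story behind it.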
Seeing the difference between causation and statistics is helpful when thinking about the world. Being stuck without a way to see and determine causation leads to situations like tobacco companies claiming that cigarettes don’t cause cancer or oil and gas companies claiming that humans don’t contribute to global warming. Causal thinking, however, utilizes our ability to develop explanations and applies those explanations to the world. Our ability to predict different outcomes based on different interventions helps us interpret and understand the data that the world produces. We may not see the exact picture in the data, but we can understand it and use it to help us make better decisions that will lead to more accurate causal understandings over time.
Tool Use and Causation - Judea Pearl - The Book of Why - Joe Abittan


Judea Pearl’s book The Book of Why is all about causation. The reason human beings are able to produce vaccines, send rockets into space, and maintain green gardens is that we understand causation. We have an ability to observe events in the world, to intervene, and to predict how our interventions produce specific outcomes. This allows us to develop tools to achieve specific desired ends, and that is no small feat.
In the book Pearl describes three levels of causation based on Alan Turing’s proposed system to classify cognitive systems in terms of the queries systems can answer. The three levels of causation are association, intervention, and counterfactuals. Pearl explains that many animals observe the world and detect patterns, but that fewer animals use tools to intervene in the world. Fewer still, Pearl explains, possess the ability to actually develop and improve new tools. As he writes, “tool users do not necessarily possess a theory of their tool that tells them why it works and what to do when it doesn’t. For that, you need to have achieved a level of understanding that permits imagining. It was primarily this third level that prepared us for further revolutions in agriculture and science and led to a sudden and drastic change in our species’ impact on the planet.”
The theory of tool use that Pearl mentions in the quote is our ability to see and understand causation. We can observe that rocks can be used to cut plant fibers, and then we can identify the qualities in some rocks that make them better at cutting fibers than others. But to get to the point where we are sharpening an edge of a rock to make it even better at cutting fibers, we have to have a causal understanding of what allows the rock to cut and we need sufficient imagination to predict what would happen if the rock had a sharper edge. We have to imagine an outcome in a future world where something was different, and that something different caused a new outcome.
This point is small but actually quite profound. Our minds are able to conceptualize causality and build hypotheses about the world that we can test. This can improve our tool usage, improve the ways we act and behave, and allow us to achieve desired ends through study, prediction, imagination, and experimentation. The key, however, is that we have a theory of our tools and how they work, an ability to intuit causation.
We hear all the time that correlation is not causation, and in our modern technological age we look to statistics to help us solve massive problems. However, as Pearl’s quote shows, data, statistics, and information are useless unless we have a theory of the tools we can use based on the knowledge we gain from them. We have to embrace causation and our ability to imagine and predict causal structures if we want to do anything with the data.
This all reminds me of the saying, when the only tool you have is a hammer, everything begins to look like a nail. This represents an inability to understand causality, a lack of imagination and predictive prowess. Statistics without a theory of causality, without an ability to use our power to identify and predict causation, is like the hammer and nail saying. It is useless and throws the same toolkit and approach at every problem. Statistics alone doesn’t build knowledge – you also need a theory of causation.
Pearl’s message throughout the book is that statistics (tool use) and causation are linked: we need a theory and understanding of causation if we are going to do anything with data, statistics, and information. For years we have relied on statistical relationships to help us understand the world, but we have failed to apply the same rigorous study to causation, and that will make it difficult for us to use our new statistical power to achieve the ends that big data and statistical processing promise.
Inventing Excuses - Joe Abittan


With the start of the new year and the inauguration of a new president of the United States, many individuals and organizations are turning their eyes toward the future. Individuals are working on resolutions to make positive changes in their lives. Companies are making plans and strategy adjustments to fit economic and regulatory predictions. Political entities are charting a new course in anticipation of the goals, agendas, and actions of the new administration and the new distribution of political power in the country. However, almost all of the predictions and forecasts of individuals, companies, and political parties will end up being wrong, or at least not completely correct.

 

Humans are not great forecasters. We rarely do better than just assuming that what happened today will continue to happen tomorrow. We might be able to predict a regression to the mean, but usually we are not great at predicting when a new trend will come along, when a current trend will end, or when some new event will shake everything up. But this doesn’t mean that we don’t try, and it doesn’t mean that we throw in the towel or shrug our shoulders when we get things wrong.

 

In Risk Savvy Gerd Gigerenzer writes, “an analysis of thousands of forecasts by political and economic experts revealed that they rarely did better than dilettantes or dart-throwing chimps. But what the experts were extremely talented at was inventing excuses for their errors.” It is remarkable how poor our forecasting can be, and even more remarkable how much attention we still pay to forecasts. At the start of the year we all want to know whether the economy will improve, what a political organization is going to focus on, and whether a company will finally produce a great new product. We tune in as experts give us their predictions, running through all the forces and pressures that will shape the economy, political future, and performance of companies. And even when the experts are wrong, we listen to them as they explain why their initial forecast made sense, and why they should still be listened to in the future.

 

A human who throws darts, flips a coin, or picks options out of a hat before making a big decision is likely to be just as right, or just as wrong, as the experts who recommend one decision over another. However, the coin flipper will have no excuse when they make a poor decision. The expert, on the other hand, will have no problem inventing excuses to explain away their culpability in poor decision-making. The smarter we are, the better we are at rationalizing our choices and inventing excuses, even for the choices that don’t turn out well.
Predictable Outcomes


“In many domains people are tempted to think, after the fact, that an outcome was entirely predictable, and that the success of a musician, an actor, an author, or a politician was inevitable in light of his or her skills and characteristics. Beware of that temptation. Small interventions and even coincidences, at a key stage, can produce large variations in the outcome,” write Richard Thaler and Cass Sunstein in their book Nudge.

 

People are poor judges of the past. We lament the fact that the future is always unclear and unpredictable. We look back at the path we took to get to where we are today and are frustrated by how clear everything should have seemed. When we look back, each step seems obvious and unavoidable. What we forget, in our own lives and when we look at others, is how much luck, coincidence, and random chance played a role in the way things developed. Whether you are an athlete at Zion Williamson’s level, an author as skilled as J.K. Rowling, or just someone who is happy with the job you have, there were plenty of chances for things to go wrong, derailing what seems like an obvious and secure path. An injury, a death in the immediate family, or even just a disparaging comment from the wrong person could have turned Zion away from basketball, shaken Rowling’s confidence in her writing, or denied you the job you enjoy.

 

What we should recognize is that there is a lot of randomness along the path to success; it is not entirely about hard work and motivation. This should humble us when we are successful and comfort us when we get a bad break. We certainly need to focus, work hard, develop good habits, and try to make the choices that will lead us to success, but when things don’t work out as well as we hoped, it is not necessarily because we lack something or are incompetent. At the same time, reaching fantastic heights is no reason to proclaim ourselves better than anyone else. We may have had the right mentor see us at the right time, happened to get a good review at the right moment, or simply been lucky where another person was unlucky. We can still be proud of where we are, but we shouldn’t use that pride to deny other people opportunity. We should use it to help other people have lucky breaks of their own. Nothing is certain, even if it looks like it always was when you look in the rear-view mirror.
Take the Outside View


Taking the outside view is a shorthand, colloquial way to say: think of the base rate of the reference class to which something belongs, and make judgments and predictions from that starting point. “Take the outside view” is advice from Daniel Kahneman in his book Thinking, Fast and Slow for anyone working on a group project, launching a start-up, or considering an investment in a particular company. It is easy to take the inside view, where everything seems predictable and success feels certain. However, it is often better for long-term success to take the outside view.

 

In his book, Kahneman writes, “people who have information about an individual case rarely feel the need to know the statistics of the class to which the case belongs.” He writes this after discussing a group project he worked on where he and others made an attempt to estimate the time necessary to complete the project and the obstacles and hurdles they should expect along the way. For everyone involved, the barriers and likelihood of being derailed and slowed down seemed minimal, but Kahneman asked the group what to expect based on the typical experience of similar projects. The outlook was much more grim when viewed from the outside perspective, and helped the group better anticipate challenges they could face and set more reasonable timelines and work processes.
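The mechanics of the outside view can be sketched in a few lines. The reference-class times below are hypothetical, as is the 50/50 weighting between the base rate and the team’s own estimate; Kahneman’s advice is the anchoring on the base rate, not any particular formula.

```python
# Hypothetical reference class: completion times (in months) of similar projects.
reference_class = sorted([18, 24, 30, 36, 42, 48, 60])
inside_view_estimate = 12  # the team's own optimistic guess

# Outside view: start from the base rate (here, the median of the class).
base_rate = reference_class[len(reference_class) // 2]

# Then adjust toward case-specific information (weights are illustrative).
adjusted_estimate = 0.5 * base_rate + 0.5 * inside_view_estimate

print(base_rate)          # 36
print(adjusted_estimate)  # 24.0
```

Even this crude blend moves the estimate from a wishful 12 months to a sobering 24, which is the whole point: the reference class pulls the forecast back toward what usually happens.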

 

Kahneman continues, “when forecasting the outcomes of risky projects, executives too easily fall victim to the planning fallacy. In its grip, they make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities. They overestimate benefits and underestimate costs.”

 

Taking the outside view helps us get beyond delusional optimism. It helps us make better expectations about how long a project will take, what rate of return we should expect, and what the risks really look like. It is like getting a medical second opinion, to ensure that your doctor isn’t missing anything and to ensure they are following the most up-to-date practices. Taking the outside view shifts our base rate, anchors us to a reality that is more reflective of the world we live in, and helps us prepare for challenges that we would otherwise overlook.