Why Giant Brains Are So Rare

In Sapiens, Yuval Noah Harari explains that Homo sapiens means wise man. It is a name we have given ourselves as a species because we have large brains and use them to set ourselves apart from the rest of the animals on the planet. A few other species have big brains, but in general large brains are rare, and no other species has been shown to use its brain to the same competitive advantage as humans.
But if large brains have made us so competitive across the globe, why are they so rare? Harari writes, “The fact is that a jumbo brain is a jumbo drain on the body. … in Homo sapiens, the brain accounts for about 2-3 per cent of total body weight, but it consumes 25 per cent of the body’s energy when the body is at rest. By comparison, the brains of other apes require only 8 per cent of rest-time energy.”
Our brains are incredibly active and use a lot of sugar for fuel, even when we are not doing anything. This is great news for those of us trying to lose some weight today, but it was not great news for our hunter-gatherer ancestors and the proto-Homo sapiens species of the past. According to Harari, large brains essentially have a high up-front cost. A large amount of energy goes into maintaining the brain before a species can really use it to a competitive advantage, and that has been a barrier to other species developing large brains and using them in ways that could give them an edge.
Harari continues, “Archaic humans paid for their large brains in two ways. Firstly, they spent more time in search of food. Secondly, their muscles atrophied. … A chimpanzee can’t win an argument with a Homo sapiens, but the ape can rip the man apart.” Strong thinking and reasoning skills are helpful today and are the reasons we live in houses, build rocket ships, and are able to develop vaccines to end global pandemics. However, our big brains are not always the best tool to bring to a fist fight. It is not obvious that better reasoning skills will help a species survive better than sharp claws and teeth, thick hides, or spiky spines. Evolution doesn’t have an end goal in mind, and for every lineage besides the one that evolved into Homo sapiens, the big-brain payoff simply wasn’t the evolutionary route that provided the best chance of survival and spread. It wasn’t until big-brained humans began to live and interact in clusters and tribes, communicating and working together, that big brains and reasoning skills could begin to pay off and become competitive against larger animals with bigger muscles and more ferocious claws, teeth, and tusks.
Post-Action Rationalization

I have read about a split-brain experiment in which a participant whose corpus callosum had been severed was instructed in one ear, through a pair of headphones, to leave the room because the experiment was over. As the participant stood to leave, a researcher asked why they had gotten up. The participant said they wanted to get something to drink.
This experiment is pretty famous and demonstrates the human ability to rationalize our behaviors even when we really don’t know what prompted us to behave one way or another. If you have ever been surprised by your own angry outburst at another person, ever acted on a gut feeling in an athletic competition, or ever left something important out of a report and been bewildered by your omission, then you have probably engaged in post-action rationalization. You have probably thought back over the event and the mental state you were in, trying to figure out exactly why you did what you did and not something else.
However, Judea Pearl in The Book of Why would argue that your answer is nothing more than an illusion. Writing about this phenomenon he says:
“Rationalization of actions may be a reconstructive, post-action process. For example, a soccer player may explain why he decided to pass the ball to Joe instead of Charlie, but it is rarely the case that those reasons consciously triggered the action. In the heat of the game, thousands of input signals compete for the player’s attention. The crucial decision is which signals to prioritize, and the reasons can hardly be recalled and articulated.”
Your angry traffic outburst was brought on by a huge number of factors. Your in-game decision was not something you paused, thought about, and worked out the physics for beforehand. Similarly, your omission on a report was a barely conscious lapse. We can rationalize and explain each of these situations based on a few salient factors that come to mind post-action, but that hardly describes how our brains were actually working in the moment.
The brain has to figure out which signals to prioritize and which signals to consciously respond to in order for each of the examples I mentioned to come about. These notions should challenge our ideas of free will, our belief that we can ever truly know ourselves, and our confidence in learning from experience. Pearl explains that he is a determinist who compromises by accepting an illusion of free will. He argues that the illusion I have described with my examples and his quote helps us to experience and navigate the world. We feel that there is something it is like to be us, that we make our decisions, and that we can justify our behaviors, but this is all merely an illusion.
If Pearl is right, then it is a helpful illusion. We can still work to understand how this illusion is created and sustained, and how it can be put to its best uses. We might not have a true and authentic self beneath the illusion. We might not be in control of what the illusion is. But we can nevertheless shape and mold it, and we have a responsibility to do our best with our illusion, even if much of it is post-action rationalization.
The Representation Problem

In The Book of Why Judea Pearl lays out what computer scientists call the representation problem by writing, “How do humans represent possible worlds in their minds and compute the closest one, when the number of possibilities is far beyond the capacity of the human brain?”
In the Marvel movie Infinity War, Dr. Strange looks forward in time to see all the possible outcomes of a coming conflict. He examines 14,000,605 possible futures. But did Dr. Strange really look at all the possible futures out there? Fourteen million is a convenient big number to include in a movie, but how many possible outcomes are there for your commute home? How many people could change your commute in just the tiniest way? Is it really a different outcome if you hit a bug while driving, if you were stopped at 3 red lights and not 4, or if you had to stop at a crosswalk for a pedestrian? The details and differences in the possible worlds of our commute home can range from the minuscule to the enormous (the difference between rolling your window down versus a meteor landing in the road in front of you). Certainly, all things considered, there are more than 14 million possible futures for your drive home.
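As a rough, back-of-the-envelope illustration (the events and counts here are hypothetical, not from the book), even a few dozen independent yes/no micro-events on a commute multiply into more futures than Dr. Strange examined:
```python
# A back-of-the-envelope check (hypothetical events, my own numbers):
# if a commute contains n independent yes/no micro-events -- hit a bug
# or not, catch a given red light or not -- there are 2**n distinct futures.
for n in (10, 20, 24, 30):
    print(f"{n} binary events -> {2**n:,} possible futures")
# 24 such events already yield 16,777,216 futures, more than the
# 14,000,605 that Dr. Strange examined.
```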
Somehow, we are able to live our lives and make decent predictions of the future despite the enormity of possible worlds ahead of us. Somehow we can represent possible worlds in our minds and determine which future world is the closest to the reality we will experience. This ability allows us to plan for retirement, have kids, go to the movies, and cook dinner. If we could not do this, we could not drive down the street, walk to a neighbor’s house, or navigate a complex social world. But none of us are sitting in a green glow with our heads spinning in circles like Dr. Strange as we try to view all the possible worlds in front of us. What is happening in our minds to do this complex math?
Pearl argues that we solve this representation problem not through magical foresight, but through an intuitive understanding of causal structures. We can’t predict exactly what the stock market is going to do, whether a natural disaster is in our future, or precisely how another person will react to something we say, but we can get a pretty good handle on each of these areas thanks to causal reasoning.
We can throw out possible futures that have no causal structures related to the reality we inhabit. You don’t have to think of a world where Snorlax is blocking your way home, because your brain recognizes there is no causal plausibility of a Pokémon character sleeping in the road. Our brains easily discard the absurd possible futures and simultaneously recognize the causal pathways that could have major impacts on how we will live. This approach gradually narrows the possibilities down to a set that our brains (or computers) can reasonably work with. We also know, without having to do the math, that rolling our window down or hitting a bug is not likely to start a causal pathway that materially changes the outcome of our commute home. The same goes for being stopped at a few more red lights or even stopping to pick up a burrito. Those possibilities exist, but they don’t materially change our lives, and so our brains can discard them from the calculation. This is the kind of work our brains are doing, Pearl would argue, to solve the representation problem.
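A minimal sketch of this pruning, under my own toy assumptions (the events, flags, and two-step filter are my illustration, not Pearl's formalism):
```python
# Toy illustration (mine, not Pearl's formalism) of pruning possible worlds:
# first discard futures with no causal plausibility, then ignore futures
# whose differences don't materially change the outcome.
candidate_futures = [
    {"event": "stopped at 3 red lights instead of 4", "plausible": True,  "material": False},
    {"event": "hit a bug on the windshield",          "plausible": True,  "material": False},
    {"event": "meteor lands in the road",             "plausible": True,  "material": True},
    {"event": "Snorlax blocks the road",              "plausible": False, "material": True},
]

# Step 1: no causal pathway from the world we inhabit -> discard.
plausible = [f for f in candidate_futures if f["plausible"]]

# Step 2: no material effect on the outcome -> discard from the calculation.
worth_planning_for = [f for f in plausible if f["material"]]

print([f["event"] for f in worth_planning_for])
# ['meteor lands in the road'] -- a tiny decision set instead of millions.
```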

Tool Use and Causation

Judea Pearl’s book The Book of Why is all about causation. The reason human beings are able to produce vaccines, send rockets into space, and maintain green gardens is that we understand causation. We have an ability to observe events in the world, to intervene, and to predict how our interventions produce specific outcomes. This allows us to develop tools to achieve specific desired ends, and it is not a small feat.
In the book, Pearl describes three levels of causation based on Alan Turing’s proposal to classify cognitive systems in terms of the queries they can answer. The three levels of causation are association, intervention, and counterfactuals. Pearl explains that many animals observe the world and detect patterns, but that fewer animals use tools to intervene in the world. Fewer still, Pearl explains, possess the ability to actually develop and improve new tools. As he writes, “tool users do not necessarily possess a theory of their tool that tells them why it works and what to do when it doesn’t. For that, you need to have achieved a level of understanding that permits imagining. It was primarily this third level that prepared us for further revolutions in agriculture and science and led to a sudden and drastic change in our species’ impact on the planet.”
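A small simulation can make the first two rungs concrete. This is my own toy structural causal model, not code from the book: a hidden common cause Z drives both X and Y, so observing (association) and doing (intervention) give different answers:
```python
import random

random.seed(0)

# Toy structural causal model (my sketch, not Pearl's code): a hidden
# common cause Z drives both X and Y; X itself has no effect on Y.
def sample(do_x=None):
    z = random.random() < 0.5
    x = z if do_x is None else do_x   # an intervention overrides the mechanism for X
    y = z                             # Y depends only on Z
    return x, y

# Rung 1 (association): estimate P(Y=1 | X=1) by filtering observations.
obs = [sample() for _ in range(100_000)]
p_assoc = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Rung 2 (intervention): estimate P(Y=1 | do(X=1)) by forcing X in the model.
experiments = [sample(do_x=True) for _ in range(100_000)]
p_do = sum(y for _, y in experiments) / len(experiments)

print(f"P(Y=1 | X=1)     ~ {p_assoc:.2f}")  # ~1.00: strong association
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")     # ~0.50: no causal effect at all

# Rung 3 (counterfactuals) needs the full model: hold Z fixed for an observed
# case and re-run it with X flipped -- the 'imagining' Pearl describes.
```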
The theory of tool use that Pearl mentions in the quote is our ability to see and understand causation. We can observe that rocks can be used to cut plant fibers, and then we can identify the qualities in some rocks that make them better at cutting fibers than others. But to get to the point where we are sharpening an edge of a rock to make it even better at cutting fibers, we have to have a causal understanding of what allows the rock to cut and we need sufficient imagination to predict what would happen if the rock had a sharper edge. We have to imagine an outcome in a future world where something was different, and that something different caused a new outcome.
This point is small, but it is actually quite profound. Our minds are able to conceptualize causality and build hypotheses about the world that we can test. This can improve our tool usage, improve the ways we act and behave, and allow us to achieve desired ends through study, prediction, imagination, and experimentation. The key, however, is that we have a theory of the tools and how they work, that we have an ability to intuit causation.
We hear all the time that correlation is not causation, and in our modern technological age we are looking to statistics to help us solve massive problems. However, as Pearl’s quote shows, data, statistics, and information are useless unless we have a theory of the tools we can use based on the knowledge we gain from that data, statistics, and information. We have to embrace causation and our ability to imagine and predict causal structures if we want to do anything with the data.
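To make that concrete, here is a hypothetical sketch: two data-generating processes, one where X really causes Y and one where a hidden factor Z drives both, produce nearly the same observed association, so the data alone cannot tell you which world you are in:
```python
import random

random.seed(1)

# Two hypothetical data-generating processes (my illustration) that yield
# nearly identical associations between X and Y.
def model_causal():
    # X really causes Y: Y copies X 80% of the time.
    x = random.random() < 0.5
    y = x if random.random() < 0.8 else not x
    return x, y

def model_confounded():
    # A hidden Z drives both; X has no effect on Y at all.
    z = random.random() < 0.5
    x = z if random.random() < 0.9 else not z
    y = z if random.random() < 0.9 else not z
    return x, y

def agreement(model, n=100_000):
    data = [model() for _ in range(n)]
    return sum(x == y for x, y in data) / n

print(f"causal model:     X,Y agree {agreement(model_causal):.0%}")      # ~80%
print(f"confounded model: X,Y agree {agreement(model_confounded):.0%}")  # ~82%
# The observed association alone can't distinguish the two worlds; only a
# causal theory (or an intervention on X) can.
```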
This all reminds me of the saying that when the only tool you have is a hammer, everything begins to look like a nail. The saying captures an inability to understand causality, a lack of imagination and predictive prowess. Statistics without a theory of causality, without an ability to use our power to identify and predict causation, is like the hammer in that saying: it is useless, throwing the same toolkit and approach at every problem. Statistics alone doesn’t build knowledge – you also need a theory of causation.
Pearl’s message throughout the book is that statistics (tool use) and causation are linked, that we need a theory and understanding of causation if we are going to do anything with data, statistics, and information. For years we have relied on statistical relationships to help us understand the world, but we have failed to apply the same rigorous study to causation, and that will make it difficult for us to use our new statistical power to achieve the ends that big data and statistical processing promise.
Talking About Causation

In The Book of Why Judea Pearl argues that humans are better at modeling, predicting, and identifying causation than we like to acknowledge. For Pearl, the idea that we can see direct causation and study it scientifically is not a radical and naïve belief, but a common sense and defensible observation about human pattern recognition and intuition of causal structures in the world. He argues that we are overly reliant on statistical methods and randomized controlled trials that suggest relationships, but never tell us exactly what causal mechanisms are at the heart of such relationships.
One of the greatest frustrations for Pearl is the limitations he feels have been placed around ideas and concepts of causality. For Pearl, there is a sense that certain research, certain ways of talking about causality, and certain approaches to solving problems are taboo, and that he and other causality pioneers are unable to talk in ways that might lead to new scientific breakthroughs. Regarding a theory of causation and the history of our study of causality, he writes, “they declared those questions off limits and turned to developing a thriving causality-free enterprise called statistics.”
Statistics doesn’t tell us a lot about causality. Statistical thinking is difficult for most people, and for those without statistical training it often leads to frustration. I remember, around the time of the 2020 election, that Nate Silver, a statistics wonk at FiveThirtyEight.com, posted a cartoon in which one person tries to explain the statistical chance of an outcome to another. The second person interprets statistical chances as either 50-50 or all or nothing: a low-probability event is treated as a certainty that something will not happen, a high-probability event as a certainty that it will, and more middle-ground probabilities are simply lumped in as 50-50 chances. Statistics helps us understand these probabilities in terms of the outcomes we see, but it doesn’t actually tell us anything about the why behind the probabilities. That, I think Pearl would argue, is part of where the confusion stems from for the person in the cartoon.
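A quick simulation (my illustration, not Silver's or Pearl's) shows what a well-calibrated 30% forecast actually means: neither "basically 50-50" nor "won't happen":
```python
import random

random.seed(2)

# My illustration: what a well-calibrated 30% forecast means. The event is
# neither 'basically 50-50' nor 'certain not to happen'.
trials = 10_000
hits = sum(random.random() < 0.30 for _ in range(trials))
print(f"event occurred in {hits / trials:.1%} of simulated worlds")  # ~30%

# Note what the forecast does NOT say: nothing about *why* the event occurs
# in some worlds and not others -- that is the causal question.
```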
Humans think causally, not statistically. However, our statistical studies and the accepted way of doing science push against our natural causal mindsets. This has helped us better understand the world in many ways, but Pearl argues that we have lost something along the way. He argues that we needed to be building better ways of thinking about causality, and building models and theories of causality, at the same time that we were building and improving our studies of statistics. Instead, statistics took over as the only responsible way to discuss relationships between events, with causality becoming taboo.
“When you prohibit speech,” Pearl writes, “you prohibit thought and stifle principles, methods, and tools.” Pearl argues that this is what is happening in terms of causal thinking relative to statistical thinking. I think he, and other academics who make similar speech prohibition arguments, are hyperbolic, but I think it is important to consider whether we are limiting speech and knowledge in an important way. In many studies, we cannot directly see the causal structure, and statistics does have ways of helping us better understand it, even if it cannot point to a causal element directly. Causal thinking alone can lead to errors in thinking, and can be hijacked by those who deliberately want to do harm by spreading lies and false information. Sometimes regressions and correlations hint at possible causal structures or completely eliminate others from consideration. The point is that statistics is still useful, but that it is something we should lean into as a tool to help us identify causality, not as the endpoint of research beyond which we cannot make any assumptions or conclusions.
Academics, such as Pearl and some genetic researchers, may want to push forward with ways of thinking that others consider taboo, and sometimes fail to adequately understand and address the concerns that individuals have about the fields. Addressing these areas requires tact and an ability to connect research in fields deemed off limits to the fields that are acceptable. Statistics and a turn away from a language of causality may have been a missed opportunity in scientific understanding, but it is important to recognize that human minds have posited impossible causal connections throughout history, and that we needed statistics to help demonstrate how impossible these causal chains were. If causality became taboo, it was at least partly because there were major epistemic problems in the field of causality. The time may have come for addressing causality more directly, but I am not convinced that Pearl is correct in arguing that there is a prohibition on speech around causality, at least not if the opportunity exists to tactfully and responsibly address causality as I think he does in his book.
Epistemic Self-Improvement

Is epistemic self-improvement possible? That is, can we individually improve the ways we think so that they are more conducive to knowledge? If we can’t, does that mean we are stuck with epistemic vices, unable to improve our thinking to become epistemically virtuous?
These are important questions because they determine whether we can progress as a collective and overcome ways of thinking that hinder knowledge. Gullibility, arrogance, and closed-mindedness are a few epistemic vices that I have written about recently that demonstrate how hard epistemic self-improvement can be. If you are gullible it is hard to make a change on your own to be less easily fooled. If you are arrogant it is hard to be introspective in a way that allows you to see how your arrogance has limited your knowledge. And if you are closed-minded then it is unlikely you will see a need to expand your knowledge at all. So can we really improve ourselves to think better?
Quassim Cassam seems to believe that we can. He identifies ways in which people have improved their thinking over time and how humans within institutions have become more epistemically virtuous throughout our history. After running through some examples and support for epistemic self-improvement in Vices of the Mind, Cassam writes, “none of this proves that self-improvement in respect of thinking vices is possible, but if our thinking can’t be improved that would make it one of the few things that humans do that they can’t do better with practice and training.”
I am currently reading Joseph Henrich’s book The WEIRDest People in the World, in which he argues that human psychology both shapes and is shaped by institutions. I think he would agree with Cassam, arguing that individual self-improvement is possible and that it can contribute to a positive feedback loop: people improve their thinking, improving the institutions they are a part of, which feeds back into improved thinking. I agree with Cassam and would find it surprising if we couldn’t improve our thinking and become more epistemically virtuous if we set about trying to do so with practice. Viewing this idea through a Henrich lens also suggests that as we try to become more epistemically virtuous, we would shape institutions to better support us, giving us an extra hand from the outside to help us improve our thinking. Individually we can become better thinkers, and that allows us to create better institutions that further support better thinking, creating a virtuous cycle of epistemic self-improvement. There are certainly many places where we can throw sand into the gears of this process, but overall, it should leave us feeling more epistemically optimistic about humans and our societies.
Ignorance is Culpable

We are responsible for our vices and deserve blame for them. We are sometimes responsible for acquiring our vices and are almost always responsible for eliminating them. Sometimes, however, our vices prevent us from recognizing that we possess vices at all and from taking the necessary steps to eliminate them. Yet the blind-spots induced by our vices do not absolve us of culpability; they only make it worse.
Quassim Cassam references former President Donald Trump to demonstrate how we become more culpable for our vices when they create blind-spots in our lives. Cassam writes:
“Few would be tempted to regard the cruel person’s ignorance of his own cruelty as non-culpable on the grounds that it is the result of his cruelty. If the only thing preventing one from knowing one’s vices is those very vices then one’s ignorance is culpable. It is on this basis that Trump’s ignorance of his epistemic incompetence can still be deemed culpable. It is no excuse that he is so incompetent that he can’t get the measure of his incompetence. That only makes it worse.”
The blind-spots induced by our vices may keep us from recognizing how our vices shape the ways we act, think about the world, and behave. Cassam demonstrates this throughout his book as he investigates epistemic vices, those vices which hinder knowledge. If we fail to recognize how little we actually know about the world and can’t be bothered to learn anything, then we will never see how little we know. Arrogance, closed-mindedness, and intellectual laziness will prevent us from seeing that our thinking is vicious and that it is limiting our knowledge.
However, we cannot then say that our vices are not our fault. Arguing that we couldn’t have changed and couldn’t have improved our thinking because our vices were in the way simply demonstrates how vicious our thinking is. Instead of removing the culpability of the vice, Cassam argues, this line of thinking simply doubles down on the cost of the vice, making us even more revision-responsible for it. Ultimately, we are culpable for our vices and for our ignorance about our vices.
Improve Your Posture

In the book Vices of the Mind, Quassim Cassam compares our thinking to our physical posture. Parents, physical therapists, and human resources departments all know the importance of good physical posture. Strengthening your core, lifting from your legs and not your back, and keeping your computer monitor at an appropriate height are important if you are going to avoid physical injuries and costly medical care to relieve your pain. But have you ever thought about your epistemic posture?
Your epistemic posture can be thought of in a manner similar to your physical posture. Are you paying attention to the right things? Are you practicing good focus? Are you working on being open-minded? Having good epistemic posture means thinking in a way that is most conducive to generating knowledge. Just as poor physical posture can result in injuries, poor epistemic posture can result in knowledge injuries (at least if you want to consider a lack of knowledge and information an injury).
Cassam writes, “The importance of one’s physical posture in doing physical work is widely recognized. The importance of one’s epistemic posture in doing epistemic work is not. Poor physical posture causes all manner of physical problems, and a poor epistemic posture causes all manner of intellectual problems. So the best advice to the epistemically insouciant and intellectually arrogant is: improve your posture.”
Improving our epistemic posture is not easy. It’s not something we just wake up and decide to do on our own, just as we can’t improve our walking form, the way we lift boxes, or the ergonomics of our workspace entirely on our own. We need coaches, teachers, and therapists to help us see where we are going through dangerous, harmful, or imbalanced motions, and we need them to help correct us. These are skills that should be taught from a young age (both physically and epistemically) to help us understand how to adopt good habits and maintain a healthy posture throughout life.
Thinking in ways that build and enhance our knowledge is important. It is important that we learn to be open-minded, that we learn how not to be arrogant, and that we learn that our opinions and perspectives are limited. The more we practice good epistemic posture, the better we can become at recognizing when we have enough information to make important decisions and when we are making decisions without sufficient information. It can help us avoid spreading misinformation and disinformation, and it can help us avoid harmful conspiracy theories and motivated reasoning. Good epistemic posture will help us have strong and resilient minds, just as good physical posture will help us have strong and resilient bodies.
Anecdotal Versus Systematic Thinking

Anecdotes are incredibly convincing, especially when they focus on an extreme case. But anecdotes are not always representative of larger populations. Some anecdotes are very context dependent, focus on specific and odd situations, and deal with narrow circumstances. Yet because they are often vivid, highly visible, and emotionally resonant, they can be highly memorable and influential.
Systematic thinking often lacks these qualities. The general reference class is often hard to see or make sense of. It is much easier to remember a commute that featured a police officer or a traffic accident than the vast majority of commutes that were uneventful. Sometimes the data directly contradicts the anecdotal stories and impressions we hold, but that data often lacks the visibility to reveal the contradiction. This happens frequently with news stories or TV shows that highlight dangerous crime or teen pregnancy. Despite a rise in crime during 2020, crime rates have fallen over recent decades, and despite TV shows about teen pregnancies, those rates have also been falling.
In Vices of the Mind, Quassim Cassam examines anecdotal versus systematic thinking to demonstrate that anecdotal thinking can be an epistemic vice that obstructs our view of reality. He writes, “With a bit of imagination it is possible to show that every supposed epistemic vice can lead to true belief in certain circumstances. What is less obvious is that epistemic vices are reliable pathways to true belief or that they are systematically conducive to true belief.”
The contrast between anecdotal and systematic (or structural) thinking is a useful context for Cassam’s quote. An anecdote describes a situation or story with an N of 1. That is to say, an anecdote is a single case study. Within any population of people, drug reactions, rocket launches, or any other phenomenon, there are going to be outliers. There will be some results that are strange and unique, deviating from the norm or average. These individual cases are interesting and can be useful to study, but it is important that we recognize them as outliers and not generalize them to the larger population. Systematic and structural thinking helps us see the larger population and develop more accurate beliefs about what we should normally expect to happen.
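A short sketch shows why. The numbers are hypothetical: suppose a drug side effect is usually mild but has a rare extreme outlier; a single vivid case can sit far from what the population systematically experiences:
```python
import random

random.seed(3)

# Hypothetical numbers, for illustration only: a side effect is usually mild
# (severity ~1.0), but 1% of cases are extreme outliers (severity 10.0).
def severity():
    return 10.0 if random.random() < 0.01 else random.gauss(1.0, 0.3)

one_case = severity()                              # an anecdote: N of 1
population = [severity() for _ in range(100_000)]  # the systematic view

print(f"single case:     {one_case:.2f}")
print(f"population mean: {sum(population) / len(population):.2f}")  # ~1.09
# If the one case you happen to hear about is a vivid 10.0 outlier,
# generalizing from it badly misjudges the typical experience.
```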
Anecdotal thinking may occasionally lead to true beliefs about larger classes, but as Cassam notes, it will not do so reliably. We cannot build our beliefs around single anecdotes, or we will risk making decisions based on unusual outliers. Trying to address crime, reduce teen pregnancy, determine the efficacy of a medication, or verify the safety of a spaceship requires that we understand the larger systemic and structural picture. We cannot study one instance of crime and assume we know how to reduce crime across an entire country, and none of us would want to ride in a spaceship that had only been tested once.
It is important that we recognize anecdotal thinking, and other epistemic vices, so we can improve our thinking and have better understandings of reality. Doing so will help improve our decision-making, will improve the way we relate to the world, and will help us as a society better determine where we should place resources to help create a world we want to live in. Anecdotal thinking, and indulging in other epistemic vices, might give us a correct answer from time to time, but it is likely to lead to worse outcomes and decisions over time as we routinely misjudge reality. This in turn will create tensions and distrust among a society that cannot agree on the actual trends and needs of the population.
Thinking Conspiratorially Versus Evidence-Based Thinking

My last two posts have focused on conspiratorial thinking and whether it is an epistemic vice. Quassim Cassam in Vices of the Mind argues that we can only consider thinking conspiratorially to be a vice based on context. He means that whether conspiratorial thinking is a vice depends on whether there is reliable and accurate evidence to support the conspiratorial claim. Thinking conspiratorially is not an epistemic vice when we are correct and have solid evidence and rational justifications for thinking conspiratorially. Anti-conspiratorial thinking can be an epistemic vice if we ignore good evidence of a conspiracy and continue believing that everything is in order.
Many conspiracies are not based on reliable facts and information. They create causal links between disconnected events and fail to explain reality. Anti-conspiratorial thinking also creates a false picture of reality, but does so by ignoring causal links that actually do exist. As epistemic vices, both ways of thinking can be described consequentially and by examining the patterns of thought that contribute to the conspiratorial or anti-conspiratorial thinking.
However, that is not to say that conspiratorial thinking is a vice only in non-conspiracy environments or that anti-conspiratorial thinking is a vice only in high-conspiracy environments. Regarding this line of thought, Cassam writes, “Seductive as this line of thinking might seem, it isn’t correct. The obvious point to make is that conspiracy thinking can be vicious in a conspiracy-rich environment, just as anti-conspiracy thinking can be vicious in contexts in which conspiracies are rare.” The key, according to Cassam, is evidence-based thinking and whether we have justified beliefs and opinions, even if they turn out to be wrong in the end.
Cassam generally supports the principle of parsimony, the idea that the simplest explanation for a scenario is often the best and the one that you should assume to be correct. Based on the evidence available, we should look for the simplest and most direct path to explain reality. However, as Cassam continues, “the principle of parsimony is a blunt instrument when it comes to assessing the merits of a hypothesis in complex cases.” This means that we will still end up with epistemic vices related to conspiratorial thinking if we only look for the simplest explanation.
What Cassam’s quotes about conspiratorial thinking and parsimony get at is the importance of good evidence-based thinking. When we are trying to understand reality, we should be thinking about what evidence should exist for our claims, what evidence would be needed to support our claims, and what kinds of evidence would refute our claims. Evidence-based thinking helps us avoid the pitfalls of conspiratorial and anti-conspiratorial thinking, regardless of whether we live in conspiracy-rich or conspiracy-poor environments. Accurately identifying or denying a conspiracy without any evidence, based on assumed simple relationships, is ultimately not much better than making up beliefs based on magic. What we need to do is learn to adopt evidence-based thinking and to better understand the causal structures that exist in the world. That is the only true way to avoid the epistemic vices related to conspiratorial thinking.
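One way to make evidence-based thinking concrete (my gloss, not Cassam's) is Bayesian updating: belief in a hypothesis should move in proportion to how strongly the evidence favors it over its alternatives. A minimal sketch:
```python
# A minimal sketch (my gloss, not Cassam's) of evidence-based belief updating
# via Bayes' rule: P(H | E) = P(E | H) P(H) / P(E).
def update(prior, p_evidence_if_true, p_evidence_if_false):
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# In a conspiracy-poor environment the prior for a conspiracy is low.
prior = 0.01
# Strong, specific evidence (far likelier if the conspiracy is real) moves us:
print(f"{update(prior, 0.9, 0.1):.3f}")  # ~0.083
# Weak, ambiguous evidence barely moves us at all:
print(f"{update(prior, 0.5, 0.4):.3f}")  # ~0.012
```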