Sapiens' Trade


In the book Sapiens, Yuval Noah Harari discusses archaeological evidence that Homo sapiens engaged in trade tens of thousands of years ago. He also suggests that this evidence of trade helps explain how Homo sapiens could have outcompeted other human species.
For example, Harari suggests that Neanderthals probably couldn’t cooperate to the same extent as Sapiens. He also suggests that a Neanderthal would win in a fight with a Sapiens, but that individual fights between human species were not the main form of competition. Large numbers of Sapiens could communicate and share goals through myths and stories, allowing them to gang up on physically stronger species like Neanderthals. Harari points to differences in trade as evidence of this cognitive gap between the two species, and of the advantages that Sapiens had:
“Archaeologists excavating 30,000-year-old Sapiens sites in the European heartland occasionally find there seashells from the Mediterranean and Atlantic coasts. In all likelihood, these shells got to the continental interior through long-distance trade between different Sapiens bands. Neanderthal sites lack any evidence of such trade. Each group manufactured its own tools from local materials.”
Sapiens and Neanderthals were both tool users, but Sapiens appear to have traded with foreign bands. While Neanderthals constructed all their tools themselves, Sapiens could get different tools from different bands, could get decorative seashells, and could coordinate and cooperate among themselves and with others. This communication and cooperation is what Harari argues gave Sapiens an advantage over species like the Neanderthals, and what eventually allowed Sapiens to outcompete every other human species.
Fiction as a Technology - Yuval Noah Harari Sapiens - Joe Abittan


In nerdy circles, on some podcasts and in discussions among people who look at the world in complex ways, you may hear people refer to human institutions as technologies. The idea is that human institutions are designed and created to help further specific goals, just as the things we typically think of as technologies are, such as cell phones and automatic coffee makers. Forms of governance, religions, and social organizations can all be thought of as technologies – they are tools we create to help us live as social creatures in complex societies. Through this lens, we can also view fictional stories as a technology.
In his book Sapiens, Yuval Noah Harari looks at fictions as a type of technology and explains how the evolution of the human brain and an increased capacity for language unlocked this technology. He writes:
“Legends, myths, gods, and religions appeared for the first time with the Cognitive Revolution. Many animals and human species could previously say, careful! A Lion! Thanks to the Cognitive Revolution, Homo sapiens acquired the ability to say, the lion is the guardian spirit of our tribe. This ability to speak about fictions is the most unique feature of Sapiens’ language.”
Fictions allow us to imagine things that don’t exist. They allow us to transmit ideas that are hard to put into concrete, real-world terms and examples. Memes often exist in fictional form, spreading through people once a critical mass has been reached. Myths, the show Friends, and concepts like the American Dream help us think about how we should live and behave. As Harari writes, “fiction has enabled us not merely to imagine things, but to do so collectively.”
Fiction as a technology functions as a type of social bond. We spend our time constantly creating fictions, imagining what is taking place inside another person’s head, what our future will look like if we do one thing rather than another, and what the world would look like if some of us had special powers. What is incredible about the human brain is that these fictions don’t just exist in isolation within individual brains. They are often shared, shaped, and constructed socially. We share fictions and can find meaning, belonging, and structures for living our lives through our shared fictions. The power of the mind to create fictional stories and to then live within collective fictions is immense, sometimes for the betterment of human life, and sometimes for the detriment.
More on Human Language and Gossip


In my last post I wrote about human language evolving to help us do more than just describe our environments. Language is helpful for asking someone how many cups of flour go into a cookie recipe, where the nearest gas station is, and whether there are any cops on the freeway (or, for our ancestors, which nuts are edible, where to find edible nuts, and whether there is a lion hiding near the place with the edible nuts). However, humans use language for much more than describing these aspects of our environment. In particular, we use language for signaling, for gossiping, and for saying things without actually saying them out loud.
We might use language to say that we believe something which is clearly, objectively false (that the emperor has nice clothes on) to signal our loyalty. We may gossip behind someone’s back to learn from another person whether that individual is trustworthy, as Yuval Noah Harari argues in his book Sapiens. And we might ask someone if they would like to come over to our house to watch Netflix and chill, even if no actual watching of Netflix is part of the plans we are proposing. As Robin Hanson and Kevin Simler explain in The Elephant in the Brain, we are asking a question while giving the other person plausible deniability in their response and building plausible deniability into the intent of our question.
These are all very complicated uses of language, and they developed as our brains evolved to become more complex. The reason evolution favored brains that could support such complicated uses of language is that humans are social beings. In Sapiens, Harari writes, “The amount of information that one must obtain and store in order to track the ever-changing relationships of even a few dozen individuals is staggering. (In a band of fifty individuals, there are 1,225 one-on-one relationships and countless more complex social combinations.)” In order for us to signal to a group of humans, gossip about others, or say things that we know will be commonly understood but plausibly denied, our brains needed a lot of power. History suggests that tribes typically ranged from about 50 people on the low end to 250 on the high end, meaning we had a lot of social interactions and considerations to manage. Our brains evolved to make us better social creatures, and language was one of the tools that both supported and drove that evolution.
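Harari’s figure for a band of fifty checks out arithmetically: the number of distinct one-on-one relationships among n people is n(n-1)/2. A quick sketch (the 50-person figure is from the quote; 250 is the high end of tribe sizes mentioned above):

```python
def pair_count(n):
    """Distinct one-on-one relationships in a group of n people: n choose 2."""
    return n * (n - 1) // 2

print(pair_count(50))   # 1225, matching Harari's figure for a band of fifty
print(pair_count(250))  # 31125 relationships at the high end of tribe sizes
```

The count grows roughly with the square of the group size, which is why tracking relationships in a 250-person tribe demands so much more cognitive power than in a band of fifty.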
Using Language for More than Conveying Environmental Information - Yuval Noah Harari Sapiens - Kevin Simler and Robin Hanson The Elephant in the Brain - Joe Abittan


In the most basic utilitarian sense, our complex human languages evolved because they allowed us to convey information about the world from one individual to another. Language was incredibly important for early humans because it helped our ancestors tell each other when a predator was spotted nearby, when fruit was safe to eat, or if there was a dead water buffalo nearby that they could scavenge some scraps from. This is the simplest theory of the evolution of human language, but it doesn’t truly convey everything we have come to do with our language over a couple million years of evolution.
Yuval Noah Harari expands on this idea in his book Sapiens, “a second theory agrees that our unique language evolved as a means of sharing information about the world. But the most important information that needed to be conveyed was about humans, not lions and bison.” What Harari means in this quote is that human language allowed our ancestors to gossip. This is an idea that Kevin Simler and Robin Hanson share in their book The Elephant in the Brain. They argue that language is often more about showing off and gossiping than it is about utilitarian matters such as conveying environmental information. They also argue that the use of language for gossip and signaling was one of the key drivers of the evolution of the human brain, rewarding our ancestors for being smarter and more deceptive, hence rewarding larger and more complex brains.
In Sapiens, Harari explains that many species of monkeys are able to convey basic information through specific calls that are recognized among a species, such as when a predator is nearby or when there is ample food nearby. Playbacks of sounds identified as warnings will make monkeys in captivity hide. However, studies haven’t been able to show that other species are able to communicate and gossip about each other in the ways that humans do from a very young age. Our use of language to convey more than basic information about our environment allowed humans to develop into social tribes, and it has since allowed us to develop massive populations of billions of people all cooperating and living together.

Bonk

On one of the first few pages of her book Bonk: The Curious Coupling of Science and Sex, Mary Roach writes the following tribute: “This book is a tribute to the men and women who dared. Who, to this day, endure ignorance, closed minds, righteousness, and prudery. Their lives are not easy. But their cocktail parties are the best.”
Bonk is an exploration of our scientific exploration of sex. For many reasons, sex research has been difficult to carry out and often taboo. Researchers face extra challenges getting funding, are treated with skepticism, have trouble finding subjects, have trouble publishing important findings, and can be publicly ridiculed for their research. Roach writes about the euphemisms that researchers have to employ when describing their studies, switching words related to sex to more physiologically based words. She also writes about the range of topics that become difficult to study because of their relation to sex – topics related to genitals, especially to the female body, even if they are not sex specific topics.
Across the book Roach identifies important themes in global culture. Humans are often driven by sex, surrounded by sex, or confused by something sexual, but we rarely discuss sex or anything related to it in a direct way. Even intimate couples find it difficult to have honest and direct conversations about sex. In some ways it is fair to say that sex is hyper-present in the United States, but this doesn’t mean we are ok with openly discussing our sexual experiences with other people, even neutral and independent researchers.
This has created a challenge where we all have many questions and uncertainties related to our sexual development, our sexual orientation, and physiological sexual responses to stimuli throughout our lives, but few good places to get answers to those questions. Even if we can study these topics, it is not easy to access, share, and discuss that research. People who do such research, or claim to be interested in such research, are often stigmatized and other people who know their research interests may not want to associate with them to avoid the same stigma.
Ultimately, what I think Roach believes is that we should work to be more honest and develop better conversations around the science of sex. I think this is something Roach believes is necessary in many academic and scientific fields, not just those related to sex. Her work has generally made an effort to study and explore topics that are gross, taboo, and overlooked, but are always present and important. Sex is something that has many individual and social factors, and failing to research sex leaves us stuck with ignorance, where strong voices can win out over the reality of many people’s experiences. Better science, study, and discussion will hopefully help us better understand ourselves, our bodies, and our physical relationships with others.

Closed-Mindedness

One of the epistemic vices that Quassim Cassam describes in his book Vices of the Mind is closed-mindedness. An epistemic vice, Cassam explains, is a pattern of thought or a behavior that obstructs knowledge. They systematically get in the way of learning, communicating, or holding on to important and accurate information.
Regarding closed-mindedness, Cassam writes, “in the case of closed-mindedness, one of the motivations is the need for closure, that is, the individual’s desire for a firm answer to a question, any firm answer as compared to confusion and/or ambiguity [Italics indicate quote from A.W. Kruglanski]. This doesn’t seem an inherently bad motive and even has potential benefits. The point at which it becomes problematic is the point at which it gets in the way of knowledge.”
This quote about closed-mindedness reveals a couple of interesting aspects of the way we think and the patterns of thought that we adopt. The quote shows that we can become closed-minded without intending to be closed-minded people. I’m sure that very few people think it is a good thing to close ourselves off from new information or diverse perspectives about how our lives should be. Instead, we seek knowledge and we prefer feeling as though we are correct and as though we understand the world we live in. Closed-mindedness is in some ways a by-product of living in a complex world where we have to make decisions under uncertainty. It is uncomfortable to constantly question every decision we make, and it can become paralyzing if we agonize over each decision too much. Simply making a decision and deciding we are correct without revisiting the question is easier, but also characteristically closed-minded.
The second interesting point is that epistemic vices such as closed-mindedness are not always inherently evil. As I wrote in the previous paragraph, closed-mindedness (or at least a shade of it), can help us navigate an uncertain world. It can help us make an initial decision and move on from that decision in situations where we otherwise may feel paralyzed. In many instances, like purchasing socks, there is no real harm that comes from being closed-minded. You might pay more than necessary purchasing fancy socks, but the harm is pretty minimal.
However, closed-mindedness systematically hinders knowledge by making people unreceptive to new information that challenges existing or desired beliefs. It makes people worse at communicating information because their data may be incomplete or irrelevant. Knowledge is limited by closed-mindedness, and over time this creates the potential for substantial consequences in people’s lives. Selecting a poor health insurance plan as a result of being closed-minded, starting a war, or spreading harmful chemical pesticides are real-world consequences that have occurred as a result of closed-mindedness. Substantial sums of money, people’s lives, and people’s health and well-being can hang in the balance when closed-mindedness prevents people from making good decisions, regardless of the motives that made someone closed-minded and regardless of whether being closed-minded helped solve analysis paralysis. Many of the epistemic vices, and the characteristics of epistemic vices, that Cassam describes manifest in our lives in ways similar to closed-mindedness. Reducing such vices, like avoiding closed-mindedness, can help us prevent the serious harms that can accompany the systematic obstruction of knowledge.
Risk literacy and Reduced Healthcare Costs - Joe Abittan


Gerd Gigerenzer argues that risk literacy and reduced healthcare costs go together in his book Risk Savvy. By increasing risk literacy we will help both doctors and patients better understand how behaviors contribute to overall health, how screenings may or may not reveal dangerous medical conditions, and whether medications will or will not make a difference for an individual’s long-term well being. Having both doctors and patients better understand and better discuss the risks and benefits of procedures, drugs, and lifestyle changes can help us use our healthcare resources more wisely, ultimately bringing costs down.
Gigerenzer argues that much of the modern healthcare system, not just the US system but the global healthcare system, has been designed to sell more drugs and more technology. Increasing the number of people using medications, getting more doctors to order more tests with new high-tech diagnostic machines, and driving more procedures became more of a goal than actually helping to improve people’s health. Globally, health and the quality of healthcare has improved, but healthcare is often criticized as a low productivity sector, with relatively low gains in health or efficiency for the investments we make.
I don’t know that I am cynical enough to accept all of Gigerenzer’s argument at face value, but the story of opioids, the fact that we invest much larger sums of money in cancer research versus parasitic disease research, and the ubiquitous use of MRIs in our healthcare landscape do favor Gigerenzer’s argument. There hasn’t been as much focus on improving doctor and patient statistical reasoning, and we haven’t put forward the same effort and funding to remove lead from public parks compared to the funding put forward for cancer treatments. We see medicine as treating diseases after they have popped up with fancy new technologies and drugs. We don’t see medicine as improving risk and health literacy or as helping improve the environment before people get sick.
This poor vision of healthcare that we have lived with for so long, Gigerenzer goes on to argue, has blinded us to the real possibilities within healthcare. Gigerenzer writes, “calls for better health care have been usually countered by claims that this implies one of two alternatives, which nobody wants: raising taxes or rationing care. I argue that there is a third option: by promoting health literacy of doctors and patients, we can get better care for less money.”
Improving risk and health literacy means that doctors can better understand and better communicate which medications, which tests, and which procedures are most likely to help patients. It will also help patients better understand why certain recommendations have been made and will help them push back against the feeling that they always need the newest drugs, the most cutting edge surgery, and the most expensive diagnostic screenings. Regardless of whether we raise taxes or try to ration care, we have to help people truly understand their options in new ways that incorporate tools to improve risk literacy and reduce healthcare costs. By better understanding the system, our own care, and our systemic health, we can better utilize our healthcare resources, and hopefully bring down costs by moving our spending into higher productivity healthcare spaces.
On The Opportunity To Profit From Uninformed Patients


The American medical system is in a difficult and dangerous place right now. Healthcare services have become incredibly expensive, and the entire system has become so complex that few people fully understand it and even fewer can successfully navigate the system to get appropriate care that they can reasonably afford. My experience is that many people don’t see value in much of the care they receive or in many of the actors connected with their care. They know they need insurance to afford their care, but they really can’t see what value their insurance provides – it often appears to be more of a frustration than something most people appreciate. The same can be true for primary care, anesthesiologists, and the variety of healthcare benefits that employers may offer to their employees. There seem to be lots of people ready to profit from healthcare, but not a lot of people ready to provide real value to the people who need it.
 
These sentiments are all generalizations, and of course many people really do see value in at least some of their healthcare and are grateful for the care they receive. However, the complexity, the lack of transparency, and the ever climbing costs of care have people questioning the entire system, especially at a moral and ethical level. I think a great deal of support for Medicare for All, or universal healthcare coverage, comes from people thinking that profit within medicine may be unethical and from a lack of trust that stems from an inability to see anything other than a profit motive in many healthcare actors and services.
 
Gerd Gigerenzer writes about this idea in his book Risk Savvy. In the book he doesn’t look at healthcare specifically, but uses healthcare to show the importance of being risk literate in today’s complex world. Medical health screening in particular is a good space to demonstrate the harms that can come from misinformed patients and doctors. A failure to understand and communicate risk can harm patients, and it can actually create perverse incentives for healthcare systems by providing them the opportunity to profit from uninformed patients. Gigerenzer quotes Dr. Otis Brawley, who had been Director of the Georgia Cancer Center at Emory in Atlanta.
 
In Dr. Brawley’s quote, he discusses how Emory could have screened 1,000 men at a mall for prostate cancer and how the hospital could have made $4.9 million in billing for the tests. Additionally the hospital would have profited from future services when men returned for other unrelated healthcare concerns as established patients. In Dr. Brawley’s experience, the hospital could tell him how much they could profit from the tests, but could not tell him whether screening 1,000 men early for prostate cancer would have actually saved any lives among the 1,000 men screened. Dr. Brawley knew that screening many men would lead to false positive tests, and unnecessary stress and further medical diagnostic care for those false positives – again medical care that Emory would profit from. The screenings would also identify men with prostate cancer that was unlikely to impact their future health, but would nevertheless lead to treatment that would make the men impotent or potentially incontinent. The hospital would profit, but their patients would be worse off than if they had not been screened. Dr. Brawley’s experience was that the hospital could identify avenues for profit, but could not identify avenues to provide real value in the healthcare services they offer.
 
Gigerenzer found this deeply troubling. A failure to understand and communicate the risks of prostate cancer (which is more complex than I can write about here) presents an opportunity for healthcare providers to profit by pushing unnecessary medical screening and treatment onto patients. Gigerenzer also notes that profiting from uninformed patients is not just limited to cancer screening. Doctors who are not risk literate cannot adequately explain risks and benefits of treatment to patients, and their patients cannot make the best decisions for themselves. This is a situation that needs to change if hospitals want to keep the trust of their patients and avoid being a hated entity that fails to demonstrate value. They will go the way of health insurance companies, with frustrated patients wanting to eliminate them altogether.
 
Wrapping up the quote from Dr. Brawley, Gigerenzer writes, “profiting from uninformed patients is unethical. Medicine should not be a money game.” I believe that Gigerenzer and Dr. Brawley are right, and I think that all healthcare actors need to clearly demonstrate their value, otherwise any profits they earn will make them look like money-first enterprises and not patient-first enterprises, frustrating the public and leading to distrust in the medical field. In the end, this is going to be harmful for everyone involved. Demonstrating real value in healthcare is crucial, and profiting from uninformed patients will diminish the value provided and hurt trust, making the entire healthcare system in our country even worse.

Understanding False Positives with Natural Frequencies


In a graduate course on healthcare economics a professor of mine had us think about drug testing student athletes. We ran through a few scenarios where we calculated how many true positive test results and how many false positive test results we should expect if we oversaw a university program to drug test student athletes on a regular basis. The results were surprising, a little confusing, and hard to understand.

 

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to produce more false positive results than true positives. The big determining factors are the accuracy of the test (its sensitivity and its false positive rate) and the percentage of students actually using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does indeed use drugs. The dilemma we discussed occurs when you have a test with some percentage of error and a large student athlete population with a minimal percentage of drug users. In this instance you cannot be confident that a positive test result is accurate. You will receive a number of positive tests, but most of them will actually be false positives.
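A minimal sketch of the classroom exercise, with hypothetical numbers chosen for illustration (none of these figures are from the course or the book): 1,000 athletes, 2% of whom actually use drugs, tested with a screen that flags 95% of users and wrongly flags 5% of non-users:

```python
# All parameters below are assumptions for illustration only.
athletes = 1000
prevalence = 0.02       # share of athletes who actually use drugs
sensitivity = 0.95      # share of users the test correctly flags
false_pos_rate = 0.05   # share of non-users the test wrongly flags

users = round(athletes * prevalence)                  # 20 actual users
non_users = athletes - users                          # 980 non-users
true_positives = round(users * sensitivity)           # 19 correctly flagged
false_positives = round(non_users * false_pos_rate)   # 49 wrongly flagged

print(true_positives, false_positives)  # 19 49
```

Even with a test that is right 95% of the time, false positives outnumber true positives more than two to one, simply because non-users vastly outnumber users.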

 

In class, our teacher walked us through this example verbally before creating some tables that we could use to multiply the percentages ourselves and see that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare event. Our teacher continued to explain that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests, as we are learning today). The challenge, as our professor explained, is that the math is complicated and it is hard to explain to a person who just received a positive cancer test that they likely don’t have cancer, even though they just received a positive test. The statistics are hard to understand on their own.

 

However, Gerd Gigerenzer doesn’t think this has to be as confusing as my professor’s course made it seem. In Risk Savvy, Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to work through, Gigerenzer suggests, can be digested in simple charts using natural frequencies. Natural frequencies are counts of actual people or events, which we can understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

 

Telling someone that the incidence of cancer in the population is only 1%, that the test has a 9% false positive rate, and that they therefore still probably don’t have cancer despite a positive result is confusing. However, if you explain that out of every 1,000 people who take a particular cancer test, only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear up. Of the 10 who do have cancer, the test correctly identifies 9 of them, producing 9 true positives per 1,000 tests (adjust according to the population and test sensitivity). Of the 990 people who really don’t have cancer, the test errs and tells 89 of them (9% in this case) that they do have cancer. So 89 individuals receive false positives while only 9 people receive true positives. Out of 98 positive results, only 9 reflect real cancer, so a positive test is far from a guarantee of having the disease.
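The natural-frequency arithmetic above can be turned into a short calculation. Using the same numbers (1,000 people tested, 1% prevalence, a test that catches 9 of 10 real cancers, and a 9% false positive rate), the chance of having cancer given a positive test falls out directly:

```python
tested = 1000
have_cancer = round(tested * 0.01)         # 10 people actually have cancer
no_cancer = tested - have_cancer           # 990 do not

true_positives = round(have_cancer * 0.9)  # 9 of the 10 are correctly flagged
false_positives = round(no_cancer * 0.09)  # 89 are wrongly flagged

chance_cancer_if_positive = true_positives / (true_positives + false_positives)
print(round(chance_cancer_if_positive, 3))  # 0.092: fewer than 1 in 10 positives is real
```

This matches the counts in the paragraph: 89 false positives against 9 true positives, so only about 9% of people with a positive result actually have cancer.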

 

Gigerenzer uses very helpful charts in his book to show that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to reach better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed are better tools and aids to help with comprehension. Rather than developing paternalistic policies to nudge people toward certain behaviors that they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.
Stats and Messaging


In the past, I have encouraged attaching probabilities and statistical chances to the things we believe or to events we think may (or may not) occur. For example, say Steph Curry’s three-point shooting percentage is about 43%, and I am two Steph Currys confident that my running regimen will help me qualify for the Boston Marathon. One might also be two Steph Currys confident that leaving now will guarantee they are at the theater in time for the movie, or that most COVID-19 restrictions will be rescinded by August 2021, allowing people to go to movies again. However, the specific percentages I am attaching in these examples may be meaningless, and may not really convey an important message for most people (myself included!). It turns out that modern-day statistics, and the messaging attached to them, are not well understood.

 

In his book Risk Savvy, Gerd Gigerenzer discusses the disconnect between stats and messaging, and the mistake most people make. The main problem with using statistics is that people don’t really know what the statistics mean in terms of actual outcomes. This was seen in the 2016 US presidential election, when sources like FiveThirtyEight gave Trump a 28.6% chance of winning, and again in 2020, when the election was closer than many predicted but still well within the forecasted range. In both instances, a Trump win was considered such a low-probability event that people dismissed it as a real possibility, only to be shocked when Trump did win in 2016 and performed better than many expected in 2020. People failed to appreciate that FiveThirtyEight’s prediction meant Trump won in 28.6% of their 2016 election simulations, and that in 2020 many of their models predicted races both closer than and wider than the result we actually observed.

 

Regarding weather forecasting and statistical confusion, Gigerenzer writes, “New forecasting technology has enabled meteorologists to replace mere verbal statements of certainty (it will rain tomorrow) or chance (it is likely) with numerical precision. But greater precision has not led to greater understanding of what the message really is.” Gigerenzer explains that in the context of weather forecasts, people often fail to understand that a 30% chance of rain means that on 30% of days when the observed weather factors (temperature, humidity, wind speed, etc.) match the predicted conditions, rain occurs; or, put another way, that models simulating 100 days of weather with those conditions included rain on 30 of them. What is missing, Gigerenzer explains, is the reference class. Telling people there is a 30% chance of rain could lead them to think that it will rain for 30% of the day, that 30% of the city they live in will be rained on, or they may misunderstand the forecast in some completely unpredictable way.

 

Probabilities are hard for people to understand, especially when they are busy, have other things on their mind, and don’t know the reference class. Providing probabilities that don’t connect to a real reference class can be misleading and unhelpful. This is why my suggestion of tying beliefs and possible outcomes to a statistic might not actually be meaningful. If we don’t have a reasonable reference class and a way to understand it, then it doesn’t matter how many Steph Currys of confidence I attach to something. I think we should take statistics into consideration with important decision-making, and I think Gigerenzer would agree, but if we are going to communicate our decisions in terms of statistics, we need to clearly state and explain the reference classes and provide the appropriate tools to help people understand the stats and messaging.