During the COVID-19 pandemic, institutional review boards (IRBs) were scrutinized for delaying potential treatments and vaccines to fight the coronavirus that causes COVID-19. IRBs exist to ensure that scientific research doesn't harm participants. Throughout the history of science, many dubious experiments have been carried out by less than scrupulous scientists. An IRB is a useful tool for ensuring that researchers have genuine reasons to conduct experiments that may cause some type of physical or psychological harm to participants, that they do as much as possible to mitigate those harms, and that they adequately address the safety and needs of subjects before, during, and after an experiment.
However, in recent years many researchers have argued that IRBs have become too risk-averse and too restrictive. Rather than focusing purely on the safety and health of research participants, IRBs have been criticized for protecting the brand of the research institution, meaning that some valuable and worthy science is denied funding or approval because it sounds weird and, if it doesn't go well, could reflect poorly on the academic standing of the institution that approved the study. Additionally, well-meaning IRBs can cause extensive delays as each study is reviewed, debated, and approved or denied. Studies that are denied may have to adjust their methodology and approach, and redesigning a study adds still more time before it can actually get up and running. For many research studies this may be little more than an inconvenience for the researcher, but during the COVID-19 pandemic these delays have been sharply criticized.
COVID-19 moves fast, and a one-month delay for a study that could prove life-saving means that more people will die than would have died had the study not been delayed. This means that in the interest of promoting safety, an IRB can create a delay that costs lives. Mary Roach wrote about these concerns years before the pandemic in her book Gulp: "rather than protecting patients, IRBs – with their delays and prodigious paperwork – can put them in harm's way." If checking the right boxes on the right forms and submitting the right paperwork at the right time is more important than the actual research, we could see delays that hold back treatments, preventative vaccinations, and cures for deadly diseases.
The pandemic has shown us how serious these delays can be. IRBs may have to be rethought and restructured so that in times of emergency we can move more quickly while still addressing patient safety. For science where time matters and risk is inherent in the study, we may have to develop a new review or oversight body beyond the traditional IRB structure to ensure that we don't harm patients while trying to protect them.
[Author Note: This begins a short three-post break from writing about homelessness for a few quotes and thoughts on books by Mary Roach. More to come from Roach after finishing some additional writing on homelessness and poverty.]
Mary Roach’s book Stiff: The Curious Lives of Human Cadavers is an exploration into what happens to bodies donated for scientific research. In the book she meets with scientists, researchers, and academics who are working with human cadavers to make life better for those of us who are still living. It is a witty, humorous, yet altogether respectful exploration of the ways in which the human body has helped propel our species forward, even after the human life within the body has expired.
Regarding cadavers and what they have unlocked through sometimes gory (though today as considerate and respectful as possible) experiments, Roach writes the following:
“Cadavers are our superheroes: They brave fire without flinching, withstand falls from tall buildings and head-on car crashes into walls. You can fire a gun at them or run a speedboat over their legs, and it will not faze them. Their heads can be removed with no deleterious effect. They can be in six places at once. I take the Superman point of view: What a shame to waste these powers, to not use them for the betterment of humankind.”
The scientific study of cadavers can be off-putting, but it has been incredibly valuable for humanity across the globe. Cadavers have helped us understand basic anatomy, design safer cars, and ensure the safety of astronauts. Without cadavers, many more people would have died in ill-conceived medical experiments and car crashes, and numerous live animals would have suffered as alternative test subjects. Cadavers perform miraculous jobs that living humans cannot, and for their service and sacrifices, we should all be grateful.
In linear causal models, the total effect of an action equals the sum of its direct effect and its indirect effect. We can use an oversimplified anti-tobacco public health campaign to conceptualize this equation. A campaign could be developed to use famous celebrities in advertisements against smoking. This approach may have a direct effect on teen smoking rates if teens see the advertisements and decide not to smoke as a result of the influential messaging from their favorite celebrity. The approach may also have indirect effects. Imagine a teen who didn't see the advertising, but whose best friend did. If the best friend was influenced, the teen may adopt their friend's anti-smoking stance. This would be an indirect effect of the advertising campaign in the positive direction. The total effect of the campaign would then be the kids who were directly deterred from smoking combined with those who didn't smoke because their friends were deterred.
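As an illustrative sketch, the linear identity can be written out directly. Every variable name and coefficient below is invented for illustration, not drawn from any real campaign data:

```python
# Hypothetical linear structural model of the anti-smoking campaign.
# C = campaign exposure, F = a friend's anti-smoking stance (the mediator),
# S = propensity to smoke. All coefficients are made up.

b_cf = 0.5    # campaign -> friend's stance
b_cs = -0.3   # campaign -> smoking (direct path)
b_fs = -0.4   # friend's stance -> smoking

direct_effect = b_cs               # effect of the ads on the viewer directly
indirect_effect = b_cf * b_fs      # effect passed along through the friend
total_effect = direct_effect + indirect_effect

print(direct_effect, indirect_effect, total_effect)  # -0.3 -0.2 -0.5
```

In a purely linear model the two path contributions simply add, which is why the total here is just the direct path plus the product of the coefficients along the mediated path.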
However, linear causal models don't capture all of the complexity that can exist within causal relationships. As Judea Pearl explains in The Book of Why, there can be complex causal models where the equation I started this post with doesn't hold. Pearl uses a drug that treats a disease as an example of a situation where the direct effect and indirect effect don't add up to the total effect. He says that in situations where a drug causes the body to release an enzyme that then combines with the drug to treat the disease, we have to think beyond the equation above. In this case, he writes, "the total effect is positive but the direct and indirect effects are zero."
The drug itself doesn’t do anything to combat the disease. It stimulates the release of an enzyme and without that enzyme the drug is ineffective against the disease. The enzyme also doesn’t have a direct effect on the disease. The enzyme is only useful when combined with the drug, so there is no indirect effect that can be measured as a result of the original drug being introduced. The effect is mediated between the interaction of both the drug and enzyme together. In the model Pearl shows us, there is only the mediating effect, not a direct or indirect effect.
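A minimal sketch makes this concrete. The function names and the 0/1 coding here are my own, not Pearl's, but the structure follows his story: the enzyme appears only when the drug is taken, and recovery requires both together:

```python
# Toy version of Pearl's drug-enzyme example (illustrative names and values).

def enzyme_level(drug):
    # the drug stimulates release of the enzyme; no drug, no enzyme
    return drug

def recovery(drug, enzyme):
    # the drug and the enzyme are only effective in combination
    return 1 if (drug == 1 and enzyme == 1) else 0

# Total effect: give the drug and let the enzyme respond naturally.
total = recovery(1, enzyme_level(1)) - recovery(0, enzyme_level(0))

# Direct effect: give the drug, but hold the enzyme at its no-drug level.
direct = recovery(1, enzyme_level(0)) - recovery(0, enzyme_level(0))

# Indirect effect: withhold the drug, but set the enzyme to its with-drug level.
indirect = recovery(0, enzyme_level(1)) - recovery(0, enzyme_level(0))

print(total, direct, indirect)  # 1 0 0
```

The total effect is positive while both the direct and indirect effects are zero, exactly the failure of the additive equation that Pearl describes.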
This model helps us see just how complicated ideas and conceptions of causation are. Most of the time we think about direct effects, and we don't always get around to considering indirect effects alongside them. Good scientific studies are able to capture both direct and indirect effects, but to truly understand causation today, we have to be able to include mediating effects in complex causal models like the one Pearl describes.
I have written a lot lately about the incredible human ability to imagine worlds that don’t exist. An important way that we understand the world is by imagining what would happen if we did something that we have not yet done or if we imagine what would have happened had we done something different in the past. We are able to use our experiences about the world and our intuition on causality to imagine a different state of affairs from what currently exists. Innovation, scientific advancements, and social cooperation all depend on our ability to imagine different worlds and intuit causal chains between our current world and the imagined reality we desire.
In The Book of Why, Judea Pearl writes, "counterfactuals are an essential part of how humans learn about the world and how our actions affect it. While we can never walk down both the paths that diverge in a wood, in a great many cases we can know, with some degree of confidence, what lies down each."
A criticism of modern science and statistics is their reliance on randomized controlled trials (RCTs), given that we cannot run an RCT on many of the things we study. We cannot run RCTs on our planet to determine the role of meteor impacts or lightning strikes in the emergence of life. We cannot run RCTs on the toxicity of snake venoms in human subjects. We cannot run RCTs on giving stimulus checks to Americans during the COVID-19 pandemic. Due to physical limitations and ethical considerations, RCTs are not always possible. Nevertheless, we can still study the world and use counterfactuals to think about the role of specific interventions.
If we forced ourselves to accept only knowledge based on RCTs, then we would not be able to study the areas I mentioned above. We cannot go down both paths in randomized experiments with those choices: we either ethically cannot administer an RCT, or we are stuck with the way history played out. We can, however, employ counterfactuals, imagining different worlds in our heads to think about what would have happened had we gone down another path. In this process we might make errors, but we can continually learn and improve our mental models. We can study what did happen, think about what we can observe based on causal structures, and better understand what would have happened had we done something different. This is how much of human progress has moved forward: without RCTs and with counterfactuals, imagining how the world could be different, and how people, places, societies, and molecules could have reacted under different actions and conditions.
The idea of ignorability helps us in science by playing a role in randomized trials. In the real world, there are too many potential variables to be able to comprehensively predict exactly how a given intervention will play out in every case. We almost always have outliers that have wildly different outcomes compared to what we would have predicted. Quite often some strange factor that could not be controlled or predicted caused the individual case to differ dramatically from the norm.
Thanks to the concept of ignorability, we don't have to spend too much time worrying about the causal structures that created a single outlier. In The Book of Why, Judea Pearl does his best to provide a definition of ignorability for those who need to assess whether it holds in a given situation. He writes, "the assignment of patients to either treatment or control is ignorable if patients who would have one potential outcome are just as likely to be in the treatment or control group as the patients who would have a different potential outcome."
What Pearl means is that ignorability applies when there is not a determining factor that makes people with any given outcome more likely to be in a control or treatment group. When people are randomized into control versus treatment, then there is not likely to be a commonality among people in either group that makes them more or less likely to have a given reaction. So a random outlier in one group can be expected to be offset by a random outlier in the other group (not literally a direct opposite, but we shouldn’t see a trend of specific outliers all in either treatment or control).
Ignorability does not apply in situations where there is a self-selection effect for control or treatment. In the world of the COVID-19 pandemic, this applies in situations like human challenge trials. It is unlikely that people who know they are at risk of bad reactions to a vaccine would self-select into a human challenge trial. The same sort of thing happens with corporate health-benefits initiatives, smartphone beta testers, and general inadvertent errors in scientific studies. Outliers may not be ignorable if there is a self-selection effect, and the outcomes we observe may reflect something other than what we are studying, meaning that we can't apply ignorability in a way that allows us to draw a conclusion specifically about our intervention.
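A small simulation can show why self-selection breaks the comparison. Everything here is invented for illustration: a treatment that truly helps everyone by one unit, and a "frail" subgroup whose outcomes are worse regardless of group:

```python
import random
from statistics import mean

random.seed(42)

# Toy model (all numbers invented): treatment adds 1 unit of benefit for
# everyone; "frail" individuals score 2 units worse in either group.
def outcome(treated, frail):
    return (1 if treated else 0) - (2 if frail else 0)

frailty = [random.random() < 0.3 for _ in range(100_000)]

# Randomized assignment: group membership is independent of frailty,
# so ignorability holds and the comparison recovers the true effect.
assigned = [random.random() < 0.5 for _ in frailty]
rand_estimate = (mean(outcome(True, f) for f, a in zip(frailty, assigned) if a)
                 - mean(outcome(False, f) for f, a in zip(frailty, assigned) if not a))

# Self-selection: frail people opt out of treatment entirely, so the
# control group is disproportionately frail and the comparison is biased.
selected = [(not f) and random.random() < 0.7 for f in frailty]
self_estimate = (mean(outcome(True, f) for f, s in zip(frailty, selected) if s)
                 - mean(outcome(False, f) for f, s in zip(frailty, selected) if not s))

print(round(rand_estimate, 2))  # close to the true effect of 1
print(round(self_estimate, 2))  # noticeably larger: selection inflates the estimate
```

Under randomization the estimated effect lands near the true value, while under self-selection the treatment looks far more effective than it is, because the groups differ in something other than the intervention.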
In Thinking, Fast and Slow, Daniel Kahneman describes how a Belgian psychologist changed the way we understand our thinking with regard to causality. The traditional view held that we make observations about the world and come to understand causality through repeated exposure to phenomenological events. As Kahneman writes, "[Albert] Michotte had a different idea: he argued that we see causality just as directly as we see color."
The argument from Michotte is that causality is an integral part of the human psyche. We think and understand the world through a causal lens. From the time we are infants, we interpret the world causally and we can see and understand causal links and connections in the things that happen around us. It is not through repeated experience and exposure that we learn to view an event as having a cause or as being the cause of another event. It is something we have within us from the beginning.
“We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation.”
I try to remember this idea of our intuitive and automatic causal understanding of the world when I think about science and how I should relate to it. We go through a lot of effort to make sure that we are as clear as possible in our scientific thinking. We use randomized controlled trials (RCTs) to test the accuracy of our hypotheses, but sometimes an intensely rigorous scientific study isn't necessary for us to change our behavior based on simple scientific exploration via normal causal thinking. There are times when we can trust our causal intuition without having to rely on an RCT for evidence. I don't know where to draw the line between causal inferences that we can accept and those that need an RCT, but through honest self-awareness and reflection, we should be able to identify times when our causal interpretations demonstrate validity and are reasonably well insulated from our own self-interests.
The Don’t Panic Geocast has discussed two academic journal articles on the effectiveness of parachutes for preventing death when falling from an aircraft during the Fun Paper Friday segment of two episodes. The two papers, both published in the British Medical Journal, are satirical, but they demonstrate an important point. We don’t need to conduct an RCT to determine whether using a parachute when jumping from a plane will be more effective at helping us survive the fall than jumping without one. It is an extreme example, but it demonstrates that our minds can see and understand causality without always needing an experiment to confirm a causal link. In a more consequential example, we can trust our brains when they observe that smoking cigarettes has negative health consequences, including an increased likelihood of developing lung cancer. An RCT to determine the exact nature and frequency of cancer development in smokers would certainly help build our scientific knowledge, but the scientific consensus around smoking and cancer should have been accepted much more readily than it was. An RCT in this case would take years and would potentially be unethical or impossible. Tobacco companies obfuscated the science by taking advantage of the fact that an RCT couldn’t be performed, and we failed to accept the causal link that our brains could see but could not prove as definitively as an RCT would allow. Nevertheless, we should have trusted our causal-thinking brains and accepted the intuitive answer.
We can’t always trust the causal conclusions that our mind reaches, but there are times when we should acknowledge that our brains think causally, and accept that the causal links we intuit are accurate.
I really enjoy science podcasts, science writing, and trying to think rationally and scientifically when I observe and consider the world. Within science, when we approach the world to better understand the connections that take place, we try to isolate the variables acting on our observations or experiments. We try to separate ourselves from the world so that we can make an objective and independent observation of reality, free from our own interference and influence. Nevertheless, it is important to remember that we are part of the world, and that we do have an influence on it. No matter how independent and rational we want to be, we are still part of the world and interact with it, even if we are just thinking and observing.
Daniel Kahneman demonstrates how our thoughts and observations can lead to unintended physical manifestations in the world in his book Thinking, Fast and Slow. He presents the reader with two words that normally don’t go together (I won’t reveal his experiment for the reader here). What he shows with this word-association experiment is that simple thoughts, just hearing or reading a word, can influence how we experience and behave in the physical world. Anyone who has started sweating during a poker game, and anyone who has shuddered just from reading the words “nails on a chalkboard,” knows that this is true. We are physical systems, and simple thoughts, memories, and words are enough to trigger physical responses in our bodies. While we like to think of ourselves as being independent and separate from the world, we never really are.
Kahneman explains this by writing, “As cognitive scientists have emphasized in recent years, cognition is embodied; you think with your body, not only with your brain.” Our brains take in electrical information from stimuli in the world. Chemicals bind to receptors in our noses or on our tongues, and nerves transmit electrical information to the brain to tell it what chemicals are present. Light interacts with receptors in our eyes, and nerves from our eyes again travel directly into our brains. Thinking is a direct result of physical sensory input, and while we can’t physically touch a thought, our body does react to the thinking and experiencing taking place.
No matter how much we want to believe that we can be objective and separated from the physical reality of the world around us, we cannot be 100% isolated. We experience the world physically, and while we can try to think of the world independently, our senses and experiences are directly connected to that physical world. Our responses in turn are also physical, even if we don’t perceive them. We have to accept, no matter how scientific and objective we want to be, that we are part of the system we are evaluating. There is no independent God’s-eye view; our cognition is embodied, and we are within the system we observe.