Challenges with the Scientific Process: Setting Priorities & Managing Conclusions

Science provides objective answers to questions about the world, but that doesn’t mean that science is an entirely objective enterprise. Science exists within a world dominated by human needs, biases, and prejudices, which means it can be swayed by the same political, discriminatory, and mistaken judgments and decisions that can overwhelm any other human activity. In his book Sapiens, Yuval Noah Harari shows how this happens in the selection of research topics, in the setting of scientific priorities, and when objective conclusions flow out into a world where they can be used by less than respectable actors.
 
 
Harari writes, “science is unable to set its own priorities. It is also incapable of determining what to do with its discoveries.” Part of the reason science cannot set its own priorities is that science is expensive. As we continue to make new discoveries, the subsequent steps require more time, energy, and resources. Discovering the next quantum particle will require an even more impressive supercollider. Discovering the next secret of the Amazon will require taking new technology farther upriver. The costs grow, and researchers need to be able to convince those with resources to commit them to their particular interests. This means that science doesn’t unfold uniformly or in equal ways. As Harari puts it, “to channel limited resources we must answer questions such as what is more important and what is good? And these are not scientific questions.”
 
 
But even when good science is done, and even when accurate, objective measurements are obtained and reasonable conclusions drawn from them, the impact of science can be unpredictable. Many scientific studies and results are obscure, with few people outside a select expert community ever hearing about them. But other conclusions can be taken out of their original context and become part of the cultural zeitgeist. How studies and their conclusions are understood can get away from the researchers, and they can be used to further specific political or economic goals, even if those goals have no real relationship to the original conclusion. Harari demonstrates how this happened when scientific conclusions were merged with racist ideas about the inferiority of non-white people. He writes, “racist theories enjoyed prominence and respectability for many generations, justifying the Western conquest of the world.” Whether researchers were explicitly racist or not, their research was adopted by people who were, and used to justify unsavory political ends. The science became wrapped up in a political culture that wanted to justify discriminatory and prejudiced behaviors and attitudes.
 
 
This doesn’t only happen with racist ideas, though those may be the most prominent and dangerous examples. Small scientific findings can be taken up by militaries, corporations, and media organizations, which may use the research in ways the authors could never have predicted. Research on technology that improves light detection could find its way into a guided missile, into mass surveillance systems, or onto grocery store shelves for use by advertisers. Science itself cannot control the way its results end up being used in the real world, and that can be problematic.
Positive Test Strategies

A real danger for us, one that I don’t know how to move beyond, is the positive test strategy: the search for evidence that confirms what we want to believe or what we already think is true. When we have an intuition about something, we look for examples that support that intuition. Looking for examples that don’t support our thought, or for situations where our idea seems to fall short, is uncomfortable, and not something we are very good at. Positive test strategies are a form of motivated reasoning, where we find ways to justify what we want to believe and to align our beliefs with what happens to be best for us.

 

In Thinking Fast and Slow, Daniel Kahneman writes, “A deliberate search for confirming evidence, known as positive test strategy, is also how System 2 tests a hypothesis. Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold.”

 

In science, the best way to conduct a study is to try to refute the null hypothesis rather than to seek support for the actual hypothesis. You take a condition in the world, make an informed guess about why you observe what you do, and then formulate a null hypothesis before you begin any testing. Your null hypothesis says: actually, nothing is happening here after all. So you might think that teenage drivers are more likely to get in car crashes at roundabouts than at regular intersections, or that crickets are more likely to eat a certain type of grass. Your null hypotheses are that teenagers do not crash at roundabouts more than at typical intersections, and that crickets don’t display a preference for one type of grass over another.
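To make that concrete, the roundabout example can be written in standard hypothesis-testing notation (this formulation is mine, added for illustration, not part of the original discussion):

```
H0 (null):        teenage crash rate at roundabouts = crash rate at typical intersections
H1 (alternative): teenage crash rate at roundabouts ≠ crash rate at typical intersections
```

The study then gathers evidence against H0; it does not start by presuming which way H1 points.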

 

In your experimental study, instead of seeking out confirmation that teenagers crash more at roundabouts or that crickets prefer a certain grass, you seek to show that there is a difference in where teenagers crash, or in which grass crickets prefer. In other words, you seek to disprove the null hypothesis (that there is no difference) rather than trying to prove that something specific is happening. It is a subtle difference, but it is important. It’s also important to note that good science doesn’t seek to disprove the null hypothesis in a specific direction. Good science tries to avoid positive test strategies by showing that the “nothing to see here” hypothesis is wrong and that there is something to see, but it could be in any direction. If scientists do want to provide evidence that the effect runs in a given direction, they look for stronger evidence, with less chance of random sampling error.
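As a minimal sketch of what a two-sided test looks like in practice, here is a small permutation test in Python on made-up crash-rate numbers (the data and variable names are purely illustrative, not from any real study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical crash rates (crashes per 1,000 teenage-driver trips).
# These numbers are invented for illustration only.
roundabout = np.array([3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 3.6, 2.7])
intersection = np.array([2.6, 3.0, 2.5, 2.9, 2.4, 2.8, 2.7, 2.6])

observed = roundabout.mean() - intersection.mean()

# Two-sided permutation test: under the null hypothesis the labels
# "roundabout" and "intersection" are interchangeable, so we shuffle
# the pooled data and count how often a difference at least as
# extreme as the observed one appears -- in EITHER direction.
pooled = np.concatenate([roundabout, intersection])
n = len(roundabout)
trials = 10_000
extreme = 0
for _ in range(trials):
    rng.shuffle(pooled)
    diff = pooled[:n].mean() - pooled[n:].mean()
    if abs(diff) >= abs(observed):  # two-sided: any direction counts
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.3f}")
print(f"two-sided p-value:   {p_value:.4f}")
```

The key line is the abs() comparison: the test treats a surplus of crashes at regular intersections as just as noteworthy as a surplus at roundabouts, which is exactly what keeps the procedure from sliding into a positive test strategy.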

 

In our minds, however, we don’t often do this. We start to see a pattern of behavior or outcomes, and we start searching for explanations for what we see. We come up with a hypothesis, think of more things that fit it, and find ways to explain how everything aligns with it. In My Big Fat Greek Wedding, this is what the character Gus does when he tries to show that all words in the world are originally Greek.

 

Normally, we identify something that would serve our personal interest or support our group identity in a way that helps raise our social status. From there, we begin to adopt hypotheses about how the world should operate that support what is in our personal interest. We then look for ways to test those hypotheses that would support them, and we avoid situations where they could be disproven. Finding things that support what we already want to believe is comforting and relatively easy compared to identifying a null hypothesis, testing it, and then examining the results without a predetermined outcome that we want to see.
Accepting Unsound Arguments

Motivated reasoning is a major problem for those of us who want to have beliefs that accurately reflect the world. To live is to have preferences about how the world operates and relates to our lives. We would prefer not to endure suffering and pain, and would rather have comfort, companionship, and prosperity. We would prefer the world to provide for us, and we would prefer to not be too heavily strained. From pure physical needs and preferences all the way through social and emotional needs and preferences, our experiences of the world are shaped by what we want and what we would like. This is why we cannot get away from our own opinions and individual preferences in life, and part of why motivated reasoning becomes the problem that it is.

 

In Thinking Fast and Slow, Daniel Kahneman writes about how motivated reasoning works in our minds, in terms of the arguments we make to support the conclusions we believe in, or would like to believe in. He writes, “When people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when these arguments are unsound.”

 

We justify conclusions we would like to believe with any argument that seems plausible and fits the conclusion we would like to believe. Our preference for one conclusion leads us to bend the arguments in favor of that conclusion. Rather than truly analyzing the arguments, we discount factors that don’t support what we want to believe, and we disregard arguments that come from people who are reaching an alternative conclusion. Our preferences take over, and the things we want become more important than reality. Motivated reasoning gives us a way to support what we want to believe by twisting the value we assign to different facts.

 

Even in our own minds, demonstrating that an argument in favor of our preferred conclusion is flawed is unlikely to make much of a difference. We will continue to hold on to the flawed argument, choosing to believe there is something true about it, even when we know it is flawed or that it contradicts other facts that would also have to be true for our preferred conclusion to hold.

 

This doesn’t make us humans look very good. We can’t reason our way to new beliefs, and we can’t rely on facts and data alone to change minds. In the end, if we want to change our own thoughts and behavior as well as those of others, we have to shape people’s preferences. Motivated reasoning can support conclusions that do not accurately reflect the world around us, so those of us who care about reality have to make believing and trusting science and expertise more salient before we can get people to accept arguments built on rational evidence. If we don’t think about how preference and motivated reasoning lead people to believe inaccurate claims, we will fail to address the preferences that support problematic policies, and we won’t be able to guide our world in a direction based on reason and sound conclusions.

Motivated Reasoning – Arguments to Continue Believing As We Already Do

Recently I have been thinking a lot about the way we think. To each of us, it feels as though our thought process is logical, our assumptions about the world are sound and built on good evidence, and, while we might have a few complex technical facts wrong, our judgments are not influenced by bias or prejudice. We feel that we take wide ranges of data into consideration when making decisions, and we do not feel as though our decisions and opinions are influenced by meaningless information and chance.

 

However, science tells us that our brains often make mistakes, and that many of those mistakes are systematic. We also know people in our own lives who display wonderful thinking errors, such as close-mindedness, gullibility, and arrogance. We should be more ready to accept that our thinking isn’t any different from that of the people in scientific studies demonstrating the brain’s capacity to veer off course, or from that of the person we complain about for being biased or unfair in their thinking about something or someone we care about.

 

What can make this process hard is the mind itself. Our brains are experts at creating logical narratives, including about themselves. We are great at explaining why we did what we did, why we believe what we believe, and why our reasoning is correct. Scientists call this motivated reasoning.

 

Dale Carnegie has a great explanation of it in his book How to Win Friends and Influence People, “We like to continue to believe what we have been accustomed to accept as true, and the resentment aroused when doubt is cast upon any of our assumptions leads us to seek every manner of excuse for clinging to it. The result is that most of our so-called reasoning consists in finding arguments for going on believing as we already do.” 

 

Very often, when confronted with new information that doesn’t align with what we already believe, doesn’t align with our self-interest, or challenges our identity in one way or another, we don’t update our thinking but instead explain away or ignore the new information. Even for very small things (Carnegie uses the pronunciation of Epictetus as an example) we may ignore convention and evidence and back our beliefs with outdated and out-of-context examples that seem to support us.

 

In my own life I try to remember this, and whether it is my rationalization of why it is OK that I went for a workout rather than doing dishes, or my self-talk about how great a new business idea is, or me rationalizing buying that sushi at the store when I was hungry while grocery shopping, I try to ask myself if my thoughts and decisions are influenced by motivated reasoning. This doesn’t always change my behavior, but it does help me recognize that I might be trying to fool myself. It helps me see that I am no better than anyone else when it comes to making up reasons to support all the things that I want. When I see this in other people, I am able to pull forward examples from my own life of me doing the same thing, and I can approach others with more generosity and hopefully find a more constructive way of addressing their behavior and thought process. At an individual level this won’t change the world, but on the margins we should try to reduce our motivated reasoning, as hard as it may be, and slowly encourage those around us to do the same.

Take a Close Look at What Feels Right

A topic I am fascinated by and plan to dig into in the future is motivated reasoning. We are great at finding all of the reasons and examples for why the things we do are overwhelmingly good and justified, while finding all the flaws in the people and things we dislike. Our brains seem to be wired to tell us that what benefits us is inherently good for the world, while things that harm us are inherently evil. As Kevin Simler and Robin Hanson write in The Elephant in the Brain, “What feels, to each of you, overwhelmingly right and undeniably true is often suspiciously self-serving, and if nothing else, it can be useful to take a step back and reflect on your brain’s willingness to distort things for your benefit.” This is the essence of motivated reasoning, and we often don’t even realize we are doing it.

 

We each have a particular view of the world that feels foolproof. We have our own experiences and knowledge, and the way we see the world comes out of those factors. It will always feel right to us because it is directly dependent on the inputs we observe, recognize, and cognitively arrange. But we should be able to recognize that the worldview we hold will always be an incomplete and imperfect model. We can’t have all of the experiences in the world, and we can’t know all of the information about the universe. There will always be a flaw in our opinions because we can’t have a perfect, all-encompassing perspective. There will always be gaps, and there will always be inaccuracies.

 

When we train ourselves to remember that we don’t have all the information and background experience necessary to fully understand the world, we can start to approach our own thoughts and opinions with more skepticism. It is easy to be skeptical of out-of-date baby boomer advice, and it is easy to discount the political views of someone in the other party, but it is much harder to discount something that feels overwhelmingly accurate to you but might be wrong or only marginally true, especially if you stand to benefit in one way or another.

 

At the end of the day we will likely have to make some type of decision based on our incomplete and inaccurate worldview. Even if we step back and observe what is going through our minds and where we might have blind spots, we may find that we reach the same conclusion. That is perfectly fine, as long as we understand where we may be wrong and work to improve our understanding in that area. Or we might acknowledge that we don’t know it all and be willing to accept some type of compromise that slightly diminishes our self-interest but still holds true to the underlying value at the heart of our decision. This is likely the only way our fractured societies can move forward. We must admit we can’t know it all, and we must be willing to admit that sometimes we act out of self-interest in favor of our own personal values rather than on the basis of immutable truths. From there we can start to find areas where it makes sense to give up a small piece and be willing to experiment with something new. A disposition toward this type of thinking can help us actually develop and make real progress toward a better world.

More on Hiding Our Motives

Deception is a big part of being a human being. If we try, we can all think of times when we have been deceived: someone led us to believe one thing, and then we found out that something different was really going on the whole time. If we are honest with ourselves, we can also see that we try to deceive others all the time. We make ourselves seem like one thing when, in many ways, we are not exactly what we present ourselves as being. Sometimes we truly are genuine, but often we are signaling a particular behavior or trait to a group so that we can be accepted, praised, and receive some sort of future benefit. To do this really well, we create stories and ideas about why we do the things we do, deceiving even ourselves in the process. As Kevin Simler and Robin Hanson write in their book The Elephant in the Brain, “We hide some of our motives…in order to mislead others.”

 

This is not a pretty picture of humans, and expressing this idea is an admission that we are sometimes not as great as we like to make everyone believe. It is not an idea that is popular or that everyone will be quick to accept, but I believe Simler and Hanson are right in saying that it is a huge driving influence on the world around us. I also don’t think that accepting this about ourselves leaves us in as sad, cynical, and dejected a place as one might think. Humans and our social groups are complicated, and sometimes being a little deceptive, doing things with ulterior motives, and behaving in ways that signal group alliance or value can be a net positive. We can recognize that we do these things, that we are deceptive, and that we deceive others by lying about our motives, and still make a good impact on the world. The altruist who donates money to the Against Malaria Foundation may tell himself and everyone he knows that he donates because he wants to save people’s lives when, truly, he just gets a warm glow within himself, and that is perfectly fine as long as the externality from his status-seeking behavior is overwhelmingly positive (looking in the mirror on this one).

 

If we don’t accept this reality about ourselves and others, we will spend a lot of time working on the wrong problems and a lot of time being confused as to why our mental models of the world don’t seem to work out. In my own life, recognizing status-seeking behavior, self-deception, and motivated thinking helps me be less judgmental toward other people. I recognize that I have the same capacity for these negative and deceptive behaviors, and I choose (as much as I can) to redirect them in directions that have the greatest positive social impact rather than the greatest personal benefit for me and my feelings. Ultimately, I encourage us to be honest about the fact that we are sometimes rather dishonest, and to build our awareness in a way that is easy on ourselves and others for behaving as humans naturally behave, but still nudges us toward creating positive externalities where possible from these ways of being.