Knowledge and Perception

We often think of biases like prejudice as mean-spirited vices that cause people to lie and become hypocritical. The reality, according to Quassim Cassam, is that biases like prejudice run much deeper within our minds. Biases can become epistemic vices, inhibiting our ability to acquire and develop knowledge. They are more than habits that make us behave in ways we profess to be wrong. Biases can literally shape the reality of the world we live in by altering the way we understand ourselves and the people around us.
“What one sees,” Cassam writes in Vices of the Mind, “is affected by one’s beliefs and background assumptions. It isn’t just a matter of taking in what is in front of one’s eyes, and this creates an opening for vices like prejudice to obstruct the acquisition of knowledge by perception.”
I am currently reading Steven Pinker's book Enlightenment Now, in which Pinker argues that humans strive toward rationality and that, at the end of the day, subjectivity is ultimately overruled by reason, rationality, and objectivity. I have long been a strong adherent of the Social Construction Framework, believing that our worlds are created and influenced, to a great degree, by individual differences in perception. Pinker challenges that assumption, but framing his challenge through the lens of Cassam's quote helps show how Pinker is ultimately correct.
Individual level biases shape our perception. Pinker describes a study where university students watching a sporting event literally see more fouls called against their team than the opponent, revealing the prejudicial vice that Cassam describes. Perception is altered by a prejudice against the team from the other school. Knowledge (in the study it is the accurate number of fouls for each team) is inhibited for the sports fans by their prejudice. The reality they live in is to some extent subjective and shaped by their prejudices and misperceptions.
But this doesn’t mean that knowledge about reality is inaccessible to humans at a larger scale. A neutral third party (or committee of officials) could watch the game and accurately identify the correct number of fouls for each side. The sports fans and other third parties may quibble about the exact final number, but with enough neutral observers we should be able to settle on a more accurate reality than if we left things to the biased sports fans. At the end of the day, rationality will win out through strength of numbers, and even the disgruntled sports fan will have to admit that the number of fouls they perceived was different from the more objective number of fouls agreed upon by the neutral third party members.
I think this is at the heart of the message from Cassam and the argument that I am currently reading from Pinker. My first reaction to Cassam’s quote is to say that our realities are shaped by biases and perceptions, and that we cannot trust our understanding of reality. However, objective reality (or something pretty close to it that enough non-biased people could reasonably describe) does seem to exist. As collective humans, we can reach objective understandings and agreements as people recognize and overcome biases and as the descriptions of the world presented by non-biased individuals prove to be more accurate over the long run. The key is to recognize that epistemic vices shape our perception at a deep level, that they are more than just hypocritical behaviors and that they literally shape the way we interpret reality. The more we try to overcome these vices of the mind, the more accurately we can describe the world, and the more our perception can then align with reality.
Pluralistic Ignorance

TV shows and movies frequently have scenes where one character has been putting up with something they dislike in order to please another character, only to find out that the other character also dislikes the thing. I can think of instances where characters have been drinking particular beverages they dislike, playing games they don't enjoy, or wearing clothing they hate, just because they think another character enjoys that particular thing and they want to share in that experience with the other person. It is a little corny, but I really enjoy the moment when the character recognizes they have been putting themselves through agony for the benefit of the other person, only to realize the other person has been in agony as well!

 

This particular comedic device plays on pluralistic ignorance. We don’t ever truly know what is in another person’s head, and even if we live with someone for most of our life, we can’t ever know them with complete certainty. When it comes to really knowing everyone around us and everyone in our community or society, we can only ever know most people at a minimal surface level. We follow cues from others that we want to be like, that we think are popular, and that we want to be accepted by. But when everyone is doing this, how can any of us be sure that we all actually want to be the way we present ourselves? We are all imagining what other people think, and trying to live up to those standards, not realizing that we may all hate the thing that we think everyone else considers cool.

 

The whole situation reminds me of AP US History from my junior year in high school. My friend Phil sat toward the back of the classroom, and the year he and I had the class was our teacher's very last year before retirement. He was on autopilot most of the year: a good teacher, but not exactly worried about whether his students paid attention in class or cheated on tests. For one test, Phil was copying off the girl next to him, only to realize halfway through class that she was copying off him! When Phil told the story later, we all had to ask where any of the answers were coming from if they were both cheating off each other's test.

 

Pluralistic ignorance feels like Phil and his AP US History test. However, pluralistic ignorance can be much more important than my little anecdote. Yesterday’s post was about collective conservatism, a form of groupthink where important decision-makers stick to tradition and familiar strategies and answers even as the world changes and demands new and innovative responses. Pluralistic ignorance can limit our responses to change, locking in tradition because we think that is what people want, even though people may be tired of old habits and patterns and ready for something new.

 

In Nudge, Cass Sunstein and Richard Thaler write, “An important problem here is pluralistic ignorance – that is, ignorance, on the part of all or most, about what other people think. We may follow a practice or tradition not because we like it, or even think it defensible, but merely because we think that most other people like it.”
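The gap Sunstein and Thaler describe, between what people privately think and what they publicly do, can be made concrete with a toy model. This is purely illustrative; all the numbers are invented:

```python
# Toy model of pluralistic ignorance (all numbers invented for illustration).
# Each person privately rates a tradition from 0 (hate it) to 10 (love it),
# but goes along with it publicly because they *believe* most others support it.
private_ratings = [3, 2, 4, 3, 2, 3, 8, 2, 3, 4]  # only one person really likes it

truly_supports = [r >= 5 for r in private_ratings]
believes_others_support = True  # everyone misreads the group the same way
publicly_conforms = [True] * len(private_ratings) if believes_others_support else truly_supports

print(f"Privately support it: {sum(truly_supports)}/10")    # 1/10
print(f"Publicly go along:    {sum(publicly_conforms)}/10")  # 10/10
```

The tradition looks unanimously popular from the outside, even though nearly everyone privately dislikes it, which is exactly the trap the characters in those TV scenes fall into.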

 

A real world example I can think of would be driving cars. Many people in the country absolutely love cars and see them as symbols of freedom, innovation, and American ingenuity. Thinking that people would be willing to give up their cars or change anything about them seems delusional, and public policy, advertising campaigns, and car designs reflect the idea that people want more, bigger, and faster cars. But is this actually true for most Americans?

 

Our cars emit toxic fumes, tens of thousands of people die annually in crashes, and the lights and sounds of cars can keep those who live along busy streets or next to car-enthused neighbors awake at night. People have to pay for auto insurance; vehicles break down frequently and require constant, costly maintenance; and in the US there is continual pressure to have a newer and nicer car to signal how well off one is. My sense is that people generally dislike cars, especially anything dealing with purchasing or repairing a car, but that they put up with them because they think other people like cars and value and respect their car choice. I believe that if there were enough reliable, fast, and convenient alternative transportation options, people would start to ditch cars. I think lots of people buy fancy, powerful, and loud cars because they think other people like them, not necessarily because they actually like the car themselves. If we could come together in an honest way, I think we could all scale back our cars, opting for smaller, quieter, less polluting vehicles or public transportation. There are certainly a lot of problems with public transportation, but I think our obsession and connection with cars is in part pluralistic ignorance as to how much other people actually like and value cars. We are trapped in a vehicular arms race, when we would really all rather not have to worry about cars in the first place.
The Remembering Self and Time - Joe Abittan

Time, as we have known it, has only been with human beings for a small slice of human history. The story of time zones is fascinating, and it really began once railroads connected the United States. Before we had a standardized system for operating within time, human lives were ruled by the cycle of the sun and the seasons, not by the hands of a watch. This is important because it suggests that the time bounds we put on our lives, the hours of our schedules and work days, and the way we think about the duration of meetings, movies, a good night's sleep, and flights are not something our species truly evolved to operate within.

 

In Thinking Fast and Slow, Daniel Kahneman shows one of the consequences of human history being out of sync with modern time. “The mind,” he writes, “is good with stories, but it does not appear to be well designed for the processing of time.”

 

I would argue that this makes sense and should be expected. Before we worked set schedules defined by the clock, before we could synchronize the start of a football game with TV broadcasts across the world, and before we all needed to be at the same place at precisely the right time to catch a departing train, time wasn't very important. It was easy to tie time to sunrise, sunset, or mid-day compared to a 3:15 departure or a 7:05 kick-off. The passage of time also didn't matter that much. The difference between being 64 and 65 years old wasn't a big deal for humans who didn't receive retirement benefits and social security payments. We did not evolve to live in a world where every minute of every day is tightly controlled by time and where the passage of time is tied so specifically to events in our lives.

 

For me, and I think for Daniel Kahneman, this may explain why we see some of the cognitive errors we make when we remember events from our past. Time wasn't as important a factor for ancient humans as storytelling was. Kahneman continues,

 

“The remembering self, as I have described it, also tells stories and makes choices, and neither the stories nor the choices properly represent time. In storytelling mode, an episode is represented by a few critical moments, especially the beginning, the peak, and the end. Duration is neglected.”

 

When we think back on our lives, on moments that meant a lot to us, on times we want to relive, or on experiences we want to avoid in the future, we remember the salient details. We don’t necessarily remember how long everything lasted. My high school basketball days are not remembered by the hours spent running UCLAs, by the number of Saturdays I had to be up early for 8 a.m. practices, or by the hours spent in drills. My memories are made up of a few standout plays, games, and memorable team moments. The same is true for my college undergrad memories, the half-marathons I have raced, and my memories from previous homes I have lived in.

 

When we think about our lives we are not good at thinking about the passage of time, about how long we spent working on something, how long we had to endure difficulties, or how long the best parts of our lives lasted. We live with snapshots that can represent entire years or decades. Our remembering self drops the less meaningful parts of experiences from our memories, and holds onto the start, the end, and the best or worst moments from an experience. It distorts our understanding of our own history, and creates memories devoid of a sense of time or duration.

 

I think about this a lot, because our minds and our memories are the things that drive how we behave and how we understand the present moment. However, duration neglect helps us see that the reality of our lives is shaped by unreality. We are influenced by cognitive errors and biases, by poor memories, and by distortions of time and experience. It is important to recognize how faulty our thinking can be, so we can develop systems, structures, and ways of thinking that don't assume we are always correct, but instead guide us toward better and more realistic ways of understanding the world.
Experiencing Versus Remembering

My last two posts have been about the difference in how we experience life and how we remember what happens in our life. This is an important idea in Daniel Kahneman's book Thinking Fast and Slow. Kahneman explains the ways in which our minds make predictable errors when thinking statistically, when trying to remember the past, and when making judgments about reality. Kahneman describes our mind as having two selves. He writes,

 

“The experiencing self is the one that answers the question: Does it hurt now? The remembering self is the one that answers the question: How was it on the whole? Memories are all we get to keep from our experience of living, and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self.”

 

In my post about the Peak-End Rule I highlighted findings from Kahneman that show that the remembering self isn’t very good at making accurate judgments about a whole experience. It more or less averages out the best (or worst) part of an experience with the ending of the experience. The ups and downs throughout, the actual average quality overall, isn’t that relevant to the way we think back on an experience.
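As a rough illustration of that averaging behavior (this is a simplification for intuition, not Kahneman's actual model, and the ratings are invented), the remembered quality of an experience can be sketched as the average of its most intense moment and its final moment, with duration playing no role:

```python
def remembered_rating(moment_ratings):
    """Crude peak-end sketch: average the most intense moment with the end.

    Ratings are moment-by-moment scores; higher = more pleasant.
    Note that duration (the number of moments) plays no role at all.
    """
    peak = max(moment_ratings, key=abs)  # most intense moment, good or bad
    end = moment_ratings[-1]
    return (peak + end) / 2

# A long, mostly mediocre vacation with a great final day...
long_trip = [2, 2, 1, 2, 2, 2, 9]
# ...versus a short, great trip that ended poorly.
short_trip = [8, 9, 1]

print(remembered_rating(long_trip))   # (9 + 9) / 2 = 9.0
print(remembered_rating(short_trip))  # (9 + 1) / 2 = 5.0
```

The long mediocre trip is "remembered" far more fondly than the short great one, even though moment-for-moment the short trip was the better experience.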

 

Duration Neglect also demonstrates how the remembering self misjudges our experiences. A long monotonous experience with a positive ending can be remembered much more fondly than a generally positive short experience with a bad ending.

 

When I think about the experiencing and remembering self, I try to remember that my remembering self is not able to perfectly recall the reality of my experiences. I try to remember that my experiencing self is only alive in the present moment, and when I am experiencing something great, I try hard to focus on that moment, rather than on something I want to remember (this is the difference between sitting and watching a beautiful sunset versus trying to capture the perfect picture of the sunset for social media). Keeping the distinction between the experiencing and remembering self in mind helps avoid the frustration, guilt, and pressure that the remembering self heaps on you when you don't feel as though you have done or accomplished enough. The remembering self is only one part of you, and its revisionist view of your history isn't real. There is real value in finding a balance between living for the experiencing self and living with the knowledge of what fuels the remembering self. Tilting too far in either direction can leave us feeling frustrated and overwhelmed, or unaccomplished; most of us want to be somewhere between the two extremes, giving up a little of one to prop up the other at different times in our lives.
Scared Before You Even Know It

In Thinking Fast and Slow, Daniel Kahneman demonstrates how quick our minds are and how fast they react to potential dangers and threats by showing us two very simple pictures of eyes. The pictures are black squares, with a little bit of white space that our brains immediately perceive as eyes, and beyond that immediate perception of eyes, our brains also immediately perceive an emotional response within the eyes. They are similar to the simple eyes I sketched out here:

In my sketch, the eyes on the left are aggressive and threatening, and our brains will pick up on the threat they pose and we will have physiological responses before we can consciously think through the fact that those eyes are just a few lines drawn on paper. The same thing happens with the eyes on the right, which our brains recognize as anxious or worried. Our body will have a quick fear reaction, and our brain will be on guard in case there is something we need to be anxious or worried about as well.

 

Regarding a study that was conducted where subjects in a brain scanner were shown a threatening picture for less than 2/100 of a second, Kahneman writes, “Images of the brain showed an intense response of the amygdala to a threatening picture that the viewer did not recognize. The information about the threat probably traveled via a superfast neural channel that feeds directly into a part of the brain that processes emotions, bypassing the visual cortex that supports the conscious experience of seeing.” The study was designed so that the subjects were not consciously aware of having seen an image of threatening eyes, but nevertheless their brain perceived it and their body reacted accordingly.

 

The takeaway from this kind of research is that our environments matter and that our brains respond to more than what we are consciously aware of. Subtle cues and factors around us can shape the way we behave and feel about where we are and what is happening. We might not know why we feel threatened, and we might not even realize that we feel threatened, but our heart rate may be elevated, we might tense up, and we might become short and defensive in certain situations. When we think back on why we behaved a certain way, why we felt the way we did, and why we had the reactions we did, our brains won't be able to recognize these subtle cues that never rose to the level of consciousness. We won't be able to explain why we felt threatened; all we will be able to recall is the physiological response we had to the situation. We are influenced by far more than our conscious brain is aware of, and we should remember that our conscious brain doesn't provide us with a perfect picture of reality, but our subconscious nevertheless reacts to more of the world than we notice.
Base Rates - Joe Abittan

When we think about individual outcomes we usually think about independent causal structures. A car accident happened because a person was switching their Spotify playlist and accidentally ran a red light. A person stole from a grocery store because they had poor moral character which came from a poor cultural upbringing. A build-up of electrical potential from the friction of two air masses rushing past each other caused a lightning strike.

 

When we think about larger systems and structures we usually think about more interconnected and somewhat random outcomes that we don't necessarily observe on a case-by-case basis, but instead think about in terms of likelihoods and conditions which create the possibilities for a set of events and outcomes. Increasing technological capacity in smartphones with lagging technological capacity in vehicles created a tension for drivers who wanted to stream music while operating vehicles, increasing the chances of a driver-error accident. A stronger US dollar made it more profitable for companies to employ workers in other countries, leading to a decline in manufacturing jobs in US cities and people stealing food as they lost their paychecks. Earth's tilt toward the sun led to a difference in the amount of solar energy that northern continental landmasses experienced, creating a temperature and atmospheric gradient which led to lightning-producing storms and increased chances of lightning in a given region.

 

What I am trying to demonstrate in the two paragraphs above is a tension between thinking statistically versus thinking causally. It is easy to think causally on a case by case basis, and harder to move up the ladder to think about statistical likelihoods and larger outcomes over entire complex systems. Daniel Kahneman presents these two types of thought in his book Thinking Fast and Slow writing:

 

“Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be.”

 

It is more satisfying for us to assign agency to a single individual than to consider that individual’s actions as being part of a large and complex system that will statistically produce a certain number of outcomes that we observe. We like easy causes, and dislike thinking about statistical likelihoods of different events.

 

“Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available.
Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.”

 

The base rates that Kahneman describes can be thought of as the category or class to which we assign something. We can use different forms of base rates to support different views and opinions. Shifting the base rate from a statistical base rate to a causal base rate may change the way we think about whether a person is deserving of punishment, or aid, or indifference. It may change how we structure society, design roads, and conduct cost-benefit analyses for changing programs or technologies. Looking at the world through a limited causal base rate will give us a certain set of outcomes that might not generalize toward the rest of the world, and might cause us to make erroneous judgments about the best ways to organize ourselves to achieve the outcomes we want for society.
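The underweighting of statistical base rates can be made concrete with a worked example. The numbers below are invented for illustration, not taken from Kahneman: suppose 1 in 1,000 drivers on a road is impaired, and a roadside test flags every impaired driver but also wrongly flags 5% of sober drivers. Intuition says a flagged driver is almost certainly impaired; Bayes' rule, which weights the base rate properly, says otherwise:

```python
def posterior(prior, true_positive_rate, false_positive_rate):
    """P(impaired | flagged) via Bayes' rule, given the base rate (prior)."""
    p_flagged = prior * true_positive_rate + (1 - prior) * false_positive_rate
    return prior * true_positive_rate / p_flagged

# Base rate: 1 in 1,000 drivers is impaired; the test catches all of them
# but also flags 5% of sober drivers.
p = posterior(prior=0.001, true_positive_rate=1.0, false_positive_rate=0.05)
print(f"{p:.1%}")  # 2.0% -- the low base rate dominates the vivid test result
```

A flagged driver is only about 2% likely to be impaired, because for every genuinely impaired driver the test flags, it also flags roughly fifty sober ones. Neglecting the statistical base rate is exactly what makes the intuitive answer of "almost certain" feel right.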
Fluency of Ideas

Our experiences and narratives are extremely important to consider when we make judgments about the world, however we rarely think deeply about the reasons why we hold the beliefs we do. We rarely pause to consider whether our opinions are biased, whether our limited set of experiences shape the narratives that play in our mind, and how this influences our entire outlook on life. Instead, we rely on the fluency of ideas to judge our thoughts and opinions as accurate.

 

In Thinking Fast and Slow, Daniel Kahneman writes about work on fluency from legal scholar Cass Sunstein and economist Timur Kuran: “the importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind.” It is easy to characterize an entire group of people as hardworking, or lazy, or greedy, or funny based entirely on a single interaction with a single person from that group. We don't pause to ask if our interaction with one person is really a good reflection of all people who fit the same group as that person; we instead allow the fluency of our past experiences to shape our opinions of all people in that group.

 

And our ideas, and the fluency with which those ideas come to mind, don't have to come from our own personal experience. If a claim is repeated often enough, we will have trouble distinguishing it from truth, even if it is absurd and has no connection to reality. The idea will come to mind more fluently, and consequently the idea will start to feel true. We don't have to have direct experience with something if a great marketing campaign has lodged an opinion or slogan in our minds that we can quickly recall.

 

If we are in an important decision-making role, it is important that we recognize this fluency bias. The fluency of ideas will drive us toward a set of conclusions that might not be in our best interests. A clever marketing campaign, a trite saying repeated by salient public leaders, or a few extreme yet random personal experiences can bias our judgment. We have to find a way to step back, recognize the narrative at hand, and find reliable data to help us make better decisions, otherwise we might end up judging ideas and making decisions based on faulty reasoning.
As an addendum to this post (originally written on 10/04/2020), this morning I began The Better Angels of Our Nature: Why Violence Has Declined, by Steven Pinker. Early in the introduction, Pinker states that violence in almost all forms is decreasing, despite the fact that for many of us it feels as though violence is as front and center in our world as ever before. Pinker argues that our subjective experience of out-of-control violence is in some ways due to the fluency bias that Kahneman describes from Sunstein and Kuran. Pinker writes,

 

“No matter how small the percentage of violent deaths may be, in absolute numbers there will always be enough of them to fill the evening news, so people’s impressions of violence will be disconnected from the actual proportions.” 

 

The fluency effect causes an observation to feel correct, even if it is not reflective of actual trends or rates in reality.
Thinking About Who Deserves Credit for Good Teamwork

Yesterday I wrote about the Availability Heuristic, the term that Daniel Kahneman uses in his book Thinking Fast and Slow to describe the ways in which our brains misjudge frequency, amount, and probability based on how easily an example of something comes to mind. In his book, Kahneman describes individuals being more likely to overestimate things like celebrity divorce rates if there was recently a high profile and contentious celebrity divorce in the news. The easier it is for us to make an association or to think of an example of a behavior or statistical outcome, the more likely we will overweight that thing in our mental models and expectations for the world.

 

Overestimating celebrity divorce rates isn’t a very big deal, but the availability heuristic can have a serious impact in our lives if we work as part of a team or if we are married and have a family. The availability heuristic can influence how we think about who deserves credit for good team work.

 

Whenever you are collaborating on a project, whether it is a college assignment, a proposal or set of training slides at work, or keeping the house clean on a regular basis, you are likely to overweight your own contributions relative to others'. You might be aware of someone who puts in a herculean effort and does well more than their own share, but if everyone is chugging along completing a roughly equivalent workload, you will see yourself as doing more than others. The reason is simple: you experience your own work firsthand. You only see everyone else's handiwork once they have finished it and everyone has come back together. You suffer from availability bias because it is easier for you to recall the time and effort you put into the group collaboration than it is for you to recognize and understand how much work and effort others pitched in. Kahneman describes the result in his book, “you will occasionally do more than your share, but it is useful to know that you are likely to have that feeling even when each member of the team feels the same way.” 

 

Even if everyone did an equal amount of work, everyone is likely to feel as though they contributed more than the others. As Kahneman writes, there is more than 100% of credit to go around when you consider how much each person thinks they contributed. In marriages, this is important to recognize and understand. Spouses often complain that one person is doing more than the other to keep the house running smoothly, but if they complain to their partner about the unfair division of household labor, they are likely to end up in an unproductive argument with each person upset that their partner doesn’t recognize how much they contribute and how hard they work. Both will end up feeling undervalued and attacked, which is certainly not where any couple wants to be.
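The "more than 100%" point is easy to see with made-up numbers. Ask each member of a hypothetical four-person team what share of the work they did (the names and estimates below are invented), and the self-reported shares reliably sum past 100%:

```python
# Hypothetical self-estimates from a four-person project team.
# Each person actually did ~25% of the work, but availability bias makes
# everyone's own effort loom largest in their memory.
claimed_share = {"Ana": 0.40, "Ben": 0.35, "Cam": 0.30, "Dee": 0.30}

total = sum(claimed_share.values())
print(f"Claimed credit adds up to {total:.0%}")  # 135% of the work, claimed
```

Since only 100% of the work exists, at least some of these honest self-assessments must be inflated, which is why arguing over who did more so often leaves everyone feeling undervalued.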

 

Managers must be aware of this and must find ways to encourage and celebrate the achievements of their team members while recognizing that each team member may feel that they are pulling more than their own weight. Letting everyone feel that they are doing more than their fair share is a good way to create unhelpful internal competition and factions within the workplace. No professional work team wants to end up like a college or high school project group, where one person pulls an all-nighter, overwriting everyone else's work, and where another seemingly disappears and emails everyone at the last minute to ask them not to rat them out to the teacher.

 

Individually, we should acknowledge that other people are not going to see and understand how much effort we feel that we put into the projects we work on. Ultimately, at an individual level we have to be happy with team success over our individual success. We don’t need to receive a gold star for every little thing that we do, and if we value helping others succeed as much as we value our own success, we will be able to overcome the availability heuristic in this instance, and become a more productive team member, whether it is in volunteer projects, in the workplace, or at home with our families.
What You See Is All There Is

In Thinking Fast and Slow, Daniel Kahneman gives us the somewhat unwieldy acronym WYSIATI – what you see is all there is. The acronym describes a phenomenon that stems from how our brains work. System 1, the name that Kahneman gives to the part of our brain which is automatic, quick, and associative, can only take in so much information. It makes quick inferences about the world around it, and establishes a simple picture of the world for System 2, the thoughtful calculating part of our brain, to work with.

 

What you see is all there is means that we are limited by the observations and information that System 1 can take in. It doesn’t matter how good System 2 is at processing and making deep insights about the world if System 1 is passing along poor information. Garbage in, garbage out, as the computer science majors like to say.

 

Daniel Kahneman explains what this means for our day to day lives in detail in his book. He writes, “As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little.”

 

System 2 doesn't recognize that System 1 hands it incomplete and limited information. It chugs along believing that the information handed off by System 1 is everything it needs to know. It doesn't ask for more information; it just accepts that it has been handed a complete data set and begins to work. System 2 creates a solid narrative out of whatever information System 1 gives it, and only momentarily pauses if it notices an inconsistency in the story it is stitching together about the world. If it can make a coherent narrative, then it is happy and sees no need to look for additional information. What you see is all there is: there isn't anything missing.

 

But we know that we only take in a limited slice of the world. We can't sense the Earth's magnetic pull, we can't see in ultraviolet or infrared, and we have no way of knowing what is really happening in another person's mind. When we read a long paper or finish a college course, we will remember some things, but not everything. Our mind can only hold so much information, and System 2 is limited to what can be observed and held. This should be a huge problem for our brain: we should recognize enormous blind spots and be paralyzed with inaction due to a lack of information. But this isn't what happens. We don't even notice the blind spots; instead we make a story from the information we collect, building a complete world that makes sense of the information, no matter how limited it is. What you see is all there is. We make the world work, but we do so with only a portion of what is really out there, and we don't even notice that we do so.
Detecting Simple Relationships

System 1, in Daniel Kahneman’s picture of the mind, is the part of our brain that is always on. It is the automatic part of our brain that detects simple relationships in the world, makes quick assumptions and associations, and reacts to the world before we are even consciously aware of anything. It is contrasted against System 2, which is more methodical, can hold complex and competing information, and can draw rational conclusions from detailed information through energy intensive thought processes.

 

According to Kahneman, we only engage System 2 when we really need to. Most of the time, System 1 does just fine and saves us a lot of energy. We don't have to think critically about what to do when the stoplight changes from green to yellow to red. Our System 1 can develop an automatic response so that we let off the gas and come to a stop without having to consciously think about every action involved in slowing down at an intersection. However, System 1 has some very serious limitations.

 

“System 1 detects simple relations (they are all alike, the son is much taller than the father) and excels at integrating information about one thing, but it does not deal with multiple distinct topics at once, nor is it adept at using purely statistical information.”

 

When relationships start to get complicated, like say the link between human activities and long term climate change, System 1 will let us down. It also fails us when we see someone who looks like they belong to the Hell’s Angels on a father-daughter date at an ice cream shop, when we see someone who looks like an NFL linebacker in a book club, or when we see a little old lady driving a big truck. System 1 makes assumptions about the world based on simple relationships, and is easily surprised. It can’t calculate unique and edge cases, and it can’t hold complicated statistical information about multiple actors and factors that influence the outcome of events.

 

System 1 is our default, and we need to remember where its strengths and where its weaknesses are. It can help us make quick decisions while driving or catching an apple falling off a counter, but it can’t help us determine whether a defendant in a criminal case is guilty. There are times when our intuitive assumptions and reactions are spot on, but there are a lot of times when they can lead us astray, especially in cases that are not simple relationships and violate our expectations.