
Valid Stereotypes?

Arguments about stereotypes are common in the world today. In the United States we have worked very hard to push back against stereotypes by bringing them into view so that we can address them directly and dispel incorrect and harmful prejudices. In the circles I am usually a part of, eliminating stereotypes is universally applauded, and people who reveal an inner stereotype, even a harmless one, are often castigated for applying a single characteristic or trait to an entire group and for failing to recognize the diversity and randomness within that group.

 

What I almost never hear, at least among the circles I am a part of, is that stereotypes can have validity and help improve some level of judgment. However, Daniel Kahneman in Thinking Fast and Slow suggests that maybe we should acknowledge some valid and helpful stereotypes. He writes,

 

“The social norm against stereotyping, including the opposition to profiling, has been highly beneficial in creating a more civilized and more equal society. It is useful to remember, however, that neglecting valid stereotypes inevitably results in suboptimal judgments. Resistance to stereotyping is a laudable moral position, but the simplistic idea that the resistance is costless is wrong. The costs are worth paying to achieve a better society, but denying that the costs exist, while satisfying to the soul and politically correct, is not scientifically defensible.”

 

I have a couple of thoughts in response to the quote from Kahneman. The first is about the way rejecting stereotypes, even those that help with judgment, makes society more cohesive, and the second is about how we can use valid stereotypes to actually make the world more inclusive.

 

First, Kahneman states that society has become more equal and more civilized through stereotype rejection. The benefit of rejecting stereotypes comes from rejecting invalid stereotypes – prejudices that cast out other people and groups as inferior and inadequate. When we throw out stereotypes, we eliminate a lot of barriers built from prejudice, even if it makes some roles and interactions with people who are not like us a little more challenging. The cost of abandoning stereotypes, as Kahneman notes, is a little more friction in some of our interactions with others, but through deliberate effort this friction can be reduced.

 

The second note is that embracing some valid stereotypes can help us build a better world. My initial thought in this regard is the bright colored sandpaper strips at the edges of stairs. Many public buildings add a strip of sandpaper-like material, often bright yellow or a contrasting color, to the edge of stairs in public walkways. We might stereotype senior citizens or people with vision disorders and assume they need extra help walking up stairs, and we might be correct in these stereotypes. Stereotypes can be valid if they accurately reflect the reality of the people we are making assumptions or pre-judgments about and enable us to build a better world. The end result, if we embrace the stereotype instead of dismissing or ignoring it, is that we build staircases that are safer and actually better for everyone. Able-bodied young people also benefit from stairs that are responsive to stereotypical concerns about the elderly. Perhaps this isn’t what Kahneman is referring to in his thoughts on valid stereotypes, and perhaps this is just good design of the built world, but I think it can be considered a way of using stereotypes in a positive direction.

 

In most instances, our stereotypes have been negative factors that cast out people who are not like us and create more social animosity. Certainly these stereotypes should be discarded; however, Kahneman would argue that some stereotypes can be valid, and we can use them to construct more inclusive and overall better worlds for ourselves and others. There is a cost to ignoring all stereotypes, even if ignoring the vast majority of them actually is helpful for our societies.
Base Rates (Joe Abittan)


When we think about individual outcomes we usually think about independent causal structures. A car accident happened because a person was switching their Spotify playlist and accidentally ran a red light. A person stole from a grocery store because they had poor moral character which came from a poor cultural upbringing. A build-up of electrical potential from the friction of two air masses rushing past each other caused a lightning strike.

 

When we think about larger systems and structures we usually think about more interconnected and somewhat random outcomes that we don’t necessarily observe on a case by case basis, but instead think about in terms of likelihoods and conditions which create the possibilities for a set of events and outcomes. Increasing technological capacity in smartphones with lagging technological capacity in vehicles created a tension for drivers who wanted to stream music while operating vehicles, increasing the chances of a driver error accident. A stronger US dollar made it more profitable for companies to employ workers in other countries, leading to a decline in manufacturing jobs in US cities and people stealing food as they lost their paychecks. Earth’s tilt toward the sun led to a difference in the amount of solar energy that northern continental landmasses experienced, creating a temperature and atmospheric gradient which led to lightning-producing storms and increased chances of lightning in a given region.

 

What I am trying to demonstrate in the two paragraphs above is a tension between thinking statistically versus thinking causally. It is easy to think causally on a case by case basis, and harder to move up the ladder to think about statistical likelihoods and larger outcomes over entire complex systems. Daniel Kahneman presents these two types of thought in his book Thinking Fast and Slow writing:

 

“Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case. Causal base rates change your view of how the individual case came to be.”

 

It is more satisfying for us to assign agency to a single individual than to consider that individual’s actions as being part of a large and complex system that will statistically produce a certain number of outcomes that we observe. We like easy causes, and dislike thinking about statistical likelihoods of different events.

 

“Statistical base rates are generally underweighted, and sometimes neglected altogether, when specific information about the case at hand is available.
“Causal base rates are treated as information about the individual case and are easily combined with other case-specific information.”

 

The base rates that Kahneman describes can be thought of as the category or class to which we assign something. We can use different forms of base rates to support different views and opinions. Shifting the base rate from a statistical base rate to a causal base rate may change the way we think about whether a person is deserving of punishment, or aid, or indifference. It may change how we structure society, design roads, and conduct cost-benefit analyses for changing programs or technologies. Looking at the world through a limited causal base rate will give us a certain set of outcomes that might not generalize toward the rest of the world, and might cause us to make erroneous judgments about the best ways to organize ourselves to achieve the outcomes we want for society.
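The tension between statistical and causal base rates shows up in the cab problem Kahneman analyzes in the book: a witness says the cab in a hit-and-run was Blue, but only 15% of the city's cabs are Blue. A minimal sketch of Bayes' rule (the code itself is my illustration, not something from the book) shows how heavily the statistical base rate should weigh on the judgment:

```python
def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(hypothesis | evidence) by Bayes' rule."""
    p_evidence = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_evidence

# 15% of the city's cabs are Blue (the statistical base rate), and a witness
# who says "the cab was Blue" identifies colors correctly 80% of the time.
p_blue = posterior(base_rate=0.15, hit_rate=0.80, false_alarm_rate=0.20)
print(f"{p_blue:.0%}")  # 41%
```

Intuition follows the witness and lands near 80%; the base rate drags the correct answer down to about 41%. When the same 15% is presented causally (Blue cab drivers cause more accidents), people suddenly treat it as information about the individual case.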

Availability Cascades

This morning, while reading Sapiens by Yuval Noah Harari, I came across an idea that was new to me. Harari writes, “Chaotic systems come in two shapes. Level one chaos is chaos that does not react to predictions about it. … Level two chaos is chaos that reacts to predictions about it.”  The idea is that chaotic systems, like societies and cultures, are distinct from chaotic systems like the weather. We can model the weather, and it won’t change based on what we forecast. When we model elections, on the other hand, there is a chance that people, and ultimately the outcome of the election, will be influenced by the predictions we make.  The chaos is responsive to the way we think about that chaos. A hurricane doesn’t care where we think it is going to make landfall, but voters in a state may care quite a bit and potentially change their behavior if they think their state could change the outcome of an election.

 

This ties in with the note from Daniel Kahneman’s book Thinking Fast and Slow which I had selected to write about today. Kahneman writes about availability cascades in his book, and they are a piece of the feedback mechanism described by Harari in level two chaos systems. Kahneman writes:

 

“An availability cascade is a self-sustaining chain of events, which may start from media reports of a relatively minor event and lead up to public panic and large-scale government action. On some occasions, a media story about a risk catches the attention of a segment of the public, which becomes aroused and worried.”

 

We can think about any action or event that people and governments might take as requiring a certain action potential in order to take place. A certain amount of energy, interest, and attention is required for social action to take place. The action potential can be small, such as a red light being enough of an impetus to cause multiple people to stop their cars at an intersection, or monumental, such as a major health crisis being necessary to spur emergency financial actions from the Federal Government. Availability cascades create a set of triggers which can enhance the energy, interest, and attention provided to certain events and bolster the likelihood of a public response.

 

2020 has been a series of extreme availability cascades. With a global pandemic, more people are watching the news more closely than before. This allows for the increased salience of incidents of police brutality, and increases the energy in the public response to such incidents. As a result, more attention has been paid to racial injustice, and large companies have begun to respond in new ways to issues of race and equality, again heightening the energy and interest of the public in demanding action regarding both racial justice and police policy. There are other ways that events could have played out, but availability cascades created feedback mechanisms within a level two chaotic system, opening certain avenues for public and societal action.

 

It is easy to look back and make assessments on what happened, but in the chaos of the moment it is hard to understand what is going on. Availability cascades help describe what we see, and help us think about what might be possible in the future.

More on Affect Heuristics

For me, one of the easiest examples of heuristics that Daniel Kahneman shares in his book Thinking Fast and Slow is the affect heuristic. It is a bias that I know I fall into all the time, and that has led me to buy particular brands of shoes, has influenced how I think about certain foods, and has shaped the way I think about people. In his book Kahneman writes, “The affect heuristic is an instance of substitution, in which the answer to an easy question (How do I feel about it?) serves as an answer to a much harder question (What do I think about it?).”

 

The world is a complex and tricky place, and we can only focus a lot of attention in one direction at a time. For a lot of us, that means we are focused on getting kids ready for school, cooking dinner, or trying to keep the house clean. Trying to fully understand the benefits and drawbacks of a social media platform, a new traffic pattern, or how to invest in retirement may seem important, but it can be hard to find the time and mental energy to focus on a complex topic and organize our thoughts in a logical and coherent manner. Nevertheless, we are likely to be presented with situations where we have to make decisions about what level of social media is appropriate for our children, offer comments on new traffic patterns around the water cooler, or finally get around to setting up our retirement plan and deciding what to do with that old 401K from that job we left.

 

Without adequate time, energy, and attention to think through these difficult decisions, we have to make choices and are asked to have opinions on topics we are not very informed about. “The affect heuristic,” Kahneman writes, “simplifies our lives by creating a world that is much tidier than reality. Good technologies have few costs in the imaginary world we inhabit, bad technologies have no benefits, and all decisions are easy.” We substitute the hard question that requires detailed thought for a simple question: do I like social media? Did I feel that the new traffic pattern made my commute slower? Do I like the way my retirement savings advisor presented a new investment strategy? In each case, we rely on affect, our emotional reaction to something, and make decisions in line with our gut feelings. Of course my kid can use social media, I’m on it, I like it, and I want to see what they are posting. Ugh, that new traffic pattern is awful, what were they thinking putting that utility box where it blocks the view of the intersection. Obviously this is the best investment strategy for me, my advisor was able to explain it well and I liked it when they told me I was making a smart decision.

 

We don’t notice when we default to the affect heuristic. It is hard to recognize that we have shifted away from making detailed calculations to rely solely on intuitions about how something makes us feel. Rather than admitting that we buy Nike shoes because our favorite basketball player wears them, and we want to be like LeBron, we create a story in our head about the quality of the shoes, the innovative design, and the complementary colors. We fall back on a quick set of factors that gives the impression of a thoughtful decision. In a lot of situations, we probably can’t do much better than the affect heuristic, but it is worth considering whether our decisions are really being driven by affect. We might be able to avoid buying things just out of brand loyalty, and we might be a little calmer and more reasonable in debates and arguments with friends and family when we realize we are acting on affect and not on reason.

Fluency of Ideas

Our experiences and narratives are extremely important to consider when we make judgments about the world, however we rarely think deeply about the reasons why we hold the beliefs we do. We rarely pause to consider whether our opinions are biased, whether our limited set of experiences shape the narratives that play in our mind, and how this influences our entire outlook on life. Instead, we rely on the fluency of ideas to judge our thoughts and opinions as accurate.

 

In Thinking Fast and Slow Daniel Kahneman writes about ideas from the jurist Cass Sunstein and the economist Timur Kuran, explaining their views on fluency: “the importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind.” It is easy to characterize an entire group of people as hardworking, or lazy, or greedy, or funny based entirely on a single interaction with a single person from that group. We don’t pause to ask if our interaction with one person is really a good reflection of all people who fit the same group as that person; we instead allow the fluency of our past experiences to shape our opinions of all people in that group.

 

And our ideas and the fluency with which those ideas come to mind don’t have to come from our own personal experience. If a claim is repeated often enough, we will have trouble distinguishing it from truth, even if it is absurd and doesn’t have any connection to reality. The idea will come to mind more fluently, and consequently the idea will start to feel true. We don’t have to have direct experience with something if a great marketing campaign has lodged an opinion or slogan in our minds that we can quickly recall.

 

If we are in an important decision-making role, it is important that we recognize this fluency bias. The fluency of ideas will drive us toward a set of conclusions that might not be in our best interests. A clever marketing campaign, a trite saying repeated by salient public leaders, or a few extreme yet random personal experiences can bias our judgment. We have to find a way to step back, recognize the narrative at hand, and find reliable data to help us make better decisions, otherwise we might end up judging ideas and making decisions based on faulty reasoning.
As an addendum to this post (originally written on 10/04/2020), this morning I began The Better Angels of Our Nature: Why Violence Has Declined, by Steven Pinker. Early in the introduction, Pinker states that violence in almost all forms is decreasing, despite the fact that for many of us, it feels as though violence is as front and center in our world as ever before. Pinker argues that our subjective experience of out of control violence is in some ways due to the fluency bias that Kahneman describes from Sunstein and Kuran. Pinker writes,

 

“No matter how small the percentage of violent deaths may be, in absolute numbers there will always be enough of them to fill the evening news, so people’s impressions of violence will be disconnected from the actual proportions.” 

 

The fluency effect causes an observation to feel correct, even if it is not reflective of actual trends or rates in reality.

Why We Talk About Human Nature

I entered a Master’s in Public Administration program at the University of Nevada in 2016. I started the same semester as the 2016 election of President Donald Trump. I was drawn toward public policy because I love science, because I have always wanted to better understand how people come to hold political beliefs, and because I thought that bringing my rational, science-based mind to public policy would open doors and avenues for me that were desperately needed in the world of public administration and policy. What I learned, and what we have all learned since President Trump took office, is that politics is not about policy, public administration is not about the high minded ideals we say it is about, and rationality is not and cannot be at the heart of public policy. Instead, politics is about identity, and public administration is about systems and structures that benefit those we decide are deserving and punish those we decide are deviant. Public policy isn’t rational; it’s about self-interest and individual and group preferences. And this connects to the title of this post. We talk about human nature because how we define, understand, and perceive human nature can help us rationalize why our self-interest is valuable in public policy, why one group should be favored over another, and why one system that rewards some people is preferable over another system that rewards other people.

 

In Daniel Kahneman’s book Thinking Fast and Slow, he writes, “policy is ultimately about people, what they want and what is best for them. Every policy question involves assumptions about human nature, in particular about the choices that people may make and the consequences of their choices for themselves and society.” The reason why we talk about human nature is that it serves as the foundation upon which all of our social systems and structures are built. All of our decisions are based in fundamental assumptions about what we want, what we are inherently inclined to do, and how we will behave as individuals and as part of a collective. However, this discussion is complicated because what we consider to be human nature is subject to bias, to misunderstandings, and to motivated reasoning. Politics and public policy are not rational because we all live with narrow understandings of what we want human nature to mean.

 

Personally, I think our conceptions and ideas of human nature are generally too narrow and limiting. I am currently reading Yuval Noah Harari’s book Sapiens, and he makes a substantial effort to show the diversity and seeming randomness in the stories that humans have created over tens of thousands of years, and how humans have lived in incredibly different circumstances, with different beliefs, different cultures, and different lifestyles throughout time. It is a picture of human nature which doesn’t quite make the jump to arguing that there is no human nature, but argues that human nature is a far broader topic than what we typically focus on. I think Harari is correct, but someone who wants questions of religion to be central to human nature, someone who wants capitalistic competition to be central to human nature, or someone who wants altruism to be a deep facet of human nature might disagree with Harari.

 

Ultimately, we argue over human nature because how we define human nature can influence who is a winner and who is a loser in our society. It can shape who we see as deserving and who we see as deviant. The way we frame human nature can structure the political systems we adopt, the leaders we favor, and the economic systems that will run most of our lives. The discussions about human nature appear to be scientific, but they are often biased and flawed, and in the end what we really care about is our personal self-interest, and in seeing our group advance, even at the expense of others. Politics is not rational, we have all learned in nearly four years of a Donald Trump Presidency, because we have different views of what the people want and what is best for them, and flawed understandings of human nature influence those views and the downstream political decisions that we make.

How We Choose to Measure Risk

Risk is a tricky thing to think about, and how we choose to measure and communicate risk can make it even more challenging to comprehend. Our brains like to categorize things, and categorization is easiest when the categories are binary or represent three or fewer distinct possibilities. Once you start adding options and different possible outcomes, decisions quickly become overwhelmingly complex, and our minds have trouble sorting through the possibilities. In his book Thinking Fast and Slow, Daniel Kahneman discusses the challenges of thinking about risk, and highlights another level of complexity: what measurements we are going to use to communicate and judge risk.

 

Humans are pretty good at estimating coin flips – that is to say that our brains do OK with binary 50-50 outcomes (although as Kahneman shows in his book, this can still trip us up from time to time). Once we have to start thinking about complex statistics, like how many people will die from cancer caused by smoking if they smoke X number of packs of cigarettes per month for X number of years, our brains start to have trouble keeping up. However, there is an additional decision that needs to be layered on top of statistics such as cigarette-related death statistics before we can begin to understand them. That decision is how we are going to report the death statistics. Will we choose to report deaths per thousand smokers? Will we choose to report deaths by the number of packs smoked over a number of years? Will we just choose to report deaths among all smokers, regardless of whether they smoked one pack per month or one pack before lunch every day?

 

Kahneman writes, “the evaluation of the risk depends on the choice of a measure – with the obvious possibility that the choice may have been guided by a preference for one outcome or another.”

 

Political decisions cannot be escaped, even when we are trying to make objective and scientific statements about risk. If we want to convey that something is dangerous, we might choose to report overall death numbers across the country. Those death numbers might sound like a large number, even though they may represent a very small fraction of incidents. In our lives today, this may be done with COVID-19 deaths, voter fraud instances, or wildfire burn acreage. Our brains will have a hard time comprehending risk in each of these areas, and adding the complexity of how that risk is calculated, measured, and reported can make it virtually impossible for any of us to comprehend risk. Clear and accurate risk reporting is vital for helping us understand important risks in our lives and in society, but the entire process can be derailed if we choose measures that don’t accurately reflect risk or that muddy the waters of exactly what the risk is.
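As a toy illustration of how the choice of measure steers perception (the figures below are invented, not real mortality data), the same underlying deaths can be made to sound alarming or negligible depending on the denominator we choose to report:

```python
# Hypothetical figures, for illustration only.
population = 50_000_000
deaths = 1_286

# Three reports of the exact same underlying risk:
print(f"{deaths:,} deaths nationwide")                       # sounds large
print(f"{deaths / population:.5%} of the population")        # sounds tiny
print(f"{deaths / population * 100_000:.1f} deaths per 100,000 people")
```

Nothing about the risk changes between the three lines; only the measure does, and with it the emotional weight of the number.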

The Emotional Replica of Reality Within Our Brains

It feels weird to acknowledge that the model of reality within our brains is nothing more than a model. It is a construction of what constitutes reality based on our experiences and on the electrical stimuli that reach our brain from various sensory organs, tissues, and nerve endings. The brain doesn’t have a model for things that it doesn’t have a way of experiencing or imagining. Take the experience of falling into a black hole: representations of what the experience is like will never fully substitute for the real thing, and the experience itself will forever be unknowable to our brains. Consequently, the model of reality that our brain uses for everyday operations can only include the limited slice of reality that is available to our experiences.

 

What results is a distorted picture of the world. This was not too much of a problem for our ancestors living as hunter-gatherers in small tribes. It didn’t matter if they fully understood the precise risk of tiger attacks or poisonous fungi, as long as they had heuristics to keep them away from dangerous situations and questionable foods. They didn’t need to hear at the frequency of a bat’s echolocation pulses, they didn’t need to see ultraviolet light, and they didn’t need to sense the earth’s magnetic field. Precision and completeness weren’t as important as a general sense of the world for pattern recognition and enough fear and memory to stay safe and find reliable food.

 

Today, however, we operate in complex social structures and the narratives we tell about ourselves, our societies, and how we should interact can have lasting influences on our own lives and the lives of generations to come.  How we understand the world is often shaped by our emotional reaction to the world, rather than being shaped by a complete set of scientific and reality based details and information. As Daniel Kahneman writes in Thinking Fast and Slow, “The world in our heads is not a precise replica of reality; our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.”

 

Kahneman writes about news reporting of strange and extreme phenomena, and how that leads us to believe that very rare events like tornado deaths are more likely than mundane and common causes of death such as those resulting from asthma complications. Things that are dramatic and unique feel more noteworthy, and are likely to be easier for us to remember and recall. When that happens, the events feel less like strange outliers, and more like normal events. The picture of reality operating in our mind is altered and distorted based on our experiences, the information we absorb, and our emotional reactions to both.

 

For a social species, this can have dramatic consequences. If we generalize a character trait of one person to an entire group, we can develop dangerous stereotypes that influence our interactions with hundreds or thousands of people. A single salient event can shape how we think about problems or opportunities in our communities and societies. Rather than fully understanding our reaction and the event itself, we are going to struggle through narratives that seek to combine thousands of individual perceptions of reality, each influenced in unique ways by conflicting emotions and opinions of what has happened. Systems and structures matter, especially when our brains operate on inadequate versions of reality rather than concrete versions of reality and can be shaped by our emotional reactions to such systems and structures.

Fluency Versus Frequency

When it comes to the availability heuristic, fluency seems to be the most important factor. The ease with which an example of something comes to mind matters more than the real world frequency of the event. Salient examples of people being pulled over by the police, of celebrity divorces, or of wildfires cause our brains to consider these types of events to be more common and likely than they really are.

 

In Thinking Fast and Slow, Daniel Kahneman shares results from a study by the German psychologist Norbert Schwarz which demonstrates fluency versus frequency in our analysis of the world. Schwarz asked participants to list six instances in which they behaved assertively, and to then rate their overall level of assertiveness. In a second condition, Schwarz asked participants to list twelve instances where they were assertive and to then rate their overall level of assertiveness. What the study showed is that those who were asked to come up with six instances of assertiveness considered themselves to be more assertive than those asked to come up with twelve instances. Kahneman describes the results by writing, “Self-ratings were dominated by the ease with which examples had come to mind. The experience of fluent retrieval of instances trumped the number retrieved.”

 

The logical expectation would be that asking people to list 12 instances of assertiveness would give people more reason to believe they were a more assertive person. However, that is not what the study showed. Instead, what Kahneman explains happened is that as you are asked to pull more examples from memory, your brain has a harder time remembering times when you were assertive. You easily remember a few stand-out assertive moments, but eventually you start to run out of examples. As you struggle to think of assertive times in your life, you start to underrate your assertiveness. On the other hand, if you only have to think of a handful of assertive moments, and your brain pulls those moments from memory easily, then the experience of easily identifying moments of assertiveness gives you more confidence with rating yourself as assertive.

 

What I find fascinating with the study Kahneman presents is that the brain doesn’t rely on facts or statistics to make judgments and assessments about the world. It is not setting a bar before analysis at which it can say, more examples of this and I am assertive, or fewer examples and I am not assertive. It is operating on feeling and intuition, fluidly moving through the world making judgments by heuristics. The brain is not an objective observer of the world, and its opinions, perspectives, and conclusions are biased by the way it operates. The study suggests that we cannot trust our simple judgments, even when they are about something as personal as our own level of assertiveness.

Thinking About Who Deserves Credit for Good Teamwork

Yesterday I wrote about the Availability Heuristic, the term that Daniel Kahneman uses in his book Thinking Fast and Slow to describe the ways in which our brains misjudge frequency, amount, and probability based on how easily an example of something comes to mind. In his book, Kahneman describes individuals being more likely to overestimate things like celebrity divorce rates if there was recently a high profile and contentious celebrity divorce in the news. The easier it is for us to make an association or to think of an example of a behavior or statistical outcome, the more likely we will overweight that thing in our mental models and expectations for the world.

 

Overestimating celebrity divorce rates isn’t a very big deal, but the availability heuristic can have a serious impact in our lives if we work as part of a team or if we are married and have a family. The availability heuristic can influence how we think about who deserves credit for good teamwork.

 

Whenever you are collaborating on a project, whether it is a college assignment, a proposal or set of training slides at work, or keeping the house clean on a regular basis, you are likely to overweight your own contributions relative to others. You might be aware of someone who puts in a herculean effort and does well more than their own share, but if everyone is chugging along completing a roughly equivalent workload, you will see yourself as doing more than others. The reason is simple: you experience your own work firsthand. You only see everyone else’s handiwork once they have finished it and everyone has come back together. You suffer from availability bias because it is easier for you to recall the time and effort you put into the group collaboration than it is for you to recognize and understand how much work and effort others pitched in. Kahneman describes the result in his book: “you will occasionally do more than your share, but it is useful to know that you are likely to have that feeling even when each member of the team feels the same way.”

 

Even if everyone did an equal amount of work, everyone is likely to feel as though they contributed more than the others. As Kahneman writes, there is more than 100% of credit to go around when you consider how much each person thinks they contributed. In marriages, this is important to recognize and understand. Spouses often complain that one person is doing more than the other to keep the house running smoothly, but if they complain to their partner about the unfair division of household labor, they are likely to end up in an unproductive argument with each person upset that their partner doesn’t recognize how much they contribute and how hard they work. Both will end up feeling undervalued and attacked, which is certainly not where any couple wants to be.
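A back-of-the-envelope sketch (with invented survey numbers, just to make Kahneman's point concrete) shows how self-assessed contributions can add up to more than the whole:

```python
# Hypothetical self-reported shares from a four-person team, each member
# estimating what fraction of the project was their own work.
claimed_shares = {"Ana": 0.35, "Ben": 0.30, "Cam": 0.30, "Dee": 0.30}

total = sum(claimed_shares.values())
print(f"Total claimed: {total:.0%}")  # 125% -- the claims overlap
```

Each estimate is individually plausible, but because everyone recalls their own effort most fluently, the claims sum past 100%, which is exactly why arguments about fair shares are so hard to settle.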

 

Managers must be aware of this and must find ways to encourage and celebrate the achievements of their team members while recognizing that each team member may feel that they are pulling more than their own weight. Letting everyone feel that they are doing more than their fair share is a good way to create unhelpful internal team competition and to create factions within the workplace. No professional work team wants to end up like a college or high school project group, where one person pulls an all-nighter, overwriting everyone else’s work, and where another person seemingly disappears and emails everyone at the last minute to ask them not to rat them out to the teacher.

 

Individually, we should acknowledge that other people are not going to see and understand how much effort we feel that we put into the projects we work on. Ultimately, at an individual level we have to be happy with team success over our individual success. We don’t need to receive a gold star for every little thing that we do, and if we value helping others succeed as much as we value our own success, we will be able to overcome the availability heuristic in this instance, and become a more productive team member, whether it is in volunteer projects, in the workplace, or at home with our families.