Science and Facts

Science helps us understand the world and answer questions about how and why things are the way they are. But this doesn’t mean science always gives us the most accurate answers possible. Quite often science only seems to suggest an answer, sometimes the answer we get doesn’t really address the question we wanted to ask, and sometimes there is just too much noise to gain any real understanding. This inability to perfectly answer every question, especially when we present science to young children as a source of clear facts, becomes a point of confusion and dismissal among those who don’t want to believe the answers that science gives us.
In Spook: Science Tackles the Afterlife, Mary Roach writes, “Of course, science doesn’t dependably deliver truths. It is as fallible as the men and women who undertake it. Science has the answer to every question that can be asked. However, science reserves the right to change that answer should additional data become available.” The science of the afterlife (really the science of life, living, death, and dying), Roach explains, has been a science of revision. What we believe, how we conduct experiments, and how we interpret scientific results have shifted as our technology and scientific methods have progressed. The science of life and death has given us many different answers over the years as our own biases have shifted and as our data and computer processing have evolved.
The reality is that all of our scientific fields of study are incomplete. There are questions we still don’t have great answers to, and as we seek those answers, we have to reconsider older answers and beliefs. We have to study contradictions and try to understand what might be wrong with the way we have interpreted the world. What we bring to science impacts what we find, and that means that sometimes we don’t find truths, but conveniently packaged answers that reinforce what we always wanted to be true. Over time, however, the people doing the science change, the background knowledge brought to science changes, and the way we understand the answers from science changes. It can be frustrating for those of us on the outside who want clear answers and don’t want to be misled by people who deliberately exploit incomplete scientific knowledge. But over time science revises itself to become more accurate and to better describe the world around us.
55% of Chicago Homeless Looked Neat & Clean

If I pictured a homeless person in my mind I would imagine someone who was dirty, who might not have a shirt, and who had a mess of overgrown, ungroomed hair. Whether male or female, ragged clothes, unkempt hair, and bags full of stuff seem to be the typical image of a homeless person. However, there are many homeless people, perhaps even a majority in some cities at some times, who do not fit this stereotypical image.
In The Homeless, Christopher Jencks writes about our expectations of what homelessness looks like, and what the reality often is for those experiencing it. Regarding our expectations, he writes, “but appearances can mislead us … When [Peter] Rossi surveyed the Chicago homeless, his interviewers classified 55 percent of the people they interviewed as neat and clean rather than dirty, unkempt, or shabbily dressed.”
We don’t expect people who wear normal, clean clothes to be homeless. We don’t expect people who are generally well groomed and don’t smell bad to be homeless. But when Peter Rossi was writing his book Down and Out in America, a slight majority of the people interviewed were dressed more or less normally and appeared to be typical people. These individuals are part of the group generally referred to as the invisible homeless. Rather than the visible people sleeping in tents who can’t shave, can’t shower, and have a few dirty possessions, these people appeared normal but still didn’t have a home. They were missed and misunderstood in debates and discussions about homelessness. They often hid their homelessness from the people they interacted with, creating space for misperceptions about homelessness.
I think it is important to compare our stereotypical view of homelessness to the reality of homelessness for many people. When we see the visibly homeless we often have strong reactions, and those strong reactions generally dictate how we think the homeless should be handled. But those strong reactions and opinions fail to account for the kind of homelessness that perhaps a majority experience. This means that policies and programs to help the homeless (or more nefariously to “address” the homeless) may fail to actually benefit the majority of homeless people or to even focus on the leading drivers of homelessness. If someone who is neat and clean is in line at a soup kitchen, or asking for aid, they may not seem like they need it because they don’t look like the typical homeless person. Or, if we deny assistance to the homeless because we think they are all dirty, lazy, and possibly on drugs, then we fail to help those homeless who are trying to look and appear normal, who are trying to keep a job while homeless, and who are trying not to fall into the ungroomed stereotype of the homeless. It is important that we are aware of our expectations, biases, and prejudices around the homeless so that we can develop an accurate understanding of homelessness in our nation and actually address the problem.
Epistemic Optimists & Pessimists

A little while back I did a mini dive into cognitive psychology and behavioral economics by reading Thinking Fast and Slow by Daniel Kahneman, Nudge by Sunstein and Thaler, Risk Savvy by Gerd Gigerenzer, Vices of the Mind by Quassim Cassam, and The Book of Why by Judea Pearl. Each of these authors asked questions about the ways we think and tried to explain why our thinking so often seems to go awry. Recognizing that it is a useful but insufficient dichotomy, each of these authors can be thought of as either an epistemic optimist or an epistemic pessimist.
In Vices of the Mind Cassam gives us the definitions for epistemic optimists and pessimists. He writes, “Optimism is the view that self-improvement is possible, and that there is often (though not always) something we can do about our epistemic vices, including many of our implicit biases.” The optimists, Cassam argues, believe that we can learn about our minds, our biases, and how our thinking works to make better decisions and improve our beliefs to foster knowledge. Cassam continues, “Pessimism is much more sceptical about the prospects of self-improvement or, at any rate, of lasting self-improvement. … For pessimists, the focus of inquiry shouldn’t be on overcoming our epistemic vices but on outsmarting them, that is, finding ways to work around them so as to reduce their ill effects.” With Cassam’s framework, I think it is possible to look at the way each author and researcher presents information in their books and to think of them as either optimists or pessimists.
Daniel Kahneman in Thinking Fast and Slow wants to be an optimist, but ultimately is a pessimist. He writes throughout the book that his own knowledge about biases, cognitive illusions, and thinking errors hardly helps him in his own life. He states that what he really hopes his book accomplishes is improved water-cooler talk and a better understanding of how the brain works, not necessarily better decision-making for those who read it. Similarly, Sunstein and Thaler are pessimists. They clearly believe that we can outsmart our epistemic vices, but not through our own actions; rather, through outside nudges that smarter people and responsible choice architects have designed for us. Neither Kahneman nor Sunstein and Thaler believe we really have any ability to control and change our thinking independently.
Gigerenzer and Pearl are both optimists. While Gigerenzer believes that nudges can be helpful and encourages the development of aids to outsmart our epistemic vices, he also clearly believes that we can overcome them on our own simply through gaining experience and through practice. For Gigerenzer, achieving epistemic virtuosity is possible, even if it isn’t something you explicitly work toward. Pearl focuses on how human beings are able to interpret and understand causal structures in the real world, and breaks from the fashionable viewpoint of most academics in saying that humans are actually very good at understanding, interpreting, and measuring causality. He is an epistemic optimist because he believes, and argues in his book, that we can improve our thinking, improve the ways we approach questions of causality, and improve our knowledge without having to rely on fancy tricks to outsmart epistemic vices. Both authors believe that growth and improved thinking are possible.
Cassam is harder to place, but I think he still is best thought of as an epistemic optimist. He believes that we are blameworthy for our epistemic vices and that they are indeed reprehensible. He also believes that we can improve our thinking and reach a more epistemically virtuous way of thinking if we are deliberate about addressing our epistemic vices. I don’t think that Cassam believes we have to outsmart our epistemic vices, only that we need to be able to recognize them and understand how to get beyond them, and I believe that he would argue that we can do so.
Ultimately, I think that we should learn from Kahneman, Sunstein, and Thaler and be more thoughtful about our nudges as we look for ways to overcome the limitations of our minds. However, I do believe that learning about epistemic vices and taking steps to improve our thinking can help us grow and become more epistemically virtuous. Simple experience, as I think Gigerenzer would argue, will help us improve naturally, and deliberate and calibrated thought, as Pearl might argue, can help us clearly see real and accurate causal structures in the world. I agree with Cassam that we are at least revision responsible for our epistemic vices, and that we can take steps to get beyond them, improving our thinking and becoming epistemically virtuous. In the end, I don’t think humanity is a helpless pool of irrationality that can only improve its thinking and decision-making through nudges. I think we can, and over time will, improve our statistical thinking and decision-making and limit cognitive errors and biases, as individuals and as societies (then again, maybe it’s just the morning coffee talking).
The Life and Death Consequences of Epistemic Vices

For the last couple of months I have been writing about ideas and thoughts that stood out to me in Quassim Cassam’s book Vices of the Mind. Cassam specifically analyzes epistemic vices, asking why they exist, whether we should be blamed for having them, and what real world consequences arise because of them. To this point, most of my posts have focused on relatively harmless aspects of epistemic vices. I have written about how they limit knowledge and how they can cause us to make suboptimal decisions about investing money, making career choices, or relating to political figures. However, epistemic vices do have life and death consequences, and can be much more vicious than I have written about to this point.
In his book, Cassam uses an example of weapon bias to demonstrate the tragic consequences that can arise from epistemic vices. He describes work from Keith Payne to outline the concept. He writes, “Under the pressure of a split-second decision, the readiness to see a weapon became an actual false claim of seeing a weapon. It was race that shaped people’s mistakes, and Payne found that African American participants were as prone to weapon bias as white participants.” This quote shows that a bias influences the way we perceive the world and directly influences the beliefs we come to hold. It becomes an epistemic vice by inhibiting knowledge and causing us to have inaccurate views of the world. And these biases, these epistemic vices, are endemic to our nation. It is not one group of biased people, but an entire system that promotes and fosters weapon bias based on racism, hindering knowledge for everyone, creating life and death misunderstandings across our country.
Cassam continues: “By causing errors in perception weapon bias gets in the way of perceptual knowledge, and the practical consequences hardly need spelling out. In the US innocent African American men are shot with alarming frequency by police officers who think they see a gun when no gun is present. If weapon bias is an epistemic vice then here is proof that some epistemic vices are quite literally a matter of life and death.” (It is worth noting that Cassam is at the University of Warwick in the UK.)
Failing to see the world clearly can have life and death consequences. In terms of our police, we encourage them to think of themselves as needing to react in a split second when they perceive the threat of a weapon, potentially another vice that should be addressed. Systemic and structural racism biases police toward seeing a harmless item, like a tool or a phone, as a gun, forming the basis of weapon bias. The end result is a lack of knowledge via false perceptions and, in the United States, disproportionate numbers of black men killed in police interactions.
Cassam’s book is a dense and deep dive into epistemic vices, but the life and death consequences of epistemic vices such as weapon bias demonstrate the importance of understanding how our thoughts, actions, and behaviors can obstruct knowledge. It is important that we recognize our own epistemic vices and work to build systems and structures that limit the acquisition of and negative consequences of epistemic vices. Seeing the world more clearly can literally prevent unnecessary death.

Acquisition Responsibility

We are not always responsible for the acquisition of our virtues and vices. For some of us, being good natured and virtuous toward other people comes naturally, and for others of us, being arrogant or closed-minded comes naturally or is pushed onto us by forces we cannot control. I think it is reasonable to say that virtues likely require more training, habituation, imitation, and intentionality to acquire than vices, so in that sense we are more responsible for virtue acquisition than vice acquisition. It is useful to think about becoming versus being when we think about virtues and vices because it helps us better consider individual responsibility. Making this distinction helps us think about blameworthiness and deservingness, and it can shape the narratives that influence how we behave toward others.
In Vices of the Mind Quassim Cassam writes, “a person who is not responsible for becoming dogmatic might still be responsible for being that way. Acquisition responsibility is backward-looking: it is concerned with the actual or imagined origin of one’s vices.”
In the book, Cassam focuses on epistemic vices, or vices that obstruct knowledge. He uses an example from Heather Battaly of a young man who is unfortunate enough to grow up in a part of the world controlled by the Taliban. The young man will undoubtedly be closed-minded (at the very least) as a result of being indoctrinated by the Taliban. There is little the man could do to be more open-minded, to avoid adopting a specific viewpoint informed by the biases, prejudices, and agendas of the Taliban. It is not reasonable to say that the man has acquisition responsibility for his closed-mindedness. Many of our epistemic vices are like this: they are the results of forces beyond our control or invisible to us, and they are in some ways natural cognitive errors that come from misperceptions of the world.
When we think about vices in this way, I would argue that it should change how we think about people who hold such vices. It seems to me that it would be unreasonable to scorn everyone who holds a vice whose acquisition they had no control over. Being backward-looking doesn’t help us think about how to move forward. It is important to recognize that people hate being held responsible for things they had no control over, even if those things led to serious harms for other people. An example might be people who have benefitted from structural racism, and might like to see systems and institutions change to be less structurally racist, but don’t want to be blamed for a system they didn’t recognize or know they contributed to. Being stuck with a backward-looking view frustrates people, makes them feel ashamed and powerless, and prevents progress. People would rather argue that it wasn’t their fault and that they don’t deserve blame than think about ways to move forward. Keeping this in mind when thinking about how we address and eliminate vices for which people are not acquisition responsible is important if we want to continue to grow as individuals and societies and if we want to successfully overcome epistemic vices.
Do People Make the Best Choices?

My wife works with families of children with disabilities, and for several years I worked in the healthcare space. A common idea between our two worlds was that the people being assisted are the experts on their own lives and know what is best for them. Parents are the experts for their children, and patients are the experts in their own health. Even if parents don’t know all the intervention strategies to help a child with disabilities, and even if patients don’t have an MD from Stanford, they are still the experts in their own lives and what they and their families need.
But is this really true? In recent years there has been a bit of a customer-service pushback in the world of business, more of a recognition that the customer isn’t always right. Additionally, research from the field of cognitive psychology, like much of the research from Daniel Kahneman’s book Thinking Fast and Slow that I wrote about, demonstrates that people can have huge blind spots in their own lives. People cannot always think rationally, in part because their brains are limited in their capacity to handle lots of information and because their brains can be tempted to take easy shortcuts in decision-making that don’t always take into account the true nature of reality. Add to Kahneman’s research the ideas put forth by Robin Hanson and Kevin Simler in The Elephant in the Brain, where the authors argue that our minds intentionally hide information from ourselves for political and personal advantage, and we can see that individuals can’t be trusted to always make the best decisions.
So while no one else may know a child as well as the child’s parents, and while no one knows your body and health as well as you do, your status as the expert of who you are doesn’t necessarily mean you are in the best position to always make choices and decisions that are in your own best interest. Biases, cognitive errors, and simple self-deception can lead you astray.
If you accept that you as an individual, and everyone else individually, cannot be trusted to always make the best choices, then it is reasonable to think that someone else can step in to help improve your decision-making in certain predictable instances where cognitive errors and biases can be anticipated. This is a key idea in the book Nudge by Cass Sunstein and Richard Thaler. In defending their ideas for libertarian paternalism, the authors write, “The false assumption is that almost all people, almost all of the time, make choices that are in their best interest or at the very least are better than the choices that would be made by someone else. We claim that this assumption is false – indeed, obviously false.”
In many ways, our country prefers to let markets shape the main decisions and factors of our lives. We like to believe that we make the best choices for our lives, and that aggregating our choices into markets will allow us to minimize the costs of individual errors. The idea is that we will collectively make the right choices, driving society in the right direction and revealing the best option and decision for each individual without deliberate tinkering in the process. However, we have seen that markets don’t encourage us to save as much as we should, and markets can be susceptible to the same cognitive errors and biases that we as individuals all share. Markets, in other words, can be wrong just like we are.
Libertarian paternalism helps overcome the errors of markets by providing nudges to help people make better decisions. Setting up systems and structures that make saving for retirement easier helps correct a market failure. Outsourcing investment strategies, rather than each of us individually making stock trades, helps ensure that shared biases and panics don’t overwhelm the entire stock exchange. The reality is that we as individuals are not rational, but we can develop systems and structures that provide us with nudges to help us act more rationally, overcoming the reality that we don’t always make the choices that are in our best interest.
Can We Avoid Cognitive Errors?

Daniel Kahneman is not very hopeful when it comes to our ability to avoid cognitive errors. Toward the end of his book Thinking Fast and Slow, a book all about cognitive errors, predictable biases, and situations in which we can recognize such biases and thinking errors, Kahneman isn’t so sure there is much we can actually do in our lives to improve our thinking.
Regarding his own thinking, Kahneman writes, “little can be achieved without considerable effort. As I know from experience, System 1 is not readily educable. Except for some effects that I attribute mostly to age, my intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy as it was before I made a study of these issues.”
Kahneman’s book is fantastic in part because of his humility. It would be easy to take a book on illusions, cognitive errors, biases, and predictable fallacies and use it to show how much smarter you are than everyone else who makes such thinking mistakes. However, Kahneman uses his own real life examples throughout the book to show how common and easy it is to fall into ways of thinking that don’t actually reflect reality. What is unfortunate though, is how hard it is to actually take what you learn from the book and apply it to your own life. If the author himself can hardly improve his own thinking, then those of us who read the book likely won’t make big changes in our thinking either.
“The upshot is that it is much easier to identify a minefield when you observe others wandering into it than when you are about to do so. Observers are less cognitively busy and more open to information than actors,” Kahneman continues. While we might not be able to improve our thinking simply by knowing about cognitive errors and being aware of predictable biases, we can at least recognize them in others. This can help us be more thoughtful when we critique or gossip about others (something we all do even if we claim we don’t).
Beyond improving the way we gossip or judge others, Kahneman’s research and his book are incredibly valuable for anyone who is in a design focused role. If you are creating a layout for a webpage, a seating arrangement at a restaurant, or the standard operating procedures for a company, you have an opportunity to design and develop a process and flow that takes cognitive errors and predictable biases into account. Because it is easier to observe others making mistakes than to observe those mistakes in ourselves, we can watch for situations where people are led astray, and help get them back on course. We can develop systems and structures that take our biases and cognitive errors into account, and minimize the damage they may do. We can set the world up to help guide us in a reasonable way through our cognitive errors and biases, but only if we know what to look for.
Endowment Effects

In his book Thinking Fast and Slow, Daniel Kahneman discusses an experiment he helped run to explore the endowment effect. The endowment effect is a cognitive bias that helps explain our attachment to things and our unwillingness to part with objects, even when we are offered something greater than the objective value of the object itself. We endow the object with greater significance than is really warranted, and in his book, Kahneman shows that this has been studied with Super Bowl tickets, wine, and coffee mugs.
Kahneman helped run experiments at a few different universities where college students were randomly given coffee mugs with the university logo. The mugs were worth about $6 each and were randomly distributed to about half of a classroom. Students were allowed to buy or sell the mugs, and the researchers saw a divergence in the value assigned to the mugs by the students who randomly obtained one and those who didn’t. Potential sellers were willing to part with a mug for about $7, a price above the actual value of the mug. Buyers, however, were generally only willing to purchase a mug for about $3, or half the value of the mug.
Kahneman suggests that the endowment effect has something to do with the unequal values assigned to the mug by those who received one and those who didn’t. He suggests that it is unlikely that those who received the mugs really wanted a university mug and particularly valued it relative to those who didn’t receive one. Those students should have been willing to trade the mug for $3, which could be used to purchase something they may have actually wanted, rather than keep a random mug. To explain why they didn’t sell their mugs, Kahneman suggests that the mugs became endowed with additional value by those who received them.
A further study showed similar effects. When all students in the class randomly received either a chocolate bar or a mug, researchers found that fewer students were willing to make a trade than the researchers predicted. Again, it is unlikely that a random distribution of mugs and candy perfectly matched the mug versus candy preferences of the students. There should have been plenty of students who could have used a sugar boost more than an extra mug (and vice versa), but little trading actually took place. It appears that once someone randomly receives a gift, even if the value of the gift was very small, they are not likely to give it up. The gift becomes endowed with some meaning beyond its pure utility and value.
Kahneman describes part of what takes place in our minds when the endowment effect is at work, “the shoes the merchant sells you and the money you spend from your budget for shoes are held for exchange. They are intended to be traded for other goods. Other goods, such as wine and Super Bowl tickets, are held for use to be consumed or otherwise enjoyed. Your leisure time and the standard of living that your income supports are also not intended for sale or exchange.”
The random mug or candy bar was not seen as an objective item intended to be traded or bartered in exchange for something that we actually want. It was viewed as a windfall over the status quo, and thus its inherent value to the individual was greater than the actual value of the object. Kahneman suggests that this is why so few students traded candy for mugs, and why mug sellers asked far more than what mug buyers wanted to pay in his experiments. The endowment effect is another example of how the emotional valence and narrative surrounding an otherwise objectively unimportant object can shape our behaviors in ways that can seem irrational. Next spring when you are trying to de-clutter your house, remember this post and the endowment effect. Remember that you are imbuing objects with value simply because you happen to own them, and remember that you would only pay half price for them if they were actually offered to you for purchase now. Hopefully that helps you minimize the number of mugs you own and declutter some of your cabinets.
Overcoming Group Overconfidence

Overcoming group overconfidence is hard, but in Thinking Fast and Slow, Daniel Kahneman offers one partial remedy: a premortem. As opposed to a postmortem, an analysis of why a project failed, a premortem looks at why a project might fail before it has started.
Group communication is difficult. When the leader of a group is enthusiastic about an idea, it is hard to disagree with them. If you are a junior member of a team, it can be uncomfortable, and potentially even disadvantageous for you and your career to doubt the ideas that a senior leader is excited about. If you have concerns, it is not likely that you will bring them up, especially in a group meeting with other seemingly enthusiastic team members surrounding you.
Beyond the silencing of a member who has concerns but doesn’t want to speak up is another problem that contributes to overconfidence among teams: groupthink. Particularly among groups that lack diversity, groupthink can crush the planning stage of a project. When everyone has similar backgrounds, similar experiences, and similar styles of thinking, it is unlikely that anyone within the group will have a viewpoint or opinion that is significantly different than the prevailing wisdom of the rest. What seems like a good idea or the correct decision to one person probably feels like the correct idea or decision to everyone else – there is literally no one in the room who has any doubts or alternative perspectives.
Premortems help get beyond groupthink and the fear of speaking up against a powerful and enthusiastic leader. The idea is to brainstorm all the possible ways that a project might fail. It includes an element of creativity by asking everyone to imagine that the project has finally finished: either it succeeded but went well over budget, ran way late, or endured a very turbulent series of events, or it was a complete failure and never reached its intended end point. People have to describe the issues that came up and why the project did not reach the rosy outcome everyone initially pictured. Imagining that these failures have taken place in real life gets people to step beyond groupthink and encourages highlighting roadblocks that particularly enthusiastic members overlook.
Because premortems are hypothetical, it gives people a chance to speak up about failure points and weaknesses in plans and ideas without appearing to criticize the person the idea came from. It creates a safe space for imagining barriers and obstacles that need to be overcome to achieve success. It reduces groupthink by encouraging a creative flow of ideas of failure points. As Kahneman writes, “The main virtue of the premortem is that it legitimizes doubts. Furthermore, it encourages even supporters of the decision to search for possible threats that they had not considered earlier.”
Overcoming group overconfidence is possible, but it needs the right systems and structures to happen. Groupthink and fear are likely to prevent people from bringing up real doubts and threats, but a premortem allows those concerns to be aired and seriously considered. It helps get people to look beyond the picture of success they intuitively connect with, and it helps prevent enthusiastic supporters from getting carried away with their overconfidence.
Thinking About Who Deserves Credit for Good Teamwork

Yesterday I wrote about the Availability Heuristic, the term that Daniel Kahneman uses in his book Thinking Fast and Slow to describe the ways in which our brains misjudge frequency, amount, and probability based on how easily an example of something comes to mind. In his book, Kahneman describes individuals being more likely to overestimate things like celebrity divorce rates if there was recently a high profile and contentious celebrity divorce in the news. The easier it is for us to make an association or to think of an example of a behavior or statistical outcome, the more likely we will overweight that thing in our mental models and expectations for the world.
Overestimating celebrity divorce rates isn’t a very big deal, but the availability heuristic can have a serious impact in our lives if we work as part of a team or if we are married and have a family. The availability heuristic can influence how we think about who deserves credit for good team work.
Whenever you are collaborating on a project, whether it is a college assignment, a proposal or set of training slides at work, or keeping the house clean on a regular basis, you are likely to overweight your own contributions relative to others’. You might be aware of someone who puts in a herculean effort and does well more than their own share, but if everyone is chugging along completing a roughly equivalent workload, you will see yourself as doing more than others. The reason is simple: you experience your own work firsthand. You only see everyone else’s handiwork once they have finished it and everyone has come back together. You suffer from availability bias because it is easier for you to recall the time and effort you put into the group collaboration than it is for you to recognize and understand how much work and effort others pitched in. Kahneman describes the result in his book: “you will occasionally do more than your share, but it is useful to know that you are likely to have that feeling even when each member of the team feels the same way.”
Even if everyone did an equal amount of work, everyone is likely to feel as though they contributed more than the others. As Kahneman writes, there is more than 100% of credit to go around when you consider how much each person thinks they contributed. In marriages, this is important to recognize and understand. Spouses often complain that one person is doing more than the other to keep the house running smoothly, but if they complain to their partner about the unfair division of household labor, they are likely to end up in an unproductive argument with each person upset that their partner doesn’t recognize how much they contribute and how hard they work. Both will end up feeling undervalued and attacked, which is certainly not where any couple wants to be.
Managers must be aware of this and must find ways to encourage and celebrate the achievements of their team members while recognizing that each team member may feel that they are pulling more than their own weight. Letting everyone feel that they are doing more than their fair share is a good way to create unhelpful internal team competition and to create factions within the workplace. No professional work team wants to end up like a college or high school project group, where one person pulls an all-nighter, overwriting everyone else’s work, and where another person seemingly disappears and emails everyone at the last minute to ask them not to rat them out to the teacher.
Individually, we should acknowledge that other people are not going to see and understand how much effort we feel that we put into the projects we work on. Ultimately, at an individual level we have to be happy with team success over our individual success. We don’t need to receive a gold star for every little thing that we do, and if we value helping others succeed as much as we value our own success, we will be able to overcome the availability heuristic in this instance, and become a more productive team member, whether it is in volunteer projects, in the workplace, or at home with our families.