Risk Literacy and Emotional Stress

In Risk Savvy, Gerd Gigerenzer argues that better risk literacy could reduce emotional stress. To emphasize this point, Gigerenzer writes about parents who received false positive medical test results for their newborns. Their children had been screened for biochemical disorders, and the tests indicated that a disorder was present. However, upon follow-up screenings and evaluations, the children were found to be perfectly healthy. Nevertheless, four years later, parents who had initially received a false positive result were more likely than other parents to say that their children required extra parental care, that their children were more difficult, and that they had more dysfunctional relationships with their children.

Gigerenzer suggests that the survey results represent a direct parental response to the initial false positive received when the child was a newborn. He argues that parents were given the biochemical test results without being informed about the chance of false positives and, due to a general lack of risk literacy, without understanding how common false positives are. Parents initially reacted strongly to the bad news of the test, and somewhere in their minds, even after the result was shown to be a false positive, they never adjusted their thoughts and evaluations of their children. The false positive became, in some ways, a self-fulfilling prophecy.

Writing about Gigerenzer's argument now, it feels more far-fetched than it did on a first reading, but I think his general claim that risk literacy and emotional stress are tied together is probably accurate. Regarding the parents in the study, he writes, "risk literacy could have moderated emotional reactions to stress that harmed these parents' relation to their child." Gigerenzer suggests that parents had strong negative emotional reactions when their children received a false positive, and that those initial reactions carried four years into the future. Had the doctors better explained the chance of a false positive and better communicated next steps, the strong negative emotional reaction could have been avoided, and parents would not have spent four years believing their child was somehow more fragile or more needy than other children. I recognize that receiving a diagnosis no parent wants to hear is stressful, and I can see how better risk communication could reduce some of that stress, but I suspect the study picked up on other factors as well. I think the results, as Gigerenzer reports them, overhype the connection between risk literacy and emotional stress.

Nevertheless, risk literacy is important for all of us living in today's complex and interconnected world. We are constantly presented with risks, and new risks can seemingly pop up anywhere at any time. Being able to decipher and understand risk is important so that we can adjust and modulate our activities and behaviors as our environment and circumstances change. Doing so successfully should reduce our stress, while struggling to comprehend risk and adjust behaviors and beliefs is likely to increase it. When we don't understand risks appropriately, we can become overly fearful, spend money on unnecessary insurance, and stress ourselves over incorrect information. Developing better charts, better communication tools, and better information about risk will help individuals improve their risk literacy, and will hopefully reduce risk by allowing people to adjust successfully to the risks they face.
Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics, a professor of mine had us think about drug testing student athletes. We ran through a few scenarios, calculating how many true positive and how many false positive test results we should expect if we oversaw a university program that drug tested student athletes on a regular basis. The results were surprising, and at first a little hard to understand.

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to produce more false positive tests than true positive tests. The big determining factors are the test's error rates (its sensitivity, the share of actual users it correctly flags, and its false positive rate, the share of non-users it wrongly flags) and the percentage of students using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does use drugs. The dilemma we discussed arises when you have a test with some rate of error and a large student athlete population with a minimal percentage of drug users. In that situation you cannot be confident that any given positive result is accurate: you will receive a number of positive tests, but most of them will be false positives.
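
To make this concrete, here is a minimal sketch in Python, in the spirit of the natural frequency tables we built in class. The population size and error rates below are hypothetical, chosen only to illustrate the dilemma.

```python
# Expected true vs. false positives for a screening program,
# counted in natural frequencies (whole people, not percentages).

def screening_counts(population, prevalence, sensitivity, false_positive_rate):
    users = population * prevalence                    # students who actually use drugs
    non_users = population - users                     # everyone else
    true_positives = users * sensitivity               # users the test correctly flags
    false_positives = non_users * false_positive_rate  # non-users the test wrongly flags
    return true_positives, false_positives

# Hypothetical numbers: 1,000 athletes, 2% of whom use drugs, and a test
# that catches 95% of users while wrongly flagging 5% of non-users.
tp, fp = screening_counts(1000, 0.02, 0.95, 0.05)
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}")
# -> true positives: 19, false positives: 49
```

Even with a test that sounds highly accurate, the 980 non-users generate far more wrong flags (49) than the 20 users generate right ones (19).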

In class, our teacher walked us through this example verbally before creating tables we could use to multiply the percentages ourselves and see that false positives will indeed outnumber true positives when you are dealing with a large population and a rare condition. He explained that this plays out every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests, as we are learning today). The challenge, as our professor explained, is that the math is complicated, and it is hard to explain to a person who just received a positive cancer test that they likely don't have cancer. The statistics are hard to understand on their own.

However, Gerd Gigerenzer doesn't think this needs to be as limiting a problem as my class discussion made it seem. In Risk Savvy, Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to go through and discuss, Gigerenzer suggests, can be digested in simple charts using natural frequencies. Natural frequencies are counts of actual people or cases, numbers we can understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to compare.

Telling someone that the incidence of cancer in the population is only 1%, that the chance of a false positive is 9%, and that they still likely don't have cancer despite a positive test is confusing. However, if you explain that for every 1,000 people who take this particular cancer test, only 10 actually have cancer and 990 don't, the path to comprehension begins to clear. Of the 10 who do have cancer, the test correctly identifies 9, providing 9 true positives for every 1,000 tests (adjust according to the prevalence and test sensitivity). The false positives can then be explained by saying that among the 990 people who really don't have cancer, the test will err and tell about 89 of them (9% in this case) that they do have cancer. So 89 people receive false positives while only 9 receive true positives. Since 89 is far greater than 9, a positive result is far from a guarantee of actually having cancer: only about 9 of the 98 people who test positive, roughly 9%, really have the disease.
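
Plugging the numbers from this example into the same kind of arithmetic gives a small, self-contained check (the percentages are the ones stated above):

```python
# Natural frequencies for the cancer example above:
# 1,000 people, 1% prevalence, 90% sensitivity, 9% false positive rate.
have_cancer = 1000 * 0.01                      # 10 people actually have cancer
true_positives = have_cancer * 0.90            # 9 of them test positive
false_positives = (1000 - have_cancer) * 0.09  # ~89 healthy people also test positive
posterior = true_positives / (true_positives + false_positives)
print(f"true positives: {true_positives:.0f}, false positives: {false_positives:.0f}")
print(f"chance of cancer given a positive test: {posterior:.0%}")  # about 9%
```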

Gigerenzer uses very helpful charts in his book to show that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but expressing false positives in natural frequencies is a way to reach better comprehension. With this background he writes, "For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking." Gigerenzer shows that we don't need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed are better tools and aids for comprehension. Rather than developing paternalistic policies to nudge people toward behaviors they don't fully understand, Gigerenzer argues that more work helping people understand problems will solve the dilemma of poor decision-making. The problem isn't always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don't present information in a clear and understandable way to begin with.
Aspiration Rules

My last post was all about satisficing: making decisions based on alternatives that satisfy our wants and needs and are good enough, even if they are not the absolute best option. Satisficing contrasts with maximizing. When we maximize, we hold out for the single best alternative, the one from which no additional Pareto improvements can be gained. Maximizing is certainly a great goal in theory, but in practice it can be worse than satisficing. As Gerd Gigerenzer writes in Risk Savvy, "in an uncertain world, there is no way to find the best." Satisficing with aspiration rules, he argues, is the best way to make decisions and navigate our complex world.

“Studies indicate that people who rely on aspiration rules tend to be more optimistic and have higher self-esteem than maximizers. The latter excel in perfectionism, depression, and self-blame,” Gigerenzer writes. Aspiration rules differ from maximizing because the goal is not to find the absolute best alternative, but to find one that meets basic, reasonable, pre-defined criteria. Gigerenzer uses the example of buying pants. A maximizer may spend the entire day going from store to store, checking every option, trying every pair, and comparing prices until they have found the absolute best pair available for the lowest cost and best fit. Even then, at the end of the day, they won't truly know they found the best option; there will always be the possibility that they missed a store or a deal someplace else. In contrast, a shopper using an aspiration rule would go into a store looking for a certain style at a certain price. If they found a pair that fit right and fell within the right price range, they could be satisfied and make the purchase without checking every store and without wondering whether they could have gotten a better deal elsewhere. They had basic aspirations that they could reasonably meet and be satisfied.

Maximizers set unrealistic goals and expectations for themselves, while those using aspiration rules set more reasonable, achievable goals. This demonstrates the power and utility of satisficing. Decisions have to be made; otherwise we will be wandering around without pants as we try to find the best possible deal, forgoing opportunities to get lunch, meet up with friends, and do whatever it is we need pants to go do. This idea is not limited to pants and individuals. Businesses, institutions, and nations all have to make decisions in complex environments. Maximizing can be a path toward paralysis, toward CYA (cover your ass) behaviors, and toward long-term failure. Start-ups that can satisfice and make quick business decisions can unseat giants that attempt to maximize every decision. Nations focused on maximizing every public policy decision may never actually achieve anything, leading to civil unrest and a loss of support. Institutions that can't satisfice likewise fail to meet their goals and missions. Allowing ourselves and our larger institutions to set aspiration rules and satisfice, all while working to improve incrementally with each step, is a good way to actually make progress, even if it doesn't feel like we are getting the best deal in any given decision.

The aspiration rules we use can still be high, demand great performance, and drive us toward excellence. Another key difference between aspiration rules and maximizing, however, is that aspiration rules can be personalized and tailored to the realistic circumstances we find ourselves in. That means we can create SMART goals for ourselves by using aspiration rules. Specific, measurable, achievable, realistic, and time-bound goals have more in common with a satisficing mentality than with maximizing strategies. Maximizing doesn't recognize our constraints and challenges, and may leave us feeling inadequate when we don't become president, don't have a larger house than our neighbors, and are not famous celebrities. Aspiration rules, on the other hand, can help us set goals that we can realistically achieve within reasonable timeframes, helping us grow and actually reach our goals.
Satisficing

Satisficing gets a bad rap, but it isn't actually such a bad way to make decisions, and it realistically accommodates the constraints and challenges that decision-makers in the real world face. None of us would like to admit when we are satisficing, but the reality is that we satisfice all the time, and we are often happy with the results.

In Risk Savvy, Gerd Gigerenzer recommends satisficing when trying to choose what to order at a restaurant. Regarding this strategy, he writes:

“Satisficing: This … means to choose the first option that is satisfactory; that is, good enough. You need the menu for this rule. First, you pick a category (say, fish). Then you read the first item in this category, and decide whether it is good enough. If yes, you close the menu and order that dish without reading any further.”

Satisficing works because we often have more possibilities than we have time to carefully weigh and consider. If you have never been to the Cheesecake Factory, reading every option on the menu for the first time would probably take you close to 30 minutes. If you are eating on your own with no time constraints, then sure, read the whole menu, though the staff will probably be annoyed with you. If you are out with friends or on a date, you probably don't want to take 30 minutes to order, and you will feel pressured to make a choice relatively quickly without full knowledge of all your options. Satisficing helps you make a selection that you can be relatively confident you will be happy with, given the constraints on your decision-making.
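
The rule itself is almost trivially simple. Here is a minimal sketch (the menu items and the "good enough" scores are invented for illustration):

```python
# Satisficing: take the first option that clears an aspiration level,
# instead of scoring every option to find the maximum.

def satisfice(options, is_good_enough):
    for option in options:
        if is_good_enough(option):
            return option   # stop searching at the first satisfactory option
    return None             # nothing met the aspiration level

# Hypothetical menu category: (dish, expected enjoyment out of 10).
fish_menu = [("grilled salmon", 7), ("sea bass", 9), ("fish tacos", 6)]
choice = satisfice(fish_menu, lambda item: item[1] >= 7)
print(choice)  # ('grilled salmon', 7) -- good enough, close the menu
```

A maximizer would read all three entries and order the sea bass; the satisficer orders the salmon and gets back to the conversation.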

The term satisficing was coined by the Nobel Prize-winning political scientist and economist Herbert Simon, and I remember a professor of mine telling a story about Simon's decision to remain at Carnegie Mellon University in Pittsburgh. When asked why he hadn't taken a position at Harvard or a more prestigious Ivy League school, Simon replied that his wife was happy in Pittsburgh, and that while Carnegie Mellon wasn't as renowned as Harvard, it was still a good school and offered him enough of what he wanted to stay. In other words, Carnegie Mellon satisfied his basic needs and met enough of his criteria to make him happy, even though a school like Harvard might have maximized his prestige and influence. Simon was satisficing.

Without always recognizing it, we turn to satisficing for many of our decisions. We often can't buy the perfect home (because of timing, price, and other bidders), so we satisfice and buy the first home we can strike a good deal on that meets enough of our desires (even if it doesn't fit them all perfectly). The same goes for jobs, cars, where to get take-out, what movie to rent, what new clothes to buy, and more. Carefully analyzing every potential decision can be frustrating and exhausting. We will constantly doubt whether we made the best choice, and we may be too paralyzed to make any decision in the first place. If we satisfice, however, we accept that we are not making the best choice but an adequate one, a choice that satisfies the greatest number of our needs while simplifying the decision itself. We can live with what we get and move on, without the constant doubt and loss of time we might otherwise experience. Satisficing, despite its bad rap among those who favor rationality in all instances, is actually a pretty good decision-making heuristic.
A Leadership Personality

I find personality trait tests misleading. I know companies use them in hiring decisions, and I know the Big Five personality traits have been shown to predict political party support, but I still feel they are misapplied and misunderstood. Specifically, I think the way we interpret them fails to take context into consideration, which may make them next to useless. Gerd Gigerenzer considers this lapse in our judgment when thinking about the way we discuss and evaluate leadership personalities.

In Risk Savvy he writes, “leadership lies in the match between person and environment, which is why there is no single personality that would be a successful leader at all historical times and for all problems to solve.” A military general might make a great leader on the battlefield but a poor one in a public education setting. A surgeon leading a hospital during the American Civil War might not make a good leader at Columbia University Medical Center today, and the leader who thrives at a prestigious New York City medical center might not make a great leader at Northeastern Nevada Regional Hospital. Leadership is in many ways context dependent. The problems a leader has to address may call for different approaches and solutions, which particular personality types may support or sabotage. An outgoing socialite may be the right type of leader in New York City but might be bored in rural Nevada and come across as overbearing to those who prefer a rural lifestyle. What Gigerenzer suggests may be the most important quality for a leader is not some form of leadership personality, but the right experiences and the ability to apply particular rules of thumb and intuition to a given problem.

If the appropriate leadership personality is so context dependent, it may also be worth asking whether our personality in general is context dependent. I have not studied personality and personality tests deeply enough to have real evidence to back me up, but I would expect it to be. Dan Pink, in When, shows that we are most productive and have our most positive mood about four hours after waking, and have the least energy and worst mood around midday (eight to ten hours after we wake up). It seems to me that my answers on a personality test would differ if I took it at the peak of my day versus during the deepest trough. I would also expect my personality to manifest differently on an online multiple-choice test than in an unexpected car emergency, or during a game of cards with my old high school friends. To say that I have one personality that shines through in all situations seems misleading, and to say that I have a fixed level of any given personality trait that remains constant throughout the day and from experience to experience also seems misleading.

Gigerenzer's quote above is about leadership and the idea that no single personality defines a good leader. I think it is reasonable to extend that idea to personality generally: our personality is context dependent, and being successful as individuals also involves rules of thumb built from experience. What is important, then, is to develop and cultivate experiences and rules of thumb that can guide us toward success. Incorporating goals, feedback, and tools that help us recall successful approaches within a given context can help us become leaders and succeed, regardless of what a personality test tells us and regardless of the context we find ourselves in.
A Leader’s Toolbox

In the book Risk Savvy, Gerd Gigerenzer describes the work of top executives within companies as inherently intuitive. Executives and managers at high-performing companies are constantly pressed for time. There are more decisions, more incoming items that need attention, and more things to work on than any executive or manager can adequately handle alone. Consequently, delegation is necessary, as is quick, intuition-based decision-making. "Senior managers routinely need to make decisions or delegate decisions in an instant after brief consultation and under high uncertainty," writes Gigerenzer. This combination of quick decision-making under uncertainty is where intuition comes into play, and the ability to navigate these situations is what truly comprises the leader's toolbox.

Gigerenzer stresses that the intuitions developed by top managers and executives are not arbitrary. Successful managers and companies tend to develop similar toolboxes that help encourage trust and innovation. While many individual decisions are intuitive, the structure of the leader's toolbox often becomes visible and intentional. As an example, Gigerenzer highlights a line of thinking he uncovered when working on a previous book. He writes, "hire well and let them do their jobs reflects a vision of an institution where quality control (hire well) goes together with a climate of trust (let them do their jobs) needed for cutting-edge innovation."

In many companies and industries, the work to be done is incredibly complex, and no single individual can manage every decision. The decision-making process needs to be decentralized for the individual units of the team to work effectively and efficiently. Hiring talented people and providing them with the autonomy and tools necessary to succeed is the best approach to getting the right work done well.

Gigerenzer continues, “Good leadership consists of a toolbox full of rules of thumb and the intuitive ability to quickly see which rule is appropriate in which context.”

A leader's toolbox doesn't consist of specific lists of what to do in certain situations, or even specific skills that are easy to check off on a resume. It is built from experience in a diverse range of settings and from intuitions about things as varied as hiring, teamwork, and delegation. Because innovation is always uncertain and always includes risk, leaders must develop intuitive skills and be able to make quick, accurate judgments about how best to handle new challenges and obstacles. Intuition and gut decisions are an essential part of leadership today, even if we don't like to admit that we make important decisions on intuition.
Gut Decisions

“Although about half of professional decisions in large companies are gut decisions, it would probably not go over well if a manager publicly admitted, I had a hunch. In our society, intuition is suspicious. For that reason, managers typically hide their intuitions or have even stopped listening to them,” Gerd Gigerenzer writes in Risk Savvy.

The human mind evolved first in small tribal bands trying to survive in a dangerous world. As our tribes grew, our minds became more politically savvy, learning to intuitively hide our selfish ambitions and appear honest and altruistic. This pushed our brains toward more complex activity that takes place outside our direct consciousness, hiding in gut feelings and intuitions. Today, however, we don't trust those intuitions and gut decisions, even though they never left us.

We do have good reason to discount intuitions. Our minds did not evolve to serve us perfectly in a complex, data-rich world full of uncertainty. Our brains are plagued by motivated reasoning, biases, and cognitive limitations. Making gut decisions can leave us vulnerable to these mental challenges, leading us to distrust our intuitions.

However, this doesn't mean we have escaped gut decisions. Gerd Gigerenzer thinks that is actually a good thing, especially if we have developed years of insight and expertise through practice and real-life training. Gigerenzer argues that we still make many gut decisions in areas as diverse as vacation planning, daily exercise, and corporate strategy; we just don't admit that we are deciding based on intuition rather than careful statistical analysis. Taking it a step further, Gigerenzer suggests that most of the time we make a decision at a gut level and "produce reasons after the fact." We rationalize and use motivated reasoning to explain why we made a decision, deceiving ourselves into believing that we always intended to do the rational calculation first and that we really hadn't made up our minds until after we had done so.

Gigerenzer suggests that we acknowledge our gut decisions. Ignoring them and pretending they are not influential wastes time and costs money. An executive may have an intuitive sense of what to do about a business decision but be reluctant to say the decision was based on intuition. Instead, they spend time doing an analysis that didn't need to be done, creating reasons to support the decision after the fact and wasting time and energy that could go into implementing the decision that has already been made. Or an executive may bring in a consulting firm, hoping the firm will come up with the same answer they got from their gut. Time and money are both wasted, and the decision-making and action-taking structures of the individual and the organization are gummed up unnecessarily. Acknowledging gut decisions and moving forward quickly, Gigerenzer seems to suggest, is better than rationalizing and finding support for them after the fact.
A Bias Toward Complexity

When making predictions or decisions in the real world, where there are many variables, high levels of uncertainty, and numerous alternatives to choose from, a simple rule of thumb can outperform a complex predictive model. The intuitive sense is that the more complex our model, the more accurately it will reflect the real complexity of the world, and the better job it will do at making predictions. If we can see that there are multiple variables, shouldn't our model capture the different alternatives for each of those variables? Wouldn't a simple rule of thumb necessarily flatten many of those alternatives, failing to take into consideration the different possibilities that exist? Shouldn't a more complex model be better than a simple heuristic?

The answer to these questions is no. We are biased toward complexity for numerous reasons: it feels important to build a model that accounts for every possible alternative for each variable, we believe that having more information is always good, and we want to impress people by showing how thoughtful and thorough we are. Creating a model that accounts for all the different possibilities fits those preexisting biases. The problem, however, is that as we make our model more complex, it becomes more unstable.

In Risk Savvy, Gerd Gigerenzer explains what happens with variance in our models: "Unlike 1/N, complex methods use past observations to predict the future. These predictions will depend on the specific sample of observations it uses and may therefore be unstable. This instability (the variability of these predictions around their mean) is called variance. Thus, the more complex the method, the more factors need to be estimated, and the higher the amount of error due to variance." (Emphasis added by me. 1/N, allocating equally across all N options, is an example of a simple heuristic that Gigerenzer explains in the book.)
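
A toy simulation shows the instability Gigerenzer describes. In the sketch below (all numbers invented for illustration), ten options are in truth identical, but a method that chases the best-looking sample estimate produces far less stable predictions than simply averaging across all options, the spirit of 1/N:

```python
# Toy illustration of error due to variance: estimating more from limited
# data (betting on the option with the best sample mean) is less stable
# than the simple 1/N rule (treat all options equally).
import random

random.seed(42)
N_OPTIONS, SAMPLE_SIZE, TRIALS = 10, 12, 2000
TRUE_MEAN, NOISE = 0.05, 0.2   # every option is really identical

equal_weight, chase_best = [], []
for _ in range(TRIALS):
    # Observe a short, noisy history for each option.
    sample_means = [
        sum(random.gauss(TRUE_MEAN, NOISE) for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
        for _ in range(N_OPTIONS)
    ]
    chase_best.append(max(sample_means))                # "complex": bet on the best estimate
    equal_weight.append(sum(sample_means) / N_OPTIONS)  # 1/N: average everything

def std_dev(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"1/N-style prediction spread: {std_dev(equal_weight):.3f}")
print(f"chase-the-best spread:       {std_dev(chase_best):.3f}")
# The estimate-chasing method comes out roughly twice as unstable (and
# biased upward), even though every option is truly the same.
```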

Our bias toward complexity can make our models and predictions worse when uncertainty is high, alternatives are many, and data is relatively limited. In the opposite situation, with low uncertainty, few alternatives, and plentiful data, we can use very complex models to make accurate predictions. But in the real world, when making stock market or March Madness predictions, we should rely on a simple rule of thumb. The more complex our model, the more parameters we have to estimate, and each estimate is another opportunity to be wrong. Rather than errors offsetting one another, the accumulated estimation noise creates more variance and a greater likelihood that our prediction will land further from reality than if we had flattened the variables with a simple heuristic.
Defensive Decision-Making

One of the downfalls of a negative error culture is that people become defensive about any mistake they make. Errors and mistakes are shamed, and people who commit them do their best to hide them or deflect responsibility. Within negative error cultures you are more likely to see people taking steps to distance themselves from responsibility before a decision is made, practicing what is called defensive decision-making.

Gerd Gigerenzer expands on this idea in his book Risk Savvy, writing, "defensive decision making [is] practiced by individuals who waste time and money to protect themselves at the cost of others, including their companies. Fear of personal responsibility creates a market for worthless products delivered by high-paid experts."

Specifically, Gigerenzer writes about companies that hire expensive outside experts and consultants to make market predictions and improve company decision-making. The idea is that individual banks, corporations, and sales managers can't know the state of a market as well as an outside expert whose job is to study trends, talk to market actors, and understand how the market responds to internal and external pressures. The problem, as Gigerenzer explains, is that even experts are not very good at predicting the future of a market. There is simply too much uncertainty for anyone to say whether market trends will continue, whether a shock is coming, or whether a certain product or service is about to take off. Experts make these types of predictions all the time, but evidence suggests their predictions are not much better than throwing dice.

So why do companies pay huge fees, sit through lengthy meetings, and spend time trying to understand and adapt to the predictions of experts? Gigerenzer suggests it is because individuals within the company are practicing defensive decision-making. If you are a sales manager and you decide to sell to a particular market with a new approach after analyzing your own team's performance and trends, then you are responsible for the outcome. If the new strategy works, you will look great, but if it fails, you will be blamed for not understanding the market, for missing the signs that your plan wasn't going to succeed, and for misinterpreting past trends. However, if a consultant suggested the course of action, presented your team with a great visual presentation, and was certain they understood the market, then you escape blame when the plan doesn't work out. If even the expert couldn't see what was going to happen, how could you be blamed?

Defensive decision-making is good for the individual but bad for the larger organization. Companies would be better off if they made decisions more quickly, accepted risk, and could openly evaluate success and failure without placing too much blame on individuals. They could learn more from their errors and do a better job identifying and promoting talent. Defensive decision-making is expensive and time consuming, and it outsources blame, preventing companies and organizations from actually learning and improving their decision-making over the long run.
A Useful Myth

Autonomy, free will, and self-control combine to create a useful myth: that we control our own destinies, that we are autonomous actors with rights, freedoms, and the opportunity to improve our lives through our own effort. The reality is that the world is incredibly complex, and we don't get to choose our genes, our parents, or the situations we are born and raised in. A huge number of factors based on random chance and luck contribute to whether we are successful or not. Nevertheless, the belief that we are autonomous actors in control of our own destinies is still a useful myth.

In Risk Savvy Gerd Gigerenzer writes, “people who report more internal control tend to fare better in life than those who don’t. They play a more active role in their communities, take better care of their health, and get better jobs. We may have no control about whether people find our clothes or skills or appearance attractive. But we do have control over internal goals such as acquiring languages, mastering a musical instrument, or taking responsibility for small children or our grandparents.”

This quote shows why the idea of internal control and agency is such a useful myth. If we believe we have the power to shape our lives for the better, then we seem to be more likely to work hard, persevere, and stretch for challenging goals. A feeling of helplessness, as though we don’t have control, likely leads to cynicism and defeatism. Why bother trying if you and your actions won’t determine the success or failure you experience in life?

This myth is at the heart of American meritocracy, but it is important to note that it does appear to be just a myth. EEGs can detect electrical activity in the brain that predicts an action before a person becomes aware of a conscious desire to perform it. Split-brain experiments and the research of Kahneman and Tversky show that our brains are composed of multiple competing systems that almost amount to separate people and personalities within our singular consciousness. And as I wrote earlier, luck is a huge determining factor in whether we have the skills and competencies for success, and whether we have a supportive environment and sufficient opportunities to master those skills.

Recently, on an episode of Rationally Speaking, Julia Galef interviewed Michael Sandel about meritocracy. One fear Sandel has about our system of meritocracy is that people who succeed by luck and chance believe they succeeded because of special qualities or traits they possess. Meanwhile, those who fail are viewed as having some sort of defect, a verdict that people who fail or live in poverty may come to believe and embrace, creating another avenue for defeatism to thrive.

If internal control is a useful myth, it is because it encourages action and flourishing. My solution, therefore, is to blend the two views: the view of internal agency and the view of external forces shaping our futures. These views are contradictory on the surface, but I believe they can be combined and live in harmony (especially given the human ability to peacefully, if ignorantly, hold contradictory beliefs). We need to believe we have agency, while also recognizing that success is largely a matter of luck and that we depend on society and on others to reach great heights. This should encourage us to apply ourselves fully, but also to be humble and to take steps to help ensure others can apply themselves fully and reach greater success. When people fail, we shouldn't look at them as morally inept or as lacking skills and abilities, but as people who happened to end up in a difficult place. We should then take steps to improve their situations and give them more opportunities to find the space that fits their skills and abilities for growth and success. Internal control can still be a useful myth if we tie it to the right structures and systems, ensuring everyone can use their agency and avoid the overwhelming crush of defeatism when things don't go well.