Believing We Are Well Informed

In his book Risk Savvy, Gerd Gigerenzer demonstrated that people often overestimate their level of knowledge about the benefits of cancer screening. “A national telephone survey of U.S. adults,” he writes, “reported that the majority were extremely confident in their decision about prostate, colorectal, and breast screening, believed they were well informed, but could not correctly answer a single knowledge question.” I think this quote reveals something important about the way our minds work. We often believe we are well informed, but that belief, and the confidence that comes with it, is often an illusion.
This is something I have been trying to work on. Any time I hear a fact or a discussion about some topic, my initial reaction is to position myself as a knowledgeable semi-expert on it. I have noticed that I do this with ideas and topics that I have only heard once or twice in a commercial, seen in a headline, or overheard someone talking about. I immediately feel like an expert even though my knowledge is often less than surface deep.
I think what is happening in these situations is that I am substituting a different question for the question of whether I actually have expertise or knowledge. Instead of asking what I really know, I am answering the question, can I recall a time when I thought about this thing? This kind of mental substitution is common, but hard to detect in the moment. I suspect that the more easily a topic comes to mind, even if it is a topic I know nothing about beyond its name, the more likely I am to feel like an expert.
Gigerenzer’s quote shows that people will believe themselves to be well informed even if they cannot answer a basic knowledge question about the topic. Rather than substituting the question can I recall a time when I thought about this thing, patients may be substituting yet another question. Instead of analyzing their confidence in their own decision regarding cancer screening, people may be answering the question do I trust my doctor? Trust in a physician, even without any knowledge about the procedure, may be enough for people to feel extremely confident in their decisions. They don’t have to know much about their health or how a procedure will impact it; they just need to be confident that their physician does.
These types of substitutions are important for us to recognize. We should try to identify when we are falling victim to the availability bias and when we are substituting different questions that are easier for us to answer. In a well-functioning and accurate healthcare setting these biases and cognitive errors may not harm us too much, but in a world of uncertainty, we stand to lose a lot when we fail to recognize how little we actually know. Being honest about our knowledge and thinking patterns can help us develop better systems and structures in our lives to improve and guide our decision-making.
On The Opportunity To Profit From Uninformed Patients

The American medical system is in a difficult and dangerous place right now. Healthcare services have become incredibly expensive, and the entire system has become so complex that few people fully understand it and even fewer can successfully navigate it to get appropriate care that they can reasonably afford. My experience is that many people don’t see value in much of the care they receive or in many of the actors connected with their care. They know they need insurance to afford their care, but they really can’t see what value their insurance provides – it often appears to be more of a frustration than something most people appreciate. The same can be true for primary care, anesthesiologists, and the variety of healthcare benefits that employers may offer to their employees. There seem to be lots of people ready to profit from healthcare, but not a lot of people ready to provide real value to the people who need it.
 
These sentiments are all generalizations, and of course many people really do see value in at least some of their healthcare and are grateful for the care they receive. However, the complexity, the lack of transparency, and the ever-climbing costs of care have people questioning the entire system, especially at a moral and ethical level. I think a great deal of support for Medicare for All, or universal healthcare coverage, comes from the sense that profit within medicine may be unethical, and from a lack of trust that stems from an inability to see anything other than a profit motive in many healthcare actors and services.
 
Gerd Gigerenzer writes about this idea in his book Risk Savvy. The book is not about healthcare specifically, but Gigerenzer uses healthcare to show the importance of being risk literate in today’s complex world. Medical health screening in particular is a good space to demonstrate the harms that can come from misinformed patients and doctors. A failure to understand and communicate risk can harm patients, and it can create perverse incentives for healthcare systems by providing them the opportunity to profit from uninformed patients. Gigerenzer quotes Dr. Otis Brawley, who had been director of the Georgia Cancer Center at Emory in Atlanta.
 
In Dr. Brawley’s telling, Emory could have screened 1,000 men at a mall for prostate cancer, and the hospital could have made $4.9 million in billing for the tests. Additionally, the hospital would have profited from future services when the men returned for other unrelated healthcare concerns as established patients. In Dr. Brawley’s experience, the hospital could tell him how much it could profit from the tests, but could not tell him whether screening 1,000 men early for prostate cancer would have actually saved any lives among the men screened. Dr. Brawley knew that screening that many men would lead to false positive tests, and to unnecessary stress and further diagnostic care for those false positives – again, medical care that Emory would profit from. The screenings would also identify men with prostate cancer that was unlikely to impact their future health, but that would nevertheless lead to treatment leaving the men impotent or potentially incontinent. The hospital would profit, but its patients would be worse off than if they had never been screened. Dr. Brawley’s experience was that the hospital could identify avenues for profit, but could not identify avenues for providing real value in the healthcare services it offered.
 
Gigerenzer found this deeply troubling. A failure to understand and communicate the risks of prostate cancer (which are more complex than I can cover here) presents an opportunity for healthcare providers to profit by pushing unnecessary medical screening and treatment onto patients. Gigerenzer also notes that profiting from uninformed patients is not limited to cancer screening. Doctors who are not risk literate cannot adequately explain the risks and benefits of treatment, and their patients cannot make the best decisions for themselves. This situation needs to change if hospitals want to keep the trust of their patients and avoid becoming hated entities that fail to demonstrate value. Otherwise they will go the way of health insurance companies, with frustrated patients wanting to eliminate them altogether.
 
Wrapping up the quote from Dr. Brawley, Gigerenzer writes, “Profiting from uninformed patients is unethical. Medicine should not be a money game.” I believe that Gigerenzer and Dr. Brawley are right, and I think that all healthcare actors need to clearly demonstrate their value; otherwise any profits they earn will make them look like money-first enterprises rather than patient-first enterprises, frustrating the public and breeding distrust in the medical field. In the end, this is going to be harmful for everyone involved. Demonstrating real value in healthcare is crucial, and profiting from uninformed patients will diminish the value provided and hurt trust, making the entire healthcare system in our country even worse.

Risk Literacy Builds Trust

In his book Risk Savvy, Gerd Gigerenzer writes about a private medical panel and lecture series that he participated in. Gigerenzer gave a presentation about the importance of risk literacy between doctors and their patients and how frequently both misinterpret medical statistics. Regarding the dangers this could pose for the medical industry, Gigerenzer wrote the following, recapping a discussion he had with the CEO of the organization hosting the lectures and panel:

“I asked the CEO whether his company would consider it an ethical responsibility to do something about this key problem. The CEO made it clear that his first responsibility is with the shareholders, not patients or doctors. I responded that the banks had also thought so before the subprime crisis. At some point in the future, patients will notice how often they are being misled instead of informed, just as bank customers eventually did. When this happens, the health industry may lose the trust of the public, as happened to the banking industry.”

I focus a lot on healthcare since that is the space where I started my career and where I focused most of my studies during graduate school. I think Gigerenzer is correct in noting that risk literacy builds trust, and that a lack of risk literacy can translate into a lack of trust. Patients trust doctors because health and medicine are complex, and doctors are viewed as learned individuals who can decipher that complexity to help others live well. However, modern medicine continues to move into more and more complex territory where statistics and risk play a prominent role. Understanding genetic test results, knowing whether a given medicine will work for someone based on their microbiome, and using and interpreting AI tools all require proficient risk literacy. If doctors can’t build risk literacy skills, and if they cannot communicate risk to patients, then patients will feel misled, and the trust doctors enjoy will slowly diminish.

Gigerenzer did not feel that his warning at the panel was well received. “The rest of the panel discussion was about business plans, which really captured the emotions of the health insurers and politicians present. Risk-literate doctors and patients are not part of the business.”

Healthcare has to be patient centered, not shareholder centered. If healthcare is not about patients, then the important but not visible and not always profitable work that is necessary to build risk literacy and build trust won’t take place. Eventually, patients will recognize when they are placed behind shareholders in terms of importance to a hospital, company, or healthcare system, and the results will not be good for their health or for the shareholders.

Medical Progress

What does medical progress look like? To many, medical progress looks like new machines, artificial intelligence to read your medical reports and x-rays, or new pharmaceutical medications to solve all your ailments with a simple pill. However, much of medical progress might be improved communication, better management and operating procedures, and better understandings of statistics and risk. In the book Risk Savvy, Gerd Gigerenzer suggests that there is a huge opportunity for improving physician understanding of risk, improved communication around statistics, and better processes related to risk that would help spur real medical progress.

 

He writes, “Medical progress has become associated with better technologies, not with better doctors who understand these technologies.” Gigerenzer argues that there is currently an “unbelievable failure of medical schools to provide efficient training in risk literacy.” Much of the focus of medical schools and physician education is on memorizing facts about specific disease states, treatments, and how a healthy body should look. What is not focused on, in Gigerenzer’s 2014 argument, is how physicians understand the statistical results of empirical studies, how they interpret risk given a specific biological marker, and how they can communicate risk to patients in a way that adequately informs their healthcare decisions.

 

Our health is complex. We all have different genes, different family histories, different exposures to environmental hazards, and different lifestyles. These factors interact in many complex ways, and our health is often a downstream consequence of many fixed factors (like genetics) and many social determinants of health (like whether we have a safe park where we can walk, or whether we grew up in a house infested with mold). Understanding how all these factors interact and shape our current health is not easy.

 

Adding new technology to the mix can help us improve our treatments, our diagnoses, and our lifestyle or environment. However, simply layering new technology onto existing complexity is not enough to really improve our health. Medical progress requires better ways to use and understand the technology that we introduce, otherwise we are adding layers to the existing complexity. If physicians cannot understand, cannot communicate, and cannot help people make reasonable decisions based on technology and the data that feeds into it, then we won’t see the medical progress we all hope for. It is important that physicians be able to understand the complexity, the risk, and the statistics involved so that patients can learn how to actually improve their behaviors and lifestyles and so that societies can address social determinants of health to better everyone’s lives.
Risk Literacy and Emotional Stress

In Risk Savvy Gerd Gigerenzer argues that better risk literacy could reduce emotional stress. To emphasize this point, Gigerenzer writes about parents who received false positive medical test results for their infants. Their children had been screened for biochemical disorders, and the tests indicated that a disorder was present. However, upon follow-up screenings and evaluations, the children were found to be perfectly healthy. Nevertheless, in the long run (four years later) parents who initially received a false positive result were more likely than other parents to say that their children required extra parental care, that their children were more difficult, and that they had more dysfunctional relationships with their children.

 

Gigerenzer suggests that the survey results represent a direct parental response to initially receiving a false positive test when their child was a newborn. He argues that parents received the biochemical test results without being told about the chance of false positives and, due to a general lack of risk literacy, without understanding how common false positives are. Parents reacted strongly to the bad news of the test, and somewhere in their minds, even after the result was proven to be a false positive, they never adjusted their evaluations of their children; the false positive test in some ways became a self-fulfilling prophecy.

 

Writing about Gigerenzer’s argument now, it feels more far-fetched than it did on an initial reading, but I think his general claim that risk literacy and emotional stress are tied together is probably accurate. Regarding the parents in the study, he writes, “risk literacy could have moderated emotional reactions to stress that harmed these parents’ relation to their child.” Gigerenzer suggests that parents had strong negative emotional reactions when their children received a false positive and that those initial reactions carried four years into the future. However, had doctors better explained the chance of a false positive and better communicated next steps, the strong negative emotional reaction could have been avoided, and parents would not have spent four years believing their child was somehow more fragile or more needy than other children. I recognize that receiving a diagnosis no parent wants to hear is stressful, and I can see how better risk communication could reduce some of that stress, but I suspect the study also picked up on other factors. I think the results as Gigerenzer reports them overhype the connection between risk literacy and emotional stress.

 

Nevertheless, risk literacy is important for all of us living in today’s complex and interconnected world. We are constantly presented with risks, and new risks can seemingly pop up anywhere at any time. Being able to decipher and understand risk is important so that we can adjust and modulate our activities and behaviors as our environment and circumstances change. Doing so successfully should reduce our stress, while struggling to comprehend risk and adjust our behaviors and beliefs is likely to increase emotional stress. When we don’t understand risks appropriately, we can become overly fearful, spend money on unnecessary insurance, and stress ourselves over incorrect information. Developing better charts, better communication tools, and better information about risk will help individuals improve their risk literacy, and will hopefully reduce risk by allowing individuals to successfully adjust to the risks they face.
Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics a professor of mine had us think about drug testing student athletes. We ran through a few scenarios where we calculated how many true positive and how many false positive test results we should expect if we oversaw a university program to drug test student athletes on a regular basis. The results were surprising, and a little hard to understand at first.

 

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to produce more false positive tests than true positive tests. The big determining factors are the accuracy of the test (its sensitivity and its false positive rate) and the percentage of students actually using illicit drugs. A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does indeed use drugs. The dilemma we discussed occurs when you have a test with some percentage of error and a large student athlete population with a minimal percentage of drug users. In this instance you cannot be confident that a positive test result is accurate. You will receive a number of positive tests, but most of them will actually be false positives.
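To make the arithmetic concrete, here is a short sketch in Python. All of the numbers (1,000 athletes, 2% of whom actually use drugs, a test with 95% sensitivity and a 5% false positive rate) are invented for illustration:

```python
# Hypothetical numbers for a drug-testing program; none of these figures
# come from a real test or a real athletic program.
population = 1000           # student athletes tested
prevalence = 0.02           # share who actually use illicit drugs
sensitivity = 0.95          # P(positive test | user)
false_positive_rate = 0.05  # P(positive test | non-user)

users = population * prevalence               # 20 athletes
non_users = population - users                # 980 athletes

true_positives = users * sensitivity                # 19 correct positives
false_positives = non_users * false_positive_rate   # 49 incorrect positives

# Even with a fairly accurate test, most positives come from non-users.
print(round(true_positives), round(false_positives))  # 19 49
share_real = true_positives / (true_positives + false_positives)
print(round(share_real, 2))                           # 0.28
```

With these assumed numbers, fewer than a third of the positive results point at actual drug users, which is exactly the dilemma the class exercise revealed.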

 

In class, our teacher walked us through this example verbally before creating some tables we could use to multiply the percentages ourselves and see that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare event. Our teacher explained that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests, as we are learning today). The challenge, as our professor explained, is that the math is complicated, and it is hard to explain to a person who just received a positive cancer test that they likely don’t have cancer. The statistics are hard to understand on their own.

 

However, Gerd Gigerenzer doesn’t think this problem is as limiting as the long exercise in my graduate course made it seem. In Risk Savvy Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to work through, Gigerenzer suggests, can be digested in simple charts using natural frequencies. Natural frequencies are counts we can actually understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

 

Telling someone that the incidence of cancer in the population is only 1%, and that the chance of a false positive test is 9%, and then trying to convince them that they still likely don’t have cancer, is confusing. However, if you explain that for every 1,000 people who take a particular cancer test only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear up. Of the 10 who do have cancer, the test correctly identifies 9, providing 9 true positive results for every 1,000 tests (adjust according to the population and test sensitivity). The false positives can then be explained by saying that among the 990 people who really don’t have cancer, the test will err and tell 89 of them (9% in this case) that they do have cancer. So 89 individuals will receive false positives while only 9 people will receive true positives. Since 89 is far greater than 9, a positive test is no guarantee of actually having cancer; in this example only 9 of the 98 positive results, about 9%, are real.
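The same natural-frequency arithmetic can be written out in a few lines of code, using only the whole-number counts from the example above (1% incidence, a test that catches 9 of 10 real cases, and a 9% false positive rate):

```python
# Natural frequencies for the screening example: counts, not percentages.
tested = 1000
have_cancer = 10                     # 1% of 1,000 actually have cancer
cancer_free = tested - have_cancer   # the other 990 do not

true_positives = 9                            # the test catches 9 of the 10 real cases
false_positives = round(cancer_free * 0.09)   # 9% of 990 -> 89 false alarms

positives = true_positives + false_positives  # 98 positive results in total
print(positives)                              # 98
print(round(true_positives / positives, 2))   # 0.09 -> only ~9% of positives are real
```

Working with counts of people instead of conditional probabilities is precisely what makes the result feel obvious rather than paradoxical.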

 

Gigerenzer uses very helpful charts in his book to show that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to reach better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Richard Thaler and Cass Sunstein encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed is better tools and aids for comprehension. Rather than developing paternalistic policies to nudge people toward behaviors they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.
Aspiration Rules

My last post was all about satisficing: making decisions based on alternatives that satisfy our wants and needs and that are good enough, even if they are not the absolute best option. Satisficing contrasts with maximizing. When we maximize, we search for the single best alternative, the option that cannot be improved upon. Maximizing is certainly a great goal in theory, but in practice it can be worse than satisficing. As Gerd Gigerenzer writes in Risk Savvy, “in an uncertain world, there is no way to find the best.” Satisficing and using aspiration rules, he argues, is the best way to make decisions and navigate our complex world.

 

“Studies indicate that people who rely on aspiration rules tend to be more optimistic and have higher self-esteem than maximizers. The latter excel in perfectionism, depression, and self-blame,” Gigerenzer writes. Aspiration rules differ from maximizing because the goal is not to find the absolute best alternative, but to find an alternative that meets basic, pre-defined, and reasonable criteria. Gigerenzer uses the example of buying pants. A maximizer may spend the entire day going from store to store, checking all the options, trying every pair of pants, and comparing prices at each store until they have found the best pair available for the lowest cost and best fit. Yet at the end of the day they still won’t truly know that they found the best option; there will always be the possibility that they missed a store or a deal someplace else. In contrast to a maximizer, an aspirational shopper would go into a store looking for a certain style at a certain price. If they found a pair of pants that fit right and was within the right price range, they could be satisfied and make a purchase without checking every store and without wondering whether they could have gotten a better deal elsewhere. They had basic aspirations that they could reasonably meet to be satisfied.
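The contrast between the two shopping strategies can be sketched in a few lines of Python; the pants data and the aspiration criteria here are entirely made up for illustration:

```python
# Each pair of pants is (price, fit_score); the data is invented.
pants = [(80, 6), (45, 7), (60, 9), (30, 8), (95, 10)]

def aspiration_shopper(options, max_price=50, min_fit=7):
    """Take the first option meeting the pre-defined criteria, then stop."""
    for checked, (price, fit) in enumerate(options, start=1):
        if price <= max_price and fit >= min_fit:
            return (price, fit), checked
    return None, len(options)  # criteria unmet; relax them and try again

def maximizer(options):
    """Examine every option and take the best fit, breaking ties on price."""
    return max(options, key=lambda p: (p[1], -p[0])), len(options)

choice, n = aspiration_shopper(pants)  # (45, 7) after checking only 2 pairs
best, m = maximizer(pants)             # (95, 10) after checking all 5
print(choice, n, best, m)
```

The aspirational shopper walks out after examining two pairs; the maximizer must look at everything, and in an uncertain world the list of options is never really complete.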

 

Maximizers set unrealistic goals and expectations for themselves while those using aspiration rules are able to set more reasonable, achievable goals. This demonstrates the power and utility of satisficing. Decisions have to be made, otherwise we will be wandering around without pants as we try to find the best possible deal. We will forego opportunities to get lunch, meet up with friends, and do whatever it is we need pants to go do. This idea is not limited to pants and individuals. Businesses, institutions, and nations all have to make decisions in complex environments. Maximizing can be a path toward paralysis, toward CYA behaviors (cover your ass), and toward long-term failure. Start-ups that can satisfice and make quick business decisions and changes can unseat the giant that attempts to maximize every decision. Nations focused on maximizing every public policy decision may never actually achieve anything, leading to civil unrest and a loss of support. Institutions that can’t satisfice also fail to meet their goals and missions. Allowing ourselves and our larger institutions to set aspiration rules and satisfice, all while working to incrementally improve with each step, is a good way to actually move toward progress, even if it doesn’t feel like we are getting the best deal in any given decision.

 

The aspiration rules we use can still be high, demanding of great performance, and drive us toward excellence. Another key difference, however, between the use of aspiration rules and maximizing is that aspiration rules can be more personalized and tailored to the realistic circumstances that we find ourselves within. That means we can create SMART goals for ourselves by using aspiration rules. Specific, measurable, achievable, realistic, and time-bound goals have more in common with a satisficing mentality than goals that align with maximizing strategies. Maximizing doesn’t recognize our constraints and challenges, and may leave us feeling inadequate when we don’t become president, don’t have a larger house than our neighbors, and are not a famous celebrity. Aspiration rules on the other hand can help us set goals that we can realistically achieve within reasonable timeframes, helping us grow and actually reach our goals.
Satisficing

Satisficing gets a bad rap, but it isn’t actually that bad a way to make decisions, and it realistically accommodates the constraints and challenges that decision-makers in the real world face. None of us would like to admit when we are satisficing, but the reality is that we satisfice all the time, and we are often happy with the results.

 

In Risk Savvy, Gerd Gigerenzer recommends satisficing when trying to choose what to order at a restaurant. Regarding this strategy for ordering, he writes:

 

“Satisficing: This … means to choose the first option that is satisfactory; that is, good enough. You need the menu for this rule. First, you pick a category (say, fish). Then you read the first item in this category, and decide whether it is good enough. If yes, you close the menu and order that dish without reading any further.”
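The rule reads like a tiny algorithm, and it can be written as one; the menu contents and the “good enough” test below are invented for illustration:

```python
# A literal sketch of Gigerenzer's menu rule; the menu itself is made up.
menu = {
    "fish": ["grilled salmon", "fish and chips", "seared tuna"],
    "pasta": ["carbonara", "pesto gnocchi"],
}

def satisfice(menu, category, good_enough):
    """Read the chosen category in order; stop at the first satisfactory dish."""
    for dish in menu[category]:
        if good_enough(dish):
            return dish        # close the menu; don't read any further
    return None                # nothing satisfied; try another category

# Here "good enough" is simply any dish that isn't fried.
print(satisfice(menu, "fish", lambda d: "chips" not in d))  # grilled salmon
```

The point of the rule is the early return: once a dish clears the bar, the remaining options are never even read.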

 

Satisficing works because we often have more possibilities than we have time to carefully weigh and consider. If you have never been to the Cheesecake Factory, reading each option on the menu for the first time would probably take you close to 30 minutes. If you are eating on your own and don’t have any time constraints, then sure, read the whole menu, but the staff will probably be annoyed with you. If you are out with friends or on a date, you probably don’t want to take 30 minutes to order, and you will feel pressured to make a choice relatively quickly without having full knowledge and information regarding all your options. Satisficing helps you make a selection that you can be relatively confident you will be happy with given some constraints on your decision-making.

 

The term satisficing was coined by the Nobel Prize-winning political scientist and economist Herbert Simon, and I remember a professor of mine telling a story about Simon’s decision to remain at Carnegie Mellon University in Pittsburgh. When asked why he hadn’t taken a position at Harvard or another more prestigious Ivy League school, Simon replied that his wife was happy in Pittsburgh, and that while Carnegie Mellon wasn’t as renowned as Harvard, it was still a good school and still offered him enough of what he wanted to stay. In other words, Carnegie Mellon satisfied his basic needs and met enough of his criteria to make him happy, even though a school like Harvard would have maximized his prestige and influence. Simon was satisficing.

 

Without always recognizing it, we turn to satisficing for many of our decisions. We often can’t buy the perfect home (because of timing, price, and other bidders), so we satisfice and buy the first home we can get a good offer on that meets enough of our desires (even if it doesn’t fit all of them perfectly). The same goes for jobs, cars, where we get take-out, what movie we rent, what new clothes we buy, and more. Carefully analyzing every potential decision can be frustrating and exhausting. We will constantly doubt whether we made the best choice, and we may be too paralyzed to make a decision in the first place. If we satisfice, however, we accept that we are not making the best choice but an adequate one that satisfies the greatest number of our needs while simplifying the choice we have to make. We can live with what we get and move on without the constant doubt and loss of time we might otherwise experience. Satisficing, while getting a bad rap from those who favor rationality in all instances, is actually a pretty good decision-making heuristic.
A Leadership Personality

I find personality trait tests misleading. I know they are used by companies in hiring decisions, and I know that the Big Five personality traits have been shown to predict political party support, but I still feel that they are misapplied and misunderstood. Specifically, I think that the way we interpret them fails to take context into consideration, which may make them next to useless. Gerd Gigerenzer considers this lapse in our judgment when thinking about the way we discuss and evaluate leadership personalities.

 

In Risk Savvy he writes, “leadership lies in the match between person and environment, which is why there is no single personality that would be a successful leader at all historical times and for all problems to solve.” A military general might make a great leader on the battlefield, but not in a public education setting. A surgeon leading a hospital during the American Civil War might not make a good leader at Columbia University Medical Center today, and the leader who thrives at a prestigious New York City medical center might not make a great leader at Northeastern Nevada Regional Hospital. Leadership is in many ways context dependent. The problems that a leader has to address may call for different approaches and solutions, which may be supported or sabotaged by particular personality types. Someone who is an outgoing socialite may be the right type of leader in New York City, but might be bored in rural Nevada and come across as overbearing to those who prefer a rural lifestyle. What Gigerenzer suggests may be the most important quality for a leader is not some form of leadership personality, but the right experiences and the ability to apply particular rules of thumb and intuitions to a given problem.

If the appropriate leadership personality is so context dependent, it may also be worth asking whether our personality in general is context dependent. I have not studied personality and personality tests deeply enough to have real evidence to back me up, but I would expect it to be. Dan Pink, in When, shows that we are most productive and in our most positive mood about four hours after waking, and have the least energy and worst mood around midday (eight to ten hours after we wake up). It seems to me that my answers on a personality test would differ if I took it at the peak of my day versus during the deepest trough. I would also expect my personality to manifest differently in an online multiple-choice test than in an unexpected car emergency, or during a game of cards with my best friends from high school. To say that I have one personality that shines through in all situations seems misleading, as does saying that I have a particular level of any given personality trait that stays constant through the day and from experience to experience.

Gigerenzer’s quote above is about leadership and the idea that there is no single personality that makes a good leader. I think it is reasonable to extend that idea to personality generally: our personality is context dependent, and being successful as individuals also involves rules of thumb built from experience. What matters, then, is to develop and cultivate experiences and rules of thumb that can guide us toward success. Incorporating goals, feedback, and tools that help us recall successful approaches and strategies within a given context can help us become leaders and succeed, regardless of what a personality test tells us and regardless of the context we find ourselves in.
A Leader’s Toolbox

In the book Risk Savvy, Gerd Gigerenzer describes the work of top executives as inherently intuitive. Executives and managers at high-performing companies are constantly pressed for time. There are more decisions, more incoming items that need attention, and more things to work on than any executive or manager can adequately handle alone. Consequently, delegation is necessary, as is quick decision-making based on intuition. “Senior managers routinely need to make decisions or delegate decisions in an instant after brief consultation and under high uncertainty,” writes Gigerenzer. This combination of speed and uncertainty is where intuition comes into play, and the ability to navigate these situations is what truly comprises the leader’s toolbox.

Gigerenzer stresses that the intuitions developed by top managers and executives are not arbitrary. Successful managers and companies tend to develop similar toolboxes that encourage trust and innovation. While many individual decisions are intuitive, the structure of the leader’s toolbox is often visible and intentional. As an example, Gigerenzer highlights a line of thinking he uncovered while working on a previous book. He writes, “hire well and let them do their jobs reflects a vision of an institution where quality control (hire well) goes together with a climate of trust (let them do their jobs) needed for cutting-edge innovation.”

In many companies and industries, the work to be done is incredibly complex, and a single individual cannot manage every decision. Decision-making necessarily needs to be decentralized for the individual units of a team to work effectively and efficiently. Hiring talented people and giving them the autonomy and tools they need to succeed is the best way to get the right work done well.

Gigerenzer continues, “Good leadership consists of a toolbox full of rules of thumb and the intuitive ability to quickly see which rule is appropriate in which context.”
A leader’s toolbox doesn’t consist of specific lists of what to do in certain situations, or even specific skills that are easy to check off on a resume. It is built through experience in a diverse range of settings and through intuitions about everything from hiring to teamwork to delegation. Because innovation is always uncertain and always involves risk, leaders must develop intuitive skills and be able to make quick, accurate judgements about how best to handle new challenges and obstacles. Intuition and gut decisions are an essential part of leadership today, even if we don’t like to admit that we make important decisions on intuition.