Missing Feedback

I generally think we are overconfident in our opinions. We should all be more skeptical that we are right, that we have made the best possible decisions, and that we truly understand how the world operates. Our worldviews can only be informed by our experiences and by the information we take in about events, phenomena, and stories in the world. We will always be limited because we can’t take in all the information the world has to offer. Additionally, beyond simply not being able to hold all the information possible, we are unable to get the appropriate feedback we need in all situations for comprehensive learning. Some feedback is hazy and some feedback is impossible to receive at all. This means that we cannot be sure that we have made the best choices in our lives, even if things are going well and we are making our best efforts to study the world.

In Nudge, Cass Sunstein and Richard Thaler write, “When feedback does not work, we may benefit from a nudge.” When we can’t get immediate feedback on our choices and decisions, or when we get feedback that is unclear, we can’t adjust appropriately for future decisions. We can’t learn, we can’t improve, and we can’t make the best choices when we return to a decision-situation. However, we can observe where situations of poor feedback exist, and we can design those decision-spaces to provide subtle nudges that help people make better decisions in the absence of feedback. Visual aids showing how much money people need for retirement, and how much they can expect to have based on current savings rates, are a helpful nudge in a situation where we don’t get feedback on how well we are saving money. There are devices that glow red or green based on your home’s current energy usage and efficiency, providing a subtle nudge to remind people not to use appliances at peak demand times and giving people feedback on energy usage that they normally wouldn’t receive. Nudges such as these can provide feedback, or can provide helpful information in its absence.

Sunstein and Thaler also write, “many of life’s choices are like practicing putting without being able to see where the balls end up, and for one simple reason: the situation is not structured to provide good feedback. For example, we usually get feedback only on the options we select, not the ones we reject.” Missing feedback is an important consideration because the lack of feedback influences how we understand the world and how we make decisions. The fact that we cannot get feedback on options we never chose should be nearly paralyzing. We can’t say how the world works if we never experiment and try something different. We can settle into a decent rhythm and routine, but we may be missing out on better lifestyles, happier lives, or better societies if we made different choices. However, we can never receive feedback on these non-choices. I don’t know that this means we should necessarily try to constantly experiment at the cost of settling in with the feedback we can receive, but I do think it means we should discount our own confidence and accept that we don’t know all there is. I also think it means we should look to increase nudges, use more visual aids, and structure our choices and decisions in ways that help maximize useful feedback to improve learning for future decision-making.
Hindsight Bias and Misleading Headlines

I absolutely hate internet ads with headlines along the lines of “Analyst Who Predicted Stock Market Crash Makes New Prediction.” These headlines are nothing but clickbait, and reading Daniel Kahneman’s book Thinking Fast and Slow has given me even more reason to hate them. They play on cognitive errors in our thinking, particularly our hindsight bias. When we look back at previous choices, decisions, and important events, whether in our individual lives or across the globe, our present state of being always seems inevitable. It was clear that the internet would lead to major social network platforms, and that those platforms would then contribute to major challenges and problems with misinformation. How could anyone have failed to see this as far back as 2004?

The problem, of course, is that the inevitable present moment, and the pathway that seems so obvious in retrospect, was never clear at all. There was no way to predict a major housing bubble and financial collapse in 2008 if you were living in 2006. Headlines introducing some genius who saw what the rest of us couldn’t see before the Great Recession, and then claiming that this person has made another prediction, pull at our emotions and play on hindsight bias in a way that is deliberately misleading. The fact that someone made an unlikely prediction that came true is not a reason to believe they will be correct again in the future. If anything, we should expect some version of regression to the mean with their predictions, and assume that their next grand claim is wrong.
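This regression-to-the-mean point is easy to demonstrate with a toy simulation (my own illustrative sketch, not something from Kahneman’s book; the pool size and hit rate are made-up numbers): give a large pool of forecasters with zero skill two rounds of predictions, then check whether the ones who got lucky in round one outperform the crowd in round two.

```python
import random

random.seed(0)

N = 100_000
HIT_RATE = 0.10  # every forecaster predicts "crash" 10% of the time, by pure chance

# Round 1: a crash actually occurs. The "geniuses" are the skill-free
# forecasters who happened to call it.
round1 = [random.random() < HIT_RATE for _ in range(N)]
geniuses = [i for i, hit in enumerate(round1) if hit]

# Round 2: an independent outcome; a forecaster is "right" when their
# new coin-flip prediction matches it.
crash2 = random.random() < HIT_RATE
round2 = [(random.random() < HIT_RATE) == crash2 for _ in range(N)]

acc_geniuses = sum(round2[i] for i in geniuses) / len(geniuses)
acc_everyone = sum(round2) / N

# The past "geniuses" are no more accurate than the crowd: their lucky
# first call carried no information about the second round.
print(f"round-2 accuracy of past 'geniuses': {acc_geniuses:.3f}")
print(f"round-2 accuracy of everyone:        {acc_everyone:.3f}")
```

The two accuracies come out essentially identical: selecting on one lucky success tells you nothing about future performance, which is exactly why the headline’s implied logic is misleading.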

Rather than using hindsight bias to convince more people to follow links to bogus news stories, we should be more cautious with hindsight bias and our proclivity toward inaccurate heuristics. As Kahneman writes, “Hindsight is especially unkind to decision makers who act as agents for others—physicians, financial advisers, third-base coaches, CEOs, social workers, diplomats, politicians. We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious only after the fact. There is a clear outcome bias. When the outcomes are bad, the clients often blame their agents for not seeing the handwriting on the wall—forgetting that it was written in invisible ink that became legible only afterward. Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.”

Our key decision-makers can be punished by our hindsight bias. It can cloud our judgment about what to expect in the future and lead us to trust individuals who don’t deserve trust, and to mistrust those who are making the best possible decisions given a set of serious constraints. Hindsight bias deserves greater recognition and more respect than being used for misleading headlines.
Understanding the Past

I am always fascinated by the idea, which continually demonstrates its validity in my own life, that the more we learn about something, the more we realize how little we actually know about it. I am currently reading Yuval Noah Harari’s book Sapiens: A Brief History of Humankind, and I am continually struck by how often Harari brings in events from mankind’s history that I had never heard about. The more I learn about the past, or about any given subject, the more I realize how little knowledge I have ever had, and how limited, narrow, and sometimes flat-out inaccurate my understandings have been.

This is particularly important when it comes to how we think about the past. I believe very strongly that our reality and the worlds we live in are mostly social constructions. The trees, houses, and roads are all real, but how we understand those physical objects, the spaces in which we operate, and how we use the real material things in our world is shaped to an incredible degree by social constructions and the relationships we build between ourselves and the world we inhabit. In order to understand these constructions, and in order to shape them for a future that we want to live in (and are physiologically capable of living in), we need to understand the past and make predictions about the future with new social constructs that enable continued human flourishing.

To some extent, this feels easy and natural to us. We all have a story, and we learn and adopt family stories, national stories, and global stories about the grand arc of humanity. But while our stories seem to be shared, and while we seem to know where we are heading, we all operate based on individual understandings of the past, and of where that means we are (or should be) heading. As Daniel Kahneman writes in his book Thinking Fast and Slow, “we believe we understand the past, which implies that the future also should be knowable, but in fact we understand the past less than we believe we do.”

As I laid out to begin this post, there is always so much more complexity and nuance to anything that we might study and be familiar with than we often realize. We can feel that we know something well when we are ignorant of the nuance and complexity. When we start to really untangle something, whether it be nuclear physics, the history of the American Confederacy, or how our fruits and veggies get to the supermarket, we realize that we really don’t know and understand anything as well as we might intuitively believe.

When we lack a deep and complex understanding of the past, either because we just don’t know about something or because we never had an accurate and detailed account of it, we are likely to misinterpret and misunderstand how we got to our current point. With a limited historical perspective and understanding, we will incorrectly assess where our best future lies. It is important that we recognize how limited our knowledge is, and remember that these limits shape the extent to which we can make valid predictions about the future.
Substitution Heuristics

I think heuristics are underrated. We should discuss them as a society far more than we do. We barely acknowledge heuristics, but if we look closely, they are at the heart of many of our decisions, beliefs, and assumptions. They save us a lot of work and help us move through the world pretty smoothly, yet they are rarely discussed directly or even consciously recognized.

In Thinking Fast and Slow, Daniel Kahneman highlights substitution heuristics and explains their role:

“The target question is the assessment you intended to produce.
The heuristic question is the simpler question that you answered instead.”

I have already written about our brain substituting easier questions for harder questions, but the idea of heuristics gives the process a deeper dimension. Kahneman defines a heuristic, writing, “The technical definition of heuristic is a simple procedure that helps find adequate, though often imperfect, answers to difficult questions.”

In my own life, and I imagine I am a relatively average case, I have relied on heuristics to help me make a huge number of decisions. I don’t know the best possible investment strategies for my future retirement, but as a heuristic, I know that working with an investment advisor to manage mutual funds and IRAs can be an adequate (even if not perfect) way to ensure I save for the future. I don’t know the healthiest possible foods to eat and what food combinations will maximize my nutrient intake, but as a heuristic I can ensure that I have a colorful plate with varied veggies and not too many sweets to ensure I get enough of the vitamins and nutrients that I need.

We have to make a lot of difficult decisions in our lives. Most of us don’t have the time or the ability to compile all the information we need on a given subject to make a fully informed decision, and even if we try, most of us don’t have a reasonable way to sort through contrasting and competing information to determine what is true and what the best course of action would be. Instead, we make substitutions and use heuristics to figure out what we should do. Instead of recognizing that we are using heuristics, however, we ascribe a higher level of confidence and certainty to our decisions than is warranted. What we do, how we live, and what we believe become part of our identity, and we fail to recognize that we are adopting a heuristic to achieve some version of what we believe to be a good life. When pressed to think about it, our mind creates a justification for our decision that doesn’t acknowledge the heuristics in play.

In a world where we were quicker to recognize heuristics, we might be able to live with a little distance between ourselves, our decisions, and our beliefs. We could acknowledge that heuristics are driving us, and be more open to change and more willing to be flexible with others. Acknowledging that we don’t have all the answers (that we don’t even have all the necessary information) and are operating on substitution heuristics for complex questions, might help us be less polarized and better connected within our society.
What You See Is All There Is

In Thinking Fast and Slow, Daniel Kahneman gives us the somewhat unwieldy acronym WYSIATI – what you see is all there is. The acronym describes a phenomenon that stems from how our brains work. System 1, the name Kahneman gives to the part of our brain which is automatic, quick, and associative, can only take in so much information. It makes quick inferences about the world around it and establishes a simple picture of the world for System 2, the thoughtful, calculating part of our brain, to work with.

What you see is all there is means that we are limited by the observations and information that System 1 can take in. It doesn’t matter how good System 2 is at processing and making deep insights about the world if System 1 is passing along poor information. Garbage in, garbage out, as the computer science majors like to say.

Daniel Kahneman explains in detail what this means for our day-to-day lives. He writes, “As the WYSIATI rule implies, neither the quantity nor the quality of the evidence counts for much in subjective confidence. The confidence that individuals have in their beliefs depends mostly on the quality of the story they can tell about what they see, even if they see little.”

System 2 doesn’t recognize that System 1 hands it incomplete and limited information. It chugs along believing that the information handed off by System 1 is everything that it needs to know. It doesn’t ask for more information, it just accepts that it has been handed a complete data set and begins to work. System 2 creates a solid narrative out of whatever information System 1 gives it, and only momentarily pauses if it notices an inconsistency in the story it is stitching together about the world. If it can make a coherent narrative, then it is happy and doesn’t find a need to look for additional information. What you see is all there is, there isn’t anything missing.

But we know that we only take in a limited slice of the world. We can’t sense the Earth’s magnetic pull, we can’t see in ultraviolet or infrared, and we have no way of knowing what is really happening in another person’s mind. When we read a long paper or finish a college course, we remember some of it, but not everything. Our minds can only hold so much information, and System 2 is limited to what can be observed and retained. This should be a huge problem for our brain: we should recognize enormous blind spots and be paralyzed into inaction by a lack of information. But this isn’t what happens. We don’t even notice the blind spots. Instead, we make a story from the information we collect, building a complete world that makes sense of that information, no matter how limited it is. What you see is all there is: we make the world work, but we do so with only a portion of what is really out there, and we don’t even notice.
Data Liquidity in Healthcare

Another piece of Dave Chase’s Fair Trade for Health Care, as outlined in his book The Opioid Crisis Wake-Up Call, is what he calls Data Liquidity: the idea that you can access your data, see it, contribute to it, and take it someplace else if you want. The notion that you have control over your data – the data you produce in the world, the data which is about you – is a new and growing one in the world.

Data liquidity is a problem across all of tech right now, but it is especially important in the healthcare industry. Chase writes, “Care teams do their best work when they have the most complete view of a patient’s health status. Anything less comes with an increased risk of harm. Likewise, your employees should have easy access to their own information in a secure patient-controlled data repository – including the right to contribute their own data or take it elsewhere.”

In the world of social media, people (at least in Europe) have demanded to have the right to see their data and have it completely removed from a company’s server if they desire. In the world of finance, there is increasing pressure on the big three credit rating companies to be more transparent in how they determine an individual’s credit score, and some lawmakers want to push the companies to change what they consider and evaluate when generating a credit score. Within healthcare, the debate is on who owns a patient’s medical records. Does the medical provider own the records? Does the patient own the records? What records does the insurance company own?

Chase argues that patients need to own their medical records and have access to and control over them. Since most people get their insurance through their employers, Chase argues that it is up to businesses and companies to demand data liquidity and transparency in the contracts they establish with insurers and healthcare systems. It is up to the businesses which contract with insurers or health systems to set fair rules related to data, rules that give employees data power and ensure all of their providers have access to all of their pertinent records.

From tech to finance to healthcare, people are starting to see the importance of controlling data, and Chase is hopeful that this revolution will improve healthcare quality, reduce unnecessary procedures, and reduce healthcare costs.
Thoughts on Pharmaceutical Advertisements

“The reality is that most people hear more from pharmaceutical companies (16 to 18 hours of pharma ads per year) than from their doctor (typically under 2 hours per year),” writes Dave Chase in his book The Opioid Crisis Wake-Up Call. Chase is critical of Americans looking for a quick fix and expecting a pill to solve their problems. He says that short doctor’s appointments and a bombardment of pharmaceutical advertisements on TV contribute to the mindset that any disorder or illness can be fixed in a matter of minutes with a quick pill. Given how much we hear from drug companies, and how little time we spend with someone who is trying to work with us in depth to correct behaviors, change our thoughts, improve muscle imbalances, or make adjustments to help us live a healthier lifestyle, it isn’t hard to understand why most people think of medical care in the form of a pill.

I am wary of pharmaceutical advertisements. I don’t really understand if I am the target audience or if medical professionals are the target audience. I’m not sure if the goal is to just normalize taking pills, or if the goal is to educate patients about a potential solution for a potential problem. I’m not sure if the idea is to get people away from taking generic medication in favor of brand name drugs, or if it is to get people to try a medication and see if it helps them.

However, I also remember seeing a study which suggested that drug advertisements did help improve people’s health literacy, and did lead to patients being more likely to ask about medications which would help them, without an increase in patients asking about medications that wouldn’t be helpful for them. When primary care providers are stressed, have limited time with patients, and are likely to miss important details, having patients with goals and specific questions about beneficial medication is important for overall health gains and an improved doctor-patient relationship. Additionally, advertisements approved by the FDA and at least somewhat regulated are better places for people to gain medical information about a drug than a Reddit or Facebook post from a random person.

Ultimately, I think I fall on the side of banning direct pharmaceutical advertisements. I find they are overly broad, dangerously support the idea that all one needs is a pill to solve all health problems, and ultimately are more about pharmaceutical companies than about improving health in general. I’m not 100% sure this is the best course, but I’d put my confidence around 75% sure this is the best path to pursue. I don’t think it would hurt America to be a little less focused on pills as cures rather than focused on lifestyle changes, especially if we start to favor policy changes that would support more healthy lives.

An Illusion of Security, Stability, and Control

The online world is a very interesting place. While we frequently say that we have concerns about privacy, about how our data is being used, and about what information is publicly available to us, very few people delete their social media accounts or take real action when a data breach occurs. We have been moving more and more of our life online, and we have been more accepting of devices connected to the internet that can either be hacked or be used to tacitly spy on us than we would expect given the amount of time we spend expressing concern for our privacy.

A quick line from Tyler Cowen’s book The Complacent Class may explain the contradiction. “A lot of our contentment or even enthrallment with online practices may be based on an illusion of security, stability, and control.”

I just read Daniel Kahneman’s book Thinking Fast and Slow, in which he writes about a common mental shortcut, substitution. When we are asked difficult questions, we often substitute a simpler question that we can answer. However, we rarely realize that we do this. Cowen’s insight suggests that we make this substitution when we evaluate online practices.

Instead of thinking deeply and critically about our privacy, safety, and the security of our personal or financial information in a given context, we substitute. We ask ourselves, does this website intuitively feel legitimate and well put together? If the answer is yes, we are more likely to enter our personal information, allow our online movements to be tracked, enter our preferences, and save our credit card number.

If matching technology works well, if our order is fulfilled, and if we are provided with more content that we can continue to enjoy, we will again substitute. Instead of asking whether our data is safe or whether the value we receive exceeds the risk of having our information available, we will ask if we are satisfied with what was provided to us and if we liked the look and feel of what we received. We can pretend to answer the hard questions with illusory answers to easier questions.

In the end, we land in a place where the companies and organizations operating on the internet have little incentive to improve their systems, to innovate in ways that create disruptive changes, or to pursue big leaps forward. We are already content and we are not actually asking the hard questions which may push innovation forward. This contentment builds stagnation and prevents us from seeing the risks that exist behind the curtain. We live in our illusion that we control our information online, that we know how the internet works, and that things are stable and will continue to work, even if the outside world is chaotic. This could be a recipe for a long-term disaster that we won’t see coming because we believe we are safely in control when we are not.

Two Messages

In The Elephant in the Brain, Kevin Simler and Robin Hanson write about the ways in which we act to signal something important about ourselves that we cannot outright express. We deceive ourselves into believing that we are not sending these signals, but we recognize them, pick up on their subtle nature, and know how to respond to these cues even if we remain consciously ignorant of them. In the book, the authors focus on how we use these cues in language and communication.

The authors write, “Every remark made by a speaker contains two messages for the listener: text and subtext. The text says, ‘Here’s a new piece of information,’ while the subtext says, ‘By the way, I’m the kind of person who knows such things.’ Sometimes the text is more important than the subtext … but frequently, it’s the other way around.”

It is important to acknowledge that sometimes the text truly is the important part of our message. Because we occasionally have really important things that people need to know, and because that information outweighs the fact that we are the ones who know it and shared it, we can use those moments as cover in this game of two messages. We can believe that all our communication is about important information because there are times when the things we communicate are genuinely crucial to know. Hanson and Simler’s idea above only works if the text sometimes really is the important piece, and if we can almost always plausibly say that we are just trying to convey useful information rather than showing off what we know or what we have learned.

No matter what, our communication simultaneously says something about us and about what knowledge and information we may have. It can also say something about what we don’t know, which may be part of why we go to such great lengths to avoid seeming ignorant of something – otherwise our language might tell people we are not the kind of person who knows what everyone else knows.

Our language also tells people that we are the kind of person who cares about something, or has great attention to detail, is strict and disciplined, or is from a certain part of the country/world. Some of these signals are fairly hidden, while others are more clear and obvious. When we look more closely at the way we signal in our conversation, we can see how often our words are only part of what we are communicating.

Sabotage Information

“Our minds are built to sabotage information in order to come out ahead in social games.” In The Elephant in the Brain, Kevin Simler and Robin Hanson write about the ways in which we act out of our own self-interest without acknowledging it. We are more selfish, more deceptive, and less altruistic than we would like to admit, even to ourselves. To keep us feeling good about what we do, and to make it easier to put on a benevolent face, our brains seem to deliberately distort information to make us look like we are honest, open, and acting with the best of intentions for everyone.

“When big parts of our minds are unaware of how we try to violate social norms, it’s more difficult for others to detect and prosecute those violations. This also makes it harder for us to calculate optimal behaviors, but overall, the trade-off is worth it.” As someone who thinks critically about Stoicism and believes that self-reflection and awareness are keys to success and happiness, I find this hard to take in. It suggests that self-awareness is a bigger burden for social success than blissful unawareness. Being deluded about our actions and behaviors, Simler and Hanson suggest, helps us be better political animals and helps us climb the social hierarchy to attain a better mate, more status, and more allies. Self-awareness, their idea suggests, makes us more aware of the lies we tell ourselves about who we are, what we do, and why we do it, and makes it harder for us to lie and get ahead.

“Of all the things we might be self-deceived about, the most important are our own motives.” Ultimately, however, I think we will be better off if we can understand why we, and everyone else, believe the things we do and behave the way we do. Turning inward and recognizing how often we hide our motives and deceive ourselves and others about our actions can help us overcome bias. We can start to be more intentional about our decisions and think more critically about what we want to work toward. We don’t have to hate humanity because we lie and hide parts of ourselves from even ourselves, but we can better move through the world if we actually know what is going on. Before we become angry over a news story, before we shell out thousands of dollars for new toys, and before we make overt displays of charity, we can ask ourselves if we are doing something for legitimate reasons, or just to deceive others and appear to be someone who cares deeply about an issue or item. Slowly, we can counteract the negative externalities associated with the brain’s faulty perceptions, and we can at least make our corner of the world a little better.