Closed-Mindedness

One of the epistemic vices that Quassim Cassam describes in his book Vices of the Mind is closed-mindedness. An epistemic vice, Cassam explains, is a pattern of thought or a behavior that obstructs knowledge. Such vices systematically get in the way of learning, communicating, or holding on to important and accurate information.
Regarding closed-mindedness, Cassam writes, “in the case of closed-mindedness, one of the motivations is the need for closure, that is, the individual’s desire for a firm answer to a question, any firm answer as compared to confusion and/or ambiguity [Italics indicate quote from A.W. Kruglanski]. This doesn’t seem an inherently bad motive and even has potential benefits. The point at which it becomes problematic is the point at which it gets in the way of knowledge.”
This quote about closed-mindedness reveals a couple of interesting aspects of the way we think and the patterns of thought we adopt. First, it shows that we can become closed-minded without intending to be closed-minded people. Very few people think it is a good thing to close ourselves off from new information or diverse perspectives about how our lives should be. Instead, we seek knowledge, and we prefer feeling as though we are correct and understand the world we live in. Closed-mindedness is in some ways a by-product of living in a complex world where we have to make decisions under uncertainty. It is uncomfortable to constantly question every decision we make, and it can become paralyzing if we agonize over each one. Simply making a decision and deciding we are correct without revisiting the question is easier, but it is also characteristically closed-minded.
The second interesting point is that epistemic vices such as closed-mindedness are not always inherently evil. As I wrote in the previous paragraph, closed-mindedness (or at least a shade of it) can help us navigate an uncertain world. It can help us make an initial decision and move on in situations where we otherwise might feel paralyzed. In many instances, like purchasing socks, no real harm comes from being closed-minded. You might pay more than necessary for fancy socks, but the harm is pretty minimal.
However, closed-mindedness systematically hinders knowledge by making people unreceptive to new information that challenges existing or desired beliefs. It makes people worse at communicating information because the information they hold may be incomplete or irrelevant. Knowledge is limited by closed-mindedness, and over time this creates the potential for substantial consequences in people’s lives. Selecting a poor health insurance plan, starting a war, or spreading harmful chemical pesticides are real-world consequences that have flowed from closed-mindedness. Substantial sums of money, people’s lives, and people’s health and well-being can hang in the balance when closed-mindedness prevents people from making good decisions, regardless of the motives that made someone closed-minded and regardless of whether being closed-minded helped resolve analysis paralysis. Many of the epistemic vices and characteristics that Cassam describes manifest in our lives much as closed-mindedness does. Reducing such vices, like avoiding closed-mindedness, can help us prevent the serious harms that accompany the systematic obstruction of knowledge.

Risk Literacy & Reduced Healthcare Costs

Gerd Gigerenzer argues in his book Risk Savvy that risk literacy and reduced healthcare costs go together. By increasing risk literacy we help both doctors and patients better understand how behaviors contribute to overall health, how screenings may or may not reveal dangerous medical conditions, and whether medications will or will not make a difference for an individual’s long-term well-being. Having both doctors and patients better understand and better discuss the risks and benefits of procedures, drugs, and lifestyle changes can help us use our healthcare resources more wisely, ultimately bringing costs down.
Gigerenzer argues that much of the modern healthcare system, not just the US system but the global healthcare system, has been designed to sell more drugs and more technology. Increasing the number of people using medications, getting doctors to order more tests with new high-tech diagnostic machines, and driving more procedures became more of a goal than actually improving people’s health. Globally, health and the quality of healthcare have improved, but healthcare is often criticized as a low-productivity sector, with relatively small gains in health or efficiency for the investments we make.
I don’t know that I am cynical enough to accept all of Gigerenzer’s argument at face value, but the story of opioids, the fact that we invest much larger sums of money in cancer research than in parasitic disease research, and the ubiquitous use of MRIs in our healthcare landscape do favor Gigerenzer’s argument. There hasn’t been as much focus on improving doctor and patient statistical reasoning, and we haven’t put forward the same effort and funding to remove lead from public parks as we have for cancer treatments. We see medicine as using fancy new technologies and drugs to treat diseases after they have popped up. We don’t see medicine as improving risk and health literacy or as helping improve the environment before people get sick.
This poor vision of healthcare that we have lived with for so long, Gigerenzer goes on to argue, has blinded us to the real possibilities within healthcare. Gigerenzer writes, “calls for better health care have been usually countered by claims that this implies one of two alternatives, which nobody wants: raising taxes or rationing care. I argue that there is a third option: by promoting health literacy of doctors and patients, we can get better care for less money.”
Improving risk and health literacy means that doctors can better understand and better communicate which medications, which tests, and which procedures are most likely to help patients. It will also help patients better understand why certain recommendations have been made and will help them push back against the feeling that they always need the newest drugs, the most cutting-edge surgery, and the most expensive diagnostic screenings. Regardless of whether we raise taxes or try to ration care, we have to help people truly understand their options in new ways that incorporate tools to improve risk literacy and reduce healthcare costs. By better understanding the system, our own care, and our systemic health, we can better utilize our healthcare resources and hopefully bring down costs by moving our spending into higher-productivity healthcare spaces.

On The Opportunity To Profit From Uninformed Patients

The American medical system is in a difficult and dangerous place right now. Healthcare services have become incredibly expensive, and the entire system has become so complex that few people fully understand it and even fewer can successfully navigate it to get appropriate care they can reasonably afford. My experience is that many people don’t see value in much of the care they receive or in many of the actors connected with their care. They know they need insurance to afford their care, but they really can’t see what value their insurance provides – it often appears to be more of a frustration than something to appreciate. The same can be true for primary care, anesthesiologists, and the variety of healthcare benefits that employers may offer their employees. There seem to be lots of people ready to profit from healthcare, but not a lot of people ready to provide real value to the people who need it.
 
These sentiments are all generalizations, and of course many people really do see value in at least some of their healthcare and are grateful for the care they receive. However, the complexity, the lack of transparency, and the ever-climbing costs of care have people questioning the entire system, especially at a moral and ethical level. I think a great deal of support for Medicare for All, or universal healthcare coverage, comes from people suspecting that profit within medicine may be unethical, and from a lack of trust that stems from an inability to see anything other than a profit motive in many healthcare actors and services.
 
Gerd Gigerenzer writes about this idea in his book Risk Savvy. In the book he doesn’t examine healthcare specifically, but he uses healthcare to show the importance of being risk literate in today’s complex world. Medical screening in particular is a good space to demonstrate the harms that can come from misinformed patients and doctors. A failure to understand and communicate risk can harm patients, and it can create perverse incentives for healthcare systems by providing them the opportunity to profit from uninformed patients. Gigerenzer quotes Dr. Otis Brawley, former Director of the Georgia Cancer Center at Emory in Atlanta.
 
In his quote, Dr. Brawley discusses how Emory could have screened 1,000 men at a mall for prostate cancer and how the hospital could have billed $4.9 million for the tests. Additionally, the hospital would have profited from future services when the men returned as established patients with other, unrelated healthcare concerns. In Dr. Brawley’s experience, the hospital could tell him how much it would profit from the tests, but it could not tell him whether screening 1,000 men early for prostate cancer would actually save any lives among them. Dr. Brawley knew that screening so many men would lead to false positive tests, and to unnecessary stress and further diagnostic care for those false positives – again, care that Emory would profit from. The screenings would also identify men with prostate cancer that was unlikely to ever impact their health, but whose treatment would nevertheless leave them impotent or potentially incontinent. The hospital would profit, but those patients would be worse off than if they had never been screened. Dr. Brawley’s experience was that the hospital could identify avenues for profit, but not avenues for providing real value in the healthcare services it offered.
 
Gigerenzer found this deeply troubling. A failure to understand and communicate the risks of prostate cancer (which is more complex than I can write about here) presents an opportunity for healthcare providers to profit by pushing unnecessary screening and treatment onto patients. Gigerenzer also notes that profiting from uninformed patients is not limited to cancer screening. Doctors who are not risk literate cannot adequately explain the risks and benefits of treatment, and their patients cannot make the best decisions for themselves. This situation needs to change if hospitals want to keep the trust of their patients and avoid becoming hated entities that fail to demonstrate value. Otherwise they will go the way of health insurance companies, with frustrated patients wanting to eliminate them altogether.
 
Wrapping up the quote from Dr. Brawley, Gigerenzer writes, “Profiting from uninformed patients is unethical. Medicine should not be a money game.” I believe that Gigerenzer and Dr. Brawley are right, and I think that all healthcare actors need to clearly demonstrate their value; otherwise any profits they earn will make them look like money-first enterprises rather than patient-first enterprises, frustrating the public and leading to distrust of the medical field. In the end, this is going to be harmful for everyone involved. Demonstrating real value in healthcare is crucial, and profiting from uninformed patients will diminish the value provided, hurt trust, and make the entire healthcare system in our country even worse.


Understanding False Positives with Natural Frequencies

In a graduate course on healthcare economics, a professor of mine had us think about drug testing student athletes. We ran through a few scenarios in which we calculated how many true positive and how many false positive test results we should expect if we oversaw a university program that drug tested student athletes on a regular basis. The results were surprising, a little confusing, and hard to understand.

 

As it turns out, if you have a large student athlete population and very few of those students actually use any illicit drugs, then your testing program is likely to produce more false positive tests than true positives. The big determining factors are the accuracy of the test (its sensitivity and its false positive rate) and the percentage of students actually using illicit drugs (the base rate). A false positive occurs when the drug test indicates that a student who is not using illicit drugs is using them. A true positive occurs when the test correctly identifies a student who does indeed use drugs. The dilemma we discussed arises when a test has some percentage of error and a large population contains only a minimal percentage of drug users. In that situation you cannot be confident that a positive test result is accurate: you will receive a number of positive tests, but most of them will actually be false positives.
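To make this concrete, here is a minimal sketch in Python with made-up numbers (the actual figures from the class exercise aren’t given here): 1,000 athletes, 2% of whom use drugs, and a test that catches 95% of users while wrongly flagging 5% of non-users.

```python
# Hypothetical illustration of the base rate problem.
# All numbers below are assumptions, not figures from the course.
population = 1000           # student athletes tested
base_rate = 0.02            # fraction who actually use illicit drugs
sensitivity = 0.95          # chance the test flags a real user
false_positive_rate = 0.05  # chance the test flags a non-user

users = population * base_rate                     # 20 athletes
non_users = population - users                     # 980 athletes

true_positives = users * sensitivity               # 19 correct flags
false_positives = non_users * false_positive_rate  # 49 incorrect flags

print(f"True positives:  {true_positives:.0f}")    # 19
print(f"False positives: {false_positives:.0f}")   # 49
```

Even with a test that is right 95% of the time, false positives outnumber true positives 49 to 19, simply because non-users vastly outnumber users.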

 

In class, our teacher walked us through this example verbally before creating some tables we could use to multiply the percentages ourselves and see that the number of false positives will indeed exceed the number of true positives when you are dealing with a large population and a rare event. Our teacher explained that this happens every day in the medical world with drug tests, cancer screenings, and other tests (including COVID-19 tests, as we are learning today). The challenge, as our professor explained, is that the math is complicated, and it is hard to explain to a person who just received a positive cancer test that they still likely don’t have cancer. The statistics are hard to understand on their own.

 

However, Gerd Gigerenzer doesn’t think this needs to be as limiting a problem as my professor’s course made it seem. In Risk Savvy, Gigerenzer writes that understanding false positives with natural frequencies is simple and accessible. What took nearly a full graduate course to work through, Gigerenzer suggests, can be digested in simple charts using natural frequencies. Natural frequencies are counts we can actually understand and multiply, as opposed to fractions and percentages, which are easy to mix up and hard to multiply and compare.

 

Telling someone that the incidence of cancer in the population is only 1%, that the chance of a false positive test is 9%, and that despite a positive result they still likely don’t have cancer is confusing. However, if you explain that for every 1,000 people who take a particular cancer test, only 10 actually have cancer and 990 don’t, the path to comprehension begins to clear. Of the 10 who do have cancer, the test correctly identifies 9 of them, providing 9 true positive results for every 1,000 tests (adjust according to the population and test sensitivity). The false positives can then be explained by saying that of the 990 people who really don’t have cancer, the test will err and tell 89 of them (9% in this case) that they do have cancer. So 89 individuals receive false positives while only 9 receive true positives. Since 89 > 9, a positive test is far from a guarantee of actually having cancer: only 9 of the 98 people who test positive, about 9%, really have the disease.
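The same natural frequency arithmetic fits in a few lines of Python. This sketch simply restates the numbers from the paragraph above (the 90% sensitivity is implied by the “9 out of 10” figure):

```python
# Natural frequency tree for the cancer-screening example above.
tested = 1000
prevalence = 0.01           # 1% actually have cancer
sensitivity = 0.90          # the test catches 9 of the 10 who do
false_positive_rate = 0.09  # the test wrongly flags 9% of the healthy

have_cancer = round(tested * prevalence)                # 10 people
healthy = tested - have_cancer                          # 990 people

true_positives = round(have_cancer * sensitivity)       # 9 people
false_positives = round(healthy * false_positive_rate)  # 89 people

# Of everyone who tests positive, how many actually have cancer?
ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives} true vs. {false_positives} false positives")
print(f"Chance a positive result means cancer: {ppv:.0%}")  # 9%
```

Working in whole people out of 1,000, rather than in percentages, is exactly what makes the natural frequency framing easier to follow.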

 

Gigerenzer uses very helpful charts in his book to show that the false positive problem can be understood more easily than we might think. Humans are not great at thinking statistically, but understanding false positives with natural frequencies is a way to reach better comprehension. With this background he writes, “For many years psychologists have argued that because of their limited cognitive capacities people are doomed to misunderstand problems like the probability of a disease given a positive test. This failure is taken as justification for paternalistic policymaking.” Gigerenzer shows that we don’t need to rely on the paternalistic nudges that Cass Sunstein and Richard Thaler encourage in their book Nudge. He suggests that in many instances where people have to make complex decisions, what is really needed are better tools and aids to help with comprehension. Rather than developing paternalistic policies to nudge people toward behaviors they don’t fully understand, Gigerenzer suggests that more work to help people understand problems will solve the dilemma of poor decision-making. The problem isn’t always that humans are incapable of understanding complexity and choosing the right option; the problem is often that we don’t present information in a clear and understandable way to begin with.

Stats and Messaging

In the past, I have encouraged attaching probabilities and statistical chances to the things we believe or to events we think may (or may not) occur. For example, say Steph Curry’s three-point shooting percentage is about 43%, and I am two Steph Currys confident that my running regimen will help me qualify for the Boston Marathon. One might also be two Steph Currys confident that leaving now will guarantee they are at the theater in time for the movie, or that most COVID-19 restrictions will be rescinded by August 2021, allowing people to go to movies again. However, the specific percentages I am attaching in these examples may be meaningless, and may not really convey an important message for most people (myself included!). It turns out that modern-day statistics and the messaging attached to them are not well understood.

 

In his book Risk Savvy, Gerd Gigerenzer discusses the disconnect between stats and messaging, and the mistake most people make. The main problem with using statistics is that people don’t really know what the statistics mean in terms of actual outcomes. This was seen in the 2016 US presidential election, when sources like FiveThirtyEight gave Trump a 28.6% chance of winning, and again in 2020, when the election was closer than many predicted but still well within the forecasted range. In both instances, a Trump win was considered such a low-probability event that people dismissed it as a real possibility, only to be shocked when Trump did win in 2016 and performed better than many expected in 2020. People failed to fully appreciate that FiveThirtyEight’s number meant that Trump won in 28.6% of the site’s 2016 election simulations, and that in 2020 the models simulated races both closer than and wider than the result we actually observed.
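A quick simulation makes the point. This is only a sketch of what a probabilistic forecast communicates, not of FiveThirtyEight’s actual models, and the 28.6% figure is the only number taken from the paragraph above:

```python
import random

random.seed(42)  # make the run reproducible

p_win = 0.286        # forecast probability of a Trump win in 2016
simulations = 10_000

# Count how many simulated elections the "low-probability" event wins.
wins = sum(random.random() < p_win for _ in range(simulations))
print(f"Trump wins in {wins:,} of {simulations:,} simulated elections")
```

Roughly 2,900 wins out of 10,000: an event that happens more than once in every four tries is unlikely on any single run, but it is nowhere near impossible.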

 

Regarding weather forecasting and statistical confusion, Gigerenzer writes, “New forecasting technology has enabled meteorologists to replace mere verbal statements of certainty (it will rain tomorrow) or chance (it is likely) with numerical precision. But greater precision has not led to greater understanding of what the message really is.” Gigerenzer explains that in the context of weather forecasts, people often fail to understand that a 30% chance of rain means that on 30% of days when the observed weather factors (temperature, humidity, wind speed, etc.) match the predicted weather for that day, rain occurs. Or that models taking those weather factors into account simulated 100 days of weather with those conditions and included rain on 30 of those days. What is missing, Gigerenzer explains, is the reference class. Telling people there is a 30% chance of rain could lead them to think that it will rain for 30% of the day, that 30% of the city they live in will be rained on, or perhaps they will misunderstand the forecast in a completely unpredictable way.

 

Probabilities are hard for people to understand, especially when they are busy, have other things on their mind, and don’t know the reference class. Providing probabilities that don’t connect to a real reference class can be misleading and unhelpful. This is why my suggestion of tying beliefs and possible outcomes to a statistic might not actually be meaningful. If we don’t have a reasonable reference class and a way to understand it, then it doesn’t matter how many Steph Currys confident I say I am. I think we should take statistics into consideration in important decision-making, and I think Gigerenzer would agree, but if we are going to communicate our decisions in terms of statistics, we need to clearly state and explain the reference classes and provide the appropriate tools to help people understand the stats and messaging.

Frame Bound vs Reality Bound

My wife works with families of children with disabilities, and one of the things I have learned from her is how to ask children to do something. When speaking with an adult, we often use softeners when requesting that the other person do something, but this doesn’t work with children. While we may say to a colleague, a spouse, or a friend, “can you please XYZ,” or “let’s call it a night of bowling after this frame, OK?”, these sentences don’t work with children. A child won’t quite grasp the way a softener like “OK” is used, and they won’t understand that while you have framed an instruction or request as a question, you are not actually asking a question or giving them a choice. If you frame an instruction as a choice, the child can reply with “no,” and then you as a parent are stuck fighting them.

 

What happens in this situation is that children reject the frame bounding that parents present them with. To get around it, parents need to be either more direct or more creative in how they tell their children to do things. You can create a new frame that your child can’t escape by saying, “It is time to get ready for dinner; you can either put away your toys, or you can go set the table.” You frame a choice for the child, and they get to choose which action they will take, but in reality both are things you want them to do (my wife says this also works with husbands, but I think the evidence is mixed).

 

In Thinking Fast and Slow, Daniel Kahneman writes, “Unless there is an obvious reason to do otherwise, most of us passively accept decision problems as they are framed and therefore rarely have an opportunity to discover the extent to which our preferences are frame-bound rather than reality-bound.”

 

The examples of talking to children versus talking to adults help demonstrate how we passively accept the framing of our decisions. We don’t often pause to reconsider whether we should really purchase an item just because it is on sale; the discount we are “saving” outweighs, in our minds, the fact that we still face a cost when purchasing the item. Our thinking works this way in office settings, in politics, and on weekends when we can’t decide whether to roll out of bed. The frame applied to our decisions becomes our reality, even if there are more possibilities out there than we realize.

 

A child rejecting the framing a parent provides, or conversely a parent creating new frames to shape a child’s decisions and behaviors, demonstrates how easily we can fall into frame-bound thinking and how jarring it can be when reality intrudes on the frames we try to live within. Most of the time we accept the frames presented to us, but there can be huge costs if we simply go along with the frames that advertisers, politicians, and other people want us to adopt.

Thoughts on Pharmaceutical Advertisements

“The reality is that most people hear more from pharmaceutical companies (16 to 18 hours of pharma ads per year) than from their doctor (typically under 2 hours per year),” writes Dave Chase in his book The Opioid Crisis Wake-Up Call. Chase is critical of Americans looking for a quick fix and expecting a pill to solve their problems. He says that short doctor’s appointments and a bombardment of pharmaceutical advertisements on TV contribute to the mindset that any disorder or illness can be fixed in a matter of minutes with a quick pill. Given how much we hear from drug companies, and how little time we spend with someone trying to work with us in depth to correct behaviors, change our thoughts, improve muscle imbalances, or make adjustments to help us live a more healthy lifestyle, it isn’t hard to understand why most people think of medical care in the form of a pill.

 

I am wary of pharmaceutical advertisements. I don’t really understand if I am the target audience or if medical professionals are the target audience. I’m not sure if the goal is to just normalize taking pills, or if the goal is to educate patients about a potential solution for a potential problem. I’m not sure if the idea is to get people away from taking generic medication in favor of brand name drugs, or if it is to get people to try a medication and see if it helps them.

 

However, I also remember seeing a study which suggested that drug advertisements did help improve people’s health literacy, and did lead to patients being more likely to ask about medications which would help them, without finding an increase in patients asking about medications that wouldn’t be helpful for them. When primary care providers are stressed, have limited time with patients, and are likely to miss important details, having patients with goals and specific questions about beneficial medication is important for overall health gains and an improved doctor-patient relationship. Additionally, advertisements approved by the FDA and at least somewhat regulated are better places for people to gain medical information about a drug than a Reddit or Facebook post from a random person.

 

Ultimately, I think I fall on the side of banning direct pharmaceutical advertisements. I find they are overly broad, dangerously support the idea that a pill can solve any health problem, and ultimately are more about pharmaceutical companies than about improving health in general. I’m not 100% sure this is the best course – I’d put my confidence around 75% – but I don’t think it would hurt America to be a little less focused on pills as cures and a little more focused on lifestyle changes, especially if we start to favor policy changes that would support healthier lives.

How We Define Our World

Our thoughts are generally not just our own thoughts. What we think, what we say, and ultimately what we do is influenced by other people. We are social animals and come to understand ourselves and define ourselves socially. However, we often are not aware of just how much this social conditioning shapes our thinking and understanding. Fernando Pessoa writes about this in his book The Book of Disquiet which was assembled from his notes and published after his death.

 

In a translation from the original Portuguese by Margaret Jull Costa, Pessoa writes, “Their inability to say what they see or think is a cause of suffering to most people. …they imagine that to define something one should say what other people want, and not what one needs to say in order to produce a definition.”

 

When we think about something, it is often in the context of social situations. We don’t exist in a vacuum where we can give everything around us a name and definition on our own, so we must rely on the knowledge and understanding of others, creating a shared definition and shared meaning in what we communicate. At a basic level, we must share some type of understanding to communicate how we are feeling, what something is, what happened, and what it all means. However, we go a step further than just this.

 

We anticipate what other people want to hear and expect to hear, and we adjust our communication accordingly. Pessoa seems to suggest that we don’t just adapt our speaking and communication when we do this, but we adjust our entire way of thinking to align with what we think other people believe, feel, and understand. We don’t think and develop concepts independently, but we do so socially, depending on others and making assumptions about what is happening in their head as we formulate ideas within our own heads. Because our thoughts are not independent, when we are asked to define something abstract we falter. Rather than simply describing the thing, we become paralyzed as we try to think about what is already in another person’s head, what they are expecting to hear, and what they will think if we provide a definition they did not expect. Rather than being free and brave enough to offer our own definition, or to have our own thoughts, we simply adopt the social beliefs around us, conforming to the shared thoughts of others.

 

In one sense I find it troubling that we don’t have our own independent thoughts and ideas. But at the same time, I don’t know what it would mean for everyone to have independent thoughts and understandings of the world. I don’t know how we could cooperate and build a society if we all had truly distinct thoughts and opinions about how the world should operate and about how to define the world as it is. When I consider the reality of our social minds, I fall back on the same conclusion as always: it is important to be aware of what is really happening and to understand that we don’t think independently of others, but I don’t know how that should change our thinking or behavior at the individual or societal level. Perhaps honesty with ourselves will make us less cocky and less arrogant, but perhaps it will open us up to being taken advantage of by people who are not so honest. Ultimately, having more knowledge of what our minds are really doing will hopefully make us better people.

Elevating Reason

This blog is a place for me to return to specific quotes and thoughts that stood out to me in books that interested me. The blog, on its face, is mostly about me trying to remember key insights from books, to formulate my thoughts, and share them with others. Another goal of the blog, if I am honest, is to attempt to elevate reason in our lives. I believe that we must live in a way that attempts to look at the world as clearly and objectively as possible, all while understanding that our brains didn’t evolve to see the world in this way.

 

By highlighting the benefits of rational thought, I hope to raise the status of those who try to be rational and encourage more people to think deeply about their world. It is no surprise then, that a sentence I highlighted in The Elephant in the Brain by Kevin Simler and Robin Hanson reads, “People who are able to acknowledge uncomfortable truths and discuss them dispassionately can show a combination of honesty, intellectual ability, and perhaps even courage (or at least a thick skin).”

 

That quote is from a short section on why someone might want to acknowledge the elephant in the brain. Which is to say: why would anyone want to acknowledge that much of their behavior is likely driven by selfish motives and not by the high-minded reasons we like to project? Our brains seem to be very good at deceiving even ourselves about our behaviors and choices (so that we can better lie to others), and our high-minded reasons for doing things make us feel good about who we are. Why would we want to look past that into the less pretty parts of our inner workings?

 

I believe that acknowledging our true motives will help us better understand humanity, develop better institutions, and, in the long run, function better together. One way to make that happen is to raise the social status of people who think rationally about the universe and their existence within it. By acknowledging truths that tear down the stories we tell about how amazing and special we are, and by looking at issues as dispassionately and objectively as we can, we can hopefully start to pursue better policy and have better debates and discussions. Making rational thinking interesting and helpful in our daily lives will encourage more people to be honest about the world and will hopefully lead to more rewarding lives for those who cultivate these important yet undervalued intellectual abilities.

Examples of Hidden Meaning in Communication

Yesterday I wrote about how our speech conveys information in the direct meaning of what we say and also conveys additional information about us as a person. Our messages include the specific thing we said, and also something about how we are the type of person who knows about or cares about the thing we just communicated. This second layer of communication is very important, and is often more important than the information we actually express, even though we likely never acknowledge it.

 

As an example, Kevin Simler and Robin Hanson write the following in The Elephant in the Brain, “When you’re interviewing someone for a job, for example, you aren’t trying to learn new domain knowledge from the job applicant, but you might discuss a topic in order to gauge the applicant as a potential coworker. You want to know whether the applicant is sharp or dull, plugged-in or out of the loop. You want to know the size and utility of the applicant’s backpack.”

 

This example is really clear, and we can see that the things being communicated are less important than what the communication reveals behind the scenes about the person. Have they been in situations that demand creativity? Were they able to navigate those situations well? Can they now look back and clearly express what they learned? These questions are hard to ask directly, but the interviewee’s communication will answer them whether they are asked or not.

 

As an example from my personal life, the other week I drew a river on a coworker’s whiteboard because I had learned some really fascinating information about erosion and deposition within rivers from the Don’t Panic Geocast. The information I shared about rivers is not going to help either of us in our jobs or lives in any meaningful way, but I found it interesting and wanted to share it. What I was really conveying, however, was that I am the type of person who gets excited about science and fun geological processes. I was telling her, “hey, I’m the kind of person who picks up interesting but obscure information from across the world and can remember it.” If I just walked around saying that, I would probably annoy everyone (not to say drawing rivers on other people’s whiteboards doesn’t), but at least this way I can show my interest in the world and share a little bit about myself in a less obnoxious and intrusive manner.

 

We all do things like this at times in our lives. We are not conscious of it, because being conscious of it doesn’t actually help us be much better in conversations and social situations. Our brains continuously monitor, adjust, and respond to social situations, and we are able to send a lot of messages without either ourselves or the people we talk to actively noticing what we are doing during these conversations.