Self-Interest & A Banking Moral Hazard

I have not studied the financial crisis of 2008 in any depth, but I remember how furious so many people were at the time. There was an incredible amount of anger at big banks, especially when their executives began to receive massive bonuses while many people in the country lost their homes and struggled to rebound from the worst parts of the recession. The anger at banks spilled into the Occupy Wall Street movement, a protest I still have only a hazy understanding of.
While I don’t understand the financial crisis that well, I do believe that I better understand self-interest, thanks to my own personal experience and constantly thinking about Robin Hanson and Kevin Simler’s book The Elephant in the Brain. The argument from Hanson and Simler is that most of us don’t actually have really strong beliefs about most aspects of the world. For most topics, the beliefs we have are usually subservient to our own self-interest, to the things we want that would give us more money, more prestige, and more social status. When you apply this filter retroactively to the financial crisis of 2008, some of the arguments shift, and I feel that I am able to better understand some of what took place in terms of rhetoric coming out of the crisis.
In Risk Savvy, published in 2014, Gerd Gigerenzer wrote about the big banks. He described the way bankers argued for limited regulation and intervention from states, suggesting that a free market was necessary for a successful banking sector that could fund innovation and fuel the economy. However, banks realized that in the event of a major banking crisis, all banks would be in trouble, and dramatic government action would be needed to save the biggest banks and prevent a catastrophic collapse. “Profits are pocketed by executives, and losses are compensated by taxpayers. That is not exactly a free market – it’s a moral hazard,” writes Gigerenzer.
Banks, like the individuals who work for and comprise them, are self-interested. They don’t want to be regulated or have too many authorities limiting their business enterprises. At the same time, they don’t want to be held responsible for their actions. Banks took on increasingly risky and unsound loans, recognizing that if everyone engaged in the same harmful lending practices, it wouldn’t just be a single bank that went bust, but all of them. They argued for a free market before the crash because a free market with limited intervention was in their self-interest, not because they held high-minded ideological beliefs. After the crash, when all banks risked failure, the largest banks pleaded for bailouts, arguing that they were necessary to prevent further economic disaster. Counter to their earlier free-market arguments, the banks favored the bailouts that were clearly in their self-interest during the crisis. Their high-minded free-market ideology was out the window.
Gigerenzer’s quote was meant to focus on the moral hazard of bailing out banks that take on too many risky loans, but for me, someone who doesn’t understand banking the way I understand healthcare or other political science topics, what stands out in his quote is the role of self-interest, and how we frame our arguments to hide the ways we act on little more than self-interest. A moral hazard, where we benefit by pushing risk onto others, is just one example of how individual self-interest can be harmful when multiplied across society. The tragedy of the commons, bank runs, and social signaling are all other examples where our self-interest can become problematic when scaled up to the societal level.
Social Learning and Risk Aversion

In his book Risk Savvy, Gerd Gigerenzer looks at risk aversion in the context of social learning and presents interesting ideas and results from studies of risk aversion and fear. He writes, “In risk research people are sometimes divided into two kinds of personalities: risk seeking and risk averse. But it is misleading to generalize a person as one or the other. … Social learning is the reason why people aren’t generally risk seeking or risk averse. They tend to fear whatever their peers fear, resulting in a patchwork of risks taken and avoided.”


I agree with Gigerenzer, and I find it is normally helpful to look beyond standard dichotomies. We often categorize things into binaries, as the example of risk averse versus risk seeking demonstrates. The reality, I believe, is that far more things are situational and exist on spectrums. For most of the behaviors we may want to categorize with a dichotomy, I would argue that we are much more self-interested than we would like to admit, and driven by our present context to a greater extent than we normally realize. People are not good or evil, honest or dishonest, or even hardworking or lazy. People adjust to the needs of the moment, doing what they believe is in their best interest at a given time, shaped by a great deal of social influence. Social learning and risk aversion help us see that dichotomies often don’t stand up, and they reveal something interesting about who we are as individuals within a larger society.


People have a patchwork of things they fear and a patchwork of risks they are willing to accept. On the whole, we generally won’t accept a bet unless the payoff is at least twice the potential loss (there is an expected value calculation we can do that I don’t want to dive into). However, we are not always rational and calculating in the risks and gambles we take. We are much more likely to die in a car crash than in an airplane crash, yet few of us have any hesitation when buckling our seat belts for the drive to work, while we likely feel some nervousness during takeoff on a short flight. We are not risk seeking if we are more willing to drive than fly (in fact, it isn’t really appropriate to categorize this activity as either risk seeking or risk averse); we are simply responding to learned fears that have developed in our culture.
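For readers who do want a glimpse of the arithmetic alluded to above, here is a minimal sketch. It assumes a loss-aversion coefficient of 2, roughly the value reported in the behavioral economics literature; the function names and the 50/50 framing are my own illustration, not something from Gigerenzer's book:

```python
def expected_value(gain, loss, p_win=0.5):
    # Plain expected value: the risk-neutral yardstick.
    return p_win * gain - (1 - p_win) * loss

def subjective_value(gain, loss, p_win=0.5, loss_aversion=2.0):
    # Loss-averse value: losses are weighted loss_aversion times
    # more heavily than gains, so a 50/50 bet only feels worthwhile
    # when the gain exceeds loss_aversion * loss.
    return p_win * gain - (1 - p_win) * loss_aversion * loss

# A 50/50 bet to win $150 or lose $100 has positive expected value...
print(expected_value(150, 100))    # 25.0
# ...but feels like a bad deal once losses are weighted double.
print(subjective_value(150, 100))  # -25.0
```

The gap between the two numbers is the whole point: a bet a risk-neutral calculator would take still feels like a loser until the payoff roughly doubles the stake.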


What this shows us is that we are creatures that respond to our environment, especially our social environments. We often think of ourselves as unique individuals, but the reality is that we are dependent on society and define ourselves based on the societies and groups we belong to. We learn from those around us, try to do what we understand to be in our best interest, and navigate a complicated course between societal expectations and our self-interest. Just as we can’t classify ourselves into imagined dichotomies, we cannot do so with others. Social learning and risk aversion give us a window into the complexity that we smooth over when we try to categorize ourselves or others into simple dichotomies.
Open Default Nudges

Our society has a lot of defaults, and many of us opt out of them only in a narrow set of circumstances. Whether it is our mode of travel, how we pay for goods, or the type of health insurance plan we are enrolled in, the default option makes a big difference in our lives. Actors within our political and economic systems know this, and the choice of default can matter a great deal to individual actors, political groups, and companies. Consequently, which default is selected, and what story we tell about it, is a constant point of argument and debate in our country.


In their book Nudge, Cass Sunstein and Richard Thaler discuss the importance of nudges and the ways that responsible choice architects should think about them. Choice architects may face pressure to select a default option that in one way or another benefits them personally or benefits the group or ideology they identify with. A state government may favor a default Medicaid option that is confusing and hard for individuals to use, meaning that fewer people will access services and the state won’t have to pay as much for medical care for low-income individuals. A corporate HR representative might feel pressure from a boss to set the default retirement savings rate for employees at 2%, knowing that the company will spend less on retirement savings matching if the rate is lower.


But these types of defaults are not in the best interest of individuals. A health plan that is easy to use and facilitates access to necessary medical care is clearly in the best interest of the individual, but it may cost more for the government agency or corporation sponsoring the plan. A retirement plan that helps save above the rate of inflation is also clearly in the best interest of the individual, but might be more costly to a company’s bottom line.


As a guide for setting defaults, in keeping with their earlier advice that deliberate nudges employed by governments or corporations should be able to survive open transparency, Sunstein and Thaler write, “The same conclusion holds for legal default rules. If government alters such rules – to encourage organ donation or reduce discrimination – it should not be secretive about what it is doing.”


The defaults we choose, and the reasons we select them, should be open and transparent. If a choice architect cannot defend a default choice, then they should set an alternative default that can be defended in the open. Defaults that clearly benefit the choice architect or their interests at the expense of the individual making (or failing to make) a choice should be excluded. Importantly, this means that choice architects have to actively make a decision about the default. Setting the default savings rate for a retirement plan to 0 if an individual never makes a selection is not in the best interest of the individual. An argument could be made that the choice architect attempted to remove themselves from the choice setting as much as possible by not providing a default, but that is still a choice, and it will leave some people worse off than if the choice architect had selected a more defensible default. Choosing not to set a default can be as indefensible as selecting a self-serving one.
Acknowledging Nudges

In the book Nudge, Cass Sunstein and Richard Thaler argue that it is impossible to avoid or eliminate nudges. Whenever people have a choice to make, someone else has a hand in shaping how that choice is presented and structured. Even if a choice architect were to strive to maximize choice and decision-making autonomy for the chooser, subtle factors would still influence the chooser and nudge them in particular directions. Striving to eliminate nudges is likely to lead to worse outcomes and choices than acknowledging nudges and trying to employ them in ways that help people make good choices.


But how does a choice architect judge when a nudge is appropriate and when a nudge goes too far? Again, Sunstein and Thaler recommend that a choice architect first acknowledge their nudge, and then ask themselves whether they could discuss the way they use nudges in public. The authors reference an idea from John Rawls called the publicity principle. If a choice architect feels comfortable publicly acknowledging the nudges they employ, then those nudges are probably going in an appropriate direction. If, however, the discovery of their nudges would lead people to shame them, or if they would be embarrassed by their actions, then they have overstepped the bounds of an acceptable nudge.


Sunstein and Thaler write, “The government should respect the people whom it governs, and if it adopts policies that it could not defend in public, it fails to manifest that respect. Instead, it treats its citizens as tools for its own manipulation.”


Nudges are effective tools because we understand how human psychology works and can predict situations in which people are likely to make biased judgements or judgements based on cognitive errors. Appropriate nudges seek to improve decision-making by helping people overcome these biases and errors. Manipulative nudges are those which seek to exploit them. Governments are expected to be transparent, and more transparency laws exist for the public sector than for the private sector, meaning that government officials must be especially considerate about their explicit nudges. If oversight bodies, reporters, or the general public were to learn of a practice that made an agency or official look good while failing to actually benefit the public, then it would be clear that an abuse of power had taken place. Choice architects who wish to serve the public rather than manipulate it should always acknowledge their nudges, and consider whether they can safely do so publicly.
Who to Fear: Public vs Private Choice Architects

A question that Cass Sunstein and Richard Thaler raise in their book Nudge is whether we should worry more about public or private sector choice architects. A choice architect is anyone who influences the decision space of another individual or group. Your office’s HR person in charge of health benefits is a choice architect. The people at Twitter who decided to increase the character limit of tweets are choice architects. The government bureaucrat who designs the form you use to register to vote is also a choice architect. How each of these individuals or teams structures other people’s choices will influence the decisions and behaviors of the people in those choice settings.


In the United States, we often see a split between public and private that feels more concrete than the divide truly is. Often, we fall dramatically on one side of this imagined divide, either believing everything should be handled by businesses, or thinking that businesses are corrupt and self-interested and that government needs to step in to monitor almost all business actions. The reality is that businesses and government agencies overlap and intersect in many complex ways, and choice architects in both spheres influence the public and each other. Regardless of what you believe and which side you fall on, both kinds of choice architects need to be taken seriously.


As Sunstein and Thaler put it, “On the face of it, it is odd to say that the public architects are always more dangerous than the private ones. After all, managers in the public sector have to answer to voters, and managers in the private sector have as their mandate the job of maximizing profits and share prices, not consumer welfare.”


Sunstein and Thaler suggest that we should be concerned about private sector choice architects because they are ultimately accountable for company growth and shareholder value, rather than for what is in the best interest of individuals. When conflicts arise between what is best for people and what is best for a company’s bottom line, there could be pressure on the choice architect to use nudges to help the bottom line rather than to help people make the best decisions possible.


However, the public sector is not free from perverse incentives simply because its officials are elected, accountable to the public, or free from profit motives. Sunstein and Thaler continue, “we agree that government officials, elected or otherwise, are often captured by private-sector interests whose representatives are seeking to nudge people in directions that will specifically promote their selfish goals.” The complex interplay between government and private companies means that even the public sector is not a space purely dedicated to public welfare. The general public doesn’t have the time, attention, energy, or financial resources to influence public sector choice architects the way the private sector does. And if private-sector interests shape choice structures via elected officials, they can lend a sense of legitimacy to ultimately selfish decisions. Of course, public sector choice architects may also be more interested in keeping their jobs or winning reelection, and may promote their own selfish goals for self-preservation reasons as well.


We can’t think of public sector or private sector actors as inherently more trustworthy or responsible than the other. Often, they overlap and influence each other, shifting the incentives and opinions of the public and of actors within both sectors simultaneously. Sunstein and Thaler suggest that this is a reason for maintaining as much freedom of choice as possible. The more people retain the ability to make their own choices, even if they are nudged, the more we can limit the impact of self-serving choice architects, whether they sit in the public or the private sector.
Should We Assume Rationality?

The world is a complex place, and people have to make a lot of decisions within that complexity. Whether we are deliberate about it or not, we create and manage systems and structures for navigating the complexity and framing the decisions we make. However, each of us operates from a different perspective. We make decisions that seem reasonable and rational from our individual point of view, but that may seem irrational from the outside. The question is, should we assume rationality in ourselves and others? Should we think that we and other people are behaving irrationally when our choices seem to go against our own interests, or should we assume that people have a good reason to do what they do?


This is a current debate and challenge in the world of economics, and it has been a long-standing debate in the world of politics. In his book Thinking, Fast and Slow, Daniel Kahneman seems to take the stance that people act rationally, at least from their own point of view. He writes, “when we observe people acting in ways that seem odd, we should first examine the possibility that they have a good reason to do what they do.”


Truly rational decision-making involves understanding a great deal of risk. It involves processing many data points, having full knowledge of our choices and the potential outcomes we might face, and thinking through the short- and long-term consequences of our actions. After reading his book, it seems Kahneman would argue that truly rational thinking is beyond what our brains are ordinarily capable of managing. But to him, this doesn’t mean that people cannot still make rational choices and do what is in their best interests. When we see behaviors that seem odd, it is possible that the choices other people have made are still rational, but simply require a different perspective to understand.


The way people get to rationality, Thinking, Fast and Slow suggests, is through heuristics that create shortcuts to decision-making and filter out data that is more or less just noise. Markets can be thought of as heuristics in this way, allowing people to aggregate decisions and make choices with an invisible hand directing them toward rationality. So when we see people who seem to be acting obviously irrationally, or in opposition to their self-interest, we should ask whether they are making choices within an entirely different marketplace. What seems like odd behavior from the outside might be savvy signaling to a group we are not part of, might be a short-term indulgence that will stand out to the remembering self in the long run, or might make sense if we change the perspective through which we judge another person.


Kahneman shows that we can predict biases and patterns of thought in ourselves and others, but still, we don’t know exactly what heuristics and thinking structures are involved in other people’s decision-making. A charitable way to look at people is to assume their decisions are rational from where they stand and in line with the goals they hold, even if the choices they make do not appear to be rational to us from the outside.


Personally, I am on the side that doubts human rationality. While it is useful, empathetic, and humanizing to assume rationality, I think it can be a mistake, especially if we go too far in accepting the perspective of others as justification for their acts. I think there are simply too many variables and too much information for us to make truly rational decisions or to fully understand the choices of others. My thinking is influenced by Kevin Simler and Robin Hanson, who argue in The Elephant in the Brain that we act on pure self-interest to a greater extent than we would ever admit, and that we hide our self-interested behaviors and decisions from everyone, including ourselves.


At the same time, I do believe that we can set up systems, structures, and institutions that help us make more rational decisions. Sunstein and Thaler, in Nudge, clearly show that markets can work and that people can be rational, but that people often need proper incentives and easy choice structures that encourage better choices. Gigerenzer in Risk Savvy ends up in a similar place, showing that we can get ahead of the brain’s heuristics and biases to produce rational thought. Creating the right frames, offering the right visual aids, and helping the brain focus on the relevant information can lead to rational thought. Nevertheless, as Kahneman shows, our thinking can still be hijacked and derailed, leading to choices that feel rational from the inside but appear to violate our best interests when our decisions are stacked and combined over time. Ultimately, the greatest power in assuming rationality in others is that it helps us understand multiple perspectives, and it might help us see which nudges could help people change their behaviors and decisions to become more rational.
The Dominance of Loss Aversion - Joe Abittan

Loss aversion is a dominant force in many of our individual lives and in many of our societies. At this moment, I think it is one of the greatest barriers to change and growth that our entire world needs to overcome in order to move forward to address climate change, to create more equitable and cohesive societies, and to drive new innovations. Loss aversion has made us complacent, and we are feeling the cost of stagnation in our politics and in our general discontent, but at the same time we are paralyzed and unable to do anything about it. As Tyler Cowen wrote in The Complacent Class, “Americans are in fact working much harder than before to postpone change, or to avoid it altogether, and that is true whether we’re talking about corporate competition, changing residences or jobs, or building things. In an age when it is easier than ever before to dig in, the psychological resistance to change has become progressively stronger.”


My argument in this post is that much of the complacency and stagnation Cowen has written about stems from loss aversion. In Thinking, Fast and Slow, Daniel Kahneman writes, “Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.” Additional research in the book shows that the pain and fear of loss is generally at least twice as strong for most people as the pleasure and excitement of gain. Before we make a bet, the payoff has to be at least twice what we stand to lose. If we are offered a sure $10 or a gamble for more money, we prefer the sure $10 until the gamble’s payoff far outweighs the possible loss of the guaranteed $10.
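To make the $10 example concrete, here is a rough sketch that treats the sure $10 as the reference point and weights the potential loss twice as heavily as the gain. The framing and function name are my own illustration, and the coefficient of 2 is an assumption drawn from the roughly 1.5–2.5 range Kahneman reports:

```python
def prefers_gamble(payoff, sure_amount=10.0, loss_aversion=2.0):
    """Compare a 50/50 gamble against a guaranteed amount.

    Taking the gamble means either gaining (payoff - sure_amount)
    relative to the sure thing, or "losing" the sure amount entirely.
    The loss is weighted loss_aversion times more heavily than the gain.
    """
    gain = payoff - sure_amount
    loss = sure_amount
    return 0.5 * gain - 0.5 * loss_aversion * loss > 0

# With a $10 sure thing and losses weighted 2x, the gamble's payoff
# has to exceed $30 before it feels better than the guaranteed $10.
print(prefers_gamble(25))  # False
print(prefers_gamble(35))  # True
```

The $30 threshold falls out of the arithmetic: the gamble feels worthwhile only when half the relative gain outweighs half the doubly-weighted loss, which is exactly the "payoff must far outweigh the possible loss" intuition above.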


I believe this is at the heart of the trite saying that people become more “conservative” as they get older. The reality is that as people age they acquire more wealth, are more likely to own a home, and secure their social standing. People are not “conservative” in some high-minded ideological sense of conservatism; they are self-interested and risk averse. They don’t want to risk losing their wealth, losing value on their home, or losing social status. To me, this more plausibly explains conservatism and complacency than do explanations rooted in political ideology or cultural decadence.


To me, Kahneman’s quote is supported by Cowen’s observations. Institutions are built and run by people, and people within institutions, especially well-established ones, become risk averse. They don’t want to lose their job, their position as the office veteran who knows how to do everything, or their knowledge and authority in their field. As the potential for loss increases, people become increasingly likely to push back against change and risk, ensuring that we cannot lose what we have, but also forgoing changes that could greatly benefit all of us in the long run. Loss aversion has come to dominate how we organize our societies and how we relate to one another at individual, social, and political levels in the United States.
Affect Heuristics

I studied public policy at the University of Nevada, Reno, and one of the things I had to accept early in my studies was that humans are not as rational as we like to believe. We tell ourselves that we are making objective and unbiased judgments about the world to reach the conclusions we find. We tell ourselves that we are listening to smart people who truly understand the issues and the technical details of policy and science, but studies of voting, policy preference, and individual knowledge show that this is not the case.


We are nearing November, when the United States will vote for president and other elected officials. Few of us will spend much time investigating the candidates on the ballot in a thorough and rigorous way. Few of us will seek out in-depth and nuanced information about the policies our political leaders support or about referendum questions on the ballot. But many of us, perhaps the vast majority, will have strong views on policies ranging from tech company monopolies to tariffs to public health measures. We will reach unshakable conclusions and find a few snippets of fact to support our views. But this doesn’t mean we will truly understand any of the issues in a deep and complex manner.


Daniel Kahneman, in his book Thinking, Fast and Slow, helps us understand what is happening with our voting, and he reveals what I didn’t want to believe but was confronted with over and over in academic studies. He writes, “The dominance of conclusions over arguments is most pronounced where emotions are involved. The psychologist Paul Slovic has proposed an affect heuristic in which people let their likes and dislikes determine their beliefs about the world.”


Very few of us have a deep understanding of economics, international relations, or public health, but we are good at recognizing what is in our immediate self-interest and who represents the identities that are core to who we are. We know that having a leader who reflects and praises our identities will improve the social standing of our group, and ultimately our own social status. By recognizing who our leader is and what is in our interest to support, we learn which policy beliefs to adopt. We look to our leaders, learn what they believe and support, and follow their lead. We memorize a few basic facts and use them as justification for the beliefs we hold, rather than admit that our beliefs simply follow our emotional desire to align with a leader we believe will boost our social standing.


It is this affect heuristic that drives much of our political decision-making. It helps explain how we can support policies that don’t seem to immediately benefit us: we look to the larger group we want to be part of and try to increase that group’s social standing, even at a personal cost. The affect heuristic shows that we want a conclusion to be true because we would benefit from it, and we use motivated reasoning to adopt beliefs that conveniently support our self-interest. There doesn’t need to be any truth to these beliefs; they just need to match our emotional valence and give us a shortcut for making decisions on complex topics.
Help Them Build a Better Life

It is an unavoidable reality that we are more motivated by our immediate self-interest than we would like to admit. This idea is at the heart of Kevin Simler and Robin Hanson’s book The Elephant in the Brain, and it can be seen everywhere once you open your eyes to it. I’m currently doing a deep dive into reading about homelessness, working through Elliot Liebow’s book Tell Them Who I Am. Liebow writes about American society’s belief that people will become dependent on aid if it is offered unconditionally. In a passage reflecting on the barriers homeless women face in obtaining services and aid, and how those barriers can often become abuse, Liebow writes:


“One important source of abuse lies much deeper, in a widespread theory about human behavior that gets expressed in various forms: as public policy, as a theoretical statement about rehabilitation, or simply as common sense. Whatever the form, it boils down to something like this: We mustn’t make things too easy for them (mental patients in state hospitals, welfare clients, homeless people, the dependent poor generally). That just encourages their dependency.”


What is incredible about the sentiment in the paragraph above is how well it justifies what is in the immediate self-interest of the people with the resources to help those in need. It excuses inaction, it justifies the withholding of aid, and it places people with material resources on a moral high ground above those who need help. Helping others, the idea posits, actually hurts them. If I give up some of my hard-earned money to help another person, I don’t just lose money; that person loses motivation and loses part of their humanity as they become dependent on the state. They ultimately drag us all down if I give them unconditional financial aid. What is in my best interest (not sharing my money) just happens to also be the economically, morally, and personally best thing for another person in less fortunate circumstances.


This idea assumes that people have only one motivation for working: making money to have nice things. It ignores the desire to feel respected and valued by others. It ignores the human desire to be engaged in meaningful pursuits. And it denies that we need love, recognition, and basic necessities before we can pull ourselves up by our bootstraps.


Johann Hari’s book Chasing the Scream is an excellent example of how wrong this mindset is, and of the horrors people can face when the rest of society thinks this way and won’t offer them sufficient help to reach a better place in life. Regarding drug addicts and addiction, Hari quotes the ideas of a Portuguese official: “addiction is an expression of despair, and the best way to deal with despair is to offer a better life, where the addict doesn’t feel the need to anesthetize herself anymore. Giving rewards, rather than making threats, is the path out. Congratulate them. Give them options. Help them build a life.”


Helping someone build a life requires a financial investment in the other person and an investment of time and attention. It also requires that we recognize our responsibility to others, and that we might even be part of the problem by not engaging with those in need. It is in our selfish interest to blame others for the plight of society or for the failures of other people. From that standpoint, punishment and ostracism are justified, but as Hari, Liebow, and the Portuguese official suggest, real relationships and getting beyond fears of dependency are necessary if we truly want to help people reach better places and move beyond the evils we want to see eliminated from the world. We can’t keep searching for ways to convince ourselves that whatever is in our self-interest is also good for the rest of the world. We have to acknowledge the damage our self-interest can cause, find ways to be responsible to the whole, and help other people build their lives in meaningful ways.

Signaling Loyalty

Politics is an interesting world. We all have strong opinions about how the world should operate, but in general, most of us don’t have much deep knowledge about any particular issue. We might understand the arguments about charter schools, abortion, or taxes, but very few of us have studied any of these areas in real depth. Anyone with a career in a specific industry understands the gap between the public perception of the industry and its deeper, more complex inner workings. But when we think about political decisions regarding any given industry or topic, we suddenly adopt easy answers that barely skim the surface of those inner dynamics.

 

If we all have strong opinions about politics without having strong knowledge of any of it, then we must ask ourselves whether politics is really about policy at all. Kevin Simler and Robin Hanson suggest that politics is generally about something other than policy. In The Elephant in the Brain they write, “Our hypothesis is that the political behavior of ordinary, individual citizens is often better explained as an attempt to signal loyalty to our side (whatever side that happens to be in a particular situation), rather than as a good-faith attempt to improve outcomes.”

 

If the main driver of politics were doing good in the world and reaching good outcomes for society, we would likely be a much more hands-off, technocratic society. Instead, we have elected a president who doesn’t seem to have a deep understanding of any major issue, but who does know how to stoke outrage and draw lines in the sand to differentiate each side. We generally look around, figure out which team we belong to based on our identity and self-interest, and separate into our camps with our distinct talking points. We don’t understand issues beyond these talking points, but we understand how they make our side look more virtuous.

 

I believe that people who are deeply religious are drawn toward the Republican Party, which currently denies climate change, partly because a society that places less emphasis on science is likely to be more favorable toward religious beliefs. The veracity of climate change and the complex science behind it is less important than simply being on a side that praises people for their religious beliefs. Similarly, I believe that people with higher education degrees are more likely to align with the Democratic Party because, at the moment, it is a party that encourages scientific and technical thought. It is a party that socially rewards the appearance of critical thinking and praises people who have gone to school. Without needing to actually know anything specific, people with degrees who appear to think within a scientific framework are elevated in the party, while people with religious beliefs are disregarded. Both parties operate in ways that signal who is valuable and who belongs on a particular side. Issues map onto these signals, but the issues and policies themselves are not the main factors in choosing a side.