Summary: The Righteous Mind

I think everyone should read Jonathan Haidt’s book, The Righteous Mind: Why Good People Are Divided by Politics and Religion. But if you don’t have time, you can read my detailed summary here.


Overview

How do people think about right and wrong? This is an empirical question, and The Righteous Mind’s primary goal is to answer it. The psychologist Jonathan Haidt tells the story of how his research led him to formulate Moral Foundations Theory, which claims that the human mind responds to combinations of six moral “taste receptors”: Care, Liberty, Fairness, Authority, Loyalty, and Sanctity.

The theory and its implications are presented in three parts. In Part I, Haidt shows that our conscious moral reasoning is done not to find the truth, but to preserve our reputations by justifying our intuitions. In Part II, he gives a full account of those intuitions and he shows how they differ across the political spectrum. In Part III, he argues that morality evolved through natural selection as a group-level adaptation; among our ancestors, the groups who were unified by collective moral principles were best able to turn resources into offspring.

In the final chapter, Haidt transitions from description to prescription, offering insights into how we might recognize the shortcomings of our own moral tribes, benefit from the most sensible ideas of other tribes, and work together to design a better society that acknowledges certain crucial facts of human psychology.

Chapter Summaries

Introduction

Because this book aims to describe the psychology of morality, it might have made sense to title it The Moral Mind. However, “righteous” better captures a feature of the mind that is crucial for understanding why we hold ourselves to certain principles: we hold each other to those principles too. We are righteous, moralistic, and judgmental. These qualities are crucial for maintaining safe and productive societies, but they also lead to much enmity and moralistic strife within our communities. In order to parlay this strife into the mutually beneficial conversations that should be possible in democratic societies, we must first recognize that we are all hypocrites, that we all mean well, and that we will understand each other better if we first understand ourselves.

Part I: Intuitions Come First, Strategic Reasoning Second

Central metaphor: The mind is divided, like a rider on an elephant, and the rider’s job is to serve the elephant.

Chapter 1: Where Does Morality Come From?

When Haidt decided to pursue moral psychology as a graduate student in the late 1980s, the field was dominated by rationalism, the theory that morality comes from reason. Rationalists believe that people become moral by learning that since they don’t like to be harmed, it must be wrong to harm others. Harm is thus central to rationalism; without it, there is no such thing as a moral dilemma. In cultures where people seem to condemn harmless acts (e.g. masturbation), rationalists assume those people must have supernatural beliefs (e.g. belief in hell) that make those acts seem harmful.

Haidt suspected that rationalism was wrong, so he conducted surveys across many cultures and reached three conclusions. First, morality varies greatly by culture, and it often concerns much more than harm. Second, people often have gut feelings about morality—often related to disgust—that they cannot justify but still hold to be true. Third, it can’t be true that morality is constructed entirely through a growing understanding of harm, or else people wouldn’t condemn acts that they themselves agree are not harmful. Thus, morality must come not from reason but from some combination of innateness and social learning. But even if this is true, a question remains: what are people doing when they reason about morality?

Chapter 2: The Intuitive Dog and Its Rational Tail

There are three hypotheses regarding the relationship between reason and intuition. One is found in Plato’s dialogue Timaeus, in which humans are described as perfect rational souls encased in imperfect emotional bodies. In Plato’s model, reason is the master and intuition is the slave. The converse of Plato’s hypothesis is that of the Scottish philosopher David Hume, who wrote in 1739 that intuition is the master and reason is the slave. A third option was put forth by Thomas Jefferson, who wrote in 1786 that reason and intuition are co-rulers of the mind.

Drawing on findings from cognitive psychology, Haidt argues that Hume was right. To help readers understand how reason and intuition work together in the mind, he develops a metaphor: intuition is an elephant, massive and non-verbal, and reason is a tiny rider who sits on top, defending the elephant’s movements but relatively powerless to steer them. This explains why you can’t change people’s minds simply by refuting their arguments; those arguments were produced by the rider to justify what the elephant was already doing. To use another metaphor, “You can’t make a dog happy by forcibly wagging its tail” (48). Dogs wag their tails to communicate their happiness, and people use reason to communicate their intuitions.

Thus, in response to rationalism, Haidt presents the social intuitionist model of moral psychology, so named because it claims that morality comes from intuitions that are shaped by genetic predispositions and social learning.

Chapter 3: Elephants Rule

Haidt presents six empirical findings that support the social intuitionist model:

  1. Intuitive judgments are powerful, instant, and constant. Experimental psychologists have long known that with every perception comes a tiny flash of either positive or negative affect, often so subtle that we don’t notice it. These flashes cause our elephants to lean one way or another, and our riders are always anticipating the next step.
  2. Political and social judgments are especially intuitive. When asked to rate emotionally loaded words (such as “happy” or “angry”) as positive or negative, people are slowed down significantly when positive words are shown next to images of unattractive people, immigrants, or the elderly. Even if such judgments would not be endorsed by our riders, they still arise without conscious control.
  3. Our bodies guide our judgments, often by activating feelings of disgust. When asked to make moral judgments in the presence of fart spray, people are more condemnatory. Associations with purity and cleanliness also affect moral judgments. When asked to make a series of judgments while standing next to a bottle of hand sanitizer, people become temporarily more conservative.
  4. Psychopaths reason but don’t feel, and are severely morally deficient. Their elephants don’t react to shame, guilt, or human dignity, and so psychopaths feel no compunction when they behave in monstrous ways. Meanwhile, their riders serve their elephants just as well as anyone’s do, which means psychopaths can reason, plan, and even charm their victims. If the rationalists were correct, then psychopaths would be just as moral as the rest of us.
  5. Babies feel but don’t reason, and have the beginnings of morality. As early as two months old, babies show evidence of their many intuitions when they stare longer at things that surprise them than at things that don’t. After being shown a short video of someone climbing a hill and being repeatedly pushed down by one person while being repeatedly helped up by another, babies are surprised to see the climber cozy up to the pusher rather than the helper. If the rationalists were correct, then babies would have no sense of right and wrong until long after they’d been personally harmed in a social environment.
  6. In the brain, moral judgments happen in the right place at the right time to support social intuitionism. When asked to make moral judgments while lying in an fMRI scanner, people show heightened activity in brain regions associated with emotional processing, and the level of activity in those regions correlates with the judgments they ultimately make. It is as if we can watch the elephant lurch in real time.

Elephants can respond to reason, especially when reason gives rise to new intuitions. But the best way to change elephants is not through argument, but through stories, art, or friendly social contexts, in which riders are not so obsessively protective of their masters.

Chapter 4: Vote for Me (Here’s Why)

The “social” part of the social intuitionist model comes from an insight of evolutionary psychology: in our ancestors’ mighty struggle to survive and find a mate, it was less important to know the truth than it was to maintain one’s reputation within the group. The rider is therefore much like a press secretary, ready at any moment to vindicate the president’s decisions. Haidt supports this view with five premises:

  1. We are obsessed with our reputations, even if we don’t realize it. In one experiment, people are told to talk about themselves into a microphone. While they speak, they can see a number on a screen that rises and falls according to an anonymous listener’s desire to meet them—or so they are told. In fact, the experimenter simply makes the number rise for some people and fall for others. Unsurprisingly, people’s self-esteem, as they describe it in the moments after, is greatly affected by the numbers they see, even when they go into the experiment believing they don’t care what other people think of them.
  2. When we believe something, we look for evidence that confirms it instead of evidence that would falsify it. Look at the sequence “2, 4, 6.” What is the pattern? When people are allowed to test their hypothesis with other sequences, they typically say things like “8, 10, 12” or “100, 102, 104,” which both confirm their hypothesis: “three numbers that increase by intervals of two.” But that’s wrong. It doesn’t occur to people to guess “1, 6, 20” or “3, 2, 1.” If they had been looking to falsify their hunches, then they might have eventually reached the correct answer: “three numbers arranged in increasing order.” We look to confirm our ideas, not to put them on trial.
  3. We are able to lie to ourselves, and believe those lies, whenever we want to conceal our own wrongdoing. In experiments where people attempt math problems and then get paid for the number they manage to solve, the majority exaggerate when told to report their own results—even more so when they can destroy the evidence. So, most people cheat some of the time, but only to the extent that they can still convince themselves that their cheating was an honest mistake; the goal is to preserve one’s reputation, and lying is easier if you believe your own lies.
  4. When evaluating a proposition we wish to believe, we tend to ask, “can I believe it?” Our riders are creative, and they can almost always come up with a reason for the answer to be “yes.” When evaluating a proposition we don’t wish to believe, we tend to ask, “must I believe it?” Our riders can almost always come up with a reason for the answer to be “no.”
  5. When called upon to defend and support our groups, we activate regions of our brains related to pleasure and pain depending on the type of information we encounter. For example, an extreme partisan’s brain shows increased activity in areas related to negative emotion and punishment when they see their beloved leader acting hypocritically; but when they learn new information that exonerates their leader, the partisan gets a flash of pleasure as dopamine is released. Our brains don’t care about the truth. They care about our status within our groups.
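The logic of the 2-4-6 task in point 2 above can be sketched in a few lines of Python. The rule and hypothesis functions here are illustrative stand-ins, not anything from the book; the point is that probes which merely fit your hypothesis can never distinguish it from the true rule, while a falsifying probe can:

```python
def true_rule(seq):
    """The experimenter's hidden rule: any strictly increasing sequence."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def hypothesis(seq):
    """The participant's overly narrow guess: numbers increasing by two."""
    return all(b - a == 2 for a, b in zip(seq, seq[1:]))

# Confirmatory probes: sequences the hypothesis already predicts are "yes".
# Both rules agree on them, so they teach the participant nothing.
for probe in [(8, 10, 12), (100, 102, 104)]:
    assert true_rule(probe) and hypothesis(probe)

# A probe chosen to falsify: it fits the true rule but not the hypothesis.
# The experimenter's "yes" reveals that the hypothesis is too narrow.
assert true_rule((1, 6, 20)) and not hypothesis((1, 6, 20))
```

Only by proposing sequences their own hypothesis predicts should fail do participants learn anything new; confirmation is informationally empty here.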

Part II: There’s More to Morality than Harm and Fairness

Central metaphor: The righteous mind is like a tongue with six taste receptors.

Chapter 5: Beyond WEIRD Morality

People in societies that are WEIRD (Western, educated, industrialized, rich, and democratic) are statistical outliers on many measures of morality. Most notably, they tend to see the world as a collection of discrete, autonomous individuals, which leads their morality to focus mainly on harm, oppression, and inequality. Most other cultures see the world as a collection of relationships that ought to be protected even at the expense of the individuals who make them up. In these cultures, and in religious and conservative sub-cultures in the West, morality involves a broader set of concerns, often involving community and divinity. Moral matrices—the collection of values shared by a particular community—bind people together and blind them to the possibility that there might be other legitimate ways to judge people and organize society.

Chapter 6: Taste Buds of the Righteous Mind

There are two predominant systems of prescriptive ethics in Western culture: utilitarianism and deontology. Utilitarianism, developed by the 18th-century English philosopher Jeremy Bentham, says that moral behavior is that which maximizes the intensity, duration, and certainty of “hedons” (measures of pleasure) and minimizes those of “dolors” (measures of pain). Deontology, developed by the 18th-century German philosopher Immanuel Kant, says that moral behavior consists in acting only on principles that one could will to become universal laws.

Though Haidt does not wish to discount the great value that utilitarianism and deontology have brought to Western society, he notes that both theories were developed by men who measured high on systematizing and low on empathy. Thus, these men were likely blind to certain crucial facts of human psychology. Haidt believes we can find a better starting point for both descriptive and prescriptive theories of ethics in David Hume, who acknowledged that morality involves numerous and often contradictory concerns that vary between cultures.

To figure out what those basic concerns are, Haidt and his research team borrowed a concept from cognitive anthropology called modularity. A module is an innate cognitive program that responds automatically to a particular perception—for example, one module is our ability to recognize faces. The first draft of any module is written in our genes by our evolutionary history, and then it gets tweaked by experience, sometimes through cultural learning. After reviewing the evidence of how people across cultures reason about morality, Haidt and his research team identified five initial candidates for modules that bear on moral psychology: care, fairness, authority, loyalty, and sanctity.

Chapter 7: The Moral Foundations of Politics

Haidt tells the full story of how he reasoned from our ancestors’ adaptive challenges to come up with a first draft of Moral Foundations Theory:

  1. Care/harm. This foundation helped our ancestors raise their children, who were completely dependent for the first few years of life. It gets activated when we see cute children or animals, especially if they are helpless and suffering.
  2. Fairness/cheating. This foundation helped our ancestors form two-way alliances and benefit from cooperation while protecting them from cheaters. It gets activated when we feel gratitude toward those who help us, and anger toward those who cheat us.
  3. Loyalty/betrayal. This foundation helped our ancestors form effective coalitions that could outcompete other groups that were less unified. It gets activated when we root for our favorite sports teams or swell with pride while singing our national anthem.
  4. Authority/subversion. This foundation helped our ancestors establish beneficial relationships within hierarchies. It gets activated when we feel respect or fear for our bosses or parents.
  5. Sanctity/degradation. This foundation helped to protect our ancestors from pathogens and unsafe food. It gets activated when we feel disgust or reverence for people or symbols, or when we shrink away from words or ideas that are taboo.

Chapter 8: The Conservative Advantage

Survey data from over 130,000 people show that liberal voters value care and fairness above all other moral foundations, and that conservative voters value all five foundations roughly equally. This explains why the Democratic Party often has such a hard time connecting with voters. Democrats can’t understand why poor rural Americans tend to vote Republican when it is the Democrats who want to distribute money more equally. The answer is that rural Americans value all five moral foundations, which are found in Republican rhetoric but not in Democratic rhetoric. As Haidt says, “Republicans understand moral psychology. Democrats don’t.” (156)

In the summer of 2008, Haidt was nervous that Obama’s two-foundation rhetoric might cost the Democrats the presidency. To help Democrats appeal to more voters, Haidt wrote an essay for Edge.org with the title, “What Makes People Vote Republican?” He contrasted two visions of society—one from the utilitarian philosopher John Stuart Mill, who viewed the ideal society as one that prized individual autonomy above all else, and another from the French sociologist Émile Durkheim, who saw society as a set of valuable relationships that had developed organically over time as people learned how to suppress each other’s selfishness. Haidt explained in the essay that a Durkheimian society is a better fit for human psychology, and that the Democrats should try to capitalize on that fact as much as Republicans do.

Haidt received many emails from both liberals and conservatives after writing that essay, and they contained moral content that could not be easily categorized using Moral Foundations Theory in its current form. This led him to split the Fairness/cheating foundation into two separate foundations:

  1. Liberty/oppression. This foundation helped our egalitarian hunter-gatherer ancestors to rally against tyrannical alpha males who abused their power within the hierarchy of the group. For liberals, this foundation is almost as important as Care/harm, and it motivates them to advocate for sub-groups who appear to have been oppressed by traditional social structures.
  2. Fairness/proportionality. This foundation helped our ancestors to maintain moral communities in which people were punished and rewarded in proportion to their deeds. This foundation is valued on both sides of the political spectrum, but slightly more on the right; conservatives resent liberal policies that appear to reward free riders.

Part III: Morality Binds and Blinds

Central metaphor: We are 90 percent chimp and 10 percent bee.

Chapter 9: Why Are We So Groupish?

In The Descent of Man, Charles Darwin proposed the idea that certain moral emotions were selected for because they gave a competitive advantage to groups who could better unify to turn resources into offspring. This hypothesis is called group selection, and it fell out of favor in the 1970s when the evolutionary biologists George Williams and Richard Dawkins argued that traits which appear to benefit the group are better explained by selection acting on individuals. To sum up their argument, “A fast herd of deer is nothing more than a herd of fast deer” (196).

The debate has been revived in recent years, and Haidt believes group selection provides the best explanation for many features of the mind. He presents four premises in defense of group selection:

  1. Throughout the history of life on earth, there have been around half a dozen “major transitions,” in which multiple individual carriers of genes bound themselves together to increase their collective chance of reproducing. This is what happened roughly 2 billion years ago when two prokaryotes combined to create the first eukaryote, which had multiple organelles that could then only reproduce when the whole cell did. This happened again when eukaryotes first combined to form multi-cellular organisms, and again when certain animals began cooperating to defend a shared nest, to feed offspring over an extended period, and to win conflicts against other groups. Good examples of these animals are ants and bees, which accomplish much more as colonies than they would as individuals. Another example is Homo sapiens after the development of agriculture.
  2. Before our ancestors began to work well in groups, they must have had shared intentionality: the ability to recognize when other people wanted to achieve the same goals as them. This ability must have predated the development of language, since language is not a relationship between a word and an object but an agreement between people over a relationship between a word and a mental representation. Shared intentionality allows for the development of a moral matrix, and it explains how group selection could have bound together communities of non-kin humans in the same way that kin relations bound together hives of ants and bees.
  3. Genes and culture coevolve. For example, the first adult cattle-herders couldn’t digest cow milk because lactase, the enzyme that breaks down lactose, used to be produced only in children young enough to breastfeed. But over many generations, the children whose lactase production persisted longer than usual could benefit from cow milk for longer than their peers, which ultimately led to whole communities of adults who could digest milk. The cultural practice of cattle herding drove the genetic evolution of lactase production in adults, which in turn enabled the creation of further dairy products such as cheese and yogurt. Similarly, it is plausible that cultural developments such as moral matrices led humans to choose more cooperative mates than they otherwise would have, which made some groups more successful than others, and this in turn drove the genetic evolution of groupish behavior.
  4. Evolution can happen fast, and it works on the level of groups. If you want to breed chickens to lay more eggs, it doesn’t make sense to breed the individuals that lay the most eggs, since those are usually the most aggressive ones in each cage, and the death rate would increase. If you instead select the most productive cages, the death rate will plummet in just a few generations, leaving you far more cooperative chickens and many more eggs. Similarly, our own ancestors faced several near-existential disasters that wiped out huge portions of the global population; it seems very plausible that the people who survived to pass on their genes were not the most aggressive individuals, but the most cooperative groups.
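The chicken-breeding logic in point 4 can be made concrete with a toy model. The numbers and payoff function below are purely illustrative assumptions, not data from the book or from the original experiment: each hen has an aggression level that gives her a small personal boost in egg-laying while imposing a larger cost on her cagemates.

```python
# Toy model of selecting egg-layers (illustrative numbers, hypothetical payoffs).
def individual_eggs(aggression, cagemates, cost=0.5):
    # A hen gains from her own aggression; her cagemates pay the cost.
    return 10 + aggression - cost * sum(cagemates)

def cage_total(cage, cost=0.5):
    # Total eggs laid by a whole cage.
    return sum(
        individual_eggs(a, cage[:i] + cage[i + 1:], cost)
        for i, a in enumerate(cage)
    )

docile = [0, 0, 0, 0]
aggressive = [3, 3, 3, 3]
mixed = [3, 0, 0, 0]

# Individual selection: the single best layer in the mixed cage is the
# aggressive hen, who gains at her cagemates' expense -- breeding her
# spreads aggression through the flock.
per_hen = [individual_eggs(a, mixed[:i] + mixed[i + 1:])
           for i, a in enumerate(mixed)]
assert max(per_hen) == per_hen[0]  # the aggressive hen lays the most

# Group selection: the docile cage out-produces the aggressive cage overall,
# so selecting whole cages breeds cooperative flocks instead.
assert cage_total(docile) > cage_total(aggressive)
```

The design point is that the two selection criteria pull in opposite directions: picking the best individuals rewards traits that are costly to the group, while picking the best cages rewards cooperation.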

Haidt does not claim that all of human psychology is the product of group selection—far from it. Rather, something like 90% of our mental modules resulted from individual selection over millions of years after our ancestors diverged from those of chimpanzees. But once our ancestors began to evolve alongside culture, they developed a few bee-like mental modules that bound them together into increasingly cooperative groups.

Chapter 10: The Hive Switch

Most of the time we pursue individual goals and maintain a strong sense of self. But in certain situations, we are apt to lose ourselves and feel that we are part of a greater whole. Haidt calls this the “hive switch.” It can be activated in many ways, such as when we feel awe in nature, when we take drugs such as LSD and MDMA, or when we move in synchrony with many other people (e.g. soldiers marching in time, or dancers at a rave).

Two recent findings in neuroscience support the idea of the hive switch and give some insight into how it operates. One is the hormone oxytocin, which makes us more trusting and more caring toward members of our in-groups. The other is mirror neurons, which activate when we perceive another person’s behavior and feel compelled to imitate them, especially when that person shares our moral matrix; this is why we automatically return our friends’ smiles, for example. The hive switch offers three insights for institutions that want to make their members more cooperative: increase similarity, not diversity; exploit synchrony; and create healthy competition among teams, not individuals.

Chapter 11: Religion Is a Team Sport

After the 9/11 attacks in 2001, a profusion of new books decried religion as a parasite that preys on unscientific minds. This view was promulgated by the New Atheists—Richard Dawkins, Sam Harris, Daniel Dennett, and Christopher Hitchens—who viewed religion primarily as a set of supernatural beliefs that directly cause all religious behaviors, from peaceful congregations to suicide bombings. The New Atheists saw that religion comes with the risk of great violence, and believed that there are better ways to glean its supposed benefits. So, they claimed that the world would be better without it.

Haidt argues that religion was a critical innovation that evolved through cultural group selection to produce much of what we now value in society. The reason the New Atheists fail to see this is that their model of religious psychology involves only two factors—belief and action—when there is in fact a crucial third: belonging. Haidt agrees with the New Atheists that the first step in creating the mental module for religion is in our tendency to see patterns where none exist (e.g. we see faces in the clouds). But where the New Atheists think the next step is that religion parasitically passes from one brain to the next, Haidt thinks the next step is the creation of moral capital. Whenever a religion is compelling enough to be shared by an entire community, it provides a framework for a shared moral matrix, which makes the group more cohesive and thus better at turning resources into offspring.

Evolutionary fitness isn’t the same thing as societal health, but perhaps religion deserves more credit as one of the forces that enabled our ancestors to cooperate so successfully on such a large scale. As Haidt points out, atheistic societies did not exist until recently (in parts of Europe), and they are downright horrible at turning resources (of which they have many) into offspring (of which they have few).

Chapter 12: Can’t We All Disagree More Constructively?

What makes a person vote liberal or conservative? Drawing from studies of identical twins separated at birth, Haidt shows that a substantial share of the variance in the relevant personality traits is attributable to genetics rather than to environmental factors. As people grow up, they construct life narratives that work in tandem with their innate tendencies and make them more or less likely to identify with various political narratives. Voters all across the political spectrum are psychologically normal, they all mean well, and they all have insights into how to build a healthy society.

In Haidt’s opinion, liberals offer two crucial insights that other groups tend to miss. One is that certain institutional norms can put sub-groups at a disadvantage, and we should work to make sure nobody is systematically oppressed in pursuit of the greater good. The second liberal insight is that governments should play some role in regulating the activities of corporations, which often produce harmful externalities. Haidt believes that libertarians, whose morality is singularly focused on the liberty/oppression foundation, have an important insight as well: markets are often miraculous. If governments are too zealous in their regulations, we end up paying far more for the products and services we all need, and we discourage innovation. Lastly, Haidt says that social conservatives have an important insight that liberals rarely understand: by undermining certain traditional social structures, we sometimes end up hurting the people we are trying to help.

To build a healthy society, we should aim not to wipe out ideologies that differ from our own, but to recognize that each political group is made up of good people who have important things to say.

Conclusion

Haidt concludes the book by reminding the reader of the central metaphors in each section. In Part I, he explained that intuitions come first, reasoning second. He advises that as a souvenir from this section of the book, we should keep a vision of ourselves and everyone around us as tiny riders on massive elephants. In Part II, he explained that there is more to morality than harm and fairness. As a souvenir from this section, we should keep a healthy suspicion of anyone who claims there is a single morality that is appropriate for all societies at all times. Lastly, in Part III, he explained that we are 90 percent chimp, 10 percent bee. As a souvenir from this final section, we should keep the image of the hive switch, which can engage our moral emotions and enable us to become selfless parts of a greater whole.