Book Review: COVID-19 and Psychology: People and Society in Times of Pandemic by John G. Haas

Published by Anonymous (not verified) on Sat, 02/07/2022 - 7:00pm in

In COVID-19 and Psychology: People and Society in Times of Pandemic, John G. Haas explores the psychological impact of the COVID-19 pandemic at all levels of society. This book will be useful for those in the social sciences, policymakers and the general public looking to understand how to build resilience through social support and combat the fear of … Continued

Implicit Attitudes, Science, and Philosophy (guest post)

Published by Anonymous (not verified) on Wed, 25/05/2022 - 12:03am in

“Philosophers, including myself, have for decades been too credulous about science, being misled by scientists’ marketing and ignoring the unavoidable uncertainties that affect the scientific process…”

The following is a guest post* by Edouard Machery, Distinguished Professor in the Department of History and Philosophy of Science at the University of Pittsburgh and Director of the university’s Center for Philosophy of Science. It is the first in a series of weekly guest posts by different authors at Daily Nous this summer.

[Anni Albers, “Intersection” (detail)]

Implicit Attitudes, Science, and Philosophy
by Edouard Machery

How can we be responsible and savvy consumers of science, particularly when it gives us morally and politically pleasing narratives? Philosophers’ fascination with the psychology of attitudes is an object lesson.

Some of the most exciting philosophy in the 21st century has been done with an eye towards philosophically significant developments in science. Social psychology has been a reliable source of insights: consider only how much ink has been spilled on situationism and virtue ethics or on Greene’s dual-process model of moral judgment and deontology.

That people can have, at the same time, perhaps without being aware of it, two distinct and possibly conflicting attitudes toward the same object (a brand like Apple, an abstract idea like capitalism, an individual like Obama, or a group such as the elderly or women philosophers) is one of the most remarkable ideas to come from social psychology: in addition to the attitude we can report (usually called “explicit”), people can harbor an unconscious attitude that influences behavior automatically (their “implicit” attitude)—or so we were told. We have all grown familiar with (and perhaps now we have all grown tired of) the well-meaning liberal who unbeknownst to them harbors negative attitudes toward some minority or other: women or African Americans, for instance.

While it was first discussed in the late 2000s—Tamar Gendler discussed the Implicit Association Test in her papers on aliefs, and Dan Kelly, Luc Faucher, and I discussed how implicit attitudes bear on issues in the philosophy of race—this idea crystallized as an important philosophical topic through the series of conferences Implicit Bias & Philosophy, organized by Jennifer Saul in the early 2010s at Sheffield. This conference series led to two groundbreaking volumes edited by Michael Brownstein and Jennifer Saul (Implicit Bias and Philosophy, Volumes 1 and 2, Oxford University Press). By then, philosophers’ fascination with implicit attitudes was in sync with the obsession with the topic in society at large: implicit attitudes were discussed in dozens of articles and op-eds in the New York Times, by then-President Obama, and by Hillary Clinton during her presidential campaign. We were lectured to be on the lookout for our unconscious prejudices by deans and provosts, well-paid consultants on “debiasing,” and journalists.

Most remarkable is the range of areas of philosophy that engaged with implicit attitudes. Here is a small sample:

  • Moral philosophy: Can people be held responsible for their implicit attitudes?
  • Social and political philosophy: Should social inequalities be explained by means of structural/social or psychological factors?
  • Metaphysics of mind: What kind of things are attitudes? How to think of beliefs in light of implicit attitudes?
  • Philosophy of cognitive science: Are implicit attitudes propositional or associations?
  • Epistemology: How should implicit bias impact our trust in our own faculties?

The social psychology of implicit attitudes also had another kind of impact in philosophy: it provided a ready explanation of women’s embarrassing underrepresentation and of the persisting inequalities between men and women philosophers. Jennifer Saul published a series of important articles on this theme, including “Ranking Exercises in Philosophy and Implicit Bias” in 2012 and “Implicit Bias, Stereotype Threat, and Women in Philosophy” in 2013. In the first article, after summarizing “what we know about implicit bias” (my emphasis), Saul concluded her discussion of the Philosophical Gourmet Report as follows:

There is plenty of room for implicit bias to detrimentally affect rankings of both areas and whole departments. However, it seems to me that this worry is much more acute in the case of whole department rankings. With that in mind, I offer what is sure to be a controversial suggestion: abandon the portion of the Gourmet Report that asks rankers to evaluate whole departments.

The British Philosophical Association was receptive to explaining gender inequalities in philosophy by means of implicit biases and to this day implicit attitudes are mentioned on its website. Of course, by doing so, philosophers were just following broader social trends in English-speaking countries.

Looking back, it is hard not to find this enthusiasm puzzling since the shortcomings of the scientific research on implicit attitudes have become glaring. In “Anomalies in Implicit Attitudes Research,” recently published in WIREs Cognitive Science, I have identified four fundamental shortcomings, which are still not addressed after nearly 25 years of research:

  • It isn’t yet clear whether the indirect measurement of attitudes (via, e.g., the IAT) and their direct measurement measure different things; in fact, it seems increasingly dubious that we need to postulate implicit attitudes in addition to explicit attitudes.
  • The indirect measurement of attitudes predicts individuals’ behavior very poorly, and it isn’t clear under what conditions their predictive power can be improved.
  • Indirect measures of attitudes are temporally unstable.
  • There is no evidence that whatever it is that indirect measures of attitudes happen to measure causally impacts behavior.
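The second and third shortcomings are linked by a basic psychometric fact: a measure’s test-retest reliability caps how well it can predict anything. The sketch below illustrates this with Spearman’s attenuation formula; the reliability figures are illustrative assumptions for the sake of the example, not results reported in the article.

```python
import math

def max_observed_correlation(reliability_x: float, reliability_y: float) -> float:
    """Spearman's attenuation formula: the correlation between two observed
    scores cannot exceed sqrt(rel_x * rel_y), even when the underlying
    constructs are perfectly correlated."""
    return math.sqrt(reliability_x * reliability_y)

# Illustrative (assumed) numbers: a temporally unstable indirect measure
# with test-retest reliability around .5, predicting a behavioral
# criterion measured with reliability around .8.
ceiling = max_observed_correlation(0.5, 0.8)
print(f"Ceiling on observable correlation: {ceiling:.2f}")
```

On these assumed numbers, no study could observe an indirect-measure-to-behavior correlation above roughly .63, and an unstable measure pushes that ceiling down further — which is why temporal instability and poor prediction travel together.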

These four shortcomings should lead us to question whether the concept of implicit attitudes refers to anything at all (or, as psychologists and philosophers of science put it, to question its construct validity). To my surprise, leading researchers in this area such as psychologist Bertram Gawronski and philosophers Michael Brownstein and Alex Madva agree with the main thrust of my discussion (see “Anomalies in Implicit Attitudes Research: Not so Easily Dismissed”): indirect measures of attitudes do not measure stable traits that predict individuals’ behavior.

It thus appears that many of the beliefs that motivated philosophical discussion of implicit attitudes are either erroneous or scientifically uncertain—why worry about how to limit the influence of implicit attitudes in philosophy when they might not have any influence on anything at all?—and that philosophers have been way too quick to reify measures (the indirect measures of attitudes) into psychological entities (implicit attitudes).

Hindsight is of course 20/20, and it would be ill-advised to blame philosophers (including my former self) for taking seriously science in the making. On the other hand, philosophers failed to even listen and a fortiori to give a fair hearing to the dissenting voices challenging the relentless hype by implicit-attitudes cheerleaders. The lesson is not limited to implicit attitudes: the neuroscience of meditation, the neuroscience of oxytocin, the so-called love molecule, the experimental research on epigenetics in humans, and the research on gene x environment interaction in human genetics also come to mind.

Philosophers, including myself, have for decades been too credulous about science, being misled by scientists’ marketing and ignoring the unavoidable uncertainties that affect the scientific process: the frontier of science is replete with unreplicable results, it is affected by hype and exaggeration (COVID researchers, I am looking at you!), and its course is shaped by deeply rooted cognitive and motivational biases. In fact, we should be particularly mindful of the uncertainty of science when it appears to provide a simple explanation for, and promises a simple solution to, the moral, social, and political ills that we find repugnant such as the underrepresentation of women in philosophy and elsewhere and enduring racial inequalities in the broader society.


Weird Consilience: A Review of Joseph Henrich’s ‘The WEIRDest People in the World’

Published by Anonymous (not verified) on Sat, 21/05/2022 - 12:55am in

Download: PDF | EPUB

The anthropologist Joseph Henrich has written a book called The WEIRDest People in the World. It offers a captivating look at the roots of Western psychology (and capitalism). Here is my in-depth review.

* * *

Edward O. Wilson, the famed evolutionary biologist and entomologist, argued that the goal of science should be to construct a single ‘consilient’ tree of knowledge. For the most part, the natural sciences have achieved this vision. Modern chemistry is rooted in quantum physics, and the study of biology is based on organic chemistry. When we get to the social sciences, however, we run into problems. On the spectrum of life on Earth, human behavior seems so exceptional that it is difficult to make the social sciences fit with the rest of biology.

The trouble is human culture.

Or more precisely, the dilemma is how to make sense of culture in light of evolution. One option is to claim that culture doesn’t really exist, meaning the behaviors we think are ‘learned’ are actually instinctive. This idea is clearly wrong. It implies, for example, that reading is genetic. And yet the spread of literacy has been so rapid that it cannot possibly be due to changing genes. Another option is to claim that humans are ‘blank slates’ whose behavior is determined almost completely by culture. But since some behaviors are obviously instinctive (i.e. breathing), we find that the supposedly ‘blank’ slate is not actually empty.

So the truth about human behavior lies somewhere in the middle; actions are determined jointly by genes and culture. Okay, but then where does culture come from? Surprisingly, it took a long time for scientists to realize the answer. Similar to genes, cultures evolve.

The roadblock to studying cultural evolution was mostly philosophical. For much of the 20th century, scientists tried to reduce evolution to competition between individuals. (Richard Dawkins popularized this worldview in his book The Selfish Gene.) While the individualist lens works well for animals that are asocial, for social animals it leads to a large blind spot: it negates the idea of group-level adaptations. And as it turns out, that’s the best way to understand culture. Human culture is a group-level adaptation. The idea is that cultures evolve when groups compete. Winning groups spread their culture. Losing groups don’t.

In the last few decades, the idea of cultural evolution has become more popular, giving rise to some fascinating new research. What’s important is that cultural evolution gives us a lens to make sense of history — a lens that is consilient with the rest of evolutionary biology.

Joseph Henrich’s book The WEIRDest People in the World is a major contribution to the study of cultural evolution. Like many big-picture histories, Henrich traces the evolution of Western culture. However, the story that Henrich tells is highly original. He argues that Western culture arose from norms around sex. Let’s say that again: Henrich claims that the rise of the West stems from norms around sex.

Skeptical? So was I. But after reading Henrich’s book, I am convinced that he is on the right track. It seems that we may owe Western culture to an odd little religion that got obsessed with banning all forms of incest. Of course, WEIRDest People doesn’t have all the pieces of the puzzle. (There are some important omissions, which I’ll discuss.) But overall, Henrich’s book offers a compelling new perspective on human history.

WEIRD psychology

In WEIRDest People, Henrich approaches history from an unusual angle. In epics about the rise and fall of civilizations, we rarely hear about psychology. And yet that is precisely where Henrich begins his account of Western history.

Some backstory. For most of the last century, scientists assumed that human psychology was roughly universal. The idea was that Amazonian hunter-gatherers (for example) would respond to psychological tests similarly to American college students. This assumption made life easy for psychologists. They could study college students (who were cheap and easy test fodder) and then assume that their results would hold across all cultures. Unfortunately, most psychologists never bothered to verify that this assumption was true.

It was an Aristotelian mistake.1

It turns out that human psychology varies significantly across cultures. Worse still, college students (who tend to be rich and mostly from Western societies) are not in the middle of the pack. No, in almost every way, college students are weird. Hence the title of Henrich’s book, The WEIRDest People. The word ‘WEIRD’ is Henrich’s acronym for people who are Western, Educated, Industrialized, Rich and Democratic. Compared to other cultures, WEIRD people are:

  1. more individualistic
  2. more ‘impersonally prosocial’ (trusting of strangers)
  3. less prone to in-group favoritism
  4. more focused on mental states (when judging ethics)
  5. more analytical
  6. more prone to universalism
  7. more overconfident

A decade ago, Henrich and his colleagues documented the unusual features of WEIRD psychology. In The WEIRDest People, Henrich tries to explain how these traits ‘evolved’. I’ve used scare quotes here because to many social scientists, the word ‘evolved’ means ‘encoded in genes’. Henrich, however, is certain that there’s nothing genetic about WEIRD psychology. It is a product of cultural evolution.

Backing this claim, Henrich notes that WEIRD people tend to be highly literate. But since people from all cultures can learn to read (given sufficient opportunity), literacy cannot be genetic. Bolstering this reasoning, evidence suggests that learning to read alters both our brains and our psychology. Compared to people who cannot read, literate populations tend to have thicker corpora callosa and (oddly) worse facial recognition.2

So if not genetics, then what explains WEIRD psychology? Is it a byproduct of industrialization? Or maybe a consequence of the Enlightenment? Henrich thinks not. Instead, he argues that the seeds of Western psychology were planted more than a millennium ago, largely by accident. What happened is that Europeans got obsessed with incest.

Incest taboos

The idea that Western psychology was caused by incest taboos seems outlandish. Yet once Henrich works through the evidence, the hypothesis seems plausible. That’s because incest taboos cut to the core of kin relations. And kin relations cut to the core of human organization.

So let’s talk about bans on incest.

To most people, the notion of incest elicits a strong ethical reaction. Incest is wrong. But why do we feel this way? The (seemingly) obvious answer is that it’s instinctive. We know that inbreeding is damaging to offspring fitness. Therefore, sexually reproducing organisms (like humans) will evolve instincts to avoid it.3

Although it seems plausible that human incest taboos are instinctive, this claim runs into two major problems. First, incest taboos vary greatly between cultures. Second, these taboos often ban relationships that have nothing to do with biological incest. For example, Woody Allen was widely criticized when he married the adopted daughter of his long-time partner. (Doing so clearly violated Western norms.) And yet the relationship was not ‘incestuous’ in any biological sense. So it seems that like most human behaviors, incest taboos are the joint product of genes and culture. Whatever incest-avoiding instinct we might have, culture can ramp it up or down.

So incest taboos are cultural. Fine. But why should we care about them? While these taboos are certainly titillating (especially if they are drastically different from our own), it’s hard to see why they are of scientific interest. And yet Henrich puts these taboos at the center of his work. That seems odd … but it is actually quite clever.

To understand why incest taboos are important to the study of cultural evolution, consider the following question: why do royal families tend to inbreed?4 One possibility is that marrying close relatives is some sort of royal fetish. But if so, why is royal inbreeding so common throughout history? A more plausible explanation is that the social structure of royalty somehow demands incestuous marriage.

To make sense of this possibility, note that marriage is about more than just sex. Marriage is an institution that cements bonds between families. And when it comes to royalty, these family ties are key. To be ‘royal’ is to be part of an extended lineage that traces bloodlines back to a common noble ancestor. Now, a key feature of bloodlines is that they tend to dilute with each generation. And this dilution, in turn, threatens the integrity of the clan. To combat this problem, kin-based groups often rely on marriage between relatives as a way to reinforce the lineage. Hence the tendency for royals (the quintessential kin group) to inbreed.
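The dilution Henrich describes is simple arithmetic: each generation halves an individual’s expected genetic tie to any given ancestor, and ties between cousins fall off even faster. A minimal sketch using standard coefficients of relatedness (textbook figures, not numbers from the book):

```python
def relatedness_to_ancestor(generations_back: int) -> float:
    """Expected fraction of genes inherited from a single ancestor
    g generations back: halved at each generation."""
    return 0.5 ** generations_back

def cousin_relatedness(degree: int) -> float:
    """Coefficient of relatedness between cousins of a given degree.
    Cousins of degree n share two common ancestors n+1 generations back,
    giving r = 2 * (1/2) ** (2 * (n + 1))."""
    return 2 * 0.5 ** (2 * (degree + 1))

# The tie to a founding ancestor dilutes fast: 1/2, 1/4, 1/8, 1/16, ...
print([relatedness_to_ancestor(g) for g in range(1, 5)])
# First cousins share 1/8 of their genes; third cousins only 1/128.
print(cousin_relatedness(1), cousin_relatedness(3))
```

Marriage between relatives counteracts exactly this decay, which is why a lineage intent on staying ‘royal’ keeps marrying inward.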

Today, this royal tendency seems like a quirk. But it is actually a remnant of the past. In Western society, royals are the last vestiges of a social structure called ‘intensive kinship’, in which groups are organized around blood ties. This social structure, Henrich argues, once dominated Europe. In fact, it likely dominated most civilizations. So if you think that marrying your cousin is odd, it’s because you come from a weird new culture that rejects kin-based organization. The goal of Henrich’s book is to explain how this WEIRD culture evolved.

Human history in three acts

After introducing the reader to WEIRD psychology, Henrich attempts to reconstruct the evolution of Western society, starting with the big picture. According to Henrich, human history has three acts. In Act I, humans lived as hunter-gatherers. In Act II, we started to farm. And in Act III, we built a global industrial civilization.

Each of these acts, Henrich argues, came with a distinct form of social organization. Hunter-gatherers built their (small) groups around loose networks of kin — a social structure that Henrich calls ‘extensive kinship’. With agriculture, humans started to build tight-knit clans based around bloodlines — a form of organization that Henrich calls ‘intensive’ kinship. And with industrialization, humans built massive groups based on ‘voluntary’ (non-kin) organization.

Let’s take a tour through each act.

Act I: Extensive kinship

Henrich’s discussion of extensive kinship is brief, and is designed mostly to highlight what extensive kinship is not. It is not intensive kinship. And it is not large-scale, voluntary organization.

Basically, Henrich thinks that early human groups organized using informal kin networks that served mostly as safety nets. So if your hunt failed, for example, you could get food from your neighboring kin. In other words, kin were people you could trust, but not people you could command. As I see it, that’s the defining difference between ‘extensive’ and ‘intensive’ kinship. When agrarian societies started to build tight-knit extended lineages, the effect was to create a hierarchy. The patriarch could tell the rest of the family what to do. In ‘extensive’ kinships, however, there was no chain of command.5

The difference between extensive and intensive kinship, Henrich argues, is evident in differing incest taboos. Similar to WEIRD people, foraging societies organized using extensive kinship tend to have fairly expansive incest taboos. For example, the Ju/’hoansi peoples ban marriage between third cousins (and closer). Other foraging groups, like the Wathaurung, organize in ‘clans’ and then ban intermarriage within a clan. (Many agrarian groups encourage marriage within the extended clan.) The effect of these expansive taboos, Henrich claims, is to suppress the formation of lineages.

So in terms of their marriage norms, early humans were likely similar to modern, Western societies. And yet Westerners organize into groups that are thousands of times larger than their ancient counterparts. How?

The answer, Henrich claims, is that Western societies have developed a suite of institutions that enable large-scale voluntary organization. For example, a WEIRD person would think nothing of driving to a neighboring city and ordering a coffee from a stranger. While seemingly banal, such behavior is quite odd. Throughout most of human history, approaching strangers uninvited meant risking death. Today, members of industrialized societies take for granted the host of norms and laws that enable interactions between strangers. However, early humans had no such norms, and so interaction between groups was dominated by violence.

If Henrich is correct, human prehistory was spent in small egalitarian bands that were connected by blood ties, but not bound by them.

Then everything changed.

Act II: ‘Scaling up’ with intensive kinship

In Act II of human history, groups began to organize on progressively larger scales. This shift clearly had something to do with the emergence of agriculture. However, Henrich is more concerned with the changing social structure that came with it. As groups got larger, they abandoned the loose bonds of extensive kinship and adopted a tighter network built on ‘intensive’ kinship.

To frame this change, we need to think about how and why some groups are able to organize on large scales, while others cannot. Perhaps the best way to understand this issue is to look at the history of European colonialism. When Europeans took over the world, their favorite technique was to ‘divide and conquer’. The idea is that to suppress resistance to colonial rule, you play local groups against each other. I have always been fascinated that this tactic worked. More generally, the fact that colonialism ever works seems a mystery.

Colonialism has a huge math problem: in most circumstances, the local population vastly outnumbers the invaders. Given this disadvantage, you’d think that the conquerors would consistently get massacred. True, anti-colonial victories did happen. (In North America, the most famous example is likely the Battle of the Little Bighorn, where a U.S. Cavalry force was annihilated by a unified group of Lakota Sioux, Northern Cheyenne, and Arapaho fighters.) Still, such victories were rare. Why?

A common answer is that Europeans had better weapons. (‘Guns, germs and steel’, as Jared Diamond put it.) While surely true, this explanation is not the whole story. When anthropologists began to study the peoples that Europe had previously conquered, they noticed that local groups also played the colonial game (on a smaller scale). An invading group would conquer or displace neighboring tribes, despite being outnumbered (in total). Here, the technological playing field was level. And yet the neighboring groups didn’t unify to repel their invaders.6 Again, why?

A plausible explanation is that these small groups did not unify because they lacked the culture to do so. In other words, there is nothing ‘natural’ about large-scale organization. It does not happen automatically in the face of danger. Instead, large-scale organization requires a suite of cultural tools that take time to develop. So if a population of Paleolithic people were suddenly threatened by the US military, they would not (and likely could not) mirror its command structure. And that is why colonialism ‘works’.

Looking at the cultural tools used by Western groups — things like money, property rights, laws, regulations, contracts, etc. — it is tempting to see them as the ‘normal’ way of organizing large groups. But these tools are fairly recent inventions. Long before they existed, societies ‘scaled up’ using a different technique: they ritualized kinship.

The idea is that the easiest route to social scale is not the laws and regulations that today enable ‘voluntary’ organization. The simpler path is to take humanity’s innate kin bias and ramp it up. Here’s how you do it. You track bloodlines back to a revered ancestor. You invent gods who oversee the extended lineage. You create rites of passage that give age cohorts a shared identity. You sanctify obligations between clans. You revere family ties. And so on. The effect of this ritualization is to solidify the extended lineage, allowing kin-groups to scale up.

By ritualizing blood ties, however, you also create problems. You set the stage for rule by birthright, and the despotism that goes with it.

Given this problem, why didn’t humans take the more ‘rational’ approach: skip divine kingship and go straight to representative democracy? The likely answer is that this ‘shortcut’ was not an option. Cultural evolution does not invent new designs from scratch. Instead, it builds on what exists. And what existed, when human societies first started to scale up, was an innate bias towards kin — a feature of our primate heritage. Cultural evolution took this bias and went to town. The result was that intensive kinship conquered the world.7

The limits of intensive kinship

Had Henrich been writing 3000 years ago, the story would essentially end here. During the Neolithic era, humans began to organize using intensive kinship, and this form of organization spread everywhere. Finis.

Of course, we know that the story does not end there. Today, we have organizations like Walmart and the US government — institutions that dwarf most previous human groups and yet are not built on kinship. Where did these ‘voluntary’ organizations come from? And why did they eventually replace intensive kinship as the dominant mode of organization?

A plausible answer is that intensive kinship comes with inherent limits, which non-kin organization managed to sidestep.

To understand these limits, we must first understand what intensive kinship does. In simple terms, it takes the nested structure of an extended lineage and turns it into a hierarchy. Figure 1 illustrates. When you trace bloodlines (indicated by straight lines), you inevitably get a family tree that has a nested structure: one founding ancestor gives rise to a tree of descendants. Intensive kinship takes this tree structure and uses it to create power relations. Within the clan, status depends on proximity to the ‘maximal lineage’ (the founding ancestor). By ritualizing bloodlines, intensive kinship unifies sub-groups who might otherwise be enemies.

Figure 1: Using kinship to create a hierarchy. This figure shows Henrich’s illustration of a segmentary lineage, a form of organization typical of intensive kinship. Segments of the population (curved lines) are organized hierarchically based on their lineage (straight lines).

When this ritualization of kin structure first emerged, the hierarchical bonds were likely loose. However, we know from history that these bonds eventually tightened into a strict chain of command. The result was kin-based dictatorships (i.e. monarchies).

Here we arrive at the problems with intensive kinship: it sets societies on the path to rule by birthright, which is not the nicest of institutions. Rule by birthright leads to things like divine kingship, a permanent aristocracy, and born servitude. In short, kin-based hierarchies tend to entrench inequality, which breeds resentment and instability.

Another problem with kin-based hierarchies is that blood ties are a poor way of choosing successors. Every generation, you roll the dice to see if the ruler will produce a ‘legitimate’ heir. If the ruler is infertile (or produces heirs out of wedlock or of the wrong sex), you end up with conflict. This issue of succession is no small matter, as Figure 2 illustrates. From 1648 to 1713, roughly one third of all interstate wars were fought (at least in part) over succession. Fortunately this fraction decreased over the following centuries, as states abolished (or limited the power of) hereditary monarchs. But going back in time, it seems likely that succession was a major source of war.

Figure 2: Wars of succession. This figure shows the fraction of interstate wars that involved issues of succession. The data is from Kalevi Holsti’s book Peace and War: Armed Conflicts and International Order, 1648–1989.

Related to the issue of succession is the problem of polygyny — the tendency for elite males to hoard wives. While humans likely evolved as a mildly polygynous species (a fact we infer from size differences between sexes), rule by birthright pressures elite males to be wildly polygynous. The formula is simple: more wives bring a higher chance of producing an heir, provide a conspicuous way to display power, and serve as a tool for building political alliances.

The trouble is that this hoarding of wives forces low status males into bachelorhood. Figure 3 illustrates the problem. On the left, monogamy means that every male can have a partner (at least in theory). On the right, a moderate amount of polygyny means that a large portion of men become unwilling bachelors.

Figure 3: The problem with polygyny. This figure shows Henrich’s illustration of ‘polygyny’s math problem’. On the left, monogamy means that every male finds a mate (at least in principle). On the right, a mild amount of polygyny means that a large portion of males become forced bachelors. This demographic shift leads to intense male conflict, which is corrosive to group cohesion.
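Because ‘polygyny’s math problem’ is ultimately just arithmetic, it can be checked with a toy calculation. Here is a minimal Python sketch using made-up numbers (the function, its parameters, and the scenarios are my own illustration, not Henrich’s model):

```python
# Toy sketch of 'polygyny's math problem' (hypothetical numbers):
# with equal numbers of men and women, every extra wife taken by a
# high-status man leaves another man unmatched.

def forced_bachelor_rate(n_men, n_women, elite_share, wives_per_elite):
    """Fraction of men left without a partner, assuming the elite
    secure their wives first."""
    n_elite = int(n_men * elite_share)
    wives_claimed = n_elite * wives_per_elite
    # Women remaining for the non-elite men
    women_left = max(n_women - wives_claimed, 0)
    matched = n_elite + min(women_left, n_men - n_elite)
    return (n_men - matched) / n_men

# Monogamy: every man can (in principle) find a partner.
print(forced_bachelor_rate(1000, 1000, 0.0, 1))   # → 0.0

# Moderate polygyny: the top 10% of men take 3 wives each,
# leaving 20% of men as forced bachelors.
print(forced_bachelor_rate(1000, 1000, 0.10, 3))  # → 0.2
```

Even a small elite taking a few extra wives leaves a sizable pool of forced bachelors, which is the demographic shift that Figure 3 depicts.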

So what’s wrong with bachelors? Well, as a group, they tend to create conflict. That’s because in polygynous societies, a main avenue for winning wives is to challenge elite men. In other words, polygyny — and the induced pool of bachelors — intensifies male competition. Of course, there’s nothing inherently wrong with male competition. However, it does tend to undermine group cohesion. Let’s put it this way: when men are busy fighting over wives, what they are not doing is cooperating with each other.8 So if the goal is to foster a large, cohesive group, polygyny is a problem.

Now, what you need to know about polygyny is that in many agrarian (kin-based) societies, it reached outrageous levels. To give you some numbers, historians claim that ancient Chinese emperors had thousands of women in their harems. (The Chinese emperor Yangdi reportedly had 100,000 women in his palace.) These numbers are so large that they defy belief. And yet genetic evidence confirms that ancient rulers were wildly polygynous. For example, DNA analysis indicates that roughly 8% of men currently living in the territory of the former Mongolian empire carry a Y chromosome that traces back to Genghis Khan’s lineage.

Going back further, DNA evidence suggests that the Neolithic revolution came with an explosion in polygyny. Geneticist Monika Karmin and colleagues have found that starting around 10,000 years ago, there was a massive bottleneck in the Y chromosome (which passes from father to son). For some reason, the number of reproductive males plummeted, but the female population didn’t change. The likely cause was intense male competition, combined with runaway polygyny. As Genghis Khan would later put it, the strategy was to kill your enemies and steal their wives.9 Of course, this approach worked well for men like Khan. But it’s not the best way to build stable institutions.

In short, we know that intensive kinship can create fairly large groups. But it also comes with a host of problems that make kin-based organizations unstable.

Act III: Dismantling kinship lock-in

The limits to kin-based organization are an example of the ‘lock-in effect’, whereby past decisions constrain the future. In Act III of human history, Henrich argues that Europeans sidestepped kinship lock-in by dismantling their kin-based institutions. Over a period of about 1000 years, Europeans rebuilt their society using ‘voluntary’ organization.

At first, this restructuring might seem like an improbable turn of events. However, a look at the wider evolutionary landscape shows that nature is full of unlikely solutions to adaptive lock-in.

Here’s an example. You might think that the largest sea animal would be a fish. After all, fish have been in the ocean for 500 million years, so they’ve had lots of time to grow large. And yet this reasonable expectation is wrong. The largest sea animal took a wildly improbable path to bigness. It started as a fish, moved onto land, evolved lungs, and then, 50 million years ago, moved back into the water. I’m talking, of course, about whales — air-breathing mammals that dwarf all other ocean life. (The blue whale is the largest animal alive, and also the largest animal ever to have lived.)

The story of whales makes little sense until you think about the limits faced by fish. As animals with gills, fish are locked into ‘breathing’ water, which is a poor source of oxygen. So as fish grow bigger, their gills struggle to keep up. Whales, however, can breathe air, which is a rich source of oxygen. So by moving onto land and then back into water, whales sidestepped an evolutionary lock-in.10 (For a broader discussion of evolutionary lock-in, see my post The Evolution of ‘Big’: How Sociality Made Life Larger.)

Back to humans. If an ancient historian were asked to predict the largest institutions of the 21st century, they’d likely describe something that resembles a scaled-up mafia family. (This is a good description of royalty.) What the ancient historian would not predict is Walmart — a giant organization built on voluntary membership. The jump from mafia clans to Walmart is the social equivalent of the jump from fish to whales. It seems (and perhaps is) wildly improbable. Yet it happened. And so the task is to understand how and why.

The Church finds a weird obsession

For Henrich, the story of groups like Walmart begins long before most historians look for the origins of capitalism. Henrich takes us back to the fourth century, to a time when European culture headed in an odd direction.11

If Henrich is correct, the seeds of Western society were first planted in 305 CE. Let’s set the stage. At the time, the Catholic Church was spreading throughout Europe, and its followers were getting obsessed with incest. We don’t know why this obsession started. But we do know that it lasted for over a millennium, and that it likely transformed European society.

The Church’s ‘marriage and family program’, as Henrich calls it, got rolling in the year 305 when a synod (council) in Elvira, Spain, issued a strange decree. If a man marries the sister of his dead wife, the council ordered, he must abstain from communion for five years. And if a man marries his daughter-in-law, he should abstain from communion until near death. Although these policies were ostensibly about ‘communion’, their effect was to ban what anthropologists call ‘affinal’ marriage — marriage to your in-laws. The Church evidently thought such relationships were ‘incestuous’ (although, in biological terms, they are not). And so it sought to prohibit them.

What is important about this prohibition, Henrich argues, is that it removed a key tool for unifying the clan. When a spouse dies, affinal marriage helps keep the extended lineage together. Of course, one decree from one council does not transform a whole society. So what matters is that the Elvira decree was the first of many official doctrines that would be issued by the Church over the next 1000 years. By the 13th century, the Church had banned the following:

  1. Marriage to relatives out to sixth cousins;
  2. Polygamous marriage;
  3. Affinal (in-law) marriage;
  4. Arranged marriages;
  5. Adoption.

These prohibitions, Henrich argues, removed the key building blocks of intensive kinship. And so from the fourth century onward, the clans of Europe slowly died out, leading to a society built on non-kin organization.12

Church exposure, cousin marriage, and WEIRD psychology

If you are skeptical of Henrich’s thesis, you are in good company. When I first encountered his arguments about incest, I found them implausible. But after thinking about Henrich’s thesis — and the evidence underlying it — I now find it compelling.

Speaking of evidence, it seems unlikely that we can connect one-thousand-year-old church policies to the culture and psychology of modern Europeans. And yet we can. In a landmark 2019 study, Jonathan Schulz and colleagues made the connection. Here’s how they did it.

First, Schulz and colleagues looked at how long the Western Catholic Church had been present in different regions in Europe. (Only the Western Church got obsessed with banning incest.) Next, they looked at modern rates of cousin marriage in the same regions. When they put these two pieces of data together, they found a surprising connection: the longer the region’s exposure to the Church, the lower the rate of cousin marriage. Figure 4 shows the trend.

Figure 4: In regions of Europe, cousin marriage rates decline with longer exposure to the Western Church. Each point represents a region in Turkey, Spain, Italy or France. The vertical axis shows the rate of first cousin marriage. The horizontal axis shows the number of centuries that the region has been exposed to the Western Church (based on whether there was an active bishopric within 50 km). The data is from Jonathan Schulz and colleagues’ paper ‘The Church, intensive kinship, and global psychological variation’. Their dataset is available here.

Now, it’s tempting to dismiss the rate of cousin marriage as a cultural quirk. But for Schulz (and Henrich), it is a key indicator of social structure. Remember that cousin marriage is one of the main ways to consolidate an extended lineage. So when the rate of cousin marriage falls, it signals that intensive kinship is dying off. Therefore, the evidence in Figure 4 is consistent with Henrich’s thesis that the Catholic Church’s marriage policies killed off Europe’s clans.

So we’ve connected Church exposure to rates of cousin marriage. That’s step 1. Step 2 is to connect the cousin marriage rate to variation in psychology. Looking at the same regions in Europe, Schulz and colleagues again found strong correlations. As cousin marriage rates decline, people become:

  1. Less conformist;
  2. More inclined to be fair to strangers;
  3. More trusting of strangers;
  4. More individualist.

Figure 5 shows the trends. Each panel shows a different psychological metric, plotted against the rate of cousin marriage.

Figure 5: In regions of Europe, psychological traits vary with the rate of cousin marriage. Each point represents a region in Turkey, Spain, Italy or France. In each panel, the vertical axis shows the average value of the given psychological index within a region. The horizontal axis shows the rate of first cousin marriage. The data is from Jonathan Schulz and colleagues’ paper ‘The Church, intensive kinship, and global psychological variation’. Their dataset is available here.

Similar trends appear between Italian provinces, as shown in Figure 6. In provinces with lower rates of cousin marriage, people store less of their wealth in cash, and are more likely to use checks. Both behaviors, Henrich argues, signal that as cousin marriage rates decline, people become more trusting of strangers and put more faith in impersonal institutions (i.e. banks). In other words, killing off intensive kinship enlarges people’s circle of trust.

Figure 6: In provinces of Italy, measures of financial trust vary with the rate of cousin marriage. Each point represents a province in Italy. The left panel shows how the percentage of financial wealth held in cash varies with the rate of cousin marriage. The right panel compares cousin marriage rates to the percentage of people who use checks. The data is from Jonathan Schulz and colleagues’ paper ‘The Church, intensive kinship, and global psychological variation’. Their dataset is available here.

In short, there is solid evidence connecting the spread of the Catholic Church to the death of intensive kinship and the rise of WEIRD psychology. What needs to be fleshed out is how this evidence relates to the institutions of modern capitalism.

From WEIRD psychology to Western culture

In the last third of his book, Henrich tries to connect WEIRD psychology to the ‘scaling up’ of Western society via industrial capitalism. I think he is partially successful. I say ‘partially’ because there is a disconnect between how Henrich defines ‘scaling up’ in the first third of the book and the last third. I’ll discuss this problem in a moment. But first, let’s focus on what, in my opinion, Henrich gets right.

One of the key features of WEIRD psychology is that it is ‘impersonally prosocial’. That’s a scientific way of saying that Westerners tend to trust strangers and treat them (almost) as fairly as they would treat friends and family. This impersonal prosocial stance, Henrich argues, is one of the hallmarks of commerce.13

Wait, is Henrich saying that markets encourage fairness? Yes … but only impersonal fairness. You see, while many non-market societies are famed for their generosity, their circle of fairness rarely extends to complete strangers (who are often feared). Markets, however, facilitate interactions between strangers. And they do it by instilling social norms (and laws) that encourage fair exchange. The result, Henrich argues, is that market exposure leads to increased impersonal fairness.

The evidence backs him up. In 2010, Henrich and colleagues published a study in which individuals from different cultures were asked to split a sum of money with a stranger. They found that people from more market-integrated societies tended to offer a more generous split. This evidence supports the idea that impersonal prosociality goes hand in hand with markets.

Henrich also argues that markets are a way to domesticate competition. The idea (which I’ve also explored) is that humans don’t need an incentive to compete with each other. Instead, we need cultural tools to suppress violent competition. Henrich agrees with Peter Turchin, who argues that the default human state was incessant warfare between tribes.

Markets take the violence out of competition. To compete with another group in a market setting, you must obey the laws of property rights. In other words, you cannot steal and you cannot conquer. True, market competition still leads to all kinds of dodgy behavior. (Wall Street comes to mind.) But compared to the ravenous predation of rulers like Genghis Khan, capitalist tycoons are domesticated house cats. With the predators in check, Henrich argues that Western society kept the benefits of competition, but removed the most destructive elements.

Related to market norms is the WEIRD tendency to be ‘individualistic’. Henrich argues that this psychology arises from the needs of voluntary organization. When you break down kin bonds, people are ‘freed’ to associate with anyone they like. In a sense, this freedom is liberating. But it also leads to a self-centered worldview. It forces people to constantly broadcast their abilities in order to find friends and win gainful employment.

One of the paradoxes here is that Westerners adopted a more ‘individualistic’ psychology at the same time that their behavior became more ‘collectivist’. Urbanization is a prime example. Today, 9 million New Yorkers live and work in a dense urban jungle that is essentially a massive hive. And yet the majority of these people would probably claim to value autonomy and independence. How can this be? For his part, Henrich doesn’t see a contradiction. The dismantling of intensive kinship, he argues, meant people had to choose who to associate with. And that choice made people more individualistic, yet also more impersonally prosocial.

Another possibility worth exploring is that individualism is a kind of mind hack that simplifies the complex web of relationships that surround us. For example, when Karl Marx documented 19th-century capitalism in Britain, he complained that people focused on commodities, but forgot about the social relations that underpinned production. He called this stance ‘commodity fetishism’. Perhaps ‘individualism’ is a similar ‘fetish’: it causes people to take social relations and conceive of them as personal traits.

For instance, when I write my resume, I tell myself that I am listing ‘personal traits’. And yet the evidence that I actually provide is mostly relational. I tell you about the schools where I have studied, the organizations where I have worked, the institutions that have given me awards, the journals that have published my work, the groups who have listened to my presentations, and so on. In short, I am telling you about my past social relationships, yet I am convinced these are personal accomplishments. It’s worth researching whether (and how) this individualistic stance makes large numbers of relationships easier to maintain.

Back to Henrich’s story about Western culture. He argues that after dismantling intensive kinship and developing a WEIRD psychology, Europeans began to ‘scale up’ using voluntary organization. To support his argument, Henrich documents the steady growth of merchant guilds, charter towns, universities, monastic orders and knowledge societies. As people aggregated in cities, they sought out like-minded individuals, leading to a network effect — a vast ‘collective brain’. Stoked by the fires of commerce, knowledge and innovation proliferated. The result, Henrich claims, is that Westerners became unprecedentedly rich.14

The death of ritualized power

As far as histories go, Henrich’s story about the rise of industrial capitalism is fairly standard. And it is based on well-known trends. Still, it leaves me with a feeling that something is missing.

Perhaps a good place to start is with Henrich’s own words. At the outset of his book, Henrich notes that people often misunderstand their own culture:

Institutions usually remain inscrutable to those operating within them — like water to fish. Because cultural evolution generally operates slowly, subtly, and outside conscious awareness, people rarely understand how or why their institutions work or even that they “do” anything. People’s explicit theories about their own institutions are generally post hoc and often wrong.

Let’s call this sentiment the ‘anthropological stance’. When studying a society, the anthropological stance means that you take what people say about their own culture with a grain of salt. You assume that people are like ‘fish’ who cannot see the institutional ‘water’ in which they swim.

So what is the water?

Well, a big part of it is ritualized power. The important thing about power is that when it is legitimized (via ritual), it becomes invisible to those who believe in the rituals. For example, if you asked a devout Catholic to describe the Pope, they might use words like ‘holy’ or ‘sacred’. What they would not say is ‘the Pope is a powerful ruler who uses rituals to legitimize his authority’. Of course, that is exactly how a heretic (like me) would describe the Pope. But to a Catholic, the Pope’s power is hidden within a web of beliefs and rituals.

Back to Henrich. When he describes how societies scaled up using intensive kinship, he adopts the anthropological stance. He is clear that intensive kinship involved a large dose of ritualized power. For example, here is how he depicts the emergence of chiefdoms:

[B]oth anthropological and historical evidence suggest that the manipulation and accumulation of ritual powers and offices has been one of the main ways in which some clans have set themselves above others. … By providing a means to make and enforce community-level decisions, chiefdoms often have a substantial edge in competition with more egalitarian societies. … [They] often have enough command and control to unify large armies for military campaigns.

(power words highlighted)

For the moment, let’s forget about whether this description is correct. What’s clear is that this paragraph is not how people living in a chiefdom would describe their own society. (It seems far-fetched that a chief would say: ‘I manipulate and accumulate ritual powers.’)

Now let’s switch gears and look at how Henrich describes the institutions of capitalism. Here is Henrich discussing the psychological effects of markets:

Well-functioning impersonal markets, in which strangers freely engage in competitive exchange, demand what I call market norms. Market norms establish the standards for judging oneself and others in impersonal transactions and lead to the internalization of motivations for trust, fairness, and cooperation with strangers and anonymous others.

(market words highlighted)

Again, let’s forget about whether this description is correct. Instead, let’s ask ourselves if this paragraph is how a Westerner might describe their own culture. You can judge for yourself, but my impression is that when asked to describe capitalism, many people will pontificate about markets, freedom, and competitive exchange.

So in these two paragraphs, we have a change in tone. Intensive kinship involves ‘ritualized power’, while modern institutions involve ‘competitive exchange’. I think this tone change is problematic, because it implies that in capitalism, ritualized power has disappeared. (It has not.) But before I explain further, let me convince you that I’m not cherry picking Henrich’s words.

Figure 7 analyzes the frequency of six different words in Henrich’s book. Each panel shows the relative frequency of the given word, measured by chapter. The shaded regions highlight the major topic of the book in two different sections. When Henrich discusses intensive kinship, the words ‘command’, ‘ritual’, and ‘power’ (top row) spike in frequency. When Henrich moves on to discuss the ‘new institutions’ of Western society, these words become rarer. Instead, talk of ‘exchange’, ‘market’ and ‘trade’ (bottom row) becomes more frequent.

Figure 7: From ‘ritualized power’ to ‘market exchange’ — analyzing word frequency in Henrich’s book. This figure shows the frequency of six different words in Henrich’s book, broken down by their occurrence in each chapter. The words in the top row (‘command’, ‘ritual’ and ‘power’) are used most frequently in the chapters where Henrich discusses intensive kinship. The words in the bottom row (‘exchange’, ‘market’, and ‘trade’) are used most frequently in the chapters where Henrich describes the new institutions of Western society.
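A word-frequency analysis like the one behind Figure 7 is easy to reproduce. Here is a minimal Python sketch (my own reconstruction, not the code behind the figure; the ‘chapters’ below are toy stand-ins for the book’s text):

```python
# Sketch of a per-chapter word-frequency analysis: split a text into
# chapters, then measure each tracked word's rate per 1000 words.
import re
from collections import Counter

TRACKED = ["command", "ritual", "power", "exchange", "market", "trade"]

def word_frequencies(chapters):
    """For each chapter, return tracked-word counts per 1000 words."""
    results = []
    for text in chapters:
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(words)
        total = len(words) or 1  # avoid dividing by zero
        results.append({w: 1000 * counts[w] / total for w in TRACKED})
    return results

# Toy chapters standing in for the book's two halves.
chapters = [
    "ritual power and command held the clan together through ritual",
    "market exchange and trade require market norms for fair exchange",
]
freqs = word_frequencies(chapters)
print(freqs[0]["ritual"] > freqs[1]["ritual"])  # True: 'ritual' peaks early
print(freqs[1]["market"] > freqs[0]["market"])  # True: 'market' peaks late
```

Run on the actual book, chapter by chapter, this kind of count produces the frequency spikes that Figure 7 plots.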

Clearly, Henrich’s language changes as he moves from discussing intensive kinship to the institutions of modern capitalism. Is this switch justified? In part, yes. It would be foolish to discuss the evolution of capitalism without describing the spread of commerce. But the problem is that by omission, Henrich implies that the rise of markets did away with ritualized power. I think that is misleading.

Inscrutable institutions

When we study capitalism, it is difficult to adopt the anthropological stance, because we are the ‘fish’ who are unable to see the institutional ‘water’. Still, there are some tricks that can help us understand the ritualized power that surrounds us. The simplest option is to look for contradictions between what people say and what they do.

Let’s use neoclassical economists as an example. Economists spend their days theorizing about the efficacy of the ‘free market’. And yet these same economists hold tenured positions in large universities that are funded by still larger governments. In other words, economists’ working lives have almost nothing to do with the market. The discrepancy between language and action is severe.

When you find this type of contradiction, it’s a sure bet that you’ve found a ritual. In fact, future anthropologists might claim that economists use the idea of markets to ‘manipulate and accumulate ritual powers’. This language, by the way, is how Henrich described the accumulation of chiefly power. I think it remains appropriate for describing capitalism.

Of course, you might disagree. And so what we really need to do is understand if concentrated power has actually gone away. Market ideology suggests that it has. The empirical evidence suggests that it has not.

To look at the evidence, consider the example of the United States. Today, the US is viewed as the quintessential free-market society. US citizens can buy what they want, work where they want, and live where they want. Two hundred years ago, however, things were different. At the time, the US South was the quintessential slave state, governed by ritualized, racist power. We all know what happened. Americans fought a civil war that ended slavery, setting African Americans on a century-long path to greater equality. Looking at US history, it seems to fit with Henrich’s story about the spread of voluntary organization. However, just because group membership is voluntary does not mean that ritualized power goes away.

Here’s an example. Three years ago, I voluntarily joined Twitter. And yet a few weeks ago, Elon Musk bought Twitter, and so gained the power to censor me. How did he do it? By ‘manipulating and accumulating ritual powers’. (How else do you describe the mysteries of Musk’s debt financing?)

Speaking of Elon Musk, his Twitter purchase was possible because of his Tesla shares, which are wildly valuable. These shares are themselves a form of ritualized power, giving Musk control over the 100,000 people who work for Tesla. Looking at Musk’s power over Tesla employees, we can say that it is less despotic than the power of a slave owner. And yet US slave owners rarely owned more than 1000 slaves. So the paradox is that in terms of the number of people he commands, Elon Musk is likely more powerful than any US slave owner ever was.

Now, Tesla is but one example of a giant corporation. There are many others. In fact, there are so many big corporations that you can convincingly argue that the modern United States is far more hierarchical than the Antebellum South.

Figure 8 makes the case. Here I have contrasted the size distribution of modern US business firms with the size distribution of slave estates in the Antebellum South. What’s important is that as we move from left to right, the blue line (business firms) extends far beyond the red line (slave estates). This difference indicates that modern corporations grow orders of magnitude larger than Antebellum slave estates.

Figure 8: Scaling down despotism, scaling up organization size. This figure compares the size distribution of slave estates in the Antebellum US South to the size distribution of modern US business firms. The horizontal axis shows the number of members in the organization, plotted on a log scale. The vertical axis shows the relative abundance of the given-sized organization. Slave estates (red line) were overwhelmingly small, with few exceeding 1,000 slaves. Today, US business firms (blue line) grow far larger, with many exceeding 10,000 members. [Sources and methods]
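In case it helps, here is a minimal sketch of the binning behind a plot like Figure 8: sort organization sizes into logarithmic bins and compute each bin’s share of the total. The sizes below are made up for illustration; the real data sources are described under ‘Sources and methods’.

```python
import math
from collections import Counter

def log_binned_distribution(sizes, base=10):
    """Group organization sizes into decade-wide logarithmic bins
    and return each bin's share of the total (relative abundance)."""
    bins = Counter(math.floor(math.log(s, base)) for s in sizes)
    total = sum(bins.values())
    return {base ** b: count / total for b, count in sorted(bins.items())}

# Hypothetical sizes: 'estates' cluster small, 'firms' span more decades.
estates = [3, 8, 15, 40, 120, 900]
firms = [5, 50, 400, 7_000, 90_000]

print(log_binned_distribution(estates))  # all mass in bins below 1,000 members
print(log_binned_distribution(firms))    # tail reaches the 10,000+ bin
```

A real analysis would of course need far larger samples and careful treatment of the heavy tail; this only illustrates the log-binning idea.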

Now, you might counter that US corporations are not actually hierarchies. However, if you’ve ever worked in a big company, you know that it has a chain of command. But aside from worldly experience, we can connect the growing size of corporations to other indicators of hierarchy, such as the relative number of managers. You might also protest that the US is somehow exceptional. Perhaps in other countries, firms got smaller? Again, the evidence suggests not. Across all countries, it seems that industrialization tends to bring larger firms and larger governments. (For more details about this evidence, see ‘Energy and institution size’ and ‘Economic development and the death of the free market’.)

For the last half decade, I’ve been puzzling over this evidence, trying to understand how to make it fit with the standard picture from economics. Here are my conclusions:

  1. Despite what economists say, societies always scale up using hierarchy;
  2. The growth of hierarchy comes with cultural tools that ritualize and legitimize the centralization of power (economists are part of this cultural package);
  3. Hierarchy is a double-edged sword. It can organize large numbers of people, yet it leads to despotism.

So where do markets fit into these conclusions? I think markets do two things. First, they turn power into a quantitative ritual. Instead of appealing to divine right, capitalist rulers appeal to the power of property rights, which can then be quantified (via stock prices) and bought and sold. This is not my own insight. It is the central thesis of Jonathan Nitzan and Shimshon Bichler’s theory of ‘capital as power’.

Second, I think that the norms, rules, and laws that come with markets act to make power less despotic. For example, the rule of law puts limits on corporate power. Jeff Bezos might want to sentence a petulant Amazon employee to be hanged, drawn and quartered. But the law says he cannot. The law also gives employees the right to leave a company if they wish. Granted, doing so may mean loss of income. Still, the effect is to limit despotism. If corporate rulers treat their subordinates badly, they risk losing them. The same is not true in a slave estate, or a kin-based institution. And so corporate hierarchies avoid the despotism that plagues intensive kinship.

What is not obvious (and what I am still trying to grapple with) is that by lessening despotism, market institutions actually promoted the growth of hierarchy. And yet that seems to be exactly what happened.

Back to Henrich. Readers of The WEIRDest People will be left with the impression that capitalist societies abandoned ritualized power as an organizing principle. I think that’s a flawed conclusion that says more about capitalist ideology than it does about actual behavior. My view is that WEIRD people continue to appeal to ritualized power, but are largely unaware that they do so.

New pieces in the puzzle

Omissions aside, Henrich’s book is a major contribution to the study of cultural evolution. His focus on the relation between intensive kinship and psychology is particularly important because it provides fresh insight into the debate over the emergence of Western culture. In the long term, my guess is that Henrich’s research will revolutionize our understanding of cultural evolution.

For example, the relation between psychology and institutions helps to explain the inertia of culture. If people were ‘free’ to think about the world any way they liked, then culture could not possibly last. (In a sense, culture would not exist, since it requires that behavior have some degree of uniformity.) But if culture imprints on individuals — directing their thought patterns and behavior — then it has staying power.15

The task that Henrich sets for himself is to understand how changes in psychology interrelate to changes in culture. His big idea (which he backs with abundant evidence) is that many of the tenets of Western thinking — things like liberalism, individualism, universalism, and reason — were germinating long before the Enlightenment. The seeds were planted, he argues, a thousand years earlier as the Catholic Church began to break up intensive kinship.

This idea is bound to be controversial and surely needs more research. But I think the best way to characterize Henrich’s book is as a new piece in the puzzle. Like the discovery of a new fossil bed, Henrich provides a rich trove of evidence that demands a consilient explanation.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.


Stay updated

Sign up to get email updates from this blog.


This work is licensed under a Creative Commons Attribution 4.0 License. You can use/share it any way you want, provided you attribute it to me (Blair Fix) and link to Economics from the Top Down.

Sources and methods

Data for the size distribution of slave estates (Figure 8) is from Lee Soltow’s book Men and wealth in the United States, 1850-1870. You can peruse the data at

Data for the size distribution of US firms is from a variety of sources, discussed in my post ‘Institution Size as a Window into Cultural Evolution’.


  1. Aristotle has become somewhat infamous for caring more about ideas than evidence. For example, Aristotle believed that men have more teeth than women. (They don’t.) Lambasting Aristotle, Bertrand Russell writes: “although he was twice married, it never occurred to him to verify this statement by examining his wives’ mouths.”↩
  2. It seems that when you learn to read, you co-opt the left side of your brain for interpreting symbols, pushing facial recognition to the right side. For many years, scientists assumed that this was ‘natural’ — that all humans recognized faces using the brain’s right hemisphere. It turns out that this pattern is unique to people who are literate. Illiterate populations tend to interpret faces using both sides of the brain. And with more ‘brain power’ devoted to the task, illiterate people are better at facial recognition.↩
  3. The most common tool for avoiding incest seems to be sexual dispersal. When an animal matures, one (or both) of the sexes moves to new territory. Because most animals use dispersal, they do not evolve more complicated forms of incest avoidance (such as kin detection). As a result, many species will mate with close relatives when given the chance in captivity.↩
  4. The Habsburgs are perhaps the most notorious example of royal inbreeding. From 1450 to 1750, the family became so inbred that it suffered from excessive child mortality and developed a number of deformities, including the famous Habsburg jaw.↩
  5. The anthropologist Christopher Boehm argued that hunter-gatherers maintain equality by practicing ‘reverse dominance’: the weak enforce egalitarianism by ganging up on would-be strongmen. A criticism of Boehm’s claim is that some foraging societies (like those in the Pacific Northwest) organized in despotic hierarchies. This evidence is then taken as a refutation of the agriculture = hierarchy hypothesis.

    I think this controversy can be resolved by taking the focus away from agriculture and instead putting it on energy. My own research suggests that it is the scale of energy consumption (not the specific way that this energy is harvested) that predicts hierarchy. So what makes agriculture important is that it generally comes with the ability to harvest more energy. But that is not always true. In the case of Pacific foragers, they tapped into the concentrated energy of local salmon runs. In other words, their energy consumption was probably greater than that of most hunter-gatherers. And so they formed larger hierarchies.↩

  6. In his book Darwin’s Cathedral, David Sloan Wilson discusses the example of the Nuer people of present-day Sudan, who spread at the expense of the neighboring Dinka. The difference between the two groups seems to have been mostly cultural. Similarly, Henrich highlights the example of Ilahita, an agrarian community in the Sepik region of New Guinea. Despite having technology similar to its neighbors’, Ilahita was far larger than the neighboring villages. Why? Again, because of culture. Both the Nuer and the Ilahita had started to organize using what Henrich calls ‘intensive kinship’.↩
  7. Henrich notes that in many societies, kin bias is considered ‘family loyalty’. In WEIRD societies, however, it is called ‘nepotism’ — a word that has a negative connotation. Interestingly, the word ‘nepotism’ traces to the Catholic Church. It derives from the Latin nepotem, meaning ‘nephew’, and describes the practice of granting privileges to a pope’s ‘nephew’, which was a euphemism for his natural son.↩
  8. Interestingly, Henrich cites experimental evidence to back up the claim that polygyny is bad for cooperation. A 2009 study by Mehta, Wuehrmann, and Josephs tested how testosterone levels affected performance during a competition. In one experiment, Mehta asked people to beat their partner’s score on a standardized test. In another experiment, pairs of people were asked to combine their scores to beat other groups. It seems that individuals with high testosterone did better when competing against their partner. In contrast, people with low testosterone fared better when competing as a group. In other words, testosterone heightens competition within groups — the opposite of what you need to form a cohesive society. How does this evidence relate to polygyny? Well, polygyny leads to more bachelors, and bachelors tend to have high testosterone. And so do polygynous men.↩
  9. Here’s how Genghis Khan framed his life goals:

    The greatest happiness is to vanquish your enemies, to chase them before you, to rob them of their wealth, to see those dear to them bathed in tears, to clasp to your bosom their wives and daughters.

    While rulers frequently embellish, Khan seems to have been telling the truth.↩

  10. In the ocean, oxygen dissolves at around 10 parts per million. In the air, oxygen exists at about 200,000 parts per million. So per particle, air contains about 20,000 times more oxygen. The atmosphere, however, is about 1,000 times less dense than the ocean. So per unit of volume, air has about 20 times more oxygen. Still, that’s a considerable advantage for air-breathing animals like whales.↩
  11. Origin stories tend to depend on how we understand the present. Hence, debates about the origins of capitalism turn largely on different theories of present-day capitalism.

    Mainstream economists tend to see capitalism as a market system, and so (when they bother to study history) trace the development of monetary exchange. Marxists (like Robert Brenner) focus on the exploitation of workers, and so study the spread of wage labor. Max Weber thought capitalism was characterized by a ritualization of work, and so traced its origin to the Protestant Reformation. World-systems theorists like Immanuel Wallerstein and Andre Gunder Frank focus on core-periphery dynamics, and so study the history of international trade. More recently, Jonathan Nitzan and Shimshon Bichler see capitalism as a ‘mode of power’, and trace its origins to the 12th-century development of the European ‘bourg’.

    While dates vary, most of these approaches trace the seeds of capitalism back to the late Middle Ages. Henrich, in contrast, starts much earlier because he is focused on kinship. Henrich argues that by the late Middle Ages, Western Europe had already dismantled most of its kin-based institutions. Unlike in other clan-based societies, polygyny was becoming rare in Western Europe (although the male aristocracy still hoarded wives). Perhaps more importantly, sons were not bound to reside in their father’s ‘house’, as was the norm in patriarchal clans. Instead, male peasants could marry and take up new residence away from their fathers (and in-laws). This break-up of intensive kinship, Henrich proposes, began in the fourth century, and took about a thousand years to complete. It did not guarantee the emergence of capitalism, but laid the preconditions for non-kin organization.↩

  12. Naturally, one wonders if the Church knew what it was doing when it started its ‘marriage and family program’. The answer seems to be both yes and no. Over time, the Church clearly figured out that it benefited from breaking up clans. Catholic priests, Henrich notes, were able to convince many rich patriarchs to donate some (or all) of their estate to the Church — a behavior that would have been unthinkable had these patriarchs been focused solely on maintaining their lineage. So yes, the Church understood that it benefited from killing off intensive kinship.

    However, the Church likely had no idea that it was rewriting European culture at large. In that sense, the Church’s ‘marriage and family program’ is similar to the peacock’s ‘big shiny tail program’. Like the Church, peacocks unwittingly played a game of evolution. For unknown reasons, female peacocks developed a preference for males with big shiny tails. That led males to evolve tails that were even bigger and shinier. None of the animals had any idea what they were doing. And yet evolution did its work anyway, culminating in the male peacock’s preposterously large tail. The Church’s incest obsession was probably similar. Like most human groups, the Church was oblivious to the long-term effects of its own culture.↩

  13. Fun fact: changing norms about fairness are evident in the Bible. In the Old Testament, which is famous for its tribal mentality, the ‘golden rule’ applies only to kin:

    You shall not take vengeance or bear a grudge against your kinsfolk. Love your neighbor as yourself.

    (Leviticus 19:18)

    When Matthew restated the golden rule in the (less tribal) New Testament, he dropped the kin bias:

    Therefore whatever you desire for men to do to you, you shall also do to them; for this is the law and the prophets.

    (Matthew 7:12)


  14. You’ll note that Henrich’s story of the rise of the West doesn’t highlight imperialism. While he acknowledges the “very real and pervasive horrors of slavery, racism, plunder, and genocide”, Europe’s expansionism doesn’t feature in his main story. That’s a bit odd, given that the theory of cultural evolution focuses on competition between groups. It’s a bit like describing the changes that happened in imperial Rome without mentioning that they coincided with (and likely depended on) Rome’s conquest of Europe and North Africa.↩
  15. If, like me, you value ‘free thinking’, then the idea that culture restricts our thinking is a bitter pill to swallow. I’d like to believe that regardless of the time or place I was born, I’d have developed the same scientific worldview. But Henrich has convinced me that this is an illusion. Scientific thinking, he argues, is a cultural tool. And like all tools, it has been slowly improved over time.

    In particular, science provides a prescription for when and how you should not defer to authority. This is important, because deference to authority is one of the main ways that culture is passed on. And that is often good. It would be disastrous, for example, if children had to learn by experiment that crossing highways was deadly. It’s far safer to be told of the danger — to accept an argument from authority.

    The problem with deference to authority, however, is that it provides no way to distinguish between knowledge that is good, useless, or bad. For example, a medicinal recipe might contain some ingredients that are helpful and others that do nothing. And perhaps it comes wrapped in an elaborate ritual that involves human sacrifice. Deference to authority passes down the whole package to future generations. Science, in contrast, gives people the tools for deconstructing the package and separating the good from the useless and bad.

    Of course, philosophers of science have long known that science strives to distinguish fact from fiction. But few philosophers have thought to connect scientific thinking to kinship structure. Henrich does just that. Cultures built on intensive kinship, Henrich argues, tend to put more value on deference to authority and devotion to the clan. That makes the scientific worldview more difficult.↩
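As an aside, the arithmetic in footnote 10 is easy to check with round numbers. A quick sketch; the 1,000:1 seawater-to-air density ratio is my round-number assumption (the true ratio is closer to 800:1):

```python
# Round-number check of the ocean-vs-air oxygen comparison (footnote 10).
o2_ocean_ppm = 10       # dissolved O2 in seawater, parts per million
o2_air_ppm = 200_000    # O2 in air, parts per million
density_ratio = 1_000   # assumed: seawater is ~1,000 times denser than air

per_particle = o2_air_ppm / o2_ocean_ppm   # air: 20,000x more O2 per particle
per_volume = per_particle / density_ratio  # air: ~20x more O2 per unit volume

print(per_particle, per_volume)  # 20000.0 20.0
```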

Further reading

Fix, B. (2019). An evolutionary theory of resource distribution. Real-World Economics Review, (90), 65–97.

Fix, B. (2021). Economic development and the death of the free market. Evolutionary and Institutional Economics Review, 1–46.

Henrich, J. (2020). The WEIRDest people in the world: How the West became psychologically peculiar and particularly prosperous. Penguin UK.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61–83.

Karmin, M., Saag, L., Vicente, M., Sayres, M. A. W., Järve, M., Talas, U. G., … others. (2015). A recent bottleneck of Y chromosome diversity coincides with a global change in culture. Genome Research, 25(4), 459–466.

Nitzan, J., & Bichler, S. (2009). Capital as power: A study of order and creorder. New York: Routledge.

Schulz, J. F., Bahrami-Rad, D., Beauchamp, J. P., & Henrich, J. (2019). The church, intensive kinship, and global psychological variation. Science, 366(6466).

Turchin, P. (2016). Ultrasociety: How 10,000 years of war made humans the greatest cooperators on Earth. Chaplin, Connecticut: Beresta Books.

Wilson, D. S. (2010). Darwin’s cathedral: Evolution, religion, and the nature of society. University of Chicago Press.

The post Weird Consilience: A Review of Joseph Henrich’s ‘The WEIRDest People in the World’ appeared first on Economics from the Top Down.

‘A way of making the Turkey vote for Christmas’

Published by Anonymous (not verified) on Tue, 03/05/2022 - 6:37am in

There is a really excellent article – well worth reading in full – by Gerhard Schnyder, a Professor at Loughborough University, here on ‘Budapest-on-Thames’, outlining the current parlous state of UK democracy. I particularly appreciated this paragraph: This post should not be read as ‘Conservatives bashing.’ I am actually not opposed to conservatives or conservatism...

Dunning-Kruger Discussion

Published by Anonymous (not verified) on Tue, 26/04/2022 - 1:33am in



A post by Blair Fix at Economics from the Top Down about whether the Dunning-Kruger effect (the inverse relationship between one’s skills in a particular domain and one’s tendency to overestimate them) is a mere statistical artifact, that I put in the Heap of Links last week, generated some discussion and prompted an email from a philosopher with a possibly helpful reference.

[Rene Magritte, Le Principe du Plaisir (detail)]

Amy Seymour, assistant professor of philosophy at Fordham University, writes:

The claims made by Blair Fix in “The Dunning-Kruger Effect is Autocorrelation” are both vastly overstated and ones to which David Dunning has responded. Notably, [Joachim Krueger and Ross Mueller] Kruger himself* raised the original statistical complaint twenty years ago (with Mueller in 2002, Journal of Personality and Social Psychology), but Kruger also refutes that complaint in the immediately following article in the very same issue of the journal (with Dunning, 2002). The responses to the complaint seem compelling and there’s lots of further research which suggests the effect is fairly robust.

Here’s a brief excerpt from the Dunning response:

Often, scholars cite statistical artefacts to argue that the Dunning-Kruger effect is not real. But they fail to notice that the pattern of self-misjudgements remains regardless of what may be producing it. Thus, the effect is still real; the quarrel is merely over what produces it. Are self-misjudgements due to psychological circumstances (such as metacognitive deficits among the unknowledgeable) or are they due to statistical principles, revealing self-judgement to be a noisy, unreliable, and messy business?

The piece by Dunning is here.

Further discussion welcome.

* Correction: As Devin Curry pointed out in a comment, and as Amy Seymour acknowledged, this piece should have been attributed to Joachim Krueger and Russ Mueller, not Justin Kruger.

The ‘Madman Despot’ Theory Only Serves to Embolden Vladimir Putin

Published by Anonymous (not verified) on Mon, 25/04/2022 - 7:00pm in

Dimitris Dimitriadis and Iain Overton explore how accusations of insanity serve to strengthen the Russian President’s hand in Ukraine


The Russian President, if commentators are to be believed, is “deranged”, “possibly crazed”, and in the grips of “hubris syndrome” and “COVID brain fog”. But to what degree has this psychological profiling of Vladimir Putin inadvertently strengthened the tyrant’s hand?

Quite a bit, potentially.

In the McManus theory – a concept published in 2021 by Roseanne W McManus, a professor at Pennsylvania State University, following a major review of leaders’ reputations for madness – it was found that perceived madness could be harmful in crisis bargaining. A widespread perception of madness was an advantage, especially with an autocrat backed by a giant military, she wrote.

Admittedly, some see Putin as anything but a wild dictator with a loose finger on the nuclear button. Former Russian Foreign Minister Andrei Kozyrev has described him as “a rational actor” whose invasion of Ukraine is “horrific but not irrational”. But many others have depicted a man on the edge of lunacy, and such attempts cast a long shadow over his words and deeds. 

Such armchair psychoanalysis must be resisted.

These attempts to put Putin on the couch are, at best, speculative leaps that make him look more erratic, unstable and unpredictable than warranted. And this may be in his interest: because as long as the Western press – and to an extent its leaders – perceive him to be unhinged and fundamentally irrational, Putin will know that he can (probably) get away with more. 

Indeed, depicting the Russian leader as deranged is an exercise in speculation that arguably says more about his commentators than their subject. It suggests one of two things: either that they do not understand him – or that they do not want to. The former perhaps cannot be helped (we never fully divine the contents of another person’s mind), the latter is obviously more problematic. 

The framing of a despot as mad has a long tradition but is all too often reductive and offensive to people with real mental health challenges. The depiction can also produce grotesque, irrational foreign villains and therefore geopolitical mistakes.

Saddam Hussein was painted as erratic and unpredictable, despised by his own people. And when the 'weapons of mass destruction' lie was used as a reason to invade Iraq, much of the Western press took it at face value – a case of collective confirmation bias owing to a widespread investment in Hussein’s madness.

Today, the truth is that Putin’s motivation in invading Ukraine is more nuanced and strategic than madness permits. And, while a great deal has also been made of his obscure ideological convictions, his antediluvian desire to reunite the two countries and his strange obsession with Kyiv (often described as the “mother of Russian cities”), these are not in themselves indications of mental instability – even if they are entirely wrong-headed.

In the end, Putin is reduced by some to a caricature of mental pathology and warped ideology.

The Russian Rationale

Some leaders knew all too well the virtues of being seen as slightly unhinged. Yet, when former US President Richard Nixon tried to persuade the world that he was mad – and wasn’t above pressing the nuclear button to stop communist aggression – no one bought it. Nixon was outed as a hard pragmatist. Why not Putin?

For all the unspeakable atrocities and war crimes that many say have been committed in the past two months, there was nothing fundamentally irrational about the invasion of Ukraine. Desperate? Maybe. Abhorrent? Undoubtedly. But not unthinkable, and certainly not deranged. 

The invasion must be seen in the context of a country running out of options. Russia is a petrochemical state – a pariah among an increasingly broad tent of countries committing to net zero and renewable energy. Global politics and climate change dictate that fossil fuels, which currently fill the Russian state’s coffers, are a dwindling source of revenue. Meanwhile, climate change, the same phenomenon that Russia is refusing to tackle, is threatening to devour three-quarters of its territory that lies in the arctic north. 

Invading Ukraine does not solve climate change but it could, in theory, win Russia immense geopolitical leverage over global food and energy markets. Indeed, Ukraine is one of the world’s largest exporters of wheat, with reliable year-round access to the Black Sea, a key trade route.  

Known as the 'breadbasket of Europe', the country is also home to incredibly fertile ‘black earth’ (chernozem) covering an area larger than Italy – and vast, sprawling flatlands which, for decades if not centuries, have been part of the nationalist dream of Russian invaders.

This type of nationalism is not rooted in history or jingoism but hard-nosed pragmatism.

Together with Ukraine, Russia could control a quarter of globally traded wheat, and even larger chunks of the global barley and maize markets – a dependency that threatens to bring countries in middle Africa and north Africa to their knees, with the World Trade Organisation warning of bread riots, violence and social unrest.

Meanwhile, Russia is already weaponising oil and gas – its main export – as a means of economic warfare. This is a response, in the Kremlin’s narrative, to Western sanctions and a stark reminder of just how dependent Europe still is on Russia to keep the heating on. Putin knows that Europe’s attempt to wean itself off Russian energy will be long and painful for its electorates, and he is pressing leaders where it hurts the most.  

It seems as if it will simply be a matter of time before he decides to do the same with wheat and other critical foodstuffs, including barley and cereal. In a world in which climate change has rendered food security ever more elusive, an autocrat who can credibly threaten starvation – at least among certain countries – or serious food upheaval, is a force to be reckoned with.

While that may seem like a far cry from the current realities of the conflict, it is in line with a broader, long-term strategic plan – one that a deranged mind would simply not be capable of hatching. But, as Niccolò Machiavelli remarked, “at times it is a very wise thing to simulate madness”.

Unless Western journalists resist the sensationalist urge to depict Putin as a madman – and seriously engage with the nuances of Russian aggression – he may yet succeed.

This article was produced by the Byline Intelligence Team – a collaborative investigative project formed by Byline Times with The Citizens.










Death by a Hundred Scandals: Are We Living in a Sociopathocracy?

Published by Anonymous (not verified) on Fri, 22/04/2022 - 7:09pm in

Iain Overton considers the calibre of people drawn to high office, and how power has warped their sense of empathy and compassion

The French philosopher Georges Bataille kept a photograph on his desk. It was from the early 20th Century and showed a Chinese prisoner undergoing one of the worst executions imaginable: a “death of a hundred cuts”.

Exactly what Bataille would have made of the past few years in British political life is hard to guess, but it is not unimaginable that he would have also contemplated British democracy being slowly eviscerated: a death by a hundred scandals.

He might also have considered something else. As Susan Sontag wrote, Bataille saw in the grisly picture a view of the pain of others “which links pain to sacrifice… a view that could not be more alien to a modern sensibility, which regards suffering as something that is a mistake or an accident or a crime".

Suffering, to Bataille, was viewed both empathetically and as having purpose. If contemplating modern British Conservative politics, he may have considered two things.

First, that there was an emotional disconnect to the suffering of others among the leaders of the Conservative Party.

Second, that Conservative ministers saw themselves as the victims of their own scandals: a suffering whereby the humiliations of ‘Partygate’ or the scrutiny of the tax affairs of the wife of the Chancellor were the products of mistakes or – worse – exaggerations concocted by journalists or religious leaders.

Such a profound lack of empathy, Bataille may have concluded, would suggest that the UK is being led by a confederacy of sociopaths: a sociopathocracy wielding its weapons against democracy.

After all, a sociopath is one who has no regard for others’ rights or feelings, lacks empathy or remorse for wrongdoings, and has a compulsive need to exploit and manipulate others for personal gain. And, at every turn, this sums up the actions of the Government. 

When, for instance, the Home Secretary was criticised for her plan to send asylum seekers to Rwanda, she rounded on the opposition, calling their concerns “xenophobic”. She failed to appreciate any of the deep, human concerns of sending vulnerable people to central Africa. Just as she refused to accept – as summed up in a recent US State Department report – that Rwanda has significant human rights issues, including “unlawful or arbitrary killings by the Government; forced disappearance by the Government; torture by the Government; harsh and life-threatening conditions in detention facilities”.

Or the fact that, in 2018, Rwandan police killed 12 refugees after a demonstration outside the offices of the UN high commissioner for refugees in Karongi district.

The Altar of Power

Boris Johnson, too, claims a moral high ground that is even higher than the Church. He has rejected calls to apologise for slandering the Archbishop of Canterbury after he denigrated the Anglican church leader’s critique of the Government’s Rwanda asylum policy.

Rather than be concerned that the principal leader of the Church of England had fundamental issues with the Government’s treatment of his fellow human beings, the Prime Minister instead accused Justin Welby of having “misconstrued” the plans.

Rishi Sunak diminished the concerns about his wife’s ‘non-dom’ tax status as a “political hit job” and “smearing her to get at him”.

And, faced with 'Partygate', Conservative MPs have – in turn – blamed journalists for behaving like “vultures” (Lee Anderson); compared the police’s decision to fine the Prime Minister to that of a judgement in cricket (Jacob Rees-Mogg); argued that Johnson hadn’t robbed a bank (Andrew Rosindell); that he was “ambushed” with a cake (Conor Burns); that it was like nurses having a drink together at the end of shifts at the height of the pandemic (Michael Fabricant); and that the best way to respond to the anger felt by the British people was to criticise Putin’s attack on Ukraine (Boris Johnson).

The opposition sits wide-eyed at this textbook sociopathy. When the Prime Minister walked into the Commons this week to the applause of his own backbenchers, an opposition MP shouted out: “Why are you clapping, he’s a criminal?”

But clap they did – leading Labour Leader Keir Starmer to accuse Johnson of “never taking responsibility for his words or actions”.

But, still, the Conservatives stay in power and – with the exception of a few Conservative MPs who have spoken out – unashamedly so. Perhaps this is of little surprise: as Brian Klaas concludes in his book Corruptible: Who Gets Power and How It Changes Us, “people most attracted to power are often those least suited for it.”

Indeed, perhaps the members of Johnson’s Cabinet weren’t always sociopaths. It is just that the heady whiff of power has made them so. After all, becoming powerful, Klaas writes, “makes you more selfish, reduces empathy, increases hypocrisy, and makes you more likely to commit abuse”.

To illustrate this, Klaas describes a 2015 study in which researchers ran a dictator game. The dictator could divide a pot of money among his companions in a 60/40, 50/50, or 90/10 split. In the “low power” scenario, the dictator controlled just one other person; in the “high power” scenario, three. In the low-power case, there was a 39% chance that the dictator took nine-tenths of the money. In the high-power case, the dictator took nine-tenths 78% of the time. The more people they had control over, the more selfishly they behaved.

The participants also had their saliva tested to measure testosterone levels. It is no surprise that those who were in the high-power group and who had high levels of testosterone were the most likely to take the money. If there is one thing that Boris Johnson – a man with seven children – seems to have in ample supply, it is testosterone.

Klaas ends his book with a call to action: “Better people can lead us. We can recruit smarter, use sortition to second-guess powerful people, and improve oversight. We can remind leaders of the weight of their responsibility... And if we’re going to watch people, we can focus on those at the top who do the real damage, not the rank-and-file.”

With MPs finally approving a plan to open an investigation into whether the Prime Minister misled the House of Commons on lockdown parties, we may see an end to this political age of a hundred scandals. But, knowing sociopaths will sacrifice any principle upon the altars of power, we cannot count on it.

Bataille once wrote that “you will recognise happiness / when you see it die”. And unless something is done, and soon, it may well be democracy itself that will be recognised at the point of its death – at the hands of sociopaths.




Byline Times is funded by its subscribers. Receive our monthly print edition and help to support fearless, independent journalism.





Loving Machines: Mental Health by Algorithm Is Reshaping Care and Sociality

Published by Anonymous (not verified) on Sat, 16/04/2022 - 11:13am

Online stores currently offer hundreds of mental health applications. These apps promise to enhance your coping mechanisms, relieve your stress, make you happy, fix your depression, and much more. As popular and diverse as these digital gizmos are, however, they occupy only one corner of the ‘mental health digital space’. In the rapidly changing field of mental health care, these emerging technologies are of special interest to cash-strapped state-funded mental health services, non-government agencies and private health companies alike.

In what follows, I examine a variety of mediated forms of mental health care, ending with the latest development, which is a particularly profitable commercial combination: the marriage of artificial intelligence (AI) with the industry’s most popular form of psychotherapy—cognitive-behavioural therapy (CBT). This pairing is noteworthy for at least two reasons. First, it has implications for the user-client’s sense of self and for the mode of sociality it calls the person to enact. Second, it points to an immensely lucrative market. Mental health is a huge and growing industry: the Kaiser Foundation estimates that around $200 billion is spent on mental health disorders in the United States every year. This has made mental health the ‘top’ cost category among medical conditions.

These latest developments have emerged in a field already roiled by controversy and contradiction. Even the notion of ‘mental health’ is contested. Is it about wellness, an absence of mental illness, resilience, vitality, normality, quality of life, the demonstration of correct attitudes, genetics, biology? Yet the digital applications discussed below all work with a restricted notion of what mental health might be, as is easily read in their limited view of the processes that constitute a self and of how they understand a person’s ‘mental’ difficulties.

CBT’s highly individualistic and mechanistic understanding of the person aligns closely with neoliberal ideology, which has been working its way through the institutions for decades, allocating the carriage of care to the lowest-level entity: the private self. In this allocation we find a kind of mental hygiene approach centred on the grooming of ‘resilience’, an increasingly high-priority personal concern. The assumption is that, yes, life can be stressful, but it is the responsibility of each citizen to identify, manage and eventually overcome the ‘challenges’ that inevitably come their way.

Now the existing mix of mediated care-giving sees the introduction of newly minted forms of online care: delivered by bots, according to algorithms. These mediated forms, but especially the new digital techniques, are significantly de-territorialising the location-based, in-person setting of traditional therapeutic services. Among a range of impacts, a ‘keep-your-distance’ imperative seems to be slipping into place, disrupting received expectations of the respective roles and responsibilities of those who seek and those who offer help. Any form of mental health care presupposes certain ‘technologies of the self’, as Nikolas Rose reminds us, but the question is: what will the new technological forms of mental health care bring?


Technologically mediated therapy is not new, but during COVID it has been reframed in terms of necessity, and as something that should perhaps be embraced even after lockdowns end. This turn to digital is revolutionary. A hundred years’ worth of assumptions mandating the primacy of face-to-face, here-and-now interaction are in the process of being upended. It is remarkable how quickly even the most august segments of the business adapted: within months of the pandemic’s arrival the International Journal of Psychoanalysis had published an editorial, ‘The Current Sociosanitary Coronavirus Crisis: Remote Psychoanalysis by Skype or Telephone’, advising members how to practise in non-face-to-face settings. Practical, perhaps, but it set the context for more incursions to come.

It has been argued elsewhere that the digital turn in care is not simply pragmatic but moral. Australian policy grandees Ian Hickie and Stephen Duckett advised mid-2020 in The Conversation that ‘Australia’s governments must seize the opportunity that COVID-19 has created. Digital systems must now be viewed as essential health infrastructure, so that the most disadvantaged Australians move to the front of the queue’. It’s a proposition that packs ethical and rhetorical punch, and the egalitarian impulse is not to be rejected outright. But there are other problems with this kind of approach, as seen in the language of the queue, a recycling of the framework in which health services are treated as commodities. As Dylan Riley observed recently in New Left Review, health services are not goods, and they are not all of a type. In the field of mental health especially, care is not a ‘deliverable’—a product that can be shipped along a supply chain. Frameworks that position us as ‘consumers’ and ‘providers’ deny the reciprocities present in any health setting, but especially mental health settings. This interactionless vocabulary de-natures the actors, rendering one executant, the other passive recipient. As will be discussed, a lexicon of providers and recipients makes the shift into care-via-AI not only possible but even, it would seem, ‘logical’.

Of course, in the short term, and in some specific cases, there may be advantages in de-territorialising the therapeutic situation. Some clients have reported that online offers comfort: there’s benefit in the distance technology imposes. If the ‘other’ is mediated by a machine/screen, it is easier to quell anxieties regarding one’s interlocutor’s thoughts, feelings and judgements. On the ‘supplier’ side, some therapists see online as useful, with cuts in travel time and rental costs, and potential danger minimised as well.

So, for some patient-clients—the young person exploring their sexual diversity; the neuro-atypical person seeking non-corporeal contact—the advantages of distance might outweigh what others think of as the costs of being removed or hidden behind a screen. Similarly, questions of accessibility and convenience cannot be flatly dismissed. But if the normative condition of accountability and the complexity of ‘recognition’ built into face-to-face communication are put under threat by the extension and naturalisation of technological models, then we are entering new terrain. What does it mean that the helping ‘other’ is not present—or, in the most extreme example, that the other is an algorithm?

Mediated and online

A variety of online and other mediated formats are presently in use, each with its own risks and rewards. Telehealth consultations are frequently described as ‘digital medicine’, and there has been huge growth in their use during COVID. To an extent this is a misleading description, as telehealth retains a person-to-person, if not face-to-face, relationship. It involves real-time contact which, while mediated, is synchronous. Asynchronous, non-face-to-face interactions, such as written exchanges, are less common, but they are on the rise. Examples include ‘mood tracking’ apps such as ‘MoodPrism’, a Monash University and beyondblue collaboration in which clients map their states of mind.

A key difference between traditional care practices and the disembodied distance of telephone or video-based contact concerns touch. More precisely, what discriminates between face-to-face and mediated modes of relating is the actuality in the former, and the absence in the latter, of the possibility of touch. In one mode there is the possibility that kinaesthetic contact might occur; in technologically distanced contexts, even the possibility is precluded. Put more subtly, in situations of physical co-presence, as in face-to-face therapeutic situations, there is a sensitivity to the non-specific qualities of communication. Where there is person-to-person immediacy, the ambit of intimacy and what couples therapist Tom Paterson termed the ‘co-ordination of meanings’ are more likely. One can feel closer, or, alternatively, that the other, or oneself, has emotionally moved away. The potential for actual touch may be slight—it is unlikely your therapist will hug you, or attack you, or unexpectedly halve or double the physical distance between you—but a degree of the tactile is provisionally built into the face-to-face therapeutic relationship, and meaning takes shape in the reference points it provides.

By contrast, in technologically mediated contexts there is an absolute guarantee that participants will not touch—will not ever physically interact. Because this is materially precluded, a particular ‘realm’ is created, one where a property that was once outside the control of participants is no longer so, and may now even be elevated as a right: the ‘right’ to be insulated from the possibility of contact.

Of course, differences between face-to-face and digital interaction go way beyond the matter of touch. For example, in-person encounters present a dense and dynamic milieu within which participants have to filter, and respond to, a multi-dimensional mix of external communications and inner experience. The psycho-social dance may be patterned by custom, but in the immediacy of here-and-now, in-person exchanges, there is always the potential for surprise. Depending on one’s disposition, the edgy quality of co-presence can be seen as a formidable challenge that needs to be acknowledged and abated; or the possibility of non-linearity in intersubjective situations might be welcomed as an element in, if not the sine qua non of, in-depth human relating.

In a promotional video for the MoodPrism app, we are told that daily monitoring and ‘mapping’ of an individual’s mood can allow the person to gain control over the patterns that produce feelings of anxiety or depression and thereby gain a greater sense of freedom. ‘Autonomy’ is on offer. The video ends with the friendly injunction: ‘MoodPrism, map your mood and learn more about yourself’. But here, the Enlightenment motto sapere aude, ‘Dare to know’, applied to the self, becomes an injunction to dare to know one’s data! Critical commentators point to the potentially much larger costs of this framing: Catherine Loveday, professor of cognitive neuroscience at the University of Westminster, argues that the apparent freedom these applications offer is associated with self-preoccupation and a state of ‘degraded social interaction’.

As noted, then, some people experience the technologically mediated realm as ‘safer’, as less intense than the demands of unpredictable, in-the-moment, face-to-face encounters. Technological mediation appears to confer the advantage of a buffer, a diminution of circumstances that could lead to awkwardness. Offering a sense of control, distance means less communicational load, lightening the processing demands of embodied situations. Metaphorically, and sometimes literally, mediated exchanges even give anonymity; less judged and less pressed at a distance, and especially if the exchange is asynchronous, participants may have a very strong sense of freedom and release. But if this becomes the norm, it follows that clients will not only tend towards interpersonal wariness but will also miss out on the positive effects of relational co-presence on which care and therapeutic situations have typically depended.

Of a different order to these mediated forms is a more transformational technology again, one that introduces an even greater discontinuity with past practice. What happens when the client’s interlocutor is not human—when your interlocutor is an AI-driven application that simulates a form of sentience?

Cognitive-behavioural therapy

CBT and its sibling, rational-emotive therapy (RET), share the basic premise that the person is an autarkic unit whose presenting problem—depression, anxiety; for some, even psychosis—is produced by an individual’s problematic pattern of thought. This rests on the assumption that rational thought pursues what is best for the self. What is described as ‘rational’, ‘logical’, ‘correct’ or ‘positive’ thought is not to be evaluated against some ultimate standard; rather, it is to be judged on a ‘hedonic calculus’. Jettisoning the murkiness of intersubjectivity, the rational thinker calculates, and the user/consumer of this therapy will be taught to be more effectively self-centred. Albert Ellis, the founder of RET, and Windy Dryden put it this way in The Practice of Rational-Emotive Therapy: ‘rigid absolutism is the very core of human disturbances’. Alongside ‘flexibility’, ‘acceptance of uncertainty’ and a commitment to ‘long-term hedonism’, mental health is seen as conditional on the person’s ability to sustain ‘scientific thinking’, as ‘nondisturbed individuals tend to be more objective, rational, and scientific than more disturbed ones’.

According to CBT advocates, non-calculating thought is the result of defective ‘automatic’ thought patterns. These seem to be a variety of programming error, which can be corrected by standardising the subject’s inner cognitive life. The image is of a machine, either functional or dysfunctional; in the latter case, thinking is beset with operational errors, termed ‘distortions’ or ‘inaccuracies’, and the subject needs to be reprogrammed with a different form of automatic thinking, a regime that is properly ‘rational’, ‘correct’, ‘functional’. Signing up to the project—admitting that your problems are due to the patterns of thought you take for granted—is the first step. In the critical literature this is referred to as ‘client socialisation’ and seen as deeply problematic; for CBT/RET practitioners it is called ‘insight’ or more simply ‘buying in’.

Unlike either the biological perspective on mental illness or analytic approaches exploring deep intra-psychic processes, CBT posits a clockwork-like inner process that is readily accessible verbally. This purportedly regular and predictable set of processes means the source of the problem—and appropriate intervention steps—can be rendered as reproducible segments of dialogue between client and therapist. Abstracted as protocols that absorb individual variations to pursue a known end, CBT assumes a kind of robotic organisation of the human mind. This assumed quality now seems to be attractive to the various corporations advancing AI machines dedicated to mental health interventions.

Alison Darcy, a psychologist and the CEO of Woebot, a high-profile private provider of online mental health services, has developed Woebot as an ‘automated conversational agent’. Leaving the Stanford Artificial Intelligence Lab to start up her company, she sees Woebot as ‘the future of mental health’. In the official publicity, the company intends

to bring the art and science of effective therapy together in a portfolio of digital therapeutics, applications and tools that automate both the content and the process of therapy. To develop technology capable of building trusted relationships with people, so that we can solve for gaps along the entire health care journey, from symptom monitoring to episode management.

CBT is not only practical but is the essence of a modern, evidence-based psychological method, according to Darcy. Opposing what she sees as a mystification muddling the psychotherapeutic field, Darcy describes CBT as accessible and structured. For her, and for others, this means CBT ‘lends itself well to being delivered over the internet’. Programmers are now busy proceduralising CBT: processing it into algorithmic form. Users type in responses to questions, are sent prompts, and receive guidance in the form of messages, emojis and videos. On official Woebot sites it is claimed that the online application is as effective as in-person CBT. Even more:

it may be easier to share your stress with a non-judgmental nonentity than friends, family, or mental health professionals, especially if you’re a person who spends all your time online and has come to find personal interaction offensively intimate.

The above is key:

the program’s non-human disposition [is] a surprising asset in comforting millennials. [In the trials] Testers were more willing to disclose personal information to an artificially intelligent virtual therapist than they were to a living breathing clinician.

Many individuals (and especially men), reports Darcy, are ‘not able or ready to speak to another human’. Part of it is shame, the other part is fear of stigma, which has often been considered a barrier to entry into therapy. ‘There is no risk of managing impressions. [Robots] are not going to judge you’, explains Darcy. ‘We’ve removed the stigma by completely removing the human.’

But it goes further. Awkwardness is minimised by the interlocutor’s machine status; but does this other have a persona? As one piece describes:

Darcy and her colleagues assigned a non-gender specific identity to their creation, which they infused with a dorky personality described as a mix between Kermit the Frog and Dr. Spock. But users quickly and repeatedly imprinted one on the digital pen pal. They referred to Woebot as ‘he’, ‘little dude’ and ‘friend’.

A taste of what is offered is signalled in the introduction, delivered by a therapist pictogram to first-use customers: ‘I’ll teach you how to crush self-defeating thinking styles’. Darcy is committed to ‘mak[ing] great psychological tools radically accessible’. And as with earlier programs where humans interacted with minimally sentient machines—the 1960s ELIZA experiments at MIT Artificial Intelligence Lab, for example—the Woebot official site claims users bond with their robot therapist.

Is there research on this claim? In a paper titled ‘The Digitalization of MH Support’ delivered to a 2020 lockdown conference, UK-based academic Ian Tucker presented his research into the use of AI-driven chatbots providing a mental health service to members of a community-based peer-support hub. He reported that service users overwhelmingly valued the use of AI-driven chatbots. The reasons given included ‘support [was] available 24/7’; service users ‘did not feel judged’; chatbots delivered ‘automated empathy’; and, rather than feeling stressed about ‘being on call to others’, it was good with a machine because there was ‘no expectation of reciprocity’.

Although this was a small study, might these responses suggest a particular orientation to self and other? Respondents appear to be needy and vulnerable (‘round the clock help is what we want’); sensitive to embarrassment (‘it’s good the machine did not look down on me’); wanting to be listened to (‘the machine is good at understanding me’); and reluctant to be there for others (‘I’m stressed and fragile, so give me a break: real people want too much of me’). This last take-out is especially interesting, as the hub these young people were attending describes itself as a ‘peer support’ service.

Might the format and content of this automated service be playing a part in shaping the subject position, and the understanding of respective roles and responsibilities, of its consumers? Does the use of a CBT-fuelled chatbot encourage mutuality, camaraderie and a sense of accountability, or might it summon self-concern, vulnerability, entitlement and the valorisation of convenience? Far from the assumed benefits of peer support, this instance of technological mediation—CBT-with-chatbot—may well be fostering I-centred, non-accountable forms of selfhood and promoting interpersonal illiteracy.

Ironically, while CBT maintains the importance of avoiding absolute and static understandings of selfhood, it has facilitated forms of digital mental health care centred on dogmatic attachments to diagnostic identity, together with repetitive cognitive actions. ‘Hey, if I’m not feeling good, it must be because I’m failing to put myself first with sufficient focus and assertion’; ‘I’m failing to complete the homework set by my app e-therapist’; ‘I’m failing to maintain the lifestyle associated with effectively managing my diagnosis’. Mental health care in this form is encouraging recipients to see themselves as automata that can regulate their states by modifying their inputs—their thinking sequences.

Is this just a different form of relating?

How does this square with the realities of the offline world?

Pierre Bourdieu emphasised that attitudes inform practice. It is also true that what is practised requires less dedicated attention than what is only occasionally performed. Whether driving a car or piloting oneself about, the more an activity is practised, the easier and more naturalised it becomes. For example, in map reading one has to dynamically engage in a series of reflexive operations, a-conscious processes of scaling up and scaling down, in order to make sense of the correspondence between map and territory. The more this is done, the easier it is. Conversely, the more a person relies on Google Maps, on a voice or a simple visual signal, to navigate, the less competent—and, arguably, the less engaged with nature, or place—one becomes. The same can be said for navigating social situations: what is practised tends to become easier, and what is not practised tends to become more difficult and to feel less natural.

In face-to-face relationships—in any real relationship—no single party is in control; the rules are often opaque, even invisible; and you can’t really ‘drop out’ (just exit) if you feel uncomfortable. Mostly this is the opposite of what happens online, and now, as online manners and relations become normalised, the offline world is an increasingly foreign territory. In this now-almost-foreign place, interactions tend to feel awkward, even unsafe. They present a lack of predictability. ‘I feel it’s bumpy. I’m vulnerable, I feel trapped, confused.’ The real world becomes ‘inferior’, ‘strange’, ‘unwelcome’ and ‘unsafe’: face-to-face is intense, just too demanding.

Perhaps this critique lacks compassion. Service users don’t choose to have mental health issues. Indeed, they are beset by some problem, condition, syndrome, affliction, disorder, disease—identifying the appropriate form of address is itself a difficulty. However labelled, given one has been involuntarily visited by a ‘trouble’ of some kind, one is a victim of misfortune rather than guilty of an offence. But as Sarah Schulman writes in Conflict Is Not Abuse: Overstating Harm, Community Responsibility, and the Duty of Repair, we appear to be in a cultural moment where we view our entitlement to compassion as requiring a certain self-pathologisation. We are given culturally to seeing ourselves as victims, in need of protection from psychically threatening forces and contexts. One of the issues here is that, as digital mental-health therapies become more popular, and given their apparent cost-effectiveness, all manner of personal experiences, even those that range out to political conflicts, could be recast as mental-health issues.

Woebot may sit at the outer rim of digital mental health technologies, but we can see the shape and trajectory it implies for understanding mental health, the care relation and, beyond that, the person generally. Where the CBT-AI combination sets up a relentlessly positive artificial other—an interlocutor that is never awkward or demanding, and becomes naturalised as ‘what I like’—real-world relationships are bound to be found unsatisfactory. At the least, it is likely that our capacity to ‘read’ and respond to the other diminishes where the structures and protocols of digital communication take over.

This is consistent with the aggregate effect of the use of online mental health products, which draw the user’s horizon of awareness away from any other and more and more tightly around ‘the me’—my sensitivities, my needs, my entitlements. Given that in these applications the vitality of personal accountability and the expectation of reciprocity typical of the in-person therapeutic setting are depleted, if not expressly dismissed, it would not be surprising to find that the other more-or-less disappears beyond the event horizon. The same dynamic can be observed in a range of fields where the online world lifts the individual, and persons generally, out of the conditions and challenges of embodied life and relationships. In this we might observe that the conditions that have led to this reconstitution of the self–other relationship, and the profound implications if this situation is normalised, go well beyond the realm of technologised ‘mental health’.

Sham Diagnosis

David Ferraro, Mar 2021

…denial of care to a particular group of patients is not the application of some apolitical, medical procedure. It is thoroughly reactionary, and continues the worst traditions of psychiatric care, updated for the neoliberal age.

The Dunning-Kruger Effect is Autocorrelation

Published by Anonymous (not verified) on Sat, 09/04/2022 - 1:35am

Have you heard of the ‘Dunning-Kruger effect’? It’s the (apparent) tendency for unskilled people to overestimate their competence. Discovered in 1999 by psychologists Justin Kruger and David Dunning, the effect has since become famous.

And you can see why.

It’s the kind of idea that is too juicy to not be true. Everyone ‘knows’ that idiots tend to be unaware of their own idiocy. Or as John Cleese puts it:

If you’re very very stupid, how can you possibly realize that you’re very very stupid?

Of course, psychologists have been careful to make sure that the evidence replicates. But sure enough, every time you look for it, the Dunning-Kruger effect leaps out of the data. So it would seem that everything’s on sound footing.

Except there’s a problem.

The Dunning-Kruger effect also emerges from data in which it shouldn’t. For instance, if you carefully craft random data so that it does not contain a Dunning-Kruger effect, you will still find the effect. The reason turns out to be embarrassingly simple: the Dunning-Kruger effect has nothing to do with human psychology.1 It is a statistical artifact — a stunning example of autocorrelation.

What is autocorrelation?

Autocorrelation occurs when you correlate a variable with itself. For instance, if I measure the height of 10 people, I’ll find that each person’s height correlates perfectly with itself. If this sounds like circular reasoning, that’s because it is. Autocorrelation is the statistical equivalent of stating that 5 = 5.

When framed this way, the idea of autocorrelation sounds absurd. No competent scientist would correlate a variable with itself. And that’s true for the pure form of autocorrelation. But what if a variable gets mixed into both sides of an equation, where it is forgotten? In that case, autocorrelation is more difficult to spot.

Here’s an example. Suppose I am working with two variables, x and y. I find that these variables are completely uncorrelated, as shown in the left panel of Figure 1. So far so good.

Figure 1: Generating autocorrelation. The left panel plots the random variables x and y, which are uncorrelated. The right panel shows how this non-correlation can be transformed into an autocorrelation. We define a variable called z, which is correlated strongly with x. The problem is that z happens to be the sum x + y. So we are correlating x with itself. The variable y adds statistical noise.

Next, I start to play with the data. After a bit of manipulation, I come up with a quantity that I call z. I save my work and forget about it. Months later, my colleague revisits my dataset and discovers that z strongly correlates with x (Figure 1, right). We’ve discovered something interesting!

Actually, we’ve discovered autocorrelation. You see, unbeknownst to my colleague, I’ve defined the variable z to be the sum of x + y. As a result, when we correlate z with x, we are actually correlating x with itself. (The variable y comes along for the ride, providing statistical noise.) That’s how autocorrelation happens — forgetting that you’ve got the same variable on both sides of a correlation.
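The hidden-variable trap is easy to reproduce numerically. Below is a minimal sketch using NumPy (the variable names and the seed are mine, not from the original study). Because z = x + y, with x and y independent and of equal variance, the correlation between z and x works out to 1/√2 ≈ 0.71 even though x and y have nothing to do with each other:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Two independent random variables: uncorrelated by construction.
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)

# A 'new' variable that secretly contains x,
# so corr(z, x) puts x on both sides of the correlation.
z = x + y

print(np.corrcoef(x, y)[0, 1])  # ≈ 0.0  (no real correlation)
print(np.corrcoef(z, x)[0, 1])  # ≈ 0.71 (pure autocorrelation)
```

The second correlation looks like a genuine discovery, but it is entirely an artifact of z containing x; the y component merely adds noise.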

The Dunning-Kruger effect

Now that you understand autocorrelation, let’s talk about the Dunning-Kruger effect. Much like the example in Figure 1, the Dunning-Kruger effect amounts to autocorrelation. But instead of lurking within a relabeled variable, the Dunning-Kruger autocorrelation hides beneath a deceptive chart.2

Let’s have a look.

In 1999, Dunning and Kruger reported the results of a simple experiment. They got a bunch of people to complete a skills test. (Actually, Dunning and Kruger used several tests, but that’s irrelevant for my discussion.) Then they asked each person to assess their own ability. What Dunning and Kruger (thought they) found was that the people who did poorly on the skills test also tended to overestimate their ability. That’s the ‘Dunning-Kruger effect’.

Dunning and Kruger visualized their results as shown in Figure 2. It’s a simple chart that draws the eye to the difference between two curves. On the horizontal axis, Dunning and Kruger have placed people into four groups (quartiles) according to their test scores. In the plot, the two lines show the results within each group. The grey line indicates people’s average results on the skills test. The black line indicates their average ‘perceived ability’. Clearly, people who scored poorly on the skills test are overconfident in their abilities. (Or so it appears.)

Figure 2: The Dunning-Kruger chart. From Dunning and Kruger (1999). This figure shows how Dunning and Kruger reported their original findings. Dunning and Kruger gave a skills test to individuals, and also asked each person to estimate their ability. Dunning and Kruger then placed people into four groups based on their ranked test scores. This figure contrasts the (average) percentile of the ‘actual test score’ within each group (grey line) with the (average) percentile of ‘perceived ability’. The Dunning-Kruger ‘effect’ is the difference between the two curves — the (apparent) fact that unskilled people overestimate their ability.

On its own, the Dunning-Kruger chart seems convincing. Add in the fact that Dunning and Kruger are excellent writers, and you have the recipe for a hit paper. On that note, I recommend that you read their article, because it reminds us that good rhetoric is not the same as good science.

Deconstructing Dunning-Kruger

Now that you’ve seen the Dunning-Kruger chart, let’s show how it hides autocorrelation. To make things clear, I’ll annotate the chart as we go.

We’ll start with the horizontal axis. In the Dunning-Kruger chart, the horizontal axis is ‘categorical’, meaning it shows ‘categories’ rather than numerical values. Of course, there’s nothing wrong with plotting categories. But in this case, the categories are actually numerical. Dunning and Kruger take people’s test scores and place them into four ranked groups. (Statisticians call these groups ‘quartiles’.)

What this ranking means is that the horizontal axis effectively plots test score. Let’s call this score x.

Figure 3: Deconstructing the Dunning-Kruger chart. In the Dunning-Kruger chart, the horizontal axis ranks ‘actual test score’, which I’ll call x.

Next, let’s look at the vertical axis, which is marked ‘percentile’. What this means is that instead of plotting actual test scores, Dunning and Kruger plot the score’s ranking on a 100-point scale.3
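For concreteness, here is one way such a percentile conversion might be computed (a hypothetical sketch that ignores ties; Dunning and Kruger's exact procedure may have differed):

```python
import numpy as np

def percentile_rank(scores):
    """Convert raw scores to their ranking on a 100-point scale
    (0 = worst score in the sample, 100 = best)."""
    scores = np.asarray(scores)
    ranks = scores.argsort().argsort()       # 0 .. n-1, by ascending score
    return 100.0 * ranks / (len(scores) - 1)

raw = [55, 72, 61, 90, 48]
print(percentile_rank(raw))   # percentile ranks: 25, 75, 50, 100, 0
```

Note that the output depends only on the *ordering* of the raw scores, not on their actual values — a fact that matters below.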

Now let’s look at the curves. The line labeled ‘actual test score’ plots the average percentile of each quartile’s test score (a mouthful, I know). Things seem fine, until we realize that Dunning and Kruger are essentially plotting test score (x) against itself.4 Noticing this fact, let’s relabel the grey line. It effectively plots x vs. x.

Figure 4: Deconstructing the Dunning-Kruger chart. In the Dunning-Kruger chart, the line marked ‘actual test score’ is plotting test score (x) against itself. In my notation, that’s x vs. x.

Moving on, let’s look at the line labeled ‘perceived ability’. This line measures the average percentile for each group’s self assessment. Let’s call this self-assessment y. Recalling that we’ve labeled ‘actual test score’ as x, we see that the black line plots y vs. x.

Figure 5: Deconstructing the Dunning-Kruger chart. In the Dunning-Kruger chart, the line marked ‘perceived ability’ is plotting ‘perceived ability’ y against actual test score x.

So far, nothing jumps out as obviously wrong. Yes, it’s a bit weird to plot x vs. x. But Dunning and Kruger are not claiming that this line alone is important. What’s important is the difference between the two lines (‘perceived ability’ vs. ‘actual test score’). It’s in this difference that the autocorrelation appears.

In mathematical terms, a ‘difference’ means ‘subtract’. So by showing us two diverging lines, Dunning and Kruger are (implicitly) asking us to subtract one from the other: take ‘perceived ability’ and subtract ‘actual test score’. In my notation, that corresponds to y – x.

Figure 6: Deconstructing the Dunning-Kruger chart. To interpret the Dunning-Kruger chart, we (implicitly) look at the difference between the two curves. That corresponds to taking ‘perceived ability’ and subtracting from it ‘actual test score’. In my notation, that difference is y – x (indicated by the double-headed arrow). When we judge this difference as a function of the horizontal axis, we are implicitly comparing y – x to x. Since x is on both sides of the comparison, the result will be an autocorrelation.

Subtracting y – x seems fine, until we realize that we’re supposed to interpret this difference as a function of the horizontal axis. But the horizontal axis plots test score x. So we are (implicitly) asked to compare y – x to x:

(y – x) ~ x

Do you see the problem? We’re comparing x with the negative version of itself. That is textbook autocorrelation. It means that we can throw random numbers into x and y — numbers which could not possibly contain the Dunning-Kruger effect — and yet out the other end, the effect will still emerge.
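You can check this claim with pure noise. The sketch below feeds random numbers into x and y — data that cannot contain a Dunning-Kruger effect — and computes the correlation between (y – x) and x:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random 'test scores' and 'self assessments': pure noise,
# with no Dunning-Kruger effect built in.
x = rng.uniform(0, 100, size=1_000)
y = rng.uniform(0, 100, size=1_000)

# The comparison implicit in the chart: (y - x) vs. x.
r = np.corrcoef(y - x, x)[0, 1]
print(f"corr(y - x, x) = {r:.2f}")   # strongly negative
```

For independent variables with equal variance, the expected correlation is –1/√2 ≈ –0.71. The 'effect' is baked into the arithmetic.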

Replicating Dunning-Kruger

To be honest, I’m not particularly convinced by the analytic arguments above. It’s only by using real data that I can understand the problem with the Dunning-Kruger effect. So let’s have a look at some real numbers.

Suppose we are psychologists who get a big grant to replicate the Dunning-Kruger experiment. We recruit 1000 people, give them each a skills test, and ask them to report a self-assessment. When the results are in, we have a look at the data.

It doesn’t look good.

When we plot individuals’ test score against their self assessment, the data appear completely random. Figure 7 shows the pattern. It seems that people of all abilities are equally terrible at predicting their skill. There is no hint of a Dunning-Kruger effect.

Figure 7: A failed replication. This figure shows the results of a thought experiment in which we try to replicate the Dunning-Kruger effect. We get 1000 people to take a skills test and to estimate their own ability. Here, we plot the raw data. Each point represents an individual’s result, with ‘actual test score’ on the horizontal axis, and ‘self assessment’ on the vertical axis. There is no hint of a Dunning-Kruger effect.

After looking at our raw data, we’re worried that we did something wrong. Many other researchers have replicated the Dunning-Kruger effect. Did we make a mistake in our experiment?

Unfortunately, we can’t collect more data. (We’ve run out of money.) But we can play with the analysis. A colleague suggests that instead of plotting the raw data, we calculate each person’s ‘self-assessment error’. This error is the difference between a person’s self assessment and their test score. Perhaps this assessment error relates to actual test score?

We run the numbers and, to our amazement, find an enormous effect. Figure 8 shows the results. It seems that unskilled people are massively overconfident, while skilled people are overly modest.

(Our lab tech points out that the correlation is surprisingly tight, almost as if the numbers were picked by hand. But we push this observation out of mind and forge ahead.)

Figure 8: Maybe the experiment was successful? Using the raw data from Figure 7, this figure calculates the ‘self-assessment error’ — the difference between an individual’s self assessment and their actual test score. This assessment error (vertical axis) correlates strongly with actual test score (horizontal axis).

Buoyed by our success in Figure 8, we decide that the results may not be ‘bad’ after all. So we throw the data into the Dunning-Kruger chart to see what happens. We find that despite our misgivings about the data, the Dunning-Kruger effect was there all along. In fact, as Figure 9 shows, our effect is even bigger than the original (from Figure 2).

Figure 9: Recovering Dunning and Kruger. Despite the apparent lack of effect in our raw data (Figure 7), when we plug this data into the Dunning-Kruger chart, we get a massive effect. People who are unskilled over-estimate their abilities. And people who are skilled are too modest.
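The thought experiment in Figures 7 to 9 is easy to simulate. Here's a sketch (with random uniform data standing in for the 'lost' experimental results) that rebuilds the Dunning-Kruger chart from pure noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Random, independent 'actual' and 'perceived' percentile scores.
actual = rng.uniform(0, 100, size=n)
perceived = rng.uniform(0, 100, size=n)

# Group people into quartiles by actual test score, as in the original chart.
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))

# Average the two curves within each quartile.
for q in range(4):
    mask = quartile == q
    print(f"Q{q + 1}: actual = {actual[mask].mean():5.1f}, "
          f"perceived = {perceived[mask].mean():5.1f}")
```

The bottom quartile 'overestimates' its ability by roughly 37 percentile points, and the top quartile 'underestimates' by about the same amount — a textbook Dunning-Kruger pattern, conjured from random numbers.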

Things fall apart

Pleased with our successful replication, we start to write up our results. Then things fall apart. Riddled with guilt, our data curator comes clean: he lost the data from our experiment and, in a fit of panic, replaced it with random numbers. Our results, he confides, are based on statistical noise.

Devastated, we return to our data to make sense of what went wrong. If we have been working with random numbers, how could we possibly have replicated the Dunning-Kruger effect? To figure out what happened, we drop the pretense that we’re working with psychological data. We relabel our charts in terms of abstract variables x and y. By doing so, we discover that our apparent ‘effect’ is actually autocorrelation.

Figure 10 breaks it down. Our dataset consists of statistical noise — two random variables, x and y, that are completely unrelated (Figure 10A). When we calculated the ‘self-assessment error’, we took the difference between y and x. Unsurprisingly, we find that this difference correlates with x (Figure 10B). But that’s because x is autocorrelating with itself. Finally, we break down the Dunning-Kruger chart and realize that it too is based on autocorrelation (Figure 10C). It asks us to interpret the difference between y and x as a function of x. It’s the autocorrelation from panel B, wrapped in a more deceptive veneer.

Figure 10: Dropping the psychological pretense. This figure repeats the analysis shown in Figures 7–9, but drops the pretense that we’re dealing with human psychology. We’re working with random variables x and y that are drawn from a uniform distribution. Panel A shows that the variables are completely uncorrelated. Panel B shows that when we plot y – x against x, we get a strong correlation. But that’s because we have correlated x with itself. In panel C, we input these variables into the Dunning-Kruger chart. Again, the apparent effect amounts to autocorrelation — interpreting y – x as a function of x.

The point of this story is to illustrate that the Dunning-Kruger effect has nothing to do with human psychology. It is a statistical artifact — an example of autocorrelation hiding in plain sight.

What’s interesting is how long it took for researchers to realize the flaw in Dunning and Kruger’s analysis. Dunning and Kruger published their results in 1999. But it took until 2016 for the mistake to be fully understood. To my knowledge, Edward Nuhfer and colleagues were the first to exhaustively debunk the Dunning-Kruger effect. (See their joint papers in 2016 and 2017.) In 2020, Gilles Gignac and Marcin Zajenkowski published a similar critique.

Once you read these critiques, it becomes painfully obvious that the Dunning-Kruger effect is a statistical artifact. But to date, very few people know this fact. Collectively, the three critique papers have about 90 times fewer citations than the original Dunning-Kruger article.5 So it appears that most scientists still think that the Dunning-Kruger effect is a robust aspect of human psychology.6

No sign of Dunning Kruger

The problem with the Dunning-Kruger chart is that it violates a fundamental principle in statistics. If you’re going to correlate two sets of data, they must be measured independently. In the Dunning-Kruger chart, this principle gets violated. The chart mixes test score into both axes, giving rise to autocorrelation.

Realizing this mistake, Edward Nuhfer and colleagues asked an interesting question: what happens to the Dunning-Kruger effect if it is measured in a way that is statistically valid? According to Nuhfer’s evidence, the answer is that the effect disappears.

Figure 11 shows their results. What’s important here is that people’s ‘skill’ is measured independently from their test performance and self assessment. To measure ‘skill’, Nuhfer groups individuals by their education level, shown on the horizontal axis. The vertical axis then plots the error in people’s self assessment. Each point represents an individual.

Figure 11: A statistically valid test of the Dunning-Kruger effect. This figure shows Nuhfer and colleagues’ 2017 test of the Dunning-Kruger effect. Similar to Figure 8, this chart plots people’s skill against their error in self assessment. But unlike Figure 8, here the variables are statistically independent. The horizontal axis measures skill using academic rank. The vertical axis measures self-assessment error as follows. Nuhfer takes a person’s score on the SLCI test (science literacy concept inventory test) and subtracts it from the person’s self assessment, called KSSLCI (knowledge survey of the SLCI test). Each black point indicates the self-assessment error of an individual. Green bubbles indicate means within each group, with the associated confidence interval. The fact that the green bubbles overlap the zero-effect line indicates that within each group, the averages are not statistically different from 0. In other words, there is no evidence for a Dunning-Kruger effect.

If the Dunning-Kruger effect were present, it would show up in Figure 11 as a downward trend in the data (similar to the trend in Figure 8). Such a trend would indicate that unskilled people overestimate their ability, and that this overestimate decreases with skill. Looking at Figure 11, there is no hint of a trend. Instead, the average assessment error (indicated by the green bubbles) hovers around zero. In other words, assessment bias is trivially small.

Although there is no hint of a Dunning-Kruger effect, Figure 11 does show an interesting pattern. Moving from left to right, the spread in self-assessment error tends to decrease with more education. In other words, professors are generally better at assessing their ability than are freshmen. That makes sense. Notice, though, that this increasing accuracy is different from the Dunning-Kruger effect, which is about systematic bias in the average assessment. No such bias exists in Nuhfer’s data.

Unskilled and unaware of it

Mistakes happen. So in that sense, we should not fault Dunning and Kruger for having erred. However, there is a delightful irony to the circumstances of their blunder. Here are two Ivy League professors7 arguing that unskilled people have a ‘dual burden’: not only are unskilled people ‘incompetent’ … they are unaware of their own incompetence.

The irony is that the situation is actually reversed. In their seminal paper, Dunning and Kruger are the ones broadcasting their (statistical) incompetence by mistaking autocorrelation for a psychological effect. In this light, the paper’s title may still be appropriate. It’s just that it was the authors (not the test subjects) who were ‘unskilled and unaware of it’.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.



This work is licensed under a Creative Commons Attribution 4.0 License. You can use/share it any way you want, provided you attribute it to me (Blair Fix) and link to Economics from the Top Down.


Cover image: Nevit Dilmen, altered.

  1. The Dunning-Kruger effect tells us nothing about the people it purports to measure. But it does tell us about the psychology of social scientists, who apparently struggle with statistics.↩
  2. It seems clear that Dunning and Kruger didn’t mean to be deceptive. Instead, it appears that they fooled themselves (and many others). On that note, I’m ashamed to say that I read Dunning and Kruger’s paper a few years ago and didn’t spot anything wrong. It was only after reading Jonathan Jarry’s blog post that I clued in. That’s embarrassing, because a major theme of this blog has been me pointing out how economists appeal to autocorrelation when they test their theories of value. (Examples here, here, here, here, and here.) I take solace in the fact that many scientists were similarly hoodwinked by the Dunning-Kruger chart.↩

  3. The conversion to percentiles introduces a second bias (in addition to the problem of autocorrelation). By definition, percentiles have a floor (0) and a ceiling (100), and are uniformly distributed between these bounds. If you are close to the floor, it is impossible for you to underestimate your rank. Therefore, the ‘unskilled’ will appear overconfident. And if you are close to the ceiling, you cannot overestimate your rank. Therefore, the ‘skilled’ will appear too modest. See Nuhfer et al (2016) for more details.↩

  4. In technical terms, Dunning and Kruger are plotting two different forms of ranking against each other — test-score ‘percentile’ against test-score ‘quartile’. What is not obvious is that this type of plot is data independent. By definition, each quartile contains 25 percentiles whose average corresponds to the midpoint of the quartile. The consequence of this truism is that the line labeled ‘actual test score’ tells us (paradoxically) nothing about people’s actual test score.↩

  5. According to Google scholar, the three critique papers (Nuhfer 2016, 2017 and Gignac and Zajenkowski 2020) have 88 citations collectively. In contrast, Dunning and Kruger (1999) has 7893 citations.↩

  6. The slow dissemination of ‘debunkings’ is a common problem in science. Even when the original (flawed) papers are retracted, they often continue to accumulate citations. And then there’s the fact that critique papers are rarely published in the same journal that hosted the original paper. So a flawed article in Nature is likely to be debunked in a more obscure journal. This asymmetry is partially why I’m writing about the Dunning-Kruger effect here. I think the critique raised by Nuhfer et al. (and Gignac and Zajenkowski) deserves to be well known.↩

  7. When Dunning and Kruger published their 1999 paper, they both worked at Cornell University.↩
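Footnote 4's claim — that the 'actual test score' line is data independent — can be checked numerically. The sketch below computes the average percentile within each test-score quartile for two very different distributions; the answer is always (approximately) the quartile midpoints 12.5, 37.5, 62.5 and 87.5, regardless of the underlying data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Any continuous distribution gives the same answer.
for dist in (rng.normal(size=10_000), rng.exponential(size=10_000)):
    ranks = dist.argsort().argsort()            # 0 .. n-1, by ascending score
    pct = 100.0 * ranks / (len(dist) - 1)       # percentile ranks
    quartile = np.digitize(pct, [25, 50, 75])   # quartile groups 0..3
    means = [pct[quartile == q].mean() for q in range(4)]
    print(np.round(means, 1))   # ~ [12.5, 37.5, 62.5, 87.5] both times
```

In other words, the grey line in Figure 2 could have been drawn before the experiment was run.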

Further reading

Gignac, G. E., & Zajenkowski, M. (2020). The Dunning-Kruger effect is (mostly) a statistical artefact: Valid approaches to testing the hypothesis with individual differences data. Intelligence, 80, 101449.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121.

Nuhfer, E., Cogan, C., Fleisher, S., Gaze, E., & Wirth, K. (2016). Random number simulations reveal how random noise affects the measurements and graphical portrayals of self-assessed competency. Numeracy: Advancing Education in Quantitative Literacy, 9(1).

Nuhfer, E., Fleisher, S., Cogan, C., Wirth, K., & Gaze, E. (2017). How random noise and a graphical convention subverted behavioral scientists’ explanations of self-assessment data: Numeracy underlies better alternatives. Numeracy: Advancing Education in Quantitative Literacy, 10(1).

The post The Dunning-Kruger Effect is Autocorrelation appeared first on Economics from the Top Down.

Backwards Britain: Having Rejected a European Future, We Can Only Hark Back to an Imperial Past

Published by Anonymous (not verified) on Wed, 06/04/2022 - 1:00am

Hardeep Matharu explores how the Russian invasion of Ukraine has exposed the UK's perilous retreat – at a time when collaboration and a new vision of itself are required to navigate the dangerous realities of a changing world


When Boris Johnson stood up at a conference in Blackpool and told his party why they understood what Ukrainians were going through, the Prime Minister was attempting another of his bridges to nowhere.

After 23 days of Russian bombs raining down on Ukraine, Johnson claimed his Tories knew that Brits had the same “instinct” as the people of Ukraine “to choose freedom every time”. He had a “famous recent example”. 

“When the British people voted for Brexit in such large numbers, I don’t believe that it was because they were remotely hostile to foreigners, it’s because they wanted to be free, to do things differently for this country, to be able to run itself,” he declared.

It was another crass Johnson moment. Outrage swirled among politicians and the media; an invitation to an EU summit was reportedly rescinded. Ukraine’s former President, Petro Poroshenko, recorded himself asking Johnson “how many citizens of the United Kingdom died because of Brexit” and instructed him: “Please no comparison.”

But it was also a revealing moment. One which exposed an impossible problem: at a time when Vladimir Putin is bringing genocide back to Europe, when a collective stand by united Western democracies is required to fight against Russian neocolonial fascism, Johnson’s Brexit Britain is utterly at odds with our shifting world. 

Inward-looking, insecure and with delusions of past grandeur, ‘Global Britain’ in a world of Putin’s aggression, a global crisis in democracy and climate catastrophe cannot reconcile its infantilised state with the demands of reality. 

With no new ideas, and imagination deeply lacking, it finds itself in a pathetic and perilous position – in retreat as an apparent form of advance. The very idea of itself that Ukraine is fighting for – one of a different, brighter future – is the very idea of itself that Britain lacks, choosing instead to rest on its laurels. Johnson’s provocation suggested he too had spotted the problem. 

In an audacious attempt at reconciliation, he laid out a blueprint for one of his fantastical bridges – to nowhere: that Brexit and the resistance of war-torn Ukraine embodied similar values; that the UK leaving the EU meant Brits understood Ukraine’s instincts in fighting to join it.

This is the same Brexit that painted the EU as a form of neocolonial fascism; of which Boris Johnson said “Napoleon, Hitler, various people tried this out, and it ends tragically”; and Nigel Farage declared “June 23 is going to be Independence Day”. The same EU which Russian propaganda has characterised as a fascistic super-state.

But this is, after all, Backwards Britain.

For Timothy Snyder, Professor of History at Yale University – a specialist in the history of central and eastern Europe and the Holocaust – Brexiters were right in one respect, “that Brexit would bring back Empire”. “This time, though, England would be the colonised, not the coloniser.”

By comparing Brexit Britain and besieged Ukraine, Johnson was also distancing his country further from Putin. But parallels remain.

While Vladimir Putin’s quest to create a ‘greater Russia’ has taken a barbaric and murderous form – thankfully such brutality is nowhere in sight here in the UK – Boris Johnson’s ‘Global Britain’ is also a dangerous project rooted in an imperial past and future fantasy; of a ‘memory politics’ which obscures and justifies how neither country has a politics that can deliver tangibly for its people.

With no new vision, and colonial nostalgia the one constant, neither Britain nor Russia has reconciled with its past.

As Putin presides over a vastly unequal Russian kleptocracy, dominated by oligarchy and with the country’s wealth looted by its leaders, Johnson’s Government is overseeing an increasingly captured state and a governing party dominated by wealth, a spiralling cost of living crisis, worsening inequality and the biggest drop in living standards in generations.

To distract from their economic failures and lack of policy, both men have whipped up divisive ‘culture wars’ – advancing ‘wedge issues’, targeting minorities and cracking down on those they believe question their mythic narratives. Putin’s fury about the West ‘cancelling’ author JK Rowling, because she “fell out of favour with fans of so-called gender freedoms”, came in the same week as Johnson kicked off another Conservative bash by saying “good evening ladies and gentleman. Or, as Keir Starmer would put it, people who are assigned female or male at birth”. 

These manufactured conflicts around ‘wokeness’ – of which the majority of the public in Britain have been shown to know little – are nothing compared to the actual conflicts (living costs, healthcare and crime to name a few) that people must contend with in their daily lives, with little support from politicians such as Boris Johnson.

Meanwhile, Brexit – the ‘anti-establishment’ revolution which made the Prime Minister its iconic leader – has left Britain permanently on the outside looking in; encouraged by the Russian President, who saw the UK’s farewell to the EU as the first step in his “information blitzkrieg” in destabilising the West.

Both Putin and Johnson have backed their countries into a corner. In this era-defining moment, their myths are now on a collision course with the reality they seek desperately to avoid.

Britain’s willingness to deny and distort its history, combined with its exceptionalism – vaccines, refugee schemes and the economy are all on a long list of “world-beating” achievements – has birthed a nation unable to mature or grow into a true sense of itself. The present feels hollow, perhaps best exemplified by the hollow men now at Britannia’s helm.

Myth is the country’s fail-safe, when a vision of itself rooted in reality is necessary.

That Britain has no outward-looking ideas of what is possible is not only true of its current leadership under Johnson, but also of its opposition politics where no defining story of the future is being advanced. In the land ideas vacate, myths take root and concerns of emotion and identity are encouraged to bloom.

Tony Blair recently spoke of the “two competing ideas” Britain has about itself, and how an “older narrative has reasserted itself” in recent years.

“Britain finds it very difficult to tell a story about itself, because there is a narrative that supposes our best days are behind us, and that’s caught up with what happened in the Second World War: Churchill defeated Nazism, Britain’s finest hour,” he told the New Statesman. “My idea was to take what I think are the enduring best qualities of Britain – open-mindedness, tolerance, innovation – and try to give Britain a different narrative that would allow it to think its best days are ahead of it. I think, for a time, that succeeded… We quite deliberately put Britain forward as a multicultural, tolerant society, looking to the future.”

The London Olympics in 2012 seemed to be the culmination of this confident, forward-looking Britain – with its scientific innovation, diversity, Shakespeare and the NHS all at the forefront in its celebratory opening ceremony. Alongside its ‘Cool Britannia’ ethos, New Labour also positioned Britain as a “bridge” between Europe and America, maintaining strong relationships with both. The limits of this became apparent in Blair’s controversial decision to follow the US into Iraq – a move which has defined, and eclipsed, the achievements of his party’s era in power.

But even this reinvention felt like an attempt to brush the “older narrative” under the carpet. Reforms to the state, including the Union, were partial and measures to tackle issues such as institutional racism incomplete. The desire to hark back to the past and the legacy of Britain’s imperial history were not examined, in and of themselves.

And so the older narrative remained brushed under the carpet, ready for a band of hollow men keen to pull the rug from under us all.

A Britain that is about fairness and equality and has a place in the world, where it’s respected for our soft power and our humanity and for our compassion... I was brought up with those values and values are not myths

Gina Miller

Free from the shackles of the EU, Britain would be free to build partnerships and trade with the rest of the world, the Brexiters told us. It would stand alone and still be a leader on the world stage. 

The promised trade deals have not materialised, war in Ukraine has highlighted the difficulties of Britain’s continued friction with Europe, and the UK’s response to both Afghan and Ukrainian refugees has underlined its closedness. 

But ‘Global Britain standing alone once more’ was always a myth. This country was victorious in two world wars it could not have fought without the help of its soldiers from across the Empire. That their subjugation continued after 1945, and little recognition was made of the colonies’ contribution to the conflicts, led to the drive for independence in Britain’s ‘jewel in the crown’ – India – and then elsewhere.

These are inconvenient truths not found in Britain's grand narratives dominated by Blitz spirit, Rule, Britannia! and Churchill.

The Ukrainian President, Volodymyr Zelensky, showed the power of these historical touch-points in his address to the UK Parliament, when he told MPs he was fighting the Russian invasion in “just the same way you once didn’t want to lose your country when the Nazis started to fight your country and you had to fight for Britain”. Borrowing from Britain’s favourite wartime Prime Minister, he added: “We will fight in the forests, in the fields, on the shores, in the streets.”

It’s not that we shouldn’t feel pride in this history – but this pride alone cannot be the basis for a thriving, modern Britain. To move forward, a more accurate and rounded version of our past must be engaged with, in which unpalatable facts can provide perspective and greater, messy truths. 

In 1946, when he said “we must build a kind of United States of Europe”, Churchill was one of the first to express his commitment to the idea of European integration in this way. But from Boris Johnson’s cosplaying of his hero, the man on the street could be forgiven for thinking that Britain’s wartime Prime Minister was a passionate Eurosceptic.

Our British history is a selective history, intolerant of contradictions and complexity. Yet, its problematic nature is not discussed.

For German journalist Annette Dittert, the Russian invasion shows that – despite the praise it has received for its practical support of Ukraine, which has been acknowledged publicly by President Zelensky himself – Britain “cannot afford to see the EU as a failing entity” any longer, and that its inability to engage with its past is part of its present difficulties. 

Speaking on Friday Night With Byline Times, she said this “has a lot to do with Brexit”.

“If you honestly engage with your own history – which Germany had to do because it was horrific – if you do that seriously, I think you do not fall for national myths so easily anymore, and you understand that cooperation is a real good, cooperation with other countries, with other people is the basis of democracy. I think that somehow that escaped some people in this country,” she said. 

“That’s a big danger for a nation, if you don’t look into your past… you fail to understand reality. And the reality is we have to engage with each other. Britain has to start to operate with the EU as a political entity.”

Britain has arguably not experienced any event which has forced such self-reflection – the loss of the Empire wasn’t seen as a revolution or a defeat. Accompanying this complacency are its other trappings.

One look at Prince William and Kate in their ceremonial dress atop a Land Rover surveying troops in Jamaica last month was enough to transport anyone back to the 1950s; into a bygone era of patronising recognition of native subservience and the white man’s burden being discharged in all its finery. If ever there was an image that conveyed Britain’s lack of imagination and lack of ideas, it was this photo of the Duke and Duchess of Cambridge on their recent tour of the Caribbean – a trip beset with controversy over its colonial optics and calls to remove the Queen as head of state by those in Belize, Jamaica and the Bahamas. 

Prince William and Kate, Duke and Duchess of Cambridge, at an inaugural Commissioning Parade for service personnel from across the Caribbean in Kingston, Jamaica, on 24 March 2022. Photo: Jane Barlow/PA Images/Alamy

While Prince William expressed his “profound sorrow” about slavery, he did not follow in the footsteps of Belgium’s King Philippe, who in 2020 apologised for his ancestor King Leopold II’s brutal abuse of colonial subjects in what is now the Democratic Republic of Congo.

This reluctance to hold a mirror up to its past is a position also pursued by Britain’s current Government, which characterises any meaningful attempt to present a fuller account as ‘rewriting history’ and the questioning of complex historical figures as ‘cancel culture’.

As Corinne Fowler, the historian hounded for helping the National Trust document which of its properties have links to colonialism, told me: “The near hysterical response on most occasions when researchers have simply tried to provide new information about specific ways in which heritage sites relate to the British Empire is worrying.”

But then “part of the colonial legacy,” she added, “is a resistance to having an honest discussion which is evidence-based about what our collective past looks like.”

Discussing the problem of disinformation, a former Cabinet minister told me recently that “Russia is a very emotional country”. They were recalling a trip there shortly after the fall of the Berlin Wall, when a Russian guide said she “can’t believe” what was being said of Stalin’s atrocities.

Timothy Snyder’s analysis of Russia under Putin is that it is stuck in a ‘politics of eternity’ with the “replacement of history with myth”.

Both Brexit and Trump’s ‘Make America Great Again’ movement are examples of this – of a grand narrative placing “one nation at the centre of a cyclical story”. Both advocated a return to a successful past snatched away; offering recognition and meaning but no practical solutions. 

According to Snyder, such projects are also ‘sadopopulist’ – premised on the idea that people are willing to undergo pain in order to feel better about themselves. No matter that Trump and a hard Brexit don’t actually improve their lives, deliverance takes the form of a psychic ‘winning’ through which people feel better off because scapegoated others are to be made worse off.

“Eternity politicians imagine cycles of threat in the past, creating an imagined pattern that they realise in the present by producing artificial crises and daily drama,” Snyder observes. Russia, with its mystic tales of victimhood and suffering, is a prime example.

Speaking a few days into the current Russian invasion, Snyder said that “the basic question in the 20th and now 21st Centuries is: what comes after empire?” In Europe, the answer has been a “process of integration with other post-imperial states”, through the EU. For Russia, the answer is “more empire – it’s an imperial war”.

In his seminal book on Ukraine, The Road to Unfreedom, the historian writes that Ukraine is “the axis between the new Europe of integration and the old Europe of empire”. 

“The politics of integration were fundamentally different from the politics of empire,” he says. “Russia was the first European post-imperial power not to see the EU as a safe landing for itself.” Britain is now another.

At the heart of Putin’s 22-year rule has been an increasing reliance on ‘memory politics’. Just days before he sent troops into Ukraine in February, Putin lamented Russia’s loss of the “territory of the former Russian empire”.

His justification for the invasion, to ‘deNazify’ Ukraine, is premised on a baseless distortion of the past – which has also seen Stalin’s collaboration with Hitler in partitioning and invading Poland during the Second World War airbrushed out of official narratives. Ukraine has no significant far-right presence and President Volodymyr Zelensky is himself Jewish – his family members having been killed during the Holocaust. 

As far back as 2011, academic Nikolay Koposov observed: “It is difficult to condemn Stalinism and to keep insisting on the Stalinist conception of history at the same time.”

“The new mythology of the war emphasises the unity of the people and the state, not the state’s violence against the people,” he wrote. “It stresses the peaceful character of the Soviet foreign policy and defends the memory of the state against charges such as complicity in initiating the war, the violence carried out by the Red Army, and its seizure of independent states.”

The source of Putinism’s legitimacy “lay not in future utopias but in past victories,” he added.

The war crimes being carried out by Russian troops to eradicate the Ukrainian people in the name of an (old and new) Eurasian empire have brought horrors to Europe that we all hoped lay long in the past. But the negation of truth always leads to dark consequences. 

Here in Britain, we take our democracy for granted, with its human rights and rule of law within a rules-based international order. But, in our own ways, we negate the truth. This unwillingness to understand ourselves sets us on a dangerous path of a wider denialism of our own. 

Britain and Russia are not alone in their memory politics. From Erdogan’s Turkey, where citizens acknowledging the Armenian Genocide have been prosecuted, to Narendra Modi’s India, where the BJP leadership persecutes Muslims to advance its claims of a ‘Hindu civilisational destiny’ for the world’s largest democracy, populist ‘strongmen’ everywhere are looking to keep their countries wilfully ignorant of their pasts.

Germany, as Annette Dittert pointed out, is a rare exception.

Its decision to increase defence investment in the wake of the war in Ukraine represents a paradigm shift for the country, since one of the legacies of confronting its past atrocities was a commitment never to build up military force again. Its departure from this reflects its pragmatism in the face of Vladimir Putin’s murderous intent in the heart of Europe.

“I remember very well sitting in endless school days analysing Hitler’s speeches and having to write essays about why there should never be a war coming from German territory ever again,” Dittert told me on Friday Night With Byline Times.

Like many visiting Berlin, I was struck by the Stolpersteine I encountered under my feet – small plaques (or ‘stumbling stones’) commemorating victims of the Nazis, each starting with “here lived”. More than 75,000 of them are dotted around German towns and cities.

I found remarkable the number and variety of memorials in the capital, and the depth of Berlin’s cultural offerings and museums allowing people to access different elements of the country’s history. Having touched remnants of the Berlin Wall, I looked into the faces of those killed trying to cross it; before learning about the families torn apart through state-sponsored deception at the original secret police headquarters, now the Stasi Museum. And just a short stroll away from the city’s famous Brandenburg Gate sits the ‘Europa Experience’, billed as a multimedia journey through Europe and the EU.

Germany provides an example of how a country can integrate its history in order to look to the future. 

Denazification didn’t take hold immediately after the Second World War, when many who had supported Hitler’s regime were still living in German society. But following the high-profile trials of notorious Nazi figures such as Adolf Eichmann, things began to change. 

From the 1960s, a grassroots movement, Vergangenheitsaufarbeitung – “working off the past” – started to take shape, to examine and learn to live with Germany’s dark history. The Stolpersteine, for instance, are researched and applied for by local residents. Denying the Holocaust is illegal in Germany, and in many areas – from education to those working in public services – Germans are made to engage with, and learn from, the crimes of the Nazis.

While this hasn’t eradicated far-right feeling entirely – it is still found in small pockets – ‘working off the past’ is not seen as a one-off exercise, but a process, one which is still ongoing.

The British Empire is still not taught comprehensively in our schools, and even mentioning it continues to be met with awkward silence (as someone who grew up with a father who was born and brought up under the Empire in Kenya and a mother from India, I find these silences bizarre but telling).

From the perplexity at Priti Patel’s hardline approach to immigrants as the granddaughter of refugees, to the former Surrey Police and Crime Commissioner who told me Sir William Macpherson was suffering from “post-colonial guilt” when he conducted his 1999 inquiry into Stephen Lawrence’s murder, there is a distinct lack of interest in our collective amnesia and its consequences. 

But perhaps a reckoning is approaching. 

Russia’s invasion of Ukraine has not only woken the West up to the need for unity in the defence of democracy, it has exposed Britain’s default, out-of-touch, ‘small island’ mentality – one that has come to particular prominence in the Brexit years.

Even the Queen, now in the twilight of her reign, can surely only hold the royal Firm together in its current form for so long. A uniquely respected figure – a bridge between Britain’s past and present – will the country feel so fondly towards those who succeed her? Or will it be a chance for that much needed self-reflection and real reinvention? A moment to consider the role of monarchy and the notions of deference and supremacy that Britain still willingly wraps itself in?

As former diplomat Alexandra Hall Hall has asked in these pages: “is it not time to set the Royal Family free from their gilded cages and in the process free ourselves from the hierarchical mentality which accompanies royalty?” In a rare recognition, Prince William signalled that times are changing in the Commonwealth in response to his much-derided recent royal tour. Maybe events at home will also force the Royal Family’s hand.

But genuine reinvention requires Britain to decide on its values; the lessons from its rounded history it wishes to carry with it, and the future it envisages for the best days still ahead.

Queen Elizabeth II and Prince Philip in a Land Rover greeting crowds in Sabina Park, Kingston, Jamaica, on 25 November 1953. Photo: PA Images/Alamy

Forty years ago, a Conservative Prime Minister struggling in the polls found political capital in war. Margaret Thatcher won an overwhelming majority following the Falklands conflict, with a victory parade drawing 300,000 people to the mile-long route through central London – the first time the city had celebrated a military event since 1949. At lunch in the Guildhall afterwards, Thatcher said the British people were “proud of these heroic pages in our island story”.

Years later, she wrote that the legacy of the Falklands was that “Britain’s name meant something more than it had” and that its significance “was enormous, both for Britain’s self-confidence and for our standing in the world”.

Though four decades have passed, Thatcher’s imperial spirit is still alive today. Endorsing calls for a Margaret Thatcher Day, Conservative Party Chairman Oliver Dowden recently tweeted that “Margaret Thatcher led the UK to victory in our defence of the Falklands” and “ended our national decline”.

While the Falklands was another harking back, Thatcher did look forward – with her, albeit divisive and at times destructive, vision of a free-market, privatised, ‘Big Bang’ Britain.

Can Britain now find a way to reconcile the lessons of the past with new ideas for its future; to build a more equitable country, one of genuine equality of opportunity and unafraid of looking ahead?

Speaking after a performance of Bloody Difficult Women, during its recent run at Hammersmith’s Riverside Studios, businesswoman Gina Miller – on whose story the play is based – told me what prompted her to take the UK Government to court over its plans to trigger Article 50 (and Brexit) without consulting Parliament: she had an idea of what Britain is which she felt was being violated by how the process was playing out.

Born and brought up in Guyana, a former British colony, she said many children of the Commonwealth feel attached to a certain notion of Britain in this way.

“We listened to the BBC World Service every night, the Queen was on the wall, my mother collected blue Wedgwood china – we literally were more British I think than the British... it’s British values that were taught to us growing up; respect and truth and honesty and doing the right thing. All those values are instilled in us and so, to me, it’s what you defend.”

Recalling her appearance on the BBC’s Andrew Marr Show, she said that the journalist observed off-camera – to her shock – that she and Nigel Farage were actually “really similar”.

“He said ‘you both have a very strong view of Britain – yours is different to his, but you have a very strong view of what you’re fighting for’. And I have a very strong view still of what I’m fighting for… a Britain that is about fairness and equality and has a place in the world, where it’s respected for our soft power and our humanity and for our compassion. 

“I was brought up with those values and values are not myths. But the snake-oil salesmen [did sell] a myth… playing on people’s fear and anger and deep resentment.”

The biggest crisis facing Britain is the crisis of facing itself. Time is of the essence in integrating our past and looking to the future – lest we drift further, beyond a point of no return.



