Deontologists, Utilitarians, and Predictability

Non-philosophers tend to view utilitarians as less moral and deontologists as more moral. The reason for this, according to recent research, is that deontologists are more “predictable.”

However, if utilitarians are made to seem more predictable in the thought experiments given to survey respondents, then they’re perceived as no less moral than the deontologists. It seems that being able to form accurate expectations of an agent’s reasoning and behavior matters more to people’s assessment of the agent’s morality than the content of the agent’s principles (at least within some limited range of options). Rule consequentialists, is this your moment?

These findings are reported in “The search for predictable moral partners: Predictability and moral (character) preferences,” by Martin Turpin (Waterloo) et al., published in The Journal of Experimental Psychology. The researchers conducted studies in which thought experiments and questions were presented to around 2,000 people.

The authors write:

If morality is fundamentally underpinned by the need to cooperate, then it follows that the ability to predict another’s behavior should be paramount in determining moral character. A great deal of uncertainty exists when deciding whether to cooperate with others. A person’s moral character is unclear when first encountering them. As such, one cannot be certain that the intention to cooperate is present in another’s mind… One way of reducing social uncertainty is to establish clear rules that everyone is expected to follow. If everyone is aware of the same rules or norms, then any given member of a society can generally be trusted to be a predictable cooperator…

Regardless of the consequences of an agent’s actions, and regardless of their violation of proscriptions against killing, participants in the current study consistently preferred the agent who they judged to be most predictable. That is, utilitarian actors opting to sacrifice an individual for the greater good were judged as more or less moral than a deontological actor refusing this sacrifice, depending on how predictable their actions appeared.

We find that assessments of predictability are multi-faceted, strongly associated with judgments of an agent’s consistency, reliability, intelligibility, and methodicalness. However, we observe that assessments of predictability most strongly evoke judgments related to “consistency of behavior,” particularly for judgments of deontological actors. Additionally, we show that participants’ preferred course of action within the described moral dilemmas (i.e., WWYD judgments) are positively associated with judgments of predictability and morality. Nevertheless, assessments of predictability maintain a unique and non-trivial contribution to judgments of morality, even when controlling for participants’ preferred moral decisions. Overall, we suggest that judgments of an agent’s predictability inform judgments of their morality.

Discussion welcome.

(via MR)

Philosopher Awarded Nearly $1 Million Grant for Memory and Forgiveness Project

Felipe De Brigard, associate professor of philosophy, psychology, and neuroscience at Duke University, and leader of the Imagination and Modal Cognition Lab there, has been awarded a grant of $988,602 for his project, “Forgetting and Forgiving: Exploring the Connections between Memory and Forgiveness.”

The grant is from the John Templeton Foundation.

The project takes philosophical and empirical approaches to conceptual and psychological questions related to forgiveness, emotions, and memory, focusing on victims of political violence:

People who have suffered wrongdoings are often urged to “forgive and forget”. Indeed, forgetting the details of past experiences that elicit painful, sometimes debilitating, feelings of resentment, anger and hate, seems necessary in order to replace those negative feelings with more positive ones. However, remembering the details of past wrongdoing also seems necessary for forgiveness. If a person’s memory of a past offense were somehow deleted from her mind, we wouldn’t say that she had forgiven the offender. Forgiveness, then, seems to require a contradiction: one must both remember and forget to forgive. How should we understand the precise relationship between forgiving and forgetting to resolve this paradox? Despite a growing body of research on forgiveness, the relationship between memory and forgiveness remains unclear.

The current project seeks to explore this relationship both empirically and theoretically. Based upon the working hypothesis that forgiveness prompts a psychological process of emotional reappraisal of memories of past wrongdoing, the experimental aspect of the project aims to investigate the effects of forgiving on subsequent recollection, as well as the effects that different reappraisal techniques may have on people’s tendency to forgive offenses. The empirical investigation will be conducted across three different populations: a sample of direct victims of political violence from Montes de Maria, a rural region in the north of Colombia, an urban sample of indirect victims from Bogota, and a comparison sample from individuals in the United States. Clarifying the role that memory plays on forgiveness will not only advance our understanding of this notion, but it will also provide a solid empirical basis upon which to build a theory of forgiveness’ emotional change.

Is Human Probability Intuition Actually ‘Biased’?

According to behavioral economics, most human decisions are mired in ‘bias’. It muddles our actions from the mundane to the monumental. Human behavior, it seems, is hopelessly subpar.1

Or is it?

You see, the way that behavioral economists define ‘bias’ is rather peculiar. It involves 4 steps:

1. Start with the model of the rational, utility-maximizing individual — a model known to be false;
2. Re-falsify this model by showing that it doesn’t explain human behavior;
3. Keep the model and label the deviant behavior a ‘bias’;
4. Let the list of ‘biases’ grow.

Jason Collins (an economist himself) thinks this bias-finding enterprise is weird. In his essay ‘Please, Not Another Bias!’, Collins likens the proliferation of ‘biases’ to the accumulation of epicycles in medieval astronomy. Convinced that the Earth was the center of the universe, pre-Copernican astronomers explained the (seemingly) complex motion of the planets by adding ‘epicycles’ to their orbits — endless circles within circles. Similarly, when economists observe behavior that doesn’t fit their model, they add a ‘bias’ to their list.2

The accumulation of ‘biases’, Collins argues, is a sign that science is headed down the wrong track. What scientists should do instead is actually explain human behavior. To do that, Collins proposes, you need to start with human evolution.

The ‘goal’ of evolution is not to produce rational behavior. Evolution produces behavior that works — behavior that allows organisms to survive. If rationality does evolve, it is a tool to this end. On that front, conscious reasoning appears to be the exception in the animal kingdom. Most animals survive using instinct.

That brings me to the topic of this essay: the human instinct for probability. By most accounts, this instinct is terrible. And that should strike you as odd. As a rule, evolution does not produce glaring flaws. (It slowly removes them.) So if you see flaws everywhere, it’s a good sign that you’re observing an organism in a foreign environment, a place to which it is not adapted.

When it comes to probability, I argue that humans now live in a foreign environment. But it is of our own creation. Our intuition, I propose, was shaped by observing probability in short samples — the information gleaned from a single human lifetime. But with the tools of mathematics, we now see probability as what happens in the infinite long run. It’s in this foreign mathematical environment that our intuition now lives.

Unsurprisingly, when we compare our intuition to our mathematics, we find a mismatch. But that doesn’t mean our intuition is wrong. Perhaps it is just solving a different problem — one not usually posed by mathematics. Our intuition, I hypothesize, is designed to predict probability in the short run. And on that front, it may be surprisingly accurate.

‘Bias’ in an evolutionary context

As a rule, evolutionary biologists don’t look for ‘bias’ in animal behavior. That’s because they assume that organisms have evolved to fit their environment. When flaws do appear, it’s usually because the organism is in a foreign place — an environment where its adaptations have become liabilities.3

As an example, take a deer’s tendency to freeze when struck by headlights. This suicidal flaw is visible because the deer lives in a foreign environment. Deer evolved to have excellent night vision in a world without steel death machines attached to spotlights. In this world, the transition from light to dark happened slowly, so there was no need for fast pupil reflexes. Nor was there a need to flee from bright light. The evolutionary result is that when struck by light, deer freeze until their eyes adjust. It’s a perfectly good behavior … in a world without cars. In the industrial world, it’s a fatal flaw.

Back to humans and our ‘flawed’ intuition for probability. I suspect that many apparent ‘biases’ in our probability intuition stem from a change in our social environment, a change in the way we view ‘chance’. But before I discuss this idea, let’s review a widely known ‘flaw’ in our probability intuition — something called the gambler’s fallacy.

The gambler’s fallacy

On August 18, 1913, a group of gamblers at the Monte Carlo Casino lost their shirts. It happened at a roulette table, which had racked up a conspicuous streak of blacks. As the streak grew longer, the gamblers became convinced that red was ‘due’. And yet, with each new roll they were wrong. The streak finally ended after 26 blacks in a row. By then, nearly everyone had gone broke.

These poor folks fell victim to what we now call the gambler’s fallacy — the belief that if an event has happened more frequently than normal in the past, it is less likely to happen in the future. It is a ‘fallacy’ because in games like roulette, each event is ‘independent’. It doesn’t matter if a roulette ball landed on black 25 times in a row. On the next toss, the probability of landing on black remains the same (18/37 on a European wheel, or 18/38 on an American wheel).
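To see this independence in action, here is a minimal Python sketch (illustrative only; the simulations in this post were done in R and C++) that spins a simulated European wheel many times, then compares the overall frequency of red with the frequency of red immediately after five blacks in a row:

```python
import random

random.seed(1)

TOTAL_SLOTS = 37  # European wheel: 18 red, 18 black, 1 zero


def spin():
    """Return 'red', 'black', or 'zero' with European-wheel odds."""
    x = random.randrange(TOTAL_SLOTS)
    if x < 18:
        return "red"
    elif x < 36:
        return "black"
    return "zero"


spins = [spin() for _ in range(1_000_000)]

# Overall frequency of red — roughly 18/37 ≈ 0.486
red_overall = sum(s == "red" for s in spins) / len(spins)

# Frequency of red immediately after a streak of 5 blacks
after_streak = [
    spins[i] for i in range(5, len(spins))
    if all(s == "black" for s in spins[i - 5:i])
]
red_after_streak = sum(s == "red" for s in after_streak) / len(after_streak)

print(f"P(red)           = {red_overall:.3f}")
print(f"P(red | 5 black) = {red_after_streak:.3f}")
```

Both frequencies come out the same (up to sampling noise): the streak of blacks tells you nothing about the next spin.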

Many gamblers know that roulette outcomes are independent events, meaning the past cannot affect the future. And yet their intuition consistently tells them the opposite. Gamblers at the Monte Carlo Casino had an overwhelming feeling that after 25 blacks, the ball had to land on red.

The mathematics tell us that this intuition is wrong. So why would evolution give us such a faulty sense of probability?

Games of chance as a foreign environment

It is in ‘games of chance’ (like roulette) that flaws in our probability intuition are most apparent. Curiously, it is in these same games where the mathematics of probability are best understood. I doubt this is a coincidence.

Let’s start with our intuition. Games of chance are to humans what headlights are to deer: a foreign environment to which we’re not adapted. As such, these games reveal ‘flaws’ in our probability intuition. The Monte Carlo gamblers who lost their shirts betting on red were the equivalent of deer in headlights, misled by their instinct.

And yet unlike deer, we recognize our flaws. We know that our instinct misguides us because we’ve developed formal tools for understanding probability. Importantly, these tools were forged in the very place where our intuition is faulty — by studying games of chance.

It was a gamblers’ dispute in 1654 that led Blaise Pascal and Pierre de Fermat to first formalize the mathematics of probability. A few years later, Christiaan Huygens published a book on probability called De Ratiociniis in Ludo Aleae — ‘the value of all chances in games of fortune’. The rules of probability were then further developed by Jakob Bernoulli and Abraham de Moivre, who again focused mostly on games of chance. Today, the same games remain the basis of probability pedagogy — the domain where students learn how to calculate probabilities and discover that their intuition is wrong.

Why did we develop the mathematics of probability in the place where our intuition most misleads? My guess is that it’s because games of chance are at once foreign yet controlled. In evolutionary terms, games of chance are a foreign environment — something we did not evolve to play. But in scientific terms, these games are an environment that we control. Why? Because we designed the game.

By studying games we designed, we get what I call a ‘god’s eye view’ of probability. We know, for instance, that the probability of drawing an ace out of a deck of cards is 1 in 13. We know this because we designed the cards to have this probability. It is ‘innate’ in the design.

When we are not the designers, the god’s eye view of probability is inaccessible. To see this fact, ask yourself — what is the ‘innate’ probability of rain on a Tuesday? It’s a question that is unanswerable. All we can do is observe that on previous Tuesdays, it rained 20% of the time. Is this ‘observed’ probability of rain the same as the ‘innate’ probability? No one knows. The ‘innate’ probability of rain is forever unobservable.

Because games of chance are an environment that is both controlled yet foreign, they are a fertile place for understanding our probability intuition. As designers, we have a god’s eye view of the game, meaning we know the innate probability of different events. But as players, we’re beholden to our intuition, which knows nothing of innate probability.

This disconnect is important. As game designers, we start with innate probability and deduce the behavior of the game. But with intuition, all we have are observations, from which we must develop a sense for probability.

Here’s the crux of the problem. To get an accurate sense for innate probability, you need an absurdly large number of observations. And yet humans typically observe probability in short windows. This mismatch may be why our intuition appears wrong. It’s been shaped to predict probability within small samples.

When you flip a coin, the chance of heads or tails is 50–50.

So begins nearly every introduction to the mathematics of probability. What we have here is not a statement of fact, but an assumption. Because we design coins to be balanced, we assume that heads and tails are equally likely. From there, we deduce the behavior of the coin. The mathematics tell us that over the long run, innate probability will show its face as a 50–50 balance between heads and tails.

The trouble is, this ‘long run’ is impossibly long.

How many coin tosses have you observed in your life? A few hundred? A few thousand? Likely not enough to accurately judge the ‘innate’ probability of a coin.

To see this fact, start with Table 1, which shows 10 tosses of a simulated coin. For each toss, I record the cumulative number of heads, and then divide by the toss number to calculate the ‘observed’ probability of heads. As expected, this probability jumps around wildly. (This jumpiness is what makes tossing a coin fun. In the short run, it is unpredictable.)

Table 1: A simulated coin toss

Toss   Outcome   Cumulative heads   Observed probability of heads (%)
  1       T             0                        0.0
  2       T             0                        0.0
  3       H             1                       33.3
  4       H             2                       50.0
  5       T             2                       40.0
  6       H             3                       50.0
  7       H             4                       57.1
  8       H             5                       62.5
  9       H             6                       66.7
 10       T             6                       60.0

In the long run, this jumpiness should go away and the ‘observed’ probability of heads should converge to the ‘innate’ probability of 50%. But it takes a surprisingly long time to do so.

Figure 1 illustrates. Here I extend my coin simulation from 10 tosses to over 100,000 tosses. The red line is the coin’s ‘innate’ probability of heads (50%). This probability is embedded in my simulation code, but is accessible only to me, the simulation ‘god’. Observers know only the coin’s behavior — the ‘observed’ probability of heads shown by the blue line.

Figure 1: In search of ‘innate’ probability. I’ve plotted here the results of a simulated coin toss. The blue line shows the ‘observed’ probability of heads after the respective number of tosses. The red line shows the ‘innate’ probability of heads (50%), which is embedded in the simulation code but inaccessible to observers.

Here’s what Figure 1 tells us. If observers see a few hundred tosses of the coin, they will deduce the wrong probability of heads. (The coin’s ‘observed’ probability will be different from its ‘innate’ probability.) Even after a few thousand tosses, observers will be misled. In this simulation, it takes about 100,000 tosses before the ‘observed’ probability converges (with reasonable accuracy) to the ‘innate’ probability.4
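The simulation behind Figure 1 is easy to reproduce. Here is a short Python sketch (illustrative only; the post’s own simulations are in R and C++) that tracks the ‘observed’ probability of heads at a few checkpoints, while the ‘innate’ probability of 0.5 is embedded in the code:

```python
import random

random.seed(42)

# The 'innate' probability of heads is 0.5, but observers only see
# the running frequency — the 'observed' probability.
heads = 0
checkpoints = [10, 100, 1_000, 10_000, 100_000]
observed = {}

for toss in range(1, 100_001):
    heads += random.random() < 0.5  # True (1) counts as heads
    if toss in checkpoints:
        observed[toss] = heads / toss

for n in checkpoints:
    print(f"{n:>7} tosses: observed P(heads) = {observed[n]:.4f}")
```

Run it a few times with different seeds: the early checkpoints wander well away from 50%, and only the largest samples settle near the innate value.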

Few people observe 100,000 tosses of a real coin. And that means their experience can mislead. They may conclude that a coin is ‘biased’ when it is actually not. Nassim Nicholas Taleb calls this mistake getting ‘fooled by randomness’.

Not only do we fool ourselves today, I suspect that we fooled ourselves repeatedly as we evolved. Before we designed our own games of chance, the god’s eye view of probability was inaccessible. All we had were observations of real-world outcomes, which could easily mislead.

For outcomes that were frequent, we could develop an accurate intuition. We are excellent, for instance, at using facial expressions to judge emotions — obviously because such judgment is a ubiquitous part of social life. But for outcomes that were rare (things like droughts and floods), patterns would be nearly impossible to see. The result, it seems, was not intuition but superstition. The worse our sense for god’s-eye probability, the more we appealed to the gods.

When coins have ‘memory’

Even when we know the god’s-eye probability, we find it difficult to suppress our intuition. Take the gambler’s fallacy, whereby we judge independent events based on the past. When a coin lands repeatedly on heads, we feel like tails is ‘due’. And yet logic tells us that this feeling is false. Each toss of the coin is an independent event, meaning past outcomes cannot affect the future. So why do we project ‘memory’ onto something that has none?

One reason may be that when we play games of chance, we are putting ourselves in a foreign environment, much like deer in headlights. As a social species, our most significant interactions are with things that do have a memory (i.e. other humans). So a good rule of thumb may be to project memory onto everything with which we interact. Sure, this intuition can be wrong. But if the costs of falsely projecting memory (onto a coin toss, for instance) are less than the costs of falsely not projecting memory (onto your human enemies, for example), this rule of thumb would be useful. Hence it could evolve as an intuition.

This explanation for our flawed intuition is well trodden. But there is another possibility that has received less attention. It could be that our probability intuition is not actually flawed, but is instead a correct interpretation of the evidence … as we see it.

Remember that our intuition has no access to the god’s eye view of ‘innate’ probability. Our intuition evolved based only on what our ancestors observed. What’s important is that humans typically observe probability in short windows. (For instance, we watch a few dozen tosses of a coin.) Interestingly, over these short windows, independent random events do have a memory. Or so it appears.

In his article ‘Aren’t we smart, fellow behavioural scientists’, Jason Collins shows you how to give a coin a ‘memory’. Just toss it 3 times and watch what follows a heads. Repeat this experiment over and over, and you’ll conclude that the coin has a memory. After a heads, the coin is more likely to return a tails.

To convince yourself that this is true, look at Table 2. The left-hand column shows all the possible outcomes for 3 tosses of a coin. For each outcome, the right-hand column shows the probability of getting tails after heads.

Table 2: The probability of tails after heads when tossing a coin 3 times

Outcome   P(tails after heads)
HHH                 0%
HHT                50%
HTH               100%
HTT               100%
THH                 0%
THT               100%
TTH          (excluded)
TTT          (excluded)

Expected probability of tails after heads: 58%

Modeled after Jason Collins’ table in Aren’t we smart, fellow behavioural scientists.

To understand the numbers, let’s work through some examples:

• In the first row of Table 2 we have HHH. There are no tails, so the probability of tails after heads is 0%.
• In the second row we have HHT. One of the heads is followed by a tails, the other is not. So the probability of tails after heads is 50%.

We keep going like this until we’ve covered all possible outcomes.

To find the expected probability of tails after heads, we average over all the outcomes where heads occurred in the first two flips. (That means we exclude the last two outcomes.) The resulting probability of tails after heads is:

\displaystyle \begin{aligned} P(T ~|~ H) &= \frac{0\% + 50\% + 100\% + 100\% + 0\% + 100\%}{6} \\ \\ &= \frac{350\%}{6} \\ \\ &\approx 58\% \end{aligned}

When tossed 3 times, our coin appears to have a memory! It ‘remembers’ when it lands on heads, and endows the next toss with a higher chance of tails. Or so it would appear if you ran this 3-toss experiment many times.
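Table 2’s 58% figure can be checked by brute-force enumeration. Here is a short Python sketch (my illustration, not the post’s code) that walks the eight equally likely 3-toss sequences:

```python
from itertools import product

# For each sequence with at least one head in the first two tosses,
# compute the fraction of those heads that are followed by a tail.
# Averaging these per-sequence fractions reproduces Table 2.
fractions = []
for seq in product("HT", repeat=3):
    followers = [seq[i + 1] for i in range(2) if seq[i] == "H"]
    if followers:  # skip TTH and TTT (no heads to condition on)
        fractions.append(followers.count("T") / len(followers))

p_tails_after_heads = sum(fractions) / len(fractions)
print(f"Expected P(tails after heads) = {p_tails_after_heads:.1%}")  # 58.3%
```

Six of the eight sequences qualify, and their fractions average to 7/12 ≈ 58.3% — the coin’s apparent ‘memory’.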

The evidence would look something like the blue line in Figure 2. This is the observed probability of getting tails following heads in a simulated coin toss. Each iteration (horizontal axis) represents 3 tosses of the coin. The vertical axis shows the cumulative probability of tails after heads as we repeat the experiment. After a few thousand iterations, the coin’s preference for tails becomes unmistakable.

Figure 2: When tossed 3 times, a simulated coin favors tails after heads. I’ve plotted data from a simulation in which I repeatedly toss a balanced coin 3 times. The blue line shows the observed probability that tails follows heads. The horizontal axis shows the number of times I’ve repeated the experiment.

The data shouts at us that the coin has a ‘memory’. Yet we know this is impossible. What’s happening?

The coin’s apparent ‘memory’ is actually an artifact of our observation window of 3 tosses. As we lengthen this window, the coin’s memory disappears. Figure 3 shows what the evidence would look like. Here I again observe the probability of tails after heads during a simulated coin toss. But this time I change how many times I flip the coin. For an observation window of 5 tosses (red), tails bias remains strong. But when I increase the observation window to 10 tosses (green), tails bias decreases. And for a window of 100 tosses (blue), the coin’s ‘memory’ is all but gone.

Figure 3: Favoritism for tails (after heads) disappears as the observation window lengthens. I’ve plotted data from a simulation in which I repeatedly toss a balanced coin n times and measure the probability of tails after heads. As the observation window n increases (from 5 to 10 to 100 tosses), tails favoritism decreases. The vertical axis shows the cumulative probability of tails after heads. The horizontal axis shows the number of times I’ve repeated the experiment.

Here’s the take-home message. If you flip a coin a few times (and do this repeatedly), the evidence will suggest that the coin has a ‘memory’. Increase your observation window, though, and the ‘memory’ will disappear.

The example above shows the coin’s apparent memory after a single heads. But what if we lengthen the run of heads? Then the coin’s memory becomes more difficult to wipe. Figure 4 illustrates.

Figure 4: Wiping a coin’s ‘memory’. I’ve plotted here the results of a simulated coin toss in which I measure the probability of getting tails after a run of heads. Each panel shows a different sized run (from top to bottom: 3, 5, 10, and 15 heads in a row). The horizontal axis shows the number of tosses observed. The vertical axis shows the observed probability (the average outcome over many iterations) of getting tails after the corresponding run of heads. The longer the run of heads, the more tosses you need to remove the coin’s apparent preference for tails.

Here’s how to interpret the data in Figure 4. When the observed probability of tails exceeds 50%, the coin appears to have a ‘memory’. As the observation window increases, this memory slowly disappears, and eventually converges to the innate probability of 50%. But how long this convergence takes depends on the length of the run of heads. The longer the run, the larger the observation window needed to wipe the coin’s memory.

For a run of 3 heads (top panel), it takes a window of about 1000 tosses to purge the preference for tails. For 5 heads in a row (second panel), it takes a 10,000-toss window to purge tails favoritism. For 10 heads in a row (third panel), the memory purge requires a window of 100,000 tosses. And for 15 heads in a row (bottom panel), tails favoritism remains up to a window of 1 million tosses.
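The effect of run length can be sketched the same way. Here is a Python illustration (again my own, not the author’s C++ code) that fixes the window at 100 tosses and varies the length of the run of heads:

```python
import random

random.seed(7)


def run_fraction(n, k):
    """Toss a fair coin n times; return the fraction of (overlapping)
    runs of k heads that are followed by a tail, or None if the window
    contains no such run."""
    t = [random.random() < 0.5 for _ in range(n)]  # True = heads
    runs = tails = 0
    for i in range(k, n):
        if all(t[i - k:i]):
            runs += 1
            tails += not t[i]
    return tails / runs if runs else None


# Fix the window at 100 tosses and vary the run length. The longer the
# run of heads, the stronger the apparent preference for tails.
means = {}
n, repeats = 100, 30_000
for k in (1, 2, 3):
    vals = [v for v in (run_fraction(n, k) for _ in range(repeats))
            if v is not None]
    means[k] = sum(vals) / len(vals)
    print(f"after a run of {k} heads: mean P(tails) = {means[k]:.3f}")
```

At a fixed window of 100 tosses, the apparent tails bias grows with the run length, which is why longer runs need ever larger windows to purge.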

The corollary is that when we look at a short observation window, the evidence shouts at us that the coin has a ‘memory’. After a run of heads, the coin ‘prefers’ tails. The data tells us so!

Playing god with AI

With the above evidence in mind, imagine that we play god. We design an artificial intelligence that repeatedly observes the outcome of a coin toss and learns to predict probability.

Here’s the catch.

Every so often we force the AI to reboot. As it restarts, the machine’s parameters (its ‘intuition’) remain safe. But its record of the coin’s behavior is purged. This periodic reboot forces our AI to understand the coin’s probability by looking at short windows of observation. The AI never sees more than a few thousand tosses in a row.

We let the AI run for a few months. Then we open it up and look at its ‘intuition’. Lo and behold, we find that after a long run of heads, the machine has an overwhelming sense that tails is ‘due’. To the machine, the coin has a memory.

The programmers chide the AI for its flawed intuition. ‘Silly machine,’ they say. ‘Coins have no memory. Each toss is an independent random event. Your intuition is flawed.’

Then a programmer looks at the data that the machine was fed. And she realizes that the machine’s intuition is actually accurate. The AI is predicting probability not for an infinite number of tosses (where ‘innate’ probability shows its face), but for a small number of tosses. And there, she finds, the machine is spectacularly accurate. When the sample size is small, assuming the coin has a memory is a good way to make predictions.

The AI machine, you can see, is a metaphor for human intuition. Because our lives are finite, humans are forced to observe probability in short windows. When we die, the raw data gets lost. But our sense for the data gets passed on to the next generation.5 Over time, an ‘intuition’ for probability evolves. But like the AI, it is an intuition shaped by observing short windows. And so we (like the AI) feel that independent random events have memory.

Correct intuition … wrong environment

Let’s return to the idea that our probability intuition is ‘biased’. In economics, ‘bias’ is judged by comparing human behavior to the ideal of the rational utility maximizer. When we make this comparison, we find ‘bias’ everywhere.

From an evolutionary perspective, this labelling makes little sense. An organism’s ‘bias’ should be judged in relation to its evolutionary environment. Otherwise you make silly conclusions — such as that fish have a ‘bias’ for living in water, or humans have a ‘bias’ for breathing air.

So what is the evolutionary context of our probability intuition? It is random events viewed through a limited window — the length of a human life. In this context, it’s not clear that our probability intuition is actually biased.

Yes, we tend to project ‘memory’ onto random events that are actually independent. And yet when the sample size is small, projecting memory on these events is actually a good way to make predictions. I’ve used the example of a coin’s apparent ‘memory’ after a run of heads. But the same principle holds for any independent random event. If the observation window is small, the random process will appear to have a memory.

When behavioral economists conclude that our probability intuition is ‘biased’, they assume that its purpose is to understand the god’s eye view of innate probability — the behavior that emerges after a large number of observations. But that’s not the case. Our intuition, I argue, is designed to predict probability as we observe it … in small samples.

In this light, our probability intuition may not actually be biased. Rather, by asking our intuition to understand the god’s eye view of probability, we are putting it in a foreign environment. We effectively make ourselves the deer in headlights.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.

Simulating a coin toss

With modern software like R, it’s easy to simulate a coin toss. Here, for instance, is R code to generate a random series of 1000 tosses:

coin_toss = round( runif(1000) )

Let’s break it down. The runif function generates random numbers that are uniformly distributed between some lower and upper bound. The default bounds (which go unstated) are 0 and 1 … just what we need. Here, I’ve asked runif to generate 1000 random numbers between 0 and 1. I then use the round function to round these numbers to the nearest whole number. The result is a random series of 0’s and 1’s. Let 0 be tails and 1 be heads. Presto, you have a simulated coin toss. The results look like this:

coin_toss
 [1] 0 0 1 1 0 1 1 1 1 0 …

Modern computers are so incredibly fast that you can simulate millions of coin tosses in a fraction of a second. The more intensive part, however, is counting the results.

Suppose that we want to measure the probability of tails following 2 heads. In R, the best method I’ve found is to first convert the coin toss vector to a string of characters, and then use the stringr package to count the occurrence of different events.

First, we use the paste function to convert our coin_toss vector to a single character string:

coin_string = paste(coin_toss, collapse="")

That gives you a character string of 0’s and 1’s:

coin_string
[1] "0011011110…"

Now suppose we want to find the probability that 2 heads are followed by a tails. We start by counting the occurrences of HHH (heads after 2 heads). In our binary system, that’s 111. We use the str_count function from the stringr package, with a lookahead pattern so that overlapping occurrences of 111 are counted:

library(stringr)

# count occurrence of heads after 2 heads
n_heads = str_count(coin_string, paste0("(?=","111",")"))

Next we count the occurrences of HHT. In our binary system, that’s 110:

# count occurrence of tails after 2 heads
n_tails = str_count(coin_string, paste0("(?=","110",")"))

The observed probability of tails following 2 heads is then:

p_tails = n_tails / (n_tails + n_heads)

If you’re simulating the coin toss series once, the code above will do the job. (You can download it here.) But if you want to run the simulation repeatedly (to measure the average probability across many iterations), you’ll need another tool.

To create the data shown in Figure 4, I wrote C++ code to simulate a coin toss and count the occurrence of different outcomes. You can download the code at GitHub. I simulated each coin-toss window 40,000 times and then measured the average probability across all iterations.
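That repeated-simulation loop can be sketched briefly (in Python here rather than C++, and with far fewer iterations than the 40,000 used for Figure 4, to keep it quick):

```python
import random
import re

def p_tails_after_hh(n_tosses):
    """Observed probability of tails after 2 heads in one simulated series."""
    s = "".join(str(random.randint(0, 1)) for _ in range(n_tosses))
    n_tails = len(re.findall("(?=110)", s))  # occurrences of HHT
    n_heads = len(re.findall("(?=111)", s))  # occurrences of HHH
    return n_tails / (n_tails + n_heads)

# Average the observed probability across many simulated series
iters = 200
avg_p = sum(p_tails_after_hh(1000) for _ in range(iters)) / iters
```

With a fair coin and series this long, the average hovers close to 0.5.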

Notes

[Cover image: Pixabay]

1. Here’s how novelist Cory Doctorow summarizes the behavioral economics ‘revolution’:

Tellingly, the most exciting development in economics of the past 50 years is “behavioral economics” – a subdiscipline whose (excellent) innovation was to check to see whether people actually act the way that economists’ models predict they will.

(they don’t)

2. What behavioral economists are doing is essentially falsifying (over and over) the core neoclassical model of human behavior. To understand the response, it’s instructive to look at what happened in other fields when core models have failed.

Take physics. In the late-19th century, most physicists thought that light traveled through an invisible substance called ‘aether’ — a kind of background fabric that permeated all of space. Although invisible, the aether had a simple consequence for how light ought to behave. Since the Earth presumably traveled through the aether as it orbited the Sun, the speed of light on Earth ought to vary by direction.

In 1887, Albert Michelson and Edward Morley went looking for this directional variation in the speed of light. They found no evidence for it. Instead, light appeared to have constant speed in all directions. Confusion ensued.

In 1905, Albert Einstein resolved the problem with his theory of relativity. Einstein assumed that light needed no transmission medium, and that its speed was a universal constant for all observers. After Einstein, physicists abandoned the idea of ‘aether’ and moved on to better theories.

In economics, the response to falsifying evidence has been quite different. Instead of abandoning their rational model of man, economists ensconced it as a kind of ‘human aether’ — an invisible template used to judge how humans ought to behave. When humans don’t behave as the model predicts, economists label the behavior a ‘bias’.

3. Seemingly ‘flawed’ behavior can also signal that the organism isn’t controlling its own actions. The parasite Toxoplasma gondii, for instance, turns mice into cat-seeking robots. That’s suicidal for the mouse, but good for the parasite, which needs to get inside a cat to reproduce.
4. The mathematics tell us that ‘true’ convergence takes infinitely long. That is, you need to toss a coin an infinite number of times before the observed probability of heads will be exactly 50%. Anything less than that and the observed probability of heads will differ (however slightly) from the innate probability. For example, after 100,000 tosses my simulated coin returns heads at a rate of 49.992% — a 0.008% deviation from the innate probability.
5. OK, this is an oversimplification. What actually happens is that a person with intuition x reproduces more than the person with intuition y. And so intuition x spreads and evolves.
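The convergence described in note 4 is easy to check with a short simulation (a Python sketch, not the author's code): as the number of tosses grows, the observed heads frequency creeps toward 0.5 but almost never lands on it exactly.

```python
import random

# Observed heads frequency for growing sample sizes; the deviation
# from 0.5 shrinks roughly like 1 / sqrt(n) but rarely reaches zero
freqs = {}
for n in [100, 10_000, 1_000_000]:
    heads = sum(random.randint(0, 1) for _ in range(n))
    freqs[n] = heads / n
```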

The Empire Depends On Psychological Compartmentalization

Tags


Britain’s High Court has granted the US government limited permission to appeal its extradition case against WikiLeaks founder Julian Assange, meaning that the acclaimed journalist will continue to languish in prison for exposing US war crimes while the appeals process plays out.

If the western media were what they purport to be, every member of the public would be acutely aware of the fact that a journalist is being imprisoned by the most powerful government on earth for exposing inconvenient facts about its war machine. Because the western media are propaganda institutions designed to protect the powerful, this fact is far from the forefront of public attention. Most people are more aware of the smears about Assange being a Russian agent or a rapist than they are of his victimization by a tyrannical assault on world press freedoms.


There is a whole other world happening just below the surface of mainstream public attention. The membrane of celebrities and entertainment and partisan bickering overlays public perception of the world almost the entire time, only occasionally being disrupted by short-lived blasts of dissonance piercing through the fog.

You’ll be reading about what Ronnie Republican said to Debbie Democrat, and it will feel so real and normal, then all of a sudden you’re getting blasted in the face with talk of Jeffrey Epstein getting suicided in prison amid reported ties to government-run sexual blackmail operations using minors for the purpose of controlling society’s leading influencers. Then it gets quickly memory-holed, the membrane returns, and it’s back to Ronnie and Debbie once again.

But before the fog returns there’s always a short-lived moment of “What?? Huh??” as you try to re-orient yourself to reality in light of the new information you just received. What you just saw is completely irreconcilable with your current view of the world, the one you’ve been fed piece-by-piece by school and mass media and internet algorithms. The clash between your comfortable existing worldview and the new information you just received causes a kind of psychological discomfort known as cognitive dissonance, which makes it hard to hold them both at the same time.

From there, psychological compartmentalization takes over. Compartmentalizing is when we mentally separate information or experience from our existing understanding of ourselves and our world and kind of sweep it under the carpet so we don’t experience cognitive dissonance anymore. We don’t delete it; the information is still there to be accessed if we want to, but it’s placed in a separate file and treated as though it exists in a parallel alternate reality.


Compartmentalization sometimes comes into play when a wife discovers that her husband has been sexually molesting their child; she files the information away into a separate container, because the way that information would shatter her world if she held onto it is too frightening and the cognitive dissonance of holding both worlds at the same time too uncomfortable.

Compartmentalization comes in when we’re scrolling through our news feed and see something about the horrors that are being unleashed upon Yemen with the help of our government; it doesn’t square with the model of the world we’ve been trained to hold in our minds, so we dissociate it from our model.

It comes in when we remember that we were lied to about Iraq. It comes in when we think about what humankind’s way of living on this planet is doing to our ecosystem. It comes in when we think about the fact that nuclear weapons are a thing and that cold war tensions are escalating. It comes in when we are reminded that our government is participating in the torture and imprisonment of a journalist whose only crime was trying to bring the truth out from the locked files it’s been hidden away in so we can make it a part of our worldview.

The cost of coming out of compartmentalization into a worldview of integrity is cognitive dissonance and the discomfort of constructing a new conceptual framework for reality. But the reward is having a perspective that is based on truth instead of lies.

Compartmentalization is a weapon of the propagandists. It’s a glitch in our cognitive processing which means they don’t have to work as hard to keep us living in a lie-based reality tunnel; all they have to do is construct our perception of reality for us, and from there our own psychological defense systems will do the work for them.

The oligarchic empire which rules our world is not truly hidden from view; we see signs of it all the time, it’s just too uncomfortable for most of us to look at. The monster isn’t hiding under the bed, it’s staring us right in the face and we’re looking all over the room except where it’s standing because to meet its gaze would obliterate our world.

But obliterate it we must. Lie-based worldviews are what hold the empire together; the powerful spend so much energy propagandizing us because they need to in order to retain power. Without it, we could realize that they are unleashing immense evils upon our world, and that there are a whole lot more of us than there are of them.

And this is what we must do if our species is to survive into the future. We must find a way to move past the cognitive dissonance from a lie-based way of living into a truth-based way of living, and become a truth-based species with a truth-based relationship with each other and with our ecosystem. If we keep hiding from reality, we’ll compartmentalize ourselves right out of existence.

_________________________

My work is entirely reader-supported, so if you enjoyed this piece please consider sharing it around, following me on Facebook, Twitter, Soundcloud or YouTube, or throwing some money into my tip jar on Ko-fi, Patreon or Paypal. If you want to read more you can buy my books. The best way to make sure you see the stuff I publish is to subscribe to the mailing list at my website or on Substack, which will get you an email notification for everything I publish. Everyone, racist platforms excluded, has my permission to republish, use or translate any part of this work (or anything else I’ve written) in any way they like free of charge. For more info on who I am, where I stand, and what I’m trying to do with this platform, click here.

Bitcoin donations:1Ac7PCQXoQoLA9Sh8fhAgiU3PHA2EX5Zm2

The Anxieties Of A Beautiful Day

Tags

That mysterious, terrible anxiety felt on a beautiful day–whether that of the spring, summer, fall, or winter–is perhaps better understood when we realize that such anxiety is not one, but many anxieties. To wit, that anxiety is:

The anxiety of not knowing whether this beautiful day is not the harbinger of a terrible day; for do not all accounts of disaster begin by noting the innocent beauty of an ‘ordinary day like any other’?;

The anxiety of despairing that this beautiful day is not being ‘lived,’ ‘used,’ ‘experienced,’ ‘utilized,’ or ‘seized’ ‘well enough.’ This is lent an especially melancholic sense when we feel others are ‘outperforming’ us on their said ‘usage’ of the day–a gleaning obtained from their public proclamations (these days, on social media) of such feats. We are anxious because we sense that we are spending this day ‘wrong,’ that we could be spending it in some ‘better fashion’;

The anxiety of not knowing whether this day is the last of those like it, never to be seen again, and that time is inexorably running out on it even as we grasp and seize at its offerings;

The despair at the memory of many days like this, in days gone by, that were not then realized for being the beautiful days they were; perhaps this day is similarly condemned.

The beautiful day is at hand; that much is certain. But all else is still uncertain and provisional, and so long as that is the case, we are anxious.

Political Partisanship Is A Propaganda Lubricant

Tags


Studying the unfolding of the new mainstream UFO narrative has been very interesting, because it highlights the dynamics I always talk about in a fresh light which makes them easier to point to.

One theme that keeps resurfacing is people marvelling at how low-key the public response to the whole thing has been. One might expect the US government officially stating that the military has been frequently encountering strange unknown aircraft of unthinkable technological advancement would rank a little higher in public interest, but so far that really hasn’t been the case.

I’ve seen numerous attempts to explain the unexpectedly apathetic response to the fact that UFOs are in the news every day now, the most common being that people have so much on their plate these days that even the possibility of extraterrestrials buzzing US navy ships just doesn’t rank high on their priorities. Others suggest that it’s such an obvious military psyop that the public is dismissive of the story.

Neither of these offerings is particularly convincing in my opinion. We see vapid nonsense attracting mountains of public interest every day, so the idea that people have no mental bandwidth for this story doesn’t hold water. And while the belief that the UFO narrative looks like some kind of military psyop is widely accepted among the sort of people who’d be likely to read this article (I’ve been saying it for a while now myself), skepticism toward suspicious US government claims is not a very widespread posture among the mainstream public.

It seems pretty clear to me that the reason there’s not as much public interest in this story as you’d expect is because it doesn’t fit neatly into any of the little boxes that people have been trained to file news into in this society. There’s no partisan angle to it, so it doesn’t appeal to any of the egoic constructs to which the general public tends to hook incendiary news stories.

The likelihood of a news story going viral in our society has little to do with its newsworthiness, its unusualness, or even whether or not it is factually accurate. The single most likely factor in whether or not a news story will have mass appeal is whether it appears to validate the worldview of one of the two mainstream political factions. This is why the mainstream media have been deliberately sowing partisan divisiveness and marketing toward increasingly distant partisan echo chambers instead of just reporting the news; they have an obvious profit motive to do so, because tickling people’s egos with hate porn and illusory validation is the best way to get clicks and generate ad revenue.

This is why those who promoted the theory that Trump was a secret Russian agent saw their ratings soar for years before it was conclusively discredited by the very Special Counsel they’d been literally singing Christmas carols and lighting prayer candles to until then. There is more evidence that space aliens are cruising around in Earth’s atmosphere than there ever was that Vladimir Putin had covertly infiltrated the highest levels of the US government, but because it inflamed liberal passions and made them feel like their partisan worldview was about to be vindicated any minute, it sold like crack.

You can immediately tell if something is going to go viral by how politically tinged it is and how mainstream the appeal of those politics are. A story about how schools want to make your kids transgender. A popular conservative acting like an idiot. A black Trump supporter saying Trump isn’t racist. Marjorie Taylor Greene doing literally anything. Take it too far outside the mainstream, like the US government getting caught tampering with an OPCW investigation in Syria for example, and you won’t see a ton of clicks, but if it appeals to tens of millions of mainstream partisans you will.

In a society that’s enslaved to egoic consciousness as ours is, the things that generate the most public interest will be those which flatter or infuriate common egoic constructs. This is not unique to politics; advertisers have raked in vast fortunes by associating products with common cultural mind viruses like body image issues and personal inadequacy, and TV show hosts like Jerry Springer and Maury Povich figured out decades ago that you can attract massive ratings by letting people feel smug and superior at the sight of poor and uneducated guests acting out emotionally.

To make something go viral, it needs to appeal to the ego. Advertisers understand this. Media executives understand this. Propagandists understand this.


Creating big psychological identity structures out of our politics makes the job of the propagandists so very much easier; it’s like a lubricant which lets mass-scale psyops glide smoothly into public consciousness. From there it’s a very easy task to get people hating Russia or China for this or that partisan reason, or to get people believing Trump or Biden are helping the American people despite their both continuing and expanding the same murderous and oppressive status quo of their predecessors.

This is why the partisan divide is the most heated and contentious it’s ever been, while the actual behavior of each mainstream party when it’s in power brings in only the most superficial of changes. The oligarchs who own the political/media class desire the continuation of the status quo upon which they have built their empire, but they also want to keep the public as plugged in as possible to the partisan perspectives which facilitate the propaganda that cages our minds.

The solution to this, on an individual level, is to dismantle any egoic attachment you might have to either of the mainstream political factions which preserve the status quo. This includes any attachment to the phony populism of progressive Democrats, and it includes any attachment to the phony populism of Trumpian Republicans. These factions within the mainstream factions are themselves propaganda constructs which will never be permitted to advance any agenda that isn’t desired by the oligarchic empire; they serve only to keep people who would be inclined to reject mainstream politics plugged in to mainstream politics.

And of course the ultimate solution to this problem is for humanity to awaken from the ego. All propaganda relies on egoic hooks in public consciousness to circulate itself, so if humanity begins dropping its habit of creating psychological identity structures altogether (which it looks like it might), we will become harder and harder to propagandize. Since humanity’s collective problems ultimately boil down to the fact that sociopaths manipulate our minds at mass scale, such a transformation would make a healthy new world not just possible but inevitable.


Free Speech For Me, Not You

Tags

They say that Americans love two things: freedom … and guns. The trouble with guns is obvious. The trouble with freedom is more subtle, and boils down to doublespeak.

When a good old boy defends his ‘freedom’, there’s a good chance he has a hidden agenda. He doesn’t want freedom for everyone. He wants ‘freedom for himself, not you’. I call this sentiment freedom tribalism. It’s something that, given humanity’s evolutionary heritage, is predictable. It’s also something that has gotten worse over the last few decades. And that brings me to the topic of this essay: free speech.

When the talking heads on Fox News advocate ‘free speech’, they’re using doublespeak. What they actually want is free speech for their own tribe … and censorship for everyone else. This free-speech tribalism extends far beyond the swill of cable news. It’s clearly visible (and growing worse) in the pantheon of high thought — the US Supreme Court.

To make sense of this free-speech tribalism, we need to reframe how we understand ‘free speech’. And that means reconsidering the idea of ‘freedom’ itself. Behind freedom’s virtuous ring lies a dark underbelly: power. Free-speech tribalism, I’ll argue, amounts to a power-struggle between groups — a struggle to broadcast your tribe’s ideas and censor those of the others. When you look closely at this struggle, it becomes clear that ‘free speech’ is not universally virtuous. In modern America, free speech has become a kind of slavery.

And with those incendiary words, let’s jump into the free-speech fire.

Fire!

FIRE! Fire, fire… fire. Now you’ve heard it. Not shouted in a crowded theatre, admittedly, … but the point is made.

That was the inimitable Christopher Hitchens addressing the elephant in every free-speech room: shouting fire in a crowded theatre. The metaphor has come to symbolize speech that is so ‘dangerous’ it must be censored. It’s a fair example, since people have actually died from false shouts of fire in crowded theatres.1 But more often than not, the shouting-fire metaphor is used to justify censorship of a more dubious kind.

Woodrow Wilson got the ball rolling during World War I. After declaring war on Germany, Wilson embarked on a campaign to silence internal dissent. Among the thousands of Americans who were prosecuted was Charles Schenck, a socialist convicted of printing an anti-draft leaflet. His case went to the Supreme Court. Writing to uphold the conviction, Justice Oliver Wendell Holmes claimed that war critics like Schenck were, in effect, falsely shouting fire:

The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic. … The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent. It is a question of proximity and degree.

(Oliver Wendell Holmes, Schenck v. United States)

Holmes’ decision set off a long debate about what types of speech represent a ‘clear and present danger’. I won’t wade into the details. Instead, what I find more interesting is the language that is missing here. Holmes speaks about ‘free speech’, ‘danger’, and ‘evils’. But what is really at stake is the government’s power.

Holmes admits as much in a less-cited part of his ruling. Schenck’s anti-draft leaflet was dangerous, Holmes noted, precisely because it undermined the government’s power to make war:

It denied the [government’s] power to send our citizens away to foreign shores to shoot up the people of other lands …

(Oliver Wendell Holmes, Schenck v. United States)

So there you have it. The idea of ‘falsely shouting fire’ was used to bolster the government’s power to wage war.

Free speech for views you don’t like

The lesson from the Schenck case is that reasonable forms of censorship inevitably get used to justify more dubious types of speech suppression. To combat this creeping censorship, free-speech advocates like Noam Chomsky argue that we must do something that feels reprehensible — defend freedom of speech for views we despise:

[I]f you believe in freedom of speech, you believe in freedom of speech for views you don’t like. Goebbels was in favour of freedom of speech for views he liked, right? So was Stalin. If you’re in favour of freedom of speech, that means you’re in favour of freedom of speech precisely for views you despise. Otherwise you’re not in favour of freedom of speech.

(Noam Chomsky in Manufacturing Consent)

Chomsky’s position is elegant, principled and more than just words. It’s a maxim he lives by. And that has gotten him into all sorts of trouble. You can imagine the uproar, for instance, when Chomsky defended the free speech of historian Robert Faurisson, a Holocaust denier. More recently (and to the delight of the far right), Chomsky drew leftist ire for signing a Harper’s editorial warning of a “stifling atmosphere” in modern America that was “narrow[ing] the boundaries of what can be said without the threat of reprisal.”

In the face of this criticism, however, Chomsky remains unfazed. He is a tireless advocate for the right to espouse ideas he finds despicable.

Free-speech tribalism

If everyone was as principled as Chomsky, the world would probably be a better place. But the reality is that Chomsky is an outlier. Most people find it difficult to separate the right to free speech from the speech itself. Rather than criticize this tendency, though, we should try to understand it. And that means studying ‘free speech’ in the context of human evolution.

If evolutionary biologists David Sloan Wilson and E.O. Wilson are correct, human evolution has been strongly shaped by ‘group selection’. That means we evolved as a social species that competed in groups. The result is that humans have an instinct for group cohesion in the face of competition — an us-vs-them mentality. In other words, humans are tribal.

When it comes to ‘free speech’, this tribalism plays out predictably. Humans behave exactly the way Chomsky says we should not. We support free speech for ideas we like, and censorship for ideas we dislike.

Take, as an example, Donald Trump. After Trump delivered his incendiary speech that stoked the storming of the Capitol, Twitter decided they’d had enough. They permanently banned Trump from their platform. How did Americans feel about this ban? Support fell predictably along partisan lines (Figure 1). Democrats overwhelmingly supported Twitter’s Trump ban. Republicans overwhelmingly opposed it. This tribal divide isn’t rocket science. When the shit hits the fan, instincts trump abstract principles.

Figure 1: Partisan support for Twitter’s Trump ban. Source: Pew Research Center.

Commentary on the Trump ban focused mostly on the content of his speech. Was he stoking ‘imminent lawlessness’? Or was he [cue incredulous cough] ‘defending democracy’? These are important questions. But what I find more interesting is what seemed to go undiscussed.

It’s one thing for a President to silence his critics. That’s state censorship. It’s another thing for critics to silence a President. That’s called accountability. The difference has nothing to do with the content of the speech. Instead, it comes down to power. When the weak censor the powerful, it’s different than when the powerful censor the weak.

Granted, Twitter CEO Jack Dorsey is hardly ‘the weak’. But the principle remains. Power dynamics should affect how we interpret ‘censorship’. When the government censors an obscure Neo-Nazi, that’s probably bad. But what if Nazis run the government? Should citizens let the Nazi regime broadcast propaganda on the grounds that it is ‘free speech’?

If so, George Orwell was right. Freedom is slavery.

Free-speech tribalism on the US Supreme Court

Back to free-speech tribalism. On the individual level, the game is about free speech for me, not you. But at the group level, it’s about us versus them. Free speech for my tribe, not your tribe.

Since Americans’ right to free speech is written in the constitution, free-speech tribalism has played out most prominently in the US Supreme Court — the institution that determines how the constitution is interpreted. Of course, Supreme Court justices all claim to believe in free speech for everyone. But their behavior tells a different story.

In a landmark study, Lee Epstein, Andrew Martin and Kevin Quinn tracked how US Supreme Court justices ruled on cases concerning free speech. Importantly, Epstein and colleagues distinguished between two factors:

1. the partisanship of the justices;
2. the political spectrum of the speech on trial.

Figure 2 shows Epstein’s results — a quantification of 6 decades of free-speech rulings on the Supreme Court.

Figure 2: Partisan support for free speech on the US Supreme Court. I’ve plotted here data from Epstein, Martin & Quinn’s study of Supreme Court rulings on free speech. The horizontal axis shows the court’s chief justice and their associated tenure. The vertical axis shows the percentage of rulings supporting free speech. The panels differentiate between the type of speech being tried — coded as either ‘liberal’ or ‘conservative’. Colored lines show the percentage of rulings supporting free speech, differentiated by the party of the president who appointed the corresponding justice.

It’s clear, from Figure 2, that there is a tribal game afoot. Let’s spell it out. If Supreme Court justices were following Chomsky’s ideal (free speech for ideas you like and those you despise) then the red and blue lines in Figure 2 would overlap. Democratic and Republican justices would support free speech to the same degree, regardless of the content of the speech. That clearly doesn’t happen.

Instead, Supreme Court justices are following the ‘tribal ideal’ — free speech for ideas they like … censorship for ideas they despise. Hence Democratic justices support liberal speech more than Republican justices (Figure 2, left). And Republican justices support conservative speech more than Democratic justices (Figure 2, right).

Given humanity’s evolutionary background, this tribalism is not surprising. What’s interesting, though, is that Supreme Court tribalism hasn’t been constant. Instead, it’s grown with time.

In the Warren court of the 1950s and 1960s, there was remarkably little free-speech tribalism. Justices of both parties overwhelmingly supported free speech of all kinds, with only a slight preference for the speech of their own tribe. Today, that’s changed. In the Roberts court of the 21st century, not only have justices of both parties become less tolerant of free speech in general, there is now a glaring tribal bias. Democratic justices support liberal speech far more than Republican justices. And Republican justices support conservative speech far more than Democratic justices.

It is tempting to blame both political parties for this tribalistic turn. But the reality is that the blame rests overwhelmingly on Republicans. Figure 3 tells the story. I’ve plotted here the partisan bias in support for free speech. This is the difference in support for speech made by ‘your tribe’ versus support for speech made by the ‘other tribe’. Let’s start with the Democratic tribe. While Democratic justices have become less tolerant of free speech in general (Fig. 2), they have not become more biased. Instead, for the last 6 decades, Democratic justices have had a slight but constant bias for liberal speech.

Figure 3: Partisan bias in support for free speech on the US Supreme Court. I’ve plotted here data from Epstein, Martin & Quinn’s study of Supreme Court rulings on free speech. The horizontal axis shows the court’s chief justice and their associated tenure. The vertical axis shows the partisan bias in justices’ rulings. For Democratic justices, this bias is the difference between their support for ‘liberal speech’ vs. ‘conservative speech’. For Republican justices, it is the reverse.

Now to the Republican tribe, where the story is quite different. Once less biased than Democrats (during the Warren court), Republican justices now show overwhelming bias for conservative speech. In the Roberts court, Republican justices support conservative speech over liberal speech by a whopping 44 percentage points.

Free speech for us, not them.
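The bias measure behind Figure 3 is simple arithmetic: a group's support for its own tribe's speech minus its support for the other tribe's speech. Here is a minimal Python sketch of that calculation — note that the percentages are made-up illustrations, not Epstein, Martin & Quinn's actual data:

```python
# Sketch of the partisan-bias measure described in the text.
# All numbers below are hypothetical, for illustration only.

def partisan_bias(own_tribe_support, other_tribe_support):
    """Difference (in percentage points) between a group's support for
    its own tribe's speech and its support for the other tribe's speech."""
    return own_tribe_support - other_tribe_support

# Democratic justices: support for liberal vs. conservative speech
dem_bias = partisan_bias(own_tribe_support=55, other_tribe_support=48)

# Republican justices: support for conservative vs. liberal speech
rep_bias = partisan_bias(own_tribe_support=65, other_tribe_support=21)

print(f"Democratic bias: {dem_bias} points")  # 7
print(f"Republican bias: {rep_bias} points")  # 44
```

A positive bias means a group favors its own tribe's speech; a bias of zero would correspond to Chomsky's ideal, in which the content of the speech doesn't matter.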

Republican bias for ‘conservative’ speech isn’t the only way that the US Supreme Court has become more tribal. The court has also become more biased towards the business tribe.

The most seismic case in this pro-business shift was Citizens United. In this 2010 decision, the Supreme Court ruled it unconstitutional to limit corporate spending on political campaigns. The majority’s reasoning was simple:

1. people have free speech;
2. corporations are legal persons;
3. therefore, corporations have free speech.

Citizens United opened the floodgates of corporate electioneering. The reality, though, was that this case was part of a larger pro-business shift on the Supreme Court — a shift that coincided with a reversal of fortune for US corporations.

Figure 4 tells the story. From the 1950s to the 1990s, US corporations had a problem. Although they had no trouble making profits in absolute terms, the profit share of the pie tended to decrease. (See the red curve in Fig. 4.) Then came a stunning reversal of fortune. From the mid-1990s onward, corporate profits boomed, eating up an ever-increasing share of the US income pie.

Figure 4: ‘Free speech’ for business is good for profits. The blue curve shows the proportion of US Supreme Court cases involving business ‘free speech’ that were settled in favor of business. Data is from Epstein, Martin & Quinn, and is averaged over the tenure of chief justices. The red line shows the smoothed trend in US corporate profits as a share of national income.

This reversal of fortune coincided with a change in the Supreme Court’s attitudes towards ‘free speech’. Until the 1990s, the court grew increasingly hostile to ‘free speech’ for business. As a result, the ‘win rate’ for business free speech declined steadily. Then came the Roberts court, which brought relief for the business tribe. Over the last decade and a half, the Roberts court has sided with business in a whopping 80% of free-speech cases.

Unsurprisingly, in this pro-business environment, profits boomed. ‘Free speech’ for corporations means wage slavery for workers.2

The trouble with ‘freedom’

The triumph of business propaganda (and the corresponding boom in corporate profits) shouts at us to reconsider some basic moral principles. Ask yourself — is ‘free speech’ universally virtuous? I think the answer has to be no.

The problem with ‘free speech’ boils down to a basic contradiction in the idea of ‘freedom’ itself. In a social world, freedom for everyone is impossible. The reason is simple. Freedom has two dimensions: ‘freedom to’ and ‘freedom from’. These two dimensions are always in opposition. For example:

• If you are free to shout racist slurs, your neighbour cannot be free from such slurs.
• If you are free to smoke anywhere, your friends cannot be free from second-hand smoke.
• If you are free to drive through a red light, fellow motorists cannot be free from T-bone collisions.

You get the picture. There are two sides to being ‘free’, and they are always in mutual conflict. When you think about this conflict, you realize that ‘freedom’ always involves power:

• If I am ‘free to’ shout racist slurs, I have the power to suppress your ‘freedom from’ such slurs.
• If I am ‘free from’ hearing racist slurs, I have the power to suppress your ‘freedom to’ shout racist speech.

When we look at this power behind ‘freedom’, we realize that ‘freedom’ cannot be universally virtuous. One man’s freedom is always another man’s chains.

Resolving conflict with property rights

If the two sides of freedom are always in opposition, we need a way to resolve the ensuing conflict. In capitalist societies, the main way we do this is by defining property rights. These are legal principles that delineate which type of freedom wins out, and when and where it does so.

A key purpose of property rights is to restrict ‘freedom to’. In other words, property rights restrict ‘free speech’. For example, if someone enters my property and shouts racist slurs, I don’t have to listen. Instead, my property rights give me the power to have the culprit removed by the cops. On my property, my ‘freedom from’ trumps your ‘freedom to’. In other words, my property gives me the power to censor.3

Is this power a bad thing? Probably not, at least in principle. To see why, imagine a world in which ‘freedom to’ always trumped ‘freedom from’. In this world, if someone wanted to insult you in your living room, you’d have to let them. It would be an Orwellian nightmare in which solitude was impossible. So having a space where ‘freedom from’ trumps ‘freedom to’ is undoubtedly a good thing.

That said, when we scale up private property, the power to censor becomes more dubious. Suppose that instead of owning a house, I own a corporation. This is a very different type of property. Rather than owning space, I own an institution — a set of human relations. With this more expansive type of property, I suddenly have much more power to censor. If my employees wanted to unionize, for instance, I could ban ‘union propaganda’. I could go further and ban any speech critical of me, the supreme leader. It would be a Stalinist dream … for me. For my employees, it would be a totalitarian nightmare.

Let’s flip sides now and look at the other side of property. While private property suppresses ‘freedom to’, public property suppresses ‘freedom from’. On public property, my ‘freedom to’ speak trumps your ‘freedom from’ my speech. So when I stand on a street corner, I am free to shout racist slurs. Passersby must endure my slander. In other words, on the street, I have the power to broadcast.

The street-corner ability to broadcast is, admittedly, a weak form of power. Everyone else has the same power, so they can drown me out if they want. (This is the principle of public protest.)4 But notice what happens if we treat the ‘public domain’ more broadly, not as the street corner but as the space between corporations. In a world in which corporations have free speech, there is no respite from corporate propaganda. That is the world in which freedom-loving Americans now live.

Freedom is just another word for …

The problem with the debate about free speech boils down to the language of ‘freedom’ itself. When ‘freedom’ becomes synonymous with virtue, the debate becomes vacuous. Saying “I stand for freedom” is like saying “I stand for happiness.” Who’s going to argue with you?

Okay, I’ll argue with you. If murdering people makes me happy, my ‘happiness’ is not virtuous. It is sadistic. Likewise, if I am ‘free’ to murder people I dislike, my ‘freedom’ is not virtuous. It is depraved.

The same goes for ‘free speech’. It is virtuous in some contexts, but not others. Unfortunately, there is no simple way to determine when and where ‘free speech’ is good, and when and where it is bad. Like so many things in life, it is a matter of opinion. But a useful tool is to look at the underside of ‘freedom’. When you see the words ‘free speech’, substitute the language of power: the ‘power to broadcast’ and the ‘power to censor’.

With this revised language, the virtue of ‘free speech’ becomes more ambiguous. If the substitution gives you a bad feeling, that’s a sign there is doublespeak at work. Sometimes freedom really does mean slavery.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.

Keep me up to date

Sources and methods

Data for US corporate profits is from the Bureau of Economic Analysis, Table 1.12. I’ve taken corporate profits before tax (without IVA and CCAdj) and divided by national income.
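As a sketch, this calculation is just a ratio. The numbers below are illustrative placeholders, not actual BEA figures:

```python
# Sketch of the profit-share calculation described above.
# The dollar amounts are hypothetical, not actual BEA Table 1.12 data.

corporate_profits_before_tax = 2000.0  # billions of dollars (hypothetical)
national_income = 18000.0              # billions of dollars (hypothetical)

# Profit share of national income, expressed as a percentage
profit_share = corporate_profits_before_tax / national_income * 100
print(f"Corporate profit share of national income: {profit_share:.1f}%")
```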

Notes

1. Some shouting-fire examples. In 1902, a crowd at the Shiloh Baptist Church (in Birmingham, Alabama) misheard ‘fight!’ as ‘fire!’ The ensuing stampede killed over 100 people. In 1913, the Italian Hall in Calumet, Michigan was filled with striking miners. Someone shouted ‘fire!’, causing a stampede that killed 73 people. The miners suspected that the perpetrator was a strikebreaker, but no one was ever charged.
2. Mainstream economics is mute about how profits relate to the law. That’s probably because if you study the law, you realize that it (not ‘productivity’) is the foundation of corporate profits. A century ago, heterodox economist John R. Commons explored this connection in his book Legal Foundations of Capitalism. He was largely ignored.
3. The word ‘censorship’ has, for good reason, acquired a negative connotation. But it seems clear that some forms of censorship are good — perhaps even essential to maintaining a healthy dialogue. Rather than ‘censorship’, a better word for this act is ‘moderation’. Social critic Keith Spencer proposes a rule of thumb: ‘unmoderated online forums always degenerate into fascism’. This is hyperbole, but it probably contains a kernel of truth. When there is no moderation, expect a drift not toward high philosophy, but toward base-level urges.
4. A note on free-speech tribalism. I once met a Jordan Peterson fan who was incensed that Peterson’s speaking event (in Toronto) was besieged by protesters. “Let Peterson have free speech!” he demanded. The Peterson acolyte didn’t seem to understand that he was advocating censorship … for the protesters. No matter, they weren’t in his tribe.

Commons, J. R. (1924). Legal foundations of capitalism. London: Transaction Publishers.

Epstein, L., Martin, A. D., & Quinn, K. (2018). 6+ decades of freedom of expression in the US Supreme Court. Preprint, 1–17.

What Predicts Professional Philosophers’ Views? (updated)

Tags

A new study looks at correlations between professional philosophers’ philosophical views and their psychological traits, religious beliefs, political views, demographic information, and other characteristics.

[Josef Albers – untitled from “Formulation: Articulation”]

The research was carried out by David B. Yaden (psychology, Johns Hopkins University) and Derek E. Anderson (philosophy, Boston University), and the results have recently been published as “The psychology of philosophy: Associating philosophical views with psychological traits in professional philosophers,” in Philosophical Psychology. They write:

Our interest was in how philosophical views (not merely intuitions about philosophical thought experiments) relate to psychological traits in professional philosophers… we aim to identify associations between psychological traits and philosophical views for further replication and study.

Their method involved asking philosophers questions based on the PhilPapers Survey and administering measures for “personality, well-being, mental health, numeracy, varieties of life experiences, questions related to public education of philosophy, and demographics.” Their results are based on a sample of 314 respondents (264 of whom were philosophy professors, the rest post-docs and graduate students), about whom they gathered some background information, such as gender, race, political views, academic affiliation, and philosophical tradition.

What did they learn? Below are some of their findings (for which they used a “conservative criterion” for statistical significance “to increase the likelihood that the reported correlations would replicate”).
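The paper does not specify exactly which conservative criterion was used, but a common choice when testing many correlations at once is a Bonferroni correction: divide the usual significance threshold by the number of tests. A minimal sketch, with hypothetical p-values:

```python
# Bonferroni correction: a common conservative criterion for significance
# when running many tests. (Yaden & Anderson's exact procedure may differ;
# the p-values below are hypothetical.)

def bonferroni_significant(p_values, alpha=0.05):
    """Return a flag per test: True if significant at the
    Bonferroni-adjusted threshold alpha / (number of tests)."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

p_values = [0.001, 0.03, 0.0004, 0.2]  # hypothetical p-values from 4 tests
flags = bonferroni_significant(p_values)
print(flags)  # threshold = 0.05 / 4 = 0.0125 -> [True, False, True, False]
```

Note that 0.03 would pass an uncorrected 0.05 threshold but fails the corrected one — this is precisely how a conservative criterion reduces the number of reported (and potentially non-replicating) correlations.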

Some of their results were negative, or findings of a lack of correlation:

• Age, gender, relationship status, income, ethnicity, and professional status yielded no significant correlations with particular philosophical views.
• None of the five-factor model’s personality traits (openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism) were associated with specific philosophical views.
• Neither exercise nor meditation was associated with any views.
• “Anti-naturalism” (a cluster of beliefs including libertarian notions of free will, nonphysicalism about the mind, belief in God, non-naturalism, belief in the metaphysical possibility of philosophical zombies, and the further-fact view of personal identity) is largely unassociated with particular personality traits or well-being.

But they did find some positive correlations:

• Theism is associated with agreeableness.
• Hard determinism is associated with lower life satisfaction and higher depression/anxiety.
• Consequentialism, realism, physicalism, and correspondence theories of truth are associated with greater numerical interest.
• Believing philosophical zombies are metaphysically possible is associated with conscientiousness.
• Theism and idealism are associated with having had a transformative or self-transcendent experience.
• Accepting non-classical logic is associated with having had a self-transcendent experience.
• Non-realism regarding aesthetics and morality is associated with having used psychoactive substances such as psychedelics and marijuana.
• Contextualism about knowledge claims is associated with supporting more public education about philosophy.
• Naturalism is associated with the view that projects such as this one by Yaden and Anderson have philosophical value.

The authors also found evidence of correlations between being an analytic philosopher and supporting certain philosophical views, such as the correspondence theory of truth, realism about the external world, invariantism about knowledge claims, scientific realism, and that one ought to pull the switch (sacrifice one person to save five others) in the bystander part of the trolley problem.

Additionally, they found that being more politically right-leaning was associated with several philosophical views, such as theism, free will libertarianism, nonphysicalist views in philosophy of mind, and the correspondence theory of truth.

What are we to take from all of these findings, if anything? The authors write:

To the extent that reliable causal patterns emerge between philosophical beliefs and other psychological factors, we are presented with the opportunity to study the structure of belief in a way that is elided by typical discussions in the philosophy of mind. Epistemically significant mental states such as beliefs and credences are typically characterized in terms of their contents and/or their functional roles in rational inference. Relatively little attention has been paid to connections between philosophical beliefs/views and personality, mental health, life experiences, psychopharmacology, or other psychological variables. The present study suggests that there may be important relationships between philosophical beliefs and various psychological traits. These findings therefore could raise doubts about the adequacy of a purely rationalistic conception of belief.

It is also possible that certain psychological states provide evidence for philosophical positions. Perhaps, for example, some features of depression might (seem to) provide evidence for a lack of free will. Or perhaps experiences with some mind-altering substances (seem to) provide evidence related to objective esthetic value. These possibilities suggest empirical lines of research into the ways in which individuals understand or consciously perceive the evidential relationships between their psychological states and their philosophical views.

UPDATE: There’s an ungated version here, and supplemental materials (including more data than that which is discussed in the article) here.

Sham Diagnosis

Tags

The fiction of borderline personality disorder

Suffering, in its myriad forms, can be assimilated into medical and psychiatric discourses, and sometimes, at least, can become the object of treatment of these discourses. A diagnosis within these discourses is, therefore, strictly speaking a metaphor, a displacement, and it is the hallmark of delusion for what is metaphorical to be taken literally. Consider the expansion of mental health discourse over recent years, with an ever-growing cohort of professional lobbyists, advocacy groups and awareness campaigns. Fitness watches and their corresponding apps do not merely count calories or function as pedometers but quantify the amount of mindfulness one has undertaken. ‘Mental health’ has become central to the care of the self that subjects are supposed to perform for their ‘well-being’. The COVID-19 pandemic has illustrated this expansion very clearly. Pundits (often the same ones who support aggressive austerity policies) who were opposed to the Victorian government’s lockdowns and social-distancing measures bemoaned the alleged mental health effects of these strategies. In fact, suicide rates did not increase in Victoria during the lockdown period, but nonetheless the federal government took the extraordinary step of increasing Medicare funding for mental health treatments, doubling the number of sessions accessible per year from ten to twenty. This took place in a context in which practically every other aspect of the welfare state is being cut or systematically underfunded. ‘Mental health’ may be a way of naming individual suffering, but it also names a site of social and governmental contestation in terms of its funding and practices.

In the field that goes under the name of ‘mental health’, the purposes of diagnosis are varied, and go well beyond orienting clinicians in their treatment of patients. The social effects of a diagnosis can be far-reaching. ‘Schizophrenia’, to give one example, carries implications of madness, ‘psychopathy’ of moral depravity. Clearly, such labels carry a risk of stigmatisation for patients. On the other hand, however, some patients themselves clamour for a diagnosis, wearing it as a sort of personal emblem in social settings and online forums. Just as astrology has its Pisceans and Virgos, so does the field of mental health include patients who strongly identify themselves as being ‘on the spectrum’ or as an ‘INTJ’, with these terms playing much the same role as the label of ‘addict’ in twelve-step programs. The label acts as a master signifier, linking and conceptually unifying disparate elements within an individual’s subjective life. It can function as a key through which, retroactively, people can make sense of their suffering. For this reason, people can become attached to their diagnoses, even when they are stigmatising.

Then there is the disciplinary function of a diagnosis. A case in point is that of childhood attention deficit hyperactivity disorder (ADHD). Studies have repeatedly demonstrated that among schoolchildren this diagnosis is disproportionately made of the youngest in the class or cohort. This suggests that what is being identified and pathologised is not so much a substantive underlying ‘disorder’ as a child’s inability or unwillingness to conform to the expectations of the adults in his or her life. A successful ‘treatment’—in the case of ADHD in the Anglophone world, this is almost always stimulant medication—is one in which this non-conformity has been suppressed. In the first instance, it is the adults in the child’s world, and not the child him- or herself, who have the problem. This is not to deny that some children do indeed display ‘hyperactivity’ or difficulties with attention, but rather to note that there is a disciplinary element to the diagnosis and treatment that exceeds the amelioration of these symptoms.

Another example, one that will be central to this article, is borderline personality disorder (BPD). It is sometimes known by a less flattering term, ‘emotionally unstable personality disorder’ (EUPD). Whatever the clinical or empirical bases of the diagnosis, BPD has become a synonym for ‘manipulative’, ‘attention-seeking’, ‘difficult’ and ‘treatment resistant’. ‘Men’s rights activists’—largely unreconstructed misogynists—are wont to say that their unhappy experiences with women happened because the women in question had this particular condition. Beyond the pejorative implications, the clinical consequences of the diagnosis, in an Australian public health context, will very likely be refusal of treatment or refusal of a hospital admission. As Michel Foucault observed, formerly moral categories can, when under the auspices of medical ‘science’, be reconstructed into a diagnostic taxonomy. That this taxonomy speaks the language of medicine does not prevent the moralisation from continuing.

The origins of the borderline

The concept of the ‘borderline’ is intertwined with the history of psychoanalysis in the United States. Almost all of Sigmund Freud’s early disciples were Jewish, and the rise of fascism in Europe resulted in a great diaspora, with psychoanalysts relocating to Paris (for a time, at least), London, Chicago, New York and Buenos Aires, among other destinations. Differences in language and culture effectively ensured that psychoanalytic theory and practice underwent very different trajectories in each of its new homelands. Freud founded the International Psychoanalytic Association (IPA), and the origin of the BPD diagnosis is tied very closely to the advocacy of this group. The IPA claimed an institutional lineage that derived directly from Freud, but some have argued that this institutional legacy conceals profound betrayals of Freudian theory and praxis. French psychoanalyst Jacques Lacan, for example, who was ultimately expelled from the IPA, accused this latter of breaking fidelity with Freud’s founding principles by extracting the radical elements from psychoanalysis, and subordinating it to imperatives of ‘independence’, conformity and heteronormativity. Freud and Lacan permitted ‘lay’ (i.e. non-medical) analysts to practice, provided that they had undergone their own analysis; the IPA largely rejected this, seeking to shore up the group’s scientific prestige. Freud was tolerant of homosexuality, despite the conservatism of his surroundings, and Lacan declared that there were no pre-established sexual relations between humans at all, but well into the 1970s the IPA maintained that homosexuality, for instance, was a perversion, even as crudely bioreductionist psychiatrists were disputing this. In Paris at the time of the May 1968 uprising, IPA analysts dismissed the students as ‘infantile’; Lacan met and held seminars with them. In short, there are some major differences between the IPA and other psychoanalytic orientations.

In any case, Freud never set out to make a systematic diagnostic classification system in the style of the DSM-5 (the Diagnostic and Statistical Manual of Mental Disorders, touted as the ‘bible’ of psychiatry by its authors). He did, however, maintain a diagnostic distinction between neurosis and psychosis. In neurosis, a patient is separated from his or her most intimate or traumatic memories, wishes and fantasies by a process called repression, when these phenomena come into conflict with the ego. In psychosis, there is a similar problem of conflict with the ego, but instead of its giving rise to repression, the psychotic subject alters his or her relations with the ‘external world’, as Freud put it. Psychoanalysts subsequent to Freud came to use this distinction as a guide to whether a patient could be analysed. A neurotic was analysable in this system, but in psychosis, whether the patient in question was schizophrenic, melancholic, paranoid or manic, the potential for analysability was much less clear. Some schools of psychoanalysis to this day dispute the applicability of psychoanalysis to psychotic subjects.

Into this distinction stepped two US psychoanalysts of the mid-twentieth century, Robert Knight and Adolf Stern, who designated the existence of a border zone between neurosis and psychosis. Under stress, a borderline patient may exhibit symptoms similar to those in psychosis, but under better conditions these symptoms may dissipate. Note that the origins of the borderline diagnosis are, first, as a wastebasket category and, second, as a means of assisting clinicians who are, strictly speaking, the ones on the border, faced with the difficulty of making a clear diagnostic distinction.

Several decades after this innovation, psychoanalysis was no longer the pre-eminent clinical paradigm for mental health in the United States. By the 1980s, psychopharmaceutical medications had rapidly proliferated, and talking therapists tended to favour techniques of direct suggestion (such as those in behaviourist or cognitive therapy) over psychoanalysis. The DSM was published in its third iteration in 1980, and its main architect, Robert Spitzer, waged a battle to replace the document’s psychoanalytic lineage (IPA only, by this point) with an orientation that was explicitly more biomedical. Psychoanalytic terms and diagnoses were largely purged from the DSM, but, as a sort of concession to the American psychoanalysts, a second ‘axis’ of diagnosis was added, incorporating a number of so-called ‘personality disorders’, among which was BPD. Never mind that there is no unified conception of what a personality is, nor how it can be ‘disordered’, and that accounts of personality differ considerably depending on a given psychoanalytic school, and vary even more greatly if one is a psychometrician or a cognitivist. And never mind that many psychoanalysts around the world reject the use of the BPD label as stigmatising and unrigorous. BPD had become canonical in mainstream medicine and psychiatry.

The construction of the borderline

Having brought BPD into ‘scientific’ respectability, the DSM’s nosologists faced the task of assigning it a set of positive criteria far removed from the wastebasket theorising of IPA psychoanalysis. The BPD diagnosis is still prevalent within IPA psychoanalysis, but its terms—above all, ‘projecting’ and ‘splitting’—are by now radically different to those of psychiatry. The psychiatric definition of BPD is instead as follows:

A pervasive pattern of instability of interpersonal relationships, self-image and affects, and marked impulsivity beginning by early adulthood and present in a variety of contexts, as indicated by five (or more) of the following:

1. Frantic efforts to avoid real or imagined abandonment (Note: Do not include suicidal or self-mutilating behaviour covered in Criterion 5)

2. A pattern of unstable and intense interpersonal relationships characterised by alternating between extremes of idealisation and devaluation

3. Identity disturbance: markedly and persistently unstable self-image or sense of self

4. Impulsivity in at least two areas that are potentially self-damaging (e.g. spending, sex, substance abuse, reckless driving, binge eating) (Note: Do not include suicidal or self-mutilating behaviour covered in Criterion 5)

5. Recurrent suicidal behaviour, gestures, or threats, or self-mutilating behaviour

6. Affective instability due to a marked reactivity of mood (e.g. intense episodic dysphoria, irritability or anxiety usually lasting a few hours and only rarely more than a few days)

7. Chronic feelings of emptiness

8. Inappropriate, intense anger or difficulty controlling anger (e.g. frequent displays of temper, constant anger, recurrent physical fights)

9. Transient, stress-related paranoid ideation or severe dissociative symptoms.

Several observations are worth making here. First, most of the criteria refer to near-universal aspects of human experience, which vary quantitatively between individuals. Mood as such is mutable and ‘reactive’, and people generally try to avoid real or perceived abandonment. Second, there is a subtle but distinct strain of moralising, wherein certain bodily satisfactions—food, sex, self-administered substances—are viewed through the lens of ‘self-damaging’ behaviour. Third, the would-be diagnostician is granted considerable power to exercise what are strictly subjective and arbitrary judgements. For instance, what precisely is an ‘inappropriate’ level of anger, and how would one gauge this objectively, especially in the absence of a mediating context? The diagnostic criteria on the whole tend to pathologise various manifestations of distress without any reference to the cause of this distress.

It is worth noting further that these criteria are disproportionately applied to women rather than men, and that women are diagnosed with BPD at a rate of three to one relative to men. Moreover, even if one is prepared to entertain the validity of BPD as a construct, it is striking that the most significant etiological factor is what the literature terms ‘environmental’. Up to 70 per cent of those diagnosed as borderline have a history of serious trauma, often in the form of childhood neglect, or physical and sexual abuse. The aim of DSM diagnosis is essentially one of generalisation on the basis of symptoms, but implicitly this generalisation is gendered, and imbued with a set of social norms regarding suffering, satisfaction and an individual’s relation to others. What are supposedly ‘neutral’ criteria expressed without value judgements are in fact deeply ideological constructs in pseudo-scientific garb. Lacan referred to ‘discourse’ as the means by which language forms different types of social bonds. There are many such discourses, and that which he termed the university discourse speaks in the name of knowledge, of science, but conceals (and produces) relations of mastery. Contemporary mental health is one example of such a discourse, and anybody taking up the label of BPD as their emblem, as encouraged by the diagnosticians, is liable to become an agent of their own subjection.

Mental health, politics and neoliberalism

The field of mental health in general, and psychiatry in particular, has a long history of political ideology being passed off as ‘science’. The most notorious examples are perhaps those of the twentieth century’s dictatorships. The USSR used psychiatry, with its attendant threats of forced treatment, confinement and sedation, as a tool for managing dissidents. (The Nobel laureate Joseph Brodsky, for instance, suffered a stint in a mental institution for ‘pornographic and anti-Soviet’ writing.) Even more striking was psychiatry in Nazi Germany. What is particularly remarkable is that much of the killing perpetrated by German psychiatrists—estimated to be in the order of 200,000 patients—occurred before any executive order from Hitler or other senior Nazis, and was undertaken with the imprimatur of German academic psychiatry. If this seems like an outlier, we should recall that a significant portion of psychiatric thought outside Nazi Germany, and particularly in the United States at that time, was grounded in the eugenics movement, and supported, for instance, the administration of euthanasia to the ‘feeble-minded’.

The liberal democracies were not immune to abuses within psychiatry either. In the United States, in 1851, one Dr Samuel A. Cartwright notoriously posited the existence of ‘drapetomania’, a madness in which the afflicted were slaves possessed of an intense desire to escape. The remedies prescribed by Dr Cartwright included ‘whipping the devil out of them’ and preventing further absconding by removing the patient’s toes. In the 1950s, a diagnosis of schizophrenia was characterised by a splitting or fragmentation of the personality, usually resulting in a calm, if poorly functioning, patient. The diagnosis was made disproportionately in women, particularly housewives. In 1968, the DSM underwent its second revision. That year was famously one of protest across many parts of the world, including the United States, which saw mass movements in opposition to the Vietnam War and in favour of civil rights. When the DSM-II emerged in the context of these societal conflicts, schizophrenia had been redefined with an emphasis on ‘masculinised belligerence’, according to the historian of psychiatry Christopher Lane. With schizophrenia now recast as a condition characterised by ‘hostility’ and ‘aggression’, the primary candidates for the diagnosis shifted from white housewives to African American men, many of whom were directly involved in civil rights protests. The effect of this shift was that many such men were hospitalised, often for years at a time, and subjected to mandatory ‘treatments’. Silencing dissent or, worse, pressing the dissenter into express agreement with some ideology or other, has long been an aim of mental health intervention.

The ideas of the ruling class are, in every epoch, the ruling ideas, and this is as true of mental health and psychiatry as of anything else. The current ruling ideas are decidedly neoliberal. The effect of neoliberalism on the field of mental health is not merely to seclude, kill or disable dissidents, though that still occurs. More commonly, the contemporary patient of neoliberal mental health is obliged to uphold the virtues that the ideology most esteems. It is probably not a coincidence, for instance, that the shift from institutionalisation towards self-management of mental disorders occurred at precisely the same times, and in the same places, as government economic policy moved from Keynesian social democracy to neoliberalism. As the political economy changed, so too did psychiatric technology, with lower-risk SSRI medications and cognitive therapies replacing sedatives and lobotomies. The move towards self-management is more fiscally prudent and, moreover, the neoliberal paradigm posits each individual as his or her own entrepreneur, with a fundamentally different social contract with the state from that of his or her Keynesian predecessors. Viewing individuals primarily as entrepreneurs allows us to view their difficulties in life as the result of malinvestment. Unemployment, sickness and other life problems, in this view, ought not to receive too much in the way of governmental amelioration, since this would effectively prolong the malinvestment and obstruct other, better investments. In this paradigm, suffering people ought not to receive too much help. This is not always stated explicitly in the textbooks—though sometimes it is—but it nonetheless exerts a massive influence on the provision of mental health services. Dependence is systematically pathologised when it is dependence on people, though not necessarily when it involves the use of prescription medications.
Patients are encouraged to ‘individuate’, but not from their pharmaceuticals, ‘pleasant activity schedules’ and mindfulness apps. Individuation is held by researchers in this field to be a self-evident good, thus demonstrating that the hallmarks of psychology, ‘well-being’, ‘maturity’ and ‘self-regulation’, consist in reproducing the isolated, alienated subject of liberal capitalism.

In order to understand the effects of neoliberalism on mental health policy, it is important not to see it simply as cuts to funding, or competition between services—though it is those things—but also as the internalisation of a different set of prerogatives within the services that survive. For instance, several major psychological treatments for common mental afflictions regard negative affect as the outcome of invalid reasoning, or as something a person can be diverted from by distraction, meditation, so-called thought-stopping exercises, enhanced self-regulation and so on. Subtly but surely, negative emotions become an indication of a moral fault. When it comes to mental health treatment delivered by words, the latest thrust is for technology to be used to deliver directive strategies, which patients would self-administer by way of apps or computer programs. One can see how, in the present climate, such apps appeal both to a narrative of ‘innovation’ as well as to a desire to save money. But one of the most empirically robust findings in the history of mental health, dating back to at least the 1930s, is the importance of the therapeutic relationship in determining the outcome of a treatment. In the context of a collapse of many social structures, and arguable increases in isolation and alienation, self-administered apps may very well aggravate the problems they purport to treat. The borderline, with her fear of abandonment and appeal to the other for support, is at an impasse here, and her ‘neediness’ and ‘dependence’ are liable to be read as symptoms under the dominant paradigm.

The borderline under neoliberalism

Julia Kristeva, in Powers of Horror, associated borderline phenomena with the ‘abject’. The etymology of this term is significant, as the Latin refers to that which is thrown away. Kristeva was writing in 1982, but the term captures very precisely the position of the borderline with respect to contemporary mental health. Officially, whether in legislation or in the policy of the major hospitals, there is no prohibition on admitting BPD patients during a bout of suicidality or acute psychotic symptoms. In practice, however, such patients are turned away daily, the prevailing attitude being that the risk with BPD lies in providing too much, rather than too little, care. Just as with Centrelink, where ‘support’ for dole recipients is minimal and punitive, for the recipient’s supposed own good, so too does the borderline run the risk of indulging in ‘secondary gain’ in the face of undue assistance. Consequently, the treatment protocols for BPD sometimes enjoin clinicians to refrain from indulging a patient presenting with suicidal gestures. To quote from one (Borderline Personality Disorder: A Clinical Guide, by John G. Gunderson), suicidal acts are potentially ‘manipulative’ and will lead to ‘secondary gain’. The prudent clinician, confronted with these gestures, should remain ‘uninvolved’ and ‘unavailable’. Again, to reiterate, the majority of such patients have a background of serious trauma, often experienced at the hands of those who raised them. Treatment is directed at having them ‘self-manage’ this trauma in such a way as to impose only minimally on public services. This is not a merely neutral, ‘scientific’ procedure but one in which mental health services are animated by the prevailing political economy and its ideology.

The concern about ‘secondary gain’ here is particularly ironic, as the term derives from Freud. The primary gain of a symptom, according to him, was the quantum of satisfaction that it produced. (The sort of satisfaction that he had in mind is something radically distinct from what we might term ‘pleasure’.) Freud introduced the idea of secondary gain in his Introductory Lectures on Psychoanalysis, in which he discussed ‘advantages’ that might accrue from a given symptom. He gives the example of a woman ‘roughly treated’ (as James Strachey translates it) by her husband. Her illness provides her with a modicum of defence against her husband’s aggression. Freud did not argue that the woman in question should be deprived of such defences but instead suggested that they would likely be intractable in the absence of a solution to her marital problems. Any ethical treatment worthy of the name would be obliged to consider alleviating the patient’s marital problems, and not merely silencing her neurotic symptom.

One can consider this in the light of Lacan, who once quipped that it was Marx who invented the symptom. One reading of this is to understand that the point of conflict within a system, whether at the societal or familial level, and which may well be localised within a given individual, is produced via a structural causation beyond that individual. The child diagnosed with ADHD may be the one given the dexamphetamine, but his or her family and classroom may also require a treatment of their own, and likewise for the borderline. The contemporary neoliberal paradigm for mental health diagnosis and treatment aims at silencing symptoms and their context without regard for the social problems that have given rise to them. Insofar as interpersonal, non-pharmaceutical treatment is provided to BPD patients—and this is extremely minimal in the public system—it tends to involve an elaborate series of microregulatory, didactic methods for self-management without discussion of the patient’s history. A prominent private hospital in Melbourne, for example, offers an ‘emotional management’ program in which patients are taught modules such as ‘Emotion Regulation’ and ‘Distress Tolerance’. The particularity of an individual’s suffering is ignored in favour of generic techniques and the provision of information, rather than care as such. This impersonal, one-size-fits-all approach to treatment makes it to psychology what Ceaușescu’s orphanages were to parenting. Those aspects of BPD that are given the most attention are those that are most distressing to hospitals and not necessarily the patients themselves. Moreover, the fundamental problem of the BPD patient is construed as an educative failure. He or she is held to be an undivided, self-reflexive subject with no unconscious and no relevant history, who merely needs to acquire the right knowledge in order to be well regulated. 
The last two and a half millennia of philosophy, Eastern and Western, might as well not have existed as far as contemporary mental health treatment is concerned.

To be clear, the need for particularised care within the field of mental health is not a call for personal therapy at the expense of social measures. The government could do more for the mental health of the populace by improving housing, health and unemployment benefits than by providing any kind of awareness campaign or psychiatric treatment program. That suffering always contains an irreducibly subjective component, in psychoanalytic theory, does not alter the fact that the kind of subjectivity that exists within psychoanalysis is one that is founded on a relational ontology. This is especially true in Lacanian psychoanalysis, in which there is no ‘self’, as such, except imaginarily, and the subject who exists instead is one that should be read as having been ‘subjected to’ the social order, from the level of the family up to the broader society. The most intimate, most ‘biological’ aspects of life—sleep, eating, toileting, sexuality—have been thoroughly socialised before a child can typically write his or her own name. Consequently, when thinking about the causation of ‘symptoms’, as these are understood in mental health, one is apt to be led astray unless both subjective and social structures are taken into account. It is to the benefit of neoliberal governments and mental health providers under the dominant paradigm to obscure the dual structure of symptoms, and to pretend instead that said symptoms correspond to some underlying biological impairment or, better yet, a moral failing.

Criticisms of the diagnosis and treatment of BPD are growing, and in various parts of the world service users and researchers are providing trenchant critiques and offering alternatives. This allows for the possibility that somebody with a history of trauma who presents as suicidal at a clinic might yet be viewed as something other than a manipulator to be turned away or cajoled into conformity. Australia, however, is lamentably backward on this point, and one will search in vain in the official statements of the major mental health providers for the slightest acknowledgement that such critiques of BPD even exist. The concept of a ‘disordered’ personality is itself taken up uncritically by the leading Australian mental health institutions, which also regard such disorders as bona fide medical conditions, to be (self-)managed along the lines of diabetes or high blood pressure. The suppression of history, particularly historical trauma, and the denial of care to a particular group of patients is not the application of some apolitical, medical procedure. It is thoroughly reactionary, and continues the worst traditions of psychiatric care, updated for the neoliberal age.

Power … and the Dialect of Economics


A few months ago, I went down a rabbit hole analyzing word frequency in economics textbooks. Henry Leveson-Gower, editor of The Mint Magazine, thought the results were interesting and asked me to write up a short piece. The Mint article is now up, and is called ‘Power: don’t mention it’. What follows is my original manuscript.

If you’ve ever taken Economics 101, then you’re familiar with its jargon. In the course, you probably heard the words ‘supply and demand’ and ‘marginal utility’ uttered hundreds of times. As you figured out what these words meant, you gradually learned to speak a dialect that I call econospeak.

Like all dialects, econospeak affects how you express ideas. The vocabulary of econospeak makes it easier to express certain ideas (such as ‘market equilibrium’), but harder to express others (like ‘imperialism’, as we will see). This trade-off is a feature of all specialized dialects. Physics-speak, for instance, makes it easy to talk about the dynamics of motion, but difficult to talk about emotion.

While all scientific languages share this kind of trade-off, econospeak is different from natural-science dialects in one key way. The natural sciences have a solid empirical footing. Mainstream economics does not. As Steve Keen showed in his book Debunking Economics, when the ideas in Econ 101 are subjected to scientific scrutiny, they manifestly fail.

Despite this scientific failure, Econ 101 charges on like a juggernaut, largely unchanged for a half century. Why? The simplest (and most incendiary) explanation is that the course is not teaching you science. Rather, it is indoctrinating you in an ideology.

In his introductory textbook Principles of Economics, former Fed Chair Ben Bernanke admits as much. He writes: “economics is not a set of durable facts … it is a way of thinking about the world.” I agree. Economics 101 teaches you a fact-free way of thinking — the very definition of an ideology.

In their book Capital as Power, political economists Jonathan Nitzan and Shimshon Bichler go further. They argue that mainstream (neoclassical) economics is “ideology in the service of the powerful”. If this is true, the trade-offs in econospeak take on new meaning. In particular, the words that are absent from the economics dialect indicate ideas that the powerful wish to suppress. As we will see, what econospeak seems to suppress is the idea of power itself.

Analyzing econospeak

To better understand the economics dialect, I recently assembled a large sample of economics textbooks. I measured the frequency of words in these books, and then compared this frequency to what is found in mainstream English (as measured by the Google English corpus). Here is what I found.

Unsurprisingly, economists use some words far more than average. The word ‘supply’, for instance, is about 30 times more common in economics textbooks than in mainstream English. The word ‘demand’ is about 50 times more common.

Economists also use some words far less than average. The word ‘imperialism’, for instance, is about 100 times less common in econospeak than in mainstream English. And the word ‘anti’ (used to voice opposition) is about 1000 times less common. It seems that economists rarely speak about imperial conquest, or of opposition to it.
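The over- and under-use ratios above are simple to compute: a word's share of the specialist corpus divided by its share of the reference corpus. Here is a minimal sketch; the function name and the toy word counts are my own illustrative assumptions, not the study's data, which came from a large textbook sample and the Google English corpus.

```python
# Sketch: how over/under-use of a word can be measured.
# A ratio > 1 means the word is overused relative to the reference
# corpus; a ratio < 1 means it is underused. Counts below are toy
# numbers chosen for illustration only.

def relative_frequency(word, corpus_counts, reference_counts):
    """Ratio of a word's share of the corpus to its share of the reference."""
    corpus_share = corpus_counts.get(word, 0) / sum(corpus_counts.values())
    reference_share = reference_counts.get(word, 0) / sum(reference_counts.values())
    return corpus_share / reference_share

# Toy counts (not real data): each corpus totals 10,000 words.
econ = {"supply": 300, "demand": 500, "imperialism": 1, "the": 9199}
english = {"supply": 10, "demand": 10, "imperialism": 100, "the": 9880}

print(relative_frequency("supply", econ, english))       # ~30: overused
print(relative_frequency("imperialism", econ, english))  # ~0.01: underused
```

With these toy numbers, ‘supply’ comes out roughly 30 times overused and ‘imperialism’ roughly 100 times underused, mirroring the ratios reported above.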

We can find similar differences across the whole economics dialect. Because there are about 35,000 unique words in my textbook sample, I cannot discuss the results for every word. But I can show you the ‘structure’ of econospeak. To do so, I’ll break the economics dialect into four quadrants, as shown in Figure 1. (If you want to explore this data in more detail, I have built an interactive app available here.)

Figure 1: Quadrants of econospeak. For an interactive version of this chart, visit https://blair-fix.shinyapps.io/deconstructing-econospeak/

In Figure 1, each point represents a word. The horizontal axis shows the word’s frequency in economics textbooks. The vertical axis shows this word frequency relative to mainstream English. I have divided the economics dialect into four quadrants that I call ‘jargon’, ‘quirks’, ‘under-represented’ and ‘neglected’.

The ‘jargon’ quadrant contains words that economists use frequently, and far more than in mainstream English. Here you will find the jargon of economics — words like ‘supply’, ‘demand’, ‘marginal’, and ‘utility’.

The ‘quirks’ quadrant contains words that economists use infrequently in absolute terms. But these words are so rare in mainstream English that economists actually overuse them in relative terms. In the ‘quirks’ quadrant you will find the language of economic parables. ‘Superathletes’, for instance, are a parable for extremely productive people. And to be ‘grasshopperish’ is to be lazy.

When you learn economics, you focus on the jargon and the quirks. What you do not focus on are the words in the lower two quadrants of Figure 1. These are words that economists underuse relative to mainstream English. You tend not to notice these underused words because absence is difficult to spot. But when we crunch the numbers, it becomes obvious which words economists choose not to say.

Let’s look at the ‘under-represented’ quadrant. Here you will find words that economists use frequently — almost as frequently as economics jargon. But outside of economics, these words are so common that economists actually underuse them. In the ‘under-represented’ quadrant you will find (among other things) the language used to describe the bureaucratic structure of groups — words like ‘organizations’, ‘administration’, ‘management’, and ‘committee’. Economists, it seems, prefer not to think about bureaucracy.

And now to the ‘neglected’ quadrant. Here live words that are common in mainstream English, yet which economists utter rarely. It is here that we find the language of power.
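Taken together, the four quadrants amount to a two-way split: overused versus underused relative to mainstream English, and frequent versus rare within the textbooks themselves. A minimal sketch of that classification follows; the frequency cutoff and the example numbers are my own illustrative assumptions, not values taken from the actual chart.

```python
# Sketch: assigning a word to one of the four 'econospeak' quadrants.
# FREQUENT is an assumed cutoff for "used frequently within the
# textbooks"; it is illustrative, not the threshold used in Figure 1.

FREQUENT = 1e-5  # assumed share of the textbook corpus

def quadrant(textbook_freq, relative_use):
    """textbook_freq: the word's share of the textbook corpus.
    relative_use: textbook frequency divided by mainstream-English frequency."""
    if relative_use >= 1:  # overused relative to mainstream English
        return "jargon" if textbook_freq >= FREQUENT else "quirks"
    else:                  # underused relative to mainstream English
        return "under-represented" if textbook_freq >= FREQUENT else "neglected"

print(quadrant(3e-4, 30))    # jargon (e.g. 'supply')
print(quadrant(1e-7, 5))     # quirks (e.g. 'superathletes')
print(quadrant(2e-5, 0.5))   # under-represented (e.g. 'management')
print(quadrant(1e-7, 0.01))  # neglected (e.g. power-speak)
```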

Economists, it seems, do not like to speak about power. We can see this fact in Figure 2. Here I show words that relate to wielding and submitting to power. I’ve plotted their frequency using the same coordinates as in Figure 1 — the ‘quadrants of econospeak’. I find that the vast majority of these power-speak words live in the ‘neglected’ quadrant. Economists rarely utter them. And relative to mainstream English, this constitutes massive underuse.

Figure 2: Econospeak neglects the language of power.

What should we make of the relative absence of power-speak in economics? One possibility is that economists are simply unaware of the power dynamics of modern capitalism. This is a popular argument made by in-house critics like Wassily Leontief — the former President of the American Economic Association (AEA). In his 1970 presidential address to the AEA, Leontief scolded his colleagues for building models that had little to do with reality. Economists, he claimed, made theoretical assumptions based on ‘nonobserved facts’.

In this telling, it is ‘naive assumptions’ that cause economists to neglect the language of power. Leontief’s criticism certainly has an element of truth. The assumptions baked into mainstream economics models are extremely naive — at odds with an ever-growing array of evidence. But what Leontief’s critique does not explain is economists’ intransigence. It has been fifty years since Leontief scolded his fellow economists for ignoring the real world. And yet in the time since, Econ 101 has remained virtually unchanged. Why?

The intransigence of Econ 101 points to a darker side of economics — namely that the absence of power-speak is by design. Could it be that economics describes the world in a way that purposely keeps the workings of power opaque? History suggests that this idea is not so far-fetched.

An investment in hiding power

John D. Rockefeller — widely regarded as the richest American who ever lived — was nothing if not a shrewd investor. What, then, did he consider his ‘best investment’? It was not stocks. It was not even physical property. No, what Rockefeller would describe as his ‘best investment’ was far less tangible. It was an investment in ideology.

In 1890, Rockefeller spent $600,000 — the equivalent today of about $170 million1 — to found the University of Chicago. While remembered as an act of philanthropy, Rockefeller himself considered this gift an investment — and his best one at that.2 This was because the University of Chicago would later promote ideas that vastly improved Rockefeller’s image.

In building his industrial empire, Rockefeller was ruthless in wielding power. Here is how Jonathan Nitzan and Shimshon Bichler describe it:

[Rockefeller] invented every possible trick in limiting competition and output, in using religious indoctrination for profitable ends, in rigging stock prices and bashing unions, in enforcing ‘free trade’ while helping friendly dictators, in confiscating oil-rich territories and in uprooting and destroying indigenous Indian populations.

(Nitzan and Bichler, 2009)

This ruthless use of power, you can imagine, gave Rockefeller a bad image. Enter Rockefeller’s ‘investment’. In founding the University of Chicago, Rockefeller created the first bastion of neoclassical economics. It was at the University of Chicago that Milton Friedman penned his ode to free markets, Capitalism and Freedom. It was there that Theodore Schultz and Gary Becker proclaimed that income stemmed from productive ‘human capital’. And it was there that the ‘Chicago boys’ were trained — the group of Chilean economists who, in the wake of Augusto Pinochet’s brutal military coup, helped remake Chile’s economy in the name of the ‘free market’.

In the writings that came out of the Chicago school, ruthless acts of power were not discussed. Instead, the focus was on a fantasy world governed by ‘perfectly competitive free markets’. This fantasy, argue Nitzan and Bichler, “helped make Rockefeller and his like invisible”. It allowed capitalists to cloak their power in euphemistic language. Capitalists did not wield ‘power’ … they wielded ‘freedom’.

A century later, Rockefeller’s investment appears to still be paying dividends. As Figure 2 shows, the language of power remains conspicuously absent from economics textbooks. This absence, I believe, is by design.

The key to successfully wielding power is to make control appear legitimate. That requires ideology. Before capitalism, rulers legitimized their power by tying it to divine right. In modern secular societies, however, that’s no longer an option. So rather than brag of their God-like power, modern corporate rulers use a different tactic. They turn to economics — an ideology that simply ignores the realities of power. Safe in this ideological obscurity, corporate rulers wield power that rivals (or even surpasses) the kings of old.

Are economists cognizant of this game? Some may be. Most economists, however, are likely just ‘useful intellectuals’ — clever people who are willing to delve into the intricacies of neoclassical theory without ever questioning its core tenets. Meanwhile, with every student who gets hoodwinked by Econ 101, the Rockefellers of the world happily reap the benefits.


Notes

For more details about my analysis of econospeak, see my blog post Deconstructing Econospeak.

1. In 1890, Rockefeller’s $600,000 gift was worth about 2600 times the US per capita income of $230. In 2019, US income per capita was $65,000. If today Rockefeller gifted 2600 times the average American income, it would be worth about $170 million.
2. Speaking of his endowment to the University of Chicago, Rockefeller reportedly said ‘it was the best investment I ever made’ (Collier & Horowitz, 1976; cited in Nitzan & Bichler, 2009).
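The conversion in note 1 is easy to verify: scale the gift by the ratio of average incomes rather than by a price index. The figures below are the ones stated in the note.

```python
# Check of the note-1 conversion: express Rockefeller's 1890 gift as a
# multiple of per capita income, then apply that multiple to 2019 income.
gift_1890 = 600_000    # Rockefeller's gift, 1890 dollars
income_1890 = 230      # US per capita income, 1890
income_2019 = 65_000   # US per capita income, 2019

multiple = gift_1890 / income_1890        # times the average income
equivalent_2019 = multiple * income_2019  # in 2019 dollars

print(round(multiple))                # 2609
print(round(equivalent_2019 / 1e6))   # 170 (million dollars)
```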