Psychology

Don’t Vote for a Psychopath: Tyranny at the Hands of a Psychopathic Government

Published by Anonymous (not verified) on Thu, 22/10/2020 - 1:15am in

Tags 

News, Psychology

Politicians are more likely than people in the general population to be sociopaths. “I think you would find no expert in the field of sociopathy/psychopathy/antisocial personality disorder who would dispute this… That a small minority of human beings literally have no conscience was and is a bitter pill for our society to swallow — but it does explain a great many things, shamelessly deceitful political behavior being one.”—Dr. Martha Stout, clinical psychologist and former instructor at Harvard Medical School

Twenty years ago, a newspaper headline asked the question: “What’s the difference between a politician and a psychopath?”

The answer, then and now, remains the same: None.

There is no difference between psychopaths and politicians.

Nor is there much of a difference between the havoc wreaked on innocent lives by uncaring, unfeeling, selfish, irresponsible, parasitic criminals and elected officials who lie to their constituents, trade political favors for campaign contributions, turn a blind eye to the wishes of the electorate, cheat taxpayers out of hard-earned dollars, favor the corporate elite, entrench the military industrial complex, and spare little thought for the impact their thoughtless actions and hastily passed legislation might have on defenseless citizens.

Psychopaths and politicians both have a tendency to be selfish, callous, irresponsible, and shallow: glib, pathological liars and con artists who use others without remorse.

Charismatic politicians, like criminal psychopaths, exhibit a failure to accept responsibility for their actions, have a high sense of self-worth, are chronically unstable, have socially deviant lifestyles, need constant stimulation, have parasitic lifestyles and possess unrealistic goals.

It doesn’t matter whether you’re talking about Democrats or Republicans.

Political psychopaths are all largely cut from the same pathological cloth, brimming with seemingly easy charm and boasting calculating minds. Such leaders eventually create pathocracies: totalitarian societies bent on power, control, and destruction of both freedom in general and those who exercise their freedoms.

Once psychopaths gain power, the result is usually some form of totalitarian government or a pathocracy. “At that point, the government operates against the interests of its own people except for favoring certain groups,” author James G. Long notes. “We are currently witnessing deliberate polarizations of American citizens, illegal actions, and massive and needless acquisition of debt. This is typical of psychopathic systems, and very similar things happened in the Soviet Union as it overextended and collapsed.”

In other words, electing a psychopath to public office is tantamount to national hara-kiri, the ritualized act of self-annihilation, self-destruction and suicide. It signals the demise of democratic government and lays the groundwork for a totalitarian regime that is legalistic, militaristic, inflexible, intolerant and inhuman.

Incredibly, despite clear evidence of the damage that has already been inflicted on our nation and its citizens by a psychopathic government, voters continue to elect psychopaths to positions of power and influence.

Indeed, a study from Southern Methodist University found that Washington, DC—our nation’s capital and the seat of power for our so-called representatives—ranks highest among US regions for its concentration of psychopaths.

According to investigative journalist Zack Beauchamp, “In 2012, a group of psychologists evaluated every President from Washington to Bush II using ‘psychopathy trait estimates derived from personality data completed by historical experts on each president.’ They found that presidents tended to have the psychopath’s characteristic fearlessness and low anxiety levels — traits that appear to help Presidents, but also might cause them to make reckless decisions that hurt other people’s lives.”

The willingness to prioritize power above all else, including the welfare of their fellow human beings, ruthlessness, callousness and an utter lack of conscience are among the defining traits of the sociopath.

When our own government no longer sees us as human beings with dignity and worth but as things to be manipulated, maneuvered, mined for data, manhandled by police, conned into believing it has our best interests at heart, mistreated, jailed if we dare step out of line, and then punished unjustly without remorse—all the while refusing to own up to its failings—we are no longer operating under a constitutional republic.

Instead, what we are experiencing is a pathocracy: tyranny at the hands of a psychopathic government, which “operates against the interests of its own people except for favoring certain groups.”

Worse, psychopathology is not confined to those in high positions of government. It can spread like a virus among the populace. As an academic study into pathocracy concluded, “[T]yranny does not flourish because perpetrators are helpless and ignorant of their actions. It flourishes because they actively identify with those who promote vicious acts as virtuous.”

People don’t simply line up and salute. It is through their own personal identification with a given leader, party or social order that people become agents of good or evil.

Much depends on how leaders “cultivate a sense of identification with their followers,” says Professor Alex Haslam. “I mean one pretty obvious thing is that leaders talk about ‘we’ rather than ‘I,’ and actually what leadership is about is cultivating this sense of shared identity about ‘we-ness’ and then getting people to want to act in terms of that ‘we-ness,’ to promote our collective interests. . . . [We] is the single word that has increased in the inaugural addresses over the last century . . . and the other one is ‘America.’”

The goal of the modern corporate state is obvious: to promote, cultivate, and embed a sense of shared identification among its citizens. To this end, “we the people” have become “we the police state.”

We are fast becoming slaves in thrall to a faceless, nameless, bureaucratic totalitarian government machine that relentlessly erodes our freedoms through countless laws, statutes, and prohibitions.

Any resistance to such regimes depends on the strength of opinions in the minds of those who choose to fight back. What this means is that we the citizenry must be very careful that we are not manipulated into marching in lockstep with an oppressive regime.

Writing for ThinkProgress, Beauchamp suggests that “one of the best cures to bad leaders may very well be political democracy.”

But what does this really mean in practical terms?

It means holding politicians accountable for their actions and the actions of their staff using every available means at our disposal: through investigative journalism (what used to be referred to as the Fourth Estate) that enlightens and informs, through whistleblower complaints that expose corruption, through lawsuits that challenge misconduct, and through protests and mass political action that remind the powers-that-be that “we the people” are the ones that call the shots.

Remember, education precedes action. Citizens need to do the hard work of educating themselves about what the government is doing and how to hold it accountable. Don’t allow yourselves to exist exclusively in an echo chamber that is restricted to views with which you agree. Expose yourself to multiple media sources, independent and mainstream, and think for yourself.

For that matter, no matter what your political leanings might be, don’t allow your partisan bias to trump the principles that serve as the basis for our constitutional republic. As Beauchamp notes, “A system that actually holds people accountable to the broader conscience of society may be one of the best ways to keep conscienceless people in check.”

That said, if we allow the ballot box to become our only means of pushing back against the police state, the battle is already lost.

Resistance will require a citizenry willing to be active at the local level.

Yet as I point out in my book Battlefield America: The War on the American People, if you wait to act until the SWAT team is crashing through your door, until your name is placed on a terror watch list, until you are reported for such outlawed activities as collecting rainwater or letting your children play outside unsupervised, then it will be too late.

This much I know: we are not faceless numbers.

We are not cogs in the machine.

We are not slaves.

We are human beings, and for the moment, we have the opportunity to remain free—that is, if we tirelessly advocate for our rights and resist at every turn attempts by the government to place us in chains.

The Founders understood that our freedoms do not flow from the government. They were not given to us only to be taken away by the will of the State. They are inherently ours. In the same way, the government’s appointed purpose is not to threaten or undermine our freedoms, but to safeguard them.

Until we can get back to this way of thinking, until we can remind our fellow Americans what it really means to be free, and until we can stand firm in the face of threats to our freedoms, we will continue to be treated like slaves in thrall to a bureaucratic police state run by political psychopaths.

Feature photo | Editing by MintPress | Artist Unknown

Constitutional attorney and author John W. Whitehead is founder and president of The Rutherford Institute. His new book Battlefield America: The War on the American People  (SelectBooks, 2015) is available online at www.amazon.com. Whitehead can be contacted at johnw@rutherford.org.

The post Don’t Vote for a Psychopath: Tyranny at the Hands of a Psychopathic Government appeared first on MintPress News.

William James On The ‘Automatic, Therapeutic Decision’

Published by Anonymous (not verified) on Thu, 22/10/2020 - 12:47am in

In Existential Psychotherapy, Irvin Yalom, writing of conscious, directed, self-therapeutic change, notes the ‘essential’ role of personal decisions and choices in ‘effective’ therapy, and invokes William James‘ five-fold taxonomy of decisions, “only two of which, the first and the second, involve “willful” effort”:

1. Reasonable decision. We consider the arguments for and against a given course and settle on one alternative. A rational balancing of the books; we make this decision with a perfect sense of being free.

2. Willful decision. A willful and strenuous decision involving a sense of “inward effort.” A “slow, dead heave of the will.” This is a rare decision; the great majority of human decisions are made without effort.

3. Drifting decision. In this type there seems to be no paramount reason for either course of action. Either seems good, and we grow weary or frustrated at the decision. We make the decision by letting ourselves drift in a direction seemingly accidentally determined from without.

4. Impulsive decision. We feel unable to decide and the determination seems as accidental as the third type. But it comes from within and not from without. We find ourselves acting automatically and often impulsively.

5. Decision based on change of perspective. This decision often occurs suddenly and as a consequence of some important outer experience or inward change (for example, grief or fear) which results in an important change in perspective or a “change in heart.”

So three kinds of decisions are seemingly ‘automatic’; they are made for ‘no paramount reason’ or ‘accidentally’ or ‘suddenly.’ But they should not, for that reason, be understood as ‘spontaneous’ or ‘uncaused.’ After all, they are made by a patient in therapy, someone who has decided to go to therapy to ‘become better’ or to ‘be cured.’ Change, or an acute desire for it, already stirs within such persons. When the decision is made, therapy has already been under way for some time; narratives of the lived life have been constructed and edited for clarity; ‘suggestions’ for therapeutic change have been made; tentative drafts of new self-constructing narratives have been offered for emendation and rewriting in the clinic.

In these circumstances, the patient/client is not a passive participant in the therapeutic process but an active, dynamic one, albeit with levels of interaction with therapy that are not always explicitly conscious and available for introspection. These levels of interaction, in ‘producing’ decisions, act in much the same way as unconscious modes of problem-solving do, the ones that prompt the anecdotal observation that ‘mathematicians do all their theorem-proving while they sleep.’ The ‘therapeutic decisions’ which result should be cause for optimism. In the same way that writers, artists, and creators of all stripes press on through moments of ‘block’, trusting that their unconscious creative processes will move them past points of turmoil and stasis in their artmaking, the patient in therapy can continue to strive, pressing on, trusting that within them directed processes of self-discovery, invention, and construction, even if not immediately apparent, are under way.

‘I’ Review of Book on the Alma Fielding Poltergeist Case

Published by Anonymous (not verified) on Tue, 13/10/2020 - 5:12am in

Last Friday, 9th October 2020, the ‘I’ published a review by Fiona Sturges of the book, The Haunting of Alma Fielding, by Kate Summerscale (Bloomsbury, £18.99). Fielding was a woman from Croydon who in 1938 found herself and her husband haunted by a poltergeist, the type of spirit which supposedly throws objects around and generally makes itself unpleasant. The review states that she was investigated by the Society for Psychical Research, in particular Nandor Fodor. Summerscale came across the case while going through the Society’s files.

I’m putting up Sturges’ review as I have friends who are members of the Society and very involved in paranormal research, as are a few of the great peeps who comment on this blog. Ghost hunting is also very big at the moment: there are any number of programmes on the satellite and cable channels, as well as a multitude of ghost-hunting groups across the UK, America and other countries.

Despite its popularity, there’s a big difference between serious paranormal investigation of the type done by the SPR and ASSAP and the work of the majority of ghost-hunting groups. The SPR and ASSAP contain professional scientists as well as ordinary peeps from more mundane professions, and try to investigate the paranormal using strict scientific methodology. They contain sceptics as well as believers, and are interested in finding the truth about specific events, whether those events are really paranormal or have a rational explanation. They look down on some of the ghost-hunting groups, because these tend to be composed entirely of believers seeking to confirm their belief in the paranormal and to collect what they see as evidence. If someone points out that the evidence shown on their videos is actually no such thing – for example, most researchers believe orbs aren’t the souls of the dead, but lens artefacts created by floating dust motes – then the die-hard ghost hunters tend to react by decrying their critics as ‘haters’. Many of the ghost hunters’ accounts of their encounters with the supernatural are extremely dramatic: they’ll describe how members got possessed or were chased by spirits in their home. I’m not saying such events don’t happen at all – I do know people who have apparently been possessed by spirits during investigations – but the stories of supernatural events put up by the ghost hunters seem more likely to be the result of powerful imaginations and hysteria than genuine manifestations by the dead.

Academic historians are also interested in spiritualism and supernatural belief in the past because of what they reveal about our ancestors’ worldview and the profound changes it underwent during the 19th and early 20th centuries. Psychical research emerged in the 19th century at the same time as spiritualism, and was founded partly to investigate the latter. Both can be seen as attempts to provide concrete, scientifically valid proof of the survival of the soul after death at a time when science was itself just taking shape and religious belief was under attack from scientific materialism. As the review says, spiritualism and psychical research were particularly popular in the aftermath of the First World War, as bereaved relatives turned to them for comfort that their loved ones still lived on in a blessed afterlife. One famous example is Conan Doyle, the creator of the arch-rationalist detective Sherlock Holmes. Doyle was a spiritualist who helped, amongst other things, popularise the Cottingley Fairies in his book, The Coming of the Fairies. Another famous work in this area was Raymond, the physicist Sir Oliver Lodge’s account of his contact with the spirit of his son, who was one of those killed in that terrible conflict.

But the history of spiritualism is also interesting because of what it reveals about gender roles and sexuality, topics also touched on in the review. Mediums stereotypically tend to be women or gay men. At the same time, historians have suggested that there was an erotic element to seances and investigations. More intimate physical contact between the sexes was permitted in the darkness of the séance room than may otherwise have been allowed in strictly respectable Victorian society. At the same time, there is to modern viewers a perverse aspect to the investigation of the mediums themselves. In order to rule out fraud, particularly with the physical mediums who claimed to produce ectoplasm from their bodies, mediums were tied up, stripped naked and examined physically, including in their intimate parts. Emetics could be administered to make sure that their stomachs were empty and not containing material, like cheesecloth, which could be used to fake ectoplasm.

The review, ‘Strange but true?’, runs

In February 1938, there was a commotion at a terraced house in Croydon. Alma and Les Fielding were asleep when tumblers began launching themselves at walls; a wind whipped up in their bedroom, lifting their eiderdown into the air; and a pot of face cream flew across the room. The next morning, as Alma prepared breakfast, eggs exploded and saucers snapped.

Over the next few days, visiting journalists witnessed lumps of coal rising from the fireplace and barrelling through the air, glasses escaping from locked cabinets and a capsizing wardrobe. As far as they could tell, the Fieldings were not responsible for the phenomena. One report told of a “malevolent, ghostly force”. The problem, it was decided, was a poltergeist.

Fast-forward to 2017 and the writer Kate Summerscale, best known for the award-winning The Suspicions of Mr Whicher, was in the Society for Psychical Research Archive in Cambridge looking for references to Nandor Fodor, a Hungarian émigré and pioneer of supernatural study, who investigated the Fielding case.

She found a dossier of papers related to Alma, compiled by Fodor, containing interviews, séance transcripts, X-rays, lab reports, scribbled notes and photographs. The file was, says Summerscale, “a documentary account of fictional and magical events, a historical record of the imagination.”

The Haunting of Alma Fielding is a detective novel, a ghost yarn and a historical record rolled into one. Blending fact and fiction, it is an electrifying reconstruction of the reported events surrounding the Fieldings, all the while placing them in a wider context.

The narrative centres on Fodor, who at the time was losing faith in spiritualism – the mediums he had met were all fakes, and the hauntings he had investigated were obvious hoaxes. He was increasingly convinced that supernatural occurrences were caused “not by the shades of the dead but by the unconscious minds of the living”.

But he was intrigued by Alma, who was now experiencing “apports” – the transference of objects from one place to another. Rare stones and fossils would appear in her hands and flowers under her arms. Beetles started to scuttle out from her clothes and a terrapin appeared in her lap. She would later claim to be able to astrally project herself and give herself over to possession by spirits.

Summerscale resists the temptation to mine the more comic aspects of the story. She weaves in analysis on class, female emancipation and sexuality, and the collective angst of a nation. At the time, spiritualism was big business in Britain, which was still suffering the shocks of mass death from the First World War and Spanish flu. Seances to reach the departed were as common as cocktail parties. There was dread in the air, too, as another conflict in Europe loomed.

Alma became a local celebrity, released from domestic dreariness into the gaze of mostly male journalists, mediums and psychiatrists. Chaperoned by Fodor, she made frequent visits to the Institute of Psychical Research, where she submitted to lengthy and often invasive examinations.

We come to understand how Fodor stood to benefit from the cases, both in furthering his career and restoring his faith in the possibility of an afterlife. You feel his pain, along with Alma’s, as the true story is revealed.

It sounds very much from that last paragraph that the haunting was a hoax. There have been, unfortunately, all too many fake mediums and hoaxers keen to exploit those seeking the comfort of making contact once again with deceased relatives and friends. There was even a company selling a catalogue of gadgets for faking a séance. But I don’t believe for a single moment that all mediums are frauds. There is a psychological explanation, based on anthropologists’ studies of the zar spirit-possession cult among some African peoples. This is a very patriarchal culture, but possession by the zar spirits allows women to circumvent some of the restrictions placed on them. For example, they may be given rings and other objects while possessed, through the spirits asking, or apparently asking, for them. It’s been suggested that zar possession is a form of hysteria through which women frustrated by societal restrictions are able to get around them. The same explanation has also been suggested for western mediumship and alien abductions: many of the women who became mediums, or who experienced abduction by aliens, may have done so subconsciously, as these offered an escape from stifling normal reality.

I also believe that some supernatural events may well be genuine. This view was staunchly defended by the late Brian Inglis in his history of ghosts and psychical research, Natural and Supernatural. As an Anglican, I would also caution anyone considering getting involved in psychical research to take care. There’s fraud and hoaxing, of course, as well as misperception, while some paranormal phenomena may be the result of poorly understood fringe mental states. But I also believe that some of the supposed entities contacting us from the astral realms, if they exist, are deliberately trying to mislead us. The great UFO researchers John Keel and Jacques Vallee came to the same conclusion about the UFO entities; one of Vallee’s books was even entitled Messengers of Deception. There’s also the book, Hungry Ghosts, again written from a non-Christian perspective, which also argues that some of the spirits contacting people are malevolent and trying to deceive humanity for their own purposes.

If you are interested in psychical research, therefore do it properly using scientific methodology. And be aware of the possibility of deception, both natural and supernatural.

Children: The Familiar And Strange, The Known And Unknown

Published by Anonymous (not verified) on Sat, 10/10/2020 - 12:44am in

Parenting, and my relationship with my daughter, is persistently fraught with two seemingly incompatible states of affairs.

First, my child seems utterly familiar to me, the most intimately known person in our family: I was with her at her birth, and have been a companion and guardian since then, cleaning, bathing, feeding, escorting to school, playing with, teaching, comforting, advising, encouraging, ‘disciplining’ and so on. My daughter’s face, I have often said, seems to reflect my family album: sometimes, wistfully, I see glimpses of my father and mother; sometimes, I catch fleeting resemblances to cousins or nephews; on other occasions, miraculously enough, I see myself staring back at me. She is, unmistakably, a recipient of my genetic material, a biological bond I have formed with the cosmos thanks to my relationship with her mother, and our joint decision to bring our child into this world.

And yet, for all of that, my child remains an utter mystery to me. To be confronted with her is to come face to face with the most profound question of all: Who is this person? When my daughter was younger, working through her terrible twos, her toddler stage, I used to joke with my friends that while my daughter immediately took to her mother–the person who shared her body with her for nine months and then breastfed her for the next two years–I had to ‘start from scratch’ and introduce myself, negotiating the parameters of a brand new relationship with a person who knew nothing about me. I could take nothing for granted in this relationship; I had, so to speak, to begin from the basement and work my way upwards, establishing myself as a presence in her life. Hopefully one to be loved and trusted. But it didn’t come for free; I couldn’t have it granted to me; I was dealing with an unknown quantity, as was she. And she is changing, in ways I cannot fully fathom and of course, cannot predict.

Most of this is utterly unsurprising to parents. Children, for their part, have long known that their parents are mysteries to them; indeed, when I think of how much of my life had already transpired before my daughter met me, and of the little dribbles of information with which I seek to inform her of the kind of person I was, am, and am trying to become, I feel utterly defeated. As an immigrant parent, this task is particularly intractable. I will remain a mystery to her.

The nature of this relationship, broadly understood, is not radically dissimilar from the one we enjoy with our lovers and friends: the most intimate of relationships is revealed to have acute perplexities at its heart, which have inspired countless poetic and philosophical flights of fancy: the encounter with another subjectivity, when we look into the eyes of the seemingly utterly familiar and find instead the greatest mystery of all, one that we have merely deferred from our interiors to the external, and which serves to remind us of the task of discovery that waits within.

Advice for an Aspiring Economist

Published by Anonymous (not verified) on Thu, 24/09/2020 - 7:00pm in

A few weeks ago, evolutionary biologist David Sloan Wilson contacted me about an essay series he’s editing called Advice for an Aspiring Economist. The series aims to give advice to students who are interested in learning ‘evonomics’ — economics from an evolutionary perspective. It will be published in This View of Life Magazine and in Evonomics.

To my surprise, Wilson asked me to contribute an essay. I’m honored for two reasons. First, I still consider myself an ‘aspiring economist’. (I’m hardly an established academic.) Second, David Sloan Wilson is one of my intellectual heroes. For decades he’s battled against academic orthodoxy, promoting the idea that groups are an important unit of natural selection. Today, his ideas are widely accepted. Sociality, many scientists now believe, cannot evolve without group selection.

Back to economics. Here’s my contribution to Wilson’s series: advice for an aspiring economist.

✹ ✹ ✹

One of my favorite criticisms of the economics discipline is this:

… [Economics] provides an outstanding example of the “you can’t get there from here” principle in academic cultural evolution. It will never move if we try to change it incrementally.1

This criticism comes not from an economist, but from an evolutionary biologist — David Sloan Wilson. My advice to an aspiring (evolutionary-minded) economist is to remember Wilson’s words: you can’t get there from here.

Here’s what I mean.

If you learn economics as it’s taught in most universities, you’ll find it difficult to think in evolutionary terms. The reason is simple. Mainstream (neoclassical) economics treats humans as asocial animals — ‘self-contained globules of desire’.2 Unfortunately, this asocial model is wrong. Looking at human evolution, it’s clear that we’re a social species. Actually, we’re more than that. Humans are the most social of all mammals. Our group-forming ability is rivalled only by the social insects (ants, bees). Humans, in short, are ultra-social.3

Because mainstream economics treats humans as asocial, it’s a thought barrier to doing evolutionary science. As such, my advice is to not learn economics as it’s taught in most universities.

This advice may seem odd. Isn’t it like telling a chemistry student to skip Chemistry 101? Actually, no. It’s like telling a chemistry student to skip Phlogiston 101.

Never heard of phlogiston? That’s because it’s a long abandoned theory. In the 18th century, scientists proposed that combustion was caused by the release of a fire-like substance called ‘phlogiston’. The problem was that nobody could detect this mysterious element. Instead, scientists discovered oxygen. And so phlogiston theory was abandoned in favor of the oxygen theory of combustion.

Today, chemistry students don’t learn phlogiston theory. Science pedagogy has moved on. Unfortunately, economics pedagogy has not. If you take Economics 101, you’re learning the social-science equivalent of phlogiston theory — 19th-century ideas that should have been (but were not) abandoned. So if you’re an aspiring economist, skip the phlogiston theory. Skip Economics 101.4

That brings me back to Wilson’s statement: ‘you can’t get there from here’. The ‘here’ is both a theory and a place. You can’t use mainstream (neoclassical) economics to understand human genetic and cultural evolution. So if you want to do evonomics, you need to dump neoclassical theory. The catch is that you can’t dump neoclassical theory if you study in an economics department. That’s because in most econ departments, neoclassical theory is the core pedagogical canon. It’s required learning.5

Find a safe space

So what is an aspiring (evolutionary-minded) economist to do? I recommend finding a safe space to learn economics outside a traditional economics department. There are a few options.

One option is to study at a ‘heterodox’ economics department. (Here’s a list of such departments.) Heterodox departments are open to pluralist ideas and generally skeptical of neoclassical theory. These departments may not explicitly teach evolutionary ideas, but if you adopt an evonomics approach, you probably won’t encounter pushback.

Another option — one that I chose — is to learn economics in an interdisciplinary department. I studied in the Faculty of Environmental Studies at York University (Toronto). Students and professors in the department came from many different academic niches. I found the interdisciplinary environment both safe (for unorthodox ideas) and stimulating.6

Here’s what I like about interdisciplinary programs. First, they’re designed to be open. You can usually take courses from any department. (I took courses in ecology, sociology, and political science, among others.) Second, you’ll be around people with diverse ideas. Budding economists need to interact not only with other social scientists, but with biologists, ecologists, and other natural scientists. In an interdisciplinary program, you can do so daily. That stops you from getting stuck in an academic ‘silo’.

Studying economics in an interdisciplinary environment does have some downsides — not for doing science, but for advancing your career. If you dream of working in a university economics department, getting an interdisciplinary education is potentially career limiting. Economics departments tend to hire people who received a ‘traditional’ (i.e. neoclassical) economics training. (This revolving door is a big reason why the economics discipline is slow to change.) So if your goal is to be an economics professor, know that studying in an interdisciplinary program limits your options.

That being said, an interdisciplinary training also opens new doors. There’s a growing number of interdisciplinary programs where you could teach evonomics. And outside academia, the general public hungers for new economic thinking. For the past forty years, public discourse has been dominated by individualistic dogma. There has never been a greater need to remind people that humans are a social species, and that we’ve evolved to be so.

Majestic and practical

One of the beautiful things about evolutionary thinking is that it’s both majestic and practical.

Let’s start with the majestic. Humans are a social species — that much you probably know. But do you know the evolutionary history of our sociality? It extends deep into our past — far beyond what you might think. The evolution of sociality starts not with social animals like ourselves, but with social cells. Your body is not a single entity, but rather a group of cells that have evolved to cooperate. All multicellular organisms are similar — an evolved group of cooperating cells.

We can go deeper still. Your cells are composed of social organelles. Eukaryotic cells (like yours) are the result of an ancient merger of social prokaryotes. One prokaryote became the cytoplasm and nucleus. The other became the mitochondria — the cell powerhouse. Going still deeper, these organelles are composed of social molecules that somehow (at the dawn of life) managed to ‘cooperate’.7

Human sociality, then, is part of a long social lineage among life on Earth. Organisms that were once autonomous started to cooperate in groups. Sometimes these groups became so coherent that we think of them as ‘individuals’. It’s a story that has repeated countless times over a billion years. And humans are part of it. When I study economics, I try to keep this majestic history in mind.

What about practical problems? Perhaps surprisingly, the deep history of sociality is of practical importance. All social animals — whether human or otherwise — must solve a basic problem. For groups to succeed, group members must act prosocially. The problem is that within groups, it’s usually best for individuals to act selfishly. Among social animals, then, there’s a clash between individual interest and group interest. David Sloan Wilson and E.O. Wilson call this clash ‘the fundamental problem of social life’.8

Successful social animals have all solved this fundamental problem. They have suppressed self interest (at least to some degree) and promoted prosociality. The question is how?

In some animals (like ants), it seems clear that prosociality is instinctual. But in other animals (like humans) prosociality must be nurtured. What norms and institutions best foster human cooperation? We’re only beginning to answer this question.9 But what seems clear is that this question should form the foundation of economics.

Back to you, the aspiring economist. Do you marvel at the deep history of our evolved sociality? Do you also want to improve humanity’s lot? If so, the economics discipline needs you. Help move the field beyond its obsession with individualism. What awaits you is not fame or fortune, but the satisfaction of making the world a better place.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.


[Cover image: Pixabay]

Notes

  1. From David Sloan Wilson’s The Neighborhood Project.↩
  2. ‘Globules of desire’ is political economist Thorstein Veblen’s mocking term for the economic model of human behavior. See Veblen’s seminal essay Why is Economics not an Evolutionary Science?↩
  3. Humanity’s ultrasocial nature is a hot topic of research. Two good books on the subject are Peter Turchin’s Ultrasociety and E.O. Wilson’s The Social Conquest of the Earth.↩
  4. It’s not just renegades like myself who compare economic theory to ‘phlogiston’. Nobel-prize winning economist Paul Romer made the analogy in his essay The trouble with macroeconomics.↩
  5. For a satire of life in a (mainstream) economics department, see Axel Leijonhufvud’s essay Life Among the Econ.↩
  6. Students of evonomics take note: York University, together with McGill University and University of Vermont, is currently hosting the Economics for the Anthropocene graduate program. It aims to give an economics education that connects the ecological and economic realities of the Anthropocene.↩
  7. The deep history of sociality is often called the ‘major transitions in evolution’. See John Maynard Smith and Eörs Szathmáry’s book of the same name for an exposition. For an interpretation of these transitions using the theory of multilevel selection, see Samir Okasha’s paper Multilevel Selection and the Major Transitions in Evolution.↩
  8. See David Sloan Wilson and E.O. Wilson’s essay Rethinking the theoretical foundation of sociobiology.↩
  9. For research on how human groups suppress selfishness and promote altruism, see David Sloan Wilson, Elinor Ostrom, and Michael Cox’s paper Generalizing the core design principles for the efficacy of groups.↩

Further reading

Leijonhufvud, A. (1973). Life among the econ. Economic Inquiry, 11(3), 327–337.

Okasha, S. (2005). Multilevel selection and the major transitions in evolution. Philosophy of Science, 72(5), 1013–1025.

Romer, P. (2016). The trouble with macroeconomics. The American Economist, 20, 1–20.

Smith, J. M., & Szathmary, E. (1997). The major transitions in evolution. Oxford University Press.

Turchin, P. (2016). Ultrasociety: How 10,000 years of war made humans the greatest cooperators on earth. Chaplin, Connecticut: Beresta Books.

Veblen, T. (1898). Why is economics not an evolutionary science? The Quarterly Journal of Economics, 12(4), 373–397.

Wilson, D. S. (2011). The neighborhood project: Using evolution to improve my city, one block at a time. New York: Little, Brown & Company.

Wilson, D. S., Ostrom, E., & Cox, M. E. (2013). Generalizing the core design principles for the efficacy of groups. Journal of Economic Behavior & Organization, 90, S21–S32.

Wilson, D. S., & Wilson, E. O. (2007). Rethinking the theoretical foundation of sociobiology. The Quarterly Review of Biology, 82(4), 327–348.

Wilson, E. O. (2012). The social conquest of earth. New York: WW Norton & Company.

Mark Twain On The ‘Growing’ Wisdom Of Our Parents

Published by Anonymous (not verified) on Thu, 10/09/2020 - 12:12am in

Mark Twain is famously said to have revised his assessment of his parents’ wisdom:

When I was seventeen I was convinced my father was a damn fool. When I was twenty-one I was astounded by how much the old man had learned in four years.

Twain’s words speak to a crucial perspectival aspect of our life: our critical judgments are a function of our lived lives and experiences. We appreciate our parents doubly, if not many times more, when we finally become parents ourselves; we realize what their parenting experiences must have been like in their own complex particularity. The people we thought were experts (or sometimes,  less kindly, bumbling fools) were fumbling around themselves, learning the tricks of the parenting trade on the fly, making it up as they went along, sometimes getting it right, sometimes not. We realize how little we knew of them, just as we later realize with a start that our children know very little of us and will live their lives largely free of our presence and inspection and evaluation. We realize too, like Twain, that while our youthful impatience often led us to condemn our parents’ bumbling in matters that seemed straightforward to us, we did so because we did not understand the full dimensions of the problems that perplexed them. The facile solutions we had imagined for our ‘life problems’ had already been considered, rejected, and moved on from by our parents; we must, despite our reluctance, follow in their footsteps. That imperfect solution that so enraged us when we were young now strikes us as a masterful compromise, a skillful navigation between the Scylla and Charybdis of competing moral and parenting imperatives; we can only see that now because we have grown and learned and realized it as such. 

Twain notes too that the more we know, the more we realize we know very little. Moreover, our knowledge now makes our past more ignorant, and our assessments of the ignorance of others ever more flawed. By learning more, we realize how little we know and how much others know. This is especially true of academics who lose confidence as they progress through their PhD; gone is the cocky undergraduate who thought he knew everything; in his place stands the modest and humble grad who has learned how vast human knowledge is, how insuperable its problems, and how much everyone else knows in the fields in which he did not pursue further study; he learns that in his chosen field, many have explored its furthest reaches with diligence and creativity. We realize we have shrunk while the world has grown; the road we have set out on speaks of no end. 

Youth is wasted on the young; the wisdom of this claim is never more apparent than when we realize how we muddled around in our fogs of misconceptions and ignorance, even as it is true that while we were young, we were aware of truths we forget as we grow older.

Kierkegaard On Being Educated By Possibility (And Anxiety)

Published by Anonymous (not verified) on Sat, 29/08/2020 - 6:50am in

In The Concept of Anxiety, Soren Kierkegaard writes

Whoever is educated by anxiety is educated by possibility, and only he who is educated by possibility is educated according to his infinitude. Therefore possibility is the weightiest of all categories….in possibility all things are equally possible, and whoever has truly been brought up by possibility has grasped the terrible as well as the joyful. So when such a person graduates from the school of possibility, and he knows better than a child knows his ABC’s that he can demand absolutely nothing of life and that the terrible, perdition, and annihilation live next door to every man, and when he has thoroughly learned that every anxiety about which he was anxious came upon him in the next moment-he will give actuality another explanation, he will praise actuality, and even when it rests heavily upon him, he will remember that it nevertheless is far, far lighter than possibility was. [Chapter V, ‘Anxiety as Saving Through Faith’]

All too often in this ‘profound and byzantine’ work¹, Kierkegaard is elliptical. Here, he hits a sustained note of lucidity. That ‘all things are equally possible’ – especially from the standpoint of human uncertainty, epistemic limitation and capacity – is a truly terrifying thought; for we know that within ‘all things’ are truly included all things, good and evil, painful and pleasurable. There is no limitation here, save that of logic and that of conceptual imagination. Monsters lurk here, as do angels. Here, indeed, be dragons. To grasp the terrible as well as the joyful here is to grasp that life is not bounded normatively or physically by these; there are no boundaries beyond which the terrible cannot advance, no wall that can hold it back; there is no specified interval for joys to last, they may be as fleeting and ephemeral as the lightest of our quicksilver fancies.

To be educated by this knowledge, to be truly educated by the journey here, one must plumb its depths, and soar into and above its heights. Here anxieties acquire shape and form, crystallizing into fears; here, within the space of possibility, as we look around at its curling edges we see abysses lurking–these indicate the limits of our imagination, beyond which monsters worse than the ones our minds have been able to conjure up find their abode. 

To retreat from this space into that of actuality, the lived empirical life, is to arrive suitably chastened by the realization that we had ever dared demand from this world any consolation whatsoever; we learn to give thanks for the spaces of possibility that have been realized in our lives to our favor; this actual, realized world, for all its terrors, is still less onerous than the world whose contours we had so vividly and powerfully sketched as we traversed the spaces of possibility. It is our memory and our understanding of possibility – another name for anxiety – that weighs us down in the actual; the closer we look possibility in the face–as the Stoics, too, urged us to do–the more of a home we find in actuality, which for all its terrors, is still only a subset of the possible.

Notes: 

  1. Gordon Marino in the New York Times

Why ‘General Intelligence’ Doesn’t Exist

Published by Anonymous (not verified) on Tue, 18/08/2020 - 10:18pm in

Donald Trump took an IQ test … you’ll never guess what he scored!

Apologies. That was my attempt at clickbait.1 Now that I’ve hooked you, let’s talk about the elephant in the room. No, not Donald Trump. Let’s talk about IQ.

For as long as I can remember I’ve been skeptical of measures of ‘intelligence’. The whole procedure rubbed me the wrong way. You take a test, get a score, and find out your ‘intelligence’. Doesn’t that seem weird? In school, I took hundreds of tests. None of them claimed to measure ‘intelligence’. It was clear to me (and to everyone else) that each test measured performance on specific tasks. But IQ tests are somehow different. Rather than measure specific skills, IQ tests claim to measure something more expansive: general intelligence.

I think this claim is bullshit. The problem, as I see it, is that ‘general intelligence’ doesn’t really exist. It’s a reified concept — a vague abstraction made concrete through a series of arbitrary decisions.

To see the arbitrariness, let’s use different words. Substitute ‘intelligence’ with ‘performance’. Imagine that your friend tells you, “I just took a general performance test. I scored in the top percentile!” You’d ask, “What did you perform? Did you make a painting? Do some math? Play music? Play a video game?” It’s obvious that this ‘general performance’ test is arbitrary. Someone thought of some tasks, measured performance on these tasks, and added up the results. Presto! They (arbitrarily) measured ‘general performance’.

This arbitrariness is part of any measure that aggregates different skills. The problem is that the skills that we select will affect what we find. That’s because a person who is exceptional on one set of tasks may be average on another set. And so our aggregate measurement depends on what we include in it. This is true of ‘general performance’. And it’s true of ‘general intelligence’.

The word ‘intelligence’, however, carries a mystique that ‘performance’ does not. No one believes that ‘general performance’ exists. Yet many people think that ‘general intelligence’ lurks in the brain, waiting to be measured.

It doesn’t.

A complete (and hence, objective) measure of ‘general intelligence’ is forever beyond our reach. And if we forge ahead anyway, we’ll find that how we define intelligence affects what we find.

Speaking of ‘intelligence’

I’ll start this foray into intelligence not with psychology, but with linguistics. Language is, in many ways, a barrier to science. The problem is that everyday language is imprecise. Usually that’s a good thing. Vagueness allows us to communicate, even though our subjective experiences are different. We can talk about ‘love’, for instance, even though we each define the word differently. And we can talk about ‘intelligence’, even though the concept is poorly defined.

In everyday life, this vagueness is probably essential. Without it, we’d spend all day agreeing on definitions. But in science, vague language is ruinous. That’s because how we define concepts determines how we measure them. Without a precise definition, precise measurement is impossible. And without precise measurement, there is no science.

Take, as an example, something as simple as mass. In everyday language, we use the word ‘mass’ as a synonym for ‘weight’. Usually that’s not a problem. But if you want to do physics, you need to be more precise. Equating ‘mass’ with ‘weight’ implies that you can use a spring scale to measure ‘mass’. But that’s true only in certain circumstances.

In physics, ‘mass’ has a specific definition. It’s the resistance to acceleration.2 Now, spring scales can measure mass, but only in the correct setting. That’s because spring scales technically measure ‘force’, not ‘mass’. But the two concepts are related. According to Newton’s laws, force is proportional to mass times acceleration (F = ma). So if we know the acceleration and the force, we can infer mass. On Earth, the downward acceleration of gravity is (nearly) constant.3 That means we can use the force registered on a spring scale to measure mass. But this works only if you’re at rest. If you’re in an accelerating elevator, your bathroom scale will mislead you.

The point of this foray into physics is to highlight how measurement follows from a definition. Newton defined mass as force per unit of acceleration: m = F/a. From this precise definition follows precise measurement.
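To make the definition-to-measurement link concrete, here is a toy sketch (my own illustrative numbers, not from the text) of how a spring-scale reading turns into a mass estimate, and how the naive estimate goes wrong in an accelerating elevator:

```python
# Toy illustration (my own numbers, not from the text): inferring
# mass from a spring-scale reading via m = F / a.

G = 9.81  # gravitational acceleration at Earth's surface, m/s^2

def mass_from_scale(force_newtons, extra_acceleration=0.0):
    """Infer mass (kg) from the force (N) a spring scale registers.

    At rest the only acceleration is gravity, so m = F / g. In an
    elevator accelerating upward at a m/s^2, the scale registers
    F = m * (g + a), so we must divide by (g + a) instead.
    """
    return force_newtons / (G + extra_acceleration)

# A 70 kg person at rest: the scale reads 70 * 9.81 = 686.7 N.
print(mass_from_scale(686.7))       # about 70.0 kg

# The same person in an elevator accelerating upward at 2 m/s^2:
# the scale now reads 70 * (9.81 + 2) = 826.7 N.
print(mass_from_scale(826.7))       # about 84.3 kg: the naive (wrong) reading
print(mass_from_scale(826.7, 2.0))  # about 70.0 kg: corrected for the elevator
```

The precise definition (m = F/a) is what tells us when the bathroom-scale shortcut is valid and when it misleads.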

Back to intelligence. In the same way that we speak of ‘mass’ in colloquial terms, we also speak of ‘intelligence’. But whereas physicists have devised their own precise definition of ‘mass’ (that differs from the colloquial usage), psychologists have not devised a precise definition of ‘intelligence’. This makes its measurement problematic.

When we measure ‘intelligence’, what exactly are we quantifying? Perhaps an easier question is what are we excluding? When we measure mass, for instance, we exclude ‘color’. That’s because according to Newton’s laws, color doesn’t affect mass. So what doesn’t affect ‘intelligence’?

Most human behavior.

If you look at how IQ tests are constructed, they exclude an enormous range of human behavior. They exclude athletic ability. They exclude social and emotional savvy. They exclude artistic skill (visual, musical, and written). The list goes on.

What’s the reason for this exclusion? It stems not from any scientific concept of ‘intelligence’, but rather, from the colloquial definition of the word. In common language, great musicians are not considered ‘intelligent’. They are ‘talented’. The same is true of a host of other skilled activities. In common parlance, the word ‘intelligent’ is reserved for a specific suite of skills that we might call ‘book smarts’. A mathematician is intelligent. An artist is talented.

There’s nothing wrong with this type of distinction. In fact, it highlights an interesting aspect of human psychology. We put actions into categories and use different words to describe them. Sometimes we even use different categories when the same task is done by different objects. When a person moves through water, we say that they ‘swim’. But when a submarine does the same thing, it doesn’t ‘swim’. It ‘self propels’. Similarly, when a person does math, they ‘think’. But when a computer does math, it ‘computes’.4

This type of arbitrary distinction isn’t a problem for daily life. But it’s a problem for science. Science usually requires that we abandon colloquial definitions. They’re simply too vague and too arbitrary to be useful. That’s why physicists have their own definition of ‘mass’ that differs from the colloquial concept. But with ‘intelligence’, something weird happens. Cognitive psychologists use the colloquial concept of ‘intelligence’, which arbitrarily applies to a narrow range of human behaviors. Then they attempt to measure a universal quantity from this arbitrary definition. The result is incoherent.

Intelligence as computation

If we want to measure something, the first thing we need to do is define it precisely. So how should we define ‘intelligence’? We should define it, I believe, by turning to computer science. That’s because one of the best ways to understand our own intellect is to try to simulate it on a computer. When we do so, we realize that the concept of intelligence is quite simple. Intelligence is computation.

This simplicity doesn’t mean that intelligence is easy to replicate. We struggle, for instance, to make computers drive cars — a task that most people find mundane. But defining intelligence as ‘computation’ tells us which tasks require ‘intellect’ and which do not. Catching a ball requires intellect because for a computer to do so, it must calculate the ball’s trajectory. But the ball itself doesn’t need intellect to move on its trajectory. That’s because the laws of physics work whether you’re aware of them or not.

Having defined intelligence as computation, we immediately run into a problem. We find that ‘general intelligence’ can’t be measured. Here’s why. Our definition implies that ‘general intelligence’ is equivalent to ‘general computation’. But ‘general computation’ doesn’t exist.

To see this fact, imagine asking a software engineer to write a program that ‘generally computes’. They’d look at you quizzically. “Computes what?” they’d ask. This reaction points to something important. While we can speak of ‘computation’ in the abstract, real-world programs are always designed to solve specific problems. A computer can add 2 + 2. It can calculate π. It can even play Jeopardy. But what a computer cannot do is ‘generally compute’. The reason is simple. ‘General computation’ is unbounded. A machine that can ‘generally compute’ could solve every specific problem that exists. It could also solve every problem that will ever exist.

This unboundedness raises a giant red flag for measuring intelligence. If ‘general computation’ is unbounded, so is ‘general intelligence’. This means that neither concept can be measured objectively.

Think of it like a sentence. Suppose that your friend tells you that they’ve constructed the longest sentence possible. You know they’re wrong. Why? Because sentences are unbounded. No matter how long your friend’s sentence, you can always lengthen it with the phrase “and then …”. The same is true of ‘general computation’. If someone claims to have definitively measured ‘general computation’, you can always show that they’re wrong. How? By inventing a new problem to solve.

The same is true of ‘general intelligence’. Any measure of ‘general intelligence’ is incomplete, because we can always invent new tasks to include. This means that a definitive measure of ‘general intelligence’ is forever beyond our reach.

Impossible … but let’s do it anyway

I don’t expect the argument above to convince many cognitive psychologists to stop measuring intelligence. That’s because a general dictum in the social sciences seems to be:

If you cannot measure, measure anyhow.5

As a social scientist, I understand this dictum (although I don’t agree with it). It arises out of practicality. Many concepts in the social sciences are poorly defined. If we waited for precise definitions of everything, we’d never measure anything. The solution (for many social scientists) is to pick an arbitrary definition and run with it.

With the ‘measure anyhow’ dictum in mind, let’s forge ahead. Let’s pick an arbitrary set of tasks, measure performance on these tasks, and call the result ‘intelligence’.

Which tasks should we include? If intelligence is computation, every human task is fair game. (I can’t think of a single task that doesn’t require computation by the brain. Can you?) Let’s spell out this breadth. Any conscious activity is fair game for our intelligence test. So is any unconscious activity.

Against this vast set of behavior, think about the narrowness of IQ tests. Taking them involves sitting at a desk, reading and responding to words. That’s an astonishingly narrow set of human behavior. And yet IQ tests claim to measure ‘general intelligence’.

Variation in intelligence

That IQ tests are ‘narrow’ is an old critique that I don’t want to dwell on. Instead, I want to ask a related question. If we widened our test of intelligence, what would we find? Unfortunately, no one has ever attempted a broad test that includes the full suite of human behavior. So we don’t know what would happen. Still, we can make a prediction.

To do so, we’ll start with a rule of thumb. The narrower the task, the more performance between people will vary. Conversely, the broader a task, the less performance between people will vary. The consequence of this rule is that as we add more tasks to our measure of intelligence, variation in intelligence should collapse.

This prediction stems in part from our intuition about the mind. But it also stems, as I explain below, from basic mathematics.

Chess power

Back to our rule of thumb. The narrower a task, the more performance will vary between individuals.

To grasp this rule, ask yourself the following question: who is the world’s best gamer? That’s hard to know. There are many different games, and everyone is better at some than others. Now ask yourself: who is the best chess player? That’s easier to know. The best chess players — the grandmasters — stand out from the crowd.

This thought experiment suggests that abilities at specific games vary more than abilities at a wide range of games. Why is this? I suspect it’s because the rules of a specific game restrict the range of allowable behavior. This constraint emphasizes subtle differences in how we think. In everyday life, such differences are imperceptible. But games like chess bring them to the forefront. In chess, a minute cognitive difference gets amplified into a huge advantage.

This rule of thumb raises an interesting question. At ultra-narrow tasks like chess, how much does individual ability vary? Like most aspects of human performance, we don’t really know. But we can hazard a guess. And we can use our definition of intelligence to do so.

Intelligence, I’ve proposed, is computation. Taking this literally, suppose we measured chess-playing intelligence in terms of the computer power needed to defeat you. How much would this computer power vary between people? We don’t have rigorous data. But history does provide anecdotal evidence. Let’s look at the computer power needed to defeat two different men: Hubert Dreyfus and Garry Kasparov.

Hubert Dreyfus was an MIT professor of philosophy. A vocal critic of machine intelligence, Dreyfus argued bellicosely that computers would never beat humans at chess. In 1967, Dreyfus played the chess-playing computer Mac Hack VI. He lost. What is perhaps most humiliating, in hindsight, is that Mac Hack ran on a computer that today wouldn’t match a smartphone. To beat Dreyfus, Mac Hack evaluated about 16,000 chess positions per second.

Despite humiliating Dreyfus, computers like Mac Hack were no match for the best human players. Not even close. Take, as an example, chess grandmaster Garry Kasparov. In 1985, Kasparov beat thirty-two different chess-playing computers simultaneously. (As Kasparov describes it, he “walked from one machine to the next, making … moves over a period of more than five hours.”) Still, Kasparov was eventually defeated. In 1997 he lost to IBM’s Deep Blue. But what testifies to Kasparov’s astonishing ability is the computational power needed to beat him. Deep Blue could evaluate 200 million positions per second. That’s about 10,000 times more computing power than needed to beat Hubert Dreyfus.
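As a back-of-the-envelope check on that ratio, using the two search speeds quoted above:

```python
# The two machines' search speeds, as quoted in the text.
mac_hack_pps = 16_000        # positions per second, vs. Dreyfus (1967)
deep_blue_pps = 200_000_000  # positions per second, vs. Kasparov (1997)

ratio = deep_blue_pps / mac_hack_pps
print(ratio)  # 12500.0, i.e. on the order of 10,000 times more computing power
```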

So Garry Kasparov may have been 10,000 times better at chess than Hubert Dreyfus. But was he 10,000 times more intelligent? Unlikely. The reason stems from our rule of thumb. Yes, performance can vary greatly when tasks are hyper specific. But as we broaden tasks, performance variation will decrease.

Think about it this way. At his peak, Kasparov was certainly the greatest chess player. But he was not the greatest Go player. Nor was he the greatest bridge player. So if we measured Kasparov’s intelligence at many different types of games, he would appear less exceptional. That’s because his stupendous ability at chess would be balanced by his lesser ability at other games. If we moved beyond gaming to the full range of human tasks, Kasparov’s advantage would lessen even more. The reason is simple. No one is the greatest at everything.

A central limit

When we generalize the principle that ‘no one is the greatest at everything’, something startling happens. We find that the more broadly we define ‘intelligence’, the less variation we expect to find. The reason, interestingly, has little to do with the human mind. Instead, it stems from a basic property of random numbers.

This property is described by something called the central limit theorem. As odd as it sounds, the central limit theorem is about the non-random behavior of random numbers. I’ll explain with an example. Suppose that I have a bag containing the numbers 0 to 10. From this bag, I draw a number and record it. I put the number back into the bag and draw another number, again recording it. Then I calculate the average of these numbers. Let’s try it out. Suppose I draw a 1 followed by a 7, giving an average of 4. Repeating the process, I draw an 8 followed by a 10, giving an average of 9. As expected, the numbers vary randomly, and so does the corresponding average. But according to the central limit theorem, there’s order hidden under this randomness.

Like our random numbers themselves, you’d think that the average of our sample is free to bounce around between 0 and 10. But it’s not. Variation in the average, it turns out, depends on the sample size. For a small sample, the average could indeed be anything. But for a large sample, this isn’t true. As my sample size grows, the central limit theorem tells us that the spread of the average shrinks (in proportion to one over the square root of the sample size), squeezing it ever closer to 5. Stated differently, the more numbers I draw from the bag, the less the average of my sample is allowed to vary.6
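You can watch this happen with a short simulation of the bag-drawing experiment (a sketch of my own, with the trial counts chosen arbitrarily):

```python
# Simulating the bag-of-numbers experiment: draw from {0, ..., 10}
# with replacement and watch the spread of the sample average shrink
# as the sample grows.
import random
import statistics

random.seed(42)

def average_of_draws(n):
    """Average of n draws (with replacement) from the bag {0, ..., 10}."""
    return statistics.mean(random.randint(0, 10) for _ in range(n))

spreads = []
for n in [2, 10, 100, 1000]:
    # Repeat the experiment 1000 times and measure how much the
    # average varies from trial to trial.
    averages = [average_of_draws(n) for _ in range(1000)]
    spreads.append(statistics.stdev(averages))
    print(f"sample size {n:4d}: std. dev. of the average = {spreads[-1]:.3f}")
```

The average of a single draw centers on 5, and the printout shows its trial-to-trial spread falling roughly as one over the square root of the sample size.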

That’s interesting, you say. But what does the central limit theorem have to do with intelligence? Here’s why it’s important. To measure someone’s ‘intelligence’, we take a set of tasks and then average their performance on each task. While seemingly benign, this act of averaging evokes the central limit theorem under the hood. And that causes something startling to happen. It means that the number of tasks included in our measure of intelligence affects the variation of intelligence.

I’ll show you a model of how this works. But first, let’s make things concrete by returning to chess wizard Garry Kasparov. Kasparov, it’s safe to say, is far better at chess than the average person — perhaps thousands of times better. So if we were to measure ‘intelligence’ solely in terms of chess performance, Kasparov would be an unmitigated genius. But as we add other tasks to our measure of intelligence, Kasparov’s genius will appear to decline. That’s because like any human, Kasparov isn’t the greatest at everything. So as we add tasks in which Kasparov is mediocre, his ‘intelligence’ begins to lessen. In other words, Kasparov’s ‘intelligence’ isn’t some definite quantity. It’s affected by how we measure intelligence!

A model of ‘general intelligence’

Let’s put this insight into a model of ‘general intelligence’. Imagine that we have a large sample of people — a veritable cross-section of humanity. We subject each person to a barrage of tests, measuring their performance on thousands of tasks. Their average performance is then their ‘intelligence’.

The problem, though, is that we have to choose which tasks to include in our measure of intelligence. In academic speak, this choice is called the ‘degrees of freedom’ problem. It’s a problem because if a researcher has too much freedom to choose their method, you can’t trust their results. Why? Because they could have cherry-picked their method to get the results they wanted.

Suppose we’re aware of this problem. To solve it, we decide not to pick just one measure of intelligence. We’ll pick many. We start by selecting a single task and using it to measure intelligence. We then measure how intelligence varies across the population. Next, we add another task to our metric, and again measure intelligence variation. We repeat until we’ve included all of the available tasks.

Before getting to the model results, one more detail. Let’s assume that individuals’ performance on different tasks is uncorrelated. This means that if Bob is exceptional at arithmetic, he can be abysmal at multiplication. Bob’s skill at different tasks is completely random. Now, this is obviously unrealistic. (I’ll revise this assumption shortly.) But I make this assumption to illustrate how the central limit theorem works in pure form. This theorem assumes that random numbers are independent of one another. Applied to intelligence, this means that individuals’ performance on different tasks is unrelated.

Figure 1 shows the results of this simple model. The horizontal axis shows the number of tasks included in our measure of intelligence. We start with just 1 task and gradually add more until we’ve included 10,000. For each set of tasks, we measure the ‘intelligence’ of every person. Finally, we measure the variation in intelligence using the Gini index. (A Gini index close to 1 indicates huge variation. A Gini index close to 0 indicates minimal variation.) Plotting this Gini on the vertical axis, we see how the variation of ‘general intelligence’ changes as we add more tasks.


Figure 1: Variation in ‘general intelligence’ decreases as more tasks are measured. Here are the results of a model in which we vary the number of tasks included in a measure of general intelligence. I’ve assumed that individuals’ performance on different tasks is uncorrelated. The vertical axis shows how the variation in general intelligence (measured using the Gini index) decreases as more tasks are added.

According to our model, variation in general intelligence collapses as we add more tasks. Intelligence starts with a Gini index of about 0.38. This represents the performance variation on each task. (I’ve chosen this value arbitrarily.) As we add more tasks, the variation in intelligence collapses. Soon it’s far below variation in standardized tests like the SAT. (SAT scores have a Gini index of about 0.11.)7
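
For readers who want to poke at the uncorrelated model without R, here is a pared-down Python sketch of the same idea. It uses the same (arbitrary) lognormal performance distribution as the R code in the Model Code section, but fewer people and tasks to keep it quick:

```python
import random

random.seed(1)

def gini(x):
    # Gini index via the sorted cumulative-sum formula
    x = sorted(x)
    n = len(x)
    return 2 * sum((i + 1) * v for i, v in enumerate(x)) / (n * sum(x)) - (n + 1) / n

n_people = 2000

def intelligence_gini(n_tasks):
    # 'intelligence' = total (equivalently, average) lognormal score across tasks;
    # the Gini index is scale-free, so totals and averages give the same answer
    totals = [0.0] * n_people
    for _ in range(n_tasks):
        for p in range(n_people):
            totals[p] += random.lognormvariate(1, 0.7)
    return gini(totals)

# one task: Gini near 0.38; a hundred tasks: the variation collapses
print(round(intelligence_gini(1), 2), round(intelligence_gini(100), 2))
```

With a single task the Gini index sits near the 0.38 quoted in the text; by a hundred tasks it has already fallen well below SAT-level variation.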

The takeaway from this model is that our measure of intelligence is ambiguous. There is no definitive value, but instead a huge range of values. If we include only a few tasks, ‘intelligence’ is unequally distributed. But as we add more tasks, ‘intelligence’ becomes almost uniform. This doesn’t mean that the properties of people’s intellect change. Far from it. Our results are caused by the act of measurement itself. How we define ‘intelligence’ affects how it varies.

A more realistic model

The model above comes with a big caveat. I’ve assumed that performance on different tasks is uncorrelated. This is dubious. If Bob is exceptional at arithmetic, he’s probably also exceptional at multiplication.

This correlation between related abilities is common sense. It’s also scientific fact. Performance on different parts of IQ tests tends to be correlated. If you score well on the language portion, for instance, you’ll also likely score well on the math portion. Knowledge of this correlation dates to the early 20th-century work of psychologist Charles Spearman. He found that performance of English school children tended to correlate across seemingly unrelated subjects. This correlation between different abilities is important because it’s the main evidence for ‘general intelligence’. It suggests that underneath diverse skills lies some ‘general intellect’. Charles Spearman called it the g factor.

Given that abilities tend to correlate, let’s revise our model. We’ll again measure performance on a wide variety of tasks. But now, let’s assume that performance on ‘adjacent’ tasks is highly correlated.

Here’s how it works. Suppose task 1 is simple arithmetic and task 2 is simple multiplication. I’ll assume that performance on the two tasks is 99% correlated (meaning the correlation coefficient is 0.99). This means that if you’re great at arithmetic, you’re also great at multiplication. But I’ll go further. I’ll assume that performance on any adjacent pair of tasks is 99% correlated. Suppose that task 3 is simple division. Performance on this task is 99% correlated with performance on task 2 (multiplication). Task 4 is exponentiation. Performance on task 4 is 99% correlated with task 3 (division). This correlation between adjacent tasks goes on indefinitely. Performance on task n is always 99% correlated with performance on task n-1.
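
The chain of adjacent correlations can be sketched directly. In this illustrative Python snippet, each task's scores are built from the previous task's scores plus a little fresh noise, giving a 0.99 correlation between neighbours, and (as a side effect) a correlation that decays geometrically with distance (0.99^50 ≈ 0.61 for tasks fifty apart):

```python
import math, random

random.seed(7)

def corr(a, b):
    # Pearson correlation coefficient
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

n_people, rho = 5000, 0.99

# task 1: standard-normal scores; every later task mixes the previous task's
# scores with fresh noise, so adjacent tasks are 99% correlated
tasks = [[random.gauss(0, 1) for _ in range(n_people)]]
for _ in range(50):
    prev = tasks[-1]
    tasks.append([rho * x + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
                  for x in prev])

print(round(corr(tasks[0], tasks[1]), 2))   # adjacent tasks: about 0.99
print(round(corr(tasks[0], tasks[50]), 2))  # fifty apart: about 0.99**50, or 0.61
```

This gradient is exactly the structure described above: strong correlation between neighbouring abilities that diffuses as tasks grow more distant.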

The effect of this correlation is twofold. First, it creates a broad correlation in performance across all tasks. So if you went looking for a ‘g-factor’, you’d always find it. Second, it creates a gradient of ability. So if you’re excellent at multiplication, you’re also excellent at related tasks like addition. But this excellence diffuses as we move to unrelated tasks (say cooking). This gradient, I think, is a realistic model of human abilities.

With this more realistic model, it may seem that ‘general intelligence’ is better defined. If performance between tasks is highly correlated, it seems like there really is some ‘general intellect’ waiting to be measured.

And yet there isn’t.

As Figure 2 shows, variation in ‘general intelligence’ is still a function of the number of tasks measured. When we measure few tasks, intelligence varies greatly between individuals. But as we add more tasks, this variation collapses. This pattern is an unavoidable consequence of the central limit theorem. The more random numbers we add together, the less the corresponding average varies (even when these random numbers are highly correlated).


Figure 2: Variation in ‘general intelligence’ decreases as more tasks are measured, even when performance on adjacent tasks is highly correlated. Here are the results of a second model in which we vary the number of tasks included in our measure of general intelligence. This time individuals’ performance on different tasks is highly correlated. I assume performance on adjacent tasks (meaning task n and n+1) is 99% correlated. The vertical axis shows how the variation in general intelligence (measured using the Gini index) decreases as more tasks are added.

The results of this model are unsettling. Despite strong correlation between performance on different tasks, it seems that ‘general intelligence’ is still ambiguous. It’s not a definite property of the brain. Instead, it’s a measurement artifact that we actively construct.
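
To see the collapse numerically without running the full R model, here is a smaller Python version (illustrative parameters: 500 people, 5,000 tasks, the same 0.99 adjacent correlation; each new task mirrors the simcor trick of mixing the previous scores with fresh noise):

```python
import math, random

random.seed(3)

def gini(x):
    # Gini index via the sorted cumulative-sum formula
    x = sorted(x)
    n = len(x)
    return 2 * sum((i + 1) * v for i, v in enumerate(x)) / (n * sum(x)) - (n + 1) / n

def mean_sd(x):
    n = len(x)
    m = sum(x) / n
    return m, math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))

n_people, n_tasks, rho = 500, 5000, 0.99

def next_task(x):
    # scores 99% correlated with the previous task, with the same mean and
    # spread; abs() keeps performance positive, as in the R model
    m, s = mean_sd(x)
    noise = math.sqrt(1 - rho ** 2)
    return [abs(m + rho * (xi - m) + s * noise * random.gauss(0, 1)) for xi in x]

task = [random.lognormvariate(1, 0.7) for _ in range(n_people)]
totals = task[:]
g_first = gini(totals)

for _ in range(n_tasks - 1):
    task = next_task(task)
    totals = [t + x for t, x in zip(totals, task)]

g_last = gini(totals)
print(round(g_first, 2), round(g_last, 2))  # the Gini falls despite the correlation
```

The correlation slows the collapse (it takes thousands of tasks rather than dozens), but it cannot stop it.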

Multiple intelligences?

One of the long-standing criticisms of IQ tests is that they are too narrow. They measure only one ‘type’ of intelligence. The alternative, critics propose, is that many types of intelligence exist. This leads to the theory of ‘multiple intelligences’.

At first glance, such a theory seems convincing. There are many types of human abilities. Why not assign each of these abilities its own domain of ‘intelligence’ and then measure it accordingly? Sounds good, right?

While I’m sympathetic to this approach, I think it grants too much credence to orthodox measures of intelligence. It effectively says ‘you can keep your standard measure of intelligence, but we’ll add others to it’. The problem is that the arguments for ‘general intelligence’ can always be used to undermine the theory of ‘multiple intelligences’. Suppose we discover that different ‘types’ of intelligence are correlated with the ‘g-factor’ (a real finding). This suggests that intelligence isn’t multiple, but general.

What I’ve tried to show here is that even if we grant a strong correlation between different abilities, the measure of ‘general intelligence’ is still ambiguous. We can never objectively measure ‘general intelligence’ because the concept is unbounded. This means that any specific measure is incomplete, and worse still, arbitrary. We can put on a brave face and measure anyway. But doing so won’t solve the problem. Instead, we’ll find that ‘intelligence’ is circularly affected by how we’ve defined it.

Does this mean we shouldn’t measure human abilities? Of course not. Specific abilities can be measured. The trouble comes when we attempt to measure general abilities. The problem is that such abilities are fundamentally ill-defined. The sooner we realize this, the sooner we can put ‘general intelligence’ in its proper place: the trash bin of history.

Support this blog

Economics from the Top Down is where I share my ideas for how to create a better economics. If you liked this post, consider becoming a patron. You’ll help me continue my research, and continue to share it with readers like you.



[Cover image: Pixabay]

Model Code

Here’s the code for my model of intelligence. It runs in R. Use it and change it as you see fit.

The model assumes that performance on each task is lognormally distributed. You can vary this distribution by changing the parameters inside the rlnorm function. In the first model (iq_uncor), performance is completely random. But in the second model (iq_cor), performance on each task is 99% correlated with performance on the previous task. I create the correlation using the function simcor. To vary the correlation, change the value for task_cor (to any value between 0 and 1).

library(ineq)
library(data.table)

# number of tasks in IQ test
n_tasks = 10^4

# number of people
n_people = 10^4

# task correlation (model 2)
task_cor = 0.99

# distribution of performance on each task
performance = function(n_people){ rlnorm(n_people, 1, 0.7) }

# mean and standard deviation of performance on each task
perf_mean = mean(performance(10^4))
perf_sd = sd(performance(10^4))

# function to generate a random variable with a given correlation to x,
# preserving the mean and standard deviation of task performance
simcor = function(x, correlation) {

  n = length(x)

  # mix the standardized x with independent residual noise so that
  # cor(x, y_result) = correlation
  y = rnorm(n)
  z = correlation * scale(x)[, 1] +
    sqrt(1 - correlation^2) * scale(resid(lm(y ~ x)))[, 1]

  y_result = perf_mean + perf_sd * z

  return(y_result)
}

# output vectors (Gini index of IQ)
g_uncor = rep(NA, n_tasks)
g_cor = rep(NA, n_tasks)

# loop over tasks
pb <- txtProgressBar(min = 0, max = n_tasks, style = 3)

for(i in 1:n_tasks){

  if(i == 1){

    # first task
    x_uncor = performance(n_people)
    iq_uncor = x_uncor

    x_cor = performance(n_people)
    iq_cor = x_cor

  } else {

    # all other tasks: add performance on a new task to the running total
    x_uncor = performance(n_people)
    iq_uncor = iq_uncor + x_uncor

    # new task is correlated with the previous one (abs keeps scores positive)
    x_cor = abs( simcor(x_cor, task_cor) )
    iq_cor = iq_cor + x_cor

  }

  # Gini index of IQ (total performance so far)
  g_uncor[i] = Gini(iq_uncor)
  g_cor[i] = Gini(iq_cor)

  setTxtProgressBar(pb, i)

}

results = data.table(n_task = 1:n_tasks, g_uncor, g_cor)

# export
fwrite(results, "iq_model.csv")

Notes

  1. This kind of clickbait is all over the internet. Here’s a real example: “What is Donald Trump’s IQ? His IQ test scores will shock you”.↩
  2. Actually, ‘mass’ has a dual meaning in physics. Mass is ‘resistance to acceleration’ — usually called the inertial mass. But mass is also what causes gravitational pull — the gravitational mass. According to the equivalence principle, the two masses are the same. That’s why all objects accelerate identically in the same gravitational field.↩
  3. Where is the gravitational ‘acceleration’ when you’re standing (at rest) on the bathroom scale? The convention, in physics, is to treat the acceleration as what would occur if the Earth were removed from beneath your feet and you entered free fall. Since you’re not in free fall, it follows that the Earth is constantly working to stop this acceleration by applying an upward force (what physicists call the ‘normal’ force). The bathroom scale measures this upward force. Given the known acceleration in free fall (9.8 m/s²), you can use this force to measure your mass. But only if you’re at rest.↩
  4. Noam Chomsky often uses this linguistic analogy when discussing artificial intelligence. Do machines think? A meaningless question, he argues:

    There is a great deal of often heated debate about these matters in the literature of the cognitive sciences, artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed. The question of whether a computer is playing chess, or doing long division, or translating Chinese, is like the question of whether robots can murder or airplanes can fly — or people; after all, the “flight” of the Olympic long jump champion is only an order of magnitude short of that of the chicken champion (so I’m told). These are questions of decision, not fact; decision as to whether to adopt a certain metaphoric extension of common usage.

    There is no answer to the question whether airplanes really fly (though perhaps not space shuttles). Fooling people into mistaking a submarine for a whale doesn’t show that submarines really swim; nor does it fail to establish the fact. There is no fact, no meaningful question to be answered, as all agree, in this case. The same is true of computer programs, as Turing took pains to make clear in the 1950 paper that is regularly invoked in these discussions. Here he pointed out that the question whether machines think “may be too meaningless to deserve discussion,” being a question of decision, not fact, though he speculated that in 50 years, usage may have “altered so much that one will be able to speak of machines thinking without expecting to be contradicted” — as in the case of airplanes flying (in English, at least), but not submarines swimming. Such alteration of usage amounts to the replacement of one lexical item by another one with somewhat different properties. There is no empirical question as to whether this is the right or wrong decision.

    (Chomsky in Powers and Prospects)

    ↩

  5. This quote comes from Frank Knight, who was commenting on economists’ inability to measure utility. This inability didn’t stop them, however. Economists simply inverted the problem. Utility was supposed to explain prices. But prices, economists proposed, ‘revealed’ utility. Knight’s comment is quoted in Jonathan Nitzan and Shimshon Bichler’s book Capital as Power.↩
  6. The central limit theorem is usually stated as follows. Imagine we sample n numbers from a distribution with mean μ and standard deviation σ. The distribution of the sample mean will then have a standard deviation of \sigma/\sqrt{n} . So as n grows, the standard deviation of the sample mean converges to 0.↩
  7. Here’s how I estimate the Gini index for the SAT. According to College Board, the average score for the 2019 SAT was 1059 and the standard deviation was 210. That gives a coefficient of variation (the standard deviation divided by the mean) of 0.2. Next, we’ll assume that SAT scores are lognormally distributed. The coefficient of variation for a lognormal distribution is CV=\sqrt{e^{\sigma^2} - 1} , where σ is the ‘scale parameter’. Solving for σ gives: \sigma = \sqrt{\log(CV^2 + 1)} . The Gini index of the lognormal distribution is then defined as G=\text{erf}(\sigma/2) , where erf is the Gauss error function. Plugging CV = 0.2 into these equations gives a Gini index of SAT performance of 0.11.↩
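
The arithmetic in this last note is easy to verify. Here's the chain from coefficient of variation to Gini index in a few lines of Python:

```python
import math

cv = 210 / 1059                          # 2019 SAT: standard deviation / mean
sigma = math.sqrt(math.log(cv**2 + 1))   # lognormal scale parameter
g = math.erf(sigma / 2)                  # Gini index of a lognormal distribution
print(round(g, 2))  # 0.11
```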

Further reading

Chomsky, N. (2015). Powers and prospects: Reflections on nature and the social order. Haymarket Books.

Gardner, H. (1985). Frames of mind: The theory of multiple intelligences. Basic Books.

Gould, S. J. (1996). The mismeasure of man. WW Norton & Company.

Thomson, G. H. (1916). A hierarchy without a general factor. British Journal of Psychology, 8(3), 271.

‘I’ Article on ‘Bardcore’ – Postmodern Fusion of Medieval Music and Modern Pop

Published by Anonymous (not verified) on Wed, 05/08/2020 - 8:20pm in

I’m a fan of early music, which is the name that’s been given to music from the ancient period through the medieval to the baroque. My interest partly comes from having studied medieval history at ‘A’ level, and then being in a medieval re-enactment group for several years. Bardcore is, as this article explains, a strange fusion of modern pop and rock with medieval music, played on medieval instruments and with medieval vocal arrangements. I’ve been finding a good deal of it on my YouTube page at the moment, which means that there are a good many people out there listening to it. On Monday the I’s Gillian Fisher published a piece about this strange new genre of pop music, ‘Tonight we’re going to party like it’s 1199’, with the subtitle ‘Bardcore reimagines modern pop with a medieval slant. Hark, says Gillian Fisher’. The article ran:

“Hadst thou need to stoop so low? To send a wagon for thy minstrel and refuse my letters, I need no longer write them though. Now thou art somebody whom I used to know.”

If you can’t quite place this verse, let me help – it’s the chorus from the 2011 number one Somebody That I Used to Know, by Gotye. It might seem different to how you remember it, which is no surprise – this is the 2020 Bardcore version. Sometimes known as Tavernwave, Bardcore gives modern hits a medieval makeover with crumhorns aplenty and lashings of lute. Sometimes lyrics are also rejigged, as per Hildegard von Blingin’s offering above.

Algal (41-year-old Alvaro Galan) has been creating medieval covers since 2016, a notable example being his 2017 version of System of a Down’s Toxicity. Largely overlooked at the time, the video now boasts over 4.4 million views. Full-time musician Alvaro explains that “making the right song at the right moment” is key, and believes that Bardcore offers absolute escapism.

Alvaro says: “What I enjoy most about Bardcore is that I can close my eyes and imagine being in a medieval tavern playing for a drunk public waiting to dance! But from a more realistic perspective, I love to investigate the sounds of the past.”

In these precarious times, switching off Zoom calls and apocalyptic headlines to kick back with a flagon of mead offers a break from the shambles of 2020. Looking back on simpler times during periods of unrest is a common coping mechanism, as Krystine Batcho, professor of psychology at New York’s Le Moyne College, explained in her paper on nostalgia: “Nostalgic yearning for the past is especially likely to occur during periods of transition, like maturing into adulthood or aging into retirement. Dislocation or alienation can also elicit nostalgia.”

The fact that Bardcore is also pretty funny offers light relief. The juxtaposition of ancient sound with 21st-century sentiment is epitomised in Stantough’s medieval oeuvre, such as his cover of Shakira’s Hips Don’t Lie. Originally from Singapore, Stantough (Stanley Yong), 35, says: “I really like the fact we don’t really take it very seriously. We’re all aware what we’re making isn’t really medieval but the idea of modern songs being “medievalised” is just too funny.”

One of Bardcore’s greatest hits is Astronomia by Cornelius Link, which features trilling flutes and archaic vocals by Hildegard. It’s a tune that has been enjoyed by 5.3 million listeners. Silver-tongued Hildegard presides over the Bardcore realm, with her cover of Lady Gaga’s Bad Romance clocking up 5 million views. Canadian illustrator Hildegard, 28, fits Bardcore around work and describes herself as “an absolute beginner” with the Celtic harp and “enthusiastically mediocre” with the recorder. Her lyric adaptations have produced some humdingers such as “All ye bully-rooks with your buskin boots”, which she sings in rich, resonant tones.

Hildegard, who wishes to remain anonymous, believes the Bardcore boom can be “chalked up to luck, boredom and a collective desire to connect and laugh.”

In three months, the Bardcore trend has evolved with some minstrels covering Disney anthems, while others croon Nirvana hits in classical Latin. While slightly absurd, this fusion genre has ostensibly provided a sense of unity and catharsis.

The humming harps and rhythmic tabor beats evoke a sense of connection with our feudal ancestors and their own grim experience of battening down the hatches against the latest outbreak. Alongside appealing to the global sense of pandemic ennui, connecting to our forebears through music is predicated upon the fact that they survived their darkest hours. And so shall we.

While Bardcore’s a recent phenomenon, I think it’s been drawing on trends in pop music that have been happening for quite a long time. For example, I noticed in the 1990s, when I went to a performance by the early music vocal group the Hilliard Ensemble at Brandon Hill in Bristol, that the audience also included a number of Goths. And long-haired hippy types also formed part of the audience for Benjamin Bagby when he gave his performance of what the Anglo-Saxon poem Beowulf probably sounded like, on the Anglo-Saxon lyre, at the Barbican Centre in the same decade.

Bardcore also seems connected to other forms of postmodern music. There’s the group Postmodern Jukebox, whose tunes can also be found on YouTube, who specialise in different 20th-century arrangements of modern pop songs – like doing a rock anthem as a piece of New Orleans jazz, for example. And then there’s Orkestra Obsolete, who’ve arranged New Order’s Blue Monday using the instruments of the early 20th century, including musical saws and theremin. There’s definitely a sense of fun in all these musical experiments, and behind the postmodern laughter it’s good music. And as this article points out, we need that in these grim times.

Here’s an example of the type of music we’re talking about: It’s Samuel Kim’s medieval arrangement of Star Wars’ Imperial March from his channel on YouTube.

And here’s Orkestra Obsolete’s Blue Monday.


Getting Pulled Over; A Teachable Moment

Published by Anonymous (not verified) on Sat, 01/08/2020 - 12:12am in

Last week, while driving in Ketchum, Idaho, I was pulled over for speeding (driving 36 mph in a 25-mph zone). The traffic stop proceeded along expected lines: the police car switched on its flashing red and blue lights as it sidled up behind me, I pulled over to the side of the road, and the policeman walked over and asked for my driver’s license, vehicle registration, and insurance. After I handed those over, I was treated to a brief lecture on the need to observe posted speed limits; I apologized, received a warning, and resumed my journey to a local trailhead.

This little incident was watched, with considerable interest, by my seven-year old daughter, sitting in the backseat. 

After the policeman had driven off in his cruiser, and as we began driving toward our planned hike, I asked my daughter what she made of the encounter she had just witnessed. She said that she’d been a little frightened as the police scare her, but she was happy all had ended well. I asked her why she was scared of the police, and she replied that she’d heard–probably from family conversations–of the terrible things they often do to people they detain, search, arrest or imprison. I then said to her that she’d witnessed an important part of her training and acculturation as a legal subject: she’d learned an important lesson about the reach and power of the law. It was an essential part of her growing up in a ‘legal society,’ in ‘a land of laws, not men.’

For in witnessing a uniformed police officer pull over her father, my daughter had learned that her father, the supposed co-master of the domestic dominion along with her mother, one who regulated most details of her life, was subject to a power greater than his: that of the state, and its armed, uniformed representatives, the police. She’d seen her father, an authority apparently unquestioned (except by her mother), interrupted in his ventures, commanded to cease and desist whatever it was he was doing, reduced to the role of a polite, deferential subject, one only too willing to be inconvenienced by a perfect stranger who just happened to be wearing a gun and a badge. She’d witnessed a presumed regulatory order come crumbling down, replaced by a far more far-reaching, powerful, and certainly impressive one. Nothing in my parenting arsenal of the raised voice, the disapproving tone, the wagging finger, can compete with the starched uniform, the holstered weapon, the flashing lights, the dramatic intervention in a public space. She saw me defer; she saw me obey; she saw me comply. (I’m unfailingly polite with armed police; I am, after all, a brown man with an accent.)

My daughter was in fact, witnessing a species of social construction at work: the sustenance and promulgation of an ideology of law, one essential component of which is to remind the legal subjects of the reach and extent of legal power in showy, public, demonstrations of it. All those who drove by on Highway 75 while I was receiving my little re-education learned a little lesson too; but the most important spectators were the children, legal subjects in training. Children must learn their parents, while powerful, are not the supreme regulators of their lives, the state is. Secular citizens are especially impressed by such displays of the power of the law–there is a new Supreme Force in town, and it wears a blue uniform. 
