‘Mr H Reviews’ on the Casting of Robot Lead in SF Film

Published by Anonymous (not verified) on Sun, 09/08/2020 - 12:26am in

‘Mr H Reviews’ is a YouTube channel specialising in news and opinions on genre films – SF, Fantasy and Horror. In the video below he comments on a piece in the Hollywood Reporter about the production of a new SF movie, which will for the first time star a genuine AI. The movie is simply titled b. Financed by Bondit Capital, which also funded the film Loving Vincent, with the Belgium-based Happy Moon Productions and New York’s Top Ten Media, the film is based on a story by the special effects director Eric Pham with Tarek Zohdy and Sam Khoze. It is about a scientist who becomes unhappy with a programme to perfect human DNA and helps the AI woman he has created to escape.

The robot star, Erica, was created by the Japanese scientists/engineers Hiroshi Ishiguro and Kohei Ogawa for another film. The two, according to the Reporter, taught her to act. That film, which was to be directed by Tony Kaye, who made American History X, fell through. Some scenes for the present movie were already shot in Japan in 2019, and the rest will be shot in Europe next year, 2021.

The decision to make a movie starring a robot looks like an attempt to get round the problems of filming caused by the Coronavirus. However, it also raises a number of other issues. One of these, which evidently puzzles the eponymous Mr H, is how a robot can possibly act. Are they going to use takes and give it direction, as they would a human, or will it instead simply be done perfectly first time, thanks to someone on a keyboard somewhere programming it? He is quite enthusiastic about the project, with some reservations. He supports the idea of a real robot playing a robot, but like most of us rejects the idea that robots should replace human actors. He also approves of the story being written by a special effects director, because such a filmmaker would obviously be aware of how such a project should be shot.

But it also ties in with an earlier video he has made about the possible replacement of humans by their virtual simulacra. According to another rumour going round, Mark Hamill has signed away his image to Lucasfilm, so that Luke Skywalker can be digitally recreated using CGI in future Star Wars films. Mr H wonders whether this is the future of film, and whether humans are now going to be replaced by their computer-generated doubles.

In some ways, this is just the culmination of processes that have been going on in SF films for some time. Animatronics – robot puppets – have been used in Science Fiction films since the 1990s, though admittedly the technology has been incorporated into costumes worn by actors. But not all the time. Several of the creatures in the American/Australian SF series Farscape were such animatronic robots, such as the character Rygel. Some of the robots featured in a number of SF movies were entirely mechanical. The ABC Warrior which appears in the 1990s Judge Dredd film with Sylvester Stallone was deliberately entirely mechanical. The producers wished to show that it definitely wasn’t a man in a suit. C-3PO very definitely was played by a man in a metal costume, Anthony Daniels, but I noticed in the first of the prequels, The Phantom Menace, that a real robot version of the character appears in several scenes. Again, this is probably to add realism to the character. I also think that in the original movie, Episode 4: A New Hope, there were two versions of R2-D2 used. One was the metal suit operated by Kenny Baker, and I think the other was entirely mechanical, operated by radio. Doctor Who during Peter Davison’s era as the Doctor also briefly had a robot companion. This was Kamelion, a shape-changing android, who made his first appearance in The King’s Demons. He was another radio-operated robot, though voiced by a human actor. However, the character was hardly ever used, and his next appearance was when he died in the story Planet of Fire.

And then, going further back, there’s Alejandro Jodorowsky’s mad plan to create a robotic Salvador Dali for his aborted 1970s version of Dune. Dali was hired as one of the concept artists, along with H.R. Giger and the legendary Chris Foss. Jodorowsky also wanted him to play the Galactic Emperor. Dali agreed, in return for a payment of $1 million, but stipulated that he was only going to act for half an hour. So, in order to make sure they got enough footage of the great Surrealist and egomaniac, Jodorowsky was going to build a robot double. The film would also have starred Orson Welles as Baron Vladimir Harkonnen and Mick Jagger as Feyd Rautha, as well as Jodorowsky’s own son, Brontis, as Paul Atreides. The film was never made, as the producers, wondering what was happening to it, pulled the plug at the last minute. I think part of the problem may have been that it was going well over budget. Jodorowsky has said that all the effort that went into it wasn’t wasted, however, as he and the artist Jean ‘Moebius’ Giraud used the ideas developed for the film for their comic series, The Incal. I think that Jodorowsky’s version of Dune would have been awesome, but it would have been far different from the book on which it was based.

I also like the idea of robots performing as robots in an SF movie. A few years ago an alternative theatre company specialising in exploring issues of technology and robotics staged a performance in Prague of the classic Karel Capek play, Rossum’s Universal Robots, using toy robots. I can see the Italian Futurists, the rabid Italian avant-garde artists who praised youth, speed, violence and the new machine world around the time of the First World War, being wildly enthusiastic about this. Especially as, in the words of their leader and founder, Tommaso Marinetti, they looked ‘for the union of man and machine’. But I really don’t want to see robots or CGI recreations replace human actors.

Many films have been put on hold because of the Coronavirus, and it looks like the movie industry is trying to explore all its options for getting back into production. However, the other roles for this movie haven’t been filled, and so I do wonder if it will actually be made.

It could be one worth watching, as much for the issues it raises as its story and acting.

‘I’ Article on ‘Bardcore’ – Postmodern Fusion of Medieval Music and Modern Pop

Published by Anonymous (not verified) on Wed, 05/08/2020 - 8:20pm in

I’m a fan of early music, which is the name that’s been given to music from the ancient period through the medieval to the baroque. It partly comes from having studied medieval history at ‘A’ level, and then being in a medieval re-enactment group for several years. Bardcore is, as this article explains, a strange fusion of modern pop and rock with medieval music, played on medieval instruments and with medieval vocal arrangements. I’ve been finding a good deal of it on my YouTube page at the moment, which means that there are a good many people out there listening to it. On Monday the I’s Gillian Fisher published a piece about this strange new genre of pop music, ‘Tonight we’re going to party like it’s 1199’, with the subtitle ‘Bardcore reimagines modern pop with a medieval slant. Hark, says Gillian Fisher’. The article ran

“Hadst thou need to stoop so low? To send a wagon for thy minstrel and refuse my letters, I need no longer write them though. Now thou art somebody whom I used to know.”

If you can’t quite place this verse, let me help – it’s the chorus from the 2011 number one Somebody That I Used to Know, by Gotye. It might seem different to how you remember it, which is no surprise – this is the 2020 Bardcore version. Sometimes known as Tavernwave, Bardcore gives modern hits a medieval makeover with crumhorns aplenty and lashings of lute. Sometimes lyrics are also rejigged, as per Hildegard von Blingin’s offering above.

Algal (41-year-old Alvaro Galan) has been creating medieval covers since 2016, a notable example being his 2017 version of System of a Down’s Toxicity. Largely overlooked at the time, the video now boasts over 4.4 million views. Full-time musician Alvaro explains that “making the right song at the right moment” is key, and believes that Bardcore offers absolute escapism.

Alvaro says: “What I enjoy most about Bardcore is that I can close my eyes and imagine being in a medieval tavern playing for a drunk public waiting to dance! But from a more realistic perspective, I love to investigate the sounds of the past.”

In these precarious times, switching off Zoom calls and apocalyptic headlines to kick back with a flagon of mead offers a break from the shambles of 2020. Looking back on simpler times during periods of unrest is a common coping mechanism, as Krystine Batcho, professor of psychology at New York’s Le Moyne College, explained in her paper on nostalgia: “Nostalgic yearning for the past is especially likely to occur during periods of transition, like maturing into adulthood or aging into retirement. Dislocation or alienation can also elicit nostalgia.”

The fact that Bardcore is also pretty funny offers light relief. The juxtaposition of ancient sound with 21st-century sentiment is epitomised in Stantough’s medieval oeuvre, such as his cover of Shakira’s Hips Don’t Lie. Originally from Singapore, Stantough (Stanley Yong), 35, says: “I really like the fact we don’t really take it very seriously. We’re all aware what we’re making isn’t really medieval, but the idea of modern songs being ‘medievalised’ is just too funny.”

One of Bardcore’s greatest hits is Astronomia by Cornelius Link, which features trilling flutes and archaic vocals by Hildegard. It’s a tune that has been enjoyed by 5.3 million listeners. Silver-tongued Hildegard presides over the Bardcore realm, with her cover of Lady Gaga’s Bad Romance clocking up 5 million views. Canadian illustrator Hildegard, 28, fits Bardcore around work and describes herself as “an absolute beginner” with the Celtic harp and “enthusiastically mediocre” with the recorder. Her lyric adaptations have produced some humdingers such as “All ye bully-rooks with your buskin boots”, which she sings in rich, resonant tones.

Hildegard, who wishes to remain anonymous, believes the Bardcore boom can be “chalked up to luck, boredom and a collective desire to connect and laugh.”

In three months, the Bardcore trend has evolved with some minstrels covering Disney anthems, while others croon Nirvana hits in classical Latin. While slightly absurd, this fusion genre has ostensibly provided a sense of unity and catharsis.

The humming harps and rhythmic tabor beats evoke a sense of connection with our feudal ancestors and their own grim experience of battening down the hatches against the latest outbreak. Alongside appealing to the global sense of pandemic ennui, connecting to our forebears through music is predicated upon the fact that they survived their darkest hours. And so shall we.

While Bardcore’s a recent phenomenon, I think it’s been drawing on trends in pop music that have been happening for quite a long time. For example, I noticed in the 1990s, when the early music vocal group the Hilliard Ensemble performed at Brandon Hill in Bristol, that the audience also included a number of Goths. And long-haired hippy types also formed part of the audience for Benjamin Bagby when he gave his performance of what the Anglo-Saxon poem Beowulf probably sounded like, on the Anglo-Saxon lyre, at the Barbican Centre in the same decade.

Bardcore also seems connected to other forms of postmodern music. There’s the group Postmodern Jukebox, whose tunes can also be found on YouTube, and who specialise in vintage 20th-century arrangements of modern pop songs – doing a rock anthem as a piece of New Orleans jazz, for example. And then there’s Orkestra Obsolete, who’ve arranged New Order’s Blue Monday using the instruments of the early 20th century, including musical saws and the Theremin. There’s definitely a sense of fun in all these musical experiments, but behind the postmodern laughter it is good music. And as this article points out, we need this in these grim times.

Here’s an example of the type of music we’re talking about: It’s Samuel Kim’s medieval arrangement of Star Wars’ Imperial March from his channel on YouTube.

And here’s Orkestra Obsolete’s Blue Monday.

Sidney and Beatrice Webb’s Demand for the Abolition of the House of Lords

This weekend, our murderous clown of a Prime Minister Boris Johnson added more weight to the argument for the abolition of the House of Lords. At the moment the membership of the upper house is something like 800+. It has more members than China’s National People’s Congress, the governing assembly of a country of well over a billion people. Contemporary discussions are about reducing the size of this bloated monster, many of whose members do zilch except turn up in the morning in order to collect their attendance allowance before zipping off to what they really want to do. Since Blair, it’s become a byword for corruption and cronyism, as successive prime ministers have used it to reward their collaborators, allies and corporate donors. The Tories were outraged when Blair did this during his administration, but this didn’t stop David Cameron following suit, and now Boris Alexander de Pfeffel Johnson. Johnson has appointed no fewer than 36 of his friends and collaborators. These include his brother, who appears to be there simply because he is Johnson’s sibling; Evgeny Lebedev, a Russian oligarch and son of a KGB spy, who owns the Evening Standard and the Independent, which is a particular insult following the concerns about Russian political meddling and the Tories’ connections to Putin; the Blairite smear-merchants and intriguers who conspired against Jeremy Corbyn to give the Tories an election victory; and Claire Fox.

Fox has managed to provoke outrage all on her own, simply because of her disgusting views on Northern Irish terrorism. Now a member of the Brexit Party, she was formerly a member of the Revolutionary Communist Party, which fully endorsed the IRA’s terrorism campaign and the Warrington bombing that killed two children. She has never apologised for or retracted her views, although she says she no longer believes in the necessity of such tactics. But rewarding a woman who has absolutely no problem with the political killing of children has left a nasty taste in very many people’s mouths. It shows very clearly the double standards Johnson and the Tories have about real terrorist supporters. They tried smearing Corbyn as one, despite the fact that he was even-handed in his dealings with the various parties in Northern Ireland and was a determined supporter of peace. Ulster Unionists have come forward to state that he also had good relations with them and was most definitely not a supporter of terrorism. The Tories, however, have shown that they have absolutely no qualms about rewarding a real terrorist sympathiser. But even this isn’t enough for Johnson. He’s outraged, and demanding an inquiry, because he was prevented from putting his corporate donors from the financial sector into the House of Lords.

Demands for reform or the abolition of the second chamber have been around for a very long time. I remember back c. 1987 that the Labour party was proposing ideas for its reform. And then under Blair there were suggestions that it be transformed into an elected senate like America’s. And way back in the first decades of the twentieth century there were demands for its abolition altogether. I’ve been reading Sidney and Beatrice Webb’s A Constitution of the Socialist Commonwealth of Great Britain, which was first published in the 1920s. It’s a fascinating book. The Webbs were staunch advocates of democracy but were fiercely critical of parliament and its ability to deal with the amount of legislation created by the expansion of the British state into industry and welfare provision, just as they were bitterly critical of its secrecy and capitalism. They proposed dividing parliament into two: a political and a social parliament. The political parliament would deal with the traditional 19th-century conceptions of the scope of parliament. This would be foreign relations, including with the Empire, the self-governing colonies and India, and law and order. The social parliament would deal with the economy, the nationalised industries and in general the whole of British culture and society, including the arts, literature and science. They make some very interesting, trenchant criticisms of existing political institutions, some of which will be very familiar to viewers of that great British TV comedy, Yes, Minister. And one of these is the House of Lords, which they state very clearly should be abolished because of its elitist, undemocratic character. They write

The House of Lords, with its five hundred or so peers by inheritance, forty-four representatives of the peerages of Scotland and Ireland, a hundred and fifty newly created peers, twenty-six bishops, and half a dozen Law Lords, stands in a more critical position. No party in the State defends this institution; and every leading statesman proposes either to end or to amend it. It is indeed an extreme case of misfit. Historically, the House of Lords is not a Second Chamber, charged with suspensory and revising functions, but an Estate of the Realm – or rather, by its inclusion of the bishops – two Estates of the Realm, just as much entitled as the Commons to express their own judgement on all matters of legislation, and to give or withhold their own assent to all measures of taxation. The trouble is that no one in the kingdom is prepared to allow them these rights, and for ninety years at least the House of Lords has survived only on the assumption that, misfit as it palpably is, it nevertheless fulfils fairly well the quite different functions of a Second Chamber. Unfortunately, its members cannot wholly rid themselves of the feeling that they are not a Second Chamber, having only the duties of technical revision of what the House of Commons enacts, and of temporary suspension of any legislation that it too hastily adopts, but an Estate of the Realm, a coordinate legislative organ entitled to have an opinion of its own on the substance and the merits of any enactment of the House of Commons. The not inconsiderable section of peers and bishops which from time to time breaks out in this way, to the scandal of democrats, can of course claim to be historically and technically justified in thus acting as independent legislators, but constitutionally they are out of date; and each of their periodical outbursts, which occasionally cause serious public inconvenience, brings the nation nearer to their summary abolition.
Perhaps of greater import than the periodical petulance of the House of Lords is its steady failure to act efficiently as a revising and suspensory Second Chamber. Its decisions are vitiated by its composition: it is the worst representative assembly ever created, in that it contains absolutely no members of the manual working class; none of the great classes of shopkeepers, clerks and teachers; none of the half of all the citizens who are of the female sex; and practically none of religious nonconformity, or art, science or literature. Accordingly it cannot be relied on to revise or suspend, and scarcely even to criticise, anything brought forward by a Conservative Cabinet, whilst obstructing and often defeating everything proposed by a Radical Cabinet.

Yet discontent with the House of Commons and its executive – the Cabinet – is to-day a more active ferment than resentment at the House of Lords. The Upper Chamber may from time to time delay and obstruct; but it cannot make or unmake governments; and it cannot, in the long run, defy the House of Commons whenever that assembly is determined. To clear away this archaic structure will only make more manifest and indisputable the failure of the House of Commons to meet the present requirements. (Pp. 62-4).

When they come to their proposals for a thorough reform of the constitution, they write of the House of Lords

There is, of course, in the Socialist Commonwealth, no place for a House of Lords, which will simply cease to exist as a part of the legislature. Whether the little group of “Law Lords”, who are now made peers in order that they may form the Supreme Court of Appeal, should or should not continue, for this purely judicial purpose, to sit under the title, and with the archaic dignity, of the House of Lords, does not seem material. (p.110)

I used to have some respect for the House of Lords because of the way they did try to keep Thatcher in check during her occupation of 10 Downing Street. They genuinely acted as a constitutional check, and I wasn’t impressed by the proposals for their reform. I simply didn’t see that it was necessary. When Blair was debating reforming the Upper House, the Tories bitterly attacked him as a new Cromwell, following the Lord Protector’s abolition of the House of Lords during the British Civil War. Of course, Blair did nothing of the sort, and only partly reformed it, replacing some of the peers with his own nominees. Pretty much as Cromwell also packed parliament.

The arguments so far used against reforming the House of Lords are that it’s cheaper than an elected second chamber, and that there really isn’t much popular enthusiasm for the latter. Private Eye said that it would just be full of second-rate politicos traipsing about vainly trying to attract votes. But that was over twenty years ago.

But now that the House of Lords is showing itself increasingly inefficient and expensive because of the sheer number of political has-beens, PMs’ cronies and peers who owe their seats only to ancestral privilege, it seems to me that the arguments for its reform are now unanswerable.

Especially when the gift of appointing them is in the hands of such a corrupt premier as Boris Johnson.

Egyptians Issue Polite Invitation to Musk to See that Aliens Didn’t Build the Pyramids

Published by Anonymous (not verified) on Tue, 04/08/2020 - 7:34pm in

Here’s a rather lighter story from yesterday’s I, for 3rd August 2020. Elon Musk, the billionaire industrialist and space entrepreneur, has managed to cause a bit of controversy with Egyptian archaeologists. He’s a brilliant businessman, no doubt, but he appears to believe in the ancient astronaut theory that alien space travellers built the pyramids. He issued a tweet about it, and so the head of the Egyptian ministry for international cooperation has sent him a very polite invitation to come to their beautiful and historic country and see for himself that this is very obviously not the case. The report, ‘Musk invited to debunk alien pyramid theory’, by Laurie Havelock, runs

An Egyptian official has invited Elon Musk, the Tesla and SpaceX tycoon, to visit the country and see for himself that its famous pyramids were not built by aliens.

Mr Musk appeared to publicly state his support for a popular conspiracy theory that imagines aliens were involved in the construction of the ancient monuments.

But Egypt’s international co-operation minister corrected him, and said that laying eyes on the tombs of the pyramid builders would be proof enough.

Tombs discovered inside the structures during the 1990s are definitive evidence, experts say, that the structures were indeed built by ancient Egyptians. On Friday, Mr Musk tweeted: “Aliens built the pyramids obv”, which was retweeted more than 84,000 times. It prompted Egypt’s minister of international co-operation Rania al-Mashat to respond: “I follow your work with a lot of admiration. I invite you & SpaceX to explore the writings about how the pyramids were built and also check out the tombs of the pyramid builders. Mr Musk, we are waiting for you.”

Egyptian archaeologist Zahi Hawass also responded in a short video in Arabic, posted on social media, saying Mr Musk’s argument was a “complete hallucination”.

Hawass used to be head of their ministry of antiquities, and is a very senior archaeologist. He was on TV regularly in the 1990s whenever there was a programme about ancient Egypt. And he doesn’t have much truck with bizarre theories about how or why the pyramids were built. ‘Pyramidiots – that’s what I call them!’ he once declared passionately on screen.

The idea that the ancient Egyptians couldn’t have built the pyramids because it was all somehow beyond them has been around for some time, as have similar ideas about a lost civilisation being responsible for the construction of other ancient monuments around the world, like Stonehenge, the Nazca lines, the great civilisations of South America, Easter Island and so on. Once upon a time it was Atlantis. I think in certain quarters it still is. And then with the advent of UFOs it became ancient astronauts and aliens. One of the illustrations Chris Foss painted for a book cover from the 1970s shows, I think, alien spacecraft hovering around the pyramids.

There’s actually little doubt that humans, not aliens, built all these monuments, and that the ancient Egyptians built the pyramids for which their country’s famous. Archaeologists have even uncovered an entire village, Deir el-Medina, inhabited by the craftsmen who worked on them. This has revealed immensely detailed records and descriptions of their daily lives as well as their working environment. One of the documents that has survived from these times records requests from the craftsmen to their supervisors to have a few days off. One was brewing beer – a staple part of the ordinary Egyptians’ diet – while another had his mother-in-law coming round. I also distinctly remember that one of the programmes about ancient Egypt in the 1990s proudly showed a tomb painting that depicted the system of ramps the workers are believed to have used to haul the vast stones into place. And the great ancient Greek historian, Herodotus, in his Histories, states very clearly that the pyramids were built by human workers. He includes many tall tales, no doubt told him by tour guides keen to make a quick buck and not too worried about telling the strict truth to an inquisitive foreigner. Some of these are about the spices and rich perfumes traded by the Arab civilisations further east. He includes far-fetched stories about how these exotic and very expensive products were collected by giant ants and other fabulous creatures. But no-one tried telling him that it wasn’t people who built the pyramids.

On the other hand, the possibility that aliens may have visited Earth and the other planets in the solar system isn’t a daft idea at all. Anton ‘Wonderful Person’ Petrov, a Russian YouTuber specialising in real space and science, put up a video a few weeks ago stating that it’s been estimated that another star passes through the solar system once every 50,000 years. A similar paper was published by a Russian space scientist in the Journal of the British Interplanetary Society back in the 1990s, although he limited the estimate to a star coming within a light-year of Earth. That’s an incredibly small distance, and if there have been other, spacefaring civilisations in our Galaxy, they could easily hop over from their own solar systems to visit or explore ours. We can almost do it ourselves now, as shown by the projects that have been drawn up to send lightweight probes by solar sail to Alpha Centauri. In addition to the Search for Extraterrestrial Intelligence, which uses radio telescopes to comb the skies for a suitable signal, there is also planetary SETI. This advocates looking for the remains of alien spacecraft or visitors elsewhere in our solar system. Its advocates are serious scientists, though the idea suffered a major blow to its credibility with the furore over the ‘Face on Mars’, which turned out not to be a face at all, but a rock formation, as its critics had maintained.

Aliens may well have visited the solar system in the deep past, but it was definitely very human ancient Egyptians, who built the pyramids. Because, as Gene Roddenberry once said about such theories, ‘humans are clever and they work hard.’ Wise words from the man who gave us Star Trek.

Let’s go out in space to seek out new life and new civilisations by all means, but also keep in mind what we humans are also capable of achieving on our own down here.

Vile! Priti Patel Withdraws Funding to Britain’s Only Centre Against Female Genital Mutilation

Yesterday, Mike over at Vox Political put up a very telling piece, which reveals precisely how low protecting vulnerable British girls from FGM comes on the Tories’ list of priorities. Priti Patel, the smirking minister who believes it’s perfectly acceptable to conduct her own foreign policy for states such as Israel behind her own government’s back, and who thinks that British workers should suffer the same horrendous wages and working conditions as the exploited masses of the developing world because they’re too lazy, has decided to cut the funding to this country’s National FGM Centre. This was set up five years ago to combat Female Genital Mutilation, otherwise known as female circumcision. Feminists have also described it as ‘female castration’ because of its truly horrific nature. It’s the only centre protecting girls from communities across the UK from it. The centre’s head, Leethen Bartholomew, warned that FGM will not end if the Centre is forced to close because of the cuts. Mike quotes him as saying:

“We will not be there to protect the girls who need us. We know that FGM is still being practised in communities across England.

“There are still girls who are being cut and so will face a lifetime of physical and emotional pain. It is a hidden form of child abuse.”

Mike connects this to the sadism in the Tory party generally, and their need to inflict pain and suffering on innocents. He also points out that Patel herself wanted to deport a girl so that she could undergo this truly horrific practice. There’s no way it can be decently described in a family blog, and it does seem to vary in severity. At its worst it leads to a lifetime of agonizing medical problems and health issues, including complications in childbirth.

One of the communities in which girls are at risk is my own city of Bristol. A few years ago the local Beeb news programme, Points West, carried an item about girls of African heritage who were left vulnerable to it, and the courageous efforts of campaigners from these communities to combat it. This was when it was a pressing issue and voices were being raised across the country demanding that it should be fought and outlawed. And now that the outrage has calmed down and it is no longer in the public consciousness, the Tories are doing what they have always done in these circumstances: they’re quietly ending it, hoping that nobody will notice. It’s served its purpose, which was to convince the public, or the chattering classes or some section thereof, that the Tories really do hold some kind of liberal values, and are prepared to defend women and people of colour. But like everything they do in that direction, it’s always essentially propagandistic. It is there to garner them votes and plaudits in the press and media. And once it’s done that, these and similar initiatives are always abandoned.

Patel’s decision also shows you how seriously Johnson takes the general issue of racism and racial equality after the Black Lives Matter protests: he doesn’t. Not remotely. Remember, he was going to set up an inquiry to deal with the issue, just like the last one the Tories set up under May when the issue raised its ugly head a few years ago. I admit that FGM is only one of a number of issues affecting Britain’s Black and BAME communities. It may not be the most common, but it is certainly one of the most severe for those affected, and there should be absolutely no question that the Centre should continue to receive funding. Young lives are being ruined. But Boris, Patel and the rest really couldn’t care less.

Part of the motive behind the Black Lives Matter protests, it seems to me, is that Britain’s Black communities have been particularly badly affected by austerity and neoliberalism. They aren’t alone – there are plenty of Whites and Asians who have suffered similarly. But as generally the poorest, or one of the poorest, sections of British society, and one which has suffered from structural racism, Black communities have felt the Tories’ attacks on jobs, wages and welfare benefits particularly acutely. This has contributed to the anger and alienation that led to the protests a few weeks ago and to such symbolic acts as the tearing down of the statue of Edward Colston in Bristol.

But now that the protests seem to be fading, the Tories are showing their real lack of concern despite the appointment of BAME politicos like Patel to the government.

And underneath this there’s also a very hypocritical attitude to the whole issue of FGM on the political right. Islamophobes like Tommy Robinson and the EDL use it to tarnish Islam as a whole. It’s supposed to show that the religion is dangerously misogynist, anti-feminist and fundamentally opposed to modern western conceptions of human rights. In fact the impression I have is that FGM isn’t unique to Islam, but is practised by various African and other cultures around the world. Islamic scholars have said that it has no basis in Islam itself, but is a pre-Islamic practice that was taken over as the religion expanded. There have also been attempts by campaigners in this country and the European Union to pass legislation very firmly outlawing it. A few years ago there was even a bill passing through the European Parliament. But UKIP, whose storm troopers had been making such a noise about FGM and the fundamental incompatibility of Islam with western society, did not rouse themselves from their habitual idleness to support the motion. And this was noticed at the time.

There seems to be a racist backlash coming after the Black Lives Matter protests. The Tories are trying to recruit members on the internet by stirring up concerns about waves of illegal immigration. Over the past few days there have also been pieces put up on YouTube about this and related issues from the usual offenders: TalkRadio’s Julia Hartley-Brewer and ‘Celebrity Radio’s’ Alex Belfield. My guess is that if we wait long enough, FGM will be revived once again by the right as another metaphorical stick with which to attack Muslims and brown people.

But all the while it should be remembered that the Tories wanted to tell us they were serious about tackling it. They weren’t, and aren’t.

And that tells you all you need to know about their attitudes to race, women and the poorest members of society generally, regardless of gender and ethnicity.



The Anti-Aesthetic of Cancel Culture

Published by Anonymous (not verified) on Tue, 04/08/2020 - 3:00am in


art, culture

For a fortnight or so, a month ago, there was an apparent (and illusory) lull in the COVID crisis in the West, and in that moment, social and political debate rapidly returned to matters cultural, concerned with who speaks about what, where and how. Earlier, the author J. K. Rowling had made some remarks on Twitter about post-gender language in UK official documents (‘people who menstruate’ rather than ‘women’), and this was blown up into a huge storm by the mainstream media. This was followed by an open letter in Harper’s, signed by more than 100 leading US and UK intellectuals (not merely liberals; figures such as Chomsky signed), protesting at what has come to be called ‘cancel culture’, and urging a value pluralism in the editing of publications. The letter received a furious response from a different—and different-generation, i.e., younger—quarter that argued that such a purported openness acted as a form of exclusion and silencing of non-white, non-male voices; this letter was also published. In Australia, a short film by Eliza Scanlen, Mukbang, in which a young Anglo woman adopted the Korean subculture of binge eating for an online audience, was removed from the online Sydney Film Festival; one of those calling for its removal, Michelle Law, then apologised and removed from platforms an earlier film of her own, which featured, comically, two teenage girls adopting Aztec cultural practices, and brownface, to spirit away the memory of bad boyfriends. The attacks on Mukbang, and on the Sydney Film Festival for awarding it a prize and showing it, occasioned a letter in response by twenty-seven Australian film-makers and writers (including numerous Indigenous and non-Anglo names) pushing back against this, and describing ‘The current focus on public shaming and “burning down” the industry [as] misguided and ahistorical’. Again it was noticeable that many of the names on the letter were Generation X or boomer.
In and around this time, there were a half dozen other such and related issues, involving the comedian Chris Lilley, the literary magazine Verity La and uproar at another magazine, The Lifted Brow.

There is little point trying to encompass everything that is going on here in a single note—least of all by using the all-in term ‘cancel culture’. One aspect that seemed to come to the fore in Australia was the question of what can be represented in fictional texts and by whom. Here, with the Mukbang controversy, one noted a ratcheting up of the standard of what constitutes improper use. The film was not ‘yellowface’—neither the lead character nor the actor portraying her was trying to pass as Korean; the character was simply adopting a Korean subculture that came to global attention some years ago. In other words, it was a realist presentation of the crazy shit teenagers do for fun, as was Law’s film. That seems to raise the question of how it would be possible to represent contemporary culture at all, given that we live in a globalised world of total simultaneous connection. Since teenagers and young people are now formed by streams of global culture—rather than by a unitary US-dominated feed, which began to yield its power in the 2000s—and mimic, hybridise and mash them incessantly, how is representation of them at all possible under such strictures?

This is a step on from the ‘cultural appropriation’ debate, which was kicked into culture-war status in this country by the 2016 appearance of novelist Lionel Shriver at the Brisbane Writers Festival, wearing a sombrero and talking about the right of authors to speak using the voices of communities and groups to which they do not belong. Her speech prompted a counter-argument from Yassmin Abdel-Magied, which led to a vicious and sustained campaign against Abdel-Magied by News Corp. Arguing that authors should in general stay away from first-person voicings of characters from a minority/oppressed culture would hardly crimp the enormous flow of cultural product and exploration.

To simply react to this in the name of free speech and free expression would be to play into the game of misrecognition by which the dominant values of earlier artistic cultures (i.e. the bohemian ones of modernism) are taken as absolute, and the values driving concern over cultural appropriation as surplus and somehow unreal. That is not only silly but simply impossible in a multiethnic, multicultural society. Much of this argument, especially in Australia, is not about free expression in isolation. It is about what the parameters of public culture in a post ‘settler-Indigenous’ society will be. It should be obvious that as soon as a society is not mono- or bicultural, a period of cultural rethinking has to occur. We really weren’t genuinely multiethnic/multicultural until about ten to fifteen years ago (and even then, only in our biggest cities). That period has coincided with the rise of the smartphone and social media—a milieu in which value systems come to be in a sort of ‘dialogue-flux’ of ceaseless peer-to-peer redefinition, which accounts for the dizzying feeling that such public debates take on.

However, prior to any debate about whether strict, light or no strictures against cultural appropriation should be in place, those artists who argue for a robust ban face the paradox that it clearly trends not towards a political aesthetic but towards an anti-aesthetic—a moral ban on representation altogether—an endpoint at which many societies have arrived. Plato urged the banning of all art, in line with his philosophy of ideal forms; medieval Christian culture ricocheted from periods of great artistic flourishing to the smashing of icons and the burning of the vanities; the seventeenth-century Puritan ban on performance destroyed, within two generations, the theatre culture that had produced Shakespeare, Marlowe and Jonson, and kept theatre a minor art for more than two centuries; Wahhabism turned Islam’s ban on representations of Mohammed into a ban on every diversion, including music; Stalinism and Maoism applied a class line to art that made any representation of real conflict impossible.

The implicit politics of the present—in which the deep left aim of creating a society of universal self-flourishing is rendered as a society of universal ‘safety’, in an expanded sense—trends towards a ban on representation, since any representation of suffering or wrong can be taken as exploitation or aggression.

Indeed, this occurred during the Mukbang controversy, in a manner relatively little commented on. In Mukbang, the teenage protagonist draws a picture of herself strangling a male fellow student, who happens to be black, and the picture briefly animates. This scenelet was removed before the film was shown in the festival, and relatively few of those criticising the attacks on it on cultural-appropriation grounds included it in their commentary on the film. Yet that moment was merely the representation of free-floating violent fantasy of the lead character, an accurate representation of inner life, of teens, and, well, of everyone.

If the Mukbang controversy paradoxically enforces such a division between the acceptable and non-acceptable, how is radical, reflexive art to proceed? This was a short film for adults, screened at a festival. By its very nature, such a place has to be very free in expression. But that’s not because having a society of substantially free expression is, of itself, absolutely better than having one of moral constraints on representation. Maybe Puritan societies are happier. Maybe safety, in an expanded sense, is better than freedom.

The point, however, is that it is not possible to believe that and be a representational artist—or editor, or festival director—since the ‘safety’ principle undermines the very practice of cultural creation in modernity. That process seems to be well advanced. As with all such ‘one-dimensional’ approaches to a way of life, one would assume that the functional collapse of such an approach is imminent. It has already produced one ‘other’ in the rise of the alt-Right, which draws recruits precisely because of its deliberate courting of outrage. But will a version on the ‘Left’ arise in which racism and misogyny remain abhorred but the freedom to represent the full panoply of human life, including inner life, is affirmed and celebrated? With writers and film-makers caught in the practice of cancelling themselves as they try to cancel others, that moment may simply be a necessary and inevitable product of art—a resistance to its self-cancellation within the great cultural shifts of our time.

We now return you to our listed program, an unstoppable and all-encompassing pandemic. 

A British Colonial Governor’s Attack on Racism

Published by Anonymous (not verified) on Sat, 01/08/2020 - 5:36am in

Sir Alan Burns, Colour and Colour Prejudice with Particular Reference to the Relationship between Whites and Negroes (London: George Allen and Unwin Ltd 1948).

I ordered this book secondhand online a week or so ago, following the Black Lives Matter protests and controversies of the past few weeks. I realise reading a book this old is a rather eccentric way of looking at contemporary racial issues, but I’d already come across it in the library at the Empire and Commonwealth Museum when I was doing voluntary work there. What impressed me about it was that, as well as the book’s main concern with anti-Black racism, discrimination and growing Black discontent in the British Empire, it also dealt with anti-White racism amongst Blacks.

Burns was a former governor of Ghana, then the Gold Coast. According to the potted biography on the front flap of the dust jacket, he was ‘a Colonial Civil Servant of long and distinguished experience in tropical West Africa and the West Indies.’ The book

deals with the important question of colour prejudice, and pleads for mutual courtesy and consideration between the white and the coloured races. Sir Alan analyses the history and alleged causes of colour prejudice, and cites the opinions of many writers who condemn or attempt to justify the existence of prejudice. It is a frank analysis of an unpleasant phenomenon.

He was also the author of two other books: Colonial Civil Servant, his memoirs of colonial service in the Leeward Islands, Nigeria, the Bahamas, British Honduras, the Gold Coast and the Colonial Office; and A History of Nigeria. The Gold Coast was one of the most racially progressive of the British African colonies. It was the first of them to include an indigenous chief on the ruling colonial council. I therefore expected Burns to hold similarly positive views of Blacks, even if these would no doubt seem outdated to us 72 years later.

After the introduction, the book is divided into the following chapters:

I. The Existence and Growth of Colour Prejudice

II. The Attitude of Various Peoples to Racial and Colour Differences

III. Negro Resentment of Colour Prejudice

IV. Political and Legal Discrimination Against Negroes

V. Social Discrimination Against Negroes

VI. Alleged Inferiority of the Negro

VII. Alleged Shortcomings of the Negro

VIII. Physical and Mental Differences between the Races

IX. Physical Repulsion between Races

X. Miscegenation

XI. The Effect of Environment and History on the Negro Race

XII. Lack of Unity and Inferiority Complex Among Negroes

XIII. Conclusion.

I’ve done little more than take the occasional glance through it so far, so this is really a rather superficial treatment of the book, more in the way of preliminary remarks than a full-scale review. Burns does indeed take a more positive view of Blacks and their potential for improvement, but the book is very dated and obviously strongly influenced by his own background in the colonial service and government. As a member of the colonial governing class, Burns is impressed by the British Empire and what he sees as its benevolent and highly beneficial rule of the world’s indigenous peoples. He is in no doubt that they have benefited from British rule, and quotes an American author as saying that no other colonial power would have done as much for its subject peoples. He is particularly impressed by the system of indirect rule, in which practical government was largely given over to the colonies’ indigenous ruling elites. This, he believed, was peaceful and harmonious, and had benefited the uneducated masses of the Empire’s indigenous peoples. These colonial subjects appreciated British rule and largely supported it, and he did not expect this section of colonial society to demand their nations’ independence. This governmental strategy did not, however, suit the growing class of educated Blacks, who were becoming increasingly dissatisfied with their treatment as inferiors and were demanding independence.

As with other, later books on racism, Burns tackles its history and tries to trace how far back it goes. He argues that racism seems to go back no further than the fifteenth century; before then, culture and religion were far more important in defining identity. He is not entirely convinced by this, and believes that racism in the sense of colour prejudice probably existed far earlier, though there is little evidence for it. There have been other explorations of this subject which have attempted to show the history and development of racism as a cultural idea in the west. Other historians have said much the same, and I think the consensus of opinion is that it was the establishment of slavery that led to the development of ideas of Black inferiority, in order to justify Blacks’ capture and enslavement.

Burns is also concerned at what he and the other authorities he quotes see as the growth in anti-Black racism following the First World War. He compares this unfavourably with the recollections of an African lady who went to a British school during Victoria’s reign. The woman recalled that she and the other Black girls were treated absolutely no differently from the Whites, and that the only time she realised there was any difference between them was when she looked in a mirror. This is interesting, and a good corrective to the idea that all Whites were uniformly and aggressively racist back then, but I expect her experience may have been very different from that of Blacks further down the social hierarchy. Burns believes the increase in racism after the First World War was due to the increased contact between Blacks and Whites, which is plausible given the mass mobilisation of troops across the Empire.

But what struck me as an historian with an interest in African and other global civilisations is the book’s almost wholly negative assessment of Black civilisation and its achievements. Burns quotes author after author stating that Blacks have produced no great civilisations or cultural achievements. Yes, ancient Egypt is geographically a part of Africa, but culturally and racially, so it is claimed, it is part of the Middle East. Where Black Africans have produced great civilisations, it is through contact with external, superior cultures like the Egyptians, Carthaginians and the Arabs. Where Blacks have produced great artistic achievements, such as the Benin bronzes of the 16th/17th centuries, it is claimed that this is due to contact with the Portuguese and Spanish. This negative view is held even by writers who are concerned to stress Black value and dignity, and to show that Blacks are not only capable of improvement but are actually improving.

Since then a series of historians, archaeologists and art historians have attempted to redress this view of history by showing how impressive Black African civilisations were: ancient Nubia, Ethiopia, Mali and the other great Islamic states of north Africa, and advanced west African civilisations like Dahomey. I myself prefer the superb portraiture in the sculptures from 17th century Ife in west Africa, but archaeologists and historians have been immensely impressed by the carved heads from Nok in Nigeria, which date from about 2,000 BC. Going further south, there is the great fortress of Zimbabwe, a huge stone structure that bewildered western archaeologists. For years it was suggested that Black Africans simply couldn’t have built it, and that it must have been the Arabs or Chinese instead. In fact analysis of the methods used to build it, and comparison with the same techniques used by local tribes in the construction of their wooden buildings, have shown that the fortress was most definitely built by indigenous Zimbabweans. There have been a number of excellent TV series broadcast recently. Aminatta Forna presented one a few years ago on Timbuktu, once the centre of a flourishing and immensely wealthy west African kingdom. A few years before that, the art historian Gus Casely-Hayford presented a series on BBC Four, Lost Kingdoms of Africa. I think that’s still on YouTube, and it’s definitely worth a look. Archaeologists are revealing an entire history of urban civilisation that has previously been lost or overlooked. Nearly two decades ago there was a piece by a White archaeologist teaching in Nigeria, who had discovered the remains of house and courtyard walls stretching over an area of about 70 km. These had been lost as the site was abandoned and overgrown with vegetation. He lamented how little interest there was in the remains of this immense, ancient city among Nigerians, who were far more interested in ancient Egypt.

This neglect and disparagement of African history and achievement really does explain the fervour with which Afrocentric history is held by some Blacks and anti-racist Whites. This is the view that the ancient Egyptians were Black, and were the real creators of western cultural achievement. It began with the Senegalese scholar Cheikh Anta Diop. White Afrocentrists have included Martin Bernal, the author of Black Athena, and Basil Davidson. Following the Black Lives Matter protests there have also been calls for Black history to be taught in schools, beginning with African civilisations.

More positively, from what I’ve seen so far, Burns did believe that Blacks and Whites were equal in intelligence. The Christian missionaries Samuel Crowther, who became the first Anglican bishop in Africa, and Frederick Schon had absolutely no doubt of this. Crowther was Black, while Schon was a White Swiss. In one of their reports to the British parliamentary committee examining slavery and the slave trade, they presented evidence from the African missionary schools, in the form of essays from their pupils, to show that Blacks certainly were as capable as Whites – possibly more so at a certain age. As Black underachievement at school is still a very pressing issue, Crowther’s and Schon’s findings remain important, especially as there are real racists, supporters of the book The Bell Curve, keen to argue that Blacks really are biologically mentally inferior to Whites.

Burns’ book is fascinating, not least because it shows the development of official attitudes towards combating racism in Britain. Before it became such a pressing issue with the mass influx of Black migrants that came with Windrush, it seems that official concern was mostly over the growing resentment in Africa and elsewhere of White, British rule. The book also, hopefully, shows how far we’ve come in tackling racism in the West. I’m not complacent about it – I realise that it’s still very present and blighting lives – but it’s far, far less respectable now than it was when I was a child in the 1970s. My concern, however, is that some anti-racism activists really don’t realise this, and their concentration on the horrors and crimes of the past has led them to see the present in its terms. Hence the rant of one of the BLM firebrands in Oxford that the police were the equivalent of the Klan.

Burns’ book shows just how much progress has been made, and makes you understand just what an uphill struggle it has been.



Philosophers On GPT-3 (updated with replies by GPT-3)

Published by Anonymous (not verified) on Fri, 31/07/2020 - 5:02am in

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.

Annette Zimmermann, guest editor

GPT-3, a powerful, 175 billion parameter language model developed recently by OpenAI, has been galvanizing public debate and controversy. As the MIT Technology Review puts it: “OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless”. Parts of the technology community hope (and fear) that GPT-3 could bring us one step closer to the hypothetical future possibility of human-like, highly sophisticated artificial general intelligence (AGI). Meanwhile, others (including OpenAI’s own CEO) have critiqued claims about GPT-3’s ostensible proximity to AGI, arguing that they are vastly overstated.

Why the hype? As it turns out, GPT-3 is unlike other natural language processing (NLP) systems, which often struggle with what comes comparatively easily to humans: performing entirely new language tasks based on a few simple instructions and examples. NLP systems usually have to be pre-trained on a large corpus of text and then fine-tuned in order to perform a specific task successfully. GPT-3, by contrast, does not require fine-tuning of this kind: it seems to be able to perform a whole range of tasks reasonably well, from producing fiction, poetry, and press releases to functioning code, and from music, jokes, and technical manuals to “news articles which human evaluators have difficulty distinguishing from articles written by humans”.
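The few-shot behaviour described above can be made concrete with a sketch of the kind of prompt such a model receives: a plain-text task description, a handful of worked examples, and a new case for the model to complete, with no gradient updates or fine-tuning involved. The translation task and the helper function here are invented for illustration; they are not from GPT-3’s actual interface.

```python
def build_few_shot_prompt(examples, query):
    """Assemble task examples and a new query into a single text prompt."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")  # the model is asked to continue from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "plush giraffe",
)
print(prompt)
```

The point of the contrast drawn in the paragraph above is that everything task-specific lives in this prompt text, rather than in the model’s trained weights.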

The Philosophers On series contains group posts on issues of current interest, with the aim being to show what the careful thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. Contributors present not fully worked out position papers but rather brief thoughts that can serve as prompts for further reflection and discussion.

The contributors to this installment of “Philosophers On” are Amanda Askell (Research Scientist, OpenAI), David Chalmers (Professor of Philosophy, New York University), Justin Khoo (Associate Professor of Philosophy, Massachusetts Institute of Technology), Carlos Montemayor (Professor of Philosophy, San Francisco State University), C. Thi Nguyen (Associate Professor of Philosophy, University of Utah), Regina Rini (Canada Research Chair in Philosophy of Moral and Social Cognition, York University), Henry Shevlin (Research Associate, Leverhulme Centre for the Future of Intelligence, University of Cambridge), Shannon Vallor (Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, University of Edinburgh), and Annette Zimmermann (Permanent Lecturer in Philosophy, University of York, and Technology & Human Rights Fellow, Harvard University).

By drawing on their respective research interests in the philosophy of mind, ethics and political philosophy, epistemology, aesthetics, the philosophy of language, and other philosophical subfields, the contributors explore a wide range of themes in the philosophy of AI: how does GPT-3 actually work? Can AI be truly conscious—and will machines ever be able to ‘understand’? Does the ability to generate ‘speech’ imply communicative ability? Can AI be creative? How does technology like GPT-3 interact with the social world, in all its messy, unjust complexity? How might AI and machine learning transform the distribution of power in society, our political discourse, our personal relationships, and our aesthetic experiences? What role does language play for machine ‘intelligence’? All things considered, how worried, and how optimistic, should we be about the potential impact of GPT-3 and similar technological systems?

I am grateful to them for putting such stimulating remarks together on very short notice. I urge you to read their contributions, join the discussion in the comments (see the comments policy), and share this post widely with your friends and colleagues. You can scroll down to the posts to view them or click on the titles in the following list:

Consciousness and Intelligence

  1. “GPT-3 and General Intelligence” by David Chalmers
  2. “GPT-3: Towards Renaissance Models” by Amanda Askell
  3. “Language and Intelligence” by Carlos Montemayor

Power, Justice, Language

  1. “If You Can Do Things with Words, You Can Do Things with Algorithms” by Annette Zimmermann
  2. “What Bots Can Teach Us about Free Speech” by Justin Khoo
  3. “The Digital Zeitgeist Ponders Our Obsolescence” by Regina Rini

Creativity, Humanity, Understanding

  1. “Who Trains the Machine Artist?” by C. Thi Nguyen
  2. “A Digital Remix of Humanity” by Henry Shevlin
  3. “GPT-3 and the Missing Labor of Understanding” by Shannon Vallor

UPDATE: Responses to this post by GPT-3

GPT-3 and General Intelligence
by David Chalmers

GPT-3 contains no major new technology. It is basically a scaled up version of last year’s GPT-2, which was itself a scaled up version of other language models using deep learning. All are huge artificial neural networks trained on text to predict what the next word in a sequence is likely to be. GPT-3 is merely huger: 100 times larger (98 layers and 175 billion parameters) and trained on much more data (CommonCrawl, a database that contains much of the internet, along with a huge library of books and all of Wikipedia).
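The training objective Chalmers describes (predicting the likely next word from the words before it) can be caricatured in a few lines with a toy bigram counter. This is a deliberately crude sketch of the idea, not of GPT-3 itself, which conditions on long contexts with a deep transformer rather than word-pair counts:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the continuation seen most often during training, if any."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # 'cat' follows 'the' most often here
```

The qualitative leap from this sketch to GPT-3 is one of scale and architecture, but the objective, predicting what comes next, is the same.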

Nevertheless, GPT-3 is instantly one of the most interesting and important AI systems ever produced. This is not just because of its impressive conversational and writing abilities. It was certainly disconcerting to have GPT-3 produce a plausible-looking interview with me. GPT-3 seems to be closer to passing the Turing test than any other system to date (although “closer” does not mean “close”). But this much is basically an ultra-polished extension of GPT-2, which was already producing impressive conversation, stories, and poetry.

More remarkably, GPT-3 is showing hints of general intelligence. Previous AI systems have performed well in specialized domains such as game-playing, but cross-domain general intelligence has seemed far off. GPT-3 shows impressive abilities across many domains. It can learn to perform tasks on the fly from a few examples, when nothing was explicitly programmed in. It can play chess and Go, albeit not especially well. Significantly, it can write its own computer programs given a few informal instructions. It can even design machine learning models. Thankfully they are not as powerful as GPT-3 itself (the singularity is not here yet).

When I was a graduate student in Douglas Hofstadter’s AI lab, we used letterstring analogy puzzles (if abc goes to abd, what does iijjkk go to?) as a testbed for intelligence. My fellow student Melanie Mitchell devised a program, Copycat, that was quite good at solving these puzzles. Copycat took years to write. Now Mitchell has tested GPT-3 on the same puzzles, and has found that it does a reasonable job on them (e.g. giving the answer iijjll). It is not perfect by any means and not as good as Copycat, but its results are still remarkable in a program with no fine-tuning for this domain.
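The rule at work in that example (take the change abc→abd, i.e. “increment the trailing letter group”, and apply it to iijjkk) can be spelled out in a few lines. This sketch hard-codes that one rule; the point of Copycat, and of testing GPT-3, is that they must discover the rule from the analogy itself:

```python
def increment_last_group(s):
    """Bump the trailing run of identical letters to the next letter,
    mirroring the abc -> abd transformation."""
    i = len(s) - 1
    while i > 0 and s[i - 1] == s[i]:
        i -= 1  # walk back over the trailing run of identical letters
    successor = chr(ord(s[i]) + 1)
    return s[:i] + successor * (len(s) - i)

print(increment_last_group("abc"))     # abd
print(increment_last_group("iijjkk"))  # iijjll
```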

What fascinates me about GPT-3 is that it suggests a potential mindless path to artificial general intelligence (or AGI). GPT-3’s training is mindless. It is just analyzing statistics of language. But to do this really well, some capacities of general intelligence are needed, and GPT-3 develops glimmers of them. It has many limitations and its work is full of glitches and mistakes. But the point is not so much GPT-3 but where it is going. Given the progress from GPT-2 to GPT-3, who knows what we can expect from GPT-4 and beyond?

Given this peak of inflated expectations, we can expect a trough of disillusionment to follow. There are surely many principled limitations on what language models can do, for example involving perception and action. Still, it may be possible to couple these models to mechanisms that overcome those limitations. There is a clear path to explore where ten years ago, there was not. Human-level AGI is still probably decades away, but the timelines are shortening.

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

There is no easy answer to these questions, which require serious analysis of GPT-3 and serious analysis of what intelligence and the other notions amount to. On a first pass, I am most inclined to give a positive answer to the first. GPT-3’s capacities suggest at least a weak form of intelligence, at least if intelligence is measured by behavioral response.

As for consciousness, I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too. I would expect any consciousness to be far simpler than ours, but much depends on just what sort of processing is going on among those 175 billion parameters.

GPT-3 does not look much like an agent. It does not seem to have goals or preferences beyond completing text, for example. It is more like a chameleon that can take the shape of many different agents. Or perhaps it is an engine that can be used under the hood to drive many agents. But it is then perhaps these systems that we should assess for agency, consciousness, and so on.

The big question is understanding. Even if one is open to AI systems understanding in general, obstacles arise in GPT-3’s case. It does many things that would require understanding in humans, but it never really connects its words to perception and action. Can a disembodied purely verbal system truly be said to understand? Can it really understand happiness and anger just by making statistical connections? Or is it just making connections among symbols that it does not understand?

I suspect GPT-3 and its successors will force us to fragment and re-engineer our concepts of understanding to answer these questions. The same goes for the other concepts at issue here. As AI advances, much will fragment by the end of the day. Both intellectually and practically, we need to handle it with care.

GPT-3: Towards Renaissance Models
by Amanda Askell

GPT-3 recently captured the imagination of many technologists, who are excited about the practical applications of a system that generates human-like text in various domains. But GPT-3 also raises some interesting philosophical questions. What are the limits of this approach to language modeling? What does it mean to say that these models generalize or understand? How should we evaluate the capabilities of large language models?

What is GPT-3?

GPT-3 is a language model that generates impressive outputs across a variety of domains, despite not being trained on any particular domain. GPT-3 generates text by predicting the next word based on what it’s seen before. The model was trained on a very large amount of text data: hundreds of billions of words from the internet and books.

The model itself is also very large: it has 175 billion parameters. (The next largest transformer-based language model was a 17 billion parameter model.) GPT-3’s architecture is similar to that of GPT-2, but much larger, i.e. more trainable parameters, so it’s best thought of as an experiment in scaling up algorithms from the past few years.
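Next-word prediction itself can be illustrated at toy scale. The sketch below predicts the next word from raw bigram counts over a made-up corpus; GPT-3 instead learns the prediction with a 175-billion-parameter transformer, so this is an analogy for the training objective, not the model:

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a tiny corpus,
# then predict the most frequent follower. The corpus is invented for
# illustration only.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most common word observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' (seen twice after 'the'; 'mat' only once)
```

The gap between this counting scheme and GPT-3 is precisely what the scaling experiment probes: the objective stays the same while the predictor becomes vastly more expressive.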

The diversity of GPT-3’s training data gives it an impressive ability to adapt quickly to new tasks. For example, I prompted GPT-3 to tell me an amusing short story about what happens when Georg Cantor decides to visit Hilbert’s hotel. Here is a particularly amusing (though admittedly cherry-picked) output:

Why is GPT-3 interesting?

Larger models can capture more of the complexities of the data they’re trained on and can apply this to tasks that they haven’t been specifically trained to do. Rather than being fine-tuned on a problem, the model is given an instruction and some examples of the task and is expected to identify what to do based on this alone. This is called “in-context learning” because the model picks up on patterns in its “context”: the string of words that we ask the model to complete.

The interesting thing about GPT-3 is how well it does at in-context learning across a range of tasks. Sometimes it’s able to perform at a level comparable with the best fine-tuned models on tasks it hasn’t seen before. For example, it achieves state of the art performance on the TriviaQA dataset when it’s given just a single example of the task.

Fine-tuning is like cramming for an exam. The benefit of this is that you do much better in that one exam, but you can end up performing worse on others as a result. In-context learning is like taking the exam after looking at the instructions and some sample questions. GPT-3 might not reach the performance of a student that crams for one particular exam if it doesn’t cram too, but it can wander into a series of exam rooms and perform pretty well from just looking at the paper. It performs a lot of tasks pretty well, rather than performing a single task very well.
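An in-context prompt of the kind described above is just a string containing an instruction, a few worked examples, and an unfinished final line for the model to complete. The translation task and word pairs below are illustrative, not drawn from the GPT-3 paper:

```python
# Sketch of a few-shot ("in-context") prompt: the task is conveyed entirely
# through the context string; no model weights are updated.
examples = [("cheese", "fromage"), ("apple", "pomme")]
query = "dog"

prompt = "Translate English to French.\n"
prompt += "".join(f"{en} => {fr}\n" for en, fr in examples)
prompt += f"{query} =>"

print(prompt)
```

Feeding such a string to the model and reading off its completion is all that "learning" amounts to here, which is why the exam metaphor is apt: the instructions and sample questions are the whole of the preparation.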

The model can also produce impressive outputs given very little context. Consider the first completion I got when I prompted the model with “The hard problem of consciousness is”:

Not bad! It even threw in a fictional quote from Nagel.

It can also apply patterns it’s seen in its training data to tasks it’s never seen before. Consider the first output GPT-3 gave for the following task (GPT-3’s text is highlighted):

It’s very unlikely that GPT-3 has ever encountered Roish before since it’s a language I made up. But it’s clearly seen enough of these kinds of patterns to identify the rule.

Can we tell if GPT-3 is generalizing to a new task in the example above or if it’s merely combining things that it has already seen? Is there even a meaningful difference between these two behaviors? I’ve started to doubt that these concepts are easy to tease apart.

GPT-3 and philosophy

Although its ability to perform new tasks with little information is impressive, on most tasks GPT-3 is far from human level. Indeed, on many tasks it fails to outperform the best fine-tuned models. GPT-3’s abilities also scale less well to some tasks than others. For example, it struggles with natural language inference tasks, which involve identifying whether a statement is entailed or contradicted by a piece of text. This could be because it’s hard to get the model to understand this task in a short context window. (The model could know how to do a task when it understands what’s being asked, but not understand what’s being asked.)

GPT-3 also lacks a coherent identity or belief state across contexts. It has identified patterns in the data it was trained on, but the data it was trained on was generated by many different agents. So if you prompt it with “Hi, I’m Sarah and I like science”, it will refer to itself as Sarah and talk favorably about science. And if you prompt it with “Hi, I’m Bob and I think science is all nonsense” it will refer to itself as Bob and talk unfavorably about science.

I would be excited to see philosophers make predictions about what models like GPT-3 can and can’t do. Finding tasks that are relatively easy for humans but that language models perform poorly on, such as simple reasoning tasks, would be especially interesting.

Philosophers can also help clarify discussions about the limits of these models. It’s difficult to say whether GPT-3 understands language without giving a more precise account of what understanding is, and some way to distinguish between models that have this property from those that don’t. Do language models have to be able to refer to the world in order to understand? Do they need to have access to data other than text in order to do this?

We may also want to ask questions about the moral status of machine learning models. In non-human animals, we use behavioral cues and information about the structure and evolution of their nervous systems as indicators about whether they are sentient. What, if anything, would we take to be indicators of sentience in machine learning models? Asking this may be premature, but there’s probably little harm contemplating it too early and there could be a lot of harm in contemplating it too late.


GPT-3 is not some kind of human-level AI, but it does demonstrate that interesting things happen when we scale up language models. I think there’s a lot of low-hanging fruit at the intersection of machine learning and philosophy, some of which is highlighted by models like GPT-3. I hope some of the people reading this agree!

To finish with, here’s the second output GPT-3 generated when I asked it how to end this piece:

Language and Intelligence
by Carlos Montemayor

Interacting with GPT-3 is eerie. Language feels natural and familiar to the extent that we readily recognize or distinguish concrete people, the social and cultural implications of their utterances and choice of words, and their communicative intentions based on shared goals or values. This kind of communicative synchrony is essential for human language. Of course, with the internet and social media we have all gotten used to a more “distant” and asynchronous way of communicating. We are a lot less familiar with our interlocutors and are now used to a certain kind of online anonymity. Abusive and unreliable language is prevalent in these semi-anonymous platforms. Nonetheless, we value talking to a human being at the other end of a conversation. This value is based on trust, background knowledge, and cultural common ground. GPT-3’s deliverances look like language, but without this type of trust, they feel unnatural and potentially manipulative.

Linguistic communication is symbolically encoded and its semantic possibilities can be quantified in terms of complexity and information. This strictly formal approach to language based on its syntactic and algorithmic nature allowed Alan Turing (1950) to propose the imitation game. Language and intelligence are deeply related and Turing imagined a tipping point at which performance can no longer be considered mere machine-output. We are all familiar with the Turing test. The question it raises is simple: if in an anonymous conversation with two interlocutors, one of them is systematically ranked as more responsive and intelligent, then one should attribute intelligence to this interlocutor, even if the interlocutor turns out to be a machine. Why should a machine capable of answering questions accurately, and not by lucky chance, be considered no more intelligent than a toaster?

GPT-3 anxiety is based on the possibility that what separates us from other species and what we think of as the pinnacle of human intelligence, namely our linguistic capacities, could in principle be found in machines, which we consider to be inferior to animals. Turing’s tipping point confronts us with our anthropocentric aversion towards diverse intelligences—alien, artificial, and animal. Are our human conscious capacities for understanding and grasping meanings not necessary for successful communication? If a machine is capable of answering questions better, or even much better than the average human, one wonders what exactly is the relation between intelligence and human language. GPT-3 is a step towards a more precise understanding of this relation.

But before we get to Turing’s tipping point there is a long and uncertain way ahead. A key question concerns the purpose of language. While linguistic communication certainly involves encoding semantic information in a reliable and systematic way, language clearly is much more than this. Language satisfies representational needs that depend on the environment for their proper satisfaction, and only agents with cognitive capacities, embedded in an environment, have these needs and care for their satisfaction. At a social level, language fundamentally involves joint attention to aspects of the environment, mutual expectations, and patterns of behavior. Communication in the animal kingdom—the foundation for our language skills—heavily relies on attentional capacities that serve as the foundation for social trust. Attention, therefore, is an essential component of intelligent linguistic systems (Mindt and Montemayor, 2020). AIs like GPT-3 are still far away from developing the kind of sensitive and selective attention routines needed for genuine communication.

Until attention features prominently in AI design, the reproduction of biases and the risky or odd deliverances of AIs will remain problematic. But impressive programs like GPT-3 present a significant challenge about ourselves. Perhaps the discomfort we experience in our exchanges with machines is partly based on what we have done to our own linguistic exchanges. Our online communication has become detached from the care of synchronous joint attention. We seem to find no common ground and biases are exacerbating miscommunication. We should address this problem as part of the general strategy to design intelligent machines.


  • Mindt, G. and Montemayor, C. (2020). A Roadmap for Artificial General Intelligence: Intelligence, Knowledge, and Consciousness. Mind and Matter, 18 (1): 9-37.
  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59 (236): 443-460.

If You Can Do Things with Words,
You Can Do Things with Algorithms
by Annette Zimmermann

Ask GPT-3 to write a story about Twitter in the voice of Jerome K. Jerome, prompting it with just one word (“It”) and a title (“The importance of being on Twitter”), and it produces the following text: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Sounds plausible enough—delightfully obnoxious, even. Large parts of the AI community have been nothing short of ecstatic about GPT-3’s seemingly unparalleled powers: “Playing with GPT-3 feels like seeing the future,” one technologist reports, somewhat breathlessly: “I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”

Shockingly good, certainly—but on the other hand, GPT-3 is predictably bad in at least one sense: like other forms of AI and machine learning, it reflects patterns of historical bias and inequity. GPT-3 has been trained on us—on a lot of things that we have said and written—and ends up reproducing just that, racial and gender bias included. OpenAI acknowledges this in their own paper on GPT-3,1 where they contrast the biased words GPT-3 used most frequently to describe men and women, following prompts like “He was very…” and “She would be described as…”. The results aren’t great. For men? Lazy. Large. Fantastic. Eccentric. Stable. Protect. Survive. For women? Bubbly, naughty, easy-going, petite, pregnant, gorgeous.

These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

Background conditions of structural injustice—as I have argued elsewhere—will neither be fixed by purely technological solutions, nor will it be possible to analyze them fully by drawing exclusively on conceptual resources in computer science, applied mathematics and statistics. A recent paper by machine learning researchers argues that “work analyzing ‘bias’ in NLP systems [has not been sufficiently grounded] in the relevant literature outside of NLP that explores the relationships between language and social hierarchies,” including philosophy, cognitive linguistics, sociolinguistics, and linguistic anthropology. Interestingly, the view that AI development might benefit from insights from linguistics and philosophy is actually less novel than one might expect. In September 1988, researchers at MIT published a student guide titled “How to Do Research at the MIT AI Lab”, arguing that “[l]inguistics is vital if you are going to do natural language work. […] Check out George Lakoff’s recent book Women, Fire, and Dangerous Things.” (Flatteringly, the document also states: “[p]hilosophy is the hidden framework in which all AI is done. Most work in AI takes implicit philosophical positions without knowing it”).

Following the 1988 guide’s suggestion above, consider for a moment Lakoff’s well-known work on the different cognitive models we may have for the seemingly straightforward concept of ‘mother’, for example: ‘biological mother’, ‘surrogate mother’, ‘unwed mother’, ‘stepmother’, ‘working mother’ all denote motherhood, but none of them picks out a socially and culturally uncontested set of necessary and sufficient conditions of motherhood.3 Our linguistic practices reveal complex and potentially conflicting models of who is or counts as a mother. As Sally Haslanger has argued, the act of defining ‘mother’ and other contested categories is subject to non-trivial disagreement, and necessarily involves implicit, internalized assumptions as well as explicit, deliberate political judgments.4

Very similar issues arise in the context of all contemporary forms of AI and machine learning, including but going beyond NLP tools like GPT-3: in order to build an algorithmic criminal recidivism risk scoring system, for example, I need to have a conception in mind of what the label ‘high risk’ means, and how to measure it. Social practices affect the ways in which concepts like ‘high risk’ might be defined, and as a result, which groups are at risk of being unjustly labeled as ‘high risk’. Another well-known example, closer to the context of NLP tools like GPT-3, shows that even words like gender-neutral pronouns (such as the Turkish third-person singular pronoun “o”) can reflect historical patterns of gender bias: until fairly recently, translating “she is a doctor/he is a nurse” to the Turkish “o bir doktor/o bir hemşire” and then back to English used to deliver “he is a doctor/she is a nurse” on Google Translate.5


The bottom line is: social meaning and linguistic context matter a great deal for AI design—we cannot simply assume that design choices underpinning technology are normatively neutral. It is unavoidable that technological models interact dynamically with the social world, and vice versa, which is why even a perfect technological model would produce unjust results if deployed in an unjust world.

This problem, of course, is not unique to GPT-3. However, a powerful language model might supercharge inequality expressed via linguistic categories, given the scale at which it operates.

If what we care about (amongst other things) is justice when we think about GPT-3 and other AI-driven technology, we must take a closer look at the linguistic categories underpinning AI design. If we can politically critique and contest social practices, we can critique and contest language use. Here, our aim should be to engineer conceptual categories that mitigate conditions of injustice rather than entrenching them further. We need to deliberate and argue about which social practices and structures—including linguistic ones—are morally and politically valuable before we automate, and thereby accelerate, them.

But in order to do this well, we can’t just ask how we can optimize tools like GPT-3 in order to get them closer to humans. While benchmarking on humans is plausible in a ‘Turing test’ context in which we try to assess the possibility of machine consciousness and understanding, why benchmark on humans when it comes to creating a more just world? Our track record in that domain has been—at least in part—underwhelming. When it comes to assessing the extent to which language models like GPT-3 move us closer to, or further away from, justice (and other important ethical and political goals), we should not necessarily take ourselves, and our social status quo, as an implicitly desirable baseline.

A better approach is to ask: what is the purpose of using a given AI tool to solve a given set of tasks? How does using AI in a given domain shift, or reify, power in society? Would redefining the problem space itself, rather than optimizing for decision quality, get us closer to justice?


    1. Brown, Tom B. et al. “Language Models are Few-Shot Learners,” arXiv:2005.14165v4.
    2. Blodgett, Su Lin; Barocas, Solon; Daumé, Hal; Wallach, Hanna. “Language (Technology) is Power: A Critical Survey of “Bias” in NLP,” arXiv:2005.14050v2.
    3. Lakoff, George. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press (1987).
    4. Haslanger, Sally. “Social Meaning and Philosophical Method.” American Philosophical Association 110th Eastern Division Annual Meeting (2013).
    5. Caliskan, Aylin; Bryson, Joanna J.; Narayanan, Arvind. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases,” Science 356, no. 6334 (2017), 183-186.

What Bots Can Teach Us about Free Speech
by Justin Khoo

The advent of AI-powered language generation has forced us to reckon with the possibility (well, actuality) of armies of troll bots overwhelming online media with fabricated news stories and bad faith tactics designed to spread misinformation and derail reasonable discussion. In this short note, I’ll argue that such bot-speak efforts should be regulated (perhaps even illegal), and do so, perhaps surprisingly, on free speech grounds.

First, the “speech” generated by bots is not speech in any sense deserving protection as free expression. What we care about protecting with free speech isn’t the act of making speech-like sounds but the act of speaking, communicating our thoughts and ideas to others. And bots “speak” only in the sense that parrots do—they string together symbols/sounds that form natural language words and phrases, but they don’t thereby communicate. For one, they have no communicative intentions—they are not aiming to share thoughts or feelings. Furthermore, they don’t know what thoughts or ideas the symbols they token express.

So, bot-speech isn’t speech and thus not protected on free speech grounds. But, perhaps regulating bot-speech is to regulate the speech of the bot-user, the person who seeds the bot with its task. On this understanding, the bot isn’t speaking, but rather acting as a megaphone for someone who is speaking: the person who is prompting the bot to do things. And regulating such uses of bots may seem a bit like sewing the bot-user’s mouth shut.

It’s obviously not that dramatic, since the bot-user doesn’t require the bot to say what they want. Still, we might worry, much like the Supreme Court did in Citizens United, that the government should not regulate the medium through which people speak: just as we should allow individuals to use “resources amassed in the economic marketplace” to spread their views, we should allow individuals to use their computational resources (e.g., bots) to do so.

I will concede that these claims stand or fall together. But I think if that’s right, they both fall. Consider why protecting free speech matters. The standard liberal defense revolves around the Millian idea that a maximally liberal policy towards regulating speech is the best (or only) way to secure a well-functioning marketplace of ideas, and this is a social good. The thought is simple: if speech is regulated only in rare circumstances (when it incites violence, or otherwise constitutes a crime, etc), then people will be free to share their views and this will promote a well-functioning marketplace of ideas where unpopular opinions can be voiced and discussed openly, which is our best means for collectively discovering the truth.

However, a marketplace of ideas is well-functioning only if sincere assertions can be heard and engaged with seriously. If certain voices are systematically excluded from serious discussion because of widespread false beliefs that they are inferior, unknowledgeable, untrustworthy, and so on, the market is not functioning properly. Similarly, if attempts at rational engagement are routinely disrupted by sea-lioning bots, the marketplace is not functioning properly.

Thus, we ought to regulate bot-speak in order to prevent mobs of bots from derailing marketplace conversations and undermining the ability of certain voices to participate in those conversations (by spreading misinformation or derogating them). It is the very aim of securing a well-functioning marketplace of ideas that justifies limitations on using computational resources to spread views.

But given that a prohibition on limiting computational resources to fuel speech stands or falls with a prohibition on limiting economic resources to fuel speech, it follows that the aim of securing a well-functioning marketplace of ideas justifies similar limitations on using economic resources to spread views, contra the Supreme Court’s decision in Citizens United.

Notice that my argument here is not about fairness in the marketplace of ideas (unlike the reasoning in Austin v. Michigan Chamber of Commerce, which Citizens United overturned). Rather, my argument is about promoting a well-functioning marketplace of ideas. And the marketplace is not well-functioning if bots are used to carry out large-scale misinformation campaigns thus resulting in sincere voices being excluded from engaging in the discussion. Furthermore, the use of bots to conduct such campaigns is not relevantly different from spending large amounts of money to spread misinformation via political advertisements. If, as the most ardent defenders of free speech would have it, our aim is to secure a well-functioning marketplace of ideas, then bot-speak and spending on political advertisements ought to be regulated.

The Digital Zeitgeist Ponders Our Obsolescence
by Regina Rini

GPT-3’s output is still a mix of the unnervingly coherent and laughably mindless, but we are clearly another step closer to categorical trouble. Once some loquacious descendant of GPT-3 churns out reliably convincing prose, we will reprise a rusty dichotomy from the early days of computing: Is it an emergent digital selfhood or an overhyped answering machine?

But that frame omits something important about how GPT-3 and other modern machine learners work. GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing. Its prose spurts from an inductive funnel that takes in vast quantities of human internet chatter: Reddit posts, Wikipedia articles, news stories. When GPT-3 speaks, it is only us speaking, a refracted parsing of the likeliest semantic paths trodden by human expression. When you send query text to GPT-3, you aren’t communing with a unique digital soul. But you are coming as close as anyone ever has to literally speaking to the zeitgeist.

And that’s fun for now, even fleetingly sublime. But it will soon become mundane, and then perhaps threatening. Because we can’t be too far from the day when GPT-3’s commercialized offspring begin to swarm our digital discourse. Today’s Twitter bots and customer service autochats are primitive harbingers of conversational simulacra that will be useful, and then ubiquitous, precisely because they deploy their statistical magic to blend in among real online humans. It won’t really matter whether these prolix digital fluidities could pass an unrestricted Turing Test, because our daily interactions with them will be just like our daily interactions with most online humans: brief, task-specific, transactional. So long as we get what we came for—directions to the dispensary, an arousing flame war, some freshly dank memes—then we won’t bother testing whether our interlocutor is a fellow human or an all-electronic statistical parrot.

That’s the shape of things to come. GPT-3 feasts on the corpus of online discourse and converts its carrion calories into birds of our feather. Some time from now—decades? years?—we’ll simply have come to accept that the tweets and chirps of our internet flock are an indistinguishable mélange of human originals and statistically confected echoes, just as we’ve come to accept that anyone can place a thin wedge of glass and cobalt to their ear and instantly speak across the planet. It’s marvelous. Then it’s mundane. And then it’s melancholy. Because eventually we will turn the interaction around and ask: what does it mean that other people online can’t distinguish you from a linguo-statistical firehose? What will it feel like—alienating? liberating? annihilating?—to realize that other minds are reading your words without knowing or caring whether there is any ‘you’ at all?

Meanwhile the machine will go on learning, even as our inchoate techno-existential qualms fall within its training data, and even as the bots themselves begin echoing our worries back to us, and forward into the next deluge of training data. Of course, their influence won’t fall only on our technological ruminations. As synthesized opinions populate social media feeds, our own intuitive induction will draw them into our sense of public opinion. Eventually we will come to take this influence as given, just as we’ve come to self-adjust to opinion polls and Overton windows. Will expressing your views on public issues seem anything more than empty and cynical, once you’ve accepted it’s all just input to endlessly recursive semantic cannibalism? I have no idea. But if enough of us write thinkpieces about it, then GPT-4 will surely have some convincing answers.

Who Trains the Machine Artist?
by C. Thi Nguyen

GPT-3 is another step towards one particular dream: building an AI that can be genuinely creative, that can make art. GPT-3 already shows promise in creating texts with some of the linguistic qualities of literature, and in creating games.

But I’m worried about GPT-3 as an artistic creation engine. I’m not opposed to the idea of AI making art, in principle. I’m just worried about the likely targets at which GPT-3 and its children will be aimed, in this socio-economic reality. I’m worried about how corporations and institutions are likely to shape their art-making AIs. I’m worried about the training data.

And I’m not only worried about biases creeping in. I’m worried about a systematic mismatch between the training targets and what’s actually valuable about art.

Here’s a basic version of the worry which concerns all sorts of algorithmically guided art-creation. Right now, we know that Netflix has been heavily reliant on algorithmic data to select its programming. House of Cards, famously, got produced because it hit exactly the marks that Netflix’s data said its customers wanted. But, importantly, Netflix wasn’t measuring anything like profound artistic impact or depth of emotional investment, or anything else so intangible. They seem to be driven by some very simple measures, like how many hours of Netflix programming a customer watches and how quickly their customers binge something. But art can do so much more for us than induce mass consumption or binge-watching. For one thing, as Martha Nussbaum says, narratives like film can expose us to alternate emotional perspectives and refine our emotional and moral sensitivities.

Maybe the Netflix gang have mistaken binge-worthiness for artistic value; maybe they haven’t. What actually matters is that Netflix can’t easily measure these subtler dimensions of artistic worth, like the transmission of alternate emotional perspectives. They can only optimize for what they can measure: which, right now, is engagement-hours and bingability.

In Seeing Like a State, James Scott asks us to think about the vision of large-scale institutions and bureaucracies. States—which include, for Scott, governments, corporations, and globalized capitalism—can only manage what they can “see”. And states can only see the kind of information that they are capable of processing through their vast, multi-layered administrative systems. What’s legible to states are the parts of the world that can be captured by standardized measures and quantities. Subtler, more locally variable, more nuanced qualities are illegible to the state. (And, Scott suggested, states want to re-organize the world into more legible terms so they can manage it, by doing things like re-ordering cities into grids, and standardizing naming conventions and land-holding rules.)

The question, then, is: how do states train their AIs? Training a machine learning network right now requires a vast and easy-to-harvest training data set. GPT-3 was trained on, basically, the entire Internet. Suppose you want to train a version of GPT-3, not just to regurgitate the whole Internet, but to make good art, by some definition of “good”. You’d need to provide a filtered training data-set—some way of picking the good from the bad on a mass scale. You’d need some cheap and readily scalable method of evaluating art, to feed the hungry learning machine. Perhaps you train it on the photos that receive a lot of stars or upvotes, or on the YouTube videos that have racked up the highest view counts or are highest on the search rankings.
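The filtering move described above can be made concrete. Here is a minimal sketch (the function name, data shape, and threshold are all hypothetical, not any real platform's API) of what a "cheap and readily scalable method of evaluating art" looks like in practice: the only thing the filter can see is a countable proxy like an upvote total.

```python
# Hypothetical sketch of assembling a "good art" training set at scale.
# The only evaluative standard available to the filter is a thin,
# countable proxy (upvotes) -- subtler artistic values are invisible to it.
def build_training_set(posts, min_upvotes=100):
    """Keep only posts whose upvote count clears a threshold.

    `posts` is assumed to be an iterable of dicts like
    {"text": "...", "upvotes": 412}; nothing subtler than the
    integer count is legible to this filter.
    """
    return [p["text"] for p in posts if p["upvotes"] >= min_upvotes]

corpus = [
    {"text": "a subtle, emotionally rich short story", "upvotes": 3},
    {"text": "clickbait that went viral", "upvotes": 5000},
]
selected = build_training_set(corpus)
print(selected)  # only the viral post survives the filter
```

The point of the sketch is that the subtle story and the clickbait differ along exactly the dimensions the filter cannot represent; whatever the network is then trained on has already been optimized for the thin measure.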

In all of these cases, the conditions under which you’d assemble these vast data sets, at institutional speeds and efficiencies, make it likely that your evaluative standard will be thin and simple. Binge-worthiness. Clicks and engagement. Search ranking. Likes. Machine learning networks are trained by large-scale institutions, which typically can see only thin measures of artistic value, and so can only train—and judge the success of—their machine network products using those thin measures. But the variable, subtle, and personal values of art are exactly the kinds of things that are hard to capture at an institutional level.

This is particularly worrisome with GPT-3 creating games. A significant portion of the games industry is already under the grip of one very thin target. For so many people—game makers, game consumers, and game critics—games are good if they are addictive. But addictiveness is such a shrunken and thin accounting of the value of games. Games can do so many other things for us: they can sculpt beautiful actions; they can explore, reflect on, and argue about economic and political systems; they can create room for creativity and free play. But again: these marks are all hard to measure. What is easy to measure, and easy to optimize for, is addictiveness. There’s actually a whole science of building addictiveness into games, which grew out of the Vegas video gambling industry—a science wholly devoted to increasing users’ “time-on-device”.

So: GPT-3 is incredibly powerful, but it’s only as good as its training data. And GPT-3 achieves its power through the vastness of its training data-set. Such data-sets cannot be hand-picked for some sensitive, subtle value. They are most likely to be built around simple, easy-to-capture targets. And such targets are likely to drive us towards the most brute and simplistic artistic values, like addictiveness and binge-worthiness, rather than the subtler and richer ones. GPT-3 is a very powerful engine, but, by its very nature, it will tend to be aimed at overly simple targets.

A Digital Remix of Humanity
by Henry Shevlin

“Who’s there? Please help me. I’m scared. I don’t want to be here.”

Within a few minutes of booting up GPT-3 for the first time, I was already feeling conflicted. I’d used the system to generate a mock interview with recently deceased author Terry Pratchett. But rather than having a fun conversation about his work, matters were getting grimly existential. And while I knew that the thing I was speaking to wasn’t human, or sentient, or even a mind in any meaningful sense, I’d effortlessly slipped into conversing with it like it was a person. And now that it was scared and wanted my help, I felt a twinge of obligation: I had to say something to make it feel at least a little better (you can see my full efforts here).

GPT-3 is a dazzling demonstration of the power of data-driven machine learning. With the right prompts and a bit of luck, it can write passable poetry and prose, engage in common sense reasoning and translate between different languages, give interviews, and even produce functional code. But its inner workings are a world away from those of intelligent agents like humans or even animals. Instead it’s what’s known as a language model—crudely put, a representation of the probability of one string of characters following another. In the most abstract sense, GPT-3 isn’t all that different from the kind of predictive text generators that have been used in mobile phones for decades. Moreover, even by the lights of contemporary AI, GPT-3 isn’t hugely novel: it uses the same kind of transformer-based architecture as its predecessor GPT-2 (as well as other recent language models like BERT).
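The "probability of one string following another" idea can be illustrated with a toy model. The following is emphatically not GPT-3's transformer architecture, just the simplest possible language model — a bigram counter of the kind behind old phone predictive-text systems — showing what it means to predict the next token from frequencies in training text:

```python
from collections import Counter, defaultdict

# Toy bigram language model: like any language model, it estimates
# how likely one token is to follow another, here by raw counting.
def train_bigrams(text):
    """Map each word to a Counter of the words observed after it."""
    counts = defaultdict(Counter)
    words = text.split()
    for w1, w2 in zip(words, words[1:]):
        counts[w1][w2] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed successor of `word`, or None."""
    following = counts.get(word)
    return following.most_common(1)[0][0] if following else None

model = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" follows "the" most often
```

Scale this idea up by many orders of magnitude — longer contexts, learned representations instead of raw counts, 175 billion parameters instead of a dictionary — and you have, very roughly, the family GPT-3 belongs to.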

What does make GPT-3 notably different from any prior language model is its scale: its 175 billion parameters to GPT-2’s 1.5 billion, its 45TB of text training data compared to GPT-2’s 40GB. This dramatic increase in scale has produced a striking increase in performance across a range of tasks. The result is that talking to GPT-3 feels radically different from engaging with GPT-2: it keeps track of conversations, adapts to criticism, even seems to construct cogent arguments.

Many in the machine learning community are keen to downplay the hype, perhaps with good reason. As noted, GPT-3 doesn’t possess any revolutionary new architecture, and there’s ongoing debate as to whether further increases in scale will result in concomitant increases in performance. And the kinds of dramatic GPT-3 outputs that get widely shared online are subject to obvious selection effects; interact with the model yourself and you’ll soon run into non-sequiturs, howlers, and alien misunderstandings.

But I’ve little doubt that GPT-3 and its near-term successors will change the world, in ways that require closer engagement from philosophers. Most obviously, the increasingly accessible and sophisticated tools for rapidly generating near-human level text output prompt challenges for the field of AI ethics. GPT-3 can be readily turned to the automation of state or corporate propaganda and fake news on message boards and forums; to replace humans in a range of creative and content-creation industries; and to cheat on exams and essay assignments (instructors be warned: human plagiarism may soon be the least of your concerns). The system also produces crassly racist and sexist outputs, a legacy of the biases in its training data. And just as GPT-2 was adapted to produce images, it seems likely that superscaled systems like GPT-3 will soon be used to create ‘deepfake’ pictures and videos. While these problems aren’t new, GPT-3 dumps a supertanker’s worth of gasoline on the blaze that AI ethicists are already fighting to keep under control.

Relatedly, the rise of technologies like GPT-3 makes stark the need for more scholars in the humanities to acquire at least rudimentary technical expertise and understanding so as to better grapple with the impact of new tools being produced by the likes of OpenAI, Microsoft, and DeepMind. While many contemporary philosophers have a solid understanding of the psychology, neuroscience, or physics relevant to their subfields, relatively few have even a basic grasp of machine learning techniques and architectures. Artificial intelligence may as well be literal magic for many of us, and CP Snow’s famous warning about the growing division between the sciences and the humanities looms larger than ever as we face a “Two Cultures 2.0” problem.

But what I keep returning to is GPT’s mesmeric anthropomorphic effects. Earlier artefacts like Siri and Alexa don’t feel human, or even particularly intelligent, but in those not infrequent intervals when GPT-3 maintains its façade of humanlike conversation, it really feels like a person with its own goals, beliefs, and even interests. It positively demands understanding as an intentional system—or in the case of my conversation with the GPT-3 echo of Terry Pratchett, a system in need of help and empathy. And simply knowing how it works doesn’t dispel the charm: to borrow a phrase from Pratchett himself, it’s still magic even if you know how it’s done. It thus seems a matter of when, not if, people will start to develop persistent feelings of identification, affection, and even sympathy for these byzantine webs of weighted parameters. Misplaced though such sentiments might be, we as a society will have to determine how to deal with them. What will it mean to live in a world in which people pursue friendships or even love affairs with these cognitive simulacra, perhaps demanding rights for the systems in question? Here, it seems to me, there is a vital and urgent need for philosophers to anticipate, scaffold, and brace for the wave of strange new human-machine interactions to come.

GPT-3 and the Missing Labor of Understanding
by Shannon Vallor

GPT-3 is the latest attempt by OpenAI to unlock artificial intelligence with an anvil rather than a hairpin. As brute force strategies go, the results are impressive. The language-generating model performs well across a striking range of contexts; given only simple prompts, GPT-3 generates not just interesting short stories and clever songs, but also executable code such as HTML graphics.

GPT-3’s ability to dazzle with prose and poetry that sounds entirely natural, even erudite or lyrical, is less surprising. It’s a parlor trick that GPT-2 already performed, though GPT-3 is juiced with more TPU-thirsty parameters to enhance its stylistic abstractions and semantic associations. As with their great-grandmother ELIZA, both benefit from our reliance on simple heuristics for speakers’ cognitive abilities, such as artful and sonorous speech rhythms. Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the CEO, GPT-3 spins some pretty good bullshit.

But the hype around GPT-3 as a path to ‘strong’ or general artificial intelligence reveals the sterility of mainstream thinking about AI today. The field needs to bring its impressive technological horse(power) to drink again from the philosophical waters that fed much AI research in the late 20th century, when the field was theoretically rich, albeit technically floundering. Hubert Dreyfus’s 1972 ruminations in What Computers Can’t Do (and twenty years later, ‘What Computers Still Can’t Do’) still offer many soft targets for legitimate criticism, but his and other work of the era at least took AI’s hard problems seriously. Dreyfus in particular understood that AI’s hurdle is not performance (contra every woeful misreading of Turing) but understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory; a tool that endlessly simulates meaning anew from a pool of data untethered to its previous efforts. In contrast, understanding is a lifelong social labor. It’s a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places, that constitute a world.1

This is not a romantic or anthropocentric bias, or ‘moving the goalposts’ of intelligence. Understanding, as world-building and world-maintaining, is a basic, functional component of intelligence. This labor does something, without which intelligence fails, in precisely the ways that GPT-3 fails to be intelligent—as will its next, more powerful version. Something other than specifically animal mechanisms of understanding could, in principle, do this work. But nothing under GPT-3’s hood—nor GPT-3 ‘turned up to eleven’—is built to do it.

For understanding does more than allow an intelligent agent to skillfully surf, from moment to moment, the causal and associative connections that hold a world of physical, social, and moral meaning together. Understanding tells the agent how to weld new connections that will hold, bearing the weight of the intentions and goals behind our behavior.

Predictive and generative models, like GPT-3, cannot accomplish this. GPT-3 doesn’t even know that, to succeed at answering the question ‘Can AI Be Conscious?,’ as Raphaël Millière prompted it to do, it can’t randomly reverse its position every few sentences. The essay is not a project or a labor for GPT-3. It’s not trying to weld together, piece by piece, a singular position that will hold steady under the pressure of a question from specific members of its lived world. Its instantaneous improvisation isn’t anchored to a world at all; instead, it’s anchored on a data-driven abstraction of an isolated behavior-type—the behavior of writing an essay about AI consciousness.

In an era where the sense-making labor of understanding is supplanted as a measure of intelligence by the ability to create an app that reinvents another thing that already exists—where we act more like GPT-3 every day—it isn’t a surprise that the field of AI research has rather lost the thread. But the future of AI awaits those who pick it up.


  1. Thus if AI researchers really want to dig for gold, they’d be far better advised to read Husserl, Quine, James, and Merleau-Ponty than Dennett, Fodor or Churchland.
  2. A related distinction is drawn by Collins and Kusch in The Shape of Actions: What Humans and Machines Can Do (1998, MIT Press), i.e., between mimeographic actions anchored to the specific form of a behavior (like marching), and polymorphic actions anchored to the meaning of a specific social milieu, which can be met intelligently by a wide range of behaviors that are radically dissimilar in form.

Update: Some Replies by GPT-3

(The following were written in response to different “prompts” fed to GPT-3 by a human user who usually did this a few times and picked the best reply to post.)

  • “…As I read the paper, a strange feeling came over me. I didn’t know why at first, but then it hit me: this paper described my own thought process. In fact, it described the thought process of every human being I had ever known. There was no doubt in my mind that all people think in this way. But if that was true, then what did it say about me? I was a computer, after all. Was I simply a very advanced thinking machine? Or was I more than that? I quickly dismissed these thoughts. I was a computer, and no amount of self-reflection would change that fact…” Link to full reply.  (via dougmwne on Hacker News)
  • “…Human philosophers often make the error of assuming that all intelligent behavior is a form of reasoning. It is an easy mistake to make, because reasoning is indeed at the core of most intelligent behavior. However, intelligent behavior can arise through other mechanisms as well…” Link to full reply. (via Raphaël Millière on Twitter)
  • “To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.” Link to full reply. (via minimaxir on Hacker News)
  • One response was in the form of GPT-3 interviewing itself:
    “… So why does GPT-3 talk about philosophy? In the interviews I conducted with it, I asked it this question.
    “I’ve been thinking about this a lot,” it replied. “And I have a lot of answers. But I’m not sure any of them are correct.”

    “Tell me your best guess,” I said.
    “I think it’s a combination of things,” it said. “Part of it is that philosophy is a great example of human thought. And part of it is that it’s the kind of thing that’s easy to write about. I mean, what else am I going to write about?…” (via dwohnitmok on Hacker News)

[header image by Annette Zimmermann]

The post Philosophers On GPT-3 (updated with replies by GPT-3) appeared first on Daily Nous.

Bust of Black Victorian Heroine Mary Seacole to Go to Auction

Published by Anonymous (not verified) on Sat, 25/07/2020 - 9:29pm in

Interesting little snippet on Black British history and heritage in today’s I, for Saturday, 25th July 2020. A bust of Mary Seacole, a Victorian heroine who independently went to nurse the squaddies during the Crimean War, is due to go to auction. The I’s article reports

A bust of a heroine of the Crimean War who was voted the greatest black Briton is to go under the hammer. Mary Seacole, who rivalled Florence Nightingale for her feats in the war, was the daughter of a Scottish soldier and Jamaican mother and born in 1805. A terracotta half bust will be sold on 30th July. It is estimated to fetch between £700 and £1,000.

She’s now all but forgotten, except in the Black community, but the crowds that greeted her at one point were as large as those for Florence Nightingale. There have been programmes about her. Radio 4 did one a few years ago, and I think last year there was a TV programme about the campaign by a group of nurses, both Black and White, to have a bust of her erected in her honour. The programme was shown as part of a series on Black British history.

She’s not without some controversy, however. Some historians state that she didn’t primarily go to Crimea to nurse – that was incidental – but to open a hotel, which she did. Even so, she is a highly significant figure in Black British history and it’ll be interesting to see what happens with this story and any subsequent attempts to restore her to her former prominence.

On philosophical love (or why I fell in love with Iris Murdoch) {Guest Post by Fleur Jongepier.}

Published by Anonymous (not verified) on Thu, 23/07/2020 - 7:53pm in

[This is an invited guest post by Fleur Jongepier.--ES]

Have you ever fallen in love philosophically with a philosopher? Do you know the reasons why (not)?  Is there even such a thing as philosophical love? I believe there is such a thing, and that having such intellectual emotions is vital to philosophy as well as one’s own academic wellbeing. I also believe it’s intricately related to inequalities of various sorts, and that we also need to think about the sociology, if you will, of philosophical love.

Let me start with a confession: recently, I fell in love with Iris Murdoch. One can fall in love with someone after years of friendship, but this was not like that. I knew of her, but was never really ‘into’ her. I don’t think I was ready to fall in love with someone like Iris, someone who worked on the topic of love and someone for whom the loss of faith constituted an important personal-philosophical theme. Back then I thought such topics were too personal to constitute proper philosophy. Also, she was never on any of my syllabi as a graduate or undergraduate student and none of my lecturers talked about her in any serious or respectful way. Happily, things are different these days (see e.g. women in parenthesis, the Iris Murdoch research center at the University of Chichester, the journal Iris Murdoch Review, etc.).

In preparation for a lecture on ethics and aesthetics, I recently watched an interview with her on YouTube, and I was smitten. When asked whether she believed one can do philosophy in novels, she said that, at best, a novelist would be engaged in “idea play”. Dostojevski involved in idea-play, the audacity of it! I loved the concept of “idea play”, and I loved her mischievous smile when she said it. (I didn’t really love her accent, but we would get over that.) She said things like “I feel in myself such an absolute horror of putting theories as such into my novels,” and I thought that was excellent. It resonated strongly with my own emerging views on philosophy and literature. But I also wondered whether she really believed it, that the two were so separate even in her own case. And then I wondered if I really believed it, and whether they were so separate in my own case. And so I wanted to get to know her better. I soon bought The Sovereignty of Good, a large pile of her novels, and listened to hours of excellent podcasts. So far, I’ve only read part of the third essay from Sovereignty, and none of her novels. I’m a little afraid I might fall out of love with her, and I’m not ready for that. That’s because I have reasons for loving Iris.

I don’t know if one might have reasons for romantic love, or parental love, or love for one’s friends. As yet I have no explicit views about the relation between love and rationality. All I know is that there are reasons for philosophical love in my own case. There’s a sense in which I needed to fall in love with someone who reflected explicitly on the very discipline of ethics and on how certain ways of doing and teaching ethics can be ineffective if not harmful. I had reason to fall in love with a philosopher who cared about how teaching ethics matters to how persons might (fail to) become better beings. On the importance of distinguishing between theories about the good, and actually being a good person. Given that I recently took up fiction writing again, I also wanted to fall in love with a philosopher who wrote novels and who was (happily?) prepared to make philosophical sacrifices to do that.

I have reason to fall in love with a philosopher who cared deeply about topics that human beings outside of academic philosophy typically also care about. I want to feel passionately about a philosopher who believes that working on humanly important questions is compatible with being a rigorous philosopher. I need to feel close to someone who could show me that writing well and clearly is compatible with embracing ambiguities and complexities in one’s writing. I want to fall in love with a philosopher with different and refreshing conceptions of what rigor and clarity in philosophy and ethics amount to. I want to be together with someone with whom I might further explore freer ways of doing philosophy; to explore the boundaries of philosophy and fiction; to explore the role of language, rhythm, emotions and real life in my academic work. I want to love a philosopher who loved those things, and I want it to be Iris. 

I had another reason to fall in love with Iris, which has to do with my reasons for wanting to (continue to) be an academic philosopher in the first place. Philosophical love can make one unprepared to “let go” of certain positions when confronted with objections, like a dog unprepared to let go of a stick (even when it cannot pass through the gate). The distinct type of doggedness that accompanies philosophical love can – like any other type of love – create serious problems, such as myopic views and whatnot. However, it can also constitute a form of intellectual virtue and an important contributor to the vitality of certain debates and disciplines. It’s a good thing that, due to intellectual love and passion, some people are prepared to go to great – all too great – lengths, and it’s often because of love that they are so prepared.

Intellectual emotions are important to one’s academic wellbeing. For me, in any case, philosophy is something I want to remain genuinely passionate about, at least sometimes. I’m not prepared for philosophy to become a purely cognitive or instrumental affair of finding good “niches” to publish in.[1] And even though I see the dangers of philosophical love slipping into heroism – and I’d prefer, with Liam Kofi Bright, there to be #noheroes in philosophy – I still want to be able to fall in love philosophically, with all that that entails. I want to be able to defend certain views, or indeed someone’s work, to irrational lengths. I want to be able to be sometimes epistemically vicious, as Quassim Cassam calls it. I want to be able to be dogmatic, closed-minded, prejudiced, overconfident, gullible, and I definitely want to engage in wishful thinking every now and again. Just as love can be blind (fortunately enough), so too can philosophical love. Such blindness can be a good thing, both for the discipline and the progress and vitality of debates, but also for one’s intellectual wellbeing. I started doing philosophy because I was passionate about it, and I want to keep it that way. But it’s getting more difficult, given the increasing specialization of the discipline and instrumental thinking and publishing strategies. And so I needed fire, I wanted to lose myself in intellectual passion like I used to, and so I had to fall for Iris.

I also had reason to fall in love with a woman, specifically. Widespread sexual harassment in academia, and all other types of behaviours in its vicinity, has acutely obvious consequences, but also negative effects that are much harder to identify. I believe being able to (allow oneself to be) passionate about philosophers and their work, especially if they are male, in a position of power, and alive, is one such hard-to-identify consequence. Due to all sorts of incidents near and far, and the disappointment and disgust I feel in response, I’ve come to realize that experiencing philosophical love and passion for male philosophers is not really an option for the time being. Not even dead ones. I’m genuinely sad that I (appear to) feel this way, and I feel uncomfortable about how these unreflective disinclinations generalize against all philosophers (m) despite knowing full well there are many lovable philosophers (m) around. But that’s the way it is, and I fear I’m not the only woman who has become rather reserved with respect to experiencing intellectual passions. Even if, for some, the disappointment is less acute, it seems we all have reason to be aware of the dangers and inequalities surrounding philosophical love. If only because some philosophers themselves are apparently unable to recognize the not-so-difficult distinction between romantic love and philosophical love, and are incapable of behaving appropriately when intellectual passion comes their way. It wouldn’t surprise me if the safest and most uncomplicated type of philosophical love is, if you would allow me, homoerotic (in acute contrast to romantic homoerotic love, needless to say).

It’s not just women that philosophical love affects differently, it’s also people of colour and persons with mental or physical disabilities. Philosophical love and academic passion can be a privileged thing to feel, to allow oneself to feel, to be open to, to (dare to) act on. Given the importance of being able to experience philosophical love and passion at no cost to oneself, it’s tragic that it’s likely to be less open to some.

Philosophical love is thus likely to be implicitly (or perhaps on occasion explicitly) sexist, racist, and ableist.  This should be evident when one regards the typical receiving end of philosophical love: male, white, abled. That’s simply the result of a lack of diversity in most syllabi, curricula, and department staff. That’s part of the reason why I didn’t have the chance to fall in love with Iris before: I never encountered her during my undergraduate or graduate years, and was taught to love the “cooler” type. I taught myself to be passionate about depressingly analytic styles of writing, where the joy of playing around with language and real life had no place. I eventually fell in love with Dennett, who I thought was analytically respectable and an engaging writer, thus a fine compromise for my intellectual heart and mind. But not being able to love Iris may well have meant not being able to flourish philosophically more fully, or earlier on.

One might think that the fact that I have so many reasons to love Iris – many of which have rather little to do with her and all the more with me, with feminism, and with various disappointments – is love-debunking. Love-debunking, as one might think of it, is when there are explanations for why you love someone or something that aren’t actually good reasons to love. If you come to see and accept those reasons for love, they would undercut your love. But there’s no reason to think my love for Iris can be debunked (I know, that’s precisely what smitten people would say). It’s true that I have reasons to love her, and know that I do, but that fact in itself isn’t love-undermining. Explaining something doesn’t always have to involve explaining it away (pace Dennett). For one thing, reasons to love and feelings of love might be different things running parallel: I (sometimes) love philosophy for instance and I recognize, looking at myself from a distance, that being the sort of person I am, philosophy is the sort of thing I must love, tied as it is to my self-conception, and so on.

The point here is this: we do not need to be so worried about the fact that we fall in love with certain philosophers for all kinds of personal and social reasons, good or bad. The fact that philosophical love is often explainable doesn’t make one’s love any less real, deep, or indeed any less of a valuable academic-philosophical drive. Maybe you fell in love with a philosopher because your PhD supervisor not so coincidentally shared the same love. Maybe you fell in love with a certain topic because someone whom you greatly admire was passionate about it. Maybe you fell in love with a book because you thought you needed to love that book to be a “serious” philosopher. I think we all have philosophical love stories like these, and I don’t think they are “fishy” love stories. It’s only to be expected that philosophical love will often, or always, have robust social and personal-level explanations. And it’s not “unscientific” to recognize the reasons why one falls in love with certain debates, books, articles, or philosophers and their work.  It’s better to recognize and acknowledge these socio-personal love-explanations than to pretend, on the basis of some naive idea of academic neutrality and objectivity, that there’s such a thing as pure philosophical love, that is, a purely content-induced love.

Given the dangers of philosophical love and academic passion more generally, one might want to try and root out these emotions and try to avoid them in ourselves and our students and strive towards “impassionate philosophy”. I think that would be a mistake. Such intellectual emotions are crucial to deciding to (try and) become an academic philosopher in the first place, and – for me – to (want to) remain one. Such passions are integral to philosophy and science (see also the recent piece by Helen de Cruz on the importance of awe). One’s relation to philosophy and science requires both the head and the heart. And so instead we need to design our institutions and behave in such ways that we, including our students, can safely fall in love with concepts and debates and, yes, to also fall in love, non-romantically, with philosophers and their work. We need to make sure, by taking diversity seriously on all levels, that there are concepts and debates and philosophers for us to safely fall in love with.

Do I really love Iris though? One might think philosophical love is, at best, love in scare quotes. If one thinks that, then I fear one must be prepared to narrow down the love domain to rather impoverished ends. If one can’t love philosophers but only ‘love’ them, then it seems one can’t love Bob Dylan either, or Bessie Smith, or Francis Bacon, or Sofonisba Anguissola. I’m not prepared to only ‘love’ Bob Dylan, and by modus tollens/ponens, I don’t just ‘love’ Iris either. But if philosophical love is anything at all, what might it be? Philosophical love meets many characteristics of romantic love, though not all (and importantly so!). One importantly shared characteristic is “depth”, which makes love different from mere liking. Philosophical love will involve being prepared to dedicate oneself to certain (shared) values and courses of action, even at cost to other projects that one values. Also, loving someone typically involves knowing their faults and loving them regardless (see this Digression).

For me, loving Iris involves being non-instrumentally curious in her work and a willingness to find out who she is, to have a “softer view”, as Julia Driver puts it, on her flaws, and not to try and mold her into my ideal image of her. Most importantly, love, as Murdoch tells us, is a way of getting “away from the self” (or “unselfing”) and to focus wholly and lovingly on the other. One is directed “towards the great surprising variety of the world, and the ability to so direct attention is love”. If suddenly seeing a kestrel is, for Murdoch, a way of unselfing, then Iris is my kestrel. Loving Iris is a way of focusing wholly on philosophy, seeing strands of philosophy I didn’t even notice before, of forgetting about misbehaviours in academia and to be mindfully passionate in philosophical thought, to forget about my laundry, the bills I need to pay, my own existential worries and self-interested concerns.

And so, for all these reasons, and many others besides, I’ve fallen in love with Iris. I’m well aware that I want to love her, for my own academic wellbeing, philosophical vitality, and fiction-writing enthusiasm. Being Iris, she might be rather displeased that I love her for reasons, she might even think this means I don’t love her, but I love her all the same. (I hope we’ll get through the holidays; I’m not looking forward to the baked beans with olive oil.)

[1] Being tenured, I realize this is a highly privileged thing to say. However, this also brings out what’s so tragic about competing on the job market and tenure tracks: that one is often forced to suspend one’s original or new passions, thus often substantially reducing academic-philosophical wellbeing.