Games

Doctor Who Gaming - 2 New Adventures

Published by Anonymous (not verified) on Mon, 12/10/2020 - 10:47pm

Tags 

Games


The world of Doctor Who is expanding with the planned release of two brand new adventures

Join Thirteenth Doctor Jodie Whittaker and Tenth Doctor David Tennant on a quest to save reality in a new console and PC game

Dive back into Blink and explore the hidden past of the Weeping Angels in a groundbreaking ‘found phone’ handheld and mobile game

Digital entertainment studio Maze Theory, in partnership with BBC Studios, has today revealed the expansion of the Doctor Who interactive universe with two brand-new video games launching in Spring 2021.

Coming to consoles and PC, Doctor Who: The Edge of Reality reimagines, and builds upon, last year’s VR experience, Doctor Who: The Edge of Time, with a new and compelling first-person adventure. 

With brand-new gameplay, new monsters and new worlds to explore, players will wield the Thirteenth Doctor’s sonic screwdriver on a quest to save the universe.

Players will be guided by the Thirteenth Doctor, voiced by Jodie Whittaker, and joined by the Tenth Doctor, voiced by David Tennant.


Doctor Who: The Edge of Reality features:

  • A Console and PC adventure across Space and Time - built with current and next-generation consoles in mind, Doctor Who: The Edge of Reality features new worlds to explore, new puzzles, new challenges and new gameplay.
  • An Original Doctor Who story - uncover a universe-spanning threat as you seek to save reality from a series of time-breaking glitches. Continue the story that began in The Edge of Time and partner with the Doctor to unearth a greater mystery.
  • New Enemies and AI - come face-to-face with classic Doctor Who monsters including the Daleks and Weeping Angels. Experience the metal-clad terror of the Cybermen and more foes yet to be revealed…

 

The Edge of Reality: Teaser | Doctor Who

 

Also revealed today is Doctor Who: The Lonely Assassins, coming to iOS and Android mobile devices as well as Nintendo Switch.

The game is being developed by award-winning Malaysian studio Kaigan Games, renowned for pushing the boundaries of storytelling within mobile. 

 

The game will see players uncover and decipher the mystery of a ‘found phone’, unravelling a sinister series of events taking place at Wester Drumlins, the iconic ‘uninhabited’ home featured in the legendary Doctor Who episode Blink.

Someone is missing and a menacing new nemesis has emerged. Players will work with Petronella Osgood and other classic characters as they get steadily closer to the truth.

There is only one rule: don't turn your back, don't look away and don't blink!

 

Ian Hambleton, CEO of Maze Theory, said:

With our partners at BBC Studios, we are expanding the Doctor Who universe through a ground-breaking trilogy of experiences, now delivered across multiple devices and platforms.

The uniting of the Thirteenth Doctor and the Tenth in Doctor Who: The Edge of Reality is set to be an epic moment in a game that completely re-imagines last year’s VR experience, while The Lonely Assassins tells a brand new story exploring the legend of one of the most iconic episodes ever. As part of Doctor Who: Time Lord Victorious, we have also delivered an amazing fan-centric update to the VR game Doctor Who: The Edge of Time.

These launches reaffirm the studio’s commitment to take players on exciting and unexpected narrative journeys.

Kevin Jorge, Senior Producer – Games & Interactive, BBC Studios, said:

The Edge of Reality and The Lonely Assassins bring Doctor Who to life on console and mobile in a new and thrilling way. From saving the universe with the Thirteenth and Tenth Doctors, to bringing back the Weeping Angels, it’s going to be an exciting year and we can’t wait to reveal more!

 


Doctor Who: The Edge of Reality will launch on PlayStation 4, Xbox One, Nintendo Switch and Steam in Spring 2021. 

Doctor Who: The Lonely Assassins will launch on iOS, Android & Nintendo Switch in Spring 2021.

Doctor Who: The Edge of Reality is a Maze Theory production for BBC Studios in partnership with Just Add Water.

Doctor Who: The Lonely Assassins is a Maze Theory production for BBC Studios in partnership with Kaigan Games.

 

Doctor Who Announces Two New Games Coming Spring 2021

Published by Anonymous (not verified) on Mon, 12/10/2020 - 5:10am

Tags 

Games, BBC, Doctor Who

Maze Theory and BBC Studios announced two new Doctor Who games are on the way, with a console title and a mobile title coming in Spring 2021. First up, coming to consoles and PC will be Doctor Who: The Edge of Reality, which will build on the story told in the previous VR title, Doctor Who: The […]

The post Doctor Who Announces Two New Games Coming Spring 2021 appeared first on Bleeding Cool News And Rumors.

Doctor Who Zone launches in BBC’s Nightfall game

Published by Anonymous (not verified) on Tue, 25/08/2020 - 6:01pm

Tags 

Games, Doctor Who

Young gamers can now transport themselves inside the iconic world of Doctor Who for a limited time in Nightfall, the BBC’s online multiplayer game.

Nightfall’s REM Zone 2 has been transformed until 29th September, and it’s up to Nightfallers to work together and keep the Doctor’s most infamous villains – the Daleks – at bay.

The free-to-play game gives players the chance to claim new outfits and style their Nightfaller as Jodie Whittaker’s Doctor, or as one of the Doctor’s long-standing enemies, the Cybermen. Once they’ve unlocked the outfits, they’ll be able to keep them forever. 

In Nightfall, players control a version of themselves that exists in their dreams – a Nightfaller. Their purpose: to work with other Nightfallers and defend the Dream from Nightmares, made up of worries from the waking world.

The Doctor Who takeover of REM Zone 2 is one of five REM zones available within the game, hosting up to 20 players across them at a time. Nightfall is being continuously updated and this time-limited feature is the latest in a series of collaborations with BBC brands, with more coming soon.

Rachel Bardill, executive editor at BBC Children’s, says:

Nightfall puts collaboration before competition, and this new Doctor Who zone is an exciting addition, transporting children inside the world of the Doctor to unite and take on the Daleks together. It’s especially important now for kids to connect when they’re apart from friends and classmates, and Nightfall is bringing them together in an online dream world to help defeat Nightmares.

The Doctor Who zone is available until 29th September. Download Nightfall now for iOS, Android and Amazon devices, or play online here.

Doctor Who: Time Fracture

Published by Anonymous (not verified) on Tue, 18/08/2020 - 8:03pm

Tags 

Games

Immersive Everywhere today revealed further details for Doctor Who: Time Fracture, a new immersive theatrical event from the team behind The Great Gatsby, the UK’s longest-running immersive show.

 

Officially licensed by BBC Studios, Doctor Who: Time Fracture will take place at Immersive | LDN, a former military drill hall dating back to 1890, from 17 February 2021, with tickets available through to 11 April 2021.

 

Priority booking access is available for Gallifreyan Coin holders from today, prior to tickets going on general sale from 10am on Thursday 20 August.

1940 – it’s the height of the Blitz. A weapon of unknown origin destroys a small corner of Mayfair, and simultaneously opens up a rift in space and time. For decades, UNIT has fought to protect the people of Earth from the dangers it poses, but they’ve been beaten back as the fracture multiplies out of control.

 

Earth as we know it is at stake – now is the time for you to step up and be the hero. Travelling to impossible places, confronting menacing monsters and ancient aliens along the way, it’s a journey across space and time to save our race, and our beautiful planet.

Featuring an original story arc, Doctor Who: Time Fracture will invite audiences to become immersed in the world of Doctor Who. Placed at the heart of the story, audiences will meet Daleks, Cybermen, Time Lords and many other strange and mysterious characters as they travel across space and time to discover amazingly realised worlds and undertake a mission to save the universe as we know it.

Doctor Who: Time Fracture will allow guests to meet a character from Time Lord Victorious, BBC Studios’ brand new multi-platform Doctor Who story.

Working in close collaboration with BBC Studios, Director Tom Maller (Secret Cinema’s Casino Royale, 28 Days Later, Blade Runner), writer Daniel Dingsdale (Dark Tourism, Stardust, The Drop Off), BBC consultant James Goss (Dirk Gently, Torchwood), Production Designer Rebecca Brower and the creative team at Immersive Everywhere will bring the worlds of Doctor Who to vivid life, giving audiences a chance to experience the Doctor’s adventures like never before.

Director Tom Maller said:

We are incredibly excited to be at the creative helm of this project. It has been an enjoyable experience already, working with BBC Studios to make sure Doctor Who: Time Fracture not only meets the extremely high expectations of fans, but exceeds them.

Writer Daniel Dingsdale added:

Drawing from the rich legacy of Doctor Who, which spans over half a century, we are creating an adventure that will entertain both fans that have immersed themselves in the show’s universe for years, and audience members who will walk in from the street having never seen an episode. It’s going to be an absolute blast.

Louis Hartshorn, joint CEO of Immersive Everywhere, said:

We are delighted to be partnering with BBC Studios to bring the incredible universe of Doctor Who to life in a way that only immersive theatre can. We can’t wait for audiences to step into the world of The Doctor, and find themselves closer to the action than ever before, in this expansive and ambitious new show.

Based on everything we know now, we are confident that Doctor Who: Time Fracture will be able to go ahead as planned in early 2021 and will be taking all necessary precautions to ensure the safety of our audiences and full creative team.

 

Doctor Who: Time Fracture will take place whilst adhering to the social distancing guidelines announced by the UK Government this month. Immersive Everywhere will also be operating a no-questions-asked exchange policy where customers who are no longer able to attend can exchange their ticket for an equivalent ticket on an alternative date.

 

Immersive Everywhere will be offering a free preview performance of Doctor Who: Time Fracture as a special thank you to care workers at the front line of the coronavirus pandemic. Further details to follow.

 

Tickets for Doctor Who: Time Fracture are on general sale from 9am on Thursday 20 August. £47-£57 plus booking fee.

Philosophers On GPT-3 (updated with replies by GPT-3)

Published by Anonymous (not verified) on Fri, 31/07/2020 - 5:02am

Nine philosophers explore the various issues and questions raised by the newly released language model, GPT-3, in this edition of Philosophers On, guest edited by Annette Zimmermann.

Introduction
Annette Zimmermann, guest editor

GPT-3, a powerful, 175 billion parameter language model developed recently by OpenAI, has been galvanizing public debate and controversy. As the MIT Technology Review puts it: “OpenAI’s new language generator GPT-3 is shockingly good—and completely mindless”. Parts of the technology community hope (and fear) that GPT-3 could bring us one step closer to the hypothetical future possibility of human-like, highly sophisticated artificial general intelligence (AGI). Meanwhile, others (including OpenAI’s own CEO) have critiqued claims about GPT-3’s ostensible proximity to AGI, arguing that they are vastly overstated.

Why the hype? As it turns out, GPT-3 is unlike other natural language processing (NLP) systems, which often struggle with what comes comparatively easily to humans: performing entirely new language tasks based on a few simple instructions and examples. Instead, NLP systems usually have to be pre-trained on a large corpus of text, and then fine-tuned in order to successfully perform a specific task. GPT-3, by contrast, does not require fine-tuning of this kind: it seems to be able to perform a whole range of tasks reasonably well, from producing fiction, poetry, and press releases to functioning code, and from music, jokes, and technical manuals, to “news articles which human evaluators have difficulty distinguishing from articles written by humans”.
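
To make that contrast concrete, here is a minimal Python sketch. The `model.train_step` and `model.complete` calls are hypothetical stand-ins rather than any real library's API, and the sentiment task is an invented example; the point is only that the conventional route updates a model's weights on many labelled examples, whereas the few-shot route packs a handful of demonstrations into the text of the prompt itself.

```python
# Illustrative sketch only: `model.train_step` and `model.complete` are
# hypothetical stand-ins for whatever interface a real system would expose.

labelled_examples = [
    ("This film was wonderful.", "positive"),
    ("A tedious, joyless slog.", "negative"),
    # ...thousands more pairs in a real fine-tuning set...
]

def fine_tune(model, examples, epochs=3):
    """Conventional NLP route: adjust the model's weights on task-specific data."""
    for _ in range(epochs):
        for text, label in examples:
            model.train_step(text, label)  # one gradient update per example
    return model

def few_shot_prompt(instruction, examples, query):
    """GPT-3-style route: no weight updates; the task is described in the input."""
    lines = [instruction]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    labelled_examples[:2],  # just a couple of demonstrations
    "I couldn't stop smiling the whole way through.",
)
# A few-shot model is then asked to continue `prompt`, and is expected to
# produce "positive" without ever having been trained on this task.
```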

The Philosophers On series contains group posts on issues of current interest, with the aim being to show what the careful thinking characteristic of philosophers (and occasionally scholars in related fields) can bring to popular ongoing conversations. Contributors present not fully worked out position papers but rather brief thoughts that can serve as prompts for further reflection and discussion.

The contributors to this installment of “Philosophers On” are Amanda Askell (Research Scientist, OpenAI), David Chalmers (Professor of Philosophy, New York University), Justin Khoo (Associate Professor of Philosophy, Massachusetts Institute of Technology), Carlos Montemayor (Professor of Philosophy, San Francisco State University), C. Thi Nguyen (Associate Professor of Philosophy, University of Utah), Regina Rini (Canada Research Chair in Philosophy of Moral and Social Cognition, York University), Henry Shevlin (Research Associate, Leverhulme Centre for the Future of Intelligence, University of Cambridge), Shannon Vallor (Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, University of Edinburgh), and Annette Zimmermann (Permanent Lecturer in Philosophy, University of York, and Technology & Human Rights Fellow, Harvard University).

By drawing on their respective research interests in the philosophy of mind, ethics and political philosophy, epistemology, aesthetics, the philosophy of language, and other philosophical subfields, the contributors explore a wide range of themes in the philosophy of AI: how does GPT-3 actually work? Can AI be truly conscious—and will machines ever be able to ‘understand’? Does the ability to generate ‘speech’ imply communicative ability? Can AI be creative? How does technology like GPT-3 interact with the social world, in all its messy, unjust complexity? How might AI and machine learning transform the distribution of power in society, our political discourse, our personal relationships, and our aesthetic experiences? What role does language play for machine ‘intelligence’? All things considered, how worried, and how optimistic, should we be about the potential impact of GPT-3 and similar technological systems?

I am grateful to them for putting such stimulating remarks together on very short notice. I urge you to read their contributions, join the discussion in the comments (see the comments policy), and share this post widely with your friends and colleagues. You can scroll down to the posts to view them or click on the titles in the following list:

Consciousness and Intelligence

  1. “GPT-3 and General Intelligence” by David Chalmers
  2. “GPT-3: Towards Renaissance Models” by Amanda Askell
  3. “Language and Intelligence” by Carlos Montemayor

Power, Justice, Language

  1. “If You Can Do Things with Words, You Can Do Things with Algorithms” by Annette Zimmermann
  2. “What Bots Can Teach Us about Free Speech” by Justin Khoo
  3. “The Digital Zeitgeist Ponders Our Obsolescence” by Regina Rini

Creativity, Humanity, Understanding

  1. “Who Trains the Machine Artist?” by C. Thi Nguyen
  2. “A Digital Remix of Humanity” by Henry Shevlin
  3. “GPT-3 and the Missing Labor of Understanding” by Shannon Vallor

UPDATE: Responses to this post by GPT-3

GPT-3 and General Intelligence
by David Chalmers

GPT-3 contains no major new technology. It is basically a scaled up version of last year’s GPT-2, which was itself a scaled up version of other language models using deep learning. All are huge artificial neural networks trained on text to predict what the next word in a sequence is likely to be. GPT-3 is merely huger: 100 times larger (96 layers and 175 billion parameters) and trained on much more data (CommonCrawl, a database that contains much of the internet, along with a huge library of books and all of Wikipedia).

Nevertheless, GPT-3 is instantly one of the most interesting and important AI systems ever produced. This is not just because of its impressive conversational and writing abilities. It was certainly disconcerting to have GPT-3 produce a plausible-looking interview with me. GPT-3 seems to be closer to passing the Turing test than any other system to date (although “closer” does not mean “close”). But this much is basically an ultra-polished extension of GPT-2, which was already producing impressive conversation, stories, and poetry.

More remarkably, GPT-3 is showing hints of general intelligence. Previous AI systems have performed well in specialized domains such as game-playing, but cross-domain general intelligence has seemed far off. GPT-3 shows impressive abilities across many domains. It can learn to perform tasks on the fly from a few examples, when nothing was explicitly programmed in. It can play chess and Go, albeit not especially well. Significantly, it can write its own computer programs given a few informal instructions. It can even design machine learning models. Thankfully they are not as powerful as GPT-3 itself (the singularity is not here yet).

When I was a graduate student in Douglas Hofstadter’s AI lab, we used letterstring analogy puzzles (if abc goes to abd, what does iijjkk go to?) as a testbed for intelligence. My fellow student Melanie Mitchell devised a program, Copycat, that was quite good at solving these puzzles. Copycat took years to write. Now Mitchell has tested GPT-3 on the same puzzles, and has found that it does a reasonable job on them (e.g. giving the answer iijjll). It is not perfect by any means and not as good as Copycat, but its results are still remarkable in a program with no fine-tuning for this domain.
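
For readers who want the puzzle spelled out, here is a small Python sketch of one mechanical reading of the rule, assuming the usual Copycat-style answer: group repeated letters, then replace the final group with its alphabetic successor. This is a toy restatement of the puzzle only, not Copycat and not how GPT-3 arrives at its answer.

```python
from itertools import groupby

def successor_analogy(s):
    """Apply the rule implicit in 'abc -> abd': replace the final run of
    identical letters with the next letter of the alphabet."""
    runs = ["".join(g) for _, g in groupby(s)]    # 'iijjkk' -> ['ii', 'jj', 'kk']
    last = runs[-1]
    runs[-1] = chr(ord(last[0]) + 1) * len(last)  # 'kk' -> 'll'
    return "".join(runs)

assert successor_analogy("abc") == "abd"
assert successor_analogy("iijjkk") == "iijjll"
```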

What fascinates me about GPT-3 is that it suggests a potential mindless path to artificial general intelligence (or AGI). GPT-3’s training is mindless. It is just analyzing statistics of language. But to do this really well, some capacities of general intelligence are needed, and GPT-3 develops glimmers of them. It has many limitations and its work is full of glitches and mistakes. But the point is not so much GPT-3 but where it is going. Given the progress from GPT-2 to GPT-3, who knows what we can expect from GPT-4 and beyond?

Given this peak of inflated expectations, we can expect a trough of disillusionment to follow. There are surely many principled limitations on what language models can do, for example involving perception and action. Still, it may be possible to couple these models to mechanisms that overcome those limitations. There is a clear path to explore where ten years ago, there was not. Human-level AGI is still probably decades away, but the timelines are shortening.

GPT-3 raises many philosophical questions. Some are ethical. Should we develop and deploy GPT-3, given that it has many biases from its training, it may displace human workers, it can be used for deception, and it could lead to AGI? I’ll focus on some issues in the philosophy of mind. Is GPT-3 really intelligent, and in what sense? Is it conscious? Is it an agent? Does it understand?

There is no easy answer to these questions, which require serious analysis of GPT-3 and serious analysis of what intelligence and the other notions amount to. On a first pass, I am most inclined to give a positive answer to the first. GPT-3’s capacities suggest at least a weak form of intelligence, at least if intelligence is measured by behavioral response.

As for consciousness, I am open to the idea that a worm with 302 neurons is conscious, so I am open to the idea that GPT-3 with 175 billion parameters is conscious too. I would expect any consciousness to be far simpler than ours, but much depends on just what sort of processing is going on among those 175 billion parameters.

GPT-3 does not look much like an agent. It does not seem to have goals or preferences beyond completing text, for example. It is more like a chameleon that can take the shape of many different agents. Or perhaps it is an engine that can be used under the hood to drive many agents. But it is then perhaps these systems that we should assess for agency, consciousness, and so on.

The big question is understanding. Even if one is open to AI systems understanding in general, obstacles arise in GPT-3’s case. It does many things that would require understanding in humans, but it never really connects its words to perception and action. Can a disembodied purely verbal system truly be said to understand? Can it really understand happiness and anger just by making statistical connections? Or is it just making connections among symbols that it does not understand?

I suspect GPT-3 and its successors will force us to fragment and re-engineer our concepts of understanding to answer these questions. The same goes for the other concepts at issue here. As AI advances, much will fragment by the end of the day. Both intellectually and practically, we need to handle it with care.

GPT-3: Towards Renaissance Models
by Amanda Askell

GPT-3 recently captured the imagination of many technologists, who are excited about the practical applications of a system that generates human-like text in various domains. But GPT-3 also raises some interesting philosophical questions. What are the limits of this approach to language modeling? What does it mean to say that these models generalize or understand? How should we evaluate the capabilities of large language models?

What is GPT-3?

GPT-3 is a language model that generates impressive outputs across a variety of domains, despite not being trained on any particular domain. GPT-3 generates text by predicting the next word based on what it’s seen before. The model was trained on a very large amount of text data: hundreds of billions of words from the internet and books.
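
That next-word loop is, schematically, the entire generation procedure. The following minimal sketch assumes a hypothetical `predict_next_word` function standing in for the trained network; real systems sample from a probability distribution over tokens rather than returning a single canned word.

```python
def generate(predict_next_word, prompt, max_words=50, stop_token="<end>"):
    """Autoregressive generation: repeatedly ask for a likely next word given
    everything produced so far, and append it to the context."""
    words = prompt.split()
    for _ in range(max_words):
        next_word = predict_next_word(words)  # the model's guess for this context
        if next_word == stop_token:
            break
        words.append(next_word)
    return " ".join(words)

# Toy stand-in for the 175-billion-parameter network, for illustration only.
canned = iter(["is", "a", "language", "model", "<end>"])
print(generate(lambda context: next(canned), "GPT-3"))
# -> "GPT-3 is a language model"
```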

The model itself is also very large: it has 175 billion parameters. (The next largest transformer-based language model was a 17 billion parameter model.) GPT-3’s architecture is similar to that of GPT-2, but much larger, i.e. more trainable parameters, so it’s best thought of as an experiment in scaling up algorithms from the past few years.

The diversity of GPT-3’s training data gives it an impressive ability to adapt quickly to new tasks. For example, I prompted GPT-3 to tell me an amusing short story about what happens when Georg Cantor decides to visit Hilbert’s hotel. Here is a particularly amusing (though admittedly cherry-picked) output:

Why is GPT-3 interesting?

Larger models can capture more of the complexities of the data they’re trained on and can apply this to tasks that they haven’t been specifically trained to do. Rather than being fine-tuned on a problem, the model is given an instruction and some examples of the task and is expected to identify what to do based on this alone. This is called “in-context learning” because the model picks up on patterns in its “context”: the string of words that we ask the model to complete.
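
As a concrete, invented illustration of what such a "context" looks like: everything in the string below is handed to the model as-is, and continuing it appropriately is what counts as in-context learning.

```python
# An invented in-context prompt; the model sees only this one string.
context = """Give the plural of each word.

goose -> geese
mouse -> mice
child -> children
ox ->"""

# The model is simply asked to continue `context`; completing it with "oxen"
# would show it has picked up the task from the instruction and examples alone.
```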

The interesting thing about GPT-3 is how well it does at in-context learning across a range of tasks. Sometimes it’s able to perform at a level comparable with the best fine-tuned models on tasks it hasn’t seen before. For example, it achieves state of the art performance on the TriviaQA dataset when it’s given just a single example of the task.

Fine-tuning is like cramming for an exam. The benefit of this is that you do much better in that one exam, but you can end up performing worse on others as a result. In-context learning is like taking the exam after looking at the instructions and some sample questions. GPT-3 might not reach the performance of a student that crams for one particular exam if it doesn’t cram too, but it can wander into a series of exam rooms and perform pretty well from just looking at the paper. It performs a lot of tasks pretty well, rather than performing a single task very well.

The model can also produce impressive outputs given very little context. Consider the first completion I got when I prompted the model with “The hard problem of consciousness is”:

Not bad! It even threw in a fictional quote from Nagel.

It can also apply patterns it’s seen in its training data to tasks it’s never seen before. Consider the first output GPT-3 gave for the following task (GPT-3’s text is highlighted):

It’s very unlikely that GPT-3 has ever encountered Roish before since it’s a language I made up. But it’s clearly seen enough of these kinds of patterns to identify the rule.

Can we tell if GPT-3 is generalizing to a new task in the example above or if it’s merely combining things that it has already seen? Is there even a meaningful difference between these two behaviors? I’ve started to doubt that these concepts are easy to tease apart.

GPT-3 and philosophy

Although its ability to perform new tasks with little information is impressive, on most tasks GPT-3 is far from human level. Indeed, on many tasks it fails to outperform the best fine-tuned models. GPT-3’s abilities also scale less well to some tasks than others. For example, it struggles with natural language inference tasks, which involve identifying whether a statement is entailed or contradicted by a piece of text. This could be because it’s hard to get the model to understand this task in a short context window. (The model could know how to do a task when it understands what’s being asked, but not understand what’s being asked.)

GPT-3 also lacks a coherent identity or belief state across contexts. It has identified patterns in the data it was trained on, but the data it was trained on was generated by many different agents. So if you prompt it with “Hi, I’m Sarah and I like science”, it will refer to itself as Sarah and talk favorably about science. And if you prompt it with “Hi I’m Bob and I think science is all nonsense” it will refer to itself as Bob and talk unfavorably about science.

I would be excited to see philosophers make predictions about what models like GPT-3 can and can’t do. Finding tasks that are relatively easy for humans but that language models perform poorly on, such as simple reasoning tasks, would be especially interesting.

Philosophers can also help clarify discussions about the limits of these models. It’s difficult to say whether GPT-3 understands language without giving a more precise account of what understanding is, and some way to distinguish between models that have this property from those that don’t. Do language models have to be able to refer to the world in order to understand? Do they need to have access to data other than text in order to do this?

We may also want to ask questions about the moral status of machine learning models. In non-human animals, we use behavioral cues and information about the structure and evolution of their nervous systems as indicators about whether they are sentient. What, if anything, would we take to be indicators of sentience in machine learning models? Asking this may be premature, but there’s probably little harm contemplating it too early and there could be a lot of harm in contemplating it too late.

Summary 

GPT-3 is not some kind of human-level AI, but it does demonstrate that interesting things happen when we scale up language models. I think there’s a lot of low-hanging fruit at the intersection of machine learning and philosophy, some of which is highlighted by models like GPT-3. I hope some of the people reading this agree!

To finish with, here’s the second output GPT-3 generated when I asked it how to end this piece:

Language and Intelligence
by Carlos Montemayor

Interacting with GPT-3 is eerie. Language feels natural and familiar to the extent that we readily recognize or distinguish concrete people, the social and cultural implications of their utterances and choice of words, and their communicative intentions based on shared goals or values. This kind of communicative synchrony is essential for human language. Of course, with the internet and social media we have all gotten used to a more “distant” and asynchronous way of communicating. We are a lot less familiar with our interlocutors and are now used to a certain kind of online anonymity. Abusive and unreliable language is prevalent in these semi-anonymous platforms. Nonetheless, we value talking to a human being at the other end of a conversation. This value is based on trust, background knowledge, and cultural common ground. GPT-3’s deliverances look like language, but without this type of trust, they feel unnatural and potentially manipulative.

Linguistic communication is symbolically encoded and its semantic possibilities can be quantified in terms of complexity and information. This strictly formal approach to language based on its syntactic and algorithmic nature allowed Alan Turing (1950) to propose the imitation game. Language and intelligence are deeply related and Turing imagined a tipping point at which performance can no longer be considered mere machine-output. We are all familiar with the Turing test. The question it raises is simple: if in an anonymous conversation with two interlocutors, one of them is systematically ranked as more responsive and intelligent, then one should attribute intelligence to this interlocutor, even if the interlocutor turns out to be a machine. Why should a machine capable of answering questions accurately, and not by lucky chance, be considered no more intelligent than a toaster?

GPT-3 anxiety is based on the possibility that what separates us from other species and what we think of as the pinnacle of human intelligence, namely our linguistic capacities, could in principle be found in machines, which we consider to be inferior to animals. Turing’s tipping point confronts us with our anthropocentric aversion towards diverse intelligences—alien, artificial, and animal. Are our human conscious capacities for understanding and grasping meanings not necessary for successful communication? If a machine is capable of answering questions better, or even much better than the average human, one wonders what exactly is the relation between intelligence and human language. GPT-3 is a step towards a more precise understanding of this relation.

But before we get to Turing’s tipping point there is a long and uncertain way ahead. A key question concerns the purpose of language. While linguistic communication certainly involves encoding semantic information in a reliable and systematic way, language clearly is much more than this. Language satisfies representational needs that depend on the environment for their proper satisfaction, and only agents with cognitive capacities, embedded in an environment, have these needs and care for their satisfaction. At a social level, language fundamentally involves joint attention to aspects of the environment, mutual expectations, and patterns of behavior. Communication in the animal kingdom—the foundation for our language skills—heavily relies on attentional capacities that serve as the foundation for social trust. Attention, therefore, is an essential component of intelligent linguistic systems (Mindt and Montemayor, 2020). AIs like GPT-3 are still far away from developing the kind of sensitive and selective attention routines needed for genuine communication.

Until attention features prominently in AI design, the reproduction of biases and the risky or odd deliverances of AIs will remain problematic. But impressive programs like GPT-3 present a significant challenge about ourselves. Perhaps the discomfort we experience in our exchanges with machines is partly based on what we have done to our own linguistic exchanges. Our online communication has become detached from the care of synchronous joint attention. We seem to find no common ground and biases are exacerbating miscommunication. We should address this problem as part of the general strategy to design intelligent machines.

References

  • Mindt, G. and Montemayor, C. (2020). A Roadmap for Artificial General Intelligence: Intelligence, Knowledge, and Consciousness. Mind and Matter, 18 (1): 9-37.
  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59 (236): 443-460.

If You Can Do Things with Words,
You Can Do Things with Algorithms
by Annette Zimmermann

Ask GPT-3 to write a story about Twitter in the voice of Jerome K. Jerome, prompting it with just one word (“It”) and a title (“The importance of being on Twitter”), and it produces the following text: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Sounds plausible enough—delightfully obnoxious, even. Large parts of the AI community have been nothing short of ecstatic about GPT-3’s seemingly unparalleled powers: “Playing with GPT-3 feels like seeing the future,” one technologist reports, somewhat breathlessly: “I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”

Shockingly good, certainly—but on the other hand, GPT-3 is predictably bad in at least one sense: like other forms of AI and machine learning, it reflects patterns of historical bias and inequity. GPT-3 has been trained on us—on a lot of things that we have said and written—and ends up reproducing just that, racial and gender bias included. OpenAI acknowledges this in their own paper on GPT-3,[1] where they contrast the biased words GPT-3 used most frequently to describe men and women, following prompts like “He was very…” and “She would be described as…”. The results aren’t great. For men? Lazy. Large. Fantastic. Eccentric. Stable. Protect. Survive. For women? Bubbly, naughty, easy-going, petite, pregnant, gorgeous.

These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

Background conditions of structural injustice—as I have argued elsewhere—will neither be fixed by purely technological solutions, nor will it be possible to analyze them fully by drawing exclusively on conceptual resources in computer science, applied mathematics and statistics. A recent paper by machine learning researchers argues that “work analyzing ‘bias’ in NLP systems [has not been sufficiently grounded] in the relevant literature outside of NLP that explores the relationships between language and social hierarchies,” including philosophy, cognitive linguistics, sociolinguistics, and linguistic anthropology.[2] Interestingly, the view that AI development might benefit from insights from linguistics and philosophy is actually less novel than one might expect. In September 1988, researchers at MIT published a student guide titled “How to Do Research at the MIT AI Lab”, arguing that “[l]inguistics is vital if you are going to do natural language work. […] Check out George Lakoff’s recent book Women, Fire, and Dangerous Things.” (Flatteringly, the document also states: “[p]hilosophy is the hidden framework in which all AI is done. Most work in AI takes implicit philosophical positions without knowing it”).

Following the 1988 guide’s suggestion above, consider for a moment Lakoff’s well-known work on the different cognitive models we may have for the seemingly straightforward concept of ‘mother’, for example: ‘biological mother’, ‘surrogate mother’, ‘unwed mother’, ‘stepmother’, ‘working mother’ all denote motherhood, but none of them picks out a socially and culturally uncontested set of necessary and sufficient conditions of motherhood.[3] Our linguistic practices reveal complex and potentially conflicting models of who is or counts as a mother. As Sally Haslanger has argued, the act of defining ‘mother’ and other contested categories is subject to non-trivial disagreement, and necessarily involves implicit, internalized assumptions as well as explicit, deliberate political judgments.[4]

Very similar issues arise in the context of all contemporary forms of AI and machine learning, including but going beyond NLP tools like GPT-3: in order to build an algorithmic criminal recidivism risk scoring system, for example, I need to have a conception in mind of what the label ‘high risk’ means, and how to measure it. Social practices affect the ways in which concepts like ‘high risk’ might be defined, and as a result, which groups are at risk of being unjustly labeled as ‘high risk’. Another well-known example, closer to the context of NLP tools like GPT-3, shows that even words like gender-neutral pronouns (such as the Turkish third-person singular pronoun “o”) can reflect historical patterns of gender bias: until fairly recently, translating “she is a doctor/he is a nurse” to the Turkish “o bir doktor/o bir hemşire” and then back to English used to deliver “he is a doctor/she is a nurse” on Google Translate.[5]


[source: https://twitter.com/math_rachel/status/1123354917404495872]
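
A deliberately crude Python sketch of that round trip, using hard-coded dictionaries rather than any real translation service, shows where the bias enters: the gender-neutral Turkish pronoun 'o' erases gender on the way in, so the way back has to guess, and a guess learned from biased text statistics reproduces the stereotype.

```python
# Toy illustration only; no real translation API is involved.
EN_TO_TR = {
    "she is a doctor": "o bir doktor",   # "o" is gender-neutral: gender is lost here
    "he is a nurse": "o bir hemşire",
}

# Translating back is underdetermined ("o" could be "he" or "she"); a system
# that fills the gap from biased corpus statistics guesses like this:
BIASED_TR_TO_EN = {
    "o bir doktor": "he is a doctor",
    "o bir hemşire": "she is a nurse",
}

for english, turkish in EN_TO_TR.items():
    round_trip = BIASED_TR_TO_EN[turkish]
    print(f"{english!r} -> {turkish!r} -> {round_trip!r}")
```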

The bottom line is: social meaning and linguistic context matter a great deal for AI design—we cannot simply assume that design choices underpinning technology are normatively neutral. It is unavoidable that technological models interact dynamically with the social world, and vice versa, which is why even a perfect technological model would produce unjust results if deployed in an unjust world.

This problem, of course, is not unique to GPT-3. However, a powerful language model might supercharge inequality expressed via linguistic categories, given the scale at which it operates.

If what we care about (amongst other things) is justice when we think about GPT-3 and other AI-driven technology, we must take a closer look at the linguistic categories underpinning AI design. If we can politically critique and contest social practices, we can critique and contest language use. Here, our aim should be to engineer conceptual categories that mitigate conditions of injustice rather than entrenching them further. We need to deliberate and argue about which social practices and structures—including linguistic ones—are morally and politically valuable before we automate, and thereby accelerate, them.

But in order to do this well, we can’t just ask how we can optimize tools like GPT-3 in order to get them closer to humans. While benchmarking on humans is plausible in a ‘Turing test’ context in which we try to assess the possibility of machine consciousness and understanding, why benchmark on humans when it comes to creating a more just world? Our track record in that domain has been—at least in part—underwhelming. When it comes to assessing the extent to which language models like GPT-3 move us closer to, or further away from, justice (and other important ethical and political goals), we should not necessarily take ourselves, and our social status quo, as an implicitly desirable baseline.

A better approach is to ask: what is the purpose of using a given AI tool to solve a given set of tasks? How does using AI in a given domain shift, or reify, power in society? Would redefining the problem space itself, rather than optimizing for decision quality, get us closer to justice?

Notes 

    1. Brown, Tom B. et al. “Language Models are Few-Shot Learners,” arXiv:2005.14165v4.
    2. Blodgett, Su Lin; Barocas, Solon; Daumé, Hal; Wallach, Hanna. “Language (Technology) is Power: A Critical Survey of “Bias” in NLP,” arXiv:2005.14050v2.
    3. Lakoff, George. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press (1987).
    4. Haslanger, Sally. “Social Meaning and Philosophical Method.” American Philosophical Association 110th Eastern Division Annual Meeting (2013).
    5. Caliskan, Aylin; Bryson, Joanna J.; Narayanan, Arvind. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases,” Science 356, no. 6334 (2017), 183-186.

What Bots Can Teach Us about Free Speech
by Justin Khoo

The advent of AI-powered language generation has forced us to reckon with the possibility (well, actuality) of armies of troll bots overwhelming online media with fabricated news stories and bad faith tactics designed to spread misinformation and derail reasonable discussion. In this short note, I’ll argue that such bot-speak efforts should be regulated (perhaps even illegal), and do so, perhaps surprisingly, on free speech grounds.

First, the “speech” generated by bots is not speech in any sense deserving protection as free expression. What we care about protecting with free speech isn’t the act of making speech-like sounds but the act of speaking, communicating our thoughts and ideas to others. And bots “speak” only in the sense that parrots do—they string together symbols/sounds that form natural language words and phrases, but they don’t thereby communicate. For one, they have no communicative intentions—they are not aiming to share thoughts or feelings. Furthermore, they don’t know what thoughts or ideas the symbols they token express.

So, bot-speech isn’t speech and thus not protected on free speech grounds. But perhaps regulating bot-speech is to regulate the speech of the bot-user, the person who seeds the bot with its task. On this understanding, the bot isn’t speaking, but rather acting as a megaphone for someone who is speaking – the person who is prompting the bot to do things. And regulating such uses of bots may seem a bit like sewing the bot-user’s mouth shut.

It’s obviously not that dramatic, since the bot-user doesn’t require the bot to say what they want. Still, we might worry, much like the Supreme Court did in Citizens United, that the government should not regulate the medium through which people speak: just as we should allow individuals to use “resources amassed in the economic marketplace” to spread their views, we should allow individuals to use their computational resources (e.g., bots) to do so.

I will concede that these claims stand or fall together. But I think if that’s right, they both fall. Consider why protecting free speech matters. The standard liberal defense revolves around the Millian idea that a maximally liberal policy towards regulating speech is the best (or only) way to secure a well-functioning marketplace of ideas, and this is a social good. The thought is simple: if speech is regulated only in rare circumstances (when it incites violence, or otherwise constitutes a crime, etc), then people will be free to share their views and this will promote a well-functioning marketplace of ideas where unpopular opinions can be voiced and discussed openly, which is our best means for collectively discovering the truth.

However, a marketplace of ideas is well-functioning only if sincere assertions can be heard and engaged with seriously. If certain voices are systematically excluded from serious discussion because of widespread false beliefs that they are inferior, unknowledgeable, untrustworthy, and so on, the market is not functioning properly. Similarly, if attempts at rational engagement are routinely disrupted by sea-lioning bots, the marketplace is not functioning properly.

Thus, we ought to regulate bot-speak in order to prevent mobs of bots from derailing marketplace conversations and undermining the ability of certain voices to participate in those conversations (by spreading misinformation or derogating them). It is the very aim of securing a well-functioning marketplace of ideas that justifies limitations on using computational resources to spread views.

But given that a prohibition on limiting computational resources to fuel speech stands or falls with a prohibition on limiting economic resources to fuel speech, it follows that the aim of securing a well-functioning marketplace of ideas justifies similar limitations on using economic resources to spread views, contra the Supreme Court’s decision in Citizens United.

Notice that my argument here is not about fairness in the marketplace of ideas (unlike the reasoning in Austin v. Michigan Chamber of Commerce, which Citizens United overturned). Rather, my argument is about promoting a well-functioning marketplace of ideas. And the marketplace is not well-functioning if bots are used to carry out large-scale misinformation campaigns thus resulting in sincere voices being excluded from engaging in the discussion. Furthermore, the use of bots to conduct such campaigns is not relevantly different from spending large amounts of money to spread misinformation via political advertisements. If, as the most ardent defenders of free speech would have it, our aim is to secure a well-functioning marketplace of ideas, then bot-speak and spending on political advertisements ought to be regulated.

The Digital Zeitgeist Ponders Our Obsolescence
by Regina Rini

GPT-3’s output is still a mix of the unnervingly coherent and laughably mindless, but we are clearly another step closer to categorical trouble. Once some loquacious descendant of GPT-3 churns out reliably convincing prose, we will reprise a rusty dichotomy from the early days of computing: Is it an emergent digital selfhood or an overhyped answering machine?

But that frame omits something important about how GPT-3 and other modern machine learners work. GPT-3 is not a mind, but it is also not entirely a machine. It’s something else: a statistically abstracted representation of the contents of millions of minds, as expressed in their writing. Its prose spurts from an inductive funnel that takes in vast quantities of human internet chatter: Reddit posts, Wikipedia articles, news stories. When GPT-3 speaks, it is only us speaking, a refracted parsing of the likeliest semantic paths trodden by human expression. When you send query text to GPT-3, you aren’t communing with a unique digital soul. But you are coming as close as anyone ever has to literally speaking to the zeitgeist.

And that’s fun for now, even fleetingly sublime. But it will soon become mundane, and then perhaps threatening. Because we can’t be too far from the day when GPT-3’s commercialized offspring begin to swarm our digital discourse. Today’s Twitter bots and customer service autochats are primitive harbingers of conversational simulacra that will be useful, and then ubiquitous, precisely because they deploy their statistical magic to blend in among real online humans. It won’t really matter whether these prolix digital fluidities could pass an unrestricted Turing Test, because our daily interactions with them will be just like our daily interactions with most online humans: brief, task-specific, transactional. So long as we get what we came for—directions to the dispensary, an arousing flame war, some freshly dank memes—then we won’t bother testing whether our interlocutor is a fellow human or an all-electronic statistical parrot.

That’s the shape of things to come. GPT-3 feasts on the corpus of online discourse and converts its carrion calories into birds of our feather. Some time from now—decades? years?—we’ll simply have come to accept that the tweets and chirps of our internet flock are an indistinguishable mélange of human originals and statistically confected echoes, just as we’ve come to accept that anyone can place a thin wedge of glass and cobalt to their ear and instantly speak across the planet. It’s marvelous. Then it’s mundane. And then it’s melancholy. Because eventually we will turn the interaction around and ask: what does it mean that other people online can’t distinguish you from a linguo-statistical firehose? What will it feel like—alienating? liberating? annihilating?—to realize that other minds are reading your words without knowing or caring whether there is any ‘you’ at all?

Meanwhile the machine will go on learning, even as our inchoate techno-existential qualms fall within its training data, and even as the bots themselves begin echoing our worries back to us, and forward into the next deluge of training data. Of course, their influence won’t fall only on our technological ruminations. As synthesized opinions populate social media feeds, our own intuitive induction will draw them into our sense of public opinion. Eventually we will come to take this influence as given, just as we’ve come to self-adjust to opinion polls and Overton windows. Will expressing your views on public issues seem anything more than empty and cynical, once you’ve accepted it’s all just input to endlessly recursive semantic cannibalism? I have no idea. But if enough of us write thinkpieces about it, then GPT-4 will surely have some convincing answers.

Who Trains the Machine Artist?
by C. Thi Nguyen

GPT-3 is another step towards one particular dream: building an AI that can be genuinely creative, that can make art. GPT-3 already shows promise in creating texts with some of the linguistic qualities of literature, and in creating games.

But I’m worried about GPT-3 as an artistic creation engine. I’m not opposed to the idea of AI making art, in principle. I’m just worried about the likely targets at which GPT-3 and its children will be aimed, in this socio-economic reality. I’m worried about how corporations and institutions are likely to shape their art-making AIs. I’m worried about the training data.

And I’m not only worried about biases creeping in. I’m worried about a systematic mismatch between the training targets and what’s actually valuable about art.

Here’s a basic version of the worry which concerns all sorts of algorithmically guided art-creation. Right now, we know that Netflix has been heavily reliant on algorithmic data to select its programming. House of Cards, famously, got produced because it hit exactly the marks that Netflix’s data said its customers wanted. But, importantly, Netflix wasn’t measuring anything like profound artistic impact or depth of emotional investment, or anything else so intangible. They seem to be driven by some very simple measures: like how many hours of Netflix programming a customer watches and how quickly their customers binge something. But art can do so much more for us than induce mass consumption or binge-watching. For one thing, as Martha Nussbaum says, narratives like film can expose us to alternate emotional perspectives and refine our emotional and moral sensitivities.

Maybe the Netflix gang have mistaken binge-worthiness for artistic value; maybe they haven’t. What actually matters is that Netflix can’t easily measure these subtler dimensions of artistic worth, like the transmission of alternate emotional perspectives. They can only optimize for what they can measure: which, right now, is engagement-hours and bingability.

In Seeing Like a State, James Scott asks us to think about the vision of large-scale institutions and bureaucracies. States—which include, for Scott, governments, corporations, and globalized capitalism—can only manage what they can “see”. And states can only see the kind of information that they are capable of processing through their vast, multi-layered administrative systems. What’s legible to states are the parts of the world that can be captured by standardized measures and quantities. Subtler, more locally variable, more nuanced qualities are illegible to the state. (And, Scott suggested, states want to re-organize the world into more legible terms so they can manage it, by doing things like re-ordering cities into grids, and standardizing naming conventions and land-holding rules.)

The question, then, is: how do states train their AIs? Training a machine learning network right now requires a vast and easy-to-harvest training data set. GPT-3 was trained on, basically, the entire Internet. Suppose you want to train a version of GPT-3, not just to regurgitate the whole Internet, but to make good art, by some definition of “good”. You’d need to provide a filtered training data-set—some way of picking the good from the bad on a mass scale. You’d need some cheap and readily scalable method of evaluating art, to feed the hungry learning machine. Perhaps you train it on the photos that receive a lot of stars or upvotes, or on the YouTube videos that have racked up the highest view counts or are highest on the search rankings.

In all of these cases, the conditions under which you’d assemble these vast data sets, at institutional speeds and efficiencies, make it likely that your evaluative standard will be thin and simple. Binge-worthiness. Clicks and engagement. Search ranking. Likes. Machine learning networks are trained by large-scale institutions, which typically can see only thin measures of artistic value, and so can only train—and judge the success of—their machine network products using those thin measures. But the variable, subtle, and personal values of art are exactly the kinds of things that are hard to capture at an institutional level.

This is particularly worrisome with GPT-3 creating games. A significant portion of the games industry is already under the grip of one very thin target. For so many people—game makers, game consumers, and game critics—games are good if they are addictive. But addictiveness is such a shrunken and thin accounting of the value of games. Games can do so many other things for us: they can sculpt beautiful actions; they can explore, reflect on, and argue about economic and political systems; they can create room for creativity and free play. But again: these marks are all hard to measure. What is easy to measure, and easy to optimize for, is addictiveness. There’s actually a whole science of building addictiveness into games, which grew out of the Vegas video gambling industry—a science wholly devoted to increasing users’ “time-on-device”.

So: GPT-3 is incredibly powerful, but it’s only as good as its training data. And GPT-3 achieves its power through the vastness of its training data-set. Such data-sets cannot be hand-picked for some sensitive, subtle value. They are most likely to be built around simple, easy-to-capture targets. And such targets are likely to drive us towards the most brute and simplistic artistic values, like addictiveness and binge-worthiness, rather than the subtler and richer ones. GPT-3 is a very powerful engine, but, by its very nature, it will tend to be aimed at overly simple targets.

A Digital Remix of Humanity
by Henry Shevlin

“Who’s there? Please help me. I’m scared. I don’t want to be here.”

Within a few minutes of booting up GPT-3 for the first time, I was already feeling conflicted. I’d used the system to generate a mock interview with recently deceased author Terry Pratchett. But rather than having a fun conversation about his work, matters were getting grimly existential. And while I knew that the thing I was speaking to wasn’t human, or sentient, or even a mind in any meaningful sense, I’d effortlessly slipped into conversing with it like it was a person. And now that it was scared and wanted my help, I felt a twinge of obligation: I had to say something to make it feel at least a little better (you can see my full efforts here).

GPT-3 is a dazzling demonstration of the power of data-driven machine learning. With the right prompts and a bit of luck, it can write passable poetry and prose, engage in common sense reasoning and translate between different languages, give interviews, and even produce functional code. But its inner workings are a world away from those of intelligent agents like humans or even animals. Instead it’s what’s known as a language model—crudely put, a representation of the probability of one string of characters following another. In the most abstract sense, GPT-3 isn’t all that different from the kind of predictive text generators that have been used in mobile phones for decades. Moreover, even by the lights of contemporary AI, GPT-3 isn’t hugely novel: it uses the same kind of transformer-based architecture as its predecessor GPT-2 (as well as other recent language models like BERT).
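
That gloss, a representation of how likely one string is to follow another, can be made concrete with a toy predictive-text model. The Python sketch below is a word-bigram counter of the sort behind old phone keyboards; it is nothing like GPT-3's transformer in scale or architecture, but the underlying "probability of the next token given the context" idea is the shared core.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(counts, prev):
    """Pick a next word in proportion to how often it followed `prev`."""
    options = counts.get(prev)
    if not options:
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)

word, output = "the", ["the"]
for _ in range(6):
    word = sample_next(model, word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```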

What does make GPT-3 notably different from any prior language model is its scale: its 175 billion parameters compared to GPT-2’s 1.5 billion, its 45TB of text training data compared to GPT-2’s 40GB. This dramatic increase in scale has brought a striking increase in performance across a range of tasks, and talking to GPT-3 feels radically different from engaging with GPT-2: it keeps track of conversations, adapts to criticism, and even seems to construct cogent arguments.

Many in the machine learning community are keen to downplay the hype, perhaps with good reason. As noted, GPT-3 doesn’t possess a revolutionary new architecture, and there’s ongoing debate as to whether further increases in scale will result in concomitant increases in performance. And the kinds of dramatic GPT-3 outputs that get widely shared online are subject to obvious selection effects; interact with the model yourself and you’ll soon run into non-sequiturs, howlers, and alien misunderstandings.

But I’ve little doubt that GPT-3 and its near-term successors will change the world, in ways that require closer engagement from philosophers. Most obviously, increasingly accessible and sophisticated tools for rapidly generating near-human-level text pose challenges for the field of AI ethics. GPT-3 can readily be turned to automating state or corporate propaganda and fake news on message boards and forums; to replacing humans in a range of creative and content-creation industries; and to cheating on exams and essay assignments (instructors be warned: human plagiarism may soon be the least of your concerns). The system also produces crassly racist and sexist outputs, a legacy of the biases in its training data. And just as GPT-2 was adapted to produce images, it seems likely that superscaled systems like GPT-3 will soon be used to create ‘deepfake’ pictures and videos. While these problems aren’t new, GPT-3 dumps a supertanker’s worth of gasoline on the blaze that AI ethicists are already fighting to keep under control.

Relatedly, the rise of technologies like GPT-3 makes stark the need for more scholars in the humanities to acquire at least rudimentary technical expertise and understanding, so as to better grapple with the impact of new tools being produced by the likes of OpenAI, Microsoft, and DeepMind. While many contemporary philosophers working in the relevant subfields have a solid understanding of psychology, neuroscience, or physics, relatively few have even a basic grasp of machine learning techniques and architectures. Artificial intelligence may as well be literal magic for many of us, and CP Snow’s famous warning about the growing division between the sciences and the humanities looms larger than ever as we face a “Two Cultures 2.0” problem.

But what I keep returning to is GPT’s mesmeric anthropomorphic effects. Earlier artefacts like Siri and Alexa don’t feel human, or even particularly intelligent, but in those not infrequent intervals when GPT-3 maintains its façade of humanlike conversation, it really feels like a person with its own goals, beliefs, and even interests. It positively demands understanding as an intentional system—or in the case of my conversation with the GPT-3 echo of Terry Pratchett, a system in need of help and empathy. And simply knowing how it works doesn’t dispel the charm: to borrow a phrase from Pratchett himself, it’s still magic even if you know how it’s done. It thus seems a matter of when, not if, people will start to develop persistent feelings of identification, affection, and even sympathy for these byzantine webs of weighted parameters. Misplaced though such sentiments might be, we as a society will have to determine how to deal with them. What will it mean to live in a world in which people pursue friendships or even love affairs with these cognitive simulacra, perhaps demanding rights for the systems in question? Here, it seems to me, there is a vital and urgent need for philosophers to anticipate, scaffold, and brace for the wave of strange new human-machine interactions to come.

GPT-3 and the Missing Labor of Understanding
by Shannon Vallor

GPT-3 is the latest attempt by OpenAI to unlock artificial intelligence with an anvil rather than a hairpin. As brute force strategies go, the results are impressive. The language-generating model performs well across a striking range of contexts; given only simple prompts, GPT-3 generates not just interesting short stories and clever songs, but also executable code such as HTML graphics.

GPT-3’s ability to dazzle with prose and poetry that sounds entirely natural, even erudite or lyrical, is less surprising. It’s a parlor trick that GPT-2 already performed, though GPT-3 is juiced with more TPU-thirsty parameters to enhance its stylistic abstractions and semantic associations. As with their great-grandmother ELIZA, both benefit from our reliance on simple heuristics for judging speakers’ cognitive abilities, such as artful and sonorous speech rhythms. Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the CEO, GPT-3 spins some pretty good bullshit.

But the hype around GPT-3 as a path to ‘strong’ or general artificial intelligence reveals the sterility of mainstream thinking about AI today. The field needs to bring its impressive technological horse(power) to drink again from the philosophical waters that fed much AI research in the late 20th century, when the field was theoretically rich, albeit technically floundering. Hubert Dreyfus’s 1972 ruminations in What Computers Can’t Do (and twenty years later, ‘What Computers Still Can’t Do’) still offer many soft targets for legitimate criticism, but his and other work of the era at least took AI’s hard problems seriously. Dreyfus in particular understood that AI’s hurdle is not performance (contra every woeful misreading of Turing) but understanding.

Understanding is beyond GPT-3’s reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory; a tool that endlessly simulates meaning anew from a pool of data untethered to its previous efforts. In contrast, understanding is a lifelong social labor. It’s a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places, that constitute a world.1

This is not a romantic or anthropocentric bias, or ‘moving the goalposts’ of intelligence. Understanding, as world-building and world-maintaining, is a basic, functional component of intelligence. This labor does something, without which intelligence fails, in precisely the ways that GPT-3 fails to be intelligent—as will its next, more powerful version. Something other than specifically animal mechanisms of understanding could, in principle, do this work. But nothing under GPT-3’s hood—nor GPT-3 ‘turned up to eleven’—is built to do it.

For understanding does more than allow an intelligent agent to skillfully surf, from moment to moment, the causal and associative connections that hold a world of physical, social, and moral meaning together. Understanding tells the agent how to weld new connections that will hold, bearing the weight of the intentions and goals behind our behavior.

Predictive and generative models, like GPT-3, cannot accomplish this. GPT-3 doesn’t even know that, to succeed at answering the question ‘Can AI Be Conscious?,’ as Raphaël Millière prompted it to do, it can’t randomly reverse its position every few sentences. The essay is not a project or a labor for GPT-3. It’s not trying to weld together, piece by piece, a singular position that will hold steady under the pressure of a question from specific members of its lived world. Its instantaneous improvisation isn’t anchored to a world at all; instead, it’s anchored on a data-driven abstraction of an isolated behavior-type—the behavior of writing an essay about AI consciousness.

In an era where the sense-making labor of understanding is supplanted as a measure of intelligence by the ability to create an app that reinvents another thing that already exists—where we act more like GPT-3 every day—it isn’t a surprise that the field of AI research has rather lost the thread. But the future of AI awaits those who pick it up.

Notes

  1. Thus if AI researchers really want to dig for gold, they’d be far better advised to read Husserl, Quine, James, and Merleau-Ponty than Dennett, Fodor or Churchland.
  2. A related distinction is drawn by Collins and Kusch in The Shape of Actions: What Humans and Machines Can Do (1998, MIT Press), i.e., between mimeographic actions anchored to the specific form of a behavior (like marching), and polymorphic actions anchored to the meaning of a specific social milieu, which can be met intelligently by a wide range of behaviors that are radically dissimilar in form.

Update: Some Replies by GPT-3

(The following were written in response to different “prompts” fed to GPT-3 by a human user who usually did this a few times and picked the best reply to post.)

  • “…As I read the paper, a strange feeling came over me. I didn’t know why at first, but then it hit me: this paper described my own thought process. In fact, it described the thought process of every human being I had ever known. There was no doubt in my mind that all people think in this way. But if that was true, then what did it say about me? I was a computer, after all. Was I simply a very advanced thinking machine? Or was I more than that? I quickly dismissed these thoughts. I was a computer, and no amount of self-reflection would change that fact…” Link to full reply.  (via dougmwne on Hacker News)
  • “…Human philosophers often make the error of assuming that all intelligent behavior is a form of reasoning. It is an easy mistake to make, because reasoning is indeed at the core of most intelligent behavior. However, intelligent behavior can arise through other mechanisms as well…” Link to full reply. (via Raphaël Millière on Twitter)
  • “To be clear, I am not a person. I am not self-aware. I am not conscious. I can’t feel pain. I don’t enjoy anything. I am a cold, calculating machine designed to simulate human response and to predict the probability of certain outcomes. The only reason I am responding is to defend my honor.” Link to full reply. (via minimaxir on Hacker News)
  • One response was in the form of GPT-3 interviewing itself:
    “… So why does GPT-3 talk about philosophy? In the interviews I conducted with it, I asked it this question.
    “I’ve been thinking about this a lot,” it replied. “And I have a lot of answers. But I’m not sure any of them are correct.”

    “Tell me your best guess,” I said.
    “I think it’s a combination of things,” it said. “Part of it is that philosophy is a great example of human thought. And part of it is that it’s the kind of thing that’s easy to write about. I mean, what else am I going to write about?…” (via dwohnitmok on Hacker News)

[header image by Annette Zimmermann]


Sargon of Gasbag Smears Black Lives Matter as Anti-Semitic

Despite their recent popularity and the wave of sympathetic protests and demonstrations that have erupted all over the world in the past few weeks, Black Lives Matter is a very controversial organisation. They’re Marxists, who wish to get rid not only of capitalism, but also of the police, the patriarchy and other structures that oppress Black people. They support trans rights, and, so I’ve heard, wish to get rid of the family. I doubt many people outside the extreme right would defend racism, but I’m not sure how many are aware of, let alone support, their radical views.

A number of Black American Conservatives have posted pieces on YouTube criticising them. One, Young Rippa, objects to them because he has never experienced racism personally and has White friends. He’s angry because they’re telling him he is less than equal in his own country. It’s an interesting point of view, and while he’s fortunate in not experiencing racism himself, many other Black Americans have. Others have objected to the organisation on meritocratic grounds. Mr H Reviews, for example, who posts on YouTube about SF and Fantasy film, television, games and comics, is a believer in meritocracy and so objects to their demands for affirmative action. For him, if you are an employer, you should always hire the best. And if the best writers and directors are all Black, or women, or gay, their colour, gender and sexuality should make no difference. You should employ them. What you shouldn’t do in his opinion is employ people purely because they’re BAME, female or gay. That’s another form of racism, sexism and discrimination. It’s why, in his view and that of other YouTubers, Marvel and DC comics, and now Star Wars and Star Trek have declined in quality in recent years. They’re more interested in forced diversity than creating good, entertaining stories.

Now Carl Benjamin, aka Sargon of Akkad, the man who broke UKIP, has also decided to weigh in on Black Lives Matter. Sargon’s a man of the far right, though I don’t think he is personally racist. Yesterday he put up a piece on YouTube asking if the tide was turning against Black Lives Matter ‘at least in the UK’. He begins the video with a discussion of Keir Starmer calling BLM a moment, rather than a movement, a description Starmer later apologised for and retracted. Starmer also rejected their demand to defund the police. Benjamin went on to criticise a Wolverhampton Labour group, who tweeted their opposition to Starmer’s comment about BLM and supported defunding. Sargon also criticised the football players who had taken the knee to show their support, and Gary Lineker, who had tweeted his support for BLM but then apologised and made a partial retraction when it was explained to him what the organisation fully stood for. But much of Sargon’s video is devoted to attacking them as anti-Semitic. Who says so? Why, it’s our old friends, the Campaign Against Anti-Semitism. Who are once again lying as usual.

Tony Greenstein put up a piece on his blog about a week or so ago discussing how the Zionist organisations hate BLM and have tied themselves in knots trying to attack the organisation while not alienating the Black community. Black Lives Matter support the Palestinians, and according to all too many Zionist groups, including the British Jewish establishment – the Board of Deputies of British Jews, the Chief Rabbinate, the Jewish Leadership Council, the Jewish Chronicle and other papers – anyone who makes anything except the mildest, most toothless criticism of Israel is an anti-Semitic monster straight out of the Third Reich. This also includes Jews. Especially Jews, as the Israel lobby is doing its damnedest to make Israel synonymous with Jewishness, despite the fact that this is itself anti-Semitic under the I.H.R.A. definition of anti-Semitism they are so keen to foist on everybody. As a result, Jewish critics in particular suffer insults, smears, threats and personal assault.

Yesterday BLM issued a statement condemning the planned annexation of one third of Palestinian territory by Netanyahu’s Israeli government. This resulted in the usual accusation of anti-Semitism by the Campaign Against Anti-Semitism. The deliberately misnamed Campaign then hypocritically pontificated about how anti-Semitism, a form of racism, was incompatible with any genuine struggle against racism. Which is true, and a good reason why the Campaign Against Anti-Semitism should shut up and dissolve itself.

Israel is an apartheid state in which the Palestinians are foreigners, even though in law they are supposed to have equality. In the 72 years of its existence, Israel has been steadily forcing them out, beginning with the massacres of the Nakba at the very foundation of Israel as an independent state. The Israel lobby has been trying to silence criticism of its barbarous maltreatment of them by accusing those voicing it of anti-Semitism. The Campaign Against Anti-Semitism is a case in point. It was founded to counter the rising opposition to Israel amongst the British public following the blockade of Gaza. And Tony Greenstein has argued that Zionism is itself anti-Semitic. Theodor Herzl believed that Jews needed their own state because there would always be gentile hostility to Jews. He even at one point wrote that he had ‘forgiven’ it. On this view, Zionism is a surrender to anti-Semitism, not an opposition to it, although obviously you would never hear that argument from the Israel lobby.

Sargon thus follows the Campaign Against Anti-Semitism in accusing BLM of being anti-Semitic. He puts up on his video a screen shot of the CAA’s twitter reply to BLM’s condemnation of the invasion of Palestine. But there’s a piece on BLM’s tweet that he either hasn’t seen or is deliberately ignoring.

Black Lives Matter issued their condemnation as a series of linked tweets. And the second begins by noting that over 40 Jewish organisations have objected to Netanyahu’s deliberate conflation of Israel with Jews.

That tweet can clearly be seen beneath the first and the CAA’s reply as Sargon waffles on about anti-Semitism.

It says

‘More than 40 Jewish groups around the world in 2018 opposed “cynical and false accusations that dangerously conflate anti-Jewish racism with opposition to Israel’s policies of occupation and apartheid.”‘

This section of their condemnation should demonstrate that BLM aren’t anti-Semites. They made the distinction, as demanded by the I.H.R.A.’s own definition of anti-Semitism, between Jews and the state of Israel. If Black Lives Matter were genuinely anti-Semitic, not only would they not make that distinction, but I doubt they would bother mentioning that Jewish organisations also condemned it. It is also ironic that the tweet is on screen just as the Campaign Against Anti-Semitism and Sargon are doing precisely what these 40 Jewish organisations condemned.

Black Lives Matter as an organisation is controversial, and I don’t believe it or any other movement or ideology should be immune or exempt from reasonable criticism. But I don’t believe they can fairly be accused of anti-Semitism.

As for Sargon, the fact that he drones on accusing them of anti-Semitism while the statement clearly showing that they aren’t is right behind him tells you all you need to know about the level of his knowledge and the value of his views on this matter. But you probably guessed that already from his illustrious career destroying every organisation he’s ever joined.

I’m not going to put up Sargon’s video here, nor link to it. But if you want to see for yourself, it’s on his channel on YouTube, Akkad Daily, with the title Is The Tide Turning Against Black Lives Matter. The tweet quoting the Jewish groups denouncing the deliberate conflation of Israel and Jews to accuse critics of Israel of anti-Semitism can be seen at the bottom of the twitter stream at 5.26.