Like many other academics, it seems, I spent part of Winter break playing around with ChatGPT, a neural network “which interacts in a conversational way.” It has been trained on a vast corpus of text to recognize and (thereby) predict patterns, and its output is conversational in character. You can try it by signing up. Somewhat […]
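To make “recognize and (thereby) predict patterns” a bit more concrete, here is a deliberately toy sketch in Python: a bigram model that predicts the next word by counting which word most often followed it in some training text. This is only an illustration of prediction-from-patterns under that simplifying assumption, not ChatGPT’s actual architecture, which is a far larger network trained on vastly more data.

```python
# Toy illustration of "learn patterns, then predict": a bigram model.
# NOT how ChatGPT actually works internally; just the core idea of
# predicting a continuation from patterns observed in training text.
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows each word in the training text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    followers = counts.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<unknown>"

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> "cat" ("cat" follows "the" twice)
```

Even this crude counting hints at why such a system’s output can sound fluent without being reliable: it optimizes for plausible continuations of the prompt, not for accuracy.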
There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there’s an upside? Eric Schliesser (Amsterdam) entertains the possibility: Without wanting to diss the underlying neural network, ChatGPT is a bullshitter in the (Frankfurt) sense of having no concern for the truth at all… I have seen many of my American colleagues remark that while ChatGPT is, as yet, unable to handle technical stuff skillfully, it can produce B- undergraduate papers.
“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.” (I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT. It was originally published on January 4th.) That’s Julia Staffel (University of Colorado, Boulder) in a helpful video she has put together on ChatGPT and its impact on teaching philosophy. In it, she explains what ChatGPT is, demonstrates how it can be used by students to cheat in ways that are difficult to detect, and discusses what we might do about it. You can watch it below.

See our previous discussions on the topic:
Conversation Starter: Teaching Philosophy in an Age of Large Language Models
If You Can’t Beat Them, Join Them: GPT-3 Edition
Oral Exams in Undergrad Courses?
Talking Philosophy with ChatGPT
Philosophers On GPT-3 (updated with replies by GPT-3)
Steven Rieber, a former philosopher who is now a program manager at the Intelligence Advanced Research Projects Activity (IARPA), part of the United States government’s Office of the Director of National Intelligence, is heading up a new research program that might be of interest to philosophers. The program, “Rapid Explanation, Analysis, and Sourcing Online” (REASON), aims to “develop novel technologies that will enable intelligence analysts to substantially improve the evidence and reasoning in draft analytic reports.” It is seeking to fund research teams that will build systems to help “analysts discover valuable evidence, identify strengths and weaknesses in reasoning, and produce higher quality reports.” Here is some more information about the project:

Intelligence analysts sort through huge amounts of often uncertain and conflicting information as they strive to answer intelligence questions. REASON will assist and enhance analysts’ work by pointing them to key pieces of evidence beyond what they have already considered and by helping them determine which alternative explanations have the strongest support.
Apropos last week’s “We’re Not Ready for the AI on the Horizon, But People Are Trying,” here is economist and policy analyst Samuel Hammond on what the near future holds: You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything. Bots will slide into your DMs and have long, engaging conversations with you until they sense the best moment to send their phishing link… Relationships will fall apart when the AI lets you know, via microexpressions, that he didn’t really mean it when he said he loved you. Copyright will be as obsolete as sodomy law, as thousands of new Taylor Swift albums come into being with a single click. Public comments on new regulations will overflow with millions of cogent and entirely unique submissions that the regulator must, by law, individually read and respond to. Death by kamikaze drone will surpass mass shootings as the best way to enact a lurid revenge. The courts, meanwhile, will be flooded with lawsuits, because who needs to pay attorney fees when your phone can file an airtight motion for you?
Florian J. Boge, currently an interim professor of philosophy of science at Wuppertal University and a postdoc in the interdisciplinary research unit The Epistemology of the Large Hadron Collider, has recently been awarded a €1.35 million (≈ $1.44 million) grant from the German Research Foundation (DFG) for research on the impact of artificial intelligence on scientific understanding. The project, “Scientific Understanding and Deep Neural Networks,” according to Dr. Boge, “keys in on the impressive recent successes of Deep Neural Networks within scientific applications and inquires into whether, or in what sense and to what extent, this means an advancement of prediction, classification, and pattern-recognition over scientific understanding.”