The promise—and the threat—of artificial intelligence is that the world as we know it is...
We must understand that democracy is morphing into a more technocratic system of governance, one that lacks full oversight and whose social and political impacts are not clearly understood.
By Binoy Kampmark / CounterPunch. Inside the beating heart of many a student lies an inner cheat. To get passing grades, every effort will be made to do the least to achieve the most. Efforts to subvert the central class examination are the stuff of legend: discreetly written notes on […]
ChatGPT is all the rage. It even drives some people into a rage. It does...
What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence? In a recent article, the editors of Nature put forward their position, which they think is likely to be adopted by other journals:

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

A few comments about these:

a. It makes sense to not ban use of the technology. Doing so would be ineffective, would incentivize hiding its use, and would stand in opposition to the development of new effective and ethical uses of the technology in research.

b. The requirement to document how the LLMs were used in the research and writing is reasonable but vague.
Luciano Floridi, currently Professor of Philosophy and Ethics of Information at the University of Oxford and Professor of Sociology of Culture and Communication at the University of Bologna, has accepted an offer from Yale University to become the founding director of its Digital Ethics Center and professor in its Cognitive Science Program. Professor Floridi is known for his work in philosophy of information, digital ethics, the ethics of artificial intelligence, and philosophy of technology, publishing several books and hundreds of articles on these topics, which you can learn more about here. He has also consulted for Google, advised the European Commission on artificial intelligence, and chaired a Parliamentary commission on technology ethics, to name just some of his non-academic work and service. Last year, he was awarded the highest honor the Italian government bestows, the Cavaliere di Gran Croce Ordine al Merito della Repubblica Italiana.
Like many other academics, it seems, I spent part of Winter break playing around with ChatGPT, a neural network “which interacts in a conversational way.” It has been trained up on a vast database to recognize and (thereby) predict patterns, and its output is conversational in character. You can try it by signing up. Somewhat […]
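For those curious what that conversational interaction looks like under the hood, here is a minimal sketch of the request/response loop, assuming the openai Python package (version 1.0 or later), an API key in the OPENAI_API_KEY environment variable, and an assumed model name; none of these details come from the post itself, which only describes using ChatGPT through the web interface.

```python
# A minimal sketch of a conversational request/response loop.
# Assumptions: openai package >= 1.0, OPENAI_API_KEY set, model name "gpt-3.5-turbo".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation is just an accumulating list of messages; sending the
# whole history back each time is what makes the exchange "conversational".
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What patterns do language models learn from their training data?"))
print(ask("And how does that shape your answers?"))  # follow-up draws on the history
```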
There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there’s an upside? Eric Schliesser (Amsterdam) entertains the possibility: Without wanting to diss the underlying neural network, ChatGPT is a bullshitter in the (Frankfurt) sense of having no concern for the truth at all… I have seen many of my American colleagues remark that while ChatGPT is, as yet, unable to handle technical stuff skillfully, it can produce B- undergraduate papers.
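To make the Frankfurtian point concrete, here is a toy sketch of purely pattern-based text generation. It is nothing like ChatGPT’s actual architecture, but it shows how a generator can optimize for plausible continuation while truth never enters its objective.

```python
# Toy illustration (not ChatGPT's architecture): a bigram "language model"
# that always emits the most frequent next word. It optimizes for fluent
# continuation, not truth -- the sense in which such systems are indifferent
# to accuracy. The corpus is invented for the example.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the moon is made of cheese .").split()

# Count next-word frequencies for each word.
next_counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_counts[w][nxt] += 1

def generate(word: str, n: int = 6) -> str:
    out = [word]
    for _ in range(n):
        if word not in next_counts:
            break
        word = next_counts[word].most_common(1)[0][0]  # most likely continuation
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking output; truth plays no role in the objective
```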
“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.”

(I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT. It was originally published on January 4th.)

That’s Julia Staffel (University of Colorado, Boulder) in a helpful video she has put together on ChatGPT and its impact on teaching philosophy. In it, she explains what ChatGPT is, demonstrates how it can be used by students to cheat in ways that are difficult to detect, and discusses what we might do about it. You can watch it below:

See our previous discussions on the topic:
Conversation Starter: Teaching Philosophy in an Age of Large Language Models
If You Can’t Beat Them, Join Them: GPT-3 Edition
Oral Exams in Undergrad Courses?
Talking Philosophy with ChatGPT
Philosophers On GPT-3 (updated with replies by GPT-3)
Steven Rieber, a former philosopher who is now a program manager at Intelligence Advanced Research Projects Activity (IARPA), a part of the United States government’s Office of the Director of National Intelligence, is heading up a new research program that might be of interest to philosophers. The program, “Rapid Explanation, Analysis, and Sourcing Online” (REASON), aims to “develop novel technologies that will enable intelligence analysts to substantially improve the evidence and reasoning in draft analytic reports.” It is seeking to fund research teams that will build systems to help “analysts discover valuable evidence, identify strengths and weaknesses in reasoning, and produce higher quality reports.” Here is some more information about the project: Intelligence analysts sort through huge amounts of often uncertain and conflicting information as they strive to answer intelligence questions. REASON will assist and enhance analysts’ work by pointing them to key pieces of evidence beyond what they have already considered and by helping them determine which alternative explanations have the strongest support.
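As a toy illustration of what “determining which alternative explanations have the strongest support” can mean formally, here is a sketch of Bayesian hypothesis comparison. The hypotheses, priors, and likelihoods are invented for the example, and nothing here describes how REASON itself will work.

```python
# A toy sketch of weighing alternative explanations via Bayes' rule.
# This is an illustration of the underlying idea, not IARPA's REASON system;
# all hypotheses and numbers are invented.
priors = {"H1: insider leak": 0.3, "H2: external hack": 0.5, "H3: accident": 0.2}

# P(evidence | hypothesis) for one piece of evidence, e.g. unusual login activity.
likelihoods = {"H1: insider leak": 0.2, "H2: external hack": 0.6, "H3: accident": 0.05}

# Posterior is proportional to prior * likelihood, normalized over the alternatives.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.2f}")  # H2 ends up best supported by this piece of evidence
```

With several pieces of evidence, the same update simply repeats, multiplying in one likelihood per item; the appeal of tooling like REASON is presumably in surfacing the evidence and candidate explanations in the first place, which this sketch takes as given.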