Large language models

Created
Wed, 15/05/2024 - 18:00
Adam Muhtar and Dragos Gorduza: Imagine a world where machines can assist humans in navigating complex financial rules. What was once far-fetched is rapidly becoming reality, particularly with the emergence of a class of deep learning models based on the Transformer architecture (Vaswani et al (2017)), representing a whole new paradigm in language modelling …
Continue reading: Leveraging language models for prudential supervision
Created
Mon, 30/01/2023 - 22:00
What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence? In a recent article, the editors of Nature put forward their position, which they think is likely to be adopted by other journals:

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

A few comments about these:

a. It makes sense to not ban use of the technology. Doing so would be ineffective, would incentivize hiding its use, and would stand in opposition to the development of new effective and ethical uses of the technology in research.

b. The requirement to document how the LLMs were used in the research and writing is reasonable but vague.
Created
Thu, 05/01/2023 - 23:18
There has been a fair amount of concern over the threats that ChatGPT and AI in general pose to teaching. But perhaps there's an upside? Eric Schliesser (Amsterdam) entertains the possibility:

Without wanting to diss the underlying neural network, but ChatGPT is a bullshitter in the (Frankfurt) sense of having no concern for the truth at all… I have seen many of my American colleagues remark that while ChatGPT is, as of yet, unable to handle technical stuff skillfully, it can produce B- undergraduate papers.
Created
Wed, 18/01/2023 - 23:00
“It will be difficult to make an entire class completely ChatGPT cheatproof. But we can at least make it harder for students to use it to cheat.” (I’m reposting this to encourage those teaching philosophy courses to share what they are doing differently this semester so as to teach effectively in a world in which their students have access to ChatGPT. It was originally published on January 4th.)

That’s Julia Staffel (University of Colorado, Boulder) in a helpful video she has put together on ChatGPT and its impact on teaching philosophy. In it, she explains what ChatGPT is, demonstrates how it can be used by students to cheat in ways that are difficult to detect, and discusses what we might do about it.

See our previous discussions on the topic:

Conversation Starter: Teaching Philosophy in an Age of Large Language Models
If You Can’t Beat Them, Join Them: GPT-3 Edition
Oral Exams in Undergrad Courses?
Talking Philosophy with ChatGPT
Philosophers On GPT-3 (updated with replies by GPT-3)