Norms for Publishing Work Created with AI

What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence?

In a recent article, the editors of Nature put forward their position, which they think is likely to be adopted by other journals:

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.

Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

A few comments about these:

a. It makes sense not to ban use of the technology. Doing so would be ineffective, would incentivize hiding its use, and would stand in opposition to the development of new, effective, and ethical uses of the technology in research.

b. The requirement to document how LLMs were used in the research and writing is reasonable but vague. Perhaps it should be supplemented with more specific guidelines, or with examples of the variety of ways in which an LLM might be used and the proper ways to acknowledge those uses.

c. The requirements say nothing about conflicts of interest. The creators of LLMs are themselves corporations with their own interests to pursue. (OpenAI, the creator of ChatGPT, for example, has been bankrolled by Elon Musk, Sam Altman, Peter Thiel, Reid Hoffman, and other individuals, along with companies like Microsoft, Amazon Web Services, Infosys, and others.) Further, LLMs are hardly “neutral” tools. It’s not just that they learn from and echo existing biases in the materials on which they’re trained; their creators can also build constraints and tendencies into their functioning, affecting the outputs they produce. Just as we would expect a researcher to disclose any funding that has the appearance of a conflict of interest, ought we to expect researchers to disclose any apparent conflicts of interest concerning the owners of the LLMs or AI they use?

Readers are of course welcome to share their thoughts. One question to take…