Artificial Intelligence

Created
Fri, 07/04/2023 - 00:01
by Cole Thompson

Artificial Intelligence, or AI, is attracting great attention as working AI systems become accessible to the public. The claim for AI is that it can digest the mass of knowledge humanity has made public, then use that knowledge to perform cognitive tasks or answer questions with speed and accuracy. This has many implications, some potentially worrisome. But when AI works well, it can serve up some interesting “truths.”

While AI does not generate authoritative or definitive information—you wouldn’t bet your savings on its output—my sense is that its findings often deserve a hearing.

The post Even AI Understands Limits to Growth appeared first on Center for the Advancement of the Steady State Economy.

Created
Thu, 16/02/2023 - 22:01

By Binoy Kampmark / CounterPunch Inside the beating heart of many students and a large number of learners lies an inner cheat. To get passing grades, every effort will be made to do the least to achieve the most. Efforts to subvert the central class examination are the stuff of legend: discreetly written notes on […]

The post ChatGPT: Boon for the Lazy Learner appeared first on scheerpost.com.

Created
Mon, 30/01/2023 - 22:00
What should our norms be regarding the publishing of philosophical work created with the help of large language models (LLMs) like ChatGPT or other forms of artificial intelligence? In a recent article, the editors of Nature put forward their position, which they think is likely to be adopted by other journals:

First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility. Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.

A few comments about these:

a. It makes sense not to ban use of the technology. Doing so would be ineffective, would incentivize hiding its use, and would stand in opposition to the development of new effective and ethical uses of the technology in research.

b. The requirement to document how the LLMs were used in the research and writing is reasonable but vague.
Created
Tue, 10/01/2023 - 04:29
Luciano Floridi, currently Professor of Philosophy and Ethics of Information at the University of Oxford and Professor of Sociology of Culture and Communication at the University of Bologna, has accepted an offer from Yale University to become the founding director of its Digital Ethics Center and a professor in its Cognitive Science Program.

Professor Floridi is known for his work in the philosophy of information, digital ethics, the ethics of artificial intelligence, and the philosophy of technology, having published several books and hundreds of articles on these topics, which you can learn more about here. He has also consulted for Google, advised the European Commission on artificial intelligence, and chaired a Parliamentary commission on technology ethics, to name just some of his non-academic work and service. Last year, he was awarded the highest honor the Italian government bestows, the Cavaliere di Gran Croce dell'Ordine al Merito della Repubblica Italiana.