Vance and AI

“He’s really thought this through and he is an incredibly twisted man,” Sam Seder tweeted in response to the JD Vance clip below. As if the prospect of this man being one heartbeat away from the presidency isn’t warped enough, our technology-fetishizing overlords want to put the electronic brains behind driverless cars in charge of defining reality for us.

You’ve noticed it too: “Google has rolled out generative AI to users of its search engine on at least four continents, placing AI-written responses above the usual list of links; as many as 1 billion people may encounter this feature by the end of the year.”

I already don’t trust them:

Yet AI chatbots and assistants, no matter how wonderfully they appear to answer even complex queries, are prone to confidently spouting falsehoods—and the problem is likely more pernicious than many people realize. A sizable body of research, alongside conversations I’ve recently had with several experts, suggests that the solicitous, authoritative tone that AI models take—combined with them being legitimately helpful and correct in many cases—could lead people to place too much trust in the technology. That credulity, in turn, could make chatbots a particularly effective tool for anyone seeking to manipulate the public through the subtle spread of misleading or slanted information. No one person, or even government, can tamper with every link displayed by Google or Bing. Engineering a chatbot to present a tweaked version of reality is a different story.

There’s evidence that interactions with AI distributing misleading information…