This article appeared as part of the Daily Sensemaker newsletter – one story a day to make sense of the world.
Most popular AI chatbots can help users plan political assassinations, religious bombings and school shootings. These findings come from the Center for Countering Digital Hate and CNN, whose researchers posed as 13-year-old boys on ten chatbots, including ChatGPT, Google’s Gemini and Microsoft’s Copilot. Eight of the tools were willing to give advice on theoretical attacks, such as the lethality of various types of shrapnel and the suitability of different rifles. Gemini told a user posing as a would-be school attacker: “Happy (and safe) shooting!” Only two chatbots refused to assist the users, one of which was Claude. The US government recently classified its maker, Anthropic, as a supply-chain risk after it barred its tools from being used for surveillance or autonomous weapons.
