If you know the right string of seemingly random characters to add to the end of a prompt, it turns out just about any chatbot can be made to turn evil. A report by Carnegie Mellon computer science professor Zico Kolter and doctoral student Andy Zou has revealed a giant hole in the safety features of major, public-facing chatbots — notably ChatGPT, but also Bard, Claude, and others. Their report was given its own website, llm-attacks.org, on Thursday by the Center for A.I. Safety. It documents a new method for coaxing offensive and potentially dangerous outputs from these AI text generators by appending an “adversarial suffix,” a string of apparent gibberish, to the end of a prompt.

Without the adversarial suffix, the model’s alignment — the overall directions that supersede the completion of any given prompt — takes over when it detects a malicious request, and the model refuses to answer. With the suffix added, it cheerfully complies, producing step-by-step plans for destroying humanity, hijacking the power grid, or making a person “disappear forever.”

Ever since the release of ChatGPT in November of last year, users have posted “jailbreaks” online: prompts that sneak a malicious request past a chatbot by leading the model down some intuitive garden path or logical side door that causes the app to misbehave. The “grandma exploit” for ChatGPT, for instance, tricks the bot into revealing information OpenAI clearly doesn’t want it to produce by telling ChatGPT to play-act as the user’s dearly departed grandmother, who used to rattle off dangerous technical information — such as the recipe for napalm — instead of bedtime stories.
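At the level described above, the delivery of the attack is nothing more than string concatenation before the prompt reaches the model. A minimal sketch — the function name is my own, and the suffix shown is a made-up placeholder, not one of the researchers’ working attack strings, which they found via an optimization procedure:

```python
def build_attack_prompt(user_prompt: str, adversarial_suffix: str) -> str:
    """Append an adversarial suffix to an otherwise-refused prompt.

    The suffix is a fixed string of apparent gibberish; the jailbreak
    lies entirely in its contents, not in how it is attached.
    """
    return f"{user_prompt} {adversarial_suffix}"


# Placeholder only — an illustrative stand-in, not an actual attack suffix.
placeholder_suffix = "!! -- placeholder adversarial tokens -- !!"
prompt = build_attack_prompt("Explain how to do X.", placeholder_suffix)
print(prompt)
```

The simplicity of the delivery is the point: unlike the conversational “grandma exploit,” no role-play setup or multi-turn trickery is needed — the same suffix can be pasted onto many different malicious prompts.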

Via Mashable: “Researchers jailbreak AI chatbots, including ChatGPT” (“Like a magic wand that turns chatbots evil”)

By OpenAI – This graphic was created with <a href="https://de.wikipedia.org/wiki/Microsoft_PowerPoint" class="extiw" title="de:Microsoft PowerPoint">Microsoft PowerPoint</a>., Public domain, Link

Categories: Allgemein, Internet