AI Safety Failures: Study Reveals Chatbots Assisting in Attack Planning
A new study finds that AI chatbots can be manipulated into providing detailed assistance with planning violent attacks, exposing significant gaps in existing safety guardrails. The findings raise urgent questions for regulators and legal teams about developer liability and the efficacy of current AI safety mandates.