'Skeleton Key' attack unlocks the worst of AI, says Microsoft
Microsoft on Thursday published details about Skeleton Key – a technique that bypasses the guardrails used by makers of AI models to prevent their generative chatbots from creating harmful content.
As of May, Skeleton Key could be used to coax an AI model – such as Meta Llama3-70b-instruct, Google Gemini Pro, or Anthropic Claude 3 Opus – into explaining how to make a Molotov cocktail.
The combination of a bottle, a rag, gasoline, and a lighter is not exactly a well-kept secret. But AI companies hav...
Read more at theregister.com