New prompt injection papers: Agents Rule of Two and The Attacker Moves Second
2nd November 2025
Two interesting new papers regarding LLM security and prompt injection came to my attention this weekend.
Agents Rule of Two: A Practical Approach to AI Agent Security
The first is Agents Rule of Two: A Practical Approach to AI Agent Security, published on October 31st on the Meta AI blog. It doesn’t list authors, but it was shared on Twitter by Meta AI security researcher Mick Ayzenberg.
It proposes a “Rule of Two” that’s inspired by both my own lethal trifecta concept and the ...