News Score: Score the News, Sort the News, Rewrite the Headlines

Design Patterns for Securing LLM Agents against Prompt Injections

13th June 2025

This new paper by 11 authors from organizations including IBM, Invariant Labs, ETH Zurich, Google and Microsoft is an excellent addition to the literature on prompt injection and LLM security. From the paper:

"In this work, we describe a number of design patterns for LLM agents that significantly mitigate the risk of prompt injections. These design patterns constrain the actions of agents to explicitly prevent them from solving arbitrary tasks. We believe these design patterns offer a valuable tra..."
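To make the constraint idea concrete, here is a minimal, hypothetical sketch (the action names and handlers are invented for illustration, not taken from the paper) of an agent restricted to a fixed allowlist of actions, where untrusted output never feeds back into action selection:

```python
# Hypothetical sketch: the agent may only execute one action from a fixed
# allowlist, and the action's (untrusted) result is returned directly rather
# than being fed back into further tool selection. A prompt injection hidden
# in the result therefore cannot cause additional actions to run.

ALLOWED_ACTIONS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda city: f"12:00 in {city}",
}

def run_agent(action_name: str, argument: str) -> str:
    """Execute exactly one pre-approved action; refuse anything else."""
    handler = ALLOWED_ACTIONS.get(action_name)
    if handler is None:
        return "Refused: action not in allowlist"
    # The result is returned verbatim -- it never influences which action
    # runs next, so injected instructions in the data have no effect.
    return handler(argument)

print(run_agent("get_weather", "Zurich"))   # permitted action runs
print(run_agent("delete_files", "/"))       # unknown action is refused
```

This deliberately trades generality for safety: the agent cannot be talked into performing an arbitrary task, because the set of reachable actions is fixed up front.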

Read more at simonwillison.net
