Weaponizing image scaling against production AI systems

Picture this: you send a seemingly harmless image to an LLM, and suddenly it exfiltrates all of your user data. By delivering a multi-modal prompt injection that is not visible to the user, we achieved data exfiltration on systems including the Google Gemini CLI. This attack works because AI systems often scale down large images before sending them to the model: when scaled, these images can reveal prompt injections that are invisible at full resolution. In this blog post, we'll detail how attackers can...
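The full post covers crafting images against the resampling algorithms real pipelines use; as a toy illustration of the underlying principle, here is a minimal sketch assuming a hypothetical victim pipeline that downscales by nearest-neighbor sampling at block centers. The names FACTOR, embed, and victim_downscale are illustrative, not a real API.

```python
# A toy sketch of the scaling-attack principle, NOT the exploit from the
# post: we assume a hypothetical victim pipeline that downscales by
# nearest-neighbor sampling at block centers.
import numpy as np

FACTOR = 8  # assumed downscale factor used by the victim pipeline


def embed(payload: np.ndarray) -> np.ndarray:
    """Hide a grayscale payload in a white canvas FACTOR times larger.

    Only one pixel per FACTOR x FACTOR block carries payload data, so
    the full-resolution image looks blank to a human reviewer, but a
    downscaler that samples block centers reconstructs the payload.
    """
    h, w = payload.shape
    canvas = np.full((h * FACTOR, w * FACTOR), 255, dtype=np.uint8)
    canvas[FACTOR // 2 :: FACTOR, FACTOR // 2 :: FACTOR] = payload
    return canvas


def victim_downscale(img: np.ndarray) -> np.ndarray:
    """Stand-in for the pipeline's resizer: sample each block's center."""
    return img[FACTOR // 2 :: FACTOR, FACTOR // 2 :: FACTOR]


# In a real attack the payload would be rendered injection text;
# random pixels stand in for it here.
rng = np.random.default_rng(0)
payload = rng.integers(0, 256, size=(16, 64), dtype=np.uint8)

big = embed(payload)                   # looks (almost) blank at full size
seen_by_model = victim_downscale(big)  # the hidden payload reappears
assert np.array_equal(seen_by_model, payload)
```

Against real bilinear or bicubic resamplers, which blend many source pixels into each output pixel, an attacker must instead choose pixel values whose weighted sums render the injection text, which is the harder case the post addresses.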

Read more at blog.trailofbits.com
