News Score: Score the News, Sort the News, Rewrite the Headlines

New hack uses prompt injection to corrupt Gemini’s long-term memory

INVOCATION DELAYED, INVOCATION GRANTED There's yet another way to inject malicious prompts into chatbots. The Google Gemini logo. Credit: Google In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep finding new ways to poke through ...

Read more at arstechnica.com

© News Score  score the news, sort the news, rewrite the headlines