Diving Deeper into AI Package Hallucinations
Six months ago (forever in Generative AI terms), while working at Vulcan Cyber, I conducted research into LLM hallucinations in open-source package recommendations.

In that research I exposed a new attack technique: AI Package Hallucination. The technique abuses LLM tools such as ChatGPT, Gemini, and others to spread malicious packages that do not exist, based on model outputs served to the end user: the model recommends a package name that was never published, and an attacker can register that name with malicious code before an unsuspecting developer installs it.

I also tested the technique on the GPT-3.5 Turbo model, using 457 ...
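To make the mechanics concrete, here is a minimal sketch of the registry check at the heart of the technique: given a package name an LLM recommends, query the registry to see whether that name is actually published. The PyPI JSON API endpoint used below is real; the function name and the sample package list are illustrative assumptions, not part of the original research.

```python
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # PyPI returns 404 for names that were never published.
        return False


# Hypothetical LLM output: vet suggested names before `pip install`.
suggested = ["requests", "some-hallucinated-pkg"]
for pkg in suggested:
    status = "exists" if package_exists_on_pypi(pkg) else "NOT on PyPI (possible hallucination)"
    print(f"{pkg}: {status}")
```

Note that an existence check alone is not a safety check: the attack works precisely because an attacker can register the hallucinated name first, so a hit on the registry may be the malicious squatted package rather than a legitimate one.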
Read more at lasso.security