News Score: Score the News, Sort the News, Rewrite the Headlines

OpenAI's GPT-4 can exploit real vulnerabilities by reading security advisories

AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories, academics have claimed. In a newly released paper, four University of Illinois Urbana-Champaign (UIUC) computer scientists – Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang – report that OpenAI's GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing ...

Read more at theregister.com
