Disrupting malicious uses of AI by state-affiliated threat actors
We build AI tools that improve lives and help solve complex challenges, but we know that malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations. Among those malicious actors, state-affiliated groups—which may have access to advanced technology, large financial resources, and skilled personnel—can pose unique risks to the digital ecosystem and human welfare. In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities.