OpenAI pledges to publish AI safety test results more often | TechCrunch
Image Credits: Kim Jae-Hwan/SOPA Images/LightRocket / Getty Images
9:38 AM PDT · May 14, 2025
OpenAI is moving to publish the results of its internal AI model safety evaluations more regularly, in what the company says is an effort to increase transparency.
On Wednesday, OpenAI launched the Safety evaluations hub, a web page showing how the company’s models score on various tests for harmful content generation, jailbreaks, and hallucinations. OpenAI says that it’ll use the hub to share metrics...