PaperBench: Evaluating AI’s Ability to Replicate AI Research
We introduce PaperBench, a benchmark evaluating the ability of AI agents to replicate state-of-the-art AI research. Agents must replicate 20 ICML 2024 Spotlight and Oral papers from scratch, including understanding paper contributions, developing a codebase, and successfully executing experiments. For objective evaluation, we develop rubrics that hierarchically decompose each replication task into smaller sub-tasks with clear grading criteria.
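The core evaluation mechanism here is the hierarchical rubric: each paper's replication task is broken down into a tree of sub-tasks, with concrete grading criteria at the leaves. As a rough illustration of how such a rubric could be represented and scored (this is a minimal sketch, not PaperBench's actual schema; the node names, weights, and aggregation rule below are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class RubricNode:
    """One node in a hierarchical replication rubric.

    Leaves carry a pass/fail grade against a concrete criterion;
    internal nodes aggregate their children's scores by weight.
    """
    requirement: str
    weight: float = 1.0
    passed: bool = False                 # graded only at leaf nodes
    children: list["RubricNode"] = field(default_factory=list)

    def score(self) -> float:
        if not self.children:            # leaf: binary grading criterion
            return 1.0 if self.passed else 0.0
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.score() for c in self.children) / total

# Hypothetical rubric for one paper replication:
rubric = RubricNode("Replicate paper", children=[
    RubricNode("Code development", weight=2, children=[
        RubricNode("Implement training loop", passed=True),
        RubricNode("Implement evaluation harness", passed=False),
    ]),
    RubricNode("Execution", weight=1, children=[
        RubricNode("Experiments run to completion", passed=True),
    ]),
])
print(f"Replication score: {rubric.score():.2f}")  # 0.67
```

In this sketch the overall replication score is the weighted mean of sub-task scores propagated up the tree, so partial progress on a paper still earns partial credit, which matches the motivation for decomposing replication into smaller gradable sub-tasks.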
Read more at openai.com