Training A Small Language Model To Outperform Frontier Models On CRM-Arena
Much of the research we do at Neurometric focuses on auto-generating SLMs for specific tasks. After evaluating test-time compute strategies across various models and publishing our Leaderboard, we turned to SLMs. There’s a growing assumption in the AI world that bigger is always better: more parameters, more compute, more everything. But what if a model with fewer than 6 billion parameters could hold its own against models 10x or 20x its size on real enterprise tasks?

CRMArena is a benc...
Read more at neurometric.substack.com