Detecting when LLMs are Uncertain
This post tries to explain the new reasoning techniques developed by XJDR in a project called Entropix. Entropix attempts to improve reasoning in models by sampling more intelligently during moments of uncertainty. A big caveat: there have been no large-scale evals for Entropix yet, so it's not clear how much it helps in practice. But it does introduce some promising techniques and mental models for reasoning.
Uncertainty at a glance
Sampling is the process of choosing which token the model emits next from the probability distribution it predicts over its vocabulary...
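As a rough sketch of the idea (not Entropix's actual code), one common way to quantify how uncertain a model is at a given step is to compute the Shannon entropy of its next-token distribution: a peaked distribution has entropy near zero, while a flat one has high entropy. The logit values below are made up for illustration.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Subtract the max for numerical stability before exponentiating.
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def entropy(probs: np.ndarray) -> float:
    # Shannon entropy in nats; higher means more uncertain.
    # The small epsilon guards against log(0).
    return float(-np.sum(probs * np.log(probs + 1e-12)))

# A confident step: one logit dominates, entropy is near 0.
confident = softmax(np.array([10.0, 0.0, 0.0, 0.0]))

# An uncertain step: logits are flat, entropy approaches log(4).
uncertain = softmax(np.array([1.0, 1.0, 1.0, 1.0]))

print(entropy(confident))  # close to 0
print(entropy(uncertain))  # close to log(4) ≈ 1.386

# Standard sampling then draws a token id from the distribution.
rng = np.random.default_rng(0)
token_id = rng.choice(len(uncertain), p=uncertain)
```

A sampler that tracks this entropy signal can behave differently when the model is unsure, which is the intuition behind the techniques discussed here.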
Read more at example.me