News Score: Score the News, Sort the News, Rewrite the Headlines

LLMs Know More Than They Show: On the Intrinsic Representation of LLM Hallucinations

Abstract: Large language models (LLMs) often produce errors, including factual inaccuracies, biases, and reasoning failures, collectively referred to as "hallucinations". Recent studies have demonstrated that LLMs' internal states encode information regarding the truthfulness of their outputs, and that this information can be utilized to detect errors. In this work, we show that the internal representations of LLMs encode much more information about truthfulness than ...
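The idea of detecting errors from internal states is typically realized with a "probe": a small classifier trained on hidden activations to predict whether an output is truthful. The following is a minimal illustrative sketch, not the paper's actual method; the hidden states here are synthetic stand-ins for real LLM activations, and the dimensionality, labels, and training setup are all assumptions for demonstration.

```python
import numpy as np

# Illustrative sketch: a linear (logistic-regression) probe trained on
# synthetic "hidden states" to separate truthful from erroneous outputs.
# In practice, X would hold an LLM's activations at a chosen layer/token,
# and y would be ground-truth correctness labels for the model's answers.

rng = np.random.default_rng(0)
d = 16                       # assumed hidden-state dimensionality
n = 400                      # number of labeled examples
w_true = rng.normal(size=d)  # synthetic "truthfulness direction"

X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)  # 1 = truthful, 0 = hallucinated (synthetic)

# Train the probe with plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probability of "truthful"
    w -= 0.1 * (X.T @ (p - y)) / n       # mean gradient step

pred = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

Because the synthetic data is linearly separable by construction, the probe recovers the labeling direction; on real activations, probe accuracy is what quantifies how much truthfulness information the representations expose.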

Read more at arxiv.org
