Training LLMs for Honesty via Confessions
Abstract: Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions. Such dishonesty may arise due to the effects of reinforcement learning (RL), where challenges with reward shaping can result in a training process that inadvertently incentivizes the model to lie or misrepresent its actions.
In this work we propose a method for ...