From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning
Abstract: Humans organize knowledge into compact categories through semantic compression by mapping diverse instances to abstract representations while preserving meaning (e.g., robin and blue jay are both birds; most birds can fly). These concepts reflect a trade-off between expressive fidelity and representational simplicity. Large Language Models (LLMs) demonstrate remarkable linguistic abilities, yet whether their internal representations strike a human-like trade...
Read more at arxiv.org
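The compression/fidelity trade-off the abstract describes can be made concrete with a toy sketch: grouping item vectors into fewer categories lowers the cost of naming a category (a rate proxy) while increasing the distance between items and their category prototype (a distortion proxy). The sketch below is only an illustration under assumed inputs; the random vectors stand in for real LLM embeddings, and k-means is a stand-in clustering procedure, not the paper's actual analysis.

```python
# Toy sketch of the compression-vs-fidelity trade-off (illustrative only).
# Random vectors stand in for real LLM embeddings; k-means stands in for
# whatever categorization scheme is under study.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical items; their vectors here are random placeholders.
items = ["robin", "blue jay", "sparrow", "salmon", "trout",
         "oak", "maple", "pine"]
embeddings = rng.normal(size=(len(items), 16))

for k in (1, 2, 4, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    rate = np.log2(k)                      # bits to name one of k categories
    distortion = km.inertia_ / len(items)  # mean sq. distance to centroid
    print(f"k={k}: rate={rate:.1f} bits, distortion={distortion:.3f}")
```

With fewer categories the rate falls but the distortion rises, and vice versa; the question the abstract raises is where along that curve human concepts and LLM representations each sit.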