The LLMentalist Effect: how chat-based Large Language Models rep…
For the past year or so I’ve been spending most of my time researching the use of language and diffusion models in software businesses.
One of the issues during this research, and one that has perplexed me, has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.
But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this intelligence, and if it were real, it would be completely unexplained.
LLMs are not brains a...