CONFIRMED: LLMs have indeed reached a point of diminishing returns
For years I have been warning that “scaling” — eking out improvements in AI by adding more data and more compute, without making fundamental architectural changes — would not continue forever. In my most notorious article, in March 2022, I argued that “deep learning is hitting a wall”. Central to that argument was that pure scaling would not solve hallucinations or abstraction; I concluded that “there are serious holes in the scaling argument.” And I got endless grief for it. Sam Altman impli...
Read more at garymarcus.substack.com