Is chain-of-thought AI reasoning a mirage?
Reading research papers and articles about chain-of-thought reasoning makes me frustrated.
There are many interesting questions to ask about chain-of-thought: how accurately the chain reflects the model's actual internal process, why training it “from scratch” often produces chains that switch fluidly between multiple languages, and so on. However, people keep asking the least interesting question possible: whether chain-of-thought reasoning is “really” reasoning.
Apple took up this question in their Illusion...
Read more at seangoedecke.com