Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We …
My best guess is it generates several possible replies and then does some sort of token match to pick the one most likely to be accurate. Not sure I’d call that “reasoning”, but it could improve results in some cases. With OpenAI not being so open it is hard to tell, though. They’ve been overpromising a lot already, so it may well be complete bullshit.
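(For what it’s worth, the kind of sample-and-select scheme I’m speculating about can be sketched in a few lines. This is purely illustrative — a toy majority vote over sampled answers — and says nothing about what OpenAI actually does, which is undisclosed. The sample strings are made up.)

```python
from collections import Counter

def select_by_agreement(candidates: list[str]) -> str:
    """Pick the answer the most samples agree on (simple majority vote).

    A toy stand-in for 'generate several replies, then match them against
    each other' — NOT OpenAI's actual, undisclosed implementation.
    """
    counts = Counter(candidates)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical replies sampled for the same question:
samples = ["51", "51", "54", "51", "41"]
print(select_by_agreement(samples))  # prints "51"
```

The idea is just that wrong answers tend to scatter while correct ones tend to repeat, so agreement is a cheap proxy for accuracy — whether that counts as “reasoning” is exactly the question.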
Isn’t OpenAI saying that o1 has reasoning as a specific selling point?
they do say that, yes. it’s as bullshit as all the other claims they’ve been making
Which is my point, and, forgive me, I believe the point of the research publication as well.
They say a lot of stuff.
Didn’t the previous models already do this?
No idea. I’m not actually using any OpenAI products.