The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:
I think specialized models, and the analog systems that run them, will become increasingly prevalent.
LLMs at their current scales don’t do enough to be worth their enormous cost… And adding more data is increasingly difficult.
That said: recent research suggests the gains in LLMs have always been linear. Emergence was always illusory.
I’d like to read the research you alluded to. What research specifically did you have in mind?