He seems to think so, and I agree. Though I also believe that LLMs probably have won a permanent place in the repertoire of techniques for AI devices. We just have to figure out how best to use them.
Here’s Marcus’s most recent post: Satya Nadella and the three stages of scientific truth. You know the three stages: first the idea is ridiculed, which happened with Marcus’s 2022 paper in which he declared that LLMs would hit a wall. In the second stage, the idea is opposed. In the third stage the idea wins, as though we’d known it all along.
Marcus quotes Microsoft’s CEO Satya Nadella:
So now, in fact, there is a lot of debate. In fact, just in the last multiple weeks there is a lot of debate over whether we have hit the wall with scaling laws. Is it gonna continue? Again, the thing to remember at the end of the day is that these are not physical laws. They are just empirical observations that hold true, just like Moore’s law did for a long period of time, and so therefore it’s actually good to have some skepticism, some debate, because that I think will motivate more innovation, whether it’s model architectures or whether it’s data regimes or even system architecture.
Marcus notes that Marc Andreessen and Alexandr Wang have made similar statements.