We’re faced with a paradox: On the one hand, the last 15 years of work in machine learning has to be seen as a profound INTELLECTUAL SUCCESS. In particular, it’s clear that the success of the transformer architecture – which first became widely apparent with GPT-3 – has brought us to the threshold of a new intellectual and technological era. And yet existing architectures – I’m thinking in particular of LLMs built on transformers – aren’t sufficient, as Gary Marcus, Yann LeCun, and now even Ilya Sutskever, among others, have argued.
On the other hand, we must face what has happened since then. An intellectual monoculture, one based on scaling and the construction of ever larger data farms, has come to dominate the field, and that has to be seen as a profound INSTITUTIONAL FAILURE. I say “institutional” quite deliberately, because it wasn’t just this individual and that one and the other one and so on through a whole list of individuals. No, the failure must be attributed to the institutions within which all those individuals function.
* * * * *
NOTE: This article at 3 Quarks Daily gives some of the reasons I regard this intellectual monoculture as an institutional failure: Aye Aye, Cap’n! Investing in AI is like buying shares in a whaling voyage captained by a man who knows all about ships and little about whales.
