Lex Fridman interviews John Carmack. At about 1:50 Carmack says:
I am not a madman for saying that it is likely that the code for artificial general intelligence is going to be tens of thousands of lines of code, not millions of lines of code. This is code that conceivably one individual could write, unlike writing a new web browser or operating system. And, based on the progress that AI and machine learning has made in the recent decade, it's likely that the important things that we don't know are relatively simple. There's probably a handful of things, and my bet is, I think, there's less than six key insights that need to be made. Each one of them can probably be written on the back of an envelope. We don't know what they are, but when they're put together in concert with GPUs at scale and the data that we all have access to, we can make something that behaves like a human being or like a living creature, and that can then be educated in whatever ways that we need to get to the point where we can have universal remote workers, where anything that somebody does mediated by a computer and doesn't require physical interaction, an AGI will be able to do.
He also believes that antecedents of all the critical ideas are already in the literature but have been lost.
On the six-or-fewer insights, I'm somewhere between agnostic and deeply skeptical (he doesn't know what he's talking about). But the idea that the existing literature contains important insights that have been lost is likely true. My favorite example is Miriam Yevick, "Holographic or Fourier Logic," Pattern Recognition 7 (1975): 197-213. FWIW, that article was published five years after Carmack was born.