Monday, January 2, 2023

AI DEBATE 3: The AGI Debate [hosted in Montreal]

The Pivotal Discussion in Shaping the Path of AGI's Global Discourse.

Five panels of the world's most distinguished researchers and experts on AGI:

Panel 1: Cognition and Neuroscience

Panel 2: Common Sense

Panel 3: Architecture

Panel 4: Ethics and Morality

Panel 5: AI: Policy and Net Contribution

Speakers: Erik Brynjolfsson, Yejin Choi, Noam Chomsky, Jeff Clune, David Ferrucci, Artur d'Avila Garcez, Michelle Rempel Garner, Dileep George, Ben Goertzel, Sara Hooker, Anja Kaspersen, Konrad Kording, Kai-Fu Lee, Francesca Rossi, Jürgen Schmidhuber and Angela Sheffield.

Moderator and co-organizer (with Vincent Boucher): Gary Marcus. Gary Marcus is a leading voice in artificial intelligence. Scientist, best-selling author, and entrepreneur, he was Founder and C.E.O. of Geometric Intelligence, a machine-learning company acquired by Uber in 2016. His most recent book, Rebooting AI, co-authored with Ernest Davis, is one of Forbes’s 7 Must Read Books in AI.

Vincent Boucher is President of Montreal.AI and Quebec.AI.

Website: https://agidebate.com/

Gary Marcus remarks:

Every single speaker was both articulate and pointed. Two, Schmidhuber and Clune, were notably more optimistic about the potential of current techniques. (Sparks flew when they expressed that optimism.) But not one speaker thought that current AI was anything like the holy grail of artificial general intelligence. (Clune thought we might get there by 2030.) Virtually every speaker thought that things were about to get wild, and not necessarily in entirely good ways.

I urge you, if you care about artificial intelligence, its future, and its impact on society, to watch the debate in full. That’s a huge commitment, 3.5 hours, but one thing that I think all of our speakers could agree on is that artificial intelligence, in whatever form it currently is, is about to have a huge impact on society.

Marcus recommends Tiernan Ray's account of the debate for ZDNet. I have posted excerpts from it.

* * * * *

Noam Chomsky spoke first, remarking that he saw no scientific value in current AI. Why? Because machine learning systems can learn any language whatsoever and so can tell us nothing about specifically human language. In making this remark, I suppose he is implicitly analogizing these AI systems to the language acquisition device (LAD) he has been theorizing about for years. The LAD can learn all and only human languages and so is quite different from the engines of machine learning.

I wonder. Do we actually know that current ML engines can learn any language? Are we even sure they can learn English? To be sure, many of them produce fluent English output, but they also make many blunders. The blunders, however, tend to be about semantics, not syntax. Chomsky's theorizing has always been centered on syntax, so semantic mistakes, however glaring, may be irrelevant to him. Can one be said to know a language without knowing its semantics? But none of us knows the full semantics of any natural language.

Cf. my recent remarks, On limits to the ability of LLMs to approximate the mind’s structure, December 27, 2022.
