Why You Should Be Optimistic About the Future. Andreessen: Is AI a feature or an architecture? He thinks it's the second. https://t.co/KwAZtcttFd via @YouTube (Bill Benzon, @bbenzon, December 12, 2019)
In a recent conversation with Kevin Kelly (link in tweet), Marc Andreessen remarked that his VC firm would get pitches where the founders would list, say, five features of their product and then tack on AI as a sixth (starting at about 07:45). That’s AI as a feature.
At Andreessen Horowitz they think that the future of AI is as a platform. I think that may be right. Forget about artificial general intelligence (AGI), superintelligence, uploading, and all that; those are just the fever dreams of tech-bro monotheism. Think of AI as a learning technology that functions entirely within an artificial world, one bounded by computing technology. Its task is to learn and extend that world.
Learning, that’s how humans get about in the world. We learn as we go. Computing systems need to do that. But they need a bounded world in which they CAN learn effectively. Chess and Go are like that. Natural language is not. Deep learning can ‘learn’ the structure hidden in a mound of texts, but that’s not the structure of the world. That is, at best, the structure of language about the world. And that’s a far cry from being the world itself. Chess, on the other hand, is completely bounded by the rules of the game, and those rules are fully available to the computer. Play enough games, millions of them, and the system’s got a good grasp of that world.
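To make the bounded-world point concrete, here’s a minimal sketch in Python, mine and not anything from the Andreessen conversation, using tic-tac-toe as a stand-in for chess or Go. Because the rules are fully encoded, the program can generate all of its own training data by self-play and tally which openings work, with no reference to anything outside the game.

```python
# A minimal sketch of learning inside a fully bounded world (tic-tac-toe here,
# standing in for chess or Go). The rules are completely available to the
# program, so it can play itself endlessly and estimate, by simple Monte
# Carlo tallying, how good each opening move is.
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play random legal moves until the game ends; return 'X', 'O', or None."""
    while True:
        w = winner(board)
        if w or all(board):
            return w
        move = random.choice([i for i, v in enumerate(board) if not v])
        board[move] = player
        player = 'O' if player == 'X' else 'X'

def evaluate_openings(games_per_move=2000):
    """Estimate X's win rate for each opening square by self-play."""
    scores = {}
    for first in range(9):
        wins = 0
        for _ in range(games_per_move):
            board = [None] * 9
            board[first] = 'X'
            if random_playout(board, 'O') == 'X':
                wins += 1
        scores[first] = wins / games_per_move
    return scores

if __name__ == "__main__":
    for square, rate in sorted(evaluate_openings().items(), key=lambda kv: -kv[1]):
        print(f"opening square {square}: X win rate ~ {rate:.2f}")
```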
So how does that generalize to AI as a platform? I suppose the idea is that every application is written within an AI learning engine, which then proceeds to learn about and extend the application domain as humans use the application to solve problems.
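Here’s a toy sketch of how I read that idea. The names (LearningEngine, register_op, and so on) are hypothetical, purely for illustration: the application registers its primitive operations with a learning engine, and the engine records every interaction so that it has something to learn from.

```python
# A toy sketch of "AI as platform": the application is written against a
# learning engine, registering its primitive operations, and the engine logs
# every user interaction so it can later find patterns in how the domain is
# used. All names here are hypothetical, purely for illustration.
from collections import Counter
from typing import Callable, Dict, List, Tuple

class LearningEngine:
    def __init__(self):
        self.ops: Dict[str, Callable] = {}
        self.trace: List[Tuple[str, tuple]] = []

    def register_op(self, name: str, fn: Callable) -> None:
        """The application declares a primitive of its bounded world."""
        self.ops[name] = fn

    def invoke(self, name: str, *args):
        """Run a primitive on the user's behalf and remember that it happened."""
        self.trace.append((name, args))
        return self.ops[name](*args)

    def frequent_sequences(self, length: int = 2) -> Counter:
        """The crudest possible 'learning': count repeated op sequences,
        candidates the engine might offer back to the user as shortcuts."""
        names = [name for name, _ in self.trace]
        return Counter(tuple(names[i:i+length]) for i in range(len(names) - length + 1))

# Usage: a pretend slide application built on the engine.
engine = LearningEngine()
engine.register_op("new_slide", lambda title: {"title": title, "items": []})
engine.register_op("add_bullet", lambda slide, text: slide["items"].append(text))

slide = engine.invoke("new_slide", "Q3 results")
engine.invoke("add_bullet", slide, "Revenue up 12%")
engine.invoke("add_bullet", slide, "Costs flat")
print(engine.frequent_sequences())
```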
I had something like that in mind some years ago when I dreamed up a natural language interface for PowerPoint. This was back in 2004: 1) before machine learning had blossomed as it has in the last decade, and 2) after I’d finished my book on music, Beethoven’s Anvil, and had conceived of something I call attractor nets [1], in which I used Sydney Lamb’s network notation to serve, in effect, as the high-end control system for the nervous system (conceived as a complex dynamical system after the work of Walter Freeman). Here’s the abstract I wrote for a short paper setting forth the idea [2]:
This document sketches a natural language interface for end user software, such as PowerPoint. Such programs are basically worlds that exist entirely within a computer. Thus the interface is dealing with a world constructed with a finite number of primitive elements. You hand-code a basic language capability into the system, then give it the ability to ‘learn’ from its interactions with the user, and you have your basic PPA (PowerPoint Assistant).
Yes, I know, that reads like PPA is an add-on for good old PowerPoint, so AI as a feature. But notice that I talk of programs as “worlds that exist entirely within a computer” and of the system learning “from its interactions with the user.” That’s moving into platform territory.
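To make the abstract a bit more concrete, here’s a minimal sketch of the mechanism it describes, not the design in the working paper: a hand-coded core of phrases tied to the program’s primitives, plus the ability to pick up new phrasings from its interactions with the user.

```python
# A minimal sketch of the PPA idea: hand-code a small command language over
# the program's primitives, then let the assistant 'learn' new phrasings from
# the user. Everything here is illustrative, not the working paper's design.
class PowerPointAssistant:
    def __init__(self):
        # Hand-coded core: phrases the assistant understands out of the box.
        self.commands = {
            "new slide": self.new_slide,
            "add bullet": self.add_bullet,
        }
        self.slides = []

    def new_slide(self, arg):
        self.slides.append({"title": arg, "bullets": []})
        return f"created slide '{arg}'"

    def add_bullet(self, arg):
        self.slides[-1]["bullets"].append(arg)
        return f"added bullet '{arg}'"

    def handle(self, utterance):
        """Match the longest known phrase at the start of the utterance."""
        for phrase in sorted(self.commands, key=len, reverse=True):
            if utterance.lower().startswith(phrase):
                return self.commands[phrase](utterance[len(phrase):].strip())
        return None  # signal that the assistant didn't understand

    def teach(self, new_phrase, known_phrase):
        """Learning from interaction: the user tells the assistant that a new
        phrasing means the same as one it already knows."""
        self.commands[new_phrase.lower()] = self.commands[known_phrase]

ppa = PowerPointAssistant()
print(ppa.handle("new slide Project kickoff"))
print(ppa.handle("make a slide Budget"))   # None: not understood yet
ppa.teach("make a slide", "new slide")     # the user teaches the phrasing
print(ppa.handle("make a slide Budget"))   # now it works
```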
I then went on to imagine a community of users working with PPA:
As it happens, Jasmine [my imaginary user] is one of five graphic artists in the marketing communications department of a pharmaceutical company. All of them use PowerPoint, and each has her own PPA. While each artist has her own style and working methods, they work on similar projects, and they often work together on projects. The work they do must conform to overall company standards.
It would thus be useful to have ways of maintaining these individual PPAs as a “community” of computing “agents” sharing a common “culture.” While each PPA must be maximally responsive and attuned to its primary user, it needs to have access to community standards. Further, routines and practices developed by one user might well be useful to other users. Thus the PPAs need ways of “sharing” information with one another and for presenting their users with useful tips and tools.
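Reading that back, the mechanics might look something like this hedged sketch, all names mine and purely illustrative (Raoul is an invented colleague): each PPA keeps its own learned phrasings but can publish them to, and pull them from, a shared community store.

```python
# A hedged sketch of the "community of PPAs" idea: each assistant keeps its
# own learned phrasings, but can publish them to a shared store (the
# department's common "culture") and pull in what colleagues have contributed.
class CommunityStore:
    """Shared repository of phrasings the department has settled on."""
    def __init__(self):
        self.shared = {}   # phrase -> canonical command name

    def publish(self, phrase, command_name):
        self.shared[phrase] = command_name

class CommunityPPA:
    def __init__(self, owner, store):
        self.owner = owner
        self.store = store
        self.local = {"new slide": "new_slide"}   # hand-coded core

    def teach(self, phrase, command_name, share=False):
        """Learn a phrasing from the owner; optionally offer it to the group."""
        self.local[phrase] = command_name
        if share:
            self.store.publish(phrase, command_name)

    def sync(self):
        """Pull in community phrasings this assistant doesn't know yet."""
        for phrase, cmd in self.store.shared.items():
            self.local.setdefault(phrase, cmd)

store = CommunityStore()
jasmine = CommunityPPA("Jasmine", store)
raoul = CommunityPPA("Raoul", store)

jasmine.teach("start a deck", "new_slide", share=True)  # Jasmine shares a phrasing
raoul.sync()                                            # Raoul's PPA picks it up
print(raoul.local)
```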
Let’s generalize further:
The PowerPoint Assistant is only an illustrative example of what will be possible with the new technology. One way to generalize from this example is simply to think of creating such assistants for each of the programs in Microsoft’s Office suite. From that we can then generalize to the full range of end-user application software. Each program is its own universe and each of these universes can be supplied with an easily extensible natural language assistant. Moving in a different direction, one can generalize from application software to operating systems and net browsers.
That’s inching awfully close to AI-as-platform. Just do a gestalt switch and make the extensible natural language assistant your base system. Note that this system is not in the business of learning language in general, but only language where the meaning of words is closely tied to the application domain of the system itself. That’s something a deep learning system could learn.
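As a small illustration of what domain-bounded language learning might look like at the low end, here’s a sketch of a bag-of-words intent matcher trained only on utterances about the application’s own primitives. A real system would presumably use a trained neural model; the point is simply that the vocabulary and the meanings are bounded by the application.

```python
# A minimal sketch of language learning bounded by the application domain: a
# bag-of-words nearest-neighbour intent matcher over utterances that only ever
# talk about the program's own primitives. Illustrative, not a real design.
from collections import Counter
import math

TRAINING = [
    ("new slide about the budget", "new_slide"),
    ("create a fresh slide", "new_slide"),
    ("add a bullet about revenue", "add_bullet"),
    ("put another point on this slide", "add_bullet"),
    ("make the title bigger", "format_title"),
    ("increase the heading size", "format_title"),
]

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(utterance):
    """Return the intent of the closest training utterance."""
    v = vec(utterance)
    best = max(TRAINING, key=lambda ex: cosine(v, vec(ex[0])))
    return best[1]

print(classify("please add a bullet on costs"))   # add_bullet
print(classify("start a new slide for Q3"))       # new_slide
```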
[1] See these informal working papers. You need to read the Notes paper before reading the Diagrams paper.
William Benzon, Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks, Working Paper, 2011, 52 pp., Academia, https://www.academia.edu/9012847/Attractor_Nets_Series_I_Notes_Toward_a_New_Theory_of_Mind_Logic_and_Dynamics_in_Relational_Networks.
William Benzon, Attractor Nets 2011: Diagrams for a New Theory of Mind, Working Paper, 55 pp., Academia, https://www.academia.edu/9012810/Attractor_Nets_2011_Diagrams_for_a_New_Theory_of_Mind.
[2] William Benzon, PowerPoint Assistant: Augmenting End-User Software through Natural Language Interaction, Working Paper, July 2015, 15 pp., https://www.academia.edu/14329022/PowerPoint_Assistant_Augmenting_End-User_Software_through_Natural_Language_Interaction.