Saturday, August 22, 2020

Saturday ramble: GPT-3, your AI companion, metaphysics, Instagram [& identity]

Yikes! But I’ve been busy. And it’s not going to stop.

GPT-3 update – GPT-X: The Star Trek computer and beyond

Back on August 5 I’d issued my first working paper in my GPT-3 project, GPT-3: Waterloo or Rubicon? Here be Dragons, but of course I continued thinking about those issues as well as making attempts to move on. Since then I’ve issued a number of further-thoughts posts and then a couple of days ago I decided that I needed to revise the middle of the paper, resulting in GPT-3: Waterloo or Rubicon? Here be Dragons, Version 2, which I uploaded on Wednesday (or was it Thursday?). Whenever it was, I’ve tweaked it twice since then. I think it’s now to the point where I can leave further thoughts to another paper, perhaps GPT-3: Bounding the Space, toward a theory of minds.

I’ve been making fitful progress on the GPT-3 future paper, which now has the working title, After GPT-X: The Star Trek computer, and beyond. The problem is figuring out just what I want to do. What I DON'T want to do is take on the future of all of AI and robotics. But how do I limit myself? My current scheme is to have an opening section that starts by pointing out that much of AI still seems gripped by the chess-thinking that dominated its founding. Just where I’ll go from there I don’t know, but I want to say something about what AI is good at and what it isn’t: closed worlds vs. open, abstract vs. concrete. That would lay the foundations for the next section, “AI as platform and beyond”.

That’s where I’ve been doing some work. I’ll start with Andreessen’s remarks on AI as platform, pointing out, however, that he’s clearly thinking of something like the Star Trek computer as the endgame here. That’s one possibility, but not the only one. I want to think along the lines I laid out in an old document, PowerPoint Assistant: Augmenting End-User Software through Natural Language Interaction. That’s easily transformed into the AI-as-platform architecture and it’s oriented “inward” toward the machine itself, rather than “outward” toward the external world. It’s thus oriented toward a relatively closed world, moreover, one that’s well suited to the machine’s capabilities; it doesn’t have to cope with the perceptual messiness of the physical world. This line of thinking leads to conceiving of the operating system as an AI. That, in turn, leads to AIs interacting with other AIs.

Then we can take up Andreessen’s line of thought, which leads toward the Star Trek computer. I suppose I’ve got to reiterate just how difficult it is going to be to develop a cognitive model and a semantics of the external world, with common sense reasoning at its base. I suspect that may well require an ongoing cumulative and cooperative effort that will have to be public, and international, rather than a private effort, like GPT-3. It’s just too large. But then something like the Star Trek computer – if we’re really thinking in those terms – really should be a public resource, like water, electricity, and so forth. That’s a project that will take decades, if not generations.

Computer/AI as life-long companion

Then we can bring these two lines of development together – the inward move toward the operating system as AI and the outward move toward the Star Trek computer – in the notion of an AI as a life-long companion. A child will be given an artificial companion at a young age, say between 18 and 24 months or so, and will have that companion with it for the rest of its life. Just what that companion does and how it functions, that will change as the child matures, and so forth. I figure it could have various manifestations: an electro-mechanical pet, a hand-held device, a facility in the cloud, and so forth. It will variously be a tutor, a companion, a resource, perhaps an aide too. This raises a host of design issues, technical issues, and ethical issues.

What’s the time frame? I don’t know. Companion robots exist now, at least in Japan. I know they’re used for the elderly. And of course there’s the Tamagotchi, a small hand-held device that functions as a virtual pet. That’s the seed of such a companion. Add to that the smart phone… But a full-blown companion? I don’t know. It’s not at all an either/or thing. It will evolve by degrees. I suppose the critical point will be when every child is given one. Fifty years? Who will own this device? Will it be the private property of the child’s family until the child becomes an adult? Who will manufacture it? Private corporations, under license from the government? Regulation?

Offhand, it’s not at all clear to me that such a capability is well served by the current division between public and private. It’s somewhere in between. I don’t think it can be private property, like a slave. But I don’t think it’ll be capable of fully autonomous human-like existence, and so entitled to citizenship. How will it fit into a new social order?

The metaphysical structure of the world – it’s lumpy

I am currently in the process of taking two posts and combining them into a single working paper with the provisional title, What economic growth and statistical semantics tell us about the structure of the world. One post is from August 13, Stagnation, Redux: It’s the way of the world [good ideas are not evenly distributed, no more so than diamonds], and is about economic growth. The other is from August 15, World, mind, and learnability: A note on the metaphysical structure of the cosmos, and takes its point of departure from my current thinking about GPT-3.

Both posts are grounded in the fact that the world is complex and irregular, chaotic, in the way David Hays and I argued in A Note on Why Natural Selection Leads to Complexity (1990). The world isn’t “smooth”; it’s “lumpy” – terms I’ll be explicating in the working paper. This lumpiness is not fundamentally physical – though it may be so in some cases, think, for example, of the distribution of diamonds, or oil, or whales – but is rather metaphysical, a function of the relationship between our capacities and the world we live in.

It is because the world is lumpy that we can perceive and move around in it effectively. That is, it is learnable. And that’s what makes GPT-3 and similar engines possible. This lumpiness exists at all scales.

But lumpiness underlies the phenomenon of economic stagnation as well. That, in effect, is what I argued last year in Stagnation and Beyond: Economic growth and the cost of knowledge in a complex world. There I argued that the increasing cost of drug discovery, on the one hand, and of developing ever smaller chip components for computation (Moore’s law), on the other, reflected the rising cost of learning more about the world. In a lumpy world, we start by exploiting things that are near to us and then move toward things that are more distant. Effort increases with distance.

Thus understood, this metaphysical structure of course must be extended to animals and to plants as well – the relationship between their capacities and the world. These capacities – for all of us, plants, animals, humans – change over time. Thus the metaphysical structure of the world is not fixed, but it doesn’t wander chaotically either. It evolves.

Hello, Instagram!

I’ve had an Instagram account for some time now, but only on my laptop, not my smart phone. I’ve only been using my smart phone to make phone calls and send text messages. But I’ve recently started running it through wifi, and that gives me access to the internet. So I’ve downloaded the Instagram app and have been uploading photos to it for the last few days.

Oh, I’ve been taking photos with the phone’s camera, but I’ve mostly been uploading photos from my laptop. I email them to my Google account and then save them to the phone. From there it’s easy to post them to Instagram. I’ve been getting a lot of “likes”.

What fun.

Identity

And I’ve still got a post about cultural and social identity on my plate. It’s been hanging around since my interview with Hollis Robbins. I plan to cover four cases:
  • a white academic writing about jazz (“a culture not one’s own”)
  • a Black poet writing sonnets (“received European form”)
  • Japanese and baseball
  • Arabic numbers in the West
What will come out of the cross-comparison? Perhaps cultural and social identity are not the same thing.
