Friday, December 22, 2023

What do I want from my AI Assistant? [control, that's what]

It seems pretty clear that sooner or later I’m going to have one. Whether or not it will be one that is focused on my interests as I see them, that’s not at all clear. It’s just as likely, if not more so, that I’ll be forced to accept an AI assistant designed for me by some MegaCorp. This MegaCorp will tell me that it has my interests at heart and that I can easily customize MyAssist the way I want. I won’t believe it. MyAssist will just be MegaCorp’s way of handling me. If I refuse MyAssist, which I could do, then it is likely that it will be much more difficult, if not impossible, for me to access many of the things I use my computer for.

Why do I think such a thing? Because that’s pretty much the way things are now. Two years ago I published a series of posts on the topic Facebook or Freedom. Those posts were precipitated by a change in the Facebook interface that was being thrust on me. I was happy with the interface just as it was, thank you very much, and I saw no need for a change. I resisted as long as I could; I even considered installing one of the work-arounds available to those who really did not want to change. But in the end I had to change. So I did. I’ve adapted, as I have to subsequent changes, all done for my good.

What bothered me then, and still does, is that I had little choice in the matter. The same thing’s been going on with the App Formerly Known as Twitter since Elon took it over. Since the change a bunch of my academic buddies have all but disappeared from the platform, and the number of Babe Bots has increased, but otherwise things are much the same for me. I’ve never been on AFKaT for the politics, so I’ve been able to avoid most of it. That hasn’t changed. I really don’t want to pay a monthly fee (I live on a very modest fixed income). I understand that Elon has to turn a buck, so why has he driven the advertisers away?

I digress.

The fact is, I’m wedded to my computer and to the internet, to email and the world-wide web. I really couldn’t function very well without them, not as an intellectual. And for the most part I don’t have to spend all that much time fiddling around with things in order to keep them working. But I do have to spend some time. And, yes, I probably could use some changes. But I don’t have the skills I’d need to make those changes, much less the time.

It’s obvious that I need an AI Assistant to take care of all of this. Some years ago I sketched out ideas for a PowerPoint Assistant I could control through natural language. I also imagined that what I was thinking about for PowerPoint could be generalized:

The PowerPoint Assistant is only an illustrative example of what will be possible with the new technology. One way to generalize from this example is simply to think of creating such assistants for each of the programs in Microsoft’s Office suite. From that we can then generalize to the full range of end-user application software. Each program is its own universe and each of these universes can be supplied with an easily extensible natural language assistant. Moving in a different direction, one can generalize from application software to operating systems and net browsers.

Back then – the notes originally date from 2002-2003 – the technology we’d need to do that didn’t exist. Now it does.
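Just to make that old sketch concrete, here’s a rough, minimal sketch of what a present-day version of the PowerPoint Assistant might look like under the hood. It assumes Python and the python-pptx library, uses a toy keyword matcher standing in for the natural-language layer (that’s the part an LLM would handle now), and works on a hypothetical deck called talk.pptx; none of these names come from my old notes.

# A minimal sketch of the "PowerPoint Assistant" idea: a natural-language
# command gets mapped onto an operation against a real deck.
# Assumptions: python-pptx is installed, the deck "talk.pptx" exists and
# its slides have title placeholders, and the keyword matching below is a
# stand-in for an LLM or grammar doing the language understanding.

from pptx import Presentation   # pip install python-pptx
from pptx.util import Pt

def set_title(prs, slide_index, text):
    """Replace the title text on one slide."""
    prs.slides[slide_index].shapes.title.text = text

def enlarge_title(prs, slide_index, size_pt=40):
    """Bump the title font size on one slide."""
    title = prs.slides[slide_index].shapes.title
    for para in title.text_frame.paragraphs:
        for run in para.runs:
            run.font.size = Pt(size_pt)

def handle(prs, command):
    """Stand-in for the natural-language layer: turn a free-form request
    into one of a small set of structured operations."""
    cmd = command.lower()
    if cmd.startswith("retitle slide"):
        # e.g. "retitle slide 3: Quarterly Results"
        head, _, text = command.partition(":")
        idx = int(head.split()[2]) - 1
        set_title(prs, idx, text.strip())
    elif cmd.startswith("make the title bigger on slide"):
        idx = int(cmd.rsplit(" ", 1)[-1]) - 1
        enlarge_title(prs, idx)
    else:
        print("Sorry, I don't understand:", command)

if __name__ == "__main__":
    deck = Presentation("talk.pptx")   # hypothetical file
    handle(deck, "retitle slide 3: Quarterly Results")
    handle(deck, "make the title bigger on slide 3")
    deck.save("talk-edited.pptx")

The point of the sketch is the division of labor: the language layer only has to translate what I say into a small set of structured operations, and the application does the rest. That’s the part that was out of reach in 2002 and isn’t anymore.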

Who’s going to control these AI Assistants? The end-users or the MegaCorps?

But how could it be anyone BUT the MegaCorps? Real control requires technical skill few end users have. Even if a user has the skills, the humongous AI models at the heart of current AI tech are necessarily in the hands of organizations that have the capital and personnel needed to create and maintain them. To be sure, a lively open-source scene is developing, with Meta and Microsoft encouraging it by releasing relatively small AI engines to the open-source world. Would they be doing that if they hadn’t been caught flatfooted by OpenAI, primarily, and secondarily by Google and Anthropic and a few others? Could governments develop large AI models that they maintain as public utilities? Should they? Can they attract the personnel needed to do so?

I can’t see where any of this is going. I’m in favor of decentralized distribution and control of the technology. Just how that’s going to work out, who knows? At the moment the Robber Barons of Silicon Valley seem to have the upper hand. I don’t know whether or not they can keep it forever.

And so forth and so on.

More later.

Much more. 

* * * * *

ADDENDUM: Is Rabbit a first draft of the assistant I want?
