Two years ago I did a series of posts provoked by Facebook. As I explained in the first post in that series, Facebook or freedom, Part 1: Who gave you permission to mess with my mind?:
On Tuesday, August 25, I was using Facebook, as I do every day, and I changed from one page to another. All of a sudden, WHAM! the interface changed and went mostly black. Facebook informed me that they would be changing the interface permanently on September 1, but I could get the new interface now. But, if I wanted, I could, at least temporarily, switch back to the old interface.
I resisted as long as I could, though I knew my resistance was doomed to failure, and in the process wrote a series of posts about how end users are at the mercy of the Big Corps that provide us with the software we use to run our daily lives. The most recent post in the series went up on December 22 of last year, What do I want from my AI Assistant? [control, that's what].
Though I was aware of it, I neglected to write a post about the ‘right to repair,’ which has been an important issue for farmers and other operators of big equipment:
Modern-day tractors and combines are basically like computers on wheels. And for years, there has been a battle between farmers and manufacturers over who should have access to the information needed to repair them. Equipment manufacturers have made some concessions in order to avoid new laws, but some farmers say that's not enough. A new law in Colorado went into effect that allows farmers to repair their own equipment.
It’s the same issue. WE own it, sorta’, but THEY own it more, and so can control us, after a fashion.
I was reminded of the issue this morning when I woke up and my Android phone displayed a message: “Optimize your updated device. Click to get started.” My immediate reaction was %$!!$$!*##!! I may have to do it just to get rid of that damn message. And the result may be benign, but it’s the principle of the thing.
And THAT’s the deepest issue currently raised by AI: who controls what? It’s pretty obvious that the Big Boyz want to control as much as possible, notwithstanding the fact that both Microsoft and Facebook are jumping on the Open Source bandwagon. Regardless of what they want and how the politics evolves, we’re dealing with highly sophisticated technology, technology about which most of us are ignorant. Given that ignorance, how can we possibly exert control over the AI that’s being shoved our way willy-nilly? Moreover, it’s clear that AI technology can and will be very useful in educational settings.
Arizona State University just partnered with OpenAI (H/t Tyler Cowen).[1] Who controls what and for whom in that partnership? My guess is that, regardless of what the contracts say, we don’t know how things will unfold. Who has what power? OpenAI? ASU administration, faculty, and students? What about the Arizona governor, legislature, and voters? The concentration of power is greatest at OpenAI, not to mention the knowledge. To be sure, there are some faculty at ASU who are quite sophisticated about AI – I’m thinking particularly of Subbarao Kambhampati (కంభంపాటి సుబ్బారావు), an AI researcher who is immune to the hype and is quite active on Twitter, I mean X – but what role will they play in this partnership? And so on.
You see the problem, don’t you?
In this context, all this hype and blather about AI Doom is just a distraction. I’m reasonably certain that the Doomers are sincere in their anxiety, but when you zoom out and look at things from 20,000 feet, their actions take on a different cast. The way social systems behave is almost always different from what is intended by the individual human actors within those systems.
What’s going on?
ADDENDUM, Feb. 9, 2024: OpenAI is now working on agent software that would all but take over user devices and perform useful tasks. Take over?!!! Gary Marcus is skeptical:
Finally, let’s not forget about privacy. Such agents could (more or less by definition) have access to literally all of a user’s personal and professional information: every file, every keystroke, every password, every email, every text message, every location change, and every web search. After all, that’s what it means to take over a user’s device.
Even Orwell didn’t quite dream of that. Combine that with worries about security, and it’s all a colossal accident waiting to happen. Heaven forbid they should be allowed to run such software on Department of Defense computers. One screwup (and we all know LLMs are prone to screwups) and a LOT of people could die.
* * * * *
[1] From the article about ASU:
With the OpenAI partnership, ASU plans to build a personalized AI tutor for students, not only for certain courses, but also for study topics. STEM subjects are a focus and are “the make-or-break subjects for a lot of higher education,” Gonick said. The university will also use the tool in ASU’s largest course, Freshman Composition, to offer students writing help.
ASU also plans to use ChatGPT Enterprise to develop AI avatars as a “creative buddy” for studying certain subjects, like bots that can sing or write poetry about biology, for instance.
Gonick said ASU’s prompt engineering course has become one of the university’s most popular courses, not limited to engineering students. The access to ChatGPT Enterprise means students will no longer be limited by usage caps. He also said that after conversations with OpenAI’s leadership, he feels confident that the tool provides a “private walled-garden environment” that will safeguard student privacy and intellectual property.
OpenAI and ASU’s joint release specified that any prompts the ASU community inputs into ChatGPT “remain secure,” and that OpenAI “does not use this data for its training models.”