Peter Coy, "We're Unprepared for the A.I. Gold Rush," The New York Times, Feb. 22, 2023.
Opening paragraphs:
I think I know why artificial intelligence is breaking our all-too-human brains. It’s coming at us too fast. We don’t understand what’s happening inside the black boxes of A.I., and what we don’t understand, we understandably fear. Ordinarily we count on lawmakers and regulators to look out for our interests, but they can’t keep up with the rapid advances in A.I., either.
Even the scientists, engineers and coders at the frontiers of A.I. research appear to be improvising. Early this month, Brad Smith, the vice chair and president of Microsoft, wrote a blog post describing the surprise of company leaders and responsible A.I. experts last summer when they got their hands on a version of what the world now knows as ChatGPT. They realized that “A.I. developments we had expected around 2033 would arrive in 2023 instead,” he wrote.
There are two potential reactions. One is to slam on the brakes before artificial intelligence subverts national security using deep fakes, persuades us to abandon our spouses, or sucks up all the resources of the universe to make, say, paper clips (a scenario some people actually worry about). The opposite reaction is to encourage the developers to forge ahead, dealing with problems as they arise.
A federal response:
The White House’s Office of Science and Technology Policy came out last year with a more pointed blueprint for an A.I. Bill of Rights that, while nonbinding, contains some intriguing concepts, such as, “You should be able to opt out from automated systems in favor of a human alternative, where appropriate.”
In Congress, the House has an A.I. Caucus with members from both sides of the aisle, including several with tech skills.
Gold rush!
One risk is that the race to cash in on artificial intelligence will lead profit-minded practitioners to drop their scruples like excess baggage. Another, of course, is that quite apart from business, bad actors will weaponize A.I. Actually, that’s already happening. Smith, the Microsoft president, wrote in his blog last month that the three leading A.I. research groups are OpenAI/Microsoft, Google’s DeepMind and the Beijing Academy of Artificial Intelligence. Which means that regulating A.I. for the public good has to be an international project.
Idea: Let’s do a remake of Deadwood, but set it in Silicon Valley in the current era. Who’s the Seth Bullock character? Al Swearengen? “Doc” Cochran? Trixie? I kinda like Elon Musk for George Hearst. But perhaps Larry Ellison. Joanie Stubbs? Mr. Wu? You get the idea.
Microsoft Is Sacrificing Its Ethical Principles to Win the A.I. Race https://t.co/GYyZ21aXMZ
— Bill Benzon, BAM! Bootstrapping Artificial Minds (@bbenzon) February 23, 2023
Surprise surprise! Money takes all . . .