Jon Stokes, When software writes the software that eats the world: The muse learns to code, doxa, May 19, 2021.
Here's the opening of his latest newsletter:
Here’s a thing that comes up often enough in my conversations with AI/ML practitioners and even in some specialist YouTube videos I’ve seen, but in my experience seems to be news to people who aren’t deep in the weeds of this field: an artificial general intelligence (AGI) is an AI that has learned to code.
As a lifelong coder, I find it intuitive that an AGI is a machine that can act in the world by writing code at least as well as a human — intuitive because I’ve built everything from toy games, to back-office finance platforms, to restaurant reservation management systems with code. Code feels infinitely flexible to me — like the kind of thing that, once you master it, can do anything worth doing.
But as someone who has been in and around the startup scene as a programmer, I find I’m skeptical of this programmer intuition. I have tried a few times to eat some tiny little pieces of the world with code — both unsuccessfully and successfully — and I’ve realized that different parts of the software-as-world-eating process matter in different ways. More importantly, I’m aware that the most critical parts of that process are fundamentally, philosophically not amenable to automation, no matter how sophisticated.
I don’t think I’m necessarily an AGI skeptic — in fact, I’ve grown less skeptical of late. Rather, I just don’t think that “code that writes better code that writes even better code...” is as obvious a route to AGI as is claimed. If I’m skeptical of anything, it’s of the singularity. But more on the “s”-word, later.
Here’s what’s in this newsletter:
- IBM has just released a large dataset that aims to accelerate the development of an AI that can code.
- It’s thought in some circles that the moment we create AI that can write code to build an AI that’s better than itself, we will have essentially created a kind of god — it’ll be a far bigger deal than even humanity’s taming of fire or electricity. No one knows what comes after that.
- The above point may be intuitive for programmers, but as a student (and critic) of “AI ethics” discourse, I do think the ethicists have pointed out something that poses a fundamental philosophical problem for this whole idea of recursively self-improving AIs. Specifically, the selection of an objective function in machine learning often forces the model builder to effectively take a side in some fundamentally irreconcilable conflict of legitimate values. If this is already the case for today’s limited models in narrow domains like image classification, imagine how much more true it is when the objective function has to express a view of what is “better” vs. “worse” in the domain of general intelligence.
- To translate my philosophical objections into the practical language of coding and startups: as a startup programmer, I’ve found that the hardest, most expensive problems in software are all fundamentally social and human-centric. So I’m skeptical that a self-coding, self-improving AI will be anything more than a massively deflationary technology along the lines of Moore’s Law. Massively disruptive and important, to be sure, but not necessarily godlike, or even particularly “general.”
I find that last item rather interesting and provocative. In his discussion of the difficulties involved in writing good software, he notes: "I added the most value as a programmer in the part of the coding process where I was first helping the team to understand the business problem we were trying to solve, and then subsequently framing that problem in such a way that we could know when we had solved it with some code." That is consistent with what I've been told by the best and most experienced programmers I know. One even developed a specific ethnographic method, if you will, for investigating the business problem by interviewing the people currently tasked with solving it.
This is where software meets the world. Do we really expect software that is challenged by common-sense reasoning to be able to conduct a competent ethnography of the business environment?
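Stokes' point about objective functions is worth making concrete. Here is a minimal, hypothetical sketch of my own (plain NumPy, not anything from the newsletter): two versions of the same binary cross-entropy loss differ only in how heavily they penalize a missed positive versus a false alarm, and choosing between them is a judgment about which kind of error matters more, not a purely technical decision.

```python
# Hypothetical illustration: the relative weight on false negatives vs.
# false positives is not derivable from the data; it encodes a human
# judgment about which error is worse.
import numpy as np

def weighted_log_loss(y_true, y_prob, fn_weight=1.0, fp_weight=1.0):
    """Binary cross-entropy with separate weights for the two error types."""
    eps = 1e-12
    y_prob = np.clip(y_prob, eps, 1 - eps)
    # fn_weight scales the penalty for under-predicting a true positive;
    # fp_weight scales the penalty for over-predicting a true negative.
    loss = -(fn_weight * y_true * np.log(y_prob)
             + fp_weight * (1 - y_true) * np.log(1 - y_prob))
    return loss.mean()

y_true = np.array([1, 0, 1, 0])
y_prob = np.array([0.3, 0.2, 0.8, 0.6])

# "Neutral" objective: both error types weighted equally.
print(weighted_log_loss(y_true, y_prob))                 # ~0.64
# Objective that treats a missed positive as five times worse. A different
# model can win under this criterion, and the factor of five is a choice
# the model builder makes on someone's behalf.
print(weighted_log_loss(y_true, y_prob, fn_weight=5.0))  # ~2.07
```

Nothing in the training data can tell you whether five is the right factor; someone decides, and every downstream "improvement" is measured against that decision. That, I take it, is the crux of Stokes' objection to treating recursive self-improvement as an obvious path to AGI.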