Over at Marginal Revolution, Tyler Cowen has posted a paragraph from an interview with the mathematician Terence Tao:
With formalization projects, what we’ve noticed is that you can collaborate with people who don’t understand the entire mathematics of the entire project, but they understand one tiny little piece. It’s like any modern device. No single person can build a computer on their own, mine all the metals and refine them, and then create the hardware and the software. We have all these specialists, and we have a big logistics supply chain, and eventually we can create a smartphone or whatever. Right now, in a mathematical collaboration, everyone has to know pretty much all the mathematics, and that is a stumbling block, as [Scholze] mentioned. But with these formalizations, it is possible to compartmentalize and contribute to a project only knowing a piece of it. I think also we should start formalizing textbooks. If a textbook is formalized, you can create these very interactive textbooks, where you could describe the proof of a result in a very high-level sense, assuming lots of knowledge. But if there are steps that you don’t understand, you can expand them and go into details—all the way down the axioms if you want to. No one does this right now for textbooks because it’s too much work. But if you’re already formalizing it, the computer can create these interactive textbooks for you. It will make it easier for a mathematician in one field to start contributing to another because you can precisely specify subtasks of a big task that don’t require understanding everything.
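Tao's point about compartmentalization is easy to see in miniature. Here's a sketch in Lean 4 with Mathlib; the theorem names and toy statements are my own invention, not drawn from any actual formalization project. The statements are declared up front, each proof can be supplied by someone who understands nothing beyond their one lemma, and the proof checker guarantees that the pieces compose:

```lean
import Mathlib

-- Toy "project blueprint": the lead fixes the statements up front.
-- Each helper starts life as a `sorry` stub; a contributor can close
-- it knowing only that one statement, and Lean checks the pieces fit.

-- Contributor 1's piece: the square of an integer is nonnegative.
theorem helper₁ (a : ℤ) : 0 ≤ a * a :=
  mul_self_nonneg a

-- Contributor 2's piece: a sum of two nonnegative terms is nonnegative.
theorem helper₂ {a b : ℤ} (ha : 0 ≤ a * a) (hb : 0 ≤ b * b) :
    0 ≤ a * a + b * b :=
  add_nonneg ha hb

-- Assembled by the project lead, without reproving anything.
theorem goal (a b : ℤ) : 0 ≤ a * a + b * b :=
  helper₂ (helper₁ a) (helper₁ b)
```

In a real project the helpers would begin as `sorry` stubs that contributors claim and close one at a time; that stub-and-claim workflow is what lets someone work on a single node of the proof without holding the whole argument in their head.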
One of the regulars at Marginal Revolution, rayward, posted this comment:
Less collaboration? "It (AI) will make it easier for a mathematician in one field to start contributing to another because you can precisely specify subtasks of a big task that don’t require understanding everything."
Lawyers (I'm one) know a little about a lot, not a lot about a little; thus, they are dependent on collaboration. Over my career, many of the projects referred to me came from other lawyers, and vice versa. In the process of collaborating, the other lawyers learn a little from me and I learn a little from them, and hopefully the client is better for it.
I'm no economist (as Geithner liked to remind people), but my impression is that they work in silos, intentionally insulating themselves from outside influences: economics is very much driven by a certain way of defining and addressing a problem, reflected in the various "schools" of economics (such as the Austrian School or the Keynesian School). Collaboration in this setting would be equivalent to MTG collaborating with AOC: it ain't happening. Sure, law at the highest level (e.g., the Supreme Court) is ideological, but in the real world of solving real problems for actual clients, it's not.
So which is it: will AI make economists (and others) more or less likely to collaborate?
Here’s how I replied to rayward:
Interesting. And that's the issue that this project raises for me: What kinds of projects & enterprises can be collaborative and which cannot? As I recall, the Higgs boson paper from the Large Hadron Collider had over a thousand authors. That's a very large-scale enterprise. In contrast, just about everything in literary criticism, the discipline I'm trained in, is done by a single person. Some disciplines lend themselves to collaboration, some do not. I suspect that AI will increase the range of collaborative work. And that's where we get the real superintelligence.
What do we know about the characteristics of projects & enterprises that make them amenable to collaboration or resistant to it?
Below the asterisks I have appended a passage from a recent post, How smart could an A.I. be? Intelligence in a network of human and machine agents.
* * * * *
So, let us think in terms of problem-solving by networks of specialized solvers. Some of those solvers are human, but some will be machines. Such man-machine problem-solving networks are ubiquitous in the modern world, and they solve problems well beyond the capacity of individual humans. They aren’t what most AI experts have in mind when they talk about superintelligence, but it’s not clear to me that we can simply ignore such networks in these discussions. They are, after all, how many very important problems get solved. Henry Farrell and Cosma Shalizi have made this argument in The Economist (here’s an ungated and somewhat longer version, and here as well, where it is followed by a brief discussion).
I assume that such man-machine networks will proliferate in the future. Some of the nodes in these networks will be machines and some will be humans. The question of AGI then becomes:
Will there ever come a time when the tasks of every node in such problem-solving networks can be executed by a computer system that is as capable as any human?
Note that it is possible that some tasks will require manipulation of the physical world that is of such a nature that humans are better at it than any machine. Would we say that the existence of such nodes is evidence only of physical skill, but not of intelligence?
The question of machine superintelligence would then become:
Will there ever come a time when we have problem-solving networks where there exists at least one node that is assigned to a non-routine task, a creative task, if you will, that only a computer can perform?
That’s an interesting question. I specify non-routine tasks because we have all kinds of computing systems that are more effective at various tasks than humans are, from simple arithmetic calculations to such things as predicting the structure of a protein from its amino acid sequence. I fully expect that more and more systems will evolve that are capable of solving such sophisticated, but ultimately routine, problems. But it’s not at all obvious to me that computational systems will eventually usurp all problem-solving tasks.