Henry Farrell has a new post at Crooked Timber: Dr. Pangloss’s Panopticon (Feb. 27, 2024). He's replying to Noah Smith's negative review of Acemoglu and Johnson, Power and Progress.
On power:
What Acemoglu and Johnson are saying is something quite different than what Noah depicts them as saying. For sure, they acknowledge that persuasion has some stochasticity. But they stress that it is not a series of haphazard accidents. Instead, under their argument, there are some kinds of people who are systematically more likely to succeed in getting their views listened to than other kinds of people. This asymmetry can reasonably be considered an asymmetry of power.
Under this definition, power is a kind of social influence. Again, it is completely true that it is extremely difficult to isolate social influence from other factors, or to prove that social influence absolutely caused this, that, or the other thing. But if Noah himself does not believe in the importance and value of social influence, then why does he get up in the morning and fire up his keyboard to go out and influence people, and why do people support his living by reading him?
I imagine Noah would concede that social influence is a real thing! And if he were actually put to it, I think that he would also have to agree to a very plausible corollary: that on average he, Noah Smith, exerts more social influence than the modal punter argufying on the Internet. Lots of people pay to receive his newsletter; lots of other people receive it for free. That means that he is, under a very reasonable definition, more powerful than those other people. He is, on average, more capable of persuading large numbers of people of his beliefs than the modal reply-guy is going to be.
This understanding of power is neither purely semantic nor empirically useless. Again, it may be really difficult to prove that Noah’s social influence has specific causal consequences in a specific instance. But the counter-hypothesis – that Noah’s ability to change minds, given his umpteen followers, is the same as the modal Twitter reply guy – is absurd. Occasionally, random people on the Internet can be temporarily enormously influential. Sometimes, super prominent people aren’t particularly successful at getting their ideas to spread. But on average, the latter kind of people will have more influence than the former. We can reasonably anticipate that people with lots of clout (whether measured by absolute numbers of followers, numbers of elite followers, bridging position between sparsely connected communities or whatever – there are different, plausible measures of influence and lively empirical debates about which matters when) will on average be substantially more influential than those with little or none. This means, for example, that it will be very difficult for ideas or beliefs to spread if they are disliked by the highly connected elite.
Now in fairness to Noah, Acemoglu and Johnson don’t help their case by using a wishy-washy seeming term like “persuasion.” But if you think about “persuasion” as some combination of “social influence” and “agenda control,” you will get the empirical point they are trying to make.
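Farrell's parenthetical about measuring clout (raw follower counts, numbers of elite followers, bridging position between sparsely connected communities) maps directly onto standard centrality measures from network analysis. Here's a minimal sketch of the idea, not anything from Farrell's post: it assumes Python with networkx, and the toy follower graph and names are invented for illustration.

```python
# A toy illustration (my own, not Farrell's) of two of the influence
# measures he mentions: follower counts vs. bridging position.
import networkx as nx

# Directed edge (u, v) means "u follows v". Names are hypothetical.
follows = [
    ("alice", "noah"), ("bob", "noah"), ("carol", "noah"),
    ("dave", "noah"), ("erin", "noah"),      # noah: a high-follower broadcaster
    ("erin", "frank"), ("noah", "frank"),    # frank: few followers, but he
    ("frank", "grace"),                      # bridges two otherwise
    ("grace", "heidi"), ("heidi", "grace"),  # disconnected clusters
]
G = nx.DiGraph(follows)

# "Absolute numbers of followers" -> in-degree centrality.
followers = nx.in_degree_centrality(G)

# "Bridging position between sparsely connected communities" ->
# betweenness centrality on the undirected projection.
bridging = nx.betweenness_centrality(G.to_undirected())

for node in sorted(G, key=followers.get, reverse=True):
    print(f"{node:6s} followers={followers[node]:.2f} bridging={bridging[node]:.2f}")
```

In-degree picks out the Noah-style broadcaster; betweenness picks out the broker sitting between clusters that barely touch otherwise. The two rankings can diverge sharply, which is exactly why, as Farrell notes, there are lively empirical debates about which measure matters when.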
Core claims:
Acemoglu and Johnson’s core claims, as I read them, are:
- That the debate about technology is dominated by techno-optimists [they actually write this before Andreessen’s ludicrous “techno-optimist manifesto” but they anticipate all its major points].
- That this dominance can be traced back to the social influence and agenda setting power of a narrow elite of mostly very rich tech people, who have a lot of skin in the game.
- That their influence, if left unchecked, will lead to a trajectory of technological development in which aforementioned very rich tech people likely get even richer, but where things become increasingly not-so-great for everyone else.
- That the best way to find better and different technology trajectories is to build on more diverse perspectives, opinions and interests than those of the self-appointed tech elite, through democracy and countervailing power.
Since I more or less endorse all these claims (I would slightly qualify Claim 1 to emphasize mutually reinforcing pathologies of tech optimism and tech pessimism), I think that Power and Progress is a really good book, in ways that you won't understand if you rely only on Noah's summary of it (I note that this book and my own with Abe Newman are both shortlisted for a very nice prize, but that is neither here nor there in my opinion of it). I haven't read another book that lays out this broad line of argument so clearly or so well. And it is a very important line of argument that is mostly missing from current debates. Noah speculates that the book hasn't gotten much attention because it is lost amidst the multitudes of tech-pessimistic accounts. My speculation is that it has gotten less attention than it deserves because reviewers and readers don't know quite how to categorize it, given that it approaches the issues from an unexpected slant.
The panopticon:
The panopticon may indeed have efficiency benefits. People can get away with far less slacking if it works as advertised. But it also comes with profound costs to human freedom. And the technologies that are at the heart of the book's argument – machine learning and related algorithms – bear a strong and unfortunate resemblance to Bentham's panopticon. They, too, enable automated surveillance at scale, perhaps making hierarchy and intrusive surveillance much, much easier and cheaper than they used to be. As Acemoglu and Johnson note:
The situation is similarly dire for workers when new technologies focus on surveillance, as Jeremy Bentham’s panopticon intended. Better monitoring of workers may lead to some small improvements in productivity, but its main function is to extract more effort from workers and sometimes also reduce their pay.
This is, I think, why Acemoglu and Johnson worry that machine learning might immiserate billions, another claim that Noah finds puzzling. Acemoglu and Johnson fear that it will not only remake the bargain between capital and labour, but also radically empower authoritarians (I think they are partly wrong on this; authoritarian machine learning could instead lead to a different class of disasters: pick yer poison).
The post is longish, but excellent. I made the following comment about power in Silicon Valley:
Excellent, Henry, at least as far as I got. I made it about halfway through before I just had to make a comment. I'm thinking about the piece you and Cosma Shalizi did about the culture of Doomerism, which is very much a Silicon Valley phenomenon. And it is also very relevant to any discussion of ideas, influence, persuasion, and POWER. I was shocked when such a mainstream magazine as Time ran a (crazy-ass) op-ed by Eliezer Yudkowsky.
I am reasonably familiar with his work. I have several times attempted to read a long piece he published in 2007, Levels of Organization in General Intelligence. I've been unable to finish it. Why? Because it's not very good. It's the kind of thing a really bright and creative sophomore does after reading a lot of stuff and deciding to write it up. You read it and think: the guy's bright; if he gets some discipline, he could do some very good work. Well, 2007 was a while ago, and as far as I can tell he still doesn't have much intellectual discipline, and he certainly doesn't have deep insight into current AI or into human intelligence. But Time still gave him scarce space in its widely read pages.
That's power. Now, as far as I know, he hasn't been able to place his ideas in such a venue again. But even once is pretty damn good.
How'd that come about? Well, there's a story, one I don't know in detail. But the story certainly involves money from Silicon Valley billionaires. He's been funded by Elon Musk and, I believe, by Peter Thiel (who has since become disillusioned with some of those folks). There's a lot of money from tech billionaires coming into and through the world centered on LessWrong (which, BTW, has the best community-style user interface I've seen).
On technological trajectories, Acemoglu is one of the authors of a 2021 report out of Harvard, How AI Fails Us. Here's the abstract:
The dominant vision of artificial intelligence imagines a future of large-scale autonomous systems outperforming humans in an increasing range of fields. This “actually existing AI” vision misconstrues intelligence as autonomous rather than social and relational. It is both unproductive and dangerous, optimizing for artificial metrics of human replication rather than for systemic augmentation, and tending to concentrate power, resources, and decision-making in an engineering elite. Alternative visions based on participating in and augmenting human creativity and cooperation have a long history and underlie many celebrated digital technologies such as personal computers and the internet. Researchers and funders should redirect focus from centralized autonomous general intelligence to a plurality of established and emerging approaches that extend cooperative and augmentative traditions as seen in successes such as Taiwan’s digital democracy project and collective intelligence platforms like Wikipedia. We conclude with a concrete set of recommendations and a survey of alternative traditions.

That is much better than the view that dominates AI development today. Moreover, I believe it to be more technically feasible. But, despite the Harvard imprimatur, it doesn't have nearly as much power behind it as the Silicon Valley view of "a future of large-scale autonomous systems outperforming humans in an increasing range of fields."