Wednesday, May 26, 2021

Henry Farrell on democracy [it's messy]

Sean Carroll interviews Henry Farrell:

Democracy as problem solving:

0:06:11.0 Sean Carroll: ... So let’s start with this idea of democracy and other kinds of institutions in a more or less theoretical sense. Probably the thing that is gonna guide the conversation the most is the paper that you wrote with Cosma Shalizi on cognitive democracy. And I take it that the idea is to think about democracy as a way of making decisions and to compare it with other institutional ways of making decisions. Is that fair?

0:06:48.8 Henry Farrell: That’s right. So we can think about markets, we can think about democracy, we can think about hierarchy, we can think about all of these different modes of problem-solving. And what I mean by problem-solving here is something like the following: imagine that there are a number of us and we have some shared problem in common that we would like to solve. But either, A, it is a complex problem whose solution is not immediately and readily apparent; or, B, we have strong disagreements about how to solve it. I think that is where we can start to get some traction on which modes of governance, whether these be markets, democracy, or hierarchy, are better or worse at solving individual problems.

0:07:30.9 HF: And so, the argument that Cosma and I made there, I think we would probably modify it some; I think we’re a little bit too enthusiastic about democracy as against its competitors in that piece. But the argument that we make is that the ability of democracy to solve problems is underestimated, because people don’t pay attention to the ways in which democracy forces people, ideally, with very different perspectives and very different wants, to work together. And sometimes, from the very diversity of our goals and our wants, information emerges, and the democratic process works best when it is actually able to harness that process and turn that information into something useful and actionable to help us solve problems.

Democracy is not an optimization problem & how we find our goals:

0:17:18.0 HF: … I was listening to a podcast a couple of weeks ago. This was the Quanta Podcast with Steven Strogatz, and he was interviewing Moon Duchin, who is a mathematician at Tufts. And she said, and I think this is absolutely right, that democracy is not an optimisation problem. And this is, I think, a temptation that a lot of people with engineering or mathematical backgrounds have: to think, in a certain sense, “Well, why is it that people disagree? If we could just come up with some sweet engineering solution, everybody would realise it, and we would find some kind of optimal solution”. There are a number of reasons why this is not true.

0:17:58.9 HF: One of those is what you’ve already touched upon, which is that we often figure out what our goals are in the process of actually seeking after them. And this is an argument that political theorists such as John Dewey have made at length. I think that the best articulation of this is a wonderful, albeit, I think, also flawed book that Dewey wrote back in the 1920s called The Public and Its Problems, where he conceives of democracy as effectively a means of trying to figure out these broader problems that we have and to… But in the process of discovery, which involves both ordinary members of the public and experts. Ordinary members of the public understand how the problem affects them in their lives, and experts perhaps understand the more subtle causal chains that mean that…

0:18:47.1 HF: The public has a shared problem. So first of all, a public identifies itself, and then the public tries to create the means through which it can actually address the problem. But the implication of all of this, and Dewey’s a pragmatist, is that this is going to be a never-ending process of discovery where, in trying to solve the problems, the public will, of course, figure out that some of its goals are appropriate and that some of its goals widely misconceive the nature of the problem, and so there’s going to be an endless process of revision, and that is something that is crucial and important.

0:19:21.1 HF: The other part of this, which is something that Dewey, I think, is bad at, is understanding how it is that people in a democratic society have very different goals. We are tossed together in American society, and I think this is part of the issue that we have at the moment with polarisation, with people who have very, very different goals, very different understandings of what the shared outcome ought to be, what our goals ought to be, what kinds of values ought to guide those goals. And so democracy then becomes a process of group warfare, to some degree, when it works badly, or of group accommodation, where you work together and you sometimes figure out messy and painful and agonising solutions, which don’t please anybody, but which, nonetheless, sort of allow you to continue on together in relative peace while addressing some of the problems that everybody can agree are problems.

0:20:15.9 HF: And so this is a really messy process, but it also is not an optimisation process; this is much more like a… So if you think about it in slightly more abstract terms, you’re searching across a rugged solution space where you do not know what the solutions are.
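
A toy way to picture that “rugged solution space” point in code (my own sketch, not anything from the Farrell/Shalizi paper; the landscape and every number below are invented): a lone hill-climber tends to get stuck on a nearby local peak, while searchers starting from many diverse points usually end up on a better one, which is roughly the sense in which diverse perspectives help the search.

    # Toy illustration of searching a rugged landscape (invented numbers).
    # One searcher hill-climbs from a single starting point; twenty searchers
    # with diverse starting points usually find a better peak between them.
    import random

    random.seed(0)
    N = 12  # candidate solutions are length-12 bit strings

    # A "rugged" landscape: a solution's value is a sum of random contributions
    # from overlapping pairs of bits, so near-identical solutions can have very
    # different values.
    pair_value = {(i, a, b): random.random()
                  for i in range(N) for a in (0, 1) for b in (0, 1)}

    def value(sol):
        return sum(pair_value[(i, sol[i], sol[(i + 1) % N])] for i in range(N))

    def hill_climb(sol):
        # Flip one bit at a time, keeping any change that improves the value.
        while True:
            neighbours = [sol[:i] + [1 - sol[i]] + sol[i + 1:] for i in range(N)]
            best = max(neighbours, key=value)
            if value(best) <= value(sol):
                return sol  # stuck on a local peak
            sol = best

    lone = hill_climb([0] * N)  # one searcher, one starting perspective
    diverse = max((hill_climb([random.randint(0, 1) for _ in range(N)])
                   for _ in range(20)), key=value)

    print("single starting point:", round(value(lone), 3))
    print("20 diverse starts:    ", round(value(diverse), 3))  # usually the better peak

The diverse searchers are not individually smarter; they simply cover more of the landscape, which is the intuition behind the dynamic-advantage argument in the next excerpt.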

Democracy's dynamic advantage:

0:22:55.3 HF: … That if you think about how democracy works versus, for example, systems such as autocracy, you can think about them as drawing upon possible solutions to problems that a society faces. And here, again, I’m abstracting away all of these difficult and complex problems about, “Do we agree what a solution is? Do we even agree on what a problem is?” etcetera, etcetera. But if you start from that kind of metaphor, then you can reasonably see that some solutions, which are likely to pass muster in a democratic space because they are to the benefit of the collective majority, are probably going to get blocked in an autocratic situation, or in a democracy where there are extreme power disparities, because some of these solutions, even if they are overall beneficial for the society, are going to be uncongenial to the powerful minority, the elite, within that society.

0:23:50.0 HF: And so what that suggests, then, is that under reasonable circumstances we can say that democracy is likely to have a dynamic advantage vis-à-vis totalitarian systems in its ability to search this landscape for possible solutions to problems that pop up. And we think about this primarily in terms of problems of institutional change, finding new rules, because there’s a literature there that we want to talk to. And one could also say that more equal democracies, ipso facto, and holding all else equal, are probably going to be better at doing this, at searching for good solutions, than less equal democracies, precisely because it is less likely that elites are going to be able to block these solutions from being adopted in a more general way.

Markets:

0:32:48.0 HF: And more or less, the argument he’s making is something like the following: you have diffuse knowledge about the world, I have diffuse knowledge about the world, and there isn’t any real way for us to be able to share this in a way that helps us to solve problems, but we can rely upon the price mechanism very often to do this. You may be an incredibly good grower of tomatoes. I may be an incredibly good maker of tomato sauce. And the way in which we figure out, for example, which kinds of tomatoes to grow and what kinds of sauces to make is by using the price mechanism as a summary statistic of all of these complex relationships of production, and hence, in a sense, the price mechanism becomes this extraordinary means of capturing signal out of a set of latent processes that are very, very… That are fundamentally invisible to human beings, or at least inarticulable by human beings. It’s very, very hard to explain, without showing or doing, how it is to grow tomatoes.
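
A minimal sketch of that price-as-summary-statistic idea (my own toy example, not anything from the interview; the supply and demand numbers are invented): the grower and the sauce-maker each hold private knowledge the other never sees, and a price that adjusts until offers meet wants carries the only signal either of them needs.

    # Toy Hayek illustration (invented numbers): each side's knowledge stays
    # private; only the price is shared, and it adjusts until the quantity the
    # grower offers matches the quantity the sauce-maker wants.
    def grower_supply(price, hidden_cost=2.0):
        # Tomatoes offered; hidden_cost is the grower's private knowledge.
        return max(0.0, 10.0 * (price - hidden_cost))

    def saucemaker_demand(price, hidden_value=8.0):
        # Tomatoes wanted; hidden_value is the sauce-maker's private knowledge.
        return max(0.0, 10.0 * (hidden_value - price))

    price = 1.0
    for _ in range(2000):
        gap = saucemaker_demand(price) - grower_supply(price)
        if abs(gap) < 1e-6:
            break
        price += 0.001 * gap  # excess demand nudges the price up, excess supply down

    print(f"price settles near {price:.2f}")  # about 5.0, between cost 2 and value 8
    # Neither side ever saw the other's numbers; the price carried the signal.

The caveat Farrell goes on to make is that this works only for the kinds of problems a price can actually summarise.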

0:33:52.6 HF: So what we’re trying to do is argue, in a sense, that Hayek has a real insight, and there are incredible things that markets can do. And also, and this is a point that Lindblom makes, you don’t ever get democracies without markets; there are no historical examples of democracies which have not had markets going together with them. So there is some plausible, complex causal relationship between the two. But there are certain things that markets are going to be terrible at doing, and these, very often, are going to be those instances where human interaction, where human verbal exchange and reasoning and the giving of reasons, can help us to understand how politics works.

Machine learning and high tech modernism:

0:56:16.0 HF: Okay. So I think the argument is this… Again, as I say, I try to work with co-authors who are brighter than me, and Marian very definitely is. These are applied versions of her ideas, without a huge amount of originality on my part. But more or less, we have this notion that we call “high-tech modernism”, and the idea behind it is that if you think about how machine learning works, it really is a process of categorisation, a process of classification, whether it’s supervised (primarily) or unsupervised machine learning. What you’re trying to do is take a dataset and sort it out into some kind of a classification scheme, which can then allow you to figure out stuff, to identify relationships that you might not have identified otherwise. And so if you think about it from that perspective, you immediately see that there is a very, very strong analogy to bureaucracy, because that is exactly what traditional bureaucracies have done.
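
A minimal sketch of that “machine learning as classification” point (my own toy example, not the authors’ code; the records, categories, and numbers are all invented): take some labelled records, boil each category down to a crude summary, and then sort new records into whichever category they sit closest to.

    # Toy classification sketch (invented data): labelled records are reduced to
    # a crude scheme that then sorts new people into categories.
    # Each record: (income in $k, late payments last year) -> assigned category.
    labelled = [
        ((90, 0), "prime"), ((75, 1), "prime"), ((80, 0), "prime"),
        ((30, 4), "subprime"), ((25, 6), "subprime"), ((40, 5), "subprime"),
    ]

    # "Training": reduce each category to the average of its examples.
    def centroid(category):
        rows = [x for x, label in labelled if label == category]
        return tuple(sum(col) / len(rows) for col in zip(*rows))

    centroids = {c: centroid(c) for c in ("prime", "subprime")}

    # "Classification": a new person goes into whichever category they sit
    # closest to; this is the step that decides who gets which kind of offer.
    def classify(person):
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda c: dist(person, centroids[c]))

    print(classify((70, 1)))   # -> "prime"
    print(classify((28, 5)))   # -> "subprime"

The sorting itself is mechanical; the politics sits in which features and labels go into the scheme, which is the continuity with bureaucratic classification that the next excerpts draw out.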

0:57:16.9 HF: Traditional bureaucracies have always looked to classify, to figure out if you are, I don’t know, a deviant Irishman, or an upstanding loyal British citizen, or what have you. And they have done this, more or less efficiently, with paper files, and there’s something very, very similar going on here. And many of the fights that we see happening around machine learning, about machine learning and equity, machine learning and racial bias, are in many ways new versions of the fights that we have been having since forever about the way, for example, in which redlining was used by financial institutions to effectively make it impossible for people in entire neighbourhoods to get loans. So we see not the same debate, because of course as the technologies change, things change about how the technologies are applied and their social impact, but there’s a strong relationship there.

0:58:19.7 HF: The key difference, I think, between machine learning and the kinds of stuff that James Scott was talking about, is that when you think about James Scott and the way that he talks about these cities being created to map the bureaucratic categories onto the world around them, the process with machine learning is much more subtle, because we are all carrying around with us these little devices in our pockets which are endlessly feeding information to the mothership, allowing the mothership to classify us and categorise us in different ways: cookies, whatever new thing Google is coming up with to replace cookies. All of these are intended to categorise and classify us, and they have these very substantial consequences; they guide you into being the kind of person who gets the sweet mortgage offers versus the kind of person who gets the kinds of predatory loans that we see advertised on the side of subway cars. But these processes of guiding tend to be much more invisible than the more direct forms of repression that we’ve seen in previous eras.

0:59:22.7 HF: And so then the question, and this gets back to what you’re saying about the ways in which hierarchy can go wrong, the question is, what kinds of feedback loops are there through which you can correct some of these problems? If you think about hierarchy, say for example authoritarian regimes, authoritarian regimes are notoriously terrible at taking feedback. They try to monitor what their citizens are doing and thinking because they want to stay in power, but when the feedback tells them things that they don’t really want to know, it’s very hard for it to penetrate through the system. And one of the interesting questions with machine learning is, if you have these feedback loops which are largely automated, and which provide very little room for actual people to insert themselves into the process and say, “Hey, this isn’t actually what we want”, how do you change that? How do you get to a world in which machine learning is democratised? That’s a huge problem.

There's more at the link.
