Wednesday, March 19, 2025

Dan Davies: cybernetic abundance and its limits


the age of half-diminished expectations


From the end of the article:

The problem of abundanceism, restated in this form, is simply that the liberal regulatory state isn’t adequate to the task.

Specifically, the problem is that, for the reasons noted above, as things grow and become more complex, a greater proportion of their energy and resources has to be devoted to purely internal and administrative matters. The regulatory model needs, ideally, to grow alongside the system that it’s modelling, so that it’s still capable of representing the complexity of the system.

If it doesn’t, then the people who still have the job of stopping things getting in the way of each other will reorganise, in order to try to continue to do their job with an inadequate model. One of the most effective organisational techniques to do this is to replace, as much as possible, “how and why” questions with “yes or no” questions. The “planning system” gradually stops being one in which the word “planning” has something close to its ordinary meaning, and moves toward becoming a “permissioning authority”.

As the resource imbalance gets bigger, another organisational/cognitive technique which helps reduce the load even more is to adopt something like the Hippocratic principle. It’s much easier to turn a “no” into a “yes” than vice versa, and part of the cost of building something is that it constrains what can be built in the future. So, the greater your uncertainty about the future (perhaps because you don’t have the capacity to think about it any more), the more likely you are to be worried about closing off options.

Where I think I end up with this is in a view that the battle between “builders” and “blockers” is mischaracterised. These are two wrong answers to a problem which is fundamentally caused by the imbalance between the complexity of the system and the capacity to manage it. Neither builders, nor blockers, but planners.

This suggests that AI systems that can really do planning, as opposed to the pseudo-planning of LLMs and their extensions, will be enormously valuable. Thus it’s almost tragic that Silicon Valley is unlikely to produce such systems. Why? Because they appear to have bought the sunk-costs fallacy hook, line, and sinker, and so are committed to the current repertoire of techniques.
