Wednesday, January 15, 2025

Henry Farrell: Biden moves to control global AI

Henry Farrell, America’s plan to control global AI, Programmable Mutter, Jan. 15, 2025.

The idea is to use export controls to restrict the sale and use of advanced semiconductors and AI models to achieve two U.S. policy goals. The first is its desire to keep the most advanced AI out of the grasp of China, for fear that China will use strong AI to undermine U.S. security. The second is its desire to allow some degree of continued access to semiconductors and AI in most countries, to mitigate the anticipated shrieks of protest from big U.S. firms that don’t want to see their export markets disappear.

Hence, this highly complex plan involves controlling access to the advanced semiconductors that are used to train advanced AI models, as well as the model ‘weights’ themselves. The plan continues to very sharply restrict China’s and some other countries’ access to highly advanced semiconductors [...] It allows a much more liberal regime of exports without much in the way of controls to a small group of ‘Tier 1’ countries - important allies and other friendlies such as Norway and Ireland. Finally, there is a large intermediary zone of other countries, including some traditional U.S. allies, that will be allowed access to U.S. semiconductors, but under complex restrictions.

The whole shebang “is intended to cement U.S. power over information technology over the longer term” and depends on “five distinct bets; two on technology, and three on politics.” The technology bets are on 1) scaling and 2) AGI. The political bets are on 3) the effectiveness of export controls, 4) organizational capacity, and 5) politics. I want to comment on 1 and 2 and give you a bit of Farrell on 5.

Scaling

The most straightforward bet behind this policy is that the “scaling hypothesis” is right. That is, (a) the more computing power is applied to training AI, the more powerful the resulting model will be, and (b) access to the most advanced parallel-processing semiconductors is essential to building cutting-edge AI models. If this is so, then the U.S. has a possible trump card. U.S.-based and U.S.-dependent companies like Nvidia and AMD, which design the cutting-edge semiconductors used for training AI, have a considerable advantage over their competitors. China and other U.S. rivals and adversaries have no equivalent producers, and are obliged to rely on the inferior chips that they can make themselves, or that the U.S. allows them access to.
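The scaling hypothesis can be made concrete with a toy sketch. Published scaling-law work finds that a model’s training loss falls roughly as a power law in training compute; the coefficients below are illustrative placeholders I’ve made up, not fitted values from any real model family.

```python
def loss(compute_flops, a=10.0, b=0.05):
    """Hypothetical training loss as a power law in compute.

    a and b are illustrative constants, not empirical fits:
    loss = a * compute^(-b).
    """
    return a * compute_flops ** -b

# Each 100x jump in compute buys a steady, predictable improvement.
for flops in (1e20, 1e22, 1e24):
    print(f"{flops:.0e} FLOPs -> loss {loss(flops):.2f}")
```

If something like this curve holds, whoever controls access to the biggest compute budgets controls access to the best models, which is the entire logic of the chokehold described below. If it doesn’t hold, the lever weakens.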

If this bet is right, then the U.S. indeed potentially possesses a chokehold that might allow it to shape the world’s AI system, selectively providing access to those countries and companies that it favors, while denying access to those it does not. Controlling the chips used for training, while restricting the export of AI weights, will allow it to shape what other countries do.

There is, however, some evidence suggesting that the relationship between chips and scaling is more complicated than the U.S. might like.

Farrell goes on to mention DeepSeek, a powerful Chinese LLM whose maker “has trained a frontier AI model without access to the most advanced semiconductors.” Beyond that, I just don’t think that scaling alone is the key to the kingdom. As Gary Marcus, Yann LeCun (just search on the names) and others have been arguing, we need new architectures.

AGI

As you know, I regard the term itself (artificial general intelligence) as all but meaningless. AGI’s about as real as the Holy Grail and likely springs from similar psycho-cultural desires.

Farrell notes:

One other belief, which is quite widespread among people in the U.S. national security debate as well as many in Silicon Valley, is that we are on the verge of real AGI - ‘artificial general intelligence.’ In other words, we are about to witness a moment where there will be a vast leap forward in the ability of AI to do things in the world, creating self reinforcing dynamics where those with strong AI are going to be capable of creating yet stronger AI and so on in a feedback loop. This then implies that short term AI superiority over the next couple of years might lead into a long term strategic advantage.

Farrell is skeptical:

Here, for example, Arvind Narayanan and Sayash Kapoor argue that we should be skeptical about the hype that is bubbling out right now from inside the big AI companies.

Industry leaders don’t have a good track record of predicting AI developments. … There are some reasons why we might want to give more weight to insiders’ claims, but also important reasons to give less weight to them. … there’s a huge and obvious reason why we should probably give less weight to their views, which is that they have an incentive to say things that are in their commercial interests, and have a track record of doing so.

There is a lot more in Narayanan and Kapoor’s article, about the specifics of what is happening right now, as we (perhaps) move from one model of AI development to another. I find their arguments compelling - your own mileage may of course vary.

Yes, great things will one day be possible, but not as long as the techbros keep leading us down the path of scaling up LLMs and related forms of deep learning. We need new architectures, and that is going to require some fundamental research, research that won’t happen as long as scaling sucks up all the resources: financial, technological, and intellectual.

Politics

None of this will happen if the Trump administration doesn’t want it to. And there are clearly Republicans who are listening to industry protests, and promising to do what they can to get the plan reversed. A lot of people are speculating that the plan is dead on arrival.

That may be premature. One plausible interpretation is that the Biden people are trying to create facts on the ground that will bolster China hawks in the incoming administration, who want strong technology restrictions, so that they have a greater chance of prevailing over the people who want to let technology rip. And that might perhaps work!

It isn’t just the foreign policy people who want sharp restrictions on China. It is also some important people in the AI debate. Pottinger is probably not going to be coming back in (he demonstrated Insufficient Loyalty to the Beloved Leader in the days surrounding January 6, 2021) but his co-author, Amodei, reflects a general hawkish turn among many people in Silicon Valley. [...]

I don’t feel particularly confident in making any predictions about what the Trump administration will do. I am not the person you ought to turn to for accurate gossip about who has influence among the people who are about to take power. But I don’t see any unambiguous signals (yet) that one side or the other has the upper hand in the internal arguments.

There’s much more at the link.
