https://x.com/sebkrier/status/2018351274127962300?s=20
1. Existing models will keep improving, and they will continue to be trained with an eye to cost, efficiency, steerability, personality, and so on, as we already see today. I think it’s more obvious than ever that there is likely no convergence to the One Big Model. [...]
4. Here, there is still a lot to work out, and I expect high complementarity with human workers for at least the next decade. Roles will evolve: as you start doing less coding, your work looks more like technical product management. [...]
5. You just keep going up layers of abstraction, and humans continue steering complex multi-agent systems, until fixed costs bite. Part of the reason humans stay at the top of the chain is that many decisions are normative: about what you want to happen, where you want things to go, how you want to react to changes. These require inherently human inputs; there's no point in having an AI, however smart, decide them alone without eliciting more information about what the relevant humans prefer. Put differently: the telos of the whole system is the amalgamation of what users, consumers, and businesses want, and tracking whether you're actually achieving that requires human input. This is already the case today in gigantic, highly complex companies that make a thousand opaque decisions a minute.
6. Remember, this doesn’t violate the basic fact that market-coordinated economic activity is downstream of consumer and business demand. Capital isn’t some sort of independent force of the universe. What is being built depends on buyers/consumers that are ultimately human, even if occasionally intermediated by agents. The "AI decides everything" frame misses something fundamental about what economic and political systems are for. But as we go through these transitions, there are also costs or externalities (both pecuniary and non-pecuniary). Some people lose jobs. New industries cause unforeseen harms. Terence Tao has a great analogy: the abundance of food solved famine, but of course also led to harms like obesity. The solution is not to slow down abundance, but to develop the right norms, technologies, and laws to curb the excesses.
7. Accounts of full disempowerment assume democracy disappears, but I don't think all roads lead to autocracy. I don’t think ‘this time it’s different’. Growth and innovation have historically benefited from free trade and liberal democracy, and that will hold here too, given their impacts on investment, human capital, institutional quality, self-correction mechanisms, and the ensuing flywheel effects. [...]
8. As the world goes through these transitions, we will probably continue to see many commentators gloss over the vast benefits and improvements humanity will see. Progress in longevity, cured diseases, consumer welfare, massive reduction in poverty and famine, better education and so on. The arguments for market coordination over some sort of early-Soviet or Maoist collectivism apply even more in this world, not less. The world will generally become materially richer. [...]
9. If we allow sufficient deployment of technology, robots, AI and so on, while ensuring the supply of energy, housing, and other important inputs isn’t constrained to a strangling degree, then the production of many goods and services will go down in price. [...] In general I am more concerned with customer service operators in Bangalore than I am with upper middle class white-collar professions in the West. I think FDI [foreign direct investment] and aid will be critical if we want humanity to thrive.
10. But this doesn't justify regressive populist policies or a 'pause'. A pause wouldn't even be optimal if we were being maximally selfish; it is the equivalent of saying "poverty, misery, and illness should be preserved for longer, for the benefit of a particular group of workers at a particular moment in time." Opposing AI or technological progress is a particularly nasty version of degrowth: it kills people, it entrenches poverty, and it locks in all sorts of tragedies for the benefit of a comfortable elite who can easily thrive under the status quo. However, this does mean that ensuring the right welfare systems, democratic protections, ‘societal resilience’, public infrastructure, etc. is important, as many have repeatedly noted over time. Just because things net out positively doesn’t mean ignoring those who lose out in the short run is the best we can do. There’s so much work still to be done if you want to build a better world, and I think we desperately need new, better economists, scientists, sociologists, artists, and politicians more than ever. I have more faith in the zoomers than in some of my peers!
[Hmmmm.... I'm not so sure of 10. Don't know what it means.-BB]
12. In the future, I expect politics and governance to be an increasingly important component of people's lives: many will care deeply about how things are organised and managed at the local or national or international level. Personally, I think it’s fine if a large fraction don’t care much about those issues most of the time, since I don’t think there’s an obligation for everyone to have an opinion on everything, and that preference will likely be easy to satisfy. [...]
13. And I do think status games will continue, albeit in a much more diverse ecosystem of subcultures and geographies. But again: it always has been this way. Even today, plenty of people more interested in art have zero envy for techbro founder lifestyles, and conversely many engineers couldn't care less about being perceived as cultured. As people get richer, much of this will evolve too. [...]
14. Ultimately, AGI will bring about huge positive transformations for the world, many of which are hard to describe: could anyone at the dawn of the Industrial Revolution have told you about video games, eye surgery, deep sea diving, street tacos, and mRNA vaccines? [...]
15. Lastly, so much of the field uses "this time it's different" as a hand-wavey justification for flouting norms, justifying unusual political measures, ignoring fragile progress built on centuries of trial and error, and floating various yet-to-be-seen proposals for haphazard action (made confidently, despite the uncertainty one might expect to come with handling unprecedented phenomena). I think this is misguided: AGI will be huge, and of course it will affect everything around us; but in many ways it’s also not different, and as always, there's a lot to learn from history. Much still needs to be built, except that this time you will also have millions of agents by your side to make progress. 🚀