Malcolm Murray, "The Shifting Nature of AI Risk," 3 Quarks Daily, July 22, 2024.
Second, and more importantly, much of the focus in AI is currently on building AI agents. As comments from Sam Altman and Andrew Ng make clear, agents are widely regarded as the next frontier for AI. When intelligent AI agents are available at scale, the risk landscape is likely to change dramatically. [...]
Introducing agentic intelligences could mean that we lose the ability to analyze AI risks and chart their path through the risk space: the analysis becomes too complex because there are too many unknown variables. Note that this is not a question of “AGI” (Artificial General Intelligence), only of agency. We already have AI models that far surpass humans in many domains (playing games, generating images, optical character recognition), yet the world remains recognizable and analyzable because these models are not agentic. The question of AGI and when we will “get it”, a common discussion topic, is a red herring and the wrong question; there is no such thing as one type of “general” intelligence. The question should instead be when we will have a plethora of agentic intelligences operating in the world, acting according to preferences that may be largely unknown or incomprehensible to us.
For now, what this means is that we need to increase our efforts on AI risk assessment, to be as prepared as possible as AI risk grows more complex. However, it also means we should start planning for a world in which we won't be able to understand the risks. In that world, the focus needs to shift to resilience.
There's more at the link.