I've seen the videos, but I still don't believe it. NOT!
Is this as remarkable as anything AI has so far accomplished? I'd like to think so but, truth be told, I doubt that the question is very meaningful, as we have no way of measuring remarkability; that is, we have no way of measuring the difficulty of the various tasks.
In the case of AI, just what are the specific tasks, anyhow? Landing a rocket is a very specific task, one that exists in a material world rich with knowledge and technique relevant to performing it. What tasks within AI are this specific, and how has AI been able to advance toward achieving them? Tasks certainly have been identified, along with relevant metrics – I'm thinking, for example, of the various natural-language-processing benchmarks that have been used in annual competitions. But developing a system as intelligent as a 5-year-old is not one such task, nor is achieving human-level general intelligence.
How much of AI is a matter of 1) setting intelligible goals and working toward them in a systematic way and how much is a matter of 2) trying things out and seeing what happens? I have nothing against the latter in principle; I think it's necessary and important – and do it myself. What's the ratio between the two?