An email I recently sent to Tyler Cowen, with his succinct response.
* * * * *
I was wondering if any systematic work has been done on classifying (large-scale) research and R&D by ‘guesstimated’ probability of success at the outset. I’m thinking of a brief Mercatus paper from 2016, Graboyes & Stossel, “Curing Cancer Is Not a Moonshot.”
When President John F. Kennedy articulated his moonshot project in May 1961, it was a narrowly focused, clearly defined engineering problem. While costs and timelines were somewhat uncertain, tried-and-true principles of physics informed the engineers and made a successful lunar landing highly likely.
Cancer, in contrast, is many different diseases demanding diverse treatments. Compared with the high-precision physical sciences that reliably guided NASA to the moon, the principles of biology are obscure and agonizingly unpredictable.
That seems to me to be a useful and important distinction to make. And while it’s perhaps easy enough to make in retrospect, I’m not at all sure we can’t make it, or some approximation to it, at the beginning of an enterprise. Examining however many cases in retrospect might help us make such distinctions in prospect.
It seems to me, for example, that the Manhattan Project was like the moonshot. And it seems to me that a human landing on (and return from) Mars is more like that than like the cancer shot. But what do we make of work toward practical nuclear fusion? It’s been going on a long time and we’re still not there. It seems to me that it’s like the moonshot in that the basic science is (or seems to be) in place. But there’s an awful lot of engineering we haven’t figured out. Is that a third type?
Of course we’ve got the distinction between a scientific venture and an engineering one. One could characterize the cancer boondoggle as treating what is in fact a combined scientific and engineering project, with a large and important scientific component, as purely a matter of engineering (broadly considered as designing and constructing solutions to practical problems — in this case, therapies).
What of brain interfaces? In the case of controlling prosthetic limbs, it seems to me that that’s mostly engineering at this point. After all, it’s been done, though I have no sense of whether or not it’s been reduced to routine. But then you have Elon Musk telling Joe Rogan it’s only ten years until we can share thoughts directly with one another (Rodolfo Llinas made that suggestion over 15 years ago, and Christof Koch has made it again recently). I think there are tremendous engineering challenges there, but also scientific ones. [In fact, I suspect it’s impossible, but that’s another story.]
What about self-driving cars? It’s one of those things that’s been any-day-now for a while, but still isn’t here. And then there’s AGI. As of May 2019, Rodney Brooks thought fully self-driving cars to be 30–50 years in the future, and AGI? Way, way out. I think AGI is so poorly defined that the prospect borders on science fiction, and yet, I note, I also think GPT-3 represents a phase change.
It’s just that the AI space is so very large, and who knows how much of it is as yet unexplored. Compare the prospect of landing a human on Mars with that of creating an AGI. Except perhaps for information about what happens to humans under those conditions for that long, the basic science seems to be in place for a Mars mission. We can make plans and have intelligent conversations. But AGI? It’s mostly just words at this point. We’re a long way from understanding the human brain – and such understanding would surely be useful in crafting an artificial brain. Nor do we even understand our best AI engines. We can build them, but what’s going on under the hood is still rather obscure.
Tyler's response: “All radically understudied in my view...”