This is from a New Yorker profile of Nick Bostrom, who believes that the development of superintelligent machines is inevitable and who worries about what they might do to/with us:
The keynote speaker at the Royal Society was another Google employee: Geoffrey Hinton, who for decades has been a central figure in developing deep learning. As the conference wound down, I spotted him chatting with Bostrom in the middle of a scrum of researchers. Hinton was saying that he did not expect A.I. to be achieved for decades. “No sooner than 2070,” he said. “I am in the camp that is hopeless.”

“In that you think it will not be a cause for good?” Bostrom asked.

“I think political systems will use it to terrorize people,” Hinton said. Already, he believed, agencies like the N.S.A. were attempting to abuse similar technology.

“Then why are you doing the research?” Bostrom asked.

“I could give you the usual arguments,” Hinton said. “But the truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”

That's what Horikoshi thought too, no? That building planes was technically sweet?
BTW, I think Bostrom is wrong about the prospects of superintelligent machines, but that's another issue.