Everyone’s hyped about “AI for Science” in 2025! At the end of the year, please allow me to share my unease and optimism, specifically about AI & biology.
— Bo Wang (@BoWang87) December 31, 2025
After spending another year deep in biological foundation models, healthcare AI, and drug discovery, here are 3 lessons I…
The final paragraph of the tweet:
We won’t make progress by treating biology like text. We’ll make progress by building AI that behaves more like a scientist: skeptical, iterative, and willing to be wrong.
Nor, I would add, is biology like chess, a finite, closed world.
If Serendipity (me) is to survive, I'd best go with Bo Wang & Yoshua Bengio, not Yudkowsky...
"I asked: What if we could build Yudkowsky’s “coherent extrapolated volition” into the AI?
Bengio shook his head. “I’m not willing to let go of that sovereignty,” he insisted. “It’s my human free will.”
Bo Wang is in good company.
"The 2,000-year-old debate that reveals AI’s biggest problem
"Silicon Valley is racing to build a god — without understanding what makes a good one.
by Sigal Samuel Dec 17, 2025
...
"Chang, the philosopher who says it’s precisely through making hard choices that we become who we are, told me she’d never want to outsource the bulk of decision-making to AI, even if it is aligned. “All our skills and our sensitivity to values about what’s important will atrophy, because you’ve just got these machines doing it all,” she said. “We definitely don’t want that.”
...
"It turned out this is an overriding concern for Yoshua Bengio, too. When I told him the Talmud story and asked him if he agreed with his namesake, he said, “Yeah, pretty much! Even if we had a god-like intelligence, it should not be the one deciding for us what we want.”
"He added, “Human choices, human preferences, human values are not the result of just reason. It’s the result of our emotions, empathy, compassion. It is not an external truth. It is our truth. And so, even if there was a god-like intelligence, it could not decide for us what we want.”
"I asked: What if we could build Yudkowsky’s “coherent extrapolated volition” into the AI?
Bengio shook his head. “I’m not willing to let go of that sovereignty,” he insisted. “It’s my human free will.”
...
https://www.vox.com/future-perfect/472545/ai-alignment-superintelligence-meaning-agency-autonomy
Yoshua Bengio et al. want to develop a Scientist AI...
"Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
Yoshua Bengio et al.
...
"... Following the precautionary principle, we see a strong need for safer, yet still useful, alternatives to the current agency-driven trajectory. Accordingly, we propose as a core building block for further advances the development of a non-agentic AI system that is trustworthy and safe by design, which we call Scientist AI. This system is designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans. It comprises a world model that generates theories to explain data and a question-answering inference machine. Both components operate with an explicit notion of uncertainty to mitigate the risks of overconfident predictions. In light of these considerations, a Scientist AI could be used to assist human researchers in accelerating scientific progress, including in AI safety. ..."
https://arxiv.org/abs/2502.15657
Yudkowsky’s “coherent extrapolated volition” kills... Serendipity....
"... Many significant discoveries in history were serendipitous, including penicillin, Post-it notes, Popsicles, and the microwave oven, arising from unforeseen circumstances that were then recognized and capitalized upon.[2][3][4]
...
"While serendipity in popular usage is often understood as a matter of pure chance, scientific discussions emphasize the crucial role of human agency—recognizing, interpreting, and acting upon unexpected opportunities. This interaction between chance and conscious action has been a key theme in areas such as creativity, leadership, innovation, and entrepreneurship.[6][7][8]"
...
https://en.wikipedia.org/wiki/Serendipity
And... Gerd Gigerenzer
https://www.gerd-gigerenzer.com/podcasts-english
Gigerenzer, G. (1991).
"How to make Cognitive Illusions Disappear: Beyond “Heuristics and Biases”.
European Review of Social Psychology, 2(1), 83-115. https://doi.org/10.1080/14792779143000033
By 2002, a different slant by Gigerenzer:
"12 How to Make Cognitive Illusions Disappear
Gerd Gigerenzer
https://doi.org/10.1093/acprof:oso/9780195153729.003.0012
Pages 241–266 Published: March 2002
Now a book chapter.
Thanks, Seren Dipity
May also be relevant... "have been promoting magical thinking under the guise of science. Perhaps no surprise that politicians and non-academic hucksters want to get into the game too." Andrew Gelman.
"I have a horrible feeling sometimes that heavily promoted crap research on space aliens, cold showers, mind-body healing, schoolyard evolutionary psychology, extra-sensory perception, magic golf balls, air rage, himmicanes, subliminal smiley faces, etc etc etc, has softened the ground so that the seeds of more evil trees could then be planted and take root.
Posted on December 31, 2025 9:34 AM by Andrew
"Dale Lehman sends an email with subject line “A new low in science”:
...
"The vaccine crap is much worse from a policy perspective—space aliens and mind-body healing are mostly just a waste of time—but, as we’ve discussed, I have a horrible feeling sometimes that heavily promoted crap on space aliens, cold showers, mind-body healing, schoolyard evolutionary psychology, extra-sensory perception, magic golf balls, air rage, himmicanes, subliminal smiley faces, etc etc etc, has softened the ground so that the seeds of more evil trees can now be planted and take root.
"All that junk science over the past twenty years has been promoted by leading academics. Prominent professors from Harvard, Stanford, Columbia, Chicago, etc. have been promoting magical thinking under the guise of science. Perhaps no surprise that politicians and non-academic hucksters want to get into the game too."
...
https://statmodeling.stat.columbia.edu/2025/12/31/h/
SD... still using active luck, not Yudkowsky’s “coherent extrapolated volition”!