It's not what you think.
YouTube:
In this episode, Dr. Rena Malik is joined by sociologist Dr. Ken Hanson to explore the surprising realities of sex tech, including sex dolls and AI companions. Together, they unpack who is really using these technologies, how they're reshaping intimacy, and the emotional bonds that can form between humans and artificial partners. Listeners will gain a deeper understanding of the motivations, implications, and future trends in sexual technology and relationships.
00:00:00 Introduction
00:01:02 Who Really Uses Sex Tech
00:04:03 Sex Dolls
00:06:24 From Dolls to Sex Robots
00:24:30 AI Companions, Replika, Attachment
00:41:00 Gen Z, Loneliness, Community
00:54:06 Doll Brothels and Industry Ethics
01:02:10 Sex Dolls, Porn, and Fantasy
01:03:26 How Academia Sees Sex Tech
01:04:05 Why Sex Research Matters
01:09:01 Conclusion
Bill, perhaps you have seen in Crooked Timber, John Quiggin's latest focus - humanoid robots, Musk's grift, natalism, birth rates, and housework.
Your post ties in re robots, birth rates, and housework, and imo deserves a comment from you. Or maybe not?! Had I seen "explore the surprising realities of sex tech, including sex dolls and AI companions. Together, they unpack who is really using these technologies, how they're reshaping intimacy, and the emotional bonds that can form between humans and artificial partners"... I would have commented myself.
See CT for series. Latest...
"Adventures with Deep Research: success then failure"
by JOHN Q on DECEMBER 9, 2025
I’ve long been interested in the topic of housework, as you can see from this CT post, which produced a long and unusually productive discussion thread [fn1]. The issue came up again in relation to the prospects for humanoid robots. It’s also at the edge of a bunch of debates going on (mostly on Substack) about living standards and birth rates.
https://crookedtimber.org/2025/12/09/adventures-with-deep-research-success-then-failure/
"Musk’s last grift"
by JOHN Q on NOVEMBER 22, 2025
"Having failed with the Cybertruck and robotaxis, Tesla’s value depends almost entirely on the projected success of the Optimus humanoid robot. "
And if ... "The Universal Weight Subspace Hypothesis"...
[Submitted on 4 Dec 2025 (v1), last revised 6 Dec 2025 (this version, v2)]
https://arxiv.org/abs/2512.05117
... is borne out, all foundational and expert models will collapse into the universal weight subspace.
Via... with excellent prognostications by commenters...
The universal weight subspace hypothesis (arxiv.org), 356 points by lukeplato, 132 comments
https://news.ycombinator.com/item?id=46199623
Above comment from me, SD.
OT: loose lateral linkage to Homo Ludens, re the user transition and the change from Homo Economicus to Homo Ludens.
As Seren Dipity would have it, I found, in "The Human Side of XAI" below:
"Case D - Gamification through Interactive Learning. [13]"
... which cited:
13. Ulrike Kuhl, André Artelt, and Barbara Hammer. 2023.
"Let's go to the Alien Zoo: Introducing an experimental framework to study usability of counterfactual explanations for machine learning."
Frontiers in Computer Science 5 (March 2023). https://doi.org/10.3389/fcomp.2023.1087929
[Aside: is any human able to explain why citations and abstracts are sans formatting?!]
Bill, the paper uses a game-like tool - code available - which may! - ymmv - provide a coded environment to gain an understanding of where humans are, to support the transition to Homo Ludens. Imho.
"counterfactual explanations (CFEs) have gained considerable traction as a psychologically grounded approach to generate post-hoc explanations."
"we introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework. The Alien Zoo provides the means to evaluate usability of CFEs for gaining new knowledge from an automated system, targeting novice users in a domain-general context. "
Clip above from:
"This article is part of the Research Topic: Explainable Artificial Intelligence.
"Let's go to the Alien Zoo: Introducing an experimental framework to study usability of counterfactual explanations for machine learning"
Ulrike Kuhl, André Artelt, Barbara Hammer
Introduction: To foster usefulness and accountability of machine learning (ML), it is essential to explain a model's decisions in addition to evaluating its performance. Accordingly, the field of explainable artificial intelligence (XAI) has resurfaced as a topic of active research, offering approaches to address the “how” and “why” of automated decision-making. Within this domain, counterfactual explanations (CFEs) have gained considerable traction as a psychologically grounded approach to generate post-hoc explanations. To do so, CFEs highlight what changes to a model's input would have changed its prediction in a particular way. However, despite the introduction of numerous CFE approaches, their usability has yet to be thoroughly validated at the human level.
...
Discussion: With this work, we aim to equip research groups and practitioners with the means to easily run controlled and well-powered user studies to complement their otherwise often more technology-oriented work. Thus, in the interest of reproducible research, we provide the entire code, together with the underlying models and user data: https://github.com/ukuhl/IntroAlienZoo.
...
Alongside novel explainability approaches, authors have proposed evaluation criteria and guidelines to systematically assess XAI approaches in terms of their usability (Doshi-Velez and Kim, 2017; Arrieta et al., 2020; Davis et al., 2020; Sokol and Flach, 2020a). "
...
https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2023.1087929/full
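For anyone wondering what a counterfactual explanation actually computes, here is a minimal sketch. This is not the Alien Zoo code from the paper; it is a toy linear "loan approval" model with made-up weights, where a greedy search nudges the input until the model's prediction flips - the flipped input is the counterfactual ("what would have had to change"):

```python
import math

# Toy linear model: approve (1) if w.x + b > 0, else reject (0).
# Weights and features are invented for illustration:
# x = (income, debt_ratio), both scaled to [0, 1].
W = (0.8, -1.2)
B = -0.5

def predict(x):
    return int(sum(wi * xi for wi, xi in zip(W, x)) + B > 0)

def counterfactual(x, step=0.05, max_iter=1000):
    """Greedy search: nudge x along the model's weight vector,
    one small step at a time, until the prediction flips.
    The result is a 'what would have to change' example,
    in the spirit of CFEs."""
    target = 1 - predict(x)
    norm = math.sqrt(sum(wi * wi for wi in W))
    sign = 1.0 if target == 1 else -1.0
    cf = list(x)
    for _ in range(max_iter):
        if predict(cf) == target:
            return tuple(cf)
        cf = [xi + sign * step * wi / norm for xi, wi in zip(cf, W)]
    return None  # no counterfactual found within the step budget

applicant = (0.3, 0.6)   # low income, high debt ratio: rejected
cf = counterfactual(applicant)
print(predict(applicant))   # 0
print(predict(cf))          # 1
print([round(c - a, 3) for c, a in zip(cf, applicant)])  # suggested changes
```

Real CFE methods add constraints the sketch ignores (sparsity, plausibility, actionable features only), which is exactly the usability question the Alien Zoo framework is built to test on human users.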
Reference #13 in...
"The Human Side of XAI: Bridging the Gap between AI and Non-expert Audiences"
...
"This study contributes to the broader discussion of ethical implications surrounding opaque machine learning models in decision-making. Through the development of guidelines, we hope to bridge the gap between machine learning experts and the public, enabling a better common understanding of its increasing importance in our lives."
DOI: https://doi.org/10.1145/3615335.3623062
SIGDOC '23: The 41st ACM International Conference on Design of Communication, Orlando, USA, October 2023
Hi Bill! Have not seen you around much lately; just checking in!
Ward
Good to hear from you, Ward.