I haven’t done one of these in a while. I do them when I’ve got a number of things jammed up in my mind and have trouble deciding what to do next. So I write a post in which I talk briefly about each of them, letting them rub up against one another.
AI alchemy and the future
This comes from a recent post, Superhuman AI, a 21st century Philosopher's Stone? The idea is that AGI and superintelligence play the same role in some people’s imagination that the Philosopher’s Stone played in an older imagination. I’m heading toward a 3QD post that might be titled, “Alan Turing and the Philosopher’s Stone.”
This involves the role of the imagination in thinking about the future: What of science fiction, scenario planning, and prediction? How do we direct our activities in exploring new (intellectual and imaginative) territory? Why did I turn to New York 2140 to get a sense of our possible real future? And why do I think AGI is an imaginative fancy, like, well, the Philosopher’s Stone?
This leads to thoughts about...
RationalityLand and epistemic theater
By which I mean folks who tend to hang out at blogs such as LessWrong, Overcoming Bias, and Astral Codex Ten and are interested in things that include effective altruism, AGI, and AI alignment. These people see themselves as being deeply committed to rationality in all things and sometimes will refer to “the rationality community.”
I’m preparing posts in which I look at two posts from RationalityLand. One of them is from 2016, by Holden Karnofsky of Open Philanthropy: Some Background on Our Views Regarding Advanced Artificial Intelligence. He explains why he thinks advanced AI is likely within this century. Some of what he says strikes me as what I’m calling epistemic theater: assertions couched in quasi-technical language that the underlying evidence cannot support.
My other post is about a very long post by Scott Alexander at Astral Codex Ten: Biological Anchors: A Trick That Might Or Might Not Work. He takes a look at recent discussions of a long and complex research paper out of Open Philanthropy that attempts to predict the emergence of advanced AI. He’s very ambivalent about it. Does he fear that such research is, in effect, epistemic theater?
Why fully human intelligence is impossible for a machine
At the end of his Chinese Room argument, Searle suggests, but does not argue, that biology has something to do with it. Only living systems are capable of intentionality, which is required for meaning. How might one construct an argument on that point? I have an idea or two, but only that.
I’ve got part of such an argument in the idea that minds are built from the inside. I need to couple that with the observation that the distinction between hardware and software, which is central to digital computing, doesn’t hold for brains. I should probably toss in energy and thermodynamics as well. The human brain uses much less energy than digital computers; moreover, it is responsible for obtaining its own energy. If its operations require more energy than its actions allow it to obtain from the world, then it cannot survive. That brings up the issue of complexity that Hays and I explored some time ago.
Terminology
We need some terminology. What do we call what artificial minds do? Can we cover it with think, calculate, and compute, or do we need another term? For that matter, do we want to call them minds? Or do we just call them AIs? “Artificial intelligence” is too long to rattle off all the time.
I like the term “mentat” for AIs, but it’s already in use in Frank Herbert’s Dune universe. Still... If we adopt it, what is it that mentats do if not think?
Tell me about the blues
I’m planning a series of relatively short music posts about the blues. Each post will comment on one, two, or maybe three YouTube videos of blues performances. When I’ve written a half dozen or so posts I’ll collect them into an article for 3QD.
NCIS
Maybe I’ll do a post about how family is treated in NCIS. I’ve got notes on the topic. In what way is the NCIS team a family? How does that differ from real, actual, you know, family?
Working papers
I’ve got a number of working papers to do. There’s another Seinfeld one in which I collect my most recent Seinfeld posts, the ones in which I analyze specific bits. I need to collect my Jaws posts, including the 3QD article, into a working paper. And I need to write a primer on attractor-nets.