A lot’s happened since my last ramble (August 22). I’ve changed my attitude toward AI; Elon Musk and Christof Koch have triggered one of my hobby horses, direct brain-to-brain thought transfer; and Facebook has decided to change its user interface. Let’s start with the last and work through them in reverse order.
Facebook! Don’t mess with my mind!
Facebook has announced that it’s changing its interface. It’s done that to some people already. Others, like me, have been warned. But our respite is only temporary.
I’ve objected to this on two grounds: 1) it’s inconvenient, and 2) general principle: it’s my mind. The first is real but not, at this point, all that consequential. I neither like nor dislike the current interface; it is simply what I’m used to. Changing it will force me to think a bit where now I can act intuitively and automatically. While I don’t know how bad that will be – I do quite a bit on Facebook – I don’t anticipate major problems. Though, who knows, maybe they’ll cripple a feature I like. Twitter did that with its last interface change. I liked the Moments feature, which let me group tweets under a single topic. Contrary to what the help notes say, it is no longer possible to add new tweets to existing Moments. Will Facebook do something similar? I don’t know.
However, I think the matter of principle is deeper and thus more important. But I’m not sure how to argue it. What I’ve said is that Facebook is messing with my mind, and I’ve justified that by reference to the idea that all sorts of media – books, sound recordings, movies, etc. – are extensions of our minds. I’m wondering whether that’s quite right.
What I’m thinking at the moment is that we think of minds as essentially private. You can’t read my mind and I can’t read yours. And, yes, that is true. But aren’t minds fundamentally social as well?
Wittgenstein famously argued that there is no such thing as a private language; hence language is inherently social. Inner speech is a form of language and so is, in some way, social and not merely private. That needs clarification and development, but I’m thinking that maybe that’s the way to go.
Social media are also private media, in a sense. I know Facebook has a bunch of settings you can use to control who sees what. I’ve looked at them once or twice in very specific situations, but for the most part I’ve not paid any attention to them because I don’t use Facebook for more or less private purposes. I’m a thinker and a writer and I’m happy to do both in public. That is, it’s my mind, but I’m using it in public and in a public way.
This line of thought goes in two directions: 1) the institutional and civic nature of social media, and 2) the nature of the technology. On the first, it’s not at all clear to me that our current institutional structure is adequate for using and governing this technology. On the second, I think the user needs to be in control of the interface (beyond the blither of settings FB makes available) in a way that requires perhaps a substantial rethinking of the nature of the technology. I suspect that AI will play a role in the future realignment.
Brain-to-brain thought transfer
The idea’s been around for a while, along with the idea of direct brain-to-machine linkage. I don’t know when it first showed up in science fiction, or how widespread it is there, but I first became aware of it as a scientific and technical proposal in the work of Rodolfo Llinás in the mid-2000s. More recently Christof Koch and, most visibly, Elon Musk have joined the parade.
I don’t get it. There is an obvious technical objection which, as far as I can tell, none of these thinkers have addressed: how does a neuron tell whether an impulse is coming from another neuron in the same brain or from a neuron in some other brain? If neurons can’t tell the difference, how can there possibly be thought transfer? I’ve offered a more elaborate and explicit argument elsewhere, but that should be enough. Yet it seems not even to have been considered.
I don’t know. I note that no one is suggesting it as something that will be done in the near-term or even mid-term future – though come to think of it, yikes!, Musk told Joe Rogan five to ten years – rather, it’s kinda far in the future, where who knows what will happen. That is, it’s in a realm where we can’t really think about things rigorously. So, yeah, let’s throw caution to the winds.
Anyhow, it’s right up there with the other fantasies of tech-bro religion: the super-intelligent computer (malevolent or benevolent) and uploading the contents of one’s brain to the cloud, thus becoming immortal.
AI and the future of technology
Finally, just the other day I started talking about the evolution of rank 5 culture and linking that, however loosely, with AI. Rather, it’s something a bit beyond AI, for AI is just a bunch of techniques. I’m looking for some one thing, comparable to the role of writing in the emergence of rank 2 or of Arabic notation in the emergence of rank 3, etc. Whatever this new thing is, it is as likely to emerge out of AI as anywhere else.
To get there AI has to break free of its chess-centric ways and develop a bit of curiosity about the basis of its current statistical success, which I’ve been pointing toward in my recent thinking about GPT-3: “GPT-3: Waterloo or Rubicon? Here be Dragons” and “What economic growth and statistical semantics tell us about the structure of the world.” As I said before, don’t believe the hype, but there’s a breakthrough there. We just have to find it.
More later.