I was making one more run around the web before I buckled down and got back to a major writing task, when I came across the brand-spanking-new conversation between Lex Fridman and Sam Altman. Lex is Lex, and an interesting guy, and Sam is, well, he's interesting to me, but – there was a hint of megalomania at the end of that NYTimes story from Mar. 31, 2023, that rubbed me the wrong way, and then there's all the AI hype – he IS the CEO of OpenAI. So it seemed to me that I just had to listen in – not the whole thing, and I could legit play solitaire while listening – and so I did, skipping over stuff.
But then the conversation hit an interesting patch. So – and I'm not going to try to re-create the context – they're talking about GPT-4 at roughly 46:03:
Fridman: what are the best things it can do, and the limits of those best things, that allow you to say it sucks, therefore gives you an inspiration and hope for the future?
Altman: you know, one thing I've been using it for more recently is sort of, like, a brainstorming partner.

Fridman: Yep, for that.

Altman: there's a glimmer of something amazing in there. I don't think it gets – you know, when people talk about it, what it does, they're like, ah, it helps me code more productively, it helps me write more, faster and better, it helps me, you know, translate from this language to another, all these, like, amazing things. But there's something about the, like, kind of creative brainstorming partner – I need to come up with a name for this thing, I need to, like, think about this problem in a different way, I'm not sure what to do here – uh, that I think, like, gives a glimpse of something I hope to see more of. Um, one of the other things that you can see, like, a very small glimpse of, is when it can help on longer horizon tasks – you know, break down some multiple steps, maybe, like, execute some of those steps, search the internet, write code, whatever, put that together. Uh, when that works – which is not very often – it's, like, very magical.
At about 52:54:
Fridman: I use it as a reading partner for reading books. It helps me think, help me think through ideas, especially when the books are classic, so it's really well written about. And it actually is – I find it often to be significantly better than even, like, Wikipedia on well-covered topics. It's somehow more balanced and more nuanced, or maybe it's me, but it inspires me to think deeper than a Wikipedia article does. I'm not exactly sure what that is. You mentioned, like, this collaboration. I'm not sure where the magic is, if it's in here [gestures to his head], or if it's in there [points toward the table], or if it's somewhere in between.
It's that magic-collaborative zone that interests me. While I've spent a great deal of time working with (plain old) ChatGPT, most of that time I've been doing research on how it behaves. But every once in a while I'll play around just to mess around. And then I've seen sparks of magic. The interaction that generated AGI and Beyond: A Whale of a Tale certainly had the magic flowing, and it showed up here and there during the Green Giant Chronicles. I suspect those two cases are somewhat idiosyncratic. Nor am I sure that I can do this at will. But there's definitely something there, and it's in the interaction.
I would guess that the magic varies from person to person as well. I wonder how many users have had these kinds of magical flow interactive states? I'd think finding that out would be tricky, because such states are likely to be idiosyncratic and elusive. If I were to research it, I'd probably start out with interviews, either face-to-face or through some online medium. That might lead to a questionnaire that could be used more broadly.
It'll be interesting to see how Altman ends up characterizing this flow state – which is what I'm calling it for the moment, a man-machine flow state. It's the human, of course, that's in flow. The machine is just being the machine.
* * * * *
I have a final comment, of an epistemological nature. As the post indicates, I'd already had a magical interaction or two with ChatGPT before I listened to this podcast. The first time it came up in the podcast, from Altman, OK, I noted it, and went on, playing solitaire with one part of my mind and listening in on the podcast with another part. But then it came up again, this time from Fridman. Wham! That's three, my threshold number for this kind of thing. Three people independently have the same or similar experience. Maybe there's something real there.