I recently posted this note to an online community, Brainstorms, hosted by Howard Rheingold (whom I address in the final paragraph):
I've spent a LOT of time using ChatGPT since it went online at the end of Nov. 2022. Most of the time I've been investigating its behavior and have written a number of posts about it. But more recently I've been using it a bit for my own research purposes, where it's a tool for me, not the object of my inquiry. Thus, back in October the economist Tyler Cowen announced that he'd written a book about the greatest economists of all time. You can download the PDF, but he's also made AI versions of it that you can query. I've only used the AI two or three times. It seemed OK, but not great.
However, I decided, why the hell not? I'll write some posts about great literary critics. I've now got over a half dozen posts in the series, and I've used ChatGPT in the process, but always in a background role. If you look at any of those posts and see something set in Courier, that's from ChatGPT. I'm not using it to do any of the writing, not even drafts for me to edit into (more or less) finished prose. If I'm going to stick its text into my work, I want the reader to know that ChatGPT wrote it, not me.
Now, I'm trained as a literary critic. I've published in professional journals. Though I've mostly been doing other things for the last three or four decades, I still follow some things here and there. From 2007 to 2010 or so I was a member of a (now-defunct) group blog, The Valve, made up of mostly professional literary scholars and a philosopher or two. In the last decade I've followed work in digital humanities, often on Twitter. So I'm pretty confident about using ChatGPT in that arena. If it produces some nonsense, there's a good chance I'll detect it.
Of course, I'd really like to be able to use it in areas where I know little or nothing. There I'm more likely to be fooled. However, I've looked into and published about a lot of things over the years. I'm very sophisticated about these things, and I know how to check things. If I get some stuff from ChatGPT in an area I don't know, but I like what it's given me, I'll check it.
I've got a lifetime's worth of crap-detection behind me. Now, how you distill that down, you've thought a lot more about that than I have, Howard. It seems to me, though, that the tricky part is that you learn best if you're really interested and the work means something to you. Then you'll automatically take care and, I assume, would be open to specific instruction on crap-detection. But if you're just doing it to get a grade in something that otherwise doesn't interest you, or to rack up education points at a boring job, then I'd think you're more vulnerable.