Sunday, July 6, 2025

What happens to kids' minds when they use LLMs to complete writing tasks? What can we do about it?

From a recent NYTimes column by David Brooks (Are We Really Willing to Become Dumber?):

A group of researchers led by M.I.T.’s Nataliya Kosmyna recruited 54 participants to write essays. Some of them used A.I. to write the essays, some wrote with the assistance of search engines (people without a lot of domain knowledge are not good at using search engines to identify the most important information), and some wrote the old-fashioned way, using their brains. The essays people used A.I. to write contained a lot more references to specific names, places, years and definitions. The people who relied solely on their brains had 60 percent fewer references to these things. So far, so good.

But the essays written with A.I. were more homogeneous, while those written by people relying on their brains created a wider variety of arguments and points.

That's consistent with recent discussions I've had with my colleague, Ramesh Viswanathan, who remarked that, when given a prompt, a chatbot will return a modal response. What does "modal" mean? It's a statistical term meaning the most frequent value (as opposed to the mean and the median – look them up). For example, in an experiment I conducted in 2023 I gave ChatGPT a one-word prompt, "story," to which it responded with a simple story (note: it no longer responds in this way). In ten independent trials it gave almost the same story each time. Then I conducted a session where I gave it that prompt ten times in a row. It responded with a story each time and, while the stories were highly similar, there was more variation among them than among those elicited in separate sessions. This is a modal response. You can move it away from the mode by requesting specific kinds of stories.
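
To make the statistical sense of "modal" concrete, here's a minimal sketch in Python (an illustration only, not part of the original experiment) contrasting the mode with the mean and the median. The numbers are made up: imagine scoring ten generated stories by how far each departs from the single most common story the model tends to produce.

```python
# Illustration only: the mode is the most frequent value, as opposed to
# the mean (the arithmetic average) and the median (the middle value).
from statistics import mean, median, mode

# Hypothetical "deviation" scores for ten generated stories:
# 1 means "essentially the default story," larger values mean more departure.
deviations = [1, 1, 1, 1, 1, 2, 2, 3, 4, 5]

print("mode:  ", mode(deviations))    # 1   -> the most frequent value
print("mean:  ", mean(deviations))    # 2.1 -> the arithmetic average
print("median:", median(deviations))  # 1.5 -> the middle of the sorted list
```

The point is just that "modal" picks out the single most common outcome, which is what a chatbot tends to return when the prompt doesn't push it somewhere more specific.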

Back to Brooks, who continues:

Later the researchers asked the participants to quote from their own papers. Roughly 83 percent of the A.I. large language model, or L.L.M., users had difficulty quoting from their own papers. They hadn’t really internalized their own “writing,” and little of it had sunk in. People who used search engines were better at quoting their own points, and people who used just their brains were a lot better.

Almost all the people who wrote their own papers felt they owned their work, whereas fewer of the A.I. users claimed full ownership of their work. Here’s how the study authors summarize this part of their research:

The brain-only group, though under greater cognitive load, demonstrated deeper learning outcomes and stronger identity with their output. The search engine group displayed moderate internalization, likely balancing effort with outcome. The L.L.M. group, while benefiting from tool efficiency, showed weaker memory traces, reduced self-monitoring and fragmented authorship.

In other words, more effort, more reward. More efficiency, less thinking.

Brooks then reports some work the researchers did measuring the students' EEG responses: "The researchers conclude, 'Collectively, these findings support the view that external support tools restructure not only task performance but also the underlying cognitive architecture.'"

So, the use of LLMs by students is deeply problematic. I've been hearing about this for over two years. Here's a remark I recently made to Bryan Alexander, a futurist and consultant to higher education:

A quick thought. I’m aware that there’s this MASSIVE problem about LLMs and creating in school, but I haven’t thought much about it because it’s not my problem. I don’t have a university post and don’t have to teach students, many of whom mostly want grades and don’t much care about learning. Isn’t THAT the problem, though? By the time they get to college they’ve spent 12 years in an education factory that’s about results, grades, not process. LLMs give them a way to get the results they want without having to work, and as for learning, who needs it?

They do, obviously. But the damage has been done. So we’ve got two problems: 1) Given the ubiquity of LLMs, what do we do with these damaged kids? 2) How do we revamp the primary and secondary schools so kids aren’t intellectually damaged?

While I'm inclined to believe that, on the whole, AI can benefit education, we're going to have to work hard to discover and support those benefits. We may actually have to change the way we go about education. And that's a good thing.
