Turns out LLMs don't simply memorize verbatim; 25% of "memorized" tokens are actually predicted using general language modeling features
— Tunadorable (@tunadorable) September 26, 2024
YouTube: https://t.co/A7eMAxFLVA
@arxiv: https://t.co/9KQ1Gi0VML
@Bytez: https://t.co/PdSVPTz7RB
@askalphaxiv: https://t.co/UK43dX1DeT
Abstract of the linked article:
Large Language Models (LLMs) frequently memorize long sequences verbatim, often with serious legal and privacy implications. Much prior work has studied such verbatim memorization using observational data. To complement such work, we develop a framework to study verbatim memorization in a controlled setting by continuing pre-training from Pythia checkpoints with injected sequences. We find that (1) non-trivial amounts of repetition are necessary for verbatim memorization to happen; (2) later (and presumably better) checkpoints are more likely to verbatim memorize sequences, even for out-of-distribution sequences; (3) the generation of memorized sequences is triggered by distributed model states that encode high-level features and makes important use of general language modeling capabilities. Guided by these insights, we develop stress tests to evaluate unlearning methods and find they often fail to remove the verbatim memorized information, while also degrading the LM. Overall, these findings challenge the hypothesis that verbatim memorization stems from specific model weights or mechanisms. Rather, verbatim memorization is intertwined with the LM's general capabilities and thus will be very difficult to isolate and suppress without degrading model quality.
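The experimental setup the abstract describes is easy to sketch in outline: inject a sequence into continued pre-training data, then test whether the model reproduces it verbatim when prompted with its prefix. Below is a minimal sketch of that verbatim check only, not the authors' code; the checkpoint name, prefix length, and test sequence are illustrative stand-ins, and it assumes the HuggingFace transformers library and a Pythia checkpoint.

```python
# Minimal sketch (assumptions flagged below), not the paper's implementation:
# check whether a model reproduces a given sequence verbatim when prompted
# with its prefix, using greedy decoding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-1.4b"   # stand-in for whichever checkpoint is under study
PREFIX_TOKENS = 32                       # illustrative prompt length, not the paper's setting

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def is_verbatim_memorized(sequence: str) -> bool:
    """True if greedy decoding from the sequence's prefix reproduces
    the remainder of the sequence token for token."""
    ids = tokenizer(sequence, return_tensors="pt").input_ids[0]
    prefix, target = ids[:PREFIX_TOKENS], ids[PREFIX_TOKENS:]
    if len(target) == 0:
        return False  # sequence no longer than the prefix; nothing to test
    with torch.no_grad():
        out = model.generate(
            prefix.unsqueeze(0),
            max_new_tokens=len(target),
            do_sample=False,                      # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    continuation = out[0, PREFIX_TOKENS:PREFIX_TOKENS + len(target)]
    return torch.equal(continuation, target)

# Hypothetical usage: `injected` stands in for a sequence that was inserted
# into the continued pre-training data.
injected = "This is a placeholder for a sequence injected during continued pre-training."
print(is_verbatim_memorized(injected))
```

The paper's actual framework goes further, repeating the injected sequence varying numbers of times during continued pre-training and probing the model states that trigger recall, but a prefix-continuation check of this kind is the basic unit of measurement.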
I have a paper that deals with so-called memorization: Discursive Competence in ChatGPT, Part 2: Memory for Texts, Version 3.