Very promising that this simple technique doesn't require pre-training from scratch and has great results up to 8B model size. Figure shows fine-tuning a pre-trained 1B model (red) with the external memory added (black) matches from-scratch pre-training (blue). pic.twitter.com/nwMoSMFMHW
— William Fedus (@LiamFedus) May 6, 2022
Yuhuai Wu, Markus N. Rabe, DeLesley Hutchins, Christian Szegedy, Memorizing Transformers:
Abstract: Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights. We instead envision language models that can simply read and memorize new data at inference time, thus acquiring new knowledge immediately. In this work, we extend language models with the ability to memorize the internal representations of past inputs. We demonstrate that an approximate kNN lookup into a non-differentiable memory of recent (key, value) pairs improves language modeling across various benchmarks and tasks, including generic webtext (C4), math papers (arXiv), books (PG-19), code (Github), as well as formal theorems (Isabelle). We show that the performance steadily improves when we increase the size of memory up to 262K tokens. On benchmarks including code and mathematics, we find that the model is capable of making use of newly defined functions and theorems during test time.
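Below is a minimal sketch of the mechanism the abstract describes: an attention layer that retrieves the top-k (key, value) pairs for each query from a non-differentiable external memory and blends that result with ordinary local attention. The names (ExternalMemory, knn_augmented_attention), the exact top-k search standing in for approximate kNN, and the fixed scalar gate are all illustrative assumptions, not the authors' implementation.

```python
# Sketch of kNN-augmented attention in the spirit of Memorizing Transformers.
# Assumptions: exact dot-product top-k search instead of approximate kNN,
# and a fixed scalar gate instead of the paper's learned per-head gate.
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


class ExternalMemory:
    """Non-differentiable, append-only store of past (key, value) pairs."""

    def __init__(self, d_model):
        self.keys = np.zeros((0, d_model))
        self.values = np.zeros((0, d_model))

    def add(self, keys, values):
        # Memory receives no gradients; it simply accumulates past activations.
        self.keys = np.concatenate([self.keys, keys], axis=0)
        self.values = np.concatenate([self.values, values], axis=0)

    def lookup(self, queries, k):
        # Exact top-k search by dot-product score (stand-in for approximate kNN).
        scores = queries @ self.keys.T                    # (n_q, n_mem)
        top = np.argsort(-scores, axis=-1)[:, :k]         # (n_q, k)
        return self.keys[top], self.values[top]           # (n_q, k, d)


def knn_augmented_attention(q, local_k, local_v, memory, k=4, gate=0.5):
    """Blend local attention with attention over retrieved memory entries."""
    d = q.shape[-1]
    # Standard attention over the local context window.
    local_out = softmax(q @ local_k.T / np.sqrt(d)) @ local_v
    if memory.keys.shape[0] == 0:
        return local_out
    # Attention restricted to each query's retrieved (key, value) pairs.
    mem_k, mem_v = memory.lookup(q, k)                    # (n_q, k, d)
    mem_scores = np.einsum('qd,qkd->qk', q, mem_k) / np.sqrt(d)
    mem_out = np.einsum('qk,qkd->qd', softmax(mem_scores), mem_v)
    # Fixed gate here; the paper learns how much to rely on memory.
    return gate * mem_out + (1.0 - gate) * local_out


# Tiny usage example on random data.
rng = np.random.default_rng(0)
d_model = 16
memory = ExternalMemory(d_model)
memory.add(rng.normal(size=(128, d_model)), rng.normal(size=(128, d_model)))
q = rng.normal(size=(8, d_model))
local_k = rng.normal(size=(32, d_model))
local_v = rng.normal(size=(32, d_model))
print(knn_augmented_attention(q, local_k, local_v, memory, k=4).shape)  # (8, 16)
```

Because the memory is non-differentiable, it can grow to hundreds of thousands of tokens at inference time (the abstract reports steady gains up to 262K) without increasing the cost of backpropagation, which is what lets a pre-trained model be fine-tuned with the memory attached rather than retrained from scratch.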