You may remember Noam Chomsky’s NYT article on ChatGPT from last year. https://t.co/HyL1U2JXZr
— Julie Kallini ✨ (@JulieKallini) January 15, 2024
He and others have claimed that LLMs are equally capable of learning possible and impossible languages. We set out to empirically test this claim.
We find that GPT-2 struggles to learn impossible languages. In Experiment 1, models trained on the most extreme impossible languages learn the least efficiently, while models trained on possible languages learn the most efficiently, as measured by perplexity over training steps. pic.twitter.com/JPtGGe5ZtI
— Julie Kallini ✨ (@JulieKallini) January 15, 2024
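To give a concrete sense of the perplexity measurement mentioned in that tweet, here is a minimal sketch. It is not the paper's setup (the authors train GPT-2 models from scratch on systematically perturbed corpora and track perplexity across training checkpoints); it only illustrates the basic quantity, using an off-the-shelf pretrained GPT-2 from Hugging Face transformers and a crude word-shuffled "impossible" sentence. The model name, example sentence, and shuffling scheme are illustrative assumptions, not from the thread.

```python
# Illustrative sketch (not the paper's training-from-scratch setup):
# compare GPT-2 perplexity on an English sentence vs. a randomly
# word-shuffled ("impossible") version of it.
# Assumes the `transformers` and `torch` packages are installed.
import random

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity on `text` (exp of the mean token NLL)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

sentence = "The cat sat quietly on the warm windowsill."
words = sentence.split()
random.seed(0)
shuffled = " ".join(random.sample(words, len(words)))  # crude "impossible" word order

print(f"possible: {perplexity(sentence):.1f}")
print(f"shuffled: {perplexity(shuffled):.1f}")
```

In the paper itself, the same kind of perplexity comparison is made between models trained on possible versus impossible corpora, evaluated repeatedly over the course of training rather than with a single pretrained model.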
A big thank you to my wonderful co-authors: @isabelpapad, @rljfutrell, @kmahowald, @ChrisGPotts.
— Julie Kallini ✨ (@JulieKallini) January 15, 2024
This tweet will self-destruct in 5 seconds. 🔥