Wednesday, August 12, 2020

Zero- and few-shot learning: Transformers on the cheap

For those of you interested in learning more about zero- and few-shot learning, I wrote up a blog post covering a few recent methods with 🤗 Transformers
175 billion parameter language models not required 😉
Post: https://t.co/YxDUI6JE3u
Demo: https://t.co/MLtTKTcczM pic.twitter.com/NuKmJtXS2j
— Joe Davison (@joeddav) May 29, 2020
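For a sense of what "Transformers on the cheap" looks like in practice, here is a minimal sketch using the zero-shot-classification pipeline in 🤗 Transformers. The specific model (facebook/bart-large-mnli) and the example text and labels are assumptions for illustration; the linked post and demo describe the methods in detail.

# Minimal sketch: NLI-based zero-shot classification with 🤗 Transformers.
# The model choice and example inputs below are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "Who are you voting for in 2020?",  # text to classify
    candidate_labels=["politics", "economics", "public health"],
)
print(result["labels"])  # candidate labels sorted from most to least likely
print(result["scores"])  # corresponding scores

No 175-billion-parameter model is involved: a few-hundred-million-parameter NLI model scores each candidate label against the input text.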