Friday, November 25, 2016

Zero-Shot Translation – "implicit bridging between language pairs never seen explicitly during training"


Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat,
Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean
{melvinp,schuster,qvl,krikun,yonghui,zhifengc,nsthorat}@google.com

Abstract
We propose a simple, elegant solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no change in the model architecture from our base system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. The rest of the model, which includes encoder, decoder and attention, remains unchanged and is shared across all languages. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT using a single model without any increase in parameters, which is significantly simpler than previous proposals for Multilingual NMT. Our method often improves the translation quality of all involved language pairs, even while keeping the total number of model parameters constant. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. In addition to improving the translation quality of language pairs that the model was trained with, our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation are possible for neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and show some interesting examples when mixing languages.

arXiv:1611.04558 [cs.CL]

* * * * *
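
The mechanics behind the zero-shot claim are worth spelling out. The only change the authors describe making to a standard NMT system is prepending an artificial token to the source sentence that names the desired target language; the encoder, decoder, attention, and wordpiece vocabulary are all shared across language pairs. Here is a minimal sketch of that preprocessing step, assuming a conventional training pipeline; the <2es>-style token follows the paper's example, while the toy sentence pairs are my own illustrations:

```python
# Minimal sketch of the data trick described in the abstract: the model
# architecture is left untouched; the only change is an artificial token at
# the start of each source sentence naming the desired target language.

def add_target_token(source_sentence: str, target_lang: str) -> str:
    """Prepend the artificial target-language token to a source sentence."""
    return f"<2{target_lang}> {source_sentence}"

# Training data from several language pairs is simply mixed together and fed
# to one shared encoder/decoder with a shared wordpiece vocabulary.
training_pairs = [
    (add_target_token("How are you?", "es"), "¿Cómo estás?"),       # English -> Spanish
    (add_target_token("¿Cómo estás?", "en"), "How are you?"),       # Spanish -> English
    (add_target_token("How are you?", "pt"), "Como você está?"),    # English -> Portuguese
    (add_target_token("Como você está?", "en"), "How are you?"),    # Portuguese -> English
]

# Zero-shot request: Spanish -> Portuguese never appears in the training data,
# but the single shared model can still be asked to bridge it implicitly.
zero_shot_input = add_target_token("¿Cómo estás?", "pt")
print(zero_shot_input)  # "<2pt> ¿Cómo estás?"
```

That is the whole "implicit bridging" setup: no pivot translation through English at inference time, just a target-language tag on the input and a model trained on a mixture of language pairs.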


Thinking off the top of my head, if I were going to summon this article for use in discussing literary criticism, I could see it supporting both structuralist/deconstructive thought and Darwinian lit crit. The former emphasizes differential relations between words as the source of meaning, and that's all these programs have to work from: differential relations as inferred from distributional patterns. That "secret internal language" (the universal interlingua representation the paper hints at) is a pattern 'distilled' from patterns of differential relationships that are congruent across languages. And that congruence is what a Darwinian would expect, because it reflects the core semantic proclivities of the adapted mind. Making this argument in detail, though, is a different matter. I'll pass on that, at least for now.
