This is a brief note summarizing the paper: Transfer Learning for Low-Resource Neural Machine Translation. Zoph et al. EMNLP 2016
This paper experiments with a variety of transfer-learning scenarios for low-resource languages.
They first train a sequence-to-sequence model on a high-resource language pair as the parent model.
Then they transfer some of the learned parameters to the low-resource language pair (the child model) to initialize and constrain its training.
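The parent-to-child transfer step can be sketched in a few lines. This is a minimal illustration, not the paper's code: parameter names, the choice of which parameters to transfer, and which to freeze are all hypothetical (the paper transfers target-side parameters when the parent and child share the target language, e.g. English).

```python
# Hedged sketch of the parent->child parameter transfer idea.
# Parameters are represented as a simple name->value dict; in a real NMT
# system these would be tensors in an encoder-decoder network.

def init_child_from_parent(parent_params, child_params, transfer_keys, frozen_keys):
    """Initialize a child model's parameters from a trained parent model.

    transfer_keys: parameter names copied from parent to child
                   (e.g. decoder weights when the target language is shared).
    frozen_keys:   transferred parameters kept fixed while training the child,
                   which "constrains" the child's training.
    Returns the updated child parameters and the set of trainable names.
    """
    for k in transfer_keys:
        child_params[k] = parent_params[k]
    trainable = {k for k in child_params if k not in frozen_keys}
    return child_params, trainable

# Parent: high-resource pair (e.g. French->English).
# Child:  low-resource pair (e.g. Uzbek->English) sharing the target side.
parent = {"encoder.embed": "fr_emb", "decoder.embed": "en_emb", "decoder.lstm": "W_dec"}
child  = {"encoder.embed": "uz_emb", "decoder.embed": "rand",   "decoder.lstm": "rand"}

child, trainable = init_child_from_parent(
    parent, child,
    transfer_keys=["decoder.embed", "decoder.lstm"],  # target side is shared
    frozen_keys=["decoder.embed"],                    # e.g. keep English embeddings fixed
)
```

After this step, the child's source-side parameters stay freshly initialized (the source languages differ), while the transferred target-side parameters give the low-resource model a strong starting point.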
Note(Abstract):
The encoder-decoder framework for neural machine translation (NMT) has been shown effective in large data scenarios, but is much less effective for low-resource languages. They present a transfer learning method that significantly improves BLEU scores across a range of low-resource languages. The key idea is to first train a high-resource language pair (the parent model), then transfer some of the learned parameters to the low-resource pair (the child model) to initialize and constrain training.
Download URL:
The paper: Transfer Learning for Low-Resource Neural Machine Translation. Zoph et al. EMNLP 2016