This is a brief note summarizing the paper Transfer Learning for Low-Resource Neural Machine Translation (Zoph et al., EMNLP 2016).

The paper experiments with a variety of transfer-learning scenarios for low-resource language pairs.

Their key idea: first train a sequence-to-sequence model on a high-resource language pair (the parent model).

Then transfer some of the learned parameters to the low-resource language pair (the child model) to initialize and constrain its training.
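
As a rough illustration, here is a minimal sketch (not the authors' code) of that transfer step, assuming a PyTorch-style encoder-decoder. The class and names (Seq2Seq, src_embed, tgt_embed, transfer, freeze_target_side) are my own, and the exact subset of parameters the paper transfers or freezes varies across its experiments.

```python
# Minimal sketch of parent->child parameter transfer, assuming a PyTorch
# encoder-decoder. Names and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn


class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, dim)   # source-language specific
        self.tgt_embed = nn.Embedding(tgt_vocab, dim)   # target language shared with parent
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_embed(src))
        dec, _ = self.decoder(self.tgt_embed(tgt), state)
        return self.out(dec)


def transfer(parent: Seq2Seq, child: Seq2Seq, freeze_target_side=True):
    """Initialize the child with parent parameters, skipping the source
    embeddings (the source languages differ), then optionally freeze the
    transferred target-side parameters to constrain child training."""
    parent_state = {k: v for k, v in parent.state_dict().items()
                    if not k.startswith("src_embed")}
    child.load_state_dict(parent_state, strict=False)
    if freeze_target_side:
        for name, p in child.named_parameters():
            if name.startswith(("tgt_embed", "out")):
                p.requires_grad = False
    return child


# Usage: train the parent on the high-resource pair, then fine-tune the child.
parent = Seq2Seq(src_vocab=30000, tgt_vocab=20000)   # e.g. French -> English
child = Seq2Seq(src_vocab=8000, tgt_vocab=20000)     # e.g. Uzbek  -> English
child = transfer(parent, child)
optimizer = torch.optim.SGD(
    [p for p in child.parameters() if p.requires_grad], lr=0.5)
```

The point being illustrated: everything except the source-side embeddings can be copied from the parent, and the copied target-side parameters can be kept fixed so that training on the small child corpus initializes from, and stays constrained by, what the parent learned.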

Reference

Barret Zoph, Deniz Yuret, Jonathan May, Kevin Knight. Transfer Learning for Low-Resource Neural Machine Translation. EMNLP 2016.