This is a brief summary of the paper Cross-lingual Language Model Pretraining (Conneau and Lample, NeurIPS 2019), written to help me study and organize what I read.
Cross-lingual Language Model Pretraining (Conneau and Lample, NeurIPS 2019)
For a detailed analysis of the experiments, see the paper: Cross-lingual Language Model Pretraining (Conneau and Lample, NeurIPS 2019).
Note (Abstract):
Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, they extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. They propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. They obtain state-of-the-art results on cross-lingual classification, and on unsupervised and supervised machine translation. On XNLI, their approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, they obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, they obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU.
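To make the supervised objective more concrete, here is a minimal sketch (my own illustration, not the authors' code) of how a translation language modeling (TLM) training instance could be built: a parallel sentence pair is concatenated, and tokens are masked in both halves, so the model can attend to context in either language to predict them. The function name, the 15% masking rate, and the toy sentences are assumptions for illustration; the language embeddings and the position indices that restart for the target sentence follow the paper's description.

```python
import random

MASK, PAD = "[MASK]", "[PAD]"

def make_tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=None):
    """Sketch of one TLM-style training instance: concatenate a parallel
    sentence pair and randomly mask tokens in BOTH halves, so the model
    can use cross-lingual context to fill in the blanks."""
    rng = random.Random(seed)
    tokens = src_tokens + tgt_tokens
    # Language embeddings: 0 for the source half, 1 for the target half.
    langs = [0] * len(src_tokens) + [1] * len(tgt_tokens)
    # Position indices restart at 0 for the target sentence (as in the paper).
    positions = list(range(len(src_tokens))) + list(range(len(tgt_tokens)))

    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)   # loss is computed on the original token here
        else:
            inputs.append(tok)
            labels.append(PAD)   # no loss on unmasked positions
    return inputs, labels, langs, positions


if __name__ == "__main__":
    # Hypothetical toy parallel pair, just to show the output shape.
    en = "the cat sits on the mat".split()
    de = "die Katze sitzt auf der Matte".split()
    inp, lab, langs, pos = make_tlm_example(en, de, seed=0)
    print(inp)
    print(lab)
```

The unsupervised variant works the same way but on a single monolingual sentence (standard MLM/CLM), without the second half and without parallel data.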
Reference
- Paper
- How to use HTML for alerts
- For your information