This is a brief summary, written for my own study, of the paper "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond" (Artetxe and Schwenk, arXiv 2019).

This paper proposes learning multilingual sentence embeddings with a single BiLSTM encoder that is shared across a large number of languages.
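As a rough sketch of this idea (my own illustration in PyTorch, not the authors' code; the layer sizes here are illustrative, not the paper's exact configuration), a shared encoder embeds BPE tokens from a joint vocabulary, runs them through a BiLSTM, and max-pools the hidden states over time to obtain a fixed-size sentence embedding:

```python
import torch
import torch.nn as nn

class SharedBiLSTMEncoder(nn.Module):
    """One encoder for all languages: a single embedding table over the
    joint BPE vocabulary, a BiLSTM, and max-pooling over time steps."""
    def __init__(self, vocab_size=50000, embed_dim=320, hidden_dim=512):
        super().__init__()
        # Embedding table shared by every language (joint BPE vocabulary).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) indices of BPE tokens
        states, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        # Max-pool over the time dimension -> fixed-size sentence embedding.
        return states.max(dim=1).values

encoder = SharedBiLSTMEncoder()
ids = torch.randint(0, 50000, (2, 7))  # two sentences of 7 BPE tokens each
emb = encoder(ids)
print(emb.shape)  # (2, 1024): 2 * hidden_dim from the two LSTM directions
```

Because the pooling collapses the time dimension, sentences of any length in any language map to vectors of the same size, which is what makes the embeddings comparable across languages.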

An interesting design choice is the joint byte-pair encoding (BPE) vocabulary with 50k operations, which is learned on the concatenation of all training corpora so that all languages share one vocabulary.
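To make the "50k operations" concrete, here is a toy sketch (my own illustration, not the authors' code) of how BPE merge operations are learned: repeatedly count adjacent symbol pairs and merge the most frequent one. The paper does this 50,000 times over the concatenated corpora of all languages; the tiny corpus below uses just three merges.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn `num_merges` BPE merge operations from a list of words."""
    # Represent each word as a tuple of characters, with its frequency.
    vocab = Counter(tuple(w) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count how often each adjacent symbol pair occurs.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges.append(best)
        # Apply the merge everywhere in the vocabulary.
        merged = {}
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        vocab = merged
    return merges

corpus = ["low", "lower", "lowest", "low"]
print(learn_bpe(corpus, 3))  # [('l', 'o'), ('lo', 'w'), ('low', 'e')]
```

Learning the merges on the concatenation of all corpora means frequent subwords shared between languages (e.g. common Latin-script fragments) end up as single vocabulary entries, which is what lets one embedding table serve every language.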

Reference

Artetxe, M. and Schwenk, H. (2019). Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. arXiv.