This is a brief summary I wrote to study and organize the paper Learned in Translation: Contextualized Word Vectors (McCann et al., NIPS 2017).
The paper focuses on transfer learning from a machine translation task to downstream NLP tasks.
Learned in Translation: Contextualized Word Vectors
Note (Abstract):
Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, the authors use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. They show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves the performance of their baseline models to the state of the art.
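To make the mechanism concrete, below is a minimal PyTorch-style sketch of my own, not the authors' released code; the class name `CoVeEncoder` and the exact dimensions are illustrative assumptions. The structure follows the paper: a two-layer bidirectional LSTM encoder, pretrained as the encoder of an attentional English-to-German MT model, runs over GloVe embeddings, and downstream task models consume the concatenation [GloVe(w); CoVe(w)].

```python
# Minimal sketch (assumed PyTorch), not the authors' released implementation.
# CoVe(w) = MT-LSTM(GloVe(w)): a 2-layer BiLSTM encoder pretrained for MT,
# reused as a fixed feature extractor for downstream tasks.
import torch
import torch.nn as nn

class CoVeEncoder(nn.Module):  # hypothetical class name, for illustration
    def __init__(self, glove_dim=300, hidden_dim=300):
        super().__init__()
        # Two-layer bidirectional LSTM, mirroring the paper's MT encoder.
        self.lstm = nn.LSTM(glove_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)

    def forward(self, glove_vectors):
        # glove_vectors: (batch, seq_len, 300) pretrained GloVe embeddings
        cove, _ = self.lstm(glove_vectors)  # (batch, seq_len, 600)
        # Downstream models take the concatenation [GloVe(w); CoVe(w)].
        return torch.cat([glove_vectors, cove], dim=-1)  # (batch, seq_len, 900)

# Shape check only: real CoVe loads the pretrained MT encoder weights,
# whereas this encoder is randomly initialized.
encoder = CoVeEncoder()
dummy = torch.randn(2, 7, 300)  # batch of 2 sentences, 7 tokens each
print(encoder(dummy).shape)     # torch.Size([2, 7, 900])
```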
Download URL:
The paper: Learned in Translation: Contextualized Word Vectors (McCann et al., NIPS 2017)
Reference
- Paper: Learned in Translation: Contextualized Word Vectors (McCann et al., NIPS 2017)