This is a brief summary, written for my own study and organization, of a paper I read: Predicting and interpreting embeddings for out of vocabulary words in downstream tasks (Garneau et al., EMNLP 2018).

This paper focuses on handling OOV words in downstream tasks.

OOV (out-of-vocabulary) words cause the performance of NLP systems to be underestimated, largely because there is no proper, standard way to handle them.

Most research resorts to simply assigning random embeddings to unknown words, or to mapping them all to a single "unknown" embedding, hoping the model will nevertheless generalize well.
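For reference, here is a minimal sketch of those two baseline strategies (the dictionary, dimension, and function name are hypothetical, for illustration only):

```python
import numpy as np

dim = 300  # hypothetical embedding dimension
pretrained = {"cat": np.random.randn(dim), "dog": np.random.randn(dim)}

unk_embedding = np.random.randn(dim)  # one shared "unknown" vector

def lookup(word, strategy="unk"):
    if word in pretrained:
        return pretrained[word]
    if strategy == "unk":
        # Every OOV word collapses to the same vector.
        return unk_embedding
    # Or: draw a fresh random vector per OOV word.
    return np.random.randn(dim)
```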

In their OOV prediction module, the left context, the right context, and the word's characters are first fed to three bi-LSTMs to produce separate encodings.

These three hidden states are then passed to a linear layer on which a softmax is applied to determine their relative importance (i.e. their degree of attention).

The output of this layer is then used to produce a weighted sum of the hidden states. Finally, a simple layer computes an embedding from this sum.
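Putting the three steps together, here is a minimal PyTorch sketch of such a module. The layer sizes, class name, and the choice of the last bi-LSTM output as each encoding are my assumptions for illustration, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class OOVPredictor(nn.Module):
    """Sketch of the OOV prediction module described above.
    Sizes and details are assumptions, not the paper's exact setup."""

    def __init__(self, word_dim=300, char_dim=50, hidden=128):
        super().__init__()
        # Three separate bi-LSTM encoders: left context, right context, characters.
        self.left_lstm = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
        self.right_lstm = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
        self.char_lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        # Linear layer whose softmaxed scores give each encoding's degree of attention.
        self.attn = nn.Linear(2 * hidden, 1)
        # Simple output layer mapping the weighted sum to a word embedding.
        self.out = nn.Linear(2 * hidden, word_dim)

    def forward(self, left_ctx, right_ctx, chars):
        # Use the last output step of each bi-LSTM as that view's encoding
        # (a simplification; (batch, 2*hidden) each).
        h_left = self.left_lstm(left_ctx)[0][:, -1, :]
        h_right = self.right_lstm(right_ctx)[0][:, -1, :]
        h_char = self.char_lstm(chars)[0][:, -1, :]
        states = torch.stack([h_left, h_right, h_char], dim=1)  # (batch, 3, 2*hidden)
        # Softmax over the three encodings determines their relative importance.
        weights = torch.softmax(self.attn(states), dim=1)       # (batch, 3, 1)
        summed = (weights * states).sum(dim=1)                  # weighted sum
        return self.out(summed)                                 # predicted embedding

# Usage with dummy inputs: 2 examples, 5 context words per side, 8 characters.
model = OOVPredictor()
emb = model(torch.randn(2, 5, 300), torch.randn(2, 5, 300), torch.randn(2, 8, 50))
# emb.shape == (2, 300)
```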

For their experimental results, refer to the paper itself: Predicting and interpreting embeddings for out of vocabulary words in downstream tasks (Garneau et al., EMNLP 2018).

Reference

Garneau et al. (2018). Predicting and interpreting embeddings for out of vocabulary words in downstream tasks. EMNLP 2018.