This is a brief summary, for my own study and organization, of the paper An efficient framework for learning sentence representations (Logeswaran and Lee, ICLR 2018), which I read and studied.

They propose a simple method, called Quick Thought Vectors, for representing a sentence as a fixed-length vector.

The method is similar to the skip-gram method of word2vec; i.e., they extend skip-gram from the word level to the sentence level.

Unlike skip-thought vectors, the difference is that they use only an encoder, an RNN variant (here a GRU), to map a sentence to a fixed-length vector; there is no decoder.
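A minimal sketch of what such a GRU encoder could look like in PyTorch. The embedding size, hidden size, and the choice of the final hidden state as the sentence vector are illustrative assumptions, not necessarily the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Encodes a batch of token-id sequences into fixed-length vectors.

    A sketch of a Quick Thought style encoder; sizes are assumptions
    for illustration.
    """
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=1200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> embedded: (batch, seq_len, embed_dim)
        embedded = self.embed(token_ids)
        # Use the final GRU hidden state as the fixed-length sentence vector.
        _, last_hidden = self.gru(embedded)   # (1, batch, hidden_dim)
        return last_hidden.squeeze(0)         # (batch, hidden_dim)
```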

They replace the decoder of skip-thought, which generates the adjacent sentences word by word, with a discriminative approximation: a classifier that decides whether a candidate context sentence is actually adjacent to the input sentence. Concretely, the input sentence and each candidate are encoded by two separate encoders f and g, each candidate is scored by the inner product of the two encodings, and a softmax over the candidate set is trained to pick out the true context sentences; the other sentences in the minibatch serve as contrastive (negative) candidates.
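Below is a hedged sketch of this discriminative objective in PyTorch. It assumes the minibatch holds S contiguous sentences from the corpus, so the true context sentences for sentence i are i-1 and i+1, with the rest of the batch acting as negatives; the function and variable names are my own, not the paper's:

```python
import torch
import torch.nn.functional as F

def quick_thought_loss(f_enc, g_enc, batch_ids):
    """Discriminative objective over a minibatch of contiguous sentences.

    f_enc / g_enc: two separate encoders (f for the input sentence,
    g for the candidate context sentences). batch_ids: (S, seq_len)
    token ids of S consecutive sentences.
    """
    u = f_enc(batch_ids)        # (S, dim) input-sentence vectors
    v = g_enc(batch_ids)        # (S, dim) candidate-context vectors
    scores = u @ v.t()          # (S, S) inner-product classifier scores
    S = scores.size(0)
    # A sentence should not be scored as its own context.
    mask = torch.eye(S, dtype=torch.bool, device=scores.device)
    scores = scores.masked_fill(mask, float('-inf'))
    log_probs = F.log_softmax(scores, dim=1)
    loss = 0.0
    # Maximize the probability of the previous and next sentences.
    for i in range(S):
        if i > 0:
            loss = loss - log_probs[i, i - 1]
        if i < S - 1:
            loss = loss - log_probs[i, i + 1]
    return loss / S
```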

The method is as follows:

[Figure: overview of the Quick Thought training objective, from Logeswaran and Lee, ICLR 2018]
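As a rough end-to-end illustration, the two sketches above could be combined into a single training step as follows. The minibatch size of 400 and the Adam learning rate of 5e-4 follow the paper's reported setup as I understand it; the vocabulary size and the random dummy data are assumptions purely for shape-checking:

```python
import torch

f_enc = SentenceEncoder(vocab_size=50000)
g_enc = SentenceEncoder(vocab_size=50000)
params = list(f_enc.parameters()) + list(g_enc.parameters())
optimizer = torch.optim.Adam(params, lr=5e-4)

# 400 contiguous sentences of 30 token ids each (dummy data for the sketch).
batch_ids = torch.randint(0, 50000, (400, 30))
loss = quick_thought_loss(f_enc, g_enc, batch_ids)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

For downstream tasks, the paper uses the concatenation of the outputs of the two encoders f and g as the final sentence representation.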

Reference

Logeswaran, L. and Lee, H. An efficient framework for learning sentence representations. ICLR 2018.