This post is a brief summary of a paper I read out of study and curiosity: Measuring and Narrowing the Compositionality Gap in Language Models (Press et al., arXiv 2023). Below, I briefly arrange the content of the paper.

Self-Ask requires a one- or few-shot prompt that demonstrates how to answer questions by decomposing them into follow-up questions.

The Self-Ask prompt starts with these examples; the inference-time question is then appended, followed by the phrase “Are follow up questions needed here:”, as in the example below.
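
For concreteness, here is what a one-shot Self-Ask prompt looks like. The demonstration is the example used in the paper; the last two lines are the inference-time question appended to the prompt (with `<inference-time question>` as a placeholder):

```
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins

Question: <inference-time question>
Are follow up questions needed here:
```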

If the LM answers “Yes” to this question, it continuously generates follow-up questions prefixed with “Follow up:” and answers each one after “Intermediate answer:”.

Once the LM decides it has gathered enough information, it stops generating follow-up questions.

It then outputs the final answer after the phrase “So the final answer is:”.

If the LM instead answers “No” (no follow-up questions are needed), it responds to the question directly, again prefixing the answer with “So the final answer is:”.
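
Putting the two branches together, inference is a simple generate-and-append loop. Below is a minimal Python sketch, not the authors' implementation: `lm(prompt, stop)` is a hypothetical wrapper around a completion API that cuts generation right before the first stop string, and `answer_follow_up` is an optional hook for answering follow-ups externally (the paper's Self-Ask + Search variant answers them with a search engine); if the hook is left as None, the LM answers its own follow-ups as described above.

```python
FOLLOW_UP = "Follow up:"
INTERMEDIATE = "Intermediate answer:"
FINAL = "So the final answer is:"


def self_ask(few_shot_prompt, question, lm, answer_follow_up=None, max_steps=8):
    """Answer `question` with the Self-Ask loop (a sketch, not the paper's code).

    lm(prompt, stop) -> str: hypothetical LM wrapper returning the
        completion, cut right before the first stop string it emits.
    answer_follow_up(q) -> str: optional external answerer for the
        follow-up questions (e.g. a search engine); None lets the LM
        answer its own follow-ups.
    """
    # Examples first (assumed to end with a blank line), then the
    # inference-time question and the elicitive suffix.
    prompt = few_shot_prompt + f"Question: {question}\nAre follow up questions needed here:"
    for _ in range(max_steps):
        # Stop before each intermediate answer so one can be injected,
        # and before the model starts inventing a new "Question:" block.
        completion = lm(prompt, stop=[INTERMEDIATE, "\nQuestion:"])
        prompt += completion
        if FINAL in completion:
            # Either the "No" branch or the end of a follow-up chain:
            # the answer is whatever follows the final-answer marker.
            return completion.split(FINAL, 1)[1].strip()
        if answer_follow_up is None:
            prompt += INTERMEDIATE  # the LM answers its own follow-up next turn
        else:
            latest = completion.rsplit(FOLLOW_UP, 1)[1].strip()
            prompt += f"{INTERMEDIATE} {answer_follow_up(latest)}\n"
    raise RuntimeError("no final answer within max_steps")
```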

(Figure from Press et al., arXiv 2023.)

For detailed experiments and explanations, refer to the paper, Measuring and Narrowing the Compositionality Gap in Language Models (Press et al., arXiv 2023).

Reference

Press et al., “Measuring and Narrowing the Compositionality Gap in Language Models,” arXiv, 2023.