This post is a brief summary of a paper I read for study and out of curiosity: Measuring and Narrowing the Compositionality Gap in Language Models (Press et al., arXiv 2023).
Self-Ask requires a one- or few-shot prompt that demonstrates how to decompose and answer questions.
The Self-Ask prompt starts with these examples and then appends the inference-time question followed by the phrase "Are follow up questions needed here:".
If the LM answers "Yes", it continues by generating sub-questions prefixed with "Follow up:" and answering each one on a line prefixed with "Intermediate answer:".
Once the LM decides it has gathered enough information, it stops generating follow-up questions.
It then outputs the final answer prefixed with "So the final answer is:".
If the LM answers "No" to the follow-up check, it skips the follow-up stage and outputs the final answer directly with the same "So the final answer is:" prefix.
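
To make this control flow concrete, here is a minimal Python sketch of plain Self-Ask (single model, no retrieval). The `llm` argument is a hypothetical text-completion callable, an assumption standing in for any completion API, not code from the paper; the one-shot demonstration is adapted from the example prompt in the paper.

```python
# A minimal sketch of plain self-ask. `llm` is a hypothetical text-completion
# callable (an assumption, not an API from the paper); the one-shot example
# is adapted from the demonstration prompt in the paper.

ONE_SHOT_EXAMPLE = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins
"""

def self_ask(question: str, llm) -> str:
    """Let the model ask and answer its own follow-ups, then return the final answer."""
    prompt = (ONE_SHOT_EXAMPLE
              + f"\nQuestion: {question}\nAre follow up questions needed here:")
    completion = llm(prompt)
    marker = "So the final answer is:"
    # The model either answers directly ("No") or after a chain of follow-ups
    # ("Yes"); either way the final answer comes after the marker.
    return completion.split(marker)[-1].strip() if marker in completion else completion.strip()
```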

For the detailed experiments and analysis, refer to the paper, Measuring and Narrowing the Compositionality Gap in Language Models (Press et al., arXiv 2023).
Note (Abstract):
We investigate the ability of language models to perform compositional reasoning tasks where the overall solution depends on correctly composing the answers to sub-problems. We measure how often models can correctly answer all sub-problems but not generate the overall solution, a ratio we call the compositionality gap. We evaluate this ratio by asking multi-hop questions with answers that require composing multiple facts unlikely to have been observed together during pretraining. In the GPT-3 family of models, as model size increases we show that the single-hop question answering performance improves faster than the multi-hop performance does, therefore the compositionality gap does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, they show no corresponding improvement in their ability to perform this kind of compositional reasoning. We then demonstrate how elicitive prompting (such as chain of thought) narrows the compositionality gap by reasoning explicitly. We present a new method, self-ask, that further improves on chain of thought. In our method, the model explicitly asks itself (and answers) follow-up questions before answering the initial question. We finally show that self-ask's structured prompting lets us easily plug in a search engine to answer the follow-up questions, which additionally improves accuracy.
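
The abstract's last sentence notes that Self-Ask's structured output makes it easy to plug in a search engine to answer the follow-up questions. Below is a hedged sketch of that loop: `llm` and `search` are hypothetical stand-ins for a completion API and a search API (assumptions, not the paper's code), and `ONE_SHOT_EXAMPLE` is the prompt constant from the sketch above. The stop-sequence trick follows the paper's idea of intercepting generation at the "Intermediate answer:" marker.

```python
# Sketch of self-ask combined with a search engine, per the abstract.
# `llm` and `search` are hypothetical callables (assumptions); ONE_SHOT_EXAMPLE
# is reused from the plain self-ask sketch earlier in this post.

FOLLOW_UP = "Follow up:"
INTERMEDIATE = "Intermediate answer:"
FINAL = "So the final answer is:"

def self_ask_with_search(question: str, llm, search, max_hops: int = 5) -> str:
    prompt = (ONE_SHOT_EXAMPLE
              + f"\nQuestion: {question}\nAre follow up questions needed here:")
    for _ in range(max_hops):
        # Halt generation just before the model writes its own sub-answer,
        # so the search engine can supply the intermediate answer instead.
        completion = llm(prompt, stop=[INTERMEDIATE])
        prompt += completion
        if FINAL in completion:
            return completion.split(FINAL)[-1].strip()
        if FOLLOW_UP in completion:
            follow_up_question = completion.split(FOLLOW_UP)[-1].strip()
            # Feed the search result back in the paper's prompt format.
            prompt += f"\n{INTERMEDIATE} {search(follow_up_question)}\n"
    return ""  # no final answer within max_hops follow-ups
```

Because each follow-up question sits on its own marked line, swapping the model's own sub-answer for a search result only changes the "Intermediate answer:" step; that is what makes the search engine easy to plug in.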
References
- Press et al., Measuring and Narrowing the Compositionality Gap in Language Models, arXiv 2023
- How to use HTML for alerts
- How to use MathJax