Why step-back prompting?

RAG models excel at combining retrieved information with their own knowledge to answer questions. However, they can struggle with complex or poorly phrased questions.

Step-back prompting addresses this by encouraging the model to:

  • Abstract the question: Instead of attempting a direct answer, the model rephrases the question into a more general one that exposes the underlying concept or principle.

  • Leverage broader knowledge: This reformulated question allows the model to tap into its wider knowledge base for relevant information.

  • Improve answer accuracy: By understanding the core concept behind the question, the model can generate more accurate and informative responses.

Educative Byte: Imagine a student struggling with a specific math problem involving a right-angled triangle. Step-back prompting would guide them to first recognize that the problem can be solved using the Pythagorean theorem. By stepping back to identify the relevant principle, the student understands that finding the lengths of the sides of the triangle requires applying the formula a² + b² = c², where c is the hypotenuse, thus leading to the solution.
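In prompt terms, the same step-back move amounts to rewriting a narrow question into a broader one before answering. The question pair below is illustrative, not taken from the paper:

```python
# Illustrative (hypothetical) question pair showing the step-back rewrite.
original_q = (
    "A right triangle has legs of 3 cm and 4 cm. "
    "How long is its hypotenuse?"
)
step_back_q = (
    "What principle relates the side lengths of a right-angled triangle?"
)
# A step-back prompt asks the model to produce step_back_q from original_q
# first, so the Pythagorean theorem surfaces before the specific answer.
```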

What is step-back prompting?

Step-back prompting involves a two-stage process:

  1. Paraphrasing to a generic question: The model is prompted to rewrite the user’s question into a more general one. This step helps uncover the underlying concept or principle.

  2. Answering the reformulated question: The model generates a comprehensive answer to the original user query using the step-back question and the retrieved information, as sketched below.
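
The sketch below shows one way these two stages might be wired together in a RAG loop. It is a minimal illustration, assuming the OpenAI Python client; the model name and the `retrieve()` helper are placeholders for whatever LLM and retriever your pipeline actually uses.

```python
# Minimal step-back prompting sketch for a RAG pipeline.
# Assumptions (not from the original text): the OpenAI Python client,
# the "gpt-4o-mini" model name, and a placeholder retrieve() function
# standing in for your vector store.
from openai import OpenAI

client = OpenAI()


def retrieve(query: str) -> str:
    """Placeholder: return relevant passages from your retriever."""
    raise NotImplementedError


def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def step_back_answer(question: str) -> str:
    # Stage 1: abstract the specific question into a more general one.
    step_back_q = ask(
        "Rewrite the following question as a more general question about "
        f"the underlying concept or principle:\n\n{question}"
    )
    # Retrieve context for both the original and the step-back question.
    context = retrieve(question) + "\n" + retrieve(step_back_q)
    # Stage 2: answer the original question, grounded in both contexts.
    return ask(
        f"Background question: {step_back_q}\n"
        f"Retrieved context:\n{context}\n\n"
        f"Using the context above, answer: {question}"
    )
```

The idea behind retrieving for both questions is that the final prompt then carries the specific facts alongside the broader principle, giving the model the wider perspective that step-back prompting aims for.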

This method, as discussed in the paper "Take a step back: Evoking reasoning via abstraction in large language models" by Zheng et al. (arXiv:2310.06117, 2023), enhances the model's ability to provide well-rounded responses by encouraging a broader perspective.
