RaR Prompt: ‘Rephrase and Respond’ is AI’s New Superpower

The New Secret Method Boosting Accuracy Like Never Before!

Marko Vidrih
3 min read · Nov 29, 2023


The world of artificial intelligence has witnessed a groundbreaking development that’s set to redefine how we interact with Large Language Models (LLMs). Dubbed ‘Rephrase and Respond’ (RaR), this innovative method is transforming the way AI understands and responds to human queries. Let’s dive into the intricacies of this revolutionary technique and discover how it’s enhancing AI performance to unprecedented levels.

The Genesis of RaR

RaR emerged from the need to bridge the gap between human and AI thought processes. Traditional methods often led to misunderstandings, as LLMs like GPT-4 struggled with the inherent ambiguities in human queries. RaR tackles this by enabling AI to rephrase questions, thereby gaining clarity and precision in understanding and responding.

A badly crafted QA example, shown in red, results in the LLM following the provided logic but reaching an arbitrary answer. Meanwhile, the RaR prompt can successfully correct the pitfalls in the few-shot examples and improve the robustness and efficacy of few-shot CoT.

One-step RaR: Simplifying AI Responses

One-step RaR is a marvel of simplicity and efficiency. By rephrasing and expanding a question in a single prompt, it enhances the LLM’s response accuracy. This method mirrors human communication strategies, emphasizing clarity and coherence. Its effectiveness across various reasoning tasks is a testament to its revolutionary impact.
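To make this concrete, here is a minimal sketch of One-step RaR using the OpenAI Python client. The prompt wording follows the description above ("rephrase and expand the question, and respond") but is illustrative rather than quoted from the paper, and the example question and model name are assumptions.

```python
# Minimal sketch of One-step RaR (illustrative; prompt wording and model
# choice are assumptions based on the description above).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def one_step_rar(question: str, model: str = "gpt-4") -> str:
    # A single prompt asks the model to rephrase and expand the question
    # before answering it, all in one completion.
    prompt = (
        f'"{question}"\n'
        "Rephrase and expand the question, and respond."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(one_step_rar("Was Abraham Lincoln born on an even day?"))
```

Because everything happens in one call, One-step RaR adds almost no overhead: the model's own rephrasing becomes part of its answer, which is where the accuracy gains come from.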

Two-step RaR: The Advanced Approach

Two-step RaR takes the concept further by involving a rephrasing LLM to refine the question first. The original and rephrased questions are then combined to guide the responding LLM. This method shines in transferring quality improvements from advanced models like GPT-4 to less sophisticated ones, ensuring a fairer assessment of LLM capabilities.
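The sketch below illustrates the two-step variant under the same assumptions: a stronger model rephrases the question, and a second model answers given both the original and the rephrased versions. The prompt templates are paraphrased from the description above, not copied from the paper.

```python
# Minimal sketch of Two-step RaR (illustrative; the exact prompt templates
# and model pairing are assumptions, not the paper's verbatim wording).
from openai import OpenAI

client = OpenAI()


def two_step_rar(question: str,
                 rephrase_model: str = "gpt-4",
                 respond_model: str = "gpt-3.5-turbo") -> str:
    # Step 1: a (typically stronger) rephrasing LLM rewrites and expands
    # the question while keeping all of its original information.
    rephrase_prompt = (
        f'"{question}"\n'
        "Given the above question, rephrase and expand it to help you do "
        "better answering. Maintain all information in the original question."
    )
    rephrased = client.chat.completions.create(
        model=rephrase_model,
        messages=[{"role": "user", "content": rephrase_prompt}],
    ).choices[0].message.content

    # Step 2: the responding LLM answers using the original and rephrased
    # questions together.
    respond_prompt = (
        f"(original) {question}\n"
        f"(rephrased) {rephrased}\n"
        "Use your answer for the rephrased question to answer the original question."
    )
    return client.chat.completions.create(
        model=respond_model,
        messages=[{"role": "user", "content": respond_prompt}],
    ).choices[0].message.content
```

Splitting the work this way is what lets a rephrasing done by GPT-4 benefit a weaker responding model such as Vicuna.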

Accuracy (%) comparison of different prompts using GPT-4. Both One-step RaR and Two-step RaR effectively improve the accuracy of GPT-4 across 10 tasks.

Unprecedented Performance Enhancements

The RaR method has shown remarkable success in improving the accuracy of LLMs, particularly in tasks that were previously challenging due to ambiguities. Its applicability across different models from GPT-4 to Vicuna highlights its flexibility and universal utility.

RaR vs. Chain-of-Thought (CoT)

A comparison with the Chain-of-Thought method reveals RaR’s unique strengths. While CoT appends reasoning instructions at the beginning or end of a query, RaR directly rewrites the query itself. This makes RaR more cost-effective in terms of token usage, and the two techniques can be combined, with RaR enhancing CoT’s effectiveness, as sketched below.
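One simple way to combine the two, shown here as an illustrative sketch, is to modify the query with the RaR instruction and then append a zero-shot CoT trigger. The exact wording is an assumption, not the paper's template.

```python
# Illustrative only: combining RaR (rewriting the query) with zero-shot CoT
# (appending a reasoning trigger). Wording is an assumption.
def rar_plus_cot_prompt(question: str) -> str:
    return (
        f'"{question}"\n'
        "Rephrase and expand the question, and respond.\n"
        "Let's think step by step."
    )
```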

Comparison of GPT-4’s rephrased questions with Vicuna’s self-rephrased questions.
Accuracy (%) of GPT-4-0613, GPT-3.5-turbo-0613, and Vicuna-13b when tested on original and self-rephrased questions using Two-step RaR.

The Future of AI Interaction

RaR’s introduction is more than just an improvement in LLMs’ performance; it’s a leap towards more nuanced and sophisticated AI-human interactions. The ability of LLMs to understand and respond with greater clarity and precision opens up new possibilities in AI applications, from customer service to education.
