
Handling Ambiguity, Safety, and Prompt Quality

Explore techniques to design AI prompts that handle ambiguous or incomplete queries by asking clarifying questions or stating assumptions. Understand how to build safety guardrails through refusal behaviors and incorporate self-correction to improve output quality. Learn to identify common prompt issues and apply systematic troubleshooting to create robust, production-ready AI prompts that deliver reliable and safe results.

So far, the examples have focused on simple cases where the user's intent is clear and the task is well defined. Production systems rarely operate under those conditions. Users may provide incomplete information, ask out-of-scope questions, or attempt actions the system isn't designed to support.

Consider an AI-powered appointment booking bot for a medical clinic. A user types, “I need to see a doctor next week afternoon.” A naive prompt, eager to be helpful, might guess what the user means, perhaps booking an appointment for “next Wednesday at 2:00 p.m.” If this guess is wrong, the bot has just created a real-world problem for both the user and the clinic.

This is the core challenge of building production-grade AI systems. Our prompts must be resilient: engineered to handle not only well-formed inputs but also ambiguous, incomplete, or adversarial ones. In this lesson, we will learn to design prompts that gracefully handle ambiguity, refuse unsafe requests, and critique and refine their own outputs, and to troubleshoot prompts systematically when they underperform.

Handling ambiguous user queries

An AI’s tendency to guess when faced with an incomplete query is a major source of error and user frustration. Our first and most important line of defense is to engineer the prompt to stop guessing and start clarifying.

Instructing the model to ask clarifying questions

For any interactive application, the most robust way to handle ambiguity is to turn the conversation back to the user. Instead of allowing the model to make a potentially incorrect assumption, we can explicitly instruct it to ask for the information it needs. Let's re-engineer the prompt for our medical appointment bot, starting from the naive version below.

System prompt: You are an appointment booking assistant for a medical clinic. Your goal is to schedule an appointment based on the user's request. Extract the doctor's name, requested day, and time, and confirm the booking.

User prompt: I need to see a doctor next week afternoon.

Assistant response: Okay, I have booked an appointment for you with Dr. Evans for next Wednesday at 2:00 PM. Is that correct?
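Before adding that rule, it helps to see how this baseline prompt is typically wired up in code. The sketch below is illustrative only, assuming the OpenAI Python SDK; the model name and the book_appointment helper are stand-ins, not part of the lesson's setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are an appointment booking assistant for a medical clinic. "
    "Your goal is to schedule an appointment based on the user's request. "
    "Extract the doctor's name, requested day, and time, and confirm the booking."
)

def book_appointment(user_message: str) -> str:
    """Send the naive booking prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; substitute your own
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(book_appointment("I need to see a doctor next week afternoon."))
# With no clarification rule, the model is free to guess a doctor, day, and
# time, producing something like the confident but unfounded booking above.

Because nothing in this system prompt tells the model what to do when details are missing, its only options are to guess or to stall. The rule we add next gives it a third option: asking.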

Now, let’s add a rule for handling ...