Communicating with AI
Learn to harness advanced prompting techniques—from n-shot and chain-of-thought to role prompting.
Imagine you’re teaching a robot how to bake a cake—except the robot only speaks in riddles, and you have to be extra clever in how you phrase your instructions. That’s basically what prompting is all about. We’ve entered a fascinating era where we can direct language models to do our bidding by typing plain instructions (or sometimes code-like instructions). This new way of interacting with models—where our natural language becomes a high-level abstraction—shares some similarities with programming but doesn’t replace traditional coding. Instead, it complements software development by enabling us to direct language models intuitively and creatively. People often laugh at titles like prompt engineer, but it’s no joke. This new discipline is carving out its niche, where careful wording and creative thinking often trump traditional coding skills.
At the heart of this shift is the realization that large language models (LLMs) represent an entirely new abstraction layer. Think of traditional machine learning as the foundation—a system where mathematicians and data scientists train smaller networks from scratch. Then, along comes a specialized group of experts—LLM engineers—who focus on building, training, and optimizing massive transformer models on supercomputers. Most of us, however, aren’t wrestling with supercomputers; we’re standing on top of these finished monuments. Today’s AI engineers integrate these models into real-world applications through APIs, toolchains, and even fine-tuning, effectively acting like full-stack developers who master both the frontend (prompting) and the backend (connecting prompts with tools, building agentic workflows, and handling inference pipelines).
So yes, prompting might look like child’s play—just typing a phrase in English or another language—but make no mistake: it’s an art form that can define whether your AI acts like a clueless intern or an expert consultant. Prompt design will evolve as models get smarter and the systems around them become more complex. You’ll see code frameworks, chains, and agentic workflows all working to get that perfect output from an AI. But through all these layers, prompting remains the beating heart—the secret sauce that helps us talk to these giant neural networks and make them do what we want. Ultimately, no matter how fancy our tools become, there’s always a human in the loop, whispering instructions into the AI’s ear—just in the right way. After understanding why prompting is critical, let’s explore key techniques.
What is n-shot prompting?
One of the most foundational techniques is n-shot prompting, which determines how many examples you give the AI before it answers.
Imagine you’re trying to teach a friend a new card trick without spending hours training them. You show them just a couple of examples—maybe two or three demonstrations—and then let them try it themselves. That’s the basic idea behind n-shot prompting: you provide an AI model with a small number of examples (n of them) inside the prompt so it knows the kind of response you want. When zero examples are given, we call that zero-shot prompting. Essentially, you’re showing the model what you expect through your instruction alone (zero-shot) or through your instruction plus a few examples (few-shot).
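To make the idea concrete, here is a minimal sketch of how an n-shot prompt can be assembled as plain text before it is sent to any model. The helper function, the sentiment task, and the example inputs are all illustrative assumptions, not tied to any particular API—with an empty example list, the same function produces a zero-shot prompt.

```python
def build_n_shot_prompt(instruction, examples, query):
    """Assemble a prompt from an instruction, n worked examples, and a new query.

    With examples=[] this degenerates to a zero-shot prompt (instruction only).
    """
    parts = [instruction]
    for example_input, example_output in examples:
        # Each demonstration shows an input paired with the output we expect.
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    # The prompt ends mid-pattern so the model completes the final "Output:".
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


instruction = "Classify the sentiment of the input as Positive or Negative."

# Zero-shot: the instruction alone must carry all the information.
zero_shot = build_n_shot_prompt(instruction, [], "I loved this movie!")

# Few-shot (here, 2-shot): two demonstrations pin down the format and style.
few_shot = build_n_shot_prompt(
    instruction,
    [
        ("The plot dragged on forever.", "Negative"),
        ("A delightful surprise from start to finish.", "Positive"),
    ],
    "I loved this movie!",
)
print(few_shot)
```

Note how the examples do double duty: they teach the task and simultaneously fix the output format ("Positive" or "Negative", nothing else), which is often the main practical payoff of adding shots.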
Why does this matter? Think of an AI like a new intern—sometimes, you need to give a quick demo of how something is done so they catch on to your style. With few-shot prompting, you place these ...