What is Prompt Engineering?
The art of talking to AI. How the way you phrase your request changes everything about the response.
You have access to the same AI as everyone else. Same ChatGPT. Same Claude. Same models.
But some people get incredible results. Others get garbage.
The difference? How they ask.
The basic idea
Large language models predict text. They complete patterns.
When you write a prompt, you're setting up a pattern for the model to complete.
Weak pattern:
"Write about dogs."
The model could go anywhere. A Wikipedia article? A poem? A sales pitch? It's guessing what you want.
Strong pattern:
"Write a 200-word blog post about why golden retrievers make great family pets. Use a warm, conversational tone. Include one personal anecdote."
Now the model knows exactly what pattern to complete.
Why it matters
Here's the uncomfortable truth: the same model can seem dumb or brilliant depending on how you prompt it.
Ask ChatGPT "What's the capital of France?" and it answers correctly.
Ask "What's the capital of the country where the Eiffel Tower is located?" and it also answers correctly. Simple questions forgive sloppy phrasing.
But ask complex questions with vague prompts? You'll get vague answers. The model isn't reading your mind. It's completing the pattern you gave it.
Core techniques
1. Be specific
Vague: "Help me with my resume."
Specific: "Review my resume for a senior software engineer role at Google. Focus on: 1) Whether my experience section shows impact with metrics, 2) If my skills section matches what big tech looks for, 3) Any red flags a recruiter might notice. Here's my resume: [paste resume]"
The specific prompt tells the model exactly what to do. The vague one makes it guess.
2. Give examples (few-shot prompting)
Models learn patterns. Show them the pattern you want.
"Convert these sentences to formal business language:
Casual: Hey, can you send that report when you get a chance?
Formal: At your earliest convenience, please forward the quarterly report.

Casual: I messed up the numbers in the spreadsheet.
Formal: I identified an error in the spreadsheet calculations.

Casual: This idea is kind of risky but might work.
Formal: "
The model sees the pattern and continues it. Much more reliable than just explaining what you want.
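If you're sending many of these, it helps to assemble the few-shot prompt in code rather than by hand. A minimal sketch (the `build_few_shot_prompt` helper is hypothetical, not from any library) that reproduces the pattern above:

```python
# Hypothetical helper: assemble a few-shot prompt from example pairs.
# The final open-ended "Formal:" cue is what the model completes.

EXAMPLES = [
    ("Hey, can you send that report when you get a chance?",
     "At your earliest convenience, please forward the quarterly report."),
    ("I messed up the numbers in the spreadsheet.",
     "I identified an error in the spreadsheet calculations."),
]

def build_few_shot_prompt(task: str, examples, query: str) -> str:
    """Instruction first, then worked examples, then the unfinished case."""
    lines = [task, ""]
    for casual, formal in examples:
        lines.append(f"Casual: {casual}")
        lines.append(f"Formal: {formal}")
        lines.append("")
    lines.append(f"Casual: {query}")
    lines.append("Formal:")  # left open for the model to continue the pattern
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert these sentences to formal business language:",
    EXAMPLES,
    "This idea is kind of risky but might work.",
)
print(prompt)
```

The same helper works for any transformation task: swap the example pairs and the labels, keep the structure.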
3. Assign a role
Giving the AI a persona changes its behavior.
"You are a senior editor at The New York Times with 20 years of experience. Review this article and provide feedback on structure, clarity, and newsworthiness."
The model has seen text from (and about) NYT editors. Invoking that role pulls in those patterns.
4. Chain of thought
For complex reasoning, ask the model to think step by step.
Without chain of thought: "If a train leaves Chicago at 2pm going 60mph, and another leaves NYC at 3pm going 80mph, when do they meet?"
With chain of thought: "If a train leaves Chicago at 2pm going 60mph, and another leaves NYC at 3pm going 80mph, when do they meet? Think through this step by step, showing your work."
Forcing the model to "show its work" produces more accurate answers on reasoning tasks.
┌─────────────────────────────────────────────────────────┐
│                    CHAIN OF THOUGHT                     │
│                                                         │
│  Without: Question ───────────────▶ Answer              │
│                                    (often wrong)        │
│                                                         │
│  With:    Question ─▶ Step 1 ─▶ Step 2 ─▶ Answer        │
│                       "First..."  "Then..."             │
│                                                         │
│  The reasoning steps help the model stay on track.      │
└─────────────────────────────────────────────────────────┘
5. Set constraints
Tell the model what NOT to do, or set boundaries.
"Explain quantum computing to a 10-year-old.
- Use simple analogies
- Avoid technical jargon
- Keep it under 150 words
- Don't mention wave functions or probability amplitudes"
Constraints focus the output.
Advanced patterns
System prompts
Many AI interfaces let you set a "system prompt" that frames all subsequent conversation.
System: "You are a helpful coding assistant. Always provide code examples. When showing code, include comments explaining each section. If you're unsure about something, say so."
User: "How do I read a file in Python?"
The system prompt sets persistent behavior.
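Under the hood, most chat APIs take a list of role-tagged messages, with the system message first. A minimal sketch (the actual client call varies by provider, so it's left out; `make_conversation` is a hypothetical helper):

```python
# Most chat APIs accept a list of {"role": ..., "content": ...} messages.
# The system message comes first and frames every later turn.

system_prompt = (
    "You are a helpful coding assistant. Always provide code examples. "
    "When showing code, include comments explaining each section. "
    "If you're unsure about something, say so."
)

def make_conversation(user_message: str) -> list[dict]:
    """Build a message list: persistent system framing, then the user's turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = make_conversation("How do I read a file in Python?")
```

Every new user turn gets appended to this list; the system message stays at the top, which is why its instructions persist.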
Output formatting
Tell the model exactly how to format its response.
"Analyze this company's strengths and weaknesses. Format your response as:
Strengths:
- [bullet points]
Weaknesses:
- [bullet points]
Summary: [one paragraph]"
You'll get a structured response you can actually use.
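A fixed format also means you can pull the reply apart with plain string handling. A sketch, assuming the model followed the format above (the `sample_reply` stands in for a real model response):

```python
# Parse a "Strengths / Weaknesses / Summary" reply into a dict.
# sample_reply is a stand-in for what a model might return.

sample_reply = """Strengths:
- Strong brand recognition
- Loyal customer base

Weaknesses:
- High production costs

Summary: A solid company with a cost problem."""

def parse_sections(text: str) -> dict:
    """Walk the reply line by line, tracking which section we're in."""
    result = {"Strengths": [], "Weaknesses": [], "Summary": ""}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Strengths:"):
            current = "Strengths"
        elif line.startswith("Weaknesses:"):
            current = "Weaknesses"
        elif line.startswith("Summary:"):
            result["Summary"] = line[len("Summary:"):].strip()
            current = None
        elif line.startswith("- ") and current:
            result[current].append(line[2:])
    return result

parsed = parse_sections(sample_reply)
```

If you need stricter guarantees, ask for JSON instead of bullets; the parsing then becomes a single `json.loads` call.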
Iterative refinement
Don't expect perfection on the first try. Prompt, evaluate, refine.
┌───────────────────────────────────────────────────────────┐
│                THE PROMPT REFINEMENT LOOP                 │
│                                                           │
│   Write Prompt ─▶ Get Response ─▶ Evaluate ─▶ Adjust      │
│        ▲                                         │        │
│        └─────────────────────────────────────────┘        │
│                                                           │
│   "Too vague"      → Add specifics                        │
│   "Wrong format"   → Add format instructions              │
│   "Missing detail" → Ask explicitly for it                │
└───────────────────────────────────────────────────────────┘
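The loop is mechanical enough to script. A sketch with the model call stubbed out: `ask_model` is a placeholder for a real API call, and `looks_ok` is a hypothetical check you'd write for your own task.

```python
# Prompt, evaluate, tighten: append one fix per round until the output passes.
# ask_model and looks_ok are stand-ins; swap in a real API call and check.

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"(model response to: {prompt})"

def refine(prompt: str, looks_ok, fixes: list[str]) -> str:
    """Try the bare prompt first, then apply one fix per round."""
    for fix in [None] + fixes:
        if fix:
            prompt = f"{prompt}\n{fix}"  # tighten the prompt with one new constraint
        response = ask_model(prompt)
        if looks_ok(response):
            return response
    return response  # best effort after all fixes applied

result = refine(
    "Summarize this article.",
    looks_ok=lambda r: "response to" in r,  # toy check for the stub
    fixes=["Keep it under 100 words.", "Use bullet points."],
)
```

The point isn't this exact code; it's that "evaluate, then add one constraint" is a loop, not a one-shot guess.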
Common mistakes
Being too vague. The model isn't a mind reader.
Not providing context. "Continue this" means nothing without showing what "this" is.
Expecting perfect first tries. Iteration is normal.
Over-prompting. Sometimes simple is better. Don't add complexity you don't need.
Ignoring the model's strengths. Use the model for what it's good at (writing, explaining, brainstorming), not what it's bad at (precise math, real-time facts).
Is this a real skill?
Yes and no.
It's a real skill in the sense that practice makes you better. You develop intuition for what works.
It's not a mysterious art. It's mostly about clarity and specificity. Good prompts are good communication.
The best prompt engineers are often just people who communicate clearly.
Now you know how to talk to AI. But what happens when AI talks back with complete confidence about things that aren't true? Next: What are AI Hallucinations?