Prompt engineering for developers: a guide that saves you hours every day
At my trainings, I start with one experiment. I give the entire team the same task — refactor a function using AI. The results vary dramatically. Not because some developers are better — but because they write better prompts.
Prompt engineering isn't a buzzword. For developers, it's a practical skill you can learn in an afternoon that immediately improves output quality. The difference between one iteration and five is often just how you phrase the question.
The three most common mistakes
1. Prompts that are too vague
The most common mistake. 'Fix this code' tells AI nothing — what's wrong? What's the expected output? What framework are you using? The more context you provide, the more precisely AI responds.
# Bad prompt:
Fix this code.
# Good prompt:
This function has a race condition on
concurrent cache access.
Add a mutex lock around cache operations
and write a test that reproduces the issue
with 10 simultaneous accesses.
The second prompt gets the right answer on the first try. The first leads to guessing and five iterations of corrections.
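For concreteness, here is a minimal sketch of what the good prompt should produce: a cache guarded by a mutex plus a test that reproduces the race with 10 simultaneous accesses. All names (`ThreadSafeCache`, `get_or_compute`) are illustrative, not from any real codebase.

```python
import threading

class ThreadSafeCache:
    """A tiny cache with a mutex guarding all operations."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()  # the mutex the prompt asks for

    def get_or_compute(self, key, compute):
        # Holding the lock across check-and-set prevents two threads
        # from both seeing a miss and computing the value twice.
        with self._lock:
            if key not in self._data:
                self._data[key] = compute()
            return self._data[key]

def test_concurrent_access():
    """Reproduce the issue with 10 simultaneous accesses."""
    cache = ThreadSafeCache()
    calls = []  # records how many times compute() actually runs

    def compute():
        calls.append(1)
        return "value"

    threads = [
        threading.Thread(target=cache.get_or_compute, args=("k", compute))
        for _ in range(10)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Without the lock, compute() can run more than once (the race).
    assert len(calls) == 1
```

Note that the prompt's specificity (mutex, test, 10 accesses) maps one-to-one onto the code: that is why it converges in a single iteration.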
2. Missing context
AI doesn't know your framework, your conventions, or the broader context of the change. The more context you provide, the fewer iterations you need. Attach relevant files, describe the existing architecture, mention constraints.
- What to include in context:
- Framework and language (Next.js 16, TypeScript, Prisma)
- Team conventions (naming, error handling, test patterns)
- Existing architecture (where auth lives, how routing works)
- Constraints (backward compatibility, performance requirements)
- What MUST NOT break (existing APIs, tests, integrations)
3. No verification in the prompt
'Do X' vs. 'Do X. Then run the tests and fix any failures.' The second approach gives AI a feedback loop — it can verify its own work and iterate. This is the simplest way to dramatically improve output quality.
When you add 'run tests and fix failures' to the end of every prompt, output quality improves 2-3x. Without verification, AI doesn't know if its changes work. With verification, it self-corrects.
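The feedback loop that a 'run tests and fix failures' instruction sets up can be modeled in a few lines. This is a conceptual sketch, not a real tool: `run_tests` and `apply_fix` are hypothetical stand-ins for roles the AI assistant plays itself.

```python
def iterate_until_green(run_tests, apply_fix, max_rounds=5):
    """Run tests, feed concrete failures back to the fixer, repeat.

    run_tests: callable returning a list of failing test names (empty = green)
    apply_fix: callable that attempts a fix given those failures
    Returns the number of rounds it took to go green.
    """
    failures = []
    for round_num in range(1, max_rounds + 1):
        failures = run_tests()      # verification step: real signal, not guessing
        if not failures:
            return round_num        # all green
        apply_fix(failures)         # the fixer sees exact failures, so it can self-correct
    raise RuntimeError(f"still failing after {max_rounds} rounds: {failures}")
```

The point of the sketch: without the `run_tests` call there is no signal, and 'fixing' degenerates into guessing, which is exactly what happens with prompts that omit verification.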
A template for technical prompts
After hundreds of hours of experimentation, I developed a simple template for technical prompts. Four components:
- 1. WHAT I want to change — concrete task, not a vague request
- 2. WHY (context) — reason for the change, broader architecture
- 3. CONSTRAINTS — what must not break, backward compatibility
- 4. HOW TO VERIFY — tests, linter, expected output
# Template in practice:
# WHAT:
Rewrite the auth middleware from session-based
to JWT.
# WHY:
We're moving to microservices and need
stateless auth. Currently using
express-session with Redis store.
# CONSTRAINTS:
- Existing API endpoints must work
without changes (backward compatible)
- Refresh token rotation (not single-use)
- httpOnly cookies for token storage
# VERIFICATION:
After completion, run the full test suite
and fix failures. Verify all existing
tests pass.
Advanced techniques
Chain of thought — breaking down into steps
For complex tasks, tell AI to break the problem into steps before implementing. 'First analyze the existing code. Then propose an approach. Then implement step by step. After each step run tests.'
# Chain of thought prompt:
1. Analyze src/auth/ — how does the
current auth work?
2. Propose a JWT migration plan
(what steps, in what order)
3. Implement step by step
4. After each step run tests
5. At the end verify everything works
Negative instructions — what NOT to do
AI tends to add things you didn't ask for. Explicitly tell it what NOT to do. 'Don't modify existing tests.' 'Don't add new dependencies.' 'Don't refactor parts of the code unrelated to the change.'
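A hypothetical prompt combining a task with explicit negative instructions might look like this (the endpoint and constraints are invented for illustration):

```
# Negative instructions in practice:
Add rate limiting to the /api/login endpoint.
Do NOT:
- modify existing tests
- add new dependencies
- refactor code unrelated to the change
- change the response format of the endpoint
```

The "do" part can stay short; the "do NOT" part is what keeps the diff small and reviewable.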
Example-based prompts
Show AI an example of existing code and say 'do the same for X.' AI understands your style, conventions, and patterns from the example and applies them consistently. This is especially powerful for code conventions.
# Example-based prompt:
Here's an existing endpoint in our style:
[attach existing endpoint code]
Write a new endpoint for /api/invoices
in the SAME style — same structure,
same error handling convention,
same logging pattern.
Common patterns that work
- 'Explain what you're doing before you do it' — forces AI to think
- 'Write tests BEFORE implementation' — TDD with AI
- 'Use a subagent to figure out how X works' — research without context loss
- 'Run tests after each step' — feedback loop
- 'Update CLAUDE.md so this doesn't happen again' — learning from mistakes
Why it's worth learning
A developer with good prompts finishes a task in one or two iterations. A developer with bad prompts spends five iterations fixing AI output and eventually writes it by hand. That difference is between 'AI saves me hours daily' and 'AI is useless.'
Prompt engineering isn't about writing longer prompts. It's about writing more precise ones. Four sentences with context, constraints, and verification beat two paragraphs of vague instructions.
Investment: an afternoon of learning. Return: hours every day. That's the best ROI you'll get in your dev workflow.
Start now: take your last prompt and add: 1) context (framework, architecture), 2) constraints (what must not break), 3) verification (run tests). You'll see the difference immediately.
Want to go deeper? Check out our full course Prompt Engineering for Developers at /en/courses/prompt-engineering-devs
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.