AI as a pair programmer: when it works, when it doesn't, and how to get the most out of it
Pair programming with a colleague: you share context, discuss design, catch each other's mistakes. Pair programming with AI: you have a tireless partner who never says 'I don't know', but also never says 'that's a bad idea.'
This is the fundamental difference most people don't grasp. An AI pair programmer is great at implementation — but bad at decision-making. Understanding this difference changes how you use AI.
Where AI pair programming excels
Exploratory coding
'Let's try this approach, what would it look like?' AI writes a prototype in a minute. Don't like it? 'Try a different approach with X.' Another prototype in a minute. In 10 minutes you have 5 different implementations and can make an informed decision. With a human pair, you'd have one in an hour.
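For instance, the in-memory TTL variant of an API-call cache might come back as a prototype like this (a Python sketch; the class and method names are illustrative, not from any specific library):

```python
import time

class TTLCache:
    """Minimal in-memory cache: entries expire after ttl seconds."""

    def __init__(self, ttl: float = 300.0):
        self.ttl = ttl
        self._store: dict = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=300)
cache.set("user:1", {"name": "Ada"})
print(cache.get("user:1"))
```

Twenty lines is enough to see the trade-off: trivially simple, but per-process only and with no invalidation, which is exactly what the Redis and stale-while-revalidate prototypes would address.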
# Exploratory prompt — rapid prototyping:
I want to implement a cache for API calls.
Show me 3 different approaches:
1. In-memory with TTL
2. Redis with invalidation
3. Stale-while-revalidate pattern
For each, write a 20-line prototype
and list pros/cons.
Rubber duck debugging
You explain the problem to AI and understand it yourself in the process. The difference from a rubber duck: AI actually asks back. 'It says the value is undefined — are you checking it exists before access?' Often you find the answer to your problem in the process of formulating it.
Boilerplate and routine implementation
AI writes boring code, you solve interesting problems. CRUD endpoints, form validation, test setup, config files — everything mechanical and repetitive. Your time is better invested in design and business logic.
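A typical piece of boilerplate worth delegating is input validation. What you might ask AI to generate looks something like this (a sketch with hypothetical field names and rules, Python for brevity):

```python
def validate_signup(data: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []

    email = data.get("email", "")
    if "@" not in email:
        errors.append("email: must be a valid address")

    password = data.get("password", "")
    if len(password) < 8:
        errors.append("password: must be at least 8 characters")

    age = data.get("age")
    if age is not None and not (13 <= age <= 120):
        errors.append("age: must be between 13 and 120")

    return errors
```

Mechanical, repetitive, and easy to review at a glance: exactly the kind of code where AI saves you time without hiding risky decisions.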
Learning and exploration
'How does connection pooling work in Prisma? Show me an example and explain trade-offs.' AI is an infinitely patient teacher who prepares examples tailored to your use case. No question is too 'stupid.'
Where AI pair programming fails
Design discussions
AI agrees with everything. Say 'do microservices' and it does microservices. Say 'do monolith' and it does monolith. It never says 'this is overengineered' or 'the other approach would be simpler.' You need a human who says no.
Watch out for the 'yes-man' effect. AI will confirm even a bad idea and implement it beautifully. Make architectural decisions with a human or through review — not with AI that agrees with everything.
Domain context
AI doesn't know your business rules, your users, your historical decisions. It doesn't know that 'discounts over 50% need manager approval' or that 'this endpoint is used by three partners and must not change.' Humans must supply this — and AI won't ask.
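A rule like the discount example only exists in code once a human writes it down. Encoding it might look like this (the rule and threshold come from the example above; the function and exception are hypothetical):

```python
class ApprovalRequired(Exception):
    """Raised when a discount needs a manager's sign-off."""

def apply_discount(price: float, discount_pct: float,
                   manager_approved: bool = False) -> float:
    # Domain rule: discounts over 50% need manager approval.
    # AI cannot infer this from the codebase; a human has to state it.
    if discount_pct > 50 and not manager_approved:
        raise ApprovalRequired("discounts over 50% need manager approval")
    return price * (1 - discount_pct / 100)
```

Three lines of logic, but the threshold, the exception, and the fact that the check exists at all are pure domain knowledge.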
Mentoring
A junior learns HOW to do things from AI. But not WHY. AI won't say: 'This works, but in a year it'll be unmaintainable because...' Mentoring requires experience, context, and willingness to challenge the approach — not just implement it.
Practical model: AI for implementation, human for decisions
The most effective workflow I've found:
- You decide WHAT and WHY (architecture, API design, abstractions)
- AI implements HOW (writes code, tests, documentation)
- You review and correct the result
- AI iterates based on your feedback
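To make the division of labor concrete: given a spec like "Redis cache with a fallback when Redis is down", the AI's implementation step might produce something like this (a Python sketch; the cache interface and fetch function are hypothetical, and a real redis-py client raises `redis.exceptions.ConnectionError` rather than the builtin used here):

```python
import json

def get_user(user_id: str, cache, fetch_from_db, ttl: int = 300):
    """Read-through cache with graceful degradation:
    if the cache is unreachable, serve from the database."""
    key = f"user:{user_id}"

    try:
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)
    except ConnectionError:
        pass  # cache down: fall through to the database

    user = fetch_from_db(user_id)

    try:
        cache.set(key, json.dumps(user), ttl)
    except ConnectionError:
        pass  # writing back is best-effort

    return user
```

Note what the human supplied here and what the AI did not: the decision to degrade gracefully instead of failing the request is a design choice, not an implementation detail.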
# Example of effective pair programming with AI:
# You decide:
"I need a cache layer for user service.
Use Redis, TTL 5 minutes, invalidation
on user update. Stale-while-revalidate
for read-heavy endpoints."
# AI implements:
[writes code, tests, configuration]
# You review:
"Good foundation, but:
1. Missing fallback when Redis is down
2. TTL for admin endpoints should be shorter
3. Add metrics for cache hit/miss"
# AI iterates:
[fixes based on your feedback]
When it works best
AI pair programming is most effective when:
- you have a clear idea of what you want
- the task is implementation, not design
- tests exist to verify correctness
- you're willing to review and correct the output
It's least effective when:
- you don't know what you want
- you're deciding on architecture
- no tests exist
- you blindly trust the output without review
AI is great at 'how' and bad at 'whether.' Use it for implementation, not decision-making. And when you're unsure about design — discuss it with a colleague, not with AI.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →