Debugging is where AI excels, yet it's the one thing most developers don't use it for. They'd rather spend an hour reading logs than paste those logs into AI and say 'find the problem.' And that's exactly where the time savings are greatest.
After a year of using AI for debugging on real projects, I have four techniques I use daily. Each solves a different type of problem.
Technique 1: Paste error, say fix
The simplest pattern: copy the entire error output (stacktrace, logs, CI output) and send it to AI with context. Don't describe the bug in words — give AI the raw data. Your interpretation adds abstraction that often loses the detail AI needs to pinpoint the root cause.
# Bad approach:
"I have an auth problem, test sometimes fails."
# Good approach — paste raw CI output:
$ npm test -- --run auth.test.ts
FAILED: auth.test.ts:42
Expected: 200
Received: 401
at Object.<anonymous> (auth.test.ts:42:5)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
This test fails in CI but passes locally.
CI runs in a Docker container.
Find the cause and suggest a fix.

In this case, AI immediately identifies typical causes: environment variables not set in CI, timezone differences, or missing test fixtures in the Docker image. You'd get there too — but after 30 minutes of googling.
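One of those typical causes, an environment variable that exists on dev machines but not in the CI container, boils down to something like this sketch (the variable name and helper are hypothetical):

```typescript
// Hypothetical cause of "passes locally, fails in CI": the auth
// helper silently depends on an env var that developers export in
// their shell but the Docker-based CI image never sets.
function makeAuthHeader(): string {
  const token = process.env.TEST_AUTH_TOKEN; // undefined in CI
  // An empty header means the request goes out unauthenticated,
  // so the server answers 401 instead of the expected 200.
  return token ? `Bearer ${token}` : "";
}
```

Pasting the raw CI output gives AI enough signal to suggest checking exactly this kind of environment difference.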
Technique 2: AI as a second pair of eyes
'Look through this code and find why it occasionally returns null.' You search for an hour, AI finds the missing null check in 10 seconds. Not because it's smarter — but because it reads code without assumptions about what 'should' work.
This is especially powerful for code you didn't write yourself. You tend to read your own code with assumptions. AI reads every line literally and finds inconsistencies you overlook.
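As a concrete illustration of such a hidden code path (the caching function here is hypothetical), this is the shape of bug AI reliably spots when asked to enumerate every path that can return null:

```typescript
type User = { id: string; email: string };

const cache = new Map<string, { user: User; expiresAt: number }>();

function getUser(id: string): User | null {
  const entry = cache.get(id);
  if (entry) {
    if (entry.expiresAt > Date.now()) {
      return entry.user; // happy path: the vast majority of calls
    }
    // BUG: an expired entry returns null instead of falling through
    // to a refetch, so the function "occasionally" returns null.
    return null;
  }
  const user: User = { id, email: `${id}@example.com` };
  cache.set(id, { user, expiresAt: Date.now() + 60_000 });
  return user;
}
```

A human skims past the expiry branch because it 'obviously' refetches; AI, reading literally, flags the `return null` immediately.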
# Prompt for finding intermittent bugs:
This function sometimes returns null instead
of an object. Passes 95% of the time but
occasionally fails in production.
[attach function code]
Find all code paths that could lead to a null
return. For each one, explain the conditions
under which it would occur.

Technique 3: Reproduction and isolation
'Write a minimal test case that reproduces this bug.' AI creates an isolated test that reproduces the issue. Now you have something to iterate on — and it becomes a regression test after the fix.
This is particularly useful for bugs that are hard to reproduce. Describe the symptoms, attach the relevant code, and AI creates a test that targets exactly the conditions under which the bug occurs.
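A sketch of what such a test can look like for an intermittent 'cannot read property of undefined' race (plain async code instead of vitest boilerplate so it stays self-contained; the service shape is hypothetical):

```typescript
// Hypothetical shared state: set on login, cleared on logout.
let currentUser: { email: string } | undefined = { email: "alice@example.com" };

async function getEmail(): Promise<string> {
  if (currentUser) {
    await Promise.resolve(); // yield point: another task can run here
    // If a logout ran during the yield, currentUser is undefined now
    // and this property access throws the TypeError seen in production.
    return currentUser!.email;
  }
  throw new Error("not logged in");
}

// Minimal reproduction: start the call, then clear the state before
// the async continuation resumes.
async function reproduce(): Promise<unknown> {
  const pending = getEmail();   // passes the truthiness check, then yields
  currentUser = undefined;      // simulated concurrent logout
  return pending.catch(e => e); // resolves with the thrown TypeError
}
```

Once the failure is deterministic like this, you can fix the code (capture `currentUser` in a local before the await) and keep the test as a regression guard.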
# Prompt for bug reproduction:
In production we occasionally see:
"TypeError: Cannot read property 'email'
of undefined" in user-service.ts:127.
This error only happens under high load.
[attach user-service.ts code]
Write a minimal test case that reproduces
this bug. Use vitest and simulate concurrent
access that could cause this state.

Technique 4: Systematic log analysis
You have 500 lines of production logs and somewhere in them is the root cause. AI is the ideal tool for this — it can read the entire log and identify anomalies, time correlations, and error patterns.
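The kind of pass AI makes over a log file can be sketched in a few lines (toy log lines, naive string matching):

```typescript
// Toy sketch of a first-error scan over timestamped log lines.
const logLines = [
  "14:21:55 INFO  db        connection pool at 95% capacity",
  "14:22:10 ERROR db        connection timeout after 5000ms",
  "14:22:11 ERROR payments  could not acquire db connection",
  "14:23:02 ERROR payments  service unhealthy, shutting down",
];

const errors = logLines.filter(line => line.includes("ERROR"));
// The earliest error is the likely start of the cascade.
const firstError = errors[0] ?? "";
// What happened just before it often points at the root cause.
const lineBefore = logLines[logLines.indexOf(firstError) - 1] ?? "";
```

AI does this across hundreds of lines at once, and also correlates timestamps and spots patterns a plain grep won't catch.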
# Prompt for log analysis:
Here are 500 lines of production logs
from the last 30 minutes. At 14:23 the
payment service went down.
[attach logs]
Find:
1. The first error that could have cascaded
2. Time correlation between errors
3. What changed before the first error
4. Suggest root cause and next steps

When AI won't help
Race conditions that depend on precise timing. Heisenbugs that vanish the moment you observe them. Problems tied to specific system state you can't reproduce. For these, you still need experience and intuition.
AI excels at data analysis and pattern recognition. It does not excel at reproducing non-deterministic problems. Use it for the former — give it logs, stacktraces, code — and let it find patterns you'd spend hours searching for.
Practical debugging workflow with AI
- Bug appears — immediately copy the entire error output
- Paste into AI with context (what it should do, what it does instead, environment)
- AI suggests causes — you validate against your system knowledge
- AI writes a reproduction test — you verify it reproduces the right problem
- AI suggests a fix — you check for side effects and edge cases
- Reproduction test becomes a regression test
Debugging with AI isn't about AI debugging for you. It's about AI processing data faster than you can, and you applying your experience to validate and decide.
Try it next time you hit a bug. Instead of googling the error message, copy the entire output into AI. For most problems, you'll get an answer faster — and learn new debugging techniques along the way.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →