You know the drill. Deadline approaching, feature is done, and tests... we'll write tests next time. Next time never comes. Coverage drops, regressions pile up, and the team spends more time debugging than developing.
AI won't fix this on its own — but it dramatically lowers the barrier. Test generation is one of the use cases where AI truly excels. In 15 minutes you have a foundation that would take hours to write manually.
Where AI excels in testing
Unit tests for existing code
'Write unit tests for this function. Cover happy path, edge cases, and error states.' AI analyzes the function, identifies input combinations, and generates tests. In one minute you have 10-15 tests that would take half an hour to write manually.
# Prompt for unit test generation:
Write unit tests for calculateDiscount().
Rules:
- Use vitest
- Cover: happy path, edge cases, error states
- Edge cases: zero price, negative price,
discount > 100%, null inputs, empty cart
- Name each test descriptively:
'should return X when Y'
- Run tests and fix failures
- Use describe blocks for grouping

Key point: AI won't just generate 'happy path' tests. When you explicitly ask for edge cases, it finds combinations you wouldn't think of — zero values, extreme inputs, race conditions in async code.
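To make that concrete, here's the flavor of test file such a prompt tends to produce. This is a sketch under assumptions: calculateDiscount is a hypothetical implementation, and tiny local helpers stand in for vitest's `test`/`expect` so the snippet runs standalone.

```typescript
// Hypothetical implementation under test (assumed, not from the original post).
function calculateDiscount(price: number, rate: number): number {
  if (price < 0) throw new Error("price must be non-negative");
  if (rate < 0 || rate > 1) throw new Error("rate must be between 0 and 1");
  return price * rate;
}

// Minimal stand-ins for vitest's test/expect so the sketch runs standalone.
function test(name: string, fn: () => void): void {
  try { fn(); console.log(`PASS ${name}`); }
  catch (e) { console.log(`FAIL ${name}: ${(e as Error).message}`); }
}
function assertEqual(actual: unknown, expected: unknown): void {
  if (actual !== expected) throw new Error(`expected ${expected}, got ${actual}`);
}
function assertThrows(fn: () => void): void {
  try { fn(); } catch { return; }
  throw new Error("expected function to throw");
}

// The shape of tests AI generates when asked for edge cases, not just happy path:
test("should return 20 when price is 80 and rate is 0.25", () =>
  assertEqual(calculateDiscount(80, 0.25), 20));
test("should return 0 when price is zero", () =>
  assertEqual(calculateDiscount(0, 0.5), 0));
test("should throw when price is negative", () =>
  assertThrows(() => calculateDiscount(-10, 0.1)));
test("should throw when discount exceeds 100%", () =>
  assertThrows(() => calculateDiscount(100, 1.5)));
```

In an actual vitest file, `test` and `expect` come from the framework and the assertions become `expect(...).toBe(...)` and `expect(...).toThrowError()`.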
Generating test data
'Generate realistic test data for a user profile — 20 examples with various edge cases.' AI is much more creative than most developers at this.
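As a rough illustration of what the prompt below returns, here is a slice of such a fixture. The TestUser shape and every value are assumptions for the sketch; a real run would produce all 20 entries.

```typescript
// Hypothetical user shape for the fixture (assumed, not from the original post).
interface TestUser {
  name: string;
  email: string;
  birthDate: string; // ISO 8601 date
}

// A few of the 20 requested rows, each targeting one edge case:
const testUsers: TestUser[] = [
  { name: "Alice Novak", email: "alice@example.com", birthDate: "1990-04-12" },       // happy path
  { name: "", email: "", birthDate: "1985-01-01" },                                   // empty name/email
  { name: "Dvořák Žofie 王伟", email: "zofie@example.com", birthDate: "1978-11-02" }, // unicode, diacritics
  { name: "x".repeat(500), email: "long@example.com", birthDate: "2000-02-29" },      // extremely long string
  { name: "Robert'); DROP TABLE users;--", email: "bobby@example.com", birthDate: "1999-09-09" }, // SQL injection
  { name: "No Domain", email: "user@", birthDate: "1992-03-03" },                     // email without domain
  { name: "Multi At", email: "a@@b@example.com", birthDate: "1991-05-05" },           // multiple @
  { name: "Future Kid", email: "kid@example.com", birthDate: "2099-01-01" },          // birth date in the future
];

console.log(`${testUsers.length} sample rows`);
```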
# Prompt for test data:
Generate 20 test users.
Include edge cases:
- Empty name / email
- Unicode characters in name (diacritics, Chinese)
- Extremely long strings (500+ chars)
- SQL injection in name field
- Email without domain, with multiple @
- Birth date in the future
- Negative age
Format: TypeScript array of objects.

Transforming tests during refactoring
When you refactor code, AI can automatically update existing tests. Instead of manually fixing 50 broken tests after renaming a method, tell AI to fix them. 'I renamed UserService.getUser to UserService.findById. Update all tests.'
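A minimal sketch of that mechanical update, assuming a hypothetical UserService: the assertion logic stays identical, only the call site changes.

```typescript
// Hypothetical service after the refactor (getUser was renamed to findById).
class UserService {
  private users = new Map([["u1", { id: "u1", name: "Ada" }]]);

  findById(id: string) {
    return this.users.get(id);
  }
}

const service = new UserService();

// Before the refactor the test called service.getUser("u1").
// AI rewrites every such call site across the suite to the new name:
const user = service.findById("u1");
console.log(user?.name); // "Ada"
```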
Regression tests from bugs
Hit a bug? Before fixing it, have AI write a test that reproduces it. After the fix, the test must pass. Now you have a guarantee this bug won't return. Every bug = a new test. Coverage grows organically.
# Workflow: bug -> test -> fix -> verify
1. Bug report: 'Discount calculation is wrong
for orders over $100'
2. AI writes reproduction test:
test('should apply discount correctly
for orders over 100', () => {
expect(calculateDiscount(150, 0.1))
.toBe(15);
});
3. Test FAILS (confirms bug exists)
4. AI fixes the implementation
5. Test PASSES (confirms fix works)
6. Test remains as regression protection

Where AI testing falls short
- Integration tests depending on complex system state
- E2E tests requiring deep knowledge of business flows
- Tests for race conditions and timing issues
- Tests where WHAT to test matters more than HOW (strategic decisions)
- Performance tests with realistic load
For these, you need a developer who understands the system. AI can help with the skeleton, but the strategy and logic have to come from you.
Practical workflow for team adoption
Here's the workflow that works in real teams:
- 1. Write the feature
- 2. Tell AI: 'Write tests for this. Run them. Fix failures.'
- 3. Check that tests test the right things (not just that they pass)
- 4. Add edge cases AI missed
- 5. Every bug = reproduction test before the fix
- 6. During refactoring: AI updates broken tests
Instead of 'I'll spend the whole afternoon writing tests' it's 'in 15 minutes I have the foundation, in another 15 I fine-tune it.' The barrier drops enough that testing stops being a task you postpone.
Measurable results
After introducing AI-assisted testing, teams I work with report:
- Test coverage increases 30-50% within the first month
- Time spent writing tests drops 60-70%
- Regression count decreases — every bug generates a test
- Developers write tests BEFORE merging, not after (or never)
- Refactoring becomes less risky — a safety net exists
AI doesn't write perfect tests. It writes good first drafts. And a good draft in 2 minutes is infinitely better than no test in 2 hours.
Add to CLAUDE.md: 'Every new function must have tests. Before submitting PR, run tests and verify they pass.' AI will follow this rule automatically.
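A sketch of what that rule might look like in a CLAUDE.md file — the exact wording and section layout are up to you:

```markdown
## Testing rules

- Every new function must have tests: happy path, edge cases, error states.
- Every bug fix starts with a reproduction test that fails before the fix.
- Before submitting a PR, run the test suite and verify it passes.
- When refactoring, update affected tests in the same change.
```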
Want to go deeper? Check out our full course AI-Powered Development: The Complete Workflow at /en/courses/ai-dev-workflow
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.