Claude Code is one of the most powerful AI tools for developers. But most people use barely 10% of what it can do. At my trainings, I see the same patterns over and over — a team installs the tool, tries a few prompts, and either it works 'kind of' or they give up.
From the dozens of tips I've collected over a year of daily use, I picked 15 with the biggest impact. These are the ones I recommend at every workshop.
Fundamentals that make the biggest difference
1. Give Claude a feedback loop — tests, linter, expected output
This is tip number one for a reason. When you tell Claude 'Refactor the auth middleware to JWT. Run the existing test suite after changes. Fix any failures before calling it done' — Claude runs the tests, sees failures, and fixes them without you stepping in. Boris Cherny (the creator of Claude Code) says this alone gives a 2-3x quality improvement.
```
# Prompt WITHOUT feedback loop (bad):
Refactor auth middleware to JWT.

# Prompt WITH feedback loop (good):
Refactor auth middleware to JWT.
Run the existing test suite after.
Fix any failures before calling it done.
```
Golden rule: every prompt should end with a verification step. 'Run tests.' 'Run linter.' 'Verify the build passes.' Without this, Claude doesn't know if its changes work.
2. Install the LSP plugin for your language
LSP plugins give Claude automatic diagnostics after every edit — type errors, unused imports, missing return types. Claude sees and fixes issues before you even notice them. This is the single highest-impact plugin you can install. TypeScript, Python, Rust, Go — all major languages are supported.
3. Use Plan Mode for complex changes
Shift+Tab cycles between Normal, Auto-Accept, and Plan Mode. For multi-file changes, unfamiliar code, or architectural decisions, a few minutes of planning pays off — it prevents Claude from spending 20 minutes confidently solving the wrong problem. For small, clear-scope tasks, skip it.
Productivity and workflow
4. Esc stops, Esc+Esc rewinds
Esc stops Claude mid-action without losing context. Esc+Esc (or /rewind) opens a scrollable menu of every checkpoint. You can restore code, conversation, or both. This means you can try the approach you're only 40% sure about. If it works, great. If not, rewind. Zero damage done.
5. /clear between unrelated tasks
A clean session with a sharp prompt beats a messy three-hour session every time. Different task? /clear first. I know it feels like throwing away progress, but accumulated context from earlier work drowns out your current instructions. Five seconds of /clear saves 30 minutes of diminishing returns.
6. Stop describing bugs. Paste raw data.
Describing a bug in words is slow. Claude guesses, you correct, repeat. Instead: paste the error log, CI output, or Slack thread and say 'fix.' Your interpretation adds abstraction that often loses the detail Claude needs to pinpoint the root cause. Give Claude the raw data and get out of the way.
```
# Bad:
"I have an auth problem in tests."

# Good — paste raw output:
$ npm test -- --run auth.test.ts
FAILED: auth.test.ts:42
Expected: 200
Received: 401
    at Object.<anonymous> (auth.test.ts:42)

Fix this. The test passes locally but
fails in CI (Docker container).
```
Context and scaling
7. 'Ultrathink' for complex reasoning
Adding the keyword 'ultrathink' to a prompt sets reasoning effort to high and triggers adaptive reasoning on Opus. Use it for architecture decisions, tricky debugging, and multi-step reasoning. You don't need it to rename a variable — but for 'design how to split this monolith into services' you do.
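For instance, a sketch of the kind of prompt where the keyword earns its cost (the module name is hypothetical):

```
Ultrathink. Propose a plan for splitting the orders
module out of this monolith into a separate service.
Cover data ownership, API boundaries, and a stepwise
migration path. List the tradeoffs of each option.
```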
8. Subagents keep your main context clean
'Use a subagent to figure out how the payment flow handles failed transactions.' This spawns a separate Claude instance with its own context window. It reads all the files, reasons about the codebase, and reports back a concise summary. Your main session stays clean.
A deep investigation can consume half your context window — subagents keep that cost elsewhere. Use them for: analyzing unfamiliar code, figuring out how something works, reviewing large changesets.
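As one more illustration, a hedged example of delegating an investigation (the paths are hypothetical):

```
Use a subagent to read src/payments/ and report how
retries and idempotency are handled for failed
transactions. I only want the summary back, not the
file contents.
```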
9. --worktree for isolated parallel branches
claude --worktree feature-auth creates an isolated working copy with a new branch. Spin up 2-3 worktrees, each with its own Claude session. The Claude Code team calls this one of the biggest productivity unlocks.
```
# Parallel work with worktrees:

# Terminal 1: feature
claude --worktree feature-user-profile

# Terminal 2: bug fix
claude --worktree fix-payment-timeout

# Terminal 3: refactoring
claude --worktree refactor-auth

# Each has its own branch, own files,
# own Claude session. Zero conflicts.
```
CLAUDE.md and configuration
10. Run /init, then cut the result in half
CLAUDE.md is a markdown file at your project root with persistent instructions. /init generates a starter version. The output tends to be bloated. If you can't explain why a line is there, delete it. There's roughly a 150-200 instruction budget before compliance drops off.
```
# CLAUDE.md — concise, effective version

## Stack
Next.js 16, TypeScript, Tailwind

## Commands
npm run dev    # dev server
npm run build  # production build
npm run lint   # ESLint
npm run test   # vitest

## Conventions
- Components: PascalCase
- Utilities: camelCase
- Tests: *.test.ts alongside source

## Important
- Auth: JWT + httpOnly cookies
- All APIs return { data, error, meta }
```
11. When Claude makes a mistake, say: 'Update CLAUDE.md'
'Update the CLAUDE.md file so this doesn't happen again.' Claude writes its own rule. Next session, it follows it automatically. Over time your CLAUDE.md becomes a living document shaped by real mistakes — not theoretical rules.
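A sketch of what this looks like in practice (the package-manager detail is an assumption for illustration):

```
# After Claude runs npm install in a pnpm repo:
You used npm again. Update CLAUDE.md so this
doesn't happen next time.

# Claude appends a rule to CLAUDE.md, e.g.:
## Important
- Use pnpm, never npm, for all package operations
```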
12. Hooks for things that must work every time
CLAUDE.md is advisory — Claude follows it about 80% of the time. Hooks are deterministic, 100%. Formatting, linting, security checks — those belong in hooks. Tips and guidance go in CLAUDE.md.
- CLAUDE.md (advisory, ~80% compliance): conventions, tips, recommendations, architecture decisions
- Hooks (deterministic, 100%): formatting, linting, secret detection, test running
- Rules (.claude/rules/): contextual rules for specific file types
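A minimal sketch of a formatting hook in .claude/settings.json, assuming a Prettier-based stack. Hooks receive the tool call as JSON on stdin, so the command below pulls the edited file's path out with jq; check the Claude Code hooks documentation for the exact event names and payload fields your version supports:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

Because this runs after every edit, the formatting happens 100% of the time — no CLAUDE.md instruction required.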
Collaboration and review
13. Let Claude interview you
You know what you want to build but don't have all the details. Tell Claude: 'I want to build [description]. Interview me about technical implementation, edge cases, and tradeoffs. Keep asking until we've covered everything, then write a complete spec to SPEC.md.' Then start a fresh session with the finished spec.
14. One Claude writes, another Claude reviews
First Claude implements the feature, second Claude reviews from fresh context like a staff engineer. The reviewer has no knowledge of the implementation shortcuts and will challenge every one of them. Same idea works for TDD — Session A writes tests, Session B writes code.
```
# Two-session workflow:

# Session 1 — implementation:
"Implement JWT auth middleware. Use
jsonwebtoken, httpOnly cookies, refresh
token rotation. Run tests."

# Session 2 — review (clean context):
"Review src/middleware/auth.ts as a staff
engineer. Focus on: security, edge
cases, error handling, maintainability.
Be critical."
```
15. After 2 corrections on the same thing, start fresh
When you're going down a rabbit hole of corrections and the issue still isn't fixed, the context is full of failed approaches that actively hurt the next attempt. /clear and write a better starting prompt with what you learned. A clean session with a sharper prompt almost always outperforms a long session weighed down by dead ends.
Conclusion
You don't need all 15 at once. Pick the one that solves the thing that annoyed you most in your last session, and try it tomorrow. One tip that sticks is worth more than fifty you bookmarked.
Start with tip #1 (feedback loop). It alone improves output quality 2-3x. Then add #5 (/clear) and #6 (raw data). These three tips cover 80% of the improvement.
Want your team to master these tips in practice? That's exactly what I teach at my workshops — hands-on work with AI tools on your own code.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.