AI for the whole team: shared workspaces, collective agents, and team workflows
Most teams use AI individually — each developer has their own tool, their own prompts, their own workflows. That's a missed opportunity: AI is far more powerful when the team uses it in a coordinated way. Shared configuration, common prompt libraries, parallel agents — that's the next level.
Shared configuration: CLAUDE.md as a team agreement
CLAUDE.md in a repo isn't just for one developer. It's a shared document that defines how AI works with your project: conventions, build commands, architectural decisions, important rules. When you keep it current, every team member (and every AI agent) works consistently.
# CLAUDE.md — team configuration
## Stack
Next.js 16, TypeScript, Tailwind, Prisma
## Conventions
- Components: PascalCase, files: kebab-case
- Tests: vitest, files alongside source (*.test.ts)
- Commit messages: conventional commits
## Important rules
- ALWAYS avoid N+1 queries in Prisma —
use include/nested reads, not per-row lookups
- Auth: JWT + httpOnly cookies
- All API responses wrapped in
{ data, error, meta } format
## Build
npm run dev # dev server on :3000
npm run build # production build
npm run test  # vitest

Tip: add rules to .claude/rules/ for specific contexts — TypeScript conventions for .ts files, testing rules for test files. AI gets the right instructions automatically based on what it's working with.
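In practice a scoped rule is just another markdown file the team commits alongside CLAUDE.md. A sketch of what a testing-focused rule might contain — the file name and exact scoping behavior are assumptions, so check the docs for your Claude Code version:

```markdown
<!-- .claude/rules/testing.md — hypothetical testing-scoped rule file -->
# Testing rules
- Use vitest; place *.test.ts next to the source file it covers
- Test behavior, not implementation details
- Every bug fix gets a regression test that fails before the fix
```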
Shared prompt libraries
When one developer finds a prompt that works great for code review, they should share it. Create a simple Slack channel, Notion doc, or file in the repo with shared prompts.
Example shared prompts:
- Code review: 'Review this PR. Focus on: error handling, edge cases, consistency with existing code'
- Bug fix: 'Here is the error log. Find the cause and suggest a fix. Write a reproduction test.'
- Refactoring: 'Extract [logic] into a separate function. Run tests. Add types.'
- Documentation: 'Write docs for this module. Include: purpose, API, examples, limitations.'
- Onboarding: 'Explain this project architecture to a new developer. Start from main modules.'
Custom slash commands in Claude Code: create .claude/commands/ with your own commands. E.g., /review, /test-gen, /docs — the whole team uses the same optimized prompts.
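A custom command is a plain markdown file: the file name becomes the command and the body becomes the prompt, with `$ARGUMENTS` standing in for whatever is typed after it. A minimal /review sketch — the prompt text here is our own, not an official template:

```markdown
<!-- .claude/commands/review.md — invoked as /review <scope> -->
Review the changes in $ARGUMENTS.

Focus on:
- error handling and edge cases
- consistency with existing code and CLAUDE.md conventions
- missing tests

Report findings as a prioritized list; do not rewrite code.
```

Because the file lives in the repo, a prompt improvement made by one developer ships to the whole team on the next pull.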
Agent teams: multiple agents collaborating
Claude Code supports agent teams — a lead agent distributes work among multiple agents, each with its own context. A practical use case: one agent refactors module A, another module B, a third writes tests. The lead coordinates and resolves conflicts.
# Example: agent team for parallel work
# Terminal 1 — refactoring agent
claude --worktree refactor-auth
> "Refactor auth middleware. Add types,
extract utility functions, run tests."
# Terminal 2 — testing agent
claude --worktree add-tests
> "Write unit tests for payment service.
Cover all code paths and edge cases."
# Terminal 3 — documentation agent
claude --worktree update-docs
> "Update CLAUDE.md and README based on
current changes in the project."

Start with 2-3 agents on independent tasks. Avoid assigning tasks that modify the same files. Research and review tasks (code analysis, PR review) are an ideal start.
Worktrees for parallel work
claude --worktree creates an isolated working copy of the repo on its own branch (a git worktree — a linked checkout, not a full clone). One developer can run 2-3 Claude Code sessions, each on a different task in a different worktree. No conflicts, no branch switching. When a task is done, merge.
Typical worktree use cases:
- Feature A in one worktree, bug fix in another
- Refactoring in isolation — if it doesn't work out, discard the worktree with no impact
- Parallel experiments — try two approaches at once, pick the better one
- Code review — run Claude Code in a review worktree for an independent perspective
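Worktree-based tools build on plain git, so it helps to see the underlying mechanics. A self-contained demo in a throwaway repo (names like fix-auth are placeholders):

```shell
# Throwaway demo repo (git >= 2.5 supports worktrees)
tmp=$(mktemp -d) && cd "$tmp"
git init -q myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One task per worktree: each gets its own directory and branch
git worktree add ../fix-auth -b fix-auth        # bug fix in isolation
git worktree add ../refactor-db -b refactor-db  # parallel refactor

git worktree list    # main checkout plus the two task worktrees

# Done with a task? Merge its branch, then drop the worktree
git worktree remove ../fix-auth
```

Each directory has its own checked-out files and index, so two sessions never trip over each other's uncommitted changes.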
Measuring team impact
Individual productivity is hard to measure, but team metrics are clear. Measure them before and after AI adoption — data is the best argument for scaling.
DORA metrics for measuring AI impact:
- PR review time (from PR creation to merge)
- Deployment frequency (how many times per week/month you deploy)
- Lead time for changes (from commit to production)
- Mean time to recovery (how fast you fix incidents)
- Other useful metrics: regression count, test coverage, onboarding time
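PR review time is the easiest of these to pull yourself. A sketch of the computation with jq on sample data — in real use, the input would come from something like `gh pr list --state merged --json createdAt,mergedAt` (repo and limit flags omitted here):

```shell
# Two sample PRs standing in for real `gh pr list` output
cat > prs.json <<'EOF'
[
  {"createdAt": "2024-05-01T09:00:00Z", "mergedAt": "2024-05-01T15:00:00Z"},
  {"createdAt": "2024-05-02T10:00:00Z", "mergedAt": "2024-05-03T10:00:00Z"}
]
EOF

# Average hours from PR creation to merge: (6h + 24h) / 2 = 15
jq '[ .[] | (.mergedAt | fromdateiso8601) - (.createdAt | fromdateiso8601) ]
    | add / length / 3600' prs.json
```

Run it weekly and chart the trend — a single before/after number is noisy, but the trend over a quarter is a solid argument.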
Most teams see improvement within 30 days — if they have training and clear rules. Without training, metrics often stay the same because people don't use AI effectively.
Where to start: 4 steps this week
1. Add CLAUDE.md to main repositories — stack, conventions, build commands
2. Create a shared channel for AI tips and prompts — Slack, Notion, file in repo
3. At retrospective, ask: 'Where did AI help you most this sprint?'
4. Have 2 people try worktrees — parallel work on independent tasks
A team that talks about AI and shares experiences adopts faster than a team where everyone figures it out alone. AI adoption is a team sport.
Coordinated AI use in a team isn't about everyone doing the same thing. It's about sharing what works and learning from each other. CLAUDE.md, shared prompts, and worktrees are the tools that make this possible.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →