AI guardrails for dev teams: a practical framework for one afternoon
Most dev teams use AI today — but without any rules. Each developer decides on their own what to send to AI, which tool to use, and how to handle the output. That's not adoption — that's chaos.
Guardrails aren't about restricting. They're about giving the team a clear framework within which they can move fast and safely. In my experience, teams with guardrails reach far higher AI adoption than teams without them, because developers know exactly what they're allowed to do and aren't afraid to experiment. Guardrails aren't a brake; they're an accelerator.
What guardrails should cover
1. Data classification — what can and cannot go into AI
This is the foundation. Without clear classification, developers either send everything (risk) or nothing (lost productivity). Create a simple cheat sheet and include it in onboarding.
# Data classification for AI tools
## GREEN — safe to send to AI
- Public code (open-source, your own libraries)
- Generic boilerplate and templates
- Error messages and stack traces (without PII)
- Generic questions about frameworks and libraries
- Code not written specifically for a client
## RED — never send to AI
- API keys, connection strings, secrets
- Production data, PII (names, emails, phone or ID numbers)
- Client code without client consent
- Internal security audits and pen tests
- Compliance-relevant documents
## AMBER — only with approval
- Client code with consent + enterprise AI plan
- Anonymized production data
- Internal architecture documentation

2. Approved tools — a clear list
Define which AI tools are approved. This doesn't mean banning experimentation — but ensuring sensitive data only goes through approved channels with appropriate security. Enterprise plans typically guarantee that data isn't used for model training.
What to verify for approved tools:
- Data retention policy — how long is data stored?
- Is data used for model training? (enterprise plans typically exclude it)
- Where does data reside? (EU vs US — relevant for GDPR)
- Is an audit log available? (who sent what and when)
- SSO integration? (for enterprise)
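The "clear list" can even live in the repo as something scripts can query. A minimal sketch; the tool names and plan tiers below are illustrative examples, not endorsements:

```shell
#!/bin/sh
# Sketch of a machine-readable approved-tools list the team can query
# from scripts or CI. Tool names and plan tiers are illustrative
# examples; replace them with whatever your org actually approved.
APPROVED_TOOLS='claude-code:enterprise
github-copilot:business
cursor:business'

is_approved() {
  # $1: tool identifier, e.g. "cursor"
  printf '%s\n' "$APPROVED_TOOLS" | grep -q "^$1:"
}
```

Even if nothing ever calls it programmatically, keeping the list in one version-controlled place gives the team a single source of truth.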
3. Review policy for AI-generated code
AI-generated code should go through the same review process as hand-written code. No 'AI wrote it, so it must be fine.' That's a dangerous illusion. AI generates code that looks correct but can have subtle bugs you won't catch without review.
Critical rules: auth, payments, and data mutations ALWAYS get human review. No exceptions. AI can write the first draft, but a human must approve it. This is non-negotiable.
Review policy for AI code:
- AI-generated code = same review as manual code
- Auth, payments, data mutations = always human review
- Every PR with AI code tagged with co-authored-by
- Security-critical sections = double review (two human reviewers on top of the AI draft)
- Tests for AI code = mandatory (AI writes them, you verify them)
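The path-based part of this policy can be enforced mechanically in CI. A sketch, assuming hypothetical `src/auth/`, `src/payments/`, and `db/migrations/` locations; map the patterns to where auth, payments, and data mutations actually live in your repo:

```shell
#!/bin/sh
# Policy-as-code sketch: which changed paths always require human review.
# The path patterns are illustrative assumptions, not a standard layout.
needs_human_review() {
  # $1: a changed file path
  case "$1" in
    src/auth/*|src/payments/*|db/migrations/*) return 0 ;;
    *) return 1 ;;
  esac
}
```

In CI, loop the output of `git diff --name-only "$BASE"...HEAD` through this function and fail the job (or require an approval label) when any path matches.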
4. Transparency — knowing what's from AI
The team should know when code was AI-generated. Not as a stigma — but as information for the reviewer. The reviewer knows to look more carefully for subtle logic errors, which AI commonly makes.
# Simple approach: Co-authored-by tag in commits
git commit -m "Add JWT auth middleware" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"
# Claude Code does this automatically
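Claude Code adds the trailer on its own; for other tools, a prepare-commit-msg hook can do it. A minimal sketch, assuming you signal AI involvement yourself; the trailer identity below is a placeholder:

```shell
#!/bin/sh
# prepare-commit-msg hook sketch. Appends an AI co-author trailer to the
# commit message unless one is already present. The trailer identity is
# a placeholder; use whatever your tooling actually reports.
append_ai_trailer() {
  # $1: path to the commit message file git passes to this hook
  if ! grep -q '^Co-Authored-By:' "$1"; then
    printf '\nCo-Authored-By: AI Assistant <ai@example.com>\n' >> "$1"
  fi
}

# Hook entry point: git invokes the hook with the message file path.
if [ -n "$1" ]; then
  append_ai_trailer "$1"
fi
```

Save it as `.git/hooks/prepare-commit-msg` and mark it executable.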
# Cursor/Copilot — add manually or via hook

How to implement it: step by step
Don't turn this into a 20-page document. One page of rules, essentially this article adapted to your context, is more than enough.
- Week 1: Write a one-page document with rules (data classification, approved tools, review policy)
- Week 2: Walk through the document with the team at standup — open discussion, accept feedback
- Week 3: Publish in repo README or CLAUDE.md — must be easily accessible
- Week 4: Add a pre-commit hook for secret detection (e.g., gitleaks, detect-secrets)
- Month 2: First retrospective — what works, what doesn't, what's missing?
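The Week 4 hook can start as a few lines of shell while you evaluate proper tooling. A sketch with deliberately crude starter patterns; gitleaks or detect-secrets give far better coverage:

```shell
#!/bin/sh
# pre-commit hook sketch: block commits whose staged additions look like
# secrets. The regexes are illustrative starters only; replace this with
# gitleaks or detect-secrets for real coverage.
SECRET_RE='AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY|(password|api_key|token)[[:space:]]*='

scan_staged() {
  # Look only at added lines in the staged diff.
  git diff --cached -U0 | grep '^+' | grep -Eq "$SECRET_RE"
}

if scan_staged; then
  echo "Possible secret in staged changes, commit blocked." >&2
  exit 1
fi
```

Once real tooling is in place, the hook body shrinks to a single call (e.g. `gitleaks protect --staged` in gitleaks v8).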
Within a month the team gets used to it and the rules become a natural part of the workflow — not an obstacle. Iterate based on real experience. Rules that the team doesn't use or works around should be deleted and replaced with something that works.
Common mistakes
- Rules too strict — people work around them instead of following them
- Rules too vague — nobody knows what specifically is allowed
- No enforcement — rules exist but nobody checks
- One-time implementation without iteration — rules go stale within a month
- No training — people don't know WHY the rules exist
Template to use
Here's a minimal template you can use as a starting point. Adapt for your context — add specific tools you use and rules specific to your industry (healthcare, finance, etc.).
# AI Guardrails — [Company Name]
# Version: 1.0 | Date: [today's date]
## Approved tools
[list of tools with plans]
## Data classification
GREEN: [what can go into AI]
RED: [what cannot go into AI]
AMBER: [what needs approval]
## Review policy
- AI code = standard review
- Auth/payments/data = always human review
- Co-authored-by tag in commits
## Contact
Questions: [who answers]
Updates: [how often it's revised]

Guardrails aren't a one-time project. They're a living document that evolves with your team and tools. The most important thing is to start — and iterate.
Want to go deeper? Check out our full course AI Guardrails: Rules Without Brakes at /en/courses/ai-guardrails
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →