Security risks of AI in development: what your team is probably doing wrong
At trainings I ask: 'Who has ever sent a piece of production code to ChatGPT?' Most hands go up. 'Who checked if that code contained an API key?' No hands.
This isn't fear-mongering — this is reality. Developers send sensitive data to AI tools every day. Not with malicious intent, but out of convenience and lack of rules. And that's a problem you need to address.
Real risks — not theoretical
- Leaking sensitive data in prompts — API keys, connection strings, customer PII
- AI-generated code with security vulnerabilities — SQL injection, XSS, insecure deserialization
- Reliance on AI without verification — 'AI wrote it, so it must be secure'
- Sharing proprietary code with third parties — without enterprise plans data may be used for training
- Prompt injection — attackers can manipulate AI output through input data
Data leakage in prompts
The most common risk. A developer debugging production copies entire logs including connection strings and customer emails into ChatGPT. Without an enterprise plan, this data could become part of training data. Even with an enterprise plan — data leaves your perimeter.
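Before pasting any log into an AI tool, run it through a scrubber. Below is a minimal sketch in JavaScript; the `scrubLog` name and the patterns are illustrative, not from any library, and you should extend them to cover whatever secrets actually appear in your own logs:

```javascript
// Illustrative redaction patterns — extend for your own log formats.
const PATTERNS = [
  { re: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, label: '[EMAIL]' },
  { re: /\b(?:postgres(?:ql)?|mysql|mongodb):\/\/\S+/g, label: '[DB_URL]' },
  { re: /\b(?:api[_-]?key|token|password)\s*[=:]\s*\S+/gi, label: '[SECRET]' },
];

// Replace every match of every pattern with its placeholder label.
function scrubLog(text) {
  return PATTERNS.reduce((t, { re, label }) => t.replace(re, label), text);
}
```

A scrubbed line like `Customer: jan.novak@company.com` comes out as `Customer: [EMAIL]`. Regex-based scrubbing is a convenience layer, not a guarantee; the real safety net is the pre-commit secret detection covered below.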
# Bad — full logs including secrets:
ERROR at PaymentService:
DB_URL=postgresql://admin:P@ssw0rd@db.prod:5432
Customer: jan.novak@company.com
Card: **** **** **** 4242
# Good — anonymized and without secrets:
ERROR at PaymentService:
DB connection failed at line 42
Customer: [REDACTED]
Error: ECONNREFUSED
AI-generated code with security flaws
AI generates code that looks correct but can have subtle security vulnerabilities. SQL injection via string concatenation instead of parameterized queries. XSS via unescaped user input. Hardcoded secrets in config. AI doesn't add security measures unless you explicitly ask for them.
// AI often generates this (BAD):
const query = `SELECT * FROM users
WHERE email = '${email}'`; // SQL injection!
// What you want instead (GOOD):
const result = await db.query(
'SELECT * FROM users WHERE email = $1',
[email]
);
AI-generated code is NOT automatically secure. It goes through the same review as manual code. Auth, payments, and data mutations always get human review. No exceptions.
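The same goes for the XSS pattern mentioned above. In practice you should rely on your framework's built-in escaping (React, template engines), but a hand-rolled sketch makes the fix concrete; the `escapeHtml` name is illustrative:

```javascript
// Escape the five characters that matter for HTML contexts.
// Ampersand must be replaced first, or it would re-escape the entities.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

With this, `<script>alert("x")</script>` renders as inert text instead of executing. The ordering matters: escaping `&` last would corrupt the already-inserted entities.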
Five simple rules
You don't need a 20-page security document. These five rules cover 90% of risks:
- 1. Never paste code with hardcoded credentials — replace everything with placeholders before pasting
- 2. AI-generated code goes through the same review as manual code — no 'AI wrote it, so it's fine'
- 3. Auth, payments, and data mutations always get human review — that's the red line
- 4. Use enterprise plans with data retention policies — data must not be used for training
- 5. Add a pre-commit hook for secret detection — gitleaks, detect-secrets, trufflehog
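For rule 5, the scan can also be run by hand, not only as a hook. Assuming gitleaks v8 is installed, the invocations look roughly like this (check `gitleaks --help` for your installed version):

```shell
# One-off scan of the repository for committed secrets:
gitleaks detect --source . --verbose

# Scan only staged changes, i.e. what a pre-commit hook would check:
gitleaks protect --staged
```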
Practical steps to secure your workflow
Pre-commit hook for secrets
Install a tool like gitleaks or detect-secrets as a pre-commit hook. Every commit is automatically checked for API keys, passwords, and connection strings. This catches mistakes that code review misses.
# .pre-commit-config.yaml
repos:
- repo: https://github.com/gitleaks/gitleaks
rev: v8.18.0
hooks:
- id: gitleaks
# Installation:
pip install pre-commit
pre-commit install
Enterprise plans for sensitive projects
Copilot Business/Enterprise, Claude Team/Enterprise, ChatGPT Team/Enterprise — all guarantee that data isn't used for model training. For projects with client code or regulated data, this is the minimum.
Security review checklist for AI code
# Security review checklist for AI-generated code:
[ ] All SQL queries parameterized?
[ ] User input sanitized before rendering?
[ ] No hardcoded secrets (API keys, passwords)?
[ ] Auth checks on all protected endpoints?
[ ] Rate limiting on public APIs?
[ ] Input validation on all inputs?
[ ] Error messages don't reveal internal details?
[ ] CORS settings are correct?
How to talk to your team about this
Don't scare people. Say: 'AI is a powerful tool. Like any powerful tool, it needs rules. Here are ours.' A simple one-page cheat sheet does more than a twenty-minute compliance lecture.
Security isn't about banning AI. It's about using it correctly. Teams with clear rules use AI more often, and more safely, than teams without them.
Most important: rules must be simple, accessible (in repo README or CLAUDE.md), and revised regularly. Complex rules nobody reads. Simple rules people follow.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.