The adoption gap: why 73% of dev teams don't actually use AI (and how to fix it)
According to GitHub's 2025 survey, 73% of companies are increasing AI investment for developers. At the same time, Stack Overflow's Developer Survey shows most developers use AI tools 'occasionally' or 'rarely.' This is the adoption gap — the difference between what the company bought and what the team actually uses.
We bought Copilot for the whole team. Three months later, three people out of twenty were actually using it. — CTO at one of my trainings
I see this at every training. A company spends tens of thousands on licenses, sends an email saying 'we use AI now,' and a month later adoption is at 15%. Leadership is frustrated, developers are skeptical, and nobody knows why it's not working.
Three reasons adoption fails
1. No training — or the wrong kind
The company buys licenses and expects people to 'learn on their own.' That's like handing someone a plane without flight training. AI tools are powerful, but they have a learning curve. Without structured onboarding, most people try one prompt, get bad output, and give up. Permanently.
The problem isn't motivation. The problem is that people don't know how to write effective prompts, don't know which tasks to delegate to AI, and have nobody to show them real use cases on their own code.
The most common scenario: a developer tries 'fix this bug' in ChatGPT, gets generic code that doesn't work in their context, and dismisses AI tools entirely. All they needed was context — framework, existing architecture, error message.
What effective training must cover:
- How to write prompts specific to your stack (not generic 'explain this code')
- Which tasks to delegate to AI and which not to
- Hands-on exercises on the team's actual codebase
- Security rules — what can and cannot go into AI
- Tool configuration (CLAUDE.md, .cursorrules, Copilot config)
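For instance, a starting CLAUDE.md might look like the sketch below. The stack, conventions, and commands are hypothetical placeholders to adapt, not a prescribed template:

```markdown
# CLAUDE.md (hypothetical example)

## Stack
- TypeScript, Next.js, PostgreSQL via Prisma

## Conventions
- Route all HTTP calls through the existing `apiClient` wrapper
- New code requires unit tests alongside the module

## Commands
- `npm run test` runs the test suite
- `npm run lint` lints before committing

## Never touch
- `migrations/` (generated; change the schema instead)
```

A file like this is what turns generic prompts into context-aware ones: the tool reads it on every session, so nobody has to repeat the project basics.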
2. No guardrails — and therefore no trust
What can I send to AI? Can I put production data there? Client code? Internal documentation? Connection strings? When a team has no clear rules, people prefer not to use AI at all — out of caution. And they're right — without guardrails, using AI is a genuine risk.
The fix is simple: a one-page document with clear rules. What's OK (public code, boilerplate, generic questions). What's not (API keys, PII, client code without consent). Which tools are approved. How AI-generated code gets reviewed.
# AI Guidelines (example)
## Approved tools
- GitHub Copilot Business (autocomplete)
- Claude Team (reasoning, code review)
## NEVER send to AI
- API keys, connection strings, secrets
- Production data, PII
- Client code without consent
## Review policy
- AI-generated code = same review as manual
- Auth, payments, data mutations = always human review
3. No measurement — and therefore no justification
If you don't know whether AI helps, you can't justify the investment or improve adoption. Measure review time, deployment frequency, time on routine tasks, regression count. Not to monitor people — but to see where AI actually delivers value and where it doesn't.
Concrete example: one team I work with started measuring time from PR creation to merge. Before AI code review: average 2.3 days. After: 0.9 days. Those numbers convince leadership better than any presentation.
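That metric is easy to automate. A minimal sketch, assuming you have already exported created/merged timestamps for merged PRs (the sample data below is made up):

```python
from datetime import datetime
from statistics import mean

# Hypothetical export of (created_at, merged_at) pairs for merged PRs
prs = [
    ("2025-06-02T09:00:00", "2025-06-04T15:00:00"),
    ("2025-06-03T10:30:00", "2025-06-03T17:30:00"),
    ("2025-06-05T08:00:00", "2025-06-06T12:00:00"),
]

def avg_days_to_merge(prs):
    """Average time from PR creation to merge, in days."""
    durations = [
        (datetime.fromisoformat(merged) - datetime.fromisoformat(created)).total_seconds()
        for created, merged in prs
    ]
    return mean(durations) / 86400  # seconds per day

print(f"{avg_days_to_merge(prs):.1f} days")  # → 1.2 days
```

Run it before the rollout and again a month after, and you have a before/after number instead of an opinion.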
DORA metrics that show AI impact:
- Deployment frequency (how often you deploy)
- Lead time for changes (from commit to production)
- Mean time to recovery (how fast you fix incidents)
- Change failure rate (how many deployments cause problems)
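Two of these can be computed from nothing more than a deploy log. A sketch with hypothetical data; the `caused_incident` flag stands for whatever your team counts as a failed change:

```python
from datetime import date

# Hypothetical deploy log: (deploy_date, caused_incident)
deploys = [
    (date(2025, 6, 2), False),
    (date(2025, 6, 3), True),
    (date(2025, 6, 5), False),
    (date(2025, 6, 9), False),
    (date(2025, 6, 12), True),
]

def deployment_frequency(deploys, period_days):
    """Deploys per week over the observed period."""
    return len(deploys) / period_days * 7

def change_failure_rate(deploys):
    """Share of deploys that caused a problem in production."""
    failures = sum(1 for _, failed in deploys if failed)
    return failures / len(deploys)

print(f"{deployment_frequency(deploys, 14):.1f} deploys/week")      # → 2.5 deploys/week
print(f"{change_failure_rate(deploys):.0%} change failure rate")    # → 40% change failure rate
```

The point isn't the code — it's that a spreadsheet export and ten lines of scripting are enough to replace gut feeling with a trend line.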
A framework for successful adoption
Adoption isn't a one-time event. It's a process with four phases, and each phase has concrete steps and measurable outcomes.
Phase 1: Pioneers (weeks 1-2)
Identify 2-3 people on the team who are naturally curious about new tools. Give them time and space to experiment. Let them find use cases that work on your code. Their job: prepare 3-5 concrete examples where AI genuinely saves time.
Phase 2: Training (weeks 3-4)
Hands-on workshop for the whole team. Not a presentation about how amazing AI is — practical exercises on real code. Pioneers demo their use cases. Everyone writes a prompt, gets output, and evaluates quality. Goal: everyone leaves with at least one use case they'll use daily.
Phase 3: Rules and measurement (weeks 5-8)
Implement guardrails, set up metrics, do regular check-ins. At every retrospective, ask: 'Where did AI help you most this sprint? Where didn't it help?' Iterate rules based on real experience.
Phase 4: Scaling (months 3-6)
AI becomes part of the standard workflow. Shared prompt libraries, CLAUDE.md in repositories, AI code review in CI pipeline. Metrics show concrete improvement. The team uses AI daily and productively.
What determines success
Companies that invest in training and guardrails see 80% adoption in 3 months. Companies that just buy licenses see 15% adoption in 3 months. — my data from 20+ teams
Adoption isn't about technology. It's about people, processes, and culture. Companies that understand this have a team using AI daily and productively within 3-6 months. The rest have dusty licenses and frustrated leadership.
One license costs $20-100/month. One day of lost productivity from a senior developer costs the company $500-1500. Investment in proper onboarding pays for itself within the first month.
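The break-even math, using the article's figures plus one loud assumption (that proper onboarding saves each developer at least one day of routine work per month):

```python
# Article's figures: license $20-100/month; a senior dev's day costs $500-1500.
license_cost = 100          # worst-case monthly license, USD
day_cost = 500              # conservative cost of one senior-dev day, USD
days_saved_per_month = 1    # assumption: AI saves one day of routine work

monthly_saving = days_saved_per_month * day_cost
net_benefit = monthly_saving - license_cost
print(f"Net monthly benefit per developer: ${net_benefit}")  # → $400
```

Even with the most pessimistic inputs from the article, the license pays for itself several times over in month one.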
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →
Related posts
Your team doesn't want to use AI. Now what?
The CTO buys licenses, sends an email, a month later two out of twenty are using it. Resistance is normal. Here's what works better than a top-down mandate.
AI and Technical Debt: The Paradox Defining 2026
AI can 10x development speed — but also 10x the creation of technical debt. 75% of companies already face moderate to high debt levels due to AI. How to break the cycle.
Where AI in development is heading: 5 trends I'm watching in 2026
Cloud agents, MCP ecosystem, AI-native testing, CLI agents. What will survive, what's hype, and how to prepare.
Ready to start?
Free 30-minute consultation — we'll figure out where AI can level up your team the most.
Book a free consultation