AI agents write code, review PRs, and deploy: what's happening in March 2026
Forget autocomplete. What's happening in AI tooling right now is a fundamentally different category. AI agents today autonomously analyze codebases, plan changes, solve independent tasks in parallel, run tests, fix bugs, and create pull requests — without you approving every step. And this is just the beginning.
Here's what's happening at the cutting edge of AI tooling right now. No model roundups — this is about how the way we work is changing.
Agent teams: parallel AI developers on a single project
The biggest shift in recent weeks? AI agents stopped working solo. Claude Code now supports teams — coordinated groups of agents where each works in an isolated git worktree on a different part of the task. Imagine saying 'implement this feature' and three agents simultaneously work on the backend API, frontend component, and tests. In 10 minutes you have three PRs ready for review.
This isn't theory. This is exactly how I built PraktickAI — one person with a team of AI agents handling frontend, backend, tests, and deployment in parallel. The result? A platform with 15 courses, 25+ blog posts, Stripe payments, bilingual content — in a fraction of the time it would take traditionally.
Start with one agent on simple tasks. Once you understand how the agent thinks and where it needs guardrails, add parallel sessions. In my experience, going from one to three parallel agents feels like more than a 3x boost: while one agent works, you're already reviewing or steering another, so far less of your time is spent waiting.
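The fan-out pattern behind agent teams can be sketched in a few lines. Everything here is illustrative: `runAgent` stands in for a real agent session (Claude Code or similar) spawned inside its own worktree, and the subtask names and paths are made up.

```typescript
// Sketch: fan out one feature request to several agents, each assigned its
// own git worktree, and collect the branches they would push.
type Subtask = { name: string; worktree: string };

// Hypothetical stand-in for a real agent session; a real implementation
// would spawn an agent process inside task.worktree.
async function runAgent(task: Subtask): Promise<string> {
  return `feature/${task.name}`;
}

// All agents run concurrently: total wall-clock time is the slowest
// subtask, not the sum of all of them.
async function fanOut(subtasks: Subtask[]): Promise<string[]> {
  return Promise.all(subtasks.map(runAgent));
}

const branches = await fanOut([
  { name: 'backend-api', worktree: '../wt-backend' },
  { name: 'frontend-ui', worktree: '../wt-frontend' },
  { name: 'tests', worktree: '../wt-tests' },
]);
console.log(branches); // logs the three branch names
```

Isolated worktrees are what make this safe: each agent commits to its own branch in its own working directory, so three agents never trip over each other's uncommitted changes.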
MCP: the USB port for AI tools
Model Context Protocol has become what USB was for hardware — a universal connector. Instead of custom integrations for every tool, you have a standard protocol through which an AI agent accesses databases, issue trackers, monitoring, Slack, documentation, CI/CD, and anything else.
In March 2026, there's an MCP server for practically everything — Postgres, Linear, GitHub, Sentry, Vercel, Notion, Slack, Gmail, Google Calendar. An AI agent that can directly read production logs, create a Linear issue, and send a Slack notification is an order of magnitude more useful than one that can only edit files.
// claude mcp add — connecting AI agents to external systems
// Each MCP server = a new capability for the agent
{
  "mcpServers": {
    "postgres": { "command": "mcp-server-postgres", "args": ["postgresql://readonly@db/prod"] },
    "linear": { "command": "mcp-server-linear" },
    "vercel": { "url": "https://mcp.vercel.com", "transport": "http" },
    "slack": { "url": "https://mcp.slack.com", "transport": "http" }
  }
}

MCP changes the economics of AI adoption. Instead of building custom integrations, you plug in existing MCP servers — hours of work instead of weeks. If your team isn't using MCP yet, start with one server (database or issue tracker) and add more gradually.
Durable agents: AI that survives a server crash
A classic AI agent runs in memory — when the server crashes, you lose everything. The new generation of durable agents (Vercel Workflow DevKit, Temporal, Inngest) solves this elegantly: every step of the agent is persisted and retryable. An agent can run for hours, days, or wait for human approval — and resume exactly where it left off.
Why does this matter? Because production AI agents must be reliable. An onboarding workflow that guides a new customer through a series of steps. A code review agent that waits for a developer's response. A data pipeline that processes thousands of documents. This can't run in memory.
// Durable agent — survives deploys, crashes, restarts
import { createWorkflow } from '@vercel/workflow';

const reviewAgent = createWorkflow({
  id: 'code-review',
  execute: async (context) => {
    'use workflow';
    const analysis = await context.run('analyze', async () => {
      'use step';
      return await agent.generate({ prompt: 'Review this PR...' });
    });
    // Waits for human approval — hours or even days
    const approved = await context.waitForEvent('human-approval');
    if (approved) {
      await context.run('merge', async () => {
        'use step';
        return await mergePR(analysis.prId);
      });
    }
  },
});

Vibe coding: from meme to method
Andrej Karpathy coined 'vibe coding' in February 2025 — programming where you describe what you want in natural language and AI implements it. A year later, it's a legitimate development method. Not for operating system kernels, but for a huge category of software — internal tools, prototypes, MVPs, dashboards, landing pages.
Tools like v0 (by Vercel), Bolt, and Lovable let you create a working web application from a natural language description in minutes. v0 generates Next.js code with shadcn/ui components, connects to GitHub, and deploys to Vercel. From 'I have an idea' to 'I have a working prototype' in an afternoon.
The key shift: vibe coding isn't a replacement for programming. It's a new tool in the toolbox that dramatically lowers the barrier to entry. A product manager who builds their own prototype. A designer who implements their design directly. A developer who tries 5 approaches in an afternoon instead of one.
AI code review: agents guarding quality for you
CodeRabbit, Vercel Agent, OpenAI's Codex — AI review agents have become a standard part of CI/CD pipelines. They don't just catch syntax errors. They identify security vulnerabilities, N+1 query problems, missing error handling, inconsistent API contracts, and even suggest better architecture.
Vercel Agent goes further — it doesn't just review code but can directly investigate production incidents. It connects to your logs, metrics, and deployment history to help find the root cause of a problem. Think of an on-call engineer who never sleeps and has perfect memory.
Where AI code review excels:
- Security auditing (SQL injection, XSS, secret leaks)
- Performance review (N+1 queries, missing indexes, memory leaks)
- API contract and typing consistency
- Duplicate code detection and missing tests
- Identifying breaking changes in public APIs
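To make the list concrete, here is a deliberately naive sketch of one check from the security category: flagging SQL built by splicing variables into strings. A real review agent reasons with full context across files; `findSqlConcat` is a regex heuristic I've invented purely for illustration.

```typescript
// Toy version of one check an AI review agent automates: flagging SQL
// assembled via template literals or string concatenation (injection risk).
function findSqlConcat(source: string): string[] {
  const findings: string[] = [];
  source.split('\n').forEach((line, i) => {
    // Heuristic: a SQL keyword plus an interpolated or concatenated value.
    const looksLikeSql = /\b(SELECT|INSERT|UPDATE|DELETE)\b/i.test(line);
    const splicesValue = /(\$\{|['"`]\s*\+)/.test(line);
    if (looksLikeSql && splicesValue) {
      findings.push(`line ${i + 1}: possible SQL injection: ${line.trim()}`);
    }
  });
  return findings;
}

const snippet = 'const q = `SELECT * FROM users WHERE id = ${userId}`;';
console.log(findSqlConcat(snippet)); // flags line 1
```

The gap between this and a real agent is exactly why the agents are valuable: they catch the parameterized-query that is parameterized in the wrong place, not just the obvious string concat.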
Chat SDK: one bot for Slack, Teams, Discord, and Telegram
A new category worth watching: Vercel's Chat SDK lets you write one chatbot and deploy it to Slack, Microsoft Teams, Discord, Telegram, Google Chat, GitHub, and Linear simultaneously. One codebase, one deployment, adapters for each platform.
Combined with AI SDK, this means your internal AI assistant — the one that answers questions from documentation, reports metrics, or triggers deployments — works everywhere your team communicates. No more 'this bot only works on Slack.'
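The "one bot, many platforms" idea boils down to the adapter pattern. This sketch uses invented types (`ChatAdapter`, `handle`), not the actual Chat SDK API, to show why one codebase can serve every platform:

```typescript
// One platform-agnostic bot, many platform-specific delivery adapters.
interface ChatAdapter {
  platform: string;
  send(channel: string, text: string): Promise<void>;
}

// The bot logic lives in one place; a real bot would call an LLM here.
async function answer(question: string): Promise<string> {
  return `You asked: ${question}`;
}

// Broadcasting through every registered adapter keeps bot logic shared
// while each adapter handles its platform's auth and message format.
async function handle(question: string, adapters: ChatAdapter[], channel: string) {
  const reply = await answer(question);
  await Promise.all(adapters.map((a) => a.send(channel, reply)));
  return reply;
}

// A console-backed adapter standing in for Slack, Teams, or Discord.
const consoleAdapter: ChatAdapter = {
  platform: 'console',
  send: async (channel, text) => console.log(`[${channel}] ${text}`),
};

await handle('What is our deploy status?', [consoleAdapter], '#dev');
```

Adding a platform then means writing one small adapter, not forking the bot.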
What this means for your team — practically
The technology is ready. The question isn't 'if' but 'how fast' you adapt. Here are concrete steps you can take this week:
This week:
- Install Claude Code (npm i -g @anthropic-ai/claude-code) and try it on a real task
- Connect one MCP server to your AI tool (database or issue tracker)
- Have an AI agent do a code review on your latest PR
- Give a non-technical colleague access to v0.dev and watch what they create
- Schedule a 2-hour team workshop: 'Hands-on with AI agents'
The pace of change is accelerating, not slowing down. Every month brings capabilities that were sci-fi a year ago. Teams that learn to work with AI agents now will have a lead that's very hard to close.
Want to get your team to the cutting edge of AI tooling? Check out our courses — from AI basics to advanced agentic workflows. Or book a custom workshop directly.
Karel Čech
Developer and AI consultant. I help technical teams adopt AI in their daily workflow — from workshops to long-term strategies.
LinkedIn →
Related posts
Cloud agents in practice: Devin, Codex, and when a cloud AI developer makes sense
Fully autonomous AI developers in the cloud promise a lot. But they handle only specific tasks well. Here's where they work, where they don't, and how to use them effectively.
CLI agents: why the terminal beats the editor for complex AI tasks
Claude Code, Aider, Goose, Codex CLI — terminal agents have access to everything you do. The editor sees files. The terminal sees the system. That's the fundamental difference.
AI Agents in 2026: What Changed and How Developers Use Them
From chat to autonomous agents. 55% of developers regularly use AI agents. What this means for your workflow and how to get started.