What is MCP and Why Build a Server
The problem: AI tools are blind
You're using Claude Code or Cursor to write code. It's great at generating functions, explaining algorithms, and refactoring. But it can't see your Jira tickets. It doesn't know what's deployed in production. It can't query your database to check if that migration actually ran. Your AI assistant is powerful but blind to everything that matters in your specific workflow.
Before MCP, every AI tool built its own integration system. Cursor had plugins. ChatGPT had GPT Actions. Claude had tool use. None of them talked to each other. If you built an integration for one tool, you had to rebuild it for every other tool. It was the pre-USB era — every device had a different cable.
MCP = USB-C for AI
Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI tools connect to external systems. Think of it as USB-C — one universal connector that works everywhere. You build one MCP server, and it works with Claude Code, Cursor, Windsurf, Copilot, and any other tool that supports the protocol.
The architecture is simple: an MCP client (the AI tool) connects to an MCP server (your code) using a standard protocol. The server exposes your data and actions through three primitives. The client discovers what's available and uses it to help the user.
┌─────────────────┐              ┌─────────────────┐
│     AI Tool     │              │ Your MCP Server │
│  (Claude Code,  │◄────────────►│   (your code)   │
│   Cursor, etc.) │     MCP      │                 │
│                 │   Protocol   │ ┌─────────────┐ │
│   MCP Client    │              │ │  Resources  │ │
│                 │              │ │  Tools      │ │
│                 │              │ │  Prompts    │ │
└─────────────────┘              │ └─────────────┘ │
                                 │                 │
                                 │ ┌─────────────┐ │
                                 │ │  Your DB    │ │
                                 │ │  Your API   │ │
                                 │ │  Your Infra │ │
                                 └─────────────────┘

Three primitives: Resources, Tools, Prompts
MCP servers expose functionality through exactly three primitives. Understanding these is key to designing a good server.
Resources — read-only data
Resources let the AI read data from your systems. They're like GET endpoints in a REST API — the AI can look at them, but can't change anything. A resource might be a database record, a config file, deployment status, or API documentation. Each resource has a URI like myapp://users/123 or myapp://deployments/production.
Tools — actions with side effects
Tools let the AI perform actions. Create a Jira ticket. Deploy to staging. Run a database query. Send a Slack message. Every tool call requires user approval — the AI proposes the action, the user confirms it. This is the safety mechanism that makes MCP suitable for production use.
Prompts — reusable templates
Prompts are pre-written templates that help the AI interact with your system. Think of them as slash commands — /deploy, /debug, /summarize-tickets. They combine instructions with arguments to create context-rich interactions. They're optional but incredibly useful for standardizing common workflows.
// Assumes the official TypeScript SDK: npm install @modelcontextprotocol/sdk zod
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "my-app", version: "1.0.0" });

// Resources: AI reads data (getDeploymentStatus is your own application code)
server.resource("deployment-status", "myapp://deploy/status", async () => {
  const status = await getDeploymentStatus();
  return { contents: [{ uri: "myapp://deploy/status", text: JSON.stringify(status) }] };
});

// Tools: AI performs actions, with user approval (jira is your own API client)
server.tool("create-ticket", { title: z.string(), priority: z.enum(["low", "medium", "high"]) }, async ({ title, priority }) => {
  const ticket = await jira.createIssue({ title, priority });
  return { content: [{ type: "text", text: `Created ticket ${ticket.key}` }] };
});

// Prompts: reusable templates
server.prompt("debug-error", { error_message: z.string() }, ({ error_message }) => ({
  messages: [{ role: "user", content: { type: "text", text: `Debug this error from our production logs: ${error_message}` } }]
}));

Transport: how client and server communicate
MCP supports two transport mechanisms. stdio is the most common — the client spawns the server as a subprocess and communicates via standard input/output. This is what Claude Code and Cursor use for local MCP servers. Streamable HTTP (and the older SSE transport) is used for remote servers — the client connects over HTTP. This is useful for shared team servers or cloud-hosted MCP services.
// Claude Code config (~/.claude/mcp.json) — stdio transport
{
  "mcpServers": {
    "my-app": {
      "command": "node",
      "args": ["./build/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://localhost:5432/mydb"
      }
    }
  }
}

// Cursor config (.cursor/mcp.json) — same format
{
  "mcpServers": {
    "my-app": {
      "command": "npx",
      "args": ["-y", "my-mcp-server"]
    }
  }
}

Who adopted MCP
MCP started at Anthropic but has been adopted across the industry. Claude Code, Cursor, Windsurf, Copilot, and many other tools support it as clients. On the server side, there are official MCP servers for GitHub, Slack, Linear, Google Drive, PostgreSQL, and dozens more. OpenAI and Google have both announced MCP support. It's rapidly becoming the standard — not just one company's protocol.
MCP is to AI tools what REST was to web APIs. In 2005, you could build SOAP, XML-RPC, or custom protocols. By 2010, everyone standardized on REST. MCP is that standardization moment for AI integrations — build once, work everywhere.
Real-world MCP servers
- Linear MCP — AI reads your issues, creates tickets, updates status
- Slack MCP — AI reads channels, sends messages, searches history
- PostgreSQL MCP — AI queries your database directly (read-only or read-write)
- Kubernetes MCP — AI checks pod status, reads logs, scales deployments
- GitHub MCP — AI reads PRs, creates issues, checks CI status
- Sentry MCP — AI reads error reports, traces, and performance data
What you'll build in this course
By the end of this course, you'll have a production-ready MCP server for your application. It will expose your database as browsable resources, let the AI perform common actions through tools, and provide prompt templates for your team's workflows. You'll know how to secure it, deploy it, and distribute it to your team.
You don't need to be an AI expert to build an MCP server. If you can write a REST API, you can build an MCP server. The SDKs handle all the protocol details — you just define what data to expose and what actions to allow.
Look at your current development workflow and identify three things that an MCP server could expose to your AI tools:
1. One piece of data the AI can't currently see (e.g., your database schema, deployment status, ticket backlog)
2. One action you perform manually that the AI could do for you (e.g., creating tickets, running migrations, updating configs)
3. One workflow that could be a reusable prompt template (e.g., debugging production errors, onboarding a new service, code review checklist)
Write these down — we'll implement them throughout the course.
Hint
Think about the things you copy-paste into the AI chat window. If you're constantly pasting database schemas, error logs, or ticket descriptions, those are perfect candidates for MCP resources.
- MCP is an open protocol that connects AI tools to your systems — build once, works everywhere
- Three primitives: Resources (read data), Tools (perform actions), Prompts (reusable templates)
- Two transports: stdio (local, most common) and HTTP (remote, for shared servers)
- Every tool call requires user approval — the AI proposes, the user confirms
- Created by Anthropic, with support from OpenAI, Google, and Microsoft tools — rapidly becoming the industry standard
- If you can build a REST API, you can build an MCP server