What Is MCP? The USB-C of AI Agents
Your guide to the Model Context Protocol and why it matters

If you’ve been exploring AI agents lately, chances are you’ve come across the term MCP—short for Model Context Protocol. But what exactly is it? And why are developers calling it the USB-C of AI?
In this issue of Abhi’s AI Playbook, we’ll demystify MCP, unpack its components, and show you how it empowers AI systems with modular, flexible capabilities.
🔌 What Is MCP?
MCP is an open protocol created by Anthropic.
Its purpose? To standardize how AI applications connect to tools, context, and data.
📦 Think of MCP like USB-C for AI:
It gives AI models a universal port to access different tools, prompts, databases, or workflows—without hardcoding everything manually.
With MCP:
You can build powerful, reusable tools once—and use them across any AI app (like Claude, Cursor, or your own custom app).
You can plug in real-world data, like a customer database or file system, into an LLM workflow.
You can scale smarter, instead of rebuilding integrations from scratch every time.

🧱 Key Components of MCP
At its core, MCP follows a client-server architecture:
1. MCP Client (Inside Your AI App)
The client:
Discovers tools and resources available from servers.
Fetches external data to enrich prompts.
Calls tools so the LLM can take actions it can’t perform itself (like sending an email).
Good news?
If you're using AI apps like Claude, Cursor, or others, the client is already built-in.
You won’t usually need to code this yourself.
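Curious what that built-in client is doing under the hood? Here’s a rough sketch using the official MCP Python SDK. The server.py script, tool name, and arguments below are placeholders for illustration, not anything you need to build:

```python
# Sketch of what an MCP client does behind the scenes (official Python SDK).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder: launch a local MCP server script over stdio.
server_params = StdioServerParameters(command="python", args=["server.py"])


async def main() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools this server exposes...
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # ...then call one on the LLM's behalf (tool name is illustrative).
            result = await session.call_tool(
                "draft_outreach_email", {"name": "Ada", "product": "Acme CRM"}
            )
            print(result)


if __name__ == "__main__":
    asyncio.run(main())
```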

2. MCP Server (Where You Customize the Magic)
This is where you, the builder, define what the AI can actually access.
An MCP server exposes:
🧩 Prompt Templates — ready-to-use prompts like “Write an outreach email for [Name]” or “Summarize meeting notes.”
🗂️ Resources — static files, databases, or document folders the AI can reference.
🛠️ Tools — functions or APIs that let the AI take real action (like drafting an email or updating a CRM).
👉 You control what gets exposed, giving you incredible flexibility to customize your agent workflows.
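Here’s a minimal sketch of what an MCP server can look like, using the official Python SDK’s FastMCP helper. The server name, tool, resource, and prompt are made-up examples to show the three building blocks, not a prescribed setup:

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
from mcp.server.fastmcp import FastMCP

# Name your server; AI apps will list it under this label.
mcp = FastMCP("outreach-helper")


# 🛠️ Tool: a function the AI can call to take real action.
@mcp.tool()
def draft_outreach_email(name: str, product: str) -> str:
    """Draft a short outreach email for a prospect."""
    return f"Hi {name},\n\nI'd love to show you how {product} can help your team..."


# 🗂️ Resource: data the AI can read for context (stubbed here).
@mcp.resource("customers://{customer_id}/notes")
def customer_notes(customer_id: str) -> str:
    """Return saved notes for a customer."""
    return f"Notes for customer {customer_id}: met at conference, interested in Q3 rollout."


# 🧩 Prompt template: a reusable prompt users can invoke by name.
@mcp.prompt()
def summarize_meeting(notes: str) -> str:
    return f"Summarize these meeting notes into 3 bullet points:\n\n{notes}"


if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport for local use
```

Once this is running, any MCP-compatible app can discover draft_outreach_email, pull in customer_notes, or offer summarize_meeting as a ready-made prompt.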

🌐 Where Does MCP Run?
You can deploy MCP servers:
Locally, over standard input/output (stdio): perfect for testing and dev work
Remotely in the cloud, over HTTP + SSE (Server-Sent Events): for production apps
Either way, you get consistent, scalable access to your AI tools across environments.
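If you’re building with the Python SDK, switching transports is roughly a one-line change. The sketch below assumes the FastMCP server from earlier and the SDK’s transport argument:

```python
# Same server, different transport: a sketch assuming the FastMCP server above.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("outreach-helper")

if __name__ == "__main__":
    # Local dev: talk to the host app over stdin/stdout.
    mcp.run(transport="stdio")

    # Production-style deployment: serve over HTTP + SSE instead.
    # mcp.run(transport="sse")
```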