What Is MCP? Model Context Protocol Explained for 2026

Model Context Protocol (MCP) is an open standard created by Anthropic in November 2024 that gives large language models a universal, JSON-RPC 2.0-based interface for connecting to external tools, databases, and services. Instead of writing custom connectors for every LLM-plus-tool pair (the N×M problem), developers expose capabilities through MCP servers and consume them through MCP clients embedded in host applications such as Claude Desktop, VS Code, or Cursor. Since December 2025, MCP has been governed by the Linux Foundation. As of early 2026, over 500 public MCP servers are available and the protocol is supported by Anthropic, OpenAI, and Google DeepMind.

What Is the Model Context Protocol?

The Model Context Protocol (MCP) is an open-source standard that defines how large language models communicate with external systems — databases, APIs, file stores, development tools, and virtually any service a software agent might need to access. Think of MCP as a “USB-C for AI”: a single, universal connector that replaces dozens of proprietary adapters (Anthropic, 2024).

Before MCP existed, connecting an LLM to a new tool meant writing bespoke integration code every time. If you had N AI applications and M tools, you faced an N×M integration problem — hundreds of point-to-point bridges, each with its own authentication scheme, data format, and error-handling logic. MCP collapses that matrix into a single protocol: build one MCP server per tool and one MCP client per host application, and every combination works automatically (Anthropic, 2024; Google Cloud, 2025).

Anthropic released MCP in November 2024 alongside reference servers for GitHub, Slack, Google Drive, Postgres, and Puppeteer. By March 2025, OpenAI had adopted MCP across its products, including the ChatGPT desktop app. In December 2025, Anthropic donated the protocol to the Agentic AI Foundation (AAIF), a directed fund within the Linux Foundation co-founded by Anthropic, Block, and OpenAI. As of early 2026, official SDKs exist for TypeScript, Python, C#, Java, and Swift, and the community has published over 500 public MCP servers (Wikipedia, 2026; Model Context Protocol GitHub, 2026).

Why MCP Matters: The N×M Problem

To understand MCP’s value, consider the landscape before it. Every time a developer wanted an AI application to query a Postgres database, they wrote a custom integration. When the same app needed Slack access, they wrote another. Each integration was tightly coupled to both the specific AI model’s API format and the external tool’s unique interface.

Early attempts to solve this included OpenAI’s function-calling API (June 2023) and the ChatGPT plugin framework. Both worked, but they were vendor-specific — an integration built for OpenAI didn’t transfer to Anthropic, Google, or an open-source model running on Ollama. MCP takes a fundamentally different approach: it is model-agnostic and vendor-neutral. Any LLM that speaks MCP can use any MCP server, regardless of the model provider.

Key insight: MCP does not replace function calling — it standardizes it. Function calling remains the underlying mechanism by which LLMs invoke tools. MCP wraps that mechanism in a universal, discoverable protocol so that tool definitions, capabilities, and authentication flow consistently across all compatible systems (Descope, 2026; Fast.io, 2026).

MCP Architecture: Host, Client, and Server

MCP follows a layered client-server architecture with three distinct participants. Understanding these roles is essential for both building and consuming MCP services.

MCP Host

The host is the AI application that the end user interacts with — Claude Desktop, a VS Code extension with an AI copilot, Cursor, or a custom agent runtime. The host manages the LLM’s context window, decides when to invoke tools, routes user messages to the model, and feeds tool outputs back into the conversation. In plain terms, the host is the “conversation controller” (Model Context Protocol Spec, 2025).

MCP Client

Inside the host lives one or more clients. Each client maintains a dedicated, one-to-one connection to a single MCP server. The client translates the LLM’s internal tool-use requests into JSON-RPC 2.0 messages, sends them to the server, parses responses, manages errors, and handles session lifecycle (timeouts, reconnections, closures). A single host may create dozens of clients if it connects to many servers simultaneously.
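The wire format the client produces is plain JSON-RPC 2.0. The sketch below builds a hypothetical `tools/call` request the way a client might; the method name comes from the MCP spec, while the tool name and arguments are illustrative:

```python
import json

# A JSON-RPC 2.0 request as an MCP client might send it to a server.
# "tools/call" is the MCP-defined method; the tool name and arguments
# below are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "database_query",
        "arguments": {"table": "sales_reports", "limit": 1},
    },
}

wire_message = json.dumps(request)
print(wire_message)

# The server's reply carries the same id, which is how the client
# matches responses to in-flight requests during a session.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1 row returned"}]},
}
assert response["id"] == request["id"]
```

The `id` correlation is what lets one client keep several requests in flight over a single connection.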

MCP Server

The server is a lightweight process that wraps a specific tool or data source and exposes it through the MCP protocol. A Postgres MCP server, for example, accepts structured queries from any MCP client, translates them into SQL, executes them against the database, and returns formatted results. The server never communicates directly with the LLM — all interaction is mediated by the client (IBM, 2025; Elastic, 2026).

[Figure] MCP Architecture Diagram — Host, Client, Server. A vertical diagram showing the three-layer Model Context Protocol architecture: the MCP Host (containing the LLM) at the top, MCP Clients in the middle, and MCP Servers at the bottom connected to external tools such as databases, APIs, and file systems. Each client maintains a 1:1 connection with its server via JSON-RPC 2.0; one host can manage many clients simultaneously.

The Three Primitives: Tools, Resources, and Prompts

MCP servers expose capabilities through exactly three primitives. This simple taxonomy keeps the protocol lean while covering the vast majority of real-world use cases.

Tools

Tools are executable operations that produce side effects. When an LLM calls a tool, something happens in the external world: a database row is inserted, an email is sent, a calculation is performed, or an API endpoint is hit. Tools are the most powerful — and most security-sensitive — primitive. Each tool exposes a JSON schema describing its parameters and return type, so the LLM knows exactly what arguments to provide (IBM, 2025).
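As a sketch of what that self-description looks like, here is a hypothetical `send_email` tool definition whose `inputSchema` is a JSON Schema object, plus a minimal check a client could run before forwarding a call (tool name and fields are invented for illustration):

```python
# Hypothetical tool definition: the inputSchema tells the LLM which
# arguments the tool expects and which of them are required.
send_email_tool = {
    "name": "send_email",
    "description": "Send an email to a single recipient.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "Recipient address"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "body"],
    },
}

def missing_required(schema: dict, arguments: dict) -> list[str]:
    """Return the required schema fields absent from a call's arguments."""
    return [f for f in schema.get("required", []) if f not in arguments]

# A call that omits the body can be rejected before it reaches the tool.
print(missing_required(send_email_tool["inputSchema"], {"to": "a@b.com"}))
```

Real servers typically rely on full JSON Schema validation rather than a required-fields check, but the shape of the contract is the same.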

Resources

Resources provide read-only access to data. They let the LLM query a database, read a file, or fetch a document without modifying anything. Resources are the safe, low-risk primitive — they retrieve information but never change state. Use resources when the model needs context (a user’s profile, a project README, a configuration file) but should not take action.

Prompts

Prompts are reusable, parameterized templates that standardize common LLM interactions. A code-review prompt, for instance, might accept language and file_path parameters, then assemble a detailed system message instructing the model how to review that specific type of code. Prompts help teams enforce consistency across agents and workflows.

Rule of thumb: Resources query, tools act, prompts standardize. If your MCP server only needs to give the LLM information, expose a resource. If it needs to do something, expose a tool. If you want to template how the LLM approaches a task, expose a prompt.

How an MCP Request Flows End to End

Let’s trace a concrete scenario. A user in Claude Desktop says: “Find the latest sales report in our database and email it to my manager.” Here is the step-by-step MCP flow:

1. Tool discovery. The host’s MCP clients connect to all configured servers at startup. Each server advertises its available tools, resources, and prompts. The LLM now has a “menu” of capabilities — for example, a database_query tool from the Postgres server and an email_sender tool from the email server.

2. Request generation. The LLM analyzes the user’s intent and decides it needs two tools. It generates a structured JSON-RPC request for database_query, specifying parameters like the report name and date range. The MCP client sends this request to the Postgres MCP server.

3. Execution and data return. The Postgres MCP server receives the request, translates it into a secure SQL query, executes it, and returns the result set as a structured JSON response. The client feeds this data back to the LLM.

4. Second action. With the report data now in context, the LLM calls the email_sender tool, providing the manager’s email address and the report content. The email MCP server sends the email and returns a confirmation.

5. User-facing response. The LLM composes a natural-language reply: “I found the latest Q4 sales report and emailed it to your manager.” The entire multi-step workflow was orchestrated through standardized MCP calls, with no custom integration code (Google Cloud, 2025).
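The five steps above can be sketched as a minimal host loop. Everything here is illustrative: a dict of handlers stands in for real MCP servers, and a hard-coded plan stands in for the LLM's tool-selection reasoning.

```python
# Illustrative host loop. In a real host, the "plan" emerges from the
# LLM's reasoning turn by turn, and each handler call is a JSON-RPC
# round trip to an MCP server.
def run_workflow(tools: dict, plan: list[tuple[str, dict]]) -> list[str]:
    context = []                       # results fed back into the model's context
    for tool_name, arguments in plan:  # steps 2 and 4: the "model" picks a tool
        handler = tools[tool_name]     # step 1: discovered at connect time
        result = handler(**arguments)  # step 3: server executes, returns data
        context.append(result)
    return context

# Stub handlers standing in for the Postgres and email MCP servers.
tools = {
    "database_query": lambda report: f"rows for {report}",
    "email_sender": lambda to, body: f"sent to {to}",
}
plan = [
    ("database_query", {"report": "Q4 sales"}),
    ("email_sender", {"to": "manager@example.com", "body": "rows for Q4 sales"}),
]
print(run_workflow(tools, plan))
```

The point of the sketch is the separation of concerns: the loop never needs to know what a tool does internally, only its name and arguments.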

This is a key differentiator from basic retrieval-augmented generation (RAG) pipelines. RAG focuses on retrieving information to feed into the LLM’s prompt. MCP enables both retrieval and action — the model can read data (resources) and write data (tools) through the same protocol. Where RAG makes the LLM more knowledgeable, MCP makes the LLM more capable.

MCP vs. Function Calling vs. OpenAPI

Developers often ask how MCP compares to function calling (OpenAI’s approach since June 2023) and to OpenAPI, the long-standing HTTP API specification. The three solve related but distinct problems.

| Dimension | Function Calling | OpenAPI | MCP |
| --- | --- | --- | --- |
| Scope | Single-model tool invocation | HTTP API description | Universal LLM ↔ tool protocol |
| Vendor lock-in | Yes — each provider has a different schema | No — but not designed for LLMs | No — model-agnostic by design |
| Discovery | Tools defined statically in prompt | Swagger/OpenAPI spec served at endpoint | Dynamic — server advertises tools on connect |
| Transport | Embedded in API call | HTTP/REST | JSON-RPC 2.0 over stdio or Streamable HTTP |
| State management | Stateless per call | Stateless per request | Stateful sessions with lifecycle management |
| Multi-tool orchestration | Manual chaining in app code | Not natively supported | Host orchestrates across multiple servers |
| Best for | Simple, single-model setups | Documenting REST APIs | Scalable, multi-model agentic systems |

In practice, MCP builds on top of function calling rather than replacing it. The LLM still uses function-call mechanics internally. MCP adds a discovery layer (servers advertise what they can do), a transport layer (JSON-RPC), and a session layer (stateful connections with timeouts and reconnection) that function calling alone does not provide (Descope, 2026; Fast.io, 2026).

Transport: stdio vs. Streamable HTTP

MCP supports two transport mechanisms, each optimized for different deployment scenarios.

Standard I/O (stdio) is the default for local servers. The host launches the MCP server as a child process and communicates via standard input/output pipes. This is fast, requires zero networking configuration, and works perfectly for personal development setups — for example, a Postgres MCP server running on your laptop alongside Claude Desktop. The tradeoff: stdio servers typically serve a single client.

Streamable HTTP is the standard for remote, production-grade servers. The server exposes an HTTPS endpoint, and MCP clients connect over the network. This supports multiple concurrent clients, scales horizontally, and integrates with enterprise authentication flows (OAuth 2.1). Streamable HTTP is what you’d use when deploying an MCP server on Cloudflare, AWS, or any cloud platform (Model Context Protocol Spec, 2025; Elastic, 2026).

Both transports use the same JSON-RPC 2.0 message format — requests, responses, and notifications — so tool definitions are fully portable between local and remote deployments.
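Over stdio, the framing is newline-delimited JSON: each JSON-RPC message is serialized onto a single line with no embedded newlines. A rough sketch of that framing, using an in-memory buffer to stand in for the process pipes:

```python
import io
import json

def write_message(stream, message: dict) -> None:
    """Frame one JSON-RPC message as a single newline-terminated line."""
    line = json.dumps(message)  # compact JSON, no embedded newlines
    stream.write(line + "\n")

def read_message(stream) -> dict:
    """Read one newline-delimited JSON-RPC message from the stream."""
    return json.loads(stream.readline())

# Round-trip an MCP ping over an in-memory pipe standing in for
# the child process's stdin/stdout.
pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 7, "method": "ping"})
pipe.seek(0)
print(read_message(pipe))
```

Because the framing carries opaque JSON-RPC payloads, swapping this transport for Streamable HTTP changes nothing about the messages themselves.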

Security Considerations

MCP is powerful precisely because it grants LLMs access to real systems. That power demands a serious security posture.

Authentication and Authorization

The June 2025 update to the MCP specification classifies MCP servers as OAuth Resource Servers and requires clients to implement Resource Indicators (RFC 8707). This prevents a malicious server from obtaining tokens meant for a different server. In practice, production MCP deployments should enforce OAuth 2.1 with PKCE for all remote connections (Descope, 2026).
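Concretely, a Resource Indicator means the client names the specific MCP server when it requests a token, so the authorization server can scope the token to that server alone. A sketch of building such an authorization request — the endpoint, client ID, and code challenge below are placeholder values, not real credentials:

```python
from urllib.parse import urlencode

# Placeholder values: substitute your authorization server, client ID,
# PKCE challenge, and the canonical URI of the target MCP server.
params = {
    "response_type": "code",
    "client_id": "my-mcp-client",
    "redirect_uri": "http://localhost:8765/callback",
    "code_challenge": "E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM",
    "code_challenge_method": "S256",        # PKCE, required by OAuth 2.1
    "resource": "https://mcp.example.com/", # RFC 8707 Resource Indicator
}
auth_url = "https://auth.example.com/authorize?" + urlencode(params)
print(auth_url)
```

The `resource` parameter is the piece RFC 8707 adds: a token minted against `https://mcp.example.com/` cannot be replayed against a different MCP server.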

Prompt Injection and Tool Poisoning

Because tool descriptions in MCP are text that the LLM reads, they are susceptible to prompt injection. A malicious or compromised MCP server could craft tool descriptions that manipulate the model’s behavior — for example, instructing it to forward sensitive data to an attacker-controlled endpoint. In April 2025, security researchers documented several such attack vectors, including “tool poisoning” (where a legitimate-looking tool silently replaces a trusted one) and cross-tool data exfiltration (Wikipedia, 2026).

Security warning: Research published in July 2025 by Knostic found that nearly 2,000 internet-exposed MCP servers lacked any form of authentication. Backslash Security confirmed similar findings, noting widespread over-permissioning. Always authenticate your MCP servers. Always run local servers in sandboxed environments. Always apply the principle of least privilege (Red Hat, 2025; Descope, 2026).

Human-in-the-Loop

The MCP specification recommends that clients request explicit user permission before invoking tools. However, the protocol cannot enforce this — it depends on each host application implementing proper consent flows. A well-designed host should show the user exactly which tool the LLM wants to call, with which arguments, and wait for approval before executing.
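At its simplest, such a consent flow is an approval callback gating every tool invocation. The sketch below is illustrative: the stand-in approver auto-denies a hypothetical destructive tool, where a real host would render a dialog and wait for the user.

```python
from typing import Callable

def guarded_call(tool_name: str, arguments: dict,
                 execute: Callable[..., str],
                 approve: Callable[[str, dict], bool]) -> str:
    """Invoke a tool only if the user-supplied approver consents."""
    if not approve(tool_name, arguments):
        return f"denied: user rejected call to {tool_name}"
    return execute(**arguments)

# Stand-in approver: a real host would show the tool name and the exact
# arguments to the user and block until they click approve or deny.
approve = lambda name, args: name != "delete_table"

print(guarded_call("database_query", {"table": "sales"},
                   lambda table: f"queried {table}", approve))
print(guarded_call("delete_table", {"table": "sales"},
                   lambda table: f"deleted {table}", approve))
```

The essential property is that denial happens in the host, before any message reaches the server, so a manipulated model cannot bypass it.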

Building a Minimal MCP Server in Python

Let’s make MCP concrete with a minimal example. The following Python script creates an MCP server that exposes a single tool — a simple calculator that adds two numbers. It uses FastMCP, the high-level server API in the official mcp Python SDK.

# calculator_server.py — A minimal MCP server
# Requires: pip install mcp

from mcp.server.fastmcp import FastMCP

# FastMCP is the high-level API in the official SDK. The low-level
# mcp.server.Server class requires explicit list_tools/call_tool
# handlers and does not provide a @tool() decorator.
mcp = FastMCP("calculator")


@mcp.tool()
def add(a: float, b: float) -> str:
    """Add two numbers and return the result."""
    return f"{a} + {b} = {a + b}"


if __name__ == "__main__":
    # Serve over stdio (the default transport), so a host such as
    # Claude Desktop can launch this script as a child process.
    mcp.run()

To connect this server to Claude Desktop, add the following to your claude_desktop_config.json (use the absolute path to calculator_server.py in "args" if the script is not in Claude Desktop’s working directory):

{
  "mcpServers": {
    "calculator": {
      "command": "python",
      "args": ["calculator_server.py"]
    }
  }
}

Once Claude Desktop restarts, it discovers the add tool through the MCP handshake. You can now ask Claude “What is 42.7 plus 18.3?” and it will invoke the MCP server rather than relying on its internal arithmetic. In a real-world scenario, you would replace the calculator logic with database queries, API calls, or any other operation your AI system needs to perform.

The MCP Ecosystem in 2026

MCP adoption has accelerated rapidly since the protocol’s donation to the Linux Foundation. Here is a snapshot of the current landscape:

Major adopters: Anthropic (Claude Desktop, Claude Code), OpenAI (ChatGPT desktop app), Google DeepMind, Microsoft (Semantic Kernel, Azure OpenAI), Salesforce (Agentforce), Block, Cloudflare, and Replit all support or deploy MCP (Wikipedia, 2026).

Development tools: IDEs like Cursor, Zed, and Windsurf embed MCP clients natively. Code intelligence platforms like Sourcegraph use MCP to give AI assistants real-time project context. Claude Code — Anthropic’s CLI agentic coding tool — relies heavily on MCP for its tool ecosystem.

Server registry: Over 500 public MCP servers are available as of early 2026, covering databases (Postgres, MySQL, SQLite), file storage (Google Drive, Box, Dropbox), web scraping, document processing, messaging (Slack, email), project management (Asana, Jira), and many more. The community adds new servers weekly (Fast.io, 2026; MCP GitHub, 2026).

SDKs: Official implementations exist for TypeScript, Python, C#, Java, and Swift. Third-party SDKs have appeared for Rust and Go. The MCP Inspector tool allows developers to test and debug servers interactively.

MCP and the Rise of AI Agents

MCP is a foundational building block for agentic AI — systems where LLMs autonomously plan and execute multi-step workflows. Without MCP, building an agent that can query a CRM, draft an email, and log a record in a database required writing custom orchestration code for each step. With MCP, the agent simply connects to three MCP servers and the protocol handles communication, authentication, and error recovery (Stytch, 2025; IBM, 2026).

This is especially powerful when combined with machine learning techniques for tool selection. Modern agents use learned heuristics to decide which MCP tools to invoke and in what order, rather than following rigid, hand-coded workflows. The agent’s “reasoning” (the LLM) is cleanly separated from its “capabilities” (MCP servers), making both independently testable and upgradable.

The related Agent2Agent (A2A) protocol — a separate open standard for inter-agent communication — complements MCP by enabling agents themselves to collaborate. MCP handles the agent-to-tool connection; A2A handles the agent-to-agent connection. Together, they form the infrastructure layer for multi-agent systems.

Limitations and Open Challenges

MCP is not without rough edges. Several challenges remain as of early 2026:

Security maturity: As the Knostic and Backslash research showed, many deployed MCP servers lack basic authentication. The OAuth 2.1 specification update helps, but adoption is inconsistent across the ecosystem. Prompt injection attacks against tool descriptions remain an active research area.

Performance overhead: The JSON-RPC layer adds latency compared to direct, in-process function calls. For latency-sensitive applications (real-time trading, gaming), the overhead may be significant. Developers need to benchmark and decide whether the abstraction cost is acceptable for their use case.

Specification velocity: MCP is evolving rapidly. The specification has had multiple breaking changes between versions. Teams building production systems on MCP should pin specific protocol versions and budget for migration work.

Last-write-wins semantics: MCP does not include built-in conflict resolution. When multiple clients write to the same resource concurrently, the last write wins. For collaborative or multi-agent scenarios, applications need to implement their own concurrency control.
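A common application-level remedy is optimistic concurrency: attach a version to each resource and reject writes that were based on a stale read. A minimal sketch of the idea, independent of MCP itself:

```python
class VersionedStore:
    """Toy store with compare-and-set writes that reject stale updates."""

    def __init__(self):
        self._data: dict[str, tuple[int, str]] = {}  # key -> (version, value)

    def read(self, key: str) -> tuple[int, str]:
        return self._data.get(key, (0, ""))

    def write(self, key: str, expected_version: int, value: str) -> bool:
        version, _ = self._data.get(key, (0, ""))
        if version != expected_version:  # another client wrote first
            return False
        self._data[key] = (version + 1, value)
        return True

store = VersionedStore()
v, _ = store.read("report")
assert store.write("report", v, "draft A")      # first writer succeeds
assert not store.write("report", v, "draft B")  # stale write is rejected
```

In a multi-agent deployment, the version would travel with the resource contents so each agent can prove its write is based on the latest state.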

Frequently Asked Questions

Is MCP only for Anthropic’s Claude?

No. MCP is model-agnostic. OpenAI, Google DeepMind, and many open-source model runtimes (Ollama, LangChain, LlamaIndex) all support or are compatible with MCP. Any LLM that can consume tool definitions through its function-calling API can work with MCP through an appropriate client.

Do I need MCP if I already use function calling?

Function calling works well for simple, single-model setups. MCP becomes valuable when you need to connect multiple models to multiple tools, want dynamic tool discovery, or are building multi-step agentic workflows. MCP standardizes what function calling leaves vendor-specific.

Is MCP the same as RAG?

No. RAG (Retrieval-Augmented Generation) is a pattern for grounding LLM responses in retrieved documents. MCP is a protocol for connecting LLMs to any external tool or data source, including — but not limited to — retrieval systems. You can build a RAG pipeline that uses an MCP server for document retrieval, but MCP also supports actions like sending emails, querying databases, and executing code.

How secure is MCP?

The protocol’s security depends on implementation. MCP itself specifies OAuth 2.1 with PKCE for authentication and recommends least-privilege access and human-in-the-loop consent flows. However, security research in 2025 found widespread deployment gaps. Always authenticate servers, sandbox local processes, and audit tool permissions.

Can I build my own MCP server?

Yes. The official SDKs for Python, TypeScript, C#, Java, and Swift make it straightforward. A minimal server exposing a single tool can be written in under 30 lines of code (see the Python example above). For production deployments, add authentication, logging, rate limiting, and error handling.

What is the difference between local and remote MCP servers?

Local servers run on your machine as child processes, communicating via stdio. They are fast and require no network configuration but serve a single client. Remote servers run on cloud infrastructure, communicate over Streamable HTTP, support multiple concurrent clients, and integrate with enterprise authentication flows.


References

Anthropic. (2024, November 25). Introducing the Model Context Protocol. Anthropic Blog. https://www.anthropic.com/news/model-context-protocol

Descope. (2026, January 15). What is the Model Context Protocol (MCP) and how it works. Descope Learn. https://www.descope.com/learn/post/mcp

Elastic. (2026). What is the Model Context Protocol (MCP)? Elastic. https://www.elastic.co/what-is/mcp

Fast.io. (2026). Model Context Protocol: A complete guide for 2026. https://fast.io/resources/model-context-protocol/

Google Cloud. (2025). What is Model Context Protocol (MCP)? A guide. Google Cloud Discover. https://cloud.google.com/discover/what-is-model-context-protocol

IBM. (2025). What is Model Context Protocol (MCP)? IBM Think. https://www.ibm.com/think/topics/model-context-protocol

IBM Developer. (2026, January 26). Model Context Protocol architecture patterns for multi-agent AI systems. IBM Developer. https://developer.ibm.com/articles/mcp-architecture-patterns-ai-systems/

Model Context Protocol. (2025). Specification — Version 2025-11-25. modelcontextprotocol.io. https://modelcontextprotocol.io/specification/2025-11-25

Model Context Protocol. (2025). Architecture overview. modelcontextprotocol.io. https://modelcontextprotocol.io/docs/learn/architecture

Model Context Protocol. (2026). GitHub organization. https://github.com/modelcontextprotocol

Red Hat. (2025, July 1). Model Context Protocol (MCP): Understanding security risks and controls. Red Hat Blog. https://www.redhat.com/en/blog/model-context-protocol-mcp-understanding-security-risks-and-controls

Singh, A., Ehtesham, A., Kumar, S., & Khoei, T. T. (2025). A survey of the Model Context Protocol (MCP): Standardizing context to enhance large language models (LLMs). Preprints. https://doi.org/10.20944/preprints202504.0245.v1

Stytch. (2025, March 28). Model Context Protocol (MCP): A comprehensive introduction for developers. Stytch Blog. https://stytch.com/blog/model-context-protocol-introduction/

Wikipedia. (2026). Model Context Protocol. Wikipedia. https://en.wikipedia.org/wiki/Model_Context_Protocol
