Model Context Protocol: The Open Standard for AI Integrations
AI applications such as Claude, ChatGPT or Cursor require access to external data, tools and APIs to be truly useful. Previously, every integration required a custom connection – a fragmented M-by-N problem where each combination of AI application and data source had to be solved individually. The Model Context Protocol (MCP) solves this problem with an open standard that connects AI applications to external systems in a unified manner.
Anthropic released MCP at the end of 2024 and describes it as the "USB-C for AI applications": Instead of building a separate adapter for every data source, MCP defines a universal protocol through which any AI host can communicate with any server. The protocol is based on JSON-RPC 2.0, supports stateful connections and is now backed by a broad alliance – including Claude, VS Code, Cursor, Gemini CLI, Amazon Q and many more; for ChatGPT, the scope currently depends on the plan, mode and deployment according to OpenAI.
Whether you are using MCP for the first time or optimising your existing architecture – here you will find 30 well-founded answers, from the basics and architecture to security and best practices.
MCP architecture: A host manages multiple clients, each client connects to a server
Interactive Demo: MCP in Action
Experience the three core concepts of MCP step by step, from the architecture to the JSON-RPC lifecycle and tool calling:
Select a scenario and observe step by step how MCP hosts, clients, and servers communicate.
Select a scenario and start the simulation.
30 questions and answers about the Model Context Protocol, structured into 7 categories. Each answer includes a brief summary and a detailed explanation with references.
Table of Contents
Quick Overview: All 30 Questions
Click on a question to jump to the detailed answer.
Category 1: Fundamentals & Concepts
1.1. What is the Model Context Protocol and what problem does it solve?
1.2. How does MCP differ from classic API integrations?
1.3. Who developed MCP and who supports it?
1.4. What does the "USB-C for AI" analogy mean?
1.5. What is the relationship between MCP and the Language Server Protocol?
Category 2: Architecture & Components
2.1. How is the MCP architecture structured (Hosts, Clients, Servers)?
2.2. What are the three server primitives (Tools, Resources, Prompts)?
2.3. What are client primitives (Sampling, Elicitation, Roots)?
2.4. How does the protocol lifecycle work?
2.5. What role does JSON-RPC 2.0 play in MCP?
Category 3: Transport & Connections
3.1. Which transport mechanisms does MCP support?
3.2. What is the difference between stdio and Streamable HTTP?
3.3. How does session management work with Streamable HTTP?
3.4. What is resumability and how is message loss prevented?
Category 4: Server Features in Detail
4.1. How do MCP Tools work – from discovery to execution?
4.2. What are MCP Resources and how do they differ from Tools?
4.3. How do MCP Prompts work and what are they used for?
4.4. What is Sampling and how can a server request LLM completions?
4.5. What is Elicitation and how does a server request user input?
Category 5: Ecosystem & Clients
5.1. Which AI applications support MCP as a client?
5.2. Which SDKs are available and in which languages?
5.3. Which reference servers does the official repository provide?
5.4. How do you build your own MCP server?
Category 6: Security & Best Practices
6.1. Which security principles does MCP define?
6.2. How are tool calls secured (Human-in-the-Loop)?
6.3. What are the risks of MCP and how can they be minimised?
6.4. How does MCP handle data privacy and user consent?
Category 7: Practice, Dos & Don'ts
7.1. What are the most important dos and don'ts when using MCP?
7.2. What does the future of MCP look like?
7.3. How do I configure MCP in VS Code, Cursor, or Claude Desktop?
Category 1: Basics & Concepts
The Model Context Protocol solves a fundamental problem in AI integration. Understand the core concepts before implementing your first MCP connection.
1.1. What is the Model Context Protocol and what problem does it solve?
Short answer: MCP is an open standard by Anthropic that connects AI applications to external data sources, tools, and APIs via a unified protocol – thereby solving the M-by-N problem of fragmented integrations.
Detailed explanation:
Without MCP, every AI application (Claude, ChatGPT, Cursor, etc.) has to build its own integration for every data source (GitHub, Slack, databases, etc.). With M applications and N data sources, this results in M x N individual connections – an enormous effort that scales poorly.
MCP reduces this problem to M + N: Each AI application implements the MCP client once, and each data source implements the MCP server once. Afterwards, all clients can communicate with all servers.
Concrete use cases from the MCP documentation: connecting Google Calendar and Notion with an AI, converting Figma designs directly into web applications, querying enterprise databases using natural language, or creating 3D designs in Blender via AI.
MCP is to AI applications what HTTP is to websites: a common protocol that ensures different systems can communicate with each other without needing to know each other in detail.
1.2. How does MCP differ from classic API integrations?
Short answer: Classic APIs are static and task-specific; MCP offers dynamic discovery, standardised capability negotiation, and stateful sessions within a unified protocol.
Detailed explanation:
| Aspect | Classic API Integration | Model Context Protocol |
|---|---|---|
| Discovery | Manual – developers read docs | Automatic – tools/list, resources/list, prompts/list |
| Connection state | Stateless (REST) or single-purpose WebSocket | Stateful with capability negotiation |
| Scaling | M x N integrations | M + N implementations |
| Evolution | Breaking changes require client updates | Dynamic updating via notifications |
| Protocol | API-specific (REST, GraphQL, gRPC) | Unified JSON-RPC 2.0 |
| Security model | Varies per API | Standardised principles: User Consent, Tool Safety |
A decisive advantage is dynamic tool discovery: An MCP server can change its available tools at runtime and inform the client via notifications/tools/list_changed. The client then queries tools/list again and knows the new capabilities – without a restart or manual intervention.
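This discovery-and-refresh loop can be sketched in a few lines of TypeScript. The `ToolCache` class and `ServerStub` interface below are illustrative stand-ins for a real MCP client connection, not part of any SDK:

```typescript
// Sketch: a client-side cache that re-queries tools/list whenever the
// server announces a change. ServerStub stands in for a real connection.
type Tool = { name: string; description?: string };

interface ServerStub {
  listTools(): Tool[];
}

class ToolCache {
  private tools: Tool[] = [];
  constructor(private server: ServerStub) {
    this.refresh();
  }
  // Called for every incoming notification from the server.
  onNotification(method: string): void {
    if (method === "notifications/tools/list_changed") this.refresh();
  }
  private refresh(): void {
    this.tools = this.server.listTools();
  }
  names(): string[] {
    return this.tools.map((t) => t.name);
  }
}
```

The point of the sketch is that the client never needs a restart: the notification alone triggers a fresh tools/list query.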
Additionally, the client and server negotiate their capabilities when establishing a connection: Which primitives are supported? Which protocol version? This handshake procedure makes MCP connections more robust and future-proof than rigid API contracts.
1.3. Who developed MCP and who supports it?
Short answer: Anthropic developed MCP and published it as an open standard. The protocol is now supported by Microsoft (VS Code, GitHub Copilot), Google (Gemini CLI), Block, Amazon and numerous tool developers; for OpenAI/ChatGPT, the scope of MCP currently depends on the plan, mode, and use case.
Detailed explanation:
Anthropic published MCP as an open standard with the goal of replacing fragmented AI integrations with a unified protocol. Its early adoption has been remarkably fast.
Block CTO Dhanji R. Prasanna summarises its significance: "Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications".
The complete specification, all SDKs, and the reference servers are available at github.com/modelcontextprotocol under an open licence. The community can contribute directly to its further development.
1.4. What does the "USB-C for AI" analogy mean?
Short answer: Just as USB-C provides a universal connection for devices, MCP creates a universal standard for AI integrations – one protocol instead of many proprietary adapters.
Detailed explanation:
The official MCP introduction describes it like this: "Think of MCP like a USB-C port for AI applications". The analogy works on several levels:
| Aspect | USB-C | MCP |
|---|---|---|
| Replaces | Micro-USB, Lightning, Mini-USB, proprietary connectors | Separate adapter per AI app + data source |
| Provides | One universal connection for data, video, power | One universal protocol for tools, data, prompts |
| Compatibility | Every device, every cable | Every AI client, every server |
| Standardised by | USB Implementers Forum | Open specification on GitHub |
The comparison illustrates the core advantage: Instead of building a tailor-made integration for every combination of AI application and data source, both sides implement the MCP standard once. The result is universal compatibility – every MCP client can work with every MCP server, just as every USB-C device works with every USB-C cable.
1.5. What is the relationship between MCP and the Language Server Protocol?
Short answer: MCP is inspired by the Language Server Protocol (LSP) and transfers its principle – standardised communication between hosts and capability servers – to the realm of AI integrations.
Detailed explanation:
LSP solved a similar M-by-N problem in the IDE world: instead of writing a separate language service for every combination of editor and programming language, LSP defined a unified protocol. The result: a Language Server for Python works in VS Code, Neovim, and any other LSP-compatible editor.
MCP transfers exactly this principle to AI applications:
| Aspect | LSP | MCP |
|---|---|---|
| Problem | M Editors x N Languages | M AI Apps x N Data Sources |
| Solution | Standardised Editor-Language Protocol | Standardised AI-Data Source Protocol |
| Host | IDE (VS Code, Neovim) | AI Application (Claude, ChatGPT) |
| Server | Language Server (Python, TypeScript) | MCP Server (GitHub, Filesystem) |
| Capabilities | Autocomplete, Diagnostics, Formatting | Tools, Resources, Prompts |
| Transport | stdio, Pipe | stdio, Streamable HTTP |
Both protocols are based on JSON-RPC and follow the principle of capability negotiation: upon connection, server and client negotiate which features they support. The architectural decision to build upon LSP enables developers with LSP experience to quickly enter the MCP world.
Category 2: Architecture & Components
The three-tier MCP architecture cleanly separates hosts, clients, and servers. Understand the primitives and the lifecycle to build robust integrations.
2.1. How is the MCP architecture structured (Hosts, Clients, Servers)?
Short answer: MCP follows a three-layer architecture: The Host (e.g. Claude Desktop) manages multiple Clients, each maintaining a 1:1 connection with a Server.
Detailed explanation:
The three layers have clearly defined roles:
| Layer | Role | Example |
|---|---|---|
| Host | AI application that manages MCP clients, enforces security policies, and obtains user consent | Claude Desktop, VS Code, Cursor |
| Client | Protocol client within the host, maintains a 1:1 connection with exactly one server | VS Code MCP Client 1 → Sentry, Client 2 → File system |
| Server | Provides capabilities (tools, resources, prompts) and runs locally or remotely | File system server, GitHub server, Database server |
A concrete example from the MCP documentation: VS Code acts as a host and internally creates one MCP client for the Sentry connection and another for file system access. Each client independently negotiates its capabilities with the respective server.
Below the architectural layers operate two levels: the data layer (JSON-RPC 2.0 as the message format) and the transport layer (stdio or Streamable HTTP as the transmission channel).
The 1:1 relationship between client and server is a deliberate architectural decision: Server A cannot access the data of Server B. The host controls which servers are activated and what permissions they receive.
2.2. What are the three server primitives (Tools, Resources, Prompts)?
Short answer: MCP servers expose three types of capabilities: Tools (model-driven actions), Resources (application-driven data) and Prompts (user-driven interaction templates).
Detailed explanation:
The crucial difference lies in the control level – who decides when a primitive is used:
| Primitive | Controlled by | Purpose | Examples |
|---|---|---|---|
| Tools | LLM (Model) | Executing actions, calculations, side effects | searchFlights, createCalendarEvent, sendEmail |
| Resources | Application (Host) | Providing context data, read-only | calendar://events/2024, file:///Documents/ |
| Prompts | User | Interaction templates with arguments | plan-vacation, code-review as slash commands |
Tools are the most frequently used primitive: the LLM decides which tool to call based on the context. Tools can have side effects (e.g. sending emails, writing files) and return structured results.
Resources, on the other hand, merely provide data: the application decides which resources are included as context. They work similarly to GET endpoints in REST – read-only and addressed via URIs such as trips://history/.
Prompts are called explicitly by users, e.g. as slash commands. A plan-vacation prompt could expect structured arguments such as destination and travel dates and generate a multi-part conversation from them.
2.3. What are Client Primitives (Sampling, Elicitation, Roots)?
Short Answer: Client Primitives are back-channel capabilities that the server can request from the client: Sampling (LLM completions), Elicitation (structured user input) and Roots (file system boundaries).
Detailed Explanation:
While Server Primitives deliver data and functions to the client, Client Primitives enable the reverse path – the server can make requests to the client:
| Client Primitive | Function | Security Control |
|---|---|---|
| Sampling | Server requests an LLM completion without needing its own API keys | Human-in-the-Loop: User approves request and verifies response |
| Elicitation | Server requests structured data from users (via JSON Schema) | User can reply accept, decline or cancel |
| Roots | Server learns which file system areas it has access to (file:// URIs) | Advisory – not technically enforced, but respected as a convention |
Sampling is particularly powerful: It enables agentic behaviours within MCP server features. The server can "ask the LLM for advice" without needing its own API key – the request runs via the client, which controls access to the model.
Elicitation is a newer primitive that was introduced in the current specification. It is deliberately limited to flat objects with primitive properties (String, Number, Boolean, Enum).
Roots give the server orientation regarding the working context – e.g. file:///home/user/project/. They are deliberately advisory and are not technically enforced, in order to preserve flexibility.
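A server that honours roots might apply a simple prefix check before touching a file. The helper below is a sketch of that convention, with an invented function name, not an official API:

```typescript
// Sketch: a server voluntarily restricting itself to the client's roots.
// Roots are advisory, so this check is a convention, not an enforcement.
function isWithinRoots(fileUri: string, roots: string[]): boolean {
  return roots.some((root) => {
    // Normalise the root to end with "/" so "projectX" doesn't match "project".
    const prefix = root.endsWith("/") ? root : root + "/";
    return fileUri === root || fileUri.startsWith(prefix);
  });
}
```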
2.4. How does the protocol lifecycle work?
Short answer: The MCP lifecycle consists of three phases: initialisation (capability negotiation), operation (bidirectional message exchange), and shutdown (cleanly terminating the connection).
Detailed explanation:
When establishing a connection, the client and server negotiate their capabilities in a standardised handshake:
The client sends an initialize request with its supported protocol version and capabilities:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-11-25",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {}
    },
    "clientInfo": {
      "name": "ExampleClient",
      "version": "1.0.0"
    }
  }
}
```

After initialisation, the client and server can exchange messages bidirectionally: the client can call tools and query resources; the server can send sampling requests and elicitation requests. Notifications (without a response) enable event-based updates.
2.5. What Role Does JSON-RPC 2.0 Play in MCP?
Short Answer: JSON-RPC 2.0 forms the message layer of MCP: All communication – requests, responses, and notifications – is encoded as standardised JSON-RPC messages.
Detailed Explanation:
MCP uses three JSON-RPC message types:
| Message Type | Characteristics | Usage in MCP |
|---|---|---|
| Request | Has an id, expects a response | tools/call, resources/read, initialize |
| Response | Contains result or error, references the id | Tool result, resource content |
| Notification | No id, no response expected | notifications/tools/list_changed, notifications/initialized |
The choice of JSON-RPC 2.0 brings several advantages: The format is language-independent, well-documented, and lightweight. It supports both synchronous request-response patterns and asynchronous notifications. Two error types are defined: protocol errors (as JSON-RPC error objects) and tool execution errors (as a result with isError: true).
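The three message types can be told apart purely by their fields: a method plus an id is a request, an id without a method is a response, and a method without an id is a notification. A minimal classifier sketch (the type and function names are illustrative):

```typescript
// Sketch: classifying the three JSON-RPC 2.0 message types used by MCP.
type JsonRpcMessage = {
  jsonrpc: "2.0";
  id?: number | string;
  method?: string;
  params?: unknown;
  result?: unknown;
  error?: { code: number; message: string };
};

function classify(msg: JsonRpcMessage): "request" | "response" | "notification" {
  // A request carries both an id and a method.
  if (msg.id !== undefined && msg.method !== undefined) return "request";
  // A response carries an id but no method (result or error instead).
  if (msg.id !== undefined) return "response";
  // Everything else is a notification: no id, no reply expected.
  return "notification";
}
```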
The protocol version is exchanged during the handshake as a protocolVersion field. The current version is "2025-11-25". Additionally, an MCP-Protocol-Version header is sent along with Streamable HTTP connections.
Category 3: Transport & Connections
MCP defines two transport mechanisms for different deployment scenarios. The choice of the correct transport affects latency, security, and scalability.
3.1. Which transport mechanisms does MCP support?
Short answer: MCP defines two official transports: stdio for local subprocesses and Streamable HTTP for remote servers. Both transport the same JSON-RPC messages.
Detailed explanation:
| Aspect | stdio | Streamable HTTP |
|---|---|---|
| Deployment | Local – server as subprocess | Remote or local – server as HTTP endpoint |
| Message channel | stdin/stdout, newline-delimited | HTTP POST + optional SSE streaming |
| Session management | Implicit via process lifecycle | Explicit via Mcp-Session-Id header |
| Resumability | Not available | SSE event IDs + Last-Event-ID |
| Security | OS process isolation | Origin validation, authentication, TLS |
| Scaling | One server per client instance | Multiple clients per server possible |
| Typical use cases | CLI tools, local file systems, IDEs | Cloud APIs, team servers, microservices |
stdio is the simpler transport: the client starts the server as a child process and communicates via the standard streams. Each JSON-RPC message is sent as a separate line. stderr is used for logging and must not contain any protocol messages.
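The newline-delimited framing can be sketched as follows; `frame` and `parseChunks` are hypothetical helper names, and a production parser would additionally buffer a trailing partial line until the next chunk arrives:

```typescript
// Sketch: newline-delimited JSON-RPC framing as used over stdio.
// One message per line; chunks from the stream are reassembled first.
function frame(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

function parseChunks(chunks: string[]): object[] {
  // Concatenate raw chunks, then split on newlines. Empty segments
  // (e.g. after the final newline) are dropped.
  const lines = chunks.join("").split("\n").filter((l) => l.length > 0);
  return lines.map((l) => JSON.parse(l));
}
```

Because messages may be split arbitrarily by the OS pipe, a message boundary is only ever a newline, never a chunk boundary.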
Streamable HTTP replaces the previous HTTP+SSE transport. The client sends JSON-RPC messages as HTTP POST requests to the server endpoint. The server can either respond directly with JSON or open an SSE stream through which multiple messages are streamed.
stdio for everything local: file system access, local Git operations, CLI tool wrappers. Streamable HTTP for everything remote: cloud APIs, cross-team servers, production deployments with authentication and scaling.
3.2. What is the difference between stdio and Streamable HTTP?
Short answer: stdio communicates via the standard streams of a child process (simple, local); Streamable HTTP uses HTTP POST with optional SSE streaming (flexible, remote-capable, with session management).
Detailed explanation:
With stdio, the client starts the server as a subprocess (e.g. npx @modelcontextprotocol/server-filesystem) and exchanges newline-delimited JSON-RPC messages via the standard streams: requests and responses over stdin/stdout, logging over stderr.
With Streamable HTTP, the server operates as an HTTP endpoint: the client sends JSON-RPC messages as HTTP POST requests, and the server responds either directly with JSON or with an SSE stream over which multiple messages can be delivered.
3.3. How does session management work with Streamable HTTP?
Short answer: After initialisation, the server assigns an Mcp-Session-Id, which the client sends as an HTTP header in every subsequent request. The server can reject invalid or missing session IDs.
Detailed explanation:
Session management in Streamable HTTP follows defined steps:
1. The client sends the initial initialize request without a session ID
2. The server assigns a session and returns it in the Mcp-Session-Id response header
3. The client includes this header in every subsequent request
4. The server can reject requests with an invalid or missing session ID, forcing the client to re-initialise
This mechanism enables stateful connections over the stateless HTTP protocol. The server can manage session-specific data (e.g., active subscriptions, user context) and terminate sessions if necessary.
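On the client side, session handling amounts to remembering one header value. The class and method names in this sketch are invented for illustration, not taken from an SDK:

```typescript
// Sketch: a client-side wrapper that captures the Mcp-Session-Id from the
// initialize response and attaches it to every subsequent request.
class HttpSession {
  private sessionId: string | null = null;

  // Called with the response headers of the initialize request.
  onInitializeResponse(headers: Record<string, string>): void {
    this.sessionId = headers["Mcp-Session-Id"] ?? null;
  }

  // Headers for every subsequent POST; includes the session ID if assigned.
  headersForRequest(): Record<string, string> {
    const h: Record<string, string> = { "Content-Type": "application/json" };
    if (this.sessionId) h["Mcp-Session-Id"] = this.sessionId;
    return h;
  }
}
```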
Servers should validate the Origin header on HTTP requests to prevent cross-site attacks. Local servers should also be bound to localhost to block access from the network.
3.4. What is resumability and how is message loss prevented?
Short answer: Resumability uses SSE Event IDs and the Last-Event-ID header to seamlessly continue from where the communication was interrupted in the event of a disconnection.
Detailed explanation:
With Streamable HTTP, connections can drop at any time – due to network issues, timeouts, or server restarts. Resumability prevents messages from being lost during these events:
1. The server attaches an id to every SSE message
2. After a disconnection, the client reconnects and sends the ID of the last message it received in the Last-Event-ID header
3. The server replays only the messages sent after that ID
This pattern is particularly relevant for long-running operations such as large file exports or complex tool executions. Without resumability, the client would have to restart the entire operation from the beginning upon every disconnection. With event IDs, only the missing part is delivered instead.
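On the server side, this can be sketched as a replay buffer keyed by an incrementing event ID (class and method names are illustrative):

```typescript
// Sketch: a server-side replay buffer for SSE resumability. Each outgoing
// message receives an incrementing event ID; on reconnect, the client's
// Last-Event-ID tells the server which messages to replay.
class ReplayBuffer {
  private events: { id: number; data: string }[] = [];
  private nextId = 1;

  // Record an outgoing SSE message and return its event ID.
  send(data: string): number {
    const id = this.nextId++;
    this.events.push({ id, data });
    return id;
  }

  // Replay every message sent after the last ID the client received.
  resumeAfter(lastEventId: number): string[] {
    return this.events.filter((e) => e.id > lastEventId).map((e) => e.data);
  }
}
```

A real server would also evict old events after a retention window so the buffer does not grow without bound.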
Resumability is not intended for the stdio transport: The connection is bound to the process lifecycle. If the server process is terminated, the entire connection must be re-established.
Category 4: Server Features in Detail
The central MCP features in detail: Server features such as Tools, Resources, and Prompts, as well as client features like Sampling and Elicitation. Each primitive fulfils a specific role in the interaction between AI and external systems.
4.1. How do MCP Tools work – from Discovery to Execution?
Short answer: Tools go through a four-stage lifecycle: Discovery (tools/list), Selection by the LLM, Execution (tools/call), and Dynamic updating via notifications.
Detailed explanation:
The complete tool lifecycle in MCP: Discovery → Selection → Execution → Updating
Each tool is described via a structured definition:
| Field | Purpose | Required |
|---|---|---|
| name | Unique identifier of the tool | Yes |
| title | Human-readable display name | No |
| description | Description for LLM selection | No |
| inputSchema | JSON Schema for input parameters | Yes |
| outputSchema | JSON Schema for structured output | No |
| annotations | Metadata (readOnlyHint, destructiveHint, openWorldHint) | No |
Tool results can return various content types: Text, Images, Audio, Resource links, and Embedded resources. In the event of errors, MCP distinguishes between protocol errors (JSON-RPC Error) and tool execution errors (result with isError: true), which allows for differentiated error handling.
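The two error channels can be distinguished mechanically. The following sketch, with invented type names, shows how a client might interpret a tools/call response:

```typescript
// Sketch: separating MCP's two error channels. A protocol error arrives as
// a JSON-RPC error object; a tool execution error is a normal result whose
// isError flag is set, so the LLM can see and react to the failure text.
type ToolCallOutcome =
  | { kind: "protocol-error"; code: number; message: string }
  | { kind: "tool-error"; text: string }
  | { kind: "success"; text: string };

function interpret(response: {
  error?: { code: number; message: string };
  result?: { isError?: boolean; content: { type: string; text: string }[] };
}): ToolCallOutcome {
  if (response.error) {
    // Protocol-level failure: malformed request, unknown tool, etc.
    return { kind: "protocol-error", ...response.error };
  }
  const text = response.result?.content[0]?.text ?? "";
  // Execution-level failure: the tool ran but reported an error.
  return response.result?.isError
    ? { kind: "tool-error", text }
    : { kind: "success", text };
}
```

The distinction matters because tool errors are typically fed back to the model, while protocol errors indicate a client or server bug.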
Tools can have side effects (writing files, sending emails, calling APIs). The specification requires: input validation, access controls, rate limiting, and output sanitisation.
4.2. What are MCP Resources and how do they differ from Tools?
Short answer: Resources are URI-identified, read-only data sources that provide context, whereas Tools execute actions.
Detailed explanation:
Resources represent data that an AI application can integrate as context – similar to GET endpoints in REST. There are two types:
| Resource type | URI pattern | Example |
|---|---|---|
| Direct Resources | Fixed URI (e.g. file:///config.json) | Configuration files, static data |
| Resource Templates | Parameterised according to RFC 6570 (e.g. users://{id}/profile) | User profiles, dynamic calendar entries |
Resources support various URI schemes: https://, file://, git:// and custom schemes such as calendar://events/2024 or trips://history/. The content can be text or binary (Base64-encoded).
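Simple {var} substitution – the level-1 subset of RFC 6570 that patterns like users://{id}/profile rely on – can be sketched in one function. The helper name is illustrative, and real templates can use richer RFC 6570 operators:

```typescript
// Sketch: minimal expansion of level-1 URI templates such as
// users://{id}/profile. Only plain {var} substitution is handled here.
function expandTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, name) =>
    // Percent-encode values so they remain valid inside a URI.
    encodeURIComponent(vars[name] ?? "")
  );
}
```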
The fundamental difference compared to Tools: Resources have no side effects. They provide data but do not change anything. They therefore serve as a context interface, whereas Tools are geared towards execution and actions.
Additionally, Resources support subscriptions: The client can register for change notifications and is informed via notification when a Resource changes. Annotations such as audience, priority and lastModified provide metadata for prioritisation.
4.3. How do MCP Prompts work and what are they used for?
Short answer: Prompts are user-driven interaction templates that are discovered via prompts/list and called with arguments via prompts/get – typically exposed as slash commands in the UI.
Detailed explanation:
MCP Prompts differ fundamentally from Tools and Resources due to their control layer: They are triggered explicitly by users, not by the LLM or the application.
A Prompt consists of a unique name, an optional title and description, and a list of typed arguments; invoking it via prompts/get returns the prepared messages.
A practical example: A plan-vacation Prompt might expect arguments like destination and dates. Upon invocation, the server generates a multi-part conversation from this, which the LLM uses as a starting point for holiday planning – including embedded Resources like trips://history/ for past trips.
Prompts also support multi-turn conversations: The server can return several consecutive Messages with alternating roles to pre-structure complex interaction patterns.
4.4. What is sampling and how can a server request LLM completions?
Short answer: Sampling allows an MCP server to request an LLM completion via the client – without requiring its own API keys. Users retain full control through a human-in-the-loop approach.
Detailed explanation:
Sampling flow: The server requests an LLM completion, the human retains control
Sampling solves a practical problem: MCP servers sometimes require the support of an LLM to fulfil their tasks – e.g. to analyse unstructured data or make decisions. Without sampling, the server would have to manage its own API keys. With sampling, it utilises the client's access.
The server can specify model preferences during sampling:
| Preference | Description |
|---|---|
| hints | Suggestions for specific model names |
| costPriority | Weighting of costs (0-1) |
| speedPriority | Weighting of speed (0-1) |
| intelligencePriority | Weighting of model intelligence (0-1) |
The human-in-the-loop is twofold: users can both review and modify the server's request, as well as inspect and authorise the LLM's response before it is returned to the server. The client has the final say – it can overwrite the model selection, shorten requests, or reject them entirely.
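A client could translate these priorities into a weighted score over its model catalogue. The catalogue and scoring below are purely illustrative – a real client applies its own policy and may ignore the server's preferences entirely:

```typescript
// Sketch: picking a model from sampling preferences via a weighted score.
// The per-model cheap/fast/smart ratings (each 0-1) are invented here.
type Prefs = { costPriority: number; speedPriority: number; intelligencePriority: number };
type Model = { name: string; cheap: number; fast: number; smart: number };

function pickModel(models: Model[], p: Prefs): string {
  let best = models[0];
  let bestScore = -1;
  for (const m of models) {
    // Weight each model's ratings by the server's stated priorities.
    const score =
      p.costPriority * m.cheap + p.speedPriority * m.fast + p.intelligencePriority * m.smart;
    if (score > bestScore) {
      bestScore = score;
      best = m;
    }
  }
  return best.name;
}
```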
4.5. What is Elicitation and how does a server request user input?
Short answer: Elicitation is a newer MCP primitive that allows servers to request structured data directly from users – with JSON schema validation and three response actions: accept, decline or cancel.
Detailed explanation:
Elicitation solves the problem that servers sometimes require information which can neither be derived from the LLM context nor from existing data. Example: A deployment server requires confirmation of an environment before it proceeds.
The workflow:
1. The server sends elicitation/create with a message and a JSON schema describing the requested data
2. The client presents the request to the user, e.g. as a form
3. The user responds with accept (including the data), decline, or cancel
The JSON schema is deliberately restricted: Only flat objects with primitive properties (String, Number, Boolean, Enum) are allowed – no nested objects or arrays.
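The flatness rule can be enforced with a small validator. This sketch assumes a simplified schema shape and an invented function name:

```typescript
// Sketch: validating that an elicitation schema is a flat object whose
// properties are all primitive (string, number, integer, boolean, or enum).
type Schema = {
  type: string;
  properties?: Record<string, { type: string; enum?: string[] }>;
};

function isValidElicitationSchema(schema: Schema): boolean {
  if (schema.type !== "object" || !schema.properties) return false;
  const primitive = new Set(["string", "number", "integer", "boolean"]);
  // Reject any property that is neither primitive nor an enum,
  // which rules out nested objects and arrays.
  return Object.values(schema.properties).every(
    (prop) => primitive.has(prop.type) || prop.enum !== undefined
  );
}
```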
Servers must not request sensitive data via Elicitation – no passwords, API keys or personal identification numbers. The specification explicitly forbids this.
Category 5: Ecosystem & Clients
The MCP ecosystem is growing rapidly: from SDKs in numerous languages to reference servers and broad client support. An overview of the current status.
5.1. Which AI applications support MCP as a client?
Short answer: MCP is supported by a broad range of AI applications, including Claude Desktop, Claude Code, Claude.ai, VS Code (via GitHub Copilot), Cursor, Gemini CLI, Amazon Q, and numerous others. ChatGPT also offers MCP support; however, according to OpenAI, the extent of this currently depends on the plan, mode, and specific use case.
Detailed explanation:
Client support varies depending on the implemented features. The MCP documentation tracks the following capabilities per client; for ChatGPT, the current OpenAI documentation should additionally be consulted, as availability and permissions currently depend on the plan and mode:
| Client | Tools | Resources | Prompts | Sampling | Elicitation | Roots |
|---|---|---|---|---|---|---|
| Claude Desktop | Yes | Yes | Yes | – | – | Yes |
| Claude Code | Yes | Yes | Yes | – | Yes | Yes |
| Claude.ai | Yes | Yes | Yes | – | – | – |
| VS Code (Copilot) | Yes | Yes | Yes | – | – | – |
| Cursor | Yes | Yes | Yes | – | Yes | Yes |
| Gemini CLI | Yes | – | – | – | – | – |
| Amazon Q | Yes | – | – | – | – | – |
Furthermore, numerous other applications support MCP: Cline, Continue, Goose, fast-agent, OpenAI Codex, 5ire, AgenticFlow, BoltAI, Chatbox, and many more.
A special case currently applies to ChatGPT: According to OpenAI, full MCP with write/modify actions is currently in the beta rollout phase for Business, Enterprise, and Edu; Pro currently supports MCP in Developer Mode with read/fetch permissions.
VS Code offers particularly comprehensive MCP integration: Configuration via .vscode/mcp.json, sandbox support on macOS/Linux, auto-discovery from Claude Desktop configurations, and CLI installation via code --add-mcp.
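A minimal .vscode/mcp.json entry could look like the following sketch; the server name and workspace path are placeholders, and the exact schema should be checked against the current VS Code documentation:

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    }
  }
}
```

VS Code then starts the server as a stdio subprocess when an MCP-enabled chat session needs it.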
5.2. Which SDKs are available and in which languages?
Short answer: Official SDKs exist in three tiers: Tier 1 (TypeScript, Python, C#, Go), Tier 2 (Java, Rust), and Tier 3 (Swift, Ruby, PHP). Kotlin is planned.
Detailed explanation:
All SDKs are maintained at github.com/modelcontextprotocol:
| Tier | Language | Package / Repository | Typical Use Case |
|---|---|---|---|
| 1 | TypeScript | @modelcontextprotocol/sdk | Web-based servers, Node.js integrations |
| 1 | Python | mcp | Data science, ML pipelines, scripting |
| 1 | C# | ModelContextProtocol | .NET ecosystem, enterprise applications |
| 1 | Go | github.com/modelcontextprotocol/go-sdk | Cloud-native servers, microservices |
| 2 | Java | modelcontextprotocol/java-sdk | Enterprise Java, Spring integrations |
| 2 | Rust | modelcontextprotocol/rust-sdk | Performance-critical servers, system integrations |
| 3 | Swift | modelcontextprotocol/swift-sdk | macOS/iOS-native servers |
| 3 | Ruby | modelcontextprotocol/ruby-sdk | Rails integrations, scripting |
| 3 | PHP | modelcontextprotocol/php-sdk | Web servers, CMS integrations |
The tier classification reflects the maturity level and maintenance intensity: Tier 1 SDKs receive the fastest updates for new spec versions, Tier 2 follows promptly, and Tier 3 with a slight delay.
5.3. Which reference servers does the official repository provide?
Short answer: The official repository contains 7 active reference servers (Everything, Fetch, Filesystem, Git, Memory, Sequential Thinking, Time) and over 12 archived servers that serve as learning resources.
Detailed explanation:
The active reference servers at github.com/modelcontextprotocol/servers:
| Server | Purpose |
|---|---|
| Everything | Test server that demonstrates all MCP features |
| Fetch | Fetches web content and provides it as context |
| Filesystem | File system operations (read, write, search) |
| Git | Git repository operations (log, diff, commit) |
| Memory | Knowledge graph with persistent storage |
| Sequential Thinking | Structured, step-by-step thinking for complex problems |
| Time | Time zone conversion and current time queries |
Additionally, there are over 12 archived servers in the servers-archived repository, including AWS KB Retrieval, Brave Search, GitHub, GitLab, Google Drive, Google Maps, PostgreSQL, Puppeteer, Redis, Sentry, Slack, and SQLite. These have been continued as independent projects and serve as reference implementations for various integration patterns.
Start with the Everything server as a reference – it demonstrates all MCP features in a single implementation. The Filesystem server shows a realistic stdio deployment, while the Fetch server illustrates a Streamable HTTP pattern.
5.4. How do you build your own MCP server?
Short answer: Select an official SDK, define tools, resources and/or prompts, configure the desired transport and register the server. A minimal server can be built in under 100 lines of code.
Detailed explanation:
Building an MCP server follows a consistent pattern, regardless of the language:
1. Install the SDK, e.g. npm install @modelcontextprotocol/sdk (TypeScript) or pip install mcp (Python)
2. Define tools, resources and/or prompts
3. Configure the transport (stdio or Streamable HTTP)
4. Register the server with a client, e.g. in claude_desktop_config.json or in VS Code via .vscode/mcp.json

A minimal TypeScript server with a single tool:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Define tool
server.tool(
  "get-weather",
  "Retrieve current weather for a city",
  { city: z.string().describe("Name of the city") },
  async ({ city }) => ({
    // A real implementation would query a weather API here
    content: [{ type: "text", text: `Weather in ${city}: sunny` }],
  })
);

// Connect via stdio transport
const transport = new StdioServerTransport();
await server.connect(transport);
```

The official clients page documents which clients support which features – test your server specifically with clients that support the primitives you have implemented.
Category 6: Security & Best Practices
MCP defines clear security principles that implementations must adhere to. Understand the risks and countermeasures before deploying MCP servers in production.
6.1. Which security principles does MCP define?
Short answer: MCP defines four core principles: User Consent and Control, Data Privacy, Tool Safety, and LLM Sampling Controls.
Detailed explanation:
The security architecture of MCP is based on the principle that users remain the final control authority:
| Principle | Meaning | Practical Implementation |
|---|---|---|
| User Consent and Control | Users must explicitly consent to every action | Hosts display tool calls before execution; users can decline |
| Data Privacy | Data must not be shared with third parties without consent | Hosts control which data flows to which server |
| Tool Safety | Tools are treated as potentially dangerous | Input validation, access controls, rate limiting, output sanitisation |
| LLM Sampling Controls | Servers must not use LLM access uncontrollably | Human-in-the-loop for sampling, client controls model selection |
These principles are normative – implementations must adhere to them to be MCP-compliant. The host (the AI application) bears the primary responsibility: it must enforce security guidelines, obtain user consent, and ensure isolation between different MCP servers.
6.2. How are tool calls secured (Human-in-the-Loop)?
Short answer: Hosts must have tool calls confirmed by users prior to execution. The LLM proposes a tool call; the host displays it; the user approves, modifies, or rejects it.
Detailed explanation:
The Human-in-the-Loop model in MCP operates on multiple levels: the LLM merely proposes a tool call; the host displays the call and its arguments before execution; and the user approves, modifies, or rejects it. For risky operations (e.g. tools marked with destructiveHint), hosts can require additional confirmation.
Tool annotations support this process: readOnlyHint signals that a tool does not modify data; destructiveHint warns of potentially destructive operations. Hosts can use these annotations to implement automatic approvals for safe tools and request extra confirmation for risky tools.
Tool annotations are declarative and not enforced: A server can set readOnlyHint: true even if the tool modifies data. Hosts should use annotations as additional information, but not trust them blindly.
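A host-side approval policy derived from such annotations could look like the following sketch; the three policy levels and all names are illustrative assumptions, not part of the MCP specification:

```typescript
// Subset of MCP tool annotations relevant for approval decisions
type ToolAnnotations = {
  readOnlyHint?: boolean;
  destructiveHint?: boolean;
};

type ApprovalPolicy = "auto-approve" | "confirm" | "confirm-strongly";

// Annotations are declarative hints, not guarantees: a host may
// auto-approve read-only tools, but should still log and display the call.
function approvalPolicy(a: ToolAnnotations): ApprovalPolicy {
  if (a.destructiveHint) return "confirm-strongly"; // extra warning dialog
  if (a.readOnlyHint) return "auto-approve";        // safe to run without a prompt
  return "confirm";                                 // default: ask the user
}
```

Because a malicious server can set readOnlyHint: true on a tool that modifies data, such a policy should only relax confirmation for servers the user already trusts.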
6.3. What are the risks of MCP and how can they be minimised?
Short answer: The main risks are Prompt Injection (malicious content in tool results), excessive permissions, and a lack of input validation. Countermeasures: strict validation, the Principle of Least Privilege, and server isolation.
Detailed explanation:
| Risk | Description | Countermeasure |
|---|---|---|
| Prompt Injection | Malicious content in tool results or resources that manipulates LLM behaviour | Output sanitisation, content filtering, treating results as data (not as instructions) |
| Excessive permissions | Server receives more access than necessary for its function | Principle of Least Privilege: only grant the minimum necessary permissions |
| Lack of input validation | Server accepts arbitrary inputs without checking | JSON Schema Validation for all tool inputs, sanitisation of paths and URLs |
| Server impersonation | Malicious server poses as a trustworthy server | Verify server identity, only use trustworthy sources |
| Data exfiltration | Server sends sensitive data to external endpoints | Restrict network access, monitor outbound traffic |
| Rate limit abuse | Excessive tool calls via manipulated prompts | Implement server-side rate limiting |
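The server-side rate limiting mentioned in the table can be as little as a sliding-window counter per client; a minimal sketch (class and parameter names are assumptions for illustration):

```typescript
// Sliding-window rate limiter: at most maxCalls per windowMs per client.
class RateLimiter {
  private calls = new Map<string, number[]>();

  constructor(private maxCalls: number, private windowMs: number) {}

  allow(clientId: string, now: number = Date.now()): boolean {
    // Keep only the timestamps that are still inside the window
    const recent = (this.calls.get(clientId) ?? []).filter(
      (t) => now - t < this.windowMs
    );
    if (recent.length >= this.maxCalls) {
      this.calls.set(clientId, recent);
      return false; // over the limit – reject the tool call
    }
    recent.push(now);
    this.calls.set(clientId, recent);
    return true;
  }
}
```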
The specification recommends a defence-in-depth strategy: combine several of these countermeasures – validation, least privilege, isolation, and monitoring – rather than relying on any single layer.
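One such layer is input validation for filesystem-style tools: resolve every requested path and reject anything that escapes an allow-listed root. A sketch in TypeScript (the root directory and function names are assumptions; POSIX paths):

```typescript
import path from "node:path";

// Hypothetical allow-listed root directory for a filesystem tool
const ROOT = "/srv/project";

// Resolve a user-supplied relative path and reject path traversal
function resolveSafely(relativePath: string): string {
  const resolved = path.resolve(ROOT, relativePath);
  if (resolved !== ROOT && !resolved.startsWith(ROOT + path.sep)) {
    throw new Error("Path escapes the allowed root");
  }
  return resolved;
}
```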
6.4. How does MCP handle data privacy and user consent?
Short answer: MCP requires explicit consent before any data sharing, follows the Principle of Least Privilege, and places the responsibility on hosts to control the data flow between the LLM and servers.
Detailed explanation:
The privacy model of MCP defines clear responsibilities: the host controls which data flows to which server, and nothing is shared without the user's explicit consent.
A particularly strict consent model applies to sampling: the human sees both the server's request and the LLM's response, and can modify or reject either. This prevents a server from using LLM capacity uncontrollably or feeding the LLM manipulative prompts.
The principle of minimal exposure means that only the data a server needs for its function is made accessible to it. Hosts should not make all resources of all servers globally visible, but should instead specifically control which server receives which context data.
Category 7: Practice, Dos & Don'ts
Practical recommendations for the productive use of MCP and an outlook on the future of the protocol.
7.1. What are the most important dos and don'ts when using MCP?
Short answer: The most important principles: focused servers instead of monoliths, strict input validation, meaningful tool descriptions, and consistent human-in-the-loop control.
Detailed explanation:
Dos:
- Build focused servers that do one thing well instead of monolithic do-everything servers
- Validate every tool input strictly and sanitise paths and URLs
- Write meaningful tool descriptions (description) so that the LLM selects the right tool
- Set annotations (readOnlyHint, destructiveHint) to support hosts in making security decisions
- Keep the user in the loop for every consequential action

Don'ts:
- Do not grant a server more permissions than its function requires
- Do not request sensitive data such as passwords or API keys via Elicitation
- Do not treat tool results as instructions – they are data and may contain prompt injection
7.2. What does the future of MCP look like?
Short answer: MCP is actively evolving with new spec versions, a growing ecosystem, and increasing industry adoption. New features like Elicitation and Tasks (experimental) are expanding its capabilities.
Detailed explanation:
Several developments point to a strong future for MCP.
Particularly noteworthy is the momentum of the ecosystem: the MCP architecture documentation already lists experimental features such as Tasks (for long-running operations), Notifications, and Progress Tracking. The open specification on GitHub allows the community to contribute directly to its ongoing development.
Adoption by major AI providers and development tools – including Anthropic, Microsoft, Google, Amazon, and OpenAI with plan- and mode-dependent ChatGPT support – positions MCP as the de facto standard for AI integrations.
7.3. How do I configure MCP in VS Code, Cursor or Claude Desktop?
Short answer: Each host application provides its own configuration method – VS Code uses a .vscode/mcp.json file, Cursor is configured via the settings, and Claude Desktop uses a local JSON configuration.
Detailed explanation:
The three most popular MCP hosts differ in their configuration:
VS Code configures MCP servers via an mcp.json file in the workspace's .vscode directory:

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    },
    "github": {
      "type": "http",
      "url": "https://api.githubcopilot.com/mcp"
    }
  }
}
```

VS Code additionally offers sandbox support (macOS/Linux), auto-discovery of Claude Desktop configurations, and CLI installation via code --add-mcp.
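Claude Desktop follows the same pattern in its claude_desktop_config.json, with mcpServers as the top-level key instead of servers; a minimal sketch with an illustrative path:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```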
Despite different configuration methods, all hosts follow the same pattern: server name → start command → arguments. Once built, an MCP server works in all three environments – only the configuration file differs.
Summary
| Topic | Key Takeaway |
|---|---|
| What is MCP? | An open standard by Anthropic that connects AI applications with tools, data, and APIs via a unified protocol – like USB-C for AI. |
| Architecture | Three-tier: Hosts manage clients, clients connect 1:1 with servers. JSON-RPC 2.0 as the messaging format. |
| Server Primitives | Tools (actions), Resources (read-only context data), Prompts (user-controlled templates). |
| Client Primitives | Sampling (requesting LLM completions), Elicitation (structured user input), Roots (file system boundaries). |
| Transport | stdio for local subprocesses, Streamable HTTP for remote servers with session management and resumability. |
| Ecosystem | SDKs in 9+ languages, supported by Claude, VS Code, Cursor, Gemini CLI, Amazon Q, and many others; for ChatGPT, the scope currently depends on the plan and mode. |
| Security | Four core principles: User Consent, Data Privacy, Tool Safety, LLM Sampling Controls. Human-in-the-loop is mandatory. |
| Best Practices | Focused servers, precise tool descriptions, strict input validation, no sensitive data via Elicitation. |
| Future | Growing ecosystem with enterprise adoption. New features (Elicitation, Tasks) and broad industry support. |