Model Context Protocol: 30 Questions and Answers on the Open Standard for AI Integrations

From architecture to security and best practices: 30 in-depth answers about MCP – the open protocol by Anthropic that standardises the connection between AI applications and tools, data, and APIs.

Overview

  • MCP is an open standard by Anthropic that standardises the connection between AI applications and external data sources and tools – like a USB-C for AI.
  • The protocol defines three server primitives: Tools (executing functions), Resources (reading context data), and Prompts (interaction templates).
  • MCP is supported by Claude, VS Code, Cursor, GitHub Copilot, and numerous other applications; for ChatGPT, the scope currently depends on the plan, mode, and use case.
  • Official SDKs exist in TypeScript, Python, C#, Go, Java, Rust, and other languages.

Model Context Protocol: The Open Standard for AI Integrations  

AI applications such as Claude, ChatGPT or Cursor require access to external data, tools and APIs to be truly useful. Previously, every integration required a custom connection – a fragmented M-by-N problem where each combination of AI application and data source had to be solved individually. The Model Context Protocol (MCP) solves this problem with an open standard that connects AI applications to external systems in a unified manner.

Anthropic released MCP at the end of 2024 and describes it as the "USB-C for AI applications": instead of building a separate adapter for every data source, MCP defines a universal protocol through which any AI host can communicate with any server. The protocol is based on JSON-RPC 2.0, supports stateful connections and is now backed by a broad alliance – including Claude, VS Code, Cursor, Gemini CLI, Amazon Q and many more; for ChatGPT, the scope currently depends on the plan, mode and deployment, according to OpenAI.

Whether you are using MCP for the first time or optimising your existing architecture – here you will find 30 well-founded answers, from the basics and architecture to security and best practices.

MCP architecture: A host manages multiple clients, each client connects to a server

What to expect in this article

30 questions and answers about the Model Context Protocol, structured into 7 categories. Each answer includes a brief summary and a detailed explanation with references.


Quick Overview: All 30 Questions  

The 30 questions are organised into the following seven categories.

Category 1: Fundamentals & Concepts  

Category 2: Architecture & Components  

Category 3: Transport & Connections  

Category 4: Server Features in Detail  

Category 5: Ecosystem & Clients  

Category 6: Security & Best Practices  

Category 7: Practice, Dos & Don'ts  

Category 1: Fundamentals & Concepts

The Model Context Protocol solves a fundamental problem in AI integration. Understand the core concepts before implementing your first MCP connection.

1.1. What is the Model Context Protocol and what problem does it solve?  

Short answer: MCP is an open standard by Anthropic that connects AI applications to external data sources, tools, and APIs via a unified protocol – thereby solving the M-by-N problem of fragmented integrations.

Detailed explanation:

Without MCP, every AI application (Claude, ChatGPT, Cursor, etc.) has to build its own integration for every data source (GitHub, Slack, databases, etc.). With M applications and N data sources, this results in M × N individual connections – an enormous effort that scales poorly.

MCP reduces this problem to M + N: each AI application implements the MCP client once, and each data source implements the MCP server once. Afterwards, all clients can communicate with all servers.

Standardised – one protocol instead of dozens of proprietary interfaces
Bidirectional – servers can not only provide data, but also request LLM completions
Open – the specification is available on GitHub under an open licence

Concrete use cases from the MCP documentation: connecting Google Calendar and Notion with an AI, converting Figma designs directly into web applications, querying enterprise databases using natural language, or creating 3D designs in Blender via AI.

MCP at a glance

MCP is to AI applications what HTTP is to websites: a common protocol that ensures different systems can communicate with each other without needing to know each other in detail.


1.2. How does MCP differ from classic API integrations?  

Short answer: Classic APIs are static and task-specific; MCP offers dynamic discovery, standardised capability negotiation, and stateful sessions within a unified protocol.

Detailed explanation:

| Classic API Integration | Model Context Protocol |
| --- | --- |
| Manual discovery – developers read docs | Automatic discovery – tools/list, resources/list, prompts/list |
| Stateless (REST) or single-purpose WebSocket | Stateful with capability negotiation |
| M × N integrations | M + N implementations |
| Breaking changes require client updates | Dynamic updating via notifications |
| API-specific (REST, GraphQL, gRPC) | Unified JSON-RPC 2.0 |
| Security varies per API | Standardised principles: User Consent, Tool Safety |

A decisive advantage is dynamic tool discovery: an MCP server can change its available tools at runtime and inform the client via notifications/tools/list_changed. The client then queries tools/list again and knows the new capabilities – without a restart or manual intervention.

Additionally, the client and server negotiate their capabilities when establishing a connection: Which primitives are supported? Which protocol version? This handshake makes MCP connections more robust and future-proof than rigid API contracts.


1.3. Who developed MCP and who supports it?  

Short answer: Anthropic developed MCP and published it as an open standard. The protocol is now supported by Microsoft (VS Code, GitHub Copilot), Google (Gemini CLI), Block, Amazon and numerous tool developers; for OpenAI/ChatGPT, the scope of MCP support currently depends on the plan, mode, and use case.

Detailed explanation:

Anthropic published MCP as an open standard with the goal of replacing fragmented AI integrations with a unified protocol. Its early adoption has been remarkably fast:

First wave: Zed, Replit, Codeium and Sourcegraph integrated MCP early on
Enterprise adoption: Block and Apollo are among the early enterprise adopters
Broad support: Claude, VS Code, Cursor, Gemini CLI, Amazon Q and many more; for ChatGPT, according to OpenAI, the scope currently depends on the plan, mode, and use case

Block CTO Dhanji R. Prasanna summarises its significance: "Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications".

The complete specification, all SDKs, and the reference servers are available at github.com/modelcontextprotocol under an open licence. The community can contribute directly to its further development.


1.4. What does the "USB-C for AI" analogy mean?  

Short answer: Just as USB-C provides a universal connection for devices, MCP creates a universal standard for AI integrations – one protocol instead of many proprietary adapters.

Detailed explanation:

The official MCP introduction describes it like this: "Think of MCP like a USB-C port for AI applications". The analogy works on several levels:

| | USB-C | MCP |
| --- | --- | --- |
| Before | Micro-USB, Lightning, Mini-USB, proprietary connectors | Separate adapter per AI app + data source |
| After | One universal connection for data, video, power | One universal protocol for tools, data, prompts |
| Compatibility | Every device, every cable | Every AI client, every server |
| Governance | USB Implementers Forum | Open specification on GitHub |

The comparison illustrates the core advantage: instead of building a tailor-made integration for every combination of AI application and data source, both sides implement the MCP standard once. The result is universal compatibility – every MCP client can work with every MCP server, just as every USB-C device works with every USB-C cable.


1.5. What is the relationship between MCP and the Language Server Protocol?  

Short answer: MCP is inspired by the Language Server Protocol (LSP) and transfers its principle – standardised communication between hosts and capability servers – to the realm of AI integrations.

Detailed explanation:

LSP solved a similar M-by-N problem in the IDE world: instead of writing a separate language service for every combination of editor and programming language, LSP defined a unified protocol. The result: a Language Server for Python works in VS Code, Neovim, and any other LSP-compatible editor.

MCP transfers exactly this principle to AI applications:

| | LSP | MCP |
| --- | --- | --- |
| Problem | M editors × N languages | M AI apps × N data sources |
| Solution | Standardised editor–language protocol | Standardised AI–data source protocol |
| Host | IDE (VS Code, Neovim) | AI application (Claude, ChatGPT) |
| Server | Language Server (Python, TypeScript) | MCP Server (GitHub, Filesystem) |
| Capabilities | Autocomplete, diagnostics, formatting | Tools, Resources, Prompts |
| Transport | stdio, pipe | stdio, Streamable HTTP |

Both protocols are based on JSON-RPC and follow the principle of capability negotiation: upon connection, server and client negotiate which features they support. The architectural decision to build upon LSP enables developers with LSP experience to enter the MCP world quickly.

Category 2: Architecture & Components

The three-tier MCP architecture cleanly separates hosts, clients, and servers. Understand the primitives and the lifecycle to build robust integrations.

2.1. How is the MCP architecture structured (Hosts, Clients, Servers)?  

Short answer: MCP follows a three-layer architecture: the Host (e.g. Claude Desktop) manages multiple Clients, each maintaining a 1:1 connection with a Server.

Detailed explanation:

The three layers have clearly defined roles:

| Layer | Role | Example |
| --- | --- | --- |
| Host | AI application that manages MCP clients, enforces security policies, and obtains user consent | Claude Desktop, VS Code, Cursor |
| Client | Protocol client within the host; maintains a 1:1 connection with exactly one server | VS Code MCP client 1 → Sentry, client 2 → file system |
| Server | Provides capabilities (tools, resources, prompts) and runs locally or remotely | File system server, GitHub server, database server |

A concrete example from the MCP documentation: VS Code acts as a host and internally creates one MCP client for the Sentry connection and another for file system access. Each client independently negotiates its capabilities with the respective server.

Beneath the architectural layers operate two further levels: the data layer (JSON-RPC 2.0 as the message format) and the transport layer (stdio or Streamable HTTP as the transmission channel).

Isolation as a security principle

The 1:1 relationship between client and server is a deliberate architectural decision: Server A cannot access the data of Server B. The host controls which servers are activated and what permissions they receive.


2.2. What are the three server primitives (Tools, Resources, Prompts)?  

Short answer: MCP servers expose three types of capabilities: Tools (model-driven actions), Resources (application-driven data) and Prompts (user-driven interaction templates).

Detailed explanation:

The crucial difference lies in the control level – who decides when a primitive is used:

| Primitive | Controlled by | Purpose | Examples |
| --- | --- | --- | --- |
| Tools | LLM (model) | Executing actions, calculations, side effects | searchFlights, createCalendarEvent, sendEmail |
| Resources | Application (host) | Providing context data, read-only | calendar://events/2024, file:///Documents/ |
| Prompts | User | Interaction templates with arguments | plan-vacation, code-review as slash commands |

Tools are the most frequently used primitive: the LLM decides which tool to call based on the context. Tools can have side effects (e.g. sending emails, writing files) and return structured results.

Resources, on the other hand, merely provide data: the application decides which resources are included as context. They work similarly to GET endpoints in REST – read-only and addressed via URIs such as trips://history/.

Prompts are called explicitly by users, e.g. as slash commands. A plan-vacation prompt could expect structured arguments such as destination and travel dates and generate a multi-part conversation from them.


2.3. What are Client Primitives (Sampling, Elicitation, Roots)?  

Short Answer: Client primitives are back-channel capabilities that the server can request from the client: Sampling (LLM completions), Elicitation (structured user input) and Roots (file system boundaries).

Detailed Explanation:

While server primitives deliver data and functions to the client, client primitives enable the reverse path – the server can make requests to the client:

| Client Primitive | Function | Security Control |
| --- | --- | --- |
| Sampling | Server requests an LLM completion without needing its own API keys | Human-in-the-loop: the user approves the request and verifies the response |
| Elicitation | Server requests structured data from users (via JSON Schema) | User can reply accept, decline or cancel |
| Roots | Server learns which file system areas it has access to (file:// URIs) | Advisory – not technically enforced, but respected as a convention |

Sampling is particularly powerful: it enables agentic behaviour within MCP server features. The server can "ask the LLM for advice" without needing its own API key – the request runs via the client, which controls access to the model.

Elicitation is a newer primitive, introduced in a recent revision of the specification. It is deliberately limited to flat objects with primitive properties (string, number, boolean, enum).

Roots give the server orientation regarding the working context – e.g. file:///home/user/project/. They are deliberately advisory and not technically enforced, in order to preserve flexibility.


2.4. How does the protocol lifecycle work?  

Short answer: The MCP lifecycle consists of three phases: initialisation (capability negotiation), operation (bidirectional message exchange), and shutdown (cleanly terminating the connection).

Detailed explanation:

When establishing a connection, the client and server negotiate their capabilities in a standardised handshake, sketched below.
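A minimal sketch of the three handshake messages, written as Python dictionaries that mirror the JSON-RPC payloads. The names, versions and capability sets are illustrative; use whatever your client and server actually support:

```python
# Step 1 – client opens the connection with an initialize request.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",   # date-based revision; match your SDK
        "capabilities": {"sampling": {}},  # client primitives offered to the server
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

# Step 2 – server answers with the version and capabilities it supports.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-11-25",
        "capabilities": {"tools": {"listChanged": True}, "resources": {}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

# Step 3 – client confirms; the operation phase begins.
initialized_notification = {"jsonrpc": "2.0", "method": "notifications/initialized"}
```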

After initialisation, the client and server can exchange messages bidirectionally: the client can call tools and query resources; the server can send sampling and elicitation requests. Notifications (without a response) enable event-based updates.


2.5. What Role Does JSON-RPC 2.0 Play in MCP?  

Short Answer: JSON-RPC 2.0 forms the message layer of MCP: all communication – requests, responses, and notifications – is encoded as standardised JSON-RPC messages.

Detailed Explanation:

MCP uses three JSON-RPC message types:

| Message Type | Characteristics | Usage in MCP |
| --- | --- | --- |
| Request | Has an id, expects a response | tools/call, resources/read, initialize |
| Response | Contains result or error, references the id | Tool result, resource content |
| Notification | No id, no response expected | notifications/tools/list_changed, notifications/initialized |

The choice of JSON-RPC 2.0 brings several advantages: the format is language-independent, well-documented, and lightweight. It supports both synchronous request–response patterns and asynchronous notifications. Two error types are defined: protocol errors (as JSON-RPC error objects) and tool execution errors (as a result with isError: true).
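To make the three message types concrete, here is a sketch of a tool call and a change notification, again as Python dictionaries mirroring the JSON-RPC payloads (the tool name and arguments are illustrative):

```python
# Request – has an "id" and expects a response.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "searchFlights", "arguments": {"from": "VIE", "to": "BER"}},
}

# Response – references the same "id"; tool execution errors arrive as isError: true.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"content": [{"type": "text", "text": "3 flights found"}], "isError": False},
}

# Notification – no "id", no response expected; here: the tool list changed,
# so the client should call tools/list again.
notification = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}
```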

The protocol version is exchanged during the handshake as a protocolVersion field. The current version is "2025-11-25". Additionally, an MCP-Protocol-Version header is sent along with Streamable HTTP connections.

Category 3: Transport & Connections

MCP defines two transport mechanisms for different deployment scenarios. The choice of the correct transport affects latency, security, and scalability.

3.1. Which transport mechanisms does MCP support?  

Short answer: MCP defines two official transports: stdio for local subprocesses and Streamable HTTP for remote servers. Both transport the same JSON-RPC messages.

Detailed explanation:

| | stdio | Streamable HTTP |
| --- | --- | --- |
| Deployment | Local – server as subprocess | Remote or local – server as HTTP endpoint |
| Channel | stdin/stdout, newline-delimited | HTTP POST + optional SSE streaming |
| Session handling | Implicit via process lifecycle | Explicit via Mcp-Session-Id header |
| Resumability | Not available | SSE event IDs + Last-Event-ID |
| Security | OS process isolation | Origin validation, authentication, TLS |
| Topology | One server per client instance | Multiple clients per server possible |
| Typical use | CLI tools, local file systems, IDEs | Cloud APIs, team servers, microservices |

stdio is the simpler transport: the client starts the server as a child process and communicates via the standard streams. Each JSON-RPC message is sent as a separate line. stderr is used for logging and must not contain any protocol messages.

Streamable HTTP replaces the previous HTTP+SSE transport. The client sends JSON-RPC messages as HTTP POST requests to the server endpoint. The server can either respond directly with JSON or open an SSE stream through which multiple messages are streamed.

When to choose which transport?

stdio for everything local: file system access, local Git operations, CLI tool wrappers. Streamable HTTP for everything remote: cloud APIs, cross-team servers, production deployments with authentication and scaling.


3.2. What is the difference between stdio and Streamable HTTP?  

Short answer: stdio communicates via the standard streams of a child process (simple, local); Streamable HTTP uses HTTP POST with optional SSE streaming (flexible, remote-capable, with session management).

Detailed explanation:

With stdio, the client starts the server as a subprocess. Communication proceeds as follows:

1. The client starts the server process (e.g. npx @modelcontextprotocol/server-filesystem)
2. JSON-RPC messages are sent via stdin to the server
3. Responses come back via stdout, separated by newlines
4. stderr is reserved for logging and debug output
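A minimal sketch of this flow using only the Python standard library. The server command mirrors the filesystem reference server named above; real clients would complete the full handshake before issuing further requests:

```python
import json
import subprocess

# Start the MCP server as a child process (illustrative command and path).
proc = subprocess.Popen(
    ["npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,  # stderr carries logs, never protocol messages
    text=True,
)

# Send one newline-delimited JSON-RPC message via stdin...
request = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {
    "protocolVersion": "2025-11-25",
    "capabilities": {},
    "clientInfo": {"name": "demo-client", "version": "0.1.0"},
}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()

# ...and read the server's response from stdout, one line per message.
response = json.loads(proc.stdout.readline())
print(response["result"]["serverInfo"])
```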

With Streamable HTTP, the server operates as an HTTP endpoint:

1. The client sends JSON-RPC as an HTTP POST to the MCP endpoint
2. The server responds either directly with JSON or opens an SSE stream
3. Sessions are identified via the Mcp-Session-Id header
4. For server-initiated messages, the client can open a GET-based SSE channel

3.3. How does session management work with Streamable HTTP?  

Short answer: After initialisation, the server assigns an Mcp-Session-Id, which the client sends as an HTTP header in every subsequent request. The server can reject invalid or missing session IDs.

Detailed explanation:

Session management in Streamable HTTP follows defined steps:

1. Initialisation: the client sends an initialize request without a session ID
2. Session creation: the server generates a unique session ID and sends it in the Mcp-Session-Id response header
3. Ongoing: the client sends the session ID as a header in every subsequent request
4. Invalidation: the server responds with HTTP 404 if the session no longer exists – the client must reinitialise

This mechanism enables stateful connections over the stateless HTTP protocol. The server can manage session-specific data (e.g., active subscriptions, user context) and terminate sessions if necessary.
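A sketch of this header flow with the third-party requests library; the endpoint URL and payloads are illustrative:

```python
import requests

ENDPOINT = "https://mcp.example.com/mcp"  # hypothetical MCP endpoint

# 1) Initialise without a session ID.
init = requests.post(ENDPOINT, json={
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-11-25", "capabilities": {},
               "clientInfo": {"name": "demo", "version": "0.1.0"}},
})

# 2) The server hands back the session ID in a response header.
session_id = init.headers["Mcp-Session-Id"]

# 3) Every subsequent request carries that header.
resp = requests.post(
    ENDPOINT,
    headers={"Mcp-Session-Id": session_id},
    json={"jsonrpc": "2.0", "id": 2, "method": "tools/list"},
)

# 4) HTTP 404 means the session is gone – run initialize again.
if resp.status_code == 404:
    print("Session expired, reinitialising")
```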

Security notice

Servers should validate the Origin header on HTTP requests to prevent cross-site attacks. Local servers should also be bound to localhost to block access from the network.


3.4. What is resumability and how is message loss prevented?  

Short answer: Resumability uses SSE event IDs and the Last-Event-ID header to continue seamlessly from where the communication was interrupted in the event of a disconnection.

Detailed explanation:

With Streamable HTTP, connections can drop at any time – due to network issues, timeouts, or server restarts. Resumability prevents messages from being lost during these events:

1. Event IDs: the server assigns a unique id to every SSE message
2. Client tracking: the client remembers the last received event ID
3. Reconnection: upon re-establishing the connection, the client sends the Last-Event-ID header
4. Replay: the server resends all messages since the last received event ID

This pattern is particularly relevant for long-running operations such as large file exports or complex tool executions. Without resumability, the client would have to restart the entire operation from the beginning upon every disconnection. With event IDs, only the missing part is delivered instead.

stdio and resumability

Resumability is not intended for the stdio transport: The connection is bound to the process lifecycle. If the server process is terminated, the entire connection must be re-established.

Category 4: Server Features in Detail

The central MCP features in detail: Server features such as Tools, Resources, and Prompts, as well as client features like Sampling and Elicitation. Each primitive fulfils a specific role in the interaction between AI and external systems.

4.1. How do MCP Tools work – from Discovery to Execution?  

Short answer: Tools go through a four-stage lifecycle: discovery (tools/list), selection by the LLM, execution (tools/call), and dynamic updating via notifications.

Detailed explanation:

The complete tool lifecycle in MCP: Discovery → Selection → Execution → Updating

Each tool is described via a structured definition:

| Field | Purpose | Required |
| --- | --- | --- |
| name | Unique identifier of the tool | Yes |
| title | Human-readable display name | No |
| description | Description for LLM selection | No |
| inputSchema | JSON Schema for input parameters | Yes |
| outputSchema | JSON Schema for structured output | No |
| annotations | Metadata (readOnlyHint, destructiveHint, openWorldHint) | No |

Tool results can return various content types: text, images, audio, resource links, and embedded resources. In the event of errors, MCP distinguishes between protocol errors (JSON-RPC error) and tool execution errors (result with isError: true), which allows for differentiated error handling.
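A sketch of such a definition, as a server might return it in a tools/list response; the flight-search tool itself is illustrative:

```python
# One entry from a hypothetical tools/list result.
tool_definition = {
    "name": "searchFlights",
    "title": "Search Flights",
    "description": "Searches for flights between two airports on a given date.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "origin": {"type": "string", "description": "IATA code, e.g. VIE"},
            "destination": {"type": "string", "description": "IATA code, e.g. BER"},
            "date": {"type": "string", "format": "date"},
        },
        "required": ["origin", "destination", "date"],
    },
    "annotations": {"readOnlyHint": True},  # searching has no side effects
}
```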

Tool Security

Tools can have side effects (writing files, sending emails, calling APIs). The specification requires: input validation, access controls, rate limiting, and output sanitisation.


4.2. What are MCP Resources and how do they differ from Tools?  

Short answer: Resources are URI-identified, read-only data sources that provide context, whereas Tools execute actions.

Detailed explanation:

Resources represent data that an AI application can integrate as context – similar to GET endpoints in REST. There are two types:

| Resource type | URI pattern | Example |
| --- | --- | --- |
| Direct resources | Fixed URI (e.g. file:///config.json) | Configuration files, static data |
| Resource templates | Parameterised according to RFC 6570 (e.g. users://{id}/profile) | User profiles, dynamic calendar entries |

Resources support various URI schemes: https://, file://, git:// and custom schemes such as calendar://events/2024 or trips://history/. The content can be text or binary (Base64-encoded).

The fundamental difference compared to Tools: Resources have no side effects. They provide data but do not change anything. They therefore serve as a context interface, whereas Tools are geared towards execution and actions.

Additionally, Resources support subscriptions: the client can register for change notifications and is informed via notification when a Resource changes. Annotations such as audience, priority and lastModified provide metadata for prioritisation.
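A sketch of a direct resource and a resource template using the FastMCP helper from the official Python SDK; the server name and URI schemes are illustrative:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("travel-demo")

# Direct resource – one fixed URI.
@mcp.resource("trips://history")
def trip_history() -> str:
    """Past trips as plain-text context."""
    return "2023: Lisbon\n2024: Tokyo"

# Resource template – the {year} parameter is filled from the requested URI.
@mcp.resource("calendar://events/{year}")
def events(year: str) -> str:
    """Calendar entries for one year."""
    return f"Events for {year}: ..."
```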


4.3. How do MCP Prompts work and what are they used for?  

Short answer: Prompts are user-driven interaction templates that are discovered via prompts/list and called with arguments via prompts/get – typically exposed as slash commands in the UI.

Detailed explanation:

MCP Prompts differ fundamentally from Tools and Resources in their control layer: they are triggered explicitly by users, not by the LLM or the application.

A Prompt consists of:

Name and description – for discovery and display in the UI
Arguments – structured parameters for customisation (e.g. destination, travel dates)
Messages – conversational messages with roles (user/assistant) and content (text, images, audio, embedded Resources)

A practical example: a plan-vacation Prompt might expect arguments like destination and dates. Upon invocation, the server generates a multi-part conversation from this, which the LLM uses as a starting point for holiday planning – including embedded Resources like trips://history/ for past trips.

Prompts also support multi-turn conversations: the server can return several consecutive messages with alternating roles to pre-structure complex interaction patterns.
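A sketch of such a prompt with the Python SDK's FastMCP helper; the names and wording are illustrative:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("travel-demo")

# Exposed to users e.g. as a /plan-vacation slash command.
@mcp.prompt()
def plan_vacation(destination: str, dates: str) -> str:
    """Kick off a vacation-planning conversation."""
    return (
        f"Please plan a vacation to {destination} for {dates}. "
        "Consider my past trips and suggest an itinerary."
    )
```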


4.4. What is sampling and how can a server request LLM completions?  

Short answer: Sampling allows an MCP server to request an LLM completion via the client – without requiring its own API keys. Users retain full control through a human-in-the-loop approach.

Detailed explanation:

Sampling flow: the server requests an LLM completion, the human retains control

Sampling solves a practical problem: MCP servers sometimes require the support of an LLM to fulfil their tasks – e.g. to analyse unstructured data or make decisions. Without sampling, the server would have to manage its own API keys. With sampling, it utilises the client's access.

The server can specify model preferences during sampling:

| Preference | Description |
| --- | --- |
| hints | Suggestions for specific model names |
| costPriority | Weighting of costs (0–1) |
| speedPriority | Weighting of speed (0–1) |
| intelligencePriority | Weighting of model intelligence (0–1) |

The human-in-the-loop control is twofold: users can both review and modify the server's request, and inspect and authorise the LLM's response before it is returned to the server. The client has the final say – it can override the model selection, shorten requests, or reject them entirely.
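A sketch of a sampling request as the server would send it to the client, written as a Python dictionary mirroring the JSON-RPC payload; the message content and preference values are illustrative:

```python
# Server → client: sampling/createMessage request.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [{
            "role": "user",
            "content": {"type": "text",
                        "text": "Summarise this error log in one sentence: ..."},
        }],
        "modelPreferences": {
            "hints": [{"name": "claude-3-5-sonnet"}],  # a suggestion, not binding
            "costPriority": 0.3,
            "speedPriority": 0.2,
            "intelligencePriority": 0.8,
        },
        "maxTokens": 200,
    },
}
# The client shows this request to the user, runs the completion,
# lets the user review the result, and only then returns it to the server.
```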


4.5. What is Elicitation and how does a server request user input?  

Short answer: Elicitation is a newer MCP primitive that allows servers to request structured data directly from users – with JSON Schema validation and three response actions: accept, decline or cancel.

Detailed explanation:

Elicitation solves the problem that servers sometimes require information which can neither be derived from the LLM context nor from existing data. Example: a deployment server requires confirmation of an environment before it proceeds.

The workflow:

1. The server sends elicitation/create with a message and a JSON Schema
2. The client shows users a form based on the schema
3. The user responds with accept (filling out data), decline (rejecting) or cancel (aborting)
4. The client sends the validated response back to the server

The JSON Schema is deliberately restricted: only flat objects with primitive properties (string, number, boolean, enum) are allowed – no nested objects or arrays.
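A sketch of such a request as a Python dictionary mirroring the JSON-RPC payload; the deployment scenario and field names within the schema are illustrative:

```python
# Server → client: elicitation/create with a deliberately flat schema.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 9,
    "method": "elicitation/create",
    "params": {
        "message": "Which environment should be deployed?",
        "requestedSchema": {
            "type": "object",
            "properties": {
                "environment": {"type": "string", "enum": ["staging", "production"]},
                "confirm": {"type": "boolean"},
            },
            "required": ["environment", "confirm"],
        },
    },
}

# Possible outcomes the client reports back:
#   {"action": "accept", "content": {"environment": "staging", "confirm": True}}
#   {"action": "decline"}
#   {"action": "cancel"}
```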

Security Rule for Elicitation

Servers must not request sensitive data via Elicitation – no passwords, API keys or personal identification numbers. The specification explicitly forbids this.

Category 5: Ecosystem & Clients

The MCP ecosystem is growing rapidly: from SDKs in numerous languages to reference servers and broad client support. An overview of the current status.

5.1. Which AI applications support MCP as a client?  

Short answer: MCP is supported by a broad range of AI applications, including Claude Desktop, Claude Code, Claude.ai, VS Code (via GitHub Copilot), Cursor, Gemini CLI, Amazon Q, and numerous others. ChatGPT also offers MCP support; however, according to OpenAI, the extent of this currently depends on the plan, mode, and specific use case.

Detailed explanation:

Client support varies depending on the implemented features. The MCP documentation tracks six capabilities per client (Tools, Resources, Prompts, Sampling, Elicitation, Roots): Claude Desktop, Claude Code, Claude.ai, VS Code (Copilot) and Cursor each cover several of these primitives, while Gemini CLI and Amazon Q currently focus on Tools. For ChatGPT, the current OpenAI documentation should additionally be consulted, as availability and permissions depend on the plan and mode.

Furthermore, numerous other applications support MCP: Cline, Continue, Goose, fast-agent, OpenAI Codex, 5ire, AgenticFlow, BoltAI, Chatbox, and many more.

A special case currently applies to ChatGPT: according to OpenAI, full MCP with write/modify actions is currently in the beta rollout phase for Business, Enterprise, and Edu; Pro currently supports MCP in Developer Mode with read/fetch permissions.

VS Code offers particularly comprehensive MCP integration: configuration via .vscode/mcp.json, sandbox support on macOS/Linux, auto-discovery from Claude Desktop configurations, and CLI installation via code --add-mcp.


5.2. Which SDKs are available and in which languages?  

Short answer: Official SDKs exist in three tiers: Tier 1 (TypeScript, Python, C#, Go), Tier 2 (Java, Rust), and Tier 3 (Swift, Ruby, PHP). Kotlin is planned.

Detailed explanation:

All SDKs are maintained at github.com/modelcontextprotocol:

| Tier | Language | Package / Repository | Typical Use Case |
| --- | --- | --- | --- |
| 1 | TypeScript | @modelcontextprotocol/sdk | Web-based servers, Node.js integrations |
| 1 | Python | mcp | Data science, ML pipelines, scripting |
| 1 | C# | ModelContextProtocol | .NET ecosystem, enterprise applications |
| 1 | Go | github.com/modelcontextprotocol/go-sdk | Cloud-native servers, microservices |
| 2 | Java | modelcontextprotocol/java-sdk | Enterprise Java, Spring integrations |
| 2 | Rust | modelcontextprotocol/rust-sdk | Performance-critical servers, system integrations |
| 3 | Swift | modelcontextprotocol/swift-sdk | macOS/iOS-native servers |
| 3 | Ruby | modelcontextprotocol/ruby-sdk | Rails integrations, scripting |
| 3 | PHP | modelcontextprotocol/php-sdk | Web servers, CMS integrations |

The tier classification reflects maturity and maintenance intensity: Tier 1 SDKs receive the fastest updates for new spec versions, Tier 2 follows promptly, and Tier 3 with a slight delay.


5.3. Which reference servers does the official repository provide?  

Short answer: The official repository contains 7 active reference servers (Everything, Fetch, Filesystem, Git, Memory, Sequential Thinking, Time) and over 12 archived servers that serve as learning resources.

Detailed explanation:

The active reference servers at github.com/modelcontextprotocol/servers:

| Server | Purpose |
| --- | --- |
| Everything | Test server that demonstrates all MCP features |
| Fetch | Fetches web content and provides it as context |
| Filesystem | File system operations (read, write, search) |
| Git | Git repository operations (log, diff, commit) |
| Memory | Knowledge graph with persistent storage |
| Sequential Thinking | Structured, step-by-step thinking for complex problems |
| Time | Time zone conversion and current time queries |

Additionally, there are over 12 archived servers in the servers-archived repository, including AWS KB Retrieval, Brave Search, GitHub, GitLab, Google Drive, Google Maps, PostgreSQL, Puppeteer, Redis, Sentry, Slack, and SQLite. These have been continued as independent projects and serve as reference implementations for various integration patterns.

Learning path for MCP server development

Start with the Everything server as a reference – it demonstrates all MCP features in a single implementation. The Filesystem server shows a realistic stdio deployment, while the Fetch server illustrates a Streamable HTTP pattern.


5.4. How do you build your own MCP server?  

Short answer: Select an official SDK, define tools, resources and/or prompts, configure the desired transport and register the server. A minimal server can be built in under 100 lines of code.

Detailed explanation:

Building an MCP server follows a consistent pattern, regardless of the language (a minimal Python sketch follows the steps below):

1. Install SDK – e.g. npm install @modelcontextprotocol/sdk (TypeScript) or pip install mcp (Python)
2. Create server instance – define name, version and capabilities
3. Implement primitives – register tool handlers, resource providers and/or prompt templates
4. Configure transport – stdio for local use or Streamable HTTP for remote deployment
5. Register in client – e.g. in Claude Desktop via claude_desktop_config.json or in VS Code via .vscode/mcp.json
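A minimal sketch with the FastMCP helper from the official Python SDK, closely following its quickstart; the server name and primitives are illustrative:

```python
from mcp.server.fastmcp import FastMCP

# Step 2 – create the server instance.
mcp = FastMCP("demo-server")

# Step 3 – implement primitives: one tool, one resource.
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """A personalised greeting as context data."""
    return f"Hello, {name}!"

# Step 4 – configure transport: stdio by default.
if __name__ == "__main__":
    mcp.run()
```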

The official clients page documents which clients support which features – test your server specifically with clients that support the primitives you have implemented.

Category 6: Security & Best Practices

MCP defines clear security principles that implementations must adhere to. Understand the risks and countermeasures before deploying MCP servers in production.

6.1. Which security principles does MCP define?  

Short answer: MCP defines four core principles: User Consent and Control, Data Privacy, Tool Safety, and LLM Sampling Controls.

Detailed explanation:

The security architecture of MCP is based on the principle that users remain the final control authority:

| Principle | Meaning | Practical Implementation |
| --- | --- | --- |
| User Consent and Control | Users must explicitly consent to every action | Hosts display tool calls before execution; users can decline |
| Data Privacy | Data must not be shared with third parties without consent | Hosts control which data flows to which server |
| Tool Safety | Tools are treated as potentially dangerous | Input validation, access controls, rate limiting, output sanitisation |
| LLM Sampling Controls | Servers must not use LLM access uncontrollably | Human-in-the-loop for sampling; the client controls model selection |

These principles are normative – implementations must adhere to them to be MCP-compliant. The host (the AI application) bears the primary responsibility: it must enforce security guidelines, obtain user consent, and ensure isolation between different MCP servers.


6.2. How are tool calls secured (Human-in-the-Loop)?  

Short answer: Hosts must have tool calls confirmed by users prior to execution. The LLM proposes a tool call; the host displays it; the user approves, modifies, or rejects it.

Detailed explanation:

The human-in-the-loop model in MCP operates on multiple levels:

1. Tool discovery: the LLM sees the tool descriptions and decides which tool is relevant
2. Proposal: the LLM proposes a tool call with specific arguments
3. Review: the host displays the proposal to the user – including the tool name, arguments, and annotations (e.g. destructiveHint)
4. Decision: the user can approve, modify, or reject it
5. Execution: the call is sent to the server only after approval

Tool annotations support this process: readOnlyHint signals that a tool does not modify data; destructiveHint warns of potentially destructive operations. Hosts can use these annotations to implement automatic approvals for safe tools and request extra confirmation for risky tools.

Annotations are hints, not guarantees

Tool annotations are declarative and not enforced: A server can set readOnlyHint: true even if the tool modifies data. Hosts should use annotations as additional information, but not trust them blindly.


6.3. What are the risks of MCP and how can they be minimised?  

Short answer: The main risks are prompt injection (malicious content in tool results), excessive permissions, and a lack of input validation. Countermeasures: strict validation, the principle of least privilege, and server isolation.

Detailed explanation:

| Risk | Description | Countermeasure |
| --- | --- | --- |
| Prompt injection | Malicious content in tool results or resources that manipulates LLM behaviour | Output sanitisation, content filtering, treating results as data (not as instructions) |
| Excessive permissions | Server receives more access than necessary for its function | Principle of least privilege: only grant the minimum necessary permissions |
| Lack of input validation | Server accepts arbitrary inputs without checking | JSON Schema validation for all tool inputs, sanitisation of paths and URLs |
| Server impersonation | Malicious server poses as a trustworthy server | Verify server identity, only use trustworthy sources |
| Data exfiltration | Server sends sensitive data to external endpoints | Restrict network access, monitor outbound traffic |
| Rate limit abuse | Excessive tool calls via manipulated prompts | Implement server-side rate limiting |

The specification recommends a defence-in-depth strategy (a validation sketch for the input layer follows below):

Input layer: validate all tool arguments against the schema before they are processed
Access layer: implement access controls and authentication
Execution layer: set rate limiting, timeouts, and resource limits
Output layer: sanitise results and mask sensitive data
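A sketch of the input layer using the third-party jsonschema package; the schema reuses the illustrative flight-search tool from earlier:

```python
from jsonschema import ValidationError, validate  # pip install jsonschema

INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "origin": {"type": "string", "pattern": "^[A-Z]{3}$"},
        "destination": {"type": "string", "pattern": "^[A-Z]{3}$"},
    },
    "required": ["origin", "destination"],
    "additionalProperties": False,  # reject unexpected fields outright
}

def handle_tool_call(arguments: dict) -> dict:
    # Input layer: validate before any processing happens.
    try:
        validate(instance=arguments, schema=INPUT_SCHEMA)
    except ValidationError as err:
        # Report as a tool execution error (isError) rather than crashing.
        return {"content": [{"type": "text", "text": f"Invalid input: {err.message}"}],
                "isError": True}
    return {"content": [{"type": "text", "text": "ok"}], "isError": False}
```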

6.4. How does MCP handle data privacy and user consent?  

Short answer: MCP requires explicit consent before any data sharing, follows the principle of least privilege, and places the responsibility on hosts to control the data flow between the LLM and servers.

Detailed explanation:

The privacy model of MCP defines clear responsibilities:

Hosts must inform users before sending data to MCP servers
Hosts must give users the option to approve or reject server connections
Servers must not share data with third parties without user consent
Elicitation must not request sensitive data (passwords, API keys)

A particularly strict consent model applies to sampling: the human sees both the server's request and the LLM's response, and can modify or reject either. This prevents a server from using LLM capacity uncontrollably or feeding the LLM with manipulative prompts.

The principle of minimal exposure means: only the data that a server needs for its function is made accessible to it. Hosts should not make all resources of all servers globally visible, but should instead specifically control which server receives which context data.

Category 7: Practice, Dos & Don'ts

Practical recommendations for the productive use of MCP and an outlook on the future of the protocol.

7.1. What are the most important dos and don'ts when using MCP?  

Short answer: The most important principles: focused servers instead of monoliths, strict input validation, meaningful tool descriptions, and consistent human-in-the-loop control.

Detailed explanation:

Do: Build one server per domain (e.g. a GitHub server, a database server) – do not pack everything into a monolith
Do: Write precise tool descriptions – the LLM selects tools based on the description
Do: Define JSON Schema for all tool inputs and validate them on the server side
Do: Set tool annotations (readOnlyHint, destructiveHint) to support hosts in making security decisions
Do: Use resources for context data, tools for actions – maintain a clear separation
Don't: Feed tool results unfiltered into the LLM context – always sanitise to prevent prompt injection
Don't: Request sensitive data (API keys, passwords) via elicitation – the specification prohibits this
Don't: Blindly trust tool annotations – they are hints, not security guarantees
Don't: Allow sampling requests to pass without a human-in-the-loop – a human must be able to verify the request and response
Don't: Deploy remote servers without origin validation and authentication

7.2. What does the future of MCP look like?  

Short answer: MCP is actively evolving with new spec versions, a growing ecosystem, and increasing industry adoption. New features like Elicitation and Tasks (experimental) are expanding its capabilities.

Detailed explanation:

Several developments point to a strong future for MCP:

Growing client base: From early adopters (Claude, Zed, Replit) to a broad alliance including ChatGPT, VS Code, Cursor, Gemini CLI, and Amazon Q
Active spec development: New primitives like Elicitation and experimental features such as Tasks demonstrate continuous evolution
SDK diversity: Grown from 2 to 9+ languages, with a clear tier structure and community contributions
Enterprise adoption: Companies like Block are integrating MCP into their production systems
Transport evolution: The shift from HTTP+SSE to Streamable HTTP shows that the protocol is being refined based on practical experience

Particularly noteworthy is the momentum of the ecosystem: the MCP architecture documentation already lists experimental features such as Tasks (for long-running operations), notifications, and progress tracking. The open specification on GitHub allows the community to contribute directly to its ongoing development.

MCP as the de facto standard

Adoption by major AI providers and development tools – including Anthropic, Microsoft, Google, Amazon, and OpenAI with plan- and mode-dependent ChatGPT support – positions MCP as the de facto standard for AI integrations.


7.3. How do I configure MCP in VS Code, Cursor or Claude Desktop?  

Short answer: Each host application provides its own configuration method – VS Code uses a .vscode/mcp.json file, Cursor is configured via the settings, and Claude Desktop uses a local JSON configuration.

Detailed explanation:

The three most popular MCP hosts differ in their configuration:

VS Code configures MCP servers via an mcp.json file in the workspace's .vscode directory:
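A minimal sketch of such a file; the server name and arguments are illustrative, and the exact schema may vary between VS Code versions, so check the official documentation:

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "${workspaceFolder}"]
    }
  }
}
```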

VS Code additionally offers sandbox support (macOS/Linux), auto-discovery of Claude Desktop configurations, and CLI installation via code --add-mcp.

Common Principle

Despite different configuration methods, all hosts follow the same pattern: server name → start command → arguments. Once built, an MCP server works in all three environments – only the configuration file differs.


Summary  

| Topic | Key Takeaway |
| --- | --- |
| What is MCP? | An open standard by Anthropic that connects AI applications with tools, data, and APIs via a unified protocol – like USB-C for AI. |
| Architecture | Three-tier: hosts manage clients, clients connect 1:1 with servers. JSON-RPC 2.0 as the messaging format. |
| Server primitives | Tools (actions), Resources (read-only context data), Prompts (user-controlled templates). |
| Client primitives | Sampling (requesting LLM completions), Elicitation (structured user input), Roots (file system boundaries). |
| Transport | stdio for local subprocesses, Streamable HTTP for remote servers with session management and resumability. |
| Ecosystem | SDKs in 9+ languages; supported by Claude, VS Code, Cursor, Gemini CLI, Amazon Q, and many others; for ChatGPT, the scope currently depends on the plan and mode. |
| Security | Four core principles: User Consent, Data Privacy, Tool Safety, LLM Sampling Controls. Human-in-the-loop is mandatory. |
| Best practices | Focused servers, precise tool descriptions, strict input validation, no sensitive data via Elicitation. |
| Future | Growing ecosystem with enterprise adoption. New features (Elicitation, Tasks) and broad industry support. |


Parts of this content were created with the assistance of AI.