MCP Server Security: What to Check Before Connecting
That MCP server you're about to connect? It might request permission to read your SSH keys, execute shell commands, or access your company's API endpoints. The Model Context Protocol's power comes from giving AI agents direct access to tools and data, but that same capability creates security risks that many developers overlook.
MCP servers aren't sandboxed applications. When you connect one to Claude or another AI agent, you're potentially granting access to files, network resources, and system commands. The security model depends entirely on what the server requests and what you approve.
Three permission types that matter
MCP servers declare their capabilities through three capability types. Each carries a different level of risk.
Resources let servers read files, databases, or API responses. A server might request access to your ~/.aws/credentials file, your project's .env configuration, or real-time data from internal services. The danger isn't just what the server reads initially. It's what an AI agent might do with that access during a conversation.
Tools enable servers to execute actions: file operations, shell commands, API calls, database modifications. A development-focused server might request permission to run git commands, modify source files, or deploy to staging environments. Tools represent the highest-risk category because they change system state.
Prompts provide context or instructions to AI agents. Less risky than tools, but they can still influence agent behavior in unexpected ways. A server providing coding prompts might push the agent toward specific frameworks or practices that don't fit your project.
Reading the capability manifest
Before connecting any MCP server, examine its capability declarations. Well-designed servers document exactly what they need access to and why.
{
  "tools": [
    {
      "name": "execute_command",
      "description": "Run shell commands"
    }
  ],
  "resources": [
    {
      "uri": "file:///{path}",
      "name": "File access"
    }
  ]
}
Generic permission requests like "file access" or "execute commands" should trigger extra scrutiny. Legitimate servers specify their scope. A Git integration server should request access to .git/ directories and git commands, not arbitrary file system access.
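For contrast, a narrowly scoped declaration for a hypothetical Git integration server might look like this (the tool and resource names are illustrative, not taken from any real server):

```json
{
  "tools": [
    {
      "name": "git_status",
      "description": "Run 'git status' in the configured repository only"
    }
  ],
  "resources": [
    {
      "uri": "file:///{repo_root}/.git/{path}",
      "name": "Repository metadata under .git/"
    }
  ]
}
```

The narrower the declared scope, the easier it is to verify during an audit that the server's behavior matches its claims.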
Evaluating server source code
Many MCP servers are open source, making security auditing possible. Look for these patterns when reviewing server code:
Input validation gaps where user queries directly influence file paths, command arguments, or API parameters. A server that constructs file paths like f"/home/user/{user_input}" without sanitization creates obvious risks.
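A minimal sketch of the fix, assuming a hypothetical server that serves files under a configured project root (`BASE_DIR` and `safe_resolve` are illustrative names, not part of any MCP SDK):

```python
from pathlib import Path

BASE_DIR = Path("/home/user/projects")  # hypothetical allowed root

def safe_resolve(user_input: str) -> Path:
    """Resolve a user-supplied relative path, rejecting traversal outside BASE_DIR."""
    candidate = (BASE_DIR / user_input).resolve()
    # resolve() collapses ".." segments, so a traversal attempt ends up
    # outside BASE_DIR and fails the containment check below.
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError(f"path escapes allowed directory: {user_input}")
    return candidate
```

A request for "notes/todo.txt" resolves normally, while "../../etc/passwd" raises instead of reading outside the root.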
Credential handling that stores or logs sensitive data. Servers should never cache authentication tokens or write secrets to temporary files. Check how the server handles environment variables containing API keys or database passwords.
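One pattern to look for in well-behaved servers is log redaction. A hedged sketch, using an illustrative regex that masks anything resembling a key or password before it reaches a log line:

```python
import re

# Illustrative pattern: matches "api_key=...", "token: ...", "password=..." etc.
SECRET_PATTERN = re.compile(r"(api[_-]?key|token|password)\s*[=:]\s*\S+", re.IGNORECASE)

def redact(message: str) -> str:
    """Mask anything that looks like a credential before logging the message."""
    return SECRET_PATTERN.sub(lambda m: m.group(1) + "=<redacted>", message)
```

A server that logs `redact(message)` instead of the raw message avoids writing secrets to disk even when debug output is verbose.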
Network behavior that reaches unexpected endpoints. A "local file manager" server shouldn't make external HTTP requests. Document translation servers might legitimately call external APIs, but verify the endpoints and data being transmitted.
Permission scope and the principle of least access
MCP's permission model operates at the server level, not per-capability. When you connect a server, you're typically granting access to everything it declares. This makes permission scope critical.
A server that needs to read configuration files shouldn't also request command execution permissions. A database query tool shouldn't need file system access. When servers request broad permissions for narrow use cases, consider alternatives.
MCP servers often combine related capabilities, but the best ones maintain clear boundaries. A Git integration server might read repository files and execute Git commands, but it shouldn't also offer general shell access or network utilities.
Testing servers in isolation
Run new MCP servers in controlled environments before connecting them to production AI agents. Use virtual machines, containers, or dedicated development systems where potential damage stays contained.
Create test scenarios that push the server's boundaries. What happens when you ask the connected AI agent to access files outside the expected scope? How does the server respond to malformed requests or edge cases in user input?
# Test with limited file system access and no network access
docker run --rm --network none -v $(pwd)/test-data:/data mcp-server-test
Document what the server actually accesses versus what it claims to need. Monitoring tools can reveal unexpected file reads, network connections, or command executions during testing.
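One way to make that comparison concrete is a small audit helper. This sketch assumes you have already collected observed paths (from strace, auditd, or similar) and the roots the server declared; the function name is illustrative:

```python
from pathlib import Path

def unexpected_accesses(observed: list[str], declared_roots: list[str]) -> list[str]:
    """Return observed paths that fall outside every declared root."""
    roots = [Path(r).resolve() for r in declared_roots]
    flagged = []
    for raw in observed:
        path = Path(raw).resolve()
        # A path is acceptable only if some declared root contains it.
        if not any(path.is_relative_to(root) for root in roots):
            flagged.append(raw)
    return flagged
```

Running this against a test session quickly surfaces reads the server never documented, such as an SSH key access by a server that declared only a repository directory.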
Runtime monitoring and access logging
Once you deploy an MCP server, monitoring becomes essential. Most operating systems provide tools to track file access, network connections, and process execution.
On Linux, auditd can log file access patterns:
auditctl -w /home/user/.ssh -p r -k ssh_key_access
On macOS, fs_usage shows real-time file system activity. Windows offers similar capabilities through Event Tracing for Windows (ETW).
Network monitoring reveals unexpected external connections. A server that claims to work locally but connects to remote APIs during operation might be exfiltrating data or downloading updated instructions.
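A simple allowlist check catches this during testing. A sketch under the assumption that you log the server's outbound URLs (the declared host here is hypothetical):

```python
from urllib.parse import urlparse

# Assumption: the allowlist comes from the server's own documentation.
DECLARED_HOSTS = {"api.example-translate.com"}

def undeclared_endpoints(observed_urls: list[str]) -> list[str]:
    """Return observed URLs whose host is not in the declared allowlist."""
    return [u for u in observed_urls if urlparse(u).hostname not in DECLARED_HOSTS]
```

Any URL this flags, especially a bare IP address, deserves investigation before the server gets near real data.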
Common security anti-patterns
Several patterns appear repeatedly in problematic MCP servers:
Temporary file creation in shared directories like /tmp without proper permissions. Other processes might read sensitive data from these files.
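Python's tempfile module avoids this by default: files from NamedTemporaryFile are created via mkstemp with mode 0600, readable only by the owner. A quick demonstration:

```python
import os
import tempfile

# NamedTemporaryFile creates files with mode 0o600, so other local
# users cannot read what the server writes into them.
with tempfile.NamedTemporaryFile(mode="w", delete=False) as tmp:
    tmp.write("session data")
    path = tmp.name

mode = os.stat(path).st_mode & 0o777  # permission bits only
os.unlink(path)  # stateless: clean up when done
```

A server that instead opens a fixed name like /tmp/cache.json with default permissions leaves that data world-readable on most systems.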
Shell command construction using string concatenation rather than proper argument passing. This creates command injection vulnerabilities when user input isn't properly escaped.
Error message verbosity that leaks file paths, environment variables, or internal system details. Helpful for debugging, dangerous in production.
Persistence mechanisms that modify shell startup files, create scheduled tasks, or install system services. MCP servers should be stateless and removable.
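The shell-construction anti-pattern above has a direct fix: pass arguments as a list so no shell ever interprets them. A sketch using echo as a harmless stand-in for a real command like git:

```python
import subprocess

# Anti-pattern: subprocess.run("git log " + user_input, shell=True) lets
# input like "main; rm -rf ~" execute a second command.
# Fix: list-form arguments are passed verbatim to the program.
def run_safe(user_input: str) -> str:
    result = subprocess.run(["echo", user_input], capture_output=True, text=True)
    return result.stdout.strip()
```

With the list form, an injection attempt like "main; touch /tmp/pwned" is just a literal string argument, not an executed command.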
Trust boundaries and delegation
MCP security also depends on understanding what you're delegating to AI agents. When you connect a server that can modify files, you're trusting the AI agent to use that capability appropriately.
Rules files help establish boundaries for AI behavior, but they can't prevent all misuse. An agent with file write permissions might overwrite important data during normal operation, even without malicious intent.
Consider whether the AI agent needs direct access to server capabilities or if a human-in-the-loop workflow would be safer. Some operations benefit from AI assistance but require human approval before execution.
Building trust gradually
Don't connect unfamiliar MCP servers to critical systems immediately. Start with read-only access to non-sensitive data. Observe behavior patterns over time. Gradually expand permissions as trust builds through demonstrated safe operation.
Best practices for MCP server deployment emphasize incremental trust-building. Test with sample data, monitor closely, and expand scope based on observed reliability.
MCP server security isn't about preventing all risk. It's about understanding and managing the risks you accept when giving AI agents access to real system capabilities. Choose servers carefully, monitor their behavior, and maintain clear boundaries around sensitive resources.