Give your AI coding agent eyes for security bugs.
Install the XploitScan MCP server and your AI assistant gets three new tools: scan code, explain a rule, grade code. One-paste install. Runs locally. No telemetry.
The three tools
scan_code
Scan code for security vulnerabilities
Pass a file path or raw source. Returns findings with rule ID, severity (critical / high / medium / low), affected file + line, and a one-paragraph fix suggestion. Backed by the same 206-rule engine as the CLI and the XploitScan paid product.
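Under the hood, an MCP client invokes a tool by sending a JSON-RPC 2.0 `tools/call` message to the server over stdio. As a rough sketch, a scan_code call might look like the following; the tool name comes from this page, but the argument key (`code`) is an assumption, so check the server's advertised tool schema.

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends over stdio to call
// a tool. "scan_code" is the tool name from this page; the argument key
// ("code") is assumed for illustration -- the real input schema may differ.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "scan_code",
    arguments: { code: "const q = `SELECT * FROM t WHERE id=${id}`;" },
  },
};

// MCP's stdio transport frames messages as newline-delimited JSON.
const wire = JSON.stringify(request) + "\n";
```

Your MCP client constructs these messages for you; the point is that the payload goes straight to a local child process, with no network hop in between.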
explain_rule
Look up what a specific rule means
Pass a rule ID like VC005. Returns the rule's title, description, vulnerable + safe code examples, and which compliance frameworks (SOC 2, ISO 27001, OWASP, CWE) it maps to. Useful when scan output references a rule and the user (or agent) wants context.
grade_code
Single-call "is this safe to ship?"
Pass code. Returns an A+ through F letter grade with a one-paragraph summary of the security posture. Good for agents doing a final pre-commit / pre-deploy check.
Install
Pick your client, paste the snippet as-is into the config file shown, then restart the client. The tools appear in your tool picker.
Claude Desktop
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"xploitscan": {
"command": "npx",
"args": ["-y", "xploitscan-mcp"]
}
}
}
Restart Claude Desktop after editing. The tools show up in the tool picker.
Cursor
~/.cursor/mcp.json
{
"mcpServers": {
"xploitscan": {
"command": "npx",
"args": ["-y", "xploitscan-mcp"]
}
}
}
Quit Cursor fully (Cmd+Q on macOS) and reopen. The xploitscan entry appears in Settings → MCP.
Windsurf
~/.codeium/windsurf/mcp_config.json
{
"mcpServers": {
"xploitscan": {
"command": "npx",
"args": ["-y", "xploitscan-mcp"]
}
}
}
Restart Windsurf. Tools appear in Cascade's tool list.
Continue (VS Code / JetBrains)
~/.continue/config.json
{
"experimental": {
"modelContextProtocolServers": [
{
"transport": {
"type": "stdio",
"command": "npx",
"args": ["-y", "xploitscan-mcp"]
}
}
]
}
}
Reload your editor after editing. Tools are callable in agent mode.
Zed
Zed settings: settings → Edit settings.json
{
"context_servers": {
"xploitscan": {
"command": {
"path": "npx",
"args": ["-y", "xploitscan-mcp"]
}
}
}
}
Zed's MCP support is in nightly as of this writing; check the Zed docs for the latest config schema.
Verify it's working
After installing, open a chat in your MCP client and try:
- "Use xploitscan to scan this file for security issues." (with any file open) Should call scan_code and return findings with rule IDs.
- "What is VC005?" Should call explain_rule and explain unprotected Stripe webhooks.
- "Grade this code's security posture with xploitscan." Should call grade_code and return an A+ through F grade.
If the response contains specific VC### rule IDs, the scanner ran. If the response is vague ("this could potentially be vulnerable..."), the AI didn't call the tool — try a more direct prompt.
Frequently asked
Is it really free?
Yes. The MCP server is open source under the same license as the rest of the scanner engine. It runs locally over stdio, makes zero network calls, and has no telemetry. There's no signup, no API key, and no usage limit.
Does it send my code anywhere?
No. The scanner runs entirely in the same process the MCP client launched. Source code never leaves your machine. The package itself is downloaded once from NPM the first time you invoke it (via npx), then cached locally.
What's the difference between this and the CLI / web scanner?
Same engine. The CLI is invoked manually (npx xploitscan scan .); the web scanner runs in your browser at xploitscan.com/scan; the MCP server lets your AI coding agent call the scanner whenever it decides the user's request warrants it. Use whichever fits your workflow.
Will the AI false-positive filter run via MCP?
Not in v1. The MCP server returns raw regex + entropy findings, same as the free CLI. The paid product layers a Claude Haiku false-positive filter on top, which removes about 35% of matches as obvious FPs (test fixtures, public AWS example keys, etc.). An inline version of that filter for MCP is on the roadmap.
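For readers curious what "entropy findings" means: secret scanners typically flag strings whose Shannon entropy is high, i.e. random-looking like an API key, where a regex also matches. The sketch below shows the general technique; it is not XploitScan's actual heuristic or threshold.

```typescript
// Generic Shannon-entropy check of the kind "regex + entropy" secret
// scanners use. Illustrative only -- XploitScan's actual heuristic and
// threshold are not published on this page.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let bits = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    bits -= p * Math.log2(p); // contribution of each distinct character
  }
  return bits; // bits per character: 0 for a repeated char, ~4.7 for a random key
}

// Hypothetical cutoff for illustration: long and high-entropy looks secret-ish.
const looksSecret = (s: string) => s.length >= 20 && shannonEntropy(s) > 4;
```

English words and test fixtures score low on this measure, which is also why a pure regex + entropy pass produces the false positives the Haiku filter then removes.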
How do I know it actually ran instead of the AI guessing?
Every MCP-enabled client shows the tool call in its UI — usually a labeled "scan_code" pill in the chat with input + output. If you see specific rule IDs (VC001, VC005, etc.) in the response, that's the scanner; if you see vague "this could potentially be vulnerable to XSS" with no rule reference, the AI didn't call the tool. Real findings always carry a VC### rule ID.
What rules does it use?
All 206 rules in the XploitScan rule bank, the same set documented at /rules. Free, indie, pro, and team tier rule packs all run together (no tier gating in the MCP server). The rule list covers AI-generated code patterns specifically: unprotected Stripe webhooks, hardcoded keys, SQL injection via template literals, reflected CORS, missing ownership checks (IDOR), etc.
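As a concrete instance of one of those patterns, "SQL injection via template literals" means interpolating user input straight into a query string. The sketch below contrasts it with a parameterized query; the `$1` placeholder is node-postgres style and other drivers differ. This illustrates the pattern class, not XploitScan's rule logic.

```typescript
// Illustration of the "SQL injection via template literals" pattern class,
// not XploitScan's rule implementation.
const userInput = "alice'; DROP TABLE users; --";

// Vulnerable: user input is interpolated directly into the SQL string.
const vulnerable = `SELECT * FROM users WHERE name = '${userInput}'`;

// Safer: a parameterized query; the driver sends the value separately
// and escapes it. ($1 is node-postgres placeholder syntax.)
const safe = {
  text: "SELECT * FROM users WHERE name = $1",
  values: [userInput],
};
```

The vulnerable version ends up containing the attacker's `DROP TABLE` payload verbatim inside the query text, which is exactly the shape a regex rule can match.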
Prefer the manual workflow?
Same engine, different surface. Run a scan in the browser, drop the CLI in your terminal with npx xploitscan scan ., or build a public security overview at /free-trust-page.