I wanted a single interface where an AI agent could run WHOIS, pull SSL certs, enumerate subdomains, check CVEs, and query threat intel feeds — all from one prompt.
So I built 23 security tools as an MCP server. Any AI agent that speaks MCP can call them natively.
Here's what I built, how to set it up, and what I learned.
## Setup (2 minutes)
Let me start with the setup because it's the simplest part.
Add this to your MCP client config:
```json
{
  "mcpServers": {
    "contrast": {
      "command": "npx",
      "args": ["-y", "@anthropic-ai/mcp-remote", "https://api.contrastcyber.com/mcp/"]
    }
  }
}
```
Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code — anything that speaks MCP.
No API key. No signup. 100 requests/hour free.
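If you already have other MCP servers configured, a small helper can merge the entry above into your existing config instead of pasting over it. This is my own sketch, not part of the project; the `filesystem` entry below is just a stand-in for whatever you already have:

```python
import json

# The "contrast" entry from the config snippet above.
CONTRAST_ENTRY = {
    "command": "npx",
    "args": ["-y", "@anthropic-ai/mcp-remote", "https://api.contrastcyber.com/mcp/"],
}

def add_contrast_server(config: dict) -> dict:
    # Create "mcpServers" if missing, then add/overwrite only the
    # "contrast" key so other registered servers are preserved.
    config.setdefault("mcpServers", {})["contrast"] = CONTRAST_ENTRY
    return config

# An existing server survives the merge:
existing = {"mcpServers": {"filesystem": {"command": "npx", "args": ["-y", "some-fs-server"]}}}
merged = add_contrast_server(existing)
print(json.dumps(merged, indent=2))
```

On macOS, Claude Desktop reads this file from `~/Library/Application Support/Claude/claude_desktop_config.json`; other clients keep it elsewhere.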
## The 23 Tools

### Recon — "What's running on this domain?"
| Tool | What it does |
|---|---|
| domain_report | Full security report — DNS, WHOIS, SSL, subdomains, risk score |
| dns_lookup | A, AAAA, MX, NS, TXT, CNAME, SOA records |
| whois_lookup | Registrar, creation date, expiry, nameservers |
| ssl_check | Certificate chain, cipher suite, expiry, grade (A-F) |
| subdomain_enum | Brute-force + Certificate Transparency logs |
| tech_fingerprint | CMS, frameworks, CDN, analytics, server stack |
| scan_headers | Live HTTP security headers — CSP, HSTS, X-Frame-Options |
| email_mx | Mail provider, SPF/DMARC/DKIM validation |
| ip_lookup | PTR, open ports, hostnames, reputation |
| asn_lookup | AS number, holder, IP prefixes |
Real scenario: "Check if any of our subdomains have expiring SSL certs" — the agent calls subdomain_enum, loops through each result with ssl_check, and reports which ones expire within 30 days. Zero code.
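The chained logic is simple enough to sketch in plain Python. Here, a hypothetical dict stands in for the combined subdomain_enum + ssl_check results; the filter is what the agent effectively computes:

```python
from datetime import date, timedelta

# Hypothetical results: subdomain -> certificate expiry date.
cert_expiries = {
    "www.example.com": date(2027, 1, 15),
    "staging.example.com": date.today() + timedelta(days=12),
    "mail.example.com": date.today() + timedelta(days=90),
}

def expiring_soon(expiries: dict, days: int = 30) -> list:
    # Keep hosts whose cert expires on or before today + `days`.
    cutoff = date.today() + timedelta(days=days)
    return sorted(host for host, exp in expiries.items() if exp <= cutoff)

# Only the cert 12 days out falls inside the 30-day window.
print(expiring_soon(cert_expiries))
```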
### Vulnerability — "Is this CVE exploitable?"
| Tool | What it does |
|---|---|
| cve_lookup | CVE details, CVSS, EPSS score, KEV status |
| cve_search | Search by product, severity, or date range |
| exploit_lookup | Public exploits from GitHub Advisory + ExploitDB |
Real scenario: "Find all critical CVEs for Apache httpd from the last 6 months that have public exploits" — one sentence, three tool calls chained automatically.
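Conceptually, the chained calls reduce to a filter over the search results. A sketch with made-up records shaped roughly like the tool output (the field names here are illustrative, not the actual API schema):

```python
from datetime import date

# Hypothetical cve_search + exploit_lookup results for Apache httpd.
cves = [
    {"id": "CVE-2024-0001", "severity": "CRITICAL", "published": date(2024, 9, 1), "exploits": 2},
    {"id": "CVE-2024-0002", "severity": "HIGH", "published": date(2024, 10, 5), "exploits": 1},
    {"id": "CVE-2023-9999", "severity": "CRITICAL", "published": date(2023, 1, 1), "exploits": 4},
]

def critical_with_exploits(records: list, since: date) -> list:
    # Critical severity, published after the cutoff, at least one public exploit.
    return [r["id"] for r in records
            if r["severity"] == "CRITICAL" and r["published"] >= since and r["exploits"] > 0]

print(critical_with_exploits(cves, since=date(2024, 6, 1)))
```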
### Threat Intelligence — "Is this IOC malicious?"
| Tool | What it does |
|---|---|
| ioc_lookup | Auto-detect IP/domain/URL/hash → ThreatFox + URLhaus |
| hash_lookup | Malware hash reputation via MalwareBazaar |
| phishing_check | Known phishing/malware URL check |
| password_check | Breach check via HIBP (k-anonymity, password never sent) |
| email_disposable | Disposable/temporary email detection |
Real scenario: You get a suspicious URL in Slack. Paste it and ask "is this safe?" — the agent runs phishing_check + ioc_lookup and tells you if it's a known threat.
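The password_check mechanics deserve a note, since "password never sent" sounds too good to be true. HIBP's range API works on SHA-1 hashes: only the first five hex characters leave your machine, and the matching happens locally against the suffixes the API returns. A sketch of the split:

```python
import hashlib

def hibp_range_parts(password: str) -> tuple:
    # SHA-1 the password, then split the hex digest: the 5-char prefix
    # is the only thing transmitted; the 35-char suffix is matched
    # locally against the API's response for that prefix.
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("hunter2")
print(prefix)  # sent to the API
print(suffix)  # never transmitted
```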
### Code Security — "Does my code have vulnerabilities?"
| Tool | What it does |
|---|---|
| check_secrets | Detect hardcoded AWS keys, tokens, passwords in source |
| check_injection | SQL injection, command injection, path traversal |
| check_headers | Validate security header configuration |
Real scenario: Before a PR merge, ask your agent to scan the diff for hardcoded secrets and injection vulnerabilities.
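For a flavor of what a tool like check_secrets matches: AWS access key IDs follow a well-known pattern (AKIA plus 16 uppercase alphanumerics). A minimal, illustrative detector, not the project's actual implementation, which covers many more credential formats:

```python
import re

# AWS access key ID pattern: "AKIA" followed by 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_keys(source: str) -> list:
    return AWS_KEY_RE.findall(source)

# AKIAIOSFODNN7EXAMPLE is AWS's own documented example key.
diff = 'AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"\napi_url = "https://api.example.com"\n'
print(find_aws_keys(diff))
```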
### Phone & Email — "Is this contact legit?"
| Tool | What it does |
|---|---|
| phone_lookup | Validation, country, carrier, line type |
## What It Looks Like
**"Run a full security audit on example.com"**

```
Domain: example.com
Risk Score: 32/100 (Low)
DNS: 6 records found
SSL: Grade A, expires 2027-01-15, TLS 1.3
Headers: 4/7 present (missing CSP, HSTS preload, Permissions-Policy)
Subdomains: 3 found
WHOIS: Registered 1995-08-14, ICANN
Tech: Akamai CDN, nginx
```

**"Check if CVE-2024-3094 has public exploits"**

```
CVE-2024-3094 (xz backdoor)
CVSS: 10.0 CRITICAL
EPSS: 0.947 (top 0.1%)
KEV: Yes — actively exploited
Exploits found: 3
```

**"Is this password breached: hunter2"**

```
EXPOSED in 17,043 breaches
Do NOT use this password.
(checked via k-anonymity — password was never transmitted)
```
## Why MCP?
ContrastAPI is also a REST API with a Node.js SDK. You can curl it from any language.
But MCP changes the workflow:
**Without MCP:** Call endpoint → parse JSON → decide next step → call another endpoint → parse again → format output.

**With MCP:** "Audit this domain." Done.
The agent picks the right tools, chains them, and gives you a summary. You focus on decisions, not plumbing.
## Architecture
- FastAPI + official MCP Python SDK
- 30 REST endpoints, 23 MCP tools (same backend)
- 1,115 tests (912 API + 203 C scanner)
- Domain scanner written in C — scores SSL, DNS, headers, email in under 2 seconds
- All data from free, public sources — no paid feeds, no vendor lock-in
## What I Learned
**1. No API key = fastest adoption.**

I removed the API key requirement and traffic jumped immediately. Zero friction wins. The free tier (100 req/hr) is generous enough that nobody has hit the limit yet.

**2. MCP users are stickier.**

MCP users make more requests per session than REST users. Once an agent has access to the tools, it chains them naturally — a single prompt can trigger 5-10 tool calls.

**3. Get listed everywhere, early.**

mcp.so, mcpservers.org, Smithery — these directories drive most of the discovery right now. The ecosystem is early and low-competition.
## Limitations
Being transparent about what this isn't:
- Passive only — no port scanning, no active exploitation. This is OSINT and public data, not a pentest tool.
- Rate limited — 100 req/hr free, 1000/hr on Pro ($19/mo). Enough for individual use, not bulk scanning.
- Solo project — I'm one developer. Response times are fast, but I don't have an SRE team on-call.
- You don't need API keys — we handle the integrations (Shodan, AbuseIPDB, ThreatFox, NVD, and more). No vendor accounts to set up on your end.
## Try It
- GitHub: github.com/UPinar/contrastapi
- MCP setup: contrastcyber.com/mcp-setup
- Web scanner: contrastcyber.com
- API docs: api.contrastcyber.com
Free. Open source. No API key.
If you find it useful, a ⭐ on GitHub helps more than you think.
What security tools do you wish your AI agent could use? I'm always looking for what to build next.