Monitor site health from your editor, not a dashboard

SiteCurl's MCP server gives AI coding tools 11 commands to scan sites, check scores, and track regressions. One config line to connect.

Full Studio access. No credit card required.

  • 182,000+ checks run and counting
  • Includes AI readiness checks for ChatGPT and Perplexity
  • Up to 50 pages per scan
  • Shareable report links

Why it matters

Developers check code quality in CI but not site quality. Broken meta tags, slow pages, and security gaps ship unnoticed until a PM files a ticket.

What you get with the API and MCP:

  • 11 MCP tools for Claude Code, Cursor, and Windsurf. List sites, run scans, read findings, and track regressions without leaving your editor.
  • REST API you can call from deploy scripts, CI jobs, or your own tools. Full spec file included so you can get started fast.
  • Webhooks that ping you when scans finish, scores drop, or new issues appear. Signed payloads with code samples in Ruby, Node.js, and Python.
  • 90 checks across SEO, speed, security, accessibility, technical health, uptime, and AI readiness. Same checks in the UI, API, and MCP.
11 MCP tools available to AI coding assistants (Source: SiteCurl API)

<60s to get scan results back via API or MCP (Source: SiteCurl API)

What developers miss on marketing sites

  • Meta tags that break after a deploy. Title tags, OG images, and link tags shift when you change a template. A quick scan after each push catches these early.
  • Pages that slow down over time. A new script or a big image can drop your score. Weekly scans spot the change before users notice.
  • Security headers that vanish after a migration. HSTS and CSP can go missing when you switch hosts. Scans flag the gap right away.
  • AI tools that get your product wrong. If your headings and data are unclear, ChatGPT and Perplexity may describe it inaccurately. AI readiness checks catch this.

Add one JSON block to your editor config and you get 11 tools. List sites, run scans, check scores, view findings, and track what changed. Your AI coding tool does the work while you stay in context.
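As a rough sketch, the config entry looks something like this (the server URL and key names here are placeholders, not the documented values; copy the real block from the SiteCurl docs):

```json
{
  "mcpServers": {
    "sitecurl": {
      "url": "https://example-sitecurl-mcp-endpoint/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Once the editor reloads its config, the 11 tools show up in the assistant's tool list automatically.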

The REST API works the same way over plain HTTP. Use it in deploy scripts to block a build when your score drops, or set up webhooks to post results to Slack or your own tools.
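The build gate itself is a few lines once you have the scan result. A minimal Python sketch, assuming a response with a `score` number and an `issues` list (the field names are illustrative, not the documented schema; in a real script the JSON would come from the SiteCurl REST API):

```python
import json

# Illustrative scan response; a real deploy script would fetch this
# from the SiteCurl REST API after triggering a post-deploy scan.
scan_result = json.loads('{"score": 72, "issues": [{"severity": "critical"}]}')

MIN_SCORE = 80  # your quality bar

def build_should_fail(result, min_score=MIN_SCORE):
    """Fail the build on a low score or any critical finding."""
    if result["score"] < min_score:
        return True
    return any(i["severity"] == "critical" for i in result.get("issues", []))

if build_should_fail(scan_result):
    print("Site score below bar -- failing the build")
```

Exit with a nonzero status in that branch and any CI system treats the deploy as failed.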

Each scan runs 90 checks across seven areas. You get a score, a list of issues, and a fix for each one. Filter by type or page to focus on what matters right now.
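Filtering is also easy to do on your side of the API. A short Python sketch, assuming each finding carries `category` and `page` fields (assumed names, not the documented schema):

```python
# Illustrative findings as they might come back from a scan.
findings = [
    {"category": "seo", "page": "/", "message": "Missing meta description"},
    {"category": "security", "page": "/", "message": "HSTS header not set"},
    {"category": "seo", "page": "/pricing", "message": "Duplicate title tag"},
]

def filter_findings(items, category=None, page=None):
    """Keep only findings matching the given category and/or page."""
    return [
        f for f in items
        if (category is None or f["category"] == category)
        and (page is None or f["page"] == page)
    ]

for f in filter_findings(findings, category="seo"):
    print(f["page"], "-", f["message"])
```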

How this usually fits into the workflow

  1. Run a baseline scan after a launch, redesign, or major content push so you know what changed.
  2. Share the report with the people who need it, then fix the highest-impact items first instead of chasing every small warning.
  3. Re-run the scan on a schedule so regressions show up before a client, donor, customer, or stakeholder notices them.

What to compare before you choose a tool

  • Coverage: SEO alone is not enough. Look for one report that also covers speed, security, accessibility, technical health, uptime, and AI readiness.
  • Output: The best tool for a team is the one that turns raw findings into clear priorities people can actually act on.
  • Follow-up: Scheduled scans, score history, and change detection matter more than a one-time snapshot.


Frequently asked questions

Which AI tools work with the MCP server?

Claude Code, Cursor, and Windsurf all work today. Any editor that speaks MCP can connect. Add the server URL and your API key to the config and you are ready to go.

How is MCP different from the REST API?

MCP lets your AI coding tool call SiteCurl right in the chat. The REST API is plain HTTP for scripts, CI, and custom tools. Both use the same API key and return the same data.

What plans include API and MCP access?

Pro and Studio. Pro gives you 3 API keys and 100 calls per hour. Studio gives you 10 keys and 1,000 per hour. New accounts start with a 7-day Studio trial.

Can I trigger scans from a CI pipeline?

Yes. Call the scan endpoint after each deploy, then check the status endpoint or wait for a webhook. Block the build if the score drops below your bar.

What webhook events are available?

Three events: scan done, score dropped, and new critical issue. Payloads are signed. Code samples for Ruby, Node.js, and Python are in the API docs.
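Signed webhook payloads are usually verified by recomputing an HMAC over the raw request body. A Python sketch under that assumption (the signature scheme and secret format here are illustrative; follow whatever the SiteCurl API docs specify):

```python
import hashlib
import hmac

def verify_signature(secret: str, raw_body: bytes, signature: str) -> bool:
    """Recompute HMAC-SHA256 of the raw body and compare in constant time."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Example: a handler reads the raw body and the signature header,
# then rejects anything that does not match.
secret = "whsec_example"  # placeholder secret
body = b'{"event": "scan.completed", "score": 91}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, sig))
```

Always verify against the raw bytes you received, before any JSON parsing, so re-serialization can't change the signature.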

Site health checks that live where you code.

After 7 days, keep your reports on the free plan or upgrade. No auto-charges.