API reference (v1)

SEODiff's API is the product. The dashboard and CI integrations are clients of the same API documented here.

Authentication

Most endpoints require an API key passed in the Authorization header:

Authorization: Bearer <your_api_key>

Get your key from the API Keys page in your account. Keys use the sd_live_ or sd_test_ prefix.

Base URL

All paths are relative to:

https://seodiff.io/api/v1

https://api.seodiff.io/api/v1 also works (same backend).

Quick test

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/me

Rate limits

API requests are rate-limited per account tier. Free accounts get 60 requests per minute. Pro and Enterprise get higher limits. Exceeding the limit returns 429 Too Many Requests.
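
When a client hits the per-tier limit, backing off and retrying is usually enough. A minimal retry sketch (Python; the HTTP call is injected as a callable so any client library works — the 429 handling matches the behavior described above, while the backoff schedule itself is an illustrative choice, not part of the API):

```python
import time

def request_with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a request on HTTP 429 with exponential backoff.

    `send` is any zero-argument callable returning (status_code, body);
    inject your own HTTP client (requests, httpx, urllib) here.
    """
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        # Back off: 1s, 2s, 4s, ... before the next attempt.
        sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limit still exceeded after retries")
```

Wrap any of the curl-equivalent calls below in this helper to make batch scripts resilient to bursts.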

Account & Plan

GET /api/v1/me

Returns your account info, plan details, and checkout URL (for plan upgrades).

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/me

Response:

{
  "account": { "id": "...", "name": "you@example.com" },
  "plan": {
    "tier": "pro",
    "max_sites": 10,
    "max_pages_per_scan": 5000,
    "max_deep_audit_pages": 10000
  },
  "checkout_url": "https://..."
}

GET /api/v1/audit

List recent API key audit events (key created, rotated, revoked, etc.).

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/audit

Sites & Monitoring

GET /api/v1/sites

List all monitored sites for your account.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/sites

Response:

[
  {
    "id": "...",
    "base_url": "https://example.com",
    "enabled": true,
    "schedule": "nightly"
  }
]

POST /api/v1/sites

Add or update a monitored site.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://example.com",
    "enabled": true,
    "schedule": "nightly"
  }' \
  https://seodiff.io/api/v1/sites

POST /api/v1/monitor/keepalive

Send a keepalive ping for a monitored site. Keeps monitoring active if schedule-based probes are paused.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"base_url":"https://example.com"}' \
  https://seodiff.io/api/v1/monitor/keepalive

Scanning & CI/CD

Recommended first call

If you are integrating for the first time, start with POST /api/v1/validate using wait=true. It gives a single response with pass/fail plus links to all artifacts.

POST /api/v1/scan

Enqueue a surface scan. Returns 202 Accepted with an id and a status_url.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://preview.example.com",
    "render_js": false,
    "lighthouse": false
  }' \
  https://seodiff.io/api/v1/scan

Response:

{
  "id": "s_abc123...",
  "status_url": "/api/v1/scans/s_abc123.../status"
}

POST /api/v1/validate

CI-friendly scan wrapper. When wait=true, blocks until complete and returns pass/fail.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://preview.example.com",
    "preset": "fast",
    "fail_on": "fetch_errors,non200_status,schema_missing_required",
    "max_issue_rate": 10,
    "wait": true,
    "timeout_seconds": 180
  }' \
  https://seodiff.io/api/v1/validate

Response (wait=true):

{
  "pass": true,
  "reason": "",
  "failing": {},
  "report_url": "/scans/s_abc123/report.html",
  "json_url": "/scans/s_abc123/findings.json",
  "summary_markdown_url": "/api/v1/scans/s_abc123/summary.md"
}

Response fields:

pass (boolean): Whether the scan passed all checks.
reason (string): Human-readable failure reason (empty on pass).
failing (object): Failing keys with details.
report_url (string): HTML report link.
json_url (string): JSON findings export.
summary_markdown_url (string): Markdown summary for PR comments.

Returns 200 for pass, 409 for fail. May return 202 if the scan hasn't completed within the timeout.
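
In CI, this status contract maps naturally onto process exit codes. A small sketch (Python; the 0/1/2 exit-code convention is our assumption for the pipeline, not part of the API):

```python
import json
import sys

def ci_exit_code(http_status, body):
    """Map a /validate response to a CI exit code.

    200 + pass -> 0 (pass), 409 -> 1 (fail), 202 -> 2 (still running;
    the caller can treat this as neutral or poll the status_url).
    """
    result = json.loads(body) if isinstance(body, str) else body
    if http_status == 200 and result.get("pass"):
        return 0
    if http_status == 202:
        return 2
    # 409 or any unexpected combination: fail the build and surface why.
    print(result.get("reason", "validation failed"), file=sys.stderr)
    return 1
```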

GET /api/v1/scans/{id}/summary.md (domain verification required)

Markdown summary suitable for GitHub PR comments.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/scans/s_abc123/summary.md

GET /api/v1/scans/{id}/findings.json (domain verification required)

Normalized findings as JSON for downstream tooling.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/scans/s_abc123/findings.json

GET /api/v1/scans/{id}/findings.csv (domain verification required)

Normalized findings as CSV.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/scans/s_abc123/findings.csv

GET /api/v1/incidents

List recent drift incidents detected by monitoring.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/incidents

GET /api/v1/templates?base_url=...

List template identifiers detected for a monitored site.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/templates?base_url=https://example.com"

Response:

{
  "templates": ["/product/*", "/collections/*"]
}

GET /api/v1/timeline?base_url=...&template=...

Drift timeline for a given base URL and template. The template value should match an entry from /api/v1/templates.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/timeline?base_url=https://example.com&template=/product/*"

GET /api/v1/template-drift-summaries?base_url=...

Aggregated template drift summaries across all templates for a site.

GET /api/v1/project-overview?base_url=...

Dashboard aggregate for a project. Returns project summary cards, recent scans, and Search Console data.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/project-overview?base_url=https://example.com"

Deep Audit (Pro)

POST /api/v1/deep-audit (domain verification required)

Start a full-site deep crawl. Requires domain verification and Pro (or higher) plan.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://example.com",
    "crawl_scope": "deep_audit",
    "max_pages": 500,
    "render_js": false,
    "respect_robots": true,
    "crawl_speed": "normal",
    "include_patterns": [],
    "exclude_patterns": []
  }' \
  https://seodiff.io/api/v1/deep-audit

Response:

{
  "job_id": "da_abc123...",
  "status_url": "/api/v1/deep-audit/da_abc123...",
  "report_url": "/api/v1/deep-audit/da_abc123.../report"
}

Parameters:

base_url (string): Required. URL to crawl.
crawl_scope (string): deep_audit (default) or full_site (Enterprise).
max_pages (integer): Max pages to crawl (plan-limited).
render_js (boolean): Enable JavaScript rendering.
respect_robots (boolean): Obey robots.txt (default true).
crawl_speed (string): slow, normal, or fast.
include_patterns (string[]): URL patterns to include (glob).
exclude_patterns (string[]): URL patterns to exclude (glob).
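
Since max_pages is plan-limited, a client can clamp its request against the max_deep_audit_pages value returned by GET /api/v1/me before submitting. A sketch (Python; deep_audit_payload is a hypothetical helper, and the plan shape follows the /me response shown earlier):

```python
def deep_audit_payload(base_url, requested_pages, plan):
    """Build a deep-audit request body, clamping max_pages to the plan limit.

    `plan` is the "plan" object from GET /api/v1/me, which includes
    max_deep_audit_pages.
    """
    limit = plan.get("max_deep_audit_pages", requested_pages)
    return {
        "base_url": base_url,
        "crawl_scope": "deep_audit",
        "max_pages": min(requested_pages, limit),
        "render_js": False,
        "respect_robots": True,
        "crawl_speed": "normal",
    }
```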

GET /api/v1/deep-audit/

List deep-audit jobs for your account.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/

GET /api/v1/deep-audit/{id}

Get job status, progress percentage, and metadata.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123

Response:

{
  "job_id": "da_abc123",
  "status": "complete",
  "progress": 100,
  "base_url": "https://example.com",
  "pages_crawled": 342,
  "started_at": "2025-01-15T10:00:00Z",
  "finished_at": "2025-01-15T10:05:23Z"
}

GET /api/v1/deep-audit/{id}/report

HTML report of the deep audit (job must be complete).

GET /api/v1/deep-audit/{id}/json

Raw JSON result with all crawled page data.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123/json

GET /api/v1/deep-audit/{id}/summary

Lightweight summary payload. Includes top-level metrics, issue counts, and crawl stats.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123/summary

GET /api/v1/deep-audit/{id}/graph

Template-level internal link graph payload for visualization.

GET /api/v1/deep-audit/{id}/url-pagerank

URL-level internal PageRank scores and distribution summary.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123/url-pagerank

GET /api/v1/deep-audit/{id}/link-heat

Link heatmap data showing internal link equity distribution across templates and URLs.

GET /api/v1/deep-audit/{id}/full-audit

Full audit aggregate payload for enterprise-style views. Combines all deep-audit sub-reports.

GET /api/v1/deep-audit/{id}/agent-md

Structured Markdown export designed for LLM and AI agent consumption.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123/agent-md

GET /api/v1/deep-audit/{id}/pseo-agent

Programmatic SEO diagnostics payload for coding agents. Includes template stats, placeholder detection, hallucination rates, schema drift analysis, and actionable fix lists.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123/pseo-agent

Response (abbreviated):

{
  "meta": { "job_id": "da_abc123", "base_url": "https://example.com" },
  "health": "B",
  "health_score": 72,
  "templates": [ ... ],
  "data_integrity": {
    "placeholder_outbreak": { "severity": "warning" },
    "schema_type_drift": { "severity": "ok" }
  },
  "top_fixes": [ ... ]
}

GET /api/v1/deep-audit/{id}/diff

Scan-over-scan diff JSON comparing this audit to the previous one for the same domain. Returns detected changes across 23 detection algorithms.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/deep-audit/da_abc123/diff

GET /api/v1/deep-audit/{id}/diff-report

HTML diff report showing changes between consecutive deep audits.

GET /api/v1/project/{id}/graph (Legacy)

Redirects to /api/v1/deep-audit/{id}/graph.

GET /api/v1/project/{id}/url_pagerank (Legacy)

Redirects to /api/v1/deep-audit/{id}/url-pagerank.

GET /api/v1/project/{id}/full_audit (Legacy)

Redirects to /api/v1/deep-audit/{id}/full-audit.

Extraction Rules (Pro)

GET /api/v1/extraction-rules?base_url=...

List custom extraction rules for a site.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/extraction-rules?base_url=https://example.com"

POST /api/v1/extraction-rules?base_url=...

Create or update a custom extraction rule.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "field_name": "price",
    "selector_type": "css",
    "selector": ".product-price",
    "expected_type": "number",
    "required": true
  }' \
  "https://seodiff.io/api/v1/extraction-rules?base_url=https://example.com"

DELETE /api/v1/extraction-rules?base_url=...&field_name=...

Delete an extraction rule by field name.

curl -X DELETE -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/extraction-rules?base_url=https://example.com&field_name=price"

POST /api/v1/extraction-rules/validate

Dry-run a rule against sampled pages before saving.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://example.com",
    "rule": {
      "field_name": "price",
      "selector_type": "css",
      "selector": ".product-price",
      "expected_type": "number",
      "required": true
    }
  }' \
  https://seodiff.io/api/v1/extraction-rules/validate

Domain Verification

Some endpoints (deep-audit, scan exports) require you to prove ownership of the domain. You can verify via DNS TXT record or by connecting Google Search Console.

GET /api/v1/domain-verify/challenge?domain=...

Get a verification challenge for a domain. Returns a DNS TXT record value to add.

curl "https://seodiff.io/api/v1/domain-verify/challenge?domain=example.com"

Response:

{
  "domain": "example.com",
  "txt_record": "seodiff-verify=abc123..."
}

POST /api/v1/domain-verify/confirm

Confirm domain verification after adding the DNS TXT record.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"domain":"example.com"}' \
  https://seodiff.io/api/v1/domain-verify/confirm

GET /api/v1/domain-verify/status?domain=...

Check whether a domain is verified for your account.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/domain-verify/status?domain=example.com"

Google Search Console (Pro)

GET /api/v1/gsc/connect?base_url=...

Start Google Search Console OAuth flow. Returns an auth_url to redirect the user to.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/gsc/connect?base_url=https://example.com"

GET /api/v1/gsc/properties?base_url=...

List available and selected Search Console properties for a connected site.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/gsc/properties?base_url=https://example.com"

POST /api/v1/gsc/select-property

Set the active Search Console property for a site.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://example.com",
    "property": "sc-domain:example.com"
  }' \
  https://seodiff.io/api/v1/gsc/select-property

POST /api/v1/gsc/sync

Trigger a fresh Search Console data sync for a site.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"base_url":"https://example.com"}' \
  https://seodiff.io/api/v1/gsc/sync

GET /api/v1/gsc/diagnostics?base_url=...

GSC indexability diagnostics summary. Requires an active GSC connection.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/gsc/diagnostics?base_url=https://example.com"

Alerts

GET /api/v1/alerts

List alerts for your account.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/alerts

GET /api/v1/alerts/unread

Get count of unread alerts.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/alerts/unread

Response:

{"unread": 3}

POST /api/v1/alerts/dismiss

Dismiss a single alert by ID.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"id":"alert_abc123"}' \
  https://seodiff.io/api/v1/alerts/dismiss

POST /api/v1/alerts/dismiss-all

Dismiss all alerts.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/alerts/dismiss-all

GET /api/v1/alerts/preferences

Get alert notification preferences.

POST /api/v1/alerts/preferences

Update alert notification preferences.

Schema Drift

GET /api/v1/schema-drift?base_url=...

Get schema drift analysis for a monitored site.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/schema-drift?base_url=https://example.com"

GET /api/v1/schema-drift/diff?base_url=...&from=...&to=...

Get schema diff between two snapshots.

GET /api/v1/schema-drift/timeline?base_url=...

Schema change timeline for a site.

Indexation & IndexNow

GET /api/v1/indexation-health?base_url=...

Indexation health summary for a site. Requires GSC connection.

curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  "https://seodiff.io/api/v1/indexation-health?base_url=https://example.com"

GET /api/v1/indexation-health/inspect?url=...

Inspect indexation status for a specific URL.

GET /api/v1/indexnow/settings?base_url=...

Get IndexNow configuration for a site.

POST /api/v1/indexnow/settings/update

Update IndexNow settings (enable/disable, key, thresholds).

POST /api/v1/indexnow/push

Push URLs to search engines via IndexNow protocol.

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://example.com",
    "urls": [
      "https://example.com/page-1",
      "https://example.com/page-2"
    ]
  }' \
  https://seodiff.io/api/v1/indexnow/push

GET /api/v1/indexnow/log?base_url=...

Get IndexNow push log with submission history and status codes.

Visibility & Radar (Public)

These endpoints are publicly accessible without authentication.

GET /api/v1/visibility/domain/{domain}

Get ACRI visibility score and metadata for any domain.

curl https://seodiff.io/api/v1/visibility/domain/example.com

Response:

{
  "domain": "example.com",
  "acri_score": 78.5,
  "tranco_rank": 1234,
  "categories": ["technology"],
  "last_crawled": "2025-01-15T08:00:00Z"
}

GET /api/v1/visibility/search?q=...

Search for domains in the visibility index.

curl "https://seodiff.io/api/v1/visibility/search?q=example"

GET /api/v1/visibility/leaderboard?category=...

Leaderboard of top domains by ACRI score, optionally filtered by category.

GET /api/v1/radar/scanner/status

Live ticker showing recent radar scans, freshness stats, and queue depth.

curl https://seodiff.io/api/v1/radar/scanner/status

GET /api/v1/radar/scanner/pulse

Industry pulse: daily movers, biggest ACRI changes, and trending domains.

curl https://seodiff.io/api/v1/radar/scanner/pulse

Agentic Evaluation (Pro)

Give “web eyes” to your AI coding agent

Instead of using curl to fetch raw HTML (which blows out the LLM’s context window), call this endpoint. SEODiff crawls the pages, runs your assertions, computes SEO/GEO metrics, and returns a token-compressed summary designed for LLM consumption.

POST /api/v1/agent/evaluate

Evaluate programmatic SEO pages at scale. Provide URLs (explicit list or sitemap + pattern), custom assertions, and get back a pass/fail verdict with clustered errors and a structural DOM fingerprint — all in a compact JSON payload your AI agent can reason about.

Request

curl -X POST -H "Authorization: Bearer $SEODIFF_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://staging.example.com",
    "url_pattern": "/etf/*",
    "sample_size": 30,
    "assertions": [
      {"rule": "contains_string", "value": "Dividend History", "severity": "critical"},
      {"rule": "not_contains_string", "value": "undefined", "severity": "critical"},
      {"rule": "min_word_count", "value": 500, "severity": "warning"},
      {"rule": "selector_exists", "value": "table.holdings", "severity": "critical"},
      {"rule": "has_schema"},
      {"rule": "no_placeholders"},
      {"rule": "max_js_ghost_ratio", "value": 0.1, "severity": "critical"}
    ],
    "wait": false
  }' \
  https://seodiff.io/api/v1/agent/evaluate

Request fields:

base_url (string): Base URL for sitemap discovery and pattern matching. Required unless urls is provided.
urls (string[]): Explicit list of URLs to evaluate. Overrides sitemap discovery.
url_pattern (string): Glob pattern to filter discovered URLs (e.g. /etf/*, /trail/*/details).
sample_size (integer): Max pages to evaluate (default 20, max 200).
assertions (array): Custom assertions to run against each page (see below).
baseline_eval_id (string): Previous evaluation_id to compare against. If provided, the response includes a regressions array showing any metric degradations (e.g. dropped H1 coverage, ACRI regression, new placeholder leaks).
wait (boolean): Block until evaluation completes (default true). When false, returns 202 Accepted with a status_url for polling.
timeout_seconds (integer): Max wait time (default 120, max 300).
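
The url_pattern globs can be previewed client-side, e.g. to sanity-check which sitemap URLs a pattern would select before submitting. A sketch using Python's fnmatch (note this is only an approximation of the server's matching; fnmatch's * also crosses / separators, which may be broader than the hosted behavior):

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def filter_urls(urls, url_pattern):
    """Keep URLs whose path matches a glob like /etf/* or /trail/*/details."""
    return [u for u in urls if fnmatch(urlparse(u).path, url_pattern)]
```
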
LLM Timeout Safety: use wait: false

Most LLM tool-call APIs (OpenAI, Anthropic, local IDE extensions) have HTTP timeouts of 30–60 seconds. For large evaluations (30+ pages), set wait: false. The endpoint returns instantly with a status_url. Instruct your agent: “Poll the status_url every 5 seconds until status is passed or failed.”
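
The recommended polling loop can be sketched as follows (Python; the fetch, sleep, and clock callables are injected so the loop is client-agnostic and testable; the interval and timeout values are illustrative):

```python
import time

def poll_evaluation(fetch, status_url, interval=5.0, timeout=300.0,
                    sleep=time.sleep, clock=time.monotonic):
    """Poll an async evaluation until status is 'passed' or 'failed'.

    `fetch` is a callable taking a URL and returning the decoded JSON
    body; inject your own HTTP client.
    """
    deadline = clock() + timeout
    while True:
        result = fetch(status_url)
        if result.get("status") in ("passed", "failed"):
            return result
        if clock() >= deadline:
            raise TimeoutError(f"evaluation still {result.get('status')} after {timeout}s")
        # Still processing: wait before the next poll.
        sleep(interval)
```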

Assertion rules

contains_string (string): Visible text must contain this string.
not_contains_string (string): Visible text must NOT contain this string.
min_word_count (integer): Minimum word count.
max_word_count (integer): Maximum word count.
status_code (integer): Expected HTTP status (e.g. 200).
has_schema: At least one JSON-LD schema block.
min_schema_count (integer): Minimum number of JSON-LD blocks.
has_h1: Page must have an H1.
has_meta_description: Page must have a meta description.
selector_exists (CSS selector): A CSS selector that must match ≥1 element.
selector_count (integer): CSS selector (in the selector field) must match ≥N elements.
regex_match (regex): Visible text must match this regex.
regex_not_match (regex): Visible text must NOT match this regex.
no_placeholders: No pSEO placeholder leaks ({{var}}, undefined, NaN, etc.).
min_acri (integer): Minimum ACRI score (0–100).
max_token_bloat (float): Maximum HTML-to-text ratio.
no_noindex: Page must not have noindex.
max_js_ghost_ratio (float, 0–1): Maximum JS ghost ratio. Detects pages that require JavaScript to render content (React/Next.js/Vue/Angular/Svelte). A ratio of 0.95 means the page is almost invisible to HTML-only crawlers. Use 0.1 to ensure proper SSR.

Each assertion accepts an optional severity: "critical" (default) or "warning".
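
To make the rule semantics concrete, here is an illustrative local evaluator for a small subset of the rules above (Python; this is not the hosted implementation — e.g. the real no_placeholders check scans more leak patterns than this regex):

```python
import re

# Illustrative subset of the placeholder leaks no_placeholders looks for.
PLACEHOLDER_RE = re.compile(r"\{\{\s*\w+\s*\}\}|\bundefined\b|\bNaN\b")

def check(text, assertion):
    """Return True if `text` satisfies one assertion dict (subset of rules)."""
    rule, value = assertion["rule"], assertion.get("value")
    if rule == "contains_string":
        return value in text
    if rule == "not_contains_string":
        return value not in text
    if rule == "min_word_count":
        return len(text.split()) >= value
    if rule == "regex_match":
        return re.search(value, text) is not None
    if rule == "no_placeholders":
        return PLACEHOLDER_RE.search(text) is None
    raise ValueError(f"unsupported rule: {rule}")
```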

Response (synchronous, wait: true)

{
  "evaluation_id": "eval_17120...",
  "status": "failed",
  "pages_evaluated": 30,
  "pass_rate": 87,
  "duration_ms": 4521,
  "summary": "Evaluation failed: 26/30 pages passed. critical assertion \"not_contains_string\"=\"undefined\" failed on 4/30 pages.",

  "failed_assertions": [
    {
      "rule": "not_contains_string",
      "value": "undefined",
      "severity": "critical",
      "failure_count": 4,
      "failure_rate": "4/30 pages",
      "example_url": "https://staging.example.com/etf/GLD",
      "context": "Found \"undefined\" in: …<div class='dividend-yield'>undefined</div>…"
    }
  ],

  "regressions": [
    {
      "metric": "schema_coverage",
      "previous": "100%",
      "current": "87%",
      "delta": "-13%",
      "diagnosis": "Schema coverage dropped — JSON-LD blocks may have been removed."
    }
  ],

  "metrics": {
    "avg_word_count": 1234,
    "avg_acri": 72,
    "schema_coverage": "87%",
    "h1_coverage": "100%",
    "meta_desc_coverage": "93%",
    "avg_token_bloat": 8.2,
    "non_200_count": 0,
    "error_count": 0,
    "placeholder_pages": 2
  },

  "structural_fingerprint": "H1(SPY ETF Overview) > H2(Holdings) > Table(50 rows) > H2(Dividend History) > Chart > Footer",

  "failing_pages": [
    {
      "url": "https://staging.example.com/etf/GLD",
      "http_status": 200,
      "word_count": 12,
      "acri": 31,
      "failed_assertions": ["not_contains_string:undefined", "min_word_count:500"],
      "structural_fingerprint": "H1(GLD) > Div(error) > [EXPECTED table.holdings MISSING]"
    }
  ]
}

Response (async, wait: false)

When wait: false, the endpoint returns 202 Accepted immediately:

{
  "evaluation_id": "eval_17120...",
  "status": "processing",
  "status_url": "/api/v1/agent/evaluate/eval_17120...",
  "pages_planned": 30
}

GET /api/v1/agent/evaluate/{evaluation_id}

Poll evaluation status. While processing, returns a lightweight status object. Once complete, returns the full evaluation result (same schema as the synchronous response above).

# Poll until complete
curl -H "Authorization: Bearer $SEODIFF_API_KEY" \
  https://seodiff.io/api/v1/agent/evaluate/eval_17120...

Evaluation results are cached for 1 hour. Completed evaluations can be referenced as baseline_eval_id in subsequent evaluations to detect regressions.

Response fields:

status: passed, failed, or processing (async mode).
status_url: Polling URL (async mode only). GET this URL to check progress or retrieve the completed result.
pass_rate: Percentage of pages that passed all assertions (0–100).
summary: One-paragraph human/LLM-readable summary of the evaluation.
failed_assertions: Aggregated assertion failures clustered by rule (not per-page). Includes exact context snippets showing where each failure occurred.
regressions: Metric degradations vs. the baseline_eval_id (only present when a baseline is provided). Tracks pass_rate, avg_acri, avg_word_count, schema_coverage, h1_coverage, meta_desc_coverage, non_200_count, placeholder_pages, and avg_token_bloat.
metrics: Averaged SEO metrics across all evaluated pages.
structural_fingerprint: Compact DOM skeleton of the first page (token-compressed). For failing pages, missing selectors are annotated: [EXPECTED .selector MISSING].
failing_pages: Details of pages that failed (max 20, to respect token budgets).
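
Because the point of this payload is token economy, an agent wrapper can compress the result even further before handing it to a model. A sketch (Python; summarize_for_agent is a hypothetical helper working off the response fields documented above):

```python
def summarize_for_agent(result, max_failures=3):
    """Condense an evaluation result into a few lines for an LLM prompt.

    Keeps the verdict, clustered failures, and regressions; drops bulky
    per-page detail to stay inside a small token budget.
    """
    lines = [f"{result['status']} ({result.get('pass_rate', '?')}% of pages passed)"]
    for fa in result.get("failed_assertions", [])[:max_failures]:
        lines.append(
            f"- {fa['rule']}={fa.get('value')} failed on {fa.get('failure_rate')} "
            f"(e.g. {fa.get('example_url')})"
        )
    for reg in result.get("regressions", []):
        lines.append(f"- regression: {reg['metric']} {reg['previous']} -> {reg['current']}")
    return "\n".join(lines)
```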

Workflow: Test-Driven SEO with an AI Agent

The autonomous fix loop

1. You ask your AI agent: “Add a dividend section to the ETF template. Verify it works across edge cases.”

2. The agent writes code, pushes to staging.

3. The agent calls POST /api/v1/agent/evaluate with wait: false and assertions like not_contains_string: undefined, selector_exists: .dividend-table, and max_js_ghost_ratio: 0.1.

4. The agent polls GET /api/v1/agent/evaluate/{id} every 5 seconds until status is passed or failed.

5. SEODiff finds that GLD (gold ETFs pay no dividends) renders “undefined”. The failing page’s fingerprint shows: H1(GLD) > [EXPECTED .dividend-table MISSING].

6. The agent reads the compact JSON, adds a null-check, re-pushes, re-evaluates — this time passing baseline_eval_id from the first run to catch regressions. Status: passed, 0 regressions.

7. The agent reports back: “Feature deployed and verified across 30 edge cases.”

Macro vs. Micro

/agent/evaluate (Micro): Use in your daily coding prompts. The agent runs this during the coding loop on 30 staging pages to verify a feature PR before merging.

/deep-audit/{id}/pseo-agent (Macro): Use for weekly portfolio maintenance. An agent runs this on a full 10,000-page deep audit to find widespread architectural rot.

AI Readiness Tools (Free tier)

These tools analyze how well a page is optimized for AI consumption. Most are free-tier and accept a URL in the POST body.

POST /api/v1/aes

AI Extractability Score (AES). Analyzes how easily AI systems can extract structured data from a page.

curl -X POST -H "Content-Type: application/json" \
  -d '{"url":"https://example.com/page"}' \
  https://seodiff.io/api/v1/aes

POST /api/v1/chunking

RAG chunking simulator. Shows how the page would be chunked for retrieval-augmented generation.

POST /api/v1/entity-schema

Entity schema generator. Extracts and suggests JSON-LD structured data for a page.

POST /api/v1/training-data

LLM training data auditor. Analyzes quality signals for training corpus inclusion.

POST /api/v1/crawler-health

Robots.txt and crawler health check. Validates bot access and directives.

POST /api/v1/ai-crawler-sim

AI crawler simulation. Shows what an AI bot sees when crawling the page.

POST /api/v1/ai-answer-preview

AI answer preview. Simulates how an LLM would summarize or reference the page.

POST /api/v1/entropy

Structural entropy analysis. Measures content structure quality for machine consumption.

POST /api/v1/hallucination-test

Hallucination risk checker. Evaluates whether page content may induce LLM hallucinations.

POST /api/v1/llmstxt/validate

Validate a site's llms.txt file against the emerging standard.

POST /api/v1/llmstxt/generate

Generate an llms.txt file for a site.

Error Model

Errors return JSON with an error field:

{
  "error": "domain not verified for this account"
}

Status codes:

400: Invalid input (missing/malformed fields).
401: Missing or invalid API key.
403: Plan-gated feature or insufficient permissions.
404: Resource not found.
409: Validation failed (scan did not pass).
429: Rate limit exceeded.
502: Upstream error (database, crawl engine).

Access Scoping

API keys are scoped to your account; you can only access sites, scans, and audits that belong to it.

Domain verification is required for deep-audit creation and scan exports. Verify domains via DNS TXT or by connecting Google Search Console.

Pass/fail behavior

POST /api/v1/validate returns 200 on pass and 409 on fail, so the HTTP status alone can gate a CI pipeline without parsing the body.

API-first by design

The dashboard and CI/CD automation are clients of the same API. This keeps behavior consistent and allows SEODiff to evolve heuristics without changing your integration surface.

Integrations & Examples

Developer Hub: Central index of all code examples, integrations, and guides.
Assertion Glossary: 17 assertion rules with problem explanations and API payloads.
Code Examples: Copy-paste examples in cURL, Python, Node.js, Go, and PHP.
CI/CD Integrations: GitHub Actions, GitLab CI, Vercel, Jenkins, and more.
Agent & IDE Guides: System prompts for Cursor, Copilot, Cline, Windsurf, LangChain.
