Score any company's AI-agent readiness — inside Clay
Drop a domain into a Clay table, get back a 0–100 Agent Readiness Score plus a six-category breakdown covering 18 standards (robots.txt, llms.txt, MCP, OAuth, x402, Content Signals, and more).
Setup in Clay (3 steps)
Get your StartupHub.ai API key
Sign in at My Account → API & MCP and click Create Key. You'll see the key once — copy it.
Add an HTTP API enrichment column
In your Clay table, click Enrich Data → HTTP API, then paste these settings:
Method: `POST`
Endpoint: `https://www.startuphub.ai/api/v1/agent-readiness`
Authorization: `Bearer YOUR_API_KEY`
Content-Type: `application/json`
Body:

```json
{
  "url": "{{Domain}}"
}
```

Replace `{{Domain}}` with whatever Clay variable holds the company URL — typically `{{Company Domain}}` or `{{Website}}`.
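Outside Clay (handy for testing the column settings before wiring them up), the same request can be sketched with Python's standard library; the key and domain below are placeholders:

```python
import json
import urllib.request

API_URL = "https://www.startuphub.ai/api/v1/agent-readiness"

def build_scan_request(domain: str, api_key: str) -> urllib.request.Request:
    """Build the same POST the Clay HTTP API column sends."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"url": domain}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To run a real scan (needs a valid key and network access):
# with urllib.request.urlopen(build_scan_request("anthropic.com", "YOUR_API_KEY")) as resp:
#     print(json.load(resp)["data"]["total_score"])
```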
Map response fields to columns
Sample response shape:

```json
{
  "data": {
    "url": "https://anthropic.com",
    "final_url": "https://www.anthropic.com/",
    "scanned_at": "2026-05-04T18:50:11.275Z",
    "total_score": 39,
    "grade": "F",
    "categories": [
      { "category": "discoverability", "label": "Discoverability", "score": 91, "passed": 2, "total": 3 },
      { "category": "content", "label": "Content", "score": 5, "passed": 0, "total": 3 },
      { "category": "access_control", "label": "Access Control", "score": 40, "passed": 0, "total": 2 },
      { "category": "capabilities", "label": "Capabilities", "score": 26, "passed": 0, "total": 5 },
      { "category": "commerce", "label": "Commerce", "score": 100, "passed": 0, "total": 0 },
      { "category": "quality", "label": "Quality", "score": 49, "passed": 2, "total": 4 }
    ],
    "checks": [
      { "id": "robots_txt", "category": "discoverability", "title": "robots.txt present", "status": "pass", "weight": 3 },
      { "id": "sitemap", "category": "discoverability", "title": "Sitemap discoverable", "status": "pass", "weight": 3 }
      /* …~18 checks total, one per standard… */
    ],
    "extras": {
      "tokens_html": 412300,
      "tokens_markdown": 28100,
      "token_savings_pct": 93,
      "ttfb_ms": 187,
      "wins": ["robots.txt present", "Sitemap discoverable"],
      "weaknesses": ["No llms.txt", "No MCP server card"]
    }
  },
  "credits": { "cost": 1, "remaining_period": 1499, "remaining_balance": 0 }
}
```

Common field paths to extract into Clay columns:
| Field path | Suggested column name | Type |
|---|---|---|
| data.total_score | Agent Readiness Score | integer 0–100 |
| data.grade | Grade | string A+ → F |
| data.categories[0].score | Discoverability score | integer 0–100 |
| data.categories[1].score | Content score | integer 0–100 |
| data.categories[2].score | Access Control score | integer 0–100 |
| data.categories[3].score | Capabilities score | integer 0–100 |
| data.categories[4].score | Commerce score | integer 0–100 |
| data.categories[5].score | Quality score | integer 0–100 |
| data.extras.token_savings_pct | Markdown token savings % | integer 0–100 |
| data.extras.ttfb_ms | Time to first byte (ms) | integer |
| data.extras.wins[0] | Top win | string |
| data.extras.weaknesses[0] | Top weakness | string |
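Note that `categories[0]`…`categories[5]` rely on array order. If you post-process the JSON outside Clay, keying category scores by the `category` field is safer; a minimal sketch (column names follow the table above):

```python
def flatten_scan(data: dict) -> dict:
    """Flatten one scan's `data` object into flat column values."""
    by_cat = {c["category"]: c["score"] for c in data.get("categories", [])}
    extras = data.get("extras", {})
    row = {
        "Agent Readiness Score": data["total_score"],
        "Grade": data["grade"],
        "Markdown token savings %": extras.get("token_savings_pct"),
        "Time to first byte (ms)": extras.get("ttfb_ms"),
        # First entries double as "Top win" / "Top weakness" columns.
        "Top win": (extras.get("wins") or [None])[0],
        "Top weakness": (extras.get("weaknesses") or [None])[0],
    }
    for cat in ("discoverability", "content", "access_control",
                "capabilities", "commerce", "quality"):
        row[f"{cat} score"] = by_cat.get(cat)  # None if absent
    return row
```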
Other endpoints in the same pool
All endpoints share the same credit pool and Bearer-token auth, and follow the same setup pattern as above.
- `POST /api/v1/companies/by-linkedin`: Live LinkedIn scrape. Pass a LinkedIn URL, get fresh employee count, followers, name, and headline. Cross-references our DB, so you also get funding, founders, and score when we know the company. Great for Sales Navigator exports.
- `GET /api/v1/companies/revenue?domain=…`: Revenue, revenue multiple, and revenue per employee, each with a verified flag. Sourced from our enrichment cron.
- `GET /api/v1/companies/funding?domain=…`: Total funding, latest round, valuation, currently-fundraising flag, and investor count.
- `POST /api/v1/email/validate`: Verify deliverability of any email with a real mailbox check, plus catch-all and disposable detection. No credit charged on catch-all/unknown results.
- `POST /api/v1/email/discover`: Find an email by name + domain, via permutation generation plus verification.
- `POST /api/v1/enrich`: Full company enrichment with funding intel, founders, scores, sectors, and tech stack. Use when you need everything in one call.
Full reference: /api-docs
What teams build with this
Pre-sales account scoring
Run on every account in your Clay pipeline. Filter to score < 60 → those companies haven't built for AI agents yet → high-fit prospects for AI tooling, MCP servers, OAuth providers, observability.
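A minimal sketch of that filter outside Clay, assuming you have exported each account's name and its Agent Readiness Score:

```python
def low_readiness_accounts(rows: list[dict], threshold: int = 60) -> list[dict]:
    """Accounts below the readiness threshold, weakest first.
    These are the companies that haven't built for AI agents yet."""
    return sorted(
        (r for r in rows if r["score"] < threshold),
        key=lambda r: r["score"],
    )
```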
Outbound research enrichment
Cold list + Clay → Agent Readiness Score → personalize emails referencing exactly what the company is missing (e.g. "I noticed you don't have an llms.txt yet…").
Investor due diligence
Run across a portfolio or watchlist; flag companies whose AI-agent readiness is below their cohort's median.
Conference / event lead lists
Score every attendee company; sort by readiness gap to prioritize hot demo-able prospects.
Pricing
1 credit per scan, with no throttling and no per-domain markup.
Free tier includes 1,500 credits/month — enough to enrich ~50 companies a day from Clay before any payment is needed. Paid plans (Pro Lite/Pro/Pro+) include 10K–250K credits/month, and top-up packs are available beyond that. Plan details →
FAQ
How fast is one scan?
Median 4–7 seconds per domain (we make ~12 outbound HTTP requests per scan to verify each standard). Hard timeout: 30 seconds. Clay's default HTTP timeout is sufficient.
What does the score actually measure?
Eighteen agent-readiness standards across the six categories in the response: Discoverability (robots.txt, sitemap, llms.txt), Content (OG metadata, structured data, Markdown variants, JSON-LD), Access Control (OAuth discovery, Content Signals header), Capabilities (MCP server card, OpenAPI), Commerce (x402 payment rails), and Quality (HTTPS, security headers).
What if the URL is unreachable or 404s?
Returns HTTP 200 with a low score and `extras.homepage_blocked: true`. Never throws — Clay rows won't fail.
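If you want a dedicated column flagging unreachable sites, the check is a single field lookup on the response body; a minimal sketch:

```python
def homepage_blocked(response: dict) -> bool:
    """True when the scan could not fetch the target homepage."""
    return bool(response.get("data", {})
                        .get("extras", {})
                        .get("homepage_blocked"))
```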
Can I batch many URLs in one call?
Currently 1 URL per call. Clay parallelizes naturally across rows — typical Clay tables run 10–50 rows concurrently, and our API is sized for that. If you need true batch (many URLs in a single request and response), let us know and we'll prioritize it.
Is this rate-limited?
Not at the per-second level. The credit count is the only ceiling. If you hit your monthly credit allowance, the next call returns 429 with the credit balance — buy a top-up pack and continue.
Where is the data stored?
Each scan also writes to our internal agent_readiness_scans table for our own analytics — we don't expose your scans to third parties or use them for our public Top Lists without consent.
Ready to wire it up?
Questions? Email [email protected].