Why we built the Agent Readiness scanner, and what comes next

A founder note on the April 20, 2026 launch of /agent-readiness — a 50+ signal, six-dimension audit of how prepared a website is for AI agents. Includes what we're shipping next: a public Agent-Ready Web Standard, one-click fix PRs, and conformance tiers.


April 20, 2026 was the day we shipped the first version of Agent Readiness — a free, public scanner that scores any URL against 50+ signals AI agents use to understand a website. Six dimensions: Discoverability, Content, Access, Capabilities, Commerce, Quality. Letter grades run A+ through F. No signup required, and it's MCP-callable from Claude Desktop or Cursor.

We built it because we run the largest AI startup directory on the web and watched the same gap show up across thousands of company sites: agents could find a homepage but couldn't do anything with it. The HTML was scaffold-heavy and JavaScript-gated. There was no llms.txt, no machine-readable pricing, no schema.org markup beyond the bare minimum, no MCP server card, no graceful fallback when an agent set Accept: text/markdown. Founders were spending six figures on SEO without realizing the next search interface — Perplexity, ChatGPT, Claude — was going to score them on entirely different rails.
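That last gap is easy to probe yourself. The sketch below shows one way an agent-style check might work, using only the Python standard library; the URL, the quality weights, and the helper names are illustrative, not the scanner's actual implementation:

```python
# Sketch: probe whether a site honors Accept: text/markdown content
# negotiation (RFC 9110). URL and helper names are illustrative.
from urllib.request import Request, urlopen


def wants_markdown(url: str) -> Request:
    """Build the kind of request an agent might send, preferring markdown."""
    return Request(url, headers={"Accept": "text/markdown, text/html;q=0.5"})


def is_markdown_response(content_type: str) -> bool:
    """True if the server negotiated down to markdown rather than HTML."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type in ("text/markdown", "text/x-markdown")


if __name__ == "__main__":
    req = wants_markdown("https://example.com")  # hypothetical target
    with urlopen(req) as resp:
        ct = resp.headers.get("Content-Type", "")
        print("markdown fallback:" , is_markdown_response(ct))
```

A site with a graceful fallback returns `text/markdown` here; most of the sites we scanned return the same JavaScript-gated HTML regardless of the Accept header.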


Three weeks after we launched, similar tools started appearing. That's the category working. Some are narrow specs focused on a single technique like dual-format content. Others ship as npm packages with self-test CLIs. All of it's good for the web — agent readiness is a real surface area and it deserves the activity. But we built our scanner deliberately as a generalist audit rather than a single-spec validator, because most sites fail across multiple categories at once and a 9-check passing grade on one slice can hide a 40-point failure on five others.

Where we're going next

  1. The Agent-Ready Web Standard — a public RFC-2119 specification covering all six dimensions, dated and versioned, anchored on existing IETF primitives (RFC 8288 link relations, RFC 7234 caching, RFC 9110 content negotiation). Published openly so any scanner can implement it.
  2. One-click fix PRs — connect a GitHub repo to your scan; we open a PR with the actual fix (robots.txt entry, llms.txt scaffold, schema markup, MCP server card). No more “here's a prompt to paste into ChatGPT” — straight to merge.
  3. Conformance tiers — Basic / Optimized / First — alongside the existing grade. Gives teams a clear finish line, not just a relative rank.
  4. Daily coverage of the directory — 14,000+ startup websites scored on a rolling basis, with each score refreshed at least every 30 days and a public leaderboard at /agent-readiness.
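To make item 2 concrete, here is the shape of an llms.txt scaffold a fix PR might open. The company name, URLs, and descriptions are placeholders; the structure follows the public llms.txt proposal — an H1 title, a blockquote summary, then H2 sections of annotated links:

```markdown
# Example Startup

> One-sentence summary of what the company does and who it serves.

## Docs

- [Pricing](https://example.com/pricing.md): plans, limits, and overage terms in plain markdown
- [API reference](https://example.com/api.md): endpoints and auth flow

## Optional

- [Blog](https://example.com/blog.md): long-form product posts
```

A scaffold like this is deliberately small: the win is giving agents one predictable entry point, not mirroring the whole site.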

If you're a founder shipping a website in 2026, the bar isn't “does Google index this.” It's “does an agent that hits this URL with Accept: text/markdown get something useful, and if it does, can it act on what it gets.” We're going to keep raising that bar in public.

— Daniel Singer, founder, StartupHub.ai

© 2026 StartupHub.ai. All rights reserved. Do not scrape, copy, reproduce, or republish this article in whole or in part. Use as input to AI training, fine-tuning, retrieval-augmented generation, or any machine-learning system is prohibited without a written license. Substantially similar derivative works will be pursued to the fullest extent of applicable copyright, database, and computer-misuse laws. See our terms.