Visual TL;DR — public beta

JSON diagrams.
For AI agents.

One API call takes an article and returns the diagram in four formats: JSON, Markdown, SVG, and JSON-LD. AI agents pick the format they parse fastest, humans get the SVG, search engines get the JSON-LD. Same generation, same credit, every shape at once.

  • 5 free diagrams / month
  • ~6s median generation
  • Connectors: WP · REST · MCP
  • 3 views per diagram

Try it on any article

Paste a public article URL. We'll fetch it, run the diagram pipeline, and render the result inline. No sign-up required for this preview.


Free preview — one diagram per IP per hour. Theme + font choices flow through to the SVG output (API: options.theme / options.font). For unlimited use, grab an API key.

How it works

1. Send title + content

POST { title, content_html } to /api/v1/visual-tldr/generate with your Bearer key. Or have the WordPress plugin fire it automatically on publish. Or call generate_visual_tldr from Claude over MCP.

2. Gemini extracts the graph

Gemini extracts 6-10 typed concept nodes plus edges with short verb labels, grounded in the actual article text. Essential nodes are flagged so the Quick view stays the most faithful summary.

3. Get inject-ready HTML

svg_html with both desktop and mobile SVG variants in a single CSS-toggled wrapper. Drop into your post body, or let the plugin / MCP write it back via the WP REST API.
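The three steps above boil down to a single request. A minimal Python sketch of assembling it (endpoint and field names from this page; the key and article values are placeholders):

```python
import json

API_URL = "https://www.startuphub.ai/api/v1/visual-tldr/generate"

def build_generate_request(api_key: str, title: str, content_html: str):
    """Assemble headers + JSON body for the generate call; send with any HTTP client."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"title": title, "content_html": content_html})
    return headers, body
```

Pass the pair to your HTTP client of choice (e.g. `requests.post(API_URL, headers=headers, data=body)`) and read `svg_html` off the JSON response.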

For AI agents + humans

Four formats, one API call

Same article in. JSON diagram, Markdown TL;DR, SVG image, and JSON-LD block out.

No second call. No format conversion. No parsing the SVG to pull the graph back out.

  • JSON diagram — typed nodes + verb-labeled edges, 200 tokens to parse
  • Markdown TL;DR — paste into Notion, Substack, Claude, ChatGPT
  • SVG diagram — desktop + mobile variants, drop inline anywhere
  • JSON-LD ImageObject — Google AI Overviews + ChatGPT Search citations
"Our content agent now writes the post, calls Visual TL;DR, and pastes the SVG inline before publish. The JSON output goes into our internal RAG index in the same step."
Editorial automation lead, mid-market publisher
Every diagram is rendered in JSON, Markdown, SVG, and JSON-LD simultaneously.

JSON diagram
Agent-native
{
  "nodes": [
    { "id": "n1", "label": "Distributed cache",
      "type": "concept", "essential": true },
    { "id": "n2", "label": "Stale read",
      "type": "effect", "essential": true },
    { "id": "n3", "label": "Source-of-truth refresh",
      "type": "result" }
  ],
  "edges": [
    { "from": "n1", "to": "n2", "label": "causes" },
    { "from": "n2", "to": "n3", "label": "triggers" }
  ],
  "title": "Why cache invalidation is hard"
}
Typed nodes (cause / effect / result / key / concept) + verb-labeled edges. Parses in 5 lines of any language. The format AI agents prefer for downstream reasoning.
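To make the "parses in 5 lines" claim concrete, here is the whole traversal over the sample payload above (a sketch, not SDK code):

```python
diagram = {
    "nodes": [
        {"id": "n1", "label": "Distributed cache", "type": "concept", "essential": True},
        {"id": "n2", "label": "Stale read", "type": "effect", "essential": True},
        {"id": "n3", "label": "Source-of-truth refresh", "type": "result"},
    ],
    "edges": [
        {"from": "n1", "to": "n2", "label": "causes"},
        {"from": "n2", "to": "n3", "label": "triggers"},
    ],
    "title": "Why cache invalidation is hard",
}

# Resolve each edge into a readable sentence: the traversal is a lookup plus a join.
labels = {n["id"]: n["label"] for n in diagram["nodes"]}
flow = [f'{labels[e["from"]]} {e.get("label", "->")} {labels[e["to"]]}'
        for e in diagram["edges"]]
# flow == ["Distributed cache causes Stale read",
#          "Stale read triggers Source-of-truth refresh"]
```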
Markdown TL;DR
LLM-friendly
# Visual TL;DR — Why cache invalidation is hard

## Concepts
- **Distributed cache** _(concept)_: holds copies across nodes
- **Stale read** _(effect)_: client gets outdated value
- **Source-of-truth refresh** _(result)_: pulls fresh data

## Flow
- Distributed cache → _causes_ → Stale read
- Stale read → _triggers_ → Source-of-truth refresh

---
Diagram by startuphub.ai
Pasteable into Notion, Substack, Linear, ChatGPT, Claude. The format an LLM agent will ingest with zero pre-processing, and the format you paste into your own writing tools.
SVG diagram
Inject inline
<style>...</style>
<div data-article-diagram>
  <svg viewBox="0 0 808 220" ...>
    <rect x="40" y="20" width="160" height="60"
      rx="8" fill="#EFF6FF" stroke="#3B82F6" />
    <text x="120" y="55" text-anchor="middle"
      font-weight="700">Distributed cache</text>
    <!-- ...nodes + edges + labels... -->
  </svg>
</div>
Two responsive variants in one wrapper: desktop 808px, mobile 340px, swapped by CSS @media. No JS framework needed. Drops into WordPress, Ghost, Substack, MDX, plain HTML.
JSON-LD schema
SEO + AI search
{
  "@context": "https://schema.org",
  "@type": "ImageObject",
  "name": "Why cache invalidation is hard",
  "description": "Distributed cache causes stale reads, which trigger source-of-truth refresh.",
  "contentUrl": "https://cdn.startuphub.ai/.../diagram.webp",
  "encodingFormat": "image/webp",
  "creator": { "@type": "Organization",
               "name": "startuphub.ai" }
}
Schema.org ImageObject so Google AI Overviews, Bing Copilot, and ChatGPT Search treat the diagram as a structured asset of your article. Boosts citation odds.

Why agents prefer JSON diagrams over rendered SVG: an SVG forces an LLM to scrape label text out of markup, infer node connections from coordinates, and lose the semantic typing along the way. The JSON above takes about 200 tokens and is fully traversable, which is why agent-to-article pipelines typically pull the JSON, not the picture.

What ships with every diagram

3 cascading views

Each diagram renders as Quick (4-5 essential nodes), Explain (essentials + inline descriptions), and Deeper (full extracted concept graph). CSS-only toggle — no JS framework needed in the host page.

Concept extraction with Gemini

Gemini grounds the diagram in the article text itself: nodes are typed (cause / effect / result / key / concept) and edges carry short verbs ("leads to", "causes", "enables") so the diagram reads as prose.

Responsive without scaling text

Two SVG variants ship in every diagram. Desktop renders the cascade at native 808px so text stays 16px. Mobile (<640px) swaps to a 340px-wide compact layout via CSS @media — readable at 14px, no horizontal scroll.

WordPress plugin

Download our plugin .zip, upload via wp-admin → Plugins → Add New → Upload, activate, set your API key. Diagrams generate automatically on post publish or on demand from the editor. Caches per-post so a re-render doesn't re-spend credits. (Not on the WP marketplace — we ship directly so updates land same-day.)

REST API for any platform

POST title + content_html to /api/v1/visual-tldr/generate, get back svg_html ready to inject. Ghost, Substack-via-RSS, Jamstack — anywhere with a publish webhook works.
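One sketch of the inject step: the returned svg_html wrapper can be spliced ahead of the first paragraph of a post. The first-`<p>` heuristic below is an assumption for illustration, not part of the API:

```python
def inject_diagram(post_html: str, svg_html: str) -> str:
    """Place the diagram wrapper before the first paragraph, else prepend it."""
    i = post_html.find("<p>")
    if i == -1:
        return svg_html + post_html
    return post_html[:i] + svg_html + post_html[i:]
```

Wire this into whatever fires on your publish webhook; the response shape is identical whichever platform calls it.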

MCP for AI workflows

Same generator exposed as the generate_visual_tldr MCP tool. Connect once in Claude.ai / Cursor / Windsurf and your agent can drop a diagram into any post via natural language.

Who uses it

Editorial publishers

Auto-add a Visual TL;DR to every news article. AI Overviews and ChatGPT cite the diagram's alt-text + JSON-LD as a structured summary, lifting AI-attribution traffic.

B2B blogs + docs

Long-form technical posts get a flow diagram readers can scan in 5 seconds before deciding to commit to the 10-minute read. Dwell time and scroll depth both improve.

AI agent pipelines

Drop the MCP server into a content-publishing agent. After draft + review + image, the agent calls generate_visual_tldr and pastes the result inline before publish.

Pricing

One generation = one credit. Unused credits don't roll over.

Free

Try the product, pin one blog.

$0 / forever
  • 5 diagrams / month
  • "From StartupHub.ai" footer attribution
  • WordPress plugin OR REST API
  • JSON-LD + screen-reader prose included
Most popular

Pro

Most publishers land here.

$19 / month
  • 500 diagrams / month
  • Attribution footer optional
  • WordPress + REST + MCP
  • Bulk-backfill your archive
  • Email support

Agency

Run 5+ client sites from one key.

$99 / month
  • 5,000 diagrams / month
  • Up to 5 connected sites
  • White-label embed (no footer)
  • Per-site usage reports
  • Slack support

Direct API call

Identical response shape via REST or MCP.

curl -X POST https://www.startuphub.ai/api/v1/visual-tldr/generate \
  -H "Authorization: Bearer sk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Why Cache Invalidation Is So Hard",
    "content_html": "<p>...your article HTML...</p>",
    "source_url": "https://yourblog.com/post-slug",
    "options": { "include_attribution": false }
  }'

# Returns:
# {
#   "success": true,
#   "svg_html": "<style>...</style><div>...3-view toggle + SVG...</div>",
#   "image_url": "https://cdn.startuphub.ai/...diagram.webp",
#   "markdown": "# Visual TL;DR — ...",
#   "jsonld": { "@type": "ImageObject", ... },
#   "nodes_count": 8,
#   "attribution_included": false
# }

FAQ

Why do AI agents prefer JSON diagrams over SVG or images?

A rendered SVG forces an LLM to scrape label text out of markup and infer connections from coordinates. A JSON diagram with typed nodes and labeled edges is fully traversable in about 200 tokens. Agents drive downstream reasoning, citation, and re-rendering from the JSON; the SVG is for the human reader at the end of the chain.

What does the JSON diagram format look like?

Two arrays: nodes (id, label, type one of cause/effect/result/key/concept, optional desc, essential flag) and edges (from, to, optional label verb). Plus a top-level title string. Same shape every call. No schema drift.
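That shape is small enough to pin down as types. A Python sketch (`from` is a Python keyword, hence the functional `TypedDict` form; the validator is illustrative, not an official SDK):

```python
from typing import Literal, TypedDict

NodeType = Literal["cause", "effect", "result", "key", "concept"]

# Functional TypedDict syntax because "from" is a reserved word in Python.
Node = TypedDict("Node", {"id": str, "label": str, "type": NodeType,
                          "desc": str, "essential": bool}, total=False)
Edge = TypedDict("Edge", {"from": str, "to": str, "label": str}, total=False)

def is_diagram(obj: dict) -> bool:
    """Cheap structural check: title string, nodes with id+label, edges with from+to."""
    return (isinstance(obj.get("title"), str)
            and all({"id", "label"} <= n.keys() for n in obj.get("nodes", []))
            and all({"from", "to"} <= e.keys() for e in obj.get("edges", [])))
```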

Can I use the Markdown output to feed another LLM?

Yes. The Markdown variant is structured as a concept list plus a flow list, which Claude, GPT, Gemini, and Llama all parse cleanly without prompt engineering. It is the format most agent-to-agent handoff pipelines settle on because it is human-readable, LLM-friendly, and lossless against the source JSON.

Is there an MCP server for Claude / Cursor / Windsurf?

Yes. The generate_visual_tldr tool ships in the same MCP server as our other agent tools (Search startups, Discover email, Agent readiness scan). One auth, drop-in for any model client that speaks MCP. Returns all four formats in the tool response.

How is this different from Mermaid or Graphviz?

Mermaid and Graphviz are diagram syntaxes you write by hand. Visual TL;DR is a pipeline: article in, structured diagram out, no manual authoring. The JSON shape we return is intentionally simpler than the underlying Mermaid AST so agents can read and write it without learning a DSL.
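As a concrete comparison, the JSON shape round-trips to Mermaid in a few lines (a sketch; node and edge fields from the JSON format described above):

```python
def to_mermaid(diagram: dict) -> str:
    """Render the simple nodes/edges shape as a Mermaid flowchart definition."""
    lines = ["flowchart TD"]
    for n in diagram["nodes"]:
        lines.append(f'    {n["id"]}["{n["label"]}"]')
    for e in diagram["edges"]:
        label = e.get("label")
        arrow = f'-- {label} -->' if label else '-->'
        lines.append(f'    {e["from"]} {arrow} {e["to"]}')
    return "\n".join(lines)
```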

Is there a free tier?

Five diagrams per month, all four output formats, every diagram includes a "From startuphub.ai" footer. No credit card. Sign up returns a Bearer key immediately. Most editorial blogs land on Pro ($19/mo, 500 diagrams, optional attribution) once they decide to pin a diagram to every post.

Add a diagram to your next article.

Free tier ships in under a minute. Install the plugin or grab an API key and you're generating diagrams before your coffee's done.