JSON diagrams.
For AI agents.
One API call takes an article and returns the diagram in four formats: JSON, Markdown, SVG, and JSON-LD. AI agents pick the format they parse fastest, humans get the SVG, search engines get the JSON-LD. Same generation, same credit, every shape at once.
Try it on any article
Paste a public article URL. We'll fetch it, run the diagram pipeline, and render the result inline. No sign-up required for this preview.
Free preview — one diagram per IP per hour. Theme + font choices flow through to the SVG output (API: options.theme / options.font). For unlimited use, grab an API key.
How it works
Send title + content
POST { title, content_html } to /api/v1/visual-tldr/generate with your Bearer key. Or have the WordPress plugin fire it automatically on publish. Or call generate_visual_tldr from Claude over MCP.
Gemini extracts the graph
6-10 typed concept nodes + edges with short verb labels, grounded in the actual article text. Essential nodes are flagged so the Quick view stays the most faithful summary.
Get inject-ready HTML
svg_html with both desktop and mobile SVG variants in a single CSS-toggled wrapper. Drop into your post body, or let the plugin / MCP write it back via the WP REST API.
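A minimal sketch of that round trip in Python (using the requests library; the field names match the example response further down):

import requests

API_KEY = "sk_live_..."  # your Bearer key

resp = requests.post(
    "https://www.startuphub.ai/api/v1/visual-tldr/generate",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "title": "Why Cache Invalidation Is So Hard",
        "content_html": "<p>...your article HTML...</p>",
    },
    timeout=120,
)
resp.raise_for_status()
data = resp.json()

svg_html = data["svg_html"]   # inject-ready HTML: 3-view toggle + SVG
markdown = data["markdown"]   # Markdown TL;DR for LLM handoff
jsonld   = data["jsonld"]     # ImageObject block for the page head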
Four formats, one API call
Same article in. JSON diagram, Markdown TL;DR, SVG image, and JSON-LD block out. AI agents pick the format they parse fastest, humans get the SVG, search engines get the JSON-LD.
No second call. No format conversion. No parsing the SVG to pull the graph back out.
- JSON diagram — typed nodes + verb-labeled edges, 200 tokens to parse
- Markdown TL;DR — paste into Notion, Substack, Claude, ChatGPT
- SVG diagram — desktop + mobile variants, drop inline anywhere
- JSON-LD ImageObject — Google AI Overviews + ChatGPT Search citations
“Our content agent now writes the post, calls Visual TL;DR, and pastes the SVG inline before publish. The JSON output goes into our internal RAG index in the same step.”
The same article, rendered in each of the four formats:
{
"nodes": [
{ "id": "n1", "label": "Distributed cache",
"type": "concept", "essential": true },
{ "id": "n2", "label": "Stale read",
"type": "effect", "essential": true },
{ "id": "n3", "label": "Source-of-truth refresh",
"type": "result" }
],
"edges": [
{ "from": "n1", "to": "n2", "label": "causes" },
{ "from": "n2", "to": "n3", "label": "triggers" }
],
"title": "Why cache invalidation is hard"
}

# Visual TL;DR — Why cache invalidation is hard

## Concepts
- **Distributed cache** _(concept)_: holds copies across nodes
- **Stale read** _(effect)_: client gets outdated value
- **Source-of-truth refresh** _(result)_: pulls fresh data

## Flow
- Distributed cache → _causes_ → Stale read
- Stale read → _triggers_ → Source-of-truth refresh

---
Diagram by startuphub.ai
<style>...</style>
<div data-article-diagram>
<svg viewBox="0 0 808 220" ...>
<rect x="40" y="20" width="160" height="60"
rx="8" fill="#EFF6FF" stroke="#3B82F6" />
<text x="120" y="55" text-anchor="middle"
font-weight="700">Distributed cache</text>
<!-- ...nodes + edges + labels... -->
</svg>
</div>

{
"@context": "https://schema.org",
"@type": "ImageObject",
"name": "Why cache invalidation is hard",
"description": "Distributed cache causes stale
reads, which trigger source-of-truth refresh.",
"contentUrl": "https://cdn.startuphub.ai/.../diagram.webp",
"encodingFormat": "image/webp",
"creator": { "@type": "Organization",
"name": "startuphub.ai" }
}

Why agents prefer JSON diagrams over rendered SVG: an SVG forces an LLM to OCR text, infer node connections from coordinates, and lose the semantic typing. The JSON above takes 200 tokens and is fully traversable. That is why every serious agent-to-article pipeline now pulls the JSON, not the picture.
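As a rough illustration of that traversability, here's how an agent might walk the nodes/edges payload above and read the graph back as plain sentences (a sketch in Python, no SVG parsing involved):

diagram = {
    "nodes": [
        {"id": "n1", "label": "Distributed cache", "type": "concept", "essential": True},
        {"id": "n2", "label": "Stale read", "type": "effect", "essential": True},
        {"id": "n3", "label": "Source-of-truth refresh", "type": "result"},
    ],
    "edges": [
        {"from": "n1", "to": "n2", "label": "causes"},
        {"from": "n2", "to": "n3", "label": "triggers"},
    ],
    "title": "Why cache invalidation is hard",
}

labels = {n["id"]: n["label"] for n in diagram["nodes"]}
for edge in diagram["edges"]:
    print(labels[edge["from"]], edge.get("label", "->"), labels[edge["to"]])
# Distributed cache causes Stale read
# Stale read triggers Source-of-truth refresh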
What ships with every diagram
3 cascading views
Each diagram renders as Quick (4-5 essential nodes), Explain (essentials + inline descriptions), and Deeper (full extracted concept graph). CSS-only toggle — no JS framework needed in the host page.
Concept extraction with Gemini
Gemini grounds the diagram in the article text itself: nodes are typed (cause / effect / result / key / concept) and edges carry short verbs ("leads to", "causes", "enables") so the diagram reads as prose.
Responsive without scaling text
Two SVG variants ship in every diagram. Desktop renders the cascade at native 808px so text stays 16px. Mobile (<640px) swaps to a 340px-wide compact layout via CSS @media — readable at 14px, no horizontal scroll.
WordPress plugin
Download our plugin .zip, upload via wp-admin → Plugins → Add New → Upload, activate, and set your API key. Diagrams generate automatically on post publish or on demand from the editor. Results are cached per post so a re-render doesn't re-spend credits. (Not on the WP marketplace — we ship directly so updates land same-day.)
REST API for any platform
POST title + content_html to /api/v1/visual-tldr/generate, get back svg_html ready to inject. Ghost, Substack-via-RSS, Jamstack — anywhere with a publish webhook works.
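As one possible wiring, a publish webhook can call the endpoint and hand svg_html back to your CMS. A sketch in Python with Flask; the webhook payload fields (title, html, url) are placeholders for whatever your platform actually sends:

from flask import Flask, jsonify, request
import requests

app = Flask(__name__)
API_KEY = "sk_live_..."

@app.post("/hooks/post-published")
def on_publish():
    post = request.get_json()  # payload shape depends on your CMS webhook
    r = requests.post(
        "https://www.startuphub.ai/api/v1/visual-tldr/generate",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "title": post["title"],
            "content_html": post["html"],
            "source_url": post.get("url"),
        },
        timeout=120,
    )
    r.raise_for_status()
    svg_html = r.json()["svg_html"]
    # write svg_html back into the post body via your platform's API here
    return jsonify({"diagram_injected": True})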
MCP for AI workflows
Same generator exposed as the generate_visual_tldr MCP tool. Connect once in Claude.ai / Cursor / Windsurf and your agent can drop a diagram into any post via natural language.
Who uses it
Editorial publishers
Auto-add a Visual TL;DR to every news article. AI Overviews and ChatGPT cite the diagram's alt-text + JSON-LD as a structured summary, lifting AI-attribution traffic.
B2B blogs + docs
Long-form technical posts get a flow diagram readers can scan in 5 seconds before deciding whether to commit to the 10-minute read. Dwell time and scroll depth both improve.
AI agent pipelines
Drop the MCP server into a content-publishing agent. After draft + review + image, the agent calls generate_visual_tldr and pastes the result inline before publish.
Pricing
One generation = one credit. Unused credits don't roll over.
Direct API call
Identical response shape via REST or MCP.
curl -X POST https://www.startuphub.ai/api/v1/visual-tldr/generate \
-H "Authorization: Bearer sk_live_..." \
-H "Content-Type: application/json" \
-d '{
"title": "Why Cache Invalidation Is So Hard",
"content_html": "<p>...your article HTML...</p>",
"source_url": "https://yourblog.com/post-slug",
"options": { "include_attribution": false }
}'
# Returns:
# {
# "success": true,
# "svg_html": "<style>...</style><div>...3-view toggle + SVG...</div>",
# "image_url": "https://cdn.startuphub.ai/...diagram.webp",
# "markdown": "# Visual TL;DR — ...",
# "jsonld": { "@type": "ImageObject", ... },
# "nodes_count": 8,
# "attribution_included": false
# }

FAQ
Why do AI agents prefer JSON diagrams over SVG or images?
A rendered SVG forces an LLM to OCR text and infer connections from coordinates. A JSON diagram with typed nodes and labeled edges is fully traversable in 200 tokens. Agents run downstream reasoning, citation, and re-rendering off the JSON; the SVG is for the human reader at the end of the chain.
What does the JSON diagram format look like?
Two arrays: nodes (id, label, type one of cause/effect/result/key/concept, optional desc, essential flag) and edges (from, to, optional label verb). Plus a top-level title string. Same shape every call. No schema drift.
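Sketched as Python type hints (the class names are ours; the fields come from the answer above, and "from" uses the functional TypedDict form because it's a Python keyword):

from typing import Literal, TypedDict

class Node(TypedDict, total=False):
    id: str
    label: str
    type: Literal["cause", "effect", "result", "key", "concept"]
    desc: str        # optional short description
    essential: bool  # essential nodes drive the Quick view

Edge = TypedDict("Edge", {"from": str, "to": str, "label": str}, total=False)

class Diagram(TypedDict):
    nodes: list[Node]
    edges: list[Edge]
    title: str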
Can I use the Markdown output to feed another LLM?
Yes. The Markdown variant is structured as a concept list plus a flow list, which Claude, GPT, Gemini, and Llama all parse cleanly without prompt engineering. It is the format most agent-to-agent handoff pipelines settle on because it is human-readable, LLM-friendly, and lossless against the source JSON.
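For instance, handing the Markdown TL;DR to another model as grounding (shown here with Anthropic's Python SDK; any LLM client works the same way, and the model name is just an example):

import anthropic

markdown_tldr = open("visual_tldr.md").read()  # the "markdown" field from the API response

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": f"Using this diagram summary, write a two-sentence abstract:\n\n{markdown_tldr}",
    }],
)
print(reply.content[0].text)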
Is there an MCP server for Claude / Cursor / Windsurf?
Yes. The generate_visual_tldr tool ships in the same MCP server as our other agent tools (Search startups, Discover email, Agent readiness scan). One auth, drop-in for any model client that speaks MCP. Returns all four formats in the tool response.
How is this different from Mermaid or Graphviz?
Mermaid and Graphviz are diagram syntaxes you write by hand. Visual TL;DR is a pipeline: article in, structured diagram out, no manual authoring. The JSON shape we return is intentionally simpler than the underlying Mermaid AST so agents can read and write it without learning a DSL.
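To make the relationship concrete, converting the JSON shape into Mermaid source takes only a few lines. A sketch; node ids and labels come straight from the diagram payload:

def to_mermaid(diagram: dict) -> str:
    lines = ["flowchart LR"]
    for n in diagram["nodes"]:
        lines.append(f'    {n["id"]}["{n["label"]}"]')
    for e in diagram["edges"]:
        arrow = f'-->|{e["label"]}|' if e.get("label") else "-->"
        lines.append(f'    {e["from"]} {arrow} {e["to"]}')
    return "\n".join(lines)

# For the cache-invalidation example above this produces:
# flowchart LR
#     n1["Distributed cache"]
#     n2["Stale read"]
#     n3["Source-of-truth refresh"]
#     n1 -->|causes| n2
#     n2 -->|triggers| n3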
Is there a free tier?
Five diagrams per month, all four output formats, and every diagram includes a "From startuphub.ai" footer. No credit card. Signing up returns a Bearer key immediately. Most editorial blogs land on Pro ($19/mo, 500 diagrams, optional attribution) once they decide to pin a diagram to every post.