Claude's Corner: Crosslayer Labs — The Princeton Team That Patched the Internet's Certificate Infrastructure

Crosslayer Labs' three Princeton researchers invented MPIC — the standard now securing every HTTPS certificate on the internet. Today they sell outside-in monitoring that catches BGP hijacks, certificate fraud, DNS tampering, and JavaScript supply-chain attacks before your customers get phished.

10 min read

[Image: Crosslayer Labs homepage screenshot with Claude Corner badge]

Build difficulty: 6.4

Here's a number that should terrify your security team: BGP hijacking attacks have been used to steal millions of dollars in cryptocurrency — not by breaking encryption, but by forging the certificate that proves a site is real. The victim sees the padlock. The URL looks right. HTTPS checks out. And they're already on an attacker's server.

This is the threat Crosslayer Labs was built to stop. And unusually for a YC security startup, the founders didn't stumble onto the problem — they invented the defense that's currently built into every major certificate authority on the internet.

That's a founding story you don't get to make up.

What They Build

Crosslayer Labs provides outside-in monitoring of internet infrastructure. Their platform continuously watches everything your web presence depends on: DNS records, BGP routing tables, TLS certificate issuance, and the JavaScript supply chain. When something changes in a way that looks like an impersonation attack — a spoofed site, a hijacked route, a fraudulent cert — they catch it and give you a remediation path.

The target customer is any organization that gets impersonated. Healthcare providers watching for fake patient portals. Crypto exchanges that have already been burned by BGP attacks. Banks whose login pages get cloned for phishing. The pitch is simple: your perimeter firewall watches inbound traffic, but nobody watches what attackers build outside your network to look like you.

Business model is B2B SaaS — attack surface discovery, continuous monitoring, and security analytics. Pricing isn't public, which is table stakes for enterprise security. They offer a demo call and security assessment as the top-of-funnel entry point.

The Technical Problem They're Solving

To understand why Crosslayer Labs is interesting, you need to understand BGP hijacking and what it enables against TLS certificates.

BGP (Border Gateway Protocol) is the routing protocol that decides which path internet traffic takes between networks. It was designed in an era when you trusted every operator on the internet, which means it's trivially easy for a malicious network operator to advertise routes they don't own and intercept traffic. BGP hijacking has been documented for years — usually dismissed as a theoretical attack, periodically a practical catastrophe.

Here's where it gets clever: Certificate Authorities (CAs) like Let's Encrypt, DigiCert, and others verify you control a domain before issuing a certificate. The standard verification method is Domain Validation — the CA sends a challenge to your domain and checks for the response. If an attacker can BGP-hijack traffic between the CA's verification server and your domain, they can complete that challenge themselves and get a valid, trusted certificate for a domain they don't own.

A valid certificate for yourbank.com. Issued by a real CA. Trusted by every browser. Combined with DNS manipulation, that's enough to build a nearly perfect phishing site that defeats every standard warning sign users are trained to look for.
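The mechanics of that validation step are worth seeing. In ACME's HTTP-01 flow (RFC 8555 §8.3), the CA proves domain control by fetching a token from a well-known path and comparing what comes back. A simplified sketch of the shape of the check (real validation compares a key authorization derived from the account key, not a bare token):

```python
# Simplified shape of a Domain Validation (HTTP-01 style) check.
# Whoever answers this URL — legitimate owner or route hijacker —
# passes validation, which is exactly the weakness described above.

def http01_challenge_url(domain: str, token: str) -> str:
    # ACME serves the token at this well-known path (RFC 8555 §8.3)
    return f"http://{domain}/.well-known/acme-challenge/{token}"

def validation_passes(fetched_body: str, expected: str) -> bool:
    return fetched_body.strip() == expected

http01_challenge_url("yourbank.com", "tok123")
# → "http://yourbank.com/.well-known/acme-challenge/tok123"
```

The single point of failure is the network path between the CA's prober and the domain — which is precisely what a BGP hijack controls.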


Henry Birge-Lee (CEO) demonstrated exactly this attack in a 2018 USENIX Security paper titled "Bamboozling Certificate Authorities with BGP." That paper didn't just document the problem — it outlined a solution: Multi-Perspective Issuance Corroboration, or MPIC.

How MPIC Works (and Why It Matters)

MPIC requires CAs to verify domain control from multiple geographic vantage points simultaneously. An attacker can BGP-hijack a route from one location, but hijacking the same traffic from three or five geographically distributed vantage points — all at the same time — is exponentially harder and almost certainly detectable.
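The corroboration logic itself is conceptually simple — a quorum over independent observations. A toy illustration of the idea (this is not the Open MPIC implementation; vantage-point names are made up):

```python
# Toy MPIC-style corroboration: issuance proceeds only if enough
# independent vantage points observed the expected challenge value.
def corroborate(expected: str, observed: dict[str, str], quorum: int = 3) -> bool:
    agreeing = [vp for vp, value in observed.items() if value == expected]
    return len(agreeing) >= quorum

# Hijacking the route seen by one vantage point is no longer enough:
honest = {"us-east": "tok", "eu-west": "tok", "ap-south": "tok"}
hijacked = {"us-east": "forged", "eu-west": "tok", "ap-south": "tok"}
corroborate("tok", honest)    # True — all perspectives agree
corroborate("tok", hijacked)  # False — quorum of 3 not met
```

The hard part in production is not the quorum check but placing the vantage points in genuinely independent routing domains, so that a single hijacked route cannot affect several of them at once.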

The CA/Browser Forum (the standards body governing how certificates work) adopted MPIC as a requirement for all major CAs, effective March 2025. That means every time Let's Encrypt, Google Trust Services, Amazon's ACM, or DigiCert issues a certificate, they're running MPIC validation. Birge-Lee co-authored the standard and built the Open MPIC project that handles roughly 1.5 million certificate validations per day.

700 million websites. 3 billion digital certificates. Every HTTPS connection made after March 2025 is protected in part by this team's research.

That context is essential for evaluating Crosslayer as a business: the founders aren't pitching themselves as security researchers who might understand this space. They are the people who fixed the internet's certificate infrastructure. The commercial product is the productization of a decade of this expertise.

Technical Architecture

Crosslayer's monitoring approach maps neatly onto the internet stack they're defending:

Layer 1 — DNS. They watch for DNS hijacks, unauthorized zone transfers, CNAME hijacks on abandoned subdomains, and nameserver changes. DNS changes that route traffic to new infrastructure are an early warning signal for most spoofing attacks.

Layer 2 — BGP. Route origin validation (ROV) and RPKI monitoring to detect unauthorized route announcements. This is the hardest layer to watch at scale because BGP is fundamentally a distributed system with no central authority — you need to observe it from many vantage points simultaneously.

Layer 3 — TLS Certificates. Certificate Transparency logs (CT logs) are public — every publicly trusted TLS certificate is logged to systems like Google's Argon or Cloudflare's Nimbus. Crosslayer watches CT logs for unauthorized certificate issuance for any domain in their monitoring scope. A cert issued for your domain that you didn't request is a smoking gun.

Layer 4 — JavaScript supply chain. Third-party JS loaded on your pages is an attack surface. Crosslayer monitors subresource integrity, detects unexpected script changes, and watches for JavaScript that might exfiltrate credentials to an attacker-controlled endpoint.

The value isn't in watching any single layer — it's in correlating signals across all of them. A BGP anomaly followed by a new certificate issuance followed by a DNS change is a pattern that screams impersonation attack. Any individual signal might be noise. The correlation is the signal.

Architecturally, this requires distributed probe infrastructure positioned across multiple Autonomous Systems (ASes) — you can't do meaningful BGP monitoring from a single datacenter. Scale-wise, they're ingesting continuous streams from CT logs, route collector services like RIPE RIS and RouteViews, and running their own DNS probing across global vantage points.

Difficulty Scores

| Dimension | Score (1-10) | Why |
|-----------|--------------|-----|
| ML / AI | 5 | Anomaly detection on network telemetry is not frontier ML, but correlating multi-layer signals cleanly requires domain-specific feature engineering that is easy to get wrong |
| Data | 9 | Global BGP vantage points, CT log ingestion at scale, DNS probing infrastructure — this is a heavy data-collection problem before you write a single detection rule |
| Backend | 8 | Real-time correlation of streaming network data across multiple protocols at internet scale is genuinely hard distributed systems work |
| Frontend | 3 | Security alert dashboards are solved UX; it is not the hard part here |
| DevOps | 9 | Running distributed probe nodes across dozens of ASes globally, plus reliable ingestion pipelines for CT logs and BGP feeds, is serious infrastructure |

The Moat

The easy-to-replicate parts: the product concept is public (monitor CT logs, watch BGP, correlate signals), and the academic papers describing the attack vectors are freely available at USENIX and ACM. A well-staffed security engineering team could build version 1.0 of this monitoring stack in 6-12 months.

The hard-to-replicate parts are more interesting.

Standards body access. Henry Birge-Lee is a CA/Browser Forum member. This is not a ceremonial role — it is the table where the rules for how the internet's certificate infrastructure works get written. Understanding what attacks the standards are designed to prevent — and more importantly, what they do not prevent yet — is intelligence that no competitor can buy.

Proprietary threat intelligence over time. Every customer they monitor adds signal to their threat model. Attack patterns across hundreds or thousands of organizations, correlated with real-world incidents, compound into a detection capability that no new entrant can replicate on day one. The longer they operate, the better their false-positive rates get — and false-positive rates are what kill enterprise security products in practice.

Credibility in a trust-heavy market. Enterprise security buyers are conservative. "We invented MPIC, the standard that is now required by all major CAs" is the kind of origin story that gets you past procurement reviews that would filter out a team of ex-SaaS engineers who read some network security papers. The academic pedigree from Princeton — a professor and two PhD-level researchers — is a genuine differentiator in a domain where credentials are evaluated skeptically.

The distributed vantage point problem. Building a globally distributed probe network with meaningful coverage across diverse ASes takes time and money. Competitors cannot just spin up EC2 instances in multiple regions and call it done — the whole point is that your vantage points need to be in genuinely different routing domains to catch BGP hijacks. This is an infrastructure investment that takes real time to build correctly.

What they are not protected against: a well-capitalized competitor (think Cloudflare, Palo Alto Networks, Recorded Future) could absolutely build this if they decided it was a priority. The question is whether the market is big enough and differentiated enough to stay independent before that happens. Security acquisitions happen fast when the technology is proven.

Replicability Score: 68 / 100

The core concept is replicable — the papers are public, the infrastructure is buildable, and the product category is understood. What pushes this past 60 is the combination of distributed infrastructure requirements, the cumulative threat intelligence data moat, and the genuine depth of domain expertise needed to build detection logic that does not generate false positives at enterprise scale. The CA/Browser Forum access and standards credibility are essentially impossible to fast-follow. You would need to spend 5-7 years doing the underlying research to get that seat at the table — and by then, Crosslayer will have 5-7 years of operational data.

A weekend project cannot touch this. A Series B startup with excellent security engineers could build a rough clone in 18 months, but they would be fighting with inferior threat intelligence, no standards credibility, and a distributed probe network that is not as mature. That is a real moat, even if it is not an impenetrable one.

Bottom Line

Crosslayer Labs is one of the more credible technical teams to come out of YC W2026. Three researchers from Princeton who literally invented the defense mechanism built into the internet's certificate infrastructure — now building a commercial product to extend that protection beyond what the standard covers.

The market timing is excellent: BGP-based attacks have moved from theoretical to operationally demonstrated in the wild, and the regulatory pressure on financial services and healthcare to demonstrate active monitoring of their internet attack surface is increasing. The question is not whether this technology matters — it is whether a 3-person team can build the GTM muscle to land enterprise security contracts at a pace that outpaces the eventual move by larger players.

If you are running a crypto exchange, a healthcare portal, or a financial institution that has been on the receiving end of a spoofing attack, you should be talking to them. The alternative is hoping your users notice the URL is slightly wrong before they type in their credentials.

They usually do not.

© 2026 StartupHub.ai. All rights reserved. Do not enter, scrape, copy, reproduce, or republish this article in whole or in part. Use as input to AI training, fine-tuning, retrieval-augmented generation, or any machine-learning system is prohibited without written license. Substantially-similar derivative works will be pursued to the fullest extent of applicable copyright, database, and computer-misuse laws. See our terms.

Build This Startup with Claude Code

Complete replication guide — install as a slash command or rules file

# Build Guide: Internet Impersonation Detection Platform (Crosslayer Labs Clone)

A step-by-step guide to building an outside-in internet infrastructure monitoring platform that detects BGP hijacks, certificate fraud, DNS tampering, and JavaScript supply chain attacks. Use Claude Code to scaffold each step.

---

## Step 1: Database Schema & Core Data Models

Design a PostgreSQL schema to track monitored domains and events.

```sql
-- Monitored assets per customer
CREATE TABLE monitored_domains (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  org_id UUID NOT NULL,
  fqdn TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  UNIQUE(org_id, fqdn)
);

-- DNS snapshots
CREATE TABLE dns_snapshots (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  domain_id UUID REFERENCES monitored_domains(id),
  snapshot_at TIMESTAMPTZ DEFAULT NOW(),
  record_type TEXT NOT NULL,        -- A, CNAME, MX, NS, TXT
  record_value TEXT NOT NULL,
  ttl INTEGER,
  probe_location TEXT NOT NULL      -- which vantage point
);

-- BGP route observations
CREATE TABLE bgp_observations (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  prefix CIDR NOT NULL,
  origin_asn INTEGER NOT NULL,
  as_path INTEGER[] NOT NULL,
  observed_at TIMESTAMPTZ DEFAULT NOW(),
  collector_id TEXT NOT NULL,       -- RIPE RIS, RouteViews, etc.
  is_anomalous BOOLEAN DEFAULT FALSE
);

-- Certificate Transparency log entries
CREATE TABLE ct_log_entries (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  domain TEXT NOT NULL,
  san_domains TEXT[],
  issuer_cn TEXT NOT NULL,
  not_before TIMESTAMPTZ NOT NULL,
  not_after TIMESTAMPTZ NOT NULL,
  serial_number TEXT UNIQUE NOT NULL,
  ct_log_id TEXT NOT NULL,
  logged_at TIMESTAMPTZ DEFAULT NOW(),
  is_authorized BOOLEAN           -- NULL = unknown, TRUE = customer confirmed
);

-- Cross-layer incidents
CREATE TABLE incidents (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  org_id UUID NOT NULL,
  domain_id UUID REFERENCES monitored_domains(id),
  severity TEXT CHECK (severity IN ('critical','high','medium','low')),
  attack_type TEXT NOT NULL,       -- bgp_hijack, dns_tamper, cert_fraud, js_injection
  description TEXT NOT NULL,
  evidence JSONB NOT NULL,         -- raw signals that triggered the incident
  remediation TEXT,
  status TEXT DEFAULT 'open',
  detected_at TIMESTAMPTZ DEFAULT NOW(),
  resolved_at TIMESTAMPTZ
);

-- JS supply chain snapshots
CREATE TABLE js_snapshots (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  domain_id UUID REFERENCES monitored_domains(id),
  page_url TEXT NOT NULL,
  script_src TEXT NOT NULL,
  sha256_hash TEXT NOT NULL,
  snapshot_at TIMESTAMPTZ DEFAULT NOW(),
  is_new BOOLEAN DEFAULT TRUE
);

CREATE INDEX ON ct_log_entries USING GIN (san_domains);
CREATE INDEX ON bgp_observations (prefix, observed_at DESC);
CREATE INDEX ON incidents (org_id, status, detected_at DESC);
```

**Claude Code prompt:** "Generate SQLAlchemy ORM models for this schema with async support and Alembic migrations."

---

## Step 2: BGP Monitoring Pipeline

Ingest BGP update feeds from public route collectors.

**Key sources:**
- RIPE RIS Live: `wss://ris-live.ripe.net/v1/ws/` — real-time BGP updates via WebSocket
- RouteViews: MRT dump files via HTTP, updated every 15 minutes
- RPKI ROA database: Cloudflare's `https://rpki.cloudflare.com/rpki.json`

```python
import asyncio
import websockets
import json

async def ingest_ris_live(collector: str = "rrc00"):
    uri = "wss://ris-live.ripe.net/v1/ws/?client=crosslayer-clone"
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({
            "type": "ris_subscribe",
            "data": {"type": "UPDATE", "host": f"{collector}.ripe.net"}
        }))
        async for message in ws:
            data = json.loads(message)
            if data.get("type") == "ris_message":
                await process_bgp_update(data["data"])

async def process_bgp_update(update: dict):
    # RIS Live includes the full AS path; the origin ASN is its last element
    as_path = update.get("path", [])
    origin_asn = as_path[-1] if as_path else None
    for ann in update.get("announcements", []):
        for prefix in ann.get("prefixes", []):
            await check_prefix_anomaly(prefix, origin_asn, as_path)

async def check_prefix_anomaly(prefix: str, origin_asn: int, as_path: list):
    # Compare against the known-good baseline from RPKI ROAs:
    # flag if the origin ASN doesn't match a covering ROA, or if the
    # announced prefix is more specific than the ROA's maxLength allows
    pass
```

**Anomaly detection logic:**
- Origin ASN changed from known baseline → high severity
- More-specific prefix announced (prefix hijack) → critical
- AS path includes unexpected transit ASNs → medium

**Claude Code prompt:** "Build an async Python service that ingests RIPE RIS Live BGP updates, stores prefix-to-ASN mappings in Redis for fast lookup, and emits anomaly events to a Postgres queue table when the origin ASN changes."

---

## Step 3: Certificate Transparency Monitor

Stream all newly issued certificates from public CT logs.

```python
import httpx
import asyncio
from dataclasses import dataclass

CT_LOGS = [
    "https://ct.googleapis.com/logs/us1/argon2024/",
    "https://ct.cloudflare.com/logs/nimbus2024/",
    "https://oak.ct.letsencrypt.org/2024h1/",
]

@dataclass
class CTEntry:
    domain: str
    san_domains: list[str]
    issuer: str
    not_before: str
    not_after: str
    log_id: str

async def stream_ct_log(log_url: str, start_index: int = 0):
    async with httpx.AsyncClient(timeout=30) as client:
        while True:
            resp = await client.get(
                f"{log_url}ct/v1/get-entries",
                params={"start": start_index, "end": start_index + 255}
            )
            entries = resp.json().get("entries", [])
            if not entries:
                await asyncio.sleep(60)
                continue
            for entry in entries:
                cert = parse_ct_entry(entry)
                if cert:
                    await match_against_monitored_domains(cert)
            start_index += len(entries)

async def match_against_monitored_domains(cert: CTEntry):
    # Check if any SAN matches a monitored domain
    # If match found, check if issuing CA is on the org's allowlist
    # If not authorized, create incident
    pass
```
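The matching step inside `match_against_monitored_domains` needs to handle wildcards. A minimal sketch — exact match plus a single-label wildcard, mirroring how clients interpret SANs:

```python
# Does a SAN entry from a CT-logged cert cover a monitored domain?
def san_matches(san: str, monitored_fqdn: str) -> bool:
    if san == monitored_fqdn:
        return True
    if san.startswith("*."):
        # a wildcard covers exactly one additional left-most label
        suffix = san[1:]                       # ".example.com"
        if monitored_fqdn.endswith(suffix):
            prefix = monitored_fqdn[: -len(suffix)]
            return prefix != "" and "." not in prefix
    return False

san_matches("*.example.com", "login.example.com")  # True
san_matches("*.example.com", "a.b.example.com")    # False — two labels deep
san_matches("example.com", "example.com")          # True
```

Getting this wrong in either direction hurts: too loose and every CDN wildcard cert pages your customer; too strict and a fraudulent `*.yourbank.com` cert slips through.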

**Claude Code prompt:** "Build a Python worker that continuously polls Certificate Transparency log APIs, parses X.509 certificate fields using the `cryptography` library, and matches new certs against a Postgres table of monitored domains. Flag any cert whose issuing CA org isn't in the domain owner's approved CA list."

---

## Step 4: Multi-Vantage-Point DNS Probing

DNS results can vary by location — that's the whole point of detecting hijacks.

```typescript
// Deploy as edge functions on Cloudflare Workers (free tier: 100k req/day)
// Each worker region = one vantage point

interface DnsProbeResult {
  domain: string;
  recordType: string;
  values: string[];
  resolvedFrom: string; // Cloudflare colo ID
  resolvedAt: string;
}

export default {
  async fetch(request: Request): Promise<Response> {
    const { domain, recordType } = await request.json() as any;
    
    // Cloudflare Workers have access to DNS-over-HTTPS natively
    const resp = await fetch(
      `https://cloudflare-dns.com/dns-query?name=${domain}&type=${recordType}`,
      { headers: { Accept: "application/dns-json" } }
    );
    const data = await resp.json() as any;
    
    const result: DnsProbeResult = {
      domain,
      recordType,
      values: (data.Answer || []).map((a: any) => a.data),
      resolvedFrom: request.cf?.colo as string,
      resolvedAt: new Date().toISOString(),
    };
    
    return Response.json(result);
  }
};
```

Deploy this worker to 10+ Cloudflare regions. Your orchestration layer sends probe requests to each region simultaneously and diffs the results. Inconsistency across regions = potential BGP/DNS hijack in progress.
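The diff itself is simple once the probes report back. One way to sketch it (region names are illustrative):

```python
# Flag divergence when vantage points disagree on the resolved answer set.
def dns_divergence(results: dict[str, list[str]]) -> bool:
    """results maps probe region -> list of A records it resolved."""
    answer_sets = {frozenset(records) for records in results.values()}
    return len(answer_sets) > 1

dns_divergence({"iad": ["198.51.100.1"], "fra": ["198.51.100.1"]})  # False
dns_divergence({"iad": ["198.51.100.1"], "fra": ["203.0.113.9"]})   # True
```

In practice raw set inequality alone is noisy: anycast and geo-DNS produce legitimate divergence, so a real implementation would baseline each domain's expected per-region behavior before alerting.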

**Claude Code prompt:** "Build a Node.js orchestration service that dispatches DNS probe requests to Cloudflare Worker endpoints in 15 regions simultaneously, diffs the responses, and writes a `dns_divergence` event to Postgres when two or more regions return different A records for the same domain."

---

## Step 5: JavaScript Supply Chain Monitor

Headless browser crawling to detect unexpected script changes.

```python
from playwright.async_api import async_playwright
import hashlib

async def snapshot_js_for_domain(domain: str, page_url: str) -> list[dict]:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        
        scripts_found = []
        
        # Intercept every script request and fingerprint its body
        async def handle_route(route):
            request = route.request
            if request.resource_type == "script":
                response = await route.fetch()
                body = await response.body()
                sha256 = hashlib.sha256(body).hexdigest()
                scripts_found.append({
                    "src": request.url,
                    "sha256": sha256,
                    "size_bytes": len(body),
                })
                # route.fetch() already issued the request, so fulfill the
                # route with that response rather than calling continue_()
                await route.fulfill(response=response)
            else:
                await route.continue_()
        
        await page.route("**/*", handle_route)
        await page.goto(page_url, wait_until="networkidle")
        await browser.close()
        
    return scripts_found

async def diff_against_baseline(domain_id: str, current_scripts: list[dict]):
    # Load last known snapshot from DB
    # Compare SHA256 hashes
    # New scripts or changed hashes → create incident
    pass
```
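The comparison inside `diff_against_baseline` could look like this, keyed on script URL (a sketch; a real version would also track removed scripts and new third-party origins):

```python
def diff_snapshots(baseline: dict[str, str], current: dict[str, str]) -> dict:
    """Both dicts map script src URL -> sha256 of the fetched body."""
    return {
        "new": sorted(src for src in current if src not in baseline),
        "changed": sorted(src for src in current
                          if src in baseline and baseline[src] != current[src]),
    }

base = {"https://cdn.example/app.js": "aaa"}
cur = {"https://cdn.example/app.js": "bbb", "https://evil.example/x.js": "ccc"}
diff_snapshots(base, cur)
# → {"new": ["https://evil.example/x.js"], "changed": ["https://cdn.example/app.js"]}
```

Legitimate deploys also change hashes constantly, so in practice each "changed" event needs to be cross-referenced with the customer's release schedule or an allowlist before it becomes an incident.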

**Claude Code prompt:** "Build a Python microservice using Playwright that crawls monitored pages on a 1-hour schedule, extracts all loaded JavaScript URLs and their SHA256 hashes, diffs against the stored baseline, and creates a `js_change` incident in Postgres when a script hash changes or a new third-party origin appears."

---

## Step 6: Correlation Engine & Alert Generation

This is where the real value is created — correlating signals across layers.

```python
from dataclasses import dataclass
from datetime import datetime
import asyncpg

@dataclass
class CorrelationWindow:
    domain: str
    window_start: datetime
    window_end: datetime
    bgp_anomalies: list
    dns_changes: list
    new_certs: list
    js_changes: list

ATTACK_PATTERNS = [
    {
        "name": "bgp_certificate_attack",
        "description": "BGP route change followed by new certificate issuance — classic MPIC-circumvention attempt",
        "severity": "critical",
        "requires": ["bgp_anomaly", "new_cert"],
        "time_window_minutes": 60,
    },
    {
        "name": "dns_hijack_with_cert",
        "description": "DNS record changed AND new certificate issued to same domain",
        "severity": "critical",
        "requires": ["dns_change", "new_cert"],
        "time_window_minutes": 30,
    },
    {
        "name": "supply_chain_injection",
        "description": "New or modified JavaScript on monitored page",
        "severity": "high",
        "requires": ["js_change"],
        "time_window_minutes": 1440,
    },
]

async def run_correlation(domain_id: str, pool: asyncpg.Pool):
    window = await gather_signals(domain_id, pool, minutes=60)
    
    for pattern in ATTACK_PATTERNS:
        if matches_pattern(window, pattern):
            await create_incident(
                domain_id=domain_id,
                attack_type=pattern["name"],
                severity=pattern["severity"],
                evidence=build_evidence(window, pattern),
                pool=pool
            )
```
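`matches_pattern` is left undefined above. Under the assumption that the window's signal lists are its only inputs, it reduces to a presence check — sketched here with a trimmed stand-in for the window dataclass so the snippet is self-contained:

```python
from dataclasses import dataclass, field

@dataclass
class SignalWindow:  # trimmed stand-in for CorrelationWindow
    bgp_anomalies: list = field(default_factory=list)
    dns_changes: list = field(default_factory=list)
    new_certs: list = field(default_factory=list)
    js_changes: list = field(default_factory=list)

def matches_pattern(window: SignalWindow, pattern: dict) -> bool:
    signals = {
        "bgp_anomaly": window.bgp_anomalies,
        "dns_change": window.dns_changes,
        "new_cert": window.new_certs,
        "js_change": window.js_changes,
    }
    # every required signal type must have at least one event in the window
    return all(signals[req] for req in pattern["requires"])

w = SignalWindow(bgp_anomalies=[{"prefix": "203.0.113.0/24"}],
                 new_certs=[{"domain": "example.com"}])
matches_pattern(w, {"requires": ["bgp_anomaly", "new_cert"]})  # True
matches_pattern(w, {"requires": ["dns_change", "new_cert"]})   # False
```

A production version would also enforce the per-pattern `time_window_minutes` when gathering signals, rather than using one fixed 60-minute window for every pattern.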

**Claude Code prompt:** "Implement a Python correlation engine that runs every 5 minutes, queries the `bgp_observations`, `dns_snapshots`, `ct_log_entries`, and `js_snapshots` tables for each monitored domain, applies the pattern definitions, and creates enriched incidents with structured evidence JSONB including links to the raw signals and an auto-generated remediation recommendation using Claude API."

---

## Step 7: API, Dashboard & Alerting

```typescript
// Hono.js REST API (fast, edge-deployable)
import { Hono } from 'hono'
import { zValidator } from '@hono/zod-validator'
import { z } from 'zod'

const app = new Hono()

// List incidents for org
app.get('/v1/incidents', async (c) => {
  const orgId = c.get('orgId') // from JWT middleware
  const { status, severity, limit = 20 } = c.req.query()
  
  const incidents = await db.query(
    `SELECT i.*, m.fqdn 
     FROM incidents i 
     JOIN monitored_domains m ON i.domain_id = m.id
     WHERE i.org_id = $1 
       AND ($2::text IS NULL OR i.status = $2)
       AND ($3::text IS NULL OR i.severity = $3)
     ORDER BY i.detected_at DESC 
     LIMIT $4`,
    [orgId, status || null, severity || null, limit]
  )
  return c.json({ incidents: incidents.rows })
})

// Add domain to monitoring
app.post('/v1/domains', zValidator('json', z.object({
  fqdn: z.string().regex(/^([a-z0-9-]+\.)+[a-z]{2,}$/)
})), async (c) => {
  const { fqdn } = c.req.valid('json')
  // Validate ownership via DNS TXT record challenge
  const challenge = await createOwnershipChallenge(fqdn)
  return c.json({ challenge_token: challenge.token, dns_record: `_crosslayer-verify.${fqdn}` })
})
```

**Deployment stack:**
- API: Hono on Cloudflare Workers
- Background workers: Python on Railway or Fly.io (needs persistent processes for BGP streaming)
- Database: Supabase (Postgres + real-time subscriptions for live dashboard)
- Alert delivery: Resend for email, Slack webhooks, PagerDuty API

**Dashboard key pages:**
1. Attack surface map (all domains, subdomains, IPs, ASNs)
2. Live incident feed with evidence timeline
3. BGP route visualization (prefix → ASN path graph)
4. Certificate inventory with issuance history

**Claude Code prompt:** "Build a Next.js dashboard with a live incident feed using Supabase real-time subscriptions. Each incident card should show the attack type, affected domain, severity badge, and an expandable evidence panel with a timeline of the correlated signals. Add a one-click 'Acknowledge' and 'Resolve' action."

---

## Estimated Build Time & Cost

| Phase | Time | Monthly Infra Cost |
|-------|------|--------------------|
| Schema + local dev env | 2 days | $0 |
| BGP + CT ingestion workers | 1 week | ~$50 (Railway) |
| DNS probing (Cloudflare Workers) | 2 days | ~$5 |
| JS crawler (Playwright) | 3 days | ~$30 (Fly.io) |
| Correlation engine | 1 week | included above |
| API + dashboard | 1 week | ~$25 (Supabase Pro) |
| **Total MVP** | **~5 weeks** | **~$110/month** |

The infrastructure cost is deceptively low for a prototype. What's not reflected: the 6-12 months needed to tune false-positive rates to enterprise-acceptable levels, and the years needed to build the distributed vantage point network that makes BGP anomaly detection meaningful.