"Regulate use, do not regulate development." This seemingly straightforward dictum from Matt Perault, Head of AI Policy at a16z, cuts to the core of the "Little Tech Agenda" and the ongoing struggle to shape AI policy in Washington, D.C. and beyond. It’s a philosophy born from the realization that while "Big Tech" has long held sway in policy debates, the unique needs and challenges of startups—the "Little Tech" builders—have been largely overlooked.
In a recent episode of "The a16z Podcast," General Partner Erik Torenberg sat down with Perault and Collin McCune, Head of Government Affairs at a16z, to unpack the origins and vision of the Little Tech Agenda. Launched in July 2024 by Marc Andreessen and Ben Horowitz, the initiative aims to be the voice for startups in a regulatory landscape often dominated by incumbents. McCune articulated the firm's motivation: "There wasn't anyone actually advocating on behalf of the startups and entrepreneurs, the smaller builders in the space." Perault echoed this sentiment, noting that the Little Tech Agenda became a "recruiting vehicle" for him by revealing an "empty seat" at the policy table for these nascent companies.
One core insight from the discussion is the fundamental disparity between regulating established tech giants and regulating nascent startups. McCune stressed that a regulatory framework designed for a trillion-dollar company with hundreds of thousands of employees simply does not fit a five-person team in a garage. "How are you supposed to be able to comply with the same things that are built for a thousand-person compliance team?" he asked, arguing that such a one-size-fits-all approach stifles competition and innovation. This distinction is paramount: current policy debates often fail to differentiate between these vastly different entities, potentially creating insurmountable barriers for emerging players.
The speakers also traced the evolution of AI policy debates, noting a significant shift in focus following a series of Senate hearings featuring major AI CEOs, which sparked widespread speculation and fear about the technology. "The message that folks heard was, one, we need and want to be regulated," McCune stated, but this often came with "a lot of speculation about the industry" that "absolutely jump-started this whole huge wave of conversation around the rise of Terminator." This fear-driven narrative, amplified by well-funded effective altruist groups, has pushed the conversation toward rapid and often ill-conceived regulation. Perault observed that this period saw proposals for licensing regimes akin to those for nuclear energy, a "historic" and "unprecedented" level of regulation for software development.
A second crucial insight emerges here: the danger of regulating development rather than use. Perault emphasized the need for "robust and expansive" regulation that focuses on "regulating harmful use, not on regulating development." He lamented that this nuanced position is often misinterpreted as advocating for "do not regulate" entirely. The distinction is critical: existing laws can address harmful applications of AI (e.g., violating consumer protection or civil rights laws). However, regulating the foundational development of AI models themselves, particularly open-source initiatives, risks stifling the very innovation that drives progress and global competitiveness.
The discussion frequently returned to the geopolitical stakes of AI policy. McCune issued a stark warning: if the U.S. stifles its AI industry through overly restrictive regulation, "we lose to China." This underscores the third core insight: American technology supremacy, and the critical role of startups in securing it, is a first-class political issue. The speakers highlighted how a decade of well-funded advocacy by certain interest groups, often pushing alarmist narratives, has shaped policy to the detriment of innovation. The result is a situation where, despite the clear need for thoughtful governance, some policymakers prioritize "quick hits" and fear-mongering over evidence-based approaches.
Perault pointed out the irony that many politicians who previously decried the lack of competition in social media are now pushing policies that would further entrench large incumbents in AI. These policies, often involving complex administrative hurdles, disproportionately impact startups that lack the legal and compliance resources of larger firms. He argued that the current regulatory proposals, even those seemingly designed to protect consumers, often fail to address the root causes of harm effectively, while simultaneously creating significant barriers to entry for new innovators. The a16z team's mission, therefore, is not to advocate for zero regulation, but for smart, proportionate regulation that fosters a competitive and innovative AI ecosystem.