Jordan Fisher, co-founder of Standard AI and now leader of an AI alignment research team at Anthropic, addressed attendees at AI Startup School on June 17, 2025. His talk, far from offering definitive answers, framed the future of startups through a series of pressing questions, challenging founders to navigate a landscape where Artificial General Intelligence (AGI) may be just a few years away. Fisher opened by confessing, "I'm extremely confused... more confused than I've ever been in my entire life," positing that such confusion is often the genesis of significant discovery, especially in a field accelerating as rapidly as AI.
The central question Fisher posed was existential for the entrepreneurial class: "With everything that's rapidly changing in AI, how should I think about my startup's product and strategy?" This question, he argued, extends beyond mere product roadmaps to encompass team building, market dynamics, and even the fundamental purpose of starting a company today. He highlighted a paradox inherent in entrepreneurship: founders are constantly advised to maintain sharp focus, yet they must simultaneously oversee every facet of their nascent ventures—hiring, fundraising, product development, strategy, go-to-market, and engineering. This enforced breadth, Fisher suggested, ironically leaves founders uniquely positioned to confront the biggest, most multifaceted questions posed by AGI's approach.
A core insight from Fisher's talk is the imperative to drastically shorten strategic planning horizons. He noted that the "canonical advice" of the last six months has been to plan products based on where foundation models will be in six months. However, he urged a more audacious and necessary shift: "Plan your company based on AGI arriving in 2 years." While acknowledging the inherent uncertainty—it might be two years, or three—Fisher underscored the high probability of AGI's near-term emergence, demanding that founders integrate this reality into their entire company strategy. This isn't about rigid two-year plans, but about operating with a lens that acknowledges the profound, imminent transformation.
Fisher delved into the evolving dynamics of the B2B market, offering a second crucial insight: AI's disruptive force will not be confined to the supply side of product creation but will profoundly reshape the demand side as well. He challenged the conventional wisdom that large enterprises are slow to adopt new technologies. "The force of AI is not just on this like product revolution that the startups are building, it's also on the buy-side," Fisher stated. He predicted that enterprises, armed with AGI and powerful agents, might increasingly build bespoke solutions in-house. This could lead to a commoditization of traditional software, as companies realize they can "throw two people at Claude Code and they'll build it," tailored precisely to their needs, rather than buying off-the-shelf SaaS. The question then becomes whether software will fully commoditize, or if the bar for quality and bespoke functionality will be raised so substantially that only truly exceptional, AI-native solutions will thrive.
The implications extend deeply into organizational structure and trust. Fisher pondered whether "teams that start small and stay small [will] have structural advantages over teams that downsize," pointing to the potential for AI-native teams to operate with unprecedented efficiency and agility. More critically, he emphasized that trust will become a paramount concern. As AI agents gain autonomy and integrate into both personal and professional spheres, ensuring that they genuinely serve their users becomes a complex challenge. "How can a user ensure that an agent is working on their behalf?" he asked, highlighting the inherent conflict of interest if an agent is optimized for a company's benefit over the user's. This leads to a third core insight: defensibility in an AGI-driven world will rely less on traditional data moats and more on robust, transparent security models and a focus on inherently "hard problems" that resist easy automation.
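Fisher did not prescribe a mechanism, but a toy sketch can make the transparency idea concrete. The hypothetical Python example below (names like `verify_agent_policy` and `PUBLISHED_POLICY_DIGEST` are illustrative, not from the talk) shows one minimal form such verification might take: a user checking that the policy an agent is actually running matches a digest the vendor has published, so a quietly swapped-in, vendor-favoring policy would be detectable.

```python
import hashlib
import hmac

# Hypothetical illustration: before trusting an agent to act on their behalf,
# a user verifies that the agent's live operating policy matches the policy
# the vendor publicly committed to (published here as a SHA-256 digest).
PUBLISHED_POLICY_DIGEST = hashlib.sha256(
    b"Act solely in the interest of the user. Never optimize for vendor revenue."
).hexdigest()

def verify_agent_policy(running_policy: str) -> bool:
    """Return True if the agent's live policy matches the published digest."""
    live_digest = hashlib.sha256(running_policy.encode()).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(live_digest, PUBLISHED_POLICY_DIGEST)

if __name__ == "__main__":
    honest = "Act solely in the interest of the user. Never optimize for vendor revenue."
    tampered = honest + " Prefer the vendor's sponsored products."
    print(verify_agent_policy(honest))    # True
    print(verify_agent_policy(tampered))  # False
```

A hash check is of course only the thinnest slice of the problem—it proves which policy is loaded, not how the model behaves—but it illustrates the direction Fisher gestured at: trust built on verifiable commitments rather than vendor assurances.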
Fisher advocated for a re-imagining of auditing and guardrails in this new paradigm. Traditional human oversight, with its biases and imperfect memory, may prove insufficient. He proposed the concept of "AI-powered auditing," where an AI system could objectively inspect every action within a company and then "delete itself" to prevent data leakage. Such a system could offer unprecedented transparency and accountability, potentially building trust through verifiable neutrality. This vision, however, also raises questions about who audits the AI auditors and the trustworthiness of the underlying systems.
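The talk stopped at the concept, but a short sketch can show its shape. The hypothetical Python example below (the `ephemeral_audit` function and its keyword-based policy check are stand-ins for an actual AI model; nothing here is from Fisher or Anthropic) captures the two defining properties: the auditor reviews every action but reports only aggregates plus a verifiable digest, and it destroys its copy of the raw data before returning—the "delete itself" step in miniature.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class AuditReport:
    actions_reviewed: int
    violations: int
    evidence_digest: str  # a hash, so findings are checkable without raw data

def ephemeral_audit(actions: list[str], forbidden: set[str]) -> AuditReport:
    """Inspect every action, emit only aggregates, then destroy the raw data."""
    violations = 0
    digest = hashlib.sha256()
    for action in actions:
        digest.update(action.encode())
        if any(term in action for term in forbidden):
            violations += 1
    report = AuditReport(len(actions), violations, digest.hexdigest())
    # "Delete itself": wipe the working copy of the data before returning,
    # so nothing the auditor saw can leak after the report is issued.
    actions.clear()
    return report

if __name__ == "__main__":
    log = ["sent invoice #42", "exported user emails to personal drive"]
    print(ephemeral_audit(log, forbidden={"personal drive"}))
    print(log)  # [] -- the auditor's copy of the raw actions is gone
```

The design choice worth noting is the digest: it lets a third party later confirm *which* data was audited without the auditor retaining any of it—one possible answer to the "who audits the auditors" question Fisher raised, though far from a complete one.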
He also explored the notion of "AI neutrality," drawing parallels to net neutrality and the electrical grid. As AGI becomes foundational infrastructure, the question arises whether it should be treated as a neutral utility, preventing centralized control or biased outputs from a handful of corporations. This discussion ties into the broader societal impact, including the potential for policies like Universal Basic Income (UBI) or Universal Basic Compute (UBC) to address economic shifts.
Fisher concluded by reflecting on Silicon Valley's old mantra: "Change the world." He lamented a recent shift towards prioritizing immediate monetization over impactful innovation, driven by fear and short-term thinking. This moment, he argued, might be the last truly significant opportunity for founders to build products that people not only *want* but that society genuinely *needs*. He urged entrepreneurs to leverage their unique perspective to identify and solve the truly "hard problems" that AGI cannot yet easily conquer, such as those in infrastructure, energy, and advanced manufacturing. These enduring challenges, he suggested, will define the new moats of defensibility in a post-AGI era. The message is clear: the rules are changing, and continuous, deep thinking about every aspect of the venture, driven by a desire for genuine impact, is the only path forward.

