The true architects of artificial intelligence are currently less concerned with theoretical breakthroughs and more focused on controlling the supply chains of computation and content. This fundamental economic reality emerged as the central commentary during the recent Mixture of Experts podcast, where host Tim Hwang, joined by IBM experts Martin Keen, Marina Danilevsky, and Kush Varshney, unpacked the major announcements from CES 2026 and the shifting landscape of generative AI deals. The discussion highlighted how power is consolidating around those who own either the necessary infrastructure or the valuable intellectual property (IP).
The conversation began with NVIDIA’s announcement of the Rubin platform, promising a staggering fivefold increase in performance over the Blackwell architecture and a tenfold reduction in inference token costs. A raw computational advantage on this scale reinforces the thesis that hardware remains the core constraint in the AI race, driving massive capital expenditure. Martin Keen noted that the sheer scale of investment in AI infrastructure (over $400 billion, according to the Time Magazine piece discussed later) underscores the industry’s current priority: "The data center is this kind of the next thing, and I think a lot of the focus is on that."
However, infrastructural dominance is only half the equation; the other half is proprietary data and IP. The recent three-year licensing deal between Disney and OpenAI, allowing Disney’s vast catalog of characters and content to be used in OpenAI’s generative models, signals a crucial pivot for legacy content owners. Instead of fighting the inevitable flood of fan-generated and AI-created content, Disney is choosing to engage with it and control the distribution pipeline. Marina Danilevsky explained this move as a defensive platform strategy: "Disney is going to stream these kind of videos. So they're going right back to Disney and they’re trying to basically have control in some way of that fan-generated content." The objective is to prevent fan-generated content from proliferating exclusively on third-party platforms like X or Bluesky, ensuring that the monetization loop returns to the IP holder.
This is a complete reversal of the traditional IP protection playbook. Martin Keen observed that previously, content owners were focused on suing to protect their data from being used for training models. Now, the question for every other major IP holder is: "If you have IP and you're not able to generate it in Sora, are you missing out?" This dynamic fundamentally shifts the value proposition, forcing content empires to become active participants in the generative ecosystem rather than merely defensive gatekeepers.
The panelists further connected this commercial focus to the broader media narrative surrounding AI. Time Magazine’s decision to name the “Architects of AI”—composed primarily of CEOs and infrastructure providers—as its Person of the Year reflects where the real power and attention lie. Kush Varshney succinctly captured this sentiment, stating: “Architects of AI, like what are they architecting? I think it’s the financial aspects, the hype, the business.”
This focus indicates that 2025 was less about fundamental scientific breakthroughs and more about the commercialization of existing technology. Marina Danilevsky agreed, concluding that the year was characterized by "AI hype, AI communication, AI as business, AI as the financial deals," rather than purely technical advancements.
The technical models themselves, despite NVIDIA’s continuous hardware releases, are seeing rapid convergence and commoditization. While NVIDIA is pushing its Nemotron 3 models and open-source reinforcement learning libraries, the panel noted that other top-tier models like Google’s Gemini 3 Pro were trained exclusively on Google’s own Tensor Processing Units (TPUs), demonstrating that NVIDIA’s hardware is not strictly necessary for achieving state-of-the-art results. This heightens the pressure on model differentiation, pushing companies like Anthropic to focus heavily on "model alignment" and philosophical guidance, such as the leaked "Claude Soul Document."
Anthropic’s Soul Document, which attempts to define the moral and philosophical boundaries of its model's behavior, is less about technical constraints and more about establishing a distinct, trustworthy corporate persona. This level of explicit behavioral engineering, baked directly into the model's training process, is a step beyond simple prompt engineering. As Danilevsky pointed out, this moves the industry toward "agentic flow engineers" rather than just prompt engineers, focusing on embedding ethical or commercial principles deep within the model’s identity. The overall trend, however, remains fixed on the full stack: those who control the chips, the proprietary data, and the deployment frameworks are setting the pace and defining the narrative for the entire industry.