Jensen Huang, President and CEO of NVIDIA, recently spoke with venture capitalists Sarah Guo and Elad Gil on the "No Priors" podcast, providing sharp commentary on the state of artificial intelligence entering 2026. The wide-ranging discussion focused on the rapid advancements in reasoning models, the economic implications of AI on labor, the geopolitical dynamics of open source technology, and the structural shift toward accelerated computing. Huang’s perspective cut through the prevailing market anxieties, positing that far from being a bubble, the industry is undergoing a foundational re-architecture driven by compounding technological improvements.
One of the most encouraging surprises of the preceding year, according to Huang, was the rapid improvement in the fidelity and utility of AI outputs, particularly in "grounding" and "reasoning." This leap addressed one of the biggest skeptical responses to early AI models: their tendency to hallucinate and generate unreliable information. The industry has largely mitigated these problems by grounding models in search results and routing requests based on confidence levels, significantly improving the quality and accuracy of answers across language, vision, robotics, and autonomous systems.
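As a rough sketch of the pattern described above, the hypothetical Python snippet below grounds an answer in retrieved text and escalates low-confidence responses to a fallback path. The corpus, confidence scores, threshold, and function names are illustrative assumptions, not details from the podcast or any particular vendor's pipeline.

```python
# Hypothetical sketch of "grounding plus confidence routing"; not NVIDIA's or
# any vendor's actual pipeline. All names and numbers here are illustrative.
from dataclasses import dataclass, field


@dataclass
class Answer:
    text: str
    confidence: float                                # heuristic score in [0, 1]
    sources: list[str] = field(default_factory=list)


# Tiny in-memory "index" standing in for a real search or retrieval system.
CORPUS = [
    "Moore's Law describes transistor counts doubling roughly every two years.",
    "Inference cost per token has been falling as hardware and software improve.",
]


def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval; a production system would use a vector index."""
    terms = [w for w in query.lower().split() if len(w) > 3]
    return [doc for doc in CORPUS if any(t in doc.lower() for t in terms)]


def answer_with_grounding(query: str) -> Answer:
    """Stand-in for a model call that cites retrieved sources and scores itself."""
    sources = retrieve(query)
    if not sources:
        return Answer("No grounded information found.", confidence=0.2)
    return Answer(f"Based on {len(sources)} source(s): {sources[0]}",
                  confidence=0.9, sources=sources)


def route(query: str, threshold: float = 0.7) -> str:
    """Serve confident, grounded answers; escalate the rest to a fallback path."""
    ans = answer_with_grounding(query)
    if ans.confidence < threshold:
        return f"[escalated to fallback search] {query}"
    return ans.text


if __name__ == "__main__":
    print(route("How quickly are inference costs falling?"))  # grounded answer
    print(route("What is the weather on Mars today?"))        # escalated
```

The point of the sketch is simply that an answer reaches the user only when it can cite retrieved evidence with sufficient confidence; everything else is routed to a fallback, which is the mechanism the paragraph above credits for the drop in hallucinations.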
Underpinning this progress is the plummeting cost of computation, which is transforming the economics of intelligence. As Huang sees it, this deflationary trend is what fuels rapid adoption across every sector.
Huang pointed out that this rapid technological advancement is translating directly into economic viability, particularly in inference. He expressed satisfaction, and even a degree of surprise, that tokens generated by reasoning models are now highly profitable. "I’m so pleased that these tokens are now profitable," Huang stated, noting that some AI-native companies are already achieving 90% gross margins. That profitability is driven by how quickly inference token generation rates are improving, a pace Huang suggested is several exponentials beyond the historical benchmark of Moore’s Law. In his view, this accelerating cost reduction makes AI adoption structurally sound rather than speculative, undercutting the "AI bubble" narrative that often accompanies periods of intense technological investment.
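For a sense of the arithmetic behind a figure like 90% gross margin, the brief sketch below computes margin from an assumed price and serving cost per million tokens. The dollar figures are hypothetical placeholders chosen to show how falling serving costs widen margins; they are not numbers cited in the podcast.

```python
# Illustrative token-economics arithmetic. The prices and costs below are
# assumptions for demonstration, not figures from Huang or NVIDIA.

def gross_margin(price_per_m_tokens: float, cost_per_m_tokens: float) -> float:
    """Gross margin = (revenue - cost of serving) / revenue."""
    return (price_per_m_tokens - cost_per_m_tokens) / price_per_m_tokens

# If serving costs fall from $2.00 to $0.50 per million tokens while the
# price stays at $5.00, gross margin rises from 60% to 90%.
for cost in (2.00, 0.50):
    print(f"cost ${cost:.2f}/M tokens -> gross margin {gross_margin(5.00, cost):.0%}")
```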
Huang framed the current AI acceleration not just as a technological shift, but as the establishment of a new global infrastructure. He categorized data centers as "AI factories," likening them to the transformative infrastructure projects of the past, such as power grids or the internet. These factories require vast physical resources, creating three new classes of industrial plants: chip fabs (like those built by TSMC), sophisticated supercomputer centers (like NVIDIA’s Grace Blackwell systems), and the AI factories themselves. This construction boom is generating an immense, immediate demand for skilled labor across the United States and globally. Huang noted that electricians, plumbers, and network engineers are seeing their paychecks double as they are paid to travel the country to build this new digital infrastructure.
This perspective directly addresses concerns regarding AI’s impact on human employment. Huang introduced a crucial distinction between the task of a job and the purpose of a job. He cited the famous prediction that AI would eliminate radiologists. In reality, AI has automated the task of scanning images, but the purpose—diagnosing disease—has not changed. Instead, radiologists are now empowered to process more scans, request more complex studies, and serve more patients, increasing their productivity and value. Huang emphasized that if AI automates routine tasks, professionals are freed up to focus on higher-value problem-solving. "If your purpose literally is coding... maybe you’re going to get replaced by the AI, but the goal of all of our software engineers is to solve problems." This productivity boost increases overall economic activity and creates demand for new, complex human roles.
Beyond economics, Huang delved into the geopolitical landscape, particularly the debate between open-source AI models and tightly controlled, closed-source systems. He strongly advocated for open source technology as essential to American competitiveness and global innovation, observing that the rapid dissemination of foundational research, such as the DeepSeek paper, immediately benefited American startups and infrastructure companies. Huang views the closed-source, monolithic "God AI" as a distant, almost biblical concept, and considers focusing policy on regulating such extremes "unhelpful" and "deeply conflicted."

Instead, he argued, the focus should be on practical, immediate policy that encourages broad innovation: "Whatever we decide to do with policies, do not damage that innovation flywheel." He stressed that stifling open source would only benefit large, established players capable of funding massive proprietary models, effectively suffocating the startups and diverse industries that rely on accessible pre-trained models to build specialized applications. The range of AI applications, from biological research to financial services, is too broad to be dominated by a single, monolithic model, making open source contributions essential for widespread adoption and for sovereign capability across many nations.
The technological advancements of 2025 demonstrate that the AI revolution is not a fragile bubble, but a deep, infrastructural transformation. The continued reduction in the cost of intelligence, coupled with the ability of AI to augment human productivity across specialized domains, suggests a decade of profound and positive economic restructuring is already underway.

