“There is no way to scale a support organization to the level of mass that we’re going to have on Sunday, overnight. You cannot hire trained people, you know, these are travel agents, right? You cannot hire enough people overnight… and therefore AI is super, super important.” This statement, delivered by Navan CEO Ariel Cohen to CNBC’s Jon Fortt, crystallizes the immediate, existential pressure facing modern, AI-reliant businesses: when sudden, massive disruption hits, traditional human scaling fails, forcing AI systems to prove their worth under extreme duress.
The conversation, aired on CNBC’s Squawk on the Street, focused on the dual challenges presented by a massive winter storm sweeping across the US. On one hand, the storm served as a brutal real-world stress test for AI-driven customer service platforms, particularly in the notoriously chaotic business travel sector. On the other, it highlighted a growing and often overlooked infrastructural bottleneck of the AI boom itself: power consumption and the stability of the electrical grid. Fortt spoke with Cohen about the travel industry’s reliance on AI in crisis moments, then broadened the discussion to the macro environment, where data centers, the engine rooms of generative AI, have become such enormous power consumers that they now sit at the center of national energy policy debates.
For companies like Navan, which offers business travel and expense management software, weather-induced chaos (flights canceled, hotels closed, roads blocked) triggers an immediate surge in customer service demand that no human call center could absorb. Cohen explained that Navan’s AI chatbot, Ava, is designed precisely for this kind of acute scaling challenge: because hundreds of human travel agents cannot be onboarded and trained in the middle of a crisis, the AI must carry the majority of the load. Ava already handles a significant share of routine interactions; as Cohen put it, "Ava knows to support, we reported in Q3, 54% of the interactions with us." The impending storm, however, represents a far more complex challenge, demanding not just routine booking changes but proactive, multi-variable re-routing and crisis management: a true proving ground for the efficacy and robustness of the underlying AI architecture.
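Navan has not published Ava’s internals, so the following is only a toy sketch of the load-shedding logic Cohen describes: the AI absorbs the high-volume routine traffic so a fixed pool of human agents can concentrate on complex crisis cases. Every name here (`Interaction`, `route`, the intent labels) is hypothetical, invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical triage sketch; this does not reflect Navan's actual
# architecture. It only illustrates the scaling logic from the
# interview: automate the routine majority, escalate the rest.

ROUTINE_INTENTS = {"rebook_flight", "cancel_hotel", "receipt_request"}

@dataclass
class Interaction:
    intent: str             # classified upstream, e.g. by an LLM
    legs_affected: int      # itinerary segments hit by the disruption
    traveler_stranded: bool

def route(interaction: Interaction) -> str:
    """Decide whether the AI agent or a human handles this interaction."""
    # Routine, single-variable changes make up the bulk of storm traffic
    # (Cohen cited 54% AI-handled in Q3) and are the safest to automate.
    if interaction.intent in ROUTINE_INTENTS and interaction.legs_affected <= 1:
        return "ai_agent"
    # Multi-leg re-routing or a stranded traveler requires judgment,
    # so it goes to the scarce human queue.
    if interaction.legs_affected > 1 or interaction.traveler_stranded:
        return "human_agent"
    return "ai_agent"

print(route(Interaction("rebook_flight", 1, False)))  # -> ai_agent
print(route(Interaction("rebook_flight", 3, True)))   # -> human_agent
```

A production system would of course replace these hard-coded rules with learned classifiers and confidence thresholds, but the economic shape is the same: the fixed human pool only sees the cases automation cannot safely resolve.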
This operational stress test connects to a deeper, more systemic issue within the AI ecosystem: power infrastructure. Fortt noted the striking development that the administration is exploring ways to direct grid operators to tap the backup power sitting in data centers and feed it back into the grid, an acknowledgment of the sheer scale of energy consumption required to run modern AI models and cloud infrastructure. The proposal underscores a stark reality: the same infrastructure powering the AI revolution is now so large that it is being treated as a critical, if volatile, component of national energy stability.
The data center’s massive power draw is not merely an environmental concern; it is rapidly becoming the ultimate constraint on AI growth and profitability. Fortt aptly described power as "one of these major gating factors in these AI dreams that are propping up so many of these valuations right now in technology." This constraint forces founders and developers to make difficult choices regarding model selection and deployment, moving beyond the simple pursuit of the largest, most cutting-edge Large Language Models (LLMs). The economic imperative is driving a rapid re-evaluation of model size versus performance.
The resulting insight for tech leaders is clear: efficiency is the new frontier of competitive advantage. The cost and power demands of running massive models like those offered by OpenAI or Anthropic are unsustainable for many enterprise applications, and companies are now asking whether they truly need the most expensive, most powerful LLM for every task. As Fortt posited, "Do I need these frontier models? Do I need OpenAI? Do I need Anthropic? Or can I go with a more basic model and enhance it myself and actually save over time, run it on less expensive chips perhaps than the cutting-edge offerings from the likes of an Nvidia." The answer is driving a major push toward model slimming, fine-tuning smaller open-source models, and specialized silicon optimized for inference, moving AI from a capital-intensive, power-hungry research tool toward a scalable, economically viable enterprise solution; a back-of-envelope comparison below makes the calculus concrete. The winter storm and its resulting chaos, then, are not just disrupting travel; they are accelerating the ruthless economic calculus that will decide which AI applications survive and thrive in a world of finite resources.
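The trade-off Fortt sketches can be made concrete with simple break-even arithmetic. Every number below is an illustrative placeholder, not a quote from any vendor; what matters is the structure of the calculation, where a fine-tuned smaller model trades a one-time engineering cost for a lower per-token rate.

```python
# Illustrative cost model only: all figures are made-up placeholders,
# not real vendor prices. A frontier API has no upfront cost but a high
# per-token rate; a self-hosted smaller model adds a fixed fine-tuning
# and engineering cost but a much lower marginal rate.

MONTHLY_TOKENS = 2_000_000_000          # assumed enterprise inference volume

FRONTIER_PRICE_PER_M_TOKENS = 10.00     # hypothetical frontier-model API rate
SMALL_PRICE_PER_M_TOKENS = 0.50         # hypothetical self-hosted rate
FINE_TUNE_ONE_TIME_COST = 150_000.00    # hypothetical training + engineering spend

frontier_monthly = MONTHLY_TOKENS / 1e6 * FRONTIER_PRICE_PER_M_TOKENS
small_monthly = MONTHLY_TOKENS / 1e6 * SMALL_PRICE_PER_M_TOKENS

# Months until the cheaper marginal rate pays back the upfront investment.
breakeven_months = FINE_TUNE_ONE_TIME_COST / (frontier_monthly - small_monthly)

print(f"frontier: ${frontier_monthly:,.0f}/mo, self-hosted: ${small_monthly:,.0f}/mo")
print(f"break-even after {breakeven_months:.1f} months")
```

With these placeholder figures the self-hosted route pays for itself in roughly eight months; shift the assumed volume or rates and the answer changes, which is exactly why power and chip costs have become gating factors in model selection.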