Bret Taylor, co-founder and CEO of agentic AI startup Sierra and Chairman of OpenAI's Board of Directors, recently articulated a transformative vision for artificial intelligence on CNBC's Squawk Box. Speaking with Andrew Ross Sorkin, Taylor unveiled Sierra's new Agent Data Platform, framing it within a broader discussion about the economic implications and future trajectory of AI, particularly its role in customer interaction and business operations. His central thesis was stark: "Every company's AI agent is going to be more important than their website in a handful of years."
Taylor positioned Sierra at the vanguard of this shift, moving beyond mere chatbots to create truly agentic AI. This new generation of AI, powered by Sierra's platform, is designed to possess memory, allowing it to "personalize their conversations not just over days, but over months and years." This capability transcends the transactional nature of traditional customer service, enabling AI agents to act as sophisticated "concierges for your brand," adept at handling both service inquiries and sales opportunities with unprecedented depth and continuity.
The distinction Taylor drew between a basic chatbot and an agentic AI is profound. While chatbots often operate on rigid, rules-based logic, an AI agent, deriving its name from the concept of "agency," is empowered to be creative and make autonomous decisions within defined goals and guardrails. This allows for a level of empathy and problem-solving previously thought exclusive to human interaction. Taylor cited the example of a canceled flight, a frustrating experience often exacerbated by impersonal, script-driven customer service. An agentic AI, he suggested, could leverage its perfect access to information, multilingual capabilities, and instant availability to offer "unreasonable hospitality," such as a free coffee or loyalty points, anticipating needs and proactively mitigating distress.
Indeed, the economic benefits of this shift are staggering. A typical human-led customer service call today costs a company approximately $25. This high cost limits personalized interaction, especially for businesses with millions of customers or those offering lower-priced products where the average revenue per user (ARPU) is less than the cost of a single support call. Agentic AI democratizes personalized engagement, making it economically viable to have tailored conversations with every customer, regardless of scale. This isn't just about cost reduction; it's about transforming customer relationships from reactive problem-solving to proactive, empathetic brand ambassadorship.
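The arithmetic behind that claim can be sketched in a few lines. The $25 human-call figure comes from the article; the AI per-conversation cost and the ARPU value below are hypothetical placeholders chosen only to illustrate the comparison.

```python
# Unit-economics sketch: personalized support only pencils out when a
# single interaction costs less than the revenue that customer generates.
# $25 human-call cost is cited in the article; other figures are hypothetical.

HUMAN_COST_PER_CALL = 25.00       # approximate cost cited in the article
AI_COST_PER_CONVERSATION = 0.50   # hypothetical per-conversation AI cost
ARPU = 12.00                      # hypothetical average revenue per user


def support_is_viable(cost_per_interaction: float, arpu: float) -> bool:
    """True if a personalized interaction costs less than the user's revenue."""
    return cost_per_interaction < arpu


print(support_is_viable(HUMAN_COST_PER_CALL, ARPU))       # False: $25 call > $12 ARPU
print(support_is_viable(AI_COST_PER_CONVERSATION, ARPU))  # True
```

With these illustrative numbers, a human call is uneconomical for a low-ARPU product while an AI conversation is not, which is the threshold effect Taylor describes.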
Taylor highlighted how Sierra's AI agents are already demonstrating superior performance. The Weight Watchers agent, powered by Sierra, has achieved "higher empathy scores than the coaches in their call center." This underscores the potential for AI to not just replicate human capabilities but to exceed them in specific, measurable ways, particularly in scenarios demanding patience, consistent information recall, and rapid processing of complex data.
However, this ambitious vision is not without its challenges, particularly concerning the underlying infrastructure. The immense computational power required to train and run these sophisticated AI models is becoming the "limiting factor" for the growth of leading AI companies like OpenAI. Taylor explained that large data centers are crucial for two phases of the AI pipeline: training the models and inference, the process of running the trained models to make predictions or generate responses. For modern models, inference is particularly demanding, requiring substantial GPU resources for complex "chain-of-thought reasoning," in which the model essentially thinks aloud, working through intermediate steps to arrive at a solution.
The rapid pace of technological obsolescence in the compute space adds another layer of complexity. Unlike traditional infrastructure investments, such as laying down train tracks or fiber optic cables, the lifespan of AI-specific hardware, like NVIDIA chips, is relatively short. Investors face the reality that these expensive components may need replacement every few years to keep pace with innovation, posing a significant capital expenditure challenge. This creates a unique economic dynamic, where the upfront investment is massive, and the depreciation cycle is accelerated.
Despite these hurdles, Taylor remains optimistic. He believes the market will adapt, and the focus will shift towards building the largest possible compute infrastructure to meet the burgeoning demand for AI. He also emphasized that the future isn't about a single, monolithic AI model. Instead, engineers and applied AI companies will strategically mix and match various models across a spectrum of cost, quality, latency, and speed to suit specific tasks. A low-latency phone conversation, for instance, might demand a fast, relatively inexpensive model, while complex actuarial sciences would prioritize accuracy, regardless of computational cost. This nuanced approach to model deployment will be critical in optimizing both performance and economics as AI continues to permeate every sector of the economy.
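That mix-and-match strategy can be sketched as a simple routing function: pick the cheapest model that satisfies a task's latency and quality floor. All model names, latency figures, quality scores, and prices below are hypothetical placeholders, not any vendor's actual catalog.

```python
# Hedged sketch of per-task model routing across cost, quality, and latency.
# Every model name and number here is a hypothetical placeholder.
from dataclasses import dataclass


@dataclass
class Model:
    name: str
    latency_ms: int           # typical time to respond
    quality: float            # rough 0-1 accuracy score
    cost_per_1k_tokens: float


CATALOG = [
    Model("fast-small", latency_ms=150, quality=0.75, cost_per_1k_tokens=0.0002),
    Model("balanced", latency_ms=800, quality=0.88, cost_per_1k_tokens=0.002),
    Model("frontier-reasoner", latency_ms=5000, quality=0.97, cost_per_1k_tokens=0.03),
]


def route(max_latency_ms: int, min_quality: float) -> Model:
    """Return the cheapest model meeting the task's latency and quality floor."""
    candidates = [m for m in CATALOG
                  if m.latency_ms <= max_latency_ms and m.quality >= min_quality]
    if not candidates:
        raise ValueError("no model satisfies the constraints")
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)


# A live phone call needs speed; actuarial work needs accuracy at any cost.
print(route(max_latency_ms=300, min_quality=0.7).name)      # fast-small
print(route(max_latency_ms=60_000, min_quality=0.95).name)  # frontier-reasoner
```

The design choice is the one Taylor outlines: constraints come from the task, and cost is the tiebreaker only among models that already meet them.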

