The promise of AI data agents, capable of automating complex tasks and answering intricate business questions, has hit a significant roadblock: context. Despite advancements in Large Language Models (LLMs) and the maturation of the modern data stack, organizations are finding their agents largely ineffective. This isn't a failure of the AI models themselves, but a fundamental gap in how they understand and interact with enterprise data, as detailed in a recent analysis from Andreessen Horowitz.
The journey began with the modern data stack, which aimed to consolidate and clean disparate data sources. The assumption was that once data was organized, business intelligence would be straightforward. Then came the AI agent frenzy of 2024-2025, fueled by rapidly improving LLM capabilities. Companies rushed to build 'chat with your data' tools and support agents, expecting major efficiency gains.
However, reality set in. MIT's 'State of AI in Business 2025' report highlighted that most AI deployments fail due to brittle workflows and a lack of contextual learning. Data agents, in particular, struggled to handle ambiguous queries, decipher business definitions, and reason across messy, disparate data. The core issue wasn't just translating natural language to SQL; it was understanding the nuances of business terms and data sources.
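The gap between "translating to SQL" and "understanding business terms" can be sketched in a few lines. In this hypothetical illustration (the table, column names, and definitions below are invented, not from the article), the same question — "how many active customers do we have?" — produces different SQL depending on which department's definition of "active" applies, which is exactly the context a bare text-to-SQL agent lacks:

```python
# Illustrative sketch: why text-to-SQL alone is not enough. The same business
# question maps to different SQL depending on how the organization defines
# its terms. All names below are hypothetical.

# Two plausible, conflicting definitions of "active customer":
DEFINITIONS = {
    "finance": "last_invoice_date >= CURRENT_DATE - INTERVAL '90 days'",
    "product": "login_count_30d > 0",
}

def build_query(term_owner: str) -> str:
    """Resolve the business term 'active customer' against an explicit
    definition before generating SQL -- the step a bare LLM skips."""
    predicate = DEFINITIONS[term_owner]
    return f"SELECT COUNT(*) FROM customers WHERE {predicate}"

# Without shared context, an agent picks a definition arbitrarily,
# and the two answers can differ wildly:
print(build_query("finance"))
print(build_query("product"))
```

A semantic layer, in effect, plays the role of the `DEFINITIONS` lookup: it pins each business term to one agreed-upon predicate so the agent's SQL is grounded rather than guessed.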
