The promise of AI agents with access to vast enterprise data often collides with the reality that they cannot understand it. Despite reaching every database and data lake, agents frequently falter on basic business questions, delivering hesitant or incorrect answers. This isn't a data volume problem; it's a gap in semantic understanding. According to the announcement, enterprises building reliable AI agents need two distinct yet interconnected types of ontologies: descriptive and structural. This dual-ontology approach is emerging as the foundational layer that lets AI agents move beyond mere information retrieval to genuine comprehension and trustworthy action.
The core issue stems from the divergence between how businesses conceptualize operations and how data is physically stored. A "qualified pipeline" means something specific within a sales methodology, defined by criteria like deal size and decision-maker engagement that rarely map neatly to single database columns. This semantic disconnect leads to ambiguous intent, inconsistent interpretations across teams, and brittle integrations that break with every schema change. Throwing more data at the problem or refining prompts offers only temporary fixes. The enduring solution is a pair of complementary translation layers that give machines both business meaning and data reality, turning raw data into actionable intelligence.
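To make the gap concrete, here is a minimal sketch of how a term like "qualified pipeline" might decompose into explicit, testable criteria. The thresholds, field names, and stage values are invented for illustration, not taken from any real sales methodology:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    """One testable condition behind a business term."""
    field: str       # logical attribute name, not a physical column
    operator: str
    value: object

# Hypothetical decomposition of "qualified pipeline". None of these
# conditions lives in a single database column; each may span systems.
QUALIFIED_PIPELINE = [
    Criterion("deal_size_usd", ">=", 50_000),
    Criterion("decision_maker_engaged", "==", True),
    Criterion("stage", "in", ["evaluation", "negotiation"]),
]
```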
Descriptive ontologies serve as the enterprise's definitive business dictionary for AI agents. They meticulously capture the meaning, policies, relationships, and causal logic that govern an organization's actual operations, independent of underlying data structures. These ontologies define critical business concepts such as customer entitlement, service level agreements, and qualified opportunities, along with roles, responsibilities, and the rules dictating how events should unfold. Owned by domain experts and product owners, descriptive ontologies evolve as business meaning shifts, providing agents with the essential context to understand user intent, enforce policies, and guide next-best actions.
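A descriptive-ontology entry could be as simple as a structured record of meaning, ownership, relationships, and policy. The sketch below assumes a plain dictionary representation; the concepts, field names, and rule names are illustrative, not a published schema:

```python
# Illustrative shape for descriptive-ontology entries: business meaning only,
# with no reference to where the data physically lives.
DESCRIPTIVE_ONTOLOGY = {
    "qualified_opportunity": {
        "definition": "An opportunity meeting the sales team's qualification bar.",
        "owner": "vp_sales",  # domain expert who owns the meaning
        "criteria": ["deal_size_usd >= 50000",
                     "decision_maker_engaged == true"],
        "relationships": {"belongs_to": "account",
                          "owned_by": "account_executive"},
        "policies": ["only_qualified_opportunities_enter_forecast"],
    },
    "premium_support_entitlement": {
        "definition": "A customer's right to premium support under an active contract.",
        "owner": "service_operations",
        "criteria": ["contract.status == 'active'",
                     "contract.tier == 'premium'"],
        "policies": ["entitlement_checked_before_escalation"],
    },
}
```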
Structural ontologies, by contrast, function as the data atlas for AI agents, mapping directly to the physical and virtual locations of information. They detail data entities, schemas, attributes, relationships, and constraints across data warehouses, lakes, and semantic layers. These ontologies are the domain of data architects and knowledge engineers, changing as platforms evolve or data models are refactored. They let agents navigate from abstract business concepts to concrete data pathways: translating natural-language queries into precise SQL or SOQL, keeping metric definitions consistent, and resolving entities across disparate sources.
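The structural counterpart maps those logical attributes to physical locations. The following sketch, with invented table and column names, shows how such a mapping could drive a naive logical-to-physical query translation:

```python
# Hypothetical structural-ontology entry: where each logical attribute of
# "opportunity" physically lives. All table and column names are invented.
STRUCTURAL_ONTOLOGY = {
    "opportunity": {
        "source": "warehouse.sales.opportunity",   # physical location
        "attributes": {
            "deal_size_usd": "amount",             # logical -> physical column
            "stage": "stage_name",
            "decision_maker_engaged": "has_dm_contact",
        },
        "joins": {"account": "opportunity.account_id = account.id"},
    }
}

def to_sql(concept: str, logical_fields: list[str]) -> str:
    """Resolve logical attribute names to physical columns for one concept."""
    entry = STRUCTURAL_ONTOLOGY[concept]
    cols = ", ".join(entry["attributes"][f] for f in logical_fields)
    return f"SELECT {cols} FROM {entry['source']}"

print(to_sql("opportunity", ["deal_size_usd", "stage"]))
# SELECT amount, stage_name FROM warehouse.sales.opportunity
```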
Bridging Meaning and Data
The true power of this dual-ontology architecture lies in its synergistic operation. When an AI agent processes a query, it first consults the descriptive ontology to grasp the business intent, translating a high-level question like "premium support entitlement" into a specific set of criteria (e.g., contract type, active status, tier level). Subsequently, the agent leverages the structural ontology to pinpoint where the necessary data for these criteria resides across various systems and how those data points relate. After retrieving the raw facts, the descriptive ontology is re-engaged to apply business rules and policies, formulating a trustworthy, contextually accurate answer. This separation allows each layer to evolve independently while maintaining a robust, auditable connection, preventing the common problem of business logic drifting from data reality.
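Here is a self-contained sketch of that three-step flow, with toy ontology contents and a fake data fetcher standing in for real ontology services and data access:

```python
# Minimal sketch of the dual-ontology query flow: descriptive -> structural
# -> raw facts -> descriptive again. All names and values are illustrative.
DESCRIPTIVE = {  # business meaning: intent -> criteria and policies
    "premium support entitlement": {
        "criteria": {"contract_type": "premium", "status": "active"},
        "policy": lambda rows: [r for r in rows if r["tier"] >= 2],
    }
}

STRUCTURAL = {  # data reality: logical fields -> physical columns and tables
    "premium support entitlement": {
        "table": "crm.contracts",
        "columns": {"contract_type": "type_cd", "status": "status_cd"},
    }
}

def answer(intent: str, fetch) -> dict:
    meaning = DESCRIPTIVE[intent]    # 1. grasp the business intent
    layout = STRUCTURAL[intent]      # 2. locate the data it requires
    where = " AND ".join(
        f"{layout['columns'][k]} = '{v}'"
        for k, v in meaning["criteria"].items())
    sql = f"SELECT * FROM {layout['table']} WHERE {where}"
    rows = fetch(sql)                # raw facts from the source
    # 3. Re-engage the descriptive layer to apply business policy,
    #    keeping the query so the answer stays auditable.
    return {"sql": sql, "result": meaning["policy"](rows)}

# Fake fetcher so the sketch runs end to end.
demo = answer("premium support entitlement",
              fetch=lambda sql: [{"tier": 3}, {"tier": 1}])
print(demo["sql"])
print(demo["result"])  # [{'tier': 3}] after the tier policy is applied
```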
Real-world applications, such as those within Salesforce's Agentforce, demonstrate the practical impact. For service agents, descriptive ontologies define refund eligibility or escalation policies, while structural ontologies map these to order histories or case tables. This enables agents to not only retrieve facts but also apply complex policies correctly. In sales, descriptive ontologies model qualification criteria and forecasting methodologies, ensuring "qualified pipeline" is interpreted consistently with the VP of sales' definition, while structural ontologies bind these concepts to opportunity objects and custom fields. This integrated approach ensures agents make autonomous decisions that are both intelligent and compliant, significantly enhancing operational efficiency and accuracy.
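A refund-eligibility policy of the kind described might reduce to a small, testable function applied to facts the structural ontology located. The 30-day window and field names below are assumptions of the sketch, not Agentforce behavior:

```python
from datetime import date, timedelta

# Invented policy: refundable if delivered within 30 days and not final sale.
REFUND_WINDOW = timedelta(days=30)

def refund_eligible(order: dict, today: date) -> bool:
    """Apply the descriptive-ontology rule to retrieved order facts."""
    return (order["status"] == "delivered"
            and today - order["delivered_on"] <= REFUND_WINDOW
            and not order["final_sale"])

order = {"status": "delivered",
         "delivered_on": date(2025, 1, 10),
         "final_sale": False}
print(refund_eligible(order, date(2025, 1, 25)))  # True: inside the window
```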
Beyond operational benefits, dual ontologies are critical for establishing trust, governance, and explainability in AI agent interactions. Every agent response becomes an auditable artifact, detailing the policies applied, the exact data sources used, and even the specific records that informed the answer. This "showing work by design" is crucial for defending answers to customers, auditors, and regulators. Independent versioning of descriptive and structural layers, coupled with explicit mappings, allows for controlled evolution and rollback capabilities. Furthermore, these ontologies enable the implementation of robust guardrails, early drift detection, and the enforcement of privacy and data residency rules, ensuring agents operate within defined ethical and legal boundaries.
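One plausible shape for such an audit artifact is sketched below; every field is an assumption about what the record could carry, not a defined standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical "showing work by design" record attached to an agent answer.
@dataclass
class AuditRecord:
    question: str
    policies_applied: list[str]    # descriptive-ontology rules and versions
    sources_queried: list[str]     # structural-ontology data locations
    record_ids: list[str]          # exact rows that informed the answer
    descriptive_version: str       # the two layers version independently
    structural_version: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

rec = AuditRecord(
    question="Is account 42 entitled to premium support?",
    policies_applied=["entitlement_policy@v3"],
    sources_queried=["crm.contracts"],
    record_ids=["contract-9187"],
    descriptive_version="2025.06",
    structural_version="2025.05",
)
```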
The current AI landscape underscores the urgency of adopting this ontological approach. Context engineering is rapidly supplanting prompt engineering, demanding rich, structured context that descriptive ontologies inherently provide. Semantic design systems are emerging to standardize how AI interprets information, with ontologies forming their bedrock. Knowledge graphs, built on RDF and graph-based principles, offer the natural layer for AI reasoning, inferring new facts and explaining decisions in ways flat tables cannot. While retrieval-augmented generation (RAG) improves factual grounding, it's the semantic understanding provided by descriptive ontologies, combined with the data navigation of structural ontologies, that truly makes agents reliable and trustworthy. Enterprises that master this dual-ontology strategy will be best positioned to deploy AI agents that are not just "unstoppable," but also profoundly dependable.
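As a toy illustration of the graph-reasoning point, the snippet below chains two stored triples into an inferred fact that no flat table holds directly, and the chain itself doubles as the explanation. Entities, relations, and the inference rule are all invented:

```python
# Toy knowledge-graph inference over subject-predicate-object triples.
triples = {
    ("acme_corp", "has_contract", "contract_9187"),
    ("contract_9187", "grants", "premium_support"),
}

# Assumed rule for the sketch: has_contract + grants => entitled_to.
inferred = {
    (org, "entitled_to", benefit)
    for org, p1, contract in triples if p1 == "has_contract"
    for c2, p2, benefit in triples if p2 == "grants" and c2 == contract
}
print(inferred)
# {('acme_corp', 'entitled_to', 'premium_support')}
```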