"Large language models have well-known issues and constraints. And so if you want to solve complex problems, you're going to want to adopt what's called multi-method agentic AI, which combines large language models with other kinds of proven automation technologies so that you can build more adaptable, more transparent systems that are much more likely to survive regulatory scrutiny." This assertion by James Taylor, Executive Partner at Blue Polaris, set the stage for a compelling presentation on the strategic evolution of AI systems. Taylor, speaking in a concise, whiteboard-driven format, laid out a vision for enterprise AI that transcends the current hype around generative models, advocating for a holistic approach to automation.
Taylor’s core argument centers on the inherent limitations of Large Language Models (LLMs) when deployed as standalone solutions for intricate business processes. While LLMs excel at natural language understanding, generation, and basic reasoning, they often fall short in areas demanding absolute consistency, transparency, state management, and strict adherence to rules or regulations. To address these gaps, Multi-Method Agentic AI proposes integrating LLMs with other established automation technologies, creating a robust, intelligent ecosystem capable of tackling complex, real-world challenges. This integrated framework promises not only enhanced capability but also crucial attributes like auditability and reliability, essential for high-stakes applications.
Consider the intricate process of a bank deciding whether to lend money, a scenario Taylor meticulously diagrams. The initial customer interaction, which might involve a natural language query about loan policies, is perfectly suited for an LLM-powered chat agent. This agent excels at understanding nuanced human language, intent, and even adapting to different linguistic styles. However, its role is primarily interpretative, translating unstructured input into actionable requests. It's a powerful tool, but as Taylor notes, "Large language models are a great tool... but they need to be not the only tool in your toolbox."
The chat agent's output is then passed to an orchestration agent, also an LLM-based component. This agent's job is to infer the intent of the customer's query (e.g., "loan policy question" vs. "loan application request") and route it to the appropriate specialized agent. For policy-related questions, the orchestration agent directs the query to a "Loan Policy" agent. This agent leverages Retrieval Augmented Generation (RAG), combining an LLM with a vast repository of internal bank documents—product descriptions, risk policies, marketing materials—to generate accurate, context-aware answers. This RAG-based approach ensures that the LLM's responses are grounded in authoritative data, mitigating hallucination and providing verifiable references.
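The orchestration agent's routing step can be sketched as a classify-then-dispatch loop. This is a minimal illustration, not Taylor's implementation: in a real system `classify_intent` would be an LLM call, and the handler names are invented for the example; here a keyword heuristic stands in so the control flow is visible.

```python
# Sketch of an orchestration agent: classify the customer's intent, then
# dispatch to a specialized agent. The keyword heuristic is a stand-in for
# an LLM intent classifier; all names here are illustrative assumptions.

def classify_intent(message: str) -> str:
    """Stand-in for an LLM intent classifier."""
    text = message.lower()
    if "apply" in text or "application" in text:
        return "loan_application"
    return "loan_policy_question"

def handle_policy(message: str) -> str:
    # Would invoke the RAG-backed Loan Policy agent.
    return "policy_agent"

def handle_application(message: str) -> str:
    # Would invoke the workflow-backed Loan Application agent.
    return "application_agent"

HANDLERS = {
    "loan_policy_question": handle_policy,
    "loan_application": handle_application,
}

def route(message: str) -> str:
    return HANDLERS[classify_intent(message)](message)
```

The point of the pattern is separation of concerns: the orchestrator decides *where* a request goes, and each specialized agent owns *how* it is handled.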
Should the customer express a desire to *apply* for a loan, the orchestration agent routes the request to a "Loan Application" agent. This is where the limitations of pure LLMs become apparent. A loan application is a multi-step process requiring the system to remember the customer's progress, manage data collection over time, and ensure all necessary steps are completed. LLMs, by their nature, are stateless; they excel at individual interactions but struggle with long-running processes that demand memory and sequential logic. Therefore, the Loan Application agent is built upon workflow technology, explicitly designed to manage state, track progress, and guide the customer through the application journey. Workflow platforms, often utilizing visual models like BPMN (Business Process Model and Notation), provide a structured, transparent, and auditable framework for complex processes, allowing for interruptions and resumptions without loss of context.
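The state management that LLMs lack can be made concrete with a small sketch, assuming a fixed step sequence for the application. A BPMN engine does far more (parallel paths, timers, human tasks); this only shows the core idea of explicit, persistable progress. The step names and field layout are invented for illustration.

```python
# Hedged sketch of a stateful loan-application workflow: an explicit step
# sequence with recorded progress, so the process can pause and resume
# without losing context. Step names are assumptions, not real bank process.

from dataclasses import dataclass, field

STEPS = ["collect_identity", "collect_financials",
         "eligibility_check", "origination_decision"]

@dataclass
class LoanApplication:
    customer_id: str
    data: dict = field(default_factory=dict)
    step_index: int = 0  # persisting this is what lets the workflow resume

    @property
    def current_step(self) -> str:
        return STEPS[self.step_index] if self.step_index < len(STEPS) else "complete"

    def submit(self, step: str, payload: dict) -> str:
        """Accept data for the current step and advance; reject out-of-order steps."""
        if step != self.current_step:
            raise ValueError(f"expected step {self.current_step!r}, got {step!r}")
        self.data[step] = payload
        self.step_index += 1
        return self.current_step
```

Because the state lives in a plain record rather than a conversation history, a customer can abandon the session mid-application and pick up days later exactly where they left off.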
Within the loan application workflow, a critical step is determining customer eligibility and making the final origination decision. For these tasks, Multi-Method Agentic AI advocates for a "Decision Agent," powered by a business rules management system or a dedicated decision platform. As Taylor highlights, "Decision agents... are also generally not a good candidate for large language models... you want to always make decisions consistently... you've often got to be transparent about this, you've got to be able to explain it to regulators, to auditors, to people who run the business. And large language models are not good at either of these things." Decision platforms ensure consistent application of logic, provide transparent audit trails, and can be easily updated to reflect changing regulations, offering a level of reliability and explainability that pure LLMs cannot match. This decision agent will also interact with data agents that retrieve real-time customer data, such as credit bureau reports, from internal systems or external sources, integrating diverse information streams into the decision-making process.
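The consistency and explainability Taylor asks of a decision agent can be sketched as named, deterministic rules plus an audit log. The thresholds and rule names below are invented for the example, not real lending policy, and a production rules engine would externalize them for business users to maintain.

```python
# Illustrative rules-based decision agent: each rule is a named predicate,
# every evaluation is logged, and the same input always yields the same
# decision. Thresholds are made-up sketch values, not real bank policy.

def decide_loan(applicant: dict) -> tuple[str, list[str]]:
    """Return (decision, audit_log) for an applicant record."""
    rules = [
        ("credit_score >= 650", applicant["credit_score"] >= 650),
        ("debt_to_income <= 0.40", applicant["debt_to_income"] <= 0.40),
        ("amount <= 5 * annual_income",
         applicant["amount"] <= 5 * applicant["annual_income"]),
    ]
    log = [f"{'PASS' if ok else 'FAIL'}: {name}" for name, ok in rules]
    decision = "approve" if all(ok for _, ok in rules) else "refer"
    return decision, log
```

The audit log is the key design choice: every decision carries the exact rules that produced it, which is what a regulator or auditor needs and what a standalone LLM cannot reliably provide.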
Further enhancing the system's capabilities are specialized agents like the "Ingestion Agent" and the "Explainer Agent." The Ingestion Agent, often LLM-powered, tackles unstructured input, such as a brochure for the boat a customer wants to finance. It uses OCR and natural language processing to extract key details (boat model, price, age) and convert them into structured data for the decision-making process, streamlining data entry and reducing manual effort. Should a human customer service representative need to intervene due to complex issues or inconsistencies, a "Companion Agent" (an LLM-based assistant) can provide quick answers and context. Crucially, an "Explainer Agent" can translate the internal decision log—a detailed record of *why* a particular decision was made by the decision platform—into natural language for the human agent, empowering them to clearly communicate the rationale to the customer or regulators.
This layered architecture, where LLMs handle conversational interfaces and knowledge retrieval, while workflow and decision platforms manage state, process, and rule-based logic, represents a sophisticated and practical approach to enterprise AI. It leverages the strengths of each technology, mitigating individual weaknesses and creating systems that are not only intelligent but also trustworthy, compliant, and adaptable to the dynamic demands of modern business.

