
Why AI Agents Need More Than Just LLMs

March 2, 2026 · 11 min read · 2,211 words
Agentic AI · Multi-Method AI · Business Rules · IBM · Automation

Key insights

  • Multi-method agentic AI combines LLMs with workflow platforms, business rules, and ML for more robust automation
  • 3 of the 9 agents in a real-world loan system deliberately avoid LLMs — using workflow, rules, or data technologies instead
  • Decision agents built on business rules provide the consistency, transparency, and audit trails that regulators require
Source: YouTube
Published: October 28, 2025
Host: IBM Technology
Guest: James Taylor (Executive Partner & Founder, Blue Polaris)

This article is a summary of the video "How AI Agents and Decision Agents Combine Rules & ML in Automation."



In Brief

James Taylor, founder of Blue Polaris (an IBM Gold Business Partner), walks through a bank loan example that uses nine different AI agents. Six of them use a Large Language Model (LLM), but three deliberately avoid LLMs and rely on workflow platforms, business rules engines, and data technologies instead. Taylor's core argument: real-world agentic AI systems need this "multi-method" approach to handle state, stay consistent, and survive regulatory scrutiny.


What is multi-method agentic AI?

When most people hear "AI agent," they picture a chatbot powered by a Large Language Model (think ChatGPT, Claude, or Gemini): a system trained on massive amounts of text that can generate human-like responses. Taylor argues this picture is incomplete. LLMs are powerful, but they are one tool in a toolbox that needs several others.

Multi-method agentic AI means building systems where different agents use different technologies depending on what they need to do. Some agents use LLMs for understanding language. Others use workflow platforms for managing processes. Others use business rules engines for making consistent, auditable decisions.

Analogy: Think of it like a hospital. A doctor diagnoses, a surgeon operates, a pharmacist dispenses medication, and an administrator handles billing. Each specialist uses different tools and training. You wouldn't want the surgeon doing the billing, even if they're brilliant. Multi-method agentic AI works the same way: each agent specializes in what it does best. Unlike a hospital where each specialist is a person, these agents are software components that can run simultaneously and pass work to each other instantly.

Why LLMs alone aren't enough

Taylor identifies three core limitations that push organizations toward a multi-method approach:

  • LLMs can't manage state. A loan application has dozens of steps, might span multiple days, and needs to remember exactly where a customer left off. LLMs don't naturally track this. Workflow platforms do (6:58)
  • LLMs make inconsistent decisions. Ask the same question twice, get two different answers. For regulated industries like banking, every customer must be evaluated using the same logic (8:50)
  • LLMs are black boxes for regulators. A business rules management system (BRMS, software that stores and executes business logic as explicit, human-readable rules) records exactly why it reached a decision. An LLM cannot (9:09)
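The consistency point can be made concrete with a toy sketch: a rules check is a pure function, so identical inputs always produce identical outputs. The rule names and thresholds below are hypothetical, not the bank's actual policy.

```python
# A minimal illustration of deterministic rule evaluation.
# Thresholds are invented for the example.

def eligibility_rules(income: int, loan_amount: int, credit_score: int) -> str:
    """Pure function: the same applicant always gets the same answer."""
    if credit_score < 600:
        return "decline"
    if loan_amount > income * 5:
        return "refer"  # the "maybe" path: a human reviews it
    return "approve"

# Evaluating the same applicant twice gives the same decision every time.
first = eligibility_rules(income=60_000, loan_amount=40_000, credit_score=720)
second = eligibility_rules(income=60_000, loan_amount=40_000, credit_score=720)
assert first == second == "approve"
```

An LLM asked the same eligibility question twice can phrase, and even decide, differently; this function cannot.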
Analogy: Imagine a restaurant kitchen. An LLM is like a talented chef who improvises every dish. The food is usually great, but it tastes slightly different every time, and they can't explain exactly what they did. A business rules engine is like a recipe book: every dish comes out the same way, and you can show the recipe to a health inspector. For a bank deciding whether to approve your loan, you want the recipe book. Unlike a real kitchen where one person could theoretically do both jobs, in software these are fundamentally different technologies with different strengths.


The bank loan example: 9 agents in action

Taylor builds his case by walking through a realistic scenario: a customer wants to borrow money to buy a boat (1:06). What seems like a simple request triggers a chain of nine agents, each handling a specific piece of the process.

LLM-powered agents

Six of the nine agents use Large Language Models because they deal with unstructured language, document interpretation, or natural communication.

1. Chat agent — Understanding the customer

The customer types a message. It might be misspelled, long-winded, or vague. The chat agent's only job is to figure out what the customer wants, whether that's asking a question or requesting an action. Taylor notes that LLMs excel here because they handle messy human language naturally (1:34).

2. Orchestration agent — Routing the request

This agent looks up a registry of all available agents and decides which one should handle the request. It connects the chat agent's interpreted intent to the right specialist (2:59). Think of it as a receptionist who knows which department handles which type of question.
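The registry lookup can be sketched as a simple mapping from interpreted intent to agent name. All intent and agent names below are illustrative, not from the video.

```python
# Hypothetical sketch of registry-based routing.

AGENT_REGISTRY = {
    "ask_policy_question": "loan_policy_agent",
    "apply_for_loan": "loan_application_agent",
    "upload_document": "document_ingestion_agent",
}

def route(intent: str) -> str:
    """The orchestration agent only matches intent to agent; it decides nothing."""
    agent = AGENT_REGISTRY.get(intent)
    if agent is None:
        raise ValueError(f"No registered agent handles intent {intent!r}")
    return agent
```

Keeping the router this thin is deliberate: the hard language work already happened in the chat agent, and the specialist agents do the actual deciding.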

3. Loan policy agent — Answering questions with RAG

When the customer asks "can I borrow money for a boat?", this agent uses retrieval-augmented generation (RAG, a technique where the AI looks up relevant documents before answering) to search the bank's policy documents. It pulls from product descriptions, risk policies, and marketing materials stored in a vector database (a specialized database that stores text as mathematical representations for fast similarity searching) (3:45). The response explains the bank's lending policy in natural language with references to the source documents.
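The retrieval step can be sketched as follows. A real system embeds documents in a vector database and ranks by vector similarity; here keyword overlap stands in for that ranking, and the document store and its contents are invented for the example.

```python
# Toy sketch of RAG retrieval: rank policy documents by relevance to the
# question, then ground the LLM's answer in the retrieved text.

POLICY_DOCS = {
    "boat-lending": "A boat is eligible collateral for a secured personal loan.",
    "risk-policy": "Loans above five times annual income require a manual review.",
    "marketing": "Ask about our summer boating season rates.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the ids of the k most relevant documents (keyword overlap
    stands in for vector similarity)."""
    words = set(question.lower().split())
    scored = sorted(
        POLICY_DOCS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt the LLM would answer from."""
    context = "\n".join(POLICY_DOCS[d] for d in retrieve(question))
    return f"Answer using only this policy text:\n{context}\n\nQuestion: {question}"
```

The key property is that the LLM's answer is constrained to retrieved source text, which is what lets the agent cite its references.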

4. Document ingestion agent — Extracting data from messy documents

The customer has a boat brochure with a photo, specifications, and a handwritten price scrawled on it. This agent extracts structured data from that unstructured document. Taylor emphasizes that LLMs handle blurry prints, handwritten notes, and stapled business cards remarkably well (12:32). It matches extracted fields to what the loan application needs.

5. Companion agent — Assisting the call center rep

When the loan decision comes back as a "maybe" and the customer needs to speak with a human representative, the companion agent helps that rep. It provides quick access to all relevant information: bank records, credit bureau data, the application, and the boat brochure data (15:01). Taylor describes this as a classic LLM use case because it needs to synthesize diverse information sources and answer ad-hoc questions.

6. Explainer agent — Translating decisions to plain language

Decision agents produce internal logs written in bank terminology. The explainer agent takes those logs and converts them into natural language that a call center representative can relay to the customer (15:54). Instead of citing "policy 4.2.1 breach," it tells the rep something like "the income on the tax return doesn't match what the credit report shows."
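For a fixed set of decision codes the idea can be shown with a lookup table; the LLM earns its keep when the log contains codes and combinations nobody enumerated in advance. The codes and wording below are illustrative.

```python
# Illustrative mapping of internal decision codes to plain language.
# In the real system an LLM does this translation.

PLAIN_LANGUAGE = {
    "POLICY_4_2_1_BREACH": "The income on the tax return doesn't match what the credit report shows.",
    "LTV_EXCEEDED": "The loan amount is too large relative to the boat's value.",
}

def explain(decision_log: list[str]) -> list[str]:
    """Translate each logged code into something a rep can say aloud."""
    return [PLAIN_LANGUAGE.get(code, f"Unrecognized code: {code}") for code in decision_log]
```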

Non-LLM agents

These three agents deliberately avoid LLMs because the tasks require consistency, state management, or data retrieval that LLMs can't reliably provide.

1. Loan application agent — Managing the process (Workflow)

This agent runs on a workflow platform using BPMN (Business Process Model and Notation, a visual standard for defining business processes). It creates an instance for each customer, tracks how far they've progressed, and remembers state across sessions (6:58). If the customer leaves to pick up their kids and returns hours later, the workflow knows exactly where they left off (14:16). Taylor calls this the first agent in the system that doesn't need an LLM.
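The state-tracking behavior can be sketched as a per-customer process instance whose current step survives across sessions. Step names and the in-memory instance are hypothetical stand-ins for a BPMN platform, which would persist this state in a workflow database.

```python
# Minimal sketch of workflow-style state management.

STEPS = ["collect_details", "ingest_documents", "check_eligibility", "sign"]

class LoanApplication:
    """One instance per customer; the current step is remembered."""

    def __init__(self, customer_id: str):
        self.customer_id = customer_id
        self.step_index = 0  # persisted in a real workflow database

    @property
    def current_step(self) -> str:
        return STEPS[self.step_index]

    def complete_step(self) -> None:
        self.step_index += 1

# The customer finishes two steps, leaves to pick up their kids...
app = LoanApplication("cust-42")
app.complete_step()  # collect_details done
app.complete_step()  # ingest_documents done
# ...and hours later the instance still knows where they stopped.
assert app.current_step == "check_eligibility"
```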

2. Eligibility agent — Consistent decisions (Business Rules)

Built on a BRMS rather than an LLM, this agent evaluates whether a customer qualifies for a loan. Taylor explains that you must apply the same logic to each customer. That doesn't necessarily mean the same answer, but the same rules (8:50). Decision platforms record exactly how they reached their conclusion, which regulators and auditors can inspect.
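A hedged sketch of what "same rules, recorded reasons" means in code: each rule that fires is appended to an audit trail a regulator could inspect. Rule names and thresholds are invented, and a real BRMS manages rules declaratively rather than as hand-written conditionals.

```python
# Toy rules evaluation that records why it decided what it decided.

def check_eligibility(applicant: dict) -> tuple[str, list[str]]:
    trail: list[str] = []  # the audit log regulators can inspect
    decision = "approve"
    if applicant["loan_amount"] > 5 * applicant["income"]:
        trail.append("rule income-multiple fired: amount over 5x income")
        decision = "refer"  # the "maybe" outcome
    if applicant["credit_score"] < 620:
        trail.append("rule credit-floor fired: score below 620")
        decision = "decline"  # a hard decline outranks a referral
    if not trail:
        trail.append("no rules fired: default approve")
    return decision, trail
```

The same logic runs for every customer; only the facts differ, and the trail shows exactly which rules fired for each one.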

3. Data agent — Retrieving structured data (MCP)

This agent pulls customer records and external data (like credit bureau reports) through MCP (Model Context Protocol, a standard for connecting AI agents to external data sources and tools) (9:35). It handles the plumbing: getting the right data from the right systems at the right time.


How the pieces connect

Taylor's example reveals a clear pattern: LLMs handle the messy human edges, while specialized technologies handle the structured core. The customer-facing agents (chat, policy, ingestion, companion, explainer) use LLMs because they deal with natural language and unstructured documents. The back-end process agents (workflow, eligibility, data) use purpose-built platforms because they need state management, consistency, and auditability.

The agents communicate through a registry where each agent's capabilities are defined. The orchestration agent uses this registry to route requests. Taylor mentions MCP as the protocol that exposes non-LLM technologies (workflow platforms, data systems) as agents that the orchestration layer can discover and call (7:54).

The "maybe" path

One of the most revealing parts of Taylor's example is what happens when the loan decision isn't a simple yes or no. The eligibility agent returns "maybe," which triggers a human-in-the-loop workflow (13:44):

  1. The loan application agent (workflow) pauses and waits for resolution
  2. The customer leaves and comes back later. The stateful workflow remembers where they stopped
  3. A call center rep uses the companion agent (LLM) to understand all the collected information
  4. The explainer agent (LLM) translates the decision log into plain language
  5. After the rep resolves the issue, the loan application agent resubmits to the decision agent
  6. This time the decision comes back "yes" (17:20)

This cycle would be nearly impossible with LLMs alone. The state persistence, the deterministic re-evaluation, and the audit trail all require non-LLM technologies.
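The pause/resume loop can be sketched with stand-in functions (every name here is hypothetical): the workflow holds the "refer" state, a human changes the facts, and the same deterministic rules re-evaluate.

```python
def decide(applicant: dict) -> str:
    """Stand-in decision agent: same input, same output, every time."""
    return "approve" if applicant.get("income_verified") else "refer"

def human_review(applicant: dict) -> dict:
    """Stand-in for the call-center rep resolving the discrepancy."""
    return {**applicant, "income_verified": True}

def run_maybe_path(applicant: dict) -> str:
    decision = decide(applicant)
    while decision == "refer":               # the workflow pauses and waits
        applicant = human_review(applicant)  # rep fixes the issue
        decision = decide(applicant)         # deterministic re-evaluation
    return decision

assert run_maybe_path({"income_verified": False}) == "approve"
```

Because `decide` is deterministic, the re-submission is a true re-check of the corrected facts, not a second roll of the dice.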


Common pitfalls: When multi-method goes wrong

Building multi-method agentic systems introduces new challenges. Based on the patterns Taylor describes, here are the most common mistakes:

  • Using an LLM where you need consistency. If the same input should always produce the same output (eligibility checks, pricing, compliance), use a business rules engine. LLMs give slightly different answers each time
  • Skipping the workflow layer. Multi-step processes that span hours or days need state management. An LLM's context window is not a substitute for a workflow database
  • Making the orchestration agent too smart. The orchestration agent should route, not decide. Keep its job simple: match the request to the right agent in the registry
  • Forgetting the explainer. Decision agents produce technical logs. Without an explainer agent to translate, call center reps and customers can't understand why a decision was made
  • Treating all agents as LLM agents. Taylor's core message is that the best technology depends on the task. Audit the requirements of each agent before choosing its underlying technology
Remember: Multi-method doesn't mean more complex. It means each component uses the right tool, which often makes the overall system simpler and more reliable.


Practical implications

For beginners exploring AI agents

Start by understanding that "AI agent" doesn't always mean "chatbot." Many agents in a real system don't use language models at all. If you're learning about agentic AI, study workflow platforms and business rules alongside LLMs.

For teams designing agentic systems

Map each agent to its ideal technology before writing any code. Taylor's framework gives a clear heuristic: if the task involves understanding human language or unstructured data, consider an LLM. If it involves consistent decisions, use business rules. If it involves multi-step processes with state, use a workflow platform.
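Taylor's heuristic can be written down as a lookup over task traits. The trait names and the precedence order are my own framing of his rule of thumb, not something stated in the video.

```python
# Hypothetical encoding of the "which technology for which task" heuristic.

def pick_technology(task_traits: set[str]) -> str:
    if "unstructured_language" in task_traits or "messy_documents" in task_traits:
        return "LLM"
    if "consistent_decision" in task_traits or "audit_trail" in task_traits:
        return "business rules (BRMS)"
    if "multi_step" in task_traits or "long_lived_state" in task_traits:
        return "workflow platform (BPMN)"
    return "structured data access (e.g. MCP)"
```

Running each of the nine agents through a checklist like this, before any code is written, is the design exercise Taylor's framework suggests.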

For regulated industries

The audit trail is not optional. Decision agents built on BRMS platforms give regulators exactly what they need: a complete, human-readable record of why each decision was made. Pair them with explainer agents to make those records accessible to non-technical stakeholders.

Test yourself

  1. Architecture: A hospital needs to automate patient intake, triage decisions, and treatment planning. Which of Taylor's nine agent types would you need, and which technology would power each one?
  2. Trade-off: An LLM could technically make eligibility decisions. When would the inconsistency be acceptable, and when would it be a dealbreaker?
  3. Transfer: Taylor uses MCP to expose workflow and data technologies as agents. How would this pattern apply to a supply chain system that needs to coordinate inventory, shipping, and customs?
  4. Behavior: If decision agents provide complete audit trails, how might this change how organizations handle customer complaints about denied applications?
  5. Architecture trade-off: The orchestration agent uses an LLM to route requests. What happens if the registry grows to hundreds of agents — would you still use an LLM for routing, or switch to a different approach?

Glossary

Agentic AI: AI systems where autonomous agents perform tasks, make decisions, and interact with other agents or humans to accomplish goals.
BPMN (Business Process Model and Notation): A visual standard for drawing business processes as flowcharts. Used by workflow platforms to define the steps in a process like a loan application.
BRMS (Business Rules Management System): Software that stores and executes business logic as explicit, human-readable rules. Think of it as a recipe book that always produces the same dish and can explain every step.
Decision agent: An agent built on a rules or decision platform (not an LLM) that makes consistent, auditable decisions. Used when regulators or auditors need to see exactly how a decision was made.
LLM (Large Language Model): An AI model trained on massive amounts of text that can understand and generate human-like language. ChatGPT, Claude, and Gemini are examples.
MCP (Model Context Protocol): A standard for connecting AI agents to external data sources and tools. Taylor describes it as the bridge that lets LLM-based agents talk to workflow platforms and databases.
Multi-method agentic AI: An approach that combines LLMs with other automation technologies (workflow, business rules, ML) in a single system. Each agent uses the technology best suited to its task.
Orchestration agent: An agent whose job is to route requests to the right specialist agent. It looks up available agents in a registry and matches them to the incoming request.
RAG (Retrieval-Augmented Generation): A technique where an AI looks up relevant documents before generating an answer. Instead of relying only on what it learned during training, it can reference up-to-date documents.
State management: Keeping track of where a process is at any given time. A workflow platform remembers that a customer completed steps 1-5 and needs to do step 6, even if they come back days later.
Vector database: A specialized database that stores text as mathematical representations (vectors) for fast similarity searching. Used in RAG systems to find relevant documents quickly.
Workflow platform: Software that manages multi-step processes. It defines the steps, tracks progress, handles interruptions, and ensures the process completes correctly.

Sources and resources