Rules, RPA, ML, LLMs, and Agents: The Decision Ladder

One of the most common AI planning mistakes is to start with the most advanced-sounding option. In reality, many business tasks should begin with a rule, a deterministic workflow, or a simple classifier. Strong architecture often looks less glamorous than the sales pitch because the best system is usually the simplest one that does the job well.

Introduction: Why This Matters

Teams often ask, “Should we use an AI agent for this?” before they have asked a more basic question: What kind of task is this actually? If the work is deterministic, a rule may be enough. If the work lives in legacy interfaces, RPA may be enough. If the work is a stable prediction over structured data, traditional ML may be enough. If the work is language-heavy, an LLM workflow may be better. If the work truly requires adaptive multi-step behavior, then agent-like design may become justified.

This lesson gives you a ladder for choosing the right level of automation. The principle is simple: start at the lowest rung that can solve the business problem reliably.

The Decision Ladder at a Glance

Each rung, what it is best for, its strength, and its main weakness:

  • Rules: best for clear thresholds and policy logic; reliable and auditable, but brittle when language varies
  • RPA: best for repetitive UI actions in existing systems; fast automation without deep integration, but fragile when interfaces change
  • Traditional ML: best for narrow predictions on structured data; strong at scoring and classification, but needs labels and stable definitions
  • LLM workflow: best for language-heavy work with variable phrasing; flexible interpretation and generation, but can be inconsistent without controls
  • Agent-like system: best for adaptive, multi-step, tool-using tasks; handles branching and exploration, but harder to govern, test, and price

1) Rules

Rules are explicit instructions such as:

  • if amount > threshold, route for approval
  • if country is X, apply policy Y
  • if a field is missing, reject the submission
  • if text contains a defined term, trigger a warning

Use rules when:

  • the logic is known
  • exceptions are limited
  • auditability matters
  • the decision should be deterministic

Do not use rules when:

  • the input is messy language
  • exceptions are too numerous to enumerate and maintain by hand
  • the business meaning changes too often for manual maintenance

Rules are often underrated because they are not fashionable. But in many business systems, they are still the most trustworthy layer.
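The threshold-style rules above can be sketched as a small deterministic routing function. The field names, thresholds, and restricted-country list here are invented for illustration, not a real policy:

```python
def route_invoice(invoice, approval_threshold=10_000,
                  restricted_countries=frozenset({"X"})):
    """Apply explicit, auditable rules to an invoice dict.

    Returns (decision, reason) so every outcome is traceable to one rule.
    Field names and thresholds are illustrative, not a real policy.
    """
    required = ("amount", "country", "description")
    missing = [f for f in required if invoice.get(f) in (None, "")]
    if missing:
        return ("reject", "missing fields: " + ", ".join(missing))
    if invoice["country"] in restricted_countries:
        return ("route_for_review", "restricted country policy applies")
    if invoice["amount"] > approval_threshold:
        return ("route_for_approval", "amount exceeds approval threshold")
    if "urgent" in invoice["description"].lower():
        return ("flag", "defined term 'urgent' triggered a warning")
    return ("auto_approve", "all rules passed")
```

Because each outcome carries a reason string, the decision log doubles as an audit trail, which is exactly the auditability that rules buy you.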

2) RPA

RPA, or robotic process automation, imitates user actions in existing systems:

  • opening a legacy application
  • copying values across screens
  • downloading files from a portal
  • moving data between systems that lack clean APIs

Use RPA when:

  • the process is repetitive and UI-based
  • the systems are old or poorly integrated
  • the steps are stable and visible

Do not use RPA when:

  • the interface changes often
  • the work requires judgment on messy language
  • APIs or direct system integrations are available and cheaper long term

RPA is often a bridge technology. It can be useful, but it should not automatically become the permanent architecture.
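Real RPA platforms drive the UI itself with clicks, keystrokes, and screen reads. Stripped of that UI layer, the core of a typical bot is a field-by-field transfer loop like this sketch; the legacy system, field names, and mapping are all invented for illustration:

```python
# A rough analogy of RPA logic: re-key records from a legacy "source screen"
# into a target system, field by field. Real bots do the same transfer through
# UI automation; the systems and field names here are invented for illustration.

LEGACY_TO_TARGET = {   # fragile by nature: breaks if either side renames a field
    "CUST_NM": "customer_name",
    "INV_AMT": "amount",
    "INV_DT": "invoice_date",
}

def transfer_record(legacy_row: dict) -> dict:
    """Re-key one legacy record into the target schema, tracking gaps."""
    target, skipped = {}, []
    for legacy_field, target_field in LEGACY_TO_TARGET.items():
        if legacy_field in legacy_row:
            target[target_field] = legacy_row[legacy_field]
        else:
            skipped.append(legacy_field)   # surface gaps for a human, don't drop silently
    target["_skipped_fields"] = skipped
    return target
```

The brittleness is visible in the mapping table: one renamed field on either side stops the transfer, which is why interface churn is the main argument against RPA.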

3) Traditional ML

Traditional machine learning works best when the task is a narrow prediction on structured or semi-structured data.

Examples:

  • churn scoring
  • fraud detection
  • demand forecasting
  • lead scoring
  • risk classification

Use traditional ML when:

  • historical labels exist
  • the desired output is stable
  • you want a repeatable prediction, not free-form language
  • the problem is measurable over time

Do not use traditional ML when:

  • the key challenge is understanding free-form language
  • the output must be flexible narrative
  • the task changes too frequently for a narrowly trained model

Traditional ML is not “old AI.” It is often still the right tool.
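As a toy illustration of the churn-scoring pattern, here is a minimal logistic model fit by gradient descent on invented labeled history. A real project would use an established ML library and far more data; this only shows the shape of "historical labels in, repeatable score out":

```python
import math

# Invented historical records: (tenure_years, support_tickets_per_month, churned)
HISTORY = [
    (0.5, 4, 1), (1.0, 5, 1), (0.8, 3, 1), (1.5, 4, 1),
    (5.0, 1, 0), (6.0, 0, 0), (4.5, 1, 0), (7.0, 2, 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=3000):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, y in data:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y                      # gradient of the log loss
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def churn_score(w1, w2, b, tenure, tickets):
    """Repeatable numeric score, not free-form language."""
    return sigmoid(w1 * tenure + w2 * tickets + b)
```

Usage: after `w1, w2, b = train(HISTORY)`, a short-tenure, high-ticket account scores far above a long-tenure, quiet one, and the same inputs always produce the same score.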

4) LLM Workflows

LLM workflows are best when the task is language-heavy and the workflow can still be bounded.

Examples:

  • summarizing meetings
  • extracting fields from messy documents
  • drafting client emails
  • answering questions over internal policies
  • transforming notes into structured outputs

Use LLM workflows when:

  • the input is language-rich
  • the output needs interpretation or generation
  • the workflow benefits from flexibility but still has clear guardrails
  • review and formatting can be designed

Do not use LLM workflows when:

  • you only need a deterministic rule
  • the output must be exact with no room for ambiguity
  • the task is so high-risk that language generation adds more risk than value

An LLM workflow is often the right middle ground between rigid automation and open-ended autonomy.
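A bounded LLM workflow typically wraps the model call in a fixed prompt, a schema check, and a capped retry. In this sketch, `llm` is a stand-in for whatever model API you use, and the prompt and required fields are assumptions for illustration:

```python
import json

REQUIRED_FIELDS = {"vendor", "amount", "due_date"}   # assumed schema for illustration

PROMPT = (
    "Extract vendor, amount, and due_date from the invoice text below. "
    "Reply with JSON containing exactly those keys.\n\n{text}"
)

def extract_invoice_fields(text, llm, max_attempts=2):
    """Call a model inside guardrails: fixed prompt, schema check, bounded retry.

    `llm` is any callable taking a prompt string and returning a string;
    it stands in for a real model API.
    """
    for _ in range(max_attempts):
        raw = llm(PROMPT.format(text=text))
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue                      # malformed output: retry within the cap
        if REQUIRED_FIELDS <= data.keys():
            return data                   # passes the schema guardrail
    return None                           # escalate to a human instead of guessing
```

Injecting `llm` as a callable is a deliberate design choice: the guardrails can be tested with a fake model, and the same workflow survives a model swap.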

5) Agent-Like Systems

Agent-like systems are justified when the path to the answer cannot be fully fixed in advance.

Examples:

  • exploring multiple knowledge sources to produce a research memo
  • using several tools in sequence depending on what is discovered
  • managing exception cases that require branching choices
  • coordinating multi-step actions under bounded supervision

Use agent-like systems when:

  • the task truly requires adaptive planning
  • tool choice depends on intermediate findings
  • the workflow is not just one model call plus formatting
  • the value of autonomy exceeds the cost of governance

Do not use agent-like systems when:

  • a normal workflow can solve the task
  • the task touches sensitive actions without strong controls
  • nobody can clearly describe what the system should be allowed to do

In business settings, the burden of proof should usually be on the agent proposal, not on the workflow proposal.
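Even when agent-like behavior is justified, the control loop should stay bounded. This sketch shows the shape, with a tool allow-list, a step budget, and an injected decision policy standing in for the model-driven planner a real system would have:

```python
def run_agent(task, tools, policy, max_steps=5):
    """A minimal bounded agent loop.

    `tools` maps names to callables and acts as the allow-list; `policy`
    inspects the state and returns ("use", tool_name, args) or
    ("finish", result). The step budget and allow-list are the governance
    controls. A real system would drive `policy` with a model; here it is
    injected so the loop can be stubbed and tested.
    """
    state = {"task": task, "observations": []}
    for _ in range(max_steps):
        action = policy(state)
        if action[0] == "finish":
            return action[1]
        _, name, args = action
        if name not in tools:                 # never call outside the allow-list
            raise ValueError(f"tool {name!r} not permitted")
        state["observations"].append(tools[name](*args))
    return None                               # budget exhausted: hand back to a human
```

Note what the loop refuses to do: it never invents a tool, never runs past its budget, and hands control back rather than improvising, which is most of what "governable agent" means in practice.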

How to Choose the Right Rung

Ask these questions in order:

  1. Can explicit rules solve most of the problem?
  2. If not, is the main work repetitive interface handling?
  3. If not, is this a stable predictive task over structured data?
  4. If not, is the task mainly language understanding or generation inside a bounded workflow?
  5. Only then ask whether adaptive agent-like behavior is truly necessary.

This order matters because complexity creates cost, latency, and governance burden.
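The five questions can be encoded as an ordered check that returns the lowest suitable rung. The attribute names are invented for illustration, and each yes/no answer still requires human judgment:

```python
def choose_rung(task):
    """Walk the decision ladder in order and return the lowest rung that fits.

    `task` is a dict of yes/no judgments about the work; the keys are
    invented for illustration.
    """
    if task.get("rules_cover_most_cases"):
        return "rules"
    if task.get("mainly_repetitive_ui_work"):
        return "rpa"
    if task.get("stable_prediction_on_structured_data"):
        return "traditional_ml"
    if task.get("language_heavy_and_boundable"):
        return "llm_workflow"
    if task.get("needs_adaptive_planning"):
        return "agent_like_system"
    return "reconsider_the_task_definition"
```

The ordering is the point: an agent is only reached after every cheaper rung has been explicitly ruled out.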

Business Use Cases

Example 1: Invoice processing

  • rule: threshold approvals
  • RPA: download invoices from a vendor portal
  • LLM workflow: extract messy fields and summarize exceptions
  • agent: usually unnecessary

Example 2: Internal policy assistant

  • rule: permissions and escalation thresholds
  • retrieval + LLM workflow: answer questions from policy documents
  • agent: only if the assistant must investigate across many systems and synthesize next steps

Example 3: Lead qualification

  • traditional ML: score structured signals
  • LLM workflow: summarize notes and emails
  • rules: route based on thresholds
  • agent: only if the system must adaptively research and coordinate outreach steps

Tools, Models, and Stack Options

Typical tools for each approach:

  • Rules: rules engines, spreadsheets, workflow conditions
  • RPA: UI automation platforms, scripted browser or desktop automation
  • Traditional ML: tabular models, time-series tools, scoring pipelines
  • LLM workflow: LLM APIs, prompt templates, retrieval, structured outputs
  • Agent-like systems: orchestrators, tool routers, evaluators, memory layers

The technical tool choice matters less than the task logic behind it.

Risks, Limits, and Common Mistakes

  • Treating newer as better.
  • Using RPA where direct integration would be more stable.
  • Using an LLM where a rule would be clearer and cheaper.
  • Using an agent where a bounded workflow would be safer.
  • Failing to combine methods. Many good systems use several rungs together.

The ladder is not a rule that says “pick one forever.” It is a way to pick the right starting point.

Example Scenario

A customer support team wants to improve response operations. A simple rules layer can route tickets by product line and urgency. RPA may help pull data from an old internal screen. An LLM workflow can summarize the ticket and draft a response based on the knowledge base. A traditional ML model may help score churn risk from account history. An agent is only justified if the support assistant must adaptively navigate several systems, investigate special cases, and propose next actions across multiple tools. Without that need, the agent label only adds complexity.

How to Roll This Out in a Real Team

Start with a workflow map and mark which steps are deterministic, which are UI-bound, which are predictive, and which are language-heavy. Then assign the lightest suitable approach to each step. This often leads to a hybrid system: rules for governance, LLMs for interpretation, and maybe ML for scoring. That mixed design usually performs better than forcing the entire process into one fashionable pattern.

Practical Checklist

  • Could simple rules solve this task well enough?
  • Is the real pain in legacy interfaces rather than in reasoning?
  • Is this a structured prediction problem with historical labels?
  • Is the work mostly language-heavy and bounded?
  • Does the workflow truly require adaptive planning and tool choice?
  • Am I choosing complexity for business reasons or because it sounds advanced?
