Opening — Why This Matters Now

If you are building autonomous systems, agentic workflows, or regulatory reasoning engines, you are implicitly choosing a theory of belief change.

When new information arrives, does your system revise its beliefs or update them?

In AI theory, this distinction is classical. In practice, it determines whether your system behaves like a cautious auditor or an adaptive strategist.

This paper demonstrates something subtle but powerful: the logic of KM belief update is contained within the logic of AGM belief revision. Not metaphorically. Formally.

In modal terms, revision is not a rival theory — it is a strengthening.

And for strong KM update, the difference collapses to a single axiom — one dealing exclusively with unsurprising information.

For builders of autonomous agents, this is not philosophy. It’s architecture.


Background — Update vs. Revision (The Old Debate)

Two major frameworks dominate belief change theory:

| Theory | Core Idea | Historical Origin | Interpretation |
|---|---|---|---|
| AGM Revision | Incorporate new information while preserving consistency | Alchourrón, Gärdenfors & Makinson (1985) | The world is static; beliefs adjust |
| KM Update | Adapt beliefs to reflect a changing world | Katsuno & Mendelzon (1991) | The world evolves; beliefs track change |

Traditionally:

  • Revision = “I was wrong; let me fix my beliefs.”
  • Update = “The world changed; let me track the new state.”

KM update is usually modeled with preorders over possible worlds. AGM revision is characterized by rationality postulates about belief sets.

This paper reframes both using a unified modal logic with three operators:

  • $B\varphi$ — the agent believes $\varphi$
  • $\varphi > \psi$ — if $\varphi$ were the case, then $\psi$ would be the case
  • $\Box \varphi$ — $\varphi$ is necessarily true

Belief change becomes a statement about what the agent believes regarding conditionals:

$$ \psi \in K * \varphi \quad \Leftrightarrow \quad B(\varphi > \psi) $$

This translation is the key move.

Once both AGM and KM are expressed in the same modal language, comparison becomes surgical.
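The translation can be sketched in a few lines of Python (world names, valuations, and the plausibility ranking below are invented for illustration, not taken from the paper): checking $\psi \in K * \varphi$ amounts to checking that every most-plausible $\varphi$-world satisfies $\psi$, which is exactly the truth condition for $B(\varphi > \psi)$ under a single plausibility order.

```python
# Toy model: three named worlds with valuations over atoms p, q,
# plus a hypothetical plausibility ranking (lower = more plausible).
worlds = {"w1": {"p": True,  "q": True},
          "w2": {"p": True,  "q": False},
          "w3": {"p": False, "q": True}}
plaus = {"w1": 0, "w2": 1, "w3": 2}

def extension(formula):
    """Worlds where the formula (a predicate on valuations) holds."""
    return {w for w, v in worlds.items() if formula(v)}

def believes_conditional(phi, psi):
    """B(phi > psi): every most-plausible phi-world satisfies psi.
    By the paper's translation, this is exactly psi ∈ K * phi."""
    phi_worlds = extension(phi)
    best = min(plaus[w] for w in phi_worlds)
    closest = {w for w in phi_worlds if plaus[w] == best}
    return closest <= extension(psi)

p = lambda v: v["p"]
q = lambda v: v["q"]
print(believes_conditional(p, q))  # True: the closest p-world (w1) satisfies q
```

The point of the sketch is only that both sides of the equivalence reduce to the same closest-worlds check once a single plausibility order is fixed.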


The Core Move — Translating KM into Modal Logic

Each KM update axiom is translated into a modal formula involving belief and counterfactual operators.

For example:

  • KM Success: $\varphi \in K \diamond \varphi$ becomes $B(\varphi > \varphi)$

  • KM Consistency constraints become modal constraints preventing contradictory conditionals.

The paper shows:

Every semantic frame property corresponding to KM axioms can be represented by a modal formula.

More importantly:

Every such modal axiom is provable inside the AGM modal logic.

Which yields the central theorem:

$$ L_{KM} \subseteq L_{AGM} $$

KM update logic is contained within AGM revision logic.

Not approximately. Strictly.


Findings — Where the Difference Actually Lives

Once translated, most KM and AGM axioms coincide.

Here’s what remains:

| Shared Between KM and AGM | Unique to AGM |
|---|---|
| Closure under consequence | Stronger treatment of unsurprising inputs |
| Success condition | Concentration on prior plausible worlds |
| Tautology invariance | |
| Strong update axiom (KM9s) = AGM8 | |

The decisive distinction lies in one modal axiom:

KM Version (Weaker)

$$ B\varphi \wedge B\psi \rightarrow B(\varphi > \psi) $$

AGM Version (Stronger)

$$ \neg B\neg \varphi \wedge B(\varphi \rightarrow \psi) \rightarrow B(\varphi > \psi) $$

AGM requires that when information is not initially disbelieved, revision must concentrate on prior plausible $\varphi$-worlds.

KM allows more freedom.

In semantic terms:

  • AGM keeps revision inside the prior belief sphere.
  • KM may reach outside.

For surprising information (where $\neg \varphi$ was initially believed), both theories behave identically.

The difference exists only for unsurprising inputs.

That’s a surprisingly narrow gap for three decades of theoretical debate.
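The semantic contrast can be made concrete in a small sketch (the world names, global ranking, and per-world distances are invented for illustration): AGM-style revision selects the globally most plausible $\varphi$-worlds, while KM-style update selects, for each current belief world, its own closest $\varphi$-worlds, so it may reach outside the prior belief sphere even when $\varphi$ is unsurprising.

```python
worlds = ["w1", "w2", "w3"]

# AGM side: one global plausibility ranking (lower = more plausible).
plaus = {"w1": 0, "w2": 0, "w3": 1}

# KM side: a hypothetical per-world distance, dist[w][v] = how far v is from w.
dist = {
    "w1": {"w1": 0, "w2": 1, "w3": 2},
    "w2": {"w2": 0, "w1": 2, "w3": 1},  # from w2, w3 is closer than w1
    "w3": {"w3": 0, "w1": 1, "w2": 1},
}

# Current belief set: the most plausible worlds overall -> {w1, w2}.
belief_worlds = [w for w in worlds if plaus[w] == min(plaus.values())]

def revise(phi):
    """AGM revision: the globally most plausible phi-worlds."""
    best = min(plaus[w] for w in phi)
    return {w for w in phi if plaus[w] == best}

def update(phi):
    """KM update: for each current belief world, its closest phi-worlds."""
    result = set()
    for w in belief_worlds:
        d = min(dist[w][v] for v in phi)
        result |= {v for v in phi if dist[w][v] == d}
    return result

phi = ["w1", "w3"]          # unsurprising input: w1 is already a belief world
print(revise(phi))          # {'w1'}        — stays inside the prior belief sphere
print(sorted(update(phi)))  # ['w1', 'w3']  — reaches outside it
```

In this toy model the input is not disbelieved, yet update lands on a strictly weaker belief state than revision, which is exactly the freedom the AGM axiom forbids.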


What This Means for AI Systems

Let’s translate this into engineering consequences.

1. Autonomous Agents

If your agent must:

  • Preserve internal coherence
  • Minimize belief drift
  • Restrict changes to previously plausible states

You are implicitly implementing AGM-style revision.

If instead your system:

  • Adapts to environmental transitions
  • Reinterprets the state space more flexibly

You’re closer to KM update.

But this paper shows that AGM already contains KM.

Choosing AGM does not reduce expressiveness — it strengthens discipline.


2. Regulatory & Compliance AI

In governance systems, “unsurprising information” corresponds to:

  • Expected regulatory updates
  • Clarifications of existing rules
  • Anticipated disclosures

AGM enforces tighter constraints when inputs were already considered plausible.

That makes it more suitable for:

  • Legal reasoning engines
  • Audit trails
  • Compliance automation

KM’s flexibility may be appropriate for dynamic, state-transition modeling (e.g., robotics, dynamic planning).


3. Iterated Belief Change

The modal framework opens deeper territory:

  • Introspection of suppositional beliefs
  • Nested counterfactuals
  • Iterated revision/update

Examples the paper hints at:

  • $B(\varphi > \psi) \rightarrow BB(\varphi > \psi)$
  • $B(B\varphi > B\psi)$

These structures matter for:

  • Multi-agent reasoning
  • Strategic AI
  • Recursive planning systems

The modal embedding makes such extensions natural.
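As a hedged illustration of the first introspection schema, here is a toy Kripke-style sketch (the model and accessibility relation are invented, not the paper's construction): belief is truth at all accessible worlds, and on a transitive, euclidean relation the pattern $B\chi \rightarrow BB\chi$ holds, where $\chi$ stands in for any fixed formula such as $\varphi > \psi$.

```python
# Hypothetical doxastic model: B holds at w iff the formula holds at
# every world the agent considers possible from w.
worlds = ["w1", "w2", "w3"]
# A transitive, euclidean accessibility relation (assumed for illustration).
access = {"w1": {"w2", "w3"}, "w2": {"w2", "w3"}, "w3": {"w2", "w3"}}

def B(formula):
    """Lift a world-predicate to the world-predicate 'believed here'."""
    return lambda w: all(formula(v) for v in access[w])

# chi plays the role of a fixed conditional such as phi > psi.
chi = lambda w: w in {"w2", "w3"}

for w in worlds:
    # Positive introspection: B(chi) -> B(B(chi)) at every world.
    assert (not B(chi)(w)) or B(B(chi))(w)
print("introspection holds at every world")
```

Nothing here depends on what $\chi$ says; the schema is a property of the belief modality itself, which is why the modal embedding makes such iterated principles easy to state and test.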


A Structural Insight

Visually, the relationship can be summarized as:

| Logic | Scope |
|---|---|
| Basic Modal Logic $L$ | Belief + conditional framework |
| $L_{KM}$ | $L$ + KM update axioms |
| $L_{AGM}$ | $L$ + AGM revision axioms |

And formally:

$$ L \subseteq L_{KM} \subseteq L_{AGM} $$

If we restrict KM to its strong version:

  • The gap reduces to one axiom.
  • That axiom governs treatment of unsurprising information.

The rest is shared structure.

In short:

Revision is disciplined update.


Conclusion — The Debate Was About One Axiom

The historical framing suggested two fundamentally different paradigms.

This paper shows otherwise.

Once expressed in modal logic:

  • KM update is not a rival to AGM revision.
  • It is a special case.
  • Under strong update, the difference shrinks to a single constraint.

For AI architects, this matters.

When designing belief-change systems, the real question is not “update or revise?”

It is:

How should the system treat information it already considered plausible?

That is the only structural divergence.

Everything else is shared machinery.

Elegant, minimal, and quietly decisive.


Cognaptus: Automate the Present, Incubate the Future.