Opening — Why This Matters Now

Autonomous systems are no longer living in tidy, discrete worlds.

A warehouse robot moves (discrete action), but battery levels decay continuously. A medical AI prescribes a drug (discrete decision), but a patient’s vitals evolve over time. A cooling system fails at 15:03, but temperature climbs gradually toward catastrophe.

Yet most formal accounts of actual causation—especially those inspired by structural equation models—assume a fundamentally discrete universe. Actions flip switches. Variables jump. Causes precede effects in clean, surgical steps.

Reality is less polite.

The paper we analyze here tackles a deceptively hard question:

How do we define primary actual cause in domains where change is both discrete and continuous?

Not metaphorically. Logically. Formally. In a way that survives counterfactual scrutiny.

If you build safety-critical AI, regulatory audit trails, digital twins, or agentic systems that operate in physical environments—this is not philosophy. It’s infrastructure.


Background — From Pearl to Hybrid Worlds

The dominant formal tradition of actual causation traces back to Judea Pearl and the Halpern–Pearl structural model framework. It operationalizes Hume’s classic “but-for” intuition:

If A had not occurred, B would not have occurred.

The problem? Preemption.

In discrete domains, multiple potential causes may compete. One fires first; another becomes irrelevant. The naive but-for test fails.

Later refinements introduced contingency reasoning, selective interventions, and richer logical encodings. Meanwhile, the Situation Calculus provided a powerful action-theoretic framework for reasoning about dynamic domains—especially for AI agents.

But almost all of this work assumed discrete change.

Hybrid domains—where continuous fluents evolve over time under context—remained underdeveloped. The Hybrid Temporal Situation Calculus (HTSC) extended the framework to allow:

  • Discrete actions (e.g., pipe rupture)
  • Continuous evolution (e.g., temperature increasing per second)
  • Context-dependent rates of change

However, defining actual cause within this hybrid setting remained largely unresolved.

This paper closes that gap.


Analysis — Two Definitions, One Result

The authors propose two independent definitions of primary achievement cause for temporal effects in hybrid systems.

Let’s unpack them.

1️⃣ Foundational Definition: Context-Triggered Causation

In hybrid domains, continuous change occurs only under specific contexts—mutually exclusive discrete states that govern how a temporal fluent evolves.

For example:

| Context | Condition | Temperature Increase Rate |
|---|---|---|
| γ₁ | Pipe ruptured ∧ Cooling failed | +100°/sec |
| γ₂ | Pipe ruptured ∧ Cooling working | +35°/sec |
| γ₃ | Pipe intact ∧ Cooling failed | +55°/sec |

The key insight:

A primary cause of a temporal effect is the action that directly enabled the context active at the moment the effect was achieved.

So instead of asking:

“Which action raised temperature above 1000°?”

We ask:

“Which action last enabled the context under which temperature evolved into that state?”

The definition hinges on identifying the achievement situation:

  • The earliest situation where the effect becomes true
  • And remains true thereafter

Formally, the structure ensures:

  • Uniqueness of achievement situation
  • Uniqueness of primary cause
  • Context mutual exclusivity

In business terms: the framework isolates the structural trigger, not just the temporal coincidence.
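The whole first definition can be sketched executably. The trace encoding below is invented (the paper works in the situation calculus, not in Python): each step records an action, the context active afterwards, and whether the effect (temperature above 1000°) holds in the resulting situation.

```python
# A minimal sketch of the context-triggered definition over an
# invented list-of-steps encoding of a linear scenario.

def achievement_index(trace):
    """Earliest step at which the effect becomes true and stays true."""
    ach = None
    for i, (_action, _ctx, holds) in enumerate(trace):
        if holds and ach is None:
            ach = i            # candidate achievement situation
        elif not holds:
            ach = None         # effect did not persist: reset
    return ach

def primary_cause(trace):
    """The action that last enabled the context active at achievement."""
    ach = achievement_index(trace)
    if ach is None:
        return None
    active = trace[ach][1]
    cause, prev = None, None
    for action, ctx, _holds in trace[: ach + 1]:
        if ctx == active and prev != active:
            cause = action     # this action switched the context on
        prev = ctx
    return cause

trace = [
    ("fail_cooling", "γ3", False),  # pipe intact ∧ cooling failed
    ("rupture_pipe", "γ1", False),  # pipe ruptured ∧ cooling failed
    ("wait",         "γ1", True),   # temperature crosses 1000° and stays
]
print(primary_cause(trace))  # rupture_pipe
```

By construction there is at most one index where the effect becomes true and remains true, which mirrors the uniqueness properties listed above.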


2️⃣ Contribution-Based Definition: Production First

The second definition reframes causation as actual contribution.

An action is a primary cause if:

  • It is a direct actual contributor
  • Its contribution leads to an achievement situation
  • The effect persists after its contribution

This production-style definition resembles earlier discrete accounts of causation, making it conceptually portable.
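A rough sketch of the contribution-based reading, again in an invented encoding: each step records an action and whether it directly contributed to the effect's production (e.g. enabled or accelerated the temperature's rise). The selection rule shown, the last direct contributor at or before the achievement point, is my simplification of the formal definition.

```python
# Hypothetical sketch of the contribution-based definition.

def contribution_cause(steps, ach):
    """Last direct contributor at or before achievement index ach."""
    cause = None
    for action, contributes in steps[: ach + 1]:
        if contributes:
            cause = action
    return cause

steps = [
    ("fail_cooling", True),   # enables γ3: temperature starts rising
    ("rupture_pipe", True),   # switches to γ1: the rise accelerates
    ("log_alarm",    False),  # no effect on the temporal fluent
]
print(contribution_cause(steps, 2))  # rupture_pipe
```

On this toy run it singles out the same action as the context-triggered reading, which illustrates (without, of course, proving) the equivalence result discussed next.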

And here is the elegant part:

The two definitions are formally equivalent.

That equivalence matters. It shows that structural context-enabling and contribution-based reasoning converge in hybrid systems.


The Modified But-For Test — Fixing Preemption in Hybrid Systems

Standard but-for reasoning fails under preemption.

Hybrid systems make this worse:

  • Preempted contributors may occur before the actual cause
  • Continuous evolution may obscure causal chains

The paper introduces a clever refinement:

  1. Replace the identified primary cause with a no-op.
  2. If a new primary cause emerges in the modified scenario, replace that too.
  3. Repeat until no primary cause remains.
  4. Evaluate the effect in the resulting defused scenario.

This produces a maximal “cause-stripped” scenario.
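The loop can be sketched as follows. `find_primary_cause` and `holds_effect` are toy stand-ins for the paper's formal machinery, and the dynamics loosely mirror the reactor example; only the iterate-until-fixpoint structure reflects the procedure itself:

```python
# Self-contained sketch of the modified but-for ("defusing") loop.
NOOP = None
ENABLERS = ("rupture_pipe", "fail_cooling")

def holds_effect(scenario):
    """Toy dynamics: overheating occurs iff some context was enabled."""
    return any(a in ENABLERS for a in scenario if a is not None)

def find_primary_cause(scenario):
    """Toy stand-in: index of the last context-enabling action, if any."""
    cause_index = None
    for i, a in enumerate(scenario):
        if a in ENABLERS:
            cause_index = i
    return cause_index

def defuse(scenario):
    """Replace primary causes with no-ops until none remain (steps 1-3)."""
    scenario = list(scenario)
    while (i := find_primary_cause(scenario)) is not None:
        scenario[i] = NOOP
    return scenario

run = ["fail_cooling", "rupture_pipe", "open_vent"]
defused = defuse(run)
print(defused)                # [None, None, 'open_vent']
print(holds_effect(defused))  # False: counterfactual dependence restored
```

Note how the preempted contributor (`fail_cooling`) is stripped on the second pass: a single naive but-for removal of `rupture_pipe` alone would have left the effect standing.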

The result:

| Condition | Outcome in Defused Scenario |
|---|---|
| No context initially active | Effect disappears, or the scenario becomes non-executable |
| Context initially active | Effect may persist (implicit cause) |

This restores counterfactual dependence in a principled way.

It is not a naive but-for test. It is a structurally aware but-for test.

And that’s a meaningful distinction.


Key Formal Properties (Why This Is Not Just Elegant, But Useful)

The framework proves several non-trivial properties:

| Property | Implication |
|---|---|
| Uniqueness of primary cause | No ambiguity in attribution |
| Uniqueness of achievement situation | Deterministic audit point |
| Persistence | Causes remain valid while the effect persists |
| Counterfactual dependence | Effect disappears when causes and preemptions are removed |

For AI governance and assurance, this is gold.

It means:

  • Traceability is formally well-defined
  • Causal audits can be mechanized
  • Responsibility attribution can be grounded in logic

Practical Implications — Why Businesses Should Care

Let’s translate this beyond academic logic.

1️⃣ Safety-Critical Systems

Energy plants, autonomous vehicles, medical devices—all operate in hybrid environments.

Understanding primary cause in such systems supports:

  • Root cause analysis
  • Compliance reporting
  • Post-incident reconstruction

2️⃣ AI Governance & Regulation

As regulators push for explainability, discrete-event explanations are insufficient.

Hybrid causation modeling enables:

  • Temporal responsibility tracking
  • Continuous risk attribution
  • Formal explanation pipelines

3️⃣ Multi-Agent Systems

In concurrent systems, multiple agents influence shared continuous variables.

The uniqueness results prevent causal over-attribution.

That matters when liability enters the conversation.


Where the Framework Is Still Limited

To its credit, the paper is transparent about constraints:

  • Focused on primitive temporal fluents
  • Linear scenarios only
  • No compound effects
  • No indirect temporal causes yet

This is foundational work, not final architecture.

But it establishes something critical:

Hybrid domains require hybrid causation semantics.

And that cannot be retrofitted from purely discrete logic.


Conclusion — Causation Grows Up

Most AI systems today operate in environments where discrete decisions and continuous dynamics intertwine.

If our causation models ignore that, we are building explanations on sand.

This paper does something quietly powerful:

  • It formalizes primary cause in hybrid domains
  • It proves equivalence between structural and production views
  • It rehabilitates counterfactual reasoning under preemption

Not flashy. Not hype-driven.

Just rigorous.

And rigor is exactly what AI governance will demand in the next decade.

Cognaptus: Automate the Present, Incubate the Future.