Counterfactuals Unchained: How Causality Escapes Its Own Models
Opening — Why this matters now
AI systems increasingly make decisions that trigger other decisions: an expanding domino chain woven from predictions, nudges, and sometimes hallucinations. When businesses want explanations, regulators demand accountability, or agents need to reason about what would have happened, classic causal models quickly reveal their limits. The paper “Causality Without Causal Models” by Halpern & Pass argues that our current machinery for defining causes is simply too rigid. Their proposal: liberate causality from structural equations and reinterpret it in any framework that can evaluate counterfactuals.
For an industry building autonomous workflows, this is more than philosophical housekeeping — it’s a foundations upgrade.
Background — Context and prior art
The standard Halpern–Pearl (HP) definition treats causality inside structural equation models (SEMs). These models are tidy: variables, arrows, and equations describe how each variable depends on its parents, and an intervention surgically overwrites the relevant equation. All neat, all self-contained.
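For concreteness, here is a minimal Python sketch of an SEM and a surgical intervention; the toy variables, the `solve` helper, and the rain/sprinkler example are illustrative, not from the paper.

```python
# Minimal recursive SEM: each endogenous variable is a function of the
# variables before it; an intervention overwrites that function.

def solve(equations, exogenous):
    """Evaluate an acyclic SEM (equations listed in topological order)."""
    values = dict(exogenous)
    for var, eq in equations.items():
        values[var] = eq(values)
    return values

# Toy model: rain suppresses the sprinkler; either one wets the grass.
equations = {
    "sprinkler": lambda v: not v["rain"],
    "wet":       lambda v: v["rain"] or v["sprinkler"],
}

actual = solve(equations, {"rain": True})

# Intervention do(sprinkler = True): surgically replace its equation,
# leaving every other mechanism untouched.
intervened = dict(equations, sprinkler=lambda v: True)
counterfactual = solve(intervened, {"rain": True})

print(actual["wet"], counterfactual["wet"])  # True True
```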
The problem? Real reasoning rarely stays inside tidy boxes. Structural models:
- Cannot naturally express disjunctions (“A or B caused X”).
- Struggle with nested counterfactuals (“If Alice had believed Bob believed the alarm was fake…”).
- Typically forbid backtracking — counterfactual worlds where upstream causes shift.
- Do not embed modalities such as belief, intention, or knowledge.
In business environments, human decisions and AI agents jointly produce entangled causal webs. Restricting causality to SEMs is a bit like insisting modern finance must run exclusively on spreadsheets from 1998.
Analysis — What the paper does
Halpern & Pass abstract the HP definition into a general template that works in any system capable of evaluating counterfactuals. They call these systems causal–counterfactual families (ccfs); the only requirement is that the framework can evaluate statements of the form “if φ were true, then ψ would follow.”
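In code terms the contract is tiny. A minimal sketch of that single requirement, assuming nothing beyond the informal description above (the `Protocol` and method names are our invention, not an API from the paper):

```python
from typing import Any, Protocol


class CounterfactualFrame(Protocol):
    """The one capability the abstract definition demands of a framework."""

    def holds(self, world: Any, phi: Any) -> bool:
        """Is phi true at this world?"""
        ...

    def would(self, world: Any, phi: Any, psi: Any) -> bool:
        """Is 'if phi were true, psi would follow' true at this world?"""
        ...
```

Structural equation models, possible-worlds models, and modal logics can all be read as instances of this one interface.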
By lifting the definition into this generalized space, several new capabilities appear:
1. Causality without structural equations
You no longer need structural equations, graphs, or exogenous variables. Instead, causes are defined by three abstract criteria:
- AC1′: A and B are true in the actual world.
- AC2′: There exists a condition C, true in the actual world, such that if ¬A ∧ C were true, then ¬B would be true.
- AC3′: A is minimal: no strict part of A on its own satisfies AC1′ and AC2′.
The entire HP apparatus collapses into a pure counterfactual relation.
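Spelled out in counterfactual notation (we write $\boxright$ for “if … were true, … would follow”; the symbol choice is ours, not necessarily the paper's):

```latex
% Requires amsmath; \boxright (the counterfactual conditional) is in stmaryrd.
% "A is a cause of B" at the actual world w:
\begin{align*}
\text{AC1}' &:\quad w \models A \wedge B\\
\text{AC2}' &:\quad \exists C \text{ with } w \models C
  \text{ such that } w \models (\neg A \wedge C) \boxright \neg B\\
\text{AC3}' &:\quad A \text{ is minimal: no strict part of } A
  \text{ satisfies AC1}' \text{ and AC2}'
\end{align*}
```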
2. Support for richer languages
Because the abstract framework works on languages that allow modalities and nested counterfactuals, you can now express things like:
- “If we intervened on Alice’s beliefs, she’d take the vaccine.”
- “A is a cause of B even though A is a disjunction (A1 ∨ A2).”
- Security properties described as nested counterfactual structures.
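Rendered as formulas, with $B_a$ for “agent $a$ believes” and $\boxright$ as above (the notation and the concrete security property are illustrative, not the paper's examples):

```latex
% Illustrative renderings of the three bullets above:
\begin{align*}
& B_{\mathrm{Alice}}(\mathit{safe}) \boxright \mathit{takes\_vaccine}
  && \text{intervening on a belief}\\
& (A_1 \vee A_2) \text{ is a cause of } B
  && \text{a disjunctive cause}\\
& \mathit{leak} \boxright (\mathit{patch} \boxright \neg\mathit{breach})
  && \text{a nested counterfactual}
\end{align*}
```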
3. Backtracking becomes natural
Backtracking, where a counterfactual change to a downstream variable is allowed to revise our view of its upstream causes, is natural in a possible-worlds semantics. The abstract definition can allow or disallow backtracking simply by choosing what information is kept fixed when selecting nearby worlds.
This is a significant improvement. In many business and organizational settings, humans reason with backtracking all the time.
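A minimal possible-worlds sketch of that design choice: whether the similarity order penalizes changed upstream facts or broken mechanisms decides whether backtracking happens. The worlds, the toy “law,” and both distance measures are our own illustration.

```python
# Possible worlds as variable assignments. Whether a counterfactual may
# "backtrack" (revise upstream facts) is a property of the similarity
# order we impose on worlds, not of the worlds themselves.

ACTUAL = {"storm": True, "delay": True, "fail": True}

WORLDS = [
    ACTUAL,
    {"storm": True,  "delay": False, "fail": False},  # delay removed "by miracle"
    {"storm": False, "delay": False, "fail": False},  # upstream storm revised too
]

LAWS = [lambda w: w["delay"] == w["storm"]]  # toy mechanism: storm <=> delay

def fact_changes(w):
    """Distance as disagreement with the actual world's facts."""
    return sum(w[v] != ACTUAL[v] for v in ACTUAL)

def miracles(w):
    """Distance as the number of mechanisms the world violates."""
    return sum(not law(w) for law in LAWS)

def would(antecedent, consequent, backtracking):
    """Evaluate 'antecedent []-> consequent' at the nearest antecedent-world.
    Non-backtracking: keep facts, tolerate broken laws.
    Backtracking: keep laws, tolerate revised upstream facts."""
    key = miracles if backtracking else fact_changes
    nearest = min((w for w in WORLDS if antecedent(w)), key=key)
    return consequent(nearest)

no_delay = lambda w: not w["delay"]
storm_happened = lambda w: w["storm"]

print(would(no_delay, storm_happened, backtracking=False))  # True: storm held fixed
print(would(no_delay, storm_happened, backtracking=True))   # False: storm revised away
```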
4. A unified treatment of explanations
The authors extend the framework to define explanations (EX1′–EX4′). Explanations remain relative to an agent’s knowledge, echoing the HP tradition but now compatible with non‑structural models.
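As a rough intuition pump only (this compresses EX1′–EX4′ drastically; the two tests below and the toy worlds are our simplification, not the paper's clauses): an explanation should be live given what the agent knows, informative, and sufficient for the explanandum wherever the agent considers it possible.

```python
# Toy epistemic check for an explanation candidate `expl` of `fact`,
# relative to the worlds an agent considers possible. A deliberately
# crude stand-in for the paper's EX1'-EX4'.

def explains(expl, fact, epistemic_worlds):
    considered = [w for w in epistemic_worlds if expl(w)]
    if not considered:
        return False          # agent has already ruled the explanation out
    if len(considered) == len(epistemic_worlds):
        return False          # agent already knew it: not informative
    return all(fact(w) for w in considered)  # sufficient wherever it holds

# The agent cannot tell whether the model misclassified or data were missing.
worlds = [
    {"misclassified": True,  "data_missing": False, "bad_decision": True},
    {"misclassified": False, "data_missing": True,  "bad_decision": True},
    {"misclassified": False, "data_missing": False, "bad_decision": False},
]

either_failure = lambda w: w["misclassified"] or w["data_missing"]
bad = lambda w: w["bad_decision"]

print(explains(either_failure, bad, worlds))  # True: a disjunctive explanation
```

Note that the successful explanation here is itself a disjunction, exactly the kind of formula SEM-based definitions handle awkwardly.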
The result is an all‑terrain definition of causality and explanation.
Findings — Results with visualization
The key conceptual shift can be summarized in the table below.
Table 1 — From Structural Causality to Abstract Causality
| Feature | Structural Equation Models (HP) | Abstract Counterfactual Framework (Halpern–Pass) |
|---|---|---|
| Language expressiveness | Primitive events, limited Boolean combinations | Arbitrary formulas, nested counterfactuals, belief modalities |
| Support for disjunctions | Weak and controversial | Fully supported |
| Backtracking | Forbidden by construction | Allowed or disallowed by design |
| Representation | Directed acyclic graphs + equations | Any counterfactual space (possible worlds, modal logic, etc.) |
| Explanation | Tied to causal paths | Tied to counterfactual sufficiency + agent knowledge |
| Applicability | Narrow, SEM-specific | Universal across counterfactual models |
Table 2 — How AC2′ generalizes HP’s AC2
| Element | HP AC2 Condition | Abstract AC2′ Condition |
|---|---|---|
| Role of “witness” variables | Fixes the witness set W to its actual values during the intervention | Any formula C that holds in the actual world and supports the counterfactual dependence |
| Counterfactual evaluation | Through the structural equations | Through nearest-world semantics or any other counterfactual mechanism |
| Flexibility | Rigid | Extremely flexible |
The crucial gain: conditional but‑for causality becomes portable across worlds, languages, and formalisms.
Implications — Why businesses and AI builders should care
1. Auditability of AI systems
When LLM agents form chains of decisions — routing emails, approving transactions, escalating exceptions — regulators will not care that your causal model is too limited. They will ask: Why did the system do X? What would have happened if Y had not occurred? The Halpern–Pass framework enables explanations that match how humans naturally reason.
2. Robust compliance and assurance
Compliance teams often ask counterfactual questions involving intentions, beliefs, or disjunctions (“either the model misclassified or the data were incomplete…”). These are awkward at best to formalize inside SEMs; in the abstract framework they are straightforward.
3. Agentic AI architectures
As enterprises move toward multi-agent AI ecosystems, agents will need to resolve causal dependencies on the fly, from detecting responsibility to planning under uncertainty. A generalized causal semantics becomes part of the operating system.
4. Improved failure analysis
Backtracking counterfactuals let analysts explore hypothetical upstream failures (“if the signal had not been delayed, the downstream action would have succeeded…”). This aligns far better with root‑cause analysis in operations.
5. A path to unified reasoning frameworks
By showing equivalence between structural and abstract causality in recursive settings, the paper provides a bridge: one foundation, many architectures.
Conclusion — Wrap-up
“Causality Without Causal Models” is not just another tweak to philosophical definitions. It is a liberation movement for how machines and humans reason about hypothetical worlds. For AI builders, it offers a vocabulary that can match the complexity of human decisions, regulatory expectations, and the multi-agent automation architectures now taking shape.
Cognaptus: Automate the Present, Incubate the Future.