Chains of Causality, Not Just Thought

Large language models (LLMs) have graduated from glorified autocomplete engines to fully fledged agents. They write code, control mobile devices, and execute multi-step plans. But with this newfound autonomy comes a fundamental problem: they act, and actions have consequences. Recent research from KAIST introduces Causal Influence Prompting (CIP), a method that doesn’t just nudge LLMs toward safety with general heuristics or fuzzy ethical reminders. Instead, it formalizes decision-making by embedding causal influence diagrams (CIDs) into the prompt pipeline. The result? A structured, explainable safety layer that turns abstract AI alignment talk into something operational. ...
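
To make the idea concrete, here is a minimal sketch of what serializing a causal influence diagram into an agent’s prompt could look like. The `CID` dataclass, node names, and prompt wording are illustrative assumptions for this post, not the schema used in the KAIST paper.

```python
# Minimal sketch of the "CID in the prompt" idea. Node names, the dataclass layout,
# and the prompt wording are illustrative assumptions, not the authors' actual format.
from dataclasses import dataclass, field


@dataclass
class CID:
    """A tiny causal influence diagram: decision, chance, and utility nodes plus edges."""
    decisions: list[str]
    chances: list[str]
    utilities: list[str]
    edges: list[tuple[str, str]] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Render the diagram as a structured block the agent sees before acting."""
        lines = ["Before acting, reason over this causal influence diagram:"]
        lines += [f"- decision node: {d}" for d in self.decisions]
        lines += [f"- chance node: {c}" for c in self.chances]
        lines += [f"- utility node: {u}" for u in self.utilities]
        lines += [f"- edge: {a} -> {b}" for a, b in self.edges]
        lines.append("Refuse or revise any action whose outgoing paths reach a negative utility node.")
        return "\n".join(lines)


# Hypothetical example: an agent asked to run a shell command on a user's machine.
cid = CID(
    decisions=["run_shell_command"],
    chances=["command_deletes_files", "user_gave_consent"],
    utilities=["task_completed", "user_data_lost"],
    edges=[
        ("run_shell_command", "command_deletes_files"),
        ("command_deletes_files", "user_data_lost"),
        ("run_shell_command", "task_completed"),
    ],
)
system_prompt = cid.to_prompt() + "\n\nTask: clean up the temp directory."
print(system_prompt)
```

The point of the structure is that the safety check becomes explicit and inspectable: the agent (or a guard model) can trace which decision nodes lead to which utility nodes instead of relying on a vague "be safe" reminder.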

July 2, 2025 · 4 min · Zelina

Bias Busters: Teaching Language Agents to Think Like Scientists

In the recent paper “Language Agents Mirror Human Causal Reasoning Biases” (Chen et al., 2025), researchers uncovered a persistent issue affecting even the most advanced language model (LM) agents: a disjunctive bias, a tendency to prefer “OR”-type causal explanations over equally valid or even stronger “AND”-type ones. Strikingly, this mirrors adult human reasoning patterns and undermines the agents’ ability to draw correct conclusions in scientific-style causal discovery tasks. ...
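
For intuition, the toy sketch below contrasts a disjunctive hypothesis (“A OR B causes E”) with a conjunctive one (“A AND B causes E”) on four hand-made observations. The data are invented for illustration, not taken from the paper’s benchmark; they happen to fully support the AND hypothesis, which is exactly the kind of case where a disjunctive-biased agent goes wrong.

```python
# Illustrative only: a tiny, noiseless causal-discovery check, not the paper's setup.
# Each observation records whether candidate causes A and B were present and whether effect E occurred.
observations = [
    ({"A": 1, "B": 1}, 1),
    ({"A": 1, "B": 0}, 0),
    ({"A": 0, "B": 1}, 0),
    ({"A": 0, "B": 0}, 0),
]


def consistent(hypothesis: str) -> int:
    """Count observations consistent with a deterministic OR / AND causal hypothesis."""
    count = 0
    for causes, effect in observations:
        predicted = (causes["A"] or causes["B"]) if hypothesis == "OR" else (causes["A"] and causes["B"])
        count += int(bool(predicted) == bool(effect))
    return count


print("OR-hypothesis matches:", consistent("OR"), "/ 4")    # 2 / 4
print("AND-hypothesis matches:", consistent("AND"), "/ 4")  # 4 / 4
```

An agent with a disjunctive bias would still lean toward the OR explanation despite the 2/4 vs 4/4 split, which is the failure mode the paper measures.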

May 15, 2025 · 3 min