From Causal Parrots to Causal Counsel: When LLMs Argue with Data
Opening: Why This Matters Now

Everyone wants AI to “understand” causality; fewer are comfortable with what that actually implies. Large Language Models (LLMs) can generate plausible causal statements from variable names alone: give them “smoking,” “lung cancer,” and “genetic mutation,” and they will confidently sketch arrows. The problem? Plausible is not proof. The paper “Leveraging Large Language Models for Causal Discovery: a Constraint-based, Argumentation-driven Approach” confronts this tension directly. It asks two uncomfortable but necessary questions: ...