From Features to Actions: Why Agentic AI Needs a New Explainability Playbook

Opening — Why this matters now
Explainable AI has always promised clarity. For years, that promise was delivered—at least partially—through feature attributions, saliency maps, and tidy bar charts explaining why a model predicted this instead of that. Then AI stopped predicting and started acting. Tool-using agents now book flights, browse the web, recover from errors, and occasionally fail in slow, complicated, deeply inconvenient ways. When that happens, nobody asks which token mattered most. They ask: where did the agent go wrong—and how did it get there? ...

February 9, 2026 · 4 min · Zelina

When One Heatmap Isn’t Enough: Layered XAI for Brain Tumour Detection

Opening — Why this matters now
Medical AI is no longer struggling with accuracy. In constrained tasks like MRI-based brain tumour detection, convolutional neural networks routinely cross the 90% accuracy mark. The real bottleneck has shifted elsewhere: trust. When an algorithm flags—or misses—a tumour, clinicians want to know why. And increasingly, a single colourful heatmap is not enough. ...

February 7, 2026 · 3 min · Zelina

Training Models to Explain Themselves: Counterfactuals as a First-Class Objective

Opening — Why this matters now
As AI systems increasingly decide who gets a loan, a job interview, or access to public services, explanations have stopped being a philosophical luxury. They are now a regulatory, ethical, and operational requirement. Counterfactual explanations—“If your income were $5,000 higher, the loan would have been approved”—have emerged as one of the most intuitive tools for algorithmic recourse. ...

January 24, 2026 · 4 min · Zelina

Hook, Line, and Confidence: When Humans Outthink the Phish Bot

Opening — Why this matters now
Phishing is no longer about bad grammar and suspicious links. It is about plausibility, tone, and timing. As attackers refine their craft, the detection problem quietly shifts from raw accuracy to judgment under uncertainty. That is precisely where today’s AI systems, despite their statistical confidence, begin to diverge from human reasoning. ...

January 11, 2026 · 4 min · Zelina

Pulling the Thread: Why LLM Reasoning Often Unravels

Opening — Why this matters now
Large Language Model (LLM) agents have crossed an uncomfortable threshold. They are no longer just autocomplete engines or polite chat companions; they are being entrusted with financial decisions, scientific hypothesis generation, and multi-step autonomous actions. With that elevation comes a familiar demand: explain yourself. Chain-of-Thought (CoT) reasoning was supposed to be the answer. Let the model “think out loud,” and transparency follows—or so the story goes. The paper behind Project Ariadne argues, with unsettling rigor, that this story is largely fiction. Much of what we see as reasoning is closer to stagecraft: convincing, articulate, and causally irrelevant. ...

January 6, 2026 · 4 min · Zelina

Many Minds, One Decision: Why Agentic AI Needs a Brain, Not Just Nerves

Opening — Why this matters now
Agentic AI has officially crossed the line from clever demo to operational liability. We are no longer talking about chatbots that occasionally hallucinate trivia. We are deploying autonomous systems that decide, act, and trigger downstream consequences—often across tools, APIs, and real-world processes. In that setting, the old comfort blanket of “the model said so” is no longer defensible. ...

December 29, 2025 · 3 min · Zelina

When More Explanation Hurts: The Early‑Stopping Paradox of Agentic XAI

Opening — Why this matters now
We keep telling ourselves a comforting story: if an AI explanation isn’t good enough, just refine it. Add another round. Add another chart. Add another paragraph. Surely clarity is a monotonic function of effort. This paper politely demolishes that belief. As agentic AI systems—LLMs that reason, generate code, analyze results, and then revise themselves—move from demos into decision‑support tools, explanation quality becomes a first‑order risk. Not model accuracy. Not latency. Explanation quality. Especially when the audience is human, busy, and allergic to verbose nonsense. ...

December 25, 2025 · 4 min · Zelina

XAI, But Make It Scalable: Why Experts Should Stop Writing Rules

Opening — Why this matters now
Explainable AI has reached an awkward phase of maturity. Everyone agrees that black boxes are unacceptable in high‑stakes settings—credit, churn, compliance, healthcare—but the tools designed to open those boxes often collapse under their own weight. Post‑hoc explainers scale beautifully and then promptly contradict themselves. Intrinsic approaches behave consistently, right up until you ask who is going to annotate explanations for millions of samples. ...

December 23, 2025 · 4 min · Zelina

Doctor GPT, But Make It Explainable

Opening — Why this matters now
Healthcare systems globally suffer from a familiar triad: diagnostic bottlenecks, rising costs, and a shortage of specialists. What makes this crisis especially stubborn is not capacity alone, but interaction. Diagnosis is fundamentally conversational, iterative, and uncertain. Yet most AI diagnostic tools still behave like silent oracles: accurate perhaps, but opaque, rigid, and poorly aligned with how humans actually describe illness. ...

December 22, 2025 · 4 min · Zelina

When Tokens Remember: Graphing the Ghosts in LLM Reasoning

Opening — Why this matters now
Large language models don’t think—but they do accumulate influence. And that accumulation is exactly where most explainability methods quietly give up. As LLMs move from single-shot text generators into multi-step reasoners, agents, and decision-making systems, we increasingly care why an answer emerged—not just what token attended to what prompt word. Yet most attribution tools still behave as if each generation step lives in isolation. That assumption is no longer just naïve; it is actively misleading. ...

December 18, 2025 · 4 min · Zelina