
Recommendations With Receipts: When LLMs Have to Prove They Behaved

Opening — Why this matters now. LLMs are increasingly trusted to recommend what we watch, buy, or read. But trust breaks down the moment a regulator, auditor, or policy team asks a simple question: prove that this recommendation followed the rules. Most LLM-driven recommenders cannot answer that question. They can explain themselves fluently, but explanation is not enforcement. In regulated or policy-heavy environments—media platforms, marketplaces, cultural quotas, fairness mandates—that gap is no longer tolerable. ...

January 17, 2026 · 4 min · Zelina

When Debate Stops Being a Vote: DynaDebate and the Engineering of Reasoning Diversity

Opening — Why this matters now. Multi-agent debate was supposed to be the antidote to brittle single-model reasoning. Add more agents, let them argue, and truth would somehow emerge from friction. In practice, what often emerges is something closer to a polite echo chamber. Despite the growing popularity of Multi-Agent Debate (MAD) frameworks, many systems quietly degenerate into majority voting over nearly identical reasoning paths. When all agents make the same mistake—just phrased slightly differently—debate becomes theater. The paper DynaDebate: Breaking Homogeneity in Multi-Agent Debate with Dynamic Path Generation tackles this problem head-on, and, refreshingly, does so by treating reasoning as an engineered process rather than a conversational one. ...

January 12, 2026 · 4 min · Zelina

Vibe Coding a Theorem Prover: When LLMs Prove (and Break) Themselves

Opening — Why this matters now. LLMs can write code, explain proofs, and occasionally hallucinate both with equal confidence. So the obvious next question—posed almost mischievously in this paper—is whether an LLM can code a theorem prover that itself relies on LLMs. Not as a demo. Not as a toy. But as a fully automatic, kernel-checked prover that runs on a laptop and outperforms Isabelle’s industrial-grade automation in at least some regimes. ...

January 11, 2026 · 4 min · Zelina

When Solvers Guess Smarter: Teaching SMT to Think in Functions

Opening — Why this matters now. Quantified SMT solving has always lived in an uncomfortable space between elegance and brute force. As models grew richer—mixing non-linear arithmetic, real-valued domains, and uninterpreted functions—the solvers stayed stubbornly syntactic. They match patterns. They enumerate. They hope. Meanwhile, large language models have quietly absorbed a century’s worth of mathematical intuition. AquaForte asks an obvious but previously taboo question: what if we let SMT solvers borrow that intuition—without surrendering formal guarantees? ...

January 11, 2026 · 3 min · Zelina

Guardrails Over Gigabytes: Making LLM Coding Agents Behave

Opening — Why this matters now. AI coding agents are everywhere—and still maddeningly unreliable. They pass unit tests they shouldn’t. They hallucinate imports. They invent APIs with confidence that would be admirable if it weren’t so destructive. The industry response has been predictable: bigger models, longer prompts, more retries. This paper proposes something less glamorous and far more effective: stop asking stochastic models to behave like deterministic software engineers. ...

December 27, 2025 · 4 min · Zelina

Truth Machines: VeriCoT and the Next Frontier of AI Self-Verification

Why this matters now. Large language models have grown remarkably persuasive—but not necessarily reliable. They often arrive at correct answers through logically unsound reasoning, a phenomenon both amusing in games and catastrophic in legal, biomedical, or policy contexts. The research paper VeriCoT: Neuro-Symbolic Chain-of-Thought Validation via Logical Consistency Checks proposes a decisive step toward addressing that flaw: a hybrid system where symbolic logic checks the reasoning of a neural model, not just its answers. ...

November 7, 2025 · 4 min · Zelina

Memory That Fights Back: How SEDM Turns Agent Logs into Verified Knowledge

TL;DR: Most “agent memory” is a junk drawer: it grows fast, gets noisy, and slows everything down. SEDM (Self‑Evolving Distributed Memory) proposes an auditable, efficiency‑first overhaul. It verifies each candidate memory by replaying the exact run in a Self‑Contained Execution Context (SCEC), assigns an initial utility‑aligned weight, and then self‑schedules what to retrieve next. The result: higher task accuracy with fewer tokens versus strong memory baselines on FEVER and HotpotQA. A minimal sketch of that loop follows below. ...
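To make that pipeline concrete, here is a minimal Python sketch of the verify-then-weigh-then-retrieve loop the TL;DR describes. Only the three stages come from the post; the names (`MemoryItem`, `replay_in_scec`, `admit`, `schedule_retrieval`) and the stubbed replay check are illustrative assumptions, not SEDM's actual API.

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    content: str            # distilled takeaway from an agent run
    trace: dict             # enough of the original run to replay it
    weight: float = 0.0     # utility-aligned weight, updated as it proves useful
    verified: bool = False  # set only after a successful replay

def replay_in_scec(item: MemoryItem) -> bool:
    """Replay the originating run in an isolated, self-contained execution
    context and check the recorded outcome still holds. Stubbed here: a real
    SCEC would re-execute the run rather than compare two recorded fields."""
    return item.trace.get("expected") == item.trace.get("observed")

def admit(store: list, item: MemoryItem, initial_utility: float) -> None:
    """Admission gate: only replay-verified memories enter the store,
    seeded with a utility-aligned weight instead of a flat default."""
    if replay_in_scec(item):
        item.verified = True
        item.weight = initial_utility
        store.append(item)

def schedule_retrieval(store: list, k: int = 3) -> list:
    """Self-scheduled retrieval: surface the k memories whose weights say
    they have paid off, keeping retrieved context (and tokens) small."""
    return sorted(store, key=lambda m: m.weight, reverse=True)[:k]
```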

September 17, 2025 · 5 min · Zelina