
When LLMs Stop Guessing and Start Complying: Agentic Neuro-Symbolic Programming

Opening — Why this matters now: Large Language Models are excellent improvisers. Unfortunately, software systems—especially those embedding logic, constraints, and guarantees—are not jazz clubs. They are factories. And factories care less about eloquence than about whether the machine does what it is supposed to do. Neuro-symbolic (NeSy) systems promise something enterprises quietly crave: models that reason, obey constraints, and fail predictably. Yet in practice, NeSy frameworks remain the domain of specialists fluent in obscure DSLs and brittle APIs. The result is familiar: powerful theory, low adoption. ...

January 5, 2026 · 4 min · Zelina

Pieces, Not Puzzles: How ArcMemo Turns LLM Reasoning into Reusable Skills

TL;DR: Most memory systems hoard instances (queries, answers, snippets). ArcMemo instead distills concepts—compact, reusable abstractions of what a model learned while solving a problem. Those concepts are retrieved and recombined for new problems. On ARC‑AGI, this design beats strong no‑memory baselines and continues to scale with retries, showing a credible path to lifelong, test‑time learning without finetuning.

Why this paper matters: The status quo of "inference‑time scaling" is a treadmill: longer chains of thought today, amnesia tomorrow. Enterprises building agentic systems—customer ops copilots, finance/Excel agents, or low‑code RPA flows—need their agents to keep what they learn and apply it later without weight updates. ArcMemo's move from instance‑level to concept‑level memory is the right abstraction boundary: ...
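To make the store-concepts-then-retrieve-and-recombine loop concrete, here is a minimal sketch. It is an illustrative assumption, not ArcMemo's actual interface: the `Concept` schema, the `ConceptMemory` class, and the bag-of-words retrieval heuristic are all hypothetical stand-ins (the paper's distillation and retrieval are LLM-driven).

```python
from dataclasses import dataclass

@dataclass
class Concept:
    """A reusable abstraction distilled from a solved problem (hypothetical schema)."""
    name: str          # e.g. "color-count tiebreak"
    when_to_use: str   # retrieval cue: the kind of situation the concept fits
    how_to_apply: str  # compact natural-language procedure

class ConceptMemory:
    """Stores concepts rather than raw transcripts; retrieves them by cue overlap."""

    def __init__(self) -> None:
        self.concepts: list[Concept] = []

    def add(self, concept: Concept) -> None:
        # In ArcMemo the distillation step is LLM-driven: the model summarizes
        # what generalizes from its own solution trace. Here we take that
        # summary as given and simply store it.
        self.concepts.append(concept)

    def retrieve(self, problem: str, k: int = 3) -> list[Concept]:
        # Rank stored concepts by how well their usage cue matches the new
        # problem. A real system would use embedding similarity; word overlap
        # stands in for illustration.
        def overlap(c: Concept) -> int:
            return len(set(c.when_to_use.lower().split()) & set(problem.lower().split()))
        return sorted(self.concepts, key=overlap, reverse=True)[:k]

memory = ConceptMemory()
memory.add(Concept(
    name="color-count tiebreak",
    when_to_use="grid puzzle where the output depends on which color appears most",
    how_to_apply="count cells per color, keep the majority color, discard the rest",
))

# Retrieved concepts get prepended to the prompt so the model can recombine
# them on the new problem -- learning persists with no weight updates.
for concept in memory.retrieve("a grid puzzle asking which color dominates"):
    print(concept.name, "->", concept.how_to_apply)
```

The design point the sketch captures: because the memory holds abstractions keyed by when they apply, rather than raw problem-answer pairs, retrieval can surface a concept for a problem that looks nothing like the one that produced it.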

September 8, 2025 · 5 min · Zelina