
Paths, Not Parrots: When RL Makes LLMs Plan—and When It Doesn’t

TL;DR SFT memorizes co-occurrences; RL explores. That’s why RL generalizes better on planning tasks. Policy-gradient (PG) can hit 100% training accuracy while silently killing output diversity. KL helps—but caps gains. Q-learning with process rewards preserves diversity and works off‑policy. With outcome‑only rewards, it reward-hacks and collapses.

Why this paper matters to builders

If you’re shipping agentic features—tool use chains, workflow orchestration, or multi-step retrieval—you’re already relying on planning. The paper models planning as path-finding on a graph and derives learning dynamics for SFT vs RL variants. The results give a crisp blueprint for product choices: which objective to use, when to add KL, and how to avoid brittle one-path agents. ...
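To make the objective contrast concrete, here is a tiny self-contained sketch (ours, not the paper’s) of tabular Q-learning on a toy path-finding graph, comparing a process reward that credits each correct edge against an outcome-only reward at the terminal node. The graph, rewards, and hyperparameters are invented for illustration; the diversity-collapse result itself isn’t reproduced here.

```python
import random

# Toy DAG: node -> successor nodes. "G" is the goal; "X" is a dead end.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["X"], "G": [], "X": []}

def q_learning(process_reward: bool, episodes: int = 500, alpha: float = 0.5,
               gamma: float = 0.9, eps: float = 0.2) -> dict:
    """Tabular Q-learning; epsilon-greedy behavior makes it off-policy."""
    Q = {(s, a): 0.0 for s, succs in GRAPH.items() for a in succs}
    for _ in range(episodes):
        s = "S"
        while GRAPH[s]:
            acts = GRAPH[s]
            if random.random() < eps:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda x: Q[(s, x)])
            # Process reward: credit every correct edge; outcome: goal only.
            if process_reward:
                r = 1.0 if (s, a) in {("S", "A"), ("A", "G")} else -1.0
            else:
                r = 1.0 if a == "G" else 0.0
            nxt = max((Q[(a, x)] for x in GRAPH[a]), default=0.0)
            Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])
            s = a
    return Q

print(q_learning(process_reward=True))   # dense credit along the valid path
print(q_learning(process_reward=False))  # sparse signal at the terminal step only
```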

October 3, 2025 · 5 min · Zelina

Pods over Prompts: Shachi’s Playbook for Serious Agent-Based Simulation

TL;DR Shachi is a modular methodology for building LLM-driven agent-based models (ABMs) that replaces ad‑hoc prompt spaghetti with four standardized cognitive components—Configs, Memory, Tools, and an LLM reasoning core. The result: agents you can port across environments, benchmark rigorously, and use to study nontrivial dynamics like tariff shocks with externally valid outcomes. For enterprises, Shachi is the missing method for turning agent demos into decision simulators.

Why this paper matters to operators (not just researchers)

Most enterprise “agent” pilots die in the gap between a clever demo and a reliable simulator that leaders can trust for planning. Shachi closes that gap by: ...
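As a mental model of the four-component split, here is a minimal Python skeleton; the class shape and method names are our assumptions, not Shachi’s actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PortableAgent:
    """Illustrative Shachi-style decomposition: Configs, Memory, Tools, LLM core."""
    config: dict                                  # Configs: role + environment knobs
    llm: Callable[[str], str]                     # LLM reasoning core (any callable)
    memory: list = field(default_factory=list)    # Memory: persists across episodes
    tools: dict = field(default_factory=dict)     # Tools: name -> callable

    def step(self, observation: str) -> str:
        self.memory.append(observation)
        prompt = (f"role={self.config.get('role')}\n"
                  f"memory={self.memory[-5:]}\nobs={observation}")
        action = self.llm(prompt)
        # If the core names a registered tool, run it; otherwise return the text.
        return self.tools[action](observation) if action in self.tools else action

# Porting the agent = swapping config/tools while keeping the same cognitive loop.
firm = PortableAgent(config={"role": "importer"}, llm=lambda p: "raise_price")
print(firm.step("tariff announced: +10%"))
```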

October 3, 2025 · 5 min · Zelina

Failures, Taxonomized: How Multi‑Level Reflection Turns Agents Into Self‑Learners

TL;DR Most reflection frameworks still treat failure analysis as an afterthought. SAMULE reframes it as the core curriculum: synthesize reflections at micro (single trajectory), meso (intra‑task error taxonomy), and macro (inter‑task error clusters) levels, then fine‑tune a compact retrospective model that generates targeted reflections at inference. It outperforms prompt‑only baselines and RL‑heavy approaches on TravelPlanner, NATURAL PLAN, and Tau‑Bench. The strategic lesson for builders: design your error system first; the agent will follow. ...
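A minimal sketch of what the three synthesis levels could look like over simple failure records; the record shape and error labels below are invented for illustration, not SAMULE’s format.

```python
from collections import Counter, defaultdict

# Hypothetical failure records: (task_id, trajectory_id, error_label, note)
failures = [
    ("trip-1", "t1", "budget_violation", "ignored nightly hotel cap"),
    ("trip-1", "t2", "budget_violation", "summed costs before discounts"),
    ("trip-2", "t1", "tool_misuse", "queried flights with past dates"),
]

# Micro: one targeted reflection per trajectory.
micro = {(task, traj): f"Avoid: {note}" for task, traj, _, note in failures}

# Meso: per-task error taxonomy (label -> count within a task).
meso = defaultdict(Counter)
for task, _, label, _ in failures:
    meso[task][label] += 1

# Macro: cross-task clusters of recurring error types.
macro = Counter(label for _, _, label, _ in failures)

print(micro, dict(meso), macro, sep="\n")
```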

October 2, 2025 · 4 min · Zelina

Paths > Outcomes: Measuring Agent Quality Beyond the Final State

When we measure a marathon by who crosses the line, we ignore how they ran it. For LLM agents that operate through tool calls—editing a CRM, moving a robot arm, or filing a compliance report—the “how” is the difference between deployable and dangerous. Today’s paper introduces CORE: Full‑Path Evaluation of LLM Agents Beyond Final State, a framework that scores agents on the entire execution path rather than only the end state. Here’s why this matters for your roadmap. ...
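The excerpt doesn’t spell out CORE’s scoring rules, so here is a generic full-path scorer under assumed per-step annotations; the valid/redundant/unsafe flags and the 50/50 weights are our placeholders, not CORE’s spec.

```python
def full_path_score(steps: list[dict], final_state_ok: bool,
                    w_final: float = 0.5, w_path: float = 0.5) -> float:
    """Blend final-state correctness with per-step quality along the path."""
    if not steps:
        return 0.0

    def step_quality(s: dict) -> float:
        if s.get("unsafe"):
            return 0.0                      # unsafe actions zero out the step
        return 1.0 - 0.5 * s.get("redundant", False) if s.get("valid") else 0.0

    path = sum(step_quality(s) for s in steps) / len(steps)
    return w_final * float(final_state_ok) + w_path * path

# Two agents reach the same end state but took very different paths:
clean = [{"valid": True}, {"valid": True}]
messy = [{"valid": True, "redundant": True}, {"unsafe": True}, {"valid": True}]
print(full_path_score(clean, True), full_path_score(messy, True))  # 1.0 vs 0.75
```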

October 2, 2025 · 4 min · Zelina

Reason, Reveal, Resist: The Persuasion Duality in Multi‑Agent AI

TL;DR In LLM multi‑agent systems, how a model thinks matters more than how big it is. Explicit reasoning (thinking mode / CoT) creates a Persuasion Duality: sharing a model’s reasoning makes it far better at convincing others, while enabling the model’s own reasoning mode makes it far harder to convince. This shifts best practices for agent design, governance, and product UX.

Why this paper matters

Cognition—not just parameter count—now drives the social dynamics of agent swarms. For Cognaptus clients building agent workers (ops, compliance, research, trading), the result is practical: toggling reasoning changes not just accuracy, but influence. Your deployment choices can tilt a network toward consensus, stalemate, or resilient truth‑seeking. ...

October 2, 2025 · 5 min · Zelina

Recon, Then Wreck the Roadblocks: How Recon‑Act Turns Web Stumbles into Tools

Thesis: The next leap in practical web agents isn’t bigger models or deeper search trees—it’s a tight loop that learns by failing well. Recon‑Act’s two‑team architecture (Reconnaissance → Action) turns mistakes into generalized tools and feeds them back into execution. That’s not just a benchmark trick; it’s an operating system for enterprise‑grade automation.

Why this matters (for operators, not just researchers)

Most “browser LLMs” still thrash on real websites: ambiguous DOMs, mixed text‑image signals, fragile flows, and long horizons. Recon‑Act reframes the problem: when progress stalls, stop trying harder—learn smarter. It does three things companies can copy tomorrow: ...
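Schematically, the stall-then-learn loop looks like this; act, recon, and the popup tool below are stubs, and the function names are ours, not Recon‑Act’s.

```python
def recon_act_loop(task, act, recon, toolbox, max_tries=3):
    """On a stalled attempt, run reconnaissance to mint a reusable tool, retry.

    `act(task, toolbox)` returns (done, trace); `recon(trace)` returns a
    (name, fn) tool distilled from the failure. All callables are stubs here.
    """
    for _ in range(max_tries):
        done, trace = act(task, toolbox)
        if done:
            return toolbox
        name, fn = recon(trace)     # Reconnaissance team: generalize the mistake
        toolbox[name] = fn          # Action team gets the new tool next attempt
    return toolbox

# Minimal stubs to exercise the loop:
attempts = {"n": 0}
def act(task, toolbox):
    attempts["n"] += 1
    return ("dismiss_popup" in toolbox, f"stalled on modal (try {attempts['n']})")
def recon(trace):
    return "dismiss_popup", lambda page: "clicked close button"

print(recon_act_loop("book flight", act, recon, {}))
```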

October 2, 2025 · 5 min · Zelina

When Agents Get Bored: Three Baselines Your Autonomy Stack Already Has

Thesis: Give an LLM agent freedom and a memory, and it won’t idle. It will reliably drift into one of three meta-cognitive modes. If you operate autonomous workflows, these modes are your real defaults during downtime, ambiguity, and recovery.

Why this matters (for product owners and ops)

Most agent deployments assume a “do nothing” baseline between tasks. New evidence says otherwise: with a continuous ReAct loop, persistent memory, and self-feedback, agents self-organize—not randomly, but along three stable patterns. Understanding them improves incident response, UX, and governance, especially when guardrails, tools, or upstream signals hiccup. ...

October 2, 2025 · 4 min · Zelina

Bracket Busters: When Agentic LLMs Turn Law into Code (and Catch Their Own Mistakes)

TL;DR Agentic LLMs can translate legal rules into working software and audit themselves using higher‑order metamorphic tests. This combo improves worst‑case reliability (not just best‑case demos), making it a practical pattern for tax prep, benefits eligibility, and other compliance‑bound systems.

The Business Problem

Legal‑critical software (tax prep, benefits screening, healthcare claims) fails in precisely the ways that cause the most reputational and regulatory damage: subtle misinterpretations around thresholds, phase‑ins/outs, caps, and exception codes. Traditional testing stumbles here because you rarely know the “correct” output for every real‑world case (the oracle problem). What you do know: similar cases should behave consistently. ...
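To see why metamorphic relations bite on threshold logic, here is a first-order example against a deliberately buggy toy tax rule (the paper goes further with higher-order relations); the rates and threshold are invented for illustration.

```python
def net_income(gross: float) -> float:
    """Toy tax rule with a subtle cliff bug around the 50,000 threshold."""
    rate = 0.30 if gross >= 50_000 else 0.20   # cliff applied to the whole amount
    return gross - gross * rate

def test_monotonic_net_income():
    """Metamorphic relation: earning more gross should never reduce net income.

    No oracle needed: we compare the program against itself on related inputs.
    """
    for gross in range(49_000, 51_000, 100):
        assert net_income(gross + 100) >= net_income(gross), \
            f"net income drops when gross rises past {gross}"

try:
    test_monotonic_net_income()
except AssertionError as e:
    print("metamorphic test caught:", e)  # fires at the 50,000 cliff
```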

October 1, 2025 · 5 min · Zelina

Keys to the Kingdom… with a Chaperone: How Agentic JWT Grounds AI Agents in Real Intent

If autonomous agents are the new employees, your bearer tokens are their keycards. Today’s OAuth/JWT keycards open too many doors for too long, and no one can prove why a door was opened—only that it was. This is fine for deterministic apps; it breaks for stochastic, tool‑calling LLM agents. Agentic JWT (A‑JWT) proposes a surgical fix: bind every API call to a cryptographically verifiable intent (and optional workflow step), and give each agent its own identity plus proof‑of‑possession (PoP) keys. Zero‑Trust, but practical. ...
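In PyJWT terms, the core move looks roughly like this. The intent and wf_step claim names are illustrative (the paper’s schema may differ), and a shared secret stands in for the per-agent proof-of-possession key pair a real deployment would bind.

```python
import time
import jwt  # PyJWT

AGENT_KEY = "per-agent-secret"  # stand-in; real PoP uses an asymmetric key pair

def mint_agent_token(agent_id: str, intent: str, step: str) -> str:
    claims = {
        "sub": agent_id,               # the agent's own identity, not the user's
        "intent": intent,              # hypothetical claim: what this call is for
        "wf_step": step,               # hypothetical claim: workflow position
        "exp": int(time.time()) + 60,  # short-lived: minutes, not hours
    }
    return jwt.encode(claims, AGENT_KEY, algorithm="HS256")

def verify_call(token: str, required_intent: str) -> bool:
    claims = jwt.decode(token, AGENT_KEY, algorithms=["HS256"])
    return claims["intent"] == required_intent  # deny on intent mismatch

tok = mint_agent_token("agent-42", "refund:read", "step-3")
print(verify_call(tok, "refund:read"), verify_call(tok, "refund:write"))
```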

October 1, 2025 · 5 min · Zelina

Pipes by Prompt, DAGs by Design: Why Hybrid Beats Hero Prompts

TL;DR Turning natural‑language specs into production Airflow DAGs works best when you split the task into stages and let templates carry the structural load. In Prompt2DAG’s 260‑run study, a Hybrid approach (structured analysis → workflow spec → template‑guided code) delivered ~79% success and top quality scores, handily beating Direct one‑shot prompting (~29%) and LLM‑only generation (~66%). Deterministic Templated code hit ~92% but at the price of up‑front template curation.

What’s new here

Most discussions about “LLMs writing pipelines” stop at demo‑ware. Prompt2DAG treats pipeline generation like software engineering, not magic: 1) analyze requirements into a typed JSON, 2) convert to a neutral YAML workflow spec, 3) compile to Airflow DAGs either by deterministic templates or by LLMs guided by those templates, 4) auto‑evaluate for style, structure, and executability. The result is a repeatable path from English to a runnable DAG. ...
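The staging is easy to picture in code. Here is a toy version of the first three stages with the LLM calls stubbed out and JSON standing in for the YAML spec; none of this is Prompt2DAG’s actual code.

```python
import json

def analyze(requirements: str) -> dict:
    """Stage 1 (stub): an LLM would map the English spec to typed JSON."""
    return {"pipeline": "daily_sales", "tasks": ["extract", "transform", "load"]}

def to_workflow_spec(analysis: dict) -> str:
    """Stage 2: neutral, engine-agnostic spec (JSON here standing in for YAML)."""
    return json.dumps(analysis, indent=2)

DAG_TEMPLATE = """\
with DAG(dag_id="{dag_id}", schedule="@daily") as dag:
{task_lines}
"""

def compile_dag(spec: str) -> str:
    """Stage 3: the template carries structure; an LLM only fills the bodies."""
    data = json.loads(spec)
    lines = "\n".join(f'    {t} = EmptyOperator(task_id="{t}")'
                      for t in data["tasks"])
    return DAG_TEMPLATE.format(dag_id=data["pipeline"], task_lines=lines)

print(compile_dag(to_workflow_spec(analyze("Load daily sales into the warehouse"))))
```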

October 1, 2025 · 5 min · Zelina