
MatchTIR: Stop Paying Every Token the Same Salary

Why this matters now: tool-using agents are no longer a novelty. They are quietly becoming the default interface between LLMs and the real world: APIs, databases, search engines, execution environments. Yet most reinforcement learning pipelines still behave as if every step in a trajectory deserves the same bonus. That assumption was tolerable when tasks were short. It collapses when agents think, call tools, fail, retry, and recover over ten or more turns. ...

January 17, 2026 · 4 min · Zelina

Credit Where It's Due: How CAPO Brings Verifiable Precision to LLM Reasoning

When training Large Language Models (LLMs) to reason, reinforcement learning has proven to be a powerful yet blunt instrument. Most methods reduce the entire model output to a single pass/fail reward, applying that verdict to every token—regardless of whether it contributed to success or failure. This makes credit assignment vague, verifiability weak, and learning inefficient. Enter CAPO (Credit Assignment Policy Optimization), a method that shifts the paradigm: it brings verifiable, fine-grained credit assignment to the token level, using LLMs themselves as judgment agents. ...

August 5, 2025 · 4 min · Zelina
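
Both posts above critique the same default: giving every token in a trajectory the same scalar reward. A minimal illustrative sketch of that contrast (not the MatchTIR or CAPO algorithm; the helper names, the judge mask, and the numbers are invented for illustration):

```python
# Illustrative contrast between outcome-level and token-level credit.
# The "helpful_mask" stands in for a hypothetical judge that flags
# which steps contributed to success; real methods are more involved.

def uniform_credit(tokens, outcome_reward):
    """Outcome-level RL: every token inherits the same scalar reward."""
    return [outcome_reward] * len(tokens)

def token_level_credit(tokens, outcome_reward, helpful_mask):
    """Token-level assignment: only steps judged helpful share the reward."""
    n_helpful = sum(helpful_mask) or 1  # avoid division by zero
    return [outcome_reward / n_helpful if h else 0.0 for h in helpful_mask]

steps = ["plan", "call_tool", "bad_retry", "fix", "answer"]
print(uniform_credit(steps, 1.0))
# [1.0, 1.0, 1.0, 1.0, 1.0]  -- the failed retry is paid like everything else
print(token_level_credit(steps, 1.0, [True, True, False, True, True]))
# [0.25, 0.25, 0.0, 0.25, 0.25]  -- the failed retry earns nothing
```

The point of both posts is that the second scheme, made verifiable, gives the optimizer a far sharper learning signal on long multi-turn trajectories.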