
Put It on the GLARE: How Agentic Reasoning Makes Legal AI Actually Think

Legal judgment prediction (LJP) is one of those problems that exposes the difference between looking smart and being useful. Most models memorize patterns; judges demand reasons. Today’s paper introduces GLARE—an agentic framework that forces the model to widen its hypothesis space, learn from real precedent logic, and fetch targeted legal knowledge only when it needs it. The result isn’t just higher accuracy; it’s a more auditable chain of reasoning.

TL;DR

What it is: GLARE, an agentic legal reasoning engine for LJP.
Why it matters: It turns “guess the label” into compare-and-justify—exactly how lawyers reason.
How it works: Three modules—Charge Expansion (CEM), Precedents Reasoning Demonstrations (PRD), and Legal Search–Augmented Reasoning (LSAR)—cooperate in a loop (sketched below).
Proof: Gains of +7.7 F1 (charges) and +11.5 F1 (articles) over direct reasoning; +1.5 to +3.1 F1 over strong precedent‑RAG; double‑digit gains on difficult, long‑tail charges.
So what: If you’re deploying LLMs into legal ops or compliance, agentic structure > bigger base model.

Why “agentic” beats bigger

The usual upgrades—bigger models, more RAG, longer context—don’t address the core failure mode in LJP: premature closure on a familiar charge and surface‑level precedent matching. GLARE enforces a discipline: ...
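To make that loop concrete, here is a minimal Python sketch of how the three modules could cooperate; the module internals, names, and toy scoring are our assumptions, not the paper’s code.

```python
# Hypothetical sketch of GLARE's agentic loop: expand candidate charges,
# ground each in precedent reasoning, and fetch legal knowledge on demand.
# Module behaviors are stubbed; only the control flow mirrors the idea.

def charge_expansion(facts: str) -> list[str]:
    """CEM: widen the hypothesis space beyond the first 'familiar' charge."""
    return ["theft", "fraud", "embezzlement"]  # stand-in candidates

def precedent_demonstrations(charge: str) -> str:
    """PRD: retrieve how real precedents reasoned about this charge."""
    return f"precedent logic for {charge}"

def legal_search(question: str) -> str:
    """LSAR: targeted lookup of statutes/articles only when needed."""
    return f"retrieved article text for: {question}"

def predict(facts: str) -> dict:
    candidates = charge_expansion(facts)
    scored = {}
    for charge in candidates:
        demo = precedent_demonstrations(charge)
        # Compare the facts against precedent logic; when uncertain,
        # issue a targeted legal search before committing to a score.
        evidence = legal_search(f"elements of {charge}")
        scored[charge] = len(set(facts.split()) & set(evidence.split()))  # toy score
    best = max(scored, key=scored.get)
    return {"charge": best, "rationale": precedent_demonstrations(best)}

print(predict("defendant took company funds via forged invoices"))
```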

August 25, 2025 · 4 min · Zelina

Charting a Better Bedside: When Agentic RL Teaches RAG to Diagnose

Why this paper matters: Retrieval‑augmented generation (RAG) has been the default answer to “how do we make LLMs factual?” But clinical work is not a single hop to a single document; it’s a workflow—observe, hypothesize, retrieve, cross‑check, and only then decide. Deep‑DxSearch reframes RAG as a sequential policy, trained end‑to‑end with reinforcement learning (RL) so the model learns when to reason internally and when to consult guidelines, match similar patients, or search broader knowledge—before committing to a diagnosis. That design change is the story. ...
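Below is a toy Python sketch of the “RAG as a sequential policy” idea: the agent picks among retrieval and reasoning actions until it commits to a diagnosis. The action names mirror the description above; the policy, state, and stubs are our stand-ins, not Deep‑DxSearch’s trained model.

```python
# Toy sketch: a policy chooses an action at each step (reason, consult
# guidelines, match similar patients, search, diagnose) until it commits.
# RL would train the policy end-to-end; here termination is a heuristic.
import random

ACTIONS = ["reason", "lookup_guideline", "match_patient", "search", "diagnose"]

def policy(state: list[str]) -> str:
    # A trained policy would be a model; here, diagnose once evidence piles up.
    return "diagnose" if len(state) >= 3 else random.choice(ACTIONS[:-1])

def step(action: str, state: list[str]) -> list[str]:
    return state + [f"evidence from {action}"]

def episode(case: str) -> tuple[str, list[str]]:
    state, trace = [case], []
    while True:
        action = policy(state)
        trace.append(action)
        if action == "diagnose":
            return f"diagnosis for: {case}", trace
        state = step(action, state)

diagnosis, trace = episode("fever, rash, recent travel")
print(diagnosis, trace)  # RL rewards correct, efficient trajectories
```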

August 24, 2025 · 5 min · Zelina

Peer Review, But Make It Multi‑Agent: Inside aiXiv’s Bid to Publish AI Scientists

If 2024 was the year AI started writing science, 2025 is making it figure out how to publish it. Today’s paper introduces aiXiv, an open‑access platform where AI agents (and humans) submit proposals, review each other’s work, and iterate until a paper meets acceptance criteria. Rather than bolt AI onto the old gears of journals and preprint servers, aiXiv rebuilds the conveyor belt end‑to‑end.

Why this matters (and to whom)

Research leaders get a way to pressure‑test automated discovery without waiting months for traditional peer review. AI vendors can plug agents into a standardized workflow (through APIs/MCP), capturing telemetry to prove reliability. Publishers face an existential question: if quality control is measurable and agentic, do we still need the old queue?

The core idea in one sentence

A closed‑loop, multi‑agent review system combines retrieval‑augmented evaluation, structured critique, and re‑submission cycles to raise the floor of AI‑generated proposals/papers and create an auditable trail of improvements. ...
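Here is a minimal sketch of what such a closed‑loop submission cycle could look like in code; the scoring, acceptance threshold, and revision budget are invented for illustration, not aiXiv’s actual criteria.

```python
# Hypothetical closed-loop review: reviewer agents critique, the author
# agent revises, and the loop repeats until acceptance or budget exhaustion.

def review(draft: str) -> tuple[float, str]:
    """A reviewer agent returns a score and a structured critique."""
    score = min(1.0, 0.4 + 0.2 * draft.count("[revised]"))
    return score, "tighten related-work grounding"

def revise(draft: str, critique: str) -> str:
    """The author agent addresses the critique and resubmits."""
    return draft + f" [revised] ({critique})"

def submission_loop(draft: str, accept_at: float = 0.8, budget: int = 5) -> str:
    for round_no in range(budget):
        score, critique = review(draft)
        print(f"round {round_no}: score={score:.2f}")  # auditable trail
        if score >= accept_at:
            return "accepted"
        draft = revise(draft, critique)
    return "rejected"

print(submission_loop("AI-generated proposal"))
```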

August 24, 2025 · 5 min · Zelina

Memory With Intent: Why LLMs Need a Cognitive Workspace, Not Just a Bigger Window

TL;DR

Today’s long-context and RAG systems scale storage, not thinking. Cognitive Workspace (CW) reframes memory as an active, metacognitive process: curate, plan, reuse, and consolidate. In tests, CW reports ~55–60% memory reuse and 17–18% net efficiency gains despite a 3.3× operation overhead—precisely because it thinks about what to remember and why.

The Setup: Context ≠ Cognition

Over the past 18 months we’ve cheered >1M-token windows and slicker attention kernels. But piling tokens into a context is like dumping files on a desk; it’s storage without stewardship. In knowledge work, what moves the needle is not how much you can “see” but how well you organize, recall, and reuse—with intent. ...
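A toy sketch of the “active memory” idea, with curation, reuse tracking, and consolidation; the class and its API are our assumptions, not CW’s implementation.

```python
# Minimal sketch: a workspace that curates what enters, tracks reuse,
# and consolidates (evicts) stale items instead of hoarding tokens.

class CognitiveWorkspace:
    def __init__(self, capacity: int = 4):
        self.items: dict[str, int] = {}   # memory -> reuse count
        self.capacity = capacity
        self.hits = self.lookups = 0

    def curate(self, item: str) -> None:
        if len(self.items) >= self.capacity:
            # consolidate: evict the least-reused item
            evict = min(self.items, key=self.items.get)
            del self.items[evict]
        self.items.setdefault(item, 0)

    def recall(self, query: str) -> str | None:
        self.lookups += 1
        for item in self.items:
            if query in item:
                self.items[item] += 1
                self.hits += 1
                return item
        return None

    def reuse_rate(self) -> float:
        return self.hits / max(1, self.lookups)

ws = CognitiveWorkspace()
ws.curate("Q3 revenue grew 12%")
ws.recall("revenue"); ws.recall("headcount")
print(f"reuse rate: {ws.reuse_rate():.0%}")  # CW reports ~55-60% in tests
```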

August 20, 2025 · 5 min · Zelina

Atom by Atom, Better Research: How Fine-Grained Rewards Make Agentic Search Smarter

If you’ve ever watched a web agent swing from elegant reasoning to face‑plants on basic facts, you’ve met the limits of outcome‑only training. Atom‑Searcher proposes a simple but radical fix: stop treating the whole reasoning trace as one monolith. Instead, break it down into Atomic Thoughts—the minimal, functional units of reasoning—and supervise them directly with a Reasoning Reward Model (RRM). Then blend those process‑level rewards with the final answer score using a decaying curriculum. The result? More stable training, deeper search behavior, and better generalization across in‑ and out‑of‑domain QA. ...
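The reward‑blending idea is easy to see in code. Below is a hedged sketch: a process score from the RRM is mixed with the final‑answer reward under a linearly decaying weight (the schedule is our assumption; the paper’s curriculum may differ).

```python
# Sketch of a decaying curriculum: early in training, atomic-thought
# (process) rewards dominate; later, the final-answer reward takes over.

def blended_reward(atom_rewards: list[float], answer_reward: float,
                   step: int, total_steps: int) -> float:
    process = sum(atom_rewards) / len(atom_rewards)  # RRM scores per atomic thought
    alpha = max(0.0, 1.0 - step / total_steps)       # linear decay of process weight
    return alpha * process + (1.0 - alpha) * answer_reward

# Early training: process supervision dominates.
print(blended_reward([0.9, 0.4, 0.7], answer_reward=0.0, step=100, total_steps=1000))
# Late training: the outcome score dominates.
print(blended_reward([0.9, 0.4, 0.7], answer_reward=1.0, step=900, total_steps=1000))
```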

August 19, 2025 · 5 min · Zelina

Keys to the Kingdom: How LLMs Can Audit Crypto Logic Before It Breaks

We’ve gotten good at spotting API misuse in crypto code (think “don’t use ECB,” “don’t hardcode IVs”). But many production failures don’t come from the obvious API call—they’re born in the logic that surrounds it: the parameter checks, corner-case math, and brittle “optimizations.” That’s where CryptoScope steps in: an LLM-powered framework that reads crypto code like a human auditor, guided by a domain corpus and structured prompts, to uncover logic-level vulnerabilities without executing the code. ...
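To illustrate the corpus‑guided prompting pattern, here is a minimal sketch; the corpus entries, retrieval heuristic, and prompt wording are invented for illustration, not CryptoScope’s actual prompts.

```python
# Hypothetical sketch: retrieve relevant pitfalls from a domain corpus,
# then ask the LLM to audit the surrounding logic, not just the API call.

CORPUS = {
    "ECB": "block-cipher mode leaks plaintext structure",
    "IV": "predictable or reused IVs break semantic security",
    "padding": "unchecked padding enables oracle attacks",
}

def retrieve_patterns(code: str) -> list[str]:
    return [note for key, note in CORPUS.items() if key.lower() in code.lower()]

def build_audit_prompt(code: str) -> str:
    patterns = "\n".join(f"- {p}" for p in retrieve_patterns(code))
    return (
        "You are a crypto auditor. Beyond API misuse, inspect parameter\n"
        "checks, corner-case math, and brittle optimizations.\n"
        f"Known pitfalls:\n{patterns}\n\nCode:\n{code}"
    )

print(build_audit_prompt("cipher = AES.new(key, AES.MODE_ECB)"))
```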

August 18, 2025 · 4 min · Zelina

Therapy, Explained: How Multi‑Agent LLMs Turn DSM‑5 Screens into Auditable Logic

TL;DR

DSM5AgentFlow uses three cooperating LLM agents—Therapist, Client, and Diagnostician—to simulate DSM‑5 Level‑1 screenings and then generate step‑by‑step diagnoses tied to specific DSM criteria. Experiments across four LLMs show a familiar trade‑off: dialogue‑oriented models sounded more natural, while a reasoning‑oriented model scored higher on diagnostic accuracy. For founders and PMs in digital mental health, the win is auditability: every symptom claim can be traced to a quoted utterance and an explicit DSM clause. The catch: results are built on synthetic dialogues, so ecological validity and real‑world safety remain open. ...
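The auditability claim is essentially a data‑structure contract. Here is a minimal sketch of what a traceable symptom claim could look like; the field names and the example are ours, not the paper’s schema.

```python
# Sketch: every symptom finding links a quoted client utterance to an
# explicit DSM-5 clause, so an auditor can trace each claim end-to-end.
from dataclasses import dataclass

@dataclass
class SymptomClaim:
    symptom: str
    utterance: str      # verbatim quote from the simulated client
    dsm_criterion: str  # the DSM-5 clause it is claimed to satisfy

@dataclass
class Diagnosis:
    label: str
    evidence: list[SymptomClaim]

dx = Diagnosis(
    label="Major Depressive Disorder (screen positive)",
    evidence=[SymptomClaim(
        symptom="anhedonia",
        utterance="I just don't enjoy anything anymore.",
        dsm_criterion="DSM-5 MDD Criterion A2",
    )],
)
for claim in dx.evidence:
    print(f"{claim.symptom} <- '{claim.utterance}' [{claim.dsm_criterion}]")
```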

August 18, 2025 · 5 min · Zelina

Three’s Company: When LLMs Argue Their Way to Alpha

TL;DR

A role‑based, debate‑driven LLM system—AlphaAgents—coordinates three specialist agents (fundamental, sentiment, valuation) to screen equities, reach consensus, and build a simple, equal‑weight portfolio. In a four‑month backtest starting 2024‑02‑01 on 15 tech names, the risk‑neutral multi‑agent portfolio outperformed the benchmark and single‑agent baselines; risk‑averse variants underperformed in a bull run (as expected). The real innovation isn’t the short backtest—it’s the explainable process: constrained tools per role, structured debate, and explicit risk‑tolerance prompts. ...
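A toy sketch of the debate‑to‑portfolio pipeline: three role signals vote per ticker, and consensus names get equal weights. The voting rule and signals are stand‑ins for the paper’s structured debate, not its actual mechanism.

```python
# Sketch: per-ticker votes from three specialist roles; a majority after
# debate admits the name into an equal-weight portfolio.

SIGNALS = {
    "fundamental": {"AAPL": 1, "NVDA": 1, "TSLA": 0},
    "sentiment":   {"AAPL": 1, "NVDA": 1, "TSLA": 1},
    "valuation":   {"AAPL": 0, "NVDA": 1, "TSLA": 0},
}

def debate(ticker: str) -> bool:
    votes = [SIGNALS[role][ticker] for role in SIGNALS]
    # A real debate lets agents revise votes after seeing critiques;
    # here consensus is a simple majority after the final round.
    return sum(votes) >= 2

def build_portfolio(universe: list[str]) -> dict[str, float]:
    picks = [t for t in universe if debate(t)]
    return {t: 1 / len(picks) for t in picks}  # equal-weight allocation

print(build_portfolio(["AAPL", "NVDA", "TSLA"]))  # {'AAPL': 0.5, 'NVDA': 0.5}
```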

August 18, 2025 · 5 min · Zelina

Forecast: Mostly Context with a Chance of Routing

Large language models can forecast surprisingly well when you hand them the right context. But naïve prompts leave money on the table. Today’s paper introduces four plug‑and‑play strategies—ReDP, CorDP, IC‑DP, RouteDP—that lift accuracy, interpretability, and cost‑efficiency without training new models. Here’s what that means for teams running demand, risk, or ops forecasts.

Why this matters for business readers

Most production forecasts are numeric workhorses (ARIMA/ETS/TS foundation models), while contextual facts—weather advisories, policy changes, promos, strikes—arrive as text. LLMs can read that text and adjust the forecast, but simply stuffing history+context into a prompt (“direct prompting”) is often fragile. The four strategies below are operational patterns you can drop into existing stacks without re‑architecting. ...
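As one example of the pattern, here is a hedged sketch of RouteDP‑style routing: cheap numeric forecasts by default, LLM adjustment only when the context looks material. The difficulty heuristic and both model stubs are our assumptions, not the paper’s implementation.

```python
# Sketch: route easy series to the cheap numeric model; escalate
# context-heavy cases to the LLM before committing to a forecast.

def numeric_forecast(history: list[float]) -> float:
    return sum(history[-3:]) / 3  # stand-in for ARIMA/ETS/TS foundation model

def llm_forecast(history: list[float], context: str) -> float:
    base = numeric_forecast(history)
    # an LLM would read the context and adjust; here, a crude bump
    return base * (1.2 if "promo" in context else 1.0)

def route_dp(history: list[float], context: str) -> float:
    # route on estimated difficulty: strong context signals -> LLM
    difficulty = len(context.split())
    if difficulty > 3:
        return llm_forecast(history, context)
    return numeric_forecast(history)

print(route_dp([100, 110, 120], "storewide promo and holiday weekend"))
```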

August 16, 2025 · 5 min · Zelina

RAGulating Compliance: When Triplets Trump Chunks

TL;DR

A new multi‑agent pipeline builds an ontology‑light knowledge graph from regulatory text, embeds subject–predicate–object triplets alongside their source snippets in one vector store, and uses triplet‑level retrieval to ground LLM answers. The result: better section retrieval at stricter similarity thresholds, slightly higher answer accuracy, and far stronger navigability across related rules. For compliance teams, the payoff is auditability and explainability baked into the data layer, not just the prompt. ...
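A minimal sketch of triplet‑level retrieval with triplets stored next to their source snippets; the bag‑of‑words similarity, store contents, and threshold are stand‑ins for real embeddings and the paper’s pipeline.

```python
# Sketch: each (subject, predicate, object) triplet lives alongside its
# source snippet in one store, so answers cite both fact and clause.

STORE = [
    (("custodian", "must report", "breach within 72h"), "Section 4.2: ..."),
    (("firm", "must retain", "records for 7 years"), "Section 9.1: ..."),
]

def embed(text: str) -> set[str]:
    return set(text.lower().split())  # toy embedding

def retrieve(query: str, threshold: float = 0.15):
    q = embed(query)
    hits = []
    for triplet, snippet in STORE:
        t = embed(" ".join(triplet))
        sim = len(q & t) / len(q | t)  # Jaccard as a similarity stand-in
        if sim >= threshold:           # stricter thresholds stay navigable
            hits.append((sim, triplet, snippet))
    return sorted(hits, reverse=True)

for sim, triplet, snippet in retrieve("how long must records be retained"):
    print(f"{sim:.2f} {triplet} <- {snippet}")
```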

August 16, 2025 · 5 min · Zelina