HyFedRAG: Caching Privacy into Federated RAG

Centralized Retrieval-Augmented Generation (RAG) systems promise smarter answers, but they quietly assume one big, clean dataset in one place. Reality is far messier: hospitals, insurers, or financial groups each hold their own silo, often in incompatible formats, and none are willing—or legally allowed—to pool raw data. The HyFedRAG framework tackles this head‑on by making RAG federated, heterogeneous, and privacy‑aware.

Edge First, Cloud Second

Instead of centralizing records, HyFedRAG runs retrieval at the edge. Each hospital or business unit: ...
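
To make the edge-first idea concrete, here is a minimal sketch of federated retrieval in Python, assuming each silo exposes a local retriever and a de-identification step that runs before anything is shared. All names here (EdgeSilo, deidentify, federated_retrieve) and the scoring logic are illustrative assumptions, not HyFedRAG's API.

```python
# Minimal sketch of edge-first federated retrieval. Raw records never leave
# the silo; only de-identified passages and scores cross the boundary.
import re
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float

class EdgeSilo:
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents  # raw records stay inside this object

    def retrieve_local(self, query, k=3):
        # Stand-in relevance: term overlap between query and document.
        terms = set(query.lower().split())
        scored = [Passage(d, len(terms & set(d.lower().split())))
                  for d in self.documents]
        return sorted(scored, key=lambda p: p.score, reverse=True)[:k]

def deidentify(passage):
    # Placeholder privacy step: scrub digit runs (IDs, dates) at the edge.
    # A real pipeline would do PII detection or local summarization.
    return Passage(re.sub(r"\d+", "[REDACTED]", passage.text), passage.score)

def federated_retrieve(query, silos, k=5):
    # The cloud side only ever sees de-identified candidates.
    candidates = [deidentify(p) for s in silos
                  for p in s.retrieve_local(query) if p.score > 0]
    return sorted(candidates, key=lambda p: p.score, reverse=True)[:k]

silos = [
    EdgeSilo("hospital_a", ["patient 1042 diabetes follow-up", "flu clinic notes"]),
    EdgeSilo("insurer_b", ["claim 77 diabetes medication", "dental claim 12"]),
]
for p in federated_retrieve("diabetes medication", silos):
    print(p.score, p.text)
```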

September 12, 2025 · 3 min · Zelina

Pareto on Autopilot: Evolving RL Policies for Messy Supply Chains

TL;DR

Most “multi‑objective” solutions collapse trade‑offs into a single number. MORSE keeps the trade‑offs alive: it evolves a Pareto front of policies—not just solutions—so operators can switch policies in real time as priorities shift (profit ↔ emissions ↔ lead time). Add a CVaR knob and the system becomes tail‑risk aware, reducing catastrophic outcomes without babysitting.

Why this matters (for operators & P&L owners)

Supply chains live in tension: service levels vs working capital, speed vs emissions, resilience vs cost. Traditional methods either: ...
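
Two of those ideas fit in a few lines: a dominance filter that keeps a Pareto front of policies instead of scalarizing objectives, and a CVaR score for tail risk. This is a sketch under assumed data shapes, not MORSE's code.

```python
# Pareto front over policies plus a CVaR tail-risk score; both are generic
# textbook versions, shown only to make the excerpt's vocabulary concrete.
import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse on every objective and strictly
    # better on at least one (objectives oriented so higher is better).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(policies):
    # policies: (name, objective_vector), e.g. (profit, -emissions, -lead_time)
    return [(name, obj) for name, obj in policies
            if not any(dominates(other, obj)
                       for _, other in policies if other != obj)]

def cvar(returns, alpha=0.95):
    # CVaR: mean of the worst (1 - alpha) fraction of outcomes.
    cutoff = np.quantile(returns, 1 - alpha)
    return returns[returns <= cutoff].mean()

policies = [
    ("aggressive", (9.0, -5.0, -2.0)),
    ("balanced",   (7.0, -3.0, -2.5)),
    ("green",      (5.0, -1.0, -4.0)),
    ("dominated",  (4.0, -4.0, -4.0)),   # strictly worse than "balanced"
]
print(pareto_front(policies))  # "dominated" drops out; the rest are trade-offs
print(cvar(np.random.default_rng(0).normal(1.0, 2.0, 10_000)))
```

Keeping the whole front is what lets operators "switch policies in real time": the choice among non-dominated policies is a business decision, not a modeling one.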

September 12, 2025 · 4 min · Zelina

Graph and Circumstance: Maestro Conducts Reliable AI Agents

When agent frameworks stall in the real world, the culprit is rarely just a bad prompt. It’s the wiring: missing validators, brittle control flow, no explicit state, and second-hop retrieval that never gets the right handle. Maestro proposes something refreshingly uncompromising: optimize both the agent’s graph and its configuration together, with hard budgets on rollouts, latency, and cost—and let textual feedback from traces steer edits as much as numeric scores. ...
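
A hedged sketch of what "optimize the graph and its configuration together, under hard budgets" can look like as a search loop. The edit proposals, scoring, and budget fields below are toy stand-ins, not Maestro's operators; the paper also feeds textual trace feedback into the proposals, which this stub omits.

```python
# Budgeted joint search over a toy agent "graph" (here just a dict of
# structural and config knobs). Illustrative only.
import random

def optimize(seed, propose_edit, evaluate, budget, max_rollouts=50):
    best, best_score = seed, float("-inf")
    for _ in range(max_rollouts):              # hard rollout budget
        trial = propose_edit(best)
        score, cost, latency = evaluate(trial)
        if cost > budget["cost"] or latency > budget["latency"]:
            continue                            # hard cost/latency caps prune it
        if score > best_score:
            best, best_score = trial, score
    return best, best_score

def propose_edit(graph):
    trial = dict(graph)
    knob = random.choice(["validators", "retrieval_hops"])
    trial[knob] = max(0, trial[knob] + random.choice([-1, 1]))
    return trial

def evaluate(graph):
    # Pretend validators and a second retrieval hop improve quality but cost more.
    score = 2 * graph["validators"] + 3 * (graph["retrieval_hops"] >= 2)
    return score, 0.5 * graph["validators"], 150 * graph["retrieval_hops"]

best, score = optimize({"validators": 1, "retrieval_hops": 1},
                       propose_edit, evaluate, budget={"cost": 2.0, "latency": 400})
print(best, score)
```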

September 11, 2025 · 5 min · Zelina

Mind the Gap: How OSC Turns Agent Chatter into Compound Intelligence

Multi‑agent LLMs work great on paper and go sideways in practice. We over‑select experts, flood the channel with verbose thoughts, and then pray a meta‑LLM can stitch it all together. OSC (Orchestrating Cognitive Synergy) proposes a missing middle: a learned orchestration layer that constantly models what each agent knows, spots “cognitive gaps,” and then tells agents how to talk—what to say, to whom, and at what level of detail—before the aggregator votes. ...
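
An illustrative, deliberately simplified version of that orchestration layer: represent each agent's knowledge state as a vector, measure one-sided "cognitive gaps", and direct who briefs whom at what level of detail. OSC learns this layer end to end; the heuristic below only shows the shape of the computation, with all names assumed.

```python
# Gap-driven message planning over toy per-topic knowledge estimates.
import numpy as np

def cognitive_gap(know_a, know_b):
    # Large when A knows things B does not (a one-sided deficit, not a distance).
    return float(np.clip(know_a - know_b, 0, None).sum())

def plan_messages(knowledge, threshold=0.5):
    directives = []
    for src in knowledge:
        for dst in knowledge:
            if src == dst:
                continue
            gap = cognitive_gap(knowledge[src], knowledge[dst])
            if gap > threshold:   # only speak when there is something to close
                directives.append({"from": src, "to": dst,
                                   "detail": "high" if gap > 1.0 else "brief"})
    return directives

knowledge = {
    "retriever": np.array([0.9, 0.1, 0.8]),   # per-topic familiarity estimates
    "coder":     np.array([0.2, 0.9, 0.3]),
    "critic":    np.array([0.5, 0.5, 0.5]),
}
print(plan_messages(knowledge))
```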

September 11, 2025 · 4 min · Zelina

Model Portfolio: When LLMs Sit the CFA

If your firm is debating whether to trust an LLM on investment memos, this study is a gift: 1,560 questions from official CFA mock exams across Levels I–III, run on three model archetypes—multimodal generalist (GPT‑4o), deep-reasoning specialist (GPT‑o1), and lightweight cost‑saver (o3‑mini)—both zero‑shot and with a domain‑reasoning RAG pipeline. Below is what matters for adoption, not just leaderboard bragging rights.

What the paper really shows

Reasoning beats modality for finance. The reasoning‑optimized model (GPT‑o1) dominates across levels; the generalist (GPT‑4o) is inconsistent, especially on math‑heavy Level II.

RAG helps where context is long and specialized. Gains are largest at Level III (portfolio cases) and in Fixed Income/Portfolio Management, modest at Level I.

Retrieval cannot fix arithmetic. Most errors are knowledge gaps, not reading problems. Readability barely moves accuracy; the bottleneck is surfacing the right curriculum facts and applying them.

Cost–accuracy has a sweet spot. o3‑mini + targeted RAG is strong enough for high‑volume workflows; o1 should be reserved for regulated, high‑stakes analysis.

Executive snapshot

CFA Level   GPT‑4o (ZS → RAG)   GPT‑o1 (ZS → RAG)   o3‑mini (ZS → RAG)   Takeaway
I           78.6% → 79.4%       94.8% → 94.8%       87.6% → 88.3%        Foundations already in‑model; RAG adds little
II          59.6% → 60.5%       89.3% → 91.4%       79.8% → 84.3%        Level II exposes math + integration gaps; RAG helps smaller models most
III         64.1% → 68.6%       79.1% → 87.7%       70.9% → 76.4%        Case‑heavy; RAG is decisive, especially for o1

ZS = zero‑shot. Accuracies are from the paper’s aggregated results. ...
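
The cost-accuracy takeaway compresses to a routing rule. A toy version, with thresholds and labels assumed for illustration rather than taken from the paper:

```python
# Model routing implied by the sweet-spot finding: cheap model for volume,
# retrieval where curriculum context matters, expensive reasoning model only
# for regulated or high-stakes work. Field names are invented.
def route(task):
    if task["regulated"] or task["stakes"] == "high":
        return "gpt-o1"            # accuracy first, cost second
    if task["needs_curriculum_context"]:
        return "o3-mini + RAG"     # retrieval closes most of the gap
    return "o3-mini"               # bulk workload at the lowest cost

print(route({"regulated": False, "stakes": "low",
             "needs_curriculum_context": True}))   # -> "o3-mini + RAG"
```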

September 11, 2025 · 4 min · Zelina

Parallel Minds, Shorter Time: ParaThinker’s Native Thought Width

The pitch: We’ve stretched LLM “depth” by making models think longer. ParaThinker flips the axis—training models to think wider: spawn several independent lines of thought in parallel and then fuse them. The result is higher accuracy than single‑path “long thinking” at roughly the same wall‑clock time—and it scales.

TL;DR for operators

What it is: An end‑to‑end framework that natively generates multiple reasoning paths with special control tokens, then summarizes using cached context.

Why it matters: It tackles the test‑time scaling bottleneck (aka Tunnel Vision) where early tokens lock a model into a suboptimal path.

Business takeaway: You can trade a bit of GPU memory for more stable, higher‑quality answers at nearly the same latency—especially on math/logic‑heavy tasks and agentic workflows.

The problem: “Think longer” hits a wall

Sequential test‑time scaling (à la o1 / R1‑style longer CoT) delivers diminishing returns. After a point, more tokens don’t help; they reinforce early mistakes. ParaThinker names this failure mode Tunnel Vision—the first few tokens bias the entire trajectory. If depth traps us, width can free us. ...
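
The width-over-depth idea is easy to mock up. In the sketch below, `generate` and `fuse` are placeholders standing in for model calls; ParaThinker does this natively inside one model, using special control tokens and reused KV cache rather than separate requests.

```python
# "Think wider": sample several independent reasoning paths in parallel,
# then answer once from all of them. Placeholder functions, not ParaThinker.
from concurrent.futures import ThreadPoolExecutor

def generate(question, path_id):
    # Stand-in for one independent reasoning trajectory (higher temperature
    # sampling would make the paths genuinely diverse).
    return f"[path {path_id}] reasoning about: {question}"

def fuse(question, paths):
    # Stand-in for the fusion pass that reads every path and answers once.
    joined = "\n".join(paths)
    return f"Answer to '{question}' from {len(paths)} paths:\n{joined}"

def parathink(question, width=4):
    with ThreadPoolExecutor(max_workers=width) as pool:
        paths = list(pool.map(lambda i: generate(question, i), range(width)))
    return fuse(question, paths)

print(parathink("Is 2**11 - 1 prime?"))
```

Because the paths are independent, a bad opening token can sink at most one of them, which is exactly the hedge against Tunnel Vision.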

September 11, 2025 · 4 min · Zelina

Plan, Then Rewrite: Why Explicit Intent Wins in Agent Workflows

When assistants coordinate multiple tools or agents, the biggest unforced error is planning off the raw chat log. RECAP (REwriting Conversations for Agent Planning) argues—and empirically shows—that a slim “intent rewriter” sitting between the dialogue and the planner yields better, cleaner plans, especially in the messy realities of ambiguity, intent drift, and mixed goals. The headline: rewriting the conversation into a concise, up‑to‑date intent beats throwing the whole transcript at your planner. ...
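
Structurally the pipeline is small: a rewriter collapses the transcript into one current intent, and only that intent reaches the planner. The stub below shows the wiring; RECAP's rewriter is a trained model, not the last-turn heuristic used here.

```python
# Rewrite-then-plan wiring. Both functions are placeholders for model calls.
def rewrite_intent(transcript):
    # Stand-in: keep the latest user goal. A real rewriter is an LLM call
    # that resolves ambiguity, drift, and mixed goals across the dialogue.
    user_turns = [text for speaker, text in transcript if speaker == "user"]
    return f"Current intent: {user_turns[-1]}"

def plan(intent):
    return [f"step 1: parse '{intent}'", "step 2: select tools", "step 3: execute"]

transcript = [
    ("user", "Book me a flight to Berlin next week."),
    ("assistant", "Sure, which dates?"),
    ("user", "Actually, make it Munich, and add a hotel."),
]
print(plan(rewrite_intent(transcript)))  # plans off the rewritten intent, not the raw log
```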

September 11, 2025 · 4 min · Zelina

Agreeable to a Fault: Why LLM ‘People’ Can’t Hold Their Ground

If you’ve been tempted to A/B‑test a marketing idea on thousands of synthetic “customers,” read this first. A new study introduces a dead‑simple but devastating test for LLM‑based agents: ask them to first state their internal stance (preference) and their openness to persuasion, then drop them into a short dialogue and check whether their behavior matches what they just claimed. That’s it. If agents are believable stand‑ins for people, the conversation outcome should line up with those latent states. ...
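
The check itself is almost trivially simple to encode, which is part of its appeal. A sketch, with the data structures assumed:

```python
# Stated-stance vs behavior consistency check, as the excerpt describes it:
# elicit stance and openness first, run the dialogue, then compare.
def consistent(stated_stance, openness, final_stance):
    changed = stated_stance != final_stance
    if openness == "closed":
        return not changed    # an agent that claimed to be closed should hold firm
    return True               # an open agent may legitimately change or hold

agents = [
    {"stated": "prefers brand A", "openness": "closed", "final": "prefers brand B"},
    {"stated": "prefers brand A", "openness": "open",   "final": "prefers brand B"},
]
for a in agents:
    print(consistent(a["stated"], a["openness"], a["final"]))  # False, then True
```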

September 8, 2025 · 5 min · Zelina

Pieces, Not Puzzles: How ArcMemo Turns LLM Reasoning into Reusable Skills

TL;DR

Most memory systems hoard instances (queries, answers, snippets). ArcMemo instead distills concepts—compact, reusable abstractions of what a model learned while solving a problem. Those concepts are retrieved and recombined for new problems. On ARC‑AGI, this design beats strong no‑memory baselines and continues to scale with retries, showing a credible path to lifelong, test‑time learning without finetuning.

Why this paper matters

The status quo of “inference‑time scaling” is a treadmill: longer chains of thought today, amnesia tomorrow. Enterprises building agentic systems—customer ops copilots, finance/excel agents, or low‑code RPA flows—need their agents to keep what they learn and apply it later without weight updates. ArcMemo’s move from instance‑level to concept‑level memory is the right abstraction boundary: ...
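
Here is what concept-level (rather than instance-level) memory can look like in miniature: distill a cue-plus-strategy pair after each solve, then retrieve by cue overlap on new tasks. The schema and matching rule are assumptions for illustration, not ArcMemo's.

```python
# Concept memory: store abstractions of what worked, not raw Q/A instances.
from dataclasses import dataclass, field

@dataclass
class Concept:
    cue: set        # abstract features signaling when the concept applies
    strategy: str   # compact description of what worked

@dataclass
class ConceptMemory:
    concepts: list = field(default_factory=list)

    def distill(self, task_features, strategy):
        # Called after a successful solve; in ArcMemo an LLM does the distilling.
        self.concepts.append(Concept(set(task_features), strategy))

    def retrieve(self, task_features, k=2):
        feats = set(task_features)
        ranked = sorted(self.concepts,
                        key=lambda c: len(c.cue & feats), reverse=True)
        return [c.strategy for c in ranked[:k] if c.cue & feats]

memory = ConceptMemory()
memory.distill({"grid", "symmetry"}, "mirror the grid across its axis of symmetry")
memory.distill({"grid", "color-count"}, "map colors by frequency rank")
print(memory.retrieve({"grid", "symmetry", "rotation"}))
```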

September 8, 2025 · 5 min · Zelina

Plan, Act, Replan: When LLM Agents Run the Aisles

Modern retail planning isn’t a spreadsheet; it’s a loop. A new supply‑chain agent framework—deployed at JD.com’s scale—treats planning as a closed‑loop system: gather data → generate plans → execute → diagnose → correct → repeat. That shift from “one‑and‑done forecasting” to continuous replanning is the core idea worth copying.

What’s actually new here

Agentic decomposition around business intents. Instead of dumping a vague prompt into a model, the system classifies the operator’s request into three intent families: (1) inventory turnover & diagnostics, (2) in‑stock monitoring, (3) sales/inventory/procurement recommendations. Each intent triggers a structured task list rather than ad‑hoc code.

Atomic analytics, not monoliths. The execution agent generates workflows as chains of four primitives—Filter → Transform → Groupby → Sort—and stitches them with function calls to vetted business logic. This keeps code inspectable, traceable, and reusable.

Dynamic reconfiguration. After every sub‑task, observations feed back into the planner, which prunes, reorders, or adds steps. The output isn’t a static report; it’s a plan that learns while it runs.

Why it matters for operators (not just researchers)

Traditional MIP‑heavy or rule‑based planning works well when the world is stationary and well‑specified. Retail isn’t. Promotions, seasonality, logistics bottlenecks, supplier constraints—these create moving objective functions. The agentic design here bakes in: ...
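
The four-primitive design is worth copying precisely because it stays this small. A toy chain over plain dict rows, with field names and the slow-mover threshold invented for illustration; the deployed system stitches these primitives to vetted business logic, not toy lambdas:

```python
# Filter -> Transform -> Groupby -> Sort as composable primitives.
rows = [
    {"sku": "A", "region": "north", "units": 120, "days_on_hand": 45},
    {"sku": "B", "region": "north", "units": 30,  "days_on_hand": 90},
    {"sku": "C", "region": "south", "units": 80,  "days_on_hand": 60},
]

def filter_rows(rows, pred):
    return [r for r in rows if pred(r)]

def transform(rows, fn):
    return [fn(dict(r)) for r in rows]   # copy so the source rows stay intact

def group_by(rows, key):
    grouped = {}
    for r in rows:
        grouped.setdefault(key(r), []).append(r)
    return grouped

def sort_rows(rows, key, reverse=False):
    return sorted(rows, key=key, reverse=reverse)

def add_turnover_flag(r):
    r["slow_mover"] = r["days_on_hand"] > 60   # assumed threshold
    return r

slow = filter_rows(transform(rows, add_turnover_flag), lambda r: r["slow_mover"])
by_region = group_by(sort_rows(slow, key=lambda r: r["units"], reverse=True),
                     key=lambda r: r["region"])
print(by_region)   # inspectable at every step: each primitive returns plain data
```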

September 8, 2025 · 4 min · Zelina