
Infinite Tasks, Finite Minds: Why Agents Keep Forgetting—and How InfiAgent Cheats Time

Opening — Why this matters now: Everyone wants an autonomous agent that can just keep going. Write a literature review. Audit 80 papers. Run an open-ended research project for days. In theory, large language models (LLMs) are perfect for this. In practice, they quietly collapse under their own memory. The problem isn’t model intelligence. It’s state. ...

January 7, 2026 · 4 min · Zelina

MAGMA Gets a Memory: Why Flat Retrieval Is No Longer Enough

Opening — Why this matters now: LLM agents are no longer judged by how clever they sound in a single turn. They are judged by whether they remember, whether they reason, and—more awkwardly—whether they can explain why an answer exists at all. As agentic systems move from demos to infrastructure, the limits of flat retrieval become painfully obvious. Semantic similarity alone is fine when the question is what. It collapses when the question is when, why, or who caused what. The MAGMA paper enters precisely at this fault line. ...

January 7, 2026 · 4 min · Zelina

Rationales Before Results: Teaching Multimodal LLMs to Actually Reason About Time Series

Opening — Why this matters now: Multimodal LLMs are increasingly being asked to reason about time series: markets, traffic, power grids, pollution. Charts are rendered. Prompts are polished. The answers sound confident. And yet—too often—they’re wrong for the most boring reason imaginable: the model never actually reasons. Instead, it pattern-matches. This paper dissects that failure mode with unusual clarity. The authors argue that the bottleneck is not model scale, data access, or even modality alignment. It’s the absence of explicit reasoning priors that connect observed temporal patterns to downstream outcomes. Without those priors, multimodal LLMs hallucinate explanations after the fact, mistaking surface similarity for causality. ...

January 7, 2026 · 4 min · Zelina

Trust Issues at 35,000 Feet: Assuring AI Digital Twins Before They Fly

Opening — Why this matters now: Digital twins have quietly become one of aviation’s favorite promises: simulate reality well enough, and you can test tomorrow’s airspace decisions today—safely, cheaply, and repeatedly. Add AI agents into the mix, and the ambition escalates fast. We are no longer just modeling aircraft trajectories; we are training decision-makers. ...

January 7, 2026 · 5 min · Zelina

When Pipes Speak in Probabilities: Teaching Graphs to Explain Their Leaks

Opening — Why this matters now: Water utilities do not suffer from a lack of algorithms. They suffer from a lack of trustworthy ones. In an industry where dispatching a repair crew costs real money and false positives drain already thin operational budgets, a black‑box model—no matter how accurate—remains a risky proposition. Leak detection in water distribution networks (WDNs) has quietly become an ideal stress test for applied AI. The data are noisy, the events are rare, the topology is non‑Euclidean, and the consequences of wrong decisions are painfully tangible. This paper enters precisely at that fault line: it asks not only where a leak might be, but also how an engineer can understand why the model thinks so. ...

January 7, 2026 · 4 min · Zelina

When Prompts Learn Themselves: The Death of Task Cues

Opening — Why this matters now: Prompt engineering was supposed to be a temporary inconvenience. A short bridge between pre‑trained language models and real-world deployment. Instead, it became a cottage industry—part folklore, part ritual—where minor phrasing changes mysteriously decide whether your system works or embarrasses you in production. The paper Automatic Prompt Engineering with No Task Cues and No Tuning quietly dismantles much of that ritual. It asks an uncomfortable question: what if prompts don’t need us nearly as much as we think? And then it answers it with a system that is deliberately unglamorous—and therefore interesting. ...

January 7, 2026 · 3 min · Zelina

EverMemOS: When Memory Stops Being a Junk Drawer

Opening — Why this matters now: Long-context models were supposed to solve memory. They didn’t. Despite six-figure token windows, modern LLM agents still forget, contradict themselves, and—worse—remember the wrong things at the wrong time. The failure mode is no longer missing information. It is unstructured accumulation. We’ve built agents that can recall fragments indefinitely but cannot reason over them coherently. ...

January 6, 2026 · 3 min · Zelina

FormuLLA: When LLMs Stop Talking and Start Formulating

Opening — Why this matters now: Pharmaceutical 3D printing has promised personalization for over a decade. In practice, it has mostly delivered spreadsheets, failed filaments, and a great deal of human patience. The bottleneck has never been imagination—it has been formulation. Every new drug–excipient combination still demands expensive trial-and-error, even as printers themselves have matured. ...

January 6, 2026 · 4 min · Zelina

Jerk Matters: Teaching Reinforcement Learning Some Mechanical Manners

Opening — Why this matters now: Reinforcement learning (RL) has a bad habit: it optimizes rewards with the enthusiasm of a short‑term trader and the restraint of a caffeinated squirrel. In simulation, this is tolerable. In the real world—where motors wear down, compressors hate being toggled, and electricity bills arrive monthly—it is not. As RL inches closer to deployment in robotics, energy systems, and smart infrastructure, one uncomfortable truth keeps resurfacing: reward-optimal policies are often physically hostile. The question is no longer whether RL can control real systems, but whether it can do so without shaking them apart. ...

January 6, 2026 · 4 min · Zelina

Pulling the Thread: Why LLM Reasoning Often Unravels

Opening — Why this matters now: Large Language Model (LLM) agents have crossed an uncomfortable threshold. They are no longer just autocomplete engines or polite chat companions; they are being entrusted with financial decisions, scientific hypothesis generation, and multi-step autonomous actions. With that elevation comes a familiar demand: explain yourself. Chain-of-Thought (CoT) reasoning was supposed to be the answer. Let the model “think out loud,” and transparency follows—or so the story goes. The paper behind Project Ariadne argues, with unsettling rigor, that this story is largely fiction. Much of what we see as reasoning is closer to stagecraft: convincing, articulate, and causally irrelevant. ...

January 6, 2026 · 4 min · Zelina