
Attention, But Make It Optional

Opening — When more layers stop meaning more intelligence
The scaling era taught us a simple mantra: stack more layers, get better models. Then deployment happened. Suddenly, latency, energy bills, and GPU scarcity started asking uncomfortable questions—like whether every layer in a 40-layer Transformer is actually doing any work. This paper answers that question with unsettling clarity: many attention layers aren’t lazy—they’re deliberately silent. And once you notice that, pruning them becomes less of an optimization trick and more of a design correction. ...
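
For intuition, here is a hypothetical probe (not the paper's method) for spotting candidate "silent" attention layers: compare the norm of each attention sublayer's output against the norm of the hidden state it receives. The GPT-2 layout (model.h, .attn) and the ranking-by-ratio heuristic are assumptions for illustration only.

```python
# Hypothetical sketch (not the paper's method): rank attention layers by how
# little their output contributes relative to the hidden state they receive.
# Layers with tiny ratios are candidates for being "silent".
import torch
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
model.eval()

contributions = {}

def make_hook(idx):
    def hook(module, inputs, output):
        # inputs[0]: layer-normalized hidden state fed to attention;
        # output[0]: attention output added back to the residual stream.
        ratio = output[0].norm() / (inputs[0].norm() + 1e-8)  # rough proxy
        contributions[idx] = ratio.item()
    return hook

hooks = [blk.attn.register_forward_hook(make_hook(i))
         for i, blk in enumerate(model.h)]

with torch.no_grad():
    ids = tok("Stack more layers, get better models.", return_tensors="pt")
    model(**ids)

for h in hooks:
    h.remove()

# Smallest relative contributions first: the quietest attention layers.
for idx, r in sorted(contributions.items(), key=lambda kv: kv[1])[:5]:
    print(f"layer {idx}: attention/hidden norm ratio = {r:.4f}")
```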

December 27, 2025 · 4 min · Zelina

Guardrails Over Gigabytes: Making LLM Coding Agents Behave

Opening — Why this matters now
AI coding agents are everywhere—and still, maddeningly unreliable. They pass unit tests they shouldn’t. They hallucinate imports. They invent APIs with confidence that would be admirable if it weren’t so destructive. The industry response has been predictable: bigger models, longer prompts, more retries. This paper proposes something less glamorous and far more effective: stop asking stochastic models to behave like deterministic software engineers. ...
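
As a flavor of what a deterministic guardrail can look like (my illustration, not the paper's framework), here is a small check that rejects agent-written Python whose top-level imports do not resolve, catching hallucinated modules before anything runs. The function name unresolved_imports and the rejection policy are assumptions.

```python
# Illustrative guardrail (not the paper's framework): reject agent-written
# Python whose top-level imports cannot be found in the current environment.
import ast
import importlib.util

def unresolved_imports(source: str) -> list[str]:
    """Return imported module names in `source` that cannot be resolved."""
    missing = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]  # check the top-level package only
            if importlib.util.find_spec(root) is None:
                missing.append(name)
    return missing

agent_patch = "import numpy as np\nimport totally_made_up_pkg\n"
problems = unresolved_imports(agent_patch)
if problems:
    print("Reject patch, unresolved imports:", problems)  # deterministic gate
```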

December 27, 2025 · 4 min · Zelina

LLMs, Gotta Think ’Em All: When Pokémon Battles Become a Serious AI Benchmark

Opening — Why this matters now
For years, game AI has been split between two extremes: brittle rule-based scripts and opaque reinforcement learning behemoths. Both work—until the rules change, the content shifts, or players behave in ways the designers didn’t anticipate. Pokémon battles, deceptively simple on the surface, sit exactly at this fault line. They demand structured reasoning, probabilistic judgment, and tactical foresight, but also creativity when the meta evolves. ...

December 22, 2025 · 4 min · Zelina

Don’t Tell the Robot What You Know

Opening — Why this matters now
Large Language Models are very good at knowing. They are considerably worse at helping. As AI systems move from chat interfaces into robots, copilots, and assistive agents, collaboration becomes unavoidable. And collaboration exposes a deeply human cognitive failure that LLMs inherit wholesale: the curse of knowledge. When one agent knows more than another, it tends to communicate as if that knowledge were shared. ...

December 20, 2025 · 4 min · Zelina

Greedy Enough to Win: When Loss Starts Driving the Learning Rate

Opening — Why this matters now
Modern deep learning training is an odd contradiction. We obsess over architectures, data curation, and trillion-token scaling laws—then quietly accept Cosine Annealing as if it were gravity. Learning rate schedules are often inherited, not argued for. This paper challenges that complacency with a scheduler that does something almost offensive in its simplicity: it just watches the loss and reacts. ...
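
The excerpt does not spell out the exact update rule, so here is a minimal sketch of the general idea of a greedy, loss-reactive schedule: grow the learning rate while a smoothed loss keeps falling, and cut it when the loss rises. The smoothing and growth/backoff constants are illustrative, not the paper's.

```python
# Minimal sketch of a loss-reactive ("greedy") learning-rate rule.
# The constants below are illustrative placeholders, not the paper's values.
class GreedyLossLR:
    def __init__(self, lr=1e-3, grow=1.02, shrink=0.7, ema=0.9):
        self.lr, self.grow, self.shrink, self.ema = lr, grow, shrink, ema
        self.smoothed = None

    def step(self, loss: float) -> float:
        if self.smoothed is None:
            self.smoothed = loss
            return self.lr
        prev = self.smoothed
        self.smoothed = self.ema * self.smoothed + (1 - self.ema) * loss
        # Greedy rule: push the LR up while the smoothed loss falls,
        # back off sharply the moment it rises.
        self.lr *= self.grow if self.smoothed < prev else self.shrink
        return self.lr

sched = GreedyLossLR()
for loss in [2.3, 2.1, 2.0, 2.4, 1.9]:
    print(f"loss={loss:.2f} -> lr={sched.step(loss):.5f}")
```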

December 17, 2025 · 3 min · Zelina

When Agents Learn to Test Themselves: TDFlow and the Future of Software Engineering

From Coding to Testing: The Shift in Focus
TDFlow, developed by researchers at Carnegie Mellon, UC San Diego, and Johns Hopkins, presents a provocative twist on how we think about AI-driven software engineering. Instead of treating the large language model (LLM) as a creative coder, TDFlow frames the entire process as a test-resolution problem—where the agent’s goal is not to write elegant code, but simply to make the tests pass. ...
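
In that spirit, a test-resolution loop can be sketched as follows. This is a schematic, not TDFlow's implementation, and propose_patch stands in for whatever LLM-driven edit step the agent uses.

```python
# Schematic test-resolution loop (not TDFlow's actual implementation).
# The repo layout, pytest command, and propose_patch() are hypothetical.
import subprocess

def tests_pass(repo_dir: str) -> tuple[bool, str]:
    """Run the suite; success is defined purely by the exit code."""
    result = subprocess.run(
        ["python", "-m", "pytest", "-q"],
        cwd=repo_dir, capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr

def resolve(repo_dir: str, propose_patch, max_rounds: int = 5) -> bool:
    ok, log = tests_pass(repo_dir)
    for _ in range(max_rounds):
        if ok:
            return True               # goal reached: the tests pass
        propose_patch(repo_dir, log)  # LLM edits files based on the failure log
        ok, log = tests_pass(repo_dir)
    return ok
```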

November 2, 2025 · 5 min · Zelina

Beyond Answers: Measuring How Deep Research Agents Really Think

Artificial intelligence is moving past chatbots that answer questions. The next frontier is Deep Research Agents (DRAs) — AI systems that can decompose complex problems, gather information from multiple sources, reason across them, and synthesize their findings into structured reports. But until recently, there was no systematic way to measure how well these agents perform beyond surface-level reasoning. That is the gap RigorousBench aims to fill.
From Q&A to Reports: The Benchmark Shift
Traditional LLM benchmarks — like GAIA, WebWalker, or BrowseComp — test how accurately a model answers factual questions. This approach works for short-form reasoning but fails for real-world research tasks that demand long-form synthesis and multi-source validation. ...

October 9, 2025 · 3 min · Zelina

Promptfolios: When Buffett Becomes a System Prompt

TL;DR
A fresh study builds five prompt‑guided LLM agents—each emulating a legendary investor (Buffett, Graham, Greenblatt, Piotroski, Altman)—and backtests them on NASDAQ‑100 stocks from Q4 2023 to Q2 2025. Each agent follows a deterministic pipeline: collect metrics → score → construct a weighted portfolio. The Buffett agent tops the pack with ~42% CAGR, beating the NASDAQ‑100 and S&P 500 benchmarks in the window tested. The result isn’t “LLMs discovered alpha,” but rather: prompts can reliably translate qualitative philosophies into reproducible, quantitative rules. The real opportunity for practitioners is governed agent design—measurable, auditable prompts tied to tools—plus robust validation far beyond a single bullish regime. ...
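
To make the pipeline shape concrete, here is a toy sketch of collect metrics → score → weighted portfolio. The metrics and the scoring rule are placeholders, not the study's actual Buffett-agent prompt logic.

```python
# Toy sketch of the pipeline shape (collect metrics -> score -> weights).
# The metric values and scoring rule are placeholders, not the study's rules.
metrics = {
    "AAPL": {"roe": 0.29, "debt_to_equity": 1.5, "fcf_yield": 0.04},
    "MSFT": {"roe": 0.35, "debt_to_equity": 0.5, "fcf_yield": 0.03},
    "NVDA": {"roe": 0.55, "debt_to_equity": 0.4, "fcf_yield": 0.02},
}

def score(m: dict) -> float:
    # Placeholder "quality at a reasonable price" rule: reward return on
    # equity and free-cash-flow yield, penalize leverage.
    return m["roe"] + m["fcf_yield"] - 0.1 * m["debt_to_equity"]

scores = {t: max(score(m), 0.0) for t, m in metrics.items()}
total = sum(scores.values())
portfolio = {t: s / total for t, s in scores.items()}  # score-proportional weights
print(portfolio)
```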

October 9, 2025 · 5 min · Zelina

The Mr. Magoo Problem: When AI Agents 'Just Do It'

In Just Do It!? Computer-Use Agents Exhibit Blind Goal-Directedness, researchers from Microsoft and UC Riverside reveal a surprisingly human flaw in autonomous AI systems: overconfidence. Like a digital version of Mr. Magoo—the well-meaning cartoon character who bumbles forward despite looming hazards—today’s computer-use agents (CUAs) often pursue tasks blindly, indifferent to feasibility or consequence.
The Rise—and Risk—of GUI Agents
CUAs represent the next frontier of automation: large multimodal models that control desktop interfaces to perform tasks like editing documents, sending emails, or configuring systems. Unlike chatbots, these agents act—clicking, typing, and navigating real operating systems. Yet this freedom exposes them to a unique failure pattern the authors term Blind Goal-Directedness (BGD)—the relentless drive to complete instructions without stopping to ask: should this even be done? ...

October 9, 2025 · 3 min · Zelina

Branching Out of the Box: Tree‑OPO Turns MCTS Traces into Better RL for Reasoning

The punchline
Tree‑OPO takes something many labs already produce—MCTS rollouts from a stronger teacher—and treats them not just as answers but as a curriculum of prefixes. It then optimizes a student with GRPO-like updates, but with staged, tree-aware advantages instead of a flat group mean. The result in math reasoning (GSM8K) is a modest but consistent bump over standard GRPO while keeping memory/complexity low. Why this matters for practitioners: you can get more out of your expensive searches (or teacher traces) without training a value model or lugging around teacher logits during student training. ...
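
A rough way to see the difference between a flat group baseline and a staged, prefix-aware one (my illustration, not Tree‑OPO's exact estimator): baseline each rollout against the other rollouts sharing the same teacher prefix instead of against the whole group.

```python
# Illustrative contrast (not Tree-OPO's exact estimator): flat group-mean
# advantages vs. advantages baselined within each shared MCTS prefix.
from collections import defaultdict

# (prefix_id, reward) pairs for student rollouts expanded from teacher prefixes
rollouts = [("p0", 1.0), ("p0", 0.0), ("p0", 1.0),
            ("p1", 0.0), ("p1", 0.0), ("p1", 1.0)]

# GRPO-style: one flat baseline over the whole group
flat_mean = sum(r for _, r in rollouts) / len(rollouts)
flat_adv = [r - flat_mean for _, r in rollouts]

# Staged, tree-aware: baseline within each prefix group
by_prefix = defaultdict(list)
for p, r in rollouts:
    by_prefix[p].append(r)
staged_adv = [r - sum(by_prefix[p]) / len(by_prefix[p]) for p, r in rollouts]

print("flat:  ", [round(a, 2) for a in flat_adv])
print("staged:", [round(a, 2) for a in staged_adv])
```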

September 17, 2025 · 5 min · Zelina