
The First Hurdle: Why Coding Agents Struggle with Setup

In the race to build autonomous software engineers, large language model (LLM) agents like Devin and Copilot Chat are lauded for fixing bugs, writing code, and even completing tasks from GitHub issues. But what happens when the code doesn’t even run? That’s the uncomfortable gap SetupBench aims to measure—and the results are sobering. SetupBench introduces a 93-task benchmark evaluating a foundational but under-tested skill: bootstrapping a development environment from scratch. Unlike prior benchmarks that hand agents a fully pre-configured Docker container, SetupBench drops them into a barebones Linux sandbox and challenges them to install dependencies, initialize databases, configure background services, and resolve real-world version conflicts. It sounds simple. It isn’t. ...
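
To make the task format concrete, here is a minimal sketch of what a SetupBench-style task might look like: a natural-language instruction paired with a deterministic validation command run inside a fresh sandbox. The field names and the example command are illustrative assumptions, not the benchmark’s actual schema.

```python
# Hypothetical SetupBench-style task: the agent gets the instruction,
# works inside a barebones Linux sandbox, and the harness then runs a
# deterministic shell check to decide pass/fail. Schema is assumed.
import subprocess

task = {
    "instruction": "Install PostgreSQL and create a database named 'app'.",
    "validation_cmd": "psql -lqt | cut -d'|' -f1 | grep -qw app",
}

def validate(task: dict) -> bool:
    """Run the task's success check in the sandbox; exit code 0 = pass."""
    result = subprocess.run(task["validation_cmd"], shell=True)
    return result.returncode == 0
```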

July 15, 2025 · 4 min · Zelina

The Retrieval-Reasoning Tango: Charting the Rise of Agentic RAG

In the AI race to make large language models both factual and capable of reasoning, two camps have emerged: one focused on retrieval-augmented generation (RAG) to fight hallucination, the other on long-chain reasoning to mimic logic. But neither wins alone. This week’s survey by Li et al. (2025), Towards Agentic RAG with Deep Reasoning, delivers the most comprehensive synthesis yet of the field’s convergence point: synergized RAG–Reasoning. It’s no longer a question of whether retrieval helps generation or reasoning helps retrieval—but how tightly the two can co-evolve, often under the coordination of autonomous agents. ...
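
As a rough illustration of what synergized RAG–Reasoning means in practice, here is a minimal sketch of the interleaved loop, with `retrieve` and `reason` as hypothetical stand-ins for a vector store and an LLM call; only the control flow reflects the idea, not any specific system from the survey.

```python
# Sketch of an agentic RAG loop: retrieval and reasoning alternate until
# the reasoner decides the gathered evidence suffices. Both functions are
# toy placeholders for a retriever index and an LLM call.

def retrieve(query: str) -> list[str]:
    corpus = {"capital of France": ["Paris is the capital of France."]}
    return corpus.get(query, [])

def reason(question: str, evidence: list[str]):
    """Return (answer, None) if evidence suffices, else (None, next_query)."""
    if evidence:
        return f"Based on the evidence: {evidence[0]}", None
    return None, question  # a real agent would reformulate the query here

def agentic_rag(question: str, max_rounds: int = 4) -> str:
    evidence, query = [], question
    for _ in range(max_rounds):
        evidence += retrieve(query)
        answer, query = reason(question, evidence)
        if answer:
            return answer
    return "No answer within the retrieval budget."

print(agentic_rag("capital of France"))
```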

July 15, 2025 · 3 min · Zelina

The Sink That Remembers: Solving LLM Memorization Without Forgetting Everything Else

When large language models (LLMs) memorize repeated content during training—be it a phone number, a copyrighted paragraph, or a user’s personal story—the implications go beyond benign repetition. They touch the very core of AI safety, privacy, and trust. And yet, removing this memorized content after training has proven to be a devil’s bargain: anything you subtract tends to weaken the model’s overall capabilities. In their recent ICML 2025 paper, Ghosal et al. propose an elegant reframing of this problem. Rather than performing painful post-hoc surgery on a trained model, they suggest we prepare the model from the outset to isolate memorization into removable compartments—which they call Memorization Sinks (MemSinks). ...
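
A hedged sketch of the compartmentalization idea as the excerpt describes it: each training document gets a document-tied sink token, so verbatim memorization can be steered into (and later deleted from) the sink’s embedding rather than the shared weights. The token naming and hashing scheme below are illustrative assumptions, not the paper’s implementation.

```python
# Illustrative MemSinks-flavored preprocessing: map each document to a
# dedicated sink token and prepend it before tokenization, so repeated
# content has a removable "compartment" to memorize into. All details
# here are assumptions for the sake of the sketch.
import hashlib

def sink_token_for(doc_id: str, n_sinks: int = 1024) -> str:
    """Map a document to one of n_sinks dedicated sink tokens."""
    h = int(hashlib.sha256(doc_id.encode()).hexdigest(), 16)
    return f"<sink_{h % n_sinks}>"

def add_sink(doc_id: str, text: str) -> str:
    """Prepend the document's sink token before tokenization."""
    return f"{sink_token_for(doc_id)} {text}"

print(add_sink("user_123_story", "Once upon a time..."))
# To "unlearn" a document after training, zero or drop the embedding row
# for its sink token instead of editing the model's general weights.
```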

July 15, 2025 · 4 min · Zelina

Chunks, Units, Entities: RAG Rewired by CUE-RAG

Retrieval-Augmented Generation (RAG) has become the go-to technique for grounding large language models (LLMs) in external data. But as anyone building real-world RAG pipelines knows, there’s a growing tension between accuracy and cost. Existing graph-based RAG solutions promise richer semantics than vanilla vector stores, but suffer from two persistent issues: incomplete graphs and retrieval misalignment. The paper “CUE-RAG: Towards Accurate and Cost-Efficient Graph-Based RAG” proposes a structural rethinking. By integrating a multi-partite graph, hybrid extraction, and a query-driven iterative retriever, CUE-RAG achieves state-of-the-art accuracy while cutting indexing costs by up to 72.58%; even a variant that uses no LLM tokens at all still outperforms prior methods. ...
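
To picture the multi-partite index the title alludes to, here is a minimal sketch with three node types (Chunks, knowledge Units, Entities) wired together so retrieval can walk from query-matched entities back to source chunks. The extraction step is mocked; only the graph shape is the point, not CUE-RAG’s actual pipeline.

```python
# Toy tripartite index in the spirit of "Chunks, Units, Entities":
# chunk->unit and unit->entity edges, stored as an undirected adjacency
# map so retrieval can traverse in either direction.
from collections import defaultdict

adj = defaultdict(set)  # typed node -> set of neighbor nodes

def link(a, b):
    adj[a].add(b)
    adj[b].add(a)

def index_chunk(chunk_id: str, units: dict[str, list[str]]) -> None:
    """Wire chunk->unit and unit->entity edges into the index."""
    for unit_text, entities in units.items():
        link(("chunk", chunk_id), ("unit", unit_text))
        for e in entities:
            link(("unit", unit_text), ("entity", e))

index_chunk("doc1#p3", {"Paris is France's capital.": ["Paris", "France"]})

# Query-driven retrieval starts from entities matched in the query and
# walks entity -> unit -> chunk, expanding iteratively.
print(adj[("entity", "Paris")])
```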

July 14, 2025 · 3 min · Zelina

Cognitive Gridlock: Is Consciousness a Jamming Phase?

In the world of physics, when particles in a system become so densely packed or cooled that they lock into place, we call this phenomenon jamming. Sand becoming rigid under pressure, traffic freezing on a highway, or even glass transitioning from fluid to solid—all are governed by this principle. What if the same laws applied to intelligence? A provocative new paper, Consciousness as a Jamming Phase by Kaichen Ouyang, suggests just that: large language models (LLMs) exhibit consciousness-like properties not as a software quirk but as a physical phase transition, mirroring the jamming of particles in disordered systems. ...

July 14, 2025 · 3 min · Zelina

Inner Critics, Better Agents: The Rise of Introspective AI

When AI agents begin to talk to themselves—really talk to themselves—we might just witness a shift in how machine reasoning is conceived. A new paper, “Introspection of Thought Helps AI Agents”, proposes a reasoning framework (INoT) that takes inspiration not from more advanced outputs or faster APIs, but from an old philosophical skill: inner reflection. Rather than chaining external prompts or simulating collaborative agents outside the model, INoT introduces PromptCode—a code-integrated prompt system that embeds a virtual multi-agent debate directly inside the LLM. The result? A substantial increase in reasoning quality (average +7.95%) and a dramatic reduction in token cost (–58.3%) compared to state-of-the-art baselines. Let’s unpack how this works, and why it could redefine our mental model of what it means for an LLM to “think.” ...
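
To make “a virtual multi-agent debate inside the LLM” concrete, here is a hedged sketch of a PromptCode-style prompt: the debate protocol is written as pseudo-code inside a single prompt, so one model call simulates both debaters. The syntax below is invented for illustration; only the overall shape follows the paper’s description.

```python
# Hypothetical PromptCode-style template: the "agents" exist only as
# pseudo-code the LLM is asked to execute internally, replacing an
# external multi-agent orchestration with a single call.

DEBATE_PROMPT = """
You will reason by running this protocol internally before answering:

agent_a = Debater(stance="propose a solution")
agent_b = Debater(stance="find flaws in agent_a's solution")
for round in range(2):
    a_msg = agent_a.argue(problem, history)
    b_msg = agent_b.critique(a_msg)
    history += [a_msg, b_msg]
answer = reconcile(history)  # output only this final answer

Problem: {problem}
"""

def build_inot_prompt(problem: str) -> str:
    return DEBATE_PROMPT.format(problem=problem)

print(build_inot_prompt("Is 2**61 - 1 prime?"))
```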

July 14, 2025 · 4 min · Zelina

Plug Me In: Why LLMs with Tools Beat LLMs with Size

The latest research out of Heriot-Watt University doesn’t just challenge the notion that bigger is better — it quietly dismantles it. In their newly released Athena framework, Nripesh Niketan and Hadj Batatia demonstrate how integrating external APIs into LLM pipelines can outperform even the likes of GPT-4o and LLaMA-Large on real tasks like math and science. And they didn’t just beat them — they lapped them.

Why GPT-4 Still Fumbles Math

Ask GPT-4o to solve a college-level math problem, and it might hallucinate steps or miss basic arithmetic. The reason? LLMs, even at trillion-parameter scale, are not calculators. They’re probabilistic machines trained on patterns, not deterministic reasoners. ...
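
The remedy the Athena framework exemplifies is the familiar tool-use pattern: detect a computation, route it to a deterministic tool, and return the exact result. Below is a minimal, self-contained sketch of such a tool (a safe arithmetic evaluator); the routing and registry details are assumptions, not the paper’s actual API.

```python
# A deterministic arithmetic "tool" an LLM pipeline could call instead of
# letting the model guess: parse the expression into an AST and evaluate
# only whitelisted operators, avoiding the risks of eval().
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def safe_eval(expr: str) -> float:
    """Evaluate a pure arithmetic expression exactly."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(safe_eval("3**4 + 12/5"))  # 83.4, exact where an LLM might slip
```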

July 14, 2025 · 3 min · Zelina

Sound and Fury Signifying Stock Picks

In an age where TikTok traders and YouTube gurus claim market mastery, a new benchmark dataset asks a deceptively simple question: Can AI tell when someone really believes in their own stock pick? The answer, it turns out, reveals not just a performance gap between finfluencers and index funds, but also a yawning chasm between today’s multimodal AI models and human judgment.

Conviction Is More Than a Call to Action

The paper “VideoConviction” introduces a unique multimodal benchmark composed of 288 YouTube videos from 22 financial influencers, or “finfluencers,” spanning over six years of market cycles. From these, researchers extracted 687 stock recommendation segments, annotating each with: ...

July 14, 2025 · 4 min · Zelina

Bias, Baked In: Why Pretraining, Not Fine-Tuning, Shapes LLM Behavior

What makes a large language model (LLM) biased? Is it the instruction tuning data, the randomness of training, or something more deeply embedded? A new paper from Itzhak, Belinkov, and Stanovsky, presented at COLM 2025, delivers a clear verdict: pretraining is the primary source of cognitive biases in LLMs. The implications of this are far-reaching — and perhaps more uncomfortable than many developers would like to admit.

The Setup: Two Steps, One Core Question

The authors dissected the origins of 32 cognitive biases in LLMs using a controlled two-step causal framework: ...

July 13, 2025 · 4 min · Zelina

Prompt Without Words: Distilling GPT Semantics for Smarter Vision Models

When it comes to prompting vision-language models, most methods rely on textual descriptions extracted from large language models like GPT. But those descriptions—“fluffy fur, friendly eyes, golden color”—are often verbose, ambiguous, or flat-out unreliable. What if we could skip that noisy middle step entirely? That’s the premise behind DeMul (Description-free Multi-prompt Learning), a new method presented at ICLR 2025 that quietly delivers a major leap in few-shot image classification. Instead of generating descriptions for each class, DeMul directly distills the semantic knowledge of GPT embeddings into learnable prompt vectors. The result is simpler, more robust, and strikingly effective. ...
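
As a rough picture of description-free distillation, here is a hedged sketch in which learnable prompt vectors are pulled directly toward frozen GPT class embeddings with a cosine loss. The shapes, the loss, and the embedding source are illustrative assumptions, not DeMul’s exact formulation.

```python
# Toy distillation loop: skip generated class descriptions entirely and
# optimize prompt vectors to align with frozen GPT embeddings per class.
import torch
import torch.nn.functional as F

n_classes, dim = 10, 512
gpt_class_emb = torch.randn(n_classes, dim)                 # frozen targets
prompts = torch.nn.Parameter(torch.randn(n_classes, dim))   # learnable prompts
optim = torch.optim.Adam([prompts], lr=1e-2)

for step in range(100):
    # 1 - mean cosine similarity: drives each prompt toward its class embedding
    loss = 1 - F.cosine_similarity(prompts, gpt_class_emb, dim=-1).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()

# After distillation, the prompt vectors stand in for text descriptions
# in the few-shot vision-language classifier.
```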

July 13, 2025 · 3 min · Zelina