
Wheel Smarts > Wheel Reinvention: What GitTaskBench Really Measures

Agents don’t build Rome from scratch—they retrofit the city. GitTaskBench (arXiv:2508.18993) is the first benchmark that grades code agents on how well they exploit existing GitHub repositories to deliver real-world outcomes, not just pass algorithm puzzles. It also puts a price tag on success via an Alpha value that blends accuracy with cost, bringing long-missing business realism to agent evals.

TL;DR
What’s new: 54 tasks across 7 modalities (image, video, speech, office docs, web scraping, security/privacy, biosignals), each paired with a real repo and a practical, automated test harness.
Why it matters: The hard part isn’t just writing code—it’s environment setup, dependency wrangling, repo comprehension, and workflow orchestration.
Headline result: Even the best stack—OpenHands + Claude 3.7—passes only ~48% of tasks; environment/setup issues cause ~65% of all failures.
Business twist: The Alpha value estimates net economic benefit per task by combining success, quality, and token costs (a rough sketch follows below). Expensive tasks become clear wins; cheap tasks require ruthless cost control.

The Benchmark, de-jargoned
Problem framed: In real shops, devs search, fork, and adapt. GitTaskBench simulates that reality. Each task gives an agent a specific repo (e.g., DeOldify, Scrapy, NeuroKit, SpeechBrain) and a concrete user goal (e.g., “colorize this photo” or “extract author/quote pairs into CSV”). Success is determined by a task-specific metric (e.g., NIQE for image quality; SNR/SDR for speech separation; field-level F1 for scraping; column/row fidelity for office docs) and an execution check (the thing actually runs and outputs in the right format). ...
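To make the Alpha idea concrete, here is a minimal back-of-the-envelope sketch, not the paper's exact formula: it assumes the metric nets token spend against the value of a task that was actually completed, with a quality factor discounting sloppy outputs. All numbers and the per-token price are invented for illustration.

```python
# Illustrative Alpha-style metric (assumed form, not GitTaskBench's definition):
# realized value of a passed task, discounted by quality, minus token cost.

def alpha_value(task_value_usd: float,
                passed: bool,
                quality: float,
                tokens_used: int,
                usd_per_1k_tokens: float = 0.01) -> float:
    """Net economic benefit of running the agent on one task (sketch)."""
    token_cost = tokens_used / 1000 * usd_per_1k_tokens
    realized_value = task_value_usd * quality if passed else 0.0
    return realized_value - token_cost

# A $200 photo-restoration job done well for ~300k tokens is a clear win;
# a $2 scraping task with the same spend is not.
print(alpha_value(200.0, True, 0.9, 300_000))  # ~177.0
print(alpha_value(2.0, True, 0.9, 300_000))    # ~-1.2
```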

August 27, 2025 · 5 min · Zelina

Crystal Ball, Meet Cron Job: What FutureX Reveals About ‘Live’ Forecasting Agents

The one-sentence take
A new live benchmark, FutureX, swaps lab-style trivia for rolling, real-world future events, forcing agentic LLMs to search, reason, and hedge under uncertainty that actually moves—and the results expose where today’s “agents” are still brittle.

Why FutureX matters now
Enterprise teams are deploying agents to answer questions whose truth changes by the hour—markets, elections, sports, product launches. Static leaderboards don’t measure that. FutureX runs as a cron job on reality: it collects new events every day, has agents make predictions, and grades them after the events resolve. That turns evaluation from a screenshot into a time series and makes overfitting to benchmark quirks a lot harder. ...
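The "cron job on reality" loop can be sketched in a few lines. Everything below is a hypothetical illustration of that rolling-evaluation pattern, not FutureX's actual code: the functions `fetch_new_events`, `agent_predict`, and `fetch_outcome` are assumed interfaces, and Brier score stands in for whatever scoring rule the benchmark uses.

```python
import datetime as dt

def daily_tick(today: dt.date, agent_predict, fetch_new_events, fetch_outcome,
               events: list, predictions: dict) -> float | None:
    """One cron-style pass: ingest new events, predict, grade whatever resolved."""
    # 1. Ingest today's fresh, still-unresolved events.
    events.extend(fetch_new_events(today))

    # 2. Predict every event the agent has not answered yet.
    for ev in events:
        if ev["id"] not in predictions:
            predictions[ev["id"]] = agent_predict(ev["question"])  # prob in [0, 1]

    # 3. Grade resolved events with a Brier score (lower is better).
    scores = []
    for ev in events:
        outcome = fetch_outcome(ev["id"], today)  # 1, 0, or None if still open
        if outcome is not None:
            scores.append((predictions[ev["id"]] - outcome) ** 2)
    return sum(scores) / len(scores) if scores else None
```

Run daily, the returned scores form the time series the article contrasts with a one-off leaderboard screenshot.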

August 19, 2025 · 4 min · Zelina

Red Flag on the Track: Why LLMs Still Struggle with Real Algorithmic Reasoning

In the world of AI benchmarks, most roads lead to flashy competitions: solving coding puzzles, climbing Codeforces ratings, or passing Olympiad-level problems. But a new benchmark — FormulaOne — changes the race. It doesn’t ask, “Can you win a medal?” It asks, “Can you think like a researcher?” And the answer from today’s frontier LLMs? A resounding no.

From Codeforces Champs to Research Rookies
The authors of FormulaOne strip away the glitz of competitive programming and delve into something far more consequential: research-grade algorithmic problems grounded in Monadic Second-Order (MSO) logic over graphs. These aren’t out-of-distribution visual puzzles like ARC. They’re in-distribution, theoretically tractable problems designed with precision to demand multi-step symbolic reasoning, mathematical insight, and clean implementation. ...
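For readers who haven't met MSO logic, a standard textbook-style example (my illustration, not a problem from the FormulaOne paper) is the statement "the graph has a dominating set", expressed by quantifying over a set of vertices rather than over individual vertices only:

```latex
% Illustrative only: a classic graph property expressible in MSO.
% "There exists a vertex set S such that every vertex is in S or adjacent to
% some vertex in S", i.e., S is a dominating set of G = (V, E).
\exists S \subseteq V \;\; \forall v \in V \;
  \bigl( v \in S \;\lor\; \exists u \, ( \mathrm{adj}(u, v) \wedge u \in S ) \bigr)
```

Properties of this shape admit efficient algorithms on graphs of bounded treewidth, which is the kind of theoretical tractability the benchmark leans on.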

July 18, 2025 · 4 min · Zelina

The First Hurdle: Why Coding Agents Struggle with Setup

In the race to build autonomous software engineers, large language model (LLM) agents like Devin and Copilot Chat are lauded for fixing bugs, writing code, and even completing tasks from GitHub issues. But what happens when the code doesn’t even run? That’s the uncomfortable gap SetupBench aims to measure—and the results are sobering. SetupBench introduces a 93-task benchmark evaluating a foundational but under-tested skill: bootstrapping a development environment from scratch. Unlike prior benchmarks that hand agents a fully pre-configured Docker container, SetupBench drops them into a barebones Linux sandbox and challenges them to install dependencies, initialize databases, configure background services, and resolve real-world version conflicts. It sounds simple. It isn’t. ...
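The shape of such a check is simple to sketch. The snippet below is a hypothetical illustration of the idea, not SetupBench's actual harness: commands and the validation step are invented, and the point is only that success means the bootstrapped environment actually runs something real.

```python
import subprocess

def environment_ready(setup_commands: list[str], validation_cmd: str) -> bool:
    """Return True only if every setup step succeeds AND the validation command passes."""
    for cmd in setup_commands:
        if subprocess.run(cmd, shell=True).returncode != 0:
            return False  # e.g., dependency install or DB init failed
    # The real test: does the project actually work in the bootstrapped sandbox?
    return subprocess.run(validation_cmd, shell=True).returncode == 0

# Hypothetical usage: install deps, initialize a database, then run the test suite.
ok = environment_ready(
    ["pip install -r requirements.txt", "python manage.py migrate"],
    "pytest -q",
)
print("environment ready" if ok else "setup failed")
```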

July 15, 2025 · 4 min · Zelina

Passing Humanity's Last Exam: X-Master and the Emergence of Scientific AI Agents

Is it possible to train a language model to become a capable scientist? That provocative question lies at the heart of a new milestone in AI research. In SciMaster: Towards General-Purpose Scientific AI Agents, a team from Shanghai Jiao Tong University introduces X-Master, a tool-augmented open-source agent that has just achieved the highest score ever recorded on Humanity’s Last Exam (HLE)—surpassing even OpenAI and Google. But what makes this feat more than just a leaderboard update is how X-Master got there. Instead of training a larger model or fine-tuning on more data, the researchers innovated on agentic architecture and inference-time workflows. The result? An extensible framework that emulates the exploratory behavior of human scientists, not just their answers. ...

July 8, 2025 · 4 min · Zelina

Ping, Probe, Prompt: Teaching AI to Troubleshoot Networks Like a Pro

When a network fails, it doesn’t whisper its problems—it screams in silence. Packet drops, congestion, and flapping links rarely announce themselves clearly. Engineers must piece together clues scattered across logs, dashboards, and telemetry. It’s a detective game where the evidence hides behind obscure port counters and real-time topological chaos. Now imagine handing this job to a Large Language Model. That’s the bold challenge taken up by researchers in “Towards a Playground to Democratize Experimentation and Benchmarking of AI Agents for Network Troubleshooting”. They don’t just propose letting LLMs debug networks—they build an entire sandbox where AI agents can learn, act, and be judged on their troubleshooting skills. It’s not theory. It’s a working proof-of-concept. ...
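A toy version of that agent-in-a-sandbox loop looks roughly like this. The object methods, tool names, and grading rule are assumptions made for illustration, not the authors' API: the agent observes telemetry, chooses a diagnostic tool, and is ultimately judged on whether its diagnosis matches the fault the sandbox injected.

```python
def troubleshoot(agent, sandbox, max_steps: int = 10) -> bool:
    """Run one troubleshooting episode; return True if the agent names the injected fault."""
    observation = sandbox.initial_telemetry()   # e.g., port counters, syslog lines
    for _ in range(max_steps):
        action = agent.decide(observation)      # e.g., {"tool": "ping", "target": "r2"}
        if action["tool"] == "diagnose":
            # Grading: did the agent identify the fault the sandbox injected?
            return action["diagnosis"] == sandbox.ground_truth_fault()
        observation = sandbox.run_tool(action)  # ping/traceroute/interface-counter output
    return False  # ran out of steps without committing to a diagnosis
```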

July 6, 2025 · 4 min · Zelina

Mind the Gap: Fixing the Flaws in Agentic Benchmarking

If you’ve looked at any leaderboard lately—from SWE-Bench to WebArena—you’ve probably seen impressive numbers. But how many of those reflect real capabilities of AI agents? This paper by Zhu et al. makes a bold claim: agentic benchmarks are often broken, and the way we evaluate AI agents is riddled with systemic flaws. Their response is refreshingly practical: a 33-point diagnostic called the Agentic Benchmark Checklist (ABC), designed not just to critique, but to fix the evaluation process. It’s a must-read not only for benchmark creators, but for any team serious about deploying or comparing AI agents in real-world tasks. ...

July 4, 2025 · 5 min · Zelina

Grounded and Confused: Why RAG Systems Still Fail in the Enterprise

If you’ve been following the RAG (retrieval-augmented generation) hype train, you might believe we’ve cracked enterprise search. Salesforce’s new benchmark—HERB (Heterogeneous Enterprise RAG Benchmark)—throws cold water on that optimism. It exposes how even the most powerful agentic RAG systems, armed with top-tier LLMs, crumble when facing the chaotic, multi-format, and noisy reality of business data.

Deep Search ≠ Deep Reasoning
Most current RAG benchmarks focus on shallow linkages—documents tied together via entity overlap or topic clusters. HERB rejects this toy model. It defines Deep Search as not just multi-hop reasoning, but searching across unstructured and structured formats, like Slack threads, meeting transcripts, GitHub PRs, and internal URLs. It’s what real enterprise users do daily, and it’s messy. ...
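What "searching across formats" implies architecturally can be sketched in a few lines. This is a minimal illustration under my own assumptions, not HERB's pipeline: connector names and the merge-by-score strategy are invented, and a real system would add deduplication, permissions, and reranking.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str     # e.g., "slack", "github_pr", "meeting_transcript"
    text: str
    score: float

def deep_search(query: str, connectors: dict) -> list[Evidence]:
    """Fan the query out to every enterprise silo, then merge hits by relevance."""
    hits: list[Evidence] = []
    for name, search_fn in connectors.items():
        hits.extend(Evidence(name, text, score) for text, score in search_fn(query))
    return sorted(hits, key=lambda e: e.score, reverse=True)[:20]
```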

July 1, 2025 · 3 min · Zelina

Mind Games for Machines: How Decrypto Reveals the Hidden Gaps in AI Reasoning

As large language models (LLMs) evolve from mere tools into interactive agents, they are increasingly expected to operate in multi-agent environments—collaborating, competing, and communicating not just with humans but with each other. But can they understand the beliefs, intentions, and misunderstandings of others? Welcome to the world of Theory of Mind (ToM)—and the cleverest AI benchmark you haven’t heard of: Decrypto.

Cracking the Code: What is Decrypto?
Inspired by the award-winning board game of the same name, Decrypto is a three-player game of secret codes and subtle hints, reimagined as a benchmark to test LLMs’ ability to coordinate and deceive. Each game features: ...
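To give a feel for the dynamic, here is a simplified round loosely following the board game's structure; the benchmark's exact roles, scoring, and prompts may differ, and the player functions are placeholders. The tension the code captures is the ToM one: hints must be clear enough for an ally who knows the keywords, yet opaque to an interceptor who does not.

```python
import random

def play_round(keywords, encoder, decoder, interceptor):
    """One round: encoder hints at a secret 3-digit code over shared keywords."""
    code = random.sample(range(1, len(keywords) + 1), 3)  # e.g., [3, 1, 4]
    hints = encoder(keywords, code)                        # one hint per digit
    teammate_guess = decoder(keywords, hints)              # sees the keywords
    interceptor_guess = interceptor(hints)                 # sees hints only
    return {
        "miscommunication": teammate_guess != code,        # failed to model the ally
        "intercepted": interceptor_guess == code,          # leaked too much to the rival
    }
```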

June 26, 2025 · 4 min · Zelina

Mind the Context: How ContextAgent Listens, Sees, and Acts Before You Ask

Introduction: From Reaction to Proaction
Imagine an assistant that doesn’t wait for your command. It notices you’re standing by a bus stop late at night and proactively checks the next bus arrival. If it’s too far off, it suggests calling a ride instead. Welcome to the world of ContextAgent — a proactive, context-aware Large Language Model (LLM) agent designed to act before you’re forced to ask.

While most LLM agents still require explicit prompts and work in tightly scoped environments like desktops, ContextAgent leverages open-world sensory inputs (from devices like smart glasses, earphones, and smartphones) to understand user context and offer unobtrusive help. ...
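The proactive pattern in the bus-stop example boils down to: fuse sensed context, decide whether help is warranted, and stay silent otherwise. The sketch below is my illustration of that pattern only; the field names, thresholds, and suggestions are invented and are not ContextAgent's actual pipeline.

```python
def maybe_assist(context: dict) -> str | None:
    """Return a suggestion string, or None to stay silent (unobtrusive by default)."""
    at_bus_stop = context.get("location_type") == "bus_stop"
    late_night = context.get("hour", 12) >= 22
    wait_minutes = context.get("next_bus_eta_min", 0)

    if at_bus_stop and late_night:
        if wait_minutes > 30:
            return "Next bus is over 30 minutes away. Want me to call a ride?"
        return f"Next bus arrives in {wait_minutes} minutes."
    return None  # no proactive action needed

print(maybe_assist({"location_type": "bus_stop", "hour": 23, "next_bus_eta_min": 45}))
```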

May 21, 2025 · 3 min