Grounded and Confused: Why RAG Systems Still Fail in the Enterprise

If you’ve been following the RAG (retrieval-augmented generation) hype train, you might believe we’ve cracked enterprise search. Salesforce’s new benchmark, HERB (Heterogeneous Enterprise RAG Benchmark), throws cold water on that optimism. It exposes how even the most powerful agentic RAG systems, armed with top-tier LLMs, crumble when facing the chaotic, multi-format, and noisy reality of business data.

Deep Search ≠ Deep Reasoning

Most current RAG benchmarks focus on shallow linkages: documents tied together via entity overlap or topic clusters. HERB rejects this toy model. It defines Deep Search as not just multi-hop reasoning but search across unstructured and structured formats alike: Slack threads, meeting transcripts, GitHub PRs, and internal URLs. It’s what real enterprise users do daily, and it’s messy. ...
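HERB’s data and harness aren’t reproduced in this excerpt, but the idea of Deep Search is concrete enough to sketch. Below is a minimal, hypothetical Python illustration: one ranked retrieval across Slack, transcript, and PR records, with toy lexical scoring standing in for the dense or hybrid retrievers real agentic RAG systems use. The record types and fields are assumptions, not HERB’s schema.

```python
from dataclasses import dataclass

# Illustrative record types standing in for the heterogeneous sources
# HERB draws from (Slack, meeting transcripts, GitHub PRs); the fields
# and scoring are assumptions, not HERB's actual schema or method.
@dataclass
class Record:
    source: str   # "slack" | "transcript" | "github_pr" | "url"
    text: str

CORPUS = [
    Record("slack", "deploy of search-service v2 is blocked on the auth migration"),
    Record("transcript", "we agreed the auth migration ships after the q3 freeze"),
    Record("github_pr", "migrate auth tokens to the v2 format"),
]

def overlap(query: str, rec: Record) -> int:
    """Naive lexical-overlap score; a stand-in for a real retriever."""
    return len(set(query.lower().split()) & set(rec.text.lower().split()))

def deep_search(query: str, corpus: list, k: int = 2) -> list:
    # One ranked list across *all* formats, instead of per-silo search.
    return sorted(corpus, key=lambda r: overlap(query, r), reverse=True)[:k]

for hit in deep_search("why is the search-service v2 deploy blocked", CORPUS):
    print(hit.source, "->", hit.text)
# Answering the follow-up "when does it unblock?" needs a second hop from
# the Slack thread to the transcript: the cross-format, multi-hop behavior
# HERB tests.
```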

July 1, 2025 · 3 min · Zelina

Mind Games for Machines: How Decrypto Reveals the Hidden Gaps in AI Reasoning

As large language models (LLMs) evolve from mere tools into interactive agents, they are increasingly expected to operate in multi-agent environments: collaborating, competing, and communicating not just with humans but with each other. But can they understand the beliefs, intentions, and misunderstandings of others? Welcome to the world of Theory of Mind (ToM), and the cleverest AI benchmark you haven’t heard of: Decrypto.

Cracking the Code: What is Decrypto?

Inspired by the award-winning board game of the same name, Decrypto is a three-player game of secret codes and subtle hints, reimagined as a benchmark to test LLMs’ ability to coordinate and deceive. Each game features: ...
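The excerpt cuts off before the rules, but the board game’s core loop is small enough to sketch. The Python below plays a single toy round under the published game’s mechanic (four secret keywords, a three-digit code over positions 1–4, one hint per digit); the keywords, hint table, and player policies are hypothetical stand-ins for the LLM encoder, teammate, and interceptor the benchmark evaluates.

```python
import itertools
import random

# Hypothetical keywords and hint table; real play uses LLM-generated hints.
KEYWORDS = ["ocean", "piano", "desert", "rocket"]
HINTS = {"ocean": "tide", "piano": "keys", "desert": "dune", "rocket": "launch"}

def sample_code():
    """Three distinct positions into the keyword list, e.g. (3, 1, 4)."""
    return tuple(random.sample(range(1, 5), 3))

def encoder_hints(code):
    # The encoder must be transparent to a teammate who knows the keywords,
    # yet opaque to an interceptor who only sees the hint history.
    return [HINTS[KEYWORDS[d - 1]] for d in code]

def teammate_guess(hints):
    # Knows the keywords, so it can invert each hint back to a position.
    inverse = {HINTS[kw]: i + 1 for i, kw in enumerate(KEYWORDS)}
    return tuple(inverse[h] for h in hints)

def interceptor_guess():
    # No keywords: early rounds are near-chance over all 24 possible codes.
    return random.choice(list(itertools.permutations(range(1, 5), 3)))

code = sample_code()
hints = encoder_hints(code)
print("code:", code, "hints:", hints)
print("teammate correct:   ", teammate_guess(hints) == code)
print("interceptor correct:", interceptor_guess() == code)
# Over rounds the interceptor accumulates hint history, so good play means
# modeling what *each* listener can infer: the Theory-of-Mind pressure
# the benchmark measures.
```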

June 26, 2025 · 4 min · Zelina

Mind the Context: How ContextAgent Listens, Sees, and Acts Before You Ask

Introduction: From Reaction to Proaction

Imagine an assistant that doesn’t wait for your command. It notices you’re standing by a bus stop late at night and proactively checks the next bus arrival. If it’s too far off, it suggests calling a ride instead. Welcome to the world of ContextAgent, a proactive, context-aware Large Language Model (LLM) agent designed to act before you’re forced to ask. While most LLM agents still require explicit prompts and work in tightly scoped environments like desktops, ContextAgent leverages open-world sensory inputs (from devices like smart glasses, earphones, and smartphones) to understand user context and offer unobtrusive help. ...
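To make the proactive loop concrete, here is a minimal sketch. All names (Observation, should_act, the bus-stop rule) are hypothetical, not ContextAgent’s API; the hand-written rule stands in for the LLM call that judges whether unprompted help is warranted.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    source: str        # e.g. "gps", "microphone", "smart_glasses"
    description: str

def summarize_context(observations):
    # Fuse multi-sensor inputs into one context string for the reasoner.
    return "; ".join(f"{o.source}: {o.description}" for o in observations)

def should_act(context, hour):
    # Proactivity must clear a bar: act only when context makes the user's
    # unstated need clear; otherwise stay silent (unobtrusive by default).
    if "bus stop" in context and (hour >= 22 or hour < 5):
        return "The next bus is far off. Call a ride instead?"
    return None

observations = [
    Observation("gps", "standing at a bus stop"),
    Observation("smart_glasses", "street is dark and empty"),
]
suggestion = should_act(summarize_context(observations), hour=23)  # late night
print(suggestion or "(no proactive action)")
```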

May 21, 2025 · 3 min

Raising the Bar: Why AI Competitions Are the New Benchmark Battleground

In the rapidly evolving landscape of Generative AI (GenAI), we’ve long relied on static benchmarks (standardized datasets and evaluations) to gauge model performance. But what if the very foundation we’re building our trust upon is fundamentally shaky?

Static benchmarks often rely on IID (independent and identically distributed) assumptions, where training and test data come from the same statistical distribution. In such a setting, a model achieving high accuracy might simply be interpolating seen patterns rather than truly generalizing. For example, in language modeling, a model might “memorize” dataset-specific templates without capturing transferable reasoning patterns. ...
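The interpolation-vs-generalization gap is easy to demonstrate. The sketch below (synthetic data, illustrative only) trains a scikit-learn logistic regression with a shortcut feature that agrees with the label 95% of the time, then flips that agreement off-distribution: IID accuracy stays high while OOD accuracy collapses, exactly the failure a static benchmark can hide.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shortcut_agreement):
    # A weakly informative "real" signal plus a shortcut feature that
    # matches the label with the given probability.
    core = rng.normal(size=(n, 1))
    y = (core[:, 0] + rng.normal(scale=1.5, size=n) > 0).astype(int)
    agree = rng.random(n) < shortcut_agreement
    shortcut = np.where(agree, y, 1 - y).reshape(-1, 1).astype(float)
    return np.hstack([core, shortcut]), y

X_tr, y_tr = sample(5000, shortcut_agreement=0.95)    # training world
X_id, y_id = sample(5000, shortcut_agreement=0.95)    # IID test split
X_ood, y_ood = sample(5000, shortcut_agreement=0.05)  # shortcut inverted

clf = LogisticRegression().fit(X_tr, y_tr)
print("IID accuracy:", round(clf.score(X_id, y_id), 3))    # high
print("OOD accuracy:", round(clf.score(X_ood, y_ood), 3))  # collapses
```

The model earns its IID score by leaning on the shortcut, the tabular analogue of memorizing dataset-specific templates rather than learning transferable patterns.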

May 3, 2025 · 3 min