
RAG in the Wild: When More Knowledge Hurts
Retrieval-Augmented Generation (RAG) is often hailed as a cure-all for domain adaptation and factual accuracy in large language models (LLMs). By injecting retrieved external context at inference time, RAG systems promise to boost performance on knowledge-intensive tasks. But a new paper, RAG in the Wild (Xu et al., 2025), shows that this promise is brittle once we leave sanitized lab benchmarks for the real world of messy, multi-source knowledge. ...