SD‑RAG: Don’t Trust the Model, Trust the Pipeline
Opening: Why this matters now

RAG was supposed to make LLMs safer. Instead, it quietly became a liability. As enterprises rushed to bolt retrieval layers onto large language models, they unintentionally created a new attack surface: sensitive internal data flowing straight into a model that cannot reliably distinguish instructions from content. Prompt injection is no longer a corner case; it is the default threat model. And telling the model to "behave" has proven to be more of a suggestion than a guarantee. ...