GraphRAG Without the Drag: Scaling Knowledge-Augmented LLMs to Web-Scale

When it comes to retrieval-augmented generation (RAG), size matters—but not in the way you might think. Most high-performing GraphRAG systems extract structured triples (subject, predicate, object) from texts using large language models (LLMs), then link them to form reasoning chains. But this method doesn’t scale: if your corpus contains millions of documents, pre-processing every one with an LLM becomes prohibitively expensive. That’s the bottleneck the authors of “Millions of GeAR-s” set out to solve. And their solution is elegant: skip the LLM-heavy preprocessing entirely, and use existing knowledge graphs (like Wikidata) as a reasoning scaffold. ...
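To make the triple-linking idea concrete, here is a minimal sketch (not the paper's actual implementation) of how (subject, predicate, object) triples from an existing knowledge graph like Wikidata can be chained hop by hop; the example facts and function names are illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical mini knowledge graph: triples as (subject, predicate, object),
# standing in for facts pulled from an existing KG such as Wikidata rather
# than extracted per-document by an LLM.
TRIPLES = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
    ("Marie Curie", "field", "physics"),
]

def build_index(triples):
    """Index triples by subject so chains can be followed hop by hop."""
    index = defaultdict(list)
    for s, p, o in triples:
        index[s].append((p, o))
    return index

def reasoning_chain(index, start, max_hops=2):
    """Greedily follow outgoing edges from `start`, returning the hops taken."""
    chain, node = [], start
    for _ in range(max_hops):
        edges = index.get(node)
        if not edges:
            break
        pred, obj = edges[0]  # naive: take the first edge; real systems rank candidates
        chain.append((node, pred, obj))
        node = obj
    return chain
```

Because the graph already exists, building the index is a cheap one-time pass over stored triples, with no per-document LLM call.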

July 24, 2025 · 3 min · Zelina

From Snippets to Synthesis: INRAExplorer and the Rise of Agentic RAG

Most Retrieval-Augmented Generation (RAG) systems promise to make language models smarter by grounding them in facts. But ask them to do anything complex—like trace research funding chains or identify thematic overlaps across domains—and they break down into isolated snippets. INRAExplorer, a project out of Ekimetrics for INRAE, dares to change that. By merging agentic RAG with knowledge graph reasoning, it offers a glimpse into the next generation of AI: systems that don’t just retrieve answers—they reason. ...

July 23, 2025 · 3 min · Zelina