Chunks, Units, Entities: RAG Rewired by CUE-RAG

Retrieval-Augmented Generation (RAG) has become the go-to technique for grounding large language models (LLMs) in external data. But as anyone building real-world RAG pipelines knows, there’s a growing tension between accuracy and cost. Existing graph-based RAG solutions promise richer semantics than vanilla vector stores, but suffer from two persistent issues: incomplete graphs and retrieval misalignment. The paper “CUE-RAG: Towards Accurate and Cost-Efficient Graph-Based RAG” proposes a structural rethinking. By integrating a multi-partite graph, hybrid extraction, and a query-driven iterative retriever, CUE-RAG achieves state-of-the-art accuracy while cutting indexing costs by up to 72.58% and even outperforming other methods without using any LLM tokens at all. ...

July 14, 2025 · 3 min · Zelina
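
For intuition, here is a minimal Python sketch of the kind of multi-partite index the post describes: raw text chunks linked to extracted knowledge units, which in turn link to the entities they mention, so retrieval can walk entity → units → chunks. The class and field names are illustrative assumptions, not CUE-RAG's actual implementation.

```python
from collections import defaultdict

class MultipartiteIndex:
    """Toy tri-partite index (chunks <-> units <-> entities).
    Illustrative sketch only; not CUE-RAG's real data structures."""

    def __init__(self):
        self.chunks = {}                          # chunk_id -> raw text
        self.units = {}                           # unit_id -> short factual statement
        self.unit_to_chunk = {}                   # provenance edge: unit -> source chunk
        self.entity_to_units = defaultdict(set)   # mention edge: entity -> unit ids

    def add_chunk(self, chunk_id, text):
        self.chunks[chunk_id] = text

    def add_unit(self, unit_id, statement, chunk_id, entities):
        self.units[unit_id] = statement
        self.unit_to_chunk[unit_id] = chunk_id
        for entity in entities:
            self.entity_to_units[entity.lower()].add(unit_id)

    def retrieve(self, query_entities):
        """Walk entity -> units -> chunks, returning supporting chunk texts."""
        hit_chunks = set()
        for entity in query_entities:
            for unit_id in self.entity_to_units.get(entity.lower(), ()):
                hit_chunks.add(self.unit_to_chunk[unit_id])
        return [self.chunks[c] for c in hit_chunks]

# Usage: index one chunk with one extracted unit, then retrieve by entity.
idx = MultipartiteIndex()
idx.add_chunk("c1", "CUE-RAG builds a multi-partite graph over chunks, units, and entities.")
idx.add_unit("u1", "CUE-RAG uses a multi-partite graph index.", "c1", ["CUE-RAG"])
print(idx.retrieve(["cue-rag"]))
```

The point of the extra layer is that units carry self-contained statements (better for matching queries) while still keeping a provenance edge back to the original chunk for grounding.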

Cut the Fluff: Leaner AI Thinking

When it comes to large language models (LLMs), brains aren’t the only thing growing; so are their waistlines. As AI systems become more capable reasoners, a hidden cost emerges: token bloat, high latency, and ballooning energy consumption. One of the best-known methods for boosting LLM intelligence is Chain-of-Thought (CoT) reasoning. CoT lets a model break a complex problem into a step-by-step sequence, much as humans tackle math problems by writing out intermediate steps. This structured thinking approach, famously adopted by models like OpenAI’s o1 and DeepSeek-R1, has been shown to dramatically improve both performance and transparency. ...

April 6, 2025 · 4 min
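
To make the trade-off concrete, here is a minimal sketch contrasting direct prompting with chain-of-thought prompting. The question, prompt wording, and sample responses are illustrative assumptions; the extra reasoning tokens in the CoT answer are exactly the "waistline" the post is about.

```python
# Minimal sketch: direct vs. chain-of-thought prompting.
# Prompt wording and responses are illustrative, not from any specific model.

question = "A train travels 60 km in 45 minutes. What is its speed in km/h?"

direct_prompt = f"{question}\nAnswer with just the number."

cot_prompt = (
    f"{question}\n"
    "Let's think step by step, then state the final answer."
)

# A CoT response spends tokens on intermediate steps, e.g.:
cot_response = (
    "45 minutes is 0.75 hours. "
    "Speed = 60 km / 0.75 h = 80 km/h. "
    "Final answer: 80"
)
direct_response = "80"

# The accuracy and transparency gains come at a token cost:
print(len(cot_response.split()), "words vs.", len(direct_response.split()))
```

Every intermediate step is billed and adds latency, which is why trimming unnecessary reasoning tokens matters at scale.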