
Titles, Not Tokens: Making Job Matching Explainable with STR + KGs

The big idea: Job titles are messy. “Managing Director” and “CEO” share zero tokens yet often mean the same thing, while “Director of Sales” and “VP Marketing” are different but related. Traditional semantic textual similarity (STS) rewards look-alikes; real hiring needs semantic textual relatedness (STR), the associations that capture hierarchy, function, and context. A recent study proposes a hybrid pipeline that pairs fine-tuned Sentence-BERT embeddings with a skill-level Knowledge Graph (KG), then evaluates models by region of relatedness (low/medium/high) instead of only global averages. The punchline: this KG-augmented approach is both more accurate where it matters (high-STR) and explainable, because it can show which skills link two titles. ...
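
To make the hybrid idea concrete, here is a minimal sketch assuming the `sentence-transformers` library; the model name, the `alpha` blend weight, and the `TITLE_SKILLS` lookup (a stand-in for the skill-level KG) are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of the hybrid idea (not the paper's exact pipeline):
# blend Sentence-BERT cosine similarity with a skill-overlap signal
# from a hypothetical title -> skills knowledge graph.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in; the study fine-tunes SBERT

# Hypothetical KG lookup: each title maps to a set of skill nodes.
TITLE_SKILLS = {
    "Managing Director": {"strategy", "p&l ownership", "leadership"},
    "CEO": {"strategy", "p&l ownership", "leadership", "fundraising"},
    "Director of Sales": {"pipeline management", "leadership", "forecasting"},
}

def str_score(title_a: str, title_b: str, alpha: float = 0.5) -> float:
    """Relatedness = alpha * embedding similarity + (1 - alpha) * skill overlap (Jaccard)."""
    emb = model.encode([title_a, title_b], convert_to_tensor=True)
    cos = float(util.cos_sim(emb[0], emb[1]))
    a, b = TITLE_SKILLS.get(title_a, set()), TITLE_SKILLS.get(title_b, set())
    jaccard = len(a & b) / len(a | b) if (a | b) else 0.0
    return alpha * cos + (1 - alpha) * jaccard

# Shared skill nodes (strategy, p&l ownership, leadership) are the explanation
# for a high-STR pair even though the titles share zero tokens.
print(round(str_score("Managing Director", "CEO"), 3))
```

The skill-overlap term is what carries the explainability: the shared KG nodes answer "why are these two titles related?" in a way a raw cosine score cannot.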

September 17, 2025 · 4 min · Zelina

RAGulating Compliance: When Triplets Trump Chunks

TL;DR: A new multi‑agent pipeline builds an ontology‑light knowledge graph from regulatory text, embeds subject–predicate–object triplets alongside their source snippets in one vector store, and uses triplet‑level retrieval to ground LLM answers. The result: better section retrieval at stricter similarity thresholds, slightly higher answer accuracy, and far stronger navigability across related rules. For compliance teams, the payoff is auditability and explainability baked into the data layer, not just the prompt. ...
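
As a rough illustration of triplet-level retrieval, the sketch below embeds verbalized subject–predicate–object triplets next to the snippets they came from in one in-memory index, so every hit returns both the fact and its source for audit. The `records`, the embedding model, and the similarity `threshold` are assumptions for the example, not the pipeline described in the paper.

```python
# Illustrative triplet-level retrieval: each (subject, predicate, object) triplet
# is embedded together with the regulatory snippet it was extracted from.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

# Hypothetical records: verbalized triplet + the source snippet it came from.
records = [
    {"triplet": ("Firm", "must report", "suspicious transactions"),
     "snippet": "Section 3.2: A firm must report suspicious transactions within 30 days."},
    {"triplet": ("Auditor", "must retain", "records for five years"),
     "snippet": "Section 7.1: Auditors shall retain records for five years."},
]
texts = [" ".join(r["triplet"]) for r in records]
index = model.encode(texts, normalize_embeddings=True)  # one vector per triplet

def retrieve(query: str, threshold: float = 0.4):
    """Return (similarity, record) pairs above a strict similarity threshold."""
    q = model.encode([query], normalize_embeddings=True)[0]
    sims = index @ q  # cosine similarity, since vectors are normalized
    hits = [(float(s), records[i]) for i, s in enumerate(sims) if s >= threshold]
    return sorted(hits, key=lambda x: x[0], reverse=True)

for score, rec in retrieve("reporting obligations for suspicious activity"):
    print(round(score, 2), rec["triplet"], "<-", rec["snippet"])
```

Keeping the snippet attached to the triplet is the design choice that makes answers traceable back to the exact clause they were grounded in.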

August 16, 2025 · 5 min · Zelina

Graphs, Gains, and Guile: How FinKario Outruns Financial LLMs

In the world of financial AI, where speed meets complexity, most systems are either too slow to adapt or too brittle to interpret the nuanced messiness of real-world finance. Enter FinKario, a new system that combines event-enhanced financial knowledge graphs with a graph-aware retrieval strategy, and outperforms both specialized financial LLMs and institutional strategies in real-world backtests. The Retail Investor’s Dilemma: while retail traders drown in information overload, professional research reports contain rich insights, but they’re long, unstructured, and hard to parse. Most LLM-based tools don’t fully exploit these reports. They either extract static attributes (e.g., stock ticker, sector, valuation) or respond to isolated queries without contextual awareness. ...

August 5, 2025 · 3 min · Zelina

GraphRAG Without the Drag: Scaling Knowledge-Augmented LLMs to Web-Scale

When it comes to retrieval-augmented generation (RAG), size matters—but not in the way you might think. Most high-performing GraphRAG systems extract structured triples (subject, predicate, object) from texts using large language models (LLMs), then link them to form reasoning chains. But this method doesn’t scale: if your corpus contains millions of documents, pre-processing every one with an LLM becomes prohibitively expensive. That’s the bottleneck the authors of “Millions of GeAR-s” set out to solve. And their solution is elegant: skip the LLM-heavy preprocessing entirely, and use existing knowledge graphs (like Wikidata) as a reasoning scaffold. ...
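
A minimal sketch of the scaffold idea, assuming Wikidata's public SPARQL endpoint: pull one-hop neighbours of a query entity and use them as bridges to the next retrieval hop, with no per-document LLM preprocessing. The `one_hop_neighbors` helper, the entity choice, and the `limit` are illustrative, not the authors' implementation.

```python
# Sketch: use an existing KG (Wikidata) as a reasoning scaffold instead of
# LLM-extracting triples from every document in the corpus.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def one_hop_neighbors(entity_qid: str, limit: int = 20):
    """Return (property label, neighbour label) pairs for a Wikidata entity."""
    query = f"""
    SELECT ?propLabel ?neighborLabel WHERE {{
      wd:{entity_qid} ?p ?neighbor .
      ?prop wikibase:directClaim ?p .
      FILTER(isIRI(?neighbor))
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }} LIMIT {limit}
    """
    resp = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "graphrag-sketch/0.1"},
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return [(r["propLabel"]["value"], r["neighborLabel"]["value"]) for r in rows]

# Q937 is Albert Einstein; neighbours such as (employer, ETH Zurich) can seed
# the next retrieval hop in place of LLM-extracted reasoning chains.
for prop, neighbor in one_hop_neighbors("Q937"):
    print(prop, "->", neighbor)
```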

July 24, 2025 · 3 min · Zelina

From Snippets to Synthesis: INRAExplorer and the Rise of Agentic RAG

Most Retrieval-Augmented Generation (RAG) systems promise to make language models smarter by grounding them in facts. But ask them to do anything complex—like trace research funding chains or identify thematic overlaps across domains—and they break down into isolated snippets. INRAExplorer, a project out of Ekimetrics for INRAE, dares to change that. By merging agentic RAG with knowledge graph reasoning, it offers a glimpse into the next generation of AI: systems that don’t just retrieve answers—they reason. ...

July 23, 2025 · 3 min · Zelina