
PRISM and the Art of Not Losing Meaning

Opening — Why this matters now
Generative Sequential Recommendation (GSR) is having its moment. By reframing recommendation as an autoregressive generation problem over Semantic IDs (SIDs), the field promises something long overdue: a unified retrieval-and-ranking pipeline that actually understands what items mean, not just where they sit in an embedding table. But beneath the hype sits an uncomfortable truth. Most lightweight GSR systems are quietly sabotaging themselves. They collapse their own codebooks, blur semantic boundaries, and then wonder why performance tanks—especially on sparse, long‑tail data. PRISM arrives as a sober correction to that pattern. ...

January 26, 2026 · 4 min · Zelina

Recommendations With Receipts: When LLMs Have to Prove They Behaved

Opening — Why this matters now
LLMs are increasingly trusted to recommend what we watch, buy, or read. But trust breaks down the moment a regulator, auditor, or policy team asks a simple question: prove that this recommendation followed the rules. Most LLM-driven recommenders cannot answer that question. They can explain themselves fluently, but explanation is not enforcement. In regulated or policy-heavy environments—media platforms, marketplaces, cultural quotas, fairness mandates—that gap is no longer tolerable. ...

January 17, 2026 · 4 min · Zelina

ID Crisis, Resolved: When Semantic IDs Stop Fighting Hash IDs

Opening — Why this matters now
Recommender systems have quietly hit an identity crisis. As item catalogs explode and user attention fragments, sequential recommendation models are being asked to do two incompatible things at once: memorize popular items with surgical precision and generalize intelligently to the long tail. Hash IDs do the former well. Semantic embeddings do the latter—sometimes too well. The paper “The Best of the Two Worlds: Harmonizing Semantic and Hash IDs for Sequential Recommendation” formalizes why these worlds keep colliding, and proposes a framework—H2Rec—that finally stops forcing us to choose sides. ...

December 14, 2025 · 4 min · Zelina

Titles, Not Tokens: Making Job Matching Explainable with STR + KGs

The big idea
Job titles are messy: “Managing Director” and “CEO” share zero tokens yet often mean the same thing, while “Director of Sales” and “VP Marketing” are different but related. Traditional semantic textual similarity (STS) rewards look‑alikes; real hiring needs relatedness (STR)—associations that capture hierarchy, function, and context. A recent study proposes a hybrid pipeline that pairs fine‑tuned Sentence‑BERT embeddings with a skill‑level Knowledge Graph (KG), then evaluates models by region of relatedness (low/medium/high) instead of only global averages. The punchline: this KG‑augmented approach is both more accurate where it matters (high‑STR) and explainable—it can show which skills link two titles. ...

September 17, 2025 · 4 min · Zelina

Urban Loops and Algorithmic Traps: How AI Shapes Where We Go

The Invisible Hand of the Algorithm
You open your favorite map app and follow a suggestion for brunch. So do thousands of others. Without realizing it, you’ve just participated in a city-scale experiment in behavioral automation—guided by a machine learning model. Behind the scenes, recommender systems are not only shaping what you see but where you physically go. This isn’t just about convenience—it’s about the systemic effects of AI on our cities and social fabric. ...

April 11, 2025 · 4 min