
Beyond Cosine: When Order Beats Angle in Embedding Similarity

Opening — Why this matters now: Cosine similarity has enjoyed an unusually long reign. From TF‑IDF vectors to transformer embeddings, it remains the default lens through which we judge “semantic closeness.” Yet the more expressive our embedding models become, the more uncomfortable this default starts to feel. If modern representations are nonlinear, anisotropic, and structurally rich, why are we still evaluating them with a metric that only understands angles? ...
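
To make the excerpt's complaint concrete: cosine similarity is invariant to rescaling, so it can only ever see the angle between two vectors. The sketch below is illustrative only (it is not the metric the post proposes); it contrasts cosine with a rank-based comparison, Spearman correlation over coordinates, which responds to ordering rather than direction.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    # Cosine similarity: depends only on the angle between u and v.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
u = rng.normal(size=8)
v = u + 0.1 * rng.normal(size=8)            # a noisy near-duplicate of u

print(cosine(u, v), cosine(u, 100.0 * v))   # identical: rescaling is invisible to cosine
rho, _ = spearmanr(u, v)                    # agreement in the *order* of coordinates
print(rho)
```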

February 7, 2026 · 4 min · Zelina

PRISM and the Art of Not Losing Meaning

Opening — Why this matters now: Generative Sequential Recommendation (GSR) is having its moment. By reframing recommendation as an autoregressive generation problem over Semantic IDs (SIDs), the field promises something long overdue: a unified retrieval-and-ranking pipeline that actually understands what items mean, not just where they sit in an embedding table. But beneath the hype sits an uncomfortable truth. Most lightweight GSR systems are quietly sabotaging themselves. They collapse their own codebooks, blur semantic boundaries, and then wonder why performance tanks—especially on sparse, long‑tail data. PRISM arrives as a sober correction to that pattern. ...
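
The codebook collapse mentioned above is easy to measure even before fixing it. The snippet below is a generic diagnostic, not PRISM's mechanism: given the Semantic ID tokens assigned at one codebook level, it reports utilization and normalized entropy, both of which drop sharply when most items pile onto a handful of codes.

```python
from collections import Counter
import math

def codebook_usage(sid_tokens, codebook_size):
    """Diagnose collapse at one SID level: utilization and normalized entropy."""
    counts = Counter(sid_tokens)
    utilization = len(counts) / codebook_size               # fraction of codes ever used
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return utilization, entropy / math.log(codebook_size)   # 1.0 = perfectly spread

# Toy example: a 256-entry codebook where nearly everything lands on three codes.
collapsed = [0] * 900 + [1] * 80 + [2] * 20
print(codebook_usage(collapsed, 256))   # low utilization, low entropy -> collapse
```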

January 26, 2026 · 4 min · Zelina

Clustering Without Amnesia: Why Abstraction Keeps Fighting Representation

Opening — Why this matters now: We are drowning in data that knows too much. Images with millions of pixels, embeddings with thousands of dimensions, logs that remember every trivial detail. And yet, when we ask machines to group things meaningfully—to abstract—we often get either chaos or collapse. Clustering, the supposedly humble unsupervised task, has quietly become one of the most conceptually demanding problems in modern machine learning. ...

January 20, 2026 · 4 min · Zelina

When Graphs Stop Guessing: Teaching Models to Rewrite Their Own Meaning

Opening — Why this matters now: Graph learning has quietly run into a ceiling. Not because graph neural networks (GNNs) are weak, but because they are confidently opinionated. Once you choose a GNN, you lock in assumptions about where signal should live: in node features, in neighborhoods, in homophily, in motifs. That works—until it doesn’t. ...

December 26, 2025 · 4 min · Zelina

When LLMs Stop Talking and Start Choosing Algorithms

Opening — Why this matters now: Large Language Models are increasingly invited into optimization workflows. They write solvers, generate heuristics, and occasionally bluff their way through mathematical reasoning. But a more uncomfortable question has remained largely unanswered: do LLMs actually understand optimization problems—or are they just eloquent impostors? This paper tackles that question head‑on. Instead of judging LLMs by what they say, it examines what they encode. And the results are quietly provocative. ...
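
"Examines what they encode" points to representation probing. The sketch below is a generic linear-probe setup, not the paper's protocol, and its inputs are synthetic placeholders: in practice X would hold LLM embeddings of optimization problem statements and y a structural property such as the problem class.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder stand-ins for LLM embeddings of problem statements and their labels
# (e.g., problem class: LP, MILP, TSP, ...). Real embeddings would replace X and y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 768))
y = rng.integers(0, 3, size=200)

probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, X, y, cv=5)
print(scores.mean())   # near chance on random data; above-chance accuracy on real
                       # embeddings would suggest the model encodes problem structure
```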

December 16, 2025 · 4 min · Zelina

ID Crisis, Resolved: When Semantic IDs Stop Fighting Hash IDs

Opening — Why this matters now: Recommender systems have quietly hit an identity crisis. As item catalogs explode and user attention fragments, sequential recommendation models are being asked to do two incompatible things at once: memorize popular items with surgical precision and generalize intelligently to the long tail. Hash IDs do the former well. Semantic embeddings do the latter—sometimes too well. The paper “The Best of the Two Worlds: Harmonizing Semantic and Hash IDs for Sequential Recommendation” formalizes why these worlds keep colliding, and proposes a framework—H2Rec—that finally stops forcing us to choose sides. ...
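
To picture why the two ID families pull apart, here is a deliberately generic sketch. It is not H2Rec (the gating below is a placeholder, not taken from the paper): a per-item hash embedding can only memorize, shared semantic-code embeddings can only generalize, and some learned combination has to arbitrate between them.

```python
import torch
import torch.nn as nn

class DualIDItemEncoder(nn.Module):
    """Illustrative only: blend a memorizing hash-ID embedding with a
    generalizing semantic-ID embedding via a learned per-item gate."""
    def __init__(self, num_items, sid_vocab, dim):
        super().__init__()
        self.hash_emb = nn.Embedding(num_items, dim)   # one row per item: pure memorization
        self.sid_emb = nn.Embedding(sid_vocab, dim)    # shared semantic codes: generalization
        self.gate = nn.Linear(2 * dim, 1)

    def forward(self, item_ids, sid_tokens):
        h = self.hash_emb(item_ids)                    # (batch, dim)
        s = self.sid_emb(sid_tokens).mean(dim=1)       # (batch, n_tokens, dim) -> (batch, dim)
        g = torch.sigmoid(self.gate(torch.cat([h, s], dim=-1)))
        return g * h + (1 - g) * s                     # head items can lean on h, tail items on s
```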

December 14, 2025 · 4 min · Zelina

Spurious Minds: How Embedding Regularization Could Fix Bias at Its Roots

Why this matters now: Modern AI models are astonishingly good at pattern recognition—and dangerously bad at knowing which patterns matter. A neural network that labels birds can achieve 95% accuracy on paper yet collapse when the background changes from lake to desert. This fragility stems from spurious correlations—the model’s habit of linking labels to irrelevant cues like color, lighting, or background texture. The deeper the network, the deeper the bias embeds. ...
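
As one concrete reading of what fixing bias at its roots via embedding regularization can look like, here is a hypothetical penalty (not necessarily the paper's objective): it discourages every embedding dimension from correlating with a known spurious attribute such as background type.

```python
import torch

def spurious_decorrelation_penalty(embeddings, spurious_attr):
    """Hypothetical regularizer: mean squared Pearson correlation between each
    embedding dimension and a per-sample spurious attribute (e.g., background)."""
    z = embeddings - embeddings.mean(dim=0, keepdim=True)   # center each dimension
    a = spurious_attr.float()
    a = a - a.mean()                                        # center the attribute
    zn = z / (z.norm(dim=0, keepdim=True) + 1e-8)           # unit-norm columns
    an = a / (a.norm() + 1e-8)
    corr = zn.T @ an                                        # Pearson correlation per dimension
    return corr.pow(2).mean()

# Sketch of use in a training step (names are illustrative):
# loss = task_loss + reg_weight * spurious_decorrelation_penalty(features, background_ids)
```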

November 8, 2025 · 4 min · Zelina