
Fine-Tuning Without Fine-Tuning: How Fints Reinvents Personalization at Inference Time

Opening — Why this matters now

Personalization has long been the Achilles’ heel of large language models (LLMs). Despite their impressive fluency, they often behave like charming strangers: articulate, but impersonal. As AI assistants, tutors, and agents move toward the mainstream, the inability to adapt instantly to user preferences isn’t just inconvenient — it’s commercially limiting. Retraining is costly; prompt-tweaking is shallow. The question is: can a model become personal without being retrained? ...

November 5, 2025 · 4 min · Zelina

Love in the Time of Context: Why LLMs Still Don't Get You

Personalization is the love language of AI. But today’s large language models (LLMs) are more like well-meaning pen pals than mind-reading confidants. They remember your name, maybe your writing style — but the moment the context shifts, they stumble. The CUPID benchmark, introduced in a recent COLM 2025 paper, shows just how wide the gap still is between knowing the user and understanding them in context.

Beyond Global Preferences: The Rise of Contextual Alignment

Most LLMs that claim to be “personalized” assume you have stable, monolithic preferences. If you like bullet points, they’ll always give you bullet points. If you once asked for a formal tone, they’ll keep things stiff forever. ...

August 5, 2025 · 4 min · Zelina