
When Maps Start Thinking: Teaching Agents to Plan in Time and Space

Opening — Why this matters now

AI can already write poetry, debug code, and argue philosophy. Yet ask most large language models to plan a realistic trip—respecting time, geography, traffic, weather, and human constraints—and they quietly fall apart. Real-world planning is messy, asynchronous, and unforgiving. Unlike in math problems, you cannot hallucinate a charging station that does not exist. ...

January 1, 2026 · 3 min · Zelina

OrchestRA and the End of Linear Drug Discovery

Opening — Why this matters now

Drug discovery has a reputation problem. It is slow, expensive, and structurally brittle. Despite exponential growth in biomedical data and modeling tools, R&D productivity has declined for decades. The core reason is not lack of intelligence — human or artificial — but fragmentation. Biology, chemistry, and pharmacology still operate like loosely coupled departments passing half-finished work downstream. ...

December 29, 2025 · 3 min · Zelina

When One Clip Isn’t Enough: Teaching LLMs to Watch Long Videos Like Adults

Opening — Why this matters now

Large language models have learned to see. Unfortunately, they still have the attention span of a distracted intern when the video runs longer than a minute. As multimodal LLMs expand their context windows and promise “end-to-end” video understanding, a hard reality remains: long videos are not just longer inputs—they are fundamentally different reasoning problems. Information is sparse, temporally distant, multimodal, and often only meaningful when grounded precisely in time and space. Compress everything up front, and you lose the evidence. Don’t compress, and you blow the context budget. ...

December 24, 2025 · 4 min · Zelina

Replace, Don’t Expand: When RAG Learns to Throw Things Away

Opening — Why this matters now

RAG systems are having an identity crisis. On paper, retrieval-augmented generation is supposed to ground large language models in facts. In practice, when queries require multi-hop reasoning, most systems panic and start hoarding context like it’s a survival skill. Add more passages. Expand the window. Hope the model figures it out. ...

December 12, 2025 · 4 min · Zelina

Fine-Tuning Without Fine-Tuning: How Fints Reinvents Personalization at Inference Time

Opening — Why this matters now

Personalization has long been the Achilles’ heel of large language models (LLMs). Despite their impressive fluency, they often behave like charming strangers—articulate but impersonal. As AI assistants, tutors, and agents move toward the mainstream, the inability to adapt instantly to user preferences isn’t just inconvenient—it’s commercially limiting. Retraining is costly; prompt-tweaking is shallow. The question is: can a model become personal without being retrained? ...

November 5, 2025 · 4 min · Zelina