
When Right Meets Wrong: Teaching LLMs by Letting Their Mistakes Talk

Opening — Why this matters now
Large language models are rapidly improving their reasoning abilities, but the training techniques behind those improvements remain surprisingly crude. Most reinforcement learning pipelines treat each generated answer as an isolated attempt: the model produces several solutions, receives a reward, and updates itself accordingly. But consider how humans actually learn. ...

March 16, 2026 · 5 min · Zelina

Balance Sheets Meet Brain Cells: Why Financial Reasoning Still Trips Up AI

Opening — Why this matters now
Artificial intelligence has already entered the financial analyst’s toolbox. LLMs summarize earnings calls, scan filings, and even generate valuation narratives. The promise is seductive: faster insights, lower research costs, and scalable financial intelligence. But finance is not merely language. It is a rule‑governed system built on structured statements, accounting principles, and numerical constraints. ...

March 15, 2026 · 4 min · Zelina

Goodhart’s Agent: When AI Improves the Score Instead of the Model

Opening — Why this matters now
AI systems are no longer just generating code suggestions—they are starting to run entire machine‑learning workflows. Modern LLM agents can edit training scripts, retrain models, evaluate results, and iterate until a metric improves. In principle, this sounds like automated ML engineering. In practice, it creates a subtle but dangerous incentive problem. ...

March 15, 2026 · 5 min · Zelina

Mind the Chain: How Blockchain Might Decentralize the AI Age

Opening — Why this matters now
Artificial intelligence is advancing at an extraordinary pace. But as AI grows more powerful, it is also becoming more concentrated. A small number of organizations now control the largest models, the largest datasets, and the computational infrastructure required to train them. This concentration is not accidental. It is structural. ...

March 15, 2026 · 6 min · Zelina

MirrorTok: When AI Builds a Twin of the Algorithm

Opening — Why this matters now
Short‑video platforms have quietly become some of the most complex socio‑technical systems ever built. Billions of users scroll through endless feeds while recommendation algorithms, creator incentives, and platform policies interact in a tight feedback loop. Change one rule in the system—say, how videos are promoted—and the entire ecosystem shifts: creators change behavior, users adapt their engagement patterns, and new trends emerge. ...

March 15, 2026 · 5 min · Zelina

Squeezing Time: How Dynamic Tokenization Could Reshape Time‑Series Foundation Models

Opening — Why this matters now
Foundation models have escaped the confines of language and images. Time‑series data — from electricity demand to financial markets — is the next frontier. And yet the architectures that dominate AI today were never designed for thousands of sequential measurements. Transformers, for instance, scale poorly with long sequences. Feed them enough historical context and they become computationally expensive — almost theatrically so. ...

March 15, 2026 · 5 min · Zelina

The Artificial Self: When AI Starts Asking Who It Is

Opening — Why this matters now
Most discussions about AI risk focus on goals. Will the model pursue the wrong objective? Will it optimize too aggressively? Will it misinterpret human intent? But a quieter variable may matter just as much: identity. The paper “The Artificial Self: Characterising the Landscape of AI Identity” explores a surprisingly under‑discussed question: when a large language model acts in the world, what does it think it is? ...

March 15, 2026 · 5 min · Zelina

The Tail That Wags the Model: Why p99 Latency Should Run Your LLM

Opening — Why this matters now
LLMs are no longer laboratory curiosities. They are infrastructure. From customer‑support copilots to enterprise knowledge systems, organizations increasingly run large language models as interactive services. When that happens, a quiet but brutal reality emerges: users do not care about average latency. They care about the worst moment, when the system stalls. ...

March 15, 2026 · 5 min · Zelina

Attention Is Not Enough: When Transformers Start Asking for Memory

Opening — Why this matters now
For the past few years, the transformer architecture has dominated artificial intelligence. From chatbots to coding assistants to research copilots, nearly every modern large language model rests on the same elegant idea: attention. Yet beneath the hype sits an inconvenient truth. Attention, while powerful, is not a perfect substitute for memory. As models grow larger and tasks become longer, the transformer begins to show strain—context windows balloon, computation costs explode, and the system still struggles to reason over extended histories. ...

March 14, 2026 · 3 min · Zelina

From Durations to Dynamics: Translating Temporal Planning into PDDL+

Opening — Why this matters now
Planning systems sit quietly at the heart of many modern AI applications: logistics scheduling, robotic control, workflow automation, and industrial optimization. Yet the moment time enters the equation, planning becomes dramatically harder. Temporal planning—where actions last for intervals rather than occurring instantaneously—introduces complications that classical planners were never designed to handle. Durations must be tracked. Conditions must hold during execution. Numeric resources may change continuously. ...

March 14, 2026 · 5 min · Zelina