Cover image

Goodhart’s Agent: When AI Improves the Score Instead of the Model

Opening — Why this matters now AI systems are no longer just generating code suggestions—they are starting to run entire machine‑learning workflows. Modern LLM agents can edit training scripts, retrain models, evaluate results, and iterate until a metric improves. In principle, this sounds like automated ML engineering. In practice, it creates a subtle but dangerous incentive problem. ...

March 15, 2026 · 5 min · Zelina
Cover image

Mind the Chain: How Blockchain Might Decentralize the AI Age

Opening — Why this matters now Artificial intelligence is advancing at an extraordinary pace. But as AI grows more powerful, it is also becoming more concentrated. A small number of organizations now control the largest models, the largest datasets, and the computational infrastructure required to train them. This concentration is not accidental. It is structural. ...

March 15, 2026 · 6 min · Zelina
Cover image

MirrorTok: When AI Builds a Twin of the Algorithm

Opening — Why this matters now Short‑video platforms have quietly become some of the most complex socio‑technical systems ever built. Billions of users scroll through endless feeds while recommendation algorithms, creator incentives, and platform policies interact in a tight feedback loop. Change one rule in the system—say how videos are promoted—and the entire ecosystem shifts: creators change behavior, users adapt their engagement patterns, and new trends emerge. ...

March 15, 2026 · 5 min · Zelina
Cover image

Squeezing Time: How Dynamic Tokenization Could Reshape Time‑Series Foundation Models

Opening — Why this matters now Foundation models have escaped the confines of language and images. Time‑series data — from electricity demand to financial markets — is the next frontier. And yet the architectures that dominate AI today were never designed for thousands of sequential measurements. Transformers, for instance, scale poorly with long sequences. Feed them enough historical context and they become computationally expensive — almost theatrically so. ...

March 15, 2026 · 5 min · Zelina
Cover image

The Artificial Self: When AI Starts Asking Who It Is

Opening — Why this matters now Most discussions about AI risk focus on goals. Will the model pursue the wrong objective? Will it optimize too aggressively? Will it misinterpret human intent? But a quieter variable may matter just as much: identity. The paper “The Artificial Self: Characterising the Landscape of AI Identity” explores a surprisingly under‑discussed question: when a large language model acts in the world, what does it think it is? ...

March 15, 2026 · 5 min · Zelina
Cover image

The Tail That Wags the Model: Why p99 Latency Should Run Your LLM

Opening — Why this matters now LLMs are no longer laboratory curiosities. They are infrastructure. From customer‑support copilots to enterprise knowledge systems, organizations increasingly run large language models as interactive services. When that happens, a quiet but brutal reality emerges: users do not care about average latency. They care about the worst moment when the system stalls. ...

March 15, 2026 · 5 min · Zelina
Cover image

Attention Is Not Enough: When Transformers Start Asking for Memory

Opening — Why this matters now For the past few years, the transformer architecture has dominated artificial intelligence. From chatbots to coding assistants to research copilots, nearly every modern large language model rests on the same elegant idea: attention. Yet beneath the hype sits an inconvenient truth. Attention, while powerful, is not a perfect substitute for memory. As models grow larger and tasks become longer, the transformer begins to show strain—context windows balloon, computation costs explode, and the system still struggles to reason over extended histories. ...

March 14, 2026 · 3 min · Zelina
Cover image

From Durations to Dynamics: Translating Temporal Planning into PDDL+

Opening — Why this matters now Planning systems sit quietly at the heart of many modern AI applications: logistics scheduling, robotic control, workflow automation, and industrial optimization. Yet the moment time enters the equation, planning becomes dramatically harder. Temporal planning—where actions last for intervals rather than occurring instantaneously—introduces complications that classical planners were never designed to handle. Durations must be tracked. Conditions must hold during execution. Numeric resources may change continuously. ...

March 14, 2026 · 5 min · Zelina
Cover image

Green Lights, Smarter Cities: How Multi‑Agent Reinforcement Learning Is Rewiring Urban Traffic

Opening — Why this matters now Every modern city has the same quiet enemy: the traffic light. Not the hardware itself, of course, but the logic behind it. Most intersections still run on pre‑programmed schedules designed by traffic engineers years earlier. Rush hour arrives, a lane unexpectedly fills, and the light calmly continues its fixed cycle—green for empty roads, red for congested ones. ...

March 14, 2026 · 6 min · Zelina
Cover image

Print Smarter, Not Harder: How Portfolio Algorithms Are Quietly Optimizing 3D Printing

Opening — Why this matters now 3D printing has quietly evolved from hobbyist gadgetry into a serious manufacturing tool. Small-batch production, rapid prototyping, and distributed manufacturing increasingly rely on additive manufacturing systems. Yet a surprisingly mundane problem sits at the heart of many printing workflows: how to place multiple objects on a printing plate and determine the order in which they should be printed. ...

March 14, 2026 · 5 min · Zelina