
Attention Is Not Enough: When Transformers Start Asking for Memory

Opening — Why this matters now

For the past few years, the transformer architecture has dominated artificial intelligence. From chatbots to coding assistants to research copilots, nearly every modern large language model rests on the same elegant idea: attention. Yet beneath the hype sits an inconvenient truth. Attention, while powerful, is not a perfect substitute for memory. As models grow larger and tasks become longer, the transformer begins to show strain—context windows balloon, computation costs explode, and the system still struggles to reason over extended histories. ...

March 14, 2026 · 3 min · Zelina

From Durations to Dynamics: Translating Temporal Planning into PDDL+

Opening — Why this matters now

Planning systems sit quietly at the heart of many modern AI applications: logistics scheduling, robotic control, workflow automation, and industrial optimization. Yet the moment time enters the equation, planning becomes dramatically harder. Temporal planning—where actions last for intervals rather than occurring instantaneously—introduces complications that classical planners were never designed to handle. Durations must be tracked. Conditions must hold during execution. Numeric resources may change continuously. ...

March 14, 2026 · 5 min · Zelina

Green Lights, Smarter Cities: How Multi‑Agent Reinforcement Learning Is Rewiring Urban Traffic

Opening — Why this matters now

Every modern city has the same quiet enemy: the traffic light. Not the hardware itself, of course, but the logic behind it. Most intersections still run on pre‑programmed schedules designed by traffic engineers years earlier. Rush hour arrives, a lane unexpectedly fills, and the light calmly continues its fixed cycle—green for empty roads, red for congested ones. ...

March 14, 2026 · 6 min · Zelina

Print Smarter, Not Harder: How Portfolio Algorithms Are Quietly Optimizing 3D Printing

Opening — Why this matters now

3D printing has quietly evolved from hobbyist gadgetry into a serious manufacturing tool. Small-batch production, rapid prototyping, and distributed manufacturing increasingly rely on additive manufacturing systems. Yet a surprisingly mundane problem sits at the heart of many printing workflows: how to place multiple objects on a printing plate and determine the order in which they should be printed. ...

March 14, 2026 · 5 min · Zelina

Too Smart to Share: When AI Agents Get Smarter, Systems Get Worse

Opening — Why this matters now

The next generation of AI will not live in the cloud alone. It will live everywhere. Autonomous cars negotiating intersections. Drones sharing relay bandwidth. Medical devices competing for wireless channels in hospital wards. Electric vehicles choosing whether to queue for a charging slot. In these environments, AI systems are not solving isolated problems — they are competing for finite shared resources. ...

March 14, 2026 · 5 min · Zelina

Topology Trouble: Why Even Frontier LLMs Still Get Lost in a Grid

Opening — Why this matters now

Large language models are increasingly marketed as general reasoning systems. They write code, solve math problems, and even pass professional exams. Naturally, businesses are beginning to assume that these models can reason about any structured problem given the right prompt. The paper introducing TopoBench offers a rather sobering reality check. ...

March 14, 2026 · 4 min · Zelina

When Models Forget How to Learn: The Hidden Bottleneck in LLM Training

Opening — Why this matters now

Every generation of large language models promises a simple narrative: more data, larger models, better intelligence. The industry’s scaling laws seem reassuringly linear. Add tokens, add parameters, add GPUs — intelligence emerges. But occasionally a paper appears that quietly disrupts this narrative. Not by introducing a bigger model or a clever benchmark, but by pointing out something structurally wrong with how we train them. ...

March 14, 2026 · 4 min · Zelina

Agents With Memory: Turning Execution Logs into Institutional Knowledge

Opening — Why this matters now

Most AI agents today suffer from a strange form of amnesia. They can reason, plan, call APIs, browse the web, and orchestrate workflows. But once the task is finished, the experience disappears. The next time the same task appears, the agent starts again from scratch — repeating the same mistakes, inefficiencies, and blind guesses. ...

March 13, 2026 · 6 min · Zelina

Audit the Bots: When AI Judges the Work of Other AI

Opening — Why this matters now

Autonomous computer agents are quietly learning to use your computer. Not metaphorically. Literally. A new class of systems—Computer‑Use Agents (CUAs)—can read your instruction, observe the screen, and operate graphical interfaces the way a human would: clicking buttons, typing text, navigating menus, scrolling documents. In theory, they can complete everyday digital tasks across applications without dedicated APIs or custom automation scripts. ...

March 13, 2026 · 6 min · Zelina

Diagnosis, But Make It Iterative: When AI Learns Like a Doctor

Opening — Why this matters now

AI models already score impressively on medical exams. They diagnose diseases in curated benchmarks and summarize clinical literature with startling fluency. And yet, hospitals remain cautious. The reason is simple: real diagnosis is not a one-shot prediction problem. A clinician rarely receives a complete patient record and instantly outputs a diagnosis. Instead, they run an investigation. They ask questions, order tests, interpret results, and revise hypotheses. The process unfolds sequentially, often under uncertainty. ...

March 13, 2026 · 5 min · Zelina