
Compression, But Make It Pedagogical: Rate–Distortion KGs for Smarter AI Learning Assistants

Opening — Why This Matters Now
The age of AI-powered learning assistants has arrived, but most of them still behave like overeager interns—confident, quick, and occasionally catastrophically wrong. The weakest link isn’t the models; it’s the structure (or lack thereof) behind their reasoning. Lecture notes fed directly into an LLM produce multiple-choice questions with the usual suspects: hallucinations, trivial distractors, and the unmistakable scent of “I made this up.” ...

November 20, 2025 · 6 min · Zelina

Flip the Switch: How Heterogeneous Agents Learn to Restore the Grid

Opening — Why this matters now
Extreme weather, brittle infrastructure, and decentralised energy markets are converging into one perennial headache: when the power goes out, restoring it is neither quick nor cheap. Utilities increasingly rely on automation and AI assistance, but most existing systems buckle under the messy, nonlinear physics of real distribution networks. Restoration isn’t just an optimisation puzzle — it’s an orchestration of microgrids, generators, constraints, and switching actions that cascade through the system. ...

November 20, 2025 · 4 min · Zelina

Prompted and Confused: When LLMs Forget the Assignment

Opening — Why this matters now
The industry narrative says LLMs are marching confidently toward automating everything from tax audits to telescope alignment. Constraint programming — the backbone of scheduling, routing, and resource allocation — is often portrayed as the next domain ripe for “LLM takeover.” Just describe your optimisation problem in plain English and voilà: a clean, executable model. ...

November 20, 2025 · 4 min · Zelina

Skills to Pay the Agent Bills: Why LLMs Need Better Moves, Not Bigger Models

Opening — Why This Matters Now
Large language model agents are expanding into tasks that look suspiciously like real work: navigating UIs, operating tools, and making sequential decisions in messy environments. The industry’s response has been predictable—give the model more context, more examples, more memory, more everything. But bigger prompts aren’t the same as better reasoning. Most agents still wander around like interns on their first day: energetic, but directionless. ...

November 20, 2025 · 4 min · Zelina

Thresholds, Trade-offs, and the Art of Not Overthinking Your Robot

Opening — Why this matters now
The current wave of robotics and agentic AI is colliding with a familiar enemy: uncertainty. You can train a visual model to spot a cup, a box, or an inexplicably glossy demo object—but when those predictions get fed into a planner, the whole pipeline begins to wobble. Businesses deploying AI agents in warehouses, kitchens, labs, or digital environments need systems that don’t fold the moment the camera blinks. ...

November 20, 2025 · 4 min · Zelina

Tools of Habit: Why LLM Agents Benefit from a Little Inertia

Opening — Why this matters now
LLM agents are finally doing real work—querying APIs, navigating unstructured systems, solving multi-step tasks. But their shiny autonomy hides a quiet tax: every tool call usually means another LLM inference. And when you chain many of them together (as all interesting workflows do), latency and cost balloon. ...

November 20, 2025 · 4 min · Zelina

Value Collision Course: When LLM Alignment Plays Favorites

Opening — Why this matters now
The industry is finally waking up to an uncomfortable truth: AI alignment isn’t a monolithic engineering task—it’s a political act wrapped in an optimization problem. Every time we say a model is “safe,” we leave one question unanswered: safe for whom? A new empirical study puts hard numbers behind what many practitioners suspected but lacked the data to prove: the way we collect, compress, and optimize human feedback implicitly privileges certain groups over others. And in a world where LLMs increasingly mediate customer service, financial advice, hiring flows, and mental-health interactions, this is not an academic quibble—it’s a governance risk hiding in plain sight. ...

November 20, 2025 · 5 min · Zelina

Ask, Navigate, Repeat: Why Socially Aware Agents Are the Next Frontier

Opening — Why this matters now
The AI industry has spent the past two years obsessing over what large models can say. Less attention has gone to what they can do—and, more importantly, how they behave around humans. As robotics companies race to deploy humanoid form factors and VR environments inch closer to becoming training grounds for embodied agents, we face a new tension: agents that can follow instructions aren’t necessarily agents that can ask, adapt, or navigate socially. ...

November 18, 2025 · 4 min · Zelina

Benchmarked Brilliance: How CreBench Rewrites the Rules of Machine Creativity

Opening — Why This Matters Now
Creativity has finally become quantifiable—at least according to the latest wave of multimodal models promising artistic flair, design reasoning, and conceptual imagination. But here’s the problem: no one actually agrees on what “machine creativity” means, much less how to measure it. Enter CreBench, a benchmark that doesn’t just test whether models can invent shiny things—it evaluates whether they understand creativity the way humans do: from the spark of an idea, through the messy iterative process, to the final visual output. In a world where AI increasingly participates in ideation and design workflows, this shift isn’t optional; it’s overdue. ...

November 18, 2025 · 4 min · Zelina

Ghostwriters in the Machine: How Multi‑Agent LLMs Turn Raw Transport Data Into Decisions

Opening — Why this matters now
Public transport operators are drowning in telemetry. Fuel logs, route patterns, driver behavior metrics—every dataset promises “efficiency,” but most decision-makers receive only scatterplots and silence. As AI sweeps through industry, the bottleneck is no longer data generation but data interpretation. The paper we examine today argues that multimodal LLMs—when arranged in a disciplined multi‑agent architecture—can convert analytical clutter into credible, consistent, human-ready narratives. Not hype. Not dashboards. Actual decisions. ...

November 18, 2025 · 3 min · Zelina