Stop or Strip? Teaching Disassembly When to Quit

Opening — Why this matters now

Circular economy rhetoric is everywhere. Circular economy decision-making is not. Most end-of-life products still follow a depressingly simple rule: disassemble until it hurts, or stop when the operator gets tired. The idea that we might formally decide when to stop disassembling — based on value, cost, safety, and information — remains oddly underdeveloped. This gap is no longer academic. EV batteries, e‑waste, and regulated industrial equipment are forcing operators to choose between speed, safety, and sustainability under real constraints. ...
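
A minimal sketch of what such a formal stopping rule could look like, as a greedy marginal-value criterion (the step names, numbers, and hazard penalty below are illustrative assumptions, not the paper's model):

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    value: float   # expected resale/recycling value recovered by this step
    cost: float    # labor + tooling cost of performing this step
    hazard: bool   # True if the step exposes a regulated hazard (e.g. live cells)

def stop_or_strip(steps: list[Step], hazard_penalty: float = 50.0) -> list[str]:
    """Greedy stopping rule: keep disassembling while the next step's
    expected net value (value - cost - hazard penalty) stays positive."""
    performed = []
    for step in steps:
        net = step.value - step.cost - (hazard_penalty if step.hazard else 0.0)
        if net <= 0:
            break  # formally "stop": further stripping destroys value
        performed.append(step.name)
    return performed

# Toy EV-battery-pack example (illustrative numbers only)
plan = [
    Step("remove casing", value=30, cost=5, hazard=False),
    Step("extract modules", value=120, cost=40, hazard=False),
    Step("open modules", value=60, cost=45, hazard=True),  # net negative once hazard is priced in
]
print(stop_or_strip(plan))  # ['remove casing', 'extract modules']
```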

December 20, 2025 · 4 min · Zelina

The Ethics of Not Knowing: When Uncertainty Becomes an Obligation

Opening — Why this matters now

Modern systems act faster than their understanding. Algorithms trade in microseconds, clinical protocols scale across populations, and institutions make irreversible decisions under partial information. Yet our ethical vocabulary remains binary: act or abstain, know or don’t know, responsible or not. That binary is failing. The paper behind this article introduces a deceptively simple idea with uncomfortable implications: uncertainty does not reduce moral responsibility — it reallocates it. When confidence falls, duty does not disappear. It migrates. ...

December 20, 2025 · 4 min · Zelina

Adversaries, Slices, and the Art of Teaching LLMs to Think

Opening — Why this matters now

Large language models can already talk their way through Olympiad math, but they still stumble in embarrassingly human ways: a missed parity condition, a silent algebra slip, or a confident leap over an unproven claim. The industry’s usual fix—reward the final answer and hope the reasoning improves—has reached diminishing returns. Accuracy nudges upward, but reliability remains brittle. ...
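
For concreteness, the "reward the final answer" recipe boils down to an outcome-only scorer like the toy below (a generic sketch of outcome-based reward, not the paper's training setup): a solution with a silent algebra slip earns exactly the same reward as a flawless one.

```python
def outcome_reward(solution: str, gold_answer: str) -> float:
    """Outcome-only reward: 1.0 if the final answer matches, else 0.0.
    Nothing about the intermediate reasoning affects the score."""
    final = solution.strip().splitlines()[-1]  # naive: take the last line as the answer
    return 1.0 if gold_answer in final else 0.0

# Two very different solutions, identical reward:
flawed  = "Assume n is even (unjustified).\nAnswer: 42"
correct = "Case n even: ... Case n odd: ...\nAnswer: 42"
assert outcome_reward(flawed, "42") == outcome_reward(correct, "42") == 1.0
```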

December 19, 2025 · 4 min · Zelina

AGI by Committee: Why the First General Intelligence Won’t Arrive Alone

Opening — Why this matters now

For years, AGI safety discussions have revolved around a single, looming figure: the model. One system. One alignment problem. One decisive moment. That mental model is tidy — and increasingly wrong. The paper “Distributional AGI Safety” argues that AGI is far more likely to emerge not as a monolith, but as a collective outcome: a dense web of specialized, sub‑AGI agents coordinating, trading capabilities, and assembling intelligence the way markets assemble value. AGI, in this framing, is not a product launch. It is a phase transition. ...

December 19, 2025 · 4 min · Zelina

CitySeeker: Lost in Translation, Found in the City

Opening — Why this matters now

Urban navigation looks deceptively solved. We have GPS, street-view imagery, and multimodal models that can describe a scene better than most humans. And yet, when vision-language models (VLMs) are asked to actually navigate a city — not just caption it — performance collapses in subtle, embarrassing ways. The gap is no longer about perception quality. It is about cognition: remembering where you have been, knowing when you are wrong, and understanding implicit human intent. This is the exact gap CitySeeker is designed to expose. ...
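
As a toy illustration of one missing cognitive piece named above, here is a visited-location memory that raises a "probably lost" signal on repeated revisits (the class, threshold, and node IDs are hypothetical, not part of CitySeeker):

```python
from collections import Counter

class RouteMemory:
    """Toy episodic memory for a navigation agent: remembers visited
    intersections and flags likely loops as a self-doubt signal."""
    def __init__(self, revisit_limit: int = 2):
        self.visits = Counter()
        self.revisit_limit = revisit_limit

    def observe(self, node_id: str) -> bool:
        """Record a visit; return True if the agent should suspect it is wrong."""
        self.visits[node_id] += 1
        return self.visits[node_id] > self.revisit_limit

mem = RouteMemory()
for node in ["a", "b", "c", "b", "a", "b"]:
    if mem.observe(node):
        print(f"revisited {node} too often -> backtrack and re-plan")
```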

December 19, 2025 · 3 min · Zelina

Painkillers with Foresight: Teaching Machines to Anticipate Cancer Pain

Opening — Why this matters now

Cancer pain is rarely a surprise to clinicians. Yet it still manages to arrive uninvited, often at night, often under-treated, and almost always after the window for calm, preventive adjustment has closed. In lung cancer wards, up to 90% of patients experience moderate to severe pain episodes — and most of these episodes are predictable in hindsight. ...
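
To make "predictable in hindsight" operational, an anticipatory system only needs to score a recent observation window and alert before the episode. The sketch below is deliberately naive; every feature, weight, and threshold is invented for illustration and nothing here is a validated clinical model:

```python
def pain_risk(window: list[dict]) -> float:
    """Toy risk score over the last few hourly observations.
    Features and weights are illustrative, not clinically derived."""
    weights = {"pain_score": 0.5, "rescue_doses": 0.3, "sleep_disruption": 0.2}
    latest = window[-1]
    trend = latest["pain_score"] - window[0]["pain_score"]  # rising pain matters
    return sum(weights[k] * latest[k] for k in weights) + 0.4 * max(trend, 0)

obs = [
    {"pain_score": 2, "rescue_doses": 0, "sleep_disruption": 0},
    {"pain_score": 3, "rescue_doses": 1, "sleep_disruption": 1},
    {"pain_score": 5, "rescue_doses": 2, "sleep_disruption": 1},
]
if pain_risk(obs) > 3.0:
    print("anticipatory review recommended before the night shift")
```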

December 19, 2025 · 4 min · Zelina

Stack Overflow for Ethics: Governing AI with Feedback, Not Faith

Opening — Why this matters now

AI governance is stuck in a familiar failure mode: we have principles everywhere and enforcement nowhere. Fairness. Transparency. Accountability. Autonomy. Every serious AI organization can recite them fluently. Very few can tell you where these values live in the system, how they are enforced at runtime, or who is responsible when the model drifts quietly into social damage six months after launch. ...
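
One way to make "where these values live" concrete is a registry that binds each declared principle to an executable runtime check and treats failures as feedback. The mechanism below is my own illustrative assumption, not the paper's architecture; only the principle names come from the paragraph:

```python
from typing import Callable

# Each declared value is bound to a runtime check over a model response.
checks: dict[str, Callable[[dict], bool]] = {
    "transparency": lambda r: bool(r.get("rationale")),            # must explain itself
    "fairness": lambda r: "protected_attr" not in r.get("features_used", []),
    "accountability": lambda r: bool(r.get("owner")),              # someone is on the hook
}

def enforce(response: dict) -> list[str]:
    """Return the list of violated values; callers log this as governance feedback."""
    return [name for name, check in checks.items() if not check(response)]

resp = {"rationale": "", "features_used": ["income", "protected_attr"], "owner": "team-risk"}
print(enforce(resp))  # ['transparency', 'fairness'] -> feedback, not faith
```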

December 19, 2025 · 5 min · Zelina

TOGGLE or Die Trying: Giving LLM Compression a Spine

Opening — Why this matters now

LLM compression is having an identity crisis. On one side, we have brute-force pragmatists: quantize harder, prune deeper, pray nothing important breaks. On the other, we have theoreticians insisting that something essential is lost — coherence, memory, truthfulness — but offering little beyond hand-waving and validation benchmarks. As LLMs creep toward edge deployment — embedded systems, on-device assistants, energy‑capped inference — this tension becomes existential. You can’t just say “it seems fine.” You need guarantees. Or at least something better than vibes. ...
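
To show what "better than vibes" could mean at the cheapest possible level, here is uniform 8-bit quantization of a weight tensor together with the one guarantee it admits for free: a worst-case reconstruction error of half a quantization step (a NumPy sketch, unrelated to TOGGLE's actual machinery):

```python
import numpy as np

def quantize_uniform(w: np.ndarray, bits: int = 8):
    """Uniform symmetric quantization onto {-(2^(b-1)-1) .. 2^(b-1)-1}."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.round(w / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)
q, scale = quantize_uniform(w)
w_hat = q.astype(np.float32) * scale

# The cheap guarantee: reconstruction error is bounded by half a quantization step.
max_err = np.abs(w - w_hat).max()
assert max_err <= scale / 2 + 1e-6, "quantizer broke its own error bound"
print(f"max |w - w_hat| = {max_err:.5f}  (bound: {scale / 2:.5f})")
```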

December 19, 2025 · 4 min · Zelina

When Black Boxes Grow Teeth: Mapping What AI Can *Actually* Do

Opening — Why this matters now

We are deploying black-box AI systems faster than we are understanding them. Large language models, vision–language agents, and robotic controllers are increasingly asked to do things, not just answer questions. And yet, when these systems fail, the failure is rarely spectacular—it is subtle, conditional, probabilistic, and deeply context-dependent. ...
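
A sketch of what "mapping" a conditional, probabilistic capability might look like: probe the same task under several conditions, many trials each, and report a success rate per condition rather than a single pass/fail verdict (the probe harness and the stand-in system are invented for illustration):

```python
import random

def probe(system, task: str, conditions: list[str], trials: int = 50) -> dict[str, float]:
    """Estimate P(success | condition) for a black-box system by repeated trials."""
    return {
        cond: sum(system(task, cond) for _ in range(trials)) / trials
        for cond in conditions
    }

# Stand-in black box whose competence quietly depends on context.
def fake_system(task: str, condition: str) -> bool:
    p = {"well-lit": 0.9, "low-light": 0.55, "occluded": 0.2}[condition]
    return random.random() < p

random.seed(1)
print(probe(fake_system, "grasp object", ["well-lit", "low-light", "occluded"]))
```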

December 19, 2025 · 3 min · Zelina

Artism, or How AI Learned to Critique Itself

Opening — Why this matters now

AI didn’t kill originality. It industrialized its absence. Contemporary art has been circling the same anxiety for decades: the sense that everything has already been done, named, theorized, archived. AI merely removed the remaining friction. What once took years of study and recombination now takes seconds of probabilistic interpolation. The result is not a new crisis, but a visible one. ...

December 18, 2025 · 4 min · Zelina