
When Agents Agree Too Much: Emergent Bias in Multi‑Agent AI Systems

Opening — Why this matters now
Multi‑agent AI systems are having a moment. Debate, reflection, consensus — all the cognitive theater we associate with human committees is now being reenacted by clusters of large language models. In finance, that sounds reassuring. Multiple agents, multiple perspectives, fewer blind spots. Or so the story goes. This paper politely ruins that assumption. ...

December 21, 2025 · 4 min · Zelina

When Tensors Meet Telemedicine: Diagnosing Leukemia at the Edge

Opening — Why this matters now
Healthcare AI has a credibility problem. Models boast benchmark-breaking accuracy, yet quietly fall apart when moved from lab notebooks to hospital workflows. Latency, human-in-the-loop bottlenecks, and fragile classifiers all conspire against real-world deployment. Leukemia diagnosis—especially Acute Lymphocytic Leukemia (ALL)—sits right in the crosshairs of this tension: early detection saves lives, but manual microscopy is slow, subjective, and error-prone. ...

December 21, 2025 · 4 min · Zelina

Prompt-to-Parts: When Language Learns to Build

Opening — Why this matters now
Text-to-image was a party trick. Text-to-3D became a demo. Text-to-something you can actually assemble is where the stakes quietly change. As generative AI spills into engineering, manufacturing, and robotics, the uncomfortable truth is this: most AI-generated objects are visually plausible but physically useless. They look right, but they don’t fit, don’t connect, and certainly don’t come with instructions a human can follow. ...

December 20, 2025 · 4 min · Zelina

Stop or Strip? Teaching Disassembly When to Quit

Opening — Why this matters now
Circular economy rhetoric is everywhere. Circular economy decision-making is not. Most end-of-life products still follow a depressingly simple rule: disassemble until it hurts, or stop when the operator gets tired. The idea that we might formally decide when to stop disassembling — based on value, cost, safety, and information — remains oddly underdeveloped. This gap is no longer academic. EV batteries, e‑waste, and regulated industrial equipment are forcing operators to choose between speed, safety, and sustainability under real constraints. ...

December 20, 2025 · 4 min · Zelina

AGI by Committee: Why the First General Intelligence Won’t Arrive Alone

Opening — Why this matters now
For years, AGI safety discussions have revolved around a single, looming figure: the model. One system. One alignment problem. One decisive moment. That mental model is tidy — and increasingly wrong. The paper “Distributional AGI Safety” argues that AGI is far more likely to emerge not as a monolith, but as a collective outcome: a dense web of specialized, sub‑AGI agents coordinating, trading capabilities, and assembling intelligence the way markets assemble value. AGI, in this framing, is not a product launch. It is a phase transition. ...

December 19, 2025 · 4 min · Zelina

TOGGLE or Die Trying: Giving LLM Compression a Spine

Opening — Why this matters now
LLM compression is having an identity crisis. On one side, we have brute-force pragmatists: quantize harder, prune deeper, pray nothing important breaks. On the other, we have theoreticians insisting that something essential is lost — coherence, memory, truthfulness — but offering little beyond hand-waving and validation benchmarks. As LLMs creep toward edge deployment — embedded systems, on-device assistants, energy‑capped inference — this tension becomes existential. You can’t just say “it seems fine.” You need guarantees. Or at least something better than vibes. ...

December 19, 2025 · 4 min · Zelina

Delegating to the Almost-Aligned: When Misaligned AI Is Still the Rational Choice

Opening — Why this matters now
The AI alignment debate has a familiar rhythm: align the values first, deploy later. Sensible, reassuring—and increasingly detached from reality. In practice, we are already delegating consequential decisions to systems we do not fully understand, let alone perfectly align. Trading algorithms rebalance portfolios, recommendation engines steer attention, and autonomous agents negotiate, schedule, and filter on our behalf. The real question is no longer “Is the AI aligned?” but “Is it aligned enough to justify delegation, given what it can do better than us?” ...

December 18, 2025 · 4 min · Zelina

Reasoning Loops, Not Bigger Brains

Opening — Why this matters now
For the past two years, AI progress has been narrated as a story of scale: more parameters, more data, more compute. Yet the ARC-AGI leaderboard keeps delivering an inconvenient counterexample. Small, scratch-trained models—no web-scale pretraining, no trillion-token diet—are routinely humiliating far larger systems on abstract reasoning tasks. This paper asks the uncomfortable question: where is the reasoning actually coming from? ...

December 17, 2025 · 3 min · Zelina

When Attention Learns to Breathe: Sparse Transformers for Sustainable Medical AI

Opening — Why this matters now
Healthcare AI has quietly run into a contradiction. We want models that are richer—multi-modal, context-aware, clinically nuanced—yet we increasingly deploy them in environments that are poorer: fewer samples, missing modalities, limited compute, and growing scrutiny over energy use. Transformers, the industry’s favorite hammer, are powerful but notoriously wasteful. In medicine, that waste is no longer academic; it is operational. ...

December 17, 2025 · 4 min · Zelina

When Medical AI Stops Guessing and Starts Asking

Opening — Why this matters now
Medical AI has become very good at answering questions. Unfortunately, medicine rarely works that way. Pathology, oncology, and clinical decision-making are not single-query problems. They are investigative processes: observe, hypothesize, cross-check, revise, and only then conclude. Yet most medical AI benchmarks still reward models for producing one-shot answers — neat, confident, and often misleading. This mismatch is no longer academic. As multimodal models edge closer to clinical workflows, the cost of shallow reasoning becomes operational, regulatory, and ethical. ...

December 16, 2025 · 4 min · Zelina