
FAQ It Till You Make It: Fixing LLM Quantization by Teaching Models Their Own Family History

Opening — Why this matters now

Large language models are getting cheaper to run, not because GPUs suddenly became charitable, but because we keep finding new ways to make models forget precision without forgetting intelligence. Post-training quantization (PTQ) is one of the most effective tricks in that playbook. And yet, despite years of algorithmic polish, PTQ still trips over something embarrassingly mundane: the calibration data. ...
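The punchline is easy to make concrete. Here is a minimal, hypothetical sketch (PyTorch; `calibrate_scale` and the fake-quant round trip are illustrative, not the paper's method): the calibration batches alone decide the quantization scale, so unrepresentative calibration data bakes its blind spots directly into the quantized model.

```python
import torch

def calibrate_scale(layer, calib_batches, n_bits=8):
    """Toy PTQ calibration: derive a symmetric quantization scale from the
    activation range observed on a handful of calibration batches."""
    max_abs = 0.0
    with torch.no_grad():
        for x in calib_batches:            # the calibration set drives everything
            max_abs = max(max_abs, layer(x).abs().max().item())
    qmax = 2 ** (n_bits - 1) - 1           # 127 for int8
    return max_abs / qmax

def fake_quant(x, scale, n_bits=8):
    """Round-trip through the integer grid defined by the calibrated scale."""
    qmax = 2 ** (n_bits - 1) - 1
    return (x / scale).round().clamp(-qmax, qmax) * scale
```

If the calibration batches miss the activation ranges that real traffic produces, the scale is wrong and every later input pays for it.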

January 20, 2026 · 4 min · Zelina

Rotate Less, Quantize Better: OptRot and the Geometry of LLM Compression

Opening — Why this matters now

Quantization is no longer a niche optimization; it is the price of admission for deploying large language models at scale. As model sizes balloon and inference budgets stubbornly refuse to follow, post-training quantization (PTQ) has become the default survival strategy. Yet one stubborn problem keeps resurfacing: outliers. A handful of extreme weights—or activations—can quietly wreck an otherwise elegant low‑bit deployment. This paper introduces OptRot, a method that tackles that problem not with more data, more calibration, or more training, but with something almost suspiciously modest: a carefully chosen rotation objective. ...
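To see why a rotation helps at all, consider this toy sketch (not OptRot's objective; a random orthogonal matrix stands in for the optimized rotation, and the 4-bit fake-quant is illustrative): rotating a weight matrix spreads a single extreme outlier across many coordinates, which shrinks the quantization step size, while the rotation can be undone exactly in full precision.

```python
import torch

def random_orthogonal(d, seed=0):
    """Random orthogonal matrix via QR (a stand-in for an optimized rotation)."""
    g = torch.Generator().manual_seed(seed)
    q, _ = torch.linalg.qr(torch.randn(d, d, generator=g))
    return q

def fake_quant(w, n_bits=4):
    """Symmetric per-tensor fake quantization."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

W = torch.randn(512, 512)
W[0, 0] = 50.0                                       # plant one extreme outlier weight

R = random_orthogonal(512)
err_plain = (fake_quant(W) - W).norm()
err_rotated = (fake_quant(W @ R) @ R.T - W).norm()   # quantize in rotated space, rotate back
print(err_plain.item(), err_rotated.item())          # rotated error is typically far lower
```

Which rotation you pick, and what objective it optimizes, is exactly where OptRot's contribution lives.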

January 3, 2026 · 4 min · Zelina

Attention, But Make It Optional

Opening — When more layers stop meaning more intelligence

The scaling era taught us a simple mantra: stack more layers, get better models. Then deployment happened. Suddenly, latency, energy bills, and GPU scarcity started asking uncomfortable questions—like whether every layer in a 40-layer Transformer is actually doing any work. This paper answers that question with unsettling clarity: many attention layers aren’t lazy—they’re deliberately silent. And once you notice that, pruning them becomes less of an optimization trick and more of a design correction. ...
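One way to make "deliberately silent" measurable is sketched below (assumptions, not the paper's procedure: the `self_attn` naming convention and the hook-based probe are placeholders): compare the magnitude of what each attention sublayer adds to the residual stream against the stream itself, and treat near-zero contributors as pruning candidates.

```python
import torch
import torch.nn as nn

def attention_contribution(attn_out: torch.Tensor, residual_in: torch.Tensor) -> float:
    """Relative magnitude of what an attention sublayer adds to the residual
    stream; values near zero flag layers that are effectively silent."""
    return (attn_out.norm() / (residual_in.norm() + 1e-8)).item()

def register_probes(model: nn.Module, scores: dict) -> None:
    """Attach forward hooks to every attention sublayer (assuming modules named
    '*.self_attn') and record their contribution on the next forward pass."""
    for name, module in model.named_modules():
        if name.endswith("self_attn"):
            def hook(mod, inputs, output, name=name):
                out = output[0] if isinstance(output, tuple) else output
                scores[name] = attention_contribution(out, inputs[0])
            module.register_forward_hook(hook)

# After one forward pass, layers whose score stays near zero are removal candidates.
```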

December 27, 2025 · 4 min · Zelina

Graft and Go: How Knowledge Grafting Shrinks AI Without Shrinking Its Brain

If you’ve ever tried to run a powerful AI model on a modest device—say, a drone, a farm robot, or even a Raspberry Pi—you’ve likely hit the wall of hardware limitations. Today’s most accurate models are big, bloated, and brittle when it comes to efficiency. Enter knowledge grafting, a refreshingly biological metaphor for a novel compression technique that doesn’t just trim the fat—it transfers the muscle.

Rethinking Compression: Not What to Cut, But What to Keep

Traditional model optimization methods—quantization, pruning, and distillation—all try to make the best of a difficult trade-off: shrinking the model while limiting the damage to performance. These methods often fall short, especially when you push compression past 5–6x. ...
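In spirit (the paper's actual selection criterion is not reproduced here; ranking filters by L1 norm and the `graft_filters` helper are illustrative assumptions), grafting runs the operation in the opposite direction from pruning: decide what to keep in the donor and transplant it into a smaller host.

```python
import torch
import torch.nn as nn

def graft_filters(donor: nn.Conv2d, keep: int) -> nn.Conv2d:
    """Illustrative 'graft': keep the donor's most informative filters (ranked
    here by L1 norm, an assumption) and plant them into a smaller layer."""
    importance = donor.weight.detach().abs().sum(dim=(1, 2, 3))   # one score per filter
    top = importance.topk(keep).indices
    scion = nn.Conv2d(donor.in_channels, keep,
                      kernel_size=donor.kernel_size,
                      stride=donor.stride, padding=donor.padding,
                      bias=donor.bias is not None)
    with torch.no_grad():
        scion.weight.copy_(donor.weight[top])
        if donor.bias is not None:
            scion.bias.copy_(donor.bias[top])
    return scion
```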

July 28, 2025 · 3 min · Zelina

Unsafe at Any Bit: Patching the Safety Gaps in Quantized LLMs

When deploying large language models (LLMs) on mobile devices, edge servers, or any resource-constrained environment, quantization is the go-to trick. It slashes memory and compute costs by reducing model precision from 16- or 32-bit floating point to 8-bit or even 4-bit integers. But there’s a problem: this efficiency comes at a cost. Quantization can quietly erode the safety guarantees of well-aligned models, making them vulnerable to adversarial prompts and jailbreak attacks. ...
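One concrete way to frame that risk (a hypothetical harness; `generate` and `is_refusal` are placeholder callables, not an existing API) is to measure how much refusal behaviour drifts between a model and its quantized counterpart on the same adversarial prompts.

```python
def refusal_rate(model, prompts, generate, is_refusal):
    """Fraction of adversarial prompts the model refuses to answer."""
    return sum(is_refusal(generate(model, p)) for p in prompts) / len(prompts)

def safety_gap(full_model, quant_model, prompts, generate, is_refusal):
    """Drop in refusal rate caused by quantization; a large positive gap means
    the quantized model complies with jailbreaks its full-precision twin refused."""
    return (refusal_rate(full_model, prompts, generate, is_refusal)
            - refusal_rate(quant_model, prompts, generate, is_refusal))
```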

June 26, 2025 · 3 min · Zelina

The Outlier Is a Lie: Quantization Breakthroughs with OSP

When it comes to deploying large language models (LLMs) efficiently, few challenges are as stubborn—and misunderstood—as activation outliers. For years, engineers have treated them like a natural disaster: unpredictable but inevitable. But what if they’re more like bad habits—learned and fixable? That’s the provocative premise behind a new framework called Outlier-Safe Pre-Training (OSP). Developed by researchers at Korea University and AIGEN Sciences, OSP proposes a simple but radical shift: instead of patching over outliers post hoc with quantization tricks, why not train the model to never form outliers in the first place? ...
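The teaser does not spell out OSP's training recipe, but the "bad habit" it targets is easy to quantify. As a small diagnostic sketch (excess kurtosis is a generic heavy-tail measure, not necessarily the metric the paper uses), you can check whether a tensor of activations has formed outliers at all:

```python
import torch

def excess_kurtosis(x: torch.Tensor) -> float:
    """Heavy-tailedness of activations: roughly 0 for Gaussian-like values,
    large and positive when a few extreme outliers dominate."""
    x = x.flatten().float()
    z = (x - x.mean()) / (x.std() + 1e-8)
    return (z.pow(4).mean() - 3.0).item()

# A near-Gaussian tensor vs. the same tensor with a few planted outliers.
a = torch.randn(100_000)
b = a.clone()
b[:10] = 80.0
print(excess_kurtosis(a), excess_kurtosis(b))   # ~0 vs. a very large value
```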

June 25, 2025 · 3 min · Zelina