
Traces of War: Surviving the LLM Arms Race
The AI frontier is heating up, not just in innovation but in protectionism. As open-source large language models (LLMs) flood the field, a parallel move is underway: foundation model providers are fortifying their most powerful models behind proprietary walls.

A new tactic in this defensive strategy is antidistillation sampling, a method that makes reasoning traces unlearnable for student models without compromising their usefulness to humans. It works by subtly modifying the model's next-token sampling process so that each generated token remains probable under the original model but would lead to higher loss if used to fine-tune a student model. This is done by incorporating gradients from a proxy student model and penalizing tokens that would improve the student's learning (see the sketch below).

In practice, this significantly reduces the effectiveness of distillation. On benchmarks such as GSM8K and MATH, models distilled from antidistilled traces performed 40–60% worse than those trained on regular traces, without harming the original teacher's performance. ...
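To make the mechanics concrete, here is a minimal PyTorch sketch under stated assumptions: `student` is a hypothetical Hugging Face-style proxy student whose forward pass returns `.logits`, `downstream_grad` is a precomputed flat gradient of the student's loss on a held-out downstream task, and the finite-difference estimator is one plausible way to score how learnable each token is for the student. The published method's exact estimator, scaling, and hyperparameters may differ; this is an illustration of the idea, not the reference implementation.

```python
import torch
import torch.nn.functional as F


def estimate_penalties(student, context_ids, downstream_grad, eps=1e-3):
    """Score every vocabulary token by how much fine-tuning on it would
    help the proxy student (a finite-difference sketch, not the exact
    published estimator).

    student:         hypothetical HF-style causal LM (proxy student).
    context_ids:     (1, seq_len) token ids of the context so far.
    downstream_grad: flat gradient of the student's downstream loss,
                     precomputed on a held-out task (assumption).
    Returns a (vocab,) tensor: higher = more useful to the student.
    """
    with torch.no_grad():
        base_logp = F.log_softmax(student(context_ids).logits[0, -1], dim=-1)

        # Take a small parameter step in the loss-reducing direction,
        # then see which tokens become more likely under the student.
        params = torch.nn.utils.parameters_to_vector(student.parameters())
        torch.nn.utils.vector_to_parameters(
            params - eps * downstream_grad, student.parameters()
        )
        pert_logp = F.log_softmax(student(context_ids).logits[0, -1], dim=-1)

        # Restore the original parameters.
        torch.nn.utils.vector_to_parameters(params, student.parameters())

    # Tokens whose likelihood rises after a loss-reducing step are the
    # ones that training on them would most reinforce.
    return (pert_logp - base_logp) / eps


def antidistillation_sample(teacher_logits, penalty_scores, lam=1.0,
                            temperature=1.0):
    """Sample the next token from teacher logits adjusted so that
    tokens most helpful to the proxy student are down-weighted.

    lam trades fidelity to the teacher's distribution against
    unlearnability: lam=0 recovers ordinary sampling.
    """
    adjusted = (teacher_logits - lam * penalty_scores) / temperature
    probs = F.softmax(adjusted, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```

In a full generation loop, the teacher would call `estimate_penalties` once per decoding step and pass the result to `antidistillation_sample`, so each emitted token stays plausible to a human reader while being a poor training signal for a would-be student.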