Compression, But Make It Pedagogical: Rate–Distortion KGs for Smarter AI Learning Assistants
Opening — Why This Matters Now

The age of AI-powered learning assistants has arrived, but most of them still behave like overeager interns—confident, quick, and occasionally catastrophically wrong. The weakest link isn’t the models; it’s the structure (or lack thereof) behind their reasoning. Lecture notes fed directly into an LLM produce multiple-choice questions with the usual suspects: hallucinations, trivial distractors, and the unmistakable scent of “I made this up.” ...