When AI Can Solve But Can't Search: The MathNet Equation
Opening: Why this matters now

The AI industry enjoys announcing that models now perform at medal level on Olympiad mathematics. Impressive headlines. Elegant demos. Much applause. Then MathNet arrives with the social grace of an auditor. This new benchmark shows that while leading models can often solve difficult mathematics, they are far worse at finding related problems, recognizing structural equivalence, or reliably using retrieved examples to improve their reasoning. In practical terms: your AI intern may ace the exam, then fail to locate the right binder. ...