Roots of Understanding: When Transformers Try to Learn the Language of Numbers
Opening — Why this matters now

Modern AI models excel at human language, protein folding, and occasionally pretending to do mathematics. But ask them to infer the prime factorization of a number from a symbolic sequence, and they often blink politely. The paper Testing Transformer Learnability on the Arithmetic Sequence of Rooted Trees asks a sharper question: Can a transformer learn the grammar embedded in the integers themselves? ...