How to Make Neural Networks Talk: Register Automata as Their Unexpected Interpreters
Opening — Why this matters now

Neural networks have grown fluent in everything from stock‑price rhythms to ECG traces, yet we still struggle to explain why they behave the way they do. The interpretability toolbox—saliency maps, linear probes, distilled decision trees—remains oddly mismatched with models that consume continuous‑valued sequences. When inputs are real numbers rather than discrete tokens, “classic” DFA extraction stops being a party trick and becomes a dead end. ...