Thinking About Thinking: When LLMs Start Writing Their Own Report Cards
Opening — Why This Matters Now

For the past two years, reinforcement learning has been the quiet architect behind the reasoning leap of large language models (LLMs). We reward them when they land the right answer. They get better at landing the right answer. Efficient. Scalable. And slightly naive. Because if you only reward the final answer, you are implicitly saying: “I don’t care how you think — just get it right.” ...
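To make the point concrete, here is a minimal sketch (all names hypothetical, not any specific lab's reward model) of what outcome-only reward looks like: the reasoning trace is available but never touches the score.

```python
def outcome_only_reward(reasoning_trace: str, final_answer: str, gold_answer: str) -> float:
    """Score a model response by its final answer alone.

    The reasoning trace is accepted but deliberately ignored -- this is
    the 'I don't care how you think' objective described above.
    """
    _ = reasoning_trace  # never inspected
    return 1.0 if final_answer.strip() == gold_answer.strip() else 0.0


# A careful derivation and a lucky guess earn identical reward:
sound = outcome_only_reward("2 + 2 = 4, so the answer is 4.", "4", "4")
lucky = outcome_only_reward("No idea, I'll guess.", "4", "4")
print(sound, lucky)  # both 1.0
```

Because the trace argument is dead weight, nothing in the gradient signal distinguishes the two responses, which is exactly the blind spot the rest of this piece is about.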