
Who’s Really in Charge? Epistemic Control After the Age of the Black Box

Opening: Why this matters now

Machine learning has become science’s most productive employee, and its most awkward colleague. It delivers predictions at superhuman scale, spots patterns no graduate student could ever see, and does so without asking for coffee breaks or tenure. But as ML systems increasingly mediate discovery, a more uncomfortable question has resurfaced: who is actually in control of scientific knowledge production? ...

January 20, 2026 · 5 min · Zelina

What Happens in Backtests… Misleads in Live Trades

When your AI believes too much, you pay the price.

AI-driven quantitative trading is supposed to be smart: smarter than the market, even. But just like scientific AI systems that hallucinate protein structures that don’t exist, trading models can conjure signals out of thin air. These errors aren’t just false positives; they’re corrosive hallucinations: misleading outputs that look plausible, alter real decisions, and resist detection until it’s too late.

The Science of Hallucination Comes to Finance

In a recent philosophical exploration of AI in science, Charles Rathkopf introduced the concept of corrosive hallucinations: a specific kind of model error that is both epistemically disruptive and resistant to anticipation [1]. These are not benign missteps. They are illusions that change the course of reasoning, and they are especially dangerous when embedded in high-stakes workflows. ...

April 15, 2025 · 7 min