
Big AI and the Metacrisis: When Scaling Becomes a Liability

Opening — Why this matters now
The AI industry insists it is ushering in an Intelligent Age. The paper behind this article argues something colder: we may instead be engineering a metacrisis accelerator. As climate instability intensifies, democratic trust erodes, and linguistic diversity collapses, Big AI—large language models, hyperscale data centers, and their political economy—is not a neutral observer. It is an active participant. And despite the industry’s fondness for ethical manifestos, it shows little appetite for restraint. ...

January 2, 2026 · 3 min · Zelina

LeanCat-astrophe: Why Category Theory Is Where LLM Provers Go to Struggle

Opening — Why this matters now
Formal theorem proving has entered its confident phase. We now have models that can clear olympiad-style problems, undergraduate algebra, and even parts of the Putnam with respectable success rates. Reinforcement learning, tool feedback, and test-time scaling have done their job. And then LeanCat arrives — and the success rates collapse. ...

January 2, 2026 · 4 min · Zelina

Planning Before Picking: When Slate Recommendation Learns to Think

Opening — Why this matters now
Recommendation systems have quietly crossed a threshold. The question is no longer what to recommend, but how many things, in what order, and with what balance. In feeds, short-video apps, and content platforms, users consume slates—lists experienced holistically. Yet most systems still behave as if each item lives alone, blissfully unaware of its neighbors. ...

January 2, 2026 · 3 min · Zelina

Secrets, Context, and the RAG Illusion

Opening — Why this matters now
Personalized AI assistants are rapidly becoming ambient infrastructure. They draft emails, recall old conversations, summarize private chats, and quietly stitch together our digital lives. The selling point is convenience. The hidden cost is context collapse. The paper behind this article introduces PrivacyBench, a benchmark designed to answer an uncomfortable but overdue question: when AI assistants know everything about us, can they be trusted to know when to stay silent? The short answer is no—not reliably, and not by accident. ...

January 2, 2026 · 4 min · Zelina

Deployed, Retrained, Repeated: When LLMs Learn From Being Used

Opening — Why this matters now
The AI industry likes to pretend that training happens in neat, well-funded labs and deployment is merely the victory lap. Reality, as usual, is less tidy. Large language models are increasingly learning after release—absorbing their own successful outputs through user curation, web sharing, and subsequent fine‑tuning. This paper puts a sharp analytical frame around that uncomfortable truth: deployment itself is becoming a training regime. ...

January 1, 2026 · 4 min · Zelina

Gen Z, But Make It Statistical: Teaching LLMs to Listen to Data

Opening — Why this matters now
Foundation models are fluent. They are not observant. In 2024–2025, enterprises learned the hard way that asking an LLM to explain a dataset is very different from asking it to fit one. Large language models know a lot about the world, but they are notoriously bad at learning dataset‑specific structure—especially when the signal lives in proprietary data, niche markets, or dated user behavior. This gap is where GenZ enters, with none of the hype and most of the discipline. ...

January 1, 2026 · 4 min · Zelina

Learning the Rules by Breaking Them: Exception-Aware Constraint Mining for Care Scheduling

Opening — Why this matters now
Care facilities are drowning in spreadsheets, tacit knowledge, and institutional memory. Shift schedules are still handcrafted—painfully—by managers who know the rules not because they are written down, but because they have been violated before. Automation promises relief, yet adoption remains stubbornly low. The reason is not optimization power. It is translation failure. ...

January 1, 2026 · 4 min · Zelina

The Invariance Trap: Why Matching Distributions Can Break Your Model

Opening — Why this matters now
Distribution shift is no longer a corner case; it is the default condition of deployed AI. Models trained on pristine datasets routinely face degraded sensors, partial observability, noisy pipelines, or institutional drift once they leave the lab. The industry response has been almost reflexive: enforce invariance. Align source and target representations, minimize divergence, and hope the problem disappears. ...

December 31, 2025 · 4 min · Zelina

When the Paper Talks Back: Lost in Translation, Rejected by Design

Opening — Why this matters now
Academic peer review is buckling under scale. ICML alone now processes close to ten thousand submissions a year. In response, the temptation to insert LLMs somewhere into the review pipeline—screening, triage, or scoring—is understandable. Efficiency, after all, is a persuasive argument. Unfortunately, efficiency is also how subtle failures scale. This paper asks an uncomfortable but necessary question: what happens when the paper being reviewed quietly talks back to the model reviewing it? Not loudly. Not visibly. Just enough to tip the scales. ...

December 31, 2025 · 4 min · Zelina

MIRAGE-VC: Teaching LLMs to Think Like VCs (Without Drowning in Graphs)

Opening — Why this matters now
Venture capital has always been a strange mix of narrative craft and network math. Partners talk about vision, conviction, and pattern recognition, but behind the scenes, outcomes are brutally skewed: most startups fail quietly, a few dominate returns, and almost everything depends on who backs whom, and in what order. ...

December 30, 2025 · 4 min · Zelina