
Prompt Wars: When Pedagogy Beats Cleverness

Opening — Why this matters now Educational AI has entered its prompt era. Models are powerful, APIs are cheap, and everyone—from edtech startups to university labs—is tweaking prompts like seasoning soup. The problem? Most of this tweaking is still artisanal. Intuition-heavy. Barely documented. And almost never evaluated with the same rigor we expect from the learning science it claims to support. ...

January 23, 2026 · 3 min · Zelina

Auditing the Illusion of Forgetting: When Unlearning Isn’t Enough

Opening — Why this matters now “Right to be forgotten” has quietly become one of the most dangerous phrases in AI governance. On paper, it sounds clean: remove a user’s data, comply with regulation, move on. In practice, modern large language models (LLMs) have turned forgetting into a performance art. Models stop saying what they were trained on—but continue remembering it internally. ...

January 22, 2026 · 4 min · Zelina

DISARM, but Make It Agentic: When Frameworks Start Doing the Work

Opening — Why this matters now Foreign Information Manipulation and Interference (FIMI) has quietly evolved from a niche security concern into a persistent, high‑tempo operational problem. Social media platforms now host influence campaigns that are faster, cheaper, and increasingly AI‑augmented. Meanwhile, defenders are expected to produce timely, explainable, and interoperable assessments—often across national and institutional boundaries. ...

January 22, 2026 · 4 min · Zelina

Lost Without a Map: Why Intelligence Is Really About Navigation

Opening — Why this matters now AI discourse is increasingly stuck in a sterile debate: how smart are large models, really? This paper cuts through that noise with a sharper question—what even counts as intelligence? At a time when transformers simulate reasoning, cells coordinate without brains, and agents act across virtual worlds, clinging to neuron‑centric or task‑centric definitions of intelligence is no longer just outdated—it is operationally misleading. ...

January 21, 2026 · 4 min · Zelina

Rebuttal Agents, Not Rebuttal Text: Why ‘Verify‑Then‑Write’ Is the Only Scalable Future

Opening — Why this matters now Peer review rebuttals are one of the few moments in modern science where precision still beats fluency. Deadlines are tight, stakes are high, and every sentence is implicitly a legal statement about what the paper does—and does not—claim. Yet this is exactly where many researchers now lean on large language models. ...

January 21, 2026 · 3 min · Zelina

Deep GraphRAG: Teaching Retrieval to Think in Layers

Opening — Why this matters now Retrieval-Augmented Generation has reached an awkward adolescence. Vector search is fast, scalable, and confidently wrong when questions require structure, multi-hop reasoning, or global context. GraphRAG promised salvation by injecting topology into retrieval — and promptly ran into its own identity crisis: global search is thorough but slow, local search is precise but blind, and most systems oscillate between the two without ever resolving the tension. ...

January 20, 2026 · 4 min · Zelina

Greedy, but Not Blind: Teaching Optimization to Listen

Opening — Why this matters now Public-sector AI has a credibility problem. Not because it cannot optimize—but because it optimizes too cleanly. In health system planning, decisions are rarely about pure efficiency. They are negotiated compromises shaped by terrain, politics, institutional memory, and hard-earned intuition. Classic optimization methods politely ignore all that. This paper tackles a question many planners quietly ask but rarely formalize: Can we let algorithms optimize without silencing human judgment—and still keep mathematical guarantees intact? ...

January 19, 2026 · 4 min · Zelina

Think-with-Me: When LLMs Learn to Stop Thinking

Opening — Why this matters now The AI industry has developed an unhealthy obsession with thinking longer. More tokens, deeper chains, bigger context windows—surely that must mean better reasoning. Except, increasingly, it doesn’t. Large Reasoning Models (LRMs) often reason past the point of usefulness, slipping into self-validation loops or overwriting correct answers with unnecessary exploration. This paper proposes a heretical idea in the age of scaling: maybe the model doesn’t need to think more—it needs to know when to stop. ...

January 19, 2026 · 3 min · Zelina

One-Shot Brains, Fewer Mouths: When Multi-Agent Systems Learn to Stop Talking

Opening — Why this matters now Multi-agent LLM systems are having a moment. Software engineering agents argue with each other, math solvers debate proofs, and code reviewers nitpick outputs like caffeinated interns. The results are often impressive—and painfully expensive. Token budgets explode, latency compounds, and the coordination logic starts to look like an over-managed meeting that should have been an email. ...

January 18, 2026 · 4 min · Zelina

Redundancy Overload Is Optional: Finding the FDs That Actually Matter

Opening — Why this matters now Functional dependency (FD) discovery has quietly become a victim of its own success. Modern algorithms can enumerate everything—and that is precisely the problem. On realistic schemas, exhaustive FD discovery produces hundreds of thousands of valid dependencies, most of which are technically correct and practically useless. Computationally expensive. Cognitively overwhelming. Operationally irrelevant. ...

January 18, 2026 · 4 min · Zelina