
When Agents Believe Their Own Hype: The Hidden Cost of Agentic Overconfidence

Opening — Why this matters now
AI agents are no longer toy demos. They write production code, refactor legacy systems, navigate websites, and increasingly make decisions that matter. Yet one deceptively simple question remains unresolved: can an AI agent reliably tell whether it will succeed? This paper delivers an uncomfortable answer. Across frontier models and evaluation regimes, agents are systematically overconfident about their own success—often dramatically so. As organizations push toward longer-horizon autonomy, this blind spot becomes not just an academic curiosity, but a deployment risk. ...

February 9, 2026 · 4 min · Zelina

When Aligned Models Compete: Nash Equilibria as the New Alignment Layer

Opening — Why this matters now
Alignment used to be a single‑model problem. Train the model well, filter the data, tune the reward, and call it a day. That framing quietly breaks the moment large language models stop acting alone. As LLMs increasingly operate as populations—running accounts, agents, bots, and copilots that interact, compete, and imitate—alignment becomes a system‑level phenomenon. Even perfectly aligned individual models can collectively drift into outcomes no one explicitly asked for. ...

February 9, 2026 · 4 min · Zelina

When Privacy Meets Chaos: Making Federated Learning Behave

Opening — Why this matters now
Federated learning was supposed to be the grown-up solution to privacy anxiety: train models collaboratively, keep data local, and everyone sleeps better at night. Then reality arrived. Real devices are heterogeneous. Real data are wildly non-IID. And once differential privacy (DP) enters the room—armed with clipping and Gaussian noise—training dynamics start to wobble like a poorly calibrated seismograph. ...

February 9, 2026 · 4 min · Zelina

Learning to Inject: When Prompt Injection Becomes an Optimization Problem

Opening — Why this matters now
Prompt injection used to be treated as a craft problem: clever wording, social engineering instincts, and a lot of trial and error. That framing is now obsolete. As LLMs graduate from chatbots into agents that read emails, browse documents, and execute tool calls, prompt injection has quietly become one of the most structurally dangerous failure modes in applied AI. ...

February 8, 2026 · 4 min · Zelina

First Proofs, No Training Wheels

Opening — Why this matters now
AI models are now fluent in contest math, symbolic manipulation, and polished explanations. That’s the easy part. The harder question—the one that actually matters for science—is whether these systems can do research when the answer is not already in the training set. The paper First Proof arrives as a deliberately uncomfortable experiment: ten genuine research-level mathematics questions, all solved by humans, none previously public, and all temporarily withheld from the internet. ...

February 7, 2026 · 3 min · Zelina

Hallucination-Resistant Security Planning: When LLMs Learn to Say No

Opening — Why this matters now
Security teams are being asked to do more with less, while the attack surface keeps expanding and adversaries automate faster than defenders. Large language models promise relief: summarize logs, suggest response actions, even draft incident playbooks. But there’s a catch that every practitioner already knows—LLMs are confident liars. In security operations, a hallucinated action isn’t just embarrassing; it’s operationally expensive. ...

February 7, 2026 · 4 min · Zelina

When One Heatmap Isn’t Enough: Layered XAI for Brain Tumour Detection

Opening — Why this matters now
Medical AI is no longer struggling with accuracy. In constrained tasks like MRI-based brain tumour detection, convolutional neural networks routinely cross the 90% accuracy mark. The real bottleneck has shifted elsewhere: trust. When an algorithm flags—or misses—a tumour, clinicians want to know why. And increasingly, a single colourful heatmap is not enough. ...

February 7, 2026 · 3 min · Zelina

When RAG Needs Provenance, Not Just Recall: Traceable Answers Across Fragmented Knowledge

Opening — Why this matters now
RAG is supposed to make large language models safer. Ground the model in documents, add citations, and hallucinations politely leave the room—or so the story goes. In practice, especially in expert domains, RAG often fails in a quieter, more dangerous way: it retrieves something relevant, but not the right kind of evidence. ...

February 7, 2026 · 4 min · Zelina

AgenticPay: When LLMs Start Haggling for a Living

Opening — Why this matters now
Agentic AI has moved beyond polite conversation. Increasingly, we expect language models to act: negotiate contracts, procure services, choose suppliers, and close deals on our behalf. This shift quietly transforms LLMs from passive tools into economic actors. Yet here’s the uncomfortable truth: most evaluations of LLM agents still resemble logic puzzles or toy auctions. They test reasoning, not commerce. Real markets are messy—private constraints, asymmetric incentives, multi-round bargaining, and strategic patience all matter. The paper behind AgenticPay steps directly into this gap. ...

February 6, 2026 · 4 min · Zelina

Simulate This: When LLMs Stop Talking and Start Modeling

Opening — Why this matters now
For decades, modeling and simulation lived in a world of equations, agents, and carefully bounded assumptions. Then large language models arrived—verbose, confident, and oddly persuasive. At first, they looked like narrators: useful for documentation, maybe scenario description, but not serious modeling. The paper behind this article argues that this view is already outdated. ...

February 6, 2026 · 3 min · Zelina