
Tensor-DTI: Binding the Signal, Not the Noise

Opening — Why this matters now Drug discovery has a scale problem. Not a small one. A billion-compound problem. Chemical space has outpaced every classical screening method we have—experimental or computational. Docking begins to strain at a few million compounds. Diffusion models demand structural data that simply doesn’t exist for most targets. Meanwhile, enumerated libraries like Enamine REAL have quietly crossed the 70-billion-molecule mark, and nobody has bothered to ask whether our AI tooling is actually ready for that reality. ...

January 14, 2026 · 4 min · Zelina

When Views Go Missing, Labels Talk Back

Opening — Why this matters now In theory, multi‑view multi‑label learning is a gift: more modalities, richer semantics, better predictions. In practice, it is a recurring disappointment. Sensors fail, annotations are partial, budgets run out, and the elegant assumption of “complete views with full labels” quietly collapses. What remains is the real industrial problem: fragmented features and half‑known truths. ...

January 14, 2026 · 4 min · Zelina

When LLMs Stop Talking and Start Driving

Opening — Why this matters now Digital transformation has reached an awkward phase. Enterprises have accumulated oceans of unstructured data, deployed dashboards everywhere, and renamed half their IT departments. Yet when something actually breaks—equipment fails, suppliers vanish, costs spike—the organization still reacts slowly, manually, and often blindly. The uncomfortable truth: most “AI-driven transformation” initiatives stop at analysis. They classify, predict, and visualize—but they rarely decide. This paper confronts that gap directly, asking a sharper question: what does it take for large models to become operational drivers rather than semantic commentators? ...

January 11, 2026 · 4 min · Zelina

Distilling the Thought, Watermarking the Answer: When Reasoning Models Finally Get Traceable

Opening — Why this matters now Large Language Models have learned to reason. Unfortunately, our watermarking techniques have not. As models like DeepSeek-R1 and Qwen3 increasingly rely on explicit or implicit chain-of-thought, traditional text watermarking has started to behave like a bull in a logic shop: detectable, yes — but at the cost of broken reasoning, degraded accuracy, and occasionally, outright nonsense. ...

January 9, 2026 · 4 min · Zelina

Model Cannibalism: When LLMs Learn From Their Own Echo

Opening — Why this matters now Synthetic data is no longer a contingency plan; it is the backbone of modern model iteration. As access to clean, human-authored data narrows—due to cost, licensing, or sheer exhaustion—LLMs increasingly learn from text generated by earlier versions of themselves. On paper, this looks efficient. In practice, it creates something more fragile: a closed feedback system where bias, preference, and quality quietly drift over time. ...

January 9, 2026 · 4 min · Zelina

Argue With Yourself: When AI Learns by Contradiction

Opening — Why this matters now Modern AI systems are fluent, fast, and frequently wrong in subtle ways. Not catastrophically wrong — that would be easier to fix — but confidently misaligned. They generate answers that sound coherent while quietly diverging from genuine understanding. This gap between what a model says and what it actually understands has become one of the most expensive problems in applied AI. ...

January 8, 2026 · 3 min · Zelina

When Pipes Speak in Probabilities: Teaching Graphs to Explain Their Leaks

Opening — Why this matters now Water utilities do not suffer from a lack of algorithms. They suffer from a lack of trustworthy ones. In an industry where dispatching a repair crew costs real money and false positives drain already thin operational budgets, a black‑box model—no matter how accurate—remains a risky proposition. Leak detection in water distribution networks (WDNs) has quietly become an ideal stress test for applied AI. The data are noisy, the events are rare, the topology is non‑Euclidean, and the consequences of wrong decisions are painfully tangible. This paper enters precisely at that fault line: it asks not only where a leak might be, but also how an engineer can understand why the model thinks so. ...

January 7, 2026 · 4 min · Zelina

When Prompts Learn Themselves: The Death of Task Cues

Opening — Why this matters now Prompt engineering was supposed to be a temporary inconvenience. A short bridge between pre‑trained language models and real-world deployment. Instead, it became a cottage industry—part folklore, part ritual—where minor phrasing changes mysteriously decide whether your system works or embarrasses you in production. The paper Automatic Prompt Engineering with No Task Cues and No Tuning quietly dismantles much of that ritual. It asks an uncomfortable question: what if prompts don’t need us nearly as much as we think? And then it answers it with a system that is deliberately unglamorous—and therefore interesting. ...

January 7, 2026 · 3 min · Zelina

FormuLLA: When LLMs Stop Talking and Start Formulating

Opening — Why this matters now Pharmaceutical 3D printing has promised personalization for over a decade. In practice, it has mostly delivered spreadsheets, failed filaments, and a great deal of human patience. The bottleneck has never been imagination—it has been formulation. Every new drug–excipient combination still demands expensive trial-and-error, even as printers themselves have matured. ...

January 6, 2026 · 4 min · Zelina

Causality Remembers: Teaching Social Media Defenses to Learn from the Past

Opening — Why this matters now Social media coordination detection is stuck in an awkward adolescence. Platforms know coordinated inauthentic behavior exists, regulators know it scales faster than moderation teams, and researchers know correlation-heavy detectors are brittle. Yet most deployed systems still behave as if yesterday’s parameters will work tomorrow. This paper introduces Adaptive Causal Coordination Detection (ACCD)—not as another accuracy tweak, but as a structural correction. Instead of freezing assumptions into static thresholds and embeddings, ACCD treats coordination detection as a learning system with memory. And that subtle shift matters more than the headline F1 score. ...

January 5, 2026 · 4 min · Zelina