
When Words Start Walking: Rethinking Semantic Search Beyond Averages

Opening — Why this matters now
Search systems have grown fluent, but not necessarily intelligent. As enterprises drown in text—contracts, filings, emails, reports—the gap between what users mean and what systems match has become painfully visible. Keyword search still dominates operational systems, while embedding-based similarity often settles for crude averages. This paper challenges that quiet compromise. ...
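The “crude averages” in question are mean-pooled token embeddings: collapse every token vector into one mean, then compare angles. A minimal sketch (toy NumPy data, not the paper's method) shows what that pooling throws away:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray) -> np.ndarray:
    """Collapse a (tokens, dims) matrix into a single vector by averaging."""
    return token_embeddings.mean(axis=0)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy case: the same tokens in reversed order have an identical mean,
# so pooled cosine similarity is blind to structure and word order.
rng = np.random.default_rng(0)
doc_a = rng.normal(size=(12, 8))   # 12 tokens, 8-dim embeddings
doc_b = doc_a[::-1]                # reversed token sequence

print(cosine(mean_pool(doc_a), mean_pool(doc_b)))  # exactly 1.0
```

Any representation that moves beyond the average has to preserve precisely the token-level structure this pooling step destroys.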

February 8, 2026 · 3 min · Zelina

Beyond Cosine: When Order Beats Angle in Embedding Similarity

Opening — Why this matters now
Cosine similarity has enjoyed an unusually long reign. From TF‑IDF vectors to transformer embeddings, it remains the default lens through which we judge “semantic closeness.” Yet the more expressive our embedding models become, the more uncomfortable this default starts to feel. If modern representations are nonlinear, anisotropic, and structurally rich, why are we still evaluating them with a metric that only understands angles? ...
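To make the angle-versus-order contrast concrete, here is a minimal sketch pitting cosine similarity against Spearman rank correlation. The rank statistic is an illustrative stand-in for an “order-aware” comparison, not a claim about the paper's actual metric:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Two vectors whose dimensions agree perfectly in *ranking*
# but wildly disagree in scale.
u = np.array([0.1, 0.2, 0.3, 0.4])
v = np.array([1.0, 10.0, 100.0, 1000.0])

rho, _ = spearmanr(u, v)
print(f"cosine:   {cosine(u, v):.2f}")  # ~0.79: the angle looks mediocre
print(f"spearman: {rho:.2f}")           # 1.00: the ordering agrees exactly
```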

February 7, 2026 · 4 min · Zelina

Whispering Feelings: When ASR Models Learn to Read Emotion

Opening — Why this matters now
As AI systems inch closer to everyday human interaction, emotion is no longer a “nice-to-have” signal. It is a prerequisite. Voice assistants, mental‑health tools, call‑center analytics, and social robots all face the same bottleneck: understanding not just what was said, but how it was said. Speech Emotion Recognition (SER) has promised this capability for years, yet progress has been throttled by small datasets, brittle features, and heavyweight models that struggle to scale. ...

February 6, 2026 · 4 min · Zelina

Algorithmic Context Is the New Heuristic

Opening — Why this matters now
For decades, heuristic design has been a quiet tax on optimization. Every serious deployment of A* or tree search comes with a familiar cost: domain experts handcraft rules, tune parameters, and babysit edge cases. The process is expensive, slow, and brittle. Large Language Models promised automation—but until recently, mostly delivered clever greedy tricks for toy problems. ...
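For readers who haven't paid that tax themselves, the sketch below (a generic Python A*, not the paper's system) marks the exact slot in question: the `heuristic` argument is what domain experts hand-tune, and what LLM pipelines now try to synthesize as code:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A*. The `heuristic` argument is the plug-in point that
    experts hand-tune and that LLM pipelines try to synthesize."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_g = {}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if best_g.get(node, float("inf")) <= g:
            continue  # already expanded with an equal or cheaper cost
        best_g[node] = g
        for nxt, cost in neighbors(node):
            f = g + cost + heuristic(nxt, goal)
            heapq.heappush(frontier, (f, g + cost, nxt, path + [nxt]))
    return None

# Toy 5x5 grid with the classic handcrafted Manhattan heuristic.
def grid_neighbors(p):
    x, y = p
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    return [((x + dx, y + dy), 1) for dx, dy in steps
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))
```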

February 2, 2026 · 3 min · Zelina

When ERP Meets Attention: Teaching Transformers to Pack, Schedule, and Save Real Money

Opening — Why this matters now
Enterprise Resource Planning (ERP) systems are excellent at recording what has happened. They are far less impressive at deciding what should happen next. When decision-making involves combinatorial explosions—packing furnaces, sequencing machines, allocating scarce inputs—ERP often falls back on brittle heuristics, slow solvers, or human intuition. None scale gracefully. ...

January 31, 2026 · 4 min · Zelina

PyraTok: When Video Tokens Finally Learn to Speak Human

Opening — Why this matters now
Text-to-video models are scaling at an alarming pace. Resolution is no longer the bottleneck—semantic fidelity is. As generators push into 4K and even 8K regimes, a quieter but more consequential problem emerges underneath: the tokenizer. If visual tokens do not align with language, no amount of diffusion steps will save downstream reasoning, control, or zero-shot transfer. ...

January 24, 2026 · 3 min · Zelina

When Models Guess the Verb by Looking at the Drawer

Opening — Why this matters now
If you have ever watched a video model confidently predict “opening drawer” when the person is clearly closing it, you have already encountered the core problem of modern compositional video understanding: the model isn’t really watching the action. It is guessing. As video models are increasingly deployed in robotics, industrial monitoring, and human–AI interaction, the ability to correctly generalize to unseen verb–object combinations is no longer academic. A robot that confuses opening with closing is not merely inaccurate—it is dangerous. ...

January 24, 2026 · 4 min · Zelina

Skeletons in the Proof Closet: When Lean Provers Need Hints, Not More Compute

Opening — Why this matters now
Neural theorem proving has entered its industrial phase. With reinforcement learning pipelines, synthetic data factories, and search budgets that would make a chess engine blush, models like DeepSeek‑Prover‑V1.5 are widely assumed to have internalized everything there is to know about formal proof structure. This paper politely disagrees. Under tight inference budgets—no massive tree search, no thousand-sample hail‑Mary—the author shows that simple, almost embarrassingly old‑fashioned structural hints still deliver large gains. Not new models. Not more data. Just better scaffolding. ...
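What does a “structural hint” look like? A minimal Lean illustration (an invented example, not one from the paper's benchmark): a skeleton fixes the proof's shape with intermediate `have` steps, leaving the prover only small local goals to close:

```lean
-- Illustrative skeleton (assumes Mathlib): the `have` lines are the
-- scaffolding; a model only needs to fill each small leaf goal.
theorem sq_sum_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 := by
  have ha : 0 ≤ a ^ 2 := sq_nonneg a  -- hint: handle each square alone
  have hb : 0 ≤ b ^ 2 := sq_nonneg b
  exact add_nonneg ha hb              -- combine; no search required
```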

January 23, 2026 · 4 min · Zelina

Vibe Coding a Theorem Prover: When LLMs Prove (and Break) Themselves

Opening — Why this matters now
LLMs can write code, explain proofs, and occasionally hallucinate both with equal confidence. So the obvious next question—posed almost mischievously in this paper—is whether an LLM can code a theorem prover that itself relies on LLMs. Not as a demo. Not as a toy. But as a fully automatic, kernel-checked prover that runs on a laptop and outperforms Isabelle’s industrial-grade automation in at least some regimes. ...

January 11, 2026 · 4 min · Zelina

When Solvers Guess Smarter: Teaching SMT to Think in Functions

Opening — Why this matters now
Quantified SMT solving has always lived in an uncomfortable space between elegance and brute force. As models grew richer—mixing non-linear arithmetic, real-valued domains, and uninterpreted functions—the solvers stayed stubbornly syntactic. They match patterns. They enumerate. They hope. Meanwhile, large language models have quietly absorbed a century’s worth of mathematical intuition. AquaForte asks an obvious but previously taboo question: what if we let SMT solvers borrow that intuition—without surrendering formal guarantees? ...
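The division of labor that keeps the guarantees intact is propose-then-verify: the model guesses, the solver certifies. A minimal sketch with the z3-solver Python bindings (the candidate term is a hypothetical LLM guess, not AquaForte's actual interface):

```python
from z3 import Int, Not, Solver, unsat

x = Int("x")

# Goal: a term f(x) with f(x) > x for every x > 0.  Suppose an LLM
# proposes the candidate f(x) = x + 1 (hypothetical guess).
candidate = x + 1

# Verify by refutation: ask the solver for any x > 0 where it fails.
s = Solver()
s.add(x > 0, Not(candidate > x))

# unsat means no counterexample exists: the guess is formally certified,
# so a wrong or hallucinated proposal can never corrupt soundness.
print("certified" if s.check() == unsat else "rejected")
```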

January 11, 2026 · 3 min · Zelina