
One-Shot Brains, Fewer Mouths: When Multi-Agent Systems Learn to Stop Talking

Opening — Why this matters now
Multi-agent LLM systems are having a moment. Software engineering agents argue with each other, math solvers debate proofs, and code reviewers nitpick outputs like caffeinated interns. The results are often impressive—and painfully expensive. Token budgets explode, latency compounds, and the coordination logic starts to look like an over-managed meeting that should have been an email. ...

January 18, 2026 · 4 min · Zelina

Redundancy Overload Is Optional: Finding the FDs That Actually Matter

Opening — Why this matters now
Functional dependency (FD) discovery has quietly become a victim of its own success. Modern algorithms can enumerate everything—and that is precisely the problem. On realistic schemas, exhaustive FD discovery produces hundreds of thousands of valid dependencies, most of which are technically correct and practically useless. Computationally expensive. Cognitively overwhelming. Operationally irrelevant. ...

January 18, 2026 · 4 min · Zelina

When Memory Stops Guessing: Stitching Intent Back into Agent Memory

Opening — Why this matters now
Everyone is chasing longer context windows. Million-token prompts. Endless chat logs. The assumption is simple: if the model can see everything, it will remember correctly. This paper shows why that assumption fails. In long-horizon, goal-driven interactions, errors rarely come from missing information. They come from retrieving the wrong information—facts that are semantically similar but contextually incompatible. Bigger windows amplify the problem. Noise scales faster than relevance. ...

January 17, 2026 · 3 min · Zelina

Drawing with Ghost Hands: When GenAI Helps Architects — and When It Quietly Undermines Them

Opening — Why this matters now
Architectural studios are quietly changing. Not with robotic arms or parametric scripts, but with prompts. Text-to-image models now sit beside sketchbooks, offering instant massing ideas, stylistic variations, and visual shortcuts that once took hours. The promise is obvious: faster ideation, lower friction, fewer blank pages. The risk is less visible. When creativity is partially outsourced, what happens to confidence, authorship, and cognitive effort? ...

January 16, 2026 · 4 min · Zelina

One Agent Is a Bottleneck: When Genomics QA Finally Went Multi-Agent

Opening — Why this matters now
Genomics QA is no longer a toy problem for language models. It sits at the uncomfortable intersection of messy biological databases, evolving schemas, and questions that cannot be answered from static training data. GeneGPT proved that LLMs could survive here—barely. This paper shows why surviving is not the same as scaling. ...

January 16, 2026 · 3 min · Zelina

When Agents Talk Back: Why AI Collectives Need a Social Theory

Opening — Why this matters now
Multi-agent AI is no longer a lab curiosity. Tool-using LLM agents already negotiate, cooperate, persuade, and sometimes sabotage—often without humans in the loop. What looks like “emergent intelligence” at first glance is, more precisely, a set of interaction effects layered on top of massive pre-trained priors. And that distinction matters. Traditional multi-agent reinforcement learning (MARL) gives us a language for agents that learn from scratch. LLM-based agents do not. They arrive already socialized. ...

January 16, 2026 · 3 min · Zelina

When Goals Collide: Synthesizing the Best Possible Outcome

Opening — Why this matters now
Most AI control systems are still designed around a brittle assumption: either the agent satisfies everything, or the problem is declared unsolvable. That logic collapses quickly in the real world. Robots run out of battery. Services compete for shared resources. Environments act adversarially, not politely. In practice, goals collide. ...

January 16, 2026 · 4 min · Zelina

When Models Know They’re Wrong: Catching Jailbreaks Mid-Sentence

Opening — Why this matters now
Most LLM safety failures don’t look dramatic. They look fluent. A model doesn’t suddenly turn malicious. It drifts there — token by token — guided by coherence, momentum, and the quiet incentive to finish the sentence it already started. Jailbreak attacks exploit this inertia. They don’t delete safety alignment; they outrun it. ...

January 16, 2026 · 4 min · Zelina

Mind Reading the Conversation: When Your Brain Reviews the AI Before You Do

Opening — Why this matters now
Conversational AI is no longer a novelty interface. It is infrastructure: answering customer tickets, tutoring students, advising patients, and quietly reshaping how humans externalize cognition. Yet the dominant alignment loop—reinforcement learning from human feedback (RLHF)—still depends on something profoundly inefficient: asking people after the fact what they thought. ...

January 14, 2026 · 4 min · Zelina

SAFE Enough to Think: Federated Learning Comes for Your Brain

Opening — Why this matters now
Brain–computer interfaces (BCIs) have quietly crossed a threshold. They are no longer laboratory curiosities; they are clinical tools, assistive technologies, and increasingly, commercial products. That transition comes with an uncomfortable triad of constraints: generalization, security, and privacy. Historically, you could optimize for two and quietly sacrifice the third. The paper behind SAFE challenges that trade-off—and does so without the usual academic hand-waving. ...

January 14, 2026 · 4 min · Zelina