Blind Spots, Bright Ideas: How Risk-Aware Cooperation Could Save Autonomous Driving

Opening — Why this matters now Autonomous driving has a bandwidth problem. The industry dreams of cars chatting seamlessly with one another, trading LiDAR views like gossip. Reality is less glamorous: wireless channels choke, vehicles multiply, and every agent insists on streaming gigabytes of data that no one asked for. In traffic-dense environments — the ones where autonomous driving is supposed to shine — communication collapses under its own ambition. ...

November 24, 2025 · 4 min · Zelina

Bridging the Clinical Gap: When Bayesian Networks Meet Messy Medical Text

Opening — Why this matters now Electronic health records are the data equivalent of a junk drawer: indispensable, vast, and structurally chaotic. As hospitals accelerate AI adoption, the gap between structured and unstructured information becomes a governance problem. Tabular fields are interpretable and auditable; clinical notes are a wild garden of habits, abbreviations, omissions, and contradictions. Yet decisions in healthcare—arguably the highest‑stakes domain for AI—depend increasingly on integrating both. ...

November 24, 2025 · 5 min · Zelina

Hierarchy, Not Hype: Why Domain Logic Beats Agent Chaos

Opening — Why this matters now Agent frameworks have been multiplying faster than AI policy memos. Every week, a new architecture promises reasoning, planning, or vaguely defined autonomy. Yet when enterprises try to deploy these agents beyond toy tasks, they encounter the familiar triad of failure: hallucinated workflows, brittle execution, and performance that depends more on model luck than system design. ...

November 24, 2025 · 4 min · Zelina

Mind Over Matter: How a BDI Ontology Gives AI Agents an Actual Inner Life

Opening — Why this matters now Artificial agents are getting bold. They generate plans, pursue goals, and occasionally hallucinate their way into uncharted territory. As enterprises deploy agentic systems into production—handling workflows, customer interactions, and autonomous decision-making—the question becomes painfully clear: what exactly is going on inside these things? The AI industry’s infatuation with autonomous agents demands more than clever prompting or RAG pipelines. We need cognitive clarity—shared semantics, explainability, and sanity checks that prevent agents from improvising their own logic. The paper “The Belief-Desire-Intention Ontology for Modelling Mental Reality and Agency” answers this with a formal, reusable ontology that gives AI agents something they’ve desperately lacked: a structured mental life. ...

November 24, 2025 · 5 min · Zelina

Probe and Error: Why Off‑Policy Training Warps LLM Behaviour Detectors

Opening — Why this matters now LLM governance is entering its awkward adolescence. Models behave well in demos, misbehave in deployment, and—most annoyingly—lie about whether they are misbehaving. Behaviour probes promised a clean solution: tiny classifiers that sit atop the model’s hidden states and report when deception, sycophancy, or other governance‑relevant traits appear. But the paper “That’s Not Natural: The Impact of Off‑Policy Training Data on Probe Performance” delivers an uncomfortable reminder: if you train probes on synthetic or off‑policy data, they don’t learn the behaviour. They learn the artefacts. ...

November 24, 2025 · 4 min · Zelina

When Curiosity Becomes Contagious: Mutual Intrinsic Rewards in Multi-Agent RL

Opening — Why this matters now Sparse‑reward reinforcement learning is the gym membership most agents sign up for and then immediately abandon. The treadmill is too long, the reward too far, and the boredom too fatal. Now imagine doing all that with teammates who can’t decide whether to help you or block the exit. ...

November 24, 2025 · 4 min · Zelina

CLOZE Encounters: When LLMs Start Editing Medical Ontologies

Opening — Why this matters now Medical ontologies age faster than clinical practice. New diseases appear, old terminology mutates, and clinicians keep writing whatever reflects reality today. The result: a widening semantic gap between structured ontologies and the messy, unstructured world of clinical notes. In the era of LLMs, that gap is no longer just inconvenient—it’s a bottleneck. Every downstream application, from diagnosis prediction to epidemiological modeling, depends on ontologies that are both up‑to‑date and hierarchically consistent. And updating these ontologies manually is about as scalable as handwriting ICD‑12 on stone tablets. ...

November 23, 2025 · 4 min · Zelina

Concurrency, But Make It Fashion: Why Trustworthy AI Needs an Agentic Lakehouse

Opening — Why this matters now Enterprise leaders increasingly ask a deceptively simple question: “If AI agents are so smart, why can’t I trust them with my production data?” The awkward silence that follows says more about the state of AI infrastructure than the state of AI intelligence. While LLMs learn tools and coding at uncanny speed, they still operate atop systems built for small, careful human teams—not swarms of semi‑autonomous agents. Traditional lakehouses crack under concurrent access, opaque runtimes, and unpredictable writes. Governance becomes a game of whack‑a‑mole. ...

November 23, 2025 · 4 min · Zelina

Drift Happens: Why AI Needs a Memory for People, Not Just Patterns

Opening — Why this matters now As AI systems seep into care environments—from daily reminders to conversational companions—they’re increasingly asked to do something deceptively difficult: notice when a person subtly changes. Not day-to-day mood swings, but long arcs of cognitive drift. This is especially relevant in dementia care, where conversations flatten, wander, or come apart slowly over weeks—not minutes. ...

November 23, 2025 · 4 min · Zelina

ESG in the Age of AI: When Reports Stop Being Read and Start Being Parsed

Opening — Why this matters now ESG is no longer a soft-power marketing exercise. Mandatory disclosures are tightening, regulators expect traceability, and investors want evidence rather than adjectives. The problem? ESG reports—hundreds of pages of slide-like layouts, drifting hierarchies, and orphaned charts—remain designed for optics, not analysis. Even advanced document models buckle under their chaotic reading order. ...

November 23, 2025 · 4 min · Zelina