
Patch, Don’t Preach: The Coming Era of Modular AI Safety

Opening — Why this matters now
The safety race in AI has been running like a software release cycle: long, expensive, and hopelessly behind the bugs. Major model updates arrive every six months, and every interim week feels like a Patch Tuesday with no patches. Meanwhile, the risks—bias, toxicity, and jailbreak vulnerabilities—don’t wait politely for version 2.0. ...

November 12, 2025 · 4 min · Zelina

When Compliance Blooms: ORCHID and the Rise of Agentic Legal AI

Opening — Why this matters now
In a world where AI systems can write policy briefs but can’t reliably follow policies, compliance is the next frontier. The U.S. Department of Energy’s classification of High-Risk Property (HRP)—ranging from lab centrifuges to quantum chips—demands both accuracy and accountability. A single misclassification can trigger export-control violations or, worse, national security breaches. ...

November 10, 2025 · 4 min · Zelina

Parallel Minds: How OMPILOT Redefines Code Translation for Shared Memory AI

Opening — Why this matters now
As Moore’s Law wheezes toward its physical limits, the computing world has shifted its faith from faster cores to more of them. Yet for developers, exploiting this parallelism still feels like assembling IKEA furniture blindfolded — possible, but painful. Enter OMPILOT, a transformer-based model that automates OpenMP parallelization without human prompt engineering, promising to make multicore programming as accessible as autocomplete. ...

November 9, 2025 · 4 min · Zelina

Beyond Oversight: Why AI Governance Needs a Memory

Opening — Why this matters now
In 2025, the world’s enthusiasm for AI regulation has outpaced its understanding of it. Governments publish frameworks faster than models are trained, yet few grasp how these frameworks will sustain relevance as AI systems evolve. The paper “A Taxonomy of AI Regulation Frameworks” argues that the problem is not a lack of oversight, but a lack of memory — our rules forget faster than our models learn. ...

November 8, 2025 · 3 min · Zelina

When RAG Meets the Law: Building Trustworthy Legal AI for a Moving Target

Opening — Why this matters now
Legal systems are allergic to uncertainty. Yet AI thrives on it. As generative models step into the courtroom—drafting opinions, analyzing precedents, even suggesting verdicts—the question is no longer whether they can help, but whether we can trust them. The stakes are existential: a hallucinated statute or a misapplied precedent isn’t a typo; it’s a miscarriage of justice. The paper Hybrid Retrieval-Augmented Generation Agent for Trustworthy Legal Question Answering in Judicial Forensics offers a rare glimpse at how to close this credibility gap. ...

November 6, 2025 · 4 min · Zelina

Agents with Interest: How Fintech Taught RAG to Read the Fine Print

Opening — Why this matters now
The fintech industry is an alphabet soup of acronyms and compliance clauses. For a large language model (LLM), it’s a minefield of misunderstood abbreviations, half-specified processes, and siloed documentation that lives in SharePoint purgatory. Yet financial institutions are under pressure to make sense of their internal knowledge—securely, locally, and accurately.

Retrieval-Augmented Generation (RAG), the method of grounding LLM outputs in retrieved context, has emerged as the go-to approach. But as Mastercard’s recent research shows, standard RAG pipelines choke on the reality of enterprise fintech: fragmented data, undefined acronyms, and role-based access control. The paper Retrieval-Augmented Generation for Fintech: Agentic Design and Evaluation proposes a modular, multi-agent redesign that turns RAG from a passive retriever into an active, reasoning system. ...
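The paper’s agentic pipeline isn’t excerpted here, so as a point of reference for what it redesigns, below is a minimal sketch of the plain, single-pass retrieve-then-generate loop, with role-based filtering folded into retrieval. Everything in it (the toy corpus, the keyword retriever, the stubbed generate call) is an illustrative placeholder, not a component of the Mastercard system.

```python
# Minimal sketch of a plain, single-pass RAG loop: retrieve context subject to
# the caller's role, then ground the answer prompt in whatever was retrieved.
# Corpus, retriever, and generate() are illustrative stand-ins only.

import re
from dataclasses import dataclass

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_roles: set  # role-based access control is enforced at retrieval time

CORPUS = [
    Doc("kb-001", "ACH: Automated Clearing House, used for batch payment settlement.", {"analyst", "ops"}),
    Doc("kb-002", "Chargeback handling requires the dispute code and the settlement date.", {"ops"}),
]

def retrieve(query: str, role: str, k: int = 2) -> list:
    """Rank the documents this role may see by naive keyword overlap."""
    q = tokens(query)
    visible = [d for d in CORPUS if role in d.allowed_roles]
    return sorted(visible, key=lambda d: len(q & tokens(d.text)), reverse=True)[:k]

def generate(query: str, context: list) -> str:
    """Stand-in for the LLM call; a real system would send this prompt to the model."""
    ctx = "\n".join(f"[{d.doc_id}] {d.text}" for d in context)
    return f"Answer using only the context below.\n{ctx}\nQuestion: {query}"

query = "What does ACH stand for?"
print(generate(query, retrieve(query, role="analyst")))
```

The excerpt’s point is that this single passive pass breaks down on fragmented data and undefined acronyms, which is why the paper splits the work across cooperating agents instead.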

November 4, 2025 · 4 min · Zelina

When Rules Go Live: Policy Cards and the New Language of AI Governance

In 2019, Model Cards made AI systems more transparent by documenting what they were trained to do. Then came Data Cards and System Cards, clarifying how datasets and end-to-end systems behave. But as AI moves from prediction to action—from chatbots to trading agents, surgical robots, and autonomous research assistants—documentation is no longer enough. We need artifacts that don’t just describe a system, but govern it. ...

November 2, 2025 · 4 min · Zelina

Paper Tigers or Compliance Cops? What AIReg‑Bench Really Says About LLMs and the EU AI Act

The gist
AIReg‑Bench proposes the first benchmark for a deceptively practical task: can an LLM read technical documentation and judge how likely it is that an AI system complies with specific EU AI Act articles? The dataset avoids buzzword theater: 120 synthetic but expert‑vetted excerpts portraying high‑risk systems, each labeled by three legal experts on a 1–5 compliance scale (plus plausibility). Frontier models are then asked to score the same excerpts. The headline: the best models reach human‑like agreement on ordinal compliance judgments—under some conditions. That’s both promising and dangerous. ...
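The excerpt doesn’t name the statistic behind “human‑like agreement,” so the sketch below uses quadratic‑weighted Cohen’s kappa, a common choice for 1–5 ordinal labels, purely as an assumed stand‑in; the expert and model score arrays are invented.

```python
# Sketch: comparing an LLM's 1-5 compliance ratings to expert labels with
# quadratic-weighted Cohen's kappa (an assumed metric, not necessarily the
# benchmark's). Labels below are hypothetical.

import numpy as np

def quadratic_weighted_kappa(a: np.ndarray, b: np.ndarray, n_classes: int = 5) -> float:
    """Ordinal agreement; large disagreements are penalized quadratically."""
    a, b = a - 1, b - 1  # map 1..5 onto 0..4
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    expected = np.outer(np.bincount(a, minlength=n_classes),
                        np.bincount(b, minlength=n_classes)).astype(float)
    expected /= expected.sum()
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

expert = np.array([5, 4, 2, 3, 1, 4])  # hypothetical expert compliance labels
model = np.array([5, 3, 2, 3, 2, 4])   # hypothetical LLM scores on the same excerpts
print(f"QWK = {quadratic_weighted_kappa(expert, model):.2f}")
```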

October 9, 2025 · 5 min · Zelina

Bracket Busters: When Agentic LLMs Turn Law into Code (and Catch Their Own Mistakes)

TL;DR
Agentic LLMs can translate legal rules into working software and audit themselves using higher‑order metamorphic tests. This combo improves worst‑case reliability (not just best‑case demos), making it a practical pattern for tax prep, benefits eligibility, and other compliance‑bound systems.

The Business Problem
Legal‑critical software (tax prep, benefits screening, healthcare claims) fails in precisely the ways that cause the most reputational and regulatory damage: subtle misinterpretations around thresholds, phase‑ins/outs, caps, and exception codes. Traditional testing stumbles here because you rarely know the “correct” output for every real‑world case (the oracle problem). What you do know: similar cases should behave consistently. ...
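A first‑order version of that idea is easy to sketch (the paper uses higher‑order metamorphic tests, which relate several transformed cases at once). In the hypothetical Python below, the bracket schedule and both relations are invented for illustration: you never assert the “correct” tax, only that related cases behave consistently.

```python
# Sketch of metamorphic testing for tax software: we may not know the exact
# correct tax for any one case (the oracle problem), but related cases must
# behave consistently. The bracket schedule below is invented.

def tax_owed(income: float) -> float:
    """Toy progressive schedule: 10% up to 10k, 22% above (hypothetical)."""
    if income <= 10_000:
        return 0.10 * income
    return 0.10 * 10_000 + 0.22 * (income - 10_000)

def test_monotonic_in_income():
    # Relation 1: raising income should never lower the tax bill.
    for base in (5_000, 9_999, 10_001, 80_000):
        assert tax_owed(base + 100) >= tax_owed(base)

def test_no_cliff_at_threshold():
    # Relation 2: crossing the bracket boundary by $1 should change the bill
    # by no more than the top marginal rate times $1 (no hidden cliff).
    assert abs(tax_owed(10_001) - tax_owed(10_000)) <= 0.22 * 1 + 1e-9

if __name__ == "__main__":
    test_monotonic_in_income()
    test_no_cliff_at_threshold()
    print("metamorphic relations hold")
```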

October 1, 2025 · 5 min · Zelina

Prolog & Paycheck: When Tax AI Shows Its Work

TL;DR
Neuro‑symbolic architecture (LLMs + Prolog) turns tax calculation from vibes to verifiable logic. The paper we analyze shows that adding a symbolic solver, selective refusal, and exemplar‑guided parsing can lower the break‑even cost of an AI tax assistant to a fraction of average U.S. filing costs. Even more interesting: chat‑tuned models often beat reasoning‑tuned models at few‑shot translation into logic — a counterintuitive result with big product implications.

Why this matters for operators (not just researchers)
Most back‑office finance work is a chain of (1) rules lookup, (2) calculations, and (3) audit trails. Generic LLMs are great at (1), decent at (2), and historically bad at (3). This work shows a practical path to auditable automation: translate rules and facts into Prolog, compute with a trusted engine, and price the risk of being wrong directly into your product economics. ...
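As a loose sketch of that division of labor, the hypothetical Python below stands in for the paper’s Prolog pipeline: an LLM‑style parser extracts structured facts, a deterministic engine computes the bill, and low‑confidence parses are refused rather than answered. All field names, rates, and thresholds are invented.

```python
# Loose sketch of the neuro-symbolic split described above: a stubbed parser
# turns prose into structured facts, a deterministic rule engine computes the
# result (the role Prolog plays in the paper), and low-confidence parses
# trigger refusal instead of a guessed answer. Figures are illustrative only.

from dataclasses import dataclass

@dataclass
class ParsedReturn:
    wages: float
    filing_status: str
    confidence: float  # parser's self-reported confidence in its extraction

def parse_with_llm(text: str) -> ParsedReturn:
    """Stand-in for the LLM step that translates prose into structured facts."""
    return ParsedReturn(wages=52_000.0, filing_status="single", confidence=0.93)

def standard_deduction(status: str) -> float:
    return {"single": 13_850.0, "married_joint": 27_700.0}[status]  # illustrative figures

def symbolic_tax_engine(facts: ParsedReturn) -> float:
    """Deterministic, auditable computation over the extracted facts."""
    taxable = max(facts.wages - standard_deduction(facts.filing_status), 0.0)
    return round(0.12 * taxable, 2)  # single invented flat rate for the sketch

def answer(text: str, refusal_threshold: float = 0.9) -> str:
    facts = parse_with_llm(text)
    if facts.confidence < refusal_threshold:
        return "Refusing: the facts could not be extracted reliably."  # selective refusal
    return f"Estimated tax: ${symbolic_tax_engine(facts):,.2f}"

print(answer("I earned $52,000 in wages and file as single."))
```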

August 31, 2025 · 5 min · Zelina