
Credit Where It’s Due: The New Reasoning Stack for Agentic AI

Opening — Why this matters now The current agentic AI conversation has a very convenient myth: if an AI agent fails, give it a better model, a longer context window, more tool calls, and perhaps a heroic prompt containing the phrase “think step by step” in several places. Then wait for magic. Preferably billable magic. ...

May 7, 2026 · 16 min · Zelina

No Free Tokens: The New Economics of LLM Inference

Opening — Why this matters now For the last few years, AI strategy has been narrated as a model-quality story: bigger models, better benchmarks, longer context windows, more agents, more demos, more adjectives. That story was useful. It was also incomplete. The less glamorous reality is now arriving with the invoice attached. LLM systems are not merely models. They are production services that consume GPU memory, scheduling capacity, engineering attention, and operational patience. Once a business moves from a prototype to repeated daily use, the question changes from “Can the model answer?” to “Can the system answer reliably, cheaply, and repeatedly when real users arrive at inconvenient times?” ...

May 7, 2026 · 16 min · Zelina

Synthesize, but Verify: The Data Flywheel Behind Useful AI Automation

Opening — Why this matters now The easiest AI demo in the world is a model producing something plausible. A product description. A support reply. A defect image. A peer-review report. A compliance explanation. A benchmark answer. The output looks competent enough to be shown in a slide deck, which is often where corporate AI strategy goes to enjoy a short but well-lit life. ...

May 6, 2026 · 17 min · Zelina

Cost, Latency, and ROI of AI Systems

A practical framework for understanding the economic trade-offs of AI systems, including model cost, response speed, review effort, and business payoff.

April 23, 2026 · 6 min · Michelle

Trex Marks the Spot: When AI Starts Training AI

Opening — Why this matters now Everyone wants custom AI. Few want the invoices, GPU queues, brittle data pipelines, and endless hyperparameter arguments required to build it. Fine-tuning large language models remains one of the least glamorous bottlenecks in modern AI deployment. It is expensive, iterative, and strangely dependent on whoever in the room has the strongest opinions. ...

April 16, 2026 · 4 min · Zelina

Wheel Smarts > Wheel Reinvention: What GitTaskBench Really Measures

Agents don’t build Rome from scratch—they retrofit the city. GitTaskBench (arXiv:2508.18993) is the first benchmark that grades code agents on how well they exploit existing GitHub repositories to deliver real-world outcomes, not just pass algorithm puzzles. It also puts a price tag on success via an Alpha value that blends accuracy with cost, bringing long-missing business realism to agent evals.

TL;DR

- What’s new: 54 tasks across 7 modalities (image, video, speech, office docs, web scraping, security/privacy, biosignals), each paired to a real repo and a practical, automated test harness.
- Why it matters: The hard part isn’t just writing code—it’s environment setup, dependency wrangling, repo comprehension, and workflow orchestration.
- Headline result: Even the best stack—OpenHands + Claude 3.7—passes only ~48% of tasks; environment/setup issues cause ~65% of all failures.
- Business twist: The Alpha value estimates net economic benefit per task by combining success, quality, and token costs. Expensive tasks become clear wins; cheap tasks require ruthless cost control.

The Benchmark, de-jargoned

- Problem framed: In real shops, devs search, fork, and adapt. GitTaskBench simulates that reality.
- Each task gives an agent a specific repo (e.g., DeOldify, Scrapy, NeuroKit, SpeechBrain) and a concrete user goal (e.g., “colorize this photo” or “extract author/quote pairs into CSV”).
- Success is determined by a task-specific metric (e.g., NIQE for image quality; SNR/SDR for speech separation; field-level F1 for scraping; column/row fidelity for office docs) and an execution check (the thing actually runs and outputs in the right format). ...
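The intuition behind an Alpha-style score can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual formula: it assumes a simple "benefit minus token cost" shape, where `task_value_usd` (the worth of a human-delivered result) and the pricing parameters are made up for the example.

```python
# Hypothetical sketch of an Alpha-style score: net economic benefit of one
# agent task attempt. The paper's exact formula is not reproduced here.
def alpha_value(success: bool, quality: float, task_value_usd: float,
                tokens_used: int, usd_per_1k_tokens: float) -> float:
    """success: did the execution check pass?
    quality: task-specific metric normalized to [0, 1] (e.g., field-level F1).
    task_value_usd: assumed worth of an equivalent human-delivered result.
    """
    benefit = task_value_usd * quality if success else 0.0
    cost = tokens_used / 1000 * usd_per_1k_tokens
    return benefit - cost

# An expensive task absorbs token cost easily...
print(alpha_value(True, 0.9, 200.0, 50_000, 0.01))    # 179.5
# ...while a cheap task can go negative even when the agent "succeeds".
print(alpha_value(True, 0.9, 1.0, 200_000, 0.01))     # about -1.1
```

The asymmetry is the whole point: on high-value tasks, a ~48% pass rate can still be a clear win, while on low-value tasks token spend must be controlled ruthlessly or every "success" loses money.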

August 27, 2025 · 5 min · Zelina

Hypotheses, Not Hunches: What an AI Data Scientist Gets Right

Most “AI for analytics” pitches still orbit model metrics. The more interesting question for executives is: What should we do next, and why? A recent paper proposes an AI Data Scientist—a team of six LLM “subagents” that march from raw tables to clear, time‑boxed recommendations. The twist isn’t just automation; it’s hypothesis‑first reasoning. Instead of blindly optimizing AUC, the system forms crisp, testable claims (e.g., “active members are less likely to churn”), statistically validates them, and only then engineers features and trains models. The output is not merely predictions—it’s an action plan with KPIs, timelines, and rationale. ...
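The "validate before you engineer" step can be made concrete with a standard statistical test. The sketch below is illustrative, not the paper's code: it checks the example claim "active members are less likely to churn" with a two-proportion z-test, using invented counts, before the claim is promoted into a feature.

```python
import math

def two_proportion_z(churned_a: int, total_a: int,
                     churned_b: int, total_b: int) -> float:
    """z-statistic for H0: churn rate of group A equals that of group B."""
    p_a, p_b = churned_a / total_a, churned_b / total_b
    # Pooled proportion under the null hypothesis
    p_pool = (churned_a + churned_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: 120 of 1,000 active members churned,
# versus 300 of 1,000 inactive members.
z = two_proportion_z(120, 1_000, 300, 1_000)
# |z| > 1.96 rejects H0 at the 5% level: the hypothesis survives
# validation and only then becomes a feature for the model.
print(abs(z) > 1.96)
```

The order of operations is the point: a claim that fails this gate never reaches feature engineering, which keeps the final recommendation traceable back to a tested hypothesis rather than an opaque model weight.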

August 26, 2025 · 5 min · Zelina