
Provenance, Not Prompts: How LLM Agents Turn Workflow Exhaust into Real-Time Intelligence

TL;DR
Most teams still analyze pipelines with brittle SQL, custom scripts, and static dashboards. A new reference architecture shows how schema-driven LLM agents can read workflow provenance in real time—across edge, cloud, and HPC—answering “what/when/who/how” questions, plotting quick diagnostics, and flagging anomalies. The surprising finding: guideline-driven prompting (not just bigger context) is the single highest-ROI upgrade.

Why this matters (for operators, data leads, and CTOs)
When production AI/data workflows sprawl across services (queues, training jobs, GPUs, file systems), the real telemetry isn’t in your app logs; it’s in the provenance—the metadata of tasks, inputs/outputs, scheduling, and resource usage. Turning that exhaust into live answers is how you: ...
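To make that concrete, here is a minimal, hypothetical sketch of what "guideline-driven prompting" over provenance records might look like. The record fields, the schema summary, the guideline text, and the `ask_llm` / `load_recent_provenance` helpers are all illustrative assumptions, not the reference architecture's actual API; the point is that the schema and the guidelines ride along in the prompt so the agent answers against real fields instead of guessing.

```python
# Hypothetical sketch: guideline-driven prompting over workflow provenance.
# Field names, schema text, and helpers are illustrative assumptions,
# not the reference architecture's actual API.
import json

PROVENANCE_SCHEMA = """
task_id: str                 # unique task identifier
campaign: str                # workflow/campaign name
user: str                    # who launched the task
started_at / ended_at: str   # ISO-8601 timestamps
inputs / outputs: list[str]  # data artifacts consumed/produced
gpu_hours: float             # resource usage attributed to the task
status: str                  # SUCCESS | FAILED | RUNNING
"""

GUIDELINES = """
- Answer only from the provenance records provided; say "unknown" otherwise.
- For "when" questions, quote timestamps verbatim.
- For "how much" questions, aggregate numerically and show the arithmetic.
- Flag any task whose gpu_hours exceeds 3x the campaign median as anomalous.
"""

def build_prompt(question: str, records: list[dict]) -> str:
    """Compose a schema- and guideline-grounded prompt for the agent."""
    return (
        f"You answer questions about workflow provenance.\n"
        f"Schema:\n{PROVENANCE_SCHEMA}\n"
        f"Guidelines:\n{GUIDELINES}\n"
        f"Records:\n{json.dumps(records, indent=2)}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Usage (ask_llm and load_recent_provenance are stand-ins for your own
# chat-completion client and provenance store):
# records = load_recent_provenance(window="15m")
# print(ask_llm(build_prompt("Which tasks failed overnight, and who owns them?", records)))
```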

October 1, 2025 · 4 min · Zelina

From DAGs to Swarms: The Quiet Revolution of Agentic Workflows

TL;DR
Traditional workflow managers treat science as a frozen DAG; the agentic era treats it as a living state machine that learns, optimizes, and—at scale—swarms. The payoff isn’t just speed. It’s a shift from execution pipelines to discovery loops, where hypotheses are generated, tested, and replanned continuously across labs, clouds, and HPC.

Why this matters (beyond the lab)
Enterprises keep wiring LLMs into point solutions and calling it “automation.” Science, under stricter constraints (traceability, causality, irreversibility), is sketching a federated architecture where reasoning agents, facilities, and data fabrics negotiate in real time. If it works in a beamline, it’ll work in your back office. The blueprint is a reusable pattern for any AI-powered operation that must be auditable, distributed, and adaptive. ...
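The DAG-versus-state-machine contrast is easier to see in code. The toy sketch below is an illustration under invented names (`propose_hypothesis`, `run_experiment`, and `update_beliefs` are trivial stubs), not any particular workflow manager's API: instead of executing a fixed graph once, the loop replans after each result and stops when a confidence target is reached rather than when the graph runs out of nodes.

```python
# Toy sketch contrasting a frozen DAG with a replanning "discovery loop".
# All helpers are trivial stubs; in a real system they would call a
# reasoning agent, submit a facility/HPC job, and update a belief store.
import random
from dataclasses import dataclass, field

@dataclass
class Beliefs:
    confidence: float = 0.0
    evidence: list[str] = field(default_factory=list)

def propose_hypothesis(goal: str, beliefs: Beliefs) -> str:
    # Stub: a planner agent would reason over prior evidence here.
    return f"hypothesis {len(beliefs.evidence) + 1} for {goal}"

def run_experiment(hypothesis: str) -> float:
    # Stub: stands in for a simulation, beamline shot, or training run.
    return random.random()

def update_beliefs(beliefs: Beliefs, hypothesis: str, result: float) -> Beliefs:
    beliefs.evidence.append(f"{hypothesis} -> {result:.2f}")
    beliefs.confidence = max(beliefs.confidence, result)
    return beliefs

def discovery_loop(goal: str, budget: int = 10, target: float = 0.9) -> Beliefs:
    """Replan each step; stop when confident, not when a fixed DAG ends."""
    beliefs = Beliefs()
    for _ in range(budget):
        hypothesis = propose_hypothesis(goal, beliefs)
        result = run_experiment(hypothesis)
        beliefs = update_beliefs(beliefs, hypothesis, result)
        if beliefs.confidence >= target:
            break
    return beliefs

if __name__ == "__main__":
    print(discovery_loop("minimize catalyst degradation").evidence)
```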

September 19, 2025 · 5 min · Zelina

Seeing is Retraining: How VizGenie Turns Visualization into a Self-Improving AI Loop

Scientific visualization has long been caught in a bind: the more complex the dataset, the more domain-specific the visualization, and the harder it is to automate. From MRI scans to hurricane simulations, modern scientific data is massive, high-dimensional, and notoriously messy. While dashboards and 2D plots have benefitted from LLM-driven automation, 3D volumetric visualization—especially in high-performance computing (HPC) settings—has remained stubbornly manual. VizGenie changes that.

Developed at Los Alamos National Laboratory, VizGenie is a hybrid agentic system that doesn’t just automate visualization tasks—it refines itself through them. It blends traditional visualization tools (like VTK) with dynamically generated Python modules and augments this with vision-language models fine-tuned on domain-specific images. The result: a system that can answer questions like “highlight the tissue boundaries” and actually improve its answers over time. ...
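As a rough picture of that self-improving loop, here is a hypothetical sketch. Every helper is a trivial stub standing in for the real pieces (a VTK pipeline, LLM-generated rendering code, a domain-tuned vision-language model), and the names and logging scheme are invented for illustration rather than taken from VizGenie.

```python
# Hypothetical sketch of a VizGenie-style loop: route a request to a built-in
# renderer or a generated module, grade the result with a vision-language
# model, and log the outcome for later fine-tuning. All helpers are stubs;
# names are illustrative, not the lab's actual API.
from dataclasses import dataclass

@dataclass
class Rendering:
    image_path: str
    produced_by: str  # "builtin_vtk" or "generated_module"

def render_with_builtin(question: str, volume_path: str) -> Rendering:
    # Stub for a traditional VTK pipeline (isosurface, slice, volume render).
    return Rendering(f"{volume_path}.builtin.png", "builtin_vtk")

def render_with_generated_module(question: str, volume_path: str) -> Rendering:
    # Stub for a dynamically generated Python module written by the LLM.
    return Rendering(f"{volume_path}.generated.png", "generated_module")

def vlm_score(question: str, rendering: Rendering) -> float:
    # Stub: a fine-tuned vision-language model would grade whether the image
    # actually answers the question.
    return 0.8

FINETUNE_LOG: list[tuple[str, Rendering, float]] = []

def handle_request(question: str, volume_path: str, builtin_covers_it: bool = True) -> Rendering:
    """Route the request, grade the result, and log it for later fine-tuning."""
    render = render_with_builtin if builtin_covers_it else render_with_generated_module
    rendering = render(question, volume_path)
    score = vlm_score(question, rendering)
    FINETUNE_LOG.append((question, rendering, score))  # the "refines itself" part
    return rendering

# Example: handle_request("highlight the tissue boundaries", "scan_042.vti")
```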

August 2, 2025 · 4 min · Zelina