
Hypotheses, Not Hunches: What an AI Data Scientist Gets Right

Most “AI for analytics” pitches still orbit model metrics. The more interesting question for executives is: What should we do next, and why? A recent paper proposes an AI Data Scientist—a team of six LLM “subagents” that march from raw tables to clear, time‑boxed recommendations. The twist isn’t just automation; it’s hypothesis‑first reasoning. Instead of blindly optimizing AUC, the system forms crisp, testable claims (e.g., “active members are less likely to churn”), statistically validates them, and only then engineers features and trains models. The output is not merely predictions—it’s an action plan with KPIs, timelines, and rationale. ...
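
As an illustration of the hypothesis-first step, here is a minimal sketch of what validating the churn claim could look like, assuming a tabular dataset with is_active_member and churned columns and a chi-square test of independence; the paper's actual subagents may use different tests and schemas.

```python
# Minimal sketch of hypothesis validation before any feature engineering.
# Column names and the chi-square choice are illustrative assumptions,
# not the paper's implementation.
import pandas as pd
from scipy.stats import chi2_contingency

def validate_hypothesis(df: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Test the claim 'active members are less likely to churn'."""
    # 2x2 contingency table: activity status vs. churn outcome
    table = pd.crosstab(df["is_active_member"], df["churned"])
    chi2, p_value, dof, _ = chi2_contingency(table)

    # Compare churn rates to confirm the direction of the effect
    churn_rate = df.groupby("is_active_member")["churned"].mean()
    direction_ok = churn_rate.get(1, 1.0) < churn_rate.get(0, 0.0)

    return {"chi2": chi2, "p_value": p_value,
            "supported": p_value < alpha and direction_ok}
```

Only a claim that survives this kind of check would move downstream to feature engineering and modeling.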

August 26, 2025 · 5 min · Zelina

Quants With a Plan: Agentic Workflows That Outtrade AutoML

If AutoML is a fast car, financial institutions need a train with tracks—a workflow that knows where it’s going, logs every switch, and won’t derail when markets regime-shift. A new framework called TS-Agent proposes exactly that: a structured, auditable, LLM-driven agent that plans model development for financial time series instead of blindly searching. Unlike generic AutoML, TS-Agent formalizes modeling as a multi-stage decision process—Model Pre-selection → Code Refinement → Fine-tuning—and anchors each step in domain-curated knowledge banks and reflective feedback from real runs. The result is not just higher accuracy; it’s traceability and consistency that pass governance sniff tests. ...
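
To make the audit-trail idea concrete, here is a schematic sketch of a three-stage pipeline that logs every decision together with its rationale; the stage names follow the excerpt, while the data structures and knowledge-bank interface are assumptions rather than TS-Agent's actual API.

```python
# Schematic sketch of a staged, auditable workflow. Stage names follow the
# excerpt; the dataclasses, knowledge-bank lookup, and log format are
# illustrative assumptions, not TS-Agent's code.
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Stage(Enum):
    PRE_SELECTION = "model_pre_selection"
    CODE_REFINEMENT = "code_refinement"
    FINE_TUNING = "fine_tuning"

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, stage: Stage, decision: str, rationale: str) -> None:
        # Every switch on the track is logged for later governance review.
        self.entries.append({"stage": stage.value, "decision": decision,
                             "rationale": rationale})

def run_pipeline(task: str,
                 knowledge_bank: dict[Stage, Callable[[str], tuple[str, str]]]
                 ) -> AuditLog:
    log = AuditLog()
    for stage in (Stage.PRE_SELECTION, Stage.CODE_REFINEMENT, Stage.FINE_TUNING):
        # Each stage consults its curated knowledge bank and returns a
        # (decision, rationale) pair; reflective feedback would update `task`.
        decision, rationale = knowledge_bank[stage](task)
        log.record(stage, decision, rationale)
    return log
```

The point of the structure is not the code itself but that every stage transition leaves a reviewable record.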

August 20, 2025 · 5 min · Zelina

Forecast First, Ask Later: How DCATS Makes Time Series Smarter with LLMs

When it comes to forecasting traffic patterns, weather, or financial activity, the prevailing wisdom in machine learning has long been: better models mean better predictions. But a new approach flips this assumption on its head. Instead of chasing ever-more complex architectures, the DCATS framework (Data-Centric Agent for Time Series), developed by researchers at Visa, suggests we should first get our data in order—and let a language model do it.

The Agentic Turn in AutoML

DCATS builds on the trend of integrating Large Language Model (LLM) agents into AutoML pipelines, but with a twist. While prior systems like AIDE focus on automating model design and hyperparameter tuning, DCATS delegates a more fundamental task to its LLM agent: curating the right data. ...
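
One way to picture data curation as the optimization target: hold the forecasting model fixed and accept an auxiliary data source only if it lowers validation error. The greedy loop and function signatures below are simplifying assumptions for illustration; DCATS itself delegates this reasoning to an LLM agent.

```python
# Illustrative sketch of data-centric selection with a fixed model:
# add an auxiliary source only if it improves validation error.
from typing import Callable, Sequence

def curate_data(
    base_data: object,
    candidates: Sequence[object],
    merge: Callable[[object, object], object],
    evaluate: Callable[[object], float],  # validation error, lower is better
) -> object:
    best_data, best_error = base_data, evaluate(base_data)
    for candidate in candidates:
        trial = merge(best_data, candidate)
        error = evaluate(trial)
        if error < best_error:  # keep the source only if it helps
            best_data, best_error = trial, error
    return best_data
```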

August 7, 2025 · 3 min · Zelina

Agents in Formation: Fine-Tune Meets Fine-Structure in Quant AI

The next generation of quantitative investment agents must be more than data-driven—they must be logic-aware and structurally adaptive. Two recently published research efforts provide important insights into how reasoning patterns and evolving workflows can be integrated to create intelligent, verticalized financial agents. Kimina-Prover explores how reinforcement learning can embed formal reasoning capabilities within a language model for theorem proving. Learning to Be a Doctor shows how workflows can evolve dynamically based on diagnostic feedback, creating adaptable multi-agent frameworks. While the two stem from distinct domains—formal logic and medical diagnostics—their approaches are deeply relevant to two classic quant strategies: the Black-Litterman portfolio optimizer and a sentiment/technical-driven Bitcoin perpetual futures trader. ...
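
For readers who want the anchor formula, the quantity a view-generating agent would ultimately feed is the standard Black-Litterman posterior mean. The numpy sketch below uses the textbook closed form; variable names are chosen for illustration and are not tied to either paper's code.

```python
# Textbook Black-Litterman posterior mean: blend equilibrium returns `pi`
# with investor views (P, q, Omega). Names are illustrative assumptions.
import numpy as np

def black_litterman_mu(pi, Sigma, P, q, Omega, tau=0.05):
    """Posterior expected returns given prior (pi, tau*Sigma) and views."""
    tau_Sigma_inv = np.linalg.inv(tau * Sigma)
    Omega_inv = np.linalg.inv(Omega)
    # Posterior precision and posterior mean of expected returns
    precision = tau_Sigma_inv + P.T @ Omega_inv @ P
    return np.linalg.solve(precision, tau_Sigma_inv @ pi + P.T @ Omega_inv @ q)
```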

April 17, 2025 · 7 min