
Quants With a Plan: Agentic Workflows That Outtrade AutoML

If AutoML is a fast car, financial institutions need a train with tracks—a workflow that knows where it’s going, logs every switch, and won’t derail when markets shift regimes. A new framework called TS-Agent proposes exactly that: a structured, auditable, LLM-driven agent that plans model development for financial time series instead of blindly searching. Unlike generic AutoML, TS-Agent formalizes modeling as a multi-stage decision process—Model Pre-selection → Code Refinement → Fine-tuning—and anchors each step in domain-curated knowledge banks and reflective feedback from real runs. The result is not just higher accuracy; it’s traceability and consistency that pass governance sniff tests. ...
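To make the staged pipeline concrete, here is a minimal Python sketch of how such a loop might be wired together. The stage names follow the post; everything else (the stubbed call_llm, the knowledge-bank contents, the audit-log format) is my own illustration, not TS-Agent's actual API.

```python
# A minimal sketch of a staged, auditable agent loop. Stage names follow the
# post; call_llm, the knowledge-bank text, and the log format are illustrative.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client."""
    return f"<llm decision for: {prompt[:40]}...>"

KNOWLEDGE_BANK = {  # domain-curated priors consulted at each stage
    "model_pre_selection": "prefer simple baselines for short, noisy series",
    "code_refinement": "check for look-ahead bias and data leakage",
    "fine_tuning": "sweep learning rate and lookback window first",
}

def run_stage(stage: str, context: str, audit_log: list) -> str:
    prompt = f"[{stage}] prior: {KNOWLEDGE_BANK[stage]} | context: {context}"
    decision = call_llm(prompt)
    # every decision is logged, which is what makes the workflow auditable
    audit_log.append({"stage": stage, "prompt": prompt, "decision": decision})
    return decision

audit_log: list = []
context = "daily FX returns, 10y history, 5-day horizon"
for stage in ("model_pre_selection", "code_refinement", "fine_tuning"):
    context = run_stage(stage, context, audit_log)  # output feeds the next stage

print(len(audit_log), "logged decisions")  # the trail a governance review inspects
```

The point of the structure is the log, not the loop: each stage's prompt and decision are recorded, which is what distinguishes a plan-and-audit workflow from an opaque search.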

August 20, 2025 · 5 min · Zelina

Forecast First, Ask Later: How DCATS Makes Time Series Smarter with LLMs

When it comes to forecasting traffic patterns, weather, or financial activity, the prevailing wisdom in machine learning has long been: better models mean better predictions. But a new approach flips this assumption on its head. Instead of chasing ever more complex architectures, the DCATS framework (Data-Centric Agent for Time Series), developed by researchers at Visa, suggests we should first get our data in order—and let a language model do it.

The Agentic Turn in AutoML

DCATS builds on the trend of integrating Large Language Model (LLM) agents into AutoML pipelines, but with a twist. While prior systems like AIDE focus on automating model design and hyperparameter tuning, DCATS delegates a more fundamental task to its LLM agent: curating the right data. ...
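As a rough sketch of what delegating data curation to an agent could look like in code: the loop below lets a stubbed agent pick auxiliary series, scores the choice with a fixed forecaster, and feeds the result back. All names, the agent stub, and the scoring function are hypothetical, not the DCATS implementation.

```python
# Illustrative curate -> evaluate -> reflect loop in the spirit of DCATS.
# The agent stub, dataset names, and scoring function are hypothetical.
import random

def agent_select(candidates: list, feedback: str) -> list:
    """Stand-in for the LLM agent; a real agent would reason over feedback."""
    return [c for c in candidates if random.random() > 0.5]

def backtest_error(selected: list) -> float:
    """Stub: train a fixed forecaster on the curated data, return its error."""
    return round(1.0 / (1 + len(selected)) + random.random() * 0.05, 3)

candidates = ["weather", "holidays", "nearby_sensors", "events_feed"]
feedback, best = "", (float("inf"), [])
for i in range(3):                      # curate, evaluate, reflect
    subset = agent_select(candidates, feedback)
    err = backtest_error(subset)
    feedback = f"round {i}: {subset} scored {err}"
    best = min(best, (err, subset))
print("best curated set:", best[1])
```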

August 7, 2025 · 3 min · Zelina

Shattering the Spectrum: How PRISM Revives Signal Processing in Time-Series AI

In the race to conquer time-series classification, most modern models have sprinted toward deeper Transformers and wider convolutional architectures. But what if the real breakthrough came not from complexity—but from symmetry? Enter PRISM (Per-channel Resolution-Informed Symmetric Module), a model that merges classical signal processing wisdom with deep learning, and in doing so, delivers a stunning blow to overparameterized AI. PRISM’s central idea is refreshingly simple: instead of building a massive model to learn everything from scratch, start by decomposing the signal like a physicist would—using symmetric FIR filters at multiple temporal resolutions, applied independently per channel. Like a prism splitting light into distinct wavelengths, PRISM separates time-series data into spectral components that are clean, diverse, and informative. ...
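The decomposition idea is easy to prototype. Below is a minimal sketch, assuming windowed-sinc low-pass kernels as the symmetric FIR filters and three kernel lengths as the temporal resolutions; the actual PRISM module will differ in its filter design and how the features are consumed downstream.

```python
# Minimal sketch of the idea (not the authors' code): symmetric FIR kernels
# at several resolutions, applied to each channel independently.
import numpy as np

def symmetric_fir_bank(lengths=(5, 17, 65)):
    """Windowed-sinc low-pass kernels; symmetry gives linear phase."""
    bank = []
    for L in lengths:
        n = np.arange(L) - (L - 1) / 2
        h = np.sinc(n / (L / 4)) * np.hamming(L)   # symmetric by construction
        bank.append(h / h.sum())
    return bank

def decompose(x):
    """x: (channels, time) -> (channels, resolutions, time) spectral views."""
    bank = symmetric_fir_bank()
    return np.stack(
        [[np.convolve(ch, h, mode="same") for h in bank] for ch in x], axis=0
    )

x = np.random.randn(3, 256)        # 3-channel toy series
feats = decompose(x)
print(feats.shape)                 # (3, 3, 256): per-channel, multi-resolution
```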

August 7, 2025 · 3 min · Zelina

Causality in Stereo: How Multi-Band Granger Unveils Frequency-Specific Influence

Causality is rarely one-size-fits-all—especially in the dynamic world of time series data. Whether you’re analyzing brainwaves, financial markets, or industrial processes, the timing of influence and the frequency at which it occurs both matter. Traditional Granger causality assumes a fixed temporal lag, while Variable-Lag Granger Causality (VLGC) brings some flexibility by allowing dynamic time alignment. But even VLGC falls short of capturing frequency-specific causal dynamics, which are ubiquitous in complex systems. ...
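A back-of-the-envelope way to see what frequency-specific causality buys you: band-pass both series into a few bands, then run an ordinary Granger test inside each band. The sketch below is that naive filter-then-test recipe, not the paper's multi-band estimator, and the band edges are arbitrary examples.

```python
# Naive filter-then-test recipe (not the paper's estimator): band-pass both
# series, then run a standard Granger test inside each band.
import numpy as np
from scipy.signal import butter, filtfilt
from statsmodels.tsa.stattools import grangercausalitytests

def bandpass(x, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 100.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
y = np.roll(x, 5) + 0.1 * np.random.randn(t.size)     # x leads y by 50 ms

for lo, hi in [(1, 4), (8, 12), (30, 45)]:            # example frequency bands
    xb, yb = bandpass(x, lo, hi, fs), bandpass(y, lo, hi, fs)
    res = grangercausalitytests(np.column_stack([yb, xb]), maxlag=10, verbose=False)
    p = res[10][0]["ssr_ftest"][1]                     # p-value at lag 10
    print(f"{lo}-{hi} Hz: x -> y, p = {p:.3g}")
```

Because the driving signal here lives near 10 Hz, only the 8-12 Hz band should show a strong x → y effect, which is exactly the kind of structure a fixed-lag, broadband test averages away.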

August 4, 2025 · 4 min · Zelina

Quantum Bulls and Tensor Tails: Modeling Financial Time Series with QGANs

If you’re tired of classical GANs hallucinating financial time series that look right but behave wrong, you’re not alone. Markets aren’t just stochastic — they’re structured, memory-laced, and irrational in predictable ways. A recent paper, Quantum Generative Modeling for Financial Time Series with Temporal Correlations, dives into whether quantum GANs (QGANs) — once considered an esoteric fantasy — might actually be better suited for this synthetic financial choreography. ...

August 3, 2025 · 3 min · Zelina

The Grammar and the Glow: Making Sense of Time-Series AI

What if time-series data had a grammar, and AI could read it? That idea is no longer poetic conjecture—it now has theoretical teeth and practical implications. Two recent papers offer a compelling convergence: one elevates interpretability in time-series AI through heatmap fusion and NLP narratives, while the other proposes that time itself forms a latent language with motifs, tokens, and even grammar. Read together, they suggest a future where interpretable AI is not just about saliency maps or attention—it becomes a linguistically grounded system of reasoning. ...
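One way to make the latent-language metaphor tangible (my illustration, not either paper's method) is SAX-style symbolic discretization: map the series onto a small alphabet and count recurring n-grams as candidate "words".

```python
# SAX-style tokenization: z-normalize, discretize into a 4-letter alphabet,
# then count 3-grams as candidate "words". Purely illustrative.
import numpy as np
from collections import Counter

def to_tokens(x, word: int = 3):
    z = (x - x.mean()) / x.std()
    edges = np.quantile(z, [0.25, 0.5, 0.75])          # 4 equal-mass bins
    symbols = "".join("abcd"[i] for i in np.digitize(z, edges))
    return [symbols[i:i + word] for i in range(len(symbols) - word + 1)]

x = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.05 * np.random.randn(400)
print(Counter(to_tokens(x)).most_common(5))            # the signal's frequent "words"
```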

July 2, 2025 · 4 min · Zelina

When Text Doesn’t Help: Rethinking Multimodality in Forecasting

The Multimodal Mirage

In recent years, there’s been growing enthusiasm around combining unstructured text with time series data. The promise? Textual context—say, clinical notes, weather reports, or market news—might inject rich insights into otherwise pattern-driven numerical streams. With powerful vision-language and text-generation models dominating headlines, it’s only natural to wonder: Could Large Language Models (LLMs) revolutionize time series forecasting too? A new paper from AWS researchers provides the first large-scale empirical answer. The verdict? The benefits of multimodality are far from guaranteed. In fact, across 14 datasets spanning domains from agriculture to healthcare, incorporating text often fails to outperform well-tuned unimodal baselines. Multimodal forecasting, it turns out, is more of a conditional advantage than a universal one. ...

June 30, 2025 · 3 min · Zelina