
Shattering the Spectrum: How PRISM Revives Signal Processing in Time-Series AI

In the race to conquer time-series classification, most modern models have sprinted toward deeper Transformers and wider convolutional architectures. But what if the real breakthrough came not from complexity—but from symmetry? Enter PRISM (Per-channel Resolution-Informed Symmetric Module), a model that merges classical signal processing wisdom with deep learning, and in doing so, delivers a stunning blow to overparameterized AI. PRISM’s central idea is refreshingly simple: instead of building a massive model to learn everything from scratch, start by decomposing the signal like a physicist would—using symmetric FIR filters at multiple temporal resolutions, applied independently per channel. Like a prism splitting light into distinct wavelengths, PRISM separates time-series data into spectral components that are clean, diverse, and informative. ...
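The per-channel, multi-resolution filtering idea can be sketched in a few lines. This is an illustrative toy, not PRISM's actual filter bank: the Hann-window taps, the widths, and the function name are our assumptions, chosen only to show what "symmetric FIR filters at multiple temporal resolutions, applied independently per channel" means in practice.

```python
import numpy as np

def symmetric_fir_bank(x, widths=(5, 11, 21)):
    """Decompose each channel of x (channels, time) with symmetric
    (linear-phase) FIR low-pass filters at several temporal resolutions.
    Hann-window taps are used here purely for illustration."""
    outputs = []
    for w in widths:
        taps = np.hanning(w)
        taps /= taps.sum()                    # unit-gain low-pass kernel
        assert np.allclose(taps, taps[::-1])  # symmetric taps -> linear phase
        # Apply the same kernel independently to every channel.
        filtered = np.stack([np.convolve(ch, taps, mode="same") for ch in x])
        outputs.append(filtered)
    return np.stack(outputs)  # (resolutions, channels, time)

x = np.random.randn(3, 256)  # 3-channel toy series
bands = symmetric_fir_bank(x)
print(bands.shape)           # (3, 3, 256)
```

Because the taps are symmetric, every band is phase-aligned with the input, so downstream layers see spectral components without temporal distortion.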

August 7, 2025 · 3 min · Zelina

Noise-Canceling Finance: How the Information Bottleneck Tames Overfitting in Asset Pricing

Deep learning has revolutionized many domains of finance, but when it comes to asset pricing, its power is often undercut by a familiar enemy: noise. Financial datasets are notoriously riddled with weak signals and irrelevant patterns, which easily mislead even the most sophisticated models. The result? Overfitting, poor generalization, and ultimately, bad bets. A recent paper by Che Sun proposes an elegant fix by drawing inspiration from information theory. Titled An Information Bottleneck Asset Pricing Model, the paper integrates information bottleneck (IB) regularization into an autoencoder-based asset pricing framework. The goal is simple yet profound: compress away the noise, and preserve only what matters for predicting asset returns. ...
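The trade-off the paper describes can be made concrete with a minimal sketch of an IB-style objective: reconstruction error plus a compression penalty on a Gaussian latent code. The function name, the `beta` weight, and the Gaussian-latent assumption are ours for illustration; they are not taken from Sun's model.

```python
import numpy as np

def ib_autoencoder_loss(x, x_hat, mu, log_var, beta=1e-3):
    """Illustrative IB-style objective: reconstruction error plus a
    compression penalty (KL divergence of a Gaussian latent code
    against N(0, I)). beta trades prediction fidelity against how
    much of the noisy input the latent code may retain."""
    recon = np.mean((x - x_hat) ** 2)
    kl = 0.5 * np.mean(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return recon + beta * kl

x = np.random.randn(8, 4)
loss = ib_autoencoder_loss(x, x * 0.9, np.zeros((8, 2)), np.zeros((8, 2)))
```

Raising `beta` squeezes the latent code harder, which is exactly the "compress away the noise" lever: weak, spurious signals cost more to encode than they earn in reconstruction.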

August 1, 2025 · 3 min · Zelina

Boxed In, Cashed Out: Deep Gradient Flows for Fast American Option Pricing

Pricing American options has long been the Achilles’ heel of quantitative finance, particularly in high dimensions. Unlike European options, American-style derivatives introduce a free-boundary problem due to their early exercise feature, making analytical solutions elusive and most numerical methods inefficient beyond two or three assets. But a recent paper by Jasper Rou introduces a promising technique — the Time Deep Gradient Flow (TDGF) — that sidesteps several of these barriers with a fresh take on deep learning design, optimization, and sampling. ...

July 27, 2025 · 4 min · Zelina

Residual Entanglement: How ResQuNNs Fix Gradient Flow in Quantum Neural Networks

In classical deep learning, residual connections revolutionized the training of deep networks. Now, a similar breakthrough is happening in quantum machine learning. The paper “ResQuNNs: Towards Enabling Deep Learning in Quantum Convolution Neural Networks” introduces a method to overcome a fundamental bottleneck in Quantum Convolutional Neural Networks (QuNNs): the inability to train multiple quantum layers due to broken gradient flow. ...

July 12, 2025 · 4 min · Zelina

From Trendlines to Transformers: DeepSupp Redefines Support Level Detection

In technical analysis, few concepts are as foundational as support levels — those invisible lines where prices tend to stop falling, bounce back, and spark new rallies. For decades, traders have relied on hand-drawn trendlines, Fibonacci ratios, and moving averages to guess where those turning points might be. But what if the real market structure is too complex, too dynamic, and too subtle for static rules? Enter DeepSupp, a new deep learning architecture that doesn’t guess support zones — it discovers them. By analyzing evolving market correlations through attention mechanisms and clustering latent embeddings, DeepSupp offers a glimpse into a future where support level detection is less of an art, and more of a science. ...
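The clustering step at the heart of this idea can be illustrated with a toy: group local price lows into zones with 1-D k-means. DeepSupp itself clusters attention-derived embeddings of evolving correlations; this sketch uses raw lows and plain Lloyd iterations only to make "clustering into support zones" concrete, and every name in it is our own.

```python
import numpy as np

def support_zones_from_lows(prices, n_zones=3, iters=50):
    """Toy illustration of clustering-based support detection:
    extract local price lows, then group them into n_zones levels
    with 1-D k-means (plain Lloyd iterations)."""
    is_below_prev = np.r_[True, prices[1:] < prices[:-1]]
    is_below_next = np.r_[prices[:-1] < prices[1:], True]
    lows = prices[is_below_prev & is_below_next]
    # Seed centers from spread-out quantiles of the lows.
    centers = np.quantile(lows, np.linspace(0.1, 0.9, n_zones))
    for _ in range(iters):
        labels = np.argmin(np.abs(lows[:, None] - centers[None, :]), axis=1)
        for k in range(n_zones):
            if np.any(labels == k):
                centers[k] = lows[labels == k].mean()
    return np.sort(centers)  # candidate support levels, low to high

prices = np.array([5.0, 3.0, 4.0, 2.0, 5.0, 3.0, 6.0, 1.0, 7.0])
zones = support_zones_from_lows(prices)  # -> [1. 2. 3.]
```

The contrast with DeepSupp is the point: here the "features" are just raw lows, whereas the model clusters learned embeddings that encode how cross-asset correlations evolve around each price level.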

July 6, 2025 · 4 min · Zelina

Branching Out, Beating Down: Why Trees Still Outgrow Deep Roots in Quant AI

In the age of Transformers and neural nets that write poetry, it’s tempting to assume deep learning dominates every corner of AI. But in quantitative investing, the roots tell a different story. A recent paper—QuantBench: Benchmarking AI Methods for Quantitative Investment—delivers a grounded reminder: tree-based models still outperform deep learning (DL) methods across key financial prediction tasks. ...

April 30, 2025 · 7 min

Crunch Time for AI: Photonic Chips Enter the Menu

In the diet of modern artificial intelligence, chips are the staple. For decades, CPUs, GPUs, and, more recently, TPUs have powered the explosion of deep learning. But what if the future of AI isn’t just about faster silicon—it’s about harnessing the speed of light itself? Two recent Nature papers—Hua et al. (2025) and Ahmed et al. (2025)—offer a potent answer: photonic computing is no longer experimental garnish—it’s becoming the main course. ...

April 16, 2025 · 5 min · Cognaptus Insights