
From Charts to Circuits: How TINs Rewire Technical Analysis for the AI Era

In a field where LSTMs, transformers, and black-box agents often dominate the conversation, a new framework dares to ask: what if our old tools weren't wrong, just under-optimized? That's the central premise behind Technical Indicator Networks (TINs), a novel architecture that transforms traditional technical-analysis indicators into interpretable, trainable neural networks.

Indicators, Meet Neural Networks

Rather than discarding hand-crafted indicators like MACD or RSI, the TIN approach recasts them as neural network topologies. A moving average becomes a linear layer. MACD? A cascade of two EMAs with a subtractive node and a smoothing layer. RSI? A bias-regularized division circuit. The resulting neural networks aren't generic function approximators; they're derived directly from the mathematical structure of the indicators themselves. ...
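The MACD circuit described above can be sketched in a few lines. This is a minimal illustration of the classical computation the TIN topology mirrors, not code from the paper; the 12/26/9 spans are the conventional MACD defaults, and in a TIN these fixed EMA weights would become trainable parameters.

```python
import numpy as np

def ema(x, span):
    """Exponential moving average: a fixed-weight recurrent linear unit."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(x), dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1.0 - alpha) * out[t - 1]
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD as the circuit described above: two EMA units feeding a
    subtractive node, followed by a smoothing (signal) layer."""
    macd_line = ema(prices, fast) - ema(prices, slow)   # subtractive node
    signal_line = ema(macd_line, signal)                # smoothing layer
    return macd_line, signal_line
```

Viewed this way, "training" MACD would mean letting gradient descent adjust the EMA decay rates instead of fixing them at 12/26/9.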

August 3, 2025 · 3 min · Zelina

Cognitive Gridlock: Is Consciousness a Jamming Phase?

In the world of physics, when particles in a system become so densely packed or cooled that they lock into place, we call this phenomenon jamming. Sand becoming rigid under pressure, traffic freezing on a highway, or even glass transitioning from fluid to solid—all are governed by this principle. What if the same laws applied to intelligence? A provocative new paper, Consciousness as a Jamming Phase by Kaichen Ouyang, suggests just that: large language models (LLMs) exhibit consciousness-like properties not as a software quirk but as a physical phase transition, mirroring the jamming of particles in disordered systems. ...

July 14, 2025 · 3 min · Zelina

Sharpe Thinking: How Neural Nets Redraw the Frontier of Portfolio Optimization

The search for the elusive optimal portfolio has always been a balancing act between signal and noise. Covariance matrices, central to risk estimation, are notoriously fragile in high dimensions. Classical fixes like shrinkage, spectral filtering, or factor models have all offered partial answers. But a new paper by Bongiorno, Manolakis, and Mantegna proposes something different: a rotation-invariant, end-to-end neural network that learns the inverse covariance matrix directly from historical returns — and does so better than the best analytical techniques, even under realistic trading constraints. ...
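To see why an inverse covariance (precision) estimate is the quantity worth learning, note that it plugs directly into portfolio weights. The sketch below uses the textbook global-minimum-variance formula with a sample-based precision matrix standing in for the network's output; it is an illustration of the downstream use, not the paper's architecture.

```python
import numpy as np

def min_variance_weights(precision):
    """Global minimum-variance weights from an (estimated) inverse
    covariance matrix P: w = P @ 1 / (1' P 1)."""
    ones = np.ones(precision.shape[0])
    raw = precision @ ones
    return raw / (ones @ raw)

# Stand-in for the network's learned output: the sample-based inverse.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(500, 5))  # 500 days, 5 assets
precision = np.linalg.inv(np.cov(returns, rowvar=False))
w = min_variance_weights(precision)  # weights sum to 1 by construction
```

In high dimensions the sample inverse used here is exactly the fragile estimator the paper replaces; the end-to-end network learns a better-conditioned precision matrix from the same return history.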

July 3, 2025 · 5 min · Zelina