
When One Heatmap Isn’t Enough: Layered XAI for Brain Tumour Detection

Opening — Why this matters now

Medical AI is no longer struggling with accuracy. In constrained tasks like MRI-based brain tumour detection, convolutional neural networks routinely cross the 90% mark. The real bottleneck has shifted elsewhere: trust. When an algorithm flags—or misses—a tumour, clinicians want to know why. And increasingly, a single colourful heatmap is not enough. ...
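The excerpt stops before the method, so as a rough illustration only: "layering" can mean cross-checking independent explanation views instead of trusting one. Below is a minimal PyTorch sketch under my own assumptions (a binary classifier whose class index 1 is "tumour"), combining a gradient saliency map with an occlusion map and keeping only the regions where the two agree.

```python
import torch
import torch.nn.functional as F

def gradient_saliency(model, image):
    # View 1: |d(tumour logit) / d(pixel)| -- the classic single heatmap.
    image = image.clone().requires_grad_(True)
    model(image)[0, 1].backward()          # class index 1 = "tumour" (assumed)
    return image.grad.abs().sum(dim=1)     # (1, H, W)

def occlusion_map(model, image, patch=16):
    # View 2: how much the tumour logit drops when a patch is masked out.
    with torch.no_grad():
        base = model(image)[0, 1].item()
        _, _, H, W = image.shape
        heat = torch.zeros(H // patch, W // patch)
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                masked = image.clone()
                masked[:, :, i:i + patch, j:j + patch] = 0
                heat[i // patch, j // patch] = base - model(masked)[0, 1].item()
    return F.interpolate(heat[None, None], size=(H, W), mode="bilinear")[0]

def layered_explanation(model, image):
    # Keep evidence only where both independent views agree.
    g = gradient_saliency(model, image)
    o = occlusion_map(model, image).clamp(min=0)   # ignore patches that help
    return (g / g.max().clamp(min=1e-8)) * (o / o.max().clamp(min=1e-8))
```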

February 7, 2026 · 3 min · Zelina

Clustering Without Amnesia: Why Abstraction Keeps Fighting Representation

Opening — Why this matters now

We are drowning in data that knows too much. Images with millions of pixels, embeddings with thousands of dimensions, logs that remember every trivial detail. And yet, when we ask machines to group things meaningfully—to abstract—we often get either chaos or collapse. Clustering, the supposedly humble unsupervised task, has quietly become one of the most conceptually demanding problems in modern machine learning. ...

January 20, 2026 · 4 min · Zelina

When Prophet Meets Perceptron: Chasing Alpha with NP‑DNN

Opening — Why this matters now

Stock prediction papers arrive with clockwork regularity, each promising to tame volatility with yet another hybrid architecture. Most quietly disappear after publication. A few linger—usually because they claim eye‑catching accuracy. This paper belongs to that second category, proposing a Neural Prophet + Deep Neural Network (NP‑DNN) stack that reportedly delivers 93%–99% accuracy in stock market prediction. ...
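The excerpt names the two stages but not the glue between them. A plausible wiring, using the real neuralprophet and scikit-learn APIs, is to let Neural Prophet model trend and seasonality and train the DNN on the residuals; the handoff, window size, and file name below are my assumptions, not the paper's.

```python
import numpy as np
import pandas as pd
from neuralprophet import NeuralProphet
from sklearn.neural_network import MLPRegressor

# Hypothetical input: a CSV with the two columns NeuralProphet expects,
# 'ds' (date) and 'y' (closing price).
df = pd.read_csv("prices.csv", parse_dates=["ds"])

# Stage 1: Neural Prophet captures trend and seasonality.
np_model = NeuralProphet()
np_model.fit(df, freq="D")
structural = np_model.predict(df)["yhat1"].to_numpy()

# Stage 2 (assumed glue): a plain DNN learns what stage 1 missed,
# from a sliding window of recent residuals.
residual = df["y"].to_numpy() - structural
window = 10
X = np.stack([residual[i - window:i] for i in range(window, len(residual))])
y = residual[window:]
dnn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)

# Final forecast = structural component + learned residual correction.
y_hat = structural[window:] + dnn.predict(X)
```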

January 9, 2026 · 3 min · Zelina

Greedy Enough to Win: When Loss Starts Driving the Learning Rate

Opening — Why this matters now

Modern deep learning training is an odd contradiction. We obsess over architectures, data curation, and trillion-token scaling laws—then quietly accept Cosine Annealing as if it were gravity. Learning rate schedules are often inherited, not argued for. This paper challenges that complacency with a scheduler that does something almost offensive in its simplicity: it just watches the loss and reacts. ...
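"Watches the loss and reacts" is easy to make concrete. The toy scheduler below uses an invented greedy rule (scale the learning rate up after an improving epoch, cut it after a worsening one); the paper's actual rule will differ in its details, but the control loop is the point: the loss, not the epoch count, drives the schedule.

```python
class GreedyLossScheduler:
    """Toy loss-reactive LR schedule (illustrative rule, not the paper's):
    be greedy while the loss improves, back off the moment it doesn't."""

    def __init__(self, optimizer, up=1.05, down=0.7, min_lr=1e-6):
        self.opt, self.up, self.down, self.min_lr = optimizer, up, down, min_lr
        self.best = float("inf")

    def step(self, loss):
        # Grow the LR on improvement, shrink it otherwise.
        factor = self.up if loss < self.best else self.down
        self.best = min(self.best, loss)
        for group in self.opt.param_groups:
            group["lr"] = max(group["lr"] * factor, self.min_lr)
```

It drops in where a cosine scheduler would normally sit: construct it around any PyTorch optimizer and call step(epoch_loss) once per epoch.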

December 17, 2025 · 3 min · Zelina

Sound Zones Without the Handcuffs: Teaching Neural Networks to Bend Acoustic Space

Opening — Why this matters now

Personal sound zones (PSZs) have always promised something seductive: multiple, private acoustic realities coexisting in the same physical space. In practice, they’ve delivered something closer to a bureaucratic nightmare. Every new target sound scene demands the same microphone grid, the same painstaking measurements, the same fragile assumptions. Change the scene, and you start over. ...
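For context, the status quo the post pushes against is roughly classical pressure matching: measure a transfer matrix over a dense microphone grid, then solve a regularized least-squares problem per scene and per frequency. A hypothetical NumPy sketch (dimensions, targets, and regularization invented for illustration) shows why changing the scene means starting over:

```python
import numpy as np

# Hypothetical measurements: G[m, l] is the transfer function from
# loudspeaker l to microphone m at one frequency -- the grid that
# classical PSZ methods cannot live without.
M, L = 64, 16
rng = np.random.default_rng(0)
G = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))

# Target: full level in the bright zone, silence in the dark zone.
p_target = np.zeros(M, dtype=complex)
p_target[:M // 2] = 1.0

# Regularized least squares (pressure matching). A new scene means a new
# p_target; a new room means re-measuring all of G.
lam = 1e-2
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(L), G.conj().T @ p_target)
```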

December 14, 2025 · 4 min · Zelina

Synthetic Seas: When Artificial Data Trains Real Eyes in Space

Opening — Why this matters now

The ocean economy has quietly become one of the world’s fastest‑growing industrial frontiers. Oil and gas rigs, offshore wind farms, and artificial islands now populate the seas like metallic archipelagos. Yet, despite their scale and significance, much of this infrastructure remains poorly monitored. Governments and corporations rely on fragmented reports and outdated maps—satellites see everything, but few know how to interpret the data. ...

November 8, 2025 · 4 min · Zelina

Shattering the Spectrum: How PRISM Revives Signal Processing in Time-Series AI

In the race to conquer time-series classification, most modern models have sprinted toward deeper Transformers and wider convolutional architectures. But what if the real breakthrough came not from complexity—but from symmetry? Enter PRISM (Per-channel Resolution-Informed Symmetric Module), a model that merges classical signal processing wisdom with deep learning, and in doing so, delivers a stunning blow to overparameterized AI.

PRISM’s central idea is refreshingly simple: instead of building a massive model to learn everything from scratch, start by decomposing the signal like a physicist would—using symmetric FIR filters at multiple temporal resolutions, applied independently per channel. Like a prism splitting light into distinct wavelengths, PRISM separates time-series data into spectral components that are clean, diverse, and informative. ...
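That front end is easy to picture in a few lines of NumPy. Two assumptions of mine below: the symmetric (hence linear-phase) kernels are Hann-windowed low-pass FIRs, and "multiple resolutions" means several kernel widths; the paper's actual filter design may differ.

```python
import numpy as np

def symmetric_fir_bank(x, widths=(8, 32, 128)):
    """x: (channels, time). Apply a symmetric FIR filter per channel at
    several temporal resolutions, like a prism splitting a signal into
    bands. Hann kernels are an illustrative choice."""
    views = []
    for w in widths:
        h = np.hanning(2 * w + 1)   # mirror-symmetric => linear phase
        h /= h.sum()
        # Depthwise: each channel is filtered independently.
        band = np.stack([np.convolve(ch, h, mode="same") for ch in x])
        views.append(band)
    return np.stack(views)          # (resolutions, channels, time)
```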

August 7, 2025 · 3 min · Zelina

Noise-Canceling Finance: How the Information Bottleneck Tames Overfitting in Asset Pricing

Deep learning has revolutionized many domains of finance, but when it comes to asset pricing, its power is often undercut by a familiar enemy: noise. Financial datasets are notoriously riddled with weak signals and irrelevant patterns, which easily mislead even the most sophisticated models. The result? Overfitting, poor generalization, and ultimately, bad bets. A recent paper by Che Sun proposes an elegant fix by drawing inspiration from information theory. Titled An Information Bottleneck Asset Pricing Model, the paper integrates information bottleneck (IB) regularization into an autoencoder-based asset pricing framework. The goal is simple yet profound: compress away the noise, and preserve only what matters for predicting asset returns. ...
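In code, IB regularization on such a model usually reduces to one extra penalty term. The PyTorch sketch below uses the standard variational IB machinery (a stochastic encoder plus a KL term weighted by β); the paper's exact architecture and bound are not reproduced here.

```python
import torch
import torch.nn as nn

class IBAssetPricer(nn.Module):
    """Sketch: encode characteristics x into a stochastic factor z,
    predict returns from z, and penalize how much z remembers about x."""

    def __init__(self, d_in, d_z):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # mean and log-variance of z
        self.dec = nn.Linear(d_z, 1)          # z -> expected return

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z).squeeze(-1), mu, logvar

def ib_loss(r_hat, r, mu, logvar, beta=1e-3):
    fit = ((r_hat - r) ** 2).mean()  # pricing error
    # KL(q(z|x) || N(0, I)): a variational bound on I(X; Z), the bottleneck.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
    return fit + beta * kl           # compress away noise, keep what prices
```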

August 1, 2025 · 3 min · Zelina

Boxed In, Cashed Out: Deep Gradient Flows for Fast American Option Pricing

Pricing American options has long been the Achilles’ heel of quantitative finance, particularly in high dimensions. Unlike European options, American-style derivatives introduce a free-boundary problem due to their early exercise feature, making analytical solutions elusive and most numerical methods inefficient beyond two or three assets. But a recent paper by Jasper Rou introduces a promising technique — the Time Deep Gradient Flow (TDGF) — that sidesteps several of these barriers with a fresh take on deep learning design, optimization, and sampling. ...
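The excerpt does not spell out the scheme, but the generic minimizing-movement form of a gradient flow gives the flavour: discretize time into steps of size $\Delta t$, and at each step train a network $u_\theta$ to minimize an energy anchored to the previous step, with early exercise entering as a constraint against the payoff $g$. Treat this as the textbook template, not the paper's exact functional:

```latex
u^{n} \;=\; \arg\min_{u}\; \frac{1}{2\,\Delta t}\,\bigl\lVert u - u^{n-1} \bigr\rVert_{L^{2}}^{2} \;+\; E(u),
\qquad u^{n} \;\ge\; g \quad \text{(early exercise)}
```

Replacing each PDE time step with a small minimization problem is what lets a neural network, trained step by step, stand in for a grid in high dimensions.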

July 27, 2025 · 4 min · Zelina

Residual Entanglement: How ResQuNNs Fix Gradient Flow in Quantum Neural Networks

In classical deep learning, residual connections revolutionized the training of deep networks. Now, a similar breakthrough is happening in quantum machine learning. The paper “ResQuNNs: Towards Enabling Deep Learning in Quantum Convolution Neural Networks” introduces a method to overcome a fundamental bottleneck in Quantum Convolutional Neural Networks (QuNNs): the inability to train multiple quantum layers due to broken gradient flow. ...
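The shape of the fix is familiar from ResNets: give gradients a path around each quantum block. A toy two-qubit sketch with PennyLane (my choice of framework; the paper's circuits are richer, and it also studies where the residuals should sit) adds a block's input back to its measured output:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def quanv_block(params, x):
    # One quantum layer: encode the classical input, rotate, entangle.
    qml.AngleEmbedding(x, wires=[0, 1])
    qml.RY(params[0], wires=0)
    qml.RY(params[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))

def residual_block(params, x):
    # Residual connection: the block's input is added back to its output,
    # so gradients reaching earlier layers can bypass the block.
    return qml.math.stack(quanv_block(params, x)) + x

x = np.array([0.1, -0.4], requires_grad=False)
theta1 = np.array([0.3, 0.7], requires_grad=True)
theta2 = np.array([0.2, -0.5], requires_grad=True)

# Two stacked trainable quantum layers -- the configuration that breaks
# down without the residual path.
out = residual_block(theta2, residual_block(theta1, x))
```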

July 12, 2025 · 4 min · Zelina