
Volume Shock Therapy: Why Markowitz Risk Might Be Lying to You

Most risk models in finance still trace their roots to Harry Markowitz’s 1952 portfolio theory. His formula for portfolio variance has become institutional orthodoxy, from asset managers’ spreadsheets to central bank macro-models. But what if the model’s foundations are missing a critical component of today’s noisy markets? Victor Olkhov’s recent paper makes a sharp yet mathematically grounded argument: Markowitz variance can drastically under- or overestimate true portfolio risk when trade volumes fluctuate — which they almost always do. ...
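For readers who want the formula under debate pinned down: the classic Markowitz portfolio variance is the quadratic form w′Σw, sketched below with illustrative numbers (the weights and covariance matrix are made up for this example, not taken from the paper).

```python
import numpy as np

def markowitz_variance(weights, cov):
    """Classic Markowitz portfolio variance: w' @ Sigma @ w.

    This is the quantity Olkhov argues mis-states risk when trade
    volumes fluctuate; shown here only to fix notation.
    """
    w = np.asarray(weights, dtype=float)
    return float(w @ np.asarray(cov, dtype=float) @ w)

w = [0.6, 0.4]                              # portfolio weights
cov = [[0.04, 0.01],                        # return covariance matrix
       [0.01, 0.09]]
var = markowitz_variance(w, cov)
# Expands to w1^2*s11 + 2*w1*w2*s12 + w2^2*s22
assert np.isclose(var, 0.6**2 * 0.04 + 2 * 0.6 * 0.4 * 0.01 + 0.4**2 * 0.09)
```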

August 3, 2025 · 3 min · Zelina

When Mortality Meets Memory: Pricing Risk in the Long Haul

In finance and insurance, we’ve long priced instruments like bonds and annuities based on the assumption that interest rates and mortality evolve fairly independently — and without memory. But COVID-19 shattered that illusion. Today, the joint dynamics of mortality and macroeconomics demand a rethink, and a recent paper by Zhou & Zhou offers exactly that. ...

August 3, 2025 · 4 min · Zelina

When Small Coins Roar: Rethinking Systemic Risk in Crypto Volatility Forecasting

In traditional finance, systemic risk is often linked to size — the bigger the institution, the bigger the threat. But in crypto? The rules are different. A recent paper from researchers at Jinan University rewrites the forecasting playbook by demonstrating that systemic influence in crypto markets is more about network positioning than market cap. The authors introduce a state-adaptive volatility model that integrates multi-scale realized volatility measures (like semivariance and jump components) with time-varying quantile spillovers, producing a high-resolution view of inter-asset contagion — especially under stress. ...
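The multi-scale inputs mentioned above can be made concrete. Below is a minimal sketch of the standard realized-variance decomposition into upside and downside semivariance from intraday returns; the function and variable names are illustrative, not the paper’s.

```python
import numpy as np

def realized_measures(intraday_prices):
    """Decompose realized variance into upside/downside semivariance.

    Sketch of the standard realized-measure decomposition the model
    ingests: squared intraday log returns, split by sign.
    """
    r = np.diff(np.log(np.asarray(intraday_prices)))  # intraday log returns
    rv = np.sum(r ** 2)                               # realized variance
    rs_neg = np.sum(r[r < 0] ** 2)                    # downside semivariance
    rs_pos = np.sum(r[r >= 0] ** 2)                   # upside semivariance
    return rv, rs_pos, rs_neg

prices = [100.0, 100.5, 99.8, 100.2, 101.0]
rv, pos, neg = realized_measures(prices)
assert np.isclose(rv, pos + neg)  # semivariances sum to realized variance
```

The downside component is what lets a state-adaptive model weight bad-news volatility differently from good-news volatility under stress.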

August 3, 2025 · 3 min · Zelina

When the Market Speaks: A New Dataset That Actually Listens

In financial sentiment analysis, the devil has always been in the labeling. Most datasets — even the industry-standard Financial-Phrasebank — ask human annotators to tag headlines as positive, negative, or neutral. But here’s the problem: the market often disagrees. Take a headline reporting widening losses. Annotators marked it “negative.” Yet the stock rose the next day. Welcome to the disconnect. Enter FinMarBa, a bold new dataset that cuts out the middleman — the human — and lets the market itself do the labeling. Developed by Lefort et al. (2025), this 61,252-item dataset uses next-day price reactions to classify financial news, creating a labeling method that is empirically grounded, scalable, and (critically) aligned with investor behavior. ...
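The market-based labeling idea reduces to a very small rule. Here is a hedged sketch: the 0.5% neutral band is an illustrative threshold of ours, not necessarily the cutoff Lefort et al. use.

```python
def market_label(next_day_return, band=0.005):
    """Label a headline by the market's next-day price reaction.

    Sketch of market-based labeling as described in the excerpt;
    the 0.5% neutral band is an assumed, illustrative threshold.
    """
    if next_day_return > band:
        return "positive"
    if next_day_return < -band:
        return "negative"
    return "neutral"

# A headline about widening losses followed by a +2% move is labeled
# "positive": the label matches the market, not the annotator.
assert market_label(0.02) == "positive"
assert market_label(-0.01) == "negative"
assert market_label(0.001) == "neutral"
```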

August 3, 2025 · 3 min · Zelina

From Scroll to Structure: Rethinking Academic Reading with TreeReader

For centuries, reading has meant scrolling—page by page, line by line. But what if reading could mean navigating a tree? TreeReader, a new system from researchers at the University of Toronto and the Vector Institute, challenges the linearity of academic literature. It proposes a reimagined interface: one where large language models (LLMs) summarize each section and paragraph into collapsible nodes in a hierarchical tree, letting readers skim, zoom, and verify with surgical precision. The result is more than a UX tweak—it’s a new cognitive model for how scholars might interact with complex documents in the era of AI. ...

August 2, 2025 · 3 min · Zelina

Merge Without Mayhem: How Orthogonal Deltas Could Revolutionize Model Composition

In the era of foundation models, one challenge looms increasingly large: how to safely, scalably, and reversibly compose AI systems from multiple task-specific fine-tunings. Traditional solutions — from naïve weight averaging to adapter stacking — often create interference, forgetfulness, and compliance nightmares. But a recent paper introduces a promising new direction: Modular Delta Merging with Orthogonal Constraints (MDM-OC). Rather than combining entire model weights, MDM-OC treats each task-specific fine-tuned model as a delta from a shared base. Think of these deltas as compact, focused perturbations that encode only what changed to solve a given task. The twist? Before merging, each delta is orthogonalized — projected into a subspace that doesn’t overlap with others. This creates a modular, mathematically principled structure for interference-free integration. ...
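The orthogonalization step can be sketched with a Gram-Schmidt pass over flattened weight deltas. This is a toy illustration of the constraint’s geometry, not the paper’s actual algorithm, which operates on real model parameters module by module.

```python
import numpy as np

def orthogonalize_deltas(deltas):
    """Project each task delta away from the span of earlier ones
    (Gram-Schmidt), so the merged deltas do not interfere.

    Minimal sketch of the orthogonal-constraint idea on flattened
    weight vectors; returns unit-norm, mutually orthogonal directions.
    """
    ortho = []
    for d in deltas:
        d = np.asarray(d, dtype=float).copy()
        for q in ortho:
            d -= (d @ q) * q              # remove overlap with earlier deltas
        norm = np.linalg.norm(d)
        if norm > 1e-12:                  # drop deltas fully explained by others
            ortho.append(d / norm)
    return ortho

d1 = np.array([1.0, 1.0, 0.0, 0.0])       # delta from fine-tuning on task 1
d2 = np.array([1.0, 0.0, 1.0, 0.0])       # overlaps d1 in the first dimension
q1, q2 = orthogonalize_deltas([d1, d2])
assert abs(q1 @ q2) < 1e-9                 # after projection: no interference
```

Because each delta lives in its own subspace, adding or removing one (the “reversible” part) leaves the others untouched.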

August 2, 2025 · 3 min · Zelina

Mind's Eye for Machines: How SimuRA Teaches AI to Think Before Acting

What if AI agents could imagine their future before taking a step—just like we do? That’s the vision behind SimuRA, a new architecture that pushes LLM-based agents beyond reactive decision-making and into the realm of internal deliberation. Introduced in the paper “SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model”, SimuRA’s key innovation lies in separating what might happen from what should be done. Instead of acting step-by-step based solely on observations, SimuRA-based agents simulate multiple futures using a learned world model and then reason over those hypothetical outcomes to pick the best action. This simple-sounding shift is surprisingly powerful—and may be a missing link in developing truly general AI. ...
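The separation of “what might happen” from “what should be done” fits a simple simulate-then-score loop. In this toy sketch, `world_model` and `evaluate` stand in for SimuRA’s LLM-based world model and outcome reasoner; every name here is illustrative.

```python
def choose_action(state, candidate_actions, world_model, evaluate, n_rollouts=3):
    """Pick the action whose simulated futures score best.

    Minimal sketch of simulative reasoning: roll each candidate
    action forward through a world model, score the hypothetical
    outcomes, and act on the best average.
    """
    def avg_score(action):
        futures = [world_model(state, action) for _ in range(n_rollouts)]
        return sum(evaluate(f) for f in futures) / n_rollouts
    return max(candidate_actions, key=avg_score)

# Toy world: the state is a position, the goal is to reach 10.
goal = 10
wm = lambda s, a: s + a                    # deterministic toy world model
ev = lambda s: -abs(goal - s)              # closer to the goal scores higher
assert choose_action(0, [-1, 1], wm, ev) == 1   # simulation prefers stepping up
```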

August 2, 2025 · 3 min · Zelina

Noisy by Nature: Rethinking Financial Time Series Generation with GBM-Inspired Diffusion

Most generative models for time series—particularly those borrowed from image generation—treat financial prices like any other numerical data: throw in Gaussian noise, then learn to clean it up. But markets aren’t like pixels. Financial time series have unique structures: they evolve multiplicatively, exhibit heteroskedasticity, and follow stochastic dynamics that traditional diffusion models ignore. In this week’s standout paper, “A diffusion-based generative model for financial time series via geometric Brownian motion,” Kim et al. propose a subtle yet profound twist: model the noise using financial theory, specifically geometric Brownian motion (GBM), rather than injecting it naively. ...
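To see why GBM is a better noise prior for prices, compare its multiplicative dynamics dS = μS dt + σS dW with additive Gaussian noise. The sketch below simulates a GBM path via its exact log-increment solution; the drift and volatility values are illustrative, not from the paper.

```python
import numpy as np

def gbm_path(s0, mu, sigma, n_steps, dt, rng):
    """Simulate a geometric Brownian motion price path.

    Uses the exact solution S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t),
    so prices stay positive and volatility scales with the price level:
    the structure additive Gaussian noising throws away.
    """
    z = rng.standard_normal(n_steps)
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.cumsum(log_increments))

rng = np.random.default_rng(0)
path = gbm_path(100.0, mu=0.05, sigma=0.2, n_steps=252, dt=1 / 252, rng=rng)
assert np.all(path > 0)  # multiplicative dynamics never cross zero
```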

August 2, 2025 · 3 min · Zelina

Seeing is Retraining: How VizGenie Turns Visualization into a Self-Improving AI Loop

Scientific visualization has long been caught in a bind: the more complex the dataset, the more domain-specific the visualization, and the harder it is to automate. From MRI scans to hurricane simulations, modern scientific data is massive, high-dimensional, and notoriously messy. While dashboards and 2D plots have benefitted from LLM-driven automation, 3D volumetric visualization—especially in high-performance computing (HPC) settings—has remained stubbornly manual. VizGenie changes that. Developed at Los Alamos National Laboratory, VizGenie is a hybrid agentic system that doesn’t just automate visualization tasks—it refines itself through them. It blends traditional visualization tools (like VTK) with dynamically generated Python modules and augments this with vision-language models fine-tuned on domain-specific images. The result: a system that can answer questions like “highlight the tissue boundaries” and actually improve its answers over time. ...

August 2, 2025 · 4 min · Zelina

🚀 All Talk, No Stocks? What Reddit Sentiment *Doesn't* Predict

In the wake of the GameStop and AMC frenzies, financial firms and researchers have been racing to decode one question: Can social media sentiment predict stock prices? A new paper from researchers at Wrocław University of Science and Technology provides a sobering answer: not really. Despite employing advanced sentiment models—including a ChatGPT-annotated and emoji-savvy version of Financial-RoBERTa—the study found only weak and inconsistent relationships between sentiment and price movement for GME and AMC. ...

August 1, 2025 · 3 min · Zelina