Lévy processes — stochastic processes with jumps — are the bedrock of modern financial modeling. From the Variance Gamma model to the CGMY framework, they go beyond Brownian motion to capture the reality of financial returns: asymmetry, fat tails, and sudden discontinuities. But what if we told you these processes don’t just parameterize probability distributions — they live on manifolds?
In a bold generalization, Jaehyung Choi’s recent work extends the powerful tools of information geometry to all Lévy processes. This geometrization allows us to measure the “distance” between models, define natural priors for Bayesian inference, and design estimators with reduced bias — all from the curvature of the model space.
Lévy Processes as Geometric Objects
At the core of this geometric turn lies the α-divergence, a generalization of the Kullback–Leibler (KL) divergence that depends on a real parameter $\alpha$. The divergence between two Lévy processes $P$ and $Q$ (with characteristic triplets $(\sigma, \nu_P, \gamma_P)$ and $(\sigma, \nu_Q, \gamma_Q)$) is given by closed-form expressions depending on $\alpha$, involving integrals over their Lévy measures:
- When $\alpha \to -1$, it recovers the KL divergence (a smooth limit via L’Hôpital’s rule, checked numerically in the sketch below)
- When $\alpha = 0$, it reduces to the squared Hellinger distance, which is symmetric
- When $\alpha \to +1$, the same limiting argument yields the KL divergence with its arguments reversed
This divergence then defines a Riemannian metric $g_{ij}$ (the Fisher information matrix) and an $\alpha$-connection $\Gamma^{(\alpha)}_{ijk}$, capturing local curvature.
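The paper’s closed-form expressions act directly on Lévy triplets, but the limiting behavior in $\alpha$ is easiest to see on plain densities. Here is a minimal numerical sketch, assuming the standard Amari normalization $D^{(\alpha)}(p\|q) = \frac{4}{1-\alpha^2}\bigl(1 - \int p^{(1-\alpha)/2} q^{(1+\alpha)/2}\,dx\bigr)$ (our convention, not necessarily the paper’s exact one), which checks the $\alpha \to -1$ limit against the Gaussian KL divergence:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def alpha_divergence(p, q, alpha, lo=-20.0, hi=20.0):
    """Amari alpha-divergence between two densities (standard 4/(1-alpha^2) form)."""
    integrand = lambda x: p(x) ** ((1 - alpha) / 2) * q(x) ** ((1 + alpha) / 2)
    integral, _ = quad(integrand, lo, hi)
    return 4.0 / (1.0 - alpha**2) * (1.0 - integral)

p = norm(loc=0.0, scale=1.0).pdf
q = norm(loc=0.5, scale=1.2).pdf

# Closed-form KL(p || q) for Gaussians, to compare against the alpha -> -1 limit
kl = np.log(1.2 / 1.0) + (1.0**2 + 0.5**2) / (2 * 1.2**2) - 0.5
print(alpha_divergence(p, q, alpha=-0.999), "vs KL:", kl)  # nearly equal
print(alpha_divergence(p, q, alpha=0.0))                   # four times the squared Hellinger distance
```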
| Concept | Formula ($\sigma \neq 0$) |
|---|---|
| Fisher information | $g_{ij} = \frac{T}{\sigma^2}\, \partial_i m\, \partial_j m + T \int \partial_i \log \nu \cdot \partial_j \log \nu \, d\nu$ |
| $\alpha$-connection | Similar integral terms plus curvature-like adjustments depending on $\alpha$ |

where $m = \gamma - \int_{|x|<1} x \, d\nu(x)$.
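As a sanity check on this formula, here is a hedged numerical sketch that evaluates both terms by quadrature and central differences for a one-parameter family. The one-sided tempered stable density $\nu(x) = C e^{-\lambda x}/x^{1+a}$ on $x > 0$ and all parameter values are illustrative assumptions, not the paper’s worked example:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative one-sided tempered stable jump density: nu(x) = C exp(-lam*x)/x^(1+a), x > 0
C, a, sigma, T = 1.0, 0.5, 0.3, 1.0

def nu(x, lam):
    return C * np.exp(-lam * x) / x ** (1 + a)

def fisher_lam(lam, eps=1e-4):
    # Drift term: m(lam) = gamma - int_{|x|<1} x dnu(x); gamma drops out of the derivative
    m_trunc = lambda l: -quad(lambda x: x * nu(x, l), 0, 1)[0]
    dm = (m_trunc(lam + eps) - m_trunc(lam - eps)) / (2 * eps)
    drift_term = T / sigma**2 * dm * dm
    # Jump term: d(log nu)/d(lam) = -x, so the integrand is x^2 * nu(x)
    jump_term = T * quad(lambda x: x**2 * nu(x, lam), 0, np.inf)[0]
    return drift_term + jump_term

print(fisher_lam(2.0))
```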
Case Studies: From Theory to Practice
The paper moves beyond abstraction by deriving these geometric quantities for several widely used financial models:
1. Tempered Stable & GTS Processes
- These processes modify stable distributions by exponentially tempering the tails.
- The $\alpha$-divergence between two GTS processes (differing in $\lambda_+, \lambda_-$) has a closed form involving Gamma functions.
- The geometry yields a block-diagonal Fisher matrix in $\lambda_+$ and $\lambda_-$, enabling clean parameter decoupling (verified numerically in the sketch below).
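Since $\partial_{\lambda_+} \log \nu$ vanishes on the negative half-line (and $\partial_{\lambda_-} \log \nu$ on the positive one), the cross term of the Fisher matrix integrates to zero, which is exactly the block-diagonal structure above. The sketch below, under the same illustrative assumptions as before (jump term only, hypothetical parameter values), checks a diagonal entry against its Gamma-function closed form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Illustrative GTS jump density: nu(x) = C exp(-lam_p*x)/x^(1+a)      for x > 0,
#                                nu(x) = C exp(-lam_m*|x|)/|x|^(1+a)  for x < 0
C, a, lam_p, T = 1.0, 0.5, 2.0, 1.0

# Cross term: d(log nu)/d(lam_p) = -x only on x > 0, d(log nu)/d(lam_m) = -|x| only on x < 0,
# so their product vanishes everywhere -> off-diagonal Fisher entries are zero (block-diagonal).

# Diagonal jump term: T * int_0^inf x^2 nu(x) dx = T * C * Gamma(2-a) / lam_p^(2-a)
numeric, _ = quad(lambda x: x**2 * C * np.exp(-lam_p * x) / x ** (1 + a), 0, np.inf)
print(T * numeric, "vs", T * C * gamma(2 - a) / lam_p ** (2 - a))
```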
2. CGMY / CTS Models
- Special case of GTS where both tails share the same index $a$ and scale $C$.
- Enables symmetric modeling of upward/downward jumps.
- The Jeffreys prior becomes: $J(\xi) \propto \frac{C \Gamma(2-a)}{(\lambda_+ \lambda_-)^{2-a}}$
3. Variance Gamma Processes
- Not strictly tempered stable, but can be viewed as a limiting case as $a \to 0$.
- Requires special care due to divergence of the Lévy measure; resolved by regularization.
- Results in a diagonal Fisher matrix with entries $g_{ii} = \frac{TC}{\lambda_i^2}$, showing high curvature near small $\lambda$ (checked numerically below).
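Under the same illustrative assumptions (one-sided VG-style density $\nu(x) = C e^{-\lambda x}/x$, jump term only), a quick quadrature reproduces $TC/\lambda^2$ and makes the curvature blow-up at small $\lambda$ visible:

```python
import numpy as np
from scipy.integrate import quad

C, T = 1.5, 1.0
for lam in [0.1, 0.5, 2.0]:
    # Jump term of the Fisher entry: T * int_0^inf x^2 * C exp(-lam*x)/x dx = T*C/lam^2
    numeric, _ = quad(lambda x: x**2 * C * np.exp(-lam * x) / x, 0, np.inf)
    print(f"lam={lam}: numeric={T * numeric:.4f}  closed form={T * C / lam**2:.4f}")
```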
Practical Implications
Why should a financial modeler or data scientist care about this geometry?
- Bias-Reduced Estimation: Using the geometric structure, one can construct penalized log-likelihoods
$$ \ell^*(\xi) = \ell(\xi) + \log J(\xi) $$
where $J(\xi)$ is the Jeffreys prior derived from the Fisher matrix.
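As a minimal sketch of the mechanism, on a toy model rather than a Lévy process (our simplification): for i.i.d. exponential data the Jeffreys prior is $J(\lambda) \propto 1/\lambda$, and adding $\log J(\lambda)$ to the log-likelihood shifts the maximizer from the upward-biased MLE $n/\sum x_i$ to the exactly unbiased $(n-1)/\sum x_i$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 2.0, size=20)  # i.i.d. draws with true rate lambda = 2

def neg_penalized_loglik(lam, penalize=True):
    ll = len(x) * np.log(lam) - lam * x.sum()  # exponential log-likelihood
    if penalize:
        ll += -np.log(lam)                     # log Jeffreys prior: J(lam) is prop. to 1/lam
    return -ll

mle = minimize_scalar(neg_penalized_loglik, args=(False,), bounds=(1e-6, 50), method="bounded").x
pen = minimize_scalar(neg_penalized_loglik, bounds=(1e-6, 50), method="bounded").x
print("MLE:      ", mle, " closed form n/sum(x):    ", len(x) / x.sum())
print("Penalized:", pen, " closed form (n-1)/sum(x):", (len(x) - 1) / x.sum())
```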
- Better Bayesian Prediction: The geometry helps construct superharmonic priors $\tilde{J} = \rho J$, which outperform Jeffreys priors in predictive tasks. Superharmonicity is defined via the Laplace–Beltrami operator:
$$ \Delta \rho = \frac{1}{\sqrt{\det g}} \partial_i \left( \sqrt{\det g} \, g^{ij} \partial_j \rho \right) < 0 $$
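Checking the sign of $\Delta \rho$ at a point is straightforward with nested central differences. The sketch below assumes the diagonal VG-style metric $g_{ii} = TC/\lambda_i^2$ from the case studies and an illustrative candidate $\rho$; a negative value certifies superharmonicity only at that point, not globally:

```python
import numpy as np

T, C, h = 1.0, 1.5, 1e-4

def g_diag(lam):
    # Diagonal VG-style metric from the case studies: g_ii = T*C / lam_i^2
    return np.array([T * C / lam[0] ** 2, T * C / lam[1] ** 2])

def laplace_beltrami(rho, lam):
    """Delta rho = (1/sqrt(det g)) * sum_i d_i( sqrt(det g) * g^ii * d_i rho )
    for a diagonal metric, via nested central differences."""
    sqrt_det = lambda p: np.sqrt(np.prod(g_diag(p)))
    total = 0.0
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        # Flux sqrt(det g) * g^ii * d_i(rho), with g^ii = 1/g_ii for a diagonal metric
        flux = lambda p: sqrt_det(p) / g_diag(p)[i] * (rho(p + e) - rho(p - e)) / (2 * h)
        total += (flux(lam + e) - flux(lam - e)) / (2 * h)
    return total / sqrt_det(lam)

# Illustrative candidate: rho = 1 / (1 + (log lam_1)^2 + (log lam_2)^2)
rho = lambda lam: 1.0 / (1.0 + np.log(lam[0]) ** 2 + np.log(lam[1]) ** 2)
print(laplace_beltrami(rho, np.array([1.5, 1.2])))  # negative -> superharmonic at this point
```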
- Model Distance & Robustness: Want to quantify how far your estimated model is from a benchmark? Use $D^{(\alpha)}(P\|Q)$. Need robustness against parameter misspecification? Small curvature implies local stability.
- Unified Framework: Instead of treating VG, CGMY, and stable models separately, this framework puts them all on the same manifold, enabling systematic comparison, calibration, and inference.
Geometry Meets Finance
In a field obsessed with precision — implied vol surfaces, VaR backtests, Sharpe ratios to the 3rd decimal — there’s something liberating about stepping back and asking: what is the shape of our models? Choi’s paper answers that question. It shows us that financial models don’t just live in spreadsheets or Monte Carlo simulations — they live in curved spaces, where distance, curvature, and geometry reveal deeper truths.
And with tools like $\alpha$-divergence and Fisher geometry, we’re no longer walking blindfolded through model space. We’re navigating with maps.
Cognaptus: Automate the Present, Incubate the Future