The St. Petersburg paradox has long been a thorn in the side of rational decision theory. Offering an infinite expected payout but consistently eliciting modest real-world bids, the game exposes a rift between mathematical expectation and human judgment. Most solutions dodge this by modifying utility functions, imposing discounting, or resorting to exotic number systems. But what if we change the addition itself?

That’s the premise behind Takashi Izumo’s fascinating paper, which proposes coarse addition—a summation method grounded not in mathematical precision but in cognitive resolution. In this model, large numbers are binned into coarse-grained categories, each with a representative value (typically the median). Additions are performed on these representatives, not the raw numbers, and the result is then remapped to its corresponding grain. The upshot? Repeated additions that would normally diverge can become inert—locked into a stable category that stops growing. The paradox dissolves.

From Divergence to Inertness

Let’s recall the setup. The St. Petersburg game pays $2^{n-1}$ dollars if the first heads appears on the $n$-th coin toss. The expected value, $\sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^{n-1}$, diverges: every term equals $\tfrac{1}{2}$, so the partial sums grow without bound. Yet no one in their right mind would stake more than a few dollars to play.
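As a quick sanity check (a sketch of the arithmetic, not code from the paper), every term of the expectation works out to exactly one half, so the partial sums climb linearly forever:

```python
# Each term of the St. Petersburg expectation, (1/2**n) * 2**(n-1),
# is exactly 0.5, so N terms sum to N/2 -- divergence in slow motion.
terms = [(1 / 2**n) * 2**(n - 1) for n in range(1, 101)]
assert all(t == 0.5 for t in terms)
print(sum(terms))  # 50.0 after 100 terms, and still growing
```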

In Izumo’s framework, these payouts are not summed directly. Instead:

  1. Partition the outcome space (e.g., the natural numbers) into disjoint grains (e.g., using Fibonacci intervals).
  2. Map each value to the median of its grain.
  3. Add these representatives.
  4. Remap the result into its grain to get the updated value.

With properly constructed partitions—especially those with increasing width—the sum stabilizes. Even if new terms keep arriving, their representative additions fall into a large grain that eventually absorbs further increments. Hence, inertness: growth without effect.
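The four steps above can be sketched in a few lines of Python. The specific Fibonacci boundaries, the midpoint representative, and all names below are illustrative assumptions, not the paper’s exact construction:

```python
# A minimal sketch of coarse addition over a Fibonacci partition of
# the non-negative reals. Grain boundaries, representatives, and the
# helper names are illustrative choices, not Izumo's exact definitions.

def fib_boundaries(limit):
    """Grain boundaries 0, 1, 2, 3, 5, 8, ... up to at least `limit`."""
    bounds = [0, 1, 2]
    while bounds[-1] < limit:
        bounds.append(bounds[-1] + bounds[-2])
    return bounds

BOUNDS = fib_boundaries(10**9)

def representative(x):
    """Step 2: the median (midpoint) of the grain [b_i, b_{i+1}) holding x."""
    for i in range(len(BOUNDS) - 1):
        if BOUNDS[i] <= x < BOUNDS[i + 1]:
            return (BOUNDS[i] + BOUNDS[i + 1]) / 2
    raise ValueError("value outside the partition")

def coarse_add(a, b):
    """Steps 3-4: add the two representatives, then remap into a grain."""
    return representative(representative(a) + representative(b))

# Accumulate the expectation's terms (each worth 1/2) both ways.
exact = coarse = 0.0
for _ in range(1000):
    exact += 0.5                      # ordinary addition: grows without bound
    coarse = coarse_add(coarse, 0.5)  # coarse addition: goes inert

print(exact)   # 500.0 -- still growing linearly
print(coarse)  # 4.0 -- the grain [3, 5) absorbs every further half-unit
```

Once the running total’s representative plus the increment lands back in the same grain, the remap returns the same representative on every subsequent step: that fixed point is exactly the inertness described above.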

Why This Matters (and Not Just for Economists)

1. It models how humans actually think.

We don’t add infinite decimals in our heads. We think in chunks: “a few dollars,” “a huge win,” “barely anything.” This model respects that bounded precision and mathematically formalizes it.

2. It offers an alternative to ethically shaky discounting.

Exponential discounting treats far-future outcomes as nearly worthless—raising deep ethical concerns in domains like climate policy. Coarse addition sidesteps this by treating cognitive saturation (not impatience) as the reason for diminishing sensitivity.
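For contrast, a discounted sum of the same series converges only because each later term is shrunk toward zero, whereas coarse addition reaches a finite value while leaving every term at face value. A toy comparison (the discount factor $\delta = 0.9$ is an arbitrary illustrative choice):

```python
# Exponential discounting of the St. Petersburg expectation terms
# (each worth 1/2). The sum converges, but only by devaluing later
# terms -- the ethically contested move coarse addition avoids.
delta = 0.9  # arbitrary discount factor, chosen for illustration
discounted = sum((delta ** n) * 0.5 for n in range(1, 500))
# Closed form: 0.5 * delta / (1 - delta) = 4.5
print(round(discounted, 6))  # 4.5
```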

| Method | Limitation | Coarse Addition’s Advantage |
|---|---|---|
| Diminishing utility | Relies on arbitrary utility-function assumptions | Alters the summation rule directly |
| Discounting | Implies ethically dubious preferences | No need to devalue the future artificially |
| Hyperreal models | Abstract and uncomputable | Provides concrete, computable operations |
| Prospect theory | Still diverges with fast-growing payoffs | Clearly delineates coarse divergence vs. inertness |

3. It guides the design of value-aligned AI.

If AI agents are to mirror human preferences or reason in ways we find interpretable, they too must respect coarse reasoning. That means accepting that small additions may become irrelevant once a perceptual threshold is crossed—a powerful constraint for bounded rationality models and human-aligned aggregation protocols.

Not Just a Paradox Fix—A Cognitive Lens

Izumo’s proposal does more than clean up an old mathematical oddity. It reframes how we think about aggregation in any domain where cognition is bounded—from behavioral economics to AI ethics to neurocomputational modeling. It aligns with how we speak (“a lot,” “barely moved”), how we plan (“don’t sweat the small stuff”), and how we design interfaces (progress bars, signal strength, star ratings).

There’s beauty in letting go of unrealistic precision. The true value of this work isn’t in fixing the St. Petersburg paradox—it’s in offering a framework where thinking like a human is no longer a flaw, but a design principle.


Cognaptus: Automate the Present, Incubate the Future.