When Ray Kurzweil first proposed the “Law of Accelerating Returns,” he suggested that technological progress builds on itself, speeding up over time. But what if even that framing is too slow?
David Orban’s recent paper, *Jolting Technologies: Superexponential Acceleration in AI Capabilities and Implications for AGI*, pushes the discussion into new mathematical territory. Instead of modeling AI progress as exponential (where capability grows at a constant relative rate), he proposes something more radical: sustained positive third-order derivatives, known in physics as jolts.
🧮 From Exponential to Jolting
Let’s break down the math:
Term | Meaning | Analogy |
---|---|---|
C(t) | AI capability at time t | Distance |
C'(t) | Velocity (rate of improvement) | Speed |
C''(t) | Acceleration (rate of velocity increase) | Acceleration |
C'''(t) | Jolt (rate of acceleration increase) | Jerk (change in acceleration) |
In simple terms: AI is not only improving (velocity); it is improving faster over time (acceleration), and that acceleration is itself increasing (jolt).
Orban formalizes this into a hypothesis: if AI benchmarks show sustained positive third derivatives, we’re in a jolting regime — a growth curve steeper than even the most optimistic exponential models predict.
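To make the table concrete, here is a minimal numerical sketch (our illustration, not code from the paper): it builds a toy superexponential capability curve, so the jolt is positive by construction, and recovers the three derivatives with finite differences.

```python
import numpy as np

# Toy capability curve C(t) = exp(k*t + j*t**3 / 6): the cubic term in the
# exponent makes growth superexponential. k and j are arbitrary constants
# chosen for illustration.
t = np.linspace(0, 5, 500)
dt = t[1] - t[0]
k, j = 0.5, 0.05
C = np.exp(k * t + j * t**3 / 6)

# Successive finite differences approximate the rows of the table above.
C1 = np.gradient(C, dt)   # velocity: rate of improvement
C2 = np.gradient(C1, dt)  # acceleration: rate of velocity increase
C3 = np.gradient(C2, dt)  # jolt: rate of acceleration increase

# The jolting-regime criterion from the text: a sustained positive third
# derivative (interior points only, to avoid finite-difference edge effects).
print(f"Jolt positive throughout: {bool(np.all(C3[5:-5] > 0))}")
```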
📊 Detecting the Jolt: Evidence and Method
To detect such jolts, Orban constructs a robust framework:
- Data Selection: Long-running benchmarks like MMLU, ImageNet, and AgentBench.
- Smoothing & Derivatives: Savitzky-Golay filters and spline regressions to compute the first, second, and third derivatives (see the sketch after the table below).
- Monte Carlo Validation: Simulations show that his hybrid detection model performs well even under noisy conditions:
Noise Level | True Positive Rate | False Positive Rate |
---|---|---|
Low | 95% | 5% |
Medium | 92% | 8% |
High | 85% | 15% |
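As a hedged illustration of the derivative-estimation step (not Orban’s pipeline; the filter settings, noise model, and toy curve below are our assumptions), SciPy’s `savgol_filter` can return a third-derivative estimate directly:

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy Monte Carlo sketch: recover C'''(t) from noisy observations of a curve
# that is superexponential by construction. Parameters are illustrative.
t = np.linspace(0, 5, 200)
dt = t[1] - t[0]
C_true = np.exp(0.5 * t + 0.05 * t**3 / 6)

rng = np.random.default_rng(0)
fractions = []
for _ in range(500):
    noisy = C_true * (1 + 0.02 * rng.standard_normal(t.size))
    # deriv=3 fits a local quartic in each window and differentiates it.
    est = savgol_filter(noisy, window_length=31, polyorder=4, deriv=3, delta=dt)
    fractions.append((est[15:-15] > 0).mean())  # trim filter edge artifacts

print(f"Mean fraction of series with positive estimated jolt: "
      f"{np.mean(fractions):.0%}")
```

A full detection study would sweep noise levels and tally hits and false alarms, which is what the true/false positive table above summarizes.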
This isn’t just statistical gymnastics. The resulting jolt magnitude — normalized and dimensionless — allows for comparisons across different benchmarks and time periods.
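The paper’s exact normalization isn’t reproduced here, but it is easy to see how a jolt magnitude can be made dimensionless; one construction (ours, not necessarily Orban’s) cancels both the capability units and the time units:

```latex
% Illustrative dimensionless jolt magnitude: an assumption for exposition,
% not the paper's definition. Units: ([C]/t^3 * [C]^2) / ([C]/t)^3 = 1.
J(t) = \frac{C'''(t)\, C(t)^2}{C'(t)^3}
```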
🤖 Simulated Agents, Real Warning Signs
In a case study using AgentBench-style simulated environments, Orban modeled agent performance across complex multi-step tasks. By simulating breakthroughs (e.g., model scale increases or algorithmic shifts), he demonstrated how jolts can emerge not gradually but abruptly — with clear safety implications:
“Jolts in agent capabilities could lead to sudden economic and societal impact… if alignment mechanisms don’t keep pace.”
The implication: if your AI system’s capability goes from 60 to 100 overnight, are your controls still valid?
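A toy simulation makes the point (the dynamics, timing, and magnitudes below are invented for illustration, not taken from the AgentBench case study): smooth improvement plus one abrupt breakthrough yields a sharp transient spike in the third derivative.

```python
import numpy as np

# Smooth improvement plus one abrupt breakthrough at t = 3, modeled as a
# steep logistic step: the score jumps from roughly 60 to 100, echoing the
# scenario above.
t = np.linspace(0, 6, 600)
dt = t[1] - t[0]
baseline = 60 * np.exp(0.02 * t)                    # slow background progress
breakthrough = 40 / (1 + np.exp(-(t - 3) / 0.05))   # sharp step of +40
C = baseline + breakthrough

# The breakthrough shows up as a large, transient spike in the third
# derivative, concentrated around t = 3 rather than spread over the curve.
C3 = np.gradient(np.gradient(np.gradient(C, dt), dt), dt)
i = np.argmax(C3)
print(f"Peak positive jolt near t = {t[i]:.2f} (estimate {C3[i]:.0f})")
```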
📉 Exponential Timelines Are Underestimating Risk
AGI forecasts often assume exponential trends. But Orban shows that this assumption may systematically underestimate how fast things can change. With positive jolt dynamics, doubling times shrink rapidly; the quick computation after this list makes that concrete. This could mean:
- AGI isn’t 10 years away — it’s 3.
- Economic shocks hit before policy catches up.
- Phase transitions in capabilities occur without warning.
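A quick computation shows how doubling times behave under the two regimes (the functional forms and constants are illustrative assumptions, not fitted to any benchmark):

```python
import numpy as np

# Successive doubling times for an exponential vs. a jolting curve.
t = np.linspace(0, 10, 20001)
curves = {
    "exponential": np.exp(0.5 * t),
    "jolting": np.exp(0.5 * t + 0.02 * t**3),
}

for name, C in curves.items():
    # Record each time the curve doubles from its starting value.
    times, target = [], 2 * C[0]
    for ti, ci in zip(t, C):
        if ci >= target:
            times.append(ti)
            target *= 2
    gaps = np.diff([0.0] + times)
    print(f"{name:>12}: first doubling intervals {np.round(gaps[:5], 2)}")
```

The exponential’s intervals stay flat at ln 2 / k ≈ 1.39; the jolting curve’s shrink with every doubling.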
The paper notes that forecasting platforms like Metaculus have already shown a persistent bias toward underestimating the pace of AI progress. Jolts may be the missing variable.
🏛️ Governance in a Jolting World
If AI jolts are real, our governance systems are structurally unprepared. Reactive policy models can’t keep pace with superexponential change. Orban recommends:
- Regulatory sandboxes that evolve with the tech.
- Sunset clauses forcing reevaluation as capabilities leap.
- Foresight teams monitoring acceleration metrics — not just raw capabilities.
- Multinational coordination, because jolts know no borders.
This isn’t optional. Jolting AI may render today’s compliance regimes obsolete in months, not years.
📎 Implications for Business Automation
For Cognaptus readers, the jolt hypothesis is more than academic. It implies:
- Discontinuous gains in agent performance — prepare for sudden leaps.
- Automation ROI curves may steepen dramatically — making laggards vulnerable.
- Monitoring your own AI systems’ improvement rates (not just performance) becomes a competitive edge.
Future-proofing isn’t about waiting for AGI. It’s about watching for jolts in your narrow-domain models today.
🧭 The Way Forward
Orban’s work is early-stage — real-world third-derivative validation awaits richer longitudinal data. But as a theoretical and methodological contribution, it’s a wake-up call:
Don’t just track what your AI is doing — track how fast its acceleration is increasing.
Or as we might say at Cognaptus:
Automate the Present, Incubate the Future — Before It Jolts.