The most radical idea in Michael I. Jordan’s latest manifesto isn’t a new model, a benchmark, or even a novel training scheme. It’s a reorientation. He argues that we’ve misdiagnosed the nature of intelligence—and in doing so, we’ve built AI systems that are cognitively brilliant yet socially blind. The cure? Embrace a collectivist, economic lens.

This is not techno-utopianism. Jordan—a towering figure in machine learning—offers a pointed critique of both the AGI hype and the narrow symbolic legacy of classical AI. The goal shouldn’t be to build machines that imitate lone geniuses. It should be to construct intelligent collectives—systems that are social, uncertain, decentralized, and deeply intertwined with human incentives. In short: AI needs an economic imagination.


From Frankenstein Fantasies to Market Mechanisms

The term “AI” is overloaded with 1950s-era ambition: the dream of thinking machines. But as Jordan explains, our most transformative progress hasn’t come from mimicking cognition—it’s come from optimizing data flows, incentives, and systems. Recommendation engines, credit markets, even Wikipedia are forms of collective intelligence. They succeed not by being smart in the anthropomorphic sense, but by channeling distributed, often messy contributions into emergent, functional behavior.

Jordan highlights how current AI systems, especially LLMs, resemble cultural artifacts more than individuals. An LLM isn’t a single mind—it’s a crystallization of millions of human voices. It operates like a culture: storing narratives, expressing shared knowledge, and evolving through interaction.

This shift has a powerful implication: if intelligence is collective, then markets—not minds—are our best metaphors.


The Two Missing Principles: Uncertainty and Incentives

Modern ML excels at pattern recognition but falters under real-world uncertainty. LLMs often deliver fluent yet overconfident answers. In contrast, human intelligence shines when knowledge is partial and conflicting. Why? Because humans lean on social reasoning—they ask, trade, hedge, and adapt.

Jordan connects this to economic theory: markets are naturally uncertain, yet robust. They’re populated by agents with partial knowledge and conflicting incentives. And crucially, markets don’t eliminate uncertainty—they channel it through contracts, pricing, and iterative feedback.

| Design Principle | What Current AI Does | What a Collectivist System Would Do |
| --- | --- | --- |
| Uncertainty | Produces overconfident outputs | Designs for ambiguity, hedging, robustness |
| Individuals | Trains monolithic agents | Models networks of agents with strategic goals |
| Data | Treats data as a centralized, inert corpus | Treats data as decentralized, incentive-aligned streams |
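The uncertainty contrast can be made concrete with a toy sketch (the models, numbers, and `max_spread` threshold are all invented for illustration): an ensemble that pools agreeing predictions but abstains—hedges—when its members disagree, rather than bluffing with a confident point answer.

```python
import statistics

def predict_with_hedging(models, x, max_spread=0.5):
    """Aggregate an ensemble's predictions; abstain when they disagree."""
    preds = [m(x) for m in models]
    spread = max(preds) - min(preds)
    if spread > max_spread:
        # A collectivist design surfaces the disagreement instead of hiding it.
        return {"answer": None, "note": "abstain: models disagree", "spread": spread}
    return {"answer": statistics.mean(preds), "spread": spread}

# Toy ensemble: three noisy estimators of the same underlying function.
models = [lambda x, b=b: 2.0 * x + b for b in (-0.1, 0.0, 0.1)]
print(predict_with_hedging(models, 1.0))   # close agreement: pooled answer

models.append(lambda x: -5.0 * x)          # add one wildly different model
print(predict_with_hedging(models, 1.0))   # large spread: system abstains
```

The point is not the arithmetic but the interface: the system's output type admits "I don't know," which is exactly what an overconfident monolithic model lacks.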

Beyond Recommendation: Rebuilding Digital Markets

One of Jordan’s most concrete contributions is his redesign of digital markets. Take the music industry. Traditional streaming platforms decouple artist and listener, monetizing attention but not contribution. Jordan envisions a three-sided market: artists, listeners, and brands—where ML-based recommender systems match brands to artists, and brands pay creators directly based on real-time audience feedback.
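A minimal sketch of how such a three-sided settlement might work. Every name, affinity score, and engagement rate here is invented for illustration—this is not United Masters’ actual matching algorithm—but it shows the shape of the mechanism: a recommender scores brand–artist fit, and payment flows to creators in proportion to realized audience feedback.

```python
# Hypothetical three-sided market: brands, artists, listeners.
artists = {"artist_a": {"genre": "hiphop", "listeners": 120_000},
           "artist_b": {"genre": "indie",  "listeners": 45_000}}
brands  = {"brand_x": {"genre": "hiphop", "budget": 10_000.0},
           "brand_y": {"genre": "indie",  "budget": 4_000.0}}

def match_score(brand, artist):
    """Toy recommender affinity: genre fit weighted by audience size."""
    genre_fit = 1.0 if brand["genre"] == artist["genre"] else 0.2
    return genre_fit * artist["listeners"]

def settle_payments(brands, artists, engagement_rate=0.03):
    """Each brand pays its best-matched artist, scaled by listener feedback."""
    payouts = {}
    for brand in brands.values():
        best = max(artists, key=lambda a: match_score(brand, artists[a]))
        # Contract-driven flow of value: budget scaled by realized engagement.
        payouts[best] = payouts.get(best, 0.0) + brand["budget"] * engagement_rate
    return payouts

print(settle_payments(brands, artists))
```

The design choice worth noticing is that the artist is paid by the brand, not by an opaque attention pool—the contract ties contribution directly to compensation.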

This model isn’t theoretical. It underlies United Masters, which Jordan advises. Over 1.5 million musicians have signed up, and major brands now license music with clear, contract-driven flows of value. It’s algorithmic capitalism with aligned incentives.

He offers a second case: data markets. In a layered ecosystem of users, platforms, and data buyers, how can platforms protect user privacy while remaining economically viable? Jordan models this as a generalized Stackelberg game, where platforms compete on both service quality and privacy guarantees. The result? A design where privacy noise becomes a priced feature, and platforms optimize across user trust and data buyer value.
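A stylized sketch of the leader–follower structure, with linear response functions invented purely for illustration (they are not taken from Jordan’s model): the platform commits to a privacy noise level first; buyers then pay less for noisier data, while users share more under stronger privacy. Optimizing by backward induction, the platform lands on an interior noise level—privacy literally becomes a priced feature.

```python
def buyer_price(noise):
    """Followers' best response: data is worth less when it is noisier."""
    return max(0.0, 1.0 - 0.8 * noise)

def user_participation(noise):
    """Users share more data when the privacy guarantee is stronger."""
    return 0.2 + 0.8 * noise

def platform_revenue(noise):
    """Leader's objective: volume of data shared times its market price."""
    return user_participation(noise) * buyer_price(noise)

# The leader solves by backward induction over a grid of noise levels,
# anticipating both followers' responses at each candidate level.
grid = [i / 100 for i in range(101)]
best_noise = max(grid, key=platform_revenue)
print(f"optimal noise: {best_noise:.2f}, revenue: {platform_revenue(best_noise):.3f}")
```

Under these toy assumptions neither zero noise (users defect) nor maximal noise (buyers defect) is optimal; the equilibrium trades off user trust against data-buyer value, which is the qualitative point of the Stackelberg framing.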


Towards a New Engineering Discipline

Jordan calls for a tripartite blend in AI design:

  1. Computational Thinking — abstraction, modularity, and scale.
  2. Inferential Thinking — reasoning under uncertainty, causal inference.
  3. Economic Thinking — incentives, contracts, equilibrium.

He illustrates this with the case of AI regulation. Imagine a government agency deciding whether self-driving cars should be approved. The decision isn’t just statistical—it’s strategic. Firms submit only their best-performing models. To prevent gaming the test, the regulator must design contracts with performance-contingent payoffs, informed by both empirical evidence and game theory.
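A toy simulation of why the strategic view matters, with hypothetical accuracies, sample sizes, and thresholds: every candidate model below is equally good, yet the best of twenty noisy measurements looks systematically better than its true quality—so a regulator who trusts the firm's submitted number gets gamed, while one who re-tests on fresh data and ties approval to that measurement does not.

```python
import random

random.seed(42)

TRUE_ACCURACY = 0.90  # every candidate model has the same true quality

def measured_accuracy(n_trials=200):
    """A noisy empirical estimate of a model's true accuracy."""
    hits = sum(random.random() < TRUE_ACCURACY for _ in range(n_trials))
    return hits / n_trials

# The firm evaluates 20 candidates and submits only the best-looking one:
# a selection effect that inflates the reported score (a winner's curse).
firm_scores = [measured_accuracy() for _ in range(20)]
submitted_score = max(firm_scores)

# The regulator ignores the firm's claim and measures on held-out data.
regulator_score = measured_accuracy()

# Performance-contingent contract: approval rides on the fresh measurement.
approved = regulator_score >= 0.88
print(f"firm's claim: {submitted_score:.3f}, fresh test: {regulator_score:.3f}")
print("approved" if approved else "rejected")
```

The statistical fix (fresh data) and the economic fix (payoffs contingent on the regulator's own measurement) have to be designed together—which is exactly the computation–inference–economics blend Jordan is arguing for.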

This isn’t the standard curriculum in CS departments. Indeed, as Jordan notes, academia has created pairwise hybrids (ML blends computation and statistics; econometrics blends stats and economics; algorithmic game theory blends econ and CS) but has no field that unifies all three. The result? Fragmented thinking and brittle systems.


The Middle Kingdom: A New Curriculum for a Social AI

Jordan’s parting vision is educational. He urges the birth of a new engineering discipline—a “Middle Kingdom” between the mathematical rigor of engineering and the interpretive nuance of the humanities. This discipline would:

  • Train designers who understand data, incentives, and uncertainty as co-evolving, not separate layers.
  • Produce systems where bias correction, privacy, and trust are baked into the market mechanism, not tacked on.
  • Reframe AI not as autonomy, but as coordination.

In an age of foundation models and synthetic media, we risk scaling intelligence without anchoring it in society. Jordan’s collectivist vision reminds us that the ultimate unit of intelligence may not be the neuron—or even the model—but the marketplace of interaction.


Cognaptus: Automate the Present, Incubate the Future.