Opening — Why this matters now

In 2025, the world’s enthusiasm for AI regulation has outpaced its understanding of it. Governments publish frameworks faster than models are trained, yet few grasp how these frameworks will sustain relevance as AI systems evolve. The paper “A Taxonomy of AI Regulation Frameworks” argues that the problem is not a lack of oversight, but a lack of memory — our rules forget faster than our models learn.

Background — Context and prior art

The regulatory landscape today is fragmented. The EU AI Act defines risk categories; the U.S. relies on soft law and voluntary standards; Asia experiments with adaptive licensing. But all share a fatal weakness: they are static, paper-born systems attempting to manage code that learns, self-updates, and proliferates across jurisdictions.

Previous work has focused on what to regulate — data, models, outcomes — and who should do it — states, firms, or international bodies. This paper instead asks how to regulate learning systems that themselves change the context of regulation. In short, how can governance frameworks evolve as AI does?

Analysis — What the paper does

The authors propose a taxonomy of AI regulatory models based on three axes:

| Dimension | Description | Examples |
|---|---|---|
| Prescriptiveness | How rigidly the framework defines acceptable AI behavior | EU AI Act (high), NIST AI RMF (low) |
| Adaptivity | How quickly it adjusts to new risks or technologies | Singapore’s AI Verify (moderate), OECD Principles (low) |
| Autonomy of Enforcement | The degree of algorithmic participation in oversight | China’s algorithmic auditing systems (medium-high) |
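To make the classification concrete, the three axes can be encoded as a small data structure. The sketch below is a minimal illustration, assuming a simple qualitative rating per axis; the class name, field names, and `PROFILES` list are illustrative rather than the paper's, and only the ratings stated in the table above are filled in.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class FrameworkProfile:
    """Position of a regulatory framework along the paper's three axes.

    Ratings the table does not state for a given framework stay None
    rather than being guessed.
    """
    name: str
    prescriptiveness: Optional[str] = None      # how rigidly acceptable behavior is defined
    adaptivity: Optional[str] = None            # how quickly the framework adjusts to new risks
    enforcement_autonomy: Optional[str] = None  # degree of algorithmic participation in oversight


# Only the ratings explicitly given in the table above are filled in.
PROFILES = [
    FrameworkProfile("EU AI Act", prescriptiveness="high"),
    FrameworkProfile("NIST AI RMF", prescriptiveness="low"),
    FrameworkProfile("Singapore AI Verify", adaptivity="moderate"),
    FrameworkProfile("OECD AI Principles", adaptivity="low"),
    FrameworkProfile("China's algorithmic auditing systems", enforcement_autonomy="medium-high"),
]
```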

This classification clarifies where current frameworks stand — and, more importantly, how they can evolve. The authors argue for a “Regulatory Memory System” (RMS): a continuously updated governance layer that tracks, learns, and revises rules based on the system’s own performance data.

Findings — From compliance to feedback

In the RMS model, regulation becomes recursive. Rather than issuing one-off mandates, the regulator uses data pipelines, automated audits, and model-level observability to measure real-world impact. Compliance becomes a feedback function:

Regulation(t+1) = Regulation(t) + Δ(System Performance)
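Read as an update rule, that feedback function might look like the following sketch. The `update_regulation` function, the `drift` dictionary, and the `learning_rate` damping factor are illustrative assumptions, not anything specified by the authors.

```python
def update_regulation(rules: dict[str, float], drift: dict[str, float],
                      learning_rate: float = 0.1) -> dict[str, float]:
    """Regulation(t+1) = Regulation(t) + Δ(System Performance).

    `rules` maps a policy parameter (e.g. an audit threshold) to its current
    value; `drift` maps the same parameters to the measured gap between
    intended and observed outcomes. The learning rate damps how far a single
    monitoring cycle can move policy.
    """
    return {name: value + learning_rate * drift.get(name, 0.0)
            for name, value in rules.items()}


# One monitoring cycle: observed risk outrunning the audit cadence nudges the
# audit-frequency parameter upward. All values are purely illustrative.
rules_t = {"audit_frequency_per_year": 2.0, "incident_report_threshold": 0.05}
observed_drift = {"audit_frequency_per_year": 4.0}
rules_t1 = update_regulation(rules_t, observed_drift)
print(rules_t1)  # {'audit_frequency_per_year': 2.4, 'incident_report_threshold': 0.05}
```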

The authors propose four operational layers (a minimal code sketch follows the list):

  1. Policy Encoding Layer – Converts legal norms into structured, machine-readable rules.
  2. Monitoring Layer – Tracks AI behavior across deployments.
  3. Learning Layer – Identifies drift between intended and observed outcomes.
  4. Revision Layer – Suggests policy modifications automatically.
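Wired together, the four layers might compose roughly as follows. The `RegulatoryMemorySystem` class, its method names, and the threshold-and-drift representation of a rule are assumptions for illustration, not the paper's specification.

```python
from dataclasses import dataclass, field


@dataclass
class PolicyRule:
    """Policy Encoding Layer: one legal norm rendered machine-readable."""
    rule_id: str
    description: str
    threshold: float                 # numeric bound the deployed system should respect
    observed: list[float] = field(default_factory=list)


class RegulatoryMemorySystem:
    """Minimal sketch of the four layers; names and structure are assumed."""

    def __init__(self, rules: list[PolicyRule]):
        self.rules = {r.rule_id: r for r in rules}

    def monitor(self, rule_id: str, measurement: float) -> None:
        """Monitoring Layer: record behavior observed across deployments."""
        self.rules[rule_id].observed.append(measurement)

    def drift(self, rule_id: str) -> float:
        """Learning Layer: gap between the intended threshold and observed outcomes."""
        rule = self.rules[rule_id]
        if not rule.observed:
            return 0.0
        return (sum(rule.observed) / len(rule.observed)) - rule.threshold

    def propose_revisions(self, tolerance: float = 0.01) -> list[str]:
        """Revision Layer: suggest (not enact) policy modifications."""
        return [
            f"Revisit {r.rule_id}: observed mean drifts {self.drift(r.rule_id):+.3f} from threshold"
            for r in self.rules.values()
            if abs(self.drift(r.rule_id)) > tolerance
        ]


# Usage: encode one norm, stream observations, ask for suggested revisions.
rms = RegulatoryMemorySystem([PolicyRule("bias-gap", "max demographic error gap", 0.05)])
for gap in (0.04, 0.07, 0.09):
    rms.monitor("bias-gap", gap)
print(rms.propose_revisions())
```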

In essence, regulation becomes an AI itself — an organism that adapts, not just observes.

Implications — For business and governance

For enterprises, this means compliance can no longer be retroactive. It becomes continuous, data-driven, and algorithmically enforced. Corporate AI governance must integrate telemetry and auditability as first-class citizens — not bureaucratic afterthoughts.
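In practice, treating telemetry as a first-class citizen can start with something as simple as an append-only audit record emitted for every model decision. The schema and field names below are hypothetical, sketched only to show the shape of such a record.

```python
import json
import time
import uuid


def audit_record(model_id: str, input_hash: str, decision: str,
                 policy_version: str) -> dict:
    """Build one append-only audit entry for a model decision.

    The fields are a hypothetical minimum: enough for a regulator's
    Monitoring Layer to reconstruct what was decided, by which model,
    under which version of the encoded policy.
    """
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "input_hash": input_hash,   # hash only: no raw user data in the log
        "decision": decision,
        "policy_version": policy_version,
    }


def log_decision(record: dict, path: str = "audit.log") -> None:
    """Append the record as one JSON line. A real deployment would stream this
    to shared, tamper-evident infrastructure rather than a local file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```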

For governments, the RMS framework suggests a new kind of regulator: part legal institution, part data platform. It would require shared infrastructure between states and industries — something closer to a distributed ledger of regulatory updates than a PDF in Brussels.

| Stakeholder | Old Paradigm | RMS Paradigm |
|---|---|---|
| Regulator | Issues rules and checks compliance periodically | Continuously updates rules via system feedback |
| Enterprise | Reacts to audits | Co-evolves governance metrics with regulator |
| Public | Receives opaque reports | Gains transparent metrics of societal impact |

Conclusion — The memory of rules

AI governance will fail if it treats oversight as a static act. The future of assurance lies in regulatory memory — the ability to learn from deployment data as models learn from training data. We must design governance systems that adapt not yearly, but iteratively, at machine speed.

If AI is to remain accountable, regulation must learn too.

Cognaptus: Automate the Present, Incubate the Future.