Opening — Why this matters now

The AI world is rediscovering an old truth: when agents learn to play many games, they learn to reason. General Game Playing (GGP) has long promised this—training systems that can pick up unfamiliar environments, interpret rules, and adapt. Elegant in theory, painfully slow in practice.

The new Regular Games (RG) formalism aims to change that. It proposes a simple idea wrapped in an almost provocatively pragmatic design: make games run fast again. And for anyone building AI agents or simulations—from RL researchers to automation developers—the implications ripple far beyond board games.

Background — Context and prior art

For nearly two decades, GGP systems have been stretched between two extremes, universal expressiveness and raw speed:

| System | Strength | Weakness |
| --- | --- | --- |
| GDL / GDL-II | Universal, logic-clean | Slow; logic-resolution bottlenecks |
| Ludii | Rich, expressive, concept-heavy | Huge keyword set, Java-tied, complex |
| Regular Boardgames (RBG) | Extremely fast, compiled | Not fully general, verbose, no imperfect information |

The field needed something universal and efficient—without becoming a language zoo.

That’s where Regular Games steps in with a hybrid philosophy: minimal core, maximal extensibility.

Analysis — What the paper actually does

Regular Games introduces:

1. A low-level language built on NFAs

Instead of representing rules as logic clauses or bloated expressions, RG describes the entire game as a finite automaton. Every move is just a labeled walk over the graph.

Surprisingly, this is enough to encode any finite turn-based game with imperfect information and randomness. The result is both:

  • more succinct than regex-based formalisms (like RBG), and
  • more machine-friendly than logic-based ones (like GDL).
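The "game as automaton" idea is easy to see in miniature. Here is a hedged sketch of a toy NFA-style encoding: all names and the automaton itself are invented for illustration, not the actual RG format, but the core mechanic matches the description above, where states are graph nodes and a legal move is a labeled walk along enabled edges.

```python
# Toy NFA-style game encoding (illustrative; not the real RG format).
# States are nodes of a labeled graph; a move follows an outgoing edge.

from collections import namedtuple

Edge = namedtuple("Edge", ["label", "target"])

# A tiny two-node automaton: from "turn" you may "place" a piece (staying
# in "turn") or "pass" (moving to the terminal "end" node).
AUTOMATON = {
    "turn": [Edge("place", "turn"), Edge("pass", "end")],
    "end": [],
}

def legal_moves(node):
    """Legal moves are simply the labels of outgoing edges."""
    return [e.label for e in AUTOMATON[node]]

def apply_move(node, label):
    """Follow the first edge matching the label (in an NFA there may be several)."""
    for e in AUTOMATON[node]:
        if e.label == label:
            return e.target
    raise ValueError(f"illegal move {label!r} at {node!r}")

print(legal_moves("turn"))         # ['place', 'pass']
print(apply_move("turn", "pass"))  # 'end'
```

Everything a generic engine needs, move generation and state transition, falls out of graph traversal, which is why such encodings compile so well.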

2. High-level Regular Games (HRG)

Because no human wants to write an NFA by hand, HRG offers:

  • declarative structures
  • numeric ranges
  • pattern matching
  • block-style graphs

All transpiled down into the minimal RG automaton.
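To make the lowering step concrete, here is a hedged sketch of how a declarative numeric range might expand into plain automaton edges. The surface syntax (`roll(1..6)`) and the function are invented for illustration; only the idea, that high-level constructs compile away into explicit labeled edges the minimal RG core can walk, reflects the design described above.

```python
# Illustrative "transpile" step: a declarative range becomes explicit edges.

def lower_range(source, label_prefix, lo, hi, target):
    """Expand a range construct like `roll(1..6)` into one edge per value."""
    return [(source, f"{label_prefix}{i}", target) for i in range(lo, hi + 1)]

# A hypothetical HRG clause `roll(1..6)` lowers to six concrete edges.
edges = lower_range("await_roll", "roll_", 1, 6, "after_roll")
print(edges[0])    # ('await_roll', 'roll_1', 'after_roll')
print(len(edges))  # 6
```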

3. A multi-language ecosystem

RG isn’t just a language; it’s an assembly layer for game systems. It supports:

  • RBG → RG automatic translation
  • GDL → RG (experimental)
  • domain-specific frameworks (e.g., Alquerque-like generators)
  • visualization, IDE, debugger, optimizer, benchmarking tools

4. Aggressive optimization passes

The pipeline runs transformations such as:

  • constant propagation
  • inlining
  • reachability analysis
  • skipping redundant tags
  • automaton node/edge pruning

Some games see 70% reductions in automaton size. For many workloads, that’s the difference between testing a handful of RL agents and running thousands of episodes per hour.

Findings — Results visualized

The headline result is straightforward:

RG routinely outperforms Ludii and RBG, often by 10× or more.

Below is a simplified comparison of Monte Carlo playouts per second:

| Game | RG (HRG) | RBG | Ludii |
| --- | ---: | ---: | ---: |
| Alquerque (lud) | 24,871 | 15,962 | 1,981 |
| Amazons | 6,226 | 3,693 | |
| Connect Four | 1,297,176 | 914,514 | 55,858 |
| Yavalath | 415,251 | 352,910 | 93,642 |

And here’s the ecosystem advantage:

| Feature | RG | Ludii | RBG |
| --- | --- | --- | --- |
| Imperfect info | ✔︎ | ✔︎ | |
| Randomness | ✔︎ | ✔︎ | Limited |
| C++ compilation | ✔︎ | | ✔︎ |
| High-level DSL | ✔︎ (HRG) | ✔︎ (Ludemes) | |
| Multi-language ingestion | ✔︎ (GDL, RBG) | | |
| IDE with LSP | ✔︎ | ✔︎ | |

The tone of the results is unmistakable: RG behaves like a practical unification layer, not a competing silo.

Implications — Why this matters for automation and AI

RG’s design has consequences far outside game-playing competitions:

1. Fast simulations = better agents

RL pipelines thrive on throughput. If a formalism runs 10×–20× faster, model training benefits immediately. In business automation, this translates into:

  • faster A/B evaluations of agent policies
  • richer simulation environments
  • more realistic multi-agent testing

2. A general-purpose engine for procedural environments

Because RG can consume GDL, RBG, HRG, or domain-specific formats, it behaves like LLVM for games. You build the frontend; RG handles correctness and speed.

3. Imperfect information is a first-class citizen

This makes RG useful for:

  • card-game simulators
  • negotiation agents
  • security protocol testing
  • supply-chain or multi-party workflows in enterprise AI

4. Deterministic keeper logic simplifies system events

The system-managed “keeper” player is an elegant abstraction for background operations—essentially a native “system agent”.
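A hedged sketch of the idea, with entirely invented names and rules: a deterministic system-controlled step interleaves with real players' moves, so bookkeeping (here, refilling a draw stack) never needs special-case handling inside the game rules themselves.

```python
# Illustrative "keeper" pattern: a deterministic system agent's turn
# interleaved with a real player's turn. Rules and names are invented.

def keeper_step(state):
    """System move: refill the draw stack from the discard pile when empty."""
    if not state["stack"]:
        state["stack"] = list(state["discard"])
        state["discard"] = []
    return state

def player_step(state):
    """A real player's move: draw one card from the stack."""
    state["hand"].append(state["stack"].pop())
    return state

state = {"stack": ["c1"], "discard": ["c2", "c3"], "hand": []}
state = player_step(state)  # stack empties
state = keeper_step(state)  # keeper deterministically refills it
print(state["stack"])       # ['c2', 'c3']
```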

It looks suspiciously like the missing piece in agentic workflow simulations—something Cognaptus’ automation stack could readily exploit.

Conclusion — The verdict

Regular Games delivers what GGP has lacked for years: simplicity, universality, and speed. By grounding rules in automata rather than logic or massive DSLs, RG offers a refreshing reminder that minimalism can scale.

For AI researchers, automation architects, and simulation designers, RG is more than another niche language. It’s a fast, composable substrate—a foundation for any system that needs agents to explore, reason, or compete.

Cognaptus: Automate the Present, Incubate the Future.