Opening — Why this matters now
Agentic AI has officially entered its awkward adolescence. It can plan, call tools, collaborate, and occasionally impress investors—but it also hallucinates, forgets, loops endlessly, and collapses under modest real‑world complexity. The problem is no longer model capability. It’s architecture.
Today’s agent systems are mostly stitched together through intuition, blog wisdom, and prompt folklore. Powerful, yes—but brittle. What’s missing is not another clever prompt trick, but an engineering discipline.
That is precisely the intervention made by Agentic Design Patterns: A System-Theoretic Framework.
Background — From prompt hacks to principled systems
Most existing agent frameworks follow a bottom‑up trajectory. Designers observe what seems to work—Reflection here, Tool Use there—and gradually accumulate an informal taxonomy. Andrew Ng’s four strategies. RAG everywhere. Multi‑agent by default.
Useful—but incomplete.
These approaches suffer from three structural flaws:
- They lack a unifying theory explaining why components interact as they do.
- They conflate architectural choices with design patterns.
- They provide little guidance for diagnosing failure modes.
In other words, they document what exists, not what should exist.
Analysis — A system-theoretic agent, properly decomposed
The paper proposes a clean break from monolithic LLM‑centric design. Instead, it treats an agent as a system of interacting subsystems, derived from first principles in system theory.
The five subsystems
| Layer | Subsystem | Role |
|---|---|---|
| Cognitive Core | Reasoning & World Model (RWM) | Maintains beliefs, plans, and decisions |
| Interfaces | Perception & Grounding (PG) | Validates and structures inputs |
| Interfaces | Action Execution (AE) | Executes actions and tools |
| Interfaces (Optional) | Inter‑Agent Communication (IAC) | Structured social interaction |
| Adaptive Shell | Learning & Adaptation (LA) | Observes, updates, and governs behavior |
This layered architecture immediately clarifies something most agent stacks obscure: failure is usually a coordination problem between subsystems, not a model weakness.
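To make the decomposition concrete, here is a minimal sketch of the five-subsystem loop in Python. The class and method names (PerceptionGrounding, ReasoningWorldModel, and so on) are illustrative stand-ins, not an API from the paper, and IAC is omitted since it is optional:

```python
from typing import Any, Protocol


class PerceptionGrounding(Protocol):
    """PG: validate and structure raw inputs before they reach the core."""
    def ground(self, raw_input: Any) -> dict: ...


class ReasoningWorldModel(Protocol):
    """RWM: maintain beliefs and turn observations into decisions."""
    def decide(self, observation: dict) -> dict: ...


class ActionExecution(Protocol):
    """AE: carry out decisions via tools and report outcomes."""
    def execute(self, decision: dict) -> dict: ...


class LearningAdaptation(Protocol):
    """LA: observe outcomes and adjust future behavior."""
    def update(self, decision: dict, outcome: dict) -> None: ...


class Agent:
    """Composes the subsystems into one decision loop (IAC omitted: it is optional)."""

    def __init__(self, pg: PerceptionGrounding, rwm: ReasoningWorldModel,
                 ae: ActionExecution, la: LearningAdaptation) -> None:
        self.pg, self.rwm, self.ae, self.la = pg, rwm, ae, la

    def step(self, raw_input: Any) -> dict:
        observation = self.pg.ground(raw_input)   # validate before believing
        decision = self.rwm.decide(observation)   # reason over the world model
        outcome = self.ae.execute(decision)       # act through tools
        self.la.update(decision, outcome)         # close the adaptation loop
        return outcome
```

The point of the sketch is the seams: each handoff between subsystems is an explicit boundary where validation, logging, or governance can attach, rather than everything living inside one prompt.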
Findings — From architecture to design patterns
From this structure, the authors derive 12 Agentic Design Patterns (ADPs)—not vague strategies, but interaction‑level solutions in the spirit of the classic Gang of Four (GoF) patterns.
Pattern categories
| Category | Purpose | Example Patterns |
|---|---|---|
| Foundational | Stabilize state & knowledge | Integrator, Retriever, Recorder |
| Cognitive & Decisional | Shape planning & choice | Selector, Planner, Deliberator |
| Execution & Interaction | Ensure reliable action | Executor, Tool Use, Coordinator |
| Adaptive & Learning | Enable improvement | Reflector, Skill Build, Controller |
Crucially, each pattern is explicitly mapped to:
- A failure class (e.g., hallucination, coordination breakdown)
- A subsystem boundary
- A known software pattern analogue (Mediator, Observer, Proxy)
This makes them implementable, not inspirational.
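As a flavor of what "implementable" means, here is a hedged sketch of an Integrator-style validation boundary, in the spirit of the Proxy analogue above. The schema, field names, and error type are assumptions chosen only to show where a hallucinated observation would be rejected, not the paper's interface:

```python
import json


class IntegratorError(ValueError):
    """Raised when an observation fails validation at the PG boundary."""


class Integrator:
    """Illustrative Integrator: validate and normalize a tool observation
    before it is allowed to update the agent's world model."""

    REQUIRED_KEYS = {"source", "content"}

    def integrate(self, raw: str) -> dict:
        try:
            obs = json.loads(raw)
        except json.JSONDecodeError as exc:
            raise IntegratorError(f"observation is not valid JSON: {exc}") from exc
        if not isinstance(obs, dict):
            raise IntegratorError("observation must be a JSON object")
        missing = self.REQUIRED_KEYS - obs.keys()
        if missing:
            raise IntegratorError(f"observation missing fields: {missing}")
        # Only validated, normalized fields cross the boundary.
        return {"source": str(obs["source"]), "content": str(obs["content"])}


# Usage: malformed tool output raises instead of silently polluting the world model.
clean = Integrator().integrate('{"source": "search_tool", "content": "result text"}')
```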
Case Study — Why ReAct breaks (and how to fix it)
ReAct is widely admired for combining reasoning and acting. It is also structurally fragile.
Diagnosis (made concrete in the sketch below):
- World model exists only implicitly in the prompt
- No validation of observations
- No persistent state
- No learning loop
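A bare-bones ReAct-style loop makes these gaps visible. The sketch below is a paraphrase for illustration (the `llm` and `tools` callables are placeholders), not the original ReAct implementation:

```python
def naive_react(llm, tools, task: str, max_steps: int = 5) -> str:
    """Bare-bones ReAct-style loop, reduced to its structural essentials."""
    prompt = f"Task: {task}\n"                     # world model lives only in this string
    for _ in range(max_steps):
        decision = llm(prompt + "Next action (tool:arg) or FINISH:")
        if decision.strip() == "FINISH":
            break
        tool_name, _, arg = decision.partition(":")
        observation = tools[tool_name.strip()](arg.strip())    # no validation, no error recovery
        prompt += f"{decision}\nObservation: {observation}\n"  # no state beyond the prompt window
    return prompt                                  # nothing is learned for the next task
```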
Prescription:
| ReAct Weakness | Applied Pattern | Result |
|---|---|---|
| Hallucinated observations | Integrator | Input validation |
| Context loss | Retriever + Recorder | Stable memory |
| Tool failure loops | Executor | Error recovery |
| No learning | Reflector | Causal adaptation |
The takeaway is blunt: ReAct doesn’t need a better prompt—it needs an architecture.
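What that architecture could look like in miniature: the sketch below reworks a single step of the loop with the four prescribed patterns applied. Names, the context window of five steps, and the retry limit are assumptions for illustration, not the paper's reference design:

```python
from typing import Callable


def hardened_react_step(
    llm: Callable[[str], str],                    # placeholder LLM call: prompt -> decision text
    tools: dict[str, Callable[[str], str]],
    memory: list[dict],                           # Recorder: trace that persists across steps
    task: str,
    max_retries: int = 2,
) -> dict:
    """One ReAct-style step with the four prescribed patterns applied."""
    # Retriever: rebuild a bounded context from recorded state, not an ever-growing prompt.
    context = "\n".join(f"{m['action']} -> {m['observation']}" for m in memory[-5:])
    decision = llm(f"Task: {task}\nRecent steps:\n{context}\nNext action (tool:arg):")

    tool_name, _, arg = decision.partition(":")
    if tool_name.strip() not in tools:
        # Integrator: reject a hallucinated tool call instead of executing it.
        observation = f"rejected: unknown tool '{tool_name.strip()}'"
    else:
        # Executor: bounded retries on failure rather than looping blindly.
        observation = "failed after retries"
        for _ in range(max_retries + 1):
            try:
                observation = tools[tool_name.strip()](arg.strip())
                break
            except Exception as exc:              # sketch-level error handling
                observation = f"error: {exc}"

    record = {"action": decision, "observation": observation}
    memory.append(record)                         # Recorder: state survives the prompt window
    # Reflector (not shown): periodically distill memory into reusable lessons.
    return record
```

Each fix lands at a subsystem boundary rather than in the prompt, which is exactly the paper's point.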
Implications — What this means for builders and regulators
For practitioners:
- Stop scaling agent complexity without subsystem boundaries
- Treat learning, ethics, and execution as first‑class components
- Design agents to fail locally, not catastrophically
For governance:
- Patterns like Controller offer a concrete path to continuous alignment (see the sketch after this list)
- System transparency improves auditability
- Accountability shifts from prompts to architecture
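As one illustration of that path, a Controller can sit between decision and execution as a policy gate that also produces the audit trail regulators ask for. The sketch below is a minimal assumed interface, not the paper's specification:

```python
from datetime import datetime, timezone


class Controller:
    """Illustrative Controller: a policy gate between decision and execution
    that also records an audit trail for every allowed or blocked action."""

    def __init__(self, blocked_tools: set[str]) -> None:
        self.blocked_tools = blocked_tools
        self.audit_log: list[dict] = []

    def authorize(self, tool_name: str, argument: str) -> bool:
        allowed = tool_name not in self.blocked_tools
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "argument": argument,
            "allowed": allowed,
        })
        return allowed


# Usage: the AE subsystem asks the Controller before every tool call.
controller = Controller(blocked_tools={"send_payment"})
assert controller.authorize("web_search", "agentic design patterns") is True
assert controller.authorize("send_payment", "$10,000") is False
```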
This is how agentic AI becomes deployable, not just demo‑ready.
Conclusion — Engineering, finally
Agentic AI is not failing because models are weak. It is failing because systems are improvised.
This paper does not promise magic. It promises something better: structure. A shared language. A way to reason about agents before they break.
That alone makes it required reading for anyone serious about autonomous systems.
Cognaptus: Automate the Present, Incubate the Future.