Opening — Why this matters now

The past two years of AI development have produced an unusual paradox.

Large language models are extraordinarily capable — yet most AI agents deployed in real organizations still feel shallow. They can search, summarize, and automate workflows, but they rarely capture the real expertise of the professionals they are meant to assist.

The bottleneck is no longer model capability. It is knowledge encoding.

Traditional agent development assumes expertise can be packaged upfront: either through software pipelines or increasingly elaborate prompts. In practice, however, professional knowledge is messy, tacit, and constantly evolving.

A recent research proposal introduces a different philosophy entirely: instead of building agents, we should raise them.

The paper calls this Nurture‑First Development (NFD): a methodology in which AI agents grow their expertise through continuous interaction with practitioners, periodically transforming experience into structured knowledge.

If widely adopted, this paradigm could fundamentally reshape how organizations deploy domain‑expert AI systems.


Background — From Code to Prompts to Something Else

Most modern agent architectures fall into two familiar development paradigms.

| Paradigm | How expertise is encoded | Strength | Weakness |
| --- | --- | --- | --- |
| Code‑First | Logic embedded in deterministic pipelines and APIs | Reliable and reproducible | Cannot capture nuanced human judgment |
| Prompt‑First | Expertise embedded in system prompts and examples | Fast and flexible | Becomes fragile and unscalable as prompts grow |
| Nurture‑First | Expertise grows through experience and crystallization | Captures tacit knowledge | Requires sustained interaction |

Both traditional approaches share the same structural assumption:

Agent development happens before deployment.

But this assumption clashes with how human expertise actually works.

Professionals rarely possess a fully articulated decision framework. Instead, their knowledge lives in a mixture of:

  • pattern recognition
  • contextual judgment
  • case memory
  • tacit heuristics

Much of it only emerges when they explain decisions in context.

This observation leads to a provocative reframing:

The most effective way to build an expert AI agent may be to let it learn the way apprentices learn — through interaction.


Analysis — The Nurture‑First Development Model

Nurture‑First Development treats agents as cognitive systems that mature over time, rather than static software artifacts.

Three core ideas define the paradigm.

1. Development and deployment are fused

Instead of a traditional build‑then‑deploy lifecycle, NFD agents become operational immediately with minimal scaffolding.

Their expertise grows during real use.

This creates a continuous spiral of learning:

  1. Conversations generate knowledge fragments
  2. Experiences accumulate in memory
  3. Patterns are extracted and crystallized
  4. New knowledge improves future performance

Each iteration raises the agent’s baseline capability.
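As a rough sketch, the spiral can be expressed as a loop. `Memory`, `extract_fragments`, and `crystallize` are hypothetical names, not components from the paper:

```python
# Minimal sketch of the NFD learning spiral. All names are
# illustrative; the paper does not prescribe an implementation.

class Memory:
    """Accumulates raw knowledge fragments between crystallization passes."""
    def __init__(self):
        self.fragments = []

    def add(self, fragments):
        self.fragments.extend(fragments)


def extract_fragments(conversation):
    # Step 1: conversations generate knowledge fragments
    # (here, naively, any turn spoken by the expert).
    return [turn for turn in conversation if turn.startswith("expert:")]


def crystallize(memory):
    # Step 3: patterns that recur across experiences are extracted.
    counts = {}
    for frag in memory.fragments:
        counts[frag] = counts.get(frag, 0) + 1
    return [frag for frag, n in counts.items() if n >= 2]


memory = Memory()
skills = []
for conversation in [
    ["expert: weight valuation by macro regime", "agent: noted"],
    ["expert: weight valuation by macro regime", "agent: applied"],
]:
    memory.add(extract_fragments(conversation))  # steps 1-2
    skills = crystallize(memory)                 # step 3
# Step 4: `skills` now informs how future tasks are handled.
```

The point of the sketch is the loop itself: each pass through it leaves the agent with more crystallized material than the last.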

2. Knowledge grows through conversational interaction

The primary mechanism for transferring expertise is not documentation or coding.

It is dialogue.

When a practitioner explains reasoning during everyday tasks, the agent records:

  • decisions
  • reasoning traces
  • observed patterns
  • mistakes and corrections
  • contextual insights

These fragments collectively form the raw material for knowledge development.
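These fragment types could be captured in a simple record like the following; the schema is a hypothetical illustration, not one defined in the paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnowledgeFragment:
    """One unit of raw experience captured from dialogue."""
    kind: str                 # "decision", "reasoning", "pattern",
                              # "correction", or "insight"
    content: str              # what the practitioner said or did
    context: str              # the task in which it surfaced
    corrects: Optional[str] = None  # id of the mistake this fixes, if any

frag = KnowledgeFragment(
    kind="pattern",
    content="Discount guidance that avoids quantitative targets.",
    context="earnings call review",
)
```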

3. Development occurs through “knowledge crystallization”

Periodically, accumulated experience is converted into structured knowledge assets.

This process is called the Knowledge Crystallization Cycle.

| Phase | What happens | Outcome |
| --- | --- | --- |
| Conversational Immersion | Expert interacts with agent in real tasks | Tacit knowledge surfaces |
| Experiential Accumulation | Conversations logged and structured | Experience corpus grows |
| Deliberate Crystallization | Patterns extracted and formalized | Reusable frameworks created |
| Grounded Application | New knowledge applied in future tasks | Higher‑quality reasoning |

The key insight is subtle but powerful:

Development becomes the act of distilling experience into structure.

Not writing code.
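To make that concrete, a crystallization pass might distill a batch of logged fragments into a reusable skill document rather than new program logic. The sketch below uses plain string assembly where a real system would involve the practitioner or an LLM; all names are hypothetical:

```python
def crystallize_to_skill(name, fragments):
    """Distill raw experience entries into a structured, reusable framework.

    A sketch: real crystallization would involve review by the
    practitioner (or an LLM), not plain string assembly.
    """
    rules = sorted(set(fragments))  # deduplicate recurring observations
    lines = [f"Skill: {name}", "Rules distilled from experience:"]
    lines += [f"  {i}. {rule}" for i, rule in enumerate(rules, start=1)]
    return "\n".join(lines)


skill = crystallize_to_skill(
    "Valuation adjustments",
    [
        "Weight FCF yield higher in tightening cycles",
        "Discount vague management guidance",
        "Weight FCF yield higher in tightening cycles",  # recurring observation
    ],
)
```

The output is an artifact the practitioner can read and correct, which is what distinguishes crystallized knowledge from opaque model weights or prompt sprawl.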


Findings — The Three‑Layer Cognitive Architecture

To support continuous learning, the framework organizes agent knowledge into three layers based on volatility and personalization.

| Layer | Volatility | Content | Function |
| --- | --- | --- | --- |
| Constitutional | Low | Identity, principles, behavioral rules | Governs agent behavior |
| Skill | Medium | Analytical frameworks, domain methods | Reusable capabilities |
| Experiential | High | Logs, cases, observations | Raw learning material |

This architecture mirrors cognitive systems in humans.

  • The experiential layer behaves like episodic memory
  • The skill layer resembles learned procedures
  • The constitutional layer acts as stable identity and values

Information flows continuously between layers.

Experiences accumulate at the bottom and gradually crystallize upward into structured skills.

Meanwhile, constitutional principles guide how experiences are interpreted.

This layered design prevents two common problems in agent systems:

  1. Prompt bloat
  2. Context overload

Only stable knowledge remains permanently loaded.

Everything else is retrieved dynamically.
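A minimal sketch of that loading policy, assuming a naive keyword-match retriever in place of whatever retrieval mechanism a real system would use (the class and method names are hypothetical):

```python
class LayeredKnowledge:
    """Three-layer store: the constitutional layer is always loaded;
    skills and experiences are retrieved only when relevant."""

    def __init__(self, constitutional, skills, experiences):
        self.constitutional = constitutional  # low volatility
        self.skills = skills                  # medium volatility
        self.experiences = experiences        # high volatility

    def build_context(self, task, limit=2):
        words = task.lower().split()
        context = list(self.constitutional)   # permanently in context
        # Dynamic retrieval: naive keyword match stands in for real search.
        context += [s for s in self.skills
                    if any(w in s.lower() for w in words)][:limit]
        context += [e for e in self.experiences
                    if any(w in e.lower() for w in words)][:limit]
        return context


kb = LayeredKnowledge(
    constitutional=["Flag uncertainty; never overstate conviction."],
    skills=["Valuation framework: weight FCF yield by macro regime."],
    experiences=["2024 valuation error on a semiconductor case."],
)
ctx = kb.build_context("valuation review")
```

Because only the constitutional layer is unconditionally in context, the prompt stays small no matter how large the skill and experience layers grow.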


Case Study — Growing a Financial Research Agent

The paper demonstrates the framework through the development of an equity research assistant for a practicing financial analyst.

The analyst began with:

  • ~400 historical research notes
  • an intuitive but undocumented investment framework

Within weeks, conversational interaction revealed tacit knowledge that had never been written down.

Examples included:

  • dynamic weighting of valuation factors depending on macro conditions
  • interpretation of management guidance language during earnings calls
  • sector‑specific adjustments to free cash‑flow analysis

One correction during analysis of a semiconductor company produced three knowledge updates:

  1. an error log explaining the misinterpretation
  2. an update to the valuation framework
  3. a new bias pattern recorded in memory
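As an illustration, that three-way update could be modeled like this (the structures are hypothetical, not taken from the paper):

```python
def handle_correction(mistake, explanation, framework, bias_memory):
    """Turn one expert correction into three knowledge updates (a sketch)."""
    # 1. Error log explaining the misinterpretation.
    error_log = {"mistake": mistake, "explanation": explanation}
    # 2. Update to the relevant analytical framework.
    framework.append(explanation)
    # 3. Bias pattern recorded for future self-checks.
    bias_memory.append(f"bias: tendency to {mistake}")
    return error_log


framework, bias_memory = [], []
log = handle_correction(
    mistake="overweight trailing margins for cyclical names",
    explanation="use forward capacity guidance for cyclical semis",
    framework=framework,
    bias_memory=bias_memory,
)
```

One conversational moment thus enriches all three knowledge stores at once, which is why corrections are disproportionately valuable training events.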

Over time, repeated interactions crystallized these insights into structured decision frameworks.

The agent evolved from a generic assistant into a personalized research partner.

Observed progression

| Metric | Early Phase | Mid Phase | Mature Phase |
| --- | --- | --- | --- |
| Useful analyses | 38% | 71% | 74% |
| Historical case recalls | 2 | 12 | 15 |
| Bias detection events | 0 | 4 | 5 |
| Skill references | 2 | 6 | 8 |

One unexpected benefit also emerged.

Explaining reasoning to the agent forced the analyst to clarify their own thinking, revealing inconsistencies in their framework.

In other words, nurturing the agent also improved the practitioner.


Implications — The Rise of the “Agent Nurturer”

If the nurture‑first paradigm proves practical, it implies several structural changes in AI adoption.

1. Domain experts become developers

Traditional AI systems require engineers.

NFD agents are primarily developed by practitioners themselves.

The domain expert becomes the system’s trainer, mentor, and architect.

2. Knowledge assets become experiential

Instead of static documentation, organizations accumulate:

  • decision frameworks
  • case libraries
  • error pattern databases

These become living knowledge repositories.

3. AI systems become organizational memory

A mature agent can recall years of historical reasoning and decisions.

This turns the system into something closer to institutional cognition.

The value compounds over time.

4. New operational roles emerge

The paper hints at a new professional role:

Agent nurturers.

These individuals continuously cultivate specialized AI partners through dialogue and crystallization.

It is closer to mentorship than programming.


Conclusion — The Apprentice Model of AI

The nurture‑first paradigm reframes AI agents in a deeply human way.

Instead of static tools configured once and deployed forever, agents become apprentices that mature through experience.

This shift aligns AI development with how expertise actually forms in the real world.

Knowledge emerges through practice, reflection, and refinement.

If this framework proves scalable, the next generation of AI agents may not be built in engineering sprints at all.

They will be raised over months and years, accumulating judgment the way professionals do.

And the organizations that learn how to nurture them effectively will possess something far more powerful than software.

They will possess compounding expertise.

Cognaptus: Automate the Present, Incubate the Future.