Opening — Why this matters now

For decades, companies have tried to capture knowledge the way accountants capture numbers—clean, structured, and preferably in a database.

It rarely worked.

The problem was never storage. It was translation. The most valuable knowledge in an organization—how a technician “just knows” something is wrong, how a trader senses regime change—refuses to be written down.

Now generative AI has entered the room, not as a better filing system, but as something more unsettling: a machine that can work with incomplete, messy, and half-formed knowledge.

This paper proposes a shift that many organizations are quietly circling around: stop trying to fully formalize knowledge. Start working with fragments instead.

Background — Context and prior art

The intellectual backbone here is the SECI model by Nonaka and Takeuchi—a framework that divides knowledge into two types: tacit (what we know but cannot explain) and explicit (what we can document).

It describes four transformations:

| Process | Direction | Description |
|---|---|---|
| Socialization | Tacit → Tacit | Learning through observation and shared experience |
| Externalization | Tacit → Explicit | Converting intuition into documents or models |
| Combination | Explicit → Explicit | Structuring and integrating knowledge |
| Internalization | Explicit → Tacit | Learning by doing |

The model worked well—at least conceptually. In practice, it assumed something optimistic: that tacit knowledge can eventually be made explicit.

That assumption aged poorly.

Modern knowledge systems became bureaucratic machines. They demanded structured input, but delivered limited retrieval. The cost of documenting knowledge exceeded the value of using it.

Recent AI-era extensions (like GRAI and AKI) attempted to fix this by introducing AI as a new “actor” or even a new form of knowledge.

This paper takes a quieter stance.

AI is not a new participant. It is a tool—just a very unusual one.

Analysis — What the paper actually does

The proposed GenAI SECI model keeps the original four processes but changes the underlying assumption: knowledge does not need to be complete to be useful.

Instead, it introduces a third category:

Digital Fragmented Knowledge

This is where things get interesting.

Rather than forcing knowledge into manuals, the model allows organizations to accumulate:

  • Partial observations
  • Voice notes
  • Images and sensor data
  • Incomplete thoughts from workers

These are not “knowledge” in the classical sense. They are fragments.

Previously, such fragments were unusable noise. Generative AI changes that at each stage:

| Stage | Role of GenAI | Practical Effect |
|---|---|---|
| Externalization | Aggregates fragments across media | Reduces burden of documentation |
| Combination | Organizes fragments (e.g., knowledge graphs) | Enables loose structure |
| Internalization | Recommends relevant fragments | Enhances learning in context |
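To make the aggregation step concrete, here is a minimal sketch of what "fragments" might look like as data. All names (`Fragment`, `aggregate_by_tag`, the field layout) are illustrative assumptions, not the paper's design; a real system would replace the tag-matching with GenAI-driven grouping across media.

```python
from dataclasses import dataclass, field

# Hypothetical fragment record: fields are illustrative assumptions,
# not taken from the paper's architecture.
@dataclass
class Fragment:
    modality: str  # e.g. "voice", "image", "sensor", "note"
    content: str   # transcript, caption, or reading
    tags: set = field(default_factory=set)  # loose context labels

def aggregate_by_tag(fragments):
    """Group fragments that share a context tag — a crude stand-in
    for the GenAI aggregation in the Externalization row above."""
    groups = {}
    for frag in fragments:
        for tag in frag.tags:
            groups.setdefault(tag, []).append(frag)
    return groups

fragments = [
    Fragment("voice", "bearing sounds rough on line 3", {"line3", "vibration"}),
    Fragment("sensor", "vibration spike at 14:02", {"line3", "vibration"}),
    Fragment("note", "ordered replacement seals", {"maintenance"}),
]
groups = aggregate_by_tag(fragments)
print(sorted(groups))            # contexts discovered across media
print(len(groups["vibration"]))  # two fragments share this context
```

The point of the sketch: no fragment here is complete knowledge on its own, yet grouping them by shared context already produces something a worker can act on.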

The key departure from traditional SECI is subtle but decisive:

You no longer need to fully convert tacit knowledge into explicit form before it becomes usable.

Instead, fragments are directly fed back into human learning.

This is less elegant. It is also more realistic.

Findings — From system design to operational logic

The paper goes further and proposes a concrete architecture: the Digital Knowledge Twin System.

It operates across three pipelines:

| Layer | Function | Mechanism |
|---|---|---|
| Externalization | Capture field knowledge | Voice, images, sensors + AI aggregation |
| Combination | Link fragments loosely | Knowledge graphs + contextual linking |
| Internalization | Deliver insight | AI-driven recommendations + workshops |
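The three pipelines can be sketched as a single loop: capture, link, recommend. This is a toy model under stated assumptions — the class name, the dict-based graph, and the recommendation logic are all hypothetical stand-ins for the paper's knowledge-graph and AI components.

```python
# Illustrative sketch of the three-pipeline workflow; names and
# mechanics are assumptions, not the paper's implementation.
class DigitalKnowledgeTwin:
    def __init__(self):
        self.fragments = {}  # fragment_id -> raw content
        self.links = {}      # fragment_id -> set of linked ids

    def capture(self, fid, text):
        """Externalization: store a raw field fragment as-is."""
        self.fragments[fid] = text
        self.links.setdefault(fid, set())

    def link(self, a, b):
        """Combination: add a loose, undirected contextual link."""
        self.links.setdefault(a, set()).add(b)
        self.links.setdefault(b, set()).add(a)

    def recommend(self, fid):
        """Internalization: surface fragments connected to the one
        a worker is currently looking at."""
        return sorted(self.fragments[n] for n in self.links.get(fid, ()))

twin = DigitalKnowledgeTwin()
twin.capture("f1", "pump A hums before failure")
twin.capture("f2", "vibration log, pump A, March")
twin.capture("f3", "supplier invoice")
twin.link("f1", "f2")
print(twin.recommend("f1"))  # ['vibration log, pump A, March']
```

Note what is absent: no schema validation, no approval workflow. Structure emerges from links, which is exactly the "loose structure" the Combination layer describes.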

What stands out is not the technology—it is the workflow design.

Knowledge is no longer treated as an asset to be stored, but as a process to be continuously recombined.

Another notable shift is the emphasis on workshops.

This is not accidental.

The model explicitly argues that AI-generated outputs are not “understood” until humans internalize them through reflection and discussion. In other words, meaning is still a human responsibility.

Comparative positioning

| Model | Role of AI | Knowledge Type | Complexity | Philosophy |
|---|---|---|---|---|
| GRAI | AI as actor | Tacit + Explicit | Medium | Expand interactions |
| AKI | AI creates new knowledge | Adds Artificial Knowledge | High | AI as co-creator |
| GenAI SECI | AI as tool | Adds Fragmented Knowledge | Low | Human-centered learning |

If GRAI and AKI are ambitious rewrites, GenAI SECI is a surgical adjustment.

And that may be its advantage.

Implications — What this means for business

There are three implications worth noting.

1. Knowledge systems shift from “clean data” to “useful mess”

Traditional systems optimized for structure. This model optimizes for usability.

Organizations may need to rethink their instinct to over-standardize data.

Sometimes, a messy voice note is more valuable than a polished report.

2. AI’s real role is compression, not creation

Despite popular narratives, the model positions AI as an aggregator and recommender, not a creator of authoritative knowledge.

This aligns with reality: most enterprise value comes from combining existing signals, not inventing new truths.

3. Competitive advantage moves to tacit pipelines

If generative AI can finally operationalize tacit knowledge—even partially—then firms with strong “Gen-Ba” environments (real operational contexts) gain an edge.

The bottleneck shifts from data availability to experience capture and feedback loops.

In other words, the advantage is no longer who writes the best manuals.

It is who captures the most meaningful fragments.

Conclusion — Knowledge, slightly out of order

For years, knowledge management tried to impose order before usefulness.

This model suggests the opposite sequence.

Capture first. Structure later. Understand continuously.

It is a modest idea on paper. In practice, it challenges decades of organizational habits.

And perhaps that is why it feels plausible.

Cognaptus: Automate the Present, Incubate the Future.