A Hidden Cost of AI Efficiency

When AI takes over routine tasks, companies often see immediate productivity gains. Senior staff can accomplish more without relying on juniors, costs go down, and short-term profits rise. But beneath these benefits lies a risk that most boardrooms overlook: the erosion of tacit knowledge—the hands-on expertise that only develops through years of guided practice.

Tacit skills aren’t in manuals or knowledge bases. They’re the intuition of a surgeon who adapts mid-procedure, the judgment of a lawyer during negotiations, the troubleshooting instincts of an engineer. These skills pass from experts to novices mainly through direct collaboration on real work. Remove the entry-level work, and you cut the ladder that builds tomorrow’s experts.


The Model Behind the Warning

Economist Enrique Ide’s model captures this dynamic in stark terms:

  • Two sources of growth:
    1. Tacit knowledge diffusion — novices learning from experts.
    2. Innovation — novices creating new ideas.
  • Automation shock: a drop in the cost of machines performing entry-level tasks shifts work away from novices.
  • Three possible futures:
    • Full Learning — high novice participation sustains maximum growth.
    • Constrained Learning — moderate automation keeps growth positive but slower.
    • Learning Breakdown — heavy automation erodes skills and stalls growth.

The key insight: AI automation at the entry level accelerates the slide toward Learning Breakdown unless countered by new ways to keep novices engaged in real work.
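The three regimes can be read as thresholds on how much real work novices still get. The sketch below is only a hypothetical illustration of that classification; the cutoff values and the `novice_share` variable are my assumptions, not parameters from Ide's model.

```python
# Hypothetical sketch of the three-regime logic -- thresholds are
# illustrative assumptions, not values from Ide's paper.
def regime(novice_share: float) -> str:
    """Classify the growth regime from the share of entry-level
    work still performed by human novices (0.0 to 1.0)."""
    if novice_share >= 0.7:   # assumed cutoff for full learning
        return "Full Learning"
    if novice_share >= 0.3:   # assumed cutoff for constrained learning
        return "Constrained Learning"
    return "Learning Breakdown"

# An automation shock lowers novice_share, sliding down the regimes:
for share in (0.9, 0.5, 0.1):
    print(f"novice share {share:.0%}: {regime(share)}")
```

The point of the sketch is that the regime shift is discontinuous in feel but continuous in cause: each increment of automation quietly moves the organization toward the next threshold.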


Numbers That Should Worry You

Back-of-envelope estimates suggest that automating:

  • 5% of entry-level work can trim long-run GDP growth by ~0.05 percentage points/year.
  • 30% of entry-level work could cut ~0.35 percentage points/year.

In the most aggressive scenario, initial gains of +7% output within a decade flip to a 19.8% loss after a century. Break-even comes after about 35 years—just beyond most corporate planning horizons.
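The flip is simple compound arithmetic: a one-off level gain is eventually overwhelmed by a compounding growth drag. The toy below uses stand-in parameters (`GAIN`, `DRAG`) for the aggressive scenario and is not Ide's model; because it applies both the boost and the drag instantly, its break-even arrives earlier than the article's roughly 35-year figure.

```python
# Toy arithmetic for the gain-versus-drag trade-off -- NOT Ide's model.
# GAIN and DRAG are hypothetical stand-ins for the aggressive scenario.
GAIN = 1.07      # immediate +7% output boost from automation
DRAG = 0.0035    # 0.35 percentage points shaved off annual growth

def relative_output(year: int) -> float:
    """Output relative to a no-automation baseline after `year` years."""
    return GAIN * (1.0 - DRAG) ** year

# Find the first year the automated path falls behind the baseline.
breakeven = next(t for t in range(1, 200) if relative_output(t) < 1.0)
print(f"year 10:  {relative_output(10):.3f}")
print(f"year 100: {relative_output(100):.3f}")
print(f"toy break-even: year {breakeven}")
```

Even in this crude form, the pattern matches the article's warning: the early years look like a win, and the loss only becomes visible beyond typical planning horizons.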


AI Co-Pilots: Help or Harm?

AI decision-support tools (“co-pilots”) can partially offset skill loss by giving less-experienced practitioners on-demand access to expert-level knowledge. But there’s a catch: if novices expect to rely on AI later, they may invest less in learning now. Ide’s extension of the model shows that opaque AI systems—those that can’t explain their reasoning—risk further undermining tacit knowledge transfer.

Interpretability matters: if AI outputs can be unpacked and taught, the harm diminishes. If not, each generation must rely directly on the AI instead of building its own expertise.


Strategic Takeaways for Leaders

  1. Distinguish automation from augmentation — Use AI to make juniors more capable, not replace them.
  2. Protect high-learning-value tasks — Keep novices in the loop on work that builds critical tacit skills.
  3. Invest in mentorship infrastructure — Subsidize or incentivize structured knowledge transfer.
  4. Push for interpretable AI — Demand systems that can explain their reasoning in human-understandable terms.
  5. Measure human capital health — Track whether your junior pipeline is acquiring real, experience-based competence.

AI’s long-term payoff depends on whether we treat it as a partner in talent development or as a shortcut that empties the bench. The silent skill drain is avoidable—but only if we act before it becomes irreversible.


Cognaptus: Automate the Present, Incubate the Future