Opening — Why this matters now

The current generation of AI systems is remarkably good at predicting what comes next. Unfortunately, prediction is not the same as purpose.

As enterprises push toward autonomous agents—systems that act, not just respond—the question quietly shifts from “What is likely?” to “What should be done?” That distinction sounds philosophical. It is, inconveniently, also operational.

The paper “Computational Concept of the Psyche” proposes an alternative framing: intelligence is not prediction accuracy, but need-driven decision-making under constraints. In other words, less chatbot, more organism.

If that sounds like a step toward AGI, it is. If it sounds messy, it is also that.


Background — Context and prior art

Most modern AI systems fall into two broad camps:

| Paradigm | Core Idea | Limitation |
|---|---|---|
| Predictive models (LLMs, transformers) | Learn patterns, maximize likelihood | No intrinsic goals or motivation |
| Reinforcement learning | Optimize reward signals over time | Rewards are externally defined and often brittle |

The paper argues that both approaches miss a critical ingredient: internal needs.

Drawing from psychology (Freud, Maslow), behavioral economics (prospect theory), and systems theory, the authors reinterpret intelligence as:

The ability to continuously optimize actions to satisfy competing needs under uncertainty and resource constraints.

This is not entirely new—but what is new is the attempt to formalize it computationally as a unified architecture.


Analysis — The psyche as an operating system

The central idea is deceptively simple:

The psyche is the operating system. Intelligence is the decision engine running on top of it.

1. The State Space: More than observations

Unlike standard RL formulations, the agent operates in a composite state space:

  • Sensations (external inputs)
  • Needs (internal drives)
  • Actions (possible interventions)

Together, they form a unified “space of states.”

This is already a departure from most AI systems, which treat internal motivation as either fixed or irrelevant.
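A minimal sketch of that composite state, assuming a simple vector representation for sensations and needs (the field names and dimensions here are illustrative, not the paper's API):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class PsycheState:
    sensations: np.ndarray  # external inputs (e.g. sensor readings)
    needs: np.ndarray       # internal drives (dissatisfaction levels)
    actions: list           # interventions currently available

    def as_vector(self) -> np.ndarray:
        """Flatten sensations and needs into one state vector."""
        return np.concatenate([self.sensations, self.needs])

state = PsycheState(
    sensations=np.array([0.2, 0.9]),
    needs=np.array([0.7, 0.1, 0.4]),
    actions=["move_left", "move_right", "wait"],
)
print(state.as_vector().shape)  # (5,)
```

The point of the flattening is that internal drives sit in the same state the decision engine conditions on, rather than being bolted on as a reward signal.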

2. Needs as a first-class variable

Instead of a single scalar reward, the model introduces a vector of needs:

| Component | Description |
|---|---|
| $x$ | Long-term priorities (personality / genetic or learned biases) |
| $y$ | Current dissatisfaction levels (urgency of needs) |
| $z = x \cdot y$ | Motivational vector |

This turns decision-making into a multi-objective optimization problem rather than a single reward maximization task.

In practical terms: the agent is not just maximizing reward—it is balancing hunger, risk, curiosity, and efficiency simultaneously.
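The motivational vector falls out of a single elementwise product. In this sketch (the need labels and numbers are invented for illustration), a low-priority but urgent need can still dominate:

```python
import numpy as np

x = np.array([0.8, 0.3, 0.5])  # long-term priorities: safety, curiosity, efficiency
y = np.array([0.2, 0.9, 0.5])  # current dissatisfaction per need
z = x * y                      # motivation: urgent AND important wins

print(z)                # [0.16 0.27 0.25]
print(int(z.argmax()))  # 1 -> curiosity dominates despite its lower priority
```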

3. Utility is no longer scalar

Traditional RL compresses every consideration into a single scalar action-value:

$$ Q(s, a) $$

This model extends it into a prospect-aware, multi-dimensional utility function:

  • Positive outcomes (gains)
  • Negative outcomes (losses)
  • Probabilities of each
  • Energy costs
  • Predictability (expectation vs reality gap)

The result is closer to behavioral economics than classical control theory.
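One way to see the difference is a toy utility function that keeps those components separate before combining them. This is an illustrative sketch, not the paper's exact formula; the loss-aversion and surprise weights are invented constants:

```python
def prospect_utility(gain, loss, p_gain, p_loss,
                     energy_cost, surprise,
                     loss_aversion=2.25, surprise_weight=0.5):
    """Prospect-style utility: losses weigh more than gains,
    energy spent is subtracted, and surprising (unpredicted)
    outcomes are penalized."""
    value = p_gain * gain - p_loss * loss_aversion * loss
    return value - energy_cost - surprise_weight * surprise

u = prospect_utility(gain=1.0, loss=1.0, p_gain=0.6, p_loss=0.4,
                     energy_cost=0.1, surprise=0.2)
print(round(u, 2))  # -0.5
```

Note that with symmetric gains and losses the utility goes negative: loss aversion alone changes which actions look attractive.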

4. Decision rule: maximize “prospected utility”

Instead of maximizing expected reward, the agent scores each prospective outcome $s$ by its subjective utility $U(s)$ weighted by its estimated probability $P(s)$, and selects the action leading to:

$$ \arg\max_s \left( U(s) \cdot P(s) \right) $$

This explicitly integrates risk, uncertainty, and subjective valuation.

Interestingly, the paper notes that humans often prefer lower but guaranteed outcomes over higher expected ones—a detail most AI systems ignore.
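That certainty preference falls directly out of the rule. In this sketch (the prospects and numbers are made up for illustration), a guaranteed modest outcome beats a long shot with a higher face value:

```python
prospects = {
    "safe_bet":  {"U": 5.0,  "P": 1.0},  # lower utility, guaranteed
    "long_shot": {"U": 20.0, "P": 0.2},  # higher utility, unlikely
}

def choose(prospects):
    """Pick the prospect maximizing U(s) * P(s)."""
    return max(prospects, key=lambda s: prospects[s]["U"] * prospects[s]["P"])

print(choose(prospects))  # safe_bet (5.0 beats 20.0 * 0.2 = 4.0)
```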

5. Hybrid cognition: System 1 + System 2

The architecture maps neatly onto a hybrid design:

| Layer | Implementation |
|---|---|
| System 1 (fast) | Neural networks / associative models |
| System 2 (slow) | Symbolic reasoning / graphs |

This aligns with the growing interest in neuro-symbolic systems, but with a clearer behavioral grounding.
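A toy version of the hybrid dispatch, assuming a cheap associative lookup as System 1 and a costly deliberate search as System 2 that only runs when the fast path is uncertain (the habit table, confidence values, and threshold are all invented for illustration):

```python
import random

def system1(state):
    """Fast path: cached habit lookup with a confidence estimate."""
    habits = {"ball_left": "move_left", "ball_right": "move_right"}
    action = habits.get(state)
    confidence = 0.9 if action else 0.1
    return action, confidence

def system2(state, actions=("move_left", "move_right", "wait")):
    """Slow path: stand-in for deliberate symbolic search."""
    return random.choice(actions)

def decide(state, threshold=0.5):
    action, confidence = system1(state)
    return action if confidence >= threshold else system2(state)

print(decide("ball_left"))   # move_left (fast path)
print(decide("ball_above"))  # unfamiliar state: falls back to System 2
```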

6. Memory as a four-layer stack

The proposed memory system resembles a modern AI stack more than a brain metaphor:

| Layer | Function |
|---|---|
| Episodic memory | Raw experience logs |
| Model memory | Learned abstractions (NN or symbolic) |
| Short-term memory | Active context |
| Attention | Current focus |

If this feels suspiciously like RAG + LLMs, that is because it essentially is—just framed as cognition instead of architecture.
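The layering can be sketched in a few lines. This is an illustration of the stack's shape, not an implementation from the paper; the class and method names are assumptions:

```python
from collections import deque

class Memory:
    def __init__(self, short_term_size=5):
        self.episodic = []                               # raw experience log
        self.model = {}                                  # learned abstractions
        self.short_term = deque(maxlen=short_term_size)  # active context window
        self.attention = None                            # current focus

    def observe(self, event):
        self.episodic.append(event)    # everything is logged
        self.short_term.append(event)  # only recent events stay active
        self.attention = event         # focus tracks the latest input

m = Memory(short_term_size=2)
for e in ["serve", "hit", "miss"]:
    m.observe(e)
print(len(m.episodic), list(m.short_term), m.attention)
# 3 ['hit', 'miss'] miss
```

The RAG analogy holds: episodic memory is the document store, short-term memory is the context window, and attention is the retrieval query.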


Findings — What actually works (and what doesn’t)

The paper includes a minimal experiment: a reinforcement learning agent playing single-player ping-pong.

The interesting part is not the game—it is the need structure.

Experimental need space

| Need | Role |
|---|---|
| Happiness | Positive reinforcement (hit success) |
| Sadness | Negative reinforcement (failure) |
| Novelty | Exploration incentive |
| Expectedness | Predictability / model accuracy |

Key observation

| Configuration | Outcome |
|---|---|
| Equal weight on positive & negative feedback | Learning slows or fails |
| Higher weight on positive feedback | Stable learning achieved |

This is subtly important.

It suggests that over-penalizing failure suppresses exploration, a problem already observed in real-world RL systems.

In business terms: if your AI is too risk-averse, it becomes useless.
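The asymmetry can be made concrete with a toy reward over the experimental need space. The specific weights below are invented; the paper's finding is only that positive feedback must dominate for learning to stabilize:

```python
def need_reward(hit, w_happiness, w_sadness, novelty=0.0, w_novelty=0.1):
    """Combine happiness and sadness needs into one feedback signal."""
    happiness = 1.0 if hit else 0.0
    sadness = 0.0 if hit else 1.0
    return w_happiness * happiness - w_sadness * sadness + w_novelty * novelty

# Equal weighting: one miss cancels one hit, so trying risky shots
# has zero expected payoff and exploration stalls.
print(need_reward(True, 1.0, 1.0) + need_reward(False, 1.0, 1.0))  # 0.0

# Positive-dominant weighting: a hit-and-miss pair still pays,
# so the agent keeps exploring.
print(need_reward(True, 1.0, 0.4) + need_reward(False, 1.0, 0.4))  # 0.6
```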


Implications — Why this matters for real systems

1. From “tools” to “agents”

Most enterprise AI today is reactive. This model pushes toward proactive systems that:

  • Anticipate future needs
  • Allocate resources dynamically
  • Trade off risk vs efficiency

Think less “generate report” and more “decide whether the report is worth generating.”

2. A better abstraction for ROI

The introduction of “survival energy” as a universal currency is quietly practical.

It provides a way to unify:

  • Computational cost
  • Business value
  • Risk exposure

Which, for once, aligns AI design with how CFOs actually think.
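As a rough sketch of the idea, all three can be expressed in one currency before a decision is made. The conversion rate and the dollar figures below are invented for illustration, not from the paper:

```python
def energy_delta(value_usd, compute_usd, risk_usd_expected,
                 usd_per_energy_unit=1.0):
    """Net 'survival energy' of an action: value minus compute
    cost minus expected risk, in one shared unit."""
    return (value_usd - compute_usd - risk_usd_expected) / usd_per_energy_unit

# Generating a report: worth $50, costs $5 of compute, with a 10%
# chance of a $200 rework -> expected risk of $20.
print(energy_delta(50, 5, 0.10 * 200))  # 25.0
```

A positive delta means the action is worth taking; a negative one means the agent (or the CFO) should decline.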

3. Alignment through needs, not rules

Instead of hardcoding constraints, the system encodes:

  • Needs (what matters)
  • Priorities (how much it matters)

This is closer to how humans operate—and potentially more robust than rule-based alignment.

Though, naturally, it also introduces new failure modes.

4. Industrial applications are surprisingly realistic

The paper explicitly points to:

  • Process control systems
  • Industrial automation
  • Smart environments

These are domains where:

  • Multi-objective trade-offs are constant
  • Interpretability matters
  • Pure black-box models are insufficient

In other words: not flashy, but economically meaningful.


Conclusion — Intelligence, redefined (again)

The paper does not give us AGI. It does something more interesting: it reframes the problem.

Instead of asking:

How do we make machines smarter?

It asks:

What does it mean for a system to care about outcomes?

By grounding intelligence in needs, trade-offs, and survival-like constraints, the authors move AI closer to something that resembles agency rather than automation.

Whether this becomes the dominant paradigm is unclear.

But one thing is certain: a system that understands what it wants—and why—will outperform one that merely predicts what comes next.

That is not philosophy. That is strategy.

Cognaptus: Automate the Present, Incubate the Future.