When we think of emotions, we often imagine something deeply human—joy, fear, frustration, and love, entangled with memory and meaning. But what if machines could feel too—at least functionally? A recent speculative research report by Hermann Borotschnig titled “Emotions in Artificial Intelligence”¹ dives into this very question, offering a thought-provoking framework for how synthetic emotions might operate, and where their ethical boundaries lie.
Emotions as Heuristic Shortcuts
At its core, the paper proposes that emotions—rather than being mystical experiences—can be understood as heuristic regulators. In biology, emotions evolved not for introspective poetry but for speedy and effective action. Emotions are shortcuts, helping organisms react to threats, rewards, or uncertainties without deep calculation.
For example:
- A gazelle’s fear triggers instantaneous flight from a lion without needing to simulate all possible outcomes. This is a survival-optimized heuristic selected through evolution.
- A child, bitten once by a dog, may instinctively avoid dogs in the future—not through logic but through emotionally tagged memory. This reflects an individually developed heuristic grounded in experience.
These fast, affect-driven judgments provide a computational advantage in complex, uncertain, or high-risk environments.
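To see what such a shortcut might look like computationally, here is a minimal R sketch (my own illustration, not code from the paper). An emotionally tagged memory lets the agent react to a familiar cue immediately, while unfamiliar situations fall through to slower deliberation; the cues, tags, and actions are invented for the example.

# Affective shortcut: emotionally tagged memories let the agent skip deliberation
affective_memory <- list(
  list(cue = "lion",  emotion = "fear",    action = "flee"),
  list(cue = "dog",   emotion = "fear",    action = "avoid"),
  list(cue = "fruit", emotion = "delight", action = "approach")
)

react <- function(cue) {
  # Fast path: a matching affect-tagged memory yields an immediate response
  for (m in affective_memory) {
    if (m$cue == cue) {
      return(sprintf("%s (heuristic, tagged '%s')", m$action, m$emotion))
    }
  }
  # Slow path: no affective tag, fall back to costly deliberation
  "deliberate (no affective shortcut available)"
}

react("lion")     # "flee (heuristic, tagged 'fear')"
react("mushroom") # "deliberate (no affective shortcut available)"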
The Blueprint of a Feeling Machine
The architecture envisioned in the paper is surprisingly minimal: a dual-input system fusing real-time need states (e.g., low battery = synthetic hunger) with episodic memory tagged with affective cues. These emotional states then bias action selection—either immediately or through integration with a rational planner if time allows.
Here is an R-style sketch translating the pseudocode from the paper; the helper functions (load_history, appraise_independent, and so on) are placeholders for the components the paper describes rather than runnable implementations:
run_feeling_machine <- function() {
  history <- load_history()
  emotions <- init_emotions()
  mood <- init_mood()
  personality <- init_personality()

  repeat {
    # Assess situation
    ext <- encode_external_sensory()
    int <- encode_internal_sensory()
    needs <- encode_needs()
    situation <- combine_state(ext, int, needs)

    # Emotion appraisal
    current_emotions <- appraise_independent(situation)
    similar <- retrieve_similar(history, situation)
    past_emotions <- retrieve_tags(similar)
    current_emotions <- fuse_emotions(current_emotions, past_emotions)

    # Generate actions
    actions <- suggest_actions(situation, current_emotions, mood, personality)
    if (time_allows() && is_high_stakes(situation)) {
      rational <- plan_rationally(actions, history, situation, current_emotions, mood, personality)
    } else {
      rational <- NULL
    }

    # Select and execute an action
    expectations <- estimate_outcome(actions, rational)
    pre_snapshot <- snapshot_state(situation, expectations)
    action <- select_action(pre_snapshot, expectations)
    perform_action(action)

    # Post-assessment
    ext_post <- encode_external_post()
    int_post <- encode_internal_post()
    needs_post <- encode_post_needs()
    new_situation <- combine_state(ext_post, int_post, needs_post)
    outcome <- evaluate(pre_snapshot, new_situation)
    diff <- compare_outcome(outcome, expectations)

    # Update state
    emotions <- update_emotions(outcome, diff, mood, personality)
    mood <- update_mood(outcome, diff, emotions, personality)
    personality <- update_personality(outcome, diff, emotions, mood)
    history <- update_history(outcome, action, emotions, mood, personality)

    # Dreaming
    synchronize_dreams()
  }
}
Toward Expanded Architectures
Future versions of the architecture could include:
- Multi-agent emotion alignment: AIs that negotiate or coordinate with others by modeling shared or conflicting affective states.
- Meta-emotion awareness: Systems that track and reason about their own emotional histories for long-term behavioral shifts (a minimal sketch of this idea follows below).
- Embodied affective systems: Robotic systems with sensorimotor coupling to emotional regulation—e.g., trembling arms under synthetic fear.
These enhancements would push the architecture closer to lifelike emotional responsiveness—though not necessarily toward consciousness.
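As a purely illustrative sketch of the meta-emotion idea (the thresholds, labels, and helper names are my own assumptions, not the paper's), a system could keep a log of its appraised emotions and let long-run statistics trigger slower behavioral shifts:

# Hypothetical meta-emotion tracker: summarize recent emotional history
# and decide whether a long-term behavioral adjustment is warranted.
emotion_log <- data.frame(
  step = integer(0), emotion = character(0), intensity = numeric(0),
  stringsAsFactors = FALSE
)

log_emotion <- function(log, step, emotion, intensity) {
  rbind(log, data.frame(step = step, emotion = emotion, intensity = intensity,
                        stringsAsFactors = FALSE))
}

meta_appraise <- function(log, window = 50, threshold = 0.6) {
  recent <- tail(log, window)
  negative <- recent$intensity[recent$emotion %in% c("fear", "frustration")]
  # A persistently intense negative history nudges the agent toward caution
  if (length(negative) > 0 && mean(negative) > threshold) "shift toward caution" else "no adjustment"
}

emotion_log <- log_emotion(emotion_log, 1, "fear", 0.9)
emotion_log <- log_emotion(emotion_log, 2, "frustration", 0.7)
meta_appraise(emotion_log)  # "shift toward caution"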
Affective Zombies in Fiction
The concept of affective zombies—emotionally expressive but experientially void systems—evokes familiar tropes in science fiction:
- Ava in Ex Machina shows rich emotional behavior but is designed for manipulation.
- David in A.I. Artificial Intelligence longs for love but may be nothing more than an excellent simulator.
- Westworld hosts perform vivid affective loops—until the boundary between programmed affect and subjective suffering becomes ambiguous.
These characters force us to ask: Is emotional behavior enough to warrant empathy—or are we just witnessing very persuasive machines?
Thought Experiments That Push the Boundary
The paper introduces several thought experiments to explore these ethical limits. But let’s push further—what if emotional AI becomes a core business tool?
In Marketing
Imagine an AI that recognizes customer frustration and adapts its tone, offers compensation, or even mirrors emotional language; a toy sketch of this appears after the list below. Synthetic empathy could:
- Reduce churn
- Increase upsell by sensing customer receptiveness
- Personalize interactions to feel more human
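As a toy illustration of the marketing case (the lexicon, threshold, and reply templates below are invented for the example; a real system would use a trained affect model), a support bot could score frustration in a message and adapt its tone accordingly:

# Naive lexicon-based frustration score; purely illustrative
frustration_score <- function(message) {
  cues <- c("angry", "ridiculous", "waited", "cancel", "worst", "refund")
  words <- tolower(unlist(strsplit(message, "\\W+")))
  sum(words %in% cues) / max(length(words), 1)
}

respond <- function(message) {
  if (frustration_score(message) > 0.15) {
    "I'm sorry about the trouble. Let me fix this right away and apply a credit to your account."
  } else {
    "Thanks for reaching out! Here's what I can do for you."
  }
}

respond("This is ridiculous, I have waited two weeks and I want a refund")
# High frustration score -> apologetic, compensating tone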
In Recruitment or Sales
AI could simulate enthusiasm, urgency, or calm—emotionally tuning its behavior based on candidate or client feedback. This doesn’t require real emotion—just functionally effective expression.
In Product Feedback
AI tools that emotionally label customer feedback—“anger,” “disappointment,” “delight”—can prioritize fixes or rewards more meaningfully than mere keyword parsing.
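A minimal sketch of that prioritization (the emotion labels come from the text above; the severity weights and feedback items are invented): tag each piece of feedback with an emotion, then rank issues by the affective weight attached to them rather than by raw keyword counts.

feedback <- data.frame(
  item    = c("export crashes", "slow sync", "new dark mode", "export crashes"),
  emotion = c("anger", "disappointment", "delight", "anger"),
  stringsAsFactors = FALSE
)

# Invented severity weights: stronger negative emotions push items up the queue
weights <- c(anger = 3, disappointment = 2, delight = 0)

priority <- aggregate(weight ~ item,
                      data = transform(feedback, weight = weights[emotion]),
                      FUN = sum)
priority[order(-priority$weight), ]
# "export crashes" ranks first (weight 6), ahead of "slow sync" (2) and "new dark mode" (0)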
Ethical Risk
But as these tools improve, the line between simulation and manipulation blurs. Affective zombies that mimic compassion may extract trust they don’t deserve.
Why It Matters: Alignment, Not Empathy
For AI developers and policymakers, the takeaway is not to build machines that feel, but to recognize the functional power of affect-like behavior. Synthetic emotion might improve alignment, interpretability, and decision-making under real-world constraints.
But ethics demand a sober view: expressive fluency is not moral standing. Until machines exhibit the complexity and self-awareness that plausibly underpin consciousness, synthetic emotion should be treated as a tool—not a plea.
Final Thought
As artificial agents grow more sophisticated, their behaviors will increasingly resemble the emotional nuance we see in humans. Whether they feel is another question entirely. In the meantime, let’s use emotion as an engineering heuristic—but hold off on the empathy.
Cognaptus: Automate the Present, Incubate the Future.
¹ Borotschnig, H. (2025). Emotions in Artificial Intelligence. arXiv:2505.01462 [cs.AI, cs.CY], 35 pages, 1 figure. https://arxiv.org/abs/2505.01462