
Back to School for AGI: Memory, Skills, and Self‑Starter Instincts

Large models are passing tests, but they’re not yet passing life. A new paper proposes Experience‑driven Lifelong Learning (ELL) and introduces StuLife, a collegiate “life sim” that forces agents to remember, reuse, and self‑start across weeks of interdependent tasks. The punchline: today’s best models stumble, not because they’re too small, but because they don’t live with their own memories, skills, and goals.

Why this matters now

Enterprise buyers don’t want parlor tricks; they want agents that schedule, follow through, and improve. The current stack—stateless calls, long prompts—fakes continuity. ELL reframes the problem: build agents that accumulate experience, organize it as memory + skills, and act proactively when the clock or context demands it. This aligns with what we’ve seen in real deployments: token context ≠ memory; chain‑of‑thought ≠ skill; cron jobs ≠ initiative. ...

August 27, 2025 · 4 min · Zelina

Mind's Eye for Machines: How SimuRA Teaches AI to Think Before Acting

What if AI agents could imagine their future before taking a step—just like we do? That’s the vision behind SimuRA, a new architecture that pushes LLM-based agents beyond reactive decision-making and into the realm of internal deliberation. Introduced in the paper “SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model”, SimuRA’s key innovation lies in separating what might happen from what should be done. Instead of acting step-by-step based solely on observations, SimuRA-based agents simulate multiple futures using a learned world model and then reason over those hypothetical outcomes to pick the best action. This simple-sounding shift is surprisingly powerful—and may be a missing link in developing truly general AI. ...

August 2, 2025 · 3 min · Zelina

Jolting Ahead: Why AI’s Acceleration Is Accelerating

When Ray Kurzweil first proposed the “Law of Accelerating Returns,” he suggested that technological progress builds on itself, speeding up over time. But what if even that framing is too slow? David Orban’s recent paper, Jolting Technologies: Superexponential Acceleration in AI Capabilities and Implications for AGI, pushes the discussion into new mathematical territory. Instead of modeling AI progress as exponential (where capability growth accelerates at a constant rate), he proposes something more radical: positive third-order derivatives — or in physics terms, jolts. ...
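As a back‑of‑the‑envelope restatement (the notation here is ours, not Orban’s), the physics analogy the paper borrows is the “jerk,” the third derivative of position; applied to a capability curve it reads:

```latex
% Physics: for position x(t),
v = \frac{dx}{dt}, \qquad
a = \frac{d^2x}{dt^2}, \qquad
\text{jolt (jerk)} = \frac{d^3x}{dt^3}.
% Applied to an AI capability curve C(t): a sustained
% \frac{d^3C}{dt^3} > 0, beyond what a fixed-rate exponential
% already implies, is the "superexponential" regime the paper describes.
```

In other words, the claim is not merely that capability grows, or even that its growth accelerates, but that the acceleration itself is increasing.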

July 10, 2025 · 4 min · Zelina

Jack of All Trades, Master of AGI? Rethinking the Future of Multi-Domain AI Agents

What will the future AI agent look like—a collection of specialized tools or a Swiss army knife of intelligence? As researchers and builders edge closer to Artificial General Intelligence (AGI), the design and structure of multi-domain agents becomes both a technical and economic question. Recent proposals like NGENT1 highlight a clear vision: agents that can simultaneously perceive, plan, act, and learn across text, vision, robotics, emotion, and decision-making. But is this convergence inevitable—or even desirable? ...

May 2, 2025 · 4 min