
Talking to Yourself, but Make It Useful: Intrinsic Self‑Critique in LLM Planning

Opening — Why this matters now
For years, the received wisdom in AI planning was blunt: language models can’t really plan. Early benchmarks—especially Blocksworld—made that verdict look almost charitable. Models hallucinated invalid actions, violated preconditions, and confidently declared failure states to be successes. The field responded by bolting on external verifiers, symbolic planners, or human-in-the-loop corrections. ...
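To make “intrinsic” concrete, here is a minimal sketch of a self-critique loop in which the same model drafts, checks, and revises its own plan. The `ask` helper, the prompts, and the loop structure are illustrative assumptions, not the paper’s actual procedure.

```python
# Minimal sketch of an intrinsic self-critique loop (illustrative; not the
# paper's exact method). `ask` is any callable that sends a prompt to an LLM
# and returns its text reply.
from typing import Callable

def plan_with_self_critique(task: str, ask: Callable[[str], str], max_rounds: int = 3) -> str:
    """Draft a plan, then let the same model critique and revise it."""
    plan = ask(f"Propose a step-by-step plan for this task:\n{task}")
    for _ in range(max_rounds):
        critique = ask(
            "Check this plan for invalid actions, violated preconditions, or "
            "an unreached goal. Reply exactly 'OK' if the plan is sound.\n"
            f"Task: {task}\nPlan:\n{plan}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model itself finds no remaining flaws
        plan = ask(
            f"Revise the plan to address this critique.\nTask: {task}\n"
            f"Plan:\n{plan}\nCritique:\n{critique}"
        )
    return plan
```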

January 3, 2026 · 3 min · Zelina

Mind's Eye for Machines: How SimuRA Teaches AI to Think Before Acting

What if AI agents could imagine their future before taking a step—just like we do? That’s the vision behind SimuRA, a new architecture that pushes LLM-based agents beyond reactive decision-making and into the realm of internal deliberation. Introduced in the paper “SimuRA: Towards General Goal-Oriented Agent via Simulative Reasoning Architecture with LLM-Based World Model”, SimuRA’s key innovation lies in separating what might happen from what should be done. Instead of acting step-by-step based solely on observations, SimuRA-based agents simulate multiple futures using a learned world model and then reason over those hypothetical outcomes to pick the best action. This simple-sounding shift is surprisingly powerful—and may be a missing link in developing truly general AI. ...
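The simulate-then-choose pattern is easy to sketch. The toy code below assumes hypothetical `propose_actions`, `world_model`, and `score` callables and a greedy rollout; it illustrates the idea of reasoning over imagined futures, not SimuRA’s actual interfaces or implementation.

```python
# Toy sketch of simulative reasoning: roll each candidate action forward
# through a world model, score the imagined end state, act on the best one.
# The callables below are assumed interfaces, not SimuRA's APIs.
from typing import Callable, List

def choose_action(
    state: str,
    propose_actions: Callable[[str], List[str]],  # candidate actions for a state
    world_model: Callable[[str, str], str],       # imagined next state
    score: Callable[[str], float],                # value of an imagined state
    horizon: int = 3,
) -> str:
    """Return the candidate action whose simulated rollout scores highest."""
    best_action, best_value = None, float("-inf")
    for action in propose_actions(state):
        sim_state, next_action = state, action
        for _ in range(horizon):                  # imagine a short future
            sim_state = world_model(sim_state, next_action)
            continuations = propose_actions(sim_state)
            if not continuations:
                break
            next_action = continuations[0]        # greedy continuation keeps the sketch simple
        value = score(sim_state)
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```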

August 2, 2025 · 3 min · Zelina

Rollout Renaissance: How Pareto-NRPA Revives Monte Carlo for Multi-Objective Optimization

Monte Carlo search algorithms rarely make the shortlist in multi-objective optimization (MOO). Traditionally, the field has belonged to evolutionary algorithms like NSGA-II and SMS-EMOA. But a paper from Paris Dauphine-PSL and Thales upends that hierarchy with an audacious twist: what if we generalized NRPA (Nested Rollout Policy Adaptation) — a niche but powerful single-objective method — to handle multiple objectives, constraints, and diversity, all in one elegant framework? ...
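At the heart of any such multi-objective generalization is Pareto dominance: one score vector dominates another if it is at least as good on every objective and strictly better on at least one. Here is a minimal sketch of that machinery (assuming every objective is maximized); it illustrates the concept, not the paper’s code.

```python
# Pareto dominance test and nondominated filter over score vectors,
# assuming all objectives are to be maximized. Illustrative only.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

if __name__ == "__main__":
    scores = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (2.0, 1.0)]
    print(pareto_front(scores))  # (2.0, 1.0) is dominated and drops out
```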

July 28, 2025 · 3 min · Zelina

The Joy of Many Minds: How JoyAgents-R1 Unleashes the Power of Multi-LLM Reinforcement Learning

When it comes to language model agents, more minds may not always mean merrier results. Multi-agent reinforcement learning (MARL) promises a flexible path for decomposing and solving complex tasks, but coordinating multiple large language models (LLMs) remains riddled with instability, inefficiency, and memory fragmentation. Enter JoyAgents-R1, a novel framework that proposes an elegant, scalable solution for jointly evolving heterogeneous LLM agents using Group Relative Policy Optimization (GRPO). Developed by researchers at JD.com, JoyAgents-R1 combines memory evolution, policy optimization, and clever sampling strategies to form a resilient multi-agent architecture capable of matching the performance of larger SOTA models with far fewer parameters. ...
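GRPO’s core trick is replacing a learned critic with a group-relative baseline: sample several rollouts per prompt and normalize each reward against the group’s own mean and spread. The snippet below is a minimal sketch of that advantage computation, not JoyAgents-R1’s implementation.

```python
# Minimal sketch of GRPO's group-relative advantage: baseline each rollout's
# reward against its own group's statistics instead of a learned value model.
from statistics import mean, stdev
from typing import List

def group_relative_advantages(rewards: List[float], eps: float = 1e-8) -> List[float]:
    """Normalize each rollout's reward by the group's mean and standard deviation."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

if __name__ == "__main__":
    # Four rollouts of the same prompt, scored by a task-specific reward.
    print(group_relative_advantages([1.0, 0.0, 0.5, 1.0]))
```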

June 25, 2025 · 3 min · Zelina