
When AI Discovers Physics: Inside the Multi-Agent Renaissance of Scientific Machine Learning

Opening — Why this matters now

Scientific discovery has always been bottlenecked by one thing: human bandwidth. In scientific machine learning (SciML), where physics meets data-driven modeling, that bottleneck shows up as painstaking trial and error—architectures tuned by hand, loss functions adjusted by intuition, and results validated by weeks of computation. Enter AgenticSciML, a new framework from Brown University that asks a radical question: What if AI could not only run the experiment, but design the method itself? ...

November 11, 2025 · 4 min · Zelina

When AI Packs Too Much Hype: Reassessing LLM 'Discoveries' in Bin Packing

Opening — Why this matters now

The academic world has been buzzing ever since a Nature paper claimed that large language models (LLMs) had made “mathematical discoveries.” Specifically, through a method called FunSearch, LLMs were said to have evolved novel heuristics for the classic bin packing problem—an NP-hard optimization task as old as modern computer science itself. The headlines were irresistible: AI discovers new math. But as with many shiny claims, the real question is whether the substance matches the spectacle. ...
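
To see what kind of artifact is actually at stake, here is a minimal sketch of the setup FunSearch operates in: a fixed online packing loop plus a swappable bin-scoring heuristic, which is the part the LLM evolves. The `score` function and `pack_by_score` driver below are illustrative assumptions for this post, not FunSearch's published output.

```python
from typing import List

def score(item: float, remaining: float) -> float:
    """Illustrative bin-scoring heuristic (a stand-in for what an
    evolved heuristic would replace): prefer the snuggest fit,
    i.e. the bin that leaves the least leftover space."""
    return -(remaining - item)

def pack_by_score(items: List[float], capacity: float = 1.0) -> List[List[float]]:
    """Fixed online packing loop: place each item into the feasible
    bin with the highest heuristic score; open a new bin if none fits."""
    bins: List[List[float]] = []
    remaining: List[float] = []
    for item in items:
        feasible = [i for i, r in enumerate(remaining) if r >= item]
        if feasible:
            best = max(feasible, key=lambda i: score(item, remaining[i]))
            bins[best].append(item)
            remaining[best] -= item
        else:
            bins.append([item])
            remaining.append(capacity - item)
    return bins

# Toy run: count how many unit-capacity bins this heuristic uses.
print(len(pack_by_score([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1])))
```

The debate the post examines is whether swapping in an LLM-evolved `score` amounts to a mathematical discovery or just a well-tuned variant of classics like best-fit.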

November 5, 2025 · 5 min · Zelina

Recursive Minds: How ReCAP Turns LLMs into Self-Correcting Planners

In long-horizon reasoning, large language models still behave like short-term thinkers. They can plan, but only in a straight line. Once the context window overflows, earlier intentions vanish, and the model forgets why it started. The new framework ReCAP (Recursive Context-Aware Reasoning and Planning)—from Stanford’s Computer Science Department and MIT Media Lab—offers a radical solution: give LLMs a recursive memory of their own reasoning.

The Problem: Context Drift and Hierarchical Amnesia

Sequential prompting—used in CoT, ReAct, and Reflexion—forces models to reason step by step along a linear chain. But in complex, multi-stage tasks (say, cooking or coding), early goals slide out of the window. Once the model’s focus shifts to later steps, earlier plans are irretrievable. Hierarchical prompting tries to fix this by spawning subtasks, but it often fragments information across layers—each sub-agent loses sight of the global goal. ...
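
The core mechanism is easier to see in code. Below is a minimal sketch of the recursive-context idea as this post reads it, not ReCAP's actual implementation: every recursive call re-injects the full stack of ancestor goals into the prompt, so early intentions cannot slide out of scope. `Task`, `plan`, and the toy `fake_llm` are all hypothetical names.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    goal: str
    parents: List[str] = field(default_factory=list)  # ancestor goal stack

def plan(task: Task, llm: Callable[[str], List[str]],
         depth: int = 0, max_depth: int = 2) -> List[str]:
    """Recursively decompose a goal, carrying the ancestor stack in
    every prompt instead of trusting a sliding context window."""
    lineage = " > ".join(task.parents + [task.goal])
    subgoals = llm(f"Decompose, given the goal stack: {lineage}")
    if depth == max_depth or not subgoals:
        return [lineage]  # leaf action, still annotated with its lineage
    steps: List[str] = []
    for sub in subgoals:
        steps += plan(Task(sub, task.parents + [task.goal]), llm, depth + 1, max_depth)
    return steps

def fake_llm(prompt: str) -> List[str]:
    """Toy stand-in for an LLM: splits the top-level goal once, then stops."""
    return ["prep", "execute"] if prompt.count(">") < 1 else []

# Every leaf step still names the root goal, so nothing is forgotten:
print(plan(Task("cook dinner"), fake_llm))
# -> ['cook dinner > prep', 'cook dinner > execute']
```

The design point is that memory of intent lives in the call structure itself rather than in whatever happens to remain inside the context window.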

November 2, 2025 · 4 min · Zelina

Mind the Gap: How AI Papers Misuse Psychology

It has become fashionable for AI researchers to pepper their papers with references to psychology: System 1 and 2 thinking, Theory of Mind, memory systems, even empathy. But according to a recent meta-analysis titled “The Incomplete Bridge: How AI Research (Mis)Engages with Psychology”, these references are often little more than conceptual garnish. The authors analyze 88 AI papers from NeurIPS and ACL (2022-2023) that cite psychological concepts. Their verdict is sobering: while 78% use psychology as inspiration, only 6% attempt to empirically validate or challenge psychological theories. Most papers cite psychology in passing — using it as window dressing to make AI behaviors sound more human-like. ...

July 31, 2025 · 3 min · Zelina