One Point to Rule Them All: Why AI Optimization Is Quietly Abandoning the Pareto Frontier

Opening — Why this matters now

In AI, we’ve spent years chasing completeness. More data. More models. More outputs. More possibilities. And in optimization? The holy grail has long been the Pareto frontier — a beautifully complex surface representing every optimal trade-off between competing objectives. It looks impressive. It feels rigorous. It is, frankly, overkill. ...

April 13, 2026 · 4 min · Zelina

Probe, Then Commit: Why Solver Tuning Finally Grew Up

Opening — Why this matters now

Constraint programming (CP) has always promised elegance: state the problem, let the solver do the work. In practice, however, seasoned users know the uncomfortable truth — solver performance lives or dies by hyperparameters most people neither understand nor have time to tune. As problem instances grow larger and solver configurations explode combinatorially, manual tuning has become less of an art and more of a liability. The paper Hyperparameter Optimization of Constraint Programming Solvers confronts this reality head-on, proposing a framework that finally treats solver configuration as what it is: a resource allocation problem under uncertainty. ...

January 19, 2026 · 4 min · Zelina