Rules of Engagement: Why LLMs Need Logic to Plan

When it comes to language generation, large language models (LLMs) like GPT-4o are top of the class. But ask them to reason through a complex plan, such as reorganizing a logistics network or optimizing staff scheduling, and their performance becomes unreliable. That’s the central finding from ACPBench Hard (Kokel et al., 2025), a new benchmark from IBM Research that tests unrestrained reasoning about action, change, and planning. ...

April 2, 2025 · 4 min

How Ultra-Large Context Windows Challenge RAG

Gemini 2.5 and the Rise of the 2 Million Token Era

In March 2025, Google introduced Gemini 2.5 Pro with a 2 million token context window, marking a major milestone in the capabilities of language models. While this remains an experimental and high-cost frontier, it opens the door to new possibilities. To put this in perspective (approximate values, depending on the tokenizer):

📖 The entire King James Bible: ~785,000 tokens
🎭 All of Shakespeare’s plays: ~900,000 tokens
📚 A full college textbook: ~500,000–800,000 tokens

This means Gemini 2.5 could, in theory, process multiple entire books or large document repositories in one pass, though substantial compute and memory costs currently limit practical deployment. ...
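To see where figures like these come from, here is a minimal sketch of token budgeting: counting a document's tokens and dividing into the window size. It assumes OpenAI's open-source tiktoken library with the cl100k_base encoding as a stand-in, since Gemini's tokenizer is not publicly available, and the file name is hypothetical; actual counts vary by tokenizer.

```python
# Rough token budgeting: how much text fits in a 2M-token window?
# Assumption: tiktoken's cl100k_base encoding as a stand-in tokenizer;
# Gemini uses its own tokenizer, so real counts will differ.
import tiktoken

CONTEXT_WINDOW = 2_000_000  # Gemini 2.5 Pro's advertised window


def count_tokens(text: str) -> int:
    """Return the number of cl100k_base tokens in `text`."""
    enc = tiktoken.get_encoding("cl100k_base")
    return len(enc.encode(text))


if __name__ == "__main__":
    # Hypothetical local file standing in for a full book.
    with open("king_james_bible.txt", encoding="utf-8") as f:
        book = f.read()

    n = count_tokens(book)
    print(f"Book tokens: {n:,}")
    print(f"Copies fitting in a {CONTEXT_WINDOW:,}-token window: {CONTEXT_WINDOW // n}")
```

Absent a real file, the common heuristic of roughly four characters per English token gives the same order of magnitude.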

March 29, 2025 · 3 min · Cognaptus Insights