Seeing is Believing? Not Quite — How CoCoT Makes Vision-Language Models Think Before They Judge

Vision-language models (VLMs) may describe what they see, but do they truly understand what they're looking at, especially in social contexts? A recent paper introduces Cognitive Chain-of-Thought (CoCoT), a deceptively simple yet remarkably effective prompting strategy that helps these models reason the way humans do: through layered cognition rather than flat logic.

The Problem with Flat Reasoning

Traditional Chain-of-Thought (CoT) prompting, while powerful for math and symbolic tasks, falls short when it comes to social or moral interpretation. Consider a scene in which a person wears a mask indoors and another person says, "Hiding from the paparazzi, huh?" CoT may recognize the mask, but it often misfires when guessing the intent behind the remark: is it a joke? A warning? An instruction? ...
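To make the contrast concrete, here is a minimal sketch of what a layered prompt could look like next to a flat CoT prompt. The layer names (perception, situation, norm) and the exact wording are illustrative assumptions rather than the paper's verbatim template, and the actual VLM call is omitted.

```python
# Minimal sketch of a CoCoT-style layered prompt, contrasted with flat CoT.
# The three layer names (perception -> situation -> norm) and the instructions
# below are assumptions for illustration, not the paper's exact template.

FLAT_COT = "Describe the image, then think step by step and answer: {question}"

COCOT_LAYERS = [
    ("Perception", "List only what is visibly present in the image."),
    ("Situation",  "Infer what is happening between the people, given those observations."),
    ("Norm",       "Judge the speaker's intent in light of social norms and context."),
]

def build_cocot_prompt(question: str) -> str:
    """Compose a layered prompt that asks the model to answer stage by stage."""
    steps = [
        f"Step {i + 1} ({name}): {instruction}"
        for i, (name, instruction) in enumerate(COCOT_LAYERS)
    ]
    return "\n".join(steps + [f"Finally, answer: {question}"])

if __name__ == "__main__":
    question = 'Is "Hiding from the paparazzi, huh?" a joke, a warning, or an instruction?'
    print(build_cocot_prompt(question))
```

The point of the layering is that the intent judgment only happens after the model has committed to what it actually sees and what the situation is, instead of collapsing all three into one flat reasoning pass.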

July 29, 2025 · 3 min · Zelina