Tunnel Vision, Literally: When Cropping Makes Multimodal Models Blind

Opening — Why this matters now

Multimodal Large Language Models (MLLMs) can reason, explain, and even philosophize about images—until they’re asked to notice something small. A number on a label. A word in a table. The relational context that turns a painted line into a parking space instead of a traffic lane. The industry’s default fix has been straightforward: crop harder, zoom further, add resolution. Yet performance stubbornly plateaus. This paper makes an uncomfortable but important claim: the problem is not missing pixels. It’s missing structure. ...

December 14, 2025 · 3 min · Zelina