Seeing Too Much: When Multimodal Models Forget Privacy

Opening — Why this matters now

Multimodal models have learned to see. Unfortunately, they have also learned to remember—and sometimes to reveal far more than they should. As vision-language models (VLMs) are deployed into search, assistants, surveillance-adjacent tools, and enterprise workflows, the question is no longer whether they can infer personal information from images, but how often they do so—and under what conditions they fail to hold back. ...

January 12, 2026 · 3 min · Zelina