
When Privacy Meets Chaos: Making Federated Learning Behave

Opening — Why this matters now

Federated learning was supposed to be the grown-up solution to privacy anxiety: train models collaboratively, keep data local, and everyone sleeps better at night. Then reality arrived. Real devices are heterogeneous. Real data are wildly non-IID. And once differential privacy (DP) enters the room—armed with clipping and Gaussian noise—training dynamics start to wobble like a poorly calibrated seismograph. ...
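The excerpt names clipping and Gaussian noise as the two DP ingredients that perturb federated training. As a rough illustration only, the sketch below shows a generic DP-FedAvg-style round; the function name, `clip_norm`, and `noise_multiplier` values are assumptions for the example, not taken from the article.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update to a fixed L2 norm, then add Gaussian noise.

    Illustrative sketch of the two DP ingredients mentioned in the post;
    clip_norm and noise_multiplier are placeholder values.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# One simulated aggregation round over heterogeneous (non-IID-ish) clients:
# the larger a client's raw update, the more clipping distorts it.
client_updates = [np.random.randn(10) * scale for scale in (0.5, 2.0, 5.0)]
aggregate = np.mean([privatize_update(u) for u in client_updates], axis=0)
print(aggregate)
```

Even in this toy setting, the clipped-and-noised aggregate drifts from the plain average, which is the instability the post alludes to.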

February 9, 2026 · 4 min · Zelina

Noise Without Regret: How Error Feedback Fixes Differentially Private Image Generation

Opening — Why this matters now

Synthetic data has quietly become the backbone of privacy-sensitive machine learning. Healthcare, surveillance, biometrics, and education all want the same thing: models that learn from sensitive images without ever touching them again. Differential privacy (DP) promises this bargain, but in practice it has been an expensive one. Every unit of privacy protection tends to shave off visual fidelity, diversity, or downstream usefulness. ...

January 22, 2026 · 4 min · Zelina

Privacy by Proximity: How Nearest Neighbors Made In-Context Learning Differentially Private

Opening — Why this matters now

As large language models (LLMs) weave themselves into every enterprise workflow, a quieter issue looms: the privacy of the data used to prompt them. In-context learning (ICL) — the art of teaching a model through examples in its prompt — is fast, flexible, and dangerously leaky. Each query could expose confidential examples from private datasets. Enter differential privacy (DP), the mathematical armor for sensitive data — except that, until now, DP methods for ICL have been clumsy and utility-poor. ...

November 8, 2025 · 4 min · Zelina