
When Data Can’t Travel, Models Must: Federated Transformers Meet Brain Tumor Reality

Opening — Why this matters now

Medical AI has reached an awkward phase of maturity. The models are powerful, the architectures increasingly baroque, and the clinical promise undeniable. Yet the data they require—high‑dimensional, multi‑modal, deeply personal—remains stubbornly immobile. Hospitals cannot simply pool MRI scans into a central data lake without running headlong into privacy law, ethics boards, and public trust. ...

January 22, 2026 · 4 min · Zelina

Don’t Just Fuse It — Align It: When Multimodal Recommendation Grows a Spine

Opening — Why this matters now

Multimodal recommendation has quietly hit a ceiling. Not because we ran out of data — quite the opposite. Images are sharper, text embeddings richer, and interaction logs longer than ever. The problem is architectural complacency: most systems add modalities, but few truly reason across them. Visual features get concatenated. Text is averaged. Users remain thin ID vectors staring helplessly at semantically over-engineered items. ...

January 20, 2026 · 4 min · Zelina

When Riders Become Nodes: Mapping Fraud in Ride-Hailing with Graph Neural Networks

Opening — Why this matters now

Ride-hailing fraud is no longer a fringe operational headache. It is a structural problem amplified by scale, incentives, and post-pandemic digitization. As platforms expanded, so did adversarial behavior: GPS spoofing, collusive rides, route inflation, and off-platform hire conversions quietly eroded trust and margins. Traditional fraud detection systems—feature-heavy, transaction-centric, and largely static—have struggled to keep up. The paper under review argues that the problem is not merely more fraud, but more relational fraud. And relational problems demand relational models. ...

January 4, 2026 · 4 min · Zelina

When Graphs Stop Guessing: Teaching Models to Rewrite Their Own Meaning

Opening — Why this matters now

Graph learning has quietly run into a ceiling. Not because graph neural networks (GNNs) are weak, but because they are confidently opinionated. Once you choose a GNN, you lock in assumptions about where signal should live: in node features, in neighborhoods, in homophily, in motifs. That works—until it doesn’t. ...

December 26, 2025 · 4 min · Zelina

Policy Gradients Grow Up: Teaching RL to Think in Domains

Opening — Why this matters now

Reinforcement learning keeps winning benchmarks, but keeps losing the same argument: it doesn’t generalize. Train it here, deploy it there, and watch confidence evaporate. Meanwhile, classical planning—decidedly uncool but stubbornly correct—has been quietly producing policies that provably work across arbitrarily large problem instances. This paper asks the uncomfortable question the RL community often dodges: can modern policy-gradient methods actually learn general policies, not just big ones? ...

December 23, 2025 · 4 min · Zelina

From Cora to Cosmos: How PyG 2.0 Scales GNNs for the Real World

Graph Neural Networks (GNNs) have come a long way since they solved Cora and PubMed node classification. But what happens when you want to model an entire traffic network, a biomedical knowledge graph, or a social graph with billions of nodes? That’s where PyG 2.0 steps in.

The Industrialization of GNNs

PyTorch Geometric (PyG) has been a dominant tool in the academic development of GNNs. With PyG 2.0, it graduates into the world of industrial-strength machine learning. This isn’t just a library update—it’s a fundamental re-architecture with three goals: ...

July 24, 2025 · 3 min · Zelina

Nodes Know Best: A Smarter Graph for Long-Term Stock Forecasts

Can a model trained to think like a day trader ever truly understand long-term market moves? Most financial AI systems today seem stuck in the equivalent of high-frequency tunnel vision—obsessed with predicting tomorrow’s returns and blind to the richer patterns that shape actual investment outcomes. A new paper, NGAT: A Node-level Graph Attention Network for Long-term Stock Prediction, proposes a more grounded solution. It redefines the task itself, the architecture behind the prediction, and how we should build the graphs powering these systems. ...

July 4, 2025 · 4 min · Zelina