
Causality Remembers: Teaching Social Media Defenses to Learn from the Past

Opening — Why this matters now
Social media coordination detection is stuck in an awkward adolescence. Platforms know coordinated inauthentic behavior exists, regulators know it scales faster than moderation teams, and researchers know correlation-heavy detectors are brittle. Yet most deployed systems still behave as if yesterday’s parameters will work tomorrow. This paper introduces Adaptive Causal Coordination Detection (ACCD)—not as another accuracy tweak, but as a structural correction. Instead of freezing assumptions into static thresholds and embeddings, ACCD treats coordination detection as a learning system with memory. And that subtle shift matters more than the headline F1 score. ...

January 5, 2026 · 4 min · Zelina

Crossing the Line: Teaching Pedestrian Models to Reason, Not Memorize

Opening — Why this matters now
Pedestrian fatalities are rising, mid-block crossings dominate risk exposure, and yet most models tasked with predicting pedestrian behavior remain stubbornly local. They perform well—until they don’t. Move them to a new street, a wider arterial, or a different land-use mix, and accuracy quietly collapses. This is not a data problem. It’s a reasoning problem. ...

January 5, 2026 · 4 min · Zelina

When LLMs Stop Guessing and Start Complying: Agentic Neuro-Symbolic Programming

Opening — Why this matters now
Large Language Models are excellent improvisers. Unfortunately, software systems—especially those embedding logic, constraints, and guarantees—are not jazz clubs. They are factories. And factories care less about eloquence than about whether the machine does what it is supposed to do. Neuro-symbolic (NeSy) systems promise something enterprises quietly crave: models that reason, obey constraints, and fail predictably. Yet in practice, NeSy frameworks remain the domain of specialists fluent in obscure DSLs and brittle APIs. The result is familiar: powerful theory, low adoption. ...

January 5, 2026 · 4 min · Zelina

When Models Remember Too Much: The Quiet Economics of Memorization

Opening — Why this matters now
Large Language Models (LLMs) are often praised for how well they generalize. Yet, beneath the surface, a less glamorous behavior quietly persists: they remember—sometimes too well. In an era where models are trained on ever-larger corpora under increasing regulatory scrutiny, understanding when memorization occurs, why it happens, and how it can be isolated is no longer an academic indulgence. It is an operational concern. ...

January 5, 2026 · 3 min · Zelina

When Systems Bleed: Teaching Distributed AI to Heal Itself

Opening — Why this matters now
Distributed systems are no longer just distributed. They are fragmented across clouds, edges, fog nodes, IoT devices, and whatever underpowered hardware someone insisted on deploying in a basement. This so‑called computing continuum promises flexibility, but in practice it delivers something else: constant failure. Nodes disappear. Latency spikes. Logs contradict each other. Recovery scripts work—until they don’t. Traditional fault‑tolerance assumes failures are predictable, classifiable, and polite enough to arrive one at a time. Reality, as usual, disagrees. ...

January 5, 2026 · 4 min · Zelina

Safety First, Reward Second — But Not Last

Opening — Why this matters now
Reinforcement learning has spent the last decade mastering games, simulations, and neatly bounded optimization problems. Reality, inconveniently, is none of those things. In robotics, autonomous vehicles, industrial automation, and any domain where mistakes have real-world consequences, “almost safe” is simply unsafe. Yet most “safe RL” methods quietly rely on a compromise: allow some violations, average them out, and hope the system behaves. This paper refuses that bargain. It treats safety as a hard constraint, not a tunable preference—and then asks an uncomfortable question: can we still learn anything useful? ...

January 4, 2026 · 4 min · Zelina

Trust No One, Train Together: Zero-Trust Federated Learning Grows Teeth

Opening — Why this matters now
Critical infrastructure is no longer attacked by teenagers in hoodies. It is probed, poisoned, and patiently undermined by adversaries who understand distributed systems better than most defenders. From water treatment plants to national energy grids, Industrial IoT (IIoT) has become a strategic attack surface. Federated Learning (FL) was supposed to help—privacy-preserving, collaborative, decentralized. Instead, it quietly introduced a new problem: you are now trusting hundreds or thousands of autonomous agents not to lie. ...

January 4, 2026 · 4 min · Zelina

When Fairness Fails in Groups: From Lone Counterexamples to Discrimination Clusters

Opening — Why this matters now
Most algorithmic fairness debates still behave as if discrimination is a rounding error: rare, isolated, and best handled by catching a few bad counterexamples. Regulators ask whether a discriminatory case exists. Engineers ask whether any unfair input pair can be found. Auditors tick the box once a model is declared “2-fair.” ...

January 4, 2026 · 4 min · Zelina

When Riders Become Nodes: Mapping Fraud in Ride-Hailing with Graph Neural Networks

Opening — Why this matters now
Ride-hailing fraud is no longer a fringe operational headache. It is a structural problem amplified by scale, incentives, and post-pandemic digitization. As platforms expanded, so did adversarial behavior: GPS spoofing, collusive rides, route inflation, and off-platform hire conversions quietly eroded trust and margins. Traditional fraud detection systems—feature-heavy, transaction-centric, and largely static—have struggled to keep up. The paper under review argues that the problem is not merely more fraud, but more relational fraud. And relational problems demand relational models. ...

January 4, 2026 · 4 min · Zelina

AI Writes the Rules: When Formal Logic Teaches Language Discipline

Opening — Why this matters now
Natural language is where most software failures quietly begin. Requirements are written in good faith, read with confidence, and implemented with subtle misunderstandings that only surface once systems are deployed, audited, or—worse—regulated. The uncomfortable truth is that natural language is flexible where engineering systems demand rigidity. This paper tackles that gap head‑on, proposing a method where formal logic leads and language follows. Instead of writing requirements first and wrestling semantics into place later, the authors invert the workflow: start from formal specification patterns, then systematically generate a controlled natural language (CNL) using an AI assistant. Precision first. Fluency second. ...

January 3, 2026 · 4 min · Zelina