Opening — Why this matters now

Safety-critical AI has a credibility problem. Not because it fails spectacularly—though that happens—but because we often cannot say where it is allowed to succeed. Regulators demand clear operational boundaries. Engineers deliver increasingly capable models. Somewhere in between, the Operational Design Domain (ODD) is supposed to translate reality into something certifiable.

That translation is breaking down.

In aviation, autonomous driving, and industrial control, ODDs are still largely written by experts before systems ever see the real world. But modern AI systems are trained, tuned, and stress-tested on data collected long after those early design documents are frozen. The result is an uncomfortable gap: systems proven safe by data, but bounded by assumptions that no longer reflect reality.

This paper tackles that gap head-on—and does so without the usual hand-waving.

Background — Why ODDs are harder than they look

An ODD is not just a checklist of ranges like speed, altitude, or weather. It has two layers:

  • Taxonomy: the parameters and their ranges
  • Ontology: the relationships between those parameters

The first is manageable. The second is where things fall apart.

Real operational environments are non-convex, discontinuous, and deeply entangled. Certain combinations of parameters simply never occur—or must never occur—even if each parameter looks acceptable in isolation. Traditional representations (tables, YAML schemas, convex polytopes) either oversimplify these relationships or include unsafe voids they cannot see.

Data-driven approaches promise realism but introduce a new problem: non-determinism. If your ODD changes depending on sample order, retraining, or stochastic optimization, it becomes legally indefensible.

So the question becomes precise:

Can we extract a deterministic, interpretable, and certifiable ODD purely from data?

This paper argues that yes—you can, if you stop thinking in boxes and start thinking in fields.

Analysis — From hard boundaries to affinity fields

The core move is deceptively simple: stop asking whether a point is inside or outside the ODD. Ask instead:

How strongly does this operating condition belong to what we have seen before?

ODDs as affinity functions

Each observed in-distribution data point becomes an anchor. Around each anchor, the method places a kernel—specifically an RBF kernel—that assigns a smoothly decaying affinity score as you move away from known conditions.

These local affinities are then combined multiplicatively into a global ODD affinity function:

  • Continuous
  • Bounded between 0 and 1
  • Order-independent
  • Fully deterministic

A single threshold on this affinity function defines ODD membership. No retraining. No randomness. No fragile geometry.
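
To make this concrete, here is a minimal sketch in Python. The anchor data, the shared kernel widths, and the complement-product aggregation (one way to combine kernels multiplicatively while keeping the score in $[0, 1]$ and high near any anchor) are illustrative assumptions; the paper's exact combination rule may differ, but the four properties listed above carry through.

```python
import numpy as np

def odd_affinity(x, anchors, widths):
    """ODD affinity of a query condition x, given in-distribution anchors.

    Each anchor carries an RBF kernel; the local scores are combined via a
    complement product, which keeps the result in [0, 1], is independent of
    anchor order, and is fully deterministic.
    """
    x = np.asarray(x, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    widths = np.asarray(widths, dtype=float)

    # Local affinity to each anchor: exp(-||x - a_i||^2 / (2 * sigma_i^2))
    sq_dist = np.sum((anchors - x) ** 2, axis=1)
    local = np.exp(-sq_dist / (2.0 * widths ** 2))

    # Global affinity: high near any anchor, decaying smoothly away from all of them.
    return 1.0 - np.prod(1.0 - local)

# Two anchors in a 2-D operating space, equal widths for simplicity.
anchors = np.array([[0.0, 0.0], [1.0, 0.0]])
widths = np.array([0.3, 0.3])

print(odd_affinity([0.1, 0.0], anchors, widths))  # near an anchor -> close to 1
print(odd_affinity([3.0, 3.0], anchors, widths))  # far from all anchors -> close to 0

# A single threshold turns the continuous field into an ODD membership test.
in_odd = odd_affinity([0.5, 0.1], anchors, widths) >= 0.5
```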

Why kernels, not convex hulls

Convex hulls are tempting: simple, fast, familiar. They also happily declare large regions safe where no data exists.

Kernel superposition behaves differently. It wraps tightly around the data manifold, respecting concavities, holes, and nonlinear structure. If the data never supported a condition, the affinity decays—automatically.

This is Safety-by-Design without pretending the world is convex.
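
A toy comparison makes the difference tangible. The ring-shaped data, the kernel width, and the Delaunay-based hull test below are illustrative choices rather than the paper's setup; they simply show how the two representations treat a condition the data never supported, reusing the aggregation from the sketch above.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

# In-distribution samples on a ring: each coordinate looks fine in isolation,
# but the center of the ring is a combination the data never supported.
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
anchors = np.column_stack([np.cos(theta), np.sin(theta)])
query = np.array([0.0, 0.0])  # the "hole" in the middle of the data

# Convex-hull view: the center is happily declared inside.
hull = Delaunay(anchors)
print("inside convex hull:", bool(hull.find_simplex(query) >= 0))  # True

# Kernel view: no anchor is close, so the affinity stays low.
widths = np.full(len(anchors), 0.2)
sq_dist = np.sum((anchors - query) ** 2, axis=1)
local = np.exp(-sq_dist / (2.0 * widths ** 2))
print("kernel affinity at center:", round(1.0 - np.prod(1.0 - local), 4))  # ~0
```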

Parameterization without expert tuning

Kernel widths are derived from local data density using nearest-neighbor distances. Dense regions expand influence; sparse regions contract it. Two global parameters control behavior:

  • Maximum kernel spread
  • Decay rate with distance

That’s it. No per-dimension micromanagement. No expert-crafted rules masquerading as mathematics.
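
As a sketch of how such widths might be derived, the snippet below caps each width at a maximum spread and shrinks it exponentially with the anchor's mean nearest-neighbor distance, so dense regions keep broad influence and isolated points stay tight. The functional form, parameter names, and default values are assumptions made for illustration, not the paper's formula.

```python
import numpy as np

def local_kernel_widths(anchors, max_spread=0.5, decay_rate=2.0, k=5):
    """Per-anchor kernel widths derived from local data density.

    Anchors in dense neighborhoods (small nearest-neighbor distances) get
    widths close to `max_spread`; isolated anchors get much tighter kernels.
    The exponential form is one plausible reading of the two global
    parameters, not the paper's exact formula.
    """
    anchors = np.asarray(anchors, dtype=float)
    diffs = anchors[:, None, :] - anchors[None, :, :]
    dists = np.sqrt(np.sum(diffs ** 2, axis=-1))
    np.fill_diagonal(dists, np.inf)                      # ignore self-distances
    mean_knn = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    return max_spread * np.exp(-decay_rate * mean_knn)

# Dense cluster plus one isolated anchor: the lone point gets a tiny width.
rng = np.random.default_rng(1)
anchors = np.vstack([rng.normal(0.0, 0.05, size=(50, 2)), [[3.0, 3.0]]])
widths = local_kernel_widths(anchors)
print(widths[:3].round(3), widths[-1].round(6))
```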

Out-of-distribution data as first-class citizens

Crucially, the framework allows explicit OOD samples. These are enforced as low-affinity regions by construction. If an OOD point scores too high, the responsible kernels are adjusted—without moving anchor points or breaking determinism.

This aligns neatly with certification logic: forbidden regions stay forbidden, by proof rather than convention.
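
One way to picture that enforcement, under the same illustrative aggregation as before: deterministically shrink whichever kernel contributes most at an offending OOD point until its affinity falls below the threshold. The shrink factor and stopping rule below are assumptions, but the two properties the text emphasizes hold: anchors never move, and the result is reproducible.

```python
import numpy as np

def enforce_ood(anchors, widths, ood_points, threshold=0.5, shrink=0.9, max_iter=100):
    """Tighten kernel widths until every OOD point scores below the threshold.

    Anchors never move; only the widths of the kernels responsible for the
    violation are reduced, so the procedure stays deterministic.
    """
    anchors = np.asarray(anchors, dtype=float)
    widths = np.asarray(widths, dtype=float).copy()
    for ood in np.asarray(ood_points, dtype=float):
        for _ in range(max_iter):
            sq_dist = np.sum((anchors - ood) ** 2, axis=1)
            local = np.exp(-sq_dist / (2.0 * widths ** 2))
            if 1.0 - np.prod(1.0 - local) < threshold:
                break  # this forbidden region is now low-affinity
            widths[np.argmax(local)] *= shrink  # shrink the dominant kernel
    return widths
```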

Findings — Does it actually work?

The authors validate the approach twice: once synthetically, once in aviation.

Monte Carlo validation

Random ODDs with hidden nonlinear constraints are generated. The kernel-based ODD is reconstructed from samples alone and evaluated against both:

  • The true underlying ODD (known only in simulation)
  • The convex hull of anchor points (a realistic proxy)

The result: the precision–recall behavior of the kernel-based ODD is nearly indistinguishable from that of the true ODD, with $R^2 > 0.97$ even in higher dimensions.

| Comparison Target | Precision $R^2$ | Recall $R^2$ |
|---|---|---|
| True ODD | > 0.97 | > 0.97 |
| Convex Hull | > 0.98 | > 0.99 |

Convex hulls, interestingly, perform well as validation proxies—but not as operational boundaries.
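
For readers who want to replicate the scoring logic, here is a minimal sketch of how a reconstructed ODD can be graded against a reference labeling. The callables and the toy disk example are stand-ins invented for illustration; only the precision and recall bookkeeping is the point.

```python
import numpy as np

def precision_recall(predict_in_odd, true_in_odd, test_points):
    """Score a reconstructed ODD against a reference labeling of test points."""
    pred = np.array([predict_in_odd(p) for p in test_points], dtype=bool)
    true = np.array([true_in_odd(p) for p in test_points], dtype=bool)
    tp = np.sum(pred & true)
    precision = tp / max(pred.sum(), 1)  # how much of the claimed ODD is truly in-domain
    recall = tp / max(true.sum(), 1)     # how much of the true ODD is recovered
    return precision, recall

# Toy stand-ins: the true ODD is the unit disk, the reconstruction a smaller disk.
grid = np.random.default_rng(2).uniform(-1.5, 1.5, size=(5000, 2))
p, r = precision_recall(lambda x: np.linalg.norm(x) < 0.9,
                        lambda x: np.linalg.norm(x) < 1.0,
                        grid)
print(f"precision={p:.2f}  recall={r:.2f}")  # conservative: high precision, lower recall
```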

Real-world aviation case: collision avoidance

The method is applied to a neural-network-based aircraft collision avoidance system (VCAS), using over 600,000 real operational states across five dimensions.

Once again, the kernel-based ODD mirrors the behavior of the expert-defined ODD—with the added benefit of smooth affinity decay, enabling graded safety responses instead of binary cutoffs.

Implications — Why this changes the certification conversation

Three things stand out.

1. Determinism is not optional

Certification does not tolerate stochastic safety arguments. This framework is fully deterministic, order-independent, and reproducible. That alone puts it ahead of most learning-based safety envelopes.

2. Soft boundaries are safer than hard ones

An affinity score provides early warning, graceful degradation, and interpretable margins. Binary logic provides denial and surprises.
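
What a graded response might look like in practice is sketched below. The affinity bands and the actions attached to them are invented for illustration; a real system would derive both from its own hazard analysis rather than from this paper.

```python
def safety_response(affinity: float) -> str:
    """Map a continuous ODD affinity score to a graded operational response.

    The bands and actions are illustrative placeholders, not values from
    the paper or from any certified system.
    """
    if affinity >= 0.8:
        return "nominal operation"
    if affinity >= 0.5:
        return "heightened monitoring, reduced automation authority"
    if affinity >= 0.2:
        return "graceful degradation, hand control back to the operator"
    return "abort / revert to a minimal-risk condition"
```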

3. Data can define safety—if handled conservatively

This approach does not claim to discover truth. It claims to reconstruct what the data can justify, no more, no less. That epistemic humility is exactly what Safety-by-Design requires.

Conclusion — When safety becomes a field, not a fence

This paper quietly reframes how Operational Design Domains should be built in the age of data-driven AI. Not as static documents. Not as brittle geometries. But as continuous, interpretable affinity fields grounded in observed reality.

It does not eliminate expert judgment—it constrains it. It does not replace certification—it strengthens it. And most importantly, it aligns how safety is argued with how modern AI systems are actually built.

The uncomfortable implication is this: if your ODD cannot be derived from your data, your safety case may already be obsolete.

Cognaptus: Automate the Present, Incubate the Future.