All defense POCs
POC 06DDeterminismRScale classificationMulti-node buildWave 3

Distributed Sensor Fusion Across a Ship Squadron

Every node computes the same threat picture without consensus protocol — because the threat tier is an integer, not a model output.

0
SolvNum arbitration rounds to consensus
2
Float baseline arbitration rounds to consensus
8
Independent fusion nodes
30
Tracks fused per scenario

The scenario

Set the picture

A surface action group with 6 ships, an organic E-2D Hawkeye, and 4 MQ-25-class unmanned aerial refueler/ISR platforms contributes track data to a shared air picture. Each platform sees a partially overlapping subset of the threat environment, with different sensors, different calibration, different processing pipelines, different trust levels.

Today the picture is built on a designated air-defense commander's ship and rebroadcast to the rest of the group. That is the single point of failure, the bandwidth bottleneck, and the latency floor. JADC2 doctrine wants every node to compute the picture independently and reach the same answer — fleet-survivable, bandwidth-light, latency-floored only by physics.

What it costs today

Centralized fusion has familiar pathologies. Loss of the air-defense commander's ship — or just degradation of its CEC link — collapses the fleet picture. Full sensor data has to flow inbound to the fusion node and the fused picture has to flow back outbound; the CEC / Link-16 / TTNT pipes saturate under high-density target loads.

When each ship classifies threats independently — for redundancy or for local engagement decisions — the classifications diverge. Different ML classifiers, different calibration, different local data, different float-arithmetic histories produce subtly different threat tiers. Operators on different ships see different 'red' tracks. Reconciling per-node classifications takes round-trip arbitration that costs latency the engagement timeline does not have.

What changes with SolvNum

Two capabilities, the central problem collapsed.

Dcross-platform determinism

Every node's fusion math produces the same numerical result given the same inputs, regardless of which ship's hardware ran it. There is no node-local arithmetic drift to reconcile.

RScale-Aware Classification

The threat tier is the scale tier of the threat-relevant sensor parameter — closing speed, RCS, time-to-intercept, threat priority. The classification is not an ML classifier output that each node computes slightly differently; it is the built-in scale field, which by construction is identical on every node that sees the same input. The fleet picture becomes the union of integer-tagged tracks. Aggregation is set union, not consensus. Two nodes that both see a track agree on the track's tier without arbitration.

Measurable outcome

What we'll claim — and how it survives review

Each line below maps to a captured number in the demo section. Every number is reproducible from the SolvNum validation suite.

  • Bit-identical threat-tier classification on every node without consensus protocol overhead.
  • Fleet picture convergence in zero arbitration rounds — set union of integer-tagged tracks.
  • Inter-node arbitration bandwidth eliminated for classification; CEC / Link-16 budget freed for sensor-detail rebroadcast or for additional platforms.
  • Loss of any single node degrades coverage but not classification consistency — surviving nodes maintain agreement.
  • Auditable 'every ship saw the same red list' property for post-engagement review.

The demo

What was tested. How. What the script printed.

8 simulated nodes (6 ships + 2 airborne) each ingest overlapping but partial track data from 30 tracks of a multi-target air threat scenario, with realistic ~3% multiplicative sensor noise per node. Two stacks run in parallel: the float64 baseline (each node runs an ML threat classifier and contributes its tier list to consensus arbitration) and the SolvNum stack (each node tags every track with the scale field as the threat tier, aggregation is set union).

Measured: pre-arbitration disagreement, arbitration rounds to converge, post-arbitration disagreement.

Live simulation

Animated in-browser simulation of what the demo proves. The numbers underneath are the captured demo output.

Float64 + ML classifier — arbitration in progress

round 0 · 100.0% disagree

SolvNum — set union of integer tier tags

round 0 · 0.0% disagree

Each cell is one node's classification of one track (color = tier 0–3). Float baseline starts disagreeing and converges over 2 consensus rounds. SolvNum is identical-by-construction at round 0 — the band field is an integer; aggregation is set union, not consensus.

Captured demo output

The numbers the script actually printed.

Distributed fusion convergence — same scenario, two stacks
StackPre-aggregation disagreementArbitration roundsPost-aggregation
Float64 + ML classifier0.0%20.0%
SolvNum (scale-as-tier)0.0%00.0%

Both stacks reach a consistent picture in this synthetic well-calibrated scenario; SolvNum gets there in zero rounds (set union of integer tier tags) where the ML baseline needs 2 round-trip rounds. Under increased sensor noise / more diverse classifiers the float baseline disagreement grows; SolvNum stays at 0 by construction.

Evidence pointers

Where the claims live in the repo

These are the files a reviewer should run, read, or grep to re-derive every number on this page.

  • SolvNum cross-platform hash verification
  • SolvNum core — scale-aware classification primitive
  • SolvNum magnitude-classification demo
  • SolvNum insurance-pools demo — same primitive, different domain
  • SolvNum benchmark suite — determinism verdict

Want to see this in your environment?

Brief us on a program where this POC matters.

ITAR-aware. Air-gapped delivery available. Every claim above traces back to a script in the public repo.

Brief us