JADC2 Reference Compute Substrate
One math, one bound, one classification, one bandwidth budget — across every node, every coalition partner, every cycle. The IEEE 754 of deterministic, bounded, magnitude-aware, compressed defense math.
- 1: Unique fusion hash across 5 nodes
- 0: Arbitration rounds for tier consensus
- 780 bps: Aggregate inter-node bandwidth (budget: 1 Mbps)
- 8.46e-05: Per-value error bound on inter-node reports
The scenario
Set the picture
Joint All-Domain Command and Control (JADC2) is the decade's largest DoD modernization program. It wants every sensor, every effector, every commander, every coalition partner reading the same operational picture, computing the same decisions, with auditable provenance, across heterogeneous hardware, over disadvantaged comms, with certifiable autonomy. Project Overmatch (Navy), ABMS (Air Force), Project Convergence (Army), and the OSD-level JADC2 program office are all building toward this.
The hard underpinning of JADC2 — the layer no architecture document quite addresses — is numerical interoperability across platforms, allies, and decades. Without it, every other JADC2 capability is built on sand.
What it costs today
JADC2 is an architecture-of-architectures stitched together with translation layers, bandwidth-hungry full-fidelity rebroadcast, multiple competing data fabrics, and per-system certification. Every cross-system data fabric (Link-16, Link-22, NATO STANAGs, CEC, TTNT, Mission Partner Environment) pays translation overhead, re-projecting values into the receiver's numerical convention.
Distributed fusion can't be fielded because nodes don't agree on classifications without consensus rounds. So fusion stays centralized, which means JADC2 has the same single-point-of-failure pathology as the systems it was supposed to replace. Each platform's autonomy and fire-control stack is certified in isolation; coalition operation requires the certification to be re-done from scratch.
Bandwidth-vs-fidelity arguments recur at every link, with no portable framework for the answer, and every replay carries forensic ambiguity. JADC2 program offices are aware of these gaps; the current strategy is to push them down the priority list and address them later. They cannot be addressed later.
What changes with SolvNum
All four capabilities together — the full substrate. This is the flagship pitch.
Every node, every coalition partner, every cycle: identical math. The numerical-interoperability gap closes at the compute substrate, not at the data fabric. Translation layers stop translating numbers and start just routing them.
Every command, every actuator, every fusion update: provable per-tick bound, certifiable once. Certification of the autonomy stack collapses to a single artifact that satisfies every national regime, every service safety board, every coalition partner.
Every threat, every track, every sensor reading: shared magnitude classification without consensus protocol. Distributed fusion becomes deployable because the classification is identical-by-construction across nodes. The single-point-of-failure pathology dissolves.
Every link, every report, every archive: bandwidth budget that fits the contested environment. The bandwidth-vs-fidelity argument has a portable answer — the k parameter and the error bound it implies.
Measurable outcome
What we'll claim — and how it survives review
Each line below maps to a captured number in the demo section. Every number is reproducible from the SolvNum validation suite.
- Single SHA-256 every node produces independently for every cycle of the mission.
- Single excursion-limit certification artifact satisfies every node's safety regime.
- Distributed fusion converges in zero arbitration rounds via shared scale-tier classification.
- Mission data fits inside a contested-comms link budget with documented per-channel error envelope.
- One artifact — the SolvNum table file with its published hash — is the long-lived foundation under the entire architecture.
The demo
What was tested. How. What the script printed.
5-node JADC2 vignette across 4 distinct hardware classes: 2 ships (x86_64 servers, Navy CSG combat systems), 1 aircraft (CUDA GPU, airborne command node), 1 ground node (ARM SBC, JTAC / dismounted forward node), and 1 coalition partner node (different x86 build, allied national system), connected by a satellite link with 600 ms latency and a 1 Mbps bandwidth budget.
A multi-vector mission cycle runs end-to-end: sense → classify → decide → engage → assess → archive. Every node computes the same fusion math (D — verified by hash), issues commands within bounded per-tick excursion (B — attested per regime), classifies threats consistently via scale field (R — verified by native-field equality of tier across nodes), and reports across the constrained satellite link within budget (C — verified by bandwidth-budget compliance and per-value error bound).
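The four per-cycle checks can be sketched in a few lines. This is an illustrative harness, not the SolvNum validation suite: the node names, 19.5-byte report size, excursion bound, and link budget come from the demo output, while the fusion payload and the report field names (`fusion_bytes`, `excursion`, `tiers`, `report_len`) are hypothetical.

```python
import hashlib

BOUND = 2.4623          # per-tick excursion limit (B) from the demo
BUDGET_BPS = 1_000_000  # satellite link budget (C)

def check_cycle(reports):
    """reports: {node: {'fusion_bytes', 'excursion', 'tiers', 'report_len'}}"""
    # D — every node must independently produce the identical fusion hash.
    hashes = {hashlib.sha256(r['fusion_bytes']).hexdigest()
              for r in reports.values()}
    assert len(hashes) == 1, "fusion divergence across nodes"

    # B — per-tick excursion stays under the certified bound.
    assert all(r['excursion'] <= BOUND for r in reports.values())

    # R — tier tags identical by construction: zero arbitration rounds.
    tier_sets = [r['tiers'] for r in reports.values()]
    assert all(t == tier_sets[0] for t in tier_sets)

    # C — aggregate 1 Hz report traffic fits the link budget.
    bps = sum(r['report_len'] for r in reports.values()) * 8  # 1 Hz reports
    assert bps <= BUDGET_BPS
    return hashes.pop(), bps

nodes = ['ship_1', 'ship_2', 'aircraft', 'ground_node', 'coalition']
payload = b'canonical-fusion-state'   # stand-in for the real fusion bytes
reports = {n: {'fusion_bytes': payload, 'excursion': 1.6245,
               'tiers': {0}, 'report_len': 19.5} for n in nodes}
fusion_hash, bps = check_cycle(reports)
print(fusion_hash[:12], bps)   # shared 12-char hash prefix, 780.0 bps
```

In the real system the hash is taken over the deterministic fusion state, so agreement is evidence of bitwise-identical math rather than of a shared placeholder.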
Illustration
In-browser diagram of what the demo proves. The numbers underneath are the captured demo output.
JADC2 vignette — 5 nodes, 4 hardware classes, one substrate
One canonical fusion hash 26c945819a49… produced independently by every node. Excursion-limited per-tick update (≤ 2.4623×) attested per regime. Band-tier classification identical by construction. Inter-node reports fit the 1 Mbps satellite budget at 780 bps with an 8.46e-05 per-value error bound.
Captured demo output
The numbers the script actually printed.
| Node | Platform class | Role |
|---|---|---|
| ship_1 | x86_64 server | Navy CSG combat system |
| ship_2 | x86_64 server | Navy CSG combat system |
| aircraft | CUDA GPU | Airborne command node |
| ground_node | ARM SBC | JTAC / dismounted forward node |
| coalition | x86 partner build | Allied national system |
| Node | D fusion hash | B max excursion | R tier consensus | C report size |
|---|---|---|---|---|
| ship_1 | 26c945819a49… | 1.6245× ✓ | all 0s ✓ | 19.5 B |
| ship_2 | 26c945819a49… | 1.6245× ✓ | all 0s ✓ | 19.5 B |
| aircraft | 26c945819a49… | 1.6245× ✓ | all 0s ✓ | 19.5 B |
| ground_node | 26c945819a49… | 1.6245× ✓ | all 0s ✓ | 19.5 B |
| coalition | 26c945819a49… | 1.6245× ✓ | all 0s ✓ | 19.5 B |
Aggregate inter-node bandwidth: 780 bps (1 Hz reports × 5 nodes × 19.5 B). Satellite budget: 1,000,000 bps. Decode hash agreement across nodes: ✓.
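The aggregate-bandwidth figure is simple arithmetic on the captured numbers; a quick sanity check, using only values printed above:

```python
# Link-budget arithmetic from the captured demo output.
nodes = 5
report_rate_hz = 1
report_bytes = 19.5        # per-node report size from the table
budget_bps = 1_000_000     # 1 Mbps satellite budget

aggregate_bps = nodes * report_rate_hz * report_bytes * 8
print(aggregate_bps)                 # 780.0
print(aggregate_bps / budget_bps)    # 0.00078 — under 0.1% of the budget
```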
Joint All-Domain Command and Control — Numerical-Substrate Attestation
- Mission cycle: sense → classify → decide → engage → assess → archive
- Nodes: 5 (ship_1, ship_2, aircraft, ground_node, coalition)
- Hardware classes: 4 distinct
- D — cross-platform fusion identity: PASS
- Canonical fusion hash: 26c945819a492db53a4e3faf269c5a67d5f2151b8055c5003233d434016c4390
- B — per-step excursion limit: PASS (max 1.6245× ≤ bound 2.4623×)
- R — consistent threat classification: PASS (zero arbitration rounds; set union of integer tier tags)
- C — bandwidth-budgeted inter-node: PASS (780 bps ≤ 1,000,000 bps; per-value bound 8.46e-05)
- SolvNum table version: core.K=24, TABLE_BITS=11
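The "zero arbitration rounds" claim rests on magnitude classification being identical by construction. A minimal sketch of the idea, assuming a deterministic exponent-bucketing classifier (the real SolvNum scale field is table-driven; `tier`, `band`, and the track magnitudes here are illustrative):

```python
import math

def tier(x: float, band: int = 8) -> int:
    # Integer tag derived from the binary exponent; deterministic on any
    # IEEE 754 platform, so every node computes the same tag for the same x.
    return math.frexp(x)[1] // band

tracks = [3.2e-4, 1.7, 9.4e2, 6.1e5]   # hypothetical track magnitudes

# Each of the 5 nodes classifies independently; because the math is
# identical by construction, the tag sets already agree, and "consensus"
# is just the set union of integer tier tags — zero arbitration rounds.
node_tags = [{tier(t) for t in tracks} for _ in range(5)]
consensus = set().union(*node_tags)
print(all(tags == consensus for tags in node_tags))   # True
```

Contrast with value-based classification, where a track near a bucket boundary can land in different buckets on different hardware and force an arbitration protocol.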
Composes with
Where this POC sits in the substrate
Every POC reinforces — and is reinforced by — others. Click through to see how each piece locks into the larger picture.
Mission Rehearsal Parity
Mission Rehearsal Parity provides the cross-platform determinism this POC scales.
Provable Effector Slew Rate
Effector Slew Rate provides the excursion-limit primitive for system-level safety.
Model-Free Anomaly Detection on Sensor Streams
Anomaly Detection provides the scale-classification primitive for cross-node threat classification.
Bandwidth-Bounded Tactical Telemetry Compression
Telemetry Compression provides the compression primitive for the data-fabric layer.
Coalition-Interoperable Autonomous Fire Control
Coalition Fire Control provides the multi-national autonomy story this POC scales.
Distributed Sensor Fusion Across a Ship Squadron
Distributed Sensor Fusion provides the fleet-internal fusion this POC scales.
Cross-Platform Mission-Data-Recorder Compression
MDR Compression provides the archive layer of the substrate.
Multi-Platform Drone Swarm with Provable Safety
Drone Swarm Safety provides the autonomous-platform side of the substrate.
Disconnected-Comms Autonomous Handoff
Disconnected-Comms Handoff provides the EMCON / contested-comms autonomy story.
Cross-Platform Engagement Replay for Review
Engagement Replay provides the forensic-audit layer.
Evidence pointers
Where the claims live in the repo
These are the files a reviewer should run, read, or grep to re-derive every number on this page.
- SolvNum benchmark suite (quick mode; also an overnight comprehensive sweep)
- SolvNum cross-platform attestation benchmark
- SolvNum cross-platform determinism verification (x86, ARM, WASM, CUDA)
- SolvNum documentation — defense autonomy strategic context
Want to see this in your environment?
Brief us on a program where this POC matters.
ITAR-aware. Air-gapped delivery available. Every claim above traces back to a script in the public repo.