POC 11 · D Determinism · R Scale classification · C Compression · Small build (single system) · Wave 2

Cross-Platform Engagement Replay for Review

Bit-identical replay of the autonomy stack's decision, attestable by hash, with precedent search across years of archived engagements in seconds.

  • Engagements bit-identical on 3 platforms: 3 / 3
  • Max precedent-search time across 10K-engagement archive: 1.52 ms
  • Compressed archive vs. raw float64: 3.56×
  • Per-value error bound on archive (k=12): 8.46e-05

The scenario

Set the picture

A use-of-force engagement by an autonomous platform is reviewed by a JAG / IG / accident-investigation team. The review board needs to replay the autonomy stack's decision exactly as it executed (same inputs, same intermediate states, same output); attest that the replay is the licensed model and not a tampered version; compare the engagement against archived precedents to identify whether this is a known pattern, a novel pattern, or an outlier; and do all of the above on different hardware than the original execution, often years after the fact, by people who were not on the original program.

This is the core forensic workflow for autonomous-platform accountability — the workflow that DoD JAG, service IG offices, OSD AWS review boards, and emerging international AI-ethics oversight bodies are building doctrine around right now.

What it costs today

Reviewers replay the engagement on whatever hardware they have, using whatever software version is available. The replay's numerical state diverges from the original execution. 'Approximate reconstruction' is the honest disclaimer. When the replay does not exactly match the original record, opposing counsel can challenge whether the analysis is reconstructing the decision or constructing a different decision that happens to be similar.

Tamper-evidence relies on access controls and audit logs, not math-level attestation. Precedent search is qualitative — analysts read records, identify thematic similarities, write up the comparison. Across thousands of archived engagements, this is prohibitively slow. Multi-decade archive replay is not credible: replaying a 10-year-old engagement on current-generation hardware produces values different from what the original platform recorded.

What changes with SolvNum

Three capabilities, one forensic workflow.

D: Cross-platform determinism

Replay is bit-identical to the original execution, on any hardware, attestable by SHA-256. The replay's hash matches the recorded hash from the original execution. There is no 'approximately reconstructed' caveat.
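SolvNum's replay API is not shown on this page; as a minimal sketch of the attestation step, assuming the replayed trajectory (inputs, intermediate states, output) can be serialized to bytes deterministically, the hash compare reduces to this:

```python
import hashlib

def attest_replay(states: list[bytes]) -> str:
    """Hash the full replay trajectory into one SHA-256 digest."""
    h = hashlib.sha256()
    for s in states:
        h.update(s)
    return h.hexdigest()

# Digest recorded by the platform at engagement time (placeholder states).
recorded = attest_replay([b"input", b"state-1", b"state-2", b"output"])

# Years later, a reviewer replays on different hardware; bit-identical
# execution reproduces the exact same digest -- no "approximate" caveat.
replayed = attest_replay([b"input", b"state-1", b"state-2", b"output"])
assert replayed == recorded
print(replayed[:12])  # 12-hex prefix of the digest
```

The state names are hypothetical; the point is that attestation is an equality check on digests, not a judgment call about numerical closeness.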

R: Scale-aware classification

The magnitude fingerprint of the engagement profile is a constant-size signature. Precedent search across an archive of 10,000 historical engagements becomes a fingerprint compare — sub-second per archive — instead of a manual qualitative review.
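The repo's Pattern P17 defines the actual fingerprint; as an illustrative sketch only (the bucketing scheme below is an assumption, not the SolvNum encoding), a 32-bucket histogram over binary exponents gives a constant-size signature whose compare cost is independent of record length:

```python
import math

def magnitude_fingerprint(values, buckets=32):
    """Constant-size signature: normalized histogram of binary exponents."""
    hist = [0.0] * buckets
    for v in values:
        if v == 0.0:
            b = 0
        else:
            _, e = math.frexp(abs(v))                        # v = m * 2**e
            b = max(0, min(buckets - 1, e + buckets // 2))   # clamp into range
        hist[b] += 1.0
    n = max(1, len(values))
    return [c / n for c in hist]

def distance(fa, fb):
    """L1 distance between fingerprints -- O(buckets), not O(record size)."""
    return sum(abs(a - b) for a, b in zip(fa, fb))

# Toy query and single-entry archive; real archive holds 10,000 entries.
query = magnitude_fingerprint([1.5 ** i for i in range(500)])
archive = {"engagement_0001": magnitude_fingerprint([1.4 ** i for i in range(500)])}
nearest = min(archive.items(), key=lambda kv: distance(query, kv[1]))
```

Because every engagement collapses to the same 32 numbers, scanning the archive is a linear pass over tiny vectors, which is why search time stays in milliseconds.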

C: Compression with explicit error bound

The full engagement record is archived at compressed footprint with explicit per-channel error envelope, retrievable indefinitely. Multi-decade archive replay remains bit-faithful within the documented bound, regardless of what hardware exists at review time.
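The actual SolvNum encoding is not documented on this page; a minimal sketch of the idea, assuming uniform k-bit quantization per channel, shows how the per-value error bound falls out of the encoding parameters rather than being estimated after the fact:

```python
def compress_channel(values, k=12):
    """Quantize a channel to k-bit integers; report the explicit error bound."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    step = span / (2 ** k - 1)
    codes = [round((v - lo) / step) for v in values]
    bound = step / 2                 # worst-case per-value reconstruction error
    return codes, lo, step, bound

def decompress_channel(codes, lo, step):
    return [lo + c * step for c in codes]

data = [0.001 * i * i for i in range(500)]          # toy 500-step channel
codes, lo, step, bound = compress_channel(data, k=12)
restored = decompress_channel(codes, lo, step)
worst = max(abs(a - b) for a, b in zip(data, restored))
assert worst <= bound               # the bound is a guarantee, not an estimate
```

The bound depends only on k and the channel range, so it can be written into the archive record and still holds at review time, decades later, on any hardware.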

Measurable outcome

What we'll claim — and how it survives review

Each line below maps to a captured number in the demo section. Every number is reproducible from the SolvNum validation suite.

  • Engagement replay legally admissible as bit-identical reconstruction, attestable by hash.
  • Precedent search across years of archived engagements completed in seconds via magnitude-fingerprint compare.
  • Storage footprint of the engagement archive reduced ~3.6× via SolvNum-encoded compression with documented error envelope.
  • Multi-decade archive replay remains valid as hardware generations turn over — the SolvNum table is the long-lived artifact.
  • Tamper-evidence is mathematical (hash chain on inputs and outputs), not procedural — closes the chain-of-custody attack surface.
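The hash chain in the last bullet can be sketched in a few lines; the record names here are hypothetical:

```python
import hashlib

def chain(records: list[bytes]) -> list[str]:
    """Each link commits to the record AND the previous link, so editing any
    step invalidates every digest after it -- tamper-evidence by math."""
    links, prev = [], b""
    for rec in records:
        prev = hashlib.sha256(prev + rec).digest()
        links.append(prev.hex())
    return links

original = chain([b"sensor-in", b"track-state", b"roe-check", b"engage-out"])
tampered = chain([b"sensor-in", b"track-EDIT", b"roe-check", b"engage-out"])
# First link matches; every link from the edited step onward differs.
assert original[0] == tampered[0]
assert original[1:] != tampered[1:]
```

Access controls can be misconfigured; a broken chain cannot be argued away, which is what closes the chain-of-custody attack surface.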

The demo

What was tested. How. What the script printed.

Three doctrinally plausible synthetic engagements (contested-airspace intercept, maritime ROE-marginal engagement, counter-UAS engagement). Each engagement is 500 steps. Each is replayed three times on three simulated 'platforms.' An archive of 10,000 synthetic engagements is built once for the precedent search.

Verified: bit-identical cross-platform replay (D), compressed archive footprint (C), and sub-second precedent search via 32-bucket magnitude fingerprint (R).

Live simulation

Animated in-browser simulation of what the demo proves. The numbers underneath are the captured demo output.

Query engagement: contested_intercept

32-bucket band histogram fingerprint

Scanning archive

0 / 10,000 engagements

Top-3 nearest precedents

  1. engagement_2374 (d = 0.002)
  2. engagement_8821 (d = 0.002)
  3. engagement_5102 (d = 0.002)

Precedent search across the full 10,000-engagement archive completes in ~1.5 ms — bounded by the 32-bucket fingerprint compare, not by archive size.

Captured demo output

The numbers the script actually printed.

Cross-platform replay (12-hex SHA-256 prefix)
Engagement             Replay 1      Replay 2      Replay 3      Match
contested_intercept    a041c9e74f3d  a041c9e74f3d  a041c9e74f3d  ✓
maritime_roe_marginal  ea9b62ae5114  ea9b62ae5114  ea9b62ae5114  ✓
counter_uas            46771f5538d7  46771f5538d7  46771f5538d7  ✓
Precedent search across 10,000-engagement archive
Engagement             Search time  Top-3 fingerprint distance
contested_intercept    1.39 ms      0.002, 0.002, 0.002
maritime_roe_marginal  1.42 ms      0.005, 0.005, 0.005
counter_uas            1.52 ms      0.004, 0.004, 0.004

Archive build: 10,000 fingerprints in 2.41 s (4,157 fingerprints/s). Fingerprint storage: 2.5 MiB (constant 32-bucket fingerprint per engagement). Compressed archive: 4.50 MiB at k=12 vs. 16.00 MiB raw → 3.56× reduction.
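The reduction factor is pure arithmetic from the two reported archive sizes:

```python
raw_mib = 16.00          # archive as raw float64
compressed_mib = 4.50    # SolvNum-encoded at k=12
reduction = raw_mib / compressed_mib
print(f"{reduction:.2f}x")  # 3.56x
```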

Evidence pointers

Where the claims live in the repo

These are the files a reviewer should run, read, or grep to re-derive every number on this page.

  • SolvNum cross-platform determinism verification (x86, ARM, WASM, CUDA)
  • SolvNum pattern-validation demo — Pattern P17 magnitude fingerprint
  • SolvNum streaming-compression demo, benchmark suite — compression with documented error
  • SolvNum cross-platform attestation benchmark
  • SolvNum verifiable-computation demo (proposer/verifier pattern)

Want to see this in your environment?

Brief us on a program where this POC matters.

ITAR-aware. Air-gapped delivery available. Every claim above traces back to a script in the public repo.
