Model-Free Anomaly Detection on Sensor Streams
A 2+ band jump in any channel is, by definition, a regime change. No model. No training data. No retuning.
- 100% — scale-discontinuity recall, all 3 regimes
- 0 — scale-discontinuity false positives per million samples
- 0% — CUSUM recall in maritime regime (tuned for urban)
- 1234 — CUSUM false positives per million in contested regime
The scenario
Set the picture
A SIGINT collection platform processes a 100-channel RF stream. Adversary EW activity — a new jammer turning on, a spoofed emitter, a sudden change in noise floor on a small subset of channels — manifests as sudden order-of-magnitude shifts in receive power, pulse density, or noise statistics on a few channels at a time.
The same operational pattern shows up across the defense sensor stack: sudden regime shifts in a small subset of high-rate channels embedded in a much larger stream. Acoustic on a sonar array. Magnetic on a MAD boom. Optical on a wide-field staring array. Vibration on an aircraft engine bus. Each one wants the same answer: something just changed; what?
What it costs today
The state-of-practice anomaly detector is an ML classifier (or, on older systems, a hand-tuned rolling-z-score / CUSUM threshold). ML classifiers trained on one operational environment generate floods of false positives in a different one — different geography, different season, different mix of friendly and adversarial emitters. False-positive rates of 10–100× the training-time number are routine.
When false positives spike, operators are trained to ignore alerts. The detector's effective recall drops to near zero exactly when conditions change — which is exactly when you want it to fire. Updating the ML classifier requires labeled data from the new environment, an offline training run, model validation, and re-deployment. For classified systems, the full cycle is months. The system is operationally degraded for the entire interval.
Hand-tuned z-score / CUSUM thresholds are brittle to baseline drift, require per-channel tuning, and fail silently when their assumptions break. The better-performing ML architectures don't fit the bandwidth and compute constraints at the edge.
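To make the tuning burden concrete, here is a minimal rolling z-score detector — a generic sketch, not any fielded system's code. The `window` and `threshold` constants are exactly the per-channel knobs that are tuned against one regime's noise statistics and silently invalidated by another's.

```python
from collections import deque
import statistics

def z_score_detector(samples, window=50, threshold=6.0):
    """Flag samples more than `threshold` std-devs from the rolling mean.
    Both `window` and `threshold` are fit to one regime's baseline noise;
    a regime with a different noise floor breaks them without warning."""
    buf = deque(maxlen=window)
    hits = []
    for i, x in enumerate(samples):
        if len(buf) >= 2:
            mu = statistics.fmean(buf)
            sigma = statistics.pstdev(buf) or 1e-12  # guard flat baselines
            if abs(x - mu) / sigma > threshold:
                hits.append(i)
        buf.append(x)
    return hits

# Tuned on a quiet baseline, a 10x jump fires cleanly...
quiet = [1.0] * 60 + [10.0]
print(z_score_detector(quiet))   # → [60]

# ...but the same absolute jump in a noisier regime stays under threshold.
noisy = [0.0, 4.0] * 30 + [13.0]
print(z_score_detector(noisy))   # → [] — missed
```

The miss in the second stream is the failure mode described above: nothing errors, the detector simply stops firing.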
What changes with SolvNum
Every channel sample is stored as a SolvNum value. The scale tier — the order-of-magnitude band — is a built-in field, available without computing log(). The detection rule is one integer comparison per sample.
If the current sample's scale tier differs from the rolling-baseline scale tier by 2 or more, this channel just experienced a 4× or larger discontinuity. By definition: a regime change. Zero training data. Zero model. Zero distribution-drift retuning. The detector is one line of integer logic. It runs on the sensor's signal-processing hardware without a dedicated ML accelerator. It cannot become stale, because it has no parameters.
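The rule can be sketched in a few lines of Python, under one loud assumption: SolvNum stores the scale tier as a ready-made integer field, so `math.frexp` below is only a stand-in that derives an octave band from a plain float. `tier`, `detect`, and the window constant are illustrative names, not the SolvNum API.

```python
import math

BAND_THRESHOLD = 2  # a 2+ band jump is a 4x or larger discontinuity

def tier(x: float) -> int:
    """Octave band of |x| — stand-in for SolvNum's stored scale-tier field."""
    return math.frexp(abs(x))[1]  # binary exponent: tier(4.0) == 3, tier(8.0) == 4

def detect(samples, baseline_window=8):
    """Yield indices where the sample's tier differs from the
    rolling-baseline tier (median of recent tiers) by BAND_THRESHOLD+."""
    history = []
    for i, x in enumerate(samples):
        t = tier(x)
        if history:
            baseline = sorted(history)[len(history) // 2]  # median tier
            if abs(t - baseline) >= BAND_THRESHOLD:        # one integer compare
                yield i
        history.append(t)
        if len(history) > baseline_window:
            history.pop(0)

# Steady ~1.0 signal, then an 8x jump at index 6:
stream = [1.0, 1.1, 0.9, 1.2, 1.0, 1.1, 8.5, 8.3]
print(list(detect(stream)))  # → [6, 7]: the onset, then again until the baseline adapts
```

The per-sample work is one tier lookup and one integer comparison; there are no learned parameters to go stale.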
Measurable outcome
What we'll claim — and how it survives review
Each line below maps to a captured number in the demo section. Every number is reproducible from the SolvNum validation suite.
- Ship-day-one detection capability against EW spoofing / jamming onset, sensor faults, and equipment failure — no training data required.
- Zero distribution-drift maintenance burden over the operational lifetime of the system.
- Detection latency at the sensor's sample rate — the rule is a single integer comparison per sample.
- Footprint small enough to deploy on the radio's signal-processing chip without dedicated ML compute.
- A single deterministic rule auditable by the operator and by the certification authority. No 'explain why the model fired' problem.
The demo
What was tested. How. What the script printed.
100-channel synthetic RF stream, 2,000 samples per channel, with embedded jammer onsets, sensor faults, and equipment failures (≥ 8× = 3+ band jumps) injected at random times across random subsets of channels in three baseline regimes (urban, maritime, contested/jammed).
Three detectors run in parallel: a rolling z-score detector and a CUSUM detector, both tuned for the urban regime; and the SolvNum scale-discontinuity detector, tuned for no regime at all. Measured: detection recall and false positives per million samples.
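Both reported metrics reduce to counting detections against the injected ground truth. A hypothetical sketch of that scoring (the `score` function and its `tolerance` parameter are illustrative, not the validation suite's API):

```python
def score(detections, truth_onsets, n_samples, tolerance=5):
    """Recall: fraction of injected onsets with a detection within
    `tolerance` samples. FP/Msamp: detections matching no onset,
    normalized to a per-million-samples rate."""
    matched = {t for t in truth_onsets
               if any(abs(d - t) <= tolerance for d in detections)}
    false_pos = [d for d in detections
                 if all(abs(d - t) > tolerance for t in truth_onsets)]
    recall = len(matched) / len(truth_onsets) if truth_onsets else 1.0
    fp_per_msamp = 1e6 * len(false_pos) / n_samples
    return recall, fp_per_msamp

# One injected onset at sample 100, detections at 100, 101, and a stray at 500,
# over 200,000 total samples:
print(score([100, 101, 500], [100], 200_000))  # → (1.0, 5.0)
```

A table row like "100.0% | 49 / 49 | 0.0" is this computation over one regime's channels.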
Live simulation
Animated in-browser simulation of what the demo proves. The numbers underneath are the captured demo output.
Sensor stream — two detectors, same data
SolvNum detector
z-score detector
SolvNum fires only on real threats (solid green circles with checkmarks). The z-score detector fires on threats but also on regime shifts, where its rolling baseline is briefly miscalibrated; the dashed circles marked ! are the false alarms that train operators to ignore alerts.
Captured demo output
The numbers the script actually printed.
| Regime | Detector | Recall | Detected / injected | FP / Msamp |
|---|---|---|---|---|
| urban | z_score | 100.0% | 49 / 49 | 376.1 |
| urban | cusum | 100.0% | 49 / 49 | 0.0 |
| urban | scale_disc | 100.0% | 49 / 49 | 0.0 |
| maritime | z_score | 100.0% | 50 / 50 | 441.3 |
| maritime | cusum | 0.0% | 0 / 50 | 0.0 |
| maritime | scale_disc | 100.0% | 50 / 50 | 0.0 |
| contested | z_score | 100.0% | 50 / 50 | 612.0 |
| contested | cusum | 100.0% | 50 / 50 | 1234.0 |
| contested | scale_disc | 100.0% | 50 / 50 | 0.0 |
Cross-regime stability (worst case across the three regimes):
- scale_disc: min recall 100.0%, max FP/Msamp 0.0
- z_score: min recall 100.0%, max FP/Msamp 612.0
- cusum: min recall 0.0%, max FP/Msamp 1234.0
Composes with
Where this POC sits in the substrate
Every POC reinforces — and is reinforced by — others. The cross-links below show how each piece locks into the larger picture.
Counter-Spoofing PNT in GPS-Denied / Jammed Environments
Counter-Spoofing PNT uses scale-discontinuity detection across PNT sources to flag GPS spoofing.
Distributed Sensor Fusion Across a Ship Squadron
Distributed Sensor Fusion uses the scale tier as the consistent integer threat-tier classifier across nodes.
Multi-Platform Drone Swarm with Provable Safety
Drone Swarm Safety uses scale-tier classification to share threat priorities across platforms without a consensus protocol.
Cross-Platform Engagement Replay for Review
Engagement Replay uses magnitude-fingerprint analysis for precedent search across archived engagements.
Evidence pointers
Where the claims live in the repo
These are the files a reviewer should run, read, or grep to re-derive every number on this page.
- SolvNum magnitude-classification demo — scale-based LSH, anomaly detection
- SolvNum battery-knee demo — scale-transition detector beats CUSUM (Severson et al. method)
- SolvNum pattern-validation demo — Pattern P17 magnitude fingerprint
- SolvNum core — scale-aware classification primitive
Want to see this in your environment?
Brief us on a program where this POC matters.
ITAR-aware. Air-gapped delivery available. Every claim above traces back to a script in the public repo.