Five capabilities. Five POCs. One pricing engine.
Every finance POC on this site is built on some combination of five first-order capabilities of LRDE and SolvNum. This page is the technical reference: what each capability is, why it matters to quants and model-risk teams, and which POCs lead on it.
PDE Pricing Without Time-Stepping
LRDE computes Black-Scholes PDE solutions as a single contour integral in the Laplace domain. No time-stepping. Cost is independent of maturity and stiffness.
How it works
Conventional options-pricing libraries reduce the Black-Scholes PDE to a linear ODE system dV/dτ = A·V + g(τ) via method-of-lines discretization on a log-S grid, then solve it by marching through hundreds of small time steps: Crank-Nicolson, BDF, or ADI.
LRDE computes V(T) as a single contour integral in the Laplace domain via the Talbot method. One LU factorization of (sₖI − A) is reused across every strike at the same vol. Cost is dominated by a small fixed number of LU solves (N=32 quadrature points), not by the number of time steps.
This is published academic mathematics (Talbot 1979, Weideman & Trefethen 2006). The innovation is not the contour method itself but the productization: amortized LU reuse across strikes, maturities, and scenarios, packaged as a drop-in replacement for the pricing inner loop.
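The contour step can be sketched in a few lines. This is a minimal illustration of the Abate–Valkó fixed-Talbot rule applied to the homogeneous case F(s) = (sI − A)⁻¹·v₀, using dense solves on a small stiff test system; it is not LRDE's implementation, and the test matrix is illustrative:

```python
import numpy as np
from scipy.linalg import expm

def talbot_expm_action(A, v0, t, N=32):
    """Approximate V(t) = expm(A*t) @ v0 by fixed-Talbot inversion of
    F(s) = (sI - A)^{-1} v0: N linear solves, no time-stepping."""
    n = A.shape[0]
    r = 2.0 * N / (5.0 * t)  # Talbot contour scale (Abate-Valko choice)
    # midpoint node theta = 0 reduces to the real point s = r
    V = 0.5 * np.exp(r * t) * np.linalg.solve(r * np.eye(n) - A, v0)
    for k in range(1, N):
        theta = k * np.pi / N
        cot = 1.0 / np.tan(theta)
        s = r * theta * (cot + 1j)                 # contour point s(theta)
        sigma = theta + (theta * cot - 1.0) * cot  # trapezoid weight term
        Fk = np.linalg.solve(s * np.eye(n) - A, v0.astype(complex))
        V = V + (np.exp(s * t) * (1.0 + 1j * sigma) * Fk).real
    return (r / N) * V

# stiff 2x2 test system: eigenvalues -1 and -1000, five-year horizon is
# no more expensive than a short one -- same N solves either way
A = np.array([[-1.0, 1.0], [0.0, -1000.0]])
v0 = np.array([1.0, 1.0])
V_talbot = talbot_expm_action(A, v0, t=1.0, N=32)
V_ref = expm(A * 1.0) @ v0
```

Each loop iteration is one linear solve against (sₖI − A); in production those solves become reused LU substitutions, which is where the amortization below comes from.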
Why it matters
- 5–40× faster than scipy BDF across four hedge-fund benchmark workloads, with sub-basis-point agreement. The speedup compounds with book size because LU amortization is structural.
- Cost is independent of maturity: a 5-year option costs the same as a 1-month option. Cost is independent of stiffness: high-vol, fine-grid problems don’t slow down.
- The honest caveat: against a hand-tuned in-house Crank-Nicolson kernel, LRDE is 1.5–2× faster on a single PDE solve. The structural wins (26–41× on vol surfaces, 40× on Greek refresh) show up when LU reuse kicks in across many right-hand sides.
Validation status
PASS — five benchmark configurations (single vanilla, vol surface, Greeks, overnight fan-out, SolvNum overlay). Every LRDE price matches scipy BDF to sub-basis-point precision. SHA-256 hashes of every price array enable bit-identical verification.
LU Amortization Across Scenarios
For a single σ, the BS operator A is the same for every strike at every maturity. LRDE pre-factors once and reuses across thousands of right-hand sides — a structural advantage no time-stepper can access.
How it works
In the Laplace domain, solving for a new (K, T) pair requires only a forward/back substitution against an existing LU factorization — not a new factorization. The LU from one strike at a given vol is valid for every strike at that vol.
This means a 6,000-cell vol surface does not cost 6,000× a single solve. It costs 1 LU factorization + 6,000 cheap triangular solves. The ratio improves as book size grows.
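The pattern is ordinary LAPACK reuse: factor once, substitute many times. A schematic sketch with an illustrative grid and a stand-in operator (not LRDE's internals):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 200
grid = np.linspace(10.0, 200.0, n)      # spot grid (illustrative)
# stand-in for the discretized BS operator at one vol bucket
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

s = 4.0 + 2.0j                          # one Laplace quadrature node
lu, piv = lu_factor(s * np.eye(n) - A)  # factor ONCE per (node, vol)

strikes = np.linspace(50.0, 150.0, 1000)
# each new strike changes only the right-hand side (the payoff vector):
# O(n^2) triangular substitutions, never a fresh O(n^3) factorization
sols = [lu_solve((lu, piv), np.maximum(grid - K, 0.0).astype(complex))
        for K in strikes]
```

The 1,000-strike loop above touches the factorization read-only, which is also why the bumped Greek scenarios and same-bucket fan-out scenarios described below come almost for free.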
Production-mode bucketing (σ to 0.5 vol pts, r to 25 bps) further reduces the number of distinct LU factorizations by grouping scenarios into buckets that share the same operator. This introduces a controlled sub-1% NPV approximation well within model-risk tolerance.
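Bucketing is quantization of the operator's parameters. A toy sketch; the 0.5-vol-pt width comes from the text above, while the function name and sample vols are illustrative:

```python
def vol_bucket(sigma: float, width: float = 0.005) -> float:
    """Snap sigma to a 0.5-vol-pt grid so every scenario in the same
    bucket shares one operator, hence one LU factorization."""
    return round(sigma / width) * width

sigmas = [0.1812, 0.1834, 0.1851, 0.2496]   # four scenario vols
buckets = {vol_bucket(s) for s in sigmas}   # -> three distinct operators
```

Four scenarios collapse to three factorizations here; on a 200-scenario overnight fan-out the collapse is far stronger, which is the structural cheapness claimed below.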
Why it matters
- Vol surface revaluation: 26–41× speedup on benchmark configurations, driven entirely by LU reuse across strikes.
- Greek refresh: 9 full book revaluations (base + 8 bumped scenarios) in 0.72 s vs 28.7 s — because the bumped scenarios share the same LU as the base.
- Overnight risk fan-out: 200 stochastic scenarios processed at 22–30 scenarios/sec vs 2.6 with BDF. The fan-out is structurally cheaper because most scenarios map to the same vol bucket.
Validation status
PASS — measured throughput improvement from LU reuse verified on vol surface (26–41×), Greek refresh (40×), and overnight fan-out (8.5–12×). Agreement vs BDF maintained to 2.9 × 10⁻⁵ per trade PV.
Cross-Platform Deterministic Arithmetic
SolvNum produces bit-identical arithmetic across x86, ARM, GPU, and WebAssembly. One SHA-256 receipt proves every desk agrees.
How it works
Floating-point math disagrees across machines because different CPUs round differently, different math libraries approximate transcendentals differently, and different compilers reorder operations. Across hundreds of steps those gaps compound into visibly different answers.
SolvNum uses a deterministic arithmetic representation and fixed lookup tables. Every CPU performs the same operations on the same data the same way. The hash of the output is the proof.
For finance: the front-office pricer, the risk engine, the back-office settlement system, and the regulatory report all produce the same bits. A model-risk officer can re-derive on their own hardware and verify the SHA-256 match.
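The receipt idea can be illustrated in a few lines. This is a toy sketch, not SolvNum's encoding: SolvNum's point is that the computed bits themselves agree across platforms, and the hash merely attests it.

```python
import hashlib
import numpy as np

def receipt(arr: np.ndarray) -> str:
    """SHA-256 over the exact bytes of an array: a portable proof of
    equality. Two runs agree iff their receipts match -- no tolerance
    window, no reconciliation argument."""
    # pin dtype and byte order so the hash depends only on the values
    canon = np.ascontiguousarray(arr, dtype="<f8")
    return hashlib.sha256(canon.tobytes()).hexdigest()

prices = np.array([101.25, 99.875, 100.5])
print(receipt(prices))  # the model-risk officer re-derives and compares
```

Any single-ulp drift anywhere in the pipeline flips the hash, which is exactly why deterministic arithmetic is a precondition for this kind of one-hash verification.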
Why it matters
- Cross-desk reconciliation. Front office, risk, settlement, and compliance agree by construction — not by post-hoc tolerance windows.
- Regulatory reproducibility. Basel III/IV, FRTB IMA, and SR 11-7 all benefit from deterministic replay: the approved model, the production run, and the audit replay produce the same bits.
- Model validation. “Run it on my machine and show me the same answer” becomes a one-hash check instead of a tolerance argument.
Validation status
PASS — SHA-256 match verified across x86_64, ARM, CUDA, WASM on SolvNum-encoded pricing artifacts from all five POCs.
Compression with Explicit Error Bound
SolvNum is also a tunable lossy compressor with an explicit, per-value relative-error bound. Decompression runs anywhere and costs next to nothing.
How it works
The k parameter controls how many bits of precision you keep, and every value carries an explicit worst-case relative-error bound. At k=12 (the default), the measured median relative error is ~4×10⁻⁵ (sub-cent on a $100 spot). Compression ratios range from 3.8× to 4.9× depending on the artifact.
Three properties distinguish this from gzip or float16: (1) the error bound is per-value and exact, not statistical; (2) no dictionary or state — each chunk compresses independently; (3) decompression is lightweight and runs on any platform.
Encode/decode overhead: 0.2–5.4 ms per artifact at production sizes — negligible compared to the LRDE pricing time.
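The per-value bound property is easiest to see in a toy mantissa-truncation scheme. This is not SolvNum's format; it only demonstrates what an exact, non-statistical per-value bound means: keep k mantissa bits and the relative error of every single value is provably below 2⁻ᵏ.

```python
import numpy as np

def truncate_mantissa(x: np.ndarray, k: int = 12) -> np.ndarray:
    """Toy lossy codec: zero all but the top k mantissa bits of each
    float64. Guarantees per-value relative error < 2**-k (a hard bound
    on every value, not a statistical average)."""
    bits = x.view(np.uint64)
    # mask keeping sign, exponent, and the top k of 52 mantissa bits
    mask = np.uint64(0xFFFFFFFFFFFFFFFF ^ ((1 << (52 - k)) - 1))
    return (bits & mask).view(np.float64)

x = np.random.default_rng(0).uniform(50.0, 150.0, 10_000)
y = truncate_mantissa(x, k=12)
rel_err = np.abs((y - x) / x)   # max is provably below 2**-12
```

The zeroed low bits are what a downstream entropy coder compresses away; gzip and float16 offer no such per-value guarantee.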
Why it matters
- Storage efficiency. A nightly 5,000-scenario PV cube is ~4 GB raw; SolvNum at k=12 reduces it to ~1 GB with an auditable error bound on every value. At ~3 GB saved per night, a 7-year retention window saves roughly 7.7 TB per cube, and multiples of that across books and desks.
- Wire efficiency. Compressed pricing artifacts travel to downstream consumers (risk consolidator, compliance, regulators) 4× smaller, with the error bound in the header.
- Audit portability. Compressed artifacts carry a SHA-256 receipt that re-derives bit-identically on any platform. The hash either matches or it doesn’t — a mismatch means the inputs differ, not the math.
Validation status
PASS — measured on POC 05: SolvNum k=12 compresses six LRDE artifact types 3.8–4.9× with 4×10⁻⁵ median relative error. SHA-256 cross-platform match verified.
Evidence-Based Solver Selection
SolvScout fingerprints your ODE/PDE system. SolvTune benchmarks every candidate solver on it. SolvBench archives the decision. “Why this solver?” has a defensible answer.
How it works
SolvScout characterizes the stiffness, sparsity, and dimensional structure of a given ODE/PDE system, producing a system fingerprint that classifies which solver families are appropriate.
SolvTune runs a ranked benchmark of every candidate solver on the actual system — dead zones called out, not hidden. The output is a comparison report suitable for model-risk review.
SolvBench archives the system fingerprint, benchmark results, and final solver choice in an encrypted, replayable profile. Months later, a model-risk officer can re-run the benchmark and verify the decision.
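The fingerprint step can be sketched with a toy stand-in for SolvScout, assuming a dense Jacobian is available; the field names and the stiffness rule of thumb are illustrative, not SolvScout's actual schema:

```python
import numpy as np

def fingerprint(jacobian: np.ndarray) -> dict:
    """Toy system fingerprint: stiffness ratio and sparsity from the
    Jacobian, used to shortlist candidate solver families."""
    eigs = np.linalg.eigvals(jacobian)
    re = np.abs(eigs.real)
    re = re[re > 1e-12]                 # ignore near-zero modes
    stiffness = re.max() / re.min() if re.size else 1.0
    return {
        "n": jacobian.shape[0],
        "stiffness_ratio": float(stiffness),
        "sparsity": float(np.mean(jacobian == 0.0)),
        "suggest_implicit": bool(stiffness > 1e3),  # rule of thumb
    }

J = np.diag([-1.0, -10.0, -1e6])        # classic stiff test Jacobian
fp = fingerprint(J)                     # flags an implicit-family solver
```

A dict like this, stored alongside the benchmark results and the final choice, is the kind of replayable artifact the SolvBench archive step describes.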
Why it matters
- SR 11-7 compliance. Model risk management requires knowing why a particular solver was chosen and whether it is appropriate. The SolvScout→SolvTune→SolvBench pipeline produces the evidence trail.
- Eliminates institutional inertia. The solver choice stops being "we picked the one the senior quant used in 2012" and becomes a documented decision that survives model-validation review.
- Dead-zone transparency. SolvTune explicitly reports where each solver fails — parameter ranges, stiffness regimes, convergence failures — so the model-risk team knows what they’re accepting.
Validation status
Tooling validated on 500+ ODE/PDE systems across six domains (finance, aerospace, bio, energy, defense, telecom). System fingerprints cross-validated against known stiffness classifications.
How they reinforce each other
Pair-wise and full-stack benefits
The pair-wise compounds are where most of the value lives. The full-stack combination — LRDE + LU amortization + SolvNum determinism + compression + solver selection — is the complete audit-ready pricing substrate. (Key: L = Laplace pricing, A = LU amortization, D = deterministic arithmetic, C = compression, S = solver selection.)
L + A
Laplace pricing plus LU amortization — the core structural advantage. Single-solve speed (5–7×) compounds to 26–41× on production-size books because the LU is reused across thousands of right-hand sides.
L + D
Fast pricing plus deterministic arithmetic — every desk gets the same price at the same speed. No reconciliation breaks from numerical disagreement.
L + C
Fast pricing plus compression — the nightly batch runs 40× faster and the outputs are 4× smaller, with an auditable error bound on every value.
D + C
Deterministic compressed artifacts — the SHA-256 receipt is valid on every platform. The combination is the audit artifact: portable, compact, and verifiable.
A + C
LU-amortized pricing plus compression — the entire overnight risk batch (price, compress, sign) completes in 12 seconds on a laptop. The outputs are regulatory-ready.
L + A + D + C + S
The full finance substrate: LRDE prices, LU amortization scales it, SolvNum makes outputs portable and verifiable, SolvScout/Tune/Bench documents why this solver was chosen. End to end, attestable by one hash, defensible at model validation.