
What V1 Gets Wrong


Two months of building. Engine works. Tests pass. Benchmarks look good. Time to talk about what's broken.

Every v1 has problems you don't see until you've run it against real data for a while. Ours has two big ones.

Problem 1: Static Multipliers

Our adversarial model uses fixed stress multipliers:

typescript
const ADVERSARIAL = {
  slippageMultiplier: 2.0,    // always 2x
  gasSurgeMultiplier: 1.5,    // always 1.5x
  bridgeDelayMultiplier: 3.0, // always 3x
  mevExtraction: 0.003,       // always 0.3%
  priceMovement: 0.005,       // always 0.5%
};

These numbers were calibrated against six months of historical bridge data; the 2.0x slippage multiplier covers 93.8% of observed slippage events. Good enough for a starting point.

The problem: Wormhole on Ethereum at 3am UTC is very different from deBridge on Solana during a memecoin launch. Same multipliers applied to both. That's lazy.

Real slippage patterns are chain-specific, bridge-specific, and time-dependent. Allbridge pools on SOL-ETH drain on weekends. LayerZero DVN verification slows down when Ethereum gas is above 50 gwei. deBridge DLN fills get competitive during high volatility because fillers are also trading.

A 2.0x multiplier is too conservative for deBridge-to-Arbitrum at 2am (conditions are stable, fills are fast) and not conservative enough for Allbridge-to-Solana during a network congestion event (real slippage can hit 5x on shallow pools).

What We're Building

Dynamic calibration. Per-chain, per-bridge, per-time-window multiplier adjustment based on actual execution data.

typescript
// v1: static, same for everything
const slippageMultiplier = 2.0;

// v2: dynamic, learned from execution history
const slippageMultiplier = calibrator.getMultiplier({
  bridge: 'debridge',
  fromChain: 'ethereum',
  toChain: 'arbitrum',
  hour: currentHour(),        // time-of-day factor
  gasPrice: currentGasGwei(), // network condition
  poolDepth: currentDepth(),  // liquidity state
});
// returns 1.3 for ETH→ARB via deBridge at 2am (low risk)
// returns 3.8 for SOL→ETH via Allbridge at 3pm during congestion (high risk)

The idea is Bayesian-style: start with our current priors (the static values), then update them as execution data comes in. Every completed transfer tells us something about actual conditions vs quoted conditions. Feed that back into the model.
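
For intuition, here's a minimal sketch of what that update could look like, assuming a simple conjugate-normal model over the observed-vs-quoted slippage ratio. The names and shape are illustrative, not the actual calibrator:

typescript
interface MultiplierEstimate {
  mean: number;      // current multiplier estimate
  precision: number; // 1 / variance; higher = more confident
}

// Start from the static v1 value as the prior.
const prior: MultiplierEstimate = { mean: 2.0, precision: 4 };

// Each completed transfer yields an observed-vs-quoted slippage ratio.
// Fold it in as one noisy observation (conjugate-normal update with a
// known observation precision).
function recordTransfer(
  est: MultiplierEstimate,
  observedRatio: number,
  obsPrecision = 1, // trust per observation; tune from residual variance
): MultiplierEstimate {
  const precision = est.precision + obsPrecision;
  const mean =
    (est.mean * est.precision + observedRatio * obsPrecision) / precision;
  return { mean, precision };
}

// Usage: one estimate per (bridge, fromChain, toChain, hour bucket),
// updated as transfers for that bucket complete.
let ethArbDebridge2am = prior;
ethArbDebridge2am = recordTransfer(ethArbDebridge2am, 1.35);

The static v1 value acts as the prior, and confidence in each per-bucket estimate grows as observations for that bucket accumulate, so sparse chain/bridge/time combinations stay close to the prior until there's real evidence to move them.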

We started collecting execution data two weeks ago. Not enough to retrain yet, but the pipeline is there. Once we have ~500 completed transfers, the model should have enough signal to produce per-bridge multipliers that meaningfully outperform the static ones.

Problem 2: Path Discovery Is Too Conservative

Our path discovery uses a static connectivity graph. We know which chains connect to which bridges, enumerate all combinations up to 3 hops, and evaluate them. If a chain pair isn't in the graph, we don't find it.

text
Current discovery:
  ETH → SOL: direct (4 bridges) + 2-hop via ARB, BASE, OP, MATIC, BSC, AVAX
  = ~50 candidate paths

What we're missing:
  ETH → AVAX → BSC → SOL (3-hop, unusual but sometimes cheapest)
  ETH → [swap to DAI] → [bridge DAI] → SOL → [swap to USDC]
    (intermediate token swaps that reduce bridge costs)

The graph doesn't know about these. Alpha-beta pruning can't find
routes that were never generated as candidates.

Alpha-beta pruning is great at evaluating known candidates efficiently. It's terrible at discovering unknown ones. We're pruning a tree that's too small to begin with.
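
To make that concrete, v1-style discovery is essentially a bounded walk over a fixed adjacency map. The sketch below is illustrative, not our actual pathDiscovery code, and the data shapes are assumptions:

typescript
// Illustrative only: enumerate chain sequences up to maxHops from a
// static connectivity graph. Edges that aren't in the graph can never
// appear in a candidate, no matter how well they would score.

type Chain = string;
type Bridge = string;

// adjacency: fromChain -> toChain -> bridges that connect them
type Graph = Record<Chain, Record<Chain, Bridge[]>>;

function enumeratePaths(
  graph: Graph,
  from: Chain,
  to: Chain,
  maxHops = 3,
): Chain[][] {
  const results: Chain[][] = [];
  const walk = (current: Chain, path: Chain[]) => {
    if (current === to && path.length > 1) {
      results.push(path);
      return;
    }
    if (path.length > maxHops) return; // bound the depth
    for (const next of Object.keys(graph[current] ?? {})) {
      if (path.includes(next)) continue; // no revisiting chains
      walk(next, [...path, next]);
    }
  };
  walk(from, [from]);
  return results;
}

If an edge isn't in the graph, or a step type like a mid-route token swap isn't representable in it, the candidate never exists for alpha-beta to evaluate.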

Real example from last week: a $200K USDC transfer from Ethereum to Solana. Our engine picked deBridge direct. Fine route, good worst-case score. But we later found that swapping USDC to DAI on Ethereum, bridging DAI via Allbridge (which had a deeper DAI pool than USDC at that moment), then swapping back to USDC on Solana would have saved ~$340. The path was never generated because our discovery doesn't consider intermediate token swaps as part of the bridge routing.

What We're Building

Probabilistic path exploration. Instead of only enumerating from a static graph, we add a Monte Carlo-style sampling phase that generates random candidate paths — unusual chain orderings, intermediate token swaps, unconventional bridge combinations.

typescript
// v1: deterministic enumeration from connectivity graph
const candidates = pathDiscovery.enumerate(fromChain, toChain);

// v2: deterministic + probabilistic exploration
const knownPaths = pathDiscovery.enumerate(fromChain, toChain);
const exploredPaths = pathExplorer.sample({
  fromChain,
  toChain,
  samples: 200,                 // random path samples
  maxHops: 4,                   // allow deeper exploration
  allowIntermediateSwaps: true, // try different tokens mid-route
  temperatureDecay: 0.95,       // gradually prefer better-looking paths
});

// merge, deduplicate, then run through minimax as usual
const allCandidates = [...knownPaths, ...exploredPaths];
const scored = minimaxEngine.search(allCandidates);

Most of the sampled paths will be terrible. That's fine — they get pruned immediately by alpha-beta. The 10ms search budget can handle the extra candidates. But occasionally, the sampler finds a path nobody was looking at that actually scores well. Those are free money.
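
One detail worth showing is the deduplication step, since the sampler will rediscover many of the routes the enumerator already produced. A minimal sketch of signature-based dedup, with the candidate shape assumed for illustration:

typescript
// Illustrative candidate shape; the real type carries more fields.
interface CandidatePath {
  steps: Array<{
    kind: 'bridge' | 'swap';
    chain: string; // chain the step executes on
    via: string;   // bridge name or DEX name
    token: string; // token carried out of the step
  }>;
}

// Two candidates are duplicates if they take the same steps in the same
// order, regardless of which phase (enumeration or sampling) found them.
function dedupe(paths: CandidatePath[]): CandidatePath[] {
  const seen = new Set<string>();
  const unique: CandidatePath[] = [];
  for (const p of paths) {
    const signature = p.steps
      .map((s) => `${s.kind}:${s.chain}:${s.via}:${s.token}`)
      .join('|');
    if (!seen.has(signature)) {
      seen.add(signature);
      unique.push(p);
    }
  }
  return unique;
}

Deduplicating before scoring keeps the extra sampled candidates from spending the 10ms budget on routes the enumerator already covered.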

Timeline

Dynamic multiplier calibration: collecting data now, aiming to ship within a few weeks once we hit 500+ completed transfers for the initial training set.

Probabilistic path exploration: implementation is in progress. The sampler itself is straightforward; deduplicating sampled paths and integrating them with the existing search are the parts that need care. Should be ready around the same time.

Both will ship as engine updates. If you're using the SDK, it's a version bump. If you're using the hosted routing, it just gets better.

Why Write This

We could have kept quiet about the limitations and shipped the improvements silently. But if someone is using the engine for real transfers, they should know where the edge cases are. Static multipliers work fine for 90% of transfers under $50K. For large transfers, unusual chain pairs, or volatile conditions, the current v1 model has blind spots.

Knowing your tool's limits is more useful than pretending it has none.