
Block-level attack surfaces

Censorship and inclusion attacks

An attacker can report a bad price, censor all disputes for the settlement time, and settle the bad price, draining any external notional that depended on it.

Censorship context

Assume all block producers are accepting bribes, and if the bribe is large enough, they will not include dispute transactions. Assume zero altruistic block producers, perfect bribery infrastructure with 100% participation, and zero slashing. Alternatively, assume ePBS is live, which makes these conditions realistic. The attacker submits a bad price and tries to censor all disputes with bribe payments for the settlement time.

Block count vs. unique producers

The formulas below assume per-block bribery and solve for an oracle game settlement time T that enforces costs on an attacker, where each block is an independent bribe event (e.g. via ePBS). More generally, the settlement window must contain enough independent bribe events to satisfy the grief ratio. What counts as an independent event depends on the bribery infrastructure:
  • Per-block bribery (ePBS flow): each block is a separate bribe event, so T in blocks is correct as stated below.
  • No strong commitment infrastructure: a producer controlling non-contiguous streaks (e.g. AAAA BB AAAA) must be re-bribed per streak, so the settlement window must contain enough runs to satisfy the grief ratio.
  • Credible long-horizon bribery commitments: a producer is bribed once for all their blocks regardless of streaks, so the settlement window must contain enough unique producers.
For a single-sequencer L2, under the unique-entity model, independent events are capped at 1 regardless of how long the settlement window is. If the required number of events exceeds 1, no settlement window can satisfy it — this is exactly the L2 trust ceiling described below.
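The three event-counting models above can be made concrete. This is a minimal sketch, assuming the producer schedule for the settlement window is known; the `bribe_events` helper and the example schedule are illustrative, not part of any protocol.

```python
# Sketch: counting independent bribe events in a settlement window under the
# three bribery-infrastructure models described above.

def bribe_events(producers, model):
    """producers: list of producer ids, one per block in the settlement window."""
    if not producers:
        return 0
    if model == "per_block":          # ePBS flow: every block is a separate event
        return len(producers)
    if model == "per_run":            # no commitments: one bribe per contiguous streak
        runs = 1
        for prev, cur in zip(producers, producers[1:]):
            if cur != prev:
                runs += 1
        return runs
    if model == "unique_entity":      # credible long-horizon commitments
        return len(set(producers))
    raise ValueError(model)

window = list("AAAABBAAAACB")         # producer A holds two non-contiguous streaks
print(bribe_events(window, "per_block"))      # 12
print(bribe_events(window, "per_run"))        # 5  (AAAA, BB, AAAA, C, B)
print(bribe_events(window, "unique_entity"))  # 3  (A, B, C)
```

A single-sequencer L2 is the degenerate case: the schedule is one repeated id, so the unique-entity count is 1 no matter how long the window is.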

Linear notionals

The attacker has an external notional N, submits a price with error R against oracle initial liquidity L, and censors disputes for T blocks. The profit and cost of the attack are:
\text{Profit} = R \cdot N
\text{Cost} = T \cdot (R - \kappa) \cdot L
where κ is the cost to report (from Oracle Accuracy & Cost, κ ≈ 0.65 · M · σ(T) under F_s = 0 and F_p = 0) and R > κ (otherwise disputes are unprofitable and censorship is free). Each block, the attacker must bribe at least the net dispute profit (R − κ) · L, the amount a block producer would earn by including a dispute. For a desired grief ratio G (where the attacker loses G dollars for every dollar extracted), set Cost = (G + 1) · Profit:
T \cdot (R - \kappa) \cdot L = (G + 1) \cdot R \cdot N
Solving for T with I = L / N:
T = \frac{(G + 1) \cdot R}{(R - \kappa) \cdot I}
For example, consider a $1 million perpetual long with a price settled by an openOracle initial report with $100k of initial liquidity in each token, and for simplification assume 0 cost to report. If an attacker submits a price 5% off, they must pay >$5k per block since that is the dispute profit available to a block producer who includes a dispute. The attacker extracts 5% on $1 million = $50k. If the settlement time is >10 blocks, the extraction is strictly less than the bribery cost.
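The bound can be sketched numerically. This mirrors the worked example above ($1M notional, $100k initial liquidity, 5% error, zero reporting cost, grief ratio G = 0); the function name is illustrative.

```python
# Sketch of the linear-notional settlement-time bound:
# T = (G + 1) * R / ((R - kappa) * I), with I = L / N.

def required_settlement_blocks(G, R, kappa, I):
    """Blocks needed so censorship cost reaches (G + 1) * profit."""
    assert R > kappa, "disputes must be profitable for censorship to cost anything"
    return (G + 1) * R / ((R - kappa) * I)

N, L = 1_000_000, 100_000
T = required_settlement_blocks(G=0, R=0.05, kappa=0.0, I=L / N)
print(T)  # 10.0 -> beyond 10 blocks, bribery cost strictly exceeds the $50k extraction
```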

Binary notionals

For binary notionals, the internal oracle price error must be the distance from the true price P to the binary threshold K. Here, we assume the attacker’s profit is N. Define the required price error as R = |P − K| / P. The cost of the attack is the same:
\text{Profit} = N
\text{Cost} = T \cdot (R - \kappa) \cdot L
Setting Cost = (G + 1) · Profit:
T \cdot (R - \kappa) \cdot L = (G + 1) \cdot N
Solving for T with I = L / N:
T = \frac{G + 1}{(R - \kappa) \cdot I}
In a liquidation setup, risk increases when the configured settlement time T_set is less than the required safe time T_safe = (G + 1) / ((R − κ) · I).
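A minimal sketch of the safe-time check, with illustrative values I am assuming for the example (a $2,000 price, an $1,800 liquidation threshold, κ from the 0.65 · M · σ(T) approximation, L/N = 10%, G = 1):

```python
# Sketch of the binary-notional safe settlement time:
# T_safe = (G + 1) / ((R - kappa) * I), with R = |P - K| / P.

def safe_settlement_blocks(G, P, K, kappa, I):
    R = abs(P - K) / P            # price error needed to flip the binary outcome
    assert R > kappa
    return (G + 1) / ((R - kappa) * I)

# kappa = 0.65 * 1.1 * 0.02 ~ 1.43%; the threshold is 10% away from the price.
T_safe = safe_settlement_blocks(G=1, P=2000.0, K=1800.0, kappa=0.0143, I=0.10)
print(round(T_safe))  # 233 blocks; configuring T_set below this raises risk
```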

L2 sequencer trust

On L2s, a sequencer is generally trusted for transaction inclusion. This places a hard limit on the amount the oracle can secure without better inclusion guarantees. If the amount at stake exceeds what the sequencer can be relied upon for, the oracle can migrate to L1.

Parasitic open interest under censorship

External notional can latch onto a report as parasitic open interest, increasing N and decreasing I, which increases the required settlement time for a given grief ratio. However, in a censorship bribery context, parasitic open interest is unstable. If there is standing liquidity willing to latch onto any report that fits within its parameters, an attacker (the original non-parasitic counterparty) can add their own notional to the same report. Anyone who latched on blows up, unfortunately taking the original passive oracle dependent with it. Standing liquidity is therefore economically unstable and likely to be selected out over time before the oracle user submits a price request. Contrived oracle games can also take advantage of parasitic open interest: as soon as there is enough standing liquidity, the payoffs are added together and a game can be contrived to exploit them.

Block stuffing attacks

Anyone can attempt a block stuffing attack. By spamming high-fee transactions to inflate gas costs, an attacker can report a bad price and make it uneconomic for honest disputers to respond, unless those disputers are economically aligned to the oracle output. Assume one attacker whose counterparty is completely passive, and assume zero swap or protocol fees in the oracle game. The attacker wants to sustain a mispricing for the settlement time. The gas fee per dispute must exceed the dispute profit R · L, where R is the percentage oracle price error and L is the oracle liquidity. For simplicity, we omit the cost of reporting from the dispute economics. The attacker’s total cost is:
\text{Cost} = R \cdot L \cdot \frac{Gas_{\text{block}}}{Gas_{\text{dispute}}} \cdot T
where Gas_block is the block gas limit, Gas_dispute is the gas per dispute transaction, and T is the settlement time in blocks. On EIP-1559 chains, block stuffing escalates the base fee exponentially at +12.5% per block, and the base fee is burned. Now assume the attacker raises the base fee just past the point where it prices out disputers, then alternates stuffing blocks to stabilize it. Ignoring ramp-up cost and assuming the attacker stuffs every other block:
\text{Cost}_{\text{alt}} = R \cdot L \cdot \frac{Gas_{\text{block}}}{Gas_{\text{dispute}}} \cdot \frac{T}{2}
Assume the attacker earns R · N on a linear external notional. The attack is profitable when:
R \cdot N > \text{Cost}_{\text{alt}}
Simplifying, the attack is unprofitable when:
L > \frac{2 \cdot N \cdot Gas_{\text{dispute}}}{Gas_{\text{block}} \cdot T}
For example, on Ethereum L1 with a block gas limit of 60 million and dispute gas of 250k, there are 240 disputes per block to price out. For a linear notional of $1 million and 15 blocks of settlement time, the required initial liquidity is only ~$550. Increasing liquidity increases the attacker’s grief ratio linearly. If real gas fees are already near the point of pricing out disputes, the alternating attack gets cheaper. The initial liquidity should already be high enough that gas spikes do not impact accuracy beyond what is tolerable. At $50k initial liquidity, for example, the cost to compensate the initial reporter is cheap relative to the $1 million notional.

An attacker who is also a block producer can censor disputes on their own blocks and combine this with alternating stuffing on blocks they don’t control. If the attacker controls some fraction C of blocks, parameters should scale like L / (1 − C) or T / (1 − C), or a combination.

If we include the cost of reporting in the dispute economics, the net dispute profit becomes (R − 0.65 · M · σ(T)) · L (from Oracle Accuracy & Cost), and the unprofitable condition becomes:
L > \frac{2 \cdot R \cdot N \cdot Gas_{\text{dispute}}}{(R - 0.65 \cdot M \cdot \sigma(T)) \cdot Gas_{\text{block}} \cdot T}
For example, with R = 5%, M = 1.1, σ(T) = 2%, the reporter cost is 0.65 · 1.1 · 2% ≈ 1.43%, and the required liquidity for the same Ethereum L1 parameters rises from ~$550 to ~$780. As R approaches the cost of reporting 0.65 · M · σ(T), the required liquidity approaches and then exceeds the external notional. However, at that point the price error is so small that the barrier game from Linear notional manipulation comes back into play, and it becomes harder for the block stuffing attack to settle. The higher initial liquidity fractions suggested by the linear notional manipulation section also make this strategy untenable. For example, the required liquidity to defend against a price error 10% wider than the honest dispute barriers is:
R = 1.1 \cdot 0.65 \cdot M \cdot \sigma(T) = 1.1 \cdot 0.65 \cdot 1.1 \cdot 0.02 \approx 1.57\%
L > \frac{2 \cdot 0.0157 \cdot 1{,}000{,}000 \cdot 250{,}000}{0.00143 \cdot 60{,}000{,}000 \cdot 15} \approx \$6{,}112
This is still much smaller than L/N = 10% (L = $100k), even at a price error just 10% past the reporter cost. This also assumes the price does not move during the settlement time, which is maximally favorable to the attacker.

For binary notionals, the required price error may be very large. For liquidations, for example, the distance from the current price to the liquidation threshold likely renders block stuffing very expensive relative to the fixed liquidation penalty for any sensible initial liquidity, so long as the borrower maintains an appropriate health ratio. On one hand, an attacker can add up all payoffs attached to the oracle, perform a single block stuffing attack, and cover multiple oracle games with the same spend. On the other hand, the game is symmetric, and the larger the amounts get, the more likely the attacker’s counterparties will not stay completely passive during a block stuffing attack. It takes just one counter-dispute for the attacker to lose that portion of their payout, at great expense.

There are other mitigations. In a p2p lending setup, longer settlement times for liquidation plus a max base fee tolerance at time of settlement can blunt these attacks, because the attacker no longer gets that payout. The base fee grows 12.5% per block in the ramp-up period, protecting smaller oracle game dependents from blockspace being occupied by transactions affecting larger dependents. There are clear tradeoffs with the max base fee tolerance approach: it may take multiple oracle games to liquidate a position, and it creates an incentive to push the base fee past the tolerance. Additionally, any payouts denominated in ETH benefit here, since gas costs are paid in ETH regardless of its price. But with appropriate parameterization, we can minimize the negative effects of this approach.
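The liquidity floors quoted above can be reproduced directly from the two inequalities. A minimal sketch, using the Ethereum L1 parameters from the example (60M gas blocks, 250k-gas disputes, $1M notional, 15 blocks); the helper name is illustrative.

```python
# Sketch reproducing the block-stuffing liquidity floors under alternating
# stuffing: L > 2*N*gas_dispute/(gas_block*T), and with reporter cost,
# L > 2*R*N*gas_dispute/((R - cost)*gas_block*T).

def min_liquidity(N, gas_dispute, gas_block, T, R=None, reporter_cost=0.0):
    if R is None:                      # zero reporting cost case
        return 2 * N * gas_dispute / (gas_block * T)
    return 2 * R * N * gas_dispute / ((R - reporter_cost) * gas_block * T)

N, gd, gb, T = 1_000_000, 250_000, 60_000_000, 15
kappa = 0.65 * 1.1 * 0.02              # reporter cost ~1.43%
print(round(min_liquidity(N, gd, gb, T)))                                      # ~556
print(round(min_liquidity(N, gd, gb, T, R=0.05, reporter_cost=kappa)))         # ~778
print(round(min_liquidity(N, gd, gb, T, R=1.1 * kappa, reporter_cost=kappa)))  # ~6111
```

The third line is the "10% past the barrier" case; the small difference from the ~$6,112 in the text comes from rounding R to 1.57% there.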

Dynamic programming analysis

Under censorship, the manipulator can report a wrong price and pay ePBS bribes to suppress disputes until suppression is no longer worth it. At that point, they can either let the round settle or self-dispute to refresh the game and repeat. Here, we investigate an unconstrained strategy where the attacker can choose report placement and refresh timing freely. This is exceptionally complicated, so we would invite anyone to correct any errors they see. The analysis should be treated as an approximation. Unless otherwise noted, assume the manipulator effectively controls inclusion and can always capture the dispute slot when choosing to refresh. Under this assumption, internal oracle game losses are abstracted away and the dominant tradeoff is external extraction versus censorship spend. This can be written as a single dynamic program on an augmented state:
  • Round index n with game size S_n = L · M^n
  • Settlement timer τ (blocks remaining in the round)
  • Settlement horizon T (fixed blocks per round; τ is initialized at T and counts down to 0)
  • Raw reported-vs-true error R in the attack direction (fractional price error)
Define honest dispute barriers (from Oracle Accuracy & Cost):
\kappa = 0.65 \cdot M \cdot \sigma(T)
The honest barrier κ may differ from 0.65 · M · σ(T) under self-consistent manipulation barriers, but for simplicity we use the 0.65 · M form here. External extraction scales with the raw error R, while censorship cost scales only with the excess error past the honest barrier. A simple per-block censorship bribe requirement is:
\text{bribe}_n(R) = \max(0, R - \kappa) \cdot S_n
Censorship is not required inside the dispute barriers. The manipulator’s control each block is:
  1. censor: pay bribe_n(R) for one block, evolve R under unconditional price dynamics, and reduce the timer τ → τ − 1.
  2. refresh: self-dispute, choose a new bad report R', reset the timer, and move to the next round size S_{n+1} = M · S_n.
  3. settle: settlement is only available at timer expiry (τ = 0).
Let V_n(τ, R) be the attacker value at state (n, τ, R). With settlement only at timer expiry:
V_n(0, R) = U_{\text{settle}}(R) = R \cdot N
and for τ > 0:
V_n(\tau, R) = \max\Bigl( \underbrace{-\text{bribe}_n(R) + \mathbb{E}[V_n(\tau-1, \widetilde{R})]}_{\text{censor one more block (delay value)}},\ \underbrace{\max_{R' \in \mathcal{A}} V_{n+1}(T, R')}_{\text{self-dispute refresh}} \Bigr)
where A is the allowed refresh report space (equivalently, the post-refresh mispricing states). We allow unconstrained bad report placement in the attack direction, so A = {R : R ≥ 0}. In the censor branch, R̃ follows the unconditional one-block return process. Refresh moves the game to round n + 1, so both future bribe costs and future delay payoffs scale with M.
  1. Pick a round cap n_max (for example, 80), a settlement horizon T, and a grid of values R for price evolution (positive and negative). Then define the attacker-choosable subset A of the grid for attacker price reports (initial and refresh), where A = {R_i : R_i ≥ 0}.
  2. To handle price movements between blocks (the R̃ term above), build the one-block transition probability matrix on the same grid:
    P_{i,j} = \Pr(R_{\text{next}} = R_j \mid R = R_i)
    All this means is: for a given starting R, what are the chances it evolves into each other R in the grid over one block. This is how we account for the underlying distribution of returns.
  3. Fill settlement values for every round and every R_i in the grid: V_n(0, R_i) = R_i · N.
  4. For rounds n = n_max down to 0, for blocks remaining from τ = 1 to T (backwards in time), and for each R_i in the grid:
    \text{censor\_value}_{n,\tau}(R_i) = -\text{bribe}_n(R_i) + \sum\limits_{j} P_{i,j}\,V_n(\tau-1, R_j)
    \text{refresh\_value}_{n,\tau}(R_i)= \begin{cases} 0, & n=n_{\max} \\ \max_{R_j \in \mathcal{A}} V_{n+1}(T, R_j), & n<n_{\max} \end{cases}
    V_n(\tau, R_i)=\max(\text{censor\_value}_{n,\tau}(R_i), \text{refresh\_value}_{n,\tau}(R_i))
    Once we arrive at τ = T, we have a list of V_n(T, R_i) across all choices of R_i. We compute this list first for V_{n_max}, then V_{n_max−1}, and so on; the R' used in the refresh step is easy to calculate, since we already have the list of values for the following round:
    R'_{\star}(n,\tau,R_i)=\arg\max_{R_j \in \mathcal{A}} V_{n+1}(T, R_j)
    Here (and in the refresh_value equation above), R_j is just a scan over all allowed next-round report choices in A, and the argmax picks the best one.
  5. We now have enough information to continue until all V_n(τ, R_i) are filled down to n = 0.
Once we’ve finished computing, the attacker’s optimal initial report is the choice of R that maximizes V_0(T, R). Bias is recovered from value plus internal losses:
\text{bias} = \frac{V_0(T, R_\star) + \mathbb{E}[\text{internal losses}]}{N \cdot \sigma(T)}
For intuition, if the manipulator holds a fixed report for τ blocks and then settles in round n, define a one-round objective Π_n(τ, R) representing strategy value at block τ and price error R:
\Pi_n(\tau, R) = R \cdot N - \tau \cdot (R - \kappa)_+ \cdot S_n
For R > κ, the slope is:
\frac{\partial \Pi_n}{\partial R} = N - \tau \cdot S_n
So once τ · S_n > N, increasing R lowers value. Equivalently, with I_n = S_n / N, the condition is τ · I_n > 1. The core limiting force is the per-block censorship spend implied by the oracle game’s initial liquidity ratio and settlement time.
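The five-step recursion above can be sketched compactly with value iteration on a grid. This is a minimal sketch under assumed parameters (per-block Gaussian increments, illustrative sigma, notional-normalized units with N = 1); it is not the sweep code used for the tables below.

```python
# Minimal sketch of the censorship dynamic program: backward induction over
# rounds n (largest first) and timer tau, with a one-block Gaussian
# transition matrix standing in for the R-tilde price evolution.
import numpy as np

N, L, M, T, n_max = 1.0, 0.25, 1.15, 10, 20    # N-normalized: L here is L/N
sigma = 0.02                                    # assumed per-block return stdev
kappa = 0.65 * M * sigma * np.sqrt(T)           # honest dispute barrier ~ 0.65*M*sigma(T)
R = np.linspace(-0.15, 0.15, 121)               # error grid (attack direction > 0)
A = R >= 0                                      # attacker-choosable reports

# One-block transition matrix P[i, j] = Pr(R_i -> R_j), rows renormalized
# to absorb truncation at the grid edges.
diff = R[None, :] - R[:, None]
P = np.exp(-0.5 * (diff / sigma) ** 2)
P /= P.sum(axis=1, keepdims=True)

refresh_next = 0.0                              # refresh_value = 0 at the round cap
for n in range(n_max, -1, -1):
    S_n = L * M ** n
    bribe = np.maximum(0.0, R - kappa) * S_n    # per-block censorship spend
    V = R * N                                   # V_n(0, R): settle at timer expiry
    for tau in range(1, T + 1):
        V = np.maximum(-bribe + P @ V, refresh_next)  # censor vs self-dispute
    refresh_next = V[A].max()                   # value of refreshing into round n

best = R[A][np.argmax(V[A])]                    # optimal initial report error
print(f"optimal initial report error: {best:.4f}, value: {V[A].max():.4f}")
```

Consistent with the text, the optimizer keeps the report near the dispute barrier κ rather than choosing extreme placement, because past τ · S_n > N every extra unit of error costs more in bribes than it adds in extraction.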

Numerical sweeps from this model

Unless otherwise noted:
  • Counterparty is passive.
  • Attacker can choose the prices for the initial report and each refresh report from unconstrained attack-direction space.
  • Per-block censorship cost is (R − κ)_+ · S_n.
  • Score is 0.65 · (L/N) + |bias|, reflecting initial report cost plus bias extraction out of the notional in a single “cost” term.
  • We report score in σ(T) units, and optionally convert to percent using a 4% daily volatility assumption.
Even with this unconstrained attacker action space, the optimizer does not prefer extreme wrong-price placement. Attacker-optimal behavior tends to keep reports near the honest dispute barriers and then switch censorship behavior based on whether the state is inside or outside the no-dispute band. The same recursion also shows why early refresh is usually avoided, but it can become optimal in states where the immediate censor burn is high relative to the continuation value from refreshing. At B = 40 blocks per round (8 minutes at 12s blocks), the best score is fairly robust across tested increment models:
Model        Best L/N  Best M  Best score (sigma units)  Absolute % (4% daily vol)
Gaussian      0.250    1.15           0.768                   0.229%
Student-t(6)  0.200    1.15           0.773                   0.230%
Student-t(8)  0.200    1.15           0.771                   0.230%
Student-t(10) 0.200    1.15           0.770                   0.230%
Jump mixture  0.200    1.15           0.782                   0.233%
The censorship-optimal score basin appears directionally stable under these tested distributions.

Settlement window tradeoff

For Gaussian increments, sweeping block count B (same L/N and M grid):
B blocks   Time    Best L/N  Best M  Best score (sigma units)  Absolute %
   20      4.0m     0.300    1.15           0.879               0.185%
   40      8.0m     0.250    1.15           0.768               0.229%
   80     16.0m     0.150    1.15           0.680               0.287%
  100     20.0m     0.150    1.15           0.654               0.308%
Here, score in σ(T) units improves as B increases, but absolute percent worsens because σ(T) grows like √T. In a short-window sweep (B = 4 to 20), the best absolute percent was at the shortest tested window (about 0.121% at B = 4).

Bribe and control sensitivity

At B = 40, Gaussian, passive counterparty:
Setting                                  Best L/N  Best M  Best score
bribe_mult = 1.0, p_lose = 0              0.250    1.15      0.768
bribe_mult = 2.0, p_lose = 0              0.150    1.15      0.682
bribe_mult = 5.0, p_lose = 0              0.125    1.10      0.594
p_lose = 1% per block, jump n->n+2        0.200    1.15      0.758
p_lose = 5% per block, jump n->n+2        0.150    1.15      0.722
We model p_lose within the actively censoring states, and assume the manipulator cannot lose control during any wait. So higher effective censorship cost and/or a nonzero per-block control loss probability can reduce the attacker-optimal score. The jump n → n+2 reflects that after the attacker loses control, they must dispute again to regain control of a report with a bad price, so there are two escalations in the oracle game per loss of control. In reality, this is not always the case: if the price has moved in the attacker’s favor since the original bad report, the control-assumer’s report may itself be favorable to the attacker, and the attacker can begin censoring from that report instead of needing to dispute again. So the n+2 penalty may overestimate the attacker’s cost of losing control in some states.

In practice, loss of control can come from at least three sources. First, inclusion is mediated by a globally distributed set of block producers, and the manipulator is incentivized to time a blocking bribe at the latest possible moment because the true price continues moving in real time; under proposer-builder timing quirks, this creates execution risk and increases the effective bribe the manipulator must pay if they want a low loss-of-control probability. Second, the manipulator does not know the honest dispute barriers exactly, so censorship bids need a minimum safety margin (for example, some fraction such as 10% of the honest dispute barrier), which not only requires highly accurate volatility forecasting, but requires the rest of the market to agree with their forecasting within a threshold too. If we proxy the attacker strategy as paying bribes against some fraction of the dispute barrier (e.g., 0.9 · κ):
Setting                                      Best L/N  Best M  Best score
bribe = max(R - 0.9*kappa, 0), p_lose = 0      0.200    1.15      0.694
bribe = max(R - 0.9*kappa, 0), p_lose = 1%     0.200    1.15      0.684
bribe = max(R - 0.9*kappa, 0), p_lose = 3%     0.200    1.15      0.665
bribe = max(R - 0.8*kappa, 0), p_lose = 0      0.200    1.20      0.616
bribe = max(R - 0.8*kappa, 0), p_lose = 1%     0.200    1.20      0.607
bribe = max(R - 0.7*kappa, 0), p_lose = 0      0.200    1.20      0.538
Like before, loss of control escalates the game two rounds. It seems difficult in the real world for the attacker to burden passive oracle users with the full 0.768 score from the first table, given the uncertainty around the true dispute barriers and the penalties for losing control. Finally, the third loss-of-control source is a passive counterparty becoming active and counter-bribing to force inclusion of a dispute (discussed further under Late counterbribes below).

Unique entity modeling

You could consider long-horizon bribery commitment infrastructure as a smart contract where validators can prove multiple keys are held by one entity, accept bribes, then be slashed in-contract if there is proof they included a forbidden transaction over the time window. We assume zero social slashing at the validator level. Alternatively, one could imagine an all-or-nothing contract where everybody is only paid if the attempt succeeds, without slashing. To approximate a unique entity bribery regime (rather than strict per-block bribery), a first proxy is to scale per-block censorship spend by an event compression factor:
\text{bribe}^{\text{uniq}}_n(R) = q \cdot (R-\kappa)_+ \cdot S_n,\quad q = \frac{\text{unique entities in window}}{\text{blocks in window}}
So if a 100-block window has about 30 unique validator entities, this maps to q ≈ 0.3 (bribe_mult = 0.3). This is a compressed payment proxy, not a full commitment game. It is best read as a stress shortcut for “fewer payable censorship opportunities” while keeping the same Bellman structure. At B = 40, Gaussian increments:
Setting                                             Best L/N  Best M  Best score
baseline (bribe_mult = 1.0, p_lose = 0)             0.250    1.15      0.768
unique-entity proxy (bribe_mult = 0.3, p_lose = 0)  0.400    1.20      0.989
proxy + p_lose = 1%, jump n->n+2                    0.300    1.20      0.961
This proxy shows a low-liquidity failure region: many low L/N cells hit effectively unbounded extraction. Using L/N = 0.2 and M = 1.2x, score worsens from 0.772 to 1.053. Using L/N = 0.05 and M = 1.2x, score worsens from 1.029 to 19.280. The score worsening for fixed L/N and M is driven by bias increasing.

Operationally, this means standing liquidity is at risk. Unique entity concentration in a fixed settlement window can shift over time, so for chosen liquidity and settlement time parameters, the block count must be high enough that the probability of very low unique-entity windows stays low enough over the full lifetime of the standing position (for example, a p2p borrow-lend arrangement). Even though this section is a linear notional analysis, the same risk management conclusion applies across notional types. This also means that at the adversarial limit, a blockchain is only as strong as the unique entity count in its validator set. Blockchains that make block production costly enough to concentrate participation are structurally brittle under this model.

It is also worth noting how an AMM median price oracle may behave under this adversarial model. Those mechanisms can break once an attacker controls enough entities to exceed ~50% of producers over the median observation window, and losses can be contained to roughly only the AMM pool fees paid. ePBS bribes can also be used to give the AMM manipulators an extra block right before their control window to mutate the on-chain state without risk of loss to arbitrage. This is a much weaker requirement than needing to bribe all unique entities the dispute value in an openOracle game.

Late counterbribes

If a defending party chooses to contest near the end of a round, their one-shot loss can be small relative to the external notional. In one representative run (Gaussian, B = 40, M = 1.15, L/N = 0.15, bribe_mult = 2, p_lose = 0), forward simulation under the optimal policy gave:
  • P(R_settle > κ) ≈ 29.1%.
  • P(τ = 1, R > κ, attacker action = censor) ≈ 24.3%.
  • E[R − κ | τ = 1, R > κ, attacker action = censor] ≈ 0.175 · σ(T).
If a defender overbids once and the net oracle game loss is approximately (R − κ) · S_n, the external-notional-normalized loss is approximately:
\text{defender loss on move} \approx (R - \kappa) \cdot (L/N)
For example, with R − κ ≈ 0.175 · σ(T) and L/N = 15%, this is about 0.026 · σ(T) for that move. In practice, the attacker might increase bribes toward the end of the window to stave off counterbribes, but structurally counterbribes seem cheap for the counterparty and inflict a much greater expense on the attacker.