I agree that “risk” needs to be qualified, because there are (at least) two distinct phenomena and the builder/validator interaction matters.
(A) Honest-operation DA failure (operational risk / liveness cost).
If a block contains $N_B$ blobs and each blob has validator-side failure probability $\varepsilon$ (either because the data is actually unavailable, or due to sampling Type-II error), then the probability that at least one blob triggers rejection is
$$ 1-(1-\varepsilon)^{N_B}\approx N_B\varepsilon \quad (\varepsilon\ll 1). $$
This is the “linear accumulation” I meant: for small $\varepsilon$, adding blobs increases the chance of a block being rejected roughly linearly, which translates into wasted block-production effort and more frequent retries.
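As a quick numeric sanity check of that linear approximation, here is a minimal sketch (the $\varepsilon$ and $N_B$ values are illustrative, not taken from the discussion):

```python
# Compare the exact per-block rejection probability 1 - (1 - eps)^N_B
# with the linear approximation N_B * eps for a small per-blob rate.
def rejection_probability(eps: float, n_blobs: int) -> float:
    """Exact probability that at least one of n_blobs blobs triggers rejection."""
    return 1.0 - (1.0 - eps) ** n_blobs

eps = 1e-4  # illustrative per-blob validator-side failure probability
for n_blobs in (1, 8, 64):
    exact = rejection_probability(eps, n_blobs)
    linear = n_blobs * eps
    print(f"N_B={n_blobs:3d}  exact={exact:.6e}  linear={linear:.6e}")
```

For these values the two columns agree to within the quadratic correction term $\binom{N_B}{2}\varepsilon^2$, which is what "roughly linear for small $\varepsilon$" is claiming.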
(B) Adversarial “unavailable block gets adopted” (security risk).
I agree this requires combining views. The event “an unavailable-data block is built, the builder wins, and validators accept” can be expressed as
$$ \Pr[\text{build unavailable}] \cdot \Pr[\text{win}] \cdot \Pr[\text{validators accept}\mid \text{unavailable}]\le \Pr[\text{validators accept}\mid \text{unavailable}] $$
and the last term is upper-bounded by $\varepsilon^{N_B}$ assuming all $N_B$ blobs are unavailable and sampling outcomes are independent across blobs (if only $k$ of the blobs are unavailable, the bound weakens to $\varepsilon^{k}$). To make $\Pr[\text{build unavailable}]$ meaningful we do indeed need an explicit cap/latency budget on how many candidate blobs a builder may try (or an economic cost model); otherwise “try hard enough” drives that term toward 1 in the model.
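The combined bound can be sketched numerically. In this sketch the first two factors are conservatively set to 1, and `max_attempts` is a hypothetical parameter standing in for the builder-side cap/budget mentioned above (it is my assumption, not something fixed in the discussion):

```python
# Upper bound on Pr[an unavailable block is adopted]:
# Pr[build unavailable] and Pr[win] are conservatively bounded by 1,
# leaving eps**n_blobs per attempt, union-bounded over an attempt budget.
def adoption_bound(eps: float, n_blobs: int, max_attempts: int = 1) -> float:
    """Union bound over max_attempts builder tries; each try is accepted with
    probability at most eps**n_blobs (all blobs unavailable, independent sampling)."""
    return min(1.0, max_attempts * eps ** n_blobs)

eps = 0.5  # illustrative per-blob false-acceptance probability
print(adoption_bound(eps, n_blobs=32))                      # single attempt
print(adoption_bound(eps, n_blobs=32, max_attempts=10**6))  # budgeted builder
```

This makes the role of the cap visible: without a bound on `max_attempts`, the union term can be driven to 1, which is exactly the "try hard enough" failure mode of the model.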
On (2): what happens if others reject?
If the winning producer extends the chain with a block that most honest validators reject, it simply creates a short-lived fork that honest parties do not extend; the producer’s branch is expected to be abandoned unless it can keep producing blocks or convincing others to follow it. So the practical impact is again wasted work and temporary divergence, unless a non-negligible fraction of validators accept the block (which is exactly the event the $\varepsilon^{N_B}$ term captures).
So I propose we restate the conclusion as: