What is sent to each node?
- One proof and one chunk
- For a 32 MB block, the original data forms a 256×256 square matrix of elements; the RS-extended matrix is 512×512.
- In total that is 262,144 RS chunks and KZG proofs, so for a target of ~10k nodes we would send roughly 25-26 chunks per node (see the sketch below).
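A quick back-of-the-envelope sketch of those numbers (the 32 MB block, the 256×256 original matrix, and the 10k node target come from the notes above; the per-chunk byte size is simply the implied division, not a spec value):

```go
package main

import "fmt"

func main() {
	blockBytes := 32 * 1024 * 1024 // 32 MB block (from the notes)
	origDim := 256                 // original square matrix: 256x256
	extDim := 2 * origDim          // 2D RS extension doubles each side: 512x512

	totalChunks := extDim * extDim                 // 262,144 extended chunks, one KZG proof each
	chunkBytes := blockBytes / (origDim * origDim) // implied element size: 512 bytes
	nodes := 10_000                                // target node count (from the notes)
	chunksPerNode := float64(totalChunks) / float64(nodes)

	fmt.Println("extended chunks:", totalChunks)           // 262144
	fmt.Println("implied bytes per chunk:", chunkBytes)    // 512
	fmt.Printf("chunks per node: ~%.1f\n", chunksPerNode)  // ~26.2
}
```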
What are the RS corner cases that may lead us to lose the block?
We need to recover all of the original data, and the coding tolerates up to 25% loss of the extended data. At around 26% loss we hit the corner case where reconstruction can become impossible (see below).
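To make the corner case concrete, here is a sketch of the standard 2D-RS worst case (assuming a k×k original matrix extended to 2k×2k, where any row or column is recoverable from any k of its 2k elements): withholding an aligned (k+1)×(k+1) block leaves every affected row and column with only k-1 known elements, so that block can never be reconstructed, even though only ~25.2% of the data is missing.

```go
package main

import "fmt"

func main() {
	k := 256           // original matrix dimension
	ext := 2 * k       // extended dimension: 512
	total := ext * ext // 262,144 extended chunks

	// Worst case: an adversary withholds an aligned (k+1) x (k+1) block.
	// Each affected row and column then keeps only 2k-(k+1) = k-1 elements,
	// one short of the k needed to reconstruct that row/column codeword.
	missing := (k + 1) * (k + 1)
	fmt.Printf("unrecoverable at %.2f%% loss\n", 100*float64(missing)/float64(total)) // ~25.20%
}
```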
KZG commitments/proofs
> KZG commitments allow us to easily and quickly verify that an encoded sample is indeed part of a given set of erasure-coded samples.
Can they confirm that the RS encoding was produced properly?
Yes, exactly that: a proof shows that a sample lies on the polynomial defined by the RS encoding. We need the sample and its proof (the proof validates the sample against the commitment).
The KZG commitment ties all the proofs together: it is a single commitment to the polynomial, and every individual proof is verified against it.
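For reference, this is the standard single-opening KZG check behind "the proof validates the sample" (generic KZG, not tied to any particular library below): with commitment C to a row polynomial f, a sample y claimed to equal f(z), and trusted-setup secret τ, the verifier checks one pairing equation:

$$
C = [f(\tau)]_1, \qquad \pi = \left[\frac{f(\tau) - y}{\tau - z}\right]_1, \qquad
e\!\left(C - [y]_1,\; [1]_2\right) = e\!\left(\pi,\; [\tau - z]_2\right)
$$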
What is the scalability of the scheme? Can it be improved?
- In Ethereum there is a lot of replication (around 700 replicas per chunk), but that is still much better than replicating the whole block, of course (see the comparison sketch after this list).
- The replication makes it quite safe; without it, we could only tolerate up to 25% chunk loss.
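A rough bandwidth comparison to back up "much better than the whole block" (a sketch using the derived figures from above; the 48-byte proof size assumes a compressed BLS12-381 G1 point per KZG proof, which is not stated in the notes):

```go
package main

import "fmt"

func main() {
	blockBytes := 32 * 1024 * 1024 // full block: 32 MB
	chunkBytes := 512              // implied chunk size from the figures above
	proofBytes := 48               // assumed: one compressed BLS12-381 G1 point per proof
	chunksPerNode := 26            // ~262,144 chunks spread over ~10k nodes

	perNode := chunksPerNode * (chunkBytes + proofBytes)
	fmt.Println("bytes per node with sampling:", perNode)      // 14,560 (~14 KB)
	fmt.Println("bytes per node with full block:", blockBytes) // 33,554,432 (32 MB)
	fmt.Printf("savings: ~%dx\n", blockBytes/perNode)          // ~2300x
}
```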
What drawbacks does this method have?
- DHT may not actually work
- Distributing chunks may take longer than Ethereum's timing allows
- Maybe we have the same issue
Any suggestions on libraries to use or take a look at?
- Leopard (C++): https://github.com/catid/leopard
- FastECC: https://github.com/Bulat-Ziganshin/FastECC
- go-kzg (protolambda): https://github.com/protolambda/go-kzg
- Constantine KZG issue: https://github.com/mratsim/constantine/issues/112