Planning & Ideas
Current state of Nomos DA / Potential next steps
- The full replication protocol is implemented: it disseminates the blob to multiple nodes without chunking. This is a good starting point for implementing the actual DA encoding protocol without blocking the DA API implementation.
- DA nodes do not have a READ/WRITE API at the moment. Blob data is retrieved through a generic storage API, which should instead be used by the DA-specific service for caching and persisting blobs according to the blockchain state (see the API sketch just after this list).
- DA caching is not implemented; it should follow logic similar to how blobs are persisted at the moment.
- The DA service should communicate with consensus (via an Overwatch handle) and track the latest state of the blockchain. This could also be achieved via the block explorer; TBD.
- The Block definition in the code needs to be updated, but this is not necessary for implementing the majority of the DA API functionality.
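As a very rough illustration of what the missing READ/WRITE surface could look like, the sketch below outlines a minimal DA node API. All names (`DaApi`, `Blob`, `AppId`, `DaError`) are hypothetical placeholders, not existing Nomos types; the real shape will follow from the DA spec and the existing service abstractions.

```rust
// Hypothetical sketch of a DA node READ/WRITE API, not the actual Nomos interface.
// Types and method names are placeholders until the DA spec is finalized.

pub type AppId = [u8; 32];
pub type Index = u64;

pub struct Blob {
    pub app_id: AppId,
    pub index: Index,
    pub data: Vec<u8>,
}

#[derive(Debug)]
pub enum DaError {
    NotFound,
    NotYetCertified, // blob received but its certificate not yet seen on chain
    Storage(String),
}

/// WRITE side: called when a disseminated blob reaches the node.
/// READ side: called by EZs querying data for an AppID range.
pub trait DaApi {
    /// Cache a received blob until its certificate appears in a block.
    fn write_blob(&mut self, blob: Blob) -> Result<(), DaError>;

    /// Return blobs for `app_id` whose index falls in `range`,
    /// only if the corresponding certificates are already on chain.
    fn read_range(
        &self,
        app_id: AppId,
        range: std::ops::Range<Index>,
    ) -> Result<Vec<Blob>, DaError>;
}
```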
Preparation for DA API
There are unknowns on which the final DA API implementation will depend, but initial implementation steps can already be taken.
Unknowns
Until we have the full DA and DAS spec, implementation changes may be required, but it is still possible to identify independent areas and work on them so that we are ready for the final changes when the DA spec is complete.
| Unknowns | Description | Missing | What is already implemented | Steps |
| --- | --- | --- | --- | --- |
| Encoding / DA Protocol | Data provided by the EZs for decentralized persistence needs to be encoded and chunked in a specific way for Nomos to guarantee a certain type of data availability. | - The DA spec is being worked on and can already be referenced when implementing the DA API<br>- Concrete types and sizes might be missing, but this shouldn't block the implementation | The Full Replication DA Protocol is implemented and available for use in the Nomos codebase.<br>Full Replication can be used until we have the RS+KZG spec and implementation; the current abstraction should allow a seamless change of DA Protocol. | - Wait for the RS+KZG DA Protocol spec and implementation<br>- Use the Full Replication DA Protocol for the DA Read/Write API implementation |
| Dissemination | Encoded data chunks need to be distributed across the DA nodes in an efficient manner. | - Sending data chunks directly to multiple DA nodes might not be efficient; more research is needed | Dissemination currently uses the NetworkAdapter abstraction, implemented as Libp2pAdapter in the data availability module.<br>When data is disseminated (in full replication a chunk represents the complete original data), a specific Libp2p topic is used to broadcast the chunk to DA nodes.<br>This abstraction will allow a seamless change of the dissemination implementation later on. | - Start a discussion about an efficient way to disseminate<br>- Use the current implementation as is for the DA Read/Write API implementation |
| Attestation | After data chunks are distributed across the DA nodes, attestations of the dissemination need to be provided back to the EZ, confirming that the data has indeed reached its target. | - The DA spec will dictate the final layout, types and sizes | At the moment, an Attestation is a data structure returned from the DA node to the EZ after a chunk is received; it is currently a simple hash of the original data.<br>The DA Protocol implementation controls the definition of Attestation, in this case Full Replication. It should also be seamlessly replaceable by another DA Protocol implementation. | - Wait for the RS+KZG DA Protocol spec and implementation<br>- Use the Attestations provided by the Full Replication DA Protocol for the DA Read/Write API implementation |
| Certification | Some form of certification needs to be created from the attestations received after dissemination. The certification should be sent by the EZ to the DA mempool so the Nomos block producer can include in the block the required information about data that should be available via the Nomos DA layer. The certification should also include the metadata needed for data indexing (data ranges) and the AppID it relates to. | - Related to the Block: it is not yet clear what minimal information derived from the attestations should be persisted in the blockchain about the disseminated data<br>- In addition to the certification, metadata about the data indexing/ranges needs to be provided<br>- Not clear if metadata should be part of the certificate (terminology needs to be defined for clearer communication in the team) | Currently, attestations from DA nodes are used to form a Certificate that is sent by the EZ to the DA mempool.<br>The layout of such a Certificate can be seen here: ‣ | - Start a discussion about the information required by the block producer and the information required to go into the block itself<br>- Update the current Certificate definition to include AppID and nonce (index); this doesn't have to reflect the final implementation, it only needs the metadata required for the DA Read/Write API implementation (see the protocol/Certificate sketch after this table) |
| Block | A Block contains collections of Transactions and Certificates(?). | - The certification and metadata sent to the mempool might not end up in the block; this difference needs to be explored and documented | The implemented Block structure is at the moment specific to the Carnot consensus.<br>This will change soon, but the DA API should mostly be affected by the data certification related part. | - Wait for the Cryptarchia spec and implementation<br>- Changes in the Certificate definition will automatically be reflected in the Block, but shouldn't interfere with the consensus implementation |
| Block Producer selection / verification | The block producer needs to verify that the certification for the AppID data is sequential by checking the latest data index for that AppID in the blockchain. | - Depending on the data provided by the EZ to the mempool, the block producer needs to make an inclusion decision for the block; this depends on all the items above<br>- The block producer verifying the sequence might be a potential DoS attack surface if it requires blockchain traversal<br>- Depends on the consensus specification | The Carnot leader takes whatever is in its DA mempool without any additional steps and includes it in the Block. | - Use Carnot until we are ready to swap it for the new consensus<br>- Discuss the best place to add verification of the AppID data sequence<br>- Add minimal checking to Carnot (mempool?) as a proof of concept for the DA Read/Write API implementation (see the sequence check sketch after this table) |
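To make the Encoding, Attestation and Certification rows a bit more concrete, below is a minimal sketch of how the protocol abstraction and an AppID/index-extended Certificate could fit together, assuming the Full Replication path. The trait and type names (`DaProtocol`, `Attestation`, `Certificate`) and their fields are illustrative only and do not claim to mirror the actual definitions in the Nomos codebase.

```rust
// Hypothetical sketch of the DA protocol abstraction and a Certificate
// extended with AppID + index metadata. Names and layouts are placeholders,
// not the actual nomos-core definitions.

pub type AppId = [u8; 32];

pub struct Attestation {
    /// Hash of the received blob (full replication uses a plain hash today).
    pub blob_hash: [u8; 32],
    /// Identity of the attesting DA node.
    pub attester: [u8; 32],
}

/// Certificate assembled by the EZ from collected attestations and sent
/// to the DA mempool, carrying the indexing metadata discussed above.
pub struct Certificate {
    pub attestations: Vec<Attestation>,
    pub app_id: AppId,
    /// Monotonically increasing index (nonce) of this blob for the AppID.
    pub index: u64,
}

/// A protocol abstraction in this spirit would let Full Replication be
/// swapped for RS+KZG later without touching the Read/Write API layer.
pub trait DaProtocol {
    type Blob;

    /// Encode/chunk the original data (full replication: a single blob copy).
    fn encode(&self, data: &[u8]) -> Vec<Self::Blob>;

    /// Produce an attestation for a received blob on the DA node side.
    fn attest(&self, blob: &Self::Blob) -> Attestation;

    /// Build a certificate once enough attestations were collected by the EZ.
    fn certify(&self, attestations: Vec<Attestation>, app_id: AppId, index: u64)
        -> Option<Certificate>;
}
```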
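For the block producer verification row, the following is one possible proof-of-concept shape for the AppID sequence check, assuming the node keeps a small in-memory index of the last seen index per AppID so that no blockchain traversal is needed per certificate (avoiding the DoS surface mentioned above). The names (`AppIndex`, `is_sequential`) are hypothetical.

```rust
// Minimal proof-of-concept of the AppID sequence check a block producer
// (or the DA mempool) could run before including a certificate.

use std::collections::HashMap;

type AppId = [u8; 32];

struct AppIndex {
    /// Last certified index per AppID, maintained from processed blocks.
    last: HashMap<AppId, u64>,
}

impl AppIndex {
    fn last_index_for(&self, app_id: &AppId) -> Option<u64> {
        self.last.get(app_id).copied()
    }

    /// Accept a certificate only if its index directly follows the last
    /// index recorded for the AppID (or is 0 for a new AppID).
    fn is_sequential(&self, app_id: &AppId, index: u64) -> bool {
        match self.last_index_for(app_id) {
            None => index == 0,
            Some(last) => index == last + 1,
        }
    }
}
```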
| Can be worked on | Description | Missing | What we have | Steps |
| --- | --- | --- | --- | --- |
| DA Node blob caching | When a DA node receives a chunk, it does not immediately make it available via the Read API; it waits until the required certification is written into the blockchain, so it can trust that other DA nodes will use the same information for indexing the data. | - Implementation detail | - Caching is not implemented, but storage and immediate availability are implemented | - Use the current storage implementation as a reference for the caching implementation (see the caching/indexing sketch after this table) |
| DA Node blockchain tracking | A DA node needs a way to retrieve the latest block of the blockchain and decide what should be "persisted" from the cache and made available via the Read API. | - Implementation detail | - An Overwatch handle to the consensus; data could be pulled from there<br>- A block explorer could be used, but doesn't sound as reliable | - Add a handle to consensus in the DA service<br>- Use consensus block information to cross-check the cache against certifications in new blocks |
| DA Node blob storage with AppID index | A DA node needs to use the metadata in the Certification to efficiently store data related to different AppIDs. | - Implementation detail | - Storage is implemented, indexing is not present | - Update the current storage implementation to allow data queries for a specific AppID and range |
| Blob data pulling via DA Read API | Instead of blindly pulling all blobs from the DA nodes, EZs can use the DA Read API to query data related to a specific AppID by providing the AppID and a range. | - Spec (minimal) | - The testnet and Demo App use DA nodes to showcase a variety of Nomos functionalities; the Demo App can easily be updated to test the new DA API | - Once the steps above are done, update the related DA methods in the Demo App and test the changes in the testnet |
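The four items above (caching, blockchain tracking, AppID-indexed storage, and the Read API) could be tied together roughly as in the sketch below: cache a blob when it arrives, promote it to indexed storage once its certificate appears in a block, and only then serve it through the Read API. Types and method names (`DaStore`, `cache_blob`, `certify`, `read_range`) are hypothetical and not part of the current implementation.

```rust
// Sketch of the caching + indexing flow described in the table above.
// All types are illustrative placeholders, not the actual Nomos services.

use std::collections::{BTreeMap, HashMap};

type AppId = [u8; 32];
type BlobHash = [u8; 32];

#[derive(Default)]
struct DaStore {
    /// Blobs received from dissemination but not yet certified on chain.
    cache: HashMap<BlobHash, Vec<u8>>,
    /// Certified blobs, indexed by (AppId, index) for range queries.
    indexed: HashMap<AppId, BTreeMap<u64, Vec<u8>>>,
}

impl DaStore {
    /// WRITE path: keep the blob in the cache when it is disseminated.
    fn cache_blob(&mut self, hash: BlobHash, data: Vec<u8>) {
        self.cache.insert(hash, data);
    }

    /// Called when a certificate referencing `hash` is seen in a new block.
    fn certify(&mut self, hash: BlobHash, app_id: AppId, index: u64) {
        if let Some(data) = self.cache.remove(&hash) {
            self.indexed.entry(app_id).or_default().insert(index, data);
        }
    }

    /// READ path: return certified blobs for `app_id` within `range`.
    fn read_range(&self, app_id: &AppId, range: std::ops::Range<u64>) -> Vec<&[u8]> {
        self.indexed
            .get(app_id)
            .map(|blobs| blobs.range(range).map(|(_, data)| data.as_slice()).collect())
            .unwrap_or_default()
    }
}
```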