Adding Event Driven Randomness to the Carnot Consensus Algorithm

Revision Changes Date
0 Initial version. 28.03.2023
1 Added the “Random Beacons Discussion” section. 21.04.2023
  Added the “Abstract Beacon” section.
  Modified the “Construction” section by removing explicit security checks on the BLS hash function and adding an adequate requirement to its definition.
  Minor corrections to the algorithms in the “Construction” section.
  Extended the “High-level overview” section with a discussion of the normal and recovery modes of operation.
  Added the “Leader Selection” section with two algorithms.
  Added the “Threshold Recovery Mode” subsection to the “High-level overview” section.

Motivation

The goal of this document is to describe a minimalistic algorithm for provisioning randomness to the Carnot Consensus algorithm. By “minimalistic” we mean a design that is simple in construction and easy to implement. The randomness provided by this scheme must be bias-resistant and unpredictable to a practical extent, but it may not be truly or fully random from a theoretical point of view.

Please note that we are generalizing in this proposal; therefore the design might not fully follow the Carnot specification, but it should nevertheless be compatible with it.

Random Beacons Discussion

Random Beacons are a source of randomness for many decentralized and distributed systems. They are defined as protocols between multiple actors that produce a beacon output which is:

  1. Unpredictable: the beacon values cannot be predicted by any coalition of nodes below the threshold.

  2. Bias-resistant: the beacon values cannot be changed or influenced by any coalition of nodes below the threshold.

The protocols deliver such properties through an application of threshold cryptography [https://eprint.iacr.org/2016/1067, https://eprint.iacr.org/2019/1320, https://arxiv.org/abs/1805.04548], publicly verifiable secret sharing [https://eprint.iacr.org/2016/1067, https://eprint.iacr.org/2018/319, https://eprint.iacr.org/2017/216], verifiable random functions [https://eprint.iacr.org/2017/573, https://eprint.iacr.org/2017/454], or even erasure coding [https://arxiv.org/abs/2109.04911].

The common feature of most of these designs is the execution of the protocol by a group of nodes. In other words, a group of nodes (or a network) collectively calculates the value of the beacon. This group process guarantees correct production of a beacon under the assumption that the number of byzantine nodes in the group is below a certain threshold. Hence, there are inherent computation and communication complexities: the process of aggregating and verifying signatures is paid for in execution time that grows with the number of signature shares used.

However, group or network-wide coordination is very expensive when a beacon must be generated very frequently. In the case of Carnot, a new beacon is required for every new block proposal, in order to select a new leader for the next proposal, which means that generation of a new beacon must be quick. Therefore, multiparty random beacon protocols are not ideal, as they incur the additional delay needed for random beacon group synchronization.

High-level overview

https://viewer.diagrams.net/?border=0&tags={}&highlight=0000ff&edit=_blank&layers=1&nav=1&open=R7Vpbb5swFP41PK4CXAh5bBPWVtvaTZ269dGBU%2FAGGBmzJPv1s4MJ10bJmoRkmtRKPgcfsD9%2F5%2BZWQ5N4ccNwGn6iPkSaqfsLDU010xybpiZ%2FdH9ZKGzDLhQBI36hMirFI%2FkNSqkrbU58yBoTOaURJ2lT6dEkAY83dJgxOm9Oe6FR86spDqCjePRw1NV%2BIz4PC61j6ZX%2BFkgQll82dPUkxuVkpchC7NN5TYVcDU0YpbwYxYsJRBK7EpfC7v0rT9cLY5DwbQweLGfmfrn9dHX3Pcgmk%2Bkkf9bfWcVbfuEoVxt%2BunO%2FCc3k9ur%2BxhUD98m9%2F6p2wJclLOALlJRIGQ9pQBMcuZX2mtE88UF%2BWxdSNecjpalQGkL5AzhfqiPHOadCFfI4Uk%2FFttjyu7JfCc9SuLBKcbqoP5wuS2lB%2BMrswhlZSq5ZCqkylELd7jMwEgMHpnTFvuVmXwVdqTKaMw82IF06AscsAL5p4pobwqeAiuWwpbBjEGFOfjUXghW7g%2FW8telnSsQSTV05ojlWNFR%2BaJY8LV9RbEBZVTQSg9oyKtWKXDsQbdQh2rP7OAitFD2MXaixpmLFvucG%2BfqpuE%2F6bMkewzkIfWyjRR%2FUok%2BxsIPRZ9yhz%2F3DsEFpNyZUQakekoyNrBuAPYchz2WbPPZxY4%2FZIc9dJm3kEr%2BKeE9zgcn7Dp1YSONZLpZ4PQ8Jh8cUrzCeizqnSQr1emAcFpuPqgutMkAj1ICodK95VXMYukrWYb3eaIfx%2Bmk0cNwVNKOL2kAuV7rOeQVsZ0uXQ%2FZh8j2ymj7nHDdgm6P%2FyX3vXDHHb%2BTKmyKC6Wxdw33EM9EGNo4GRyRIxNgT6Mj6%2BlrGSyIarSv1ICa%2BX7AAMvIbz1bvk0CnkqOrzVjXmjXdFHBVF6iMtXXvVT%2BUDXR9NTy%2F0y9GlmU3PaqQ%2FtZPl70G9OUlg4M4ZPfwVklwBliAJgYPHwbPga0ywXD6kqB%2B1CRodFB7ohw6MInePpXDlFEPsi3gmmHvZ7AKdw85j0gCe4TRbsKI%2BmC0emC0DobiWTd%2Fe4zz5njLQG%2B8tSh423nZnfO6gQQYlsxfR4wTdoF2tzq8CyBzSLrv0nWeQDlknEc9ZHRT6hMw8rKsp9U8I0lQVzDwqCC3nBRT%2F7RTiYVaGbmvLe3zo3aDv78atKcrBT6n7GfXv6KIpJn0oxJSL6K5f5RSxmnidtkDm9kDW7tH2x9s3YheXnycMv%2BMkytlULcgPJ9bkUEqmVdO%2BEgta%2Ffe%2BCoW8p38mPhNYK5JR7QjLpnMxCiQowiwL5vUYRsiq31vOnxDhC7PpY7ZI9mRvW09cpy7PHTZOt4D35%2Bjjf1A4UP%2FRsHTTji9HrenhCPE6o%2F%2FxVFV%2F0GB3D8%3D

The network generates a view change event. Such an event is a function of the previous round of the consensus. It can be generated as a consequence of the acceptance of a new block proposal, or of a timeout, where no new proposal was submitted or a majority was not convinced by the proposal. Therefore, we have two modes of operation for the two distinct root causes of the event.

  1. Normal mode of operation for the case when the leader proposed a valid block, where a beacon is generated as a function of the leader keys and the view number.

  2. Recovery mode of operation for the case when a timeout happened, where a beacon is a function of the last beacon and the view number.

The main mode of operation is the normal mode; it is bias-resistant and unpredictable due to the use of Verifiable Random Functions (VRFs). The beacon stays in the normal state until a selected leader fails to propose a correct block on time. Such a failure also means that the selected leader was not able to execute the VRF and provide fresh randomness for selecting the next leader (which is the sole purpose of this beacon construction). The beacon algorithm then enters a recovery mode, during which beacon generation loses the unpredictability property (but remains bias-resistant) for the time necessary to select a new leader that successfully submits a correct block proposal, at which point the algorithm moves back to the normal mode of operation.
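
Below is a minimal sketch of the two derivations, written in Python for illustration only. The function names, the use of SHA-256, and the 8-byte view encoding are assumptions made for this sketch and are not part of the Carnot specification; the VRF output is assumed to be produced and verified elsewhere and to arrive here as opaque bytes.

    import hashlib
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Beacon:
        value: bytes
        in_recovery: bool  # True while the beacon has lost unpredictability

    def normal_mode_beacon(vrf_output: bytes, view: int) -> Beacon:
        # Normal mode: the leader proposed a valid block, so the new beacon is
        # derived from its VRF output (itself a function of the leader keys
        # and the view number).
        digest = hashlib.sha256(vrf_output + view.to_bytes(8, "big")).digest()
        return Beacon(value=digest, in_recovery=False)

    def recovery_mode_beacon(last_beacon: Beacon, view: int) -> Beacon:
        # Recovery mode: a timeout happened, so the new beacon is a function
        # of the last beacon and the view number only; it is bias-resistant
        # but predictable until a leader proposes a valid block again.
        digest = hashlib.sha256(last_beacon.value + view.to_bytes(8, "big")).digest()
        return Beacon(value=digest, in_recovery=True)

    def on_view_change(last_beacon: Beacon, view: int, vrf_output: Optional[bytes]) -> Beacon:
        # Dispatch on the root cause of the view change event.
        if vrf_output is not None:  # a valid proposal was accepted
            return normal_mode_beacon(vrf_output, view)
        return recovery_mode_beacon(last_beacon, view)  # timeout

In this sketch the only difference between the two modes is the source of entropy: fresh VRF output in the normal mode versus the previous beacon value in the recovery mode, which is exactly what makes the recovery mode predictable.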

Threshold Recovery Mode

The unpredictability lost in the Recovery Mode can be restored by applying a threshold-signature based random beacon, where a group of nodes jointly generates and publishes the new beacon. This comes at the price of a longer execution time, as it requires additional network synchronization. Additional complexity lies in the secure selection of the group members executing the threshold-beacon generation: more members limit the probability of failure but increase the execution time. Finally, there will be a need for an additional recovery mode for the rare event when the group is not formed correctly or was unable to produce the beacon on time.
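
The quorum flow could look roughly like the sketch below, again in Python and purely illustrative. The share collection, the threshold parameter, and combine_shares() are placeholders invented for this sketch; a real design would use an actual threshold-signature scheme (for example threshold BLS) with per-share verification, rather than a hash over the collected shares.

    import hashlib
    from typing import List, Optional

    def combine_shares(shares: List[bytes]) -> bytes:
        # Placeholder for threshold-signature aggregation; it merely hashes
        # the sorted shares so that the sketch runs and is order-independent.
        h = hashlib.sha256()
        for share in sorted(shares):
            h.update(share)
        return h.digest()

    def threshold_recovery_beacon(shares: List[bytes], threshold: int) -> Optional[bytes]:
        # Produce a beacon once at least `threshold` shares were collected
        # from the selected group; return None so the caller can fall back
        # to the plain recovery mode (or a further recovery mode) on failure.
        if len(shares) < threshold:
            return None
        return combine_shares(shares)

The trade-off described above shows up in the threshold parameter and the group size: more members make it more likely that enough shares arrive, but waiting for and combining them takes longer.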