The Carnot node is linked to the initial proposal. The implementation of that proposal is Overwatch, a micro-services framework.
Overwatch makes it simple to develop isolated components that communicate through in-memory messages. This allows pluggable services to be composed into a system. Keeping these components separate makes it easier to test them in isolation or grouped as needed, and because they communicate only through messages, it is also simple to mock other system components.
Services are developed in a per-crate fashion, meaning they live in their own environment and can be tested separately.
A service must fulfill the following characteristics:
To facilitate testing, we intend to provide a mock version of each service. The mock versions should help with testing other services that depend on them. Specifics can be found in each service's documentation.
The main entry point for a service is the implementation of the ServiceData and ServiceCore Overwatch traits. Those traits are the public-facing side of the Nomos services, but they are usually structured so that they can be composed as well.
Nomos services are usually composed of a frontend implementation and a backend implementation. The frontend is the part that plugs into the Overwatch traits, and it uses a generic backend interface to do what needs to be done. The specifics of the implementation go into different types that are interchangeable (as they implement the intermediary interface). That way, changing the service backend is just a matter of composing the type when creating the Overwatch app.
A clear example of this is the Storage service, which acts as a key-value store. Other services are not aware of how things are handled internally; they just need to worry about what to save or what to retrieve. The storage service exposes the following messaging API:
```rust
/// Storage message that maps to [`StorageBackend`] trait
pub enum StorageMsg<Backend: StorageBackend> {
    Load {
        key: Bytes,
        reply_channel: tokio::sync::oneshot::Sender<Option<Bytes>>,
    },
    Store {
        key: Bytes,
        value: Bytes,
    },
    Remove {
        key: Bytes,
        reply_channel: tokio::sync::oneshot::Sender<Option<Bytes>>,
    },
    Execute {
        transaction: Backend::Transaction,
        reply_channel:
            tokio::sync::oneshot::Sender<<Backend::Transaction as StorageTransaction>::Result>,
    },
}
```
It directly matches the storage backend trait:
```rust
/// Main storage functionality trait
#[async_trait]
pub trait StorageBackend: Sized {
    /// Backend settings
    type Settings: Clone + Send + Sync + 'static;
    /// Backend operations error type
    type Error: Error + 'static + Send + Sync;
    /// Backend transaction type
    /// Usually it will be some function that modifies the storage directly or operates
    /// over the backend as per the backend specification.
    type Transaction: StorageTransaction;
    /// Operator to dump/load custom types into the defined backend store type [`Bytes`]
    type SerdeOperator: StorageSerde + Send + Sync + 'static;

    fn new(config: Self::Settings) -> Result<Self, Self::Error>;

    async fn store(&mut self, key: Bytes, value: Bytes) -> Result<(), Self::Error>;

    async fn load(&mut self, key: &[u8]) -> Result<Option<Bytes>, Self::Error>;

    async fn remove(&mut self, key: &[u8]) -> Result<Option<Bytes>, Self::Error>;

    /// Execute a transaction in the current backend
    async fn execute(
        &mut self,
        transaction: Self::Transaction,
    ) -> Result<<Self::Transaction as StorageTransaction>::Result, Self::Error>;
}
```
That way it is easy to exchange storage engines without interfering with the implementation too much. Later on, other services may choose how to encode their data depending on the type of services and service backends they need to interact with. There are different options: