This repository has been archived by the owner on Nov 15, 2023. It is now read-only.

Breakdown the Router module on Dmp, Ump, Hrmp modules #1939

Merged · 16 commits · Nov 16, 2020
4 changes: 3 additions & 1 deletion roadmap/implementers-guide/src/SUMMARY.md
@@ -16,7 +16,9 @@
- [Scheduler Module](runtime/scheduler.md)
- [Inclusion Module](runtime/inclusion.md)
- [InclusionInherent Module](runtime/inclusioninherent.md)
- [Router Module](runtime/router.md)
- [DMP Module](runtime/dmp.md)
- [UMP Module](runtime/ump.md)
- [HRMP Module](runtime/hrmp.md)
- [Session Info Module](runtime/session_info.md)
- [Runtime APIs](runtime-api/README.md)
- [Validators](runtime-api/validators.md)
1 change: 1 addition & 0 deletions roadmap/implementers-guide/src/glossary.md
@@ -24,6 +24,7 @@ Here you can find definitions of a bunch of jargon, usually specific to the Polk
- Parathread: A parachain which is scheduled on a pay-as-you-go basis.
- Proof-of-Validity (PoV): A stateless-client proof that a parachain candidate is valid, with respect to some validation function.
- Relay Parent: A block in the relay chain, referred to in a context where work is being done in the context of the state at this block.
- Router: The Router module used to be a single runtime module responsible for routing messages between paras and the relay chain. It has since been split into separate runtime modules: Dmp, Ump and Hrmp, each responsible for its part of message routing.
- Runtime: The relay-chain state machine.
- Runtime Module: See Module.
- Runtime API: A means for the node-side behavior to access structured information based on the state of a fork of the blockchain.
59 changes: 59 additions & 0 deletions roadmap/implementers-guide/src/runtime/dmp.md
@@ -0,0 +1,59 @@
# DMP Module

A module responsible for DMP. See [Messaging Overview](../messaging.md) for more details.

## Storage

General storage entries

```rust
/// Paras that are to be cleaned up at the end of the session.
/// The entries are sorted ascending by the para id.
OutgoingParas: Vec<ParaId>;
```

Storage layout required for implementation of DMP.

```rust
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map ParaId => Vec<InboundDownwardMessage>;
/// A mapping that stores the downward message queue MQC head for each para.
///
/// Each link in this chain has a form:
/// `(prev_head, B, H(M))`, where
/// - `prev_head`: is the previous head hash or zero if none.
/// - `B`: is the relay-chain block number in which a message was appended.
/// - `H(M)`: is the hash of the message being appended.
DownwardMessageQueueHeads: map ParaId => Hash;
```
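The MQC head update can be pictured with a small standalone sketch. This is not the runtime code: `QueueHash` and the 64-bit `DefaultHasher` stand in for the relay chain's 256-bit hash, and `mqc_link` is a name invented here for computing one `(prev_head, B, H(M))` link as described above.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the relay chain's 256-bit hash type; the 64-bit `DefaultHasher`
/// below is used only for illustration.
type QueueHash = u64;

/// One MQC link has the form `(prev_head, B, H(M))`; the new head is the hash of
/// that triple.
fn mqc_link(prev_head: QueueHash, sent_at: u32, message: &[u8]) -> QueueHash {
    // H(M): hash of the message payload.
    let mut h = DefaultHasher::new();
    message.hash(&mut h);
    let message_hash = h.finish();

    // New head = hash over (prev_head, B, H(M)).
    let mut link = DefaultHasher::new();
    (prev_head, sent_at, message_hash).hash(&mut link);
    link.finish()
}

fn main() {
    // The head is zero while the queue has no history yet.
    let mut head: QueueHash = 0;
    for (block, msg) in [(1u32, &b"first"[..]), (3, &b"second"[..])] {
        head = mqc_link(head, block, msg);
        println!("MQC head after block {block}: {head:#x}");
    }
}
```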

## Initialization

No initialization routine runs for this module.

## Routines

Candidate Acceptance Function:

* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.

Candidate Enactment:

* `prune_dmq(P: ParaId, processed_downward_messages)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.

Utility routines.

`queue_downward_message(P: ParaId, M: DownwardMessage)`:
1. Check if the size of `M` exceeds the `config.max_downward_message_size`. If so, return an error.
1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`.
1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash.
1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`.
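The acceptance, enactment and utility routines above can be modelled with a minimal in-memory sketch. It assumes simplified `ParaId` and message types, elides the MQC head update (see the earlier sketch), and illustrates the rules rather than the actual runtime implementation.

```rust
use std::collections::HashMap;

type ParaId = u32;

/// Simplified stand-in for `InboundDownwardMessage`.
struct InboundDownwardMessage {
    sent_at: u32, // relay-chain block number at which the message was queued
    msg: Vec<u8>, // opaque payload
}

#[derive(Default)]
struct Dmp {
    max_downward_message_size: usize,
    downward_message_queues: HashMap<ParaId, Vec<InboundDownwardMessage>>,
}

impl Dmp {
    /// `queue_downward_message`: size check, wrap, append (MQC head update elided).
    fn queue_downward_message(&mut self, para: ParaId, now: u32, msg: Vec<u8>) -> Result<(), &'static str> {
        if msg.len() > self.max_downward_message_size {
            return Err("ExceedsMaxMessageSize");
        }
        // A full implementation would also advance the MQC head for `para` by one
        // link here, as sketched earlier.
        self.downward_message_queues
            .entry(para)
            .or_default()
            .push(InboundDownwardMessage { sent_at: now, msg });
        Ok(())
    }

    /// Candidate acceptance: the candidate may not claim to have processed more
    /// messages than are queued, and must process at least one if any are queued.
    fn check_processed_downward_messages(&self, para: ParaId, processed: usize) -> Result<(), &'static str> {
        let queued = self.downward_message_queues.get(&para).map_or(0, |q| q.len());
        if processed > queued {
            return Err("TooManyProcessed");
        }
        if queued > 0 && processed == 0 {
            return Err("MustProcessAtLeastOne");
        }
        Ok(())
    }

    /// Candidate enactment: drop the first `processed` messages from the queue.
    fn prune_dmq(&mut self, para: ParaId, processed: usize) {
        if let Some(q) = self.downward_message_queues.get_mut(&para) {
            q.drain(..processed.min(q.len()));
        }
    }
}

fn main() {
    let mut dmp = Dmp { max_downward_message_size: 1024, ..Default::default() };
    dmp.queue_downward_message(7, 100, b"hello".to_vec()).unwrap();

    let queue = &dmp.downward_message_queues[&7];
    println!("para 7: {} message(s), first sent at block {} with payload {:?}",
             queue.len(), queue[0].sent_at, queue[0].msg);

    // Acceptance: at least one message must be processed while the queue is non-empty.
    assert!(dmp.check_processed_downward_messages(7, 0).is_err());
    assert!(dmp.check_processed_downward_messages(7, 1).is_ok());

    // Enactment: prune what the candidate reported as processed.
    dmp.prune_dmq(7, 1);
    assert!(dmp.check_processed_downward_messages(7, 0).is_ok());
}
```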

## Session Change

1. Drain `OutgoingParas`. For each `P` that was in the list:
1. Remove all `DownwardMessageQueues` of `P`.
1. Remove `DownwardMessageQueueHeads` for `P`.
@@ -1,6 +1,6 @@
# Router Module
# HRMP Module

The Router module is responsible for all messaging mechanisms supported between paras and the relay chain, specifically: UMP, DMP, HRMP and later XCMP.
A module responsible for HRMP. See [Messaging Overview](../messaging.md) for more details.

## Storage

@@ -12,61 +12,6 @@ General storage entries
OutgoingParas: Vec<ParaId>;
```

### Upward Message Passing (UMP)

```rust
/// The messages waiting to be handled by the relay-chain originating from a certain parachain.
///
/// Note that some upward messages might have been already processed by the inclusion logic. E.g.
/// channel management messages.
///
/// The messages are processed in FIFO order.
RelayDispatchQueues: map ParaId => Vec<UpwardMessage>;
/// Size of the dispatch queues. Caches sizes of the queues in `RelayDispatchQueues`.
///
/// First item in the tuple is the count of messages and second
/// is the total length (in bytes) of the message payloads.
///
/// Note that this is an auxiliary mapping: it's possible to tell the byte size and the number of
/// messages only by looking at `RelayDispatchQueues`. This mapping is separate to avoid the cost of
/// loading the whole message queue if only the total size and count are required.
///
/// Invariant:
/// - The set of keys should exactly match the set of keys of `RelayDispatchQueues`.
RelayDispatchQueueSize: map ParaId => (u32, u32); // (num_messages, total_bytes)
/// The ordered list of `ParaId`s that have a `RelayDispatchQueue` entry.
///
/// Invariant:
/// - The set of items from this vector should be exactly the set of the keys in
/// `RelayDispatchQueues` and `RelayDispatchQueueSize`.
NeedsDispatch: Vec<ParaId>;
/// This is the para that gets dispatched first during the next upward dispatchable queue
/// execution round.
///
/// Invariant:
/// - If `Some(para)`, then `para` must be present in `NeedsDispatch`.
NextDispatchRoundStartWith: Option<ParaId>;
```

### Downward Message Passing (DMP)

Storage layout required for implementation of DMP.

```rust
/// The downward messages addressed for a certain para.
DownwardMessageQueues: map ParaId => Vec<InboundDownwardMessage>;
/// A mapping that stores the downward message queue MQC head for each para.
///
/// Each link in this chain has a form:
/// `(prev_head, B, H(M))`, where
/// - `prev_head`: is the previous head hash or zero if none.
/// - `B`: is the relay-chain block number in which a message was appended.
/// - `H(M)`: is the hash of the message being appended.
DownwardMessageQueueHeads: map ParaId => Hash;
```

### HRMP

HRMP related structs:

@@ -189,13 +134,6 @@

Candidate Acceptance Function:

* `check_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Checks that there are at most `config.max_upward_message_num_per_candidate` messages.
1. Checks that no message exceeds `config.max_upward_message_size`.
1. Verify that `RelayDispatchQueueSize` for `P` has enough capacity for the messages
* `check_processed_downward_messages(P: ParaId, processed_downward_messages)`:
1. Checks that `DownwardMessageQueues` for `P` is at least `processed_downward_messages` long.
1. Checks that `processed_downward_messages` is at least 1 if `DownwardMessageQueues` for `P` is not empty.
* `check_hrmp_watermark(P: ParaId, new_hrmp_watermark)`:
1. `new_hrmp_watermark` should be strictly greater than the value of `HrmpWatermarks` for `P` (if any).
1. `new_hrmp_watermark` must not be greater than the context's block number.
@@ -232,42 +170,12 @@ Candidate Enactment:
> parametrization this shouldn't be that big of a deal.
> If that becomes a problem consider introducing an extra dictionary which says at what block the given sender
> sent a message to the recipient.
* `prune_dmq(P: ParaId, processed_downward_messages)`:
1. Remove the first `processed_downward_messages` from the `DownwardMessageQueues` of `P`.
* `enact_upward_messages(P: ParaId, Vec<UpwardMessage>)`:
1. Process each upward message `M` in order:
1. Append the message to `RelayDispatchQueues` for `P`
1. Increment the size and the count in `RelayDispatchQueueSize` for `P`.
1. Ensure that `P` is present in `NeedsDispatch`.

The following routine is intended to be called at the same time as `Paras::schedule_para_cleanup` is called.

`schedule_para_cleanup(ParaId)`:
1. Add the para into the `OutgoingParas` vector maintaining the sorted order.

The following routine is meant to execute pending entries in upward message queues. This function doesn't fail, even if
dispatching any of the individual upward messages returns an error.

`process_pending_upward_messages()`:
1. Initialize a cumulative weight counter `T` to 0
1. Iterate over items in `NeedsDispatch` cyclically, starting with `NextDispatchRoundStartWith`. If the item specified is `None` start from the beginning. For each `P` encountered:
1. Dequeue the first upward message `D` from `RelayDispatchQueues` for `P`
1. Decrement the size of the message from `RelayDispatchQueueSize` for `P`
1. Delegate processing of the message to the runtime. The weight consumed is added to `T`.
1. If `T >= config.preferred_dispatchable_upward_messages_step_weight`, set `NextDispatchRoundStartWith` to `P` and finish processing.
1. If `RelayDispatchQueues` for `P` became empty, remove `P` from `NeedsDispatch`.
1. If `NeedsDispatch` became empty then finish processing and set `NextDispatchRoundStartWith` to `None`.
> NOTE that in practice we would need to approach the weight calculation more thoroughly, i.e. incorporate all operations
> that could take place on the course of handling these upward messages.
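A standalone sketch of `enact_upward_messages` together with the round-robin dispatch loop described above (logic that this PR moves into the UMP module). The struct fields mirror the storage entries listed earlier, while the `dispatch` callback, the one-unit message weights in `main` and `preferred_step_weight` are simplified stand-ins for the runtime dispatcher and `config.preferred_dispatchable_upward_messages_step_weight`.

```rust
use std::collections::{HashMap, VecDeque};

type ParaId = u32;
type UpwardMessage = Vec<u8>;
type Weight = u64;

#[derive(Default)]
struct Ump {
    /// `RelayDispatchQueues`: pending upward messages per para, FIFO.
    relay_dispatch_queues: HashMap<ParaId, VecDeque<UpwardMessage>>,
    /// `RelayDispatchQueueSize`: cached (message count, total bytes) per para.
    relay_dispatch_queue_size: HashMap<ParaId, (u32, u32)>,
    /// `NeedsDispatch`: sorted list of paras with a non-empty queue.
    needs_dispatch: Vec<ParaId>,
    /// `NextDispatchRoundStartWith`: where the next dispatch round resumes.
    next_dispatch_round_start_with: Option<ParaId>,
    /// Stand-in for the preferred dispatchable weight per step.
    preferred_step_weight: Weight,
}

impl Ump {
    /// `enact_upward_messages`: append each message, update the cached sizes,
    /// and make sure the para is present in `needs_dispatch`.
    fn enact_upward_messages(&mut self, para: ParaId, msgs: Vec<UpwardMessage>) {
        for msg in msgs {
            let size = self.relay_dispatch_queue_size.entry(para).or_insert((0, 0));
            size.0 += 1;
            size.1 += msg.len() as u32;
            self.relay_dispatch_queues.entry(para).or_default().push_back(msg);
        }
        if let Err(i) = self.needs_dispatch.binary_search(&para) {
            self.needs_dispatch.insert(i, para);
        }
    }

    /// `process_pending_upward_messages`: round-robin over `needs_dispatch`,
    /// starting at `next_dispatch_round_start_with`, until the weight budget is spent.
    fn process_pending_upward_messages(
        &mut self,
        mut dispatch: impl FnMut(ParaId, &UpwardMessage) -> Weight,
    ) {
        let mut t: Weight = 0;
        let mut idx = self
            .next_dispatch_round_start_with
            .and_then(|p| self.needs_dispatch.iter().position(|&x| x == p))
            .unwrap_or(0);
        while !self.needs_dispatch.is_empty() {
            let para = self.needs_dispatch[idx];
            // Dequeue the first message of `para` and keep the cached sizes in sync.
            let first = self.relay_dispatch_queues.get_mut(&para).and_then(|q| q.pop_front());
            if let Some(msg) = first {
                let size = self.relay_dispatch_queue_size.entry(para).or_insert((0, 0));
                size.0 = size.0.saturating_sub(1);
                size.1 = size.1.saturating_sub(msg.len() as u32);
                // Delegate processing to the runtime; accumulate the consumed weight.
                t += dispatch(para, &msg);
            }
            if t >= self.preferred_step_weight {
                // Budget spent: remember where to resume next round.
                self.next_dispatch_round_start_with = Some(para);
                return;
            }
            if self.relay_dispatch_queues.get(&para).map_or(true, |q| q.is_empty()) {
                self.relay_dispatch_queues.remove(&para);
                self.needs_dispatch.remove(idx);
            } else {
                idx += 1;
            }
            if self.needs_dispatch.is_empty() {
                break;
            }
            idx %= self.needs_dispatch.len();
        }
        self.next_dispatch_round_start_with = None;
    }
}

fn main() {
    let mut ump = Ump { preferred_step_weight: 2, ..Default::default() };
    ump.enact_upward_messages(1, vec![b"a".to_vec(), b"b".to_vec()]);
    ump.enact_upward_messages(2, vec![b"c".to_vec()]);

    // Each dispatched message "costs" one weight unit in this toy model.
    ump.process_pending_upward_messages(|para, msg| {
        println!("dispatching {:?} from para {para}", msg);
        1
    });
    println!("resume next round with: {:?}", ump.next_dispatch_round_start_with);
}
```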

Utility routines.

`queue_downward_message(P: ParaId, M: DownwardMessage)`:
1. Check if the size of `M` exceeds the `config.max_downward_message_size`. If so, return an error.
1. Wrap `M` into `InboundDownwardMessage` using the current block number for `sent_at`.
1. Obtain a new MQC link for the resulting `InboundDownwardMessage` and replace `DownwardMessageQueueHeads` for `P` with the resulting hash.
1. Add the resulting `InboundDownwardMessage` into `DownwardMessageQueues` for `P`.

## Entry-points

The following entry-points are meant to be used for HRMP channel management.
@@ -336,15 +244,8 @@ the parachain executed the message.
1. Drain `OutgoingParas`. For each `P` that was in the list:
1. Remove all inbound channels of `P`, i.e. `(_, P)`,
1. Remove all outbound channels of `P`, i.e. `(P, _)`,
1. Remove all `DownwardMessageQueues` of `P`.
1. Remove `DownwardMessageQueueHeads` for `P`.
1. Remove `RelayDispatchQueueSize` of `P`.
1. Remove `RelayDispatchQueues` of `P`.
1. Remove `HrmpOpenChannelRequestCount` for `P`
1. Remove `HrmpAcceptedChannelRequestCount` for `P`.
1. Remove `P` if it exists in `NeedsDispatch`.
1. If `P` is in `NextDispatchRoundStartWith`, then reset it to `None`
- Note that we don't remove the open/close requests since they are going to die out naturally at the end of the session.
1. For each channel designator `D` in `HrmpOpenChannelRequestsList` we query the request `R` from `HrmpOpenChannelRequests`:
1. if `R.confirmed = false`:
1. increment `R.age` by 1.
16 changes: 8 additions & 8 deletions roadmap/implementers-guide/src/runtime/inclusion.md
@@ -67,20 +67,20 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
1. Check the collator's signature on the candidate data.
1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
1. call `Router::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
1. call `Router::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
1. call `Router::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
1. using `Router::check_outbound_hrmp(sender, commitments.horizontal_messages)` ensure that each candidate sent a valid set of horizontal messages
1. call `Ump::check_upward_messages(para, commitments.upward_messages)` to check that the upward messages are valid.
1. call `Dmp::check_processed_downward_messages(para, commitments.processed_downward_messages)` to check that the DMQ is properly drained.
1. call `Hrmp::check_hrmp_watermark(para, commitments.hrmp_watermark)` for each candidate to check rules of processing the HRMP watermark.
1. using `Hrmp::check_outbound_hrmp(sender, commitments.horizontal_messages)` ensure that each candidate sent a valid set of horizontal messages
1. create an entry in the `PendingAvailability` map for each backed candidate with a blank `availability_votes` bitfield.
1. create a corresponding entry in the `PendingAvailabilityCommitments` with the commitments.
1. Return a `Vec<CoreIndex>` of all scheduled cores of the list of passed assignments that a candidate was successfully backed for, sorted ascending by CoreIndex.
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
> TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
1. call `Router::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Router::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
1. call `Router::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
1. call `Router::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment.
1. call `Ump::enact_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../types/messages.md#upward-message) from the [`CandidateCommitments`](../types/candidate.md#candidate-commitments).
1. call `Dmp::prune_dmq` with the para id of the candidate and the candidate's `processed_downward_messages`.
1. call `Hrmp::prune_hrmp` with the para id of the candidate and the candidate's `hrmp_watermark`.
1. call `Hrmp::queue_outbound_hrmp` with the para id of the candidate and the list of horizontal messages taken from the commitment.
1. Call `Paras::note_new_head` using the `HeadData` from the receipt and `relay_parent_number`.
* `collect_pending`:

Expand Up @@ -22,5 +22,5 @@ Included: Option<()>,
1. Invoke `Scheduler::schedule(freed)`
1. Invoke the `Inclusion::process_candidates` routine with the parameters `(backed_candidates, Scheduler::scheduled(), Scheduler::group_validators)`.
1. Call `Scheduler::occupied` using the return value of the `Inclusion::process_candidates` call above, first sorting the list of assigned core indices.
1. Call the `Router::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
1. Call the `Ump::process_pending_upward_messages` routine to execute all messages in upward dispatch queues.
1. If all of the above succeeds, set `Included` to `Some(())`.
6 changes: 4 additions & 2 deletions roadmap/implementers-guide/src/runtime/initializer.md
@@ -23,8 +23,10 @@ The other parachains modules are initialized in this order:
1. Paras
1. Scheduler
1. Inclusion
1. Validity.
1. Router.
1. Validity
1. DMP
1. UMP
1. HRMP

The [Configuration Module](configuration.md) is first, since all other modules need to operate under the same configuration as each other. It would lead to inconsistency if, for example, the scheduler ran first and then the configuration was updated before the Inclusion module.
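As a toy illustration of that ordering, the sketch below calls a hypothetical per-module initialization hook in the listed sequence, Configuration first. The trait and the string module names are stand-ins covering only the modules visible in this hunk; this is not the actual FRAME wiring.

```rust
/// Hypothetical stand-in for a module's per-block initialization hook; the real
/// modules implement FRAME's `on_initialize` and are wired together by the Initializer.
trait ModuleInit {
    fn on_initialize(&mut self, block_number: u32);
}

struct Named(&'static str);

impl ModuleInit for Named {
    fn on_initialize(&mut self, block_number: u32) {
        println!("block {block_number}: initializing {}", self.0);
    }
}

fn main() {
    // Configuration runs first so every later module sees the same, already-updated
    // configuration for this block; then the order listed in this hunk.
    let mut modules: Vec<Named> =
        ["Configuration", "Paras", "Scheduler", "Inclusion", "Validity", "Dmp", "Ump", "Hrmp"]
            .into_iter()
            .map(Named)
            .collect();
    for m in modules.iter_mut() {
        m.on_initialize(1);
    }
}
```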
