
[Runtime] Bound XCMP queue #3952

Merged
merged 29 commits into master from oty-reapply-2302 on May 16, 2024
Conversation

@ggwpez ggwpez (Member) commented Apr 2, 2024

Re-applying #2302 after increasing the `MaxPageSize`.

Remove `without_storage_info` from the XCMP queue pallet. Part of #323.

Changes:

  • Limit the number of messages and signals an HRMP channel can have at most.
  • Limit the number of HRMP channels.

A no-op migration is put in place to ensure that all `BoundedVec`s still decode and do not truncate after the upgrade. The storage version is thereby bumped to 5 so that our tooling reminds us to deploy that migration.
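
For orientation, here is a minimal sketch of the shape such a decode-only migration can take. This is an illustration under assumed names (`crate::Pallet`, `crate::SignalMessages`, and a `Config` that supersets `frame_system::Config`), not the actual v5 code:

```rust
use core::marker::PhantomData;
use frame_support::{
	traits::{OnRuntimeUpgrade, StorageVersion},
	weights::Weight,
};

/// Illustrative "no-op" migration: values are only read back (decoded)
/// under the new bounded types, never rewritten; afterwards the storage
/// version is bumped so the migration cannot run twice.
pub struct MigrateV4ToV5<T>(PhantomData<T>);

impl<T: crate::Config> OnRuntimeUpgrade for MigrateV4ToV5<T> {
	fn on_runtime_upgrade() -> Weight {
		// Iterating the map re-decodes every stored value with the new
		// bounded value type.
		let reads = crate::SignalMessages::<T>::iter().count() as u64;
		StorageVersion::new(5).put::<crate::Pallet<T>>();
		T::DbWeight::get().reads_writes(reads, 1)
	}
}
```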

Integration

If you see this error in your try-runtime-cli:

```pre
Max message size for channel is too large. This means that the V5 migration can be front-run and an
attacker could place a large message just right before the migration to make other messages un-decodable.
Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
102400, MaxPageSize: 65535
```

Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to something like this:

```rust
type MaxPageSize = ConstU32<{ 103 * 1024 }>;
```
There is currently no easy way for on-chain governance to adjust the HRMP max message size of all channels, but it could be done: #3145.
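
For context, a minimal sketch of where this bound lives in a parachain runtime. Everything except the two bounds is elided, the concrete values are illustrative, and `MaxActiveOutboundChannels` is assumed to be the companion limit on the number of HRMP channels that this PR introduces:

```rust
impl cumulus_pallet_xcmp_queue::Config for Runtime {
	// Must cover the largest `max_message_size` of any open HRMP channel;
	// 103 * 1024 = 105_472 bytes, comfortably above the 102_400 bytes
	// reported in the error above.
	type MaxPageSize = ConstU32<{ 103 * 1024 }>;
	// How many HRMP channels may have pending messages or signals at once
	// (illustrative value).
	type MaxActiveOutboundChannels = ConstU32<128>;
	// ... remaining associated types as in the existing runtime ...
}
```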

ggwpez and others added 14 commits April 2, 2024 19:29
Remove `without_storage_info` from the XCMP queue pallet. Part of
#323

Changes:
- Limit the number of channels that can be suspended at the same time.
- Limit the number of channels that can have messages or signals pending
at the same time.

A no-op migration is put in place to ensure that all `BoundedVec`s still
decode and do not truncate after the upgrade. The storage version is
thereby bumped to 4 to have our tooling remind us to deploy that migration.

---------

Signed-off-by: Oliver Tale-Yazdi <oliver.tale-yazdi@parity.io>
Co-authored-by: Francisco Aguirre <franciscoaguirreperez@gmail.com>
@ggwpez ggwpez added the T14-system_parachains This PR/Issue is related to system parachains. label Apr 10, 2024
@ggwpez ggwpez marked this pull request as ready for review April 10, 2024 19:46
Resolved review threads (outdated):
  • cumulus/pallets/xcmp-queue/src/lib.rs (4 threads)
  • cumulus/pallets/xcmp-queue/src/migration/v5.rs (1 thread)
@acatangiu acatangiu requested a review from bkontur April 22, 2024 11:48
@muharem muharem self-requested a review April 23, 2024 13:47

```diff
 /// Any signal messages waiting to be sent.
 #[pallet::storage]
 pub(super) type SignalMessages<T: Config> =
-	StorageMap<_, Blake2_128Concat, ParaId, Vec<u8>, ValueQuery>;
+	StorageMap<_, Blake2_128Concat, ParaId, WeakBoundedVec<u8, T::MaxPageSize>, ValueQuery>;
```
Contributor
I wonder if `MaxPageSize` affects the weights a lot. From what I can read below, there is only one signal per para id. The size could be a constant, if it really makes any difference.
I assume we could even use a `Signal` type here, but that would require a migration.

@ggwpez ggwpez (Member, Author) commented May 15, 2024

Yeah, true. In the past it was possible for multiple signals to be enqueued, but I removed that, since later signals completely overwrite any previous ones.
So indeed it is wasting weight here... it could probably be changed to something very small, like 16, without an issue. But that could also require a migration, so with the weak vector and a large size we are at least at no risk.
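
For illustration, a small self-contained sketch of the behavior relied on here (assumed semantics of the `frame_support` bounded types, not code from this PR): a strict `BoundedVec` fails to decode oversized data, while `WeakBoundedVec` decodes it and only logs a warning, so keeping the weak vector avoids any risk of making already-stored values unreadable.

```rust
use codec::{Decode, Encode};
use frame_support::{traits::ConstU32, BoundedVec, WeakBoundedVec};

fn main() {
	// A pre-existing storage value larger than a hypothetical new bound of 16.
	let oversized = vec![0u8; 32].encode();

	// A strict bound would make the stored signal undecodable after the upgrade:
	assert!(BoundedVec::<u8, ConstU32<16>>::decode(&mut &oversized[..]).is_err());

	// The weak bound still decodes (it only logs a defensive warning):
	assert!(WeakBoundedVec::<u8, ConstU32<16>>::decode(&mut &oversized[..]).is_ok());
}
```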

@ggwpez ggwpez added this pull request to the merge queue May 16, 2024
Merged via the queue into master with commit 4adfa37 May 16, 2024
147 of 150 checks passed
@ggwpez ggwpez deleted the oty-reapply-2302 branch May 16, 2024 11:07
hitchhooker pushed a commit to ibp-network/polkadot-sdk that referenced this pull request Jun 5, 2024
TarekkMA pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request Aug 2, 2024
magecnion added a commit to freeverseio/laos that referenced this pull request Sep 12, 2024
Labels
T14-system_parachains This PR/Issue is related to system parachains.
Projects
Status: Audited