# `MaxEncodedLen` tracking issue #8719

## Comments
> Why? Why should this exist in scale-codec? What I still don't understand is how this should help to calculate the proof size. The proof size is not only composed of the size of the individual elements; the size also comes from the trie structure itself that is encoded in the proof. Then there are also values included which may not be accessed by an extrinsic and would still increase the proof size.

> After our chat in Element, I agree this probably can go in Substrate instead of SCALE. @coriolinus you can transfer the issue.

> I can help with

> Where is this number coming from?

> Given all of this, in the context of a parachain, we pretty much want to ban storing anything unbounded like

> I'm working on the derive macro right now, but assistance on the Frame side would be much appreciated.

> Just a SWAG; it's high enough that it'll probably suffice for most cases without being high enough to make the weight predictions unreasonably high.

> Yes, we're specifically excluding
Linked pull requests:

- Add `BoundedBTreeMap` to `frame_support::storage` (part of #8719)
  - `max_encoded_len` will never encode length > bound
  - requiring users to maintain an unchecked invariant is unsafe
  - only impl `Debug` when `std`
  - add some marker traits
  - add tests
- Add `BoundedBTreeSet` (part of #8719)
  - fix copy-pasta errors
  - Co-authored-by: Kian Paimani <5588131+kianenigma@users.noreply.github.com>
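The key property from the `BoundedBTreeMap` PR above — that the map "will never encode length > bound" because insertions past the bound are rejected, rather than trusting users to maintain an unchecked invariant — can be sketched roughly like this (a hypothetical simplification; the real Substrate type takes its bound from a `Get<u32>` type parameter, not a runtime field):

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: a map wrapper that enforces its bound on
// insert, so the length invariant cannot be violated by callers.
struct BoundedBTreeMap<K, V> {
    inner: BTreeMap<K, V>,
    bound: usize,
}

impl<K: Ord, V> BoundedBTreeMap<K, V> {
    fn new(bound: usize) -> Self {
        Self { inner: BTreeMap::new(), bound }
    }

    /// Insert only if doing so cannot exceed the bound (replacing an
    /// existing key never grows the map); otherwise return the value.
    fn try_insert(&mut self, key: K, value: V) -> Result<Option<V>, V> {
        if self.inner.len() < self.bound || self.inner.contains_key(&key) {
            Ok(self.inner.insert(key, value))
        } else {
            Err(value)
        }
    }

    fn len(&self) -> usize {
        self.inner.len()
    }
}

fn main() {
    let mut m = BoundedBTreeMap::new(2);
    assert!(m.try_insert(1, "a").is_ok());
    assert!(m.try_insert(2, "b").is_ok());
    assert!(m.try_insert(3, "c").is_err()); // bound reached: rejected
    assert!(m.try_insert(1, "z").is_ok()); // replacing a key is fine
    assert_eq!(m.len(), 2);
}
```

Because the bound is checked at insertion time, the encoded length prefix can never exceed the declared bound, which is exactly what a `max_encoded_len` computation relies on.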
## Rationale

For PoV benchmarking, we need some way to estimate what the maximum encoded size of an arbitrary storage item can be. We've settled on a new trait, `MaxEncodedLen`.

While `Encode::size_hint` already exists, we can't rely on it. One fatal flaw: it's an optional, "if possible" method which for many structs simply returns the default size of 0. Another: `Encode` is implemented for certain types, such as `VecDeque<T>`, which we'd want to intentionally exclude from storage, as they aren't bounded. We need a trait which we can use to exclude such types.

## Design
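The shape of the trait can be sketched as follows (a minimal illustration, not the actual parity-scale-codec API; the `Option<T>` impl assumes SCALE's one-byte `Some`/`None` tag):

```rust
// Minimal sketch of the MaxEncodedLen trait described above.
pub trait MaxEncodedLen: Sized {
    /// Upper bound, in bytes, on the SCALE-encoded size of `Self`.
    fn max_encoded_len() -> usize;
}

// Fixed-size primitives simply defer to `size_of`.
impl MaxEncodedLen for u32 {
    fn max_encoded_len() -> usize {
        core::mem::size_of::<u32>() // 4 bytes, fixed-width in SCALE
    }
}

// `Option<T>` adds one tag byte for the Some/None discriminant.
impl<T: MaxEncodedLen> MaxEncodedLen for Option<T> {
    fn max_encoded_len() -> usize {
        1 + T::max_encoded_len()
    }
}

fn main() {
    assert_eq!(u32::max_encoded_len(), 4);
    assert_eq!(Option::<u32>::max_encoded_len(), 5);
}
```

Unlike `size_hint`, every implementation must return a genuine upper bound, and unbounded containers simply do not implement the trait, which is what lets storage exclude them.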
### Roadmap

- `BoundedEncodedLen` trait: implement for primitives for which `Encode` is implemented. Mostly defers to `sp_std::mem::size_of::<T>()`.
- Implement for `BoundedVec<T>`, `Option<T>`, `Result<T, E>`, `PhantomData<T>`, `Compact<T>`.
- `Bounded` variants for other container types for which `Encode` is implemented; implement `BoundedEncodedLen` for these bounded variants:
  - Add `BoundedBTreeMap` to `frame_support::storage` (#8745) for `BTreeMap<K, V>`
  - `BoundedBTreeSet` (#8750) for `BTreeSet<T>`
  - `BinaryHeap<T>`
  - `LinkedList<T>`
  - `VecDeque<T>`
- `#[derive(MaxEncodedLen)]` (#8737): derive macro `#[derive(Encode, BoundedEncodedLen)]` for non-container structs and enums. Requires all fields to implement `Encode` and `BoundedEncodedLen`.
- `StorageMap` and friends are actually implemented as newtypes which implement the full API of the parent type, plus `BoundedEncodedLen`, e.g. `#[storage::map_bound(300_000)]`. The value specified in this attribute is used to determine the bounded length of the map.
- `frame::storage`: all storage items must implement `BoundedEncodedLen`.
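The derive item in the roadmap can be illustrated with a hand-expanded sketch. This is hypothetical (the struct, its fields, and the local trait are invented for illustration); the actual macro output differs, but the principle is the same: a struct's bound is the sum of its field bounds, and an enum's would be one tag byte plus the maximum over its variants.

```rust
// Local stand-in for the trait the derive targets.
trait MaxEncodedLen {
    fn max_encoded_len() -> usize;
}

impl MaxEncodedLen for u8 {
    fn max_encoded_len() -> usize { 1 }
}
impl MaxEncodedLen for u64 {
    fn max_encoded_len() -> usize { 8 }
}

// Hypothetical storage item; all fields implement MaxEncodedLen.
#[allow(dead_code)]
struct Transfer {
    to: u64,
    amount: u64,
    flags: u8,
}

// What `#[derive(MaxEncodedLen)]` would conceptually generate:
// the sum of the per-field upper bounds.
impl MaxEncodedLen for Transfer {
    fn max_encoded_len() -> usize {
        u64::max_encoded_len() + u64::max_encoded_len() + u8::max_encoded_len()
    }
}

fn main() {
    // 8 + 8 + 1 bytes
    assert_eq!(Transfer::max_encoded_len(), 17);
}
```

This is also why the derive requires every field to implement the trait: a single unbounded field (say, a plain `Vec<u8>`) would make the sum meaningless.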
.Notes
This always explicitly computes an upper bound. While a
u128
can potentially be compact-encoded in only one byte,Compact<u128>::max_encoded_len()
will always return 17.While implementations of
BoundedVec<T>
and friends impose a hard cap on their length, we do not useBoundedEncodedLen
to impose a hard cap on the size ofStorageMap
and friends. While potentially a parachain author could misrepresent the bounds on its maps to attempt to pack a parachain block with more transactions, that behavior is self-defeating. Eventually they will try to pack too many transactions into a block, causing that block to become invalid because its actual size is too large. User documentation for thestorage::map_bound
attribute should emphasize that it is in the chain author's best interest to use a reasonable value here.We'd like to offer logging warnings that could be noted by Prometheus etc when the size of
StorageMap
exceeds certain thresholds (eg. info at 80%, warn at 100%), but that feature is out of scope of this tracking issue. It requires additional design work because currently the only way to determine a map's size is to iterate over its members, which is not performant.max_encoded_len
is a function instead of an associated constant for pragmatic rather than theoretical reasons. Currently,BoundedVec
's limit is not a constant, so we can't use its value in a constant context.The text was updated successfully, but these errors were encountered: