
Warn about aggregate possibly consuming a lot of RAM? #2414

Closed
vorner opened this issue Jan 31, 2021 · 3 comments

Comments

@vorner
Contributor

vorner commented Jan 31, 2021

Hello

I'm wondering about one thing. Let's say I have a web server that wants to read the whole body a user sends in a POST request. `aggregate` would be a good candidate for that. However, I don't think there is any way to protect against a client sending an excessive amount of data and consuming all my RAM that way (at least, I've found none).

I could check the `Content-Length` header before calling this function. However, if the client sends the body with chunked transfer encoding, that header is optional. That makes the function (and its sibling `to_bytes`) a potential DoS hazard.

Therefore I wanted to ask whether there should be some kind of `aggregate_with_limit` function, or whether the documentation should at least include a warning pointing out that it shouldn't be used when exposed to untrusted clients.
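For illustration, a minimal sketch of what a limit-enforcing read could look like, using only `std::io::Read` (the name `read_body_limited` and the error behavior are my own assumptions, not hyper's API):

```rust
use std::io::{self, Read};

/// Hypothetical sketch: read at most `limit` bytes from `body`,
/// returning an error instead of buffering unbounded input.
fn read_body_limited<R: Read>(body: &mut R, limit: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Read one byte past the limit so we can distinguish
    // "exactly at the limit" from "over the limit".
    body.take(limit + 1).read_to_end(&mut buf)?;
    if buf.len() as u64 > limit {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "body exceeds configured limit",
        ));
    }
    Ok(buf)
}

fn main() {
    let mut small: &[u8] = b"hello";
    assert_eq!(read_body_limited(&mut small, 16).unwrap(), b"hello");

    let mut big: &[u8] = &[0u8; 1024];
    assert!(read_body_limited(&mut big, 16).is_err());
}
```

The key point is that the reader itself enforces the cap, so a chunked body with no `Content-Length` can never grow the buffer past `limit`.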

@seanmonstar
Member

Yes, it's possible to use it without protection, and that could be problematic. I see it as kind of similar to `Read::read_to_end`, which doesn't include a warning either. That said, I see nothing wrong with including a warning in the documentation.

My personal view is that an `aggregate_with_limit` is less useful. When creating servers, I've always required that uploads have a `Content-Length` header, with a maximum, so that I know whether I want to accept the body before having to buffer any of it. If I were accepting chunked request payloads, I wouldn't buffer them fully, but rather stream them somewhere else.
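The policy described above can be sketched as a pre-check that runs before any buffering; `accept_body` and the 1 MiB limit are illustrative assumptions, not hyper API:

```rust
/// Assumed policy limit for this sketch: 1 MiB.
const MAX_BODY: u64 = 1 << 20;

/// Decide, before buffering anything, whether to accept a request
/// body, based solely on its declared Content-Length.
fn accept_body(content_length: Option<u64>) -> bool {
    match content_length {
        // Known size within the limit: safe to buffer fully.
        Some(len) => len <= MAX_BODY,
        // No Content-Length (e.g. chunked encoding): reject up
        // front rather than risk buffering an unbounded stream.
        None => false,
    }
}

fn main() {
    assert!(accept_body(Some(512)));
    assert!(!accept_body(Some(10 << 20)));
    assert!(!accept_body(None));
}
```

A server taking this approach would answer rejected requests with `411 Length Required` or `413 Payload Too Large` instead of calling `aggregate`.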

So, I'd be fine with adding a warning, but otherwise I'll just close this.

@vorner
Contributor Author

vorner commented Feb 1, 2021

> I see it kind of similar to `Read::read_to_end`

I'd say situations where I can trust local files are much more common than situations where I trust something on the other side of the Internet. And sometimes one has no option other than to support arbitrary clients.

> So, I'd be fine with adding a warning, but otherwise I'll just close this.

Thanks :-).

@seanmonstar
Member

> I can trust local files

Yes, certainly. Though `Read` isn't only implemented for `File`, but also for `TcpStream` and who knows what else. 🤷
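On the `read_to_end` analogy: the standard library's own mitigation for unbounded reads is `Read::take`, which wraps any reader, whether a `File`, a `TcpStream`, or anything else, and caps how many bytes it will ever yield. A small sketch (the helper name `capped_read` is mine):

```rust
use std::io::{self, Read};

/// Read from any reader, but never buffer more than `cap` bytes.
fn capped_read<R: Read>(reader: R, cap: u64) -> io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // `take` limits the reader itself, so `read_to_end` cannot
    // grow the buffer past `cap` no matter what the peer sends.
    reader.take(cap).read_to_end(&mut buf)?;
    Ok(buf)
}

fn main() {
    // A 4 KiB input stands in for an untrusted stream.
    let untrusted: &[u8] = &[7u8; 4096];
    let body = capped_read(untrusted, 64).unwrap();
    assert_eq!(body.len(), 64);
}
```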

seanmonstar pushed a commit that referenced this issue Feb 2, 2021
The to_bytes and aggregate don't check how long the body is, so the user
better be aware.

Relates to #2414.
BenxiangGe pushed a commit to BenxiangGe/hyper that referenced this issue Jul 26, 2021
The to_bytes and aggregate don't check how long the body is, so the user
better be aware.

Relates to hyperium#2414.